# Signal of neutrinoless double beta decay, neutrino spectrum and oscillation scenarios
## 1 Information on neutrino parameters
### 1.1 Massive neutrinos and $`0\nu 2\beta `$ decay
Atmospheric neutrino data can be interpreted in terms of a dominant $`\nu _\mu \to \nu _\tau `$ oscillation channel, although a sub-dominant channel $`\nu _\mu \to \nu _\mathrm{e}`$ is not excluded. The latter may be due to a $`\nu _\mathrm{e}`$ component of the heaviest (lightest) neutrino state $`\nu _3`$ ($`\nu _1`$) for spectra with “normal” (“inverted”) hierarchy; our definition of “hierarchy” is discussed in section 3. Several possibilities remain open for the interpretation of the solar neutrino data, depending on the frequencies of oscillation and on the mixings.
Hence, the indications for massive neutrinos are strong. However, knowledge of the neutrino mass spectrum itself is quite limited, particularly regarding the lightest neutrino mass. The search for $`0\nu 2\beta `$ decay can shed light on this important issue. The bound of 0.2 eV obtained on the parameter
$$\mathcal{M}_{\mathrm{ee}}=\left|\underset{i}{\sum }U_{\mathrm{e}i}^2m_i\right|$$
(1)
is appreciably smaller than the mass scales probed by present studies of $`\beta `$-decay, or those inferred in cosmology. In eq. (1), the non-negative quantities $`m_i,`$ $`i=1,2,3,\mathrm{},N`$ are the neutrino masses ($`m_{i+1}\ge m_i`$); the complex quantities $`U_{\ell i},`$ $`\ell =\mathrm{e},\mu ,\tau ,\mathrm{},`$ are the elements of the mixing matrix, which relates the flavor eigenstates to the mass eigenstates: $`\nu _{\ell }(x)=\sum _iU_{\ell i}\nu _i(x).`$ Hence, $`\mathcal{M}_{\mathrm{ee}}`$ can be thought of as (the absolute value of) the ee-entry of the neutrino mass matrix. Let us recall that, besides the $`(N-1)(N-2)/2`$ phases relevant to neutrino oscillations, there are still $`N-1`$ physical phases in the lepton sector that have no analogy in the quark sector, and arise from the Majorana structure of the neutrino mass matrix. Notice that both the amplitudes and the phases of the elements of the mixing matrix $`U_{\mathrm{e}i}`$ are relevant in determining the size of $`\mathcal{M}_{\mathrm{ee}}.`$
### 1.2 Extremal values of $`_{\mathrm{ee}}`$ for $`0\nu 2\beta `$ decay
We obtain in this section the extremal values of $`\mathcal{M}_{\mathrm{ee}}`$ under arbitrary variations of the phases, keeping fixed the neutrino masses $`m_i`$ and the “mixing elements” $`|U_{\mathrm{e}i}^2|`$ (in the following, the term “mixing elements” always denotes the absolute values of the elements of the mixing matrix). The maximum value of $`\mathcal{M}_{\mathrm{ee}}`$ is simply:
$$\mathcal{M}_{\mathrm{ee}}^{max}=\underset{i}{\sum }|U_{\mathrm{e}i}^2|m_i.$$
(2)
The minimum value can be written as:
$$\mathcal{M}_{\mathrm{ee}}^{min}=\underset{i}{\mathrm{max}}\{2|U_{\mathrm{e}i}^2|m_i-\mathcal{M}_{\mathrm{ee}}^{max},0\}.$$
(3)
To demonstrate this formula, let us consider the absolute value of the sum of three complex numbers: $`r=|z_1+z_2+z_3|.`$ We want to minimize $`r`$ keeping the $`|z_i|`$ fixed, namely, by varying the phases. Let us define the quantities $`r_{1,2,3}`$ and $`q_{1,2,3}`$ as: $`r_1=|z_1|-|z_2|-|z_3|,`$ $`q_1=|z_1|-|z_2+z_3|,`$ and similarly, permuting the indices, for $`r_{2,3}`$ and $`q_{2,3}.`$ Notice that at most one of the $`r_i`$’s is positive. Assuming that $`r_1>0,`$ it is simple to show that $`r^{min}=r_1;`$ in fact, using the triangle inequality twice, we get $`r\ge |q_1|=q_1\ge r_1,`$ and the bound is saturated by choosing the phases of $`z_2`$ and $`z_3`$ opposite to that of $`z_1.`$ Similar considerations hold if $`r_2>0`$ or $`r_3>0.`$ The remaining case has $`r_i\le 0`$ for $`i=1,2,3.`$ If one of the $`r_i`$’s is zero, then $`r^{min}=0;`$ hence we need to consider the case when $`r_i<0`$ for all $`i`$’s. In this case, the quantity $`q_1`$ goes from negative, when the phases of $`z_2`$ and $`z_3`$ are equal, to positive, when these phases are opposite. By continuity, a phase choice exists such that $`q_1=0.`$ Since by a proper choice of the phase of $`z_1`$ we can get $`r=|q_1|,`$ we conclude that, again, $`r^{min}=0.`$ In conclusion, the general case is covered by the formula: $`r^{min}=\mathrm{max}_i\{r_i,0\}.`$ This is equivalent to eq. (3), after noticing that $`r_i=2|z_i|-\sum _{j=1}^3|z_j|.`$ The generalization of these results to $`N`$ neutrinos is straightforward: one simply extends the sums in eqs. (2) and (3) from three to $`N`$ terms. However, we will be concerned only with the case of three neutrinos in the rest of this work.
The previous two equations give the extremal values of $`\mathcal{M}_{\mathrm{ee}},`$ once the neutrino spectrum and the mixing elements are known. Such extremal values are important, being independent of the complex phases. The information we get from an experimental upper bound is $`\mathcal{M}_{\mathrm{ee}}^{bound}\ge \mathcal{M}_{\mathrm{ee}}^{min};`$ the information we could get from a positive signal, instead, is $`\mathcal{M}_{\mathrm{ee}}^{signal}\in [\mathcal{M}_{\mathrm{ee}}^{min},\mathcal{M}_{\mathrm{ee}}^{max}].`$ In the following it will be shown how to use and represent $`\mathcal{M}_{\mathrm{ee}}^{min}`$ and $`\mathcal{M}_{\mathrm{ee}}^{max},`$ and what we can learn about them assuming specific neutrino spectra and scenarios of neutrino oscillations.
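As an aside, the two extremal formulae are straightforward to evaluate numerically. The following is a minimal sketch in Python (our own illustration, not code from the original work; the input values are arbitrary):

```python
import numpy as np

def m_ee_extrema(masses, mix2):
    """Extrema of M_ee = |sum_i U_ei^2 m_i| over the complex phases,
    for fixed masses m_i and mixing elements |U_ei^2| (eqs. (2), (3))."""
    terms = np.asarray(mix2) * np.asarray(masses)  # |U_ei^2| m_i >= 0
    m_max = terms.sum()                            # eq. (2): phases aligned
    m_min = max(2.0 * terms.max() - m_max, 0.0)    # eq. (3)
    return m_min, m_max

# illustrative input: nearly degenerate spectrum at 0.2 eV
print(m_ee_extrema([0.2, 0.2, 0.2], [0.5, 0.4, 0.1]))  # -> (0.0, 0.2)
```

A complete cancellation ($`\mathcal{M}_{\mathrm{ee}}^{min}=0`$) occurs precisely when no single term $`|U_{\mathrm{e}i}^2|m_i`$ exceeds the sum of the other two, in line with the proof above.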
## 2 Representation of $`\mathcal{M}_{\mathrm{ee}}^{min}`$ and $`\mathcal{M}_{\mathrm{ee}}^{max}`$
We introduce and discuss in this section a graphical representation of the values of $`\mathcal{M}_{\mathrm{ee}}^{min}`$ and $`\mathcal{M}_{\mathrm{ee}}^{max}.`$ For this purpose we will refer to fig. 1, where the representation of $`\mathcal{M}_{\mathrm{ee}}^{min}`$ is displayed for an illustrative choice of the neutrino spectrum: $`m_3=2m_2`$ and $`m_2=2m_1.`$ To fix ideas, we point out from the beginning the two essential features of fig. 1: (1) the value of $`\mathcal{M}_{\mathrm{ee}}`$ at the vertices, namely the masses of the neutrinos $`m_i`$; (2) the position of the inner triangle (also determined by the masses of the neutrinos).
Let us begin by recalling some basic facts. The three mixing elements $`|U_{\mathrm{e}i}^2|`$ are constrained by the unitarity condition $`\sum _i|U_{\mathrm{e}i}^2|=1.`$ This condition can be represented by using the inner region of an equilateral triangle with unit height, where the distance from the $`i^{th}`$ side represents the value of $`|U_{\mathrm{e}i}^2|,`$ see fig. 1 (this triangle was first used in the literature to analyze solar neutrino oscillations). To exemplify the use of the triangle, let us consider two special cases: (a) When $`\nu _\mathrm{e}`$ is an equal admixture of the three mass eigenstates, we have $`|U_{\mathrm{e}i}^2|=1/3.`$ This point is represented by the barycentre of the equilateral triangle of fig. 1. (b) When $`\nu _\mathrm{e}`$ coincides with the mass eigenstate $`\nu _1,`$ we have $`|U_{\mathrm{e1}}^2|=1,`$ and the other two mixing elements are zero. This point is represented by the $`1^{st}`$ vertex (by definition, the $`1^{st}`$ vertex is opposite to the $`1^{st}`$ side, denoted with the label $`|U_{\mathrm{e1}}^2|`$ in fig. 1, etc.).
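For readers who wish to reproduce such plots, the map from the three constrained mixing elements to a point of the triangle is simply a barycentric-coordinate transformation. A possible sketch (our own construction; the vertex layout is an arbitrary choice):

```python
import numpy as np

# vertices of an equilateral triangle of unit height (side 2/sqrt(3));
# the i-th vertex corresponds to |Uei^2| = 1
VERTICES = np.array([[0.0, 1.0],
                     [-1.0 / np.sqrt(3), 0.0],
                     [1.0 / np.sqrt(3), 0.0]])

def triangle_point(mix2):
    """Map (|Ue1^2|, |Ue2^2|, |Ue3^2|), summing to 1, to Cartesian (x, y);
    the distance from the i-th side then equals |Uei^2|."""
    return np.asarray(mix2) @ VERTICES

print(triangle_point([1/3, 1/3, 1/3]))  # barycentre, case (a) above
print(triangle_point([1, 0, 0]))        # 1st vertex, case (b) above
```

The stated distance property is the defining feature of barycentric coordinates in a triangle of unit height, which is what makes this representation faithful to the unitarity constraint.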
From eq. (3), $`\mathcal{M}_{\mathrm{ee}}^{min}`$ is zero in the inner triangular region represented in fig. 1. The vertices of this inner triangle are given by:
$$|U_{\mathrm{e1}}^2|/|U_{\mathrm{e2}}^2|=m_2/m_1\text{ when }|U_{\mathrm{e3}}^2|=0,$$
(4)
and by the two additional equations obtained by the replacements $`3\leftrightarrow 1`$ and $`3\leftrightarrow 2.`$ The condition $`|U_{\mathrm{e3}}^2|=0`$ in eq. (4) tells us that we are on the $`3^{rd}`$ (lower) side of the unitarity triangle of fig. 1.
At the $`i^{th}`$ vertex of the unitarity triangle $`\mathcal{M}_{\mathrm{ee}}^{min}=\mathcal{M}_{\mathrm{ee}}=m_i,`$ as is clear from eq. (3), and as illustrated in fig. 1. The value of $`\mathcal{M}_{\mathrm{ee}}^{min}`$ decreases linearly when moving from one vertex toward the inner triangle. In fact, $`\mathcal{M}_{\mathrm{ee}}^{min}`$ is non-zero only close to the vertices of the unitarity triangle (assuming $`m_1>0`$). This concludes the illustration of fig. 1.
The unitarity triangle can also be used to represent the maximum possible value of $`\mathcal{M}_{\mathrm{ee}}.`$ Quite simply, $`\mathcal{M}_{\mathrm{ee}}^{max}`$ is the function of the mixing elements $`|U_{\mathrm{e}i}^2|`$ that interpolates linearly among the values $`\mathcal{M}_{\mathrm{ee}}=m_i`$ taken at the vertices of the unitarity triangle, as is clear from eq. (2). However, since $`\mathcal{M}_{\mathrm{ee}}^{max}`$ is just a sum of positive contributions (eq. (2)), the analysis of $`\mathcal{M}_{\mathrm{ee}}^{max}`$ is nearly trivial.
## 3 Phenomenology of oscillations and $`0\nu 2\beta `$
We discuss now the $`0\nu 2\beta `$ signal assuming some specific spectra and scenarios of oscillation, using the graphical representation introduced above. We take advantage of the indications from atmospheric and solar neutrinos, which can be accounted for in terms of two different frequencies of neutrino oscillations, related to the mass differences squared $`\mathrm{\Delta }m_{atm}^2`$ and $`\mathrm{\Delta }m_{\odot }^2`$ ($`\mathrm{\Delta }m_{atm}^2\gg \mathrm{\Delta }m_{\odot }^2`$). We consider the following three cases:
* Case \[$`𝒩`$\]: “normal” hierarchy, $`m_1\ll (\mathrm{\Delta }m_{atm}^2)^{1/2}`$;
* Case \[$`ℐ`$\]: “inverted” hierarchy, $`m_1\ll (\mathrm{\Delta }m_{atm}^2)^{1/2}`$;
* Case \[$`𝒟`$\]: “normal” and “inverted” hierarchies, $`m_1\gg (\mathrm{\Delta }m_{atm}^2)^{1/2}`$;
from these cases, it will be easy to understand also the “intermediate” situations when $`m_1\sim (\mathrm{\Delta }m_{atm}^2)^{1/2}.`$ With the term “hierarchy” (either “normal” or “inverted”) we refer to the mass differences squared, see eqs. (5) and (8) below (in order to simplify the connection with the phenomenology, we use a definition of “hierarchy” that is relevant to neutrino oscillations, involving just the mass differences squared; notice that sometimes in the literature “hierarchy” refers to the neutrino spectrum itself). We assume that the electronic admixture in atmospheric neutrinos is sub-dominant, and use for the mass splittings $`\mathrm{\Delta }m_{atm}^2`$ and $`\mathrm{\Delta }m_{\odot }^2`$ the values suggested by the phenomenology. For the solar neutrino solutions we use the standard terminology, which we will recall in the following. A similar study has been performed elsewhere, with the goal of extracting information on the mixing angles, knowing $`\mathcal{M}_{\mathrm{ee}}`$ and the neutrino spectrum; other recent works are also oriented toward the phenomenology.
### 3.1 Case \[$`𝒩`$\]: “normal” hierarchy, $`m_1\ll (\mathrm{\Delta }m_{atm}^2)^{1/2}`$
What is the expected value of $`\mathcal{M}_{\mathrm{ee}}`$ for a neutrino spectrum with “normal” hierarchy:
$$m_3^2-m_2^2=\mathrm{\Delta }m_{atm}^2\gg m_2^2-m_1^2=\mathrm{\Delta }m_{\odot }^2,$$
(5)
assuming, to begin with, that $`m_1`$ is negligible? For the values of $`\mathrm{\Delta }m_{\odot }^2`$ suggested by the MSW small mixing angle solution of the solar neutrino problem (SMA) or by vacuum oscillations (VO), the only important contribution to the $`0\nu 2\beta `$ decay rate comes from the heaviest eigenstate: $`\mathcal{M}_{\mathrm{ee}}\approx |U_{\mathrm{e3}}^2|m_3.`$ It is possible to have a comparable contribution from the second eigenstate for MSW solutions of the solar neutrino problem with large mixing angle (LMA): $`\delta \mathcal{M}_{\mathrm{ee}}|_{\odot }=|U_{\mathrm{e2}}^2|m_2\approx 4\times 10^{-3}`$ eV (using $`\mathrm{\Delta }m_{\odot }^2\approx 10^{-4}`$ eV<sup>2</sup> and $`|U_{\mathrm{e2}}^2|\approx 0.4`$). This is of the same size as the contribution from the heaviest eigenstate, $`\delta \mathcal{M}_{\mathrm{ee}}|_{atm}=|U_{\mathrm{e3}}^2|m_3,`$ if $`|U_{\mathrm{e3}}^2|\approx 0.1`$ and $`\mathrm{\Delta }m_{atm}^2\approx 2\times 10^{-3}`$ eV$`^2.`$ We conclude that, if future experiments searching for the $`0\nu 2\beta `$ transition prove that
$$\mathcal{M}_{\mathrm{ee}}>10^{-2}\text{ eV},$$
(6)
the hypothesis of a spectrum with “normal” hierarchy and very small $`m_1`$ will be disfavoured (alternatively, one would have to postulate a different origin of the $`0\nu 2\beta `$ decay).
The function $`\mathcal{M}_{\mathrm{ee}}^{min}`$ is represented in fig. 2 for two different values of $`\mathrm{\Delta }m_{\odot }^2:`$ $`10^{-4}`$ eV<sup>2</sup> in the $`1^{st}`$ plot, and $`10^{-5}`$ eV<sup>2</sup> in the $`2^{nd}`$ (we assumed $`\mathrm{\Delta }m_{atm}^2=2\times 10^{-3}`$ eV<sup>2</sup>). Notice that for $`m_1=0`$ the inner triangle of fig. 1 degenerates into a line (for much smaller values of $`\mathrm{\Delta }m_{\odot }^2,`$ say for VO, the line practically coincides with the side $`U_{\mathrm{e3}}=0`$). Recalling that the inner triangle corresponds to the region where $`\mathcal{M}_{\mathrm{ee}}^{min}=0,`$ we appreciate from fig. 2 the crucial dependence of the $`0\nu 2\beta `$ transition rate on the parameter $`|U_{\mathrm{e3}}^2|.`$
Let us now increase the size of $`m_1,`$ keeping $`m_1\ll (\mathrm{\Delta }m_{atm}^2)^{1/2}\approx m_3.`$ The (degenerate) inner triangle of fig. 2 becomes an obtuse isosceles triangle when $`m_1\approx m_2>(\mathrm{\Delta }m_{\odot }^2)^{1/2},`$ the base being parallel to the $`3^{rd}`$ side, where $`U_{\mathrm{e3}}=0.`$ A complete suppression of the $`0\nu 2\beta `$ transition can take place for those solutions of the solar neutrino problem that fall in this inner triangle; for this reason, the most important conclusion is unchanged: the size of $`|U_{\mathrm{e3}}^2|`$ is very important in determining whether the case $`\mathcal{M}_{\mathrm{ee}}^{min}=0`$ is possible or not. More precisely, this mixing element has to be compared with the height of the inner triangle, $`m_1/m_3`$ (see fig. 1). Incidentally, we notice the simple formula
$$\mathcal{M}_{\mathrm{ee}}\approx |m_1+|U_{\mathrm{e3}}^2|m_3e^{i\phi }|\text{ where }\phi =\mathrm{arg}[U_{\mathrm{e3}}^2/U_{\mathrm{e1}}^2]$$
(7)
valid for the SMA case, which illustrates that $`\mathcal{M}_{\mathrm{ee}}\approx 0`$ is possible when $`m_1/m_3\approx |U_{\mathrm{e3}}^2|`$ ($`m_3\approx (\mathrm{\Delta }m_{atm}^2)^{1/2}`$ in the present hypotheses) and the phases of $`U_{\mathrm{e3}}^2`$ and $`U_{\mathrm{e1}}^2`$ are opposite.
### 3.2 Case \[$`ℐ`$\]: “inverted” hierarchy, $`m_1\ll (\mathrm{\Delta }m_{atm}^2)^{1/2}`$
Let us assume a spectrum with “inverted” hierarchy, namely
$$m_3^2-m_2^2=\mathrm{\Delta }m_{\odot }^2\ll m_2^2-m_1^2=\mathrm{\Delta }m_{atm}^2,$$
(8)
and suppose, to begin with, that $`m_1`$ is negligible. In this case, since the sub-dominant mixing element is $`|U_{\mathrm{e1}}^2|,`$ we can obtain large maximum values:
$$\mathcal{M}_{\mathrm{ee}}^{max}\approx (\mathrm{\Delta }m_{atm}^2)^{1/2}=(3\text{ to }9)\times 10^{-2}\text{ eV.}$$
(9)
This could be close to the present bound, if the nuclear matrix elements also take the highest values allowed by present uncertainties, which amount to a factor of 2 to 3.
Under these hypotheses, $`\mathcal{M}_{\mathrm{ee}}^{min}`$ can be (close to) zero only if $`|U_{\mathrm{e2}}^2|`$ is very close to $`|U_{\mathrm{e3}}^2|,`$ the contribution from $`|U_{\mathrm{e1}}^2|`$ being irrelevant. In a graphical representation like fig. 2, this corresponds to the fact that the inner triangle almost coincides with the bisector $`|U_{\mathrm{e2}}^2|=|U_{\mathrm{e3}}^2|`$ (the “small” mixing element $`|U_{\mathrm{e1}}^2|`$ is represented by the distance from the $`1^{st}`$, right, side).
Let us increase the size of $`m_1,`$ keeping $`m_1\ll (\mathrm{\Delta }m_{atm}^2)^{1/2}\approx m_3.`$ The inner triangle is, under this assumption, acute isosceles, the base being parallel to the side $`U_{\mathrm{e1}}=0`$ and of length $`m_1/m_3\times 2/\sqrt{3}.`$ Hence, only those solutions of the solar neutrino problem which have almost maximal mixing angles (VO, averaged oscillations and perhaps LMA) fall in the region where the $`0\nu 2\beta `$ transition rate may be strongly suppressed. In the case of SMA, since $`|U_{\mathrm{e3}}^2|`$ is small by assumption (and $`|U_{\mathrm{e1}}^2|`$ is not large) we have simply:
$$\mathcal{M}_{\mathrm{ee}}\approx m_2=(m_1^2+\mathrm{\Delta }m_{atm}^2)^{1/2}.$$
(10)
Hence, $`\mathcal{M}_{\mathrm{ee}}\approx 0`$ is impossible if the SMA solution is correct. Quite generally, in the case of “inverted” hierarchy it is less likely that $`\mathcal{M}_{\mathrm{ee}}^{min}`$ is zero.
### 3.3 Case \[$`𝒟`$\]: “nearly degenerate” spectrum, $`m_1\gg (\mathrm{\Delta }m_{atm}^2)^{1/2}`$
The largest values of $`\mathcal{M}_{\mathrm{ee}}`$ (up to the experimental bound) are attained for a “nearly degenerate” neutrino spectrum. The maximum value is simply $`\mathcal{M}_{\mathrm{ee}}^{max}=m_1+𝒪(\mathrm{\Delta }m^2/m_1),`$ with $`m_1`$ playing the role of the mass-spectrum offset.
The corresponding minimum value, $`\mathcal{M}_{\mathrm{ee}}^{min}/m_1=\mathrm{max}_i\{2|U_{\mathrm{e}i}^2|-1,0\},`$ is represented in fig. 3 assuming “normal” hierarchy of the mass differences (eq. (5)); $`𝒪(\mathrm{\Delta }m^2/m_1^2)`$ terms have been neglected. From this figure one can see that, to interpret properly the results of $`0\nu 2\beta `$ decay studies (and possibly, to exclude the inner region in the $`1^{st}`$ plot, where $`\mathcal{M}_{\mathrm{ee}}\ll m_1`$ is possible), we need precise information on the mixing elements. This requires distinguishing among oscillation scenarios. The plots also illustrate the importance of quantifying the size of $`|U_{\mathrm{e3}}^2|.`$ Similar considerations apply when the mass differences have “inverted” hierarchy, eq. (8), with $`|U_{\mathrm{e1}}^2|`$ playing the role of $`|U_{\mathrm{e3}}^2|.`$ Notice in particular that with approximate mass degeneracy the role of the sub-dominant mixing is almost the same for “normal” and “inverted” hierarchy; this should be contrasted with the conclusions for the cases \[$`𝒩`$\] and \[$`ℐ`$\], when $`m_1\ll (\mathrm{\Delta }m_{atm}^2)^{1/2}`$.
In the particular case of the SMA solution, eq. (7) is still valid, with $`m_1\approx m_3`$ (and with $`U_{\mathrm{e3}}`$ replaced by $`U_{\mathrm{e1}}`$ for “inverted” hierarchy); hence, up to sub-dominant mixing terms, $`\mathcal{M}_{\mathrm{ee}}\approx \mathcal{M}_{\mathrm{ee}}^{max}\approx m_1,`$ and a complete cancellation is impossible.
### 3.4 A complementary representation
In order to recapitulate and confirm the results obtained in this section, we present a complementary graphical representation. Supposing that the mixing elements are known with good precision, we can plot the range of values of $`\mathcal{M}_{\mathrm{ee}}`$ as a function of the only residual parameter: the mass of the lightest neutrino (in practice, this representation will be useful once the parameters of oscillation are known reliably). This is done in fig. 4, where we assume the mass splittings $`\mathrm{\Delta }m_{atm}^2=2\times 10^{-3}`$ eV<sup>2</sup> and $`\mathrm{\Delta }m_{\odot }^2=10^{-4}`$ eV<sup>2</sup> for “normal” and “inverted” hierarchy. The mixing $`|U_{\mathrm{e3}}^2|`$ (resp. $`|U_{\mathrm{e1}}^2|`$) with the heaviest (resp. lightest) state is $`0,2,4`$ and $`6\times 10^{-2}`$ in the 4 types of curves, going from the inner to the outer ones. We fixed $`|U_{\mathrm{e2}}^2|=0.4`$ (resp. $`|U_{\mathrm{e3}}^2|=0.4`$), which corresponds roughly to an LMA solution. The figure confirms the conclusions obtained in section 3.1 for the case \[$`𝒩`$\] about the importance of $`\mathrm{\Delta }m_{\odot }^2`$ and of the small mixing element $`|U_{\mathrm{e3}}^2|.`$ For the case \[$`ℐ`$\], instead, $`|U_{\mathrm{e1}}^2|`$ and $`\mathrm{\Delta }m_{\odot }^2`$ are less important, in agreement with the discussion in section 3.2.
This representation emphasizes that even a null experimental result may provide very important information on the massive neutrino parameters: in fact, $`\mathcal{M}_{\mathrm{ee}}^{min}<10^{-2}`$ eV could rule out the assumption of “inverted” hierarchy, see the second plot of fig. 4; or, a bound on $`\mathcal{M}_{\mathrm{ee}}^{min}`$ at the $`10^{-3}`$ eV level could amount to a measurement of the lightest neutrino mass, see the first plot of the same figure. Unfortunately, the value of $`m_1`$ determined in this way depends strongly on the parameters of oscillation, since:
$$\mathcal{M}_{\mathrm{ee}}^{min}=\left||U_{\mathrm{e2}}^2|(\mathrm{\Delta }m_{\odot }^2)^{1/2}-|U_{\mathrm{e3}}^2|(\mathrm{\Delta }m_{atm}^2)^{1/2}\right|\text{for }m_1=0;$$
(11)
so that, even in the LMA case we are considering, it will be a real challenge to prove that $`m_1\ne 0.`$
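To make the content of fig. 4 concrete, the band $`[\mathcal{M}_{\mathrm{ee}}^{min},\mathcal{M}_{\mathrm{ee}}^{max}]`$ can be generated as a function of the lightest mass from the formulae of section 1.2. A hedged sketch reusing the `m_ee_extrema` helper defined above (the splittings are the values assumed in fig. 4; the mass orderings follow eqs. (5) and (8)):

```python
import numpy as np

DM_ATM, DM_SOL = 2e-3, 1e-4          # eV^2, as assumed for fig. 4

def m_ee_band(m1, mix2, inverted=False):
    """[M_ee^min, M_ee^max] versus the lightest mass m1, fixed |Uei^2|."""
    if not inverted:                  # eq. (5): atmospheric gap on top
        m2 = np.sqrt(m1**2 + DM_SOL)
        m3 = np.sqrt(m2**2 + DM_ATM)
    else:                             # eq. (8): atmospheric gap at bottom
        m2 = np.sqrt(m1**2 + DM_ATM)
        m3 = np.sqrt(m2**2 + DM_SOL)
    return m_ee_extrema([m1, m2, m3], mix2)

# LMA-like mixing with |Ue3^2| = 0.04, one of the curves of fig. 4
print(m_ee_band(0.0, [0.56, 0.40, 0.04]))   # min reproduces eq. (11)
```

For $`m_1=0`$ and “normal” hierarchy this returns $`\mathcal{M}_{\mathrm{ee}}^{min}\approx 2\times 10^{-3}`$ eV, i.e. eq. (11) with the parameters above, which illustrates how delicate the determination of $`m_1`$ from a bound alone would be.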
## 4 Concluding remarks
### 4.1 On the case $`\mathcal{M}_{\mathrm{ee}}\approx 0`$
We regarded $`\mathcal{M}_{\mathrm{ee}}`$ as a function of several parameters: the mixing elements, the squared mass splittings, the mass of the lightest neutrino and the complex phases. Following this approach, one may be led to wonder whether the cases when the rate is small as a consequence of cancellations among the various parameters are (in some sense) “natural”.
We show here how the smallness can arise in a “natural” manner. Let us postulate that the neutrino mass matrix has a hierarchical structure, analogous to the structure of the Yukawa couplings of the charged fermions.
In this case, we can expect that the “ee-entry” of the neutrino mass matrix ($`=\mathcal{M}_{\mathrm{ee}}`$) is the smallest one, and also that $`\mathcal{M}_{\mathrm{ee}}\ll (\mathrm{\Delta }m_{atm}^2)^{1/2}.`$ This is what happens in two models proposed in the literature, where:
$$\mathcal{M}_{\mathrm{ee}}\approx (\mathrm{\Delta }m_{atm}^2)^{1/2}\times (\mathrm{sin}\theta _C)^{2n};$$
(12)
$`\theta _C`$ is the Cabibbo angle, and $`n=2,3`$ in the two models respectively. The value of $`\mathcal{M}_{\mathrm{ee}}`$ in these models is rather small. Although the contribution from the third family is modest, LMA solutions with relatively large mass splittings are possible in this type of model, which a priori may imply much larger values of $`\mathcal{M}_{\mathrm{ee}},`$ as remarked for the case of section 3.1. Thus, these models provide examples of cases when $`\mathcal{M}_{\mathrm{ee}}`$ is small as a consequence of cancellations among the various contributions.
In another sense, the statement $`\mathcal{M}_{\mathrm{ee}}\approx 0`$ is surely “natural” in a standard model framework, since at the one-loop level the radiative corrections are tiny: $`y_e^2/(4\pi )^2\approx 5\times 10^{-14},`$ where $`y_e`$ is the electron Yukawa coupling.
### 4.2 What is the maximum value of $`\mathcal{M}_{\mathrm{ee}}`$?
Let us briefly summarize the results of section 3 concerning an aspect of importance for the experimental search: the maximum value of $`\mathcal{M}_{\mathrm{ee}}`$ that we can a priori expect.
For given mixing elements, $`\mathcal{M}_{\mathrm{ee}}^{max}`$ increases in passing from the case discussed in section 3.1 (case \[$`𝒩`$\]) to section 3.2 (case \[$`ℐ`$\]), and finally to section 3.3 (case \[$`𝒟`$\]). Indeed, $`\mathcal{M}_{\mathrm{ee}}^{max}`$ reaches at most the $`10^{-2}`$ eV level in case \[$`𝒩`$\], depending on the sub-dominant mixing element $`|U_{\mathrm{e3}}^2|`$ and on the scenario of oscillation (eq. (6)); it can be of the order of 3 to $`9\times 10^{-2}`$ eV in case \[$`ℐ`$\], depending on the size of $`\mathrm{\Delta }m_{atm}^2`$ (eq. (9)); finally, $`\mathcal{M}_{\mathrm{ee}}^{max}`$ can be as large as the experimental upper limit of 0.2 eV in case \[$`𝒟`$\]. In this sense, the a priori hope of a positive experimental result increases when going from \[$`𝒩`$\] to \[$`ℐ`$\], and from \[$`ℐ`$\] to \[$`𝒟`$\] (on the contrary, one might argue that case \[$`𝒩`$\] is more likely than \[$`ℐ`$\], and the latter more likely than \[$`𝒟`$\], again on the basis of an analogy between the neutrino spectrum and the spectra of the charged fermions).
### 4.3 Studies of neutrino oscillations and search for $`0\nu 2\beta `$ decay
We have shown that the parameters of oscillations are closely related to the possible value of the $`0\nu 2\beta `$ decay rate. However, the dependence on the type of spectrum is also essential. We summarize here some results of special interest (referring to the previous section for details):
* For the small mixing angle MSW solution, $`\mathcal{M}_{\mathrm{ee}}`$ is quite large for “inverted” hierarchy in the case $`m_1\ll (\mathrm{\Delta }m_{atm}^2)^{1/2},`$ see eq. (10); for “normal” hierarchy we have instead eq. (7), which is smaller than the previous case by a factor of $`|U_{\mathrm{e3}}^2|`$ when $`m_1`$ is small, and possibly even smaller due to cancellations (eq. (7)).
* For the large mixing angle MSW solution, contributions from the “solar” frequency, of order $`(\mathrm{\Delta }m_{\odot }^2)^{1/2},`$ are not negligible, and they may lead to cancellations (or enhancements) depending on the size of $`|U_{\mathrm{e3}}^2|`$ in the case of “normal” hierarchy (sections 3.1 and 3.4).
* For the VO solution and “normal” hierarchy, the dependence of $`\mathcal{M}_{\mathrm{ee}}^{min}`$ on $`|U_{\mathrm{e3}}^2|`$ is quite appreciable (section 3.1).
* For “inverted” hierarchy, cancellations are not easy to obtain if $`m_1`$ is small in comparison with $`(\mathrm{\Delta }m_{atm}^2)^{1/2},`$ except for solutions of the solar neutrino problem with almost maximal mixing angles (section 3.2).
* The largest values of $`\mathcal{M}_{\mathrm{ee}}`$ are attained for a “nearly degenerate” spectrum, $`m_1\gg (\mathrm{\Delta }m_{atm}^2)^{1/2}`$ (section 3.3). In this extreme case, cancellations are possible especially for large mixing angle solutions, with a significant dependence on the size of the sub-dominant mixing, for both “normal” and “inverted” hierarchies.
### 4.4 Conclusions and perspectives
In this work, we discussed the interplay between studies of neutrino oscillations and the search for $`0\nu 2\beta `$ decay. We introduced new graphical representations, aimed at clarifying the relations between the neutrino spectra, the scenarios of oscillations and the rate of neutrinoless double beta decay. Regarding the perspectives, it should be noted that the present information on massive neutrinos is compatible with quite different oscillation scenarios and neutrino spectra. Future experiments aiming at a signal of the $`0\nu 2\beta `$ process above the $`10^{-2}`$ eV level will have an important role in deciding among the alternative possibilities.
###### Acknowledgments.
I thank R. Barbieri, C. Giunti, M. Maris and A. Yu. Smirnov for useful discussions, and the Referee of the work for having suggested important improvements. Earlier accounts of this work were presented elsewhere.
---
# The Distance to the Cygnus Loop from Hubble Space Telescope Imaging of the Primary Shock Front
*Based on observations with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555.*
## 1 Introduction
The Cygnus Loop supernova remnant (SNR) is an extremely important laboratory for studying many astrophysical phenomena related to shock waves and their interaction with the interstellar medium (ISM). Its proximity, its large angular size, and relatively small foreground extinction all help make it an important object for studies across the entire electromagnetic spectrum. Our thoughts and understanding about the Cygnus Loop and what it represents have evolved dramatically over the last several decades, culminating in the current picture, recently summarized by Levenson et al. (1997), of a cavity explosion of a fairly massive star.
Yet while our understanding of the Cygnus Loop has evolved dramatically, there are other aspects of this important object that remain more in the realm of folklore and seem to carry on from one generation to the next. One of these aspects is the distance to the Cygnus Loop, which is an important and basic datum that affects nearly every other aspect or interpretation of this object. The oft-quoted value of this parameter is 770 pc, attributed to Minkowski (1958). He performed a velocity ellipse analysis of 37 filaments and used the proper motion of 0.03″ yr<sup>-1</sup> measured by Hubble (1937) for the bright optical filaments to determine this value. Except for occasional “heretical” suggestions such as those of Sakibov & Smirnov (1983) (1.4 kpc) and Braun & Strom (1986) (460 pc), nearly all other researchers have assumed Minkowski’s value for the distance.
In this paper, we derive a new distance to the Cygnus Loop based on $`\mathrm{𝐻𝑆𝑇}`$ observations of a single filament on the extreme northeastern limb of the remnant. We obtain a proper motion for the filament by comparing the $`\mathrm{𝐻𝑆𝑇}`$ data to a digitized version of the POSS-I red plate, and use previous data on this filament’s shock velocity to constrain the distance. We briefly discuss the ramifications of this new distance estimate for previous studies of this important object.
## 2 Observations and Data Reduction
The filament we have observed in the Cygnus Loop has been studied a number of times previously with other instruments and ground-based telescopes (e.g. Raymond et al. 1983, henceforth RBFG; Fesen & Itoh 1985; Long et al. 1992; and Hester, Raymond & Blair 1994, henceforth HRB). Located at RA = $`20^h56^m2.7^s`$ and Dec = +31°56′39.1″ (J2000), it is on the extreme northeastern edge of the Cygnus Loop, about 5′ ahead of the bright radiative filaments seen in this region. This is a region of so-called ‘nonradiative’ shock front where the primary blast wave is encountering partially neutral preshock material (cf. RBFG). The X–ray emission from the Cygnus Loop is bounded by these faint Balmer-dominated shock fronts, as can be seen in the data presented by HRB and Levenson et al. (1997).
The imaging data reported in this paper were obtained on 1997 Nov. 16 with the WFPC2 camera on the Hubble Space Telescope. The image was obtained primarily for use as an EARLY ACQ exposure, as part of our Cycle 7 $`\mathrm{𝐻𝑆𝑇}`$ STIS campaign on this same filament. However, we expected the image to be interesting scientifically as well, and devoted three orbits of our program for this purpose. We have worked directly with the calibrated data extracted from the Guest Observer tape provided by the STScI, using tasks available in the IRAF/STSDAS environment (IRAF is distributed by the National Optical Astronomy Observatories, which is operated by the Association of Universities for Research in Astronomy, Inc. (AURA) under cooperative agreement with the National Science Foundation; the Space Telescope Science Data Analysis System (STSDAS) is distributed by the Space Telescope Science Institute). The SNR filament was placed so that it crossed the WF2 and WF3 CCDs. We used two exposures per orbit and the F656N filter, which is centered near the Balmer H$`\alpha `$ line. The positioning was offset by $`\mathrm{\Delta }`$x = $`\mathrm{\Delta }`$y = 10 WFC pixels (1″) between each orbit. The two exposures from each orbit were combined individually to remove most of the cosmic ray events. Then the first and third orbit data were shifted to align with the data from the second orbit, and the three orbits of data were combined into the final image. This produced an image clear of cosmic rays and camera hot pixels, although some effects from the ‘gutter’ between the WF2 and WF3 chips are still visible when the resulting data are displayed at high contrast. The total integration time was 7400 s. Since stellar contamination is not severe and the filament is known to emit primarily in H$`\alpha `$, no other filters were used.
Figure 1 shows a 720 by 1484 pixel (72.0″ by 148.4″) region from the combined data. In this image, north is toward the upper right corner (position angle 30.24° from vertical) and east is to the upper left, as indicated. The brightest star at lower left is $`\mathrm{𝐻𝑆𝑇}`$ GSID 0269203438 at V=13.3 and position RA = $`20^h56^m7.41^s`$ and Dec = +31°55′35.70″ (J2000). The filament stretches across the image as a ribbon of light, with variable intensity along both its length and width. Here we see the primary Cygnus Loop shock wave as a nearly edge-on sheet, gently rolling along our line of sight as it encounters very slightly differing preshock densities. We see no hard kinks or twists in the shock front that would be indicative of larger density contrast features (or a cloud/intercloud type morphology à la McKee & Ostriker (1977)). Rather, it appears that the brightness variations over the observed region are dominated by line of sight effects, with brighter regions corresponding to deeper columns and/or multiple shock crossings along a given line of sight. The crispest regions of edge-on shock material are at or below our ability to resolve with the 0.1″ pixels of the WFPC2 Wide Field CCDs. Since the H$`\alpha `$ emission is expected to be formed very close behind the shock front ($`<10^{14}`$ cm; cf. RBFG), we are truly seeing a ‘snapshot’ of the Cygnus Loop shock front as it encounters the preshock medium. Several exceedingly faint filaments are seen in projection behind the primary shock (toward the bottom in Figure 1). These filaments presumably arise from other locations on the primary shock front seen in projection. Their extreme faintness may be due to lower preshock densities or lower neutral fractions in the preshock gas at those positions, or it may simply be that the path length through the emitting region is smaller.
## 3 Analysis
In contrast to Minkowski (1958), we determine the distance to the Cygnus Loop based on the properties of just a single filament. To do this we use the best value for the shock velocity at the observed position and a measurement of the proper motion of the filament. For measuring the proper motion we compare our $`\mathrm{𝐻𝑆𝑇}`$ image with a digitized version of the POSS-I red plate taken about 44 years earlier. We discuss these topics in the sections below.
### 3.1 Constraints on the Shock Velocity
The Balmer line emission from nonradiative shocks, such as the one we are considering, comes from neutral atoms that pass through the shock front and are collisionally excited by electrons before becoming ionized (Chevalier & Raymond 1978; Chevalier, Kirshner & Raymond 1980). In this zone immediately behind the shock front, a significant fraction of the neutral hydrogen atoms undergo charge exchange with the hot post-shock ions. This results in the Balmer lines having two distinct components: a narrow component with a thermal width representative of the pre-shock temperature, and a broad component with a velocity spread representative of the post-shock ion temperature. Since the post-shock ion temperature depends on the shock velocity, the width of the broad component of the H$`\alpha `$ line is a diagnostic for the shock velocity.
The translation of the width of the broad component of the H$`\alpha `$ line into an actual shock velocity depends upon the equilibration mechanism between ions and electrons in the post-shock region. The shock thermalizes 3/4 of the bulk velocity of the pre-shock particles, so the increase in the temperature of the ions is higher than the increase in the temperature of the electrons by the ratio of their masses. The temperatures eventually come into equilibrium via Coulomb collisions. However, if there is rapid equilibration between the ions and electrons, for instance via plasma turbulence within the shock front (or some other mechanism), then the increase in ionic temperature is lower than otherwise. Therefore, a given width of the broad H$`\alpha `$ line implies a higher shock velocity in the case of rapid equilibration.
The two-component line structure has been observed for the filament we are discussing, but with somewhat discrepant results. RBFG used the Whipple 1.5 m telescope and echelle spectrograph and a 2.5″ by 7.5″ aperture oriented east-west across the central portion of the filament. They determined $`\mathrm{\Delta }v_{narrow}`$ = 31 $`\mathrm{km}\mathrm{s}^{-1}`$ and $`\mathrm{\Delta }v_{broad}`$ = 167 $`\mathrm{km}\mathrm{s}^{-1}`$, which implies that the shock velocity is 170 $`\mathrm{km}\mathrm{s}^{-1}`$ for the case of Coulomb equilibration and 210 $`\mathrm{km}\mathrm{s}^{-1}`$ for rapid equilibration. Later HRB used the Kitt Peak 4 m telescope with a long slit (200″ by 1.2″) echelle oriented nearly along the length of the filament (see HRB Figure 4). The width of the narrow component in their spectrum agrees with the RBFG value. However, they found $`\mathrm{\Delta }v_{broad}=130\pm 15\mathrm{km}\mathrm{s}^{-1}`$, significantly lower than the RBFG value. The inferred shock velocity is then 130 $`\mathrm{km}\mathrm{s}^{-1}`$ for Coulomb equilibration and 165 $`\mathrm{km}\mathrm{s}^{-1}`$ for rapid equilibration.
Geometric considerations can also affect the observed broad component width. Slight non-tangencies (especially both into and out of the plane of the sky combined) would be expected to widen the measured broad component width compared with a truly edge-on geometry. Any such broadening in the observed profiles for this filament must indeed be very symmetrical, owing to the well-centered narrow H$`\alpha `$ component in both the RBFG and HRB data sets. HRB estimated the extent of non-tangencies to be $`\pm `$6° (plus and minus, to keep the broad and narrow components centered), a number that is consistent with the apparent bumps and wiggles viewed along the filament in Figure 1. The widening occurs as the sine of this angle, allowing of order 20% broadening from geometric effects (worst case). Reconstructing the RBFG and HRB slits onto the resolved image in Figure 1 shows a high filling factor of very nearly edge-on shock material in the HRB slit and a larger fraction of non-tangent shock material in the RBFG slit. This is in the right direction to account for much of the observed difference in broad component width, and perhaps indicates the RBFG shock velocity estimates (e.g. 170 – 205 $`\mathrm{km}\mathrm{s}^{-1}`$) are on the high side.
A different set of diagnostics for the shock velocity is the strength of lines from high ionization species arising further downstream from the H$`\alpha `$ zone. Since the ionization is due to collisions, the highest ionization stage reached by any element depends on the temperature of the post-shock gas which in turn depends on the shock velocity. Also these lines are formed further downstream where Coulomb collisions have in any case had time to equilibrate the ion and electron temperatures, so their strengths are not as sensitive to the equilibration mechanism. The filament under discussion here was observed with the Hopkins Ultraviolet Telescope (Long et al. (1992)) and its spectrum showed strong O VI $`\lambda \lambda `$ 1032,1038 and N V $`\lambda \lambda `$ 1239,1243 emission. By comparing the observed line strengths with shock model calculations, Long et al. (1992) found that the spectrum could be best fit by shock models with velocities 175 to 185 $`\mathrm{km}\mathrm{s}^1`$. Lower shock velocities could not produce the observed O VI emission and higher shock velocities resulted in an unacceptably high ratio of O VI to N V emission.
As HRB and Long et al. (1992) discuss, their observations can be reconciled if either the shock front is rapidly decelerating, or if there is rapid equilibration of ions and electrons in a 170 $`\mathrm{km}\mathrm{s}^1`$ shock front. If the shock is indeed decelerating, then the shock velocity appropriate for the last 50 years needs to be used in calculating the distance to the remnant. If the deceleration is so rapid that the velocity changes significantly over a period of 50 years, that also would need to be accounted for.
Given the uncertainties discussed above, we adopt $`v_{shock}=170\pm 20\mathrm{km}\mathrm{s}^1`$ as a reasonable estimate for the relevant shock velocity, and use this in the distance calculation below.
### 3.2 A New Proper Motion Measurement
We determine the proper motion of the filament by comparing our $`\mathrm{𝐻𝑆𝑇}`$ image with a digitized scan of the POSS-I red plate of the region. This scan, kindly provided by the Catalogs and Surveys Branch at STScI, was performed with 15 $`\mu `$m pixels, corresponding to 1.0″ per pixel. The POSS image was taken on 1953 July 14 and the $`\mathrm{𝐻𝑆𝑇}`$ image on 1997 November 16, giving us a temporal separation of 16195 days ($`1.40\times 10^9`$ s) between the two epochs.
We obtain the proper motion of the filament by measuring the perpendicular distance between selected stars and the local shock front in both the POSS and $`\mathrm{𝐻𝑆𝑇}`$ images. At the resolution of the POSS image, the shock looks smooth. In contrast, the $`\mathrm{𝐻𝑆𝑇}`$ image shows that the shock front has very complicated substructure. Therefore, for our comparison we have convolved the $`\mathrm{𝐻𝑆𝑇}`$ image with a Gaussian of FWHM = 5.4″, which corresponds to the point spread function determined for stars in the POSS image. In Figure 2 we show two locations where intensity profiles were taken along cuts passing through the shock front and a suitable star. In each case, the leftmost panel shows the POSS image, the middle panel shows the smoothed $`\mathrm{𝐻𝑆𝑇}`$ image and the right panel shows the original $`\mathrm{𝐻𝑆𝑇}`$ image. (The regions shown in the POSS images have the same size as the regions in the $`\mathrm{𝐻𝑆𝑇}`$ images - all are 74″ $`\times `$ 72″ although the alignments differ by about 20°.) The intensity profiles were taken along the length of the boxes shown, and averaged over the width.
The results of our measurements are shown in Figure 3. For each location, we have plotted the background-subtracted, normalized intensity profile along the cuts shown in Figure 2. The dotted line shows the POSS profile and the dashed line the profile from the smoothed $`\mathrm{𝐻𝑆𝑇}`$ image. The star positions have been aligned, and the advance of the shock front is clearly visible. For Position 1 (top panel), the shock front has advanced by 3.5″ and for Position 2 (bottom panel) by 3.6″.
Our use of stars as fiducials in measuring the proper motion of the shock front is justified only if the stars themselves do not have a high proper motion. The best way to test for this effect would be to obtain astrometric solutions based on independently determined positions of stars in the field. Unfortunately, there is only one catalogued star in the field of view of the HST image. Therefore, we have used the information in the respective FITS file headers to obtain astrometric solutions for both the digitized POSS and HST images and compared the displacement of our fiducial stars relative to a set of 14 other stars in the field. We find that the nominal changes in coordinates for our fiducial stars are not abnormal compared with other field stars. The standard deviation in the relative proper motion for all the stars is $`\sim `$0.5″. For the specific stars used in our analysis, we find that the positional changes are 0.5″ and 0.2″ for the stars used at Positions 1 and 2, respectively. The magnitude of the errors thus introduced in the measurement of the shock proper motion is similar to those due to other factors, as we discuss below.
Despite the poor resolution of the POSS image, this method should give reasonably accurate results if the substructure of the shock has not changed drastically between the two observations, and it is reassuring that the results for two different locations give very similar values for the proper motion. To examine the effects of changes in the filament substructure, we took a profile from the full resolution HST image, changed the intensities of substructures within the shock front in arbitrary ways, and saw how that affected the location of the peak in the smoothed profile. We found that fairly extreme changes in the substructure, such as completely eliminating the second strongest peak in the Position 1 profile (ahead of the brightest band, see Figure 2), changed the derived proper motion by about 0.5″. Another possible source of error is that we have taken profiles which are perpendicular to an “average” shock front. To estimate errors caused by profiles not being normal to the local shock front, we compared profiles at slightly different angles (passing through the same star) and found that the derived proper motion could vary by about 0.3″. Experiments with several other methods and crosscuts at numerous other positions (using stars much farther from the local shock front) all gave answers consistent with those given above, typically within a few tenths of an arcsecond. Hence, we adopt a value for the filament proper motion of 3.6″ $`\pm `$ 0.5″ in 16195 days ($`\sim `$44 years) for use below.
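The smoothing-and-comparison step lends itself to a simple numerical sketch. The following is our own reconstruction of the procedure on one-dimensional cuts, not the authors' code; the array names, pixel scales and fiducial positions are placeholders:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

HST_SCALE = 0.1      # arcsec per WFC pixel
POSS_SCALE = 1.0     # arcsec per digitized POSS-I pixel
PSF_FWHM = 5.4       # arcsec, POSS point spread function

def shock_shift(cut_hst, cut_poss, star_hst, star_poss):
    """Advance of the shock-front peak between the two epochs (arcsec),
    using a star position along each cut (arcsec) as the fiducial."""
    sigma = PSF_FWHM / 2.3548 / HST_SCALE            # FWHM -> sigma, pixels
    smoothed = gaussian_filter1d(cut_hst, sigma)     # degrade to POSS PSF
    peak_hst = np.argmax(smoothed) * HST_SCALE - star_hst
    peak_poss = np.argmax(cut_poss) * POSS_SCALE - star_poss
    return peak_hst - peak_poss
```

Measured this way at the two positions, shifts of 3.5″ and 3.6″ over 44 years correspond to the adopted proper motion of roughly 0.08″ per year.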
### 3.3 Revised Distance to the Cygnus Loop
The above velocity and proper motion can now be converted into a distance, under the assumption that the motion of the filament is directly transverse to the line of sight. This assumption cannot be far off for several reasons, including the appearance of the filament, its position on the extreme limb of the SNR, and the fact that the narrow Balmer line components in spectral data are well-centered on the broad components (cf. HRB). For our best estimate values of $`v_{shock}=170\mathrm{km}\mathrm{s}^{-1}`$ and a proper motion of 3.6″, we find d = 442 pc. Applying the uncertainties on these parameters as discussed above, we derive an allowed range of 342 – 573 pc (where the higher number corresponds to the high velocity – small proper motion limit and vice versa). We note that either a) substantial deceleration of this shock front over the time period of the measurement, b) widening of the observed broad Balmer component due to shock front geometry, or c) some combination of both would only lower the appropriate velocity and hence decrease this distance estimate. These uncertainties can clearly be reduced further, both by obtaining a second epoch of $`\mathrm{𝐻𝑆𝑇}`$ imaging data at some point and by a better understanding of the electron – ion equilibration and possible deceleration of the shock front.
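In code, the conversion is a one-liner; a quick check with the numbers adopted above (our own arithmetic, not from the paper):

```python
import math

KM_PER_PC = 3.0857e13
RAD_PER_ARCSEC = math.pi / (180.0 * 3600.0)

def distance_pc(v_kms, motion_arcsec, dt_s):
    """Distance for purely transverse motion: d = v * dt / angle."""
    return v_kms * dt_s / (motion_arcsec * RAD_PER_ARCSEC) / KM_PER_PC

dt = 16195 * 86400.0                      # ~1.40e9 s between the epochs
print(distance_pc(170.0, 3.6, dt))        # ~442 pc, best estimate
print(distance_pc(150.0, 4.1, dt))        # ~342 pc, lower limit
print(distance_pc(190.0, 3.1, dt))        # ~573 pc, upper limit
```

The limits combine the quoted uncertainties (170 ± 20 km s⁻¹ and 3.6″ ± 0.5″) in the sense described in the text: high velocity with small proper motion, and vice versa.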
It should be noted that the shock velocity estimates implicitly assume that the kinetic energy dissipated in the shock front is transformed into thermal energy of the ions and electrons. If a large fraction of the shock energy is used to accelerate cosmic rays, a higher shock speed is required. Boulares & Cox (1988) present a cosmic-ray dominated shock model for the filament in question with a shock speed of 365 km s<sup>-1</sup>. However, the shock precursor in this model reaches far too high a temperature to be consistent with the H$`\alpha `$ profile (HRB), and it is likely that only $`\sim `$10% of the shock energy goes into non-thermal particles. Thus consideration of cosmic ray acceleration might increase the $`v_{shock}`$ estimate by 5%.
While considerable uncertainty remains in the distance estimate, it is clear that the canonical value of 770 pc is no longer tenable. It is of interest to note that Braun & Strom (1986) concluded d = $`460\pm 160`$ pc more than a decade ago, based on Hubble’s (1937) and Minkowski’s (1958) original data but fitting for the best mean expansion velocity instead of using the extreme of the velocity ellipse (as done by Minkowski). Also, Shull & Hippelein (1991) estimated a distance of 600 pc, but with a large uncertainty that extended upward and downward by a factor of two. Taken in this light, the distance derived here is not out of line with the existing measurements for the bright filaments.
## 4 Concluding Remarks
A distance of $`\sim `$440 pc to the Cygnus Loop has some obvious and important ramifications for the determination of the Cygnus Loop’s basic physical properties. Quantities that depend linearly on distance should be reduced by a factor of $`\sim `$0.6, while properties depending on $`\mathrm{d}^2`$ will decrease by a factor of three. At a distance of 440 pc, 1″ = $`0.6\times 10^{16}`$ cm and the angular dimensions of the Cygnus Loop (2.8° by 3.5°; cf. Levenson et al. (1997)) correspond to linear dimensions of 21.5 pc by 27 pc. Hence, the fact that the crispest regions of edge-on shock in Figure 1 reduce to a single WFC pixel or less places an upper limit of $`0.6\times 10^{15}`$ cm on the size of the H$`\alpha `$ emitting region behind the shock, still in keeping with expectations (cf. RBFG). Centered at galactic latitude -8.6°, a z distance of about 66 (d/440) pc is now appropriate, placing the SNR much closer to the galactic mid-plane. The inferred X–ray luminosity (Ku et al. (1984)) drops to $`\mathrm{L}_\mathrm{x}(0.1\text{–}4\mathrm{keV})=3.6\times 10^{34}(\mathrm{d}(\mathrm{pc})/440)^2\mathrm{ergs}\mathrm{s}^{-1}`$. Other parameters can be similarly adjusted from the literature.
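These conversions follow from simple small-angle scaling; a quick consistency check (our own arithmetic):

```python
import math

PC_CM = 3.0857e18
d_cm = 440.0 * PC_CM
rad_per_arcsec = math.pi / (180.0 * 3600.0)

print(d_cm * rad_per_arcsec)                  # 1 arcsec ~ 0.66e16 cm
print(440.0 * math.radians(2.8),              # 2.8 deg -> ~21.5 pc
      440.0 * math.radians(3.5))              # 3.5 deg -> ~27 pc
print(440.0 * math.sin(math.radians(8.6)))    # z ~ 66 pc below the plane
```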
The smaller distance will also have ramifications for models such as the “cavity explosion” picture put forward most recently by Levenson et al. (1997). Since the cavity does not have to be as large, the inferred precursor star does not have to be as early as B0. Also, a smaller radius indicates that a smaller age for the SNR would be appropriate. Ku et al. (1984) determine a Sedov age of 18,000 yrs, which reduces to 5000 (d(pc)/440) years. Our point here is not to argue for a Sedov model, but simply to point out that, as the “prototypical middle-aged SNR,” perhaps the Cygnus Loop should be considered to be on the young side of middle-aged.
It is interesting to note that another galactic SNR has recently undergone a similar contraction in its distance estimate. The Vela SNR has been assumed to be at a distance of 500 pc for many years, based on a crude estimate by Milne (1968). (Interestingly, this distance estimate was at least partially based on the assumed ‘known’ distance to the Cygnus Loop!) Several authors over the last decade have provided reason to suspect that a closer distance may be appropriate for Vela, and a recent absorption line study to stars of known distance toward Vela (Cha, Sembach, & Danks 1999) solidifies this result: Vela appears to be at a distance of only 250 pc or so.
That the distances to two of the most intensely studied galactic SNRs could be off by a factor of order two only serves to accentuate the need to exercise caution when performing a comparative analysis of galactic SNRs. At their revised distances, the linear sizes of the Cygnus Loop and Vela are quite similar, and yet their observed optical and X-ray morphologies are quite different from one another. Global evolutionary studies for galactic SNRs will continue to be fraught with uncertainty until distances to individual objects can be determined with reasonable accuracy.
Obviously, a second epoch of $`\mathrm{𝐻𝑆𝑇}`$ imaging data on this filament in a few years would provide a superior proper motion analysis and allow this result to be refined. Our nominal filament motion estimate of 3.6″ corresponds to an expected 0.082″ per year, or more than three WFC pixels of motion in four years. Such a comparison would also allow a direct assessment of any changes in relative brightness of the shock front as a function of position and remove any effect of this on the proper motion determination.
We wish to thank Brian McClean, Barry Lasker, and others in the Catalogs and Surveys Branch at STScI for providing the digitized POSS-I data. We also thank the anonymous referee for a timely report with useful suggestions. This work has been supported by STScI grant GO-07289.01-96A to the Johns Hopkins University.
---
# Density Matrices for a Chain of Oscillators
## 1 Introduction
The success of the density-matrix renormalization method (DMRG) in treating one-dimensional quantum systems is closely related to the properties of the involved density matrices. In the procedure, one determines the eigenvectors of these matrices and uses those with the largest eigenvalues as a truncated basis. To be able to single out a relatively small number, however, the density-matrix spectrum has to decrease rapidly enough. Indeed, it is usually found in the numerical calculations that the eigenvalues decay roughly exponentially.
In a previous publication it was pointed out that, for non-critical integrable models, the exponential behaviour is ultimately a consequence of the Yang-Baxter equations. For two spin one-half models, the transverse Ising chain and the uniaxial Heisenberg chain, analytical formulae were given and verified in detail in DMRG calculations.
In the present article, we want to extend these considerations to phonons, i.e. to a bosonic problem. So far, comparatively few DMRG studies have dealt with bosons. The difference to spin systems is that the full Hilbert space always has infinite dimension. Therefore any numerical treatment has to start with a truncation. One can do this in analogy with the DMRG procedure by selecting local states via the density matrix for a single site. This is still a nontrivial quantity with an infinite number of eigenstates in a full treatment, and it is of interest to find its properties in a solvable case. The same holds, of course, for the more complicated density matrix of a half-chain which is used in the DMRG algorithm.
The system we study is a purely bosonic model, a chain of $`L`$ harmonic oscillators with frequency $`\omega _0`$, coupled together by springs. It has a gap in the phonon spectrum and is a non-critical integrable system, like the spin models mentioned above. We write the Hamiltonian
$$H=\underset{i=1}{\overset{L}{\sum }}\left(-\frac{1}{2}\frac{\partial ^2}{\partial x_i^2}+\frac{1}{2}\omega _0^2x_i^2\right)+\underset{i=1}{\overset{L-1}{\sum }}\frac{1}{2}k(x_{i+1}-x_i)^2$$
(1)
and will frequently use the form $`\omega _0=1-k`$, so that for $`k=0`$ there is no dispersion, while for $`k\to 1`$ the spectrum becomes acoustic and the system critical.
We first consider in Section 2 the density matrix $`\rho _1`$ for one oscillator and show that it can always be written as the exponential of the Hamiltonian of a (new) harmonic oscillator. The spectrum therefore is purely exponential, with a decay rate depending on $`k`$ and (weakly) on the chosen site. This generalizes the known result for the case $`L=2`$. The eigenfunctions have the character of squeezed states and are used later for numerical calculations. In Section 3, we turn to the density matrix $`\rho _h`$ for half of the system. We treat the case of small and large $`L`$ explicitly and find that $`\rho _h`$ has the same exponential form, with the number of oscillators in the exponent determined by the size of the system. The result in the thermodynamic limit is derived by relating the chain to a massive two-dimensional Gaussian model and its corner transfer matrices (CTMs). It is very similar to that for the spin chains mentioned above, which leads to fermionic operators instead of bosonic ones. In particular, the spectrum without the degeneracies is purely exponential. Its form for different values of $`k`$ and different sizes $`L`$ is discussed in more detail in Section 4, including numerical results obtained by truncation and by DMRG calculations. These also illustrate to what extent the degeneracies are reproduced in an approximate treatment. The concluding Section 5 contains a summary and additional remarks. Some details concerning the case $`L=4`$ and the Gaussian model are given in two appendices.
## 2 Density Matrix for One Oscillator
In this section we consider the case where one oscillator is singled out and all the others form the environment. The corresponding density matrix (determined numerically) was used previously in the study of an electron-phonon system. Here, it can be found analytically.
The ground state of $`H`$ in (1) has the form
$$\mathrm{\Psi }(𝒙)=C\mathrm{exp}\left(-\frac{1}{2}\sum _{ij}A_{ij}x_ix_j\right)$$
(2)
where $`𝒙=(x_1,x_2,\mathrm{},x_L)`$. The matrix
$$A_{ij}=\sum _q\omega _q\varphi _q(i)\varphi _q(j)$$
(3)
is determined by the frequencies $`\omega _q`$ and the eigenvectors $`\varphi _q(i)`$ of the normal modes. From the total density matrix
$$\rho (𝒙,𝒙^{\prime })=\mathrm{\Psi }(𝒙)\mathrm{\Psi }(𝒙^{\prime })$$
(4)
one then obtains the reduced one for oscillator $`l`$ by integrating over all other coordinates $`x_i=x_i^{\prime }`$. This leads to
$$\rho _1(x_l,x_l^{\prime })=C_1\mathrm{exp}(-\frac{1}{2}(a-b)x_l^2)\mathrm{exp}(-\frac{b}{4}(x_l-x_l^{\prime })^2)\mathrm{exp}(-\frac{1}{2}(a-b)x_l^{\prime 2})$$
(5)
with the constants
$`a`$ $`=`$ $`A_{ll}`$ (6)
$`b`$ $`=`$ $`{\displaystyle \sum _{i,j\ne l}}A_{li}[A^{(l)}]_{ij}^{-1}A_{jl}`$ (7)
where $`A^{(l)}`$ is the matrix obtained from $`A`$ by deleting the $`l`$-th row and column. The second exponential in (5) can be transformed into a differential operator, giving
$$\rho _1=C_2\mathrm{exp}(-\frac{1}{4}\omega ^2y^2)\mathrm{exp}(\frac{1}{2}\frac{\partial ^2}{\partial y^2})\mathrm{exp}(-\frac{1}{4}\omega ^2y^2)$$
(8)
where $`y^2=bx_l^2/2`$ and $`\omega ^2/4=(a-b)/b`$. Writing this in terms of Bose operators $`\alpha `$, $`\alpha ^{\dagger }`$ one can bring it into diagonal form by an equation-of-motion method. The necessary Bogoljubov transformation is
$$\beta =\mathrm{cosh}\theta \alpha +\mathrm{sinh}\theta \alpha ^{\dagger }$$
(9)
with
$$e^\theta =(1+\frac{\omega ^2}{4})^{1/4}$$
(10)
As a result, one finds that $`\rho _1`$ has the form
$$\rho _1=K\mathrm{exp}(-\mathcal{H})$$
(11)
where
$$\mathcal{H}=\epsilon \beta ^{\dagger }\beta $$
(12)
is the Hamiltonian of a harmonic oscillator with energy
$$\epsilon =2\mathrm{sinh}^{-1}\left(\frac{\omega }{2}\right)=2\mathrm{sinh}^{-1}\left(\sqrt{a/b-1}\right)$$
(13)
Therefore the eigenvalues of $`\rho _1`$ are $`w_n=Ke^{-\epsilon n}`$, $`n\ge 0`$, and the spectrum is purely exponential. The constant $`K`$ follows from the sum rule $`\mathrm{Tr}(\rho _1)=\sum _nw_n=1`$.
This result is completely general. The details of the oscillating system and the position of the chosen oscillator enter only via the ratio $`a/b`$. The same constants $`a`$ and $`b`$ also determine the probability of finding a certain elongation $`x_l`$. However, as seen from $`\rho _1(x_l,x_l)`$ in (5), this quantity depends on the difference $`(a-b)`$ and thus has no direct relation to $`\epsilon `$.
In the simplest case of two oscillators ($`L=2`$) one finds explicitly
$$\epsilon =2\mathrm{sinh}^{-1}\left(\sqrt{4\omega _1\omega _2/(\omega _1-\omega _2)^2}\right)$$
(14)
or, equivalently,
$$\epsilon =\mathrm{ln}(\mathrm{coth}^2(\frac{\eta }{2}))$$
(15)
where $`\omega _1=\omega _0`$, $`\omega _2=\sqrt{\omega _0^2+2k}`$ are the two eigenfrequencies and $`e^{2\eta }=\omega _2/\omega _1`$. This is the result obtained in a different way in .
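The chain of formulae (3), (6), (7) and (13) is easy to check numerically. The following script is a minimal sketch of ours (an illustration, not part of the original calculations): it diagonalizes the dynamical matrix of the open chain (1), builds the matrix $`A`$, and evaluates $`\epsilon `$; for $`L=2`$ the result can be compared with Eq. (14).

```python
import numpy as np

def epsilon_single_site(L, k, l):
    """Level spacing of the single-site density matrix, Eqs. (6), (7), (13)."""
    w0 = 1.0 - k                                    # parametrization omega_0 = 1 - k
    M = np.diag(np.full(L, w0**2 + 2.0*k))          # dynamical matrix of H in (1)
    M -= k*np.eye(L, k=1) + k*np.eye(L, k=-1)
    M[0, 0] -= k
    M[-1, -1] -= k                                  # end sites have only one spring
    om2, phi = np.linalg.eigh(M)                    # normal modes omega_q^2, phi_q
    A = phi @ np.diag(np.sqrt(om2)) @ phi.T         # Eq. (3)
    a = A[l, l]                                     # Eq. (6)
    rest = [i for i in range(L) if i != l]
    b = A[l, rest] @ np.linalg.solve(A[np.ix_(rest, rest)], A[rest, l])  # Eq. (7)
    return 2.0*np.arcsinh(np.sqrt(a/b - 1.0))       # Eq. (13)

k = 0.5
w1, w2 = 1.0 - k, np.sqrt((1.0 - k)**2 + 2.0*k)     # eigenfrequencies for L = 2
print(epsilon_single_site(2, k, 0))                 # from a and b
print(2.0*np.arcsinh(np.sqrt(4.0*w1*w2/(w1 - w2)**2)))   # Eq. (14), same number
```

Evaluating the same function for larger $`L`$ and a range of $`k`$ produces curves of the kind shown in Fig.1.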
In Fig.1, $`\epsilon `$ is shown as a function of $`k`$, putting $`\omega _0=(1-k)`$. For $`k\to 0`$ it diverges logarithmically. In this limit the influence of the second oscillator vanishes, $`\mathrm{\Psi }(𝒙)`$ becomes a product state and one is left with only one nonzero eigenvalue $`w_0=1`$. For $`k\to 1`$, on the other hand, $`\epsilon `$ goes to zero as $`\sqrt{1-k}`$ and the eigenvalues $`w_n`$ decrease only very slowly, which reflects the strong coupling. These features are encountered also in all other cases. For $`L=3`$ one can still give explicit analytical formulae, but for larger $`L`$ the problem has to be treated numerically. In the figure, two additional cases, $`L=10`$ and $`L=100`$, are shown. The limit $`L\to \mathrm{}`$, which is approached exponentially in $`L`$ with a correlation length increasing with $`k`$, is indistinguishable from $`L=100`$ on the given scale.
One can also investigate how $`\epsilon `$ varies with the position along the chain. The result is shown in Fig.2 for several values of k. One sees that $`\epsilon `$ is large at the ends. This corresponds to the fact that the influence of the environment is smaller there. At the next site, however, $`\epsilon `$ drops and then approaches the bulk value from below as one moves into the interior. The approach becomes slower as $`k`$ increases. The overall differences in the $`\epsilon `$-values are not very large, though, as seen in the figure.
Due to the form of $`\rho _1`$, its eigenstates are standard oscillator functions of a coordinate $`z`$ which differs from $`x_l`$ by a scale factor. Compared with the eigenfunctions of the uncoupled oscillator $`l`$, they are squeezed states whose spatial extent is reduced by a factor $`q=\sqrt{\omega _0/\gamma }`$, where $`\gamma =\sqrt{a(a-b)}`$. For small $`k`$, $`q`$ approaches one and the two sets of functions coincide. With increasing $`k`$, the amount of squeezing increases, and it is then advantageous to choose the squeezed states as a local basis. This was done in the numerical calculations which will be presented in Section 4.
## 3 Density Matrix for a Half-Chain
We now turn to the central quantity in the usual DMRG calculations, the reduced density matrix for half of the system. It enters each time the system is enlarged in the infinite-size algorithm. We will determine its spectrum in the two limits of small and large $`L`$.
For $`L=2`$, one-half of the system is just one oscillator and $`\rho _h`$ has already been obtained in Section 2. We therefore proceed immediately to the case $`L=4`$. First, we note that the square root $`\rho _h^{1/2}`$ follows directly from $`\mathrm{\Psi }`$. If the coordinates along the chain are $`(x_2,x_1,x_1^{\prime },x_2^{\prime })`$, one has
$$\rho _h^{1/2}(x_1,x_2;x_1^{\prime },x_2^{\prime })=\mathrm{\Psi }(x_2,x_1,x_1^{\prime },x_2^{\prime })$$
(16)
Taking into account the form (2) and the symmetries, this leads to
$$\rho _h^{1/2}=C\mathrm{exp}\left\{-\frac{1}{2}\sum _{ij}a_{ij}(x_ix_j+x_i^{\prime }x_j^{\prime })-\sum _{ij}b_{ij}x_ix_j^{\prime }\right\}$$
(17)
where the symmetric $`(2\times 2)`$ matrices $`a_{ij}`$ and $`b_{ij}`$ follow from the matrix $`A`$ of Section 2. Altogether one has six different coefficients which couple the variables as shown in the following figure.
The cross-couplings, shown as dashed lines, can be eliminated by introducing new coordinates $`y_i,y_i^{\prime }`$. After that, a sequence of transformations similar to those in Section 2 brings $`\rho _h^{1/2}`$ (and thus $`\rho _h`$) into diagonal form. Some of the details are given in Appendix A. The final result is that $`\rho _h`$ has also the form (11) where now
$$\mathcal{H}=\sum _{j=1}^2\epsilon _j\beta _j^{\dagger }\beta _j$$
(18)
describes two harmonic oscillators with energies $`\epsilon _1`$ and $`\epsilon _2`$. Thus one obtains a simple generalization of the case $`L=2`$. Also the variation of $`\epsilon _1`$ with $`k`$ is very similar to that of $`\epsilon `$ in Section 2. This is shown in Fig.3, where both quantities are plotted. In particular, one finds that they coincide in the limit $`k\to 0`$. The ratio $`\epsilon _2/\epsilon _1`$ equals $`3`$ for small $`k`$, drops to a minimum of $`2.866`$ for $`k=0.34`$ and then increases continuously, because $`\epsilon _2`$, in contrast to $`\epsilon _1`$, stays finite as $`k\to 1`$. The shape of the spectrum, which depends on the ratio $`\epsilon _2/\epsilon _1`$, will be discussed in the next section.
At this point one can already conjecture that the structure of $`\rho _h`$ remains the same also for larger $`L`$. A direct computation as above does not seem feasible, though. In the limit of large $`L`$, however, a different approach is possible. As in one first relates $`\rho _h`$ to the partition function of a two-dimensional classical system, which is a massive Gaussian model in our case, in the form of an infinite strip of width $`L`$ with a perpendicular cut. This connection is discussed in more detail in Appendix B. One then expresses the partition function as the product of four corner transfer matrices. In the case where $`L`$ is much larger than the correlation length, one can use the thermodynamic limit of these CTMs and finds for $`\rho _h`$ the form (11) with an operator $`\mathcal{H}`$ which is very similar to $`H`$ in (1). The coefficients, however, are multiplied by additional site-dependent factors which increase linearly along the chain and reflect the corner geometry. Up to a prefactor it is the operator given in (32) in Appendix B and its diagonalization amounts to finding the normal modes of the corresponding vibrational problem. From the results in one obtains
$$\mathcal{H}=\sum _{j\ge 1}(2j-1)\epsilon \beta _j^{\dagger }\beta _j$$
(19)
with
$$\epsilon =\pi \frac{I(k^{\prime })}{I(k)}$$
(20)
where $`I(k)`$ is the complete elliptic integral of the first kind and $`k^{\prime }=\sqrt{1-k^2}`$. Therefore $`\mathcal{H}`$ describes an infinite set of harmonic oscillators with energies $`\epsilon _j=(2j-1)\epsilon `$ and is a straightforward extension of the results for small $`L`$.
The parameter $`\epsilon \equiv \epsilon _1`$ is also shown in Fig.3. For $`k\to 0`$, it has exactly the same expansion as $`\epsilon `$ for $`L=2`$ and $`\epsilon _1`$ for $`L=4`$. For $`k\to 1`$, it vanishes only logarithmically, i.e. more slowly than the quantities for finite $`L`$.
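Equation (20) is straightforward to evaluate; a minimal sketch (note that scipy's `ellipk` takes the parameter $`m=k^2`$ as argument):

```python
import numpy as np
from scipy.special import ellipk        # complete elliptic integral, argument m = k^2

def eps_thermo(k):
    """epsilon = pi I(k')/I(k) of Eq. (20), with k' = sqrt(1 - k^2)."""
    return np.pi*ellipk(1.0 - k**2)/ellipk(k**2)

print(eps_thermo(0.5))                  # ~ 4.019, the value quoted in Section 4
```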
One should note that the results (19),(20) are formally the same as for the transverse Ising chain in the disordered phase . The only difference is that there the operators $`\beta ,\beta ^{\dagger }`$ are fermionic (so that $`\beta ^{\dagger }\beta =0,1`$), whereas here they are bosonic. Such similarities can also be observed in the row transfer matrices of the Gaussian and the Ising model, if one uses the corresponding parametrizations . The consequences for the spectrum of $`\rho _h`$ are discussed below.
## 4 Spectra and Numerics
In the following, we show the density-matrix spectra for half-chains and discuss some numerical aspects. In the figures, the eigenvalues $`w_n`$ of $`\rho _h`$ are ordered according to magnitude and plotted on a semilogarithmic scale.
Figure 4 shows spectra for $`L=4`$ and several values of $`k`$. These results were obtained by calculating the two energies $`\epsilon _1,\epsilon _2`$ numerically from the formulae in Appendix A. Apart from the rapid decrease, one notes a clear ladder structure for the smallest three $`k`$’s. It results from the relation $`\epsilon _2\approx 3\epsilon _1`$ which leads to the approximate degeneracies $`(1,1,1,2,2,2,3)`$ for the first seven levels. The steps for $`k=0.3`$ are less perfect, since $`\epsilon _2`$ deviates more from $`3\epsilon _1`$ in this case. For the two largest $`k`$’s, $`\epsilon _2\approx 4\epsilon _1`$ and $`\epsilon _2\approx 6\epsilon _1`$, so that the first step appears at these levels and the spectra look more stretched out.
It is interesting to see how these results are recovered in a numerical treatment using a truncated Hilbert space. If one works with the eigenstates of $`\rho _1`$, a small number (5–7) is sufficient for not too large $`k`$. For example, if $`k=0.5`$ and one chooses the same $`r`$ states (with some average $`\epsilon `$-value) for all four sites, the error in the ground-state energy $`E_0/L`$ is of the order $`10^{-r}`$. The spectra which then result are shown in Fig.5 for three values of $`r`$. The first $`w_n`$ are always quite accurate, but there are characteristic differences for the following ones, which are connected with the number of steps, i.e. with the degeneracies. One can see that if $`r`$ states are kept, the pattern is correct for the first $`r`$ levels (counted from the top). At the next level, the state with energy $`r\epsilon _1`$ is missing and the corresponding step is absent. Thus there is a certain correspondence between the states in the local basis and in the density matrix. For smaller $`w_n`$, however, the situation is less clear, and the spectrum finally becomes irregular. The tails of the approximate spectra always lie below the exact one.
In order to obtain results also for $`L>4`$, we have carried out DMRG calculations, using $`7`$ states at each site, with an $`\epsilon `$ corresponding to $`L=30`$. With $`m=7`$ kept states per block, the error in $`E_0/L`$ was about $`3\times 10^{-7}`$ for $`k=0.5`$. Fig.6 shows the resulting spectra for $`L=6`$ and $`L=14`$, together with the thermodynamic limit according to (19),(20). One notes that the spectra for the two $`L`$’s are similar, though not identical. Compared to $`L=4`$, the degeneracies have changed to $`(1,1,1,2,2,3,4)`$. The latter two result from a third energy $`\epsilon _3\approx 5\epsilon _1`$, which first appears for $`L=6`$. This shows that, indeed, the number of oscillators in $`\rho _h`$ is equal to the size $`L/2`$ of the half-chain. One also sees that for $`L=14`$ the first two steps have become perfect, so that $`\epsilon _2=3\epsilon _1`$ as for the infinite system. Up to some small deviations, this also holds for the next two steps. Only for the remaining levels $`8`$ and $`9`$, the degeneracies are not correct. This is the same effect as found above for $`L=4`$.
For $`L=14`$, the $`\epsilon _j`$ are also numerically very close to the large-$`L`$ limit. For example, $`\epsilon _1`$ agrees with the exact result $`4.0189`$ up to three decimal places. This can be understood from the short correlation length $`\xi \approx 3`$ for $`k=0.5`$ which makes size effects small. Finally, we want to mention that, in the thermodynamic limit, the multiplicities are just one-half of those found in the fermionic case for the ordered phase where $`\epsilon _j=2\epsilon j`$ . This is because the number $`P_j`$ of partitions without repetition is the same as that of the odd integers with repetition, $`P_j=P_{2j-1}^{\prime }`$. Therefore the degeneracies for the bosonic case are not as large as one might expect at first.
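The multiplicities quoted here follow directly from the form (19) of $`\mathcal{H}`$: the degeneracy of the level $`m\epsilon `$ is the number of partitions of $`m`$ into odd parts. A short counting sketch (our own illustration):

```python
def degeneracies(mmax):
    """Number of partitions of m into odd parts, for m = 0 ... mmax."""
    d = [1] + [0]*mmax
    for part in range(1, mmax + 1, 2):      # allowed parts 1, 3, 5, ... with repetition
        for m in range(part, mmax + 1):
            d[m] += d[m - part]
    return d

print(degeneracies(6))                      # [1, 1, 1, 2, 2, 3, 4]
```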
## 5 Conclusion
We have investigated a bosonic system, where the ground-state density matrices can be determined explicitly in various cases. It turned out that they are exponentials of oscillator Hamiltonians, so that all results are quite transparent. The spectra have exponential character and the eigenfunctions are oscillator states. For the single-site density matrix, these states are related to those of the chain oscillators by squeezing. For the half-chain density matrix, they are connected with certain normal modes concentrated near the middle of the system. The thermodynamic limit was obtained in the same way as for the integrable spin chains treated previously, and the spectra are very similar to those found there. By counting the degeneracies, one would arrive at formulae as given in .
Taking all this together, the chain treated here may serve as a standard example where one can see the features of the density matrices in detail. In this context, it would still be interesting to determine the half-chain density matrix for arbitrary sizes, in particular at the critical point, where the vibrational spectrum becomes acoustic. This case has already been studied by DMRG but, as for the critical spin models, the density-matrix spectra have yet to be explained. Another question is whether the model of coupled oscillators, for which the ground state is known explicitly, could be used to study density matrices in higher dimensions. For the DMRG method, it would be quite important to know if the spectral properties change in this case.
Acknowledgements
We thank M. Kaulke, E. Jeckelmann, G. Babudjan, A. Pelster and H. Kleinert for discussions. M.C. Chung acknowledges the support of Deutscher Akademischer Austauschdienst (DAAD).
## Appendices
## Appendix A Four Oscillators
In order to diagonalize $`\rho _h^{1/2}`$ in (17) one proceeds as follows. First, new coordinates are introduced by a rotation $`(x_1,x_2)(y_1,y_2)`$ with angle $`\phi `$ and analogously for the primed quantities. This leads to new quadratic forms in the exponent, with coefficients $`\widehat{a}_{ij}`$ and $`\widehat{b}_{ij}`$. Choosing $`\mathrm{tan}2\phi =2b_{12}/(b_{11}b_{22})`$, the cross-term $`\widehat{b}_{12}`$ becomes zero. One then considers the factors
$$\mathrm{exp}(-\frac{1}{2}\widehat{a}_{ii}y_i^2)\mathrm{exp}(-\widehat{b}_{ii}y_iy_i^{\prime })\mathrm{exp}(-\frac{1}{2}\widehat{a}_{ii}y_i^{\prime 2}),i=1,2$$
(21)
which contain only $`(y_i,y_i^{\prime })`$. These can be transformed as in Section 2 and one obtains exponentials of harmonic oscillators with energies
$$\nu _i=2\mathrm{sinh}^{-1}(\mathrm{\Omega }_i/2)$$
(22)
where
$$\mathrm{\Omega }_i/2=\sqrt{(\widehat{a}_{ii}+\widehat{b}_{ii})/(2\widehat{b}_{ii})}$$
(23)
In terms of the new coordinates $`z_i`$ one then has
$$\rho _h^{1/2}=C\mathrm{exp}(-\mu z_1z_2)\mathrm{exp}\left(-\sum _i\left(-\frac{1}{2}\frac{\partial ^2}{\partial z_i^2}+\frac{1}{2}\nu _i^2z_i^2\right)\right)\mathrm{exp}(-\mu z_1z_2)$$
(24)
Here $`z_i=y_i/\lambda _i,\mu =\widehat{a}_{12}\lambda _1\lambda _2`$ and the $`\lambda _i`$ are given by
$$\lambda _i=\left(\frac{\nu _i}{\widehat{b}_{ii}\mathrm{\Omega }_i}\right)^{1/2}\left(1+\frac{\mathrm{\Omega }_i^2}{4}\right)^{1/4}$$
(25)
In the final step, one expresses (24) in terms of bosonic operators $`\alpha _i,\alpha _i^{\dagger }`$ and considers Heisenberg-like operators $`\rho _h^{1/2}\alpha _i\rho _h^{-1/2}`$ which are found to be linear combinations of the $`\alpha _i,\alpha _i^{\dagger }`$. Therefore, a transformation as in the analogous fermionic case
$$\beta _j=\sum _i(g_{ji}\alpha _i+h_{ji}\alpha _i^{\dagger })$$
(26)
brings $`\rho _h^{1/2}`$ into the form (11),(18) with energies $`\epsilon _j/2`$. These energies follow from a simple quadratic equation, namely
$$\mathrm{cosh}\frac{\epsilon _j}{2}=\frac{1}{2}(c_1+c_2)\pm \sqrt{\frac{1}{4}(c_1-c_2)^2+4\rho ^2s_1s_2}$$
(27)
where $`c_i=\mathrm{cosh}\nu _i`$, $`s_i=\mathrm{sinh}\nu _i`$ and $`\rho =\mu /(2\sqrt{\nu _1\nu _2})`$.
These quantities have to be evaluated, starting from the initial constants $`a_{ij}`$ and $`b_{ij}`$, which are simple analytic expressions involving the four eigenfrequencies of the chain. It turns out that, for small values of $`k`$, the $`\rho `$-term in (27) is unimportant, which leads to $`\epsilon _j\approx 2\nu _j`$ and $`\epsilon _2\approx 3\epsilon _1`$. The ratio $`\epsilon _2/\epsilon _1`$ thus has the same value as in the thermodynamic limit. Moreover, $`\epsilon _1`$ has the same asymptotic form, $`\epsilon _1\approx 2\mathrm{ln}(4/k)`$, as for $`L=2`$ and $`L\to \mathrm{}`$. This can be attributed to the short correlation length which suppresses size effects in this limit.
## Appendix B Relation to the Gaussian Model
The Hamiltonian $`H`$ in (1) has a close relation to the transfer matrix of a two-dimensional Gaussian model (GM). The connection is the same as between the transverse Ising chain and the two-dimensional Ising model . Consider a lattice with variables $`x`$ $`(-\mathrm{}<x<\mathrm{})`$ at each site, a nearest-neighbour coupling energy $`\frac{1}{2}K(x-x^{\prime })^2`$ and an on-site energy $`\frac{1}{2}\mathrm{\Delta }x^2`$, all in units of $`k_BT`$. If the lattice is oriented diagonally, the appropriate transfer matrix $`T`$ involves the piece shown in the figure below.
One can then verify by a simple direct calculation (using two interpenetrating lattices) that, with periodic boundary conditions,
$$[H,T]=0$$
(28)
provided that $`k=K^2`$ and $`\omega _0^2=\mathrm{\Delta }(\mathrm{\Delta }+4K)`$. In this case, $`T`$ and $`H`$ have common eigenfunctions and $`\mathrm{\Psi }`$ in (2) gives the maximal eigenvalue for $`T`$. This allows one to obtain $`\mathrm{\Psi }`$ and also $`\rho _h`$ from the partition function of a two-dimensional system . If the GM has open boundaries, one has to modify $`H`$ at the end, so as to preserve (28). However, for a system with $`L\gg \xi `$, where $`\xi `$ is the correlation length given by $`\xi =2/\mathrm{ln}(1/k)`$, this effect is not important and can be neglected.
An alternative approach is to consider a Gaussian model with anisotropic couplings for periodic boundary conditions, to show that the $`T`$'s for different anisotropies commute and to realize that a proper derivative leads to $`H`$ . To do this, one uses an elliptic parametrization, so that the two couplings are, for example,
$$K_1=i/\mathrm{sn}(iu,k);K_2=-ik\mathrm{sn}(iu,k)$$
(29)
with the Jacobi function $`\mathrm{sn}`$ of modulus $`k`$. This parameter also determines the on-site energy $`\mathrm{\Delta }`$ and thus the distance to the critical point $`\mathrm{\Delta }=0`$, as well as the correlation length. The parameter $`u`$, on the other hand, specifies the ratio $`K_1/K_2`$. It varies between $`0`$ and $`I(k^{\prime })`$, where $`I`$ is the complete elliptic integral of the first kind and $`k^{\prime }=\sqrt{1-k^2}`$. The isotropic case corresponds to $`u=I(k^{\prime })/2`$. (Our notation differs slightly from that in . We have interchanged $`k\leftrightarrow k^{\prime }`$, written $`u`$ instead of $`\lambda \theta `$, used $`x=\sqrt{\lambda }\varphi `$ for the Gaussian variables and have set $`\alpha =1`$.) The derivative $`(\partial \mathrm{ln}T/\partial u)`$ then leads again to (1) with $`\omega _0=(1-k)`$, which is the reason for choosing this parametrization in $`H`$.
As discussed in , the density matrix $`\rho _h`$ for half of the system is, for $`L\gg \xi `$ and up to a prefactor,
$$\rho _h=ABCD$$
(30)
where $`A,B,C,D`$ are the corner transfer matrices for the four infinite quadrants of the two-dimensional system. Due to the integrability of the Gaussian model, i.e. the Yang-Baxter equations, these have the exponential form
$$A=e^{-u\mathcal{H}_{\mathrm{CTM}}}$$
(31)
and similarly for $`B,C,D`$, with $`\mathcal{H}_{\mathrm{CTM}}`$ given by
$$\mathcal{H}_{\mathrm{CTM}}=\sum _{n\ge 1}\left\{-\frac{1}{2}(2n-1)\frac{\partial ^2}{\partial x_n^2}+\frac{1}{2}(2n-1)(1-k)^2x_n^2+\frac{1}{2}2nk(x_{n+1}-x_n)^2\right\}$$
(32)
This operator was studied in in connection with the Hamiltonian limit $`u\to 0`$ of $`A`$, where one can determine its form simply by inspection. It is associated with a corner of Ramond type, i.e. without a site at the tip. In terms of vibrations, it describes a system of coupled oscillators, where the spring constants and inverse masses increase along the chain. It can be diagonalized with the help of Carlitz polynomials and then becomes the sum of harmonic oscillators with eigenvalues $`(2j-1)\pi /2I(k)`$. For $`\rho _h`$ one needs $`ABCD`$, or $`A^4`$ if one has an isotropic model. In either case this gives a factor $`2I(k^{\prime })`$, so that the energy becomes $`\epsilon _j=(2j-1)\pi I(k^{\prime })/I(k)`$ and one arrives at the result (19),(20).
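As a numerical cross-check of (32) and of (19),(20), one can truncate $`\mathcal{H}_{\mathrm{CTM}}`$ at a finite number of sites, solve the corresponding vibrational problem and multiply the mode frequencies by the factor $`2I(k^{\prime })`$ discussed above. The following sketch is our own; the truncation size $`N`$ is an assumption and has to be large compared to the correlation length.

```python
import numpy as np
from scipy.special import ellipk                 # argument is m = k^2

def ctm_mode_frequencies(k, N=500, nmodes=4):
    """Lowest normal-mode frequencies of H_CTM in Eq. (32), truncated at N sites."""
    n = np.arange(1, N + 1)
    g = 2.0*n - 1.0                              # site-dependent inverse-mass factors
    V = np.diag(g*(1.0 - k)**2)                  # on-site part
    for i in range(N - 1):                       # springs of strength 2nk
        c = 2.0*(i + 1)*k
        V[i, i] += c; V[i+1, i+1] += c
        V[i, i+1] -= c; V[i+1, i] -= c
    Gh = np.diag(np.sqrt(g))
    return np.sqrt(np.linalg.eigvalsh(Gh @ V @ Gh))[:nmodes]

k = 0.5
print(2.0*ellipk(1.0 - k**2)*ctm_mode_frequencies(k))    # 2 I(k') times omega_j
print((2.0*np.arange(1, 5) - 1.0)*np.pi*ellipk(1.0 - k**2)/ellipk(k**2))  # eps_j
```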
# Theory and Satellite Experiment for Critical Exponent $`\alpha `$ of $`\lambda `$-Transition in Superfluid Helium
## Abstract
On the basis of the recent seven-loop perturbation expansion for $`\nu ^{-1}=3/(2-\alpha )`$ we perform a careful reinvestigation of the critical exponent $`\alpha `$ governing the power behavior $`|T_c-T|^{-\alpha }`$ of the specific heat of superfluid helium near the phase transition. With the help of variational strong-coupling theory, we find $`\alpha =-0.01126\pm 0.0010`$, in very good agreement with the space shuttle experimental value $`\alpha =-0.01056\pm 0.00038`$.
1. The critical exponent $`\alpha `$ characterizing the power behavior $`|T_c-T|^{-\alpha }`$ of the specific heat of superfluid helium near the transition temperature $`T_c`$ is presently the best-measured critical exponent of all. A microgravity experiment in the Space Shuttle in October 1992 yielded a value of amazing precision
$$\alpha ^{\mathrm{ss}}=-0.01056\pm 0.00038.$$
(1)
This represents a considerable change and improvement of the experimental number found a long time ago on earth by G. Ahlers :
$$\alpha =-0.026\pm 0.004,$$
(2)
in which the sharp peak of the specific heat was broadened to $`10^{-6}`$K by the tiny pressure difference between top and bottom of the sample. In space, the temperature could be brought to within $`10^{-8}`$K of $`T_c`$ without seeing this broadening.
The exponent $`\alpha `$ is extremely sensitive to the precise value of the critical exponent $`\nu `$ which determines the growth of the coherence length when approaching the critical temperature, $`\xi \propto |T-T_c|^{-\nu }`$. Since $`\nu `$ lies very close to $`2/3`$, and $`\alpha `$ is related to $`\nu `$ by the scaling relation $`\alpha =2-3\nu `$, a tiny change of $`\nu `$ produces a large relative change of $`\alpha `$. Ahlers’ value was for many years an embarrassment to quantum field theorists who never could find $`\alpha `$ quite as negative — the field-theoretic $`\nu `$-value usually came out smaller than $`\nu _{\mathrm{Ahl}}=0.6753\pm 0.0013`$. The space shuttle measurement was therefore extremely welcome since it comes much closer to previous theoretical values. In fact, it turned out to agree extremely well with the most recent theoretical determination of $`\alpha `$ by strong-coupling perturbation theory based on the recent seven-loop power series expansions of $`\nu `$ , which gave
$$\alpha ^{\mathrm{sc}}=-0.0129\pm 0.0006.$$
(3)
The purpose of this Letter is to present yet another resummation of the perturbation expansion for $`\nu ^{-1}`$ and for $`\alpha =2-3\nu `$ by variational perturbation theory, applied in a different way than in . Since it is a priori unclear which of the two results should be more accurate, we combine them to the slightly less negative average value with a larger error
$$\alpha ^{\mathrm{sc}}=-0.01126\pm 0.0010.$$
(4)
Before entering the more technical part of the paper, a few comments are necessary on the reliability of error estimates for any theoretical result of this kind. They can certainly be trusted no more than the experimental numbers. Great care went into the analysis of Ahlers‘ data . Still, his final result (2) does not accommodate the space shuttle value (1). The same surprise may happen to theoretical results and their error limits in papers on resummation of divergent perturbation expansions, since there exists so far no safe way of determining the errors. The expansions in powers of the coupling constant $`g`$ are strongly divergent, and one knows accurately only the first seven coefficients, plus the leading growth behavior for large orders $`k`$ like $`\gamma (-a)^kk!k^b`$. The parameter $`b`$ is determined by the number of zero modes in a solution to a classical field equation, $`a`$ is the inverse energy of this solution, and $`\gamma `$ the entropy of its small oscillations.
The shortness of the available expansions and their divergence make estimates of the error range of the result a rather subjective procedure. All publications resumming critical exponents such as $`\alpha `$ calculate some sequences of $`N`$th-order resummed approximations $`\alpha _N`$, and estimate an error range from the way these tend to their limiting value. While these estimates may be statistically significant, there are unknown systematic errors. Otherwise one should be able to take the expansion for any function $`\stackrel{~}{f}(g)\equiv f(\alpha (g))`$ and find a limiting number $`f(\alpha )`$ which lies in the corresponding range of values. This is unfortunately not true in general. Such reexpansions can approach their limiting values in many different ways, and it is not clear which yields the most reliable result. One must therefore seek as much additional information on the series as possible.
One such additional information becomes available by resumming the expansions in powers of the bare coupling constant $`g_0`$ rather than the renormalized one $`g`$. The reason is that any function of the bare coupling constant $`f(g_0)`$ which has a finite critical limit approaches this limit with a nonleading inverse power $`g_0^{-\omega }`$, where $`\omega `$ is called the critical exponent of approach to scaling, whose size is known to be about $`0.8`$ for superfluid helium. Any resummation method which naturally incorporates this power behavior should converge faster than those which ignore it. This incorporation is precisely the virtue of variational perturbation theory, which we have therefore chosen for the resummation of $`\alpha `$.
For a second additional information we take advantage of our theoretical knowledge on the general form of the large-order behavior of the expansion coefficients
$`\gamma (-a)^kk!k^b\left(1+{\displaystyle \frac{c^{(1)}}{k}}+{\displaystyle \frac{c^{(2)}}{k^2}}+\mathrm{}\right).`$ (5)
In the previous paper we have done so by choosing the nonleading parameters $`c^{(i)}`$ to reproduce exactly the first seven known expansion coefficients of $`\alpha `$. The so-determined expression (5) then predicts all higher expansion coefficients approximately, with increasing precision for increasing orders. The extended power series has then been resummed for increasing orders $`N`$, and from the $`N`$-behavior we have found the $`\alpha `$-value (3) with quite a small error range.
As a third additional information we use the fact that we know from theory in which way the infinite-order result is approached. Thus we may fit the approximate values $`\alpha _N`$ by an appropriate expansion in $`1/N`$ and achieve in this way a more accurate estimate of the limiting value than without such an extrapolation. The error can thus be made much smaller than the distance between the last two approximations, as has been verified in many model studies of divergent series .
The strategy of this paper goes as follows: We want to use all the additional information on the expansion of the critical exponent $`\alpha `$ as above, but apply the variational resummation method in two more alternative ways. First, we reexpand the series $`\alpha (g_0)`$ in powers of a variable $`h`$ whose critical limit is no longer infinity but $`h=1`$. The closer distance to the expansion point $`h=0`$ leads us to expect a faster convergence. Second, we resum two different expansions, one for $`\alpha `$, and one for $`f(\alpha )=\nu ^{-1}=3/(2-\alpha )`$. From the difference in the resulting $`\alpha `$-values and a comparison with the earlier result (3) we obtain an estimate of the systematic errors which is specified in Eq. (4).
2. The seven-loop power series expansion for $`\nu ^{-1}`$ in powers of the unrenormalized coupling constant of O(2)-invariant $`\varphi ^4`$-theory, which lies in the universality class of superfluid helium, reads
$`\nu ^{-1}`$ $`=`$ $`2-0.4g_0+0.4681481481482289g_0^2-0.66739g_0^3+1.079261838589703g_0^4-1.91274g_0^5`$ (6)
$`+`$ $`3.644347291527398g_0^6-7.37808g_0^7+\mathrm{}.`$ (7)
By fitting the expansion coefficients with the theoretical large-order behavior (5), this series has been extended to higher orders as follows
$`\mathrm{\Delta }\nu ^{-1}`$ $`=`$ $`15.75313406543747g_0^8-35.2944g_0^9+82.6900901520064g_0^{10}-202.094g_0^{11}+514.3394395526179g_0^{12}`$ (8)
$`-`$ $`1361.42g_0^{13}+3744.242656157152g_0^{14}-10691.7g_0^{15}+\mathrm{}.`$ (9)
The renormalized coupling constant is related to the unrenormalized one by an expansion $`g=\sum _{k=1}^7a_kg_0^k`$. Its power behavior for large $`g_0`$ is determined by a series
$`s={\displaystyle \frac{d\mathrm{log}g(g_0)}{d\mathrm{log}g_0}}`$ $`=`$ $`1-g_0+{\displaystyle \frac{947g_0^2}{675}}-2.322324349407407g_0^3+4.276203609026057g_0^4`$ (10)
$`-`$ $`8.51611440473227g_0^5+18.05897631325589g_0^6+\mathrm{}.`$ (11)
A similar best fit of these by the theoretical large-order behavior extends this series by
$`\mathrm{\Delta }s`$ $`=`$ $`-40.38657228730114g_0^7+94.6453399123477g_0^8-231.3922442162566g_0^9+588.3206172579102g_0^{10}`$ (12)
$`-`$ $`1552.116358404217g_0^{11}+4242.372685080157g_0^{12}-12001.18866491822g_0^{13}+35115.23006646194g_0^{14}`$ (13)
$`-`$ $`106234.4643086436g_0^{15}+332239.2175082959g_0^{16}+\mathrm{}.`$ (14)
Scaling implies that $`g(g_0)`$ becomes a constant for $`g_0\to \mathrm{}`$, so that the power $`s`$ goes to zero in this limit. By inverting the expansion for $`s`$, we obtain an expansion for $`\nu ^{-1}`$ in powers of $`h\equiv 1-s`$ as follows:
$`\nu ^{-1}(h)`$ $`=`$ $`2-0.4h-0.093037h^2+0.000485012h^3-0.0139286h^4+0.007349h^5-0.0140478h^6+0.0159545h^7-0.029175h^8`$ (15)
$`+`$ $`0.0521537h^9-0.102226h^{10}+0.224026h^{11}-0.491045h^{12}+1.22506h^{13}-3.00608h^{14}+8.29528h^{15}-22.5967h^{16}.`$ (16)
This series has to be evaluated at $`h=1`$. For estimating the systematic errors of our resummation, we also calculate from (16) a series for $`\alpha =2-3\nu `$
$`\alpha (h)`$ $`=`$ $`0.5-0.3h-0.129778h^2-0.0395474h^3-0.0243203h^4-0.0032498h^5-0.0121091h^6`$ (17)
$`+`$ $`0.00749308h^7-0.0194876h^8+0.0320172h^9-0.0651726h^{10}+0.14422h^{11}-0.315055h^{12}`$ (18)
$`+`$ $`0.802395h^{13}-1.95455h^{14}+5.49143h^{15}-14.8771h^{16}+\mathrm{}.`$ (19)
3. In order to get a rough idea about the behavior of the reexpansions in powers of $`h`$, we plot their partial sums at $`h=1`$ in the upper row of Fig. 1.
After an initial apparent convergence, these show the typical divergence of perturbation expansions.
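For concreteness, the partial sums can be generated directly from the coefficients in (17)–(19); a minimal sketch at $`h=1`$:

```python
import numpy as np

# coefficients of alpha(h), Eqs. (17)-(19)
a = [0.5, -0.3, -0.129778, -0.0395474, -0.0243203, -0.0032498, -0.0121091,
     0.00749308, -0.0194876, 0.0320172, -0.0651726, 0.14422, -0.315055,
     0.802395, -1.95455, 5.49143, -14.8771]

print(np.cumsum(a))      # partial sums at h = 1: they hover near -0.01 ... -0.02
                         # around orders 6-8 before the divergence sets in
```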
A rough resummation is possible using Padé approximants. The results are shown in Table I. The highest Padé approximants yield
$$\alpha ^{\mathrm{Pad}}=-0.0123\pm 0.0050.$$
(20)
The error is estimated by the distance to the next lower approximation.
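Such Padé values can be reproduced approximately with standard tools; the sketch below uses scipy's `pade` helper (our illustration; the approximants of Table I may have been constructed differently):

```python
from scipy.interpolate import pade

a = [0.5, -0.3, -0.129778, -0.0395474, -0.0243203, -0.0032498, -0.0121091,
     0.00749308, -0.0194876, 0.0320172, -0.0651726, 0.14422, -0.315055,
     0.802395, -1.95455, 5.49143, -14.8771]          # alpha(h), Eqs. (17)-(19)

for m in (3, 4, 5):                 # denominator orders of near-diagonal approximants
    p, q = pade(a[:2*m + 1], m)     # [m/m]-type approximant from 2m+1 coefficients
    print(m, p(1.0)/q(1.0))         # value of alpha at h = 1
```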
4. We now resum the expansions $`\nu ^{-1}(h)`$ and $`\alpha (h)`$ by variational perturbation theory. This is applicable to divergent perturbation expansions
$$f(x)=\sum _{n=0}^{\mathrm{}}a_nx^n,$$
(21)
which behave for large $`x`$ like
$$f(x)=x^{p/q}\sum _{m=0}^{\mathrm{}}b_mx^{-2m/q}$$
(22)
It is easy to adapt our function to this general behavior. Plotting the successive truncated power series for $`\nu ^1(h)`$ against $`h`$ in Fig. 2, we see that this function will have a zero somewhere above $`h=h_0=3`$.
We therefore go over to the variable $`x`$ defined by $`h=h(x)\equiv h_0x/(h_0-1+x)`$, in terms of which $`f(x)=\nu ^{-1}(h(x))`$ behaves like (22) with $`p=0`$ and $`q=2`$, and has to be evaluated at $`x=1`$. The large-$`x`$ behavior is imposed upon the function with the expansion (21) as follows. We insert an auxiliary scale parameter $`\kappa `$ and define the truncated functions
$$f_N(x)\equiv \kappa ^p\sum _{n=0}^Na_n\left(\frac{x}{\kappa ^q}\right)^n.$$
(23)
The parameter $`\kappa `$ will be set equal to $`1`$ at the end. Then we introduce a variational parameter $`K`$ by the replacement
$$\kappa \to \sqrt{K^2+\kappa ^2-K^2}.$$
(24)
The functions $`f_N(x)`$ are so far independent of $`K`$. This is changed by expanding the square root in (24) in powers of $`\kappa ^2-K^2`$, thereby treating this difference as a quantity of order $`x`$. This transforms the terms $`\kappa ^px^n/\kappa ^{qn}`$ in (23) into polynomials of $`r\equiv (\kappa ^2-K^2)/K^2`$:
$$\kappa ^p\frac{x^n}{\kappa ^{qn}}\to K^p\frac{x^n}{K^{qn}}\left[1+\left(\genfrac{}{}{0pt}{}{(p-qn)/2}{1}\right)r+\left(\genfrac{}{}{0pt}{}{(p-qn)/2}{2}\right)r^2+\mathrm{}+\left(\genfrac{}{}{0pt}{}{(p-qn)/2}{N-n}\right)r^{N-n}\right],$$
(25)
Setting now $`\kappa =1`$, and replacing the variational parameter $`K`$ by $`v`$ defined by $`K^2\equiv x/v`$, we obtain from (23) at $`x=1`$ the variational expansions
$$f_N(v)=\sum _{n=0}^Na_nv^{(qn-p)/2}\left[1+(v-1)\right]_{N-n}^{(p-qn)/2},$$
(26)
where the symbol $`\left[1+A\right]_{N-n}^{(p-qn)/2}`$ is a short notation for the binomial expansion of $`(1+A)^{(p-qn)/2}`$ in powers of $`A`$ up to the order $`A^{N-n}`$.
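In code, the reexpansion (26) amounts to only a few lines. The sketch below (our illustration) evaluates $`f_N(v)`$ for given coefficients $`a_n`$ of the expansion in $`x`$; these coefficients follow from (15),(16) or (17)–(19) by the substitution $`h=h_0x/(h_0-1+x)`$ and are not reproduced here. The optimal $`v`$ is then located as a minimum or turning point, as described next.

```python
def gbinom(x, j):
    """Generalized binomial coefficient (x over j) for integer j >= 0."""
    out = 1.0
    for i in range(j):
        out *= (x - i)/(i + 1)
    return out

def f_N(v, a, N, p=0, q=2):
    """Variational expansion f_N(v) of Eq. (26), to be optimized in v."""
    total = 0.0
    for n in range(N + 1):
        expo = (p - q*n)/2.0                        # the exponent (p - qn)/2
        trunc = sum(gbinom(expo, j)*(v - 1.0)**j    # [1 + (v-1)] truncated
                    for j in range(N - n + 1))      # at order N - n
        total += a[n]*v**(-expo)*trunc              # v^{(qn-p)/2} = v^{-expo}
    return total
```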
The variational expansions are optimized in $`v`$ by minima for odd, and by turning points for even $`N`$, as shown in Fig. 3. The extrema are plotted as a function of the order $`N`$ in the lower row of Fig. 1. The left-hand plot shows directly the extremal values of $`\nu _N^{-1}(v)`$, the middle plot shows the $`\alpha `$-values $`\alpha _N=2-3\nu _N`$ corresponding to these. The right-hand plot, finally, shows the extremal values of $`\alpha _N(v)`$. All three sequences of approximations are fitted very well by a large-$`N`$ expansion $`c_0+c_1/N^2+c_2/N^4`$, if we omit the lowest five data points which are not yet very regular. The inverse powers $`2`$ and $`4`$ of $`N`$ in this fit are determined by starting from a more general ansatz $`c_0+c_1/N^{p_1}+c_2/N^{p_2}`$ and varying $`p_1,p_2`$ until the sum of the square deviations of the fit from the points is minimal.
The highest-order data point is taken to be the one with $`N=12`$ since, up to this order, the successive asymptotic values $`c_0`$ change monotonically by decreasing amounts. Starting with $`N=13`$, the changes increase and reverse direction. In addition, the mean square deviations of the fits increase drastically, indicating a decreasing usefulness of the extrapolated expansion coefficients in (9) and (14) for the extrapolation $`N\to \mathrm{}`$. From the parameter $`c_0`$ of the best fit for $`\alpha `$ which is indicated on top of the lower right-hand plot in Fig. 1, we find the critical exponent $`\alpha =-0.01126`$ stated in Eq. (4), where the error estimate takes into account the basic systematic errors indicated by the difference between the resummation of $`\alpha =2-3\nu `$, and of $`\nu ^{-1}`$, which by the lower middle plot in Fig. 1 yields $`\alpha =-0.01226`$. It also accommodates our earlier seven-loop strong-coupling result (3) of Ref. . The dependence on the choice of $`h_0`$ is negligible as long as the resummed series $`\nu ^{-1}(x)`$ and $`\alpha (x)`$ do not change their Borel character. Thus $`h_0=2.2`$ leads to results well within the error limits in (4).
Our number as well as many earlier results are displayed in Fig. 4. The entire subject is discussed in detail in the textbook H. Kleinert and V. Schulte-Frohlinde, Critical Exponents from Five-Loop Strong-Coupling $`\varphi ^4`$-Theory in 4-$`\epsilon `$ Dimensions, World Scientific, Singapore, 2000 (http://www.physik.fu-berlin.de/~kleinert/re.html#b8)
Acknowledgment
The author is grateful to Dr. J.A. Lipa for several interesting pieces of information on his experiment.
Note added in proof:
A recent calculation of $`\alpha `$ by an improved high-temperature expansion yields the exponent $`\alpha =-0.0150(17)`$ \[M. Campostrini, A. Pelissetto, P. Rossi, and E. Vicari, Phys. Rev. B 61, 5905 (2000)\].
# Modelling the submillimetre-to-radio flaring behaviour of 3C 273
## 1 Introduction
Submillimetre-to-radio light curves of blazars show evidence of prominent structures, or flares, apparently propagating from high to low frequencies. A decisive step in the understanding of these flares was taken by Marscher & Gear (MG85 (1985), hereafter MG85). They studied the strong 1983 outburst of 3C 273 by constructing at two epochs a quasi-simultaneous millimetre-to-infrared spectrum after subtracting a quiescent emission assumed to vary on a much longer time scale. They successfully fitted these two flaring spectra with self-absorbed synchrotron emission and showed that their temporal evolution can be understood as being due to a shock wave propagating down a relativistic jet. They identified three stages of the evolution of the shock according to the dominant cooling process of the electrons: 1) the Compton scattering loss phase, 2) the synchrotron radiation loss phase and 3) the adiabatic expansion loss phase.
Another shock model was developed by Hughes et al. (HAA85 (1985)) simultaneously to that of MG85. Their piston-driven shock model reproduces well the lower frequency flux and polarization observations of outbursts in BL Lacertae, but fails to describe the observed behaviour in the millimetre domain. A generalization of the three-stage shock model of MG85 was presented by Valtaoja et al. (VTU92 (1992)). Their model, based on observations, describes qualitatively the three stages of the MG85 model without going into the details of the physics of the shock. Finally, Qian et al. (QWB96 (1996)) proposed a burst-injection model to study the spectral evolution of superluminal radio knots. Their theoretical calculation is able to reproduce well the observed spectral evolution of the C4 knot in 3C 345 (Qian Q96 (1996)).
To constrain these shock models, we need to extract the properties of the outbursts from the observations. This step is difficult both at high and at low frequencies: at high frequencies because of the brevity of the outbursts, which last only a few days to months and thus require a very well sampled set of observations in the not easily accessible submillimetre spectral range; at radio frequencies because the outbursts very often overlap due to their longer duration, making it difficult to isolate them.
The best observational constraints for the model of MG85 were obtained by Litchfield et al. (LSR95 (1995)) for the blazar 3C 279 and by Stevens et al. (SLR95 (1995), SLR96 (1996), SRG98 (1998)) for PKS 0420$-$014, 3C 345 and 3C 273, respectively. All these studies are based on isolated outbursts. The method used consists in constructing simultaneous multi-frequency spectra for as many epochs as possible after the subtraction of a quiescent spectrum assumed to be constant with time. The subtraction of a quiescent spectrum is convenient and seems to give good results, but has only weak physical justification. In 3C 273, there was a period of nearly constant flux at millimetre frequencies lasting just more than one year in 1989–1990, which was interpreted as its quiescent state (Robson et al. RLG93 (1993)). At radio frequencies, however, no similar constant flux period was ever observed and there is no evidence that such a state exists at a significant level above the contribution of the jet’s hot spot 3C 273A (see Fig. 2 of Türler et al. 1999a ).
The different approach presented here to derive the observed properties of the outbursts has the advantage of not relying on the assumption of a quiescent emission. The idea is to decompose a set of light curves covering a large time span into a series of flares. To our knowledge, the first attempt at such a decomposition was made by Legg (L84 (1984)), who fitted a ten-year radio light curve of 3C 120 with twelve self-similar outbursts. Recently, Valtaoja et al. (VLT99 (1999)) decomposed the 22 GHz and 37 GHz radio light curves of many active galactic nuclei into several exponentially rising and decaying outbursts. What is new in our approach is that we fit the same outbursts simultaneously to twelve light curves covering more than two decades of frequency from the submillimetre to the radio domain. This adds a new dimension to the decomposition: the evolution of a flare is now a function of both time and frequency. The aim is to obtain both the spectral and temporal properties of a typical flare, from which individual flares differ only by a few parameters.
We use the light curves of 3C 273, the best observed quasar, to have as many observational constraints as possible. The flaring behaviour of 3C 273 was already the subject of several previous studies (e.g. Robson et al. RLG93 (1993); Stevens et al. SRG98 (1998)). Stevens et al. (SRG98 (1998)) obtain results for the first stage of the strong 1995 flare in very good agreement with the predictions of the MG85 shock model. The new approach presented here is however more powerful in constraining the two subsequent stages of the evolution.
We describe below two different approaches. In Sect. 3 we model the light curve of each outburst by an analytic function that can smoothly evolve with frequency, whereas in Sect. 4 we directly model a self-absorbed synchrotron spectrum that evolves with time. The first approach is easier to implement, since it allows us to begin the decomposition with a single light curve before adding the others progressively. The second approach is more physical and gives better constraints to shock models. Our results are discussed in Sect. 5 and summarized in Sect. 6.
Throughout this paper the frequency $`\nu `$ is as measured in the observer’s frame and “$`\mathrm{log}`$” refers to the decimal logarithm “$`\mathrm{log}_{10}`$”. The convention for the spectral index $`\alpha `$ is $`S_\nu \propto \nu ^{+\alpha }`$.
## 2 Observational material
This study is based on the light curves of the multi-wavelength database of 3C 273 presented by Türler et al. (1999a ). The twelve light curves we use are the five radio light curves: 5 GHz, 8.0 GHz, 15 GHz, 22 GHz and 37 GHz and the seven millimetre/submillimetre (mm/submm) light curves: 3.3 mm, 2.0 mm, 1.3 mm, 1.1 mm, 0.8 mm, 0.45 mm and 0.35 mm. At low frequency (5 to 15 GHz), we consider only the measurements of the University of Michigan Radio Astronomy Observatory (UMRAO). The observations at 22 GHz and 37 GHz are mainly from the Metsähovi Radio Observatory in Finland. The mm/submm observations are from various sources including the James Clerk Maxwell Telescope (JCMT), the Swedish-ESO Submillimetre Telescope (SEST) and the “Institut de Radio-Astronomie Millimétrique” (IRAM).
We analyse the observations from 1979.0 to 1996.6, except at low frequency where we extend the analysis up to: 1997.2 (15 GHz), 1997.5 (8.0 GHz) and 1998.0 (5 GHz), in order to include the decay of the 1995 flare. In the mm/submm range, we average repeated observations made within 3 days to avoid oversampling of the light curves at some epochs. This leaves us with a total of 4352 observational points to constrain our fits. To observations without known uncertainties, we assign the average uncertainty of the other observations at the same frequency. The light curves are treated as if all their observations were made exactly at the same frequency, i.e. small differences of the observing frequency from one measurement to the other are not taken into account. This simplification should not much affect the results, since the spectrum is rather flat ($`\alpha \stackrel{>}{}-0.5`$) in the considered submillimetre-to-radio domain (e.g. Türler et al. 1999a ).
## 3 The light-curve approach
We describe here an approach in which we minimize the number of model-dependent constraints. The light curve of each outburst at a given frequency is described by a simple analytical function. The choice of this function is purely empirical and does not rely on any physical model. The evolution with frequency of the outburst’s light curve is left as free as possible. This model has therefore many free parameters, which can adapt to a wide range of different situations.
### 3.1 Number of outbursts
One crucial parameter of the decomposition is the number of outbursts. Pushed by the wish to reproduce the small features seen in the light curves, one is tempted to add always more outbursts to the fit. In Türler et al. (1999b ), we published the results of a decomposition into nineteen outbursts using an approach which is similar to that described below. Here, we try to minimize as much as possible the number of outbursts to better constrain their spectral and temporal evolution. We end up with twelve flares, the minimum needed to describe the main features of the light curves.
The aim of the decomposition is not to reproduce the detailed structure of the light curves, but to derive the main characteristics of the outbursts. As a consequence, the $`\chi ^2`$ that we shall obtain will be statistically completely unacceptable and will have no meaning in terms of the probability that the model corresponds to what is observed. We will however refer to the obtained values of the reduced $`\chi ^2`$ (cf. Sect. 3.3), because it is the usual way to express the quality of a fit.
### 3.2 Parameterization
At a given frequency $`\nu `$, we model the light curve $`S_\nu (t)`$ of a single outburst of amplitude $`A(\nu )`$, starting at time $`t=t_0`$ and peaking at $`t=t_0+t_{\mathrm{rise}}(\nu )`$ by
$$S_\nu (t)=\frac{A(\nu )}{2}\left[1-\mathrm{cos}\left(\pi \left(\frac{t-t_0}{t_{\mathrm{rise}}(\nu )}\right)^{\rho (\nu )}\right)\right],$$
(1)
if $`t_0\le t<t_0+t_{\mathrm{rise}}(\nu )`$ and by
$$S_\nu (t)=A(\nu )\mathrm{exp}\left(-\left(\frac{t-t_0-t_{\mathrm{rise}}(\nu )}{t_{\mathrm{fall}}(\nu )}\right)^{\varphi (\nu )}\right),$$
(2)
if $`t\ge t_0+t_{\mathrm{rise}}(\nu )`$. The exponents $`\rho (\nu )`$ and $`\varphi (\nu )`$ define the shape of the light curve at frequency $`\nu `$ and $`t_{\mathrm{fall}}(\nu )`$ is the $`\mathrm{e}`$-folding decay time of the flare at frequency $`\nu `$. Different time profiles of an outburst defined by Eqs. (1) and (2) are shown in Fig. 1.
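As an illustration of Eqs. (1) and (2), a single-flare light curve can be coded as follows (a sketch; the parameters in the example call are arbitrary, not fitted values):

```python
import numpy as np

def flare(t, t0, A, t_rise, t_fall, rho, phi):
    """Single-outburst light curve S_nu(t) of Eqs. (1)-(2) at one frequency."""
    t = np.asarray(t, dtype=float)
    S = np.zeros_like(t)
    rise = (t >= t0) & (t < t0 + t_rise)
    fall = t >= t0 + t_rise
    S[rise] = 0.5*A*(1.0 - np.cos(np.pi*((t[rise] - t0)/t_rise)**rho))
    S[fall] = A*np.exp(-((t[fall] - t0 - t_rise)/t_fall)**phi)
    return S

t = np.linspace(1983.0, 1990.0, 500)                      # years
S = flare(t, t0=1983.4, A=16.0, t_rise=0.6, t_fall=1.5, rho=1.0, phi=1.0)
```

The profile is continuous at the peak by construction: the rising branch reaches $`A(\nu )`$ exactly at $`t=t_0+t_{\mathrm{rise}}(\nu )`$, where the decaying branch starts.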
Rather than constraining the outburst parameters ($`A(\nu )`$, $`t_{\mathrm{rise}}(\nu )`$, $`t_{\mathrm{fall}}(\nu )`$, $`\rho (\nu )`$ and $`\varphi (\nu )`$) at each of the twelve light curves’ frequencies, we describe their logarithm by a cubic spline which we parameterize at only four frequencies spaced by 0.75 dex and covering the 3 – 600 GHz range (see Fig. 3). This reduces the number of free parameters by a factor of three, while keeping the parameterization completely model-independent. We thus need a total of $`5\times 4`$ parameters to fully characterize the spectral and temporal evolution of an outburst, i.e. a surface in the three dimensional $`(S,\nu ,t)`$-space (cf. Fig. 4).
We impose that all individual outbursts are self-similar, in the sense that they all have the same evolution pattern, i.e. the same shape of the surface in the $`(S,\nu ,t)`$-space. What we allow to change from one outburst to the other is the normalization in flux $`S`$, frequency $`\nu `$ and time $`t`$, which changes, respectively, the amplitude of the outburst (strong or weak), the frequency at which the emission peaks (high- or low-frequency peaking) and the time scale of the evolution (long-lived or short-lived). A change in normalization corresponds to a shift of the position of the outburst’s characteristic surface in the $`(\mathrm{log}S,\mathrm{log}\nu ,\mathrm{log}t)`$-space. To define this position, we take the point of maximum flux as an arbitrary reference point on the surface. On average among all individual outbursts, this point is located at $`(\langle \mathrm{log}S\rangle ,\langle \mathrm{log}\nu \rangle ,\langle \mathrm{log}t\rangle )`$ and this average normalization defines what we call the typical outburst of 3C 273. We denote by $`\mathrm{\Delta }\mathrm{log}S`$, $`\mathrm{\Delta }\mathrm{log}\nu `$ and $`\mathrm{\Delta }\mathrm{log}t`$ the logarithmic shifts of this point with respect to the average position, i.e. $`\mathrm{\Delta }\mathrm{log}k=\mathrm{log}k-\langle \mathrm{log}k\rangle `$, $`k=S,\nu ,t`$. These $`12\times 3`$ logarithmic shifts plus the $`12`$ different start times $`t_0`$ of the flares give a total of $`48`$ parameters used to define the specificity of all outbursts.
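In this picture an individual outburst is just the typical surface translated in the $`(\mathrm{log}S,\mathrm{log}\nu ,\mathrm{log}t)`$-space; as a small sketch:

```python
def shifted_flare(S_typ, dlogS, dlognu, dlogt):
    """Apply the shifts (Delta log S, Delta log nu, Delta log t) to a
    typical-outburst surface S_typ(nu, dt), with dt = t - t0."""
    def S(nu, dt):
        return 10.0**dlogS * S_typ(nu/10.0**dlognu, dt/10.0**dlogt)
    return S
```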
The superimposed decays of the outbursts that started before 1979 are simply modelled by a hypothetical event of amplitude $`A_0(\nu )`$ at time $`t=t_0+t_{\mathrm{rise}}(\nu )=1979.0`$ and decaying with the $`\mathrm{e}`$-folding time $`t_{\mathrm{fall}}(\nu )`$ of the typical outburst at frequency $`\nu `$. The variation of the amplitude $`A_0(\nu )`$ with frequency is modelled by a cubic spline as for the five other variables, but parameterized at four slightly lower frequencies ($`\mathrm{log}(\nu /\text{GHz})=`$ 0.5, 1.0, 1.5 and 2.0), due to the fact that $`A_0(\nu )`$ is only well constrained for the radio light curves. Finally, we assume a constant contribution to the light curves due to the quiescent emission of the jet’s hot spot 3C 273A. This emission is modelled with a power law spectrum as given in Türler et al. (1999a ).
To summarize, this first parameterization uses a total of $`72`$ ($`20+48+4`$) parameters to adjust the 4352 observational points in the twelve light curves. The great number of free parameters still leaves more than four thousand degrees of freedom (d.o.f.) to the fit. The simultaneous fitting of the twelve light curves is performed by many iterative fits of small subsets of the $`72`$ parameters.
### 3.3 Results
Fig. 2 shows three representative light curves among the twelve fitted simultaneously with the outbursts parameterized as described in Sect. 3.2. The major features of the light curves are reproduced by the model with only about one outburst every 1.5 years starting simultaneously at all frequencies. The overall fit has a reduced $`\chi ^2`$ value of $`\chi _{\mathrm{red}}^2\equiv \chi ^2/\text{d.o.f.}=16.1`$. The main discrepancy between the model and the observations arises during 1984–1985, when the very different light curve features in the millimetre and radio domains cannot be correctly described by the 1983.4 flare alone.
The obtained evolution of the parameters with frequency for the typical outburst is shown in Fig. 3. The amplitude $`A(\nu )`$ of the light curve has a maximum at $`\sim 45`$ GHz. Both the rise time $`t_{\mathrm{rise}}(\nu )`$ and the $`\mathrm{e}`$-folding decay time $`t_{\mathrm{fall}}(\nu )`$ increase monotonically with wavelength. If we extrapolate the cubic spline to low frequency, it is striking to see that both $`t_{\mathrm{rise}}(\nu )`$ and $`t_{\mathrm{fall}}(\nu )`$ tend to very high values of the order of 10 years at 1 GHz, while the amplitude of the outburst would still be significant ($`A(\text{1\hspace{0.17em}GHz})\sim 1`$ Jy). Due to the lack of submillimetre observations before 1981, the amplitude $`A_0(\nu )`$ is not constrained at frequencies above $`\sim 300`$ GHz. The increase of $`A_0(\nu )`$ at these frequencies – due to the spline – is probably not real, but does not affect the fit because the corresponding decay time is short ($`t_{\mathrm{fall}}(\text{1000\hspace{0.17em}GHz})\sim 1`$ year). The two exponents $`\rho (\nu )`$ and $`\varphi (\nu )`$ which describe the shape of the outburst’s light curve are both higher at radio frequencies than in the mm/submm domain. As a consequence, the light curves at higher frequencies have a steeper rise just after the start of the outburst and a steeper decay just after the peak (see Figs. 1 and 7a).
The five parameters $`A(\nu )`$, $`t_{\mathrm{rise}}(\nu )`$, $`t_{\mathrm{fall}}(\nu )`$, $`\rho (\nu )`$ and $`\varphi (\nu )`$ define the typical outburst that can be represented in three dimensions in the $`(\mathrm{log}S,\mathrm{log}\nu ,\mathrm{log}t)`$-space as shown in Fig. 4d. The three other panels of Fig. 4 show the three Cartesian projections of this surface. The frequency and time axes cover the same logarithmic range of 4 dex, so that the dotted diagonal in Fig. 4c corresponds to $`\nu \propto t^{-1}`$. At least at low frequencies, both the maximum of the spectra and of the light curves follow this diagonal quite well. The outburst’s evolution is thus amazingly symmetric in Figs. 4a and 4b. The maximum amplitude of the typical outburst is $`\sim 16`$ Jy and is reached after $`\sim 7.5`$ months at a frequency of $`\sim 45`$ GHz. The frequency $`\nu _\mathrm{m}`$ of the spectrum’s maximum is steadily decreasing with time (Fig. 4c). The corresponding flux density $`S_\mathrm{m}`$ is first increasing with decreasing frequency $`\nu _\mathrm{m}`$ according to $`S_\mathrm{m}\propto \nu _\mathrm{m}^{-0.7}`$, whereas it decreases as $`S_\mathrm{m}\propto \nu _\mathrm{m}^{+1.0}`$ during the final decline of the outburst (Fig. 4b). This behaviour corresponds qualitatively to what is expected by shock models (e.g. MG85).
At frequencies above the spectral turnover ($`\nu \gg \nu _\mathrm{m}`$), the spectral index $`\alpha `$ is first $`-0.5`$ and steepens very slightly to $`-0.7`$ at the maximum development of the outburst. The somewhat chaotic behaviour during the final declining phase – due to the abrupt change in the parameter $`\varphi (\nu )`$ – does not enable us to define a reasonable spectral index during this last stage. At frequencies below the spectral turnover ($`\nu \ll \nu _\mathrm{m}`$), the spectral index $`\alpha `$ is smoothly steepening with time from $`1.8`$ to $`2.5`$. This is what is expected from a synchrotron source that starts inhomogeneous and progressively becomes homogeneous (e.g. Marscher M77 (1977)).
The twelve individual outbursts have different amplitudes ranging from 5 Jy up to 32 Jy for the 1983.0 flare studied by MG85. The corresponding dispersion $`\sigma `$ of the amplitude shifts $`\mathrm{\Delta }\mathrm{log}S`$ is $`\sigma =0.23`$, which is slightly smaller than the dispersion of the frequency shifts $`\mathrm{\Delta }\mathrm{log}\nu `$ ($`\sigma =0.27`$) and the time shifts $`\mathrm{\Delta }\mathrm{log}t`$ ($`\sigma =0.29`$). The amplitude shifts $`\mathrm{\Delta }\mathrm{log}S`$ are obviously not correlated with either $`\mathrm{\Delta }\mathrm{log}\nu `$ or $`\mathrm{\Delta }\mathrm{log}t`$ (Fig. 4a and b). This is confirmed by a Spearman rank-order test (Bevington B69 (1969)), which yields that the observed correlations could occur by chance with a probability of more than 60 %. On the contrary, the shifts $`\mathrm{\Delta }\mathrm{log}\nu `$ and $`\mathrm{\Delta }\mathrm{log}t`$ align well along the $`\nu \propto t^{-1}`$ line (Fig. 4c) and the Spearman’s test probability of $`<`$ 0.01 % confirms that this anti-correlation is very significant.
## 4 The three-stage approach
In the light-curve approach described above, we model analytically the light curve of an outburst at different frequencies and show that the resulting typical flare is qualitatively in agreement with what is expected from shock models in relativistic jets. It is thus of interest to derive from the data the parameters that are relevant to those models.
The shock model of MG85 and its generalization by Valtaoja et al. (VTU92 (1992)) describe the evolution of the shock by three distinct stages: 1) a rising phase, 2) a peaking phase and 3) a declining phase<sup>1</sup><sup>1</sup>1We use here the terminology introduced by Qian et al. (QWB96 (1996)), because it is purely descriptive and free of any interpretation regarding the physical origin of these stages.. The three-stage approach presented below is similar to that of Valtaoja et al. (VTU92 (1992)), in the sense that its aim is simply to qualitatively describe the observations. It contains however more parameters in order to include those which are relevant to test the physical model of MG85.
The remarks of Sect. 3.1 concerning the number of outbursts and the quoted values of the reduced $`\chi ^2`$ apply equally here.
### 4.1 Parameterization
The self-absorbed synchrotron spectrum emitted by electrons with a power-law energy distribution of the form $`N(E)\propto E^{-s}`$ can be expressed – by generalizing the homogeneous case (e.g. Pacholczyk P70 (1970); Stevens et al. SLR95 (1995)) – as
$$S_\nu =S_1\left(\frac{\nu }{\nu _1}\right)^{\alpha _{\mathrm{thick}}}\frac{1-\mathrm{exp}\left(-(\nu /\nu _1)^{\alpha _{\mathrm{thin}}-\alpha _{\mathrm{thick}}}\right)}{1-\mathrm{e}^{-1}},$$
(3)
where $`(\nu /\nu _1)^{\alpha _{\mathrm{thin}}-\alpha _{\mathrm{thick}}}`$ is equal to the optical depth $`\tau _\nu `$ at frequency $`\nu `$. $`S_1`$ and $`\nu _1`$ are respectively the flux density and the frequency corresponding to an optical depth of $`\tau _\nu =1`$. At high frequency ($`\nu \gg \nu _1`$) the medium is optically thin ($`\tau _\nu \ll 1`$) and the spectrum follows a power law of index $`\alpha _{\mathrm{thin}}=-(s-1)/2`$, whereas at low frequency ($`\nu \ll \nu _1`$) it is optically thick ($`\tau _\nu \gg 1`$) and the spectral index is $`\alpha _{\mathrm{thick}}`$. In the case of a homogeneous source, $`\alpha _{\mathrm{thick}}=+5/2`$.
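As an aside for the reader who wishes to experiment with Eq. (3), a direct numerical transcription is given below; it reproduces the two limiting power laws just described. The parameter values are purely illustrative and are not fitted quantities from this work.

```python
import numpy as np

def s_nu(nu, S1, nu1, a_thick, a_thin):
    """Self-absorbed synchrotron spectrum of Eq. (3).

    tau_nu = (nu/nu1)**(a_thin - a_thick) is the optical depth;
    (S1, nu1) mark the point where tau_nu = 1.
    """
    tau = (nu / nu1) ** (a_thin - a_thick)
    return S1 * (nu / nu1) ** a_thick * (1.0 - np.exp(-tau)) / (1.0 - np.exp(-1.0))

nu = np.logspace(-1, 4, 6)  # GHz, spanning both regimes (illustrative)
S = s_nu(nu, S1=10.0, nu1=30.0, a_thick=2.5, a_thin=-0.5)
# For nu << nu1 the spectrum rises as nu**a_thick (optically thick);
# for nu >> nu1 it falls as nu**a_thin (optically thin).
```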
The maximum $`S_\mathrm{m}\equiv S_\nu (\nu _\mathrm{m})`$ of the spectrum $`S_\nu `$ is reached at the turnover frequency $`\nu _\mathrm{m}`$, corresponding to an optical depth of $`\tau _\mathrm{m}=(\nu _\mathrm{m}/\nu _1)^{\alpha _{\mathrm{thin}}-\alpha _{\mathrm{thick}}}`$. $`\tau _\mathrm{m}`$ is obtained by differentiating Eq. (3):
$$\frac{\mathrm{d}S_\nu }{\mathrm{d}\nu }=0\quad \Rightarrow \quad \mathrm{exp}(\tau _\mathrm{m})-1=\left(1-\frac{\alpha _{\mathrm{thin}}}{\alpha _{\mathrm{thick}}}\right)\tau _\mathrm{m}.$$
(4)
By developing the exponential of Eq. (4) to the third order, we obtain a good approximation: $`\tau _\mathrm{m}=\frac{3}{2}\left(\sqrt{1-\frac{8\alpha _{\mathrm{thin}}}{3\alpha _{\mathrm{thick}}}}-1\right)`$. We can now rewrite Eq. (3) in terms of the turnover values $`\nu _\mathrm{m}`$, $`\tau _\mathrm{m}`$ and $`S_\mathrm{m}`$ as
$$S_\nu =S_\mathrm{m}\left(\frac{\nu }{\nu _\mathrm{m}}\right)^{\alpha _{\mathrm{thick}}}\frac{1-\mathrm{exp}\left(-\tau _\mathrm{m}(\nu /\nu _\mathrm{m})^{\alpha _{\mathrm{thin}}-\alpha _{\mathrm{thick}}}\right)}{1-\mathrm{e}^{-\tau _\mathrm{m}}}.$$
(5)
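As a quick consistency check – a sketch, not part of our fitting code – the third-order approximation for $`\tau _\mathrm{m}`$ can be compared with a numerical root of Eq. (4); for representative spectral indices the two agree to about one per cent.

```python
import numpy as np
from scipy.optimize import brentq

def tau_m_approx(a_thin, a_thick):
    # Third-order expansion of Eq. (4)
    return 1.5 * (np.sqrt(1.0 - 8.0 * a_thin / (3.0 * a_thick)) - 1.0)

def tau_m_exact(a_thin, a_thick):
    # Numerical root of exp(tau) - 1 = (1 - a_thin/a_thick) * tau, tau > 0
    f = lambda t: np.expm1(t) - (1.0 - a_thin / a_thick) * t
    return brentq(f, 1e-6, 30.0)

a_thin, a_thick = -0.5, 2.5           # illustrative spectral indices
print(tau_m_approx(a_thin, a_thick))  # ~0.357
print(tau_m_exact(a_thin, a_thick))   # ~0.354
```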
The evolution with time of the self-absorbed synchrotron spectrum of Eq. (5) is assumed to follow three distinct stages: 1) the rising phase for $`t-t_0<t_\mathrm{r}`$ ; 2) the peaking phase for $`t_\mathrm{r}\le t-t_0\le t_\mathrm{p}`$ and 3) the declining phase for $`t-t_0>t_\mathrm{p}`$. The subscripts “$`\mathrm{r}`$” and “$`\mathrm{p}`$” refer to the end of the rising phase and the end of the peaking phase, respectively. We assume that during each stage $`i`$ ($`i=1,2,3`$) both the turnover frequency $`\nu _\mathrm{m}(t)`$ and the turnover flux $`S_\mathrm{m}(t)`$ evolve with time as a power law, but with exponents that differ during the three stages:
$$\nu _\mathrm{m}(t)\propto t^{\beta _i}\quad \text{and}\quad S_\mathrm{m}(t)\propto t^{\gamma _i}\quad \Rightarrow \quad S_\mathrm{m}\propto \nu _\mathrm{m}^{\gamma _i/\beta _i}.$$
(6)
We thus need ten parameters: $`t_\mathrm{r}`$, $`t_\mathrm{p}`$, $`\nu _\mathrm{m}(t_\mathrm{r})`$, $`S_\mathrm{m}(t_\mathrm{r})`$, $`\beta _1`$, $`\beta _2`$, $`\beta _3`$, $`\gamma _1`$, $`\gamma _2`$ and $`\gamma _3`$, to describe the evolution of the spectral turnover in the three-dimensional $`(S,\nu ,t)`$-space.
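The piecewise power laws of Eq. (6), with continuity imposed at $`t_\mathrm{r}`$ and $`t_\mathrm{p}`$, can be sketched compactly as follows; all argument values are placeholders rather than the fitted values of Table 1.

```python
def turnover(t, t_r, t_p, nu_r, S_r, beta, gamma):
    """Turnover point (nu_m, S_m) at time t after onset, following Eq. (6).

    beta, gamma : (beta1, beta2, beta3) and (gamma1, gamma2, gamma3)
    nu_r, S_r   : turnover values at the end of the rising phase, t = t_r
    """
    if t < t_r:          # stage 1: rising phase
        return nu_r * (t / t_r) ** beta[0], S_r * (t / t_r) ** gamma[0]
    if t <= t_p:         # stage 2: peaking phase
        return nu_r * (t / t_r) ** beta[1], S_r * (t / t_r) ** gamma[1]
    # stage 3: declining phase, anchored at the end of the peaking phase
    nu_p = nu_r * (t_p / t_r) ** beta[1]
    S_p = S_r * (t_p / t_r) ** gamma[1]
    return nu_p * (t / t_p) ** beta[2], S_p * (t / t_p) ** gamma[2]
```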
The model of MG85 predicts that both the optically thin $`\alpha _{\mathrm{thin}}`$ and thick $`\alpha _{\mathrm{thick}}`$ spectral indices should be flatter during the declining phase than during the rising and peaking phases (see Fig. 3 of Marscher et al. MGT92 (1992)). To test whether the spectrum actually changes from the rising phase to the declining phase, we allow the two spectral indices $`\alpha _{\mathrm{thin}}`$ and $`\alpha _{\mathrm{thick}}`$ to have different values during these two stages. The transition during the intermediate peaking phase from the values in the rising phase ($`\alpha _{\mathrm{thin}}(t_\mathrm{r})`$ and $`\alpha _{\mathrm{thick}}(t_\mathrm{r})`$) to the values in the declining phase ($`\alpha _{\mathrm{thin}}(t_\mathrm{p})`$ and $`\alpha _{\mathrm{thick}}(t_\mathrm{p})`$) is assumed to be linear with the logarithm of time $`\mathrm{log}(t)`$. This adds the four parameters $`\alpha _{\mathrm{thin}}(t_\mathrm{r})`$, $`\alpha _{\mathrm{thin}}(t_\mathrm{p})`$, $`\alpha _{\mathrm{thick}}(t_\mathrm{r})`$ and $`\alpha _{\mathrm{thick}}(t_\mathrm{p})`$ to the model, for a total of fourteen parameters to fully define the evolution of a typical flare in the $`(S,\nu ,t)`$-space, instead of the twenty parameters used in the first approach (Sect. 3.2).
The specificity of each outburst is modelled with a total of $`12\times 4`$ parameters, exactly as described in Sect. 3.2 for the light-curve approach. We do not model again the superimposed decays of the outbursts that started before 1979, but simply use the same exponential decay as obtained by the first approach (Sect. 3.2). The constant contribution of the jet’s hot spot 3C 273A is also considered here. The total number of parameters in this second parameterization is slightly smaller than for the first one: 62 ($`12\times 4+14`$) instead of 72.
### 4.2 Results
To allow a better comparison with the results of the first approach (Sect. 3.2), we show in Fig. 5 the same light curves as in Fig. 2. The reduced $`\chi ^2`$ of the overall fit is now $`\chi _{\mathrm{red}}^2=17.8`$. The higher-frequency light curves are relatively better described here than with the first approach (compare Figs. 2 and 5). The start times $`t_0`$ of the outbursts are very similar to those obtained by the first approach, except for the fourth flare, which now starts much later, at $`t_0`$ = 1984.1 instead of 1983.4. This later $`t_0`$ seems to be in better agreement with the observations, but the behaviour of 3C 273 during 1984–1985 is still poorly described.
The obtained values of the parameters are given in Table 1. They correspond to the spectral and temporal evolution of the typical outburst shown in Fig. 6. While the tracks followed by the maximum of the spectra and of the light curves are similar to those obtained by the first approach (Fig. 4), the spectral evolution of the outburst derived here is quite different. We obtain that the spectral turnover flux $`S_\mathrm{m}`$ increases during the first 50 days ($`t_\mathrm{r}=0.14`$ year) with decreasing turnover frequency $`\nu _\mathrm{m}`$ as $`S_\mathrm{m}\propto \nu _\mathrm{m}^{-1.0}`$. The subsequent very flat peaking phase is found to be relatively long, since it lasts 1.5 years and spans nearly one order of magnitude in frequency, from 120 GHz to 13.8 GHz. The final declining phase is quite abrupt, with a relation between $`S_\mathrm{m}`$ and $`\nu _\mathrm{m}`$ of $`S_\mathrm{m}\propto \nu _\mathrm{m}^{+1.1}`$. The optically thin spectral index $`\alpha _{\mathrm{thin}}`$ is found to be clearly steeper in the rising phase than in the declining phase. It flattens by $`\mathrm{\Delta }\alpha _{\mathrm{thin}}=+0.6`$ during the peaking phase, from $`\alpha _{\mathrm{thin}}(t_\mathrm{r})=-1.1`$ to $`\alpha _{\mathrm{thin}}(t_\mathrm{p})=-0.5`$. The optically thick spectral index $`\alpha _{\mathrm{thick}}`$ is found to be more nearly constant, with a slight tendency to steepen with time. It has a mean value of $`\alpha _{\mathrm{thick}}=+1.65`$ and steepens by $`\mathrm{\Delta }\alpha _{\mathrm{thick}}=+0.2`$ during the peaking phase.
For each outburst we obtain logarithmic shifts in amplitude $`\mathrm{\Delta }\mathrm{log}S`$, frequency $`\mathrm{\Delta }\mathrm{log}\nu `$ and time $`\mathrm{\Delta }\mathrm{log}t`$, which are similar to those obtained by the first approach (Sect. 3). The dispersions $`\sigma `$ of $`\mathrm{\Delta }\mathrm{log}S`$, $`\mathrm{\Delta }\mathrm{log}\nu `$ and $`\mathrm{\Delta }\mathrm{log}t`$ are $`0.20`$, $`0.34`$ and $`0.27`$, respectively. A possible correlation of $`\mathrm{\Delta }\mathrm{log}S`$ with either $`\mathrm{\Delta }\mathrm{log}\nu `$ or $`\mathrm{\Delta }\mathrm{log}t`$ is again not significant: the Spearman’s test probability that stronger correlations could occur by chance is $`>`$ 40 %. On the contrary, the strong correlation observed between $`\mathrm{\Delta }\mathrm{log}\nu `$ and $`\mathrm{\Delta }\mathrm{log}t`$ is most probably real (Spearman’s test probability $`<10^{-6}`$).
## 5 Discussion
The two approaches presented above give comparable results, but differ concerning the existence of a nearly constant peaking phase and the shapes of the spectra (compare Figs. 4b and 6b). The origin of these differences can be understood by comparing the light curve profiles obtained by the two approaches, which are shown in Fig. 7. It is clear that the first approach, allowing only a rising phase and a declining phase, cannot mimic the three-stage profiles of Fig. 7b resulting from the second approach (Sect. 4.1). On the other hand, only the light-curve approach is able to produce a rounded peaking phase as seen in Fig. 7a. In a forthcoming paper (Türler et al. in preparation), we will present the results of a hybrid approach, which incorporates the advantages of both approaches in order to better define the properties of the typical outburst.
### 5.1 Do the outbursts correspond to VLBI components?
The decomposition of the light curves into distinct outbursts was motivated by the observation with very long baseline interferometry (VLBI) of distinct components in the jet structure of 3C 273. Since the detection of a new VLBI component (Krichbaum et al. KBK90 (1990)) associated with the strong optical/infrared flare of 1988 in 3C 273 (Courvoisier et al. CRB88 (1988)), there is good evidence that outbursts are related to the ejection of new VLBI knots. To test whether all outbursts are actually associated with superluminal components, we compare in Table 2 the start time $`t_0`$ of an outburst – as obtained by the three-stage approach – with the ejection time “$`t_0`$ (knot)” of a new VLBI knot as given by Abraham et al. (ACZ96 (1996)) and Zensus et al. (ZUC90 (1990)). For each of the first eight outbursts, we can identify one or two possibly associated VLBI components.
To test this relationship further, we compare the flux densities “$`F_{\mathrm{obs}}`$ (knot)” of the VLBI components observed at epoch $`t=1991.15`$ and at a frequency of 10.7 GHz (Abraham et al. ACZ96 (1996)) with the flux densities $`F_{\mathrm{exp}}(t=1991.15,\nu =10.7\mathrm{GHz})`$ expected at the same epoch and the same frequency according to the outburst parameters derived here. Table 2 shows that for the first five outbursts there is always one of the possibly associated knots (indicated by an arrow) which has the expected flux. For the three remaining outbursts, and especially for the 1988.1 flare, the relation between $`F_{\mathrm{obs}}`$ (knot) and $`F_{\mathrm{exp}}`$ is not obvious. At this epoch, however, the possibly associated components are still strongly blended by the core emission (component “D”) or might even still be part of the unresolved core<sup>2</sup><sup>2</sup>2In our model the core emission is entirely due to a superimposition of outbursts unresolved by the VLBI.. The total flux $`F_{\mathrm{exp}}=20.9`$ Jy expected from the 1986.3, 1988.1 and 1990.3 outbursts is indeed equal to the observed total flux $`F_{\mathrm{obs}}=21.4\pm 0.7`$ Jy of the C9, C10 and D components. These results strongly suggest that there is a close relation between the outbursts and the VLBI knots, and hence that our decomposition describes a real physical aspect of the jet.
### 5.2 How can we understand the peculiarities of individual outbursts?
The relation found between the outbursts and the VLBI knots (Sect. 5.1) has established that our decomposition is not purely mathematical, but does correspond to a physical reality. There should therefore be a physical origin to the clear anti-correlation found between the frequency shifts $`\mathrm{\Delta }\mathrm{log}\nu `$ and the time shifts $`\mathrm{\Delta }\mathrm{log}t`$ of the individual outbursts. The observed frequency shifts $`\mathrm{\Delta }\mathrm{log}\nu `$ confirm that 3C 273 emits both low- and high-frequency peaking outbursts (Lainela et al. LVT92 (1992)). The relation between $`\mathrm{\Delta }\mathrm{log}\nu `$ and $`\mathrm{\Delta }\mathrm{log}t`$ clearly shows that high-frequency peaking flares evolve faster than low-frequency peaking outbursts. The alignment of the shifts along the $`\nu \propto t^{-1}`$ line (Figs. 4c and 6c) further suggests the relation $`\mathrm{\Delta }\mathrm{log}\nu =-\mathrm{\Delta }\mathrm{log}t`$.
The origin of this relation could be due to a change $`\mathrm{\Delta }\mathrm{log}𝒟`$ of the Doppler factor $`𝒟=\gamma ^{-1}(1-\beta \mathrm{cos}\theta )^{-1}`$, which depends on the flow speed $`\beta =v/c`$, the Lorentz factor $`\gamma =(1-\beta ^2)^{-1/2}`$ and the angle to the line of sight $`\theta `$. Observed quantities (unprimed) are related to emitted quantities (primed) as (e.g. Hughes & Miller HM91 (1991); Pearson & Zensus PZ87 (1987)):
$`\nu =𝒟\nu ^{}`$ $`\Rightarrow `$ $`\mathrm{\Delta }\mathrm{log}\nu =\mathrm{\Delta }\mathrm{log}𝒟+\mathrm{\Delta }\mathrm{log}\nu ^{}`$ (7)
$`t=𝒟^{-1}t^{}`$ $`\Rightarrow `$ $`\mathrm{\Delta }\mathrm{log}t=-\mathrm{\Delta }\mathrm{log}𝒟+\mathrm{\Delta }\mathrm{log}t^{}`$ (8)
$`S(\nu )=𝒟^3S^{}(\nu ^{})`$ $`\Rightarrow `$ $`\mathrm{\Delta }\mathrm{log}S=3\mathrm{\Delta }\mathrm{log}𝒟+\mathrm{\Delta }\mathrm{log}S^{}`$ (9)
If we assume that in the jet frame all outbursts are alike (i.e. $`\mathrm{\Delta }\mathrm{log}k^{}=0`$, $`k=S,\nu ,t`$), the observed relation $`\mathrm{\Delta }\mathrm{log}\nu =-\mathrm{\Delta }\mathrm{log}t`$ can be interpreted as a change $`\mathrm{\Delta }\mathrm{log}𝒟`$ of the Doppler factor from one outburst to the other. In this case, however, there should also be correlations between $`\mathrm{\Delta }\mathrm{log}S`$ and both $`\mathrm{\Delta }\mathrm{log}\nu `$ and $`\mathrm{\Delta }\mathrm{log}t`$, which are not observed.
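The inconsistency can be made explicit with a short numerical check of Eqs. (7)–(9): if the shifts were purely Doppler-driven, $`\mathrm{\Delta }\mathrm{log}S=3\mathrm{\Delta }\mathrm{log}\nu `$ would follow automatically, i.e. a strong amplitude–frequency correlation that the data do not show. The $`\mathrm{\Delta }\mathrm{log}𝒟`$ values below are arbitrary.

```python
for dlogD in (-0.3, -0.1, 0.1, 0.3):   # arbitrary Doppler-factor changes
    dlog_nu = dlogD                     # Eq. (7), identical intrinsic outbursts
    dlog_t = -dlogD                     # Eq. (8): reproduces dlog_nu = -dlog_t
    dlog_S = 3.0 * dlogD                # Eq. (9): implies dlog_S = 3 * dlog_nu
    print(dlog_nu, dlog_t, dlog_S)
```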
Alternatively, we can consider that the Doppler factor does not change ($`\mathrm{\Delta }\mathrm{log}𝒟=0`$) and that the observed relation between $`\mathrm{\Delta }\mathrm{log}\nu `$ and $`\mathrm{\Delta }\mathrm{log}t`$ is intrinsic and independent of possible flux variations $`\mathrm{\Delta }\mathrm{log}S`$. Such a correlation might be related to the distance from the core at which the shock forms (Lainela et al. LVT92 (1992)). Indeed, Blandford (B90 (1990)) shows that for a simple conical jet with constant speed $`v`$ the frequency of maximum emission $`\nu _\mathrm{m}`$ is inversely proportional to the distance down the jet $`r=vt`$ ($`\nu _\mathrm{m}\propto r^{-1}`$), while the corresponding flux density $`S_\mathrm{m}`$ is constant. Since the speed $`v`$ is constant, the turnover frequency $`\nu _\mathrm{m}`$ is then also inversely proportional to time ($`\nu _\mathrm{m}\propto t^{-1}`$), as observed. If a shock forms in such an underlying jet at a distance $`r_0`$ from the core, both the frequency range of the emission and the time scale of the evolution will depend on the distance $`r_0`$, as illustrated in Fig. 8. We therefore propose that short-lived and high-frequency peaking flares are actually inner outbursts, whereas long-lived and low-frequency peaking flares are outer outbursts.
This interpretation is supported by the existence of short-lived VLBI components which are only seen close to the core. In our decomposition, the two most short-lived and the most high-frequency peaking outbursts are the two successive flares of 1982.4 and 1983.1. Their start times correspond well to the period from 1981 to 1983 during which only short-lived VLBI components were formed (Abraham et al. ACZ96 (1996)). If our interpretation is right, the shifts $`\mathrm{\Delta }\mathrm{log}\nu `$ and $`\mathrm{\Delta }\mathrm{log}t`$ that we obtain suggest that the 1982.4 flare would have formed about two times closer to the core than the 1983.1 flare and four times closer than the typical outburst.
### 5.3 What are the constraints for shock models?
According to the shock model of MG85, the optically thin spectral index $`\alpha _{\mathrm{thin}}`$ should be steeper during the first two stages of the outburst evolution than the usual value of $`\alpha _{\mathrm{thin}}=-(s-1)/2`$ (Sect. 4.1). A steeper index arises from the fact that the thickness $`x`$ of the emitting region behind the shock front is proportional to the cooling time $`t_{\mathrm{cool}}`$ of the electrons suffering radiative (Compton and/or synchrotron) losses. During the rising and peaking phases, radiative losses are dominant and therefore the thickness $`x`$ is frequency dependent as $`x\propto t_{\mathrm{cool}}\propto \nu ^{-1/2}`$, which leads to a steeper optically thin spectral index of $`\alpha _{\mathrm{thin}}=-s/2`$. Until now, the expected flattening of the spectral index by $`\mathrm{\Delta }\alpha _{\mathrm{thin}}=+0.5`$ from the rising and peaking phases to the declining phase had never been observed; furthermore, the optically thin spectral index $`\alpha _{\mathrm{thin}}`$ observed at the beginning of the outburst was often found to be already too flat ($`\alpha _{\mathrm{thin}}>-1/2`$) to allow the expected subsequent flattening (Valtaoja et al. VHL88 (1988); Lainela L94 (1994)).
The present result that the optically thin spectral index flattens with time by $`\mathrm{\Delta }\alpha _{\mathrm{thin}}=+0.6`$ is in good agreement with the change of $`\mathrm{\Delta }\alpha _{\mathrm{thin}}=+0.5`$ expected from the shock model of MG85. The observed flattening of the spectrum is contrary to the steepening with time expected as a result of radiative energy losses by the electrons. The observed behaviour can, however, also be understood as a change of slope with frequency rather than with time, and thus it could conceivably be due to a spectral break that steepens the optically thin spectral index by 0.5 at higher frequencies. Such a break is expected in the case of continuous injection or reacceleration of electrons suffering radiative losses (Kardashev K62 (1962)) and is observed in several hot spots including 3C 273A (Meisenheimer et al. MRH89 (1989)). Whatever the interpretation, the flatter index, $`\alpha _{\mathrm{thin}}(t_\mathrm{p})`$, is the relevant one for determining the electron energy index $`s`$ ($`N(E)\propto E^{-s}`$): $`s=1-2\alpha _{\mathrm{thin}}(t_\mathrm{p})=+2.0`$. This value corresponds to the average value observed in several hot spots (Meisenheimer et al. MRH89 (1989)) and is in agreement with the values expected if the electrons are accelerated by a Fermi mechanism in a relativistic shock (e.g. Longair L94b (1994)).
The long flat peaking phase observed in 3C 273 contrasts with the complete absence of this stage in 3C 345 (Stevens et al. SLR96 (1996)). This difference is surprising, because the outburst’s evolution is otherwise very similar in these two objects, with nearly the same indices for the rising and the declining phases: $`\gamma _1/\beta _1=-0.99`$ in 3C 273 and $`-0.86`$ in 3C 345, and $`\gamma _3/\beta _3=+1.14`$ in 3C 273 and $`+0.98`$ in 3C 345 ($`S_\mathrm{m}\propto \nu _\mathrm{m}^{\gamma _i/\beta _i}`$). A value of $`\gamma _3/\beta _3\approx +1`$ was also found in several other sources by Valtaoja et al. (VHL88 (1988)). This decrease of the turnover flux with decreasing frequency is steeper than expected from the simplest model of MG85, i.e. with a conical adiabatic jet having a constant Doppler factor $`𝒟`$. With $`s=2`$ and a magnetic field $`B`$ oriented perpendicular to the jet axis, their model predicts $`\gamma _3/\beta _3=+0.45`$. This discrepancy between the observations and the shock model of MG85 was already pointed out by Stevens et al. (SLR96 (1996)). We refer the reader to their discussion of two more general cases of the MG85 model: 1) a straight non-adiabatic jet and 2) a curved adiabatic jet. With the observed values of the indices $`\beta _3`$ and $`\gamma _3`$, these authors could determine the two free parameters of the model. In our case, with the constraints of all six indices $`\beta _i`$ and $`\gamma _i`$ ($`i=1,2,3`$), we could not find a good agreement with either of the two models mentioned above. In a forthcoming paper (Türler et al. in preparation), we will further discuss this point and explore whether a non-conical, non-adiabatic, curved jet can describe the observations well.
## 6 Summary and conclusion
By using most available submillimetre-to-radio observations of 3C 273, we have been able to extract the properties of the spectral and temporal evolution of a typical outburst. The new approach we defined consists in decomposing the light curves into several self-similar outbursts. The main results of our decomposition are the following:
* It is possible to understand the very different shapes of the submillimetre-to-radio light curves of 3C 273 with only about one outburst every 1.5 years, starting simultaneously at all frequencies.
* There is no need to invoke any underlying quiescent emission apart from the weak contribution of the jet’s hot spot 3C 273A.
* The outbursts that we identify correspond well to the observed VLBI components in the jet.
* There is good evidence that short-lived and high-frequency peaking flares are emitted closer to the core of the jet than long-lived and low-frequency peaking outbursts.
* The spectral and temporal evolution of the outbursts is found to be in good qualitative agreement with the evolution expected from shock models in relativistic jets.
* We observe a flattening of the optically thin spectral index from the rising to the declining phase of the shock evolution, which supports the idea proposed by MG85 that radiative (synchrotron and/or Compton) losses are the main cooling process of the electrons during the initial phase of the outburst.
We are aware that our decomposition is far from describing the detailed structure of the light curves, and that the jet emission is much more complicated than this simple picture. Nevertheless, the results suggest that the outbursts we identified are closely related to the VLBI knots, and hence that they describe a physical aspect of the jet. The new approach presented here is a powerful tool to derive the observed properties of millimetre and radio outbursts. It allows comparison between shock models and the observations, and we are confident that such decompositions will be able to further constrain present and future shock models. Finally, we would like to stress the importance of long-term multi-wavelength monitoring campaigns, which turn out to be essential for a better understanding of the physics involved in relativistic jets.
# Thermal excitation of heavy nuclei with 5-15 GeV/c antiproton, proton and pion beams
## Abstract
Excitation-energy distributions have been derived from measurements of 5.0-14.6 GeV/c antiproton, proton and pion reactions with <sup>197</sup>Au target nuclei, using the ISiS 4$`\pi `$ detector array. The maximum probability for producing high excitation-energy events is found for the 8 GeV/c antiproton beam relative to other hadrons, <sup>3</sup>He and $`\overline{p}`$ beams from LEAR. For protons and pions, the excitation-energy distributions are nearly independent of hadron type and beam momentum above about 8 GeV/c. The excitation-energy enhancement for $`\overline{p}`$ beams and the saturation effect are qualitatively consistent with intranuclear cascade code predictions. For all systems studied, maximum cluster sizes are observed for residues with $`E^*/A\approx 6`$ MeV.
PACS: 25.70.Pq, 25.43.+t, 25.80.Hp
Present addresses: Los Alamos National Laboratory, Los Alamos, NM 87545; Epsilon, Inc., Dallas, TX 75240; Lawrence Berkeley Laboratory, Berkeley, CA 94720. Further author affiliations: Cambridge University, Cambridge, U.K.; Barnes Hospital, Washington University, St. Louis, MO 63130.
Much effort in nuclear physics has been devoted to the study of the formation of multiple complex fragments ($`3\le Z\le 16`$), or multifragmentation, and its possible link to the nuclear liquid-gas phase transition. Particularly strong interest was created by the measurement of a latent heat by Pochodzalla et al., which would signal a first-order phase transition. However, in many cases multifragmentation of heavy nuclei is driven not only by its thermal and Coulomb properties, but also by collective (dynamical) properties of the chaotic systems formed in energetic projectile-target interactions.
The thermal features of the breakup process are isolated most transparently in reactions induced by hadron and light-ion beams at energies in excess of 2 GeV. Transport calculations predict that such beams heat nuclei rapidly, $`\tau \sim 30`$-40 fm/c, at the same time producing little compression, low average angular momentum and minimal shape distortions. While the excitation energy deposition $`E^*`$ in GeV hadron-induced reactions is significantly less than the total available energy, transport calculations indicate that residues can be created in such collisions with $`E^*`$ values well in excess of the multifragmentation threshold, $`E^*/A\approx 5`$ MeV. The objective of the present study is to investigate the relative effectiveness of various hadron beams and momenta for producing $`E^*`$ values in excess of $`\approx `$ 5 MeV per nucleon, thereby identifying the optimum system for studies of thermal multifragmentation and underlying phenomena, e.g. the liquid-gas phase transition.
In Fig. 1, excitation-energy predictions of the Toneev intranuclear cascade calculation (INC) are shown. Here the average excitation energy imparted to a <sup>197</sup>Au nucleus by proton, $`\pi ^{-}`$ and antiproton beams is plotted as a function of beam momentum. For p and $`\pi ^{-}`$ beams, the $`E^*`$ values are predicted to be nearly identical; above about 8 GeV/c there is little dependence on beam momentum. These features have been verified qualitatively in charged-particle multiplicity studies by Hsi et al. and in earlier inclusive studies by Porile. However, the antiproton predictions exhibit a significant increase in average excitation energy. This enhancement derives from reabsorption of some fraction of the annihilation pions ($`\langle n_\pi \rangle \approx 5`$), which complements the internal heating caused by the cascade of hadron-hadron collisions and $`\mathrm{\Delta }`$ resonance excitations. From the point of view of multifragmentation, this greatly enhances the probability for forming residues excited above the multifragmentation threshold, as shown in the inset of Fig. 1, where $`E^*`$ distributions for 8 GeV/c $`\pi ^{-}`$ and $`\overline{p}`$ are compared.
In this letter we present results for excitation-energy distributions derived from bombardments of <sup>197</sup>Au nuclei with 5.0-14.6 GeV/c hadron beams. Exclusive charged-particle multiplicities and energy spectra were measured at the Brookhaven AGS accelerator with the Indiana Silicon Sphere, a 4$`\pi `$ detector array with 162 gas-ion-chamber/silicon/CsI telescopes. Two experiments were performed. The first used untagged negative beams (largely $`\pi ^{-}`$) at 5.0, 8.2, and 9.2 GeV/c and positive beams (primarily protons) at 6.2, 10.2, 12.8 and 14.6 GeV/c. The second was performed with a tagged 8.0 GeV/c negative beam, which provided simultaneous measurement of the $`\pi ^{-}`$ (98%) and $`\overline{p}`$ (1%) reactions. The results at 8 GeV/c for the two $`\pi ^{-}`$ experiments are found to be identical within error bars. Further experimental details can be found in the references. As part of our analysis, we have also compared with data from the 4.8 GeV <sup>3</sup>He + <sup>197</sup>Au reaction at LNS Saclay and the 1.2 GeV/c $`\overline{p}`$ + <sup>197</sup>Au reaction from LEAR.
In the experiments reported here, the excited-residue charge and mass were calculated by subtracting fast cascade particles from the target charge and mass, using the same procedure as in earlier work. Excitation-energy reconstruction was performed for each event by calorimetry according to the following prescription:
$$E^*=\sum _{i=1}^{M_c}K_i+M_n\langle K_n\rangle +Q+E_\gamma .$$
(1)
Here, $`K_i`$ is the kinetic energy of each charged particle in an event of multiplicity $`M_c`$, $`M_n`$ and $`\langle K_n\rangle `$ are the multiplicity and average kinetic energy of the neutrons, $`Q`$ is the mass difference of the reconstructed event, and $`E_\gamma `$ is a small term that accounts for gamma de-excitation of the residual nucleus and excited fragments. This procedure is similar to those employed previously, and a full paper on the dependence of the $`E^*`$ values on the various assumptions of the reconstruction, and on the effects of fluctuations, is in preparation.
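A schematic transcription of Eq. (1) for a single event is given below; the neutron multiplicity and $`\langle K_n\rangle `$ are inputs taken from the correlations discussed next, and all numerical values in the example are hypothetical.

```python
def excitation_energy(K_charged, M_n, K_n_mean, Q, E_gamma=0.0):
    """Event-by-event calorimetry following Eq. (1) (energies in MeV).

    K_charged : kinetic energies of the M_c thermal-like charged particles
    M_n       : estimated neutron multiplicity for the event
    K_n_mean  : assumed average neutron kinetic energy <K_n>
    Q         : mass difference of the reconstructed event
    E_gamma   : small gamma de-excitation term
    """
    return sum(K_charged) + M_n * K_n_mean + Q + E_gamma

# A hypothetical event:
print(excitation_energy([12.0, 25.0, 8.5, 30.0], M_n=14, K_n_mean=4.0,
                        Q=60.0, E_gamma=10.0))
```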
For the present measurements, the two most sensitive parameters of the reconstruction procedure involve the assumption concerning $`\langle K_n\rangle `$ and the definition of the energy range for thermal-like particles. Since neutrons are not measured in ISiS, we use the neutron-charged-particle correlations reported for LEAR data by Goldenbaum et al. This correlation is in good agreement with similar results from heavy-ion reactions and model simulations. However, systematic uncertainties arise when kinetic energies are assigned to the neutrons. The neutron average kinetic energies were taken from the correlation between $`\langle K_n\rangle `$ and $`E^*/A`$ predicted by SMM. Eq. (1) is then iterated to obtain self-consistency. Similar values are obtained from an iterative procedure using $`E^*=aT^2`$ and an initial value of $`\langle K_n\rangle =T`$. Comparisons between unfiltered and filtered simulations with SMM and with the evaporation code SIMON both show that the use of $`3T/2`$ (initial $`\langle K_n\rangle =3T/2`$) overpredicts the values of $`\langle K_n\rangle `$ by as much as 30% at high $`E^*`$, resulting in an overestimation of $`E^*`$ of about 10-12% ($`\approx `$ 20% for initial $`\langle K_n\rangle =2T`$).
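The self-consistency iteration described above can be sketched as a simple fixed-point loop; the function `Kn_of_EA` stands in for the assumed $`\langle K_n\rangle `$ versus $`E^*/A`$ correlation (e.g. tabulated from SMM), which is not reproduced here.

```python
def reconstruct_estar(K_charged, M_n, Q, A_res, Kn_of_EA,
                      Kn_start=2.0, tol=0.01, itmax=100):
    """Iterate Eq. (1) until <K_n> and E*/A are mutually consistent.

    The small gamma de-excitation term of Eq. (1) is omitted for brevity.
    """
    Kn = Kn_start
    for _ in range(itmax):
        E_star = sum(K_charged) + M_n * Kn + Q   # Eq. (1) with current <K_n>
        Kn_next = Kn_of_EA(E_star / A_res)       # update <K_n> from E*/A
        if abs(Kn_next - Kn) < tol:
            break
        Kn = Kn_next
    return E_star
```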
Thermal-like charged particles are defined by the spectral shapes, from which an upper cutoff of 30 MeV for H and ($`9Z+40`$) MeV for heavier fragments was assigned. Our definition of $`E^*`$ is a conservative one; for example, with the expanded charged-particle acceptance ($`E/A<`$ 30 MeV) of Hauger et al., we obtain $`E^*`$ values about 25% higher than reported here.
Finally, the stability of the $`E^*`$ reconstruction procedure has been tested using SMM and SIMON. Both models give a strong linear correlation between the unfiltered and filtered $`E^*`$. The average values are recovered by the method with deviations no larger than $`\pm `$10% over the useful range of the data ($`E^*/A`$ = 2-9 MeV). A detailed comparison will be presented in a longer paper. However, the approach taken above should be viewed as a conservative one, as should the $`E^*`$ distributions.
In Figure 2 we show the reconstructed probability distributions for excitation energy and residue mass for the range of systems studied in this work, where $`\sum _iP(E_i^*)=1`$. Values of $`E^*`$ below 250 MeV become highly uncertain due to the dominance of neutron emission (unmeasured in ISiS) at low excitation energies and the suppression of $`M_c\le 2`$ events by the ISiS trigger. Data for 6.2 GeV/c and 12.8 GeV/c protons (not shown) are similar to the other proton and pion data in Figs. 2-5. Fig. 2 demonstrates that the largest population of high excitation-energy events is achieved with the 8.0 GeV/c $`\overline{p}`$ beam and the lowest with the 5.0 GeV/c $`\pi ^{-}`$ beam. The 12.8 GeV/c proton distribution is slightly higher than that for 14.6 GeV/c p, while that for the 6.2 GeV/c protons is slightly lower than for the 8 GeV/c $`\pi ^{-}`$. Thus, the data and the INC predictions of Fig. 1 are in qualitative agreement. Quantitatively, however, the INC calculations predict $`E^*`$ distributions that extend significantly beyond the data, as discussed elsewhere.
The residue mass distributions show a somewhat different pattern. In this case the 14.6 GeV/c proton beam produces the lightest residues and the 5.0 GeV/c $`\pi ^{-}`$ the heaviest. This mass dependence can be understood as a consequence of the fast cascade, which produces an increasing number of fast knockout particles as the beam momentum increases, and also leads to a density-depleted residue. This process produces the saturation in average excitation energy shown in the INC calculations of Fig. 1 and in the data of Fig. 2. That is, the increase in total available beam energy for $`E^*`$ deposition is counterbalanced by the loss of energy due to the mass loss $`\mathrm{\Delta }A`$ during the fast cascade. This mass loss, derived from the data, is shown in the top panel of Fig. 3 as a function of deposited excitation energy.
The relative effectiveness of various beams in depositing high excitation energies is shown in the bottom panel of Fig. 3 and summarized in Table 1. Included here are comparable data from the 4.8 GeV <sup>3</sup>He + <sup>197</sup>Au reaction and the 1.2 GeV $`\overline{p}`$ + <sup>197</sup>Au reaction. In order to emphasize the probability for forming highly excited systems, we examine the ratio of total events with $`E^*`$ greater than a given value to that for events with $`E^*\ge `$ 400 MeV. At $`E^*`$ = 400 MeV, the event reconstruction should provide the greatest self-consistency among the data sets.
Figure 3 and Table 1 confirm that the 8.0 GeV/c antiproton beam produces a significant enhancement of high excitation-energy events, particularly in the multifragmentation regime above 800-1000 MeV. In Table 1, the yield of events with excitation energy above the multifragmentation threshold for Au-like nuclei (a range that spans 800-1000 MeV, or about 5 MeV/nucleon) is listed, compared to total events with $`E^*>`$ 400 MeV ($`E^*/A>`$ 2 MeV). The enhancement for the 8 GeV/c $`\overline{p}`$ beam is approximately 25% greater than for the next most effective beam, 12.8 GeV/c protons. Relative to the $`\overline{p}`$ studies with 2.1 GeV/c $`\overline{p}`$ at LEAR, where negligible multifragmentation yield was observed, the probabilities for high-$`E^*`$ events at 8 GeV/c are over an order of magnitude greater. In this regard, we note that for the 1.8 GeV <sup>3</sup>He + <sup>197</sup>Au system, for which the charged-particle multiplicity data are very similar to those with LEAR beams, the total cross section for events with four or more $`Z\ge 3`$ fragments is only 3.5 mb. At the higher energy of 4.8 GeV, this cross section has grown to 83 mb. Thus, when account is taken of the rapid growth of the multifragmentation cross section with increasing beam momentum, the ISiS results and those of Ref. appear to be self-consistent.
In Fig. 4 the excitation-energy distributions are plotted as a function of $`E^*/A`$ of the residue. We note here, as well as in Fig. 5, that events with $`E^*/A\ge `$ 9 MeV comprise less than 1% of the data set. The same general features persist as in Fig. 3, except that in this case the 14.6 GeV/c proton beam yields comparable probabilities in the region $`E^*/A>`$ 9 MeV. Two factors account for this. First, the average residue mass is lighter for reactions at this momentum, as shown in Fig. 2. Second, the number of events obtained with the 8.0 GeV/c $`\overline{p}`$ beam ($`\approx `$ 25,000) was about two orders of magnitude lower than for the other beams, creating larger statistical uncertainties at the extremes.
Finally, in Fig. 5 we examine the dependence of the fragment size distributions on $`E^*/A`$, of relevance to discussions of critical phenomena and a nuclear liquid-gas phase transition. The top panel shows the number of observed IMFs per residue nucleon and the corresponding filter-corrected value as a function of $`E^*/A`$ for the various reactions. This ratio is nearly identical for all systems, increasing systematically with increasing $`E^*/A`$ up to 9 MeV and becoming roughly constant thereafter. The same uniformity is observed in the fragment charge distributions, shown in the lower panel of Fig. 5, where the parameter $`\tau `$ from power-law fits to the charge distributions is plotted as a function of $`E^*/A`$. Values of $`\tau `$ decrease steadily as the system is heated, i.e. the probability for forming larger fragments increases. A minimum is reached at $`\tau \approx 2`$ near $`E^*/A\approx 6`$ MeV, followed by a slight increase (smaller fragments). This signifies that maximum cluster sizes are obtained very near the multifragmentation threshold. Thereafter, additional excitation appears to produce a hotter environment, leading to an increased yield of lighter particles and clusters.
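For definiteness, the exponent $`\tau `$ can be extracted by a least-squares power-law fit to the fragment charge distribution in each $`E^*/A`$ bin; a minimal sketch, with a made-up charge distribution, is:

```python
import numpy as np

def fit_tau(Z, yields):
    """Fit dN/dZ ~ Z**(-tau) by linear regression in log-log space."""
    mask = yields > 0
    slope, _ = np.polyfit(np.log(Z[mask]), np.log(yields[mask]), 1)
    return -slope

Z = np.arange(3.0, 17.0)           # IMF charges, 3 <= Z <= 16
yields = 1.0e4 * Z ** (-2.2)       # hypothetical charge distribution
print(fit_tau(Z, yields))          # recovers tau ~ 2.2
```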
In summary, the heat content ($`E^*`$) of equilibrium-like heavy residues formed in 5-15 GeV/c hadron-induced reactions has been investigated. The antiproton beam is found to be the most effective in creating highly excited residues, in qualitative agreement with INC predictions. Relative to the threshold for multifragmentation in such systems ($`E^*\approx `$ 800-1000 MeV), the enhancement of high excitation-energy events with antiprotons is at least 25-35% greater than with other hadrons and over an order of magnitude greater than with antiprotons from LEAR. Above momenta of about 8 GeV/c the probability for $`E^*`$ deposition with hadron beams is nearly independent of hadron type or beam momentum, again consistent with INC calculations. The observed average number of IMFs per residue nucleon and the power-law fits to the charge distributions show a universal behavior as a function of $`E^*/A`$. This independence of the final multifragmentation state from the collision dynamics suggests equilibrium-like behavior in the breakup of the hot residues. Therefore hadrons, especially antiprotons around 6-8 GeV/c and pions or protons above 10-12 GeV/c, are very well suited to studies of thermal multifragmentation. For a given beam momentum, they provide a wide range of thermal energy ($`E^*/A`$), which is an essential quantity for investigating latent heat in nuclear matter and related properties.
Acknowledgements
The authors thank J. Vanderwerp, W. Lozowski, K. Komisarcik and R.N. Yoder at IUCF and P. Pile, H. Brown, W. McGahern, J. Scaduto, L. Toler, J. Bunce, J. Gould, R. Hackenburg and C. Woody at AGS for their assistance with these experiments. This work was supported by the U.S. Department of Energy and National Science Foundation, the National Sciences and Engineering Research Council of Canada, Grant No. P03B 048 15 of the Polish State Committee for Scientific Research, Indiana University Office of Research and the University Graduate School, Simon Fraser University and the Robert A. Welch Foundation.
# Martin Schwarzschild’s Contributions to Galaxy Dynamics
## 1 Introduction
The astronomical community’s debt to Martin Schwarzschild derives from much more than his published work, as many of us who were his students, collaborators and friends can testify. Nor did Schwarzschild’s contributions to galaxy dynamics constitute more than a small portion of his scientific output. Nevertheless it would be hard to think of another single figure whose work so influenced the development of many of the fields discussed at this meeting.
Those of us who came of scientific age after Schwarzschild’s retirement in 1979 tend to identify his contributions to galaxy dynamics with the remarkable series of papers on elliptical galaxies that began appearing at about the same time. But Schwarzschild’s interest in the structure and dynamics of stellar systems was lifelong; for instance, as early as 1951, he published the first of two papers with L. Spitzer concerning the influence of interstellar clouds on stellar velocities. A number of other papers from this decade dealt with the relation between the chemical composition and kinematics of stars in the Milky Way and other galaxies.
The following review focusses on three areas of galaxy dynamics where Schwarzschild’s contributions were particularly fundamental: the masses of stellar systems; the structure of galactic nuclei; and the dynamics of elliptical galaxies.
## 2 Masses of Stellar Systems
The study of the distribution of mass in external galaxies was still in its infancy when Schwarzschild published his 1954 paper, “Mass Distribution and Mass-Luminosity Ratio in Galaxies.” Here Schwarzschild re-analyzed the kinematical data in three galaxies – M31, M33 and NGC 3115 – for which earlier workers had found significantly different distributions of light and mass. In each galaxy, he showed that the data were in fact consistent with a constant ratio of mass to light, albeit with rather different values in the three systems. In the case of NGC 3115, for instance, Schwarzschild noted that a high central velocity dispersion recently measured by Minkowski implied a large deviation between circular and rotational velocities near the center of this galaxy, thus allowing $`M/L`$ to remain approximately constant in spite of a low central $`v_c`$.
But this paper also contained at least three, quite novel approaches to what we would now call the “dark matter problem.” First, Schwarzschild estimated the mass of M32 by assuming that its gravitational pull was responsible for the observed asymmetry in rotation velocity and morphology of its larger companion M31. He concluded that the mass-to-light ratio of M32 was of order $`200`$, in approximate agreement with his value for NGC 3115. Second, Schwarzschild presented a new and elegant method for evaluating the virial theorem, the strip-count formula. He showed that the potential energy of a spherical system could be expressed simply in terms of $`S(q)`$, the observed number of objects in a strip of unit width that passes a distance $`q`$ from the projected center. <sup>1</sup><sup>1</sup>1Strip counts had long been used to infer the density profiles of star clusters (e.g. Plummer 1911). Schwarzschild was apparently the first to notice that the potential energy could be computed directly from $`S(q)`$ without first converting it into a density profile. He applied his technique to the Coma cluster using Zwicky’s galaxy counts and obtained the “bewilderingly high value” of 800 for its mass-to-light ratio. Finally, this paper contained what was probably the first suggestion that white dwarfs, remnants of an earlier generation of star formation, might constitute a signficant fraction of the masses of galaxies.
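The strip-count result lends itself to a quick numerical illustration. For a spherical system it can be written compactly as $`W=-G\int S(q)^2dq`$, with $`S(q)`$ the mass per unit strip width and the integral running over all $`q`$; this normalization is our restatement rather than a quotation of the 1954 paper. The sketch below checks it against a Plummer sphere, for which $`W=-3\pi GM^2/(32a)`$:

```python
import numpy as np

G, M, a = 1.0, 1.0, 1.0   # Plummer sphere in model units

def strip_profile(q):
    # Mass per unit strip width of a Plummer sphere: the surface
    # density integrated along the length of the strip.
    return M * a**2 / (2.0 * (a**2 + q**2) ** 1.5)

q = np.linspace(-200.0, 200.0, 400001)
dq = q[1] - q[0]
W_strip = -G * np.sum(strip_profile(q) ** 2) * dq
W_exact = -3.0 * np.pi * G * M**2 / (32.0 * a)
print(W_strip, W_exact)   # the two agree to the accuracy of the quadrature
```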
In “Note on the Mass of M92” (1955), Schwarzschild and S. Bernstein used the strip-count formula to obtain one of the first accurate measurements of the mass-to-light ratio of a globular cluster. <sup>2</sup><sup>2</sup>2Those familiar with Schwarzschild’s legendary tact will be struck by the introduction to this paper, which contains a withering (but accurate) critique of a rival formula for evaluating the virial theorem.
## 3 Structure of Galactic Nuclei
Schwarzschild’s pivotal role in the development and deployment of the balloon-borne telescopes Stratoscope I and II is well known. <sup>3</sup><sup>3</sup>3A wonderfully clear account of the observation of convection cells in the Sun with Stratoscope I was written by Schwarzschild and his wife, Barbara, for Scientific American (1959). After its two initial flights, Stratoscope II, a 36-inch telescope, was reconfigured for high-definition photography and used to obtain images of galactic nuclei unblurred by the atmosphere. In “An Upper Limit to the Angular Diameter of the Nucleus of NGC 4151” (1968, 1973), Schwarzschild, R. Danielson and B. D. Savage reported that the nucleus of NGC 4151 had still not been resolved and accordingly that only an upper limit could be placed on its diameter, which they estimated at $`0.08^{\prime \prime }`$. They were thus able to show that the non-thermal continuum, which provides most of the nuclear light in this Seyfert galaxy, originated in a region much smaller than that associated with the emission lines.
The eighth, and final, flight of Stratoscope II was used to obtain high-resolution photographs of M31 and M32. The results for M32, while intriguing, were never published; the observations were made shortly before sunrise while the telescope was gradually descending and the resultant temperature differentials caused a substantial degradation in the quality of the images. But the data seemed to show no evidence for a distinct nucleus at a resolution of $`0.5^{\prime \prime }`$, consistent with what we now know about the luminosity profile of this galaxy. Observations taken during the same night of M31 were more successful; in “The Nucleus of M31” (1974), E. S. Light, R. E. Danielson and Schwarzschild presented $`0.2^{\prime \prime }`$ resolution photographs that clearly resolved the nucleus, showing it to have a core radius of only $`0.48^{\prime \prime }`$. More striking was the observed asymmetry of the nucleus, which was revealed to have a low intensity extension on one side of the bright peak. Light et al. raised the possibility that the offset was a result of non-uniform obscuration by dust, and noted that, in the absence of dust, “the observed asymmetry is an intrinsic property of the nucleus which will probably require a dynamic explanation.” The latter picture is now accepted by most astronomers due to the absence of color variations.
## 4 Elliptical Galaxy Dynamics
Starting in 1976, when he was 64 years old, Schwarzschild wrote or co-authored a remarkable series of 21 papers on the dynamics of elliptical galaxies. The first of these, a collaboration with M. Ruiz, dates from the “early days” of the field when it was still universally assumed that elliptical galaxies and bulges were rotationally-supported, axisymmetric systems. “An Approximate Dynamical Model for Spheroidal Stellar Systems” (1976) presented a novel approach to the problem of elliptical galaxy modelling. Ruiz and Schwarzschild wrote $`f(E,L_z)=f_0e^{E/\sigma ^2}g(L_z)`$, and assumed in addition that the density generated by $`f`$ was constant on spheroids of fixed eccentricity. The two assumptions are mildly inconsistent, as the authors fully realized, but together they permit an extremely elegant derivation of the function $`g(L_z)`$: one first matches the density profile on the rotation axis, which is independent of $`g`$, then uses the observed density in the equatorial plane to determine $`g(L_z)`$. Ruiz (1976) applied the model to the central region of M31, treating the nucleus and bulge as distinct components.
The bulge in Ruiz’s model of M31 was tipped out of the disk plane in order to reproduce the observed twist in the isophotes at about $`10^{\prime }`$ from the center of this galaxy. Stark (1977) recognized that a coplanar and triaxial bulge could reproduce the twist in M31 equally well. At about the same time, a number of workers began publishing integrated spectra which showed that these objects were rotating much more slowly than expected for centrifugally flattened oblate spheroids. Schwarzschild contributed to the emerging view of early-type galaxies as triaxial ellipsoids in two papers with T. B. Williams, “A Photometric Determination of Twists in Three Early-Type Galaxies,” I & II (1979). These studies revealed significant twists in the inner isophotes of three elliptical galaxies, which the authors cautiously interpreted as evidence that “many elliptical galaxies may have a more complicated basic structure than that of axially symmetric configurations.”
Schwarzschild’s most famous paper from this period is undoubtedly “A Numerical Model for a Triaxial Stellar System in Dynamical Equilibrium” (1979), in which he constructed the first completely self-consistent model of a triaxial galaxy. The approach was at the same time beautifully straightforward and quite novel. Schwarzschild’s insight was to treat individual, time-averaged orbits as building blocks for a galaxy – thus replacing the cumbersome self-consistency equations by a matrix equation that could be solved using standard numerical techniques. In the process, he discovered the four families of regular orbits in triaxial potentials, the boxes and the three types of tubes. His demonstration that most orbits in a non-axisymmetric potential could be regular – i.e. that they respected three effective integrals of the motion – was quite unexpected at the time.
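The numerical kernel of the method is easy to state: if $`B_{ij}`$ is the time-averaged mass that orbit $`j`$ deposits in spatial cell $`i`$, self-consistency requires $`Bx=d`$, where $`d`$ holds the cell masses of the model and the orbit weights $`x_j`$ must be non-negative. Schwarzschild attacked this with linear programming; a modern shortcut is non-negative least squares, sketched here on a toy matrix invented purely to show the mechanics:

```python
import numpy as np
from scipy.optimize import nnls

# B[i, j]: mass that orbit j places in cell i, tabulated by time-averaging
# numerically integrated orbits (the 3x3 matrix below is a toy example).
B = np.array([[0.60, 0.10, 0.30],
              [0.30, 0.50, 0.20],
              [0.10, 0.40, 0.50]])
d = np.array([0.40, 0.35, 0.25])   # cell masses required by the mass model

x, rnorm = nnls(B, d)              # orbit weights, constrained to x >= 0
print(x, rnorm)                    # rnorm ~ 0 signals a self-consistent mix
```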
Schwarzschild went on, in two subsequent studies, to develop a more complete understanding of these major orbit families. “On the Nonexistence of Three-Dimensional Tube Orbits Around the Intermediate Axis in a Triaxial Galaxy Model” (1979), with G. Heiligman, linked the existence of the tube orbits to the stability of the $`1:1`$ resonant orbits in the principal planes. The primary motivation for this work was the apparent absence of intermediate-axis tube orbits in the self-consistent triaxial model. The authors showed that the $`1:1`$ orbit in the $`XZ`$ plane <sup>4</sup><sup>4</sup>4Here and below, Schwarzschild’s convention is followed in which the $`X`$ and $`Z`$ axes are identified with the long and short axes of the triaxial figure. (i.e. the plane perpendicular to the intermediate axis) was generally unstable to vertical perturbations, a circumstance which they noted was “quite plausibly destructive for the existence of $`Y`$-tube orbits.” A second study with M. Vietri, “Analysis of Box Orbits in a Triaxial Galaxy” (1983) developed the picture of box orbits as perturbations of the stable, long-axis orbit. The key to the analysis was a careful treatment of the second-order terms: these terms were retained in the development of the transverse motion but omitted from the axial motion, thus allowing the equations for the different orders to be solved independently.
A remarkable paper from the following year, “Stellar Orbits in Angle Variables” (1984) with S. J. Ratcliff and K. M. Chang, showed how a complete description of a two-dimensional orbit could be obtained in terms of its action-angle variables. This problem currently goes under the name of “torus construction” but it is actually quite old, with antecedents in work of Einstein and Born on semi-classical quantization. Here again, the approach was beautifully direct. The authors asked simply: How must the Cartesian coordinates depend on the angles if the angles are to increase linearly with time? The result was a set of differential equations for $`x`$ and $`y`$ as functions of the angles. These equations are nonlinear, and Ratcliff et al. developed an iterative technique for solving them which worked well whenever the initial guess was sufficiently close to the true solution.
The slow observed rotation of elliptical galaxies was one of the factors that prompted Schwarzschild to construct his first triaxial model. Real elliptical galaxies probably do have rotating figures, and in 1982 Schwarzschild began investigating the effects of slow figure rotation on the triaxial self-consistency problem. “Retrograde Closed Orbits in a Rotating Triaxial Potential” (1982), with J. Heisler and D. Merritt, reported the existence of the “anomalous” orbits, $`1:1`$ resonant orbits that are tipped out of the $`YZ`$ plane by Coriolis forces. The anomalous orbits give rise to two families of $`X`$-tubes that circulate in opposite directions about the long axis of a rotating triaxial figure. In “A Model for Elliptical Radio Galaxies with Dust Lanes” (1982), T. S. van Albada, C. G. Kotanyi and Schwarzschild suggested that the dust lanes of Centaurus A and M84 consisted of matter moving along these anomalous orbits. <sup>5</sup><sup>5</sup>5Subsequent observations of Centaurus A revealed that the sense of rotation of the stellar body of this galaxy is probably opposite to that of the van Albada et al. model, implying that the outer dust ring has not yet reached a steady state. However a triaxial figure is probably still required to support the inner ring.
Schwarzschild made one attempt at achieving self-consistency in a triaxial model with rapid figure rotation; this initial attempt failed, as Schwarzschild reported at one of the Princeton “Tuesday lunches,” and the work was never published. However a subsequent effort, using a more slowly rotating figure, was successful. In “Triaxial Equilibrium Models for Elliptical Galaxies with Slow Figure Rotation” (1982), Schwarzschild chose a value for the rotation period that was long enough (of order $`10^9`$ years after scaling) that all four of the major orbit families existed out to the truncation radius of the model. He noted that the two branches of $`X`$-tubes must be equally populated if such a model is to be eight-fold symmetric, which means that a rotating model will lack any streaming around the long axis. This was another example of how the use of orbits as building blocks could lead to insights about a galaxy’s kinematics that would have been difficult to obtain from the Jeans or Boltzmann equations.
In his 1979 self-consistency study, Schwarzschild had found that box orbits alone could not reproduce the mass distribution of his triaxial model, since they tended to place too much mass along the major axis. His solution was to incorporate $`X`$-tube orbits which avoid the long axis. Schwarzschild noted that solutions incorporating the other major orbit family, the $`Z`$-tubes, were also likely to exist and that the question of the uniqueness of solutions “is thus left unanswered by the present investigation.” He returned to the uniqueness question in a 1986 paper, “Dynamical Models for Galactic Bars: Truncated Perfect Elliptic Disk.” Schwarzschild considered a strongly truncated, planar mass model that supported only one family of orbits, the boxes, and showed numerically that a self-consistent solution existed and that it was unique. Beyond the truncation radius in this two-dimensional model, tube orbits exist in addition to box orbits, and one might expect to find a certain degree of non-uniqueness in solutions that draw on both orbit families. This was shown to be the case in a study with P. T. de Zeeuw and C. Hunter that appeared the following year, “Nonuniqueness of Self-Consistent Equilibrium Solutions for the Perfect Elliptic Disk” (1987). A further step toward demonstrating non-uniqueness in the three-dimensional problem was taken by Hunter, de Zeeuw, C. Park and Schwarzschild in “Prolate Galaxy Models with Thin-Tube Orbits” (1990). The authors showed that a variety of self-consistent solutions for axisymmetric prolate models could be found by varying the relative occupation numbers of orbits from the two families of thin long-axis tubes.
In 1980, R. H. Miller asked Schwarzschild whether he could test the stability of the nonrotating triaxial model. Schwarzschild agreed, and assigned one of his students the task of re-integrating the orbits to provide initial conditions for the $`N`$-body code. In the process it was discovered that many of the orbits generated different masses in the grid of cells than they had in the original integrations. The discrepancy was eventually traced to the installation of a new computer at the Princeton Computer Center: the differences in the round-off algorithms of the two machines were sufficient to trigger the exponential instability of those orbits that were stochastic, leading to significantly modified trajectories after many orbital periods. Schwarzschild followed up this hint in the following year in a study with J. Goodman, “Semistochastic Orbits in a Triaxial Potential” (1981). Goodman and Schwarzschild tested the stability of box orbits by looking for exponential divergence of nearby trajectories. They noted that a large fraction of the box orbits were in fact chaotic, but that the chaos produced only modest changes in the shapes of the orbits over 50 oscillations. They coined the term “semi-stochasticity” to describe this phenomenon. The chaos was tentatively linked to the linear instability of the short- and intermediate-axis orbits.
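The divergence test itself is simple to sketch: integrate two neighbouring initial conditions and monitor the growth of their phase-space separation. The potential below is a generic triaxial logarithmic one, chosen only for illustration – it is not the Hubble-profile model of the 1979 paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, w, q1=0.9, q2=0.7, rc2=0.01):
    # Motion in Phi = 0.5 * ln(x^2 + y^2/q1^2 + z^2/q2^2 + rc^2)
    x, y, z, vx, vy, vz = w
    m2 = x**2 + (y / q1) ** 2 + (z / q2) ** 2 + rc2
    return [vx, vy, vz, -x / m2, -y / (q1**2 * m2), -z / (q2**2 * m2)]

w0 = np.array([1.0, 0.0, 0.3, 0.0, 0.4, 0.0])
w1 = w0 + 1.0e-8                     # perturbed neighbouring orbit
t = np.linspace(0.0, 200.0, 2001)
s0 = solve_ivp(rhs, (0.0, 200.0), w0, t_eval=t, rtol=1e-10, atol=1e-12)
s1 = solve_ivp(rhs, (0.0, 200.0), w1, t_eval=t, rtol=1e-10, atol=1e-12)
sep = np.linalg.norm(s0.y - s1.y, axis=0)
# log(sep) growing roughly linearly with t signals exponential divergence,
# i.e. a stochastic orbit; slow, bounded growth signals a regular one.
```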
Schwarzschild’s self-consistent triaxial models from 1979 and 1982 were based on the Hubble density profile, which has a large, constant-density core. It became increasingly clear throughout the 1980’s that the luminosity profiles of many galaxies might increase more steeply at small radii; indeed, Schwarzschild’s own Stratoscope observations of M31 and NGC 4151 had revealed pointlike nuclei in these galaxies. The behavior of box orbits is very sensitive to the central density of a triaxial model, and in 1989 Schwarzschild began to look in detail at the orbits in triaxial models with small or nonexistent cores. His two studies with J. Miralda-Escudé and J. F. Lees – “On the Orbit Structure of the Logarithmic Potential” (1989) and “The Orbital Structure of Galactic Halos” (1992) – revealed that the planar motion in centrally concentrated models is dominated by resonances, which generate families of orbits not seen in models with large cores. Schwarzschild, who was fiercely opposed to opaque terminology, gave these resonant orbits names that evoked their shapes like “banana,” “fish” and “pretzel;” these names have remained in widespread use. He also began to look in these papers at the behavior of orbits in potentials with central point masses representing black holes.
While Miller & Smith’s (1981) $`N`$-body study did not find any strong evidence for instability in Schwarzschild’s triaxial model, a number of examples of dynamical instabilities in other models of hot stellar systems began to be discussed at about this time. In “Orbital Contributions to the Stability of Triaxial Galaxies” (1989), de Zeeuw and Schwarzschild used an adiabatic deformation technique to evaluate the stability to small perturbations of Statler’s (1987) triaxial models based on the perfect ellipsoid. They found that the response of individual box orbits to barlike perturbations was often destabilizing, in the sense that the response density tended to reinforce the original perturbation; a similar mechanism drives the radial-orbit instability in spherical models. In “The Ring Instability in Radially Cold Oblate Models” (1991), the same authors investigated axisymmetric instabilities in oblate models constructed from thin tube orbits. They found that such models were unstable to radial clumping when sufficiently flat. These stability studies provided yet a further demonstration of the usefulness of an orbit-based approach to galaxy dynamics.
In one of his last papers, “Self-Consistent Models for Galactic Halos,” Schwarzschild revisited the triaxial self-consistency problem, this time using models based on the singular isothermal mass distribution. Such models are scale-free, which allowed Schwarzschild to construct orbit libraries by scaling the orbits computed at a single energy; the increase in efficiency enabled him to compute orbit libraries for six different choices of the model axis ratios. Schwarzschild found that most of the box orbits in these models were significantly stochastic, a rather different situation than he had been led to expect by his earlier work in two dimensions. He showed that the omission of the stochastic orbits could sometimes preclude a self-consistent solution, implying restrictions on the allowed shapes of isothermal halos. This study demonstrated clearly the importance of chaos in the phase space of realistic triaxial models and opened the door to a wealth of later studies of this fascinating topic.
## 5 Conclusion
It is sometimes said that a scientist’s career is over by the age of 35. One may safely assume that Martin Schwarzschild would have disagreed with this statement; in any case, all of the work cited here was published after that particular milestone had been passed. Without the contributions which Schwarzschild made in the late stages of his career, the field of galaxy dynamics would be an incomparably less rich and exciting one than it is today.
###### Acknowledgements.
I am indebted to the following people who provided details about Martin Schwarzschild’s research or unpublished work, or made helpful comments on the manuscript: C. Hunter, R. Miller, F. Schweizer, J. Sellwood, T. Statler, P. Teuben, S. Tremaine, T. van Albada, P. Vandervoort, T. Williams, and P. T. de Zeeuw.
# Measuring the equation of state of the intergalactic medium
## 1 INTRODUCTION
The intergalactic medium (IGM) at high redshift ($`z\sim 2`$–5) manifests itself observationally by absorbing light from distant quasars. Resonant Ly$`\alpha `$ absorption by neutral hydrogen along the line of sight to a quasar results in a fluctuating Ly$`\alpha `$ transmission (optical depth). Regions of enhanced density give rise to increased absorption and appear as a forest of absorption lines bluewards of the quasar’s Ly$`\alpha `$ emission line. The fact that not all the light is absorbed implies that the IGM is highly ionized \[Gunn & Peterson 1965\], but the time and origin of the reionization of the gas are still unknown. Since quasars and young stars are sources of ionizing radiation, the ionization history of the gas depends on the evolution of the quasar population and the star formation history of the universe.
Hydrodynamical simulations of structure formation in a universe dominated by cold dark matter and including an ionizing background have been very successful in explaining the properties of the Ly$`\alpha `$ forest \[e.g. Cen et al. 1994, Zhang, Anninos & Norman 1995, Hernquist et al. 1996, Miralda-Escudé et al. 1996, Theuns, Leonard & Efstathiou 1998, Theuns et al. 1998, Davé et al. 1999\]. They show that the low column density ($`N\lesssim 10^{14.5}\mathrm{cm}^{-2}`$) absorption lines arise in a smoothly varying low-density ($`\delta \lesssim 10`$) IGM. Since the overdensity is only mildly non-linear, the physical processes governing this medium are well understood and relatively easy to model. On large scales the dynamics are determined by gravity, while on small scales gas pressure is important. The existence of a simple physical framework and the abundance of superb observational data make the Ly$`\alpha `$ forest an extremely promising cosmological laboratory (see Rauch \[Rauch 1998\] for a review). In this paper, we shall investigate the effect of the thermal state of the IGM on the forest.
For the low-density gas responsible for the Ly$`\alpha `$ forest, shock heating is not important and the gas follows a well defined temperature-density relation. The competition between photoionization heating and adiabatic cooling results in a power-law ‘equation of state’ $`T=T_0(\rho /\overline{\rho })^{\gamma -1}`$ \[Hui & Gnedin 1997\]. This equation of state depends on cosmology and reionization history. For models with abrupt reionization, the IGM becomes nearly isothermal ($`\gamma \approx 1`$) at the redshift of reionization. After reionization, the temperature at the mean density ($`T_0`$) decreases while the slope ($`\gamma -1`$) increases because higher density regions undergo less expansion and increased photoheating. Eventually, when photoheating balances adiabatic cooling as the universe expands, the imprints of the reionization history are washed out and the equation of state approaches an asymptotic state, $`\gamma =1.62`$, $`T_0\propto \left[\mathrm{\Omega }_bh^2/\sqrt{\mathrm{\Omega }_mh^2}\right]^{1/1.7}`$ \[Hui & Gnedin 1997\]. Since the reionization history of the universe is still unknown, the physically reasonable ranges for the parameters of the equation of state are very large ($`10^{3.0}\mathrm{K}<\mathrm{T}_0<10^{4.5}\mathrm{K}`$ and $`1.2<\gamma <1.7`$ \[Hui, Gnedin & Zhang 1997, Hui & Gnedin 1997\]).
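To make the dependence concrete, the power-law temperature-density relation can be evaluated directly; the short Python sketch below does this for illustrative parameter values (the particular choices of $`T_0`$ and $`\gamma `$ here are ours, not measurements):

```python
def igm_temperature(overdensity, T0=1.0e4, gamma=1.5):
    """Temperature (K) of low-density IGM gas with rho/rho_bar = overdensity,
    under the power-law equation of state T = T0 (rho/rho_bar)^(gamma - 1).
    Parameter values are illustrative only."""
    return T0 * overdensity**(gamma - 1.0)

# Gas at twice the mean density is warmer than T0 for gamma > 1:
print(igm_temperature(2.0))   # ~1.4e4 K for T0 = 10^4 K, gamma = 1.5
```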
The smoothly varying IGM gives rise to a fluctuating optical depth in redshift space. Many of the optical depth maxima can be fitted quite accurately with Voigt profiles. The distribution of line widths depends on the initial power spectrum, the peculiar velocity gradients around the density peaks and on the temperature of the IGM. However, there is a lower limit to how narrow the absorption lines can be. Indeed, the optical depth will be smoothed on a scale determined by three processes \[Hui & Rutledge 1997\]: thermal broadening, baryon (Jeans) smoothing and possibly instrumental, or in the case of simulations, numerical resolution. The first two depend on the thermal state of the gas. While for high-resolution observations (echelle spectroscopy) the effective smoothing scale is not determined by the instrumental resolution, numerical resolution has in fact been the limiting factor in many simulations (see Theuns et al. \[Theuns et al. 1998\] for a discussion).
The distribution of line widths is generally expressed as the distribution of the widths of Voigt profile fits to the absorption lines, the $`b`$-parameters. While the first numerical simulations showed good agreement with the observed $`b`$-parameter distribution, higher resolution simulations of the standard cold dark matter model produced a larger fraction of narrow lines than observed \[Theuns et al. 1998, Bryan et al. 1998\]. Theuns et al. \[Theuns et al. 1998\] suggested that an increase in the temperature of the IGM might broaden the absorption lines, while Bryan et al. \[Bryan et al. 1998\] argued that the most natural way to broaden the lines is to change the density distribution directly. Note that increasing the temperature will also change the density distribution of the gas through increased baryon smoothing (Theuns, Schaye & Haehnelt 1999).
Theuns et al. \[Theuns et al. 1999a\] showed that changing the cosmology (lowering $`\mathrm{\Omega }_m`$ from 1.0 to 0.3 and doubling $`\mathrm{\Omega }_bh^2`$ to 0.025) significantly broadens the absorption lines, although some discrepancy with observations may remain. One way to increase the temperature further would be to change the reionization history. Uncertainties in the redshift of He reionization, in particular, can affect the temperature of the IGM. Haehnelt & Steinmetz \[Haehnelt & Steinmetz 1998\] demonstrated that different reionization histories result in observable differences in the $`b`$-distribution. Other mechanisms that have been proposed to boost the temperature are photoelectric heating of dust grains \[Nath, Sethi & Shchekinov 1999\], Compton heating by the hard X-ray background \[Madau & Efstathiou 1999\] and radiative transfer effects associated with the ionization of He ii by QSOs in the optically thick limit \[Abel & Haehnelt 1999\].
Unfortunately, the $`b`$-distribution is not very well suited for investigating the thermal state of the IGM. Although some of the broad lines correspond to density fluctuations on scales that are affected, among other things, by thermal smoothing \[Theuns et al. 1999b\], many are caused by heavy line blending and continuum fitting errors. The cutoff in the $`b`$-distribution, on the other hand, will depend mainly on the temperature of the IGM and is therefore potentially a powerful statistic. In practice its usefulness is limited because many narrow lines occur in the wings of broader lines. Such narrow lines are often introduced by numerical Voigt profile fitting algorithms (such as VPFIT \[Carswell et al. 1987\]) to improve the quality of the overall fit to the quasar spectrum. If the physical structure responsible for the absorption does not consist of discrete clouds, then the widths of these blended lines will have no relation to the thermal state of the gas. Furthermore, the number and widths of these narrow blended lines depend on the Voigt profile fitting algorithm that is used and on the signal to noise of the quasar spectrum.
The $`b`$-parameter distribution is usually integrated over a certain column density range. Since the $`b`$-distribution might depend on column density ($`N`$), more information is contained in the full $`b(N)`$-distribution. Scatter plots of the $`b(N)`$-distribution have been published for many observed QSO spectra \[e.g. Hu et al. 1995, Lu et al. 1996, Kirkman & Tytler 1997, Kim et al. 1997\]. These plots show a clear cutoff at low $`b`$-parameters. However, this cutoff is not absolute. There are some narrow lines, especially at low column densities. Lu et al. \[Lu et al. 1996\] and Kirkman & Tytler \[Kirkman & Tytler 1997\] use Monte Carlo simulations to show that many of these lines are caused by line blending and noise in the data. Some contamination from unidentified metal lines is also expected.
The cutoff in the $`b(N)`$-distribution increases slightly with column density. Hu et al. \[Hu et al. 1995\] conclude from Monte Carlo simulations that this correlation is primarily an artifact of the much larger number of lines at lower column density and the increased scatter in the $`b`$-determinations. Kirkman & Tytler \[Kirkman & Tytler 1997\], however, conclude from similar simulations that the correlation between the lowest $`b`$-values and column density is a real physical effect. (Note that this correlation is different from the one reported by Pettini et al. \[Pettini et al. 1990\]. They found a general correlation between the $`b`$-parameter and column density of all lines. This correlation was later shown to be an artifact of the line selection and fitting procedure \[Rauch et al. 1993\].) A lower envelope which increases with column density has also been seen in numerical simulations \[Zhang et al. 1997\].
In this paper we shall demonstrate that the cutoff in the $`b(N)`$-distribution is determined by the equation of state of the low-density gas. Furthermore, we shall show that the cutoff can be determined robustly and is unaffected by systematics like changes in cosmology (for a fixed equation of state) and can therefore be used to measure the equation of state of the IGM.
We test our methods using smoothed-particle hydrodynamic (SPH) simulations of the Ly$`\alpha `$ forest as described by Theuns et al. \[Theuns et al. 1998, Theuns et al. 1999a\]. The parameters of the simulations are summarised in section 2. The relation between the $`b(N)`$-cutoff and the equation of state is investigated in section 3, which forms the heart of the paper. Section 4 contains a detailed description of the procedure used to fit the cutoff in simulated Keck spectra. Systematic effects are discussed in section 5. In section 6 we test the procedure using Monte Carlo simulations. Finally, we summarise and discuss the main results in section 7.
## 2 SIMULATIONS
We have simulated six different cosmological models, characterised by their total matter density $`\mathrm{\Omega }_m`$, the value of the cosmological constant $`\mathrm{\Omega }_\mathrm{\Lambda }`$, the rms of mass fluctuations in spheres of radius 8$`h^{-1}`$ Mpc, $`\sigma _8`$, the baryon density $`\mathrm{\Omega }_bh^2`$ and the present day value of the Hubble constant, $`H_0\equiv 100h\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$. The parameters of these models are summarised in Table 1. In addition to these models, we simulated a model that has the same parameters as model Ob, but with the He i and He ii heating rates artificially doubled. This may provide a qualitative model of the heating due to radiative transfer effects during the reionization of helium \[Abel & Haehnelt 1999\]. We will call this model Ob-hot. The amplitude of the initial power spectrum is normalised to the observed abundance of galaxy clusters at $`z=0`$, using the fits computed by Eke, Cole & Frenk \[Eke, Cole & Frenk 1996\]. We model the evolution of a periodic, cubic region of the universe of comoving size $`2.5h^{-1}`$ Mpc.
The code used is adapted from the HYDRA code of Couchman et al. \[Couchman, Thomas & Pearce 1995\], which uses smooth particle hydrodynamics (SPH) \[Lucy 1970, Gingold & Monaghan 1977\]; see Theuns et al. \[Theuns et al. 1999a, Theuns et al. 1998\] for details. These simulations use $`64^3`$ particles of each species, so the SPH particle masses are $`1.65\times 10^6(\mathrm{\Omega }_bh^2/0.0125)(h/0.5)^{-3}\mathrm{M}_{\odot }`$ and the CDM particles are more massive by a factor $`\mathrm{\Omega }_{\mathrm{CDM}}/\mathrm{\Omega }_b`$. This resolution is sufficient to simulate line widths reliably \[Theuns et al. 1998, Bryan et al. 1998, note that in the hotter simulations numerical convergence will be even better than in the cooler model S, which was investigated in detail by Theuns et al. \[Theuns et al. 1998\]\].
We assume that the IGM is ionized and photoheated by an imposed uniform background of UV-photons that originates from quasars, as computed by Haardt & Madau \[Haardt & Madau 1996\]. This flux is redshift dependent, due to the evolution of the quasar luminosity function. The amplitude of the flux is indicated as ‘HM’ in the $`\mathrm{\Gamma }_{\mathrm{HI}}`$ column of Table 1 (where $`\mathrm{\Gamma }_{\mathrm{HI}}`$ is the H i ionization rate due to the ionizing background). For the low $`\mathrm{\Omega }_bh^2`$ models, we have divided the ionizing flux by two, indicated as ‘HM/2’. We do not impose thermal equilibrium but solve the rate equations to track the abundances of H i, H ii and He i, He ii and He iii. We assume a helium abundance of $`Y=0.24`$ by mass. See Theuns et al. \[Theuns et al. 1998\] for further details.
At several output times we compute simulated spectra along 1200 random lines of sight through the simulation box. Each spectrum is convolved with a Gaussian with full width at half maximum of FWHM = 8 $`\mathrm{km}\mathrm{s}^{-1}`$, then resampled onto pixels of width 3 $`\mathrm{km}\mathrm{s}^{-1}`$ to mimic the instrumental profile and characteristics of the HIRES spectrograph on the Keck telescope. We rescale the background flux in the analysis stage such that the mean effective optical depth at a given redshift in all models is the same as for the Ob model. This model has a mean absorption in good agreement with observations \[Rauch et al. 1997\] (Ob has $`\overline{\tau }_{\mathrm{eff}}=0.93`$, 0.33 and 0.14 at $`z=4`$, 3 and 2). Finally, we add to the flux in every pixel a Gaussian random signal with zero mean and standard deviation $`\sigma =0.02`$ to mimic noise. The spectra cover a small enough velocity range to be fit by a flat continuum, chosen by a simple procedure \[Theuns et al. 1998\] described as follows. A low average continuum is assumed initially; then all pixels lying more than 1 $`\sigma `$ below this level are rejected and a new average flux level is computed for the remaining pixels. The last two steps are repeated until the average flux varies by less than 1%. This final average flux level is adopted as the fitted continuum and the spectrum is renormalised accordingly. The absorption features in these mock observations are then fitted with Voigt profiles using an automated version of VPFIT \[Carswell et al. 1987\].
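The continuum fitting iteration just described is simple enough to sketch in a few lines of Python; the function below is our own minimal rendering of the procedure, not the original analysis code, and the convergence details may differ:

```python
import numpy as np

def fit_continuum(flux, sigma=0.02, tol=0.01):
    """Iteratively estimate a flat continuum level for a short spectrum.

    Starting from a low initial estimate (the mean flux), pixels lying
    more than one sigma below the current level are rejected and the
    mean of the remaining pixels is recomputed, until the level changes
    by less than tol (1 per cent). Sketch only."""
    level = np.mean(flux)
    while True:
        keep = flux > level - sigma          # reject strongly absorbed pixels
        new_level = np.mean(flux[keep])
        if abs(new_level - level) < tol * level:
            return new_level
        level = new_level

# The spectrum is then renormalised by the fitted level:
# flux_normalised = flux / fit_continuum(flux)
```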
Although the simulated models have different equations of state, they cover only a limited part of the possible parameter space $`(T_0,\gamma )`$. The models were originally intended to investigate the dependence of QSO absorption line statistics on cosmology \[Theuns et al. 1999a\]. Changing the reionization history can lead to very different values of $`T_0`$ and $`\gamma `$. To quantify the relation between the cutoff in the $`b(N)`$-distribution and the equation of state, it is necessary to include models covering a wide range of $`T_0`$ and $`\gamma `$. We therefore created models with particular values of $`T_0`$ and $`\gamma `$ by imposing an equation of state on model Ob. This was done by moving the SPH particles in the temperature-density plane. The new models have the same three components (low- and high-density power-law equations of state and shocked gas) as the original model, but a different equation of state for the low-density gas. In particular, the intrinsic scatter around the power-law is left unchanged.
## 3 THE $`b(N)`$-CUTOFF AND THE EQUATION OF STATE
Fig. 1 shows a contour plot of the mass-weighted distribution of fluid elements (SPH particles) from a numerical simulation in the temperature-density diagram. Noting that the number density of fluid elements increases by an order of magnitude with each contour level, it is clear that the vast majority of the low-density gas ($`\rho _b/\overline{\rho }_b\lesssim 10`$) follows a power-law equation of state (dashed line). The two other components visible in Fig. 1 are hot, shocked gas which cannot cool within a Hubble time and colder, high-density gas for which H i and He ii line cooling is effective.
In Fig. 2a we plot the $`b(N)`$-distribution for 800 random absorption lines taken from the spectra of model Ob-hot at redshift $`z=3`$. A cutoff at low $`b`$-values, which increases with column density, can clearly be seen. In Fig. 2b only those lines for which VPFIT gives formal errors in both $`b`$ and $`N`$ that are smaller than 50 per cent are plotted. This excludes most of the lines that have column density $`N\lesssim 10^{12.5}\mathrm{cm}^{-2}`$ as well as many of the very narrow lines below the cutoff. Although the formal errors of the Voigt profile fit given by VPFIT have only limited physical significance, lines in blends tend to have large errors. Since the $`b`$-parameters for blended lines can have values smaller than the minimum set by the thermal smoothing scale (i.e. thermal broadening and baryon smoothing), these lines will tend to smooth out any intrinsic cutoff. Removing the lines with the largest relative errors therefore results in a sharper cutoff. A smaller maximum allowed error would result in the removal of many of the regular, isolated lines.
The large number of data points plotted in Fig. 2a excludes the possibility that the slope in the cutoff is due to the large decrease in the number of lines with column density or the increase in scatter with decreasing column density (Hu et al. \[Hu et al. 1995\] reached this conclusion from analysing spectra that had about 250 absorption lines each).
The $`b(N)`$-distribution for the colder model S is plotted in Fig. 2c. Clearly, the distribution cuts off at lower $`b`$-values. Let us assume that the absence of lines with low $`b`$-values is due to the fact that there is a minimum line width set by the thermal state of the gas through the thermal broadening and/or baryon smoothing scales. Since the temperature of the low-density gas responsible for the Ly$`\alpha `$ forest increases with density (Fig. 1), we expect the minimum $`b`$-value to increase with column density, provided that the column density correlates with the density of the absorber.
To see whether this picture is correct, we need to investigate the relation between the Voigt profile parameters $`N`$ and $`b`$, and the density and temperature of the absorbing gas respectively. Peculiar velocities and thermal broadening make it difficult to identify the gas contributing to the optical depth at a particular point in redshift space. Furthermore, the centre of the Voigt profile fit to an absorption line will often be offset from the point where the optical depth is maximum. We therefore need to define a temperature and a density which are smooth and take redshift space distortions into account. We choose to use optical depth weighted quantities: the density of a pixel in velocity space is the sum, weighted by optical depth, of the density at all the pixels in real space that contribute to the optical depth of that pixel in velocity space. We then define the density corresponding to an absorption line to be the optical depth weighted density at the line centre. The temperature corresponding to absorption lines is defined similarly.
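A minimal sketch of this weighting (in Python; the array names and the precomputed real-to-velocity-space mapping are ours) may clarify the bookkeeping. Given the optical depth that each real-space pixel contributes to each velocity-space pixel, the weighted density and temperature follow directly:

```python
import numpy as np

def tau_weighted(quantity, tau_matrix):
    """Optical-depth-weighted value of a real-space quantity at each
    velocity-space pixel.

    tau_matrix[i, j] is the optical depth contributed by real-space
    pixel j to velocity-space pixel i; it encodes the peculiar
    velocities and thermal broadening. Sketch only."""
    weights = tau_matrix / tau_matrix.sum(axis=1, keepdims=True)
    return weights @ quantity

# rho_vel = tau_weighted(rho_real, tau_matrix)   # density in velocity space
# T_vel   = tau_weighted(T_real,   tau_matrix)   # temperature in velocity space
```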
In Fig. 3 the optical depth weighted density and temperature are plotted for two random lines of sight (dashed lines in the middle two panels), as well as the flux (without noise), and real space density, temperature and peculiar velocity. The dashed curves in the top panels are the Voigt profiles fitted by VPFIT, vertical lines indicate the line centres. Absorption lines correspond to peaks in the (optically depth weighted) density and temperature, which are strongly correlated. Although the blends indicated by arrows can be traced back to substructure in the peaks, their profiles are mainly determined by the density and temperature of the gas in the main peaks.
In Fig. 4 the optical depth weighted gas density is plotted as a function of column density for the absorption lines plotted in Fig. 2b. There exists a tight correlation between these two quantities. Note that lines with column densities $`\lesssim 10^{13}\mathrm{cm}^{-2}`$ correspond to local maxima in underdense regions.
The optical depth weighted temperature is plotted against the $`b`$-parameter in Fig. 5a. The result is a scatter plot with no apparent correlation. This is not surprising since many absorbers will be intrinsically broader than the local thermal broadening scale. In order to test whether the cutoff in the $`b(N)`$-distribution is a consequence of the existence of a minimum line width set by the thermal state of the gas, we need to look for a correlation between the temperature and $`b`$-parameters of the lines near the cutoff.
Determining a cutoff in an objective manner is nontrivial because of the existence of unphysically narrow lines in blends. We developed a fitting algorithm that is insensitive to these lines. This algorithm is described in the next section. Fig. 6 is a scatter plot of the lines with column density in the range $`10^{12.5}\mathrm{cm}^{-2}\lesssim N\lesssim 10^{14.5}\mathrm{cm}^{-2}`$. The cutoff fitted to this distribution is also shown (solid line). The lines that are used in the final iteration of the fitting algorithm, i.e. lines that are close to the solid line in Fig. 6, do indeed display a tight correlation between the temperature and $`b`$-parameter (Fig. 5b). The dashed line in Fig. 5b corresponds to the thermal width, $`b=(2k_BT/m_p)^{1/2}`$, where $`m_p`$ is the mass of a proton and $`k_B`$ is the Boltzmann constant. Lines corresponding to density peaks whose width in velocity space is much smaller than the thermal width have Voigt profiles with this $`b`$-parameter. Since the temperature plotted in Fig. 5 is the smooth, optical depth weighted temperature, we do not expect the relation between $`T`$ and $`b`$ to be identical to the one indicated by the dashed line, even if all the line widths were purely thermal. Although other definitions of the density and temperature are possible and will give slightly different results, qualitatively the results will be the same for any sensible definition of these physical quantities. Figs. 4 and 5b therefore suggest that the cutoff in the $`b(N)`$-distribution should be strongly correlated with the equation of state of the absorbing gas.
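For reference, the purely thermal line width quoted above is straightforward to evaluate; a one-function sketch (Python, SI constants):

```python
import numpy as np

K_B = 1.380649e-23   # Boltzmann constant, J/K
M_P = 1.672622e-27   # proton mass, kg

def b_thermal(T):
    """Thermal b-parameter b = (2 k_B T / m_p)^(1/2) for hydrogen,
    returned in km/s for a temperature T in K."""
    return np.sqrt(2.0 * K_B * T / M_P) / 1.0e3

print(b_thermal(1.0e4))   # ~12.9 km/s at T = 10^4 K
```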
Let us look in more detail at the relation between the $`b(N)`$-cutoff and the equation of state. We have already shown (Fig. 1) that almost all the low-density gas follows a power-law equation of state:
$$\mathrm{log}(T)=\mathrm{log}(T_0)+(\gamma -1)\mathrm{log}(\rho /\overline{\rho }).$$
(1)
The relations between the density/temperature and column density/$`b`$-parameter of the absorption lines near the cutoff can also be fitted by power-laws (Fig. 4 and Fig. 5b):
$$\mathrm{log}(\rho /\overline{\rho })=A+B\mathrm{log}(N/N_0),$$
(2)
$$\mathrm{log}(T)=C+D\mathrm{log}(b).$$
(3)
Combining these equations, we find that the $`b(N)`$-cutoff is also a power-law,
$$\mathrm{log}(b)=\mathrm{log}(b_0)+(\mathrm{\Gamma }-1)\mathrm{log}(N/N_0),$$
(4)
whose coefficients are given by,
$$\mathrm{log}(b_0)=\frac{1}{D}\left[\mathrm{log}(T_0)-C+(\gamma -1)A\right],$$
(5)
$$\mathrm{\Gamma }-1=\frac{B}{D}(\gamma -1).$$
(6)
Hence the intercept of the cutoff, $`\mathrm{log}b_0`$, depends on both the amplitude and the slope of the equation of state, while the slope of the cutoff, $`\mathrm{\Gamma }-1`$, is proportional to the slope of the equation of state. If we set $`N_0`$ equal to the column density corresponding to the mean gas density, then the coefficient $`A`$ vanishes and $`\mathrm{log}b_0`$ no longer depends on $`\gamma `$.
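The mapping between the two power-laws is easy to encode; the sketch below (Python) assumes that the coefficients $`A`$, $`B`$, $`C`$ and $`D`$ have already been calibrated from simulations, as in Figs. 4 and 5b:

```python
def cutoff_from_eos(log_T0, gamma, A, B, C, D):
    """Predicted b(N)-cutoff parameters (equations 5 and 6) for a given
    equation of state. A, B, C, D are the calibration coefficients of
    equations 2 and 3; they must be measured from simulations."""
    log_b0 = (log_T0 - C + (gamma - 1.0) * A) / D
    Gamma = 1.0 + (B / D) * (gamma - 1.0)
    return log_b0, Gamma
```

Inverting these two relations recovers $`\mathrm{log}T_0`$ and $`\gamma `$ from a measured cutoff, which is how the method is applied to data.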
The cutoff in the $`b(N)`$-distribution is measured over a certain column density range. We choose to measure the cutoff over the interval $`10^{12.5}\mathrm{cm}^{-2}\lesssim N\lesssim 10^{14.5}\mathrm{cm}^{-2}`$. This interval corresponds roughly to the gas density range for which the equation of state is well fitted by a power-law. For lines with column density $`N\lesssim 10^{12.5}\mathrm{cm}^{-2}`$, the scatter in the observations becomes very large due to noise and line blending. The relation between the gas overdensity and H i column density depends on redshift. Hence different column density intervals should be used for different redshifts if one wants to compare the equation of state for the same density range.
Fitting a cutoff to a finite number of lines introduces statistical uncertainty in the measured coefficients. We minimize the correlation between the errors in the coefficients by subtracting the average abscissa value (i.e. $`\mathrm{log}N`$) before fitting the cutoff. This can be done by setting $`\mathrm{log}N_0`$ in equation 4 equal to the mean $`\mathrm{log}N`$ of the lines in the column density range over which the cutoff is measured. This column density does not in general correspond to the mean gas density and hence $`\mathrm{log}b_0`$ will in general depend on $`\gamma `$. We will show below that this dependence can be removed by renormalising the equation of state.
In Fig. 7 we plot the temperature at mean density predicted from the power-law model (equation 5) as a function of the true $`T_0`$. Data points are determined by fitting power-laws to 500 sets of 300 random absorption lines. Error bars enclose 68 per cent confidence intervals around the medians. Solid circles are used for data from simulated models, open squares are used for models created by imposing an equation of state on model Ob. These conventions will be used throughout the paper. Fig. 8 is a similar plot for the slope of the equation of state (equation 6). The predicted and true parameters of the equation of state are highly correlated. The slight offset between the predicted and true quantities simply reflects the fact that the optical depth weighted density and temperature are not exactly the same as the true density and temperature of the absorbing gas (i.e. the $`T`$ and $`\rho `$ appearing in equation 1 are not exactly the same as those appearing in equations 2 and 3). The main conclusion to draw from these plots is that the power-law model works and that we can therefore use these equations to gain insight into the relationship between the equation of state and the cutoff in the $`b(N)`$-distribution.
The objective is to establish the relations between the cutoff parameters and the equation of state using simulations. These relations can then be used to measure the equation of state of the IGM using the observed cutoff in the $`b(N)`$-distribution. The amplitudes of the power-law fits to the cutoff and the equation of state are plotted against each other in Fig. 9. The relation between $`\mathrm{log}b_0`$ and $`\mathrm{log}T_0`$ is linear, implying that the coefficients appearing in equation 5 do not vary strongly with cosmology. The error bars, which indicate the dispersion in the cutoff of sets of 300 lines (typical for $`z=3`$), are small compared to the differences between the models. This means that measuring the cutoff in a single QSO spectrum can provide significant constraints on theoretical models.
The slope of the cutoff, $`\mathrm{\Gamma }-1`$, is plotted against $`\gamma `$ in Fig. 10. The relation between the two is linear, but $`\mathrm{\Gamma }`$ increases only slowly with $`\gamma `$. The dispersion in the slope of the cutoff for a fixed equation of state is comparable to the difference between the models (the simulated models all have similar values of $`\gamma `$ because they all have the same UV background). The weak dependence of $`\mathrm{\Gamma }`$ on $`\gamma `$ and the large spread in the measured $`\mathrm{\Gamma }`$ will make it difficult to put tight constraints on the slope of the equation of state.
### 3.1 Correlations
The intercept of the cutoff in the $`b(N)`$-distribution is a measure of the temperature at the characteristic density of the absorbers corresponding to the lines used to fit the cutoff. In general this is not the mean density and consequently the translation from the intercept, $`b_0`$, to the temperature at mean density, $`T_0`$, depends on the slope $`\gamma `$. This is illustrated in the left panel of Fig. 11, where the measured values of $`b_0`$ for models that have identical values of $`T_0`$, but a range of $`\gamma `$-values are compared. As predicted by the power-law model (equation 5), $`\mathrm{log}b_0`$ increases linearly with $`\gamma `$.
In principle, the index $`\gamma `$ can be measured using the slope of the cutoff, to which it is proportional (equation 6). However, $`\mathrm{\Gamma }`$ increases only slowly with $`\gamma `$ and the statistical variance in $`\mathrm{\Gamma }`$ is large, making it hard to put tight constraints on $`\gamma `$. It appears therefore that even though $`\mathrm{log}b_0`$ is very sensitive to $`\mathrm{log}T_0`$, and can be measured very precisely, the uncertainty in $`T_0`$ will be relatively large due to the weak constraints on $`\gamma `$. It is important to realise that any statistic that is sensitive to the temperature depends in general on both $`T_0`$ and $`\gamma `$. It is for example impossible to determine $`T_0`$ by fitting the $`b`$-parameter distribution at $`N\ne N(\overline{\rho })`$. Since this statistic is sensitive to the temperature at the density corresponding to a column density $`N`$, any equation of state $`(T_0,\gamma )`$ that has the correct temperature at this density will fit the data.
Although it is conventional to normalise the equation of state to the temperature at the mean density, it can in principle be normalised at any density, $`\rho _\delta \equiv \overline{\rho }(1+\delta )`$ say,
$$\mathrm{log}T=\mathrm{log}T_\delta +(\gamma -1)\mathrm{log}(\rho /\rho _\delta ).$$
(7)
Similarly, equations 2 and 4 can be generalised to
$$\mathrm{log}(\rho /\rho _\delta )=B\mathrm{log}(N/N_\delta ),$$
(8)
$$\mathrm{log}b=\mathrm{log}b_0+(\mathrm{\Gamma }-1)\mathrm{log}(N/N_\delta ),$$
(9)
where $`N_\delta \equiv N(\rho =\rho _\delta )`$ and consequently the coefficient $`A`$ vanishes. In practice, the cutoff is fitted over a given column density interval and $`\mathrm{log}N_\delta `$ is set equal to the mean $`\mathrm{log}N`$ of lines in this interval. The measured cutoff is then converted into an equation of state normalised at the corresponding density, $`\rho _\delta =\rho (\mathrm{log}N_\delta =\mathrm{log}N)`$. Equation 5 then becomes $`\mathrm{log}b_0=(\mathrm{log}T_\delta -C)/D`$ and the intercept of the cutoff depends only on the amplitude of the equation of state.
At redshift $`z=3`$, using the column density interval $`10^{12.5}\mathrm{cm}^{-2}\lesssim N\lesssim 10^{14.5}\mathrm{cm}^{-2}`$, $`N_\delta =10^{13.6}\mathrm{cm}^{-2}`$ and $`\delta \approx 2`$. In the right panel of Fig. 11 the intercept of the cutoff is plotted as a function of $`\gamma `$ for a set of models that all have the same temperature at this density. As expected, the intercept is insensitive to the slope of the equation of state.
In summary, the intercept of the $`b(N)`$-cutoff is a measure of the temperature of the gas responsible for the absorption lines that are used to determine the cutoff. If we normalise the equation of state to the temperature at the characteristic density of the gas, then the intercept of the cutoff depends only on the amplitude of the equation of state. The slope of the cutoff is always determined by the slope of the equation of state. Hence the cutoff in the $`b(N)`$-distribution can be used to determine both $`T_\delta `$, where $`\delta `$ is the density contrast corresponding to the mean $`\mathrm{log}N`$ of the lines used in the fit, and $`\gamma `$. The temperature at mean density, $`T_0`$, depends on both $`T_\delta `$ and $`\gamma `$ and therefore on both the intercept and the slope of the cutoff.
## 4 MEASURING THE CUTOFF
The main problem in measuring the cutoff in the $`b(N)`$-distribution is the fact that it is contaminated by spurious narrow lines. Line blending and blanketing, noise and the presence of unidentified metal lines all give rise to absorption lines that are narrower than the lower limit to the line width set by the thermal state of the gas. We have therefore developed an iterative procedure for fitting the cutoff that is insensitive to the presence of a small number of narrow lines. In order to minimize the effects of outliers, robust least absolute deviation fits are used. The first step is to fit a power-law to the entire set of lines. Then the lines that have $`b`$-parameters more than one mean absolute deviation above the fit are removed and a power-law is fitted to the remaining lines. These last two steps are repeated until convergence is achieved. Finally, the lines more than one mean absolute deviation below the fit are also taken out and the fit to the remaining lines is the measured cutoff. The algorithm works very well if there are not too many unphysically narrow lines. At the lowest column densities ($`\lesssim 10^{12.5}\mathrm{cm}^{-2}`$) however, blends dominate and the cutoff is washed out.
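The iteration can be written down compactly; the Python sketch below is our own minimal rendering of the procedure, not the original code, and uses a simple least absolute deviation fitter built on scipy:

```python
import numpy as np
from scipy.optimize import minimize

def lad_fit(x, y):
    """Least absolute deviation fit of a straight line y = a + b*x."""
    cost = lambda p: np.abs(y - p[0] - p[1] * x).sum()
    slope, intercept = np.polyfit(x, y, 1)          # least-squares start
    return minimize(cost, [intercept, slope], method='Nelder-Mead').x

def fit_cutoff(logN, logb, max_iter=50):
    """Iterative power-law fit to the b(N)-cutoff (sketch).

    Lines more than one mean absolute deviation above the current fit
    are removed and the fit repeated until the selection converges;
    finally, lines more than one deviation below the fit are also
    removed and the last fit is returned as the cutoff."""
    sel = np.ones(len(logN), dtype=bool)
    for _ in range(max_iter):
        a, b = lad_fit(logN[sel], logb[sel])
        res = logb - (a + b * logN)
        mad = np.abs(res[sel]).mean()
        new_sel = sel & (res < mad)                 # drop high outliers
        if new_sel.sum() == sel.sum():
            break
        sel = new_sel
    sel &= (res > -mad)                             # drop low outliers
    a, b = lad_fit(logN[sel], logb[sel])
    return a, b                                     # intercept and Gamma - 1
```

In practice the mean $`\mathrm{log}N`$ should be subtracted from the abscissa first, as discussed in section 3, so that the fitted intercept corresponds to $`\mathrm{log}b_0`$.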
Fortunately, there are ways to take out most of these unphysically narrow lines. We have already shown (Fig. 2) that removing lines with large relative errors in the Voigt profile parameters significantly sharpens the cutoff. We choose to consider only those lines with relative errors smaller than 50 per cent. A smaller maximum allowed error would result in the removal of many of the regular, isolated lines.
Another cut in the set of absorption lines can be made on the basis of theoretical arguments. Assuming that absorption lines arise from peaks in the optical depth $`\tau `$, and assuming that $`\mathrm{ln}\tau `$ is a Gaussian random variable (as is the case for linear fluctuations), Hui & Rutledge \[Hui & Rutledge 1997\] derive a single parameter analytical expression for the $`b`$-distribution:
$$\frac{dN}{db}\propto \frac{b_\sigma ^4}{b^5}\mathrm{exp}\left[-\frac{b_\sigma ^4}{b^4}\right],$$
(10)
where $`b_\sigma `$ is determined by the average amplitude of the fluctuations and by the effective smoothing scale. Fig. 12 shows the $`b`$-parameter distribution for the lines plotted in Fig. 6 and the best-fitting Hui-Rutledge function (dashed line). The $`b`$-value below which the fitted Hui-Rutledge function effectively vanishes (dotted line) corresponds to the dashed line in Fig. 6. The $`b`$-distribution has a tail of narrow lines which is not present in the theoretical Hui-Rutledge function. These lines are indicated by diamonds in Fig. 6. Direct inspection shows that all these lines occur in blends. Two examples are the lines indicated by arrows in Fig. 3. The size of the low $`b`$-tail depends on the number of blended lines and therefore on the signal to noise of the spectrum and on the fitting procedure. However, we find that in general, virtually all of the lines with $`b`$-values smaller than the cutoff in the fitted Hui-Rutledge function are blends. We therefore remove these lines before fitting the cutoff in the $`b(N)`$-distribution.
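One convenient way to fit this one-parameter form (a maximum-likelihood sketch of ours; the fit to the binned distribution may be done differently in practice) uses the fact that the normalised density $`p(b)=4b_\sigma ^4b^{-5}\mathrm{exp}(-b_\sigma ^4/b^4)`$ yields a closed-form estimator:

```python
import numpy as np

def fit_hui_rutledge(b_values):
    """Maximum-likelihood b_sigma for dn/db ~ (b_sigma^4/b^5) exp(-b_sigma^4/b^4).

    Setting d(ln L)/d(b_sigma) = 0 gives b_sigma^4 = n / sum(b_i^-4).
    Sketch only; fitting the binned histogram may differ in detail."""
    b = np.asarray(b_values, dtype=float)
    return (len(b) / np.sum(b**-4.0))**0.25
```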
Fig. 13 illustrates the effect of the two cuts (relative errors and Hui-Rutledge function). The probability distributions for the parameters of the fitted cutoff, $`b_0`$ and $`\mathrm{\Gamma }`$, are plotted for different cuts. The dotted lines are the distributions resulting from fitting the cutoff for the complete set of lines. Removing the lines with large errors or those with $`b`$-values smaller than the cutoff of the Hui-Rutledge fit results in a *smaller* intercept $`\mathrm{log}b_0`$ (the removal of very narrow lines reduces the scatter around the cutoff and therefore causes the algorithm to converge at a lower mean absolute deviation; more iterations are needed before convergence is obtained, yielding a lower final cutoff). It makes no difference which cut is applied. Applying a cut in error-space does not affect the slope of the cutoff, $`\mathrm{\Gamma }-1`$. However, taking out the lines below the Hui-Rutledge cutoff removes the low-$`b`$, low-$`N`$ tail without affecting the higher column density end, and therefore yields a smaller slope.
A typical QSO spectrum at $`z\sim 3`$ has about 300 Ly$`\alpha `$ absorption lines between its Ly$`\alpha `$ and Ly$`\beta `$ emission lines with column densities in the range $`10^{12.5}\mathrm{cm}^{-2}\lesssim N\lesssim 10^{14.5}\mathrm{cm}^{-2}`$. The number density of Ly$`\alpha `$ lines decreases rapidly with decreasing redshift. At $`z\sim 2`$ there are typically fewer than 100 lines. The fact that the number of lines in an observed Ly$`\alpha `$ forest is finite introduces statistical variance. We therefore use many (500) realizations to determine the full probability distributions of the parameters of the cutoff.
Fig. 14 illustrates the effects of changing the number of absorption lines. The algorithm is surprisingly insensitive to the number of lines. The parameters of the cutoff vary only slightly for 60 lines or more. The variance does decrease as the number of lines increases, but remains almost the same for more than 200 lines. This suggests that the method should work even with the small number of lines per spectrum at $`z2`$. It also opens up the possibility of splitting higher redshift spectra into redshift bins. The fact that the cutoff depends weakly on the number of lines is not a problem, since we can determine the relation between the cutoff and the equation of state for any number of lines, in particular for the number of lines in an observed spectrum.
While the statistical variance in the intercept of the $`b(N)`$-cutoff is small, the variance in the slope is comparable to the differences between models. Fortunately, we can do better than measuring the cutoff for the complete set of lines in an observed spectrum. The bootstrap method (drawing $`n`$ random lines from the complete set of $`n`$ lines, with replacement) can be used to generate a large number of synthetic data sets. These data sets can then be used to obtain approximations to the probability distributions for the parameters of the cutoff. Since bootstrap resampling replaces a random fraction of the original lines by duplicated original lines, a smaller fraction of the $`b`$-$`N`$ space around the cutoff is filled and the variance in the measured cutoff increases. Although the bootstrap distribution is generally broader than the true distribution, its median is a robust estimate of the true median. When dealing with observed spectra, we will use the medians of the bootstrap distributions as our best estimates of the parameters of the $`b(N)`$-cutoff. In section 6 we will use Monte Carlo simulations to estimate the variance in the bootstrap medians.
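A sketch of the bootstrap step (Python, reusing the hypothetical fit_cutoff sketch from earlier in this section):

```python
import numpy as np

def bootstrap_cutoff(logN, logb, n_boot=1000, seed=None):
    """Bootstrap distribution of the b(N)-cutoff parameters (sketch).

    The n lines are resampled with replacement n_boot times and the
    cutoff refit each time; the medians of the resulting distributions
    are robust estimates of the true parameters."""
    rng = np.random.default_rng(seed)
    n = len(logN)
    fits = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)      # resample with replacement
        fits.append(fit_cutoff(logN[idx], logb[idx]))
    fits = np.array(fits)
    return np.median(fits, axis=0), fits
```

Summing the bootstrap distributions from several spectra, as described above, then amounts to concatenating the rows of the returned arrays before taking the medians.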
## 5 SYSTEMATIC EFFECTS
In section 3 we established the relation between the cutoff in the $`b(N)`$-distribution and the equation of state of the low-density gas. In this section we will investigate whether other processes can change this relation.
### 5.1 Cosmology
Cosmology affects not only the equation of state, but also determines the evolution of structure. Theuns et al. \[Theuns et al. 1999a\] showed that the $`b`$-parameter distribution depends on cosmology. In particular, they showed that the effect of peculiar velocity gradients on the line widths can be very different in different CDM variants. We have recomputed simulated spectra for model S after imposing the equation of state of the significantly hotter model Ob. In Fig. 15 the probability distributions of the parameters of the $`b(N)`$-cutoff are compared for model Ob (solid lines) and this new model, S-hot (dashed). The distributions are almost indistinguishable. Also plotted is model Ob-vel (dot-dashed), which was created by setting all peculiar velocities in model Ob to zero. Again, the probability distributions are almost unchanged.
The line widths of many of the low column density lines are dominated by the Hubble flow. Fig. 16 illustrates the effect that changing the Hubble expansion has on the cutoff in the $`b(N)`$-distribution. The obvious way to change the Hubble flow is to change the Hubble constant. However, the equation of state also depends on the value of the Hubble constant. In order to isolate the effect of the Hubble flow, we changed the value of the Hubble constant in the analysis stage, i.e. just before computing the spectra, keeping the equation of state fixed. Increasing the value of the Hubble parameter at $`z=3`$ from a corresponding present day value of $`h=0.65`$ to 0.8 has no effect on the cutoff. Lowering $`h`$ to 0.5 shifts the slope to slightly larger values, but leaves the intercept unchanged.
We conclude that, unlike the $`b`$-distribution, the cutoff in the $`b(N)`$-distribution is independent of the assumed CDM model for a fixed equation of state.
### 5.2 Mean absorption
The optical depth in neutral hydrogen is proportional to the quantity $`\mathrm{\Omega }_b^2h^3/\mathrm{\Gamma }_{\mathrm{HI}}`$. Since it is still unclear what the dominant source of the metagalactic ionizing background is, the H i ionization rate, $`\mathrm{\Gamma }_{\mathrm{HI}}`$, is uncertain. Changing $`\mathrm{\Gamma }_{\mathrm{HI}}`$ has very little effect on the equation of state \[Hui & Gnedin 1997\], which means that the optical depth can be scaled to match the observed mean flux decrement in the analysis stage. However, the observations do show some scatter in the mean flux decrement. This scatter could be due to spatial variations of the ionizing background, or it could be caused by measurement errors. We therefore need to check whether errors in the assumed effective optical depth affect the relation between the cutoff and the equation of state.
Changing the effective optical depth by rescaling the ionizing background alters the relation between column density and gas density. This will shift the $`b(N)`$-distribution along the $`N`$-axis. Hence we expect the intercept of the cutoff to change and the slope to remain constant. In terms of our power-law model, increasing the photoionization rate (i.e. decreasing the effective optical depth) will increase the coefficient $`A`$ in equation 2 and thus increase the measured intercept $`b_0`$ for a given $`T_0`$ (equation 5), while leaving the slope $`\mathrm{\Gamma }-1`$ constant (equation 6). Fig. 17 confirms these predictions. The dependence of $`b_0`$ on the effective optical depth turns out to be rather weak, even for models with a relatively steep cutoff. Realistic errors in the mean flux decrement (10–20% at $`z\sim 3`$) will give rise to errors in $`b_0`$ that are smaller than the statistical variance.
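The rescaling of the optical depths mentioned above reduces to a one-dimensional root find for the factor multiplying them; a sketch (Python; the target value 0.72 below corresponds to $`\overline{\tau }_{\mathrm{eff}}=0.33`$ and is illustrative only):

```python
import numpy as np
from scipy.optimize import brentq

def tau_scale_factor(tau, target_mean_flux):
    """Factor A, equivalent to rescaling the ionizing background,
    such that the mean flux <exp(-A*tau)> matches the target.
    Sketch only; assumes the root is bracketed by A in [1e-3, 1e3]."""
    f = lambda A: np.exp(-A * tau).mean() - target_mean_flux
    return brentq(f, 1.0e-3, 1.0e3)

# tau_rescaled = tau_scale_factor(tau, 0.72) * tau
```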
### 5.3 Signal to noise
The signal to noise ratio (S/N) per pixel in the simulated spectra is 50, comparable to the noise level in high-quality observations. However, spectra taken with, for example, the HIRES spectrograph on the Keck telescope have a S/N that varies across the spectrum. Fig. 18 illustrates the effect of changing the S/N in the spectrum. The statistical variance in the cutoff increases rapidly when the S/N falls below 25. While the intercept increases slightly for S/N smaller than 25, the slope appears to be independent of the signal to noise. We therefore conclude that variations in the S/N in observed spectra are unimportant, as long as the signal to noise is greater than about 20.
### 5.4 Missing physics
Since the cosmological parameters and reionization history have yet to be fully constrained, the models used in this paper may not be correct. This will however not affect the results of this paper, as long as the bulk of the low-density gas follows a power-law equation of state. We should therefore ask if there are any physical processes, which have not been incorporated in the simulations, that could destroy the uniformity of the thermal state of the low-density gas. Feedback from massive stars and fluctuations in the spectrum of the photoionizing flux are two examples. Although these processes would predominantly affect the gas in virialized halos, they could introduce some additional scatter in the equation of state of the gas responsible for the low column density Ly$`\alpha `$ forest.
We recomputed simulated spectra for model Ob, after doubling the scatter around the fitted temperature-density relation. The $`b(N)`$-cutoff of this new model, Ob+scat, is indistinguishable from the one of model Ob (solid and dotted curves in Fig. 15). Note that although processes like feedback can heat the IGM locally, it is hard to think of any process, apart from adiabatic expansion, that could cool the low-density gas. Furthermore, whereas low-density gas that is heated to $`T>T_0(\rho /\overline{\rho })^{\gamma -1}`$ can only cool over a Hubble time, gas that is cooled below the equation of state is quickly reheated. Any additional scatter in the thermal state is thus unlikely to alter the cutoff in the gas temperature.
Any comparison between observations and simulations is complicated by the fact that observed spectra cover a much larger redshift path than the simulation boxes and therefore include effects like redshift evolution and cosmic variance, which are not present in the simulations. Large-scale fluctuations, especially in the ionization fraction (i.e. the relation between density and column density) could potentially distort the relation between the mean equation of state and the $`b(N)`$-cutoff.
Redshift evolution mainly affects the mean absorption and does so in a well defined manner, which can be modelled by comparing the observations to a combination of simulated spectra that have different effective optical depths. If the line of sight to the quasar passes an ionizing source, then lines from that region will be shifted to lower column densities and therefore have little effect on the measured $`b(N)`$-cutoff (since it increases with column density). Since the ionizing background originates from a collection of point sources, minima in the ionizing background would be shallower and more extended than maxima, except during reionization. In any case, if the sightline goes through a region where the mean neutral hydrogen density is substantially enhanced and therefore affects the measured cutoff, this will become clear when the spectrum is analysed in redshift bins.
We conclude that even if local effects would produce a large scatter in the thermal state of the low-density gas, the relation between the mean equation of state and the $`b(N)`$-cutoff would remain unchanged.
## 6 MONTE CARLO SIMULATIONS
In section 4 we described how bootstrap resampling can be used to reduce the statistical variance in the measured $`b(N)`$-cutoff. Given a set of absorption lines from a single QSO spectrum, the bootstrap method is used to generate synthetic data sets, for which the cutoff is measured. The medians of the resulting probability distributions for the parameters of the cutoff are then used as best estimates of the true medians.
In order to see how well the equation of state can be measured, we performed Monte Carlo simulations. We drew 300 random lines, with column density in the range $`10^{12.5}\mathrm{cm}^{-2}\lesssim N\lesssim 10^{14.5}\mathrm{cm}^{-2}`$, from model Ob at $`z=3`$ and used the bootstrap method to generate probability distributions for the parameters of the cutoff. The median $`\mathrm{log}b_0`$ and $`\mathrm{\Gamma }`$ were then converted to measurements of $`\mathrm{log}T_0`$ and $`\gamma `$ using the linear relations determined from the simulations. The statistical variance in the medians of the bootstrap distributions was estimated from 100 Monte Carlo simulations. For 300 lines per spectrum the dispersion in the median is: $`\sigma _{\mathrm{stat}}(\mathrm{log}T_0)=0.033`$ (K), $`\sigma _{\mathrm{stat}}(\gamma )=0.13`$.
If multiple spectra are available, the statistical variance in the median can be reduced by summing the bootstrap probability distributions of the different spectra. Fig. 19 illustrates how the errors change when multiple spectra are used, each containing 100 (dashed curves) or 300 (solid curves) absorption lines in the column density range over which the cutoff is fitted. The bottom curve of each pair indicates the statistical $`1\sigma `$ error. The statistical dispersion asymptotes to the bin size used for determining the probability distributions.
Besides the statistical variance in the median, there is some scatter around the linear relations between the parameters of the cutoff and the equation of state. For example, for 300 lines per spectrum at $`z=3`$, the systematic errors (i.e. the dispersion of the solid circles around the dashed lines in Figs. 9 and 10) in the parameters of the cutoff are $`\sigma _{\mathrm{sys}}(\mathrm{log}b_0)=0.009(\mathrm{km}\mathrm{s}^{-1})`$ and $`\sigma _{\mathrm{sys}}(\mathrm{\Gamma })=0.015`$. The top curve of each pair in Fig. 19 indicates the total (statistical plus systematic) $`1\sigma `$ error.
For a single spectrum with 300 lines (typical for $`z=3`$), the predicted total errors are: $`\sigma (\mathrm{log}T_0)=0.040`$ (K), $`\sigma (\gamma )=0.17`$. For three spectra the errors reduce to $`\sigma (\mathrm{log}T_0)=0.029`$ (K) and $`\sigma (\gamma )=0.13`$. Adding more spectra has very little effect since systematic errors dominate. The errors are larger when there are only 100 absorption lines per spectrum. In this case the statistical variance is larger and the errors can be reduced significantly by adding more spectra. The fact that even for this small number of lines the constraints on the parameters of the equation of state are significant suggests that the method will also work at $`z\sim 2`$.
## 7 SUMMARY AND DISCUSSION
Numerical simulations indicate that the smooth, photoionized intergalactic medium (IGM) responsible for the low column density Ly$`\alpha `$ forest follows a well defined temperature-density relation. For densities around the cosmic mean, shock-heating is negligible and the equation of state of the gas is well-described by a power-law $`T=T_0(\rho /\overline{\rho })^{\gamma -1}`$. The equation of state depends on cosmology, reionization history and the hard X-ray background. Although the absorption spectra can be fitted by a set of Voigt profiles, the lines do not in general correspond to discrete high-density gas clouds. Scatter plots of the distribution of line widths ($`b`$-parameters) as a function of column density ($`N`$) in observed QSO spectra clearly show a lower envelope, which increases with column density.
The decomposition of spectra produced by a fluctuating IGM into discrete Voigt profiles is artificial. However, the column density of the absorption lines correlates strongly with the density of the gas responsible for the absorption. Although the $`b`$-parameters are in general not correlated with the temperature of the gas, the line widths of the subset of lines that are close to the $`b(N)`$-cutoff do show a strong correlation with temperature. This implies that there exists a lower limit to the line width, set by the thermal state of the absorbing gas, which in turn depends on its density. Hence the cutoff seen in the $`b(N)`$-distribution is a direct consequence of the existence of a temperature-density relation for the low-density gas and can be used to measure the equation of state of the IGM.
We developed an iterative procedure for fitting a power-law, $`b=b_0(N/N_0)^{\mathrm{\Gamma }-1}`$, to the $`b(N)`$-cutoff over a certain column density range ($`10^{12.5}\mathrm{cm}^{-2}\lesssim N\lesssim 10^{14.5}\mathrm{cm}^{-2}`$ at $`z=3`$). The algorithm is insensitive to unphysically narrow lines, which occur in blends and as unidentified metal lines. The intercept of the power-law, $`\mathrm{log}b_0`$, can be measured very precisely and is shown to be very sensitive to $`\mathrm{log}T_0`$ (Fig. 9). The slope of the cutoff, $`\mathrm{\Gamma }-1`$, is proportional to $`\gamma -1`$, but the dependence is weak and it is harder to measure (Fig. 10).
The intercept of the $`b(N)`$-cutoff is a measure of the temperature of the gas responsible for the absorption lines that are used to determine the cutoff. This gas is typically slightly overdense and consequently the intercept depends on both $`T_0`$, the temperature at mean density, and $`\gamma `$, the slope of the equation of state. However, if we normalise the equation of state to the temperature at the characteristic density of the gas, $`T=T_\delta (\rho /\rho _\delta )^{\gamma -1}`$, where $`\rho _\delta \equiv \overline{\rho }(1+\delta )`$, then the intercept of the cutoff depends only on the amplitude of the equation of state, $`T_\delta `$.
The relation between the cutoff and the equation of state is independent of the assumed cosmology (for a fixed equation of state). In particular, it remains unchanged when all peculiar velocities are set to zero and when the contribution of the Hubble flow to the line widths is varied. Changing the effective optical depth (i.e. rescaling the ionizing background) alters the relation between column density and gas density and thus the relation between $`\mathrm{log}b_0`$ and $`\mathrm{log}T_0`$. However, the dependence is weak and realistic errors in the measured mean absorption do not lead to significant errors in the derived value of $`T_0`$. Variations in the signal to noise ratio are also unimportant, as long as the ratio is greater than about 20.
The simulations used to determine the relation between the cutoff and the equation of state do not incorporate some potentially important physical processes, such as feedback from star formation. This will, however, not change the results presented in this paper, as long as the bulk of the low-density gas follows a power-law equation of state. Since local effects like feedback would increase the temperature, and since gas cooled to a temperature below that given by the equation of state is quickly reheated, any additional scatter in the thermal state of the gas is unlikely to affect the cutoff in the gas temperature. Doubling the scatter around the equation of state has no discernible effect on the $`b(N)`$-cutoff.
The finite number of absorption lines per QSO spectrum introduces statistical variance in the measured cutoff. The statistical variance can be reduced by using the bootstrap method to generate probability distributions for the parameters of the cutoff and using the medians as the best estimates of the true parameters. If multiple spectra are available, the variance can be further reduced by adding the bootstrap distributions of the different spectra. We use Monte Carlo simulations to estimate the statistical variance in the medians of the bootstrap distributions. Besides the statistical variance, there is a systematic uncertainty from the scatter in the linear relations between the parameters of the cutoff and the equation of state.
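Schematically, and building on the `fit_cutoff` sketch above (the resampling unit — individual absorption lines — is the essential choice; everything else is illustrative):

```python
import numpy as np

def bootstrap_cutoff(logN, logb, n_boot=1000, seed=0):
    """Bootstrap distribution of (log b0, Gamma) for one spectrum; the
    medians serve as the best estimates of the true cutoff parameters."""
    rng = np.random.default_rng(seed)
    n = len(logN)
    samples = np.array([fit_cutoff(logN[idx], logb[idx])
                        for idx in rng.integers(0, n, size=(n_boot, n))])
    return np.median(samples, axis=0), samples

# For several spectra, concatenate the per-spectrum `samples` arrays
# first and take the medians of the combined distribution.
```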
For a single spectrum at $`z=3`$ we predict the following total (statistical plus systematic) errors: $`\sigma (\mathrm{log}T_0)=0.040`$ ($`\mathrm{\Delta }T_0/T_0=0.09`$) and $`\sigma (\gamma )=0.17`$. For three spectra the errors reduce to $`\sigma (\mathrm{log}T_0)=0.029`$ ($`\mathrm{\Delta }T_0/T_0=0.07`$) and $`\sigma (\gamma )=0.13`$ (with $`T_0`$ in K throughout). Increasing the number of spectra beyond three has very little effect on the total uncertainty because systematic errors dominate. These errors should be compared to the ranges considered to be physically reasonable, $`10^{3.0}\,\mathrm{K}<T_0<10^{4.5}\,\mathrm{K}`$ and $`1.2<\gamma <1.7`$ \[Hui, Gnedin & Zhang 1997, Hui & Gnedin 1997\].
The analysis presented in this paper is for redshift $`z=3`$. At smaller redshifts, the constraints will be less tight because of the smaller number of lines per spectrum. However, we showed that even for one third of the number of lines typical at $`z=3`$, the constraints on the equation of state are significant. Furthermore, the statistical variance can be reduced by using multiple spectra. At higher redshifts the errors will also be somewhat larger than at $`z=3`$, mainly because the higher density of lines increases the number of blends and the errors in the continuum fit.
When using the simulations to convert the observed $`b(N)`$-cutoff into an equation of state, one has to be careful to treat the observed and simulated spectra in the same way. For example, the simulated and observed $`b(N)`$-distributions should have the same number of lines and the same continuum and Voigt profile fitting algorithms should be used for the simulated and observed spectra.
Given the existence of many high quality quasar absorption line spectra, it should be possible to greatly reduce the uncertainty in the equation of state of the low-density gas over the range $`z=24`$. This will allow us to put significant constraints on the reionization history of the universe.
## ACKNOWLEDGMENTS
We would like to thank M. Haehnelt and M. Rauch for stimulating discussions and R. Carswell for helping us with VPFIT. JS thanks the Isaac Newton Trust, St. John’s College and PPARC for support, AL thanks PPARC for the award of a research studentship and GE thanks PPARC for the award of a senior fellowship. This work has been supported by the TMR network on ‘The Formation and Evolution of Galaxies’, funded by the European Commission.
# Changes in the angular separation of the lensed images PKS 1830-211 NE & SW
## 1 Introduction
PKS 1830-211 is a very bright and highly variable radio source at cm- and mm-wavelengths. As well as being a highly probable gravitational lens system (Rao & Subrahmanyan 1988), it is also identified by the EGRET instrument as a strong source of gamma-rays (Mattox et al. 1997). PKS 1830-211 is one of only two known lens systems (the other is B0218+357) which are bright and compact enough to be detected and imaged with mm-VLBI. Taken together, these properties suggest that the background source in this system is uncommon in many respects and can probably be best classified as a blazar.
Relatively rapid changes in the brightness distribution of the images had been reported earlier (Garrett et al. 1997), an effect that may be partly explained by the magnification provided by the lens system. In the case of PKS 1830-211, this magnification may be as large as 5–10 (Kochanek & Narayan 1992; Nair et al. 1993). Recent spectroscopic observations in the near-IR using the NTT, with clear detections of both the H$`\alpha `$ and H$`\beta `$ emission lines (see Lidman et al. 1999), have finally revealed the redshift of the source to be $`z_s=2.507`$. In this paper we present multi-epoch VLBA 7 mm maps of both lensed radio images, in both polarised and total intensity.
## 2 Observations and Data Reduction
We made eight epochs of 7 mm, dual-polarisation VLBA observations of PKS 1830-211 between 1997 January 19 and 1997 April 30. Each epoch was separated in time by about 14 days. The data were correlated at NRAO, Socorro. For the sixth epoch (1997 April 03), the data quality was very poor, partly due to bad weather at KP and weak fringes at BR. Since the SW and NE images are separated by about $`1^{\prime \prime }`$ on the sky, wide-field techniques were used to make maps of both images simultaneously from a single data-set (see Garrett et al. 1999). The polarisation data analysis followed Leppänen et al. (1995).
## 3 Results & Discussion
Contour maps of both PKS 1830-211 NE and SW, in total and polarised intensity, for each epoch except the sixth (1997 April 03), are shown in Fig. 1. Superimposed on these maps are the positions and sizes of the Gaussian fits as determined by the AIPS task IMFIT. The sizes of the crosses represent the major and minor axes of the Gaussian components.
Our wide-field approach to the data analysis permits us to produce maps of both lensed images simultaneously, thus allowing us to measure with high precision the angular separation between the central peaks in the radio images.
We have measured the angular separation of the NE and SW image (NE–SW) by fitting Gaussian components to our highest-resolution, uniformly-weighted, total-intensity maps. In Fig. 2 we present the position of the peak in the NE image relative to the peak in the SW image. We have also included the results of previous 7 mm VLBI observations made by Garrett et al. (1997) in 1996.
We have estimated the errors on all these separation measurements by comparing measurements from sub-sets of the data for a given epoch. These suggest the separation measurements are accurate to 1/10 of the major axis of the uniformly weighted fitted beam, i.e. about $`30\mu `$as.
What are the possible explanations for this change in the image separation?
First let us consider effects that are intrinsic to the background source. (In this case changes will appear in both images, separated by the time delay, and the angular separation changes result from a combination of both.) In this context it is useful to note that for a simple FRW universe ($`q_0=0`$, $`\mathrm{\Lambda }=0`$, $`H_0=65`$ km s<sup>-1</sup> Mpc<sup>-1</sup>), a shift of $`80\mu `$as (the largest shift measured between epochs separated by about 2 weeks) corresponds to a linear distance of $`0.8`$ pc at $`z_s=2.507`$. If one assumes that the lens provides a magnification factor of about $`10`$ (Kochanek & Narayan 1992; Nair et al. 1993), the linear distance then scales to $`0.08`$ pc. Given PKS 1830-211’s blazar characteristics, the most obvious interpretation is that the centroid of the peak in the brightness distribution of the core is changing on relatively short time-scales, perhaps, for example, as shock fronts propagate along a continuous jet, appearing as bright and (later) fading regions of radio emission. A less conventional scenario is that it is the mm-VLBI “core” (i.e. the base of the jet) itself that is moving, as the effectiveness of the collimation mechanism changes. Since this area of jet physics remains poorly understood (see Marscher 1995 for a recent review), and since measurements with this sort of linear resolution have not previously been possible in this type of source, it is not clear to us whether movements of the order of $`0.08`$ pc are reasonable or not.
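For reference, the angular-to-linear conversion quoted above is easy to reproduce; the closed-form angular diameter distance below is the standard expression for an empty ($`q_0=0`$, $`\mathrm{\Lambda }=0`$) FRW model (Python sketch):

```python
import numpy as np

c_km_s, H0 = 2.998e5, 65.0                  # km/s and km/s/Mpc
z, theta_uas = 2.507, 80.0                  # source redshift, shift in micro-arcsec

D_H = c_km_s / H0                           # Hubble distance in Mpc
D_A = D_H * z * (1 + z / 2) / (1 + z)**2    # empty-universe angular diameter distance

theta_rad = theta_uas * 1e-6 * np.pi / (180 * 3600)
d_pc = D_A * 1e6 * theta_rad                # proper transverse size in pc
print(f"D_A = {D_A:.0f} Mpc; 80 micro-arcsec -> {d_pc:.2f} pc")
# -> about 0.8 pc, or ~0.08 pc after dividing by a magnification of ~10
```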
Extrinsic explanations might include the effect of scattering by ionised gas encountered along the line of sight, specifically in the lens galaxy or our own galaxy. Since PKS 1830-211 lies close to the galactic plane, the effect of the ISM in our own galaxy is expected to dominate (Walker 1996). Jones et al. (1996) have measured the interstellar broadening of the SW image between 18 and 1.3 cm and estimate that the SW deconvolved core size is proportional to $`\lambda ^2`$, as predicted by Interstellar Scattering (ISS) theory. At 1.3 cm they measure a size of the SW image of $`0.6\times 0.2`$ mas. In our observations the size of the SW (and the NE) image changes with epoch. However, a typical value for the core size is $`0.26\times 0.13`$ mas, which is much larger than we might expect assuming a $`\lambda ^2`$ relation. Indeed the scaling in size between 1.3 cm and 0.7 cm is almost linear with $`\lambda `$, as expected from simple models of synchrotron radio emission. Hence, we suspect that the measured source size is dominated by its internal radio structure, rather than scattering effects.
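A rough numerical check of this argument, using only the values quoted in the text:

```python
size_1p3cm = 0.6            # mas, SW major axis at 1.3 cm (Jones et al. 1996)
ratio = 0.7 / 1.3           # wavelength ratio, 7 mm / 1.3 cm

print(f"lambda^2 (scattering) prediction: {size_1p3cm * ratio**2:.2f} mas")  # ~0.17
print(f"linear (intrinsic) prediction:    {size_1p3cm * ratio:.2f} mas")     # ~0.32
# the typical measured size, ~0.26 mas, is closer to the linear scaling,
# favouring intrinsic structure over scatter broadening
```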
However, in addition to image broadening, the effect of ISS can, in principle, also produce “image wander” — an apparent shift in the position of a source (Rickett 1990). This effect is considered to be small: an order of magnitude smaller than the scattering size itself. Indeed, observations of compact sources lying close to the galactic centre (where the effects of ISS should be severe) bear this out: for example, $`\lambda `$18 cm VLBI observations of masers located close to the galactic centre (Gwinn et al. 1988) show that for water masers the r.m.s. wander of individual spots is $`<18\mu `$as over the course of 6 months. Since the effect of image-wander would also scale as $`\lambda ^2`$, we suspect that conventional ISS is not a compelling explanation for the changes in image separation that we measure. We also observe changes in the image separation with respect to the peaks in polarised intensity. These changes are less reliable than those observed in total intensity (since the polarised flux is very much fainter) but the initial indications are that the changes in the image separation in total intensity and polarised intensity are unrelated.
Milli-lensing produced by massive ($`10^3`$–$`10^4M_{\odot }`$) compact objects in the halo of the lens can certainly introduce shifts of $`80\mu `$as, but changes in the separation would be measured on relatively long time-scales: hundreds of years rather than the weeks or months observed here.
Another possibility is that the transverse velocity of the lens galaxy across the sky could introduce a relative proper motion between the NE and SW images. For highly magnified four-image lens systems the proper motions are expected to be a few tens of $`\mu `$as per year (Kochanek et al. 1996), but for two-image systems such as PKS 1830-211 the motion is expected to be an order of magnitude smaller.
In summary, the changes in the measured image separation are most likely due to changes in the brightness distribution of the background radio source. The detection of source evolution on these short time-scales would be impossible if it were not for the fact that this is a lensed system, which provides us with a magnified view and closely spaced multiple images that allow accurate relative position measurements to be made. If, as we strongly suspect, the changes we observe are due to internal motions in the radio structure on scales of $`>0.08`$ pc in about 2 weeks, this implies (unlensed) superluminal velocities in the rest frame of the background source of $`>3c`$.
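The arithmetic behind this lower bound is straightforward (a conservative estimate; applying the $`(1+z_s)`$ time-dilation correction would only increase it):

```python
dx_ly = 0.08 * 3.262        # 0.08 pc expressed in light-years
dt_yr = 14.0 / 365.25       # two weeks of observed time, in years
print(f"v_app ~ {dx_ly / dt_yr:.1f} c")   # ~6.8 c, comfortably above 3 c
```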
# Femtosecond Coherent Dynamics of the Fermi Edge Singularity and Exciton Hybrid
## Abstract
We study theoretically the coherent nonlinear optical response of doped semiconductor quantum wells with several subbands. When the Fermi energy approaches the exciton level of an upper subband, the absorption spectrum acquires a characteristic double–peak shape originating from the interference between the Fermi–edge singularity and the exciton resonance. We demonstrate that, for off–resonant pump excitation, the pump/probe spectrum undergoes a striking transformation in the coherent regime, with a time–dependent exchange of oscillator strength between the Fermi edge singularity and exciton peaks. We show that this effect originates from the many–body electron–hole correlations which determine the dynamical response of the Fermi sea. Possible experimental applications are discussed.
Ultrafast nonlinear spectroscopy offers a unique perspective into the role of many–body effects in semiconductors. While the linear absorption spectrum provides information about static properties, ultrafast time–resolved spectroscopy allows one to probe the system on time scales shorter than those governing the interactions between the elementary excitations. In the coherent regime, the dynamics of many–body correlations plays an important role in the transient changes of the absorption spectrum. For example, in undoped semiconductors, exciton–exciton interactions were shown to play a dominant role in the optical response for specific sequences of the optical pulses.
In modulation–doped quantum wells (QW), the optical properties are dominated by the Fermi edge singularity (FES). Unlike in the undoped case, where the linear absorption exhibits discrete bound state peaks whose width is ultimately determined by the homogeneous broadening, the FES is a continuum resonance whose lineshape is governed by the Coulomb interactions of the photoexcited carriers with the low–lying Fermi sea (FS) excitations. In this letter we study the role of such many–body correlations in pump/probe measurements, where the strong “pump” pulse excites the system at time $`t=0`$, while the weak “probe” pulse measures the optical response at time $`t=\tau `$. In the doped systems, the interactions are screened and there are no discrete bound states. Therefore, the many–body correlations enter into the nonlinear response not via exciton–exciton interactions, but mainly through the dynamical response of the FS during the course of the optical excitation. Since such electron–hole (e–h) correlations come from the “dressing” of the photoexcited e–h pair by gapless FS excitations, the response of the FS to ultrashort optical pulses is intrinsically unadiabatic. For resonant pump excitation, the electron–electron (e–e) scattering also leads to strong variations of the dephasing and relaxation times, from a few ps close to the FES to about 10 fs away from the Fermi level. However, such incoherent relaxation effects are suppressed for off–resonant excitation, when the pump is tuned below the FES resonance, in which case coherent effects dominate. Furthermore, for negative ($`\tau <0`$) time delays, the pump/probe signal is due to the coherent interaction of the pump pulse with the polarization induced in the sample by the probe pulse and, again, the effects of incoherent processes are strongly attenuated. Therefore, the coherent dynamics studied below can be best observed under off–resonant conditions or when the probe precedes the pump.
Here we investigate the ultrafast pump/probe dynamics of the FES–exciton hybrid, which is formed in asymmetric QW’s with partially occupied subbands. In such structures, interband optical transitions from the valence band to several conduction subbands are allowed due to the finite overlap between the hole and electron envelope wave–functions. The many–body effects on the linear absorption spectrum have been described by using the simple two–subband Hamiltonian
$$H=H_0-\sum _{ij}\sum _{\mathrm{𝐩𝐤𝐪}}v_{ij}(𝐪)\,a_{i𝐩+𝐪}^{\dagger }a_{j𝐩}\,b_{𝐤-𝐪}^{\dagger }b_𝐤,$$
(1)
with $`H_0=\sum _{i𝐤}ϵ_{i𝐤}^ca_{i𝐤}^{\dagger }a_{i𝐤}+\sum _𝐤(ϵ_𝐤^v+E_g)b_𝐤^{\dagger }b_𝐤`$. Here $`a_{i𝐤}^{\dagger }`$ and $`ϵ_{i𝐤}^c`$ are the creation operator and the energy of a conduction electron in the $`i`$th subband, $`b_𝐤^{\dagger }`$ and $`ϵ_𝐤^v`$ are those of a valence hole ($`E_g`$ is the bandgap), and $`v_{ij}(𝐪)`$ is the screened e–h interaction matrix with diagonal (off–diagonal) elements describing the intrasubband (intersubband) scattering. Due to the screening, the interaction potential is short–ranged and can be replaced by its s–wave component; close to the Fermi surface, $`v_{ij}(𝐪)\approx v_{ij}`$. Here we consider the case where only the first subband is occupied, but the Fermi level is close to the exciton level (with binding energy $`E_B`$) below the bottom of the second subband \[see inset in Fig. 1(a)\]. For large values of the FES–exciton splitting $`\mathrm{\Delta }-E_F-E_B`$, where $`\mathrm{\Delta }`$ is the subband separation, the linear absorption spectrum consists of two well separated peaks, the lower corresponding to the FES from subband 1, and the higher corresponding to the Fano resonance from the exciton of subband 2 broadened by its coupling to the continuum of states in subband 1. With decreasing $`\mathrm{\Delta }-E_F-E_B`$, the FES and the exciton become hybridized due to the intersubband scattering arising from the Coulomb interaction. This results in the transfer of oscillator strength from the exciton to the FES and a strong enhancement of the absorption peak near the Fermi level due to the resonant scattering of the photoexcited electron by the exciton level.
In typical QW’s, $`v_{12}`$ is much smaller than $`v_{ii}`$ (a value $`v_{12}/v_{11}\approx 0.2`$ was deduced from a fit to the linear absorption spectrum). In the absence of coupling ($`v_{12}=0`$), the different nature of the exciton and FES leads to distinct dynamics under ultrafast excitation. In the presence of coupling, one should expect new effects coming from the interplay of this difference and the intersubband scattering that hybridizes the two resonances. Indeed, we demonstrate that, at negative time delays, the pump/probe spectrum undergoes a drastic transformation due to a transient light–induced redistribution of the oscillator strength between the FES and the exciton. We show that such a redistribution is a result of the dynamical FS response to the pump pulse. In fact, the ultrafast pump/probe spectra of the FES–exciton hybrid can serve as an experimental test of the difference between the FES and exciton dynamics.
Theory.— The total Hamiltonian of the system is $`H+H_p(t)+H_s(t)`$, with $`H_\alpha (t)`$ ($`\alpha =p,s`$) describing the optical excitations,
$$H_\alpha (t)=\mathcal{E}_\alpha (t)\sum _i\left[\mu _iU_i^{\dagger }e^{-i\omega _pt+i𝐤_\alpha 𝐫}+\text{h.c.}\right],$$
(2)
where $`U_i^{\dagger }=\sum _𝐤a_{i𝐤}^{\dagger }b_{-𝐤}^{\dagger }`$ is the transition operator to the $`i`$th subband, $`\mu _i`$ is the dipole matrix element, and $`\mathcal{E}_\alpha (t)`$ are the amplitudes of the probe ($`\alpha =s`$) and pump ($`\alpha =p`$) electric fields, propagating in the directions $`𝐤_\alpha `$.
$$P(t)=i\mathcal{E}_se^{-i\omega _pt}\sum _{ij}\mu _i\mu _j\langle 0|\stackrel{~}{U}_i(t)\,\mathcal{K}(t,\tau )\,\stackrel{~}{U}_j^{\dagger }(\tau )|0\rangle ,$$
(3)
where $`𝒦(t,\tau )`$ is the time–evolution operator for the effective Hamiltonian,
$$\stackrel{~}{H}(t)=\sum _{ij𝐤}ϵ_{ij𝐤}^c(t)\,a_{i𝐤}^{\dagger }a_{j𝐤}+\sum _𝐤ϵ_𝐤^v(t)\,b_𝐤^{\dagger }b_𝐤+V_{eh}(t)+V_{ee}(t),$$
(4)
where $`V_{eh}`$ and $`V_{ee}`$ are the effective e-h and e-e interactions and $`\stackrel{~}{U}_i^{\dagger }(t)`$ is the effective transition operator given below. Here $`ϵ_{ij𝐤}^c(t)=\delta _{ij}ϵ_{i𝐤}^c+\mathrm{\Delta }ϵ_{ij𝐤}^c(t)`$, and $`ϵ_𝐤^v(t)=ϵ_𝐤^v+\mathrm{\Omega }+\mathrm{\Delta }ϵ_𝐤^v(t)`$ are the band dispersions with pump–induced self–energies: $`\mathrm{\Delta }ϵ_{ij𝐤}^c(t)=\mathcal{E}_p(t)[\mu _ip_{j𝐤}^{\ast }(t)+\mu _jp_{i𝐤}(t)]/2`$, and $`\mathrm{\Delta }ϵ_𝐤^v(t)=\mathcal{E}_p(t)\,\mathrm{Re}\sum _i\mu _ip_{i𝐤}(t)`$, with $`p_{i𝐤}(t)`$ satisfying
$$i\frac{\partial p_{i𝐤}(t)}{\partial t}=(ϵ_{i𝐤}^c+ϵ_𝐤^v+\mathrm{\Omega })\,p_{i𝐤}(t)-\sum _{j𝐪}v_{ij}\,p_{j𝐪}(t)-\mu _i\mathcal{E}_p(t),$$
(5)
where $`\mathrm{\Omega }`$ is the detuning of $`\omega _p`$ measured from the Fermi level. Since the $`p_{i𝐤}(t)`$ are linear in $`\mathcal{E}_p(t)`$, the self–energies are quadratic in the pump field. Note (i) that the time–dependence of the self–energies lasts for the duration of the pump, and (ii) that the pump induces additional intersubband scattering, described by $`\mathrm{\Delta }ϵ_{12𝐤}^c(t)`$. The effective transition operator appearing in Eq. (3) is $`\stackrel{~}{U}_i^{\dagger }(t)=\sum _{j𝐤}\varphi _{ij𝐤}(t)\,a_{j𝐤}^{\dagger }b_{-𝐤}^{\dagger }`$, with
$$\varphi _{ij𝐤}(t)=\delta _{ij}\left[1-\frac{1}{2}\sum _l|p_{l𝐤}(t)|^2\right]-\frac{1}{2}\,p_{i𝐤}(t)\,p_{j𝐤}^{\ast }(t).$$
(6)
In the single–subband case, Eq. (6) takes the familiar form $`\varphi _𝐤(t)=1-|p_𝐤(t)|^2`$ — the usual Pauli blocking factor in the coherent limit; in the multi–subband case, the latter becomes a matrix.
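To illustrate how Eq. (5) can be handled in practice, the sketch below integrates it on a discrete $`k`$-grid for a Gaussian pump envelope (Python, with $`\mathrm{}=1`$; all parameter values are illustrative, and the simple fixed-step integrator is not the production scheme used for the figures):

```python
import numpy as np

nk, nt = 64, 2000
eps = np.stack([np.linspace(0.0, 4.0, nk),      # subband-1 dispersion
                np.linspace(2.0, 6.0, nk)])     # subband-2, offset by Delta
eps_v, Omega = 0.0, -3.0                        # hole energy and pump detuning
v = np.array([[0.3, 0.06], [0.06, 0.3]]) / nk   # contact interaction v_ij
mu = np.array([1.0, 0.5])                       # dipole matrix elements
t, dt = np.linspace(-6.0, 6.0, nt, retstep=True)
E_p = 0.1 * np.exp(-(t / 2.0)**2)               # Gaussian pump envelope

p = np.zeros((2, nk), complex)                  # p_{ik}(t)
for n in range(nt - 1):
    def rhs(p, field):                          # right-hand side of Eq. (5)
        mean = p.sum(axis=1)                    # sum over q of p_{jq}
        return -1j * ((eps + eps_v + Omega) * p
                      - (v @ mean)[:, None] - mu[:, None] * field)
    k1 = rhs(p, E_p[n])                         # simple midpoint (RK2) step;
    p = p + dt * rhs(p + 0.5 * dt * k1, E_p[n]) # pump frozen over one step
```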
Eqs. (3)–(6) are used here to study the pump/probe signal of the multi-subband QW during negative time delays ($`\tau <0`$) and for off-resonant excitation with detuning $`|\mathrm{\Omega }|\sim E_F`$, in which case the coherent effects dominate. Similar to the single–subband case, the above expressions apply for $`\mu _i\mathcal{E}_p/\mathrm{\Omega }\ll 1`$ (or $`\mu _i\mathcal{E}_pt_p\ll 1`$ for short pump duration $`t_p`$). For $`\mathrm{\Omega }\gg E_M`$ (or for $`E_Mt_p\ll 1`$), $`E_M\lesssim E_F`$ being the characteristic Coulomb energy of the FS excitations, the corrections to the above effective parameters due to pair–pair and pair–FS interactions can be neglected for simplicity since they are perturbative in the screened interactions. One can also show that, due to the FS Pauli blocking and the screening, the pump-induced corrections to the interaction potentials in (4) are suppressed, as compared to the self-energies, by a factor $`(E_M/\mathrm{\Omega })^2`$ (or $`(E_Mt_p)^2`$ for short pump duration) and can therefore be neglected in the excitation regime of interest here. Finally, similar to the linear absorption calculations, the effects of $`V_{ee}`$ can be taken into account via a screened e–h potential in $`V_{eh}`$ and by treating the e-e scattering within the dephasing time approximation; indeed, for off-resonant excitation the e-e scattering is suppressed, while at the same time, due to the high FS electron density, the build–up of screening in doped QW’s occurs on time scales shorter than the typical pulse duration of order 100 fs.
Thus, in the coherent limit, the effective Hamiltonian (4) has the same operator form as the “bare” Hamiltonian (1), but with time–dependent band dispersions. To calculate the polarization (3), we adopt the multi-subband generalization of the coupled cluster expansion method (CCE) for time–dependent Hamiltonians . We consider the physically relevant limit of large hole mass and include the hole recoil broadening in the dephasing time. Under such conditions, the CCE provides an exact description of the dynamics arising from the effective Hamiltonian (4) and thus accounts for the e–h correlations leading to the unadiabadic response of the FS to the pump pulse nonperturbatively \[beyond the Hartree–Fock approximation (HFA)\].
Our approach has a straightforward physical interpretation. The photoexcited e–h state $`\mathcal{K}(t,\tau )\stackrel{~}{U}_i^{\dagger }(\tau )|0\rangle `$, entering into (3), can be viewed as describing the propagation of the e–h pair with amplitude $`\mathrm{\Phi }_{ij}(𝐤,t)`$ excited by the probe pulse at time $`\tau `$, dressed by the scattering of the FS excitations (dynamical FS response). The latter leads to a dynamical broadening described by the amplitude $`s_{ij}(𝐩,𝐤,t)`$, which satisfies the differential equation
$$i\frac{\partial s_{ij}(𝐩,𝐤,t)}{\partial t}=(ϵ_{i𝐩}^c-ϵ_{j𝐤}^c)\,s_{ij}(𝐩,𝐤,t)+\sum _l\left[\mathrm{\Delta }ϵ_{il𝐩}^c(t)\,s_{lj}(𝐩,𝐤,t)-\mathrm{\Delta }ϵ_{lj𝐤}^c(t)\,s_{il}(𝐩,𝐤,t)\right]-\sum _l\stackrel{~}{v}_{il}(𝐩,t)\left[\delta _{lj}+\sum _{q>k_F}s_{lj}(𝐪,𝐤,t)\right],$$

(8)
with initial condition $`s_{ij}(𝐩,𝐤,\tau )=0`$, and p and k labeling respectively the ($`i`$th subband) FS electron and the ($`j`$th subband) FS hole. Since only the first subband is occupied, the only non–zero components of $`s_{ij}`$ are $`s_{11}(𝐩,𝐤,t)`$ and $`s_{21}(𝐩,𝐤,t)`$, which describe the intra and intersubband FS excitations respectively. The photoexcited e–h pair wavefunction $`\mathrm{\Phi }_{ij}(𝐤,t,\tau )`$ satisfies the Wannier–like equation
$$i\frac{\partial \mathrm{\Phi }_{ij}(𝐤,t)}{\partial t}=\sum _l\left[ϵ_{il𝐤}^c(t)+\delta _{il}\left(ϵ_𝐤^v(t)+ϵ_A(t)-i\mathrm{\Gamma }\right)\right]\mathrm{\Phi }_{lj}(𝐤,t)-\sum _{l,q>k_F}\stackrel{~}{v}_{il}(𝐤,t)\,\mathrm{\Phi }_{lj}(𝐪,t)$$

(9)
with initial condition $`\mathrm{\Phi }_{ij}(𝐤,\tau )=\varphi _{ij𝐤}(\tau )`$, where $`ϵ_A(t)=\sum _{k^{\prime }<k_F}[v_{11}+\sum _{p^{\prime }>k_F}s_{11}(𝐩^{\prime },𝐤^{\prime },t)\,v_{11}]`$ is the self–energy due to the readjustment of the FS to the photoexcitation of a hole and $`\mathrm{\Gamma }`$ is the inverse dephasing time due to all the processes not included in $`H`$. In Eqs. (8) and (9), $`\stackrel{~}{v}_{ij}(𝐤,t)=v_{ij}-\sum _{l,k^{\prime }<k_F}s_{il}(𝐤,𝐤^{\prime },t)\,v_{lj}`$ is the effective e–h potential whose time–dependence is due to the dynamical FS response. Note that it is the interplay between this effective potential and the pump-induced self-energies that gives rise to the unadiabatic FS response to the pump field. In terms of $`\mathrm{\Phi }_{ij}(𝐤,t)`$, the polarization (3) takes the simple form ($`t>\tau `$)
$$P(t)=i\mathcal{E}_se^{-i\omega _pt}\sum _{ijl}\mu _i\mu _j\sum _{k>k_F}\mathrm{\Phi }_{lj}(𝐤,t)\,\varphi _{il𝐤}^{\ast }(t),$$
(10)
with $`\varphi _{ij}(𝐤,t)`$ given by (6). The nonlinear absorption spectrum is then proportional to $`\mathrm{Im}P(\omega )`$, where $`P(\omega )`$ is the Fourier transform of the rhs of (10).
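Schematically, this last step can be written as follows (Python sketch; the sign and normalization conventions are assumptions and may need adjusting to the definitions above):

```python
import numpy as np

def absorption_spectrum(P_t, dt):
    """Im P(omega) from the sampled pump/probe polarization P(t)."""
    P_w = np.fft.fft(P_t) * dt                  # discrete Fourier transform
    w = 2.0 * np.pi * np.fft.fftfreq(len(P_t), d=dt)
    order = np.argsort(w)
    return w[order], np.imag(P_w[order])        # spectrum ~ Im P(omega)
```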
Numerical results.— Below we present our results for the evolution of the pump/probe spectra of the FES–exciton hybrid. The spectra were obtained by the numerical solution of the coupled equations (9) and (8), with the time–dependent band dispersions $`ϵ_{ij𝐤}^c(t)`$ and $`ϵ_𝐤^v(t)`$. The calculations were performed at zero temperature for a below–resonant pump with detuning $`|\mathrm{\Omega }|\sim E_F`$ and duration $`t_pE_F/\hbar =2.0`$, and by adopting the typical values of the parameters $`v_{12}/v_{11}=0.2`$, $`\mathrm{\Gamma }=0.1E_F`$, and $`v_{11}𝒩=0.3`$, $`𝒩`$ being the density of states, previously extracted from fits to the linear absorption spectra ($`E_F`$ is 15–20 meV in typical GaAs/GaAlAs QW’s). Note, however, that similar results were also obtained for a broad range of parameter values. In Fig. 1(a) we plot the nonlinear absorption spectra at different negative time delays $`\tau <0`$. For better visibility, the curves are shifted vertically with decreasing $`|\tau |`$ (the highest curve represents the linear absorption spectrum). For the chosen value of $`\mathrm{\Delta }`$, the FES and excitonic components of the hybrid are distinguishable in the linear absorption spectrum, with the FES peak carrying the larger oscillator strength. It can be seen that, at short $`\tau <0`$, the oscillator strength is first transferred to the exciton and then, with further increase in $`|\tau |`$, back to the FES. At the same time, both peaks experience a blueshift, which is larger for the FES than for the exciton peak because the ac–Stark effect for the exciton is weaker due to the subband separation $`\mathrm{\Delta }`$.
The transient exchange of oscillator strength originates from the different nature of the FES and exciton components of the hybrid. At negative time delays, the time–evolution of the exciton is governed by its dephasing time, which is essentially determined by the homogeneous broadening $`\mathrm{\Gamma }`$ (in doped systems the exciton–exciton correlations do not play a significant role due to the screening). The pump pulse first leads to a bleaching of the exciton peak, which then recovers its strength at $`|\tau |\sim \hbar /\mathrm{\Gamma }`$. On the other hand, since the FES is a many–body continuum resonance, (i) the bleaching of the FES peak is stronger, and (ii) the polarization decay of the FES is determined not by $`\mathrm{\Gamma }`$, but by the scattering with the low–lying FS excitations. This leads to much faster dynamics, roughly determined by the inverse Coulomb energy $`E_M`$. However, the time–evolution of the hybrid spectrum is not a simple superposition of the dynamics of its components. Indeed, the pump-induced self-energies lead to the flattening of the subbands or, to the first approximation, to a time–dependent increase in the effective mass (and hence the density of states), which in turn increases the e–h scattering. Importantly, however, due to the subband separation and the different nature of the resonances, such an increase is stronger for the FES. Therefore, the effect of the pump is to reduce the excitonic enhancement of the FES peak (coming from the resonant scattering of the photoexcited electron by the exciton level) as compared to the linear absorption case, resulting in the oscillator strength transfer from the FES back to the exciton. In fact, such a transfer is strong even for smaller $`\mathrm{\Delta }`$ \[see Fig. 1(b)\]. It should be emphasized that the above feature cannot be captured within the HFA. Indeed, the latter approximates the FES by a bound state and thus neglects the difference between the FES and exciton dynamics originating from the unadiabatic response of the FS to the change in the e–h correlations. This is demonstrated in Fig. 1(c) where we show the spectra obtained without the FS dynamical response, i.e., by setting $`s_{ij}=0`$. Although in that case both peaks show blueshift and broadening, there is no significant transfer of oscillator strength.
In conclusion, we investigated theoretically the coherent nonlinear optical response of the FES–exciton hybrid in a QW with partially occupied subbands. We found a strong redistribution of the oscillator strength between the FES and exciton peaks, which is caused by the different dynamics of the FES and exciton components of the hybrid as well as by their coupling due to the e–h correlations. This originates from the dynamical Fermi sea response and leads to strong transient changes in the pump/probe spectra. Such systems can be used to probe the role of many–body correlations in Fermi liquid versus bound-state dynamics.
This work was supported by the NSF grant ECS-9703453, and by HARL, Hitachi Ltd. The work of D.S.C. was supported by the Director, Office of Energy Research, Office of Basic Energy Sciences, Division of Material Sciences of the U.S. Department of Energy, under Contract No. DE-AC03-76SF00098.
# “Weighing” a Closed System and the Time-energy Uncertainty Principle
## Abstract
A gedanken-experiment is proposed for “weighing” the total mass of a closed system from within the system. We prove that for an internal observer the time $`\tau `$ required to measure the total energy with accuracy $`\mathrm{\Delta }E`$ is bounded according to $`\tau \mathrm{\Delta }E>\hbar `$. This time-energy uncertainty principle for a closed system follows from the measurement back-reaction on the system. More generally, we examine which other conserved observables are in principle measurable within a closed system and what the corresponding uncertainty relations are.
Time and frequency are classically two conjugate variables. Nevertheless, the interpretation of the resulting quantum time-energy uncertainty relation is not as straightforward as for other pairs of conjugate variables. Aharonov and Bohm have shown that within quantum theory there is no fundamental restriction on the minimal time needed to measure the total energy with given accuracy . If the Hamiltonian of the system is known, one can in principle set up a measurement of the Hamiltonian, with arbitrary accuracy, in a time as short as we please. Instead, $`\mathrm{\Delta }\tau `$ in the time-energy relation
$$\mathrm{\Delta }\tau \,\mathrm{\Delta }E\gtrsim \hbar $$
(1)
can be interpreted as the uncertainty caused to the internal time $`\tau `$ of the system due to the measurement.
The Bohr-Einstein weighing gedanken-experiment illustrates this interpretation. The total mass of a closed box (before and after the emission of a photon) is there measured by weighing the system in an external gravitational field. The energy of the box is then deduced from the equivalence of mass and energy. Bohr has shown that the process of weighing introduces a quantum uncertainty in the location of the box in the external gravitational field. The uncertainty in the gravitational potential leads in turn to an uncertainty in the internal time $`\tau `$ of the clock within the box relative to the external time $`t`$.
The purpose of this note is to offer another interpretation of the time-uncertainty relation. As long as the energy is measured with respect to a clock external to the system, there is no fundamental restriction on the duration of the measurement. Suppose that an observer within a closed system measures the total energy. We will argue that:
* The internal time, $`\tau `$, needed to measure the total energy of an isolated system, within a precision $`\mathrm{\Delta }E`$, from within the system, satisfies $`\tau \mathrm{\Delta }E\gtrsim \hbar `$.
Here $`\tau `$ is interpreted as the time shown by a physical clock within the system, and $`E`$ is the total energy of the system including the internal clock.
To illustrate this we first consider a gedanken-experiment for measuring the total energy of an isolated system, by employing gravity as in the Bohr-Einstein weighing experiment. Let the system be a spherical shell of radius $`R`$ and mass $`M`$, with an internal clock dynamical variable $`\tau `$. At a certain clock time, a test particle of mass $`m\ll M`$, which for simplicity we take to be a spherical shell as well, is ejected outwards with an initial velocity $`v_0`$ and, after traversing a distance $`z_{max}\ll R`$, is observed to fall back to the shell surface at time $`\tau `$. Classically, the mass of the shell can then be deduced from $`M=2R^2v_0/G\tau `$, where $`G`$ is Newton’s constant.
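For completeness, this relation follows from elementary ballistics in the weak field of the shell:

$$\tau =\frac{2v_0}{g},\qquad g=\frac{GM}{R^2}\qquad \Longrightarrow \qquad M=\frac{2v_0R^2}{G\tau }.$$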
However, the equivalence of energy and weight implies that the clock rate must be affected by the test-shell according to
$$\tau (z)=t\left(1+\frac{\varphi (z)}{c^2}\right)$$
(2)
Here $`\varphi (z)`$ is the gravitational potential at the position of the clock, $`r=R`$, and $`c`$ is the velocity of light. Note that $`\varphi (z)`$ is a function of the height, $`z=r_{shell}-R`$, of the test shell. In particular, for $`z\ll R`$, the change in the potential at $`r=R`$ when the shell is at height $`z`$ is given by
$$\delta \varphi (z)=\varphi (z)-\varphi (z=0)=\frac{Gm}{R^2}\,z$$
(3)
If the radial location of the shell has a quantum uncertainty $`\mathrm{\Delta }z`$, the above relation implies a quantum uncertainty $`\mathrm{\Delta }\tau `$ in the clock time. For weak gravitational fields, $`\varphi /c^2\ll 1`$, and
$$\frac{\mathrm{\Delta }\tau }{\tau }=\frac{Gm}{R^2}\frac{\mathrm{\Delta }z}{c^2}$$
(4)
The uncertainty $`\mathrm{\Delta }z`$ in the location of the test-shell cannot be too small, because then the uncertainty of the radial momentum of the shell becomes large. If we wish to measure the mass with an accuracy $`\mathrm{\Delta }M`$, the change in the impulse, $`\delta p=\int F\,d\tau \simeq F\tau `$, caused by $`\mathrm{\Delta }M`$ during the time $`\tau `$ must be larger than the quantum uncertainty in the momentum of the test shell
$$\frac{Gm\mathrm{\Delta }M}{R^2}\tau >\mathrm{\Delta }p_z$$
(5)
Combining the last two equations we obtain
$$\mathrm{\Delta }\tau \,\mathrm{\Delta }M>\frac{1}{c^2}\,\mathrm{\Delta }z\,\mathrm{\Delta }p_z\gtrsim \frac{\hbar }{c^2}.$$
(6)
Finally, using the relation $`\mathrm{\Delta }E=\mathrm{\Delta }Mc^2`$ and the requirement $`\tau >\mathrm{\Delta }\tau `$, we arrive at
$$\tau \,\mathrm{\Delta }E>\hbar $$
(7)
The time-energy uncertainty relation derived above follows from the gravitational time dilation induced in the clock. We will now show that this conclusion holds quite generally, irrespective of the details of the mechanism used, whenever the total energy, including the internal clock energy, is measured with respect to the internal clock time.
Let us consider an isolated “box” described by a Hamiltonian $`H_c+H_{box}`$, where $`H_c`$ describes a clock, and $`H_{box}`$ the rest of the system in the box. To describe a measurement we will couple the total energy to a measuring device with coordinate $`z`$ and conjugate momentum $`p`$. For simplicity, we can take the Hamiltonian of the measuring device as $`H_{MD}=0`$. The total Hamiltonian including the von-Neumann measurement interaction is
$$H=H_c+H_{box}+\frac{1}{2}\left(g(\tau )H_c+H_cg(\tau )+2g(\tau )H_{box}\right)z$$
(8)
$`g(\tau )`$ is the coupling function, which is nonzero during the measurement and is normalized: $`\int g(\tau )\,d\tau =1`$. Since $`H_c=-i\hbar \,\partial /\partial \tau `$, an appropriate ordering was assumed to keep the Hamiltonian Hermitian.
Suppose that the system is in an energy eigenstate,
$$H\mathrm{\Psi }=E_0\mathrm{\Psi }$$
(9)
With the substitution
$$\mathrm{\Psi }=\psi (\tau )\,u_E\,|z\rangle $$
(10)
where $`H_{box}u_E=Eu_E`$ and $`|z\rangle `$ is an eigenstate of $`z`$, we get
$$\frac{\partial \psi }{\partial \tau }=\left[-\frac{1}{2}\,\frac{z\,dg/d\tau }{1+zg}-i\frac{E}{\hbar }+i\frac{E_0/\hbar }{1+zg(\tau )}\right]\psi $$
(11)
and
$$\psi (\tau )=\frac{1}{\sqrt{1+g(\tau )z}}\,e^{-iE\tau /\hbar }\,e^{i\frac{E_0}{\hbar }\int ^\tau \frac{d\tau ^{\prime }}{1+g(\tau ^{\prime })z}}$$
(12)
It can now be shown that only if
$$g(\tau )\,z\ll 1$$
(13)
is satisfied, the solution $`\psi (\tau )`$ describes a measurement. In this particular case
$$\mathrm{\Psi }\simeq e^{-i(E-E_0)\tau /\hbar }\,e^{-i\frac{E_0z}{\hbar }\int g(\tau ^{\prime })\,d\tau ^{\prime }}\,u_E\,|z\rangle $$
(14)
Indeed the last term, $`\mathrm{exp}\left(-\frac{i}{\hbar }zE_0\int g(\tau ^{\prime })\,d\tau ^{\prime }\right)`$, shifts the measuring device momentum $`p`$ by
$$\delta p=-E_0\int g(\tau ^{\prime })\,d\tau ^{\prime }=-E_0$$
(15)
If the duration of the measurement is $`\tau _0`$, the magnitude of the coupling function is $`g(\tau )\sim 1/\tau _0`$. Since the accuracy $`\mathrm{\Delta }E_0`$ of the measurement is related to $`z`$ by $`\mathrm{\Delta }E_0=\mathrm{\Delta }p\sim \hbar /\mathrm{\Delta }z\gtrsim \hbar /z`$, we finally obtain that eq. (13) implies
$$\tau _0\,\mathrm{\Delta }E_0\gtrsim \hbar $$
(16)
Therefore the measurement succeeds only if the duration $`\tau `$ of the coupling satisfies the above uncertainty relation.
In passing let us compare the gravitational weighing experiment and the von-Neumann measurement discussed above. In both cases the measurement affects the rate of the clock. In the latter case, during the measurement the effective clock Hamiltonian changes as $`H_c\to H_c(1+zg)`$. Therefore, the clock rate changes according to $`\tau =t(1+gz)`$, and $`gz`$ plays here the role of the gravitational potential $`\varphi (z)/c^2`$. The clock uncertainty that was caused by the test shell in the weighing experiment is here due to the uncertainty of the coordinate $`z`$ conjugate to the measuring device “pointer” $`p`$. In both cases the uncertainty relation is due to the measurement back-reaction on the clock. However, a distinctive feature of the von-Neumann measurement is that for too small a duration, $`\tau <\hbar /\mathrm{\Delta }E`$, the interaction does not yield the proper correlations with the measuring device, i.e., the von-Neumann measurement procedure fails .
Finally, a more general perspective is provided by considering the question of the observability of conserved quantities from within a closed system. The weighing measurements discussed here, and the consequent time-energy uncertainty relation, are one important special case. But what is the general class of conserved observables, and what are the respective uncertainty relations? We suggest that every scalar quantity within a closed system is in principle measurable, and generally gives rise to analogous uncertainty relations.
Consider first a closed non-relativistic system. The symmetry generators of Galilean boosts and rotations are $`G`$ and $`L`$, and those of space and time translations are $`P`$ and $`H`$. All four generators are constants of motion; however, they are not all measurable within a closed system. As is well-known, observables such as position, velocity and angular momentum, both in classical and in quantum mechanics, are relative observables. Indeed, we never measure the absolute position of a particle, but the distance between the particle and some other object. Similarly, we never measure the angular momentum of a particle along an absolute axis, but along a direction defined by some other physical objects. Therefore the angular momentum of a closed system can be measured only with respect to a point within the system, say the location of the center of mass, and along a direction defined by constituents of the system. With respect to the center of mass of a closed system
$`L=L_{cm}+L_i`$ (17)
$`H=H_{cm}+H_i`$ (18)
$`P=P_{cm}+P_i`$ (19)
Since $`L_i`$ (along a certain direction) and $`H_i`$ are scalars and since they are defined exclusively in terms of internal variables they are internally measurable. By definition $`P_i`$ must identically vanish.
Let us consider in more detail the analogous uncertainty relation in a non-relativistic measurement of $`L_i`$. For simplicity let our system be a rotating rigid disc of mass $`M`$. The axis of rotation can be located as the axis on which the centrifugal forces vanish. Since distances are measured relative to this axis, the moment of inertia, $`I=\sum _im_ir_i^2`$, can also be measured. Therefore, by measuring the angular velocity $`\omega `$ one can deduce the angular momentum from $`L_i=I\omega `$. To this end we will consider a measurement of the centrifugal force on a test particle of mass $`m\ll M`$. We let $`m`$ slide along a radial track with $`\theta =\mathrm{const}`$ with respect to the disc, and measure the acceleration $`a=\omega ^2r`$. Classically this enables us to determine the angular momentum.
For a quantum test particle, we note however that a quantum uncertainty in its radial position $`r`$ introduces an uncertainty in the contribution of the test particle to the total moment of inertia, $`\mathrm{\Delta }I=2mr\mathrm{\Delta }r`$. This in turn causes, via the conservation of angular momentum, an uncertainty $`\mathrm{\Delta }\omega \simeq (\omega /I)\mathrm{\Delta }I`$ in the angular velocity. Hence after a time $`T`$ the relative angle of the disc becomes uncertain with respect to an external frame of reference by the amount
$$\mathrm{\Delta }\theta =T\mathrm{\Delta }\omega $$
(20)
On the other hand, we cannot have very small $`\mathrm{\Delta }r`$ because then the uncertainty in the radial momentum $`\mathrm{\Delta }p`$ becomes large. Indeed, we must also require that the change in the impulse, $`\int F\,dt\simeq m\omega ^2rT`$, when $`\omega `$ is measured with precision $`\mathrm{\Delta }\omega `$, must be larger than the uncertainty in the radial momentum of the particle
$$2m\omega rT\mathrm{\Delta }\omega >\mathrm{\Delta }p$$
(21)
Combining the last two equations and using $`\mathrm{\Delta }L\simeq I\mathrm{\Delta }\omega \simeq \omega \mathrm{\Delta }I`$, we finally obtain
$$\mathrm{\Delta }\theta \,\mathrm{\Delta }L>\hbar $$
(22)
Hence a measurement of $`L`$ with accuracy $`\mathrm{\Delta }L`$ causes a minimal uncertainty $`\mathrm{\Delta }\theta >\hbar /\mathrm{\Delta }L`$ in the relative angle of the disc and an external frame. That is in complete analogy with our previous discussion; there, weighing the system caused an uncertainty in the internal time.
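For clarity, the algebra behind Eq. (22) can be displayed in one line. Using $`\mathrm{\Delta }\theta =T(\omega /I)\mathrm{\Delta }I`$ with $`\mathrm{\Delta }I=2mr\mathrm{\Delta }r`$ for the back-reaction, $`\mathrm{\Delta }L\simeq I\mathrm{\Delta }\omega `$ with $`\mathrm{\Delta }\omega `$ the precision appearing in Eq. (21), and $`\mathrm{\Delta }p\,\mathrm{\Delta }r\gtrsim \hbar `$:

$$\mathrm{\Delta }\theta \,\mathrm{\Delta }L=\left(T\frac{\omega }{I}\,2mr\mathrm{\Delta }r\right)\left(I\mathrm{\Delta }\omega \right)=2m\omega rT\,\mathrm{\Delta }\omega \,\mathrm{\Delta }r>\mathrm{\Delta }p\,\mathrm{\Delta }r\gtrsim \hbar .$$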
In a relativistic theory the 10 generators of boosts, rotations, and space-time translations form the Poincaré group. The observables in a closed system must be scalars with respect to the Poincaré group. It is well known that the group has two Casimir invariants: $`C_1=P_\mu P^\mu =m^2`$, where $`P_\mu `$ is the energy-momentum four-vector, and $`C_2=W_\mu W^\mu =-m^2s(s+1)`$, where $`W_\mu `$ is the Pauli-Lubanski pseudo-vector. The mass and spin are two such scalars. Hence in a relativistic system, the non-relativistic internal energy $`H_i`$ becomes the rest mass $`m=\sqrt{E^2-p^2}`$, and the internal angular momentum corresponds to the spin $`s`$. Similarly, in our weighing experiment the total energy is measured in the rest frame of the shell system; hence what we have measured is the rest mass of a closed system.
In conclusion, we have shown that the energy of a closed system can be measured from within the system. However, while quantum theory poses no limitation on the duration of an energy measurement in an open system, from within a closed system the duration of the measurement satisfies a time-energy uncertainty relation. Similar uncertainty relations can be found for other conserved observables.
Acknowledgment We acknowledge the support of the Basic Research Foundation, grant 614/95, administered by the Israel Academy of Sciences and Humanities. The work of Y. A. was supported by NSF grant PHY-9601280.
|
no-problem/9906/physics9906052.html
|
ar5iv
|
text
|
# Viscous stabilization of 2D drainage displacements with trapping
## Abstract
We investigate the stabilization mechanisms due to viscous forces in the invasion front during drainage displacement in two-dimensional porous media using a network simulator. We find that in horizontal displacement the capillary pressure difference between two different points along the front varies almost linearly as a function of the height separation in the direction of the displacement. The numerical result supports arguments taking into account the loopless displacement pattern, where nonwetting fluid flows in separate strands (paths). As a consequence, we show that existing theories developed for viscous stabilization are not compatible with drainage when loopless strands dominate the displacement process.
Immiscible displacement of one fluid by another fluid in porous media generates front structures and patterns ranging from compact to ramified and fractal. When a nonwetting fluid displaces a wetting fluid (drainage) at low injection rate, the nonwetting fluid generates a pattern with fractal dimension equal to that of the cluster formed by invasion percolation. The displacement is controlled solely by the capillary pressure, that is, the pressure difference between the two fluids across a pore meniscus. At high injection rate, and when the viscosity of the nonwetting fluid is higher than or equal to that of the wetting fluid, the width of the displacement front stabilizes and a more compact pattern is generated.
The purpose of the present letter is to investigate the stabilization mechanisms of the front due to viscous forces. To this end we consider two-dimensional (2D) horizontal drainage at different injection rates. Since the displacement is performed within the plane, we neglect gravity. We present simulations where we have calculated the capillary pressure difference $`\mathrm{\Delta }P_c`$ between two different pore menisci along the front, separated by a height $`\mathrm{\Delta }h`$ in the direction of the displacement \[Fig. 1(a)\]. The simulations are based on a network model that properly describes the dynamics of the fluid-fluid displacement as well as the capillary and viscous pressure buildup. Simulations show that for a wide range of injection rates and different fluid viscosities $`\mathrm{\Delta }P_c`$ varies almost linearly with $`\mathrm{\Delta }h`$ (Figs. 2 and 3). Assuming a power law behavior $`\mathrm{\Delta }P_c\propto \mathrm{\Delta }h^\kappa `$ we find $`\kappa =1.0\pm 0.1`$. This is a surprising result, because the viscous force field that stabilizes the front is inhomogeneous due to the trapping of wetting fluid behind the front and to the fractal structure of the front.
Based on the observation that the displacement structures are characterized by loopless strands of nonwetting fluid \[Fig. 1(a)\], we also present arguments supported by our numerical findings. We conjecture that these arguments might affect the behavior of the front width $`w_s`$ as a function of the capillary number $`C_a`$. Here $`C_a`$ denotes the ratio between viscous and capillary forces and, in the following, $`C_a\equiv Q\mu _{nw}/(\mathrm{\Sigma }\gamma )`$, where $`Q`$ is the injection rate, $`\mathrm{\Sigma }`$ is the cross section of the inlet, and $`\mu _{nw}`$ is the viscosity of the nonwetting phase.
In the literature, slightly different scaling behaviors of $`w_s`$ as a function of $`C_a`$ have been suggested, and a general consensus has not yet been reached. However, none of them takes into account the evidence observed here that the displacement patterns are loopless and that nonwetting fluid flows only in strands to displace wetting fluid. As a consequence, we show that earlier proposed theories cannot be used to describe drainage when loopless nonwetting strands dominate the displacements.
Before we present the numerical results and the theoretical evidence, we briefly introduce the network model. The model porous medium consists of a square lattice of cylindrical tubes oriented at $`45^{\circ }`$ to the longest side of the lattice \[Fig. 1(a)\]. Four tubes meet at each intersection, where we put a node having no volume. The disorder is introduced by (1) assigning the tubes a radius $`r`$ chosen at random inside a defined interval or (2) moving the intersections a randomly chosen distance away from their initial positions. In (1) all tubes have equal length $`d`$ but different $`r`$. (2) results in a distorted square lattice, giving the tubes different lengths. Here $`r=d/(2\alpha )`$, where $`\alpha `$ is the aspect ratio between the tube length and its radius.
The tubes are initially filled with a wetting fluid of viscosity $`\mu _w`$, and a nonwetting fluid of viscosity $`\mu _{nw}\ge \mu _w`$ is injected at constant injection rate $`Q`$ along the bottom row (inlet). The viscosity ratio $`M`$ is defined as $`M\equiv \mu _{nw}/\mu _w`$. The wetting fluid is displaced and flows out along the top row (outlet). There are periodic boundary conditions in the orthogonal direction. The fluids are assumed immiscible; hence an interface (a meniscus) is located where the fluids meet in the tubes. The capillary pressure $`p_c`$ of a meniscus is given by $`p_c=(2\gamma /r)\left[1-\mathrm{cos}(2\pi x/d)\right]`$. The first factor is the Young-Laplace law for a cylindrical tube when perfect wetting is assumed, and in the second factor $`x`$ is the position of the meniscus in the tube ($`0\le x\le d`$). Thus, with respect to the capillary pressure we treat the tubes as if they were hourglass shaped, with effective radii following a smooth function. By letting $`p_c`$ vary as above, we include the effect of local readjustments of the menisci at the pore level, which is important for the description of burst dynamics. The detailed modeling of $`p_c`$ costs computation time, but is necessary in order to properly simulate the capillary pressure behavior along the front.
The volume flux $`q_{ij}`$ through a tube between the $`i`$th and the $`j`$th node is given by the Washburn equation: $`q_{ij}=-(\sigma _{ij}k_{ij}/\mu _{ij})\,(p_j-p_i-p_{c,ij})/d_{ij}`$. Here $`k_{ij}`$ is the permeability of the tube, $`\sigma _{ij}`$ is the average cross section of the tube, $`p_i`$ and $`p_j`$ are the pressures at nodes $`i`$ and $`j`$ respectively, and $`p_{c,ij}`$ is the sum of the capillary pressures of the menisci inside the tube. A tube partially filled with both liquids is allowed to contain one or two menisci. Furthermore, $`\mu _{ij}`$ denotes the effective viscosity, given by the sum of the volume fractions of each fluid inside the tube multiplied by their respective viscosities. Inserting the above equation for $`q_{ij}`$ into the Kirchhoff equations at every node (volume flux conservation), $`\sum _jq_{ij}=0`$, constitutes a set of linear equations which are to be solved for the $`p_i`$. The set of equations is solved by using the Conjugate Gradient method with the constraint that $`Q`$ is held fixed. See Refs. for details on the numerical scheme updating the menisci and solving for $`p_i`$.
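As an aside, the structure of this linear system is easy to sketch (Python/SciPy; the edge list, mobilities $`g_{ij}=\sigma _{ij}k_{ij}/(\mu _{ij}d_{ij})`$ and capillary terms are assumed inputs, and fixed inlet/outlet pressures with a direct sparse solve stand in for the constant-$`Q`$ conjugate-gradient scheme of the actual simulator):

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.linalg import spsolve

def solve_pressures(edges, g, pc, fixed, n_nodes):
    """Node pressures from Kirchhoff's equations sum_j q_ij = 0, with the
    convention q_ij = g_ij (p_i - p_j - pc_ij) for flow from i to j.
    edges: list of (i, j); pc[e]: capillary drop along edge e (i -> j);
    fixed: {node: pressure} Dirichlet values at inlet/outlet nodes."""
    rows, cols, vals = [], [], []
    b = np.zeros(n_nodes)
    for e, (i, j) in enumerate(edges):
        for a, o, s in ((i, j, +1.0), (j, i, -1.0)):
            rows += [a, a]; cols += [a, o]; vals += [g[e], -g[e]]
            b[a] += s * g[e] * pc[e]            # capillary source term
    A = coo_matrix((vals, (rows, cols)), shape=(n_nodes, n_nodes)).tolil()
    for node, val in fixed.items():             # impose Dirichlet rows
        A[node, :] = 0.0
        A[node, node] = 1.0
        b[node] = val
    return spsolve(A.tocsr(), b)
```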
The front between the two phases is detected by running a Hoshen-Kopelman algorithm on the lattice. The front width is defined as the standard deviation of the distances between each meniscus along the front and the average front position in the direction of the displacement. $`\mathrm{\Delta }P_c`$ as a function of $`\mathrm{\Delta }h`$ is calculated by taking the mean of the capillary pressure differences between all pairs of menisci separated by a height $`\mathrm{\Delta }h`$ along the front. The capillary pressure difference between a pair of menisci is calculated by taking the capillary pressure of the meniscus closest to the inlet minus the capillary pressure of the meniscus closest to the outlet \[Fig. 1(a)\].
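The pairwise construction of $`\mathrm{\Delta }P_c(\mathrm{\Delta }h)`$ can be sketched as follows (Python; the inputs — meniscus heights and capillary pressures along the front — and the bin width are illustrative):

```python
import numpy as np

def delta_pc_profile(h, pc, bin_width=1.0):
    """Mean capillary-pressure difference between all meniscus pairs on the
    front, binned by height separation (inlet at small h)."""
    dh = h[:, None] - h[None, :]
    dpc = pc[:, None] - pc[None, :]
    mask = dh < 0                     # each pair once, first meniscus lower
    sep, diff = -dh[mask], dpc[mask]  # diff = pc(closer to inlet) - pc(upper)
    bins = np.floor(sep / bin_width).astype(int)
    return {(b + 0.5) * bin_width: diff[bins == b].mean()
            for b in np.unique(bins)}
```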
Figure 2 shows $`\mathrm{\Delta }P_c`$ as a function of $`\mathrm{\Delta }h`$ for simulations performed at three different $`C_a`$’s with $`M=100`$ or $`1`$. The simulations with $`M=100`$ were performed on a lattice of $`25\times 35`$ nodes with $`\mu _{nw}=10`$ P, $`\mu _w=0.10`$ P, and $`\gamma =30`$ dyn/cm. The disorder was introduced by choosing the tube radii at random in the interval $`0.05d\le r_{ij}\le d`$. The tube length was $`d=0.1`$ cm. The simulations with $`M=1`$ were performed on a distorted lattice of $`40\times 60`$ nodes where $`0.02\,\text{cm}\le d_{ij}\le 0.18`$ cm and $`r_{ij}=d_{ij}/(2\alpha )`$ with $`\alpha =1.25`$. Here $`\mu _{nw}=\mu _w=0.5`$ P. To obtain reliable average quantities we did 10–30 simulations at each $`C_a`$ with different sets of random $`r_{ij}`$ or $`d_{ij}`$.
From Fig. 2 we observe that $`\mathrm{\Delta }P_c`$ increases roughly linearly as a function of $`\mathrm{\Delta }h`$. At the lowest $`C_a`$ no clear stabilization of the front was observed, due to the finite size of the system. At higher $`C_a`$ the viscous gradient stabilizes the front. The gradient causes the capillary pressure of the menisci closest to the inlet to exceed the capillary pressure of the menisci lying in the uppermost part. Thus, the menisci closest to the inlet will more easily penetrate a narrow tube compared to menisci further downstream. This will eventually stabilize the front.
To save computation time and thereby be able to study $`\mathrm{\Delta }P_c`$ on larger lattices in the small $`C_a`$ regime, we have generated bond invasion percolation (IP) patterns with trapping on lattices of $`200\times 300`$ nodes. The IP patterns were generated on the bonds in a square lattice with the bonds oriented diagonally at $`45^{\circ }`$. Hence, the bonds correspond to the tubes in our network model. Each bond was assigned a random number $`f_{ij}`$ in the interval $`[0,1]`$. A small stabilizing gradient $`g=0.05`$ was applied, giving an occupation threshold $`t_{ij}`$ for every bond: $`t_{ij}=f_{ij}+gh_{ij}`$. Here $`h_{ij}`$ denotes the height of the bond above the bottom row. The occupation of bonds started at the bottom row, and the next bond to be occupied was always the bond with the lowest threshold value from the set of empty bonds along the invasion front (see the sketch below). The generated IP patterns are similar to the site-bond IP patterns in and we assume they are statistically equal to structures that would have been obtained in a corresponding complete displacement simulation.
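The growth rule can be sketched as follows (Python; a site version without the trapping rule, kept deliberately short — the patterns analysed here are bond IP with trapping):

```python
import heapq
import numpy as np

def gradient_ip(L, W, g=0.05, n_steps=5000, seed=0):
    """Gradient invasion percolation: thresholds t = f + g*h, always
    invading the frontier cell with the smallest threshold."""
    rng = np.random.default_rng(seed)
    f = rng.random((L, W))
    occupied = np.zeros((L, W), dtype=bool)
    frontier = [(f[0, x], 0, x) for x in range(W)]   # bottom row, h = 0
    heapq.heapify(frontier)
    for _ in range(n_steps):
        if not frontier:
            break
        t, h, x = heapq.heappop(frontier)
        if occupied[h, x]:
            continue
        occupied[h, x] = True
        for dh, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nh, nx = h + dh, (x + dx) % W            # periodic sideways
            if 0 <= nh < L and not occupied[nh, nx]:
                heapq.heappush(frontier, (f[nh, nx] + g * nh, nh, nx))
    return occupied
```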
When the IP patterns became well developed, with trapped (wetting) clusters of sizes between the bond length and the front width, the tubes in our network model were filled with nonwetting and wetting fluid according to the occupied and empty bonds in the IP lattice. Moreover, the radii $`r_{ij}`$ of the tubes were mapped to the random numbers $`f_{ij}`$ of the bonds as $`r_{ij}=[0.05+0.95(1-f_{ij})]d`$. Thus, $`0.05d\le r_{ij}\le d`$, and we set the tube length $`d=0.1\text{cm}`$. Note that $`r_{ij}`$ is mapped to $`1-f_{ij}`$ because in our IP algorithm the next bond to be invaded is the one with the lowest threshold value, opposite to the network model, where the widest tube will be invaded first.
After the initialization of the tube network was completed, the network model was started and the simulations were run for a limited number of time steps before being stopped. The number of time steps was chosen sufficiently large to let the menisci along the front adjust to the viscous pressure set up by the injection rate.
In total, we generated four IP patterns with different sets of $`f_{ij}`$, and every pattern was loaded into the network model. The resulting $`\mathrm{\Delta }P_c`$ versus $`\mathrm{\Delta }h`$ is shown in Fig. 3 for $`C_a=9.5\times 10^{-5}`$ and $`M=100`$. If we assume a power law $`\mathrm{\Delta }P_c\propto \mathrm{\Delta }h^\kappa `$, we find $`\kappa =1.0\pm 0.1`$. The slope of the straight line in Fig. 3 is 1.0. We have also calculated $`\mathrm{\Delta }P_c`$ for $`C_a=2\times 10^{-6}`$ with $`M=1`$ and $`M=100`$ by using one of the generated IP patterns. The result of those simulations is consistent with Fig. 3.
Wilkinson was the first to use percolation theory to deduce a power law between $`w_s`$ and $`C_a`$ when only viscous forces stabilize the front. In 3D, where trapping of wetting fluid is assumed to be of little importance, he suggested $`w_s\propto C_a^{-\alpha }`$ with $`\alpha =\nu /(1+t-\beta +\nu )`$. Here $`t`$ is the conductivity exponent and $`\beta `$ is the order parameter exponent in percolation. Blunt et al. used a similar approach; however, they found $`\alpha =\nu /(1+t+\nu )`$ in 3D. This is identical to the result of Lenormand , discussing the limits of fractal patterns between capillary fingering and stable displacement in 2D porous media. Blunt et al. also deduced a scaling relation for the pressure drop $`\mathrm{\Delta }P_{nw}`$ across a height difference $`\mathrm{\Delta }h`$ in the nonwetting phase of the front and found $`\mathrm{\Delta }P_{nw}\propto \mathrm{\Delta }h^{t/\nu +1}`$. Later on, Xu et al. used the arguments of Gouyet et al. and Wilkinson to show that $`\mathrm{\Delta }P_{nw}\propto \mathrm{\Delta }h^{t/\nu +d_\text{E}-1-\beta /\nu }`$, where $`d_\text{E}`$ is the Euclidean dimension of the space in which the front is embedded. They also argued that $`\mathrm{\Delta }P_c=\mathrm{\Delta }P_{nw}-\mathrm{\Delta }P_w`$, where $`\mathrm{\Delta }P_w`$, denoting the pressure drop in the wetting phase of the front, is linearly dependent on $`\mathrm{\Delta }h`$ due to the compact displaced fluid \[see Fig. 1(a)\]. Thus, the result of Xu et al. would in 2D predict $`\mathrm{\Delta }P_c\propto \mathrm{\Delta }h^{1.9}`$, where we have used $`t=1.3`$, $`\nu =4/3`$, $`\beta =5/36`$, and $`d_\text{E}=2`$. Our simulations give $`\mathrm{\Delta }P_c\propto \mathrm{\Delta }h^\kappa `$ with $`\kappa =1.0\pm 0.1`$. Below we present an alternative view of the displacement pattern to that first suggested by Wilkinson. The alternative view is based upon the loopless nonwetting strands and is supported by our numerical result.
The simulated displacement patterns show that the nonwetting fluid contains no closed loops \[Fig. 1(a)\], because wetting fluid may be trapped in single tubes due to volume conservation \[Fig. 1(b)\]. Because of this fluid trapping in single tubes, the invading fluid flows in separate strands that cannot coalesce. We note that the definition in Fig. 1(b) can easily be generalized to 3D , since increasing the coordination number of the lattice does not change the trapping rule. Therefore, we expect loopless patterns to develop in 3D lattices as well, and the arguments that we present below should apply there too. We also note that trapping of wetting fluid is more difficult in real porous media, due to the more complex topology of pores and throats. Loopless IP patterns have earlier been observed in Refs. .
From Fig. 1(a) we may separate the displacement pattern into two parts: one consisting of the frontal region continuously covering new tubes, and the other consisting of the more static structure behind the front. The frontal region is supplied by nonwetting fluid through strands connecting the frontal region to the inlet. When the strands approach the frontal region they are more likely to split. Since we are dealing with a square lattice, a splitting strand may create either two or three new strands. As the strands proceed further into the frontal region they split again and again and eventually they cover the frontal region completely \[see Fig. 1(a)\].
On IP patterns without loops, the length $`l`$ of the minimum path between two points separated by an Euclidean distance $`R`$ scales like $`l\propto R^{D_s}`$, where $`D_s`$ is the fractal dimension of the shortest path. We assume that the displacement pattern of the frontal region on length scales less than the correlation length (in our case $`w_s`$) is statistically equal to the IP patterns in . Therefore, the length of a strand in the frontal region is proportional to $`\mathrm{\Delta }h^{D_s}`$ when $`\mathrm{\Delta }h`$ is less than $`w_s`$. If we assume that on average every tube in the lattice has the same mobility ($`k_{ij}/\mu _{ij}`$), this causes the fluid pressure within a single strand to drop like $`\mathrm{\Delta }h^{D_s}`$ as long as the strand does not split. When the strand splits, volume conservation causes the volume fluxes through the new strands to be less than the flux in the strand before it split. Hence, following a path where strands split will cause the pressure to drop as $`\mathrm{\Delta }h^\kappa `$ with $`\kappa \le D_s`$.
From the above arguments we conclude that the pressure drop $`\mathrm{\Delta }P_{nw}`$ in the nonwetting phase of the frontal region (that is, the strands) should scale as $`\mathrm{\Delta }P_{nw}\propto \mathrm{\Delta }h^\kappa `$ with $`\kappa \le D_s`$. In 2D two different values for $`D_s`$ have been reported: $`D_s=1.22`$ and $`D_s=1.14`$ . Both values are consistent with our simulation result $`\kappa =1.0\pm 0.1`$.
The evidence that $`\kappa \approx 1.0`$ may influence the scaling of $`w_s`$ as a function of $`C_a`$. At low $`C_a`$, simulations show that $`\mathrm{\Delta }\widehat{P}_c\propto C_a\mathrm{\Delta }h^{1.0}`$ . Here $`\mathrm{\Delta }\widehat{P}_c`$ denotes the capillary pressure difference when the front is stationary; that is, $`\mathrm{\Delta }\widehat{P}_c`$ excludes situations where nonwetting fluid rapidly invades new tubes due to local instabilities. At sufficiently low $`C_a`$ the displacement can be mapped to percolation, giving $`\mathrm{\Delta }\widehat{P}_c\propto f-f_c\propto \xi ^{-1/\nu }`$ . Here $`f`$ is the occupation probability of the bonds, $`f_c`$ is the percolation threshold, and $`\xi \sim w_s`$ is the correlation length. By combining the above relations, we obtain $`w_s\propto C_a^{-\alpha }`$ with $`\alpha =\nu /(1+\nu \kappa )`$. In 2D $`\nu =4/3`$, and inserting $`\kappa =1.0`$ gives $`\alpha \approx 0.57`$. At high $`C_a`$ we expect a crossover to another type of behavior, since it is not clear whether the mapping to percolation remains valid there. We note that Wilkinson’s result gives $`\alpha \approx 0.38`$ in 2D.
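Written out, the steps combine as follows (a restatement of the relations above, evaluated at the stabilized front width $`\mathrm{\Delta }h=w_s\sim \xi `$):

$$C_aw_s^\kappa \propto \mathrm{\Delta }\widehat{P}_c\propto w_s^{-1/\nu }\quad \Rightarrow \quad w_s\propto C_a^{-\nu /(1+\nu \kappa )},\qquad \alpha =\frac{\nu }{1+\nu \kappa }=\frac{4/3}{1+4/3}=\frac{4}{7}\approx 0.57.$$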
In summary, we conclude that $`\mathrm{\Delta }P_c\propto \mathrm{\Delta }h^\kappa `$, where our simulations give $`\kappa =1.0\pm 0.1`$. By describing the displacement structure in terms of loopless strands, we have argued that $`\kappa \le D_s`$, where $`D_s`$ is the fractal dimension of the shortest path between two points on IP patterns without loops. In 2D two values of $`D_s`$ have been reported ($`1.14`$ and $`1.22`$ ), and both are consistent with our numerical result $`\kappa \approx 1.0`$. We conclude that the earlier suggested theories are not compatible with situations where a loopless pattern of nonwetting strands dominates the displacement. We have also shown that $`\alpha `$ in $`w_s\propto C_a^{-\alpha }`$ may be influenced by the evidence that $`\kappa \le D_s`$. Work is in progress to investigate our arguments in 3D and the effect of loops on $`\kappa `$.
The authors thank J. Feder and E. G. Flekkøy for valuable comments. The work is supported by the Norwegian Research Council (NFR) through a “SUP” program and we acknowledge them for a grant of computer time.
# Analysis of the phenomenon of speculative trading in one of its basic manifestations: postage stamp bubbles
## I Introduction
In his “Treatise on general sociology” , Vilfredo Pareto pointed out that the construction of celestial mechanics was favoured by the fact that the mass of the sun is many times larger than the masses of the largest planets. In other circumstances, for instance with a double star in place of the sun, or with a sun only a few times more massive than the largest planets, the movements of the planets would be considerably more complicated. As a result, Kepler’s three laws would no longer hold; instead of a 2-body problem, one would have to tackle a 3- or 4-body problem, which cannot be done without a thorough understanding of non-integrable Hamiltonian dynamics and computer-assisted numerical computations. Under such conditions, the understanding of the laws of gravitation might have been delayed by at least two centuries.
In some respects, one faces a similar difficulty in the analysis of financial markets, as one has to deal with a many-body problem. First, many investors are active on a typical trading day, and their market impact constantly drives prices up and down. The difficulty is increased by the recent suggestion that the effective number $`N`$ of traders who matter on the market is not very large in the sense of the usual “thermodynamic limit” of physical systems (a limit which usually provides important simplifications for modeling), but probably of the order of hundreds, as all models of market microstructure lead to trivial deterministic dynamics when the limit of large $`N`$ is taken . Secondly, the many-body nature of the problem is further complicated by the interconnection between the equity, bond, commodity and real estate markets. This is shown by the following examples.
* In Vigreux et al. , one can find a spectacular example of the influence of new bond issues on the price level in the equity market: between 1954 and 1962, several large bonds were issued at the Paris Stock Exchange which, by absorbing a substantial part of the available funds, brought down the equity market by as much as $`20\%`$ for the largest issues.
* The connection between the real estate market and the equity market has been illustrated in the early 1990s when the burst of the speculative bubble in Japan provoked a parallel fall (of as much as 50 percent) in both markets . The recent financial crises in Malaysia and Thailand also seem to have been triggered by a fall in property prices . The role of intermediaries and of herding has also been pointed out .
* It may be recalled that the Great Depression of 1929-1933 was, apart from the stock market crash of Oct. 1929, marked by a sharp decline in wheat prices, which in fact had already started in 1925.
* In recent years, the US stock exchange has proved very sensitive to rumors concerning interest rates. Pushing the illustration, a sibylline remark from the president of the Federal Reserve suggesting a drop in the short-term rate is enough to trigger an important sell-off of bonds, with the proceeds transferred to stocks, leading to a surge of the Dow Jones lasting typically a full week. Conversely, when the Dow Jones drops, the long-term interest rates fall, which is proof that the cash taken out of the stock market has been carried over to the bond market. In a nutshell, there is a kind of pendulum dynamics of the cash between the two markets.
To deal jointly with stocks, commodities and property is an awesome perspective for this involves almost the whole economy either directly or indirectly.
Simpler phenomenologies appear when analyzing stock market price fluctuations at short time scales, from the tick scale (trade-to-trade transaction time) to scales of about one month, for which the coupling between different markets is less overwhelming, at least in normal circumstances, and for which the structure may be argued to be controlled in large part by simple market rules. Exponentially truncated Lévy laws with exponent around $`\alpha \approx 1.5`$ for the 6-year period 1984-1989 , power laws with exponents $`\alpha \approx 3`$ for the 2-year period 1994-1995 , superpositions of Gaussians motivated by an analogy with turbulence , and stretched exponentials have been proposed to describe the empirical distribution of price returns in organized markets.
Another strategy to simplify the problem is to study periods when financial markets were still embryonic. This was the case before 1850; since in addition wheat was before the 20th century by far the most important commodity in Western Europe, wheat price patterns can be expected to constitute a fairly isolated phenomenon (with the obvious qualification that they are influenced by meteorological factors). This approach has been explored by Roehner and Roehner and Sornette .
In the present paper, we present an alternative empirical investigation which exemplifies one single factor underlying market dynamics, namely “speculation”. In recent years, many groups have come up with interesting microscopic models of stock market price dynamics that put emphasis on such an endogenous speculative origin for the observed complexity of market prices . Here, we present what we consider to be probably the purest case illustrating speculation in a market, as it occurs in the collector’s stamp market, just like the motion of planets was for Kepler and Newton the purest case of frictionless motion. This market has a number of definite advantages in terms of simplicity.
1. It is relatively isolated from other speculative markets because the proportion of the collectors is by far larger than that of the investors.
2. “Production” and “consumption” take on particularly simple forms: production is restricted to a short time span and the production figures are statistically well known; since most collectors’ stamps are not actually used on letters, consumption is basically non-existent; it only occurs by wear and tear or by accident at a small and probably fairly constant rate.
3. In contrast to gold, silver or copper coins, stamps cannot be melted. A few decades after they have been issued, they can no longer be used and have therefore no intrinsic value; in other words, their prices are solely determined by the judgment of the collectors.
4. In contrast to other collectibles such as paintings or furniture, stamps are fairly liquid assets. Any valuable stamp can be sold to a trader at a price given in the current catalogue (a discount might be applied which takes into account the state of conservation of the stamp).
5. Stamp markets display huge price bubbles. Multiplication of the current price by a factor of about $`10`$ within a decade is not uncommon.
6. Stamp prices range from a fraction of a dollar to several thousand dollars. This gives the opportunity to observe the speculative behavior of collectors when they are confronted with stakes of different magnitudes.
7. The identification of what constitutes a speculative bubble in the stamp market does not suffer from the same uncertainties as in other markets. Indeed, in recent years, an active debate among economists has been aimed at the problem of an unambiguous and rigorous definition of speculative bubbles, by trying to distinguish price increases due to changes in fundamentals from those resulting from pure speculation . The challenge stems from the fact that this question is rather ill-posed in general, because one does not know and does not have access to all the relevant fundamentals. For instance, should the construction of the Opera-Bastille theatre be incorporated in the list of fundamentals defining the real-estate market in the 11th Paris district? In the stamp market, there are very few fundamentals and they are well known. The definition of a speculative bubble is thus much clearer.
To be fair, one has to recognize that, as far as its statistical analysis is concerned, the stamp market also has a number of drawbacks. First, stamp catalogues are published only once a year (sometimes even every two or three years). As the catalogues are the only practical means of knowing the prices of stamps in a fairly systematic way, this precludes any investigation of short-term fluctuations. Second, the bulk of stamp transactions takes place between private individuals; as a result it is almost impossible to estimate the volume of the transactions.
Let us now explain how an exploration of the stamp market may provide clues for a better understanding of the mechanisms and patterns of speculation. Generally speaking, it may be argued that several kinds of agents participate in a given market. For instance, the operators in real estate markets can be divided into two subgroups: (i) residents who buy and sell for their personal use, and (ii) speculators or property developers who make money by selling and buying property. As an illustration, the latter group represented about $`20\%`$ of the buyers in the 1997 Paris real estate market (La Vie Française 1998, No 27589,9). The collective behavior of the residents will obviously not be the same as that of the speculators. Yet, only the combined result of their actions is accessible to observation. No doubt such an intermingling of different mechanisms markedly contributes to blurring and obscuring the interpretation of the phenomenon. Roehner has tried to separate out residents and speculators by investigating the price bubble at the level of separate districts. The proportion of the speculators turned out to vary within a 1:2 margin. In the present paper, we pursue the same objective but, instead of looking at different districts, we are going to consider different stamps. In collectors’ circles, some stamps are known to be speculative assets; examples will be given below. In summary, comparing the price evolution of different stamps may give us an insight into the collective behavior of different populations of economic agents.
Throughout this paper, we consider mainly, though not exclusively, the market of unused French stamps. The restriction to new stamps is made because their quality, and therefore their price, is much easier to control. At a time of rapid worldwide internationalization, it could seem surprising to restrict ourselves to French or British stamps. At the collectors’ level, however, there is a strong force that works against internationalization. In the past half-century, the number of sovereign countries has been multiplied by a factor of three. Furthermore, in most countries new stamps have been issued at a much faster rate than before World War II; as a result there has been a huge increase in the number of stamps. This is reflected in the growing size of worldwide stamp catalogues; for instance, the French Yvert and Tellier catalogue, which used to be in two volumes, now has no fewer than eight volumes. In the face of such a bewildering diversity, it is not surprising that collectors are more and more tempted to restrict themselves to only one country or group of countries. This is also in agreement with the typical collector’s psychology of specializing in a narrow niche that suits his/her fancy.
The paper is organized as follows. In the second section, we provide estimates for the size of the stamp market and we discuss the question of the reliability of the prices given in the catalogues; we also sketch the long-term evolution of the stamp market. Section 3 provides some selected examples of price bubbles which shed light on the nature of speculative forces. In section 4, we propose a tentative classification of speculative markets according to the value taken by two important parameters, namely the amplitude of the price peak and a second parameter which summarizes the form of the peak.
## II The stamp market
### A Turnover
Compared to stock or real estate markets, the market of stamps for collection is small. The French market can serve to illustrate this point. Approximately 40 stamps for collection are issued every year. Sales of these newly issued stamps can be estimated (for 1984) to be of the order of 400 million francs (less than US$80 million) . Of this, no more than about $`5\%`$ is used for mailing. This results from two factors: (i) the issued stamps for collection have face values that very rarely correspond to mailing values; (ii) the state of conservation of a stamp for collection is so determinant for its value (the discount for less-than-perfect conservation can reach $`50\%`$ or more) that few collectors take the risk of using these stamps for mailing. The 400 million francs issued in stamps for collection in 1984 has varied only very slowly over the years, reaching 420 million francs in 1993. This must be compared to the total value, 5.5 billion francs in 1993, of stamps issued for mailing.
As we already noted, it is more difficult to estimate the other transactions. The turnover of the five main traders was of the order of 300 million francs. If the total transaction figure is assumed (somewhat arbitrarily) to be four times larger, one obtains an overall figure of less than 2 billion francs. This is larger than 400 million francs because it comprises trading of all previously issued stamps. Let us compare this figure to the transactions in stocks, in real estate, and in works of art.
* By 1984, the annual transactions on the Paris Stock Exchange were of the order of 100 billion francs.
* In 1984, 35,000 apartments were sold in Paris (figure given by the Chambre des Notaires, i.e. the Lawyers Association); at an average price of one million francs per apartment, this represented an amount of 35 billion francs.
* The turnover of public auctions of works of art was in 1975 of the order of one billion francs, while private transactions were estimated at about 1.5 billion francs .
The French stamp market thus represented in the 1980s about 2% of the transactions on the stock market and about 6% of the real estate sales in the city of Paris (i.e. excluding the suburbs); it was approximately of the same magnitude as the market for works of art.
### B Estimating the price of stamps
The stamp catalogues provide the prices of all existing stamps. In countries such as Britain, France, Germany and the United States, such catalogues have been published annually for more than a century. They thus constitute a valuable source of information for anyone who wants to study either price bubbles or the long term trend of stamp prices. However, the question arises as to whether the prices given in the catalogues truly reflect the prices in actual transactions. From a collector’s perspective, this is a complex question; yet from a statistical point of view, it will be seen to have a simple answer.
The prices listed in a catalogue are for stamps in a perfect state of conservation. It is, however, obvious that a stamp bought several decades ago can hardly be in a perfect state. In other words, its price will always be less than listed in the catalogue. Statistically, however, there is a close connection between negotiated prices and the prices listed in the catalogue. This has been proved by Feuilloley using a sample of 300 stamps; the correlation was about 0.90, while the regression coefficient was about 0.5, which means that on average the real prices were only half the prices listed in the Yvert and Tellier catalogue. In the following, we are interested in the evolution of relative prices rather than in their absolute magnitude. The catalogues can thus be considered a reliable source.
### C Long-term trend of French stamp prices
Fig. 1 shows the long-term price trend of (i) nineteenth-century stamps and (ii) all stamps listed in the Yvert & Tellier catalogue. The deflated price of 19th century stamps has increased at an average rate $`r_{19}=5.2\%`$, while the average price of all stamps has grown at a rate $`r_{\mathrm{all}}=2.1\%`$.
The rate for 19th century stamps is easier to rationalize than the rate for all stamps, since it concerns the evolution of a sample of stamps which remained unchanged in the course of time. During the same time span, the net national income (at constant prices) increased at an annual rate of $`r_0=3.1\%`$. The difference $`r_{19}-r_0=2.1\%`$ can be interpreted with the following simple model.
* In the course of time, the supply, i.e. the number of 19th century stamps, has decreased at a constant rate $`d`$. Thus, the residual number of stamps after a time $`t`$ is $`N_0e^{-dt}`$, starting from an initial number $`N_0`$. One can advance the following rationale for this decrease. There are two types of collectors:
1. The ‘amateurs’: those who have just a small collection. At some point they stop collecting and, as their collection’s worth is low, the stamps are thrown away.
2. The ‘professionals’: they collect ‘seriously’ and their collection has a certain value; even when they die, their relatives will be aware of the collection’s value and sell it. Thus, the stamps re-enter the market.
What makes things difficult is that the amateurs will normally not have the rare stamps. Thus, the rare stamps’ depreciation rate is much flatter than that of the everyday stamps. For the rare stamps, the depreciation rate is probably rather close to zero.
* On the demand side, one must consider the total amount of money $`M`$ that the total number of collectors $`C`$ are willing to devote to purchasing 19th century stamps. Let us denote by $`c`$ the proportion of collectors in the total population $`N`$, and by $`f`$ the fraction of his/her revenue $`R`$ that a collector is willing to spend on 19th century stamps. One has:
$$C=cN,M=fRC=(NR)cf=fcI,$$
(1)
where $`I`$ denotes the national income.
Within this simple framework, the difference $`r_{19}-r_0=2.1\%`$ can be attributed to the following factors.
* The proportion $`c`$ of collectors in the total population has increased.
* The number of 19th stamps has slightly decreased.
* The proportion $`f`$ of a typical collector’s revenue spent on 19th century stamps has increased. This conjecture seems quite reasonable in the light of Engel’s law which states that, as per capita income increases, the percentage spent on items other than food, clothing or housing increases too.
* The number of stamps decreases because of their finite “half-time”.
Equating (1) to $`N_0e^{-dt}p(t)`$ and using $`I=I_0e^{r_0t}`$ gives the estimate
$$r_{19}=r_0+d.$$
(2)
Actually, the equality should be replaced by the inequality $`\le `$, to take into account that the fraction $`f`$ of the revenue and the proportion $`c`$ of collectors may also have increased. This gives a lower bound for the half-time ($`\mathrm{ln}2/d`$) of about thirty years. If, in addition, we incorporate a risk factor, requiring that the rate $`r_{19}`$ of stamp price appreciation should include the price of the risk of stamp destruction, typically proportional to the standard deviation of the Poisson process of stamp destruction, this doubles the lower bound on the expected half-time of 19th century stamps to sixty years. Indeed, including the price of risk means that there must be a remuneration resulting from the fact that the process is not certain and exhibits fluctuations. In this framework, the interest rate $`r_{19}`$ must incorporate a remuneration which is typically proportional to the risk, usually measured by the standard deviation of the uncertain process. For a Poisson process, the standard deviation is $`1/d`$. Adding this contribution, $`d`$ is replaced by $`2d`$ in (2) and the corresponding estimate for $`d`$ is halved, hence the doubling of the half-time.
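As a quick numerical check of these bounds, using only the rates quoted above (a sketch; the variable names are ours):

```python
import numpy as np

r19, r0 = 0.052, 0.031   # growth rates of 19th century stamp prices and national income
d_max = r19 - r0         # eq. (2) as an inequality gives d <= r19 - r0
print(f"half-time >= {np.log(2) / d_max:.0f} yr")                      # ~33 yr
d_risk = d_max / 2.0     # risk premium: d -> 2d halves the estimate of d
print(f"with risk premium: half-time >= {np.log(2) / d_risk:.0f} yr")  # ~66 yr
```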
If the interpretation provided by this model is correct, one would expect the price of 19th century stamps to continue to increase at a faster rate than the net national income. Since, in addition, there are no taxes on stamp sales, stamps are likely to constitute a good investment for the foreseeable future. Notice that the difference $`r_{19}-r_0=2.1\%`$ between the rate of return of 19th century stamps and the growth of the net national income does not reflect the influence of a risk factor but rather that of a shift of the supply-demand curve towards increasing scarcity of supply and increasing demand for old and rare stamps.
## III Speculative bubbles
In this section, we address the following questions :
1. Does the price of an item have a determining influence on the way a speculative bubble unfolds?
2. Are there different speculative patterns?
3. Is it possible to predict at least the upper bounds for the amplitude of the price peaks?
4. Why does a specific stamp become the target of a speculative process?
### A How versus why
The first three questions have to do with how the speculative process develops. In other words, given that a speculative process has taken place, one tries to analyse its characteristics; in contrast, the fourth question refers to the why’s. In a previous paper , we have already emphasized that this latter question is very difficult to address specifically, and this difficulty is a clue to the origin of bubbles. As an illustration, consider two stamps issued on the same day (27 Oct. 1979), with the same number of copies (6 million) and the same face value (2 francs). They differed only in their themes: one (No 2059 in the Cérès catalogue) represented an ancient painting of “Diane at the bath”, while the other (No 2060) represented a painting by Van Gogh. As can be seen in Fig.2, only the second stamp experienced a speculative process, which multiplied its price by 10 in less than six years.
When invited to comment on that enigma, a stamp expert explained that the speculation probably started when a big collector (or trader) happened by chance to come into possession of a substantial proportion of the total number of stamps that had been issued. In fact, many different rumours circulated in philatelic circles as to how this happened. One possible explanation is that speculation seemed, at least in that case, to be triggered by purely subjective factors. Another scenario, explored by Roehner and Sornette (see also ), is that chance plays a crucial role in nucleating the bubble, which is then amplified by multiplicative effects and/or path-dependent positive feedback effects . In this last scenario, the initial price inflation of the “Van Gogh” stamp was a “lucky” event, or maybe even an act of speculation by a big collector, which was afterwards amplified by the action of positive feedbacks as described for instance in the Polya urn model (see for a modern extension to describe self-organization): the bubble fed on itself, reinforcing itself through the increasing attraction presented by the “Van Gogh” stamp. A similar scenario has been documented for the real estate bubble that culminated in 1991 in France : a booming real-estate market is attractive to everybody but the poor, who cannot enter the market: sellers cash in a substantial profit; buyers are not frightened by the astronomical prices and buy confidently, with the expectation that they will also cash in a profit when they sell in the future. This is sustainable only as long as there is liquidity, i.e. as long as a reservoir of potential buyers is continuously replenished.
### B Impact of initial price levels
We now turn to the first question, namely what is the influence of the price level. To a large extent, we find that the price is irrelevant. In other words, an expensive stamp and a cheap stamp, which both became the targets of a speculative process, experience parallel price trajectories. However, there is a low but statistically significant correlation between the price level and the price amplification factor. This fact is illustrated by Fig.3 for French stamps and by Fig.4 for British stamps. Fig.3 shows the evolutions of a very rare stamp and of a fairly common stamp. In 1904, the price of stamp No 2 was 200 francs while the price of stamp No 16 was 0.05 francs. In spite of such a large price gap, the price evolutions are fairly similar. It is true that the timing was not the same, with the bubble for the cheap stamp beginning to build up about 10 years earlier; but the overall increase was of the same order as well as the subsequent decrease. Fig.4 provides a similar example. In 1965, stamp No 90 was worth 4000 francs against 60 francs for stamp No 155. Yet, the price evolutions are very similar. In fact, almost all British stamps issued before World War II and having a face value of a pound or more followed a similar evolution. Stamp No 106 provides an example of a stamp issued in 1902-1910 but having a face value of only half a penny; in this case, the speculative increase is very small in comparison.
We now examine whether there is a correlation between the initial price of a stamp and its price amplification factor. The results are presented in table 1a. In this table and the others below, the coefficients of amplification are given in current value, following the habit of professionals. The coefficient of linear correlation between the logarithms of the 1965 prices and the price amplification factors is equal to 0.55; in other words, the higher the initial price, the stronger the speculation seems to be. A similar observation was made for the district-level prices during the Paris real estate bubble . When the bubble started in 1984, the price in the most expensive district (6th) was twice as high as the price in the cheapest one (10th). The bubble first began to build up in the most expensive districts (6th, 7th, 16th, 17th) and then spread to the cheaper districts with a delay of about 6 to 12 months. Furthermore, there was a low (but nevertheless significant) correlation of 0.49 between the 1984 prices and the price amplification factors, as shown in table 1b.
### C Speculative patterns
#### 1 Corners
We now turn to the second question, namely whether different speculation patterns can be identified. Table 2 shows that there is a marked difference in terms of price increase rate between the first and the second half of the table. Let us for instance consider more closely one of the episodes in the second group, namely the bubble which concerned a few French stamps in the mid-1980s. Owing to its short duration and its high increase rate, one may wonder whether this was not a deliberate attempt to corner the market, i.e. to take control of the market by buying all available stamps. This question clearly raises two others: what percentage of the total “production” of a stamp is it necessary to buy in order to control the market and create such a “squeeze”? And is this within the financial capability of big operators? Let us consider the Van Gogh stamp (Cérès No 2060); its face value is 2 francs and 6 million copies were issued, representing a total amount of 12 million francs, that is to say about 3 percent of the total annual value of newly issued stamps, or 4 percent of the annual turnover of the five major French traders. There should therefore be no problem for a trader or an important collector to buy at least 75 percent of the 6 million stamps. Once the bubble has reached its peak level, however, it becomes much more difficult to keep the market under control, for the 6 million stamps now represent $`60\times 6=360`$ million francs, an amount which is of the same magnitude as the global turnover of the five major traders. This has two consequences:
1. There is obviously an upper bound to the price level that can be reached during a speculative bubble; this ceiling price is determined both by the ability of the main operators to control the market and by the subsequent ability of the market to absorb the supply. It seems that 60 francs for the Van Gogh stamp was either close to or even beyond this upper bound.
2. The 360 million francs represent about 20 percent of our estimate of annual transactions; clearly the market is not going to devote 20 percent of its purchasing power to just one stamp among several thousand other French stamps. In other words, the market will obviously be unwilling to absorb the 6 million stamps (or even a substantial fraction of them) at such a high price. The price is therefore bound to decrease before large stocks of stamps can be absorbed by the market.
#### 2 Shape of the price peak
In order to characterize the shape of the bubble peaks, we use the quantification defined by Roehner and Sornette describing the price $`p(t)`$ as a function of time according to :
$$p(t)=a\mathrm{exp}\left[-\text{sgn}(\tau )\left|\frac{t-t_0}{\tau }\right|^\alpha \right],$$
(3)
where $`t_0`$ denotes the turning point of the peak and $`\tau `$ is a characteristic time scale for the maturation of the bubble. The key parameter that quantifies the shape of the peak is the exponent $`\alpha `$.
* If $`\alpha `$ is equal to $`1`$, one retrieves an exponential growth up to the turning point followed by an exponential decay. $`x=\mathrm{ln}(p)`$ is thus linear up to the maximum, with a tent-like structure.
* If $`\alpha <1`$ and $`\tau >0`$, the function describes a sharp peak (accelerating rise before the peak and decelerating drop after the peak).
* If $`\alpha >1`$ and $`\tau <0`$, the function describes a flat trough (decelerating drop followed by an accelerating rise).
* If $`\alpha >1`$ and $`\tau >0`$, the function describes a “flat peak” (decelerating rise followed by an accelerating drop).
* If $`\alpha <1`$ and $`\tau <0`$, the function describes a sharp trough (accelerating drop followed by a decelerating rise).
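The four regimes can be made concrete with a short sketch (the amplitude, turning point and time scale below are invented for display only, not fitted to any stamp series):

```python
import numpy as np

def peak_price(t, a, t0, tau, alpha):
    """Eq. (3): p(t) = a * exp(-sgn(tau) * |(t - t0)/tau|**alpha)."""
    return a * np.exp(-np.sign(tau) * np.abs((t - t0) / tau) ** alpha)

t = np.linspace(1935.0, 1955.0, 201)
cases = {
    "sharp peak":   dict(tau=5.0,  alpha=0.7),
    "flat peak":    dict(tau=5.0,  alpha=1.5),
    "flat trough":  dict(tau=-5.0, alpha=1.5),
    "sharp trough": dict(tau=-5.0, alpha=0.7),
}
curves = {name: peak_price(t, a=100.0, t0=1945.0, **kw) for name, kw in cases.items()}
```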
Table 2 shows that almost all the price peaks that we examined have a peak exponent $`\alpha >1`$. This corresponds to a flat peak pattern, and must be contrasted with the situation found for most commodity price peaks, for which a sharp peak-flat trough pattern holds . The only stamp bubble which clearly displays a sharp peak pattern is the one that occurred in France during World War II. The “causes” of this peak are relatively easy to enumerate, namely
1. the need to sweep black market profits under the carpet,
2. the demand generated by a number of passionate collectors belonging to the German occupation troops,
3. the fact that during the war, savings accumulated as a result of consumption restrictions.
Yet, none of these reasons explains why the shape of these peaks was so different.
## IV Conclusion
By analysing the stamp market, we have tried to provide a vivid demonstration of speculation and of its main characteristics in one of its most basic manifestations. Our observations emphasize the role of fancy and collective behavior. With J.M. Keynes and his parallel between the stock market and newspapers’ beauty contests (see chapter 12 of the General Theory ), one could argue that the main characteristic of the successful speculator is his/her ability to predict what the (average) behavior of the rest of the public will be. However, such a model probably underestimates the importance of social interaction and mimetic contagion. For instance, one can hardly expect an isolated collector to be capable of predicting that, among hundreds of other stamps, the Van Gogh stamp would become the target of an important speculative movement. Such a behavior is more likely to be propagated by the interactions between collectors and by philatelic publications. Unfortunately, lack of data prevents us from statistically estimating the strength and frequency of those social interactions.
In order to provide a unified overview of some of the results presented in this paper and in , and to compare the shape of bubbles found in commodities and in collectibles, Fig. 5 shows a comparison between different types of speculative movements.
A question left open by our present study is the possible correlation between the structure and shape of a speculative bubble and the degree of market liquidity. By this, we mean the following. Speculators are very useful in providing “liquidity” to the market. Otherwise, in the stamp or real-estate markets, exchanges would occur only between collectors or residents and would be very scarce, since the purpose of a collector is to collect and that of a resident is to reside! The task of collecting or of changing one’s living place would thus be proportionally more difficult. This positive role of speculation is classical and is often advanced by defenders of free capitalistic markets. On the other hand, a larger proportion of speculators implies that traders are more in phase; there is less friction, and speculative bubbles should develop faster. This could be tested empirically by correlating the exponent $`\alpha `$ with the ratio of residents to speculators in the 20 Parisian districts.
Our results on the independence of the shape of the speculative peaks with respect to the price of the stamps suggest that risk aversion (related to the amount of money involved in a transaction) does not play an important role in the speculative bubbles observed in the stamp market. This is very interesting information, whose validity should be investigated for other markets, including financial markets. Theoretical models of financial crashes using rational expectation theory coupled with herding behavior of a fraction of the traders also suggest that risk aversion is not a determining factor . It is thus reassuring that a similar conclusion is obtained in two very different markets and by very different approaches.
Acknowledgements: We would like to express our gratitude to Mrs. Paganini and Luppi (Céres-Philatélie) and to the librarians of the Documentation Center for Collectors Stamps (Musée de la Poste-France Telecom).
# Black hole formation via hypercritical accretion during common envelope evolution
## 1. INTRODUCTION
Common envelope evolution, in which the components of a binary system are engulfed by a common gaseous envelope, is a brief but crucial phase in the formation of many compact binary systems. Drag forces, due to velocity differences between the orbiting components of the binary and the surrounding gas, work to shrink the binary orbit, while the potential energy released acts to expel the common envelope. Extensive numerical work (e.g. Bodenheimer & Taam 1984; Livio & Soker 1988; Taam, Bodenheimer & Rozyczka 1994; Sandquist et al. 1998) has only partially succeeded in reducing the large uncertainties surrounding the efficiency of this process (Iben & Livio 1993; Livio 1994; Rasio & Livio 1996).
For neutron stars, an equally fundamental uncertainty concerns the accretion rate during the common envelope phase. At the densities of a typical giant envelope, the Bondi-Hoyle accretion rate $`\dot{M}_{\mathrm{BH}}`$ for an inspiralling neutron star would be extremely large, typically of the order of $`1M_{\odot }\mathrm{yr}^{-1}`$, and often much larger still. This is many orders of magnitude in excess of the Eddington rate of order $`10^{-8}M_{\odot }\mathrm{yr}^{-1}`$, obtained by equating the outward radiation pressure with gravity, leading to starkly different predicted outcomes (the Eddington “limit” here implies only an order-of-magnitude estimate of the accretion rate, since accretion in common envelopes is not generally spherically symmetric). If $`\dot{M}`$ is limited to the Eddington rate (or even several orders of magnitude above it), then ejection of the stellar envelope will occur when the total accreted mass is $`M_{\mathrm{acc}}\ll M_{\odot }`$, and the neutron star will survive. Conversely, uninhibited accretion at close to $`\dot{M}_{\mathrm{BH}}`$ will lead almost inevitably to collapse to a black hole.
Recent work has tended to suggest that $`\dot{M}_{\mathrm{BH}}`$ is the appropriate rate for neutron star accretion in a stellar envelope (see e.g. Bethe & Brown 1998 and references therein). At sufficiently high accretion rates, the Eddington limit becomes irrelevant because photons are trapped and advected inward with the flow (Rees 1978; Begelman 1979; Blondin 1986). A hot envelope develops around the neutron star, and eventually high enough densities and temperatures are reached that the accretion energy can be radiated away by neutrinos. This hypercritical mode of accretion has been considered in the context of fallback onto neutron stars in supernovae (e.g. Colgate 1971; Zeldovich, Ivanova & Nadezhin 1971; Chevalier 1989; Houck & Chevalier 1991), as a component of models for gamma-ray bursts (Popham, Woosley & Fryer 1999), and applied to the problem of accretion in a stellar envelope (Chevalier 1993, 1996; Brown 1995; Fryer, Benz & Herant 1996). A striking consequence of ubiquitous hypercritical accretion would be that neutron star-neutron star binaries cannot form from massive binaries via the usual common envelope evolution route, which would instead lead to black hole-neutron star binaries. The observed population of neutron star-neutron star binaries would have to form instead via a rarer channel in which the binary progenitors are of nearly equal mass (Brown 1995), and would be an order of magnitude rarer than black hole-neutron star binaries (Bethe & Brown 1998). The robustness of this conclusion is of great interest, since these binaries are both potential gamma-ray burst progenitors, and the most promising targets for the early LIGO detection of gravitational waves (Abramovici et al. 1992).
The velocity of the neutron star relative to the common envelope ensures that the accretion flow cannot be spherically symmetric, at least at large radii of the order of the accretion radius. Moreover, gradients in the envelope structure across the accretion radius introduce angular momentum into the flow, which modifies its properties even at small radii, via the formation of a rotationally supported disk. In this paper we investigate how these complexities affect hypercritical accretion via hydrodynamic simulations of the outer parts of the accretion flow. Our approach thus complements the consideration of angular momentum presented by Chevalier (1996), though as discussed later we draw different conclusions as to its likely importance.
The plan of this paper is as follows. In §2 we briefly review the properties of spherical hypercritical accretion that are relevant for the common envelope application, and in §3 we outline the numerical methods used. The principal computational limitation is the restriction to two dimensional simulations. Results for flows with zero and non-zero angular momentum are presented in §4 and §5, and the implications and remaining uncertainties discussed in §6.
## 2. HYPERCRITICAL ACCRETION
The simplest approximation to the accretion flow in a common envelope is to describe it as Bondi-Hoyle-Lyttleton accretion from a uniform medium onto a point mass. The properties of such flows have been extensively studied, both analytically (Hoyle & Lyttleton 1939; Bondi & Hoyle 1944) and numerically (e.g. Ruffert 1997; Kley, Shankar & Burkert 1995; Benensohn, Lamb & Taam 1997, and references therein). For an accretor of mass $`M`$, the critical radius is the accretion radius (there are a variety of definitions of $`R_a`$ in the literature, but all lead to accretion rates that are essentially the same for our purposes),
$$R_a=\frac{2GM}{v_{\infty }^2+c_{\infty }^2},$$
(1)
where $`v_{\infty }`$ and $`c_{\infty }`$ are the velocity and sound speed far upstream of the accreting star. Material falling into a cylinder with this radius is accreted, so that,
$$\dot{M}_{\mathrm{BH}}=\pi R_a^2\rho _{\infty }\sqrt{v_{\infty }^2+c_{\infty }^2},$$
(2)
where $`\rho _{\infty }`$ is the density of the gas far upstream of the accretor, and the last factor is included as it provides better results for low Mach number accretors. The timescale for setting up the flow is of the order of the sound crossing time at the accretion radius, $`t_a=R_a/c_{\infty }`$. Generally, since we are interested in inspiral through a pressure supported atmosphere, we expect Mach numbers of order unity, though this will vary somewhat at different radii in the star.
Numerical values for $`R_a`$ and $`\dot{M}_{\mathrm{BH}}`$ require specification of a stellar model. Fryer, Benz & Herant (1996) tabulate values for several giant and main sequence stars (the giants are of greater interest here, though somewhat analogous considerations apply to neutron star-main sequence collisions, for example in globular clusters). For example, for a $`20M_{\odot }`$ giant $`R_a\sim 10^{11}\mathrm{cm}`$, while $`\dot{M}_{\mathrm{BH}}`$ varies from $`10^{-2}M_{\odot }\mathrm{yr}^{-1}`$ on upwards. For a more distended $`10M_{\odot }`$ model, lower accretion rates of $`10^{-2}M_{\odot }\mathrm{yr}^{-1}`$ are of course expected in the outer regions. These accretion rates can only be realized in the hypercritical regime, since the Eddington limit on the luminosity,
$$L_{\mathrm{Edd}}=\frac{4\pi GMm_pc}{\sigma _T},$$
(3)
where $`m_p`$ is the proton mass and $`\sigma _T`$ the Thomson cross-section, corresponds to a few $`\times 10^{-8}M_{\odot }\mathrm{yr}^{-1}`$.
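To make the comparison concrete, a small order-of-magnitude sketch (the envelope values $`\rho _{\infty }`$, $`v_{\infty }`$ and $`c_{\infty }`$ below are illustrative placeholders chosen to give $`R_a\sim 10^{11}`$ cm, not numbers from any particular stellar model):

```python
import numpy as np

# cgs constants
G, c = 6.674e-8, 2.998e10
M_sun, yr = 1.989e33, 3.156e7
m_p, sigma_T = 1.673e-24, 6.652e-25

M = 1.4 * M_sun                                    # neutron star mass
L_edd = 4.0 * np.pi * G * M * m_p * c / sigma_T    # eq. (3)
mdot_edd = L_edd * 1.0e6 / (G * M)                 # Eddington rate for R_ns = 10 km

# illustrative envelope values
rho_inf, v_inf, c_inf = 1.0e-6, 4.0e7, 4.0e7       # g/cm^3, cm/s, cm/s

R_a = 2.0 * G * M / (v_inf**2 + c_inf**2)                    # eq. (1)
mdot_bh = np.pi * R_a**2 * rho_inf * np.hypot(v_inf, c_inf)  # eq. (2)

print(f"R_a      = {R_a:.1e} cm")                          # ~1e11 cm
print(f"Mdot_BH  = {mdot_bh * yr / M_sun:.1e} Msun/yr")    # ~4e-2
print(f"Mdot_Edd = {mdot_edd * yr / M_sun:.1e} Msun/yr")   # ~1.5e-8
```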
For hypercritical accretion to occur the timescale for photons to diffuse out of the flow must exceed the timescale for them to be advected inwards. For a spherical flow whose opacity $`\kappa _{\mathrm{es}}`$ is dominated by electron scattering, the optical depth is
$$\tau _{\mathrm{es}}=\int \rho (R)\kappa _{\mathrm{es}}\,dR,$$
(4)
and the diffusion time is roughly,
$$t_{\mathrm{diff}}\simeq \frac{R}{c}\tau _{\mathrm{es}}.$$
(5)
For a flow in free-fall, where $`v_r=\sqrt{2GM/R}`$, equating $`t_{\mathrm{diff}}`$ to the free-fall time $`R/v_r`$ yields an estimate for the trapping radius (Blondin 1986),
$$R_{\mathrm{trap}}=\frac{1}{2}\frac{\dot{M}c^2}{L_{\mathrm{Edd}}}\frac{2GM}{c^2}.$$
(6)
In addition to the photons being trapped, we must also be able to eventually lose the accretion energy from the system. For a black hole, this is possible at any accretion rate, since the energy can be invisibly advected across the horizon (e.g. Narayan, Garcia & McClintock 1997). In a neutron star system, high accretion rates are required because the pressure at the base of the accretion flow must become large enough to allow neutrino losses (primarily from electron-positron annihilation into neutrino pairs) to balance the accretion energy, $`GM\dot{M}/R_{\mathrm{ns}}`$. In the spherically symmetric case, this usually entails the formation of a hot envelope at small radii, in which $`v_r`$ is subsonic, bounded from the free-falling gas by a shock at radius $`R_{\mathrm{sh}}`$. Within this inner region, the density and pressure scale as a radiation dominated $`\gamma =4/3`$ ($`n=3`$) polytrope (e.g. Brown 1995; Fryer, Benz & Herant 1996),
$$\rho =\rho _0\left(\frac{R}{R_{\mathrm{sh}}}\right)^{-3},\qquad p=p_0\left(\frac{R}{R_{\mathrm{sh}}}\right)^{-4}.$$
(7)
A detailed calculation (Houck & Chevalier 1991; Brown 1995) finds for $`R_{\mathrm{sh}}`$,
$$R_{\mathrm{sh}}\simeq 2.6\times 10^8\mathrm{cm}\left(\frac{\dot{M}}{M_{\odot }\mathrm{yr}^{-1}}\right)^{-0.37}.$$
(8)
Requiring that $`R_{\mathrm{sh}}<R_{\mathrm{trap}}`$ gives a lower limit on the accretion rate of roughly $`\dot{M}_{\mathrm{hyper}}\approx 10^{-4}M_{\odot }\mathrm{yr}^{-1}`$. Even allowing for a generous margin of error in this crude estimate, for example due to a non-spherical geometry, this is substantially less than the Bondi-Hoyle rate in an envelope, which should thus be safely in the hypercritical regime. Of course, in the common envelope case we also require that $`R_{\mathrm{sh}}<R_a`$, but for spherical accretion and $`R_a\sim 10^{11}\mathrm{cm}`$ this is an easily satisfied constraint, $`\dot{M}\gtrsim 10^{-7}M_{\odot }\mathrm{yr}^{-1}`$.
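These order-of-magnitude constraints can be checked with a few lines (a sketch only; all numbers carry the large uncertainties discussed above, and the function names are ours):

```python
import numpy as np

G, c = 6.674e-8, 2.998e10                         # cgs
M_sun, yr = 1.989e33, 3.156e7
m_p, sigma_T = 1.673e-24, 6.652e-25

M = 1.4 * M_sun
L_edd = 4.0 * np.pi * G * M * m_p * c / sigma_T

def r_trap(m):
    """Eq. (6): trapping radius in cm, for an accretion rate m in Msun/yr."""
    return 0.5 * (m * M_sun / yr) * c**2 / L_edd * (2.0 * G * M / c**2)

def r_shock(m):
    """Eq. (8): shock radius in cm, for an accretion rate m in Msun/yr."""
    return 2.6e8 * m ** (-0.37)

# hypercritical threshold: R_sh = R_trap
m_hyper = (2.6e8 / r_trap(1.0)) ** (1.0 / 1.37)
print(f"Mdot_hyper ~ {m_hyper:.1e} Msun/yr")      # ~1e-4, as quoted above

for m in (1e-4, 1e-2, 1.0):
    print(f"m = {m:.0e}: R_trap = {r_trap(m):.1e} cm, R_sh = {r_shock(m):.1e} cm")
```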
If angular momentum and other possible complications can be neglected, the scenario for the inspiral of a neutron star then has three stages. First, the neutron star accretes at less than the Eddington rate as it encounters the tenuous outer atmosphere of the companion. Negligible mass is accumulated in this phase. As the density grows, the implied Bondi-Hoyle rate first rises above $`\dot{M}_{\mathrm{Edd}}`$, but is below $`\dot{M}_{\mathrm{hyper}}`$. In this regime outward diffusion of photons is effective at limiting accretion, again to negligible levels. Finally, the Bondi-Hoyle rate rises well above the critical value for hypercritical accretion, and gas starts accreting freely onto the neutron star. The outcome then depends critically on how much mass can be accumulated before enough energy has been lost to unbind the envelope. Brown (1995) estimates this energy as,
$$E\simeq 3<v_{\infty }^2>\mathrm{\Delta }M$$
(9)
where $`\mathrm{\Delta }M`$ is the accreted mass, and $`<v_{\infty }^2>`$ is the mean square velocity averaged over the hypercritical accretion phase. For typical binding energies and velocities, $`\mathrm{\Delta }M\gtrsim 1M_{\odot }`$, sometimes vastly so, and the mass accreted will probably exceed the maximum mass for a neutron star and force collapse to a black hole.
## 3. NUMERICAL METHOD
We investigate how angular momentum could affect the above scenario of hypercritical accretion by studying the inviscid, purely hydrodynamic behavior of the infalling gas at large radii, $`R\gg R_{ns}`$. The hypercritical regime corresponds to the complete dominance of advection of photons over diffusion, so that the effects of radiation transport are negligible. We also neglect magnetic fields, which is potentially less forgivable; unfortunately, three dimensional calculations that include magnetic fields and attain the required dynamic range remain difficult.
### 3.1. 2D simulations
We have investigated two dimensional simulations both in spherical polar geometry (with $`v_{\infty }`$ parallel to $`\theta =0`$ and axisymmetry in the $`\varphi `$ direction) and in cylindrical geometry $`(z,R,\varphi )`$. The spherical polar simulations represent the most faithful two dimensional representation of the flow, but necessarily exclude the possibility of studying angular momentum or density gradients in the ambient medium. For such zero angular momentum flows, we found simply that the analytic Bondi-Hoyle formula of equation (2) provides a good estimate of the accretion rate (for low Mach numbers the actual accretion rate is somewhat higher than the estimate, but the difference is not so great as to qualitatively affect the result).
For these reasons (and following Benensohn, Lamb & Taam 1997 and most other two dimensional calculations) we focus on the cylindrical polar calculations. The neutron star is represented as a point mass at $`R=0`$, surrounded by a fixed computational mesh in $`(R,\varphi )`$. We use uniform zoning in the $`\varphi `$ direction, and choose the radial grid such that $`R_{i+1}=\beta R_i`$ with $`\beta >1`$ a constant. This amounts to choosing cells that have the same shape, in our case roughly square, at all radii.
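A sketch of such a grid (the function name is ours; for roughly square cells one wants $`\beta \approx 1+2\pi /n_\varphi `$):

```python
import numpy as np

def log_grid(r_in, r_out, n_r):
    """Radial cell edges with R_{i+1} = beta * R_i, so that cells keep
    the same aspect ratio at all radii."""
    beta = (r_out / r_in) ** (1.0 / n_r)
    return r_in * beta ** np.arange(n_r + 1), beta

# values from the runs of Sec. 4, in units of R_a
edges, beta = log_grid(r_in=1.0 / 60.0, r_out=4.0, n_r=160)
print(f"beta = {beta:.4f}")   # ~1.035, consistent with the beta ~ 1.03 quoted below
```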
Cylindrical polar simulations in which $`z`$ is the ignored co-ordinate allow for angular momentum, but correspond to the rather unphysical assumption that the central mass moves through a thin sheet of gas with zero gradients in the perpendicular direction. This may lead to qualitative errors, for example in the strength and prominence of ‘flip-flop’ instabilities at high Mach numbers (compare e.g. Livio 1992; Benensohn, Lamb & Taam (1997); Ruffert (1997)), though untangling the effects of geometry and resolution in the various calculations is difficult. We will not be interested here in transient disks or instabilities in the wake, where these issues are most worrisome.
We assume that the gas is dominated by radiation pressure out to the outer boundary of the simulation at a few $`R_a`$. The equation of state can then be modelled using a simple adiabatic relation,
$$p=(\gamma -1)ϵ,$$
(10)
where $`ϵ`$ is the energy per unit volume, and we take $`\gamma =4/3`$, corresponding to a radiation dominated gas where $`p=ϵ/3`$.
The calculations use the ZEUS-3D code developed by the Laboratory for Computational Astrophysics (Clarke, Norman & Fiedler 1994). ZEUS is an Eulerian finite difference code which employs an artificial viscosity to handle shocks. The algorithms and design of the code are closely similar to those detailed by Stone & Norman (1992a, 1992b).
### 3.2. Boundary conditions
For the outer boundary condition, we impose inflow of gas at velocity $`v_{\mathrm{\infty }}`$ and sound speed $`c_{\mathrm{\infty }}`$ at the outer boundary, $`R_{\mathrm{out}}`$, for $`-\pi /2<\varphi <\pi /2`$. Over the remainder of the outer boundary outflow boundary conditions are specified, implemented as simple continuation of fluid variables on the grid into the boundary zones. We take a Mach number of 1.5, appropriate to the generally mildly supersonic accretion flows in common envelopes.
An inspiralling neutron star encounters radial gradients in $`\rho _{\mathrm{\infty }}`$, $`c_{\mathrm{\infty }}`$ and $`v_{\mathrm{\infty }}`$ across the accretion radius. In most cases the density gradient is the principal effect (e.g. Fryer, Benz & Herant 1996), which we model as a simple exponential,
$$\rho _{\mathrm{\infty }}\propto e^{ϵ_\rho \mathrm{\Delta }r/R_a}$$
(11)
where $`\mathrm{\Delta }r`$ is the radial distance in the common envelope of the unperturbed medium from the neutron star orbit, and $`ϵ_\rho `$ measures the strength of the gradient across the accretion radius. Values of $`ϵ_\rho `$ for various stars vary substantially, and have been tabulated by Fryer, Benz & Herant (1996).
The inner boundary condition is reflecting ($`v_R=0`$ at $`R=R_{\mathrm{in}}`$), allowing for the formation of a pressure supported inner envelope for zero angular momentum accretion. This implicitly assumes low accretion rates where the shock radius $`R_{\mathrm{sh}}`$ is large enough to exceed $`R_{\mathrm{in}}`$. For runs with angular momentum, the choice of inner boundary condition is less important as angular momentum provides substantial support against gravity at small radii. Our boundary condition then amounts to assuming that radial flow through the disk is slow – the plausibility of this can be verified post facto by studying the properties of the disks formed in the calculation.
## 4. DISK FORMATION
Figure 1 shows results for a series of Bondi-Hoyle accretion simulations in which the density gradient in the ambient medium was varied from $`ϵ_\rho =0`$ to $`ϵ_\rho =0.4`$. All the calculations used a grid with $`n_\varphi =144`$ and $`n_R=160`$. The inner boundary was at $`R_{\mathrm{in}}=R_a/60`$ and the outer boundary at $`R_{\mathrm{out}}=4R_a`$, giving a grid with $`\beta \approx 1.03`$. The calculations were run until $`t=32t_a`$, where the time unit $`t_a`$ is the sound crossing time of the accretion radius $`R_a`$. We plot in Fig. 1 only the inner region of the accretion flow. The $`ϵ_\rho =0.2`$ run was also recomputed at modestly higher resolution ($`n_R=n_\varphi =200`$) until $`t=100t_a`$, in order to check whether mass continued to accumulate in the outer parts of the disk at late times. No qualitative changes were observed to occur during this longer simulation.
In the absence of density gradients in the ambient gas, the structure of the flow closely resembles that seen in simulations of Bondi-Hoyle accretion with the same parameters and an absorbing inner boundary condition. A pressure supported, roughly symmetric envelope has developed around the central object, and this displaces the bow shock upstream into the flow. At this resolution and Mach number, the flow is found to be only rather weakly transient. Increasing the density gradient first leads to an asymmetric displacement of the bow shock, followed for more extreme density gradients by the formation of a clear disk in the inner regions of the simulation. For the parameters adopted here, $`ϵ_\rho \gtrsim 0.2`$ suffices to create a clear and persistent disk surrounding the central object.
Figure 2 plots the velocity and density fields corresponding to the images in Fig. 1. Over this radial range there is a large variation in velocity, which we normalise to the local Keplerian value $`v_k=\sqrt{GM/R}`$. For the zero density gradient case, the outer flow is clearly not cylindrically symmetric, but within $`0.1R_a`$ of the central object the envelope is both reasonably symmetric and characterised by only small rotational motions. Conversely, a clear disk is produced in the calculations with stronger density gradients, $`ϵ_\rho \gtrsim 0.2`$; the flow here is disk-like out to at least $`R_a/2`$.
Figure 3 shows how the volume averaged rotational velocity from the simulations, normalised to $`v_k`$, varies as a function of radius. Negligible rotation is seen in the $`ϵ_\rho =0`$ run, with $`<v_\varphi >/v_k\lesssim 0.1`$ at all radii, while the runs with density gradients all show the signature of a disk in which both pressure gradients and rotational support are significant. For the two runs with the largest gradients, a sizeable disk extending out to almost $`R_a`$, with typical angular velocity $`\mathrm{\Omega }\simeq \mathrm{\Omega }_k/2`$, is clearly produced.
## 5. DISK EVOLUTION
For hypercritical accretion, the disks formed around the neutron star will be advection dominated. The properties of advection dominated flows have been extensively studied, both at the high accretion rates of interest here (e.g. Begelman & Meier 1982) and more recently at lower accretion rates (e.g. Narayan & Yi 1994, 1995). These disks are hot, geometrically thick, and only weakly bound to the accreting object. The presence of a disk may affect the outcome of common envelope evolution in two ways, via a modification of the accretion rate onto the neutron star or via feedback of energy into the stellar envelope.
The most direct potential influence of a disk is via the accretion rate. As infall continues, the disk will reach a quasi-steady state in which the rate of infall onto the outer disk balances the rate of disk accretion. This rate need not be the Bondi-Hoyle rate, and in principle could be lower (for an illustration, in our inviscid simulations, where the only viscosity is the small numerical one, we eventually reach a steady state with close to zero ongoing accumulation of mass in the disk). We estimate whether this is important below, and show that for the usually considered values of $`\alpha _{\mathrm{SS}}`$, the Shakura-Sunyaev (1973) viscosity parameter, the disk is probably able to transport mass inwards at the Bondi-Hoyle rate.
The presence of a disk also makes the formation of outflows or jets probable (e.g. Livio 1999). Strong outflows could themselves affect the fate of the system by reducing the accretion rate onto the neutron star. Much weaker jets, if they arose from deep in the neutron star potential well, would still provide an important additional energy input into the envelope, and shorten the common envelope phase.
### 5.1. Accretion rate
The accretion rate through the disk can be estimated from the measured disk mass and the inferred viscous timescale. For a viscosity parameterized via the Shakura-Sunyaev (1973) form, $`\nu =\alpha _{\mathrm{SS}}c_s^2/\mathrm{\Omega }_k`$, the viscous timescale at radius $`R`$ is,
$$t_\nu =\frac{R^2}{\nu }=\frac{R^2\mathrm{\Omega }_k}{\alpha _{\mathrm{SS}}c_s^2}.$$
(12)
The mass in the disk will drain onto the central object on the viscous timescale at the outer radius, with an accretion rate $`\dot{M}_{\mathrm{disk}}\sim M_{\mathrm{disk}}/t_\nu `$. The value of $`\alpha _{\mathrm{SS}}`$ appropriate to thick, radiation dominated disks is unknown. If MHD instabilities (Balbus & Hawley 1991) are responsible for the viscosity, then simulations of thin gas pressure dominated disks find typically that $`\alpha _{\mathrm{SS}}\sim 10^{-2}`$ (Stone et al. 1996; Brandenburg et al. 1996). There remain large uncertainties in the theoretical expectation for disks of the kind that we are interested in. However, taking $`\alpha _{\mathrm{SS}}=10^{-2}`$, and evaluating the accretion rate from the disk mass and run of sound speed obtained at $`t=100t_a`$ in the long duration $`ϵ_\rho =0.2`$ run, we find that $`\dot{M}_{\mathrm{disk}}\sim \dot{M}_{\mathrm{BH}}`$. This is a crude estimate, which in particular ignores completely the expansion of the disk expected from viscous evolution (Lynden-Bell & Pringle 1974). As a result, the value $`\dot{M}_{\mathrm{BH}}`$ should be regarded as an upper limit to the mean accretion rate through the disk. Furthermore, at least at small radii the accretion is unlikely even to be steady (Chevalier 1996), since the much slower fall-off of pressure with radius in an advection dominated disk as compared to a spherical envelope ($`p\propto R^{-5/2}`$ as compared to $`p\propto R^{-4}`$) requires non-steady accretion to reach the extreme conditions required for neutrino emission. Nonetheless, the estimate suggests that there is no strong reason to believe that, in the absence of outflows, the formation of a disk creates an insurmountable bottleneck to rapid accretion.
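A minimal order-of-magnitude sketch of this estimate is given below; every input is an assumed fiducial value rather than a number measured from the simulation.

```python
import numpy as np

# M_dot_disk ~ M_disk / t_nu, with t_nu = R^2 * Omega_k / (alpha_SS * c_s^2).
# All inputs are illustrative assumptions (cgs units).
G, M_ns = 6.67e-8, 2.8e33          # gravitational constant; 1.4 M_sun
alpha_ss = 1e-2                    # Shakura-Sunyaev viscosity parameter
R = 1e11                           # outer disk radius ~ R_a (cm)
M_disk = 2e32                      # assumed disk mass ~ 0.1 M_sun (g)

Omega_k = np.sqrt(G * M_ns / R**3)
c_s = 0.3 * Omega_k * R            # thick disk: c_s a fair fraction of v_k

t_nu = R**2 * Omega_k / (alpha_ss * c_s**2)
M_dot = M_disk / t_nu              # g/s
print(f"t_nu ~ {t_nu:.1e} s, M_dot ~ {M_dot * 3.15e7 / 2e33:.1f} M_sun/yr")
```

The resulting rate, of order a solar mass per year, is vastly super-Eddington, as expected in the hypercritical regime.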
### 5.2. Outflows and jets
Most accretion disk systems are observed to generate jets or less well collimated outflows. Although a thick disk such as that formed in the simulations has a relatively small density contrast between the equatorial plane and the polar regions, advection dominated flows are also hot enough that the gas is only weakly bound to the central mass. Detailed solutions show that the Bernoulli constant,
$$\mathrm{Be}\equiv \frac{1}{2}v_R^2+\frac{1}{2}\mathrm{\Omega }^2R^2-\mathrm{\Omega }_k^2R^2+\frac{\gamma }{\gamma -1}c_s^2,$$
(13)
which measures the energy the gas would possess if adiabatically moved to infinity, is positive for an often wide range of angles close to the poles (Narayan & Yi 1994; 1995). Although the outcome depends additionally on the outer boundary conditions and the detailed physics of outflow generation, this positivity of $`\mathrm{Be}`$ is likely to imply that outflows are a generic feature of advection dominated disks (Narayan & Yi 1995; Blandford & Begelman 1999). For our purposes, we can distinguish two extreme possibilities, in which outflows are either self-similar or generated exclusively from the inner disk at $`R\sim R_{\mathrm{ns}}`$.
#### 5.2.1 Outflows
Self-similar outflows from advection dominated flows represent the model considered by Blandford & Begelman (1999). In this case the fraction of accreting mass lost in the outflow is the same for each decade in disk radius, so that the remaining mass accretion rate through the disk decreases inwards as $`\dot{M}\propto R^n`$ with $`0<n<1`$. The appropriate value of $`n`$, which within this model measures the efficiency with which mass is ejected, has not been determined for any realistic thick disk model, though it has been suggested that $`n`$ is large for simulations of convection in thick disks (Stone, Pringle & Begelman 1999). Empirically $`n`$ must be close to unity, $`n\simeq 1`$, if this model is to be successful in explaining the extremely low radiative efficiencies of disks around black holes in low luminosity galactic nuclei (e.g. Reynolds et al. 1996). In the common envelope case, the disk extends over an extremely large range of radii, from the neutron star surface at $`R\simeq 10^6\mathrm{cm}`$ out to of the order of the accretion radius at $`R\simeq 10^{11}\mathrm{cm}`$. As a result, the consequence of outflows that are inefficient at removing angular momentum would be to greatly reduce the accretion rate onto the neutron star. Cygnus X-2 may be an example of a neutron star that has survived an episode of super-Eddington mass transfer (though here during thermal timescale mass transfer rather than common envelope evolution) without accreting a significant fraction of the transferred mass (King & Ritter 1999; King & Begelman 1999). If this is indeed the case, it provides some support for the scenario of strong outflows under physical conditions analogous to those encountered during common envelope inspiral.
#### 5.2.2 Jets
Alternatively, outflows may arise predominantly from the inner disk. In this case, the energy feedback into the common envelope from outflows originating deep in the neutron star potential well could be highly significant. If, as observations of jet systems suggest (Livio 1999), the jet is launched with a velocity roughly equal to the local Keplerian velocity, $`v_{\mathrm{jet}}^2\simeq GM_{\mathrm{ns}}/R_{\mathrm{ns}}`$, then the energy deposition after $`\mathrm{\Delta }M_{\mathrm{jet}}`$ of mass has been ejected is just,
$$E_{\mathrm{jet}}\simeq \alpha _{\mathrm{jet}}\frac{GM_{\mathrm{ns}}\mathrm{\Delta }M_{\mathrm{jet}}}{R_{\mathrm{ns}}},$$
(14)
where $`\alpha _{\mathrm{jet}}`$ is an efficiency factor that will depend on the specific jet model. Ejecting the common envelope requires an energy deposition of around $`2\times 10^{48}\mathrm{ergs}`$ (Brown 1995), which could be achieved for a mass loss in a jet as low as,
$$\mathrm{\Delta }M_{\mathrm{jet}}\simeq 5\times 10^{-6}\alpha _{\mathrm{jet}}^{-1}M_{\odot }.$$
(15)
The energetic feedback from such an outflow could thus be important for hastening the ejection of the envelope, even if the mass loss itself was far too small to significantly impact the accretion rate. Indeed for this type of jet a large energy deposition could only be avoided if either the accretion rate was $`\ll \dot{M}_{\mathrm{BH}}`$, or if the efficiency of the jet production or coupling to the envelope was extremely low. A low efficiency of coupling to the envelope could arise if the jet was extremely well collimated and able to escape the star entirely, which remains a possibility. However, even for a collimation angle of $`10^{-2}`$, which is typical for many jet sources, the ejection of $`0.2M_{\odot }`$ is sufficient to unbind the envelope. If jets are able to deposit energy into the envelope then an immediate consequence would be that the outcome of common envelope evolution should depend on the depth of the potential well at the surface of the inspiralling compact object – more compact objects should lead to a higher value of the common envelope efficiency parameter $`\alpha _{\mathrm{CE}}`$ and eject the envelope more easily.
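The scale of equation (15) can be checked with one line of arithmetic; the neutron star mass and radius below are assumed fiducial values.

```python
# Mass a jet launched from the neutron star surface must eject to deposit
# E_env ~ 2e48 erg (Brown 1995) into the envelope, per Eq. (15); cgs units.
G = 6.67e-8
M_ns, R_ns = 2.8e33, 1e6       # assumed 1.4 M_sun and 10 km
E_env, alpha_jet = 2e48, 1.0   # jet efficiency factor set to unity

dM_jet = E_env * R_ns / (alpha_jet * G * M_ns)
print(f"Delta M_jet ~ {dM_jet / 2e33:.1e} M_sun")   # ~5e-6 M_sun
```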
## 6. DISCUSSION
In this paper we have considered the probable fate of neutron stars during common envelope evolution. Numerical simulations of the hydrodynamics of Bondi-Hoyle accretion at large radii from the neutron star show that modest density gradients, typical of those expected in giant envelopes, lead to persistent rotationally supported disks around the accreting object. The presence of angular momentum means that spherically symmetric treatments of hypercritical accretion are likely to be a poor approximation for studying the common envelope phase. In particular, whether the neutron star accretes enough mass to force it to collapse to a black hole prior to loss of the envelope depends entirely on the subsequent evolution of a thick, advection dominated, accretion disk. Little is known about the behavior of such disks, opening plenty of room for legitimate dispute as to the outcome.
Simple estimates suggest that the disks formed in the simulations are sufficiently hot and thick to support accretion rates which could be as large as the inferred Bondi-Hoyle rate in the common envelope phase, provided only that the disk viscosity is not surprisingly small. The presence of a disk does not create a serious bottleneck in the accretion flow at $`RR_a`$. However, more complex and uncertain physics is likely to come into play close to the neutron star. Jets are observed almost universally from accreting systems possessing thick, hot accretion disks, of the kind envisaged here. In the case of SS433, which arguably is the closest prototype to the physical condition of a neutron star in a common envelope, the mass outflow in the jets is extremely strong (e.g. Watson et al. 1986). During common envelope evolution, a jet would allow for a strong feedback of energy from close to the neutron star surface into the stellar envelope, leading to a more rapid ejection of the envelope than would be possible from gas and gravitational drag alone. Since the fate of the neutron star depends on a sensitive balance between the rate at which it accretes and the rate at which energy is deposited into the common envelope (Brown 1995), this would improve the chances of neutron star survival both by lowering the accretion rate and by reducing the epoch of common envelope evolution. In general, jets must be avoided at all costs if the neutron star is to be able to accrete a large mass during inspiral. It is also possible that enough of the radiated neutrino energy could be absorbed at a larger radius to drive explosions, as discussed in the context of spherical accretion models by Fryer, Benz & Herant (1996).
Observationally, several binary pulsars are known whose properties would be consistent with the neutron star having survived a phase of common envelope evolution. Camilo et al. (1996) identify four pulsars which have relatively large companion masses (in excess of 0.45 $`M_{}`$), and which do not follow the eccentricity–orbital period relation expected for lower mass binary pulsars (Phinney 1992). These systems are likely to have undergone deep common envelope evolution (van den Heuvel 1994), although probably the companions were of rather lower mass (1–6 $`M_{}`$) than we have been discussing here. Nonetheless, what is striking is that these neutron stars appear to have not only survived, but are now observed to be rotating with spin periods that are rapid, yet significantly slower than pulsars believed to have been spun-up via disk accretion. This is consistent with the scenario argued here, in which common envelope evolution involves accretion rates that are vastly super-Eddington yet still insufficient to force collapse of the neutron star to a black hole. It is also consistent with the suggestion of Brown, Lee & Bethe (1999), who point out that the inferred masses of black holes in transient sources (which appear to be a large fraction of the mass of the immediate progenitor) imply that hypercritical accretion onto black holes in supernovae must be reasonably efficient. In supernovae, mass must be physically ejected from the system to avoid being accreted eventually, whereas in the common envelope case a modestly lowered accretion rate can suffice to enable the neutron star to survive until the envelope is lost.
PJA thanks Brad Hansen for many useful discussions, and Space Telescope Science Institute for their usual hospitality. ML acknowledges support from NASA Grant NAG5-6857.
# Detectability of the primordial origin of the gravitational wave background in the Universe
## 1 Introduction
Early Universe cosmology is reaching a stage where theories put forward for the generation of primordial fluctuations can be severely constrained by observations. It is already the case with present day observations and this will be even more so in the near future due in particular to the Cosmic Microwave Background (CMB) anisotropy measurements with unprecedented resolution by the satellites MAP (NASA) and PLANCK (ESA). At present, only inflationary scenarios seem capable to explain the existing bulk of data, in particular the acoustic (Doppler) peak in the CMB, and one hopes that the increasing amount of observations will finally lead us to the “right” inflationary model or at least restrict the remaining viable models to only a small number.
We would like here to deal with a generic aspect, one that is common to all inflationary models, namely the time coherence of the cosmological perturbations. All inflationary scenarios have in common an accelerated stage of expansion during which fluctuations are generated on super-horizon scales, i.e. with wavelength larger than the Hubble radius. The fluctuations responsible for the CMB fluctuations, whether temperature fluctuations or polarization, though they originate from vacuum quantum fluctuations, were for a long time on “super-horizon” scales and this is why they appear to us as classical fluctuations with random amplitude and fixed temporal phase. In other words, soon after the end of inflation, cosmological perturbations appear to consist of only the growing, or quasi-isotropic, modes with an excellent accuracy. Remarkably enough, this coherence has a very distinct observational signature resulting in periodic acoustic peaks in the CMB temperature anisotropy multipoles $`C_\mathrm{l}^\mathrm{S}`$ and also in the corresponding multipoles of the CMB polarization. Hence, the detection of these periodic peaks would be a dramatic confirmation of their primordial origin.
As well known, the generation of a gravitational wave (GW) background on a vast range of frequencies is also an important prediction of inflationary models (first quantitatively calculated in Starobinsky 1979), one that could constitute, if observed, a crucial experimental confirmation of these scenarios. In addition, what was said above concerning the time coherence of the fluctuations is equally valid for the primordial scalar fluctuations as well as for the primordial tensorial fluctuations, or primordial GW background. For them too, their primordial origin will uncover itself in the presence of a periodic structure in the multipole power spectrum which we call primordial peaks. Clearly, they are much more difficult to track than acoustic peaks produced by scalar (energy density) fluctuations. Note that these primordial peaks are periodic, with a periodicity (Polarski & Starobinsky 1996)
$$\mathrm{\Delta }l=\pi \left(\frac{\eta _0}{\eta _{\mathrm{rec}}}-1\right),$$
(1)
which is approximately half the spacing between primordial acoustic peaks produced by scalar fluctuations (due to the difference between the light velocity, which is relevant for (1), and the sound velocity in the baryon-photon plasma at recombination, which enters into the corresponding expression for the spacing between acoustic peaks). Note that, strictly speaking, Eq.(1) becomes exact for $`l\to \mathrm{\infty }`$ only. However (see Fig. 1), it turns out that Eq.(1) is already a good approximation for the spacing between the first and second peaks. In Eq.(1), $`\eta \equiv \int ^t\frac{dt^{\prime }}{a(t^{\prime })}`$, and $`\eta _0`$, resp. $`\eta _{\mathrm{rec}}`$, are evaluated today, resp. at recombination.
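For orientation, a minimal numerical sketch of Eq. (1) for a flat model with the parameters adopted later in this paper (and with an assumed standard radiation content and $`z_{\mathrm{rec}}\simeq 1100`$) is:

```python
import numpy as np
from scipy.integrate import quad

# Conformal times and the peak spacing of Eq. (1) for flat LambdaCDM.
h = 0.6
Om_l = 0.65
Om_r = 4.15e-5 / h**2            # photons + 3 massless neutrinos (assumed)
Om_m = 1.0 - Om_l - Om_r
a_rec = 1.0 / 1100.0             # assumed recombination redshift

def deta(a):                     # d(eta)/da in units of 1/H0
    return 1.0 / (a**2 * np.sqrt(Om_m / a**3 + Om_r / a**4 + Om_l))

eta_rec = quad(deta, 1e-8, a_rec)[0]   # lower cutoff contributes negligibly
eta_0 = quad(deta, 1e-8, 1.0)[0]
print(f"Delta l ~ {np.pi * (eta_0 / eta_rec - 1.0):.0f}")  # of order 160
```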
Of course, the detection of these peaks is much more complicated than the discovery of a long-wave GW background in the Universe through the B-mode polarization of the CMB, though such a discovery would represent a great achievement in itself (for its prospects see, e.g., Kamionkowsky & Kosowsky 1998). However, the significantly smaller effect which we consider in this paper \- the existence of multiple primordial peaks in the angular spectrum of the B-mode CMB polarization - is fundamental and remarkable enough to justify hard efforts to detect it for two reasons. The first reason, explained above, concerns the primordial origin of the GW background; the second one is related to the use of Eq. (1) in order to determine fundamental cosmological parameters.
The discovery of the (asymptotic) periodicity of the $`\mathrm{\Delta }T/T`$ peaks produced by a primordial GW background will immediately give us an unbiased value of one of the most important parameters: the ratio of the present conformal time to the recombination conformal time $`\eta _0/\eta _{\mathrm{rec}}`$. Furthermore, by combining this result with the periodicity scale of acoustic peaks in the CMB anisotropy and the E-mode of the CMB polarization at $`l>200`$ (which is much easier to measure) we can directly find the value of the sound velocity $`c_\mathrm{s}`$ in the cosmic photon-baryon plasma at the moment of recombination. This, in turn, leads to a new way of determining the present baryon density $`n_\mathrm{B}`$ which is free of ”cosmic confusion”.
Actually the observation of the peaks in the multipoles $`C_\mathrm{l}^T`$ due to the primordial GW is a hopeless experimental challenge with the presently existing technology. On the other hand, the observation of this coherence in a direct detection experiment of the primordial GW background is even worse: it would require a resolution in frequency $`\mathrm{\Delta }\nu \sim 10^{-18}`$ Hz (as briefly mentioned in Polarski & Starobinsky 1996, p.389), something that is clearly impossible to achieve (see also Allen, Flanagan & Papa 2000 for a recent careful investigation).
A better prospect for the detection of these peaks might perhaps be offered by the measurement of the CMB polarization as scheduled by PLANCK. We expect the CMB to be also polarized and important physical information could be extracted from it. In particular, the scalar fluctuations will not contribute to the so-called B-mode polarization (Kamionkowski et al. 1997a; Seljak & Zaldarriaga 1997), therefore the latter bears the imprint of the primordial GW only. Hence, CMB polarization measurements might enable us to show the presence of a GW background of primordial origin. It is the aim of this letter to investigate whether the sensitivity of PLANCK is sufficient for this purpose. We will do this using a concrete, viable model (Lesgourgues et al. 1999a, 1999b) in which the generated GW background can be fairly high, with $`C_{10}^{(T)}\sim C_{10}^{(S)}`$ (note that here, $`C_\mathrm{l}^{(T)}`$, resp. $`C_\mathrm{l}^{(S)}`$, stands for the temperature anisotropy multipoles produced by tensorial, resp. scalar, perturbations).
## 2 The model and the induced polarization
The primordial GW produced during the inflationary stage originate from vacuum fluctuations of the quantized tensorial metric perturbations. Each polarization state $`\lambda `$ – where $`\lambda =\times ,+`$, and the polarization tensor is normalized to $`e_{\mathrm{ij}}(𝐤)e^{\mathrm{ij}}(𝐤)=1`$ – has an amplitude $`h_\lambda `$ (in Fourier space) given by
$$h_\lambda =\sqrt{32\pi G}\varphi _\lambda $$
(2)
where $`\varphi _\lambda `$ corresponds to a real massless scalar field. The production of a GW background is a generic feature of all inflationary models.
Let us briefly describe the BSI (Broken Scale Invariant) inflationary model used here. The power spectrum of this model has a characteristic scale which is due to a rapid change in slope of the inflaton potential $`V(\phi )`$ from $`A_+>0`$ to $`A_{-}>0`$ (when $`\phi `$ decreases) in some neighbourhood $`\mathrm{\Delta }\phi `$ of $`\phi _0`$ (Starobinsky 1992). As a consequence, one of the two slow-roll conditions is violated and this is why the scalar perturbation spectrum $`k^3\mathrm{\Phi }^2(k)`$ is non-flat around the scale $`k_0=a(t_{\mathrm{k}_0})H_{\mathrm{k}_0}`$, which becomes larger than the Hubble radius when $`\phi (t_{\mathrm{k}_0})=\phi _0`$ ($`H\equiv \dot{a}/a`$ is the Hubble parameter). The spectrum can be basically represented as “step-like” while its shape is determined solely by the parameter $`p\equiv \frac{A_{-}}{A_+}`$ and is independent of the characteristic scale $`k_0`$. In particular, an inverted step is obtained for $`p<1`$. This model could nicely account for the possible appearance of a spike in the matter power spectrum (Einasto et al. 1997). We will assume that the inflaton potential satisfies the slow-roll conditions far from the point $`k_0`$ and consider a particular behaviour of the spectral indices $`n_\mathrm{T}(k)`$ and $`n_\mathrm{s}(k)`$. This model was thoroughly investigated previously (Lesgourgues et al. 1999b; Lesgourgues et al. 1999; Polarski 1999) and it was found to be in agreement with observations in the presence of a large cosmological constant ($`\mathrm{\Omega }_\mathrm{\Lambda }\simeq 0.7`$, as favoured by recent observations). We refer the interested reader to the literature for further technical details about our model and the possible observational hints in support of its BSI spectrum.
Also, our model allows a high fraction of the temperature anisotropy to originate from tensorial fluctuations, with $`C_{10}^{(T)}\sim C_{10}^{(S)}`$. This last property is significantly different from scale-free single-field slow-roll inflation, for which the height of the Doppler peak precludes a high contribution of the GW to $`\frac{\mathrm{\Delta }T}{T}`$ on large angular scales where the power spectrum gets normalised. It is this fact which is of interest to us here, as we may hope that the B-polarization is large enough for our purposes.
We introduce now the polarization tensor and the multipole power spectra needed besides $`C_\mathrm{l}^T`$, where
$$\langle a_{lm}^Ta_{l^{\prime }m^{\prime }}^{T*}\rangle \equiv C_\mathrm{l}^T\delta _{ll^{\prime }}\delta _{mm^{\prime }}$$
(3)
and the coefficients $`a_{\mathrm{lm}}^T`$ are defined through
$$\frac{\mathrm{\Delta }T}{T}=\sum _{l=0}^{\mathrm{\infty }}\sum _{m=-l}^{m=l}a_{lm}^TY_{lm}.$$
(4)
The symmetric, trace-free polarization tensor $`P_{\mathrm{ab}}`$ can be expanded as follows
$$\frac{P_{\mathrm{ab}}}{T}=\sum _{l=0}^{\mathrm{\infty }}\sum _{m=-l}^{m=l}\left(a_{lm}^EY_{lm,\mathrm{ab}}^E+a_{lm}^BY_{lm,\mathrm{ab}}^B\right),$$
(5)
where $`Y_{\mathrm{lm}}^{E,B}`$ are electric and magnetic type tensor spherical harmonics, with parity $`(-1)^l`$ and $`(-1)^{l+1}`$ respectively. A description of the CMB requires the three power spectra
$$C_\mathrm{l}^T\equiv \langle |a_{lm}^T|^2\rangle ,\quad C_\mathrm{l}^E\equiv \langle |a_{lm}^E|^2\rangle ,\quad C_\mathrm{l}^B\equiv \langle |a_{lm}^B|^2\rangle ,$$
(6)
together with the only non vanishing cross correlation function
$$C_\mathrm{l}^{TE}\equiv \langle a_{lm}^Ta_{lm}^{E*}\rangle .$$
(7)
Indeed, because of parity, the cross-correlation functions $`C_\mathrm{l}^{TB},C_\mathrm{l}^{EB}`$ vanish. Among the different types of primordial perturbations, only the primordial GW can produce B-mode polarization. Hence the latter offers a unique opportunity to probe the possible presence of a GW background and in particular its primordial origin.
## 3 Statistical analysis
We want first to investigate whether Planck has the required sensitivity in order to see possible small peaks in the power spectrum $`C_\mathrm{l}^B`$. Our method will make use of the Fisher information matrix $`F_{\mathrm{ij}}`$.
Using the CMB Boltzmann code CMBFAST (Seljak & Zaldarriaga 1996), we compute the derivative of the $`C_\mathrm{l}`$’s with respect to each parameter $`\theta _\mathrm{i}`$ on which the spectra may depend in a given model. The Fisher matrix (Jungman et al. 1996a, 1996b; Tegmark et al. 1997; see also Bond et al. 1997; Copeland et al. 1998; Eisenstein et al. 1998; Wang et al. 1999; Stompor & Efstathiou 1999) is then obtained by adding the derivatives, weighted by the inverse of the covariance matrix of the estimators of the polarized and unpolarized CMB power spectra for the PLANCK satellite mission, $`\mathrm{Cov}(C_\mathrm{l}^X,C_\mathrm{l}^Y)`$:
$$F_{\mathrm{ij}}=\sum _{l=2}^{+\mathrm{\infty }}\sum _{X,Y}\frac{\partial C_\mathrm{l}^X}{\partial \theta _\mathrm{i}}\mathrm{Cov}^{-1}(C_\mathrm{l}^X,C_\mathrm{l}^Y)\frac{\partial C_\mathrm{l}^Y}{\partial \theta _\mathrm{j}},$$
(8)
where $`\{X,Y\}\in \{T,E,B,TE\}`$ (Kamionkowski et al. 1997b; Zaldarriaga et al. 1997; Prunet et al. 1998a, 2000). The Fisher matrix $`F_{\mathrm{ij}}`$ basically measures the width and the shape of the likelihood function around the maximum likelihood point. Assuming that a fit to the PLANCK data yields a maximum likelihood for the model under consideration (for which the derivatives were computed), the $`1\sigma `$ error on the parameter $`\theta _\mathrm{i}`$, for any unbiased estimator of $`\theta _\mathrm{i}`$ and however precise the observations may be, satisfies
$$\mathrm{\Delta }\theta _\mathrm{i}\ge \sqrt{(F^{-1})_{\mathrm{ii}}},$$
(9)
if all the parameters are estimated from the data, and
$$\mathrm{\Delta }\theta _\mathrm{i}\ge F_{\mathrm{ii}}^{-\frac{1}{2}},$$
(10)
when all other parameters are known.
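A toy numerical illustration of the two Cramér-Rao bounds (9) and (10), with stand-in derivative spectra and a diagonal covariance in place of the full Planck block covariance, reads:

```python
import numpy as np

# Toy Fisher forecast: dCl[i, l] stands in for dC_l/dtheta_i, var[l] for
# the (here diagonal) covariance of the C_l estimators.
rng = np.random.default_rng(0)
n_par, n_l = 3, 500
dCl = rng.normal(size=(n_par, n_l))      # hypothetical derivative spectra
var = 1.0 + 0.1 * np.arange(n_l)

F = dCl @ np.diag(1.0 / var) @ dCl.T     # Eq. (8), diagonal case

marginalized = np.sqrt(np.diag(np.linalg.inv(F)))   # Eq. (9): others fitted
conditional = 1.0 / np.sqrt(np.diag(F))             # Eq. (10): others known
print(np.all(marginalized >= conditional))          # True: degeneracies can
# only degrade, never improve, the error on a single parameter
```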
Each multipole will be measured by Planck with unprecedented precision, of the order of 1%, thereby allowing for an accurate extraction of the cosmological parameters. Still, one should remember that the spectra constrain the parameters only in combination. Even though the power spectra $`C_\mathrm{l}^X`$ for some given parameter combination might be measured with very high precision, each parameter separately is usually constrained only at the percent level, due to the possible degeneracy of the spectra with respect to a change in the parameter combination. In computing the covariance matrix of the CMB power spectra, we accounted for the presence of foregrounds (both polarized and unpolarized) in the measurement of the CMB power spectra, using the method described in Bouchet et al. 1999 (see also Prunet et al. 1998a, 2000).
In order to use this approach we need to quantify the appearance of peaks with the help of an additional parameter $`\theta _\mathrm{i}\equiv s`$. For this purpose, we adopt the following strategy: we compare the $`C_\mathrm{l}^B`$ curve of our inflationary model, where peaks are present, with a smoothed version $`C_{\mathrm{l},\mathrm{sm}}^B`$ which contains no peaks anymore. Obviously, we can write
$$C_\mathrm{l}^B=C_{\mathrm{l},\mathrm{sm}}^B+s(C_\mathrm{l}^B-C_{\mathrm{l},\mathrm{sm}}^B).$$
(11)
Hence, the parameter $`s`$ enters the Fisher matrix through the quantity
$$\frac{\partial C_\mathrm{l}^B}{\partial s}=C_\mathrm{l}^B-C_{\mathrm{l},\mathrm{sm}}^B.$$
(12)
Note that $`s=1`$ corresponds to the original model which is assumed to be the correct one. We stress that it is perfectly self-consistent to smooth only the $`C_\mathrm{l}^B`$ spectrum since the possible appearance of peaks in the other spectra is due to the scalar perturbations only. This is well known for the temperature anisotropy, and it is also true for the E-mode polarization multipoles $`C_\mathrm{l}^E`$. In summary, what we really measure with the help of the parameter $`s`$ is the presence of a time-coherent GW background, in other words, a GW background which is of primordial origin.
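A schematic implementation of this test, with a toy oscillatory spectrum and noise level standing in for the model $`C_\mathrm{l}^B`$ and for the Planck covariance, might look as follows:

```python
import numpy as np

# Coherence test of Eqs. (11)-(12): F_ss from the peak residual of a
# smoothed B-mode spectrum. Cl_B and noise_var are placeholders.
l = np.arange(2, 400)
Cl_B = np.exp(-((l - 90) / 60.0) ** 2) * (1.0 + 0.3 * np.cos(2 * np.pi * l / 160.0))
noise_var = (0.05 * Cl_B.max()) ** 2 * np.ones(l.size)

kernel = np.ones(81) / 81.0                   # smooth over ~ half a peak spacing
Cl_sm = np.convolve(Cl_B, kernel, mode="same")

dCl_ds = Cl_B - Cl_sm                         # Eq. (12)
F_ss = np.sum(dCl_ds**2 / noise_var)          # one-parameter Fisher "matrix"
print(f"Delta s >= {F_ss ** -0.5:.2f}")       # detection requires < 1
```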
For completeness, we take also into account the additional information provided by the $`T,E`$ and $`TE`$ modes: we smooth the tensor contributions $`C_\mathrm{l}^{X,(T)}`$, $`X\in \{T,E,TE\}`$, and calculate
$$\frac{\partial C_\mathrm{l}^X}{\partial s}=C_\mathrm{l}^{X,(T)}-C_{\mathrm{l},\mathrm{sm}}^{X,(T)}.$$
(13)
We stress that in general, statistical separation of the tensor contribution from the scalar contribution requires prior knowledge about the underlying theory (which is available here by assumption). Even so, this will change $`F_{\mathrm{ss}}`$ only by a small amount, due to observational uncertainties in tensor-scalar separation, a drawback which does not affect the B mode.
We fix the parameters of our model to $`\mathrm{\Omega }_{\mathrm{tot}}=1,\mathrm{\Omega }_\mathrm{\Lambda }=0.65,\mathrm{\Omega }_\mathrm{b}=0.04,h=0.6,p=0.58,k_0=0.016h\mathrm{Mpc}^{-1},n_\mathrm{S}(k<k_0)=1,n_\mathrm{T}(k_0)=-0.125`$. For these parameters, Eq. (1) gives $`\mathrm{\Delta }l\approx 160`$ (assuming 3 kinds of massless or very light neutrinos). As shown in (Lesgourgues et al. 1999b), this choice is consistent with current constraints, despite a fairly high GW contribution to the CMB temperature anisotropy with $`C_{10}^{(T)}/C_{10}^{(S)}=0.85`$. We find that the $`1\sigma `$ error $`\mathrm{\Delta }s`$ on the parameter $`s`$ satisfies
$$\mathrm{\Delta }s\ge \sqrt{(F^{-1})_{\mathrm{ss}}}=2.68$$
(14)
if all other parameters are extracted from the same data as well, while essentially the same result is obtained
$$\mathrm{\Delta }s\ge F_{\mathrm{ss}}^{-\frac{1}{2}}=2.63$$
(15)
when all other parameters are known. This is not surprising since the error in the measurement of this parameter is dominated by the noise and the foregrounds and not by a possible degeneracy with the other parameters. Since in both cases $`\mathrm{\Delta }s>1`$, Planck clearly does not seem to have the level of sensitivity required in order to see the primordial peaks in the B-mode polarization, at least for our model. We recall however that our model admits a large GW background, in any case substantially larger than in usual single-field slow-roll inflationary models. Therefore, a negative result for this model is almost certain to imply, for the particular problem under consideration, rather gloomy prospects for most, if not all, viable inflationary models<sup>1</sup><sup>1</sup>1Also, in our model, it is possible to neglect the gravitational lensing contamination of the B mode (Zaldarriaga & Seljak 1998), in contrast with models with a low tensor contribution. Indeed, in our model, gravitational lensing generates a B-polarized signal that dominates the primordial gravitational wave signal for $`l>140`$. However, we checked with a specific Fisher matrix analysis that from the measurement of $`T,E,TE`$ modes alone, the $`C_\mathrm{l}^B`$ contamination can be subtracted with 4% accuracy, and therefore neglected up to $`l=350`$, while our result for $`F_{\mathrm{ss}}`$ depends mainly on multipoles $`C_\mathrm{l}^B`$ with $`150<l<350`$..
It is interesting to evaluate what sensitivity would be required for other future experiments. If we imagine an idealized experiment, with only one channel and no foreground contamination at all, we find that only a sensitivity ten times higher than that achieved by Planck’s best channel will allow a clear detection, with $`\mathrm{\Delta }s\simeq 0.1`$. The assumption of no foreground contamination is clearly an idealization if we compare the expected level of the dust polarized B-mode power spectrum (see for instance Prunet et al. 1998b) to the CMB spectrum shown in Fig. 1.
However, the level of contamination is very inhomogeneous on the sky, and one expects to find some locations where the contamination level by dust would be at least ten times smaller than the mean level computed for a galactic latitude $`b>20^{\circ }`$. Of course, the drawback of observing a smaller part of the sky is that it increases the sample variance. Indeed, in the no-foregrounds case, the sample variance part of the covariance of the estimator of a given B-mode multipole $`C_\ell ^B`$ is approximately given by
$$\mathrm{\Delta }C_\ell ^B/C_\ell ^B\simeq \sqrt{\frac{2}{f_{\mathrm{sky}}(2\ell +1)}}$$
(16)
where $`f_{\mathrm{sky}}`$ is the fraction of the sky covered by the experiment. However, since we are interested in multipoles $`\ell \simeq 250`$, a rather small region (typically $`400\mathrm{deg}^2`$) should be sufficient for this sample variance to be smaller than the noise. Thus a dedicated, long-time observation of a particularly clean region of the sky, like the Polatron experiment <sup>2</sup><sup>2</sup>2see http://astro.caltech.edu/~lgg/polatron/polatron.html, with possibly a poorer angular resolution than Polatron but with a significant gain in sensitivity, should be able to constrain the coherence parameter $`s`$ to a reasonable accuracy, especially if we take into account the expected progress in bolometer technology.
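The numbers quoted here follow directly from Eq. (16); a one-line check (with the binning remark as an assumption about the analysis) is:

```python
import numpy as np

# Per-multipole sample variance of C_l^B for a ~400 deg^2 survey at l ~ 250.
f_sky = 400.0 / 41253.0          # 41253 deg^2 over the full sky
l = 250
print(np.sqrt(2.0 / (f_sky * (2 * l + 1))))   # ~0.64 per multipole;
# binning over a band Delta_l reduces this by ~ 1/sqrt(Delta_l)
```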
In conclusion, it is not unreasonable to expect that in the upcoming decades, CMB polarization experiments, in addition to addressing the very existence of a cosmic gravitational wave background (which we think will already be settled by Planck), will also answer the fundamental question concerning the primordial origin of this background.
###### Acknowledgements.
A.S. was partially supported by the grant of the Russian Foundation for Basic Research No. 99-02-16224 and by the Russian Research Project ”Cosmomicrophysics”. This paper was finished during his stay at the Institute of Theoretical Physics, ETH, Zürich. J. Lesgourgues is supported by the European community TMR network grant ERBFMRXCT960090.
# Inhomogeneity of dusty crystals and plasma diagnostics
## I Introduction
Formation of dust crystals (DC) takes place in a vertical electric field of the sheath, the gravitational field and a horizontal electrical field. The external field, acting in vertical and horizontal traps, stabilizes the 3-dimensional DC of finite size and linear chains of $`d`$-ions (horizontal traps for confinement of one-dimensional DC are used in ). The pressure of the boundaries and the external field violate the translational invariance and lead to a dependence of the distances between nearest neighbors in the lattice of dust particles on the position of the particles (see Fig.1 and Fig.2). Therefore macroscopic inhomogeneity in a lattice is a new phenomenon not present in the usual infinite (very large) crystal.
Even in the approximation of central forces for the interparticle interaction between $`d`$-ions, DC possess a layered structure (the layered structure of usual atomic crystals, such as graphite, is connected with the anisotropy of the interparticle interaction).
The vertical and horizontal distances between nearest neighbours (lattice “constants” $`R_{\perp }`$ and $`R_{\parallel }`$) are in general different functions of position in different directions from the center of the crystal (center of inertia). Deformation of DC in the fields of the traps depends on its characteristics and on the plasma parameters. Therefore the electrostrictional response of $`d`$-ion systems to a static external disturbance can be used as a diagnostic tool for DC and the surrounding plasma. In particular the charge $`Q`$ of $`d`$-ions, the screening length $`R_D`$, the concentration of the small ions and the electric field in the sheath can be determined. In the present paper the possibility to use the inhomogeneity of DC for plasma diagnostics is considered theoretically.
Recently, dusty plasma diagnostics have appeared on the basis of investigations of the dispersion curves $`\omega (k)`$ for $`d`$-ion sound and of the properties of forced oscillations of linear $`d`$-ion chains in an electric field and under the action of laser pulses . The static diagnostic suggested in this paper is simpler, both in its theoretical description and in its experimental realization, than the dynamic sounding considered in .
For the description of a lattice configuration of $`N`$ $`d`$-ions in a state of deformation under the action of external gravitational and electric forces $`\overline{f_n}=-\mathrm{\nabla }V_n`$ and interparticle forces $`\overline{F_n}=-\mathrm{\nabla }U_n`$, we will use the balance equations. Here $`V_n`$ is the potential energy of the $`d`$-ion with number $`n`$, and $`U_n`$ is the potential energy of interaction between the $`d`$-ion with number $`n`$ and all other ones. We do not take into account the force connected with momentum transfer from the small ions to the $`d`$-ions. This force can very often be omitted, because in the case when it is essential, the $`d`$-ions can be found not only below the sheath, but also on top of it, which is not observed in the experiment discussed below.
We will use the simple nearest-neighbour approximation for the description of the interparticle interaction. This approximation apparently gives a good picture of the inhomogeneity of DC under the action of external forces for a screening length $`R_D\lesssim R_{\parallel },R_{\perp }`$. We will also neglect a possible dependence of the $`d`$-ion charge $`Q`$ on the location in the inhomogeneous DC (i.e., on the $`d`$-ion density). Therefore we assume $`Q=\mathrm{const}`$ in our considerations.
## II Equations of static equilibrium
For the case of an inhomogeneous three-dimensional DC we will use a simple quasi-one-dimensional model, in which the layered lattice with the real potential is replaced by a one-dimensional vertical chain of particles. The effective potential for this model can be calculated by integration of the interaction with the nearest layer with distributed charge $`\sigma =Q/S_0`$ ($`S_0`$ is the area per $`d`$-ion in the horizontal direction)
$$\langle U(r)\rangle _{xy}=\frac{2\pi }{S_0}\int _0^{\mathrm{\infty }}d\rho \,\rho \,U\left(\sqrt{z^2+\rho ^2}\right)$$
(1)
For the Debye-Hueckel interaction and simple hexagonal lattice the potential (1) has the form
$$U(z)=\left\langle \frac{Q^2}{r}e^{-\varkappa r}\right\rangle _{x,y}=U_0e^{-\varkappa z},\qquad \varkappa =\frac{1}{R_D},\quad U_0=2\pi \frac{Q\sigma }{\varkappa },\quad \sigma =\frac{2Q}{\sqrt{3}R_{\parallel }^2}$$
(2)
This model makes it possible to calculate the distances between the nearest slabs, $`R_n(z)`$, as a function of the height.
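A quick numerical check of this plane-averaging (with arbitrary illustrative values of $`Q`$, $`\varkappa `$ and the lattice constant) confirms Eq. (2):

```python
import numpy as np
from scipy.integrate import quad

# Averaging the Debye-Hueckel pair potential over a charged plane, Eq. (1),
# against the closed form U_0 * exp(-kappa * z) of Eq. (2).
Q, kappa, R_par = 1.0, 1.0, 1.0             # arbitrary units
S0 = np.sqrt(3.0) / 2.0 * R_par**2          # area per d-ion, hexagonal lattice
sigma = Q / S0
U0 = 2.0 * np.pi * Q * sigma / kappa

def U_avg(z):                               # U = Q^2 exp(-kappa r) / r
    f = lambda rho: rho * Q**2 * np.exp(-kappa * np.hypot(z, rho)) / np.hypot(z, rho)
    return 2.0 * np.pi / S0 * quad(f, 0.0, np.inf)[0]

for z in (0.5, 1.0, 2.0):
    print(U_avg(z), U0 * np.exp(-kappa * z))   # the two columns agree
```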
In the general case of pair interaction between the $`d`$-ions in the external electric and gravitational fields of the sheath the potential energy can be written in the form
$$U+V=\sum _{k=1}^{N-1}U_k+\sum _{k=1}^{N}V_k,\qquad U_k=U(R_k),\quad V_k=V(z_k),\quad R_k=z_{k+1}-z_k$$
(3)
Here we take into account only the interaction between neighbouring particles. The potential energy for a horizontal chain of $`N`$ interacting $`d`$-ions in the electric field of the trap has an analogous form and stabilizes this chain in the $`x`$-direction $`(z_k\to x_k)`$.
$$\{\begin{array}{c}U_k^{\prime }-U_{k+1}^{\prime }+V_{k+1}^{\prime }=0,\quad k=1,2,3,\dots ,N-2,\quad U_k^{\prime }=\frac{dU}{dR_k},\quad V_k^{\prime }=\frac{dV}{dz_k},\hfill \\ -U_1^{\prime }+V_1^{\prime }=0,\hfill \\ U_{N-1}^{\prime }+V_N^{\prime }=0.\hfill \end{array}$$
(4)
Summation of the left-hand sides of these equations leads to the obvious condition of zero total external force: $`\sum _{k=1}^{N}V_k^{\prime }=0`$.
For the stabilization of horizontal chains an external field in the form of a parabolic well in the chain direction has been used in .
$$V_k=\frac{1}{2}m\omega _0^2(x_k-X_0)^2,\qquad X_0=\frac{1}{N}\sum _{k=1}^{N}x_k$$
(5)
Here $`X_0`$ is the center of inertia of the chain and $`\omega _0`$ is a parameter which determines the shape of the well. According to , the vertical electric field in a sheath changes linearly with the height. This dependence is realized approximately in the regions not too close to the lower electrode and to the border of the presheath; the quadratic approximation for the potential $`\phi (z)`$ in the plasma layer is also used in for the analysis of the equations of motion of DC . Therefore in the case of a vertical potential well we use in eq.(4) the expansion
$$\begin{array}{c}V(z_k)=mgz_k+Q\phi _0+Q\phi _0^{\prime }(z_k-X_0)+\frac{1}{2}m\omega _0^2(z_k-X_0)^2,\hfill \\ \phi _0=\phi (X_0),\quad \phi _0^{\prime }=\phi ^{\prime }(X_0),\quad m\omega _0^2=Q\phi ^{\prime \prime }(X_0).\hfill \end{array}$$
(6)
The parabolic approximation for the vertical electric field is reasonable for the case of sufficiently thin DC . To estimate the maximal thickness $`\ell =z_N-z_1=2(X_0-z_1)`$ of DC for which this approximation is valid, let us consider $`\phi (z)=\phi (0)\mathrm{exp}(-z/R_D)`$ and use the condition
$`{\displaystyle \frac{1}{3}}\left|Q\phi _0^{\prime \prime \prime }\right|(X_0-z_1)^2={\displaystyle \frac{1}{3}}\left|Q\phi _0^{\prime }\right|\left({\displaystyle \frac{\ell }{2R_D}}\right)^2<Q\phi _0^{\prime \prime }(X_0-z_1)=\left|Q\phi _0^{\prime }\right|{\displaystyle \frac{\ell }{2R_D}}`$
Then the necessary inequality is $`\ell <6R_D`$, which is usually satisfied (see, for example, ).
$$mg+Q\phi ^{\prime }(X_0)=0.$$
(7)
This condition determines the position of the center of inertia for the system of levitated $`d`$-ions.
By use of the parabolic approximation (6) in the balance equations (4) and subtracting from each equation the previous one, we find
$$\{\begin{array}{c}2U_k^{\prime }-U_{k+1}^{\prime }-U_{k-1}^{\prime }+m\omega _0^2R_k=0,\quad k=2,3,\dots ,N-2,\hfill \\ 2U_1^{\prime }-U_2^{\prime }+m\omega _0^2R_1=0,\hfill \\ 2U_{N-1}^{\prime }-U_{N-2}^{\prime }+m\omega _0^2R_{N-1}=0.\hfill \end{array}$$
(8)
As follows from eq.(8) the intervals $`R_k`$ are symmetric with respect to the center:
$`R_1=R_{N-1},\ R_2=R_{N-2},\ \dots ,\ R_k=R_{N-k},\dots `$
## III Structure of DC with an attractive (for large distances) and with a purely repulsive potential
According to eqs.(4), (8), for isolated systems of $`d`$-ions ($`V_k^{\prime }=0`$, i.e. in the absence of external fields) there are two different possibilities.
If the pair interaction between $`d`$-ions is a nonmonotonic function and leads to repulsion at small distances and to attraction at large distances, then the solution of eq.(4) reads
$$U_1^{\prime }=U_2^{\prime }=\dots =U_{N-1}^{\prime }=0$$
(9)
This solution describes a homogeneous chain of $`N`$ $`d`$-ions with equal distances between nearest neighbours, $`R_1=R_2=\dots =R_{N-1}=R_0`$. The potential energy has a minimum for this state. In this case the weakly inhomogeneous configurations of $`d`$-ions with external force $`V_k^{\prime }\ne 0`$ can be described on the basis of small deformations $`|R_0-R_k|\ll R_0`$. Then we use the expansion
$$U(R_k)=U_0+\frac{m\mathrm{\Omega }^2}{2}(R_k-R_0)^2,\qquad m\mathrm{\Omega }^2=U^{\prime \prime }(R_0)$$
(10)
If the pair interaction $`U(R_k)`$ has a monotonic purely repulsive form, the $`d`$-ions of an isolated system are unstable and, according to (9) all $`R_k`$ are infinite. In this case stabilization of the system in a weak external field playing the role of a trap, leads also to a slightly inhomogeneous state, in which the deviations of the intervals from the average are small,
$$R_0=\frac{1}{N-1}\sum _{k=1}^{N-1}R_k,\qquad \left|R_0-R_k\right|\ll R_0$$
(11)
In this case the alternative quadratic expansion of the energy for $`dd`$ interactions has the form
$$\begin{array}{c}\sum _{k=1}^{N-1}U_k=(N-1)U_0+U_0^{\prime }\sum _{k=1}^{N-1}(R_k-R_0)+\frac{1}{2}m\mathrm{\Omega }^2\sum _{k=1}^{N-1}(R_k-R_0)^2=\hfill \\ (N-1)U(R_0)-(N-1)U_0^{\prime }R_0+U_0^{\prime }(z_N-z_1)+\frac{1}{2}m\mathrm{\Omega }^2\sum _{k=1}^{N-1}(R_k-R_0)^2,\hfill \\ U_0=U(R_0),\quad U_0^{\prime }=\frac{dU(R_0)}{dR_0},\quad m\mathrm{\Omega }^2=\frac{d^2U(R_0)}{dR_0^2},\quad \sum _{k=1}^{N-1}R_k=z_N-z_1.\hfill \end{array}$$
(12)
For a potential with a well, $`U^{\prime }(R_0)=0`$, the expansions (10) and (12) coincide; therefore small deformations $`s_k=R_0-R_k`$ of the system in an external field can then be described by the general equations of force balance:
$$\{\begin{array}{c}2\mathrm{cosh}t\,s_k-s_{k+1}-s_{k-1}-\frac{\omega _0^2}{\mathrm{\Omega }^2}R_0=0,\quad \mathrm{cosh}t=1+\frac{\omega _0^2}{2\mathrm{\Omega }^2},\quad k=2,3,\dots ,N-2,\hfill \\ 2\mathrm{cosh}t\,s_1-s_2-\frac{\omega _0^2}{\mathrm{\Omega }^2}R_0-\frac{U_0^{\prime }}{m\mathrm{\Omega }^2}=0,\hfill \\ 2\mathrm{cosh}t\,s_{N-1}-s_{N-2}-\frac{\omega _0^2}{\mathrm{\Omega }^2}R_0-\frac{U_0^{\prime }}{m\mathrm{\Omega }^2}=0.\hfill \end{array}$$
(13)
Here, for the purely repulsive interaction, $`R_0`$ is the average. For the case with attraction, $`U_0^{\prime }=0`$ and $`R_0`$ is the equilibrium distance in the isolated system of $`d`$-ions.
## IV Solutions and numerical results
A general solution of the equations in finite differences (13) can be obtained in the form
$$s_k=R_0-R_k=R_0+Ae^{kt}+Be^{-kt}.$$
(14)
Taking into account the symmetry of the system, $`s_k=s_{N-k}`$, the connection between the coefficients, $`B=Ae^{Nt}`$, can be found. The coefficient $`A`$ can be found from the boundary condition for $`k=1`$ (or for $`k=N-1`$). Finally, for the interval number $`k`$ and a purely repulsive potential we find
$$R_k=\left(R_0-\frac{U^{\prime }(R_0)}{m\mathrm{\Omega }^2}\right)\frac{\mathrm{cosh}\left(\frac{N}{2}-k\right)t}{\mathrm{cosh}\frac{Nt}{2}}=\left(R_0-\frac{U^{\prime }(R_0)}{m\mathrm{\Omega }^2}\right)\frac{C_{k-1}^1(\mathrm{cosh}t)+C_{N-k-1}^1(\mathrm{cosh}t)}{C_{N-1}^1(\mathrm{cosh}t)}$$
(15)
Here $`C_n^1(x)`$ are the Gegenbauer polynomials. For the case of interaction with attraction, $`U^{\prime }(R_0)=0`$, the intervals $`R_k`$ have the form:
$$R_k=R_0\frac{\mathrm{cosh}\left(\frac{N}{2}-k\right)t}{\mathrm{cosh}\frac{Nt}{2}}.$$
(16)
Therefore in the parabolic trap formed by the external forces, a chain of $`d`$-ions is compressed symmetrically w.r.t. the center of inertia, and the central regions more strongly than the ones outwards: $`R_1>R_2>\dots `$ For the resulting electrostrictional reduction of the length $`\ell `$ of the entire chain it follows from Eq.(16) that ($`R_0`$ is the equilibrium distance in a homogeneous chain)
$$\ell =\sum _{k=1}^{N-1}R_k=2R_0\frac{\mathrm{cosh}\frac{t}{2}\,\mathrm{sinh}\frac{(N-1)t}{2}}{\mathrm{sinh}t\,\mathrm{cosh}\frac{Nt}{2}}<(N-1)R_0$$
(17)
For sufficiently long ($`N\gg 1`$) horizontal chains and for (in the vertical direction) quasi-one-dimensional dusty crystals the profile distributions of charge density and mass, and thereby the “constants” of the elastic forces, can be obtained in the approximation of continuous media by use of eqs.(16), (17). The surface density of charge is proportional to the mass density and therefore there is balance of the external volume electric and gravitational forces in each point of a horizontal plane at fixed height. This means that even inhomogeneous planes (Fig.1) and horizontal chains (Fig.2), which are more dense in the center, are not suspended in the central part of the dusty system, where the density is higher. Enlargement of the density in the center of horizontal crystalline planes is observed in the experiments , but quantitative measurements are unknown to us. In parallel with oscillation and wave measurements in horizontal chains, the equilibrium positions of $`d`$-ions have also been determined in the electric field of a horizontal trap . According to the data of these papers, for the case $`N=12`$ the ratios of the intervals between neighbouring $`d`$-ions in the direction of the center are $`R_1:R_2:\dots :R_6=1.44:1.22:1.11:1.05:1.01:1.00`$. These results are reasonably described by our formula (15), in which for $`t=0.18`$ (and correspondingly $`\omega _0\approx 0.2\mathrm{\Omega }`$) these ratios are $`1.43:1.27:1.15:1.07:1.02:1.00`$.
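This agreement can be reproduced directly; the sketch below solves the linear system (13) numerically and compares it with the closed form (16) for the $`N=12`$ chain:

```python
import numpy as np

# Interval profile of an N = 12 chain in a parabolic trap, comparing a
# direct solve of the tridiagonal system (13) (with U'(R_0) = 0) against
# the closed form (16); t = 0.18 as quoted in the text.
N, t, R0 = 12, 0.18, 1.0
w2 = 2.0 * (np.cosh(t) - 1.0)                # omega_0^2 / Omega^2

n = N - 1
A = (2.0 * np.cosh(t) * np.eye(n)
     - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1))
s = np.linalg.solve(A, w2 * R0 * np.ones(n))
R_num = R0 - s

k = np.arange(1, N)
R_closed = R0 * np.cosh((N / 2.0 - k) * t) / np.cosh(N * t / 2.0)  # Eq. (16)
print(np.allclose(R_num, R_closed))               # True
print(np.round(R_closed[:6] / R_closed[5], 2))    # 1.43 1.27 1.15 1.07 1.02 1.0
```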
The experimental data for the other half of the chain, $`R_6:R_7:\dots :R_{11}=1:1.01:1.01:1.08:1.20:1.32`$, agree less with our theory for the (with respect to the center) symmetric chain and they are essentially different from the experimental data for the first half of the chain. We think that this asymmetry is a consequence of the asymmetric and not exactly parabolic, $`V(x)\ne \frac{1}{2}m\omega _0^2x^2`$, shape of the external electric field (here $`x`$ is the distance from the center of the chain). According to , $`m\omega _0^2=2.55\times 10^{-11}`$ kg·s<sup>-2</sup>, $`m=6.73\times 10^{-13}`$ kg. Using the data on the equilibrium configuration $`R_n`$ and the parameter of the trap $`\omega _0=6.15`$ s<sup>-1</sup>, we find the important characteristic of the $`dd`$ interaction $`\mathrm{\Omega }=5\omega _0=30.7`$ s<sup>-1</sup>.
For the chain with $`N=4`$ the experimental data, according to , give $`\omega _0=6.25`$ s<sup>-1</sup> and $`R_1=1989`$ $`\mu `$m, $`R_2=1910`$ $`\mu `$m, $`R_3=2031`$ $`\mu `$m. The average interval $`\overline{R}=1960`$ $`\mu `$m for the case $`N=4`$ is twice as large as $`\overline{R}=10^3`$ $`\mu `$m for the chain with $`N=12`$. This is probably connected with the higher charges (almost three times) of the $`d`$-ions in and therefore with the stronger repulsion between them at the partially same compressing external field of the horizontal trap. For the conditions of the experiments the asymmetry of the external field, connected with the nonquadratic form of the potential $`V(x)`$, is still stronger than in , and this was the reason to use for the bordering intervals the expression $`\overline{R}_1=(R_1+R_3)/2=2010`$ $`\mu `$m. Then according to (10) we have $`\overline{R}_1/R_2=1+\omega _0^2/2\mathrm{\Omega }^2=1.05`$ and $`\mathrm{\Omega }=3.16\omega _0=19.7`$ s<sup>-1</sup>.
It is necessary to emphasize that all the results for the case $`N=4`$ and $`N=12`$ are applicable for both cases: purely repulsive $`dd`$ interaction and $`dd`$ interaction with an attractive part, because, as follows from eqs.(15) and (16), the ratios of intervals $`R_k`$ are the same in these cases.
The known experimental data on equilibrium intervals $`R_{\perp }`$ between neighbouring ions in vertical traps concern only dust crystals with two horizontal crystalline planes ($`N=2`$) and have been obtained in . A dust crystal with $`N=3`$ has been investigated in , but the thickness of the crystal was not measured.
According to , the ratio $`(R_0-R_{\perp })/R_0=0.2`$ and does not depend on the ion’s mass.
In , experiments are reported with dust crystals formed by $`d`$-ions with radii 4.7 $`\mu `$m and 2.4 $`\mu `$m, which leads to a difference of the gravitational force proportional to $`m_1/m_2\simeq 8`$. The position of the center of inertia $`X_0`$ of the dust crystal must be considerably changed in this case: a lighter crystal will shift over a distance $`\sim R_D`$, as follows from Eq.(7). A measurement of this effect was not reported in .
According to (16)
$$1-\frac{R_{\perp }}{R_0}=\frac{\omega _0^2}{\omega _0^2+2\mathrm{\Omega }^2}=0.2$$
(18)
and therefore $`\mathrm{\Omega }=\sqrt{2}\omega _0`$. In contrast with the horizontal traps considered above, the parameter $`\omega _0^2=\frac{1}{m}V_0^{\prime \prime }`$ for the vertical electric field in a sheath is here unknown. It can be determined only on the basis of knowledge of the interaction potential between the $`d`$-ions, via the parameter $`\mathrm{\Omega }^2=\frac{1}{m}U^{\prime \prime }(R_{\perp })`$.
Purely repulsive interaction (2) leads to another result. For $`N=2`$ the exact system of balance eqs.(4) has the form
$$\{\begin{array}{c}-U^{\prime }(R_{\perp })+V^{\prime }(z_1)=0,\quad R_{\perp }=z_2-z_1,\quad X_0=\frac{z_1+z_2}{2},\hfill \\ U^{\prime }(R_{\perp })+V^{\prime }(z_2)=0,\quad z_{2,1}=X_0\pm \frac{1}{2}R_{\perp }.\hfill \end{array}$$
(19)
In this case a more general model for the external potential (6) than the linear one can be used for the description of the electric field in a sheath. Let us take
$$E(X_0\pm \frac{1}{2}R_{\perp })=E(X_0)e^{\mp \frac{1}{2}\varkappa R_{\perp }}.$$
(20)
For the position of the center of inertia we have
$$mg=QE(X_0)\mathrm{cosh}\frac{1}{2}\varkappa R_{\perp },$$
(21)
and according to (19) with $`U(R_{\perp })`$ taken from (2) we find
$$V^{\prime }(z_2)-V^{\prime }(z_1)=2QE(X_0)\mathrm{sinh}\frac{1}{2}\varkappa R_{\perp }=2\frac{4\pi Q^2}{\sqrt{3}R_{\parallel }^2}e^{-\varkappa R_{\perp }}.$$
(22)
By eliminating $`E(X_0)`$ from these equations we finally find
$$\mathrm{tanh}\frac{\varkappa R_{\perp }}{2}=\alpha e^{-\varkappa R_{\perp }},\alpha =\frac{4\pi Q^2}{\sqrt{3}R_{\parallel }^2mg}.$$
(23)
Using the experimental data for $`m`$, $`Q`$, the in-plane interval $`R_{\parallel }=450\mu `$m and the interplane distance $`R_{\perp }=360`$ $`\mu `$m, we estimate the Debye radius $`R_D=973`$ $`\mu `$m. In the case of a dust crystal with lower $`m`$, $`Q`$ and $`R_{\parallel }=350\mu `$m, $`R_{\perp }=280\mu `$m (see also ) we find $`R_D=933\mu `$m. Therefore the Debye length $`R_D`$ is approximately of the order of the interval between nearest $`d`$-ions, which is in agreement with the estimates of .
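For readers who wish to reproduce such an estimate, a minimal numerical sketch is given below; the mass and charge values are placeholders (not the values of the cited experiments), and the SI form of $`\alpha `$ includes the Coulomb factor $`1/4\pi ϵ_0`$:

```python
import math
from scipy.optimize import brentq

# Placeholder input values; m and Q must be taken from the actual experiment.
m = 6.7e-13          # particle mass [kg] (assumed)
Q = 7.2e3 * 1.6e-19  # particle charge [C] (assumed)
g = 9.81             # gravitational acceleration [m s^-2]
eps0 = 8.854e-12     # vacuum permittivity [F m^-1]
R_par = 450e-6       # in-plane interval R_parallel [m]
R_perp = 360e-6      # interplane distance R_perp [m]

# alpha = 4*pi*Q^2/(sqrt(3)*R_par^2*m*g) in Gaussian units, i.e.
# Q^2/(eps0*sqrt(3)*R_par^2*m*g) in SI units.
alpha = Q**2 / (eps0 * math.sqrt(3.0) * R_par**2 * m * g)

# Eq. (23): tanh(kappa*R_perp/2) = alpha*exp(-kappa*R_perp); solve for kappa.
f = lambda kappa: math.tanh(0.5 * kappa * R_perp) - alpha * math.exp(-kappa * R_perp)
kappa = brentq(f, 1.0, 1.0e6)
print("Debye radius R_D = 1/kappa =", 1.0 / kappa, "m")
```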
For the electric field in a sheath we find from eq. (21) $`E(X_0)=2.84\times 10^3`$ V$`\cdot `$m<sup>-1</sup> and $`E(X_0+\mathrm{\Delta })=1.99\times 10^3`$ V$`\cdot `$m<sup>-1</sup> for the cases of $`d`$-ions with radii 4.7 $`\mu `$m and 2.4 $`\mu `$m.
Let us neglect small changes of Debye radius $`R_D`$ and take
$$\frac{E(X_0)}{E(X_0+\mathrm{\Delta })}=\mathrm{exp}\left(\frac{\mathrm{\Delta }}{R_D}\right)=1.42,R_D=950\mu \text{m}.$$
(24)
Then we obtain for the shift upwards $`\mathrm{\Delta }`$ of the lighter crystal
$$\mathrm{\Delta }=0.35R_D=332\mu \text{m}.$$
(25)
The motion of a dust crystal inside the sheath can be observed in various microgravity experiments (see some discussion, for example, in ).
We suggest here some experiments in which the properties of DC can be studied under conditions of microgravity and even changing gravity.
One of these experiments (under terrestrial conditions) can be performed in a horizontal discharge, where in the horizontal direction only the electric force and the momentum transfer from the small ions to the dust particles act. In such an experiment the latter force can be essential, in contrast to the conditions considered in this paper.
The second group of experiments is connected with the effective gravity created in space stations by rotation of the dusty plasma. If $`h`$ and $`g_{\text{eff}}`$ are, respectively, the distance from the axis of rotation to the negative electrode and the acceleration of the center of inertia of the dusty system, the obvious connection is given by
$$g_{\text{eff}}=\omega ^2(h-X_0).$$
(26)
For $`g_{\text{eff}}=g`$ and $`h=1`$ m (rotation of the container inside the space station or rocket) or $`h=10`$ m (rotation of the space station as a whole) we find $`\omega =3`$ s<sup>-1</sup> and $`\omega =1`$ s<sup>-1</sup> respectively, which correspond to a weakly inhomogeneous ($`h\gg R_k`$) artificial gravitational field, where the results obtained above are applicable. Measuring the dependence $`X_0=X_0(\omega )`$ would permit one to investigate the profile of the electric field in a sheath and other characteristics of the dusty system and plasma. Of special interest is the investigation of the deformation of DC in an essentially inhomogeneous rotation field ($`h-X_0\sim 0.05`$ m and $`\omega \sim 15`$ s<sup>-1</sup>). A detailed consideration of such an experiment will be given in a separate paper.
In the case of a dust crystal with three horizontal crystalline planes the static equilibrium is described by the system of eqs.(4) with $`N=3`$. In the approximation for the electric fields used before, the coordinate $`z_2`$ of the middle $`d`$-ions and the value $`E_0`$ can be eliminated on the basis of the balance of the external forces:
$$3mg=QE_0\sum _{n=1}^{3}e^{-\varkappa z_n}=QE_0e^{-\varkappa z_2}(1+e^{\varkappa R_1}+e^{-\varkappa R_2}).$$
(27)
For the system of equations determining the vertical intervals $`R_1`$ and $`R_2`$, we obtain
$$\begin{array}{c}\frac{\alpha }{3}(e^{-\varkappa R_1}-2e^{-\varkappa R_2})+\frac{1-e^{-\varkappa R_2}}{1+e^{\varkappa R_1}+e^{-\varkappa R_2}}=0,\hfill \\ \frac{\alpha }{3}(e^{-\varkappa R_2}-2e^{-\varkappa R_1})+\frac{e^{\varkappa R_1}-1}{1+e^{\varkappa R_1}+e^{-\varkappa R_2}}=0.\hfill \end{array}$$
(28)
Even for the highest pressure of neutrals in , $`p=300`$ mTorr ($`Q=7.2\times 10^3`$e, $`R_{\parallel }=0.28`$ mm, $`\varkappa R_{\parallel }=0.61`$), the parameter $`\alpha /3=0.0531`$ is small. Assuming $`R_1=R_2=R_{\perp }`$ and $`\varkappa R_{\perp }\ll 1`$, it follows from eq.(28) that
$$\varkappa R_{\perp }\simeq \frac{\alpha }{1+\alpha }=0.14.$$
(29)
Let us emphasize that the vertical compression is symmetric $`(R_1=R_2)`$ with respect to the central plane only for $`\varkappa R_{\perp }\ll 1`$. In contrast to the approximate equations (8) for the parabolic wells, the exact equations (28) are not symmetric under the interchange $`R_1\leftrightarrow R_2`$.
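As a numerical cross-check of Eq. (29), the system (28) can be solved directly for the dimensionless intervals $`\varkappa R_1`$ and $`\varkappa R_2`$; a minimal sketch with the parameter value quoted above:

```python
import math
from scipy.optimize import fsolve

alpha = 3.0 * 0.0531   # alpha/3 = 0.0531 for p = 300 mTorr (value quoted above)

def balance(x):
    """Residuals of the two balance equations (28), x = (kappa*R1, kappa*R2)."""
    r1, r2 = x
    S = 1.0 + math.exp(r1) + math.exp(-r2)
    f1 = (alpha / 3.0) * (math.exp(-r1) - 2.0 * math.exp(-r2)) + (1.0 - math.exp(-r2)) / S
    f2 = (alpha / 3.0) * (math.exp(-r2) - 2.0 * math.exp(-r1)) + (math.exp(r1) - 1.0) / S
    return (f1, f2)

r1, r2 = fsolve(balance, (0.1, 0.1))
# Both intervals come out close to, but not exactly at, alpha/(1+alpha) ~ 0.14,
# illustrating the weak asymmetry of the exact equations.
print(r1, r2, alpha / (1.0 + alpha))
```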
From eqs.(27)-(29), with the Debye radius given in , it follows that the vertical compression is important:
$$\frac{R_{\parallel }-R_{\perp }}{R_{\parallel }}=0.77.$$
(30)
In the framework of the quadratic approximation for the potential energy of the system with $`N=3`$ we find according to eqs.(15)-(16)
$$\frac{R_{\parallel }-R_{\perp }}{R_{\parallel }}=\frac{\omega _0^2}{\omega _0^2+\mathrm{\Omega }^2}.$$
(31)
Unfortunately the vertical interval $`R_{\perp }`$ has not been measured in . The experimental data obtained in are not sufficient to decide, either for a purely repulsive interaction or for an interaction with an attractive part, which variant is preferable: the quadratic model or the more exact description (27)-(28).
## V Conclusions.
The method of dusty plasma diagnostics discussed above, based on an analysis of the inhomogeneity of the linear structures of $`d`$-ions, seems very attractive. In contrast to the situation in the usual sound method, the $`d`$-ions of a small dust crystal or a linear dust chain have additional degrees of freedom. This makes it possible to extract additional information from the static response (change of the equilibrium distances between the $`d`$-ions) or the dynamic response (oscillations and waves in inhomogeneous structures). Sounding by small clusters of $`d`$-ions cannot change the plasma parameters essentially (although some distortion of the micro-field in the plasma can be stimulated by the traps which stabilize the $`d`$-clusters). The advantage of static diagnostics is the simplicity of the measurements of the inhomogeneous structure and the simple connection with the parameters of the interaction between $`d`$-ions, their shielding and the characteristics of the rf plasma. The precise theoretical consideration of the dynamical experiments , which are based on the excitation of the eigenmodes in linear chains and dust crystals, seems a more complicated problem.
We would like to stress that the most general consideration of the equilibrium inhomogeneous configurations of dusty systems can be based on translationally non-invariant solutions of the connected system of kinetic equations for plasmas and Poisson’s equation, where the separation between an external field and $`dd`$ interaction is absent. The equilibrium positions for the $`d`$-ions can be found as the points of space where the self-consistent electric field is in balance with gravity. However, this program is too complicated and, as we showed, not necessary for a reasonable theoretical description of the existing experiments.
We are grateful to Dr. A.Melzer for fruitful discussions and private communications about the experimental results. We also would like to thank Dr. H.Thomas and Dr. J.Goree for private communications connected with the papers .
This work has been performed with the support of INTAS grant N 96-0617.
# Voronoi-Delaunay analysis of normal modes in a simple model glass.
## I Introduction
The thermodynamic properties of glasses at low temperatures differ from those of the corresponding crystals . At low temperatures the specific heat is strongly enhanced compared to the Debye contribution stemming from the sound waves. The excitations underlying this enhancement have been shown to be two-level systems below $`T\sim 1`$ K and nearly harmonic vibrations above. The vibrational density of states, $`Z(\nu )`$, plotted as $`Z(\nu )/\nu ^2`$, has a maximum, typically near 1 THz, the boson peak.
This low temperature / low frequency behavior can be described by the soft potential model . In this model one assumes that one common type of structural unit is responsible for the excess excitations. One introduces an effective potential describing the motion of this unit. Depending on the parameters this potential is a single well or a double well. In the first case it describes a low frequency localized vibration and in the second tunneling through the barrier (two level systems) or relaxation over the barrier. For low energies one can give a general form for the distribution of the parameters describing the effective potentials. Fitting this model to the experimental data, one finds that 20–100 atoms or molecular units move collectively in the tunneling and in localized vibrations. It should be emphasized that the concept of low frequency localized vibrations is an idealization. These modes will always interact with the sound waves of similar frequency and, therefore, also among each other. This delocalises the modes and they are only quasi-local or resonant. Due to level repulsion, for sufficiently high densities of these modes, the interaction will change their density of states from $`Z(\nu )\propto \nu ^4`$ to $`Z(\nu )\propto \nu `$ thus creating the boson peak. Such a model does, however, not say anything about the physical nature of the localized modes or their origin in different types of glass.
The problem of local dynamics in the amorphous state is closely connected with the problem of the so-called medium-range order in glasses . Recently it has been shown that a computer model of amorphous argon has a heterogeneous structure containing regions of more “perfect” or “imperfect” atomic arrangements on a nanometer scale. In the regions of perfect structure the elementary packings of four neighboring atoms (the Delaunay simplices) are close to either regular tetrahedra or quart-octahedra , i.e. quarters of regular octahedra. In the regions of imperfect structure the local configurations of the neighboring atoms differ markedly from these ideal shapes. A partial spectrum of the vibrational states of the atoms in the regions of more “imperfect” structure displays an excess of low-frequency modes.
Quasi-localized low-frequency vibrations have been observed in computer simulations of the soft sphere glass (SSG) and of numerous other materials, such as e.g. SiO<sub>2</sub> , Se in Ni-Zr and Pd-Si , in amorphous ice and in amorphous and quasi-crystalline Al-Zn-Mg . It was shown that these modes are centered at atoms whose structural surrounding differs substantially from the average. It has been established that the directions of the eigenvectors of soft vibrations strongly correlate with those of the relaxation jumps at low temperatures.
One hypothesis on the origin of the soft mode is that the most active atoms oscillate between neighboring minima of the potential energy formed by a cage of surrounding atoms. These minima correspond to some more “perfect” local arrangements of the atoms. The coupling to the rest of the material changes this double well system to a soft single well one. One example for such a situation is the interstitial atom in an fcc metal. A medium sized interstitial occupies the octahedral site. Increasing the size of the interstitial atom, the octahedral site becomes unstable and the interstitial moves to an off-center position. The impending instability is indicated by low lying resonance vibrations . The instability in this example is caused by a local compression which causes the simultaneous occurrence of high frequency localized vibrations. In the glass the modes are more extended, typically string-like groups of some twenty atoms . Instead of the single interstitial atom one has to take a group of atoms, and due to the lacking symmetry the energy minima will be shifted relative to each other. Keeping this in mind, the underlying mechanism can still be valid. The simultaneous occurrence of low and high frequency localized modes centered on one atom has indeed been observed .
In the present paper we want to verify and concretize this notion for the SSG. For this purpose we combine the harmonic analysis of Ref. with the Voronoi-Delaunay geometrical description of the local structure used in Ref. . First we shift the atoms of the model along the eigenvector of a low frequency quasi-localized normal mode and observe the changes in the local atomic arrangements caused by the shifting. This allows us to visualize the specific transformations of the local structure which accompany the movement of atoms in the soft vibrations. In the next step we calculate the atomic perfectness weighted with the squared amplitudes of the vibrational modes. This quantity varies only weakly with frequency. Considering that the vibrations are connected with changes in the geometry, we introduce a “structural dynamical matrix”. We will show in the following that there is a strong correlation between the “structural eigenvectors” and their vibrational counterparts. This correlation divides the vibrations, as regards structure changes, into separate classes: longitudinal and transverse extended, high frequency localized and low frequency quasi-localized modes.
## II The soft sphere glass
We use 55 glassy configurations of 500 atoms each, interacting via a soft sphere pair potential
$$u(r)=ϵ\left(\frac{\sigma }{r}\right)^6+A\left(\frac{r}{\sigma }\right)^4+B.$$
(1)
To simplify the simulation the potential is cut off at $`r/\sigma =3.0`$ and shifted by a polynomial with $`A=2.54\times 10^{-5}ϵ`$ and $`B=-3.43\times 10^{-3}ϵ`$, chosen such that both $`u`$ and its first derivative vanish at the cutoff. The calculations are done with a fixed atomic density, $`\rho \sigma ^3=1`$, and periodic boundary conditions. The configurations were obtained by a quench from the liquid to $`T=0`$ K. From the pair correlation one finds a nearest neighbor distance of around $`1.1\sigma `$. For more details see Ref. .
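As a minimal sketch (in reduced units $`ϵ=\sigma =1`$), the shifted potential and its smooth cutoff can be checked directly:

```python
import numpy as np

EPS, SIGMA = 1.0, 1.0                 # reduced units
A, B = 2.54e-5 * EPS, -3.43e-3 * EPS  # shift polynomial coefficients
RCUT = 3.0 * SIGMA

def u(r):
    """Soft-sphere pair potential of Eq. (1), shifted to zero at the cutoff."""
    return np.where(r < RCUT,
                    EPS * (SIGMA / r) ** 6 + A * (r / SIGMA) ** 4 + B,
                    0.0)

print(u(np.array([1.1, 2.9999])))  # value near the nearest-neighbor distance; ~0 at the cutoff
```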
The inverse sixth-power potential is a well-studied theoretical model that mimics many of the structural and thermodynamic properties of bcc forming melts, including the existence, in its bcc crystal form, of very soft shear modes. In the glassy structure one finds a boson peak with a maximum near $`\nu =0.1(ϵ/m\sigma ^2)^{1/2}`$ extending to about $`\nu =0.4(ϵ/m\sigma ^2)^{1/2}`$. The enhancement of the vibrational density of states over the Debye value is by a factor of 2.5.
As before the frequencies and eigenvectors of normal vibrations are calculated by the diagonalisation of the force constant matrix. Imaginary frequencies are absent in the spectrum because the system is in an absolute local minimum of the potential energy. For the given number of atoms the minimal $`q`$-value for sound waves is $`q_{\mathrm{min}}=0.79\sigma ^{-1}`$ giving minimal frequencies of 0.18 and 0.62 $`(ϵ/m\sigma ^2)^{1/2}`$ for the transverse and longitudinal sound waves, respectively. Resonant modes with frequencies well below 0.18 $`(ϵ/m\sigma ^2)^{1/2}`$ will, therefore, be seen as low frequency localized modes. This is reflected in the participation ratios given in Ref. . One finds proper localized modes at frequencies $`\nu >2(ϵ/m\sigma ^2)^{1/2}`$ and (quasi-)localized low frequency modes with $`\nu <0.2(ϵ/m\sigma ^2)^{1/2}`$. The great majority of modes ($`0.2<\nu <2`$) extends over the system. These latter modes have been called diffusons due to their non-propagating character. Nevertheless, for the SSG as for other systems it is possible to extract via the dynamic structure factor some very broad “phonon dispersions”.
The SSG was used in extensive studies of the influence of quench rate on the glass structure. In these studies the Voronoi method was used to identify pentagonal rings which can be used as signature of icosahedral packing.
## III Voronoi-Delaunay description of local structure.
By definition, the Voronoi polyhedron (VP) of an atom is that region of space which is closer to the given atom than to any other atom of the system. A dual system spanning space is formed by the Delaunay simplices (DS). These are tetrahedra formed by four atoms which lie on the surface of a sphere which does not contain any other atom. Both VP and DS fill the space of the system without gaps and overlaps. In our calculations we do a Voronoi-Delaunay tessellation of the glass configurations by the algorithm described in Ref. .
It was found earlier that two main types of DS are predominant in mono-atomic glasses namely DS similar to ideal tetrahedra and DS resembling a quarter of a regular octahedron (quart-octahedron). Following Ref. we introduce as quantitative measure of tetragonality of a DS
$$T=\sum _{i<j}\frac{(l_i-l_j)^2}{15\overline{l}^2}$$
(2)
where $`i`$ and $`j`$ designate the edges of the simplex, and $`\overline{l}`$ is the average edge-length. This measure was constructed to be zero for an ideal tetrahedron and to increase with distortion. For computational reasons we slightly modify the previous measure of octagonality using:
$$O=\left\{\sum _{m=1}^{6}g_mO_m^{-1}\right\}^{-1}$$
(3)
where
$$g_m=\frac{e^{3\delta _m/\sigma }}{\sum _{i=1}^6e^{3\delta _i/\sigma }},\delta _m=\frac{l_m-\overline{l}}{\overline{l}},\sigma =\left[\frac{1}{6}\sum _{m=1}^{6}\delta _m^2\right]^{\frac{1}{2}},$$
and
$$O_m=\sum _{i<j;\,i,j\ne m}\frac{(l_i-l_j)^2}{10\overline{l}^2}+\sum _{i\ne m}\frac{(l_i-l_m/\sqrt{2})^2}{5\overline{l}^2}$$
(4)
In a perfect quart-octahedral DS one edge is $`\sqrt{2}`$ times larger than the other edges. In the previously used measure , Eq. (4), it was originally assumed that the $`m`$-th edge is the longest. The modified expression (3) weights the six possible values $`O_m`$ in such a way that the smallest one dominates. The octagonality thus tends to zero when the DS is close to the perfect quart-octahedron. This weighting allows us to avoid the use of logical functions for the selection of the maximal edge and guarantees differentiability, which is essential for our investigation. For the relevant low values of $`O`$, i.e. simplices close to the quart-octahedral shape, our expression reproduces the values of the original definition.
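A minimal sketch of the two shape measures for a single Delaunay simplex, given its six edge lengths (the edge ordering is immaterial for Eq. (2); for Eqs. (3)-(4) each edge is tried in turn as the candidate long edge):

```python
import numpy as np
from itertools import combinations

def tetrahedricity(l):
    """T of Eq. (2): zero for a regular tetrahedron."""
    l = np.asarray(l, dtype=float)            # six edge lengths of the simplex
    lbar = l.mean()
    return sum((a - b) ** 2 for a, b in combinations(l, 2)) / (15.0 * lbar**2)

def octagonality(l):
    """O of Eqs. (3)-(4): smooth minimum over the six candidate long edges."""
    l = np.asarray(l, dtype=float)
    lbar = l.mean()
    delta = (l - lbar) / lbar
    sigma = np.sqrt(np.mean(delta**2))
    g = np.exp(3.0 * delta / sigma)
    g /= g.sum()                               # weights g_m favour the longest edge
    O_m = np.empty(6)
    for m in range(6):
        rest = np.delete(l, m)
        O_m[m] = (sum((a - b) ** 2 for a, b in combinations(rest, 2)) / (10.0 * lbar**2)
                  + np.sum((rest - l[m] / np.sqrt(2.0)) ** 2) / (5.0 * lbar**2))
    return 1.0 / np.sum(g / O_m)               # harmonic, g_m-weighted combination

print(tetrahedricity([1.0] * 6))                       # 0 for the ideal tetrahedron
print(octagonality([1.42, 1.0, 1.0, 1.0, 1.0, 1.0]))   # ~0 near the quart-octahedron
```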
The tetrahedral and quart-octahedral DS can be unified in one class of “perfect”, or “ideal” simplices . We measure the ideality of the DS shape by
$$S=\left[g_T\left(\frac{T}{T_c}\right)^{-1}+g_O\left(\frac{O}{O_c}\right)^{-1}\right]^{-1}$$
(5)
where
$$g_T=\frac{e^{-3T/T_c}}{e^{-3T/T_c}+e^{-3O/O_c}},g_O=\frac{e^{-3O/O_c}}{e^{-3T/T_c}+e^{-3O/O_c}}.$$
$`S`$ tends to zero when the simplex takes the shape of an ideal tetrahedron or quart-octahedron. Contrary to the expression proposed in Ref. , our measure is differentiable with respect to the atomic coordinates. The relative weights of tetrahedricity and octahedricity, $`T_c=0.016`$, $`O_c=0.033`$, are taken from Ref. . The relation of the values $`T`$, $`O`$ to the distortion of a DS can also be seen from the values $`T_O\approx 0.050`$ of the tetrahedricity of an ideal quart-octahedron and $`O_T\approx 0.084`$ of the octahedricity of an ideal tetrahedron.
Each atom in the glass is a corner of approximately 24 DS. The structural environment of an individual atom can be characterized by the average ideality over these DS :
$$S_{\mathrm{atom}}=\frac{1}{n_{DS}}\sum _{i=1}^{n_{DS}}S_i$$
(6)
where $`n_{DS}`$ is the number of DS surrounding the atom.
Another widely used measure of the atomic neighborhood is the sphericity of the Voronoi cell:
$$Sph=\frac{1}{36\pi }\frac{F^3}{V^2}-1.$$
(7)
Here $`F`$ is the surface area of the VP, and $`V`$ is its volume. This measure is minimal for a sphere , $`Sph=0`$, and again increases with distortion.
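Eq. (7) is straightforward to evaluate once the total face area and the volume of the Voronoi cell are known; as an illustration, a minimal sketch applied to a cubic cell ($`F=6a^2`$, $`V=a^3`$):

```python
import math

def sphericity(F, V):
    """Sph of Eq. (7): zero for a sphere, increasing with distortion."""
    return F**3 / (36.0 * math.pi * V**2) - 1.0

print(sphericity(6.0, 1.0))  # unit cube: ~0.91
```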
In analogy to the potential energy one can take the total tetrahedricity, ideality or sphericity to characterize the structure. We introduce an average “structure potential” by
$$T=\frac{1}{N_{DS}}\sum _{i=1}^{N_{DS}}T_i$$
(8)
and analogously $`S`$ and $`Sph`$ where $`N_{DS}`$ is the number of DS in the system. Since dynamics is concerned with the motion of the atoms it is often more useful to average over the atomic quantities defined by Eq. 6
$$T_{\mathrm{atomic}}=\frac{1}{N}\sum _{i=1}^{N}T_{\mathrm{atom}}^i.$$
(9)
Both definitions give similar values.
In table I we compare the values of the three measures for an ideal fcc structure, an icosahedron, and our glass. The values of the glassy structure clearly deviate from those of both ideal configurations. It is, however, not possible to define unambiguously a nearness to either structure using these measures.
## IV Soft vibrations and change of the local structure
Using the quantities defined above we will now illustrate for one example of a quasi-localized soft mode the relationship between softness and local geometry. In Fig. 1 (solid line) we show the average potential energy per atom as function of the displacement along a single soft eigenmode, i.e. one of the soft potentials which are described by the soft potential model discussed in the introduction. The atoms are shifted along the direction of the $`3N`$-dimensional eigenvector $`𝐞`$ as
$$𝐑^n(x)=𝐑_0^n+x𝐞^n.$$
(10)
Here $`𝐑_0^n`$ is the equilibrium position of atom $`n`$. For simplicity we have not normalized the amplitude x to an effective atomic amplitude as is usually done in the soft potential model. Fig. 1 corresponds to a very well localized soft mode with $`\nu =0.04(ϵ/m\sigma ^2)^{1/2}`$, effective mass $`13m`$ and participation ratio $`0.14`$. These values would guarantee a very narrow resonance in the infinite medium .
As mentioned in the introduction it has been speculated that the soft modes in glasses originate in some “soft” atomic configurations where, in the extreme case, the atoms are stabilized by the embedding matrix in a position lying between minima of the potential energy given by its near neighbors. In Fig. 1 we show by the dashed line the average potential energy of the 13 most active atoms of our soft mode, i.e. the atoms with the largest amplitude $`𝐞^n\cdot 𝐞^n`$. This partial potential energy is indeed double-well shaped, with minima at $`x_m\approx \pm 1.0`$, which corresponds to maximal displacements of individual atoms by $`|𝐑^n-𝐑_0^n|\approx 0.27\sigma `$ from the equilibrium configuration. Note that at $`x=0`$ the selected atoms have somewhat smaller potential energy than the average. This reflects the reduced number of nearest neighbors reported earlier.
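The scan of Fig. 1 amounts to evaluating the (total or partial) potential energy along the line (10) in configuration space; a minimal sketch, assuming an array `R0` of equilibrium positions, a normalized eigenvector `e` of the same shape, and a routine `potential_energy` for the pair potential (1) are available:

```python
import numpy as np

def energy_profile(R0, e, potential_energy, x_grid):
    """Average potential energy per atom along the normal mode, Eq. (10).

    R0, e : (N, 3) arrays (equilibrium positions and mode eigenvector);
    potential_energy : callable returning the total potential energy of a
    configuration (with the appropriate periodic boundary conditions).
    """
    N = len(R0)
    return np.array([potential_energy(R0 + x * e) / N for x in x_grid])

# Usage sketch: a double-well profile signals a soft, quasi-localized mode.
# x = np.linspace(-1.5, 1.5, 61)
# profile = energy_profile(R0, e, potential_energy, x)
```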
In order to understand the changes in the local structure as the atoms oscillate between the two partial minima of the potential energy, we look at the changes of the Delaunay tessellation caused by the displacements of the atoms in the normal mode. In Figs. 2 and 3 the five most active atoms of the soft mode are shown by the black spheres, and their geometrical neighbors by gray spheres. We consider two atoms as geometrical neighbors if they share a DS. The DS are visualized by line segments.
In Fig. 2 we concentrate on DS with nearly ideal tetrahedral shape. For the sake of clarity only the most perfect tetrahedra ($`T<0.003`$) are drawn. In equilibrium ($`x=0`$) two tetrahedra are found which satisfy this condition (Fig. 2, b). After shifting in the “positive” direction (Fig. 2,a) one perfect tetrahedron has disappeared, but 6 new ones appeared. In particular, a 5-fold ring of perfect tetrahedra is created (seen sideways in the right half of the picture). This ring is known to be the densest possible packing of 7 equal spheres. After the displacement in the “negative” direction (Fig. 2,c) again one tetrahedron is lost, but 4 new perfectly tetrahedral DS are gained.
Similarly Fig. 3 demonstrates the appearance of new quart-octahedral DS in the neighborhood of the active atoms. Only DS with $`O<0.008`$ are shown. For $`x=1`$ one perfect octahedron, consisting of 3 active atoms and 3 of their neighbors, is observed. Together with the 5-fold ring of the perfect tetrahedra, it forms locally a pattern of perfect, non-crystalline structure which does not exist at $`x=0`$. At $`x=-1`$, a large dense cluster of 12 ideal quart-octahedra appears (Fig. 3, c). It indicates several octahedral configurations in the neighborhood of the active atoms, although they cannot be seen clearly on the figure.
In general, the number of perfect DS (tetrahedra or quart-octahedra) increases as the atoms are shifted from the equilibrium position to the local minima of the partial potential energy. This tendency is summarized in the double-well behavior of the average ideality, $`S_{\mathrm{atom}}`$, as a function of the normal coordinate of the mode (Fig. 4, dashed line) . We recall that a lower value of $`S_{\mathrm{atom}}`$ means a more perfect atomic neighborhood. Note that the minima of the partial ideality are situated approximately at the same values of $`x`$ as the minima of the partial potential energy. In the equilibrium position ($`x=0`$) the active atoms have relatively imperfect neighborhoods compared to the rest of the atoms. After the displacements this is considerably “improved”.
The curve of the average perfectness $`S_{\mathrm{atom}}(x)`$, averaged over all atoms (solid line), is almost flat at $`x=0`$ and resembles the behavior of the average potential energy per atom of the mode (Fig. 1, solid line).
The double-well behavior of the partial potential energy and the ideality of the atomic environment is specific for a number of low-frequency vibrations. It becomes less pronounced as the frequency increases. However, we have not noticed a sharp transition between localized and delocalized low-frequency modes. Displacements of the atoms along the modes with medium and high frequencies also destroy DS of nearly perfect shape.
The geometrical peculiarities of the random soft sphere packing play an important role for the localized low-frequency vibrations. These vibrations can be visualized as complex collective motions which organize the atoms in some “perfect” but non-crystalline arrangements. Although these arrangements consist of elements present in the crystalline structure (tetrahedra and quart-octahedra), these are connected to each other in a way which is incompatible with the rotational and translational symmetries of the crystal. Slight deviations of the shape of the DS make a variety of spatial arrangements possible which differ from the close packed fcc and hcp ones. The pentagonal rings typical of locally icosahedral structure are one such example, compare e.g. Ref. . The lack of translational symmetry restricts these structures to ranges of a few interatomic distances.
## V Correlation between structure and vibration
We have seen that a low frequency quasi-localized vibration has a specific impact on the structure surrounding the most active atoms of the vibration. We will now investigate how far a general relationship between structural measures and dynamics can be seen. We will here concentrate on tetragonality, Eq. 2. Qualitatively we find the same trends also for ideality, Eq. 5, and sphericity, Eq. 7.
As a first possible relation between structural measures and vibration one can take the atomic tetragonality weighted by the amplitudes on the atoms. This would show whether e.g. atoms with low values of $`T_{\mathrm{atom}}`$ participate particularly strongly in vibrations in some frequency range. We define
$$T(\nu )=\frac{1}{N}\left\langle \sum _nT_{\mathrm{atomic}}^n\,𝐞^n(\nu )\cdot 𝐞^n(\nu )\right\rangle $$
(11)
where $`𝐞^n(\nu )`$ stands for the three components on atom $`n`$ of a vibrational eigenvector to frequency $`\nu `$ and $`\left\langle \mathrm{}\right\rangle `$ denotes averaging over configurations and eigenvectors to similar frequencies. Taking the average value, the dashed line in Fig. 5, one observes only a slight variation with frequency. Only at the smallest frequencies a small upturn is found. The contour plot shows that the $`T(\nu )`$-values fall in general into a narrow band. This is different for the high frequency localized modes ($`\nu >2(ϵ/m\sigma ^2)^{1/2}`$). These modes have a large spread of $`T(\nu )`$-values without a clear preference for large or low atomic tetragonalities. This large spread is a direct consequence of the strong localization to even single atoms. The low frequency modes always involve larger numbers of atoms, from 10 upwards, and therefore average over many different atomic tetragonalities.
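A minimal sketch of Eq. (11), assuming the atomic tetragonalities and the eigenvectors are available as arrays:

```python
import numpy as np

def weighted_tetragonality(T_atom, E):
    """Amplitude-weighted tetragonality of Eq. (11) for each mode.

    T_atom : (N,) atomic tetragonality values;
    E      : (N, 3, n_modes) normalized vibrational eigenvectors.
    """
    w = (E**2).sum(axis=1)                   # squared amplitude on each atom
    return (T_atom[:, None] * w).mean(axis=0)
```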
In the previous section we noted a connection between low frequency vibrations and changes of structural elements. In order to quantify this notion we will now treat the average tetragonality, Eq. 8, as a structural potential and in analogy to the usual dynamic matrix define a tetragonality matrix
$$𝒯_{\alpha \beta }^{mn}=\frac{\partial ^2T}{\partial R_\alpha ^m\partial R_\beta ^n}.$$
(12)
Diagonalisation of this matrix gives the eigenmodes of tetragonality change and the corresponding eigenvalues, which we will denote by $`𝐞_T`$ and $`\lambda _T`$, respectively. To keep in line with the vibrations we use a tetragonality frequency $`\nu _T=\sqrt{\lambda _T}/2\pi `$.
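Because the shape measures defined above are differentiable, the matrix (12) can be approximated by central finite differences; a minimal sketch for small systems, assuming a scalar routine `T_total(R)` implementing Eq. (8) (the Delaunay tessellation must be handled consistently for each displaced configuration):

```python
import numpy as np

def structural_matrix(T_total, R0, h=1e-4):
    """Central-difference Hessian of the structural potential, Eq. (12)."""
    x0 = R0.ravel()
    n = x0.size
    def f(x):
        return T_total(x.reshape(-1, 3))
    H = np.empty((n, n))
    for i in range(n):
        for j in range(i, n):
            # 4-point stencil for d^2 f / dx_i dx_j (also valid for i == j)
            x = x0.copy(); x[i] += h; x[j] += h; fpp = f(x)
            x = x0.copy(); x[i] += h; x[j] -= h; fpm = f(x)
            x = x0.copy(); x[i] -= h; x[j] += h; fmp = f(x)
            x = x0.copy(); x[i] -= h; x[j] -= h; fmm = f(x)
            H[i, j] = H[j, i] = (fpp - fpm - fmp + fmm) / (4.0 * h * h)
    return H

# lam, e_T = np.linalg.eigh(structural_matrix(T_total, R0))
# nu_T = np.sqrt(np.clip(lam, 0.0, None)) / (2.0 * np.pi)
```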
In analogy to Eq. 11, where we defined an amplitude weighted tetragonality as function of frequency, we calculate the expectation value of the tetragonality matrix with respect to the vibrations, i.e. an amplitude weighted structural curvature,
$$e(\nu )𝒯e(\nu )=\sum _{mn,\alpha \beta }e_\alpha ^m(\nu )𝒯_{\alpha \beta }^{mn}e_\beta ^n(\nu ).$$
(13)
This expectation value shows several interesting features, Fig. 6. Most obviously there is a clear, more or less linear increase with frequency. This linearity breaks down at the lowest frequencies ($`\nu <0.2(ϵ/m\sigma ^2)^{1/2}`$), i.e. in the frequency range of the boson peak, where we find a distinct upturn and a small maximum resembling a boson peak is seen. This upturn corresponds to the one in Fig. 5 but is much more pronounced. It clearly indicates a structural difference of the excess modes in the boson peak. It should be remembered that the translational invariance requires $`e(\nu )𝒯e(\nu )\to 0`$ for $`\nu \to 0`$. For pure translation we get of course zero. Due to the limited system size sound waves below $`\nu <0.2(ϵ/m\sigma ^2)^{1/2}`$ were eliminated and we do not see the increase towards this “structural boson peak” on the low frequency side. The small dips of the curve for $`\nu \approx 0.62,0.88,\mathrm{}`$ $`(ϵ/m\sigma ^2)^{1/2}`$ coincide with the frequencies of the longitudinal sound waves in the SSG.
To get some deeper insight into the interplay of vibration and structure change we calculate the correlation matrix between the vibrational eigenmodes and their tetragonality counterparts
$$e(\nu )e_T(\nu _T)=\sum _{n\alpha }\left(e_\alpha ^n(\nu )\,e_{T\,\alpha }^n(\nu _T)\right)^2.$$
(14)
The resulting correlation, Fig. 7, shows several interesting features. First there is a clear overall correlation, as expected from Fig. 6. The correlation is highest for the highest frequency modes. From the participation ratios one can see that both vibrational and tetragonality modes are localized for the highest frequencies. For the great majority of modes two groups can be distinguished. The largest contribution stems from a broad band stretching from the lowest $`\nu `$-values to the peak at the maximal $`\nu `$-values. In front of this band (higher $`\nu `$-values) there is a smaller one, which can be identified as being due to longitudinal phonons, which are well separated from the other vibrations in the SSG. A third group is seen as a narrow ridge at low $`\nu `$ covering a major part of the $`\nu _T`$ range. This last feature shows again the difference between the quasi-localized low frequency modes and the rest of the spectrum. In a larger system interaction will of course mix these features. This does, however, not change the underlying nature of the “naked” modes. Fig. 7 shows not only separate peaks for the longitudinal phonons permitted by the system size; also in the $`\nu _T`$-direction separate “phonons” are seen, both transverse and longitudinal. Checking the participation ratios of $`e_T`$ one finds all modes with low $`\nu _T`$ to be extended; no low frequency localized modes are seen. The observed correlation is insufficient to predict localization at low frequencies. This is not too surprising, as it has been observed earlier that these modes are produced by a subtle interplay of local compression and, in addition, a resulting soft direction in configurational space involving several atoms. The tetragonality reproduces the first feature, seen in the high frequency modes, but not the second one.
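A minimal sketch of the correlation (14) between the two sets of eigenvectors, assuming they are stored column-wise:

```python
import numpy as np

def mode_correlation(E, ET):
    """Correlation of Eq. (14): C[k, l] = sum_i (E[i, k] * ET[i, l])**2.

    E, ET : (3N, n_modes) arrays whose columns hold the vibrational and
    the tetragonality eigenvectors, respectively.
    """
    return (E**2).T @ (ET**2)
```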
To illustrate the fine details governing localization on one hand, and the stability of the overall correlation on the other one we repeat the above calculation for a mixed measure of ideality and tetragonality
$$𝒮𝒯_{\alpha \beta }^{mn}=\frac{0.6}{\mathrm{tr}𝒯}𝒯_{\alpha \beta }^{mn}+\frac{0.4}{\mathrm{tr}𝒮}𝒮_{\alpha \beta }^{mn}.$$
(15)
The weighting of $`𝒯`$ and $`𝒮`$ was chosen somewhat arbitrarily to move the lowest eigenvalues to $`(2\pi \nu _{ST})^2\approx 0`$. Qualitatively the correlation, Fig. 8, is the same as the one for tetragonality, Fig. 7. The phonons in $`ST`$-space are no longer so clearly discernible, but low frequency localized $`ST`$-modes are found which are correlated to the low frequency vibrations. The occurrence of the low frequency modes for the mixed $`𝒮𝒯`$ matrix is in agreement with Fig. 1, where the near zero curvature of $`S_{\mathrm{atom}}`$ is due to changes of both tetrahedricity and octahedricity. The difference between Figs. 7 and 8 illustrates that the formation of low frequency quasi-localized modes depends much more subtly on structural details than is the case for high frequency modes.
## VI Conclusion
We have shown that the Voronoi-Delaunay geometrical approach gives an insight into the geometrical effects underlying the vibrations in the glass. The lowest frequency quasi-localized vibrations can be envisaged as being caused by an instability of the local geometry which is stabilized by the embedding lattice. A group of atoms is trapped between two configurations which can be considered as more perfect. We introduce different measures to quantify this perfectness. For the great majority of modes there is only a weak correlation between the amplitude on the single atoms and their perfectness. This reflects the delocalisation of the modes. The high frequency localized modes which are concentrated on one or two atoms show a large scatter of their geometrical parameters which indicates that they are caused by different local distortions. At the low frequency side there is a small increase of tetragonality which is, however, masked by the width of the distribution.
Introducing structural dynamic matrices, correlation effects are clearly observable. These correlations divide the vibrations into different groups. First there are two bands of extended modes, longitudinal and transverse ones. The separation of these two bands is due to the large difference in longitudinal and transverse sound velocity for the considered model. At high frequencies localized vibrations are correlated to high frequency structural modes. At the lowest frequencies, the region of the boson peak, the vibrations show a distinctly different correlation behavior. This is a clear indication of their structural origin. Using a suitable mixture of structural measures, low frequency structural modes can be defined which are correlated to the low frequency quasi-localized (resonant) modes.
## VII Acknowledgment
One of the authors (V.A.L.) gratefully acknowledges the hospitality and financial support of the Forschungszentrum Jülich.
# Bianchi Orbifolds of Small Discriminant
A. Hatcher
Let $`𝒪_D`$ be the ring of integers in the imaginary quadratic field $`\mathbb{Q}(\sqrt{D})`$ of discriminant $`D<0`$. Then $`PGL_2(𝒪_D)`$ is a discrete subgroup of the isometry group $`PSL_2(\mathbb{C})`$ ($`=PGL_2(\mathbb{C})`$) of hyperbolic $`3`$-space $`\mathbb{H}^3`$. The quotient space $`\mathbb{H}^3/PGL_2(𝒪_D)=X_D`$ is topologically a noncompact $`3`$-manifold whose cusps (ends) are of the form $`S^2\times [0,\mathrm{\infty })`$. The number of cusps of $`X_D`$ is known to be $`h_D`$, the class number of $`𝒪_D`$. So $`X_D`$ is a closed manifold $`\widehat{X}_D`$ with $`h_D`$ points removed.
For small $`|D|`$, including the $`31`$ discriminants in the range $`D>-100`$, R. Riley has done computer calculations of the Ford fundamental domain $`F_D`$ for the action of $`PGL_2(𝒪_D)`$ on $`\mathbb{H}^3`$. (See for an account of the techniques; for about half of these $`D`$’s, Bianchi had calculated the fundamental domains — by hand, presumably — almost a century ago, in ,.) Riley’s computer output includes how the faces of $`F_D`$ are identified by elements of $`PGL_2(𝒪_D)`$. So it becomes a pleasant exercise in geometric visualization to try to recognize the manifold $`\widehat{X}_D`$. The results of carrying out this exercise for $`D>-100`$ are listed in Table I. ($`P^3`$ denotes real projective $`3`$-space, $`\mathrm{\#}`$ denotes connected sum.)
| $`D`$ | | $`3`$ | $`4`$ | $`7`$ | $`8`$ | $`11`$ | $`15`$ | $`19`$ | $`20`$ | $`23`$ | $`24`$ | $`31`$ | $`35`$ | $`39`$ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| $`\widehat{X}_D`$ | | $`S^3`$ | $`S^3`$ | $`S^3`$ | $`S^3`$ | $`S^3`$ | $`S^3`$ | $`S^3`$ | $`S^3`$ | $`S^3`$ | $`S^3`$ | $`S^3`$ | $`S^3`$ | $`S^3`$ |
| $`40`$ | $`43`$ | $`47`$ | $`51`$ | $`52`$ | $`55`$ | $`56`$ | $`59`$ | $`67`$ | $`68`$ | $`71`$ | $`79`$ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| $`P^3`$ | $`P^3`$ | $`S^3`$ | $`S^3`$ | $`P^3`$ | $`P^3`$ | $`S^3`$ | $`S^3`$ | $`P^3\mathrm{\#}P^3`$ | $`S^3`$ | $`S^3`$ | $`P^3`$ |
| $`83`$ | $`84`$ | $`87`$ | $`88`$ | $`91`$ | $`95`$ |
| --- | --- | --- | --- | --- | --- |
| $`P^3`$ | $`S^1\times S^2`$ | $`S^1\times S^2`$ | $`P^3\mathrm{\#}P^3\mathrm{\#}P^3`$ | $`P^3\mathrm{\#}P^3`$ | $`P^3`$ |
Table I
There are exactly $`19`$ $`D`$’s in this range for which $`\widehat{X}_D`$ is the $`3`$-sphere. Since $`\pi _1X_D`$ ($`=\pi _1\widehat{X}_D`$) is $`PGL_2(𝒪_D)/torsion`$, the question of when $`\widehat{X}_D`$ is $`S^3`$ is equivalent (assuming no $`\widehat{X}_D`$’s are counterexamples to the Poincaré Conjecture) to when $`PGL_2(𝒪_D)`$ is generated by torsion. In the $`19`$ cases when $`\widehat{X}_D=S^3`$, we have determined in addition the orbifold structure on $`X_D`$, namely, the embedded graph in $`X_D`$ consisting of images under the quotient map $`\mathbb{H}^3\to X_D`$ of axes of rotations of torsion elements of $`PGL_2(𝒪_D)`$, each edge of this graph being labelled by the order of the corresponding torsion element. These orbifold structures are shown in the figures on the next page. (Labels “2” on edges are omitted. The small circles denote the cusp spheres. The sphere $`S^3=\widehat{X}_D`$ is regarded as $`3`$-space compactified by a point at infinity.)
Table II below shows the orbifold structure on $`X_D`$ in a few cases when $`\widehat{X}_DS^3`$. In the top row are the first four cases when $`\widehat{X}_D=P^3`$. Here we view $`P^3`$ as a ball with antipodal points of its boundary sphere (indicated by the dashed-line circles) identified. The lower part of the Table represents the case $`D=84`$, when $`\widehat{X}_D=S^1\times S^2`$. The periodic extension of the graph shown, modulo its translation symmetries, gives the “singular” locus of the orbifold structure on $`X_{84}`$, a graph lying on the torus $`S^1\times S^1`$ which decomposes $`S^1\times S^2`$ into two $`S^1\times D^2`$’s. The vertical direction in the figure represents the meridian circles $`\{x\}\times D^2`$ on these $`S^1\times D^2`$’s.
Table II
For the index two subgroup $`PSL_2(𝒪_D)`$ of $`PGL_2(𝒪_D)`$, the quotient space $`Y_D=^3/PSL_2(𝒪_D)`$ is a $`2`$-sheeted branched cover of $`X_D`$, branched in such a way that the cusp spheres of $`X_D`$ become cusp tori of $`Y_D`$ (except for $`D=3,4`$, when they remain spheres). It turns out that in the $`19`$ cases when $`\widehat{X}_D=S^3`$, this branching condition at cusps uniquely determines the branched covering $`Y_DX_D`$, and one can very easily by inspection determine the topological type of $`Y_D`$. This is given in Table III, in which $`Y_D`$ is strictly speaking the interior of the compact manifold listed. (Notations: $`B^3=`$ 3-ball, $`D^2=`$ 2-disk, $`T^2=`$ torus, $`I=[0,1]`$.)
| $`D`$ | $`Y_D`$ | $`D`$ | $`Y_D`$ |
| --- | --- | --- | --- |
| $`3`$ | $`B^3`$ | $`31`$ | $`S^1\times D^2\mathrm{\#}T^2\times I`$ |
| $`4`$ | $`B^3`$ | $`35`$ | $`S^1\times D^2\mathrm{\#}S^1\times D^2\mathrm{\#}S^1\times S^2`$ |
| $`7`$ | $`S^1\times D^2`$ | $`39`$ | $`S^1\times D^2\mathrm{\#}S^1\times D^2\mathrm{\#}T^2\times I`$ |
| $`8`$ | $`S^1\times D^2`$ | $`47`$ | $`S^1\times D^2\mathrm{\#}T^2\times I\mathrm{\#}T^2\times I`$ |
| $`11`$ | $`S^1\times D^2`$ | $`51`$ | $`S^1\times D^2\mathrm{\#}S^1\times D^2\mathrm{\#}S^1\times S^2`$ |
| $`15`$ | $`S^1\times D^2\mathrm{\#}S^1\times D^2`$ | $`56`$ | $`S^1\times D^2\mathrm{\#}S^1\times D^2\mathrm{\#}T^2\times I\mathrm{\#}S^1\times S^2`$ |
| $`19`$ | $`S^1\times D^2`$ | $`59`$ | $`S^1\times D^2\mathrm{\#}T^2\times I\mathrm{\#}S^1\times S^2`$ |
| $`20`$ | $`S^1\times D^2\mathrm{\#}S^1\times D^2`$ | $`68`$ | $`S^1\times D^2\mathrm{\#}S^1\times D^2\mathrm{\#}T^2\times I\mathrm{\#}S^1\times S^2`$ |
| $`23`$ | $`S^1\times D^2\mathrm{\#}T^2\times I`$ | $`71`$ | $`S^1\times D^2\mathrm{\#}T^2\times I\mathrm{\#}T^2\times I\mathrm{\#}T^2\times I`$ |
| $`24`$ | $`S^1\times D^2\mathrm{\#}S^1\times D^2`$ | | |
Table III
In the $`14`$ cases that $`Y_D`$ does not contain a connected summand $`S^1\times S^2`$, the restriction map $`H^1(Y_D;\mathbb{Q})\to H^1(\partial Y_D;\mathbb{Q})`$ is injective; in other words, “$`Y_D`$ has no cuspidal cohomology.” It is known (see ,,,,) that these are the only cases when this happens, for arbitrary $`D<0`$.
Perhaps the first thing one notices about the pictures of the orbifolds $`X_D`$ is the symmetries. In each case there is a reflectional symmetry through a plane parallel to the plane of the page. This reflection presumably corresponds to the $`\mathbb{Z}_2`$ extension of $`PGL_2(𝒪_D)`$ obtained by adjoining complex conjugation, the Galois automorphism of $`𝒪_D`$. When $`D`$ has more than one distinct prime divisor, there is also a $`180^{\circ }`$ rotational symmetry evident in the pictures. (This symmetry does not appear in the Ford domains, however.) Such a symmetry is predicted by general theory: Bianchi already described a group $`G_D\subset PGL_2(\mathbb{C})`$ containing $`PGL_2(𝒪_D)`$ as a normal subgroup of finite index, with quotient $`G_D/PGL_2(𝒪_D)\cong C_D/2C_D`$, the mod $`2`$ ideal class group (or genera group) of $`𝒪_D`$. According to Gauss, this quotient has $`\mathbb{Z}_2`$-rank equal to one less than the number of distinct prime divisors of $`D`$.
Even when $`\widehat{X}_D`$ is not $`S^3`$, $`\widehat{X}_D`$ may have $`S^3`$ as the quotient space corresponding to a finite extension of $`PGL_2(𝒪_D)`$, such as the group $`G_D`$ above. This happens in the cases $`D=40,52,55`$ in Table II, when $`\widehat{X}_D=P^3`$. It also happens for $`D=84`$, when $`\widehat{X}_D=S^1\times S^2`$ has a $`\mathbb{Z}_2\times \mathbb{Z}_2`$ ($`\cong C_{84}/2C_{84}`$) quotient which is $`S^3`$, the $`\mathbb{Z}_2\times \mathbb{Z}_2`$ action on $`X_{84}`$ restricting to the full symmetry group of the singular locus on the torus shown in Table II.
It appears that except for $`D=3,4`$, the remaining $`17`$ orbifolds $`X_D`$ with $`\widehat{X}_D=S^3`$ are Haken orbifolds . That is, by repeatedly splitting open along incompressible $`2`$-dimenional suborbifolds, $`X_D`$ can be reduced to a disjoint union of finitely many orbifolds of the form $`\mathbb{R}^3/\mathrm{\Gamma }`$ for $`\mathrm{\Gamma }`$ a finite subgroup of $`SO(3)`$ (acting on $`\mathbb{R}^3`$ as isometries). These splitting surfaces are all separating, so such a hierarchy for $`X_D`$ yields a way of building up $`PGL_2(𝒪_D)`$ from finite subgroups of $`SO(3)`$ by iterated free product with amalgamation constructions. These hierarchies are in general far from unique. As a very simple example, the orbifold $`X_8`$ as drawn can be split successively along horizontal and vertical planes through the cusp, in either order, yielding the two structures
$$\left(O(24)\ast _{C(4)}D(8)\right)\ast _{C(3)\ast C(2)}\left(D(6)\ast _{C(2)}D(4)\right)$$
and
$$\left(O(24)\ast _{C(3)}D(6)\right)\ast _{C(4)\ast C(2)}\left(D(8)\ast _{C(2)}D(4)\right)$$
where $`O(24)`$ is the octahedral group, $`D(2n)`$ is the dihedral group of order $`2n`$, and $`C(n)`$ is the cyclic group of order $`n`$. In more subtle examples, not even the collection of finite subgroups of $`SO(3)`$ which start the iterated amalgamated free product construction is unique, though of course the noncyclic subgroups among these are unique, corresponding to the vertices in the singular locus of the orbifold structure.
In all cases except $`D=3,4`$, there is a splitting
$$PGL_2(𝒪_D)\cong PGL_2(\mathbb{Z})\ast _A(\mathrm{?})$$
amalgamated over $`A=PSL_2(\mathbb{Z})`$, arising as follows. In the upper half-space model of $`\mathbb{H}^3`$, bounded below by the plane $`\mathbb{C}`$, there lies $`\mathbb{H}^2`$, the half-plane above $`\mathbb{R}`$. The orbifold $`\mathbb{H}^2/PGL_2(\mathbb{Z})`$ is a triangle with one vertex at the cusp at $`\mathrm{\infty }`$. This triangle is embedded in $`X_D`$, and the boundary of a small regular neighborhood of this triangle is the surface corresponding to $`PSL_2(\mathbb{Z})`$ in the splitting above. This surface can be taken to be totally geodesic in $`X_D`$. It should be of interest to find other totally geodesic incompressible surfaces in $`X_D`$, since these are more likely to be defined arithmetically. For example, as Riley has pointed out, the cuspidal classes in $`H^1(Y_D;\mathbb{Q})\cong H^2(Y_D,\partial Y_D;\mathbb{Q})`$ found in ,, are represented by totally geodesic surfaces formed by the intersections of the Ford domain with certain planes parallel to $`\mathbb{H}^2\subset \mathbb{H}^3`$. These non-separating surfaces in $`Y_D`$ pass down to non-separating (totally geodesic) surfaces in $`X_D`$, which are often non-orientable. Since non-separating surfaces do not exist in $`S^3`$, it follows from ,, that the only values of $`D<-100`$ for which $`\widehat{X}_D`$ could be $`S^3`$ are $`-119,-164,-191,-311,-356,-404,-479`$, and $`-776`$. Riley’s computer calculations eliminate $`-164`$ from this list.
References
M.Baker, Ramified primes and the homology of the Bianchi groups, I.H.E.S. preprint (1982).
L. Bianchi, Sui gruppi di sostitutioni lineari con coefficienti a corpi quadratici imaginarii, Math. Annalen 40 (1892), 332-412. \[This article and the following one are reprinted in Bianchi’s collected works, Opere, vol.1, Edizioni Cremonese, Rome, 1952.\]
L. Bianchi, Sui gruppi di sostitutioni lineari, Math. Annalen 42 (1892), 30-57.
F. Grunewald and J. Schwermer, Arithmetic quotients of hyperbolic 3-space, cusp forms and link complements, Duke Math. J. 48 (1981), 351-358.
R. Riley, Application of a computer implementation of Poincaré’s theorem on fundamental polyhedra, Math. of Computation 40 (1983), 607-632.
J. Rohlfs, On the cuspidal cohomology of the Bianchi modular groups, Math. Z. 188 (1985), 253-269.
W. Thurston, Geometry and topology of 3-manifolds, xeroxed notes.
K. Vogtmann, Rational homology of Bianchi groups, Math. Annalen 272 (1985), 399-419.
R. Zimmert, Zur $`SL_2`$ der ganzen Zahlen eines imaginär quadratischen Zahlkörpers, Invent. math. 19 (1973), 73-82.
December 1983
Cornell University
# Anderson et al. reply (to the comment of Murphy on “Indication, from Pioneer 10/11, Galileo, and Ulysses Data, of an Apparent Anomalous, Weak, Long-Range Acceleration”)
> We conclude that Murphy’s proposal (radiation of the power of the main-bus electrical systems from the rear of the craft) can not explain the anomalous Pioneer acceleration.
In his comment Murphy proposes that the anomalous acceleration seen in the Pioneer 10/11 spacecraft can be “explained, at least in part, by non-isotropic radiative cooling of the spacecraft.” So, the question is, does “at least in part” mean this effect comes near to explaining the anomaly? We argue it does not .
Murphy considers radiation of the power of the main-bus electrical systems from the rear of the craft. For the Pioneers, the aft has a louver system, and “the louver system acts to control the heat rejection of the radiating platform…A bimetallic spring, thermally coupled radiatively to the platform, provides the motive force for altering the angle of each blade. In a closed position the heat rejection of the platform is minimized by virtue of the “blockage” of the blades while open louvers provide the platform with a nearly unobstructed view of space.”
If these louvers were open, then, Murphy calculates this would produce an acceleration $`a_0=9.2\times 10^{-8}`$ cm s<sup>-2</sup>. Murphy uses numbers for thermal radiation that correspond to the position of the spacecraft near Jupiter, i.e., 5.5 AU. At that time, the spring temperature was about 56 F, meaning the opening angle of the louvers was down to $`20^{\circ }`$. This reduces his estimate for the effective $`a_0`$ to $`a\approx \mathrm{sin}(20^{\circ })a_0=3.2\times 10^{-8}`$ cm s<sup>-2</sup>.
However, our effect could only be seen well beyond 5.5 AU; i.e., further than 10-15 AU. By 9 AU the actuator spring temperature had already reached $`\sim `$40 F. This means the louver doors were closed (i.e., the louver angle was zero) from there on out. Thus, from our quoting of the radiation properties above, any contribution of the thermal radiation to the Pioneer anomalous acceleration should be small. (Certainly it would not be expected to be higher than it was at a $`20^{\circ }`$ opening angle .)
In 1984 Pioneer 10 was at about 33 AU and the power was about 105 W. (Always reduce the effect of the total power numbers by 8 W to account for the radio-beam power.) In (1987, 1992, 1996) the craft was at $`\sim `$(41, 55, 65) AU and the power was $`\sim `$(95, 80, 70) W. The louvers were inactive. No decrease in $`a_P`$ was seen.
We conclude that this proposal can not explain the anomalous Pioneer acceleration.
Heat radiation should be a more significant systematic for Ulysses than for the Pioneers. However, in principle this could be separated out since accelerations along the lines of sight towards the Earth and towards the Sun could be differentiated. This is one of the reasons why a detailed calculation of the Ulysses orbit from near Jupiter encounter to Sun perihelion was undertaken, using CHASMP.
This turned out to be a much more difficult calculation than imagined. Because of a failed nutation damper, an inordinate number of spacecraft maneuvers were required (257). Even so, the analysis has now been completed. The results are disheartening. For an unexpected reason, any fit is not significant. The anomaly is dominated by (what appear to be) gas leaks. That is, after each maneuver the measured anomaly changes. The measured anomalies randomly change sign and magnitude. The values go up to about an order of magnitude larger than $`a_P`$. So, although the Ulysses data was useful for range/Doppler checks to test models, like Galileo it could not provide a good number for $`a_P`$.
The gas leaks so far found in the Pioneers are about an order of magnitude too small to explain $`a_P`$. Even so, we feel that some systematic or combination of systematics (such as heat or gas leaks) will most likely explain the anomaly. However, such an explanation has yet to be demonstrated.
This work was supported by the Pioneer Project, NASA/Ames Research Center, and was performed at the Jet Propulsion Laboratory, California Institute of Technology, under contract with NASA. P.A.L. and A.S.L. acknowledge support by a grant from NASA through the Ultraviolet, Visible, and Gravitational Astrophysics Program. M.M.N. acknowledges support by the U.S. DOE.
John D. Anderson,<sup>a</sup> Philip A. Laing,<sup>b</sup> Eunice L. Lau,<sup>a</sup> Anthony S. Liu,<sup>c</sup> Michael Martin Nieto,<sup>d</sup> and Slava G. Turyshev<sup>a</sup>
<sup>a</sup>Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91109
<sup>b</sup> The Aerospace Corporation, 2350 E. El Segundo Blvd., El Segundo, CA 90245-4691
<sup>c</sup> Astrodynamic Sciences, 2393 Silver Ridge Ave., Los Angeles, CA 90039
<sup>d</sup> Theoretical Division (MS-B285), Los Alamos National Laboratory, University of California, Los Alamos, NM 87545
Received
PACS numbers: 04.80.-y, 95.10.Eg, 95.55.Pe
# Temperature Correlations in a Compact Hyperbolic Universe
## 1 Introduction
Einstein’s equations do not specify the global structure of spacetime. In other words, for a given local metric, a large number of topologically distinct models remain possible. In the absence of a unified theory that describes the global structure as well as the local one, one must resort to observational methods to determine the global topology of the universe.
Assuming that the spatial hypersurface is homogeneous, the observed high degree of isotropy in the cosmic microwave background (CMB) points to the Friedmann-Robertson-Walker (FRW) models as the best candidate cosmological models. However, if one allows the spatial hypersurface to be multiply-connected, a variety of locally FRW models which are globally anisotropic and inhomogeneous may be consistent with the current observational data.
Constraints on the topological identification scales using the COBE data have been obtained for some flat models with no cosmological constant (Stevens, Scott & Silk 1993; de Oliveira, Smoot & Starobinsky 1996; Levin, Scannapieco & Silk 1998) and some limited compact hyperbolic (CH) models (Levin, Barrow, Bunn & Silk 1997; Bond, Pogosyan & Souradeep 1998). The large-angular temperature fluctuations discovered by the COBE constrain the possible number of the copies of the fundamental domain inside the last scattering surface to less than $`\sim `$8 for compact flat multiply-connected models.
On the other hand, a large amount of CMB anisotropy on large scales could be produced in a low-density universe due to the decay of the gravitational potential near the present epoch (Cornish, Spergel & Starkman 1998). Therefore we expect that the constraint on the possible number of copies is less stringent for CH models. However, since the effect of the non-trivial topology becomes more and more significant as the volume of the space decreases, it is very important to investigate the viability of CH models with small comoving volume.
From a theoretical point of view, the “smallness” of the spatial hypersurface is an advantage, since it provides a natural mechanism leading to homogeneity and isotropy. It is well known that geodesic flows on CH spaces are strongly chaotic. Therefore, initial perturbations would be smoothed out due to the mixing effects (Lockhart, Misra & Prigogine 1982; Gurzadyan & Kocharyan 1992; Ellis & Tavakol 1994). In inflationary scenarios, a physical process that homogenises the initial patch beyond the horizon scale before the onset of inflation is indispensable for accomplishing sufficient smoothing of the observable universe (Goldwirth & Piran 1989; Goldwirth 1991). The chaotic mixing in CH spaces may provide a solution to the pre-inflationary initial value problem (Cornish, Spergel & Starkman 1996).
If we live in a small universe which is defined to be a locally homogeneous and isotropic space that is multiply-connected on scales comparable to or smaller than the horizon, the future astronomical satellite missions such as MAP and PLANCK might reveal some specific features in CMB (Cornish, Spergel, & Starkman 1998; Weeks 1998).
So far, a variety of CH manifolds have been constructed by mathematicians. However, the number of known CH manifolds with small volume is relatively small. In this paper, we investigate CH models whose spatial hypersurface is isometric to the Thurston manifold, which is the second smallest among the known CH manifolds, with volume 0.98139 times the cube of the curvature radius. The smallest one is the Weeks manifold, with volume 96 percent of that of the Thurston manifold (see e.g. Fomenko & Kunii 1997). However, the fundamental domain (which tessellates the infinite space) of the Thurston manifold is much simpler than that of the Weeks manifold. For simplicity, we investigate the Thurston models rather than the Weeks models. The fundamental domain of the Thurston manifold is a polygon with 16 faces, which can be constructed by appropriately identifying 8 faces with the remaining 8 faces (see the appendix of Inoue 1999a). It should be noted that the volume of CH manifolds must be larger than 0.16668 times the cube of the curvature radius, although no concrete examples of manifolds with such small volumes are known (Gabai, Meyerhoff & Thurston 1996).
## 2 Computation of eigenmodes
So far various kinds of numerical techniques have been proposed to overcome the difficulty of computing the CMB in CH models. For several CH models, CMB fluctuations have been computed using the method of images without carrying out the mode expansion (Bond, Pogosyan & Souradeep 1998). They obtained the result that the COBE data strongly constrain the CH models, so that the comoving volume of the fundamental domain must be at least comparable to the comoving volume inside the last scattering surface. Since the method of images requires the sum of an exponentially increasing number of images, it is difficult to obtain the distinct eigenmodes which are necessary to estimate the effect of a power spectrum with discrete peaks. Alternatively, one of the authors proposed a numerical approach called the direct boundary element method for computing eigenmodes of the Laplace-Beltrami operator (Inoue 1999a). 14 eigenmodes have been computed for the Thurston manifold. It was found numerically that the expansion coefficients behave as if they were random Gaussian numbers.
In this work, we have numerically computed 36 eigenmodes of the Thurston manifold up to $`k=13`$ (the curvature radius is normalized to one), approximated by quadratic shape functions which converge to the solutions faster than constant-valued shape functions. As we shall see, the contribution of the higher modes to the angular power spectra on large angular scales is relatively small for low-density models. In other words, the effect of the non-trivial topology is almost completely determined by the lower modes. We confirm the previously computed eigenvalues within $`|\delta k|\sim 0.01`$.
We see from figure 1 that the number of eigenmodes below $`k`$ is nicely fitted by Weyl’s asymptotic formula
$$N(k)=\frac{\text{Vol}(M)(k^21)^{3/2}}{6\pi ^2},k>>1,$$
(1)
where Vol$`(M)`$ denotes the volume of a manifold $`M`$. The random Gaussian behavior is again observed for the 31 modes with $`5.404\le k<13`$, but five degenerate states have an eigenmode which shows non-Gaussian behavior due to the global symmetry of the fundamental domain. It is found that these five eigenmodes have a $`Z_2`$ symmetry (invariance with respect to rotation by an angle $`\pi `$) about the center (where the minimum length of the periodic geodesic which lies on the point is locally maximal) of the fundamental domain. In this case, one would observe an axis around which the fluctuation is rotationally symmetric at the center. Therefore, the correlation between expansion coefficients leads to a non-Gaussian behavior. Nevertheless, it is found that appropriate choices of linear combinations of the degenerate modes recover the generic Gaussian behavior. Furthermore, the symmetry of CH manifolds depends on the observing point. If one randomly chooses a point on the manifold, the probability of observing an exact symmetry of the manifold is very small. The result supports the previous investigations of the expansion coefficients which show Gaussian behavior in classically chaotic systems (Aurich & Steiner 1989; Haake & Zyczkowski 1990), although a global symmetry in the system can hide the generic property (Balazs & Voros 1986).
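As a quick consistency check of equation (1), the expected mode count can be evaluated directly. The following minimal Python sketch assumes only the Thurston volume quoted above and the normalization of the curvature radius to one:

```python
import numpy as np

def weyl_count(k, volume=0.98139):
    """Weyl's asymptotic estimate N(k) = Vol(M) (k^2-1)^(3/2) / (6 pi^2),
    eq. (1), with the curvature radius normalized to one."""
    return volume * (k**2 - 1.0)**1.5 / (6.0 * np.pi**2)

print(weyl_count(5.404))   # ~2.5 near the lowest computed eigenvalue
print(weyl_count(13.0))    # ~36, consistent with the 36 computed eigenmodes
```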
## 3 Temperature Fluctuations
Perturbations in CH models can be written as linear combinations of the eigenmodes on the universal covering space, each multiplied by an expansion coefficient and an initial fluctuation amplitude, times the time evolution of the perturbation. The expansion coefficients carry the information about the periodicity in the universal covering space. As CH models are locally homogeneous and isotropic, the time evolution of the perturbations coincides with that in open models.
The dominant physical effects producing CMB anisotropies (Hu, Sugiyama & Silk 1997) on large angular scales are the ordinary Sachs-Wolfe (OSW) effect (Sachs & Wolfe 1967), which is the gravitational redshift effect between the last scattering surface and the present epoch, and the integrated Sachs-Wolfe (ISW) effect, which is the gravitational blueshift effect caused by the decay of the gravitational potential in the curvature-dominated epoch, $`1+z\sim (1-\mathrm{\Omega }_0)/\mathrm{\Omega }_0`$. For the COBE scales, we can ignore the contribution from the acoustic oscillations. Then the time evolution of the adiabatic growing mode of the Newtonian gravitational potential is analytically given as (see e.g. Kodama & Sasaki 1986; Mukhanov, Feldman & Brandenberger 1992)
$$\mathrm{\Phi }_t(\eta )=\mathrm{\Phi }_t(0)\frac{5(\mathrm{sinh}^2\eta -3\eta \mathrm{sinh}\eta +4\mathrm{cosh}\eta -4)}{(\mathrm{cosh}\eta -1)^3},$$
(2)
where $`\eta `$ denotes the conformal time. The two-point temperature correlations in a CH cosmological model can be written in terms of the gravitational potential. Assuming that the initial fluctuations obey Gaussian statistics, and neglecting the tensor-type perturbations, the angular power spectrum $`C_l`$ can be written as
$`(2l+1)C_l`$ $`=`$ $`{\displaystyle \underset{m=l}{\overset{l}{}}}|a_{lm}|^2`$ (3)
$`=`$ $`{\displaystyle \underset{\nu ,m}{}}{\displaystyle \frac{4\pi ^4𝒫_\mathrm{\Phi }(\nu )}{\nu (\nu ^2+1)\text{Vol}(M)}}|\xi _{\nu lm}|^2|F_{\nu l}|^2,`$
where
$$F_{\nu l}(\eta _o)\equiv \frac{1}{3}\mathrm{\Phi }_t(\eta _{*})X_{\nu l}(\eta _o-\eta _{*})+2\int _{\eta _{*}}^{\eta _o}d\eta \,\frac{d\mathrm{\Phi }_t}{d\eta }X_{\nu l}(\eta _o-\eta ).$$
(4)
Here, $`\nu =\sqrt{k^2-1}`$, $`𝒫_\mathrm{\Phi }(\nu )`$ is the initial power spectrum, and $`\eta _{*}`$ and $`\eta _o`$ are the conformal time at last scattering and the present conformal time, respectively. $`X_{\nu l}`$ denotes the radial eigenfunctions in open models and $`\xi _{\nu lm}`$ denotes the expansion coefficients. From now on we assume that the initial power spectrum is the (extended) Harrison-Zeldovich spectrum, $`i.e.`$, $`𝒫_\mathrm{\Phi }(\nu )=Const.`$
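The decay of the potential that drives the ISW contribution can be evaluated directly from equation (2). The following is a minimal sketch; the only inputs are the normalization $`\mathrm{\Phi }_t(0)=1`$ and the matter-dominated relation between $`\mathrm{\Omega }_0`$ and the present conformal time (the same expression as equation (8) below for the last scattering surface):

```python
import numpy as np

def phi_t(eta, phi0=1.0):
    """Growing-mode potential in an open matter-dominated universe, eq. (2).
    Normalized so that phi_t -> phi0 for eta -> 0."""
    num = np.sinh(eta)**2 - 3.0*eta*np.sinh(eta) + 4.0*np.cosh(eta) - 4.0
    return phi0 * 5.0 * num / (np.cosh(eta) - 1.0)**3

def eta_now(omega0):
    """Present conformal time for a matter-dominated open model."""
    return np.arccosh(2.0/omega0 - 1.0)

for om in (0.2, 0.4, 0.6):
    e0 = eta_now(om)
    print(om, e0, phi_t(e0))  # the potential decays more for lower Omega_0
```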
Although the low-lying modes give an appreciable contribution to the large angular power, the contributions of higher eigenmodes may not be completely negligible. While the computation of highly-excited eigenmodes is a difficult task, we have so far succeeded in calculating the exact eigenmodes up to $`k=13`$ as mentioned before. However, we are going to assume that the $`\xi _{\nu lm}`$’s are random Gaussian numbers for the higher modes as well. Since the information about the periodicity in real space is lost by this approximation, we employ it only for the statistics in $`k`$-space, which are expected to be unchanged because the periodicity is not apparent in $`k`$-space. As CH models are globally inhomogeneous, the expected correlation statistics depend on the position of the observer. Therefore, one can interpret one realization of the expansion coefficients as corresponding to a certain position of the observer in the fundamental domain. In order to apply the random Gaussian approximation, one must also estimate the variance of the expansion coefficients. The expansion coefficients are written in terms of eigenmodes $`u_\nu `$ and spherical harmonics $`Y_{lm}`$ as
$$\xi _{\nu lm}X_{\nu l}(\chi _o)=\int u_\nu (\chi _o,\theta ,\varphi )Y_{lm}^{*}(\theta ,\varphi )\,d\mathrm{\Omega }.$$
(5)
It should be noted that (5) is satisfied at arbitrary radius $`\chi _o`$. Let us consider a sphere with large radius $`\chi _o>>1`$ on the Poincaré ball, which is the image of the upper hyperboloid in the four-dimensional Minkowski space ($`y_0,y_1,y_2,y_3`$) under a stereographic projection onto the unit ball in the ($`0,y_1,y_2,y_3`$) plane using the point ($`-1,0,0,0`$) as the base point. One can expect random behavior of the mode functions on the sphere, as the surface of the sphere pulled back by the discrete isometry group fills the fundamental domain ergodically. The (apparent) angular fluctuation scale $`\delta \theta `$ of a $`k`$-mode is approximated in terms of the two parameters $`\chi _o`$ and $`k`$ as
$$\delta \theta ^2\approx \frac{16\pi ^2\text{Vol}(M)}{k^2(\mathrm{sinh}(2(\chi _o+r_{ave}))-\mathrm{sinh}(2(\chi _o-r_{ave}))-4r_{ave})},$$
(6)
where $`r_{ave}`$ denotes the average of the inradius and outradius of the fundamental domain. One can approximate $`u_{\nu ^{\prime }}(\chi _o)\approx u_\nu (\chi _o^{\prime })`$ by choosing an appropriate radius $`\chi _o^{\prime }`$ which satisfies $`k^{\prime 2}\mathrm{exp}(2\chi _o^{\prime })=k^2\mathrm{exp}(2\chi _o)`$. Averaging (5) over $`l`$ and $`m`$, one obtains
$$\langle |\xi _{\nu ^{\prime }lm}|^2\rangle \approx \frac{\mathrm{exp}(2\chi _o^{\prime })}{\mathrm{exp}(2\chi _o)}\langle |\xi _{\nu lm}|^2\rangle ,$$
(7)
which gives $`\langle |\xi _{\nu lm}|^2\rangle \propto \nu ^{-2}`$. We have found that the computed variances of the $`\xi _{\nu lm}`$’s for $`2\le l\le 20`$, $`-l\le m\le l`$ are in remarkably good agreement with this analytical estimate.
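In practice, the random Gaussian approximation amounts to drawing the coefficients for the modes beyond $`k=13`$ as complex Gaussian numbers with the variance scaling derived above. A minimal sketch follows; the overall normalization is arbitrary here and would in practice be matched to the computed modes:

```python
import numpy as np

rng = np.random.default_rng(1)

def mock_xi(nu, lmax):
    """Mock coefficients xi_{nu l m} under the random Gaussian approximation,
    with variance <|xi|^2> proportional to nu^-2 (normalization arbitrary)."""
    xi = {}
    for l in range(2, lmax + 1):
        scale = 1.0 / (nu * np.sqrt(2.0))
        xi[l] = (rng.normal(0.0, scale, 2*l + 1)
                 + 1j * rng.normal(0.0, scale, 2*l + 1))
    return xi

xi = mock_xi(nu=np.sqrt(14.0**2 - 1.0), lmax=20)
print(np.mean(np.abs(xi[10])**2))  # close to 1/nu^2
```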
From figure 2, one can see that the uncertainty in the Gaussian approximation is very small. Remarkably, each realization gives almost the same value, so that the 100 points for a given $`l`$ are plotted as a tiny speck. The contribution of the higher modes becomes significant as $`\mathrm{\Omega }_0`$ is increased, because the curvature-dominated era is shifted to later times, so that the OSW effect becomes dominant over the ISW effect. It is found that the contributions of the modes $`k>13`$ to $`C_l`$ for $`2\le l\le 20`$ are approximately $`7`$ percent and $`10`$ percent for $`\mathrm{\Omega }_0=0.2`$ and $`\mathrm{\Omega }_0=0.4`$, respectively. Thus the contribution of the modes $`k>13`$, for which we employ the Gaussian approximation, is almost negligible on large angular scales, especially in low-$`\mathrm{\Omega }_0`$ models.
One realization (for the initial fluctuation) of a typical CMB fluctuation as seen by COBE is plotted in figure 3 for $`\mathrm{\Omega }_0=0.2`$. In the simulation, we used only the “exact” 36 eigenmodes. We have chosen a point where the injectivity radius is maximal as the center (belonging to the “thick” part of the manifold). One can see that the structure due to the periodic boundary conditions is not apparent. However, the approximate number of copies of the fundamental domain inside the last scattering surface is $`\sim 500`$ for the Thurston model with $`\mathrm{\Omega }_0=0.2`$. Therefore, the effect of the non-trivial topology is expected to be significant.
The mode cut-off at $`k=5.404`$, which corresponds to the largest wavelength inside the fundamental domain, causes a suppression of the angular power on large angular scales, as in compact flat models. However, the decay of the Newtonian potential in the curvature-dominated era makes a difference. Since the bulk of the large angular power comes from the decay of the potential well after the last scattering time, the large angular power does not suffer a significant suppression. We see from figure 4 that the slope of the large angular power is not steep, even for the model with $`\mathrm{\Omega }_0=0.2`$, in contrast to the compact flat models without cosmological constant. The two peaks in the power spectrum for the CH model are important in understanding the effect of the non-trivial topology. The angular scale which gives the first peak is equivalent to the angular fluctuation scale of the lowest eigenmode ($`k=5.404`$) on the last scattering surface. Substituting the comoving radius of the last scattering surface in units of the curvature radius $`R_{curv}`$,
$$R_{LSS}=R_{curv}\mathrm{cosh}^{-1}(2/\mathrm{\Omega }_0-1)$$
(8)
into (6) gives the angular scales $`l=17`$ for $`\mathrm{\Omega }_0=0.2`$ and $`l=7.4`$ for $`\mathrm{\Omega }_0=0.4`$. Beyond this scale, the OSW contribution is strongly suppressed, as in compact flat models. However, eigenmodes with angular scales below the given scale at the last scattering can have large angular scales after the last scattering. Therefore, in the presence of the ISW effect, the suppression of the power beyond the scale which corresponds to the first peak is very weak, in contrast to flat models. The angular scale which gives the second peak corresponds to the scale of the projected lowest eigenmode at the last scattering. Below this scale, the angular power asymptotically converges to that of open models, because the effect of the modes with wavelength larger than the cut-off wavelength is negligible. Since we have ignored the effects of subhorizon perturbations at the last scattering, such as the so-called ‘early’ ISW effect during the matter-radiation equality epoch and the Doppler effect due to the acoustic velocity, the angular power on large to intermediate scales must be slightly boosted. However, these effects are irrelevant to the global effect of the non-trivial topology inasmuch as one considers a typical topological identification scale that is not significantly smaller than the present horizon.
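The numbers quoted here follow from combining equations (6) and (8). In the Python sketch below, the value of the averaged radius $`r_{ave}`$ of the Thurston fundamental domain and the conversion $`l\approx \pi /\delta \theta `$ are assumptions introduced for illustration, so the resulting multipoles are only indicative:

```python
import numpy as np

def r_lss(omega0):
    """Comoving radius of the last scattering surface, eq. (8), R_curv = 1."""
    return np.arccosh(2.0/omega0 - 1.0)

def delta_theta(k, chi, volume=0.98139, r_ave=0.64):
    """Apparent angular scale of a k-mode on a sphere of radius chi, eq. (6).
    r_ave is an assumed value for the Thurston fundamental domain."""
    denom = k**2 * (np.sinh(2.0*(chi + r_ave))
                    - np.sinh(2.0*(chi - r_ave)) - 4.0*r_ave)
    return np.sqrt(16.0 * np.pi**2 * volume / denom)

for om in (0.2, 0.4):
    dth = delta_theta(5.404, r_lss(om))
    print(om, dth, np.pi/dth)  # rough multipole of the first peak
```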
In figure 5, the angular power spectra for low-$`\mathrm{\Omega }_0`$ models are plotted with the COBE data (Gorski et al. 1996) (diamonds). They have been calculated using the 36 eigenmodes and the Gaussian approximation, taking account of the $`\sim `$10 percent contributions from higher eigenmodes. The slope of the power becomes steep as $`\mathrm{\Omega }_0`$ is lowered, since the ISW contribution shifts to larger scales.
We have performed a simple $`\chi ^2`$ fitting analysis to the COBE DMR band power measurements (Tegmark 1997) (boxes), which are uncorrelated. We have adjusted the normalization of the initial power to minimise the value of $`\chi ^2`$. As shown in table 1, the angular power for a model with $`\mathrm{\Omega }_0=0.1`$ is still within the acceptable range. The apparent primordial spectral index is approximately $`n=1.6`$ for $`\mathrm{\Omega }_0=0.1`$.
## 4 CONCLUSIONS
Thus the Thurston models with $`\mathrm{\Omega }_0\gtrsim 0.1`$ are not constrained by the angular power spectrum from the COBE data, which confirms the preliminary result by one of the authors (Inoue 1999b). The peak at $`l\sim 4`$ in the COBE data may be merely a coincidence due to the large cosmic variance, but it is interesting that a model with $`\mathrm{\Omega }_0\sim 0.6`$ has its first peak at this scale. Consequently, the Thurston models agree with the COBE data better than any FRW models. A similar conclusion, a constraint of $`\mathrm{\Omega }_0\sim 0.3`$ for an orbifold model with volume $`0.7173068R_{curv}^3`$, has been obtained by Aurich (1999). Although orbifolds have singular points, the behavior of eigenmodes on orbifolds is expected to be similar to that on manifolds. Therefore, the result for the orbifold model supports our conclusion.
## Acknowledgments
We would like to thank Dr. Jeff Weeks and the Geometry Center at the University of Minnesota for providing us with the data of CH spaces, and Dr. Neil J. Cornish for useful comments. The numerical computation in this work was carried out on the VPP 800 at the Data Processing Center in Kyoto University. K.T. Inoue is supported by JSPS Research Fellowships for Young Scientists, and this work is supported partially by the Grant-in-Aid for Scientific Research Fund (No. 9809834, No. 11640235).
# The spectral evolution of post-AGB stars
## 1 Introduction
The transition phase between the Asymptotic Giant Branch (AGB) and planetary nebulae (PNe) has gained much attention over the last decade. AGB stars lose mass at high rates and become obscured by their circumstellar dust. As the star leaves the AGB, its mass loss rate decreases significantly and the star may become sufficiently hot to ionize its circumstellar material and be observable as a PN. During the transition from the AGB to the PN phase (the post-AGB or proto-planetary nebula phase) the dust shell created during the AGB moves away from the central star and becomes optically thin after a few hundred years; the obscured star becomes observable. The transition time from the AGB to the PN phase is estimated to be a few thousand years (e.g. Pottasch 1984).
Whereas PNe are relatively easy to find because of their rich optical emission-line spectra, post-AGB stars have more inconspicuous spectra and are therefore much harder to find. The number of known post-AGB stars only started to become large after the IRAS mission, which was successful in detecting objects surrounded by circumstellar dust.
Several samples of post-AGB stars are presented in the literature (e.g. Volk & Kwok 1989, Hrivnak, Kwok & Volk 1989, van der Veen, Habing & Geballe 1989, Trams et al. 1991, Oudmaijer et al. 1992, Slijkhuis 1992). Most of these objects are stars with supergiant-type spectra, surrounded by dust shells. These samples of post-AGB stars have in common that the original criteria which were employed to find them implicitly made assumptions on the spectral energy distribution (SED) of post-AGB stars. Some authors used criteria on the IRAS colours, because post-AGB stars were expected to be located in a region in the IRAS colour-colour diagram between AGB stars and PN (e.g. Volk & Kwok 1989, Hrivnak et al. 1989, van der Veen et al. 1989, Slijkhuis 1992). Other authors loosened this criterion and searched for objects in the entire colour-colour diagram, but with an additional criterion that the central star should be optically visible (e.g. Trams et al. 1991, Oudmaijer et al. 1992, Oudmaijer 1996).
Such samples are subject to selection effects, so the objects that have been selected need not be representative of the entire population of post-AGB stars. In order to understand these selection effects and to obtain a handle on the kinds of objects that could have been missed, it is useful to investigate the spectral energy distribution from a theoretical point of view by following the spectral evolution of a post-AGB star with an expanding circumstellar shell. Moreover, this type of study allows one to investigate and understand the processes that occur in the circumstellar shell during the transition.
Several such studies have been published. Most of these studies focus on the expanding dust shell with a dust radiative transfer model. Authors like e.g. Siebenmorgen, Zijlstra & Krügel (1994), Szczerba & Marten (1993), Loup (1991), Slijkhuis & Groenewegen (1992) and Volk & Kwok (1989) performed calculations describing the evolution of the circumstellar dust shell. Work concerning the evolution of a star with an expanding shell has also been performed with photo-ionization codes (Volk 1992), and hydrodynamical models (Frank et al. 1993, Mellema 1993, Marten & Schönberner 1991). None of these models include dust, except for Volk (1992) who used the output of the photo-ionization code cloudy (Ferland 1993) as input for a dust model.
Our objective is to investigate the spectral evolution of a hydrogen burning post-AGB star with a photo-ionization model containing a dust code. The aim of this work is twofold:
Firstly we investigate the processes in the circumstellar envelope. The emphasis in this paper will be on the infrared properties of this shell. For this, both the expansion of the shell and the evolution of the central star have to be taken into account.
Secondly, we investigate the influence of certain assumptions on the evolutionary timescales. Only a few post-AGB evolutionary grids have been published, never giving a fine grid for the coolest part of the evolution. The original Schönberner tracks (1979, 1983) presented only a limited number of time points of the evolution, while Vassiliadis & Wood (1994) omit the phase between 5000 K and 10 000 K altogether. The predicted timescales that are available are calculated with a pre-defined end of the AGB and assumed post-AGB mass loss rates. These choices can influence the post-AGB evolutionary timescales considerably, as was already demonstrated by Trams et al. (1989) and Górny, Tylenda & Szczerba (1994). The situation has changed now with the results of Blöcker (1995a,b), who calculated new evolutionary sequences and made extensive tables available describing certain key parameters during the post-AGB phase. The published relation between the envelope mass and effective temperature allows one to construct detailed timescales using one’s own mass loss prescriptions during the (post-)AGB evolution, which makes the model results less dependent on the mass loss formulation that was used by Schönberner (1979, 1981, 1983).
In this paper we first describe the method used to calculate evolutionary timescales and the adopted mass loss prescriptions. The results of Blöcker (1995a,b) are used as a basis to create synthetic evolutionary tracks, and some aspects of the predicted evolutionary timescales are discussed. Next we describe the photo-ionization code cloudy that was used and the assumptions that were made to conduct the study of the spectral evolution of post-AGB stars. We then present the first results of a parameter study of a typical post-AGB object, based on the 0.605 M track from Blöcker (1995b).
## 2 The central star evolution
Many stellar evolutionary models are presented in the recent literature, but only two groups calculate the AGB quantitatively including mass loss. These are Vassiliadis & Wood (1993, 1994) and Blöcker & Schönberner (1991) and Blöcker (1995a,b). Both groups calculate the evolution of a star from the main sequence through the red giant phase to the white dwarf stage. There are slight differences in the core mass – luminosity relations and the use of a different initial – final mass relation, but their results show qualitatively the same behaviour in the evolution of stars in the HR diagram. The main differences between the models are the mass loss prescriptions on the AGB.
The AGB mass loss rates in the formulation of Vassiliadis & Wood are derived from the mass loss – pulsation period ($`\dot{M}`$–$`P`$) relation given by Wood (1990). For periods longer than 500 d for low mass stars, and larger pulsation periods for higher mass objects, a maximum value of the AGB mass loss rate is invoked (of the order of $`10^{-5}`$ M yr$`^{-1}`$). Blöcker used a mass loss rate which is dependent on the luminosity of the star. He fitted the results of Bowen’s (1988) theoretical study of mass loss in Mira variables. Basically this is the Reimers mass loss (Reimers 1975) multiplied by a luminosity dependent factor. These mass loss rates are not limited by a maximum value. Since the adopted AGB mass loss rates of the above authors differ, some striking differences between the results of their calculations exist. The $`\dot{M}`$–$`P`$ relation of Wood (1990) is questioned by Groenewegen & de Jong (1994). Using the synthetic evolutionary model of Groenewegen & de Jong (1993), they were able to fit the luminosity function of carbon stars in the LMC with the Bowen mass loss adopted by Blöcker (1995a), but not with the Vassiliadis & Wood mass loss rates.
The mass loss rates do not only govern the evolution on the AGB. During the post-AGB phase, the mass loss rates also have a drastic influence on the timescales of the AGB – PN transition. Since the temperature of the star for a given core mass is determined by the mass of the stellar envelope, larger mass loss rates will cause the star to evolve to higher temperatures more quickly and can therefore shorten the timescale of the transition strongly. Trams et al. (1989) showed that when the adopted post-AGB mass loss rates of a 0.546 M star are raised by a factor of 5 to 10 to a value of $`10^{-7}`$ M yr$`^{-1}`$, the transition time from the AGB to the PN phase is shortened from 100 000 yr to only 5000 yr. This would make a low mass star readily observable as a PN, while the (longer) predicted timescales would prevent such objects from becoming an observable PN, since the circumstellar shell would have moved far away from the star and would have dispersed into the interstellar medium long before the star emits a sufficient amount of ionizing photons. In order to study the evolutionary timescales of post-AGB stars, it is important to understand post-AGB mass loss better.
### 2.1 Determination of the evolutionary timescales
The evolutionary timescales of hydrogen burning post-AGB stars are determined by the core mass and the mass loss rates of the star. In this section we present the computational details of this procedure, compute the evolutionary timescales and investigate several aspects of these timescales. We use the tracks by Blöcker (1995a,b) to determine the evolutionary timescales. Blöcker calculated several tracks for his investigation of the evolution of stars on the AGB and beyond. The tracks include a complete calculation of a 3 M object that ends as a 0.605 M white dwarf and a 4 M (0.696 M) sequence. To these sequences, the 1.0 M (0.565 M) track of Schönberner (1983) was added. We start with a recapitulation of the mass loss laws we have used (see also Blöcker 1995b).
#### 2.1.1 The mass loss laws
During the red giant branch, the mass loss is parameterized by the Reimers mass loss rate $`\dot{M}_\mathrm{R}`$ given in equation (1), where the scaling parameter $`\eta `$ is set to 1 for all tracks.
$$\dot{M}_\mathrm{R}/(\mathrm{M}_{\odot }\,\mathrm{yr}^{-1})=4\times 10^{-13}\eta \frac{(L/\mathrm{L}_{\odot })(R/\mathrm{R}_{\odot })}{(M/\mathrm{M}_{\odot })}$$
(1)
With $`L`$, $`R`$ and $`M`$ the luminosity, radius and mass of the star. Since AGB stars lose mass at a faster rate than red giants, a different formulation is adopted for the AGB mass loss rates. To this end, Blöcker fitted the numerical results of Bowen, as given in equation (2).
$$\dot{M}_{\mathrm{B1}}/(\mathrm{M}_{\odot }\,\mathrm{yr}^{-1})=4.83\times 10^{-9}(M_{\mathrm{ZAMS}}/\mathrm{M}_{\odot })^{-2.1}(L/\mathrm{L}_{\odot })^{2.7}\dot{M}_\mathrm{R}$$
(2)
$$\dot{M}_{\mathrm{B2}}/(\mathrm{M}_{\odot }\,\mathrm{yr}^{-1})=4.83\times 10^{-9}(M/\mathrm{M}_{\odot })^{-2.1}(L/\mathrm{L}_{\odot })^{2.7}\dot{M}_\mathrm{R}$$
The difference between $`\dot{M}_{\mathrm{B1}}`$ and $`\dot{M}_{\mathrm{B2}}`$ is the division by the current mass of the star instead of the initial mass of the star. The timescales for the 0.565 M track of Schönberner were recalculated by us using the $`M_{\mathrm{env}}`$$`T_{\mathrm{eff}}`$ relation given by Schönberner but with the mass loss prescriptions of Blöcker. For all tracks the $`\dot{M}_{\mathrm{B1}}`$ prescription was used, except for the 0.605 M track where $`\dot{M}_{\mathrm{B2}}`$ was used, which results in larger AGB mass loss rates.
The mass loss rates can reach values of the order of $`10^{-4}`$ M yr$`^{-1}`$ to $`10^{-3}`$ M yr$`^{-1}`$, and it is clear that if this mass loss were to last for a long time, the star would evaporate. Thus, an end to the AGB wind has to be invoked. Blöcker used the pulsation period to define the end of the high mass loss phase; he assumed that AGB stars pulsate in the fundamental mode, where the period can be calculated using equation (3) (Ostlie & Cox 1986).
$$\mathrm{lg}(P_0/\mathrm{d})=-1.92-0.73\mathrm{lg}(M/\mathrm{M}_{\odot })+1.86\mathrm{lg}(R/\mathrm{R}_{\odot })$$
(3)
When the central star has reached an inferred pulsation period of $`P_\mathrm{a}`$ = 100 d (which occurs when the star has a surface temperature somewhere between roughly 4500 K and 6000 K) the Bowen mass loss stops. When the star has subsequently reached an inferred pulsation period of $`P_\mathrm{b}`$ = 50 d the post-AGB mass loss starts; in between the mass loss rates are connected by a smooth transition. Hence the following definitions will be used in the remainder of the paper: the AGB phase is that part of the evolution where the pulsation period is greater than $`P_\mathrm{b}`$, the (AGB) transition phase is that part of the evolution where the pulsation period is in between $`P_\mathrm{a}`$ and $`P_\mathrm{b}`$, and finally the post-AGB phase is that part of the evolution where the pulsation period is smaller than $`P_\mathrm{b}`$.
One should realize that post-AGB stars with pulsation periods larger than 50 d exist (e.g. HD 52961 with a period of 72 d; Waelkens et al. 1991, Fernie 1995). Thus the pulsation period recipe used by Blöcker should only be regarded as an approximate parameterization of the end of the AGB.
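To make the parameterization concrete, the effective temperature at which a given inferred period is reached can be found by inverting equation (3) numerically, using the blackbody relation $`L=4\pi R^2\sigma T_{\mathrm{eff}}^4`$ for the radius. The luminosity and current mass in the sketch below are illustrative values only, not taken from the tracks:

```python
import numpy as np

def fundamental_period(m, r):
    """Fundamental-mode pulsation period in days, eq. (3); m, r in solar units."""
    return 10.0**(-1.92 - 0.73*np.log10(m) + 1.86*np.log10(r))

def radius(lum, teff, t_sun=5772.0):
    """Radius in solar radii from L = 4 pi R^2 sigma T_eff^4."""
    return np.sqrt(lum) * (t_sun/teff)**2

# Bisect for the T_eff at which P_0 = 100 d (illustrative L and M assumed).
lum, mass, p_end = 6000.0, 0.8, 100.0
lo, hi = 3000.0, 8000.0
for _ in range(60):
    mid = 0.5*(lo + hi)
    if fundamental_period(mass, radius(lum, mid)) > p_end:
        lo = mid            # period still too long: move to higher T_eff
    else:
        hi = mid
print(0.5*(lo + hi))        # ~4700 K, inside the 4500-6000 K range quoted
```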
For lower temperatures the post-AGB mass loss is given by the Reimers law (1). Since the Reimers mass loss is proportional to $`T_{\mathrm{eff}}^{-2}`$ for a constant luminosity and stellar mass, the post-AGB mass loss rates decrease during the evolution of the object. A radiation driven wind will take over when the star has reached temperatures above approximately 20 000 K. The mass loss rate for this wind, based on Pauldrach et al. (1988), is given by equation (4). Hence at any stage of the post-AGB evolution either equation (1) or (4) is used, whichever of the two yields the larger mass loss.
$$\dot{M}_{\mathrm{CPN}}/(\mathrm{M}_{\odot }\,\mathrm{yr}^{-1})=1.29\times 10^{-15}(L/\mathrm{L}_{\odot })^{1.86}$$
(4)
During the evolution on the AGB and beyond, the envelope mass is reduced due to two processes. In the outer parts of the envelope, mass is lost through a wind, and at the bottom of the envelope, mass is diminished by hydrogen burning. From Trams et al. (1989) we find the mass loss due to hydrogen burning:
$$\dot{M}_\mathrm{H}/(\mathrm{M}_{\odot }\,\mathrm{yr}^{-1})=1.012\times 10^{-11}(L/\mathrm{L}_{\odot })X_\mathrm{e}^{-1}$$
(5)
With $`X_\mathrm{e}`$ the hydrogen mass fraction in the envelope (70 %).
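The prescriptions of equations (1), (2), (4) and (5) are simple enough to be collected into a single routine. The sketch below is a schematic implementation (solar units in, M$`_{\odot }`$ yr$`^{-1}`$ out); the stellar parameters in the examples are assumed for illustration, not taken from the tracks:

```python
import numpy as np

def mdot_reimers(lum, rad, mass, eta=1.0):
    """Reimers mass loss, eq. (1)."""
    return 4e-13 * eta * lum * rad / mass

def mdot_bowen_b1(lum, rad, mass, m_zams):
    """Bloecker's fit to Bowen's Mira models, eq. (2), version B1."""
    return 4.83e-9 * m_zams**-2.1 * lum**2.7 * mdot_reimers(lum, rad, mass)

def mdot_cpn(lum):
    """Radiation-driven wind of the central star, eq. (4)."""
    return 1.29e-15 * lum**1.86

def mdot_hburn(lum, x_e=0.7):
    """Envelope consumption by hydrogen burning, eq. (5)."""
    return 1.012e-11 * lum / x_e

def mdot_postagb(lum, rad, mass):
    """Post-AGB wind: Reimers or CPN wind, whichever is larger."""
    return max(mdot_reimers(lum, rad, mass), mdot_cpn(lum))

# AGB example (assumed values): a 0.9 Msun star of 6000 Lsun, R = 300 Rsun.
print(mdot_bowen_b1(6000.0, 300.0, 0.9, m_zams=3.0))

# Post-AGB example (assumed values): L = 6000 Lsun, M = 0.605 Msun.
for teff in (6000.0, 20000.0, 50000.0):
    rad = np.sqrt(6000.0) * (5772.0/teff)**2
    print(teff, mdot_postagb(6000.0, rad, 0.605), mdot_hburn(6000.0))
```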
#### 2.1.2 The evolutionary timescales
The evolutionary timescales for hydrogen burning post-AGB stars depend firstly on the core mass and secondly on the mass loss rates. The evolutionary timescales can be calculated relatively easily by making use of the fact that there exists a unique relation between the envelope mass and the stellar temperature for every core mass. One can calculate the evolutionary timescales by combining this relation with the mass loss prescriptions given in the previous section. Dr. Blöcker kindly provided us with tables from which the $`M_{\mathrm{env}}`$–$`T_{\mathrm{eff}}`$ relation could be reproduced.
The timescale for the envelope depletion is given by:
$$\frac{\mathrm{\Delta }M_{\mathrm{env}}}{\mathrm{\Delta }t}=-(\dot{M}_{\mathrm{wind}}+\dot{M}_\mathrm{H})$$
(6)
Using the $`M_{\mathrm{env}}`$–$`T_{\mathrm{eff}}`$ relation, this expression can be transformed into an expression for the evolutionary rate $`\mathrm{\Delta }T_{\mathrm{eff}}/\mathrm{\Delta }t`$ of the central star
$$\frac{\mathrm{\Delta }T_{\mathrm{eff}}}{\mathrm{\Delta }t}=\frac{\mathrm{\Delta }T_{\mathrm{eff}}}{\mathrm{\Delta }M_{\mathrm{env}}}\times \frac{\mathrm{\Delta }M_{\mathrm{env}}}{\mathrm{\Delta }t}=-\frac{\dot{M}_{\mathrm{wind}}+\dot{M}_\mathrm{H}}{\frac{\text{d}M_{\mathrm{env}}}{\text{d}T_{\mathrm{eff}}}}$$
(7)
Integrating this expression yields the actual timescales for the evolution of the central star.
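Given a tabulated $`M_{\mathrm{env}}`$–$`T_{\mathrm{eff}}`$ relation, this integration reduces to a simple sum. The sketch below uses a toy relation with hypothetical numbers purely to show the bookkeeping; the real calculation uses the Blöcker tables:

```python
import numpy as np

def post_agb_age(teff_grid, menv_grid, mdot_wind, mdot_h):
    """Integrate eq. (7): dt = -dM_env / (Mdot_wind + Mdot_H).
    teff_grid (K, increasing) and menv_grid (Msun, decreasing) tabulate the
    M_env - T_eff relation; mdot_wind/mdot_h are callables of T_eff (Msun/yr)."""
    t = np.zeros_like(teff_grid)
    for i in range(1, len(teff_grid)):
        dm = menv_grid[i-1] - menv_grid[i]             # envelope mass lost (> 0)
        md = 0.5*(mdot_wind(teff_grid[i-1]) + mdot_wind(teff_grid[i])) \
             + 0.5*(mdot_h(teff_grid[i-1]) + mdot_h(teff_grid[i]))
        t[i] = t[i-1] + dm / md                        # trapezoidal step in yr
    return t

# Toy M_env - T_eff relation (hypothetical numbers, for illustration only):
teff = np.linspace(6000.0, 30000.0, 100)
menv = 1e-2 * (teff/6000.0)**-2.0
age = post_agb_age(teff, menv, lambda T: 1e-7*(6000.0/T)**2, lambda T: 8.7e-8)
print(age[-1])  # total time in years to reach 30 000 K
```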
In Fig. 1 some relations are presented for the 0.565 M, 0.605 M and 0.696 M tracks. The upper panel shows the envelope mass as a function of photospheric temperature. The second panel presents the mass loss rates as a function of temperature, and the third panel shows the evolutionary rate in kelvin per year as a function of temperature. The second panel shows that the transition phase occurs at higher temperatures for larger core masses with the recipes described above.
The evolutionary rate, mass loss rate and envelope mass are related to each other in the following way. The evolutionary rate in terms of increase in temperature per year is slow on the steep part of the $`M_{\mathrm{env}}`$–$`T_{\mathrm{eff}}`$ relation, and more rapid on the shallow part. Larger mass loss rates imply of course more rapid changes in the stellar temperature. These effects are visible in Fig. 1: at low temperatures in the post-AGB phase the evolutionary rates are smallest, while for higher temperatures the evolutionary rate increases. The minima around lg($`T_{\mathrm{eff}}`$/K) $`\approx `$ 3.7 to 3.9 correspond to the onset of the smaller post-AGB mass loss, slowing down the evolution, which accelerates later on the shallow part of the $`M_{\mathrm{env}}`$–$`T_{\mathrm{eff}}`$ relation.
Interestingly, the minimum in the evolutionary rate occurs right after the end of the AGB transition phase. The net increase in effective temperature in kelvin per year is slowest for all tracks just after the start of the post-AGB evolution, which is when the temperature of the objects corresponds to G or F spectral types. The increase in temperature is less than 1 K yr$`^{-1}`$ for the 0.565 M and 0.605 M tracks. How does this evolutionary rate compare with the observations? Fernie & Sasselov (1989) calculated the possible increase in temperature for UU Her stars. From the absence of a change in pulsation period of UU Her, 89 Her and HD 161796, they place an upper limit of 0.5 K yr$`^{-1}`$ on the temperature increase. Their conclusion was that these objects can not be post-AGB stars, because the evolutionary rates should be much higher. However, this may be the case when one averages over the entire post-AGB temperature span, but the observed lack of temperature increase is consistent with the predictions for this temperature range, as was already shown by Schönberner & Blöcker (1993). Therefore a post-AGB nature for the UU Her stars can not be excluded on this basis.
### 2.2 The influence of mass loss on the evolutionary timescales
It is evident from the above that the value of the post-AGB mass loss rate has an important effect on the evolutionary timescales. However, the mass loss rate is not known observationally for cool post-AGB stars. The usual tracer of mass loss, H$`\alpha `$ emission, which is often observed in the spectra of post-AGB objects, is likely to be the result of stellar pulsations (see the discussion by Oudmaijer & Bakker 1994 and Lèbre et al. 1996). A possible tracer of mass loss in cool post-AGB stars is the CO first-overtone emission at 2.3 $`\mu `$m (Oudmaijer et al. 1995), but, even if this is the case, the mass loss rates still have to be determined.
The lack of theoretical and observational values for post-AGB mass loss rates forced Schönberner and Blöcker to resort to the heuristic Reimers law for the cool part of the post-AGB evolution. As an illustration of the effect of the post-AGB mass loss rates on the evolutionary timescales, we have calculated these timescales for three core masses with the post-AGB mass loss rate at 0, 1, 5 and 10 times the standard post-AGB value (indicated as 0$`\times `$pAGB etc.). The results for the 0.565 M, 0.605 M and 0.696 M tracks are presented in Table 1, where the timescales since the end of the transition mass loss phase are given. The increase of the post-AGB mass loss rates indeed decreases the timescales of the evolution, confirming the results of Trams et al. (1989) and Górny et al. (1994). The 0$`\times `$pAGB mass loss rate effectively determines the slowest possible evolutionary speed, since the only mass loss is through hydrogen burning. On average, the difference in speed between 0$`\times `$pAGB and 1$`\times `$pAGB is roughly a factor of two to three.
### 2.3 Distribution over spectral type
The availability of the evolutionary rates allows us to investigate the predicted distribution over spectral type. Oudmaijer, Waters & Pottasch (1993) and Oudmaijer (1996) used the coarse grid of the Schönberner (1979, 1983) tracks and found that a star spends by far most of the time as a B-type star. Only for the 0.644 M track half of the time is spent in the G phase, and somewhat less as a B star, while almost no time is spent as an F or A star.
One might not expect an object to spend a large fraction of the time as a B star, because the evolutionary rates are largest at B spectral types (Fig. 1). This can be understood, however, when we consider the large range of temperatures that corresponds to spectral type B: roughly between 10 000 K and 30 000 K. In contrast, A stars only have a temperature range between approximately 7500 K and 10 000 K. This large temperature range more than compensates for the high evolutionary rates, and results in a longer time spent as a B star during the post-AGB evolution. The large fraction of time that is spent as a G star in the 0.644 M track is explained by the steep $`M_{\mathrm{env}}`$–$`T_{\mathrm{eff}}`$ relation for temperatures less than approximately 6000 K, as can be deduced from equation (7). The distribution over spectral type for the tracks presented here with 1$`\times `$pAGB mass loss is calculated using the conversion from effective temperature to spectral type listed by Straižys & Kuriliene (1981). These are $`T_{\mathrm{eff}}`$(A0I) = 9800 K, $`T_{\mathrm{eff}}`$(F0I) = 7400 K and $`T_{\mathrm{eff}}`$(G0I) = 5700 K.
To allow for a comparison with the distributions presented by Oudmaijer et al. (1993) we will assume in this section that the post-AGB phase starts when $`T_{\mathrm{eff}}`$ = 5000 K and ends when $`T_{\mathrm{eff}}`$ = 25 000 K. The distributions can be easily obtained by calculating the time spent as a G star (5000 K – G0), F star (G0 – F0) etc. (see the sketch below), and subsequently dividing these numbers by the total time spent as a post-AGB star. The resulting distributions are plotted in Fig. 2. For comparison the distribution over spectral type of the sample of 21 post-AGB objects in the list of Oudmaijer et al. (1992) is given. The observed distribution peaks at F, while no B stars are found at all.<sup>1</sup><sup>1</sup>1 The few B type objects in the sample of Oudmaijer et al. (1992) appear to have low effective temperatures. HR 4049 and HD 44179 (the central star of the Red Rectangle) are listed in the literature as B stars, but abundance analyses showed that their effective temperatures are lower than typical for a B star: about 7500 K (late A or early F type; van Hoof et al. 1991, Waelkens et al. 1992). The reason that these objects were classified as B, instead of late A stars, is the extreme metal deficiency of these stars, so that few metallic lines are present in the spectra. Thus the observed spectra mimic the spectra of hot objects.
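The bookkeeping behind these distributions is straightforward: given an age versus $`T_{\mathrm{eff}}`$ relation from the integration of equation (7), the time per spectral type follows by interpolation. In the sketch below the track is a hypothetical placeholder; the boundaries are those of Straižys & Kuriliene (1981) together with the assumed 5000 K and 25 000 K limits:

```python
import numpy as np

def time_in_range(teff, age, t_lo, t_hi):
    """Time (yr) spent between two effective temperatures, by interpolating
    the age versus T_eff relation (teff increasing)."""
    return np.interp(t_hi, teff, age) - np.interp(t_lo, teff, age)

# Spectral type boundaries in K (assumed post-AGB limits at 5000 and 25000 K):
bounds = {'G': (5000.0, 5700.0), 'F': (5700.0, 7400.0),
          'A': (7400.0, 9800.0), 'B': (9800.0, 25000.0)}

teff = np.linspace(5000.0, 25000.0, 200)   # toy age-T_eff track
age = 4000.0 * np.log(teff/5000.0)         # hypothetical, illustration only

total = time_in_range(teff, age, 5000.0, 25000.0)
for sp, (lo, hi) in bounds.items():
    print(sp, time_in_range(teff, age, lo, hi) / total)
```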
One should realize that planetary nebula central stars with temperatures below 25 000 K are observed: e.g. IRAS19336–0400 with $`T_{\mathrm{eff}}`$ = 23 000 K (Van de Steene, Jacoby & Pottasch 1996, Van de Steene & van Hoof 1995). Hence one could also assume an upper limit of 20 000 K for the post-AGB regime. However, the distribution over spectral type is not very different in this case, the fraction of B-type stars will be lower by approximately 10 %.
The 0.605 M distribution is different from the results presented by Oudmaijer et al. (1993) for the 0.598 M Schönberner track. In the present plot, the distribution peaks at F, while the 0.598 M distribution peaks at G. The distribution for the same track is different in Oudmaijer (1996), since there the evolutionary timescales were shortened by 1000 yr ‘in order to make the old 0.598 M track consistent with the new calculations’ (Marten & Schönberner 1991). This resulted in a distribution that strongly peaks at spectral type B.
The main differences between the old 0.598 M and new 0.605 M calculations are the progenitor mass of the star (1 M and 3 M respectively), the mass loss prescriptions and the definition of the end of the AGB. In the Schönberner calculations, no transition wind was assumed: the AGB mass loss would abruptly change into a post-AGB wind at 5000 K. In the Blöcker calculations, a transition wind is assumed between 5000 K and 6000 K for the 0.605 M track. This implies shorter timescales for the G type phase of the 0.605 M track with respect to the older calculations, but does not affect the time spent as F, A or B stars, where for both tracks a Reimers post-AGB mass loss is assumed. The only explanation we can find for the apparent difference in evolutionary speed is that the dependence of the temperature on the envelope mass of the 0.605 M track is different from that of the 0.598 M track. Apparently, the slope of the $`M_{\mathrm{env}}`$–$`T_{\mathrm{eff}}`$ relation can differ significantly for stars with a different evolutionary past, even when the resulting core masses are nearly identical.
The predicted distributions in Fig. 2 show large differences. For the smallest core mass, most of the time is spent as a B star. For the 0.605 M case the lifetime is distributed evenly over the F and B phase. The fastest track with a core mass of 0.696 M shows a minimum at A, while the rest of the lifetime is spread evenly over G, F and B. The 0.605 M distribution nicely reproduces the observed peak at F. However, all tracks predict many more B stars than observed.
In order to compare the observed and predicted distributions more quantitatively, we have performed a Kolmogorov-Smirnov test using both an upper limit of 20 000 K and of 25 000 K for the post-AGB regime. The resulting probabilities are given in Table 2. From these tests it appears that the 0.565 M track is an unlikely model for the observed post-AGB sample. On the other hand, the 0.605 M and the 0.696 M tracks can not be excluded. Given the fact that stars on the 0.696 M track evolve much faster than on the 0.605 M track, they would constitute only a small fraction of the total number of observable post-AGB stars. We will adopt the 0.605 M track to describe the post-AGB evolution in the remainder of this paper.
This exercise shows that the predictions of the distribution over spectral type are subject to large uncertainties, depending both on the $`M_{\mathrm{env}}`$–$`T_{\mathrm{eff}}`$ relation and on the assumed mass loss prescription. However, regardless of what assumptions are made, one would expect a fair number of B-type post-AGB stars, which are not observed in the sample depicted in Fig. 2. Hence this discrepancy remains unresolved. Conversely, it can be stated that when a larger sample of post-AGB stars with reliable temperature determinations becomes available, the procedure described here can be a very effective means of testing evolutionary tracks.
## 3 The model
In order to calculate the spectral evolution of post-AGB stars, we used the photo-ionization code cloudy version 84.12a (Ferland 1993). Some modifications have been made to the code to facilitate the computations. The most important change was the introduction of several broadband photometric filters, including the Johnson and the IRAS filters. The in-band fluxes for these filters were calculated by folding both the spectral energy distribution and the emission line contribution with the filter passband. Internal extinction due to continuum opacities was included both in the continuum and the line contribution.
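The in-band fluxes are thus obtained by folding the emergent spectrum with the filter passband. A minimal sketch of this folding is given below; the box-shaped passband and the blackbody input are assumptions for illustration, whereas the actual code uses the tabulated Johnson and IRAS passbands:

```python
import numpy as np

def in_band_flux(wave, f_lambda, wave_filt, transmission):
    """Fold a spectral energy distribution (with any line contribution already
    added to f_lambda) with a filter passband."""
    t = np.interp(wave, wave_filt, transmission, left=0.0, right=0.0)
    return np.trapz(f_lambda * t, wave) / np.trapz(t, wave)

# Hypothetical example: a 6000 K blackbody through a box-shaped 12 micron band.
wave = np.linspace(1.0, 40.0, 4000) * 1e-6                     # m
h, c, kb, T = 6.626e-34, 2.998e8, 1.381e-23, 6000.0
b_lambda = 2*h*c**2 / wave**5 / (np.exp(h*c/(wave*kb*T)) - 1.0)
band = in_band_flux(wave, b_lambda, np.array([8e-6, 15e-6]), np.array([1.0, 1.0]))
print(band)
```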
A dust model written by P.G. Martin was already included in the original code. For our modeling we used the grain species labelled ‘ISM Silicate’ and ‘ISM Graphite’. The optical constants were taken from Martin & Rouleau (1991). The absorption and scattering cross sections were calculated assuming a standard ISM grain size distribution (Mathis, Rumpl & Nordsieck 1977). All calculations were done assuming a dust-to-gas mass ratio of 1/150. For the chemical composition of the gas we assumed the abundances given in Aller & Czyzak (1983), supplemented with educated guesses for elements not listed therein (as given in cloudy).
The original code only allowed for the computation of a model with a constant dust-to-gas ratio throughout the entire nebula. However, the density profiles we calculate extend from the stellar surface outward. It is not realistic to assume that dust is present near the stellar surface and therefore we introduced new code in cloudy to solve this problem. This enabled the dust to exist only outside a prescribed radius or, alternatively, only in those regions where the equilibrium temperature of the dust would be below a prescribed sublimation temperature. These prescriptions work as a binary switch: at a certain radius either no dust or the full amount is present. In those regions where dust exists, the dust-to-gas ratio is assumed to be constant.
Two models for the dust formation are adopted. In the first model it is assumed that dust is only formed in the AGB wind, hence in material that was ejected before the stellar pulsation period reached $`P_\mathrm{b}`$. We will call this the AGB-only dust formation model. In the second model it is assumed that the dust formation continues in the post-AGB wind. We will call this the post-AGB dust formation model. Due to limitations of the code, which we discuss below, we will only investigate the spectral evolution after the post-AGB phase has started. This implies that for the AGB-only dust formation model, the inner dust radius is already at a distance from the central star and the equilibrium temperature of the grains is always below the sublimation temperature. In the post-AGB dust formation model we assume that dust only exists in those parts of the nebula where the equilibrium temperature of the grains is below the sublimation temperature. The assumed values for the sublimation temperature are 1500 K for graphite and 1000 K for silicates.
It should be noted that we only try to model dust formation and not the destruction of grains by the stellar UV field or shocks. Especially in the AGB-only dust formation model the grains at the inner dust radius are always exposed and it is expected that they eventually will be destroyed. However, little is known about grain destruction, and the rates at which this destruction occurs are very uncertain. In the case of continuing dust formation in the post-AGB wind this problem can be expected to be of lesser importance since there is a constant supply of new grains shielding the older grains.
The radiative transport in cloudy is treated in one dimension only, i.e. the equations are solved radially outwards. This assumption makes the code unsuitable to compute models of nebulae with significant amounts of scattered light and/or diffuse emission when the nebula has a moderate to high absorption optical depth. For low optical depths re-absorption of diffuse emission in the nebula is negligible and the assumptions in cloudy work very well. However, for moderate to high optical depths the re-absorption of diffuse emission that is produced in the outer parts of the nebula and is radiated inwards becomes important. The assumptions made in cloudy make it impossible to account for this energy source, nor for the amount of flux absorbed in these regions. Since the circumstellar envelope is optically thick in the AGB phase, our calculations always start shortly after the transition from the AGB to the post-AGB phase is complete.
For low temperature models the only source of diffuse light at optical and UV wavelengths is scattering of central star light by dust grains; for the highest temperature models bound-free emission also plays a role. Since for the highest temperatures the models have a low absorption optical depth, the bound-free emission causes only minor problems and we can judge the quality of the models primarily by investigating the (wavelength-averaged) scattering optical depth. An example of this approach will be shown in Section 4.
In the calculations, the central star was assumed to emit as a blackbody. All the models were calculated for a distance of 1 kpc.
### 3.1 Density profiles of the circumstellar shell
The density profiles of the expanding shell are calculated for the homogeneous and spherical case. Using the mass loss prescriptions described above and assuming an outflow velocity they can be calculated for any moment in time $`t`$.
$$\rho (r,t)=\frac{\dot{M}_{\mathrm{wind}}(t_{\mathrm{ej}})}{4\pi r^2(t,t_{\mathrm{ej}})v_{\mathrm{exp}}(t_{\mathrm{ej}})}$$
(8)
with
$$r(t,t_{\mathrm{ej}})=R_{*}(t_{\mathrm{ej}})+(t-t_{\mathrm{ej}})v_{\mathrm{exp}}(t_{\mathrm{ej}})$$
the distance from the centre of the star, with $`R_{*}`$ the stellar radius; $`t_{\mathrm{ej}}`$ stands for the time when a certain layer was ejected, and $`v_{\mathrm{exp}}`$ is the expansion velocity of the wind at the moment of ejection. It is assumed to be constant thereafter, and hydrodynamical effects are neglected (see also Section 3.3). In particular, the post-AGB wind has a higher velocity than the AGB shell and will eventually overtake it. That part of the post-AGB wind which has done so contains little mass and is simply discarded.
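A schematic implementation of equation (8) is given below; the mass loss history is a hypothetical constant wind and the units are handled in cgs, purely to illustrate how the layered profile is built up:

```python
import numpy as np

def density_profile(t, t_ej, mdot, v_exp, r_star):
    """Density of the expanding shell, eq. (8): each layer ejected at t_ej
    keeps the outflow velocity it had at ejection."""
    r = r_star + (t - t_ej) * v_exp               # current radius of each layer
    return mdot / (4.0 * np.pi * r**2 * v_exp), r

# Illustrative AGB history (hypothetical numbers): constant wind for 1e4 yr.
yr, km = 3.156e7, 1e5                             # s per yr, cm per km
t_ej = np.linspace(-1e4, 0.0, 200) * yr           # ejection times before t = 0
mdot = np.full_like(t_ej, 1e-4 * 1.989e33 / yr)   # 1e-4 Msun/yr in g/s
rho, r = density_profile(3000.0*yr, t_ej, mdot, 15.0*km, 3e13)
print(rho[0], rho[-1])                            # densities fall off as r^-2
```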
### 3.2 The AGB circumstellar shell
The AGB shell is the principal contributor to the IRAS fluxes, and it is necessary to have a good description of the stellar temperature as a function of time during the AGB, in order to compute the mass loss history using equation (2). The $`M_{\mathrm{env}}`$–$`T_{\mathrm{eff}}`$ relation discussed in Section 2.1.2 starts at temperatures roughly between 3000 K and 4000 K (the relations we received from Blöcker extend to lower temperatures than given in his paper). A description of the AGB evolution for lower temperatures is lacking. Unfortunately, the evolution of $`T_{\mathrm{eff}}`$ and $`L`$ on the AGB is not shown in the Blöcker papers. We therefore simply extrapolate the $`M_{\mathrm{env}}`$–$`T_{\mathrm{eff}}`$ relation logarithmically to lower temperatures. The ‘AGB’ star then evolves according to the extended relation. The $`M_{\mathrm{env}}`$–$`T_{\mathrm{eff}}`$ relation rises rather steeply at the low temperature end, so that only a limited amount of extrapolation is necessary. In this way we find reasonable start temperatures for the AGB. Since our method implicitly assumes that the luminosity remains constant during the AGB, the Bowen mass loss rates decrease with temperature. We investigate two different cases of AGB mass loss in the extrapolated part of the $`M_{\mathrm{env}}`$–$`T_{\mathrm{eff}}`$ relation: the normal Bowen mass loss and a constant mass loss held at the value of the Bowen mass loss rate at the first point of the $`M_{\mathrm{env}}`$–$`T_{\mathrm{eff}}`$ relation as given by Blöcker. These choices do not have implications for the post-AGB evolution.
### 3.3 Expansion velocities
The observed mean expansion velocity of AGB winds is 15 km s$`^{-1}`$ (Olofsson 1993), and this will be used as the typical AGB outflow velocity. During the post-AGB phase the situation is different. Slijkhuis & Groenewegen (1992) assumed that the post-AGB wind has the same outflow velocity (15 km s$`^{-1}`$) as the AGB wind. The escape velocity (and hence also the outflow velocity) increases with temperature however. The increasing radiation pressure from the hotter star on the less dense post-AGB wind accelerates the dust and will decrease the densities in the post-AGB wind. The escape velocity for a 10 000 K star is already of the order of 100 km s$`^{-1}`$ to 150 km s$`^{-1}`$. In addition, Szczerba & Marten (1993) found that dust was accelerated to 150 km s$`^{-1}`$ during the post-AGB phase. We therefore assume an expansion velocity of 150 km s$`^{-1}`$ in the post-AGB phase.
One should realize that the scenario of a fast wind that follows a slow wind results in a collision between the two winds (as shown by e.g. Mellema 1993 and Frank et al. 1993). When inspecting the plots of Mellema (1993) we find that the effects of the colliding winds only start to become significant when the radiation driven wind ($`\dot{M}_{\mathrm{CPN}}`$) with velocities in excess of thousands of kilometers per second has developed. The effect of colliding winds will be neglected in the further calculations in this paper.
## 4 The model runs
In this section we will investigate the spectral evolution of a post-AGB star by varying certain parameters that influence the mass loss rate, wind velocity and the evolutionary speed of the central star. We restrict ourselves to the 0.605 M track, and the emphasis will be on the infrared properties of the dust shell around the star. We calculated a total of seven runs. Every individual run consists of four different series of models. These individual models represent post-AGB shells with carbon-rich or oxygen-rich dust, both with and without dust formation in the post-AGB wind. We assumed the total mass of the AGB and post-AGB shell to be 2 M. With the extrapolated $`M_{\mathrm{env}}`$–$`T_{\mathrm{eff}}`$ relation this implies a start temperature of the AGB of 2543 K. The different run parameters are outlined in Table 3.
The main differences between the runs are threefold. Firstly, as stated above, the post-AGB wind velocity is subject to uncertainty. In order to assess the influence of this velocity we used three different values: 15 km s$`^{-1}`$, 150 km s$`^{-1}`$ and 1500 km s$`^{-1}`$. Secondly, the differences between a decreasing AGB mass loss rate and a constant mass loss rate are investigated. In the second case the constant mass loss is kept at the value of the Bowen mass loss rate at the first point of the $`M_{\mathrm{env}}`$–$`T_{\mathrm{eff}}`$ relation as given by Blöcker. This value for the mass loss of $`\dot{M}`$ = $`1.99\times 10^{-4}`$ M yr$`^{-1}`$ will then be adopted throughout the extrapolated part of the $`M_{\mathrm{env}}`$–$`T_{\mathrm{eff}}`$ relation, i.e. for temperatures between 2543 K and 3743 K. Approximately 85 % of the total shell mass is ejected in this phase. Thirdly, the definition of the end of the AGB is adjusted. As an alternative, the pulsation periods that define the start of the transition wind and the start of the post-AGB phase are set to 125 d and 75 d respectively, which corresponds to a transition phase occurring between 4723 K and 5418 K. For these runs the start of the transition period (defined by $`P_\mathrm{a}`$) will be reached earlier; however, the transition period itself will take much longer, due to the fact that the $`M_{\mathrm{env}}`$–$`T_{\mathrm{eff}}`$ relation is much steeper in the transition region, making the evolutionary speed much slower. This gives the paradoxical result that an earlier start of the transition phase gives rise to a later start of the post-AGB phase. Also the smaller mass loss rates result in a slower evolution between 5418 K and 6042 K (the start of the post-AGB phase in the ($`P_\mathrm{a}`$,$`P_\mathrm{b}`$) = (100d,50d) runs). After the star has reached $`T_{\mathrm{eff}}`$ = 6042 K, the post-AGB mass loss rates are the same, and consequently the evolutionary speed in terms of kelvin per year is identical.
The models are calculated at the temperatures listed in Table 4. The second and third entries in this table reflect the time that has passed since the end of the transition wind for the ($`P_\mathrm{a}`$,$`P_\mathrm{b}`$) = (100d,50d) and ($`P_\mathrm{a}`$,$`P_\mathrm{b}`$) = (125d,75d) case respectively.
We will start with the results of run 4, which we will use as a reference since we consider the parameters for this run the most realistic. Subsequently we discuss the main differences between this model run and the other runs. The main results of run 4 are visualized in Fig. 3, where the tracks in the IRAS colour-colour diagram are plotted, in Fig. 4 where certain fluxes are plotted, and in Figs. 5 and 6 where a selection of spectral energy distributions is presented. In these figures a distance of 1 kpc to the object is assumed. We will discuss the results for the oxygen-rich and carbon-rich dust separately.
For this run we have made an estimate for the wavelength-averaged scattering optical depth (see also the discussion in Section 3). For the models without post-AGB dust formation the results are shown in Fig. 4. The results for the models with post-AGB dust formation are not shown, but they are almost identical. It can be seen that the scattering optical depth for the silicate and graphite models are very similar, while on the other hand the absorption optical depth is much higher for the graphite models when compared to the silicate models (this can be judged by comparing the V magnitudes for both models). This is caused by a combination of two effects. Firstly, graphite is a very efficient absorber at optical wavelengths and a relatively less efficient scatterer. The reverse is the case for silicates. This tends to level out the differences. Secondly, in the averaging process the optical depths are weighted by the output spectrum. This spectrum peaks much more towards the red for the graphite models due to the higher internal extinction. The scattering optical depth is lower at longer wavelengths and this tends to level out the differences even further. The latter effect also explains why the scattering optical depth drops off so slowly, or even rises towards higher temperatures. The peak of the energy distribution shifts towards the blue, where the scattering efficiency is much higher, thus countering the effects of the dilution of the circumstellar envelope.
Using these results we were able to obtain a worst case estimate for the amount of energy that could have been missed in the dust emission due to the fact that the scattering processes were not properly treated in our models. The amounts for the models shown in Fig. 4 are typically 10 % to 20 % of the total far-IR flux for the silicate models, and 5 % to 10 % for the graphite models. The worst case estimates for the models not shown in Fig. 4 are all below 40 %. These estimates reflect the accuracy of the absolute fluxes and magnitudes predicted by our models. However, they are expected to cancel to first order when colours are calculated. Therefore we decided to omit the lowest temperature models whenever absolute fluxes are shown, but to include them when colours are shown.
Scattering processes will not only influence the total amount of energy absorbed, but also the run of the dust temperature with radius. This can influence the shape of the spectrum and hence also the colours. We cannot estimate the magnitude of this effect and will assume it to be negligible.
### 4.1 Silicate dust
We will start with a description of the silicate tracks in the IRAS colour-colour diagram (Fig. 3). At first the – and – colours both increase. At some point the – colour remains constant, while – still increases. This is followed by a decrease in both the – and – colours. Hence the track makes a clockwise loop in the colour-colour diagram.
The first part of the track represents the cooling of the dust shell due to its expansion. From Table 4 one sees that the inner radius of the shell has increased by a factor of 4 between $`T_{\mathrm{eff}}`$ = 6200 K and $`T_{\mathrm{eff}}`$ = 8000 K, which leads to a cooling and a decrease of the IRAS fluxes (Fig. 4). The subsequent evolution in the IRAS colour-colour diagram is rather counter-intuitive. Normally one would expect the shell to make the familiar counter-clockwise loop in the colour-colour diagram as found by Loup (1991), Volk & Kwok (1989) and Slijkhuis & Groenewegen (1992). In such a loop, the shell continues to cool until the photospheric radiation begins to dominate the emission, first at the 12 $`\mu `$m band. At this time, the 25 $`\mu `$m flux and the – colour decrease strongly due to the expansion and cooling of the shell. The – colour will continue to increase slowly. Later, as the star starts to dominate at 25 $`\mu `$m, the – colour will remain constant and the – colour will start to decrease. When the star is the dominant contributor in all three IRAS bands, the loop will end in the Rayleigh-Jeans point. However, the above authors assumed a constant temperature of the central star, while the effects of an evolving star on the IRAS colours cannot be neglected; in the 0.605 M<sub>⊙</sub> track the star rapidly evolves toward higher effective temperatures. For example, the kinematic age of the shell, and thus its outer (or inner) radius, has increased by only 25 % between $`T_{\mathrm{eff}}`$ = 8000 K and $`T_{\mathrm{eff}}`$ = 12 000 K. One can regard the circumstellar shell as essentially stationary around the evolving star. An evolving star embedded in a stationary shell results in a heating of the dust and an increase of the IRAS fluxes (Fig. 4). The re-heating of the circumstellar envelope was already found by Marten, Szczerba & Blöcker (1993) in their calculations. The re-heating continues even beyond the last data point, hence in the colour-colour diagram the track keeps evolving to higher colour temperatures in both colours. Around the turning point the 12 $`\mu `$m flux is partly photospheric. The photospheric flux decreases for higher temperatures, so the 12 $`\mu `$m flux reacts later to the rising dust flux than the 25 $`\mu `$m and 60 $`\mu `$m fluxes. Therefore the – colour starts decreasing later than the – colour. The result is a clockwise loop in the colour-colour diagram.
One should note the behaviour of the IRAS flux densities in Fig. 4. The flux densities reach a minimum during the slow evolution of the star. The minimum is followed by a gradual increase in the total infrared output due to the heating of the shell resulting from the increasing absorbing efficiency of the dust (see also the discussion in Section 4.2). A model star like this (i.e. with silicate dust) would have a larger chance of being detected in an infrared survey like IRAS when it has evolved to higher temperatures. This fact, combined with the predicted distribution over spectral type, implies that hot post-AGB stars, or young PN, should be expected to be more abundant in samples of evolved stars with infrared excess.
#### 4.1.1 Dust formation in the post-AGB wind
The addition of dust formation in the post-AGB wind changes the spectral energy distribution. Fig. 4 shows that the 25 $`\mu `$m and 60 $`\mu `$m flux densities have the same values as in the AGB-only dust formation case. The 12 $`\mu `$m band is strongly affected by the addition of the hot dust. This is due to the presence of the 10 $`\mu `$m silicate emission feature, which causes the emissivity of the hot silicate dust to peak strongly at 10 $`\mu `$m. The larger 12 $`\mu `$m flux density is immediately reflected in the colour-colour diagram. It puts the starting point at a slightly higher colour temperature than before. As the feature increases in strength, the track bends toward higher – colour temperatures. Then, when the star is evolving more rapidly than the shell expands, the cool AGB shell heats up and the track moves down. At the same time, the – colour temperature decreases, reflecting a weakening of the silicate feature relative to the 25 $`\mu `$m flux. This effect is caused by the circumstances under which dust in the post-AGB wind is formed. As the star becomes hotter, the distance at which the equilibrium temperature of the dust drops below 1000 K increases. Therefore the dust formation will take place at larger distances from the star, in a diluted local radiation field where the dust density is lower. Consequently, the silicate feature weakens and the track moves to lower – colour temperatures. Eventually the contribution of the 10 $`\mu `$m feature becomes negligible and the track moves asymptotically to the track without post-AGB dust formation.
#### 4.1.2 Different post-AGB wind velocities
The upper panel of Fig. 7 shows the effect of a different post-AGB wind velocity for silicate dust. The models have been calculated for velocities of 15 km s<sup>-1</sup>, 150 km s<sup>-1</sup> and 1500 km s<sup>-1</sup>. The 1500 km s<sup>-1</sup> tracks resemble the AGB-only dust formation models, because the amount of dust in the post-AGB wind is not large enough to yield an observable effect. The low outflow velocity model shows all the effects outlined for the 150 km s<sup>-1</sup> models, but even more strongly, because the amount of hot dust has increased.
### 4.2 Carbon-rich dust
Let us now turn to the models with carbon-rich dust. This type of dust is a more efficient absorber than silicate dust, as can be seen in Fig. 4. The visual magnitude is larger than for the oxygen-rich models, reflecting a larger circumstellar extinction in the Johnson V band. One can also see that the infrared energy output is larger.
The beginning of the track of the carbon-dust models in the colour-colour diagram (Fig. 3) is located at a higher colour temperature than for the oxygen-rich models because more energy is absorbed by the dust. At first, the track moves to lower temperatures, and as the star begins to evolve more rapidly, the curve makes a slight backward loop in the diagram due to the heating of the shell. Then, contrary to the silicate model, the shell cools again.
This somewhat unexpected result can be explained by the properties of the absorbing material. It is instructive to investigate the effective cross section $`Q_0`$ of the two grain species as a function of the effective temperature of the central star spectrum (i.e. a blackbody). $`Q_0`$ is defined as
$$Q_0=\frac{_0^{\mathrm{}}B_\nu (T_{\mathrm{eff}})\alpha _\nu d\nu }{_0^{\mathrm{}}B_\nu (T_{\mathrm{eff}})d\nu }$$
(9)
Here $`B_\nu `$ is the blackbody intensity distribution and $`\alpha _\nu `$ the absorption cross section of the grains. The formula is chosen in such a way that the total energy absorbed by the grains is proportional to $`L\times Q_0`$. Note that $`L`$ is constant during the evolution. The absorption coefficients have been chosen such that both grain species have the same dust-to-gas mass ratio. The resulting curves are shown in Fig. 8.
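As an illustration of Eq. (9), the short numerical sketch below evaluates the Planck-weighted average on a discrete frequency grid. The power-law grain cross section is a purely hypothetical stand-in (the actual silicate and graphite opacities used in the models are tabulated quantities not reproduced here); only the weighting procedure itself is the point.

```python
import numpy as np

H = 6.626e-27   # Planck constant [erg s]
K = 1.381e-16   # Boltzmann constant [erg/K]
C = 2.998e10    # speed of light [cm/s]

def planck_nu(nu, teff):
    """Blackbody intensity B_nu(T_eff) in cgs units."""
    return (2.0 * H * nu**3 / C**2) / np.expm1(H * nu / (K * teff))

def q0(teff, nu, alpha_nu):
    """Planck-weighted mean absorption cross section, cf. Eq. (9),
    evaluated with the trapezoidal rule on the grid nu."""
    b = planck_nu(nu, teff)
    return np.trapz(b * alpha_nu, nu) / np.trapz(b, nu)

# Hypothetical cross section rising towards the blue, standing in for
# the tabulated silicate/graphite opacities.
nu = np.logspace(13.0, 16.4, 4000)        # ~30 um down to ~12 nm
alpha = 1e-21 * (nu / 1e15) ** 1.5        # [cm^2], illustrative only
for teff in (3.0e3, 1.0e4, 5.0e4, 1.0e5):
    print(f"T_eff = {teff:8.0f} K   Q0 = {q0(teff, nu, alpha):.3e} cm^2")
```

Because the weighting spectrum $`B_\nu (T_{\mathrm{eff}})`$ shifts towards the blue for hotter stars, any cross section that rises towards short wavelengths yields a $`Q_0`$ that grows with $`T_{\mathrm{eff}}`$, which is the behaviour underlying Fig. 8.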
For temperatures between approximately 1000 K and 50 000 K silicates absorb less energy than graphite; the difference can be a factor of 10 in the temperature range of cool post-AGB stars. This is caused by the fact that silicates are inefficient absorbers at optical wavelengths. From these results we can expect that for temperatures between 10 000 K and 50 000 K the total IR emission of silicates will rise much more steeply than for graphite. For temperatures above 50 000 K the peak of the energy distribution shifts into the EUV. At these wavelengths silicates are the more efficient absorbers, and for these temperatures the total amount of energy absorbed by the silicates is higher than for graphite.
Thus, in the last data points of the track in the IRAS colour-colour diagram, the absorption efficiency of graphite increases less strongly with rising effective temperature. In this case, the interplay between the evolving star (now heating the shell less rapidly) and the expanding shell (giving rise to cooling) results in the expanding shell becoming the dominant factor, and consequently the shell cools. In Fig. 4 these two effects are illustrated by the decreasing IRAS fluxes.
#### 4.2.1 Dust formation in the post-AGB wind
In contrast to oxygen-rich dust, graphite has no feature in the IRAS 12 $`\mu `$m pass band. The shape of the graphite emissivity law is also such that the hot dust peaks at near-infrared wavelengths, contributing negligibly to the 12 $`\mu `$m band (Figs. 4 and 6). Therefore one does not expect a large effect of the hot dust on the IRAS 12 $`\mu `$m flux. However, the 12 $`\mu `$m, 25 $`\mu `$m and 60 $`\mu `$m flux densities of these models are lower than for the corresponding AGB-only dust formation models (Fig. 4). This stems from the fact that the hot dust absorbs a large fraction of the stellar energy, yielding a cooler AGB shell.
#### 4.2.2 Different post-AGB wind velocities
The effect of the post-AGB outflow velocity on the graphite models is presented in the lower panel of Fig. 7. The 1500 km s<sup>-1</sup> track behaves almost the same as the 150 km s<sup>-1</sup> track and the track without post-AGB dust. The 15 km s<sup>-1</sup> track, however, is different: first it moves upward, then bends downward, and finally makes the counter-clockwise loop. The initial increase in the – colour is caused by the fact that the large amount of hot dust now shields the AGB shell more efficiently, making the AGB shell even cooler than before. As the star becomes hotter, the dust condensation radius moves rapidly outward, making the shielding less efficient. The AGB shell then receives more energy from the central star and heats up, resulting in higher – temperatures. The tracks evolve asymptotically towards each other, and finally the counter-clockwise loop sets in, as was already explained.
### 4.3 Constant mass loss
In Fig. 9 the silicate tracks for run 4 (decreasing AGB mass loss) and run 3 (constant mass loss) in the IRAS colour-colour diagram are shown. Both the – and the – colour temperature of the constant mass loss track are slightly higher than for the decreasing mass loss track.
The constant mass loss creates relatively more cool dust further from the star. This cooler dust emits less radiation, and therefore the hotter dust in the inner parts of the AGB shell (where the mass loss rates for both tracks are the same) dominates the spectrum more. As a result, the integrated spectrum of all the dust appears hotter. This effect is strongest at 60 $`\mu `$m, weaker at 25 $`\mu `$m and absent at 12 $`\mu `$m. As a whole, the effect of different AGB mass loss rates is small. The effect on the graphite tracks is basically the same and is not shown.
### 4.4 Different end of the AGB
For all the models that have been discussed so far, it was assumed that the transition mass loss starts at $`P_\mathrm{a}`$ = 100 d, and that the Reimers mass loss starts at $`P_\mathrm{b}`$ = 50 d. However, the evolutionary behaviour of the calculated models strongly depends on the exact moment of ‘superwind’ cessation (Szczerba 1993). In this section, we will present an investigation into the effect of changing this unknown parameter.
As stated before, the evolution of the central star in the ($`P_\mathrm{a}`$,$`P_\mathrm{b}`$) = (100d,50d) and the ($`P_\mathrm{a}`$,$`P_\mathrm{b}`$) = (125d,75d) models is identical after $`T_{\mathrm{eff}}`$ = 6042 K. However, the density structure of the circumstellar shell is entirely different when the star has reached this temperature. Whereas the ($`P_\mathrm{a}`$,$`P_\mathrm{b}`$) = (100d,50d) model then barely has a detached shell, the ($`P_\mathrm{a}`$,$`P_\mathrm{b}`$) = (125d,75d) model already has a shell with a kinematic age in excess of 4500 yr, which has much cooler dust.
We will now look at the results for this run in the colour-colour diagram. It appears that the alternative end of the AGB entirely changes the path in the colour-colour diagram (Fig. 10). A selection of spectral energy distributions is presented in Fig. 11.
From the start, the silicate dust track moves strongly to the upper left, then makes a turn to the lower right, and finally bends to the left. Since the star evolves slowly (it takes approximately 750 yr to go from 5450 K to 5500 K), the circumstellar shell cools rapidly, and only the stellar photosphere is visible at 12 $`\mu `$m (Fig. 11). This is reflected in an increasing – colour temperature: the photospheric 12 $`\mu `$m flux remains approximately the same, while the 25 $`\mu `$m flux and the – temperature decrease due to the cooling of the shell. Subsequently the star starts to evolve more rapidly than the shell expands, and the – colour temperature decreases. This is due to the fact that the shell now heats up, increasing the 25 $`\mu `$m flux and decreasing the – colour, contrary to the 12 $`\mu `$m band, which is still dominated by the decreasing photospheric emission of the evolving star. The shell continues to heat up, and the dust emission at 12 $`\mu `$m overtakes the decreasing photospheric flux density. The curve bends to higher – temperatures. The contribution of nebular bound-free emission at 12 $`\mu `$m for the hottest models increases the – colour temperature even more, and the track makes the now familiar clockwise loop.
All the effects outlined above appear weaker for the carbon-rich model. It starts on a cooling curve and later bends into the clockwise loop. This is explained by the fact that the dust emission from the carbon grains is still present at 12 $`\mu `$m at the first points, so that the shell cools in all colours. For the remainder of the evolution there always remains some dust emission in the 12 $`\mu `$m band, making the loop to the left less pronounced.
### 4.5 A near-IR colour-colour diagram
In this section we present the evolution of a post-AGB star in a colour-colour diagram which uses colours at shorter wavelengths: the K– vs. – diagram. This diagram gives a different view of the evolution, since in all models the Johnson K band will be dominated either by the stellar continuum or by the bound-free emission of the ionized part of the nebula. Only in the models containing hot graphite grains will part of the flux in this band originate from the grains. So, contrary to the IRAS colour-colour diagram which we presented earlier, this diagram contains information on both the central star and the dust. The results for run 4 are shown in Fig. 12, giving the tracks using both silicate and graphite grains, and assuming both AGB-only and post-AGB dust formation.
The first thing we notice is that all four tracks, at least in a qualitative sense, look quite similar. This suggests that this colour-colour diagram is less sensitive to the particulars of the grain emission and thus that the information on the evolution of the central star and the nebula is less ‘contaminated’ than in the IRAS colour-colour diagram. We have only investigated this for the 3 M<sub>⊙</sub> track and it is not yet clear whether this observation is valid in a more general context. In recent years the IRAS colour-colour diagram has proven to be a very useful tool for studying post-AGB evolution, and our knowledge has increased considerably since its introduction. However, when we try to understand the details of this evolution better, the information from the IRAS colour-colour diagram becomes more and more confusing. The K– vs. – diagram may be a valuable additional tool for the study of post-AGB evolution.
We will only discuss the evolution of the K– colour in Fig. 12, since the evolution of the – colour has already been discussed earlier. At first both the flux in the K band and that in the 12 $`\mu `$m band are decreasing. However, the 12 $`\mu `$m flux decreases more rapidly, and therefore the K– colour evolves towards hotter colours. This can be understood if we realize that the central star heats up relatively slowly while the circumstellar shell expands relatively rapidly. At a temperature of around 8000 K the central star evolution speeds up considerably, reversing the preceding argument. The flux in the K band continues to drop, while the flux in the 12 $`\mu `$m band now increases, resulting in ever cooler K– colours. This evolution continues until the central star starts to ionize a considerable part of the circumstellar shell and the K band flux starts to rise again due to nebular bound-free emission. This rise is so rapid that it reverses the evolution of the K– colour.
The tracks for silicate and graphite grains are qualitatively similar, but are offset with respect to each other. This is mainly due to the different absorption efficiency of silicate and graphite grains (see Section 4.2). The difference between the silicate track with and without post-AGB dust formation can be understood solely from the presence or absence of the 10 $`\mu `$m emission feature. The difference between the graphite tracks with and without post-AGB dust formation stems from the fact that the hot dust in the post-AGB section of the wind mainly radiates around 3 $`\mu `$m and thus contributes to the K band flux.
## 5 Discussion and conclusions
In this paper we have presented a new model to calculate the spectral evolution of a hydrogen-burning post-AGB star. The main new ingredient of this model is the possibility of extracting timescales of the post-AGB central star evolution from the most recent evolutionary calculations. Hence, contrary to previous studies, it is now possible to investigate other AGB and/or post-AGB mass loss rates and different prescriptions for the start of the post-AGB phase. The use of a photo-ionization code with a built-in dust code gives us the opportunity to study the evolution of the infrared emission of the circumstellar material.
We have performed a parameter study on a typical post-AGB star with a core mass of 0.605 M<sub>⊙</sub> taken from Blöcker (1995b). By varying the parameters that govern the mass loss and the evolutionary timescales of the post-AGB phase by a reasonable amount we find that:
1. The influence of the evolving star on the infrared colour evolution cannot be neglected. In particular, the phase wherein the circumstellar shell can be considered as a stationary shell around a star that rapidly increases in temperature results in clockwise loops in the IRAS colour-colour diagram. This was not found in many previous studies, in which the temperature of the central star was taken to be constant. Instead, in those studies the evolutionary path followed a counter-clockwise loop in the colour-colour diagram because of the inevitable cooling of the shell.
Only Szczerba & Marten (1993), who used a Blöcker track, obtained a roughly similar result with their dust radiative transfer code. Volk (1992) found a slight deviation from the counter-clockwise loop in the colour-colour diagram. He used the coarse grid of evolutionary timescales from the 0.644 M<sub>⊙</sub> Schönberner (1983) track. The output of the photo-ionization code cloudy was used as input for a dust radiative transfer model. It is not clear to what extent that choice influenced the path followed in the IRAS colour-colour diagram.
The crucial influence of the evolution of the central star warrants further parameter studies with other stellar models, where the increase in temperature as a function of time will be different. It is expected that the heating of the shell will be more important for more rapidly evolving (i.e. larger core mass) stars.
2. The tracks that are followed in the IRAS colour-colour diagram are very sensitive to the adopted dust opacity law and solid state features. Two main differences between the silicate and graphite models that were studied are evident. Firstly, the IRAS colours of the silicate models are very sensitive to newly synthesized dust in the post-AGB wind because of the silicate 10 $`\mu `$m feature, which contributes significantly to the 12 $`\mu `$m flux density. In contrast, hot graphite dust emits mainly shortward of the 12 $`\mu `$m pass band, and thus post-AGB dust formation has less influence in this case. Secondly, the dependence of the absorption efficiency on the central star temperature is much stronger for silicates than for graphite in the temperature regime studied here. Therefore silicates react much more strongly to the heating of the central star and thus give rise to much larger loops in the IRAS colour-colour diagram.
Our knowledge of dust opacities, and certainly of solid state features in the mid- and far-infrared, will improve when the results of the ISO mission have been digested (Waters et al. 1996). For example, the well-known 21 $`\mu `$m and 30 $`\mu `$m features that have been observed in the infrared spectrum of carbon-rich post-AGB stars and planetary nebulae (e.g. Omont et al. 1995) have not been taken into account in this study. In addition, the wavelength coverage up to 200 $`\mu `$m will be of great help to determine the wavelength dependence of the dust opacities towards long wavelengths.
3. A third decisive factor that governs the evolution of the IRAS colours is the definition of the end of the AGB. The sooner an object enters the transition phase, i.e. the sooner the heavy AGB mass loss ceases, the longer the evolution to higher effective temperatures will last. This results in cool circumstellar dust shells, and consequently these models lie in a region of the IRAS colour-colour diagram where not many post-AGB stars were expected previously. In this respect it is worth referring to Fig. 10, where oxygen-rich post-AGB stars are predicted to be present in the upper part of region VIII. A re-investigation of the sources in that region would be useful in testing whether this scenario is realistic.
4. Changing the mass loss prescription in the coolest part of the AGB evolution that we considered has little influence on the IRAS colours of the models.
In general we find that the variation of the parameters mentioned above, which are still not very well determined, results in a variety of different paths in the IRAS colour-colour diagram. This is certainly part of the explanation why planetary nebulae do not occupy a well-structured region in the IRAS colour-colour diagram (cf. Volk 1992). As a by-product of this investigation we find that the same location in the IRAS colour-colour diagram can be occupied by objects with an entirely different evolutionary past. Apparently, the location in the IRAS colour-colour diagram cannot a priori give a unique determination of the evolutionary status of an object.
As an alternative to the IRAS colour-colour diagram, the K– vs. – colour diagram is presented. The tracks in this diagram seem less affected by particulars of the grain emission. Hence this diagram might prove to be a valuable additional tool for studying post-AGB evolution.
The feedback of observational work, after a sufficient number of parameter studies, will be of help to assess the selection effects in our post-AGB sample selection criteria, and will give constraints on the assumptions that now have to be made on the central star and nebular evolution.
## Acknowledgments
We are very grateful to Thomas Blöcker, who has kindly provided us with the tables from which the evolutionary timescales could be reconstructed. The photo-ionization code cloudy, written by Gary Ferland, was obtained from the University of Kentucky, USA. We would like to thank the referee Ryszard Szczerba and Thomas Blöcker for critically reading the manuscript. PvH and RDO were supported by NFRA grants 782–372–033 and 782–372–031.
# Evidence Against the Sciama Model of Radiative Decay of Massive Neutrinos

<sup>1</sup> Based on the development and utilization of the Espectrógrafo Ultravioleta de Radiación Difusa, a collaboration of the Spanish Instituto Nacional de Tecnica Aeroespacial and the Center for EUV Astrophysics, University of California, Berkeley.
## 1 Introduction
Relic neutrinos, if massive, could contribute significantly to the density of the universe and, if appropriately concentrated, could explain puzzling characteristics of luminous matter in galaxies. Melott (1984) suggested that if these particles were radiatively decaying, they could be responsible for the sharp hydrogen ionization edges seen in many galaxies, and that this decay would not violate existing observational data if the decay energy was somewhat greater than 13 eV and the lifetime for decay was about 10<sup>24</sup> s. In a subsequent paper, Melott et al. (1988) showed this idea was consistent with observations of star formation, galaxy formation and morphology, and other phenomena. Subsequently, Sciama and collaborators in an extensive set of papers (Sciama 1990, 1993, 1995, 1997a, 1997b, 1998; Sciama et al. 1993) showed that if the decay lifetime was an order of magnitude less than that suggested by Melott, his theory could explain a large number of otherwise puzzling astronomical phenomena, including the ionization state of the intergalactic medium and the anomalous ionization of the interstellar medium (ISM) in our own Milky Way Galaxy. Although massive neutrinos cannot be contemplated within the framework of the standard model of particle physics, they can be accommodated in supersymmetric extensions of the standard model, especially if R-parity is broken (cf. Gato et al. 1985; Bowyer et al. 1995). Recent observational and experimental results suggest they do, in fact, have mass (Fukuda et al. 1998a,b; Athanassopoulos et al. 1998a,b).
A number of searches have been made for evidence of radiatively decaying massive neutrinos in clusters of galaxies. Davidsen et al. (1991) severely constrained the parameter space available for these particles through observations of the cluster of galaxies Abell 665, and Fabian et al. (1991) obtained similar results from a study of the cluster of galaxies surrounding the quasar 3C263. However, Sciama et al. (1993) and Bowyer et al. (1995) have shown that these observations do not rule out the Sciama scenario.
An all-pervading neutrino flux in the Galaxy at a wavelength near the ionization limit of hydrogen would be difficult to observe because of absorption by the ISM. However, Bowyer et al. (1995) pointed out that this flux would be observable from Earth orbit in several well-defined directions where the density of the ISM is extremely low. An observational complexity which could complicate these measurements is emission from an upper atmosphere oxygen recombination feature at 911 $`\mathrm{\AA }`$ (Chakrabarti et al. 1983).
In this paper we report results of spectral observations made in the region around 912 $`\mathrm{\AA }`$ where the radiation in the Sciama scenario would be present, and compare the data obtained with the flux expected.
## 2 Observations
The observations were made with an extreme ultraviolet spectrometer covering the band-pass from 350–1100 $`\mathrm{\AA }`$ which was specifically designed for studies of diffuse emission. The instrument (the Espectrógrafo Ultravioleta extremo para la Radiación Difusa, EURD) is capable of providing measurements of the diffuse UV background which are more than 100 times more sensitive than existing measurements in this band-pass, with a spectral resolution of about 6 $`\mathrm{\AA }`$. The instrument is described in detail by Bowyer et al. (1997).
The instrument was flown onboard the Spanish MINISAT-01 satellite launched on April 21, 1997. The spacecraft is in a retrograde orbit with an inclination of 151° and is at an altitude of 575 $`\mathrm{km}`$. The spectrometer continuously views the anti-Sun direction. Details of the spacecraft and the EURD observational parameters are provided in Morales et al. (1996).
We examined EURD data in the 890 to 915 $`\mathrm{\AA }`$ bandpass in an attempt to detect the emission which would be present if the Sciama scenario was operative. Data from the spectrometer were typically collected over the entire night-time portion of the orbit. Higher count rates are always experienced at spacecraft sunrise and sunset due to geocoronal effects, but deep night intensities are typically constant and low. For the search for radiation from the Sciama scenario, we sorted the data to exclude all sunrise and sunset data and all other data associated with high backgrounds. Given the low in-flight counting rate and the absolute fixed electronics dead time of 100 $`\mu \mathrm{s}`$ per photon, dead time corrections were about 1 % and were therefore ignored.
The EURD spectrograph employs a number of vetoes to reduce unwanted background and to permit evaluation of those background events which cannot be otherwise eliminated (Bowyer et al. 1997). The detector is surrounded by an anti-coincidence shield and all counts triggering this shield (about 20 percent) are rejected. Remaining internal background components include charged particles that are missed by the anti-coincidence system, Compton-scattered $`\gamma `$-rays, and radioactivity within the detector and in the spacecraft. An additional background is produced by photons scattered by the grating onto the detector. This scattered emission is mostly a continuum arising from the wings of the zero and first orders of the hydrogen Lyman-alpha line, whose peaks were designed to fall beyond the ends of the detector.
The entrance aperture of the instrument has a filter wheel with three positions: Open, Closed, and a $`\mathrm{MgF}_2`$ filter. The Open position provides spectral data plus backgrounds. The Closed position gives an estimate of the internal background, and the $`\mathrm{MgF}_2`$ filter position gives an estimate of the scattered radiation. Observations were carried out sequentially with each of these apertures; the complete cycle time was 90 s.
We corrected the deep night spectral data for backgrounds using the $`\mathrm{MgF}_2`$ and Closed apertures. We summed the background corrected data in the 890 to 915 $`\mathrm{\AA }`$ band as a function of time. We included data to 915 $`\mathrm{\AA }`$ to assure all counts shortward of 912 $`\mathrm{\AA }`$ were included in the sample given the spectral resolution of the instrument. In some neutrino decay scenarios, two lines will be produced whose relative intensities are uncertain. However, the sum of both of these lines is the key parameter to be measured, and in the Sciama scenario these lines will be separated by 0.2 eV, or 13 $`\mathrm{\AA }`$ at 900 $`\mathrm{\AA }`$ . Hence the flux from both these lines will be included in the data reported here. For this study we utilized data obtained between 18 June 1997 and 29 June 1998. Data were regularly obtained over most of this period, with occasional gaps because of spacecraft problems or instrument shutdowns. Data were summed over 10 day intervals, providing typically about 3500 counts, to obtain good counting statistics.
Unfortunately for our search for the Sciama line, oxygen recombination radiation was substantial at the altitude of the MINISAT satellite even in the anti-Sun view direction. A spectrum of the radiation detected around 912 $`\mathrm{\AA }`$ is shown in Fig. 1. This spectrum shows a profile that is consistent with the line shape obtained by Feldman et al. (1992), given the resolution of this instrument. Just longward of 912 $`\mathrm{\AA }`$ the spectrum is dominated by the Lyman series lines of geocoronal hydrogen (López-Moreno et al. 1998). Both the oxygen recombination feature and the Lyman series of hydrogen vary in time; the data shown in Fig. 1 are from a period when the oxygen recombination radiation was more pronounced.
We determined the EURD counts-to-flux conversion factor in the region around 800 $`\mathrm{\AA }`$ using an in-flight calibration strategy based on simultaneous EUV observations of the Moon with EUVE and EURD (Flynn et al. 1998), and, longward of 912 $`\mathrm{\AA }`$, on fits to stellar spectra (Morales et al., in progress). It is estimated that this calibration is good to $`\pm `$ 20% in the band around 912 $`\mathrm{\AA }`$ because of the quality of the fit to stellar spectra. This in-flight calibration yields a conversion of 6.5 $`\times 10^4`$ $`\mathrm{ph}\mathrm{cm}^{-2}\mathrm{str}^{-1}`$ per count at 912 $`\mathrm{\AA }`$; this is within a factor of three of the preflight calibration (Bowyer et al. 1997). This difference is easily understood as being the result of the degradation of the detector photocathode during the almost two-year period between the laboratory calibration and in-orbit operation. The resulting fluxes are shown in Fig. 2. These fluxes are the total fluxes obtained in this bandpass, uncorrected for any Lyman series emission as seen in Fig. 1.
The expected emission in the Sciama scenario can best be considered in two parts. The first is produced in the Local Interstellar Cloud (LIC) which surrounds the Sun; this emission is intermixed with absorption. The second component is emission from beyond the LIC which is absorbed by this cloud.
Formally, the emission is given by the relation:
$$I(l)=B+\frac{R_{\mathrm{prod}}}{4\pi n_o\sigma }\left(1\mathrm{exp}\left[n_o\sigma d_{\mathrm{cl}}(l)\right]\right)+\frac{R_{\mathrm{prod}}\left[d_e(l)d_{\mathrm{cl}}(l)\right]}{4\pi }\mathrm{exp}\left[n_o\sigma d_{\mathrm{cl}}(l)\right]$$
(1)
where we have included a background, $`B`$ (which could be due to anything, but is mostly due to oxygen recombination radiation); $`R_{\mathrm{prod}}`$ is the photon production rate; $`n_o`$ is the density of the LIC; $`\sigma `$ is the effective ISM cross section for absorption (Rumph et al. 1994); $`d_{\mathrm{cl}}`$ is the distance to the cloud edge; and $`d_e`$ is the distance to the edge of the neutral-free region. The symbol $`l`$ indicates variation with ecliptic longitude. The most recent (small) revision of the theory (Sciama 1998) requires a photon production rate of $`2\pm 1\times 10^{-16}\mathrm{s}^{-1}\mathrm{cm}^{-3}`$.
We have used the model of Redfield and Linsky (1999) for data on the LIC. This is a three-dimensional model which is based on ISM absorption features in the spectra of nearby stars obtained with HST, EUVE, and ground-based telescopes. Minimum hydrogen columns in the plane of the ecliptic in this model are $`2.5\times 10^{16}\mathrm{cm}^{-2}`$; maximum columns are $`2.5\times 10^{18}\mathrm{cm}^{-2}`$.
In the region beyond the LIC, Welsh et al. (1998) used high resolution optical spectroscopy to determine the amount of ISM sodium in the line of sight to stars within 300 pc of the Sun. They found that the ISM is essentially free of neutral gas out to more than 70 pc in most directions. Sfeir et al. (1999) have obtained an extensive set of sodium absorption data and have modeled the extent of this ionized region, or Local Bubble. We have used the N(H) = 1$`\times 10^{19}`$ $`\mathrm{cm}^{-2}`$ contour of their model, where the ionized region of the Local Bubble abruptly ends, as the limit to the region from which the Sciama line could be detected. This contour is typically at 100 pc in the plane of the ecliptic. We have incorporated these results in Eqn. 1, and we show the expected emission in the plane of the ecliptic for the Sciama scenario in Fig. 2.
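For orientation, the sketch below evaluates Eqn. 1 along a single sight line. The LIC density and the effective absorption cross section are rough illustrative numbers only (not the actual values of the Redfield & Linsky or Rumph et al. models), and the two cloud depths correspond roughly to the quoted minimum and maximum hydrogen columns for that density.

```python
import numpy as np

PC = 3.086e18  # parsec in cm

def sciama_intensity(d_cl_pc, d_e_pc, B=0.0,
                     r_prod=2e-16,    # photon production rate [s^-1 cm^-3]
                     n0=0.1,          # LIC density [cm^-3]; illustrative
                     sigma=6.3e-18):  # effective cross section [cm^2]; illustrative
    """Expected line intensity along one sight line (Eqn. 1), in
    ph s^-1 cm^-2 sr^-1: emission produced inside the LIC (with
    self-absorption) plus emission from beyond the LIC, absorbed by it."""
    d_cl, d_e = d_cl_pc * PC, d_e_pc * PC
    tau = n0 * sigma * d_cl                       # optical depth of the LIC
    local = r_prod / (4.0 * np.pi * n0 * sigma) * (1.0 - np.exp(-tau))
    beyond = r_prod * (d_e - d_cl) / (4.0 * np.pi) * np.exp(-tau)
    return B + local + beyond

# Cloud depths of ~0.08 and ~8 pc give the quoted minimum and maximum
# columns (2.5e16 and 2.5e18 cm^-2) for n0 = 0.1 cm^-3; the neutral-free
# region is taken to end at 100 pc.
for d_cl in (0.08, 8.0):
    print(f"d_cl = {d_cl:5.2f} pc   I = {sciama_intensity(d_cl, 100.0):.3e}")
```

The strong directional contrast between nearly transparent and optically thick sight lines is what makes the predicted signal in Fig. 2 so distinctive compared with the roughly isotropic oxygen background.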
## 3 Discussion and Conclusions
The geocoronal oxygen background is obvious in the data shown in Fig. 2, but in those view directions in which the absorption by the LIC is small because of the Sun’s location within the cloud, the flux from radiatively decaying neutrinos should be far more intense than the oxygen emission. It is obvious by inspection that the emission predicted by the Sciama theory is not present.
We have fit our data shown in Fig. 2 to a model described by Eqn. 1 in which we treat the background $`B`$ and the photon production rate $`R_{\mathrm{prod}}`$ as free parameters. Our best-fit value for $`B`$ is 2200 $`\mathrm{ph}\mathrm{s}^{-1}`$ $`\mathrm{cm}^{-2}\mathrm{str}^{-1}`$. Our best fit for $`R_{\mathrm{prod}}`$ is consistent with zero and has a 95% confidence upper limit of 0.6 $`\times 10^{-16}`$ $`\mathrm{s}^{-1}\mathrm{cm}^{-3}`$, which is one third of the production rate required by the theory.
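The fit itself can be set up as in the schematic below. The flux arrays are placeholders standing in for the background-corrected 890–915 Å band fluxes of Fig. 2, and the longitude dependence of the cloud depth is a crude hypothetical stand-in for the actual LIC and Local Bubble geometry; the sketch reuses `sciama_intensity` from above.

```python
import numpy as np
from scipy.optimize import curve_fit

# Placeholder data arrays (one point per 10 deg of ecliptic longitude):
ecl_lon = np.arange(0.0, 360.0, 10.0)
flux = np.full_like(ecl_lon, 2200.0)      # [ph s^-1 cm^-2 sr^-1]
flux_err = np.full_like(ecl_lon, 300.0)

def model(lon, B, r_prod):
    # Hypothetical LIC depth versus longitude; a real analysis would
    # interpolate the Redfield & Linsky model and the Sfeir et al. contour.
    d_cl = 0.1 + 8.0 * np.abs(np.sin(np.radians(lon) / 2.0))    # [pc]
    return np.array([sciama_intensity(dc, 100.0, B, r_prod) for dc in d_cl])

popt, pcov = curve_fit(model, ecl_lon, flux, p0=(2000.0, 1e-16), sigma=flux_err)
B_fit, r_fit = popt
r_err = np.sqrt(pcov[1, 1])
print(f"B = {B_fit:.0f}   R_prod = {r_fit:.2e} +- {r_err:.2e}")
# One-sided Gaussian 95% upper limit on the production rate:
print(f"R_prod < {max(r_fit, 0.0) + 1.645 * r_err:.2e} s^-1 cm^-3")
```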
The EURD data appear to be completely incompatible with the Sciama model of radiatively decaying massive neutrinos. We believe that the only parameters in this study that could be challenged, in principle, are the conversion factor from observed EURD counts-to-flux, and the LIC model. In evaluating the calibration issue, we note that while the most accurate conversion factor can be derived from stellar spectra, this result requires substantial justification (to be discussed elsewhere) and is not necessary for this work. The EURD in-flight calibration is firmly established to within a factor of two through our in-flight observations of the intensity of the geocoronal hydrogen Lyman lines. The counts-to-flux conversion using these in-flight results would have to be incorrect by more than a factor of five to reduce the predicted emission in the Sciama scenario to the level of the background shown in Fig. 2 if all the uncertainties are added in their worst directions. We can think of no way that this could be realized. The other factor that could be challenged, the LIC model, would have to be incorrect by a factor of twenty to reduce the observed flux to the level of the background shown in Fig. 2. This possibility is considered to be extremely unlikely (J. Linsky, private communication).
Although we believe our data rule out the Sciama model of radiatively decaying neutrinos, we note that we cannot exclude the earlier model of Melott with its longer lifetime. In this respect, it is intriguing to note that we do observe a faint line at $`710\AA `$ in long integrations with the EURD instrument which we have not been able to identify as either an upper atmospheric airglow line or as emission from the interstellar medium (Bowyer et al., in progress).
## 4 Acknowledgements
We wish to acknowledge many useful discussions with Dennis Sciama. We thank Jeff Linsky and Seth Redfield for access to their model before publication, and Seth Redfield for help in utilizing this model. We thank Daphne Sfeir for access to her model before publication and for her help in utilizing it. The authors wish to thank J. Cobb for devising and implementing complex data processing programs which convert the spacecraft data to forms amenable to scientific analysis. Partial support for the development of the EURD instrument was provided by NASA grant NGR 05-003-450 and INTA grant IGE 490056.
When the NASA funds were withdrawn from the instrument development at Berkeley by Ed Weiler, the instrument was completed with funds provided by S. Bowyer. The UCB analysis and interpretation is carried out through the volunteer efforts of S. Bowyer, J. Cobb, J. Edelstein, E. Korpela, and M. Lampton. The work by C. Morales and J. Trapero is supported in part by DGCYT grant PB94-0007. J.F. Gómez is supported in part by DGCYT grant PB95-0066 and Junta de Andalucia (Spain). J. Pérez-Mercader is supported by funds provided by the Spanish ministries of Education and Defense.
# Kosterlitz-Thouless vs Ginzburg-Landau description of 2D superconducting fluctuations
## Abstract
We evaluate the charge and spin susceptibilities of the 2D attractive Hubbard model and we compare our results with Montecarlo simulations on the same model. We discuss the possibility to include topological Kosterlitz-Thouless superconducting fluctuations in a standard perturbative approach substituting in the fluctuation propagator the Ginzburg-Landau correlation length with the Kosterlitz-Thouless correlation length.
PACS numbers: 74.20.De, 74.20.Mn, 71.10.-w
The discovery of spin and charge pseudogaps in the normal state of underdoped superconducting cuprates has triggered a renewed interest in the physics of preformed Cooper pairs. The actual source of the pseudogaps (pairing, and/or spin, and/or charge fluctuations) and the leading mechanisms responsible for the reduction of the superfluid density at low temperature (classical phase fluctuations, collective modes, quasiparticle excitations) are still debated. However, many indications support the idea that pairing occurs below some crossover temperature $`T^{}`$, while phase coherence is established at a sizably lower temperature. The low density of carriers, resulting in a low superfluid density, and the short coherence length $`\xi _0\simeq 10÷20\AA `$ support the relevance of superconducting phase fluctuations in the thermodynamic and dynamic properties of these materials. Moreover, although no discontinuity of the superfluid density at $`T_c`$ is observed, the strong anisotropy of the cuprates suggests that some features of a Kosterlitz-Thouless (KT) transition could be present in these systems. Therefore it is worth investigating the effects of the topological vortex-antivortex phase fluctuations on the various properties of a 2D superconductor. In particular, an important issue concerns the inclusion of these effects in evaluating thermodynamic quantities like the spin susceptibility or the charge susceptibility. In this context, the aim of the present work is to look for possible connections between the perturbative scheme leading to the standard time-dependent Ginzburg-Landau (TDGL) results and the KT physics.
Halperin and Nelson have shown that, in the KT regime, the contributions of superconducting fluctuations to the conductivity above $`T_{KT}`$ have the same functional form, in terms of the correlation length $`\xi `$, as the Aslamazov-Larkin contributions of the standard TDGL theory, $`\sigma _{KT}(\xi )\sim \sigma _{GL}(\xi )\propto \xi ^2`$. The same holds for the fluctuation contribution to the diamagnetism, $`\chi _{KT}^d(\xi )\sim \chi _{GL}^d(\xi )\propto \xi ^2`$. In spite of the same correlation-length dependence, conductivity and diamagnetism in the KT and TDGL theories have completely different temperature dependences, induced by the different temperature dependence of the correlation length in the two theories. The KT correlation length diverges exponentially at $`T_{KT}`$ while the GL correlation length diverges as a power law with the classical exponent $`\nu =\frac{1}{2}`$. Therefore the KT conductivity and diamagnetic susceptibility diverge exponentially at $`T_{KT}`$, while the same quantities in the TDGL theory diverge as a power law at $`T_c`$ with critical exponent $`\gamma =1`$.
We analyze the two-dimensional negative-$`U`$ Hubbard model which is the simplest minimal model where the distinct occurrence of pairing and phase coherence can be investigated. Within this model, the spin susceptibility $`\chi _s`$ and the charge compressibility $`\chi _c`$ are calculated on a two-dimensional square lattice by performing a loop expansion with the fermions exchanging the Cooper-fluctuations propagator in the standard form. Before giving the technical details of our treatment, we immediately present our results.
Figure 1 shows the behavior of the spin susceptibility when the correlation length is assumed either of the GL form (dashed line with crosses) or of the KT form (dotted line with stars). Both curves are compared with the Montecarlo data obtained in Ref. for the negative-$`U`$ Hubbard model with $`U=4t`$ ($`t`$ is the nearest-neighbor hopping) at filling $`n=0.5`$ electrons per cell. The critical temperature $`T_{KT}`$ of the KT superconducting transition, as extracted from numerical calculations, is $`T_{KT}=0.05t`$ and has been used as the input critical temperature for our perturbative calculations. In the Montecarlo data, for $`T`$ less than $`T^{}\sim t\gg T_{KT}`$, $`\chi _s`$ starts decreasing. This indicates the existence of strong superconducting fluctuations in the temperature range between the mean-field transition temperature ($`T_{BCS}\simeq 0.6t`$) and the true KT transition. It is apparent from Fig. 1 that the rapid decrease of the spin susceptibility in the Montecarlo results is well fitted by inserting in the correlation length the KT temperature dependence as given by the expression
$$\xi _{KT}(T)=\xi _c\mathrm{exp}\left[b\sqrt{\frac{T(T_{BCS}T_{KT})}{T_{BCS}(TT_{KT})}}\right].$$
(1)
Here $`\xi _c`$ is an effective size of the core of the vortex, which we take of the order of the zero-temperature correlation length $`\xi _0`$, and $`b`$ is a positive constant of the order of unity. This specific form of the KT correlation length has been derived along the lines of Ref., although it differs slightly from the one commonly quoted in the literature. We shall comment on this later.
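A minimal numerical sketch makes the difference between the two mass terms explicit. The values $`T_{KT}=0.05t`$ and $`T_{BCS}\simeq 0.6t`$ are those quoted above, while $`b=1.6`$ and $`\xi _c=\xi _0`$ anticipate the choices made below; temperatures are in units of $`t`$ and lengths in units of $`\xi _0`$.

```python
import numpy as np

def xi_kt(T, T_kt=0.05, T_bcs=0.6, b=1.6, xi_c=1.0):
    """KT correlation length of Eq. (1), in units of xi_0 (xi_c = xi_0)."""
    return xi_c * np.exp(b * np.sqrt(T * (T_bcs - T_kt) / (T_bcs * (T - T_kt))))

def xi_gl(T, T_c=0.05, xi_0=1.0):
    """GL correlation length implied by the mass term eps = ln(T/T_c)."""
    return xi_0 / np.sqrt(np.log(T / T_c))

# Mass term eps = (xi_0 / xi)^2 of the pair propagator for both choices:
for T in (0.1, 0.2, 0.4):
    print(f"T = {T:.2f}t   eps_KT = {xi_kt(T)**-2:.3e}   eps_GL = {xi_gl(T)**-2:.3e}")
```

Even at $`T=2T_{KT}`$ the KT mass term is almost two orders of magnitude smaller than its GL counterpart, which is why the KT choice sustains strong fluctuations over a much wider temperature range.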
The fit in Fig. 1 stops at $`T\simeq 0.1t`$ because there are no numerical data below this value. This also appears to be the lower limit for our approach to work. Indeed for $`T\lesssim 0.09t`$ the TDGL expression for $`\chi _s`$ develops a non-physical behavior $`(\chi _s<0)`$, indicating that the perturbative scheme no longer applies near $`T_{KT}`$. With this caution in mind, the results of Fig. 1 indicate that the simple loop expansion we adopted is able to reproduce the spin susceptibility in a wide range of temperatures. They support the idea that the main effect of the vortex-antivortex phase fluctuations on the spin susceptibility is embedded in (and satisfactorily accounted for by) the temperature dependence of the $`\xi _{KT}(T)`$ correlation length, in analogy with the conductivity and diamagnetism.
On the other hand, as seen in Fig. 2, the same type of calculations for the charge susceptibility fail in describing the nearly constant (but with sizeable error bars) behavior obtained numerically. In particular, we find that the Aslamazov-Larkin (AL) contribution, which does not contribute to the spin susceptibility, strongly enhances $`\chi _c(T)`$ and eventually leads to a divergent $`\chi _c`$ near $`T_{KT}`$. As a consequence $`\chi _c(T)`$ strongly deviates from the Montecarlo results for $`T<T_{BCS}`$. In Fig. 2 we also report the RPA resummation of the bare bubble in the charge channel that fits the available Montecarlo data, to obtain, by extrapolation, the $`\chi _c(T)`$ at higher temperature.
In this respect $`\chi _c`$ appears to behave like the specific heat $`c_v`$, for which the 2D-TDGL expression $`c_v\propto \xi _{GL}^2`$ does not reproduce the correct KT result $`c_v\propto \xi _{KT}^{-2}`$, even when expressed in terms of the correlation length. For the specific heat this happens despite the fact that the free energies in the two theories have the same leading behavior when written in terms of the respective correlation lengths: $`F_{GL}\propto \xi _{GL}^{-2}\mathrm{ln}\xi _{GL}`$ and $`F_{KT}\propto \xi _{KT}^{-2}`$. Indeed, since $`c_v`$ involves the second derivative of $`F`$ with respect to temperature, the different temperature dependences of the correlation lengths (and the subleading $`\mathrm{ln}\xi _{GL}`$ factor) lead to completely different results in the two theories. Our result for $`\chi _c`$ has the same origin: the charge response at $`\omega =0`$, $`q\to 0`$ can be obtained as a chemical potential derivative of the free energy. Now, since the critical temperature depends on the chemical potential, $`T_c=T_c(\mu )`$, a total derivative with respect to $`\mu `$ also involves derivatives with respect to $`T_c`$ and, in turn, derivatives of $`\xi `$. Therefore the temperature dependence of $`\chi _c`$ not only arises from the temperature dependence of $`\xi (T)`$, but also depends on $`d\xi /dT_c`$. In fact one gets the same TDGL singular contribution $`\propto \xi ^2`$ for $`\chi _c`$ and $`c_v`$. Our simple perturbative expansion, where the leading temperature dependence only arises from the mass term $`\xi ^{-2}`$ of the Cooper fluctuation propagator in the TDGL expression, fails to reproduce the correct temperature dependence of $`\chi _c`$ in the same way as it fails in evaluating the specific heat.
We now describe the details of our calculations. The model we consider is given by
$$H=-t\sum _{i,j\sigma }c_{i\sigma }^{\dagger }c_{j\sigma }+U\sum _in_{i\uparrow }n_{i\downarrow }-\mu \sum _{i\sigma }n_{i\sigma }$$
(2)
where $`t`$ is the hopping between nearest-neighbor sites, $`U<0`$ the strength of the attraction and $`\mu `$ the chemical potential. The standard ladder resummation of diagrams leads to the Cooper pair propagator $`L(q,\mathrm{\Omega }_l)=U/\left(1+U\chi _0^{pp}(q,\mathrm{\Omega }_l)\right)`$, where $`\chi _0^{pp}(q,\mathrm{\Omega }_l)`$ is the bare particle-particle bubble, $`q`$ the momentum and $`\mathrm{\Omega }_l`$ the bosonic Matsubara frequency. In the normal state, within the standard GL approach, at small $`q`$ and $`\mathrm{\Omega }_l`$ one has
$$L^1(q,\mathrm{\Omega }_l)=N_0\left(ϵ+\eta q^2+\gamma \mathrm{\Omega }_l\right)$$
(3)
where $`N_0`$ is the density of states at the Fermi energy, $`\eta =7\zeta (3)/(32\pi ^2)(v_F/T_c)^2\equiv \xi _0^2`$, and $`\gamma =\pi /(8T_c)`$. The mass term $`ϵ=\mathrm{ln}(T/T_c)=(\xi /\xi _0)^{-2}`$ of the propagator controls the distance from the superconducting transition. In the standard GL approach $`ϵ\propto \xi _{GL}^{-2}`$ and near $`T_c`$ it goes to zero as $`(T-T_c)/T_c`$.
We study the charge and spin susceptibilities by evaluating the one-loop corrections $`\mathrm{\Delta }\chi _c`$ (charge channel) and $`\mathrm{\Delta }\chi _s`$ (spin channel) to the bare particle-hole bubble $`\chi _0^{ph}`$, $`\chi _{c,s}^{ph}=\chi _0^{ph}+\mathrm{\Delta }\chi _{c,s}`$. The charge $`(c)`$ and spin $`(s)`$ bubbles $`\chi _{c,s}^{ph}`$ are then inserted in the RPA resummation to get the charge and spin susceptibilities (see below). In the one-loop expansion, we include diagrams containing only one integration over the bosonic variables $`(q,\mathrm{\Omega }_l)`$ (i.e. one bosonic loop) of the fluctuation propagator $`L(q,\mathrm{\Omega }_l)`$, obtaining three kinds of diagrams which contribute differently to the spin and charge susceptibilities: the self-energy diagrams, where $`L(q,\mathrm{\Omega }_l)`$ renormalizes the bare one-particle Green function (DOS contribution); the vertex diagrams, where $`L(q,\mathrm{\Omega }_l)`$ renormalizes the vertex, connecting two bare Green functions (Maki-Thompson (MT) contribution); and the Aslamazov-Larkin (AL) diagrams, containing two fluctuation propagators. Moreover it is necessary to add the counterterms (CT) proportional to the shift of the chemical potential $`\delta \mu `$, which is required to preserve the number of particles. We notice that the one-loop expansion for the charge and the spin susceptibilities satisfies the relation, derived from spin and charge conservation, $`\chi _{s,c}(q=0,\mathrm{\Omega }\to 0)=0`$. One obtains:
$`\mathrm{\Delta }\chi _s`$ $`=`$ $`4DOS-2MT+4CT`$ (4)
$`\mathrm{\Delta }\chi _c`$ $`=`$ $`4DOS+2MT+4AL+4CT.`$ (5)
The absence of the AL contribution and the (opposite) sign of the MT diagrams in the spin susceptibility are consequences of the vertex spin structure, as shown in Ref.. Moreover the leading DOS contributions to the charge susceptibility cancel the MT ones. The AL diagrams therefore give the most important contribution to the charge susceptibility (the CT diagrams being subdominant with respect to them).
According to the physical assumption outlined above that the TDGL and KT temperature dependencies are essentially ruled by the correlation lengths, we have alternatively taken Eq.(3) with $`\xi =\xi _{GL}`$ and $`\xi =\xi _{KT}`$. In the calculation with $`\xi _{GL}`$ we used $`T_c=T_{KT}`$ and the mass term $`ϵ=\mathrm{ln}(T/T_c)`$, while in the calculation with $`\xi _{KT}`$ we used Eq.(1) with $`b=1.6`$ and $`\xi _c=\xi _0`$. In both cases we took the coefficients $`\eta `$ and $`\gamma `$ given by the corresponding expressions reported below Eq.(3) calculated with $`T_c=T_{BCS}`$. This choice was motivated by the plausible assumption that $`\eta `$ and $`\gamma `$ change little once the fluctuations are predominantly in the phase sector. In any case we checked that our results are rather stable with respect to modifications of $`\eta `$ and $`\gamma `$.
The charge and the spin susceptibilities are finally obtained by the RPA resummation of the corrected charge and spin bubbles, $`\chi _{c,s}=\chi _{c,s}^{ph}/\left(1\pm (\stackrel{~}{U}_{c,s}/2)\chi _{c,s}^{ph}\right)`$, where the plus (minus) sign is associated with the charge (spin) susceptibility. Notice that, following the analysis of Ref., the RPA expressions of both susceptibilities contain an effective local interaction $`\stackrel{~}{U}_{c,s}`$ instead of the bare $`U`$, in order to properly fit the high-temperature region of the Montecarlo data. The validity of the RPA form for the spin susceptibility is also found in the context of the positive-$`U`$ Hubbard model. However, while in Ref. the bare bubbles were resummed and a value $`\stackrel{~}{U}=6.5`$ was obtained for $`U=4t`$ and $`n=0.5`$, in our case we resum the bubbles already containing the $`\mathrm{\Delta }\chi _s`$ corrections, and a different value $`\stackrel{~}{U}_s=4.6`$ is needed to match the RPA calculation with the high-temperature Montecarlo data. For the charge susceptibility the comparison with the RPA resummation in terms of $`\chi _0^{ph}`$ reported in Fig. 2 gives $`\stackrel{~}{U}_c=1.6`$.
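Schematically, the final dressing step reads as in the sketch below; the bubble values fed in are arbitrary illustrative numbers, since the full one-loop corrected bubbles are not reproduced here.

```python
def chi_rpa(chi_ph, u_eff, channel):
    """RPA resummation of the corrected particle-hole bubble:
    chi = chi_ph / (1 + s*(u_eff/2)*chi_ph), s = +1 (charge), -1 (spin)."""
    s = 1.0 if channel == "charge" else -1.0
    return chi_ph / (1.0 + s * 0.5 * u_eff * chi_ph)

# Arbitrary bubble values (units of 1/t) with the effective couplings
# quoted in the text, u_s = 4.6 and u_c = 1.6:
for chi0 in (0.05, 0.10, 0.15):
    print(f"chi_ph = {chi0:.2f}   "
          f"chi_s = {chi_rpa(chi0, 4.6, 'spin'):.3f}   "
          f"chi_c = {chi_rpa(chi0, 1.6, 'charge'):.3f}")
```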
We now comment on the expression in Eq.(1) that we used for the KT correlation length. We wrote this expression following Halperin and Nelson. They introduce into the KT correlation length for the classical XY model (with coupling $`J`$ and lattice spacing $`a`$), $`\xi _{KT}\sim a\mathrm{exp}\left[b(\pi J/k_BT-1)^{-1/2}\right]`$, a temperature-dependent $`J(T)=n_s(T)/8m`$, and take $`a=\xi _c`$. Here the superfluid density $`n_s(T)`$ is taken to vanish linearly at a temperature $`T_0(>T_{KT})`$, to be determined selfconsistently by the requirement that $`T_0`$ should include the effect of the fluctuations at scales smaller than $`\xi _c`$. Our expression (1) is obtained by taking $`T_0\simeq T_{BCS}`$ and $`\xi _c\simeq \xi _0`$, with the idea that phase fluctuations are the most important effect over the whole range of temperatures $`T_{KT}<T<T_{BCS}`$ (at least in evaluating $`\chi _s`$ and $`\chi _c`$).
The results of the simple procedure outlined above are quite satisfactory for the spin susceptibility. This indicates that the main temperature dependence of this quantity actually arises from the specific KT temperature dependence of the correlation length, which thus brings along the physics of the vortex-antivortex phase fluctuations into a simple perturbative scheme. The same is not true for the compressibility, as for the specific heat, since these quantities also involve temperature derivatives of $`\xi _{KT}`$.
Our method, developed for the 2D attractive Hubbard model, can be useful in understanding the role of superconducting phase fluctuations in quasi-2D cuprate superconductors. In this context, the recent finding that KT signatures, which are absent in the static conductivity, become progressively more evident in the dynamical conductivity at shorter timescales encourages us to extend our analysis to other frequency-dependent quantities. In particular it is of obvious interest to explore the possibility of including, in a simple perturbative scheme along the lines followed in the present work, the effects of KT topological phase fluctuations on dynamical quantities like the optical conductivity and single-particle spectra.
Acknowledgments. We acknowledge S. Caprara, C. Di Castro, P. Pieri, G. C. Strinati and A. A. Varlamov for helpful discussions.
# Comparing estimators of the galaxy correlation function
## 1 Introduction
The two–point correlation function $`\xi (r)`$ has been the primary tool for quantifying large–scale cosmic structure (see Peebles 1980). Several estimators have been used in the literature to measure this statistical quantity from redshift surveys. The power–law shape of $`\xi (r)`$ seems to be well established for $`0.1<r<10h^{-1}`$ Mpc ($`h`$ being the Hubble constant in units of 100 km s<sup>-1</sup> Mpc<sup>-1</sup>):
$$\xi (r)=\left(\frac{r}{r_0}\right)^{-\gamma }.$$
(1)
However, the reported values in the literature for the exponent $`\gamma `$ and the so–called correlation length $`r_0`$ (related to the amplitude $`A`$ of $`\xi (r)=Ar^{-\gamma }`$ by $`A=r_0^\gamma `$) vary somewhat depending on the sample analyzed, the estimator used, the weighting scheme, and the fitting procedure employed.
Redshift-space distortions strongly affect the correlation function at small scales; the real-space correlation function $`\xi (r)`$ is sometimes derived from $`\xi (r_\mathrm{p},\pi )`$, which depends on the radial and projected separations. For example, Davis & Peebles (1983) found that for the CfA-I redshift survey the values of the fit for the real-space correlation function are consistent with $`\gamma =1.77\pm 0.04`$ and $`r_0=5.4\pm 0.3`$ $`h^{-1}`$ Mpc. From the APM galaxy survey Maddox et al. (1990) inferred that $`\gamma \simeq 1.66`$ from measurements of the angular two–point correlation function and the use of the Limber equation. Other estimates of the two–point correlation function in redshift space for the CfA (I and II) catalogues have produced a variety of fits for $`\xi (s)=(s/s_0)^{-\gamma _s}`$ (de Lapparent, Geller & Huchra 1988; Martínez et al. 1993; Park et al. 1994) with values for $`\gamma _s\simeq 1.3`$–$`1.9`$ and $`s_0\simeq 4.5`$–$`12h^{-1}`$ Mpc. For the Pisces–Perseus redshift survey Bonometto et al. (1994) found $`\gamma _s=1.51\pm 0.04`$ and $`s_0=7.4\pm 0.7`$ $`h^{-1}`$ Mpc while, for the SSRS, Maurogordato, Schaeffer & da Costa (1992) found $`\gamma _s\simeq 1.6`$ and $`s_0\simeq 5`$–$`8.5`$ $`h^{-1}`$ Mpc. Luminosity segregation and the presence of large scale inhomogeneities affect the estimation of the parameters $`\gamma _s`$ and $`s_0`$ from the data (Hamilton 1988; Davis et al. 1988; Martínez et al. 1993). In particular, for the first slice of the CfA-II sample (de Lapparent et al. 1986) the two–point correlation function shows a flatter shape with $`\gamma _s\simeq 1.2`$ and $`s_0\simeq 10`$ $`h^{-1}`$ Mpc (de Lapparent et al. 1988; Martínez et al. 1993). Recent analyses of the shallower Stromlo–APM redshift survey performed by Loveday et al. (1995) have provided fits for the redshift-space correlation function ($`\gamma _s\simeq 1.47`$ and $`s_0\simeq 5.9`$ $`h^{-1}`$ Mpc) and for the real-space correlation function ($`\gamma \simeq 1.71`$ and $`r_0\simeq 5.1`$ $`h^{-1}`$ Mpc). Regarding optical galaxies, it is worth mentioning the best fitting values for $`\xi (s)`$ reported by Hermit et al. (1996) for the ORS catalogue, $`1.5\le \gamma _s\le 1.7`$ and $`6.5\le s_0\le 8.8`$ $`h^{-1}`$ Mpc, and the corresponding values for the derived real space correlation function, $`1.5\le \gamma \le 1.7`$ and $`4.9\le r_0\le 7.3`$ $`h^{-1}`$ Mpc.
IRAS galaxies typically present a lower value of the slope of the two–point correlation function: $`\gamma \simeq 1.6`$ (Davis et al. 1988; Saunders, Rowan–Robinson & Lawrence 1992). For the 1.2–Jy IRAS galaxy redshift survey, Fisher et al. (1994) found that the parameters fitting the redshift space two–point correlation function were $`\gamma _s\simeq 1.28`$ and $`s_0\simeq 4.53`$ $`h^{-1}`$ Mpc and, for the derived real space correlation function, $`\gamma \simeq 1.66`$ and $`r_0\simeq 3.76`$ $`h^{-1}`$ Mpc. These results are in agreement with the values obtained for the QDOT-IRAS (1 in 6) redshift survey (Moore et al. 1994; Martínez & Coles 1994).
It is, however, important to have a good knowledge of the shape of the two–point correlation function at small scales, and in particular of the value of $`\gamma `$, because it provides important constraints on models of structure formation. The parameters obtained by fitting the estimated two-point correlation function to a power–law may depend on the estimator used to measure $`\xi (r)`$ from the redshift surveys.
In this paper we compare some of the estimators of $`\xi (r)`$ commonly used in the literature concerning the large-scale structure of the Universe and in the literature regarding the statistics of spatial point processes. The paper is organized as follows. We give the necessary definitions in Section 2. Section 3 illustrates the application of the estimators on galaxy samples with different types of limitations. In Section 4 we present a new method for extracting artificial galaxies from simulations and we introduce the so-called Cox processes. In Section 5 we perform the comparison of the given estimators under various conditions (number of auxiliary random points used, number of galaxies, etc.). Our scheme to compute the errors of the correlation function is introduced in Section 6 and applied to the extracted synthetic galaxy samples. Finally, in Section 7 we state our main conclusions.
## 2 Estimators of the correlation function
In the framework of the statistical analysis of the large scale structure of the Universe, one assumes that the three–dimensional point pattern of galaxies is a sample of a stationary and isotropic point field. For such a point field the intensity $`\lambda `$ is the first order characteristic; $`\lambda `$ equals the mean number of points per unit volume. Second order characteristics are the correlation function $`\xi (r)`$ and the pair correlation function $`g(r)`$, which satisfy
$`g(r)=1+\xi (r).`$ (2)
The function $`g(r)`$ is defined as follows. Consider an infinitesimal ball $`B`$ of volume d$`V`$. The probability of having a point of the point field in $`B`$ is $`\lambda \mathrm{d}V`$. If there are two such balls $`B_1`$ and $`B_2`$, of volumes d$`V_1`$ and d$`V_2`$ and inter-centre distance $`r`$, then the probability of having a point in each ball can be denoted by $`P(r)`$. It can be expressed as
$`P(r)=g(r)\lambda \mathrm{d}V_1\lambda \mathrm{d}V_2.`$ (3)
The factor of proportionality $`g(r)`$ is the pair correlation function. It is clear that, in the case of complete randomness of the point distribution, $`g(r)=1`$.
For statistical estimation of $`\xi (r)`$, $`N`$ points are given inside a window $`W`$ of observation, which is a three–dimensional body of volume $`V`$.
Several estimators of $`\xi `$ are commonly used. The most extensively used one is that of Davis & Peebles (1983), for which an auxiliary random sample containing $`N_{\mathrm{rd}}`$ points must be generated in $`W`$ and the following quantity must be computed:
$$\widehat{\xi }_{\mathrm{DP}}(r)=\frac{DD(r)}{DR(r)}\times \frac{N_{\mathrm{rd}}}{N}-1,$$
(4)
where $`DD(r)`$ is the number of all pairs in the catalogue (window $`W`$) with separation “close to $`r`$”, i.e., inside the interval $`[r-dr/2,r+dr/2]`$, and $`DR(r)`$ is the number of pairs between the data and the random sample with separation in the same interval. The symbol $`\widehat{}`$ on top of a statistical quantity denotes its estimator. For flux–limited samples one has to weight each galaxy by means of the inverse of the selection function; since we basically deal in this paper with complete samples, this will not be considered here.
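As an illustration, a minimal DP implementation can be built on cumulative pair counts in radial shells. The sketch below uses per-pair normalized counts, which absorb the $`N_{\mathrm{rd}}/N`$ factor of Eq. 4; the toy Poisson data and the helper names are, of course, only placeholders:

```python
# Sketch of the DP estimator, Eq. (4), with scipy's kd-tree pair counting.
import numpy as np
from scipy.spatial import cKDTree

def xi_dp(data, randoms, edges):
    td, tr = cKDTree(data), cKDTree(randoms)
    n, n_rd = len(data), len(randoms)
    # diff of cumulative counts gives ordered pairs per shell; self-pairs cancel
    dd = np.diff(td.count_neighbors(td, edges)) / (n * (n - 1.0))
    dr = np.diff(td.count_neighbors(tr, edges)) / (n * float(n_rd))
    return dd / dr - 1.0

rng = np.random.default_rng(1)
data = rng.random((2000, 3)) * 100.0       # toy "galaxies" in a 100^3 window
randoms = rng.random((20000, 3)) * 100.0   # auxiliary random sample
print(xi_dp(data, randoms, np.linspace(1.0, 20.0, 20)))  # ~0 for Poisson data
```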
Another possibility is to use the estimator proposed by Hamilton (1993), which has become very popular since its introduction and reads:
$$\widehat{\xi }_{\mathrm{HAM}}(r)=\frac{DD(r)\times RR(r)}{[DR(r)]^2}-1,$$
(5)
where also the number of pairs in the random catalogue with separation in the interval mentioned above, $`RR(r)`$, is taken into account. Hamilton (1993) has shown that the dependence of $`\widehat{\xi }_{\mathrm{HAM}}`$ on the uncertainty in the mean density is of second order, while in $`\widehat{\xi }_{\mathrm{DP}}`$ it is linear and presumably dominates at large scales. He also considers the accurate computation of $`RR`$ and $`DR`$ by a combination of analytical and numerical integration, decomposing the separations into their radial and spatial parts.
One more estimator was proposed simultaneously (in the literal sense of the word<sup>1</sup><sup>1</sup>1Both papers were received in ApJ the very same day.) with Hamilton’s, by Landy & Szalay (1993):
$$\widehat{\xi }_{\mathrm{LS}}(r)=1+\frac{DD(r)}{RR(r)}\times \left(\frac{N_{\mathrm{rd}}}{N}\right)^2-2\frac{DR(r)}{RR(r)}\times \frac{N_{\mathrm{rd}}}{N}.$$
(6)
Szapudi & Szalay (1997) claim that LS behaves like HAM except for a small bias.
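A sketch of HAM and LS computed from the same three normalized pair counts (so the $`N_{\mathrm{rd}}/N`$ factors of Eqs. 5 and 6 drop out) may be useful; the data are again a toy Poisson sample:

```python
# Sketch of the HAM (Eq. 5) and LS (Eq. 6) estimators from normalized counts.
import numpy as np
from scipy.spatial import cKDTree

def normalized_counts(data, randoms, edges):
    td, tr = cKDTree(data), cKDTree(randoms)
    n, m = len(data), len(randoms)
    dd = np.diff(td.count_neighbors(td, edges)) / (n * (n - 1.0))
    dr = np.diff(td.count_neighbors(tr, edges)) / (n * float(m))
    rr = np.diff(tr.count_neighbors(tr, edges)) / (m * (m - 1.0))
    return dd, dr, rr

rng = np.random.default_rng(2)
dd, dr, rr = normalized_counts(rng.random((2000, 3)), rng.random((20000, 3)),
                               np.linspace(0.02, 0.2, 10))
xi_ham = dd * rr / dr**2 - 1.0
xi_ls = (dd - 2.0 * dr + rr) / rr        # algebraically equivalent to Eq. (6)
print(xi_ham, xi_ls, xi_ls - (dd / rr - 1.0))  # last column: LS minus PH
```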
A different kind of estimator was introduced by Rivolo (1986), in which random samples do not explicitly appear:
$$\widehat{\xi }_{\mathrm{RIV}}(r)=\frac{V}{N^2}\sum _{i=1}^{N}\frac{n_i(r)}{V_i}-1,$$
(7)
where $`n_i(r)`$ is the number of neighbours at distance in the interval $`[r-dr/2,r+dr/2]`$ from galaxy $`i`$ and $`V_i`$ is the volume of the intersection with $`W`$ of the shell centred at the $`i`$th galaxy and having radii $`r-dr/2`$ and $`r+dr/2`$. In the case of $`W`$ being a cube, an analytic expression for $`V_i`$ is provided in Baddeley et al. (1993). Incidentally, $`\widehat{\xi }_{\mathrm{RIV}}`$ is closely related to Ripley’s estimator of the so-called $`K`$–function, which is an integral of the pair correlation function $`g(r)`$ (Ripley 1981; Stoyan & Stoyan 1994; Kerscher 1998).
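When $`W`$ is not a cube, the shell volumes $`V_i`$ can be estimated by Monte Carlo; the following sketch does exactly that (for a cubic window the analytic formula of Baddeley et al. 1993 would be faster and exact). The sample sizes are illustrative only:

```python
# Sketch of the RIV estimator, Eq. (7), with Monte Carlo shell volumes V_i.
import numpy as np
from scipy.spatial import cKDTree

def cum_counts(tree, points, r):
    # number of tree points within distance r of each query point
    return np.array([len(ix) for ix in tree.query_ball_point(points, r)])

def xi_riv(data, edges, box=100.0, n_mc=200_000, seed=0):
    rng = np.random.default_rng(seed)
    # n_mc must be large enough that every shell receives Monte Carlo points
    mc = cKDTree(rng.random((n_mc, 3)) * box)   # used only to estimate V_i
    td = cKDTree(data)
    n, vol = len(data), box**3
    xi = np.empty(len(edges) - 1)
    lo_d, lo_v = cum_counts(td, data, edges[0]), cum_counts(mc, data, edges[0])
    for k in range(1, len(edges)):
        hi_d, hi_v = cum_counts(td, data, edges[k]), cum_counts(mc, data, edges[k])
        n_i = hi_d - lo_d                       # shell neighbours (self cancels)
        v_i = (hi_v - lo_v) * vol / n_mc        # shell volume clipped to W
        xi[k - 1] = vol / n**2 * np.sum(n_i / v_i) - 1.0
        lo_d, lo_v = hi_d, hi_v
    return xi

rng = np.random.default_rng(3)
print(xi_riv(rng.random((1000, 3)) * 100.0, np.linspace(2.0, 20.0, 10)))
```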
Before introducing a fifth estimator $`\widehat{\xi }_{\mathrm{STO}}(r)`$, which is commonly used in the framework of spatial point processes, let us define a naive estimator $`\varrho ^{*}(r)`$ of the product density $`\varrho (r)=\lambda ^2g(r)`$:
$`\varrho ^{*}(r)={\displaystyle \frac{DD(r)}{4\pi r^2drV}}.`$ (8)
The estimator of $`\xi (r)`$ is then
$`\xi ^{*}(r)={\displaystyle \frac{\varrho ^{*}(r)}{\widehat{\lambda }^2}}-1={\displaystyle \frac{DD(r)/N}{4\pi r^2dr\widehat{\lambda }}}-1,`$ (9)
with $`\widehat{\lambda }=N/V`$.
A smoothed version $`\stackrel{~}{DD}(r)`$ of $`DD(r)`$ can be obtained by means of a kernel function $`k(x)`$. Here the Epanechnikov kernel is used
$`k(x)=\{\begin{array}{cc}\frac{3}{4w}\left(1-\frac{x^2}{w^2}\right)\hfill & \text{for }|x|\le w\hfill \\ 0\hfill & \text{otherwise}\hfill \end{array}.`$ (12)
The parameter $`w`$ is called bandwidth. Now $`\stackrel{~}{DD}(r)`$ is
$`\stackrel{~}{DD}(r)={\displaystyle \sum _{i=1}^{N}}{\displaystyle \sum _{\genfrac{}{}{0pt}{}{j=1}{j\ne i}}^{N}}k(r-|𝐱_i-𝐱_j|),`$ (13)
where $`𝐱_i`$ is the location of the $`i`$th galaxy in $`𝐑^\mathrm{𝟑}`$ and those pairs with distances close to $`r`$ will contribute to the sum. Of course, the vagueness of the expression “close to $`r`$” is not completely overcome by means of the kernel function; the choice of the bandwidth $`w`$ is an art (see below).
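A direct transcription of Eqs. 12 and 13 is straightforward; the brute-force distance matrix below limits the sketch to modest $`N`$, and the bandwidth follows the $`w\simeq c\lambda ^{-1/3}`$ rule discussed at the end of this section:

```python
# Sketch of the kernel-smoothed pair count of Eq. (13) with the
# Epanechnikov kernel of Eq. (12).
import numpy as np

def epanechnikov(x, w):
    return np.where(np.abs(x) <= w, 0.75 / w * (1.0 - (x / w) ** 2), 0.0)

def dd_tilde(points, r, w):
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    iu = np.triu_indices(len(points), k=1)         # each unordered pair once
    return 2.0 * epanechnikov(r - d[iu], w).sum()  # Eq. (13) sums ordered pairs

rng = np.random.default_rng(4)
pts = rng.random((500, 3)) * 100.0
lam = len(pts) / 100.0**3
w = 0.05 * lam ** (-1.0 / 3.0)                     # bandwidth rule with c = 0.05
print(dd_tilde(pts, r=5.0, w=w))
```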
A serious drawback of the naive estimator $`\varrho ^{*}(r)`$ is that it is not edge–corrected, and there certainly are edge–effects: points close to the boundary of $`W`$ do not find as many neighbours as points in the inner region of $`W`$ do. Thus $`DD(r)`$ or $`\stackrel{~}{DD}(r)`$ tends to be smaller than expected and the estimator $`\varrho ^{*}(r)`$ produces too small values. Let us remark that the problems with edge–effects in three–dimensional space are much more serious than in the one– and two–dimensional spaces typical of many fields of spatial statistics: for a square of unit side length the fraction of the area wasted by a buffer zone of width 0.1 would be 36 %, while the fraction of the volume in a unit cube would be 48.8 %. Consequently, careful edge–correction is necessary. Various forms of doing it are presented in Stoyan and Stoyan (1994) for planar point processes. Here a form is used which is suitable for the case of homogeneous (not necessarily isotropic) point fields and yields an unbiased estimator of $`\varrho `$, which reads:
$`\widehat{\varrho }_{\mathrm{STO}}(r)={\displaystyle \frac{1}{4\pi r^2}}{\displaystyle \sum _{i=1}^{N}}{\displaystyle \sum _{\genfrac{}{}{0pt}{}{j=1}{j\ne i}}^{N}}{\displaystyle \frac{k(r-|𝐱_i-𝐱_j|)}{V(W\cap W_{𝐱_i-𝐱_j})}},`$ (14)
from which we have that $`\widehat{g}_{\mathrm{STO}}(r)=1+\widehat{\xi }_{\mathrm{STO}}(r)=\widehat{\varrho }_{\mathrm{STO}}(r)/\widehat{\lambda }^2`$. Here $`W_𝐲`$ denotes the window $`W`$ shifted by the vector $`𝐲`$, $`W_𝐲=W+𝐲=\{𝐱:𝐱=𝐳+𝐲,𝐳\in W\}`$. The denominator is the volume of the window intersected with a version of the window which has been shifted by the vector $`𝐱_i-𝐱_j`$, and it can also be written as $`W_{𝐱_i}\cap W_{𝐱_j}`$ (see Fig. 1). Clearly, this volume is smaller than the window volume which appears in the naive estimator; thus edge–correction is done.
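For a rectangular window the set covariance is known in closed form, $`V(W\cap W_𝐲)=\mathrm{\Pi }_k(L_k-|y_k|)`$, which makes a compact sketch of Eq. 14 possible (for Poisson data the estimate should scatter around $`g=1`$):

```python
# Sketch of the STO estimator, Eq. (14), for an axis-aligned box window.
import numpy as np

def g_sto(points, r_grid, sides, w):
    n = len(points)
    diff = points[:, None, :] - points[None, :, :]
    iu = np.triu_indices(n, k=1)
    d = np.linalg.norm(diff, axis=-1)[iu]
    # exact set covariance V(W cap W_{x_i - x_j}) for a box of side lengths L_k
    cov = np.prod(np.asarray(sides) - np.abs(diff[iu]), axis=-1)
    lam = n / np.prod(sides)
    g = np.empty(len(r_grid))
    for k, r in enumerate(r_grid):
        kern = np.where(np.abs(r - d) <= w,
                        0.75 / w * (1.0 - ((r - d) / w) ** 2), 0.0)
        # factor 2: the double sum in Eq. (14) runs over ordered pairs
        g[k] = 2.0 * np.sum(kern / cov) / (4.0 * np.pi * r**2 * lam**2)
    return g

rng = np.random.default_rng(5)
pts = rng.random((800, 3)) * 100.0
print(g_sto(pts, np.linspace(2.0, 20.0, 10), [100.0] * 3, w=2.0))  # ~1 here
```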
We want to emphasize here that the point process does not need to be isotropic for $`\widehat{\xi }_{\mathrm{STO}}`$ to give good estimates of $`\xi (r)`$, contrary to the four previously mentioned estimators. This property of the $`\widehat{\xi }_{\mathrm{STO}}`$ estimator makes it very useful, especially when measuring the correlation function in redshift space, because peculiar motions act to erase small scale correlations, thus flattening the shape of the correlation function and providing smaller values for $`\gamma `$. One should beware of applying statistics which are not suitable for anisotropic processes, since experience shows that deviations from isotropy may cause great errors if isotropic case estimators are used. In such cases, one can improve the STO estimator by replacing $`4\pi r^2`$ in the denominator of Eq. 14 by the quantity $`4\pi |𝐱_i-𝐱_j|^2`$.
The estimator $`\widehat{\xi }_{\mathrm{STO}}`$ uses a smoothing kernel in order to reduce shot noise. The problem of shot noise arises especially with the DP and HAM estimators because, at small scales, $`DR(r)`$ becomes very small due to the fact that the number of Poisson points within a shell of radius $`r`$ is approximately proportional to $`r^2`$. It is worth mentioning that Davis & Peebles (1983) already tried to reduce the shot noise by smoothing $`DR(r)`$ at small scales ($`r<2`$ $`h^{-1}`$ Mpc). Other authors change the estimator used at small scales (van de Weygaert 1991). Other solutions to this problem will be commented on in Section 5.
There is still another well-known but little appreciated (Blanchard & Alimi 1988) estimator introduced for the study of the angular correlation function by Peebles & Hauser (1974). Its three–dimensional counterpart is
$`\widehat{\xi }_{\mathrm{PH}}(r)={\displaystyle \frac{DD(r)}{RR(r)}}\times \left({\displaystyle \frac{N_{\mathrm{rd}}}{N}}\right)^2-1.`$ (15)
Peacock (1992) argues that $`\widehat{\xi }_{\mathrm{PH}}`$ and $`\widehat{\xi }_{\mathrm{DP}}`$ should be equivalent when applied to a large volume; however the latter is less sensitive to whether there is a rich cluster close to the border of the sample.
It can be shown (Kerscher 1998) that $`\widehat{\xi }_{\mathrm{PH}}`$ is nothing other than the isotropized Monte Carlo counterpart of $`\widehat{\xi }_{\mathrm{STO}}`$ in which the smoothing kernel has been substituted by the standard count of pairs $`DD(r)`$.
The relation between the estimators LS and PH can be easily deduced from their definitions given in Eqs. 6 and 15,
$`\widehat{\xi }_{\mathrm{LS}}=\widehat{\xi }_{\mathrm{PH}}+2-2\left({\displaystyle \frac{DR(r)}{RR(r)}}\times {\displaystyle \frac{N_{\mathrm{rd}}}{N}}\right).`$ (16)
In a broad sense, most of the estimators consist of a sum of pairs in the numerator, whereas the denominator is an edge-corrected version of the denominator in $`\varrho ^{*}(r)`$. The differences among them lie essentially in the way of performing this border correction in the denominator. In cosmology we have to cope most often with complicated windows, so the calculation of $`RR`$ and $`DR`$ has to be performed through Monte Carlo integration.
Within this general scheme, RIV deviates somewhat (at first sight) by summing edge-corrected per-galaxy counts of pairs (the ratios $`n_i(r)/V_i`$) instead of first summing the counts and then dividing, as the other estimators do. HAM and STO both present a new approach to the problem: the former arises from minimizing the dependence of the variance on the (not always well known) intensity, and the latter introduces a smoothing in the counting of pairs of galaxies.
The estimator $`\widehat{\varrho }_{\mathrm{STO}}(r)`$ has an irregularity property for small $`r`$ resulting from the denominator $`4\pi r^2`$. If the numerator of $`\widehat{\varrho }(r)`$ vanishes, then $`\widehat{\varrho }_{\mathrm{STO}}(r)=0`$ by definition. But if there is at least one pair with a very small interpoint distance, then the numerator is positive and $`\widehat{\varrho }_{\mathrm{STO}}(r)`$ may take a very large value. This problem is discussed in Stoyan and Stoyan (1996). For many point fields this effect does not play a role and it suffices to avoid too small values of $`r`$. However, in the case of galaxies $`\xi (r)`$ is known to have a pole at $`r=0`$, and the order of this pole is the value of the exponent $`\gamma `$. Thus small values of $`r`$ are important, and it is precisely in this region where we can observe remarkable differences among the various estimators considered. Small $`r`$ values are not an easy regime in which to study clustering, because at small distances there are few pairs and the shot noise dominates; consequently it is interesting to check whether any of the estimators is able to cope at least moderately well with this kind of noise.
On the other hand, in the STO case an opposite effect influences the estimation problem, namely the fact that kernel estimators tend to smooth the results. This may lead to values of $`\widehat{\varrho }(r)`$ which are too low for small $`r`$. Stoyan and Stoyan (1996) recommend using large samples and small values of the bandwidth $`w`$, carrying out numerical experiments with statistical data from simulated point fields in order to find the best value; such experiments have led to the result that a good choice of $`w`$ would be
$`w\simeq c\lambda ^{-1/3}`$ (17)
with the coefficient $`c`$ being around 0.1 for point fields such as the Poisson point process. For cluster processes, values of $`c`$ around 0.05 have yielded acceptable estimates of $`\xi (r)`$ also for small $`r`$ and this is the value we use throughout the paper.
## 3 The estimators acting on galaxy samples
The aim of this Section is to stress the fact that there exists no “perfect estimator” but that, as Doguwa & Upton (1986) remark, the usefulness of an estimator can depend on the kind of process/sample/distance range under study.
### 3.1 Comparison between DP and HAM
The currently most widely used estimators in the literature are DP and HAM. In this Section we are going to perform a comparison between them by analyzing results of applying them to galaxy samples which have been obtained in different ways.
#### 3.1.1 Complete volume–limited samples
We plot in Fig. 2 the quotient between the Hamilton and the Davis & Peebles estimators of the correlation function for a volume–limited sample extracted from Stromlo-APM, where the values of the correlation functions and of the bootstrap errors have been provided to us by J. Loveday. In this case the relative differences are small and much less significant than the bootstrap errors. In fact this result is used by Loveday et al. (1995) to clarify a possible concern regarding the HAM estimator, showing that it does not remove intrinsic large-scale clustering. So it seems that the main difference between both estimators appears when they are applied to a sample whose density is poorly known, where HAM works better. This is a very sparse sample (only 1 in 20 galaxies from the angular sample is included in the redshift survey), therefore at small scales the statistical quantities are rather noisy. It is interesting to note that the value of $`\xi (r)`$ at $`r=1.23h^{-1}`$ Mpc is 2.7 for DP and 2.4 for HAM. Although at this scale the error bar is quite large (between 5 and 10), it is clear that the value of $`\xi (r)`$, assuming it follows a power law, is underestimated by both estimators, indicating a strong bias. At the same scale the RIV estimator provides a larger value for $`\xi (r)`$, 13.6, which clearly is more acceptable.
#### 3.1.2 Samples with non-uniform density
The expressions we have presented for the estimators are adequate for samples which are either complete or volume–limited. They can be generalized to other kinds of limitation by assigning to each galaxy a weight inversely proportional to a certain selection function. This function represents the fraction of the total population of galaxies satisfying the limitation criterion at a certain distance. The weighting scheme used or the uncertainty in the knowledge of the selection function can influence, however, the result for the correlation function. Since we want to compare estimators, this added uncertainty would unnecessarily disturb the measurement, so we shall mainly work with complete or volume–limited samples.
Nonetheless, we want to show briefly in this subsection an example of the difference of applying DP and HAM to incomplete samples. In particular we have used two samples extracted from the Optical Redshift Survey (described in Santiago et al. 1995), one limited in apparent magnitude and the other in diameter. What we show in Fig. 3 is the quotient between both estimators, i.e., $`\widehat{\xi }_{\mathrm{HAM}}/\widehat{\xi }_{\mathrm{DP}}`$, calculated by Hermit et al. (1996). The differences are only noticeable at very large scales and they are bigger for the magnitude–limited sample (bottom panel) than for the diameter–limited sample (upper panel). This fact is remarkable because the latter sample is sparser at large distances than the former, since the selection function is steeper for the diameter–limited sample than for the magnitude–limited one (Santiago et al. 1996). However, Galactic extinction affects galaxy magnitudes more strongly than diameters. The selection function used by Hermit et al. (1996) incorporates an angular dependence modelling the extinction, and this fact could explain the deviations observed in Fig. 3. In fact for the Las Campanas redshift survey, which has a very complex selection function, Tucker et al. (1997) have shown that, at large scales, the differences between both estimators can be as large as the signal itself.
### 3.2 The six estimators acting on a volume–limited sample
Now we shall apply the six mentioned estimators to a complete sample, volume–limited to 79$`h^{-1}`$ Mpc, extracted from the Perseus-Pisces Survey (for a thorough description of the sample, see Kerscher et al. 1997). The results can be observed in Fig. 4 and show that, at small and intermediate scales, all estimators behave similarly except STO, which gives a bigger value of $`g`$; as we shall see later, this estimator has a smaller variance than the others at small scales, important for the determination of $`\gamma `$. This can be interpreted as indicating that its nature makes it less sensitive to local anisotropies due to peculiar motions. This result mainly indicates that all the estimators measure the two-point correlation function rather well in the “easy” range $`2<r<15`$ $`h^{-1}`$ Mpc. For bigger scales, relevant for information on a possible trend to homogeneity of the matter distribution, there are some differences as well. Therefore, it is worth studying the behaviour of the different estimators on controllable point sets in order to know the deviation of each one from the true value of the two-point correlation function and the ensemble variance. The test performed in Section 5 points in this direction.
## 4 Description of the artificial samples
### 4.1 Cox processes
We shall make use of an artificial sample which is a particular kind of a so-called segment Cox point process. This is a clustering process for which an analytical expression of its 2–point correlation function is known and therefore can be used as a test to check the accuracy of the $`\xi `$–estimators. The variant we are going to use is produced in the following way: segments of length $`l`$ are randomly scattered inside a cube $`W`$ (see Fig. 5) and on these segments points are randomly distributed. Let $`L_V`$ be the length density of the system of segments, $`L_V=\lambda _\mathrm{s}l`$, where $`\lambda _\mathrm{s}`$ is the mean number of segments per unit volume. If $`\lambda _l`$ is the mean number of points on a segment per unit length, then the intensity $`\lambda `$ of the resulting point process is
$`\lambda =\lambda _lL_V=\lambda _l\lambda _\mathrm{s}l.`$ (18)
For this point field the correlation function can be easily calculated taking into account that the point field has a driving random measure equal to the random length measure of the system of segments. Stoyan, Kendall and Mecke (1995) have shown that the pair correlation function of the point field equals the pair correlation function of the system of segments, which reads
$`\xi _{\mathrm{Cox}}(r)={\displaystyle \frac{1}{2\pi r^2L_V}}-{\displaystyle \frac{1}{2\pi rlL_V}}`$ (19)
for $`r\le l`$ and vanishes for larger $`r`$. As we can see, the expression is independent of the intensity $`\lambda _l`$.
In Section 5.2 we shall use 10 realizations of a segment Cox process generated inside a cube of sidelength $`L=100`$ $`h^{-1}`$ Mpc with values of the parameters $`\lambda _\mathrm{s}=10^{-3},\lambda _\mathrm{l}=0.6`$, and $`l=10`$ $`h^{-1}`$ Mpc, which produces sets containing $`N\simeq 6000`$ points.
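A realization of this process, together with the analytic $`\xi _{\mathrm{Cox}}`$ of Eq. 19, can be generated in a few lines; the periodic wrap used below for points leaving the cube is a simplifying assumption about the edge treatment:

```python
# Sketch: one realization of the segment Cox process and the xi of Eq. (19).
import numpy as np

def cox_segment_sample(box=100.0, lam_s=1e-3, lam_l=0.6, length=10.0, seed=0):
    rng = np.random.default_rng(seed)
    n_seg = rng.poisson(lam_s * box**3)
    starts = rng.random((n_seg, 3)) * box
    dirs = rng.normal(size=(n_seg, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)   # isotropic directions
    pts = [s + t * d
           for s, d in zip(starts, dirs)
           for t in rng.random(rng.poisson(lam_l * length)) * length]
    return np.array(pts) % box       # periodic wrap keeps points inside the cube

def xi_cox(r, lam_s=1e-3, length=10.0):
    l_v = lam_s * length                                  # L_V of Eq. (19)
    return np.where(r <= length,
                    1.0 / (2 * np.pi * r**2 * l_v)
                    - 1.0 / (2 * np.pi * r * length * l_v), 0.0)

pts = cox_segment_sample()
print(len(pts))                          # ~6000 points, as quoted in the text
print(xi_cox(np.array([0.3, 1.0, 5.0, 10.0])))
```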
### 4.2 Simulated galaxies
In this subsection we show how a sample of synthetic galaxies was obtained from a simulation of a CDM–type Universe. The cubic region modeled was of sidelength 80$`h^{-1}`$ Mpc, a standard $`\mathrm{\Omega }=1`$ Universe was chosen, and the initial computational grid was 32<sup>3</sup>, with the same number of particles. The run started from small perturbation amplitudes and was terminated when the $`\sigma _8`$ parameter, the mass dispersion in 8$`h^{-1}`$ Mpc radius spheres, was close to the observed value 1. We used H. Couchman’s public domain adaptive P<sup>3</sup>M code (which can be obtained at http://coho.astro.uwo.ca/pub/ap3m/ap3m.html), and the initial data were those of the test model supplied with this code. The initial density perturbation spectrum was close to the observed one for scales of 8–10$`h^{-1}`$ Mpc with a rather sharp cutoff used to eliminate numerical effects:
$$P(k)\propto k^{-1}\mathrm{exp}\left(-(k/k_c)^{16}\right).$$
(20)
The cutoff wavenumber $`k_c=0.96h`$ Mpc<sup>-1</sup> is lower than the Nyquist frequency used in the computations (with a 32<sup>3</sup> grid the smallest usable wavelength is 5$`h^{-1}`$ Mpc, while the cutoff wavelength is 6.5$`h^{-1}`$ Mpc). The final state of the model represents a continuous distribution of dark matter in the computational volume (see Fig. 6).
In order to get closer to observations one has to predict the positions of luminous objects (galaxies, their groups or clusters) on the basis of this distribution. There exist many essentially phenomenological methods for doing this, and we have applied another one, the recent equal–mass binary tree approach. These trees are known as multidimensional $`k`$-trees; they were used first in the statistics of cosmological data by van de Weygaert (1988) and have now been resurrected by Suisalu et al. (1999), who give in that paper the detailed description of their motivation and of the intricacies of their use. The present application is ideal for these trees, having a perfectly shaped volume and a number of particles that is a power of 2.
The equal–mass trees are constructed by dividing the sample volume successively into smaller subvolumes, keeping the mass (number of points) of the two subvolumes equal. In order to illustrate the method, we show in Fig. 7 how a planar point process with $`2^4`$ points is divided by means of the equal–mass tree for the two different starting directions.
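A sketch of the construction, with recursive median splits along cyclically alternating axes and a fixed (x-axis) starting direction, is given below; the point sample is synthetic and only illustrates the bookkeeping:

```python
# Sketch of the equal-mass binary tree: split recursively at the median so
# that both halves carry the same number of points; a cell's density is its
# count divided by its volume fraction, as in the text.
import numpy as np

def equal_mass_cells(points, lower, upper, level, axis=0):
    """Return a list of (lower, upper, n_points) cells at the given level."""
    if level == 0:
        return [(lower, upper, len(points))]
    x = points[:, axis]
    cut = np.median(x)                        # equal-mass split position
    up_l, lo_r = upper.copy(), lower.copy()
    up_l[axis] = lo_r[axis] = cut
    nxt = (axis + 1) % points.shape[1]        # alternate splitting direction
    return (equal_mass_cells(points[x <= cut], lower, up_l, level - 1, nxt) +
            equal_mass_cells(points[x > cut], lo_r, upper, level - 1, nxt))

rng = np.random.default_rng(6)
pts = rng.random((2**15, 3)) * 80.0           # 32^3 particles in an 80^3 box
cells = equal_mass_cells(pts, np.zeros(3), np.full(3, 80.0), level=12)
dens = [n / (np.prod(u - l) / 80.0**3) for l, u, n in cells]
print(len(cells), min(dens), max(dens))       # 4096 cells of 8 points each
```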
This procedure assigns a fixed mass to a given level of subdivision, while the values of the subvolumes and their positions describe the density distribution for a given mass scale. One can select objects applying either a mass or a density bias and we choose the latter. In other words, for a given level of subdivision all cells have the same mass, but different density. The density is just proportional to the inverse of the volume of the cell. The mass within a cell will form a galaxy if its density exceeds a given threshold. We have applied this procedure to the CDM simulation. In Fig. 8 we have plotted the number of cells $`N`$ with density exceeding a given density threshold $`n`$ for each level $`l`$ of subdivision. It can be seen that the isolevel lines split into three, showing the scatter for trees that have different starting directions.
For the present study we used samples selected on the basis of a fixed threshold density, $`n=10^6`$ (in units of number of points divided by the fraction of the whole volume occupied by the cell), and for four levels. Each level can be assigned a fixed mass, $`M_{\ell }=1.4\times 10^{17}h^{-1}2^{-\ell }M_{\odot }`$. The mass range for our samples runs from $`4.3\times 10^{12}h^{-1}M_{\odot }`$ for the finest subdivision, somewhat higher than the total mass of a giant galaxy, to $`3.4\times 10^{13}h^{-1}M_{\odot }`$, characteristic for a group or a poor cluster of galaxies. Each object gets its coordinates from the centre of the cell that collapsed to form it, and we used a fixed starting direction to construct a tree.
The spatial distribution of the objects of our samples is shown on the left side of Fig. 9. From top to bottom the panels correspond to levels $`\ell =12,13,14,15`$ and the number of points of each subsample is respectively $`N=762,1930,4734,11284`$. As can be seen, the geometry of the mass distribution for different mass levels does not differ much.
## 5 Comparison of the estimators of the correlation function
### 5.1 Dependence on $`N`$
We have calculated the pair correlation function $`g(r)=1+\xi (r)`$ for the four samples shown in Fig. 9 by means of four of the estimators described in Section 2. Our aim was to check the influence of the total number of points $`N`$ on each of them. The extracted galaxies we have described in Section 4.2 are appropriate for this check because these samples trace the same structure with an increasing number of points for bigger levels $`\ell `$.
The results are shown in the right panels of Fig. 9. We can see that at large scales there is full agreement among the four methods but, at short distances, STO and RIV still agree rather well, while DP and HAM deviate from this behaviour. In all cases we have used random realizations containing $`N_{\mathrm{rd}}=20000`$ points each. This is a typical number of random points used in the computation of $`\xi (r)`$ (Dalton et al. 1994, Tucker et al. 1997). We see in the plot that the relation among the different estimators remains similar from one panel to the other, although $`N`$ varies by a factor of 15 in total.
The conclusion is that $`N`$, provided it is big enough to trace satisfactorily the main structures present in the sample, does not have a significant influence on the estimation of the correlation function.
We have repeated this analysis by using the same data sample ($`\ell =12`$, $`N=762`$ simulated galaxies), but different realizations of the random samples (different seeds). For $`10^4`$ random points the differences in the correlation functions were appreciable for all four estimators that use auxiliary random samples, but for $`10^5`$ points the correlation functions practically coincided, except for small $`r`$ values for DP and HAM. In the next subsection we study in more detail the dependence on $`N_{\mathrm{rd}}`$ by means of the Cox processes.
### 5.2 Dependence on $`N_{\mathrm{rd}}`$
First we have performed a couple of tests on 10 Cox processes of the kind described in Section 4.1, consisting of calculating for them $`\xi `$ and the ensemble error, as a function of $`N_{\mathrm{rd}}`$, with the four estimators of Section 2 that use random samples. We see in Fig. 10 what happens when we increase the number of random points: $`10^4,10^5,10^6`$. Our aim is to check if the value of $`N_{\mathrm{rd}}`$ is the source of the differences among them. In Fig. 10 the results of $`\xi `$ for very small distances have been suppressed, since the use of Poisson samples introduces shot noise in the estimators because the local fluctuations become important. One sees that increasing the number of random points helps to reduce the variances but, of course, to use a very large number of random points one has to resort to efficient searching algorithms like those based on the multidimensional binary tree (Martínez et al. 1990) to count the number of pairs $`RR(r)`$ and $`DR(r)`$. Alternatively, one has to use analytical expressions for the evaluation of these quantities (see the appendix in Hamilton (1993)).
Except for the first bin in DP and HAM, the results are practically the same using $`N_{\mathrm{rd}}=10^5`$ as using $`N_{\mathrm{rd}}=10^6`$; that means that, for this process and choice of parameters, $`N_{\mathrm{rd}}=10^5`$ is “big enough”. Let us notice that in this case the difference between PH and LS is very small, tending to 0 as $`N_{\mathrm{rd}}`$ increases, since then $`(DR(r)/RR(r))\times (N_{\mathrm{rd}}/N)`$ tends to 1 (see Eq. 16).
As we can see, the DP and HAM estimators have a larger scatter for the correlation function at short distances than do PH and LS. This is due to the fact that the shot noise acts to create spurious clustering in the random samples at small distances, influencing the computed number of pairs $`DR(r)`$ and $`RR(r)`$ and, through those, the estimators HAM and DP. The bigger problem is $`DR(r)`$, which does not enter the estimator PH. If one wishes to use $`DR(r)`$ as a background number of pairs to normalize the quantity $`DD(r)`$, one has to use a large enough random sample in order to make the fluctuations negligible. But, how large? The intensity (number density) of the random sample should be at least equal to the local intensity of the real catalog in the clustered regions. For example, for the segment Cox processes used here, we can deduce a priori the number of random points needed to estimate $`\xi (r)`$ reliably at small separations. From the expression of the correlation function given in Eq. 19, we know that for this kind of process the average density at a distance of 0.3 $`h^{-1}`$ Mpc from a given point is 172.5 times the mean number density, $`6\times 10^{-3}`$; therefore if we want to map these distances with the random sample, we need at least $`10^6`$ random points in order that the intensity of the random catalog equals the previous value of the local density. At this point it is interesting to remark that at the smallest interpoint separations the effects of the finite boundaries on the estimates of $`\xi (r)`$ are less important than at large scales; however, it is more difficult to cope with these separations using this kind of estimator, because one needs to use a huge amount of random points or other sophisticated solutions to get reliable results.
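The arithmetic behind this a priori estimate is easily checked:

```python
# Sketch of the a priori N_rd estimate quoted above, using Eq. (19).
import numpy as np

lam_s, lam_l, length, box = 1e-3, 0.6, 10.0, 100.0
l_v = lam_s * length
lam = lam_l * l_v                          # mean density, 6e-3 (Eq. 18)
r = 0.3
g = 1.0 + 1.0 / (2 * np.pi * r**2 * l_v) - 1.0 / (2 * np.pi * r * length * l_v)
print(g)                                   # ~172.5 times the mean density
print(g * lam * box**3)                    # ~1e6 random points needed
```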
Another practical rule to decide if the random catalogue used is large enough is to repeat the calculations using different random seeds – if the results differ appreciably in the region of interest, then it is necessary to increase the size of the random sample (or to choose another estimator).
At intermediate scales all the estimators give the right result with moderate error bars, whereas at large scales the errors increase for all estimators. Therefore, the difficulty in obtaining accurate estimates of $`\xi `$ at large distances does not seem to be due only to the form of a particular estimator or to the number of random points used, but to the statistic itself. Note, however, that we have limited our analysis to scales $`r\le l`$; at the end of Section 5.4 we will compare some estimators at longer distances by means of simulations of the cluster distribution.
### 5.3 Estimation of biases
We shall now consider the results of the previous subsection for the biggest $`N_{\mathrm{rd}}`$ used. Although, as we have seen, increasing $`N_{\mathrm{rd}}`$ reduces the variances, the same effect is not found for the bias. We plot in Fig. 11 a measure of the bias in the form of a quotient between the mean of the 10 estimated values of $`g`$ for each estimator, using $`N_{\mathrm{rd}}=10^6`$, and the theoretical $`g_{\mathrm{Cox}}`$. We also want to include the STO and RIV estimators in the comparison. We shall estimate the volumes entering their definition by means of analytical expressions, which are available for this simple geometry.
At distances $`r\ge 2`$ $`h^{-1}`$ Mpc the biases of all the methods are of the same order and the results for $`g(r)`$ are quite reliable when compared with the expected theoretical values given in Eq. 19. At short distances the estimator STO performs very well, providing the smallest bias. This good performance is probably related to the fact that the segment Cox process is locally anisotropic at small scales (points randomly placed on a segment) and, as we have explained, the STO estimator deals well with this kind of process. The other estimators show a clear bias at small scales, underestimating the true value of the correlation function. It is expected that for very large windows and a large number of points in the point sample all estimators are of a similar quality (Hermit et al. 1996).
### 5.4 Variance at large scales
The variance of an estimator on a Cox process could be different from that of the same estimator applied to galaxy catalogues or cosmological simulations. Moreover, the kind of Cox process used here has a limitation due to the finite length of the segment employed to generate the point distribution, namely that $`\xi `$ vanishes for distances greater than that length. In order to see what happens in the absence of such a limitation, we have taken 10 CDM cluster simulations produced by Croft & Efstathiou (1994) and calculated $`g`$ on them using the six estimators. The standard deviations, shown in Fig. 12, indicate that, at large scales, HAM and LS have a smaller variance than the others, which could not have been appreciated in the Cox processes, where we should not go farther than 10$`h^{-1}`$ Mpc in distance. This result supports Hamilton’s claim that the estimator proposed by him (Hamilton 1993) is more reliable on large scales, where the correlation function is small. Its use provides interesting clues on the transition to homogeneity of the galaxy distribution at large scales (Martínez, 1999). Other tests have been performed on simulations for which $`g(r)=1`$ at large scales. For these simulations, Hamilton’s estimator has a small systematic bias but a very small estimation variance. Combining both quantities in the mean square deviation from the true value, HAM shows a large degree of precision at large scales. The reason for that lies in the fact that the term $`DR(r)`$ in Eq. 5 is related to an improvement of the estimator of the intensity (Stoyan & Stoyan 1998).
## 6 Estimation of errors using Cox processes
After having performed the previous tests, we are now ready to use Cox processes for estimating errors. We shall do it on the extracted galaxy sample corresponding to the $`\ell =12`$ level, but the method would be analogous in the other cases.
As Hamilton (1993) points out (see references therein), five methods of estimating the variance of $`\xi `$ are commonly used: Poissonian error, the same enhanced by a certain factor, bootstrap, ensemble error coming from calculating $`\xi `$ in subregions of the sample and, finally, ensemble error coming from artificial samples suffering the same selection effects as the real sample. The kind of error we are going to give belongs to the fifth group.
We simulate 10 Cox segment point fields with the following values of the parameters: $`l=20`$ $`h^{-1}`$ Mpc and $`\lambda _\mathrm{s}=4\times 10^{-5}`$. This leads to a correlation function which is comparable with the 2–point correlation function of the sample of simulated galaxies stopping at the $`\ell =12`$ level described in Section 4.2 and which approximately verifies $`\xi (20)=0`$ and $`\xi (10)=1`$. Typically these point fields will be generated inside a cube of 80$`h^{-1}`$ Mpc sidelength containing about 800 points. Using a similar kind of process (objects homogeneously distributed in filaments and sheets), Buryak & Doroshkevich (1996) have simulated the galaxy distribution.
As can be appreciated in the plots of Fig. 9, the use of different estimators causes variability in the slopes of the correlation function. A least squares fit to a power–law $`g(r)\propto r^{-\gamma }`$ in the range \[0.5,8\] $`h^{-1}`$ Mpc gives the following results for four of the methods: $`\gamma _{\mathrm{DP}}=2.14\pm 0.06`$, $`\gamma _{\mathrm{HAM}}=2.27\pm 0.09`$, $`\gamma _{\mathrm{RIV}}=2.03\pm 0.04`$, $`\gamma _{\mathrm{STO}}=2.03\pm 0.04`$, for a true value $`\gamma \simeq 2`$ due to the shape of the power–spectrum (Eq. 20). The fit has been performed using linear bins, and the value of $`\widehat{g}`$ in a particular bin has been assigned to its centre. In this case the error accompanying the previous numbers comes from the weighted least squares fit, taking as errors for $`g(r)`$ the ones obtained using the Cox processes mentioned in the previous paragraph.
Apart from using these simulations to test the stability of the methods, we want to stress that this is a way to evaluate the errors of the correlation function for a given realization, alternative to the standard bootstrap. The idea of the method is similar to measuring the dispersion of $`\xi `$ in ensembles of many independent synthetic catalogues with similar statistical properties (Fisher et al. 1993): using cluster point processes with the same intensity as our sample and with a known analytical expression for $`\xi (r)`$, we build a model having correlation behaviour similar to that of our galaxy sample, i.e., a similar $`\xi (r)`$ over the whole range of scales, and then we are able to estimate the ensemble error by constructing several realizations of the point process, applying the estimator of $`\xi `$ to all these realizations and measuring the standard deviation. We believe that this method for the estimation of the errors is more reliable than the standard bootstrap because of a serious conceptual weakness the latter suffers from, namely that the bootstrap suggested in Ling et al. (1986) produces new point patterns by sampling with replacement; consequently, in each new point pattern there are multiple points, i.e., quite heavy clusters. In cluster point processes the degree of clustering will increase. This leads to incorrect, probably too large, error predictions. Fisher et al. (1994) show that bootstrap errors are in general an overestimate of the true errors.
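A sketch of the whole recipe follows; the PH estimator stands in for whichever estimator is applied to the real sample, and the value $`\lambda _l\simeq 2`$ is an assumption chosen so that realizations contain about 800 points, as quoted above:

```python
# Sketch of the ensemble-error recipe: several Cox realizations with the
# parameters of this section, one estimator, and the standard deviation
# across realizations.
import numpy as np
from scipy.spatial import cKDTree

def cox_sample(box, lam_s, lam_l, length, rng):
    n_seg = rng.poisson(lam_s * box**3)
    s = rng.random((n_seg, 3)) * box
    d = rng.normal(size=(n_seg, 3))
    d /= np.linalg.norm(d, axis=1, keepdims=True)
    pts = [s[i] + t * d[i] for i in range(n_seg)
           for t in rng.random(rng.poisson(lam_l * length)) * length]
    return np.array(pts) % box       # periodic wrap, a simplifying assumption

def xi_ph(data, rand, edges):
    td, tr = cKDTree(data), cKDTree(rand)
    n, m = len(data), len(rand)
    dd = np.diff(td.count_neighbors(td, edges)) / (n * (n - 1.0))
    rr = np.diff(tr.count_neighbors(tr, edges)) / (m * (m - 1.0))
    return dd / rr - 1.0

rng = np.random.default_rng(7)
edges = np.linspace(1.0, 16.0, 16)
rand = rng.random((50000, 3)) * 80.0
xis = [xi_ph(cox_sample(80.0, 4e-5, 2.0, 20.0, rng), rand, edges)
       for _ in range(10)]
print(np.std(xis, axis=0))           # ensemble error bars for the sample
```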
## 7 Conclusions
In this paper we have performed a comparison, by using Cox processes, of most of the existing 2–point correlation function estimators.
We would like to point out that a clear distinction has to be made among the statistical quantity $`\xi (r)`$, the estimator used to evaluate it on a particular galaxy catalog, $`\widehat{\xi }(r)`$, and the particular algorithm used to compute the quantities entering into the estimator. It is important to note that what we have compared here is the performance of different estimators, each implemented in its simplest way, following the definitions given in Section 2. These kinds of implementations are the ones commonly used in Cosmology. In particular, the estimators depending on the background pair counts $`RR(r)`$ and $`DR(r)`$ need a large amount of random points, $`N_{\mathrm{rd}}\simeq 10^6`$, if one is trying to accurately measure the correlation function at the smallest separations, although good enough results can be obtained at medium and large scales with $`N_{\mathrm{rd}}\simeq 20000`$. Note that these figures are appropriate for samples with this density but that, for samples with other characteristics, one should previously perform tests in order to decide which is a good value for $`N_{\mathrm{rd}}`$. Cox processes are a good benchmark for such tests. The results show that at large distances all estimators present similar values and big errors, with HAM and LS clearly being better than the others; at intermediate distances values and errors are similar and perfectly acceptable; and at short distances the errors for STO are clearly the smallest. Note, however, that the variance of the pair-count estimators gets smaller by increasing the number of random points or using alternative ways of accurately estimating the number of background pairs. Another advantage of RIV and STO is that they compute something as easy to estimate accurately as volumes (in the Monte Carlo implementation the dependence on $`N_{\mathrm{rd}}`$ is softer than for the others because the random points are being used only for the evaluation of volumes and not for the computation of pairs), whereas in order to increase accuracy in the others one should make use of a “big enough” random sample, and the decision about how big that should be, in the absence of previous numerical tests, is somewhat arbitrary. Unfortunately one factor of arbitrariness is always present, namely the length of the bin in distance (or the coefficient $`c`$ in the choice of bandwidth for the STO estimator).
The main conclusion we have drawn from our analysis is that there exists no optimal estimator, but that each one has advantages and weak points, and, depending on the nuances of the problem we want to analyze, one or another will be preferable. In the case of complete samples limited in volume, RIV is not very sensitive to the number of random points used to evaluate the volumes but presents a bias at small distances; HAM has a small variance at large distances but a larger one at small distances, where it is highly sensitive to $`N_{\mathrm{rd}}`$ and biased; DP has a big variance and presents a bias at short scales; PH depends also on $`N_{\mathrm{rd}}`$, but less than HAM and DP, and also shows a bias at small scales; STO is never the worst in any of the tests and can be applied also to anisotropic processes; and LS behaves in many aspects similarly to PH, but with a smaller variance at large scales. For samples with non-uniform density these conclusions may vary, and in particular HAM is preferable at large scales.
Two further points, secondary with regard to the comparison of estimators but also interesting and potentially useful for researchers in this field, have been treated: for testing the estimators we have introduced a new phenomenological method to extract galaxy samples from cosmological simulations based on multidimensional binary trees; and, for such samples, we have estimated errors in the determination of the 2–point correlation function by using realizations of a Cox process with the same number density as the simulated sample.
###### Acknowledgements.
This work was partially supported by the Spanish DGES project n. PB96-0797. D. Stoyan was partially supported by a grant of the Deutsche Forschungsgemeinschaft. E. Saar acknowledges a fellowship of the Conselleria de Cultura, Educació i Ciència de la Generalitat Valenciana. We are grateful to R. Croft, S. Hermit, M. Graham and J. Loveday for kindly permitting us to use part of their data and results. Advice from M. Stein and R. Moyeed is also acknowledged. We thank the referee, Douglas Tucker, as well as Andrew Hamilton and Martin Kerscher for a careful reading of the manuscript and for their valuable comments and suggestions.
# HIGH–RESOLUTION 3D SIMULATIONS OF RELATIVISTIC JETS
## 1 Introduction
For several years the dynamical and morphological properties of axisymmetric relativistic jets have been investigated by means of relativistic hydrodynamic simulations (van Putten 1993; Duncan & Hughes 1994; Martí et al. 1994, 1995, 1997; Komissarov & Falle 1998; Rosen et al. 1999). In addition, relativistic MHD simulations have been performed in 2D (Koide, Nishikawa & Mutel 1996; Koide 1997) and 3D (Nishikawa et al. 1997, 1998). In their 3D simulations Nishikawa et al. have studied mildly relativistic jets (Lorentz factor 4.56) propagating both along and obliquely to an ambient magnetic field. In this Letter we report on high-resolution 3D simulations of relativistic jets with the largest beam flow Lorentz factor considered up to now (7.09), the highest resolution (8 cells per beam radius), and the longest time evolution (75 normalized time units; a normalized time unit is defined as the time needed for the jet to cross a unit length; see Massaglia, Bodo & Ferrari 1996).
The calculations have been performed with the high–resolution 3D relativistic hydrodynamics code GENESIS (Aloy et al. 1999), which is an upgraded version of the code developed by Martí, Müller & Ibáñez (1994) and Martí et al. (1995). GENESIS integrates the 3D relativistic hydrodynamic equations in conservation form in Cartesian coordinates including an additional conservation equation for the density of beam material. The computations were performed on a Cartesian domain (X,Y,Z) of size $`15R_b\times 15R_b\times 75R_b`$ ($`120\times 120\times 600`$ computational cells), where $`R_b`$ is the beam radius. The jet is injected at $`z=0`$ along the positive $`z`$-axis through a circular nozzle defined by $`x^2+y^2R_b^2`$. Beam material is injected with a beam mass fraction $`f=1`$, and the computational domain is initially filled with an external medium ($`f=0`$).
We have considered a 3D model corresponding to model C2 of Martí et al. (1997), which is characterized by a beam-to-external proper rest-mass density ratio $`\eta =0.01`$, a beam Mach number $`M_b=6.0`$, and a beam flow speed $`v_b=0.99c`$ ($`c`$ is the speed of light), or a beam Lorentz factor $`W_b\simeq 7.09`$. An ideal gas equation of state with an adiabatic exponent $`\gamma =5/3`$ is assumed to describe both the jet matter and the ambient gas. The beam is assumed to be in pressure equilibrium with the ambient medium.
The evolution of the jet was simulated up to $`T\simeq 150R_b/c`$, when the head of the jet is about to leave the grid. The mean jet propagation speed is $`v_h\simeq 0.5c`$, while the 1D estimate of the jet propagation speed (see, e.g., Martí et al. 1997) gives $`0.42c`$, i.e., our simulations are still within the 1D phase (see Martí, Müller & Ibáñez 1998).
Non–axisymmetry was imposed by means of a helical velocity perturbation at the nozzle given by
$$v_b^x=\zeta v_b\mathrm{cos}\left(\frac{2\pi t}{\tau }\right),v_b^y=\zeta v_b\mathrm{sin}\left(\frac{2\pi t}{\tau }\right),v_b^z=v_b\sqrt{1-\zeta ^2},$$
(1)
where $`\zeta `$ is the ratio of the toroidal to total velocity and $`\tau `$ the perturbation period (i.e., $`\tau =T/n`$, $`n`$ being the number of cycles completed during the whole simulation). The wavelength of the perturbation, $`\lambda `$, is obtained from the expression $`\lambda =v_b^z\tau \simeq {\displaystyle \frac{v_b}{v_h}}{\displaystyle \frac{L}{n}}`$, where $`L`$ is the axial dimension of the grid.
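Both the 1D head speed quoted above and the expected perturbation wavelength can be checked with a few lines; the enthalpy factors in the ram-pressure balance are approximated by unity here, which is why the sketch returns $`0.41c`$ rather than the quoted $`0.42c`$:

```python
# Sketch: 1D jet head speed from relativistic ram-pressure balance (with the
# enthalpies set to 1, an approximation) and the perturbation wavelength.
import numpy as np

def v_head_1d(v_b, eta):
    w_b = 1.0 / np.sqrt(1.0 - v_b**2)   # beam Lorentz factor (~7.09 here)
    s = w_b * np.sqrt(eta)              # sqrt of relativistic density contrast
    return s * v_b / (1.0 + s)

v_b, eta, L, n = 0.99, 0.01, 75.0, 50
print(v_head_1d(v_b, eta))              # ~0.41 (units of c)
print((v_b / 0.42) * L / n)             # lambda ~ 3.5 R_b with the quoted 0.42c
```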
## 2 Morphology and dynamics of 3D relativistic jets
We have considered a model with a $`1\%`$ perturbation in helical velocity ($`\zeta =0.01`$) and $`n=50`$. Figure 1 shows various quantities of the jet in the plane $`y=0`$ at the end of the simulation. Two values of the beam mass fraction are marked by white contour levels. The beam structure is dominated by the imposed helical pattern, with a characteristic wavelength of $`\simeq 3.0R_b`$ (to be compared with the value $`\lambda =3.5R_b`$ expected from the estimate of $`\lambda `$ in the previous paragraph) and an amplitude of $`\simeq 0.2R_b`$.
### 2.1 Cocoon
The overall jet morphology is characterized by the presence of a highly turbulent, subsonic, asymmetric cocoon. The pressure distribution outside the beam is nearly homogeneous, giving rise to a symmetric bow shock (Fig. 1b). As in the classical case (Norman 1996), our relativistic 3D simulation shows less ordered structures in the cocoon. The cocoon remains quite thin ($`\simeq 2R_b`$) as long as the jet propagates efficiently.
The flow field outside the beam shows that the high velocity backflow is restricted to a small region in the vicinity of the hot spot (Fig. 1e), the largest backflow velocities ($`\simeq 0.5c`$) being significantly smaller than in 2D models. The flow with high Lorentz factor found in axisymmetric simulations (see flow patterns in Martí et al. 1996) appears here restricted to a thin layer around the beam and possesses sub-relativistic speeds ($`\simeq 0.25c`$). The magnitude of the backflow velocities in the cocoon does not support relativistic beaming.
### 2.2 Beam and hot spot
Within the beam, the perturbation pattern is superimposed on the conical shocks at about 26 and 50 $`R_b`$. The beam does not exhibit the strong perturbations (deflection, twisting, flattening or even filamentation) found by other authors (Norman (1996) for 3D classical hydrodynamic jets; Hardee (1996) for 3D classical MHD jets). This can be taken as a sign of stability, although it can be argued that our simulation is not evolved far enough. Obviously, the beam cross section and the internal conical shock structure are correlated (bottom panel in Figure 1). Before the first recollimation shock the beam cross section shrinks to an effective radius of $`\simeq 0.7R_b`$. After this shock and in the rarefaction the beam reexpands and stretches due to an elliptical surface mode (e.g., Hardee 1996). For $`37R_b\lesssim z\lesssim 50R_b`$ the beam flow is influenced by the second recollimation shock, which causes a compression of the beam. A triangular mode seems to grow in this region.
The helical pattern propagates along the jet at nearly the beam speed (see the animation at http://scry.uv.es/aloy.html/JETS/videos/n50p01), which could give rise to superluminal components when viewed at appropriate angles. Besides this superluminal pattern, the presence of emitting fluid elements moving at different velocities and orientations could lead to local variations of the apparent superluminal motion within the jet. This is shown in Fig. 2, where we have computed the mean (along each line of sight, and for a viewing angle of 40 degrees) local apparent speed. The distribution of apparent motions is inhomogeneous and resembles that of the observed individual features within knots in M87 (Biretta, Zhou, & Owen 1995).
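For reference, the apparent speed of a feature moving at velocity $`\beta c`$ along the beam, seen at a viewing angle $`\theta `$, follows the standard superluminal-motion formula:

```python
# Sketch: standard apparent transverse speed beta_app = beta sin(theta) /
# (1 - beta cos(theta)); at 40 degrees a pattern with beta ~ 0.99 appears
# superluminal.
import numpy as np

def beta_apparent(beta, theta_deg):
    th = np.radians(theta_deg)
    return beta * np.sin(th) / (1.0 - beta * np.cos(th))

print(beta_apparent(0.99, 40.0))   # ~2.6 (units of c)
```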
The jet can be traced continuously up to the hot spot which propagates as a strong shock through the ambient medium. Beam material impinges on the hot spot at high Lorentz factors. We could not identify a terminal Mach disk in the flow. We find flow speeds near (and in) the hot spot much larger than those inferred from the one dimensional estimate. This fact was already noticed for 2D models by Komissarov & Falle (1996) and suggested by them as a plausible explanation for an excess in hot spot beaming.
### 2.3 Beam/cocoon shear layer
We find a layer of high specific internal energy (Fig. 1d) surrounding the beam, as in previous axisymmetric models (Aloy et al. 1999). A comparison with the backflow velocities (Fig. 1e) shows that it is mainly composed of forward moving beam material at a speed smaller than the beam speed. The intermediate speed of the layer material is due to shear in the beam/cocoon interface, which is also responsible for its high specific internal energy. The existence of such a boundary layer has been invoked by several authors (Komissarov 1990, Laing 1996) to interpret a number of observational trends in FRI radio sources. Swain, Bridle & Baum (1998) have found evidence for these boundary layers in FRIIs (3C353).
The diffusion of vorticity caused by numerical viscosity is responsible for the formation of the boundary layer. Although it is caused by numerical effects (and not by the physical mechanism of turbulent shear), the properties of PPM–based difference schemes are such that they can mimic turbulent flow to a certain degree (Porter & Woodward 1994). Hence, our calculations represent a first approach to studying the development of shear layers in relativistic jets and their observable consequences. The structure of both the shear layer and the beam core is sketched in Fig. 3. The specific internal energy of the gas in the shear layer (the region with $`0.2<f<0.8`$) is typically more than one order of magnitude larger than that of the gas in the beam core. The shear layer broadens with distance, from 0.2$`R_b`$ near the nozzle to 1.1$`R_b`$ near the head of the jet (Fig. 4).
### 2.4 Jet propagation efficiency and disruption
From the head’s position at the end of the simulation ($`T=140.8`$) a mean jet advance speed of 0.47$`c`$ is obtained, but the jet’s propagation proceeds in two distinct phases: (i) for $`t\lesssim 100`$ the jet propagates roughly at the estimated 1D speed ($`0.42c`$); (ii) for $`t\gtrsim 100`$ the jet accelerates and propagates at a considerably larger speed (0.55$`c`$). Comparing with the 3D simulation of Norman (1996) we find a similar behaviour: after a short 1D phase and before the deceleration, the jet transiently accelerates to a propagation speed which is $`\simeq 20`$% larger than the corresponding 1D estimate. This result contradicts the one obtained by Nishikawa et al. (1997, 1998), who found a propagation speed of only 70% of the corresponding 1D estimate in a shorter ($`\sim `$ 20 normalized time units) simulation of a denser jet. Although the estimate does not take into account the effect of an extra magnetic pressure in the external medium opposing the jet propagation, in the simulations of Nishikawa et al. this magnetic pressure is negligible in comparison with the beam momentum density.
Figure 4 shows the axial component of the momentum of the beam particles (integrated across the beam) along the axis, which decreases by 30% within the first 60 $`R_b`$. Neglecting pressure and viscous effects, and assuming stationarity, the axial momentum should be conserved; hence the beam flow is decelerating. The momentum loss goes along with the growth of the boundary layer, whose material is accelerated and heated by viscous stresses. Biconical shocks in the beam are responsible for the breaks in the axial momentum profiles at $`z=26R_b`$ and $`z=50R_b`$: when the beam material passes a conical shock and enters the adjacent rarefaction fan, it is accelerated by local pressure gradients.
How can the jet accelerate while the beam material is decelerating? Although the beam material decelerates, its terminal Lorentz factor is still large enough to produce fast jet propagation. On the other hand, in 3D, the beam is prone to strong perturbations which can affect the structure of the jet’s head. In particular, a simple structure like a terminal Mach shock will probably not survive when significant 3D effects develop. It will be substituted in that case by more complex structures, e.g., by a Mach shock which is no longer normal to the beam flow and which wobbles around the instantaneous flow direction. Another possibility is the generation of oblique shocks near the jet head due to off–axis oscillations of the beam. Although difficult to check quantitatively (due to both the lack of an operative definition for Mach disk identification and the present resolution of our simulations), both possibilities will cause a less efficient deceleration of the beam flow, at least during some epochs. On longer time scales the growth of 3D perturbations will cause the beam to spread its momentum over a much larger area than it had initially, which will efficiently reduce the jet advance speed.
## 3 Conclusions
We have presented a first attempt to analyze the morpho-dynamical properties of 3D relativistic jets. From our simulations, we conclude that the coherent fast backflows found in axisymmetric models are not present in 3D models. We have investigated the beam’s response to non-axisymmetric perturbations to check its stability. During the period of time studied by us ($`t\lesssim 150R_b/c`$), the beam does not display the strong perturbations found by other authors in classical jets (Norman 1996, Hardee 1996) and propagates according to the 1D estimate. Small 3D effects in the relativistic beam give rise to a lumpy distribution of apparent speeds like that observed in M87 (Biretta, Zhou & Owen 1995). We have also analyzed the properties of the boundary layer present in our model.
Obviously, our study must be extended to a wider range of models and perturbations. In particular, stronger perturbations should be considered to reach the nonlinear regime and to identify the acoustic and mixing phases (Bodo 1998) leading to jet disruption. The dependence of the shear layer properties on the perturbation parameters also requires further investigation. Finally, appropriate perturbations can be studied that mimic the wiggles observed in specific sources both at pc (0836+710, Lobanov et al. 1998; 0735+178, Gómez et al. 1999) and kpc scales (M87; Biretta, Zhou & Owen 1995).
ACKNOWLEDGEMENTS
This work has been supported in part by the Spanish DGES (grants PB97-1164 and PB97-1432) and the CSIC-Max Planck Gesellschaft agreement. MAA expresses his gratitude to the Conselleria d’Educació i Ciència de la Generalitat Valenciana for a fellowship. The calculations were carried out on two SGI Origin 2000 at the Centre Europeu de Paral.lelisme de Barcelona and at the Centre de Informática de la Universitat de València.
# Controlling a leaky tap
Aquiles Ilarraza-Lomelí
Laboratorio de Sistemas Dinámicos, Depto. de Ciencias Básicas
Universidad Autónoma Metropolitana-Azcapotzalco
Apartado Postal 21-726, Coyoacán 04000 D. F., México
C. M. Arizmendi and A. L. Salas-Brito<sup>*</sup><sup>*</sup>Corresponding author. On leave of absence from Laboratorio de Sistemas Dinámicos, UAM-Azcapotzalco, e-mails: asb@hp9000a1.uam.mx or asb@data.net.mx. Postal address after April 6 1999: Apartado Postal 21-726 Coyoacan, México City D. F. 04000 México
Department of Physics, Emory University
Atlanta, GA 30322, USA
Abstract
We apply the Ott, Grebogi and Yorke mechanism for the control of chaos to the analytical oscillator model of a leaky tap, obtaining good results. We exhibit the robustness of the control against both dynamical noise and measurement noise. A possible way of controlling a leaky tap experimentally, using magnetic-field-induced variations in the viscosity of a magnetorheological fluid, is suggested.
Classification numbers: 02.70+d, 47.20.Tg, 03.20.+i
The realization that the majority of natural phenomena are chaotic led to the suggestion of chaotic behavior in the common household phenomenon of a leaky tap or dripping faucet—common perhaps, but not well understood, not even the process of drop formation, where the flow changes from a fluid mass to one or several falling drops; see, for example, \[2–4\] and the references therein. The first clear experimental evidence of such chaotic behavior was found by Shaw and his collaborators; further evidence was found a few years later by Wu and Schelly and by Núñez-Yépez et al. Since then many experiments and theoretical works have established the system as a sort of paradigm for dissipative chaos \[9–16\].
Shaw proposed the first model for the process, a variable-mass oscillator inspired by Rayleigh’s ideas. The model was updated by Sánchez-Ortiz and Salas-Brito (SOSB in what follows) and independently by D’Innocenzo and Renna \[21–24\], who, by changing the breakup mechanism and the way of choosing initial conditions, showed the broad range of behavior that can be achieved using the model and how it can be qualitatively related to the experimental facts. A promising hydrodynamic model, aiming at quantitative agreement with experiment and accounting for some of the topology transitions and the singularities in the phenomenon, has been recently put forward. It must be clear that despite the enormous simplification in reducing a many-degrees-of-freedom fluid system to a one-dimensional model, there are many things that can be understood using the oscillator model since, basically due to dissipation, the system restricts itself to essentially one-dimensional attractors. Kiyono, Ishioka and Fuchikami have actually shown that the agreement between a model and the experimental results can be made quantitative by analysing the system from the perspective of fluid mechanics. Using such ideas, Kiyono and Fuchikami have improved the relaxation oscillator model. The oscillator idea illustrates the important and sometimes underemphasized point that reproducing qualitative and even quantitative features of a chaotic system usually does not require very complex models.
Moreover, the leaky tap and the oscillator model have been used as a sort of role model to simulate other complex phenomena; furthermore, given certain similarities between their features, they can be of help in modelling the comparable-to-chaotic behavior of the heartbeat \[28–30\]. The experimental control of a leaky tap can then be of importance as a testing ground for certain ideas. For instance, a cardiac MR imaging technique has been proposed which employs time-series forecasting and standard methods of chaos analysis on heartbeat time series. Using such concepts, new pacemakers are being investigated which use control techniques to correct arrhythmic behavior of the heart while minimizing their intervention and battery consumption. See the references therein.
Our aim in this work is to apply a model-independent chaos control technique to the SOSB equations in the analytical approach of D’Innocenzo and Renna; we then suggest an experimentally realizable scheme for the control of an actual dripping faucet. We carry out the control using the Ott, Grebogi and Yorke strategy (OGY in what follows); the advantage of the OGY method is that it does not need a detailed knowledge or model of the phenomenon and it uses the chaotic behavior itself as the mechanism of control. We have found that it is possible to stabilize the SOSB model around one of its unstable equilibrium points and that such control is robust (within certain limits) against external perturbations; this is obviously a good feature with an experiment in mind. The control is accomplished by adjusting the parameter of the SOSB model analogous to the viscosity of the leaking fluid.
Let us begin by reviewing the SOSB relaxation oscillator model. The starting equations, in nondimensional coordinates, are
$$\begin{array}{cc}\hfill \frac{dx}{dt}& =y,\hfill \\ \hfill \frac{dy}{dt}& =-\frac{1}{m}(x+\beta y)+g,\hfill \\ \hfill \frac{dm}{dt}& =f,\hfill \end{array}$$
$`(1)`$
where $`\beta `$, $`g`$ and $`f`$ are parameters modelling viscosity, external force (gravity) and water inflow, respectively, and $`m`$ is the mass.
Here, following the analytic approach of D’Innocenzo and Renna, instead of studying directly numerical solutions of the set of equations (1), we use a sort of approximate solution to it, namely
$$x(t)=\left[A\mathrm{sin}\omega (t)t+B\mathrm{cos}\omega (t)t\right]\mathrm{exp}(-\gamma (t)t)+m(t)g,$$
$`(2)`$
where $`m(t)=m_0+f(t-t_0)`$, $`\gamma (t)\equiv \beta /m(t)`$ and $`\omega ^2(t)\equiv 1/m(t)`$; equation (2) comes together with the following proviso for the drop breakup: when the position of the oscillator reaches the meniscus length (normalized such that $`x_c=1`$), a drop is forced to detach, provoking an abrupt diminution of the oscillator mass by the quantity
$$\mathrm{\Delta }m=hm(t_c)y(t_c),$$
$`(3)`$
$`h`$ being a model parameter and $`t_c`$ the breakup time. So before using (2) again, we have to reset the starting value of the oscillator mass to the new value $`m_0=m(t_c)-\mathrm{\Delta }m`$; this scheme substitutes for the missing singularity-forming description of drop detachment \[19–21\]. The analytic model (2) was first proposed by D’Innocenzo and Renna and has been recently used to reproduce the closed-loop attractors and the Hopf bifurcations experimentally observed in a leaky tap \[7, 15–16\].
We further assume that the clock resets every time a drop breaks off, i.e. we take $`t_0=0`$ for every new drop. To give a criterion for the initial position of the next drop we use a sort of amorphous drop model, loosely inspired by Eggers’ study. This has led us to propose a way of choosing the initial conditions after breakup such that
$$x_0=\mathrm{exp}(-\mathrm{\Delta }m),$$
$`(4)`$
this amorphous drop mechanism guarantees that $`0<x_0<1`$. Furthermore, the initial velocity of the remaining fluid can always be taken as $`y_0=y(t_c)`$, simply considering that the mass of the remaining fluid, plus the effect of the mass inflow, produces an effective mass term very large compared to $`\mathrm{\Delta }m`$, on which the snapping back of the system has a negligible effect. The use of the amorphous model (4) had, in fact, its origin in our attempts at using the spherical drop model of D’Innocenzo and Renna \[21–23\], which sometimes resulted in detached drops much larger than the normalized meniscus length.
Our analytical SOSB model is hence specified by equations (2) and (3) together with the amorphous drop scheme for the initial position of the next drop. Such a mechanism of releasing drops, besides giving reasonable values for the initial positions, generates important correlations between successive drops.
Although it is known that the results obtained from the model vary greatly according to the specific mechanism employed for simulating the drop detachment, we here limit ourselves to quoting results from the amorphous drop mechanism (4). We point out that we have tried different mechanisms and other variations of the SOSB model, reaching the same general conclusion.
Using a modified Newton-Raphson method to obtain $`t_c`$ from the dripping condition $`x(t_c)=1`$, a very efficient method of simulating the dripping-tap behavior is obtained. We should point out that the analytical SOSB equations (2) and (3) are appropriate for modelling leaking from relatively big faucets (with diameters larger than $`1/4`$ cm), since for such diameters the drop dynamics is mainly governed by the center-of-mass motion of the hanging fluid. We are then capable of accumulating a large number of drip intervals $`t_n\equiv t_c^{(n)}`$ for analysis. The drip intervals, i.e. the time spans separating a drop from the next one, have become the standard variables used in all leaky tap studies.
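To illustrate the procedure, here is a condensed sketch of the drip-interval generator implied by equations (2)–(4). It is our own transcription: the constants $`A`$ and $`B`$ are fixed from the data at $`t=0`$ neglecting the slow time dependence of $`\omega `$ and $`\gamma `$ (an assumption of this sketch), and SciPy’s brentq root finder stands in for the Newton-Raphson step:

```python
import numpy as np
from scipy.optimize import brentq

beta, g, h, f = 3.01, 0.4, 0.3, 8.49   # parameter values of Figs. 1-3

def drip_intervals(n_drops, x0=0.5, y0=0.0, m0=1.0):
    """Drip intervals t_c from the analytic model, Eqs. (2)-(4)."""
    out = []
    for _ in range(n_drops):
        gam0, w0 = beta / m0, 1.0 / np.sqrt(m0)
        B = x0 - m0 * g                       # from x(0) = x0
        A = (y0 - f * g + gam0 * B) / w0      # from dx/dt(0) = y0 (frozen w, gam)

        def x(t):                             # Eq. (2) with t0 = 0
            m = m0 + f * t
            gam, w = beta / m, 1.0 / np.sqrt(m)
            return (A * np.sin(w * t) + B * np.cos(w * t)) * np.exp(-gam * t) + m * g

        t = 1e-6                              # bracket the first crossing x(t) = 1
        while x(t) < 1.0:
            t += 0.01
        tc = brentq(lambda s: x(s) - 1.0, t - 0.01, t)
        yc = (x(tc + 1e-6) - x(tc - 1e-6)) / 2e-6   # y(t_c), numerically
        dm = h * (m0 + f * tc) * yc           # Eq. (3): detached mass
        m0 = m0 + f * tc - dm                 # reset the oscillator mass
        x0, y0 = np.exp(-dm), yc              # Eq. (4): amorphous drop rule
        out.append(tc)
    return np.array(out)

print(drip_intervals(10))
```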
Bifurcation diagrams—dripping spectra in the terminology of Wu and Schelly—, time series and return maps $`t_{n+1}`$ versus $`t_n`$, illustrating the results that can be obtained from the analytical SOSB equations, are shown in Figures 1 and 2. Figure 1 shows both a bifurcation diagram (figure 1a) and a time series (figure 1b) taken from the zone we want to control. Notice also that at the parameter values used ($`g=0.4`$, $`h=0.3`$ and $`\beta =3.01`$) the system undergoes a period-doubling sequence, shows evidence of crisis (figure 1a) and behaves intermittently (figure 1b). The horizontal dashed line in figure 1b simply marks the drip interval at the unstable period-1 point ($`t_c=t_F=0.2858`$) we aim to control (but it does not necessarily coincide with the seemingly intermittent state appearing in figure 1b). The bifurcation diagram also shows that, in the conditions of figure 1a, dripping is interrupted by “continuous flow” at $`f>8.5002`$; that is, at greater $`f`$-values the oscillator position always remains larger than the meniscus length after the detachment of the first drop.
Let us mention that the parameter values used in our discussion do not attempt to be typical of an experimental situation; rather, they were chosen with the sole purpose of illustrating the OGY mechanism as applied to the system. We must mention though that the results were checked for other values of the parameters, that is, for chaotic attractors of different sorts, always obtaining similarly good results—the dynamics may be different but is still chaotic. The method even allowed us to control the system in an unstable period-10 cycle.
Figure 2 shows the reconstructed attractor that exists at the unstable fixed point location. The fixed point is shown as a small black dot in the inset of figure 2. Notice that the reconstructed attractor has a complex structure, which might be expected to hamper the precise location, and hence the control, of the unstable fixed point. Nevertheless, the location of such a fixed point and its control are easily achieved and are basically limited by the precision of our computations.
To identify the unstable fixed-point orbit (shown in Figure 2) in the otherwise chaotic attractor-dominated dynamics, we simply use the fact that every chaotic system has an agglomeration of unstable periodic orbits embedded almost everywhere. Hence, there must be a fixed point wherever an attractor crosses the identity line ($`t_{n+1}=t_n`$) in a return map. Using the time series, we can numerically identify the unstable fixed point at $`t_c=t_F=0.2858`$ present in the attractor shown in figure 2.
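In practice the fixed point can be extracted from the measured series by collecting the close returns of the return map near the identity line; a minimal sketch (the tolerance is an illustrative choice):

```python
import numpy as np

def find_fixed_point(t, tol=1e-3):
    """Estimate the unstable period-1 point of a drip-interval series by
    averaging the return-map points lying close to the identity line."""
    tn, tn1 = t[:-1], t[1:]
    close = np.abs(tn1 - tn) < tol
    return 0.5 * (tn[close] + tn1[close]).mean()
```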
Around the unstable fixed point $`𝐗_F=(t_n=t_F,t_{n+1}=t_F)`$ in the return map, with the help of the time series, we can use a locally linear dynamics to describe the system
$$D(𝐗_n-𝐗_F)=𝐗_{n+1}-𝐗_F,$$
$`(5)`$
where $`D`$ is a $`2\times 2`$ matrix and $`𝐗_n`$ is the vector with components $`(t_{n+1},t_n)`$. With the local dynamics (5), it is then a simple matter to calculate the normalized $`D`$-eigenvectors, $`𝐞_s`$, $`𝐞_u`$, and the corresponding eigenvalues, $`\lambda _s`$, $`\lambda _u`$, associated with its stable and unstable manifolds. From them, we can also evaluate the contravariant vector associated with the unstable manifold, i.e. the vector $`𝐟_u`$ for which $`𝐟_u\cdot 𝐞_u=1`$ and $`𝐟_u\cdot 𝐞_s=0`$ hold.
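The matrix $`D`$, its eigensystem and the contravariant vector $`𝐟_u`$ can all be estimated from the time series by a least-squares fit restricted to a neighborhood of $`𝐗_F`$; a sketch (the neighborhood size is an illustrative choice, and a real saddle is assumed so that the eigenvalues are real):

```python
import numpy as np

def local_dynamics(t, tF, radius=0.01):
    """Fit Eq. (5) around the fixed point (tF, tF) and return the unstable
    eigenvalue lambda_u, the eigenvector e_u and the contravariant f_u."""
    X = np.column_stack([t[1:-1], t[:-2]]) - tF    # X_n   - X_F
    Y = np.column_stack([t[2:], t[1:-1]]) - tF     # X_n+1 - X_F
    near = np.linalg.norm(X, axis=1) < radius
    M, *_ = np.linalg.lstsq(X[near], Y[near], rcond=None)
    D = M.T                                        # so that Y = D X
    lam, E = np.linalg.eig(D)                      # columns of E are e_s, e_u
    u = np.argmax(np.abs(lam))
    F = np.linalg.inv(E)                           # rows: contravariant vectors,
    return lam[u], E[:, u], F[u]                   # f_u.e_u = 1, f_u.e_s = 0
```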
To control the system we have chosen to adjust the viscosity parameter $`\beta `$; we choose $`\beta `$ and not the seemingly more natural fluid inflow because we have in mind a magnetorheological fluid, in which the viscosity can be varied using an easily tuned magnetic field, and, more importantly, because it is rather difficult to control $`f`$ with confidence, in our conditions at least. We hence need to evaluate—numerically, from the time series—the sensitivity of the model to changes in the viscosity parameter $`\beta `$ with respect to a fiducial value $`\beta _0`$, as
$$𝐬=\left.\frac{\partial 𝐗_F}{\partial \beta }\right|_{\beta _0}.$$
$`(6)`$
With the above information we let the system run and apply the control every time the drip interval falls within an appropriate fixed-point neighborhood; such a neighborhood is specified through the inequality
$$\left|\left(𝐗_n-𝐗_F\right)\cdot 𝐟_u\right|<\xi _{*},$$
$`(7)`$
where
$$\xi _{*}=\left|\delta \beta _{*}(𝐬\cdot 𝐟_u)\left(1-\frac{1}{\lambda _u}\right)\right|,$$
$`(8)`$
and $`\delta \beta _{*}`$ is the maximum allowed change of the viscosity parameter (see below for the value set)—we point out that (8) is only valid as a first-order approximation. Once the control is working, after a brief transient the system never gets far from $`𝐗_F`$, as figures 3a and 3b show. Notice that, again, we choose to control the dynamics within the interval defined by (7) and (8); this is quite appropriate with an experimental situation in mind. The scheme described is simply the OGY control method, forcing the system onto the stable manifold of the fixed point by slightly changing the value of the $`\beta `$ parameter.
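Explicitly, whenever condition (7) holds, the parameter is shifted according to the standard first-order OGY prescription, $`\delta \beta _n=\frac{\lambda _u}{\lambda _u-1}\frac{(𝐗_n-𝐗_F)\cdot 𝐟_u}{𝐬\cdot 𝐟_u}`$, which is consistent with the bound (8); a sketch of the update step (variable names are ours):

```python
import numpy as np

def ogy_delta_beta(Xn, XF, fu, s, lam_u, dbeta_max):
    """First-order OGY correction to the control parameter beta.
    Returns 0 when the required shift exceeds the allowed maximum."""
    dbeta = (lam_u / (lam_u - 1.0)) * np.dot(Xn - XF, fu) / np.dot(s, fu)
    return dbeta if abs(dbeta) <= dbeta_max else 0.0
```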
The OGY scheme, applied to the model dynamics, leads to the results shown in Figure 3. The results correspond to an unstable fixed point $`t_F=0.2858`$, found within the chaotic attractor at the parameter values $`f=8.49`$, $`\beta =3.01`$, $`g=0.4`$ and $`h=0.3`$; the behavior is intermittent at these parameter values (figure 1b). The control parameters used are $`\delta \beta _{*}=1.3`$, $`\xi _{*}=0.008`$, $`𝐬=(0.0021,0.0021)`$, and the unstable eigenvalue is $`\lambda _u=1.24`$; the unstable manifold is associated with the contravariant vector
$$𝐟_u=\left(\begin{array}{c}0.8940\\ 0.7020\end{array}\right).$$
$`(9)`$
The explicit expressions given above for the unstable and stable eigenvectors guarantee that we are not at a homoclinic tangency point of the attractor, which would be an unsuitable point for applying the OGY method. In Figure 3a we can observe a consistent stabilization of the system after the application of the control at $`n=1000`$; it takes less than 150 drippings to get the system into the fixed point. The control is released after drop 3000 and chaos sets in immediately; it is applied again at $`n=5000`$, and some 150 drippings later the system becomes periodic again. Further information about the approach to the fixed point once the control is applied can be obtained from a return map of the process; this is shown in Figure 3b. The spiralling approach to the fixed point, clearly shown in the inset of figure 3b, seems to be typical. We have to conclude then that with no perturbations present the control works very well.
An appropriate question is what happens if there are extra random perturbations. Such perturbations are expected to occur in any experimental realization of the leaky tap. In what follows we first analyse the effect of random noise superimposed on the value of the parameter $`f`$. We term this the case of dynamical noise, since experimentally it arises from the impossibility of keeping the inflow perfectly fixed. In fact, we choose this parameter to illustrate the robustness of the control precisely because the fluid flow into the tap is a difficult variable to keep fixed in an experimental situation. This also explains why we did not consider proposing an experimental mechanism of control using the inflow $`f`$—though, in the model, the control is equally easy to achieve by adjusting $`f`$. The random perturbation is applied as $`f=f_0+\delta f`$ in the model equations, where $`f_0`$ is now the fiducial value (i.e. $`f_0=8.49`$, as in figures 2 and 3) and $`\delta f`$ is a uniformly distributed random variable in \[$`-0.0043,0.0043`$\]. We have to be sure that such a random perturbation does not significantly change the dynamics, since the $`f`$-width of the chaotic zone is small. In figure 4 we show, as an example, a time series and a return map with fiducial values of the parameters as in Figure 2, but with the random perturbations applied to $`f`$ allowing for variations of up to $`1\%`$ of its fiducial value (note that such variations represent $`10\%`$ of the total width of the chaotic zone). It can be seen that the dynamics just becomes fuzzier compared to the original unperturbed case. Notice also that the system does not permit imposing larger variations in $`f`$; otherwise we would leave the rather small chaotic zone (the $`f`$-width of that zone is 0.035, as shown in Figure 1) and the dynamics would then be drastically changed.
What happens with the control scheme turned on? The control was applied without modification to the perturbed system, and the results show that the scheme is rather robust under random perturbations in $`f`$. Figure 5 shows time series and return maps of the randomly perturbed system under control. Notice that the spiral approach to the fixed point has become an ellipsoidal blob of points; this roughly corresponds to the area of the interval (8) in the reconstruction space. Incidentally, notice that figures 4b and 5b also illustrate the predicted noise-induced attractor deformation and elongation recently observed in periodically driven nonlinear electric circuits. Figure 5b also illustrates that the control effectively stabilizes the system in a neighborhood of the fixed point, not allowing excursions larger than the maximum size of the control zone.
But the lack of control over $`f`$ is not the only perturbation worth analysing. The unavoidable uncertainties in the time measurements—what we can term the case of measurement noise—and the problem of lost drops are also important. We simulate such behavior by randomly perturbing the values of the drip intervals calculated from the model. We consider thus drip intervals $`t_n=t_n^{(0)}+\delta t_n`$, where $`t_n^{(0)}`$ is the drip interval calculated from (2) and (3) and $`\delta t_n`$ is a random variable with, again, a uniform distribution. What we found using these ‘measured’ drip intervals is that, if the uncertainty introduced by the random noise is larger than $`0.5\%`$ of the maximum value of the drip interval, despite the intended control the system exhibits small but noticeable chaotic bursts. The bursts become larger as the uncertainty grows, until the control is completely lost. This is shown for successively larger values of the perturbation in Figures 6a, 6b and 6c. In the last of the time series shown (Fig. 6c, with a perturbation of $`10\%`$ of the total $`t`$-width ($`0.02995`$) of the chaotic zone), traces of the control are still noticeable but overall the system is destabilized and chaotic. Such behavior can be easily understood when it is considered that at such uncertainties it is no longer possible to tell a drop apart from the adjacent ones. In this measurement-noise case, then, it is possible to quote the noise values which the control mechanism finds acceptable, whereas in the previous dynamical-noise case it was not possible, due to the small $`f`$-width of the chaotic zone; in the dynamical-noise case the system would no longer be within the chaotic zone before the control collapses under increasing noise. That dynamical noise could throw the system out of the chaotic regime can also happen in the experiment but, in such a case, the large fluctuations in $`f`$ would simply mean that the experiment is not working properly.
In all the examples given, the control procedure is applied using the approximate linear dynamics calculated from the unperturbed system, which is a sort of idealized case. In a more realistic situation, the local dynamics would be evaluated from the actual measurements, and this would improve the control.
The results of our computations with the relaxation oscillator SOSB equations hint towards a control technique applicable to the leaky tap in an experimental situation. We require a system with at least one parameter allowing quick adjustment and quick response times compared with typical drip intervals; typical values of $`t_n`$ in an experiment are of the order of 100 ms. The chosen control variable should allow a response faster than this typical value. We have therefore thought of adjusting the fluid viscosity, because the inflow $`f`$ is not easy to control, at least from the viewpoint of our laboratory. On the other hand, the viscosity of common fluids (water is the working fluid in every experiment performed to date) is very difficult to change except through changes in temperature, which are not easy to accomplish in the required circumstances. Had we thought of changing the temperature of the water, we would need rather large changes, which would also change other system parameters—such as the diameter of the nozzle—and temperature would not be so easy to control.
To overcome such anticipated difficulties, we propose the use, instead of the customary water, of an oil-based magnetorheological fluid as the leaking fluid in the system. Such fluids are easy to obtain and have response times of 2 or 3 milliseconds—almost an order of magnitude below the typical drip intervals in water, and the drip intervals are larger in the magnetorheological fluid given its greater viscosity. Besides, they can quickly change their viscosity by up to a factor of $`10^6`$ (though for our purposes we do not need such huge changes) simply by applying a magnetic field, which is also rather easy to adjust. Given such characteristics, we think that the method would allow excellent control.
In summary, we have successfully applied the OGY control method to the SOSB leaky tap model, investigating the possible ill effects of random noise on the water inflow into the tap and on the drip intervals. We have found that the control procedure is effective up to noise-to-signal ratios of the order of $`10\%`$. We should also mention that all the computations reported in this article were carried out in Fortran 77 on a PC workstation running under Linux.
To finalize, we have to say that this study has been further used to improve the oscillator model. The main change has been the use of a mass-dependent elastic ‘constant’ $`k`$ for the spring (which we here normalized to 1); with the proper identification of the model parameters, very good agreement with the experimental values \[6–10\] is found. This adds to the usefulness of the oscillator model, as is further illustrated by this contribution.
Acknowledgements.
This work has been partially supported by CONACyT (grant 1343P–E9607) and by PAPIIT-UNAM (grant IN–122498) which partially financed ALSB’s trip to Atlanta. ALSB and CMA want to thank Emory University and particularly Professor F. Family for all the support and the warm hospitality in Atlanta. We also acknowledge helpful discussions with H. N. Núñez-Yépez, M. N. Popescu, G. I. Sánchez-Ortiz and E. Guillaumín-España, the help of J. Estrada-Díaz, Jorge Reyes-Iturbide and the cheerful mood of all the team at LSD. G. Hentschel suggested that calling a plumber is perhaps the best way for controlling a leaky tap and we had to agree with him. This work owes a great deal to the encouragement of Q. Tavi, K. Hryoltiy, M. Chiornaya, F. C. Minina, L. Bidsi, G. Abdul II, U. Kim and C. Srida. Last but not least, we want to dedicate the article to the memory of our beloved friend L. Tuga.
References.
O. E. Rössler, Synergetics, A workshop, H. Haken, ed. Springer Berlin p. 174 (1977).
J. Eggers, lanl archive preprint “Singularities in droplet pinching with vanishing viscosity” (chao-dyn/9705005, v2, 1999).
X. D. Shi, M. P. Brenner, S. R. Nagel, Science 265 (1994) 219.
R. E. Goldstein, R. I. Pesci and M. J. Schelly, Phys. Rev. Lett. 70 (1993) 3043.
R. Shaw, The dripping faucet as a model chaotic system (Aerial Press, Santa Cruz USA, 1984).
P. Martien, S. C. Pope, P. L. Scott and R. S. Shaw, Phys. Lett. 110A (1985) 399.
X. Wu and Z. A. Schelly, Physica D 40 (1989) 433.
H. N. Núñez-Yépez, A. L. Salas-Brito, C. A. Vargas, and L. Vicente, Eur. J. Phys. 10 (1989) 99; reprinted in L. Lam, Nonlinear physics for beginners (World Scientific, 1998). p. 104.
R. F. Cahalan, H. Leidecker and G. D. Cahalan, Comp. Phys. 3 (1990) 368.
J. Austin, Phys. Lett. A 155 (1991) 148.
G. I. Sánchez-Ortiz, Tesis de Licenciatura, El grifo goteante: estudio numérico de un modelo mecánico (FCUNAM, México D. F., 1991).
H. N. Núñez-Yépez, C. Carbajal, A. L. Salas-Brito, C. A. Vargas and L. Vicente, in Nonlinear phenomena in fluids, solids and other complex systems, P. Cordero and B. Nachtergaele eds, (Elsevier, Amsterdam, 1991) p. 467.
P. M. C. de Oliveira and T. J. P. Penna, J. Stat. Phys. 73 (1993) 789.
P. M. C. de Oliveira and T. J. P. Penna, Int. J. Mod. Phys. C 5 (1994) 997.
J. C. Sartorelli, W. M. Gonçalves and R. D. Pinto, Phys. Rev. E 5 (1994) 3963.
R. D. Pinto, W. M. Gonçalves, J. C. Sartorelli, and M. J. de Oliveira, Phys. Rev. E 52 (1995) 6896.
J. W. S. Rayleigh, Proc. London Math. Soc. 4 (1878) 10.
J. W. S. Rayleigh, The theory of sound (Dover, New York, 1945) §364.
G. I. Sánchez-Ortiz and A. L. Salas-Brito, Phys. Lett. A 203 (1995) 300.
G. I. Sánchez-Ortiz and A. L. Salas-Brito, Physica D 89 (1995) 151.
A. D’Innocenzo and L. Renna, Phys. Lett. A 220 (1996a) 75.
A. D’Innocenzo and L. Renna, Int. J. Theor. Phys. 35 (1996b) 941.
A. D’Innocenzo and L. Renna, Phys. Rev. E 55 (1997) 6776.
A. D’Innocenzo and L. Renna, Phys. Rev. E 58 (1998) 6847.
N. Fuchikami, S. Ishioka and K. Kiyono, lanl archive preprint “Simulations of a dripping faucet” (chao-dyn/9811020, 1998).
K. Kiyono and N. Fuchikami, lanl archive preprint “Dripping faucet dynamics clarified by an improved mass-spring model” (chao-dyn/9904012, 1999).
D. N. Baker, A. J. Klimas, R. L. McPherron and J. Buchner, Geophys. Res. Lett. 17 (1990) 41.
T. J. P. Penna, P. M. C. de Oliveira, J. C. Sartorelli, W. M. Gonçalves and R. D. Pinto, Phys. Rev. E 52 (1995) R2168.
C.-K. Peng, J. Mietus, J. M. Haussdorff, S. Havlin, H. E. Stanley, and A. L. Goldberger, Phys. Rev. Lett. 70 (1993) 1343.
K. Otsuka, G. Cornelissen, F. Halberg, Clin. Cardiol. 20 (1997) 631; J. Kanters, N. Henrik, H. Pathlou and E. Agner, J. Cardiovascular Electrophysiology 5 (1994) 591; A. Garfinkel, M. L. Spano, W. L. Ditto and J. N. Weiss, Science 257 (1992) 1230.
G. I. Sánchez-Ortiz, D. Rueckert and P. Burger, Medical Image Analysis 3 (1999) 77.
E. Ott, C. Grebogi, and J. A. Yorke, Phys. Rev. Lett. 64, 1196 (1990).
J. Estrada-Díaz, Proyecto Terminal, Análisis dinámico-paramétrico del sistema goteante, (UAM-Azcapotzalco, México City, 1998); A. Ilarraza-Lomelí, Proyecto Terminal, Control del grifo goteante usando el fenómeno magnetorreológico, (UAM-Azcapotzalco, México City, 1999).
A. Ilarraza-Lomelí, J. Estrada-Díaz, C. M. Arizmendi, A. L. Salas-Brito to be submitted (1999).
W. H. Press, S. A. Teukolsky, W. A. Vetterling, and B. P. Flannery, Numerical Recipes in Fortran (Cambridge University Press, Cambridge, 1992), ch. 9.
L. Jaeger and H. Kantz, Physica D 105 (1997) 79.
M. Diestelhorst, R. Hegger, L. Jaeger, H. Kantz and R.-P. Kpasch, Phys. Rev. Lett. 82 (1999) 2274.
R. E. Rosensweig, Ferrohydrodynamics (Dover Publications, New York, 1998).
Figure Captions.
Figure 1.
Illustration of the behavior predicted by the relaxation oscillator model for the leaky tap. Many other examples of the possible behavior can be found in \[21–25, 33,34\].
1a. Bifurcation diagram at $`\beta =3.01`$, $`g=0.4`$, $`h=0.3`$ as $`f`$ is varied. Notice that beyond $`f=8.500019`$ the dripping stops and “continuous flow” sets in. The vertical dashed line passing through $`f=8.49`$ marks the zone to control. Notice that the chaotic zone after the period doubling bifurcations extends roughly from 8.465 to 8.500, a total $`f`$-width of 0.035.
1b. Time series in the zone we want to control ($`\beta =3.01`$, $`g=0.4`$, $`h=0.3`$ and $`f=8.49`$). Notice the signs of intermittency. The thin dashed line $`t=0.2858`$ corresponds to the unstable fixed point we intend to stabilize. Let us emphasize that we do not intend to control the intermittent orbit which seems to coincide with the selected unstable fixed point.
Figure 2.
Return map $`t_{n+1}`$ vs. $`t_n`$ showing the unstable fixed point at $`t_F=0.2858`$. The inset is a blow-up of the square neighborhood depicted around the fixed point. Such unstable orbit is pointed to by black arrows and marked by a black dot in the inset. As can be noticed, the attractor has a complex structure composed of at least two very close sheets. The fixed point lies in the innermost sheet of the reconstructed attractor.
Figure 3.
Effect of the OGY scheme on the dynamics; compare with figures 1b and 2.
3a. Time series of drip intervals for the unperturbed model with the control turned on and off. At $`n=1000`$ the control is applied, it takes roughly 150 drops for the system to be stabilized into the unstable fixed point at $`t=t_F=0.2858`$. At $`n=3000`$ the control is released and chaotic behavior sets in immediately. At $`t=5000`$ the control is applied again.
3b. Return map of the control process. Notice the spiral approach to the unstable fixed point when the control is turned on. The inset is a blow up of the region, exactly the same as described in figure 2, around the unstable fixed point.
Figure 4.
The SOSB model in the presence of random perturbations applied to the value of $`f`$. The noise level is $`10\%`$ of the $`f`$-width of the chaotic zone. In this case, it is not possible to increase the noise for testing the robustness of the control without first leaving the rather small ($`f`$-width $`=`$ 0.035) chaotic zone.
4a. Time series of the drip intervals with random noise superposed on $`f`$. A comparison with figure 1b shows that the dynamics gets fuzzier.
4b. Return map of the zone to be controlled with random noise on $`f`$ superposed. Compare to figure 2. Notice the deformation and the elongation of some parts of the reconstructed attractor induced by the applied random noise . The fuzziness mentioned in 4a becomes evident.
Figure 5.
The OGY scheme applied to the SOSB model in presence of random noise on $`f`$. The noise level is $`10\%`$ of the $`f`$-width of the chaotic zone.
5a Time series of the $`f`$-perturbed SOSB leaky tap model, with the control turned on at $`n=5000`$. Despite the noise the system stabilizes around the unstable fixed point.
5b. Return map of the system with the control turned on. The spiral approach to the fixed point becomes an approximately elliptical region where the system is kept controlled.
Figure 6.
Effect of random noise applied to the drip intervals. Notice that in the conditions of figure 6b it begins to be difficult to tell apart a drop from adjacent ones and that, in the conditions of figure 6c, it is almost not possible.
6a. The noise level is here $`0.5\%`$ of the maximum range allowed for $`t`$. Control is still rather good.
6b. The noise level is $`0.75\%`$ of the maximum range in $`t`$. The bursts of chaos where control is lost are evident; control is still present, though far from perfect.
6c. The noise level is $`1\%`$ of the maximum range in $`t`$. Traces of control still remain but it is almost completely lost.
# QCD CRITICAL POINT: WHAT IT TAKES TO DISCOVER
## 1 Introduction
The phase diagram of QCD in the temperature – baryon chemical potential plane has been a subject of intensified theoretical interest recently. On the experimental front, with the advent of large-acceptance detectors such as NA49 and WA98 at the CERN SPS, we are now able to measure average event-by-event quantities which carry information about the thermodynamic properties of the system at freeze-out. Our goal is to understand what we can learn about the phase diagram of QCD from this newly available and future data.
The main focus of our analysis is on providing tools for locating the critical point E on the phase diagram of QCD (Fig. 1) and studying its properties. The possible existence of such a point, as an endpoint of the first-order transition separating the quark-gluon plasma from hadron matter, and its universal critical properties have been pointed out recently. In a previous letter, we have laid out the basic ideas for finding this endpoint in heavy ion collision experiments. The signatures proposed there are based on the fact that such a point is a genuine thermodynamic singularity at which susceptibilities diverge and the order parameter fluctuates on long wavelengths. The resulting signatures all share one common property: they are nonmonotonic as a function of an experimentally varied parameter such as the collision energy, centrality, rapidity or ion size.
## 2 Thermodynamic Fluctuations in an Ideal Bose Gas
Most of our analysis is applied to the fluctuations of the observables characterizing the multiplicity and momenta of the charged pions in the final state of a heavy ion collision. We begin building our tools by re-analyzing the textbook example of an ideal Bose gas. The basic fact is that every quantum state of such a system is completely characterized by a set of occupation numbers, $`n_p`$. All observables are functions of these numbers, and thus all we need to know is the fluctuations of $`n_p`$ from one member of the ensemble (one event) to another:
$$n_p=\frac{1}{e^{ϵ_p/T}-1},\qquad \mathrm{\Delta }n_p\mathrm{\Delta }n_k=n_p(1+n_p)\delta _{pk}\equiv v_p^2\delta _{pk}.$$
(1)
The correlator $`\mathrm{\Delta }n_p\mathrm{\Delta }n_k`$ is the central quantity which we calculate repeatedly, as we proceed beyond the ideal Bose gas approximation.
The fluctuations of various extensive observables are given in terms of the “master correlator” (1):
$$(\mathrm{\Delta }Q)^2=\sum _{pk}q_pq_k\mathrm{\Delta }n_p\mathrm{\Delta }n_k=\sum _pq_p^2v_p^2\qquad \text{for}\quad Q=\sum _pq_pn_p.$$
(2)
More interestingly, the fluctuations of an intensive, or average, quantity, such as energy or transverse momentum per particle, $`q=Q/N`$, are given by:
$$(\mathrm{\Delta }q)^2=\frac{1}{N^2}\sum _p(q_p-q)^2v_p^2.$$
(3)
Denoting by $`\overline{(\mathrm{\cdots })_p}^{\mathrm{inc}}`$ the average over the inclusive distribution $`n_p`$, we see that the ensemble average coincides with the inclusive one: $`q=Q/N=\overline{q_p}^{\mathrm{inc}}`$. This is not true for the fluctuation, however:
$$(\mathrm{\Delta }q)^2=\frac{1}{N}\overline{(q_p-q)^2(1+n_p)}^{\mathrm{inc}}.$$
(4)
We see that the event-by-event variance is larger than the suitably rescaled (by $`1/N`$) inclusive variance because of the Bose enhancement factor $`(1+n_p)`$. This effect, of order several percent, is very sensitive to the over-population of the pion phase space characterized by the pion chemical potential $`\mu _\pi `$.
Another interesting quantity is the correlation between the fluctuations of an average quantity and the total multiplicity $`N`$:
$$\mathrm{\Delta }q\mathrm{\Delta }N=\frac{1}{N}\sum _pv_p^2\left(q_p-q\right)=\frac{1}{N}\sum _pn_p^2\left(q_p-q\right).$$
(5)
Its value is entirely due to the Bose effect, i.e., it would vanish in the ideal classical gas limit $`v_p^2=n_p`$. For example, for the ideal Bose gas of pions at a temperature $`T=120`$ MeV the value of $`\mathrm{\Delta }p_T\mathrm{\Delta }N/[(\mathrm{\Delta }p_T)^2(\mathrm{\Delta }N)^2]^{1/2}`$ is of the order of a few percent and is negative. In general, such correlations, though small, are very sensitive to non-trivial effects, such as the Bose enhancement, as we have just seen, or the effects which we consider below, such as energy conservation and thermal contact, or the interactions with the sigma field.
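All the thermal averages above reduce to momentum integrals weighted by $`n_p`$ and $`v_p^2`$. As an illustration, here is a minimal numerical sketch for an isotropic, non-flowing ideal pion gas, taking the magnitude of the momentum as a stand-in for $`p_T`$ (a simplification of ours, for illustration only):

```python
import numpy as np

T, m_pi = 120.0, 139.57                    # MeV
p = np.linspace(1.0, 3000.0, 30000)        # momentum grid, MeV
eps_p = np.sqrt(p**2 + m_pi**2)
w = p**2                                   # phase-space weight, d^3p ~ p^2 dp

def bose_enhancement(mu=0.0):
    """Ratio <(q - qbar)^2 (1 + n)> / <(q - qbar)^2>, cf. Eq. (4)."""
    n = 1.0 / (np.exp((eps_p - mu) / T) - 1.0)
    v2 = n * (1.0 + n)
    qbar = np.trapz(w * n * p, p) / np.trapz(w * n, p)
    return np.trapz(w * v2 * (p - qbar)**2, p) / np.trapz(w * n * (p - qbar)**2, p)

print(bose_enhancement(0.0), bose_enhancement(60.0))  # grows with mu_pi
```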
## 3 Noncritical Thermodynamic Fluctuations in Heavy Ion Collisions
The next step in our characterization of the thermodynamic fluctuations in heavy ion collisions is the inclusion of pions from resonance decays. The hadronic matter produced in a heavy ion collision is not simply an ideal gas of pions. A number of approaches to heavy ion collisions have successfully treated the matter at freeze-out as a resonance gas in thermal equilibrium. The pions observed in the data are then a sum of (i) “direct pions”, which were pions at freeze-out, and (ii) “resonance pions”, produced from the decay of resonances after freeze-out.
Our simulation of a resonance gas model shows that more than half of all observed pions come from resonance decays. The resonances also have a dramatic effect on the size of the multiplicity fluctuations. For an ideal classical gas the ratio $`(\mathrm{\Delta }N)^2/N`$ is equal to 1 and is only slightly enhanced by the Bose effects. If some of the pions are produced in bunches from resonances, which themselves follow Poisson statistics, this ratio increases. We find:
$$\frac{(\mathrm{\Delta }N)^2}{N}\approx 1.5.$$
(6)
The experimental value of this ratio from NA49 is 2.0. This is much larger than the ideal gas value of 1. The contribution of resonances is important in bringing this number up. However, there is still room for non-thermodynamic fluctuations, such as fluctuations of the impact parameter. Their effect can be studied and separated by varying the centrality cut using the zero degree calorimeter.
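The enhancement by resonances can be seen in a two-line estimate: if each species $`r`$ is Poisson-distributed and contributes $`b_r`$ pions per decay, then $`N=\sum _rb_rN_r`$ gives $`(\mathrm{\Delta }N)^2/N=\sum _rb_r^2N_r/\sum _rb_rN_r>1`$. A sketch with invented illustrative abundances (not the resonance-gas numbers of our simulation):

```python
import numpy as np

# (pions per decay b_r, mean abundance <N_r>): illustrative values only
species = [(1, 100.0),   # direct pions
           (2, 30.0),    # e.g. a rho-like resonance -> 2 pions
           (3, 10.0)]    # e.g. an omega-like resonance -> 3 pions

b = np.array([s[0] for s in species], dtype=float)
nbar = np.array([s[1] for s in species])
ratio = (b**2 @ nbar) / (b @ nbar)   # (Delta N)^2 / <N> for Poisson sources
print(ratio)   # > 1: bunched production enhances multiplicity fluctuations
```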
The $`p_T`$ spectrum of the resonance pions is close to that of the direct ones. As a result, resonance pions do not affect much the shape of the spectrum, and in particular the width of the inclusive distribution, which determines most of the event-by-event fluctuation of the average $`p_T`$. The resonances, however, dilute the Bose enhancement effect by about a factor of two.
In order to compare the results with the experimental data one has to take into account the effect of hydrodynamic flow. This effect is not important for the multiplicity fluctuations. However, it distorts the $`p_T`$ spectrum, shifting it to larger $`p_T`$. In the simplest approximation this can be treated as a “blue shift” of the spectrum. Essentially, we assume that the effects of the flow largely cancel in the ratio $`v_{\mathrm{inc}}(p_T)/p_T`$. This ratio in our simulation is equal to 0.66. The direct contribution from the fluctuations of the flow velocity is small, of order 2% or so. With the Bose enhancement included we obtain:
$$\frac{v_{\mathrm{inc}}(p_T)}{p_T}=0.68.$$
(7)
The experimental value obtained from the NA49 data is 0.75. We see that the major part of the observed fluctuation in $`p_T`$ is accounted for by the thermodynamic fluctuations. A large potential source of the discrepancy is the “blue shift” approximation we used. This approximation can be improved on in a future study.
Another very important feature in the data is the value of the ratio of the scaled event-by-event variance to the variance of the inclusive distribution:
$$F=\frac{Nv_{\mathrm{ebe}}^2(p_T)}{v_{\mathrm{inc}}^2(p_T)}=1.004\pm 0.004.$$
(8)
This is a remarkable fact, since the contribution of the Bose enhancement (see Section 2) to this ratio is almost an order of magnitude larger than the statistical uncertainty. Some mechanism must compensate for the Bose enhancement. In the next section we find a possible origin of this effect: anti-correlations due to energy conservation and thermal contact between the observed pions and the rest of the system at freeze-out.
## 4 Thermal Contact and Energy Conservation
In this Section, we take a first step towards understanding how the physics characteristic of the vicinity of the critical point affects the event-by-event fluctuations. Along the way, we quantify the effects of energy conservation on the $`p_T`$-fluctuations.
We call the gas of direct pions “system B” and the rest of the system—which includes the neutral pions, the resonances, the pions not in the experimental acceptance and, if the freeze-out occurs near the critical point, the order parameter or sigma field—“system A”. We observe system B, which is our “thermometer”. The thermal contact of B with A and energy conservation affect the “master correlator” (1). For example, the $`n_p`$ cannot fluctuate completely independently if the heat capacity $`C_A`$ of system A is small. There is a constraint on the total energy $`E_B=\sum _pϵ_pn_p`$, which gets stronger at small $`C_A`$. The result for the master correlator we find is:
$$\mathrm{\Delta }n_p\mathrm{\Delta }n_k=v_p^2\delta _{pk}-\frac{v_p^2ϵ_pv_k^2ϵ_k}{T^2(C_A+C_B)}.$$
(9)
Using this expression for the correlator we can now calculate the effect of thermal contact and energy conservation on fluctuations of various observables, such as the mean $`p_T`$, for example. In particular, we find that the anti-correlation introduced by this effect reduces the value of the ratio $`F`$ defined in (8) by:
$$\mathrm{\Delta }F_T\approx -\frac{0.12}{C_A/C_B+1}.$$
(10)
If we take $`C_A/C_B\approx 3`$ for orientation, we find a $`\mathrm{\Delta }F_T`$ of the order of $`-3\%`$, before taking into account the dilution by non-direct pions. This effect is comparable in magnitude to the Bose enhancement, and acts in the opposite direction.
This effect can be distinguished from other effects, e.g., finite two-track resolution, also countering the Bose enhancement, by the specific form of the microscopic correlator (9). The effect of energy conservation and thermal contact introduces an off-diagonal (in $`pk`$ space, and also in isospin space) anti-correlation. Some amount of such anti-correlation is indeed observed in the NA49 data.
Another important point of (9) is that, as the freeze-out approaches the critical point and $`C_A`$ becomes very large, the anti-correlation due to energy conservation disappears.
## 5 Pions Near the Critical Point: Interaction with the Sigma Field
In this section, unlike the previous sections, we shall consider the situation in which the freeze-out occurs very close to the critical point. This point is characterized by large long-wavelength fluctuations of the sigma field (chiral condensate). We must take into account the effect of the $`G\sigma \pi \pi `$ interaction between the pions and such a fluctuating field. We do that by calculating the contribution of this effect to the “master correlator”. We find:
$$\mathrm{\Delta }n_p\mathrm{\Delta }n_k=v_p^2\delta _{pk}+\frac{1}{m_\sigma ^2}\frac{G^2}{T}\frac{v_p^2v_k^2}{\omega _p\omega _k}.$$
(11)
We see that the exchange of the soft sigma field leads to a dramatic off-diagonal correlation, the size of which grows as we approach the critical point and $`m_\sigma `$ decreases. This correlation takes over from the off-diagonal anti-correlation discussed in the previous section.
To quantify the effect of this correlation we computed the contribution to the ratio $`F`$ (8) from (11). We find:
$$\mathrm{\Delta }F_\sigma =0.14\left(\frac{G_{\mathrm{freeze}\text{-}\mathrm{out}}}{300\text{ MeV}}\right)^2\left(\frac{\xi _{\mathrm{freeze}\text{-}\mathrm{out}}}{6\text{ fm}}\right)^2\quad \text{for}\quad \mu _\pi =0,$$
(12)
This effect, similarly to the Bose enhancement, is sensitive to the over-population of the pion phase space characterized by $`\mu _\pi `$, and increases by a factor of 2.5 for $`\mu _\pi =60`$ MeV. We estimate the size of the coupling $`G`$ to be around 300 MeV near the point E, and the correlation length $`\xi =m_\sigma ^{-1}`$, bound by finite size effects, to be less than 6 fm. The effect (12) can easily exceed the present statistical uncertainty in the data (8) by 1–2 orders of magnitude.
It is important to note that we have calculated the effect of critical fluctuations on $`F`$ because this ratio is being measured in experiments such as NA49. This observable is not optimized for the detection of critical fluctuations. It is easy to understand that observables which are more sensitive to small $`p_T`$ than $`F`$, and/or observables which are sensitive to off-diagonal correlations in $`pk`$ space, would show an even larger effect as the critical point is approached.
## 6 Pions From Sigma Decay
Near the critical endpoint, the excitations (quasiparticles) of the sigma field are nearly massless and are therefore numerous. Because the pions are massive at the critical point, these $`\sigma `$’s cannot immediately decay into two pions and persist as the system expands after freeze-out when it occurs near the critical point. During the expansion, the in-medium $`\sigma `$ mass rises towards its vacuum value and eventually exceeds the two pion threshold. At this point the $`\sigma `$’s decay quickly, yielding a non-thermal population of soft pions.
We estimate the mean momentum of these soft pions to be around 0.6$`m_\pi `$ and their total number to be of the order of the number of direct pions (i.e., they should constitute up to a third of the total observed pions near the critical point). The multiplicity fluctuations of these pions, $`(\mathrm{\Delta }N)^2/N\approx 2.7`$, are significantly larger than those of the rest of the pions (6).
## 7 Conclusions
In summary, our understanding of the thermodynamics of QCD will be greatly enhanced by the detailed study of event-by-event fluctuations in heavy ion collisions. We have estimated the influence of a number of different physical effects on the master correlator $`\mathrm{\Delta }n_p\mathrm{\Delta }n_k`$. This is itself measurable, but we have in addition used it to make predictions for the fluctuations of observables which have been measured at present, such as $`(\mathrm{\Delta }p_T)^2`$ and $`(\mathrm{\Delta }N)^2`$ and also for the cross correlation $`\mathrm{\Delta }N\mathrm{\Delta }p_T`$.
The signatures we analyze allow experiments to map out distinctive features of the QCD phase diagram. The striking example which we have considered in detail is the effect of a second order critical end point. The nonmonotonic appearance and then disappearance of any one of the signatures of the critical fluctuations which we have described would be strong evidence for the critical point. Simultaneous detection of the effects of the critical fluctuations on different observables would turn strong evidence into a decisive discovery.
# Breakup of a Dimer: A New Approach to Localization Transition
## Abstract
Within the framework of tight binding models, aperiodic systems are mapped onto a renormalized lattice with a dimer defect. In models exhibiting a metal-insulator transition, the dimer acts like a resonant cavity and explains the existence of ballistic transport in the system. Localization in the model can be attributed to the vanishing of the coupling between the two sites of the dimer. Our approach unifies the Anderson transition and the resonance transition and provides a new formulation for understanding localization and its absence in aperiodic systems.
The existence of metal-insulator transitions due to quantum interference in correlated disordered systems is a fascinating subject. The origin of this resonant transition can be understood from the textbook quantum mechanics of a barrier in the continuum problem and from a simple dimer model in the case of a discrete lattice. The resonance transition, where the metallic phase with ballistic transport is due to zero reflectance at the two sites of the dimer, provides a simple mechanism to understand localization and its absence in systems with short-range correlation. Recently, this transition was verified in experiments on superlattices.
In this letter, we present a universal approach to understanding metal-insulator transitions by unifying the Anderson transition with the resonance transition. Our formulation is applicable to any $`aperiodic`$ system with reflection symmetry described by a nearest-neighbor tight binding model (TBM). In this paper we will confine ourselves to sinusoidal potentials which are further modulated by a Gaussian profile. The purpose of the Gaussian modulation is two-fold: as explained below, it provides a useful means to motivate and illustrate our ideas, even though our results and conclusions are valid in the limit where the width of the Gaussian goes to $`\mathrm{\infty }`$. Secondly, it facilitates the study of the pure Gaussian systems that have been the subject of recent studies due to their possible application as efficient energy band-pass filters in semiconducting superlattices. Finally, the case of a sinusoidal potential further modulated by a Gaussian profile includes the famous Harper equation, exhibiting Anderson localization, as a limiting case.
The model system under consideration here is a TBM describing an eigenvalue problem with energy $`E`$,
$$\psi _{m+1}+\psi _{m-1}-E_m\psi _m=0.$$
(1)
Here $`E_m=E-ϵ_m`$ is the diagonal term containing the aperiodic onsite energy $`ϵ_m`$. This model also describes the Schrödinger equation for an array of $`\delta `$-function Kronig-Penney potential barriers,
$$-\frac{\hbar ^2}{2M}\psi ^{\prime \prime }(x)+\sum _mϵ_m\delta (x-ma)\psi (x)=E\psi (x).$$
(2)
This is due to the fact that the Poincaré map associated with the model is a TBM with $`E_m=ϵ_m\frac{\mathrm{sin}(K)}{K}+2\mathrm{cos}(K)`$. Here $`K`$ is the Bloch vector, related to the energy as $`E=\hbar ^2K^2/2M`$.
We choose $`ϵ_m`$ to be a sinusoidal potential which is further modulated by a Gaussian,
$$ϵ_m=2\lambda \mathrm{cos}[2\pi \mu (m-m_0)]\mathrm{exp}\left[-\frac{(m-m_0)^2}{2\sigma ^2}\right].$$
(3)
Here $`\lambda `$ is the strength of the potential, $`\sigma `$ is the width of the Gaussian profile and $`\mu `$ is an irrational number chosen to be the inverse golden mean $`\mu =\frac{\sqrt{5}-1}{2}`$. In this paper, we will describe our results for the pure Gaussian model ($`\mu =0`$), the Harper equation ($`\sigma \to \mathrm{\infty }`$) and also for the $`\delta `$-function barriers, Eq. (2). The last case closely resembles the recent study of the Gaussian modulated Kronig-Penney model and is also currently under experimental investigation in superlattices.
The basic idea underlying our approach can be understood for any localized defect with finite spatial extent. For aperiodic systems such as the Harper equation, where the aperiodicity exists throughout the lattice, we introduce Gaussian modulation so that the system can be viewed as a lattice with a localized defect. However, as shown below, the $`\sigma \to \mathrm{\infty }`$ limit is well defined and therefore the Gaussian modulation, although not needed, is useful in understanding the decimation scheme described below.
We envision the perfectly transmitting phase in all aperiodic systems to be described by Bloch states on some “renormalized” lattice. In aperiodic systems where the defects are spatially confined to only some parts of the lattice, Bloch wave amplitudes are attenuated or amplified, due to various quantum interferences, only in the neighborhood of these defects (see Figure 1). We eliminate such sites from the model by decimating them. The resulting renormalized model will have solutions that are Bloch waves at all sites in the metallic phase. We would like to emphasize that we decimate $`all`$ sites, which is in contrast to the previous use of decimation for random systems, where every other site is decimated. However, the basic idea of decimating all sites is similar in spirit to that of the Fibonacci decimation used in quasiperiodic systems, where one eliminates all but the Fibonacci sites so that the renormalized lattice may exhibit translational invariance in Fibonacci space.
Figure 2 outlines the decimation process. We begin by eliminating the central site $`m_0`$ of the symmetric defect. This leads to a renormalized model consisting of a dimer, with the coupling between the two sites being $`\overline{\gamma }_1=1/E_0`$ and the onsite energy $`\overline{E}_1=E_1-\overline{\gamma }_1`$. We now begin the iterative process by decimating this dimer. At the $`n^{th}`$ step, we obtain a new lattice with a renormalized dimer, with onsite energies denoted as $`\overline{E}_{n+1}`$ and renormalized coupling between the two sites of the dimer $`\overline{\gamma }_{n+1}`$. This iterative decimation scheme results in a two-dimensional $`driven`$ map for $`(\overline{\gamma },\overline{E})`$, where the $`E_n`$, containing the diagonal disorder of the bare model, provides the driving term:
$`\overline{\gamma }_{n+1}`$ $`=`$ $`{\displaystyle \frac{\overline{\gamma }_n}{\overline{E}_n^2-\overline{\gamma }_n^2}}`$ (4)
$`\overline{E}_{n+1}`$ $`=`$ $`E_{n+1}-(1+\overline{\gamma }_{n+1}\overline{\gamma }_n)/\overline{E}_n.`$ (5)
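The driven map is straightforward to iterate; a minimal sketch of Eqs. (4)–(5), fed by the potential (3), is given below (the energy and potential parameters are illustrative choices; the divergences of $`\overline{E}_n`$ that can occur are genuine features of the flow):

```python
import numpy as np

lam, sigma, mu, E = 0.5, 50.0, (np.sqrt(5.0) - 1.0) / 2.0, 0.0  # illustrative

def eps(m, m0=0):
    """On-site energies of Eq. (3), centered at m0 (symmetric in m - m0)."""
    return 2.0 * lam * np.cos(2.0 * np.pi * mu * (m - m0)) * \
           np.exp(-(m - m0)**2 / (2.0 * sigma**2))

def rg_flow(n_steps):
    """Iterate the driven map, Eqs. (4)-(5), starting from the decimation
    of the central site: gamma_1 = 1/E_0, Ebar_1 = E_1 - gamma_1."""
    gbar = 1.0 / (E - eps(0))
    Ebar = (E - eps(1)) - gbar
    traj = [(gbar, Ebar)]
    for n in range(1, n_steps):
        g_next = gbar / (Ebar**2 - gbar**2)
        Ebar = (E - eps(n + 1)) - (1.0 + g_next * gbar) / Ebar
        gbar = g_next
        traj.append((gbar, Ebar))
    return np.array(traj)

print(rg_flow(20))
```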
It should be noted that the parameters of the model are included in $`E_{n+1}`$. This two-dimensional map contains the complexity of the various interference effects within the Gaussian profile, manifesting itself in the energy dependent coupling and onsite potential. We would like to emphasize again that the map has a well defined limit for $`\sigma =\infty `$. Therefore, the Gaussian profile is not necessary for obtaining the mapping of the aperiodic model to the dimer model. The dynamics of the map does depend upon $`\sigma `$: a finite value of $`\sigma `$ provides damping in the map, with the consequence that the renormalization group (RG) flow settles on attractors, while in the limit $`\sigma \to \infty `$ there are no attractors in the map.
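For concreteness, the following minimal Python sketch iterates the driven dimer map of Eqs. (4)–(5). The identification of the driving term as $`E_n=E-ϵ_{m_0+n}`$ is our illustrative assumption (it presumes the TBM convention $`\psi _{m+1}+\psi _{m-1}=E_m\psi _m`$), as are all parameter values; the function and variable names are ours, not the paper's.

```python
import numpy as np

def epsilon(m, lam, sigma, mu, m0):
    # Bare potential of Eq. (3): cosine modulated by a Gaussian envelope.
    return 2.0 * lam * np.cos(2.0 * np.pi * mu * (m - m0)) \
               * np.exp(-(m - m0)**2 / (2.0 * sigma**2))

def rg_flow(E_drive, n_steps):
    # Driven dimer map of Eqs. (4)-(5).  E_drive[n] is the driving term
    # E_n; E_drive[0] belongs to the decimated central site m0.
    gamma = 1.0 / E_drive[0]            # gamma_bar_1 = 1/E_0
    E = E_drive[1] - gamma              # E_bar_1 = E_1 - gamma_bar_1
    flow = [(gamma, E)]
    for n in range(1, n_steps):
        gamma_next = gamma / (E**2 - gamma**2)
        E = E_drive[n + 1] - (1.0 + gamma_next * gamma) / E
        gamma = gamma_next
        flow.append((gamma, E))
    return flow

# Pure Gaussian model (mu = 0) at an assumed energy E0; both the energy
# and the identification E_n = E0 - epsilon_{m0+n} are illustrative.
lam, sigma, E0 = 0.5, 50.0, 1.2
E_drive = [E0 - epsilon(n, lam, sigma, 0.0, 0) for n in range(402)]
flow = rg_flow(E_drive, 400)
print([abs(g) for g, _ in flow[-5:]])  # |gamma_bar| late in the flow:
                                       # decay with n signals localization
```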
The usefulness of this map emerges from the fact that changes in the parameter $`\lambda `$ are reflected in significant changes in the trajectories of the map, for finite as well as infinite values of $`\sigma `$. Figure 3 shows how the metal-insulator transition in the pure Gaussian case manifests itself in the variation of the RG attractor as $`\lambda `$ is varied. In the subcritical phase, where the lattice has a transmission coefficient $`T`$ equal to unity, a symmetric period-2 limit cycle describes the RG flow for the renormalized coupling $`\overline{\gamma }`$, while the dynamics of $`\overline{E}`$ is governed by a fixed point. Away from the transition point, the RG attractor exhibits a very regular oscillatory pattern consisting of symmetric lobes and divergences which is periodic in $`\lambda `$, with periodicity equal to $`\frac{1}{\sigma }`$. The fact that the total number of lobes or divergences is equal to $`\sigma `$ suggests that each lobe owes its existence to a particular site within the Gaussian profile, with the sites near the center contributing at smaller values of $`\lambda `$. However, a quantitative understanding of the almost equally spaced lobes and their physical significance eludes us at present.
As we approach the transition point, the symmetric 2–cycle of $`\overline{\gamma }`$ loses its symmetry, degenerates to a fixed point (seen in Fig. 3 where the lobes cross) and then continues as an asymmetric period-2 attractor. On the other hand, the fixed point describing the $`\overline{E}`$ values becomes a 2–cycle. At the transition, $`\overline{\gamma }`$ approaches zero with a power-law decay, resulting in a $`broken`$ $`dimer`$. The localized phase is characterized by an exponential vanishing of the coupling, with a characteristic length which is found to be related to the localization length $`\xi `$ of the localized wave function, $`\overline{\gamma }_n\sim \mathrm{exp}(-2n/\xi )`$. It turns out that in the localized phase, $`\overline{E}`$ continues to be described by a period–2 cycle exhibiting divergences, where the spacing between two successive divergences increases as $`\lambda `$ increases.
The localization transition discussed above is a resonant transition, where the metallic phase is described by Bloch wave solutions which undergo real phase shifts as they encounter the dimer defect. In the localized phase, these phase shifts become imaginary. Imposing this condition on the solutions of the renormalized system determines the condition for perfect transmission. We express this condition in terms of a function $`f`$ defined as
$$f_n(\lambda ,\sigma ,E)=1-\overline{\gamma }_n^2+\overline{E}_n(\overline{E}_n-E_n).$$
(6)
Figure 4 shows the variation in $`f_n`$ and the transmission coefficient $`T`$ for the Kronig-Penney model Eq. (2) as the energy $`E`$ is varied. We see that the condition for the vanishing of this function coincides with the condition for perfect transmission. We would like to point out that the transmission coefficient was calculated in a rather simple way by using the renormalized dimer. This requires multiplying only two transfer matrices, in contrast to the usual calculations where one needs to multiply all the transfer matrices on an aperiodic lattice.
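Reusing the functions from the sketch above, one can scan the indicator of Eq. (6) over energy; its sign changes locate candidate perfect-transmission energies. As before, the driving-term identification and all parameter values are our assumptions.

```python
import numpy as np  # epsilon() and rg_flow() are defined in the sketch above

def f_indicator(flow, E_drive):
    # Perfect-transmission indicator of Eq. (6); flow[k] holds
    # (gamma_bar_n, E_bar_n) with n = k + 1, driven by E_n = E_drive[n].
    return [1.0 - g**2 + Eb * (Eb - E_drive[k + 1])
            for k, (g, Eb) in enumerate(flow)]

lam, sigma = 0.5, 50.0
energies = np.linspace(-2.0, 2.0, 800)
f_late = []
for E in energies:
    E_drive = [E - epsilon(n, lam, sigma, 0.0, 0) for n in range(402)]
    f_late.append(f_indicator(rg_flow(E_drive, 400), E_drive)[-1])

f_late = np.array(f_late)
crossings = energies[:-1][f_late[:-1] * f_late[1:] < 0.0]
print(crossings)  # candidate perfectly transmitting energies (cf. Fig. 4)
```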
The existence of a band of conducting states in Gaussian modulated lattices as discussed above (which is consistent with the previous related study) is a very desirable feature of a lattice, making it a useful filter. Attempts are underway to make such filters using superlattices. This should be contrasted with dimer-type defects, where the metal-insulator transition is obtained by fine-tuning the parameters so as to bring the resonant energy close to the Fermi energy. We would like to point out that the metal-insulator transition and the band of perfectly transmitting states in the Gaussian system can also be understood by heuristic arguments based on the asymptotic "constancy" of the potential. Due to the finite width of the potential, the model has a constant potential asymptotically. This property is crucial in determining $`E`$, which is a global property. This argument leads to extended states if $`E>2-2\lambda `$ and localized states for $`E<2-2\lambda `$. This condition is found to hold in our numerical simulations for large values of $`\sigma `$.
We next discuss the case of the pure Harper equation, obtained in the limit $`\sigma \to \infty `$ in the driving term of the two-dimensional map. The subcritical phase is no longer described by an attractor. As $`\lambda `$ increases, the RG trajectories become more and more complex (see Fig. 5) and eventually collapse to a vertical line corresponding to $`\overline{\gamma }\to 0`$ at the onset of the localization transition. The localized phase is again characterized by an exponentially decaying coupling of the dimer, with a length scale equal to half the localization length of the Harper equation. The metallic phase of the Harper equation is thus described in terms of a resonance due to the dimer, and the localization is due to the breaking of this dimer.
Recently, the existence of extended states in the supercritical Frenkel-Kontorova (FK) model and in Fibonacci lattices was shown to be due to dimer-type correlations, using decimation schemes that were model dependent. The novel aspect of the work described here is the universal nature of our approach: the two-dimensional map Eq. (5) can be used to study localization or its absence in systems with short range correlations such as dimer defects, in FK and Fibonacci models with long range correlations, as well as in the Harper equation. Our most important result is the picture of the localized phase as a phase with a broken dimer. Although the present framework is developed only for TBM type systems, it is possible to extend our method to study localization in two-dimensional models as well as in systems with long range interactions. The new methodology developed here will have applications in many other areas, including dynamical localization in kicked rotors as well as the transition to strange nonchaotic attractors (SNAs) in quasiperiodically driven maps.
The research of IIS is supported by National Science Foundation Grant No. DMR 097535. IGC would like to thank George Mason University for its hospitality during his visit. We would like to thank Bala Sundaram for his useful comments on this paper.
# Fluorescence of [Fe ii] in H ii regions Based on observations made with the Isaac Newton Telescope, operated on the island of La Palma by the Isaac Newton Group in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofísica de Canarias.
## 1 Introduction
Recent studies of the \[Fe ii\] lines observed in M42 have shown that some relative intensities between them cannot be accounted for by assuming that the line excitation is produced only by electron collisions, at the densities ($`N_\mathrm{e}\sim 10^4\text{ cm}^{-3}`$) indicated by the ground state transitions $`{}^{2}D`$–$`{}^{4}S`$ of \[S ii\] and \[O ii\] (e.g. Bautista et al. bpp96 (1996); Baldwin et al. bald96 (1996)). In order to solve the ensuing problem, Bautista et al. (bpo94 (1994)) postulate that the lines in question have their origin in a high-density partially ionized layer with $`N_\mathrm{e}\sim 10^6\text{ cm}^{-3}`$ (see also Bautista & Pradhan bp95 (1995); Bautista et al. bpp96 (1996); Bautista & Pradhan bp98 (1998), BP98 hereafter). But, alternatively, Lucy (lucy95 (1995)) has shown that the line ratios can be explained by considering the fluorescent excitation of the $`\mathrm{Fe}^+`$ levels by UV nebular radiation, which is diluted and reprocessed stellar light, an interpretation further supported by Baldwin et al. (bald96 (1996)) and Rodríguez (rod96 (1996)). The main arguments presented in all these papers for and against either interpretation rest on the comparison of measured \[Fe ii\] line intensity ratios with calculations involving the collision strengths of the relevant $`\mathrm{Fe}^+`$ levels and the spectral distribution of the radiation field. Consequently, the reliability of the arguments cannot be readily assessed, and since not all the calculations use the same set of collision strengths, it is also difficult to carry out a meaningful comparison of their results.
This paper presents direct observational evidence for the important role played by fluorescence in the formation of the optical \[Fe ii\] lines in M42 and M43. On the basis of this new evidence, the aforementioned interpretations of the \[Fe ii\] line intensities, either in terms of fluorescent excitation or as a diagnostic for the existence of a high-density partially ionized layer, will be critically discussed.
## 2 The data
The intensities of line and continuum radiation emitted by various areas within seven Galactic H ii regions, measured by Rodríguez (rod96 (1996)), provide the basic data for this discussion. Because the optical \[Fe ii\] lines are intrinsically weak and the contamination of the true nebular continuum by night-sky light is minimal, this paper is restricted to the apparently brightest H ii regions M42 and M43.
The present discussion opens with a comparison between the line and continuum intensities used in this paper and those available in the literature. In Table 1 are shown the reddening-corrected intensity ratios, $`I(\lambda )/I(\mathrm{H}\beta )`$, of four of the stronger optical \[Fe ii\] lines: $`\lambda `$4287 ($`a^6D_{9/2}`$–$`a^6S_{5/2}`$), $`\lambda `$5158+9 ($`a^4F_{9/2}`$–$`a^4H_{13/2}`$, $`a^4F_{7/2}`$–$`b^4P_{3/2}`$), $`\lambda `$5262 ($`a^4F_{7/2}`$–$`a^4H_{11/2}`$) and $`\lambda `$8617 ($`a^4F_{9/2}`$–$`a^4P_{5/2}`$). Besides the uncertainties inherent in the measurement of individual line intensities (particularly in the fixing of the continuum level), the ratio $`I(8617)/I(\mathrm{H}\beta )`$ may be affected by the intensity calibration between the two different spectral ranges involved, with no lines in common. This latter uncertainty is estimated to be $`\sim 15`$%, the degree of disagreement between the reddening-corrected ratios $`I(\text{Pa12})/I(\mathrm{H}\beta )`$ and $`I(\text{Pa13})/I(\mathrm{H}\beta )`$ and their recombination values (Hummer & Storey hum87 (1987)), since Pa12 and Pa13 are in the same spectral range as $`\lambda `$8617. For comparison with some of the values given in Table 1, the relative \[Fe ii\] line intensities obtained by Osterbrock et al. (otv92 (1992)) and Esteban et al. (est98 (1998)) at various positions in M42 are listed in Table 2. It can thus be seen that the ratios of Osterbrock et al. (otv92 (1992)) are quite close to the values for the positions M42 A–4 and M42 A–5 in Table 1. The values for M42–1 and M42–2 in Table 2 are also almost equal to those for M42 A–3 and M42 A–5 in Table 1. On the whole, then, it would appear that the \[Fe ii\] line ratios have a measurement accuracy of 15%.
Intensity measurements of the continuous nebular spectrum are not generally given in papers mainly concerned with the line spectrum, probably because they are intrinsically more difficult to carry out, requiring corrections due to the presence of night-sky light. The measurements presented here have been made after subtracting separate sky exposures, the intensities of which had to be scaled – by factors between 0.5 and 1.5 – in order to get the best cancellation of the sky lines, from the nebular exposures. In M42 and M43 the night-sky brightness contributes less than 5% to the continuum and the sky-subtraction process is thus inconsequential. The compilation by Schiffer & Mathis (sch74 (1974)) of intensity measurements in the continuous spectrum of M42, relative to $`\mathrm{H}\beta `$, covers the range $`1.6\text{–}5.5\times 10^{-3}\text{ Å}^{-1}`$, in agreement with the measurements presented in Table 1.
## 3 Results
According to Lucy (lucy95 (1995)) and Baldwin et al. (bald96 (1996)), the line \[Fe ii\] $`\lambda `$8617 is almost insensitive to the effects of optical pumping. The \[Fe ii\] $`\lambda `$4287 line, on the contrary, is expected to be very sensitive to fluorescence, since it arises in the $`a^6S`$ term of the sextet system to which the $`a^6D`$ ground term also belongs and, therefore, $`a^6S`$ can be populated by allowed emissions from the $`z^6P^o`$ and $`z^6D^o`$ terms, which in turn are connected to the ground term by allowed UV transitions. It follows that if fluorescence plays a role in the formation of $`\lambda `$4287, its intensity should be related to that of the radiation field inducing the fluorescence, which is again in the UV range. Nevertheless, since the continuous spectrum of H ii regions is primarily stellar light scattered by dust coexisting with the emitting gas (O'Dell & Hubbard oh65 (1965); Peimbert & Goldsmith pg72 (1972)), its observed intensity variations in the $`\mathrm{H}\beta `$ region should correspond to similar variations in the UV or any other range. Therefore, the clear correlation shown in Fig. 1a, between $`I(4287)/I(8617)`$ and the intensity of the continuum near $`\mathrm{H}\beta `$, normalized to the $`\mathrm{H}\beta `$ intensity, shows indeed that fluorescence is taking place in the formation process of $`\lambda `$4287. The efficiency of fluorescence in enhancing the line intensities, over the value expected under pure collisional excitation, is shown for $`\lambda `$4287 in Fig. 1d, which exhibits the correlation between the intensity of the continuum near $`\mathrm{H}\beta `$, in units of the $`\mathrm{H}\beta `$ intensity, and the $`I(4287)/I(8617)`$ ratio normalized to the value $`c_{\mathrm{col}}`$ that the same ratio would take if the line excitation were due to electron collisions, at densities $`N_\mathrm{e}`$\[S ii\] and temperatures $`T_\mathrm{e}`$\[N ii\] (see Rodríguez rod96 (1996) for the definitions of $`N_\mathrm{e}`$\[S ii\] and $`T_\mathrm{e}`$\[N ii\]; the values of these parameters will be given elsewhere). The values of $`I(4287)/I(8617)/c_{\mathrm{col}}`$ shown in Fig. 1d imply that fluorescence enhances $`\lambda `$4287 by two orders of magnitude with respect to $`\lambda `$8617. In this context, it should be noted that the values of $`c_{\mathrm{col}}`$ appearing in Fig. 1 were calculated by Bautista & Pradhan (bp96 (1996)) using their own collision strengths, although the collision strengths of Pradhan & Zhang (pz93 (1993)) and Zhang & Pradhan (zp95 (1995)) are considered more accurate in BP98. The calculations of Pradhan & Zhang do not include the $`a^6S`$ term and hence cannot be used to calculate the collisional value of $`I(4287)/I(8617)`$. But, anyway, the correlation between $`I(4287)/I(8617)`$ and the intensity in the continuum is well established for the observed line ratios, i.e. uncorrected for collisional effects, and can only become tighter when the line ratios are normalized by $`c_{\mathrm{col}}`$ as calculated with correct values for the collision strengths.
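As a purely numerical illustration of the quantity plotted in Fig. 1d, the snippet below forms the fluorescence enhancement factor from an observed ratio and its collisional prediction. Both input numbers are placeholders of the right order of magnitude, not the measured values of Table 1 or the tabulated $`c_{\mathrm{col}}`$.

```python
# Placeholder inputs (assumed, for illustration only):
ratio_obs = 1.0    # a hypothetical measured I(4287)/I(8617)
c_col = 0.01       # a hypothetical collisional prediction at N_e[S ii], T_e[N ii]

enhancement = ratio_obs / c_col
print(f"I(4287)/I(8617)/c_col = {enhancement:.0f}")  # ~ 10^2, cf. Fig. 1d
```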
As far as other \[Fe ii\] lines are concerned, only the measured intensities of $`\lambda `$5158 and $`\lambda `$5262 exhibit clear correlations with the intensity in the continuum, similar although somewhat looser than that shown by $`\lambda `$4287, as can be seen in Figs. 1b, 1e, 1c and 1f. A fluorescent contribution to other lines, however, cannot be excluded: for example, $`\lambda `$4244+5 ($`a^4G_{7/2}`$–$`a^4F_{7/2}`$, $`a^4G_{11/2}`$–$`a^4F_{9/2}`$), $`\lambda `$4277 ($`a^4F_{7/2}`$–$`a^4G_{9/2}`$) and $`\lambda `$5334 ($`a^4F_{5/2}`$–$`a^4H_{9/2}`$) are weaker than the transitions shown in Fig. 1, and therefore could be measured only in a few positions in M42, insufficient to provide sets of data with definite trends. The $`\lambda `$4815 ($`a^4F_{9/2}`$–$`b^4F_{9/2}`$) and $`\lambda `$7155 ($`a^4F_{9/2}`$–$`a^2G_{9/2}`$) lines are as strong as those of Fig. 1, but $`\lambda `$4815 is blended with Si iii $`\lambda `$4813 and S ii $`\lambda `$4815 (Esteban et al. est98 (1998)), and $`\lambda `$7155 is insensitive to fluorescence, since it arises in the doublet system.
The fluorescence effects clearly illustrated in Fig. 1, especially for the lines $`\lambda `$4287 and $`\lambda `$5158, give grounds for considering that most \[Fe ii\] lines may be significantly affected by radiative excitation, depending in a very complicated fashion on the structure of the $`\mathrm{Fe}^+`$ ion. The observed strength of the \[Fe ii\] lines can therefore be explained in terms of a line formation process based on physical principles applicable under the conditions characteristic of conventional nebular models (density, temperature and state of ionization). Since fluorescent excitation of \[Fe ii\] is not important for densities greater than $`10^5\text{ cm}^{-3}`$ (BP98), it would appear that the use of the \[Fe ii\] lines as a diagnostic for the existence of a high-density layer is completely inappropriate. The proponents of this high-density model have attempted to use the intensity ratio $`I(\lambda 6300+\lambda 6363)/I(\lambda 5577)`$ of nebular \[O i\] lines as independent evidence for the high-density layer in their model (Bautista & Pradhan bp95 (1995); BP98), but the nebular \[O i\] lines are difficult to measure accurately because of their contamination by the strong night-sky emission in the same \[O i\] lines, especially the auroral feature at $`\lambda `$5577. The ratios obtained from recent and reliable measurements of the nebular component of this line in M42 (Esteban et al. est99 (1999); see also Baldwin et al. bald96 (1996)) have shown that the \[O i\] line ratio is in fact quite consistent with the line formation taking place at moderate densities. Besides, recent measurements of \[Fe ii\] lines insensitive to fluorescence in the 1–2 $`\mu `$m spectrum of the "bar" in M42 (Marconi et al. mar98 (1998); Luhman et al. luh98 (1998)) indicate that the densities at their levels of formation are in the range $`10^3\text{–}10^4\text{ cm}^{-3}`$.
Independently of their argument based on the nebular \[O i\] lines, BP98 also suggest that several measured \[Fe ii\] line ratios, when compared with model predictions, imply the existence of a high-density emitting layer, even allowing for the contribution of lower density layers to the line intensities. However, when comparing the predictions of the two \[Fe ii\] line formation models under discussion, it should be kept in mind that both rely on calculated collision strengths for the lines, which must be used with caution. An example of the uncertainties affecting the collision strengths well illustrates the problem. The calculations of Pradhan & Zhang (pz93 (1993)) and Zhang & Pradhan (zp95 (1995)) are considered in BP98 to have relatively low uncertainties, but they do not consider the $`\lambda `$4287 and $`\lambda `$7155 transitions. The two available sets of collision strengths dealing with $`\lambda `$4287 (Bautista & Pradhan bp96 (1996); BP98) lead to predicted values for the $`I(4287)/I(8617)`$ ratio that differ by a factor of 10 for any density value. Therefore the values derived for $`I(4287)/I(8617)`$ and $`I(7155)/I(8617)`$ may be quite uncertain, and these ratios are precisely those implying more clearly the existence of high-density emitting regions according to BP98.
In conclusion, it can be said that there is no compelling evidence for the presence of high-density regions to explain the origin of the \[Fe ii\] lines. Those in the near infrared must arise in regions of moderate density, while the optical lines have been clearly shown here to be affected by fluorescence, of significance only at moderate densities.
### 3.1 On the efficiency of fluorescent excitation
It has been argued by BP98 that photoexcitation of \[Fe ii\] lines is a relatively inefficient mechanism, since the ground state of the $`\mathrm{Fe}^+`$ ion is $`a^6D_{9/2}`$ whereas most of the observed lines arise in the quartet system. According to BP98, photoexcitation of these quartet levels must occur through intercombination transitions, with transition probabilities much lower than those of permitted transitions. However, even at moderate densities ($`N_\mathrm{e}\sim 10^3\text{ cm}^{-3}`$) the lowest quartet level, $`a^4F_{9/2}`$, has an appreciable population (Osterbrock et al. otv92 (1992)), and, therefore, higher quartet levels can be populated through permitted transitions from this level.
The absence of some Fe ii lines in the spectra of M42 has been considered by BP98 as further evidence against fluorescent excitation. In particular, Fe ii $`\lambda `$5169 ($`z^6P_{7/2}^o`$–$`a^6S_{5/2}`$) is mentioned as the main transition that would contribute to populate $`a^6S_{5/2}`$ radiatively. The intensity of $`\lambda `$5169 should then be about 70% that of $`\lambda `$4287, according to BP98. Since the upper limit of the relative intensities of these lines has been estimated to be 0.1 for M42, BP98 conclude that less than 20% of the \[Fe ii\] $`\lambda `$4287 intensity can be explained by fluorescent excitation.
The spectra available for M42 and M43 show a weak feature at $`\lambda 5169`$ Å whose intensity is about 10% that of $`\lambda `$4287, in accord with the upper limit mentioned by BP98. In view of the clear demonstration in Fig. 1 of the importance of fluorescence effects in the formation of $`\lambda `$4287, this result is puzzling and difficult to explain, as is also the extremely low contribution of fluorescence to the ratio $`I(4287)/I(8617)`$ estimated by BP98 (see their Fig. 4.6d). One way out of this difficulty would be to consider, as alternative radiative excitation mechanisms of the level $`a^6S_{5/2}`$, transitions to the levels $`y^6P^o`$ or $`x^6P^o`$ (with energies 0.57 and 0.72 Ry, respectively), implying the absorption of photons with $`\lambda =1608`$ or $`1261`$ Å. These sextets are comparable in energy with some of the quartets that BP98 consider when calculating the fluorescence effects on \[Fe ii\] emission, but they are not included in the set of collision strengths used by BP98, and therefore these sextets are not considered in their calculations.
The effects of fluorescent excitation on the \[Fe ii\] line intensities in M42 have also been calculated by Baldwin et al. (bald96 (1996)). Unfortunately, neither the $`a^6S`$ nor the $`a^2G`$ terms are included in the set of collision strengths they use (Pradhan & Zhang pz93 (1993); Zhang & Pradhan zp95 (1995)), and lines like $`\lambda `$4287 and $`\lambda `$7155 are not considered in their calculations. The same collision strengths are used by BP98 for the lines in common, but their predicted line ratios are somewhat different from those calculated by Baldwin et al. (bald96 (1996)). Nevertheless, the latter authors conclude that fluorescent excitation in a region of moderate density can explain the \[Fe ii\] spectrum observed by Osterbrock et al. (otv92 (1992)), while opposite conclusions are advanced by BP98. The different approaches to the problem in the two papers make it difficult to find the reasons for the discrepancies. The differences in the contribution of fluorescence to the line ratios presented by Baldwin et al. (bald96 (1996)) and BP98 can thus be considered to reflect the uncertainties involved in the calculation of fluorescence effects in a complex ion like $`\mathrm{Fe}^+`$.
In summary, none of the available calculations faithfully reproduces the observed \[Fe ii\] spectra, but it should be borne in mind that the effects of UV pumping on the \[Fe ii\] line ratios can be quite different from those calculated so far, since the contribution to the pumping of the dust-scattered light – whose relative intensity increases with frequency – has not yet been taken into account. The change in the spectral distribution of the diffuse radiation field would imply that terms like $`z^4G^o`$ (located 0.55 Ry above the ground level) and the sextets mentioned above ($`y^6P^o`$ and $`x^6P^o`$) would have greater contributions to the pumping, thereby increasing the fluorescence effects on lines like $`\lambda `$4287, $`\lambda `$4815, $`\lambda `$5158, $`\lambda `$5262 or $`\lambda `$5334.
## 4 Conclusions
The relative intensities of the \[Fe ii\] lines in the infrared spectra of M42 imply densities in the range $`10^3\text{–}10^4\text{ cm}^{-3}`$ (Marconi et al. mar98 (1998); Luhman et al. luh98 (1998)), but the optical \[Fe ii\] spectrum cannot be reproduced assuming pure collisional excitation at these low densities, independently of the set of collision strengths used in the calculations. Two additional agents for the excitation of the upper levels of the optical lines have been proposed: UV pumping (Lucy lucy95 (1995)) and emission at very high densities $`N_\mathrm{e}\sim 10^6\text{ cm}^{-3}`$ (Bautista et al. bpo94 (1994)). The available calculations based on these two processes (Baldwin et al. bald96 (1996); BP98) encounter certain difficulties when trying to reproduce faithfully the observed \[Fe ii\] line ratios. However, these calculations depend on the values used for the collision strengths, which have an accuracy that is difficult to estimate, on the completeness of the set of levels considered in the pumping processes and on the spectral intensity distribution of the radiation field involved. Consequently, the overall reliability of the results is difficult to assess.
The observations presented here have been shown to imply the importance of fluorescence processes in the formation of the optical \[Fe ii\] emission. This conclusion is independent of any calculation and renders the assumption of a high-density emitting layer unnecessary. Further implications are the unreliability of the available collision strengths for $`\mathrm{Fe}^+`$ (at least for some sextets and the doublets), and the need for further calculations on fluorescence that take into account the contribution of dust-scattered light to the radiation field.
###### Acknowledgements.
I am very grateful to Guido Münch and Antonio Mampaso for their advice during the development of this project and their contribution to the improvement of this manuscript. I also thank Terry Mahoney for revising the English text.
# Undulating Strings and Gauge Theory Waves
## 1 Introduction
In the context of Maldacena’s correspondence between gauge theories and gravity , external charges in the gauge theory are dual to macroscopic strings in anti-de Sitter ($`AdS`$) space whose endpoints lie on the boundary. This identification stems from the general role of strings connecting parallel branes as W-bosons of the corresponding spontaneously broken worldvolume theory , and can be confirmed within the $`AdS`$/CFT setting by computing the energy of such strings .
For concreteness, we will restrict attention to the duality between $`D=3+1`$ $`𝒩=4`$ $`SU(N)`$ super-Yang-Mills (SYM) and Type IIB string theory on $`AdS_5\times 𝐒^5`$. A solitary static quark (transforming in the fundamental of $`SU(N)`$) corresponds to a Type IIB string which extends solely in the radial direction; a string of opposite orientation represents an antiquark (transforming in the anti-fundamental of $`SU(N)`$). The GKPW recipe for extracting gauge theory expectation values from the bulk action makes it possible to verify directly that a radial string gives rise to the correct point charge field configuration . We note in passing that expectation values due to string probes in the bulk of $`AdS`$ (with no endpoints on the boundary) have also been computed .
A quark-antiquark pair in the gauge theory is naturally identified with a string with both of its endpoints on the boundary. Expectation values of Wilson loops can thus be deduced from the bulk theory by evaluating the area of a string worldsheet which is bounded by the loop . The result of such a calculation encodes in particular the quark-antiquark potential (see for a review of results on Wilson loops obtained from the bulk-boundary correspondence).
A defining property of a string is its ability to undulate. The identification of strings and charges raises an obvious question: what is the gauge theory interpretation of string oscillations? This is the issue we will address in what follows. The main tool at our disposal is again the GKPW calculational prescription . A string is a source for the supergravity fields, so an oscillating string generates fluctuating fields in the bulk of $`AdS`$ space. The correspondence then translates the fluctuating supergravity fields on the boundary into the time-dependent SYM expectation values associated with an oscillating charge. The analysis thus establishes a correspondence between string oscillations and gauge theory waves (including, one would hope, the usual $`r^1`$ radiation fields produced by an accelerated charge).
In Section 2 we will fill in the details of the procedure outlined in the previous paragraph. To understand the basic ideas it will suffice to concentrate on waves of the dilaton field, which is known to couple to the operator
$$𝒪_{F^2}=\frac{1}{4g_{YM}^2}Tr\left\{F^2+[X_I,X_J][X^I,X^J]+\text{fermions}\right\}$$
(1)
in the boundary theory. There is much to be learned by studying waves of other supergravity fields, especially the graviton, but we will leave this more difficult exercise for another paper. In the above equation $`X^I`$, $`I=1,\mathrm{\dots },6`$, denote the scalar fields of the $`𝒩=4`$ SYM theory (living in the vector of $`SO(6)`$).
The simple case of an oscillating straight radial string will be worked out in Section 3. In Section 4 we will then extend the analysis to the more intricate case of a ‘bent’ string, and discuss some interesting features of the fields of a quark-antiquark pair. We amplify the discussion on the implications of our results for the SYM theory in Section 5, where we point out a puzzle regarding energy conservation in the gauge theory. In Section 6 we apply the same methods to obtain the gauge field profile due to a baryon (represented as a D5-brane appropriately wrapped in $`AdS`$ space) and compare with the quark-antiquark case. A final section consists of a brief summary of our conclusions. Some aspects of string oscillations and SYM waves have been examined before, and we have attempted to go beyond these efforts in ways about which we will comment as appropriate.
## 2 String Oscillations Make SYM Waves
We describe the dynamics of a fundamental string through the Nambu-Goto action
$$S_F=-\frac{1}{2\pi \alpha ^{\prime }}\int d^2\sigma \sqrt{-g},$$
(2)
where $`g`$ is the induced metric on the string worldsheet. We work in Poincaré coordinates for $`AdS_5`$, with the metric
$$ds^2=\frac{R^2}{z^2}(-dt^2+d\vec{x}^2+dz^2)+R^2d\mathrm{\Omega }_5^2.$$
(3)
Making the static gauge choice $`\sigma ^1=t,\sigma ^2=z`$, and restricting attention to configurations with the string pointing along a particular $`𝐒^5`$ direction,<sup>3</sup><sup>3</sup>3The operator $`𝒪_{F^2}`$ couples to the spherically symmetric mode of the ten-dimensional dilaton, so we will focus attention on this mode alone. A string which is localized on the five-sphere will excite also all of the higher Kaluza-Klein harmonics, which are massive fields on $`AdS_5`$. These excitations would give expectation values to dual higher-dimension operators which have been identified in . the action reduces to
$$S_F=-\frac{R^2}{2\pi \alpha ^{\prime }}\int dt\frac{dz}{z^2}\sqrt{1-\partial _t\vec{X}^2+\partial _z\vec{X}^2-\partial _t\vec{X}^2\partial _z\vec{X}^2+\left(\partial _t\vec{X}\cdot \partial _z\vec{X}\right)^2},$$
(4)
where $`\vec{X}(z,t)`$ denotes the position of the string in the $`\vec{x}`$ directions. The static solutions to (4) can be taken to lie in the $`z`$–$`x^1`$ plane without loss of generality. They satisfy
$$\partial _zX_s=\pm \frac{z^2}{\sqrt{z_m^4-z^4}}.$$
(5)
This equation describes a string lying along a geodesic which starts and ends at $`z=0`$ and reaches a maximum at $`z=z_m`$ (see Fig. 1). The two endpoints of the string are separated by a coordinate distance
$$L=z_m\frac{(2\pi )^{3/2}}{\mathrm{\Gamma }(1/4)^2}.$$
(6)
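Before perturbing the string, it is straightforward to integrate the geodesic equation (5) numerically and verify the endpoint separation (6); the sketch below does this in units $`z_m=1`$ (our choice).

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

z_m = 1.0

def X(z):
    # Horizontal excursion of one half of the string, from Eq. (5).
    return quad(lambda s: s**2 / np.sqrt(z_m**4 - s**4), 0.0, z)[0]

L_num = 2.0 * X(z_m)                                   # both halves
L_closed = z_m * (2.0 * np.pi)**1.5 / gamma(0.25)**2   # Eq. (6)
print(L_num, L_closed)   # agree to quadrature accuracy
```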
Now consider small oscillations about the solution described by (5), letting $`\vec{X}(z,t)=\vec{X}_s(z)+\vec{Y}(z,t)`$. For simplicity, we take $`\vec{Y}\perp \vec{X}_s`$. The linearized equation of motion for $`\vec{Y}`$ is
$$-\partial _t^2\vec{Y}+\left[1-\frac{z^4}{z_m^4}\right]\partial _z^2\vec{Y}-\frac{2}{z}\partial _z\vec{Y}=0.$$
(7)
In the calculation to follow, we will cut off $`AdS_5`$ by moving the boundary in to $`z=z_0`$ and take $`z_0\to 0`$ at the end of the calculation. In order to solve (7), we need boundary conditions for the left and right string endpoints which we will impose in the form $`\vec{Y}_{L,R}(z_0,t)=\vec{y}_{L,R}(t)`$. The interpretation is straightforward: for a given $`z_0`$, the string is attempting to describe a Higgsed gauge boson of very large mass ($`\sim z_0^{-1}`$) transforming in the fundamental of the unbroken $`SU(N)`$ gauge group; this massive object is an extrinsic degree of freedom from the point of view of the $`SU(N)`$ gauge theory and has its own dynamics; this dynamics is essentially that of a point particle and is thus described by a trajectory function $`\vec{y}(t)`$. For the moment, we will simply prescribe a trajectory, but the Nambu-Goto action for the string in the $`AdS_5`$ geometry implies an action for $`\vec{y}(t)`$ which in turn implies an equation of motion for the trajectory. We will not pursue this line of thought much further in this paper, but it is interesting to note that the kinetic term in this equation of motion implies a quark mass that matches the static total energy of the quark/string.
Since the Nambu-Goto action (2) depends on the background supergravity fields, it is a source for them as well. In particular, it is a source for the dilaton, a fact which is best displayed by writing the action in terms of the Einstein metric $`G_{MN}^E=e^{\varphi /2}G_{MN}`$:
$$S_F=-\frac{1}{2\pi \alpha ^{\prime }}\int dtdze^{\varphi /2}\sqrt{-g_E}.$$
(8)
The same metric rescaling in the bulk supergravity action yields a dilaton kinetic term
$$S_S=-\frac{\mathrm{\Omega }_5R^5}{4\kappa ^2}\int d^5x\sqrt{-G_E}G_E^{mn}\partial _m\varphi \partial _n\varphi .$$
(9)
Notice that the original ten-dimensional action has been dimensionally reduced to $`AdS_5`$ over $`𝐒^5`$: $`\varphi `$ now denotes the projection of the original ten-dimensional $`\varphi `$ onto the constant $`𝐒^5`$ spherical harmonic and is a function on $`AdS_5`$ while $`m,n`$ are $`AdS_5`$ indices. The combined action $`S_{bulk}=S_S+S_F`$ implies a linearized dilaton equation of motion
$$\partial _m\left[\sqrt{-G_E}G_E^{mn}\partial _n\varphi \right]=J,\qquad J(x)=\frac{2\kappa ^2}{4\pi \alpha ^{\prime }\mathrm{\Omega }_5R^5}\sqrt{-g_E}\delta \left(\vec{x}-\vec{X}(z,t)\right).$$
(10)
This equation is solved by Green's function methods as $`\varphi (x)=\int d^5x^{\prime }D(x,x^{\prime })J(x^{\prime })`$, where $`D(x,x^{\prime })`$ is the retarded dilaton propagator . The propagator is in fact only a function of the invariant distance $`v`$, defined by
$$\mathrm{cos}v=1-\frac{(t-t^{\prime })^2-(\vec{x}-\vec{x}^{\prime })^2-(z-z^{\prime })^2}{2zz^{\prime }}.$$
(11)
Explicitly,
$$D(v)=\frac{1}{4\pi ^2R^3\mathrm{sin}v}\frac{d}{dv}\left[\frac{\mathrm{cos}2v}{\mathrm{sin}v}\theta (1-|\mathrm{cos}v|)\right].$$
(12)
This is a fairly complicated-looking propagator, but it is just the dimensional reduction of the much simpler, completely algebraic ten-dimensional $`AdS_5\times 𝐒^5`$ propagator
$$K\propto \frac{(zz^{\prime })^4}{\left[(z\widehat{n}-z^{\prime }\widehat{n}^{\prime })^2+(t-t^{\prime })^2+(\vec{x}-\vec{x}^{\prime })^2\right]^4}$$
(13)
where the unit vectors $`\widehat{n},\widehat{n}^{\prime }`$ indicate position on $`𝐒^5`$.
Having obtained $`\varphi (x)`$ in the bulk, the GKPW recipe to extract the expectation value is
$$\langle 𝒪_{F^2}\rangle =\frac{\delta S_{bulk}}{\delta \varphi }.$$
(14)
The expectation value would of course vanish in the gauge theory vacuum sector. On the other hand, the string corresponds to the sector of the gauge theory where a heavy quark has been inserted in the vacuum. We do expect a non-zero expectation value of the $`TrF^2`$ operator in that sector and Eq. (14) gives a method for computing it. Carrying out similar steps with higher $`𝐒^5`$ harmonic modes of the dilaton would yield gauge theory expectation values for operators of the type $`Tr(F^2X_I\mathrm{\cdots }X_J)`$, where the $`X_I`$ are the scalar fields of the $`𝒩=4`$ gauge theory . These higher-dimension operators should give rise to a correspondingly higher power law falloff with $`|\vec{x}|`$, a result which should emerge naturally from the structure of the Green's functions for the higher $`𝐒^5`$ harmonic modes of the dilaton.
Under $`\varphi \varphi +\delta \varphi `$ the action varies only by a surface term, because the configuration about which we vary is a solution to the equation of motion:
$$\delta S_{bulk}=\frac{\mathrm{\Omega }_5R^8}{2\kappa ^2}\int dtd^3\vec{x}\left(\frac{1}{z^3}\partial _z\varphi \delta \varphi \right)|_{z=z_0}.$$
(15)
As a shorthand, it will be convenient to define a rescaled dilaton field $`\stackrel{~}{\varphi }=\mathrm{\Omega }_5R^8\varphi /2\kappa ^2`$. It follows from the foregoing discussion that
$$\stackrel{~}{\varphi }(x)=\frac{1}{16\pi ^3\alpha ^{\prime }}\int dt^{\prime }dz^{\prime }\sqrt{-g_E}\frac{1}{\mathrm{sin}v}\frac{d}{dv}\left[\frac{\mathrm{cos}2v}{\mathrm{sin}v}\theta (1-|\mathrm{cos}v|)\right],$$
(16)
and
$$\langle 𝒪_{F^2}\rangle =\left(\frac{1}{z^3}\partial _z\stackrel{~}{\varphi }\right)|_{z=z_0\to 0}.$$
(17)
Our task, then, is to calculate the dilaton field produced by various string sources and to pick out the $`O(z^4)`$ term in its expansion near the boundary of $`AdS_5`$.
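As a toy illustration of this last step (not the paper's method), one can sample a near-boundary profile and read off its $`O(z^4)`$ coefficient; the fake profile and its coefficient below are assumptions used purely to exercise the extraction.

```python
import numpy as np

# Fake near-boundary dilaton profile; 3.7 is an arbitrary made-up coefficient.
z = np.linspace(1e-3, 1e-1, 200)
phi_tilde = 3.7 * z**4 * (1.0 + 0.2 * z**2)

# Leading z^4 coefficient via a linear fit in the variable z^4 ...
c4 = np.polyfit(z**4, phi_tilde, 1)[0]
# ... and Eq. (17): (1/z^3) d/dz (c4 z^4) -> 4 c4 as z -> 0.
print(4.0 * c4)
```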
## 3 Gauge Fields of an Oscillating Quark
We will examine first the especially simple case $`z_m\to \infty `$, where the static solution describes a straight string extending along the radial direction, $`\vec{X}_s(z)=0`$. Eq. (7) simplifies to
$$-\partial _t^2\vec{Y}+\partial _z^2\vec{Y}-\frac{2}{z}\partial _z\vec{Y}=0.$$
(18)
This equation is most easily solved via Fourier transformation. The solution describing purely outgoing waves is found to be
$$\vec{Y}(z,t)=\int d\omega e^{-i\omega (t-z+z_0)}\left(\frac{1-i\omega z}{1-i\omega z_0}\right)\vec{y}(\omega ).$$
(19)
For simplicity, we specialize to harmonic boundary data,<sup>4</sup><sup>4</sup>4Henceforth it is understood that one should take the real part of expressions like this. $`\vec{y}(t)=\vec{A}\mathrm{exp}(-i\omega t)`$.
The Nambu-Goto square root in (16) can be expanded as
$$\sqrt{-g_E}\simeq \frac{R^2}{z^2}\left[1-\frac{1}{2}\left(\partial _t\vec{Y}\right)^2+\frac{1}{2}\left(\partial _z\vec{Y}\right)^2\right].$$
(20)
Keeping only the leading term, (16) reads
$$\stackrel{~}{\varphi }(\vec{x},z,t)=\frac{R^2}{16\pi ^3\alpha ^{\prime }}\int \frac{dt^{\prime }}{\mathrm{sin}v}\int \frac{dz^{\prime }}{z^{\prime 2}}\frac{d}{dv}\left[\frac{\mathrm{cos}2v}{\mathrm{sin}v}\theta (1-|\mathrm{cos}v|)\right].$$
(21)
Next, change variables of integration $`t^{\prime }\to v`$, using (11), and integrate by parts on $`v`$, to be left with
$`\stackrel{~}{\varphi }`$ $`=`$ $`{\displaystyle \frac{R^2z}{16\pi ^3\alpha ^{\prime }}\int \frac{dz^{\prime }}{z^{\prime }}I},`$
$`I`$ $`=`$ $`{\displaystyle \int _0^\pi dv\frac{\mathrm{cos}2v}{\mathrm{sin}v}\frac{d}{dv}\left[\frac{1}{\sqrt{z^2+z^{\prime 2}+(\vec{x}-\vec{Y})^2-2zz^{\prime }\mathrm{cos}v}}\right]}.`$ (22)
Now expand the square root in powers of $`Y`$. The leading ($`Y`$-independent) term gives rise to a static component of the dilaton field, $`\stackrel{~}{\varphi }_s(\vec{x},z)`$. Its contribution to the gauge theory expectation value has been computed in and found to be
$$\langle 𝒪_{F^2}\rangle _s=\frac{\sqrt{2}}{32\pi ^2}\frac{\sqrt{g_{YM}^2N}}{|\vec{x}|^4}.$$
(23)
This is as expected for a point charge of magnitude proportional to $`(g_{YM}^2N)^{1/4}`$, which is the effective strength of the coupling as inferred from the quark-antiquark potential .
The next term in the expansion of (22) in powers of $`Y`$, the term linear in $`Y`$, gives the leading dynamical contribution to $`\langle 𝒪_{F^2}\rangle `$:
$`\stackrel{~}{\varphi }_{(1)}`$ $`=`$ $`{\displaystyle \frac{R^2z}{16\pi ^3\alpha ^{\prime }}\int \frac{dz^{\prime }}{z^{\prime }}I_{(1)}},`$
$`I_{(1)}`$ $`=`$ $`{\displaystyle \int _0^\pi dv\frac{\mathrm{cos}2v}{\mathrm{sin}v}\frac{d}{dv}\left[\frac{\left(\vec{x}\cdot \vec{A}\right)e^{-i\omega (t^{\prime }-z^{\prime })}\left(1-i\omega z^{\prime }\right)}{\sqrt{z^2+z^{\prime 2}+|\vec{x}|^2-2zz^{\prime }\mathrm{cos}v}}\right]},`$ (24)
where $`t^{\prime }`$ is understood to be a function of $`v`$,
$$t^{\prime }=t-\sqrt{z^2+z^{\prime 2}+|\vec{x}|^2-2zz^{\prime }\mathrm{cos}v}.$$
Since, according to (17), we eventually only need the $`O(z^4)`$ terms in $`\stackrel{~}{\varphi }`$, we have set $`z_0=0`$ in (24).
If one expands the integrand of $`I_{(1)}`$ in powers of $`\eta =zz^{\prime }\mathrm{cos}v/(z^2+z^{\prime 2}+|\vec{x}|^2)`$, the first non-vanishing term is found to be $`O(\eta ^3)`$, and higher-order terms will not contribute to (17). Keeping only the relevant terms one obtains
$`\stackrel{~}{\varphi }_{(1)}`$ $`=`$ $`{\displaystyle \frac{R^2\left(\vec{x}\cdot \vec{A}\right)z^4}{128\pi ^2\alpha ^{\prime }}\int _0^{\infty }dz^{\prime }z^{\prime 2}(1-i\omega z^{\prime })e^{-i\omega (t-\sqrt{z^{\prime 2}+|\vec{x}|^2}-z^{\prime })}f(\sqrt{z^{\prime 2}+|\vec{x}|^2})},`$
$`f(u)`$ $`=`$ $`{\displaystyle \frac{i\omega ^3}{u^6}-\frac{12\omega ^2}{u^7}-\frac{57i\omega }{u^8}+\frac{105}{u^9}}.`$ (25)
The bulk dilaton field is evidently a superposition of waves radiated from each point along the string. The phase delay $`z^{\prime }+\sqrt{z^{\prime 2}+|\vec{x}|^2}`$ is simply the time needed for a null signal to propagate up along the string to the point $`z=z^{\prime }`$, and then travel down diagonally to reach the boundary at $`\vec{x}`$ (see Fig. 2).
To understand this result from the viewpoint of the boundary theory, it is advantageous to change the variable of integration to $`\zeta =\sqrt{1+z^{\prime 2}/|\vec{x}|^2}+z^{\prime }/|\vec{x}|`$. Using (25) in (17) one finds, after some integration by parts,
$`\langle 𝒪_{F^2}\rangle _{(1)}`$ $`=`$ $`{\displaystyle \frac{\sqrt{2g_{YM}^2N}\left(\widehat{x}\cdot \vec{A}\right)}{2\pi ^2|\vec{x}|^4}\left\{\int _1^{\infty }d\zeta i\omega e^{-i\omega (t-\zeta |\vec{x}|)}\chi (\zeta )+\frac{59}{32|\vec{x}|}e^{-i\omega (t-|\vec{x}|)}\right\}}`$
$`\chi (\zeta )`$ $`=`$ $`{\displaystyle \frac{210\zeta ^{10}-258\zeta ^8+267\zeta ^6+69\zeta ^4+55\zeta ^2+1}{2\left(\zeta ^2+1\right)^7}}.`$ (26)
The expectation value has been expressed solely in terms of gauge theory quantities through use of the relation $`R^2/\alpha ^{\prime }=\sqrt{2g_{YM}^2N}`$, as must be possible for a proper gauge theory interpretation. It should be noted that the dependence of the integrand on $`\omega |\vec{x}|`$ can be shifted from the phase factor to the envelope function $`\chi `$ through an integration by parts.
Eq. (26) displays the gauge theory disturbance as a superposition of components propagating at speeds $`v=1/\zeta `$, for all $`1\le \zeta <\infty `$. Notice that the weight factor $`\chi \to 0`$ as $`\zeta \to \infty `$, so low velocity components are evidently suppressed. It should be noted from (19) that the string oscillations actually become large (and our approximations fail) for large $`A\omega z^{\prime }`$, so the detailed shape of $`\chi `$ cannot be trusted at arbitrarily large $`\zeta `$. Nonetheless, it is clear from the geometric setup (summarized in Fig. 2) that the gauge theory wave should indeed include components propagating at arbitrarily low velocities.
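A short numerical look makes the velocity suppression quantitative. The sketch below uses the $`\chi (\zeta )`$ as reconstructed in Eq. (26); the choice of $`v=1/2`$ as a reference speed is ours.

```python
import numpy as np
from scipy.integrate import quad

def chi(zeta):
    # Velocity weight of Eq. (26); a component with parameter zeta
    # propagates at speed v = 1/zeta.
    num = (210*zeta**10 - 258*zeta**8 + 267*zeta**6
           + 69*zeta**4 + 55*zeta**2 + 1)
    return num / (2.0 * (zeta**2 + 1.0)**7)

print(chi(1.0), chi(5.0), chi(50.0))   # falloff of the slow components

w_fast, _ = quad(chi, 1.0, 2.0)        # weight at speeds v > 1/2
w_all, _ = quad(chi, 1.0, np.inf)
print(w_fast / w_all)                  # fraction of weight above v = 1/2
```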
The above result implies in particular that even if the charge is shaken abruptly to generate a sharply defined pulse on the string, the SYM observer at $`\vec{x}`$ will receive an infinitely broadened pulse, only the leading edge of which travels at the speed of light. The delayed signals presumably arise from rescattering of the original disturbance from the static background (23) by virtue of the nonlinear dynamics of the strongly-coupled gauge theory. This rather complex sequence of events would be difficult to unravel in the gauge theory, but the bulk-boundary correspondence gives a precise and physically plausible account of it. Note that the long time delays originate from string disturbances at large values of the bulk $`AdS`$ coordinate $`z`$, as would be expected from the UV/IR connection proposed in .
It is tempting to speak of (26) as electromagnetic ‘radiation’, but the rapid falloff with $`|\vec{x}|`$ indicates that this would not be strictly correct. We are looking at a contribution to $`\langle 𝒪_{F^2}\rangle `$ *linear* in the displacement of the quark and this can only arise from a cross-term between the static field and the fluctuating field. Since the static electric field is radial and the asymptotic radiation gauge fields (the ones that fall off as $`|\vec{x}|^{-1}`$) are transverse, their scalar product vanishes. Hence there is no unambiguous contribution of electromagnetic radiation to the $`𝒪_{F^2}`$ expectation value: instead, we see evidence of fluctuations in the near-fields of the moving quarks, fields which do not transport energy to infinity. The unambiguous diagnostic for radiation would be the demonstration of a net energy flux to spatial infinity in the gauge theory. This could be done by determining the expectation value of the gauge theory energy-momentum tensor, which in the GKPW recipe is dual to the bulk gravitational field produced by the fluctuating string. It would be extremely interesting to carry out this calculation explicitly, for it is not at all obvious how (or even if) the $`AdS`$ description of waves in the boundary theory incorporates energy conservation. In this connection, we should also remark that our analysis neglects the back-reaction on the string due to the supergravity fields. We will return to these issues in Section 5.
Before closing this section, we note that our external charges are by construction infinitely massive, and consequently immune to the SYM field configuration they help to produce. It is possible to consider instead sources with finite mass, represented by strings which terminate not at the boundary, but on a solitary D3-brane placed at $`z_b>0`$. In that case one is really studying an $`SU(N+1)`$ gauge theory, broken spontaneously to $`SU(N)\times U(1)`$ by a Higgs vacuum expectation value $`R^2/z_b\alpha ^{\prime }`$ .
## 4 Gauge Fields of Heavy Quark Mesons
We now extend the analysis to the general case $`z_m<\infty `$, where the string bends along the geodesic (5). Both of its endpoints reach the boundary, so this configuration describes a quark-antiquark pair (see Fig. 1). Notice that now the parametrization $`\vec{X}(z,t)`$ has the disadvantage of being double-valued: for each value of $`z`$ there are in fact two points on the string, one on the left and one on the right. When necessary, we will account for this by means of a discrete subindex: $`\vec{X}_{L,R}(z,t)`$. The need for this awkward notation is compensated by the simple form of the differential equation (7).
The expansion of the Nambu-Goto integrand now yields
$$\sqrt{-g_E}\simeq \frac{R^2}{z^2}\left\{\mathrm{\Delta }+\frac{1}{2\mathrm{\Delta }}\left[\left(\partial _z\vec{Y}\right)^2-\mathrm{\Delta }^2\left(\partial _t\vec{Y}\right)^2\right]\right\},\qquad \mathrm{\Delta }=\frac{z_m^2}{\sqrt{z_m^4-z^4}}.$$
(27)
The dilaton (16) is again a sum of static and fluctuating components.
It is interesting to determine the gauge field profile due to the static bent string. Inserting the first term of (27) into (16), changing variables $`t^{}v`$, and integrating by parts with respect to $`v`$ one obtains
$`\stackrel{~}{\varphi }_s(\vec{x},z)`$ $`=`$ $`{\displaystyle \frac{R^2z_m^2z}{16\pi ^3\alpha ^{\prime }}\int \frac{dz^{\prime }}{z^{\prime }\sqrt{z_m^4-z^{\prime 4}}}I},`$
$`I`$ $`=`$ $`{\displaystyle \int _0^\pi dv\frac{\mathrm{cos}2v}{\mathrm{sin}v}\frac{d}{dv}\left[\frac{1}{\sqrt{z^2+z^{\prime 2}+(\vec{x}-\vec{X}(z^{\prime }))^2-2zz^{\prime }\mathrm{cos}v}}\right]}.`$ (28)
Next, expand the integrand of $`I`$ in powers of $`2zz^{\prime }\mathrm{cos}v/[z^2+z^{\prime 2}+(\vec{x}-\vec{X}(z^{\prime }))^2]`$, and retain only the leading order term, to find
$$I=\frac{15\pi (zz^{\prime })^3}{8\left[z^2+z^{\prime 2}+\left(\vec{x}-\vec{X}(z^{\prime })\right)^2\right]^{7/2}}.$$
(29)
Use of this in (28) leads to
$`\stackrel{~}{\varphi }_s`$ $`=`$ $`{\displaystyle \frac{15R^2z_m^2z^4}{128\pi ^2\alpha ^{\prime }}\int _0^{z_m}\frac{dz^{\prime }z^{\prime 2}}{\sqrt{z_m^4-z^{\prime 4}}}}`$ (30)
$`\times \left\{{\displaystyle \frac{1}{\left[z^2+\left(\vec{x}-\vec{X}_L(z^{\prime })\right)^2\right]^{7/2}}+\frac{1}{\left[z^2+\left(\vec{x}-\vec{X}_R(z^{\prime })\right)^2\right]^{7/2}}}\right\},`$
where we have explicitly indicated the contribution from both halves of the string. It is convenient to place the center of the string at the origin, $`\vec{X}(z_m)=0`$, so that $`X_L(z)=-X_R(z)`$, as depicted in Fig. 1.
We wish to extract the leading term in (30) for $`|\vec{x}|\gg L`$, which by (6) implies that $`|\vec{x}|\gg z_m`$ as well. We find
$$\stackrel{~}{\varphi }_s=\frac{15R^2z_m^2z^4L}{128\pi ^2\alpha ^{\prime }|\vec{x}|^7}.$$
(31)
The SYM expectation value then follows from (17). We can express the result in terms of quantities in the boundary theory, using (6) and $`R^2/\alpha ^{\prime }=\sqrt{2g_{YM}^2N}`$:
$$\langle 𝒪_{F^2}\rangle _s=\frac{15\mathrm{\Gamma }(1/4)^4\sqrt{2}}{8(2\pi )^5}\frac{L^3\sqrt{g_{YM}^2N}}{|\vec{x}|^7}.$$
(32)
Notice the peculiar dependence on $`L`$ and $`|\vec{x}|`$ and the fact that the result is isotropic. This is not what one would expect from a static electric dipole field in a linear gauge theory, but there is nothing obviously inconsistent about it for strongly coupled $`𝒩=4`$ SYM. The above result must be regarded as a prediction of the bulk-boundary correspondence for which we have at present no independent test.
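The $`L^3/|\vec{x}|^7`$ behavior can be checked numerically from the reconstructed Eq. (30). The sketch below works in units where $`15R^2/128\pi ^2\alpha ^{\prime }=1`$ and evaluates the $`z^4`$ coefficient on the dipole axis; the observation distances are arbitrary choices of ours.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

z_m = 1.0

def X_R(zp):
    # Distance of the right half of the string from the center, Eq. (5).
    return quad(lambda s: s**2 / np.sqrt(z_m**4 - s**4), zp, z_m)[0]

def coeff_z4(x):
    # z^4 coefficient of Eq. (30) at boundary distance x on the axis,
    # using X_L = -X_R.
    integrand = lambda zp: zp**2 / np.sqrt(z_m**4 - zp**4) * (
        np.abs(x - X_R(zp))**(-7) + np.abs(x + X_R(zp))**(-7))
    return z_m**2 * quad(integrand, 0.0, z_m)[0]

L = z_m * (2.0 * np.pi)**1.5 / gamma(0.25)**2   # Eq. (6)
for x in (10.0, 20.0, 40.0):
    print(x, coeff_z4(x) * x**7, z_m**2 * L)    # approaches z_m^2 L, cf. Eq. (31)
```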
We now examine the contribution from the fluctuating part. Equation (7) can be solved by Fourier transformation. The general solution is
$`\vec{Y}_{L,R}(z,t)`$ $`=`$ $`{\displaystyle \int d\omega \sqrt{1+\omega ^2z^2}\left\{\vec{A}(\omega )e^{-i\omega (t-𝒵_{L,R})}+\vec{B}(\omega )e^{-i\omega (t+𝒵_{L,R})}\right\}},`$
$`𝒵_L(z,\omega )`$ $`=`$ $`\sqrt{(\omega z_m)^4-1}{\displaystyle \int _{z_0}^z\frac{(s/z_m)^2ds}{\left(1+\omega ^2s^2\right)\sqrt{1-(s/z_m)^4}}},`$ (33)
$`𝒵_R(z,\omega )`$ $`=`$ $`𝒵_L(z_m,\omega )+\sqrt{(\omega z_m)^4-1}{\displaystyle \int _z^{z_m}\frac{(s/z_m)^2ds}{\left(1+\omega ^2s^2\right)\sqrt{1-(s/z_m)^4}}}.`$
Notice that component waves with $`\omega z_m<1`$ are exponentially damped, reflecting a frequency cutoff imposed by the finite size of the string. The oscillations on the left and right halves of the string are related by the requirement that the solution be smooth at the midpoint, $`z=z_m`$. The coefficients $`\vec{A}`$ and $`\vec{B}`$ are determined by enforcing boundary conditions at the string endpoints, $`\vec{Y}_{L,R}(z_0,t)=\vec{y}_{L,R}(t)`$:
$$\vec{A}(\omega )=\frac{\vec{y}_L(\omega )-\mathrm{\Phi }\vec{y}_R(\omega )}{1-\mathrm{\Phi }^2},\qquad \vec{B}(\omega )=\frac{\vec{y}_L(\omega )-\mathrm{\Phi }^{*}\vec{y}_R(\omega )}{1-\mathrm{\Phi }^{*2}},\qquad \mathrm{\Phi }(\omega )=e^{i2\omega 𝒵_L(z_m,\omega )}.$$
(34)
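Concretely, the phase $`\mathrm{\Phi }`$ and the coefficients of Eq. (34) can be evaluated by direct quadrature. In the sketch below the symmetric form of $`\vec{B}`$ follows our reconstruction of Eq. (34), and all parameter values are illustrative.

```python
import numpy as np
from scipy.integrate import quad

def Z_m(omega, z_m=1.0):
    # Z_L(z_m, omega) of Eq. (33), with the cutoff z_0 -> 0.
    pref = np.sqrt(complex((omega * z_m)**4 - 1.0))
    integ, _ = quad(lambda s: (s / z_m)**2 /
                    ((1.0 + omega**2 * s**2) * np.sqrt(1.0 - (s / z_m)**4)),
                    0.0, z_m)
    return pref * integ

def coefficients(yL, yR, omega, z_m=1.0):
    # A(omega), B(omega) of Eq. (34) for given endpoint amplitudes;
    # B as in our reconstruction.
    Phi = np.exp(2j * omega * Z_m(omega, z_m))
    A = (yL - Phi * yR) / (1.0 - Phi**2)
    B = (yL - np.conj(Phi) * yR) / (1.0 - np.conj(Phi)**2)
    return A, B

# Shake only the left endpoint, at a frequency above the omega z_m = 1 cutoff:
A, B = coefficients(1.0, 0.0, 3.0)
print(A, B, A + B)   # A + B reproduces y_L at the boundary
```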
For any given choice of boundary conditions, the string endpoints trace out definite Wilson lines $`\vec{y}_{L,R}(t)`$ in the gauge theory. The solution (33) can be used in (16) and then (17) to determine the corresponding SYM expectation value. Rather than working out the details of such a calculation, which would not be particularly enlightening, we will point out some interesting general features of the resulting field configurations.
First, it is evident that the SYM waves display a phase delay analogous to the one found for the straight string, although the details are different. To understand this in some detail, imagine that at $`t=0`$ a pulse is sent along the string by shaking its left end, which we now take to be located at $`\vec{x}=0`$. The induced metric on the bent string is
$$g_{ab}d\sigma ^ad\sigma ^b=\frac{R^2}{z^2}\left[-dt^2+\frac{dz^2}{1-(z/z_m)^4}\right],$$
(35)
so the pulse, following a null trajectory, takes a time
$$\mathrm{\Delta }t_1(z^{\prime })=\int _0^{z^{\prime }}\frac{dz}{\sqrt{1-(z/z_m)^4}}\quad \text{or}\quad \left(\int _0^{z_m}+\int _{z^{\prime }}^{z_m}\right)\frac{dz}{\sqrt{1-(z/z_m)^4}}$$
(36)
to reach the point $`z^{\prime }`$ on the left or right half of the string. In particular, it requires a time
$$T=z_m\frac{\mathrm{\Gamma }(1/4)^2}{2\sqrt{2\pi }}$$
(37)
to traverse the entire string and arrive at the right endpoint, where for the time being we assume that it is completely absorbed.
As seen in Fig. 3, a dilaton wave travels from $`z^{\prime }`$ down to a boundary point $`\vec{x}`$ in an additional time $`\mathrm{\Delta }t_2(z^{\prime },\vec{x})=\sqrt{z^{\prime 2}+\left(\vec{x}-\vec{X}_s(z^{\prime })\right)^2}`$. As a result, the radiation arriving at $`\vec{x}`$ has a component with phase lag $`\mathrm{\Delta }t_1(z^{\prime })+\mathrm{\Delta }t_2(z^{\prime },\vec{x})`$ for each point $`z^{\prime }`$ on the string. The net effect is that the SYM observer detects a significantly broadened pulse, whose leading and trailing edges arrive at times $`t_f=|\vec{x}|`$ and $`t_b=T+|\vec{x}-L\widehat{x}_1|`$, respectively.
The situation is thus similar to the one encoded in (26), in that an oscillating source would ultimately give rise to a superposition of gauge theory waves traveling at different speeds $`v\le 1`$. A complicating feature of this situation as compared to the case of an isolated quark is that, because the string now extends only a finite distance into $`AdS`$ space, a disturbance on the string can only propagate for a finite time before running into the boundary. As has been observed by others , the time (37) for a disturbance to propagate from one end of the string to the other corresponds, from the gauge theory point of view, to a subluminal mean speed of propagation of influence $`v=L/T\approx 0.457`$. Note, however, that this is not the generic speed of propagation of disturbances in the gauge theory: the whole point of our analysis was to show that disturbances in the expectation value of $`𝒪_{F^2}`$ propagate away from their source in the quark-antiquark system in a conventionally causal fashion: the leading signal arrives on a direct path at the speed of light, followed by indirect signals that arrive later.
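The two elliptic-type integrals behind Eqs. (6) and (37) are easy to check numerically; the snippet below confirms the closed forms and the mean speed quoted above.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

z_m = 1.0
# Round trip along the string, Eqs. (36)-(37):
T_num = 2.0 * quad(lambda z: 1.0 / np.sqrt(1.0 - (z / z_m)**4), 0.0, z_m)[0]
T_closed = z_m * gamma(0.25)**2 / (2.0 * np.sqrt(2.0 * np.pi))
# Endpoint separation, Eq. (6):
L = z_m * (2.0 * np.pi)**1.5 / gamma(0.25)**2

print(T_num, T_closed)   # agree to quadrature accuracy
print(L / T_closed)      # mean speed of influence, ~ 0.457
```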
As an aside, we remark that disturbances propagating on the strings have bizarre features from the point of view of the boundary gauge theory. For instance, triple-string configurations (describing for example a quark-monopole-dyon system ) can be arranged where a signal originating from one charge would give rise to a disturbance which would run along the strings and arrive first at the more distant (from the boundary theory perspective) of the two other charges . This would not violate causality, strictly understood, but is certainly strange. To repeat, the key point is that an oscillation on the string does not translate *directly* into a wave in the boundary theory. The correct prescription brings the bulk supergravity fields into play, and unequivocally predicts causal SYM propagation, with propagation velocities up to the speed of light.
## 5 Implications for Gauge Theory Dynamics
In this paper, we have been exploring a picture, derived from Maldacena’s AdS/CFT duality conjecture, of the generation and propagation of disturbances in the $`D=3+1`$ $`𝒩=4`$ $`SU(N)`$ super-Yang-Mills (SYM) gauge theory. In this picture, external sources are described by type IIB strings running from the boundary into the bulk of $`AdS`$ space; fluctuations in the position of the external sources generate waves on the strings; the string waves generate propagating disturbances in the supergravity fields in $`AdS`$ space; finally, the fluctuating boundary values of these fields are converted, via the GKPW recipe, into fluctuating expectation values of operators in the gauge theory. Throughout the discussion, we have assumed that the string disturbances propagate according to simple Nambu-Goto dynamics and have treated them as known linear sources for the supergravity fields. In particular, we have not worried about back-reaction of the supergravity fields on disturbances propagating on the string. On the face of it, this seems reasonable because Maldacena’s conjecture includes taking the limit of weak supergravity coupling. On the other hand, as we will now discuss, this collection of assumptions leads to some surprising, perhaps paradoxical, conclusions about the behavior of the gauge theory that are worth pointing out.
A somewhat perplexing feature of the time-dependent field that can be gleaned from Fig. 3 is that dilaton wavefronts emitted from a point $`z^{}`$ on the string describing a quark-antiquark system, give rise to spherical waves in the gauge theory which seem to emanate from neither the quark nor the antiquark, but from the point $`\stackrel{}{X}_s(z^{})`$ on the line between them. Imagine an observer situated halfway between the quark and antiquark: if the quark is shaken to produce a pulse on the string, the observer sees disturbances coming first from the direction of the quark and then (after a time $`T/2`$) from the opposite direction! Though odd, this feature is in principle consistent with the non-linear character of strongly-coupled SYM: the external sources give rise to propagating disturbances, which propagate through, and cause to reradiate, the background gauge field configuration originally set up by the source. This sort of thing would happen in any strongly-coupled theory; what is surprising is the geometrical structure that is inherited from the $`AdS`$ string.
A more profound set of issues arises from the fact that a disturbance travels from one end of the quark-antiquark string to the other in a finite time (37), forcing us to consider how the string disturbance reflects from the boundary if we wish to account for radiation generated at later times. Since the external sources can be taken to be as massive as we like (by letting $`z_0\to 0`$), it seems reasonable to assume that the fluctuating string should be subject to fixed or Dirichlet boundary conditions<sup>5</sup><sup>5</sup>5The boundary conditions appropriate for Wilson loops in the $`AdS`$/CFT correspondence have been discussed in . which reflect any incident disturbance back onto the string (with a change of sign). This would mean that a disturbance, however it was initially generated, would simply reflect back and forth between the quark and antiquark ends of the string without ever dying away. More precisely, the linearized string would have eigenstates of oscillation at a discrete set of frequencies $`\omega _n`$ running from some lower cutoff on up to infinity. In a WKB approximation, these frequencies would be determined by the requirement that the phase factor $`\mathrm{\Phi }`$ in (4) is real, i.e.
$$\omega _nz_m\sqrt{(\omega _nz_m)^4-1}\int _0^1\frac{d\sigma }{\left[1+(\omega _nz_m)^2\sigma ^2\right]\sqrt{1-\sigma ^2}}=\frac{n\pi }{2}.$$
(38)
These oscillations must represent excited states of the dipole field, with a mass gap between states scaled by the dipole separation $`L`$. These states are quite analogous to the infinite tower of mesons found in the large-$`N_c`$ limit of ordinary QCD (where the mass gap is set by the confinement scale). On the other hand, it is quite surprising to imagine finding an analogous set of states in a non-confining conformal gauge theory!
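Equation (38) is straightforward to solve numerically. The sketch below (illustrative only) brackets the first few roots; the substitution $`\sigma =\mathrm{sin}\vartheta `$ is used to remove the integrable endpoint singularity of the integrand.

```python
# Minimal numerical sketch of the WKB condition (38) for the oscillation
# frequencies, in units of 1/z_m (illustrative only).
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def lhs(x):
    """Left-hand side of (38) as a function of x = omega_n * z_m >= 1."""
    # sigma = sin(theta) removes the 1/sqrt(1 - sigma^2) endpoint singularity.
    integral, _ = quad(lambda th: 1.0 / (1.0 + (x * np.sin(th)) ** 2),
                       0.0, np.pi / 2)
    return x * np.sqrt(x ** 4 - 1.0) * integral

for n in range(1, 6):
    # lhs vanishes at x = 1 and increases monotonically, so bracket upward.
    xn = brentq(lambda x: lhs(x) - n * np.pi / 2.0, 1.0, 10.0 * (n + 1))
    print(f"n = {n}:  omega_n z_m = {xn:.4f}")
```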
At this point we are led back to the questions, first raised in the discussion of the isolated quark in Section 3, of radiation, energy conservation and back-reaction. It is important to realize that a complete treatment of the production of time-varying supergravity fields by a disturbance on the string must include back-reaction on the string disturbance. To the extent that the bulk field includes a net energy flux away from the string, the back-reaction should cause the disturbance to damp as it propagates. This process is essential to energy conservation in the supergravity picture. Let us now try to understand how this translates into energy conservation in the gauge theory— we will be led to a paradox.
Focus attention again on the infinite tower of excited states of the SYM dipole system, and consider the following question: are these excited states stable? To answer this question, we will pursue two possible lines of argument. On the one hand, within the supergravity framework we know that, once we take back-reaction into account, the resonances will have finite widths and the notion of resonance will only make sense if there is a limit in which the width becomes small compared to the gap between successive states. The rate at which string disturbances radiate is set by $`g_s`$, so if we take the usual $`AdS`$/CFT limit $`g_s\to 0`$ (with $`g_sN`$ fixed), the string will not radiate, and the excitations will be completely stable. This can be seen explicitly in (10): the source term in the dilaton equation of motion vanishes as $`g_s\to 0`$ with $`g_sN`$ fixed. We are thus led to conclude that in the $`N\to \mathrm{\infty }`$ limit (with the ‘t Hooft coupling $`g_{YM}^2N`$ fixed) there exists in the dual gauge theory an infinite tower of stable (i.e., non-radiating) excited states of the gauge field set up by an infinitely massive quark-antiquark dipole. As we have already pointed out, this would be analogous to what happens in conventional QCD in the large-$`N`$ limit: in the leading approximation, there is a tower of stable states in every sector of the theory (meson, baryon, quarkonium, $`\mathrm{\dots }`$); beyond leading order, these states acquire finite widths proportional to some power of $`1/N`$. It would be most remarkable if the same structure of states survived the passage from confining QCD to non-confining $`𝒩=4`$ SYM (with the confinement scale replaced by a variable geometric scale set by the ‘size’ of the configuration).
On the other hand, the central point of this paper is that the GKPW recipe translates a disturbance propagating on the string into waves in the gauge theory. At the end of the calculations one obtains SYM expectation values (Eq. (3), for instance) which depend on $`g_{YM}`$ only through the ‘t Hooft coupling $`g_{YM}^2N`$, and consequently *do not vanish*<sup>6</sup><sup>6</sup>6The reason for this can be seen in (17): the gauge theory expectation value is extracted directly not from the dilaton $`\varphi `$ (which vanishes as $`g_s\to 0`$), but from the rescaled field $`\stackrel{~}{\varphi }\equiv \varphi /g_s^2`$. when $`g_s\to 0`$. Now, as discussed in Section 3, the result obtained in (3) is a near-field contribution (it involves time-dependent fields which depend on the velocity, but not the acceleration, of the sources), and so does not unambiguously indicate the presence of SYM radiation. Nonetheless, given that the ten-dimensional static, near-, and radiation fields all come in at the same order in $`g_s`$ (they differ only by their dependence on $`|\stackrel{}{x}|`$), it is natural to expect that a computation of the energy-momentum tensor would reveal a net energy flux away from the external sources, signaling the presence of true radiation. On the face of it, this seems to apply just as much to the solitary oscillating quark as to the quark-antiquark excited states.
We have thus been led to a paradox: if the gauge theory is to conserve energy, a radiating dipole field cannot possibly be stable. To restate the problem in slightly different words, imagine that the quark in the quark-antiquark system is shaken abruptly to produce a pulse running along the string, and the external charges are held fixed at all other times. In this process, a definite amount of energy is added to the system. In the $`g_s\to 0`$ limit, the pulse on the string will not decay, and so it will endlessly travel back and forth between the quark and antiquark. Through the mechanism analysed in detail in this paper, this disturbance will give rise to time-dependent SYM fields which remain finite as $`g_s\to 0`$. If these fields include true radiation (as seems reasonable to expect), they continuously carry energy away from the dipole system, violating energy conservation in the gauge theory. We should remark that, even though the paradox is most evident in the context of the quark-antiquark system, the question of energy conservation must also be addressed in the case of the solitary quark. In that instance, the existence of SYM radiation would not in itself be paradoxical, but it is certainly far from obvious that the infinitely broadened pulse which propagates in the gauge theory after the external charge is shaken properly incorporates energy conservation.
What are we to make of this? In a sense, it is not surprising that we have encountered a problem: given the holographic character of the $`AdS`$/CFT correspondence, the interplay between energy conservation in the bulk and on the boundary is bound to be a delicate issue. Notice that the problem would disappear if our assumption regarding the presence of radiation turned out to be erroneous. Since we have not seen direct evidence for the existence of gauge theory radiation in the $`g_s0`$ limit, we must bear in mind the possibility that the explicit determination of the energy-momentum tensor will show that there is no net energy flux away from the dipole system. This would undoubtedly be a surprising result. We will leave for future work the more thorough analysis required to reach a definitive conclusion on this important issue.
## 6 Gauge Fields of Heavy Quark Baryons
In the preceding two sections, we studied the gauge fields of a color-neutral heavy quark-antiquark pair. Among other interesting things, we found in the static case that the $`𝒪_{F^2}`$ operator expectation value falls off with distance like $`|\stackrel{}{x}|^{-7}`$ (as compared to the $`|\stackrel{}{x}|^{-4}`$ falloff of the same quantity around an isolated color fundamental quark). To assess how general this result is, we will now study the state of the gauge field around a color-neutral collection of $`N`$ quarks: the baryon of this gauge theory.
A baryon in $`𝒩=4`$ SYM is dual to a fivebrane on which $`N`$ fundamental strings terminate . The precise description of this system was found in Ref. (see also ) through a study of the fivebrane worldvolume action. In this approach the strings are faithfully represented by a specific deformation of the flux-carrying fivebrane, in accord with the Born-Infeld string philosophy . The explicit fivebrane embedding that corresponds to a baryon was found to be
$$r(\theta )=\frac{r_0}{\mathrm{sin}\theta }\left[\frac{3}{2}\left(\theta -\mathrm{sin}\theta \mathrm{cos}\theta \right)\right]^{1/3},$$
(39)
where $`r=R^2/z`$, $`\theta `$ is the $`𝐒^5`$ polar angle, and $`r_0=r(\theta =0)`$ is a modulus of the configuration. Since the fivebrane is just as much a source of the dilaton as is the string, we may use the logic of the earlier part of the paper to infer the gauge theory expectation value of $`𝒪_{F^2}`$ in the presence of a baryonic collection of heavy quarks. The interesting question is whether this approach yields the same scaling with $`N`$ and $`|\stackrel{}{x}|`$ as would the description of the baryon as a collection of quark strings.
Following , the fivebrane action for an embedding of the above type in the presence of a nontrivial dilaton field can be seen to read (in the Einstein frame)
$$S_{D5}=T_5\mathrm{\Omega }_4R^4\int dt\,d\theta \,\mathrm{sin}^4\theta \left\{\sqrt{e^\varphi \left[r^2+r^{\prime 2}\right]-E^2}+4A_0\right\},$$
(40)
from which $`E=F_{0\theta }`$ may be eliminated in favor of the displacement field
$$D=\frac{\mathrm{sin}^4\theta E}{\sqrt{r^2+r^{\prime 2}-E^2}},$$
(41)
which is known explicitly as a function of $`\theta `$:
$$D(\theta )=\frac{3}{2}\left(\mathrm{sin}\theta \mathrm{cos}\theta -\theta \right)+\mathrm{sin}^3\theta \mathrm{cos}\theta .$$
(42)
After this replacement, Eq. (40) implies a linearized dilaton source term which can be written in the form
$$S_{D5\varphi }=\frac{N}{3\pi ^2\alpha ^{}}\int dt\,d\theta \,\varphi \sqrt{r^2+r^{\prime 2}}\sqrt{D^2+\mathrm{sin}^8\theta },$$
(43)
where we have made use of the relation $`T_5\mathrm{\Omega }_4R^4=N/3\pi ^2\alpha ^{}`$.
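As a quick consistency check (a sketch assuming the standard type IIB values $`T_5=1/(2\pi )^5g_s\alpha ^{\prime 3}`$, $`\mathrm{\Omega }_4=8\pi ^2/3`$ for the unit four-sphere, and $`R^4=4\pi g_sN\alpha ^{\prime 2}`$, none of which are spelled out above), one finds
$$T_5\mathrm{\Omega }_4R^4=\frac{1}{(2\pi )^5g_s\alpha ^{\prime 3}}\cdot \frac{8\pi ^2}{3}\cdot 4\pi g_sN\alpha ^{\prime 2}=\frac{N}{3\pi ^2\alpha ^{\prime }},$$
as claimed.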
The embeddings of interest satisfy a BPS condition , which can be used to eliminate $`r^{}`$ in favor of $`r`$, yielding
$$S_{D5\varphi }=\frac{N}{3\pi ^2\alpha ^{}}\int dt\,d\theta \,r(\theta )\left(\frac{D^2+\mathrm{sin}^8\theta }{\mathrm{sin}^4\theta \mathrm{cos}\theta -D\mathrm{sin}\theta }\right)\varphi =\frac{N}{3\pi ^2\alpha ^{}}\int dt\,dz\,f(z)\varphi .$$
(44)
To make contact with the discussion of the present paper, in the second step we have reparametrized the fivebrane by the Poincaré radial coordinate $`z=R^2/r`$, implicitly defining the function $`f(z)`$.
It is important to note at this point that, unlike the string configurations discussed in previous sections, which point along a fixed direction on the five-sphere, the fivebrane has a non-trivial $`𝐒^5`$ dependence. At each value of $`z`$ it lies at a different polar angle $`\theta =\theta (z)`$ determined by (39), and it is wrapped isotropically over the remaining $`𝐒^4`$. The operator $`𝒪_{F^2}`$ couples to the massless Kaluza-Klein mode of the dilaton, so the source term given by (44) must be projected onto its spherically symmetric component. In the case of the strings discussed in the preceding sections, this projection would simply multiply the source term by a numerical constant. For the fivebrane, however, it introduces an additional factor of $`\mathrm{sin}^4\theta (z)`$. The resulting source for the massless $`AdS_5`$ dilaton is
$$J(x)=\frac{2\kappa ^2}{3\pi ^2\alpha ^{}\mathrm{\Omega }_5R^5}f(z)\mathrm{sin}^4\theta (z)\delta \left(\stackrel{}{x}\right).$$
(45)
It follows that the (rescaled) dilaton field is now given by
$$\stackrel{~}{\varphi }(x)=\frac{N}{12\pi ^4\alpha ^{}}\int dt^{\prime }\int dz^{\prime }f(z^{\prime })\frac{\mathrm{sin}^4\theta (z^{\prime })}{\mathrm{sin}v}\frac{d}{dv}\left[\frac{\mathrm{cos}2v}{\mathrm{sin}v}\theta (1-|\mathrm{cos}v|)\right]$$
(46)
(where the invariant distance $`v`$ is given in (11)). Through a familiar set of steps, one can extract the leading behavior of $`\stackrel{~}{\varphi }`$ in the neighborhood of the $`z=0`$ boundary of $`AdS_5`$:
$$\stackrel{~}{\varphi }=\frac{5Nz^4}{4(2\pi )^3\alpha ^{}}\int _0^{z_m}dz^{\prime }\frac{z^{\prime 4}f(z^{\prime })\mathrm{sin}^4\theta (z^{\prime })}{\left[z^{\prime 2}+|\stackrel{}{x}|^2\right]^{7/2}},$$
(47)
where $`z_m=R^2/r_0`$ is the maximum value of $`z`$ to which the fivebrane extends.
To obtain information from (47) it is convenient to return to the initial angular parametrization:
$$\stackrel{~}{\varphi }=\frac{5NR^8z^4}{4(2\pi )^3\alpha ^{}}\int _0^\pi \frac{\mathrm{sin}^4\theta \,d\theta }{r(\theta )^3\left[\frac{R^4}{r(\theta )^2}+|\stackrel{}{x}|^2\right]^{7/2}}\left(\frac{D^2+\mathrm{sin}^8\theta }{\mathrm{sin}^4\theta \mathrm{cos}\theta -D\mathrm{sin}\theta }\right),$$
(48)
where the embedding $`r(\theta )`$ is given by (39). The complete field profile of the baryon then follows from (17). From the way $`R^4/r^2=z^2`$ appears in the denominator of (47) and (48) it is clear that the dilaton field (and consequently the SYM field profile) will have qualitatively different behavior in the regions $`|\stackrel{}{x}|>z_m`$ and $`|\stackrel{}{x}|<z_m`$, so the modulus $`z_m`$ in fact determines the ‘size’ of the baryon, as expected from the UV/IR connection (see e.g. the discussion in ).
For $`|\stackrel{}{x}|\gg z_m`$, the leading term in (48) is
$$\stackrel{~}{\varphi }=\frac{5NR^2z_m^3z^4}{9(2\pi )^3\alpha ^{}|\stackrel{}{x}|^7}\int _0^\pi d\theta \frac{\mathrm{sin}^6\theta }{\left[\theta -\mathrm{sin}\theta \mathrm{cos}\theta \right]^2}(D^2+\mathrm{sin}^8\theta ).$$
(49)
Letting $`c\simeq 2.40`$ denote the result of the angular integration and employing (17), we find that the $`𝒪_{F^2}`$ expectation value at large distance from the baryon is
$$\langle 𝒪_{F^2}\rangle =\frac{5c\sqrt{2}}{18\pi ^3}\frac{z_m^3N\sqrt{g_{YM}^2N}}{|\stackrel{}{x}|^7}.$$
(50)
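The coefficient quoted above is easy to verify numerically; the following is an illustrative sketch that evaluates the angular integral in (49), with $`D(\theta )`$ taken from (42):

```python
# Illustrative numerical evaluation of the angular integral in Eq. (49),
# using D(theta) from Eq. (42); it should reproduce c ~ 2.40.
import numpy as np
from scipy.integrate import quad

def D(th):
    return 1.5 * (np.sin(th) * np.cos(th) - th) + np.sin(th) ** 3 * np.cos(th)

def integrand(th):
    return (np.sin(th) ** 6 / (th - np.sin(th) * np.cos(th)) ** 2
            * (D(th) ** 2 + np.sin(th) ** 8))

c, err = quad(integrand, 0.0, np.pi)
print(f"c = {c:.3f}  (estimated quadrature error {err:.1e})")
```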
Notice that the dependence on $`|\stackrel{}{x}|`$ and the scale size of the configuration ($`z_m`$) is exactly the same as that found for the ‘meson’, Eq. (32). This is probably a generic feature of color-neutral objects in the $`𝒩=4`$ SYM gauge theory. From the string theory perspective the common origin of this behavior is clear: unlike the quark, the meson and the baryon are represented by brane objects which do not extend all the way to the horizon at $`z=\mathrm{\infty }`$.
A significant difference between (32) and (50) is that the latter includes an additional power of $`N`$. This is precisely as it should be,<sup>7</sup><sup>7</sup>7We thank Igor Klebanov for a discussion on this point. since $`TrF^2/4g_{YM}^2`$ should scale with $`N`$ in the same way as the energy-momentum tensor: at fixed $`g_{YM}^2N`$ it should be $`O(1)`$ for a meson, and $`O(N)`$ for an $`SU(N)`$ baryon .
## 7 Conclusions
We have examined the correspondence between external charges in $`𝒩=4`$ SYM and strings in $`AdS`$ space. Our principal focus was the connection between string oscillations and gauge theory waves. Specifically, by studying the bulk radiation given off by an undulating string, we determined the time-dependent fields produced by an oscillating quark or a quark-antiquark pair in the strongly-coupled theory. The picture that emerges is one in which the waves are in fact generated not only by the external sources, but also by the non-linear medium supplied by the static background field of the same sources. This is in agreement with our qualitative expectations for strongly-coupled non-Abelian gauge theory. The same considerations also suggest the existence of an infinite tower of excitations in the quark-antiquark system in the extreme Maldacena limit. The status of these excitations is uncertain, pending the resolution of some puzzles regarding energy conservation in the $`AdS`$ description of the SYM theory, a subject to which we hope to return.
As a side-result, we have determined the static fields produced by a quark-antiquark pair and also by the D-brane representative of the baryon. Both color-neutral systems were found to display the same long-distance behavior, and to have operator expectation values which fall off more rapidly with distance than those of the isolated quark.
Our results provide yet another example of the remarkable way in which the bulk-boundary correspondence manages to relate intricate aspects of the dynamics of strongly-coupled gauge theories to properties of string theory in $`AdS`$ space. At the same time, we have stressed the need for further work to unravel the precise way in which SYM energy conservation manifests itself in the dual holographic description.
## Acknowledgements
We are grateful to Igor Klebanov for helpful discussions. AG would also like to thank Shiraz Minwalla, Øyvind Tafjord, and Mark Van Raamsdonk for useful conversations. This work was supported in part by US Department of Energy grant DE-FG02-91ER40671 and by National Science Foundation grant PHY98-02484. AG is additionally supported by the National Science and Technology Council of Mexico (CONACYT).
# On the anomalous X–ray afterglows of GRB 970508 and GRB 970828
## 1 Introduction
The discovery of afterglows from gamma ray bursts has greatly strengthened our confidence in the correctness of the fireball model (Rees and Mészáros 1992). Since then, attention has begun to shift toward the nature of the exploding source, a problem which is conveniently decoupled from the fireball itself and the ensuing afterglow. For this reason, evidence about the nature of the source has to be sought elsewhere. In particular, attention has been called to the possible interaction of the burst with surrounding material, and the possible generation of a detectable Fe line in the soft X–rays (Perna and Loeb, 1998, Boettcher et al., 1999, Ghisellini et al., 1999, Mészáros and Rees 1998).
Recently, a reburst, i.e., a resurgence of emission during the afterglow has been reported in two bursts, GRB 970508 (Piro et al., 1998), and GRB 970828 (Yoshida et al., 1999). In the case of GRB 970508, the reburst occurs about $`10^5s`$ after the burst, with the soft X–ray flux clearly rising, and departing from its otherwise typical power–law decline. This resurgence lasts a total of $`4\times 10^5s`$, reaches a typical flux in the BeppoSAX band of $`8\times 10^{-13}\mathrm{erg}\,\mathrm{s}^{-1}\,\mathrm{cm}^{-2}`$, after subtraction of the normal afterglow, and shows evidence for a harder spectrum than during the afterglow proper (power law photon index of $`\alpha =0.4\pm 0.6`$, as opposed to $`\alpha =1.5\pm 0.6`$ before the reburst, and $`\alpha =2.2\pm 0.7`$ at the end of the reburst), (Piro et al., 1998, 1999).
Furthermore, possible evidence for the existence of Fe K-shell emission lines has been found in these same two bursts: for GRB 970508 see Piro et al., 1999, while for GRB 970828 see Yoshida et al. 1999. In the first case, a $`K_\alpha `$ iron line occurs at an energy compatible with the burst’s optically determined redshift, while in the second one, for which no independent redshift determination exists, the line, if interpreted as $`K_\alpha `$ from neutral, or weakly ionized iron, yields a redshift of $`z=0.33`$. What is astonishing are the inferred line fluxes and equivalent widths: for GRB 970508, $`F=(2.8\pm 1.1)\times 10^{-13}\mathrm{erg}\,\mathrm{s}^{-1}\,\mathrm{cm}^{-2}`$ (EW $`\simeq 1.1keV`$), while for GRB 970828 $`F=(1.5\pm 0.8)\times 10^{-13}\mathrm{erg}\,\mathrm{s}^{-1}\,\mathrm{cm}^{-2}`$ (EW $`\simeq 3keV`$). In the case of GRB 970508, furthermore, no evidence for the Fe-line was found after about $`10^5s`$.
Despite their inferred intensities, these lines are at the limit of BeppoSAX and ASCA detectability, so that further observations are needed to confirm their presence. On the contrary the statistical significance of the rebursts is very robust. In the following, we shall concentrate on the especially well documented case of GRB 970508, keeping in mind that qualitatively similar arguments apply to GRB 970828 as well.
It is the aim of this paper to show that, if enough material of sufficiently high density is present in the surroundings of the gamma ray burst event site, then this reburst is exactly what one ought to expect on theoretical grounds. In particular, it is possible to explain all observed characteristics of the reburst, duration/flux level/spectral hardening, including the (possible) presence of the iron lines. In the next section we shall consider the dynamical interaction of the burst’s ejecta with the torus, and in the following one we shall discuss the thermodynamic state of the torus, and establish the properties of its (thermal) emission. In the discussion, it will also be pointed out that the thermodynamic status of the torus is precisely the same postulated by Lazzati et al. (1999) to explain the properties of the iron line.
## 2 Dynamical interaction with surrounding gas
Both Piro et al. (1999) and Lazzati et al. (1999) have already argued that the material giving rise to the Fe K-line cannot lie on the line of sight: the ensuing column depths in H and Fe would give effects easy to observe. Furthermore, this material should be present in large amounts which would spoil the smooth, power–law expansion of the afterglow, which is observed to cover more than a year. We thus begin by assuming that the site of the explosion is surrounded by a thick torus of matter, with an empty symmetry axis pointing roughly toward the observer. The particle density $`n`$ and distance $`R`$ from the explosion site will be scaled in units of $`10^{10}\mathrm{cm}^{-3}`$ and $`10^{16}cm`$.
A time $`R/c`$ after the explosion, this torus will be inundated by the burst proper, and a short while later ($`\delta t\simeq R/\gamma ^2c=30s`$, where $`\gamma =100`$ is the shell bulk Lorentz factor), it will be hit by the ejecta shell. This crash will generate a forward shock propagating into the torus, and a reverse one moving into the relativistic shell. For any reasonable value of the torus density, the forward shock will quickly rake up as much mass as there is in the shell: we find that this occurs after the shock has propagated a mere distance $`d`$, with
$$d=6\times 10^8cm\frac{E}{10^{51}erg}\frac{10^{10}\mathrm{cm}^{-3}}{n}\left(\frac{10^{16}cm}{R}\right)^2\frac{100}{\gamma }.$$
(1)
As is well–known, this means that the relativistic shell must slow down to sub–relativistic speeds. Thus, after just $`d/c\simeq 0.1s`$, the forward shock has become sub–relativistic. The large pressure behind the forward shock acts to steepen the reverse shock, which will thus slow down the incoming material to sub–relativistic speeds as well. All of this occurs a few seconds after the torus sees the burst.
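The mass sweep-up behind Eq. (1) can be checked with a few lines of arithmetic. The sketch below assumes the fiducial values quoted in the text (isotropic burst, $`E=10^{51}erg`$, $`\gamma =100`$, $`n=10^{10}\mathrm{cm}^{-3}`$, $`R=10^{16}cm`$):

```python
# Back-of-the-envelope check of Eq. (1): depth d at which the forward shock
# has swept up a torus mass equal to the shell mass E / (gamma c^2).
import numpy as np

E, gamma = 1e51, 100.0        # erg; shell bulk Lorentz factor
n, R = 1e10, 1e16             # torus density (cm^-3); torus distance (cm)
m_p, c = 1.67e-24, 3e10       # proton mass (g); speed of light (cm/s)

shell_mass = E / (gamma * c ** 2)                 # baryon mass of the ejecta
d = shell_mass / (4 * np.pi * R ** 2 * n * m_p)   # depth with equal swept-up mass
print(f"d ~ {d:.1e} cm")      # ~6e8 cm, as in Eq. (1), up to rounding
```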
The total energy released is expected to be of order of the whole kinetic energy of the shell, because post–shock acceleration of electrons occurs at the expense of the shell bulk expansion, in the shocks. If we suppose that the burst generated a total energy release of $`E=10^{51}erg`$, that the initial burst is roughly isotropic, and that the torus covers $`\delta \mathrm{\Omega }`$ radians as seen from the explosion site, the total energy release $`E_{sh}`$ will be
$$E_{sh}=\frac{\delta \mathrm{\Omega }}{4\pi }E.$$
(2)
The total emission timescale can also be reliably computed: the reader will have already noticed that this emission scenario is similar to the external shock scenario (Mészáros, Laguna and Rees 1993), except for two differences. First, in the external shock scenario we are seeing the burst from a reference frame which is moving with respect to the shell of shocked gas with large Lorentz factor, while here the observer is sitting in a reference frame in which the shocked gas is moving sub–relativistically. The major consequence of this first difference is that the photon emission will be isotropic, and we shall thus see it, even though the initial shell movement was perpendicular to the line of sight. The second difference is that, in the external shock scenario, it is matter ahead of the forward shock which is moving relativistically with respect to the shocked gas, while matter entering the reverse shock is moving only barely relativistically with respect to it. In this paper, instead, the opposite applies: matter entering the reverse shock is relativistic, while the forward shock is barely, if at all, relativistic.
Still, these two differences do not spoil the fact that electrons accelerated at either shock cool much faster than the shell light–crossing time, as shown by Mészáros, Laguna and Rees (1993), so that the total burst duration is given by the time the reverse shock takes to cross the whole shell. In our model, the shell thickness in the laboratory frame is $`R/\gamma `$ (Mészáros, Laguna and Rees 1993), and, since the reverse shock is relativistic with respect to the incoming matter, the shock crossing time, and thus also the duration $`t_{sec}`$ of the secondary burst, is given by
$$t_{sec}=\frac{R}{\gamma c}=3\times 10^3s\frac{R}{10^{16}cm}.$$
(3)
Together, the total energy release and emission timescale give us the expected bolometric luminosity; the observed flux can be computed, for cosmological parameters $`\mathrm{\Omega }=1`$, $`H_0=65\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}`$, and $`\mathrm{\Lambda }=0`$, and knowing the burst’s redshift $`z=0.835`$ (Metzger et al., 1997), and is
$$F_X=1.5\times 10^{-10}\mathrm{erg}\,\mathrm{s}^{-1}\,\mathrm{cm}^{-2}\frac{\delta \mathrm{\Omega }}{4\pi }\frac{E}{10^{51}erg}\frac{10^{16}cm}{R}.$$
(4)
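An illustrative numerical sketch of Eq. (4), assuming the Einstein–de Sitter luminosity distance appropriate to the quoted cosmology and $`\delta \mathrm{\Omega }/4\pi =1`$ (the result agrees with Eq. (4) to within rounding of the constants):

```python
# Illustrative check of Eq. (4): bolometric flux for E_sh = 10^51 erg
# radiated over t_sec, at z = 0.835 in an Einstein-de Sitter cosmology
# (Omega = 1, Lambda = 0, H0 = 65 km/s/Mpc, as assumed in the text).
import numpy as np

E_sh = 1e51                       # erg, taking delta_Omega / 4 pi = 1
z, H0 = 0.835, 65.0               # redshift; km/s/Mpc
c_kms, Mpc = 3.0e5, 3.086e24      # km/s; cm per Mpc
R, gamma, c = 1e16, 100.0, 3e10   # cm; bulk Lorentz factor; cm/s

t_sec = R / (gamma * c)                                     # Eq. (3): ~3e3 s
d_L = (2 * c_kms / H0) * (1 + z) * (1 - 1 / np.sqrt(1 + z)) * Mpc
F_X = E_sh / (4 * np.pi * d_L ** 2 * t_sec)
print(f"t_sec = {t_sec:.2e} s,  d_L = {d_L:.2e} cm,  F_X = {F_X:.1e} erg/s/cm^2")
```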
We must now establish in which band this emission will end up. As is well–known, bursts’ spectra are highly variable, both from burst to burst and within the same burst, at different moments. Also, the fireball model is not too specific about the spectral characteristics of bursts. We can still get an idea of the spectrum, however, by noticing first that the spectrum will be non–thermal, with the usual power law dependence upon photon energy typical of synchrotron emission, and second that once again we are observing a burst in the external shock scenario, but in the shell frame. In normal bursts, the spectrum has a break at an energy $`ϵ_b`$, which is approximately $`ϵ_b\simeq 1MeV`$. However, this spectral feature is blueshifted in the observer’s frame by the shell’s bulk Lorentz factor: $`ϵ_b=\gamma ϵ_i`$. The intrinsic spectral break $`ϵ_i`$, i.e., in the shell frame, is thus given by
$$ϵ_i=\frac{ϵ_b}{\gamma }=10keV\frac{ϵ_b}{1MeV}\frac{100}{\gamma }.$$
(5)
It is clear why this secondary burst was not observed. First of all, it is dimmer than the original one by a bolometric factor of $`\delta \mathrm{\Omega }/4\gamma \pi <10^{-2}`$, which would push it below detection threshold for both BATSE and the GRBM/WFC instruments of BeppoSAX. Also, it must have occurred sometime between the burst proper and the BeppoSAX detection of the iron line, when, however, BeppoSAX was not observing with its (more sensitive) Narrow Field Instruments.
The further evolution of the shocked shell is as follows. The material that passed through the reverse shock will have an internal energy density higher than the pre–shock one by a factor $`\mathrm{\Gamma }^2`$, where $`\mathrm{\Gamma }\simeq \gamma `$ is the Lorentz factor of the reverse shock, as seen by the pre–shocked ejecta shell. For reasonable radiative efficiencies, the post–shocked matter will have a relativistic velocity dispersion even after the secondary burst; then, a rarefaction wave will make it expand at the sound speed $`c/\sqrt{3}`$ back into the cavity from which it came. Thus pressure behind the forward shock will be reduced on a time–scale $`\delta R/c`$, where we can again take for the post–reverse shock shell thickness, as an order of magnitude, $`\delta R\simeq R/\gamma `$. Thus the heated gas expansion time–scale is again $`R/\gamma c\simeq 3\times 10^3s`$.
As the pressure from the post–reverse shock material is reduced, the forward shock keeps propagating because of momentum conservation. However, even this shock cannot last long, because of the strong counterpressure applied by the pre–shock torus. We shall show in the next section that this material will be brought up to $`T_f\simeq 10^8\mathrm{K}`$ by heating/cooling from the primary and secondary bursts. Then it can easily be checked that $`\rho _sc^2\simeq m_pnv^2`$, where $`\rho _s`$, the shell baryon density, is given by spreading the total fireball baryon mass, $`E/\gamma c^2`$, over the shell volume, $`4\pi R^3/\gamma `$, and the torus’ velocity dispersion $`v`$ is purely thermal: $`v^2=kT_f/m_p`$. Thus the torus counterpressure will halt the forward shock as soon as it becomes subrelativistic.
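An order-of-magnitude sketch of this pressure balance, with the fiducial values used throughout (the two sides agree to within a factor of order unity, as the approximate equality implies):

```python
# Order-of-magnitude sketch of the halting condition rho_s c^2 ~ m_p n v^2
# (i.e. n k_B T_f), with the fiducial values used in the text.
import numpy as np

E, gamma, c = 1e51, 100.0, 3e10   # erg; bulk Lorentz factor; cm/s
R, n, T_f = 1e16, 1e10, 1e8       # cm; cm^-3; K
m_p, k_B = 1.67e-24, 1.38e-16     # g; erg/K

rho_s = (E / (gamma * c ** 2)) / (4 * np.pi * R ** 3 / gamma)  # shell density
print(f"rho_s c^2 ~ {rho_s * c ** 2:.0f} erg/cm^3")   # ~80 erg/cm^3
print(f"n k_B T_f ~ {n * k_B * T_f:.0f} erg/cm^3")    # ~140 erg/cm^3: comparable
```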
We now make a small detour to discuss an interesting point about the kinematics. As seen from the observer, the part of the shell moving toward him will have moved a long distance ($`R`$, taking the torus to be perpendicular to the line of sight) toward him before the torus is reached by the burst, and thus starts emitting. At that point, photons start travelling away from the torus, and they will catch up with the part of the expanding matter shell moving toward the observer at a rate
$$\delta R=(c-v)\delta t$$
(6)
where $`v\simeq c(1-1/2\gamma ^2)`$ is the matter speed. However, the time appearing in the above equation is the time in the reference frame of the exploding object, which is related to that of the observer, $`t_o`$, by $`\delta t_o=\delta t(1-v/c)`$, and thus, the distance by which the photon catches up with the matter shell, in an observer’s time interval $`\delta t_o`$ is
$$\delta R=c\delta t_o$$
(7)
which is identical to the expression when relativistic effects are not present. This immediately allows us to estimate the distance of the torus: in fact, since the reburst was present in the observations made $`10^5s`$ after the burst, and this can only occur after the bursts’ photons have reached the torus, we deduce that $`R(1-\mathrm{cos}\theta )<3\times 10^{15}cm`$, where $`R`$ is the torus distance from the explosion site, and $`\theta `$ is the angle of the torus symmetry plane away from the line of sight. For the total distance, we shall take $`R\simeq 10^{16}cm`$.
## 3 Thermal history of the torus
In order to proceed, we need first to determine the torus thickness, which we do by using a constraint from the observations of the iron line. When the torus is reached by the burst proper, the ionization parameter is
$$\xi \equiv \frac{L}{nR^2}=10^9\frac{L}{10^{51}\mathrm{erg}\,\mathrm{s}^{-1}}\frac{10^{10}\mathrm{cm}^{-3}}{n}\left(\frac{10^{16}cm}{R}\right)^2.$$
(8)
For these large values, we expect that all iron will be completely ionized, so that the generation of the iron line by fluorescence is unlikely. Furthermore, the torus will be hit by the secondary burst only $`R/\gamma ^2c\simeq 30s`$ later: thus fluorescence with afterglow photons cannot be invoked either. The remaining mechanisms, multiple recombination/ionizations and thermal processes, both require the torus Thomson optical depth $`\tau _T\simeq 1`$ for maximum efficiency, and to avoid line smearing (fluorescence, instead, requires $`\tau _T\gg 1`$). In such a thin shell, the torus temperature is quickly brought up by the primary burst photons to a temperature close to its Inverse Compton value, given by $`4kT_{IC}=\overline{ϵ}`$, with $`\overline{ϵ}`$ the average burst photon energy. Taking this to be of order the break photon energy $`ϵ_b\simeq 1MeV`$, we find $`T\simeq \frac{ϵ_b}{4k}\simeq 3\times 10^9\mathrm{K}`$. However, at this temperature, pair creation will quickly give $`\tau _T\gg 1`$, and the ensuing thermal cooling will badly limit the temperature, to a value close to the pair creation limit,
$$T_{IC}\simeq 5\times 10^8\mathrm{K}.$$
(9)
At such large temperatures, the bremsstrahlung cooling time–scale is quite long $`t_{br}\simeq 5\times 10^5s(10^{10}\mathrm{cm}^{-3}/n)(T/5\times 10^8\mathrm{K})^{1/2}`$. However, the torus may cool due to Inverse Compton cooling off the photons produced by the crashing of the ejecta onto the torus, which have a typical photon energy $`ϵ_i`$ (Eq. 5) much below the torus temperature. For ease of reference, we shall call these secondary photons. The Inverse Compton cooling time–scale $`t_{IC}=3m_ec^2/8c\sigma _TU_{ph}`$ (where $`m_e`$ is the electron’s mass, and $`\sigma _T`$ the Thomson cross–section), can be computed using the fact that the photon energy density $`U_{ph}=L/cA`$, where $`L`$, the secondary photons’ luminosity, was given above as $`L=E\delta \mathrm{\Omega }/4\pi t_{sec}`$, and the total area is roughly twice the shock area, $`A\simeq 2R^2\delta \mathrm{\Omega }`$. We find thus $`U_{ph}=E\gamma /8\pi R^3`$, independent of the solid angle subtended by the torus. The ratio of the Inverse Compton cooling time to the duration of the secondary burst is then given by
$$\frac{t_{IC}}{t_{sec}}=\frac{3\pi m_ec^2R^2}{\sigma _TE}=1.3\left(\frac{R}{10^{16}cm}\right)^2\frac{10^{51}erg}{E}.$$
(10)
We see that this ratio is very sensitive to the torus location, and to the total energetics. For $`t_{IC}\gg t_{sec}`$, the torus matter will remain hot (Eq. 9), while for $`t_{IC}<t_{sec}`$, its temperature will cool to the new Inverse Compton temperature of the secondary photon bath:
$$T_{IC}^{(2)}\simeq \frac{ϵ_i}{4k}\simeq 3\times 10^7\mathrm{K}.$$
(11)
For the parameters assumed here, $`t_{IC}\simeq t_{sec}`$, so that the torus will probably settle to a value intermediate between $`T_{IC}^{(2)}`$ and $`T_{IC}`$. We scale the value of $`T`$ to $`T_f=10^8\mathrm{K}`$, but see the next section for a discussion.
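A compact numerical sketch of these estimates (cgs constants; the fiducial $`E`$, $`R`$ and photon energies are those quoted above, and small differences from the printed coefficients reflect rounding):

```python
# Quick numerical sketch of Eq. (10) and the two Compton temperatures
# (fiducial values; cgs constants).
import numpy as np

E, R = 1e51, 1e16                  # erg, cm
m_e_c2 = 8.19e-7                   # electron rest energy, erg
sigma_T = 6.65e-25                 # Thomson cross-section, cm^2
k_B = 1.38e-16                     # erg/K
eps_b, eps_i = 1.6e-6, 1.6e-8      # ~1 MeV and ~10 keV, in erg

ratio = 3 * np.pi * m_e_c2 * R ** 2 / (sigma_T * E)
print(f"t_IC / t_sec ~ {ratio:.2f}")            # ~1, cf. Eq. (10)
print(f"T_IC     ~ {eps_b / (4 * k_B):.1e} K")  # ~3e9 K, before pair limiting
print(f"T_IC^(2) ~ {eps_i / (4 * k_B):.1e} K")  # ~3e7 K, Eq. (11)
```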
The bremsstrahlung cooling time, at this lower temperature, is given by $`t_{br}\simeq 1.3\times 10^5s(10^{10}\mathrm{cm}^{-3}/n)`$, comparable to the total duration of the reburst observed by Piro et al., 1998. Also, the expected flux level is
$$F_{br}=1.1\times 10^{-12}\mathrm{erg}\,\mathrm{s}^{-1}\,\mathrm{cm}^{-2}\left(\frac{M}{1M_{\odot }}\right)^2\frac{10^{46}cm^3}{V}\left(\frac{T}{10^8\mathrm{K}}\right)^{1/2},$$
(12)
provided the torus cooling time is longer than the torus light crossing time, $`t_{lc}\simeq R/c`$. Otherwise, the observed flux $`F_{br}^{(obs)}`$ would be related to the previous formula by
$$F_{br}^{(obs)}=F_{br}\times \frac{t_{br}}{t_{lc}}$$
(13)
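As a rough cross-check of the normalisation in Eq. (12), the sketch below assumes the standard thermal bremsstrahlung emissivity $`1.4\times 10^{-27}\sqrt{T}n^2\mathrm{erg}\,\mathrm{cm}^{-3}\,\mathrm{s}^{-1}`$, an assumed mean Gaunt factor of 1.2, and the same Einstein–de Sitter luminosity distance as in the earlier sketch:

```python
# Rough, illustrative cross-check of the normalisation in Eq. (12), using
# the standard free-free emissivity and the EdS luminosity distance for
# z = 0.835 computed as in the previous sketch.
import numpy as np

M, V, T = 1.99e33, 1e46, 1e8        # 1 M_sun (g); cm^3; K
m_p, g_ff = 1.67e-24, 1.2           # proton mass (g); mean Gaunt factor (assumed)
d_L = 1.37e28                       # cm, for z = 0.835 (EdS, H0 = 65 km/s/Mpc)

n = M / (m_p * V)                   # mean particle density
L_br = 1.4e-27 * np.sqrt(T) * n ** 2 * g_ff * V
F_br = L_br / (4 * np.pi * d_L ** 2)
print(f"n = {n:.1e} cm^-3,  F_br = {F_br:.1e} erg/s/cm^2")  # ~1e-12, cf. Eq. (12)
```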
Furthermore, when, initially, the temperature is rather large, $`\simeq 10^8\mathrm{K}`$, the spectral slope between the BeppoSAX’s Low and Medium Energy concentrator optics/spectrometers should be rather flat, while later, as the torus cools and its flux decreases, the spectral slope should also increase. Piro et al. (1999) find that, at the point where the reburst is (fractionally) highest over the smooth afterglow, $`\alpha =0.4\pm 0.6`$ (i.e., consistent with a flat bremsstrahlung spectrum), while later they find $`\alpha =2.2\pm 0.7`$. Though there are large errors, the steepening of the spectrum through the reburst appears to be significant. In view of the agreement of the duration timescale, flux level, and steepening of spectral slope, we suggest that the observed reburst in GRB 970508 is thus bremsstrahlung radiation from a torus of hot material, heated up, and then cooled down, by the photons produced by the impact of the burst ejecta.
We now need to cover our tracks by determining whether there are values of the total torus mass and volume which satisfy, together with $`F_{br}^{(obs)}=1\times 10^{-11}\mathrm{erg}\,\mathrm{s}^{-1}\,\mathrm{cm}^{-2}`$, also $`\tau _T\simeq 1`$, $`n\simeq 10^{10}\mathrm{cm}^{-3}`$ which we assumed throughout. We assume a geometry whereby the torus has a volume $`V=\delta \mathrm{\Omega }R^2\delta R`$, with the torus thickness $`\delta R\ll R`$, the torus distance from the explosion site. Since $`\tau _T=0.6(M/1M_{\odot })(10^{16}cm/R)^2\,4\pi /\delta \mathrm{\Omega }`$, we see that for $`M=5M_{\odot }`$, $`R=10^{16}cm`$, $`V=10^{47}cm^3`$, $`\delta R=10^{14}cm`$ and $`\delta \mathrm{\Omega }\simeq 4\pi `$, we satisfy all constraints simultaneously: $`\tau _T\simeq 2`$ and $`n=4\times 10^{10}\mathrm{cm}^{-3}`$. From this we see that the torus need not be thin ($`\delta \mathrm{\Omega }\simeq 4\pi `$), which certainly agrees with expectations about the nature of exploding sources. Also, we notice that $`t_{br}/t_{lc}\simeq 4`$, so that the duration of the bremsstrahlung cooling radiation is diluted by light crossing time effects.
Thermal expansion of the shell during the cooling phase is negligible, since the cooling time is of order of the light crossing time, which is certainly shorter than the sound crossing time.
It is well–known that GRB 970508 had an early optical detection, $`0.2^d`$ after the burst, which was dimmer than later ($`>1^d`$) detections (Sahu et al. , 1998). Typical fluxes throughout the first 2 days are around $`30\mu Jy`$, which far exceed the optical component of the bremsstrahlung emission from the torus, which is in the range of $`0.03\mu Jy`$. Thus the observed nearly simultaneous rise of X–ray and optical fluxes remains, within this model, a coincidence.
## 4 Discussion
Beyond explaining the observed X–ray reburst (and the Fe line, see below), the current model makes a number of interesting predictions. First, the secondary burst may be observable. We may expect these events to last a few thousand seconds, with fluxes in the range of $`10^{-11}`$ to $`10^{-10}\mathrm{erg}\,\mathrm{s}^{-1}\,\mathrm{cm}^{-2}`$. The spectra of these sources are also interesting: we argued above that the torus temperature is limited by pair–creation, which would otherwise cause excessive radiative losses; thus we may expect the torus to reach a limiting temperature such that $`\tau _T\simeq 1`$, and a temperature $`\simeq 5\times 10^8\mathrm{K}`$, which correspond to a Compton parameter $`y\simeq 0.5`$. We thus expect significant departures from the usual, power–law like spectra of bursts. In particular, from sources which do not have time to cool down to $`T_f`$ (Eq. 10), so that the Comptonization of the secondary burst spectrum is time–independent, we expect to see a cutoff $`\mathrm{exp}(-h\nu /kT)`$ beyond $`h\nu =kT\simeq 50keV`$, with a complicated, time–dependent non–power–law behavior below this point (Rybicki and Lightman 1979). This exponential cutoff can be used as a signature of unsaturated Comptonization, typical of the present model.
Another interesting consequence of this model is that the secondary burst may be seen even without its being preceded by the main gamma ray burst. This would occur whenever we missed the (beamed) emission from the burst proper, but would see the isotropic emission from the reburst. This might occur because in many models, the beaming of the main burst is expected to be rather smooth, and one may conjecture that, while the total output may be $`10^{52}erg`$ close to the major axis, a total of $`10^{51}erg`$ remains to be emitted nearly isotropically. This would amply satisfy the energy requirements of the reburst. The total expected fluences (up to $`10^{-4}\mathrm{erg}\,\mathrm{cm}^{-2}`$ for distances smaller than GRB 970508’s) and durations ($`10^3s`$) strongly remind one of the so–called Fast X–ray Transients, many of which last through several satellite orbits and have no identified counterparts (Grindlay 1999). A bevy of these events should become observable with planned new telescopes such as SWIFT.
An interesting question one may ask is why the observation of the rebursts is so rare: up to now, GRB 970508 and GRB 970828 are the only two bursts for which such a phenomenon has been observed. So long as the torus is optically thin to bremsstrahlung, we see from Eq. 12 that the expected flux scales with distance from the explosion site as $`R^{-q}`$, where $`q=2`$–3. Since we ignore the torus thickness, we consider the two limiting cases: $`q=3`$, uniformly filled sphere, and $`q=2`$, infinitely thin shell. This flux will appear with a time–delay $`R/c`$ with respect to the burst, simultaneously with an afterglow which scales as $`t^{-p}`$, with $`p\simeq 1.3`$. We see that the torus to afterglow flux ratio scales as $`t^{p-q}`$, with $`p-q`$ between $`-1.7`$ and $`-0.7`$, i.e., $`p-q<0`$. Thus, the more distant the torus is, the less easy it is to detect it. However, since we supposed that $`\tau _T\simeq 1`$ for $`R\simeq 10^{16}cm`$, further shrinking of the torus will make it less bright, not more; but it will have to compete with a simultaneously emitted afterglow which is brighter and brighter. So $`R=10^{16}cm`$ is an ideal distance at which the torus could be located.
For the same parameters as above, Lazzati et al. (1999) have shown that the iron line can be interpreted as due to purely thermal processes. Actually, Lazzati et al. showed that also fluorescence and multiple ionization/recombinations can account for the line, given suitable (but different!) thermodynamic conditions for the emitting plasma. However, we showed in this paper that the thermodynamic conditions of the emitting torus are not free, but are essentially fixed by the requirement that the reburst be fitted. We wish to stress that this is a much more demanding requirement, since the reality of the reburst cannot be doubted, while that of the iron line is more questionable. It is however satisfying that the thermodynamic parameters thus determined ($`T=10^8\mathrm{K}`$, $`n=4\times 10^{10}\mathrm{cm}^{-3}`$) are precisely those that Lazzati et al. (1999) had to assume, in order to fit the line.
As a corollary, one may then understand why it is difficult to observe the iron lines. Lazzati et al. (1999) have derived the luminosity of the line as a function of the torus temperature: $`\mathrm{exp}(-8\times 10^7\mathrm{K}/T)T^{-2.4}`$. This luminosity has a peak for $`T=T_m=3\times 10^7\mathrm{K}`$, and decreases steeply with increasing $`T`$. We see that $`T_{IC}^{(2)}\simeq T_m`$, while $`T_{IC}\gg T_m`$. Thus, it is only when the torus manages to cool down, that it will find itself in ideal conditions for producing a bright iron line; we see from Eq. 10 that this occurs only for material that lies close to the explosion site. Otherwise, the torus material will remain in a hot state in which the line equivalent width is very small: $`\simeq 20eV`$ at $`T=T_{IC}`$ (Bahcall and Sarazin 1978). We also remark that, even in the case in which the torus has managed to cool down to $`T_m`$, after a time $`t_{br}`$, it will further cool below $`T_m`$, and the line flux will promptly decrease, thereby explaining the disappearance of the iron line in the observations of GRB 970508 (Piro et al., 1999).
Should the torus be located at larger radii, then we would expect that the material be hotter (from Eq. 10), and that the Fe–line should not be observable, from the argument above. We thus expect the time–delay with which the reburst appears to be inversely correlated with both its luminosity and the Fe–line equivalent width.
An alternative model for the anomalous behaviour of GRB 970508 has been proposed (Panaitescu, Mészáros and Rees 1998). In their model there is no external material to cause a resurgence of the X–ray flux, and the peculiarities in the time–evolution of the optical afterglow are explained as a consequence of beaming. However, the anomalous variations in the X–ray flux can hardly be followed (see especially their Fig. 2), and certainly there is no allowance either for the observed spectral variations of the X–ray flux during the first two days, or for the existence of an iron line.
Lastly, we would like to comment on the fact that we require a dense and abundant amount of iron–rich (for a redshift of $`z=0.835`$!) material, at close distance from the explosion site: $`5M_{\odot }`$ at $`R=10^{16}cm`$. This is clearly incompatible with all existing models of GRBs, neutron star–neutron star/neutron star–black hole/black hole–white dwarf mergers, and hypernovae, except for SupraNovae (Vietri and Stella 1998), which are preceded by a SuperNova explosion occurring between 1 month and 10 years before the GRB. With an average expansion speed of $`3000\mathrm{km}\,\mathrm{s}^{-1}`$, this implies an accumulated distance of $`R=10^{15}`$–$`10^{17}cm`$. At this distance, one should find several solar masses (McCray 1993) with densities of order $`10^{10}\mathrm{cm}^{-3}`$, exactly as required by this independent set of observations.
In a recent paper we examined the spectrum of screening masses at finite temperature in four dimensional $`SU(2)`$ and $`SU(3)`$ pure gauge theories. Our primary result was that dimensional reduction could be seen in the (gauge invariant) spectrum of the spatial transfer matrix of the theory. In addition, we had shown that the specific details of the spectrum precluded any attempt to understand it perturbatively. In this paper we present a complete set of non-perturbative constraints on the effective dimensionally reduced theory at a temperature ($`T`$) above the phase transition temperature ($`T_c`$) for the $`SU(2)`$ case in the zero-lattice spacing and infinite volume limit.
The study of screening masses is interesting for two reasons. First, they are crucial to phenomenology because they determine whether the fireball obtained in a relativistic heavy-ion collision is large enough for thermodynamics. Second, the problem of understanding screening masses impinges on several long-standing problems concerning the infrared behaviour of the $`T>T_c`$ physics of non-Abelian gauge theories.
It is known that electric polarisations of gluons get a mass in perturbation theory, whereas magnetic polarisations do not. Long ago, Linde pointed out that $`T>0`$ perturbation theory breaks down at a finite order due to this insufficient screening of the infrared in non-Abelian theories. The most straightforward way to cure this infrared divergence would be if the magnetic polarisations also get a mass non-perturbatively. There have been recent attempts to measure such a mass in gauge-fixed lattice computations .
It was found long back that the solution could be more complicated and intimately related to the dynamics of dimensionally reduced theories. Jackiw and Templeton analysed perturbative expansions in massless and super-renormalisable three dimensional theories and found that subtle non-perturbative effects screen the infrared singularities in such theories. In a companion paper, Appelquist and Pisarski discussed the possibility that such effects might, among other things, also give rise to magnetic masses . In fact the recent suggestion of Arnold and Yaffe that non-perturbative terms and logarithms of the gauge coupling may be important in an expansion of the Debye screening mass in powers of the coupling may be seen as an example of such non-perturbative effects. The generation of the other screening masses is also non-perturbative.
In this paper we report our measurements of the screening masses in the infinite volume and zero lattice spacing limit of $`SU(2)`$ pure gauge theory at temperatures of 2–4$`T_c`$. We found a strong finite volume movement of one of the screening masses due to spatial deconfinement. However, the lack of finite volume effects in the remaining channels allowed us to extract infinite volume results from small lattices. The effect of a finite lattice spacing turned out to be small. We were able to pin down all the available screening masses with an accuracy of about 5%.
It is necessary to set out our notation for the quantum numbers of the screening masses. The transfer matrix in the spatial direction, $`z`$, has the dihedral symmetry, $`D_h^4`$ of a slice of the lattice which contains the orthogonal $`x`$, $`y`$ and $`t`$ directions. The irreducible representations (irreps) are labelled by charge conjugation parity, $`C`$, the 3-dimensional ($`x,y,t`$) parity, $`P`$, and the irrep labels of $`D_4`$ (four one-dimensional irreps $`A_{1,2}`$, $`B_{1,2}`$ and one two-dimensional irrep $`E`$). In $`SU(2)`$ gauge theory, only the $`C=1`$ irreps are realised; hence we lighten the notation by dropping this quantum number.
Dimensional reduction implies the following pair-wise degeneracies of screening masses—
$$m(A_1^P)=m(A_2^P),m(B_1^P)=m(B_2^P),m(E^P)=m(E^{-P}).$$
(1)
After this reduction, the symmetry group becomes $`C_v^4`$ on the lattice and $`O(2)`$ in the continuum. The latter group has two real one-dimensional irreps— $`0_+`$ and $`0_{-}`$. The first comes from the $`J_z=0`$ components of even spin irreps of $`O(3)`$, and the second from the $`J_z=0`$ components of the odd spins. There are also an infinite number of real two dimensional irreps, $`𝐌`$, corresponding to the $`J_z=\pm M`$ pair coming from any spin of $`O(3)`$. Dimensional reduction associates the irreps of $`D_h^4`$ with those of $`O(2)`$ according to
$`m(0_+)=m(A_1^+),`$ $`m(0_{-})=m(A_1^{-}),`$ (2)
$`m(\mathrm{𝟏})=m(E),`$ $`m(\mathrm{𝟐})=m(B_1^+)=m(B_1^{-}).`$ (3)
The final double equality is valid only when all lattice artifacts disappear. Although $`O(2)`$ has an infinite tower of states, only these four masses are measurable in a lattice simulation of the $`SU(2)`$ theory<sup>3</sup><sup>3</sup>3There has been a first attempt to disentangle these lattice effects and measure the higher irreps .. Some of the equalities in eqs. (1,3) may be broken by dynamical lattice artifacts.
We studied these artifacts using “torelon” correlators . These are correlation functions of Polyakov loop operators in the spatial ($`P_x`$ and $`P_y`$) and temporal ($`P_t`$) directions. $`P_t`$ and $`P_x+P_y`$ transform as the scalar ($`A_1^+`$) of $`D_h^4`$, whereas $`P_x-P_y`$ transforms as $`B_1^+`$. At zero temperature, a major part of finite volume effects in masses can be understood (for moderate $`mL`$) in terms of torelons.
The status of the $`A_1^+`$ torelon, $`P_t`$, at $`T>0`$ is very different from that at $`T=0`$. Here, $`P_t`$ is the order parameter for the phase transition, and its correlations have genuine physical meaning— giving the static quark-antiquark potential, and hence defining the Debye screening mass, $`M_D`$. This is identical to $`m(A_1^+)`$ obtained from the Wilson loop operators . In this respect, the finite temperature theory is nothing but a finite size effect.
We believe that the major part of finite volume effects in screening masses can be understood in terms of finite temperature physics. In simulations of $`N_t\times L^2\times N_z`$ lattices at a given coupling $`\beta `$, when the transverse direction, $`L`$, is small enough, the spatial gauge fields are deconfined. The spatial torelons $`P_{x,y}`$ are order parameters for this effect. In general, large lattices, $`L/N_t>T/T_c`$, have to be used to obtain the thermodynamic limit. Below this limiting value of $`L`$, we should find strong finite volume effects, but only in the $`A_1^+`$ and $`B_1^+`$ sectors. When such effects can be directly measured, the $`B_1^+`$ loop mass is expected to be twice the torelon mass. Whether or not similar effects are seen in the $`A_1^+`$ sector depends on whether the spatial torelon mass is less than $`M_D/2`$. If it is, then finite volume effects should be strong, otherwise not. We look upon torelons as convenient probes of finite volume effects, not their cause.
We have studied screening masses for $`SU(2)`$ gauge theory on $`N_t\times L^2\times N_z`$ lattices with $`N_z=4N_t`$ at a temperature of $`T=2T_c`$. We studied two series of lattices, one for $`N_t=4`$ and another for $`N_t=6`$. For the first, we took $`L=8`$, 10, 12 and 16. For the second, we chose $`L=16`$, 20 and 24. For $`N_t=4`$, a temperature of $`2T_c`$ is obtained by working with $`\beta =2.51`$. On $`N_t=6`$, the choice $`\beta =2.64`$ gives $`T=2T_c`$. The choice of lattice sizes allowed us to investigate finite volume as well as finite lattice spacing effects at constant physics.
We have also carried out measurements at $`T=3T_c`$ and $`4T_c`$. Since our measurements at $`2T_c`$ showed that lattice spacing effects are quite small for $`N_t=4`$, we restricted ourselves to this size at higher temperatures. At $`3T_c`$, we worked with a $`4\times 24^3`$ lattice. At $`4T_c`$, we supplemented our earlier measurements on small ($`4\times 8^2\times 16`$ and $`4\times 12^2\times 16`$) lattices with measurements on $`4\times 24^3`$ and $`4\times 32^3`$ lattices. For $`N_t=4`$, temperatures of $`3T_c`$ and $`4T_c`$ are attained by working at $`\beta =2.64`$ and 2.74, respectively.
We used a hybrid over-relaxation algorithm for the Monte-Carlo simulation, with 5 steps of OR followed by 1 step of a heat-bath algorithm. The autocorrelations of plaquettes and Polyakov loops were found to be less than two such composite sweeps; hence measurements were taken every fifth such sweep. We took $`10^4`$ measurements in each simulation except on the $`6\times 24^3`$ lattice where we took twice as much, and the $`4\times 32^3`$ lattice where we took 4000.
Noise reduction involved fuzzing. The full set of loop operators measured on some of the smaller lattices can be found in . Since analyses of subsets of these operators gave identical results, we saved CPU time on the larger lattices by measuring a smaller number of operators. The full matrix of cross correlations was constructed, between all operators at all levels of fuzzing, in each irrep. A variational procedure was used along with jack-knife estimators for the local masses. Torelons were also subjected to a similar analysis.
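For readers unfamiliar with the method, the following is a minimal sketch of such a variational analysis on fabricated two-operator data; the correlator matrix, operator overlaps and masses below are invented purely for illustration, and in the actual analysis the local masses carry jack-knife errors.

```python
# Minimal sketch of the variational (generalised eigenvalue) analysis on
# fabricated correlator data.  In the real analysis, errors on the local
# masses come from jack-knife estimators over the measurement sample.
import numpy as np
from scipy.linalg import eigh

def local_masses(C, t0=1):
    """C[t] is the n_ops x n_ops cross-correlation matrix at separation t."""
    lam = np.array([eigh(C[t], C[t0], eigvals_only=True)[-1]
                    for t in range(t0, len(C))])
    return np.log(lam[:-1] / lam[1:])          # effective masses m(t)

t = np.arange(12)
# Two fake states with masses 0.5 and 1.0 and invented operator overlaps:
C = (np.einsum('i,t,j->tij', [1.0, 0.6], np.exp(-0.5 * t), [1.0, 0.6])
     + np.einsum('i,t,j->tij', [0.4, 1.0], np.exp(-1.0 * t), [0.4, 1.0]))
print(local_masses(C))                         # plateaus at the lowest mass, 0.5
```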
Our measurements at $`2T_c`$ for $`N_t=4`$ are reported in Table 1. We can measure torelons for fairly large values of $`L/N_t`$. Twice the $`A_1^+`$ spatial torelon screening mass is greater than that obtained from $`P_t`$. Hence $`m(A_1^+)`$ obtained from loops is equal to the latter and therefore shows no finite volume effect. The $`A_1^+`$ and $`B_1^+`$ spatial torelons have equal screening masses. The $`B_1^+`$ loop screening mass closely equals twice the $`B_1^+`$ torelon mass, and hence shows a systematic dependence on $`L`$. Finite volume effects are absent in all the other channels, as expected. For the $`L=16`$ lattice for $`N_t=4`$, the torelon is not measurable, and finite volume effects are under control. At this largest volume dimensional reduction and continuum physics are visible since the equalities in eqs. (1,3) are satisfied.
We have investigated finite lattice spacing effects by making the same measurements at the same physical temperature on lattices with $`N_t=6`$. The measurement of $`m(E^{-})`$ turns out to be rather noisy. Since we had observed on the coarser lattice that $`m(E^+)=m(E^{-})`$, we saved on CPU time by dropping the measurement of the $`E^{-}`$ screening mass on the $`N_t=6`$ lattices. Our results on the finer lattice are collected in Table 2. Again, dimensional reduction and continuum physics are visible because the equalities in eqs. (1,3) are satisfied on the largest lattice.
From the data collected in Tables 1 and 2 it is clear that the physical ratio $`m/T`$ is the same with both lattice spacings, for loop masses. Hence finite lattice spacing effects are under control. This result is consistent with zero temperature lattice measurements which show that at these lattice spacings, ratios of physical quantities are independent of the spacing.
In Figure 1 we illustrate the nature of the finite volume effects. The lack of movement in $`m(A_1^+)`$, $`m(A_1^{-})`$ and $`m(B_2^+)`$ is obvious. We have used the fact of dimensional reduction to prune the amount of data that has to be displayed in the graph. Note that the data show that $`m(\mathrm{𝟐})`$ can be estimated by measuring any of the $`B`$ irrep screening masses, apart from the $`B_1^+`$, at small volumes. Note also that $`m(B_1^+)/T`$ scales with either $`L/N_t`$ or $`L/N_z`$. However, for $`N_t=6`$, it becomes difficult to measure the torelon correlator at a fairly small value of $`L/N_t`$. This supports our earlier statement that the torelon is a measure, not the cause, of finite volume effects.
In the $`SU(3)`$ pure gauge theory, which has a first order phase transition, the simple inequalities $`N_z/N_t,L/N_t>T/T_c`$ are sufficient to remove finite volume effects . The observed slow finite volume movement of $`m(B_1^+)`$ is special to the $`SU(2)`$ gauge theory, which has a second order finite temperature phase transition. As a result, the above constraints on the lattice sizes are compounded by two separate systematics of second order phase transitions. The first is that there are precursor effects which cause masses to decrease at temperatures less than $`T_c`$; the second that part of this decrease is power-law singular in $`N_z`$ at fixed $`\beta `$. Consequently, in the $`SU(2)`$ theory we can at best state the more stringent conditions $`L/N_t=N_z/N_t\gg T/T_c`$.
In our measurements with $`N_t\times N_s^3`$ lattices at higher temperatures, we found that the lattice artifact in $`m(B_1^+)`$ persists for the largest values of $`N_s/N_t`$ that we studied. At both of the higher temperatures no other finite volume effects were seen within the precision of the measurements. As a result, we were able to estimate all four screening masses listed in eq. (3). The ratios $`m/T`$ are seen to be approximately constant in this temperature range. This is illustrated in Fig. 2.
The four masses that we have extracted from simulations of the 4-d theory represent the maximum information available non-perturbatively to constrain the effective 3-d theory. We found it instructive to display the same data in Figure 3 as a plot of the ratio $`m(0_+)/m(\mathrm{𝟐})`$ against $`m(0_{})/m(\mathrm{𝟐})`$. The finite volume movement in these numbers is fairly large if the denominators are estimated through $`m(B_1^+)`$. However, as shown, the movement is much reduced if $`m(B_1^{})`$ is used as an estimator of $`m(\mathrm{𝟐})`$. Since the continuum and thermodynamic limit is well pinned down, the figure also serves to compare the 4-d theory with different 3-d theories.
The point for the 3-d $`SU(2)`$ pure gauge theory in the infinite volume and for zero lattice spacing is shown in the figure. It is clear that this is not the appropriate effective theory. This result is expected, since a perturbative mode counting shows that the effective three dimensional theory must contain a gauge field and a scalar field that transforms adjointly under gauge transformations, and the scalar field does not decouple completely from the theory even at high temperatures .
The vertical bands in Figure 3 come from measurements in a 3-d $`SU(2)`$ gauge theory with a fundamental scalar in both the symmetric and Higgs phases of this theory . It is not surprising that the ratio $`m(0_+)/m(\mathrm{𝟐})`$ in either phase of this theory does not agree with our measurements at $`2T_c`$, in view of the arguments already presented.
In , a super-renormalisable 3-d theory of $`SU(2)`$ gauge fields and an adjoint scalar, with three couplings, was suggested as the effective theory. Matching two of these couplings in a perturbation expansion, the screening masses were computed through a simulation of the 3-d theory. It turned out that at couplings corresponding to a temperature of $`2T_c`$, $`m(\mathrm{𝟏})/m(0_+)=1.6\pm 0.2`$, as opposed to the value $`2.4\pm 0.2`$ that we measure. Whether better agreement can be obtained by fine-tuning the third coupling remains a future exercise. If three couplings can be tuned to reproduce four masses, then this would vindicate the perturbative approach to matching espoused in .
However, until such a demonstration is made, there are questions about whether this procedure is viable at $`T\approx 2T_c`$. A direct measurement suggests that the gauge coupling is larger than unity, $`g^2/2\pi \approx 0.53`$, even for $`T=2T_c`$ . A related statement has been made on the basis of a recent study of the Debye mass: higher orders in the perturbative series become numerically smaller only at $`T\sim 10^7T_c`$ . A similar statement comes from attempts to find the region of validity of the perturbative expansion of the free energy in a non-Abelian plasma , which give $`T>10^5T_c`$. It has recently been suggested that effects associated with screening and damping should be resummed to all orders in $`g`$ if perturbation theory is to behave reasonably at $`T\approx 2T_c`$. We have earlier concluded that the screening masses we observe cannot be obtained perturbatively . The fact that our measurements show $`m/T>2\pi `$ in some channels also indicates that the perturbative matching procedure may not be useful, since dimensional reduction works only if modes with energy $`2\pi T`$ or more decouple .
There are alternatives to perturbation theory. One interesting method would be to use gauge invariant composites directly to construct the effective theory. Phenomenology of this kind was used long ago to examine lattice data on the energy density of the $`SU(3)`$ gauge theory at $`T>T_c`$ . A more sophisticated attempt of this kind was made in , but it needed the machinery of large-N theories to control the expansion.
The question of the compositeness of screening masses is closely related. Note that the 3-d adjoint Higgs, $`A_t`$, is in the $`0_{}`$ irrep of $`O(2)`$, and the 3-d gauge field, $`𝐀`$, in the $`\mathrm{𝟏}`$. The gauge invariant $`0_+`$ can be seen in correlations of the composite operators $`𝒪_1=\mathrm{Tr}(A_t^2)`$ and $`𝒪_2=\mathrm{Tr}(𝐀𝐀)`$, as well as higher dimensional operators. The gauge invariant $`0_{}`$ screening mass can be seen, for example, in correlations of $`𝒪_3=\mathrm{Tr}(A_t^3)`$. The composite operators corresponding to the remaining gauge invariant screening masses can also be easily written down. Nadkarni had shown by explicit computation that $`𝒪_1`$ and $`𝒪_2`$ mix at order $`g^4`$, where $`g`$ is the 4-d gauge coupling . Hence, the characterisation of $`m(0_+)`$ as being due to electric phenomena is a perturbative statement, and is accurate only to the extent that $`g`$ is small at the temperature of interest. Similar problems occur in the other channels as well.
In summary, we identified the only source of large finite volume effects in the determination of screening masses at $`T>T_c`$ in $`SU(2)`$ pure gauge theory. These are due to spatial deconfinement and can be conveniently studied using torelons. Finite lattice spacing effects turn out to be easy to control. We found that rather small and coarse lattices can be used to obtain a good measurement of the physical screening masses, provided one ignores the $`B_1^+`$ channel. Our best estimates are shown in Figure 2. Dimensional reduction, as expressed non-perturbatively in eqs. (1,3), is seen in the temperature range 2–4$`T_c`$.
We would like to thank Owe Philipsen for a discussion.
# Scaling of particle production with number of participants in high-energy A+A collisions in the parton-cascade model
Recently, the WA98 Collaboration has published data for the production of neutral pions up to transverse momenta of $`p_{\perp}\approx 4`$ GeV/c, in central $`Pb+Pb`$ collisions at 160 A GeV/c incident momentum, corresponding to $`\sqrt{s}\approx 17`$ A GeV. The two most interesting features of these data emerge when they are compared to corresponding data from $`pp`$ collisions and collisions involving lighter nuclei : a) an approximate invariance of the spectral shapes, i.e., a near independence of the slope of the neutral pion $`p_{\perp}`$ spectra on the system size; b) a simple scaling of the $`\pi ^0`$ yield with the number of participating nucleons, if the number of participants is large ($`\stackrel{>}{}\mathrm{\hspace{0.17em}30}`$). In the present work, we use the event generator VNI, which embodies the physics of the parton-cascade model for ultra-relativistic heavy-ion collisions, to analyze these observations. The model attempts to describe the nuclear dynamics on the microscopic level of particle transport and interactions, by evolving the multi-particle system in space-time from the instant of nuclear overlap all the way to the final-state hadron yield. For details we refer the interested reader to Refs. .
A simple consideration may illustrate the features of particle production within this approach and its relevance to multiple parton scattering. Let $`x`$ denote the number of partons in each nucleon, and let each parton suffer $`\nu `$ collisions during the partonic stage. Assuming that each virtual parton radiates $`r`$ partons, we see that the number of produced partons will vary as
$$N_{\text{partons}}\propto \nu (1+r)xA,$$
(1)
and, as we expect $`N_{\text{hadrons}}\propto N_{\text{partons}}`$, one realizes immediately that if the partons interact only once, the multiplicity of the partons, and hence the multiplicity of hadrons, will scale as $`A`$. It is also clear that if every parton interacts with every other parton then $`\nu \propto A`$, and the number of materialized partons would scale as $`A^2`$. That could happen only if the system lived for an infinitely long time. This, however, is not the case: in relativistic heavy ion collisions the partonic matter will expand, dilute, and eventually convert into hadrons. Thus a given parton may undergo $`\nu \approx R/\lambda `$ interactions, where $`R`$ is the transverse size of the system and $`\lambda `$ is the mean free path of the parton. Noting that $`R\propto A^{1/3}`$, we immediately see that the number of materialized partons, and hence the number of produced particles, would scale as $`A^{4/3}`$. An experimental verification of this scaling behaviour could be a direct manifestation of the formation of dense partonic matter!
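The counting argument can be checked numerically. The sketch below inserts $`\nu \approx R/\lambda \propto A^{1/3}`$ into eq. (1), with purely illustrative values of $`x`$, $`r`$ and $`\lambda `$, and recovers the $`A^{4/3}`$ scaling:

```python
import numpy as np

# Illustrative inputs, not fitted to anything.
x, r, lam = 10.0, 0.5, 1.0
A = np.array([16.0, 64.0, 197.0])
nu = A ** (1.0 / 3.0) / lam            # nu ~ R/lambda with R ~ A**(1/3)
n_partons = nu * (1.0 + r) * x * A     # eq. (1) with nu inserted
slope = np.diff(np.log(n_partons)) / np.diff(np.log(A))
print(slope)                           # -> [1.3333 1.3333], i.e. A**(4/3)
```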
We shall demonstrate now that these simple considerations are indeed confirmed by a detailed simulation with the event generator VNI on the basis of the parton-cascade/cluster-hadronization model. We first consider the recently measured transverse momentum distribution of $`\pi ^0`$ production in central collisions of $`Pb+Pb`$ at the CERN SPS obtained by the WA98 collaboration . To make contact with the experimental data, the simulations were done for the range of impact parameters $`0<b<4.5`$ fm, which corresponds to 10% of the minimum-bias cross-section. The result of our model calculation, shown as the solid histogram in Fig. 1a, is seen to be in decent agreement with the experimental measurements. The model results do not yet include the final-state interaction among produced hadrons, but it is likely that the agreement will further improve once the effect of cascading hadrons is included. The dashed histogram in Fig. 1 gives the soft contribution, while the solid curve gives a hydrodynamic prediction (without the contribution of resonance decays).
In Fig. 2 we plot our results for the $`p_{\perp}`$ spectra of $`\pi ^0`$’s in central $`A+A`$ collisions at $`\sqrt{s}=`$ 17 A GeV for systems from $`A=16`$ to $`A=197`$. One observes that they are almost identical in shape, with a universal slope for $`p_{\perp}\stackrel{<}{}\mathrm{\hspace{0.17em}1.5}`$ GeV/c. On the other hand, the deviations appearing at larger $`p_{\perp}`$ for heavier systems are indicative of enhanced multiple scattering there. Similar results (not displayed here) were obtained at RHIC energies. In order to verify this scaling more closely, we have calculated, as a function of the nuclear mass number $`A`$, the production of $`\pi ^0`$’s in the central rapidity region ($`-0.5<y<0.5`$) with transverse momenta $`p_{\perp}\geq 0.5`$ GeV/c. The latter choice minimizes the influence of pions originating from resonance decays. This kinematic window matches the one used by the WA98 collaboration in their measurement of the $`\pi ^0`$ yield. Fig. 3 displays the simulation results for central $`A+A`$ collisions across the periodic table, at the CERN SPS center-of-mass energy $`\sqrt{s}=17`$ A GeV, while Fig. 4 shows the same for the RHIC energy $`\sqrt{s}=200`$ A GeV. The solid lines are fits to the model results, represented by the symbols, and scale as
$$N_{\pi ^0}\propto \left(N_{part}\right)^\alpha ,$$
(2)
where $`N_{part}=2A`$ is the number of participating nucleons, with $`\alpha `$ extracted as:
$$\alpha \approx \{\begin{array}{cc}1.16\hfill & \text{at }\sqrt{s}=17\text{ A GeV}\hfill \\ 1.23\hfill & \text{at }\sqrt{s}=200\text{ A GeV}\hfill \end{array}.$$
(3)
It is interesting to note that $`\alpha 1.2`$ is in excellent agreement with the corrected WA98 results .
For comparison, the dashed lines correspond to a linear scaling $`(N_{part})^{1.0}`$, whereas the dashed-dotted lines indicate a hypothetical scaling with the nuclear overlap factor $`T_{AA}(b=0)\propto (N_{part})^{1.42}`$. A linear scaling would reflect a single-collision situation, and a scaling with $`T_{AA}`$ would indicate a Glauber-type multiple-collision scenario, in which nucleons suffer several collisions along their incident straight-line trajectory, without deflection and without energy loss. Comparing the three curves, we conclude that our simulation results rise significantly more slowly than $`T_{AA}`$: firstly, the particles change direction through the collisions, and secondly, they are subject to a collision time of the order of the inverse momentum transfer, during which they cannot rescatter. On the other hand, our calculated $`\pi ^0`$ yields grow much faster than linearly with $`A`$, due to multiple scatterings.
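For completeness, here is a minimal sketch of how the exponent $`\alpha `$ in eq. (2) could be extracted from the simulated yields by a least-squares fit in log-log space; the mock data are generated from the quoted SPS exponent merely to exercise the routine.

```python
import numpy as np

def fit_scaling_exponent(n_part, n_pi0):
    """Fit N_pi0 = c * N_part**alpha by linear regression in log-log space."""
    alpha, logc = np.polyfit(np.log(n_part), np.log(n_pi0), 1)
    return alpha, np.exp(logc)

# Mock yields built from the quoted exponent, just to test the fit.
n_part = np.array([32.0, 128.0, 262.0, 394.0])   # 2A for A = 16 ... 197
alpha, c = fit_scaling_exponent(n_part, 0.5 * n_part ** 1.16)
print(alpha)                                     # -> 1.16
```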
In summary, we have demonstrated here that the observed scaling of the number of produced particles with the number of participants, in heavy-ion $`A+A`$ collisions, as well as the approximate shape-independence of the transverse momentum spectra, are satisfactorily reproduced by the parton-cascade / cluster-hadronization model. We must add that a more accurate description of the $`p_T`$ spectra at larger $`p_T`$ will need a readjustment of the hadronization model.
Dr. Klaus Geiger, a dear colleague, a very good friend, and a brilliant physicist was killed in an air-crash in September 1998. This is one of the last pieces of work which we completed together .
# Enhancement of Coulomb interactions in semiconductor nanostructures by dielectric confinement
## Abstract
We present a theoretical analysis of the effect of dielectric confinement on the Coulomb interaction in dielectrically modulated quantum structures. We discuss the implications of the strong enhancement of the electron-hole and electron-electron coupling for two specific examples: (i) GaAs-based quantum wires with remote oxide barriers, where combined quantum and dielectric confinements are predicted to lead to room temperature exciton binding, and (ii) semiconductor quantum dots in colloidal environments, where the many-body ground states and the addition spectra are predicted to be drastically altered by the dielectric environment.
When a semiconductor nanostructure is embedded in a medium with a smaller dielectric constant, the Coulomb interaction between quantum confined states may be enhanced by virtue of the polarization charges which form at the dielectrically mismatched interfaces . While this effect is relatively small and usually neglected in conventional semiconductor heterostructures (e.g., GaAs/AlAs), we will show that, for hybrid semiconductor nanostructures surrounded by an organic or dielectric medium, the enhancement can be large and must be taken into account for a realistic description of Coulomb correlated quantum states. Besides being quantitatively important for the interpretation of experimental spectra, these effects provide an additional degree of freedom for tailoring optical and transport properties of quantum structures.
In this paper we examine two prototype examples relevant to the physics of quantum wires (QWIs) and dots (QDs). We first consider properly designed hybrid semiconductor/insulator QWIs based on GaAs, and show that dielectric confinement (DC) may lead to excitonic states with a binding energy exceeding the room temperature thermal energy $`kT_{\text{room}}`$ (a prerequisite for exploiting excitonic states in electro-optical devices) without degrading the optical efficiency typical of conventional GaAs/AlAs nanostructures. Secondly, we show that the dielectric constant of the environment may strongly affect the addition spectra of QDs by modifying the electronic ground state with respect to the case of good dielectric matching.
Our theoretical scheme starts from the following basic considerations. When the dielectric constant $`ϵ(𝐫)`$ is spatially modulated, the Coulomb interaction between, say, two electrons sitting at positions $`𝐫`$ and $`𝐫^{}`$ is given by $`V(𝐫,𝐫^{})=e^2G(𝐫,𝐫^{})`$, where $`G(𝐫,𝐫^{})`$ is the Green’s function of the Poisson operator, i.e.,
$$\mathbf{}_𝐫ϵ(𝐫)\mathbf{}_𝐫G(𝐫,𝐫^{})=\delta (𝐫𝐫^{}).$$
(1)
Therefore, the space dependence of $`ϵ(𝐫)`$ modifies $`G(𝐫,𝐫^{})`$ with respect to the homogeneous case, where $`ϵ(𝐫)=ϵ_0`$ and $`G_0(𝐫,𝐫^{})=1/[4\pi ϵ_0|𝐫-𝐫^{}|]`$. This, in turn, modifies the Coulomb matrix elements between the quantum states of the structure which, in the basis ensuing from the single-particle envelope functions $`\mathrm{\Phi }^{e(e^{})}`$, can be written as
$$V_{ij}=e^2\mathrm{\Phi }_i^e(𝐫)\mathrm{\Phi }_j^e^{}(𝐫^{})G(𝐫,𝐫^{})\mathrm{\Phi }_i^e^{}(𝐫^{})\mathrm{\Phi }_j^e(𝐫)𝑑𝐫𝑑𝐫^{}.$$
(2)
Here $`i`$, $`j`$ stand for an appropriate set of quantum numbers labelling the states. If the symmetry of the structure is low, as in the realistic quantum wire structures considered below, Eq. (1) must be explicitly solved, and the ensuing potential is then used in (2). In this case it is convenient to cast Eqs. (1) and (2) in Fourier space as described in Ref. . In contrast, for particularly simple structures the analytic form of the potential can be directly obtained in real space. This is, e.g., the case for the second prototype structure discussed below, the spherical QD: here two electrons can be shown to interact via the potential
$$V(𝐫_i,𝐫_j)=\frac{e^2}{ϵ_1}\frac{1}{|r_i-r_j|}+\frac{e^2}{ϵ_1R_d}\sum _{k=0}^{\mathrm{\infty }}\frac{(k+1)(ϵ-1)}{(kϵ+k+1)}\left(\frac{r_ir_j}{R_d^2}\right)^kP_k(\mathrm{cos}\mathrm{\Theta }_{ij}),$$
(3)
where $`R_d`$ is the QD radius, $`ϵ=ϵ_1/ϵ_2`$, and $`ϵ_1`$ ($`ϵ_2`$) is the dielectric constant of the inner (outer) material. In the following we discuss the basic results and the relevance of these effects for our prototype QWIs and QDs.
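For orientation, the sketch below evaluates the interaction of Eq. (3) with a truncated image-charge series, in units with $`e^2=1`$; the function name and truncation order are illustrative choices, not part of the actual calculation.

```python
import numpy as np
from scipy.special import eval_legendre

def dot_coulomb(r_i, r_j, cos_theta, R_d, eps1, eps2, kmax=200):
    """Two-electron interaction of Eq. (3) inside a dielectric sphere.

    r_i, r_j: radial positions (< R_d); cos_theta: cosine of the angle
    between the two position vectors. The Legendre series converges
    geometrically for r_i * r_j < R_d**2, so kmax = 200 is ample.
    """
    eps = eps1 / eps2
    dist = np.sqrt(r_i**2 + r_j**2 - 2.0 * r_i * r_j * cos_theta)
    direct = 1.0 / (eps1 * dist)
    k = np.arange(kmax + 1)
    coeff = (k + 1) * (eps - 1.0) / (k * eps + k + 1.0)
    series = np.sum(coeff * (r_i * r_j / R_d**2) ** k
                    * eval_legendre(k, cos_theta))
    return direct + series / (eps1 * R_d)
```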
(i) QWIs with remote dielectric confinement.
Recently, we have proposed that remote dielectric confinement (RDC) may be used to enhance the exciton binding energies $`E_b`$. In conventional nanostructures $`E_b`$ is considerably enhanced by quantum confinement; however, for GaAs-based structures, observed values of $`E_b`$ are still well below $`kT_{\text{room}}`$ . On the other hand, owing to the low optical quality of typical semiconductor/oxide interfaces, oxides cannot be used directly as confining barriers. Our novel approach is based on the idea that quantum and dielectric confinement can be spatially separated, since they are effective over different length scales. In the proposed structures the enhanced electron-hole overlap induced by quantum confinement in conventional GaAs/AlGaAs structures is combined with the DC provided by polarization charges which form at a remote interface with a low-dielectric-constant material, typically an insulator; since electron and hole wavefunctions decay exponentially into the barrier, they will not be affected by the presumably disordered remote interface.
As an example of our approach, we discuss quantitative predictions for the case of a conventional V-shaped GaAs/AlAs QWI with two oxide layers added above and below the QWI at a distance $`L`$. The cross-section is shown in Fig. 1(a). The additional layers are characterized by a small dielectric constant that we take equal to 2 (see, e.g., Ref. ). For this structure, we find $`E_b=29.3`$ meV, to be compared with $`E_b=13\text{meV}`$ for the conventional (i.e., with no oxide layers) structure. Fig. 1(a) shows that the origin of this dramatic enhancement is the large polarization of the AlAs/oxide interfaces induced by the excited electron and hole charge densities. A small polarization charge is also induced at the GaAs/AlAs interface, due to the small dielectric mismatch. In Fig. 1(b) we show the calculated $`E_b`$ for selected values of $`L`$. Obviously, $`E_b`$ is maximum when the oxide layer is at the minimum distance, $`L=0`$, where it is enhanced by more than a factor of 3 with respect to $`E_0`$, and it is well above $`kT_{\text{room}}`$. The important point here is that $`E_b`$ decreases slowly, indeed as $`L^{-1}`$, with the distance $`L`$, and crosses $`kT_{\text{room}}`$ at $`L`$ as large as $`9\text{ nm}`$, where the effects of the disorder at the oxide/AlAs interface are very small.
(ii) Quantum dots in dielectric environments.
These structures have become accessible to transport studies only very recently. They are III-V or II-VI nanoparticles embedded in materials with different dielectric properties, such as organic matrices in a colloid. QDs in biological environments are also assuming increasing importance. The addition energies $`E_{add}(N)`$ (the energy required to add an electron to a QD containing N electrons) have been used to characterize these systems experimentally, but a theoretical description is still lacking. To obtain it, we must compute the ground state energy, $`E_0(N)`$, of the QD with $`N`$ interacting electrons (assumed to be confined in a spherical parabolic potential). The chemical potential of the QD with $`N`$ electrons is then $`\mu (N)=E_0(N)-E_0(N-1)`$, from which we obtain
$`E_{add}(N)=\mu (N+1)-\mu (N)`$ (4)
The ground state energies $`E_0(N)`$, obtained from a Hubbard-like approximation to the many-body hamiltonian , give rise to the addition spectra of Fig. 2. For $`N\leq 5`$, we have also performed exact diagonalizations of the many-electron hamiltonian, with results that are almost identical.
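The bookkeeping from ground state energies to the addition spectrum is elementary; a minimal sketch, with a hypothetical input array of $`E_0(N)`$ values, reads:

```python
import numpy as np

def addition_spectrum(e0):
    """Addition energies from ground-state energies.

    e0: array with e0[n] = E_0(N=n) for n = 0 ... N_max (e0[0] = 0).
    Implements mu(N) = E_0(N) - E_0(N-1) and Eq. (4).
    """
    e0 = np.asarray(e0, dtype=float)
    mu = np.diff(e0)        # mu[n-1] = mu(N=n)
    return np.diff(mu)      # E_add(N) = mu(N+1) - mu(N), N = 1, 2, ...
```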
The solid line in Fig. 2 is the calculated addition spectrum for a dielectrically homogeneous QD, i.e., $`ϵ=1`$ in Eq. (3). The peaks at $`N=2`$ and $`N=8`$ correspond to the addition of one electron to a QD with a closed s- and p-shell, respectively; the weaker peaks at $`N=5`$ and $`N=13`$ correspond to the addition of one electron to a QD with a half-filled outer shell where all spins are parallel, as expected for a filling of the shells according to Hund’s rule. When $`ϵ>1`$ the addition spectra of Fig. 2 are affected in several ways. Let us first consider the behaviour for $`N\leq 8`$. As $`ϵ`$ is increased, the spectra are shifted to higher energies, since a larger energy is needed to add new electrons to the QD due to the enhanced Coulomb repulsion. Note, however, that this shift is not rigid, and the half-shell peak at $`N=5`$ is enhanced with respect to the full-shell peaks at $`N=2`$ and $`N=8`$. This result is quite general and derives from the different combinations of direct and exchange Coulomb terms that enter the ground state energies determining the full and half-shell peaks.
The changes taking place at larger $`N`$ are more dramatic. As $`ϵ`$ is increased, the ordering and amplitude of the peaks deviate from the behaviour at $`ϵ=1`$. As can be seen in Fig. 2, half-shell peaks become comparable with full-shell ones, and additional features appear for the larger values of $`N`$. Inspection of the ground state configurations shows that this is due to a shell filling in violation of Hund’s rule. Above a critical value $`N_c`$, a reconstruction of the electronic configuration takes place, i.e., the added electron will not be arranged in the most external shell, leaving the remaining electrons in the previous configuration. Instead, it will cause other electrons in the inner shells to be promoted to shells of higher angular momentum. This reconstruction occurring at large values of the dielectric mismatch is similar to the one predicted for QDs in a strong magnetic field (where a similar enhancement of the Coulomb interaction takes place).
In summary, we have shown that dielectric confinement effects may strongly affect quantum states in dielectrically modulated nanostructures. By modulating the dielectric mismatch between different layers it is possible to tune the Coulomb interaction between the quantum confined states, in analogy to what is done by external magnetic fields and/or doping, thereby modifying substantially the optical and addition spectra of nanostructures.
This paper was supported in part by MURST-40% through grant ”Physics of nanostructures” and by INFM through grant PRA-SSQI.
# Diffractive Orbits in an Open Microwave Billiard
## Abstract
We demonstrate the existence and significance of diffractive orbits in an open microwave billiard, both experimentally and theoretically. Orbits that diffract off of a sharp edge strongly influence the conduction spectrum of this resonator, especially in the regime where there are no stable classical orbits. On resonance, the wavefunctions are influenced by both classical and diffractive orbits. Off resonance, the wavefunctions are determined by the constructive interference of multiple transient, nonperiodic orbits. Experimental, numerical, and semiclassical results are presented.
Recently, Katine et al. studied the transmission behavior of an open quantum billiard in the context of a two dimensional electron gas (2DEG) in a GaAs/AlGaAs heterostructure . Their resonator was formed by a wall with a small aperture, called a quantum point contact (QPC), and an arc-shaped reflector. A schematic of this resonator is shown in Fig. 1. The voltage on the reflector could be varied, effectively moving the reflector towards or away from the wall. Their measurements showed a series of conductance peaks, analogous to those seen in a Fabry-Perot, as the reflector position was varied.
As we discuss below, the resonator considered here represents a new class of billiards, to our knowledge not previously studied in the literature. That is, the billiard is geometrically open, but in the stable regime, it is classically closed. In the unstable regime, the resonance properties of the billiard are determined in large part by diffraction.
The resonator shown in Fig. 1 has two distinct modes of operation. When the center of curvature of the reflector is to the left of the wall (the regime studied in ), all classical paths starting from the QPC that hit the reflector remain forever in the region between the wall and the reflector: the dynamics is stable and the periodic orbits can be semiclassically quantized. Each quantized mode of the resonator can be characterized by two quantum numbers $`(n,m)`$, which represent the number of radial and angular nodes, respectively. As the reflector-wall separation is varied, the conductance exhibits a peak each time one of these quantized modes is allowed. Once an electron is in the resonator, the only way for it to leave is by tunneling back through the QPC or by diffracting around the reflector; since both processes are slow, the resonances have narrow widths. Because the QPC is on the symmetry axis, only modes with even $`m`$ can be excited.
When the center of curvature is to the right of the wall, however, the dynamics becomes unstable: all classical trajectories beginning at the QPC rapidly bounce out of the resonator, except for a single unstable periodic orbit along the axis of symmetry, which we will call the “horizontal” orbit \[see Fig 1(b)\]. Although the horizontal orbit returns to the QPC, it has a low probability of escaping the resonator there because the QPC is much smaller than the de Broglie wavelength of the electron. Because the horizontal orbit is the only periodic orbit in the unstable regime, one might expect resonant buildup only along the symmetry axis. Such a spectrum would be quasi-one-dimensional, with only the half-wavelength periodicity of a Fabry-Perot cavity. However, in numerical simulations it was found that there were other transmission resonances in the unstable regime which did not correspond to any classical periodic orbits . It was proposed that these anomalous peaks are supported by diffraction off the tips of the reflector. Unfortunately, in the mesoscopic experiments, decoherence of the electron wave by impurities in the GaAs/AlGaAs heterostructure shortens the lifetime of the resonances, leaving insufficient energy resolution to resolve the diffractive peaks .
For this reason, we decided to investigate a parallel plate microwave resonator with a similar geometry. In microwave experiments, decoherence and dissipation are not a problem, the geometry of the resonator can be specified much more accurately, and the dynamical range of available wavelengths is much larger. The experimental setup is shown in Fig. 2.
For the transverse electromagnetic (TEM) mode, it can be shown that the equation governing the component of the electric field normal to the plates is identical to the two-dimensional time-independent Schrödinger equation . Therefore, by studying the modes of parallel-plate resonators we can gain insight into the behavior of two-dimensional solutions to the Schrödinger equation.
The resonator consisted of two parallel copper plates, 1 meter square, separated by a distance of 1.25 cm. One side of the resonator consisted of a copper wall. The other three sides were lined with a 11.5 cm thick layer of microwave absorber (C-RAM LF-79, Cuming Microwave Corp.) designed to provide 20 dB of attenuation in the reflected wave intensity in the range 0.6-40 GHz. The absorber prevented outgoing waves from returning to the resonator, thereby simulating an open system in the directions away from the wall. An antenna was inserted normal to the plates, 2 mm from the wall, to simulate the QPC. The curved reflector was formed from a rectangular aluminum rod bent into an arc with radius of curvature $`R=30.5\text{ cm}`$. Various opening angles $`\alpha `$ were used: $`115^{\circ}`$, $`112^{\circ}`$, $`109^{\circ}`$, and $`106^{\circ}`$.
Instead of measuring the transmission of the resonator, we measured the reflection back to the antenna; for this we used an HP8720D network analyzer in ‘reflection’ mode (the complex $`S_{11}`$ parameter of the resonator was measured). We inferred the transmission probability $`|T|^2`$ via $`|T|^2=1-|R|^2`$, where $`R=S_{11}`$ is the measured reflection coefficient. Because of the proximity of the antenna to the wall, it was only weakly coupled to the resonator; therefore, in the absence of the reflector, the transmission coefficient was close to zero. However, when the reflector was present, the transmission experienced pronounced maxima at certain frequencies. In Fig. 3 we show a transmission spectrum at fixed frequency, as the distance between the wall and reflector is varied. In the unstable regime, there are two types of resonance. The first type, labeled $`f`$ in Fig. 3, is related to the horizontal orbit along the axis of symmetry, and bears some resemblance to a Fabry-Perot type resonance between two half-silvered mirrors. The second type, labeled $`d`$, is supported by diffraction off the tips of the reflector.
The wavefunctions corresponding to peaks $`f_1`$ and $`d_1`$ were measured using the technique of Maier and Slater . They showed that the frequency shift of a given resonance due to a small sphere of radius $`r_0`$ at a position $`(x,y)`$ is given by
$$\frac{\omega ^2-\omega _0^2}{\omega _0^2}=4\pi r_0^3\left(\frac{1}{2}H_0^2(x,y)-E_0^2(x,y)\right),$$
(1)
where $`E_0`$ and $`H_0`$ are the unperturbed electric and magnetic fields. Thus, the frequency shift is proportional to the local intensity of the microwave field, and by measuring the shift as a function of the position of the sphere, the field intensity of a particular mode can be mapped out. Note that the frequency shift will be positive in regions where the magnetic field is large, and negative where the electric field is large. Also, the factor of $`1/2`$ multiplying the magnetic field in Eq. (1) indicates that the sphere is a stronger perturbation to the electric field than to the magnetic field. In our measurements, we found this to be the case: the shifts were predominantly negative. Appreciable positive shifts were only found at the nodes of the electric field, corresponding to maxima of the magnetic field.
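A minimal sketch of the forward relation in Eq. (1), mapping given unperturbed field amplitudes to the shifted resonance frequency, may clarify the procedure; the experiment of course inverts this relation, and the function name and normalisations are illustrative.

```python
import numpy as np

def perturbed_frequency(e0, h0, r0, omega0):
    """Shifted frequency from Eq. (1) for a bead of radius r0.

    e0, h0: unperturbed |E| and |H| amplitudes at the bead position
    (consistently normalised); omega0: unperturbed resonance frequency.
    """
    rel = 4.0 * np.pi * r0**3 * (0.5 * h0**2 - e0**2)  # (w^2 - w0^2)/w0^2
    return omega0 * np.sqrt(1.0 + rel)
```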
Figure 4 shows theoretical quantum wavefunctions compared with experimentally measured frequency shifts for the resonances labeled by $`f_1`$ and $`d_1`$ in Fig. 3. The measured frequency shift is plotted as a function of sphere position. It is important to note that the frequency shift is not proportional to $`E^2`$, but rather to $`H^2/2E^2`$. Therefore we show only negative contour lines below 20% of the maximum negative shift, and thereby emphasize regions of strong electric field. The similarity between theory and experiment is striking.
The wavefunction labeled $`f_1`$ in Fig. 4 is clearly associated with the horizontal orbit along the axis of symmetry. Rays emanating from a point source located on the axis of symmetry next to the wall bounce off the reflector and come to an approximate focus about 10 cm from the source. The focus is approximate because of spherical (or in this case cylindrical) aberration.
Now we turn our attention to the state labeled $`d_1`$ in Fig. 4. As noted above, the only periodic orbit in the unstable regime is the horizontal orbit, along the axis of symmetry. The pictured wavefunction, however, clearly has very little amplitude along this periodic orbit. Instead the wavefunction has a band of higher amplitude running from the region of the tip of the mirror to the QPC, but in the unstable regime there is no classical periodic orbit that does this. Theoretical studies have suggested that states such as $`d_1`$ are supported by orbits that undergo diffraction off the tips of the reflector . One such orbit is shown in Fig. 1(b). Rays that hit the smooth surfaces of the reflector or wall undergo specular reflection, whereas the rays that hit near the reflector tips can be diffracted. A fraction of the wave amplitude can then return to the QPC from this region, thus setting up a non-classical closed orbit. All peaks labeled with a $`d`$ in Fig. 3 are supported by such diffractive orbits.
Numerical calculations have shown that for energies off resonance, the quantum wavefunction is often intermediate between those shown for $`f_1`$ and $`d_1`$, in the sense that amplitude seems to be running from the QPC to some point between the center of the mirror and the tip . This can be understood in terms of the interference of paths with each other as they “walk off” the horizontal orbit and escape the resonator. Thus diffraction does not play a major role in determining the off-resonance wavefunctions. However, diffraction is instrumental in determining the on-resonance wavefunctions underlying the conductance peaks $`d_1`$ and $`d_2`$ in Fig. 3.
Figure 5 shows a more global picture of the transmission properties of the resonator. Here we plot the transmission of the resonator as both the wavelength and the reflector-wall separation are varied. Each vertical slice through this figure is a frequency spectrum with fixed reflector position; the dotted line marks the classical transition from stable to unstable motion that occurs when the reflector’s center of curvature moves to the right of the QPC. The vertical axis indicates how many wavelengths fit along the horizontal orbit between the QPC and the reflector. The repetition of the resonance pattern every half-wavelength in the vertical direction is analogous to the half-wavelength periodicity of a Fabry-Perot cavity.
In the stable regime we have labeled the peaks with their quantum numbers, $`(n,m)`$. The vertical axis is chosen to make the $`m=0`$ resonance peaks approximately horizontal in this figure. As the stable/unstable transition is approached, the peaks with high $`m`$ disappear one by one because their large angular sizes allow them to escape around the reflector.
At the stable/unstable transition, all of the resonances in a family would be approximately degenerate, but instead there is an avoided crossing. The level repulsion is caused by a coupling that is partly mediated by diffraction; this subject will be explored more thoroughly in a future publication.
In the unstable regime, the only remaining classical periodic orbit is the horizontal orbit, which itself becomes unstable. The Fabry-Perot peak (labeled $`f`$) is essentially quantized along the horizontal orbit, so its position shows a simple dependence on reflector position. It becomes broad in the unstable regime, with a lifetime given by the classical Lyapunov stability exponent of the horizontal orbit. Two diffractive resonances (labeled by $`d`$), are also visible; they separate from the Fabry-Perot type peak as the reflector is moved away from the wall. If the angle $`\alpha `$ subtended by the mirror is changed, the position of the Fabry-Perot peak remains unaffected whereas the diffractive peaks shift.
The diffractive peaks labeled by $`d`$ in Fig. 5 cannot be explained by semiclassical theory unless diffraction off the tips of the reflector is included. The semiclassical calculation involves launching a manifold of rays from the QPC, tracking their phases as they reflect off the reflector and cross caustics or foci, and then adding coherently the amplitudes of any orbits that return to the QPC. To include diffraction, we also allow for the fact that every ray that hits the tip of the reflector is scattered in all directions, with an angle-dependent amplitude . Any of those scattered rays that return to the QPC gives an additional contribution to the conductance. The details of the semiclassical theory including diffraction will be presented in a future paper.
Further evidence of the importance of diffractive orbits is contained in the return spectrum (Fig. 6), which is the Fourier transform of the complex reflection scattering matrix element $`S_{11}(\omega )`$. That is, if a short pulse were emitted from the antenna at time $`t=0`$, echoes would return to the antenna at certain later times. These echoes are indicated by peaks in the return spectrum. Many of the return peaks are split due to the coexistence of the horizontal orbit and diffractive orbits with slightly shorter return times. The horizontal orbit and its repetitions, which have lengths indicated in Fig. 6 by long vertical bars, cause the primary peaks in each group. In addition, near each primary peak there is a family of diffractive orbits (with lengths indicated in Fig. 6 by short vertical bars) which combine to form a secondary peak. The presence of this splitting in the return spectrum is strong evidence in support of the claim that diffraction off the edges of the reflector supports other closed orbits, which lead to resonances in the transmission spectra. Note that for the long orbits, the diffractive peaks are even stronger than the peaks from the geometric orbit. This is because the number of diffractive orbits increases linearly with the length of the orbit, whereas there is always only one geometric orbit, regardless of length.
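Schematically, the return spectrum is obtained from the measured reflection coefficient as in the following sketch, which converts echo times into orbit lengths via the speed of light; the sampling details are hypothetical.

```python
import numpy as np

def return_spectrum(s11, freqs):
    """|Fourier transform of S11(omega)| versus round-trip path length.

    s11: complex reflection coefficient on the uniform frequency grid
    freqs (in Hz). Peaks appear at the lengths of closed orbits.
    """
    c = 2.99792458e8                        # speed of light, m/s
    df = freqs[1] - freqs[0]
    amp = np.abs(np.fft.fft(s11))
    times = np.fft.fftfreq(len(s11), d=df)  # conjugate variable: seconds
    lengths = c * times                     # echo time -> path length
    keep = lengths >= 0
    return lengths[keep], amp[keep]
```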
In summary, we have demonstrated the existence of diffractive orbits in an open microwave billiard, which give rise to wavefunctions that would not be predicted by a simple semiclassical theory. Such orbits are of importance in open, unstable systems where the number of unstable classical periodic orbits is small. In such systems, diffraction can play a major role in determining the spectrum of the system.
We thank the Hewlett Packard Corporation for the loan of a network analyzer that was used in these experiments. We thank J. D. Edwards for the computer program that was used for the quantum computations. This work was supported through funding from Harvard University, ITAMP, and also Grant No. NSF-CHE9610501.
# Oxygen Ordering Superstructures and Structural Phase Diagram of YBa2Cu3O6+x Studied by Hard X-ray Diffraction
## I Introduction
It is now well-established that YBa<sub>2</sub>Cu<sub>3</sub>O<sub>6+x</sub> (YBCO) is antiferromagnetic (AF) in the tetragonal structural phase for $`0<x<0.35`$, and becomes superconducting at low temperatures in the weakly distorted orthorhombic phase for $`0.35\leq x<1`$. By high temperature thermal treatment in a suitable oxygen pressure it is possible to vary the oxygen composition in a continuous way , which changes the electronic properties from an AF insulator via an underdoped superconductor to an optimally doped high-$`T_c`$ superconductor with $`T_c=93`$ K, and a slightly overdoped material for $`x>0.93`$. YBCO has therefore become a major model system for basic studies of high-$`T_c`$ superconductivity and it is a leading candidate for technological applications.
Structural refinement of neutron diffraction data has shown that the variable amount of oxygen resides in the CuO<sub>x</sub> basal plane . A strong tendency towards formation and alignment of Cu-O chains gives rise to the orthorhombic distortion below the temperature dependent transition line between the tetragonal and the orthorhombic phase. The importance of oxygen ordering for the superconducting properties has been verified directly from experimental studies where crystals are quenched from the tetragonal disordered into the orthorhombic ordered phase. Here it is found that $`T_c`$ of quenched YBCO is reduced compared to the equilibrium value and increases with time when the sample is annealed at room temperature . Equally, it has been observed that the oxygen ordering of quenched YBCO crystals increases with time . The consensus is therefore that the Cu-O chain ordering in the CuO<sub>x</sub> basal plane controls the charge transfer leading to superconductivity in the CuO<sub>2</sub> planes. However, in spite of the very large number of experimental and theoretical model studies, there is still no definite microscopic understanding of how the charge transfer is controlled by the Cu-O chain length and superstructure ordering. Thus, it is not settled how the electronic states formed in the Cu-O chains hybridize with the electronic structure in the CuO<sub>2</sub> planes, give rise to the charge transfer and contribute to the anisotropy of the superconducting properties. A likely explanation is that the available structural information is not sufficiently detailed and unambiguous, because the oxygen diffusion kinetics is too slow at the temperatures where the superstructures become stable. It has therefore not been established which superstructures are actually formed as function of oxygen composition $`x`$, impurity level and thermal treatment, and how they influence the electronic states.
The orthorhombic phase found below the tetragonal to orthorhombic phase transition has the basic ortho-I structure, and it is formed by Cu-O chains that align along the $`b`$ axis with oxygen on the so-called O(1) site, whereas the sites on the $`a`$ axis (called O(5)) are essentially empty. Ortho-I is a 3D long range ordered structure, but in commonly prepared crystals true long range order is prevented by the formation of twin-domains with domain sizes ranging from a few hundred Ångström to macroscopic size. Clearly, there is disorder in the ortho-I chain structure for compositions $`x<1.0`$. Therefore, in thermodynamic equilibrium ordered superstructures must be formed for $`T\to 0`$. At $`x=0.5`$ an ideal 3D ortho-II superstructure may in principle be formed inside the ortho-I twin domains, with perfect Cu-O chains on every second $`b`$ axis while the remaining ones are empty, i.e. they contain only Cu. 3D ordering with the Cu-O chains stacked on top of one another along the $`c`$ axis has been observed , but only with finite size ordering in all three crystallographic directions.
Electron microscopy techniques have had a leading role in establishing the superstructures of YBCO . However, the need for confirmation by bulk structural techniques is generally recognized, because electron beam heating of thin crystals may change the mobile oxygen content $`x`$ and generate transient non-equilibrium surface structures. Also, it is difficult to obtain quantitative details about the finite size ordering properties and their temperature dependence by these techniques. The observed superstructures include the Cu-O chain type of ordering such as ortho-II, as well as more complex ordering sequences of essentially full (Cu-O) and empty (Cu) chains with periodicity $`ma`$ along the $`a`$ axis and corresponding superstructure reflections at modulation scattering vectors: $`\stackrel{}{Q}=(nh_m\mathrm{\hspace{0.33em}0\hspace{0.33em}0})`$, where $`h_m=1/m`$, and $`n<m`$ are integers, and the coordinates refer to the reciprocal lattice vectors. Superstructure reflections with $`m=2,3,4,5`$ and $`8`$, which we shall call ortho-II, ortho-III, ortho-IV, ortho-V and ortho-VIII, respectively, have been observed experimentally. Ideally, these superstructures may be symbolized by their sequence of full (1) Cu-O and empty (0) Cu chains. Thus, ortho-II is simply (10) with $`x=1/2`$, and the ortho-III sequence is (110) with composition $`x=2/3`$. Ortho-IV has an ordered sequence of (1110) and composition $`x=3/4`$, and ortho-V is a sequential ordering of ortho-II and ortho-III, i.e. (10110), with composition $`x=3/5`$. Ortho-VIII is ortho-V combined with ortho-III in the sequence (10110110), with ideal composition $`x=5/8`$. In principle, similar structures with full and empty chains interchanged may be stable, but they have not been observed experimentally.
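As a bookkeeping aid, the positions of the ortho-$`m`$ superstructure reflections along $`a^{}`$ follow directly from the modulation rule $`h_m=n/m`$; a one-line sketch:

```python
def superstructure_reflections(m):
    """Reduced positions h = n/m (n = 1 ... m-1) of ortho-m reflections."""
    return [n / m for n in range(1, m)]

print(superstructure_reflections(8))   # ortho-VIII: 1/8, 2/8, ..., 7/8
```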
Superstructures with unit cells $`2\sqrt{2}a\times 2\sqrt{2}a\times c`$ and $`\sqrt{2}a\times 2\sqrt{2}a\times c`$ , the so-called herringbone type, have been reported. However, as we shall discuss in Section IV, these superstructures most likely do not originate from oxygen ordering. The ortho-II and ortho-III superstructures have been verified as bulk structural phases by x-ray and neutron diffraction techniques. The first observation by x-ray diffraction was made by Fleming et al. for ortho-II and by Plakhty et al. for ortho-III. Similarly, the first neutron diffraction data were presented by Zeiske et al. for ortho-II, and by Plakhty et al. and Schleger et al. for ortho-III. Analysis of structure factors obtained from a combination of neutron and x-ray diffraction data has unequivocally shown that the ortho-II and ortho-III superstructures result from oxygen ordering in Cu-O chains . However, relaxation of cations associated with the oxygen chain ordering contributes significantly to the superstructure intensities. In particular the barium displacement has a strong influence on the x-ray diffraction intensity. The displacements show only minor variation with oxygen composition. Also, in the ortho-II phase there was found no significant change in the displacements as function of temperature . Compiling previous room temperature data from x-ray and neutron diffraction studies, we find that the ortho-II superstructure has been observed for oxygen compositions $`0.35\leq x\leq 0.7`$ and the ortho-III superstructure for $`0.7\leq x\leq 0.77`$. As we shall present below, we have observed clear indications of bulk phase ortho-V correlations for $`x=0.62`$ and ortho-VIII for $`x=0.67`$, but we have found no evidence for the ortho-IV superstructure. Only a few of the previous studies have been carried out above room temperature .
From previous neutron and hard x-ray diffraction studies it has been inferred that the finite size ordering of ortho-II results from formation of anti-phase domains inside the ortho-I twin domains below an ordering temperature of $`T_{OII}=125(5)`$ °C . Anisotropic superstructure reflections with a Lorentzian-squared line shape were found in the ordered state, as expected from Porod’s law: $`S(q)\propto \frac{1}{q^{d+1}}`$ for finite size ordered domains with sharp boundaries in spatial dimension $`d`$. Studies of the ordering kinetics following a quench in temperature from the ortho-I into the ortho-II phase have shown a time dependent domain growth that is algebraic at early times and logarithmic at late times. The characteristic time of the growth process is activated, with an activation energy of 1.4 eV. At 70 °C it is some days and it extrapolates to several years at room temperature. On this basis it was suggested that the finite size ordering may result from pinning of the domain walls by impurities or defects, but recent computer simulations have shown that intrinsic slowing down due to the large effective activation energy for movement of long Cu-O chains may contribute as well .
In the present paper we report on experimental studies of the oxygen ordering in YBCO covering the oxygen compositions $`0.35\leq x\leq 0.87`$ and temperatures up to 250 °C, by diffraction of high energy synchrotron radiation ($`\approx 100`$ keV). Thus, our studies do not include the region at and above the optimal doping level $`x\approx 0.93`$ where Kaldis et al. have observed structural anomalies (see e.g. Ref. ). The high energy x-ray diffraction technique combines the high penetration power of neutrons with high momentum space resolution. The penetration depth of 100 keV x-rays in YBCO is of the order of 1 mm. This assures that we probe the bulk properties of the samples and are insensitive to oxygen diffusion in and out of the surface, and studies in sample environments with varying temperatures and controlled atmospheres are easily accessible. Finally, the synchrotron intensity is so high that scattering signals down to a factor of 10<sup>8</sup> smaller than the fundamental Bragg peaks can be resolved, and the kinetics of the ordering can be studied with a time resolution of 1 second. From our studies we present temperature scans of the structure factors of the superstructures, determine their phase boundaries and the nature of the ordering. Firstly, we show that the superstructures including ortho-V and ortho-VIII, which Beyers et al. have observed by electron microscopy, represent bulk structural phases. Secondly, we present extensive studies of the ortho-II superstructure ordering in crystals of different quality and thermal treatment. Finally, we use the present structural data jointly with data compiled from previous studies to establish a structural phase diagram that includes the oxygen superstructures and the tetragonal to orthorhombic (ortho-I) transition temperature, $`T_{TOI}`$, obtained by neutron powder diffraction. We also review and compare with structural findings by other groups and discuss our results in relation to structural model studies.
The layout of the paper is as follows: In Section II we supply information about the crystal growth and oxidation of the sample (II A), details about the experimental setup (II B), and the data analysis (II C). The experimental results are presented in Section III. First we account for the results for the ortho-II superstructure formation (III A). This includes the dependence of the ortho-II correlation length on crystal quality and thermal treatment of the sample, the phase transition into the ortho-I phase, and the stability range of the composition $`x`$. Then we describe the ordering properties and the stability range of the ortho-III superstructure (III B). In Subsections III C and III D we present the properties of the ortho-V and ortho-VIII superstructure ordering, respectively, and a structural phase diagram of the oxygen ordering is presented in Section III E. In Section IV we discuss our experimental structural results in relation to other structural findings and their importance for charge transfer, and to theoretical model descriptions. A concluding summary is given in Section V.
## II Experimental details
### A Sample Preparation
The single crystals used to study the different superstructure phases and establish the phase diagram were grown in YSZ (yttria-stabilized zirconia) crucibles by a flux growth method using chemicals of 99.999 % purity for Y<sub>2</sub>O<sub>3</sub> and CuO, and 99.997 % for BaCO<sub>3</sub>. The impurity level of the crystals has been analyzed by ICP-MS (Inductively Coupled Plasma Mass Spectroscopy). The Zr content of the crystals was found to be less than 10 ppm by weight. The major impurities were Al, Fe and Zn, the sum of which amounts to less than 0.2 % atom per unit cell. When optimally doped $`(x=0.93)`$ these crystals have $`T_c=93.2`$ K and the width of the 10 % - 90 % diamagnetic response is $`\mathrm{\Delta }T_c=0.3`$ K.
Of the four crystals used to study the ortho-II ordering properties as a function of sample purity, crystals #1, #3 and #4 were grown by the flux growth technique described in Ref. , and crystal #2 as described above. Crystal #1 was used to study the ortho-II ordering properties when exposed to six different annealing methods.
YSZ crucibles, and chemicals of purity better than 99.99 % and 99.9 % were used for crystals #1 and #3, respectively. Crystal #4 was grown in a SnO<sub>2</sub> crucible with chemicals of purity better than 99.99 %. The impurity level of crystals #1, #3 and #4 has not been determined directly, but the flux from the crucibles used to grow crystals #1 and #4 has been analyzed. Excessive amounts of Zr and Hf (up to 15000 ppm for Zr and 550 ppm for Hf) were found, but these elements are known to have a very low solubility in YBCO. Major impurity components were Al with concentrations 200 ppm and 335 ppm in the flux from crystal #1 and crystal #4, respectively, and similarly for Eu: 115 ppm and 110 ppm. The superconducting transition temperatures and the widths of the transitions have been determined in the fully oxygenated state $`(x=0.99)`$. They are: $`T_c=91.0`$ K, $`\mathrm{\Delta }T_c<1.0`$K for crystal #1, $`T_c=92.0`$ K, $`\mathrm{\Delta }T_c=1.5`$ K for crystal #3, and $`T_c=91.5`$ K, $`\mathrm{\Delta }T_c<0.5`$ K for crystal #4. The lower $`T_c`$ and larger $`\mathrm{\Delta }T_c`$ values of these crystals are not necessarily a consequence of bad crystal quality but rather a result of overdoping. Thus crystal #1 has $`T_c=93.5`$ K and $`\mathrm{\Delta }T_c<0.5`$ K when optimally doped. For all crystals prepared by high purity chemicals (better than 99.99 %) it is likely that the impurities come from minority components of the crucible material or from the furnace walls. All the crystals are plate-like with thicknesses of $`\frac{1}{2}`$ to 1 mm, flat dimensions of 1.5 to 3 mm and weights ranging from 10 to 70 mg.
The oxygen composition of the crystals was changed by use of gas-volumetric equipment . For reduction or oxidation the crystals are heated with a suitable amount (about 10 g) of YBCO buffer powder in a quartz tube connected to an external closed volume system. High purity oxygen gas (99.999 %) is supplied to the system and the pressure is controlled and monitored by use of high precision absolute pressure gauges (MKS Baratron) with accuracy and resolution better than 0.01 %. The crystals and the powder are wrapped in platinum foil to prevent reaction with the quartz tube. The closed volume system is made of ultra-high vacuum components and contained in a thermostatically controlled environment, which allows for accurate determination of the oxygen pressure and the oxygen uptake by the powder and the crystals. The water adsorbed in the system and the materials is removed prior to the preparation by use of a liquid nitrogen trap and heating the quartz tube to 300 °C. At this temperature there is no reduction of the crystal, and the oxygen equilibrium pressure is in any case sufficiently low that the oxygen does not condense in the trap. The desired oxygen composition $`x`$ is usually established by pumping out or adding a suitable amount of oxygen gas at temperatures between 500 and 600 °C. During subsequent cooling the oxygen pressure is reduced to assure that $`x`$ stays essentially constant. A final long-time anneal may be performed at lower temperatures to obtain equilibrium between the powder and the single crystals and develop the oxygen ordering superstructures. For studies of the structural phase diagram a characteristic procedure to establish the superstructure is annealing at 80 °C for 10 hours and cooling by 1 °C/hour to room temperature, where the crystal is stored for more than one week before the measurements.
If the starting oxygen composition of the buffer powder is known and it is assumed that the crystals are in equilibrium with the powder, the oxygen composition may be determined with an accuracy better than $`\mathrm{\Delta }x=0.02`$ by use of the ideal gas law. The resulting oxygen composition $`x`$ has been compared with the known values of the oxygen equilibrium pressure determined by Schleger et al. , and full agreement has been established in all cases. Crystals prepared previously by this method have been examined by neutron diffraction, and the oxygen composition $`x`$ determined from crystallographic analysis of 375 unique reflections was found to be in full agreement with the values obtained from the gas-volumetry .
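The gas-volumetric bookkeeping amounts to the ideal gas law; a minimal sketch, with hypothetical calibration inputs, of how the change in oxygen content follows from the measured pressure drop in the closed volume:

```python
def delta_x(p_before, p_after, volume, temp, moles_ybco):
    """Change in oxygen content x from a pressure drop (ideal gas law).

    p in Pa, volume in m^3, temp in K, moles_ybco in mol. Each absorbed
    O2 molecule contributes two oxygen atoms to the YBCO in the tube.
    The volume calibration and buffer amount are experimental inputs.
    """
    R = 8.314462618  # gas constant, J/(mol K)
    d_moles_o2 = (p_before - p_after) * volume / (R * temp)
    return 2.0 * d_moles_o2 / moles_ybco
```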
### B Instrument
The experiments were performed on a triple axis diffractometer at the high-energy beam line BW5 at HASYLAB in Hamburg . The diffractometer operates in horizontal Laue scattering geometry and is equipped with a Huber 512 Eulerian cradle and a solid state Ge detector. The insertion device is a high field wiggler with a critical energy of 26.5 keV at the minimum gap of 20 mm. A 1.5 mm copper filter cuts the spectrum below 50 keV, thereby minimizing the heat load on the monochromator crystal. The incident radiation, with an energy in the range of 100 keV, has a penetration depth of about 1 mm in YBCO samples. For monochromator and analyzer crystals either $`(\mathrm{2\hspace{0.33em}0\hspace{0.33em}0})`$ SrTiO<sub>3</sub> crystals or $`(\mathrm{1\hspace{0.33em}1\hspace{0.33em}1})`$ Si/TaSi<sub>2</sub> crystals were used. Both types of crystals had a mosaic spread of about 50” (arc seconds), resulting in a longitudinal resolution of 0.0075 Å<sup>-1</sup> at the $`(\mathrm{2\hspace{0.33em}0\hspace{0.33em}0})`$ reflection of YBa<sub>2</sub>Cu<sub>3</sub>O<sub>6+x</sub>. The transverse resolution is limited by the sample mosaicity, which was in the range of 0.05-0.1° for our samples, corresponding to about 0.0015 Å<sup>-1</sup> at the $`(\mathrm{2\hspace{0.33em}0\hspace{0.33em}0})`$ reflection. The vertical resolution depends on the setting of the slits before and behind the sample. They were usually set to integrate the scattering over a quarter of a reciprocal lattice unit, that is 0.40 Å<sup>-1</sup> along the $`a`$ and $`b`$ axes and 0.13 Å<sup>-1</sup> along the $`c`$ axis. The sample was wrapped in Al-foil and mounted in a small furnace designed for use in an Eulerian cradle. The furnace temperature was stable within 1 °C. An inert atmosphere of 0.3 bar Ar was introduced into the furnace to prevent oxidation of the crystals. From the gas-volumetric preparations it is established that the reduction is negligible for temperatures below 300 °C, and we observed no changes in the structural properties that could be related to a change of oxygen composition during temperature cycling at temperatures below 250 °C.
### C Analysis of superstructure data
The ortho-II superstructure reflections are well described by the scattering function
$$S(𝐪)=A/(1+(q_h/\mathrm{\Gamma }_h)^2+(q_k/\mathrm{\Gamma }_k)^2+(q_l/\mathrm{\Gamma }_l)^2)^y$$
(1)
where $`q_i`$, $`i=h,k,l`$, is the reduced momentum transfer and $`\mathrm{\Gamma }_i`$ the reduced inverse correlation length, related to the correlation length $`\xi _i`$ by $`\xi _h=a/(2\pi \mathrm{\Gamma }_h)`$ for the $`a`$ direction and analogously along $`b`$ and $`c`$. Equation 1 is a 3D anisotropic Lorentzian raised to the power $`y`$. The scattering function $`S(𝐪)`$ has been convoluted with the resolution function, which is treated as a $`\delta `$-function in the scattering plane and an integrating function in the perpendicular direction. The exponent $`y`$ reflects the distribution of domain sizes: $`y=1`$ points to an exponential decrease of the pair correlations, as for example in the ramified clusters typical of critical fluctuations above the transition temperature. The exponent $`y=2`$ may result from a domain size distribution around an average value $`\mathrm{\Gamma }_i`$ and, as mentioned in the Introduction, the asymptotic behaviour for large $`q`$ is in agreement with Porod’s law for scattering from 3D finite size domains with sharp boundaries. Furthermore, Bray has shown that the tail of the scattering function from a topological defect of dimension $`m`$, in a system of dimension $`d`$, is given by $`S(q)\propto 1/q^{2d-m}`$ . The relation between $`\mathrm{\Gamma }`$ and the peak width (HWHM = $`\mathrm{\Delta }`$) is $`\mathrm{\Gamma }=\mathrm{\Delta }/\sqrt{2^{1/y}-1}`$. When full integration of the superstructure peak is performed perpendicular to the scattering plane by relaxing the vertical aperture, the in-plane scattering function derived from Eq. 1 is described by a Lorentzian to the power $`y^{\prime }=y-\frac{1}{2}`$.
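The following sketch illustrates Eq. 1 and the conversions just described, i.e. from a fitted HWHM and exponent $`y`$ to the inverse correlation length $`\mathrm{\Gamma }`$ and the correlation length $`\xi `$. The lattice constant and the example width are assumed values used only for illustration.

```python
import numpy as np

def S(q, A, Gamma, y):
    """Eq. (1): 3D anisotropic Lorentzian raised to the power y.
    q and Gamma are 3-vectors (components along h, k, l)."""
    q = np.asarray(q, dtype=float)
    Gamma = np.asarray(Gamma, dtype=float)
    return A / (1.0 + np.sum((q / Gamma) ** 2)) ** y

def gamma_from_hwhm(delta, y):
    """Invert Delta = Gamma * sqrt(2**(1/y) - 1) to get the reduced inverse
    correlation length from a fitted half width at half maximum."""
    return delta / np.sqrt(2.0 ** (1.0 / y) - 1.0)

def xi_from_gamma(gamma, lattice_const):
    """Correlation length xi = a / (2 pi Gamma); analogous along b and c."""
    return lattice_const / (2.0 * np.pi * gamma)

# Example: Lorentzian-squared peak (y = 2) with an assumed HWHM of 0.01 r.l.u.
a = 3.82  # YBCO a lattice constant in Angstrom (assumed here for illustration)
print(xi_from_gamma(gamma_from_hwhm(0.01, 2.0), a), "Angstrom along a")
```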
The ortho-III, ortho-V and ortho-VIII superstructures are essentially 2D ordered, giving rise to significant overlap of the peaks along $`l`$ . In this case full integration in the vertical direction cannot be obtained when the $`c`$ axis is perpendicular to the scattering plane. For the 2D ordered superstructures with finite domain size and sharp boundaries it is therefore expected that the scattering function in the $`ab`$ plane should be a Lorentzian to the power $`y^{\prime }=y=3/2`$, because the integration along $`c`$ is rather incomplete, whereas it should be a simple Lorentzian ($`y^{\prime }=y-1/2=1`$) when the $`c`$ direction is in the scattering plane and full integration along the $`a`$ or $`b`$ direction is performed.
## III Results
### A The ortho-II superstructure
The ortho-II phase is observed as a 3D ordered structure with anisotropic correlation lengths at compositions $`0.35\le x<0.62`$, and no other superstructure was found in this range of oxygen content. In particular, at $`x=0.35`$ reciprocal space was surveyed unsuccessfully for correlations of the herringbone, $`\sqrt{2}a\times 2\sqrt{2}a\times c`$, or $`2\sqrt{2}a\times 2\sqrt{2}a\times c`$ type structures which were observed in other studies by electron , x-ray and neutron diffraction .
The largest correlation lengths are obtained for the ortho-II phase at $`x=0.50`$. An example of the $`(\mathrm{2.5\hspace{0.33em}0\hspace{0.33em}0})`$ superstructure reflection measured in crystal #1 is shown in Fig. 1. The crystal had been annealed by the thermal procedure marked 2 below. The room temperature properties of the oxygen superstructures are strongly dependent on the crystal quality and the thermal treatment. The four crystals labeled #1 - #4 in section II A have been prepared with $`x=0.50`$, annealed at 500 °C for 6 days, cooled at 10 °C/hour to 100 °C, where they were annealed for 36 hours, and then quenched to room temperature. The diffraction studies were performed 5-10 days later. With this thermal treatment we assume, on the basis of the results presented below and in Ref. REFERENCES, that all the crystals have reached the late stage of ortho-II domain growth, and the influence of the different room temperature annealing times is considered to be small compared to the differences due to impurities and defects.
Although it is possible to determine the impurity level in a crystal, it is often not known on which lattice sites the various impurities are located and what influence they have on the oxygen ordering. However, a crystal prepared as described in Ref. REFERENCES using Al<sub>2</sub>O<sub>3</sub> as crucible material contained 6 mole % Al, which by neutron diffraction studies was found to be located on the Cu(1) site in the basal plane . This crystal did not show ortho-II superstructure ordering when it was prepared with $`x=0.50`$. The nature and influence of lattice defects are equally difficult to quantify. One way to measure the overall quality of the crystal lattice is the mosaicity, i.e. the width of the rocking scan of the sample. Fig. 2 shows the HWHM of $`h,k`$ and $`l`$ scans of the ortho-II superstructure reflection, obtained at room temperature, versus the mosaicity for the four crystals #1 - #4. The $`h`$ and $`l`$ scans are measured in the $`ac`$ scattering plane, and the scan along $`k`$ in the $`ab`$ plane. In both cases a full integration over the respective vertical widths, $`\mathrm{\Delta }k`$ and $`\mathrm{\Delta }l`$, is performed. It is immediately obvious that crystal #3, prepared with low purity chemicals, has the largest mosaicity and the broadest ortho-II peaks. Thus, the purity of the chemicals is crucial for the development of large ortho-II domains. For crystals #1, #2 and #4 a linear relation between mosaicity and the HWHM of the ortho-II superlattice peak is found, with small deviations of the width along $`h`$. With the mosaicity as a criterion of crystal quality, crystal #1 is the most perfect one. This is corroborated by magneto-optic studies of the magnetic flux flow in the crystals. Crystal #1 is the only one that shows flux flow instability, which is considered to be a signature of very high crystal quality .
The influence of the thermal treatment has been studied in crystal #1. After the preparation for $`x=0.50`$ mentioned above, the crystal has been annealed in the following six ways:
1. 70 days annealing at 80 °C, cooling to room temperature in steps of 1 °C/hour
2. 5 hours annealing at 100 °C, quenched to room temperature and stored for 10 days
3. quenched from 170 °C to room temperature and stored at room temperature for 97 days
4. cooled down from 170 °C to room temperature in steps of 10 °C every 10 minutes
5. cooled down from 170 °C to room temperature at 4 °C/minute
6. quenched from 170 °C to room temperature within 3 minutes
In Fig. 3 the normalized peak intensity is plotted versus the inverse correlation lengths measured at room temperature at the $`(\mathrm{2.5\hspace{0.33em}0\hspace{0.33em}5})`$ superstructure reflection for the differently treated samples. The normalization of the peak intensity is made relative to the background. As mentioned above, the $`h`$ and $`l`$ scans are measured in the $`ac`$ plane, the scans along $`k`$ in the $`ab`$ plane, and a full integration over the vertical widths is performed. The relation between the measured peak intensity and the peak widths in all three directions follows a quadratic dependence, as indicated by the lines. This shows that the ratios between the line-widths $`\mathrm{\Gamma }_h/\mathrm{\Gamma }_k`$ and $`\mathrm{\Gamma }_l/\mathrm{\Gamma }_k`$ are independent of the thermal treatment. Further, since the measured peak intensity, $`I_{peak}^{obs}`$, includes an integral over the direction perpendicular to the scattering plane, we establish that the total integrated intensity, $`I_{int}`$, is also independent of thermal treatment. This is easily seen from Fig. 3 and the following relations, where the integration is assumed to be along $`k`$:
$$I_{int}\propto I_{peak}\mathrm{\Gamma }_h\mathrm{\Gamma }_k\mathrm{\Gamma }_l$$
$$I_{peak}^{obs}\propto I_{peak}\mathrm{\Gamma }_k\propto I_{int}/(\mathrm{\Gamma }_h\mathrm{\Gamma }_l)\propto I_{int}/\mathrm{\Gamma }_h^2$$
Accordingly, only the correlation lengths of the superstructure, and thereby the characteristic domain size, depend on the sample treatment. This indicates that the finite size domains have internal thermodynamic order and fill the crystal. Studies of the time dependent oxygen ordering following a temperature quench from the ortho-I into the ortho-II phase at this composition confirm that the integrated intensity depends only on the temperature. These results will be published elsewhere .
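A minimal numerical check of the argument above: if $`I_{int}`$ is independent of the thermal treatment, the measured peak intensity should scale as $`\mathrm{\Gamma }_h^2`$. The sketch below fits the logarithmic slope for a set of hypothetical (intensity, width) pairs; the actual data are those of Fig. 3.

```python
import numpy as np

# Peak intensity vs. inverse correlation length for differently annealed samples.
# If I_peak^obs ~ I_int / Gamma_h^2 with I_int independent of treatment, then
# log(I_peak) vs log(Gamma_h) should be linear with slope -2.
gamma_h = np.array([0.010, 0.014, 0.020, 0.028])   # hypothetical widths (r.l.u.)
i_peak = np.array([4.0e4, 2.0e4, 1.0e4, 5.1e3])    # hypothetical normalized intensities

slope, intercept = np.polyfit(np.log(gamma_h), np.log(i_peak), 1)
print(f"fitted power: {slope:.2f} (expect -2 for treatment-independent I_int)")
```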
It is instructive to consider the temperature variation of the ortho-II structure for crystal #1 prepared by thermal treatment 1. The phase transition from the ortho-II phase into the ortho-I phase was studied by means of $`\omega `$-scans at the $`(\mathrm{2.5\hspace{0.33em}0\hspace{0.33em}5})`$ reflection during initial heating and subsequent cooling within one hour. The result is shown in Fig. 4. Clearly, the peak intensity is lower and the $`\omega `$ width is larger on cooling than on heating. However, the integrated intensity, calculated as $`I_{peak}^{obs}\times \omega ^2`$, is found to be the same during heating and cooling. Since the results of the ortho-II superstructure ordering indicate that internal superstructure order is established inside the finite size domains, it is appropriate to define a transition temperature, $`T_{OII}`$. Several criteria may be used. Firstly, the variation of the peak intensity of the superstructure reflection, plotted in the top part of Fig. 4, shows an inflection point at 95 °C, as determined by the minimum of the normalized slope (N.S.) of the peak intensity, plotted in the inset. The inflection point of the peak intensity indicates the cross-over from static order to critical fluctuations, i.e. the transition temperature $`T_{OII}`$. Secondly, the onset of the line broadening of the superstructure reflection also marks the transition temperature. The temperature dependence of the line width above the transition can be well described by the critical exponent $`\nu =0.63`$ of the 3D Ising model:
$$\mathrm{\Delta }(T)=\mathrm{\Delta }_0^\pm |T-T_c|^\nu ,$$
(2)
where $`\mathrm{\Delta }_0^\pm `$ are the amplitudes below and above the transition temperature. This behavior is shown in the middle part of Fig. 4. From the fit to the heating data in the critical region we find $`T_{OII}=95`$ °C. Thirdly, the line shape changes from approximately a Lorentzian squared, i.e. $`y^{\prime }=\frac{3}{2}`$, at room temperature to a simple Lorentzian at the transition temperature, $`T_{OII}=95`$ °C, as shown in the bottom part of Fig. 4. A line shape described by a simple Lorentzian is characteristic of critical fluctuations above the transition temperature. During cooling a drastic slowing down of the ordering process is observed at temperatures close to the transition temperature, as seen in the variation of the peak intensity, which starts to deviate from the heating data at 105 °C.
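A sketch of the critical-width fit of Eq. 2 is given below, using SciPy’s curve_fit on hypothetical heating-cycle data; the fit parameters are the amplitude, the transition temperature and the exponent $`\nu `$.

```python
import numpy as np
from scipy.optimize import curve_fit

def width(T, delta0, Tc, nu):
    """HWHM above the transition, Eq. (2): Delta = Delta0 |T - Tc|^nu."""
    return delta0 * np.abs(T - Tc) ** nu

# Hypothetical heating-cycle widths in the critical region (deg C, r.l.u.)
T = np.array([97.0, 100.0, 104.0, 110.0, 118.0, 128.0])
delta = np.array([0.012, 0.022, 0.032, 0.044, 0.058, 0.072])

popt, pcov = curve_fit(width, T, delta, p0=[0.01, 95.0, 0.63])
print("Delta0, T_OII, nu =", popt)
```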
The variation of the peak intensity and the peak width with temperature leads to the distinction of three regimes during the heating cycle. Between room temperature and 50 °C both the peak intensity and the width of the superstructure reflections are constant, i.e. both the domain size and the integrated intensity, which is a measure of the order parameter of the ortho-II phase, are constant within the time period studied. Between 50 °C and 95 °C the width of the superstructure reflections is still constant, but the peak intensity decreases with increasing temperature, indicating that the ortho-II order inside the domains, and thereby the number of oxygen atoms ordered in alternating full and empty chains, starts to decrease. Finally, the increasing width and the decreasing intensity in the temperature range above 95 °C indicate the regime of critical fluctuations above the transition temperature. This range is well described by the critical exponents. In contrast, the behavior below the transition temperature shows substantial deviations from what is expected for a regular second order phase transition, where a long range ordered phase should be formed.
The investigation of the temperature dependence of the ortho-II phase at $`x=0.42`$ exhibits essentially the same behavior as found at $`x=0.50`$, with a small shift in the transition temperature. Thus a stoichiometric oxygen content is only of minor importance for the behavior at the ortho-I/ortho-II phase transition. However, the peak widths, $`\mathrm{\Delta }h=0.031(1),\mathrm{\Delta }k=0.010(1),\mathrm{\Delta }l=0.14(1)`$, are significantly larger than found for the high purity crystal #1 with $`x=0.50`$, cf. Figs. 2 and 3.
### B The ortho-III structure
The ortho-III phase is found at the oxygen compositions $`x=0.72,0.77`$ and $`0.82`$. A crystal prepared with $`x=0.87`$ showed no sign of any oxygen ordering. At these oxygen compositions the ortho-III phase is formed by the sequence $`(110)`$ of two full chains and one empty chain. Accordingly, the size of the unit cell is tripled along $`a`$, and the diffraction pattern shows two superstructure reflections along $`h`$ between the fundamental Bragg peaks. As shown in Fig. 5 the ortho-III superlattice peaks are well defined in the $`ab`$ plane but, like all superstructures due to oxygen ordering in YBCO, broadened due to finite domain sizes. In contrast to the ordering in the $`ab`$ plane, the $`l`$-dependence of the diffracted intensity shows only a broad modulation, with a peak width corresponding to more than one reciprocal lattice unit. This $`l`$-modulation is characteristic of the ortho-III structure and has been found in all samples exhibiting the ortho-III phase, which indicates that the ordering of oxygen atoms takes place in the $`ab`$ plane, whereas different planes are only weakly correlated. Thus, in contrast to the ortho-II structure, which is 3D ordered, the ortho-III phase is essentially a 2D ordered superstructure. As mentioned in Section II C, a simple Lorentzian is expected for scattering from finite size domains in a 2D system when the integration of the peaks is performed either along the $`a`$ or the $`b`$ direction ($`y^{\prime }=y-1/2=1`$), whereas a Lorentzian to the power $`y^{\prime }=y=3/2`$ is expected for integration along $`l`$. However, the peak shapes along $`h`$ and $`k`$ are well described by Lorentzians, independent of which component is integrated in the vertical direction. Attempts to include a variable power $`y`$ did not improve the fits when the $`l`$ direction was vertical. The smallest widths, reported previously by Schleger et al. , are found in the $`x=0.77`$ crystal, with $`\mathrm{\Delta }h=0.031(1)`$ and $`\mathrm{\Delta }k=0.0090(2)`$.
One example of the temperature dependence of the ortho-III phase is shown in Fig. 6 for the crystal with composition $`x=0.72`$. Here the $`(8/\mathrm{3\hspace{0.33em}0\hspace{0.33em}5})`$ reflection was scanned along $`h`$ at various temperatures. Similar to the transition of the ortho-II phase, the peak intensity and peak width are frozen at temperatures below 35 °C. Above this temperature critical fluctuations are observed. A fit to the temperature dependence of the peak width yields a critical exponent of $`\nu =0.92(8)`$ and a transition temperature of $`T_{OIII}=48(5)`$ °C. This value of the critical exponent is in good agreement with the theoretical value of $`\nu =1`$ for the 2D Ising model and confirms the 2D character of the ordering.
### C Ortho-V
The investigation of a crystal prepared with the oxygen composition $`x=0.62`$ shows a mixture of the ortho-II and ortho-V phases at room temperature. This is revealed by the observation of diffuse peaks at the positions $`h=2.4,2.5`$ and $`2.6`$, as shown in Fig. 7. The peak at $`h=2.5`$ results from the ortho-II structure, and the peaks at $`h=2.4`$ and $`h=2.6`$ are consistent with a unit cell which is enlarged five times in the $`a`$ direction, i.e. the ortho-V structure. The two small peaks seen in the $`h`$-scan in Fig. 7 at $`h`$=2.23 and $`h`$=2.83 are an Al-powder line and possibly a grain of an unknown phase oriented with the lattice, respectively. The hump at $`h`$=2.83 has also been observed when the same crystal was prepared with other oxygen stoichiometries (compare with Fig. 9 and Ref. REFERENCES). A similar diffraction pattern, consistent with a mixture of ortho-II and ortho-V, has been observed in all $`(h\mathrm{\hspace{0.33em}0}l)`$ scans performed with $`1\le h\le 4`$ and $`l`$=0,3,5,6,7 (8 scans in total). However, none of these scans showed a peak at the position $`\stackrel{}{Q}=(\frac{1}{5}\mathrm{\hspace{0.33em}0}l)`$. This is explained by the structure factor calculations of the superlattice peaks from the ideal ortho-V ordering sequence (10110) shown in Ref. REFERENCES. The intensities of the peaks at $`(1/\mathrm{5\hspace{0.33em}0\hspace{0.33em}0})`$ are indeed much smaller than the ones at $`(2/\mathrm{5\hspace{0.33em}0\hspace{0.33em}0})`$ and $`(3/\mathrm{5\hspace{0.33em}0\hspace{0.33em}0})`$. However, this model takes into account only the oxygen order, and, as discussed in Section I, the superlattice peaks are caused by both the oxygen order and the cation displacements. These displacements and the pronounced disorder may change the intensities and reduce them further.
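The structure factor argument can be reproduced with a few lines of code: for a periodic sequence of full (1) and empty (0) chains, the relative superlattice intensities follow from a 1D Fourier sum over the oxygen occupation alone. As discussed above, this neglects the cation displacements, but it reproduces the qualitative hierarchy of peak intensities, both for the ortho-V sequence (10110) and for the ortho-VIII sequence (11010110) used later in Sec. III D.

```python
import numpy as np

def chain_structure_factor(sequence):
    """Relative superlattice intensities |F(n/N)|^2 for a periodic full(1)/empty(0)
    chain occupation sequence along a. Oxygen order only; the cation displacements
    that also contribute to the real superlattice peaks are ignored."""
    occ = np.array(sequence, dtype=float)
    N = len(occ)
    out = {}
    for n in range(1, N):                       # superlattice orders n/N
        F = np.sum(occ * np.exp(2j * np.pi * n * np.arange(N) / N))
        out[f"{n}/{N}"] = abs(F) ** 2
    return out

print(chain_structure_factor([1, 0, 1, 1, 0]))          # ideal ortho-V
print(chain_structure_factor([1, 1, 0, 1, 0, 1, 1, 0])) # ideal ortho-VIII
```

For the ortho-V sequence this gives intensities of about 2.6 at $`n/N`$ = 2/5 and 3/5 against 0.38 at 1/5 and 4/5, in line with the weakness of the $`(1/\mathrm{5\hspace{0.33em}0}l)`$ peaks noted above.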
Due to the heavy overlap of the peaks from the two phases it is difficult to determine the peak shape and width. However, analysis of the ortho-II and ortho-V peaks using Lorentzian profiles gave the following HWHM in reciprocal lattice units at room temperature: $`\mathrm{\Delta }h=0.040(28),\mathrm{\Delta }k=0.0078(16),\mathrm{\Delta }l=0.12(3)`$ for ortho-II, and $`\mathrm{\Delta }h=0.058(10),\mathrm{\Delta }k=0.0096(19)`$ for ortho-V. The scan along $`l`$ at $`(\mathrm{2.5\hspace{0.33em}0}l)`$, shown in Fig. 7, exhibits the intensity modulation well known for 3D ordering in the pure ortho-II phase, but the heavy overlap of the peaks prevents the 2D short range type of modulation expected for the ortho-V peaks along $`l`$ from being determined independently.
The temperature dependence of this mixed phase was measured by $`h`$-scans between the $`(\mathrm{2\hspace{0.33em}0\hspace{0.33em}0})`$ and the $`(\mathrm{3\hspace{0.33em}0\hspace{0.33em}0})`$ Bragg reflections, and the diffraction pattern was fitted to three Lorentzians with fixed positions at 2.4, 2.5 and 2.6. Looking at the measurement of the phase transition of this mixed phase, shown in Fig. 8, one observes that the ortho-V correlations disappear between 50 °C and 70 °C, while at the same time the ortho-II peak gains intensity. Also during the cooling cycle the ortho-II correlations dominate the diffraction pattern. The ortho-II correlations disappear at approximately 110 °C. These facts, together with our knowledge about the ordering kinetics , indicate that during the cooling process the ortho-II phase is stabilized at higher temperature than ortho-V and with rather fast ordering kinetics. Then at lower temperature the ortho-V phase becomes stable, but due to the slow ordering kinetics at this lower temperature the ortho-V domains do not form within the one hour time period of the experiment.
### D Ortho-VIII
Figure 9 shows $`h,k`$ and $`l`$ scans for the oxygen composition $`x=0.67`$. The left part, with $`h`$ scans along $`(h\mathrm{\hspace{0.33em}0\hspace{0.33em}0})`$ with $`2<h<3`$, reveals diffuse superlattice peaks at $`h=2.382(4)`$ and $`h=2.627(3)`$. The peak positions and profiles have been fitted to two Lorentzians, giving a HWHM of $`\mathrm{\Delta }h=0.053`$. The middle part shows that the peaks are also localized in the transverse direction, with a width of $`\mathrm{\Delta }k=0.013(2)`$. The modulation of the intensity for a scan along $`l`$ (right part of the figure) has a $`q`$ dependence similar to the corresponding scan for the ortho-III phase, shown in Fig. 5. Thus, there are no well-defined peaks along $`l`$, indicating essentially 2D ordering with substantial disorder in the stacking of full and empty chains along the $`c`$ direction. Similar superlattice peaks have been observed at positions in reciprocal space of $`(h\mathrm{\hspace{0.33em}0\hspace{0.33em}3})`$ and $`(h\mathrm{\hspace{0.33em}0\hspace{0.33em}5})`$ with $`2\le h\le 3`$. The superstructure peak positions at modulation vectors with $`nh_m=0.382`$ and 0.627 are close to the expected values $`nh_m=3/8`$ and $`5/8`$ for a superlattice with a unit cell of $`8a\times b\times c`$, i.e. the ortho-VIII phase. The expected sequence of full and empty chains of the ideal ortho-VIII structure is (11010110). Calculating the intensities of the superlattice peaks for this ideal case one finds that the observed peaks at $`nh_m=`$ 3/8 and 5/8 are the strongest, the peaks at $`nh_m=`$ 2/8, 4/8 and 6/8 are about one order of magnitude smaller, and the ones at $`nh_m=`$ 1/8 and 7/8 are about two orders of magnitude smaller (compare with the presentation in Ref. REFERENCES). Due to the weak ordering it is unlikely that the smaller superlattice reflections can be observed.
The temperature dependence was observed from $`h`$-scans at the $`(2\frac{3}{8}\mathrm{\hspace{0.33em}0\hspace{0.33em}3})`$ peak position, and the results are shown in Fig. 10. The onset of broadening of the superstructure peaks takes place at $`T_{OVIII}=42(5)`$ °C. The temperature dependence of the peak width above the transition temperature is described by a critical exponent of $`\nu =0.79(3)`$, as shown in the middle part of Fig. 10. This value lies between the exponent of 0.63 for the 3D and 1 for the 2D Ising model. Another interesting feature of this phase transition is revealed by inspection of the peak position. When the temperature exceeds 50 °C the peak position changes continuously from $`h=2.372`$ to $`h=2.4`$, which corresponds to the position of the ortho-V phase. Above 90 °C the peak shifts gradually to $`h=2.33`$ at 150 °C, the location of the peaks of the ortho-III structure. Upon cooling the data are reproduced down to 75 °C; at lower temperatures the intensity is significantly reduced and the structure freezes into the ortho-V phase.
### E The oxygen ordering phase diagram
From the transition temperatures obtained in the present and previous studies using hard x-ray diffraction and the same type of crystals we may establish phase lines for the oxygen superstructure ordering. Combining these data with the transition temperatures, $`T_{OI}`$, of the phase transition from the tetragonal to the orthorhombic ortho-I phase, obtained by neutron powder diffraction , we may construct the structural phase diagram of oxygen ordering in YBCO, shown in Fig. 11. Also included in the figure are the phase transition temperatures $`T_{OI}`$ and $`T_{OII}`$ predicted by Monte Carlo simulations based on the ASYNNNI model with ab initio interaction parameters.
The only true equilibrium structures are the ortho-I phase and the tetragonal phase; none of the superstructures formed by oxygen ordering shows long range order. Within the temperature range studied the tetragonal phase is the only one observed for $`x<0.35`$. Below the tetragonal to orthorhombic phase transition temperature the 3D ordered ortho-I phase always develops, and it is the only structure observed for $`x>0.82`$. For $`0.35\le x<0.62`$ the 3D short range ordered ortho-II phase is the only stable superstructure. Similarly, a single phase ortho-III structure with 2D finite size ordering is observed for $`0.72\le x\le 0.82`$. At intermediate compositions a mixed phase of ortho-II and ortho-V is found at $`x=0.62`$, and ortho-VIII is found at $`x=0.67`$, in crystals that have been slowly cooled to room temperature as described in Sec. II A. Both the ortho-V and the ortho-VIII structures are essentially 2D ordered and have finite size ordering. During heating the ortho-V structure transforms into ortho-II, and it does not recover on cooling within one hour. Above room temperature the ortho-VIII structure transforms gradually first into ortho-V and then into ortho-III. On subsequent cooling the ortho-V superstructure is recovered and remains stable within the one hour time period of the measurements.
The line shape of the superstructure reflections is in most cases well described by a simple Lorentzian ($`y=1`$). Only for the ortho-II phase in the range $`0.42\le x<0.62`$ is a Lorentzian squared shape ($`y=2`$) found. At the low oxygen side of the ortho-II phase, $`x\le 0.36`$, the small peak to background ratio (see bottom part of Fig. 12) does not permit the determination of the exponent of the Lorentzian. The domain size of the superstructures depends strongly on the crystal quality and the annealing times. However, for high quality crystals that have been annealed by the standard procedure for studies of the phase diagram (described in Sec. II A) we expect that the domains are at the late stage of growth and therefore only weakly time dependent (cf. Fig. 3, thermal preparations 1 and 2, and Refs. REFERENCES and REFERENCES). On this basis we consider the peak widths presented in Sec. III, measured at room temperature after the initial thermal preparation, as saturation values. The HWHM of the superlattice peaks measured along the three axes of reciprocal space as a function of oxygen composition is depicted in Fig. 12 (top). The parallel lines (guides to the eye) in the logarithmic plot, observed in the ortho-II phase as well as in the ortho-III phase, show that the anisotropy ratio is constant within a given structural phase. For the ortho-II phase we find the following ratios of the inverse correlation lengths at room temperature: $`\mathrm{\Gamma }_h/\mathrm{\Gamma }_k=2.7(6)`$ and $`\mathrm{\Gamma }_l/\mathrm{\Gamma }_k=15(2)`$. The $`ab`$ plane ratio seems to be independent also of the type of structure, since the ratio for the ortho-III phase, $`\mathrm{\Gamma }_h/\mathrm{\Gamma }_k=2.9(4)`$, is in good agreement with the value of the ortho-II phase. This implies that the domain pattern in the $`ab`$ plane scales in both the ortho-II and the ortho-III phases, and in ortho-II the scaling extends to 3D. The peak intensities cannot be compared directly because different crystals and instrumental settings have been used. However, the peak intensity normalized to the background, shown in the bottom part of Fig. 12, is an essentially instrument-independent measure of the ordering properties. From this normalized peak intensity and the HWHM data it is clear that the optimal superstructure order parameter is found close to $`x=0.55`$.
The oxygen composition $`x`$ of all the ordered phases deviates systematically from the ideal composition of these phases. For example, the longest correlation lengths for the ortho-II phase are likely to occur at $`x\simeq 0.55`$. Unfortunately no data points are available at this composition. Theoretically one would expect the best ortho-II order for $`x=0.50`$. This deviation is even more significant for the ortho-III phase, which is expected at $`x=0.67`$ but observed around $`x\simeq 0.77`$. Thus, the deviation from the ideal composition increases with increasing chain density, and the amount of oxygen atoms occupying sites on the empty chains at room temperature can be estimated to be about 10 % for ortho-II and 30 % for ortho-III.
## IV Discussion
### A Experimental results
It has been known for several years that the ortho-II and ortho-III superstructures are bulk structural phases of finite size domains. Several other superstructures have been suggested, mainly from electron microscopy. In the present paper we have shown that the ortho-V and ortho-VIII correlations observed by electron microscopy also result from bulk structural ordering, but we found no evidence for the ortho-IV phase. However, we recognize in particular the early electron microscopy results obtained by Beyers et al. , which are in close agreement with our room temperature data. Beyers et al. observe the ortho-II and ortho-III superstructures in the same composition range as in our studies. Furthermore, they found co-existence of ortho-II and ortho-V at $`x=0.65`$, and a structure similar to the ortho-VIII phase, which they call a ’$`(\mathrm{0.37\hspace{0.33em}0\hspace{0.33em}0})`$’ structure, at $`x=0.71`$.
Beyers et al. attributed the clear disagreement between the observed oxygen compositions and the stoichiometries of the ideal superstructure phases to gradients in the oxygen content of the sample, which might differ between the surface and the bulk material. In our experiment such differences can be ruled out. We conclude that this deviation is an intrinsic property of the oxygen ordering mechanism. It is possible that the phase lines between the superstructure phases are in fact tilted, and that the ideal oxygen stoichiometry of the superstructure phases is attained only at zero temperature. However, this will never happen in practice, because the oxygen ordering kinetics is very slow at the temperatures where the superstructures become stable, and the movement of Cu-O chains freezes effectively below approximately 40 °C.
Beyers et al. interpret the mixing of the ortho-II and ortho-V phases at $`x=0.65`$ as a phase separation, which leads to the 60 K plateau . Our investigation of the temperature dependence, together with the studies of the ordering kinetics , may lead to a different conclusion. During the cooling of a sample with an oxygen content of $`x=0.62`$ (in the case of Beyers et al. $`x=0.65`$) oxygen starts to order in the ortho-II phase. The relatively high temperature enables a fast growth of ortho-II domains. At lower temperature the ortho-V phase becomes stable, but now at temperatures just above room temperature, where the growth of ortho-V domains is slow. A full transformation into the ortho-V phase cannot be ruled out, but it is very time consuming. Therefore, we suggest that domains of the complex ortho-V superstructure start to grow inside the ortho-II structure and that a mixed phase, rather than phase separation, results.
From studies of the oxygen ordering properties it has become clear that the finite size of the ortho-II superstructure results from the formation of anti-phase boundaries that limit the domain growth, due to the slow kinetics of moving long Cu-O chains. The reason for this has been discussed by Schleger et al. , and it was speculated that random fields introduced by impurity defects in the crystal stabilize the anti-phase domain walls and prevent the formation of long range order. This is corroborated by the present studies and additional studies of the ordering kinetics . However, the observation of superstructures extending over eight unit cells shows the importance of long range interactions for the ordering mechanism. That these long range interactions play a significant role for the finite size ordering has recently been established by model simulations , and will be discussed further below. For the ortho-III, ortho-V and ortho-VIII superstructures the small 2D domains indicate that the ordering resembles a random faulting sequence of ortho-II and ortho-III. Khachaturyan and Morris have suggested that this is a likely ordering scheme, and they have calculated structure factors which are qualitatively similar to those observed at room temperature in our experiments. However, the fact that the ortho-V and ortho-VIII superstructures appear only on slow cooling indicates that the long range interactions tending to form these superstructures become effective at low temperatures, where the slow oxygen ordering kinetics for the movement of long Cu-O chains prevents well-defined domains from forming. As mentioned in Sec. II C, we would expect a diffraction profile of a Lorentzian to the power $`y^{\prime }=y=3/2`$ from 2D domains with sharp boundaries when the integration along the $`c`$ axis is incomplete. The observation that all the superstructure peaks of the ortho-V, ortho-VIII and ortho-III phases are described by Lorentzian profiles suggests that these superstructures have fuzzier boundaries than the ortho-II domains.
Generally, there is significant hysteresis in the superstructure ordering when the temperature is cycled through the phase transitions. The ortho-II and ortho-III superstructures are re-established during cooling from the ortho-I phase within one hour. However, the ortho-V phase (mixed with ortho-II) and the ortho-VIII phase do not recover during cooling within this short time period. Instead, the less complex superstructures, ortho-II and ortho-V respectively, develop. It is obvious that the superstructure ordering does not represent equilibrium phases, and it cannot be ruled out that more complex superstructures may be formed by very long annealing times at an appropriate temperature, or in crystals that are even more perfect than the present ones. According to Ostwald’s step rule for phase transformations, metastable phases may be formed before the system finally transforms into the stable phase, as long as nucleation centers with a structure similar to the metastable phases are present. In our case the ortho-II and ortho-III phases might be nucleation centers for the ortho-V phase, which in turn, at $`x=0.67`$, is metastable and serves as a nucleation center for the ortho-VIII phase (see Fig. 10). Thus, although we have been able to define unique transition temperatures, at least for the ortho-II superstructure phase, it is questionable whether we have established a phase diagram in the usual sense. This may explain why the phase diagram does not comply with Gibbs’ phase rule.
The 3D ordered superstructures with unit cells $`2\sqrt{2}a\times 2\sqrt{2}a\times c`$ and $`\sqrt{2}a\times 2\sqrt{2}a\times c`$, the so-called herringbone structure, have been observed by electron microscopy, and one group has reported on these structures by neutron and x-ray diffraction techniques on single crystals with composition $`x=0.35`$. However, no other experiments with bulk structural techniques could confirm these results. Bertinotti et al. and Yakhou et al. have shown that the reflections of the herringbone type can be assigned to BaCu<sub>3</sub>O<sub>4</sub> grains in the crystals. Krekels et al. attribute the $`2\sqrt{2}a\times 2\sqrt{2}a\times c`$ structure to distortions of the CuO<sub>5</sub> pyramids in the CuO<sub>2</sub> planes, and Werder et al. suggest that they could result from ordering of copper and barium vacancies in the lattice. The consensus from these and several other studies is therefore that the $`2\sqrt{2}a\times 2\sqrt{2}a\times c`$ and the herringbone type structures are not oxygen ordering superstructures in YBCO. If they were, it would be peculiar that they have 3D long range order while the Cu-O chain ordering develops only finite size domains. Also, we have found no evidence of them at any composition $`x`$ in the present hard x-ray diffraction studies on carefully prepared high quality single crystals.
### B Significance for superconductivity
The significance of the oxygen ordering for charge transfer and superconductivity is evident from many studies. Chemical bond considerations combined with structural and spectroscopic studies have shown that the basal plane copper in undoped YBa<sub>2</sub>Cu<sub>3</sub>O<sub>6+x</sub>, $`(x=0)`$, is monovalent and that simple oxygen monomers, i.e. Cu-O-Cu, will not give rise to charge transfer. However, charge transfer is observed for larger $`x`$, where Cu-O chains are formed. Cava et al. and Tolentino et al. have established that an increasing amount of oxygen gives rise to a charge transfer to the CuO<sub>2</sub> planes that is in good agreement with the well-known plateau variation of $`T_c`$, with $`T_c=58`$ K around $`x=0.5`$ and $`T_c=93`$ K close to $`x=1`$. Relating the oxygen ordering to the variation of $`T_c`$ observed e.g. by Cava et al. , we find that the 58 K plateau is identical to the stability range of the ortho-II superstructure, that the rise in $`T_c`$ from the 58 K to the 93 K plateau takes place at values of $`x`$ where the ortho-V/II, ortho-VIII and ortho-III structures are found, and that the 93 K plateau coincides with the oxygen compositions of the ortho-I phase.
The significance of the ortho-II ordering for superconductivity has been shown directly by Veal et al. and Madsen et al. . Both groups have shown that $`T_c`$ is significantly reduced just after a quench and increases with time towards the equilibrium value with a thermally activated time constant. The conclusion that may be drawn from these experiments and the present structural data is that the formation of the ortho-II superstructure is decisive for the charge transfer and $`T_c`$. When the sample is quenched from temperatures above $`T_{OII}`$, and even from the tetragonal phase, it is only the time used to quench it into the ortho-II phase that matters. A time dependent increase of $`T_c`$ is observed at annealing temperatures down to 250 K. Most likely this temperature is the lower limit for the local oxygen jumps which dominate the oxygen ordering at very early times. It is unlikely that the domain walls separating anti-phase domains are mobile at 250 K.
### C Relation to model calculations
There have been many theoretical studies of the oxygen ordering in YBCO and attempts to correlate the structural ordering with the electronic properties and superconductivity. These include phenomenological relations between ordered oxygen domains and $`T_c`$ , and electron band structures calculated from oxygen chain configurations estimated ad hoc or derived from model studies . More realistic and elaborate models, where the electronic degrees of freedom of the strongly correlated electron system have been included in combination with the oxygen ordering properties, have also been considered . The aim is clearly to understand details of the local oxygen ordering properties which are important for the electronic structure and the charge transfer, but difficult to obtain directly from experiments. The predictive power of these model studies for the structural and electronic properties is strongly related to their ability to reproduce the experimental findings, as presented in the present structural studies.
Most of the structural models are based on local effective oxygen-oxygen interactions in a 2D lattice gas formulation . These models do not take into account long range interactions like strain effects and will therefore not reproduce the twin domain formation observed experimentally at the onset of the ortho-I ordering. Models including such long range interactions have been considered . It has also been suggested that the diffuse scattering results from generation of $`(\mathrm{1\; 0\; 0})`$ interstitial plane defects that order by forming a Magneli type homologous series . However, the oxygen ordering observed experimentally has a predominant 2D character related to the CuO<sub>x</sub> basal plane, that is $`(\mathrm{0\; 0\; 1})`$ planes. It takes place inside the ortho-I twin domains and has domain sizes that are usually much smaller than the twin domains. Strain effects and long range Coulomb type interactions are therefore of little significance for the oxygen superstructure formation but they do play a role for the mesoscopic ordering properties.
The simplest model that accounts for many elements of the oxygen ordering properties, like the formation of Cu-O chains and the presence of the tetragonal, ortho-I and ortho-II phases, is the so-called ASYNNNI (Asymmetric Next Nearest Neighbor Ising) model . The ASYNNNI model is a 2D lattice gas (or Ising) model with effective oxygen-oxygen pair interactions that are assumed to be independent of temperature and composition $`x`$. The interaction parameters include a strong Coulomb repulsion $`V_1`$ between oxygen atoms on nearest neighbor sites and an attractive covalent interaction $`V_2`$ between oxygen atoms that are bridged by a Cu atom. These two interactions locate the oxygen on the $`b`$ axis (the O(1) site) and prevent significant oxygen occupation on the $`a`$ axis (the O(5) site) in the orthorhombic phases at moderate temperatures and compositions $`x>0.35`$. A weaker effective repulsive Coulomb type interaction $`V_3`$ between oxygen atoms that are next nearest neighbors and not bridged by a Cu atom stabilizes the ortho-II superstructure. The ASYNNNI model accounts quantitatively for the temperature and composition dependence of the experimental structural phase transition between the tetragonal and the ortho-I phases (see Fig. 11) by use of interaction parameters which are consistent with values obtained by Sterne and Wille from first principles total energy calculations: $`V_1/k_B=4278`$ K, $`V_2/k_B=1488`$ K and $`V_3/k_B=682`$ K. It also predicts the existence of the ortho-II phase, but it cannot account for the additional superstructure phases, ortho-III, ortho-V and ortho-VIII. Moreover, it predicts long range order of the superstructure phases, which has never been observed experimentally, and it cannot account quantitatively for the ortho-I to ortho-II phase transition temperature.
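For concreteness, a minimal Metropolis sketch of the 2D ASYNNNI lattice gas with conserved oxygen content (Kawasaki dynamics) is given below. The encoding of the basal plane geometry (oxygen sites on a doubled grid, with the Cu-bridged next nearest neighbor direction selecting $`V_2`$ versus $`V_3`$), the sign convention (positive pair energy = repulsive, so the attractive covalent $`V_2`$ enters with a negative sign) and the annealing schedule are our own illustrative choices; only the magnitudes of $`V_1`$, $`V_2`$ and $`V_3`$ are taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# ASYNNNI pair energies in K (V/k_B); magnitudes from Sterne and Wille as quoted
# in the text. Sign convention assumed here: positive = repulsive, so the
# attractive covalent V2 enters with a negative sign.
V1, V2, V3 = 4278.0, -1488.0, 682.0

L = 16                                   # 2L x 2L doubled grid; O sites have i+j odd
occ = np.zeros((2 * L, 2 * L), dtype=int)
sites = np.array([(i, j) for i in range(2 * L) for j in range(2 * L)
                  if (i + j) % 2 == 1])
x = 0.5                                  # oxygen atoms per formula unit (one Cu(1))
for i, j in rng.permutation(sites)[: int(x * len(sites) / 2)]:
    occ[i, j] = 1

def site_energy(i, j):
    """Interaction energy (in K) of an O atom at site (i, j); periodic b.c."""
    n = 2 * L
    e = 0.0
    for di, dj in ((1, 1), (1, -1), (-1, 1), (-1, -1)):      # nearest neighbors: V1
        e += V1 * occ[(i + di) % n, (j + dj) % n]
    for di, dj in ((2, 0), (-2, 0), (0, 2), (0, -2)):        # next nearest: V2 or V3
        mid_i, mid_j = i + di // 2, j + dj // 2              # Cu sites at (even, even)
        V = V2 if (mid_i % 2 == 0 and mid_j % 2 == 0) else V3
        e += V * occ[(i + di) % n, (j + dj) % n]
    return e

def kawasaki_step(T):
    """One attempted oxygen jump to a nearest neighbor O site (Metropolis)."""
    n = 2 * L
    i, j = sites[rng.integers(len(sites))]
    di, dj = ((1, 1), (1, -1), (-1, 1), (-1, -1))[rng.integers(4)]
    i2, j2 = (i + di) % n, (j + dj) % n
    if occ[i, j] == occ[i2, j2]:
        return                                               # nothing to move
    e_old = occ[i, j] * site_energy(i, j) + occ[i2, j2] * site_energy(i2, j2)
    occ[i, j], occ[i2, j2] = occ[i2, j2], occ[i, j]          # trial swap
    e_new = occ[i, j] * site_energy(i, j) + occ[i2, j2] * site_energy(i2, j2)
    if e_new - e_old > 0 and rng.random() >= np.exp(-(e_new - e_old) / T):
        occ[i, j], occ[i2, j2] = occ[i2, j2], occ[i, j]      # reject

for T in (700.0, 500.0, 380.0):          # crude anneal toward the ortho phases (K)
    for _ in range(100 * len(sites)):
        kawasaki_step(T)
print("oxygen on one sublattice:", occ[1::2, 0::2].sum(), "of", occ.sum())
```

On cooling, the chain-forming $`V_2`$ term drives nearly all oxygen onto one sublattice, the lattice gas analogue of the tetragonal to ortho-I transition described above.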
Extensions of the ASYNNNI model have been suggested to account for these shortcomings. They include an effective 3D interaction with a nearest neighbor attractive interaction along $`c`$ of magnitude $`V_4\simeq 0.02V_1`$ , effects of electronic degrees of freedom in the Cu-O chain structure, as mentioned above , and 2D Coulomb type interactions of longer range than $`V_2`$ and $`V_3`$ . For the 2D ordering it has been argued that a single additional interaction parameter for oxygen atoms that are $`2a`$ apart and not bridged by copper should be sufficient . This is corroborated by an estimate of screened Coulomb potentials, which shows that the interaction between oxygen atoms separated by $`2a`$ is of the order of $`V_5=0.02V_1`$, and that it decays rapidly for larger distances. At the temperature where $`V_5`$ becomes effective, Cu-O chains have already been formed, and it will act as an effective interaction between chains rather than between oxygen pairs. The $`V_5`$ interaction stabilizes the ortho-III phase by construction, but it is not expected to account for the ortho-V and ortho-VIII phases. The influence of including effective Coulomb type Cu-O chain interactions extending beyond $`a`$ and $`2a`$ has been studied analytically in the framework of a 1D Ising model . Here a sequence of branching phases develops for $`T\rightarrow 0`$, in order to comply with the Nernst principle and with stoichiometric phases at different compositions $`x`$. However, for the YBCO system it is expected that interactions of range beyond $`2a`$ play a role only at low temperatures, where the structural ordering is effectively frozen. Also, the projection onto a 1D system requires that rather long Cu-O chains are formed, and recent Monte Carlo simulations have shown that finite chain lengths may result when the ASYNNNI model is extended by the $`V_5`$ interaction, even for $`T\rightarrow 0`$ . On the other hand, the ASYNNNI model extended this way predicts not only that the $`V_5`$ parameter is sufficient to establish short range correlations of ortho-V and ortho-VIII, but also that the ortho-II and ortho-III superstructures do not develop long range order, as observed experimentally. The finite size ordering of the superstructures is therefore not necessarily a consequence of impurities or defects that pin the domain walls, but may be an intrinsic disordering property. Experiments on even more perfect crystals than those used in the present study could supply additional information about the influence of defects on the ordering properties. In further agreement with the experiments, the ASYNNNI model extended with the $`V_5`$ as well as the 3D $`V_4`$ parameter predicts a significant suppression of the $`T_{OII}`$ ordering temperature relative to the $`T_{OI}`$ temperature, which the original version failed to do (see Fig. 11). It is therefore a promising model for the analysis of experimental results and for credible predictions about the local oxygen ordering properties. So far the theoretical phase diagram including the $`V_4`$ and $`V_5`$ interactions has not been determined. The present experimental results can be used as a guide to further model studies. Here it is interesting to note that our data show that the ratios of the correlation lengths are essentially independent of the oxygen stoichiometry in the ortho-II and the ortho-III phases. A comparison between this result and mean field predictions of the peak widths indicates that the ASYNNNI model interaction parameters are independent of $`x`$; the assumption of $`x`$-independent parameters has been a major objection against the validity of the ASYNNNI model.
## V Concluding summary
High energy X-ray diffraction has proven to be a unique tool for studies of the oxygen ordering properties in the orthorhombic phase of YBCO. Chain ordered superstructures of the ortho-II, ortho-III, ortho-V and ortho-VIII types have been observed in high quality single crystals with this bulk sensitive technique. None of the superstructures develops long range order. Only the ortho-II phase is a 3D ordered superstructure with anisotropic correlation lengths. The ortho-II correlation lengths observed at room temperature depend on the oxygen composition (optimal for $`x=0.55`$), crystal perfection and thermal annealing. All other superstructures have 2D character, with ordering only in the $`ab`$ plane. The ratio of the $`ab`$ plane correlation lengths is essentially independent of the oxygen composition and of whether the ordering is ortho-II or ortho-III. The transition temperatures of the superstructures lie between room temperature and 125 °C. The ordering properties resulting from thermal cycling through the $`T_{OII}`$ and $`T_{OIII}`$ ordering temperatures show that finite size domains with internal thermodynamic equilibrium are formed. The domain size observed on cooling from the ortho-I phase within one hour is significantly reduced compared to the value obtained by long time annealing. The observation of ortho-V mixed with ortho-II, and of the ortho-VIII superstructure, shows that these superstructures are bulk properties, and that Coulomb interactions beyond next-nearest neighbors become effective close to room temperature. The ordering of the ortho-V and ortho-VIII superstructures is not reproduced when the sample is cooled from the ortho-I phase within one hour, and it cannot be ruled out that additional superstructure phases may be formed by careful annealing of high quality single crystals. Therefore, although an unambiguous criterion has been identified for the ordering temperatures of the finite size ortho-II and ortho-III superstructures, the resulting ’phase diagram’ is not an equilibrium phase diagram in the usual sense.
## Acknowledgments
This work was supported by the EC TMR - Access to Large Scale Facilities Programme at HASYLAB, and the Danish Natural Science Research Council through DanSync. The Danish Technical Science Research Council supports TF. Collaboration with H. Casalta, R. Hadfield and P. Schleger on initial studies preceding this work is gratefully acknowledged. Technical assistance from S. Nielsen, R. Novak, A. Swiderski and T. Kracht is much appreciated.
# GRB 990510: linearly polarized radiation from a fireball

Based on ESO VLT-Antu (UT1) observations (63.H-0233). Raw data are available upon request.
## 1 Introduction
GRB 990510 was detected by BATSE on-board the Compton Gamma Ray Observatory and by the BeppoSAX Gamma Ray Burst Monitor and Wide Field Camera on 1999 May 10.36743 UT (Kippen (1999); Amati et al. (1999); Dadina et al. 1999). Its fluence (2.5$`\times 10^{-5}`$ erg cm<sup>-2</sup> above 20 keV) was relatively high (Kippen (1999)). Follow up optical observations started $`\sim 3.5`$ hr later and revealed an $`R\sim 17.5`$ (Axelrod et al. (1999)) optical transient, OT (Vreeswijk et al. 1999a), at the coordinates (J2000) $`\alpha =13^\mathrm{h}38^\mathrm{m}07.11^\mathrm{s}`$, $`\delta =-80^{\circ }29^{\prime }48.2^{\prime \prime }`$ (Hjorth et al. 1999b ) (galactic coordinates $`\ell ^{II}=304.942`$, $`b^{II}=-17.8035`$). Fig. 1 shows the Digital Sky Survey II image of the field of GRB 990510, together with the European Southern Observatory (ESO) Very Large Telescope (VLT) image we obtained (see below): the OT is clearly visible in the latter.
The OT initially showed a fairly slow flux decay, $`F_\nu \propto t^{-0.85}`$ (Galama et al. 1999), which gradually steepened to $`F_\nu \propto t^{-1.3}`$ after $`\sim 1`$ d (Stanek et al. 1999a ), $`F_\nu \propto t^{-1.8}`$ after $`\sim 4`$ d (Pietrzynski & Udalski (1999), Bloom et al 1999), and $`F_\nu \propto t^{-2.5}`$ after $`\sim 5`$ d (Marconi et al. 1999a , 1999b). Vreeswijk et al. (1999b) detected Fe II and Mg II absorption lines in the optical spectrum of the afterglow. This provides a lower limit of $`z=1.619\pm 0.002`$ to the redshift, and a $`\gamma `$–ray energy of $`>10^{53}`$ erg in the case of isotropic emission.
Polarization is one of the clearest signatures of synchrotron radiation, provided the radiation is produced by electrons gyrating in a magnetic field that is at least partly ordered. Polarization measurements can therefore provide a crucial test of the synchrotron shock model (Mészáros & Rees (1997)). An earlier attempt to measure the linear polarization of the optical afterglow of GRB 990123 yielded only an upper limit (Hjorth et al. 1999a ) of $`2.3`$%.
## 2 Observations
Our observations of GRB 990510 were obtained at ESO’s VLT–Antu (UT1), equipped with the Focal Reducer/low dispersion Spectrometer (FORS) and a Bessel $`R`$ filter. The OT associated with GRB 990510 was observed $`\sim 18.5`$ hr after the burst, when the $`R`$-band magnitude was $`\sim 19.1`$. Observations were performed in standard resolution mode with a scale of $`0.2^{\prime \prime }`$/pixel; the seeing was $`1.2^{\prime \prime }`$. The observation log is reported in Table 1.
Imaging polarimetry is achieved by the use of a Wollaston prism, splitting the image of each object in the field into the two orthogonal polarization components, which appear in adjacent areas of the CCD image. For each position angle $`\varphi /2`$ of the half–wave plate rotator, we obtain two simultaneous images of cross–polarization, at angles $`\varphi `$ and $`\varphi +90^{\circ }`$.
Relative photometry with respect to all the stars in the field was performed, and each pair of simultaneous measurements at orthogonal angles was used to compute the points in Fig. 2 (see Eq. 1). This technique removes any difference between the two optical paths (ordinary and extraordinary ray) and the polarization component introduced by galactic interstellar grains along the line of sight. Moreover, being based on relative photometry in simultaneous images, our measurements are insensitive to intrinsic variations in the optical transient flux ($`\sim 0.03`$ magnitudes during the time span of our observations). With the same procedure, we also observed two polarimetric standard stars, BD–13°5073 and BD–12°5133, in order to fix the offset between the polarization and the instrumental angles.
The data reduction was carried out with the ESO–MIDAS (version 97NOV) system. After bias subtraction, non–uniformities were corrected using flat–fields obtained with the Wollaston prism. The flux of each point source in the field of view was derived by means of both aperture and profile fitting photometry, using the DAOPHOT II package (Stetson (1987)) as implemented in MIDAS. For relatively isolated stars the two techniques differ only by a few parts in a thousand.
In order to evaluate the parameters describing the linear polarization of the objects, we compute, for each instrumental position angle $`\varphi `$, the quantity:
$$S(\varphi )=\frac{\frac{I(\varphi )/I(\varphi +90^{\circ })}{I_\mathrm{u}(\varphi )/I_\mathrm{u}(\varphi +90^{\circ })}-1}{\frac{I(\varphi )/I(\varphi +90^{\circ })}{I_\mathrm{u}(\varphi )/I_\mathrm{u}(\varphi +90^{\circ })}+1}$$
(1)
where $`I(\varphi )`$ and $`I(\varphi +90^{\circ })`$ are the intensities of the object measured in the two beams produced by the Wollaston prism, and $`I_\mathrm{u}(\varphi )/I_\mathrm{u}(\varphi +90^{\circ })`$ is the average ratio of the intensities of the stars in the field. This corrects directly for the small instrumental polarization (and, at least in part, for the possible interstellar polarization). These field stars (see Fig. 3) have been selected over a range in magnitude ($`18\lesssim R\lesssim 22`$) to check for possible non–linearities. Since the interstellar polarization of any star in the field may be related to the patchy dust structure and/or to the star distance, we have verified that the result does not depend on which stars are chosen for the analysis. The parameter $`S(\varphi )`$ is related to the degree of linear polarization $`P`$ and to the position angle of the electric field vector $`\vartheta `$ by:
$$S(\varphi )=P\mathrm{cos}2(\vartheta -\varphi ).$$
(2)
$`P`$ and $`\vartheta `$ are evaluated by fitting a cosine curve to the observed values of $`S(\varphi )`$. The derived linear polarization of the OT of GRB 990510 is $`P=(1.7\pm 0.2)`$% (1$`\sigma `$ error), at a position angle of $`\vartheta =101^{\circ }\pm 3^{\circ }`$ (note that the position angle reported in IAUC 7172 is incorrect by $`90^{\circ }`$). The errors on the polarization level and position angle are computed by propagating the photon noise of the observations together with the contributions of the normalization to the stars in the field and of the calibration of the position angle. The latter quantities, however, amount to only a minor fraction of the quoted 1$`\sigma `$ uncertainties. Fig. 2 shows the data points and the best fit cosine curve. The statistical significance of this measurement is very high. A potential problem is represented by a “spurious” polarization introduced by dust grains interposed along the line of sight, which may be preferentially aligned in one direction. Stanek et al. (1999b), using dust infrared emission maps (Schlegel et al. (1998)), reported a substantial Galactic absorption ($`E_{B-V}\simeq 0.20`$) in the direction of GRB 990510. The maps by Dickey & Lockman (1990) and by Burstein & Heiles (1982) give instead somewhat lower values, $`E_{B-V}\simeq 0.17`$ and $`\simeq 0.11`$, respectively. Applying an empirical relation (Hiltner (1956); Serkowski et al. (1975)), this interstellar polarization can amount to at most $`P_{\mathrm{max}}\simeq 9.0E_{\mathrm{B}-\mathrm{V}}`$, i.e. 1–2%. These are only statistical estimates, and large variations about the main trend may be expected.
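A sketch of this fit is given below: Eq. 2 is fitted to the $`S(\varphi )`$ values of Eq. 1 with a standard least-squares routine. The $`S(\varphi )`$ data points and their errors are hypothetical numbers of the right order of magnitude, not the measured values.

```python
import numpy as np
from scipy.optimize import curve_fit

def S_model(phi, P, theta):
    """Eq. (2): S(phi) = P cos 2(theta - phi); angles in degrees."""
    return P * np.cos(2.0 * np.radians(theta - phi))

# Hypothetical S(phi) values from Eq. (1) at four half-wave plate settings,
# with photon-noise errors; these are illustrative, not the measured points.
phi = np.array([0.0, 22.5, 45.0, 67.5])
S_obs = np.array([-0.016, -0.016, -0.006, 0.007])
S_err = np.full(phi.size, 0.002)

(P, theta), pcov = curve_fit(S_model, phi, S_obs, sigma=S_err, p0=[0.02, 90.0])
print(f"P = {P:.3f} +/- {np.sqrt(pcov[0, 0]):.3f}, theta = {theta:.0f} deg")
```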
Then a fraction, or even all, of the polarization of the OT could be caused by the passage of its light through the galactic ISM. However, the normalization of the OT measurements to the stars in the field already corrects for the average interstellar polarization of these stars, even if this does not necessarily account for all the effects of the galactic ISM along the line of sight to the OT (e.g. the ISM could be more distant than the stars, thus not inducing any polarization of their light). To check this possibility, we plot in Fig. 3 the degree of polarization vs. the instrumental position angle for each star and for the OT. All points in this figure have been derived without normalizing with respect to other objects. It is apparent that, while the position angles of all the stars are consistent with being the same (within 10 degrees), the OT clearly stands out. The polarization position angle of stars close to the OT differs by $`\sim 45^{\circ }`$ from the position angle of the OT (see Fig. 3). This is contrary to what one would expect if the polarization of the OT were due to the galactic ISM. Indeed, the higher polarization level measured for the OT when normalized to the stars in the same field implies that the ISM actually somewhat de-polarizes the OT. We therefore conclude that the OT, even if contaminated by interstellar polarization, must be intrinsically polarized to give the observed orientation.
We can place tight limits on the amount of absorption, and hence the associated polarization, that could be produced by interstellar material in the host galaxy of GRB 990510. Assuming that the intrinsic spectrum is a power law $`(F_\nu \propto \nu ^{-\alpha })`$, we require that the fluxes measured simultaneously in the $`B`$, $`V`$, $`R`$ and $`I`$ bands (Pietrzynski & Udalski 1999; Kaluzny et al. 1999; Hjorth et al. 1999b) lie on a power law curve. This strongly limits the amount of local extinction, which affects the flux at rest–frame frequencies $`\nu ^{}=(1+z)\nu _{obs}`$, i.e. in the UV, where extinction is more severe. We find a maximum allowed value $`E_{B-V}^{host}\simeq 0.02`$, corresponding to a maximum induced polarization level of $`\sim 0.2\%`$. Incidentally, the best fit power law is obtained for $`\alpha \simeq 0.7`$, a galactic $`E_{B-V}=0.16`$ and $`E_{B-V}^{host}\simeq 0`$. This value of $`\alpha `$ matches the predictions of the standard model for the decaying afterglow flux (Mészáros & Rees (1997)), which gives $`F_\nu (t)\propto t^{-3\alpha /2}`$. For $`\alpha =0.7`$, the expected flux decay is in agreement with that measured at the time of the observations.
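The sketch below illustrates this consistency test: simultaneous $`BVRI`$ fluxes are fitted to a power law reddened by a fixed Galactic extinction plus a free host extinction acting at rest-frame wavelengths. The fluxes and the extinction coefficients (especially the host-frame UV values) are rough assumptions for illustration; they are not the published photometry.

```python
import numpy as np
from scipy.optimize import curve_fit

# Effective frequencies of the B, V, R, I bands (Hz) and hypothetical
# simultaneous fluxes (arbitrary units) roughly consistent with alpha ~ 0.7
nu = 3e8 / np.array([440e-9, 550e-9, 640e-9, 790e-9])
flux = np.array([37.0, 50.0, 63.0, 82.0])
flux_err = 0.03 * flux

# Extinction coefficients A_lambda / E_(B-V): Galactic values at the observed
# wavelengths (R_V = 3.1) and assumed host values at lambda/(1+z), z = 1.619,
# i.e. in the rest-frame UV where extinction is much more severe.
k_gal = np.array([4.1, 3.1, 2.3, 1.5])
k_host = np.array([8.0, 8.5, 6.0, 4.8])   # rough far-UV numbers, assumed
E_gal = 0.16

def model(nu_, A, alpha, E_host):
    ext = 10.0 ** (-0.4 * (k_gal * E_gal + k_host * E_host))
    return A * (nu_ / nu[2]) ** (-alpha) * ext

popt, _ = curve_fit(model, nu, flux, sigma=flux_err, p0=[100.0, 0.7, 0.0],
                    bounds=([0.0, 0.0, 0.0], [np.inf, 3.0, 1.0]))
print("alpha = %.2f, E_B-V(host) = %.3f" % (popt[1], popt[2]))
```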
## 3 Discussion
Relativistic fireball models do explain the main properties of GRBs and their afterglows (Rees & Mészáros (1992); Vietri (1997); Waxman (1997); Sari et al. (1998)). Polarized optical synchrotron emission may be observable if: (i) the coherence length of the magnetic field in the fireball grows at a sizeable fraction of the speed of light (Gruzinov & Waxman 1999; Gruzinov 1999) or (ii) the fireball is collimated (Hjorth et al. 1999a) (i.e. it is beamed). Therefore, measurements of optical polarization can provide constraints on the geometry of the emitting source.
Additional information comes from the afterglow light curve, which shows a gradual steepening in the $`V`$, $`R`$ and $`I`$ bands that had never been observed before (Marconi et al. 1999a). The observed steepening is almost wavelength independent, thus excluding the possibility that it is entirely caused by a curved spectrum shifting rigidly in time to lower frequencies, in which case we ought to see the highest frequencies steepening first. In addition, the $`V-R`$ and $`R-I`$ colors are changing very slowly during the evolution, indicating that the spectral slope is changing slowly with time. This information suggests that the fireball is collimated in a jet. The solid angle of the jet visible to the observer is limited to those regions making an angle smaller than $`1/\mathrm{\Gamma }`$ with the line of sight. As $`\mathrm{\Gamma }`$ decreases, the visible solid angle increases as $`1/\mathrm{\Gamma }^2`$, until $`\mathrm{\Gamma }=\mathrm{\Gamma }_1=1/(\theta _j-\theta )`$ (with $`\theta _j`$ being the semi–aperture angle of the cone and $`\theta `$ the angle between the cone axis and the line of sight). When $`\mathrm{\Gamma }<\mathrm{\Gamma }_2=1/(\theta _j+\theta )`$ the entire jet is visible and the observed solid angle remains constant. For $`\mathrm{\Gamma }`$–factors between $`\mathrm{\Gamma }_1`$ and $`\mathrm{\Gamma }_2`$ the observed solid angle increases somewhat more slowly than $`1/\mathrm{\Gamma }^2`$. Since the flux at the earth is proportional to the observed solid angle, we have two well defined behaviors of the light curve, corresponding to $`\mathrm{\Gamma }>\mathrm{\Gamma }_1`$ and $`\mathrm{\Gamma }<\mathrm{\Gamma }_2`$, and a transition period of gradual steepening in between. Photons produced in regions at an angle $`\sim 1/\mathrm{\Gamma }`$ with respect to the line of sight are emitted, in the comoving frame, at $`90^{\circ }`$ from the velocity vector. A comoving observer at this angle can then see a compressed emitting region and a projected magnetic field structure with a preferred orientation. If the gradual steepening of the light curve is due to the mechanism just mentioned, we would observe only some of the regions at a viewing angle $`\sim 1/\mathrm{\Gamma }`$, not all those we would see in an axisymmetric situation, and this asymmetry can be the cause of the observed linear polarization.
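In the small-angle approximation this geometry reduces to the overlap of two disks on the sky: one of angular radius $`1/\mathrm{\Gamma }`$ around the line of sight and one of radius $`\theta _j`$ around the jet axis, with centers separated by $`\theta `$. The sketch below evaluates the visible solid angle in the three regimes; the numerical values of $`\theta _j`$ and $`\theta `$ are arbitrary illustrations, not fits to GRB 990510.

```python
# Sketch: visible solid angle of a jet (half-opening theta_j, viewing angle
# theta) as the overlap area of two disks of radii 1/Gamma and theta_j whose
# centers lie a distance theta apart (flat-sky approximation).
import numpy as np

def overlap_area(r1, r2, d):
    """Intersection area of two disks of radii r1, r2 at center distance d."""
    if d >= r1 + r2:
        return 0.0
    if d <= abs(r1 - r2):
        return np.pi * min(r1, r2) ** 2
    a1 = r1**2 * np.arccos((d**2 + r1**2 - r2**2) / (2 * d * r1))
    a2 = r2**2 * np.arccos((d**2 + r2**2 - r1**2) / (2 * d * r2))
    tri = 0.5 * np.sqrt((-d + r1 + r2) * (d + r1 - r2)
                        * (d - r1 + r2) * (d + r1 + r2))
    return a1 + a2 - tri

theta_j, theta = 0.05, 0.02                   # assumed angles (radians)
g1, g2 = 1 / (theta_j - theta), 1 / (theta_j + theta)
for gamma in (2 * g1, g1, 0.5 * (g1 + g2), g2, 0.5 * g2):
    omega = overlap_area(1 / gamma, theta_j, theta)
    print(f"Gamma = {gamma:6.1f}:  visible solid angle ~ {omega:.2e} sr")
```

For $`\mathrm{\Gamma }>\mathrm{\Gamma }_1`$ the output grows as $`1/\mathrm{\Gamma }^2`$, for $`\mathrm{\Gamma }<\mathrm{\Gamma }_2`$ it saturates at $`\pi \theta _j^2`$, and in between it grows more slowly, as described above.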
The above arguments suggest that we are observing, slightly off–axis, a collimated beam. If this is the case, we would have a link between the flux decay behavior, the presence of polarization, and the degree of collimation, opening a new perspective for measuring the intrinsic power of GRBs.
Deeper understanding of polarization in GRBs may come from future multi–filter observations and from spectropolarimetry. Frequency-dependent polarization can in fact easily disentangle different components of polarization. In addition, variability in the degree of polarization and its position angle is expected in such rapidly evolving sources: therefore repeated observations of the same afterglow will also be important.
###### Acknowledgements.
We thank the ESO–VLT service team, and in particular H. Boehnhardt, F. Bresolin, P. Møller and G. Rupprecht. FH acknowledges the kind hospitality of the Department of Physics of the University of Milan. We also thank the referee J. Hjorth for his helpful comments.
# Symmetric Diblock Copolymers in Thin Films (I): Phase stability in Self-Consistent Field Calculations and Monte Carlo Simulations
## I Introduction.
Amphiphilic polymers are model systems for investigating mechanisms of self-assembly. Joining chemically distinct polymers – $`A`$ and $`B`$ – at their ends to form an $`AB`$ diblock copolymer prevents macrophase separation of the two species. In order to reduce the number of energetically unfavorable interactions between distinct blocks in a melt, the molecules self-assemble into complex morphologies. The morphology is selected via a delicate balance between the free energy cost of the internal interfaces and the conformational entropy loss as the molecules stretch to fill space at constant density. The phase diagram in the bulk has been investigated in much detail as a function of the relative length of the blocks $`f`$ and the incompatibility $`\chi N`$. The morphologies found in copolymer melts and copolymer/homopolymer mixtures resemble the spatially structured phases of other amphiphilic systems (e.g., lipid/water mixtures).
From a theoretical point of view, polymeric systems are particularly convenient for investigating mechanisms of self-assembly. Only a small number of parameters describe the system, i.e., the fraction $`f`$ of $`A`$ monomers in the diblock, the molecule’s end-to-end distance $`R_e`$ and the incompatibility $`\chi N`$, where $`\chi `$ denotes the repulsion between monomers of different species and $`N`$ the number of monomers per molecule. In general, polymeric systems are well describable by self-consistent field theories using the Gaussian chain model. For a wide range of temperature the theory accurately calculates the excess quantities of the internal interfaces (e.g., the interfacial tension, the bending moduli, or the enrichment of solvent). The understanding of these interfacial properties makes polymers suitable microscopic model systems for investigating the statistical mechanics of interacting interfaces.
If the copolymers are confined into a thin film, the interactions with the boundaries influence the morphology and its orientation. Controlling the orientation of the spatial structure over rather large length scales is important for many practical applications. Consequently, much experimental effort has been directed towards a tailoring of surface properties and studying the morphologies in confined geometry. Lambooy et al. studied symmetric dPS-PMMA diblock copolymers between silicon wafers and found transitions between parallel oriented lamellar phases with different numbers of internal interfaces upon increasing the film thickness. Kellogg and co-workers studied a similar system between surfaces which were coated with random copolymers. This boundary mimics neutral walls and, for a film thickness corresponding to 2.5 lamellar spacings in the bulk, perpendicular oriented lamellae could indeed be observed. The perpendicular oriented morphology has attracted abiding interest as a template for lateral structure on the nanometer scale.
The self-assembly was analyzed for strong segregation by Walton et al. and in the framework of the self-consistent field theory by Pickett and Balazs and by Matsen. Strictly neutral walls give rise to a perpendicular orientation of the lamellae. However, surfaces that interact favorably with one component stabilize parallel morphologies if the film thickness is compatible with the lamellar spacing in the confined state. Upon increasing the film thickness, the self-consistent field studies revealed a sequence of perpendicular and parallel oriented lamellae.
The orientation of the morphologies upon confinement makes thin films a promising candidate for investigating the details of their structure via Monte Carlo simulations. Copolymers in confined geometry were first studied in Monte Carlo simulations by Kikuchi and Binder. They found pronounced effects of the confinement on the ordering. However, similar to experiments, Monte Carlo simulations are plagued by very long relaxation times, and reaching thermal equilibrium is rather difficult. Indeed, the mixed state of perpendicular and parallel oriented lamellae observed in the simulations and experiments was found to be unstable in the framework of self-consistent field calculations. Reviews of both experiments and theory on the self-assembly of block copolymers in thin films give more details on previous work.
The aim of the present work is twofold: On the one hand, we calculate the phase diagram of diblock copolymers in a thin film as a function of the film thickness and the incompatibility. We use a self-consistent field technique developed by Matsen. Both confining walls attract the $`A`$ component of the diblock and we restrict ourselves to symmetric diblock copolymers, $`f=1/2`$. The temperatures span the weak and intermediate segregation regimes. On the other hand, we compare the stability of different phases in the self-consistent field calculations and in Monte Carlo simulations of the bond fluctuation model for chain length $`N=32`$ at incompatibility $`\chi N=30`$.
Our paper is arranged as follows: First we briefly describe the self-consistent field technique for diblock copolymers in thin films and introduce the model used in the Monte Carlo simulations. Then we discuss the regions of stability of the various oriented lamellar phases as a function of the incompatibility and the film thickness. We present a qualitative comparison between the self-consistent field calculations and the Monte Carlo simulations and close with a brief discussion of our findings.
## II Model and techniques
We consider $`n`$ diblock copolymers containing $`N=N_A+N_B`$ segments in a volume $`\mathrm{\Delta }_0\times L\times L`$. $`\mathrm{\Delta }_0`$ denotes the film thickness, while $`L`$ is the lateral extension of the film. The monomer number density in the middle of the film is denoted by $`\rho `$. The density at the film surfaces deviates from the density in the middle and it is useful to introduce the thickness $`\mathrm{\Delta }`$ of an equivalent film with constant monomer density, $`\mathrm{\Delta }\equiv nN/\rho L^2`$. The individual blocks of the diblock have identical length $`N_A=N_B=fN`$, such that the diblock assembles into a lamellar phase in the bulk. $`R_e`$ denotes the end-to-end distance of the molecule. There is a short range repulsion between the two monomer species, which can be parameterized by the Flory-Huggins parameter $`\chi `$. The two surfaces of the film are impenetrable and hard. Therefore, there is no formation of islands or hole defects at the surface. However, the free energy of a confined film is also an important ingredient in understanding the statics and dynamics of pattern formation in thin films with free boundaries. Both walls attract the $`A`$ component of the diblock and repel the $`B`$ component via a short range potential.
### A Self-consistent field calculations (SCF)
The computational technique in our self-consistent field (SCF) calculations is very similar to the work of Matsen. In a boundary region of width $`\mathrm{\Delta }_w`$ the total monomer density drops to zero at both walls. In our calculations we assume the monomer density profile $`\rho \mathrm{\Phi }_0`$ to take the form
$$\mathrm{\Phi }_0(x)=\{\begin{array}{cc}\frac{1\mathrm{cos}\left(\frac{\pi x}{\mathrm{\Delta }_w}\right)}{2}\hfill & \text{for}0x\mathrm{\Delta }_w\hfill \\ 1\hfill & \text{for}\mathrm{\Delta }_wx\mathrm{\Delta }_0\mathrm{\Delta }_w\hfill \\ \frac{1\mathrm{cos}\left(\frac{\pi (\mathrm{\Delta }_0x)}{\mathrm{\Delta }_w}\right)}{2}\hfill & \text{for}\mathrm{\Delta }_0\mathrm{\Delta }_wx\mathrm{\Delta }_0\hfill \end{array}$$
(1)
The width and the shape of the density profile near the wall are determined by a competition between the entropy loss of the polymers near the wall (favoring a thick boundary) and equation of state effects, which try to restore a spatially homogeneous density. The particular choice of the density profile is employed for computational convenience. The value of $`\mathrm{\Delta }_w`$ and the shape of the profile have little effect on the relative stability of the different phases. We choose $`\mathrm{\Delta }_w=0.15R_e`$ in accord with the previous study. This value is close to the ratio of the interaction range and the end-to-end distance in the Monte Carlo simulations. A film with the same number of monomers but uniform density would have the thickness $`\mathrm{\Delta }=\mathrm{\Delta }_0-\mathrm{\Delta }_w`$.
Both walls attract the $`A`$ component of the diblock and repel the $`B`$ component via a short range potential. The monomer wall interaction $`H`$ is modeled as:
$$H(x)=\{\begin{array}{cc}\frac{4\mathrm{\Lambda }_1b\sqrt{N}}{\mathrm{\Delta }_w}\left\{1+\mathrm{cos}\left(\frac{\pi x}{\mathrm{\Delta }_w}\right)\right\}\hfill & \text{for}0x\mathrm{\Delta }_w\hfill \\ 0\hfill & \text{for}\mathrm{\Delta }_wx\mathrm{\Delta }_0\mathrm{\Delta }_w\hfill \\ \frac{4\mathrm{\Lambda }_2b\sqrt{N}}{\mathrm{\Delta }_w}\left\{1+\mathrm{cos}\left(\frac{\pi (\mathrm{\Delta }_0x)}{\mathrm{\Delta }_w}\right)\right\}\hfill & \text{for}\mathrm{\Delta }_0\mathrm{\Delta }_wx\mathrm{\Delta }_0\hfill \end{array}$$
(2)
The normalization of the surface fields $`\mathrm{\Lambda }_1`$ and $`\mathrm{\Lambda }_2`$, which act on the monomers close to the left and the right wall, is chosen such that the integrated interaction energy between the wall and the monomers is independent of the width of the boundary region $`\mathrm{\Delta }_w`$.
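For concreteness, Eqs. (1) and (2) translate directly into code. In the sketch below only $`\mathrm{\Delta }_w=0.15R_e`$ is taken from the text; the film thickness and the field strengths are illustrative values.

```python
# Sketch of the imposed density profile, Eq.(1), and wall potential, Eq.(2).
# Lengths in units of R_e; the parameter values below are illustrative only.
import numpy as np

def phi0(x, D0, Dw):
    """Total monomer density profile Phi_0(x) across a film of thickness D0."""
    x = np.asarray(x, dtype=float)
    left = 0.5 * (1.0 - np.cos(np.pi * x / Dw))
    right = 0.5 * (1.0 - np.cos(np.pi * (D0 - x) / Dw))
    return np.where(x < Dw, left, np.where(x > D0 - Dw, right, 1.0))

def wall_H(x, D0, Dw, lam1, lam2, b, N):
    """Wall potential H(x); its integral per wall, 4*lam*b*sqrt(N), is
    independent of Dw, as the normalization described in the text requires."""
    x = np.asarray(x, dtype=float)
    left = (4 * lam1 * b * np.sqrt(N) / Dw) * (1 + np.cos(np.pi * x / Dw))
    right = (4 * lam2 * b * np.sqrt(N) / Dw) * (1 + np.cos(np.pi * (D0 - x) / Dw))
    return np.where(x < Dw, left, np.where(x > D0 - Dw, right, 0.0))

D0, Dw, N = 2.0, 0.15, 32
b = 1.0 / np.sqrt(N - 1)                  # b = R_e/sqrt(N-1) with R_e = 1
x = np.linspace(0.0, D0, 9)
print(phi0(x, D0, Dw))
print(wall_H(x, D0, Dw, 0.375 / N, 0.375 / N, b, N))
```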
The microscopic monomer densities $`\widehat{\mathrm{\Phi }}_A`$ and $`\widehat{\mathrm{\Phi }}_B`$ can be expressed as a functional of the polymer conformations $`\{𝐫_\alpha (\tau )\}`$:
$$\widehat{\mathrm{\Phi }}_A(𝐫)=\frac{N}{\rho }\underset{\alpha =0}{\overset{n}{\sum }}\int _0^fd\tau \,\delta \left(𝐫-𝐫_\alpha (\tau )\right)$$
(3)
where the sum runs over all $`n`$ diblock copolymers in the system and $`0\le \tau \le 1`$ parameterizes the contour of the Gaussian polymer. A similar expression holds for $`\widehat{\mathrm{\Phi }}_B(𝐫)`$. With this definition the partition function of a melt of Gaussian diblock copolymers takes the form:
$`𝒵`$ $`\propto {\displaystyle \int 𝒟[𝐫]𝒫[𝐫]}`$ $`\mathrm{exp}\left(-\rho {\displaystyle \int \mathrm{d}^3𝐫\left\{\chi \widehat{\mathrm{\Phi }}_A\widehat{\mathrm{\Phi }}_B-H(𝐫)(\widehat{\mathrm{\Phi }}_A(𝐫)-\widehat{\mathrm{\Phi }}_B(𝐫))\right\}}\right)`$ (5)
$`\times \delta \left(\mathrm{\Phi }_0(𝐫)-\widehat{\mathrm{\Phi }}_A(𝐫)-\widehat{\mathrm{\Phi }}_B(𝐫)\right)`$
The functional integral $`𝒟`$ sums over all chain conformations of the diblock copolymers and $`𝒫[𝐫]\propto \mathrm{exp}\left(-\frac{3}{2Nb^2}\int _0^1d\tau \left(\frac{\mathrm{d}𝐫}{\mathrm{d}\tau }\right)^2\right)`$ denotes the statistical weight of a non-interacting Gaussian polymer. $`b^2=R_e^2/(N-1)`$ is the statistical segment length of the polymer. This simple model neglects the coupling between the interaction energy and the chain conformations. Thus, it cannot reproduce the stretching of the diblock copolymer in the disordered phase observed in simulations and experiments. Moreover, the Gaussian chain model neglects the finite stiffness of the polymers, and the chain extensions parallel to the walls (in the parallel oriented lamellae or in the disordered state) always remain unperturbed. The Boltzmann factor in the partition function incorporates the thermal repulsion between unlike monomers and the interactions between the monomers and the walls. The last factor represents the incompressibility of the melt in the center of the film and enforces the monomer density to decay according to Eq.(1) at the walls. A finite compressibility of the polymeric fluid is neglected.
Introducing auxiliary fields $`W_A`$, $`W_B`$, $`\mathrm{\Phi }_A`$, $`\mathrm{\Phi }_B`$ and $`\mathrm{\Xi }`$ we rewrite the partition function of the multi–chain system in terms of the partition function of a single chain
$$𝒵\propto \int 𝒟W_A𝒟W_B𝒟\mathrm{\Phi }_A𝒟\mathrm{\Phi }_B𝒟\mathrm{\Xi }\mathrm{exp}\left(-\frac{[W_A,W_B,\mathrm{\Phi }_A,\mathrm{\Phi }_B,\mathrm{\Xi }]}{k_BT}\right)$$
(6)
The free energy functional has the form:
$`{\displaystyle \frac{[W_A,W_B,\mathrm{\Phi }_A,\mathrm{\Phi }_B,\mathrm{\Xi }]}{nk_BT}}`$ $`=`$ $`-\mathrm{ln}𝒬[W_A,W_B]`$ (11)
$`+{\displaystyle \frac{1}{V}}{\displaystyle \int \mathrm{d}^3𝐫\,\chi N\mathrm{\Phi }_A(𝐫)\mathrm{\Phi }_B(𝐫)}`$
$`-{\displaystyle \frac{1}{V}}{\displaystyle \int \mathrm{d}^3𝐫\,H(𝐫)N\left\{\mathrm{\Phi }_A(𝐫)-\mathrm{\Phi }_B(𝐫)\right\}}`$
$`-{\displaystyle \frac{1}{V}}{\displaystyle \int \mathrm{d}^3𝐫\left\{W_A(𝐫)\mathrm{\Phi }_A(𝐫)+W_B(𝐫)\mathrm{\Phi }_B(𝐫)\right\}}`$
$`-{\displaystyle \frac{1}{V}}{\displaystyle \int \mathrm{d}^3𝐫\,\mathrm{\Xi }(𝐫)\left\{\mathrm{\Phi }_0(𝐫)-\mathrm{\Phi }_A(𝐫)-\mathrm{\Phi }_B(𝐫)\right\}}`$
where $`𝒬`$ denotes the single chain partition function in the external fields $`W_A`$ and $`W_B`$:
$$𝒬[W_A,W_B]=\frac{1}{V}\int 𝒟[𝐫]𝒫[𝐫]\mathrm{exp}\left(-\int _0^fd\tau \,W_A(𝐫(\tau ))-\int _f^1d\tau \,W_B(𝐫(\tau ))\right)$$
(12)
The functional integration in Eq.(6) cannot be carried out explicitly. Therefore we employ a saddlepoint approximation, which replaces the integral by the largest value of the integrand. This maximum occurs at values of the fields and densities determined by extremizing $``$ with respect to each of its arguments. These values are denoted by lower–case letters and satisfy the self-consistent set of equations:
$`w_A(𝐫)`$ $`=`$ $`\chi N\varphi _B-H(𝐫)N+\xi (𝐫)`$ (13)
$`w_B(𝐫)`$ $`=`$ $`\chi N\varphi _A+H(𝐫)N+\xi (𝐫)`$ (14)
$`\varphi _A(𝐫)`$ $`=`$ $`{\displaystyle \frac{V}{𝒬}}{\displaystyle \int 𝒟𝒫\int _0^fd\tau \,\delta (𝐫-𝐫(\tau ))}`$ (16)
$`\times \mathrm{exp}\left(-{\displaystyle \int _0^f}d\tau \,w_A(𝐫(\tau ))-{\displaystyle \int _f^1}d\tau \,w_B(𝐫(\tau ))\right)`$
$`\varphi _B(𝐫)`$ $`=`$ $`{\displaystyle \frac{V}{𝒬}}{\displaystyle \int 𝒟𝒫\int _f^1d\tau \,\delta (𝐫-𝐫(\tau ))}`$ (18)
$`\times \mathrm{exp}\left(-{\displaystyle \int _0^f}d\tau \,w_A(𝐫(\tau ))-{\displaystyle \int _f^1}d\tau \,w_B(𝐫(\tau ))\right)`$
At this stage fluctuations around the most probable configuration are ignored. Hence, the interfaces in the self-consistent field (SCF) calculations are ideally flat and there is no broadening by fluctuations of the local position of the interfaces (capillary waves).
The distribution of copolymer segments is calculated by solving appropriate diffusion equations for the end segment distributions. In order to study the self-assembly into various morphologies, we expand the spatial dependence of the densities and fields in a set of orthonormal functions that possess the symmetry of the morphology being considered. This results in a set of non–linear equations which are solved by a Newton–Raphson-like method. Substituting the saddlepoint values of the densities and fields into the free energy functional (11) we calculate the free energy of the various morphologies. For the perpendicular oriented lamellar phase the value of the free energy has to be minimized with respect to the lamellar spacing. We use up to 220 basis functions for the calculation of the perpendicular oriented lamellar phases and thus achieve a relative accuracy of the order $`10^{-4}`$ for the free energy.
### B Monte Carlo simulations (MC)
For the Monte Carlo (MC) simulations we employ the bond fluctuation model. This coarse grained lattice model captures the relevant universal features of polymeric materials: excluded volume of segments, chain connectivity, and a short range thermal interaction. Many thermodynamic properties of the model have been determined in previous studies and the model is a good compromise between the computational advantages of a lattice model and a faithful representation of continuum space properties. In particular, the relation between the model parameters, the local fluid–like packing of monomers, and the phase behavior has been investigated in detail. Within the framework of this model a small number of chemical repeat units is represented by the eight corners of a cube on a three dimensional lattice. Monomers along a polymer are connected via one of 108 bond vectors of length $`2,\sqrt{5},\sqrt{6},3`$, and $`\sqrt{10}`$. The distances are measured in units of the lattice spacing. The polymers comprise $`N=32`$ monomers. We work at a monomer number density $`\rho =1/16`$. This value corresponds to a concentrated solution or a melt. Under these conditions the end-to-end distance is $`R_e\approx 17`$ and, in accord with previous studies, we use $`b=R_e/\sqrt{N-1}=3.05`$ for the statistical segment length in the SCF calculations.
One half of the polymer consists of $`A`$ monomers, the other consists of $`B`$ monomers. Monomers of the same type attract each other via a square well potential, while there is a repulsion between unlike species. The interaction range comprises the 54 nearest neighbors up to a distance $`\sqrt{6}`$. This corresponds roughly to the first neighbor shell in the monomer density pair correlation function. The well depths of the interactions are chosen symmetrically: $`ϵ_{AA}=ϵ_{BB}=-ϵ_{AB}\equiv -ϵ`$.
The phase behavior and structure of a binary blend of $`A`$ and $`B`$ homopolymers have been investigated in the framework of this model. The phase diagram of binary homopolymer mixtures and ternary homopolymer/copolymer blends as well as the interfacial structure between unmixed phases are well describable by the Gaussian chain model if the Flory-Huggins parameter is identified via
$$\chi =\frac{2ϵ}{k_BT}\int _{r\le \sqrt{6}}\mathrm{d}^3𝐫\,g^{\mathrm{inter}}(𝐫)\approx \frac{2ϵ}{k_BT}\times 2.65,$$
(19)
where we have used the symmetry of the monomeric interactions. Here, $`g^{\mathrm{inter}}`$ denotes the intermolecular pair correlation function in the melt and the integral is extended over the range of the square well potential. Moreover, we have assumed that $`g^{\mathrm{inter}}`$ is largely independent of temperature. We use the value 2.65 for the number of monomers of different chains in the range of the square well interaction. This identification of the Flory-Huggins parameter is based on the energy of mixing; entropic contributions to the free energy due to packing effects or conformational changes are negligible. We perform Monte Carlo (MC) simulations at $`ϵ=0.1769k_BT`$. This value corresponds to $`\chi N=30`$.
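A quick numerical check of Eq. (19) with these numbers:

```python
# Back-of-the-envelope check of Eq.(19): chi = (2*eps/k_BT)*z_eff, with
# z_eff ~ 2.65 intermolecular monomers inside the square-well range.
eps_over_kT, z_eff, N = 0.1769, 2.65, 32
chi = 2.0 * eps_over_kT * z_eff
print(f"chi = {chi:.4f},  chi*N = {chi * N:.1f}")   # gives chi*N ~ 30.0
```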
The simulation cell possesses a geometry of the form $`\mathrm{\Delta }_0\times L\times L`$. Periodic boundary conditions are applied in the lateral directions. The two impenetrable walls at $`x=0`$ and $`x=\mathrm{\Delta }_0+1`$ interact with monomers in the nearest two layers via a square well potential. An $`A`$ monomer in the interaction range of the walls lowers the energy by an amount $`ϵ_w`$ while a $`B`$ monomer increases the energy by the same amount. If the surface is completely covered with $`A`$ monomers the wall interaction energy per chain is:
$$\frac{F_{\mathrm{wall}}}{nk_BT}=-2\times \frac{2ϵ_wN}{\mathrm{\Delta }k_BT}$$
(20)
In the simulations we use the value $`ϵ_w=0.1k_BT`$. Simulations of binary blends (with identical interactions) show that for this strength of surface interactions the $`A`$ component wets the surface for $`ϵ<0.043k_BT`$ or $`\chi N<7.3`$. For the comparison between the SCF calculations and the MC simulations we adjust the surface fields $`\mathrm{\Lambda }`$ so as to yield the same contribution to the energy. Hence, the surface interactions in the MC simulations correspond to $`\mathrm{\Lambda }_1N=\mathrm{\Lambda }_2N=0.375`$ (cf. Eq.(23) below).
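The quoted value follows from equating the wall energy of Eq. (20) with the surface-field term of Eq. (23) for a surface fully covered by the favored component, which gives $`\mathrm{\Lambda }_1N+\mathrm{\Lambda }_2N=4ϵ_w\sqrt{N}/b`$ (with $`ϵ_w`$ in units of $`k_BT`$); this rearrangement of the two equations is ours, and it evaluates to roughly the 0.375 quoted above:

```python
# Matching Eq.(20) to Eq.(23): 4*eps_w*N/Delta = (Lambda_1*N + Lambda_2*N)
# * b*sqrt(N)/Delta, so the film thickness Delta drops out of the condition.
import math
eps_w, N, b = 0.1, 32, 3.05                   # eps_w in k_B*T, b in lattice units
lamN = 4.0 * eps_w * math.sqrt(N) / b / 2.0   # per wall (symmetric fields)
print(f"Lambda_1*N = Lambda_2*N = {lamN:.3f}")  # ~0.37, quoted as 0.375
```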
The MC simulations comprise three different moves: The conformations of the polymers are updated via local hopping of the monomers and slithering snake like motions. In the former, we randomly choose a monomer and try to displace it by one lattice unit in a random direction. During the slithering snake attempts, we randomly choose a chain end and try to attach it at the opposite end of the chain. The monomer identity ($`A`$ or $`B`$) of the chains is correspondingly updated to precisely conserve the composition of the chain. The latter moves relax the conformations of the polymers a factor of the order $`N`$ faster than local updates. Moreover, we allow for $`AB`$ flips, in which the identity of the $`A`$ and $`B`$ monomers of a randomly chosen polymer are exchanged. One Monte Carlo step consists of 3 slithering snake attempts per chain, 1 local hopping attempt per monomer and 1 $`AB`$ flip per diblock. Every 12500 Monte Carlo steps a configuration was stored for further analysis. Since we are interested in studying the stability of different morphologies, we do not impose a specific morphology on the starting configurations. Rather, we let the structure self-assemble via a quench from the disordered phase to the ordered state at $`\chi N=30`$, and monitor the morphologies which occur in several independent runs with identical parameters.
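The bookkeeping of one such Monte Carlo step can be summarized by the skeleton below. The three `propose_*` callables are placeholders (the excluded-volume and bond-vector checks of the bond fluctuation model are not implemented), so only the move ratios and the Metropolis acceptance are meant literally.

```python
# Skeleton of one MC step: 3 slithering-snake attempts per chain, 1 local-hop
# attempt per monomer, 1 A<->B identity flip per diblock.  Each placeholder
# proposal returns the energy change delta_E of the attempted move.
import math, random

def accept(delta_E, beta=1.0):
    """Metropolis criterion."""
    return delta_E <= 0.0 or random.random() < math.exp(-beta * delta_E)

def mc_step(n_chains, N, propose_snake, propose_hop, propose_flip):
    schedule = ((propose_snake, 3 * n_chains),
                (propose_hop, n_chains * N),
                (propose_flip, n_chains))
    for proposal, attempts in schedule:
        for _ in range(attempts):
            if accept(proposal()):
                pass  # commit the accepted move to the configuration here

toy = lambda: random.gauss(0.0, 0.2)     # stand-in for the true energy change
mc_step(n_chains=10, N=32, propose_snake=toy, propose_hop=toy, propose_flip=toy)
```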
## III Phase diagram.
At high incompatibility, many features of confined diblock copolymers can be deduced from the strong stretching theory. This has been applied to study the effect of confinement by Turner and by Walton et al. (see also ). We follow the notation of Ref. . We consider a parallel lamellar phase $`L_p`$ with $`p`$ internal interfaces. In the limit $`\chi N\gg 10`$ the $`A`$ and $`B`$ rich domains are well segregated and the junction points are confined to a narrow interfacial region. To fill space uniformly the copolymers stretch. In a lamellar morphology each half of the diblock forms a brush. These brushes do not interpenetrate. In the parallel lamellar phase $`L_p`$ with $`p`$ interfaces each brush has the height $`\mathrm{\Delta }/2p`$ and the free energy cost due to the stretching of the chains in the brush amounts to:
$$\frac{F_{\mathrm{brush}}}{nk_BT}=2\times \frac{\pi ^2(\mathrm{\Delta }/2p)^2}{8(N/2)b^2}$$
(21)
Each half of the diblock covers an interfacial area $`pN/\rho \mathrm{\Delta }`$. Estimating the value of the interfacial tension between $`A`$ and $`B`$ domains by the interfacial tension in a binary blend, $`\sigma /k_BT\simeq \rho b\sqrt{\chi /6}`$, the free energy contribution of the internal interfaces per polymer takes the form:
$$\frac{F_{\mathrm{inter}}}{nk_BT}=\sqrt{\chi /6}\frac{pbN}{\mathrm{\Delta }}$$
(22)
The balance between these two terms determines the behavior in the bulk. This leads to a preferred lamellar spacing $`D_\mathrm{b}=2\mathrm{\Delta }_{\mathrm{min}}/p=2(8\chi N/3\pi ^4)^{1/6}R_e`$. If the walls preferentially interact with one component the interaction energy of the monomers with the walls gives another contribution to the free energy. Using the expression for the density profiles and the wall monomer interactions, this contribution takes the form:
$$\frac{F_{\mathrm{wall}}}{nk_BT}=-\frac{1}{n}\int \mathrm{d}^3𝐫\,H(𝐫)\rho \mathrm{\Phi }_0(𝐫)=-\frac{\mathrm{\Lambda }_1N+\mathrm{\Lambda }_2N}{\mathrm{\Delta }/(b\sqrt{N})}$$
(23)
where we have assumed that each surface is covered completely with the energetically favored component. For parallel lamellar phases with an odd or an even number of interfaces exposed to symmetric or antisymmetric surface fields, respectively, the contributions cancel.
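Eqs. (21)–(23) can be combined into a short script reproducing the logic of Fig. 1. The sketch below uses $`\chi N=30`$ and symmetric fields $`\mathrm{\Lambda }_1N=\mathrm{\Lambda }_2N=0.375`$, works per chain in units of $`k_BT`$ with lengths in units of $`R_e`$, and approximates $`b\sqrt{N}`$ by $`R_e`$; it is therefore a qualitative strong-stretching estimate, not the full SCF result.

```python
# Strong-stretching estimate of the stable morphology vs. film thickness
# (free energies per chain in k_B*T; lengths in R_e; b*sqrt(N) ~ R_e assumed).
import numpy as np

chiN, lamN = 30.0, 0.375
s = np.sqrt(chiN / 6.0)                 # from sigma/k_BT ~ rho*b*sqrt(chi/6)

def F_parallel(delta, p):
    """L_p phase: brush stretching + p interfaces + wall term, Eqs.(21)-(23)."""
    f = np.pi**2 * delta**2 / (8.0 * p**2) + p * s / delta
    if p % 2 == 0:                      # even p: favored component at both walls
        f -= 2.0 * lamN / delta
    return f

def F_perp():
    """Perpendicular lamellae relax to the bulk period; thickness independent."""
    D = (32.0 * s / np.pi**2) ** (1.0 / 3.0)  # minimizes pi^2*D^2/32 + 2*s/D
    return np.pi**2 * D**2 / 32.0 + 2.0 * s / D

for delta in np.arange(1.0, 5.01, 0.25):
    fs = {f"L{p}": F_parallel(delta, p) for p in range(1, 7)}
    fs["L_perp"] = F_perp()
    print(f"Delta = {delta:4.2f} R_e   stable phase: {min(fs, key=fs.get)}")
```

With these inputs the bulk period evaluates to $`D_\mathrm{b}\approx 1.94R_e`$, consistent with the formula quoted above.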
The confinement into a film also reduces the conformational entropy of the molecules. Using ground state dominance, the entropy of an inhomogeneous melt is given by:
$$\frac{F_{\mathrm{conf}}}{nk_BT}=\frac{\rho b^2}{24n}\int \mathrm{d}^3𝐫\frac{\left(\nabla \mathrm{\Phi }_0(𝐫)\right)^2}{\mathrm{\Phi }_0(𝐫)}=\frac{\pi ^2Nb^2}{24\mathrm{\Delta }_w\mathrm{\Delta }}$$
(24)
This contribution depends strongly on the detailed density profile at the wall. However, it does not discriminate between the different phases and, hence, is irrelevant to the stability of the different morphologies.
In the perpendicular lamellar phase $`L_{⊥}`$ both components cover an equal amount of surface area and, hence, the contribution of the surface fields vanishes. Moreover, the lamellar period is free to adjust so as to minimize the contribution from the internal interfaces and the chain stretching. As a result, the lamellar spacing in the strong stretching theory is identical in the perpendicular morphology of a film and in the bulk.
Fig.1 presents the excess free energy for the different phases in the strong stretching approximation for $`\chi N=30`$. We plot the difference $`(F-F_\mathrm{b})\mathrm{\Delta }/nk_BTR_e`$, which is proportional to the difference between the free energy in the confined geometry and the bulk free energy per unit area of the film. For neutral walls (a), which do not prefer a component, the $`L_{⊥}`$ phase is stable for all film thicknesses. The parallel lamellar morphology has the identical free energy only if the film thickness coincides with half integer multiples of the bulk period $`D_\mathrm{b}`$. For symmetric walls (b) the free energy of the parallel morphology with an even number of interfaces is lowered, because the preferred component is brought into contact with both walls. The free energies of the other phases remain unchanged. Upon increasing the film thickness one observes transitions between morphologies with an even number $`2p`$ of interfaces at film thicknesses around $`pD_\mathrm{b}`$ and perpendicular oriented lamellae at thicknesses around $`(2p+1)D_\mathrm{b}/2`$. In the case of antisymmetric surface fields (c) the free energy of the parallel lamellae with an odd number (2p+1) of interfaces is lowered, and one finds transitions between those parallel lamellae for film thicknesses close to $`(2p+1)D_\mathrm{b}/2`$ and the perpendicular morphology for thicknesses around integer multiples of the bulk period.
In Fig. 2 we compare the results with the full SCF calculations. In this representation the curves are qualitatively similar. In particular, both approaches yield the same sequence of morphologies as the film thickness is increased. However, the absolute values of the free energies differ by more than $`20\%`$ for these parameters. Moreover, there are some subtle differences: For neutral walls (a), the free energy of the perpendicular morphology is strictly lower than that of the parallel oriented lamellae. This is in accord with the calculations of Pickett and Balazs. For symmetric surface fields (b) the free energy of the parallel morphology with an even number of interfaces is lowered. The values of the relative shifts upon increasing the surface field in the strong stretching theory and the SCF calculations agree nicely. In both approaches the free energy of the parallel morphology with an odd number of lamellae remains almost unaffected. This indicates that the structure at the surfaces is hardly perturbed by the weak surface fields. However, the free energy of the perpendicular morphology is independent of the surface fields in the strong stretching theory, while the presence of the surface fields lowers the free energy in the SCF calculations, indicating a dependence of the spatial arrangement on the surface fields.
The composition profiles of the perpendicular morphology are presented in Fig.3 for neutral walls (a) and symmetric walls attracting the $`A`$ component (b). The $`A`$ rich regions are bright and $`B`$ rich regions are shaded darkly. In the case of neutral walls, the interface between the $`A`$ and $`B`$ domains runs strictly perpendicular to the surface. However, the interfacial width broadens close to the surface. In part this is due to the reduction of the density in the surface region, which reduces the incompatibility between the two components. This reduces the $`AB`$ interfacial tension as the interface intersects the wall. Moreover, the polymers in the vicinity of the surfaces are aligned parallel to the wall, and this orientation is compatible with their conformation at the $`AB`$ interface as it approaches the wall. Both effects reduce the free energy of the $`AB`$ interface in the perpendicular morphology. This gives rise to a negative line tension and tends to stabilize the perpendicular phase.
Upon increasing the surface interactions (b), the $`AB`$ interface bends and intersects the wall at an angle. This distortion of the interface close to the surface increases the surface area covered by the energetically favored $`A`$ component and lowers the free energy of the perpendicular phase. This effect is not captured by the strong stretching approximation. However, Pereira and Williams have argued that it typically remains small. Note that the surface fields are rather small, such that the $`A`$ component does not wet the surface.
The phase diagram as a function of the incompatibility $`\chi N`$ and film thickness is presented in Fig.4 for symmetric boundary fields. At high incompatibilities we find a sequence of perpendicular aligned lamellae $`L_{⊥}`$ and parallel lamellae $`L_p`$ with an even number of interfaces. The latter are stable for film thicknesses close to integer multiples of the bulk period. Upon increasing the film thickness the stability region of the perpendicular oriented morphology decreases. The free energy difference between the two morphologies is related to the balance between the surface interactions, which favor the parallel orientation, and the free energy cost of imposing a lamellar spacing which differs from the preferred bulk value. The surface contribution is independent of the film thickness. The free energy cost per lamella due to a mismatch in the lamellar spacing increases quadratically with the mismatch. Hence, it is favorable to distribute the mismatch evenly among the $`p`$ lamellae; the mismatch per lamella is proportional to $`1/p`$. Therefore the free energy of the film due to deviations of the film thickness from the preferred spacing decreases like $`p\times (1/p)^2`$; i.e., for thick films the mismatch becomes unimportant and only parallel lamellae are stable.
In thin films the translational symmetry parallel to the walls is spontaneously broken and the SCF theory predicts a second order transition from the disordered state to the perpendicular lamellar phase. For thicker films we find direct transitions between the parallel lamellar phases with 4 and 6 interfaces. Only at higher incompatibilities do we encounter a triple point, at which the two parallel lamellar phases $`L_4`$ and $`L_6`$ coexist with a perpendicular aligned lamellar phase $`L_{⊥}`$. At higher incompatibilities we find the sequence $`L_4`$, $`L_{⊥}`$ and $`L_6`$ upon increasing the film thickness. We expect this behavior to be representative for larger film thicknesses, and the incompatibility at which the triple point occurs increases with the film thickness. An important point is that the theory predicts a gradual onset of parallel ordering ($`L_p`$) as $`\chi N`$ increases, without a phase transition from the disordered phase. This happens because for finite $`\mathrm{\Lambda }`$ the surface fields create surface induced order of lamellar type already in the disordered phase, and this order gets gradually stronger as $`\chi N`$ increases.
The composition profiles in a thin film close to these critical points, where the perpendicular lamellae emerge, are presented in Fig.5 for a film thickness $`\mathrm{\Delta }/R_e=0.55`$ and Fig.6 for $`\mathrm{\Delta }/R_e=1.92`$. When the incompatibility is increased the perpendicular modulations become more pronounced. Slightly above the critical points the $`B`$–rich domains for $`\mathrm{\Delta }/R_e=0.55`$ and the $`A`$–rich domains for $`\mathrm{\Delta }/R_e=1.92`$ form cylinders which run parallel to the surfaces. This behavior resembles the fingerprint–like morphology observed in experiments of Chaikin and co-workers. For slightly asymmetric diblocks or stronger surface fields even more pronounced effects could be anticipated. However, the neglect of fluctuations imparts a quantitative inaccuracy to the SCF calculations in the weak segregation limit. In particular, the existence of critical points where a second order transition from the disordered phase into the perpendicular oriented lamellar structure occurs is questionable. In the bulk, one encounters a fluctuation–induced first order transition rather than a critical point.
## IV Comparison between self-consistent field (SCF) calculations and Monte Carlo (MC) simulations.
In the following we compare our SCF calculations at $`\chi N=30`$ and $`\mathrm{\Lambda }_1N=\mathrm{\Lambda }_2N=0.375`$ to the corresponding MC simulations in the framework of the bond fluctuation model. The value of the incompatibility lies in the regime of intermediate segregation and is well inside the experimentally accessible range. For much smaller incompatibilities there are strong fluctuation effects. We have performed some preliminary simulations in a $`256\times 64\times 64`$ geometry with $`ϵ_w=0.1k_BT`$ to study the ordering behavior. The results for the difference between the $`A`$ and $`B`$ monomer density in the vicinity of the surfaces are displayed in Fig.7. Upon increasing the incompatibility, the amplitude and correlation length of composition fluctuations increase. Moreover, the period of the oscillations increases slightly upon increasing the incompatibility. This indicates a stretching of the molecules. For $`ϵ=0.09`$ we observe a weak modulation of the composition across the whole film thickness. Hence, we expect the order-disorder transition to occur in the range $`ϵ\approx 0.09(1)`$ or $`13.5\lesssim \chi N\lesssim 17`$. This is in accord with a previous estimate of the transition temperature. Moreover, the magnitude of this shift in the transition temperature is in qualitative agreement with fluctuation corrections calculated by Fredrickson and Helfand. They predict $`\chi _cN=10.5+41/(R_e^2\rho ^{2/3})^{1/3}`$ for very long chains. Deviations of similar magnitude, though by a different mechanism, have been predicted in the framework of the P-RISM theory. For chain length $`N=32`$ we anticipate rather large deviations from the SCF calculations at incompatibilities smaller than $`\chi N=25`$. Since for the present model the transition point is not known to high precision, we did not attempt a quantitative comparison of the results in Fig.7 with theoretical results for the order parameter profiles for surface–induced ordering.
At stronger incompatibilities, $`ϵ>ϵ_\mathrm{\Theta }\approx 0.5`$, there occurs a phase separation into a homopolymer-rich phase and a phase rich in vacancies. This would correspond to $`\chi N\approx 85`$ according to Eq.(19). However, we do not expect this equation to hold in this temperature regime, because the fluid structure at these temperatures differs significantly from the high temperature structure. Already at much smaller incompatibilities, the width of the internal interfaces becomes comparable to the length scale of the local polymer architecture. Previous studies have shown that in the strongly segregated regime the Gaussian chain model may yield qualitatively erroneous results. These two considerations yield a rough estimate for the temperature interval in which good agreement between the MC simulations and the SCF calculations can be expected.
We have quenched three different systems from their athermal state ($`ϵ=0`$) to $`ϵ=0.1769k_BT`$. The systems have the geometry $`30\times 96\times 96`$, $`46\times 93\times 93`$ and $`56\times 96\times 96`$ in units of the lattice spacing. According to the SCF calculations these geometries correspond to the $`L_2`$, the $`L_{⊥}`$, and the $`L_4`$ phase, respectively. The lateral extension in the $`L_{⊥}`$ phase has been chosen compatible with the lamellar spacing in the SCF calculations. For each of these geometries we have simulated at least 4 independent systems. We do not find transitions from one morphology to another during the simulation time, which exceeds at least $`3.8\times 10^6`$ Monte Carlo steps for each system. Hence, we cannot rule out metastability effects completely. However, the simulation time is long enough that the observed structures are free of defects (on the scale of the simulation cell) and the composition profiles of systems which assembled into the same morphology agree (cf. also ).
In Fig.8 we present the monomer density profiles of the $`L_2`$ phase in the SCF calculations (a) and in the MC simulation (b). Qualitatively, the profiles are similar. Both data sets show almost completely segregated $`A`$ and $`B`$ rich regions separated via two interfaces. However, in the SCF calculations the total density rises smoothly from zero at the film boundaries to one in the middle of the film according to Eq.(1). In the MC simulations the monomers pack against the wall and produce oscillations in the density profile near the surfaces. These details of the local fluid structure are not captured in the Gaussian chain model. Note that the average density in the two layers nearest to the walls is close to the bulk density. Thus the simple estimate for the energy contribution of the walls (cf. Eq.(20)) is largely unaffected by the packing effects.
Snapshots of the final morphologies are presented in Fig.9. In panel (a) we present the final snapshots for film thickness $`\mathrm{\Delta }_0=30`$. The surfaces correspond to the top and bottom plane. All systems have assembled into the $`L_2`$ phase. In the snapshots the $`A`$ component corresponds to the darker species. Half an $`A`$ lamella is located at each wall. However, one also observes rather strong fluctuations which allow the $`B`$ component to protrude up to the wall. All films of thickness $`\mathrm{\Delta }_0=46`$ (cf. Fig.9(b)) have assembled into the $`L_{⊥}`$ phase. However, the simulations exhibit two different repeat distances. In the two systems where the lamellae are oriented parallel to the box axis, the lamellae are spaced at a distance $`D=1.827R_e`$, which agrees with the SCF calculations, $`D_{\mathrm{SCF}}=1.822R_e`$. However, in the six other systems, the lamellae make an angle with the box axis and the repeat distance is $`D=1.938R_e`$. This value exceeds the SCF prediction by $`6\%`$. In the SCF calculations such a deviation from the preferred repeat distance increases the free energy by 0.01 $`k_BT`$ per molecule or 8 $`k_BT`$ for the whole system. Hence, the occurrence of the larger spacing cannot be explained by thermal fluctuations alone. The difference might be traced back to the fact that we use for the SCF calculations the chain extension $`R_e`$ corresponding to the athermal state. As has been observed in other simulations (cf. also Fig.7), the chains stretch even in the disordered phase to avoid energetically unfavorable contacts between the different blocks. For the comparison with the SCF calculations we employ the two configurations in which the lamellae are oriented parallel to the box axis. Three films of thickness $`\mathrm{\Delta }_0=56`$ (cf. Fig.9(c)) assembled into the $`L_4`$ phase, whereas one system preferred the $`L_{⊥}`$ phase. For the comparison with the SCF calculations the latter system was discarded. Though we cannot rule out that some of the systems are trapped in metastable conformations, it is very gratifying that we observe the morphologies predicted by the SCF calculations, except for one case at the largest film thickness.
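The two repeat distances are consistent with the commensurability constraint of the periodic $`93\times 93`$ cross section: a lamellar modulation must carry a wavevector $`2\pi (n,m)/L`$, so only spacings $`D=L/\sqrt{n^2+m^2}`$ are realizable. Identifying the two observed spacings with the $`(3,0)`$ and $`(2,2)`$ orientations is our reading of the numbers:

```python
# Lamellar spacings commensurate with the periodic 93 x 93 cross section.
import numpy as np

L, Re = 93.0, 17.0                   # lateral box size and chain extension
for n, m in [(3, 0), (2, 2), (3, 1), (4, 0)]:
    D = L / np.hypot(n, m)
    print(f"(n,m) = ({n},{m}):  D = {D:5.2f} = {D / Re:.3f} R_e")
# (3,0) gives D ~ 1.82 R_e and (2,2) gives D ~ 1.93 R_e, close to the two
# values reported above.
```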
## V Summary.
We have presented SCF calculations and MC simulations for symmetric diblock copolymers confined into a thin film. Both surfaces attract the same component of the diblock via a short range potential. We have calculated the phase diagram as a function of the incompatibility $`\chi N`$ and the film thickness in mean field approximation and discussed the stability of parallel ($`L_p`$) and perpendicular ($`L_{⊥}`$) aligned lamellar phases. At high incompatibility we find the sequence $`L_2`$, $`L_{⊥}`$, $`L_4`$, $`L_{⊥}`$, $`L_6`$, while we find a direct transition between the $`L_4`$ and the $`L_6`$ phase at weak segregation.
At low incompatibilities we find rather pronounced deviations from the SCF theory. Most notably, for chain length $`N=32`$ the onset of ordering occurs around $`13.5\lesssim \chi _cN\lesssim 17`$ instead of $`\chi _c^{\mathrm{SCF}}N\approx 10.5`$. The order of magnitude of the shift is compatible with corrections to the mean field behavior. At very high segregation, we expect the local structure of the model to be important.
At $`\chi N=30`$ we have compared the results of the SCF calculations with MC simulations in the framework of the bond fluctuation model. We find qualitative agreement between the MC simulations and the SCF calculations. In particular, we observed the $`L_2`$, $`L_{⊥}`$ and $`L_4`$ phases as predicted by the SCF calculations. The main difference between the SCF calculations and the MC simulations is the structure at the surfaces. While the density decays smoothly to zero in the SCF calculations, there are pronounced packing effects in the MC simulations. In both schemes the incompatibility at the film surface is reduced. In the SCF calculations it stems from the reduced density at the surface, while in the MC simulations it is due to the finite extension of the interactions (“missing neighbor effect”). Both effects give rise to a negative line tension as the interface approaches the surface and lead to a stabilization of the $`L_{⊥}`$ phase. The effect is, however, more pronounced in the MC simulations. Another difference between the MC simulations and the SCF calculations is the dependence of the chain extension on the incompatibility. The majority of systems in the $`L_{⊥}`$ phase has assembled into a morphology with a lamellar spacing which exceeds the prediction of the SCF calculations by about $`6\%`$. This goes along with a stretching of the diblock copolymers already above the ordering transition. The effect has been observed in previous simulations and experiments, and can be rationalized via a coupling of the intramolecular energy and the chain conformations.
In view of these effects the agreement with the SCF calculations is satisfactory. A detailed comparison of individual profiles between the SCF calculations and the MC simulations shall be presented in the following paper.
### Acknowledgment
It is a great pleasure to thank P.K. Janert for discussions and technical advice. We have also benefited from stimulating discussions/correspondence with F. Schmid and M.W. Matsen. We acknowledge generous access to the CRAY T3E at the HLR Stuttgart and HLRZ Jülich, as well as access to the CONVEX SPP at the computing center in Mainz. Financial support was provided by the DFG under grant Bi314/17.
# Complete set of electromagnetic corrections to the nucleon mass in the Nambu-Jona-Lasinio model
## I INTRODUCTION
We have recently shown how to calculate all possible electromagnetic corrections, of order $`e^2`$, to any quark or hadronic model whose strong interactions are described nonperturbatively by integral equations . Here we would like to apply our method to the three-quark NJL model of the nucleon and in this way derive the expression for the neutron-proton mass difference that is due to a complete set of electromagnetic interactions within this model.
For this purpose it is useful to summarise the main model-independent results obtained in Ref. . Within the framework of relativistic quantum field theory, the strong interaction Green function $`G`$ describing a system of quarks or hadrons is given nonperturbatively by the integral equation whose symbolic form is
$$G=G_0+G_0KG.$$
(1)
The complete set of lowest order electromagnetic corrections to the Green function $`G`$, denoted by $`\delta G`$, then follows from Eq. (1) on topological grounds:
$$\delta G=\delta G_0+\delta G_0KG+G_0\delta KG+G_0K\delta G+\left(G_0^\mu K^\nu G+G_0^\mu KG^\nu +G_0K^\mu G^\nu \right)D_{\mu \nu }$$
(2)
where $`\delta G_0`$ is the complete set of electromagnetic corrections to the free Green function $`G_0`$, and $`D_{\mu \nu }`$ is the photon propagator that connects the appropriate currents (quantities with a $`\mu `$ or $`\nu `$ superscript). Unlike $`\delta G`$, which has internal photons coupled everywhere, $`\delta K`$ consists of the strong interaction potential $`K`$ with all possible photon insertions except those that start or finish on an external quark or hadron leg. All currents $`G^\mu `$, $`G_0^\mu `$, and $`K^\mu `$ are constructed by the gauging of equations method, which effectively attaches external photons in all possible ways to the corresponding strong interaction quantities $`G`$, $`G_0`$, and $`K`$, respectively. Using this method one obtains
$$G^\mu =G\mathrm{\Gamma }^\mu G,\mathrm{\Gamma }^\mu =\mathrm{\Gamma }_0^\mu +K^\mu ,\mathrm{\Gamma }_0^\mu =G_0^{-1}G_0^\mu G_0^{-1}.$$
(3)
With the current $`G^\mu `$ specified in this way, Eq. (2) can be formally solved to give
$$\delta G=G\mathrm{\Delta }G,\mathrm{\Delta }=\delta K+G_0^{-1}\delta G_0G_0^{-1}+\left(\mathrm{\Gamma }^\mu G\mathrm{\Gamma }^\nu -\mathrm{\Gamma }_0^\mu G_0\mathrm{\Gamma }_0^\nu \right)D_{\mu \nu }.$$
(4)
The quantity $`\mathrm{\Delta }`$ as given by Eq. (4) is the key result that describes the complete set of electromagnetic corrections to any observable of the strong interaction model in question. For example, if the strong interactions admit a bound state of mass $`M`$ and wave function $`\psi `$, then the complete set of lowest order electromagnetic corrections to $`M`$ is given by $`\delta M=\overline{\psi }\mathrm{\Delta }\psi /2M`$. It is a feature of our approach that the gauge invariance of such electromagnetic corrections is a result of their completeness.
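On a finite grid the symbolic relations above reduce to linear algebra, which is how such equations are typically handled numerically. In the sketch below the matrices standing in for $`G_0`$, $`K`$ and $`\mathrm{\Delta }`$ are random toys rather than the actual NJL kernels.

```python
# Sketch: Eqs.(1) and (4) on a discrete grid, where operators become matrices:
# G = (1 - G0 K)^(-1) G0, and delta_G = G Delta G for a given insertion Delta.
# The random matrices below are placeholders, not the actual NJL kernels.
import numpy as np

rng = np.random.default_rng(1)
n = 6
G0 = np.diag(1.0 / rng.uniform(1.0, 2.0, n))   # toy free Green function
K = 0.1 * rng.standard_normal((n, n))          # toy interaction kernel
Delta = 0.01 * rng.standard_normal((n, n))     # toy O(e^2) insertion

G = np.linalg.solve(np.eye(n) - G0 @ K, G0)    # solves G = G0 + G0 K G
delta_G = G @ Delta @ G                        # corrected Green function, Eq.(4)

# Consistency check: the integral equation is satisfied to machine precision.
print(np.max(np.abs(G - (G0 + G0 @ K @ G))))
```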
## II NJL MODEL FOR THE NUCLEON
The simplest NJL Lagrangian density $``$ is defined in terms of the iso-doublet (two flavours) colour-triplet quark field $`\mathrm{\Psi }`$ as
$$=\overline{\mathrm{\Psi }}\left(i\partial ̸-m_0\right)\mathrm{\Psi }+G\left[(\overline{\mathrm{\Psi }}\mathrm{\Psi })^2-(\overline{\mathrm{\Psi }}\gamma _5𝝉\mathrm{\Psi })^2\right]$$
(5)
where $`𝝉`$ is the vector of isospin Pauli matrices, and $`m_0`$ is the bare quark mass \[for $`m_0=0`$ Eq. (5) is chiral invariant\]. The model of the nucleon considered here is described by the three-quark wave function that satisfies a four-dimensional Bethe-Salpeter (BS)-like three-body integral equation with pair interaction kernels given by the lowest order $`qq`$ irreducible diagrams corresponding to the Lagrangian of Eq. (5), namely
$$v_{ij}=iG\left[(I_s\otimes I_f\otimes I_c)_i(I_s\otimes I_f\otimes I_c)_j-(\gamma _5𝝉\otimes I_c)_i(\gamma _5𝝉\otimes I_c)_j\right]$$
(6)
where $`I_s`$, $`I_f`$ and $`I_c`$ are the unit operators in the Dirac, flavour and colour spaces, respectively, with the subscript $`i`$ ($`j`$) indicating that the corresponding operators act in the $`i`$-th ($`j`$-th) quark’s one-particle space. In Eq. (6) and in the equations below we treat the quarks as distinguishable particles, as the inclusion of antisymmetrisation can always be taken care of at the end. In the mentioned BS-like integral equation, the quark propagator $`d(p)`$ satisfies the (nonlinear) Dyson-Schwinger equation
$$d(p)=d_0(p)+d_0(p)\mathrm{\Sigma }(p)d(p)$$
(7)
where $`d_0(p)`$ is the bare quark propagator and the dressing term $`\mathrm{\Sigma }`$ is taken in the so-called Hartree approximation:
$$\mathrm{\Sigma }(p)=iG\int \frac{d^4k}{(2\pi )^4}\{\mathrm{\Lambda }^\mu d(k)\mathrm{\Lambda }_\mu -\mathrm{\Lambda }^\mu \text{tr}[d(k)\mathrm{\Lambda }_\mu ]\}.$$
(8)
Here $`\mathrm{\Lambda }_\mu `$ is the Lorentz four-vector $`(I_s\otimes I_f\otimes I_c,\gamma _5𝝉\otimes I_c)`$, and the trace “tr” is over the Dirac, flavour and colour indices. The kernel of Eq. (6) is effectively separable, so that the original three-body equation for the three-quark system can be reduced to a quark-diquark two-body equation, and it is this latter form which we shall use to calculate the electromagnetic corrections. In the channel where the two interacting quarks form a scalar, isoscalar, positive parity diquark with colour $`\overline{3}`$, the kernel of Eq. (6) reduces to
$$v_{f_1f_2,i_1i_2}=4ig_s(\gamma _5C\tau _2\beta ^a)_{f_1f_2}\times (C^{-1}\gamma _5\tau _2\beta ^a)_{i_1i_2};\beta _{ik}^a=i\sqrt{\frac{3}{2}}ϵ_{aik},C=i\gamma _2\gamma _0$$
(9)
where $`i_1i_2`$ $`(f_1f_2)`$ are triples of initial (final) quantum numbers of the first and second particle . Then the diquark propagator is
$$D(p)=\frac{4ig_s}{1-2ig_s\mathrm{\Pi }(p^2)}\text{ where }\mathrm{\Pi }(p^2)\delta _{ij}=\int \frac{d^4k}{(2\pi )^4}\text{tr}\left[\gamma _5\tau _id(p+k)\gamma _5\tau _jd(k)\right]$$
(10)
and the quark-diquark interaction kernel is given by the quark exchange term
$$K(p^{\prime },p)=\gamma _5d(p^{\prime }+p)\gamma _5\beta ^a\beta ^{a^{\prime }}.$$
(11)
With $`G_0=dD`$ in Eq. (1), the resulting equation for the quark-diquark Green function has the diagrammatic form illustrated in Fig. 1.
## III ELECTROMAGNETIC CORRECTIONS TO THE NJL MODEL
All electromagnetic corrections can be found by applying the general formulation of Eqs. (1)-(4) to the particular case under consideration. Thus in the case of a single quark, even though the solution of Eqs. (7) and (8) is known to be $`d(p)=i(p̸-m)^{-1}`$, where $`m`$ is a constituent quark mass (which is not zero even if $`m_0=0`$), these equations are needed for the proper construction of the external and internal (with respect to the quark propagator) photon currents. By identifying Eq. (7) as a special non-linear case of Eq. (1) (non-linear because $`\mathrm{\Sigma }`$ contains $`d`$) and gauging both Eq. (7) and Eq. (8), we obtain that the electromagnetic current $`d^\mu `$ of the dressed quark propagator satisfies the equation
$$d^\mu (p)=d(p+q)\left[\gamma ^\mu +iG\int \frac{d^4k}{(2\pi )^4}\{\mathrm{\Lambda }^\alpha d^\mu (k)\mathrm{\Lambda }_\alpha -\mathrm{\Lambda }^\alpha \text{tr}[d^\mu (k)\mathrm{\Lambda }_\alpha ]\}\right]d(p)$$
(12)
which is linear in $`d^\mu `$ and can be easily solved \[in Eq. (12) the momentum of the incoming photon, $`q`$, is contained implicitly in all the $`d^\mu `$ functions\]. The electromagnetic corrections to the dressed quark propagator, $`\delta d(p)`$, can then be found in a similar manner.
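For orientation, the constituent mass $`m`$ generated by Eqs. (7)–(8) can be obtained by simple fixed-point iteration once a regularization is chosen. The three-momentum cutoff and the parameter values in the sketch below are common illustrative conventions for the two-flavor model, not inputs fixed by this paper.

```python
# Fixed-point iteration of the two-flavor NJL gap equation in the Hartree
# approximation, m = m0 + 4*G*Nc*Nf*m*I1(m), with a 3-momentum cutoff.
# Regularization and the values of (G, cut, m0) are assumed for illustration.
import numpy as np

Nc, Nf = 3, 2
G, cut, m0 = 5.0, 0.65, 0.005           # GeV^-2, GeV, GeV

def I1(m, npts=4000):
    """(1/2 pi^2) * integral_0^cut dp p^2/sqrt(p^2 + m^2), midpoint rule."""
    dp = cut / npts
    p = (np.arange(npts) + 0.5) * dp
    return np.sum(p**2 / np.sqrt(p**2 + m**2)) * dp / (2.0 * np.pi**2)

m = 0.3                                 # starting guess (GeV)
for _ in range(200):
    m = m0 + 4.0 * G * Nc * Nf * m * I1(m)
print(f"constituent mass m = {1000*m:.0f} MeV for bare mass m0 = {1000*m0:.0f} MeV")
```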
We can similarly identify Eq. (10) for the diquark propagator $`D`$ with Eq. (1) and therefore write down the diquark current $`D^\mu `$ and the electromagnetic corrections to the diquark propagator $`\delta D`$ as
$$D^\mu =D\mathrm{\Pi }^\mu D;\delta D=D\delta \mathrm{\Pi }D+D\mathrm{\Pi }^\mu D\mathrm{\Pi }^\nu DD_{\mu \nu }.$$
(13)
Finally, we can write down the complete set of electromagnetic corrections $`\delta G`$ corresponding to the three-quark NJL model by identifying the quark-diquark equation of Fig. 1 with Eq. (1). The resulting expression for $`\mathrm{\Delta }`$ is shown diagrammatically in Fig. 2.
Clearly, our procedure for finding a complete set of electromagnetic corrections in the NJL model is of a general nature, and can be used to include complete sets of lowest order corrections due to other particle exchanges. Indeed, we have recently applied our method to derive the complete set of pionic corrections to the three-quark NJL model (where previously only a part of such corrections was included ), as well as to some constituent quark models with confinement (thereby clarifying some recent discussions regarding this matter ). As our pionic corrections are complete, the axial current is conserved exactly in the limit of massless bare quarks. This feature is important for maintaining chiral symmetry in next-to-leading-order approximations.
# Short-Range Interactions and Scaling Near Integer Quantum Hall Transitions
## I Introduction
In this paper we study the effects of short-range interactions on the nature of the transitions between quantized Hall plateaus in a disordered two-dimensional electron gas (2DEG). These transitions are generally believed to be prime examples of continuous quantum phase transitions, that is to say, examples of quantum critical phenomena. We focus here on samples with sufficiently strong disorder that fractional quantum Hall states do not intervene, so that the transitions are directly from one integer Hall plateau to another. Recently, Shahar and collaborators have presented an analysis of transport measurements that would seem to indicate an absence of a true quantum Hall liquid–insulator phase transition. The full implications of this are unclear at present, but we presume that this is an indication of the difficulty of reaching the asymptotic quantum critical regime in certain classes of disordered systems and will not consider it further in this paper.
The existence of quantized Hall plateaus is intimately related to the presence of disorder. In a single-particle description, all states are localized except for those at a single critical energy near the center of each Landau level. Thus the quantum phase transition is an unusual insulator to insulator transition with no intervening metallic phase. The critical point itself is quasimetallic, exhibiting anomalous diffusion. Associated with each transition between plateaus in $`\sigma _{xy}`$ there is a peak in $`\sigma _{xx}`$ which in principle becomes infinitely sharp at zero temperature (see however Ref.\[\]) and whose peak value is universal and close to $`0.5e^2/h`$. However, as we discuss below, since we have the peculiar circumstance that the set of extended states has measure zero, the zero temperature limit is quite singular in the absence of interactions. In the non-interacting case $`\sigma _{xx}`$ is actually rigorously zero in the limit of large sample size at all values of the magnetic field, including the critical values, for any non-zero temperature. Moreover, it has been argued previously, using a combination of renormalization group techniques and numerical calculations, that interactions of sufficiently short range are perturbatively irrelevant at the non-interacting fixed point. Hence systems with short-range interactions scale into this singular non-interacting limit. We show in this paper that although interactions are irrelevant in this sense, they generate a non-zero critical value of $`\sigma _{xx}`$ and determine the nature of temperature and frequency scaling near the critical point. We expect that interactions have similar consequences near other delocalization transitions at which they are formally irrelevant, although behavior in a different category is possible if interactions are sufficiently strongly irrelevant. We note that irrelevant interactions which control dynamical properties at a quantum critical point have been encountered previously, in the theory of metallic spin glasses.
In contrast to short-range, model interactions, true Coulomb interactions are believed to be relevant at the non-interacting fixed point. Hence one expects that the true critical point is interacting. One of the persistent mysteries in this problem is the fact that the experimentally observed value of the correlation exponent $`\nu `$ at the interacting fixed point appears to agree rather well with that predicted by numerical simulations of the non-interacting fixed point. That is, the correlation length exponent does not appear to change even though the value of the dynamical critical exponent $`z`$ is believed to change from $`z=1`$ for long-range interactions to $`z=2`$ for the short-range case. In the following, we do not consider this issue, and instead restrict our attention to short-range interactions. The Coulomb interaction can be made short-range by placing a metallic screening gate (ground plane) nearby. Such a situation was successfully realized by Jiang, Dahm and collaborators, although they did not study the quantum critical point, but rather the insulating phase at densities well below the $`0`$–$`1`$ plateau transition. They observed that the variable range hopping exponent changed from the Efros–Shklovskii value expected for long-range interactions to the Mott value expected for short-range interactions.
The remainder of the paper is organized as follows. We summarize the scaling description of the quantum Hall plateau transitions in the next section, and discuss in section III the pathologies associated with the finite temperature scaling behavior of the conductance in the non-interacting theory. From section IV onwards, systems with short-range interactions are considered. We first describe dephasing in the critical regime and the emergence of a long coherence time, and determine the inelastic exponent $`p`$, and the thermal exponent $`z_T\ne z`$, in terms of the scaling dimension of the interactions. The difficulties arising from a direct application of conventional scaling ideas are discussed. In section V, finite temperature scaling is analyzed in the presence of short-range interactions. We show that, although short-range interactions are formally irrelevant, they control aspects of the critical behavior. We demonstrate that the critical conductivity is non-zero provided interactions are not too strongly irrelevant. Finally, we construct new scaling variables and examine to what extent conductance scaling can be forced into the conventional scaling framework. Finite frequency scaling at $`T=0`$ is discussed in section VI and the general scaling in temperature and frequency in section VII. Concluding remarks are presented in section VIII.
## II Plateau Transitions and Scaling Theory
The integer quantum Hall transition (IQHT) is driven by varying the location of the chemical potential, $`\mu `$, relative to the critical value, $`\mu _c`$. Throughout this paper we denote the distance from the critical point by $`\delta =|\mu -\mu _c|`$. Since $`\mu _c`$ is dependent on magnetic field, $`B`$, the transition is often reached experimentally by changing $`B`$ while keeping electron density fixed. In the large $`B`$ limit, $`\mu _c`$ lies near the center of the Landau levels. A body of experimental data, reviewed for example in Ref. \[\], can be summarized by the statements that: (i) On either side of the transition ($`\delta \ne 0`$) the Hall conductivity is quantized and the dissipative conductivity has the limit $`\sigma _{xx}\to 0`$ at zero temperature; (ii) At the transition ($`\delta =0`$) the Hall conductivity is unquantized and $`\sigma _{xx}`$ remains finite at zero temperature, so that the critical state is conducting.
Critical behavior is cut off in the presence of a finite length scale. In this event, the transition has a finite width $`\delta ^{*}`$ within which the Hall conductivity deviates from the quantized values and $`\sigma _{xx}`$ is non-zero. This width is
$$\frac{\delta ^{*}}{\delta _0}\sim \mathrm{min}\left[\left(\frac{L_0}{L}\right)^{1/\nu },\left(\frac{T}{T_0}\right)^{1/z_T\nu },\left(\frac{\omega }{\omega _0}\right)^{1/z\nu }\right]$$
(1)
where $`L`$, $`T`$ and $`\omega `$ are the finite system size, temperature and measurement frequency in a specific experimental situation, and $`\delta _0`$, $`L_0`$, $`T_0`$ and $`\omega _0`$ are microscopic scales. The various exponents appearing in Eq. (1) have the following meaning: $`\nu `$ is the exponent of the single divergent length scale, the localization length $`\xi \propto \delta ^{-\nu }`$; $`z`$ is the dynamical exponent defining the length scale introduced by a finite frequency, $`L_\omega \propto \omega ^{-1/z}`$; and $`z_T`$ is the thermal exponent governing a temperature-dependent length scale $`L_\phi \propto T^{-1/z_T}`$. In the conventional dynamical scaling description of a quantum phase transition in which interactions are relevant and scale to a finite strength at the transition, $`z_T`$ is expected to be the same as $`z`$. All the three regimes in Eq. (1) have been probed experimentally, as well as the regime in which electric field strength sets the cut-off. Summarizing the results in the form in which they appear in the literature, we have $`\nu =2.3\pm 0.1`$, $`1/z_T\nu =0.42\pm 0.04`$, and $`1/z\nu =0.41\pm 0.04`$. This suggests that $`z_T=z=1`$, which is consistent with the interpretation that the Coulomb interaction is relevant at the transition. More generally, $`z_T`$ and $`z`$ may be independent exponents at a quantum phase transition. We show in the following that this is the case at the IQHT if the interaction scales to zero at the critical point. This happens for short-range interactions and could be realized experimentally by screening out the long-ranged Coulomb interaction with nearby ground planes or gates.
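The interplay of the three cutoffs in Eq. (1) is simple to tabulate numerically. The following sketch encodes the experimentally quoted exponents; the microscopic scales $`\delta _0`$, $`L_0`$, $`T_0`$, $`\omega _0`$ and the sample values are placeholders, not fitted quantities.

```python
# Sketch of Eq. (1): the transition width is set by whichever cutoff --
# sample size, temperature, or frequency -- yields the smallest scale.
# Exponents as quoted above; all microscopic scales are placeholders.
nu, z_T, z = 2.3, 1.0, 1.0      # experimental values quoted in the text

def width(L, T, omega, L0=1.0, T0=1.0, omega0=1.0, delta0=1.0):
    """Transition width delta*/delta0 from Eq. (1)."""
    return delta0 * min((L0 / L) ** (1.0 / nu),
                        (T / T0) ** (1.0 / (z_T * nu)),
                        (omega / omega0) ** (1.0 / (z * nu)))

# Example: here the temperature sets the narrowest width.
print(width(L=1e2, T=1e-4, omega=1e-2))
```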
We now turn to recent theoretical developments. The Hamiltonian of interest describes interacting electrons moving in a two-dimensional random potential in the presence of a magnetic field:
$$H=\sum _i\left[\frac{1}{2m}\left(\vec{p}_i+\frac{e}{c}\vec{A}\right)^2+V_{\mathrm{imp}}(\vec{r}_i)\right]+\frac{1}{2}\sum _{i\ne j}V(\vec{r}_i-\vec{r}_j),$$
(2)
where $`\vec{A}`$ is the external vector potential, $`V_{\mathrm{imp}}`$ is the one-body impurity potential, and $`V`$ is the two-body interaction potential. We write
$$V(\vec{r}_i-\vec{r}_j)=\frac{u}{|\vec{r}_i-\vec{r}_j|^\lambda },$$
(4)
where $`u`$ and $`\lambda `$ parameterize the strength and the range of the interaction. The existence of the IQHT in the model is not dependent on interactions, and the non-interacting theory, obtained by setting $`u=0`$, provides a simplified but concrete model which has allowed extensive quantitative calculations. A good understanding of the main features of the non-interacting critical point has emerged: the static localization length exponent has the value $`\nu =2.33\pm 0.03`$ and the dynamical exponent is $`z=d=2`$. However, the relevance of the free electron model to the IQHT in real materials depends on the nature and the effects of electronic interactions.
Imagine starting with a system at the noninteracting fixed point (NIFP), and switching on the interaction. One can ask whether this interaction is a relevant or irrelevant perturbation in the renormalization group (RG) sense. Such a stability analysis of the NIFP has been performed. For the unscreened Coulomb interaction, $`\lambda =1`$ in Eq. (4), $`u`$ has RG scaling dimension one and is therefore a relevant perturbation. The resulting flow away from the NIFP presumably leads to another, interacting fixed point (IFP) at which the effective interaction strength is finite. Critical phenomena in this case should be described by conventional dynamical scaling theory with two independent critical exponents, $`z`$ and $`\nu `$, and $`z_T=z`$. While one expects that $`z=1`$ on general grounds with Coulomb interactions, the value of $`\nu `$ is unknown and may be different from the value at the NIFP. Nevertheless, a scenario whereby Coulomb interaction changes $`z`$ but not $`\nu `$ from the noninteracting values has been conjectured. An alternative possibility is that there are two divergent lengths at the critical point, with different exponents.
We shall not consider long-range Coulomb interactions further. Instead, we focus on the case of short-range interactions having $`\lambda >2`$. As mentioned above, this case is physically relevant when the IQHT is studied in the presence of ground planes or metallic gates. It has been shown that for screened Coulomb interactions with $`\lambda >2+x_{4s}`$, $`x_{4s}\approx 0.65`$, the RG scaling dimension of $`u`$ is $`-\alpha `$ with $`\alpha =x_{4s}`$, so that interactions are an irrelevant perturbation. Notice that, in particular, the dipole-dipole interaction has $`\lambda =3`$ and thus belongs to this class of interactions. Moreover, for $`x_{4s}>\lambda -2>0`$, the interaction is still irrelevant, with the scaling dimension set by $`\alpha =\lambda -2`$. In all these cases, the effective interaction scales to zero at the transition in the asymptotic limit. The NIFP is therefore stable against interactions. As a result, $`\nu \approx 2.33`$ and $`z=2`$. It turns out, though, that short-range interactions, although irrelevant, control the finite temperature behavior of the conductance. As we shall see, the scaling function for the conductance is discontinuous at zero interaction strength when written in terms of a natural set of scaling variables. We will show that the scaling theory thus becomes unconventional, and a third independent critical exponent, the thermal exponent, $`z_T`$, emerges in the scaling arguments. The value of $`z_T`$ is set by the scaling dimension, $`\alpha `$, of the interaction strength: consideration of the dephasing time in the critical regime leads to $`z_T=2z/(z+2\alpha )`$. Since $`z_T`$ determines the transition width in the temperature scaling regime (cf. Eq. (1)), experiments can, in principle, determine the scaling exponent $`\alpha `$. We find, on the other hand, that the frequency scaling of the conductance in this case is conventional, with $`z=d=2`$, where $`d`$ is the spatial dimension of the system. We argue that quantum critical scaling behavior of this kind may be a general feature of finite temperature transport near quantum critical points, when interactions are irrelevant. The central feature is the existence of a time scale, the dephasing time $`\tau _\varphi \propto T^{-p}`$ where $`p=1+2\alpha /z`$, which is longer than the single characteristic time, $`\hbar /T`$, at a conventional quantum phase transition. The long coherence time results from the underlying free fermion description and its associated infinite number of conservation laws. As a result, for $`\omega ,T\to 0`$, the $`\omega /T`$-scaling in conventional quantum phase transitions is replaced by $`\omega /T^p`$-scaling.
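These exponent relations are elementary to evaluate. A minimal sketch, assuming only the relations quoted in this section ($`p=1+2\alpha /z`$ and $`z_T=2z/(z+2\alpha )`$) and the numerical values $`z=2`$, $`\nu \approx 2.3`$ and $`\alpha =x_{4s}\approx 0.65`$:

```python
# Exponents at the IQHT with short-range interactions, from the relations
# quoted in the text: p = 1 + 2*alpha/z and z_T = 2z/(z + 2*alpha).
z = 2.0        # dynamical exponent at the noninteracting fixed point
nu = 2.3       # localization length exponent
alpha = 0.65   # scaling dimension of the interaction strength (= x_4s)

p = 1.0 + 2.0 * alpha / z            # inelastic (dephasing) exponent
z_T = 2.0 * z / (z + 2.0 * alpha)    # thermal exponent
kappa = 1.0 / (z_T * nu)             # transition width: delta* ~ T^kappa
print(f"p = {p:.2f}, z_T = {z_T:.2f}, kappa = {kappa:.2f}")
# -> p = 1.65, z_T = 1.21, kappa = 0.36, the values used later in the text.
```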
## III Noninteracting Theory, $`u=0`$
### A T=0
We begin by describing the finite size scaling of the zero frequency conductance in the absence of interactions. Consider a 2D square sample of size $`L\times L`$. At $`T=0`$, the dimensionless conductance should depend only on $`L/\xi `$. Measuring the conductance in units of $`e^2/h`$, we write
$$g(\delta ,L)=𝒢_0(\delta L^{1/\nu }).$$
(5)
The scaling function $`𝒢_0`$ has the limiting behavior
$$\mathcal{G}_0(X)=\begin{cases}g_c,& X\to 0,\\ 0,& X\to \infty ,\end{cases}$$
(6)
where $`g_c`$ is a critical conductance at the transition. This quantity is expected to be universal for a given geometry and boundary conditions. In phase coherent, square samples under periodic transverse boundary conditions, $`g_c\approx 0.5`$. The behavior of $`𝒢_0(X)`$ is known from numerical work in various settings, and in most detail for square samples from transfer matrix calculations of the two-terminal Landauer conductance: the results of these are sketched in Fig. 1a. It decays exponentially for large $`X`$, according to $`𝒢_0(X)\sim \mathrm{exp}(-cX^\nu )`$, where $`c`$ is a constant. Hence, in the limit $`L\to \infty `$, $`g`$ is zero for all $`\delta `$ except $`\delta =0`$ at which it has the finite value $`g_c`$, as shown in Fig. 1b. We will denote the conductance in the thermodynamic limit, the quantity of interest throughout the paper, by suppressing the $`L`$ dependence in its argument. Thus
$$g(\delta )=\begin{cases}g_c,& \delta =0,\\ 0,& \mathrm{otherwise}.\end{cases}$$
(7)
### B $`T\ne 0`$
For noninteracting electrons, the conductivity at $`T\ne 0`$ is
$$\sigma _{xx}(\delta ,T,L)=\int dE\left(-\frac{\partial f}{\partial E}\right)\mathcal{G}_0(EL^{1/\nu }),$$
(8)
where $`𝒢_0`$ is the $`T=0`$ conductance scaling function given in Eq. (5), and $`f(E)`$ is the Fermi-Dirac distribution function
$$f(E)=\frac{1}{e^{\beta (E-\delta )}+1}.$$
(9)
Eq. (8) is a convolution of the derivative of the Fermi function (which has width $`k_\mathrm{B}T`$) with the $`T=0`$ conductance scaling function (which has width $`L^{-1/\nu }`$), as illustrated in Fig. 2. In the limit $`L\to \infty `$, Eqs. (7) and (8) imply that
$$\sigma _{xx}(\delta ,T)=0$$
(10)
for any $`\delta `$ if $`T\ne 0`$: within the noninteracting theory, the conductivity vanishes for all values of the Fermi energy at finite temperature. This strange result follows from the fact that the set of conducting states is of measure zero for this transition.
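This singular limit can be made concrete with a toy numerical version of Eq. (8). The sketch below assumes a model scaling function $`𝒢_0(X)=g_c\,e^{-c|X|^\nu }`$ reproducing the limits in Eq. (6) and the exponential tail quoted above; $`g_c`$, $`c`$ and the units are placeholders.

```python
import numpy as np

# Toy version of Eq. (8): convolve (-df/dE), of width ~ k_B*T, with the T = 0
# scaling function G0(E L^{1/nu}), of width ~ L^{-1/nu}.  The model form
# G0(X) = g_c*exp(-c|X|^nu) matches the limits of Eq. (6); g_c and c are
# placeholders, with units chosen so that k_B = 1.
nu, g_c, c = 2.33, 0.5, 1.0
G0 = lambda X: g_c * np.exp(-c * np.abs(X) ** nu)

def sigma_xx(delta, T, L, n=20001):
    E = np.linspace(delta - 30 * T, delta + 30 * T, n)
    x = (E - delta) / T
    # -df/dE for the Fermi function, written in an overflow-safe form:
    minus_dfdE = np.exp(-np.abs(x)) / (T * (1 + np.exp(-np.abs(x))) ** 2)
    return np.sum(minus_dfdE * G0(E * L ** (1.0 / nu))) * (E[1] - E[0])

# At fixed T != 0, sigma_xx(delta = 0) falls toward zero as L grows,
# illustrating Eq. (10):
for L in (1e2, 1e6, 1e10):
    print(L, sigma_xx(delta=0.0, T=1e-3, L=L))
```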
## IV Short-Ranged Interaction, $`u\ne 0`$
For the conductivity to be non-zero at finite temperatures near the transition, interactions are necessary, and we now examine the effect of short-range interactions. Since $`u`$ is an irrelevant coupling in the RG sense, the transitions at $`T=0`$ are described by the non-interacting fixed point. In general at such a fixed point, provided the density of states is finite, $`z=d`$ in $`d`$-dimensions, and so for the IQHT $`z=2`$. Under an RG length scale transformation $`b`$, $`u`$ transforms according to $`u^{\prime }=b^{-\alpha }u`$, and energy scales, $`ϵ`$, transform as $`ϵ^{\prime }=b^zϵ`$.
### A Naive scaling at $`\delta =0`$
The finite temperature conductivity at criticality is expected to have the scaling form
$$\sigma _{xx}(T,u)=b^{2-d}\,\mathcal{G}^{\prime }(b^zT,b^{-\alpha }u).$$
(11)
Choosing the scale factor $`b=T^{-1/z}`$, we obtain a new scaling function
$$\sigma _{xx}(T,u)=𝒢\left(uT^{\alpha /z}\right).$$
(12)
Eq. (10) implies, setting $`u=0`$, that $`𝒢(X=0)=0`$.
If $`u`$ were a conventional irrelevant scaling variable, $`𝒢`$ would have a power series expansion and one could write
$$\sigma _{xx}(T,u)=\mathcal{G}(0)+\sum _{l=1}^{\infty }(uT^{\alpha /z})^l\mathcal{G}_l(0).$$
(13)
Since $`𝒢(0)=0`$, Eq. (13) implies that $`\sigma _{xx}(T\to 0,u)=0`$. This result would, paradoxically, exclude the existence of a conducting critical state. In fact, as we show in the following sections, $`𝒢(X)`$ is a discontinuous function of its argument, $`X`$, at $`X=0`$ so that
$$\sigma _{xx}(T\to 0,u=0)=\mathcal{G}(X=0)=0$$
(14)
$$\sigma _{xx}(T\to 0,u\ne 0)=\mathcal{G}(X\ne 0)=g_c.$$
(15)
This discontinuous behavior is shown schematically in Fig. 3.
### B Dephasing in the Critical Regime by Interactions
For $`T\ne 0`$, interactions, relevant or irrelevant in the RG sense, will cause transitions between single particle states. This leads to a finite quasiparticle dephasing rate $`1/\tau _\phi \propto T^p`$. At a quantum phase transition, the exponent $`p`$ that enters the dephasing rate should not be taken from those for simple disordered metals in the large conductance regime, for it is the decay time of the critical eigenstates that matters. This should be determined by the underlying critical phenomena. A natural scaling form for the dephasing rate is
$$\frac{h}{\tau _\phi }=T\,Y^{\prime }(b^zT,b^{-\alpha }u),$$
(16)
where the prefactor $`T`$ is determined by the engineering dimension of $`1/\tau _\phi `$. Setting $`b=T^{-1/z}`$, we have
$$\frac{h}{\tau _\phi }=TY(uT^{\alpha /z}).$$
(17)
As $`u`$ is an irrelevant coupling (perturbation) which scales towards zero under renormalization group scale transformations, the unperturbed state (noninteracting fixed point) is therefore analytically connected to the perturbed state in the presence of $`u`$. Thus, a perturbative expansion in $`u`$ is justified. To lowest order, $`1/\tau _\phi \propto u^2`$ from a Fermi's Golden Rule estimate of the inelastic scattering rate. Thus, the expected leading scaling behavior is
$$\frac{1}{\tau _\phi }\sim u_{\mathrm{eff}}^2T\sim u^2T^{1+2\alpha /z},$$
(18)
or
$$\tau _\phi \propto T^{-p},\qquad p=1+\frac{2\alpha }{z}.$$
(19)
For the case of a quantum Hall transition in the presence of a screening gate, we have $`z=2`$ and $`\alpha \approx 0.65`$, and we obtain $`p\approx 1.65`$.
### C Dephasing Length and Thermal Exponent $`z_T`$
For a conventional quantum phase transition (with finite interaction strength at the fixed point), there is one length scale ($`\xi \propto \delta ^{-\nu }`$) and one time scale ($`\mathrm{\Omega }^{-1}\propto \xi ^z\propto \delta ^{-z\nu }`$) away from criticality. There are no finite correlation length or time scales at criticality. In such a critical system at finite temperature $`T`$, one expects to have one characteristic time $`\hbar /T`$, the significance of which is particularly clear in imaginary time, where it sets a finite size in the time direction, as shown in Fig. 4. However in the present case, we have obtained an additional (real) time, $`\tau _\phi `$, which is much larger than $`\hbar /T`$ as $`T\to 0`$, provided $`p>1`$ ($`\alpha >0`$), which is the case if interactions are irrelevant. For further discussion of quantum critical transport in the incoherent long time limit see Ref.\[\].
We now turn to the dephasing length, $`L_\phi `$, associated with $`\tau _\phi `$. The irrelevance of the interaction at the NIFP allows us to view the system in terms of weakly interacting diffusive quasiparticles. The dephasing length that cuts off the phase coherent d.c. transport is thus
$$L_\phi =\sqrt{D\tau _\phi }\propto T^{-p/2},$$
(20)
where $`D`$ is the diffusion constant at the non-interacting critical point, obtained from the wavevector, $`q`$, and frequency, $`\omega `$, dependent coefficient, $`D(q,\omega )`$, in the limit: first $`q\to 0`$ and then $`\omega \to 0`$. Thus, anomalous diffusion present in the opposite limit will not enter our discussion. We show below that, even though $`u`$ is irrelevant in the RG sense, the important length scale introduced by temperature is $`L_\phi `$, so that
$$L_\phi \propto T^{-1/z_T},$$
(21)
$$z_T=\frac{2}{p}=\frac{2z}{z+2\alpha }.$$
(22)
This length enters the scaling of the transition width in Eq. (1). For the IQHT in the presence of short-range interactions, we thus obtain $`z_T\approx 1.21`$.
## V Temperature Scaling of Conductivity Near Criticality
To calculate the conductivity in the presence of a finite dephasing length, we follow the standard procedure and divide the system into $`L_\phi \times L_\phi `$ phase coherent blocks. Transport within each block can be described by phase coherent single-electron transport using the underlying noninteracting theory. The disorder-averaged conductivity that we are interested in can be obtained by averaging over the phase coherent blocks. The outcome of this exercise is that the system size $`L`$ in Eq. (8) should be replaced by $`L_\phi `$, which leads to
$$\sigma _{xx}(\delta ,T,u)=\int dE\left(-\frac{\partial f}{\partial E}\right)\mathcal{G}_0(EL_\phi ^{1/\nu }),$$
(23)
where $`𝒢_0`$ is a scaling function. Although the precise phase coherent geometry appropriate for this averaging procedure is unclear, this scaling function is expected to have the same qualitative behavior as $`𝒢_0`$ in Eq. (8). Note that this discussion omits contributions to transport from variable range hopping, which will in fact dominate when $`𝒢_0`$ is very small.
Let $`x=\beta (E-\delta )`$. We then have
$$\sigma _{xx}(\delta ,T,u)=\int dx\left(-\frac{\partial f(x)}{\partial x}\right)\mathcal{G}_0(xk_\mathrm{B}TL_\phi ^{1/\nu }+\delta L_\phi ^{1/\nu }),$$
(24)
where $`f(x)=1/(e^x+1)`$.
### A At Criticality: $`\delta =0`$, $`T\to 0`$
We first study the behavior of the critical conductivity at low temperatures. At $`\delta =0`$, the second term in the argument of $`𝒢_0`$ in Eq. (24) vanishes, leading to
$$\sigma _{xx}(\delta =0,T,u)=\int dx\left(-\frac{\partial f(x)}{\partial x}\right)\mathcal{G}_0\left[x\left(T/T_0\right)^{1-p/2\nu }\right],$$
(25)
where $`T_0\sim (u^2/D)^{1/(2\nu -p)}`$ is a constant determined by the bare interaction strength and the diffusion constant. To understand the behavior of $`\sigma _{xx}`$ which results from Eq. (25), one should compare the width of the thermal window, determined by $`(-\partial f/\partial x)`$, with the width of the window over which electrons are mobile, determined by the scaling function $`𝒢_0(X)`$ (see Fig. 1a). There are two different low-$`T`$ behaviors for $`\sigma _{xx}`$, depending on the value of $`p/2\nu `$.
#### 1 $`p<2\nu `$: the case of IQHT
For $`p<2\nu `$, the argument of the scaling function in Eq. (25) approaches zero as $`T\to 0`$. Thus, using Eq. (6), we have
$$\sigma _{xx}(\delta =0,T\to 0,u)\to \mathcal{G}_0(X\to 0)=g_c.$$
(26)
In this case, the low-$`T`$ conductance is finite (cf. Eq. (15)) (despite the fact that the set of conducting states is of measure zero) and has a value comparable to the critical phase-coherent conductance in the noninteracting theory. Hence interactions control the low-temperature behavior, even though they are irrelevant in the RG sense. The quantum Hall transition with short-range interactions produced by a screening gate falls into this category, since $`p\approx 1.65`$ and $`\nu \approx 2.33`$ so that $`p/2\nu \approx 0.35`$.
#### 2 $`p>2\nu `$
For sufficiently irrelevant interactions (large $`\alpha `$), the condition $`p>2\nu `$ may be satisfied. In this case, the argument of $`𝒢_0`$ in Eq. (25) diverges as $`T\to 0`$ for fixed $`x`$. Taking $`𝒢_0(X)`$ from Eq. (6),
$$\sigma _{xx}(\delta =0,T,u)\sim \int dx\,\mathcal{G}_0\left[x(T_0/T)^{p/2\nu -1}\right]$$
(27)
$$\propto T^{p/2\nu -1}.$$
(28)
Thus the critical conductivity vanishes as $`T\to 0`$ according to a universal power law. Note that the power law exponent cannot be obtained using naive scaling with irrelevant couplings by following the approach discussed in section IV A. Again, this vanishes because the set of conducting states is of measure zero. The difference between the results for the two cases $`p<2\nu `$ and $`p>2\nu `$ will be further elucidated below.
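Both low-temperature behaviors follow from a direct numerical evaluation of Eq. (25). A sketch using the same model scaling function as before ($`g_c`$, $`c`$ and $`T_0`$ are placeholders):

```python
import numpy as np

# Toy evaluation of Eq. (25) in the two regimes.  G0 is the model scaling
# function used earlier; g_c, c and T_0 are placeholders.
nu, g_c, c, T0 = 2.33, 0.5, 1.0, 1.0
G0 = lambda X: g_c * np.exp(-c * np.abs(X) ** nu)

def sigma_crit(T, p, n=40001):
    x = np.linspace(-40.0, 40.0, n)
    minus_dfdx = np.exp(-np.abs(x)) / (1 + np.exp(-np.abs(x))) ** 2
    scale = (T / T0) ** (1 - p / (2 * nu))
    return np.sum(minus_dfdx * G0(x * scale)) * (x[1] - x[0])

for T in (1e-2, 1e-4, 1e-6):
    print(T, sigma_crit(T, p=1.65),   # p < 2*nu: approaches g_c
          sigma_crit(T, p=6.0))       # p > 2*nu: vanishes as T^(p/2nu - 1)
```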
### B Transition Width: $`\delta \ne 0`$, $`T\to 0`$
Hereafter, we specialize to $`p<2\nu `$ (case 1 above), which is appropriate for the quantum Hall transition with short-range interactions. For $`\delta \ne 0`$ and small $`T`$, the first term in the argument of $`𝒢_0`$ in Eq. (24) can be ignored, leading to
$$\sigma _{xx}(\delta ,T)\approx \mathcal{G}_0(\delta L_\phi ^{1/\nu }).$$
(29)
Making use of $`L_\phi \propto T^{-p/2}=T^{-1/z_T}`$ from Eqs. (22) and (20), this can be rewritten as
$$\sigma _{xx}(\delta ,T)=𝒢_0\left(\frac{c\delta }{T^{1/z_T\nu }}\right).$$
(30)
The transition width is determined by the value of $`\delta `$ at which the scaling variable in Eq. (30) is of order one. We obtain
$$\delta ^{*}\propto T^{1/z_T\nu }.$$
(31)
We can view $`\delta ^{*}`$ as the width of the energy window of states whose localization length exceeds the phase coherence length. If the width of this window exceeds the energy window defined by the Fermi function through the temperature (i.e., if $`z_T\nu >1`$ or equivalently $`p<2\nu `$), then the conductivity will scale to a finite value as discussed above. Conversely, if the energy window of states is narrower than the temperature, the conductivity becomes sensitive to the fact that the set of conducting states is of measure zero.
At large argument, the scaling function in Eq. (29) falls off exponentially with $`L_\phi /\xi `$, being controlled by the crossover to the non-interacting localized phase. But the interaction $`u`$, although irrelevant at the critical fixed point, will give rise to conduction by variable range hopping in the localized phase. Because $`u`$ is dangerously irrelevant in this sense, variable range hopping will not be part of the universal crossover scaling function in Eq. (29), but will only set in when $`L_\phi `$ exceeds the hopping length $`R_{hop}`$. Naive scaling suggests that the ratio of this longer crossover length to $`\xi `$ will diverge as a power in $`\xi `$.
From Eq. (31) we deduce the temperature scaling exponent $`\kappa `$ for the case of short-range interactions,
$$\kappa =\frac{1}{z_T\nu }\approx 0.36.$$
(32)
Interestingly, because the value of $`z_T`$ happens to be close to 1, the expected value with long-range Coulomb interactions, the value of $`\kappa `$ is quite close to the corresponding value $`\kappa \approx 0.42`$ as well, provided that $`\nu `$ is indeed the same in both cases. This suggests that temperature scaling of the transition width will not be dramatically altered by the presence of a screening gate, and careful measurements will need to be made to see the change in the exponent. An important feature of Eq. (29) is that it implies that the correct thermal scaling variable is
$$\frac{L_\phi }{\xi }\propto \frac{1}{(T\xi ^{z_T})^{1/z_T}},$$
(33)
and the thermal scaling function has the form
$$\sigma _{xx}(\delta ,T)=\mathcal{G}_0([T\xi ^{z_T}]^{-1/z_T\nu }).$$
(34)
These results suggest that by choosing appropriate scaling variables, the conductivity can be expressed in terms of a scaling function which is free of singularities in the limit of small scaling arguments. This will allow a description of transport within the conventional scaling framework, despite the fact that the scaling function $`𝒢(X)`$ of Eq. (12) is discontinuous.
### C Conventional Scaling Framework
The basic scaling form at the noninteracting fixed point reads
$$\sigma _{xx}(\delta ,T,u)=\mathcal{G}^{\prime }(b^{1/\nu }\delta ,b^zT,b^{-\alpha }u).$$
(35)
At scale $`b=\xi `$, one writes
$$\sigma _{xx}(\delta ,T,u)=\mathcal{G}(T\xi ^z,u\xi ^{-\alpha }),$$
(36)
where, as we have shown earlier, the scaling function has a discontinuity when its second argument approaches zero. In view of Eqs. (33) and (34), it is convenient to change the scaling variables according to
$$(T\xi ^z,u\xi ^{-\alpha })\to (L_\phi /\xi ,u\xi ^{-\alpha }).$$
(37)
This is possible because
$$\frac{L_\phi }{\xi }=\frac{1}{(T\xi ^z)^{p/2}(u\xi ^{-\alpha })}.$$
(38)
Hence, we can write as an alternative to Eq. (36)
$$\sigma _{xx}(\delta ,T,u)=\mathcal{G}_{\mathrm{reg}}(L_\phi /\xi ,u\xi ^{-\alpha }),$$
(39)
in which $`𝒢_{\mathrm{reg}}`$ is a regular scaling function when its second argument is taken to zero. Specifically,
$$𝒢_{\mathrm{reg}}(L_\phi /\xi ,0)=𝒢_{\mathrm{reg}}(\delta ^\nu /T^{1/z_T},0)=𝒢_0(\delta /T^{1/z_T\nu }),$$
(40)
where use has been made of Eq. (29) in the last step and the behavior of $`𝒢_0(X)`$ is shown in Fig. 1a. It is perhaps important to note that the change of variables in Eq. (37) has not removed the singularity associated with the scaling function in Eq. (36). Instead, it simply makes the singularity inaccessible in Eq. (39), since $`u\to 0`$ implies $`L_\phi \to \infty `$.
## VI Frequency Scaling At $`T=0`$
### A Noninteracting case, $`u=0`$
For studying the frequency scaling, we start by returning to the noninteracting theory. Scaling implies
$$\sigma _{xx}(\delta ,\omega )=𝒢_0^{}(b^{1/\nu }\delta ,b^z\omega ).$$
(41)
Putting $`b=\xi `$ leads to
$$\sigma _{xx}(\delta ,\omega )=𝒢_0(\omega \xi ^z).$$
(42)
The behavior of the scaling function in Eq. (42) is expected from the Mott formula to be
$$\mathcal{G}_0(X)=\begin{cases}X^2\mathrm{ln}^{d+1}X,& X\to 0,\\ \mathrm{const},& X\to \infty ,\end{cases}$$
(43)
and has been studied numerically. Thus the natural frequency scaling variable is $`\omega \xi ^z`$, in contrast to the temperature scaling variable, $`T\xi ^{z_T}`$, which appears in Eqs. (33) and (34).
### B Short-range Interactions, $`u\ne 0`$
Including $`u`$ as in Eq. (36), we write
$$\sigma _{xx}(\delta ,\omega )=\mathcal{G}(\omega \xi ^z,u\xi ^{-\alpha }).$$
(44)
This function has a nonsingular limit, i.e. $`𝒢(X,Y\to 0)=𝒢_0(X)`$. Thus we conclude that frequency scaling is conventional, so long as $`p<2\nu `$. Anticipating that this is the case for the IQHT with short-range interactions, further subtleties that occur in the opposite limit ($`p>2\nu `$) will not be discussed here. The transition width for $`\omega \ne 0`$ but $`T=0`$ is determined by setting $`\omega \xi ^z(\delta ^{*})=1`$, giving
$$\delta ^{*}(T=0,\omega )\propto \omega ^{1/z\nu }.$$
(45)
This should be contrasted with $`\delta ^{*}(T,\omega =0)\propto T^{1/z_T\nu }`$ where $`z_T=2/p`$, Eq. (31).
### C Irrelevance of Frequency Dephasing
A finite frequency can also lead to dephasing through interactions. For $`u=0`$, the only length scale introduced by a finite frequency is
$$L_\omega =\sqrt{D/\omega }.$$
(46)
However, when $`u\ne 0`$, there is a frequency-induced dephasing time $`\tau _\phi (\omega )`$ which can be accounted for by including $`\omega `$ in the discussion of section IV B. Following Eqs. (16)–(19), one obtains,
$$\frac{1}{\tau _\phi (\omega )}\sim u^2\omega ^p$$
(47)
at $`T=0`$. This leads to another frequency-dependent length scale in the diffusive regime, $`L_\omega ^u=\sqrt{D\tau _\phi (\omega )}`$. Thus
$$L_\omega ^u\sim \sqrt{D/u^2}\,\omega ^{-1/z_T}.$$
(48)
The ratio of the two lengths is
$$\frac{L_\omega ^u}{L_\omega }\sim \omega ^{-(z-z_T)/zz_T}.$$
(49)
Provided interactions are irrelevant, so that $`\alpha >0`$ and $`z_T<2`$ from Eq. (22), this ratio diverges in the limit $`\omega \to 0`$. The fact that $`L_\omega ^u\gg L_\omega `$ ensures that frequency dephasing results only in corrections to scaling of the conductivity, and is irrelevant in the asymptotic limit.
## VII General Temperature and Frequency Scaling
In this section, we discuss the general scaling behavior of the conductivity as a function of both frequency and temperature. We start with the basic scaling form at the NIFP,
$$\sigma _{xx}(\delta ,T,\omega ,u)=\mathcal{G}(T\xi ^z,\omega \xi ^z,u\xi ^{-\alpha }).$$
(50)
We convert to new scaling variables as in Eq. (38). Then
$$\sigma _{xx}(\delta ,T,\omega ,u)=\mathcal{G}_{\mathrm{reg}}(L_\phi /\xi ,\omega \xi ^z,u\xi ^{-\alpha }),$$
(51)
where $`𝒢_{\mathrm{reg}}(X,Y,Z)`$ is continuous in $`Z`$ at $`Z=0`$. Let
$$𝒢_{\mathrm{reg}}(L_\phi /\xi ,\omega \xi ^z,0)=𝒢_0(L_\phi /\xi ,\omega \xi ^z).$$
(52)
Thus for $`\xi \gg 1`$ we have
$$\sigma _{xx}(\delta ,T,\omega ,u)=𝒜(T\xi ^{z_T},\omega \xi ^z).$$
(53)
Now consider the approach to the critical point at $`\delta =0`$. As $`\xi \to \infty `$, one argument of $`𝒢_0`$ diverges and the other approaches zero, but the scaling variable
$$(L_\phi /\xi )^{-z}\,\omega \xi ^z=\frac{\omega }{T^p},$$
(54)
remains finite for $`\omega ,T\to 0`$. (We have used $`z_T=2/p`$ and $`z=2`$.) Thus at the critical point
$$\sigma _{xx}(\delta =0,T,\omega ,u)=𝒜\left(\omega \tau _\phi \right)=𝒜\left(\frac{\omega }{T^p}\right).$$
(55)
We see that $`\omega /T^p\simeq \omega /T^{1.65}`$ is the scaling variable at criticality, in contrast to the conventional situation in which the interaction $`u`$ scales to a finite value at the fixed point and the scaling variable is $`\omega /T`$.
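The distinction between the two scaling variables is easy to illustrate with synthetic data. In the sketch below the toy scaling function $`𝒜`$ is a placeholder form, not a prediction:

```python
# Illustration of Eq. (55): if sigma_xx(omega, T) = A(omega / T^p), then data
# taken at fixed omega/T^p coincide, while data taken at fixed omega/T drift.
# The functional form of A below is a toy placeholder.
p = 1.65
A = lambda y: 0.5 / (1.0 + y ** 2)

for T in (1e-2, 1e-3, 1e-4):
    s_scaled = A((1.0 * T ** p) / T ** p)   # omega chosen so omega/T^p = 1
    s_naive = A((1.0 * T) / T ** p)         # omega chosen so omega/T   = 1
    print(T, s_scaled, s_naive)
# The first column is T-independent; the second is not, exposing omega/T^p
# rather than omega/T as the critical scaling variable.
```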
## VIII Summary
We have shown that, in the presence of short-range (screened Coulomb) interactions, the integer quantum Hall transition is a quantum phase transition of an unconventional kind. We find that the interactions, though irrelevant, are responsible for the existence of a finite critical conductivity. In addition, the conventional $`\omega /T`$ scaling at criticality is replaced by $`\omega /T^p`$ scaling, where $`p`$ is a critical exponent controlling the inelastic dephasing time. As a result, there exist two independent dynamical scaling exponents $`z_T\ne z`$ for temperature and frequency respectively. The dynamic exponents determine the physical length scales associated with $`T`$ and $`\omega `$: $`(L_\phi ,L_\omega )\propto (T^{-1/z_T},\omega ^{-1/z})`$. These unconventional results follow from the fact that, though short-range interactions are irrelevant at the critical point, the physical behavior is discontinuous in the interaction strength in the non-interacting limit. Associated with this is the existence of a coherence time much longer than the conventional quantum coherence time, $`\hbar /T`$, as interactions scale to zero and the system scales towards the non-interacting fixed point. We have shown that the scaling exponent $`z_T`$ (or $`p`$) is completely determined by the scaling dimension of the leading irrelevant interaction. The physics discussed here may in fact be quite general for quantum critical transport phenomena such as the conventional Anderson-Mott metal-insulator transitions, whenever the interactions scale to zero at the fixed point.
For the IQHT with short-range interactions, we have the set of critical exponents
$$\nu \approx 2.3,\qquad z_T\approx 1.2,\qquad z=2,$$
(56)
which describe the scaling with sample size, temperature and frequency according to Eq. (1).
This behavior can be checked experimentally, for example by looking for a change in the temperature scaling of the transition width, whose exponent will change from $`\kappa \approx 0.42`$ to $`\kappa \approx 0.36`$, or by looking at the frequency/temperature scaling described in Eq. (55), where a larger change in exponent is expected. The experimental requirement is that the long-range Coulomb interaction between electrons at large distances be screened, so that they interact via a residual, short-range potential.
ACKNOWLEDGMENTS
We would like to thank Assa Auerbach, Bodo Huckestein, Subir Sachdev, and especially Dung-Hai Lee and Shivaji Sondhi for many valuable discussions. JTC is supported by EPSRC Grant No. GR/MO4426. MPAF is supported by NSF Grants DMR-97-04005 and DMR-95-28578. SMG is supported by NSF Grant DMR-9714055. ZW is supported by DOE Grant DE-FG02-99ER45747 and an award from Research Corporation. The authors would like to thank the Institute for Theoretical Physics at UCSB where this work was begun and the generous support of NSF Grant PHY94-07194.
# Hydrodynamical Simulations of the Lyman Alpha Forest: Model Comparisons
## 1 Introduction
Several numerical simulations of the Ly$`\alpha `$forest in cold dark matter (CDM) dominated cosmologies have been performed in recent years and compared with observations (Cen et al. 1994; Zhang, Anninos, & Norman 1995; Hernquist et al. 1996; Miralda–Escudé et al. 1996; Zhang et al. 1996; Davé et al. 1997; Bond & Wadsley 1997; Zhang et al. 1998). Remarkably all the simulations have been able to reproduce the measured neutral hydrogen column density distribution, the size of the absorbers (Charlton et al. 1997), and the line number evolution reasonably well, despite the differences in the cosmological models used: Cen et al. adopt a $`\mathrm{\Lambda }`$CDM model; Zhang et al. investigate sCDM models with both an unbiased and a cluster scale normalization; Hernquist et al. (1996) evolve an sCDM model with a cluster scale normalization. The distribution of Doppler parameters has fared somewhat less well: the predicted distribution peaks toward lower values than observed when the simulations are performed with adequate resolution (Bryan et al. 1998; Theuns et al. 1998). Nonetheless, the generally good agreement with observations of the Ly$`\alpha `$forest suggests that the models are capturing the essential physical properties of the absorbers. This has prompted recent work by Croft et al. (1998) aimed at using flux statistics of the observational data to extract the fluctuation spectrum of the underlying cosmology. We are thus encouraged to investigate the possibility that differences in the statistical properties of the Ly$`\alpha `$forest predicted by different cosmological models may provide a means of testing the models.
The objective of this paper is to compare the Ly$`\alpha `$forest statistics derived from simulations in different cosmological models and to investigate what key properties of the cosmological models control a given statistic. We present results from nine numerical simulations using five different background cosmological models, three of which are flat with no cosmological constant, one is open, and one is flat with a nonzero cosmological constant. For five of the simulations, which we will refer to as the model comparison study, the parameters of the cosmological models have been selected by their ability to match the local or low redshift observations, although all of these models except the standard cold dark matter (sCDM) model are also consistent with COBE measurements of the cosmic microwave background. A tilted CDM model is further designed to match COBE constraints on the normalization of the power spectrum on large scales. In the remaining four simulations we keep the underlying cosmology fixed (sCDM) while varying the normalization of the fluctuation power spectrum in order to clarify the dependence of the Ly$`\alpha `$statistics on this parameter. The radiation field is normalized to the absorption properties of the Ly$`\alpha `$forest as measured at high redshifts. While the emphasis in this paper is on comparing cosmological models, we also test how well the models are doing using the tabulated statistics of the Ly$`\alpha `$forest as determined primarily by Kim et al. (1997) for several QSO lines-of-sight. A more complete comparison with existing data from several observational groups will be presented in Meiksin et al. (1999).
The paper is organized in the following way. In §2 we describe the cosmological models and simulation technique. In §3 we investigate the model differences, power dependence, and redshift evolution in the raw opacity data as characterized by nonparametric statistics of the flux and optical depth. In §4 we present a line analysis of the spectra generated by the various simulations focusing on the column density distribution and line number evolution statistics. In §5 we discuss the Doppler b parameter distributions and related nonparametric statistics, and in §6 we present model predictions for He $`\mathrm{II}`$absorption. We summarize our results in §7.
## 2 The Models and Simulations
All the model background spacetimes we consider are in the context of Cold Dark Matter (CDM) dominated cosmologies. We examine the following five models: a standard critical density flat CDM model (sCDM), a flat CDM model with a nonvanishing cosmological constant ($`\mathrm{\Lambda }`$CDM ), a topologically open CDM model (OCDM), the standard CDM model but with the power spectrum of the density perturbations tilted (tCDM) to match the normalization on large scales as determined from the COBE measurements of the Cosmic Microwave Background (Bunn & White 1997), and a flat critical density mixed dark matter model with a hot component added to the CDM (CHDM). There are several important and well–established astrophysical measurements which constrain the various combinations of cosmological parameters. The parameters for each model, which we list in Table 1, have been determined to provide good consistency with these observations. For example, the combination $`\mathrm{\Omega }_Bh^2`$ is restricted by Big Bang nucleosynthesis constraints and the measured abundance of primordial deuterium to lie in the range $`0.015`$–$`0.025`$ (Copi, Schramm, & Turner 1995; Burles & Tytler 1998). In addition, because the H $`\mathrm{I}`$ column density scales approximately as $`(\mathrm{\Omega }_bh^2)^2`$ for a fixed UV radiation intensity, we choose $`\mathrm{\Omega }_b`$ and $`h`$ so that $`\mathrm{\Omega }_bh^2`$ is the same for three of the models: sCDM, $`\mathrm{\Lambda }`$CDM , and OCDM. The fluctuation normalization in a sphere of $`8h^{-1}`$ Mpc is defined to match observations of the number density of galaxy clusters (White, Efstathiou, & Frenk 1993; Bond & Myers 1996) in all the models. In addition, a tilt has been applied to the CDM power spectrum in the tCDM model in order to approximately match the amplitude of the CMB quadrupole as measured by COBE (Bunn & White 1997). The cosmological constant in the $`\mathrm{\Lambda }`$CDM case is consistent with the upper limit ($`\mathrm{\Omega }_\mathrm{\Lambda }<0.7`$) of Maoz and Rix (1993) and the best fit parameters of Ostriker and Steinhardt (1995). One of the major problems with the sCDM model is its difficulty in matching observations of the large scale structures in the universe. Since the standard CDM model is historically one of the most studied models, however, we use it as our canonical model to which the perhaps more viable additional models considered here may be compared and through which we investigate the dependence of the Ly$`\alpha `$ statistics on the fluctuation power spectrum. We refer the reader to Zhang et al. (1995, 1997, 1998) for further details and results from our previous sCDM simulations.
The initial data were generated using COSMICS (Bertschinger 1995) with the BBKS transfer function (Bardeen et al. 1986) to compute the starting redshifts and the initial particle positions and velocity perturbations appropriate for all models except CHDM. We used CMBFAST (Seljak & Zaldarriaga 1996) to solve the linearized Boltzmann equations to set the initial conditions for CHDM. For the comoving box size adopted (9.6 Mpc) and the corresponding comoving grid cell size (37.5 kpc in our high resolution runs), the relevant wavenumber domain of the simulations at $`z=0`$ is $`168>k>0.65`$ Mpc<sup>-1</sup>, where $`k=2\pi /\ell `$ and $`\ell `$ is the length scale. Over this domain, the sCDM, $`\mathrm{\Lambda }`$CDM and OCDM models all have a similar power distribution. The tCDM and CHDM models, on the other hand, have an overall lower normalization (see Table 1) in addition to a steeper slope that drops slightly more sharply than the other models over the smaller scales. In Figure 1 we show the linear power spectra for these models evolved to $`z=3`$, the redshift at which we present many of our results. Since previous work (Zhang et al. 1998) indicates that the sizes of the low column density absorbers at $`z3`$ are $`\sim 100`$ kpc, it is useful to characterize the models in terms of their power at these small fluctuation scales. A useful measure of this power, introduced by Gnedin (1998), is
$$\sigma _{34}^2=\int _0^{\infty }P(k)\,e^{-2k^2/k_{34}^2}\,\frac{k^2\,dk}{2\pi ^2}$$
(1)
(where $`k=2\pi /\ell `$, $`P(k)`$ is the linear power spectrum at $`z=3`$ and $`k_{34}=34\mathrm{\Omega }_0^{1/2}h`$ Mpc<sup>-1</sup>). This is also listed for each model in Table 1.
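As an illustration, $`\sigma _{34}`$ can be evaluated by direct quadrature. The sketch below uses a pure power-law $`P(k)`$ as a stand-in for the CDM transfer functions of the models in Table 1; its normalization and slope are placeholders.

```python
import numpy as np

# Numerical sketch of Eq. (1): the small-scale power measure sigma_34^2.
# The power-law P(k) below is a toy stand-in for the model spectra; its
# normalization and slope are placeholders.
Omega0, h = 1.0, 0.5
k34 = 34.0 * np.sqrt(Omega0) * h                    # Mpc^-1

def P(k):                                           # toy linear P(k) at z = 3
    return 1.0e3 * k ** (-2.3)                      # Mpc^3

k = np.logspace(-3.0, np.log10(20.0 * k34), 4000)   # integration grid, Mpc^-1
f = P(k) * np.exp(-2.0 * k ** 2 / k34 ** 2) * k ** 2 / (2.0 * np.pi ** 2)
sigma34_sq = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(k))   # trapezoid rule
print(np.sqrt(sigma34_sq))
```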
In addition to specifying a cosmological model, it is also necessary to include a background UV radiation field to ionize the IGM. We have implemented the spectrum computed by Haardt & Madau (1996) for a flat universe on the basis of radiation transfer in a clumpy universe and the measured luminosity function of QSOs, accounting for QSO sources, absorption by the Ly$`\alpha `$forest and Lyman limit systems, and re-emission of the recombination radiation from the absorbing clouds. This spectrum reionizes the universe between redshifts $`7`$ and $`6`$ and peaks at about $`z=2`$. Because the OCDM model corresponds to a cosmology not considered by Haardt & Madau (1996), we use the field for a flat universe for this model also, noting that the neutral fractions are in any case rescaled, as described below, to match observations. We also note that only clouds which are optically thin at the Lyman edge are considered in this paper. Hence the optically thin limit is a good approximation and it is not necessary to account for self–shielding and radiative transfer of the external ionizing radiation field.
The numerical computations were performed using two different numerical codes, Kronos and Hercules, each in a simulation box of length $`9.6\mathrm{Mpc}`$ comoving with the universal expansion. Kronos (Bryan et al. 1995) is a single grid Eulerian code that uses a particle-mesh algorithm to follow the dark matter and the piecewise parabolic method (PPM) to simulate the gas dynamics. Since non–equilibrium chemistry and cooling processes can be important, six particle species (H $`\mathrm{I}`$, H $`\mathrm{II}`$, He $`\mathrm{I}`$, He $`\mathrm{II}`$, He $`\mathrm{III}`$, and the electron density) are followed with a sub-stepped backward finite-difference technique (Abel et al. 1997; Anninos et al. 1997). This is the same non–equilibrium chemistry and cooling model used in our previous studies of the Ly$`\alpha `$ Forest (Zhang et al. 1995, 1997, 1998; Charlton et al. 1997; Bryan et al. 1998). For sCDM, $`\mathrm{\Lambda }`$CDM , OCDM, and tCDM we use $`256^3`$ grid cells in the simulation box to follow the evolution of $`128^3`$ dark matter particles. This results in a spatial resolution of $`\mathrm{\Delta }x=37.5\mathrm{kpc}`$. For CHDM $`128^3`$ grid cells are used with $`64^3(128^3)`$ cold (hot) dark matter particles, respectively, resulting in a lower spatial resolution, $`\mathrm{\Delta }x=75\mathrm{kpc}`$, for this model. For the sCDM simulations with varying $`\sigma _{8h^{-1}}`$ we present simulations with both $`128^3`$ and $`256^3`$ grid cells resulting in both low and high spatial resolutions of $`75\mathrm{kpc}`$ and $`37.5\mathrm{kpc}`$, respectively.
We have also simulated three of our models (sCDM, $`\mathrm{\Lambda }`$CDM , OCDM) with a different numerical code, Hercules (Anninos, Norman & Clarke 1994; Anninos et al. 1997). Hercules is a nested grid code that utilizes a multiscale PM method for the dark matter, artificial viscosity methods for the baryonic fluid, and the same non–equilibrium chemistry and cooling model as above. The simulations produced from this code use $`128^3`$ particles and $`128^3`$ cells for both the nested and parent grids. However, in order to derive a more representative sample for statistics, the results discussed in this paper are extracted from the parent grid only. Thus these simulations are of lower spatial resolution than most of the Kronos simulations, although the dark matter mass resolution is the same. For statistics that are insensitive to spatial resolution, a comparison of the results of the two codes is useful to insure that simulation results are robust against changes in numerical technique.
Synthetic spectra are generated along $`300(900)`$ random lines of sight through the Kronos (Hercules) simulated volume using the method of Zhang et al. (1997) including the effects of peculiar velocity and thermal broadening of the gas. (We have verified that decreasing the sample size from 900 to 300 for the Kronos data does not affect the results except for a slight increase in the scatter of the line properties for the highest, optically thick column density systems, a regime where our results become unreliable anyway because of the absence of radiative transfer in the codes.) Since we are primarily concerned in this paper with a comparison of model predictions, we have not included noise or continuum fitting in the analysis. Furthermore the resolution of the spectra, $`1.2\mathrm{km}/\mathrm{s}`$, is the same for all the simulations, a value that is smaller than current observations. However, we have shown elsewhere (Bryan et al. 1998) that, as long as we restrict ourselves to high quality observational data, the impact of not including these observational difficulties is small. In addition to analyzing the raw optical depth and flux distributions, line lists are extracted from the data using a Voigt profile fitting procedure. This is described in more detail elsewhere (Zhang et al. 1997), but we outline it briefly here. First, maxima in the optical depth distribution are identified as line centers. Then Voigt profiles are fit, using a non-linear minimization, to the part of the spectrum which is above $`\tau _{HI}=0.05`$ and between neighbouring minima. This results in the same spectral threshold $`F_t=e^{-\tau _t}=0.95`$ as the high resolution Keck HIRES spectrometer. Each line of sight chosen produces a sample spectrum with on the order of $`10`$–$`100`$ lines per redshift interval $`\delta z=0.1`$ depending on the redshift and cosmological model. The statistics of these linelists are discussed in §4 and §5.
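A schematic version of this fitting step is sketched below. For brevity it fits a single thermally broadened Gaussian optical-depth profile to each maximum above the threshold, a good approximation to a Voigt profile at low column density; the production analysis fits full Voigt profiles between neighbouring minima, and the synthetic absorber here is a placeholder.

```python
import numpy as np
from scipy.optimize import curve_fit

# Simplified stand-in for the line-fitting step described above: locate
# maxima of tau(u) above tau_t = 0.05 and fit each with a Gaussian
# optical-depth profile.
def gauss_tau(u, tau0, u0, b):
    return tau0 * np.exp(-((u - u0) / b) ** 2)

u = np.arange(0.0, 500.0, 1.2)          # km/s, the spectral pixelization
tau = gauss_tau(u, 2.0, 250.0, 25.0)    # one synthetic absorber (placeholder)

peaks = [i for i in range(1, len(tau) - 1)
         if tau[i - 1] < tau[i] > tau[i + 1] and tau[i] > 0.05]
for i in peaks:
    popt, _ = curve_fit(gauss_tau, u, tau, p0=[tau[i], u[i], 20.0])
    print("tau0 = %.2f, u0 = %.1f km/s, b = %.1f km/s" % tuple(popt))
```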
The amplitudes of the distributions found in the models cannot be used as a basis for comparing the models since they may be arbitrarily re-scaled for any individual model using the ionization bias factor $`b_{\mathrm{ion}}=\mathrm{\Omega }_B^2/\mathrm{\Gamma }`$, where $`\mathrm{\Omega }_B`$ is the fraction of the critical density carried in baryons and $`\mathrm{\Gamma }`$ is the Haardt-Madau (1996) parameterization of the metagalactic UV ionizing background extracted from the observed distribution of quasars. It is important to normalize all the models consistently before comparing the shapes of any of the distributions. This may be done in a variety of ways. We do so by matching the mean H $`\mathrm{I}`$ opacity in each simulation to the measured intergalactic H $`\mathrm{I}`$ opacity at $`z=3`$. In Zhang et al. (1997), we found the opacity measurements of Steidel & Sargent (1987) and Zuo & Lin (1993) gave a mean H $`\mathrm{I}`$ opacity at $`z=3`$ of $`\overline{\tau }_\alpha =0.27`$–$`0.35`$, although values as much as 30–60% larger have been claimed (Press et al. 1993; Rauch et al. 1997). Because of the uncertainty in this measurement, we also require consistency with the number density of lines observed above a threshold of $`\mathrm{log}N_{\mathrm{HI}}=13.5`$. Using the three quasars in Hu et al. (1995) for which lines in the full redshift range $`3<z<3.1`$ are listed, we find a total of 61 lines for the three lines–of–sight in this redshift interval with $`\mathrm{log}N_{\mathrm{HI}}>13.5`$, for which the line lists should be complete. (An estimate based on using the available lines for all four QSO line lists in Hu et al. in the redshift interval $`2.9<z<3.1`$ gave essentially the same line density.) Normalized to $`\overline{\tau }_\alpha =0.30`$, the CHDM, sCDM, $`\mathrm{\Lambda }`$CDM , OCDM, and tCDM models predict, respectively, 60.8, 62.1, 62.7, 63.7, and 59.5 lines, in close agreement with the observed number. Normalizing to $`\overline{\tau }_\alpha =0.35`$, the respective numbers of predicted lines are 73.7, 74.3, 73.9, 75.2, and 72.8. While these are not badly inconsistent with the observed number, they are all fairly high. We normalize the spectra according to $`\overline{\tau }_\alpha =0.30`$ throughout this paper, noting that this value is still not well agreed upon. In Figure 2 we plot a related statistic, $`\tau _{eff}`$ (Zhang et al. 1997) for the normalized spectra of our models and compare to recent data by Kirkman & Tytler (1997). After normalization all of our models are consistent with the data over the redshift range $`2\le z\le 4`$ considered by this paper.
## 3 Direct Optical Depth and Flux Measurements
Historically Ly$`\alpha `$absorption spectra have been analyzed in terms of the statistics of spectral line features and as such have been plagued with difficulties of the line fitting procedure such as line identification and blending. Many of these difficulties become increasingly severe at higher redshifts making the results of the analysis uncertain. It is thus natural to ask whether statistics dependent directly on the observed flux and optical depth without recourse to line fitting might be of use in describing the forest and discriminating among competing models. Statistics of this kind have recently been proposed by several authors (Miralda–Escudé et al. 1997; Rauch et al. 1997; Cen 1997). Since these nonparametric measures are also easier to relate theoretically to the physical state of the absorbing gas, we begin our discussion with them.
### 3.1 Optical Depth Probability Distribution Function
The optical depth $`\tau `$ is related to the transmitted flux $`F`$ by $`F=\mathrm{exp}(-\tau )`$. We define the optical depth probability distribution $`dP/d\tau `$ as the probability that a pixel will have optical depth between $`\tau `$ and $`\tau +d\tau `$. In Figure 3 we use spectra generated from the sCDM high resolution simulation to show $`\tau dP/d\tau `$ versus $`\tau `$ for redshifts $`z=2,3`$ and $`4`$ (top panel). Although the peak of the distribution decreases and the distribution broadens slightly with decreasing redshift, the principal contributor to the redshift evolution seen in Figure 3 is the evolution of the optical depth $`\tau `$. Hui, Gnedin & Zhang (1997) discuss in detail the dependence of $`\tau `$ on the distribution and properties of neutral hydrogen along the line of sight in an expanding universe. Since we would like to understand the redshift evolution of the optical depth in terms of simple scaling laws, we repeat some of their discussion here in order to isolate the key factors controlling this redshift evolution and clarify the scaling law assumptions. The optical depth is defined as
$$\tau (\nu _o)=\int _{x_a}^{x_b}n_{HI}\sigma _\alpha \frac{dx}{1+z}$$
(2)
where $`\nu _o`$ is the observed frequency, $`n_{HI}`$ is the number density of neutral hydrogen, $`z`$ is the redshift of the absorbing gas, $`\sigma _\alpha `$ is the absorption cross section for Ly$`\alpha `$, and the integral is over the line of sight between the quasar ($`x_a`$) and the observer ($`x_b`$) in comoving coordinates. In practice the form of the Ly$`\alpha `$absorption cross section limits the integration range per absorber to a small portion of the line of sight. It is thus useful to make a change of variable to velocity coordinates $`u`$ about some characteristic average redshift $`\overline{z}`$ in the problem. For example, for simulated data the redshift $`\overline{z}`$ might be a given output redshift for the simulation. The observed frequency $`\nu _o`$ and the frequency $`\nu `$ of the radiation in the absorber rest frame are then related by
$$\nu =\nu _o(1+\overline{z})(1+u/c)$$
(3)
where
$$u\equiv \frac{H(\overline{z})(x-\overline{x})}{1+\overline{z}}+v_{pec}(x).$$
(4)
$`\overline{x}`$ is the comoving position along the line of sight whose redshift is exactly $`\overline{z}`$, $`v_{pec}`$ is the physical velocity of the gas, and $`H(\overline{z})`$ is the Hubble parameter defined by
$$H(\overline{z})=H_0\sqrt{\mathrm{\Omega }_m(1+\overline{z})^3+(1-\mathrm{\Omega }_m-\mathrm{\Omega }_\mathrm{\Lambda })(1+\overline{z})^2+\mathrm{\Omega }_\mathrm{\Lambda }}.$$
(5)
The first term in Equation 4 represents the contribution of the residual Hubble flow about the mean while the second term is due to the physical bulk flow of the gas. We assume $`u/c\ll 1`$ and neglect contributions from turbulent flows since they would be unlikely in the low column density regions we are considering. Under this change of variable the Ly$`\alpha `$ cross section becomes
$$\sigma _\alpha =\frac{\sigma _{\alpha 0}c}{b\sqrt{\pi }}e^{-(u-u_0)^2/b^2}$$
(6)
where $`\sigma _{\alpha 0}=4.5\times 10^{-18}`$ cm<sup>2</sup> sets the scale of the absorption cross section in terms of fundamental constants, $`u_0`$ is the velocity $`u`$ for which the frequency $`\nu `$ in the rest frame of the absorbing gas is equal to the Ly$`\alpha `$ frequency $`\nu _\alpha `$ and $`b=\sqrt{2k_BT/m_p}`$ is the thermal width. For absorption lines of neutral hydrogen with column densities $`N_{\mathrm{HI}}<10^{17}`$ cm<sup>-2</sup> the thermal profile dominates the cross section, so we neglect the contribution of the natural line width to $`\sigma _\alpha `$. The optical depth $`\tau `$ can now be written as
$$\tau =\frac{\sigma _{\alpha 0}c}{\sqrt{\pi }}\sum _{streams}\int \frac{n_{HI}}{b(1+\overline{z})}\left|\frac{du}{dx}\right|^{-1}e^{-(u-u_0)^2/b^2}\,du$$
(7)
The sum over streams represents the possibility that a given velocity $`u`$ corresponds to more than one position $`x`$. Although the integration formally runs over the full line of sight from quasar to observer, the Gaussian form for the cross section effectively limits the $`u`$ integration to a narrow range around $`u_0`$ (thus justifying our replacement of $`z`$ everywhere by $`\overline{z}`$.) To simplify notation we drop the bar letting $`z`$ represent $`\overline{z}`$ in what follows.
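A discretized version of Eq. (7) for a single stream makes the construction of synthetic spectra explicit. In the sketch below the sight line, densities, temperatures, velocities and cell size are toy placeholders, not simulation outputs:

```python
import numpy as np

# Discretized Eqs. (2), (6) and (7) for one stream: each cell of neutral
# hydrogen contributes a thermally broadened Gaussian to tau(u0).
sigma_a0 = 4.5e-18                      # cm^2, Ly-alpha cross-section scale
c = 2.998e10                            # cm/s
kB, mp = 1.3807e-16, 1.6726e-24         # erg/K, g
zbar = 3.0

ncell = 64
dx = 3.086e23                           # comoving cell size, cm (toy value)
n_HI = 1.0e-11 * (1.5 + np.cos(2 * np.pi * np.arange(ncell) / ncell))  # cm^-3
T = np.full(ncell, 1.5e4)               # K
b = np.sqrt(2.0 * kB * T / mp)          # cm/s, thermal widths from Eq. (6)
u_cell = np.arange(ncell) * 8.0e5       # cm/s, pure Hubble flow, no v_pec

u0 = np.linspace(0.0, u_cell[-1], 512)  # velocity grid of the spectrum
tau = np.zeros_like(u0)
for i in range(ncell):
    profile = np.exp(-((u_cell[i] - u0) / b[i]) ** 2)
    tau += n_HI[i] * sigma_a0 * c / (b[i] * np.sqrt(np.pi)) * profile \
           * dx / (1.0 + zbar)
flux = np.exp(-tau)                     # transmitted flux F = exp(-tau)
print("max tau = %.3f, min flux = %.3f" % (tau.max(), flux.min()))
```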
We assume that the number density of hydrogen traces the baryon gas density well. (There has been little metal production in these low density regions and there is no interaction that would cause the helium and hydrogen to separate.) Thus the number density of neutral hydrogen is $`n_{HI}\propto \rho _bX_{HI}`$ where $`X_{HI}`$ is the neutral fraction and $`\rho _b`$ the gas density. In ionization equilibrium (which is well satisfied except for the period of initial reionization) the neutral fraction of hydrogen is $`X_{HI}\propto \rho _bT^{-0.7}`$ such that the number density of neutral hydrogen (relevant to Ly$`\alpha `$ absorption) scales as
$$n_{HI}\propto (\mathrm{\Omega }_bh^2)^2\mathrm{\Gamma }^{-1}(z)(1+z)^6(1+\delta _b)^2T^{-0.7}$$
(8)
where $`\delta _b`$ is the baryon overdensity. Studies (Hui & Gnedin 1996; Weinberg et al. 1996) of the equation of state for the gas find that for unshocked gas at low to moderate baryon overdensities ($`\delta _b\lesssim 5`$) the equation of state is well fit by a power law:
$$T\propto (1+z)^{1.7}(1+\delta _b)^{\gamma -1}.$$
(9)
Thus
$$n_{HI}\propto (\mathrm{\Omega }_bh^2)^2\mathrm{\Gamma }^{-1}(z)(1+z)^{4.8}(1+\delta _b)^{2.7-0.7\gamma }.$$
(10)
For a uniform radiation field and reionization that occurs before $`z=5`$, as is the case in our simulations, $`\gamma \approx 1.4`$. This is in agreement with the value $`\gamma \approx 1.5`$ found by Zhang et al. (1998) for clouds with column densities in the range $`12.5<\mathrm{log}N_{\mathrm{HI}}<14.5`$. Furthermore, the assumption that most of the optical depth arises from low column density absorbers, large structures whose overdensities and peculiar velocities are slowly varying compared to the thermal profiles, means that multiple streaming is rare, the sum over streams in Equation 7 can be dropped, and $`\left|\frac{du}{dx}\right|\approx \frac{H}{1+z}`$. We then integrate over the thermal profile to obtain (Croft et al. 1997)
$$\tau \propto \frac{c\sigma _{\alpha 0}(\mathrm{\Omega }_bh^2)^2}{\mathrm{\Gamma }(z)H}(1+z)^{4.8}(1+\delta _b)^{1.7}.$$
(11)
Note that in this limit $`\tau `$ need no longer have a thermal profile about its maximum (Hui, Gnedin & Zhang, 1997). If $`\delta _b`$ is evolving slowly over this redshift range, $`\tau `$ should scale as
$$\tau \propto \frac{(1+z)^{4.8}}{\mathrm{\Gamma }(z)H}.$$
(12)
In the middle panel of Figure 3 we use this simple scaling law to rescale the $`z=2`$ and $`z=4`$ sCDM distributions from the top panel to $`z=3`$, the redshift at which all the models are normalized. We do this in order to test how well the simulations obey this simple scaling relation: if they followed it exactly then all three curves would overlap. Most, but not all, of the redshift evolution of this distribution is accounted for by the scaling of $`\tau `$ given in Equation 12. Since the evolution of the metagalactic UV radiation field $`\mathrm{\Gamma }`$ is relatively slight over this redshift range, we are left with the remarkable conclusion that most of the evolution of the Ly$`\alpha `$ forest is a direct consequence of the universal expansion. The direct numerical results of Zhang et al. (1998) support this conclusion. If we include the evolution of the baryon overdensity, as shown in Figure 4 for the sCDM simulation, and shift the overdensity distribution until the peaks overlap, the $`(1+\delta _b)^{1.7}`$ dependence in Equation 11 predicts additional scaling factors for $`\tau `$ of $`1.64`$ ($`0.77`$) at $`z=2`$ ($`z=4`$), respectively, that bring the distributions (shown in the bottom panel of Figure 3) into close agreement. The remaining small differences, the slight broadening of the distribution and a reduction in its peak amplitude with decreasing redshift, most probably reflect the fact that the shape of the baryon overdensity distribution is also evolving slowly with $`z`$.
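The rescaling amounts to multiplying each pixel optical depth by the ratio of the right-hand side of Equation 12 at the target and source redshifts. A sketch of this operation (here $`\mathrm{\Gamma }(z)`$ is taken as constant for simplicity, whereas the figures use the Haardt–Madau evolution; the cosmological parameters and stand-in data are placeholders):

```python
import numpy as np

def hubble(z, h=0.5, om=1.0, ol=0.0):
    """H(z)/(100 km/s/Mpc) from Eq. 5; defaults are sCDM-like parameters."""
    return h * np.sqrt(om * (1.0 + z)**3
                       + (1.0 - om - ol) * (1.0 + z)**2 + ol)

def tau_scale(z, gamma_uv=lambda z: 1.0):
    """Right-hand side of Eq. 12, up to an overall constant."""
    return (1.0 + z)**4.8 / (gamma_uv(z) * hubble(z))

def rescale_tau(tau, z_from, z_to):
    """Map pixel optical depths from z_from to z_to via Eq. 12."""
    return tau * tau_scale(z_to) / tau_scale(z_from)

# e.g. shift z = 4 optical depths to z = 3 before histogramming
tau_z4 = np.random.lognormal(mean=-1.0, sigma=1.2, size=100000)  # stand-in
tau_z3 = rescale_tau(tau_z4, z_from=4.0, z_to=3.0)
```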
The top panel of Figure 5 shows $`\tau dP/d\tau `$ for the five sCDM simulations with varying power and spatial resolution. From this we can see that the optical depth PDF at a given redshift is insensitive to the spatial resolution of the simulation. The shape of the distribution, however, is strongly dependent on the amount of small scale power present. Models with less power at these scales produce narrow, sharply peaked distributions. As the power increases, the distribution flattens and broadens. In the lower panel of Figure 5 we show $`\tau dP/d\tau `$ versus $`\tau `$ for the simulations in the model comparison study. These distributions again display a clear dependence on the power spectrum of the model, with OCDM (our model with the most small-scale power) producing the broadest distribution, and CHDM and tCDM (models with the least small-scale power) producing the most sharply peaked distributions. Thus this statistic is particularly promising as a model discriminator in that these differences between models are significant in the range $`0.02<\tau <4`$ that should be accessible to observers.
We quantify this relation between the shape of the $`\tau `$ distribution and the amplitude of the power spectrum by fitting a log-normal to the curves:
$$\tau \frac{dP}{d\tau }\propto e^{-(\mathrm{ln}\tau -\mathrm{ln}\tau _0)^2/2\sigma _\tau ^2}.$$
(13)
Although this does not fit the profiles in Figure 5 in detail, it does provide an adequate description as long as we restrict the range of optical depths used in the fitting. Here we adopt $`0.02<\tau <4`$, corresponding roughly to the observable range. A different range or a different fitting function changes the details, but not the nature of our result. In Figure 6, we show the correlation between $`\sigma _\tau `$, a measure of the width of the distribution, and $`\sigma _{34}`$, the amplitude of the linear power spectrum on small scales as defined in Equation 1. The strength of the correlation is striking. The low scatter around the power law relation shown in this figure bolsters our claim that the shape of the $`\tau `$ distribution function is insensitive to other cosmological parameters. To give an idea of the uncertainty in each point, we fit both the high and low resolution simulations for the sCDM $`\sigma _8=0.3`$ and $`\sigma _8=0.7`$ models. In both of these cases $`\sigma _\tau `$ differs by less than 10 %.
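The fit itself is a standard nonlinear least-squares problem. A sketch of how $`\sigma _\tau `$ can be extracted (the binning and initial guesses are illustrative, not the exact choices used for Figure 6):

```python
import numpy as np
from scipy.optimize import curve_fit

def lognormal_shape(tau, amp, tau0, sigma_tau):
    """Eq. 13 with a free overall normalization."""
    return amp * np.exp(-(np.log(tau) - np.log(tau0))**2
                        / (2.0 * sigma_tau**2))

def fit_sigma_tau(tau_pixels, tau_min=0.02, tau_max=4.0, nbins=40):
    """Histogram tau*dP/dtau in logarithmic bins over the observable
    range and fit Eq. 13; returns the width sigma_tau."""
    edges = np.logspace(np.log10(tau_min), np.log10(tau_max), nbins + 1)
    hist, _ = np.histogram(tau_pixels, bins=edges, density=True)
    centers = np.sqrt(edges[:-1] * edges[1:])   # geometric bin centers
    y = centers * hist                          # tau * dP/dtau
    good = y > 0
    popt, _ = curve_fit(lognormal_shape, centers[good], y[good],
                        p0=(y.max(), 1.0, 1.0))
    return abs(popt[2])
```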
### 3.2 Flux Probability Distribution
Although the optical depth PDF is easier to model theoretically, the flux PDF (where $`dP/dF`$ is the probability that a pixel will have transmitted flux between $`F`$ and $`F+dF`$) is closer to what is actually observed. The top panel of Figure 7 shows the flux probability distribution functions for the high spatial resolution sCDM model with $`\sigma _8=0.7`$ at $`z=2,3`$ and $`4`$. The bottom panel of Figure 7 shows the prediction of the simple $`\tau `$ scaling given in Equation 12 applied to the flux and these same flux probability distributions. Again we attempt to rescale the $`z=4`$ and $`z=2`$ distributions to $`z=3`$ in order to test the scaling. The rescaling $`\tau _j\rightarrow \tau _j/\eta `$ results in a highly nonlinear mapping of the flux and the flux PDF from $`z=j`$ to $`z=3`$, given by $`F_j\rightarrow F_j^{1/\eta }`$ and $`dP_j/dF\rightarrow \eta F^{1-1/\eta }dP_j/dF`$, where $`\eta =0.356`$ ($`6.511`$) for $`j=2`$ ($`4`$), respectively. While the shapes of the distributions in the top panel appear quite different, much of the $`z`$ evolution of the flux probability distributions is explained by this simple scaling, the remainder representing mostly the effect of the evolution of the baryon density in the cosmological model. We do not plot the scaled distribution for $`z=4`$ below the scaled flux of $`0.5`$ because this already corresponds to an unscaled flux of $`\sim 0.015`$, close to saturation and most likely noise dominated in the observations.
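The flux rescaling is a one-line change of variables once $`\eta `$ is known. A sketch (illustrative only; the stand-in PDF is not simulation data):

```python
import numpy as np

def rescale_flux_pdf(flux, pdf, eta):
    """Apply tau -> tau/eta to a flux PDF: each flux value maps to
    F**(1/eta) and the PDF picks up the Jacobian eta * F**(1 - 1/eta)."""
    return flux**(1.0 / eta), eta * flux**(1.0 - 1.0 / eta) * pdf

# shift the z = 4 flux PDF to z = 3 with eta = 6.511
F = np.linspace(0.01, 0.99, 99)
pdf_z4 = np.ones_like(F)          # stand-in flat PDF, not simulation data
F_scaled, pdf_scaled = rescale_flux_pdf(F, pdf_z4, eta=6.511)
```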
The flux PDF depends only weakly on simulation grid resolution (Bryan et al. 1998). Its shape is strongly dependent on the power spectrum of the underlying cosmology. In the top panel of Figure 8 we show the flux probability distributions in the sCDM model (spatial resolution $`\mathrm{\Delta }x=37.5`$ kpc) for cluster scale normalizations $`\sigma _8=0.3`$ and $`0.7`$. The dependence on the normalization of the power spectrum is clear. The number of pixels found with flux in the central flux range $`0.3<F<0.9`$ is greater for models with less power ($`\sigma _8=0.3`$), while the number of pixels with flux in the low ($`F<0.3`$) and high ($`F>0.9`$) ends of the distribution is smaller than for models with greater power ($`\sigma _8=0.7`$). This is in qualitative agreement with Croft et al. (1997a). We note, however, that our result (using the Kronos code) does not require any smoothing of the simulations as was the case for their TreeSPH simulations. In the lower panel of Figure 8 we present the $`z=3`$ flux PDFs for the five models of the model comparison study. Models with lower power at small scales (tCDM, CHDM) have a larger flux PDF for $`0.3<F<0.9`$ than sCDM and $`\mathrm{\Lambda }`$CDM, while the low density model (OCDM) with the highest spectral power at these scales has the smallest flux PDF in this range, as expected. Furthermore, the differences between models can be substantial. For example, at $`F=0.6`$ the OCDM results lie $`\sim 10`$% below the $`\mathrm{\Lambda }`$CDM model result while the CHDM result lies above the $`\mathrm{\Lambda }`$CDM result by about a factor of $`1.4`$. We remind the reader that the mean of the distribution has been fixed to match observations. Thus this statistic should be useful to constrain competing models.
### 3.3 Fraction of High Ly$`\alpha `$ Opacity
Another possibly useful statistic for discriminating models is the fraction of a quasar spectrum with Ly$`\alpha `$optical depth greater than a specified value $`\tau _0`$, i.e. the cumulative distribution in optical depth. Small differences in the amplitude normalization of the primordial power spectrum may be enhanced in the cumulative opacity data (Cen 1997).
Figure 9 shows the linear correlation between the opacity at line center and the column density of absorption features in the sCDM model, ranging from the optically thin to thick at the Lyman edge. The nearly unbroken relation $`\tau _c\propto N_{HI}`$, which exists down to the incompleteness density of $`10^{12}`$ cm<sup>-2</sup>, is attributed to the weak correlation between the Doppler parameter and column density since, in general, $`N_{HI}\propto b\tau _c`$. The lower bound on the opacity ($`\tau _c>0.05`$) is set by the transmission or spectral threshold $`F_t=e^{-\tau _t}=0.95`$ used in the line identification procedure. Using Figure 9 as a guide, we investigate the cumulative opacity distribution with the following minimum opacity thresholds: $`\tau >`$ 0.1, 1, and 7, which, if associated with the line centers, would correspond roughly to column densities of $`\mathrm{log}N_{HI}=`$ 12.5, 13.5 and 14.5, respectively. The distributions $`P(\tau >\tau _0)`$ for the above minimum opacity thresholds are plotted in Figure 10 at redshifts $`z=2`$, $`3`$, and $`4`$ for the models in the model comparison study. In comparing groups with the same $`\tau _0`$, the smaller threshold curves are more highly clustered and less sensitive to the background cosmological model parameters. This is especially evident in Figure 11, where we show the cumulative distributions of the optical depth at redshift $`z=3`$ for these models.
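Computing the cumulative opacity fraction from pixelized optical depths is straightforward; a sketch with the thresholds quoted above (illustrative only):

```python
import numpy as np

def cumulative_opacity(tau_pixels, thresholds=(0.1, 1.0, 7.0)):
    """Fraction of spectrum pixels with tau > tau_0, i.e. P(tau > tau_0)."""
    tau_pixels = np.asarray(tau_pixels)
    return {t0: float(np.mean(tau_pixels > t0)) for t0 in thresholds}
```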
## 4 Line Parameter Statistics
In this section we present a line analysis of the spectra generated from the various model simulations. We compare and contrast the cosmological models based on the column density distribution and the evolution of line number.
### 4.1 H $`\mathrm{I}`$ Column Density Distribution
One of the most robust line statistics used in the analysis of the Ly$`\alpha `$ forest is the H $`\mathrm{I}`$ column density distribution, which is well converged by simulation box sizes of $`9.6`$ Mpc and is insensitive to changes in the simulation grid resolution or treatment of gas hydrodynamics (Bryan et al. 1998, Zhang et al. 1997). The H $`\mathrm{I}`$ column density, defined as $`N_{HI}=\int _{x_A}^{x_B}\frac{n_{HI}}{1+z}dx`$, is closely related to the optical depth $`\tau `$ through the dependence of each on the number density of neutral hydrogen. Thus using Equation 10 and the approximations that led to Equation 11 we expect the H $`\mathrm{I}`$ column density to scale as
$$N_{HI}\propto \frac{(\mathrm{\Omega }_bh^2)^2}{\mathrm{\Gamma }(z)H}(1+z)^{4.8}\int (1+\delta _b)^{1.7}du.$$
(14)
In the top panel of Figure 12 we show the raw (uncut) H $`\mathrm{I}`$ column density distribution for the high resolution sCDM model for redshifts $`z=2,3`$ and $`4`$. In the bottom panel of Figure 12 we see that the column density scaling
$$N_{HI}\propto \frac{(\mathrm{\Omega }_bh^2)^2}{\mathrm{\Gamma }(z)H}(1+z)^{4.8},$$
(15)
the same naive scaling relation as that given in Equation 12 for the optical depth $`\tau `$, accounts for the redshift evolution of the column density distribution amazingly well. This demonstrates that the column density, an integrated quantity, is much less sensitive than the optical depth distributions to the redshift evolution of the gas overdensity within an absorbing structure. The differences seen in the low column density end of the distributions, particularly for $`z=4`$, may be a result of the simulation spatial resolution (Bryan et al. 1998), while the differences observed in the high column density end may partially be due to shot noise in the high $`z`$ data. In Figure 13 (top) we explore the dependence of this distribution at a given redshift ($`z=3`$) on the power spectrum of the underlying cosmology. We find qualitative agreement with semi-analytic arguments (Gnedin 1998; Hui, Gnedin, & Zhang 1997) in that models with less power on small scales (such as sCDM with $`\sigma _8=0.3`$ and $`\sigma _{34}=0.812`$) have H $`\mathrm{I}`$ column distributions with significantly steeper slopes than models (such as sCDM with $`\sigma _8=0.7`$ and $`\sigma _{34}=1.89`$) with more power at these scales. However, as we discuss in more detail below and in Table 2, quantitative agreement between the simulations and the predictions of these semi-analytic arguments seems more difficult to achieve.
In Figure 13 (bottom), we show the H $`\mathrm{I}`$ column density distribution at redshift $`z=3`$ for (Kronos) simulated spectra in the model comparison study and compare the simulated data with data from Kirkman & Tytler (1997) and the fits provided by Kim et al. (1997). The distributions are conventionally quantified by fitting them to power laws, $`dN/dN_{\mathrm{HI}}\propto N_{\mathrm{HI}}^{-\beta }`$. We use the same sets of column density cuts on the simulated data in the model comparison study as Kim et al. in order to expedite comparison with the data and use a direct unweighted least squares fit (all quoted errors are $`2\sigma `$) to extract the slope $`\beta `$ from the simulated data. Our results are summarized in Table 2. We find again the expected dependence on the fluctuation power spectrum. For the column density range $`13.7<\mathrm{log}N_{\mathrm{HI}}<14.3`$ (given by the column labeled $`\beta _\mathrm{h}`$) the shallowest slope is for OCDM, the low matter density model with $`\sigma _{34}=2.50`$, while CHDM and tCDM with $`\sigma _{34}=1.14`$ and $`1.09`$, respectively, give the steepest distributions (see Table 1). The predicted column density distributions generally also steepen with time (decreasing redshift). Kim et al. find $`\beta =1.46\pm 0.07`$ ($`2\sigma `$) for this column density range at $`z=2.85`$. This is formally inconsistent with all the models at the $`3\sigma `$ level except for OCDM, although it is marginally consistent with $`\mathrm{\Lambda }`$CDM.
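A sketch of the unweighted least-squares slope extraction (the binning is illustrative; we do not reproduce the exact cuts of Table 2 here):

```python
import numpy as np

def column_density_slope(n_hi, log_n_min=12.8, log_n_max=14.3, nbins=15):
    """Unweighted least-squares slope beta of dN/dN_HI ~ N_HI**(-beta)
    from a list of line column densities [cm^-2]."""
    edges = np.logspace(log_n_min, log_n_max, nbins + 1)
    counts, _ = np.histogram(n_hi, bins=edges)
    centers = np.sqrt(edges[:-1] * edges[1:])
    dist = counts / np.diff(edges)              # differential distribution
    good = counts > 0
    slope, _ = np.polyfit(np.log10(centers[good]),
                          np.log10(dist[good]), 1)
    return -slope
```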
Results for the column density range $`12.8<\mathrm{log}N_{\mathrm{HI}}<14.3`$ in Table 2 are shown in the column labeled $`\beta _{\mathrm{}}`$. The average distributions are generally shallower when extended to lower column densities, showing that the distributions are curved. Kim et al. similarly find a shallower distribution over this column density range with their results for lines at $`z=2.31`$, $`z=2.85`$, and $`z=3.35`$ shown in the last row of Table 2. The result at $`z=2.31`$ is inconsistent with all of the simulation results at $`z=2`$, but note that the quoted uncertainty in the observation is eight times smaller than at $`z=2.85`$, despite comparable numbers of absorbers. By contrast, the result at $`z=2.85`$ is formally consistent at the $`3\sigma `$ level with all the models. At $`z=3.35`$, Kim et al. find $`\beta =1.59\pm 0.13`$ ($`2\sigma `$). This result is consistent with the simulation results for the OCDM model and marginally consistent (at the $`3\sigma `$ level) for the sCDM and $`\mathrm{\Lambda }`$CDM models. The observational data, however, also suggest a weak steepening of the distribution with increasing redshift, contrary to our findings. These discrepancies might indicate that the redshift evolution of the ionizing radiation field may be somewhat different from that of the Haardt & Madau spectrum assumed in the simulations.
We may compare the simulation results with the semi-analytic predictions of Hui, Gnedin, & Zhang (1997) to understand the trend of changing steepness with power spectrum. We provide the predicted values of $`\beta `$ according to the prescription of Hui et al. in Table 2. We assume $`T\propto \rho _B^{0.5}`$, as found by Zhang et al. (1998) for this column density range in an sCDM simulation. The uncertainty in the $`T`$–$`\rho _B`$ relation introduces only a 10% uncertainty in the prediction for $`\beta -1`$, so it seems reasonable to retain it for the other models as well for this purpose. The predicted values of $`\beta `$ for the sCDM, tCDM, and CHDM models at $`z=3`$ match the simulation values to within $`1\sigma `$, in agreement with the comparison in Hui et al. with one of our earlier sCDM models. However, the predictions for OCDM and $`\mathrm{\Lambda }`$CDM at $`z=3`$ are in disagreement, with the semi-analytic arguments giving too steep a slope. The predictions do less well for all models at $`z=2`$. In particular, the simulation results show a steepening of the column density distribution toward decreasing redshifts for all the models, opposite to the predicted trend.
Over the wider column density range $`10^{12.8}<N_{\mathrm{HI}}<10^{16}`$ (summarized in the column labeled $`\beta _\mathrm{f}`$ in Table 2), we see that the average distributions continue to steepen toward higher column densities. Kim et al. obtain $`\beta =1.46`$ for this column density range at $`z=2.85`$, with a steepening to $`\beta =1.55`$ at $`z=3.7`$. The results for the tCDM and CHDM models ($`1.95\pm 0.06`$ and $`1.92\pm 0.06`$ at $`z=3`$, respectively) are substantially steeper than these values. Because the distribution deviates from a pure power law at the low column density end, it is useful to split the simulation samples into two halves, fitting each to a power law. These results are given in the last two columns of Table 2 where $`\beta _1`$ is the slope of the column density distribution for lines at the low column density end ($`10^{12.8}<N_{\mathrm{HI}}<10^{14}`$) and $`\beta _2`$ is the slope of the column density distribution for lines at the high end ($`10^{14}<N_{\mathrm{HI}}<10^{16}`$). Giallongo et al. (1996) obtain $`\beta =1.8`$ for systems with $`N_{\mathrm{HI}}>10^{14}`$ and $`2.8<z<4.1`$.
Finally we note that the analogous column density distributions derived from the lower resolution Hercules runs give results and slopes similar to those of the Kronos data. For example over the full column density range at $`z=3`$, Hercules data give slopes for the column density distribution of $`1.71`$, $`1.66`$ and $`1.62`$ for the sCDM, $`\mathrm{\Lambda }`$CDM and OCDM models, respectively, consistent within errors with the Kronos results. This suggests that the distribution function is a robust diagnostic, being relatively insensitive to grid resolution and numerical method. A preliminary comparison with the data favors models with more power at these scales than in our CHDM or tCDM cosmologies. However, there appears to be some discordance in the observations, so a more definitive comparison will require more work.
### 4.2 Line Number Evolution
The number of Ly$`\alpha `$ lines at a particular redshift reveals how many intergalactic absorbers exist at that time between the quasar and observer and, given certain assumptions about their geometry, the size and volume filling factor of the absorbers can also be deduced. Since the column density of Ly$`\alpha `$ lines correlates fairly well with the mean overdensity and size of the clouds (Charlton et al. 1997; Zhang et al. 1997), it is useful to see how the number of lines evolves with different column density cutoffs, as this will track the evolution of morphologically distinct small scale structures in the universe.
Figure 14 shows the evolution of the number of lines with H $`\mathrm{I}`$ column densities greater than $`10^{13}`$ cm<sup>-2</sup> , $`10^{13.5}`$ cm<sup>-2</sup> , and $`10^{14}`$ cm<sup>-2</sup> , respectively, comparing results for the models in the model comparison study with the observed data from Kulkarni et al. (1996) at $`z\sim 2`$, Hu et al. (1995) at $`z\sim 3`$ and Lu et al. (1997) at $`z\sim 4`$. For a fixed transmission cutoff (here $`F_t=0.95`$) and column density threshold, the total number of lines per unit redshift decreases with time because the opacity of the universe decreases from both the increasing flux of radiation and the expansion of the universe. With the exceptions of the tCDM and CHDM models for the lowest column density threshold $`N_{\mathrm{HI}}>10^{13}`$ where incompleteness due to line blending becomes significant at higher $`z`$, the deviation from a fixed power law behavior tracks predominantly the behavior of the radiation flux. Fitting the evolutions to the form $`dN/dz\propto (1+z)^\gamma `$ over the range $`2<z<4`$, we find the exponents are fairly similar in the different models. We summarize these results in Table 3 (all errors are $`2\sigma `$). To compare these simulated results with observational data we fit the combined line lists from Kulkarni et al. , Hu et al. , Kirkman & Tytler, and Lu et al. to the same power law behavior and display those results in the row labeled “combined” in Table 3, again with $`2\sigma `$ errors. (Lines near the QSO emission redshift were avoided because of the proximity effect, as were lines associated with metal systems.) Kim et al. obtain $`\gamma =2.78\pm 1.42`$ ($`2\sigma `$), fit over $`2<z<3.5`$ for systems with column densities $`10^{13.77}<N_{\mathrm{HI}}<10^{16}`$ cm<sup>-2</sup> . Using the same column density cuts and simulation data for $`z=2`$ and $`3`$ only, we find power law exponents (labeled $`\gamma _\mathrm{h}`$ in Table 3) for the sCDM, $`\mathrm{\Lambda }`$CDM, OCDM, tCDM and CHDM models of the comparison study in good agreement with the observational result. For the lower column density range $`10^{13.1}<N_{\mathrm{HI}}<10^{14}`$ cm<sup>-2</sup> , the power law exponents are labeled $`\gamma _{\mathrm{}}`$ in Table 3. Kim et al. (1997) obtain $`\gamma =1.29\pm 0.90`$ ($`2\sigma `$) fit over $`2<z<4`$. Our sCDM, $`\mathrm{\Lambda }`$CDM and OCDM model predictions are in good agreement with this observational result, although tCDM and CHDM show a somewhat stronger evolution.
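The power-law exponent $`\gamma `$ is obtained from an analogous fit in log space; a sketch (the $`dN/dz`$ values below are placeholders, not our measured line counts):

```python
import numpy as np

def line_evolution_exponent(z_vals, dn_dz):
    """Least-squares exponent gamma in dN/dz ~ (1+z)**gamma."""
    gamma, _ = np.polyfit(np.log(1.0 + np.asarray(z_vals)),
                          np.log(np.asarray(dn_dz)), 1)
    return gamma

print(line_evolution_exponent([2.0, 3.0, 4.0], [80.0, 150.0, 260.0]))
```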
All the models yield comparable levels of evolution at each of the column density cutoffs and, in fact, the evolution slopes in all the models agree for the most part within errors with the observed values. Two trends in the model predictions are apparent. The first is that at a given column density threshold the slope $`\gamma `$ of the line number evolution is correlated with the slope $`\beta `$ of the column density distribution, with $`\gamma `$ increasing for models with larger $`\beta `$ (i.e. models with less power in the fluctuation spectrum at small scales). The second trend is that lines of higher column density for a given model exhibit a greater rate of redshift evolution. This is not unexpected since the evolution for a fixed transmission threshold is essentially determined by the radiation field (which is the same in all the models) and, to a lesser degree, the expansion of the universe. In our previous studies (Zhang et al. 1997) we have found little intrinsic cloud evolution over these redshift intervals. If we assume that the evolution of the gas overdensity does not contribute significantly to the evolution of line number or column density at these redshifts, we can use Equation 15 and the power law dependence of the column density distribution $`dN/(dN_{\mathrm{HI}}dz)\propto N_{\mathrm{HI}}^{-\beta }`$ to predict the number of lines above a fixed column density threshold as a function of redshift $`z`$. We find
$$\frac{dn}{dz}(>N_{HI})\propto \left(\frac{(1+z)^{4.8}}{\mathrm{\Gamma }(z)H(z)}\right)^{\beta -1}.$$
(16)
In Figure 15 we compare the scaling predictions of Equation 16 to the simulation results using the high resolution sCDM $`\sigma _8=0.7`$ model for column density thresholds $`N_{\mathrm{HI}}>10^{13}`$, $`10^{13.5}`$, and $`10^{14}`$ cm<sup>-2</sup> , respectively. Since $`\beta `$ also evolves weakly with $`z`$, we use $`\beta `$ at $`z=3`$ as representative of the average for $`2<z<4`$ in Equation 16. For the lowest two column density thresholds we use the single power law fit to the column density distribution over the range $`10^{12.8}<N_{\mathrm{HI}}<10^{16}`$ cm<sup>-2</sup> , while we use $`\beta _2`$ from the two power law fit for the high ($`N_{\mathrm{HI}}>10^{14}`$ cm<sup>-2</sup> ) column density threshold. We normalize the scaling predictions to the simulated number of lines at $`z=3`$ because that is where all the models in our study were normalized to the observational data. For the lower two column density thresholds, the scaling prediction tends to overestimate the number of lines at $`z=4`$. For the lowest column density threshold ($`N_{\mathrm{HI}}>10^{13}`$) this again is partly due to incompleteness in the simulation line lists caused by line blending, an effect that becomes more severe for low column densities at high $`z`$. Furthermore, the slope of the low column density end softens for $`N_{\mathrm{HI}}<10^{14}`$, with the break at $`N_{\mathrm{HI}}\sim 10^{14}`$ probably reflecting a change in the absorbers from low density structures evolving primarily with the universal expansion to structures undergoing gravitational collapse (Bryan et al. 1998). This deviation of the column density distribution from the pure power law assumed in the scaling relation would also cause the scaling law to overproduce the low column density lines. For the column density threshold $`N_{\mathrm{HI}}>10^{13.5}`$ the discrepancy between the scaling prediction and the simulation results at $`z=4`$ is reduced. This is to be expected since the high spatial resolution sCDM line lists should be complete at this column density threshold. For lines with $`N_{\mathrm{HI}}>10^{14}`$, where the absorbers share a common morphological type and the column density distribution is well fit by a single power law, agreement between the scaling prediction and the simulations is good. In the lower panel of Figure 15 we compare the scaling predictions with the simulation results for the models in the model comparison study for the high column density threshold case. The scaling predictions for all models agree reasonably well with the simulations.
Encouraged by these results, we solve Equation 16 for the shape of the UV ionizing background $`\mathrm{\Gamma }(z)`$ in terms of the Hubble parameter $`H(z)`$, which models the universal expansion, and the (in principle) measurable quantities $`dn/dz(N_{\mathrm{HI}}>10^{14})`$ and $`\beta _2`$, the slope of the column density distribution over this column density range:
$$\mathrm{\Gamma }(z)\propto \frac{(1+z)^{4.8}}{H(z)}\left(\frac{dn}{dz}\right)^{1/(1-\beta _2)}.$$
(17)
We use Equation 17 to compute $`\mathrm{\Gamma }(z)`$ with simulation data from the model comparison study and, in Figure 16, compare these predictions to the Haardt–Madau spectrum actually used. Although the prediction is highly sensitive to the slope of the column density distribution used (whose errors are still quite large), it is gratifying, given the simplicity of the scaling relations, that all of the models reproduce the assumed Haardt-Madau evolutionary trend.
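A sketch of the inversion in Equation 17 (arbitrary normalization; the $`dn/dz`$ values and $`\beta _2`$ below are placeholders, and the default cosmological parameters are sCDM-like):

```python
import numpy as np

def gamma_uv_shape(z, dn_dz, beta2, h=0.5, om=1.0, ol=0.0):
    """Shape of Gamma(z) (arbitrary normalization) from Eq. 17, given the
    measured abundance dn/dz of lines above 1e14 cm^-2 and the slope beta2."""
    H = h * np.sqrt(om * (1.0 + z)**3 + (1.0 - om - ol) * (1.0 + z)**2 + ol)
    return (1.0 + z)**4.8 / H * dn_dz**(1.0 / (1.0 - beta2))

z = np.array([2.0, 3.0, 4.0])
dn_dz = np.array([20.0, 40.0, 80.0])      # placeholder line counts per dz
# shape relative to z = 3
print(gamma_uv_shape(z, dn_dz, beta2=2.0) / gamma_uv_shape(3.0, 40.0, 2.0))
```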
## 5 Doppler $`b`$ Parameter
Recent papers (Bryan et al. 1998, Theuns et al. 1998) have shown that both the Doppler $`b`$ parameter and a related nonparametric statistic, the mean flux difference as a function of velocity, require high simulation spatial resolution to model properly. In this section we investigate the dependencies of these statistics on the properties of the cosmological model. Because these statistics are highly sensitive to the spatial resolution of the simulation, we present results only for those models (sCDM, $`\mathrm{\Lambda }`$CDM , OCDM, and tCDM) simulated with our highest spatial resolution $`\mathrm{\Delta }x=37.5`$ kpc.
The Doppler $`b`$ parameter measures the amount of line broadening due to thermal broadening, physical velocities, Hubble expansion broadening and the shape of the absorber density profile (Bryan et al. 1998). Both Hubble and thermal broadening are significant for the lower column density lines that arise from structures found in voids that are still expanding in absolute coordinates. The thermal contribution only becomes dominant for the higher column density lines that have turned around and are gravitationally collapsing. Furthermore, the $`b`$ parameter is highly sensitive to the simulation spatial resolution. Lower resolution simulations numerically thicken the lines, causing the width of the lines, and thus $`b`$, to be overestimated (Bryan et al. 1998; Theuns et al. 1998). In our previous work (Bryan et al. 1998) we argued that the shape of the $`b`$–distribution was in rough agreement with observation and particularly that the high $`b`$ power law tail of the distributions arises naturally in hierarchical models when quasar lines of sight pass obliquely through the filamentary absorbing structures (Rutledge 1998). However, the median of the simulated $`b`$ parameter distribution for the sCDM model, when calculated from simulations with high spatial resolution, falls substantially below the $`\sim 30`$ km/s median seen in the observations. Thus the sCDM model, which previously had appeared to be in agreement with observations, is now discrepant. In Figure 17 we show the Doppler $`b`$ parameter distributions extracted from the high grid resolution ($`\mathrm{\Delta }x=37.5`$ kpc) Kronos simulations at redshift $`z=3`$ for the sCDM, $`\mathrm{\Lambda }`$CDM, tCDM, and OCDM models for lines with column densities $`10^{13.1}<N_{\mathrm{HI}}<10^{14}`$ cm<sup>-2</sup> . We present for comparison data from Kim et al. (1997) for $`z=3.35`$. The $`\mathrm{\Lambda }`$CDM and OCDM models, like sCDM, have their $`b`$ distributions shifted too much to the left (to low $`b`$ values) to agree with observation. Only tCDM, the model with the least fluctuation power at small scales and thus broader density structures at this redshift, has a median $`b`$ approaching the observational values. We explore this dependence on the fluctuation power spectrum with the two highest resolution sCDM models (with $`\sigma _8=0.7`$ and $`0.3`$, respectively) in the lower panel of Figure 17 and see that indeed the model with lower spectral power produces a $`b`$ parameter distribution shifted towards higher $`b`$ (as predicted by Hui & Rutledge 1998). The increase in $`b`$ for models with less fluctuation power at small scales may be partly due to line blending effects at these low column densities. However, as shown below, the shift to higher $`b`$ values for these models persists for lines with higher column densities as well, where line blending should not be as significant, and thus it cannot be explained by line blending alone.
To facilitate a better comparison of the models with observations, we plot the median Doppler parameters as a function of redshift in Figure 18, where we have imposed the same column density cuts on the lines as those used by Kim et al. (1997). The median $`b`$ for lines with column densities $`10^{13.8}<N_{\mathrm{HI}}<10^{16}`$ cm<sup>-2</sup> and $`10^{13.1}<N_{\mathrm{HI}}<10^{14}`$ cm<sup>-2</sup> are shown in the top and bottom panels, respectively. While the $`\mathrm{\Lambda }`$CDM, sCDM, and OCDM models predict roughly the observed evolutionary trend for both sets of column density cuts, the median $`b`$ values lie systematically more than $`6`$ km/s below the observational data. OCDM, the model with the most power at these scales, is the most discrepant. Although tCDM predicts median $`b`$ parameters more consistent with observation, the redshift evolution predicted by this model appears to be in disagreement with the data. We can compare these results with other recent data sets. Restricting to lines with $`N_{\mathrm{HI}}>10^{13}`$ cm<sup>-2</sup>, we obtain from the published line lists ($`1\sigma `$ errors): for $`1.9<z<2.0`$, $`(b_{\mathrm{mean}},b_{\mathrm{median}})=(32.1\pm 2.6,29.7\pm 3.3)`$ km s<sup>-1</sup> (Kulkarni et al. ); for $`3<z<3.1`$, $`(b_{\mathrm{mean}},b_{\mathrm{median}})=(38.0\pm 1.6,33.6\pm 2.0)`$ km s<sup>-1</sup> (Hu et al. ) and $`(b_{\mathrm{mean}},b_{\mathrm{median}})=(27.3\pm 1.9,25.9\pm 1.2)`$ km s<sup>-1</sup> (Kirkman & Tytler); and for $`4<z<4.1`$, $`(b_{\mathrm{mean}},b_{\mathrm{median}})=(32.6\pm 2.4,25.9\pm 3.1)`$ km s<sup>-1</sup> (Lu et al. ). These values are again clearly discrepant with the model predictions. Thus none of the models considered here can restore agreement with the observational data.
We also argued in Bryan et al. (1998) that this discrepancy is not a result of the particular choice of line fitting algorithm, but appears for sCDM in the nonparametric moments of the two point flux distribution functions as well. The two-point function $`P_2(F_1,F_2,\mathrm{\Delta }v)`$ gives the probability that two pixels with separation $`\mathrm{\Delta }v`$ will have flux $`F_1`$ and $`F_2`$. We plot the normalized moments of this function averaged over the flux range $`F_a`$ to $`F_b`$ given by
$$\frac{\int _{F_a}^{F_b}dF_1\int _0^1dF_2P_2(F_1,F_2,\mathrm{\Delta }v)|F_1-F_2|}{\int _{F_a}^{F_b}dF_1\int _0^1dF_2P_2(F_1,F_2,\mathrm{\Delta }v)}$$
(18)
which represents the average flux difference as a function of velocity for pixels in the range $`F_a`$ to $`F_b`$. In Figure 19 we plot the above statistic at $`z=3`$ as a function of velocity for several flux ranges for the high resolution models (sCDM, $`\mathrm{\Lambda }`$CDM, OCDM, tCDM) of the model comparison study (lower panel) and study its dependence on the small scale fluctuation power (top panel) using sCDM with $`\sigma _8=0.7`$ and $`0.3`$, respectively. There is little difference for low velocities, independent of flux level, due to the high coherence of the lines. At very large velocity differences there is no coherence and the value is just the difference between the mean value of the transmitted flux and the mean flux in a given flux interval (Bryan et al. 1998). It is at intermediate velocity separations where the statistic is heavily influenced by the structure of the lines. There $`\mathrm{\Lambda }`$CDM, OCDM, and sCDM with $`\sigma _8=0.7`$, whose power spectra are very similar, produce very similar distributions, while the tCDM model is quite distinct. We may quantify these model differences by determining at what $`\mathrm{\Delta }v`$ the model prediction passes through a given average flux difference. For the flux interval $`0<F<0.1`$ the simulation predictions pass through the mean flux difference of $`0.3`$ at $`\mathrm{\Delta }v\approx 35`$ km/s for $`\mathrm{\Lambda }`$CDM, OCDM, and sCDM $`\sigma _8=0.7`$, and at $`\mathrm{\Delta }v\approx 45`$ km/s for tCDM. Although observational data are limited, these are all lower than the $`\mathrm{\Delta }v\approx 55`$ km/s from Figure 3 of Miralda-Escudé et al. (1997).
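For pixelized spectra the statistic in Equation 18 reduces to an average over pixel pairs at fixed velocity lag. A sketch (assuming uniform velocity pixels and a lag of at least one pixel; the random flux array is a stand-in for a simulated sight line):

```python
import numpy as np

def mean_flux_difference(flux, pix_kms, f_lo, f_hi, dv_kms):
    """Average |F1 - F2| over pixel pairs separated by dv_kms, with F1
    restricted to [f_lo, f_hi): the ratio in Eq. 18 for pixelized spectra."""
    lag = max(int(round(dv_kms / pix_kms)), 1)
    f1, f2 = flux[:-lag], flux[lag:]
    sel = (f1 >= f_lo) & (f1 < f_hi)
    return float(np.mean(np.abs(f1[sel] - f2[sel])))

flux = np.random.rand(4096)        # stand-in sight line with 3 km/s pixels
print(mean_flux_difference(flux, pix_kms=3.0, f_lo=0.0, f_hi=0.1, dv_kms=36.0))
```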
It is important to ask what is needed to restore agreement between the simulations and observations. Although we cannot completely rule out the possibility that the line fitting algorithm contributes to differences in the simulated and observed $`b`$-parameter distributions, we argue that its effect should not be significant because the discrepancy is seen at a comparable level (Bryan et al. 1998) in the fit–independent two–point distribution of the flux as well. The mean optical depth of our models was scaled to agree with observations, but this normalization is in some dispute. However, changing this normalization has little effect on the median of the $`b`$-distribution. For example, using sCDM ($`\sigma _8=0.7`$), an increase in $`\overline{\tau }_\alpha `$ from $`0.225`$ to $`0.35`$ at $`z=3`$ causes the median $`b`$ value to decrease from $`20.8`$ km/s to $`20.1`$ km/s, a change of less than $`1`$ km/s. One possibility might be to change the ionization history of the universe such that pressure broadening would widen the absorbing structures. Another might be to change the density structure through the power spectrum of the cosmology itself. However, with the suite of models considered here it seems difficult for a single model to give good agreement with both the column density and $`b`$-parameter data.
## 6 Flux Statistics for Helium II
Previous work (Zhang et al. 1997, 1998; Croft et al. 1997b) indicates that He $`\mathrm{II}`$ Ly$`\alpha `$ (304 Å) absorption may be significant in regions where H $`\mathrm{I}`$ Ly$`\alpha `$ absorption is not. Thus the study of He $`\mathrm{II}`$ Ly$`\alpha `$ absorption in quasar spectra provides a unique probe of structure in the lowest density regions of the universe. Comparison of both H $`\mathrm{I}`$ and He $`\mathrm{II}`$ absorption within the context of a given cosmological model may also yield important information about the spectral shape of the metagalactic UV radiation field and its redshift evolution. While current observations still struggle to obtain sufficient resolution to detect any but the broadest individual He $`\mathrm{II}`$ Ly$`\alpha `$ lines, it is nevertheless possible to determine mean statistics of the He $`\mathrm{II}`$ flux and optical depth which are not so sensitive to instrumental resolution. We define the mean optical depth $`\overline{\tau }_{HeII}=-\mathrm{ln}\overline{F}`$, where $`\overline{F}`$ is the mean transmitted flux ($`\overline{F}=1`$ signifying complete transmission). In Figure 20 we present $`\overline{\tau }_{HeII}`$ as a function of redshift for the sCDM models with varying power normalizations (top) and for the models in the model comparison study (bottom). Several trends are apparent. First, all models produce a rapid rise in mean optical depth with increasing redshift (roughly a factor of two between $`z=2`$ and $`z=3`$), with tCDM and CHDM rising slightly more steeply. This is consistent with previous work on a smaller number of hierarchical cosmologies (Zhang et al. 1997; Croft et al. 1997b) and with the interpretation that the observed optical depth is due primarily to absorption by gas in underdense regions. The redshift evolution of the optical depth is thus dominated by the change in the gas density due to universal expansion and (to a lesser degree) by the shape of the UV metagalactic ionizing background (here assumed to be that of Haardt-Madau (1996) with frequency dependence $`\nu ^{-1.8}`$). Second, for a given redshift $`z`$, models with less power on small fluctuation scales have progressively larger optical depths. This is again consistent with the interpretation that the absorption is due to gas in predominantly underdense regions, since less gas in these low power models will have turned around and collapsed.
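A sketch of the mean opacity measurement (the redshift binning is illustrative):

```python
import numpy as np

def tau_eff(flux):
    """Mean optical depth -ln<F> from transmitted-flux pixels."""
    return -np.log(np.mean(flux))

def tau_eff_by_z(flux, z, z_edges):
    """tau_eff in redshift bins, as plotted for He II in Figure 20;
    bins without pixels return NaN."""
    idx = np.digitize(z, z_edges) - 1
    return np.array([tau_eff(flux[idx == i])
                     for i in range(len(z_edges) - 1)])
```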
The first observation of a flux decrement at the wavelength where the He $`\mathrm{II}`$ Ly$`\alpha `$ absorption should occur was made by Jakobsen et al. (1994) using the HST Faint Object Camera to observe quasar Q0302-003. They obtained a 90% confidence lower limit of $`\overline{\tau }_{HeII}>1.7`$ at $`z=3.286`$. Subsequently improved measurements using spectra from this same quasar were made by Hogan et al. (1996) with the Goddard High Resolution Spectrograph on the HST and by Heap et al. (1998) using the Space Telescope Imaging Spectrograph (STIS). STIS provides better sensitivity and background determinations than previous measurements. Davidsen, Kriss & Zheng (1996) used the Hopkins Ultraviolet Telescope to study the average He $`\mathrm{II}`$ opacity in the spectrum of quasar HS1700+64 over the redshift interval $`2.2<z<2.6`$ (lower than that available with HST). They find $`\overline{\tau }_{HeII}=1.0\pm 0.07`$, although as shown in Figure 3 of Croft et al. (1997b) there is considerable scatter when the wavelength range is divided into $`10\AA `$ bins. Measurements of He $`\mathrm{II}`$ absorption have also been made by Anderson et al. (1998) with STIS using the spectrum of quasar PKS 1935-692. Although the number of lines of sight studied so far is limited, and thus a detailed comparison of observations with our model simulations (that average over 300 lines of sight) is premature, these data are also presented in Figure 20. At face value these data favor higher optical depths and thus models with lower fluctuation power. However, none of the simulation models presented here can reproduce the apparent break at $`z=3`$ in the optical depth observed by Heap et al. (1998). If this break persists it would most likely signal a departure from the Haardt-Madau quasar reionization spectrum assumed here.
For completeness and comparison with previous work (Croft et al. 1997b) we show in Figure 21 (from top to bottom) the He $`\mathrm{II}`$ Ly$`\alpha `$ flux probability distribution functions at $`z=4,3,2`$, respectively, for the models in the model comparison study. The distribution functions are calculated from the flux smoothed with window functions matching the Full Width at Half Maximum (FWHM) of STIS at high (50 km s<sup>-1</sup>, left column) and low (500 km s<sup>-1</sup>, right column) resolution. As Figure 21 shows, the shape of the flux PDF is highly dependent on the smoothing. We see, however, that models with less fluctuation power on small scales have far fewer truly transparent regions (pixels with $`F`$ near $`1`$). For the high, $`\mathrm{\Delta }v=50`$ km s<sup>-1</sup>, resolution case, all models converge in the fully saturated regime ($`F<0.05`$) as expected.
## 7 Summary
We have performed several simulations of the Ly$`\alpha `$ forest using different background cosmological models, numerical codes and grid resolutions. Five different cosmological models were considered here: the standard flat critical density cold dark matter model (sCDM), a flat CDM model with a nonzero cosmological constant ($`\mathrm{\Lambda }`$CDM), an open CDM model (OCDM), a flat critical density CDM model with a tilted power spectrum matching both the COBE amplitude and small scale clustering constraints (tCDM), and a flat critical density mixed dark matter model (CHDM). The high resolution shock capturing code Kronos was used with grid resolution $`\mathrm{\Delta }x=37.5`$ kpc ($`\mathrm{\Delta }x=75`$ kpc for CHDM) for the benchmark calculations presented in this paper. Three of the models (sCDM, $`\mathrm{\Lambda }`$CDM, OCDM with identical parameters) were also evolved with the artificial viscosity based code Hercules at the lower grid resolution $`\mathrm{\Delta }x=75`$ kpc. Both simulation techniques give similar results for statistics, such as the slope of the column density distribution, that are insensitive to grid resolution.
We have presented results from several statistical analyses of absorption features present in the Ly$`\alpha `$ spectra, both from the unprocessed optical depth data and from the reduced line lists. Explicitly, we have considered the optical depth and transmitted flux probability distribution functions, the cumulative optical depth distributions, the H $`\mathrm{I}`$ column density distributions, line number evolution, Doppler $`b`$ parameter distribution, the average flux difference as a function of velocity (first moment of the 2-point flux distribution function), and the mean optical depth and flux probability distribution functions for He $`\mathrm{II}`$ absorption. We find:
1. Simple scaling laws describe the redshift evolution of the optical depth, flux PDF, the H $`\mathrm{I}`$ column density distribution and, in conjunction with the slope of the column density distribution, the line number evolution remarkably well. This demonstrates that most of the evolution of the Ly$`\alpha `$ forest is a direct consequence of universal expansion.
2. The shape of the optical depth PDF is strongly correlated with the amplitude of the density fluctuation spectrum. Differences between models may be significant in the observationally accessible region $`0.02<\tau <4`$. Thus this statistic may be a useful discriminator among models. Similar conclusions hold for the related flux PDF.
3. Cumulative opacity distributions for the models are strongly clustered at low optical depth thresholds and high $`z`$. Significant differences do occur for optical depth thresholds $`\tau _0>1`$, but these may be more difficult to observe.
4. The column density distribution function is a robust statistic relatively insensitive to grid resolution and numerical method. Its redshift evolution is described well by the same naive scaling law that describes the evolution of the optical depth. The slope of the column density distribution is sensitive to the amplitude of the power spectrum on scales roughly the size of the absorbers ($`\sim 100`$ kpc). Models with less power at these scales produce steeper distributions in qualitative agreement with semi-analytic arguments (Hui, Gnedin & Zhang 1997). A preliminary comparison with data favors models with more power (sCDM, $`\mathrm{\Lambda }`$CDM, OCDM) over those with less power (tCDM, CHDM).
5. All models show comparable evolution for the number of lines above a given H $`\mathrm{I}`$ column density threshold in reasonable agreement with the data. Thus this statistic is not a sensitive discriminator among models.
6. Although the shape of the Doppler $`b`$ parameter distribution is well reproduced by all the models, the median of the distribution for sCDM, $`\mathrm{\Lambda }`$CDM, and OCDM models is well below observed values. The median of the $`b`$ parameter increases for models with less power on small scales. Thus the observations favor low power models, such as tCDM, making it difficult for any model considered in this study to simultaneously give good agreement with both the H $`\mathrm{I}`$ column density and $`b`$ parameter data. This discrepancy is confirmed as well in the nonparametric first moment of the two-point flux distribution function and so is not solely the result of the line-fitting algorithm employed. The solution to this problem may require modification of the reionization history of the universe to produce more pressure broadening of the absorbing structures or a modification of the power spectrum of the underlying cosmology itself.
7. All of the models simulated in this study produce a rapid rise in He $`\mathrm{II}`$ mean optical depth with increasing redshift, consistent with the interpretation by previous work (Zhang et al. 1997, Croft et al. 1997b) that the observed optical depth is due to absorption by gas in underdense regions where universal expansion dominates the evolution of the gas density. Models with less power on small scales (tCDM, CHDM) produce larger mean He $`\mathrm{II}`$ optical depths. Preliminary comparison with the data tends to favor these low power models. However, none of the models can reproduce the break seen by Heap et al. near $`z=3`$. If this break persists in the data, it would most likely indicate that the Haardt-Madau (1996) form for the metagalactic UV ionizing background, based on homogeneous reionization by quasars alone in a clumpy medium, must be modified.
###### Acknowledgements.
This work is supported in part by NSF grant AST-9803137 under the auspices of the Grand Challenge Cosmology Consortium (GC<sup>3</sup>). NASA also supported this work through Hubble Fellowship grant HF-0110401-98A from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc. under NASA contract NAS5-26555. M.M. acknowledges support from the Research and Scholarship Development Fund of Northeastern University. The computations were performed on the Convex C3880, the SGI Power Challenge, and the Thinking Machines CM5 at the National Center for Supercomputing Applications, and the Cray C90 at the Pittsburgh Supercomputing Center under grant AST950004P.
# Orbital Modulation of X-rays from Cygnus X-1 in its Hard and Soft States
## 1 Introduction
Cyg X-1 has been identified as a binary system of $`5.6`$-day orbital period which contains an O$`9.7`$ Iab supergiant and a compact object that is believed to be a black hole (Bolton (1972); Webster & Murdin (1972)). The observed intense X-ray flux from this system is thought to be produced close to the black hole in an accretion disk which emits soft X-ray photons and in a hot corona ($`T\sim 10^8`$–$`10^9`$ K) that inverse-Compton scatters low energy photons to higher energies (e.g., Liang & Nolan (1984); Tanaka & Lewin (1995) and references therein). The accretion flow from the supergiant is probably intermediate between Roche-lobe overflow and stellar wind accretion (e.g., Gies & Bolton 1986b ).
Two physically distinct states of Cyg X-1 have been observed: the hard state and the soft state. Most of the time, Cyg X-1 stays in the hard state where its $`2`$$`10`$ keV luminosity is low and the energy spectrum is hard. Every few years, Cyg X-1 undergoes a transition to the soft state and stays there for weeks to months before returning to the hard state. During the transition to the soft state, the $`2`$$`10`$ keV luminosity increases, often by a factor of more than $`4`$, and the energy spectrum becomes softer (see reviews by Oda (1977); Liang & Nolan (1984) and references therein; also the $`\mathrm{𝑅𝑋𝑇𝐸}`$/ASM light curves in Fig. 1). Interestingly, the total $`1.3`$$`200`$ keV luminosity remained unchanged to within $`15\%`$ during the $`1996`$ hard-to-soft and soft-to-hard state transitions (Zhang et al. (1997)).
The hard state of Cyg X-1 frequently exhibits short, irregular, absorption-like X-ray intensity dips. These dips usually last seconds to hours and seem to occur preferentially near superior conjunction of the X-ray source. They are often thought to be due to absorption in inhomogeneities in the stellar wind from the companion (e.g., Pravdo et al. (1980); Remillard & Canizares (1984); Kitamoto et al. (1984); Bałucińska & Hasinger (1991); Ebisawa et al. (1996)).
The most probable mass for the black hole is about $`10`$ $`M_{}`$(Herrero et al. (1995); see also Gies & Bolton 1986a for a slightly higher value). One of the larger uncertainties in the determination of the mass comes through the inclination angle $`i`$, which remains relatively poorly constrained. The various existing techniques to determine $`i`$, such as those using the variation of the polarization of the optical light, allow it to be in a wide range of $`25`$$`70^{}`$ (e.g., Long, Chanan, & Novick (1980)).
The $`5.6`$-day orbital period of Cyg X-1 may be detected via several effects. In the optical band, this period manifests itself as radial velocity variations of the absorption/emission lines (Bolton (1975)) and as ellipsoidal light variation (e.g., Walker (1972)). Phase-dependent variations of the equivalent width of the UV lines of Si IV and C IV have been reported by Treves (1980) and attributed to the orbital motion of the X-ray heated region of the stellar wind. Orbital modulations in the near-infrared $`J`$ and $`K`$ band (Leahy & Ananth (1992)) and in radio at $`15`$ GHz (Pooley, Fender, & Brocksopp (1999)) have also been reported. The causes of these modulations are still speculative.
X-ray orbital modulations in the hard-state data of several investigations show an intensity minimum around superior conjunction in the folded light curves. A $`1300`$-day record of Ariel 5 ASM observations in the $`3`$$`6`$ keV band yielded an intensity minimum near superior conjunction even though the $`5.6`$ day period was not detected at a convincing level of statistical significance in a power density spectrum (Holt et al. (1979)). The existence of a broad dip near superior conjunction was confirmed in $`100`$ days of hard state data from the WATCH/Eureca wide field X-ray monitor (Priedhorsky, Brandt, & Lund (1995)). In the $`9`$$`12`$, $`12`$$`17`$ and $`17`$$`33`$ keV bands, the dips had depths of $`21\%`$, $`20\%`$, and $`10\%`$ respectively. The width (FWHM) is $`26\%`$ of the period in the $`9`$$`12`$ keV band. It is unclear how this type of broad dip is related to the shorter irregular dips discussed previously. A $`5\%`$ peak-to-peak orbital modulation was also found in 3 years of BATSE data in the $`45`$$`200`$ keV band (Robinson et al. (1996)).
In this paper, we present a detailed study of the orbital modulation in the $`1.5`$–$`12`$ keV energy band using data from the All-Sky Monitor (ASM) on board the Rossi X-ray Timing Explorer ($`\mathrm{𝑅𝑋𝑇𝐸}`$). The advantages of $`\mathrm{𝑅𝑋𝑇𝐸}`$/ASM observations lie in its relatively good sensitivity ($`\sim 10`$ mCrab in a day), frequent data sampling ($`10`$–$`20`$ times a day) and long baseline ($`2.5`$ years). Moreover, the fact that a long ($`\sim 80`$ days) soft state was observed by the ASM in $`1996`$ makes it possible to quantitatively compare the two states. An earlier report of the detection of the $`5.6`$ day period in the $`\mathrm{𝑅𝑋𝑇𝐸}`$/ASM data was made by Zhang et al. (1996).
Our analysis focuses on the X-ray orbital modulation with the goal of investigating the cause of the broad intensity dip, an understanding of which may ultimately help constrain the system parameters. Specifically, we present (1) the results of a periodicity search; (2) the folded and individual orbital light curves; (3) a comparison of the orbital modulations in the soft and hard states; and (4) the results from a simulation of the orbital modulation caused by a partially ionized stellar wind from the companion.
## 2 Data
The All-Sky Monitor on board $`\mathrm{𝑅𝑋𝑇𝐸}`$ (Bradt, Rothschild, & Swank (1993)) has been monitoring the sky routinely since $`1996`$ March. The ASM consists of three Scanning Shadow Cameras, each consisting of a coded mask and a position-sensitive proportional counter. A linear least squares fit to the shadow patterns from a $`90`$-s observation by one of the three cameras of the ASM yields the source intensity in three energy bands ($`1.5`$$`3`$, $`3`$$`5`$, and $`5`$$`12`$ keV). The intensity is usually given in units of the count rate expected if the source were at the center of the field of view in one of the cameras; in these units, the $`1.5`$$`12`$ keV Crab nebula flux is about $`75`$ ASM ct/s. The estimated errors of the source intensities include the uncertainties due to counting statistics and a systematic error taken to be $`1.9\%`$ of the intensities. A source is typically observed $`10`$$`20`$ times a day. In the present analysis, we have used source intensities of 90-s time resolution derived at MIT by the RXTE/ASM team. A detailed description of the ASM and the light curves can be found in Levine et al. (1996) and Levine (1998).
The X-ray light curves and hardness ratios from the ASM observations of Cyg X-1 ($`1996`$ March – $`1998`$ September) are shown in Fig. 1. During $`1996`$ May (MJD $`\sim 50220`$, where MJD = JD - 2400000.5), a transition into the soft state is evident (Cui (1996)). After about $`80`$ days in the soft state, Cyg X-1 returned to the hard state and remained there through 1998 September. The hard-state light curve shows long-term variations on time scales of $`100`$–$`200`$ days and rapid flares that seem to occur every $`20`$ to $`40`$ days (see also Cui, Chen, & Zhang (1998)). For the analyses discussed below, the hard-state data are taken from a $`470`$-day interval (MJD $`50367.432`$–$`50837.324`$) and the soft-state data from an $`80`$-day interval (MJD $`50227.324`$–$`50307.324`$).
In this paper, the hardness ratio HR$`1`$ is defined as the ratio of the ASM count rates in the $`3`$$`5`$ keV band to that of the $`1.5`$$`3`$ keV band, and the hardness ratio HR$`2`$ as the ratio of the count rates of the $`5`$$`12`$ keV band to that of the $`3`$$`5`$ keV band.
## 3 Analysis and Results
Periodicities in both the hard state and the soft state have been sought by means of Lomb-Scargle periodograms of both the light curves and the derived hardness ratios. The Lomb-Scargle periodogram (Lomb (1976); Scargle (1982); Press et al. (1992)) was used to estimate the power density spectrum instead of the classic periodogram based on the Fast Fourier Transform (FFT) since the ASM data points are unevenly spaced in time. In the Lomb-Scargle periodogram, a maximum in the power occurs at the frequency which gives the least squares fit of a sinusoidal wave to the data. We oversampled the spectrum so that the frequencies are more closely spaced than $`1/T`$, where $`T`$ is the total duration of the data used. The goal is to ensure the detection of a peak for a signal that is of border-line statistical significance and to best locate the peak. The frequency range we have searched is up to (or beyond) $`N/(2T)`$, where $`N`$ is the number of data points.
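For reference, a minimal version of this procedure using scipy (the toy data below are placeholders; note that scipy's `lombscargle` takes angular frequencies, and that normalizing by the variance reproduces the Scargle convention only approximately):

```python
import numpy as np
from scipy.signal import lombscargle

def lomb_power(t_day, rate, p_min=0.5, p_max=100.0, oversample=5.0):
    """Variance-normalized Lomb-Scargle periodogram of unevenly sampled
    rates; frequencies are in cycles/day."""
    t = t_day - t_day[0]
    y = rate - rate.mean()
    df = 1.0 / (t[-1] * oversample)                # oversampled freq. step
    freq = np.arange(1.0 / p_max, 1.0 / p_min, df)
    power = lombscargle(t, y, 2.0 * np.pi * freq)  # angular frequencies
    return freq, power / y.var()

# toy data: a 5.6-day sinusoid plus noise at random sample times
t = np.sort(np.random.uniform(0.0, 470.0, 5000))
y = 10.0 + 0.5 * np.sin(2.0 * np.pi * t / 5.599829) + np.random.randn(t.size)
freq, power = lomb_power(t, y)
print(1.0 / freq[power.argmax()])                  # recovers ~5.6 days
```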
### 3.1 Hard State
The Lomb-Scargle periodograms for the hard state are shown in Fig. 2. There is a distinct peak in the periodogram at a frequency that is consistent with Cyg X-1’s optically determined orbital period, i.e., $`5.599829\pm 0.000016`$ days (Brocksopp et al. (1999); see also Gies & Bolton (1982) and Lasala et al. (1998)). The peak is much more apparent in the periodograms of the hardness ratios than in those of the light curves. Some of the periodograms also have a significant peak at the frequency of $`1/2.8`$ d<sup>-1</sup>, the first harmonic of the orbital period. There is a large peak at a very low frequency corresponding to a period of about $`300`$ days, which is consistent with the reported $`294\pm 4`$ day period by Priedhorsky, Terrell, & Holt (1983) and by Kemp et al. (1983). However, the temporal span of the ASM data is too short to confirm this period; it could simply be “red noise”. In fact, this peak is no longer distinct in the periodograms calculated using an extended set of data, i.e., 860 days of hard state data. No other periodicities stand out at frequencies less than $`20`$ cycles per day except for the “peaks” at $`\lesssim 0.1`$ cycles per day, which appear to be red noise (the spectrum for frequencies larger than 10 cycles per day is not shown).
The data were folded modulo the orbital period of $`5.599829`$ days to study the phase-dependent variations (Fig. 3). We used the orbital ephemeris reported recently by Brocksopp et al. (1999). The most distinctive feature in the folded light curves is the broad intensity dip. It is seen in all energy bands and is centered on the superior conjunction of the X-ray source (phase zero). The dip profiles are quite symmetric about superior conjunction. The fractional amplitude of the modulation in the light curves is larger in the lower energy bands, which manifests itself as a gradual spectral hardening during the dip. The fractional amplitudes of the dip relative to the average non-dip intensities (phases $`0.3`$–$`0.7`$) are $`23\%`$ for $`1.5`$–$`3`$ keV, $`14\%`$ for $`3`$–$`5`$ keV, and $`8\%`$ for $`5`$–$`12`$ keV. The widths (FWHM) are all about $`27\%`$ of the orbital period. The corresponding fractional changes of HR$`1`$ and HR$`2`$ are about $`13\%`$ and $`8\%`$, respectively, with similar widths. Taking into account the variation of the non-dip intensity in the folded light curves, we estimated the uncertainty in the fractional orbital modulations to be less than $`4\%`$.
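The folding itself reduces to a few lines, sketched below; the epoch of phase zero, T0, is a placeholder here and must be taken from the adopted Brocksopp et al. (1999) ephemeris.

```python
# Fold a light curve on the 5.599829 d orbital period and average it in
# phase bins.  Assumes every phase bin is populated.
import numpy as np

P_ORB = 5.599829          # days
T0 = 50000.0              # MJD of superior conjunction (placeholder value)

def fold(t, rate, nbins=20):
    phase = ((t - T0) / P_ORB) % 1.0
    edges = np.linspace(0.0, 1.0, nbins + 1)
    idx = np.digitize(phase, edges) - 1
    profile = np.array([rate[idx == k].mean() for k in range(nbins)])
    return 0.5 * (edges[:-1] + edges[1:]), profile
```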
Complex structures are evident within the dip for at least $`25\%`$ of the orbital cycles observed by the ASM. The profile of the structure also seems to vary from cycle to cycle. As the ASM data are unevenly sampled in time, we have found only a few orbital cycles that are relatively uniformly sampled. We show in Fig. 4 one such cycle of the hard-state observations with time bins of $`0.1`$ day in the energy band $`1.5`$–$`3`$ keV. There is a broad intensity dip at superior conjunction with substantial substructure. In particular, there are two narrow dips near superior conjunction: within a few hours, the intensities dropped by a factor of $`2`$, and the hardness ratio HR$`1`$ increased by a factor of $`>1.6`$. This indicates that the dips were much less pronounced at higher energies, as might be expected from an absorption process. These smaller dip-like structures are similar to those reported from previous missions. It is possible that the broad dip may be, partially or wholly, due to the superposition of smaller dips. We do not explore this possibility further because the study of such small dips (on time scales of seconds to hours) requires more frequent sampling around superior conjunction than is provided by the ASM.
### 3.2 Soft State
In contrast with the hard state, Lomb-Scargle periodograms of the soft-state data show no large power at the orbital period compared with the neighboring powers (Fig. 5, left panel). Neither were any other periodicities found in the frequency range of $`0.1`$–$`10`$ cycles per day. At low frequencies, i.e., below $`0.1`$ cycles per day, red noise is evident. For a direct comparison of the soft-state data with the hard-state data, we constructed periodograms for an $`80`$-day segment of the hard-state data that has a comparable number of data points (Fig. 5, right panel). The $`5.6`$ day orbital period is clearly detected in the hard-state periodogram but is not obvious in the soft state.
In comparing periodograms, we use the normalized variance, i.e., the observed total variance of the count rate divided by the average rate. For a sinusoidal modulation superposed on random noise, the expected height of a peak in the periodogram is then proportional to the product of the number of data points and the square of the fractional modulation, divided by the normalized variance (see equation 21 in Horne & Baliunas 1986). For the $`1.5`$–$`3`$ keV band soft-state data, the normalized variance is $`(0.53)^2`$ times that of the hard-state data. Thus, for comparable fractional orbital modulations (assumed to be nearly sinusoidal), we expect the signal power of the soft state to be $`(0.53)^{-2}\simeq 3.6`$ times that of the hard state, or $`P\simeq 204`$ (Fig. 5). The absence of a peak with $`P>25`$ at the orbital frequency in the soft-state data thus clearly excludes the presence of comparable fractional orbital modulations in the $`1.5`$–$`3`$ keV light curves of the two states. The folded orbital light curves and hardness ratios for the soft state (Fig. 7) also fail to reveal any significant broad dip or spectral hardening near superior conjunction.
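The scaling argument can be restated numerically; in the snippet below the hard-state peak power is an illustrative value consistent with the numbers quoted above, not a measurement.

```python
# Peak power scales as N x f^2 / s2 for N points, fractional modulation
# f, and normalized variance s2; for equal N and f the ratio of peaks is
# the inverse ratio of the normalized variances.
var_ratio = 0.53 ** 2      # soft-state / hard-state normalized variance
P_HARD = 57.0              # hard-state peak power (illustrative value)
print(P_HARD / var_ratio)  # ~204, far above the observed soft-state P < 25
```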
The fractional rms amplitudes of the X-ray orbital modulation in the two states were compared quantitatively using the classic periodogram (the power density spectrum estimated with an FFT) of the same data used above. We binned the data and filled data gaps with the average rate in order to apply the FFT. It is well known that the rms variation of the source signal in the data can be estimated from the FFT power spectrum provided that the signal power can be properly separated from the total power spectrum (cf., Lewin, van Paradijs, & van der Klis (1988); van der Klis (1989)). It is therefore relevant to study the distribution of the noise power in the periodogram. In the analysis below, only the powers at frequencies above $`0.1`$ cycles per day were considered because the noise power spectrum is relatively flat in this region. The powers were first divided by the local mean, which was obtained from a linear fit to the power as a function of frequency. The scaled noise powers of both the soft- and hard-state data were found to be consistent with a $`\chi ^2`$ distribution with $`2`$ degrees of freedom. We then assumed that modulation at the orbital period would yield peaks with the same widths in the power density spectra of both states. On this basis, we derived the fractional rms amplitude of the orbital modulation for the hard state and an upper limit for the soft state for each ASM energy band at more than $`90\%`$ confidence (cf., Lewin, van Paradijs, & van der Klis (1988); van der Klis (1989)). This procedure was repeated for different time-bin sizes ($`0.0625`$, $`0.125`$, $`0.25`$ and $`0.5`$ days) to check for consistency of the results. We found that in the $`1.5`$–$`3`$ keV band, the fractional rms amplitude of the orbital modulation for the soft state is at most $`33\%`$ of that for the hard state. The $`3`$–$`5`$ and $`5`$–$`12`$ keV bands yield higher percentages.
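The binning and gap-filling step can be sketched as follows; this is our illustration of the procedure, and the statistical treatment of the noise powers described above is not reproduced here.

```python
# Bin the light curve, fill empty bins with the average rate, and
# compute the classic (FFT) periodogram.
import numpy as np

def binned_fft_power(t, rate, dt=0.25):
    edges = np.arange(t.min(), t.max() + dt, dt)
    n = len(edges) - 1
    idx = np.digitize(t, edges) - 1
    binned = np.full(n, rate.mean())        # gap filling with the mean rate
    for k in range(n):
        in_bin = idx == k
        if in_bin.any():
            binned[k] = rate[in_bin].mean()
    power = np.abs(np.fft.rfft(binned - binned.mean())) ** 2
    freqs = np.fft.rfftfreq(n, d=dt)        # cycles per day
    return freqs, power
```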
## 4 Models
The broad dip in the folded light curves cannot be attributed to a partial eclipse by the companion. The companion is a supergiant with a size more than $`10^3`$ times larger than the X-ray emitting region, so an eclipse of duration nearly $`27\%`$ of the orbital period would have to be total. Neither can the dip be caused by absorption by neutral material with solar elemental abundances since the observed $`8\%`$ reduction in flux in the $`5`$$`12`$ keV band would then be accompanied by a flux decrease in the $`1.5`$$`3`$ keV band of more than $`80\%`$ as opposed to the observed $`23\%`$.
We have modeled the broad dip assuming that it is produced by absorption and scattering of the X-rays by a smooth, isotropic stellar wind from the companion star. The wind is partially ionized by the X-ray irradiation. The X-ray modulation is then caused by changes in the optical depth along the line of sight to the black hole as a function of orbital phase. For simplicity, we did not consider possible complex structures in the wind, e.g., the tidal streams which could account for the strong X-ray attenuation at late orbital phases ($`>0.6`$) in some other wind-accreting systems (e.g., Blondin, Stevens, & Kallman (1991)). In our calculation, we neglected the influence of the UV emission from the optical star upon the ionization state of the wind, as we expect it to have little effect on the X-ray opacity in the ASM energy band.
The radiatively driven wind model of Castor, Abbott, & Klein (1975) was adopted in our calculation. In this model, the velocity of the wind can be described by a simple power law for $`R>R^{*}`$:
$$\upsilon _{wind}=\upsilon _{\infty }\left[1-\frac{R^{*}}{R}\right]^{\alpha },$$
(1)
where $`\upsilon _{\infty }`$ is the terminal velocity of the wind, $`R`$ the distance from the center of the star, $`R^{*}`$ the radius of the star, and $`\alpha `$ a fixed index. A spherically symmetric wind is assumed for simplicity. We therefore approximate the wind density profile as:
$$n(R)=\left[\frac{R^{*}}{R}\right]^{2}\frac{n_0}{\left\{1-\left[R^{*}/R\right]\right\}^{\alpha }},$$
(2)
where $`n(R)`$ is the number density of the wind, and $`n_0`$ is a wind density parameter, expressed in terms of the proton number density. The mass loss rate by the wind thus is $`\dot{M}=m_Hn_0\times 4\pi R^{*2}\upsilon _{\infty }`$, where $`m_H`$ is the atomic hydrogen mass.
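Equations (1) and (2) and the mass loss rate translate directly into code; the parameter values below are the ones adopted later in this section (Gies & Bolton 1986b), and evaluating mdot() with the best-fit $`n_0`$ quoted below reproduces the $`6\times 10^{-6}`$ $`M_{\odot }`$ per year given there.

```python
# Wind velocity law (eq. 1), density profile (eq. 2), and mass loss
# rate, valid for R > R*.
import numpy as np

ALPHA = 1.05
R_STAR = 1.387e12         # cm
V_INF = 1586e5            # cm/s
M_H = 1.6726e-24          # g

def v_wind(R):
    return V_INF * (1.0 - R_STAR / R) ** ALPHA

def n_wind(R, n0=6e10):   # n0 in cm^-3
    return (R_STAR / R) ** 2 * n0 / (1.0 - R_STAR / R) ** ALPHA

def mdot(n0=6e10):        # mass loss rate in g/s (~3.8e20 for n0 = 6e10)
    return M_H * n0 * 4.0 * np.pi * R_STAR ** 2 * V_INF
```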
Simulated ASM light curves in the energy band $`E_1`$$`E_2`$ were produced by integrating along the line of sight from the black hole for a given orbital phase $`\varphi `$:
$`I(\varphi )=\int _{E_1}^{E_2}dE\,I_0(E)\,Q(E)\,\underbrace{\mathrm{exp}\left[-\int _{r_1}^{r_2}n(R(\varphi ,r,i))\times \sigma (E,\zeta )\,dr\right]}_{\mathrm{wind\ absorption}}\,\underbrace{\mathrm{exp}\left[-N_H\times \sigma _0(E)\right]}_{\mathrm{interstellar\ absorption}},`$ (3)
where $`I_0(E)`$ is the intrinsic X-ray energy spectrum, $`Q(E)`$ is the ASM energy-dependent detection efficiency, $`r`$ is the distance from the X-ray source, $`\sigma (E,\zeta )`$ is the photoelectric absorption cross-section per hydrogen atom for the partially ionized gas as a function of the energy and the ionization parameter $`\zeta =L_x/[nr^2]`$ where $`L_x`$ is the effective source luminosity between $`13.6`$ eV and $`13.6`$ keV, $`\sigma _0(E)`$ is the absorption cross section per hydrogen atom for neutral gas, and $`N_H`$ is the interstellar hydrogen column density.
The values for the parameters in equations (2) and (3) were determined or adopted as follows. For the wind model, we took $`\alpha =1.05`$, $`R^{*}=1.387\times 10^{12}`$ cm, and $`\upsilon _{\infty }=1586`$ km s<sup>-1</sup> from Gies & Bolton (1986b), who fitted equations (1) and (2) to the numerical results of Friend & Castor (1982) for the radiatively driven wind profile of Cyg X-1. These values are for a binary separation $`a=2R^{*}`$, corresponding to a $`98\%`$ Roche lobe fill-out factor of the companion, and for a wind profile that resembles a smooth wind from a single O$`9.7`$ I supergiant. The shape of $`I_0(E)`$ was chosen to be similar to that seen in the ASCA observations of Cyg X-1 in the hard state, i.e., with blackbody and broken power law components (Ebisawa et al. (1996)). The binary inclination angle was taken as $`i=30^{\circ }`$ from the most probable value derived by Gies & Bolton (1986a). The interstellar hydrogen column density was taken as $`N_H=5\times 10^{21}`$ cm<sup>-2</sup>, slightly less than the values used in Ebisawa et al. (1996), for a better fit to the ASM data. The values for $`\sigma _0(E)`$ are from Morrison & McCammon (1983), and finally, for the wind, solar elemental abundances were assumed (as listed in Morrison & McCammon (1983); Table 1).
The cross-section $`\sigma (E,\zeta )`$ depends strongly on the ionization state of the wind. Under the assumption of a steady state, the ionization state of an optically thin gas illuminated by an X-ray source can be uniquely parameterized by the ionization parameter $`\zeta `$ for a given X-ray source spectrum (Tarter, Tucker, & Salpeter (1969)). For each ionization state of the optically thin gas, the local effective X-ray opacity can be uniquely determined from atomic physics calculations. The program XSTAR (v. 1.46, see Kallman & McCray (1982) for the theoretical basis) was used to obtain an opacity table which contains $`\sigma (E,\zeta )`$ for a wide range of ionization parameters and energies. For any particular ionization parameter value, $`\sigma (E,\zeta )`$ can be constructed by interpolation. In our model, the Thomson scattering cross section was added to the cross section derived with the use of XSTAR. The wind absorption factor in equation (3) was integrated over the range $`10^{11}`$ cm $`<r<10^{13}`$ cm. For $`r<10^{11}`$ cm the wind is highly ionized, while for $`r>10^{13}`$ cm the density of the wind becomes very small; thus the absorption of the X-rays by the wind is negligible in both cases.
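Numerically, the wind absorption factor of equation (3) is a single line-of-sight quadrature, as sketched below; `n_of_R`, `R_of`, and `sigma_table` are placeholder callables of our own (the opacity table would come from XSTAR), so this illustrates the integration rather than reproducing the code actually used.

```python
# Schematic evaluation of the wind absorption factor for one energy E
# and orbital phase.  sigma_table(E, zeta) is assumed to interpolate the
# XSTAR-derived opacities and to accept an array of zeta values.
import numpy as np

SIGMA_T = 6.652e-25                       # Thomson cross section, cm^2

def wind_attenuation(E, phase, n_of_R, R_of, sigma_table, L_x):
    r = np.geomspace(1e11, 1e13, 400)     # integration range from the text
    R = R_of(phase, r)                    # distance from the star's center
    n = n_of_R(R)
    zeta = L_x / (n * r ** 2)             # local ionization parameter
    sigma = sigma_table(E, zeta) + SIGMA_T
    return np.exp(-np.trapz(n * sigma, r))
```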
Our procedure was to find the wind density parameter $`n_0`$ which produced light curves with fractional orbital modulations matching those obtained from the hard-state ASM data. The spectral parameters from Ebisawa et al. (1996) were also adjusted slightly to match the intensity levels in the three ASM energy bands. The best-fit values were obtained by minimizing the $`\chi ^2`$ values of each model light curve relative to the data. The range of acceptable fits, at $`>90\%`$ confidence, was then estimated from the increase of $`\chi ^2`$ above the minimum (cf., Lampton, Margon, & Bowyer (1976)). Note that the uncertainties in our results do not include the uncertain effects of the assumptions and binary parameters adopted for the calculation. The expected variance of the data used in calculating $`\chi ^2`$ was taken to be the variance of the data between phases 0.3 and 0.7, to account for the possible intrinsic uncertainty associated with the data.
The best-fit model light curves of the hard state for our choice of $`i=30^{\circ }`$ are plotted as the solid lines in Fig. 3 for comparison with the observational data. Clearly this simple wind model can account for the observed X-ray orbital modulation in the hard state very well. The adjusted spectral parameters are listed in Table $`1`$. For a distance of $`2.5`$ kpc, the derived intrinsic $`1.3`$–$`200`$ keV X-ray luminosity from this model is $`6.6\times 10^{37}`$ ergs s<sup>-1</sup>, which is consistent with the previously reported value (e.g., Zhang et al. (1997)). The wind density parameter is estimated to be $`n_0=(6\pm 1)\times 10^{10}`$ cm<sup>-3</sup>, indicating a total mass loss rate of $`6\times 10^{-6}`$ $`M_{\odot }`$ per year. This value of $`n_0`$ is a factor of $`3`$ larger than that determined by Gies & Bolton (1986b). The hydrogen (neutral+ionized) column density is about $`1.0\times 10^{23}`$ cm<sup>-2</sup>. The ionization parameter $`\zeta `$ varies from $`10^5`$ to $`10^{-2}`$ along the integration path.
We have studied the effect of the inclination angle $`i`$ upon the quality of the fit of the model light curves to the data. The minimum of the $`\chi ^2`$ value was found at roughly $`i=30^{\circ }`$ for the data of all three energy bands. The acceptable range of the inclination angle was found to be $`10^{\circ }\le i\le 40^{\circ }`$, determined primarily by the data in the $`1.5`$–$`3`$ keV band. This constraint arises mainly because the width of a dip of fixed fractional amplitude in our model decreases with increasing inclination angle (Fig. 6). The best-fit $`n_0`$ ranges from $`3.7\times 10^{10}`$ cm<sup>-3</sup> for $`i=40^{\circ }`$ up to $`1.6\times 10^{11}`$ cm<sup>-3</sup> for $`i=10^{\circ }`$; note that $`n_0`$ decreases if we choose a larger inclination angle $`i`$. The results are relatively insensitive to the ASM efficiency $`Q(E)`$ and the intrinsic X-ray spectral shape.
In the BATSE band, the X-ray opacity is entirely due to electron scattering. We found that in all our acceptable fits, attenuation caused by electron scattering would modulate the apparent intensity by $`4`$$`6\%`$ peak-to-peak, which is in good agreement with the data (Robinson et al. (1996)).
We repeated the same procedure for the soft state for $`i=30^{\circ }`$, assuming the same wind density profile (with $`n_0=6.0\times 10^{10}`$ cm<sup>-3</sup>) and using the energy spectrum adjusted slightly from that in Cui et al. (1997a), again to match the count rates in the three energy bands. The results are plotted as solid lines in Fig. 7. The model produces light curves of much smaller orbital modulations than in the hard state because the wind is more ionized, owing to a much larger flux of soft X-ray photons in the soft state. In the $`1.5`$–$`3`$ keV band, the amplitude of the modulation ($`14\%`$) in the model light curve is not consistent with the upper limit ($`9\%`$) determined in section 3.2. Better fits to the data can be found with smaller values of $`n_0`$. For $`n_0<4\times 10^{10}`$ cm<sup>-3</sup>, the fractional modulation of the model light curve is $`<9\%`$ in the $`1.5`$–$`3`$ keV band for the soft state, which is consistent with the upper limit. A wind model with a non-variable wind density therefore cannot explain the data in the hard and soft states simultaneously in the case of $`i=30^{\circ }`$. However, the non-detection of the orbital modulation in the soft state can be explained if the wind density is reduced by a factor of about 2 relative to the hard state.
Alternatively, the X-ray orbital modulation observed in the hard state may be caused by partial covering of a central X-ray emitting region by the accretion stream. The wind density in this model is assumed to be much less than that required in the model discussed above, and therefore it does not contribute significant X-ray opacity. Hard X-rays are generally thought to be produced by upscattering of low energy photons by electrons in a hot corona (e.g., Liang & Nolan (1984)). Observations seem to favor the geometry of a spherical corona centered on the black hole plus a standard thin disk (Fig. 8) (Dove et al. (1997, 1998); Gierliński et al. (1997); Poutanen, Krolik, & Ryde (1997)). Recent studies indicate that the size of the corona in the hard state could be as large as $`10^9`$ cm (Hua, Kazanas, & Cui (1998)) and that it may shrink by more than a factor of $`10`$ as the soft state is approached (Cui et al. (1997b); Esin et al. (1998)). For Cyg X-1 in both states, the X-ray emission observed above $`1`$ keV is primarily from the corona. The accretion stream may have a scale height above the disk such that, viewed along the line of sight near superior conjunction of the X-ray source, it partially obscures the outer region of the large corona in the hard state but does not do so in the soft state because the corona is much smaller (Fig. 8). This constrains the distance of the absorber to be a few coronal radii away from the black hole. A covering factor of around $`23\%`$ is sufficient to explain the observed depth of the dip in the hard state with a cold absorber of line-of-sight hydrogen column density of ($`1`$–$`3`$) $`\times 10^{23}`$ cm<sup>-2</sup>. If we take the degree of ionization into account, the hydrogen column density could be much higher, which may account for the observed modulation in the BATSE band.
## 5 Summary
Our analysis of RXTE/ASM observations of Cyg X-1 leads to the following conclusions: There is a broad smooth dip in the folded orbital light curves of Cyg X-1 in the hard state. The dip is symmetric about superior conjunction of the X-ray source. The depth of the dip relative to the non-dip intensity is around $`23\%`$ in the $`1.5`$$`3`$ keV band, $`14\%`$ in the $`3`$$`5`$ keV band, and $`8\%`$ in the $`5`$$`12`$ keV band. The FWHM of the dip is $`27\%`$ of the orbital period in the energy range $`1.5`$$`12`$ keV. Individual light curves show complex structures around superior conjunction in the form of dips of shorter duration. Finally, no evidence is found for orbital modulation during the $`1996`$ soft state of Cyg X-1.
We examined the possibility that the broad dip is produced by the absorption of the X-rays by a stellar wind from the companion star. This model reproduces the observed light curves of the hard state well for inclination angles $`10^{\circ }\le i\le 40^{\circ }`$ and can also explain the soft-state data if there was a reduction in the stellar wind density for the duration of the soft state. Alternatively, the observed X-ray modulation in the hard state may be mostly due to the partial obscuration of a central hard X-ray emitting region by the accretion stream. The lack of observed orbital modulation in the soft state can then be attributed to a significant shrinkage in the size of the X-ray emitting region, such that it is no longer obscured by the accretion stream. This model requires the accretion stream to have specific geometric properties, such as its scale height, width, and orientation. In both models, the required hydrogen column density can reproduce the $`5\%`$ orbital modulation due to electron scattering observed in the BATSE data (Robinson et al. (1996)).
We are very grateful to the entire RXTE team at MIT for their support. We thank Saul Rappaport, Ron Remillard, and Shuangnan Zhang for many helpful discussions. We also thank Tim Kallman and Patrick Wojdowski for their help with the XSTAR program.
# FORMATION OF THE HEAVY-FERMION STATE - AN EXPLANATION IN A MODEL TRADITIONALLY CALLED LOCALIZED∗∗∗
## Abstract
Contrary to the widespread view that $`f`$ electrons are substantially delocalized in heavy-fermion (h-f) compounds, it is argued that h-f phenomena can be understood with localized $`f`$ electrons. The role of crystal-field interactions is then essential, and the heavy-fermion behaviour can occur for a localized Kramers-doublet ground state.
16.06.1999
In the proposed explanation, compounds exhibiting heavy-fermion (h-f) behaviour are considered, in analogy to normal rare-earth intermetallics, in terms of a few electronic subsystems. For the understanding of the h-f behaviour it is essential to distinguish $`f`$ electrons from conduction electrons. These two subsystems are independent insofar as direct electron hopping from one subsystem to the other does not occur. The $`f`$ subsystem is a highly correlated f$`^{\text{ }n}`$ electronic system. The proposed model takes advantage of two recent findings: i) crystalline-electric-field (CEF) interactions of the $`f`$ shell can produce a non-magnetic ground state even in the case of a Kramers system (the analytical proof exists, at present, for the f$`^{\text{ 3}}`$ system in hexagonal symmetry), and ii) the f$`^{\text{ }n}`$ localized states of an intermetallic compound containing an $`f`$ atom always lie at the specific-heat probing level (the virtual Fermi level), as the f$`^{\text{ }n}`$ states are many-body states, in contrast to the single-electron states within the conduction-electron band. The different nature of the excitations allows for independent contributions of these two subsystems to magnetic and electronic properties. The CEF state
$`\mathrm{\Gamma }_9=\frac{\sqrt{3}}{2}|\pm 3/2\rangle +\frac{1}{2}|\mp 9/2\rangle `$
given for the f$`^{\text{ 3}}`$ subsystem in hexagonal CEF interactions is a non-magnetic (N-M) Kramers doublet, as the expectation values of J<sub>x</sub>, J<sub>y</sub>, and J<sub>z</sub>, the components of the total angular momentum, are all zero. In ref. 1 this state has been proved to be realized as the ground state. In compounds exhibiting the heavy-fermion behaviour, the ground state of the $`f`$-electron subsystem tends to the N-M Kramers doublet. Then, in the single-ion picture, one has an enhanced but finite susceptibility at 0 K and a normal Curie-Weiss behaviour at higher temperatures, exactly as is experimentally observed. The N-M Kramers doublet ground state of the $`f`$ electrons behaves like a half-filled band at the Fermi level (2 states and 1 f$`^{\text{ }n}`$ particle), allowing for a “delocalization” of the $`f`$ electrons and, in particular, for many low-energy excitations detected as an enormous specific heat at the lowest temperatures. In the presented model, correlations between $`f`$ electrons and conduction electrons proceed via electrostatic interactions. The heavy-fermion state results from the competition between CEF and antiferromagnetic interactions.
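The non-magnetic character of the doublet is easy to verify numerically; the sketch below is our own illustration (not part of the original abstract) and confirms that the expectation values of J<sub>x</sub>, J<sub>y</sub>, and J<sub>z</sub> all vanish for one Kramers partner.

```python
# Build the J = 9/2 angular momentum matrices and evaluate <Jx>, <Jy>,
# <Jz> for the state (sqrt(3)/2)|+3/2> + (1/2)|-9/2>.
import numpy as np

J = 4.5
m = np.arange(J, -J - 1.0, -1.0)          # 9/2, 7/2, ..., -9/2
Jz = np.diag(m)
Jp = np.diag(np.sqrt(J * (J + 1) - m[1:] * (m[1:] + 1)), k=1)   # J+
Jx = 0.5 * (Jp + Jp.T)
Jy = -0.5j * (Jp - Jp.T)

psi = np.zeros(10)
psi[m == 1.5] = np.sqrt(3) / 2            # |+3/2> amplitude
psi[m == -4.5] = 0.5                      # |-9/2> amplitude

for op in (Jx, Jy, Jz):
    print(np.real_if_close(psi @ op @ psi))   # all three vanish
```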
Some further implications of the model will be discussed.
<sup>∗∗∗</sup>presented at the International Conference on Strongly Correlated Electron Systems, Sendai, Japan, September 7–11, 1992, as poster 8P-92. The above text has been printed in the abstract booklet. The paper was rejected from publication by the Organizing Committee despite the author’s strong complaints.
# 1ES 1741+196: a BL Lacertae object in a triplet of interacting galaxies? Based on observations made with the Nordic Optical Telescope, operated on the island of La Palma, jointly by Denmark, Finland, Iceland, Norway, and Sweden, in the Spanish Observatorio del Roque de los Muchachos of the Institute de Astrofisica de Canarias. , Based on observations collected at the German-Spanish Astronomical Centre, Calar Alto, operated by the Max-Planck-Institut für Astronomie, Heidelberg, jointly with the Spanish National Commission for Astronomy
## 1 Introduction
Studies of the host galaxies and close environment of BL Lac objects provide important insights into the mechanism which could be responsible for the extreme properties of these extragalactic objects. Following the standard model, the mass-accreting black hole in the center of AGN host galaxies accounts for the extraordinary energy output of these objects. Hence, the fuelling mechanism of the central engine and the role played by tidal interaction with neighboring galaxies have to be clarified. Due to the loss of angular momentum caused by tidal forces, either material from a gas-rich neighbor or the gas in the host galaxy itself could fuel the nucleus. Evidence for such a scenario has been claimed, e.g., by Hutchings & Neff (1992).
While the nature of BL Lac host galaxies (at least at low redshifts) is well understood (see e.g. Heidt 1999 for a recent review), the immediate environment ($`<`$ 50 kpc) is as yet poorly studied. This is mainly due to the faintness of most neighboring galaxies (3–5 mag fainter than the BL Lac) as well as the limited resolution available from the ground. However, in recent years, a noticeable number of BL Lac objects with close companion galaxies (e.g. Falomo et al. (1990, 1991, 1993), Falomo (1996), Heidt et al. (1999)) or signs of interaction have been observed (e.g. Falomo et al. 1995, Heidt et al. 1999). This can be taken as evidence that tidal interaction is potentially important to the BL Lac phenomenon, at least in these sources.
1ES 1741+196 (z = 0.083) is a member of the Einstein Slew Survey sample of BL Lac objects (Perlman et al. 1996), whose host and environment have not been studied so far. Therefore, we carried out an imaging and spectroscopic study, the results of which are presented here. Throughout the paper $`\mathrm{H}_0=`$ 50 km $`\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$ and $`\mathrm{q}_0=0`$ are assumed.
## 2 Observations and data reduction
High-resolution imaging data of 1ES 1741+196 were taken with the Nordic Optical Telescope on the night of July 12/13, 1996. A 1k CCD (scale 0.176″/pixel) and an R filter were used. We observed the BL Lac for 840 sec in total, split into several exposures to avoid saturation of the BL Lac. The night was photometric; standard stars from Landolt (1983) were frequently observed to set the zero point. The data were reduced (debiased, flatfielded using twilight flatfields), cleaned of cosmic ray tracks, aligned, and coadded. The FWHM on the final coadded frame is 0.78″.
A longslit spectrum of 1ES 1741+196 and two nearby companion galaxies was taken with the Calar Alto 3.5m telescope on the night of April 21/22, 1998. The focal reducer MOSCA with a 2k CCD (scale 0.32″/pixel) and grism GREEN\_500 ($`\lambda \lambda `$ 4000–8000 Å with 1.9 Å/pixel) was used. The instrumental resolution with the 2″ slit was 14 Å. The slit was oriented at PA = $`57^{\circ }`$ to cover the centers of the two nearby companion galaxies and the host galaxy of 1ES 1741+196. Two spectra of 1800 sec each were taken. They were bias-subtracted, flatfielded, corrected for night sky background, and averaged. Wavelength calibration was carried out using HgAr calibration lamp exposures. Flux calibration was derived from observations of the standard star BD+$`33^{\circ }`$2642 (Oke 1990).
## 3 Data analysis and results
Figs. 1a and b display 1ES 1741+196 and its close environment with two dynamical ranges to show the extent of the host galaxy (a) and the two nearby companion galaxies (b). The two nearby companion galaxies (labelled “A” and “B” in Figure 1b) are at projected distances of 3.3″ and 12″, respectively.
In order to model the BL Lac and its host as well as the two companion galaxies, we applied a fully 2-dimensional fitting procedure to the images (for details see Heidt et al. 1999). Before fitting companion galaxies A and B, the uneven background produced by the host galaxy of 1ES 1741+196 has to be removed. To achieve this, we used the ellipse fitting task ELLIPSE in IRAF to model the core and host of 1ES 1741+196. This model was subtracted from the image, resulting in a flat background. After masking projected stars etc., the two companion galaxies were fitted. Next, we subtracted the models for the companion galaxies from the original image, masked the central regions of the subtracted companions (which show some residuals), and then fitted the host and core of 1ES 1741+196. To verify the results of our fitting procedure, we subtracted the models for the companion galaxies from the original image and repeated the procedure above (i.e., made a model of 1ES 1741+196 with ELLIPSE, subtracted the model, etc.). The fit parameters of iterations 1 and 2 differ by less than 1%.
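The iterate-subtract-refit loop can be outlined as below; this sketch uses the photutils isophote fitter as a modern stand-in for the IRAF ELLIPSE task, and `fit_companions` and `fit_blazar` are placeholders for the 2-dimensional model fits described in the text.

```python
# Schematic version of the decomposition: model the BL Lac light with
# elliptical isophotes, subtract it, fit the companions, and iterate.
import numpy as np
from photutils.isophote import Ellipse, build_ellipse_model

def isophote_model(image):
    isolist = Ellipse(image).fit_image()
    return build_ellipse_model(image.shape, isolist)

def decompose(image, fit_companions, fit_blazar, n_iter=2):
    comp_model = np.zeros_like(image)
    for _ in range(n_iter):
        cleaned = image - comp_model              # remove current companions
        flat = cleaned - isophote_model(cleaned)  # remove BL Lac light
        comp_model = fit_companions(flat)         # refit galaxies A and B
    return fit_blazar(image - comp_model), comp_model
```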
We used seven different models (three galaxy models with and without a nuclear point source, and a bulge+disk model) for the analysis. They were described by a generalized surface brightness distribution with a shape parameter $`\beta `$ (Caon et al. 1993). We have chosen $`\beta `$ = 1 (disk galaxy), $`\beta `$ = 0.25 (de Vaucouleurs), and $`\beta `$ as a free parameter. All galaxy models were convolved with the observed PSF, which was obtained by averaging several well exposed stars on the frame. For the nuclear point source a scaled PSF was used. PSF variations were proven to be negligible except in the central regions ($`\lesssim `$ 2 pixels). Absolute magnitudes of the galaxies were calculated using K-corrections adopted from Bruzual (1983); the galactic extinction was estimated from Burstein & Heiles (1982).
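Up to constants absorbed into the scale radius, the generalized profile can be written as $`I(r)\propto \mathrm{exp}[-(r/r_0)^\beta ]`$; the sketch below (our simplified form of the Caon et al. parameterization) builds such a model image and convolves it with the PSF.

```python
# Elliptical model with shape parameter beta: beta = 1 gives an
# exponential disk, beta = 0.25 a de Vaucouleurs profile.  pa in radians.
import numpy as np
from scipy.signal import fftconvolve

def galaxy_model(shape, x0, y0, I0, r0, beta, eps, pa):
    y, x = np.indices(shape)
    dx, dy = x - x0, y - y0
    c, s = np.cos(pa), np.sin(pa)
    xr = dx * c + dy * s                  # rotate into the major-axis frame
    yr = -dx * s + dy * c
    r = np.hypot(xr, yr / (1.0 - eps))    # elliptical radius
    return I0 * np.exp(-(r / r0) ** beta)

def seeing_convolved(psf, shape, *pars):
    return fftconvolve(galaxy_model(shape, *pars), psf, mode="same")
```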
The results of our fits are summarized in Table 1. The best fit for the host of 1ES 1741+196 was obtained with an elliptical galaxy model with $`\beta `$ = 0.15. The host is very bright ($`\mathrm{M}_\mathrm{R}`$ = $`-`$24.85), extremely large ($`\mathrm{r}_\mathrm{e}`$ = 51.2 kpc) and has a relatively high ellipticity ($`ϵ`$ = 0.35) with position angle PA = $`48^{\circ }`$. The fit with the de Vaucouleurs model ($`\beta `$ = 0.25) was less satisfactory ($`\chi _{\mathrm{red}}^2`$ = 1.76 vs. 1.32), but even then the host is very bright and large ($`\mathrm{M}_\mathrm{R}`$ = $`-`$24.45, $`\mathrm{r}_\mathrm{e}`$ = 25.4 kpc). The decentering (host versus core centroids) is negligible for both models ($`<`$ 0.04″). For comparison we give the results of both fits in Table 1. We determined the significance of the best fit with $`\beta `$ = 0.15 by numerical simulations (see Heidt et al. 1999 for a description of the procedure). The result for $`\beta `$ = 0.15 is statistically significant at the 10 $`\sigma `$ level.
Galaxy A can best be fitted by a system consisting of a disk and a bulge ($`\mathrm{m}_\mathrm{R}`$ (bulge) = 18.37 and $`\mathrm{r}_\mathrm{e}`$ (bulge) = 1.2 kpc, $`\mathrm{m}_\mathrm{R}`$ (disk) = 18.27 and $`\mathrm{r}_\mathrm{e}`$ (disk) = 1.8 kpc), which is typical for a Sa-type galaxy. The best fit for galaxy B was obtained with a de Vaucouleurs model ($`\mathrm{m}_\mathrm{R}`$ = 18.04, $`\mathrm{r}_\mathrm{e}`$ = 3.5 kpc).
In Fig. 1c we show the image after subtraction of the model for 1ES 1741+196. The two companion galaxies A and B and a low surface brightness feature between the two companion galaxies are clearly visible. In order to examine the nature of the low surface brightness feature, we further subtracted our models for the two companion galaxies from the image. The result is displayed in Fig. 1d. The feature is still present; the surface brightness at its peak is 24.5 mag/sq. arcsec. Its appearance is suggestive of a tidal tail, indicating ongoing interaction between the two companion galaxies. It is more condensed towards companion galaxy A, which might imply that material is being stripped from that galaxy. We emphasize that the tidal tail is a real feature and not an artifact caused by our modelling procedure. It always shows up irrespective of the models used (either the IRAF ELLIPSE task or our own procedure, both of which produce smooth models).
Since the spectra of the two companion galaxies are contaminated by the contribution from the BL Lac host, we adopted a decomposition procedure to extract the 1-dimensional spectrum for each of the galaxies. Perpendicular to the dispersion, we fitted, column by column, three Gaussians representing the contributions of the host of 1ES 1741+196 and the two companion galaxies to the flux distribution, using a least-squares method. The integrated flux of each Gaussian for each column was then used to derive the 1-dimensional spectra. This gave us confidence that the lines found in the spectra of the companion galaxies are not caused by the host of 1ES 1741+196. The spectra are shown in Fig. 2.
All three spectra show Ca K+H, the G-band, Mg b, and Na D in absorption, typical of bulge-dominated galaxies. This is consistent with the results of our fits to the images. No emission lines were detected. From the absorption lines we derive z = 0.084$`\pm `$0.001 for 1ES 1741+196 and companion galaxy A, and z = 0.085$`\pm `$0.002 for companion galaxy B. All three galaxies are at the same redshift within the errors (the redshifts actually differ by only 0.0005). Our redshift for 1ES 1741+196 is in accordance with the value (z = 0.083) given by Perlman et al. (1996), who measured the same absorption lines.
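As a worked example of the redshift determination, the sketch below averages $`z=\lambda _{obs}/\lambda _{rest}-1`$ over the five features; the observed wavelengths are illustrative numbers consistent with z = 0.084, not our measured line centroids.

```python
# Mean redshift from the identified absorption lines (wavelengths in
# Angstrom; rest values are standard, observed values are illustrative).
import numpy as np

rest = {"Ca K": 3933.7, "Ca H": 3968.5, "G band": 4304.4,
        "Mg b": 5175.4, "Na D": 5892.9}
obs = {"Ca K": 4264.1, "Ca H": 4301.9, "G band": 4666.0,
       "Mg b": 5610.1, "Na D": 6388.0}

z = np.mean([obs[k] / rest[k] - 1.0 for k in rest])
print(f"z = {z:.3f}")                     # -> z = 0.084
```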
## 4 Discussion
With $`\mathrm{M}_\mathrm{R}`$ = $`-`$24.85 and $`\mathrm{r}_\mathrm{e}`$ = 51.2 kpc, the host of 1ES 1741+196 is one of the brightest and largest BL Lac hosts known to date. This is true even when the de Vaucouleurs model is used. Then $`\mathrm{M}_\mathrm{R}`$ = $`-`$24.45 and $`\mathrm{r}_\mathrm{e}`$ = 25.4 kpc, which is still considerably brighter and larger than the typical BL Lac host ($`\mathrm{M}_\mathrm{R}`$ = $`-`$23.5 and $`\mathrm{r}_\mathrm{e}`$ = 10 kpc, Heidt 1999). Similar half-light radii have been found e.g. for PKS 0301-243 and PKS 0548-322 in the R-band (Falomo 1995) and for H0414+009, MS 0419+197 and PKS 1749+096 in the r-band (Wurtz et al. 1996). However, only PKS 0548-322 is of similar brightness (Falomo 1995). It is remarkable that the results for PKS 0548-322 obtained by Falomo (1995) and Wurtz et al. (1996) differ considerably ($`\mathrm{r}_\mathrm{e}`$ = 51 kpc versus 13.77 kpc and $`\mathrm{M}_\mathrm{R}`$ = $`-`$24.2 versus $`-`$23.25, respectively).
According to our simulations, the deviation of the galaxy profile of the host of 1ES 1741+196 from a de Vaucouleurs profile is significant. A $`\beta `$ of 0.15 represents a flatter light distribution than $`\beta `$ = 0.25. This can be explained by tidal interaction with the two neighboring galaxies, which a) have the same redshift as 1ES 1741+196 and b) are at projected distances of 7.2 kpc (galaxy A) and 26.3 kpc (galaxy B), respectively, and are thus within the half-light radius of the host of 1ES 1741+196. During an encounter of galaxies, initial orbital energy of the galaxies is transferred into internal energy, which in turn perturbs the initial mass distribution. The galaxies expand along their impact parameter and contract perpendicular to their impact parameter. Finally, the galaxies blow up and their luminosity profiles become flatter (Madejski & Bien, 1993). The results of our fits to the host of 1ES 1741+196 are consistent with this scenario. The luminosity profile is flat, the galaxy is rather elliptical, and the PA = $`48^{\circ }`$ is approximately along the impact parameter between 1ES 1741+196 and the companion galaxies. This effect is not pronounced for the two companion galaxies, but here the situation is complicated by the interaction between the galaxies themselves.
An interesting observation is the tidal tail emerging from galaxy A, possibly connected to galaxy B. It is more condensed towards galaxy A, which would suggest that material has been released from this galaxy. Since galaxy A is most likely a bulge-dominated disk system, the material could well be a mixture of stars and gas. Unfortunately, no emission lines, which would be expected for Sa-type systems or which could be signs of recent star formation induced by tidal interaction, can be found in the spectra. This is not unexpected, however. First, the slit had a width of 2″, thus probing the inner part of the galaxy dominated by the bulge. Secondly, the slit orientation covered the tidal tail only in part. Finally, the whole system is polluted by the host galaxy of 1ES 1741+196, which makes it very hard to detect emission lines unless they are very strong.
The observations of 1ES 1741+196 presented here and the observations of 1ES 1440+122 and 1ES 1853+671 (Heidt et al. 1999) may offer a unique opportunity to study nuclear activity induced by tidal forces in BL Lac objects. All objects have relatively bright companion(s) within 10 kpc projected distance. Whereas 1ES 1440+122 has a very bright companion at the same redshift (M. Dietrich, priv. com.), perhaps approaching the BL Lac, 1ES 1741+196 seems to be in an ongoing state of interaction with its companion galaxies, and 1ES 1853+671 has a companion which seems to be merging with 1ES 1853+671 itself. Thus, these three objects may form a homogeneous sequence from an early to a late stage of interaction. Unfortunately, the redshift of the companion galaxy to 1ES 1853+671 is not known, which makes this consideration somewhat speculative.
One might ask whether these three BL Lac objects are typical of their class. Close companion galaxies have often been observed (e.g. Falomo 1990, 1996, Heidt et al. 1999), but in most cases they are relatively faint and their redshifts are unknown. As such, the three 1ES BL Lac objects are not untypical, except for the brightness of their companion galaxies.
A major drawback of this consideration is the lack of a clear demonstration that activity in BL Lac objects is triggered or maintained by gravitational interaction. The discussion of this subject and its relevance to AGN has a long and contentious history. In one of the most recent papers dealing with this issue, De Robertis et al. (1998) compared the environments of well defined samples of Seyfert and “normal” galaxies and found essentially no difference. Such a comparison has not yet been conducted for any kind of radio-loud AGN. As already discussed in Heidt et al. (1999), this is tricky work, but urgently needed.
###### Acknowledgements.
We thank the referee (Dr. J. Stocke) for his critical comments. This work was supported by the DFG (Sonderforschungsbereich 328) and the Finnish Academy of Sciences.
# SELF-ORGANIZATION OF COMPLEX SYSTEMS
## 1 Introduction
Scientific inquiry in the second millennium has focused almost exclusively on discovering the fundamental constituents, or building blocks, of nature. The innermost secrets have been revealed down to ever smaller scales. Matter is formed of atoms; atoms are composed of electrons, protons, and neutrons, and so on down to the smallest scale of quarks and gluons. These basic elements interact through simple physical laws.
In the realm of biology, it is known that life on earth is based on the DNA double helix. But even though we understand perfectly the laws governing the interaction of atoms, we cannot directly extrapolate these laws to explain the beginning of life, or the auto-catalysis of complex molecular networks, or why we have brains that can contemplate the world around us. Due to the overwhelming unlikeliness of random events leading to complex systems like ourselves, it seems as if an organizing agent or “God” must be invoked who puts the building blocks together.
It isn’t necessary to delve into the biological realm to see the ultimate inadequacy of a purely reductionist approach. For instance, the surface of the earth is an intricate conglomerate of mountains, oceans, islands, rivers, volcanoes, glaciers, and earthquake faults, each with its own dynamics. The behavior of systems like these cannot be deduced by examining ever smaller scales to derive microscopic laws; the dynamics and form are “emergent.” Unless one is willing to invoke an organizing agent of some sort, all these phenomena must be self-organized. Complexity must emerge from a self-organizing dynamics. But how?
A few ideas have been proposed that begin to address this problem, which can be characterised as “How do we take God out of the equations?” The most pessimistic view is that one has to describe each and every feature in nature on a case by case basis. Indeed, such a “stamp collection” approach has prevailed in sciences such as biology and geophysics, and attempts to look for a unifying description have in the past been met with very strong scepticism among the practitioners of those sciences, although there have been exceptions such as plate tectonics theory, Kauffman’s work on autocatalytic networks, and Gould and Eldredge’s theory of punctuated equilibrium in biological evolution.
Perhaps nature does not need to invent a multitude of mechanisms, one for each system. The view that only a limited number of mechanisms, or principles, lead to complexity in all its manifestations (from the galactic or universal to the molecular) is supported by the observation of regularities that appear in the statistical description of complex systems. These statistical regularities provide hope and encouragement that a science of complexity may eventually emerge.
For example, river networks, mountain ranges, etc. exhibit scaling behavior, both in the spatial and in the temporal domain, where landslides or sediment deposits interrupt the quiet steady state. These landslides have been observed to be scale free; similarly, the Gutenberg-Richter law for earthquakes states that they are also a scale-free phenomenon, with avalanches (quakes) of all sizes. The distribution of energy released during earthquakes is a simple power law, despite the enormous complexity of the underlying system, involving a multitude of geological structures. Forest fires have a similar behavior, as does volcanic activity. In astrophysical phenomena, there are star quakes, which we observe as pulsar glitches, interrupting quiet periods. Black holes are surrounded by accretion disks, from which the material collapses into the black hole in intermittent, earthquake-like events, which interrupt the otherwise steady evolution and occur over a wide range of scales.
Biological evolution also exhibits long periods of stasis punctuated by extinction events of all sizes. The paleontologists Stephen Jay Gould and Niles Eldredge coined the term “punctuated equilibrium” to describe the pace of evolution. Gould also argues that the record of extinction of species is contingent on seemingly minor accidents, and if the tape of the history of life were to be rerun an entirely different set of species would emerge .
We assert that punctuated equilibrium dynamics is the essential dynamical process for everything that evolves and becomes complex, with a specific behavior that is strongly contingent on its history . The periods of stasis allow the system to remember its past, the punctuations allow change in response to accumulated forcing over long time scales, and the criticality assures that even minor perturbations can have dramatic effects on the specific outcome of a particular system, making it possible to have distinct individual histories and forms.
Perhaps the greatest challenge is to find the mechanism by which the big bang has led to ever increasing complexity in our universe, rather than exploding into a simple gas-like fragmented substance, as explosions usually do, or imploding into a simple solid or black hole. Some intricately balanced feature of the initial state must have existed that allowed this to happen. How that “fine tuning” could have appeared remains a mystery, with Lee Smolin’s speculation of universes created by Darwinian selection being the only attempt so far .
Complexity is a hierarchical phenomenon, where each level of complexity leads to the next: astrophysics, with its own hierarchy of scales, leads to geophysics, which is the prerequisite for chemistry, biology, and ultimately the social sciences. Although the origin of the hierarchy is not understood, we do have the rudiments of a theory for the emergence of one level out of the previous one. Due to this hierarchy of emergence, it isn’t necessary to understand the mechanism of the big bang in order to understand the dynamics of earthquakes.
A common feature of the systems mentioned thus far, and perhaps of all complex systems, is that they are driven by slowly pumping in energy from a lower level of the hierarchy. For instance, biological life is driven by the input of energy from the sun. The energy is stored and later dissipated, in an avalanche process like an earthquake. Even a small increment in energy can trigger a large catastrophe, making these systems strongly contingent on previous history. They operate far from equilibrium, which is necessary since systems in equilibrium tend to become more and more disordered (rather than complex) over time, according to the second law of thermodynamics.
## 2 Complexity and Criticality
One view of systems driven out of equilibrium is that they should tend to a uniform “minimally” stable state generated by some type of optimization process. In traffic flow such a state would correspond to a uniform flow of cars with all cars moving at maximum velocity possible. But these optimized states often are catastrophically unstable, exhibiting breakdown events or avalanches, such as traffic jams . In tokamaks , this means that the ideal state of the plasma with the highest possible energy density is locally stable, but globally unstable with respect to explosive breakdown events. The surface of the sun is unstable with respect to formation of solar flares emitting energy in terms of light or gamma rays. In fact, the actual sets of states that emerge are those which are organized by the breakdown events.
A possible self-organized state is one that is critical in the sense that it has power law spatial and temporal correlations, like equilibrium systems undergoing a second order phase transition. The breakdown events in that state then must also be critical in the sense of a nuclear chain reaction process. In a supercritical system, a single local event, like the injection of a neutron, leads to an exponentially exploding process. A sub-critical process has exponentially decaying activity, always dying out. In the critical state, the activity is barely able to continue indefinitely, with a power law distribution of stopping times, reflecting the power law correlations in the system and vice versa.
It is intuitively clear that complex systems must be situated at this delicately balanced edge between order and disorder, in a self-organized critical (SOC) state. In the ordered state, every place looks like every other place. Think of a crystal where the atoms are lined up over millions of inter-atomic distances. In the disordered state, there are no correlations between events that are separated in time or space: we have white noise. Again, it makes no sense to talk about complex behavior. Chaotic systems belong to this latter category. Sub-critical or supercritical states can usually be understood quite easily by analysing the local properties. Only at the critical state does the compromise between order and surprise exist that can qualify as truly complex behavior. There are very large correlations, so the individual degrees of freedom cannot be isolated. The infinity of degrees of freedom interacting with one another cannot be reduced to a few. This irreducibility is what makes critical systems complex.
Thus, self-organized criticality provides a general mechanism for the emergence of complex behavior in nature. It has been proposed that granular piles , traffic , magnetic fusion plasmas , the crust of the earth , river networks and braided rivers , superconductors in a magnetic field , etc., all operate in a self-organized critical state.
The sandpile was the first model introduced by Bak, Tang, and Wiesenfeld to demonstrate the principle of self-organized criticality . This model has subsequently received a great deal of attention due in part to its potential for having a theoretical solution. Dhar showed that certain aspects of its behavior could be calculated exactly based on the Abelian symmetry of topplings . The sandpile was thought of as a paradigmatic gedanken experiment, but there has also been experimental confirmation of self organized criticality in granular piles. Fig. 1 shows an experiment on a pile of rice by Frette et al. . Grains of rice were dropped between two glass plates by a seeding machine, and the avalanches were monitored by a video camera connected with a computer for data analysis. A power law distribution of avalanches was found, indicating SOC.
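The toppling rule behind such models can be stated in a few lines; the sketch below is our own minimal version of the Bak, Tang, and Wiesenfeld model, meant as an illustration rather than the code behind any cited result.

```python
# Minimal BTW sandpile with open boundaries: grains are added at random
# sites; a site with 4 or more grains topples, sending one grain to each
# neighbor; grains crossing the edge leave the system.  Avalanche sizes
# (numbers of topplings) become power-law distributed in the critical state.
import numpy as np

def sandpile(L=50, n_grains=50_000, seed=0):
    rng = np.random.default_rng(seed)
    z = np.zeros((L, L), dtype=int)
    sizes = []
    for _ in range(n_grains):
        i, j = rng.integers(L, size=2)
        z[i, j] += 1
        size = 0
        while True:
            unstable = np.argwhere(z >= 4)
            if len(unstable) == 0:
                break
            for a, b in unstable:
                z[a, b] -= 4
                size += 1
                for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    if 0 <= a + da < L and 0 <= b + db < L:
                        z[a + da, b + db] += 1
        sizes.append(size)
    return np.array(sizes)
```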
Over the past decade there has been a great deal of theoretical work on models of SOC. Much of this work has focussed on idealized models of sandpiles. These models typically involve a sequence of nodes to which sand is added until a critical gradient or height is reached locally, triggering redistribution of sand to nearest neighbors. Then a chain reaction of instabilities may occur, encompassing all scales up to the system size. Self-organized critical systems evolve toward a scale-free, or critical, state naturally, without fine tuning of any parameters. This gives rise to power law distributions for the breakdown events. Minimal SOC models have been developed to describe a diverse set of phenomena including earthquakes, solar flares, forest fires, magnetically confined plasma, fluctuations in stock markets and economics, black hole accretion disks, traffic, biological evolution, braided rivers formed by vortex avalanches in superconductors, and disease epidemics, among others.
Given the preliminary nature of current understanding of complex systems, we are forced to consider one type of system at a time, looking for general principles. Some advancement has come from developing and studying simple computer models which help to conceptualize the essential attributes of the specific phenomena, and eventually to relate those to other phenomena. In the following we shall review several of these applications from widely different scientific domains: one from biology (co-evolution of species), one from solid state and geophysics (vortex avalanches and braided rivers), one from the social sciences (traffic), and one from cognitive science (brain function).
## 3 Braided Rivers and Superconducting Vortex Avalanches
Magnetic flux penetrates type II superconductors in quantized vortices which can move when an electrical current is applied, overcoming pinning barriers. When magnetic flux is forced in or out of the superconductor, vortices have been observed to flow intermittently through preferred channels. Using a simple cellular model to mimic this experimental situation, it has been found that the vortex flow forms rivers strikingly similar to aerial photographs of braided fluvial rivers, such as the Brahmaputra. This suggests that a common dynamical mechanism exists for braiding, namely, avalanches of stick-slip events, either of sliding sediment or of vortices, which organize the system into a critical braided state.
The cellular model includes basic features of vortex dynamics: over-damped motion of vortices, repulsive interactions between vortices, and attractive pinning interactions at defects in the material. It is a coarse grained description at the scale of the range of intervortex interactions, the so-called London length, and throws out most microscopic degrees of freedom (specific information about the vortex cores). As in experiments, vortices are slowly pushed into the system at one boundary (the left) and allowed to leave at the other boundary (the right). The vortex-vortex repulsions cause a gradient to build up in the vortex density across the system. Eventually, as vortices are constantly added, a critical slope is achieved where the force from the gradient of vortex density is opposed by pinning forces, making a delicately balanced vortex pile reminiscent of a pile of sand. Then adding new vortices slowly at the boundary triggers avalanches of vortex motion, where one moving vortex can cause others to become unstuck, leading to a chain reaction. Avalanches of all sizes occur, limited only by the physical size of the system. Since the avalanches have no other characteristic spatial or temporal scale, the model exhibits self-organized criticality. Similar behavior has been observed experimentally, and in molecular dynamics simulations of the microscopic equations of motion.
The spatial variation of the overall vortex flow is measured in terms of the number of vortices moving in each cell, averaged over a long time interval representing many vortices flowing through the system. Fig. 2 represents a “time-lapsed” photograph of vortex motion. Rather than exhibiting uniform flow, the vortices clearly have preferred channels to move in. The braided vortex river resembles networks of interconnected channels formed by water flowing over non-cohesive sediment. Such braided fluvial systems have been observed from aerial photographs to exist for many different length scales and types of sediment. In fact, braiding has been proposed to be the fundamental instability of laterally unconstrained free surface flow over cohesionless beds, and has been found to be a robust feature in simulations of river flow with sediment transport that includes both erosion and redeposition.
A quantitative scaling analysis reveals that the vortex river pattern is a self-affine multifractal with scaling dimensions close to those measured for a variety of braided rivers. Given the vastly different length scales and materials involved, this apparent universality may seem surprising. Nevertheless, it is known that this type of universality can exist in systems which evolve by avalanches into a self-organized critical state. In the case of braided vortex rivers, the patterns are due to a stick-slip process consisting of vortex avalanches that self-organizes to a critical state, resulting in the observed long-range correlations of the braided pattern. It has been postulated that braiding of fluvial rivers is due to a self-organized critical process.
Are there avalanches in fluvial rivers that could self-organize and produce the observed braiding? In fact there are. “Pulses” in bedload transport have been observed to occur on all spatial and temporal scales up to those limited by the size of the river studied . Analogous pulses in the vortex model are seen by measuring the vortex flow through individual lattice cells as a function of time. The flow in a small region of the system is temporally intermittent; there is a broad distribution of intervals between pulses, and the pulses themselves can have a broad range of sizes. These pulses are a consequence of avalanche dynamics in a self-organized critical state in the model. Thus, vortices of magnetic flux are analogous to sediment in fluvial rivers. The elementary stick-slip process is that of sediment slipping and then resticking at some other point, like intermittently moving vortices. The elementary slip event can dislodge nearby sediment leading to a chain reaction of slip events, or avalanches. Sediment transport can be triggered when the local sediment slope is too high; the same is true for vortices in a superconductor. Thus, in both magnetic flux and fluvial rivers it appears that the braiding emerges from a stick-slip process consisting of avalanches of all sizes .
## 4 Is Life a Self-organized Critical Phenomenon?
Evolution has taken place in a highly intermittent way. Periods with little activity have been pierced by major extinction events where many species disappeared, and other species emerged. About 65 million years ago the dinosaurs vanished during such an event, but this is far from the biggest. About 250 million years ago we had the Permian mass extinction, and more than 500 million years ago the Cambrian explosion took place.
Traditional scientific thinking is linear. Nothing happens without a reason. The bigger the impact, the stronger the response. Thus, without further ado paleontologists and other scientists working on early life took it for granted that those extinctions were caused by some external cataclysmic events. Several have been suggested, including climatic changes and volcanic eruptions. The prevailing view on the Cretaceous event is that it was caused by a meteorite hitting earth.
The linear point of view is correct for a simple system near equilibrium, such as a pendulum nearly at rest. But we do know that large events can happen without external impact in geophysical and astrophysical processes. No meteorite is needed in order to have large earthquakes, for instance. Actually, there are striking statistical regularities indicating that the mass extinctions are part of a self-organized critical process.
Species do not evolve in isolation, so biology is a cooperative phenomenon! The environment of each individual is made up of other individuals. The atmosphere that we breathe is of biological origin, with an oxygen content very different from that at the time of the primordial soup. Species interact in food webs. The interaction can be through competition for resources, as parasites, or by symbiosis. This allows for the possibility that the extinction events can be viewed as co-evolutionary avalanches, where the death of one species causes the death (and birth) of other species, just as the toppling of one grain of rice in the rice pile leads to toppling of other grains.
Let us take a look at the fossil record. Fortunately, Jack Sepkoski has devoted a monumental effort to mapping out the rate of extinction during the last 500 million years. It is extremely important to have as much data as possible, since we cannot make accurate theories for specific events, and therefore must confront theories with observations at the statistical level. The inset in Fig. 3 shows the temporal variations of the number of Ammonoida families. If part of the curve shown is enlarged, the pattern seen on the finer scale looks the same as that seen on the coarser scale. Thus, there is no typical scale for the variations. This scale-independent or self-similar behavior is a strong indication of criticality—it cannot occur in simple systems with few components, including those exhibiting low-dimensional chaotic behavior.
Self-similarity, or scaling, can be expressed more quantitatively in terms of the power spectrum $`p(f)`$ of the time series. The power spectrum is the Fourier transform of the autocorrelation function. When plotted on log-log axes, Fig. 3, it shows an approximately straight line over a couple of decades. This indicates that the spectrum is a power law, $`p(f)\propto f^{-\alpha }`$. The slope $`\alpha `$ is approximately unity. This type of dynamics is called one-over-f ($`1/f`$) noise. It is completely impossible to explain the smooth $`1/f`$ behavior with a set of arguments tailored each to events on a separate scale. Even in the absence of any theory, the smooth $`1/f`$ behavior is an empirical indication that the underlying mechanisms are the same on all scales. How else to explain that the curve has the same slope on all scales, and that segments corresponding to different scales join smoothly to form a straight line spanning all scales? Figure 4 shows the distribution of life times $`T`$ of genera, also from Sepkoski’s data. This is another power law, $`N(T)\propto T^{-2}`$, giving further evidence that life is a critical process.
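To make the spectral claim concrete, here is a minimal sketch of how such an exponent can be estimated from a time series. The function name, the use of a raw periodogram, and the unweighted least-squares fit in log-log coordinates are our choices for illustration; they are not a description of the analysis actually performed on Sepkoski’s data.

```python
import numpy as np

def spectral_exponent(series, dt=1.0):
    """Estimate alpha in p(f) ~ f**(-alpha) from a time series,
    using a raw periodogram and a log-log least-squares fit."""
    x = np.asarray(series, dtype=float)
    x = x - x.mean()                           # remove the zero-frequency peak
    power = np.abs(np.fft.rfft(x)) ** 2        # periodogram
    freq = np.fft.rfftfreq(x.size, d=dt)
    keep = freq > 0                            # drop the zero-frequency bin
    slope, _ = np.polyfit(np.log(freq[keep]), np.log(power[keep]), 1)
    return -slope                              # ~1 for 1/f noise
```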
Because of the complexity of the phenomenon that we are dealing with—the global biological evolution on all time scales—mathematical modelling is an extremely delicate affair. It is difficult to go from micro-evolution where the mechanisms (genetics) are relatively well understood, to macro-evolution at the largest scale. Geneticists may understand what goes on within a few generations of a few hundreds or a few thousands of rats, but they have little to say about the behavior of an evolving global ecology of millions of species, each with hundreds of millions of individuals.
Kauffman and Johnsen were the first to suggest that the Darwinian dynamics of an ecological network with all species connected through their interactions, positive or negative, could lead to a critical state. The first model for evolution to show SOC was the Bak-Sneppen (BS) model .
The Bak-Sneppen model represents an entire species by a single fitness number. Selection acts on the level of the individual, of course, but to achieve simplification we consider the evolution at the “coarse-grained” species level. Consider a number, $`N`$, of species placed on a circle. Each species interacts with its two neighbors. Each species is assigned a random fitness $`0<f<1`$ which represents its ability to survive in a given environment. Time is discrete, and at each time step the species with the lowest fitness goes extinct, and is replaced by another species with a random fitness $`f`$, $`0<f<1`$. Alternatively, one could view the process as a pseudo extinction where a species is replaced by a mutated variant. Whatever the view, this change in one species affects the fitnesses of its two neighbors: their fitnesses, which might originally have been high, are also replaced by new random fitnesses, reflecting the fact that their existence has become a new ball game. This process of changing the fitnesses of the least fit species and the two it interacts with is continued ad infinitum.
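Since the paragraph above fully specifies the model, a minimal implementation fits in a few lines. The sketch below is ours; the parameter values are placeholders.

```python
import random

def bak_sneppen(n_species=200, n_steps=100_000, seed=0):
    """Minimal Bak-Sneppen model: species on a circle, each with a random
    fitness in (0, 1); at every step the least fit species and its two
    neighbours are replaced by species with new random fitnesses."""
    rng = random.Random(seed)
    fitness = [rng.random() for _ in range(n_species)]
    for _ in range(n_steps):
        worst = min(range(n_species), key=fitness.__getitem__)
        for j in (worst - 1, worst, (worst + 1) % n_species):
            fitness[j] = rng.random()          # extinction / mutation event
    return fitness                             # mostly above a threshold near 0.67
```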
Most of the species have fitnesses above a threshold that has established itself with value approximately 0.67, forming a rather stable network (Figure 5). However, there is a localized region with species of lower fitnesses. These are the species, or niches, that are currently undergoing changes or extinctions as part of a co-evolutionary avalanche.
During an avalanche, nature “experiments” with the species involved, changing many of them several times, until they all have achieved fitnesses above the threshold. If the changes experienced by any given species are measured vs. time, one finds punctuated equilibrium behavior, with periods of stasis interrupted by intermittent bursts. This can be characterized by the power-spectrum of the local activity, which is a $`1/f`$ spectrum with exponent $`\alpha \approx 0.59`$.
Note that in the BS model evolution progresses by elimination of the least fit species, and not by propagation of strong species. This distinction is not merely semantics. One cannot have a process of evolution where individual species out-compete their environment, the popular view of Darwinian evolution. The complexity of Life is intimately related to the existence of large interactive networks. Actually, extremal dynamics associated with removing the weakest link is essential for the emergence of complex or critical phenomena. The criticality of the SOC earthquake models can also be traced to the breakdown of the weakest site, and not an arbitrary site.
Thus, the mechanism of evolution is “extinction of the least fit” rather than “survival of the fittest”! The best a species can hope for is to be a participant of the global ecological network. In the final analysis, being fit simply means being a self-consistent part of a complex structure.
### 4.1 Ecology dynamics
Perhaps the dynamics of evolution can be found on a smaller scale by studying local ecologies or food webs. Keitt and Marquet have studied the dynamics of birds introduced into the Hawaiian islands. They measured the extinction rate between successive periods of 10 years (to be compared with the 4 million year intervals used for the analysis of the fossil record) and found a power law distribution, and also extracted the lifetime distribution of species, yielding another power law with exponent near unity. A total of 59 extinctions on six islands were included in their statistics. Because of the scant amount of data available, no firm conclusions could be reached, but everything was consistent with an ecology operating at criticality. In a very comprehensive study, Lockwood and Lockwood have analyzed grasshopper infestations in several regions of Idaho and Wyoming. Histograms of annual infestations, measured as the area involved, show a power law distribution. Although numerous external factors affect the infestation rate, the results suggest criticality.
## 5 Traffic Jams and the Most Efficient State
Our everyday experience with traffic jams is that they are annoying and worth avoiding. Intuitively, many people believe that if we could somehow get rid of jams then traffic would be more efficient with higher throughput. However, this is not necessarily true. By studying a simple model of highway traffic, it is found that the state with the highest throughput is a critical state with traffic jams of all sizes. If the density of cars were lower, the highway would be underutilized; on the other hand, if it were higher there would inevitably be a huge jam lowering throughput. This leaves us with the critical state as the most efficient state that can be achieved. Finding a real traffic network operating at or near peak efficiency may seem highly unlikely. To the contrary, as found in the model, an open network self-organizes to the critical state .
The Nagel-Schreckenberg model is defined on a one dimensional lattice with cars moving to the right. Cars can move with integer velocities in the interval $`[0,v_{max}]`$. The maximum velocity $`v_{max}`$ is typically set equal to 5. This velocity defines how many “car lengths” each car will move at the next time step. If a car is moving too fast, it must slow down to avoid a crash. A slow moving car will accelerate, in a sluggish way, when given an opportunity. The ability to accelerate is slower than the ability to brake. Also, cars moving at maximum velocity may slow down for no reason, with probability $`p_{free}`$. A “cruise-control” limit of the model exists where $`p_{free}\to 0`$. This means that all cars which have reached maximum velocity, and have enough headway in front of them to avoid crashes, will continue to move at maximum velocity. Thus it is possible for the motion in the system to be completely deterministic.
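A minimal sketch of one update step, under our reading of the rules above, follows. We use a parallel update on a ring and apply the random slowdown only to cars at maximum velocity, matching the cruise-control variant described; the variable names are ours.

```python
import random

def nasch_step(pos, vel, length, v_max=5, p_free=0.0, rng=random):
    """One parallel update of the Nagel-Schreckenberg rules. pos/vel are
    per-car lists; cars live on a ring of `length` cells."""
    n = len(pos)
    order = sorted(range(n), key=pos.__getitem__)      # cars in road order
    new_vel = list(vel)
    for k, i in enumerate(order):
        ahead = order[(k + 1) % n]                     # next car downstream
        gap = (pos[ahead] - pos[i] - 1) % length       # empty cells in between
        v = min(vel[i] + 1, v_max)                     # sluggish acceleration
        v = min(v, gap)                                # brake to avoid a crash
        if v == v_max and rng.random() < p_free:       # random slowdown at top speed
            v -= 1
        new_vel[i] = v
    new_pos = [(pos[i] + new_vel[i]) % length for i in range(n)]
    return new_pos, new_vel
```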
If the cars are moving on a ring starting from random initial conditions, at low densities the initial jams will “heal” and the system will reach a deterministic state where the current is equal to the density of cars multiplied by the maximum velocity. This will hold up to some maximum density above which jams never disappear and the current is a decreasing function of density.
Remarkably, maximum throughput, $`j_{max}`$, is selected automatically when the left boundary condition is an infinitely large jam and the right boundary is open. Traffic which emerges from the megajam operates precisely at highest efficiency. This situation is shown in Fig. 6.
The horizontal axis is space and the vertical axis (down) is increasing time. The cars are shown as black dots which move to the right. The diagram allows us to follow the pattern in space and time of the traffic. Traffic jams show up as dense regions which drift to the left, against the flow of traffic. The structure on the left hand side is the front of the megajam (cars inside the megajam are not plotted). Cars emerge from the big jam in a jerky way, before they reach a smooth outgoing pattern operating at $`j_{max}`$. Far away from the front of the megajam all cars eventually reach maximum velocity.
If the outflow is perturbed slightly, traffic jams of all sizes occur. No cataclysmic triggering event, like a traffic accident, is needed to initiate large jams. They arise from the same dynamical mechanism as small jams and are a manifestation of the criticality of the outflow regime. Our natural intuition that large events come from large disturbances is violated. It does not make any sense to look for reasons for the large jams. The large jams are fractal, with small sub-jams inside big jams ad infinitum. Between the subjams are “holes” of all sizes where cars move at maximum velocity. This represents the irritating stop-and-go driving pattern that we are all familiar with in congested traffic. On the diagram, it is possible to trace the individual cars and observe this intermittent pattern. This behavior gives rise to $`1/f`$ noise, as seen in real traffic flow . This $`1/f`$ behavior can be calculated exactly for this model by formulating the jams as a cascade process . The picture of avalanche dynamics as a fractal in space and time has application to many complex dynamical systems in addition to traffic.
The conventional view is that one should try to get rid of traffic jams in order to increase efficiency and productivity. However, the critical state, with traffic jams of all sizes, is the most efficient state that can actually be achieved. A carefully prepared state where all cars move at maximum velocity would have higher throughput, but it would be dreadfully unstable. The very efficient state would catastrophically collapse from any small fluctuation. A similar situation occurs in the familiar sand pile models of SOC. One can prepare a sand pile with a supercritical slope, but that state is unstable to small perturbations. Disturbing a supercritical pile will cause a collapse of the entire system in one gigantic avalanche.
But there is perhaps even a deeper relationship between traffic and economics . In an economy, humans interact by exchanging goods and services. In the real world, each agent has limited choices, and a limited capability to monitor his changing environment. This is referred to as bounded rationality. The situation of a car driver in traffic can be viewed as a simple example of an agent trying to better his condition in an economy. Each driver’s maximum speed is limited by the other cars on the road and posted speed limits. His distance to the car in front of him is limited by his ability to stop and his need for safety in view of the unpredictability of other drivers. He is also exposed to random shocks from the road or from his car. He may be absent minded. If traffic is a paradigm for economics in general, then perhaps we have found a new economic principle: the most efficient state that can be achieved for an economy is a critical state with fluctuations of all sizes.
## 6 The Critical Brain
Why do we need a brain at all? In a sub-critical world everything would be simple and uniform - there would be nothing to learn. In a supercritical world, everything would be changing all the time in a chaotic way - it would be impossible to learn. The brain is necessary for us in order to navigate in a complex, critical world.
A brain is able not only to remember, but also to forget and adapt to a new situation. In a sub-critical brain memories would be frozen. In a supercritical brain, the patterns would change all the time so no long term memory would be possible. This leaves us with one choice - the brain itself has to be in the in-between critical state. Using physics terminology, it is the high susceptibility of the critical state which makes it adaptable.
Actually, Alan Turing , some time ago, speculated that perhaps the working brain needs to operate at a barely critical level, in order to stay away from the two extremes - namely the too correlated sub-critical level, and the too explosive supercritical dynamics.
In traditional neural network models, the goal has typically been to have the desired patterns represented by very stable states. In the Hopfield model , for instance, the patterns correspond to deep energy minima in a spin glass model. This represents the traditional Hebbian picture where synapses connecting firing neurons are strengthened. Once the desired memory has been encoded, it is hard to adapt to a new situation when the environment changes, because the deep minima have to be removed by a dynamical process. Traditional models are sub-critical. Moreover, the learning process takes place by having an external teacher, a computer algorithm that sets the strengths of the neural network connections. It is hard to see how this can be accomplished without the intervention of an external agent. The learning process of the neural network is not self-organized. Chialvo and Bak have suggested an alternative scheme, which at least in principle could act as a paradigm for real brain processes.
### 6.1 Learning from Mistakes
Recall that in the evolution model, criticality, and hence complexity, was achieved by extremal dynamics where the least fit species were weeded out. Chialvo and Bak used a similar mechanism for brain functioning, with the synapses playing the role of the individual species. Whenever a poor result is achieved, all the synapses which fired in the process are democratically punished. However, good behavior is not rewarded at all; the reward system is all stick and no carrot. There is no Hebbian strengthening of successful synapses. While the model is grossly simplified, the features of the model are all biologically plausible.
The topology of the network is not very important. For simplicity, let us consider neurons arranged in the layered network in Fig. 7, where $`K`$ represents the outputs, $`I`$ the inputs and $`J`$ the middle layer. Each input is connected with each neuron in the middle layer which, in turn, is connected with each output neuron, with weights $`W`$ representing the synaptic strengths. The network must learn to connect each input with the proper output (which is pre-determined) for any arbitrary associative mapping. The weights are initially randomised, $`0<W<1`$.
The dynamical process in its entirety is as follows:
An input neuron is chosen. The neuron $`j_m`$ in the middle layer with the largest $`w(j,i)`$ fires. Next, the output neuron $`k_m`$ with the maximum $`w(k,j_m)`$ fires. If the output $`k_m`$ happens to be the desired one, nothing is done; otherwise $`w(k_m,j_m)`$ and $`w(j_m,i)`$ are both depressed by a fixed amount. The iterative application of this rule leads to convergence to any arbitrary input-output mapping. Since there are no further changes once the correct result has been achieved, the proper synapses are only barely stronger than some of the incorrect synapses.
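The rule is simple enough to write out directly. The sketch below is our illustration of it on the three-layer network of Fig. 7; the depression amount `delta`, the function name and the training-step count are placeholders, not values from the original work.

```python
import random

def learn(targets, n_mid=20, n_steps=20_000, delta=0.1, seed=0):
    """Winner-take-all forward pass plus punishment of the active path on
    failure. `targets[i]` is the desired output for input i."""
    rng = random.Random(seed)
    n_in, n_out = len(targets), max(targets) + 1
    w_ji = [[rng.random() for _ in range(n_in)] for _ in range(n_mid)]
    w_kj = [[rng.random() for _ in range(n_mid)] for _ in range(n_out)]
    for _ in range(n_steps):
        i = rng.randrange(n_in)                            # present an input
        j = max(range(n_mid), key=lambda m: w_ji[m][i])    # strongest middle synapse fires
        k = max(range(n_out), key=lambda o: w_kj[o][j])    # strongest output synapse fires
        if k != targets[i]:                                # wrong answer:
            w_ji[j][i] -= delta                            # punish the whole
            w_kj[k][j] -= delta                            # active path
    return w_ji, w_kj
```

Note that correct firings are never rewarded: the rule is, as the text puts it, all stick and no carrot.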
Suppose now that the environment changes, so that a different connection between input and output is correct. The neurons which fire and lead to the previously correct output are now punished, allowing new connections. Eventually that pattern will also quickly be learned.
The reason for quick re-learning (adaptation) is simple. The rule of adaptation assures that synaptic changes only occur at neurons involved in wrong outputs. The landscape of weights is only re-shaped to the point where the new winners barely support the new correct output, with the old pattern only slightly suppressed. Thus, only a slight suppression of a currently active pattern is needed in order to generate new patterns when need be. In particular, re-learning of “old” patterns which have been correct once in the past is fast. This feature can be strengthened if synapses which have never fired when a good result was achieved are punished more than synapses whose firing has previously led to a good result.
The landscape of synaptic strengths in our model after many learning cycles consists of very many values which are very close to those of the active ones, a manifestation of the critical nature of the state. Figure 8 shows a snapshot of the synaptic strengths. The synapse indicated by an arrow is a currently active one, associated with a correct response. Other synapses have strengths located slightly below the critical surface. One can imagine that “thinking” is the process of sifting through, and suppressing, patterns which once have been correct, until a combination leading to a good result is achieved. Bits and pieces of patterns that have previously been successful are utilized. Old memories are located at the same spot where they have always been - they have simply been slightly suppressed by more recent patterns.
The biological plausibility of the schema depends on the realization at the neuronal level of two crucial features:
a) Activity propagates through the strongest connections, i.e. extremal, or winner-take-all, dynamics. This can be fulfilled by a local circuit organisation, known to exist in all cortices, where the firing of other neurons is shut off by lateral inhibitory connections.
b) Depression of synaptic efficacy involves the entire path of firing neurons. A process must exist such that punishment can be relayed long after the neuron has fired, when the response from the outer world to the action is known. Chialvo and Bak conjectured a mechanism of “tagging” synapses for subsequent punishment, or long term depression (LTD), analogous to (but mirroring) recently reported tagging of synapses for long term potentiation (LTP) . The feed-back probably takes place through the limbic system, whose neurons project diffusely to large areas of the brain. One could imagine that this global feed-back signal affects all neurons which have recently fired, causing plastic changes of the synaptic connections. The limbic system is disconnected when dreaming, which could explain why we generally do not remember our dreams. Actually, long-distance, long-term synaptic depression has been directly demonstrated by Fitzsimons et al. in cultured hippocampal neurons from rat embryos.
In addition to giving insight into mechanisms for learning in the brain, the ideas presented here could be useful for artificial learning processes, for instance in adaptable robots. These possibilities are currently being investigated and appear promising.
Historically, many processes that were considered to be examples of directed learning have been shown to be caused by selection. The Lamarckian theory of evolution as a learning process, where useful acquired features are strengthened, was replaced by the Darwinian theory of evolution as a selection process, where the unfit species are weeded out. A similar paradigm shift occurred in immunology through the theory of clonal selection. Ironically, if the philosophy represented by the Chialvo-Bak model is correct, learning in the brain is not a (directed) learning process either. It is also an example of a co-evolutionary selection process where incorrect connections are weakened.
The paradigm of science in the second millennium, reductionism, is insufficient to explain complexity in nature. There appears to be a need for an outside organizing agent who fine-tunes the natural world and puts the building blocks together. We speculate that, instead of this agent, co-evolutionary selection leading to a critical state by removing untenable parts may be the fundamental organizing principle leading to all the possible complexity in the universe.
# Learning Transformation Rules to Find Grammatical Relations

This paper reports on work performed at the MITRE Corporation under the support of the MITRE Sponsored Research Program. Helpful assistance has been given by Yuval Krymolowski, Lynette Hirschman and an anonymous reviewer. Copyright ©1999 The MITRE Corporation. All rights reserved.
## 1 Introduction
An important level of natural language processing is the finding of grammatical relationships such as subject, object, modifier, etc. Such relationships are the objects of study in relational grammar \[Perlmutter (1983)\]. Many systems (e.g., the KERNEL system \[Palmer et al. (1993)\]) use these relationships as an intermediate form when determining the semantics of syntactically parsed text. In the SPARKLE project \[Carroll et al. (1997a)\], grammatical relations form the layer above the phrasal-level in a three layer syntax scheme. Grammatical relationships are often stored in some type of structure like the F-structures of lexical-functional grammar \[Kaplan (1994)\].
Our own interest in grammatical relations is as a semantic basis for information extraction in the Alembic system. The extraction approach we are currently investigating exploits grammatical relations as an intermediary between surface syntactic phrases and propositional semantic interpretations. By directly associating syntactic heads with their arguments and modifiers, we are hoping that these grammatical relations will provide a high degree of generality and reliability to the process of composing semantic representations. This ability to “parse” into a semantic representation is according to Charniak \[Charniak (1997), p. 42\], “the most important task to be tackled now.”
In this paper, we describe a system to learn rules for finding grammatical relationships when just given a partial parse with entities like names, core noun and verb phrases (noun and verb groups) and semi-accurate estimates of the attachments of prepositions and subordinate conjunctions. In our system, the different entities, attachments and relationships are found using rule sequence processors that are cascaded together. Each processor can be thought of as approximating some aspect of the underlying grammar by finite-state transduction.
We present the problem scope of interest to us, as well as the data annotations required to support our investigation. We also present a decision procedure for finding grammatical relationships. In brief, on our training and test set, our procedure achieves 63.6% recall and 77.3% precision, for an f-score of 69.8.
## 2 Phrase Structure and Grammatical Relations
In standard derivational approaches to syntax, starting as early as 1965 \[Chomsky (1965)\], the notion of grammatical relationship is typically parasitic on that of phrase structure. That is to say, the primary vehicles of syntactic analysis are phrase structure trees; grammatical relationships, if they are to be considered at all, are given as a secondary analysis defined in terms of phrase structure. The surface subject of a sentence, for example, is thus no more than the NP attached by the production S $``$ NP VP; i.e., it is the left-most NP daughter of an S node.
The present paper takes an alternate outlook. In our current work, grammatical relationships play a central role, to the extent even of replacing phrase structure as the descriptive vehicle for many syntactic phenomena. To be specific, our approach to syntax operates at two levels: (1) that of core phrases, which are analyzed through standard derivational syntax, and (2) that of argument and modifier attachments, which are analyzed through grammatical relations. These two levels roughly correspond to the top and bottom layers of the three layer syntax annotation scheme in the SPARKLE project \[Carroll et al. (1997a)\].
### 2.1 Core syntactic phrases
In recent years, a consensus of sorts has emerged that postulates some core level of phrase analysis. By this we mean the kind of non-recursive simplifications of the NP and VP that in the literature go by names such as noun/verb groups \[Appelt et al. (1993)\], chunks \[Abney (1996)\], or base NPs \[Ramshaw and Marcus (1995)\].
The common thread between these approaches and ours is to approximate full noun phrases or verb phrases by only parsing their non-recursive core, and thus not attaching modifiers or arguments. For English noun phrases, this amounts to roughly the span between the determiner and the head noun; for English verb phrases, the span runs roughly from the auxiliary to the head verb. We call such simplified syntactic categories groups, and consider in particular, noun, verb, adverb, adjective, and IN groups.<sup>1</sup><sup>1</sup>1In addition, for the noun group, our definition encompasses the named entity task, familiar from information extraction \[Def (1995)\]. Named entities include among others the names of people, places, and organizations, as well as dates, expressions of money, and (in an idiosyncratic extension) titles, job descriptions, and honorifics. An IN group<sup>2</sup><sup>2</sup>2The name comes from the Penn Treebank part-of-speech label for prepositions and subordinate conjunctions. contains a preposition or subordinate conjunction (including wh-words and “that”).
For example, for “I saw the cat that ran.”, we have the following core phrase analysis:
\[I\]<sub>ng</sub> \[saw\]<sub>vg</sub> \[the cat\]<sub>ng</sub> \[that\]<sub>ig</sub> \[ran\]<sub>vg</sub>.
where \[…\]<sub>ng</sub> indicates a noun group, \[…\]<sub>vg</sub> a verb group, and \[…\]<sub>ig</sub> an IN group.
In English and other languages where core phrases (groups) can be analyzed by head-out (island-like) parsing, the group head-words are basically a by-product of the core phrase analysis.
Distinguishing core syntax groups from traditional syntactic phrases (such as NPs) is of interest because it singles out what is usually thought of as easy to parse, and allows that piece of the parsing problem to be addressed by such comparatively simple means as finite-state machines or transformation sequences. What is then left of the parsing problem is the difficult stuff: namely the attachment of prepositional phrases, relative clauses, and other constructs that serve in modification, adjunctive, or argument-passing roles.
### 2.2 Grammatical relations
In the present work, we encode this hard stuff through a small repertoire of grammatical relations. These relations hold directly between constituents, and as such define a graph, with core constituents as nodes in the graph, and relations as labeled arcs. Our previous example, for instance, generates a grammatical relations graph (head words underlined) in which \[saw\] takes \[I\] as its SUBJ and \[the cat\] as its OBJ, while \[the cat\] is in turn the SUBJ of \[ran\] and \[ran\] modifies \[the cat\].
Our grammatical relations effectively replace the recursive $`\overline{X}`$ analysis of traditional phrase structure grammar. In this respect, the approach bears resemblance to a dependency grammar, in that it has no notion of a spanning S node, or of intermediate constituents corresponding to argument and modifier attachments.
One major point of departure from dependency grammar, however, is that these grammatical relation graphs can generally not be reduced to labeled trees. This happens as a result of argument passing, as in
\[Fred\] \[promised\] \[to help\] \[John\]
where \[Fred\] is both the subject of \[promised\] and \[to help\]. This also happens as a result of argument-modifier cycles, as in
\[I\] \[saw\] \[the cat\] \[that\] \[ran\]
where the relationships between \[the cat\] and \[ran\] form a cycle: \[the cat\] has a subject relationship/dependency to \[ran\], and \[ran\] has a modifier dependency to \[the cat\], since \[ran\] helps indicate (modifies) which cat is seen.
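One way to see why no labeled tree suffices is to write the relations out as a flat list of labeled arcs. The encoding below is ours; each triple runs from the dependent, through the relation label, to the head it attaches to. The \[Fred\] example would likewise give \[Fred\] two SUBJ arcs.

```python
# "[I] [saw] [the cat] [that] [ran]" as labeled arcs.
# Triple convention (ours): (dependent, relation, head).
relations = [
    ("I",       "SUBJ", "saw"),
    ("the cat", "OBJ",  "saw"),
    ("the cat", "SUBJ", "ran"),       # argument dependency to "ran" ...
    ("ran",     "MOD",  "the cat"),   # ... and "ran" modifies "the cat":
]                                     # a cycle, so this graph is not a tree
```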
There has been some work at making additions to extract grammatical relationships from a dependency tree structure \[Bröker (1998), Lai and Huang (1998)\] so that one first produces a surface structure dependency tree with a syntactic parse and then extracts grammatical relationships from that tree. In contrast, we skip trying to find a surface structure tree and just proceed to more directly finding the grammatical relationships, which are the relationships of interest to us.
A reason for skipping the tree stage is that extracting grammatical relations from a surface structure tree is often a nontrivial task by itself. For instance, the precise relationship holding between two constituents in a surface structure tree cannot be derived unambiguously from their relative attachments. Contrast, for example “the attack on the military base” with “the attack on March 24”. Both of these have the same underlying surface structure (a PP attached to an NP), but the former encodes the direct object of a verb nominalization, while the latter encodes a time modifier. Also, in a surface structure tree, long-distance dependencies between heads and arguments are not explicitly indicated by attachments between the appropriate parts of the text. For instance in “Fred promised to help John”, no direct attachment exists between the “Fred” in the text and the “help” in the text, despite the fact that the former is the subject of the latter.
For our purposes, we have delineated approximately a dozen head-to-argument relationships as well as a commensurate number of modification relationships. Among the head-to-argument relationships, we have the deep subject and object (SUBJ and OBJ respectively), and also include the surface subject and object of copulas (COP-SUBJ and the various COP-OBJ forms). In addition, we include a number of relationships (e.g., PP-SUBJ, PP-OBJ) for arguments that are mediated by prepositional phrases. An example is \[the attack\] \[on\] \[the military base\], where \[the attack\], a noun group with a verb nominalization, has its object \[the military base\] passed to it via the preposition in \[on\]. Among modifier relationships, we designate both generic modification and some specializations like locational and temporal modification. A complete definition of all the grammatical relations is beyond the scope of this paper, but we give a summary of usage in Table 1. An earlier version of the definitions can be found in our annotation guidelines \[Ferro (1998)\]. The appendix shows some examples of grammatical relationship labeling from our experiments.
Our set of relationships is similar to the set used in the SPARKLE project \[Carroll et al. (1997a)\] \[Carroll et al. (1998a)\]. One difference is that we make many semantically-based distinctions between what SPARKLE calls a modifier, such as time and location modifiers, and the various arguments of event nouns.
### 2.3 Semantic interpretation
A major motivation for this approach is that it supports a direct mapping into semantic interpretations. In our framework, semantic interpretations are given in a neo-Davidsonian propositional logic. Grammatical relations are thus interpreted in terms of mappings and relationships between the constants and variables of the propositional language. For instance, the deep subject relation (SUBJ) maps to the first position of a predicate’s argument list, the deep object (OBJ) to the second such position, and so forth.
Our example sentence, “I saw the cat that ran” thus translates directly to the following:
| Proposition | Comment |
| --- | --- |
| saw(x1 x2) | SUBJ and OBJ relations |
| I(x1) | |
| cat(x2) | |
| ran(x2)=e3 | SUBJ relation |
| | (e3 is the event variable) |
| mod(e3 x2) | MOD relation |
We do not have an explicit level for clauses between our core phrase and grammatical relations levels. However, we do have a set of implicit clauses in that each verb (event) and its arguments can be deemed a base level clause. In our example “I saw the cat that ran”, we have two such base level clauses. “saw” and its arguments form the clause “I saw the cat”. “ran” and its argument form the clause “the cat ran”. Each noun with a possible semantic class of “act” or “process” in Wordnet \[Miller (1990)\] (and that noun’s arguments) can likewise be deemed a base level clause.
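As an illustration of how mechanical this mapping is, here is a sketch (ours, with our own variable-naming scheme) that turns SUBJ/OBJ/MOD arcs, in the triple convention used above, into propositions like those in the table.

```python
def to_propositions(relations):
    """Sketch of the section 2.3 mapping: SUBJ fills a predicate's first
    argument slot, OBJ its second, and MOD becomes a mod(event, entity)
    proposition. Input triples are (dependent, relation, head)."""
    var, args, mods = {}, {}, []
    def v(group):                                # one variable per group
        return var.setdefault(group, f"x{len(var) + 1}")
    for dep, rel, head in relations:
        if rel in ("SUBJ", "OBJ"):
            slot = 0 if rel == "SUBJ" else 1     # SUBJ -> arg 1, OBJ -> arg 2
            args.setdefault(head, ["_", "_"])[slot] = v(dep)
        elif rel == "MOD":
            mods.append((dep, head))
    props = [f"{head}({a} {b})" for head, (a, b) in args.items()]
    props += [f"{g}({x})" for g, x in var.items()]
    props += [f"mod(e_{d} {v(h)})" for d, h in mods]
    return props

# For the arcs of "I saw the cat that ran" this yields, up to naming:
# saw(x1 x2), ran(x2 _), I(x1), the cat(x2), mod(e_ran x2)
```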
## 3 The Processing Model
Our system uses transformation-based error-driven learning to automatically learn rules from training examples \[Brill and Resnik (1994)\].
One first runs the system on a training set, which starts with no grammatical relations marked. This training run moves in iterations, with each iteration producing the next rule that yields the best net gain in the training set (number of matching relationships found minus the number of spurious relationships introduced). On ties, rules with fewer conditions are favored over rules with more conditions. The training run ends when the next rule found produces a net gain below a given threshold.
The rules are then run in the same order on the test set to see how well they do.
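In outline, the training run is a greedy loop over candidate rules. The sketch below is our rendering of that loop, not the actual implementation; the rule interface (a condition count plus an apply function) is assumed for illustration, and the default threshold of four matches section 4.2.

```python
def train(groups, key, candidate_rules, threshold=4):
    """Greedy rule-sequence learning (sketch; interfaces ours). Each
    candidate rule is (n_conditions, apply_fn), where apply_fn maps
    (groups, relations) to a new relation set; `key` holds the
    hand-annotated relations."""
    def score(rels):                              # matches minus spurious
        return len(rels & key) - len(rels - key)
    relations, sequence = set(), []
    while True:
        n_cond, apply_fn = max(
            candidate_rules,
            key=lambda r: (score(r[1](groups, relations)) - score(relations),
                           -r[0]))                # ties favour fewer conditions
        if score(apply_fn(groups, relations)) - score(relations) < threshold:
            break                                 # net gain below threshold: stop
        relations = apply_fn(groups, relations)
        sequence.append((n_cond, apply_fn))       # replayed in order at test time
    return sequence
```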
The rules are condition/action pairs that are tried on each syntax group. The actions in our system are limited to attaching (or unattaching) a relationship of a particular type from the group under consideration to that group’s neighbor a certain number of groups away in a particular direction (left or right). A sample action would be to attach a SUBJ relation from the group under consideration to the group two groups away to the right.
A rule only applies to a syntax group when that group and its neighbors meet the rule’s conditions. Each condition tests the group in a particular position relative to the group under consideration (e.g., two groups away to the left). All tests can be negated. Table 2 shows the possible tests.
A sample rule is when a noun group $`n`$’s
* immediate group to the right has some form of the verb “be” as the head-word,
* immediate group to the left is not an IN group (preposition, wh-word, etc.) and
* $`n`$’s head-word is not an existential “there”
make $`n`$ a SUBJ of the group two groups over to $`n`$’s right.
When applied to the group \[The cat\] (head words are underlined) in the sentence
\[The cat\] \[was\] \[very happy\].
this rule makes \[The cat\] a SUBJect of \[very happy\].
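To suggest how such a condition/action pair might be encoded, here is a sketch in which the group representation, field names, and group-kind labels are our invention for illustration.

```python
from collections import namedtuple

Group = namedtuple("Group", "kind head stem")    # our toy representation

def sample_rule(groups, i, relations):
    """The three-condition sample rule above, as a condition/action pair."""
    g = groups[i]
    if (g.kind == "ng" and i + 2 < len(groups)
            and groups[i + 1].stem == "be"                # head is a form of "be"
            and (i == 0 or groups[i - 1].kind != "ig")    # left neighbour not an IN group
            and g.head.lower() != "there"):               # no existential "there"
        relations.add((i, "SUBJ", i + 2))                 # attach SUBJ two groups right
    return relations

groups = [Group("ng", "cat", "cat"), Group("vg", "was", "be"),
          Group("adjg", "happy", "happy")]
print(sample_rule(groups, 0, set()))                      # {(0, 'SUBJ', 2)}
```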
Searching over the space of possible rules is very computationally expensive. Our system has features to make it easier to perform searching in parallel and to minimize the amount of work that needs to be undone once a rule is selected. With these features, rules that (un)attach different types of relationships or relationships at different distances can be searched independently of each other in parallel.
One feature is that the action of any rule only affects the applicability of rules with either the exact same or opposite action. For example, selecting and running a rule which attaches a MOD relationship to the group that is two groups to the right only can affect the applicability of other rules that either attach or unattach a MOD relationship to the group that is two groups to the right.
Another feature is the use of net gain as a proxy measure during training. The actual measure by which we judge the system’s performance is called an f-score. This f-score is a type of harmonic mean of the precision ($`p`$) and recall ($`r`$) and is given by $`2pr/(p+r)`$. Unfortunately, this measure is nonlinear, and the application of a new rule can alter the effects of all other possible rules on the f-score. To enable the described parallel search to take place, we need a measure in which how a rule affects that measure only depends on other rules with either the exact same or opposite action. The net gain measure has this trait, so we use it as a proxy for the f-score during training.
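For reference, the reported overall score follows directly from this formula:

```python
def f_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

print(round(f_score(0.773, 0.636), 3))   # 0.698, the overall figure of section 4.3
```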
Another way to increase the learning speed is to restrict the number of possible combinations of conditions/constraints or actions to search over. Each rule is automatically limited to only considering one type of syntactic group. Then when searching over possible conditions to add to that rule, the system only needs to consider the parts-of-speech, semantic classes, etc. applicable to that type of group.
Many other restrictions are possible. One can estimate which restrictions to try by making some training and test runs with preliminary data sets and seeing what restrictions seem to have no effect on performance, etc. The restrictions used in our experiments are described below.
## 4 Experiments
### 4.1 The Data
Our data consists of bodies of some elementary school reading comprehension tests. For our purposes, these tests have the advantage of having a fairly predictable size (each body has about 100 relationships and syntax groups) and a consistent style of writing. The tests are also on a wide range of topics, so we avoid a narrow specialized vocabulary. Our training set has 1963 relationships (2153 syntax groups, 3299 words) and our test set has 748 relationships (830 syntax groups, 1151 words).
We prepared the data by first manually removing the headers and the questions at the end for each test. We then manually annotated the remainder for named entities, syntax groups and relationships. As the system reads in our data, it automatically breaks the data into lexemes and sentences, tags the lexemes for part-of-speech and estimates the attachments of prepositions and subordinate conjunctions. The part-of-speech tagging uses a high-performance tagger based on \[Brill (1993)\]. The attachment estimation uses a procedure described in \[Yeh and Vilain (1998)\] when multiple left attachment possibilities exist and four simple rules when no or only one left attachment possibility exists. Previous testing indicates that the estimation procedure is about 75% accurate.
### 4.2 Parameter Settings for Training
As described earlier, a training run uses many parameter settings. Examples include where to look for relationships and to test conditions, the maximum number of constraints allowed in a rule, etc.
Based on the observation that 95% of the relationships are to at most three groups away in the training set, we decided to limit the search for relationships to at most three groups in length. To keep the number of possible constraints down, we disallowed the negations of most tests for the presence of a particular lexeme or lexeme stem.
To help determine many of the settings, we made some preliminary runs using different subsets of our final training set as the preliminary training and test sets. This kept the final test set unexamined during development. From these preliminary runs, we decided to limit a rule to at most three constraints<sup>3</sup><sup>3</sup>3In addition to the constraint on the relationship’s source group type. in order to keep the training time reasonable. We found a number of limitations that help speed up training and seemed to have no effect on the preliminary test runs. A threshold of four was set to end a training run. So training ends when it can no longer find a rule that produces at least a net gain of four in the score. Only syntax groups spanned by the relationship being attached or unattached and those groups’ immediate neighbors were allowed to be mentioned in a rule’s conditions. Each condition testing a head-word had to test a head-word of a different group. Except for the lexemes “of”, “?” and a few determiners like “the”, tests for single lexemes were removed. Also disallowed were negations of tests for the presence of a particular part-of-speech anywhere within a syntax group.
In our preliminary runs, lowering the threshold tended to raise recall and lower precision.
### 4.3 The Results
Training produced a sequence of 95 rules which had 63.6% recall and 77.3% precision for an f-score of 69.8 when run on the test set. In our test set, the key relationships, SUBJ and OBJ, formed the bulk of the relationships (61%). Both recall and precision for both SUBJ and OBJ were above 70%, which pleased us. Because of their relative abundance in the test set, these two relationships also had the most number of errors in absolute terms. Combined, the two accounted for 45% of the recall errors and 66% of the precision errors. In terms of percentages, recall was low for many of the less common relationships, such as generic, time and location modification relationships. In addition, the relative precision was low for those modification relationships. The appendix shows some examples of our system responding to the test set.
To see how well the rules, which were trained on reading comprehension test bodies, would carry over to other texts of non-specialized domains, we examined a set of six broadcast news stories. This set had 525 relationships (585 syntax groups, 1129 words). By some measures, this set was fairly similar to our training and test sets. In all three sets, 33–34% of the relationships were OBJ and 26–28% were SUBJ. The broadcast news set did tend to have relationships between groups that were slightly further apart:
Percent of relations with length at most $`n`$:

| Set | $`n=1`$ | $`n=2`$ | $`n=3`$ |
| --- | --- | --- | --- |
| training | 66% | 87% | 95% |
| test | 68% | 89% | 96% |
| broadcast news | 65% | 84% | 90% |
This tendency, plus differences in the relative proportions of various modification relationships are probably what produced the drop in results when we tested the rules against this news set: recall at 54.6%, precision at 70.5% (f-score at 61.6%).
To estimate how fast the results would improve by adding more training data, we had the system learn rules on a new smaller training set and then tested against the regular test set. Recall dropped to 57.8%, precision to 76.2%. The smaller training set had 981 relationships (50% of the original training set). So doubling the training data here (going from the smaller to the regular training set) reduced the smaller training set’s recall error of 42.2% by 14% and the precision error of 23.8% by 5%. Using the broadcast news set as a test produced similar error reduction results.
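These figures can be checked directly from the quoted scores: the recall error falls from $`100-57.8=42.2\%`$ to $`100-63.6=36.4\%`$, a relative reduction of $`(42.2-36.4)/42.2\approx 13.7\%`$, while the precision error falls from $`100-76.2=23.8\%`$ to $`100-77.3=22.7\%`$, a relative reduction of $`(23.8-22.7)/23.8\approx 4.6\%`$.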
One complication of our current scoring scheme is that identifying a modification relationship and mis-typing it is more harshly penalized than not finding a modification relationship at all. For example, finding a modification relationship, but mistakenly calling it a generic modifier instead of a time modifier produces both a missed key error (not finding a time modifier) and a spurious response error (responding with a generic modifier where none exists). Not finding that modification relationship at all just produces a missed key error (not finding a time modifier). This complication, coupled with the fact that generic, time and location modifiers often have a similar surface appearance (all are often headed by a preposition or a complementizer) may have been responsible for the low recall and precision scores for these types of modifiers. Even the training scores for these types of modifiers were particularly low. To test how well our system finds these three types of modification when one does not care about specifying the sub-type, we reran the original training and test with the three sub-types merged into one sub-type in the annotation. With the merging, recall of these modification relationships jumped from 27.8% to 48.9%. Precision rose from 52.1% to 67.7%. Since these modification relationships are only about 20% of all the relationships, the overall improvement is more modest. Recall rises to 67.7%, precision to 78.6% (f-score to 72.6).
Taking this one step further, the LOC-OBJ and various PP-$`x`$ arguments also all have both a low recall (below 35%) in the test and a similar surface structure to that of generic, time and location modifiers. When these argument types were merged with the three modifier types into one combined type, their combined recall was 60.4% and precision was 81.1%. The corresponding overall test recall and precision were 70.7% and 80.5%, respectively.
## 5 Comparison with Other Work
At one level, computing grammatical relationships can be seen as a parsing task, and the question naturally arises as to how well this approach compares to current state-of-the-art parsers. Direct performance comparisons, however, are elusive, since parsers are evaluated on an incommensurate tree bracketing task. For example, the SPARKLE project \[Carroll et al. (1997a)\] puts tree bracketing and grammatical relations in two different layers of syntax. Even if we disregard the questionable aspects of comparing tree bracketing apples to grammatical relation oranges, an additional complication is the fact that our approach divides the parsing task into an easy piece (core phrase boundaries) and a hard one (grammatical relations). The results we have presented here are given solely for this harder part, which may explain why at roughly 70 points of f-score, they are lower than those reported for current state-of-the-art parsers (e.g., Collins \[Collins (1997)\]).
More comparable to our approach are some other grammatical relation finders. Some examples for English include the English parser used in the SPARKLE project \[Carroll et al. (1997b)\] \[Carroll et al. (1998b)\] and the finder built with a memory-based approach \[Argamon et al. (1998)\]. These relation finders make use of large annotated training data sets and/or manually generated grammars and rules. Both techniques take much effort and time. At first glance both of these finders perform better than our approach. Except for the object precision score of 77% in \[Argamon et al. (1998)\], both finders have grammatical relation recall and precision scores in the 80s. But a closer examination reveals that these results are not quite comparable with ours.
1. Each system is recovering a different variation of grammatical relations. As mentioned earlier, one difference between us and the SPARKLE project is that the latter ignores many of the distinctions that we make for different types of modifiers. The system in \[Argamon et al. (1998)\] only finds a subset of the surface subjects and objects.
2. In addition, the evaluations of these two finders produced more complications. In an illustration of the time consuming nature of annotating or reannotating a large corpus, the SPARKLE project originally did not have time to annotate the English test data for modifier relationships. As a result, the SPARKLE English parser was originally not evaluated on how well it found modifier relationships \[Carroll et al. (1997b)\] \[Carroll et al. (1998b)\]. The reported results as of 1998 only apply to the argument (subject, object, etc.) relationships. Later on, a test corpus with modifier relationship annotation was produced. Testing the parser against this corpus produced generally lower results, with an overall recall, precision and f-score of 75% \[Carroll et al. (1999)\]. This is still better than our f-score of 70%, but not by nearly as much. This comparison ignores the fact that the results are for different versions of grammatical relationships and for different test corpora.
The figures given above were the original (1998) results for the system in \[Argamon et al. (1998)\], which came from training and testing on data derived from the Penn Treebank corpus \[Marcus et al. (1993)\] in which the added null elements (like null subjects) were left in. These null elements, which were given a -NONE- part-of-speech, do not appear in raw text. Later (1999 results), the system was re-evaluated on the data with the added null elements removed. The subject results declined a little. The object results declined more, with the precision now lower than ours (73.6% versus 80.3%) and the f-score not much higher (80.6% versus 77.8%). This comparison is also between results with different test corpora and slightly different notions of what an object is.
## 6 Summary, Discussion, and Speculation
In this paper, we have presented a system for finding grammatical relationships that operates on easy-to-find constructs like noun groups. The approach is guided by a variety of knowledge sources, such as readily available lexica<sup>4</sup><sup>4</sup>4Resources to find a word’s possible stem(s), semantic class(es) and subcategorization category(ies)., and relies to some degree on well-understood computational infrastructure: a p-o-s tagger and an attachment procedure for preposition and subordinate conjunctions. In sample text, our system achieves 63.6% recall and 77.3% precision (f-score = 69.8) on our repertory of grammatical relationships.
This work is admittedly still in relatively early stages. Our training and test corpora, for instance, are less-than-gargantuan compared to such collections as the Penn Treebank \[Marcus et al. (1993)\]. However, the fact that we have obtained an f-score of 70 from such sparse training materials is encouraging. The recent implementation of rapid annotation tools should speed up further annotation of our own native corpus.
Another task that awaits us is a careful measurement of interannotator agreement on our version of the grammatical relationships.
We are also keenly interested in applying a wider range of learning procedures to the task of identifying these grammatical relations. Indeed, a fine-grained analysis of our development test data has identified some recurring errors related to the rule sequence approach. A hypothesis for further experimentation is that these errors might productively be addressed by revisiting the way we exploit and learn rule sequences, or by some hybrid approach blending rules and statistical computations. In addition, since generic, time and location modifiers, and LOC-OBJ and various PP-$`x`$ arguments often have a similar surface appearance, one might first just try to locate all such entities and then in a later phase try to classify them by type.
Different applications will need to deal with different styles of text (e.g., journalistic text versus narratives) and different standards of grammatical relationships. An additional item of experimentation is to use our system to adapt other systems, including earlier versions of our system, to these differing styles and standards.
Like other Brill transformation rule systems \[Brill and Resnik (1994)\], our system can take in the output of another system and try to improve on it. This suggests a relatively low expense method to adapt a hard-to-alter system that performs well on a slightly different style or standard. Our training approach accepts as a starting point an initial labeling of the data. So far, we have used an empty labeling. However, our system could just as easily start from a labeling produced as the output of the hard-to-alter system. The learning would then not be reducing the error between an empty labeling and the key annotations, but between the hard-to-alter system’s output and the key annotations. By using our system in this post-processing manner, we could use a relatively small retraining set to adapt, for example, the SPARKLE English parser, to our standard of grammatical relationships without having to reengineer that parser. Palmer \[Palmer (1997)\] used a similar approach to improve on existing word segmenters for Chinese. Trying this suggestion out is also something for us to do.
This discussion of training set size brings up perhaps the most obvious possible improvement. Namely, enlarging our very small training set. As has been mentioned, we have recently improved our annotation environment and look forward to working with more data.
Clearly we have many experiments ahead of us. But we believe that the results obtained so far are a promising start, and the potential rewards of the approach are very significant indeed.
## Appendix A Appendix: Examples from Test Results
Figure 1 shows some example sentences from the test results of our main experiment.<sup>5</sup><sup>5</sup>5The material came from level 2 of “The 5 W’s” written by Linda Miller. It is available from Remedia Publications, 10135 E. Via Linda #D124, Scottsdale, AZ 85258, USA. @ marks the relationship that our system missed. * marks the relationship that our system wrongly hypothesized. In these examples, our system handled a number of phenomena correctly, including:
* The coordination conjunction of the objects
\[cars\] and \[trucks\]
* The verb group \[might have\] being an object of another verb.
* The noun group \[He\] being the subject of two verbs.
* The relationships within the reduced relative clause
\[A man\] \[named\] \[Noah\], which makes one noun group a name or label for another noun group.
Our system misses a PP-OBJ relationship, which is a low-occurrence relationship. Our system also accidentally makes both \[A man\] and \[Noah\] subjects of the group \[wrote\] when only the former should be.
## 1 Introduction
Whereas it is well known that gas may be driven into the core of merging disks, fueling a central AGN or a nuclear starburst, recent studies have shown that a significant fraction of the stellar/gaseous components is expelled into the intergalactic medium along tidal tails (e.g., Hibbard & van Gorkom, 1996; Duc et al., 1997). The tidal debris might be dispersed in the intergalactic/intracluster medium, where it adds to the diffuse background light such as that observed in the Coma cluster (Gregg & West, 1998), might fall back towards the merger, or might regroup to form a new generation of galaxies, the so-called tidal dwarf galaxies (TDGs). We illustrate in this paper these various phenomena with a detailed multi-wavelength study of the interacting system NGC 2992/3 (Arp 245).
## 2 Observations and simulations
NGC 2992/3 (Arp 245) is a nearby system composed of two interacting spiral galaxies: NGC 2992, an edge-on Sa with a Seyfert 2 nucleus, and NGC 2993, a face-on LINER. Table 1 summarizes our observing campaign of Arp 245. An optical image shows, in addition to well-defined stellar tidal tails which emanate from each disk to the North and East respectively, a more diffuse 30–kpc long bridge. At 21 cm, the neutral gaseous component shows a stunning morphology (see Fig. 1a). The HI VLA map exhibits a long gaseous tail escaping from NGC 2992 with a massive clump at its tip, a bridge between the two galaxies with an orientation slightly different than in the optical, and a large ring-like structure emanating from NGC 2993. The latter feature has a shape similar to the two other intergalactic HI rings known so far: the ring in Leo, in the M96 group (Schneider et al., 1989) and the ring near NGC 5291 (Malphrus et al., 1997; Duc & Mirabel, 1998). These two rings are however much larger. Towards the nucleus of NGC 2992, HI is seen in absorption (Fig. 1b). Our H$`\alpha `$ map (Fig. 2a) which traces the ionized gas shows emission line regions in the disk of NGC 2993, in the inner regions of NGC 2992 as well as in the outer regions along filaments and at the tip of the northern tail.
We could reproduce the overall morphology of the system using N-body + SPH numerical simulations of the collision. They indicate that the system is observed in the early stage of the interaction, just after the galaxies passed each other for the first time, about 100 Myr after perigalacticon. The end-product of the collision will be a complete merger. The time scales provided by the numerical simulations are useful to date the various phenomena observed in NGC 2992/3 and discussed below.
## 3 Inflow
Galaxy collisions are efficient in driving gas towards the central regions. The mechanism involves the loss of angular momentum and gas transfer towards the central regions, possibly via a bar. This gas will then fuel a starburst or an AGN. Suggestions of gas inflow in Arp 245 come from the analysis of the HI spectrum towards the nucleus of NGC 2992. The line, seen in absorption, shows an asymmetry redwards of the systemic velocity which is indicative of foreground material falling towards the nucleus (see Fig. 1b). Whereas there is no evidence for the presence of a nuclear starburst in NGC 2992, strong Seyfert–2 type nuclear activity is observed. Indeed, Seyfert–2 galaxies show a statistical excess of large companions with respect to nonactive disk galaxies (e.g., Dultzin-Hacyan et al., 1999). One should note that the study of the inflow in NGC 2992 is largely hampered by the strong optical obscuration due to a prominent dust lane.
## 4 Outflows
Two kinds of outflows are observed in Arp 245, in the form of either small ionization filaments or large tidal tails. These outflows differ in location and nature, but both have been directly or indirectly triggered by the interaction.
An ionization cone is found in the inner regions of NGC 2992, whereas at larger distances numerous extended emission line regions are observed escaping up to 10 kpc from the inclined disk of NGC 2992. They form narrow filaments that have relative radial velocities with respect to the surrounding HI gas as high as 500 km/s (Marquez et al., 1998; Duc et al., in preparation). Their optical spectra (see Fig. 2b) are consistent with a power-law type ionizing radiation source and a high ionization parameter reaching log(U)=-2. Outflowing gas could be illuminated by a strong UV nuclear radiation. Other models invoke a strong local heating via shocks (Allen et al., 1999) or gas dragged along by nuclear plasmoid jets.
The material currently observed in the intergalactic medium, i.e., along and at the tip of the bridge/tail/ring, was pulled out from the parent galaxies by tidal forces. The atomic hydrogen present in these tidal features accounts for almost half of the total HI mass in the system ($`5.4\times 10^9`$ M<sub>⊙</sub>). Such a distribution is typical of interacting systems that lose a significant fraction of their gas in the IGM. Part of it might eventually fall back towards the parent galaxies or be tidally disrupted. The gas clouds that are most likely to survive are those which are expelled to large distances and are dense enough to become gravitationally bound, collapse and form stars. An example of such an object, known as a “Tidal Dwarf Galaxy”, is found at the tip of the northern tidal tail which emanates from NGC 2992. At that location there is a $`10^9`$ M<sub>⊙</sub> HI condensation associated with emission-line regions (see Fig. 2a). Their optical spectra are typical of HII regions ionized by young massive stars (see Fig. 2b, up). The star-formation rate deduced from the H$`\alpha `$ luminosity is as high as 0.5 M<sub>⊙</sub> yr<sup>-1</sup>. Nevertheless, broad band optical+near-infrared photometry of the tidal dwarf indicates that its stellar population is still dominated at this stage of the collision by the old stars pulled out from the disk of the parent galaxy. No star-forming regions appear to be associated with the HI ring south of NGC 2993. One reason for that is that the HI there is not dense enough to have collapsed, contrary to the tail of NGC 2992 or to the HI ring associated with NGC 5291, which hosts more than 20 TDGs (Duc & Mirabel, 1998). A critical HI column density, as high as $`10^{21}`$ cm<sup>-2</sup> in the case of Arp 245, is necessary for the onset of star formation.
From spectra of the HII regions, we could estimate an oxygen abundance of about solar for the TDG. This is higher than the typical metallicity of TDGs (1/3 solar). TDGs are themselves more metal rich than classical dwarf galaxies of the same luminosity (Duc & Mirabel, 1998) due to the fact that they are recycled objects composed of already substantially enriched material. In–situ enrichment in the TDG is unlikely given the constraint on the time scale from the numerical simulations – the interaction in Arp 245 is younger than 100 Myr. This implies that the material in the outer parts of NGC 2992, which probably went into forming the TDG, was much more metal rich than gas in the outskirts of typical spirals, where abundances of 1/3 solar are usually measured. A large scale enrichment could have been induced by the nuclear outflows evidenced by the ionization filaments. Alternatively, the interaction may have directly dragged into the tidal tail material which originally was close to the nucleus, where solar metallicities are expected.
In any case, the inflow towards the nucleus and the outflows as exhibited by the ionization filaments and the tidal tails appear to be closely linked phenomena in Arp 245. They have either directly or indirectly been triggered by the collision.
|
no-problem/9906/hep-ph9906373.html
|
ar5iv
|
text
|
Figure 1: The diagram representation of the inclusive spectrum (5) (a), and (6) (b).
Talk given at 34th Rencontres de Moriond
“QCD and High Energy Hadronic Interactions”,
Les Arcs, France, March 20–27, 1999
TRANSVERSE SPECTRA OF INDUCED RADIATION
B.G. Zakharov
Landau Institute for Theoretical Physics, GSP-1, 117940,
Kosygina Str. 2, 117334 Moscow, Russia
Abstract
Transverse spectra of induced radiation are discussed within the light-cone path integral approach to the LPM effect. The results are applicable in both QED and QCD.
Recently the Landau-Pomeranchuk-Migdal (LPM) effect in induced radiation in QED and QCD has attracted much attention (see review by Klein and references therein). Understanding the LPM effect in QCD is of great importance for evaluation of parton energy loss in nuclei and a hot QCD medium . The case of hot QCD medium is especially interesting in view of the experiments on $`AA`$-collisions at RHIC and LHC.
In Ref. I have developed a new rigorous light-cone path integral approach to the LPM effect. There I have discussed the $`p_{\perp }`$-integrated spectra. In this talk I discuss the transverse spectra of induced radiation. Similarly to Ref. the results are applicable in both QED and QCD. For simplicity I describe the formalism for an induced $`abc`$ transition in QED for scalar particles with an interaction Lagrangian $`L_{int}=\lambda [\widehat{\psi }_b^+\widehat{\psi }_c^+\widehat{\psi }_a+\widehat{\psi }_b\widehat{\psi }_c\widehat{\psi }_a^+]`$. The corresponding $`S`$-matrix element reads
$$\langle bc|\widehat{S}|a\rangle =i\int 𝑑t𝑑\text{r}\,\lambda \psi _b^{*}(t,\text{r})\psi _c^{*}(t,\text{r})\psi _a(t,\text{r}),$$
(1)
where $`\psi _i`$ are the wavefunctions (ingoing for $`i=a`$ and outgoing for $`i=b,c`$). I normalize the flux to unity at $`z=-\infty `$ for $`i=a`$ and at $`z=+\infty `$ for $`i=b,c`$, and write $`\psi _i`$ as
$$\psi _i(t,\text{r})=\frac{1}{\sqrt{2E_i}}\mathrm{exp}[-i(t-z)p_{i,z}]\varphi _i(t,\text{r}).$$
(2)
In the high energy limit, $`E_i\gg m_i`$, the dependence of $`\varphi _i`$ on the variable $`\tau =(t+z)/2`$ at $`t-z=`$ const is governed by the two-dimensional Schrödinger equation
$$i\frac{\partial \varphi _i}{\partial \tau }=H_i\varphi _i,$$
(3)
$$H_i=-\frac{\mathrm{\Delta }_{\perp }}{2\mu _i}+e_iA^0+\frac{m_i^2}{2\mu _i},$$
(4)
where $`\mu _i=p_{i,z}`$, $`e_i`$ is the electric charge, $`A^0`$ is the potential of the target.
After some algebra from (1), (2) one can obtain in the high energy limit the following expression for the inclusive probability of induced radiation
$$\frac{d^5P}{dxd\text{q}_bd\text{q}_c}=\frac{2}{(2\pi )^4}\text{Re}\underset{z_1<z_2}{\int }𝑑𝝆_1𝑑𝝆_2𝑑z_1𝑑z_2\,g\,\langle F(z_1,𝝆_1)F^{*}(z_2,𝝆_2)\rangle ,$$
(5)
where $`\text{q}_{b,c}`$ are the transverse momenta, $`𝝆`$ is the transverse coordinate, $`x=p_{b,z}/p_{a,z}`$ (note that for the particle $`c`$ $`p_{c,z}=(1-x)p_{a,z}`$), $`g=\lambda ^2/[16\pi x(1-x)E_a^2]`$, $`\langle \mathrm{}\rangle `$ means averaging over the states of the target, $`F(z,𝝆)=\varphi _b^{*}(t,\text{r})\varphi _c^{*}(t,\text{r})\varphi _a(t,\text{r})|_{t=z}`$. Since the wavefunctions enter (5) only at $`t=z`$, $`\varphi _i`$ can be regarded as functions of $`z`$, and $`𝝆`$. In the Schrödinger equation (3) $`z`$ will play the role of time. I represent the $`z`$-dependence of $`\varphi _i`$ in terms of the Green’s function, $`K_i`$, of the Hamiltonian (4). Then, in the diagram language (5) is described by the graph of Fig. 1a. I depict $`K_i`$ ($`K_i^{*}`$) by $``$ ($``$). The dotted line shows the transverse density matrices at large longitudinal distances in front of ($`z=z_i`$) and behind ($`z=z_f`$) the target.<sup>1</sup><sup>1</sup>1Strictly speaking, in (1), (5) the adiabatically vanishing at $`|z|\gg |z_{i,f}|`$ coupling should be used. For simplicity I do not indicate the coordinate dependence of the coupling. If the particle $`a`$ is produced in a hard reaction, and does not propagate from infinity, then $`z_i`$ equals the coordinate of the production point.
Below I will consider the radiation rate integrated over $`\text{q}_c`$. In this case the graph of Fig. 1a is transformed into the one of Fig. 1b. The corresponding analytical expression reads
$`{\displaystyle \frac{d^3P}{dxd\text{q}_b}}={\displaystyle \frac{2}{(2\pi )^2}}\text{Re}{\displaystyle \underset{z_i}{\overset{z_f}{\int }}}𝑑z_1{\displaystyle \underset{z_1}{\overset{z_f}{\int }}}𝑑z_2{\displaystyle \int 𝑑𝝆_{b,f}𝑑𝝆_{b,f}^{^{}}𝑑𝝆_b𝑑𝝆_b^{^{}}𝑑𝝆_a𝑑𝝆_a^{^{}}𝑑𝝆_{a,i}𝑑𝝆_{a,i}^{^{}}g\mathrm{exp}[i\text{q}_b(𝝆_{b,f}-𝝆_{b,f}^{^{}})]}`$
$`\times S_b(𝝆_{b,f},𝝆_{b,f}^{^{}},z_f|𝝆_b,𝝆_b^{^{}},z_2)M(𝝆_b,𝝆_b^{^{}},z_2|𝝆_a,𝝆_a^{^{}},z_1)S_a(𝝆_a,𝝆_a^{^{}},z_1|𝝆_{a,i},𝝆_{a,i}^{^{}},z_i),`$ (6)
$$S_i(𝝆_2,𝝆_2^{^{}},z_2|𝝆_1,𝝆_1^{^{}},z_1)=K_i(𝝆_2,z_2|𝝆_1,z_1)K_i^{*}(𝝆_2^{^{}},z_2|𝝆_1^{^{}},z_1),$$
(7)
$$M(𝝆_2,𝝆_2^{^{}},z_2|𝝆_1,𝝆_1^{^{}},z_1)=K_b(𝝆_2,z_2|𝝆_1,z_1)K_c(𝝆_2^{^{}},z_2|𝝆_1,z_1)K_a^{*}(𝝆_2^{^{}},z_2|𝝆_1^{^{}},z_1).$$
(8)
Using the path integral representation for the Green’s functions one can evaluate analytically the initial- and final-state interaction factors $`S_{a,b}`$ . The factor $`M`$ (8) differs from that of Ref. by the replacement of $`𝝆_2^{^{}}`$ by $`𝝆_2`$ in the Green’s function $`K_c`$. Similar to that of Ref. it can be expressed through the Green’s function $`K_{bc}`$ describing the relative motion of the particles $`b`$ and $`c`$ in a fictitious $`\overline{a}bc`$ system. After analytical integration over the center-of-mass transverse coordinates the radiation rate takes the form
$`{\displaystyle \frac{d^3P}{dxd\text{q}_b}}={\displaystyle \frac{2}{(2\pi )^2}}\text{Re}{\displaystyle \underset{z_i}{\overset{z_f}{\int }}}𝑑z_1{\displaystyle \underset{z_1}{\overset{z_f}{\int }}}𝑑z_2{\displaystyle \int 𝑑𝝉_b\,g\,\mathrm{exp}(i\text{q}_b𝝉_b)}`$
$`\times \mathrm{\Phi }_b(𝝉_b,z_2)\mathrm{exp}\left[{\displaystyle \frac{i(z_1-z_2)}{L_f}}\right]K_{bc}(𝝉_b,z_2|0,z_1)\mathrm{\Phi }_a(𝝉_a,z_1),`$ (9)
where
$$\mathrm{\Phi }_a(𝝉_a,z_1)=\mathrm{exp}\left[-\frac{\sigma _{a\overline{a}}(𝝉_a)}{2}\underset{z_i}{\overset{z_1}{\int }}𝑑zn(z)\right],\mathrm{\Phi }_b(𝝉_b,z_2)=\mathrm{exp}\left[-\frac{\sigma _{b\overline{b}}(𝝉_b)}{2}\underset{z_2}{\overset{z_f}{\int }}𝑑zn(z)\right]$$
(10)
are the eikonal initial- and final-state absorption factors,<sup>2</sup><sup>2</sup>2 I emphasize that the appearance of the eikonal absorption factors in (9) is a nontrivial consequence of the specific form of the evolution operators $`S_{a,b}`$ , and is not connected with applicability of the eikonal approximation. $`𝝉_a=x𝝉_b`$, $`L_f=2E_ax(1-x)/[m_b^2(1-x)+m_c^2x-m_a^2x(1-x)]`$. The Hamiltonian for the Green’s function $`K_{bc}`$ reads
$$H_{bc}=-\frac{\mathrm{\Delta }_{\perp }}{2\mu _{bc}}-\frac{in(z)\sigma _{\overline{a}bc}(𝝉_{bc},𝝉_{ab})}{2},$$
(11)
where $`\mu _{bc}=E_ax(1-x)`$, $`𝝉_{ab}=[𝝉_a+𝝉_{bc}(1-x)]`$. In (10), (11) $`n(z)`$ is the number density of the target, $`\sigma _{a\overline{a}}`$ and $`\sigma _{b\overline{b}}`$ are the dipole cross sections of interaction with the medium constituent of $`a\overline{a}`$ and $`b\overline{b}`$ pairs, and $`\sigma _{\overline{a}bc}`$ is the three-body cross section for the $`\overline{a}bc`$-system.
The integration over $`\text{q}_b`$ in (9) gives the $`x`$-spectrum
$`{\displaystyle \frac{dP}{dx}}=2\text{Re}{\displaystyle \underset{z_i}{\overset{z_f}{\int }}}𝑑z_1{\displaystyle \underset{z_1}{\overset{z_f}{\int }}}𝑑z_2\,g\,\mathrm{exp}\left[{\displaystyle \frac{i(z_1-z_2)}{L_f}}\right]K_{bc}(0,z_2|0,z_1).`$ (12)
In Ref. I have derived the $`p_T`$-integrated radiation rate using the unitarity connection between the probability of the $`abc`$ transition and the radiative correction to the $`aa`$ transition. The latter is described by the diagram of Fig. 2a, which can be transformed into the graph of Fig. 2b, corresponding to the integral in (12). <sup>3</sup><sup>3</sup>3 The graph of Fig. 2b (and the integral in (12)) in itself requires subtracting of the infinite vacuum counter term. The vacuum term has an imaginary part connected with the correction to $`m_a`$, which is proportional to $`(z_f-z_i)`$, and a real part related to the wavefunction renormalization. The latter appears after separating the mass term, and is connected with the configurations $`z_1<z_f<z_2`$. This boundary effect is absent if the coupling vanishes at large $`|z|`$. Evidently, in this case the vacuum term does not affect the $`x`$-spectrum. Nonetheless, it is convenient, as was done in Ref. , to keep the vacuum term to simplify the singular $`z`$-integration in (12). One can easily show that the diagram of Fig. 2b can also be obtained directly from that of Fig. 1b after integration over $`\text{q}_b`$.
Equation (9) establishes the theoretical basis for evaluation of the $`p_T`$-dependence of the LPM effect. Note that for a transition with the formation length much greater than the target thickness (for the particle $`a`$ incident from infinity) (9) can be expressed through the light-cone wave function $`\mathrm{\Psi }_a^{bc}`$ as
$`{\displaystyle \frac{d^3P}{dxd\text{q}_b}}={\displaystyle \frac{2}{(2\pi )^2}}{\displaystyle \int 𝑑𝝉𝑑𝝉^{^{}}\mathrm{exp}(i\text{q}_b𝝉^{^{}})\mathrm{\Psi }_a^{bc*}(x,𝝉-𝝉^{^{}})\mathrm{\Gamma }_{\overline{a}bc}(𝝉_{bc},𝝉_{ab})\mathrm{\Psi }_a^{bc}(x,𝝉)}.`$ (13)
Here, $`𝝉_{bc}=𝝉`$, $`𝝉_{ab}=[𝝉(1-x)+𝝉^{^{}}x]`$, $`\mathrm{\Gamma }_{\overline{a}bc}=\left\{1-\mathrm{exp}\left[-\frac{\sigma _{\overline{a}bc}}{2}\int 𝑑zn(z)\right]\right\}`$ is the Glauber profile function for interaction of the $`\overline{a}bc`$ state with the target. The derivation of (13) is based on a connection between the Green’s function $`K_{bc}`$ in vacuum and $`\mathrm{\Psi }_a^{bc}`$ . Equation (13) generalizes the formula for the $`p_T`$-integrated spectrum of Ref. . It is of interest in its own right. In particular, the leading term in $`n(z)`$ of the rhs in (13) gives a convenient formula for evaluation of the Bethe-Heitler cross section through the light-cone wavefunction.
In the general case one can estimate the inclusive cross section using the parametrization $`\sigma _{\overline{a}bc}=C_{ab}𝝉_{ab}^2+C_{bc}𝝉_{bc}^2+C_{ca}𝝉_{ca}^2`$ (here $`𝝉_{ca}=-(𝝉_{ab}+𝝉_{bc})`$). Then the Hamiltonian (11) takes the oscillator form with the frequency $`\mathrm{\Omega }(z)=\frac{(1-i)}{\sqrt{2}}\left[\frac{n(z)C(x)}{E_ax(1-x)}\right]^{1/2},`$ with $`C(x)=C_{ab}(1-x)^2+C_{bc}+C_{ca}x^2`$. The Green’s function for the oscillator Hamiltonian can be written in the form
$$K_{osc}(𝝉_2,z_2|𝝉_1,z_1)=\frac{\gamma (z_1,z_2)}{2\pi i}\mathrm{exp}\left\{i\left[\alpha (z_1,z_2)𝝉_2^2+\beta (z_1,z_2)𝝉_1^2-\gamma (z_1,z_2)𝝉_1𝝉_2\right]\right\},$$
(14)
where the functions $`\alpha `$, $`\beta `$ and $`\gamma `$ can be evaluated in the approach of Ref. . Using the parametrization $`\sigma _{\overline{i}i}=C_{ii}𝝉_i^2`$ one can obtain
$`{\displaystyle \frac{d^3P}{dxd\text{q}_b}}={\displaystyle \frac{1}{(2\pi )^2}}\text{Re}{\displaystyle \underset{z_i}{\overset{z_f}{\int }}}𝑑z_1{\displaystyle \underset{z_1}{\overset{z_f}{\int }}}𝑑z_2\,g\,{\displaystyle \frac{\gamma (z_1,z_2)}{Q(z_1,z_2)}}\mathrm{exp}\left[-{\displaystyle \frac{i\text{q}_b^2}{4Q(z_1,z_2)}}+{\displaystyle \frac{i(z_1-z_2)}{L_f}}\right],`$ (15)
where the factor $`Q(z_1,z_2)`$ can be expressed through the parameters $`C_{ij}`$, the functions $`\alpha `$, $`\beta `$, $`\gamma `$, $`n`$, and $`\mathrm{\Omega }`$. The formula for this factor is too cumbersome to be presented here.
The generalization of the above results to the realistic QED and QCD Lagrangians reduces to a trivial replacement of the two- and three-body cross sections, and of the vertex factor $`g`$. The latter, due to spin effects in the vertex $`abc`$, becomes an operator. The corresponding formulas are given in Refs. . The results obtained can be applied to many problems. In particular, in QCD this approach can be used for evaluation of high-$`p_T`$ hadron spectra, the $`p_T`$-dependence of DY pairs and heavy quark production in $`hA`$-collisions, and the angular dependence of the parton energy loss in hot QCD matter produced in $`AA`$-collisions. It is also of interest for studying the initial conditions for the quark-gluon plasma in $`AA`$-collisions.
I would like to thank N.N. Nikolaev and D. Schiff for discussions. I am grateful to J. Speth for the hospitality at FZJ, Jülich, where this work was completed. This work was partially supported by the INTAS grant 96-0597.
|
no-problem/9906/cond-mat9906213.html
|
ar5iv
|
text
|
# Molecular dynamics study of a classical two-dimensional electron system: Positional and orientational orders
## 1 Introduction
More than 60 years ago, Wigner pointed out that an electron system will crystallize due to the Coulomb repulsion for low enough densities (Wigner crystallization) . Although quantum effects play an essential role in a degenerate electron system, the concept of Wigner crystallization can be generalized to a classical case where the Fermi energy is much smaller than the thermal energy. A classical two-dimensional (2D) electron system is wholly specified by the dimensionless coupling constant $`\mathrm{\Gamma }`$, the ratio of the Coulomb energy to the kinetic energy. Here $`\mathrm{\Gamma }\equiv (e^2/4\pi ϵa)/k_BT`$, where $`e`$ is the charge of an electron, $`ϵ`$ the dielectric constant of the substrate, $`a`$ the mean distance between electrons and $`T`$ the temperature. For $`\mathrm{\Gamma }\ll 1`$ the system will behave as a gas while for $`\mathrm{\Gamma }\gg 1`$ as a solid. Experimentally, Grimes and Adams succeeded in observing a transition from a liquid to a triangular lattice in a classical 2D electron system on a liquid-helium surface around $`\mathrm{\Gamma }_c=137\pm 15`$, which is in good agreement with numerical simulations .
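For orientation, a short numerical sketch (not part of the original study) evaluates $`\mathrm{\Gamma }`$ directly from its definition. The convention $`a=(\pi n)^{-1/2}`$ for the mean inter-electron distance at areal density $`n`$ is an assumption, and the example density and temperature are merely representative of the electrons-on-helium regime:

```python
# Minimal sketch: coupling constant Gamma = (e^2 / 4 pi eps a) / (k_B T).
import math

E_CHARGE = 1.602176634e-19    # C
EPS0     = 8.8541878128e-12   # F/m
K_B      = 1.380649e-23       # J/K

def coupling_constant(n_per_m2, T_kelvin, eps_r=1.0):
    """Coulomb energy over thermal energy for areal density n and temperature T."""
    a = 1.0 / math.sqrt(math.pi * n_per_m2)   # mean distance (assumed convention)
    e_coulomb = E_CHARGE**2 / (4.0 * math.pi * EPS0 * eps_r * a)
    return e_coulomb / (K_B * T_kelvin)

# Illustrative numbers (n ~ 4.5e12 m^-2, T ~ 0.46 K) give Gamma ~ 137,
# i.e. right at the observed crystallization point Gamma_c = 137 +- 15.
print(coupling_constant(4.5e12, 0.46))
```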
On the theoretical side, two conspicuous points have been known for 2D systems: (i) Mermin’s theorem dictates that no true long-range crystalline order is possible at finite $`T`$ in the thermodynamic limit . To be precise, the $`1/r`$ Coulomb interaction is too long ranged to apply Mermin’s arguments directly. Although there have been some theoretical attempts to extend the theorem to the Coulomb case, no rigorous proof has been attained. (ii) A theory due to Kosterlitz, Thouless, Halperin, Nelson, and Young (KTHNY) predicts that a “hexatic” phase, characterized by a short-range positional order and a quasi-long-range orientational order, exists between a liquid phase and a solid phase . Because the KTHNY theory is based on various assumptions and approximations, its validity should be tested by numerical methods such as a molecular dynamics (MD) simulation. Several authors have applied numerical methods to classical 2D electron systems, but they arrived at different conclusions on the KTHNY prediction .
In order to address both of the above problems, the most direct way is to calculate the positional and the orientational correlation functions, which is exactly the motivation of the present study .
## 2 Numerical Method
A detailed description of the simulation is given elsewhere , so we only recapitulate it. We consider a rectangular area with a rigid uniform neutralizing positive background in periodic boundary conditions. The aspect ratio of the rectangle is taken to be $`L_y/L_x=2/\sqrt{3}`$, which can accommodate a perfect triangular lattice . The Ewald summation method is used to take care of the long-range nature of the $`1/r`$ interaction. We have employed Nosé-Hoover’s canonical MD method to incorporate temperature accurately.
The system is cooled or heated across the transition with a simulated annealing method. The results presented here are for $`N=900`$ electrons with MD runs of 30,000–110,000 time-steps for each value of $`\mathrm{\Gamma }`$. The correlation functions and other quantities are calculated for the last $`{\sim}`$20,000 time-steps of each run.
Following Cha and Fertig , we define the positional and the orientational correlation functions from which we identify the ordering in each phase. The positional correlation function is defined by
$$C(r)\equiv \langle \rho _𝐆^{*}(𝐫)\rho _𝐆(\mathrm{𝟎})\rangle =\frac{{\displaystyle \underset{i,j}{\sum }}\delta (r-|𝐫_i-𝐫_j|)\frac{1}{6}{\displaystyle \underset{𝐆}{\sum }}e^{\mathrm{i}𝐆(𝐫_i-𝐫_j)}}{{\displaystyle \underset{i,j}{\sum }}\delta (r-|𝐫_i-𝐫_j|)},$$
where $`𝐆`$ is a reciprocal vector of the triangular lattice with the summation taken over the six $`𝐆`$’s that give the first peaks of the structure factor \[inset (a) of Fig. 1\]. The orientational correlation function is defined by
$$C_6(r)\equiv \langle \psi _6^{*}(𝐫)\psi _6(\mathrm{𝟎})\rangle =\frac{{\displaystyle \underset{i,j}{\sum }}\delta (r-|𝐫_i-𝐫_j|)\psi _6^{*}(𝐫_i)\psi _6(𝐫_j)}{{\displaystyle \underset{i,j}{\sum }}\delta (r-|𝐫_i-𝐫_j|)},$$
where $`\psi _6(𝐫)\equiv \frac{1}{n_c}\underset{\alpha }{\overset{\mathrm{n}.\mathrm{n}.}{\sum }}e^{6\mathrm{i}\theta _\alpha (𝐫)}`$ with $`\theta _\alpha (𝐫)`$ being the angle of the vector connecting an electron at $`𝐫`$ and the $`\alpha `$-th nearest neighbor with respect to a fixed axis. The summation is taken over the $`n_c`$ nearest neighbors that are determined by the Voronoi construction .
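A rough sketch of how $`\psi _6`$ and $`C_6(r)`$ can be computed in practice is given below; it is not the authors' code. The Voronoi neighbors are obtained via the Delaunay triangulation (the dual construction), and periodic boundary conditions as well as the exact binning of the paper are ignored for brevity:

```python
import numpy as np
from scipy.spatial import Delaunay

def psi6(points):
    """Bond-orientational order parameter psi_6 for each particle."""
    tri = Delaunay(points)
    indptr, indices = tri.vertex_neighbor_vertices   # Voronoi/Delaunay neighbors
    psi = np.zeros(len(points), dtype=complex)
    for i in range(len(points)):
        nbrs = indices[indptr[i]:indptr[i + 1]]
        theta = np.arctan2(points[nbrs, 1] - points[i, 1],
                           points[nbrs, 0] - points[i, 0])
        psi[i] = np.exp(6j * theta).mean()           # (1/n_c) sum over neighbors
    return psi

def c6(points, nbins=50):
    """C6(r): pair correlation of psi_6, binned over pair separations."""
    psi = psi6(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    corr = (psi.conj()[:, None] * psi[None, :]).real
    iu = np.triu_indices(len(points), k=1)           # each pair counted once
    counts, edges = np.histogram(d[iu], bins=nbins)
    sums, _ = np.histogram(d[iu], bins=edges, weights=corr[iu])
    r = 0.5 * (edges[:-1] + edges[1:])
    return r, sums / np.maximum(counts, 1)
```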
## 3 Results and Discussions
Let us first look at the positional and the orientational correlation functions in Fig. 1 for $`\mathrm{\Gamma }=200`$ and $`\mathrm{\Gamma }=160`$, for which the system is well in the solid phase. The positional correlation is seen to decay slowly, indicative of an algebraic (power-law) decay at large distances. The round-off in the correlation function around half of the system size should be an effect of the periodic boundary conditions. The algebraic decay of the positional correlation indicates that the 2D electron solid has only a quasi-long-range positional order. Thus we have obtained a numerical indication that Mermin’s theorem remains applicable to the $`1/r`$ Coulomb interaction, which is consistent with the analytical but approximate results obtained in .
On the other hand, the orientational correlation rapidly approaches a constant, indicating a long-range orientational order. Therefore, while the 2D electron solid has no true long-range crystalline order, we can say that it has a topological order. From a snapshot of the configuration \[see inset (b) in Fig. 1\], we can see that a long-range orientational order is preserved since defects (5- or 7-fold disclinations, etc) tend to appear as dislocation (5-7 combination of disclinations) pairs, i.e., 5-7-5-7 disclination quartets that only disturb the orientational correlation locally. Here the coordination number is again determined from the Voronoi construction.
Now we move on to the orientational correlation function (inset of Fig. 2) around the crystallization, which is obtained by cooling the system from a liquid to a solid. In between a short-range orientational order for $`\mathrm{\Gamma }=120`$ and a long-range one for $`\mathrm{\Gamma }=140`$, the correlation appears to decay algebraically at $`\mathrm{\Gamma }=130`$ with an exponent approximately equal to unity, which deviates from the upper bound of $`1/4`$ predicted by KTHNY . However, numerical difficulties arising from finite-size and finite-time effects prevent us from drawing any definite conclusion on the existence of the hexatic phase. Namely, we cannot rule out the possibility that the power-law behavior is an artifact of insufficient equilibration. In fact, the solid phase persists down to $`\mathrm{\Gamma }=130`$ when the system is heated from a solid, which is understandable if the solid-hexatic and the hexatic-liquid transitions are of first and second order, respectively.
The KTHNY theory is based on a picture that the hexatic-liquid transition occurs through unbinding of disclination pairs. To examine if this is the case, we have calculated a defect-defect correlation function (Fig. 2), which we define as a distribution of 7-fold coordinated electrons with respect to a 5-fold coordinated electron. The correlation function exhibits no qualitative difference between $`\mathrm{\Gamma }=120`$ and $`\mathrm{\Gamma }=130`$, for which the disclinations are not tightly bound as compared with $`\mathrm{\Gamma }=140`$. If we look at a snapshot for $`\mathrm{\Gamma }=130`$ (Fig. 3), we see some domain structure as far as the present numerical simulation with finite-size and finite-time restrictions are concerned. A finite-size scaling will be an interesting future problem if the transition into the hexatic phase is of second order.
We have also looked at a dynamical property, i.e., the power spectral density of the velocity (Fig. 4), which is related to the vibrational density of states and corresponds to the Fourier transform of the velocity autocorrelation function via Wiener-Khinchin’s theorem. While the difference between the solid and the liquid phases appears in the spectrum around zero frequency, which is proportional to the diffusion constant (finite in the liquid or vanishingly small in the solid), we find a peak around the typical phonon frequency, which, curiously enough, persists even in the liquid phase. This indicates that the liquid has well-defined local configurations despite the short-range positional and orientational correlations.
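Schematically (our own sketch, with an evenly sampled velocity record assumed and normalization left arbitrary), the power spectral density can be estimated as the squared modulus of the Fourier transform of the velocity signal, averaged over particles and Cartesian components; its zero-frequency limit is then proportional to the diffusion constant, as noted above:

```python
import numpy as np

def velocity_psd(v, dt):
    """v: array (n_steps, n_particles, 2) of velocities; returns (freq, S)."""
    v = v - v.mean(axis=0)                     # remove any center-of-mass drift
    spec = np.abs(np.fft.rfft(v, axis=0))**2   # periodogram per particle/component
    S = spec.mean(axis=(1, 2)) * dt / v.shape[0]
    freq = np.fft.rfftfreq(v.shape[0], d=dt)
    return freq, S
```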
|
no-problem/9906/cond-mat9906314.html
|
ar5iv
|
text
|
# The Aharonov-Bohm effect for an exciton
## Abstract
We study theoretically the exciton absorption on a ring threaded by a magnetic flux. For the case when the attraction between electron and hole is short-ranged we get an exact solution of the problem. We demonstrate that, despite the electrical neutrality of the exciton, both the spectral position of the exciton peak in the absorption, and the corresponding oscillator strength oscillate with magnetic flux with a period $`\mathrm{\Phi }_0`$—the universal flux quantum. The origin of the effect is the finite probability for electron and hole, created by a photon at the same point, to tunnel in the opposite directions and meet each other on the opposite side of the ring.
One of the manifestations of the Aharonov-Bohm (AB) effect in the ring geometry is the periodic dependence of the transmission coefficient for an electron traversing the ring on the magnetic flux $`\mathrm{\Phi }`$ through the ring. The period of oscillations is equal to $`\mathrm{\Phi }_0=hc/e`$ — the universal flux quantum.
For one-dimensional (1D) continuum interacting quantum systems with translational invariance there is also a periodicity of many-particle states as functions of flux. In 1D lattice systems, the lifting of Galilean invariance allows for various periodicities of the states. For the ground state, this behavior can be interpreted, according to the above definition of $`\mathrm{\Phi }_0`$, as a signature of the existence of elementary excitations with multiple — sometimes even fractional — charges . In the case of strong electron-electron interaction the adequate description of the many-body states is based on excitations of the Wigner-crystal . Furthermore, the absence of sensitivity to the flux in such systems is an indication of the onset of the Mott transition. Similarly, the sensitivity of single-particle energies to the flux can be used as a criterion of the Anderson-type metal-insulator transition in disordered systems. Combined effects of interactions and disorder in 1D have received much attention in the last decade . Numerical studies of pairing effects for two particles with repulsive interaction in a disordered environment were carried out using the AB setting . Other physical manifestations of the AB effect in the ring geometry considered in the literature include the evolution of electron states for a time-dependent flux, and a flux-dependent equilibrium distortion of the lattice caused by electron-phonon interactions.
The physical origin of the flux sensitivity of an electron on the ring is its charge which couples to the vector potential. Correspondingly, the coupling to the flux has the opposite sign for an electron and a hole. For this reason an exciton, being a bound state of electron and hole and thus a neutral entity, should not be sensitive to the flux. However, due to the finite size of the exciton, such a sensitivity will emerge. This effect is demonstrated in the present paper. Below we study the AB-oscillations both in the binding energy and in the oscillator strength of the exciton absorption. We choose as a model a short-range attraction potential between electron and hole, which allows to solve the three-body problem (electron, hole, and a ring) exactly. From this exact solution, we trace the behavior of the AB oscillations when increasing the radius of the ring or the strength of the electron-hole attraction.
Denote with $`\phi _e`$ and $`\phi _h`$ the azimuthal coordinates of the electron and hole, respectively. In the absence of interaction the wave functions of electrons and holes are given by
$$\mathrm{\Psi }_N^{(e)}(\phi _e)=\frac{1}{\sqrt{2\pi }}e^{iN\phi _e},\mathrm{\Psi }_{N^{\prime }}^{(h)}(\phi _h)=\frac{1}{\sqrt{2\pi }}e^{iN^{\prime }\phi _h},$$
(1)
where $`N`$ and $`N^{\prime }`$ are integers. The corresponding energies are
$$E_N^{(e)}=\frac{\hbar ^2}{2m_e\rho ^2}\left(N-\frac{\mathrm{\Phi }}{\mathrm{\Phi }_0}\right)^2,E_{N^{\prime }}^{(h)}=\frac{\hbar ^2}{2m_h\rho ^2}\left(N^{\prime }+\frac{\mathrm{\Phi }}{\mathrm{\Phi }_0}\right)^2.$$
(2)
Here $`\rho `$ is the radius of the ring, and $`m_e`$, $`m_h`$ stand for the effective masses of electron and hole, respectively. In the presence of an interaction $`V\left[R(\phi _e-\phi _h)\right]`$, where $`R(\phi _e-\phi _h)=2\rho \mathrm{sin}\left(\frac{\phi _e-\phi _h}{2}\right)`$ is the distance between electron and hole, we search for the wave function of the exciton in the form
$$\mathrm{\Psi }(\phi _e,\phi _h)=\underset{N,N^{\prime }}{\sum }A_{N,N^{\prime }}\mathrm{\Psi }_N^{(e)}(\phi _e)\mathrm{\Psi }_{N^{\prime }}^{(h)}(\phi _h).$$
(3)
The coefficients $`A_{N,N^{\prime }}`$ are to be found from the equation
$$\underset{N,N^{\prime }}{\sum }A_{N,N^{\prime }}\left[E_N^{(e)}+E_{N^{\prime }}^{(h)}-\mathrm{\Delta }\right]\mathrm{\Psi }_N^{(e)}(\phi _e)\mathrm{\Psi }_{N^{\prime }}^{(h)}(\phi _h)+V\left[R(\phi _e-\phi _h)\right]\mathrm{\Psi }(\phi _e,\phi _h)=0,$$
(4)
where $`\mathrm{\Delta }`$ is the energy of the exciton. The formal expression for $`A_{N,N^{\prime }}`$ follows from Eq. (4) after multiplying it by $`\left[\mathrm{\Psi }_N^{(e)}(\phi _e)\mathrm{\Psi }_{N^{\prime }}^{(h)}(\phi _h)\right]^{*}`$ and integrating over $`\phi _e`$ and $`\phi _h`$
$$A_{N,N^{\prime }}=-\frac{1}{2\pi }\int _0^{2\pi }𝑑\phi _e\int _0^{2\pi }𝑑\phi _h\frac{V\left[R(\phi _e-\phi _h)\right]\mathrm{\Psi }(\phi _e,\phi _h)}{E_N^{(e)}+E_{N^{\prime }}^{(h)}-\mathrm{\Delta }}e^{-i(N\phi _e+N^{\prime }\phi _h)}.$$
(5)
At this point we make use of the assumption that the potential $`V\left[R(\phi _e\phi _h)\right]`$ is short-ranged. This implies that the integral over $`\phi _h`$ is determined by a narrow interval of $`\phi _h`$ close to $`\phi _e`$. Then we can replace $`\phi _h`$ by $`\phi _e`$ in the rest of the integrand. As a result, Eq. (5) simplifies to
$$A_{N,N^{\prime }}=-\frac{V_0}{E_N^{(e)}+E_{N^{\prime }}^{(h)}-\mathrm{\Delta }}\int _0^{2\pi }𝑑\phi _e\mathrm{\Psi }(\phi _e,\phi _e)e^{-i(N+N^{\prime })\phi _e},$$
(6)
where the constant $`V_0<0`$ is defined as
$$V_0=\frac{1}{2\pi }\int 𝑑\phi V\left[R(\phi )\right].$$
(7)
Finally we derive a closed equation, which determines the exciton energies. This equation follows from Eqs. (3) and (6) as a self-consistency condition. Indeed, by setting in Eq. (3) $`\phi _e=\phi _h`$, multiplying both sides by $`\mathrm{exp}(-iN_0\phi _e)`$, and integrating over $`\phi _e`$, we obtain
$$\int _0^{2\pi }𝑑\phi _e\mathrm{\Psi }(\phi _e,\phi _e)e^{-iN_0\phi _e}=\underset{N}{\sum }A_{N,N_0-N}.$$
(8)
Substituting (6) into (8) we arrive at the desired condition
$$1+V_0\underset{N}{\sum }\frac{1}{E_N^{(e)}+E_{N_0-N}^{(h)}-\mathrm{\Delta }_{N_0}}=0.$$
(9)
For each integer $`N_0`$ the solutions of Eq. (9) form a discrete set, $`\mathrm{\Delta }_{N_0}^m`$. The corresponding (non-normalized) wave functions have the form
$$\mathrm{\Psi }_{N_0}^m\propto e^{iN_0\phi _h}\underset{N}{\sum }\frac{e^{iN(\phi _e-\phi _h)}}{E_N^{(e)}+E_{N_0-N}^{(h)}-\mathrm{\Delta }_{N_0}^m}.$$
(10)
The exponential factor in front of the sum insures that in the dipole approximation only the excitons with $`N_0=0`$ can be created by light. The frequency dependence of the exciton absorption, $`\alpha (\omega )`$, can be presented as
$$\alpha (\omega )\propto \underset{m}{\sum }F_m\delta (\hbar \omega -E_g-\mathrm{\Delta }_0^m),$$
(11)
where $`E_g`$ is the band-gap of the material of the ring; the coefficients $`F_m`$ stand for the oscillator strengths of the corresponding transitions. A general expression for $`F_m`$ through the eigenfunction, $`\mathrm{\Psi }_0^m`$, of the excitonic state reads
$$F_m=\frac{|\int _0^{2\pi }𝑑\phi _e\int _0^{2\pi }𝑑\phi _h\mathrm{\Psi }_0^m(\phi _e,\phi _h)\delta (\phi _e-\phi _h)|^2}{\int _0^{2\pi }𝑑\phi _e\int _0^{2\pi }𝑑\phi _h|\mathrm{\Psi }_0^m(\phi _e,\phi _h)|^2}.$$
(12)
Upon substituting Eq. (10) into Eq. (12) and making use of Eq. (9), we obtain
$$F_m=\left[V_0^2\underset{N}{\sum }\frac{1}{(E_N^{(e)}+E_{-N}^{(h)}-\mathrm{\Delta }_0^m)^2}\right]^{-1}.$$
(13)
The latter expression can be presented in a more compact form by introducing the rate of change of the exciton energy with the interaction parameter $`V_0`$. Indeed, taking the differential of Eq. (9) yields
$$F_m=\frac{\partial \mathrm{\Delta }_0^m}{\partial V_0}.$$
(14)
We note that the summation in Eq. (9) can be carried out in a closed form by using the identity
$$\underset{N=-\infty }{\overset{\infty }{\sum }}\frac{1}{(\pi N-a_1)(\pi N-a_2)}=\frac{1}{(a_1-a_2)}\left(\frac{1}{\mathrm{tan}a_2}-\frac{1}{\mathrm{tan}a_1}\right).$$
(15)
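Identity (15) is easy to verify numerically. The throwaway check below (our own, truncating the sum at $`|N|\le M`$, with a tail error of order $`1/(\pi ^2M)`$) compares the two sides for arbitrary $`a_1`$, $`a_2`$ away from multiples of $`\pi `$:

```python
import math

def lhs(a1, a2, M=100000):
    # Truncated version of the infinite sum on the left of identity (15).
    return sum(1.0 / ((math.pi * N - a1) * (math.pi * N - a2))
               for N in range(-M, M + 1))

def rhs(a1, a2):
    # Closed form: (cot a2 - cot a1) / (a1 - a2).
    return (1.0 / math.tan(a2) - 1.0 / math.tan(a1)) / (a1 - a2)

a1, a2 = 0.7, -1.3
print(lhs(a1, a2), rhs(a1, a2))   # agree up to the O(1/(pi^2 M)) truncated tail
```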
For the most interesting case $`N_0=0`$ the parameters $`a_1`$, $`a_2`$ are equal to
$$a_{1,2}=\pi \left[\frac{\mathrm{\Phi }}{\mathrm{\Phi }_0}\pm \left(\frac{\mathrm{\Delta }_0^m}{\epsilon _0}\right)^{1/2}\right],$$
(16)
where
$$\epsilon _0=\frac{\hbar ^2}{2\rho ^2}\left(\frac{1}{m_e}+\frac{1}{m_h}\right)=\frac{\hbar ^2}{2\mu \rho ^2},$$
(17)
and $`\mu =m_em_h/(m_e+m_h)`$ denotes the reduced mass of electron and hole. Then the equation (9) for the exciton energies takes the form
$$\left(\frac{\mathrm{\Delta }_0^m}{\epsilon _0}\right)^{1/2}=-\left(\frac{\pi V_0}{\epsilon _0}\right)\frac{\mathrm{sin}\left(2\pi (\mathrm{\Delta }_0^m/\epsilon _0)^{1/2}\right)}{\mathrm{cos}\left(2\pi (\mathrm{\Delta }_0^m/\epsilon _0)^{1/2}\right)-\mathrm{cos}\left(2\pi (\mathrm{\Phi }/\mathrm{\Phi }_0)\right)}.$$
(18)
This equation is our main result. It is seen from Eq. (18) that the structure of the excitonic spectrum is determined by the dimensionless ratio $`|V_0|/\epsilon _0`$. From the definition (7) it follows that, with increasing radius $`\rho `$ of the ring, $`V_0`$ falls off as $`1/\rho `$. Thus, $`|V_0|/\epsilon _0`$ is proportional to $`\rho `$. In the limit of large $`\rho `$, when $`|V_0|\gg \epsilon _0`$, the spectrum can be found analytically. The ground state corresponds to negative energy and is given by
$$\mathrm{\Delta }_0^0=-\frac{\pi ^2V_0^2}{\epsilon _0}\left[1+4\mathrm{cos}\left(\frac{2\pi \mathrm{\Phi }}{\mathrm{\Phi }_0}\right)\mathrm{exp}\left(-\frac{2\pi ^2|V_0|}{\epsilon _0}\right)\right].$$
(19)
We note that the prefactor $`\pi ^2V_0^2/\epsilon _0`$ is independent of $`\rho `$. It is equal to the binding energy of an exciton on a straight line. It is easy to see that in the limit under consideration we have $`|\mathrm{\Delta }_0^0|\gg |V_0|\gg \epsilon _0`$.
The second term in the brackets of Eq. (19) describes the AB effect for the exciton. In the limit of large $`\rho `$ its magnitude is exponentially small. The physical meaning of the exponential prefactor can be understood after rewriting it in the form $`\mathrm{exp}(-2\pi \rho \gamma )`$, where $`\gamma =\pi |V_0|\left(2\mu /\hbar ^2\epsilon _0\right)^{1/2}`$ is the inverse decay length of the wave function of the internal motion of electron and hole in the limit $`\rho \to \infty `$. Thus, the magnitude of the AB effect in the limit of large $`\rho `$ represents the amplitude for bound electron and hole to tunnel in the opposite directions and meet each other “on the opposite side of the ring” (opposite with respect to the point where they were created by a photon). This qualitative consideration allows to specify the condition that the interaction potential is short-ranged. Namely, for Eq. (19) to apply, the radius of the potential should be much smaller than $`\gamma ^{-1}`$. It is also clear from the above consideration that, within a prefactor, the magnitude of the AB effect is given by $`\mathrm{exp}(-2\pi \rho \gamma )`$ for an arbitrary attractive potential, as long as the decay length $`\gamma ^{-1}`$ is smaller than the perimeter of the ring. In Figs. 1 and 2 we plot the numerical solution of Eq. (18) for various values of $`\mathrm{\Phi }`$ together with the asymptotic solution (19) valid in the limit of large $`\gamma \rho `$. We see that the maximum possible change in exciton energy by threading the ring with a flux $`\mathrm{\Phi }_0/2`$ is $`{\sim}25\%`$ of the size-quantization energy $`\epsilon _0`$. The asymptotic expression of (19) is good down to $`\gamma \rho \sim \pi ^{-1}`$. In Fig. 3, we show the variation of the exciton energy with $`\mathrm{\Phi }`$ within one period. As expected, the AB oscillations are close to sinusoidal for large values of $`2\pi \gamma \rho `$, whereas for $`2\pi \gamma \rho =1`$, anharmonicity is already quite pronounced. The increase of the exciton energy as the flux is switched on has a simple physical interpretation. If the single-electron energy (2) grows with $`\mathrm{\Phi }`$ then the single-hole energy is reduced with $`\mathrm{\Phi }`$ and vice versa. This suppresses the electron-hole binding. Fig. 3 illustrates how the amplitudes of the AB oscillations decrease with increasing ring perimeter $`2\pi \gamma \rho `$ as described by Eq. (19). The AB oscillations in the oscillator strength are plotted in Fig. 4. As expected, the shift is most pronounced for $`\mathrm{\Phi }=\mathrm{\Phi }_0/2`$, and the relative magnitude is nearly $`80\%`$ for the smallest value of $`2\pi \gamma \rho `$. For larger values of $`2\pi \gamma \rho `$, the oscillations in $`F_0(\mathrm{\Phi })`$ become increasingly sinusoidal as can be seen by differentiating Eq. (19) with respect to $`V_0`$.
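For readers who wish to reproduce such curves, here is a minimal numerical sketch (our own, not from the paper; energies in units of $`\epsilon _0`$ with $`v\equiv V_0/\epsilon _0`$, and the sum in Eq. (9) truncated at $`|N|\le M`$). For $`v=-2`$ and $`\mathrm{\Phi }=0`$ it reproduces $`\mathrm{\Delta }_0^0\approx -\pi ^2V_0^2/\epsilon _0`$ and $`F_0\approx 4\pi ^2`$, consistent with Eqs. (14) and (19):

```python
import numpy as np

def lhs_eq9(d, v, phi, M=2000):
    """1 + v * sum_N 1/((N - phi)^2 - d): Eq. (9) for N0 = 0, in units of eps_0."""
    N = np.arange(-M, M + 1)
    return 1.0 + v * np.sum(1.0 / ((N - phi) ** 2 - d))

def ground_state(v, phi, M=2000):
    """Bisection for the lowest root d = Delta_0^0/eps_0 (requires v < 0)."""
    N = np.arange(-M, M + 1)
    hi = np.min((N - phi) ** 2) - 1e-12    # just below the free-particle minimum
    lo = -(np.pi * v) ** 2 - 10.0          # safely below the bound-state energy
    for _ in range(200):                   # lhs_eq9 decreases monotonically in d
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if lhs_eq9(mid, v, phi, M) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

def oscillator_strength(d, v, phi, M=2000):
    """Eq. (13): F = [v^2 sum_N ((N - phi)^2 - d)^-2]^-1 (dimensionless)."""
    N = np.arange(-M, M + 1)
    return 1.0 / (v ** 2 * np.sum(((N - phi) ** 2 - d) ** -2.0))

v = -2.0                                   # v = V_0/eps_0, attractive
for phi in (0.0, 0.25, 0.5):               # phi = Phi/Phi_0
    d = ground_state(v, phi)
    print(phi, d, oscillator_strength(d, v, phi))
```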
In the consideration above we assumed the width of the ring to be zero. In fact, if the width is finite but smaller than the radius of the exciton, $`\gamma ^{-1}`$, it can be taken into account in a similar fashion as in by adding $`\hbar ^2\pi ^2/2m_eW^2`$ and $`\hbar ^2\pi ^2/2m_hW^2`$ to the single-electron and single-hole energies (2), respectively. Here, $`W`$ stands for the width of the ring and a hard-wall confinement in the radial direction is assumed. This would leave the AB oscillations unchanged. In the opposite case $`W\gg \gamma ^{-1}`$ the oscillations are suppressed. The precise form of the suppression factor as a function of $`(W\gamma )^{-1}`$ is unknown and depends on the details of the confinement.
Let us briefly address the excited states of the exciton corresponding to $`m>0`$. In the limit $`|V_0|\gg \epsilon _0`$ for the energies with numbers $`m<|V_0|/\epsilon _0`$ we get from Eq. (18)
$$\mathrm{\Delta }_0^m=\frac{\epsilon _0}{4}\left[m^2+(-1)^m\left(m+\frac{1}{2}\right)\frac{\epsilon _0}{\pi ^2V_0}\mathrm{cos}\left(\frac{2\pi \mathrm{\Phi }}{\mathrm{\Phi }_0}\right)\right].$$
(20)
In contrast to the ground state (19), the AB contribution to the energy $`\mathrm{\Delta }_0^m`$ is not exponentially small. Still, the AB term is small (in the parameter $`\epsilon _0/|V_0|\ll 1`$) compared to the level spacing at $`\mathrm{\Phi }=0`$.
An alternative way to derive Eq. (18) is to follow the Bethe ansatz approach. The intimate relation between Eq. (18) and a Bethe ansatz equation becomes most apparent in the absence of magnetic flux, $`\mathrm{\Phi }=0`$, when (18) can be rewritten as
$$2\pi \rho k_m=2\pi m+2\mathrm{arctan}\left(\frac{\rho k_m}{c}\right),$$
(21)
where $`k_m=(2\mathrm{\Delta }_0^m\mu )^{1/2}/\hbar `$ is the wave vector and $`c=2\pi \mu V_0\rho ^2/\hbar ^2`$ parameterizes the strength of the attraction analogously to the well-known $`\delta `$-function gas . At finite flux, the structure of the Bethe ansatz equations will be very similar to the equations for a 1D Hubbard model in the presence of a spin flux coupling to the spin-up and spin-down degrees of freedom of the electrons . We emphasize that in such discrete models the periodicity will also be influenced by whether the number of sites in the ring is even or odd in addition to the continuous situation considered in the present manuscript.
First experimental studies of the AB effect were carried out on metallic rings. The next generation of rings was based on GaAs/AlGaAs heterostructures as in Refs. and and had a circumference of $`6000`$ nm and $`3000`$ nm, respectively. For such rings the magnitude of the excitonic AB oscillations will be very small. However, quite recently much more compact ring-shaped dots of InAs in GaAs with a circumference of $`250`$ nm were demonstrated to exist. This was achieved by modification of a standard growth procedure used for the fabrication of arrays of self-assembled InAs quantum dots in GaAs. Recent light absorption experiments on nano-rings reveal an excitonic structure. However, it is much more advantageous to search for the AB oscillations proposed in the present paper not in absorption, but in luminescence studies. This is because near-field techniques developed in the last decade allow to ”see” a single quantum dot and thus avoid the inhomogeneous broadening. This technique was applied to many structures containing ensembles of quantum dots (e.g., GaAs/AlGaAs, ZnSe). In particular, extremely narrow and temperature insensitive (up to $`50`$ K) luminescence lines from a single InAs quantum dot in GaAs were recorded in Refs. .
In conclusion, we have demonstrated the AB oscillations for a neutral object. This constitutes the main qualitative difference between our paper and previous considerations for two interacting electrons on a ring. Lastly, we note that the possibility of the related effect of Aharonov-Casher oscillations for an exciton was considered previously in Ref. . The underlying physics in Ref. is that even a zero-size exciton having zero charge can still have a finite magnetic moment.
Upon completion of this work we have been made aware of Ref. in which the underlying physics of the AB oscillations of excitonic levels was uncovered. Although the analytical approach employed in Ref. is different from ours, the result obtained for the ground state energy is similar to Eq. (19).
###### Acknowledgements.
This work was supported by the NSF-DAAD collaborative research grant INT-9815194. MER was supported in part by NSF grant DMR 9732820. RAR also gratefully acknowledges the support of DFG under Sonderforschungsbereich 393. We are grateful to M. Büttiker, A. Lorke, T. V. Shahbazyan, R. Warburton, and J. Worlock for useful discussions. We thank A. V. Chaplik for pointing out Ref. to us.
|
no-problem/9906/astro-ph9906416.html
|
ar5iv
|
text
|
# A DIFFERENTIAL X-RAY GUNN-PETERSON TEST USING A GIANT CLUSTER FILAMENT
## 1. INTRODUCTION
Simulations suggest that at present, most of the baryons in the Universe should reside in the diffuse medium filling intergalactic space (e.g., Miralda-Escudé et al. 1996; Cen & Ostriker 1999a). Indeed, the observed baryonic matter in low-redshift galaxies and clusters is only a fraction of the amount predicted by Big Bang nucleosynthesis (Persic & Salucci 1992; Fukugita, Hogan, & Peebles 1998). Observationally, however, little is known about intergalactic medium (IGM) outside the relatively small confines of galaxy clusters where it is sufficiently hot to emit detectable radiation. UV and optical studies show that intercluster hydrogen is almost completely ionized and therefore practically undetectable (Gunn & Peterson 1965), with only a small fraction of it residing in Ly$`\alpha `$ forest clouds (e.g., Rauch et al. 1997; Giallongo, Fontana, & Madau 1997). IGM may be enriched with heavy elements that originate in supernovae and are transported to IGM by supernova-generated winds (e.g., De Young 1978). The amount of heavy elements in the IGM carries important information on the cumulative number of supernova explosions integrated over time, which is inaccessible by other means. Heavy elements with a relative abundance of order 1% solar were detected in Ly$`\alpha `$ absorbers (e.g., Burles & Tytler 1996 and references therein). However, these measurements are restricted by design to the small fraction of high-$`z`$ gas that is in dense clouds, and are unlikely to tell about the IGM on average.
At present, there is no direct information on the amount and composition of the diffuse IGM at low $`z`$ — possibly the largest reservoir of baryons around us — although there are plausible conjectures. At low redshifts, gas in galaxies has roughly solar heavy element abundances and gas in clusters exhibits quite a universal relative iron abundance of 1/3 solar (e.g., Edge & Stewart 1991; Fukazawa et al. 1998). The apparent lack of iron abundance evolution in clusters out to $`z\simeq 0.3`$ and possibly beyond puts the epoch of cluster gas enrichment at $`z\gtrsim 1`$ (Mushotzky & Loewenstein 1997). Today’s clusters should have formed later than that, and there is no obvious reason for the enrichment of IGM to occur preferentially in the regions of space that later became clusters. Therefore, as argued by, e.g., Renzini (1999), cluster metallicity can be taken as representative of the low-$`z`$ universe as a whole. Simulations by Cen & Ostriker (1999b) predict a present-day universal abundance of 0.2 solar with a higher metal concentration around clusters. On the other hand, clusters, unlike the field population, contain mostly ellipticals and few spirals, suggesting that the dense cluster environment provides for an effective stripping of the enriched galaxy gas (e.g., Sarazin 1988). If so, the IGM may have much lower heavy element abundances than the cluster gas. Note that if heavy elements are detected in intercluster space, this would also set a lower limit on the total hydrogen density (and therefore the baryon density $`\mathrm{\Omega }_b`$) as well, since the intercluster relative abundances are unlikely to be higher than the cluster values.
Unlike hydrogen and most helium, heavy elements in the diffuse IGM outside clusters are not expected to be strongly ionized and can in principle be detected via X-ray absorption in an X-ray analog of the Gunn-Peterson test (Shapiro & Bahcall 1980; Aldcroft et al. 1994; Fang & Canizares 1997; Perna & Loeb 1998; Hellsten, Gnedin, & Miralda-Escudé 1998). Such an absorption (mostly due to oxygen and iron) has never been observed. Indeed, the above authors estimate that for an IGM that is uniform on large linear scales, the expected resonant absorption lines in a random direction towards a distant quasar are very weak.
Fortunately, the Universe is not uniform and there is one place in the sky where such a test, using not quasars but galaxy clusters as background candles, may be feasible with the forthcoming Chandra and XMM observatories, as described below.
## 2. LOOKING THROUGH A GIANT CLUSTER FILAMENT
Figure 1 shows the sky distribution of the most distant galaxy clusters from the catalog by Abell, Corwin, & Olowin (1989). Besides the Galactic plane shadow, the most prominent feature of the distribution is a concentration of clusters in Aquarius at $`\alpha =348`$, $`\delta =-23`$, with the surface density of clusters about 6 times higher than average over the Southern Galactic hemisphere. This concentration was first noted by Abell (1961) and listed among the most likely candidates of rich distant superclusters. Later spectroscopic data (Ciardullo, Ford, & Harms 1985) showed that members of this concentration in fact form a giant filament along the line of sight, spanning a range of redshifts between 0.08 and 0.21 or about 700 $`h_{50}^{-1}`$ Mpc (Fig. 2). One can reasonably assume that the overdensity of the IGM in this volume is proportional to that of clusters, since a bias on such a large linear scale is unlikely (forthcoming cosmological simulations of the Hubble volume will address this issue, e.g., Colberg et al. 1998). Therefore, this filament should also provide an enhancement in the IGM column density by a factor of about 6. This enhancement makes the absorption detectable in the X-ray spectra of clusters located at the far end of the filament that are seen through the filament.
There are distant and nearby members of the filament that are close in projection (Fig. 2). This enables a “differential” test by measuring a difference in absorption in the nearby and distant cluster spectra and looking for its systematic increase with the cluster distance along the filament. Such a differential measurement can significantly reduce systematic errors due to (a) uncertainty of the Galactic column density and (b) unavoidable instrument calibration inaccuracy. Since the hypothetical absorbing gas is located within the small interval of known and low redshifts, its detection and interpretation can be straightforward.
### 2.1. Expected absorption column density
Big Bang nucleosynthesis predicts a baryonic (plausibly, dominated by IGM) density parameter of $`\mathrm{\Omega }_b\simeq 0.05h_{50}^{-2}`$ (Walker et al. 1991) $`\pm `$ a factor of 2 from the uncertainty of the measured deuterium abundance (e.g., Steigman 1996). Simulations of the observed Ly$`\alpha `$ forest by Rauch et al. (1997) and the detection of high-$`z`$ helium by Davidsen, Krauss, & Wei (1996) are consistent with the upper bound of the above value. Here we would like to get an upper limit estimate of the possible absorbing column, for which we assume the above upper bound as an IGM density, $`\mathrm{\Omega }_{\mathrm{IGM}}\simeq 0.1h_{50}^{-2}`$. The difference in hydrogen column density between the nearest and farthest clusters in the filament is
$$N_H(0.1<z<0.2)\simeq 2.5\times 10^{21}\mathrm{cm}^{-2}h_{50}^{-1}\frac{\mathrm{\Omega }_{\mathrm{IGM}}}{0.1h_{50}^{-2}}\frac{\mathrm{\Delta }}{6}\frac{1}{b_{700}},$$
(1)
where $`\mathrm{\Delta }`$ is the surface overdensity of clusters in that region of the sky and $`b_{700}`$ is the cluster bias on the 700 Mpc scale (if there is any). For comparison, the Galaxy has $`N_H=2\times 10^{20}`$ cm<sup>-2</sup> in that direction (of course, most of the Galactic absorption is due to neutral hydrogen absent in IGM). In the redshift interval in front of the filament ($`z<0.1`$) and assuming no IGM overdensity, $`N_H\simeq 2\times 10^{20}`$ cm<sup>-2</sup>, negligible compared to that in the filament. Another useful quantity for comparison is a possible column density of warm gas on a line of sight crossing the outskirts of a rich galaxy cluster. For example, extrapolating the X-ray gas density profile for the Coma cluster beyond its virial radius ($`r\simeq 3`$ Mpc) to 10 Mpc, one obtains $`N_H\simeq 3\times 10^{20}`$ cm<sup>-2</sup> (excluding the hot, weakly absorbing gas within the virial radius). Again, it is negligible compared to the column density that accumulates along the 700 Mpc overdense filament. Thus the proposed test can indeed probe the absorbing medium far from cluster confines.
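As a sanity check on Eq. (1), an order-of-magnitude sketch (our own; it assumes $`h_{50}=1`$, a hydrogen mass fraction of 0.76, and neglects the $`(1+z)^3`$ density evolution and the exact $`H(z)`$ along the path) gives the same answer to within a factor of order unity:

```python
RHO_CRIT_50 = 4.7e-30   # critical density, g/cm^3, for H_0 = 50 km/s/Mpc
M_P = 1.67e-24          # proton mass, g
MPC = 3.086e24          # cm per Mpc

def column_density(omega_igm=0.1, delta=6.0, path_mpc=700.0, x_h=0.76):
    """Hydrogen column through an overdense filament of the given path length."""
    n_h = x_h * omega_igm * RHO_CRIT_50 / M_P   # mean H density today, cm^-3
    return n_h * delta * path_mpc * MPC          # column density, cm^-2

print(column_density())   # ~2.8e21 cm^-2, the same order as Eq. (1)
```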
### 2.2. Expected conditions in the gas
Simulations predict that a large fraction of all baryons at low redshifts is in the form of “warm” gas with temperatures between $`10^5`$ and $`10^7`$ K (e.g., Cen & Ostriker 1999a). Although these temperatures are low for a complete collisional ionization of heavy elements such as oxygen and iron, in the low-density regions outside clusters, photoionization by Cosmic X-ray Background (CXB) dominates the ionization rate (e.g., Aldcroft et al. 1994). Unlike the uncertain ionizing UV background at high redshifts that is involved in the interpretation of Ly$`\alpha `$ data, the present-day CXB is directly measurable (e.g., Chen, Fabian, & Gendreau 1997). The ionizing radiation is stronger in the immediate vicinity of an X-ray-bright cluster, but CXB dominates beyond a few Megaparsecs from the cluster. The diffuse IGM is optically thin with respect to photoabsorption. For the expected density, the recombination timescales of interest are short compared to the Hubble time, thus the medium should be close to ionization equilibrium. For an estimate of the ionization balance expected in the gas filling our filament and subjected to photoionization by CXB, we used XSTAR code by T. Kallman and J. Krolik<sup>1</sup><sup>1</sup>1ftp://legacy.gsfc.nasa.gov/software/plasma\_codes/xstar. For a range of temperatures $`(3\text{–}30)\times 10^5`$ K and an IGM density $`n_H\simeq 2\times 10^{-6}`$ cm<sup>-3</sup> (six times the assumed $`\mathrm{\Omega }_{\mathrm{IGM}}`$), oxygen is mainly in the form of OVII and OVIII ions with an increasing fraction of the completely ionized species for increasing temperature. If the IGM is clumpy (which is most likely, e.g., Cen & Ostriker 1999a), plasma in the denser regions would have lower ionization states, for a given temperature. For illustrative purposes of this paper, we use the ionization balance calculated for $`T=3\times 10^5`$ K and the above average gas density; qualitative conclusions are similar for other temperatures and densities in the expected range. For a consistently optimistic estimate, relative abundances of heavy elements in the IGM are assumed to be 0.3 solar. To calculate absorption depth for the obtained mixture of ions, atomic data from Verner et al. (1996ab) were used. Figure 3a shows an example absorbed spectrum. As noted in earlier works, of all elements, oxygen causes the strongest absorption features and has the best chance to be detected.
## 3. DISCUSSION
### 3.1. Absorption lines vs. absorption edges
Earlier work addressing the practical possibility of detecting the IGM has concentrated on resonant absorption lines in the spectra of distant quasars. In a random direction in the sky, the column density of the absorbing material is expected to be low and absorbers spread over a large redshift interval. For such random searches, it is indeed optimal to look for easily identifiable spectral features such as lines. The lines are expected to have very low equivalent width, $`\mathrm{\Delta }E\sim 0.1`$ eV, and their detection requires high spectral resolution instruments and large-area telescopes such as the future calorimeter onboard Constellation-X<sup>2</sup><sup>2</sup>2http://constellation.gsfc.nasa.gov, or impractically long observations with grating spectrometers onboard the forthcoming Chandra<sup>3</sup><sup>3</sup>3http://asc.harvard.edu and XMM<sup>4</sup><sup>4</sup>4http://heasarc.gsfc.nasa.gov/docs/xmm observatories. It is often overlooked, however, that the equivalent width of the photoionization edges for oxygen ions is comparable to or greater than the width of the resonant lines. Detection of an edge does not require high spectral resolution. For example, Figure 3b shows the absorbed spectrum from Fig. 3a smoothed with a 100 eV resolution (FWHM) that can be achieved with a CCD. To emulate a realistic situation when the exact Galactic column density is unknown and is fitted as a free parameter, the figure also shows a model spectrum without the IGM absorption but with an increased Galactic $`N_H`$ to reproduce the flux at low energies (dotted line). Also shown by the dashed line is an IGM-absorbed spectrum in which line absorption is not included (only edges are included). It is apparent that with such energy resolution, edge absorption in the interval $`E=0.6\text{–}1`$ keV (observer frame) is a dominant feature. This absorption is “warm” and cannot be mimicked by increasing the assumed neutral Galactic column density. Note, however, that because weak, broad edges are not so easily identifiable as lines, one can hope to find them only if the redshift of the absorber is known a priori, as in the Aquarius filament. In this filament, the width of the redshift interval containing the absorber, $`0.1<z<0.2`$, corresponds to an energy interval smaller than the spectral resolution of the CCD detector and is therefore unimportant.
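The point is easy to visualize with a toy model (all numbers below are illustrative assumptions, not fits to data): a power-law continuum absorbed by a schematic O VII edge at the redshifted threshold energy, then convolved with a Gaussian of 100 eV FWHM, still shows a clear break, whereas individual resonant lines would be washed out:

```python
import numpy as np

E = np.linspace(0.3, 2.0, 2000)                       # keV, observer frame
edge = 0.739 / 1.15                                   # O VII edge shifted to z = 0.15
tau0 = 0.3                                            # assumed edge optical depth
tau = np.where(E > edge, tau0 * (edge / E) ** 3, 0.0) # schematic E^-3 opacity law
spec = E ** -1.7 * np.exp(-tau)                       # absorbed continuum

sigma = 0.100 / 2.355                                 # 100 eV FWHM -> sigma, keV
kern = np.exp(-0.5 * ((E - E.mean()) / sigma) ** 2)
smooth = np.convolve(spec, kern / kern.sum(), mode="same")  # CCD-resolution view
```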
### 3.2. Feasibility with forthcoming observatories
The spectrum in Fig. 3 assumes an optimistic IGM column density expected toward the most distant clusters in the Aquarius filament. Such absorption can already be detected with the CCD instruments ACIS-S onboard Chandra and EPIC onboard XMM, scheduled for launch in the near future. These detectors will have the required sensitivity at energies below $`\sim `$0.4 keV to straddle the expected absorption feature. Because this measurement does not require the high spectral resolution of grating spectrometers that have limited efficiency, it can take advantage of the full effective area of the telescopes, making the required exposures practical. Detailed simulations show that for each of the several X-ray clusters on the far end of the filament, a $`3\sigma `$ or better detection of the IGM absorption with the above parameters can be obtained in a $`6\times 10^4`$ s observation with XMM or in a 2–3 times longer but still practical exposure with Chandra. If the IGM is clumpy so that oxygen is in the lower ionization states that have higher absorption depth, its detection would require shorter exposures. Of course, if the unknown IGM density or its metallicity turn out to be much lower than the above upper-limit estimates, longer exposures will be required. For the differential test described in §2, a significant number of clusters that are members of the filament needs to be studied. As Fig. 2 shows, there are a number of appropriate objects, and also an area with more potential candidates with unknown redshifts. Thus the test proposed above appears to be feasible with Chandra and XMM, if the density and metallicity of the low-redshift IGM are close to the above optimistic assumptions.
### 3.3. Clusters as background candles
Previous work emphasized quasars as background candles in the search for IGM absorption. By studying distant, randomly selected quasars with future large-area telescopes, one may eventually be able to determine the average properties of the IGM. At present, since the IGM has not even been detected yet, one can improve the chances of finding its traces by using galaxy clusters as background sources. Clusters form at the intersection of matter filaments (e.g., Colberg et al. 1997), which greatly increases the probability of a favorable line of sight through a dense region of the universe. Because clusters are extended, any absorption detected in their spectrum would have to arise in the truly diffuse medium as opposed to a possible intervening gas-rich galaxy or a Ly$`\alpha `$ cloud in front of a quasar. Although clusters may exhibit intrinsic absorption in their central cooling flow regions (e.g., Allen & Fabian 1994), these regions can easily be masked out from the spectral analysis with an imaging instrument. However, because the angular extent of clusters precludes the use of grating spectrometers such as those onboard Chandra and XMM, they would require a calorimeter, such as the future Constellation-X or smaller-scale missions, to detect weak absorption lines arising in the IGM with lower column densities than that expected in the unique Aquarius filament.
The author is grateful to C. Otani for a conversation in 1994 in which the idea of the proposed measurement has originated, and to M. Elvis, T. Aldcroft, W. Forman, A. Vikhlinin, L. David and the referee for many useful comments. This work was supported by NASA contract NAS8-39073.
Figure 1: Log-log plot of ΔP(Δ) for c=0.795 (threshold), c=0.785 (desynchronized), and c=0.805 (synchronized). The system was disturbed by noise with level 10⁻¹⁴. The straight lines have the theoretically predicted slopes.
Comment on “Intermittent Synchronization in a Pair of Coupled Chaotic Pendula”
In , a number of supposedly novel and surprising features were observed in a system composed of two periodically driven and asymmetrically coupled pendula. In particular it was claimed that ‘permanent synchronization … does not occur except as a numerical artifact’. It was suggested that this might be related to the particular type of coupling. In this comment I want to point out that some of these claims cannot be maintained. The synchronization in this system is precisely of standard blow-out type . The observed intermittency is exactly the on-off intermittency well known from the synchronization of multifractal chaotic attractors .
The system studied in is described by $`\ddot{\theta }_m+\dot{\theta }_m/Q+\mathrm{sin}\theta _m=\mathrm{\Gamma }\mathrm{cos}(\mathrm{\Omega }t),\ddot{\theta }_s+\dot{\theta }_s/Q+\mathrm{sin}\theta _s=\mathrm{\Gamma }\mathrm{cos}(\mathrm{\Omega }t)+c[\mathrm{sin}\theta _s-\mathrm{sin}\theta _m].`$ The subscripts $`m`$ and $`s`$ stand for master and slave. When they are nearly synchronous, the difference $`\delta \equiv \theta _m-\theta _s`$ satisfies the linearized equation $`\ddot{\delta }+\dot{\delta }/Q+(1-c)\delta \mathrm{cos}\theta =0`$. My first observation is that the same linearized equation would follow from a symmetric coupling, where master and slave have coupling terms $`\pm c/2[\mathrm{sin}\theta _m-\mathrm{sin}\theta _s].`$ Since the behavior near the synchronization threshold is governed by the linearized equation, it follows that any eventual abnormal behavior cannot result from the asymmetry of the coupling.
As shown in , the synchronization threshold is given by the condition that the largest Lyapunov exponent $`\lambda _1(c)`$ of the linearized equation is zero. In the present case this gives $`c_c=0.7948`$ for the parameter values considered in . The eigenvalues of the instantaneous systems given in eq.(9) of are irrelevant, except that their fluctuations suggest that also the pointwise Lyapunov exponents might fluctuate. This is indeed the case. Let us consider a finite but large time $`T`$ and define by $`\mathrm{\Lambda }_1`$ and $`\mathrm{\Lambda }_2`$ the multipliers along the stable and unstable manifolds, respectively, of the linearized equation. Of course they depend parametrically on the trajectory $`\theta (t)`$. At the synchronization threshold, we have $`\langle \mathrm{log}|\mathrm{\Lambda }_1|\rangle \equiv T\lambda _1(c)=0`$, where the average is taken over all initial conditions $`\theta (t_0),\dot{\theta }(t_0),\delta (t_0),\dot{\delta }(t_0)`$ with $`t_0\to \mathrm{\infty }`$. Generically we expect that $`\langle \mathrm{log}|\mathrm{\Lambda }_2|\rangle /T=\lambda _2(c)<\lambda _1`$, as is verified numerically. Therefore we have only one direction in the space spanned by $`(\delta ,\dot{\delta })`$ along which we must study a possible break up of synchronization.
This break up can occur, even for $`c>c_c`$, if $`\mathrm{\Lambda }_1`$ fluctuates and if the system is perturbed by noise . More precisely, if this noise is infinitesimal and the fluctuations of $`\mathrm{\Lambda }_1`$ follow normal central limit behavior, one expects intermittent bursts with power-law distributions of amplitudes and of lengths of the locked phase .
To describe the fluctuations of $`\mathrm{\Lambda }_1`$ we use the generating function $`g(z;c)=T^{-1}\mathrm{log}\langle |\mathrm{\Lambda }_1|^{-z}\rangle `$. The cumulant expansion of $`\mathrm{log}|\mathrm{\Lambda }_1|`$ corresponds to a Taylor expansion $`g(z;c)=-z\lambda _1(c)+z^2\sigma ^2(c)/2+\mathrm{}`$, where $`T\sigma ^2(c)`$ is the variance of $`\mathrm{log}|\mathrm{\Lambda }_1|`$, and contributions of higher order cumulants are straightforward to compute. The arguments of can now be used straightforwardly to show that amplitudes $`\mathrm{\Delta }=|\delta |`$ of the bursts are distributed according to $`P(\mathrm{\Delta })\propto \mathrm{\Delta }^{\kappa -1}`$ with $`g(z=\kappa ;c)=0`$. Neglecting higher order cumulants this gives $`\kappa =2\lambda _1(c)/\sigma ^2(c)`$. For the parameters used in , simulations with $`T=200`$ give $`\sigma ^2(c)=0.89`$ at $`c=c_c`$, while $`\lambda _1(c)\approx 6.1(c_c-c)`$. The predicted power laws for $`P(\mathrm{\Delta })`$ are compared to numerical simulations in fig.1. In the same Gaussian approximation, the distribution for the locking intervals $`\tau `$ is for $`c\approx c_c`$ given by the distribution of return times to a reflecting wall of a biased 1-d random walk with drift $`\lambda _1(c)`$ and diffusion constant $`\sigma ^2(c)`$, $`P(\tau )\propto \tau ^{-3/2}\mathrm{exp}(-\tau \lambda _1(c)^2/2\sigma ^2(c))`$ . This disagrees with the fit in fig.2 of by the prefactor $`\tau ^{-3/2}`$, which indeed gives most of the $`\tau `$-dependence seen in that figure.
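This scaling is easy to verify within the Gaussian model itself: the log-amplitude performs a biased random walk reflected at the noise floor, whose stationary density is exponential with rate κ. A minimal sketch, using the numbers quoted above for c = 0.805 and the noise level of Fig. 1 (the step-per-unit-time discretization is an assumption of this sketch):

```python
import numpy as np

# Sketch: x = log(Delta) is a biased random walk (drift lambda_1, variance
# rate sigma^2) reflected at the noise floor; its stationary density gives
# P(Delta) ~ Delta^(kappa-1) with kappa = 2*lambda_1/sigma^2.
rng = np.random.default_rng(0)
lam1 = 6.1 * (0.7948 - 0.805)    # transverse Lyapunov exponent (negative here)
sigma2 = 0.89                    # variance growth rate of log|Lambda_1|
kappa = 2.0 * lam1 / sigma2      # predicted exponent

n, floor = 500_000, np.log(1e-14)   # steps; reflecting wall at the noise level
x = floor
samples = np.empty(n)
steps = rng.normal(lam1, np.sqrt(sigma2), n)
for i in range(n):
    x = max(x + steps[i], floor)    # reflect at the noise floor
    samples[i] = x

hist, edges = np.histogram(samples, bins=50, density=True)
mid = 0.5 * (edges[:-1] + edges[1:])
good = hist > 0
slope = np.polyfit(mid[good], np.log(hist[good]), 1)[0]
print(f"predicted kappa = {kappa:.3f}, slope of log-density = {slope:.3f}")
```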
Peter Grassberger
HLRZ, c/o Forschungszentrum Jülich
D-52425 Jülich, Germany
Received
PACS number: 05.45.+b
# DPNU-99-19 Relativistic heavy ion collisions — Where are we now? Where do we go?
## 1 INTRODUCTION
The construction of RHIC (Relativistic Heavy Ion Collider) at BNL will be completed by the time these proceedings are out and it will start accelerating gold nuclei at 100 GeV/A. What is the purpose of ultrarelativistic heavy ion collisions? The reader may have dreams such as i) study of high density matter, ii) production of the quark-gluon plasma, iii) quasi-reproduction of the big bang, iv) understanding of the history of the universe, and so forth. These are actually what will be pursued at RHIC, and so these are not dreams any more. We, however, have to turn our attention to the other side of reality as well: i) the lifetime of the system created in ultrarelativistic heavy ion collisions is very short, ii) the system is not static, iii) observables are dirty, in other words, the interpretation of observables is not straightforward, iv) there are so many models that claim to describe the results of ultrarelativistic heavy ion collisions successfully, and so on.
Thus, in order to fully understand ultrarelativistic heavy ion collisions and QCD, it is really necessary for theorists to attack the following challenges, which were summarized by Matsui at Quark Matter 97 :
(i) Compute, as best as one can, expected properties of dense matter and its phase structure and make predictions for signals of new states of matter.
(ii) Interpret the data from current fixed-target experiments and identify signals of new physics, if any, from backgrounds of “old physics”.
(iii) Describe formation and evolution of dense matter in nuclear collisions and estimate physical conditions to be achieved at future collider experiments.
Since the first half of item (i) will be attacked by Yoshié , I will concentrate on the second half of item (i) and items (ii) and (iii). I will discuss mainly the recent results of heavy ion collisions at CERN SPS at $`E_{\mathrm{lab}}=`$ 160 - 200 GeV/A.
## 2 WHERE ARE WE?
The system created in ultrarelativistic heavy ion collisions is not static. Even if hot/dense hadron matter or quark-gluon plasma is created, it cannot last for a long time. The system immediately starts to expand and cool down. Even if the quark-gluon plasma phase is produced initially, it is soon converted back to hadron matter, which freezes out on a time scale of a few tens of fm. A schematic diagram of the time evolution in ultrarelativistic heavy ion collisions is shown below.
This kind of explanation is often seen in the literature. However, the reader may wonder if this is indeed what is realized in ultrarelativistic heavy ion collisions. Without evidence, the whole picture is a mere conjecture. In the following, I will answer this question with recent experimental results and theoretical inference, and support this picture.
### 2.1 Are hot systems ever produced in ultrarelativistic heavy ion collisions?
Probably the reader is most interested in the possibility of the production of the quark-gluon plasma in ultrarelativistic heavy ion collisions. Nevertheless, I would like to begin with a more fundamental question; are there thermalization and collective motion in ultrarelativistic heavy ion collisions? It is because the phase transition to the quark-gluon plasma is a result of interaction. If no bulk interacting system is created, no phase transition takes place. It is logically possible that two nuclei collide on each other and go through each other without leaving a region with high energy density. However, recent experimental data tell us that approximately thermalized systems are indeed created in ultrarelativistic heavy ion collisions and that the systems go through collective expansion. The evidence includes the result of direct measurement of flow , measurement of the interference of identical particles , and so on. Here I will discuss the transverse mass, $`m_T`$, distribution of final state hadrons .
The transverse mass, $`m_T`$, is defined by $`m_T^2=m^2+p_T^2`$, where $`m`$ is the mass of the particle and $`p_T`$ is its transverse momentum. The beam direction is defined as the longitudinal axis. The $`m_T`$ distribution of a final state hadron $`i`$ at mid-rapidity is well-approximated by the following form:
$$\frac{1}{m_T}\frac{dN}{dm_T}\propto \mathrm{exp}\left(-\frac{m_T}{T_i}\right).$$
(1)
$`T_i`$ has been called temperature or slope parameter. It is known that $`T_i`$ is a function of particle mass and well-approximated as
$$T_i=a+bm_i,$$
(2)
where $`a`$ and $`b`$ are constants dependent on colliding nuclei, collision energy, and event class such as central or peripheral. This mass dependence of $`T_i`$ has the following simple interpretation: $`a=T_f`$ and $`b=v^2`$, where $`T_f`$ is the temperature of the system at freeze-out and $`v`$ is the flow velocity at freeze-out. This relation is derived by assuming that all particles are locally in thermal equilibrium (not necessarily in chemical equilibrium) and collectively expanding at freeze-out and that the freeze-out time is independent of particle species. Thus, the experimental observation, Eq. (2), is not inconsistent with the formation of hot thermalized system.
Note that, as I emphasized above, this is merely one of the data that support the formation of hot systems. If this were the only evidence, the formation of hot systems cannot be concluded, since even in high energy $`pp`$ collisions the $`m_T`$ distribution is exponential . By combining various observables, the state of the system is deduced. This is the heavy ion way of inference.
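For concreteness, here is a minimal sketch of the analysis implied by Eq. (2): fit the slope parameters of several hadron species linearly in mass and read off $`T_f=a`$ and $`v^2=b`$. The slope values below are generated from an assumed temperature and flow velocity, not taken from data.

```python
import numpy as np

# Sketch of the freeze-out analysis of Eq. (2): T_i = a + b*m_i, with a = T_f
# and b = v^2. The inputs are invented for illustration only.
masses = np.array([0.140, 0.494, 0.938])   # pi, K, p masses (GeV)
T_true = 0.120                             # assumed freeze-out temperature (GeV)
v2_true = 0.25                             # assumed squared flow velocity
slopes = T_true + v2_true * masses         # ideal slope parameters, Eq. (2)

b, a = np.polyfit(masses, slopes, 1)       # linear fit: T_i = a + b*m_i
print(f"T_f = {a*1000:.0f} MeV, <v> = {np.sqrt(b):.2f} c")
```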
### 2.2 What kind of matter is created?
We have learned that some kind of interacting matter is created, at least transiently, in ultrarelativistic heavy ion collisions. The next question is what kind of matter is created. One of the ways to get clues to it is to measure $`J/\psi `$ yield.
As is well known, the suppression of the $`J/\psi `$ yield was originally proposed as a signature of the formation of the quark-gluon plasma . The idea was that if the quark-gluon plasma is created, $`J/\psi `$ cannot form because the potential between a $`c\overline{c}`$ pair is Debye screened. However, this is not the only process that suppresses the $`J/\psi `$ yield. Processes such as $`J/\psi +N\to D\overline{D}N`$ can contribute to $`J/\psi `$ suppression. Thus, $`J/\psi `$ suppression is not necessarily a signature of the quark-gluon plasma by itself. Until $`\mathrm{PbPb}`$ experiments began at CERN, $`J/\psi `$ data in pA and AB collisions had been successfully explained by the above process, i.e., $`J/\psi `$ absorption by the nucleon. In $`\mathrm{PbPb}`$ collisions at $`E_{\mathrm{lab}}=160`$ GeV/A, however, it was found that as the transverse energy $`E_T`$ of final state hadrons increases, the $`J/\psi `$ yield, more strictly $`B_{\mu \mu }\sigma (J/\psi )/\sigma (\mathrm{Drell}\text{-}\mathrm{Yan})`$, drops suddenly at a certain $`E_T`$ . Hadronic scenarios failed to explain this behavior . However, the nature of this sudden drop is not clear yet, although there are a lot of attempts to explain the behavior . In particular, I remark that even if the phase transition from the hadronic phase to the quark matter is of first order, it cannot lead to the sudden drop of the $`J/\psi `$ yield in such a naive way as discussed in . It is because the energy density jumps at a first order phase transition. Thus, as $`E_T`$ is increased, part of the system begins to become the quark matter at $`E_{T1}`$, and the portion of the quark matter increases gradually. When the whole volume has become the quark matter, $`E_T`$ must have become much larger than $`E_{T1}`$. As a result, the sudden drop in the $`J/\psi `$ yield does not correspond to the sudden formation of the quark-gluon plasma at a first order phase transition.
### 2.3 How is the matter being excited?
We have seen that some new form of matter appears to be created in ultrarelativistic heavy ion collisions. Then, how is the matter excited quantum mechanically? Dileptons are suitable probes to study this problem. Leptons do not interact strongly and can be considered almost penetrative. Observed dileptons carry the information of the early hot/dense stage as well as later stages, and some dileptons are produced even after freeze-out, for instance, by the decay of vector mesons. Accordingly, if vector mesons are modified, it is expected to be observed with dileptons, but not with hadrons.
The first evidence of hadron modification was brought by the CERES Collaboration at CERN SPS . They first measured dilepton and meson yields in $`p\mathrm{Be}`$ and $`p\mathrm{Au}`$ collisions, and found that the dilepton spectra in those reactions are explained solely by the decay of final state mesons. This means that no long-lived fireball is created in $`p\mathrm{Be}`$ or $`p\mathrm{Au}`$ collisions. They have, however, shown that the dilepton spectra in S-U collisions at $`E_{\mathrm{lab}}=200`$ GeV/A cannot be explained only by the decay of final state mesons. Later, it was found that the data cannot be reproduced without taking account of hadron modification in the hot phase created in the collisions .
Thus, hadrons are modified in medium. Two scenarios are often compared: mass shift and collisional broadening, and they are often regarded as mutually exclusive. This is, however, not the case. First, I point out that the term “mass shift” is quite confusing. What is observed with dileptons is not the pole masses of vector mesons but the spectral function. In general, for the vector Heisenberg operator $`J_\mu (\stackrel{}{x},t)`$ the polarization tensor $`\mathrm{\Pi }_{\mu \nu }(q_0,\stackrel{}{q})`$ is defined by
$$\mathrm{\Pi }_{\mu \nu }=i\int d^4x\,e^{iqx}\langle TJ_\mu (\stackrel{}{x},t)J_\nu ^{\dagger }(\stackrel{}{0},0)\rangle _T,$$
(3)
where $`\langle \mathrm{}\rangle _T`$ indicates the thermal average at $`T`$. For simplicity, let us set $`\stackrel{}{q}=\stackrel{}{0}`$. The spectral function $`\rho (q_0)`$ is related to the polarization tensor,
$$\rho (q_0)\propto \mathrm{Im}\,\frac{1}{q_0^2}\frac{\mathrm{\Pi }_\mu ^\mu (q_0)}{\mathrm{tanh}(\beta q_0/2)},\qquad \beta =\frac{1}{T}.$$
(4)
The physical significance of the spectral function is that the dilepton production rate at $`T`$, $`(dN_{\ell \overline{\ell }}/d^4xd^4q)_T`$, is related to the spectral function without approximation as follows:
$$\left(\frac{dN_{\ell \overline{\ell }}}{d^4xd^4q}\right)_T\propto \frac{e^{\beta q_0}+1}{(e^{\beta q_0}-1)^2}\rho (q_0).$$
(5)
This formula is exact, independent of which phase the system is in. There is some confusion about the meaning of dilepton spectra. Some authors argue that observed dilepton spectra are different from those theorists calculate because final state interactions modify the theoretically calculated spectra. This statement stems from the misunderstanding that the masses calculated by theorists are the masses of quasi-particles at $`T`$ obtained by diagonalizing the Hamiltonian. If this were the case, the effect of final state interactions might change theoretical predictions substantially and should be taken into account. However, the formula, Eq. (5), is exact, and so no further interface between experiments and theories is needed except purely experimental corrections.
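As an illustration of how Eq. (5) is used in practice, the sketch below evaluates the rate for an assumed relativistic Breit-Wigner spectral function of the ρ meson. The mass, width, temperature, and overall normalization are illustrative assumptions, not values from the text.

```python
import numpy as np

# Sketch of Eq. (5): the thermal dilepton rate follows from the spectral
# function alone. A Breit-Wigner stand-in for rho(q0) at q = 0 is assumed.
T = 0.150                       # temperature, GeV
m_rho, gamma = 0.770, 0.150     # assumed vacuum rho mass and width, GeV

def spectral(q0):
    """Breit-Wigner stand-in for the spectral function rho(q0)."""
    return q0 * gamma / ((q0**2 - m_rho**2) ** 2 + (m_rho * gamma) ** 2)

def rate(q0, T):
    """Dilepton rate per Eq. (5), up to a constant prefactor."""
    x = q0 / T
    return (np.exp(x) + 1.0) / (np.exp(x) - 1.0) ** 2 * spectral(q0)

for q in np.linspace(0.3, 1.5, 7):
    print(f"q0 = {q:.2f} GeV : relative rate = {rate(q, T):.3e}")
```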
Collisional broadening is a universal phenomenon; it appears wherever collisions take place . Generally, hadronic effective theories calculate this part of hadron modification. The other type of hadron modification, mass shift or global shift of peaks in the spectral function, is special to QCD. In the QCD sum rules, this is expressed as follows . The operator product expansion side of the polarization function is given by
$$\frac{i}{3Q^2}\int d^4x\,e^{i\omega t}\langle TJ_\mu (\stackrel{}{x},t)J^\mu (\stackrel{}{0},0)\rangle _T=C_0\mathrm{log}|Q^2|+\underset{n=1}{\overset{\mathrm{\infty }}{\sum }}\frac{C_n}{Q^{2n}},$$
(6)
where $`Q^2=-\omega ^2`$, the $`C_i`$’s are condensates, and $`\stackrel{}{q}=\stackrel{}{0}`$ has been assumed as before. The condensates are related to the spectral function by the following exact sum rules:
$$\int _0^{s_0}\rho (\sqrt{s})s^n\,ds=\frac{C_0}{n+1}s_0^{n+1}+(-1)^nC_{n+1},\qquad n\ge 0,$$
(7)
where $`s_0`$ is the perturbative QCD threshold. At finite temperature or density, the condensates change because of partial chiral symmetry restoration, etc. The change is reflected in a global shift of the peak structure in the spectral function at finite temperature or density through the exact sum rules, (7), in addition to the trivial collisional broadening . Thus, the two scenarios for hadron modification are not mutually exclusive. In QCD both mechanisms are indeed at work and should be taken into account. In particular, the sum rules, (7), should be satisfied by every effective model as well.
### 2.4 Possibility of non-equilibrium states?
Heavy ion reactions take place within a finite time. The typical time scale is of the same order as that of the strong interaction and the process does not necessarily proceed adiabatically. Therefore, there is plenty of potential room for non-equilibrium phenomena. This non-equilibration is not simply limited to phase space, but is also expected in chiral space.
One such possibility is the disoriented chiral condensate (DCC). Rajagopal and Wilczek proposed a mechanism to create DCC domains called the ‘quench mechanism’ . I use, as a model Lagrangian, the linear sigma model defined by
$$\mathrm{}=\frac{1}{2}\partial _\mu \varphi _i\partial ^\mu \varphi _i-\frac{\lambda }{4}(\varphi ^2-v^2)^2+H\sigma ,$$
(8)
where $`\varphi _i\equiv (\sigma ,𝝅)`$ stands for a vector in internal space; $`H\sigma `$ is an explicit chiral symmetry breaking term due to the finite quark masses.
The original idea of the quench mechanism for DCC formation is summarized as follows. First, chiral symmetry is restored in the central region in particle or nuclear collisions. Then, the chiral fields are assumed to decouple quickly from the heat bath. The chiral fields are thus left around the origin in chiral space, i.e., $`\langle \varphi ^2\rangle \approx 0`$, while the effective potential has returned to its zero temperature form. As a result, the chiral fields are left at the top of the so-called ‘Mexican hat’ effective potential and, according to its initial condition at each spatial point, the chiral field $`\varphi `$ rolls down the slope of the effective potential in an arbitrary direction toward the bottom of the potential in chiral space. If in a certain spatial region the chiral field collectively rolls down in approximately the same direction and acquires an expectation value different from that in the vacuum, it will result in a DCC domain in coordinate space. This is schematically shown in Fig. 2.
Actually, the above explanation tells us why we can expect non-equilibrium states in chiral space, but does not tell us what is the mechanism to create ‘domains’, i.e., large scale structure. It is the amplification of low momentum modes caused by the mode instability in the Mexican hat effective potential. The equation of motion for the momentum $`𝐤`$ component of the pion field $`\pi ^i`$ is given in the mean field approximation,
$$\frac{d^2𝝅_𝐤}{dt^2}=[\lambda (v^2-\langle \varphi ^2\rangle )-k^2]𝝅_𝐤,$$
(9)
where the fluctuations of the chiral fields were neglected and $`\langle \varphi ^2\rangle `$ is the average of $`\varphi ^2=\varphi ^i\varphi ^i`$. In this approximation, modes with $`k^2<\lambda (v^2-\langle \varphi ^2\rangle )`$ grow exponentially, while high momentum modes do not. As a result, a large scale structure is expected to emerge.
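A minimal numerical sketch of this instability integrates Eq. (9) for a few modes with a quench initial condition $`\langle \varphi ^2\rangle \approx 0`$. The coupling and vacuum expectation value below are typical linear-sigma-model numbers, assumed here for illustration.

```python
# Sketch of the quench instability, Eq. (9): modes with
# k^2 < lambda*(v^2 - <phi^2>) grow exponentially; hard modes oscillate.
lam, v = 20.0, 0.0877            # assumed sigma-model coupling and vev (GeV)
phi2 = 0.0                       # quench: <phi^2> ~ 0 right after the quench
k2_crit = lam * (v**2 - phi2)    # instability boundary (GeV^2)

dt, n = 0.005, 4000              # time step and steps, natural units (GeV^-1)
for k in (0.1, 0.4, 0.8):        # soft (unstable) and hard (stable) modes, GeV
    omega2 = k**2 - k2_crit      # squared frequency; negative -> unstable
    pi, dpi = 1e-3, 0.0          # small initial amplitude
    for _ in range(n):           # semi-implicit Euler integration
        dpi += -omega2 * pi * dt
        pi += dpi * dt
    tag = "grows" if omega2 < 0 else "oscillates"
    print(f"k = {k:.1f} GeV (k^2_crit = {k2_crit:.3f}): |pi_k| = {abs(pi):.2e} ({tag})")
```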
If a DCC domain is created and all final state pions are emitted from the single domain, it can be shown, using the remaining $`O(3)`$ invariance, that the distribution probability of a quantity $`R`$ defined by
$$R=\frac{N_{\pi ^0}}{N_{\pi ^0}+N_{\pi ^+}+N_{\pi ^-}},$$
(10)
where $`N_{\pi ^0}`$, $`N_{\pi ^+}`$, and $`N_{\pi ^-}`$ are the numbers of the final state $`\pi ^0`$, $`\pi ^+`$, and $`\pi ^-`$, respectively, takes the following form:
$$P(R)=\frac{1}{2\sqrt{R}}.$$
(11)
Note that $`P(R)`$ diverges at $`R=0`$, but that the average of $`R`$ takes a finite value, 1/3. This probability distribution is quite in contrast to that expected for normal incoherent emission, $`P(R)=\delta (R-1/3)`$ (see Fig. 3). This behavior is valid only for low $`p_T`$ pions, because DCC formation is caused by amplification of low momentum pion modes.
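Equation (11) can be verified by a one-line Monte Carlo: if the domain’s isospin direction is uniformly distributed on the sphere, the neutral fraction is $`R=\mathrm{cos}^2\theta `$, which reproduces $`P(R)=1/(2\sqrt{R})`$. A minimal sketch (the event count is arbitrary):

```python
import numpy as np

# Sketch verifying Eq. (11): coherent single-domain emission gives
# P(R) = 1/(2 sqrt(R)), in contrast to incoherent P(R) = delta(R - 1/3).
rng = np.random.default_rng(1)
n = 1_000_000
cos_theta = rng.uniform(-1.0, 1.0, n)   # uniform direction on the sphere
R = cos_theta**2                        # neutral-pion fraction per domain

hist, edges = np.histogram(R, bins=20, range=(0.0, 1.0), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
for c, h in zip(centers[:5], hist[:5]):
    print(f"R = {c:.3f}: P(R) = {h:.2f} vs 1/(2 sqrt(R)) = {0.5/np.sqrt(c):.2f}")
print(f"<R> = {R.mean():.3f} (expected 1/3)")
```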
Eq. (9) tells us that, in order to have large and discernible domains, it is necessary for the system to cool fast enough to have (i) a spontaneously symmetry-broken effective potential and (ii) a discordance between the minimum point of the spontaneously chirally broken effective potential and the position of the mean field.
In central ultrarelativistic heavy ion collisions, the initial fluctuation is expected to be large and the typical time scale for the expansion of the system is also large. Thus, the system is likely to maintain equilibration at least in chiral space. The above conjecture has been also confirmed numerically by our group , assuming the Bjorken scaling in the longitudinal direction . This leads to the following conclusion: central ultrarelativistic heavy ion collisions are not the best place to look for DCC formation, contrary to the general expectation. It is not-so-central ultrarelativistic heavy ion collisions that should be selected to look for DCC formation. I also note that in not-so-central ultrarelativistic heavy ion collisions the axial anomaly is expected to bring about DCC domain formation aligned in real space, i.e., one above the reaction plane and one below it, but misaligned in chiral space . This conjecture can be tested by utilizing the technique to determine the reaction plane, which has been developed in flow analysis.
At the moment, only one heavy ion experiment has reported the result of a DCC search . Their result was negative. However, they used only central collisions and did not impose a cut on $`p_T`$. As I discussed above, it is necessary to select non-central but non-peripheral collisions and low $`p_T`$ pions in DCC hunts. Until experiments with such cuts are carried out, the possibility of DCC formation in ultrarelativistic heavy ion collisions will remain an open question.
## 3 WHERE DO WE GO?
RHIC is coming late this year (1999) at $`\sqrt{s}=200`$ GeV/A (A = $`{}_{}{}^{197}\mathrm{Au}`$). Hopefully, LHC will come early next century. While physical observables and methodology will remain essentially the same, energy will be much higher, momentum and invariant mass resolution will be much better (the $`e^+e^-`$ invariant mass resolution is about 20-30 times better at 1 GeV than the current CERES setup at CERN SPS), and acceptance will be much larger. The quark-gluon plasma, whose formation is not decisive at the moment, will be created with much larger probability. Since the decrease of the temperature is slowed near a phase transition irrespective of the order of the transition, if the phase transition occurs, it will be possible to separate dileptons from the (almost) constant temperature period near the critical temperature as a secondary peak or shoulder structure in the invariant mass distribution .
The system created in ultrarelativistic heavy ion collisions is very complex. In order to obtain profound understanding of the small short-lived non-static and potentially non-equilibrated system, it is mandatory to compare as many observables as possible and find correlations among them. The comparison should not be limited to event classes at a fixed collision energy, but should be among events at different collision energies. So far, there has been a large gap in the collision energies of heavy ion experiments between $`E_{\mathrm{lab}}=`$ 12 GeV/A (BNL AGS) and $`\sim `$160 GeV/A (CERN SPS). The gap between CERN SPS and RHIC is considerably large as well. From this viewpoint, the approval and inauguration of the JHF project, which is capable of accelerating heavy ions at $`E_{\mathrm{lab}}=`$ 50 GeV/A and lower, are eagerly awaited. It is also desirable to run RHIC at lower energies.
Finally, I make comments on event generators. Quite a number of event generators exist on the market . Most of them are classical, although Pauli blocking is generally taken into account. This is acceptable as a first step. However, it is not acceptable that most models do not include quarks or gluons and that many models do not include secondary collisions. This is completely contrary to the general expectation that RHIC physics is described by quark and gluon degrees of freedom and to the fact that the phase transition takes place owing to interaction among quarks and gluons. In this respect, perhaps the most ambitious attempt was Parton Cascade (or VNI) by Klaus Kinder Geiger, although it is not without serious conceptual defects yet. He tragically perished in the Swissair Flight 111 crash on September 2, 1998, but I hope that this is not the end. For better understanding of the dynamical features of RHIC and LHC physics, the direction initiated by him should be taken over and further pursued.
# Phase Fluctuations and Non-Equilibrium Josephson Effect
## Abstract
We consider a diffusive S-N-S junction with electrons in the normal layer driven out of equilibrium by external bias. We show that the non-equilibrium fluctuations of the electron density in the normal layer cause fluctuations of the phase of the order parameter in the S-layers. As a result, the magnitude of the Josephson current in the non-equilibrium junction is significantly suppressed relative to its mean field value.
When an electric current flows through a metallic sample, electrons in the metal can no longer be considered as a system in equilibrium. In particular the electronic distribution function $`f(ϵ)`$ differs considerably from the equilibrium Fermi distribution. The low temperature two-step distribution function was recently observed in tunneling experiments .
The ability to change the electronic distribution by simply applying voltage to a metallic system opens a new possibility to control the supercurrent in a S-N-S junction . The idea is based on the description of the supercurrent flow in terms of the electronic states and their occupation probabilities $`f(ϵ)`$. As a function of the phase difference of the two superconducting layers in a S-N-S junction $`\theta =\theta _1\theta _2`$ the supercurrent density can be written as
$$J(\theta )=\int dϵ\,f(ϵ)j(ϵ,\theta ),$$
(1)
where $`j(ϵ,\theta )`$ is the contribution of the states with energies between $`ϵ`$ and $`ϵ+dϵ`$. Changing the electronic distribution, one changes the probabilities $`f(ϵ)`$ and thus can change the magnitude and even the direction of the supercurrent (in other words, making a $`\pi `$-junction). Some particular cases of $`j(ϵ,\theta )`$ were considered theoretically .
It is important to realize that Eq. (1) was obtained within the mean field approximation where the phase $`\theta `$ is fixed. Quantum fluctuations of the phase of the order parameter do not induce a dramatic change in the amplitude of the Josephson energy if the conductance of the normal layer is large. The purpose of this Letter is to show that this may not be the case in the non-equilibrium situation.
We show that the non-equilibrium fluctuations in the normal layer cause the fluctuations of the phase of the order parameter in both S-layers, thus affecting the Josephson current in the junction \[the term $`j(ϵ,\theta )`$ in Eq. (1)\]. In particular, the critical current (which is proportional to the Josephson energy $`E_J`$) becomes strongly suppressed by the non-equilibrium. This effect accompanies the change of the magnitude and even of the direction of the supercurrent, which is due to the term $`f(ϵ)`$ in Eq. (1). Thus the critical current in the $`\pi `$-junction is smaller than the mean field value, as was observed in the experiment . Here, we will present a phenomenological derivation which yields the same results as a calculation based on the Keldysh technique (for a similar consideration of an S-N junction see Ref. ).
To describe the non-equilibrium fluctuations we first identify the collective modes in the junction. For simplicity we will consider an S-N-S sandwich, where each layer can be considered as a 2D film. The effects of the finite thickness of the layers will be discussed later in the Letter.
Consider a superconducting film at zero temperature. All of the excitations with energy smaller than the superconducting gap $`\mathrm{\Delta }`$ are associated with the phase $`\theta `$ of the order parameter . In the isolated S-layer, the longitudinal phase fluctuations correspond to the usual 2D plasmon with dispersion $`\omega \propto \sqrt{Q}`$. However, when a layer of normal metal is present, a collective mode with a linear dispersion relation appears . In the S-N-S junction there are two such modes, one corresponding to each S-layer.
The time evolution of phase $`\theta `$ is governed by hydrodynamic equations, which in the absence of external magnetic fields can be written as
$`\dot{n}_s^{(i)}+{\displaystyle \frac{1}{2e}}\stackrel{}{\nabla }\cdot \stackrel{}{j}_s^{(i)}=0,`$ (3)

$`\stackrel{}{j}_s^{(i)}=e\pi \hbar D_s^{(i)}\nu _s^{(i)}\mathrm{\Delta }\stackrel{}{\nabla }\theta ^{(i)},`$ (5)

$`\hbar \dot{\theta }^{(i)}=-2\left(e\phi +{\displaystyle \frac{n_s^{(i)}}{\nu _s^{(i)}}}\right),`$ (7)
where the superscript $`i=1,2`$ labels the layer, $`n_s^{(i)}`$ is the perturbation of the carrier density in S-layers, $`\stackrel{}{j}_s^{(i)}`$ is the supercurrent, and $`\phi `$ is the electric potential. We wrote the London equation (5) for a dirty superconductor and expressed the superfluid density through the diffusion coefficient $`D_s^{(i)}`$ in the normal state of the superconductor and the thermodynamic density of states per unit area in the superconductor $`\nu _s^{(i)}`$. Also in Eq. (10) we neglected the terms which describe the Josephson current. We will discuss this point later.
The electron density is normal metal is governed by the continuity equation and the Ohm’s law
$$e\dot{n}_m+\stackrel{}{\nabla }\cdot \stackrel{}{j}_m=0;\qquad \stackrel{}{j}_m=-D_m\stackrel{}{\nabla }\left(e^2\nu _m\phi +en_m\right),$$
(8)
where $`n_m`$ is the carrier density, $`\stackrel{}{j}_m`$ is the current and $`D_m`$ is the diffusion coefficient in the N-layer.
Fluctuations of the densities in three layers are coupled through the Coulomb potential
$$\phi =\int dr^{}\,V(r-r^{})\left[n_s^{(1)}(r^{})+n_s^{(2)}(r^{})+n_m(r^{})\right];$$
(9)
$$V=\frac{e^2}{r}.$$
(10)
So far we have neglected the thickness of the metallic layer, and therefore only the sum of the electron densities in the two S-layers is coupled to the density of electrons in the metal. Therefore in a strictly two-dimensional model of the S-N-S junction with two identical S-layers the Josephson current (which depends on the phase difference) is not affected by the density fluctuations in the metal. To couple the fluctuations in the normal metal to the phase difference one needs to introduce some asymmetry into the model either by having two different S-layers or by taking into account the non-zero thickness of the metallic layer. Here we chose the former (see Eq. (5)). The final results do not depend on the asymmetry explicitly, and thus are independent of this choice.
The requirement of the consistency of Eqs. (10) gives two acoustic branches of the collective mode corresponding to the sum and difference of the electron densities of the S-layers with dispersion relations $`\omega _{1,2}(Q)=\omega _{1,2}^{\prime }-i\omega _{1,2}^{\prime \prime }`$. The lifetime of both modes is finite. These modes are similar to the Schmid-Schön mode in the vicinity of the critical temperature . The only difference is that the normal excitations are not thermally activated in the superconductor itself but rather exist in the normal metallic layer close to the superconductor, however, it does not change the charge dynamics. In the latter (odd) mode the finite lifetime appears only due to the asymmetry of the sandwich $`\delta D=[D_s^{(1)}-D_s^{(2)}]/2`$
$$\omega _1^{\prime }=Q\sqrt{\frac{\pi \mathrm{\Delta }D_s}{\hbar }};\qquad \omega _1^{\prime \prime }=\frac{\pi }{2}\left(\frac{\nu _m(\delta D)^2}{\nu _sD_mD_s}\right)\frac{\mathrm{\Delta }}{\hbar },$$
(11)
where $`\nu _s=\nu _s^{(1)}+\nu _s^{(2)}`$, and $`G_{s,m}`$ denote the dimensionless conductances of the superconducting (in the normal state) and normal layers respectively: $`G_{s,m}=2\pi \hbar \sigma _{s,m}/e^2=2\pi \hbar \nu _{s,m}D_{s,m}`$. The conductances are measured in units of $`e^2/2\pi \hbar =1/(25.8\,\mathrm{k\Omega })`$. The even mode is similar to the “phason” mode found in the NS junction . The lifetime of this mode is due to the coupling with the relaxation mode in the N-layer
$$\omega _2^{\prime }=Q\left(\frac{\pi \mathrm{\Delta }D_s(\nu _s+\nu _m)}{\hbar \nu _m}\right)^{1/2};\qquad \omega _2^{\prime \prime }=\frac{\pi }{2}\left(\frac{G_s}{G_m}\right)\frac{\mathrm{\Delta }}{\hbar }.$$
(12)
Equations (12) and (11) are valid for $`\omega _i^{\prime }>\omega _i^{\prime \prime }`$. For the phason mode this condition is satisfied already at small frequencies $`\hbar \omega \gtrsim \hbar \omega _2^{\prime \prime }\simeq \mathrm{\Delta }(G_s/G_m)\ll \mathrm{\Delta }`$. The condition for the odd mode is weaker, given the smallness of the asymmetry parameter, $`(\hbar \nu _m\delta D)^2\ll G_sG_m`$.
Now let us consider what happens when a dc current is driven in the normal layer. The average currents in the metal are accompanied by the fluctuations known as the shot noise. Since the currents in the metal are coupled to those in the S-layers, it is natural to expect that in the superconductors fluctuating currents appear as well, and consequently, the phasons are generated.
To include these fluctuations in our description of the S-N-S sandwich, we add Langevin sources $`\delta \stackrel{}{j}_l`$ to the current in the normal metal. Equation (8) takes the form
$$\stackrel{}{j}_m=D_m\stackrel{}{}\left(e^2\nu _m\phi +en_m\right)+\delta \stackrel{}{j}_l.$$
(13)
The Gaussian fluctuations $`\delta \stackrel{}{j}_l`$ are described by their correlator. Out of equilibrium, when the energy relaxation is negligible ($`\tau _ϵ\to \mathrm{\infty }`$), the electronic distribution function $`f(ϵ)`$ in the normal metal is the two-step function
$$f_{ne}(ϵ)=\frac{1}{2}\left[\eta \left(-ϵ+\frac{eU}{2}\right)+\eta \left(-ϵ-\frac{eU}{2}\right)\right],$$
(14)
where $`\eta (x)`$ is the Heaviside function. In that case the non-equilibrium part of the correlator of the fluctuations can be written as
$$\langle \delta j_l^\alpha \delta j_l^\beta \rangle _{\omega ,Q}=\frac{1}{2}\delta _{\alpha \beta }e^2D_m\nu _m(eU-\hbar |\omega |)\eta (eU-\hbar |\omega |).$$
(15)
The difference of the superconducting phases of the S-layers, $`\theta =\theta _1-\theta _2`$, in the presence of the current fluctuations $`\delta \stackrel{}{j}_l`$ can be determined from the system of Eqs. (10) and (13) to first order in the asymmetry
$$\delta \theta =\frac{1}{2e\hbar }\frac{\delta D}{\nu _mD_m}\frac{\pi \mathrm{\Delta }\,\omega \,\delta \stackrel{}{j}_l\cdot \stackrel{}{Q}}{(\omega ^2-\omega _1^2(Q))(\omega ^2-\omega _2^2(Q))}.$$
(16)
Therefore, the correlator of the phase fluctuations has two well pronounced poles corresponding to the two collective modes in the S-N-S sandwich and is proportional to the applied voltage $`U`$.
To find the effect of the phase fluctuations on the Josephson current, we need the phase fluctuations at a single point. With the help of Eqs. (15) and (16), we find, to lowest order in the asymmetry,
$$\langle \delta \theta ^2\rangle _\omega =\int \frac{d^2Q}{\left(2\pi \right)^2}\langle \delta \theta ^2\rangle _{\omega ,Q}=\frac{1}{G_s\mathrm{\Delta }}\frac{eU}{\hbar |\omega |}\eta (eU-\hbar |\omega |),$$
(17)
at $`\hbar |\omega |<eU`$ and $`\langle \delta \theta ^2\rangle _\omega =0`$ at $`\hbar |\omega |>eU`$. Equation (17) is valid provided $`\hbar \omega \gg G_s\mathrm{\Delta }/G_m`$.
Note that in Eq. (17) the asymmetry parameter $`\delta D`$ has disappeared, so that only the subleading terms depend on the asymmetry parameter $`\delta D`$. The leading term Eq. (17) is the contribution of the odd mode Eq. (11) to the integral in Eq. (17). This term dominates because when both modes Eq. (11) and Eq. (12) are well defined, the lifetime of the odd mode is always longer than the lifetime of the even phason mode (provided the asymmetry parameter $`\delta D\ll D_m,D_s`$). However, the asymmetry parameter can not be arbitrarily small, as the lifetime of the odd mode should be less than the escape time (which is the time it takes to remove a phason from the system and thus is the characteristic relaxation time). As we consider the S-N-S sandwich of infinite size, the escape time is practically set to infinity, which is the reason the asymmetry disappears from the odd mode contribution Eq. (17). In this case, the correlator Eq. (17) is independent of the particular choice of asymmetry. The difference between various asymmetry realizations becomes important if one considers a situation when the odd mode is still ballistic, while the even phason mode is already damped. We will not discuss this situation in this Letter.
We have found that in the presence of the normal layer the phase fluctuations in the superconductor are large due to the large number ($`eU/\hbar \omega `$) of the phasons. Now we are interested in the effect of these fluctuations on the Josephson current. In equilibrium it is determined by the difference of the time-independent phases of the superconducting order parameter in the two S-layers, $`J(\theta )\propto \mathrm{sin}(\theta ^{(0)})`$. In the non-equilibrium situation one has to average over the phase fluctuations, so the Josephson current becomes modified by a phase factor.
The supercurrent flow in Josephson junctions was analyzed by many authors . Equation (1) expresses the Josephson current density as a sum of contributions of individual electronic states weighted with their distribution function. The exact form of these contributions $`j(ϵ,\theta )`$ depends on the boundary conditions at the S-N interface . In the case of diffusive junction the supercurrent density in the lowest non-vanishing order of transparency can be written as
$`J(\theta )={\displaystyle \frac{2e}{\hbar }}E_J^{(0)}L_f\mathrm{sin}(\theta )\langle e^{i\delta \theta (0)}\rangle _\theta ,`$ (19)

$`L_f=\mathrm{Re}{\displaystyle \int \frac{dϵ}{\sqrt{iϵE_T}}\frac{f(ϵ)}{\mathrm{sinh}\sqrt{{\displaystyle \frac{iϵ}{E_T}}}}},`$ (21)
where the transverse Thouless energy is $`E_T=\hbar D_n/d^2`$ ($`d`$ is the width of the normal layer) and the overall scale is given by the bare Josephson energy $`E_J^{(0)}\propto G_1G_2/\nu _m`$, where $`G_1`$ and $`G_2`$ are the tunneling conductances (in units of $`e^2/\hbar `$) per unit area of the two N-S interfaces in the junction.
Strictly speaking, Eq. (19) is exact only for a phase fluctuation homogeneous in space. The effect of the inhomogeneity can be estimated as $`\hbar D_sQ^2/\mathrm{\Delta }`$, while the main contribution comes from the odd phason mode with $`D_sQ^2\sim \hbar \omega ^2/\mathrm{\Delta }`$, and so the correction is of the order $`\hbar ^2\omega ^2/\mathrm{\Delta }^2`$ and is small since we consider frequencies $`\hbar \omega \le eU\ll \mathrm{\Delta }`$.
In equilibrium $`f(ϵ)`$ is given by the Fermi distribution. There are no divergent phase fluctuations, so the factor $`e^{i\delta \theta (0)}_\theta `$ in Eq. (19) is just a number which can be incorporated in the definition of $`E_J^{(0)}`$. At low temperatures the dominant contribution to the integral in Eq. (21) is due to small frequencies. At $`E_TT`$ the critical current is given by
$$J_c^{(0)}=\frac{2e}{\hbar }E_J^{(0)}\mathrm{ln}\left(\frac{E_T}{T}\right).$$
(22)
In the non-equilibrium situation the critical current (22) is modified by two effects. First, the distribution function deviates from the Fermi distribution. At low temperatures it is given by the two-step function (14). The applied voltage $`eU`$ sets the lower limit for the integration in Eq. (21) and for the real part of the integral we obtain
$`L_f=\mathrm{ln}\left({\displaystyle \frac{\mathrm{tanh}^2\sqrt{\frac{eU}{8E_T}}+\mathrm{tan}^2\sqrt{\frac{eU}{8E_T}}}{1+\mathrm{tanh}^2\sqrt{\frac{eU}{8E_T}}\mathrm{tan}^2\sqrt{\frac{eU}{8E_T}}}}\right).`$ (23)
Increasing the applied voltage changes the sign of the logarithm in Eq. (23) (see the dashed line in Fig. 1), corresponding to the $`\pi `$-junction.
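The location of this sign change follows directly from Eq. (23). A minimal numerical sketch (the bias is measured in units of $`E_T`$; the analytic crossing, where $`\mathrm{tan}^2\sqrt{eU/8E_T}=1`$, sits at $`eU=\pi ^2E_T/2`$):

```python
import numpy as np

# Sketch of Eq. (23): the supercurrent amplitude L_f changes sign at a finite
# bias, signalling the onset of the pi-junction. u = eU/E_T.
def L_f(u):
    s = np.sqrt(u / 8.0)
    th2, tn2 = np.tanh(s) ** 2, np.tan(s) ** 2
    return np.log((th2 + tn2) / (1.0 + th2 * tn2))

u = np.linspace(0.5, 15.0, 200_000)
vals = L_f(u)
cross = u[np.where(np.sign(vals[:-1]) != np.sign(vals[1:]))[0]]
print(f"sign change at eU/E_T = {cross[0]:.3f} (analytic pi^2/2 = {np.pi**2/2:.3f})")
```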
The second effect is the appearance of the fluctuation phase factor in Eq. (19). To evaluate the average over $`\theta `$ we use the phase correlator Eq. (17), Fourier transformed to the time domain
$$\langle e^{i\delta \theta (0)}\rangle _\theta =\mathrm{exp}\left(-\int \frac{d\omega }{2\pi }\langle \delta \theta ^2\rangle _\omega \right).$$
(24)
The frequency integration in Eq. (24) diverges logarithmically, $`\langle \delta \theta ^2\rangle \propto \mathrm{ln}(eU/ϵ_0)`$. At large frequencies it is cut off at $`\hbar \omega =eU`$, because at $`\omega >eU`$ there are no classical fluctuations, see Eq. (17).
The infrared divergency in Eq. (24) is due to the fact that we have neglected the Josephson current in the continuity equation (3). When taken into account, it leads to the opening of a gap in the spectrum of the collective modes, $`ϵ_0=\sqrt{E_J/\nu _m}`$ (where $`E_J`$ is the Josephson energy per unit area). This gap provides the infrared cutoff in Eq. (24). It is essential that in the non-equilibrium situation the Josephson energy decreases with the increase of the bias voltage $`U`$ and vanishes at the critical point. The actual value of $`E_J`$ at a given $`U`$ should be determined from the self-consistency equation \[obtained by substitution of Eq. (24) into Eq. (19)\]
$$E_J=E_J^{(0)}L_f\mathrm{exp}\left[-\frac{eU}{\pi \mathrm{\Delta }G_s}\mathrm{ln}\left(eU\sqrt{\frac{\nu _m}{E_J}}\right)\right].$$
(25)
Solving Eq. (25) we obtain
$`E_J(U)=E_J^{(0)}L_f^{\frac{1}{1-\alpha }}\left({\displaystyle \frac{E_J^{(0)}}{(eU)^2\nu _m}}\right)^{\frac{\alpha }{1-\alpha }}.`$ (26)
where $`\alpha =eU/(2\pi \mathrm{\Delta }G_s)`$. The expression Eq. (26) is valid when $`eU>\sqrt{E_J/\nu _m}`$. In this case the resulting $`E_J`$ becomes suppressed by the non-equilibrium (compared to its mean field value $`E_J^{(0)}L_f`$).
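The self-consistency equation (25) is also easy to solve by fixed-point iteration. A minimal sketch comparing the iterated solution with the closed form (26); units with $`\nu _m=1`$ and the values of $`E_J^{(0)}`$, $`L_f`$, and $`\pi \mathrm{\Delta }G_s`$ are illustrative assumptions of this sketch:

```python
import numpy as np

# Sketch: solve Eq. (25) for E_J by fixed-point iteration and compare
# with the closed form Eq. (26).
E_J0, L_f, nu_m = 1.0, 0.5, 1.0
piDeltaGs = 50.0                        # assumed value of pi*Delta*G_s

def solve(eU, n_iter=200):
    alpha = eU / (2.0 * piDeltaGs)      # alpha = eU/(2*pi*Delta*G_s)
    E_J = E_J0 * L_f                    # start from the mean-field value
    for _ in range(n_iter):             # iterate Eq. (25)
        E_J = E_J0 * L_f * np.exp(-2.0 * alpha * np.log(eU * np.sqrt(nu_m / E_J)))
    closed = (E_J0 * L_f ** (1/(1-alpha))
              * (E_J0 / (eU**2 * nu_m)) ** (alpha/(1-alpha)))  # Eq. (26)
    return E_J, closed

for eU in (2.0, 5.0, 10.0):
    it, cf = solve(eU)
    print(f"eU = {eU:4.1f}: iterated E_J = {it:.4f}, closed form = {cf:.4f}")
```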
We now can write down the non equilibrium critical current as
$$J_c=\frac{2e}{\hbar }E_J(U),$$
(27)
where the renormalized Josephson energy is given by Eq. (26). The dependence of the critical current on the bias voltage is illustrated on Fig. 1.
The expression for the Josephson energy Eq. (26) is the main qualitative result of the paper. It shows that the effects of non-equilibrium are not limited to the change of the electronic distribution function. In addition, one has to take into account the fluctuations of the superconducting phases in both S-layers of the S-N-S junction. The phase fluctuations result in two observable effects. First, the Josephson energy at the critical point is not described by the mean field power law, but exhibits the non-analytic behavior Eq. (26), illustrated by the inset of Fig. 1. Second, when the bias exceeds the critical value (so that the critical current becomes negative) the Josephson energy is further suppressed relative to the mean field value.
We should warn the reader that Eqs. (23), (26) were obtained for the simplest model of the S-N-S junction, namely the 2D sandwich. The spectrum of collective modes is sensitive to the geometry of the system, therefore, our results are not expected to describe the experimental data (e.g. of Ref. ) in detail. However, we have presented a strong evidence that the acoustic collective modes, which are present in the junction in the non-equilibrium, can be observed by measuring the suppression of the Josephson energy.
In conclusion, we showed that the non-equilibrium fluctuations of the superconducting phases in the S-N-S junction lead to the non-analytic behavior of the Josephson energy at the critical point and to its suppression in the region of the negative critical current, providing a possibility to observe the acoustic collective modes (phasons).
We acknowledge helpful conversations with L.I. Glazman. I.A. is A.P. Sloan and Packard research fellow.
|
no-problem/9906/quant-ph9906066.html
|
ar5iv
|
text
|
# Entanglement Swapping using Continuous Variables
## Abstract
We investigate the efficacy with which entanglement can be teleported using a continuous measurement scheme. We show that by using the correct gain for the classical channel the degree of violation of locality that can be demonstrated (using a CH type inequality) is not a function of the level of entanglement squeezing used in the teleportation. This is possible because a gain condition can always be chosen such that passage through the teleporter is equivalent to pure attenuation of the input field.
(submitted to PRL 22nd October 1998)
It is remarkable that non-local entanglement can be established between particles that have never interacted directly. Here “non-local” refers to the inability of local hidden variable theories to predict the observed correlations. This “entanglement swapping” may be useful for establishing non-local correlations over very large distances, and for other applications . Recently Pan et al have demonstrated entanglement swapping of the polarization entanglement created by type II parametric down conversion experimentally. In all discussions and experiments to date discrete measurements and manipulations are made in order to transfer the non-local correlations. For example in the optical experiments, photon coincidences operate photo-current gates. However entanglement swapping is really a special case of teleportation and in work by Vaidman and Braunstein and Kimble , schemes for the teleportation of continuous quantum variables have been proposed. In these schemes continuous measurements and manipulations are used. A preliminary experimental demonstration of continuous variable teleportation of a coherent state has recently been presented by Furusawa et al . An important question to ask is; can non-local entanglement be swapped or teleported using a continuous measurement scheme?
In this paper we show explicitly that this can be achieved. This effect represents a completely new way of transferring non-local information. Of particular practical significance is that the conditions for achieving non-local effects are not stringent.
The optical arrangement we will investigate is shown in Fig. 1. It combines the basic arrangement of entanglement swapping with a 2-mode generalization of the continuous variable teleportation scheme . We consider a non-collinear type II optical parametric oscillator operating at low pump efficiency (OPO1) as our source of entangled photons . In the Heisenberg picture the two outputs, $`A`$ and $`B`$, can be decomposed into their horizontal ($`h`$) and vertical ($`v`$) linear polarization components by
$`A=A_{(h)}\widehat{h}+A_{(v)}\widehat{v}`$ (1)
$`B=B_{(h)}\widehat{h}+B_{(v)}\widehat{v}`$ (2)
where $`\widehat{h}`$ and $`\widehat{v}`$ are orthogonal unit vectors,
$`A_{(h,v)}=A_{0(h,v)}\mathrm{cosh}\chi _1+B_{0(v,h)}^{\dagger }\mathrm{sinh}\chi _1,`$ (3)

$`B_{(h,v)}=B_{0(h,v)}\mathrm{cosh}\chi _1+A_{0(v,h)}^{\dagger }\mathrm{sinh}\chi _1,`$ (4)
$`A_0`$ and $`B_0`$ are the vacuum inputs to OPO1, and $`\chi _1`$ is its conversion efficiency. We have assumed the bandwidth of the OPO is broad compared to our detection bandwidth and that pump depletion can be ignored. The output state of the combined system in the number state basis is given by
$`{\displaystyle \frac{1}{\sqrt{2}\mathrm{cosh}(\chi _1)}}{\displaystyle \underset{n=0}{\overset{\mathrm{\infty }}{\sum }}}(\mathrm{tanh}\chi _1)^n\left(|n_h,n_v\rangle +|n_v,n_h\rangle \right)`$ (5)
where
$$|n_i,n_j\rangle \equiv |n_i\rangle _A|n_j\rangle _B$$
(6)
and $`n_h`$ and $`n_v`$ are the photon number in the horizontal and vertical polarizations respectively.
This reduces to the number-polarization entangled state
$`{\displaystyle \frac{\chi _1}{\sqrt{2}}}\left(|1_h,1_v\rangle +|1_v,1_h\rangle \right)+|0\rangle `$ (7)
for low pump efficiency (i.e., $`\chi _1\ll 1`$). The state given by Eq 7 violates the Clauser-Horne (CH) type inequality
$`S={\displaystyle \frac{R(\theta _A,\theta _B)-R(\theta _A,\theta _B^{\prime })+R(\theta _A^{\prime },\theta _B)+R(\theta _A^{\prime },\theta _B^{\prime })}{R(\theta _A^{\prime },\mathrm{\infty }_B)+R(\mathrm{\infty }_A,\theta _B)}}\le 1`$ (8)
where $`R(\theta _A,\theta _B)`$ is the photon coincidence count rate between polarisation $`\theta _A`$ of beam $`A`$ and $`\theta _B`$ of $`B`$, and $`R(\theta _A,\mathrm{\infty }_B)`$ is the equivalent rate counting both polarisations of beam $`B`$. The maximum violation occurs for $`\theta _A=\pi /8`$, $`\theta _B=\pi /4`$, $`\theta _A^{\prime }=3\pi /8`$ and $`\theta _B^{\prime }=0`$, when $`S\approx 1.21`$. Strong violations of locality have been observed experimentally with such a state .
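For this state the coincidence rate between polarizer angles is $`R(\theta _A,\theta _B)\propto \mathrm{sin}^2(\theta _A+\theta _B)`$; with normalized rates such that $`R(\theta ,\mathrm{\infty })=1/2`$, the quoted angles indeed give $`S\approx 1.21`$. A minimal sketch (the $`\mathrm{sin}^2`$ rate law and the normalization are assumptions of this sketch, not stated in the text):

```python
import numpy as np

# Sketch: evaluate the CH ratio of Eq. (8) for the entangled state of Eq. (7),
# assuming R(tA, tB) = sin^2(tA + tB)/2 and R(t, both polarisations) = 1/2.
def R(tA, tB):
    return 0.5 * np.sin(tA + tB) ** 2

tA, tB, tAp, tBp = np.pi/8, np.pi/4, 3*np.pi/8, 0.0
num = R(tA, tB) - R(tA, tBp) + R(tAp, tB) + R(tAp, tBp)
den = 0.5 + 0.5          # R(tAp, inf_B) + R(inf_A, tB)
print(f"S = {num/den:.4f}  (local realism requires S <= 1)")
```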
Now we consider teleporting, or swapping the entanglement of, one of the beams ($`B`$) from our non-local source using a continuous variable method. We will then investigate the correlations between the teleported beam and beam $`A`$ and determine under what circumstances they still violate the CH inequality. The teleportation is achieved using a second type II OPO (OPO2). The output beams of OPO2, $`C`$ and $`D`$, are given by analogous expressions to those of OPO1 (Eq.2,4). The conversion efficiency of OPO2 is $`\chi _2`$. Beam $`B`$ is split into its two polarizations components ($`B_h`$ and $`B_v`$) at a polarizing beam-splitter (see Fig. 1). Similarly beam $`C`$ is split into $`C_h`$ and $`C_v`$. The horizontally polarized component of OPO1 ($`B_h`$) is mixed with the horizontally polarized component from OPO2, $`C_h`$, on a 50:50 beamsplitter. The outputs of the beamsplitter are directed to two homodyne detection systems which measure the phase ($`X^{}`$) and amplitude ( $`X^+`$) quadratures of the field. Similarly $`B_v`$ and $`C_v`$ are mixed and their quadrature amplitudes detected. The resulting photocurrents are proportional to
$$X_{(h,v)}^\pm =\sqrt{1-\eta }X_{\delta (h,v)}^\pm +\sqrt{\eta /2}(X_{B(h,v)}^\pm \pm X_{C0(h,v)}^\pm \mathrm{cosh}\chi _2+X_{D0(v,h)}^\pm \mathrm{sinh}\chi _2)$$
(9)
where, for example, $`X_B^{-}=i(B-B^{\dagger })`$ and $`X_B^+=B+B^{\dagger }`$. The operators $`X_{\delta (h,v)}^\pm `$ come from vacuum modes introduced by losses in the homodyne systems, which are assumed to have efficiencies $`\eta `$. The photo-currents are then amplified and fed-forward to the interferometric modulation systems (IMS) depicted in Fig. 2 which act on the individual polarization components of the second beam from OPO2, $`D_h`$ and $`D_v`$. The photocurrents from the detection of the horizontally polarized beams are used to modulate $`D_v`$ whilst the photo-currents from the detection of the vertically polarized beams are used to modulate $`D_h`$. The effect of the IMS’s are to displace the amplitudes of the beams by coupling in power from local oscillator beams (LO). The coupling is achieved via electro-optic modulators (EOM) in the interferometer arms. Provided the phase shifts ($`\varphi _{v,h}`$) introduced by the EOM’s are small, the output of the IMS’s ($`D_h^{\prime }`$ and $`D_v^{\prime }`$) are given by
$$D_{(h,v)}^{}=D_{(h,v)}+\overline{E}\varphi _{v,h}$$
(10)
where $`\overline{E}`$ is the coherent amplitude of the LO. In general we have
$$\varphi _{v,h}(t)=\int _0^tk^+\left(u\right)X_{v,h}^+\left(t-u\right)du+\int _0^tk^{-}\left(u\right)X_{v,h}^{-}\left(t-u\right)du$$
(11)
where $`k^\pm `$ contains various constants of proportionality as well as the time response of the feedforward electronics. However, if we restrict our attention to RF frequencies (relative to the local oscillator) for which the frequency response of the electronics is flat we can set
$$k^\pm (u)=\frac{1}{\sqrt{2}\overline{E}}\lambda ^\pm \delta (u)$$
(12)
where $`\lambda ^\pm `$ is the feedforward gain, and so
$$D_{(h,v)}^{\prime }=D_{(h,v)}+\frac{1}{\sqrt{2}}\lambda ^+X_{v,h}^++\frac{1}{\sqrt{2}}\lambda ^{-}X_{v,h}^{-}$$
(13)
Finally the beams are recombined using a polarizing beamsplitter and a half-wave plate is used to rotate horizontal polarizations into vertical and vice versa. The output beam is
$$D^{\prime }=D_{(v)}^{\prime }\widehat{h}+D_{(h)}^{\prime }\widehat{v}$$
(14)
Setting ($`\lambda =\lambda ^+=i\lambda ^{-}`$) and assuming unit detection efficiency ($`\eta =1`$) we obtain
$`D^{\prime }`$ $`=`$ $`\left(B_{(h)}+(\mathrm{sinh}\chi _2-\lambda \mathrm{cosh}\chi _2)C_{0h}^{\dagger }+(\mathrm{cosh}\chi _2-\lambda \mathrm{sinh}\chi _2)D_{0v}\right)\widehat{h}`$ (16)

$`+\left(B_{(v)}+(\mathrm{sinh}\chi _2-\lambda \mathrm{cosh}\chi _2)C_{0v}^{\dagger }+(\mathrm{cosh}\chi _2-\lambda \mathrm{sinh}\chi _2)D_{0h}\right)\widehat{v}`$
In the limit of strong squeezing ($`\chi _2\gg 1`$ such that $`\mathrm{cosh}\chi _2\approx \mathrm{sinh}\chi _2`$) and unity gain ($`\lambda =1`$) beams $`B`$ and $`D^{\prime }`$ become equivalent. It is clear that in this limit the beams $`A`$ and $`D^{\prime }`$ will violate the CH inequality under the same conditions for which $`A`$ and $`B`$ violated it, showing that the non-locality has been teleported. This is shown in Fig. 3, where Eq. 8 is evaluated as a function of polarizer angle with beams $`A`$ and $`D^{\prime }`$ as inputs, unity gain and 99% squeezing.
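As a quick numerical check (a minimal sketch added here, not part of the original analysis), the two vacuum-noise coefficients in Eq. 16 fall off as $`e^{-\chi _2}`$ at unity gain, which is why strong squeezing reproduces $`B`$ on $`D^{\prime }`$:

```python
import numpy as np

def vacuum_coeffs(chi2, lam):
    """Coefficients of the C0^dagger and D0 vacuum terms in Eq. 16."""
    return (np.sinh(chi2) - lam * np.cosh(chi2),
            np.cosh(chi2) - lam * np.sinh(chi2))

for chi2 in (1.0, 2.0, 4.0):
    c, d = vacuum_coeffs(chi2, lam=1.0)  # unity gain
    print(f"chi2={chi2}: {c:+.4f} {d:+.4f}  (e^-chi2 = {np.exp(-chi2):.4f})")
```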
Very high levels of squeezing are difficult to achieve, so it is important to ascertain what levels of squeezing are required to achieve non-local teleportation. Indeed, if we remain at unity gain (the operating point discussed in Ref. and used by Furusawa et al. ), Fig. 3 shows that non-locality is lost for squeezing less than about 80%. Surprisingly, though, we are able to recover non-local behavior for low levels of squeezing if we reduce the gain in the feedforward loops. This represents a new and potentially useful operating point.
We can write an analytical relationship between the value of $`S`$ that could be obtained from photon correlation measurements of beams $`A`$ and $`B`$, $`S_{A,B}`$, and that which could be obtained for the same measurements of beams $`A`$ and $`D^{\prime }`$, $`S_{A,D^{\prime }}`$, in the limit that $`\chi _1\ll 1`$. We must calculate photon coincidence count rates between beams $`A`$ and $`D^{\prime }`$ such as
$$R(\theta _A,\theta _{D^{\prime }})=\langle in|E_{D^{\prime }}^{\dagger }(\theta _{D^{\prime }})E_A^{\dagger }(\theta _A)E_A(\theta _A)E_{D^{\prime }}(\theta _{D^{\prime }})|in\rangle $$
(17)
where
$`E_A(\theta _A)`$ $`=`$ $`A_h\mathrm{cos}\theta _A+A_v\mathrm{sin}\theta _A`$ (18)
$`E_{D^{\prime }}(\theta _{D^{\prime }})`$ $`=`$ $`D_h^{\prime }\mathrm{cos}\theta _{D^{\prime }}+D_v^{\prime }\mathrm{sin}\theta _{D^{\prime }}`$ (19)
and $`|in\rangle `$ is given by Eq. 7. After some algebra one finds
$$R(\theta _A,\theta _{D^{\prime }})=\lambda ^2\eta R(\theta _A,\theta _B)+(N^2+\lambda ^2(1-\eta ))/2$$
(20)
where
$$N=\mathrm{sinh}\chi _2-\lambda \sqrt{\eta }\mathrm{cosh}\chi _2$$
(21)
Similarly we find
$$R(\theta _A,\infty _{D^{\prime }})=\lambda ^2\eta R(\theta _A,\infty _B)+(N^2+\lambda ^2(1-\eta ))$$
(22)
and
$$R(\infty _A,\theta _{D^{\prime }})=\lambda ^2\eta R(\infty _A,\theta _B)+(N^2+\lambda ^2(1-\eta ))$$
(23)
Putting these results together as per Eq. 8 we obtain
$$S_{A,D^{\prime }}=\frac{\frac{N^2}{\lambda ^2}+\eta S_{A,B}+1-\eta }{\frac{2N^2}{\lambda ^2}+2-\eta }$$
(24)
Consider first unit detection efficiency ($`\eta =1`$). Eq. 24 shows that the non-local correlation is preserved by the teleportation for any level of squeezing provided we set
$$\lambda _{op}=\mathrm{tanh}\chi _2$$
(25)
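A minimal numerical sketch of this behavior, scanning the reconstructed Eq. 24 over the feedforward gain (the values $`S_{A,B}`$ = 1.2 and $`\chi _2`$ = 0.55 are illustrative assumptions, not taken from the text):

```python
import numpy as np

def S_AD(S_AB, chi2, lam, eta=1.0):
    """Eq. 24 for the teleported CH parameter, with N from Eq. 21."""
    N = np.sinh(chi2) - lam * np.sqrt(eta) * np.cosh(chi2)
    return (N**2 / lam**2 + eta * S_AB + 1 - eta) / (2 * N**2 / lam**2 + 2 - eta)

S_AB, chi2 = 1.2, 0.55                      # assumed illustrative values
lams = np.linspace(0.05, 1.5, 4001)
print(lams[np.argmax(S_AD(S_AB, chi2, lams))], np.tanh(chi2))  # optimum ~ tanh(chi2)
print(S_AD(S_AB, chi2, np.tanh(chi2)))      # equals S_AB: violation fully preserved
```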
This effect is shown in Fig. 4, where the maximum of $`S_{A,D^{\prime }}`$ is plotted against the feedforward gain, $`\lambda `$, for various levels of squeezing. As the squeezing is reduced, equal violations of locality are still achieved at lower levels of gain. The range of feedforward gains for which non-local teleportation is achieved actually broadens to a maximum value as the squeezing is reduced, before narrowing again. The mechanism for this surprising result can be understood by examining the action of the teleporter on an arbitrary, single mode input field, $`a_{in}`$. Under ideal conditions the output field is given by
$$a_{out}=\lambda a_{in}+(\mathrm{cosh}\chi -\lambda \mathrm{sinh}\chi )B_0-(\lambda \mathrm{cosh}\chi -\mathrm{sinh}\chi )A_0^{\dagger }$$
(26)
Notice that photons are added to the output through the action of the creation operator, $`A_0^{\dagger }`$. These spurious photons are detrimental to the observation of non-local correlations. However, no photons are added to the output if the gain condition $`\lambda _{op}=\mathrm{tanh}\chi `$ is chosen, as the coefficient of $`A_0^{\dagger }`$ then goes to zero. The output is then given by
$$a_{out}=\lambda _{op}a_{in}+\sqrt{1-\lambda _{op}^2}B_0$$
(27)
Eq. 27 is formally equivalent to pure attenuation by a factor $`(1-\lambda _{op}^2)`$. Thus when the teleporter is operated with this gain the output beam $`D^{\prime }`$ is simply an attenuated version of $`B`$. Because $`S`$ is a normalized quantity, determined by a ratio of coincidence counts, attenuation does not reduce it.
What we observe in Fig. 4 could be considered a smooth transition between teleportation, when the teleporting OPO has strong squeezing, and continuous variable entanglement swapping, when the teleporting OPO has a conversion efficiency similar to that of the source OPO. The teleportation limit is characterized by the exact reproduction of the state of $`B`$ on $`D^{\prime }`$. The beams $`A`$ and $`D^{\prime }`$ are in the same polarization-number entangled state as $`A`$ and $`B`$ were originally. On the other hand, in the entanglement swapping limit, although every photon in $`D^{\prime }`$ has become polarization entangled with a corresponding photon in beam $`A`$, the number entanglement has become strongly diluted. This is due to the effective attenuation, which leaves many unpaired photons in beam $`A`$. The joint state of beams $`A`$ and $`D^{\prime }`$ is now strongly mixed. This situation has been referred to by some authors as "a posteriori" teleportation .
Homodyne detection losses will reduce and, for $`\eta \le 1/S_{A,B}`$, eventually destroy the non-local effects. At the optimum gain condition (now $`\lambda _{op}=\mathrm{tanh}\chi _2/\sqrt{\eta }`$) this means the homodyne detection efficiencies must be better than about 83%. This limit is independent of the amount of squeezing. However, reducing the squeezing of OPO2 increases the effective attenuation at the optimum gain condition and hence reduces the coincidence count rate. As a result, longer counting times are required to observe non-locality. This reduction in signal to noise is typical of entanglement swapping and is an unavoidable consequence of operating below unity gain . Nevertheless, we believe an experimental demonstration is feasible with current technology. For example, with $`\eta =0.9`$ and 50% squeezing ($`\chi _2=0.34`$) we find $`S_{A,D^{\prime }}=1.08`$ with coincidence count rates reduced to about 10% of their unteleported values.
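These numbers can be checked with a short script; the input $`S_{A,B}`$ = 1.21 is our assumption for a near-maximal unteleported violation, chosen so that the quoted figures are reproduced:

```python
import numpy as np

def S_opt(S_AB, eta):
    """Eq. 24 at the optimum gain lambda = tanh(chi2)/sqrt(eta), where N = 0."""
    return (eta * S_AB + 1 - eta) / (2 - eta)

S_AB = 1.21                      # assumed near-maximal violation for beams A, B
print(S_opt(S_AB, eta=0.9))      # ~1.08, as quoted
print(1 / S_AB)                  # ~0.83: the efficiency floor for keeping S > 1
chi2 = 0.34
print(np.exp(-2 * chi2))         # ~0.5, i.e. 50% squeezing
print(np.tanh(chi2)**2)          # lambda_op^2 * eta ~ 0.11: rates drop to ~10%
```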
In summary we have shown that it is possible to teleport the non-local correlations associated with number-polarization entanglement using a continuous variable scheme. The non-local correlations can be teleported for any level of squeezing in the teleporting OPO (OPO2). In general the best operating point for teleportation of the entanglement is where the output of the teleporter is simply an attenuated version of the input beam. This operating point is clearly of importance for a large range of superposition and entangled state inputs.
We thank S. L. Braunstein for stimulating discussions. This work was supported by the Australian Research Council.
# HST WFPC2 Imaging of Three Low Surface Brightness Dwarf Elliptical Galaxies in the Virgo Cluster
## 1 Introduction
Little is known or understood about the current stellar populations and/or star formation histories of low surface brightness (LSB) dwarf elliptical (dE) galaxies. What we do know from various studies (e.g. Sung et al. 1998; Jerjen & Dressler 1997; Secker 1996; Durrell et al. 1996; Meylan & Prugniel 1994; Lee, Freedman, & Madore 1993; Peterson & Caldwell 1993; Impey, Bothun & Malin 1988; Caldwell 1987; Caldwell & Bothun 1987; Bothun et al. 1986; Kormendy 1985, 1987; Bothun et al. 1985; Bothun & Caldwell 1984) can be summarized as follows (see Ferguson & Binggeli 1994 for a fuller review):
1) LSB dEs in the Virgo and Fornax clusters generally define a tight surface brightness-magnitude relation (Secker & Harris 1996). This relation is driven by the tendency for dE surface brightness profiles to be extremely well fit by an exponential function, coupled with a near constancy of the disk scale length ($`\alpha `$ $`\sim `$ 0.9 $`\pm `$ 0.1 kpc) (see Bothun, Caldwell & Schombert 1989; Caldwell & Bothun 1987; Young and Currie 1995). Thus variations in luminosity are driven solely by variations in central surface brightness. In a simple universe there would also be a corresponding color vs. surface brightness relation, with the lower luminosity dEs being redder than the higher luminosity ones. This would make the surface brightness-magnitude relation merely a fading sequence. Naturally, things are more complicated than this, as no color-central surface brightness relation has been observed for any sample. In fact, the available data actually define a relation in the opposite sense, namely that dEs with the lowest central surface brightness are the bluest (see Figure 3 in Bothun, Impey & Malin 1991).
2) There is a small but important component of very LSB dEs with large scale lengths that strongly deviate from the standard surface brightness-magnitude relation (see Impey, Bothun & Malin 1988; Bothun, Impey, & Malin 1991; Caldwell et al. 1998; O’Neil 1997). Though there is no difference in mean color, these very diffuse dEs may be fundamentally different from the other dEs. Some of the more extreme examples in this class reach central surface brightnesses as low as $`\mu _B`$(0) = 26.0 mag arcsec<sup>-2</sup> but have scale lengths of $`\sim `$1.5 kpc.
3) An appreciable fraction of LSB dEs have conspicuous nuclei. Spectroscopy (e.g. Bothun & Mould 1988; Brodie & Huchra 1991; Peterson & Caldwell 1993; Held & Mould 1994) indicates a stellar population similar to that of metal-rich galactic globulars but with stronger Balmer line equivalent widths, perhaps indicating a lower mean age. In general there is little difference in color between most nuclei and the surrounding envelope. Whether these nuclei are mini-bulges (e.g. r<sup>1/4</sup> components) or the site of a secondary star formation event is currently unclear.
4) Most dEs have little neutral hydrogen, suggesting that substantial gas loss may have occurred as the result of baryonic blowout in shallow potentials due to energy input from supernovae (e.g. Dekel & Silk 1986; Vader 1987; Silk, Wyse & Shields 1987; Spaans & Norman 1997).
5) While some possible “transition” objects have been identified on their way to becoming gas poor dEs (e.g. Meurer, Mackie & Carignan 1994; Knezek, Sembach & Gallagher 1997; Vader & Chaboyer 1994; Sage et al. 1992; Conselice & Gallagher 1998), it is fairly unclear what their present evolutionary nature is. Only a handful of these candidate transition galaxies exist, compared to the relatively large numbers of dEs in clusters, suggesting that whatever evolutionary process has produced dEs is no longer ongoing with much frequency.
6) The number density of dEs in groups and clusters seems to be correlated with the total cluster luminosity, in the sense that larger, brighter clusters (e.g. Virgo) have significantly more dEs than fainter clusters such as Fornax (see Ferguson 1991; Secker & Harris 1996). This is a compelling result which strongly suggests that some quite macroscopic physical event is responsible for the production of dEs in clusters. Indeed, very deep studies of the Coma cluster suggest that there may be thousands of dEs in that environment (Ulmer et al. 1996; Bernstein et al. 1995; Secker, Harris, & Plummer 1997).
Missing from the above list is any explanation as to why the surface brightnesses of these dEs can be so low, at a wide range of $`B-V`$ and $`V-I`$ colors. Surface brightness is, of course, a convolution of the average separation between the stars and the luminosity function of the stars in the galaxy. Broadly speaking, the available photometric data on dEs are inconsistent with a significant change in the stellar luminosity function – that is, the broad-band integrated colors as well as nuclear spectra indicate the light is dominated by a giant branch augmented by A, F and G main sequence stars. Given this, the most probable reason that these galaxies have such low surface brightnesses is a larger than average separation between the stars, or between individual red giants in the case of giant dominated integrated light. Is this a formation effect? That is, have these systems always been of low mass density (see de Blok & McGaugh 1997), or has there been some profound evolutionary process, perhaps associated with significant mass loss (e.g. Dekel & Silk 1986), that has “puffed” what once were compact galaxies into a considerably more diffuse state?
But what is the evidence, apart from general broad band colors, that the light from dE galaxies is giant dominated? As remarked by Bothun, Impey, & Malin (1991) and McGaugh et al. (1995), there is particular difficulty in fitting stellar population models to the blue end of the LSB dE sequence, because these objects are blue in the clear absence of on-going star formation. In general, this end is defined by objects with $`B-V`$ $`\sim `$ 0.4 – 0.5 and $`V-I`$ $`\sim `$ 0.6 – 0.8. For these objects, their colors can be reproduced using a large population of A, F and G stars, and a reduced giant branch (which indicates a young mean age for the galaxy), thus bringing into question the statement that their light is giant dominated. One way to directly test whether these blue dEs still have giant dominated light is offered by the measurement of luminosity fluctuations using the Hubble Space Telescope (HST). Previous attempts to measure the fluctuation signal of LSB dEs from the ground have been successful. Bothun et al. (1991) successfully detected the B-band fluctuations in two LSB dE galaxies in Fornax using a detector with pixel size of 0.33 arcseconds under conditions of 0.7-0.8 arcsecond seeing. Jerjen et al. (1998) measured the R-band fluctuation signal for a few dE galaxies in Sculptor using a detector with pixel size of 0.60 arcseconds under conditions of 1.5 arcsecond seeing.
Clearly, at 0.1 arcsecond per pixel and a PSF of approximately 0.2 arcseconds, HST observations using the Wide Field Planetary Camera-2 (WFPC2) present a unique opportunity for a robust measurement of the fluctuation signal from LSB dEs in structures as distant as the Virgo cluster. A priori, what might one expect from such measurements? Well, suppose there is some dE with a region of constant surface brightness of B = 25.0 mag arcsec<sup>-2</sup> a few arcseconds in size. At I, the mean surface brightness will be approximately I = 23.5 mag arcsec<sup>-2</sup>. At 0.1 arcsecs per pixel, each WFPC2 pixel would have B = 28.5 mag. At the distance of the Virgo cluster (m-M = 31.5 for this illustrative purpose only), the absolute magnitude per pixel in the I-band is -3.0. If the light per pixel is giant dominated then this absolute magnitude level is reached with just 2–10 giants, depending on their spectral type. The Poisson noise associated with such a discrete distribution of giants would indeed be large ($`\sim `$ 33%). If, on the other hand, the light from these blue dEs is dominated by F and G main sequence stars, then several hundred per pixel are required and the corresponding fluctuation signal would be significantly reduced. We thus seek to determine the amplitude of the fluctuation signal in a small sample of blue LSB dEs in Virgo to a) directly test that the light from these dEs is still giant dominated and b) to show membership in the Virgo cluster.
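The back-of-envelope numbers in this paragraph are easy to reproduce; in the sketch below the two trial giant magnitudes are illustrative assumptions chosen to bracket the "2–10 giants" range:

```python
import numpy as np

mu_I = 23.5                             # I-band surface brightness, mag arcsec^-2
m_pix = mu_I - 2.5 * np.log10(0.1**2)   # 0.1" WFPC2 pixels -> 28.5 mag/pixel
M_pix = m_pix - 31.5                    # m - M = 31.5 (illustrative) -> -3.0

for M_giant in (-1.5, -0.5):            # assumed mean giant luminosities
    N = 10**(-0.4 * (M_pix - M_giant))  # giants needed to supply the pixel's light
    print(f"M_giant={M_giant}: N={N:.1f}, Poisson fluctuation={1/np.sqrt(N):.0%}")
```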
Additional motivation for performing these observations is three-fold: 1) There is conflicting information in the literature concerning the metal abundance and/or effective temperatures of the giant branches in these systems. For instance, the interpretation offered by Bothun & Mould (1988) is somewhat different from that put forth by Brodie & Huchra (1991). By measuring the fluctuation signal, we have an opportunity to infer the approximate K-to-M ratio in the composite giant branch. In globular clusters, it has been shown (Reed, Hesser & Shawl 1988) that the K/M giant ratio is a good indicator of metallicity. 2) We know very little about the small scale structure of these enigmatic dwarf galaxies. For instance, are these dEs of low surface brightness because the mean giant luminosity per pixel is low, or is the actual surface density of giants (absolute numbers of stars per pixel) low? Determining the luminosity fluctuations associated with discrete numbers of giants per pixel can help resolve this. 3) The nature of the nuclei which frequent many dEs in Virgo remains unclear. One dE in our sample exhibits a very red nucleus that is spatially unresolved from the ground. WFPC2 observations may help to resolve this nucleus to better determine its nature.
In this paper we describe our imaging experiment of three LSB dEs in Virgo. This experiment has never been tried before on Virgo dEs. Section 2 describes our dE sample as well as the instrumentation and data reduction procedures. In section 3 we report on the detection of the fluctuation signal and present some model analysis on the nature of the composite giant branch in these systems. Section 4 gives a complete error analysis for the images and section 5 discusses the nature of the individual dEs in more detail.
## 2 Observations and Data Reduction
### 2.1 The dE Sample - Global Properties
For this study we selected three LSB dEs in Virgo from the ground-based Las Campañas 2.5m Dupont telescope CCD sample of Impey, Bothun, & Malin (1988; IBM hereafter). Two of these three are in the Virgo Cluster Catalog (VCC) of Sandage and Binggeli (1984). The objects chosen are V1L4 (VCC1582), V2L8, and V7L3 (VCC1149).
V1L4 is fairly easy to identify in the ground based images, but the presence of a large number of (apparent) foreground stars makes analysis of this galaxy, at ground-based resolution, difficult. The extrapolated central surface brightness is 24.2 B mag arcsec<sup>-2</sup>, and the integrated B magnitude is 16.7, making it the brightest of the three dEs in this study. The galaxy is circular in appearance, with the hint of a faint spiral arm on the north-eastern side of the galaxy (much like the incipient spiral structure of Malin 1 – see Impey & Bothun 1989). The surface brightness profile consists of a flat central region followed by an exponential fall-off. This lack of a true exponential profile, to R=0, prevents an accurate determination of scale length but its 27.0 B mag arcsec<sup>-2</sup> isophote diameter suggests a scale length similar to the other two dEs in this study.
V2L8 has a central surface brightness of only 25.8 B mag arcsec<sup>-2</sup>, making it the most diffuse object in this study (and explaining why it is not a VCC object). The galaxy is roughly circular in appearance but is well nucleated. Unfortunately, in the IBM data there is a CCD flaw running through the center of the galaxy which prevented much analysis of this nucleation. We have included this object in our sample in hopes of resolving the nucleation with HST. V2L8 nominally has a scale length slightly larger than the typical dE in Virgo or Fornax ($`\alpha _{V2L8}`$ = 1.2 kpc). When combined with the very low central surface brightness, V2L8 is well outside the standard surface brightness-magnitude relation discussed in Section 1.
V7L3 is intermediate between the other two. It has a measured central surface brightness of 25.1 mag arcsec<sup>-2</sup> and a scale length of $`\alpha _{V7L3}`$ = 1.1 kpc. Like V2L8, it too is an exception to the standard surface brightness – magnitude relation of dE galaxies. As will be seen, V7L3 was the most difficult galaxy to identify in the WFPC2 data because it is very diffuse and lacks any nucleation.
The light distribution of the three galaxies is remarkably similar. Once the bright nuclear core of V2L8 is removed, all three dEs have a flat inner surface brightness profile followed by an exponential profile which continues through the detection limit. It is in these flat diffuse regions that we seek to measure the fluctuation signal. These regions may be kept diffuse by the action of background radiation pressure, stellar winds, or some other mechanism that provides enough outward pressure to prevent an increase in density and reduction in scale size of these diffuse regions (i.e. Kepner, Babul & Spergel 1997).
Information from the ground based images is given in Table 1, and described below. All quantities were calculated using the Johnson B band filter unless otherwise noted. It should be stated that although the images and zeropoints used for this table are the same as those used in IBM, the parameters have been independently calculated by re-doing the surface photometry.
Columns 1 and 2: Galaxy names as given in IBM (Column 1) and in the Binggeli, Sandage, and Tarenghi atlas (1985) (Column 2).
Columns 3 and 4: RA and Dec of the galaxies, as found using the STSDAS METRIC task on the WFPC2 F814W images (J2000 epoch).
Column 5: Central surface brightness, in mag arcsec<sup>-2</sup>.
Column 6: The scale length, in arcsecs, as defined in equation 2 (below).
Column 7: The total B magnitude integrated out to the 27.0 mag arcsec<sup>-2</sup> isophote.
Column 8: The isophotal diameter measured at the $`\mu _B`$= 27.0 mag arcsec<sup>-2</sup> level.
Columns 9 and 10: The B-V and V-I colors, measured through the d=20” aperture for V2L8 and V1L4, and through the d=34” aperture for V7L3, due to the difficulty in obtaining an accurate color at smaller apertures (see section 5). The errors are 0.05 and 0.1 for B-V and V-I, respectively.
The colors of these dEs are fairly blue. For comparison, the typical Galactic globular cluster has colors of B-V = 0.62 $`\pm `$ 0.02 and V-I = 0.93 $`\pm `$ 0.05 (\[Fe/H\] $`\sim -1.7`$). It is likely that the stellar populations in these dEs are metal-poor with a younger mean age than those found in galactic globulars. This would imply a deficit of M-giants, which is something that can be constrained from the measured fluctuation signal.
### 2.2 Instrumentation
The WFPC2 consists of three Wide Field cameras and one Planetary camera. The Wide Field cameras have a focal ratio of f/12.9 and a field of view of 80” x 80”, with each pixel subtending 0.0996 arcsec on a side. The three cameras form an L-shape, with the Planetary camera completing the square. The Planetary camera has a focal ratio of f/28.3, a pixel scale of 0.0455 arcsec/pixel, and an overall field of view of 36” x 36”. All four cameras have an 800 x 800 pixel silicon CCD with a thermo-electric cooler to suppress dark current. The WFPC2 has two readout formats – single pixel resolution (FULL mode) and 2x2 pixel binning (AREA mode). The digital to analog converter used a gain of 7 e<sup>-</sup>/digital number.
The data for this survey were acquired on 1 May 1996, 3 August 1996, and 3 October 1996. Each field was chosen so that the center of the dE was located in the WF3 image. Four images of each galaxy were taken using all four WF and PC chips, for a total of 2100s and 2200s through the F300W and F814W filters, respectively. The F814W filter is a broadband filter with $`\lambda _0`$ = 7924 Å and $`\mathrm{\Delta }\lambda _{1/2}`$ = 1497 Å. It is designed to be similar to the Cousins I-band filter. The F300W filter has $`\lambda _0`$ = 2941 Å and $`\mathrm{\Delta }\lambda _{1/2}`$ = 757 Å, and is the WFPC-2 wide band U filter. The F814W images were taken in FULL mode, while the F300W images were taken in AREA mode. Because of the CCD response, the S/N through the F814W filter was considerably higher than through the F300W filter. Surface brightness profiles and structural parameters were all found through the F814W images. Figure 1 shows the full (mosaicked) images through the F814W filter.
Sky flat fields of the sunlit Earth were taken through each filter and routinely calibrated against an internal flat field calibration system. The internal system consists of two lamps (optical and UV) illuminating a diffuser plate. The internal flats are used to monitor and correct for changes in the flat fields. Dark fields are averages of ten calibration frames taken over the space of two weeks. The intrinsic dark rate of the WFPC2 CCDs is $`\sim `$0.01 e<sup>-</sup>/pixel/sec. A bias field was generated for each image using extended register pixels which do not view the sky.
The data reduction process was as follows: First, all known bad pixels were removed, using the static mask reference file. The bias level was then removed from each frame. The bias image, generated to remove any position-dependent bias pattern, was then subtracted from the image, as was the dark field image. Flat field multiplication was then performed. All the above image calibration was performed at STScI using the standard WFPC2-specific calibration algorithms (the pipeline). After the images were reduced, they were inspected for obvious flaws such as filter ghosts or reflections. As none were found, all the images were used in the subsequent analysis. Each frame was then shifted, registered and combined, using the STSDAS CRREJ procedure to eliminate cosmic rays and other small scale flaws. The resultant 2100s – 2200s images were then checked by eye to ensure any registration errors were less than 0.5 pixel.
### 2.3 Data Reduction
The zeropoints for each field were taken from the PHOTFLAM value given in the image headers. The zeropoint, in the STMAG system (the space telescope system based on a spectrum with constant flux per unit wavelength set to approximate the Johnson system at V), is
$$\mathrm{ZP}_{\mathrm{STMAG}}=-2.5\mathrm{log}(\mathrm{PHOTFLAM})-21.1.$$
For the F814W filter, the PHOTFLAM was 2.5451 x 10<sup>-18</sup>, corresponding to a zeropoint of 22.886. For the F300W filter the PHOTFLAM was 6.0240 x 10<sup>-17</sup>, with a zeropoint of 19.450. Conversion to the Cousins I band was done using the value given by Whitmore in the WFPC2 Photometry Cookbook of I - F814W = -1.22 $`\pm `$ 0.01 (for objects with the colors of galaxies). Conversion from the F300W band to the Johnson U band is more complicated due to an imperfect match between the filters. As a result, we used the value obtained by O’Neil, Bothun, & Impey (1998) of U - F300W = 0.04 $`\pm `$ 0.1.
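Both quoted zeropoints follow directly from this relation; a minimal check:

```python
import math

def stmag_zeropoint(photflam):
    # ZP_STMAG = -2.5 log10(PHOTFLAM) - 21.1
    return -2.5 * math.log10(photflam) - 21.1

print(stmag_zeropoint(2.5451e-18))   # 22.886 (F814W)
print(stmag_zeropoint(6.0240e-17))   # 19.450 (F300W)
```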
The physical center of each galaxy, estimated by centroiding with respect to outer isophotes, was found and ellipses were fit around that point to obtain the intensity in each annulus using the modified GASP software (Cawson 1983; Bothun et.al. 1986). The pixel size of the survey provides a seeing radius (stellar psf) of 0.1” for the Planetary camera, and 0.2” for the Wide Field camera. The average sky-subtracted intensity within each (annular) ellipse was found and calibrated with the photometric zeropoint. Background galaxies were masked with the GASP software, which sets the value of the affected pixel to -32768 and subsequently ignores the affected region.
Exponential surface brightness profiles were plotted against the major axis (in arcsec) for each galaxy, using the following equation:
$$\mathrm{\Sigma }(\mathrm{r})=\mathrm{\Sigma }_0\mathrm{e}^{-\frac{\mathrm{r}}{\alpha }}$$
(1)
where $`\mathrm{\Sigma }_0`$ is the central surface brightness of the disk in linear units ($`M_{}`$ /pc<sup>2</sup>), and $`\alpha `$ is the exponential scale length in arcsec. This can also be written (the form used for data analysis) as
$$\mu (\mathrm{r})=\mu (0)+(\frac{1.086}{\alpha })\mathrm{r}$$
(2)
where $`\mu _0`$ is the central surface brightness in mag arcsec<sup>-2</sup>.
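The coefficient 1.086 is just $`2.5\mathrm{log}_{10}e`$, the conversion from an e-folding in flux to magnitudes. A minimal sketch (the $`\mu (0)`$ and $`\alpha `$ values are illustrative placeholders, not entries from Table 1):

```python
import numpy as np

def mu_profile(r, mu0, alpha):
    """Eq. 2: exponential disk in magnitude units; 2.5*log10(e) = 1.086."""
    return mu0 + (2.5 * np.log10(np.e) / alpha) * r

print(2.5 * np.log10(np.e))                    # 1.0857...
print(mu_profile(np.array([0.0, 10.0, 20.0]),  # radii in arcsec
                 mu0=25.1, alpha=11.0))        # assumed illustrative values
```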
The average sky brightness through the F814W filter was 23.01 mag arcsec<sup>-2</sup> (which corresponds to about 21.8 mag arcsec<sup>-2</sup> in the Johnson I-band system). An accurate (error $``$0.25 mag arcsec<sup>-2</sup>) radial surface brightness profile was typically found out to 25.5 mag arcsec<sup>-2</sup> (10% of the sky background).
## 3 Data Analysis and Modeling
### 3.1 Measuring the Fluctuation Signal
The flat surface brightness profile in the inner core of these dE galaxies, combined with the exceptionally flat sky background of our WFPC2 F814W images (flat to less than 0.1%), allows for an accurate detection of luminosity fluctuations caused by the stellar population of these inner regions. Figure 2 shows the grey-scale images for the inner regions of the galaxies in the F814W images. In all three cases the profiles are flat, with mean $`\mu `$ = 24.39, 26.23, and 25.49 F814W mag arcsec<sup>-2</sup> for V1L4, V2L8, and V7L3, respectively. Pixel-to-pixel variations within the flat regions (as defined in Table 2, Column 2), as well as for the sky, were then found by determining the mean electron count and dispersion in three sets of 135 boxes 5, 10, and 15 pixels wide, for a total of 47,250 pixels, which were spread randomly throughout the region of constant surface brightness (see also Bothun, Impey, & Malin 1991). Multiple random samplings of these regions were done so that errors could be determined via statistical bootstrap techniques.
The intrinsic fluctuation signal was found by subtracting, in quadrature, the r.m.s. variation of the sky (still in e<sup>-</sup>) from that within the constant surface brightness regions. It is precisely the existence of regions of constant surface brightness that encompass several thousand pixels that allows the fluctuation signal to be measured in such a straightforward manner. That is, the fluctuation signal can be extracted without any need to Fourier analyze the image to recover the power spectrum, as is traditionally done in studies such as these. As will be shown below, this technique allows the fluctuation signal to be measured to high accuracy when it is detected in this manner.
For these observations, the sky background averaged 145–170 electrons, which is well above the readout noise for WFPC2. In the absence of other sources of noise (e.g. filter fluorescence, CTE problems, scattered light – see section 4 for details) the only other contribution to the fluctuation signal besides the galaxy comes from the Poisson noise in the sky background. Division by the average intensity of the constant surface brightness region then gives the fractional luminosity fluctuation, which is presumably driven by a Poisson distribution of red giant stars per pixel. However, there is one small complication which makes this whole procedure a bit less than straightforward, and that is the simple fact that the angular extent of the galaxy (at very faint isophotes) is comparable to the WFPC2 field of view. Examining the outer isophotes from the Las Campañas I band image reveals that, at the maximum radii available for the WFPC2 images (r=80” for V2L8, V1L4 and r = 100” for V7L3), the annular surface brightness is 26.85, 28.83, and 27.69 mag arcsec<sup>-2</sup> for V2L8, V1L4, and V7L3, respectively. This implies that only the light from V1L4 has fallen off enough to render it insignificant (e.g. $`<`$0.5%) in the calculations of both the sky brightness and its r.m.s. variation. For the other two galaxies, 2.5% and 1.3% of the measured sky value is a contribution from the outer stellar light in V2L8 and V7L3, respectively, and this needs to be accounted for in the determination of the true sky value.
In the case of sky limited exposures, such as we have, the r.m.s. sky error in electrons is:
$$\sqrt{(\mathrm{sky\ intensity})+(\mathrm{number\ of\ exposures})(\mathrm{read\ noise})^2}=\sigma _{rms}(\mathrm{sky},\mathrm{\ in\ }e^{-}).$$
If we assume that this r.m.s. error represents the true sky noise for all three galaxies (that is, the sky error found is the true $`\sigma _{rms}(sky)`$), we can determine the true galaxy r.m.s. error using
$$\frac{\sqrt{\sigma _{measured}(galaxy)^2-\sigma _{rms}(sky)^2}}{\left[galaxy\ intensity\right]-\left[sky\ intensity\right]}=\sigma _{true}(galaxy).$$
Uncertainties are dominated by the uncertainty in the numerator. A statistical bootstrap method is used to determine the uncertainty in the measured values of $`\sigma _{measured}(galaxy)`$ and $`\sigma _{rms}(sky)`$. These values can be found in Table 2 (see below). In general $`\sigma _{rms}(sky)`$ is larger than the r.m.s. of the actual sky counts, in electrons, indicating that readout noise is still a component in the overall noise profile of both the galaxy and sky images.
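A toy sketch of this box-sampling estimator (this is not the authors' pipeline; the frames are simulated from the V1L4-like counts quoted below, so the recovered fraction is ~0.33 by construction):

```python
import numpy as np
rng = np.random.default_rng(0)

def box_stats(img, box=10, nbox=135):
    """Mean and rms (in e-) over nbox randomly placed box x box regions."""
    ny, nx = img.shape
    pix = []
    for _ in range(nbox):
        y = rng.integers(0, ny - box)
        x = rng.integers(0, nx - box)
        pix.append(img[y:y + box, x:x + box].ravel())
    pix = np.concatenate(pix)
    return pix.mean(), pix.std()

# Simulated frames built from the V1L4-like counts quoted in the text:
sky = rng.poisson(145.2, (400, 400)).astype(float)
gal = sky + 36.0 * (1.0 + 0.33 * rng.standard_normal(sky.shape))
m_gal, s_gal = box_stats(gal)
m_sky, s_sky = box_stats(sky)
print(np.sqrt(s_gal**2 - s_sky**2) / (m_gal - m_sky))   # ~0.33
```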
After grinding through this procedure for all three galaxies, we measure the fractional luminosity fluctuations to be 0.42 $`\pm `$ 0.53, 0.33 $`\pm `$ 0.05, and 0.65 $`\pm `$ 0.17 for V2L8, V1L4, and V7L3, respectively. The fluctuation signal for V2L8 clearly is not statistically significant, but to first order the large and statistically significant fluctuation signal measured for V1L4 and V7L3 confirms what was introduced in Section 1 (see also Figure 5 in Bothun, Impey, & Malin 1991).
Combining this measure of the luminosity fluctuations with the probable distance modulus to Virgo yields an estimate for the average magnitude of the stars producing the observed fluctuation (see Tonry & Schneider 1988). Of course, the distance modulus to Virgo is uncertain, and values of m-M = 31.0 – 31.5 remain consistent with the data (see Bothun 1998). Using this range of distance moduli, we can determine the absolute magnitude/pixel for the constant surface brightness regions. For example, in our 2200 second combined exposure, V1L4 has counts of 181.2 $`\pm `$ 0.8 $`\mathrm{e}^{-}`$/pixel versus 145.2 $`\pm `$ 0.3 $`\mathrm{e}^{-}`$/pixel for the sky, or a net count of 36 $`\pm `$ 0.9 $`\mathrm{e}^{-}`$/pixel, which converts to a mean magnitude/pixel of 28.26 in the Cousins I band ($`m=-2.5log\left[\frac{181.2-145.2}{7}\right]+31.26-1.22=28.26`$). The measured fluctuation signal of 0.33 implies that, on average, there are 10 giants per pixel. Using m - M = 31.0 then gives $`\overline{M_I}`$ = -0.24. For V7L3 we derive a mean magnitude/pixel of 30.01 with an average of 3 giants per pixel. This yields $`\overline{M_I}`$ = -0.55. These values are significantly below the typical values of $`\overline{M_I}`$ found for luminous ellipticals (see below).
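A minimal sketch of this arithmetic (we read the division by 7 as the detector gain of 7 e<sup>-</sup>/DN quoted in Section 2.2, and 31.26 as the effective F814W zeropoint; both numbers are taken directly from the text's own expression):

```python
import numpy as np

gal, sky = 181.2, 145.2                      # e-/pixel: V1L4 region and sky
m_pix = -2.5 * np.log10((gal - sky) / 7) + 31.26 - 1.22
print(m_pix)                                 # 28.26 Cousins I mag per pixel

sigma = 0.33                                 # fractional fluctuation
N = 10                                       # 1/sigma^2 ~ 9.2, the text's ~10 giants/pixel
print((m_pix - 31.0) + 2.5 * np.log10(N))    # Mbar_I = -0.24 for m - M = 31.0
```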
### 3.2 Modeling the Giant Branch
We now have enough information to approximately model the giant branch in terms of a mixture of giants of spectral type K and M, together with an underlying main sequence of A, F, and G stars. One way to determine our model is to simply appeal to the calculations of Worthey (1994) in which the fluctuation magnitude is listed for a variety of stellar populations of differing ages and metal abundances. However, those models were developed for application to giant ellipticals, and it is not clear if they are appropriate for our dE galaxies for the following statistical reason: In a giant elliptical at the distance of the Virgo cluster, each pixel would contain several hundred giants (and a total of several thousand stars), and thus each pixel represents a statistically reliable realization of the general stellar population. In our case, this is simply not true, as each pixel contains a very small number of giants (certainly less than 10 and maybe as low as 2) and hence we are subject to discrete effects. In the extreme, part of our fluctuation signal may in fact be driven by the tendency for some pixels to contain zero giants. Thus, we are in a much different counting regime than the case of a giant elliptical.
Nonetheless, we begin with an inspection of the Worthey models. In the I-band, $`\overline{M_I}`$ decreases with increasing metallicity for a fixed age population. It is only at near-IR wavelengths that the fluctuation magnitude starts to rapidly increase as one gets to more metal rich populations, which contain the cooler, luminous M-giants. In addition, throughout the regime of low metallicity (-2.00 $`\le `$ \[Fe/H\] $`\le `$ 0.0), $`\overline{M_I}`$ is relatively constant. In this metallicity regime, $`\overline{M_I}`$ $`\sim `$ -1.8 $`\pm `$ 0.1 over the age range 8–12 Gyr. This is well above the values we found from our data, which are at most $`\overline{M_I}`$ $`\sim `$ -1.0 for m-M = 31.5. So with respect to our data, these models are extremely poor fits in that they achieve $`\overline{M_I}`$ $`\sim `$ -0.5 only in metal rich cases, but those populations have $`V-I`$ $`\sim `$ 1.3 – 1.4. Conversely, using the bluest $`V-I`$ models ($`V-I`$ = 0.86, corresponding to \[Fe/H\] = -2.00 and an age of 8 Gyr) yields $`\overline{M_I}`$ $`\sim `$ -1.95. So the comprehensive models of Worthey do not appear to have any applicability to dE galaxies if $`\overline{M_I}`$ is mostly driven by metallicity variations; we simply cannot even come close to getting consistent values for both $`V-I`$ and $`\overline{M_I}`$ .
To make further progress we model the giant branch by adopting the following procedure: 1) The measured luminosity fluctuation to first order fixes the number of giants per pixel; 2) We assume the giant branch can be populated by stars of spectral type K0 through M2; 3) We adopt absolute magnitudes and colors for giants as a function of spectral type as shown in Table 3 (not considering types later than M2, as they are typically found in metal rich bulges, a state far removed from the dEs); 4) We use the observed B - V and V - I colors as additional constraints which help us to evaluate the contribution of A0 – F0 stars to the integrated light.
For a specific demonstration of this procedure we take the case of V1L4 at assumed (m-M) = 31.0. The observed fluctuation signal of 33% argues for 10 giants per pixel, to first order. This yields $`\overline{M_I}`$ = -0.24, which is approximately the same as for a K0 giant. Since the observed color of V1L4 is bluer than that of a K0 giant in V - I, there clearly is an important contribution from an underlying bluer population. Hence, we seek an approximate model for the giant branch and the ratio of giant branch to AFG stars that can simultaneously satisfy the color and fluctuation signal constraints, within the observed errors. These AFG stars represent a blue underlying population which could be a populated main sequence or a blue horizontal branch population.
As an example of an acceptable fit, a model (Model A) with 2 A0 and 30 F0 main sequence stars in combination with 2 K0, 2 K2 and 2 K3 giants returns $`\overline{M_I}`$ = -0.30, $`B-V`$ = 0.56, $`V-I`$ = 0.81 and m-M = 31.26. Another model (Model B) with 3 A0 and 40 F0 stars in combination with 4 K3 giants returns $`\overline{M_I}`$ = -0.34, $`B-V`$ = 0.46, $`V-I`$ = 0.75 and m-M = 31.31. Both of these models return distance moduli estimates consistent with cluster membership. In section 6 we will apply the $`V-I`$ vs $`\overline{M_I}`$ calibration of Tonry (1991) and Tonry et al. (1997) to uncover widely inconsistent results, strongly suggesting that, like the Worthey models, the calibration for giant ellipticals does not hold for these galaxies.
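The bookkeeping behind these models is simple to script. A minimal sketch follows; Table 3 is not reproduced in this excerpt, so the per-type magnitudes and colors below are rough stand-ins (assumptions, not the paper's adopted values), and the output only illustrates the method: fluxes add linearly, and the fluctuation magnitude is the flux-weighted second moment of the mix.

```python
import numpy as np

# Rough stand-in absolute magnitudes and colors per spectral type. Table 3 is
# not reproduced here, so these numbers are illustrative assumptions only.
STARS = {        #   M_I    B-V    V-I
    "A0": ( 1.3, 0.00, 0.00),
    "F0": ( 2.4, 0.30, 0.40),
    "K0": (-0.2, 1.00, 0.90),
    "K3": (-0.9, 1.30, 1.30),
    "M2": (-2.4, 1.60, 2.30),
}

def composite(mix):
    """Colors and fluctuation magnitude for a mix {spectral type: number}."""
    MI = {t: v[0] for t, v in STARS.items()}
    MV = {t: v[0] + v[2] for t, v in STARS.items()}          # M_V = M_I + (V-I)
    MB = {t: v[0] + v[2] + v[1] for t, v in STARS.items()}   # M_B = M_V + (B-V)
    fI = sum(n * 10**(-0.4 * MI[t]) for t, n in mix.items())
    fV = sum(n * 10**(-0.4 * MV[t]) for t, n in mix.items())
    fB = sum(n * 10**(-0.4 * MB[t]) for t, n in mix.items())
    # Tonry-Schneider fluctuation magnitude: second flux moment over the first
    f2 = sum(n * 10**(-0.8 * MI[t]) for t, n in mix.items())
    return (-2.5 * np.log10(fB / fV),        # B-V
            -2.5 * np.log10(fV / fI),        # V-I
            -2.5 * np.log10(f2 / fI))        # Mbar_I

print(composite({"A0": 2, "F0": 30, "K0": 2, "K3": 4}))      # a Model-A-like mix
```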
Clearly, given the accuracy of the measurements, we can only come up with approximate models, but the particular feature we are interested in constraining from these observations is the mean spectral type (effective temperature) of the giant branch. By gauging this we will have another handle on the metallicity of the stars in these systems. The combination of the observed fluctuation signal and the color does have high constraining power in this regard.
As a further example, we can take Model A above and add a 5% (by number) contribution of M2 stars. This yields $`\overline{M_I}`$ = -0.97, $`B-V`$ = 0.59, $`V-I`$ = 0.92 and m-M = 31.43. This is not very consistent with the data; in particular, $`\overline{M_I}`$ is too bright and $`V-I`$ is marginally too red. To reduce $`\overline{M_I}`$ while still retaining M2 giants requires the addition of A0 and F0 stars. This addition will make the broad band colors bluer but will also increase the distance modulus, as the absolute magnitude per pixel is now increased. If we double the contribution of F-stars we obtain $`\overline{M_I}`$ = -0.79, $`B-V`$ = 0.50, $`V-I`$ = 0.83 and m-M = 31.67. Thus, we can only accommodate a small M-star contribution in V1L4 for the largest probable distance modulus to Virgo. For the shorter distance modulus no M-giant contribution can be accommodated. Furthermore, none of our models can actually get as blue as $`V-I`$ = 0.7 while being consistent with the derived $`\overline{M_I}`$ (see Table 2). For instance, using G5 giants can drive $`V-I`$ down to 0.7, but such models consistently return values for $`\overline{M_I}`$ that are fainter than we observe (see also Worthey 1994).
### 3.3 Overall Results
Table 2 lists our overall results in terms of determining $`\overline{M_I}`$ and its error. All values relevant to the calculation of the fluctuation signal are given in units of electrons per pixel.
The table is laid out as follows:
* Column 1: The galaxy name.
* Column 2: The radius range over which the flat surface brightness profile holds.
* Column 3: The average central surface brightness, through the F814W filter, for the studied regions.
* Column 4: The average central surface brightness, converted to the I band (Section 2.3).
* Column 5: The average galaxy+sky counts within the region defined by Column 2, in electrons.
* Column 6: The r.m.s. error ($`\sigma `$) for Column 5.
* Column 7: The average sky counts for each image, also in electrons.
* Column 8: The r.m.s. error ($`\sigma `$) for Column 7.
* Column 9: The luminosity fluctuation, from electron counts, determined for each galaxy, followed by an error estimate (detailed in section 4).
* Column 10: The absolute fluctuation magnitude ($`\overline{M_I}`$ )
The results of these calculations summarized in this table are clear. The high resolution and low noise of the WFPC2 have allowed for a reliable determination of the luminosity fluctuation signal in 2 out of 3 cases. The amplitude of this signal is large for the cases of V1L4 and V7L3 and is likely produced by only 2–10 giant stars/pixel, depending on the types of giants considered.
In Table 4 we list the best fitting stellar population models to the observed color and fluctuation data. These models were obtained by averaging the results of all models that gave values of $`B-V`$, $`V-I`$ and $`\overline{M_I}`$ that were within the errors in the data and which produced a distance modulus in the range (m-M) = 31.0 – 31.7. No model that we ran got as blue as $`V-I`$ = 0.7. Table 4 is laid out as follows:
* Column 1: Galaxy name
* Column 2: Mean spectral type of Giant Branch
* Column 3: K/M giant number ratio if allowed by the data
* Column 4: A+F/K+M number ratio
* Column 5: distance modulus
* Column 6: $`B-V`$
* Column 7: $`V-I`$
* Column 8: $`\overline{M_I}`$
## 4 Error Analysis
The variance, as measured in electrons, is typically 5–10% higher in the constant surface brightness regions of the dE galaxies compared to the sky background. This is the fluctuation signal but, before we can directly associate that with a Poisson distribution of giant stars per pixel, we must gain a thorough understanding of potential systematic errors arising from the WFPC2 system. These other potential sources of error are:
* Dark glow: This is a non-uniform background which may appear on the WFPC-2 chips and is due to luminescence in the MgF<sub>2</sub> CCD windows under cosmic ray bombardment. Examination for this effect can be done by looking for a small intensity curvature across the sky. This effect is not found in the WFPC-2 images discussed in this paper.
* CTE errors: WFPC-2 chips experience a charge transfer efficiency (CTE) loss across the chip of up to 20% along the Y-axis. This effect, however, is readily reduced by long exposures and high DN counts. The combined images discussed in this paper are the equivalent of 2200s images, providing raw (non-averaged) counts of 2,000 DN (14,000 e<sup>-</sup>) in a 5x5 pixel box. This reduces the effect of CTE errors from the 20% mark to 2% – 3% (i.e. Whitmore 1998). Additionally, the majority of the CTE loss occurs at the edges of the chips, and can readily be seen in a plot of the average sky counts along a chip column. When this was done on the data discussed herein, it was determined that virtually all of the loss occurred in the 50 pixels at the edges of the chips. These pixels were therefore eliminated from the analyzed image, further reducing any CTE problems to under 0.5%.
* S/N loss at the chip edges: Within approximately 50 pixels of the inner edges of the wide field chips the signal-to-noise ratio drops considerably due to vignetting and spherical aberration as the light is divided between two chips. With the images in question, this effect can be readily eliminated by again examining the sky counts in the chip’s inner regions. Eliminating the inner 50 pixels from each image reduced the effects of this problem to zero.
* Geometric Distortion: Geometric distortion near the edges of the chips results in a change in the surface area covered by each pixel. In general, this effect is not relevant for surface photometry where azimuthal averages are taken and the variance in the sky background is determined over areas encompassing thousands of pixels. The flat fields also reduce this problem considerably by boosting the values of the smaller pixels. By analyzing a large number of sky/galaxy regions, each containing a minimum of 25 pixels, we again reduced this effect to under 0.1%.
* Scattered Light: Bright stars whose light falls on the planetary camera pyramid mirror can produce an obvious artifact on the CCDs, typically in the shape of a large arc. None of the images in this paper suffered from this effect.
* Systematic Errors: Potential systematic errors could arise from inappropriate box sizes resulting in under or over-sampling, from the underlying galaxy surface brightness not being constant, and from the presence of point sources in the studied region. To counter the first problem, the variance was computed for three different box sizes (5x5, 10x10, and 15x15 pixels) and the results compared. The weighted differences in sky and galaxy counts between the three box sizes were under 0.2%. To look into the possibility that the studied galaxy regions may not have been flat, a comparison can be done between the counts found in the inner and outer portions of the studied regions. In this case the errors remain under 0.5%.
The cumulative result of these other effects per statistics box results in a potential additional photometric error of up to 1 – 2%. However, our sky and galaxy fluctuation signals and their errors are determined by averaging over approximately 50,000 pixels (in 135 individual boxes) across the WF3 chip, and hence these additional errors are ultimately reduced to well under 1%. The difference in luminosity fluctuation between the sky and the galaxy signals ($`\sim `$ 5–10%) is well above the level of any possible systematics.
## 5 The Individual Galaxies
The distribution of the giant stars in the inner regions of all three LSB dEs appears to be completely uniform (Figure 4). There are no apparent clumps or clusters seen at our physical resolution scale of approximately 15 pc. Interestingly, because of the low number of giant stars per pixel, WFPC2 imaging essentially renders these galaxies transparent and their presence appears only as a “sky fluctuation” (see also Figure 1). Unless WFPC2 observers are careful, they may well have an object like this in their field without even knowing it.
### 5.1 V1L4
The ground-based data showed a number of bright regions or clumps in this object. However, it is clear from the higher angular resolution WFPC2 data that these regions are mostly background galaxies shining through V1L4, and hence the underlying structure of V1L4 is quite smooth. The background galaxies are described more fully in another paper (O’Neil, Bothun, & Impey 1999) and demonstrate the transparent nature of this and other dE galaxies. A few other “knots” on the arc-second scale can be identified which could be localized regions of star formation. Confirmation of this, however, cannot be provided by the F300W filter observations, as those data are extremely noisy.
Analysis of the luminosity fluctuations, described in the last section, shows the typical star within V1L4’s nuclear region to have $`\overline{M_I}`$ = -0.32 – -0.82 (m-M = 31.0 – 31.5), which corresponds to spectral types K0 through K2 in the mean. Our models, however, do accommodate the possibility of a small M-giant contribution to the fluctuation signal of K/M = 30, provided m-M = 31.5. If the luminosity distance is lower, K/M goes to $`\infty `$; that is, the possibility of any M-type stars existing within this galaxy goes to zero. Keeping the results consistent with the observed V - I color does not change these results, which equate to 13 $`\pm `$ 1 giant stars in a 10 pc<sup>2</sup> region of the galaxy, of which at most 0.5 could be an M giant star. Figure 4(a) shows the core of V1L4, with the region of flat surface brightness lying in the defined annulus. The contour lines in black demarcate the regions whose brightness is at least 1 $`\sigma `$ above the mean surface brightness in that region, and thus probably are reflective of the actual distribution of the individual giant stars. Interestingly, rather than being evenly distributed throughout the annulus, the majority of the giant stars in this region appear to lie in the southern part of V1L4’s core, accounting for V1L4’s slightly off-center appearance when imaged at coarser angular resolution. Additionally, it should be noted that the higher intensity regions do appear to be grouped, indicating perhaps an old stellar cluster now traced by the remnant giant population.
### 5.2 V2L8
Figures 4(b) and (c) show the inner regions of V2L8, with the region of constant surface brightness again demarcated by white circles and regions 1 $`\sigma `$ above the galaxy brightness defined by black contour lines. The distribution of giant type stars appears fairly even throughout the galaxy. Analysis of the luminosity fluctuations of V2L8 outside the nucleated region does not provide a statistically significant result, with $`\sigma _{galaxy}`$ = 0.42 $`\pm `$ 0.53. One possible reason for this is that the galaxy completely fills the WFPC2 field of view and no sky measurement is possible. Indeed, inspection of Table 2 shows that the sky counts are significantly higher for this object, although observing conditions (e.g. variable shuttle-glow, sun angle) could also be responsible for these increased counts. Given the strong detection of the fluctuation signal for the other two dEs in our sample, perhaps this null result indicates that V2L8 is background. That might help to explain why it does not conform to the surface brightness-magnitude relation and why it has a nucleus. Recall that a previously identified nucleated dE in Virgo turned out to be Malin 1 (Bothun et al. 1987).
WFPC2 imaging has clearly resolved the core of V2L8 in the F814W images, although the core drops out entirely in the F300W image. IBM measured V - I = 1.9 through a 5 arcsecond diameter aperture. We have reanalyzed the data in attempts to better remove the bad column and re-measure the nuclear colors, but the measurements are quite sensitive to the choice of center. Overall we find colors consistent with the IBM value but can better demonstrate the uncertainty. Based on this we conclude that the V - I color of the nucleus is 1.85 $`\pm `$ 0.15 mag, which is well within the range defined by luminous ellipticals. Thus, the nucleus of this dE galaxy is extraordinarily red, although the envelope of the galaxy appears fairly blue.
But what is the nature of this conspicuous red core? Fitting an $`r^{1/4}`$ profile gives an effective radius ($`r_e`$) of 0.7 arcseconds and an effective surface brightness of 22.0 mag arcsec<sup>-2</sup>. Its F814W magnitude, as measured through an aperture of diameter 2 arcseconds, is $`\sim `$ 22.8. If V2L8 is in the Virgo cluster, then this nucleus is, in fact, an extremely small scale bulge with $`r_e`$ $`\sim `$ 50 pc and has an absolute magnitude at Cousins I of -10 to -10.5, consistent with it being a bright, very metal-rich globular cluster (perhaps similar to those seen in NGC 5128 – Frogel 1984). Given the extremely diffuse nature of the central regions of this object, the formation of a highly compact bulge is very curious. If true, this is the first identified $`r^{1/4}`$ component of a dE galaxy with such a small scale length. The red color further suggests a metal-rich giant population. Attempts at spectroscopy of this nucleus in February 1998 using the now defunct MT were unsuccessful due to weather and difficulty in finding the nucleus on the acquisition TV. The lack of an observed fluctuation signal, however, has renewed our quest for optical spectroscopy, as this object may be background and, like Malin 1, intrinsically large.
### 5.3 V7L3
The WFPC2 images show V7L3 to have a very even stellar distribution, with even its core hardly brighter than the sky background. Remarkably, even with the WFPC2 image (additively) binned in 10 x 10 pixels (giving the image a resolution of 1”/pixel), V7L3 is still a fairly diffuse blob within the sky image and quite difficult to identify. The observed fluctuation signal is relatively large (owing to its lower surface brightness compared to V1L4) and is consistent, to first order, with a stellar population of only 3 giants per pixel, yielding $`\overline{M_I}`$ = -0.56 to -1.06, or spectral type K2/K3. This is a slightly later spectral type than the case of V1L4, even though both dEs have the same $`V-I`$ color. To accommodate this requires a large contribution, per pixel, from the underlying A and F stars (mostly F-stars). However, it is clear that the data cannot accommodate M-giants (which have $`M_I`$ $`\sim `$ -2.4), as the model quickly gets too red. Moreover, the absolute magnitude per pixel in the center regions is fainter than the absolute I-band magnitude of an M2 star, which would lead to fluctuations larger than we observe. In fact, it is very difficult to fit any one of our seven component models to the data for this galaxy at the short distance modulus. Successful models tend to be absurd (a point noted earlier by Bothun et al. 1991 regarding the colors of some of these dEs) and require approximately equal mixtures of A0 main sequence stars and K-giants. For instance, model C has equal numbers of A0 and K3 stars (and nothing else), and this returns $`\overline{M_I}`$ = -0.74, $`B-V`$ = 0.54, $`V-I`$ = 0.85 and m-M = 31.12. Adding 10 times as many F stars to this model produces $`\overline{M_I}`$ = -0.88, $`B-V`$ = 0.45, $`V-I`$ = 0.82 and pushes m-M to 31.70. Once again it is essentially impossible to push these models as blue as $`V-I`$ = 0.70 while simultaneously reproducing the observed $`\overline{M_I}`$ .
Since the composite giant branches appear similar, the more diffuse nature of V7L3 relative to V1L4 must be due directly to a lower surface density of giants or, equivalently, an increased average spatial separation between giant stars. The physical cause of this is unclear. Figure 4(d) shows the core of V7L3 with the regions 1$`\sigma `$ above the galaxy brightness demarcated by black contour lines. Figure 4(d) shows V7L3 to have the most even distribution of giant stars of the three galaxies in this study. This even stellar distribution within V7L3’s core, combined with the circular appearance of the galaxy and the lack of any large stellar knots within V7L3, argues for the idea that LSB galaxies are diffuse and of low surface brightness by nature, and not due to outside influences that might cause the galaxies to “puff-up” in some stochastic manner. Under that scenario, one might expect there to be considerably more clumpiness in the stellar distribution than we actually observe, which in all three cases is consistent with an old, dynamically relaxed distribution of giant stars.
## 6 Discussion
The primary result of this study is the firm detection of luminosity fluctuations which are associated with a small number of giants per pixel in two of the three LSB dE galaxies in our sample. Specifically, luminosity fluctuations of the inner, constant surface brightness regions yield a density of 2-10 red giants/pixel for two of the imaged galaxies. Since the distance to Virgo is relatively well known, we can use the measured fluctuation signal, in combination with the observed $`V-I`$ color, to constrain the respective contributions of K and M giants to the observed light. In so doing, the result is clear. We cannot simultaneously account for the observed fluctuation signal and the very blue $`V-I`$ in any model that has an M-giant contribution. In fact, the models strongly favor very early K-giants and hence a relatively warm effective temperature for the composite giant branch. This implies the population is relatively metal poor.
In more general terms, we find that it is extremely difficult for any model to reach $`B-V`$ $`\sim `$ 0.5 with $`V-I`$ as blue as 0.7 yet still exhibit $`\overline{M_I}`$ brighter than -0.3. This is relatively easy to understand, as achieving such blue colors requires the addition of many F-stars (main sequence or blue horizontal branch stars), which greatly increases the number of stars per pixel and lowers the overall fluctuation signal. So, in this sense, the stellar populations of these blue LSB dE galaxies remain mysterious and ill-constrained. This has been noted as far back as Bothun and Caldwell (1984) and is a manifestation of the basic dilemma involved in trying to produce galaxies with $`B-V`$ $`\sim `$ 0.5 that have no active star formation and very low surface brightness. The most confident statement we can make, from the fluctuation data, is that the giant branch is likely devoid of a significant population of M-stars.
We can, of course, turn the situation around and derive the distance to the Virgo cluster. Two calibrations are available for this purpose. Tonry (1991) gives
$$\overline{M_I}(\mathrm{Cousins})=-4.84+3.0(V-I)$$
based on a sample that includes colors as blue as $`V-I`$ = 0.85. The revision of this calibration by Tonry et al. (1997), based on including very red galaxies (and strictly valid only over the range 1.00 $`\le `$ $`V-I`$ $`\le `$ 1.30), is
$$\overline{M_I}(\mathrm{Cousins})=-1.74+4.5[(V-I)-1.15]$$
For V1L4 we derive $`\overline{m_I}`$ = 30.76 and for V7L3 we get $`\overline{m_I}`$ = 30.45. Both dEs have $`V-I`$ = 0.7 $`\pm `$ 0.1. The Tonry (1991) calibration thus yields $`m-M`$ = 33.3 $`\pm `$ 0.3 for the two galaxies averaged. The Tonry et al. (1997) calibration results in a distance modulus one magnitude farther. If we believed these calibrations, then these objects are clearly not in the Virgo cluster. More likely, however, this indicates that the metallicity-driven variation in $`\overline{M_I}`$, which is at the heart of the calibration (and the Worthey models), simply does not apply to LSB dEs, possibly due to discrete effects. At some level, the actual surface brightness (e.g. the number of stars per pixel) becomes important. Consider the extreme case where either the surface brightness is sufficiently low, or the pixel size is sufficiently small, that, on average, there is only 1 giant per pixel. The surface density of giants is then 1 $`\pm `$ 1 and the fluctuation signal would be 100%. In this limit, it is not clear that the $`\overline{M_I}`$ vs $`V-I`$ calibration means anything, because the dominant driver of the fluctuation signal is the fact that some pixels would have zero giants in them. The case of V7L3, where we derive a surface density of giant stars of $`\sim `$ 3 per 10 pc<sup>2</sup>, is close to this limit.
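As a check on the arithmetic, the following sketch applies both calibrations to the fluctuation magnitudes and color quoted above; all inputs are taken directly from the text, and the Tonry et al. (1997) relation is being extrapolated well blueward of its stated range of validity.

```python
# Apparent fluctuation magnitudes and color from the text
mbar_I = {"V1L4": 30.76, "V7L3": 30.45}
VI = 0.7

Mbar_1991 = -4.84 + 3.0 * VI              # Tonry (1991)
Mbar_1997 = -1.74 + 4.5 * (VI - 1.15)     # Tonry et al. (1997), extrapolated

for gal, m in mbar_I.items():
    print(f"{gal}: m-M = {m - Mbar_1991:.2f} (1991), {m - Mbar_1997:.2f} (1997)")
```

Averaging the two galaxies reproduces $`m-M`$ $`\approx `$ 33.3 for the 1991 calibration and roughly one magnitude more for the 1997 one.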
Of course, in this limit, the color fluctuations on the pixel scale would also be very severe. We had hoped to measure this effect with the combination of the F300W and F814W filters but were effectively thwarted by the low S/N in the F300W case. Without this additional information, our constraint on the stellar population per pixel is limited, and all we can really do is focus on the relative contributions of K vs M giants. In general, we find that we cannot simultaneously reproduce the inferred pixel density of giants and the observed $`V-I`$ color with any model that includes M giants. Another way to state this is by again comparing our results with the models of Worthey (1994). While it is possible to match our observed spectral fluctuations with Worthey’s predictions, our galaxies still remain significantly bluer in $`V-I`$ than the model predicts. Since the redder colors of Worthey’s models are due in large part to the presence of late K and M giant stars, this offers further evidence against a significant population of such stars within V1L4 and V7L3. The apparent paucity of these stars is likely an indication that these dE galaxies are relatively metal poor.
For the case of V2L8, we did not detect a fluctuation signal. While this may be due to its large angular extent on the WF3 frame, it might also indicate that V2L8 is background to Virgo. If indeed V2L8 is in the Virgo cluster, then we have discovered what is likely the smallest bulge measured to date, having an effective radius of only 50 pc. This bulge is quite red (as red as giant ellipticals) and thus may well be substantially more metal-rich than the rest of the galaxy. Possibly, it is the signature of a secondary star formation event that occurred over a very small spatial scale. To date, no other LSB dE galaxy that has been studied shows such a very small, very red core. Clearly, spectroscopy of this core is desirable. Either we have a very small bulge here, or V2L8 is in the background and may therefore be like Malin 1: an LSB object with an L\*, metal-rich bulge (see Impey and Bothun 1989).
Finally, we comment on the LSB nature of these objects. We find no evidence for small scale clumping of stars on the 10–20 parsec spatial scale. To first order, this suggests these systems are dynamically relaxed. Expansion of these systems is then unlikely to be the explanation for their observed low surface brightnesses. Since we have detected surface brightness fluctuations coming from a very small number of stars per pixel, we know that individual giant stars dominate the light per pixel. Thus, their LSB nature is also not caused by an absence of giant light. While this is not a surprising result, this study is the first to demonstrate it directly. This leaves the physical separation between individual giant stars as the cause of the observed low surface brightnesses. In the WFPC2 data, such low density galaxies could easily be dismissed as “sky noise” and remain undetected. The continuing difficulty of detecting faint LSB galaxies with any instrumentation has clear implications for reliable determinations of the faint end slope of the galaxy luminosity function.
We acknowledge HST award GO-05496, which helped support data acquisition and reduction. We also acknowledge NSF support for low surface brightness galaxy research at the University of Oregon. Conversations with Harry Ferguson on the general subject of luminosity fluctuations have greatly improved our understanding of the subject.
References
Bernstein, G. M., Nichol, R. C., Tyson, J. A., Ulmer, M. P., & Wittman, D. 1995, AJ, 110, 1507
Binggeli, B., Sandage, A., & Tarenghi, M. 1985, AJ, 90, 1681
Bothun, G. D. 1998, Modern Cosmological Observations and Problems, Chapter 2
Bothun, G. D., Impey, C. D., & Malin, D. F. 1991, ApJ, 376, 404
Bothun, G. D., Caldwell, N., & Schombert, J. 1989, AJ, 98, 1542
Bothun, G. D., & Mould, J. R. 1988, ApJ, 324, 123
Bothun, G. D., Mould, J. R., Caldwell, N., & MacGillivray, H. T. 1986, AJ, 94, 23
Bothun, G. D., Mould, J. R., Caldwell, N., & MacGillivray, H. T. 1986, AJ, 92, 1007
Bothun, G. D., Mould, J. R., Wirth, A., & Caldwell, N. 1985, AJ, 90, 697
Bothun, G. D., & Caldwell, C. N. 1984, ApJ, 280, 528
Brodie, J. P., & Huchra, J. P. 1991, ApJ, 379, 157
Caldwell, N., Armandroff, T. E., Da Costa, G. S., & Seitzer, P. 1998, AJ, 115, 535
Caldwell, C. N. 1987, AJ, 94, 1116
Caldwell, C. N., & Bothun, G. D. 1987, AJ, 94, 1126
Cawson, M. 1983, Ph.D. thesis, University of Cambridge
Conselice, C., & Gallagher, J. S. 1998, MNRAS, 297, L34
de Blok, W. J. G., & McGaugh, S. S. 1997, MNRAS, 290, 533
Dekel, A., & Silk, J. 1986, ApJ, 303, 39
Durrell, P., et al. 1996, AJ, 112, 972
Ferguson, H. C., & Binggeli, B. 1994, A&ARv, 6, 67
Ferguson, H. C. 1991, BAAS, 23, 1338
Frogel, J. 1984, ApJ, 278, 119
Held, E. V., & Mould, J. R. 1994, AJ, 107, 1307
Impey, C., & Bothun, G. 1997, ARA&A, 35, 267
Impey, C., & Bothun, G. 1989, ApJ, 341, 89
Impey, C., Bothun, G., & Malin, D. 1988, ApJ, 330, 634 (IBM)
Jerjen, H., Freeman, K. C., & Binggeli, B. 1998, AJ, 116, 2873
Jerjen, H., & Dressler, A. 1997, A&AS, 124, 1
Kepner, J. V., Babul, A., & Spergel, D. N. 1997, ApJ, 487, 61
Knezek, P. M., Sembach, K. R., & Gallagher, J. S. 1997, AAS, 191, 8108
Kormendy, J. 1985, ApJ, 295, 73
Kormendy, J. 1987, in Proceedings of the Eighth Santa Cruz Summer Workshop in Astronomy and Astrophysics, Santa Cruz, CA, July 21–Aug. 1, 1986 (New York: Springer-Verlag)
Lee, M. G., Freedman, W. L., & Madore, B. F. 1993, AJ, 106, 964
McGaugh, S., Schombert, J., & Bothun, G. 1995, AJ, 109, 2019
Meurer, G. R., Mackie, G., & Carignan, C. 1994, AJ, 107, 2021
Meylan, G., & Prugniel, P., eds. 1994, ESO Conference and Workshop Proceedings (Garching: European Southern Observatory)
O’Neil, K. 1997, Ph.D. dissertation, University of Oregon, Eugene
O’Neil, K., Bothun, G., & Impey, C. 1998, AJ, 117
O’Neil, K., Bothun, G., & Impey, C. 1999, submitted to ApJS
Peterson, R. C., & Caldwell, N. 1993, AJ, 105, 1411
Reed, B. C., Hesser, J., & Shawl, S. 1988, PASP, 100, 545
Sage, L. J., Salzer, J. J., & Loose, H.-H. 1992, A&A, 265, 19
Sandage, A., & Binggeli, B. 1984, AJ, 89, 919
Secker, J., Harris, W. E., & Plummer, J. D. 1997, PASP, 109, 1377
Secker, J., & Harris, W. E. 1996, ApJ, 469, 623
Silk, J., Wyse, R. F. G., & Shields, G. A. 1987, ApJ, 322, L59
Silva 1992, Ph.D. dissertation, University of Michigan, Ann Arbor
Spaans, M., & Norman, C. A. 1997, ApJ, 488, 2
Sung, E.-C., et al. 1998, ApJ, 505, 199
Tonry, J., et al. 1997, ApJ, 475, 399
Tonry, J. 1991, ApJ, 373, L1
Tonry, J., & Schneider, D. P. 1988, AJ, 96, 807
Ulmer, M. P., Bernstein, G. M., Martin, D. R., Nichol, R. C., Pendleton, J. L., & Tyson, J. A. 1996, AJ, 112, 2517
Vader, J. P. 1987, ApJ, 317, 128
Vader, J. P., & Chaboyer, B. 1994, AJ, 108, 1209
Whitmore, B. 1998, preprint
Worthey, G. 1994, ApJS, 95, 107
Young, C., & Currie, M. 1995, MNRAS, 273, 1141
Figures
Figure 1. HST WFPC2 mosaicked images of V1L4 (a), V2L8 (b), and V7L3 (c) taken through the F814W (I band) filter with a 2200s exposure time. These images are 2.6 arcminutes across.
Figure 2. The nuclear regions of the three galaxies, V1L4, V2L8, and V7L3, respectively. These images are each 49.8” across.
Figure 3. Surface brightness profiles of the inner regions of the three Virgo galaxies. Figure 3(a) shows V1L4, (b) shows the inner regions of V2L8 with the bright nucleus not included, and (c) shows the profile of V7L3.
Figure 4. Greyscale images of the central regions of the three galaxies in this study. Figure 4(a) shows V1L4, (b) and (c) show V2L8, and (d) shows V7L3. White circles demarcate the inner and outer edges of the constant surface brightness regions for each galaxy, while black contour lines encircle the regions 1 $`\sigma `$ above the sky level. All the images are 20” across, except for Figure 4(b), which shows the core of V2L8. To allow for comparison between images, a section of V2L8 (shown by a black box in Figure 4(b)) which is 20” across is shown in Figure 4(c). Note that these figures show the mosaicked images and are included only to indicate the studied areas. Mosaicked images were not used for the data analysis.
Tables
Table 1. The photometric and structural properties of the three Virgo galaxies as determined from ground based images.
Table 2. Luminosity fluctuations from the inner regions of the three galaxies.
Table 3. Stellar types used in the models
Table 4. Best fitting stellar populations to the observed pixel colors and luminosity fluctuations.
# Precision Measurement of Cosmic-Ray Antiproton Spectrum
## Abstract
The energy spectrum of cosmic-ray antiprotons ($`\overline{p}`$’s) has been measured in the range 0.18 to 3.56 GeV, based on 458 $`\overline{p}`$’s collected by BESS in the recent solar-minimum period. We have detected for the first time a distinctive peak at 2 GeV of $`\overline{p}`$’s originating from cosmic-ray interactions with the interstellar gas. The peak spectrum is reproduced by theoretical calculations, implying that the propagation models are basically correct and that different cosmic-ray species undergo a universal propagation. Future BESS flights toward the solar maximum will help us to study the solar modulation and the propagation in detail and to search for primary $`\overline{p}`$ components.
The origin of cosmic-ray antiprotons ($`\overline{p}`$’s) has attracted much attention since their observation was first reported by Golden et al. Cosmic-ray $`\overline{p}`$’s should certainly be produced by the interaction of Galactic high-energy cosmic rays with the interstellar medium. The energy spectrum of these “secondary” $`\overline{p}`$’s is expected to show a characteristic peak around 2 GeV, with sharp decreases of the flux below and above the peak, a generic feature which reflects the kinematics of $`\overline{p}`$ production. The secondary $`\overline{p}`$’s offer a unique probe of cosmic-ray propagation and of solar modulation. As other possible sources of cosmic-ray $`\overline{p}`$’s, one can conceive of novel processes, such as annihilation of neutralino dark matter or evaporation of primordial black holes. The $`\overline{p}`$’s from these “primary” sources, if they exist, are expected to be prominent at low energies and to exhibit large solar modulations. Thus they are distinguishable in principle from the secondary $`\overline{p}`$ component.
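The location of the peak traces back to the production kinematics, which can be made explicit with a two-line calculation. For a cosmic-ray proton striking an interstellar proton at rest, the lightest $`\overline{p}`$-producing channel is $`p+pp+p+p+\overline{p}`$, with a threshold kinetic energy of $`6m_pc^2`$; a minimal check (Python):

```python
# Threshold for p + p -> p + p + p + pbar on a target proton at rest:
# s_min = (4 m_p)^2 and s = 2 m_p E_lab + 2 m_p^2 give E_lab = 7 m_p.
m_p = 0.938  # GeV
E_lab = ((4*m_p)**2 - 2*m_p**2) / (2*m_p)
print("threshold kinetic energy:", E_lab - m_p, "GeV")  # = 6 m_p ~ 5.6 GeV
```

The steep drop of the secondary flux below the 2 GeV peak reflects this high production threshold together with the center-of-mass kinematics of the produced $`\overline{p}`$’s.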
The detection of the secondary peak and the search for a possible low-energy primary $`\overline{p}`$ component have been difficult to achieve because of huge backgrounds and the extremely small flux, especially at low energies. The first and subsequent evidence for cosmic-ray $`\overline{p}`$’s was reported at relatively high energies, where it was not possible to positively identify the $`\overline{p}`$’s with a mass measurement. The first “mass-identified” and thus unambiguous detection of cosmic-ray $`\overline{p}`$’s was performed by BESS ’93 in the low-energy region (4 events at 0.3 to 0.5 GeV), which was followed by the IMAX and CAPRICE detections. The BESS ’95 flight measured the spectrum at solar minimum, based on 43 $`\overline{p}`$’s over the range 0.18 to 1.4 GeV. We report here a new high-statistics measurement of the $`\overline{p}`$ spectrum based on 458 events in the energy range from 0.18 to 3.56 GeV.
Fig. 1 shows a schematic view of BESS. It was designed and constructed as a high-resolution spectrometer to perform searches for rare cosmic rays, as well as various precision measurements. A uniform field of 1 Tesla is produced by a thin (4 g/cm<sup>2</sup>) superconducting coil, through which particles can pass without too many interactions. The magnetic-field region is filled with the tracking volume. This geometry results in an acceptance of 0.3 m<sup>2</sup>sr, which is an order of magnitude larger than those of previous cosmic-ray spectrometers. The tracking is performed by fitting up to 28 hit-points in the drift chambers, resulting in a magnetic-rigidity ($`R`$) resolution of 0.5 % at 1 GV/$`c`$. The upper and lower scintillator-hodoscopes provide two d$`E`$/d$`x`$ measurements and the time-of-flight (TOF) of particles. The d$`E`$/d$`x`$ in the drift chamber gas is obtained as a truncated mean of the integrated charges of the hit-pulses. For the ’97 flight, the hodoscopes were placed at the outermost radii, and the timing resolution of each counter was improved to 50 psec rms, resulting in a $`\beta ^{-1}`$ resolution of 0.008, where $`\beta `$ is defined as the particle velocity divided by the speed of light. Furthermore, a Cherenkov counter with a silica-aerogel ($`n`$ = 1.032) radiator was newly installed, in order to veto $`e^{-}/\mu ^{-}`$ backgrounds, which gave large Cherenkov light outputs corresponding to 14.7 mean photo-electrons when crossing the aerogel.
The 1997 BESS balloon flight was carried out on July 27, from Lynn Lake, Canada. The scientific data were taken for 57,032 sec of live time at altitudes ranging from 38 to 35 km (an average residual air of 5.3 g/cm<sup>2</sup>) and cut-off rigidity ranging from 0.3 to 0.5 GV/$`c`$. The first-level trigger was provided by a coincidence between the top and the bottom scintillators, with the threshold set at 1/3 of the pulse height of minimum ionizing particles. The second-level trigger, which utilized the hit-patterns of the hodoscopes and the inner drift chambers (IDC), first rejected unambiguous null- and multi-track events and made a rough rigidity determination to select predominantly negatively-charged particles. In addition, one of every 60 first-level triggers was recorded, in order to build a sample of unbiased triggers.
The off-line analysis selects events with a single track fully contained in the fiducial region of the tracking volume with acceptable track qualities. The three d$`E`$/d$`x`$ measurements are loosely required, as a function of $`R`$, to be compatible with a proton or $`\overline{p}`$. The combined efficiency of these off-line selections is 83 – 88 % for $`R`$ from 0.5 to 4 GV/$`c`$. These simple and highly-efficient selections are sufficient for a very clean detection of $`\overline{p}`$’s in the low-velocity ($`\beta <0.9`$) region. At higher velocities, the $`e^{-}/\mu ^{-}`$ background starts to contaminate the $`\overline{p}`$ band, where we require the Cherenkov veto; i.e., 1) the particle trajectory must cross the fiducial volume of the aerogel, and 2) the Cherenkov output must be less than 0.09 of the mean output from $`e^{-}`$. This cut reduces the acceptance by 20 %, but rejects $`e^{-}/\mu ^{-}`$ backgrounds by a factor of 6000, while keeping 93 % efficiency for protons and $`\overline{p}`$’s which cross the aerogel with rigidity below the threshold (3.8 GV/$`c`$). Fig. 2 shows the $`\beta ^{-1}`$ versus $`R`$ plot for the surviving events. We see a clean narrow band of 415 $`\overline{p}`$’s at the exact mirror position of the protons. The $`\overline{p}`$ sample is thus mass-identified and background-free, as the cleanness of the band demonstrates and various background studies show. In particular, backgrounds of albedo and of mis-measured positive-rigidity particles are totally excluded by the excellent $`\beta ^{-1}`$ and $`R^{-1}`$ resolutions. To check against the “re-entrant albedo” background, we confirmed that the trajectories of all $`\overline{p}`$’s can be traced numerically through the Earth’s geomagnetic field back to the outside of the geomagnetic sphere.
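The mass identification underlying Fig. 2 is plain relativistic kinematics: the spectrometer measures the rigidity $`R=pc/Ze`$ and the TOF gives $`\beta `$, from which the mass follows. A minimal sketch:

```python
import math

def mass_GeV(R_GV, beta, Z=1):
    """Particle mass in GeV/c^2 from rigidity (GV) and velocity beta = v/c."""
    p = Z * R_GV                           # momentum in GeV/c
    return p * math.sqrt(1.0 / beta**2 - 1.0)

# A point on the (anti)proton band: beta = p/E with m = 0.938 GeV at p = 1.5 GeV/c
beta = 1.5 / math.hypot(1.5, 0.938)
print(mass_GeV(1.5, beta))   # ~0.938
```

In the $`\beta ^{-1}`$ vs $`R`$ plane each mass traces the curve $`\beta ^{-1}=\sqrt{1+m^2/(ZR)^2}`$ (in GV and GeV units), with $`\overline{p}`$’s appearing at negative rigidity as the mirror image of the proton band.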
We obtain the $`\overline{p}`$ fluxes at the top of the atmosphere (TOA) in the following way: The geometrical acceptance of the spectrometer is calculated both analytically and by two independent Monte Carlo methods. The live data-taking time is directly measured by two independent scaler systems gated by the “ready” gate which controls the first-level trigger. The efficiencies of the second-level trigger and of the off-line selections are determined by using the unbiased trigger sample. The TOA energy of each event is calculated by tracing the particle back through the detector material and the air. The interaction loss of the $`\overline{p}`$’s is evaluated by applying the same selections to the Monte Carlo events generated by geant/gheisha, which incorporates a detailed material distribution and the correct $`\overline{p}`$-nuclei cross sections. We subtract the expected number of atmospheric $`\overline{p}`$’s, produced by the collisions of cosmic rays in the air. The subtraction amounts to $`9\pm 2`$ %, $`15\pm 3`$ % and $`19\pm 5`$ %, respectively, at 0.25, 0.7 and 2 GeV, where the errors correspond to the maximum difference among three recent calculations which agree with each other. Proton fluxes are obtained in a similar way. Atmospheric protons are subtracted by following Papini.
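Schematically, each flux point is assembled as in the sketch below; every number except the live time and acceptance quoted in the text is a placeholder, since the per-bin counts and efficiencies live in Table I rather than in the prose.

```python
# Schematic TOA flux for one energy bin (placeholder inputs).
N_pbar     = 40        # antiprotons in the bin (hypothetical)
f_atm      = 0.15      # atmospheric-secondary fraction (text: 9-19%)
eff        = 0.80      # trigger x selection x survival efficiency (hypothetical)
live_time  = 57032.0   # s, from the text
acceptance = 0.3       # m^2 sr, from the text
dE         = 0.5       # GeV, bin width (hypothetical)

flux = N_pbar * (1.0 - f_atm) / (eff * acceptance * live_time * dE)
print(flux, "m^-2 sr^-1 s^-1 GeV^-1")
```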
Table I contains the resultant BESS ’97 $`\overline{p}`$ fluxes and $`\overline{p}/p`$ flux ratios at TOA. The first and the second errors represent the statistical and systematic errors, respectively. We checked that the central values of the fluxes are stable against various trial changes of the selection criteria, including uniform application of the Cherenkov veto also to the low-$`\beta `$ region. The dominant systematic errors at high and low energies, respectively, are the uncertainties in the atmospheric $`\overline{p}`$ calculations and in the $`\overline{p}`$ interaction losses, to which we attribute a $`\pm `$15 % relative error. As shown in Table I, the BESS ’97 fluxes are consistent with the ’95 fluxes in the overlapping low-energy range (0.2 to 1.4 GeV). The solar activities at the time of the two flights were both close to the minimum, as shown by world neutron monitors and by the low-energy proton spectra measured by BESS.
Shown in Fig. 3 is the combined BESS (’95+’97) spectrum, in which we detect for the first time the distinctive peak at 2 GeV of the secondary $`\overline{p}`$’s, which clearly are the dominant component of the cosmic-ray $`\overline{p}`$’s.
The measured secondary $`\overline{p}`$ spectrum provides crucial tests of models of propagation and solar modulation, since one has a priori knowledge of the input source spectrum for the secondary $`\overline{p}`$, which can be calculated by combining the measured proton and helium spectra with the accelerator data on the $`\overline{p}`$ production. The distinct peak structure of the $`\overline{p}`$ spectrum also has clear advantages in these tests over the monotonic (and unknown) source spectra of other cosmic-rays.
The curves shown in Fig. 3 are recent theoretical calculations for the secondary $`\overline{p}`$ in a diffusion model and a leaky box model, in which the propagation parameters (diffusion coefficient or escape length) are deduced by fitting various data on cosmic-ray nuclei, such as the Boron/Carbon ratio, under the assumption that the different cosmic-ray species (nuclei, protons and $`\overline{p}`$’s) undergo a universal propagation process. These calculations use as essential inputs recently measured proton spectra, which are significantly (by a factor of 1.4 to 1.6) lower than previous data in the energy range (10 to 50 GeV) relevant to the $`\overline{p}`$ production.
These calculations reproduce our spectrum in the peak region remarkably well, within their $`\pm `$15 % estimated accuracy. This implies that the propagation models are basically correct and that different cosmic-ray species undergo a universal propagation process.
At low energies, the calculations predict somewhat diverse spectra reflecting various uncertainties, which presently make it difficult to draw any conclusion on a possible admixture of a primary $`\overline{p}`$ component. As noted in Ref., the rapid increase of solar activity toward the year 2001 will drastically suppress a primary $`\overline{p}`$ component such as that from primordial black holes, while changing the shape of the secondary $`\overline{p}`$ spectrum only modestly. This will help us to separate out the “primary” and “secondary” components at low energies. Future annual BESS flights will thus be important to search for a primary $`\overline{p}`$ component, to study the solar modulation, and to investigate further details of the propagation.
Since most previous data were presented in the form of $`\overline{p}/p`$ flux ratios, a compilation is made in Fig. 4, which again shows the unprecedented accuracy of our measurement.
Sincere thanks are given to NASA and NSBF for the balloon launch. The analysis was performed by using the computing facilities at ICEPP, Univ. of Tokyo. This experiment was supported by Grant-in-Aid from Monbusho in Japan and by NASA in the USA.
# ON THE ORIGIN OF THE SHORT RANGE NN REPULSION
## Abstract
We calculate S-wave singlet and triplet NN phase shifts stemming from the short-range flavor-spin hyperfine interaction between constituent quarks using the resonating group method approach. A strong short-range repulsion is found in both waves. A fair comparison is performed between the traditional picture, relying on the colour-magnetic interaction, and the present one, relying on the Goldstone boson exchange dynamics. It is shown that the latter induces an essentially stronger repulsion, which is a very welcome feature. We also study the sensitivity of the phase shifts and the wave function to the extension from the one-channel to the three-channel resonating group method approximation.
One of the crucial questions of low-energy QCD is which physics, inherent in QCD, is responsible for the low-energy properties of light and strange baryons and their interactions. At very high momentum transfer (the ultraviolet regime of QCD) the nucleon is viewed as a system of weakly interacting partons, which justifies the use of perturbative QCD tools there. The low-energy properties, like masses or low-energy interactions, are much harder, if not impossible, to understand in terms of the original QCD degrees of freedom, and one obviously needs effective theories. One can borrow wisdom from the physics of many-fermion systems, e.g. in condensed matter, which suggests that in such a situation the concept of quasiparticles in the Bogoliubov or Landau sense becomes very useful. The idea of quasiparticles is that in some circumstances one can approximately absorb the complicated interactions between bare fermions, i.e. in our case current quarks, into static properties of quasiparticles, e.g. their masses, and what is left beyond that should be treated as residual interactions between quasiparticles. Such a concept should be helpful for understanding the low-energy properties of light baryons, where a typical momentum of quarks is below the chiral symmetry breaking scale, $`\mathrm{\Lambda }_\chi \sim 1`$ GeV. This implies that the low-energy characteristics of baryons, such as masses, are formed by the nonperturbative QCD dynamics, which is responsible for the chiral symmetry breaking and confinement, and not by the perturbative QCD interactions, which should be active at a much higher scale, where the quasiparticles do not exist. At low momenta the scalar part of the nonperturbative gluonic interaction between current quarks, which triggers the chiral symmetry breaking (i.e. pairs the left quarks and right antiquarks and vice versa in the QCD vacuum), can be absorbed into the mass of the quasiparticles - constituent quarks. At the same time this nonperturbative interaction, iterated in the $`qq`$ t-channel in baryons, leads to poles which can be identified as Goldstone boson exchange between valence quarks in baryons. This is a general feature and does not depend on the details or nature of this nonperturbative interaction. If so, the adequate residual interactions between the constituent quarks in baryons at low momenta, $`q<\mathrm{\Lambda }_\chi `$, should be the effective confining interaction and the Goldstone boson exchange (GBE).
By now it is established that such a picture is very successful in light and strange baryon spectroscopy. Similar conclusions have been obtained recently in lattice studies of the $`N\mathrm{\Delta }`$ splitting, and in large $`N_c`$ and phenomenological analyses of L=1 spectra. If such a physical picture is satisfactory, it should also explain the baryon-baryon interaction. It is rather evident that at medium and large distances in the baryon-baryon system, where the Pauli principle at the constituent quark level does not play any role, it is fully compatible with the wisdom of nuclear physics, where the $`NN`$ interaction is determined by the Yukawa tail of the pion exchange and the two-pion exchange ($`\rho `$\- and $`\sigma `$-exchange interactions). The explanation that the short-range repulsion is due to the central spin-independent part of the $`\omega `$ exchange is not satisfactory, however, as in this case the $`\omega N`$ coupling constant would have to be increased by a factor of 3 compared to its empirical value. One takes it for granted that the origin of the short-range $`NN`$ repulsion should be the same as the origin of the nucleon mass and its lowest excitations. If so, the Fermi nature of constituent quarks and the specific interactions between them should be of crucial importance for understanding the short-range $`NN`$ repulsion.
Traditionally the repulsive core in the $`NN`$ system within the constituent quark picture was attributed to the colour-magnetic part of the one gluon exchange (OGE) interaction combined with quark interchanges between $`3Q`$ clusters (for reviews and earlier references see ). However, as follows from the previous discussion, it is dubious to use the language of constituent quarks and at the same time of perturbative one gluon exchange. So the important question is whether one can understand the short-range $`NN`$ repulsion in terms of residual interactions like GBE.
The first simple analysis of possible effects on the S-wave $`NN`$ system from the short-range part of the pion-exchange interaction between quarks was based on the assumption that the $`6Q`$ wave function in the nucleon overlap region has the flavor-spin symmetry $`[33]_{FS}`$, which is the only possible symmetry in the nonexcited $`s^6`$ configuration. In that paper, as well as in the subsequent hybrid models, it was assumed that the pion exchange produces only some insignificant part of the short-range $`NN`$ repulsion. However, when the GBE-like hyperfine interaction is made strong enough to produce the $`\mathrm{\Delta }N`$ splitting and describe the low-lying spectrum, the situation is different. The GBE-like interaction is more attractive within the $`6Q`$ configuration with the symmetry $`[51]_{FS}`$, and thus the spatially excited configuration $`s^4p^2[51]_{FS}`$ is more favourable and becomes the lowest one. The energy of this configuration is, however, still much higher than the energy of two infinitely separated nucleons, and that is why a strong short-range repulsion appears in the $`NN`$ system. While this result demonstrates that the GBE-like hyperfine interaction could indeed explain the short-range repulsive core in the NN system, it is still only suggestive (the phase shifts have not been calculated), as it is based on the adiabatic approximation and neglects a smooth transition to the distances with a well-clustered $`6Q`$ system. In the present work we go beyond the adiabatic approximation and construct our basis in such a way that it includes not only the lowest important $`s^4p^2`$ and $`s^6`$ configurations, as in , but also the well-clustered states at medium and long distances. We calculate both the $`{}_{}{}^{3}S_{1}^{}`$ and $`{}_{}{}^{1}S_{0}^{}`$ phase shifts and prove that the GBE-like flavor-spin hyperfine interaction does supply a very strong short-range repulsion in the $`NN`$ system. We compare this repulsion with the one induced by the colour-magnetic interaction within the traditional picture and find the former to be much stronger. This is a very welcome feature, as the models of the short-range $`NN`$ repulsion based on the OGE interaction fail to describe phase shifts above a lab energy of about 300 MeV because of the lack of a strong enough short-range repulsion.
A convenient basis for solving the Schrödinger equation, which comprises the short-range $`6Q`$ configurations and incorporates the $`NN`$ asymptotics as well as a smooth transition from the nucleon overlap region to medium ranges, is suggested by the resonating group method (RGM) approximation. The simplest one-channel ansatz for the six-quark two-nucleon wave function is
$`\psi `$ $`=`$ $`\widehat{A}\{N(1,2,3)N(4,5,6)\chi (\vec{r})\},`$ (1)
$`\widehat{A}`$ $`=`$ $`{\displaystyle \frac{1}{\sqrt{10}}}(1-9\widehat{P}_{36}),`$ (2)
$`\vec{r}`$ $`=`$ $`{\displaystyle \frac{\vec{r}_1+\vec{r}_2+\vec{r}_3}{3}}-{\displaystyle \frac{\vec{r}_4+\vec{r}_5+\vec{r}_6}{3}}.`$ (3)
Here $`N(1,2,3)`$ is the $`s^3`$ harmonic oscillator wave function of the nucleon with a standard $`SU(6)_{FS}`$ spin-isospin part, $`\widehat{A}`$ is an antisymmetrizer at the quark level, and the trial function $`\chi (\vec{r})`$ is obtained by solving the Schrödinger equation; for a review see . We remind the reader, however, that at short range the trial function $`\chi (\vec{r})`$ should by no means be interpreted as a relative motion wave function: in the nucleon overlap region, because of the antisymmetrizer, the function (1) is an intrinsically 6-body function and contains many other “baryon-baryon components”, such as $`\mathrm{\Delta }\mathrm{\Delta }`$, $`NN^{*}`$, $`N^{*}N^{*}`$, …, and hidden colour components.
The trial function (1) completely includes the symmetric short-range $`s^6`$ shell-model configuration, provided that the harmonic oscillator parameters for the $`s^3`$ nucleon and for the $`s^6`$ configuration coincide. This is because of the well-known identity
$$\widehat{A}\{N(1,2,3)N(4,5,6)\varphi _{0s}(\vec{r})\}_{SI}=\sqrt{\frac{10}{9}}|s^6[6]_O[33]_{FS}>.$$
(4)
Here and below $`\varphi _{Ns}(\vec{r})`$ denotes the S-wave harmonic oscillator function with $`N`$ excitation quanta, and $`[f]_O`$ and $`[f]_{FS}`$ are Young diagrams (patterns) describing the permutational orbital and flavour-spin symmetries of the $`6Q`$ system, which are necessary to identify a given configuration in the shell-model basis. It is always assumed that the center-of-mass motion is removed from the shell-model wave function.
However, the ansatz (1) contains only a fixed superposition of the different shell-model configurations from the $`s^4p^2`$ shell:
$`\widehat{A}\{N(1,2,3)N(4,5,6)\varphi _{2s}(\vec{r})\}_{SI}`$ (5)
$`={\displaystyle \frac{3\sqrt{2}}{9}}|(\sqrt{{\displaystyle \frac{5}{6}}}s^52s-\sqrt{{\displaystyle \frac{1}{6}}}s^4p^2)[6]_O[33]_{FS}>`$ (6)
$`-{\displaystyle \frac{4\sqrt{2}}{9}}|s^4p^2[42]_O[33]_{FS}>`$ (7)
$`-{\displaystyle \frac{4\sqrt{2}}{9}}|s^4p^2[42]_O[51]_{FS}>.`$ (8)
One can extend the ansatz (1) and include in addition two new channels, the “$`\mathrm{\Delta }\mathrm{\Delta }`$” and the “hidden colour” channel CC:
$`\psi =\widehat{A}\{N(1,2,3)N(4,5,6)\chi _{NN}(\vec{r})\}`$ (9)
$`+\widehat{A}\{\mathrm{\Delta }(1,2,3)\mathrm{\Delta }(4,5,6)\chi _{\mathrm{\Delta }\mathrm{\Delta }}(\vec{r})\}`$ (10)
$`+\widehat{A}\{C(1,2,3)C(4,5,6)\chi _{CC}(\vec{r})\},`$ (11)
where $`\mathrm{\Delta }(1,2,3)`$ is the $`s^3`$ harmonic oscillator $`SU(6)_{FS}`$ wave function of the $`\mathrm{\Delta }`$-resonance, and the hidden-colour $`CC`$ channel is built from colour-octet $`s^3`$ clusters $`C`$. Here we followed the definition of the hidden-color channel $`CC`$ in ref. . The hidden-color state is constructed so that it contains only the flavor-spin $`[33]_{FS}`$ symmetry. It must be noted that $`C`$ is a color octet but does not have a definite spin and isospin. Note that all three channels in (11) are highly non-orthogonal because of the antisymmetrizer. This can easily be seen from the fact that identities similar to (4) can be written also for the $`\mathrm{\Delta }\mathrm{\Delta }`$ and $`CC`$ channels. This redundancy in the subspace $`N=0`$ is only a technical one and can easily be avoided by diagonalizing the RGM norm matrix and removing all “forbidden states”. However, in the subspace with $`N=2`$, these three channels become linearly independent, since the following identities are also valid for the $`\mathrm{\Delta }\mathrm{\Delta }`$ and $`CC`$ channels:
$`\widehat{A}\{\mathrm{\Delta }(1,2,3)\mathrm{\Delta }(4,5,6)\varphi _{2s}(\vec{r})\}_{SI}=`$ (12)
$`{\displaystyle \frac{6\sqrt{10}}{45}}|(\sqrt{{\displaystyle \frac{5}{6}}}s^52s-\sqrt{{\displaystyle \frac{1}{6}}}s^4p^2)[6]_O[33]_{FS}>`$ (13)
$`+{\displaystyle \frac{8\sqrt{10}}{45}}|s^4p^2[42]_O[33]_{FS}>`$ (14)
$`-{\displaystyle \frac{2\sqrt{10}}{9}}|s^4p^2[42]_O[51]_{FS}>,`$ (15)
$`\widehat{A}\{C(1,2,3)C(4,5,6)\varphi _{2s}(\vec{r})\}_{SI}`$ (16)
$`={\displaystyle \frac{2\sqrt{10}}{5}}|(\sqrt{{\displaystyle \frac{5}{6}}}s^52s-\sqrt{{\displaystyle \frac{1}{6}}}s^4p^2)[6]_O[33]_{FS}>`$ (17)
$`+{\displaystyle \frac{2\sqrt{10}}{15}}|s^4p^2[42]_O[33]_{FS}>.`$ (18)
Because the trial functions $`\chi _{NN},\chi _{\mathrm{\Delta }\mathrm{\Delta }}`$ and $`\chi _{CC}`$ are independent trial functions in the full Hilbert space (i.e. they completely include $`\varphi _{2s}`$), the compact shell-model configurations $`|\sqrt{\frac{5}{6}}s^52s-\sqrt{\frac{1}{6}}s^4p^2[6]_O[33]_{FS}>`$, $`|s^4p^2[42]_O[33]_{FS}>`$ and $`|s^4p^2[42]_O[51]_{FS}>`$ are relaxed and participate as independent variational configurations when one applies the ansatz (11), in contrast to the ansatz (1). The other possible compact $`6Q`$ configurations from the $`s^4p^2`$ shell, such as $`[411]_{FS}`$, $`[321]_{FS}`$ and $`[2211]_{FS}`$, are not taken into account, but they play only a very modest role when one applies the interaction (19).
First we study the effect of the GBE-like flavor-spin short-range interaction. This interaction can be parametrized as
$$V_\chi =-\underset{i<j}{\sum }\frac{a_\chi }{m_im_j}\,\vec{\tau }_i\cdot \vec{\tau }_j\,\vec{\sigma }_i\cdot \vec{\sigma }_j\,\mathrm{\Lambda }^2\frac{e^{-\mathrm{\Lambda }r}}{r},$$
(19)
where $`\vec{\tau }`$ and $`\vec{\sigma }`$ are the quark isospin and spin matrices, respectively. In this qualitative paper we confine ourselves to the $`\pi `$-exchange between $`u,d`$ quarks, as the contribution of the $`\eta `$ exchange is much smaller.
The minus sign of the interaction (19) is related to the sign of the short-range part of the pseudoscalar meson-exchange interaction (which is opposite to that of the Yukawa tail), crucial for the hyperfine splittings in baryon spectroscopy. It is significant that this short-range part appears at leading order within chiral perturbation theory (i.e. in the chiral limit), while the Yukawa part contributes only in subleading orders and vanishes in the chiral limit. The parameter $`\mathrm{\Lambda }`$, which determines the range of this interaction, is fixed by the scale of spontaneous breaking of chiral symmetry, $`\mathrm{\Lambda }\sim 1`$ GeV. The Yukawa part of the interaction, on the other hand, is determined by the pion mass and is not important for the interaction of quarks at distances of 0.5–0.8 fm, which is a typical distance between quarks in the nucleon and which is important for the short-range $`NN`$ interaction. Note that the short-range interaction of the form (19) also comes from the $`\rho `$-exchange, which can be considered a representation of the correlated two-pion exchange. The parameter $`a_\chi `$, which determines the total strength of the pseudoscalar and vector-like hyperfine interactions with the $`s^3`$ ansatz for both the nucleon and $`\mathrm{\Delta }`$ wave functions, is fixed to reproduce the $`\mathrm{\Delta }N`$ mass splitting. The constituent masses are taken to have their typical values, $`m=\frac{1}{3}m_N`$. When the confining interaction between quarks is assumed to be colour-electric, pairwise, and of harmonic form, it does not contribute at all to the two-nucleon problem, as long as the ansätze (1) or (11) are used and the two-nucleon threshold, calculated with the same Hamiltonian, is subtracted. Hence within the given toy model we have only one free parameter, the nucleon matter root-mean-square size $`b`$, which coincides with the harmonic oscillator parameter of the $`s^3`$ wave function. We fix it to be $`b=0.5`$ fm. The parameters used in the calculation are summarized in Table I.
The second model is a traditional one, based on the colour-magnetic component of OGE
$$V_{cm}=-\underset{i<j}{\sum }\frac{a_{cm}}{m_im_j}\,\lambda _i^C\lambda _j^C\,\vec{\sigma }_i\cdot \vec{\sigma }_j\,\mathrm{\Lambda }^2\frac{e^{-\mathrm{\Lambda }r}}{r},$$
(20)
where the $`\lambda ^C`$ are the color Gell-Mann matrices, with an implied summation over $`C=1,\dots ,8`$. We want to make a fair comparison between the two models and thus use exactly the same $`b`$ and $`\mathrm{\Lambda }`$. The effective OGE coupling constant $`a_{cm}`$ is determined from the $`\mathrm{\Delta }N`$ mass splitting and is also given in the Table. Thus we can study the difference between the repulsion implied by the flavor-spin and by the color-spin structures of the hyperfine interactions.
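Before solving any six-body dynamics, the two hyperfine structures can be compared at the level of the couplings, both being fixed from the same $`\mathrm{\Delta }N`$ splitting. The sketch below (Python) uses the standard $`s^3`$ expectation values of the two operators (15 and 3 for $`N`$ and $`\mathrm{\Delta }`$ in the flavour-spin case, 8 and $`-`$8 in the colour-magnetic case); these algebraic values and the empirical splitting of 0.293 GeV are our inputs here, not numbers quoted in the Table.

```python
# With a common radial matrix element I = <Lambda^2 exp(-Lambda r)/r> over the
# s^3 pair wave function, the hyperfine energy of a baryon B is
#   E_B = -(a/m^2) * I * <O>_B,
# where O is the spin-flavour or colour-spin operator of Eqs. (19)-(20).
O = {
    "flavour-spin (19)":    {"N": 15.0, "Delta":  3.0},
    "colour-magnetic (20)": {"N":  8.0, "Delta": -8.0},
}
splitting = 0.293  # GeV, empirical Delta-N mass difference (assumed input)

for model, ev in O.items():
    # splitting = E_Delta - E_N = (a/m^2) * I * (<O>_N - <O>_Delta)
    strength = splitting / (ev["N"] - ev["Delta"])
    print(f"{model}: (a/m^2)*I = {strength:.4f} GeV")
```

Fixed this way, the two couplings come out comparable in size; the much stronger $`NN`$ repulsion found for (19) in Figures 1 and 2 therefore originates in the operator structure, which allows direct as well as quark-exchange contributions, as discussed below.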
In Figures 1 and 2 we show the S-wave triplet and singlet phase shifts as a function of the center-of-mass momentum for both models. The phase shifts are negative and thus indicate repulsion in both cases. However, it is immediately seen from the comparison that the flavour-spin hyperfine interaction (19) supplies an essentially stronger repulsion than the colour-magnetic interaction (20). One of the reasons is that while the colour-magnetic interaction contributes to the short-range repulsion exclusively via the quark-exchange terms, which vanish in the limit of zero nucleon size, the repulsion in the NN system stemming from the flavor-spin interaction is supported by both direct and quark-exchange terms and does not vanish in this limit.
Next we address the issue whether the extension from the one-channel ansatz (1) to the three-channel ansatz (11) is important for the phase shifts and the six-quark wave function. We employ the model (19).
In Fig. 3 we compare phase shifts calculated with the one-channel and the three-channel ansätze. While there is some difference, it is not significant. Note that the inclusion of the $`\mathrm{\Delta }\mathrm{\Delta }`$ channel will produce an important effect as soon as the long- and intermediate-range attraction between quarks ($`\pi `$, 2$`\pi `$ or $`\sigma `$ exchanges) is included.
There is, however, a difference in the short-range 6Q wave functions. Unfortunately it is not possible to display the six-body wave function in both cases, but we can compare a projection of the wave function onto a given baryon-baryon component. There is no unique definition of such a projection, because it is not an observable and does not have a direct physical meaning (for a discussion of this issue see ref. ). Only the full 6-body wave function can be used to calculate any observable, which includes both the direct and the quark-interchange terms. We shall use two different definitions, one of them via the first power of the norm kernel (this corresponds to the one used in )
$$\overline{\chi }_\alpha (\vec{r}^{\prime \prime })=\int d\vec{r}^{\prime }\,N_{\beta \alpha }(\vec{r}^{\prime },\vec{r}^{\prime \prime })\chi _\beta (\vec{r}^{\prime }),$$
(21)
$$N_{\beta \alpha }(\vec{r}^{\prime },\vec{r}^{\prime \prime })=<B_\beta (1,2,3)B_\beta (4,5,6)\delta (\vec{r}-\vec{r}^{\prime })|1-9\widehat{P}_{36}|B_\alpha (1,2,3)B_\alpha (4,5,6)\delta (\vec{r}-\vec{r}^{\prime \prime })>,$$
(22)
where $`B_\alpha =N,\mathrm{\Delta },C`$. The other definition uses a square root of the norm kernel
$$\overline{\chi }_\alpha ^{\prime }(\vec{r}^{\prime \prime })=\int d\vec{r}^{\prime }\,N_{\beta \alpha }^{1/2}(\vec{r}^{\prime },\vec{r}^{\prime \prime })\chi _\beta (\vec{r}^{\prime }).$$
(23)
Sometimes the latter projection is interpreted as a probability density for a given channel, which is, however, not correct, since only the full 6-body wave function has a direct and clear probability interpretation.
Both types of projections would give an identical result if one used a multichannel ansatz for the wave function with all possible baryon states. Then the closure relation
$$\underset{\alpha }{\sum }|B_\alpha (1,2,3)B_\alpha (4,5,6)><B_\alpha (1,2,3)B_\alpha (4,5,6)|=I$$
(24)
would be satisfied, and one would obtain $`\widehat{N}=(\widehat{N}^{1/2})^2`$ (which is satisfied on the subspace $`N=0`$ but not on the subspace $`N=2`$ with $`B_\alpha =N,\mathrm{\Delta },C`$).
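The difference between the two definitions can be illustrated with a toy linear-algebra example; the 2$`\times `$2 “norm kernel” below is invented for the purpose and has nothing to do with the actual RGM kernel.

```python
import numpy as np

# Hypothetical discretised norm kernel (not from the actual RGM calculation)
N = np.array([[1.0, 0.4],
              [0.4, 0.5]])
chi = np.array([1.0, -0.3])            # toy channel amplitudes

w, V = np.linalg.eigh(N)               # N is symmetric and positive definite here
N_half = V @ np.diag(np.sqrt(w)) @ V.T # matrix square root

print(N @ chi)        # analogue of Eq. (21): first power of the kernel
print(N_half @ chi)   # analogue of Eq. (23): square root of the kernel
```

The two outputs coincide only if $`\widehat{N}`$ is idempotent, i.e. exactly when the closure relation above holds on the given subspace.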
In Fig. 4 we show projections onto $`NN`$ using both the one-channel and the three-channel ansätze and both definitions of the projection. It is indeed clearly seen that the different definitions give a different behaviour of the projections at short range. While there is a node with the definition (23), such a node is absent with the definition (21), which illustrates the very limited physical meaning of the projections. Still, when we compare the projections obtained with the different ansätze (1) or (11) within the same definition (23), one observes a significant difference, which is no surprise, since the ansatz (11) is much richer at short distances in the NN system.
In conclusion, we summarize. The short-range flavour-spin hyperfine interaction between the constituent quarks implies a strong short-range repulsion in the NN system. This repulsion is essentially stronger than the one supplied by the colour-magnetic interaction within the traditional model. This is a welcome feature, as the traditional models, based on the colour-magnetic interaction, do not provide a strong enough short-range repulsion and fail to describe the phase shifts above the lab energy of 300 MeV. Another significant difference is that the interaction (19) implies a repulsion of the same strength in both the singlet and triplet partial waves, while the colour-magnetic interaction supplies a repulsion of different strength, which makes it difficult to describe both partial waves at the same time. While both models imply a repulsion in the $`NN`$ system, their implications are dramatically different in the ”H-particle” channel. The colour-magnetic interaction, reinforced by the Yukawa parts of the meson exchanges, leads to a deeply bound H-particle, while the interaction (19) tends to make the $`6q`$ system with ”H-particle” quantum numbers unbound or loosely bound. The existing experimental data exclude a deeply bound H-particle.
Thus the chiral constituent quark model has a good potential to explain not only baryon spectroscopy, but also the baryon-baryon interaction. The next stage is to add the long-range Yukawa potential tail from one-pion and two-pion (sigma + rho) exchanges (and possibly from omega exchange) and provide a realistic description of the NN system, including all the necessary spin-spin, tensor and spin-orbit components. This task is rather involved, and all groups with the corresponding experience are invited.
L.Ya.G. acknowledges the warm hospitality of the nuclear theory groups of KEK-Tanashi and the Tokyo Institute of Technology. His work is supported by the foreign guest-professorship program of the Ministry of Education, Science, Sports and Culture of Japan.
# Relativistic Effects of Light in Moving Media with Extremely Low Group Velocity
\[
## Abstract
A moving dielectric medium acts as an effective gravitational field on light. One can use media with extremely low group velocities \[Lene Vestergaard Hau et al., Nature 397, 594 (1999)\] to create dielectric analogs of astronomical effects on Earth. In particular, a vortex flow imprints a long–ranging topological effect on incident light and can behave like an optical black hole.
\]
According to general relativity, acceleration and gravitation are equivalent in the absence of other forces. A freely falling test particle, seen in any local inertial system, moves along a straight line. And yet, the inertial frames along the particle’s path are non–trivially connected; space–time is curved such that the trajectory is bent in general. An analogous situation occurs when light propagates in a dielectric medium . Seen locally, light rays are straight lines in all volume elements of the medium. Seen globally, the medium elements might move in different directions and drag the light or the refractive index may vary such that light rays are curved. Seen in four–dimensional space–time, light follows a zero–geodesic line with respect to a metric that comprises the medium’s dielectric properties .
Ordinary dielectrics require astronomical velocity gradients to establish some of the spectacular effects of general relativity, velocities that are comparable with the speed of light in the medium. Recently, extraordinary dielectrics that are distinguished by an extremely low group velocity of light have been made on Earth . As we shall describe in this paper, the reported experiment is sensitive enough to detect quantum vortices via an optical Aharonov–Bohm effect. Furthermore, a vortex may become an optical black hole. A vortex turns out to generate an event horizon for light, a radius of no return, beyond which light falls inevitably towards the vortex singularity. Similar to a star that turns into a black hole when the gravitational Schwarzschild radius exceeds the star’s size, a vortex appears as a black hole when the optical Schwarzschild radius exceeds the radius of the core (the size of the “eye of the hurricane”).
Optical effects of moving media have been known for a long time. In 1818 Fresnel concluded correctly from an ether theory that a moving medium will drag light. Fizeau observed Fresnel’s drag effect in 1851. In 1895 Lorentz derived an additional drag component that is due to optical dispersion (the frequency–dependence of the refractive index). Zeeman was able to verify experimentally Lorentz’ effect. In 1923 Gordon formulated the electromagnetism in dispersionless media in terms of an effective gravitational field (an effective non–Euclidean metric). Let us develop a theory of light propagation in highly dispersive and transparent media in the spirit of Gordon’s Lichtfortpflanzung nach der Relativitätstheorie .
The model.— Imagine that a dielectric consists of small volume elements. Each element is sufficiently small such that the refractive index $`n`$ and the medium velocity $`𝐮`$ do not vary significantly, but each volume element is large enough to sustain several optical oscillations. We thus assume that the properties of the dielectric do not vary substantially over the effective optical wave length in the medium. In this case the propagation of light in each medium element does not depend on the polarization, and we can describe light waves by the scalar dispersion relation
$$k^{\prime 2}-\frac{\omega ^{\prime 2}}{c^2}-\chi (\omega ^{\prime })\frac{\omega ^{\prime 2}}{c^2}=0.$$
(1)
Here $`𝐤^{\prime }`$ denotes the local wave vector, $`\omega ^{\prime }`$ is the local optical frequency, and we use primes to distinguish quantities in locally co–moving medium frames. Let us specify the susceptibility $`\chi (\omega ^{\prime })`$.
Electromagnetically induced transparency has been applied to create dielectrics with extraordinarily low group velocity. Here a coherent electromagnetic wave drives the atoms of the medium into a quantum–superposition state such that a probe wave can travel through a dielectric that would otherwise be completely opaque. Under ideal circumstances the probe experiences, at a certain frequency $`\omega _0`$, a vanishing susceptibility $`\chi `$ and a real (and extremely low) group velocity without group–velocity dispersion. We thus assume that in the spectral vicinity of $`\omega _0`$ the susceptibility is, up to terms of third order in $`\omega ^{\prime }-\omega _0`$,
$$\chi (\omega ^{\prime })=\frac{2\alpha }{\omega _0}(\omega ^{\prime }-\omega _0)+\mathrm{O}\left((\omega ^{\prime }-\omega _0)^3\right).$$
(2)
We obtain from Eqs. (1) and (2) the group velocity
$$v_g\equiv \frac{\partial \omega ^{\prime }}{\partial k^{\prime }}\bigg|_{\omega _0}=\left(\frac{\partial k^{\prime }}{\partial \omega ^{\prime }}\bigg|_{\omega _0}\right)^{-1}=\frac{c}{\alpha +1}.$$
(3)
To have a definite geometry in mind, we imagine that the medium flow is perpendicular to one axis in space. The driving wave shall run in the direction of this axis, i.e. orthogonally to the motion of the medium. This arrangement has the advantage that the atoms of the medium are not sensitive to the first-order Doppler effect of the drive (and higher-order effects turn out to be irrelevant for our purpose). A monochromatic probe beam of frequency $`\omega _0`$ shall propagate orthogonally to the driving beam, i.e. in the plane in which the medium moves. Consequently, the probe experiences the full range of Doppler detuning of the susceptibility (2) at the sharp resonance $`\omega _0`$, while still propagating in a transparent medium.
The metric.— How does the moving medium appear to the probe? Let us transform the dispersion relation (1) from the locally co–moving medium frames to the laboratory frame. We notice that $`k^2-\omega ^2/c^2`$ is a Lorentz scalar, and obtain
$$k^2-\frac{\omega _0^2}{c^2}-\chi (\omega ^{\prime })\frac{\omega ^{\prime 2}}{c^2}=0,\omega ^{\prime }=\frac{\omega _0-𝐮\cdot 𝐤}{\sqrt{1-\frac{u^2}{c^2}}},$$
(4)
where $`𝐮`$ denotes the velocity field of the medium. Note that the local Lorentz transformations from the medium frames to the laboratory frame mix the components of the electromagnetic field–strength tensor $`F_{\mu \nu }^{\prime }`$. However, since the dispersion relation (1) in the medium is valid for all components of $`F_{\mu \nu }^{\prime }`$, the light propagation in the laboratory frame is polarization–independent. Relativistic effects of light in slowly moving media diminish with increasing order in $`u/c`$. Therefore, we expand the dispersion relation (4) to second order in $`u/c`$, use the susceptibility (2), and arrive at a result that we can formulate in the spirit of Gordon’s geometric theory. We introduce the covariant wave vector,
$$k_\nu =(\frac{\omega _0}{c},-𝐤),$$
(5)
and, adopting Einstein’s summation convention, obtain the dispersion relation
$$g^{\mu \nu }k_\mu k_\nu =0$$
(6)
with
$$g^{\mu \nu }=\left(\begin{array}{cc}1+\alpha \frac{u^2}{c^2}& \alpha \frac{𝐮}{c}\\ \alpha \frac{𝐮}{c}& -\mathrm{𝟏}+4\alpha \frac{𝐮\otimes 𝐮}{c^2}\end{array}\right).$$
(7)
The symbol $`\otimes `$ denotes the three–dimensional tensor product. We regard $`g^{\mu \nu }`$ as the contravariant metric tensor of the moving medium, whereas the inverse of $`g^{\mu \nu }`$ is the covariant tensor $`g_{\mu \nu }`$. Light rays turn out to be zero–geodesic lines of
$$ds^2=g_{\mu \nu }dx^\mu dx^\nu ,dx^\mu =(cdt,d𝐱).$$
(8)
The moving dielectric appears as a curved space–time, i.e. as an effective gravitational field, to light that travels inside.
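As a cross-check of the expansion leading to Eq. (7), one can verify numerically that a wave vector obeying the second-order dispersion relation is indeed null with respect to $`g^{\mu \nu }`$. The following sketch (Python) does this for an arbitrary slow flow; all numerical values are illustrative.

```python
import numpy as np

c, alpha = 1.0, 50.0
u = np.array([3e-4, -2e-4, 1e-4])      # |u| << c, illustrative flow
omega0 = 1.0
khat = np.array([1.0, 0.0, 0.0])       # propagation direction

# Solve |k| from the expanded dispersion relation, quadratic in |k|:
a_ = 1.0 - 4*alpha*(u @ khat)**2 / c**2
b_ = 2*alpha*omega0*(u @ khat) / c**2
c_ = -(omega0**2/c**2 + alpha*omega0**2*(u @ u)/c**4)
k = khat * (-b_ + np.sqrt(b_**2 - 4*a_*c_)) / (2*a_)

# Metric of Eq. (7) and covariant wave vector of Eq. (5):
g = np.zeros((4, 4))
g[0, 0] = 1 + alpha*(u @ u)/c**2
g[0, 1:] = g[1:, 0] = alpha*u/c
g[1:, 1:] = -np.eye(3) + 4*alpha*np.outer(u, u)/c**2
k_cov = np.concatenate(([omega0/c], -k))

print(k_cov @ g @ k_cov)    # vanishes to machine precision
```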
Let us introduce the contravariant wave vector $`k^\mu `$ with respect to the metric $`g^{\mu \nu }`$ of the medium,
$$k^\mu \equiv g^{\mu \nu }k_\nu .$$
(9)
One can show that this four–vector points into the direction of light propagation,
$$k^\mu =\frac{k^0}{c}\frac{dx^\mu }{dt}.$$
(10)
In other words, $`k^\mu `$ is proportional to the velocity vector of light, i.e. $`k^\mu `$ appears as a kinetic momentum, whereas the covariant wave vector $`k_\nu `$ is the canonical momentum of the light ray.
Optical Aharonov–Bohm effect.— The distinction between canonical and kinetic momentum is as vital to the physics of charged particles in magnetic fields as the distinction of co- and contravariant vectors is to general relativity. The two areas are related. In fact, we obtain from the definition (9) and the metric (7) to lowest order in $`u/c`$
$$k^0=\frac{\omega _0}{c}-\alpha \frac{𝐮\cdot 𝐤}{c},k^i=𝐤+\alpha \frac{\omega _0}{c}\frac{𝐮}{c}.$$
(11)
We apply the dispersion relation (6) and get, up to second–order terms,
$$\underset{i=1}{\overset{3}{\sum }}(k^i)^2=\frac{\omega _0^2}{c^2}.$$
(12)
In geometrical optics one can translate a dispersion relation into a wave equation for the complex positive–frequency component of the field–strength tensor $`F_{\mu \nu }`$ by substituting $`-i\mathbf{\nabla }`$ for $`𝐤`$. In particular, we obtain from the relation (12)
$$\left(-i\mathbf{\nabla }+\alpha \frac{\omega _0}{c^2}𝐮\right)^2F_{\mu \nu }=\frac{\omega _0^2}{c^2}F_{\mu \nu }.$$
(13)
This is precisely the non–relativistic Schrödinger equation of a charged matter wave in a magnetic field. Consequently, light in a slowly moving dispersive medium behaves like an electron wave where the medium velocity u plays the role of the vector potential.
Aharonov and Bohm discovered that under certain circumstances a charged matter wave attains an observable phase shift without experiencing a force. In particular, a thin solenoid produces a vanishing magnetic field outside the coil, and hence generates no force, and yet matter waves that enclose the solenoid experience a noticeable phase shift due to a vortex of the vector potential. Consequently, in the case of light in moving media, a vortex flow will not bend light in first order, but the vortex will imprint a phase shift onto the incident light. In cylindrical coordinates a vortex with vorticity $`2\pi 𝒲`$ has the velocity profile
$$𝐮=\frac{𝒲}{r}𝐞_\phi .$$
(14)
We compare the wave equation (13) with the Schrödinger equation of Aharonov and Bohm , and read off the phase shift
$$\phi _{_{AB}}=2\pi \nu _{_{AB}},\nu _{_{AB}}=\alpha \frac{\omega _0}{c}\frac{𝒲}{c}=\frac{\omega _0}{c}\frac{𝒲}{v_g}$$
(15)
in the limit of a low group velocity $`v_g`$. Electromagnetically induced transparency has made it possible to reduce $`v_g`$ to $`17\mathrm{m}/\mathrm{s}`$. In this case the optical Aharonov–Bohm effect is sensitive enough to detect a single quantum vortex with
$$𝒲=\frac{\hbar }{m}.$$
(16)
Indeed, we obtain for a frequency $`\omega _0`$ of $`3\times 10^{15}\mathrm{s}^{-1}`$ and for sodium with a rest mass $`m`$ of about 23 proton masses a $`\phi _{_{AB}}`$ of $`10^{-2}`$. This phase shift between waves that pass the vortex on different sides can be made visible via phase–contrast microscopy. The optical Aharonov–Bohm effect explores the long–ranging topological nature of a quantum vortex, similar to the vortex detection using two interfering condensates. In contrast to this technique, the optical effect may allow in–situ observations of vortices.
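The quoted estimate is easy to reproduce from Eqs. (15) and (16); the following snippet uses only the numbers given in the text.

```python
import math

hbar = 1.0545718e-34          # J s
m    = 23 * 1.6726219e-27     # kg, sodium ~ 23 proton masses
c    = 2.99792458e8           # m/s
omega0 = 3.0e15               # 1/s, optical frequency from the text
v_g    = 17.0                 # m/s, group velocity of Hau et al.

W   = hbar / m                            # quantum of vorticity, Eq. (16)
phi = 2*math.pi * (omega0/c) * W / v_g    # Eq. (15) in the low-v_g limit
print(phi)                                # ~1e-2, as quoted above
```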
Optical black hole.— A classical vortex generates a strongly falling pressure near the vortex core. A tornado, for example, attracts with ease substantial “test particles” and tears them apart. Can a vortex attract light? What happens near the core where the first–order Aharonov–Bohm theory is destined to fail? Let us study the light propagation using the Hamilton–Jacobi method . Here the covariant wave vector $`k_\nu `$ is the negative four–gradient of the eikonal $`S`$, or,
$$\omega _0=\frac{S}{t},𝐤=S.$$
(17)
We interpret the dispersion relation (6) as the Hamilton–Jacobi equation for light rays. In the case of the vortex flow (14) we find in cylindrical coordinates
$`S`$ $`=`$ $`{\displaystyle \frac{\omega _0}{c}}\left[-ct+l\phi +R(r)\right],`$ (18)
$`\left({\displaystyle \frac{dR}{dr}}\right)^2`$ $`=`$ $`1-{\displaystyle \frac{l^2}{r^2}}+{\displaystyle \frac{\alpha }{r^2}}\left(-2{\displaystyle \frac{𝒲l}{c}}+{\displaystyle \frac{𝒲^2}{c^2}}+4{\displaystyle \frac{𝒲^2l^2}{c^2r^2}}\right).`$ (19)
The eikonal (18) characterizes a set of rays with common frequency $`\omega _0`$ and angular momentum $`l(\omega _0/c)`$ that are incident perpendicular to the vortex line. Note that near the origin the modulus of the wave vector grows at least as rapidly as the flow velocity. The ratio $`|\nabla S|/u`$ approaches here $`2\sqrt{\alpha }(l/r)(\omega _0/c^2)`$ for $`l\ne 0`$ and $`\sqrt{\alpha }(\omega _0/c^2)`$ for $`l=0`$. Consequently, even in the vicinity of the vortex core, geometrical optics is well justified to describe the propagation of light.
How close can light come to the core and still manage to escape? Let us analyze the turning points $`r_0`$ of the radial motion where $`(dR/dr)^2`$ vanishes. For each value of $`l`$ we obtain two points, an outer $`(+)`$ and an inner turning point $`(-)`$,
$$r_0^2=\frac{1}{2c^2}\left(w_0^2\pm \sqrt{w_0^4-16\alpha c^2l^2𝒲^2}\right),$$
(20)
$$w_0^2=(\alpha +1)c^2l^2-\alpha (cl-𝒲)^2,$$
(21)
provided that the argument of the square root in Eq. (20) is non–negative. Otherwise real turning points do not exist, and the incident light is doomed to fall towards the vortex core. At two critical angular momenta $`l_\pm `$ the inner and the outer turning points coincide at $`r_\pm \equiv r_0`$. In this case we get from Eq. (20) the relation
$$w_0^2=\pm 4\sqrt{\alpha }𝒲cl_\pm $$
(22)
with, assuming a positive vorticity $`2\pi 𝒲`$, the plus sign for positive $`l_+`$ and the minus sign for negative $`l_{-}`$, because $`w_0^2`$ is non–negative. Light rays with angular momenta inside the interval $`(l_{-},l_+)`$ have no turning points. Consequently, the critical angular momenta $`l_\pm `$ mark the transition between the fall into the core and a chance to escape. We solve Eqs. (21) and (22) for $`l_\pm `$, obtain
$$l_\pm =\frac{𝒲}{c}\sqrt{\alpha }\left[\pm 2-\sqrt{\alpha }\pm \sqrt{(\sqrt{\alpha }\mp 2)^2+1}\right],$$
(23)
and get in the limit of low group velocities $`v_g`$ when $`\alpha \approx c/v_g\gg 1`$,
$$l_{-}=-2\frac{𝒲}{v_g},\qquad l_+=\frac{𝒲}{2c}.$$
(24)
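The limiting behaviour (24) can be checked numerically from Eq. (23); the following minimal Python sketch (with the sign conventions as reconstructed above, in units where $`𝒲=c=1`$, so that $`l_+\to 1/2`$ and $`l_-\to -2\alpha `$) does so:

```python
import math

# Check that the critical angular momenta of Eq. (23) approach the
# low-group-velocity limits of Eq. (24) for alpha ~ c/v_g >> 1.
def l_plus(alpha):
    s = math.sqrt(alpha)
    return s * (2 - s + math.sqrt((s - 2) ** 2 + 1))

def l_minus(alpha):
    s = math.sqrt(alpha)
    return s * (-2 - s - math.sqrt((s + 2) ** 2 + 1))

for alpha in (1e4, 1e6, 1e8):
    print(f"alpha = {alpha:.0e}: l_+ = {l_plus(alpha):.4f}, "
          f"l_-/alpha = {l_minus(alpha) / alpha:.4f}")
# l_+ tends to 1/2 (= W/(2c)) and l_-/alpha tends to -2 (= -2 W/v_g)
```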
Finally, we obtain from Eqs. (20), (22) and (24) the corresponding critical radii $`r_\pm =r_0`$, where turning points cease to exist,
$$r_{-}=2\frac{𝒲}{c}\left(\frac{c}{v_g}\right)^{3/4},\qquad r_+=\frac{𝒲}{c}\left(\frac{c}{v_g}\right)^{1/4}.$$
(25)
Regardless of which path the light follows, as soon as a light ray comes closer than $`r_+`$ to the vortex core, the light faces no other choice than to fall towards the singularity. The optical Schwarzschild radius $`r_+`$ determines a point of no return (in contrast to trajectories in other singular potentials, where escaping particles may come arbitrarily close to the singularity). The larger critical radius $`r_{-}`$ is a weak Schwarzschild radius where light rays with positive angular momenta can escape but those with negative $`l`$ are trapped. Light rays with positive angular momentum have the advantage of traveling with the flow, whereas those with negative $`l`$ swim against the current, and are efficiently decelerated and finally captured.
One might object that the vortex flow (14) of our model will allow medium velocities that exceed $`c`$ near the origin. Note, however, that the flow velocities $`u_\pm `$ at the two Schwarzschild radii are well below $`c`$,
$$u_{-}=\frac{c}{2}\left(\frac{v_g}{c}\right)^{3/4},\qquad u_+=c\left(\frac{v_g}{c}\right)^{1/4}.$$
(26)
Long before the vortex (14) becomes superluminal, the falling pressure will produce a hole in the vortex core (the “eye of the hurricane”). The vortex appears as an optical black hole if the core radius is smaller than the Schwarzschild radius. Suppose that one could reduce the group velocity of light further to $`1\mathrm{c}\mathrm{m}/\mathrm{s}`$. In this case the velocity $`u_+`$ at the hard Schwarzschild radius $`r_+`$ reaches $`7\times 10^5\mathrm{m}/\mathrm{s}`$ and the flow at the weak Schwarzschild radius $`r_{-}`$ is $`2\mathrm{m}/\mathrm{s}`$. The creation of a hard black hole seems to be unrealistic with present technology. However, a weak black hole could be made. For example, one could utilize the torque of Gauss–Laguerre beams to create a classical vortex of a rapidly rotating gas of alkali atoms . Especially appealing would be a quantum black hole with a single quantum vortex (16) as the center of attraction. In alkali Bose–Einstein condensates , the core radius is roughly given by the healing length $`(8\pi \rho a)^{-1/2}`$ with $`\rho `$ being the density and $`a`$ the scattering length. For a sodium condensate ($`\rho =5\times 10^{18}\mathrm{m}^{-3}`$ and $`a=2.75\times 10^{-9}\mathrm{m}`$) we obtain a healing length of about $`2\times 10^{-6}\mathrm{m}`$ that significantly exceeds the Schwarzschild radius $`r_{-}`$ of about $`10^{-9}\mathrm{m}`$. However, one could employ other alkali isotopes and/or utilize Feshbach resonances to increase the scattering length and, consequently, to reduce the size of vortex cores.
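The numbers quoted in this paragraph follow directly from Eqs. (16), (25) and (26); a short sketch, with the condensate parameters assumed as above:

```python
import math

hbar, m_p, c = 1.054571e-34, 1.672622e-27, 2.997925e8
v_g = 1e-2                      # m/s, the 1 cm/s group velocity assumed above
W   = hbar / (23 * m_p)         # single quantum vortex in sodium, Eq. (16)

r_m = 2 * (W / c) * (c / v_g) ** 0.75   # weak Schwarzschild radius, Eq. (25)
r_p =     (W / c) * (c / v_g) ** 0.25   # hard Schwarzschild radius, Eq. (25)
u_m = (c / 2) * (v_g / c) ** 0.75       # flow at r_-, Eq. (26)
u_p =  c      * (v_g / c) ** 0.25       # flow at r_+, Eq. (26)

rho, a = 5e18, 2.75e-9                  # sodium condensate density, scattering length
xi = (8 * math.pi * rho * a) ** -0.5    # healing length ~ vortex core radius

print(f"r_- = {r_m:.1e} m at u_- = {u_m:.1f} m/s")   # ~1e-9 m, ~2 m/s
print(f"r_+ = {r_p:.1e} m at u_+ = {u_p:.1e} m/s")   # u_+ ~ 7e5 m/s; r_+ is
                                                     # far below any core radius
print(f"healing length = {xi:.1e} m")                # ~2e-6 m >> r_-
```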
Summary.— A moving dielectric medium acts as an effective gravitational field on light . One could employ media with extremely low group velocities to create dielectric analogs of gravitational effects that usually belong to the realm of astronomy. In particular, a vortex can create a long–ranging Aharonov–Bohm effect on incident light and, on shorter ranges, can behave like a black hole .
We are grateful to M. V. Berry, J. H. Hannay, S. Klein, W. Schleich, and S. Stenholm for helpful discussions. U. L. thanks the Alexander von Humboldt Foundation and the Göran Gustafsson Stiftelse for support. P. P. was partially supported by the research consortium Quantum Gases of the Deutsche Forschungsgemeinschaft.
# The IMF and its Evolution
## 1 Introduction
Observations of clusters and associations suggest an average stellar initial mass function (IMF) that is approximately a power law like the Salpeter (1955) function, with a slope of $`x\approx 1.35`$ on a $`\mathrm{log}n-\mathrm{log}M`$ plot, and a flattening below $`\sim 0.35`$ M<sub>⊙</sub>. This IMF appears in clusters and whole galaxies, for all galactic populations, and even in the intergalactic medium (Sect. 2.1, 2.3, 2.4). However, there are still fluctuations in the slope of the power-law by $`\pm 0.5`$ from cluster to cluster (Scalo 1998), and there are other curious variations too, like a steeper slope in the field (Sect. 2.2), the mass of the most massive star increasing with cloud mass (Sect. 3), the formation of massive stars relatively late and near the centers of clusters (Sect. 3), and the greater proportion of massive stars in starburst galaxies (Sect. 2.5). Considering the robust nature of the IMF, any theory for its origin should be able to reproduce both the average shape and the variations around it with a minimum of free parameters and a minimal dependence on the physical properties of the star-forming clouds.
Another important mass function for star formation is the distribution of cloud and clump masses. This differs from the stellar function in both slope ($`x\approx 0.5`$–$`0.8`$ for clouds) and range ($`M_{cloud}\sim 10^{-4}`$–$`10^7`$ M<sub>⊙</sub>; Heithausen et al. 1998; Dickey & Garwood 1989), leading one to wonder why stars form with a steeper mass distribution than their clumps. There must be a preferential selection of lower clump masses for stars, and a cutoff at some minimum star mass.
There are tantalizing indications that we may be able to understand the IMF without fully understanding the origin of either the cloud structure or the processes involved with individual stars. Given the observed structures of clouds, we can imagine how star formation processes might select pieces of this structure in a certain order and end up with the observed IMF and all of its variations. If such clump selection is the correct explanation for the IMF, then it presumably works because most of the star mass is determined by the gas mass immediately available to it during the protostar phase, and because the IMF is an average over many different processes, with each losing its unique contribution when the mass distribution is averaged over a cluster.
Numerical simulations of such sampling demonstrate this point by reproducing essentially all of the observations of the IMF and its systematic and stochastic variations without any free parameters or physical input other than a single characteristic mass for the minimum clump that can make a star. These models obtain (Elmegreen 1997, 1999a): (1) the correct power-law slope and turnover shape of the IMF, with the correct turnover mass, (2) the tendency for the most massive star in a cluster to increase with cloud mass, (3) the shift in the peak or turnover mass for starburst regions without a change in the power-law slope, (4) the delayed formation of massive stars in a cluster, (5) the fluctuations in the slope of the power-law part from cluster to cluster (which result from sampling statistics), and (6) the tendency for the most massive stars in a cluster to concentrate toward the cluster center. The only input to the model is the hierarchical (and fractal) distribution of cloud structure, and the only assumption is that pieces of this hierarchy make stars at a rate that scales with the square root of the local density, which is the rate at which essentially all of the physical processes involved with the onset of star formation operate, including self-gravity, magnetic diffusion, clump collisions, and turbulence dissipation, given the molecular cloud scaling laws.
The hypotheses that IMF theories may be simplified by the gross averaging of star formation processes during the build up of a cluster, and by the intimate connection between its power-law slope and cloud structure, also help to explain why its power-law slope is so similar from region to region, even in different environments and at different times. The point is that the cloud and star formation details may not matter much for the IMF, and that power-law cloud structures are more-or-less universal, perhaps as a result of pervasive turbulence.
In the next section we review the observations of the IMF and some of the implications of these observations in an attempt to sort out what is physically significant and compelling for a theory. Other reviews can be found in the conference proceedings The Stellar Initial Mass Function, edited by Gilmore, Parry & Ryan for ASP Press in 1998. A review that compares various theories with the constraints from observations is in Elmegreen (1999b).
## 2 Observations of the IMF and Implications for the Theory
### 2.1 The Salpeter Slope in Clusters and Galaxies
The IMF at intermediate to high mass can be written $`n(M)d\mathrm{log}M\propto M^{-x}d\mathrm{log}M`$ for slope $`x`$ on a $`\mathrm{log}`$–$`\mathrm{log}`$ plot. For most clusters, $`x`$ is in the range 1–1.5. Salpeter (1955) suggested $`x\approx 1.35`$, which is about the average of the values observed today. The most dependable values come from a mixture of photometry and spectroscopy of star clusters. IMFs based on photometry alone are generally steeper than $`x\approx 1.35`$ because of an ambiguity in mass for high mass stars (see discussion in Massey 1998).
Table 1 summarizes the recent observations that obtain $`x\approx 1`$–$`1.5`$ in various regions. This “Salpeter” slope is found by star counts in local clusters, integrated light from whole galaxies, elemental abundances, and galaxy evolution models. Steeper values of $`x\approx 1.5`$–$`2`$ are found in samples of local field stars or in the low density parts of some clusters (Table 2). Shallower values are found at low mass, where the IMF flattens to nearly zero slope on a $`\mathrm{log}`$–$`\mathrm{log}`$ plot (Table 3). Shifts either in the peak or in the slope, favoring higher masses, have been found in starburst galaxies (Table 4).
The observations in these tables suggest that the IMF varies a lot, but in fact most of the functions that deviate from the turned-over Salpeter slope are based on indirect measurements that contain questionable assumptions. For example, the slope determined for the local field tends to get steep only at high mass, and the increased value depends on an assumed recent star formation history and an assumed scale height variation with mass and age. The local field is also more populated by low mass stars than high mass stars because low mass stars live longer and drift further from their sites of star formation than high mass stars.
The low density regions of clusters show a steeper IMF too because of an excess of low mass stars, but this is probably related to the greater concentration of high mass stars in cluster cores, as discussed more in Section 3; the overall cluster can still have a flattened-Salpeter IMF. The Hipparcos results quoted by Brown (1998) were based on photometry, rather than spectra, and are typically steep for photometry. Massey et al. (1995) has shown how such IMF values become shallower, like the Salpeter function, when spectra are considered for the determination of stellar mass.
### 2.2 A Steep IMF Slope in The Extreme Field
The most extreme deviation for an IMF measurement is in the remote field regions of the LMC and Milky Way (Table 2). These are regions defined by Massey et al. (1995) to be further than 30 pc from the boundaries of catalogued OB associations. Here the slope at high mass has been measured to be around $`x\approx 4`$. Evidently something very unusual is happening. There are several ways to explain this, if it turns out to be true. One way has a normal ($`x\approx 1.35`$) IMF in every individual region of star formation, and a steeper IMF in the composite of many regions. This difference between cluster and integrated IMFs illustrates an important point about cloud destruction, so we discuss it in some detail here (see also Elmegreen 1999a).
In a large region there will in general be many separate clouds that form stars, and these clouds will have some mass function $`n(M_c)dM_c\propto M_c^{-\gamma }dM_c`$ for $`\gamma \approx 1.5`$–$`2`$. If intermediate and high mass stars destroy their clouds because of ionization, and as a result, halt the star formation processes inside them, then more massive clouds will require more massive stars before star formation ends. This leads to a situation where a lot of low mass clouds make primarily low mass stars, with a normal IMF, and where a few high mass clouds make both low mass and high mass stars, also with a normal IMF. But, since there are more low mass clouds, the composite region will have a lot more low mass stars in proportion to high mass stars than is given by each cluster IMF. It follows that even if the IMF inside each region of star formation is the same, a Salpeter IMF for example, the composite IMF from many clouds will be steeper than this.
Consider a specific example. Suppose the IMF in each region of star formation has a certain slope $`x`$, and the largest mass of a star, $`M_L`$, required to destroy a cloud scales with cloud mass $`M_c`$ as $`M_L\propto M_c^\alpha `$ for $`\alpha >0`$. Then the composite IMF from all of the clouds combined will have a slope $`x_{comp}=\left(\gamma -1\right)/\alpha ,`$ which is independent of the IMF slope in each individual cluster.
To evaluate this composite slope, we take $`\gamma =2`$ for a hierarchical cloud system (Fleck 1996; Elmegreen & Falgarone 1996), and $`\alpha =5/16`$ for cluster destruction with a largest stellar mass $`M_L`$. This value of $`\alpha `$ comes from the mass-luminosity relation of ionizing radiation, which scales as $`L\propto M^4`$ for luminosity $`L`$ and stellar mass $`M`$ (Vacca, Garmany, & Shull 1996). A whole cluster’s ionizing luminosity can be evaluated from the expression $`\int _0^{M_L}L(M)n(M)𝑑M`$ for maximum mass $`M_L`$ and IMF $`n(M)dM=xM_L^xM^{-1-x}dM`$. This cluster luminosity scales with $`M_L^4`$ too. The constant term in the IMF, $`xM_L^x`$, gives one star at a maximum mass $`M_L`$ from the expression $`\int _{M_L}^{\mathrm{\infty }}n(M)𝑑M=1`$. The luminosity required to destroy a cloud is the binding energy divided by the cloud crossing time, which is $`\left(GM_c^2/R\right)\left(GM_c/R^3\right)^{1/2}\propto M_c^{5/4}`$, using the Larson (1981) scaling laws for molecular clouds. Setting the luminosity of a cluster, $`M_L^4`$, equal to the power required to destroy a cloud, $`M_c^{5/4}`$, then gives $`\alpha =5/16`$ in the expression $`M_L\propto M_c^\alpha `$.
With $`\gamma =2`$ and $`\alpha =5/16`$, the slope of the composite IMF is $`x_{comp}=\left(\gamma -1\right)/\alpha =16/5\approx 3.2`$. The value observed by Massey et al. (1995) is $`\approx 4`$, which is pretty close to this theoretical result, given the uncertainties in the $`M`$–$`L`$ relation and other assumptions, and with the observations.
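This composite-slope argument is easy to test with a small Monte Carlo experiment (a sketch; the cloud mass range, sample size, and binning are arbitrary choices made for illustration):

```python
import numpy as np

# Monte Carlo sketch of the composite IMF slope: clouds drawn from
# n(M_c) ~ M_c^-2, each destroyed by its largest star M_L ~ M_c^(5/16);
# the distribution of these largest stars should have a log-log slope
# of (gamma - 1)/alpha = 16/5 = 3.2.
rng = np.random.default_rng(0)
gamma, alpha = 2.0, 5.0 / 16.0
Mmin, Mmax = 1.0, 1e6                          # assumed cloud mass range

u = rng.random(2_000_000)
Mc = Mmin / (1.0 - u * (1.0 - Mmin / Mmax))    # inverse transform for gamma = 2
ML = Mc ** alpha

hist, edges = np.histogram(np.log10(ML), bins=25, range=(0.1, 1.3))
centers = 0.5 * (edges[:-1] + edges[1:])
ok = hist > 0
slope = np.polyfit(centers[ok], np.log10(hist[ok]), 1)[0]
print(f"x_comp = {-slope:.2f} (prediction: 16/5 = 3.2)")
```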
It is important to note that the extreme field IMF found by Massey et al. (1995) is not representative of galaxies in general. Integrated light and elemental abundances give an average IMF for whole galaxies that has the same slope at intermediate and high mass as individual clusters, namely, the Salpeter value of $`x1.35`$. This simple fact implies that massive stars cannot generally halt star formation in their clouds. If they did, then the composite IMF for a whole galaxy would be significantly steeper than the individual IMF in each cluster. Massive stars may destroy their clouds, in the sense that they push the gas around, but they cannot generally halt star formation in them except possibly in the extreme field. The extreme field could differ from the environment in OB associations because of a much lower pressure in the extreme field. A low pressure could conceivably lead to more efficient cloud ionization and the cessation of star formation in even the dense clumps.
The requirement that the composite IMF be equal to the cluster IMF also means that $`\alpha =1/x`$ in the above analysis (with $`\gamma =2`$, as required for a hierarchical gas distribution). This is just what is expected for random star formation, where the largest stellar mass increases with cloud mass simply because of random sampling from the IMF. That is, the largest stellar mass satisfies $`\int _{M_L}^{\mathrm{\infty }}n(M)𝑑M=1`$, as discussed above, and this gives a constant of proportionality $`n_0=xM_L^x`$ in the expression $`n(M)=n_0M^{-1-x}`$. Thus the total number of stars scales with $`M_L^x`$. If the efficiency is about constant with cloud mass (and the smallest mass star is much less massive than $`M_L`$), then this total number scales about with the cloud mass, giving $`M_L^x\propto M_c`$, or $`\alpha =1/x`$.
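The random-sampling relation $`M_L^x\propto M_c`$ can likewise be illustrated numerically (a sketch; the efficiency, lower mass cutoff, and number of trials are assumed values):

```python
import numpy as np

# Draw a Salpeter population with total mass ~ eps*M_c and record the
# largest star; M_L should grow roughly as M_c^(1/x) with x = 1.35.
rng = np.random.default_rng(1)
x, Mmin, eps = 1.35, 0.1, 0.2          # IMF slope, cutoff (Msun), efficiency
mean_m = x / (x - 1.0) * Mmin          # mean stellar mass of this IMF

def largest_star(Mc):
    N = max(1, int(eps * Mc / mean_m))             # expected number of stars
    return (Mmin * rng.random(N) ** (-1.0 / x)).max()

for Mc in (1e2, 1e3, 1e4, 1e5):
    ML = np.median([largest_star(Mc) for _ in range(200)])
    print(f"M_c = {Mc:.0e} -> M_L ~ {ML:6.1f}")
# successive ratios of M_L approach 10**(1/1.35) ~ 5.5 per decade in M_c
```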
There are other explanations for the steep IMF in the extreme field. Star forming regions are typically much larger than 30 pc, often extending in a coherent fashion up to several hundred parsecs (Efremov 1995), so the 30 pc limit in the definition of the extreme field may allow some normal cluster, association, or star-complex members to be included. In that case, the steep slope in the outer regions of a cluster may occur for the same reasons as the shallow slope in the inner region, i.e., segregation of the most massive stars towards the center.
In summary, the general form of the IMF is probably invariant among clouds of different masses, giving a maximum stellar mass that increases with cloud mass as the power $`1/x=1/1.35`$ as a result of random sampling (i.e., more massive clouds sample further out into the high mass tail of the IMF). This explains the similarity between the composite IMF of whole galaxies and the IMFs of individual clusters. However, in the extreme field, where conditions like ambient pressure are very different than in OB associations, star-forming clouds could be more quickly and easily destroyed by ionization from stars, and in this case, the maximum stellar mass could increase much more slowly with cloud mass, as the power $`1/4`$ instead of $`1/1.35`$. As a result, the composite IMF can be much steeper than the individual IMFs in each cluster. Alternatively, the extreme field IMF could be sampling only the low mass members of an extended cluster whose other members are more centrally located.
### 2.3 An IMF that is Independent of Cluster Density
One of the most startling aspects of the observed IMF is that it is virtually invariant from cluster to cluster, aside from likely statistical fluctuations (Elmegreen 1999a), and this relative invariance spans a range of a factor of 200 in cluster density (Hunter et al. 1997; Massey & Hunter 1998) and a factor of 10 in metal abundance (Freedman 1985; Massey, Johnson & DeGioia-Eastwood 1995).
The density independence means that the IMF is probably not the result of protostar, star, or clump interactions. If it were, then dense regions, which should have more of these interactions, would differ from low density regions, where there are few or no interactions. The IMF is also not likely to result from accretion of cloud material during stellar orbital motion. Stars in denser regions orbit in a shorter time and have more gas to accrete. Neither is the IMF or any part of it from the coalescence of stars (i.e., massive stars are not formed from the coalescence of low mass stars or protostars).
This lack of a density dependence for individual stars in the IMF contrasts with the situation for binary stars and disks. The binary fraction is smaller in denser regions, and protostellar disks are smaller too (see review in Elmegreen et al. 1999). The protostellar binary fraction is lower in both the Trapezium cluster (Petr et al. 1998) and the Pleiades cluster (Bouvier et al. 1997) than it is in the Tau-Aur region, by a factor of $`\sim 3`$. Also, the peak in the separation distribution for binaries is smaller (90 AU) in the part of the Sco-Cen association that contains early type stars than it is (215 AU) in the part of the Sco-Cen association that contains no early type stars (Brandner & Köhler 1998).
The cluster environment also apparently affects disks. Mundy et al. (1995) suggested that massive disks are relatively rare in the Trapezium cluster, and Nürnberger et al. (1997) found that protostellar disk mass decreases with stellar age in the Lupus young cluster, but not in the Tau-Aur region, which is less dense. When massive stars are present, as in the Trapezium cluster, uv radiation can photoionize the neighboring disks (Johnstone et al. 1998).
These observations make sense in terms of the relative interaction rates for stars, binaries, and disks (Elmegreen et al. 1999). The size of a typical embedded cluster is $`\sim 0.1`$ pc, and the number of stars is several hundred. This makes the stellar density on the order of $`10^3`$–$`10^4`$ stars pc<sup>-3</sup>. For example, in the Trapezium cluster, the stellar density is $`\sim 5000`$ stars pc<sup>-3</sup> (Prosser et al. 1994) or higher (McCaughrean & Stauffer 1994), and in Mon R2 it is $`\sim 9000`$ stars pc<sup>-3</sup> (Carpenter et al. 1997). A stellar density of $`10^3`$ M<sub>⊙</sub> pc<sup>-3</sup> corresponds to an H<sub>2</sub> density of $`10^4`$ cm<sup>-3</sup>. Molecular cores with densities of $`10^5`$ cm<sup>-3</sup> or higher (e.g., Lada 1992) can easily make clusters this dense, because star formation efficiencies are typically 10%-40% (e.g., see Greene & Young 1992; Megeath et al. 1996; Tapia et al. 1996).
The density of $`n_{star}=10^3`$ stars pc<sup>-3</sup> in a cloud core of size $`R_{core}\approx 0.2`$ pc implies that objects with this density will collide with each other in one crossing time if their cross section is $`\sigma \sim \left(n_{star}R_{core}\right)^{-1}\approx 0.005`$ pc<sup>2</sup>, which corresponds to a physical size of $`6500\left(R_{core}[pc]n_{star}/10^3\right)^{-1/2}`$ AU. This is the size of protostellar disks and long-period binary stars. Thus disks and binaries should be affected by interactions in the cluster environment, but not individual stars or the IMF.
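These estimates follow directly from the quoted densities; a minimal sketch (the function name and the choice of density-radius pairs are illustrative):

```python
import math

AU_PER_PC = 206264.8

def interaction_size_AU(n_star, R_core_pc):
    """Object size whose cross section gives ~1 collision per crossing time."""
    sigma = 1.0 / (n_star * R_core_pc)         # pc^2, ~(n R)^-1
    return math.sqrt(sigma) * AU_PER_PC        # = 6500*(R*n/1e3)^(-1/2) AU

for n_star, R_core in ((1e3, 0.2), (5e3, 0.2), (1e3, 1.0)):
    print(f"n = {n_star:.0e} pc^-3, R = {R_core} pc -> "
          f"{interaction_size_AU(n_star, R_core):.0f} AU")
# Trapezium-like densities give ~6500 AU, the scale of disks and wide binaries
```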
### 2.4 The Flattening at Low Mass: a Characteristic Mass for Stars
The IMF flattens on a $`\mathrm{log}`$–$`\mathrm{log}`$ plot at stellar masses of around and below $`0.3`$ M<sub>⊙</sub>. Table 3 summarizes the observations. The mass at which this flattening occurs is observed to vary a bit from region to region, particularly in clusters (i.e., the mass at the peak in NGC 6231 is 2.5 M<sub>⊙</sub>, much higher than normal; Sung, Bessell, & Lee 1998), but such variations could be the result of mass segregation in the sense that high mass stars are often concentrated towards cluster cores (see Sect. 3). There is even evidence for a turnover in the IMF at masses less than 0.3 M<sub>⊙</sub> for several regions, but this is uncertain because the stars at the low mass end are usually close to the limit of detection.
The importance of the IMF flattening is that this is the only characteristic scale known for the star formation process. Molecular clouds and their pieces have a power law mass distribution from sub-stellar masses to the masses of clouds as big as the galactic scale height. There is essentially no characteristic scale for clouds. The mass distributions for open clusters and perhaps even primordial globular clusters are power laws too, with about the same slope as for clouds (Elmegreen & Efremov 1997; see review in Elmegreen et al. 1999). The rest of the IMF is a power law too. But the IMF does have a characteristic scale at the low mass end, where it flattens at about 0.3 M<sub>⊙</sub>.
The existence of such a characteristic mass is an important clue to the mechanism of star formation. For example, we know now that the characteristic mass is not the Jeans mass at an optical depth of unity, as formerly suggested, because this mass is too small, $`\sim 10^{-3}`$ M<sub>⊙</sub> (e.g., Rees 1976). The two most promising suggestions for the origin of the characteristic mass are: (1) self-limitation of accretion by protostellar winds triggered at the deuterium-burning mass (Nakano, Hasegawa, & Norman 1995; Adams & Fatuzzo 1996), and (2) the inability of a cloud piece smaller than the thermal Jeans mass to become self-gravitating and collapse to a star, given the temperature and pressure of a molecular cloud core (Larson 1992; Elmegreen 1997).
The first of these limits would seem to be relatively independent of environment, while the second should scale with $`T^2/P^{1/2}`$ for cloud temperature $`T`$ and cloud-core pressure $`P`$. Both values are about the same locally, where $`T\approx 10`$ K and $`P\approx 10^6`$ k<sub>B</sub> cm<sup>-3</sup>, and since $`T^2`$ and $`P^{1/2}`$ tend to vary together with galactocentric radius and star formation activity (Elmegreen 1997, 1999b), the two masses should remain the same in most normal regions.
To check the theoretical predictions, we should look for places where $`T^2/P^{1/2}`$ deviates a lot from its local value. If the mass at the peak of the IMF, or where the IMF flattens, varies from region to region along with the quantity $`T^2/P^{1/2}`$, then the second model would be preferred; if the peak mass does not, then the first model is better. For example, Larson (1998) suggested that the peak in the IMF was shifted towards higher masses in the early Universe, in order to account for the G dwarf problem, the large heavy element abundance and high temperature in galactic cluster gas, and the high luminosities of distant galaxies. Variations like this would be more easily explained by an IMF model that depends on the thermal Jeans mass.
The thermal Jeans mass, which contains the combination of parameters $`T^2/P^{1/2}`$, is approximately constant in normal regions of star formation. This is because the numerator in this expression is approximately proportional to the cooling rate per unit mass in molecular clouds (which scales about as $`T^2`$–$`T^3`$ – see Neufeld, Lepp, & Melnick 1995), and the denominator is approximately proportional to the heating rate per unit mass from starlight and cosmic rays in typically active disks. The starlight and cosmic ray intensities scale with the background column density of stars, and the pressure in the midplane of the disk scales with the square of this column density. Thus the square root of pressure goes with the column density of background stars. As long as heating equals cooling and the mass-to-light ratio in a galactic disk is about constant, and as long as the factor by which star-forming clouds have a higher pressure than the ambient pressure is about constant, the thermal Jeans mass is about the same in all dense cloud regions. If the mass-to-light ratio goes down, then the thermal Jeans mass can go up. Perhaps this occurs in starburst regions. Conversely, if the mass-to-light ratio is abnormally high, then the thermal Jeans mass can go down.
An example of the latter situation might arise in the inner regions of M31. There the molecular cloud heating rate is low and the cloud temperature is close to $`\sim 3`$ K, instead of the usual 10 K (Allen et al. 1995; Loinard & Allen 1998). These clouds also exist in the part of the disk where the stellar column density is high in old stars, so the interstellar pressure is not particularly low. As a result, the thermal Jeans mass can be lower in ultracold clouds than in normal clouds, possibly as low as $`\sim 0.01`$ M<sub>⊙</sub> instead of 0.3 M<sub>⊙</sub> (Elmegreen 1999c). For this reason, a significant population of Brown Dwarf stars might be present in ultracold molecular clouds. If they are found, then the model based on the thermal Jeans mass would be preferred over the model based on the deuterium burning limit.
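The size of this shift follows from the $`T^2/P^{1/2}`$ scaling alone; a minimal sketch (the normalization of 0.3 M<sub>⊙</sub> at $`T=10`$ K and the pressure ratios are assumptions):

```python
def M_jeans(T, P, T0=10.0, P0=1.0, M0=0.3):
    """Thermal Jeans mass (Msun), scaled so M = M0 at T = T0, P = P0."""
    return M0 * (T / T0) ** 2 / (P / P0) ** 0.5

print(f"local clouds (10 K):    {M_jeans(10.0, 1.0):.3f} Msun")
print(f"ultracold clouds (3 K): {M_jeans(3.0, 1.0):.3f} Msun")
# a modestly higher ambient pressure pushes this toward ~0.01 Msun:
print(f"ultracold, 5x pressure: {M_jeans(3.0, 5.0):.3f} Msun")
```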
The thermal Jeans model is preferred also if a reasonably high fraction, say $`>10`$%, of all the material in a collapsing cloud piece gets into a star. This leaves a lot of mass for wind expulsion and disk erosion, but it also implies that the star mass depends somewhat on the mass of the cloud piece in which it forms. In that case, wind-limitations to the stellar mass would not be very important, causing only a factor of 2–10 variation in the ratio of star mass to cloud mass. Most of the mass variation along the IMF, which spans a factor of $`10^3`$ in mass, would then have to come from something else, and the mass of the pre-stellar cloud piece is a likely place.
Another observation that could help distinguish between possible origins for the characteristic stellar mass is the discovery of powerful pre-main sequence winds from extremely low-mass Brown Dwarfs, i.e., stars too small to ignite even deuterium. If pre-main sequence contraction energy alone is enough to start a wind, then deuterium burning would not be relevant to the limitation of stellar mass.
There is some evidence already that the mass function for dense cloud cores containing about a solar mass is similar to the IMF (Motte, André, & Neri 1998; Testi & Sargent 1998). This is the type of observation that could clarify the origin of the characteristic mass for star formation.
### 2.5 Top-Heavy IMFs in Starburst Regions
There has been considerable discussion about a shift in the IMF towards proportionally more high mass stars in starburst regions, although many of the initial reports are now being questioned. The original motivation for this idea was the observation that the luminosity of the starburst was so high, given the total mass from the rotation curve, that there could not be a normal proportion of high and low mass stars but only an excess of high mass stars. Now, more detailed modeling, and in the case of M82, a lower extinction correction (Devereux 1989, Satyapal et al. 1997), makes the stellar luminosity seem about right for the mass. A summary of these observations is in Table 4. In addition, a top-heavy IMF would produce too much oxygen in proportion to other elements (Wang & Silk 1993), and the aging population of stars would be too red (Charlot et al. 1993).
Considering the basic form of the IMF, which is a power law with a lower cutoff or flattening at some characteristic mass, one can easily envision variations that lead to top-heavy IMFs as a result of an upward shift in the characteristic mass. A predicted downward shift leading to an excess of Brown Dwarfs was mentioned in a previous section. The upward shift would come in the same way, but from an increase rather than a decrease in the value of $`T^2/P^{1/2}`$. It is more difficult to envision a top heavy IMF that results from a decrease in the slope of the power law part, because the very existence of a power law suggests a scale-free process, which means that it is essentially free of dependence on physical parameters. Power law mass distributions often result from geometric (e.g., fractal) or self-regulatory (e.g., equilibrium coalescence) effects instead.
The IMF model in Elmegreen (1997), in which the power law part comes from a weighted selection of clump pieces in a hierarchically structured cloud and the low mass cutoff comes from the thermal Jeans mass, gets a simple shift in the whole IMF towards higher mass, with a constant slope in the power-law part, as $`T^2/P^{1/2}`$ increases. A computer simulation showing this result was in that paper.
An amazing thing about the IMF is that the characteristic mass at the low end, where the flattening occurs, appears to be nearly constant from region to region. As discussed above, this may simply reflect equilibrium thermal conditions with varying $`T`$ and $`P`$ but constant $`T^2/P^{1/2}`$, or it may reflect a constant wind-limited mass at the threshold of deuterium burning. The upward shift for starbursts, if real, provides a good test for the models. It is easier to increase $`T^2/P^{1/2}`$ in warm regions at slightly elevated pressures than to affect the deuterium burning limit, which would seem to be independent of environment. Thus the exact form of the IMF in starburst conditions is extremely important for the models. In this respect, the reported slight upward shift in the characteristic mass for the 30 Dor cluster in the LMC (Nota et al. 1998) is noteworthy. This is the closest starburst-like region, and therefore the most promising for providing a firm observation of the IMF from direct star counting. Unfortunately, this cluster could suffer from mass segregation effects as in other clusters, in which case the upward shift would appear only in the nuclear region.
The discussion about starburst IMFs begs the question of whether there is an upper limit to the mass of a star that can form. No such upper limit has been found yet. That is, the upper limit in any particular region just keeps increasing as the total stellar mass increases, as expected for random star formation (see theory in Elmegreen 1983, 1997, and observations in Massey & Hunter 1998). Yet there would seem to come a time where this stellar mass increase would have to stop. After all, if we scale the $`1/x`$ power law relation between the maximum star mass and total star mass to all of the young stellar mass in the galaxy, with an age less than the $`\sim 2`$ million year lifetime of a massive star, then the total young stellar mass is $`\sim 10^7`$ M<sub>⊙</sub> and the expected maximum stellar mass is
$$M_{max}\approx 50\left(\frac{10^7\mathrm{M}_{\odot }}{10^{4.5}\mathrm{M}_{\odot }}\right)^{1/1.35}\mathrm{M}_{\odot }\approx 3600\mathrm{M}_{\odot }.$$
(1)
Here we have normalized this power law relation to the maximum mass ($`\sim 50`$ M<sub>⊙</sub>) and total mass ($`\sim 10^{4.5}`$ M<sub>⊙</sub>) in the Orion OB association. The result is very inaccurate, of course, but the lack of Galactic stars containing several thousand solar masses suggests that there is an upper mass cutoff.
An alternative explanation for the lack of thousand-M<sub>⊙</sub> stars is that each star-forming region is independent, so the total stellar mass used in the above equation should be that of the largest region of star formation, rather than that of all regions in the Galaxy. In that case, the numerator in the above expression should be $`10^{5.5}`$ M<sub>⊙</sub> for the largest star complexes forming in $`10^7`$ M<sub>⊙</sub> spiral arm clouds, and $`M_{max}\approx 200`$ M<sub>⊙</sub>, which may be possible in a few places in the Galaxy. If such stars are found, then there may be no maximum mass based on physical principles, only one based on sampling statistics.
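Both variants of the estimate are one-liners; a sketch of Eq. (1) and its single-complex variant:

```python
# Random-sampling estimate of the largest stellar mass, normalized to
# Orion (M_L = 50 Msun at a total stellar mass of 10^4.5 Msun), Eq. (1).
x = 1.35

def M_max(M_total):
    return 50.0 * (M_total / 10 ** 4.5) ** (1.0 / x)

print(f"all young Galactic stars (1e7 Msun):  {M_max(1e7):.0f} Msun")     # ~3600
print(f"largest single complex (10^5.5 Msun): {M_max(10 ** 5.5):.0f} Msun")
# the second value is a few hundred, of order the ~200 Msun quoted above
```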
## 3 Peculiarities with Massive Stars: central concentration and late appearances in clusters, and a preference for massive clouds
Most massive stars form in giant molecular clouds in OB associations, and not in small clouds like Taurus, which seem to contain only low mass stars (Larson 1982; Myers & Fuller 1993). Massive stars also form relatively late in the evolution of a star cluster, after many low mass stars have already formed (Herbig 1962; Iben & Talbot 1966; Herbst & Miller 1982; Adams, Strom & Strom 1983).
There have been several attempts to explain the correspondence between extreme star mass and cloud mass as a consequence of different mechanisms for star formation or different physical conditions in large and small clouds (Larson 1992; Khersonsky 1997), however observations like this are expected from random star formation alone (Elmegreen 1983; Walter & Boyd 1991; Massey & Hunter 1998), so the need for any special theory is not compelling.
If stars form randomly in all clouds, with stellar masses in the proportion given by a normal IMF, then statistical effects will make the massive stars, which are relatively rare, more likely to appear after there are $`100`$–$`1000`$ M<sub>⊙</sub> of other stars already (Elmegreen 1983; Schroeder & Comins 1988). This means that massive stars tend to show up only in massive clouds, and when they do, they are relatively late compared to the more common low mass stars. Simulations of this effect are in Elmegreen (1999a). Note that the average time of appearance of a star with a particular mass is still independent of that mass in this statistical interpretation, so if there is a systematic bias toward a late appearance of high mass stars, then some physical process for this would be required. Stahler (1995) suggested, however, that even the proposed examples of such bias probably have other interpretations, so the entire effect could be just statistical.
Another peculiar observation of massive stars is that they tend to appear near the centers of star clusters, surrounded by the lower mass stars (see reviews in Elmegreen et al. 1999; Testi, Palla, & Natta 1998). This peculiar distribution for massive stars has been observed using color gradients in 12 clusters (Sagar & Bhatt 1989), and from the steepening of the IMF with radius in several clusters (Pandey, Mahra, & Sagar 1992), including Tr 14 (Vazquez et al. 1996), the Trapezium in Orion (Jones & Walker 1988; Hillenbrand 1997; Hillenbrand & Hartmann 1998), and, in the LMC, NGC 2157 (Fischer et al. 1998), SL 666, and NGC 2098 (Kontizas et al. 1998).
The usual explanation for this effect is that massive stars sink to the center of a cluster during dynamical relaxation, but several clusters seem too young for this to have happened (Bonnell & Davies 1998), including Orion Trapezium (Hillenbrand & Hartmann 1998). In that case, the high mass stars had to have been born near the cluster centers, perhaps because the most massive clumps were closer to the center at the time the massive stars were born in them. There are other explanations too. The stars near the center could have accreted gas faster and ended up more massive (Larson 1978, 1982; Zinnecker 1982; Bonnell et al. 1997); they or their predecessor clumps could have coalesced more (Larson 1990; Zinnecker et al. 1993; Stahler, Palla, & Ho 1999; Bonnell, Bate, & Zinnecker 1998), or the most massive stars and clumps forming anywhere could have migrated to the center faster because of a greater gas drag (Larson 1990, 1991; Gorti & Bhatt 1995, 1996; Saiyadpour, Deiss, & Kegel 1997). A problem with most of these models is that they are inconsistent with the observation that the IMF is nearly independent of cluster density (Sect. 2.3). Another model without this problem suggests that the central location of the most massive stars is from the central location of the most massive cloud pieces, which is expected for a hierarchical cloud (Elmegreen 1999a).
## 4 Evolution of the IMF
The discussion above suggests that the IMF has been somewhat constant in time and place, except possibly for an upward shift in the mass at the IMF peak for starburst regions (Sect. 2.5). There was also a suggestion that the IMF was shifted towards higher mass in the early Universe (Larson 1998), although very old stars and old intergalactic gas seem to show evidence for a normal IMF (Table 1).
The direct observations of normal star-forming regions point to a universal IMF, with deviations perhaps only from statistical fluctuations in small samples and from mass segregation in clusters. The observations of regions with extremely low star-forming activity suggest a shift towards lower masses, either with a steeper IMF (as observed by Massey et al. 1995) or, possibly, a downward shift in the peak (as predicted by Elmegreen 1999c). Observations of regions with extremely high star-forming activity suggest an analogous shift towards higher masses, possibly as a result of an upward shift in the peak.
If the mass at the peak of the IMF can really change with star formation activity, possibly as a result of changes in the ratio $`T^2/P^{1/2}`$, which is in the thermal Jeans mass, then there are several important implications. First, the ratio $`T^2/P^{1/2}`$ depends roughly on the light-to-mass ratio in a galaxy disk, because the numerator is proportional to the cooling, and therefore heating rate in molecular clouds, and the denominator is proportional to the local mass column density (Sect. 2.4). This means that if the light-to-mass ratio is high, the peak in the IMF can shift towards higher masses, and vice versa. Now it follows from the Schmidt law, which has a star formation rate proportional to average density to some power greater than unity (e.g., Kennicutt 1998), that the gas consumption rate in a star-forming region increases with higher density, and the luminosity-to-mass ratio for luminous young stars increases too. If the peak in the IMF increases along with the higher L/M ratio, then we get the interesting result that the IMF peak increases with the gas consumption rate (Elmegreen 1999a). We might also have a higher efficiency of star formation in such a region, because of the generally greater self-binding of clouds in high pressure or high velocity-dispersion gas (Elmegreen, Kaufman, & Thomasson 1993). This circumstance could then explain why some starburst regions have all three of these peculiarities at the same time (see review in Telesco 1988).
What happened in the early Universe is more difficult to assess. Although the temperature was higher from the cosmic microwave background, in proportion to $`(1+z)`$, the average density of the Universe was higher too, in the proportion $`(1+z)^3`$, and the pressure, which is a product of density and temperature, was higher by $`(1+z)^4`$. Thus the ratio of $`T^2/P^{1/2}`$ in the thermal Jeans mass was independent of $`z`$. However, $`T`$ and $`P`$ variations in newly forming galaxies should dominate these average $`z`$ variations, and the thermal Jeans mass could have gone either way. If the earliest stars formed in cool high-pressure shocks, then perhaps the thermal Jeans mass was lower than it is today, producing Brown Dwarfs. On the other hand, if the temperature was higher because of an inability to cool without metals, then the Jeans mass could have been higher. The observation of nearly normal abundances in Ly $`\alpha `$ forest lines and the intercluster medium (Table 1) suggest that this characteristic mass probably did not vary much in the early Universe.
## 5 Summary
The IMF is a power law at intermediate to high mass, with a flattening on a log-log plot at low mass. The mass at which the flattening occurs is the only characteristic mass that has been clearly observed for star formation, and is therefore an important indicator of physical processes that depend on scale. Examples might be the thermal Jeans mass or the minimum mass for deuterium burning and stellar winds, both of which have about the right value. Methods to distinguish between these two possibilities were discussed in Section 2.4. The power-law part of the IMF may not indicate specific physical processes, but be more of a remnant from the observed scale-free geometry of pre-stellar clouds. Random sampling models for such geometries reproduce essentially all of the IMF properties with very little sensitivity to free parameters. In this case, much of the physics of the star formation process may be unrecoverable from the power-law part of the IMF alone.
The lack of any obvious dependence of the IMF on cluster density places strong constraints on the physical processes that might be involved (Sect. 2.3). The steep IMF in the extreme field (Sect. 2.2), as well as other systematic variations in the IMF, such as the concentration of massive stars in cluster cores (Sect. 3) and the shift in the IMF towards higher masses in starburst regions (Sect. 2.5), all suggest specific physical differences in the properties of star-forming regions and perhaps in the mechanisms of star formation too. Differences in the IMF from place to place and time to time may eventually tell us more about star formation than any single IMF, which may have washed out any such details in the averaging process.
# Ivory Tower Universities and Competitive Business Firms
## Abstract
There is nowadays considerable interest in ways to quantify the dynamics of research activities, in part due to recent changes in research and development (R&D) funding. Here, we seek to quantify and analyze university research activities, and compare their growth dynamics with those of business firms. Specifically, we analyze five distinct databases, the largest of which is a National Science Foundation database of the R&D expenditures for science and engineering of 719 United States (US) universities for the 17-year period 1979–1995. We find that the distribution of growth rates displays a “universal” form that does not depend on the size of the university or on the measure of size used, and that the width of this distribution decays with size as a power law. Our findings are quantitatively similar to those independently uncovered for business firms, and consistent with the hypothesis that the growth dynamics of complex organizations may be governed by universal mechanisms.
In the study of physical systems, the scaling properties of fluctuations in the output of a system often yield information regarding the underlying processes responsible for the observed macroscopic behaviour. Here, we analyze the fluctuations in the growth rates of university research activities, using five different measures of research activity. The first measure of the size of a university’s research activities that we consider is R&D expenditures. The rationale for using R&D expenditures as a measure of research activity is that research is an expensive activity that the university finances with external support.
We first analyze a database containing the annual R&D expenditures for science and engineering of 719 US universities for the 17-year period 1979–1995 ($`\approx `$ 12,000 data points). The expenditures are broken down by school and department. The annual growth rate of R&D expenditures is, by definition, $`g(t)\equiv \mathrm{log}[S(t+1)/S(t)]`$, where $`S(t)`$ and $`S(t+1)`$ are the R&D expenditures of a given university in the years $`t`$ and $`t+1`$ respectively. We expect that the statistical properties of the growth rate $`g`$ depend on $`S`$, since it is natural that the fluctuations in $`g`$ will decrease with $`S`$. Therefore, we partition the universities into groups according to the size of their R&D expenditures (Fig. 1a). Figure 1b suggests that the conditional probability density, $`p(g|S)`$, has the same functional form, with different widths, for all $`S`$.
We next calculate the width $`\sigma (S)`$ of the distribution of growth rates as a function of $`S`$. Figure 1c shows that $`\sigma (S)`$ scales as a power law
$$\sigma (S)\sim S^{-\beta },$$
(1)
with $`\beta =0.25\pm 0.05`$. In Fig. 1d, we collapse the scaled conditional probability distributions onto a single curve.
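This analysis is straightforward to carry out; a minimal sketch in Python (the array name and shape are illustrative assumptions, not the actual NSF database):

```python
import numpy as np

def growth_width(S, n_bins=8):
    """Widths sigma(S) of annual growth rates, binned by initial size.

    S: array of shape (n_universities, n_years) of R&D expenditures.
    """
    S0, S1 = S[:, :-1].ravel(), S[:, 1:].ravel()
    ok = (S0 > 0) & (S1 > 0)
    g = np.log(S1[ok] / S0[ok])                 # g(t) = log S(t+1)/S(t)
    logS = np.log10(S0[ok])
    edges = np.quantile(logS, np.linspace(0, 1, n_bins + 1))
    out = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (logS >= lo) & (logS < hi)
        if sel.sum() > 10:
            out.append((logS[sel].mean(), g[sel].std()))
    return np.array(out)

# beta is minus the slope of log sigma versus log S:
# pts = growth_width(S); beta = -np.polyfit(pts[:, 0], np.log10(pts[:, 1]), 1)[0]
```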
To test if these results for the dynamics of R&D expenditures are valid for other measures of research activity, we next analyze another measure of a university’s research activities, the number of papers published each year. We analyze data for the 17-year period 1981–1997 from the US University Science Indicators, which records the number of papers published by the top 112 US universities ($`\approx `$ 1,900 data points). We find that the analog of Fig. 1 holds. Particularly striking is the fact that the same exponent value, $`\beta =1/4`$, is found (Fig. 2a) and that the same functional form of $`p(g|S)`$ is displayed (Fig. 2b).
Next, we consider as a measure of size the number of patents issued to a university. We “manually” retrieve from the webpages of the US Patent and Trademark Office’s database the number of patents issued to each of 106 universities each year of the 22-year period 1976–1997 ($`\approx `$ 2,300 data points). We confirm that the analog of Fig. 1 holds, with the same exponent value, $`\beta =1/4`$ (Fig. 2a), and the same functional form of $`p(g|S)`$, Fig. 2b.
To test if our findings hold for different academic systems, we analyze two databases on research funding of English and Canadian universities. The same quantitative behavior is found for the distribution of growth rates and for the scaling of $`\sigma `$, with the same exponent value (Fig. 2a) and the same functional form of $`p(g|S)`$, Fig. 2b. Thus, the analysis of all five databases confirms that the same quantitative results hold across different measures of research activity and academic systems.
We next address the question of how to interpret our empirical results. We start with the observation that research is an expensive activity, and that the university must “offer” its research to external sources such as governmental agencies and business firms. Thus, an increase in R&D expenditures at university $`A`$ and a decrease at university $`B`$ implies that the funders of research increasingly choose their research from university $`A`$ as opposed to university $`B`$. This qualitative picture parallels the competition among different business firms, so it is natural to enquire if there is quantitative support for this analogy between university research and business activities. To quantitively test this analogy, we note that the results of Fig. 1 are remarkably similar to the results found for firms and countries. We plot in Fig. 2c the scaled conditional probabilities $`p(g|S)`$ for countries, firms and universities, and find that the distributions for the different organizations fall onto a single curve.
There is, however, one difference: For firms and countries, we find $`\beta \approx 1/6`$, while for universities, $`\beta \approx 1/4`$. We can understand this difference using a model for organization growth. In the model, each organization —university, firm, or country— is made up of units. The model assumes these units grow through an independent, Gaussian-distributed, random multiplicative process with variance $`𝒲^2`$. Units are absorbed when they become smaller than a “minimum size”, which is a function of the activity they perform. Units can also give rise to new units if they grow by more than the minimum size for a new unit to form. The model predicts $`\beta =𝒲/[2(𝒲+𝒟)]`$, where $`𝒟`$ is the width of the distribution of minimum sizes for the units. For firms, the range of typical sizes is very broad —from small software and accounting firms to large oil and automobile firms— suggesting a large value of $`𝒟`$. On the other hand, for universities, the range of typical sizes is much narrower, suggesting a small value of $`𝒟`$ and implying a larger value of $`\beta `$ than for business firms. This is indeed what we observe empirically.
Business firms are comprised of divisions and universities are made up of schools or colleges, so it is natural to consider the internal structure of these complex organizations. We next quantify how the internal structure of a university depends on its size by calculating the conditional probability density $`\rho (\xi |S)`$ to find a school of size $`\xi `$ in a university of size $`S`$ (Fig. 3a). The model predicts that $`\rho (\xi |S)`$ obeys the scaling form
$$\rho (\xi |S)\sim S^{-\alpha }f\left(\xi /S^\alpha \right),$$
(2)
where $`f(u)\sim u^{-\tau }`$ for $`u\ll 1`$, and $`f(u)`$ decays as a stretched exponential for $`u\gg 1`$. We find $`\tau =0.37\pm 0.10`$ (Fig. 3b), and $`\alpha =0.75\pm 0.05`$ (Fig. 3c). We test the scaling hypothesis (2) by plotting the scaled variables $`\rho (\xi |S)/S^{-\alpha }`$ versus $`\xi /S^\alpha `$. Figure 3b shows that all curves collapse onto a single curve, which is the scaling function $`f(u)`$.
Equation (2) implies that the typical number of schools with research activities in a university of size $`S`$ scales as $`S^{1-\alpha }`$, while the typical size of these schools scales as $`S^\alpha `$. Hence, we can calculate how $`\sigma `$ depends on $`S`$,
$$\sigma (S)\sim (S^{1-\alpha })^{-1/2}𝒲(\xi ).$$
(3)
In order to determine $`\sigma `$, we first find the dependence of $`𝒲`$ on $`\xi `$. Figure 3d shows that $`𝒲\sim \xi ^{-\gamma }`$ with $`\gamma =0.16\pm 0.05`$. Substituting into (3) and remembering that the typical size of the schools is $`S^\alpha `$, we obtain $`\sigma (S)\sim (S^{1-\alpha })^{-1/2}(S^\alpha )^{-\gamma }`$, which leads to the testable exponent relation
$$\beta =\frac{1-\alpha }{2}+\alpha \gamma .$$
(4)
For $`\alpha \approx 3/4`$ and $`\gamma \approx 1/6`$, Eq. (4) predicts $`\beta \approx 1/4`$, in surprising agreement with our empirical estimate of $`\beta `$ from the five distinct databases analyzed (Fig. 2a).
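The exponent relation (4) is easy to verify in a toy version of the model (a sketch with equal-sized units and Gaussian unit growth of width $`𝒲(\xi )\sim \xi ^{-\gamma }`$; the size range and number of realizations are arbitrary choices):

```python
import numpy as np

# Toy check of Eq. (4): organizations of size S made of ~S^(1-alpha)
# units of size ~S^alpha, each unit growing independently with a
# Gaussian rate of width xi^(-gamma).
rng = np.random.default_rng(2)
alpha, gam = 0.75, 1.0 / 6.0

sizes, widths = np.logspace(2, 6, 9), []
for S in sizes:
    n = max(1, round(S ** (1 - alpha)))     # number of units
    xi = S / n                              # unit size ~ S^alpha
    g = rng.normal(0.0, xi ** -gam, size=(20000, n)).mean(axis=1)
    widths.append(g.std())

beta = -np.polyfit(np.log10(sizes), np.log10(widths), 1)[0]
print(f"beta = {beta:.3f} (Eq. (4) predicts {(1 - alpha) / 2 + alpha * gam:.3f})")
```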
Our results are consistent with the possibility that the statistical properties of university research activities are surprisingly similar for different measures of research activity and for distinct academic systems. Moreover, our findings for university research resemble those independently found for business firms and countries. One possible explanation is that peer review, together with government oversight, may lead to an outcome similar to that induced by market forces, where the analog of peer-review quality control may be consumer evaluation, and the analog of government oversight may be product regulation.
ACKNOWLEDGEMENTS: We are indebted to the referees of this manuscript for helpful suggestions which motivated additional analysis of four databases reported here and the possible practical implications discussed. We thank M. Barthélemy, S.V. Buldyrev, D. Canning, X. Gabaix, S. Havlin, P.Ch. Ivanov, H. Kallabis, Y. Lee, and B. Roehner for stimulating discussions. We also thank N. Bayers, E. Garfield, and especially R.E. Hudson for help with obtaining the ISI database. We thank NSF, and LANA thanks FCT/Portugal for financial support.
# On the time evolution of the entropic index
## Abstract
We adapt the Kolmogorov-Sinai entropy to the non-extensive perspective recently advocated by Tsallis. The resulting expression is an average over the invariant distribution, which should be used to detect the genuine entropic index $`Q`$. We argue that the condition $`Q>1`$ is time dependent.
The importance of establishing a connection between dynamics and thermodynamics, and the difficulties with this ambitious purpose, are illustrated in a very attractive way in the recent book by Zaslavsky. Zaslavsky shows that the problems with this connection are not caused by the phenomenon of the Poincaré recurrences: In strongly chaotic systems the Poincaré recurrences are frequent and erratic, and result in a Poisson form for the distribution of recurrence times $`P_R(t)`$, namely a distribution with the following form ($`t>0`$)
$$P_R(t)\propto \mathrm{exp}(-h_{KS}t),$$
(1)
where $`h_{KS}`$ denotes the Kolmogorov-Sinai entropy, thereby proving the intimate connection between thermodynamics and mechanics in this case. The problem is that Eq.(1) rests on a condition of strong chaos that seems to be an exception rather than a rule. In general, the phase space of Hamiltonian systems is rarely totally chaotic. We cannot rule out the possibility that even in the case of seemingly chaotic systems small islands of stability lie in the phase space. The presence of an island of stability has impressive consequences. The region of separation between the deterministic island and the chaotic sea is fractal and self-similar. These properties result in stickiness. This means that a generic trajectory with initial conditions located somewhere else, in the chaotic sea, through a fast process of diffusion will reach that surface and will stick to it for very extended sojourn times, with the distribution density $`\psi (t)`$. As a consequence, we have:
$$\lim _{t\to \infty }P_R(t)=\psi (t)$$
(2)
and
$$\lim _{t\to \infty }\psi (t)=\mathrm{const}/t^{(2+\beta )},$$
(3)
with $`\beta >1`$. This result is compatible with the important theorem of Kac, according to which the first moment of the distribution $`P_R(t)`$ is finite.
Zaslavsky also points out that two distinct billiards, each of them characterized by the dynamical properties necessary to realize the ergodic condition, when coupled to one another through a small hole in the wall that separates one billiard from the other, do not show any equilibration, at least for a very extended interval of time; rather, the trajectories seem to prefer one of the two billiards: an effect reminiscent of the action of Maxwell’s demon. This is the consequence of the breakdown of the thermodynamic condition of Eq.(1), provoked by the emergence of the slow tail of Eq.(3). Zaslavsky reaches the important conclusion “that chaotic dynamics exhibit some memory-type features which have to be suppressed in order to derive the laws of thermodynamics”. In this paper we want to prove that if such a memory erasing process exists, it must be perceived as the source of a transition from non-extensive to extensive thermodynamics rather than a transition from dynamics to thermodynamics.
First of all, let us stress that it is essential to use the Tsallis entropy rather than the conventional Gibbs entropy. The Tsallis entropy reads
$$H_q=\frac{1-\int dx\,\mathrm{\Pi }(x)^q}{q-1}.$$
(4)
Note that this entropy is characterized by the index $`q`$ whose departure from the conventional value $`q=1`$ signals the thermodynamic effects of either long-range correlations in fractal dynamics or the non-local character of quantum mechanics. The function $`\mathrm{\Pi }(x)`$ denotes a distribution of a generic variable $`x>0`$, including the case $`x\equiv t`$, with $`t`$ being a Poincaré recurrence time or a time of sojourn at the border between the chaotic sea and a stability island. As earlier pointed out, this latter physical interpretation applies to the case where fractal dynamics result in a breakdown of the conventional condition of a finite time scale. It has been recently pointed out that the maximization of the entropy of Eq.(4) under the condition of the existence of a finite first moment, in line with the theorem of Kac, results in an inverse power law like that of Eq.(3).
As important as this result is, it would leave open the problem raised by Zaslavsky as to the connection between dynamics and thermodynamics, if, as it is correct to do, special attention were to be devoted to the Kolmogorov-Sinai (KS) entropy. The study of the connection between dynamics and thermodynamics is making significant progress along the lines of the seminal work of Krylov. Under his influence interesting attempts are currently being made at relating the KS entropy to the thermodynamical entropy. Of remarkable interest are the work of Gaspard, relating the KS entropy for a dilute gas to the standard thermodynamical entropy per unit volume of an ideal gas, and the more recent paper by Dzugutov, Aurell and Vulpiani, who express the KS entropy of a simple liquid in terms of the excess entropy, namely, the difference between the thermodynamical entropy and that of the ideal gas at the same thermodynamical state. Latora and Baranger studied the KS entropy of some maps, and for one of them, the cat map, they found analytical results. To properly appreciate the significance of the result obtained by these authors, we must note first of all that the KS entropy is a kind of entropy per unit of time, and that within this perspective the thermodynamical regime is expected to be expressed by a condition where the entropy growth is linear in time. Latora and Baranger found that at short times the regime of entropy increase, instead of being linear, is exponential in time. They also found that the KS regime, beginning at the end of this initial transient process, is not permanent and that after a given time a form of saturation takes place. All these are indications of a time evolution of the thermodynamical properties of these systems, probably related to the main problem under discussion in this paper, namely the aging of the non-extensive thermodynamics of Tsallis.
A recent work by Tsallis, Plastino and Zheng illustrates the convenience of generalizing the KS procedure so as to make it efficient for studying the thermodynamical properties of fractal dynamics. However, these authors adopt heuristic arguments and do not provide direct prescriptions on how to express the ensuing generalized version of the KS entropy in terms of the invariant distribution. We refer to this generalized form of KS entropy as the Kolmogorov-Sinai-Tsallis (KST) entropy. Its explicit form reads:
$$H_q(N)=\frac{1-\sum _{\omega _0\ldots \omega _{N-1}}p(\omega _0\ldots \omega _{N-1})^q}{q-1}.$$
(5)
The numerical procedure to evaluate the KST entropy is the same as that adopted to evaluate the KS entropy. For clarity we remind the reader of this prescription using the most elementary phase space possible, namely, the one-dimensional interval $`[0,1]`$ for the continuous variable $`x`$. This phase space is divided into $`l`$ cells of equal width $`1/l`$. Then we run the dynamical system under study. In this paper we shall focus on one-dimensional maps, thereby fitting the restriction of considering a one-dimensional phase space. However, our conclusions are not restricted to maps, since, as we shall see, they can be straightforwardly applied to the Hamiltonian systems discussed by Zaslavsky.
Running a one-dimensional map means producing a sequence of values $`x_0\ldots x_j\ldots `$. Since any point of this one-dimensional trajectory is located in a given cell, producing a trajectory is equivalent to generating a sequence of values $`\omega _0\ldots \omega _j\ldots `$, where $`\omega _j`$ is the label of the cell occupied by the trajectory at the “time” $`j`$. After creating the sequence $`\omega _0\ldots \omega _j\ldots `$, which for simplicity we assume to be infinitely long, we proceed with the evaluation of the KST entropy as follows. We fix a window of size $`N`$ and we move the window along the sequence. For any window position we record the labels lying within the window, from the first, $`\omega _0`$, to the last, $`\omega _{N-1}`$. Notice that the subscripts now refer to the order within the window, which must not be confused with the subscript denoting the position of the symbol in the whole sequence. Moving the window we can evaluate how many times the same combination of symbols appears, and from this frequency we evaluate the probability distribution $`p(\omega _0\ldots \omega _{N-1})`$ which is then used to evaluate the KST entropy of Eq.(5).
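As a concrete illustration of this prescription, the following minimal sketch (our own construction, not the code used for the results quoted below) coarse-grains a trajectory, slides the window, and evaluates Eq.(5); for the fully chaotic logistic map with two cells the $`q=1`$ entropy should grow roughly linearly in $`N`$ with slope $`\mathrm{ln}2`$:

```python
# Windowing procedure for the KST entropy of Eq. (5): coarse-grain a
# one-dimensional trajectory into l cells, slide a window of size N
# along the symbolic sequence, and estimate p(omega_0 ... omega_{N-1})
# from the observed frequencies.
from collections import Counter
import numpy as np

def symbolic_sequence(phi, x0, steps, l):
    """Run x_{n+1} = phi(x_n) on [0,1] and record the cell labels."""
    labels, x = [], x0
    for _ in range(steps):
        labels.append(min(int(x * l), l - 1))
        x = phi(x)
    return labels

def kst_entropy(labels, N, q):
    """Eq. (5); the q -> 1 limit reduces to the usual block entropy."""
    windows = Counter(tuple(labels[j:j + N])
                      for j in range(len(labels) - N + 1))
    total = sum(windows.values())
    p = np.array([c / total for c in windows.values()])
    if abs(q - 1.0) < 1e-12:
        return float(-np.sum(p * np.log(p)))
    return float((1.0 - np.sum(p**q)) / (q - 1.0))

phi = lambda x: 4.0 * x * (1.0 - x)          # fully chaotic logistic map
seq = symbolic_sequence(phi, 0.3141592, 200000, l=2)
for N in range(1, 8):
    print(N, kst_entropy(seq, N, q=1.0))     # grows by ~ ln 2 per unit N
```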
This kind of calculation has been recently applied to evaluating the KST entropy of a symbolic sequence obtained with a stochastic rule essentially equivalent to the intermittent dynamical process behind the one-dimensional processes of Lévy diffusion, and consequently equivalent, in principle, to the processes studied in Ref.. The research work of Ref. proved that the KST entropy is a linear function of $`N`$ only when the proper entropic index $`q`$ is used. This entropic index turned out to fulfill the condition $`q<1`$, in striking conflict with the result of Ref., which predicts $`q>1`$. This paper, among other things, aims at shedding light on the reasons for this conflict, which will be duly accounted for.
The key to the problem of determining the KST entropy adopted in this paper is the same as that found by Pawelzik and Schuster to evaluate a different form of generalized KS entropy, precisely that obtained by replacing the Gibbs entropy with the Rényi entropy, namely, the Kolmogorov-Sinai-Rényi (KSR) entropy. For the calculation of both the KST and the KSR entropy it is essential to adopt the following very important relation:
$$\sum _ip_i^q=\frac{1}{M}\sum _{j=1}^M\stackrel{~}{p}_j^{q-1},$$
(6)
which has been advocated by Pawelzik and Schuster.
This important relation deserves some illustration. For any window of size $`N`$ we create a multidimensional phase space, of dimension $`N`$, associating any point $`x_j`$ of our original one-dimensional phase space with the $`N`$-dimensional point $`x_j,x_{j+1}\ldots x_{j+N-1}`$. This means that the original one-dimensional cells of size $`1/l`$ become $`N`$-dimensional squares with the same size. Any cell of the resulting $`N`$-dimensional phase space corresponds to one of the combinations $`\omega _j,\omega _{j+1}\ldots \omega _{j+N-1}`$ whose frequency must be properly evaluated to determine the distribution $`p(\omega _0\ldots \omega _{N-1})`$, which, in turn, is necessary for the calculation of the KST entropy of Eq.(5). The cells of this $`N`$-dimensional phase space are properly labelled, and the symbol $`p_i`$ appearing on the l.h.s. of Eq.(6) is the probability that a trajectory of this $`N`$-dimensional phase space running for an unlimited amount of time is found in the $`i`$-th cell. The term on the r.h.s. of Eq.(6) refers to a calculation procedure based on the adoption of a single trajectory running from the time $`j=1`$ to the time $`j=M`$. This trajectory carries with itself an $`N`$-dimensional square of size $`2/l`$, of which the $`N`$-dimensional point $`x_j,x_{j+1}\ldots x_{j+N-1}`$ is the center. The symbol $`\stackrel{~}{p}_j`$ denotes the probability that the same trajectory, at earlier or later times, is found in this cell. The mathematical arguments invoked by Pawelzik and Schuster to explain why the power $`q-1`$ on the r.h.s. of Eq.(6) corresponds to the power $`q`$ appearing on the l.h.s of the same equation have a transparent physical meaning: the trajectory carrying the moving $`N`$-dimensional cell of size $`2/l`$ explores more (less) frequently the regions of the $`N`$-dimensional phase space of higher (lower) probability, so that the sum over the running index $`j`$ implicitly includes the missing factor of $`\stackrel{~}{p}_j`$.
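As an aside, the r.h.s. of Eq.(6) translates directly into an algorithm; the following sketch (our own illustration, not the procedure adopted for the results of this paper) embeds a scalar series in $`N`$ dimensions and counts box occupations:

```python
# Trajectory estimator of Eq. (6): each embedded point carries an
# N-dimensional box of side 2/l; p~_j is the fraction of time the
# trajectory spends inside the box centered on point j.
import numpy as np

def pawelzik_schuster(x, N, l, q):
    M = len(x) - N + 1
    emb = np.stack([x[m:m + M] for m in range(N)], axis=1)  # N-dim embedding
    # |x_k - x_j| < 1/l in every coordinate <=> point k lies inside the
    # box of side 2/l centered on point j (Chebyshev distance).
    d = np.abs(emb[:, None, :] - emb[None, :, :]).max(axis=2)
    p_tilde = (d < 1.0 / l).mean(axis=1)
    return np.mean(p_tilde**(q - 1.0))      # estimates sum_i p_i^q

# Sanity check on an i.i.d. uniform series: for q = 2 and N = 2 the
# estimate is ~(2/l)^2, i.e. sum_i p_i^2 = l^(-2) up to the box-size
# factor 2^N and edge effects.
rng = np.random.default_rng(0)
print(pawelzik_schuster(rng.random(1500), N=2, l=10, q=2.0))
```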
Rather than using the key relation of Eq.(6) to introduce a generalized correlation integral, as Pawelzik and Schuster propose, we now have recourse to an approach inspired by that of Tsallis et al.. To see how this procedure works let us consider, as an illustrative example, a window of size $`2`$ ($`N=2`$), making Eq.(6) read
$$\sum _{i,j=1}^lp(\omega _i,\omega _j)^q=\frac{1}{M}\sum _{j=1}^M\stackrel{~}{p}(x_j,x_{j+1})^{q-1}.$$
(7)
We note that
$$\stackrel{~}{p}(x_j,x_{j+1})=\rho (x_j,x_{j+1})(2/l)^2,$$
(8)
where $`\rho (x_j,x_{j+1})`$ denotes the probability density at $`(x_j,x_{j+1})`$.
We show that the relation between the probability density referring to the window of size $`N=2`$ and that concerning the window of size $`N=1`$ is given by
$$\rho (x_j,x_{j+1})(2/l)^2=\rho (x_j)(2/l)\mathrm{exp}(-\lambda _j).$$
(9)
To explain the dynamical origin of Eq.(9) and to define the parameter $`\lambda _j`$ as well, it is convenient to remind the reader that, as earlier pointed out, we are considering a one-dimensional map, defined by
$$x_{j+1}=\mathrm{\Phi }(x_j),$$
(10)
where, for the time being, the form of the function $`\mathrm{\Phi }(x)`$ is left unspecified. The parameter $`\lambda _j`$ is the Lyapunov coefficient corresponding to the time step $`j`$, and it is defined as follows
$$\lambda _j\equiv \mathrm{ln}\left(\frac{\mathrm{\Delta }x_{j+1}}{\mathrm{\Delta }x_j}\right),$$
(11)
or, equivalently, as
$$\lambda _j\equiv \mathrm{ln}[\mathrm{\Phi }^{\prime }(x_j)].$$
(12)
We note that, in general, in the case of the intermittent map that we are considering, the Lyapunov coefficient does not lose its dependence on the initial condition of the trajectory. The initial point of a trajectory is, say, $`x_0`$, and the Lyapunov coefficient is evaluated by studying, in addition to the trajectory starting at $`x_0`$, an auxiliary trajectory which begins at $`x_0+\mathrm{\Delta }x_0`$. The symbol $`\mathrm{\Delta }x_j`$ denotes the distance between the two trajectories at the time $`j`$. We shall come back to the important aspect of the dependence on the initial condition later on in this letter. From the very definition of Eq.(11) we have:
$$\mathrm{\Delta }x_{j+1}=\mathrm{\Delta }x_j\mathrm{exp}(\lambda _j).$$
(13)
The departure of a trajectory from another one, initially belonging to the same interval of size $`1/l`$, implies that the total number of trajectories contained in the square of area $`l^{-2}`$ is decreased by the factor $`\mathrm{exp}(\lambda _j)`$, thereby resulting in Eq.(9) as an effect of moving from a window of size $`1`$ to a window of size $`2`$.
It is evident that the counterpart of Eq.(8) in the case of a window of size $`1`$ is $`\stackrel{~}{p}(x_j)=\rho (x_j)(2/l)`$. Thus Eq.(9) becomes
$$\stackrel{~}{p}(x_j,x_{j+1})=\stackrel{~}{p}(x_j)\mathrm{exp}(-\lambda _j).$$
(14)
The effect of moving from the window of size $`1`$ to a generic window spanning the points $`x_j,\ldots ,x_{j+N}`$ is expressed by the more general equation
$$\stackrel{~}{p}(x_j,\ldots ,x_{j+N})=\stackrel{~}{p}(x_j)\mathrm{exp}\left(-\sum _{n=0}^{N-1}\lambda _{j+n}\right).$$
(15)
Before illustrating the physical consequences of the general prediction of Eq.(15) it is convenient to make some more comments about the time evolution of the Lyapunov coefficient. The conclusion expressed by Eq.(15), supplemented by Eq.(12), leads us to define the following time-dependent Lyapunov coefficient:
$$\mathrm{\Lambda }(N,x)\equiv \sum _{n=0}^{N-1}\mathrm{ln}[\mathrm{\Phi }^{\prime }(x_n)]=\sum _{n=0}^{N-1}\mathrm{ln}\left(\frac{\mathrm{\Delta }x_{n+1}}{\mathrm{\Delta }x_n}\right)=\mathrm{ln}\left(\prod _{n=0}^{N-1}\frac{\mathrm{\Delta }x_{n+1}}{\mathrm{\Delta }x_n}\right)=\mathrm{ln}\left(\frac{\mathrm{\Delta }x_N}{\mathrm{\Delta }x_0}\right).$$
(16)
It is convenient to express the departure of an auxiliary trajectory from the trajectory under study through the following function $`\delta (t)`$:
$$\delta (t)\equiv \lim _{\mathrm{\Delta }x_0\to 0}\frac{\mathrm{\Delta }x_t}{\mathrm{\Delta }x_0}.$$
(17)
Note that we are now considering windows of size $`N`$ so large that we can identify $`N`$ with the continuous variable $`t`$ and neglect $`1`$ compared to $`N`$. Tsallis et al. have recently shown that the extension of the KS method, resting on the Tsallis entropy of Eq.(4), yields
$$\delta (t)=[1+(1-Q)\kappa _Q(x)t]^{\frac{1}{1-Q}}.$$
(18)
This very important result was found by these authors by studying the time evolution of the trajectories in the phase space corresponding to a window of size $`1`$, which corresponds to the one-dimensional phase space under study in this paper. Then these authors made the heuristic assumption that the increase of the number of cells takes place according to the same prediction as that leading to the increasing departure of a trajectory of interest from an auxiliary trajectory, namely, a trajectory with an initial condition very close to that of the trajectory of interest. This assumption is the same as that adopted in this paper to derive Eq.(9). They also made the heuristic assumption, not made here, that at each time step the occupied cells have the same probability. The validity of the prediction of Eq.(18) has been checked in two distinct cases. The former is that of the logistic map at the onset of chaos, where the adoption of multifractal arguments yields $`Q<1`$. It has to be pointed out, however, that Lyra and Tsallis do not explain why the numerical calculation of the function $`\delta (t)`$ of Eq.(17) results in wild fluctuations, with Eq.(18) found to be accurate only for the amplitude increase of these oscillations. The latter case refers to the maps used to derive Lévy diffusion processes. In this case analytical calculation alone yields $`Q>1`$.
The important research work of Lyra and Tsallis and of Tsallis and co-workers did not afford direct indications of how to carry out in practice the calculation of the KST entropy, and, especially, of how to go through a kind of statistical averaging. We are now in a position to provide these directions, and so to discuss to what extent the entropic indexes $`Q`$ predicted by the earlier papers really result in a linear time increase of the KST entropy. In fact, from the joint use of Eq.(15), Eq.(16) and Eq.(17) we get:
$$H_q(t)=[1-k(q)\int dx\,p(x)^q\delta (t,x)^{1-q}]/(q-1),$$
(19)
where $`k(q)\equiv (2/l)^{q-1}`$ stems from the replacement of the discrete sum of Eq.(6) with the continuous integral. Note that this makes the KST entropy dependent on the size of the cells, as is the so-called $`ϵ`$-entropy (see, for example, Ref.). Note also that to derive Eq.(19) we adopted the arguments of Ref., and thus the obvious generalization of Eq.(6):
$$\sum _ip_i^qf(x_i)=\frac{1}{M}\sum _{j=1}^M\stackrel{~}{p}_j^{q-1}f(x_j),$$
(20)
where $`f(x_i)`$ is a generic function.
It is evident that the prediction of Refs., yielding $`Q<1`$, results in a linear time increase of the KST entropy only if the fluctuations can be ignored. Actually these fluctuations are multifractal in nature and are a manifestation of the effect observed more than ten years ago by Anania and Politi. These authors showed that the Feigenbaum attractor, discussed in terms of an algebraic index $`\beta `$, results in a fluctuating spectrum $`h(\beta )`$. They also noticed that the behavior of a finite distance is described by algebraic exponents over a limited range. All these observations might be related to the possibility that the entropic index $`Q`$ has to be considered as time dependent also in the case $`Q<1`$. We are confident that the result of Eq.(19) affords a way of discussing the thermodynamic aspects of these interesting phenomena.
Let us now apply the fundamental result of Eq.(19) to the case of intermittent motion, described by the Manneville map, whose explicit expression is:
$$x_{n+1}=\mathrm{\Phi }(x_n)=x_n+x_n^z\quad (\mathrm{mod}\ 1),\qquad 1<z.$$
(21)
This map has been more recently used by Gaspard and Wang to discuss the algorithmic complexity of sporadic randomness. Using the main result of this paper we are now in a position to prove the aging effect of the non-extensive thermodynamics of Tsallis. We limit our predictions to three distinct time regions. It has to be stressed that a first time scale is given by the time step $`\mathrm{\Delta }t=1`$. The inverse power law nature of the waiting time distribution $`\psi (t)`$ is perceived at times much larger than $`\mathrm{\Delta }t=1`$. The short-time region is given by times comparable to this microscopic time scale; it is expected to be dominated by a condition of total chaos and, consequently, to be associated with $`Q=1`$. In fact the short-time region will be dominated by the trajectories with initial conditions either in the chaotic region or in the laminar region but conveniently close to the border with the chaotic region. Another important time is given by $`T`$, the mean waiting time in the laminar region, finite for $`z<2`$. The intermediate time region refers to times $`\mathrm{\Delta }t\ll t\ll T`$ and the long-time region to $`t\gg T`$.
In the intermediate time region, adopting an approach similar to that used in Ref., we find that the heuristic prediction of Eq.(18) is exact. Furthermore we find:
$$Q=1+(z-1)/z$$
(22)
and
$$\kappa _Q(x)=\frac{x^{\frac{Q-1}{2-Q}}}{2-Q}.$$
(23)
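Eqs. (18), (22) and (23) can be cross-checked analytically (a sketch of ours, using the continuous-time approximation $`\dot{x}=x^z`$ of the map (21) in the laminar region):

```python
# Consistency check: integrate dx/dt = x**z, form delta(t) = dx_t/dx_0
# as in Eq. (17), and verify that it equals the q-exponential of
# Eq. (18) with Q and kappa_Q(x) given by Eqs. (22)-(23).
import sympy as sp

t, x0, z = sp.symbols('t x0 z', positive=True)

xt = (x0**(1 - z) - (z - 1)*t)**(1/(1 - z))  # solution with x(0) = x0
delta = sp.diff(xt, x0)                      # Eq. (17)

Q = 1 + (z - 1)/z                            # Eq. (22)
kappa = x0**((Q - 1)/(2 - Q))/(2 - Q)        # Eq. (23); equals z*x0**(z-1)
qexp = (1 + (1 - Q)*kappa*t)**(1/(1 - Q))    # Eq. (18)

# Exact agreement for specific numbers (z = 3/2, x0 = 1/100, t = 5):
subs = {z: sp.Rational(3, 2), x0: sp.Rational(1, 100), t: 5}
print(sp.simplify(delta.subs(subs) - qexp.subs(subs)))   # -> 0
```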
In conclusion we find that when the entropic index $`q`$ is given the magic value established by Eq.(22), the time evolution of $`H_q(t)`$ becomes linear. Thus we recover the prediction of Ref., implying that the dynamical processes of Lévy diffusion, in an intermediate time region, are associated with non-extensive thermodynamics. This is not a permanent condition: we know from the work of Gaspard and Wang that in the long-time limit ($`N\to \infty `$) $`\mathrm{\Lambda }(N,x)/N`$ becomes constant and loses the dependence on the initial condition. Consequently, on the basis of the theory illustrated in this paper, it is straightforward to predict that in the long-time limit $`Q=1`$.
It is also evident that the theoretical predictions of this paper refer to a case where the size $`1/l`$ of the cells is made arbitrarily small. This explains why in the intermediate time region $`Q=1+\frac{z-1}{z}>1`$, in apparent conflict with the result of the numerical analysis of Ref. yielding $`Q=[\frac{z}{(z-1)}-2]^\alpha `$ with $`\alpha \approx 0.15`$, namely, $`Q<1`$. This is so because the authors of Ref. studied the KST entropy of a dynamical system with algorithmic complexity equivalent to that of the map of Ref., and so to that of the Manneville map, using only two cells. The corresponding symbolic sequence is characterized by extended strings of the same symbol, either $`+1`$ or $`-1`$, and the entropy increase is generated by the random length of these strings. The strings with the same symbol are indistinguishable from phases of regular motion, implying $`Q=0`$, while the sporadic randomness would yield $`Q=1`$. Therefore, it is reasonable that the numerical analysis, as a balance between these two processes, yields $`Q<1`$. The adoption of arbitrarily small cells, on the contrary, makes it possible to relate the extended laminar regions to $`Q>1`$, in accordance with the prediction on superdiffusion of Ref..
We notice that the very nature of the connection between dynamics and thermodynamics established by the work of Refs. implies the aging of the non-extensive thermodynamics of superdiffusion. In fact, in this case $`Q>1`$ and Eq.(18) yields a function $`\delta (t)`$ faster than the exponential, in the sense that this function diverges at a finite time. Actually, this divergence has no physical significance: the function $`\delta (t)`$ is forced to depart from the prediction of Eq.(18) by the exit from the laminar region. On the long-time scale the sequence of many exits from the laminar region and many random injections into it makes it possible to adopt Eq.(18), provided that the entropic index $`Q`$ is assumed to slowly regress to the ordinary value $`Q=1`$. We think that this conclusion agrees with the observation made by Wang that the $`ϵ`$-entropy increases linearly in time for a Lévy process. In fact, from the entropic point of view the Manneville map is equivalent to the map used in Ref. for the dynamical derivation of the Lévy processes, and Ref., in turn, shows that the Lévy diffusion regime is reached as a consequence of the repeated action of randomness, established by the chaotic part of the map. The ultimate effect of this randomness is that of producing the Markov property, a condition necessary for the realization of the Lévy diffusion, which is in fact Markov. In principle, the case $`Q<1`$ might be compatible with an eternal form of non-extensive thermodynamics. Whether or not this is so, for reasons related to the effects discovered by Anania and Politi, can be assessed with further research work based on the adoption of the important result of Eq.(19).
We are now in a position to address the important issue raised by Zaslavsky. It is easy to relate all these results to the Hamiltonian dynamics of interest for Zaslavsky. In fact, as pointed out by Zaslavsky himself, the main statistical properties of the dynamical processes of interest are determined by the waiting time distribution $`\psi (t)`$ of Eq.(3), and the time evolution of the Lyapunov coefficients is strictly dependent on the power law of $`\psi (t)`$, as can be realized by comparing Eq.(3) to Eq.(18) in the light of the conjecture made in Ref. that $`\psi (t)=k\delta (<\kappa _0(x)>t)`$. We also note that it is straightforward to prove that the KST entropy of the billiards discussed by Zaslavsky can be evaluated using the prescription of Eq.(19), provided that the probability distribution $`p(x)`$ is meant to refer to the corresponding phase space. Zaslavsky shows that the short-time evolution of $`\psi (t)`$ is Poisson-like. We note also that the conjecture $`\psi (t)=k\delta (<\kappa _0(x)>t)`$ fits the prescription of Eq.(1) in the case where $`Q=1`$. Thus, if we limit our observation to the short-time dynamics we reach the conclusion that the entropic index fits the extensive requirement $`Q=1`$. At later times, however, when the inverse power law nature of the function $`\psi (t)`$ shows up, we expect that $`Q`$ might come close to the non-extensive value $`Q=1+1/(2+\beta )`$ resulting from the theoretical analysis of Ref.. Furthermore, on the basis of the results of Ref., we expect again that in the long-time limit the extensive value $`Q=1`$ is recovered. The billiards studied by Zaslavsky are characterized by the joint action of a chaotic sea and of the fractal dynamics at the border between the chaotic sea and the stability islands. We believe that the memory erasing process that according to Zaslavsky is necessary to suppress the effects of Maxwell’s demon is produced by the action of the chaotic sea itself. The extended time regime prior to this final condition, however, is already thermodynamical, and this paper answers the question raised by Zaslavsky about the thermodynamic nature of a dynamical system whose statistical properties seem to be a manifestation of Maxwell’s demon. The main conclusion of this letter is that Maxwell’s demon is compatible with thermodynamics provided that the non-extensive perspective of Tsallis is adopted.
# Photometric Redshifts for DPOSS Galaxy Clusters at 𝑧<0.4
## 1 Introduction
There are many cosmological uses for rich clusters of galaxies. They provide useful constraints for theories of large-scale structure formation and evolution, and represent valuable (possibly coeval) samples of galaxies for studying their evolution in dense environments. Studies of $`\xi _{cc}`$, the cluster two-point correlation function, are a powerful probe of large-scale structure and of the scenarios of its formation. Until recently, it has been impractical to obtain large numbers of redshifts for galaxy clusters, forcing cosmologists to deproject their distribution mathematically. We show that it is feasible to generate a catalog of galaxy clusters at $`z<0.4`$ with accurately estimated photometric redshifts.
## 2 Observations
Our data are taken from the Digitized Second Palomar Observatory Sky Survey (DPOSS). The digitization, star-galaxy separation, and photometric calibration procedures are described in Weir et al. (1995). We have improved the star-galaxy classification using a much larger training set.
We use a simple color selection of candidate cluster galaxies, coupled with the adaptive kernel method (Silverman, 1986) to generate galaxy surface density maps. A bootstrap technique is then used to generate the statistical significance map associated with a given surface density map. This map is then used to detect overdensities of galaxies on the sky which indicate candidate galaxy clusters. In our test fields, we recover all of the known Abell clusters, and find a large number of new clusters.
## 3 Redshift Estimation
Because the $`4000\AA `$ break is shifting through the blue bandpass of DPOSS at $`z<0.4`$, the $`g-r`$ color changes rapidly with redshift. We make the crude assumption that all cluster galaxies are a single–age, early–type population, and use a k–correction model to estimate redshifts from the $`g-r`$ color alone. We simply use the mean $`g-r`$ color of the galaxies in a cluster, after a background correction, to estimate the redshift. In Figure 1, a $`g-r`$ vs. $`r-i`$ diagram for galaxies to $`M_r=19.6`$ in a typical DPOSS field (36 sq. deg.) is shown. Also shown are the k-correction tracks for Scd and E galaxies. The rapid change in $`g-r`$ between $`z=0`$ and $`z=0.4`$ for early type galaxies allows us to estimate redshifts for galaxy clusters.
A separate redshift estimate, from the magnitude of the n-th brightest galaxy, can also be made, but it is much more sensitive to errors in the correction for field galaxies.
### 3.1 Technique
In practice, the redshift estimation must be done iteratively. First, we detect candidate clusters in our galaxy density maps. From those areas in our maps where there are no clusters, we estimate the background galaxy density and $`g-r`$ color distribution. This background correction is then applied to each cluster candidate in a fixed radius, corresponding to an Abell radius at $`z=0.15`$, the expected median redshift of our clusters.
The redshift of each cluster is then estimated from the mean $`g-r`$ color of the galaxies inside this radius. Using this redshift estimate, we recalculate $`R_{Abell}`$, and estimate the redshift using the mean color within the new radius. This procedure is repeated until the estimated redshift converges.
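Schematically, the iteration can be written as follows (a toy reconstruction on our part: the color–redshift inversion and the angular Abell radius below are illustrative stand-ins, not the actual calibrations used here):

```python
# Iterative photometric redshift estimate for a cluster candidate.
from dataclasses import dataclass
import math

@dataclass
class Galaxy:
    sep_deg: float   # angular distance from the cluster center
    gr: float        # g-r color

def z_from_color(gr_mean):
    # Toy inversion of the E-galaxy k-correction track (illustrative only).
    return max(0.0, (gr_mean - 0.75) / 2.9)

def abell_radius_deg(z):
    # Small-angle, low-z approximation: R_Abell ~ 2.1 Mpc, D ~ cz/H0.
    d_mpc = 3.0e5 * z / 70.0
    return math.degrees(2.1 / max(d_mpc, 1.0))

def estimate_redshift(galaxies, z0=0.15, tol=1e-3, max_iter=30):
    z = z0
    for _ in range(max_iter):
        r = abell_radius_deg(z)
        colors = [g.gr for g in galaxies if g.sep_deg < r]
        if not colors:
            break
        # The background correction of the color distribution is
        # omitted in this toy version.
        z_new = z_from_color(sum(colors) / len(colors))
        if abs(z_new - z) < tol:
            return z_new
        z = z_new
    return z

# e.g.: estimate_redshift([Galaxy(0.05, 1.18), Galaxy(0.08, 1.22)])
```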
In Figure 2, we show the mean $`g-r`$ color for galaxies in 36 Abell clusters with spectroscopic redshifts. The line shown is a theoretical k–correction track for E–type galaxies. It is NOT a fit to the data. The mean deviation of the data from the theoretical curve is $`\mathrm{\Delta }z=0.004`$. This suggests that we can estimate redshifts for our candidate clusters in an accurate, unbiased way, directly from calibrated plate photometry.
This result relies on the large number of galaxies of similar age and type in clusters at low redshift. As the cluster galaxy population changes with redshift, this technique will eventually fail. For $`0.4<z<0.8`$, the $`r-i`$ color could be used; unfortunately, our $`i`$ plate data are not deep enough for this purpose. We have obtained deeper CCD imaging of our low–$`z`$ candidates, with which we will attempt to detect and estimate redshifts for more distant clusters.
###### Acknowledgements.
R. Gal was supported in part by NASA GSRP NGT5-50215 and a Kingsley Fellowship. SGD acknowledges the support of the Norris Foundation.
# The nature of SN 1997D: low-mass progenitor and weak explosion

All the SN 1997D spectra used in this paper were kindly provided by Massimo Turatto.
## 1 Introduction
The Type II supernova (SN) 1997D discovered on Jan. 14.15 UT (De Mello & Benetti db97 (1997)) is a unique event characterized by extremely low expansion velocity, low luminosity, and a very low amount ($`0.002M_{\odot }`$) of radioactive <sup>56</sup>Ni (Turatto et al. tmy98 (1998)). An analysis of the observational data led Turatto et al. (tmy98 (1998)) to conclude that they caught SN 1997D around day 50 after it had exploded as a red supergiant with a mass of 26 $`M_{\odot }`$ and a radius of $`R_0\approx 300R_{\odot }`$. The derived ejecta mass is $`M\approx 24M_{\odot }`$ and kinetic energy is $`E\approx 4\times 10^{50}`$ erg. They propose a scenario in which the low <sup>56</sup>Ni mass in SN 1997D is caused by a fall-back of material onto the collapsed remnant of the explosion of a 25–40 $`M_{\odot }`$ star. An exciting implication is that SN 1997D might be accompanied by black hole formation (Zampieri et al. zsc98 (1998)).
Here we present arguments for an alternative view of the origin of SN 1997D, which in our opinion was a descendant of the low end of the mass range of core-collapse supernova (CCSN) progenitors. The problem we attempted to solve first was to find a hydrodynamical model which could reproduce the light curve, the velocity at the photosphere, and the profiles of the major strong lines (Section 2). We emphasize the importance of the line profile analysis, since it provides robust information on the velocity at the photosphere. The latter is a crucial parameter for constraining hydrodynamical models. Unexpectedly for us, it turned out that Rayleigh scattering in SN 1997D is significant and may be used as a powerful diagnostic tool. The emphatic role of Rayleigh scattering in this case is related to the low energy-to-mass ratio ($`E/M`$) of SN 1997D (Turatto et al. tmy98 (1998)), which results in a higher than normal density at the photospheric epoch for a SN II-P. A combination of hydrodynamical modelling and robust analysis of spectra at the photospheric epoch permitted us to impose tight constraints on $`E`$, $`M`$, and $`R_0`$ of SN 1997D.
In addition, we analyzed nebular spectra of SN 1997D using a nebular model (Section 3). To make such an analysis as secure as possible we first checked the model taking advantage of the well studied SN 1987A at a similar epoch. We found modelling the nebular spectrum of SN 1997D beneficial in discriminating between the low and high-mass options for the ejecta. To our knowledge the present paper is the first SN II-P study to make simultaneous use of all the data: light curve, photospheric and nebular spectra. Some implications of the low mass and kinetic energy of the SN 1997D ejecta for the systematics of CCSN, the explosion mechanism, and the galactic population of supernova remnants (SNR) are discussed in the final section. Below we adopt for SN 1997D the dust extinction $`A_B=0.0`$ mag and the distance 13.43 Mpc following Turatto et al. (tmy98 (1998)).
## 2 Photospheric epoch
### 2.1 The velocity at photosphere
According to general results of hydrodynamical simulations of SNe II-P the radiation cooling of the expanding envelope at the plateau phase proceeds in a specific regime of the cooling recombination wave (Grassberg et al. gin71 (1971)). As a result the photosphere in a SN II-P resides at the well defined jump between the almost completely recombined (ionization degree $`10^{-4}`$) transparent atmosphere and the fully ionized sub-photospheric layers of high opacity. The velocity at the photosphere determined from the observed scattering line profiles during the photospheric epoch (plateau) thus gives us the position of the cooling recombination wave and therefore is of vital importance for constraining the parameters of the hydrodynamical model.
To measure the photospheric velocity in the Jan. 17 spectrum of SN 1997D we concentrated on the 5600–6700 Å band which contains strong, clearly-cut spectral lines (Fig. 1) of H, Na I, Ba II, Fe II, and Sc II. Most of them are well observed in other SNe II-P. However, due to the low expansion velocity it is possible to distinguish here some spectral features never observed before, e.g. the Sc II 6605 Å line (Fig. 1). The Monte Carlo technique used for modelling the spectrum (Fig. 1) assumes an absorbing photosphere and a line scattering atmosphere (Schwarzschild-Schuster model). In total 19 lines are included for this spectral range. The Sobolev optical depth was computed assuming the analytical density distribution in the envelope
$$\rho =\rho _0\left[1+(v/v_\mathrm{k})^n\right]^{-1}$$
(1)
which corresponds to a plateau at velocities $`v<v_\mathrm{k}`$ and a steep slope $`\rho \propto v^{-n}`$ ($`n\approx 8`$) in the outer layers at $`v>v_\mathrm{k}`$. Parameters $`\rho _0`$ and $`v_\mathrm{k}`$ are defined by the ejecta mass $`M`$, kinetic energy $`E`$, and index $`n`$. The case shown in Fig. 1 is characterized by $`M=6M_{\odot }`$, $`E=1.2\times 10^{50}`$ erg, and $`n=8`$, although one may easily fit the spectrum using higher mass and higher energy. Since we do not solve the full problem of radiation transfer in the ultraviolet, we assume that metals are singly ionized and find level populations assuming an appropriate excitation temperature ($`\approx 4200`$ K). For the standard abundance assumed here, the Sc II 6605 Å line is too strong; therefore its abundance is reduced by a factor of two. It may well be that the odd behavior of the Sc II line reflects different excitation conditions for Sc II and other metals rather than the abundance pattern. We found that hydrogen excitation has to be cut beyond $`v=1400`$ km s<sup>-1</sup> to prevent washing out the 6500 Å peak. Close to the photosphere, within the layer $`\mathrm{\Delta }v=300`$ km s<sup>-1</sup>, the net emission in H$`\alpha `$ is comparable to its scattering component. We simulated this emission assuming a line scattering albedo greater than unity.
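For the profile of Eq. (1) the normalization follows from $`M`$ and $`E`$ in closed form, since $`\int _0^\infty u^m/(1+u^n)du=(\pi /n)/\mathrm{sin}[(m+1)\pi /n]`$. The short sketch below (our illustration, with the model values just quoted and the epoch of 46 days found in Section 2.4) recovers $`v_\mathrm{k}`$ and $`\rho _0`$:

```python
# Recover rho_0 and v_k of Eq. (1) from M, E, n at expansion time t,
# using M = 4*pi*t**3*rho_0*v_k**3*I2 and E = 2*pi*t**3*rho_0*v_k**5*I4,
# with I_m = integral_0^inf u**m/(1+u**n) du = (pi/n)/sin((m+1)*pi/n).
import math

M_SUN = 1.989e33                      # g
M, E, n = 6 * M_SUN, 1.2e50, 8        # model of Fig. 1
t = 46 * 86400.0                      # s, age of the Jan. 17 spectrum

I2 = (math.pi / n) / math.sin(3 * math.pi / n)
I4 = (math.pi / n) / math.sin(5 * math.pi / n)

v_k = math.sqrt(2 * (E / M) * I2 / I4)          # cm/s
rho_0 = M / (4 * math.pi * t**3 * v_k**3 * I2)  # g/cm^3

print(v_k / 1e5, rho_0)   # ~1.4e3 km/s -- cf. the 1400 km/s cutoff above
```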
In spite of its simplicity the model is appropriate for a confident estimate of the photospheric velocity, which was found to be $`v_\mathrm{p}=900`$ km s<sup>-1</sup> (Fig. 1) with an uncertainty less than 100 km s<sup>-1</sup>. The value of $`v_\mathrm{p}=970`$ km s<sup>-1</sup> reported by Turatto et al. (tmy98 (1998)) is consistent with the above estimate. Our choice was a compromise between two possibilities: (1) washing out many observed features in the spectrum if $`v_\mathrm{p}\gtrsim 1000`$ km s<sup>-1</sup>, and (2) producing significant excess in the emission components of the Na I D<sub>1,2</sub> and Ba II 6142 Å lines if $`v_\mathrm{p}\lesssim 800`$ km s<sup>-1</sup>. The value $`v_\mathrm{p}=900`$ km s<sup>-1</sup>, although optimal, still leads to some extra emission in the Na I D<sub>1,2</sub>, Ba II 6142 Å and Fe II 6249 Å lines (Fig. 1). Preliminary analysis indicated that this drawback of the model may be overcome if Rayleigh scattering on neutral hydrogen is taken into account.
### 2.2 Rayleigh scattering effects
Rayleigh scattering on neutral hydrogen in the optical dominates over Thomson scattering at an extremely low ionization degree $`x<0.001`$, which is the case for SN II-P atmospheres at the photospheric epoch. To get an idea of the role of Rayleigh scattering in the spectrum of SN 1997D we adopt the analytical density profile given by Eq.(1) with a power index $`n=8`$. Let us first estimate the Rayleigh optical depth $`\tau _\mathrm{R}`$ using the cross-section by Gavrila (g67 (1967)) and assuming conditions of the atmosphere of a normal SN II-P (e.g. SN 1987A) for two extreme cases: completely mixed and unmixed envelopes. Assuming for SN 1987A $`E=1.1\times 10^{51}`$ erg, $`M=15M_{\odot }`$, a helium/metal core mass $`M_\mathrm{c}=4.2M_{\odot }`$, and $`v_\mathrm{p}=2600`$ km s<sup>-1</sup> at the age $`t=50`$ d (Woosley w88 (1988); Shygeyama & Nomoto sn90 (1990); Utrobin viu93 (1993)) one gets at the wavelength 6142 Å (Ba II line) a Rayleigh optical depth of 0.07 (mixed) and 0.1 (unmixed).
SN 1997D is essentially different in that respect. Adopting the ejecta model by Turatto et al. (tmy98 (1998)), viz. total mass $`M=24M_{\odot }`$, helium/metal core mass $`M_\mathrm{c}=6M_{\odot }`$, $`v_\mathrm{p}=900`$ km s<sup>-1</sup> at the expansion time $`t=50`$ d (the epoch of Jan. 17), one finds the Rayleigh optical depth in the range 1.3–1.8 at $`\lambda =6142`$ Å, more than one order of magnitude exceeding that in normal SNe II-P. For the model with parameters scaled down by a factor of four ($`M=6M_{\odot }`$, helium/metal core mass $`M_\mathrm{c}=1.5M_{\odot }`$), one obtains $`0.33<\tau _\mathrm{R}<0.44`$. Our study showed that such values cannot be ignored in modelling line profiles.
Moreover, to treat Rayleigh scattering in an adequate way one has to abandon the assumption of a fully absorbing photosphere and instead include a diffuse reflection of photons from the photosphere. We describe the diffuse reflection by a plane albedo $`A(\mu ,ϵ)`$ which is a function of the cosine $`\mu `$ of the incidence angle and of the thermalization parameter $`ϵ=k_\mathrm{a}/(k_\mathrm{a}+k_\mathrm{s})`$. Here $`k_\mathrm{a}`$ is the absorption coefficient and $`k_\mathrm{s}`$ is the scattering coefficient. In the approximation of isotropic scattering the plane albedo reads
$$A(\mu ,ϵ)=1-\varphi (\mu ,ϵ)\sqrt{ϵ}$$
(2)
where the function $`\varphi (\mu ,ϵ)`$ is defined by the integral equation (cf. Sobolev s75 (1975))
$$\varphi (\mu ,ϵ)=1+\frac{1}{2}(1-ϵ)\mu \varphi (\mu ,ϵ)\int _0^1\frac{\varphi (\mu _1,ϵ)}{\mu +\mu _1}d\mu _1,$$
(3)
which was solved numerically to create a table of $`A(\mu ,ϵ)`$.
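Eq. (3) is the classical Ambartsumian–Chandrasekhar $`H`$-function equation for single-scattering albedo $`1-ϵ`$; a minimal numerical sketch of one possible scheme (ours, offered only for illustration) is a fixed-point iteration on a quadrature grid:

```python
# Solve Eq. (3) for phi(mu, eps) by fixed-point iteration on a
# Gauss-Legendre grid over [0, 1], then form the plane albedo of Eq. (2).
import numpy as np

def plane_albedo(eps, nodes=48, iters=300):
    x, w = np.polynomial.legendre.leggauss(nodes)
    mu = 0.5 * (x + 1.0)                     # map [-1, 1] -> [0, 1]
    w = 0.5 * w
    phi = np.ones_like(mu)
    for _ in range(iters):
        integral = ((w * phi) / (mu[:, None] + mu[None, :])).sum(axis=1)
        phi = 1.0 / (1.0 - 0.5 * (1.0 - eps) * mu * integral)
    return mu, 1.0 - phi * np.sqrt(eps)      # Eq. (2)

mu, A = plane_albedo(0.3)
print(A.max(), A.min())   # albedo decreases toward normal incidence (mu -> 1)
```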
In the absence of Rayleigh scattering, a non-zero albedo for $`ϵ=0.3`$ slightly (by 4%) increases the intensity of the emission component compared to the purely absorbing photosphere (Fig. 2). The difference obviously becomes larger for a smaller thermalization parameter. Rayleigh scattering significantly decreases the emission component due to backscatter and subsequent absorption of photons by the photosphere in the case of $`ϵ=0.3`$ and $`\tau _\mathrm{R}=1`$. Another effect of Rayleigh scattering is the washing out of the absorption trough by continuum photons drifting from blue to red; this effect is especially pronounced for weak lines and is of minor importance for strong lines. This modelling shows how the emission excess in the Na I and Ba II lines (Fig. 1) may be suppressed.
Apart from Rayleigh scattering and diffuse reflection by the photosphere we made two other essential modifications to our Monte Carlo model of line formation. First, we took electron scattering into account. The electron density distribution is recovered from the H$`\alpha `$ line profile using a two-level plus continuum approximation. Second, we calculated the populations of the three lowest levels of Ba II using the observed flux in the spectrum on Jan. 17. This approximation is fairly good in analyzing the blue side of the absorption trough of the Ba II 6142 Å line. We adopted the standard barium abundance (Grevesse & Sauval gs98 (1998)) and the Ba II fractional ionization $`n(\mathrm{BaII})/n(\mathrm{Ba})=1`$. The latter seems to be a good approximation for the outer layers of SN 1987A at the stage when strong Ba II lines are present (Mazzali et al. mlb92 (1992)).
With the modified Monte Carlo model the synthetic spectrum is calculated for two relevant cases: a high-mass model with parameters $`M=24M_{\odot }`$ and $`M_\mathrm{c}=6M_{\odot }`$ (Fig. 3a) and a low-mass model with parameters $`M=6M_{\odot }`$ and $`M_\mathrm{c}=1.5M_{\odot }`$ (Fig. 3b). Note that both models have the same photospheric velocity $`v_\mathrm{p}=900`$ km s<sup>-1</sup> and the same ratio $`E_{50}/M=1/6`$, where $`E_{50}`$ is the kinetic energy in units of $`10^{50}`$ erg and $`M`$ is in $`M_{\odot }`$. Complete mixing, which implies a minimum Rayleigh optical depth, gives $`\tau _\mathrm{R}=1.3`$ and $`0.33`$ for the high and low-mass models, respectively. The thermalization parameter $`ϵ`$ in the sub-photospheric layers is 0.35 and 0.24 for the high and low-mass models, respectively. In the high-mass model Rayleigh scattering suppresses the emission components of Na I D<sub>1,2</sub>, Ba II 6142 Å, and the Ba II/Fe II peak at 6500 Å down to an unacceptably low level (Fig. 3a). The low-mass case fits the observations fairly well (Fig. 3b). Computations of spectra for different values of the Rayleigh optical depth led us to conclude that the tolerated upper limit is 0.6. Yet the Rayleigh optical depth cannot be lower than $`0.3`$, otherwise the emission in the Na I D<sub>1,2</sub> and Ba II 6142 Å lines becomes too strong. We find the optimal value to be $`\tau _\mathrm{R}=0.45`$, with an uncertainty of about 0.15, for the Jan. 17 spectrum.
### 2.3 Diagnostics of ejecta mass and kinetic energy
The observational limitations upon the Rayleigh optical depth in the atmosphere of SN 1997D may be combined with the restriction on the density in the outer layers imposed by the blue absorption edge of Ba II 6142 Å in order to constrain the ejecta mass and kinetic energy. The idea may be illustrated using a toy model, in which the supernova envelope is represented by a homogeneous sphere with boundary velocity $`v_0`$. Given the photospheric velocity and the Rayleigh scattering optical depth one finds the product $`\rho v_0t`$, whereas the blue edge of the Ba II 6142 Å absorption gives the outer velocity $`v_0`$. For a specific phase $`t`$ one then gets the ejecta mass $`M=(4\pi /3)\rho (v_0t)^3`$ and kinetic energy $`E=(3/10)Mv_0^2`$.
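In numbers (our illustration; the density below is only an indicative value of the right order, not a measurement):

```python
# Toy homogeneous-sphere estimate: given the boundary velocity v0 (from
# the blue edge of the Ba II 6142 A absorption), the expansion time t,
# and a density rho of the order implied by the Rayleigh optical depth,
# form M = (4*pi/3)*rho*(v0*t)**3 and E = (3/10)*M*v0**2.
import math

M_SUN = 1.989e33          # g
t = 46 * 86400.0          # s
v0 = 1.4e8                # cm/s
rho = 1.7e-11             # g/cm^3, illustrative value only

M = (4 * math.pi / 3) * rho * (v0 * t)**3
E = 0.3 * M * v0**2
print(M / M_SUN, E)       # ~6 solar masses and ~1e50 erg
```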
In practice we used the more realistic density profile given by Eq.(1) with a power index $`n=8`$. In this case, as for the simple model considered above, one can find, accepting a certain ejecta mass, the corresponding value of the kinetic energy compatible with the blue edge of the Ba II 6142 Å absorption in the SN 1997D spectrum on Jan. 17. Again, we adopted a standard barium abundance with the Ba II ion as the dominant ionization state. Variation of the model mass under the condition that the Ba II 6142 Å absorption is reproduced results in the corresponding variation of the kinetic energy. Taking into account uncertainties of the Ba II 6142 Å absorption fit we found a region of allowed parameters (“barium” strip) in the mass–kinetic energy ($`M`$–$`E`$) plane (Fig. 4). The lower and upper limits of the Rayleigh optical depth, 0.3 and 0.6, respectively, produce another strip of allowed parameters (“Rayleigh” strip) in this plane. The overlap of the “barium” and “Rayleigh” strips gives a tetragonal region where the ejecta mass and kinetic energy of SN 1997D are confined. One sees that optimal values of the ejecta mass should reside around $`M\approx 6M_{\odot }`$, while the kinetic energy should be close to $`E\approx 10^{50}`$ erg.
The suggested diagnostics, though hardly useful for ordinary SNe II-P, proved efficient for constraining the parameters of SN 1997D. A warning should be kept in mind that a cosmic barium abundance was assumed here. This may in general not be the case, since SN 1987A demonstrates that the barium overabundance in SNe II-P may be as large as a factor of two relative to the cosmic value (Mazzali et al. mlb92 (1992)). If the barium abundance in the SN 1997D ejecta is twice the cosmic value, then the “barium” strip in the $`M`$–$`E`$ plane has to be shifted down by a factor $`\approx 1.3`$ towards lower values of the kinetic energy. It is remarkable that this diagnostics does not depend on the supernova distance. However, there is a weak dependence on reddening via the colour temperature determined from the 4500 Å/6140 Å flux ratio, which affects the Ba II excitation. Unaccounted-for reddening leads to an overestimate of the mass obtained from the Ba II line.
### 2.4 Light curve
The light curve of a SN II-P during the plateau phase is determined by the ejecta mass $`M`$, kinetic energy $`E`$, pre-SN radius $`R_0`$, the structure of the outer layers, the <sup>56</sup>Ni mass and its distribution, and the chemical composition of the envelope (Grassberg et al. gin71 (1971); Utrobin viu89 (1989), viu93 (1993)). The <sup>56</sup>Ni mass in SN 1997D is reliably measured by the light curve tail. The structure of the outer layers of the pre-SN may normally be recovered from the initial phase of the light curve, which was unfortunately missed in the case of SN 1997D. Therefore we used a standard pre-SN density structure with a polytropic index of three, though models with other density structures were also tried. The abundance of the deeper part of the envelope, e.g., the transition region between the H-rich envelope and the metal/helium core, affects the final stage of the photospheric regime and may therefore be probed by the light curve at the end of the plateau phase. In general, the parameters $`M`$, $`E`$, and $`R_0`$ may then be found from the plateau phase duration, the luminosity at the plateau phase (e.g., in the $`V`$ band), and the velocity at the photospheric level. In a situation when the plateau phase duration is unknown, the optimal Rayleigh optical depth in the atmosphere ($`\tau _\mathrm{R}\approx 0.45\pm 0.15`$) provides the missing constraint. The description of the radiation hydrodynamics code used for the supernova study may be found elsewhere (Utrobin viu93 (1993), viu96 (1996)).
An extended grid of hydrodynamical models of SN 1997D led us to the conclusion that the requirements imposed by the $`V`$ light curve, the velocity at the photosphere, and the Rayleigh optical depth are consistent with those estimated above from the $`M`$–$`E`$ diagram. The optimal hydrodynamical model is characterized by the following parameters: ejecta mass $`M=6M_{\odot }`$, kinetic energy $`E=10^{50}`$ erg, and pre-SN radius $`R_0=85R_{\odot }`$. To prevent the emergence of a luminosity spike at the end of the plateau phase and to explain the narrow peak of the H$`\alpha `$ emission in the nebular spectrum on day $`\approx 300`$, we suggest mixing between the helium layer and the H-rich envelope (Fig. 5). The adopted helium/metal core mass before mixing is $`M_\mathrm{c}=1.5M_{\odot }`$. With 0.002 $`M_{\odot }`$ of radioactive <sup>56</sup>Ni this choice of parameters results in a $`V`$ light curve which fits the observational data (Turatto et al. tmy98 (1998)) and is consistent with the observational upper limits by Evans at early epochs (Fig. 6). The velocity at the photosphere in this model is 830 km s<sup>-1</sup>, in agreement with that found from the spectrum synthesis. Remarkably, we obtained the same $`E/M`$ ratio as Turatto et al. (tmy98 (1998)). In our model the first spectrum on Jan. 17 corresponds to the epoch of 46 days after the explosion, in good agreement with the 50 days found by Turatto et al. (tmy98 (1998)).
The diffusion approximation used in the hydrodynamical model breaks down at the transition from the plateau to the radioactive tail at about $`t\approx 65`$ d. To reproduce the tail, we translated the bolometric luminosity computed in the hydrodynamical model into the $`V`$ band luminosity using two assumptions about the spectrum of the escaping radiation at the tail phase. The first assumes that the spectrum is a black body with the constant effective temperature calculated at $`t=65`$ d. This gives a somewhat higher $`V`$ luminosity compared to observations at the tail stage (Fig. 6). An alternative approach assumes that the spectrum of the escaping radiation during the tail phase is the same as in the observed nebular spectrum at $`t\approx 150`$ d (Turatto et al. tmy98 (1998)). The latter assumption is more realistic and provides a good fit to observations (Fig. 6). This agreement justifies the adopted <sup>56</sup>Ni mass of 0.002 $`M_{\odot }`$ originally obtained by Turatto et al. (tmy98 (1998)).
The envelope structure computed in the hydrodynamical model was then used to recalculate the synthetic spectrum in a way similar to that described in Section 2.2. The model spectrum agrees well with the observed spectrum on Jan. 17 (Fig. 7). Of particular importance is the excellent fit to the emission component of the Na I 5889, 5896 Å doublet, which is free of blending and is thus a reliable probe of the Rayleigh optical depth in the atmosphere.
Analyzing hydrodynamical models with different sets of input parameters gives us confidence that the envelope mass and the pre-SN radius are determined with uncertainties of about 1 $`M_{\odot }`$ and $`10R_{\odot }`$, respectively. Therefore, we estimate the ejected mass as $`6\pm 1M_{\odot }`$, with the invariant ratio $`E_{50}/M=1/6`$, and the radius of the pre-SN as $`85\pm 10R_{\odot }`$.
Dust extinction in the host galaxy (NGC 1536) cannot be ruled out. It is unlikely to be significant, however, since the galaxy is nearly face-on. With some dust extinction (if any) the kinetic energy and/or the pre-SN radius should be increased accordingly. For instance, a dust extinction $`A_V=0.1`$ mag suggests a $`13\%`$ increase of the kinetic energy.
## 3 Nebular phase
### 3.1 Nebular model
The high quality late-time spectrum of SN 1997D at the nebular epoch $`t\approx 300`$ d (Turatto et al. tmy98 (1998)) gives us an opportunity for a complementary test of the ejecta model. Our goal here is to reproduce all the strong lines observed in the spectrum, viz. H$`\alpha `$, \[O I\] 6300, 6364 Å, and \[Ca II\] 7291, 7324 Å, using the density distribution of the hydrodynamical model. We assume that the ejected envelope consists of two distinctive regions: a core and an external H-rich envelope. The core in the nebular model is a macroscopic mixture of radioactive <sup>56</sup>Ni, H-rich matter (component A), He-rich matter (component B), O-rich matter (component C), and the rest of the metals, e.g., C, Ne, Mg (component D). The latter does not contribute noticeably to the lines we address. The density of the A, B, and D components is equal to the model local density $`\rho `$, while the O-rich matter of density $`\rho _\mathrm{O}`$ may be clumpy with the density contrast $`\chi _\mathrm{O}=\rho _\mathrm{O}/\rho `$. Voids arising from the oxygen clumpiness are presumably filled in by the <sup>56</sup>Ni bubble material.
The average gamma-ray intensity was calculated using a formal solution of the transfer equation with the known distribution of <sup>56</sup>Ni and assuming the absorption approximation with the absorption coefficient $`k=0.03`$ cm<sup>2</sup> g<sup>-1</sup>. The fraction of deposited energy lost by fast electrons on heating and ionization of hydrogen, helium, and oxygen was taken from Kozma & Fransson (kf92 (1992)). The rate of nonthermal excitation and ionization of helium in the H-rich matter was added to the hydrogen ionization rate to take account of hydrogen ionization by UV radiation produced by helium nonthermal excitation and ionization. Due to this process the H$`\alpha `$ intensity is insensitive to the He/H ratio. The photoionization of hydrogen from the second level by hydrogen two-photon radiation and by the Balmer continuum was taken into account as well. The Balmer continuum radiation consists of the recombination hydrogen continuum and the rest of the ultraviolet radiation created by the radiation cascade of the deposited energy of radioactive <sup>56</sup>Co. This additional component of the Balmer continuum radiation was specified assuming that a fraction $`p`$ of the deposited energy is emitted in the Balmer continuum with the spectrum $`j_\nu \propto \nu ^{-2}`$. We adopted $`p=0.2`$ according to estimates by Xu et al. (xu92 (1992)) for SN 1987A at the nebular epoch.
In the thermal balance only the principal coolants are included: hydrogen lines, C I 2967 Å, 4621 Å, 8727 Å, 9849 Å, Mg II 2800 Å, \[O I\] 6300, 6364 Å, \[Ca II\] 3945 Å, 7300 Å, Ca II 8600 Å, and Fe II lines. For the sake of simplicity the total Fe II cooling rate of permitted and semi-forbidden lines is assumed to be equal to the cooling rate of one Mg II 2800 Å line. However, unlike for the real Mg II line, collisional saturation is omitted to allow photon branching in the Fe II lines. Cooling via the excitation of Fe II forbidden lines is represented by the \[Fe II\] 8617 Å and \[Fe II\] 4287 Å lines, which are the most efficient coolants for the relevant temperature and electron density. We also include adiabatic cooling; it is important in the outer region of the hydrogen envelope. Metals with a low ionization potential (Mg, Ca, Fe) are assumed singly ionized.
With a specified density distribution and <sup>56</sup>Ni mass the primary fitting parameter is the velocity at the core boundary $`v_\mathrm{c}`$, which affects line intensities via the mass of the mixed core $`M_\mathrm{c}`$ exposed to the intense gamma-rays. The amount of matter in components A, B, and C should then be determined from the spectrum fit.
### 3.2 Model test
Before applying the nebular model to SN 1997D it is instructive to compute the nebular spectrum of the well studied SN 1987A. We used the CTIO spectrum corrected for reddening at the epoch of 339 days (Phillips et al. phh90 (1990); Pun et al. pun95 (1995)). The primordial metal abundance (Z) is assumed to be 0.4 solar. The density distribution in the envelope is approximated by Eq. (1) with $`n=8.5`$. Compromise values of the ejecta mass $`M=15M_{\odot }`$ and kinetic energy $`E=1.1\times 10^{51}`$ erg are adopted (Woosley w88 (1988); Shygeyama & Nomoto sn90 (1990); Utrobin viu93 (1993)). Apart from the <sup>56</sup>Ni mass (0.075 $`M_{\odot }`$), we specify the amount of metals in the core $`M_{\mathrm{met}}=0.5M_{\odot }`$ (component D), in line with the expectations for an 18–22 $`M_{\odot }`$ progenitor (Woosley & Weaver ww95 (1995); Thielemann et al. tnh96 (1996)).
A satisfactory description of the line profiles and intensities of H$`\alpha `$, \[O I\] 6300, 6364 Å, and \[Ca II\] 7291, 7324 Å (Fig. 8) is obtained with the test model (TM) for a sound choice of parameters (Table 1). The table gives the ejecta mass ($`M`$), kinetic energy ($`E_{50}`$), the primordial-to-solar metal abundance ratio ($`Z/Z_{\odot }`$), the velocity at the outer boundary of the mixed core ($`v_\mathrm{c}`$), the oxygen density contrast ($`\chi _\mathrm{O}`$), the core mass ($`M_\mathrm{c}`$), and the other core components, viz. H-rich matter ($`M_\mathrm{H}`$), He-rich matter ($`M_{\mathrm{He}}`$), O-rich matter ($`M_\mathrm{O}`$), and metals ($`M_{\mathrm{met}}`$). All masses in Table 1 are given in solar masses. The amounts of H-rich and He-rich matter in the mixed core inside 2000 km s<sup>-1</sup> ($`2M_{\odot }`$ and $`0.7M_{\odot }`$, respectively) are in good agreement with the values advocated by Kozma & Fransson (kf98 (1998)). The rest of the newly synthesized helium ($`1M_{\odot }`$) is presumably mixed with the H-rich component. The oxygen mass and density contrast were found from the best “eye-fit” of the flux of the \[O I\] doublet. The value $`\chi _\mathrm{O}=5.5`$ corresponds to an oxygen filling factor $`0.045`$, a value earlier found by Andronova (a92 (1992)).
The oxygen mass estimate is hampered somewhat by the uncertainty arising from the poorly known fraction of oxygen cooled via CO and SiO emission. In SN 1987A the mass of cool oxygen in the CO dominated region is estimated as 0.2 $`M_{\odot }`$ (Liu & Dalgarno ld95 (1995)). With a comparable oxygen mass hidden in the SiO region we thus miss about 0.4 $`M_{\odot }`$ of oxygen. Therefore the total oxygen mass must be $`1.6M_{\odot }`$, in rough agreement both with the estimate from the HST spectrum at the nonthermal excitation phase (Chugai et al. nch97 (1997)) and with the predictions of stellar evolution models for an 18–22 $`M_{\odot }`$ progenitor (Woosley & Weaver ww95 (1995); Thielemann et al. tnh96 (1996)).
Omitting details, we conclude that the test of the nebular model in the case of SN 1987A is successful, and demonstrates that the model is able to recover reliable values of the important parameters.
### 3.3 SN 1997D: low and high-mass models
We now turn to the nebular spectrum of SN 1997D at $`t\approx 300`$ d. First, the 6 $`M_{\odot }`$ case based on the hydrodynamical model (Section 2.4) will be considered. Some refinement of the hydrodynamical model is needed, however, to apply it at the nebular epoch. The amount of metals in the mixed core is specified by assuming that the masses of metals and oxygen are equal. This is a reasonable assumption for a low-mass pre-SN. The oxygen abundance in the He-rich matter (component B) was assumed to be one tenth of the cosmic value, while a carbon abundance of 0.03 is adopted for the He-rich matter in the 6 $`M_{\odot }`$ model. We then also consider the 24 $`M_{\odot }`$ case based on the model by Turatto et al. (tmy98 (1998)) with the composition taken from Nomoto & Hashimoto (nh88 (1988)).
The low-mass nebular model of SN 1997D fits the observed spectrum fairly well (Fig. 9a) with the optimal choice of parameters represented by model M1 (Table 1). The \[Ca II\] 7300 Å profile is reproduced for a core velocity $`v_\mathrm{c}=600\pm 30`$ km s<sup>-1</sup>. This parameter is of primary importance, since it determines the absolute mass of the core components for the adopted density structure. We failed to fit the absolute flux of this line with the cosmic primordial abundance adopted for model M2 (Table 1 and Fig. 9b), while a primordial abundance of 0.3 times cosmic in model M1 provides an excellent fit. The oxygen doublet intensity is determined primarily by the mass of the O-rich matter, although some 20% comes from the He- and H-rich matter. The value of 0.035 $`M_{\odot }`$ is corrected for the unseen cool oxygen, assuming that we see 3/4 of all the pure oxygen in the \[O I\] doublet, as in SN 1987A. A strong oxygen overdensity is not required. We found that $`\chi _\mathrm{O}=1.2`$ in model M1 provides somewhat better agreement with the observed ratio of the \[O I\] doublet components than $`\chi _\mathrm{O}=1`$. The amount of He-rich matter is a lower limit to the mass of the He shell in the pre-SN. One may admit up to 1 $`M_{\odot }`$ of helium mixed microscopically with the H-rich envelope without a notable effect on the line intensities.
Unfortunately our nebular model is not applicable to the earlier nebular spectrum of SN 1997D on day 150. The reason is the significant optical depth in the Paschen continuum predicted by the model. In such a situation the multilevel statistical equilibrium must be solved together with the full radiation transfer, which is beyond the scope of this paper. Moreover, we found that the observed H$`\alpha `$ profile at this epoch is odd, exhibiting a significant redshift of unclear origin. A prima facie explanation invoking a <sup>56</sup>Ni asymmetry cannot be reconciled with the late-time nebular spectrum ($`\approx 300`$ d), which lacks such an asymmetry.
To evaluate the uncertainty related to the assumption of the same fraction (1/4) of cool O-rich gas as in SN 1987A, we compared the parameters relevant to molecule formation (density and temperature) in the O-rich matter at a similar nebular epoch. We found that the density of the O-rich gas in SN 1997D is lower, while the temperature is somewhat higher, compared to SN 1987A. Both parameters therefore suggest that the formation of molecules in SN 1997D cannot be more efficient than in SN 1987A, which means that the fraction of unseen pure oxygen in SN 1997D does not exceed that in SN 1987A. Including the uncertainty in the core velocity, the estimated range of the pure oxygen mass in SN 1997D is 0.02–0.07 $`M_{\odot }`$.
We also applied the nebular model to the high-mass case ($`M=24M_{\odot }`$). Owing to the large mass of the He/O core, the velocity of the core boundary is too high and is inconsistent with the observed \[Ca II\] doublet profile. Mixing all the freshly synthesized helium with the hydrogen envelope reduces the core velocity, but not sufficiently to resolve this controversy (Fig. 10a). Other serious problems are the excessive \[O I\] doublet flux and the incorrect doublet ratio. A five-fold reduction of the amount of line-emitting oxygen, presumably due to molecule formation and cooling, alleviates the problem of the total flux in the \[O I\] doublet. Yet the problem of the high $`I(6364)/I(6300)`$ ratio remains in this model (Fig. 10b).
Summing up, we find that the hydrodynamical model with moderate-mass ejecta ($`\approx 6M_{\odot }`$) containing a low amount of freshly synthesized oxygen (0.02–0.07 $`M_{\odot }`$) is consistent with the nebular spectra of SN 1997D. The high-mass ejecta model as such is incompatible with the observed nebular spectra.
## 4 Discussion
We attribute SN 1997D to a SN II-P event characterized by a kinetic energy $`E\approx 10^{50}`$ erg and an ejecta mass of $`6\pm 1M_{\odot }`$. The ejecta are dominated by H-rich matter and contain 0.02–0.07 $`M_{\odot }`$ of freshly synthesized oxygen. The estimated <sup>56</sup>Ni mass is about 0.002 $`M_{\odot }`$, in accordance with the value found by Turatto et al. (tmy98 (1998)). The pre-SN had a moderate radius of 85 $`R_{\odot }`$ and possibly a low primordial metallicity, 0.3 times cosmic.
At first glance the suggested low metallicity of SN 1997D disagrees with the assumed cosmic abundance of barium, which implies a relative barium overabundance by a factor of three. This should not be disturbing, however, given the example of SN 1987A, in which the relative barium overabundance is about five (Mazzali et al. mlb92 (1992)) at a metallicity comparable to that of SN 1997D. Remarkably, both the relatively small pre-SN radius and the low primordial metallicity of SN 1997D are reminiscent of SN 1987A. Possibly this reflects a trend for low-metallicity progenitors to have smaller pre-SN radii than SNe II-P of cosmic metallicity.
Combining the ejecta mass with the collapsed core mass (presumably 1.4 $`M_{\odot }`$), the total pre-SN mass amounts to 6–9 $`M_{\odot }`$ prior to the outburst. The main-sequence progenitor was likely more massive because of possible wind mass loss. In the context of the general results of stellar evolution theory, the low mass of freshly synthesized oxygen ($`<0.1M_{\odot }`$) is compelling evidence that the progenitor of SN 1997D was a main-sequence star from the 8–12 $`M_{\odot }`$ range. These stars are known to end their lives with a very low amount ($`<0.1M_{\odot }`$) of synthesized oxygen (Nomoto n84 (1984); Woosley w86 (1986)). The ejecta of CCSN produced by such stars must contain a very small amount of <sup>56</sup>Ni, significantly less than in normal CCSN (Woosley w86 (1986)), which is also in line with SN 1997D.
The fact that at least some CCSN originating from 8–12 $`M_{\odot }`$ stars have a low kinetic energy ($`E\approx 10^{50}`$ erg) and eject small amounts of <sup>56</sup>Ni ($`\approx 0.002M_{\odot }`$) modifies the picture of CCSN with a “standard” kinetic energy of $`10^{51}`$ erg and <sup>56</sup>Ni mass of 0.07–0.1 $`M_{\odot }`$. The new situation in the systematics of CCSN is visualized by the $`E`$–$`M_{\mathrm{ms}}`$ and <sup>56</sup>Ni mass–$`M_{\mathrm{ms}}`$ plots (Fig. 11), which show the position of SN 1997D along with two other well studied CCSN, SN 1987A (Woosley w88 (1988); Shigeyama & Nomoto sn90 (1990); Utrobin viu93 (1993)) and SN 1993J (Bartunov et al. bbpt94 (1994); Shigeyama et al. setal94 (1994); Woosley et al. woetal94 (1994); Utrobin viu96 (1996)). The primary significance of this plot is that the nearly constant kinetic energy ($`10^{51}`$ erg) and <sup>56</sup>Ni mass ($`\approx 0.08M_{\odot }`$) in the range of progenitor masses between $`\approx 13M_{\odot }`$ and $`\approx 20M_{\odot }`$ both drop abruptly at the low end of the massive-star range producing CCSN (around 10 $`M_{\odot }`$). It would not be unreasonable to consider SN 1997D a prototype of a new family of CCSN (referred to below as “weak CCSN”) which occupy the same place on the $`E`$–$`M_{\mathrm{ms}}`$ and <sup>56</sup>Ni mass–$`M_{\mathrm{ms}}`$ plots as SN 1997D.
Unfortunately, there are no clear theoretical predictions in regard to weak CCSN. Yet the current trends in core-collapse modelling seem to be generally consistent with Fig. 11. Two explosion mechanisms are invoked to produce CCSN: the prompt (core rebound) and the delayed (neutrino-driven) mechanism. For 8–10 $`M_{\odot }`$ progenitors the prompt mechanism attains its highest efficiency (Hillebrandt et al. hnw84 (1984)), with a kinetic energy of the ejecta of $`\approx 10^{50}`$ erg (Baron & Cooperstein bc90 (1990)), while the delayed mechanism, on the contrary, has its lowest efficiency in this mass range, yielding a similar energy of $`\approx 10^{50}`$ erg (Wilson et al. wmww86 (1986)). Thus both explosion mechanisms remain viable in the context of SN 1997D. However, possibly only the neutrino-driven mechanism is able to account for the increase of the kinetic energy with progenitor mass in the range from about 10 $`M_{\odot }`$ to $`\approx 13M_{\odot }`$ (Wilson et al. wmww86 (1986); Burrows b98 (1998)).
How frequent are SN 1997D-like phenomena? The first thought is that they are extremely rare, since among $`\approx 10^2`$ identified SN II-P events only one such case has been discovered so far. However, given the low absolute luminosity ($`\approx -14`$ mag) and the brief plateau duration (40–50 days) compared to the characteristics of normal SN II-P ($`\approx -16.5`$ mag and 80–100 days, respectively), it would not be surprising if SN 1997D-like events were as frequent as $`\approx 20\%`$ of the normal SN II-P rate. Such a rate might be maintained by progenitors from a mass interval $`\mathrm{\Delta }M_{\mathrm{ms}}\approx 1M_{\odot }`$ in the vicinity of a main-sequence mass of $`\approx 10M_{\odot }`$.
Progenitors from the 8–12 $`M_{\odot }`$ mass range were suggested earlier as counterparts of supernovae with a dense circumstellar wind, a low ejecta mass ($`\approx 1M_{\odot }`$), and possibly a normal kinetic energy (e.g. SN 1988Z; Chugai & Danziger cd94 (1994)). The present attribution of SN 1997D to the same mass range introduces some dissonance with that conjecture. In reality this controversy is not serious, since 8–12 $`M_{\odot }`$ progenitors are characterized by a very complicated evolution of their cores (Nomoto n84 (1984); Woosley w86 (1986)), and therefore a different outcome for a slightly different initial mass is quite conceivable. Moreover, it may well be that at normal metallicity the presupernova of a weak CCSN also loses mass vigorously and explodes in a dense wind, thus producing a luminous supernova (possibly a SN IIn) through the ejecta–wind interaction.
Another intriguing possibility is that the presupernova of a weak CCSN might lose its entire hydrogen envelope in a close binary system before the explosion. In this case the weak CCSN would be a mini-version of a SN Ib, with a low explosion energy, a low amount of <sup>56</sup>Ni and, eventually, a low luminosity. Unfortunately, it will not be easy to detect such events.
If weak CCSN are as frequent as $`\approx 20\%`$ of all CCSN, a good fraction of the galactic population of SNRs may be related to these supernovae. We cannot miss the opportunity to speculate that at least two historical supernovae, SN 1054 and SN 1181, might be identified with weak CCSN. Nomoto (n84 (1984)) already argued that the Crab Nebula was created by a CCSN with a progenitor mass of around 9 $`M_{\odot }`$. The luminosity of SN 1054, which was normal for a SN II, could then be explained by the interaction of the ejecta with a dense pre-SN wind. This suggestion is in fact a modification of the earlier idea that the initial phase of the light curve of SN 1054 could be powered by the shock wave propagating in the circumstellar ($`r\approx 10^{15}`$ cm) envelope (Weaver & Woosley ww79 (1979)). The second possible counterpart of a galactic weak CCSN is SN 1181. With an absolute luminosity of $`\approx -13.8`$ mag at maximum (Green & Gull gg82 (1982)) and a half-year period of visibility, SN 1181 is the closest analogue of SN 1997D. The low radial velocities ($`\approx 1100`$ km s<sup>-1</sup>) of the SN 1181 filaments claimed by Fesen, Kirshner & Becker (fkb88 (1988)) seem to strengthen this identification. If the association of SN 1181 with a weak CCSN is correct, then we expect to find very low amounts of newly synthesized oxygen and iron-peak elements in this supernova remnant.
###### Acknowledgements.
We thank Massimo Turatto for sending spectra of SN 1997D. We are grateful to Ken Nomoto, Bruno Leibundgut, Peter Lundqvist, and Wolfgang Hillebrandt for discussions and comments. N.Ch. thanks Bruno Leibundgut for hospitality at ESO and V.U. thanks Wolfgang Hillebrandt and Ewald Müller for hospitality at MPA. This work was supported in part by the RFBR (project 98-02-16404) and the INTAS-RFBR (project 95-0832).
# Structure function evolution at next-to-leading order and beyond
## 1 Introduction
One of the important objectives of studying structure functions in deep-inelastic scattering (DIS) is a precise determination of the QCD scale parameter $`\mathrm{\Lambda }`$ (i.e., the strong coupling $`\alpha _s`$) from their scaling violations. In this talk we briefly present results of two studies aiming at an improved control and a reduction of the corresponding theoretical uncertainties.
## 2 Flavour-singlet evolution in NLO
The evolution of structure functions is usually studied in terms of scale-dependent parton densities and coefficient functions. In this case the predictions of perturbative QCD are affected by two unphysical scales: the renormalization scale $`\mu _r`$ and the mass-factorization scale $`\mu _f`$. While the former is unavoidable, the latter can be eliminated by recasting the evolution equations in terms of observables. In the flavour-singlet sector, this procedure results in
$$\frac{d}{d\mathrm{ln}Q^2}(\begin{array}{c}F_2\\ F_B\end{array})=𝒫(\alpha _s(\mu _r),\frac{\mu _r^2}{Q^2})(\begin{array}{c}F_2\\ F_B\end{array})$$
(1)
with $`F_B=dF_2/d\mathrm{ln}Q^2`$ or $`F_B=F_L`$. The kernels $`𝒫`$ are combinations of splitting functions and coefficient functions which become prohibitively complicated in Bjorken-$`x`$ space at NLO. Thus Eqs. (1) are most conveniently treated using modern complex Mellin-moment techniques.
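As a toy illustration of the moment-space treatment, the sketch below evolves the pair $`(F_2,F_B)`$ in $`\mathrm{ln}Q^2`$ for a single Mellin moment, where the kernel $`𝒫`$ reduces to a 2×2 matrix; the matrix entries and initial moments are invented numbers, not the actual NLO kernels, and a constant kernel is assumed so that the matrix exponential solves Eq. (1) exactly.

```python
# Toy moment-space evolution of (F_2, F_B): d(F_2,F_B)/d ln Q^2 = P (F_2,F_B).
# For a constant 2x2 kernel the solution is a matrix exponential.
import numpy as np
from scipy.linalg import expm

P_N = np.array([[-0.20, 1.00],      # placeholder kernel for one moment N
                [0.05, -0.60]])

F0 = np.array([0.40, -0.05])        # assumed (F_2, F_B) moments at Q0^2
t = np.log(100.0 / 10.0)            # evolve from Q^2 = 10 to 100 GeV^2
print("moments at Q^2 = 100 GeV^2:", expm(P_N * t) @ F0)
```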
We have performed leading-twist NLO fits to the $`F_2^p`$ data of SLAC, BCDMS, NMC, H1, and ZEUS. Statistical and systematic errors have been added in quadrature; the normalization uncertainties have been taken into account separately. The singlet/non-singlet decomposition has been constrained by the $`F_2^n/F_2^p`$ data of NMC. The initial shapes $`F_{2,B}(x,Q_0^2)`$ are expressed via standard parametrizations for parton densities at $`\mu _f=Q_0`$.
In order to establish the kinematic region which can be safely used for fits of $`\alpha _s`$ in the leading-twist NLO framework, the lower $`Q^2`$-cut applied to the data has been varied between 3 and 30 GeV<sup>2</sup>. When the normalized momentum sum of the partons defining the $`F_{2,B}`$ initial distributions is left free, the fits with $`Q_{\mathrm{cut}}^2<10\text{ GeV}^2`$ prefer values significantly different from unity (see Fig. 1). Also shown in this figure is the $`Q_{\mathrm{cut}}^2`$-dependence of the fitted values for $`\alpha _s(M_Z)`$, now imposing the momentum sum rule. The results for $`Q_{\mathrm{cut}}^2\lesssim 7\text{ GeV}^2`$ tend to lie above the $`Q_{\mathrm{cut}}^2\ge 10\text{ GeV}^2`$ average of $`\alpha _s(M_Z)=0.114`$ (dashed line).
In Fig. 2 we display the renormalization scale dependence of the $`\alpha _s(M_Z)`$ central values for the safe choice $`Q_{\mathrm{cut}}^2=10\text{ GeV}^2`$. The conventional, but somewhat ad hoc, prescription of estimating the theoretical error by the variation due to $`0.25\le \mu _r^2/Q^2\le 4`$ results in
$$\alpha _s(M_Z)=0.114\pm 0.002_{\mathrm{exp}}{}_{-0.004}^{+0.006}|_{\mathrm{scale}}.$$
(2)
Other theoretical uncertainties are considerably smaller and can be neglected at this point. The uncertainty due to possible higher-twist contributions, for instance, can be estimated at about 1% via the target-mass effects included in the fits.
## 3 Non-singlet evolution in NNLO
The theoretical error in Eq. (2) clearly calls for NNLO analyses. The necessary contributions to the $`\beta `$-function and the coefficient functions are known. However, only partial results are available for the 3-loop terms $`P^{(2)}(x)`$ in the splitting-function expansion ($`a_s\equiv \alpha _s/4\pi `$)
$$P=a_sP^{(0)}+a_s^2P^{(1)}+a_s^3P^{(2)}+\cdots .$$
(3)
For the non-singlet part of $`F_2`$ considered here (NS<sup>+</sup>), present information comprises the lowest five even-integer moments, the full $`N_f^2`$ piece, and the most singular small-$`x`$ term.
We have performed a systematic study of the constraints imposed on $`P_{\mathrm{NS}}^{(2)+}(x)`$ by these results. Four approximations spanning the current uncertainty range are shown in Fig. 3, together with their convolutions with a typical input shape.
$`P_{\mathrm{NS}}^{(2)+}(x)`$ is well determined at $`x\gtrsim 0.15`$, with a total spread of about 15% at $`x\approx 0.3`$. At (non-asymptotically) small $`x`$ its behaviour is rather unconstrained, despite the known leading $`x\to 0`$ contribution. As the splitting functions always enter scaling violations via convolutions
$$(P\otimes f)(x)=\int _x^1\frac{\mathrm{d}y}{y}\,P(x/y)\,f(y)$$
(4)
with smooth initial distributions $`f(x)`$, the residual uncertainties are much reduced for observables over the full $`x`$-range. In the present case they prove to be negligible at $`x>0.02`$.
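To make the convolution of Eq. (4) concrete, here is a minimal numerical sketch for a smooth, non-singular toy kernel and input shape; the actual splitting functions contain plus-distributions and end-point singularities that require more careful treatment than plain quadrature.

```python
# Numerical Mellin convolution (P (x) f)(x) = int_x^1 dy/y P(x/y) f(y),
# with illustrative toy functions in place of the real P and f.
import numpy as np
from scipy.integrate import quad

def P(z):
    return (1.0 - z)**3 / z          # toy kernel, integrable on (x, 1)

def f(y):
    return y**0.5 * (1.0 - y)**3     # smooth toy input distribution

def convolution(x):
    val, _ = quad(lambda y: P(x / y) * f(y) / y, x, 1.0)
    return val

for x in (0.01, 0.1, 0.3):
    print(x, convolution(x))
```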
The net effect of the NNLO correction is finally illustrated in Fig. 4, where the scale-derivative of $`F_2^{\mathrm{NS}}`$ is shown for $`\mu _r=Q`$ and $`N_f=4`$, using an $`\alpha _s`$-value typical for the fixed-target region. The inclusion of this correction into fits is expected to lead to a slightly lower central value for $`\alpha _s`$ and a considerably reduced theoretical uncertainty.
## 4 Summary and outlook
We have analyzed present $`ep/\mu p`$ $`F_2`$-data in a factorization-scheme independent framework. We find that $`Q^2,W^2>10\text{ GeV}^2`$ is a safe region for leading-twist NLO fits of $`\alpha _s`$. Our central value is close to that of the standard pre-HERA analysis, but lower than a recent result obtained using a lower $`Q^2`$-cut of 2 GeV<sup>2</sup>. The irreducible renormalization-scale uncertainty turns out to be larger than previously expected.
We have derived approximate $`x`$-space expressions for the 3-loop non-singlet splitting functions $`P_{\mathrm{NS}}^{(2)}`$, including error estimates. This approach is complementary to, but more flexible than, the integer-moment procedures pursued elsewhere. The remaining uncertainties of $`P_{\mathrm{NS}}^{(2)}`$ are small for the evolution at $`x>10^{-2}`$, thus allowing for detailed NNLO analyses in this region. An extension to the singlet case is in preparation.
# Dust-obscured star formation and AGN fuelling in hierarchical models of galaxy evolution
## 1 Introduction
The history of star formation in dusty galaxies was recently discussed by Blain et al. (1999c), who assumed that the distant galaxies recently detected using the 450/850-$`\mu `$m Submillimetre Common-User Bolometer Array (SCUBA) camera (Holland et al. 1999) were the high-redshift counterparts of local ultraluminous IRAS galaxies. The global star formation rate (SFR) in dust-obscured galaxies was inferred to be significantly greater than that of optically selected high-redshift galaxies (Steidel et al. 1996a,b, 1999), subject to the uncertain fraction of the luminosity of the submillimetre-selected samples of galaxies (Smail, Ivison & Blain 1997; Barger et al. 1998; Hughes et al. 1998; Barger et al. 1999a; Blain et al. 1999b; Eales et al. 1999) that is produced by accretion processes in active galactic nuclei (AGN). A fraction of at most 30 per cent, and more likely 10–20 per cent, is suggested by both follow-up observations (Frayer et al. 1998; Ivison et al. 1998; Smail et al. 1998; Barger et al. 1999b; Frayer et al. 1999; Lilly et al. 1999), and information derived in other wavebands; see section 5.4 of Blain et al. (1999c), Almaini, Lawrence & Boyle (1999) and Gunn & Shanks (1999). Using a different approach, in which the high-redshift SCUBA population is decoupled from the local infrared-luminous galaxies, Trentham, Blain & Goldader (1999) were able to reconcile the SCUBA counts with a less dramatic amount of obscured star-formation activity. Using another empirical approach, Tan, Silk & Balland (1999) derived results somewhere between the two. A summary of the existing data on the history of star formation is presented in Fig. 1.
Although well constrained, and in accord with the available observational data, the models in Blain et al. (1999c) and Trentham et al. (1999) included few details of the physical origin of the large luminosity of SCUBA galaxies. Semi-analytic models of hierarchical galaxy formation, in which galaxies assemble by the merger of progressively larger subunits (Cole et al. 1994; Baugh et al. 1998; Kauffmann & Charlot 1998; Somerville, Primack & Faber 1999) have been used to account for a wide range of observations in the optical and near-infrared wavebands, and have been extended into the far-infrared and submillimetre wavebands by Guiderdoni et al. (1998). These models involve a large number of free parameters, and the interplay between them can make it difficult to identify the most important physics responsible for a particular observation. In this paper we develop a model of infrared-luminous galaxies in a simple version of such a scenario (Blain & Longair 1993a,b; Jameson, Longair & Blain 1999), which includes many fewer parameters and hopefully makes the astrophysics more transparent. We attempt to reproduce the SCUBA counts by invoking bright dust-enshrouded bursts of either star formation activity or AGN fuelling at the epochs of mergers.
Motivation for considering the SCUBA galaxies as luminous mergers is provided by both the optical identifications of the Smail et al. (1998) sample, which appear to contain a large fraction of interacting galaxies, and the gas consumption rate that is inferred from observations of CO emission of two submillimetre-selected galaxies made using the Owens Valley Millimeter Array (Frayer et al. 1998, 1999), which cannot be sustained for more than a few $`10^8`$ yr. Even the faint and compact counterparts listed in Smail et al. (1998) could be merging galaxies, but too faint to identify as such; see the simulations of the appearance of high-redshift mergers in Bekki, Shioya & Tanaka (1999). If the SCUBA galaxies are the high-redshift counterparts of the low-redshift ultraluminous infrared galaxies, which are predominantly merging systems, then this also offers support for modeling the SCUBA galaxies as mergers. Using our simple model, we emphasise the most important features and the underlying physics of the evolution of submillimetre-selected galaxies and their relationship to the population of quiescent galaxies.
In Section 2 we describe the details of the model, and investigate the constraints imposed by the intensity of the far-infrared and submillimetre-wave background radiation and the counts of low-redshift IRAS galaxies. We discuss the evolution of the luminosity density in the model and compare the models with observations in the same way as the evolving IRAS luminosity function models discussed by Blain et al. (1999c). In Section 3 we discuss the predictions in the context of source counts in the submillimetre and far-infrared wavebands, and investigate whether the SCUBA galaxies can easily be explained in an hierarchical picture of galaxy formation and evolution. In Section 4 the corresponding background radiation intensities and galaxy counts in the near-infrared and optical wavebands are discussed. In Section 5 we review the parameters we have introduced to describe the models. We present our main conclusions in Section 6. A value of Hubble’s constant $`H_0=100h`$ km s<sup>-1</sup> Mpc<sup>-1</sup>, with $`h=0.5`$, a density parameter $`\mathrm{\Omega }_0=1`$ and a cosmological constant $`\mathrm{\Omega }_\mathrm{\Lambda }=0`$ are assumed.
## 2 An analytic hierarchical picture
The evolution of galaxy-scale structures under gravity according to hierarchical clustering models can be analysed using the Press–Schechter formalism (Press & Schechter 1974), which describes the time-dependent mass spectrum of bound objects. The analytic results of the Press–Schechter formalism are in quite acceptable agreement with those of N-body simulations (Brainerd & Villumsen 1992). The formalism can be extended to yield a very straightforward semi-analytic merger rate, under the single assumption that the process of halo mergers is independent of mass (Blain & Longair 1993a,b).
### 2.1 The Press–Schechter Formalism
According to the Press–Schechter prescription, the mass spectrum of bound objects with masses between $`M`$ and $`M+\mathrm{d}M`$ is
$$N_{\mathrm{PS}}(M,z)=\frac{\overline{\rho }}{\sqrt{\pi }}\frac{\gamma }{M^2}\left(\frac{M}{M^{*}}\right)^{\gamma /2}\mathrm{exp}\left[-\left(\frac{M}{M^{*}}\right)^\gamma \right],$$
(1)
in which $`\overline{\rho }`$ is the smoothed comoving density of the Universe, dominated by dark matter, $`\gamma =(3+n)/3`$, where $`n`$ is the power-law index of primordial density fluctuations, and $`M^{*}(z)`$ is a parameter which describes the evolution of density fluctuations as a function of redshift $`z`$:
$$M^{*}(z)=M^{*}(0)\left[\frac{\delta (z)}{\delta (0)}\right]^{2/\gamma }.$$
(2)
$`\delta (z)`$ is the function describing the growth of perturbations in a general cosmology, and is derived from the equation,
$$\ddot{\delta }+2\frac{\dot{R}}{R}\dot{\delta }-\frac{4\pi G\overline{\rho }}{R^3}\delta =0,$$
(3)
in which $`R`$ is the scalefactor of the Universe. In an Einstein–de Sitter model, the growing mode has $`\delta \propto (1+z)^{-1}`$. $`M^{*}(0)`$ is the typical mass of bound objects at $`z=0`$. Inhomogeneities do not grow if $`\gamma \le 0`$, that is if $`n\le -3`$. For scale-invariant density fluctuations, $`n=1`$ or $`\gamma =4/3`$. This is close to the value observed on the largest scales from the cosmic microwave background radiation (CMBR). Observations of large-scale structure indicate that $`n\approx -1.5`$, or $`\gamma \approx 1/2`$, on the smallest scales (Peacock & Dodds 1994), which can be associated with the transfer function between a primordial $`n=1`$ spectrum and the spectrum after recombination.
Using the Tully–Fisher relation (Hudson et al. 1998), Blain, Möller & Maller (1999) obtained a value $`M^{*}(0)=3.6\times 10^{12}`$ $`M_{\odot }`$. The exact value of $`M^{*}(0)`$ is not very important here, as a mass-to-light ratio is introduced to convert the mass spectrum into a luminosity function.
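A short numerical sketch of Eqs. (1) and (2) follows, using the Einstein–de Sitter growth law $`\delta \propto (1+z)^{-1}`$ quoted above, so that $`M^{*}(z)=M^{*}(0)(1+z)^{-2/\gamma }`$. The mean density corresponds to $`\mathrm{\Omega }_0=1`$ with $`h=0.5`$, and $`n=-1`$ ($`\gamma =2/3`$) is taken, the value adopted later in the text.

```python
# Press-Schechter mass spectrum, Eq. (1), with M*(z) from Eq. (2) in
# an Einstein-de Sitter model (Omega_0 = 1, h = 0.5).
import numpy as np

G, MSUN, MPC = 6.674e-8, 1.989e33, 3.086e24      # cgs
H0 = 50.0 * 1.0e5 / MPC                          # 50 km/s/Mpc in 1/s
rho_bar = 3.0 * H0**2 / (8.0 * np.pi * G) * MPC**3 / MSUN   # ~6.9e10 Msun/Mpc^3

gamma = 2.0 / 3.0                                # n = -1, gamma = (3+n)/3
M_star0 = 3.6e12                                 # Msun (Blain, Moller & Maller 1999)

def M_star(z):
    return M_star0 * (1.0 + z)**(-2.0 / gamma)

def N_PS(M, z):
    """Comoving mass spectrum of bound objects, per Msun per Mpc^3."""
    r = M / M_star(z)
    return (rho_bar / np.sqrt(np.pi)) * (gamma / M**2) \
        * r**(gamma / 2.0) * np.exp(-r**gamma)

for z in (0.0, 2.0, 5.0):
    print(z, N_PS(1e12, z))
```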
### 2.2 Deriving a merger rate
Working from the mass spectrum $`N_{\mathrm{PS}}(M,z)`$, Blain & Longair (1993a,b) showed that a formation rate of bound objects $`\dot{N}_{\mathrm{form}}(M,z)`$ in galaxy halo mergers can be constructed if the mass distribution of the components involved in a statistical sample of merger events is assumed to be independent of mass. In this case, the merger rate can be represented accurately by the function
$$\dot{N}_{\mathrm{form}}=\dot{N}_{\mathrm{PS}}+\varphi \frac{\dot{M}^{*}}{M^{*}}N_{\mathrm{PS}}\mathrm{exp}\left[(1-\alpha )\left(\frac{M}{M^{*}}\right)^\gamma \right],$$
(4)
where
$$\dot{N}_{\mathrm{PS}}=\gamma \frac{\dot{M}^{*}}{M^{*}}N_{\mathrm{PS}}\left[\left(\frac{M}{M^{*}}\right)^\gamma -\frac{1}{2}\right].$$
(5)
$`\varphi `$ and $`\alpha `$ are numerical constants, typically about 1.7 and 1.4; their exact values depend on the assumed mass distribution of merging components (Blain & Longair 1993a), but have little effect on the results. The values of both $`\varphi `$ and $`\alpha `$ are weak functions of $`\gamma `$ and depend on the world model parameters, but the form $`\varphi /\sqrt{\alpha }`$ that appears in the calculations of the background radiation intensity and metal abundance is almost independent of the value of $`\gamma `$.
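The corresponding numerical sketch of the formation rate of Eqs. (4) and (5) is given below, reusing the Press–Schechter definitions of the previous snippet. In an Einstein–de Sitter model $`\dot{\delta }/\delta =H(z)`$, so $`\dot{M}^{*}/M^{*}=(2/\gamma )H(z)`$ (anticipating eq. 8 below); $`\varphi `$ and $`\alpha `$ take the typical values just quoted.

```python
# Merger-driven formation rate, Eqs. (4)-(5), in Einstein-de Sitter.
import numpy as np

gamma, phi, alpha = 2.0 / 3.0, 1.7, 1.4
H0 = 1.62e-18                                    # 50 km/s/Mpc in 1/s
rho_bar = 6.9e10                                 # Msun/Mpc^3 (as computed above)
M_star0 = 3.6e12                                 # Msun

def M_star(z):
    return M_star0 * (1.0 + z)**(-2.0 / gamma)

def N_PS(M, z):
    r = M / M_star(z)
    return (rho_bar / np.sqrt(np.pi)) * (gamma / M**2) \
        * r**(gamma / 2.0) * np.exp(-r**gamma)

def Mdot_over_M(z):                              # (2/gamma) H(z) in EdS
    return (2.0 / gamma) * H0 * (1.0 + z)**1.5

def Ndot_form(M, z):
    r = (M / M_star(z))**gamma
    base = Mdot_over_M(z) * N_PS(M, z)
    return gamma * base * (r - 0.5) + phi * base * np.exp((1.0 - alpha) * r)

print(Ndot_form(1e12, 2.0), "mergers per Msun per Mpc^3 per second")
```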
### 2.3 Deriving observable quantities
The merger rate as a function of mass $`\dot{N}_{\mathrm{form}}`$ can be readily used to estimate a number of observable quantities, starting with the luminosity density (or volume emissivity),
$$ϵ_\mathrm{L}(z)=0.007c^2\frac{x(z)}{1-f_\mathrm{A}}\int M\dot{N}_{\mathrm{form}}\,\mathrm{d}M,$$
(6)
in which $`x(z)`$ is the ratio of the mass of baryonic matter converted into metals by nucleosynthesis in a merger-induced starburst to the total dark mass involved in the merger. The rationale behind this form of relation is given by Longair (1998). The factor of 0.007 is the approximate efficiency of conversion of mass into energy in stellar nucleosynthesis. The parameter $`f_\mathrm{A}<1`$ describes the fraction of the total luminosity of merging galaxies that is attributable to accretion in AGN, and is expected to lie in the range $`0.1\lesssim f_\mathrm{A}\lesssim 0.3`$ (Genzel et al. 1998; Lutz et al. 1998; Almaini et al. 1999; Barger et al. 1999b; Gunn & Shanks 1999). The parameter $`x(z)`$ is expected to vary with redshift $`z`$. Blain & Longair (1993b) predicted a flat background spectrum in the submillimetre and far-infrared wavebands, assuming a constant value of $`x`$. Subsequent observations (e.g. Fixsen et al. 1998) demand a redshift-dependent form of $`x(z)`$, as discussed by Blain et al. (1999c).
By evaluating the integral in equation (6), the luminosity density can be expressed as
$$ϵ_\mathrm{L}(z)=0.007c^2\frac{x(z)}{1-f_\mathrm{A}}\overline{\rho }\frac{\varphi }{\sqrt{\alpha }}\frac{\dot{M}^{*}}{M^{*}}.$$
(7)
Interestingly,
$$\frac{\dot{M}^{*}}{M^{*}}=\frac{2}{\gamma }\frac{\dot{\delta }(z)}{\delta (z)},$$
(8)
and so, because the density contrast $`\delta (z)`$ is not a function of the perturbation spectral index $`\gamma `$, the $`\gamma `$ dependence in this term is just a simple scaling. Thus the effect of the value of $`\gamma `$ on the background radiation intensity can be studied or removed very easily. In an Einstein–de Sitter model the luminosity density $`ϵ_\mathrm{L}\propto x(z)(1+z)^{3/2}`$.
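For reference, this scaling can be made explicit. In an Einstein–de Sitter model the growing mode has $`\delta \propto t^{2/3}`$, so that

$$\frac{\dot{\delta }}{\delta }=\frac{2}{3t}=H(z)=H_0(1+z)^{3/2},$$

and substituting this into equations (7) and (8) immediately gives $`ϵ_\mathrm{L}\propto x(z)(1+z)^{3/2}`$.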
The comoving density of metals produced in starbursts between a redshift $`z_0`$, at which star formation activity begins, and $`z`$, is
$$\rho _\mathrm{m}(z)=\overline{\rho }\frac{\varphi }{\sqrt{\alpha }}\int _z^{z_0}\frac{1}{c}\frac{x(z)}{(1+z)}\frac{\dot{M}^{*}}{M^{*}}\frac{\mathrm{d}r}{\mathrm{d}z}\,\mathrm{d}z,$$
(9)
where $`r`$ is the radial comoving distance coordinate. Note that this result depends on the merger efficiency parameter $`x`$ but not on the AGN fraction $`f_\mathrm{A}`$, as metals are only generated in merger-induced starbursts and not in AGN fuelling events. Similarly, the background radiation intensity per unit solid angle emitted by these galaxies, which have a spectral energy distribution (SED) $`f_\nu `$, is
$$I_\nu =\frac{1}{4\pi }\int _0^{z_0}\frac{ϵ_\mathrm{L}(z)}{1+z}\frac{f_{\nu (1+z)}}{\int f_{\nu ^{\prime }}\,\mathrm{d}\nu ^{\prime }}\frac{\mathrm{d}r}{\mathrm{d}z}\,\mathrm{d}z.$$
(10)
If the form of $`ϵ_\mathrm{L}`$ (equation 7) is included explicitly, then
$$I_\nu =\frac{0.007c^2\overline{\rho }}{4\pi (1-f_\mathrm{A})}\frac{\varphi }{\sqrt{\alpha }}\int _0^{z_0}\frac{x(z)}{1+z}\frac{\dot{M}^{*}}{M^{*}}\frac{f_{\nu (1+z)}}{\int f_{\nu ^{\prime }}\,\mathrm{d}\nu ^{\prime }}\frac{\mathrm{d}r}{\mathrm{d}z}\,\mathrm{d}z.$$
(11)
Details of the assumed dust SED can be found in Blain et al. (1999c). The mid-infrared SED is assumed to take the form $`f_\nu \propto \nu ^{-1.7}`$.
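A minimal numerical sketch of the background-intensity integral of Eq. (10) is given below for an Einstein–de Sitter model. The luminosity-density normalization and shape, and the single-temperature 40-K dust SED with emissivity index 1.5, are illustrative placeholders rather than the fitted quantities of Table 1.

```python
# Background intensity I_nu, Eq. (10), with toy eps_L(z) and a
# single-temperature modified-blackbody SED f_nu ~ nu^beta B_nu(T).
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma as Gamma, zeta

H0 = 1.62e-18                          # 50 km/s/Mpc in 1/s
C, H_P, K_B = 2.998e10, 6.626e-27, 1.381e-16
T_D, BETA = 40.0, 1.5                  # assumed dust temperature, emissivity

def f_nu(nu):                          # unnormalised dust SED
    return nu**(3.0 + BETA) / np.expm1(H_P * nu / (K_B * T_D))

# int f_nu dnu = (kT/h)^(4+beta) Gamma(4+beta) zeta(4+beta)
NORM = (K_B * T_D / H_P)**(4.0 + BETA) * Gamma(4.0 + BETA) * zeta(4.0 + BETA, 1)

def eps_L(z):                          # placeholder comoving eps_L, erg/s/cm^3
    return 1.0e-32 * (1.0 + z)**3 * np.exp(-z)

def dr_dz(z):                          # EdS comoving distance element, cm
    return (C / H0) * (1.0 + z)**-1.5

def I_nu(nu, z0=10.0):
    integrand = lambda z: eps_L(z) / (1 + z) * f_nu(nu * (1 + z)) / NORM * dr_dz(z)
    return quad(integrand, 0.0, z0)[0] / (4.0 * np.pi)

nu_850 = C / 850.0e-4                  # 850 microns
print(I_nu(nu_850), "erg s^-1 cm^-2 Hz^-1 sr^-1")
```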
None of the quantities listed above are affected by the time dependence of the release of energy during merging events; however, the source count requires the time profile of the merger induced starburst/AGN to be included. For simplicity, this profile is assumed to have a top-hat form with duration $`\sigma `$. The time profile of the luminosity generated in a detailed simulation of the merger of gas-rich galaxies is discussed by Mihos & Hernquist (1996), Bekki et al. (1999) and Mihos (1999). The typical duration of AGN fuelling events and starbursts may differ; for example, a lower limit to the duration of a starburst is set by the lifetime of the highest mass stars, but there is no lower limit to the duration of an AGN fuelling event. However, to avoid introducing an unnecessarily complicated model, the time-scale of a merger induced luminous phase is assumed to be independent of its origin. In addition, because not all the mergers of dark matter haloes that take place at each epoch need induce a starburst/AGN, a fraction $`F1`$ is assumed. Again, this fraction could differ for starbursts and AGN, but for simplicity it is assumed not to. The luminosity of a typical merger induced starburst/AGN of mass $`M`$ is thus
$$L(M,z)=0.007c^2\frac{x(z)}{1-f_\mathrm{A}}\frac{1}{F\sigma }M.$$
(12)
The source count $`N`$ of galaxies per unit solid angle brighter than a flux density $`S_\nu `$ is
$$N(S_\nu )=\int _0^{z_0}\left[\int _{M_{\mathrm{min}}}^{\mathrm{\infty }}F\sigma \dot{N}_{\mathrm{form}}(M,z)\,\mathrm{d}M\right]D^2(z)\frac{\mathrm{d}r}{\mathrm{d}z}\,\mathrm{d}z,$$
(13)
where $`D(z)`$ is the comoving distance parameter. The minimum mass merger visible at a flux density $`S_\nu `$ and redshift $`z`$ is
$$M_{\mathrm{min}}=\frac{4\pi D^2(1+z)S_\nu }{0.007c^2}\frac{F\sigma }{x(z)}(1-f_\mathrm{A})\frac{\int f_{\nu ^{\prime }}\,\mathrm{d}\nu ^{\prime }}{f_{\nu (1+z)}}.$$
(14)
The time-scale and bursting fraction parameters, $`\sigma `$ and $`F`$, always appear together in the calculations, and thus the single parameter $`F\sigma `$ is constrained by observations. We define $`(F\sigma )^{-1}`$ to be the activity parameter, which is large in violent starbursts/AGN and free to vary as a function of redshift. The rate of energy release within each individual starburst/AGN is controlled by the value of the activity parameter. Within a representative cosmological volume the presence of a population of either rare long-lived or common short-lived starbursts/AGN cannot be distinguished. This is why the time-scale and bursting fraction parameters $`\sigma `$ and $`F`$ are bound together in the activity parameter.
Incorporating redshift evolution of the activity parameter introduces another degree of freedom into the count model, in addition to that provided by the redshift evolution of the star formation/AGN fuelling efficiency parameter $`x`$. Of course, $`x`$, $`F`$ and the time-scale $`\sigma `$ are also free to vary as a function of mass. At present, we find no compelling reason to incorporate this additional complication into the model.
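To illustrate Eq. (14), the sketch below evaluates the minimum detectable merger mass for an 850-$`\mu `$m survey reaching 5 mJy in an Einstein–de Sitter model. The dust temperature, $`x`$, $`F\sigma `$ and $`f_\mathrm{A}`$ below are assumed round numbers of roughly the right order, not fitted values.

```python
# Minimum detectable merger mass, Eq. (14), at 850 um and S_nu = 5 mJy.
import numpy as np
from scipy.special import gamma as Gamma, zeta

H0, C = 1.62e-18, 2.998e10               # 50 km/s/Mpc (1/s); c (cm/s)
H_P, K_B = 6.626e-27, 1.381e-16          # cgs
T_D, BETA = 40.0, 1.5                    # assumed dust SED parameters
GYR, MSUN = 3.156e16, 1.989e33

def f_nu(nu):                            # unnormalised f_nu ~ nu^beta B_nu(T)
    return nu**(3.0 + BETA) / np.expm1(H_P * nu / (K_B * T_D))

NORM = (K_B * T_D / H_P)**(4.0 + BETA) * Gamma(4.0 + BETA) * zeta(4.0 + BETA, 1)

def D_comoving(z):                       # EdS comoving distance, cm
    return 2.0 * C / H0 * (1.0 - (1.0 + z)**-0.5)

def M_min(S_nu, nu_obs, z, x=1e-3, F_sigma=0.05 * GYR, f_A=0.2):
    D = D_comoving(z)
    L_per_Hz = 4.0 * np.pi * D**2 * (1.0 + z) * S_nu
    return (L_per_Hz * NORM / f_nu(nu_obs * (1.0 + z))
            * F_sigma * (1.0 - f_A) / (0.007 * C**2 * x)) / MSUN

S_850 = 5.0e-26                          # 5 mJy in erg s^-1 cm^-2 Hz^-1
print("M_min = %.1e Msun" % M_min(S_850, C / 850.0e-4, 2.0))
```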
### 2.4 Constraining the parameters
The background radiation intensities and source counts calculated from the equations derived above depend on a range of parameters: the world model, defined by $`H_0`$, $`\mathrm{\Omega }_0`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }`$; the perturbation spectral index $`n`$ and the value of $`M^{*}(0)`$; the constants $`\varphi `$ and $`\alpha `$ in the merger rate; the merger efficiency $`x(z)`$; the AGN fraction $`f_\mathrm{A}`$; the fraction of mergers that lead to a starburst/AGN $`F`$; their duration $`\sigma `$; and their SED $`f_\nu `$.
Blain et al. (1999c) used the low-redshift 60-$`\mu `$m IRAS source count and the 175-$`\mu `$m ISO counts to constrain their models; here, however, we use the bright 60-$`\mu `$m counts and the form of the far-infrared/submillimetre background spectrum, the two best determined observables, to constrain the parameters that describe the merger efficiency $`x(z)`$. If the SED does not depend on the mass of the merging galaxies, then the background spectrum (equation 10) is determined entirely by the form of the merger efficiency $`x(z)`$,
$$I_\nu \propto \frac{\varphi }{\gamma \sqrt{\alpha }}\int _0^{z_0}\frac{x(z)}{1-f_\mathrm{A}}\frac{\dot{\delta }(z)}{\delta (z)}\frac{1}{1+z}\frac{f_{\nu (1+z)}}{\int f_{\nu ^{\prime }}\,\mathrm{d}\nu ^{\prime }}\frac{\mathrm{d}r}{\mathrm{d}z}\,\mathrm{d}z.$$
(15)
Note that the dependence of $`I_\nu `$ on the perturbation index $`n`$ through $`\gamma `$ is completely separate from the dependence on the world model. Thus the background spectrum determined by Puget et al. (1996), Guiderdoni et al. (1997), Dwek et al. (1998), Fixsen et al. (1998), Hauser et al. (1998), Schlegel, Finkbeiner & Davis (1998) and Lagache et al. (1999) can always be used to constrain the form of the merger efficiency $`x(z)`$. We adopt a form of $`x(z)`$ identical to the ‘peak model’ described in Blain et al. (1999c), noting that the form of this equation published there contained a typographical error in the index of $`(1+z)`$:
$$x(z)=2x_0\left[1+\mathrm{exp}\frac{z}{z_{\mathrm{max}}}\right]^{-1}(1+z)^{p+(2z_{\mathrm{max}})^{-1}}.$$
(16)
This is not a uniquely appropriate functional form of $`x(z)`$. It was originally chosen to allow the star-formation history derived by Madau et al. (1996) to be fitted. Its three parameters can be manipulated to produce a wide range of plausible star formation histories. The three parameters are: $`p`$, the asymptotic low-redshift slope of the merger efficiency $`x(z)`$ in $`(1+z)`$; $`z_{\mathrm{max}}`$, the redshift above which the high-redshift exponential cut-off starts to take effect; and $`x_0`$, the value of $`x(0)`$. The epoch of most intense star-formation/AGN-fuelling corresponds to a redshift $`z\approx 5z_{\mathrm{max}}`$ in these models.
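A direct transcription of the ‘peak model’ of Eq. (16) is given below; the default parameter values are placeholder round numbers of the order of those in Table 1, meant only to illustrate the shape of $`x(z)`$ (note $`x(0)=x_0`$ by construction).

```python
# The 'peak model' merger efficiency x(z) of Eq. (16).
import numpy as np

def x_of_z(z, x0, p, z_max):
    return 2.0 * x0 / (1.0 + np.exp(z / z_max)) \
        * (1.0 + z)**(p + 1.0 / (2.0 * z_max))

z = np.array([0.0, 1.0, 2.0, 5.0])
print(x_of_z(z, x0=1e-4, p=3.0, z_max=0.6) / 1e-4)   # x(z)/x_0
```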
In Figs 2(a) and (b) the probabilities of fitting the background radiation spectrum and the slope of the 60-$`\mu `$m counts predicted from the merger efficiency $`x(z)`$, defined in equation (16), to observations are shown as a function of the key parameters $`p`$ and $`z_{\mathrm{max}}`$, as an example for a dust temperature of 45 K. In Fig. 2(c) the joint probability of fitting both sets of data is shown. Note that a constant dust temperature is assumed. The value of $`f_\mathrm{A}`$ does not affect the results. If different dust temperatures are assumed, then the position of maximum probability moves around the $`p`$–$`z_{\mathrm{max}}`$ plane. However, when the form of the evolution of luminosity density is calculated for each temperature, the curve has a similar form. The best-fitting values of $`p`$ and $`z_{\mathrm{max}}`$, and the corresponding values of the merger efficiency $`x_0`$, the activity parameter $`(F\sigma )_0^{-1}`$ and the density parameter in metals at $`z=0`$, $`\mathrm{\Omega }_\mathrm{m}`$, are presented in Table 1 for four plausible values of the dust temperature: 35, 40, 45 and 50 K. Note that the best-fitting values of $`p`$ and $`z_{\mathrm{max}}`$ depend only weakly on the world model assumed.
The physical processes that demand a form of luminosity density which rises steeply with increasing redshift before turning over have not been considered here in any detail. It seems likely, however, that the steep decline in the star formation rate to the present day is related to the declining gas content of galaxies at $`z<1`$, and that the behaviour at high redshifts could be attributable to relatively inefficient cooling of gas and thus of star formation in the mergers of metal-poor high-redshift systems (see Pei & Fall 1995 and Pei, Fall & Hauser 1999 for discussions of gas and dust evolution in the Universe). We discuss these issues further in Jameson, Blain & Longair (2000).
The bright low-redshift 60-$`\mu `$m count,
$$N_{60}\propto \sqrt{\frac{x_0^3}{(F\sigma )_0(1-f_\mathrm{A})^3}}\,S_{60}^{-3/2},$$
(17)
is independent of the cosmological model. At $`S_{60}=10`$ Jy, $`N_{60}=19\pm 2`$ sr<sup>-1</sup> (Saunders et al. 1990). A value of $`\sqrt{x_0^3/(F\sigma )_0(1-f_\mathrm{A})^3}=(11\pm 2)\gamma \times 10^{-7}`$ Gyr<sup>-1/2</sup> provides a good fit for any dust temperature between 30 and 60 K. The values of the normalization of the luminosity density $`x_0/(1-f_\mathrm{A})`$ and the activity parameter $`(F\sigma )_0^{-1}`$ at $`z\approx 0`$ that are required to fit the background spectrum and 60-$`\mu `$m counts depend on the fluctuation index $`\gamma `$ as $`\gamma `$ and $`\gamma ^{-1}`$ respectively. Thus the mass-to-light ratio of merging galaxies (equation 12) is expected to be independent of the value of the perturbation index. The values of $`F\sigma `$ listed in Table 1 are lower limits to the time-scale of the starburst/AGN $`\sigma `$, as $`F\le 1`$. They are generally consistent with the starburst time-scales of order $`5\times 10^7`$ yr derived by Mihos & Hernquist (1996) from hydrodynamical simulations of galaxy mergers.
The background radiation spectra and 60-$`\mu `$m counts corresponding to the models listed in Table 1 are shown in Figs 3 and 4(a) respectively. Note that the inclusion of positive evolution in the efficiency parameter $`x(z)`$ overcomes the underprediction of the 60-$`\mu `$m counts in the constant-$`x`$ model of Blain & Longair (1996); see their fig. 6. The slope of the 60-$`\mu `$m counts in the new model also more adequately represents the data than the predictions of Guiderdoni et al. (1998). The luminosity function of nearby dusty galaxies at 60 $`\mu `$m (Saunders et al. 1990; Soifer & Neugebauer 1991) predicted in the 40-K model, which is calculated at $`z=0`$ by evaluating $`F\sigma \dot{N}_{\mathrm{form}}`$ at the mass corresponding to a luminosity $`L`$ in equation (12), is shown in Fig. 4(b). The form of the luminosity function depends on the value of the perturbation index $`\gamma `$, even though the 60-$`\mu `$m counts are independent of $`\gamma `$. The best representation of the observed function is provided by a value of $`\gamma =2/3`$, or $`n=-1`$. A similar high-luminosity slope could be achieved by modifying the form of equation (12) so that $`L\propto M^\beta `$, where $`\beta >1`$. However, to match the observations with a scale-independent value of $`n=1`$, a rather extreme value of $`\beta =2`$ is required. In the work that follows $`n`$ is assumed to take the value $`-1`$, similar to the value found for this range of masses by Peacock & Dodds (1994). In this case, the faint-end slope of the low-redshift 60-$`\mu `$m luminosity function is equivalent to a Schechter function parameter $`\alpha =-5/3`$ (Schechter 1976). This is the same as the faint-end slope $`\alpha =-1.60\pm 0.13`$ of the optically selected luminosity function derived by Steidel et al. (1999) at $`z\approx 3`$, which describes the most numerous population of high-redshift galaxies that are actively forming stars. At $`0.75<z<1`$, the luminosity function of the blue star-forming galaxies in the CFRS survey also has a faint-end slope $`\alpha =-1.56`$ (Lilly et al. 1995). Steep faint-end slopes with indices of about $`-1.8`$ are expected for the luminosity functions of dwarf, irregular and infrared-luminous galaxies, as discussed by Hogg & Phinney (1997).
### 2.5 Self-consistency
In section 4 of Blain et al. (1999c) the self-consistency of models of dust-obscured galaxy formation was discussed. We demanded that a sufficiently large mass of metals, and associated dust, be generated by nucleosynthesis at each epoch to account for the far-infrared emission predicted by the model. As the mass of dust required to generate a given far-infrared luminosity depends strongly on the dust temperature, this consistency condition is most easily expressed as a minimum dust temperature at each redshift. In the cases of the models listed in Table 1, this lower limit to the dust temperature is presented in Fig. 5, both with and without an assumed high-redshift Population III to generate dust. If 2 per cent of the total star formation activity takes place in a high-redshift Population III, then the self-consistency limits are always satisfied. The details of the calculations can be found in Blain et al. (1999c). It is assumed that all the dust that is generated prior to a particular redshift still existed at that redshift and was available to absorb and reprocess the light from young hot stars and AGN. However, in an hierarchical model, only a small fraction of all the dust generated by any epoch is found in galaxies actively involved in a merger at that epoch; this fraction increases from about 8 to 25 per cent progressively from the 35-K to 50-K model listed in Table 1. Thus, because most of the dust will be found in quiescent objects, the condition in Fig. 5 is less severe than it might be. However, even if only 10 per cent of all dust is involved in a luminous dust-enshrouded merger, then the lower limit to the temperature increases only by about 50 per cent, and so this consistency requirement is not difficult to satisfy. The same correction is required if 90 per cent of the energy generated in merging galaxies is attributable to accretion onto AGN, that is, if the AGN fraction $`f_\mathrm{A}=0.9`$. Hence, the models are readily self-consistent if high-redshift Population-III stars exist to generate early metals. More sophisticated models of the coupled evolution of dust, gas and stars have been presented by Pei & Fall (1995), Eales & Edmunds (1996) and Pei et al. (1999).
### 2.6 Metal enrichment and the production of low-mass stars
At the present epoch the density of metals formed by nucleosynthesis in stars, $`\mathrm{\Omega }_\mathrm{m}(0)`$ (equation 9), is expected to be about $`10^{-3}`$ in the models presented in Table 1. If solar metallicity, about 2.5 per cent by mass (Savage & Sembach 1996), is typical of the Universe as a whole, and the density parameter in baryons $`\mathrm{\Omega }_\mathrm{b}h^2=0.019`$ (Burles & Tytler 1998), then $`\mathrm{\Omega }_\mathrm{m}\approx 1.9\times 10^{-3}`$ if $`h=0.5`$. Thus all the models listed in Table 1 are consistent with this limit, even if the AGN fraction $`f_\mathrm{A}=0`$ and all the luminosity of merging galaxies is due to star formation activity that generates heavy elements.
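The quoted metal budget follows from simple arithmetic, as the short check below shows (using only the numbers quoted in the text):

```python
# Check of the quoted Omega_m limit: solar metallicity (2.5 per cent
# by mass) applied to Omega_b h^2 = 0.019 with h = 0.5.
h = 0.5
Omega_b = 0.019 / h**2          # = 0.076
print(Omega_b * 0.025)          # ~ 1.9e-3, the quoted limit on Omega_m
```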
The density parameter in the form of stars at the present epoch is $`\mathrm{\Omega }_{*}(0)=(5.9\pm 2.3)\times 10^{-3}`$ (Gnedin & Ostriker 1992). Observations of Lyman-$`\alpha `$ absorbers along the line of sight to distant quasars allow the evolution of the mass of neutral gas and the typical metallicity in the Universe to be traced as a function of epoch (Storrie-Lombardi, McMahon & Irwin 1996; Pettini et al. 1997). In Fig. 6(a) the mass of material that has been processed into stars is derived as a function of epoch in each of the star-formation histories listed in Table 1, assuming that the AGN fraction $`f_\mathrm{A}=0`$ and a Salpeter initial mass function (IMF) with a lower mass limit of 0.07 $`M_{\odot }`$. In this case about 65–70 per cent of all stars formed are still burning at the present epoch. The values of $`\mathrm{\Omega }_{*}(0)`$ predicted are thus about 3 times larger than the observed value, but are comparable with the values derived in our earlier models (Blain et al. 1999c). In order to account for this difference, either a lower mass limit to the IMF of about 1 $`M_{\odot }`$ or a value of the AGN fraction $`f_\mathrm{A}\approx 0.75`$ is required. This high-mass IMF would be compatible with the inferred lower limit to the IMF of 3 $`M_{\odot }`$ required by Zepf & Silk (1996) to explain the mass-to-light ratios of elliptical galaxies, and by Rieke et al. (1993) to interpret observations of M82. Stars with masses less than 3 $`M_{\odot }`$ appear to be less numerous than expected from a Salpeter IMF in recent observations of the low-redshift starburst galaxy R136 (Nota et al. 1998). Goldader et al. (1997) report that the results of near-infrared spectroscopy of nearby IRAS galaxies with luminosities between 10<sup>11</sup> and 10<sup>12</sup> $`L_{\odot }`$ support a deficit of stars with masses less than 1 $`M_{\odot }`$ in these systems. More details about variations in the high-redshift IMF are discussed by Larson (1998).
Metals appear to be overproduced by about a factor of 5 at redshifts of 2 and 3 in the hierarchical models, as shown in Fig. 6(b), but again these results can be reconciled with the observations if a significant fraction of the luminosity of dusty galaxies is being powered by accretion on to AGN. Note, however, that the observations of metallicity could be biased against metal-rich regions of the Universe, either because of their small physical size (Ferguson, Gallagher & Wyse 1998) or because of the complete obscuration of a fraction of background QSOs (Fall & Pei 1993). ASCA X-ray observations of significant enrichment in intracluster gas (Mushotzky & Loewenstein 1997; Gibson, Loewenstein & Mushotzky 1998) could indicate that this is the case, at least in high-density environments.
The observed turn-over in the neutral gas fraction and the maximum rate of star formation shown in Fig. 6(a) are approximately coincident in redshift, and the rate of enrichment in the hierarchical models is broadly consistent with the slope interpolated between the three highest redshift data points plotted in Fig. 6(b).
### 2.7 The history of star-formation/AGN fuelling
In Fig. 7 the form of the evolution of the luminosity density is shown as a function of redshift in the models listed in Table 1. All the models predict curves with a rather similar form at $`z<2`$. The transformation between luminosity and star formation rate is the same as that assumed by Blain et al. (1999c): that is, a SFR of 1 $`M_{\odot }`$ yr<sup>-1</sup> is equivalent to a luminosity of $`2.2\times 10^9`$ $`L_{\odot }`$. At low redshifts the evolution of the luminosity density is consistent with optical and near-infrared observations, and with the results presented in our earlier paper.
## 3 Source counts
### 3.1 Fitting the available data
The models presented in Table 1 were constrained using the properties of the counts of the low-redshift 60-$`\mu `$m IRAS galaxies. The same formalism can be used to determine the counts of more distant dusty galaxies in the mid-/far-infrared and millimetre/submillimetre wavebands, where a large amount of additional information about the surface density of more distant dusty galaxies is available. There is an upper limit to the surface density of sources at 2.8 mm (Wilner & Wright 1997); counts at 850 $`\mu `$m (Smail et al. 1997; Barger et al. 1998; Holland et al. 1998; Hughes et al. 1998; Barger et al. 1999a; Blain et al. 1999b; Eales et al. 1999); upper limits (Smail et al. 1997; Barger et al. 1998), and a new count (Blain et al. 2000) at 450 $`\mu `$m; 175-$`\mu `$m ISO counts from Kawara et al. (1998) and Puget et al. (1999); 95-$`\mu `$m counts from Kawara et al. (1998); and 7- and 15-$`\mu `$m counts from an extremely deep ISO image of Abell 2390 (Altieri et al. 1999), which yields counts that are even deeper than those determined in blank-field surveys by Oliver et al. (1997), Aussel et al. (1999) and Flores et al. (1999).
If the values of the activity parameter at redshift zero, $`(F\sigma )_0^{-1}`$, listed in Table 1 are used to estimate the counts of galaxies at 850 and 175 $`\mu `$m, then the results underpredict the observed counts by a large factor. The form of evolution of the merger efficiency $`x(z)`$ is fixed by the observed background radiation intensity, and so, keeping within the framework of our well-constrained models, the value of the activity parameter $`(F\sigma )^{-1}`$ at high redshift must be allowed to increase above its value at redshift zero in order to account for the observations. This has the effect of increasing the luminosity of high-redshift mergers, thus increasing the 175- and 850-$`\mu `$m counts. However, the background radiation intensity and the low-redshift 60-$`\mu `$m counts remain unchanged.
The form of evolution of the activity parameter $`(F\sigma )^{-1}`$ that is required to explain the data is illustrated in Fig. 8. In Fig. 8(a) the ratios of the model predictions to the observed counts at wavelengths of 175 and 850 $`\mu `$m (Kawara et al. 1998; Blain et al. 1999b respectively) are compared as a function of the activity parameter in the four models listed in Table 1. The same value of the activity parameter cannot account for the observed counts at both wavelengths simultaneously, and the value required to explain the low-redshift 60-$`\mu `$m counts is different from either. The value of the activity parameter required to fit the 60-, 175- and 850-$`\mu `$m counts increases monotonically. Because the median redshift of the galaxies contributing to the counts at these wavelengths is expected to increase monotonically, in Fig. 8(b) we present the ratio of the model predictions and the observed counts as a function of a parameter $`p_\sigma `$ that describes a simple form of exponential redshift evolution of the activity parameter,
$$(F\sigma )^{-1}=(F\sigma )_0^{-1}\mathrm{exp}(p_\sigma z).$$
(18)
The exponential form provides a reasonable fit to the data, but is only one example of a whole family of potential functions. The important feature is that the function chosen to represent the activity parameter $`(F\sigma )^{-1}`$ increases rapidly with increasing redshift.
The zero-redshift value of the activity parameter $`(F\sigma )_0^{-1}`$ is fixed by requiring that the low-redshift 60-$`\mu `$m count prediction is in agreement with observations; see Table 1. The values of the evolution parameter $`p_\sigma `$ that correspond to the most reasonable fits for assumed single dust temperatures of 35, 40, 45 and 50 K are about 1.5, 1.5, 2.0 and 2.3 respectively. If the specific form of the redshift evolution of the activity parameter $`(F\sigma )^{-1}`$ shown in equation (18) is assumed, then a dust temperature of 35 or 40 K is most consistent with the data, the same temperature that was required for consistency by both Blain et al. (1999c) and Trentham et al. (1999), and is in agreement with the dust temperatures derived for high-redshift QSOs by Benford et al. (1999).
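The implied boost in the activity parameter is easily evaluated from Eq. (18), as in the sketch below; $`p_\sigma =1.5`$ is the fitted 35/40-K value quoted above, while the $`(F\sigma )_0`$ of 0.05 Gyr is an illustrative round number of the order of the Table 1 values.

```python
# Exponential evolution of the activity parameter, Eq. (18).
import numpy as np

def inv_F_sigma(z, inv0=1.0 / 0.05, p_sigma=1.5):   # 1/Gyr
    return inv0 * np.exp(p_sigma * z)

print(inv_F_sigma(3.0) / inv_F_sigma(0.0))          # ~90-fold rise by z = 3
```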
The increase in the value of the activity parameter $`(F\sigma )^{-1}`$ as a function of redshift can be interpreted in terms of two extreme scenarios, or as a combination of both. In the first scenario, the fraction $`F`$ of dark halo mergers that lead to a luminous phase in a dusty galaxy is fixed, but the duration of the luminous phase $`\sigma `$ is less at high redshifts. This is plausible, based on the results of simulations of galaxy mergers (e.g. Mihos 1999; Bekki et al. 1999); on average, the typical mass of a merging pair of galaxies is expected to be less at high redshifts in an hierarchical scenario of galaxy evolution, and the gas content of the galaxies is expected to be greater. As a result, the dynamical time of a merger would be expected to decrease with increasing redshift, and the viscosity of the ISM would be expected to increase. Both of these factors might be expected to increase the star formation efficiency of a merger with increasing redshift. In the second scenario, the duration of the luminous phase associated with a merger $`\sigma `$ is independent of redshift, but the fraction of mergers that induce such a phase $`F`$ is reduced as redshift increases. It is perhaps more plausible that the second of these scenarios could produce the large change in the activity parameter $`(F\sigma )^{-1}`$, by a factor of about 100 from $`z=0`$ to $`z=3`$, that is required to fit the data. This is because the duration of the luminous phase of a merger-induced starburst $`\sigma `$ must exceed the lifetime of a reasonably massive star, i.e. $`\sigma >10^7`$ yr. If star formation activity powers a significant fraction of the SCUBA galaxies, as seems reasonable, then the merger duration $`\sigma `$ is constrained to be greater than about 10<sup>7</sup> yr, only a few times less than the values of $`\sigma `$ at $`z=0`$ listed in Table 1. Thus it seems likely that a large fraction of the increase in the value of the activity parameter $`(F\sigma )^{-1}`$, which is required at high redshifts to account for the observed counts, should be attributed to a reduction in the fraction $`F`$ of dark halo mergers that generate a luminous galaxy. We speculate that this may be connected with the lower typical metallicity expected at higher redshifts. In a lower metallicity system the cooling of dense gas would be expected to be less efficient, and so a large amount of high-mass star formation may be unable to take place during the short merger process.
### 3.2 Predicted counts of dusty galaxies
Counts predicted by the four models listed in Table 1, employing the values of $`p_\sigma `$ listed above, are compared with observations at wavelengths of 15, 60, 175, 450, 850, 1300 and 2800 $`\mu `$m in Fig. 9. While these models do not present unique solutions, fewer parameters are involved in the model than the number of separate pieces of constraining data. In future, by comparing the predictions of the models with observations, especially with the redshift distributions of the SCUBA galaxies (Barger et al. 1999b; Lilly et al. 1999; Smail et al. 1999, in preparation), the models can be developed to account more accurately for the increasing amount of available data.
### 3.3 The corresponding radio counts
There is a tight correlation between the flux densities of low-redshift galaxies in the radio and far-infrared wavebands (see the review by Condon 1992). Thus the counts of faint galaxies observed in the radio waveband should not be overproduced when the SEDs of the galaxies in the 35-, 40-, 45- and 50-K models presented here are extended into the radio waveband using this correlation. It is permissible to underpredict the counts, as there will be a contribution from AGN to the faint counts, which need not be associated with powerful restframe far-infrared emission from dust. Partridge et al. (1997) report a 8.4-GHz galaxy count of $`1.0\pm 0.1`$ arcmin<sup>-2</sup> brighter than a flux density of 10 $`\mu `$Jy. The corresponding counts predicted by the 35-, 40-, 45- and 50-K models are 0.6, 0.7, 1.0 and 0.6 arcmin<sup>-2</sup> respectively. Thus all the models discussed here are consistent with the deep radio observations. For comparison, a count of 0.8 arcmin<sup>-2</sup> is predicted by the modified Gaussian model discussed in Barger et al. (1999b), which is a modification of the simple luminosity evolution models presented by Blain et al. (1999c).
### 3.4 Redshift distributions
The redshift distributions of submillimetre-selected sources at, or just below, the flux density limits of current surveys have been discussed by Blain et al. (1999c) in the context of models of a strongly evolving population of distant dusty galaxies, based on the low-redshift IRAS galaxy luminosity function. The first spectroscopic observations of a large fraction of the potential optical counterparts to SCUBA galaxies identified in deep multicolour optical images (Smail et al. 1998) have been made by Barger et al. (1999b) (see Fig. 10). This redshift distribution is consistent with the optical identifications made by Lilly et al. (1999) in a SCUBA survey of Canada–France Redshift Survey (CFRS) fields. The distribution shown in Fig. 10 is, however, subject to potential misidentifications of SCUBA galaxies. For example, recent deep near-infrared images show that two of the Smail et al. (1998) SCUBA galaxies, which were originally identified with low-redshift spiral galaxies, can more plausibly be associated with extremely red objects (EROs) that were unidentified in optical images (Smail et al. 1999). In two other cases, at $`z=2.55`$ (Ivison et al. 1999) and $`z=2.81`$ (Ivison et al. 1998), the identifications have been confirmed by detections of redshifted CO emission (Frayer et al. 1998; 1999), and in another case spectroscopy and ISO observations (Soucail et al. 1999) strongly support the identification of a ring galaxy at $`z=1.06`$.
The preliminary redshift distribution of SCUBA galaxies, shown in Fig. 10, is broadly consistent with the predictions of the Gaussian model of Blain et al. (1999c). A modified Gaussian model, as shown in Fig. 1, was described by Barger et al. (1999b); the values of the evolution parameters in the modified Gaussian model were explicitly fitted both to the background radiation intensity and count data and to the observed median redshift. In Fig. 10 the observed redshift distribution is compared with the redshift distributions predicted in the Gaussian and modified Gaussian models, and with the predictions of the hierarchical models developed here (see Table 1). Median redshifts of about 2.2, 2.7, 3.2 and 3.5 are expected in the 35-, 40-, 45- and 50-K hierarchical models respectively.
The redshift distributions predicted by the hierarchical models have median redshifts greater than that in the modified Gaussian model, but less than those in either the other models presented by Blain et al. (1999c) or the hierarchical model E from Guiderdoni et al. (1998), all of which provide a reasonable fit to both the background radiation intensity and source counts in the far-infrared/submillimetre waveband. Based on these results, the coolest 35-K model seems to be in best agreement with the available data. A model in which the single-temperature dust clouds discussed here are replaced by a temperature distribution will probably be required to account for the redshift distribution of the SCUBA galaxies. When the two spiral galaxies at $`z<0.5`$ are replaced by EROs at $`z>1`$ (Smail et al. 1999), the agreement between the 35-K hierarchical prediction and the observed redshift distribution is rather satisfactory.
In all the hierarchical models, despite strong negative evolution of the mass-to-light ratio of mergers with increasing redshift, most of the detected galaxies are expected to lie at redshifts less than 5, and so will be accessible to multi-waveband study using 8-m class telescopes. When final reliable identifications and redshifts for submillimetre-selected galaxies are available, this information will be crucial for refining the hierarchical model.
## 4 Optical backgrounds and counterparts
The discussion has so far centred on the properties of merging galaxies as observed through their dust emission in the mid-infrared, far-infrared and millimetre/submillimetre wavebands. Here we assume the same forms of evolution of both the merger efficiency parameter $`x`$ and the activity parameter $`(F\sigma )^{-1}`$ that were required to account for the data in the far-infrared and submillimetre wavebands in the previous section, but make predictions in the near-infrared, optical and ultraviolet wavebands. In particular, we investigate the 35-K model, in which the redshift distribution of SCUBA galaxies is in best agreement with observations.
Subject to the uncertain fraction of the luminosity of these galaxies that is assumed to be powered by star formation activity, we predict the integrated background radiation intensity from the near-infrared to ionizing ultraviolet wavebands, and the counts of galaxies with SEDs that are dominated by evolved stars in the near-infrared $`K`$-band, and by young stars in the optical $`B`$-band. By requiring that the $`K`$\- and $`B`$-band counts are reproduced accurately, we estimate both the fraction of all energy released in mergers that is reprocessed by dust $`A`$ and the normalization of the activity parameter at $`z=0`$, $`(F\sigma )_0^{-1}`$, in the optical waveband. For a discussion of the evolution of faint galaxies and their stellar populations see Ellis (1997).
### 4.1 $`K`$-band counts
The counts of galaxies in the $`K`$-band at a flux density $`S_K`$ can be predicted by assuming the forms of the merger efficiency $`x(z)`$, as listed in Table 1, an SED typical of evolved stars $`f_\nu ^K`$ (Charlot, Worthey & Bressan 1996), the Press–Schechter mass function (equation 1) and the mass-to-light ratio of evolved stellar populations $`R_{\mathrm{ML}}`$. The SED was calculated using a 9.25-Gyr old Bruzual–Charlot instantaneous burst model with a Salpeter IMF. Upper and lower mass limits of 0.1 and 125 $`\mathrm{M}_{\odot }`$ were assumed for the IMF. Note that the form of the evolved stellar spectrum derived is almost independent of the exact values of the upper and lower mass limits assumed. The $`K`$-band count is
$$N_K(S_K)=\int _0^{z_0}\int _{M_K(z)}^{\infty }N_{\mathrm{PS}}(M)\,\mathrm{d}M\,D(z)^2\frac{\mathrm{d}r}{\mathrm{d}z}\,\mathrm{d}z,$$
(19)
with
$$M_K(z)=4\pi D^2(1+z)S_KR_{\mathrm{ML}}(z)\frac{\int f_{\nu ^{\prime }}^K\,\mathrm{d}\nu ^{\prime }}{f_{\nu _K(1+z)}^K}.$$
(20)
By ensuring that the predicted counts match the observed $`K`$-band counts, a suitable form of the mass-to-light ratio $`R_{\mathrm{ML}}`$ is determined as a function of redshift. The mass in this ratio is defined as the mass of the dark matter haloes of galaxies, taken from the Press–Schechter function (equation 1), and the luminosity is the bolometric luminosity of the evolved stellar population in the galaxies.
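To make the chain from flux limit to halo-mass limit concrete, the following is a minimal Python sketch of the structure of equations (19) and (20); it is our own illustration rather than the authors' code, and the mass function `n_PS`, the distance measures `D` and `dr_dz`, and the mass limit `M_K` are placeholder callables to be supplied by the reader.

```python
import numpy as np
from scipy.integrate import quad

def k_band_count(S_K, M_K, n_PS, D, dr_dz, z0=10.0):
    """Equation (19): surface density of galaxies brighter than S_K.

    M_K(S_K, z) is the mass limit of equation (20), n_PS(M) is the
    Press-Schechter mass function, and D(z), dr_dz(z) are the distance
    measures of the adopted cosmology; all are assumed inputs here.
    """
    def integrand(z):
        n_above, _ = quad(n_PS, M_K(S_K, z), np.inf)  # inner mass integral
        return n_above * D(z)**2 * dr_dz(z)
    count, _ = quad(integrand, 0.0, z0)
    return count
```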
In order to reproduce the observed $`B`$-band counts, the counts derived for the evolved and merging components are added together, as shown in Fig. 11(b). The redshift dependence of the mass-to-light ratio is the same as that of the luminosity density of evolved stars,
$$ϵ_\mathrm{L}(z)\propto \frac{1}{1+z}\frac{\int _z^{z_0}\frac{x(z^{\prime })}{1+z^{\prime }}\,\mathrm{d}z^{\prime }}{\int _0^{z_0}\frac{x(z^{\prime })}{1+z^{\prime }}\,\mathrm{d}z^{\prime }},$$
(21)
which depends on the SFR at all earlier epochs. A factor of $`1+z`$ is included in the denominator to mimic the effects of passive stellar evolution (Longair 1998). At $`z=0`$ a form of the mass-to-light ratio,
$$\frac{R_{\mathrm{ML}}}{\mathrm{M}_{\odot }\,\mathrm{L}_{\odot }^{-1}}=\{\begin{array}{cc}80,\hfill & \mathrm{if}L_{10}\le 2;\hfill \\ 117L_{10}^{-0.55},\hfill & \mathrm{otherwise},\hfill \end{array}$$
(22)
where $`L_{10}=L/10^{10}\,\mathrm{L}_{\odot }`$, is required to match the observed $`K`$-band counts (Fig. 11a) and the faint-end slope of the observed $`K`$-band luminosity function (Gardner et al. 1996; Szokoly et al. 1998).
The well-fitting $`K`$-band count that is derived from the model with this form of the mass-to-light ratio $`R_{\mathrm{ML}}`$ is shown, along with the observational data, in Fig. 11(a).
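Equation (22) is simple enough to state as a small helper; this is a minimal sketch in Python (the function name and the choice of language are ours, not the paper's), and the two branches join continuously at $`L_{10}=2`$ since $`117\times 2^{-0.55}\approx 80`$.

```python
def mass_to_light_ratio(L):
    """Zero-redshift mass-to-light ratio of equation (22), in Msun/Lsun.

    L is the bolometric luminosity of the evolved stellar population, in
    Lsun; the two branches match at L_10 = 2 (117 * 2**-0.55 ~ 80).
    """
    L10 = L / 1e10
    return 80.0 if L10 <= 2.0 else 117.0 * L10 ** -0.55
```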
### 4.2 $`B`$-band counts
Both passive evolved galaxies and luminous merging galaxies make a contribution to the $`B`$-band counts. The evolved contribution is predicted by evaluating the function that produces the $`K`$-band count at the frequency of the $`B`$-band. The extrapolation is made using the model SED described above. An additional population of merging galaxies is also included. Their counts are determined using equation (13) directly, with a value of
$$M_{\mathrm{min}}=\frac{1}{1-A}\frac{4\pi D^2(1+z)S_B}{0.007c^2}\frac{F\sigma }{x(z)}(1-f_\mathrm{A})\frac{\int f_{\nu ^{\prime }}^B\,\mathrm{d}\nu ^{\prime }}{f_{\nu _B(1+z)}^B}.$$
(23)
$`A`$ is the fraction of the total energy released in a merger that is reprocessed into the far-infrared waveband, and $`f_\nu ^B`$ is the SED of a flat star-forming young stellar spectrum at frequencies less than the Lyman limit frequency $`\nu _{\mathrm{Ly}}=3.3\times 10^{15}`$ Hz: $`f_\nu ^B\propto \nu ^0`$ if $`\nu \le \nu _{\mathrm{Ly}}`$ and zero otherwise. The blue power-law SED expected from an AGN will be described reasonably well by this SED at $`\nu <\nu _{\mathrm{Ly}}`$.
The faint counts at $`B>21`$, which are dominated by merging galaxies, can be reproduced in the model only if the dust absorption fraction $`A\simeq 0.8`$ and the zero-redshift activity parameter $`(F\sigma )_0^{-1}=2.5`$ Gyr<sup>-1</sup>. The activity parameter incorporated in the model evolves with redshift as shown in equation (18), with the value of $`p_\sigma =1.5`$ that is appropriate in the 35-K model. Note that this value of the activity parameter is less than that required to account for the observed submillimetre-wave counts, and that the ratio of energy emitted in the restframe ultraviolet and far-infrared wavebands is 1:4. This ratio is equivalent to 1.75 magnitudes of extinction when integrated over the optical and ultraviolet wavebands.
The redshift distribution of faint galaxies at $`z>1`$ derived in the hierarchical model is in good agreement with that observed for galaxies with $`B<24`$ (Cowie, Hu & Songaila 1995); these details are discussed more extensively elsewhere (Jameson et al. 1999, in preparation). Note that the dependence of the submillimetre and faint $`B`$-band counts on the merging efficiency parameter $`x(z)`$ and the AGN fraction $`f_\mathrm{A}`$ is identical, and so the value of the AGN fraction does not affect the resulting counts.
The most numerous population of faint high-redshift optically selected galaxies known is that of the Lyman-break galaxies (Steidel et al. 1996a, 1999) at $`2.5\le z\le 4.5`$, which have apparent SFRs of a few tens of $`\mathrm{M}_{\odot }`$ yr<sup>-1</sup>. The surface density of detected Lyman-break galaxies is about an order of magnitude greater than that of submillimetre-selected galaxies, while their SFRs are typically about an order of magnitude less.
The violence of a typical merger-induced starburst/AGN, and thus its detectability, is determined by the product of the merging efficiency and activity parameters, $`x(F\sigma )^{-1}`$, in the hierarchical model. The value of this composite parameter that is required to fit the observed submillimetre-wave counts is about 40 times greater than that required to fit the faint $`B`$-band counts. The value of the activity parameter $`(F\sigma )^{-1}`$ that is required to fit the submillimetre-wave counts is a factor of about 10 greater than that required to fit the $`B`$-band counts. These differences are thus comparable with the observed ratios of the surface densities and luminosities of typical galaxies in the submillimetre-selected and Lyman-break samples.
Based on these differences, we suggest a scenario in which the optically selected Lyman-break galaxies and the submillimetre-selected SCUBA galaxies are drawn from the same underlying population of luminous galaxy merger events, but are distinguished by being observed during two distinct phases of the merger process. We associate one phase, which is very luminous, short-lived and heavily dust enshrouded, with the SCUBA galaxies, and the other, which is less luminous and relatively lightly obscured, with the Lyman-break galaxies. During the first phase, which is about 40 times more luminous but lasts only about a tenth as long as the second, most of the activity in the merger will be almost completely obscured from view in the optical waveband, but extremely bright in the submillimetre waveband. This phase is consistent with the extremely compact nuclear starburst/AGN activity observed by Downes & Solomon (1998) on sub-kpc scales in nearby ultraluminous IRAS galaxies. During the second phase, less intense star formation activity would probably be distributed throughout the ISM of both merging galaxies. A short-lived ultraluminous phase and a longer-lived, less intense burst of star formation activity during a merger are consistent with the results of hydrodynamic models of galaxy mergers by Mihos & Hernquist (1996) and Bekki et al. (1999).
However, while plausible, this scenario is not necessarily correct. The faint counts in the submillimetre and optical wavebands could simply be drawn from two distinct populations. The questions of whether and how ultraluminous dust-enshrouded mergers are connected with the Lyman-break galaxies can only be answered by making multiwaveband observations of large samples of submillimetre-selected galaxies, in order to observe a time sequence of merging galaxies and to investigate the merger process in detail. Observations of the Lyman-break population in the submillimetre waveband (Chapman et al. 1999) will also help to address these questions.
### 4.3 Integrated background light
The global luminosity density predicted by the hierarchical models is based on the evolution of the merging efficiency parameter $`x(z)`$. By making minor modifications to the formalism presented in Section 2.3, the models listed in Table 1 can be used to predict the background radiation intensity.
The near-infrared/optical/ultraviolet background radiation intensity produced by merging galaxies can be calculated as shown in equation (11), if the fraction of energy generated by a merger that is absorbed by dust $`A`$ is included. Thus
$$I_\nu ^{\mathrm{opt}}\propto (1-A)\frac{\varphi }{\gamma \sqrt{\alpha }}\int _0^{z_0}\frac{x(z)}{1-f_\mathrm{A}}\frac{\dot{\delta }(z)}{\delta (z)}\frac{f_{\nu (1+z)}^B}{\int f_{\nu ^{\prime }}^B\,\mathrm{d}\nu ^{\prime }}\frac{\mathrm{d}r}{\mathrm{d}z}\frac{\mathrm{d}z}{1+z}.$$
(24)
The background radiation intensity due to the evolved stellar population is not given directly by the expression for the far-infrared background in equation (11). In that case the volume emissivity is given by equations (21) and (22). By integrating this emissivity of evolved objects over redshift, assuming the SED $`f_\nu ^K`$ introduced above, the background radiation intensity produced by evolved non-merging galaxies can be calculated. When it is added to the background radiation spectrum produced in merging galaxies, a complete prediction of the background intensity between the near-infrared and near-ultraviolet wavebands is obtained.
The background radiation intensity predicted using the 35-K model, which can explain the $`K`$\- and $`B`$-band galaxy counts, is shown from the millimetre to the ultraviolet waveband in Fig. 12. The background intensity is in agreement with almost all the observed limits and detections. The only spectral region in which the background radiation intensity is still not very well defined is between about 3 and 7 $`\mu `$m. At these wavelengths, the dominant source of background radiation switches from dust emission to starlight. There is almost certainly an additional, and perhaps dominant, contribution to the background radiation intensity in the mid-infrared waveband from very hot dust grains in the central regions of AGN, which are not modelled here. The model curves shown in Fig. 12 are not extrapolated into this region from the well-determined populations of galaxies at 60 $`\mu `$m and in the $`K`$ band.
## 5 Overview of model parameters
We have introduced a series of parameters and functions to account for different observations section by section through the paper. Excluding the world model parameters $`H_0`$, $`\mathrm{\Omega }_0`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }`$, five parameters are required to define the merger rate of dark matter haloes. Five further independent parameters \[$`x_0`$, $`p`$, $`z_{\mathrm{max}}`$, $`(F\sigma )_0^{-1}`$ and $`p_\sigma `$\] and an SED for dusty galaxies are required to fit the counts at wavelengths of 15, 60, 175, 450 and 850 $`\mu `$m and the submillimetre-wave/far-infrared background radiation spectrum. The most important elements of the model are the two functions that describe the merger efficiency parameter $`x(z)`$ and the activity parameter $`(F\sigma )^{-1}`$. We present appropriate forms for these functions in equations (16) and (18) respectively, but stress that these forms are not unique. As more data become available, other functional forms or free-form fitting functions may be more appropriate. By modifying the activity parameter $`(F\sigma )_0^{-1}`$, introducing the total fraction of luminosity absorbed by dust $`A`$, and including a template SED for star-forming galaxies in the rest-frame ultraviolet waveband, the $`B`$-band counts can also be reproduced. In Table 2 we summarize these parameters and the most important pieces of data used to constrain them. The values of the parameters required to fit the data in the 35-K model are also listed.
## 6 Conclusions
1. We have presented a simple model of hierarchical galaxy formation which incorporates the effects of obscuration by dust, in which the galaxies that are detected in submillimetre-wave surveys are observed during a merger-induced episode of star formation or AGN fuelling. The aim of this model is to elucidate the most important physical processes that could be at work in luminous dusty galaxies, rather than to provide a detailed quantitative description.
2. The model is constrained primarily by the intensity of background radiation in the far-infrared/submillimetre waveband. From these data alone, the luminosity density from high-redshift galaxies is inferred to exceed that deduced from observations in the rest frame ultraviolet and optical wavebands by up to an order of magnitude. The source counts and background radiation intensity in the submillimetre, far-, mid- and near-infrared, and optical wavebands are reproduced adequately in the model without introducing a large number of parameters.
3. The counts of galaxies detected in the far-infrared/submillimetre and optical wavebands, and the associated background radiation intensities in these wavebands, are consistent if about 4 times more energy is emitted by galaxies after being reprocessed into the far-infrared waveband by interstellar dust than is radiated directly in the optical/ultraviolet waveband.
4. In order to account for the observed abundance of distant galaxies detected at 175 and 850 $`\mu `$m using ISO and SCUBA, the mass-to-light ratio of a typical galaxy merger must decrease with redshift, by factors of about 10 and 200 at $`z=1`$ and 3 respectively. Thus high-redshift mergers must typically be more violent than their low-redshift counterparts. We suggest two possible physical explanations. First, that gas is converted into stars/feeds an AGN uniformly more efficiently and rapidly in all merging galaxies as redshift increases, perhaps due to a lower bulge-to-disk ratio, which makes disk instabilities grow more quickly (Mihos & Hernquist 1996). Secondly, that a decreasing fraction of dark matter halo mergers are associated with an efficient mode of star formation/AGN fuelling as redshift increases.
5. In the context of galaxy formation within merging dark matter haloes, we have described how the physical processes that convert merging mass into visible radiation must evolve with redshift in order to account for the data in the far-infrared and submillimetre wavebands. This has previously been discussed by Guiderdoni et al. (1998), in the conventional context of semi-analytic models, where gas is assumed to cool into dark matter haloes and form stars on galactic scales. In order to account for the observations, an additional population of ultraluminous galaxies was incorporated arbitrarily into their models. We have improved our previous models (Blain et al. 1999c) significantly, by including some astrophysics and not simply invoking an empirical form of the evolution of a low-redshift luminosity function to fit the data. By assuming only a single population of luminous merging galaxies we are able to account for all the data in the far-infrared and submillimetre wavebands. Clear forms of evolution of both the efficiency with which luminosity is generated by a galaxy merger as a function of redshift, and of a function that connects the duration of the luminous phase and the fraction of dark matter halo mergers that generate a luminous event are required to reproduce the results of observations. The way in which gas is processed in the sub-kpc core regions of galaxy mergers to reproduce the necessary high efficiency and short time-scale of luminous events must be investigated in future work.
6. We find that the observed counts of both submillimetre-selected galaxies and Lyman-break galaxies can be accounted for in terms of merger events in an hierarchical model of galaxy formation, which include identical forms of evolution with redshift, but with different absolute normalisations. We find that 80 per cent of the total amount of energy generated in merger-induced starbursts/AGN is liberated in the far-infrared waveband. It is plausible that the submillimetre-selected galaxies and the Lyman-break galaxies are associated with temporally distinct phases of a common population of merging dark matter haloes. A scenario in which a short-lived, highly obscured far-infrared starburst/AGN phase dominates the integrated luminosity of the merger and is surrounded in time by a less luminous, more lightly obscured phase that lasts about 10 times longer is consistent with the data. In this scenario, the merger would be classified as a SCUBA galaxy if it was observed during the short-lived phase, and as a Lyman-break galaxy during the long-lived phase.
7. The results presented here provide excellent opportunities for further study. Two key scientific questions remain unanswered. First, what are the physical processes that are responsible for the evolution of both the star-formation/AGN-fuelling efficiency and the activity parameter in galaxy mergers as a function of redshift? Secondly, what is the relationship between samples of faint galaxies selected in the optical waveband and submillimetre-selected galaxies? Larger samples of submillimetre-selected galaxies and more comprehensive multiwaveband follow-up observations will allow these questions to be answered.
## Acknowledgements
We thank Nigel Metcalfe for providing a comprehensive list of optical count data, and Chris Mihos, Priya Natarajan, Kate Quirk, Chuck Steidel and Neil Trentham for providing useful comments on the manuscript. Thanks are also due to an anonymous referee for helpful suggestions and prompt reading of the manuscript. AWB, AJ and RJI acknowledge PPARC, IS thanks the Royal Society, and JPK thanks the CNRS for support. In addition, AWB thanks MENRT for support while in Toulouse, and the Caltech AY visitors program for support while this work was completed.
# The Bright SHARC Survey: The X–ray Cluster Luminosity Function<sup>1</sup>

<sup>1</sup>Based on data obtained at the Kitt Peak National Observatory, European Southern Observatory, the Canada–France–Hawaii Telescope and Apache Point Observatory
## 1 Introduction
The observed evolution of the space density of clusters of galaxies provides a powerful constraint on the underlying cosmological model. Many authors have demonstrated – both analytically and numerically – that the expected abundance of clusters, as a function of cosmic epoch, is a sensitive test of the mean mass density of the universe ($`\mathrm{\Omega }_m`$) and the type of dark matter (Press & Schechter 1974; Lacey & Cole 1993, 1994; Oukbir & Blanchard 1992, 1997; Henry 1997; Eke et al. 1996, 1998; Viana & Liddle 1996, 1999; Bryan & Norman 1998; Reichart et al. 1999; Borgani et al. 1999).
Measurements of the evolution of the cluster abundance have made significant progress over the past decade. For example, in their seminal work, Gioia et al. (1990) and Henry et al. (1992) computed the luminosity function of X–ray clusters extracted from the Einstein Extended Medium Sensitivity Survey (EMSS) and concluded that the X-ray Cluster Luminosity Function (XCLF) evolved rapidly over the redshift range $`0.14\le z\le 0.6`$.
The launch of the ROSAT satellite heralded a new era of investigation into the XCLF. The ROSAT All–Sky Survey (RASS) has provided new determinations of the local XCLF and has demonstrated that there is little observed evolution in the XCLF out to $`z\sim 0.3`$ (Ebeling et al. 1997; De Grandi et al. 1999), in agreement with the earlier work of Kowalski et al. (1984). In addition, the ROSAT satellite has supported several investigations of the distant X–ray cluster population (RIXOS, Castander et al. 1995; SHARC, Burke et al. 1997, Romer et al. 1999; RDCS, Rosati et al. 1998; WARPS, Jones et al. 1998; Vikhlinin et al. 1998a; NEP, Henry et al. 1998). Initially, such investigations reported a deficit of high redshift, low luminosity clusters consistent with the original EMSS result (Castander et al. 1995). However, over the last few years, there has been a growing consensus for a non–evolving XCLF. First, Nichol et al. (1997) re–examined the EMSS cluster sample and determined that the statistical evidence for evolution of the EMSS XCLF had decreased in light of new ROSAT data. Second, several authors have now conclusively shown that the XCLF does not evolve out to $`z\sim 0.7`$ for cluster luminosities of $`\mathrm{L}_\mathrm{x}<3\times 10^{44}\mathrm{erg}\,\mathrm{s}^{-1}`$ (Collins et al. 1997; Burke et al. 1997; Rosati et al. 1998; Jones et al. 1998).
Above $`\mathrm{L}_\mathrm{x}=3\times 10^{44}\mathrm{erg}\,\mathrm{s}^{-1}`$, recent work has indicated that the XCLF may evolve rapidly, in agreement with the original claim of Gioia et al. (1990). Reichart et al. (1999) highlighted a deficit of luminous ($`\mathrm{L}_\mathrm{x}>5\times 10^{44}\mathrm{erg}\,\mathrm{s}^{-1}`$) EMSS clusters at $`z>0.4`$, i.e. the EMSS survey has both the sensitivity and area to find such clusters but does not detect them. Moreover, Vikhlinin et al. (1998b) have recently reported evidence for a deficit of luminous clusters at $`z>0.3`$ based on the $`160\mathrm{deg}^2`$ ROSAT survey (Vikhlinin et al. 1998a).
In this paper, we report on the first determination of the bright end of the XCLF that is independent of the EMSS. In sections 2 & 3, we outline the Bright SHARC sample of clusters used herein and its selection function. In sections 4 & 5, we present the derivation of the XCLF and discuss its implications. Throughout this paper, we use $`\mathrm{H}_\mathrm{o}=50\,\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}`$ and $`q_o=\frac{1}{2}`$ to be consistent with other work in this field. All quoted luminosities are in the hard ROSAT passband \[0.5–2.0 keV\] and are aperture and k–corrected (see Romer et al. 1999 for details).
## 2 The Bright SHARC Sample
The details of the construction of the Bright SHARC survey are presented in Romer et al. (1999). The Bright SHARC was constructed from 460 deep ($`\mathrm{T}_{\mathrm{exp}}>10`$ ksecs), high galactic latitude ($`|b|>20^{\circ }`$), ROSAT PSPC pointings which cover a unique area of $`178.6\mathrm{deg}^2`$. Using a wavelet–based detection algorithm, $`10277`$ X–ray sources were detected in these pointings of which $`374`$ were measured to be significantly extended ($`>3\sigma `$; see Nichol et al. 1997) relative to the ROSAT PSPC point–spread function. The Bright SHARC represents the brightest 94 of these 374 extended cluster candidates above a ROSAT count rate of 0.0116 $`\mathrm{cnts}\,\mathrm{s}^{-1}`$. This corresponds to a flux limit of $`1.4\times 10^{-13}\mathrm{erg}\,\mathrm{s}^{-1}\,\mathrm{cm}^{-2}`$ \[0.5–2.0 keV\] for the average neutral hydrogen column density of the Bright SHARC and a cluster temperature of $`6`$ keV.
Over the past two years, we have optically identified the most likely X–ray emitter for $`91`$ of these $`94`$ Bright SHARC cluster candidates and have discovered $`37`$ clusters, $`3`$ groups of galaxies and $`9`$ nearby galaxies (the remainder are blends of X–ray sources, e.g. AGNs & stars; see Romer et al. 1999). We find $`12`$ clusters in the range $`0.3\le z\le 0.83`$ (median redshift of $`z=0.42`$) and have independently detected cluster RXJ0152.7-1357 ($`z=0.83`$ based on 3 galaxy redshifts obtained at the CFHT), which is one of the most luminous, high redshift X–ray clusters ever detected (see Romer et al. 1999). This cluster has also been detected by the WARPS and RDCS surveys (see Ebeling et al. 1999; Rosati, private communication).
## 3 Selection Function
An important part of any survey is a solid understanding of the selection function i.e. the efficiency of finding objects as a function of both cosmological and operational parameters. In the case of the EMSS cluster sample, the selection function is somewhat straightforward since the EMSS optically identified all sources regardless of their observed X–ray extent. This is not the case for the Bright SHARC and therefore, the most direct way of modelling the selection function is through Monte Carlo simulations. The details of such simulations are given in Adami et al. (1999) but we present here some initial results.
The Bright SHARC selection function is obtained by adding artificial clusters to PSPC pointings and determining if these clusters would have satisfied the Bright SHARC selection criteria. In this way, we can accurately model the effects of the signal–to–noise cut, the extent criteria and blending of sources. To date, we have only simulated artificial clusters at four different intrinsic luminosities, $`\mathrm{L}_\mathrm{x}=1,2,5\&10\times 10^{44}\mathrm{erg}\,\mathrm{s}^{-1}`$ (i.e. approximately equally spaced in log–space), and covering a redshift range of $`0.3\le z\le 1`$ in steps of $`\delta _z=0.05`$. These artificial clusters were constructed assuming an isothermal King profile with $`r_c=250`$ kpc and $`\beta =\frac{2}{3}`$. The effects of changing the cluster profile and its parameters are explored in detail in Adami et al. (1999).
For each combination of $`\mathrm{L}_\mathrm{x}`$ and $`z`$, we added, one at a time, 10 artificial clusters to each of 10 PSPC pointings (randomly chosen from all pointings available in the SHARC survey). The positions of the clusters in these pointings were chosen at random but recorded for later use when computing the area surveyed (see below). For each artificial cluster, we computed the expected number of ROSAT PSPC counts for its given luminosity, redshift and the exposure time of the pointing. We then took a Poisson deviate about the expected number of counts and distributed the counts at random assuming a redshifted King profile. The pointing was then processed in the same fashion as the real data (see Romer et al. 1999; Nichol et al. 1997) thus allowing us to determine if the cluster would have been selected for the Bright SHARC.
In Figure 1, we show the results of these initial simulations. The effective area of the Bright SHARC was computed by splitting the PSPC field–of–view into 4 annuli (2.5–6.25, 6.25–11.25, 11.5–16.25 and 16.5–22.5 arcmin; the central part of the PSPC was excluded) and multiplying the area in each of these annuli by the measured success rate of detecting clusters in these same annuli (as a function of the input cluster redshift and luminosity). We then summed the effective area in these annuli over all pointings used – again as a function of luminosity and redshift – to provide the total effective area sampled by the simulations. Finally, we re–scaled the results to obtain the expected area for all 460 PSPC pointings used in the Bright SHARC.
## 4 Luminosity Function
The luminosity function of the Bright SHARC was determined using the $`1/\mathrm{V}_\mathrm{a}`$ methodology outlined in Avni & Bahcall (1980), Henry et al. (1992) and Nichol et al. (1997), where $`\mathrm{V}_\mathrm{a}`$ is the available sample volume for any given cluster in the survey. Using the selection function presented in Figure 1, $`V_a`$ can be computed for a cluster of luminosity $`\mathrm{L}_\mathrm{x}`$ using
$$\mathrm{V}_\mathrm{a}=\int _{z_{low}}^{z_{high}}\mathrm{\Omega }(\mathrm{L}_\mathrm{x},z)V(z)\,\mathrm{d}z,$$
(1)
where $`z_{low}`$ and $`z_{high}`$ are the lower and upper bounds of the redshift shell of interest, $`V(z)`$ is the volume per unit solid angle for that redshift shell and $`\mathrm{\Omega }(\mathrm{L}_\mathrm{x},z)`$ is the effective area of the Bright SHARC from Figure 1. In practice, the integral in Eqn. 1 is replaced by a sum over the discrete values of $`\mathrm{\Omega }(\mathrm{L}_\mathrm{x},z)`$ obtained from the simulations. Linear interpolation was used where necessary to obtain finer resolution in both luminosity and redshift space.
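A minimal Python sketch of this discrete sum follows; the tabulated effective areas and shell volumes are assumed inputs (they come from the simulations and the adopted cosmology, neither of which is reproduced here), and the code is our own illustration rather than the survey pipeline.

```python
import numpy as np

def available_volume(z_tab, omega_tab, V_tab, z_low, z_high, n=200):
    """Discrete form of equation (1): integrate Omega(Lx, z) V(z) over the
    redshift shell, with Omega linearly interpolated between the simulated
    grid points.  z_tab, omega_tab and V_tab are assumed tabulated inputs."""
    z = np.linspace(z_low, z_high, n)
    omega = np.interp(z, z_tab, omega_tab)   # effective area at this Lx
    V = np.interp(z, z_tab, V_tab)           # volume per unit solid angle
    return np.trapz(omega * V, z)
```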
The luminosity function was derived by summing the $`1/\mathrm{V}_\mathrm{a}`$ values for all clusters in the Bright SHARC as a function of luminosity, i.e.
$$\mathrm{n}(\mathrm{L})=\frac{1}{\mathrm{\Delta }\mathrm{L}}\sum _{i=1}^{N}\frac{1}{\mathrm{V}_\mathrm{a}^i},$$
(2)
where $`\mathrm{\Delta }\mathrm{L}`$ is the width of the luminosity bins and $`N`$ is the number of clusters in that luminosity bin (Table 1). For the results presented here, we have restricted ourselves to the luminosity range $`10^{44}\le \mathrm{L}_\mathrm{x}\le 10^{45}\mathrm{erg}\,\mathrm{s}^{-1}`$.
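The binned estimator of equation (2) can be spelled out directly; the following Python fragment is our own sketch with assumed array inputs, not the code used for the survey.

```python
import numpy as np

def xclf(Lx, Va, L_edges):
    """Equation (2): sum 1/Va over the clusters falling in each luminosity
    bin, then divide by the bin width Delta-L.  Lx and Va are per-cluster
    arrays; L_edges defines the luminosity bins."""
    n = np.zeros(len(L_edges) - 1)
    bins = np.digitize(Lx, L_edges) - 1
    for i, Va_i in zip(bins, Va):
        if 0 <= i < len(n):
            n[i] += 1.0 / Va_i
    return n / np.diff(L_edges)
```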
In Figure 2, we present the Bright SHARC XCLF and compare it to the Southern SHARC XCLF (Burke et al. 1997) and measurements of the local XCLF (Ebeling et al. 1997; De Grandi et al. 1999). We provide, in Table 1, the data points displayed in Figure 2 together with the redshift and luminosity ranges studied. We also provide the number of clusters in each bin. We have not performed a parametric fit to the data because of the limited dynamic range in luminosity available from our present simulations.
## 5 Discussion
Figure 2 demonstrates that the high redshift XCLF does not evolve below $`\mathrm{L}_\mathrm{x}=5\times 10^{44}\mathrm{erg}\,\mathrm{s}^{-1}`$, or below $`\mathrm{L}^{}`$ in the XCLF ($`\mathrm{L}^{}=5.7_{-0.93}^{+1.29}\times 10^{44}\mathrm{erg}\,\mathrm{s}^{-1}`$ from Ebeling et al. 1997). Using a Kolmogorov–Smirnov (KS) test similar to that discussed by De Grandi et al. (1999), we find that the Bright SHARC unbinned data are fully consistent with the low redshift XCLF over the luminosity range $`1\times 10^{44}\le \mathrm{L}_\mathrm{x}\le 5\times 10^{44}\mathrm{erg}\,\mathrm{s}^{-1}`$ (we find a KS probability of 0.32). This result is consistent with previous work (Nichol et al. 1997; Burke et al. 1997; Rosati et al. 1998; Jones et al. 1998) but pushes the evidence for a non–evolving XCLF to the highest luminosities presently reached by ROSAT data. This is illustrated by the fact that we find 6 distant clusters in the luminosity range $`3\times 10^{44}\le \mathrm{L}_\mathrm{x}\le 5\times 10^{44}\mathrm{erg}\,\mathrm{s}^{-1}`$ (Table 1), which is more than any other ROSAT archival survey. The issue therefore becomes the degree of observed evolution in the XCLF above $`\mathrm{L}^{}`$.
We have investigated evolution in the XCLF above $`\mathrm{L}^{}`$ in two separate ways. First, we have one very luminous high redshift cluster in the Bright SHARC – RXJ0152.7-1357 ($`z=0.83`$; $`\mathrm{L}_\mathrm{x}=8.5\times 10^{44}\mathrm{erg}\,\mathrm{s}^{-1}`$) – which probes the XCLF above $`\mathrm{L}^{}`$. The implied position of this cluster in the XCLF is given in Figure 2 and Table 1 (we used a redshift range of $`0.3\le z\le 1`$ and a luminosity range of $`7\times 10^{44}\le \mathrm{L}_\mathrm{x}\le 10^{45}\mathrm{erg}\,\mathrm{s}^{-1}`$ when computing the volume sampled by this cluster). As can be seen in Figure 2, the implied space density of RXJ0152.7-1357 agrees with the local XCLF and may be evidence for a non-evolving XCLF above $`\mathrm{L}^{}`$. However, we must remain cautious since RXJ0152.7-1357 has a complex X–ray morphology indicative of an on–going merger, which may have enhanced its luminosity (see Ebeling et al. 1999). Such disturbed or non–spherical morphologies (both in the X–rays and the optical) appear to be common at these high redshifts – e.g. MS1054.4-0321 at $`z=0.823`$ (Donahue et al. 1999) and RXJ1716.6+6708 at $`z=0.813`$ (Henry et al. 1998; Gioia et al. 1999) – and may indicate that we are witnessing the epoch of massive cluster formation.
The second path of investigation is to repeat the analysis of Collins et al. (1997) and Vikhlinin et al. (1998b) and compute the number of expected Bright SHARC clusters at these bright luminosities assuming a non–evolving XCLF. In the luminosity range $`5\times 10^{44}\le \mathrm{L}_\mathrm{x}\le 10^{45}\mathrm{erg}\,\mathrm{s}^{-1}`$ and redshift range $`0.3\le z\le 0.7`$, we would predict $`4.9`$ clusters based on the De Grandi et al. (1999) XCLF (or $`3.5`$ clusters using the Ebeling et al. 1997 XCLF). At present, the Bright SHARC contains only one confirmed cluster in this range, RXJ1120.1+4318 at $`z=0.60`$ and $`\mathrm{L}_\mathrm{x}=5.03\times 10^{44}\mathrm{erg}\,\mathrm{s}^{-1}`$. The Poisson statistical significance (Gehrels 1986) of this observed deficit is $`96\%`$ (or $`90\%`$ for Ebeling et al. 1997).
One way to increase the statistical significance of any possible deficit of high redshift X–ray luminous clusters is to combine the Bright SHARC and the $`160\mathrm{deg}^2`$ survey of Vikhlinin et al. (1998a), as both surveys should have similar selection functions. We have determined that $`201`$ ROSAT PSPC pointings are in common between the two surveys (see Romer et al. 1999), or $`44\%`$ of the area of the Bright SHARC. Vikhlinin et al. (1998b) have noted that their survey probably contains only 2 candidate clusters above $`\mathrm{L}_\mathrm{x}=3\times 10^{44}\mathrm{erg}\,\mathrm{s}^{-1}`$ and $`z>0.3`$ (based on photometric redshift estimates), while they would have expected 9 clusters above this redshift and luminosity limit. In fact, Bright SHARC spectroscopy of one of these two candidate clusters – RXJ1641+8232 – shows that it is at $`z=0.195`$ (based on 3 galaxy redshifts). The other candidate cluster – RXJ1641+4001 – is not in the Bright SHARC.
Therefore, the Bright SHARC plus the $`160\mathrm{deg}^2`$ survey cover a total area of $`260\mathrm{deg}^2`$ and, scaling the aforementioned numbers appropriately, we would expect $`7.6`$ (using the De Grandi et al. 1999 XCLF) or $`5.5`$ (using the Ebeling et al. 1997 XCLF) clusters in the luminosity range $`5\times 10^{44}\le \mathrm{L}_\mathrm{x}\le 10^{45}\mathrm{erg}\,\mathrm{s}^{-1}`$ and redshift range $`0.3\le z\le 0.7`$. By comparison, this joint survey contains only 1 confirmed X–ray luminous, high redshift cluster (on–going spectroscopic redshift measurements of further candidate clusters in the $`160\mathrm{deg}^2`$ survey have yet to reveal a single $`\mathrm{L}_\mathrm{x}>5\times 10^{44}\mathrm{erg}\,\mathrm{s}^{-1}`$ cluster; Vikhlinin, private communication). From Gehrels (1986), the statistical significance of this deficit is now 99.5% or 97.5% respectively. We note, however, that there are still candidates in both surveys that require further optical follow–up, i.e. one of the three unidentified Bright SHARC candidates mentioned in section 2 could be a high redshift cluster.
The above analyses suffer from small number statistics and incomplete optical follow–up. Moreover, the local XCLFs are also affected by small number statistics at these high X–ray luminosities. Therefore, the issue of evolution above $`\mathrm{L}^{}`$ in the XCLF remains unclear. However, we note that four independent surveys of distant clusters of galaxies have now seen a potential deficit of X-ray luminous high redshift clusters (SHARC, WARPS, EMSS, $`160\mathrm{deg}^2`$ survey). It is worth stressing that some level of XCLF evolution is expected above $`\mathrm{L}^{}`$ at high redshift and the degree of such evolution is a strong indicator of $`\mathrm{\Omega }_m`$ (see Oukbir & Blanchard 1992, 1997; Reichart et al. 1999).
The way to resolve these problems is to construct larger samples of clusters over bigger areas of the sky. This is certainly possible for the local XCLF, e.g. using the on–going REFLEX survey of clusters being constructed from the RASS (see Böhringer et al. 1998). However, for the distant XCLF, it is unlikely that a significant amount of further area ($`\gg 260\mathrm{deg}^2`$) can be added to the present surveys using the existing ROSAT pointing archive; one may have to wait for the XMM satellite (see Romer 1998). The next major improvements to the results presented in this paper will be to obtain a more detailed view of the Bright SHARC selection function as well as to complete the optical follow–up of the remaining candidates. Another improvement would be the possible combination of all the ROSAT distant surveys (SHARC, $`160\mathrm{deg}^2`$, RDCS, NEP & WARPS), thus maximising the amount of volume sampled at high redshift.
## 6 Acknowledgements
The authors would like to thank Alain Blanchard, Jim Bartlett, Francisco Castander, Harald Ebeling, Pat Henry, Andrew Liddle, Piero Rosati & Alex Vikhlinin for helpful discussions over the course of this work. This research was supported through NASA ADP grant NAG5-2432 (at NWU) and NASA LTSA grant NAG5-6548 (at CMU). AM thanks the Carnegie Mellon Undergraduate Research Initiative for financial assistance. We thank Alain Mazure and the IGRAP–LAS (Marseille, France) for their support.
# Comment on “A simple expression for the terms in the Baker–Campbell–Hausdorff series”
Hiroto Kobayashi
Department of Applied Physics, School of Engineering, The University of Tokyo
Hongo, Bunkyo-ku, Tokyo 113-8656, Japan
## Abstract
It is pointed out that Reinsch’s matrix operation formulation of calculating the Baker–Campbell–Hausdorff series \[math-ph/9905012\] is equivalent to the straightforward series expansion. The amount of calculation does not decrease with his method.
Using matrix operations, Reinsch proposed a simple expression for the terms in the Baker–Campbell–Hausdorff series of $`\mathrm{log}(\mathrm{exp}x\mathrm{exp}y)`$, where $`x`$ and $`y`$ are noncommutative variables. We here point out that his formulation applies to the series expansion of general functions of noncommutative variables. In fact, the matrix formulation is equivalent to the straightforward series expansion in the sense that the amount of calculation does not decrease with his method.
Let us show in the following the relation between the series expansion and the matrix operation. Suppose that $`a`$ and $`b`$ are polynomials of noncommutative variables $`x,y,z,\mathrm{}`$. We express the polynomials as
$$a=\sum _{i=0}^{\infty }a_i,\qquad b=\sum _{i=0}^{\infty }b_i,$$
(1)
where $`a_i`$ and $`b_i`$ denote the $`i`$th-order terms. When we need the series expansion of the product $`c=ab`$ up to the $`n`$th order, we define two $`(n+1)\times (n+1)`$ matrices $`A`$ and $`B`$ as
$$A_{ij}=\sum _{k=0}^{n}a_k\delta _{i+k,j},\qquad B_{ij}=\sum _{k=0}^{n}b_k\delta _{i+k,j}.$$
(2)
We obtain the $`i`$th-order term $`c_i`$ of the product of $`a`$ and $`b`$ as
$$c_i=C_{1,i+1}\quad \mathrm{with}\quad C=AB\quad \mathrm{for}\quad i\le n.$$
(3)
We derive Eq. (3) in the following way. First we note that
$`ab`$ $`=`$ $`(a_0+a_1+\cdots +a_n+\cdots )(b_0+b_1+\cdots +b_n+\cdots )`$ (4)
$`=`$ $`a_0b_0+(a_0b_1+a_1b_0)+\cdots +(a_0b_n+a_1b_{n-1}+\cdots +a_nb_0)+\cdots `$
$`=`$ $`{\displaystyle \sum _{i=0}^{\infty }}{\displaystyle \sum _{j=0}^{i}}a_jb_{i-j},`$
that is,
$$c_i=\sum _{j=0}^{i}a_jb_{i-j}.$$
(5)
On the other hand, the equation $`C=AB`$ is
$$\left(\begin{array}{ccccc}c_0& c_1& \cdots & \cdots & c_n\\ & c_0& c_1& & \vdots \\ & & \ddots & \ddots & \vdots \\ & & & \ddots & c_1\\ & & & & c_0\end{array}\right)=\left(\begin{array}{ccccc}a_0& a_1& \cdots & \cdots & a_n\\ & a_0& a_1& & \vdots \\ & & \ddots & \ddots & \vdots \\ & & & \ddots & a_1\\ & & & & a_0\end{array}\right)\left(\begin{array}{ccccc}b_0& b_1& \cdots & \cdots & b_n\\ & b_0& b_1& & \vdots \\ & & \ddots & \ddots & \vdots \\ & & & \ddots & b_1\\ & & & & b_0\end{array}\right).$$
(6)
It is obvious that Eq. (6) yields Eq. (5), and hence Eq. (3) is proved.
Furthermore, the $`i`$th-order term of the summation $`a+b`$ is obtained from $`(A+B)_{1,i+1}`$. Combining this with Eq. (3), we obtain the fact that for a general polynomial $`f(a)`$, the $`i`$th-order term $`[f(a)]_i`$ is given by $`[f(A)]_{1,i+1}`$. Moreover, the matrix $`f(A)`$ has the same form as $`A`$, that is
$$[f(A)]_{ij}=\sum _{k=0}^{n}[f(a)]_k\delta _{i+k,j}.$$
(7)
Thus for two general polynomials $`f(a)`$ and $`g(b)`$, the $`i`$th-order term $`[f(a)g(b)]_i`$ is given by $`[f(A)g(B)]_{1,i+1}`$. A function $`h(f(a)g(b))`$ is also calculated from $`h(f(A)g(B))`$. In the above way, we find that the matrix operation is equivalent to the straightforward series expansion.
Reinsch specifically calculated $`\mathrm{log}(\mathrm{exp}M\mathrm{exp}N)`$ for two $`(n+1)\times (n+1)`$ matrices $`M`$ and $`N`$ defined as
$$M_{ij}=\delta _{i+1,j}x=\left(\begin{array}{ccccc}0& x& 0& \cdots & 0\\ & 0& x& \ddots & \vdots \\ & & 0& \ddots & 0\\ & & & \ddots & x\\ & & & & 0\end{array}\right),\qquad N_{ij}=\delta _{i+1,j}y=\left(\begin{array}{ccccc}0& y& 0& \cdots & 0\\ & 0& y& \ddots & \vdots \\ & & 0& \ddots & 0\\ & & & \ddots & y\\ & & & & 0\end{array}\right).$$
(8)
As shown above, the fact that $`[\mathrm{log}(\mathrm{exp}M\mathrm{exp}N)]_{1,i+1}`$ gives the $`i`$th-order term of $`\mathrm{log}(\mathrm{exp}x\mathrm{exp}y)`$ is only a special case of this general equivalence. (Note that the logarithm and the exponential function of noncommutative variables are defined by their polynomial expansion.) The amount of calculation to obtain the $`i`$th-order term does not decrease with Reinsch’s method. For example, in order to calculate the product of $`\mathrm{exp}x`$ and $`\mathrm{exp}y`$, we need to multiply matrix elements once for the zeroth order, twice for the first order, and three times for the second order, as shown in the following equation:
$$\left(\begin{array}{ccc}1& x& \frac{1}{2}x^2\\ 0& 1& x\\ 0& 0& 1\end{array}\right)\left(\begin{array}{ccc}1& y& \frac{1}{2}y^2\\ 0& 1& y\\ 0& 0& 1\end{array}\right)=\left(\begin{array}{ccc}1& x+y& \frac{1}{2}x^2+xy+\frac{1}{2}y^2\\ 0& 1& x+y\\ 0& 0& 1\end{array}\right).$$
(9)
The number of multiplications is the same as in the corresponding straightforward series expansion.
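To make the equivalence concrete, here is a minimal sketch (assuming the SymPy library; the code and all names in it are ours, not Reinsch’s) that builds the matrices of Eq. (8), expands the exponentials and the logarithm as terminating series of nilpotent matrices, and reads off the $`i`$th-order term of $`\mathrm{log}(\mathrm{exp}x\mathrm{exp}y)`$ from entry $`(1,i+1)`$ of the result.

```python
import sympy as sp

n = 3  # expand log(exp x exp y) up to third order
x, y = sp.symbols('x y', commutative=False)
I = sp.eye(n + 1)

def shift(v):
    """(n+1)x(n+1) matrix with v on the first superdiagonal, as in Eq. (8)."""
    return sp.Matrix(n + 1, n + 1, lambda i, j: v if j == i + 1 else 0)

def exp_series(A):
    # A is nilpotent (A**(n+1) = 0), so the exponential series terminates.
    return sum((A**k / sp.factorial(k) for k in range(1, n + 1)), I)

def log_series(P):
    # log(I + B) = B - B**2/2 + B**3/3 - ...; B = P - I is nilpotent too.
    B = P - I
    return sum(((-1)**(k + 1) * B**k / k for k in range(1, n + 1)),
               sp.zeros(n + 1, n + 1))

L = log_series(exp_series(shift(x)) * exp_series(shift(y)))
for i in range(1, n + 1):
    print(f'order {i}:', sp.expand(L[0, i]))
# order 1: x + y
# order 2: x*y/2 - y*x/2, i.e. [x, y]/2
```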
Finally, we would like to draw attention to the NCAlgebra package for Mathematica, which is useful for series expansions containing noncommutative variables.
I would like to thank Prof. N. Hatano for his useful comments.
M.W. Reinsch: math-ph/9905012.
`http://math.ucsd.edu/~ncalg/`.
# Detached white-dwarf close-binary stars – CV’s extended family
## 1 Introduction
Cataclysmic variable stars (CVs) have not always been as we see them today. They evolve from pairs of main-sequence stars in relatively long period orbits. We know this because the white dwarf components of CVs were once the cores of giant stars much larger than the CVs are now. The standard explanation for this invokes a phase during which both stars orbit within a single envelope (derived from the giant star). As the stars orbit they lose angular momentum to the envelope which is ejected, leaving a much tighter binary star.
This so-called “common-envelope phase” does not produce a CV: some other angular momentum loss, such as magnetic braking, is required to further whittle down the orbit before mass transfer from the still-unevolved secondary star can get underway. Clearly we must expect to find binary stars which have gone through common-envelope evolution, but have yet to become CVs. These stars, which for simplicity we will call pre-CVs – although they will not always manage to become CVs – should consist of white dwarf stars with low mass companions, typically M dwarf stars. I will look at examples of these stars in section 3. They are of direct interest to evolutionary models of CVs and give us clean examples of irradiated stellar atmospheres.
After the common-envelope phase, the binary may still be of sufficiently large separation that it cannot become a CV before the secondary star has itself evolved. If this occurs one can expect a second common-envelope phase. If the binary survives this, a pair of white dwarfs or a “double-degenerate” may emerge; such systems may also be produced from the remnants of Algols. I will refer to them as DDs. There has been much interest in DDs mainly because they are a possible progenitor system of Type Ia supernovae. The idea here is that as gravitational wave radiation shortens their orbital periods, DDs will eventually start mass transfer at orbital periods of order 100 seconds. While they may survive this (and then emerge as AM CVn stars), it is likely instead that they will merge. If the merged product exceeds the Chandrasekhar limit, collapse will occur which might ignite fusion violently enough to give a Type Ia supernova, with no remnant. The biggest problem with this model appears to be whether explosions occur as opposed to much more gentle collapses leaving neutron stars; this is largely a matter for theoretical models. However, a different aspect is directly testable: if DDs are Type Ia progenitors then there should be a population of DDs with total masses above the Chandrasekhar limit and with periods short enough to merge within the lifetime of the Galaxy, which works out at about 10 hours. I now turn to what is known about DDs.
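As a rough consistency check of the 10-hour figure (a back-of-the-envelope estimate of our own, not a calculation from the text), the standard Peters (1964) quadrupole formula gives the gravitational-wave inspiral time of a circular binary; the component masses of 0.6 solar masses each are an illustrative assumption.

```python
import math

G, c, Msun = 6.674e-11, 2.998e8, 1.989e30  # SI units
yr = 3.156e7                               # seconds per year

def merger_time(P_hours, m1=0.6 * Msun, m2=0.6 * Msun):
    """GW inspiral time (yr) from orbital period P (hours), circular orbit."""
    M = m1 + m2
    P = P_hours * 3600.0
    a = (G * M * P**2 / (4 * math.pi**2)) ** (1.0 / 3.0)  # Kepler's third law
    tau = 5 * c**5 * a**4 / (256 * G**3 * m1 * m2 * M)    # Peters (1964)
    return tau / yr

print(f'{merger_time(10.0):.1e} yr')  # ~1e10 yr: of order the Galaxy's age
```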
## 2 Double-Degenerates
The first double-degenerate discovered, L870-2 \[Saffer et al., 1988\], consists of two cool ($`7000`$ K) white dwarfs in a $`1.56`$ day period orbit. Around the same time as this discovery, there were three surveys to find the short period population relevant to Type Ia supernovae \[Robinson & Shafter 1987, Bragaglia et al., 1990, Foss et al., 1991\]. These were mostly unsuccessful, although a system called 0957-666 was found to have a 1.18 day period \[Bragaglia et al., 1990\].
Soon after this work, model atmosphere and evolutionary model fits to the spectra of white dwarfs revealed a population of low mass ($`<0.45\mathrm{M}_{\odot }`$) objects \[Bergeron et al., 1992, Bragaglia et al., 1995\]. On the other hand, white dwarfs which evolve from single stars within the age of the Galaxy are expected to have a minimum mass of around $`0.55\mathrm{M}_{\odot }`$. The models are dependent upon the uncertainties of mass loss on the AGB, but some white dwarfs have masses as low as $`0.33\mathrm{M}_{\odot }`$, which is too low for them even to have reached the AGB. These must be the helium cores of stars which failed to advance beyond the RGB, perhaps as a result of mass loss within a binary. This suggested that concentrating on the low mass white dwarfs might be an effective method for finding close binaries, as indeed proved to be the case \[Marsh et al., 1995, Marsh, 1995\]. This has raised the number of DDs with measured periods to 15, with another 7 sdB binaries that probably have white dwarf companions (see Table 1). During this work it was also found that the original period determination for 0957-666 was in error; the revised value of $`0.061`$ days remains the shortest known for these systems \[Moran et al., 1997\].
The observed periods are compared to the results of binary “population synthesis” \[Iben et al., 1997\] in Fig. 2. I have assumed that all the sdB stars
in the left column of Table 1 have white dwarf companions and that they will emerge as DDs with little alteration in period; this remains to be proved. The essential result of the comparison is that theory and observation match fairly well, although there is perhaps a hint that there may be a dearth of DDs with periods around $`0.5`$ days.
Things become more interesting when one looks at the 2-dimensional distribution of mass and period (Fig. 2); the relative reliability of mass determination for non-accreting white dwarfs is a significant advantage compared to the normal case for CVs. Fig. 2 shows a significant discrepancy between theory and observation. Theory predicts the existence of a large fraction of very low mass white dwarfs ($`\sim 0.25\mathrm{M}_{\odot }`$) which are not observed; I can think of no plausible observational selection effect to side-step this discrepancy. Reinforcing this problem, particular systems, such as 0957-666 (the left-most point), lie in regions of near-zero probability according to theory. While the theory has many free parameters that can be adjusted to produce a better fit, the absence of very low mass white dwarfs is a puzzle as it suggests that for some reason we never see the results of mass loss early on the RGB.
### 2.1 Numbers of DDs
With only 15 bona-fide DDs with measured periods, compared to over 300 CVs \[Ritter & Kolb 1998\], it may seem that they are relatively rare. In fact the reverse is the case: my best guess at the space density of DDs is $`5\times 10^{-4}\mathrm{pc}^{-3}`$, of order 20 times that of CVs, including the very faint and so far undetected CVs presumed to have “bounced” at 80 mins orbital period \[Politano 1996\]. The estimate for DDs is based on the relatively well determined space density for all white dwarfs \[Knox et al., 1999\] and the roughly 10% of white dwarfs that are DDs \[Saffer et al., 1998, Maxted & Marsh, 1999\]. The difference in observed numbers is down to ease of detection. This means that there are some 250 million DDs in the Galaxy, with perhaps 1 million systems with periods of less than an hour; they are likely to be the dominant source of low frequency gravitational waves in the Galaxy \[Hils et al., 1990\].
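The bookkeeping behind these numbers is simple; in the fragment below the effective Galactic volume is inferred from the quoted figures (it is not stated in the text), so treat it as an implied quantity rather than a measurement.

```python
rho_dd = 5e-4              # DD space density, pc^-3 (quoted above)
n_dd = 250e6               # ~250 million DDs in the Galaxy (quoted above)
v_eff = n_dd / rho_dd      # implied effective Galactic volume, pc^3
rho_cv = rho_dd / 20       # "of order 20 times that of CVs"
print(f'volume ~ {v_eff:.0e} pc^3, CV density ~ {rho_cv:.1e} pc^-3')
```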
Can DDs be the progenitors of Type Ia supernovae? We have now found systems of short enough period, and one, KPD 0422+5421 \[Koen et al., 1998\], may even have enough mass. In terms of numbers, and leaving aside the issue of whether they really explode on merging, the answer would appear to be yes, they remain a viable progenitor. While we have not found convincing examples of systems with enough mass, these are probably just rare; only about 1 in 40 of DDs is expected to be such a system \[Iben et al., 1997\] and we have been concentrating specifically on low mass systems.
## 3 Pre-CVs
When one searches for DDs, one also finds pre-CVs. I define these as binaries containing a white dwarf (or sub-dwarf which will evolve into a white dwarf) and an M dwarf companion. Higher mass companions are excluded because (a) it becomes hard to see the white dwarf if the companion is too bright and (b) theoretically, CVs are descended from systems with mass ratio $`q=M_{\mathrm{MS}}/M_{\mathrm{WD}}<0.28`$ \[Politano 1996\], and since white dwarfs are usually below a solar mass, this implies M dwarf companions. There are 27 pre-CVs known; 22 are white dwarf/M dwarf systems, and 5 are sub-dwarf/M dwarf systems.
The observed periods are compared to theory in Fig. 4.
Observations and theory do not compare well. In this case, however, I think it is likely that observational selection could be to blame for the lack of systems at both long and short periods. At long periods, the radial velocity of the white dwarf is relatively low and irradiation-induced emission from the M dwarf will be weak. There are a good number of white dwarf/M dwarf pairs known which don’t have measured orbital periods, and the long period systems may well be lurking amongst them. At short periods only very low mass M stars can remain inside their Roche lobe; for example the shortest period system listed, GD 448, has a mass of $`0.09\mathrm{M}_{\odot }`$, barely above the brown dwarf limit \[Maxted et al., 1998\]. It is difficult to see any sign of the M dwarf in GD 448, with only weak emission at H$`\alpha `$; we may well be missing still shorter period systems.
The masses of the white dwarfs in pre-CVs are not as well determined as they are for DDs because the line profiles are often filled in by emission from the M dwarf. However, there are enough known to be certain that helium core white dwarfs exist in some numbers. I define helium-core white dwarfs as those with masses $`<0.5\mathrm{M}_{\odot }`$; some are borderline, but there is little doubt for systems such as GD 448 ($`M_{\mathrm{WD}}=0.41\pm 0.01\mathrm{M}_{\odot }`$) and RR Cae ($`0.36\pm 0.04\mathrm{M}_{\odot }`$). Therefore there must be helium-core white dwarfs amongst the CVs too, as expected theoretically \[Politano 1996\], although observational selection effects which favour high masses may count against their detection.
There are four eclipsing white dwarf/M dwarf systems known (GK Vir, RR Cae, NN Ser and EC 13471-1258). Observations of these have the potential to provide accurate system parameters and to detect orbital period changes as may be caused by solar-type magnetic cycles. These systems cover a range of M dwarf mass, and it would be particularly interesting, for example, to see if the period of RR Cae, which has a very low mass M dwarf ($`0.09\mathrm{M}_{\odot }`$), changes, since the standard disrupted magnetic braking model would suggest a low level of magnetic activity in such a star.
The numbers of pre-CVs are comparable to those of DDs, i.e. they are intrinsically much more common than CVs. It may prove difficult to detect potential period-gap-crossing systems against this background.
### 3.1 Irradiation in pre-CVs
The pre-CVs provide clean systems for the study of irradiation of stars. A result of interest for CV studies is that the Balmer emission lines induced by irradiation are significantly broadened by optical depth effects (Fig. 4) \[Maxted et al., 1998, Wood et al., 1999\]. The broadening is of order $`40\mathrm{km}\mathrm{s}^{-1}`$, which is enough to severely limit their usefulness for imaging the secondary star in CVs; the CaII lines seem to be a better option (Fig. 4). The same broadening is seen in chromospherically active stars, which is perhaps surprising given the rather different mechanisms producing the lines.
## 4 Conclusions
Over the last ten years the number of double-degenerate binaries has gone from 1 to 15 and it is apparent that they are intrinsically extremely common within our Galaxy, with a space density of order $`5\times 10^{-4}\mathrm{pc}^{-3}`$. Their short periods, which range from $`1.5`$ hours to a few days, are a testament to the orbital shrinkage involved in ejecting the envelopes of the progenitors of the two white dwarfs. In terms of numbers, they remain a viable progenitor class for Type Ia supernovae.
The pre-CVs have grown similarly in number, although observational selection affects detection at periods above a day or so and below two hours. Amongst them are helium core white dwarfs, and presumably this must be the case for CVs too. Irradiation-induced Balmer emission is broadened by radiative transfer effects, and should be avoided in favour of CaII for imaging the secondary stars in CVs.
## 5 Acknowledgements
I thank Pierre Maxted for many conversations about these systems, and Zhanwen Han, Lev Yungelson and Gijs Nelemans for insights into evolutionary theory.
# Probing the quark-gluon plasma with a new Fermionic correlator
## Abstract
We present the first measurement of a new correlation function of Fermion bilinears in finite temperature QCD with and without dynamical quarks in a quantum number channel in which non-trivial correlations are known to be present for purely gluonic operators. We find that the Fermion correlator vanishes for $`T\ge 3T_c/2`$, in agreement with the expectation for weakly interacting quarks in a quark-gluon plasma.
preprint: hep-lat/9906018 TIFR/TH/99-29
The Relativistic Heavy Ion Collider (RHIC) at BNL, New York, and the Large Hadron Collider (LHC) at CERN, Geneva, may yield a new state of matter, called Quark-Gluon Plasma (QGP), which could have existed in our universe a few microseconds after the Big Bang. It is a theoretically challenging task to deduce from first principles as many properties of the plasma as possible. Such a program may help in devising clear and unique signals of QGP.
Lattice simulations of Quantum Chromo Dynamics (QCD) have provided a robust approach based on first principles towards this end. Such simulations of field theories in equilibrium at finite temperature ($`T`$) use a discretisation of the Euclidean formulation for partition functions—
$$Z(\beta )=\int 𝒟\varphi \mathrm{exp}\left[-\int _0^{1/T}𝑑t\int d^3xℒ(\varphi )\right],$$
(1)
where $`\varphi `$ is a generic field, $`ℒ`$ the Lagrangian density, and the Euclidean “time” runs from 0 to $`1/T`$. The path integral is over field configurations which are periodic (anti-periodic) in Euclidean time for the Bosonic gluon (Fermionic quark) fields. Due to a lack of symmetry between the space and Euclidean time directions in eq. (1), this problem has only a subgroup of the full 4-dimensional rotational symmetry of the $`T=0`$ Euclidean theory. Since the partition function above contains equal weights for all configurations which are related by these symmetries, only those operators which transform as scalars under this reduced symmetry group have non-vanishing expectation values.
For the lattice discretised problem the symmetry groups reduce to discrete subgroups of the continuum symmetry groups. It is useful to write the partition function of eq. (1) as the trace of the transfer matrix in one of the spatial directions. Correlation functions along that direction can then be classified by the irreducible representations (irreps) of the symmetry group of the transfer matrix. Unlike operator expectation values, correlation functions are generally non-vanishing in all representations— not just the scalar. At $`T=0`$, the symmetry group of the transfer matrix for QCD using the staggered Fermions for quarks has been studied extensively, and the representations of corresponding Fermion bilinear correlation functions are well known . The symmetries and representations of screening correlation functions at finite temperature have been worked out recently .
The main point of this last analysis is that the symmetry group of the $`T>0`$ transfer matrix is smaller than that of the $`T=0`$ transfer matrix. As a result, the $`T=0`$ irreps reduce further at finite temperature. All correlation functions block diagonalise under the isometry group of the spatial slice of the thermal lattice, the dihedral group $`D_4^h`$. As an example, a correlation function in, say, the $`z`$-direction of any vector (V) or pseudo-vector (PV) operator, $`(V_x,V_y,V_t)`$, in the $`T=0`$ theory breaks up into two scalar ($`A_1^+`$) irreps of $`D_4^h`$, the components $`V_t`$ and $`V_x+V_y`$, and a $`B_1^+`$ irrep $`V_x-V_y`$. This happens for gluonic Wilson-loop operators such as plaquettes, and also for the quark bilinear operators .
The plaquette operators, restricted to a $`z`$-slice, transform as a PV set at $`T=0`$ and provide a good example of this reduction. The combinations $`P_{xy}`$ and $`P_{tx}+P_{ty}`$ transform as the $`A_1^+`$ (scalar) component of the PV and have non-vanishing expectation values at finite temperature. On the other hand, $`P_{tx}-P_{ty}`$ transforms as the $`B_1^+`$ . Due to the reasons given earlier, this last expectation value must vanish, and we show later that it does. However, the correlation function need not, and, indeed, does not. Non-trivial screening has been observed through gauge-invariant gluonic correlation functions in all the other quantum number channels (labelled by the irreps of $`D_4^h`$) as well.
Screening masses obtained from correlation functions built out of staggered Fermion field operators have also been extensively studied in the past . The correlators which have been measured in the past are the $`A_1^+`$ from the scalar (S) and pseudo-scalar (PS) channels, and $`A_1^+`$ combinations of the vector (V) and pseudo-vector (PV) channels. The two $`A_1^+`$ correlators descending from S and PS see a lower screening mass ($`\mu `$) than those descending from the V and PV. The latter are consistent with the expectation from free Fermion field theory—
$$\mu a=2\mathrm{sinh}^{-1}\left(\sqrt{(ma)^2+\mathrm{sin}^2\left(\frac{\pi }{N_t}\right)}\right),$$
(2)
where $`a`$ is the lattice spacing, $`m`$ the quark mass, and $`N_t`$ is the number of lattice sites in the Euclidean time direction ($`T=1/N_ta`$). Even some other measurements, such as those of “wavefunctions”, which seemed to indicate a more complicated picture , can be understood in terms of weakly interacting quarks . Here we re-examine the screening masses with the complete decomposition of operators into irreps of the finite temperature invariance group. In particular, we report in this letter the results of the first measurement of the $`B_1^+`$ correlation function constructed from local Fermion bilinears (see Table III of ) and compare our results with those obtained with gluonic operators.
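For orientation, eq. (2) is straightforward to evaluate for the lattices used in this work. The sketch below is a minimal Python transcription of the formula, nothing more, giving the free-quark screening masses for $`N_t=4`$ and the two bare quark masses quoted in the next paragraph.

```python
# Free-field screening mass, eq. (2):
# mu*a = 2 asinh( sqrt( (ma)^2 + sin^2(pi/N_t) ) )
import math

def screening_mass(ma, nt):
    return 2.0 * math.asinh(math.sqrt(ma**2 + math.sin(math.pi / nt)**2))

for ma in (0.015, 0.010):              # bare quark masses of the two runs
    mu_a = screening_mass(ma, nt=4)
    print(f"ma = {ma:.3f}: mu*a = {mu_a:.4f}, mu/T = {4 * mu_a:.3f}")  # T = 1/(N_t a)
```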
We have simulated QCD with four light degenerate flavors of quarks at temperatures above the phase transition temperature, $`T_c`$, on lattices of size $`4\times 10^2\times 16`$, using the Hybrid Monte Carlo (HMC) algorithm . The longest direction, $`N_z`$, was chosen to be four times the Euclidean time direction, $`N_t`$, so that correlations could be followed to a distance of $`2/T`$. One simulation was performed at $`T=3T_c/2`$ with the coupling $`\beta =5.1`$ and the quark mass $`m=0.015/a`$, where $`a`$ is the lattice spacing. The second simulation was made at $`T=2T_c`$ with $`\beta =5.15`$ and $`m=0.01/a`$. With our choice of $`N_t=4`$, $`a=1/4T`$. The temperature identifications are made using previous measurements of the critical coupling on lattices with larger values of $`N_t`$ . Companion runs were made in quenched QCD on lattices of the same size at couplings corresponding to $`3T_c/2`$ and $`2T_c`$ for the quenched theory using a Cabibbo-Marinari pseudo-heat-bath algorithm .
We thermalised the HMC simulation at $`3T_c/2`$ with two different runs— one starting from an ordered gauge configuration, and the other from a pure gauge configuration thermalised at $`2T_c`$. Agreement in measurements of all thermodynamic quantities was used to decide on thermalisation. The plaquette average turned out to be the most stringent test, since it is the least noisy. At $`2T_c`$ thermalisation was tested by checking that a run starting from an ordered gauge configuration gave the same thermodynamics as one starting from a thermal $`3T_c/2`$ configuration.
Once thermalisation was achieved, two runs were made at each temperature— one with a trajectory length of one molecular dynamics (MD) time unit, and another with a trajectory length half as long. At $`3T_c/2`$ statistics were collected from 875 such configurations generated using the long trajectory, and 285 with the short trajectory. Previous studies have shown that the physics is much simpler at $`2T_c`$. We did smaller runs at this temperature— collecting statistics of 445 configurations with the short trajectory length and 100 with the long trajectories.
The question of autocorrelations is important whenever statistical inferences are to be made. It was found that autocorrelations of local operators, such as plaquettes, were the same with the two different trajectory lengths mentioned above. However, with any simulation algorithm that undergoes critical slowing down, short distance operators are decorrelated faster than those which are dominated by large distance scales. Thus, the effective number of measurements of short distance operators is not the same as that for extended operators. This is most problematic for correlation function measurements, where the correlator at different distances may have entirely different autocorrelations. These are usually difficult to measure directly and a different approach to the problem seems necessary.
If the errors in Fermion bilinear correlation functions, $`\mathrm{\Delta }C(z)`$, are evaluated with the assumption that there are no autocorrelations, then they depend systematically on the separation $`z`$. We found that for sufficiently large statistics, $`\mathrm{\Delta }C(z)`$ falls exponentially with $`z`$ and its logarithmic slope is almost independent of statistics. Hence it is possible to quote a single number as a figure of merit for decorrelation—
$$D=\frac{\mathrm{\Delta }C(N_z/2)}{\mathrm{\Delta }C(0)}.$$
(3)
$`D`$ is usually larger than unity, and depends on the specifics of the simulation algorithm and its tuning. Since smaller values of $`D`$ are preferable, tuning the algorithm should be done to minimise this.
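In code, eq. (3) is just a ratio of two entries of the naive error array; the helper below is a minimal sketch, and the error profile fed to it is an arbitrary placeholder rather than a measured one.

```python
# Eq. (3): decorrelation figure of merit from the naive errors Delta C(z),
# stored as delta_c[z] for z = 0 .. N_z - 1. Placeholder numbers only.
def figure_of_merit(delta_c):
    return delta_c[len(delta_c) // 2] / delta_c[0]

delta_c = [2.0e-4 * 1.3 ** min(z, 16 - z) for z in range(16)]  # synthetic
print(f"D = {figure_of_merit(delta_c):.1f}")
```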
The values of $`D`$ obtained depended on the channel being studied: the largest values of $`D`$ were found in the $`A_1^+`$ irreps coming from the V or the PV, and the smallest in the $`B_1^+`$ irreps. In each channel, we found only a weak dependence of $`D`$ on the number of measurements— indicating that it is a direct measure of the efficiency of the algorithm.
We found that the long trajectories (1 MD time unit) give about half the value of $`D`$ as obtained with the shorter trajectory. Since it takes twice as long to run the longer trajectory, the computational effort, $`E=D\times (CPU\mathrm{time})`$, involved in getting equal statistical errors is the same for these two trajectory lengths. However, the analysis of correlated errors in screening mass measurements is simplified with the longer trajectories. In test runs with trajectories 2.5 MD time units long, we found no further decrease in $`D`$, and hence an increase in $`E`$. Such a dependence of $`E`$ on trajectory length is characteristic of the HMC simulation algorithm . In the dynamical QCD simulations, we found $`D`$ to lie in the range 250 to 1000. The quenched simulations were significantly easier to decorrelate— the value of $`D`$ was lower by a factor of roughly 40 compared to the full theory simulations. This information on autocorrelations has been incorporated in all our statistical analyses.
In Table I, we report our measurements of screening masses for the known $`A_1^+`$ correlators. Our results are in very good agreement with previous measurements in the quenched theory at $`N_t=4`$ . There is also a remarkable agreement between the $`A_1^+`$ screening masses obtained from the quenched and the dynamical Fermions simulations at both temperatures. The four $`A_1^+`$ irreps coming from the V and PV channels gave degenerate screening masses which agree extremely well with the free field theory estimate in eq. (2). The $`A_1^+`$ correlators in the S and PS channels gave smaller screening masses, which increase marginally with $`T`$.
Hadron mass measurements in 4-flavor QCD at quark mass $`ma=0.01`$ and $`\beta =5.15`$, corresponding to our runs at $`2T_c`$, have been performed before . A comparison with these $`T=0`$ measurements, listed in the last column of Table I, shows that our finite temperature screening masses are completely different—
$$\frac{\mu (T)}{m(T=0)}\approx \{\begin{array}{cc}3\hfill & \text{ (}A_1^+\text{ from PS),}\hfill \\ 2\hfill & \text{ (}A_1^+\text{ from V).}\hfill \end{array}$$
(4)
In contrast, earlier measurements for $`N_t=8`$ lattices yielded $`\mu /m\sim 1`$ for the $`A_1^+`$ screening mass in the vector channel V . This made it difficult to argue for a thermal effect, although $`\mu `$ agreed with eq. (2) even in that case. The present measurement resolves this problem.
Our main new result is the first measurement of the correlation function in the $`B_1^+`$ channels. Correlators in this irrep obtained with PV and V are identical up to a sign, configuration by configuration. Hence we restrict our attention only to the $`B_1^+`$ coming from the V. We found that the $`B_1^+`$ correlation functions vanish to within the measurement precision (see Figure 1). The correct statistical procedure is to quote the $`\chi ^2`$ value for the fit of the data by a correlation function which is identically zero. We found
$$\chi ^2/\mathrm{DOF}=\{\begin{array}{cc}7/15\hfill & \text{(}\frac{3}{2}T_c\text{, dynamical),}\hfill \\ 11/15\hfill & \text{(}2T_c\text{, dynamical).}\hfill \end{array}$$
(5)
The quenched runs gave very similar results—
$$\chi ^2/\mathrm{DOF}=\{\begin{array}{cc}7/15\hfill & \text{(}\frac{3}{2}T_c\text{, quenched),}\hfill \\ 5/15\hfill & \text{(}2T_c\text{, quenched).}\hfill \end{array}$$
(6)
The numbers in eq. (5) have been obtained with the statistics collected in the longer of the two runs made at each temperature. The HMC simulations with smaller statistics also gave very similar results at both these temperatures.
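The $`\chi ^2`$/DOF values above come from fitting the measured correlator with a function that is identically zero, so that each distance contributes $`[C(z)/\mathrm{\Delta }C(z)]^2`$. A minimal sketch of this bookkeeping, with synthetic numbers standing in for the measured $`B_1^+`$ correlator:

```python
# Zero-model chi^2 test: fit C(z) = 0 to a correlator with independent
# errors. The data below are synthetic stand-ins, not measured values.
import numpy as np

rng = np.random.default_rng(0)
sigma = np.full(15, 1.0e-3)        # assumed errors Delta C(z)
c = rng.normal(0.0, sigma)         # fake correlator consistent with zero

chi2 = np.sum((c / sigma) ** 2)
print(f"chi^2/DOF = {chi2:.1f}/{len(c)}")   # O(1) per DOF if consistent with 0
```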
One possible explanation for these remarkable results is an exact symmetry between the $`x`$ and $`y`$ directions, configuration by configuration. If this were so, then non-scalar operators would vanish, not only on the average, but identically. As a result, the correlation function in all but the scalar channel would also vanish.
We test for this symmetry using correlation functions of the $`B_1^+`$ plaquette operator, discussed earlier. Its average must vanish, and does—
$$\langle P_{tx}-P_{ty}\rangle =\{\begin{array}{cc}(0.6\pm 1.8)\times 10^{-4}\hfill & \text{(}\frac{3}{2}T_c\text{, quenched),}\hfill \\ (5.5\pm 3.8)\times 10^{-5}\hfill & \text{(}2T_c\text{, quenched).}\hfill \end{array}$$
(7)
However, the corresponding correlation function does not vanish. The same configurations used in the Fermionic correlator measurements lead to gluonic $`B_1^+`$ correlations which are clearly non-zero.
This correlation function is exhibited in Figure 2 for the quenched QCD simulations. Since the corresponding screening mass is large , the correlation function is somewhat noisy at large distances but it is clearly very different from zero at small distances. The test of a vanishing value of this correlation function gives $`\chi ^2/\mathrm{DOF}\approx 800`$. Thus the gauge configurations underlying our measurements have a strong configuration-to-configuration asymmetry between the $`x`$ and $`y`$ directions, allowing the gluonic $`B_1^+`$ correlators to survive.
Another possibility is that we have accidentally chosen an operator which has small overlap with the lowest $`B_1^+`$ eigenvector of the transfer matrix. One way to test this is to work with other operators. It is often the case that the overlap on the lowest eigenvector improves by delocalising the operator in some way. We constructed the “meson” operators using quark propagators computed with fuzzed links. This fails to improve the $`B_1^+`$ correlation function. We conclude that there must be a dynamical reason for the vanishing of the $`B_1^+`$ correlator in the high temperature phase of QCD, when measured using Fermion bilinears.
The numerical agreement of the screening masses extracted from the exponential fall-off of the $`A_1^+`$-correlators with free field values of eq. (2) has been used earlier to argue that one sees weakly interacting quarks in the high temperature phase of QCD. In the free-field theory limit, the Fermionic $`B_1^+`$ correlators studied here would vanish. Consequently, our observations can be seen as additional evidence for the weakly interacting picture. Indeed, comparing the $`\chi ^2`$ values, we find that the $`B_1^+`$ effective coupling in the quark sector is approximately 40 times smaller than that in the gluon sector, assuming the overlaps to be similar. Since the vanishing or existence of a correlation function is easy to observe, we believe that the $`B_1^+`$ correlator is a qualitatively better indicator of the non-interacting nature of the quarks in the quark-gluon plasma.
The drawback of this picture of non-interacting Fermions is well-known— the $`A_1^+`$ irreps coming from the S and PS channels are not degenerate with those coming from the V and PV channels. A plausible argument to understand this phenomenon is to note that the $`A_1^+`$ correlator coming from the S channel mixes with the glue sector of the theory. As a result, the screening mass in this channel will be contaminated by those in the gluonic $`A_1^+`$ sector, of which the lowest is the Debye screening mass, $`m_D`$. In quenched simulations at $`3T_c/2`$, $`m_D/T=2.8\pm 0.2`$ . Since the screening mass for the S channel lies in between $`m_D`$ and the V channel mass, as seen in Table I, it is consistent with such a conjecture.
We employed staggered Fermions for this investigation. It would be interesting to confirm these results for the Wilson Fermions as well. The symmetries of the lattice are, of course, independent of the type of Fermions employed and the group theoretic arguments apply with small modifications. In particular, the break-up of the zero temperature spectrum under the $`D_4^h`$ group proceeds without change, although the actual operators realising the irreps do change.
To summarize, we have used a new Fermionic correlator to demonstrate that the quarks in the quark-gluon plasma are weakly interacting already at temperatures as low as $`3T_c/2`$. The particular correlator we used is a much better probe of Fermion interaction strength than those used earlier.
## 1 Introduction
During the last few years, the low-energy effective theories of $`N=2`$ supersymmetric Yang-Mills theories in $`d=4`$ space-time dimensions with various gauge groups and matter representations have been the subject of much interest, following . At a generic point on the Coulomb branch of the moduli space of vacua, the massless degrees of freedom constitute $`r`$ free $`U(1)`$ vector multiplets, where $`r`$ is the rank of the microscopic gauge group. The dynamics of these multiplets is encoded in the special Kähler geometry of the Coulomb branch. The states of the theory are characterized by their electric and magnetic charges $`n_e`$ and $`n_m`$ with respect to the $`U(1)`$ gauge fields and possibly also some quark number charges $`S`$ (if the microscopic theory contains hyper multiplet matter). Given two such states with quantum numbers $`(n_e,n_m,S)`$ and $`(n_e^{},n_m^{},S^{})`$ respectively, we can form the ‘symplectic’ product
$$c=n_en_m^{}-n_mn_e^{}.$$
(1)
To see the physical significance of $`c`$, we can consider quantizing one of the particles in the presence of the other. The wave ‘function’ is then in fact a section of a non-trivial line bundle over (punctured) space, the first Chern class of which, when evaluated on an $`S^2`$, equals $`c`$. We say that the states are mutually non-local if $`c`$ is non-zero.
The mass $`M`$ of an arbitrary state obeys the inequality
$$M\ge |Z|,$$
(2)
where $`Z`$ is the central charge that appears in the $`N=2`$ supertranslations algebra. The central charge is in general a linear combination of conserved Abelian charges, e.g.
$$Z=an_e+a_Dn_m+mS,$$
(3)
where the coefficients $`a`$, $`a_D`$ and $`m`$ are some holomorphic functions of the moduli and parameters of the theory. States which saturate the bound (2), so called BPS states, are of particular interest. Being the lightest states in their charge sector, generically they cannot disappear as the moduli and parameters of the theory are varied. (The exception is when a domain wall of ‘marginal stability’ is reached, where the phases of the central charges of three BPS states are equal. It might then be possible for the heaviest particle to decay into the two lighter ones and be absent from the spectrum on the other side of the domain wall.)
Along certain submanifolds of the Coulomb branch, the central charges corresponding to certain sets of quantum numbers vanish. The corresponding BPS states, if they are present in the spectrum of the theory, would then be massless. The nature of the low energy effective theory depends crucially on the electric and magnetic quantum numbers of these states. If the symplectic product $`c`$ vanishes for every pair of massless particles, it is always possible to perform an electric/magnetic duality transformation after which all massless states have purely electric charges. The low energy effective theory would then be massless $`N=2`$ supersymmetric QED with some number of different flavours of matter. Our focus in this paper is the case when at least one symplectic product does not vanish, so that the theory necessarily contains both electrically and magnetically charged massless states. The low energy effective theory is then an exotic interacting $`N=2`$ superconformal theory with no known Lagrangian description.
An example where such a phenomenon could possibly occur was first discovered in pure $`SU(3)`$ super Yang-Mills theory, where the central charges for mutually non-local states vanish at a certain critical point in the moduli space . An equivalent critical point can be found in the moduli space of $`SU(2)`$ super Yang-Mills theory with a single hyper multiplet in the fundamental representation . There are also more complicated examples known. In all these examples, the relevant BPS states are known to exist in the spectrum in the weak-coupling region, where semi-classical methods can be trusted. The critical point is at strong coupling, though, and it is conceivable that these states have decayed at some domain wall of marginal stability before this point is reached. The purpose of this paper is to investigate a theory close to such a critical point to see whether the spectrum really contains arbitrarily light mutually non-local states.
Given a set of quantum numbers $`(n_e,n_m,S)`$, it is in general a difficult problem to determine if a corresponding BPS state exists in the spectrum (at a given point in the moduli space) . In principle, this question can be answered by realizing the theory as an $`M`$-theory configuration . We thus consider $`M`$-theory on an eleven-manifold $`M^{1,10}`$ of the form
$$M^{1,10}\simeq 𝐑^{1,3}\times 𝐑^3\times Q^4,$$
(4)
where the first factor is four-dimensional Minkowski space, the second factor is three dimensional Euclidean space, and the last factor is some four-dimensional manifold of $`SU(2)`$ holonomy. We also introduce an $`M`$-theory five-brane with a world-volume $`W^{1,5}`$ of the form
$$W^{1,5}\simeq 𝐑^{1,3}\times p\times \mathrm{\Sigma },$$
(5)
where $`p`$ is a point in $`𝐑^3`$ and $`\mathrm{\Sigma }`$ is a two-manifold in $`Q^4`$. The theory on $`W^{1,5}`$ then has an effective four-dimensional low-energy limit on $`𝐑^{1,3}`$, and this is the theory we are interested in. Excitations around the vacuum defined by $`M^{1,10}`$ and $`W^{1,5}`$ are described by $`M`$-theory two-branes (membranes) with world-volumes $`S^{1,2}`$ of the form
$$S^{1,2}\simeq \mathrm{\Gamma }\times p\times D,$$
(6)
where $`\mathrm{\Gamma }`$ defines the world-line of a particle in $`𝐑^{1,3}`$ and $`D`$ is a two-manifold in $`Q^4`$ whose boundary $`C=D`$ lies on $`\mathrm{\Sigma }`$. (That a two-brane can end on a five-brane has been shown in .) The mass $`M`$ of the state is given by
$$M=2\int _DV_D,$$
(7)
where $`V_D`$ is the volume-form of $`D`$. The homology class $`[C]`$ of $`C`$ determines the quantum numbers $`(n_e,n_m,S)`$ of the state. In particular, the intersection number $`[C][C^{}]`$ of two homology classes equals the symplectic product of the corresponding electric and magnetic charges, i.e.
$$[C][C^{}]=n_en_m^{}-n_mn_e^{}.$$
(8)
The $`SU(2)`$ holonomy of $`Q^4`$ means that this space is hyper Kähler, i.e. it admits a two-sphere $`S^2`$ of inequivalent complex structures $`J`$. Equivalently, it can be regarded as a Ricci-flat Kähler manifold, and therefore admits a covariantly constant holomorphic two-form $`\mathrm{\Omega }`$. The relationship between these two descriptions is as follows: Given a complex structure $`J`$, we have $`\mathrm{\Omega }\propto K^{}+iK^{\prime \prime }`$, where $`K^{}`$ and $`K^{\prime \prime }`$ are the Kähler forms corresponding to two other complex structures $`J^{}`$ and $`J^{\prime \prime }`$ such that $`J`$, $`J^{}`$ and $`J^{\prime \prime }`$ are all orthogonal. The requirement that the effective theory on $`𝐑^{1,3}`$ has $`N=2`$ supersymmetry is equivalent to demanding that $`\mathrm{\Sigma }`$ be holomorphically embedded in $`Q^4`$ with respect to some complex structure $`J`$ . The central charge $`Z`$ of a state corresponding to a two-manifold $`D`$ is then given by
$$Z=\int _D\mathrm{\Omega }_D,$$
(9)
where $`\mathrm{\Omega }_D`$ is the pullback (by the embedding map) of $`\mathrm{\Omega }`$ to $`D`$. One can show that the inequality (2) holds. It is saturated, i.e. the state is BPS, if $`D`$ is holomorphically embedded with respect to a complex structure $`J^{}`$ which is orthogonal to $`J`$. Given $`J`$, there is a circle $`S^1`$ of such $`J^{}`$, corresponding to the phase of the central charge $`Z`$.
Before we answer the question of the light spectrum near a critical point, we must decide exactly what states we are looking for. The strongest result would be to establish the existence of mutually non-local BPS states, the masses of which of course vanish as we approach the critical point where their central charges vanish. This is fairly difficult, though, since one would have to construct the corresponding two-manifolds $`D`$ exactly. The requirements that $`D`$ be holomorphically embedded with respect to the appropriate complex structure and also intersect $`\mathrm{\Sigma }`$ along a real curve are difficult to analyze, except in certain special situations. An alternative would be to simply find mutually non-local states, whose masses vanish at the critical point but which are not BPS. This would be a fairly weak result: Given two such states, one could construct a third such state by simply taking the sum of the corresponding two-manifolds $`D`$. We will instead consider an intermediate set of states that we call asymptotically BPS. By this we mean that
$$M/|Z|\rightarrow 1$$
(10)
as we approach the critical point. This implies that $`M`$ goes to zero as we approach the critical point. However, the sum of two such states is in general not of the same kind. It seems plausible that the existence of an asymptotically BPS state implies the existence of a BPS state, but we have no proof of this.
## 2 The computation
We will consider the case of $`SU(2)`$ super Yang-Mills theory with one hypermultiplet quark in the fundamental representation. As explained in , the hyper Kähler manifold $`Q^4`$ is in this case simply $`𝐑^3\times S^1`$ with coordinates $`X^4`$, $`X^5`$, $`X^6`$ and $`X^{10}`$, where $`X^{10}`$ is periodic with period $`2\pi `$. The complex structure $`J`$ can be described by declaring that $`s=X^6+iX^{10}`$ and $`v=X^4+iX^5`$ are holomorphic coordinates. The Kähler form $`K`$ and the covariantly constant holomorphic two-form $`\mathrm{\Omega }`$ are then given by
$`K`$ $`=`$ $`i\left(ds\wedge d\overline{s}+dv\wedge d\overline{v}\right)`$ (11)
$`\mathrm{\Omega }`$ $`=`$ $`2ds\wedge dv.`$ (12)
It is convenient to replace $`s`$ by the single-valued coordinate $`t=\mathrm{exp}(-s)`$. Then the complex surface $`\mathrm{\Sigma }`$ can be written as (, )
$$t^2+(v^2-\varphi ^2)t+\mathrm{\Lambda }^3(v-m)=0.$$
(13)
Here $`m`$ is the bare mass of the quark hypermultiplet, and the modulus $`\varphi ^2`$ parametrizes the moduli space of vacua. $`\mathrm{\Lambda }`$ is the dynamically generated scale of the theory, which we will choose so that $`\mathrm{\Lambda }^3=-2`$. If we solve the equation for $`s=-\mathrm{log}t`$ we then get
$$s=-\mathrm{log}\left(-\frac{1}{2}(v^2-\varphi ^2)+\sqrt{f}\right),$$
(14)
where
$$f=\frac{1}{4}(v^2-\varphi ^2)^2+2(v-m).$$
(15)
We see that $`\mathrm{\Sigma }`$ can be thought of as a double cover of the $`v`$-plane (because of the ambiguity of the square root). The two sheets meet at the four branch points, joined by branch cuts, where $`f`$ vanishes.
Generically, the branch points lie at different values of $`v`$, but by choosing $`m=\frac{3}{2}`$ and $`\varphi ^2=3`$ (the Argyres-Douglas point), we get $`f=\frac{1}{4}(v-1)^3(v+3)`$, i.e. three of the branch points coincide at $`v=1`$. We are interested in the geometry close to this point, so we write
$`m`$ $`=`$ $`{\displaystyle \frac{3}{2}}+\delta m`$ (16)
$`\varphi ^2`$ $`=`$ $`3+\delta \varphi ^2`$ (17)
$`v`$ $`=`$ $`1+u.`$ (18)
Dropping all non-leading terms in $`\delta m`$, $`\delta \varphi ^2`$ and $`u`$, we then get
$$s=u-\sqrt{f},$$
(19)
where
$$f=u^3-(\delta v)^3$$
(20)
and $`\delta v`$ is defined so that
$$(\delta v)^3=2\delta m-\delta \varphi ^2.$$
(21)
The three branch points are thus located at $`u=\omega \delta v`$, where $`\omega `$ is an arbitrary cubic root of unity. We define the homology classes $`[C_1]`$, $`[C_2]`$, and $`[C_3]`$ by representative curves $`C_1`$, $`C_2`$, $`C_3`$ that wind tightly around two of these branch points but not the third. (Only two of these classes are linearly independent, though.) It is easy to see that the intersection form of these curves is given by
$$\left(\begin{array}{ccc}0& 1& -1\\ -1& 0& 1\\ 1& -1& 0\end{array}\right)$$
(22)
so that any two of the corresponding states are mutually non-local.
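The branch-point degeneration at the Argyres-Douglas point is easy to verify numerically. The sketch below finds the roots of the quartic $`4f(v)=v^4-2\varphi ^2v^2+8v+(\varphi ^4-8m)`$, which follows directly from eq. (15); at $`m=3/2`$, $`\varphi ^2=3`$ three roots collapse onto $`v=1`$, and any small perturbation splits them again.

```python
# Branch points of f(v) = (1/4)(v^2 - phi2)^2 + 2(v - m), i.e. roots of
# 4f = v^4 - 2*phi2*v^2 + 8*v + (phi2^2 - 8*m).
import numpy as np

def branch_points(m, phi2):
    return np.sort_complex(np.roots([1.0, 0.0, -2.0 * phi2, 8.0, phi2**2 - 8.0 * m]))

print(branch_points(m=1.5, phi2=3.0))     # approx [-3, 1, 1, 1]: triple root
print(branch_points(m=1.51, phi2=3.0))    # slightly off the AD point: roots split
```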
To construct the surfaces corresponding to BPS states it is convenient to change variables from $`s`$ and $`u`$ to $`x`$ and $`y`$ defined by the relations
$`s`$ $`=`$ $`(x+y)\delta v`$ (23)
$`u`$ $`=`$ $`(xy)\delta v.`$ (24)
The equation for $`\mathrm{\Sigma }`$ then reads
$$(x+y)\delta v=(x-y)\delta v-\sqrt{(x-y)^3-1}(\delta v)^{3/2}+𝒪((\delta v)^2).$$
(25)
Solving for $`y`$, we get
$$y=-\frac{1}{2}\sqrt{x^3-1}(\delta v)^{1/2}+𝒪(\delta v).$$
(26)
In these variables, the Kähler form and the holomorphic two-form are
$`K`$ $`=`$ $`2i|\delta v|^2\left(dx\wedge d\overline{x}+dy\wedge d\overline{y}\right)`$ (27)
$`\mathrm{\Omega }`$ $`=`$ $`4(\delta v)^2dy\wedge dx.`$ (28)
We now consider a surface $`D`$ with a boundary $`C`$ on $`\mathrm{\Sigma }`$ that winds around the two branch points at $`x=\omega `$ and $`x=\omega ^{}`$, where $`\omega `$ and $`\omega ^{}`$ are two different cubic roots of unity, but not the third. The central charge is independent of the exact form of $`D`$, so we can parametrize it by the real parameters $`\xi `$ and $`\eta `$ such that $`-1\le \xi \le 1`$ and $`-1\le \eta \le 1`$ and let it be of the form
$`x`$ $`=`$ $`x(\xi )`$ (29)
$`y`$ $`=`$ $`\frac{\eta }{2}\sqrt{x^3(\xi )-1}(\delta v)^{1/2},`$ (30)
where the function $`x(\xi )`$ obeys $`x(-1)=\omega `$ and $`x(1)=\omega ^{}`$. The central charge is then given by
$$Z=4(\delta v)^2\int _\omega ^{\omega ^{}}𝑑x\int _{-1}^1\frac{d\eta }{2}\sqrt{x^3-1}(\delta v)^{1/2}=4(\delta v)^{5/2}\int _\omega ^{\omega ^{}}𝑑x\sqrt{x^3-1}.$$
(31)
In particular, $`|Z|\rightarrow 0`$ as $`\delta v\rightarrow 0`$. Notice that $`\mathrm{\Sigma }`$ in the limit $`\delta v\rightarrow 0`$ has $`Z_2\times Z_3`$ symmetry: $`y\rightarrow -y`$ and $`x\rightarrow \mathrm{exp}(2\pi i/3)x`$, and is a flat sheet: $`y=0`$. The three surfaces $`D`$ must break the $`Z_3`$ but have the $`Z_2`$ symmetry, and they must intersect with $`\mathrm{\Sigma }`$ orthogonally. The surface (29) has these properties, and it will turn out that the surface that is asymptotically BPS is really of this form. The induced metric on such a surface is
$`ds^2`$ $`=`$ $`2|\delta v|^2\left(|dx|^2+|dy|^2\right)`$ (32)
$`=`$ $`g_{\xi \xi }d\xi ^2+2g_{\xi \eta }d\xi d\eta +g_{\eta \eta }d\eta ^2,`$ (33)
where the components are
$`g_{\xi \xi }`$ $`=`$ $`2|\delta v|^2|x^{}(\xi )|^2+𝒪((\delta v)^3)`$ (34)
$`g_{\xi \eta }`$ $`=`$ $`𝒪((\delta v)^3)`$ (35)
$`g_{\eta \eta }`$ $`=`$ $`\frac{|\delta v|^3}{2}|\sqrt{x^3(\xi )-1}|^2+𝒪((\delta v)^4).`$ (36)
The mass is then
$$M=2\int _{-1}^1𝑑\eta \int _{-1}^1𝑑\xi \sqrt{detg}=4|\delta v|^{5/2}\int _{-1}^1𝑑\xi |x^{}(\xi )\sqrt{x^3(\xi )-1}|+𝒪((\delta v)^3).$$
(37)
Comparing (31) and (37), we see that $`M/|Z|\rightarrow 1`$ as $`\delta v\rightarrow 0`$ provided that the function $`x(\xi )`$ is chosen so that the phase of
$$x^{}(\xi )\sqrt{x^3(\xi )-1}$$
(38)
is independent of $`\xi `$. We have not succeeded in finding such functions analytically, but they are not difficult to approximate numerically. For example, if we take $`\omega =\mathrm{exp}(2\pi i/3)`$ and $`\omega ^{}=\mathrm{exp}(-2\pi i/3)`$, we find that the central charge (31) is real, i.e. the configuration is symmetric under complex conjugation. The same should be true for the expression (38), from which it follows that $`\mathrm{Re}x(\xi )`$ is an even and $`\mathrm{Im}x(\xi )`$ an odd function of $`\xi `$. We can then expand $`x(\xi )`$ around $`\xi =-1`$ as
$$x(\xi )=\mathrm{exp}(2\pi i/3)+x^{}(-1)(\xi +1)+𝒪((\xi +1)^2),$$
(39)
where the phase of $`x^{}(-1)`$ equals $`-4\pi /9`$. The curve $`x(\xi )`$ can be well approximated by a rather flat parabola. Rotating the configuration through $`2\pi /3`$ around the origin of the $`x`$-plane, we obtain the solutions for the two other homology classes. Hence we have found the three mutually non-local asymptotically BPS states.
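The reality of the central charge (31) for these endpoints can also be checked by direct quadrature. Since the integral depends only on the homotopy class of the path relative to the branch points, the straight chord from $`\mathrm{exp}(2\pi i/3)`$ to $`\mathrm{exp}(-2\pi i/3)`$ is a valid representative, provided the branch of the square root is kept continuous along it; placing the cut of the square root on the positive real axis (which $`x^3-1`$ never crosses on this chord) achieves that.

```python
# Quadrature check that the integral of sqrt(x^3 - 1) between the branch
# points exp(+-2*pi*i/3) is real, as claimed for the central charge (31).
import numpy as np
from scipy.integrate import quad

w1, w2 = np.exp(2j * np.pi / 3), np.exp(-2j * np.pi / 3)

def sqrt_cut(z):
    return 1j * np.sqrt(-z)        # branch cut on the positive real axis

def integrand(t, part):
    x = (1 - t) * w1 + t * w2
    val = sqrt_cut(x**3 - 1) * (w2 - w1)    # includes dx/dt for the chord
    return val.real if part == "re" else val.imag

re = quad(integrand, 0.0, 1.0, args=("re",))[0]
im = quad(integrand, 0.0, 1.0, args=("im",))[0]
print(f"integral = {re:.6f} + {im:.2e}i")   # imaginary part consistent with 0
```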
We have benefited from discussions with Philip Argyres and Piljin Yi. The research of M. H. is supported by the Swedish Natural Science Research Council (NFR).
# Evidence for the stratification of Fe in the photosphere of G191-B2B
## 1 Introduction
Following the first discovery of the presence of heavy elements in the photospheres of H-rich (DA) white dwarfs with IUE (Bruhweiler & Kondo 1981, 1983), it is now well established that they are ubiquitous in the group of hottest objects, with effective temperatures in excess of 55000K (e.g. Marsh et al. 1997b; Barstow et al. 1993). Extensive further studies in the far UV have eventually revealed the presence of absorption lines from C, N, O, Si, S, P, Fe and Ni in a number of objects (e.g. Vennes et al. 1992; Sion et al. 1992; Holberg et al. 1994; Vennes et al. 1996).
Apart from the detectable far UV absorption lines, these heavy elements have important effects on other spectral regions. For example, they systematically alter the flux level and shape of the optical Balmer line profiles, from which $`T_{eff}`$ and log g can be determined. The effect is to lower the measured value of $`T_{eff}`$ by several thousand degrees, compared to that determined under the assumption that the star has a pure H envelope (Barstow et al. 1998). In the extreme ultraviolet (EUV), the heavy element opacity dramatically blocks the emergent flux, yielding a much steeper short wavelength cutoff than is seen in a star with a pure H atmosphere. While the general shape of the EUV spectrum of metal-containing hot DA white dwarfs has been understood qualitatively, since Vennes et al. (1988) first successfully interpreted the EXOSAT spectrum of Feige 24 with an arbitrary mixture of elements, quantitative agreement between the observations and the predictions of theoretical model atmospheres has proved much more elusive. For example, initial attempts failed to match either the flux level or even the general shape of the continuum of one of the best studied white dwarfs, G191-B2B (e.g. Barstow et al. 1996). This was eventually perceived to be due to inclusion of an insufficiently large number of heavy element lines (mainly Fe and Ni). Improved non-LTE calculations, containing some 9 million predicted Fe and Ni lines, rather than just the 300,000 or so observed experimentally, were able to provide a self-consistent model which could accurately reproduce the EUV, UV and optical spectra (Lanz et al. 1996). Subsequently, other authors have obtained similar results with both LTE and non-LTE models (e.g. Wolff et al. 1998; Koester et al. 1997; Chayer et al. 1997). Interestingly, Wolff et al. (1998) obtained a lower Fe abundance from their far UV analysis than did Lanz et al. (1996; $`Fe/H=2\times 10^{-6}`$ and $`1\times 10^{-5}`$ respectively), while agreeing with the amount of Fe ($`1\times 10^{-5}`$) required by the EUV spectra. However, we note that Wolff et al. (1998) only study the restricted wavelength range available from the GHRS spectra, confining their analysis to only a few, relatively weak FeV lines. In contrast, Lanz et al. (1996) considered the best match to all the Fe lines visible in the IUE spectrum. Furthermore, the far UV lines are much less sensitive to the Fe abundance than the EUV continuum. Consequently, it is likely that the apparent discrepancy is not significant.
Despite these advances, important problems remain. First, the good agreement between the observed EUV spectrum of G191-B2B and the latest model predictions can only be obtained by inclusion of a significant quantity of helium, either in the photosphere or in the form of an interstellar/circumstellar $`\mathrm{He}\mathrm{ii}`$ component (see Lanz et al. 1996). At the comparatively limited $`0.5`$Å resolution of EUVE in the region of the $`\mathrm{He}\mathrm{ii}`$ Lyman series, and with the consequent blending of these lines with features from heavier elements, the inferred He contribution cannot be directly detected. If He is really present, two alternative interpretations arise. Either there is an interstellar/circumstellar component or the material resides predominantly in the stellar photosphere. If the former explanation holds, the amount of $`\mathrm{He}\mathrm{ii}`$ required implies an extremely high ionization fraction (80%) when compared with the measured $`\mathrm{He}\mathrm{i}`$ column density. This is a much higher value than appears to be typical of the local interstellar medium (ISM) in general (Barstow et al. 1997a). On the other hand, if there is a significant photospheric component the implied abundance of He/H=$`5.5\times 10^{-5}`$ is in disagreement with the upper limit of $`2\times 10^{-5}`$ imposed by the absence of a detectable 1640Å feature in the UV.
A partially successful attempt has been made to resolve these issues by adopting a physically more realistic model, where the helium is gravitationally stratified rather than homogeneously distributed within the atmosphere (Barstow & Hubeny 1998). The required interstellar He ionization fraction remains high (59%), compared to that of the local ISM, but the predicted strength of the 1640Å $`\mathrm{He}\mathrm{ii}`$ line becomes consistent with observation. Unfortunately, with the stratified model, the predicted $`\mathrm{He}\mathrm{ii}`$ Lyman series lines are somewhat stronger than can be accommodated by the EUVE spectrum. In the end, the issue of the $`\mathrm{He}\mathrm{ii}`$ component will only be solved when a much higher resolution spectrum is obtained, capable of resolving any $`\mathrm{He}\mathrm{ii}`$ lines from those of heavier elements. The J-PEX sounding rocket spectrometer, with an expected flight in early 1999, should provide such data (Bannister et al. 1999).
The second major difficulty in understanding the spectrum of G191-B2B has received considerably less attention. Indeed, to some extent, with the recent successes in dealing with the optical, UV and medium-long wavelength EUVE simultaneously, it has been specifically ignored. All the results of Lanz et al. (1996), Barstow & Hubeny (1998) and others, discussed above, only consider the EUV spectral data longward of $`\sim 180`$Å. Nevertheless, there is significant flux detected shortward of this, in the EUVE short wavelength channel and in the soft X-ray. Furthermore, the flux level predicted by the most successful current models is between five and ten times that observed.
A complete understanding of the atmosphere of G191-B2B requires that we also explain the short wavelength spectrum as well as the longer wavelength spectral ranges. We show here that the entire spectrum of G191-B2B, from the short wavelength EUV to the optical, can be explained if the Fe known to be present in the atmosphere is not homogeneously mixed but stratified, with a decreasing abundance towards the outer layers of the stellar envelope. Such a heavy element distribution may be an indication of ongoing mass loss from the star, which has important consequences for the spectral evolution of G191-B2B and hot DA white dwarfs in general.
## 2 Examination of the short wavelength flux problem
### 2.1 Observations
As in our earlier papers on G191-B2B, we utilised the ‘dithered’ EUVE spectrum obtained on 1993 December 7-8 (see Lanz et al. 1996) and the coadded IUE echelle spectrum of Holberg et al. (1994). The EUVE ‘dither’ mode consists of a series of pointings slightly offset in different directions from the nominal source position to average out flat field variations. As these data have been extensively described elsewhere, we just give a brief summary here. The EUVE exposure times were 58,815s, 49,239s and 60,816s in the SW, MW and LW ranges respectively. We assume that the residual efficiency variation, after the effect of the ‘dither’, is 5%, as observed for HZ43 (e.g. Barstow, Holberg & Koester 1995; Dupuis et al. 1995), quadratically adding a systematic error of this magnitude to the statistical errors of the data. As one of the brightest EUV sources in the sky, the spectrum of G191-B2B is well-exposed across most of the wavelength range, achieving the maximum signal-to-noise possible with EUVE (limited by the residual fixed pattern efficiency variation) at the maximum spectral resolution. Since the raw spectra oversample the true resolution by a factor $`4`$, we have generally chosen to bin the data by this factor during the spectral analysis. However, in the SW range, the heavy element opacity causes a dramatic reduction in the observed stellar brightness. Consequently, in an exposure optimised for the MW and LW ranges, the signal-to-noise at shorter wavelengths is very low. To approach the signal-to-noise achieved at longer wavelengths and produce a data set with no bins containing zero counts, it was necessary to rebin the data below $`\sim 190`$Å by a further factor 8 (32 in total). Inevitably, detailed spectral line information is lost but the data can otherwise constrain the model spectra. Figure 1 shows the complete EUVE spectrum of G191-B2B, comparing it to the best fit model of Lanz et al. (1996).
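The fixed-factor rebinning described above amounts to summing counts over groups of native bins. A minimal sketch, with a synthetic count array standing in for the extracted SW spectrum:

```python
# Fixed-factor rebinning of an oversampled count spectrum, as applied to
# the SW data (a factor 32 in total below ~190 A). Input counts are synthetic.
import numpy as np

def rebin(counts, factor):
    n = (len(counts) // factor) * factor     # drop any incomplete group
    return counts[:n].reshape(-1, factor).sum(axis=1)

counts = np.random.default_rng(1).poisson(0.3, size=4096)   # placeholder
coarse = rebin(counts, 32)
print(len(coarse), coarse[:8])
```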
Recent work (Barstow, Hubeny & Holberg 1998) has shown that the value of $`T_{eff}`$ determined from an analysis of the optical Balmer lines is sensitive to a combination of non-LTE effects and heavy element line blanketing. In their analysis, Lanz et al. (1996) adopted an effective temperature of 56000K, after taking these effects into account. However, their work was carried out using a model grid with a fixed value of the surface gravity (log g=7.5). The more complete analysis of Barstow et al. (1998) spans the log g range from 7.0 to 8.0 and yields a slightly lower temperature of 53720K, from a combined Balmer and Lyman line analysis. For this study we adopt $`T_{eff}`$=54000K and log g=7.5 in the model calculations.
### 2.2 Non-LTE spectral models with heavy elements
The homogeneous non-LTE heavy element rich calculations used here originate in the work of Lanz et al. (1996) and Barstow et al. (1998) and have been described extensively in those papers. Briefly, the models include a total of 26 ions of H, He, C, N, O, Si, Fe and Ni in calculations with the programme $`\mathrm{tlusty}`$ (Hubeny 1988; Hubeny & Lanz 1992, 1995). Radiative data for the light elements have been extracted from TOPBASE, the database of the opacity project (Cunto et al. 1993), except for extended models of carbon atoms (K. Werner, private communication). For iron and nickel, all the levels predicted by Kurucz (1988) are included, taking into account the effect of over 9.4 million lines.
Barstow et al. (1998) computed a model grid over a range of $`T_{eff}`$ from 52000K to 68000K and log g from 7.0 to 8.0 but only considered a single representative value of the Fe abundance. These calculations have also been extended to deal with helium stratification by Barstow & Hubeny (1998). As part of our continuing study of G191-B2B-like hot DA white dwarfs we have extended the stratified computations to match the grid of Barstow et al. (1998) in $`T_{eff}`$ and log g while enlarging the range of Fe abundances considered (see table 1).
In addition to the calculation of the intrinsic stellar EUV spectrum, it is necessary to take account of the effect of the intervening interstellar medium. The basic model that deals with $`\mathrm{H}\mathrm{i}`$, $`\mathrm{He}\mathrm{i}`$ and $`\mathrm{He}\mathrm{ii}`$ opacity is now well-established (Rumph, Bowyer & Vennes 1994) and we apply the modifications described by Dupuis et al. (1995) to treat the converging line series near the $`\mathrm{He}\mathrm{i}`$ and $`\mathrm{He}\mathrm{ii}`$ edges.
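For orientation, the sketch below implements the simplest version of such an interstellar absorption model: pure photoionization continua for H I, He I and He II, each with an approximate threshold cross-section and a $`(\lambda /\lambda _{edge})^3`$ scaling, ignoring the converging line series and the other refinements mentioned above. The cross-section values are round hydrogenic-style numbers for illustration, not the coefficients of Rumph et al.

```python
# Simplified ISM transmission: exp(-sum N_i * sigma_i(lambda)), with each
# continuum scaling as (lambda/lambda_edge)^3 below its edge. The threshold
# cross-sections are approximate illustrative values (He I especially so,
# since it is not truly hydrogenic).
import numpy as np

EDGES = {            # species: (edge wavelength [A], sigma at edge [cm^2])
    "HI":   (911.8, 6.3e-18),
    "HeI":  (504.3, 7.4e-18),
    "HeII": (227.9, 1.6e-18),
}

def transmission(lam, columns):
    """lam: wavelengths [A] (array); columns: cm^-2, keyed like EDGES."""
    tau = np.zeros_like(lam, dtype=float)
    for sp, (l_edge, s_edge) in EDGES.items():
        below = lam < l_edge
        tau[below] += columns.get(sp, 0.0) * s_edge * (lam[below] / l_edge) ** 3
    return np.exp(-tau)

lam = np.linspace(100.0, 400.0, 4)
cols = {"HI": 2e18, "HeI": 2e17, "HeII": 1e17}   # placeholder columns
print(transmission(lam, cols))
```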
### 2.3 Spectral analysis
The analysis technique used to compare models and data has been described extensively in several earlier papers (e.g. Lanz et al. 1996; Barstow et al. 1997a,b etc.). Hence, we just give a brief resumé here. We utilise the programme $`\mathrm{xspec}`$ to fold model spectra through the EUVE instrument response, taking into account the overlapping higher spectral orders in the instrumental effective area and applying the long wavelength corrections described by Dupuis et al. (1995). Goodness of fit is determined using a $`\chi ^2`$ statistic and the best agreement between model and data is achieved by seeking to minimise the value of that parameter. While good agreement between model and data can be achieved visually, disagreements in some details of the line strengths often lead to high values of the reduced $`\chi ^2`$ ($`\chi ^2/\nu `$, where $`\nu `$ is the number of degrees of freedom). Formally, a good fit should have $`\chi _{red}^2\sim 2`$ or less, and if it is much greater than this, estimates on the parameter uncertainties cannot be evaluated from the change in $`\chi ^2`$ according to the usual values of $`\mathrm{\Delta }\chi ^2`$ (e.g. $`\mathrm{\Delta }\chi ^2=5.89`$ for $`1\sigma `$ uncertainty and 5 degrees of freedom, see Press et al. 1992).
An alternative way to estimate such uncertainties is to make use of the F test, which calculates the significance of differences in the values of $`\chi ^2`$ determined from separate fits. The F parameter is simply the ratio of the two values of $`\chi _{red}^2`$. The significance of its value depends on the number of degrees of freedom and can be determined from standard tables. This can be used to determine whether or not one model might provide a significantly better fit than another. Uncertainties are then estimated by tracking the value of F as an individual parameter is varied until it reaches a predetermined value corresponding to a particular significance.
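A minimal sketch of this test with scipy, using the two fits compared in section 2.4 below (the $`\chi ^2`$ values and degrees of freedom are the ones quoted there):

```python
# F-test comparison of two fits: F is the ratio of reduced chi^2 values,
# referred to an F distribution with (nu1, nu2) degrees of freedom.
from scipy.stats import f

chi2_1, nu1 = 4827.0, 530    # full MW/LW fit
chi2_2, nu2 = 2064.0, 491    # fit ignoring data shortward of 210 A

F = (chi2_1 / nu1) / (chi2_2 / nu2)
p = f.sf(F, nu1, nu2)        # one-sided tail probability
print(f"F = {F:.2f}, p = {p:.1e}")
```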
### 2.4 Comparison of models and data
The work of Lanz et al. (1996) and Barstow & Hubeny (1998) has been very successful in explaining the spectrum of G191-B2B at wavelengths longward of $`\sim 180`$Å. However, taking the models that best match that spectral region and extending the comparison to shorter wavelengths reveals a significant discrepancy, with the observed flux falling well below that predicted (figure 1). The largest difference is nearly an order of magnitude. Closer inspection of figure 1 reveals that, while the onset of the disagreement is quite sudden, at $`\sim 190`$Å, the fit is not perfect at the longer wavelengths. Agreement is very good above 260Å, but there are some significant differences between model and data between 190Å and 260Å.
Interestingly, the match of the MW and LW data to the model can be much improved by restricting further the wavelength range under consideration. For example, ignoring the data shortward of 210Å delivers a large improvement in the fit, changing the value of $`\chi ^2`$ by more than a factor 2, from 4827 (530 degrees of freedom) to 2064 (491 dof). Applying the F-test to these values shows that the improvement is hugely significant. Furthermore, this is coupled with a reduction in the required Fe abundance to Fe/H$`=3.8\times 10^{-6}`$. The quality of agreement with the long wavelength data, already very good with the current generation of models, remains more or less unchanged. Therefore, nearly all the improvement in the fit lies between 210Å and $`\sim 300`$Å, as illustrated in figure 2. Here, we highlight the most interesting spectral range, between 100Å and 300Å, below which the source is barely detectable with EUVE.
The difference between the model prediction and observation (see figure 2) is characterised mainly by a difference in slope. A clue as to the possible explanation of the short wavelength flux discrepancy can be found by considering the region below 180Å separately from the rest of the spectrum. An improved match to the observed flux level is obtained by allowing the Fe abundance to vary freely, also shown in figure 2, after fixing the interstellar columns at the values determined from the longer wavelength fit. However, this exercise yields a Fe/H ratio of $`4\times 10^{-5}`$, much higher than either the values obtained in the analyses of Lanz et al. (1996) and Barstow & Hubeny (1998) or with the more restricted short wavelength limit considered here.
Figure 3 shows the mass depth ($`\mathrm{\Delta }\mathrm{M}`$, total mass above the point of interest) of the EUV line and continuum at monochromatic optical depth $`\tau _\nu =2/3`$ as a function of wavelength computed for the model that gives the best overall fit to the data. It can be seen that there is a steep change in the depth at which both the continuum and lines are formed at $`\sim 180`$Å, just where the short wavelength discrepancy begins (see figures 1 and 2). The sharp decrease in the depth of the continuum formation between 190 and 160Å is a combined result of a number of intervening continuum edges, mostly of FeV ($`\lambda `$ 165.5, 173.1, 176.1, 180.2, 184.3), and partly NiV ($`\lambda `$ 163.2, 171.3, 174.9, 177.0, 180.0). There are also edges of light ions, namely CIV ground-state (192.2), NIV (160.1, 179.4) and OIV (160.3, 181.1), which are less important than the FeV features. Coupling this with the larger Fe abundance required to match the short wavelength data leads us to suggest that the Fe is not homogeneously mixed in the atmosphere but has a depth dependent abundance. We examine this possibility in the rest of this paper.
## 3 Non-LTE heavy element-rich models with a stratified Fe component
Barstow & Hubeny (1998) have already reported on the effects of helium stratification in heavy element-rich model atmospheres. They utilised the programme $`\mathrm{tlusty}`$ (Hubeny 1988; Hubeny & Lanz 1992, 1995; Lanz et al. 1996), including modifications to allow the abundance of any element at any depth point within an atmosphere to be a completely free parameter. However, to generate a physically realistic model it would be necessary to carry out diffusion calculations for every element. In the case of helium, this is comparatively straightforward. As pointed out by Vennes et al. (1988), the radiation pressure on helium is insufficient to counteract the downward force of gravity to a degree yielding a photospheric helium abundance large enough to explain the observed EUV/soft X-ray flux deficiency, due to the comparatively small number of lines in the EUV waveband. Hence, Vennes et al. (1988) proposed a model where the relative abundance of He is determined by the equilibrium between ordinary diffusion and gravitational settling and depends on effective temperature, surface gravity and the mass of the overlying H layer. Barstow & Hubeny (1998) adopted the approach of Vennes et al. (1988) to calculate the depth dependent abundance profile of He for their stratified models, and this is also the case in the models used in this work.
For heavier elements, the number of lines found in the EUV is much greater than for helium, and radiative forces become important. In these circumstances, simple diffusive equilibrium is no longer an adequate treatment of the relative abundances of the elements. Detailed models dealing with the effects of radiative levitation have been constructed by several workers. Most recently, Chayer, Fontaine & Wesemael (1995a) and Chayer et al. (1994) have carried out calculations that include all the heavy elements so far found in the atmosphere of G191-B2B. Unfortunately, for our purposes, the range of mass depth considered by them, running from $`\mathrm{\Delta }\mathrm{M}/\mathrm{M}\sim 10^{-4}`$ up to $`10^{-15}`$, corresponds to a region of the envelope below that of the line/continuum formation depths in the $`\mathrm{tlusty}`$ models. We note that Chayer et al. (1994) consider the fractional mass depth $`\mathrm{\Delta }\mathrm{M}/\mathrm{M}`$, whereas we deal with the total mass $`\mathrm{\Delta }\mathrm{M}`$, independent of the mass of the star. However, it is a simple matter to convert our scale to theirs by dividing $`\mathrm{\Delta }\mathrm{M}`$ by the known mass of G191-B2B ($`0.5M_{\odot }`$, Marsh et al. 1997a). Hence, with line/continuum formation depths above $`\mathrm{\Delta }\mathrm{M}\sim 3\times 10^{-16}`$–$`1\times 10^{-15}M_{\odot }`$ (corresponding to $`\mathrm{\Delta }\mathrm{M}/\mathrm{M}\sim 6\times 10^{-16}`$–$`2\times 10^{-15}`$), there is little information in the Chayer et al. (1994, 1995a) work that could be usefully incorporated into the $`\mathrm{tlusty}`$ models.
Since we have no a priori information on the possible depth dependent abundance of Fe, either from observation or a detailed physical model, we have taken two somewhat arbitrary approaches in an attempt to produce one or more atmosphere models that can match the complete EUVE spectrum of G191-B2B. First, we looked at a series of simple ‘slab’ models, where we divide the atmosphere into two or three discrete regions and fix the Fe abundance at a constant value within these depth ranges but allow it to vary from region to region. As starting points we used the nominal Fe abundances determined from the best fit models to the short wavelength and medium wavelength EUVE spectra and placed the slab divisions near the continuum formation depth. We consider models with two and three layers, but it is important to note that the resulting discontinuities are unphysical and that this choice is only a rough attempt to mimic the true depth dependence of the Fe abundance that might be expected from diffusion equilibrium. Table 2 lists the details of the slab models, giving Fe abundances and depth ranges, the quoted depth representing the lower limit of the given abundance for each layer. We note that the abundances of all other elements were constant, as specified in Barstow & Hubeny (1998; C/H$`=2.0\times 10^{-6}`$, N/H$`=1.6\times 10^{-7}`$, O/H$`=9.6\times 10^{-7}`$, Si/H$`=3.0\times 10^{-7}`$, Ni/H$`=1.0\times 10^{-6}`$).
A second approach to modelling the Fe abundance was to modify the diffusion calculation for gravitational settling of helium to deal with Fe (see Vennes et al. 1988; Barstow & Hubeny 1998). This is straightforward, since the atomic mass and effective charge are free parameters in the calculation and can be adjusted. However, in dealing with helium, it is realistic to assume that there is an infinite reservoir overlaid by a thin H shell, a useful boundary condition. As Fe is not a product of nuclear burning in a white dwarf of normal mass, such as G191-B2B, it can only exist as a trace element and the Fe reservoir cannot be specified in the same way. In this case it is necessary to assume that there is some limiting Fe abundance in the deeper layers of the atmosphere. Irrespective of any assumptions about this limiting abundance, what is clear from a range of test diffusion calculations for Fe is that the rate of change of the abundance profile is very steep. That is, for a sensible abundance of Fe at the continuum formation depth, the outer layers are completely depleted of Fe while the Fe abundance in the deeper regions is so high that the emergent EUV flux is negligible. This is not too surprising, as the atomic mass of Fe is fourteen times that of He, and highlights the already well documented fact that radiative levitation effects are necessary to explain the presence of photospheric Fe.
We have not developed a complete radiative levitation calculation for this work but the possible effects can be examined, to first order, by applying a reverse acceleration term in the diffusive equilibrium calculation. Examination of radiative acceleration predictions for Fe (see figure 12 of Chayer et al. 1995a) shows that the value of $`\mathrm{g}_{\mathrm{rad}}`$ reaches a plateau-like maximum near the stellar surface. Consequently, we can adopt a constant value for the radiative acceleration term in our calculations. It is important to stress that our choice of log $`\mathrm{g}_{\mathrm{rad}}`$ and the reservoir Fe abundance are entirely empirical, based on achieving Fe abundances that approximately correspond to those included in the slab models and which also match the range observed in the homogeneous analysis of section 3 (see table 2).
## 4 Stratified analysis of the EUV spectrum of G191-B2B
In principle, a detailed study of any individual star requires a grid of models to be calculated spanning the possible range of values of all parameters. In practice, this is not feasible for a star like G191-B2B because of the number of free parameters that must be considered. A heavy element-rich atmospheric model is specified by $`T_{eff}`$ and log g, plus the abundance of each element included in addition to hydrogen - seven in the models used here. For G191-B2B and similar stars, determination of the values of all the parameters is a multi-stage process. Temperature and gravity are estimated from the Balmer lines while abundances can be determined from an analysis of the far UV absorption line strengths. Abundance determinations are typically iterative, with initial estimates made using a first guess at the composition refined by recalculation of a fully converged model. We now know that, to achieve a completely consistent interpretation of the data, it is also necessary to redetermine $`T_{eff}`$ and log g in the light of the measured heavy element abundances (Barstow et al. 1998).
Most analyses performed have made the reasonable simplifying assumption that the stellar envelopes are homogeneous. Once it is necessary to consider depth dependent abundances, the number of possible variables increases dramatically, since it is then necessary to specify the element abundances at each depth point. Since the models constructed for this work have 70 depth points, this could mean that it is necessary to deal with $`\sim 70`$ times the number of variables handled in the homogeneous work unless the problem is restricted in some sensible way. The dominant opacity source in the EUV is Fe; therefore, we have chosen to confine the analysis to the study of Fe stratification and assume that all other heavy elements are homogeneously mixed, with the exception of helium, which is also stratified as described above and by Barstow & Hubeny (1998).
Even dealing with a single element, it is necessary to define the abundance at 70 depth points. The problem can be reduced further by dividing the atmosphere into a smaller number of discrete regions or slabs, or by trying to specify a smooth abundance profile with a smaller number of diffusion/levitation parameters, as described in section 3 above. Even so, there is no single variable that can be adjusted to yield the desired result in terms of matching both short and longer wavelength EUVE spectra at the same time. However, from the discussion of the short wavelength problem (section 2), it is at least possible to express the goal as one of steepening the short wavelength region of the spectrum below $`\sim 200`$Å compared to the longward flux level.
Taking as a starting point the Fe abundances determined from separate fits to short and long wavelength EUV spectra, the model grid listed in table 2 has been constructed in an incremental way, in response to the results of a fit to an earlier model. In each new model, only one or two small changes were made from the previous example, devised to bring the predicted spectrum into closer agreement with that observed (but not always successfully). We consider the results of all these analyses together in table 3, together with the best-fit homogeneous model, for reference. The probability that the best fit model (fe6) is a significant improvement on each of the other models is calculated and listed in table 3. In each case, the entire useful EUV spectrum of G191-B2B from 120Å to 600Å was considered. The interstellar column densities were allowed to vary completely freely while the H layer mass (for H and He stratification), $`T_{eff}`$ and log g were fixed at their nominal values of $`M_H=1.2\times 10^{-13}`$, 54000K and 7.5 respectively.
Most of the stratified models offer a significant improvement over that homogeneous model which gives the best match to the EUV spectrum of G191-B2B. However, some are rather more successful than others. We examined two different types of slab model, first where the upper layers of the photosphere have the greater Fe abundance and second with this situation reversed. Model fe1, which falls into the former category, gives a worse agreement overall, in comparison with the homogeneous case (figure 4). The predicted MW flux clearly falls below the observed level while there is an opposite discrepancy in the SW range. Those slab models where the deeper Fe abundance is greater than that in the outer layers give the best agreement in most cases. The very best of these, model fe6, is a good match to the observed spectrum throughout the complete EUV range (figure 5). Any residual differences are of similar magnitude to those obtained with the homogeneous models when those deal with the more restricted wavelength range above $`\sim 180`$Å. Neither of the two models which have Fe abundance profiles determined from the balance of radiative levitation and gravitational settling is in good agreement with the observations. For example, the better of these, fe7, requires a very high $`\mathrm{He}\mathrm{ii}`$ column density to force agreement with the short wavelength flux level. This leaves an unacceptably high flux decrement at and below the 228Å $`\mathrm{He}\mathrm{ii}`$ Lyman series limit (figure 6).
Thus far, the analysis has only addressed the level of agreement between model predictions and observations in the EUV spectral range. However, as the Fe abundances are adjusted at different depths to force agreement with the EUV observations, it is important to consider the effect this has on the predicted Fe line strengths in the far UV. At this point, we can discard those stratified models which do not work and limit the analysis to the selected few that do. Using the F-test, we define this group by assessing which model fits have values of $`\chi _{red}^2`$ which indicate that the probability the fit is significantly worse than the best model (fe6) lies below 99% (see table 3). The models included are fe3, fe4, fe5, fe6, fe9, fe10 and fe14. Figure 7 shows a region of the IUE NEWSIPS coadded spectrum of G191-B2B spanning the range 1370 to 1380Å and including several of the strongest FeV lines. Also shown, in decreasing order of predicted line strength, are synthetic spectra computed for models fe4, fe3, fe6, fe9 and fe14. Models fe4 and fe3 are a very good match, with the fe6 and fe9 FeV line strengths being slightly weaker than observed. To assess quantitatively the level of agreement, we have carried out a further analysis, fitting the model predictions to the two strongest FeV lines seen in figure 7 (at 1373.8 and 1376.5Å). The resulting values of $`\chi _{red}^2`$ are listed (in brackets) in table 3, showing that all models are in good agreement with the data and that none can be excluded on the basis of the F-test. While model fe6 is the best in the EUV range, fe10, which is not significantly different, gives the best match to the far-UV FeV lines.
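To make the model-selection step concrete, the sketch below shows one plausible form of the F-test comparison against the best-fitting model. The $`\chi _{red}^2`$ values and degrees of freedom are invented placeholders, not the entries of table 3, and it is assumed here that the ratio of the two reduced $`\chi ^2`$ values is the appropriate F statistic.

```python
# Sketch of the F-test used to decide whether a model fit is significantly
# worse than the best-fitting model. All input numbers are placeholders.
from scipy.stats import f

def prob_significantly_worse(chi2red_model, chi2red_best, dof_model, dof_best):
    """Probability that `model` is significantly worse than `best`."""
    F_ratio = chi2red_model / chi2red_best
    # f.cdf gives P(F < F_ratio); values near 1 mean the model is worse.
    return f.cdf(F_ratio, dof_model, dof_best)

# Models are retained when this probability lies below 99 per cent.
p = prob_significantly_worse(1.30, 1.10, 500, 500)
print("P(worse) = %.3f -> %s" % (p, "discard" if p > 0.99 else "retain"))
```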
## 5 Discussion
We have been able to provide the first consistent explanation of the complete spectral flux distribution of the hot DA white dwarf G191-B2B, including the short wavelength EUV region ($`<190`$Å) which has previously been problematic, using a grid of atmosphere models with a depth dependent abundance of Fe. Potentially, this is an important breakthrough in our understanding of this and related DAs with significant photospheric heavy element abundances. However, the choice of model structures is somewhat arbitrary, even artificial, in the absence of any a priori physical constraints that we might apply. Hence, it is necessary to re-examine the justification for this approach, question its physical reality and assess the uniqueness of the models before consideration of the possible implications of the results.
In comparison with the homogeneous atmosphere models used in the earlier studies of G191-B2B (see figure 1), the stratified models described here seek to suppress the level of the short wavelength ($`<190`$Å) flux compared to the longer wavelengths, also steepening the spectral slope towards short wavelengths in this region. A number of the models tried were successful, but there may be other mechanisms that might yield a similar effect. Probably the most important question concerns the atomic data included in the models. It is well-known that there are large uncertainties in the continuum and line opacities of the Fe group elements. As the dominant EUV opacity source, the Fe data will be the most critical. However, there are two arguments against this possibility. First, it seems unreasonable to suppose that any errors should be concentrated in a particular wavelength range. Second, where the greatest uncertainties lie, below $`\sim 160`$Å, there are very few Fe lines (or lines of any other species). This might suggest that, in fact, there should be less of a problem in the short wavelength region. However, we cannot ignore this potential problem completely. Bautista (1996) and Bautista and Pradhan (1997) have computed photoionization cross-sections and oscillator strengths for FeIV and FeV, expanding the Opacity Project database by an appreciable number of new transitions. Their calculations take account of transitions between states lying below the first ionization threshold and states lying above. As Bautista and Pradhan note, these transitions can make an important contribution to the total photoabsorption, even though they do not appear as resonances in the photoionization cross-sections. An alternative explanation for the short wavelength flux decrement is that extra EUV opacity may be present in the form of species, different elements or ionization stages, that have not been taken into account in the models. Again, this seems unlikely, as the few detected elements that we have not included (S and P) are only present in traces too small to have any noticeable effect in the EUV (Vennes et al. 1996). Any elements not yet detected will have still lower abundances.
The choice of stratified models is really determined by the level of complexity that can be accommodated in the modelling process and by the availability of information to determine the abundance depth profile of a given element. The two layer slab models calculated, therefore, represent the simplest possible development of the idea of a depth dependent abundance. Extending this to deal with three layers (models fe1, fe12, fe13, fe14) is a modest increase in complexity, adding two further unknowns (the position of the lower layer boundary and the layer abundance) to the three required for two layers. In this work, the additional, narrow layer (2-4 depth points) is used to make a smoother transition between the two main slabs. Generation of a smooth abundance profile is also a result of the diffusion/levitation calculations included in models fe7 and fe8.
It is rather striking that it is the simplest two region slab models that give the best agreement with the data. Of the three layer models, only fe14, where the intermediate layer is reduced to one depth point, is in reasonable agreement, and the diffusion/levitation models are the worst as a group. One conclusion we could draw from this is that the transition region for the change in Fe abundance is indeed narrow. In addition, the evidence seems to indicate that the Fe abundance is greatest in the deeper layers of the atmosphere but with the abundance in the outer regions being finite and the contrast between the two values being limited to a factor $`\sim 50`$. For example, the best fit fe6 model (to the EUV data) has a layer abundance ratio of 40, and predicts far UV Fe line strengths that are consistent with the observed spectrum. In comparison, the fe14 model (layer abundance ratio 60), which also gives a good match in the EUV, yields far UV Fe line strengths that are much weaker than observed. This also indicates that the abundance of Fe in the outer layer cannot be less than $`5\times 10^{-7}`$.
It is interesting to note that models fe9 and fe10, which are not significantly worse than the best model (fe6), have a lower $`\mathrm{He}\mathrm{ii}`$ column than any of the other successful models. The resulting He ionization fractions, 37% and 42% respectively, are much closer to the 27% mean value typical of the local ISM (Barstow et al. 1997), compared with either the homogeneous analysis of Lanz et al. (1996; $`\sim 80`$%), the stratified H/He work of Barstow & Hubeny (1998; $`\sim 50`$%) or the good models considered here (52% for fe6). On the basis of achieving the lowest He ionization fraction we might favour model fe9 over the others.
However, we must be cautious in taking any of these results too literally, since there are an enormous number of possible, more complex, abundance profiles that we have not yet tested and which might give an equally good or better result. Furthermore, we have only examined the stratification of Fe. There is no reason to assume that this is the only element that might be stratified. Indeed, there is good evidence to show that nitrogen is stratified in the slightly cooler, less heavy element-rich, DA white dwarf REJ1032+532 (Holberg et al. in preparation).
Ideally, we should investigate all these possibilities with new calculations but there is a major problem in producing the large number of models needed. We must seek ways of confining the problem. It may be possible to provide some constraints by measuring abundances using lines that are formed at different depths within the envelope. However, this approach will be particularly sensitive to any uncertainties in the oscillator strengths. Furthermore, as any effects are likely to be quite subtle and the range of uncertainty in determining abundances from far UV line strengths is typically a factor 2, it will be necessary to obtain data of considerably higher signal-to-noise than that currently available. Even then, as examination of the EUV and far UV line formation depths shows (figure 3), it will only be possible to investigate a narrow region of the photosphere, occupying 1 dex in mass depth. Interestingly, there is much more contrast in line formation depth within the EUV and between the EUV and far UV than in the far UV range alone.
If we accept at face value the evidence this work presents, that the Fe in G191-B2B is stratified in two main layers with abundances of $`4\times 10^{-5}`$ (lower layer) and $`1\times 10^{-6}`$ (upper layer), it is interesting to explore the possible implications. A relative depletion of Fe in the outer layers of the envelope may be an indication of ongoing mass-loss in the star. The effects of mass-loss in white dwarf atmospheres have hardly been examined, although there is evidence for this occurring in the hot DO white dwarf REJ0503-289 (Barstow & Sion, 1994). A preliminary study of this problem by Chayer et al. (1993) shows that the outer layers of the envelope will become depleted over time. However, the mass loss rate used in the calculation ($`10^{-16}M_{\odot }`$/yr) was sufficient to eliminate the reservoir of the heavy elements completely within a few thousand years. On this basis, to see any heavy elements at all, the mass loss rate in G191-B2B must be considerably lower. There is clearly a need for new radiative levitation calculations coupled with mass-loss to evaluate this problem properly.
## 6 Conclusion
We have demonstrated that the complete spectrum of G191-B2B can be explained by a model atmosphere where Fe is stratified, with increasing abundance at greater depth. The abundance profile appears to be sharply stepped and may explain the difficulties in matching observed photospheric abundances, usually obtained by analyses utilising homogeneous model atmospheres, to the detailed radiative levitation predictions, particularly as the latter are only strictly valid for regions deeper than where the EUV/far UV lines and continua are formed. Chayer et al. (1993) show that the outer layers of the envelope will become depleted over time if a weak wind is present. Hence, if found to be the only explanation of the observed spectrum, the relative depletion of Fe in the outer layers of the atmosphere could be the first evidence for radiatively driven mass loss in the star.
In addition, the work presented here may contribute to the resolution of the issue of the possible presence of $`\mathrm{He}\mathrm{ii}`$ along the line of sight to the star, discussed by Lanz et al. (1996) and Barstow & Hubeny (1998), and its likely location, in the photosphere or ISM. We find that two of our best stratified models yield an $`\mathrm{He}\mathrm{ii}`$ column density and He ionization fraction closer to the local ISM values than results obtained in the earlier studies. However, this particular problem will only be completely solved when a much higher resolution spectrum is obtained, capable of separating $`\mathrm{He}\mathrm{ii}`$ lines from those of heavier elements. We anticipate that the J-PEX spectrometer will provide such data in early 1999 (Bannister et al. 1999). Through its ability to study individual lines, this instrument may also be able to deliver new information on the stratification of Fe and other elements.
## Acknowledgements
The work of MAB was supported by PPARC, UK, through an Advanced Fellowship. JBH wishes to acknowledge NASA grants NAG 5-2738 and NAG 5-3472. Data analysis and interpretation were performed using NOAO $`\mathrm{iraf}`$, NASA HEASARC and Starlink software.
# QUANTITATIVE MODEL OF LARGE MAGNETOSTRAIN EFFECT IN FERROMAGNETIC SHAPE MEMORY ALLOYS
## I Introduction
In addition to some giant magnetostriction materials, ferromagnetic shape memory alloys were recently suggested as a general route to the development of a new class of magnetic-field-controlled actuator materials . It is now the goal of research projects in several groups to develop ferromagnetic alloys that also exhibit a martensitic phase transition, which would allow a large strain effect to be controlled by application of a magnetic field at constant temperature in the martensitic state. Numerous candidate shape memory materials were explored during the past few years, including Ni<sub>2</sub>MnGa, Co<sub>2</sub>MnGa, FePt, CoNi, and FeNiCoTi . A magnetically driven strain effect is expected to occur in these systems. According to results reported in , large strains of 0.19% can be achieved in a magnetic field of order 8 kOe in the tetragonal martensitic phase at 265 K of single-crystal samples of Ni<sub>2</sub>MnGa. This strain is an order of magnitude greater than the magnetostriction effect of the parent, room temperature cubic phase.
Ni<sub>2</sub>MnGa is an L2<sub>1</sub>-ordered ferromagnetic Heusler alloy having at high temperature a cubic (a = 5.822 Å) crystal structure that undergoes a martensitic transformation at 276 K into a tetragonally distorted structure with crystalline lattice parameters a=b=5.90 Å and c=5.44 Å . The martensitic phase accommodates the lattice distortion connected with the transformation by the formation of three twin variants, usually twinned on $`\{110\}`$ planes and having the tetragonal symmetry axes oriented nearly along the three possible $`\left[100\right]`$ directions. The saturation value of the magnetization was found to be about 475 G. The magnetization curve of the low-temperature twinned phase usually displays a two-stage structure at 265 K, with a sharp crossover at about 1.7 kOe from an easy low-field magnetization stage below to a hard stage above this value, up to the 8 kOe saturation field. Such behavior is connected with the different response of different twin variants to the applied field. The measurements usually show a definite magnetostrain value along $`\left[100\right]`$ as a function of the magnetic field applied in the same direction . It is generally expected that the large macroscopic mechanical strain induced by the magnetic field in systems of this type is microscopically realized through twin boundary motion and the redistribution of the different twin variant fractions in a magnetic field. The main thermodynamic driving forces are in this case of magnetic nature and are connected with the high magnetization anisotropy and the differences in magnetization free energies for different twin variants of martensite .
The main goal of this brief publication is to give a proper thermodynamic treatment of the mechanical and magnetic properties of materials of this type and to present a quantitative model describing the large magnetostrain effect observed in several ferroelastic shape memory alloys such as Ni<sub>2</sub>MnGa. It is shown that the magnetic field induced deformation effect follows directly from general thermodynamic rules such as the Poisson equation and is connected with the strain dependence of the magnetization. A simple model of magnetization for the internally twinned martensitic state and its dependence on the strain is considered and applied to explain the results of the experimental study of large magnetostrictive effects in Ni<sub>2</sub>MnGa.
## II General Thermodynamic Consideration
Consider the general thermodynamic properties of materials which can show both ferroelastic and ferromagnetic properties. Most shape memory alloys display ferroelastic behavior in the martensitic state, connected with the redistribution of the different twin variant fractions of martensite under an applied external stress through the motion of twin boundaries. Ferromagnetic shape memory materials have the additional possibility of activating the deformation process in a twinned martensitic state by the application of a magnetic field, simultaneously with the magnetization of the material. According to general thermodynamic principles, both the mechanical and the magnetic properties of such materials can be represented by the corresponding state equations:
$$\sigma =\sigma (\epsilon ,h)$$
(1)
$$m=m(\epsilon ,h)$$
(2)
where Eq.(1) reflects the mechanical properties through the stress-strain $`\sigma \epsilon `$ relation in the presence of a magnetic field $`h`$, and Eq.(2) gives the magnetization value $`m`$ as a function of the applied magnetic field $`h`$ and strain $`\epsilon `$. Both of these equations can be obtained from an appropriate thermodynamic potential as follows:
$$\sigma (\epsilon ,h)=\frac{\partial }{\partial \epsilon }\stackrel{~}{G}(\epsilon ,h),\qquad m(\epsilon ,h)=-\frac{\partial }{\partial h}\stackrel{~}{G}(\epsilon ,h)$$
(3)
where $`\stackrel{~}{G}(\epsilon ,h)=G(\epsilon ,h)-hm(\epsilon ,h)`$, and $`G(\epsilon ,h)`$ is the specific Gibbs free energy at fixed temperature and pressure. Both state equations are not completely independent functions and must satisfy the known Poisson rule:
$$\frac{\partial }{\partial h}\sigma (\epsilon ,h)=-\frac{\partial }{\partial \epsilon }m(\epsilon ,h)$$
(4)
Integration of this equation over the magnetic field starting from $`h=0`$ at a fixed strain gives an important representation of the mechanical state equation including magnetic field effects:
$$\sigma =\sigma _0\left(\epsilon \right)-\frac{\partial }{\partial \epsilon }\int _0^hdh\,m(\epsilon ,h)$$
(5)
According to this equation, the external stress on the left is balanced in equilibrium both by the pure mechanical stress $`\sigma _0\left(\epsilon \right)=\sigma (\epsilon ,0)`$ resulting from the mechanical deformation of the material at $`h=0`$ and by the additional magnetic field induced stress represented by the second term on the right in this equation. It is also important to note that the entire effect of the magnetic field on the mechanical properties is directly determined by the strain dependence of the magnetization. In the particularly important case $`\sigma =const=0`$ one can obtain a general equation determining the magnetically induced strain (usually called the magnetic shape memory or MSM effect) as follows:
$$\sigma _0\left(\epsilon \right)=\frac{\partial }{\partial \epsilon }\int _0^hdh\,m(\epsilon ,h)$$
(6)
and its linearized solution:
$$\epsilon ^{msm}\left(h\right)=\left(\frac{d\sigma _0}{d\epsilon }\right)_{\epsilon =0}^{-1}\left(\frac{\partial }{\partial \epsilon }\int _0^hdh\,m(\epsilon ,h)\right)_{\epsilon =0}$$
(7)
that can be used when $`\epsilon `$ is much less than the martensite lattice tetragonal distortion value $`\epsilon _0`$. According to Eqns.(6) and (7), the magnetization and its dependence on the strain are responsible for the MSM effect and are the main subject of the detailed discussion and modeling below.
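For completeness, the linearization step leading from Eq.(6) to Eq.(7) can be written out explicitly; the short sketch below assumes only that the unstressed, zero-field multi-variant state carries no internal stress, $`\sigma _0(0)=0`$, which is implicit in the linearization of the text.

```latex
% Expand sigma_0 about epsilon = 0, using sigma_0(0) = 0:
\sigma_0(\epsilon)\simeq
  \left(\frac{d\sigma_0}{d\epsilon}\right)_{\epsilon=0}\epsilon ,
% so that Eq. (6) reduces to a linear equation for the MSM strain,
\left(\frac{d\sigma_0}{d\epsilon}\right)_{\epsilon=0}\epsilon^{msm}(h)
  =\left(\frac{\partial}{\partial\epsilon}
     \int_0^h dh\,m(\epsilon,h)\right)_{\epsilon=0} ,
% which, solved for epsilon^{msm}(h), gives Eq. (7).
```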
## III The model and its application to Ni<sub>2</sub>MnGa tetragonal martensite
Consider a typical situation corresponding to measurements of the large strain induced by the magnetic field in the tetragonal internally twinned martensite of Ni<sub>2</sub>MnGa obtained from the austenitic single crystal studied in , when the magnetic field is applied along the $`\left[100\right]`$ direction of the parent austenitic phase and the strain measurements are performed in the same axial direction. In this case the crystallographic $`\left[100\right]`$ $`\left[010\right]`$ $`\left[001\right]`$ axes for all three possible twin variants of the tetragonal martensitic phase will be nearly parallel to the external field applied. More exactly, the crystallographic orientation relationships between the austenitic and martensitic phases can be obtained by using the usual methods of the crystallographic theory. Additional small rotations of the tetragonal phase axes are expected, but the corresponding rotation angles cannot exceed a few degrees in the case of Ni<sub>2</sub>MnGa and may be neglected for simplicity. Fig. 1 schematically shows the expected alignment of the magnetic field applied, the crystallographic orientations and the magnetization curves for the three possible tetragonal phase variants.
Therefore, the magnetic field is applied along the tetragonal symmetry axis of only one type of twin variant (called here the axial, or $`a`$-type) and simultaneously in the transversal direction with respect to the tetragonal symmetry axes of the other two (transversal, or $`t`$-type) variants. The investigation of the magnetization properties of Ni<sub>2</sub>MnGa performed for a single tetragonal variant of martensite obtained by the mechanical compression method has shown a considerable difference between the magnetization curves along the tetragonal $`\left[100\right]`$ direction in comparison to the two transversal $`\left[010\right]`$ and $`\left[001\right]`$ directions. It was found that the tetragonal axis is the easiest magnetization direction and requires a considerably smaller saturation field $`h_a`$ than the saturation field $`h_t`$ characterizing magnetization in the two hard transversal directions, as schematically shown in Fig.1. In the general case the calculation of the magnetization for a material with a complicated twin microstructure geometry requires a special approach. In this paper we ignore this problem for simplicity and will consider these effects in other publications. Taking into account the presence of magnetic anisotropy and the difference in magnetization behavior between the axial $`m_a(h)`$ and transversal $`m_t(h)`$ twin variants, we consider a simple model of magnetization for the multi-variant martensitic state that gives the main contribution to the magnetization, insensitive to the fine details of the twin microstructure. This model treats the multi-twin martensitic state as a composite material consisting of an easy magnetization region occupied by the axial type twins and a hard magnetization region occupied by the two transversal twin variants. Denoting by $`x`$ the total volume fraction of the axial twin domain and by $`(1-x)`$ the total fraction of the transversal twin domains, one can write the magnetization of the material as follows:
$$m(x,h)=xm_a(h)+(1-x)m_t(h)$$
(8)
where $`m_a(h)`$ and $`m_t(h)`$ are the specific magnetization functions for the axial and transversal variants, respectively. On the other hand, the macroscopic strain along the axial direction can be found from a similar equation:
$$\epsilon =x\epsilon _a^0+(1-x)\epsilon _t^0=\frac{3}{2}\epsilon _0\left(x-\frac{1}{3}\right)$$
(9)
where the diagonal matrix elements $`\epsilon _a^0=\epsilon _0`$ and $`\epsilon _t^0=-\frac{1}{2}\epsilon _0`$ represent the relative tetragonal distortion of the martensite crystal lattice along its tetragonal axis and the two transversal directions, respectively. The compression distortion $`\epsilon _0=5.4\%`$ along the tetragonal symmetry axis was found in the case of Ni<sub>2</sub>MnGa. One can easily eliminate the fraction $`x`$ from these two equations and obtain the magnetization as a function of the macroscopic strain for the internally twinned martensitic state:
$$m(\epsilon ,h)=\left\{\frac{1}{3}m_a(h)+\frac{2}{3}m_t(h)\right\}+\frac{2}{3}\left(\epsilon /\epsilon _0\right)\left\{m_a(h)-m_t(h)\right\}$$
(10)
This equation immediately reproduces all the main peculiarities of the experimental magnetization curve, including the sharp change of its slope at $`h=1.75`$ kOe, as indicated in Fig.2. This singularity appears exactly at $`h=h_a`$, where the easy stage of the magnetization process inside the axial twin variant domain is completed. According to Eq.(10), $`m(\epsilon _0,h)=m_a\left(h\right)`$ and $`m(-\epsilon _0/2,h)=m_t\left(h\right)`$, so one can use this fact to obtain both $`h_a=1.75`$ kOe and $`h_t=8`$ kOe from the experimental magnetization curves measured in the multi-variant state. The model magnetization curve $`m(0,h)`$ corresponding to zero strain shows the same type of behavior and singularity in slope as the experimental one. The difference between them is caused by the second term in Eq.(10), which gives an additional strain dependent contribution to the magnetization. This contribution is directly connected with the MSM effect and can easily be taken into account just after its calculation.
One can also obtain the final equations representing the effect of the magnetic field on the strain by using the basic expression (7) derived above from the general thermodynamic consideration:
$$\epsilon ^{msm}\left(h\right)=\frac{2}{3}\left(\epsilon _0\frac{d\sigma _0}{d\epsilon }\right)_{\epsilon =0}^{-1}\int _0^hdh\left\{m_a(h)-m_t(h)\right\}$$
(11)
## IV Discussion and conclusions
As follows from this equation, two factors determine the strain value and its field dependence. The first one is proportional to the slope of the stress-strain curve and can be found from the usual mechanical compression test without an applied magnetic field. The integral term reflects the effects of magnetic anisotropy and determines the functional magnetic field dependence of the strain. In particular, in the absence of magnetization anisotropy, when $`m_a(h)=m_t(h)`$, the deformation effect also vanishes. The saturation level of the strain is achieved at $`h=h_t`$ and above, where $`m_a(h)=m_t(h)=m^{sat}`$ and where the material has its maximal magnetization value $`m^{sat}`$. One can easily obtain the corresponding saturation value of the strain by performing the necessary integrations in Eq.(11) as follows:
$$\epsilon _{sat}^{msm}=\frac{1}{3}\left(\epsilon _0\frac{d\sigma _0}{d\epsilon }\right)_{\epsilon =0}^{-1}\left(h_t-h_a\right)m^{sat}$$
(12)
Precise quantitative estimation of the saturation strain requires, in general, the corresponding mechanical testing. Here, we will use the simple estimate $`d\sigma _0/d\epsilon \sim \sigma _0/\epsilon _0`$. So, $`\epsilon _{sat}^{msm}\sim \frac{1}{3}\left(\sigma _0\right)^{-1}\left(h_t-h_a\right)m^{sat}`$, where the characteristic stress $`\sigma _0`$ representing the ferroelastic mechanical behavior of the material is expected to be about 20 MPa in Ni<sub>2</sub>MnGa martensite. Using also the values $`h_t\sim 8`$ kOe, $`h_a\sim 1.75`$ kOe and $`m^{sat}\sim 475`$ G found from the magnetization curve analysis, one can obtain a simple estimate: $`\epsilon _{sat}^{msm}\sim 0.49\%`$. A more precise estimate that follows from the mechanical testing results gives $`d\sigma _0/d\epsilon \sim \left(2÷3\right)\sigma _0/\epsilon _0`$. Consequently, $`\epsilon _{sat}^{msm}\sim (0.24÷0.16)\%`$, which is in better quantitative agreement with the experimental value $`\epsilon _{sat}^{msm}\sim 0.14\%`$. In order to achieve a larger magnetostrain effect, comparable with the lattice tetragonal distortion value $`\epsilon _0\sim 5\%`$, one will need materials with a very low detwinning stress value $`\sigma _0\sim 2`$ MPa. This task can be considered realistic because observations of $`\sigma _0\sim 2`$ MPa and $`\sigma _0\sim 8`$ MPa were reported in some publications.
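The order-of-magnitude estimates above are easy to reproduce numerically. The sketch below assumes simple linear-ramp forms for $`m_a(h)`$ and $`m_t(h)`$ saturating at $`h_a`$ and $`h_t`$ (an idealization of the measured curves) and evaluates Eq.(11) in Gaussian units (G·Oe = erg/cm^3); the input values are those quoted in the text.

```python
# Numerical sketch of Eqs. (11)-(12) with assumed linear-ramp magnetization
# curves m_a(h), m_t(h) saturating at h_a and h_t, respectively.
# Gaussian units: G * Oe = erg/cm^3; 1 MPa = 1e7 erg/cm^3.

h_a, h_t = 1.75e3, 8.0e3   # saturation fields [Oe]
m_sat = 475.0              # saturation magnetization [G]
sigma_0 = 20.0e7           # detwinning stress scale, 20 MPa [erg/cm^3]

def m(h, h_s):
    """Linear ramp reaching m_sat at the saturation field h_s (assumed form)."""
    return m_sat * min(h / h_s, 1.0)

def strain(h, n_steps=4000):
    """Eq. (11), with epsilon_0 * d(sigma_0)/d(epsilon) taken as sigma_0."""
    dh = h / n_steps
    integral = sum((m(i * dh, h_a) - m(i * dh, h_t)) * dh
                   for i in range(n_steps + 1))
    return (2.0 / 3.0) * integral / sigma_0

print("eps_sat = %.4f%%" % (100.0 * strain(h_t)))  # ~0.49%, as estimated above
```

Rescaling the slope by the factor 2-3 suggested by the mechanical testing reproduces the (0.24÷0.16)% range directly.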
Fig. 3 shows the field behavior of the strain that follows from the model and its change for different values of the magnetic anisotropy factor $`k=h_a/h_t`$, defined as the ratio between the axial and transversal saturation fields.
The dimensionless strain response $`\epsilon ^{msm}\left(h\right)/\epsilon _{\mathrm{max}}`$ normalized by,
$$\epsilon _{\mathrm{max}}=\frac{1}{3}\left(\epsilon _0\frac{d\sigma _0}{d\epsilon }\right)_{\epsilon =0}^{-1}h_tm^{sat}$$
(13)
increases from zero at $`k=1`$, simultaneously with a corresponding shape change, and shows the maximal possible deformation effect and a linear type singularity of the low field strain behavior at $`k=0`$. This case corresponds to the maximally strong anisotropy, when the axial saturation field becomes infinitely small, $`h_a\to 0`$, and $`m_a(h)`$ immediately reaches its saturation level $`m^{sat}`$ starting from an arbitrarily low magnetic field, then remaining equal to the constant $`m^{sat}`$ value during the magnetization process. Therefore, one can conclude that the linear low field behavior usually predicted in some previously developed models is directly connected with their assumption of complete saturation of the magnetization of the axial type twin variants. According to the present model, such an assumption can be physically reasonable only in the limit $`h_a\to 0`$. In other cases the strain shows a normal parabolic type behavior in the low field region $`h<h_a`$, in agreement with the experimental observations. A good correspondence between the model and the experimental results is indicated in Fig.4. We have neglected the small hysteresis effects which are usually observed, intending to give a more detailed discussion of this problem by using some new developments and quantitative descriptions of hysteresis in shape memory materials.
Acknowledgments
The authors acknowledge the Physics and Materials Science Departments of the Helsinki University of Technology, which supported this work.
# Observables and gauge invariance in the theory of non-linear spacetime perturbations
# The size of spanning disks for polygonal curves
## 1. Introduction.
Let $`K`$ be a closed polygonal curve in $`R^3`$ consisting of $`n`$ line segments. Assume that $`K`$ is unknotted, so that it is the boundary of an embedded disk in $`R^3`$. This paper considers the question: How many triangles are needed to triangulate a Piecewise-Linear (PL) spanning disk of $`K`$? The main result, Theorem 1 below, exhibits a family of unknotted polygons with $`n`$ edges, $`n\to \mathrm{\infty }`$, such that the minimal number of triangles needed in any triangulated spanning disk grows exponentially with $`n`$. More specifically, we construct a sequence of unknotted simple closed curves $`K_n`$ in $`R^3`$ having the following properties for each $`n\ge 0`$.
* The curve $`K_n`$ is an unknotted polygon with at most $`10n+9`$ edges.
* Any PL embedding of a triangulated disk into $`R^3`$ with boundary $`K_n`$ contains at least $`2^{n-1}`$ triangular faces.
The polygons $`K_1`$ and $`K_3`$ are pictured in Figure 1.
The existence of these curves is related to the complexity of certain topological algorithms. Algorithms to test knot triviality by a search for embedded PL spanning disks are searching for disks that can be exponentially more complicated than their boundary curves. Algorithms of this type include those described in ,,,,. Some approaches to problems in computational group theory, such as the word problem, are also based on a search for a spanning disk, and may face similar difficulties.
The lower bound given in our examples can be compared with the following upper bound: the results of and show that every unknotted polygon with at most $`n`$ edges in $`R^3`$ bounds a PL embedded triangulated disk which has at most $`C^{n^2}`$ triangles, where $`C>1`$ is a constant independent of $`n`$. The exponent $`n^2`$ comes from the requirement that the polygon be embedded in the 1-skeleton of a triangulated $`3`$-manifold. A triangulation that contains an $`n`$-edge polygon and uses $`O(n^2)`$ tetrahedra always exists, and this bound cannot always be improved, see Avis and El Gindy . On the other hand, also shows that a set of points in general position, i.e. one for which no four points lie on a plane and no three on a line, can be triangulated using $`O(n)`$ simplices. It seems plausible that one could obtain an improved upper bound of $`C^n`$ triangles for a PL spanning disk of a polygon whose vertices are in general position.
A result similar to the one proved in this paper was announced in , but the geometric analysis suggested there seems difficult to establish rigorously. We consider here a different set of polygonal curves $`K_n`$ than those used in , and to establish their properties use topological arguments based on ideas from the classification of diffeomorphisms of surfaces and from Morse theory .
Although the main result concerns PL curves and surfaces, in some parts of the proof it becomes convenient to work with smooth surfaces and smooth mappings. This allows use of basic results from smooth Morse theory. The arguments could be carried out entirely in the PL context, at the expense of using less well known versions of Morse theory. Passing between the PL and smooth settings is achieved by approximating PL maps by smooth maps.
## 2. Construction of $`K_n`$
We now describe how to construct the unknotted curves $`K_n`$. The curve $`K_0`$ is contained in the $`xz`$-plane, see Figure 4. The construction of $`K_n`$ begins with the PL 4-braid $`\alpha `$ depicted in Figure 2, where $`\alpha =\sigma _1\sigma _2^{-1}`$ in terms of the standard generators $`\sigma _1,\sigma _2,\sigma _3`$ of the braid group on four strands, see . This braid consists of four arcs running between the planes $`\{z=1\}`$ and $`\{z=0\}`$, along each of which $`z`$ is monotonically decreasing. The planes $`\{z=0\},\{z=1\}`$ each intersect $`\alpha `$ at four points. We arrange these points along the $`x`$-axis and label them by $`p_1=(-2,0),p_2=(-1,0),p_3=(1,0),p_4=(2,0)`$. In this labeling we only consider the $`xy`$-coordinates.
A diffeomorphism $`\phi `$ of the 4-punctured plane $`R^2\backslash \{p_1,p_2,p_3,p_4\}`$ is associated to the braid $`\alpha `$. This diffeomorphism is induced by taking the punctured plane at level $`z=1`$ and sliding it down the braid to level $`z=0`$. Its action on the plane is indicated in Figure 3. The action is the identity outside a disk of radius three around the origin.
The curve $`K_n`$ is formed from an iterated braid
$$\beta _n=\alpha ^n\alpha ^{-n},$$
running between the planes $`\{z=-n\}`$ and $`\{z=n\}`$. Between each pair of planes $`\{z=k\}`$ and $`\{z=k+1\}`$, $`K_n`$ consists of a single copy of $`\alpha `$ for $`0\le k\le n-1`$ and a single copy of $`\alpha ^{-1}`$ for $`-n\le k\le -1`$. In the braid group, $`\beta _n`$ is equivalent to the trivial 4-braid, which consists of four parallel vertical segments. The construction of $`K_n`$ is completed by appropriately connecting together the four strands at the upper and lower ends to form a closed curve, as shown in Figure 4. Above the plane $`\{z=n\}`$ we add a pair of line segments from $`p_1`$ at height $`z=n+2`$ to each of $`p_1`$ and $`p_2`$ at height $`z=n`$, and from $`p_3`$ at height $`z=n+1`$ to each of $`p_3`$ and $`p_4`$ at height $`z=n`$. Similarly, below the plane $`\{z=-n\}`$ we add a pair of line segments from $`p_2`$ at height $`z=-n-1`$ to each of $`p_2`$ and $`p_3`$ at height $`z=-n`$, and from $`p_1`$ at height $`z=-n-2`$ to each of $`p_1`$ and $`p_4`$ at height $`z=-n`$.
Our main result is the following:
###### Theorem 1.
For each $`n\ge 0`$,
1. $`K_n`$ is unknotted.
2. $`K_n`$ contains at most $`10n+9`$ edges.
3. Any piecewise-smooth embedded disk spanning $`K_n`$ intersects the $`y`$-axis in at least $`2^{n-1}`$ points.
4. Any embedded PL triangulated disk $`D_n`$ bounded by $`K_n`$ contains at least $`2^{n-1}`$ triangles.
The condition (3) that the disk intersects a line many times implies condition (4), that it contains many triangles, since each triangle can intersect a line transversely at most once.
We prove Theorem 1 in §3 through §5. We first construct a standard spanning disk for $`K_n`$, which we call $`F_n`$. Figure 6 shows $`F_0`$ and $`F_1`$. To understand the behavior of $`F_n`$ and other disks spanning $`K_n`$, we prove some facts about diffeomorphisms and “train tracks”. These are applied in §4 to count the intersections of the $`y`$-axis and $`F_n`$. In §5 we use Morse Theory to show that any other spanning disk is at least as complicated, along the $`y`$-axis, as the standard disk.
## 3. Construction of a standard spanning disk
In this section we describe how to construct for each $`K_n`$ a particular smooth spanning disk $`F_n`$. This standard disk intersects each plane $`\{z=c\}`$, $`-n-1<c<n+1`$, in two arcs, which are embedded and disjoint. At $`z=\pm n`$ the arcs lie along the $`x`$-axis, joining $`p_1,p_2`$ and $`p_3,p_4`$ respectively. For $`n\ge 2`$ these arcs are shown in Figure 5 at heights $`z=n`$, $`z=n-1`$ and $`z=n-2`$. In Figure 5 the four arcs appear in the three pictures in order (1,2,3,4), (2,3,1,4) and (3,1,2,4), read left to right.
Above the plane $`\{z=n\}`$ the standard disk consists of two triangles in the $`xz`$-plane, one with a base along the segment from $`p_1`$ to $`p_2`$ and one with a base along the segment from $`p_3`$ to $`p_4`$. Below $`z=-n`$ it is bounded by a six-sided polygon in the $`xz`$-plane meeting $`\{z=-n\}`$ along two segments, one running from $`p_1`$ to $`p_2`$ and one from $`p_3`$ to $`p_4`$. Between $`\{z=-n\}`$ and $`\{z=n\}`$ the standard disk twists so that its boundary follows $`K_n`$, as made precise below.
The arcs in the first disk of Figure 5 are taken by $`\phi `$ to the arcs in the second disk in the Figure, and those in turn are taken by $`\phi `$ to the arcs in the rightmost disk. The arcs of the braid indicate the motion of the disk in the process of sliding from $`z=1`$ to $`z=0`$. A composition of a counterclockwise half-twist interchanging the first two punctures, followed by a clockwise half-twist interchanging the second and third punctures, gives $`\phi `$.
We now give a precise description of the construction of $`F_n`$, based on $`\phi `$. Begin with a planar polygonal curve bounding a disk $`L_n`$ in the $`xz`$-plane, formed as follows: Take vertical segments from $`(-2,0,-n)`$ to $`(-2,0,n)`$, $`(-1,0,-n)`$ to $`(-1,0,n)`$, $`(1,0,-n)`$ to $`(1,0,n)`$ and $`(2,0,-n)`$ to $`(2,0,n)`$. At the top, add a line segment from $`(-2,0,n+2)`$ to each of $`(-2,0,n)`$, $`(-1,0,n)`$ and from $`(1,0,n+1)`$ to each of $`(1,0,n)`$, $`(2,0,n)`$. At the lower end, add a line segment connecting $`(-2,0,-n-2)`$ to each of $`(-2,0,-n)`$, $`(2,0,-n)`$ and $`(-1,0,-n-1)`$ to each of $`(-1,0,-n)`$, $`(1,0,-n)`$, as shown in Figure 7.
The standard disk $`F_n`$ is the image of this planar disk $`L_n`$ under a diffeomorphism $`J_n:R^3\to R^3`$, that preserves the $`z`$-coordinates of points, and carries the boundary of $`L_n`$ to $`K_n`$. The diffeomorphism $`\phi `$ is isotopic to the identity map on the plane. So there is a continuous family of diffeomorphisms of the plane $`\phi _t,0\le t\le 1`$, with $`\phi _1=`$ identity and $`\phi _0=\phi `$. Define a diffeomorphism $`G:R^2\times [0,1]\to R^2\times [0,1]`$ by $`G(x,y,t)=(\phi _t(x,y),t)`$. Then $`G`$ carries the vertical line segments in $`R^2\times [0,1]`$ with $`xy`$-coordinates $`(-2,0),(-1,0),(1,0),(2,0)`$ to the braid $`\alpha `$. Also define a diffeomorphism $`H:R^2\times [0,1]\to R^2\times [0,1]`$, by $`H(x,y,t)=(\phi (x,y),t)`$. This extends $`\phi `$ to $`R^2\times [0,1]`$ as a product. The diffeomorphism $`J_n`$ is defined to be the identity for $`z\ge n`$ and $`z\le -n`$. For $`0\le k\le t\le k+1\le n`$, $`J_n(x,y,t)=GH^{n-k-1}(x,y,t-k)+(0,0,k)`$, and for $`-n\le k-1\le t\le k\le 0`$, $`J_n(x,y,t)=GH^{n+k-1}(x,y,k-t)+(0,0,2t-k)`$.
A Morse function $`f`$ on a smooth closed manifold has a finite number of critical points $`\{c_i\}`$ with distinct values under $`f`$. A Morse function on a manifold with boundary is a Morse function when restricted to both the boundary and the interior of the manifold. Critical points are either interior critical points or boundary critical points. A Morse function on a disk has at least two critical points, one maximum and one minimum, and if there are exactly two critical points then both must occur on the boundary, since there is a maximum and minimum value for the restriction to the boundary. The construction of $`F_n`$ gives $`z`$ as a Morse function on $`F_n`$ that has four critical points. Two are maxima, at $`z=n+2`$ and $`z=n+1`$, one is a minimum, at $`z=-n-2`$, and one is a saddle point, at $`z=-n-1`$. All four critical points lie on the boundary of $`F_n`$.
## 4. An invariant train track for $`\phi `$
To understand the iterates of $`\phi `$, we use an associated combinatorial object called an invariant train track. The theory of train tracks is described in ; we need here only elementary ideas from this theory. A train track is a 3-valent graph that is embedded on a surface. The edges, called tracks, are embedded smoothly, with the three tangent directions at the vertices, called switches, lining up to give a $`C^1`$-embedding of the union of any pair of edges meeting at a vertex. Train tracks have fibered neighborhoods, closed neighborhoods filled by fibers. Fibers are intervals transverse to the edges, much like the tracks of a monorail, and there is a projection map of the fibered neighborhood to the train track. A curve is carried by a train track if it can be isotoped into the fibered neighborhood so that it is transverse to the fibers. Such a curve is roughly parallel to the tracks, but may run many times over each track. The curve is determined up to isotopy by a set of weights. These are non-negative integers assigned to each track, giving the number of times the curve runs over that edge, in either direction. At each switch there is a switching condition: the weight assigned to the one “incoming” track is the sum of the weights of the two “outgoing” tracks. The weights for any two tracks near a vertex determine the weight for the third. An example is shown in Figure 8.
A curve $`C`$ carried by a train track can be projected onto the train track, meaning that the embedding of the curve can be composed with the projection of each fiber in the fibered neighborhood to the base point of that fiber on the train track. Each track is given a weight by the projection, corresponding to the number of pre-images in $`C`$ of a point in the interior of the track. The curve $`C`$ can be reconstructed from these weights, by taking a number of copies of each track given by the weights and joining them together near the switches. There is a unique way to join that gives an embedded curve. The resulting simple closed curve is unique up to isotopy.
As with curves, a train track $`T^{}`$ is carried by another train track $`T`$ if $`T^{}`$ can be isotoped into a fibered neighborhood of $`T`$ so that its vertices are carried to vertices and so that the tracks of $`T^{}`$ are transverse to the fibers of the fibered neighborhood of $`T`$. We can then project $`T^{}`$ into $`T`$ by mapping each fiber to its base point on $`T`$. If $`T^{}`$ carries weights on its branches, then these can be summed to give weights on the branches of $`T`$ to which it projects, as in Figure 9.
A train track is said to be invariant under a diffeomorphism $`\phi `$ of a surface if its image $`\phi (T)`$ is carried by $`T`$.
For later application, we replace the level planes $`\{z=c\}`$ of the height function $`z`$ with the level sets of a different function $`f_n:R^3\to R`$, that agrees with $`z`$ in a large ball around the origin, a ball that contains the disks we will be considering. Thus in subsequent arguments we will be able to view either $`f_n`$ or $`z`$ interchangeably as the Morse function we are using. The level sets of $`f_n`$ are a family of spheres rather than planes $`\{z=c\}`$. To construct $`f_n`$, we first choose a large constant $`R_n>0`$ such that a ball of radius $`R_n`$ centered at the origin contains $`F_n`$ in its interior. For each $`t`$ with $`-R_n<t<R_n`$, define $`\mathrm{\Sigma }_t`$ to be the 2-sphere obtained by taking the disk $`\{(x,y,z):x^2+y^2\le R_n^2,z=t\}`$ and capping it to form a convex 2-sphere enclosing the point $`(0,0,-2R_n)`$. Figure 10 shows some of these spheres. The spheres are the level sets of a function
$$f_n:R^3\to [-2R_n,\mathrm{\infty }).$$
The restriction of $`f_n`$ to the disks we will consider agrees with $`z`$, and $`R_n`$ will be chosen large enough so that the level sets of $`f_n`$ look identical to flat planes in a ball containing the disks. Note that we can use different functions $`f_n`$ for different values of $`n`$, if necessary, to ensure that our choice of $`R_n`$ is sufficiently large. The diffeomorphism of the 4-punctured plane $`\phi `$, which was the identity outside of a disk of radius three around the origin, induces a diffeomorphism $`\phi :S\to S`$ of the 4-punctured sphere $`S`$, which we call by the same name.
There is an invariant train track $`T`$ for $`\phi `$, depicted in Figure 11, and also shown with a fibered neighborhood in Figure 8. An assignment of weights to all the tracks of $`T`$ is completely determined by assigning two weights $`a`$ and $`b`$ on the two indicated tracks, as in Figure 8. The non-negative integers $`a`$ and $`b`$ are arbitrary, but all other weights are determined by the switching conditions. Each choice of $`a,b`$ gives rise to a unique simple closed curve carried by $`T`$, and we refer to $`a,b`$ as the weights with which this curve is carried by $`T`$.
To understand the iterates of $`\phi `$ we study the image of $`T`$ under $`\phi `$. The image $`\phi (T)`$ can be isotoped so that vertices of $`\phi (T)`$ go to vertices of $`T`$ and tracks of $`\phi (T)`$ are transverse to the fibers of the fibered neighborhood of $`T`$, as indicated in Figure 11.
###### Lemma 2.
The train track $`T`$ is invariant under the homeomorphism $`\phi `$. A curve carried by $`T`$ with weights $`a,b`$ is mapped by $`\phi `$ to a curve carried by $`T`$ with weights $`a+b`$ and $`a+2b`$.
Proof: The image of $`T`$ under $`\phi `$ can be isotoped into the fibered neighborhood of $`T`$ as shown in Figure 11. The tracks with initial weights $`a`$ and $`b`$ have projected onto them tracks with total weight $`a+b`$ and $`a+2b`$ respectively. A curve carried by $`T`$ with weights $`a,b`$ is similarly carried to a curve carried with weights $`a+b,a+2b`$. ∎
So $`T`$ is an invariant train track for $`\phi `$, and a curve $`C`$ carried by $`T`$ with weights $`a`$ and $`b`$ has image $`\phi (C)`$ which is also carried by $`T`$, but with weights $`a+b`$ and $`a+2b`$. When $`\phi `$ is iterated, the weights on these two tracks grow according to a Fibonacci sequence:
$$\{(a,b),(a+b,a+2b),(2a+3b,3a+5b),(5a+8b,8a+13b),\mathrm{\dots }\}.$$
###### Lemma 3.
A curve carried by $`T`$ with weights $`a_0\ge 0`$ and $`b_0\ge a_0`$ is mapped by the diffeomorphism $`\phi ^n`$ to a curve carried by $`T`$ with weights $`a_n`$ and $`b_n`$, satisfying $`a_n\ge 2^na_0`$ and $`b_n\ge 2^nb_0`$.
Proof: Under the action of $`\phi `$ the weight $`a`$ corresponding to a curve $`C`$ is transformed to the weight $`a+b\ge 2a`$ corresponding to $`\phi (C)`$ and the weight $`b`$ to $`a+2b\ge 2b`$. The result follows by iterating $`n`$ times. ∎
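The doubling bound is easy to check by direct iteration. The short sketch below (purely illustrative) applies the weight map $`(a,b)(a+b,a+2b)`$ starting from the weights $`(1,2)`$ that appear in the proof of Corollary 5 below, and asserts the $`2^n`$ growth of Lemma 3.

```python
# Iterate the train-track weight map (a, b) -> (a + b, a + 2b) and check the
# exponential lower bound of Lemma 3: a_n >= 2^n a_0 and b_n >= 2^n b_0,
# valid whenever a_0 >= 0 and b_0 >= a_0.

def iterate_weights(a, b, n):
    for _ in range(n):
        a, b = a + b, a + 2 * b
    return a, b

a0, b0 = 1, 2   # the weights carried by the image of delta under phi^2
for n in range(1, 9):
    a, b = iterate_weights(a0, b0, n)
    assert a >= 2**n * a0 and b >= 2**n * b0
    print(n, (a, b))
```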
Let $`B`$ denote the simple closed curve on a 4-punctured sphere $`S`$ that separates the points $`p_1,p_2`$ from $`p_3,p_4`$, as shown in Figure 12. We analyze the number of intersections between $`B`$ and a curve $`C`$ in the 4-punctured sphere $`S`$ that is carried by $`T`$ with weights $`a,b`$. We show there is no isotopy of $`C`$ in the 4-punctured sphere which can reduce the number of intersections with $`B`$ below $`2a+2b`$.
###### Lemma 4.
A curve $`C`$ in $`S`$ that is carried by the train track $`T`$ with weights $`a`$ and $`b`$ intersects $`B`$ in at least $`2a+2b`$ points.
Proof: In a surface containing two intersecting simple closed curves, a 2-gon is a disk on the surface whose boundary consists of an arc from each of the curves and whose interior is disjoint from each of them. It is shown in \[11, Lemma 3.1, pp. 108\] that if two simple closed curves on a surface have more intersections than the minimal possible number in their isotopy class, then each contains an arc such that the two arcs together bound a 2-gon on the surface.
Let $`C`$ be a curve lying in the fibered neighborhood of $`T`$, transverse to the fibers and carried with weights $`a`$ and $`b`$. It follows from the above that if $`C`$ can be isotoped in $`S`$ to have fewer than $`2a+2b`$ points of intersection with $`B`$, then there exists an arc $`\beta `$ contained in $`B`$ and an arc $`\gamma `$ contained in $`C`$ that together bound a 2-gon in $`S`$, whose interior is disjoint from $`B\cup C`$. We will show that there is no such 2-gon between $`C`$ and $`B`$, and hence that $`C`$ cannot be isotoped to reduce the number of its intersections with $`B`$.
The arc $`\gamma `$ lies on $`C`$ and so lies in the fibered neighborhood of $`T`$, and is transverse to the fibers. Moreover $`\gamma `$ intersects $`B`$ only at its two endpoints, and therefore lies either to the right or to the left of $`B`$ on $`S`$, where “left” refers to the side containing $`p_1,p_2`$ and “right” to the side containing $`p_3,p_4`$.
An arc carried by $`T`$ with interior to the right of $`B`$ runs once around the third puncture and, together with $`\beta `$, must separate the third and fourth punctures. Similarly an arc carried by $`T`$ with interior on the left of $`B`$ runs once around either the first or second puncture before returning to $`B`$, and together with $`\beta `$ separates the first and second punctures. In either case such an arc is not homotopic to an arc in $`B`$, (rel boundary), and therefore cannot cobound a disk with an arc $`\beta `$ contained in $`B`$. So $`\beta \cup \gamma `$ cannot cobound a 2-gon, and it follows that the number of intersections of $`B`$ and $`C`$ cannot be reduced. ∎
###### Corollary 5.
Let $`p_1,p_2,p_3,p_4`$ denote four distinct marked points on a 2-sphere and let $`B`$ denote a simple closed curve separating $`p_1,p_2`$ from $`p_3,p_4`$. Let $`\delta `$ be the simple closed curve that is the boundary of a neighborhood of an arc joining $`p_1`$ to $`p_2`$ in the complement of $`B`$. Then $`\phi ^n(\delta )`$ intersects $`B`$ in at least $`2^n`$ points.
Proof: While $`\delta `$ is not carried by $`T`$, its image $`\phi (\delta )`$ is carried by $`T`$ with weights $`a=0,b=1`$, and $`\phi ^2(\delta )`$ is carried with weights $`a=1,b=2`$. Lemma 3 can be applied to $`\phi ^2(\delta )`$ and its iterates, so $`\phi ^n(\delta )=\phi ^{n-2}\phi ^2(\delta )`$ is carried with weights $`a\ge 2^{n-2}`$ and $`b\ge 2^{n-1}`$. By Lemma 4, the curve $`B`$ intersects a curve carried by the train track with weights $`a,b`$ in at least $`2a+2b`$ points. Since $`2a+2b\ge 2b\ge 2^n`$, the result follows. $`\mathit{}`$
###### Corollary 6.
An arc $`\gamma `$ joining $`p_1`$ to $`p_2`$ in the complement of $`B`$ has image under $`\phi ^n`$ that intersects the closed curve $`B`$ on the 4-punctured 2-sphere $`S`$ in at least $`2^{n-1}`$ points.
Proof: The simple closed curve $`\delta `$ is isotopic to the boundary of a regular neighborhood of $`\gamma `$ and $`\phi ^n(\delta )`$ is isotopic to the boundary of a regular neighborhood of $`\phi ^n(\gamma )`$. For any arc in $`S`$ that intersects $`B`$ transversely, the boundary of a sufficiently thin neighborhood of the arc intersects $`B`$ in twice the number of points that the arc intersects $`B`$. If $`\phi ^n(\gamma )`$ intersected $`B`$ in fewer than $`2^{n-1}`$ points then we could form a thin neighborhood of $`\phi ^n(\gamma )`$ whose boundary intersects $`B`$ in fewer than $`2^n`$ points, contradicting Corollary 5. $`\mathit{}`$
## 5. Combinatorial complexity of spanning disks for $`K_n`$
In this section we show that any PL spanning disk for $`K_n`$ contains exponentially many triangles, proving the main result.
Proof of Theorem 1: Let $`n`$ be any fixed positive integer. The assertion (1) that $`K_n`$ is unknotted follows from its construction as the composition of a braid and its inverse.
The curve $`K_n`$ can be constructed with straight segments as follows: Four segments above $`\{z=n\}`$ and four below $`\{z=-n\}`$ cap off the braid. Between $`\{z=-n\}`$ and $`\{z=n\}`$, a single line segment forms the entire fourth strand, and the first three strands are formed from $`2n`$ copies of the first three strands of $`\alpha `$. Each copy of $`\alpha `$ requires five segments for the first three strands. The total number of segments needed is no more than $`10n+9`$, which is assertion (2).
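Tallying the count given above: the two caps contribute $`8`$ segments, the fourth strand contributes $`1`$, and the $`2n`$ copies of $`\alpha `$ contribute $`2n\times 5=10n`$, giving $`10n+9`$ in all.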
We prove assertion (3) in three steps. Recall that for each fixed $`n`$, $`K_n`$ bounds a smooth disk $`F_n`$ in $`\mathbb{R}^3`$ that we call the standard disk, and that $`B_0=\mathrm{\Sigma }_0\cap \{x=0\}`$ is the closed curve obtained by intersecting $`\mathrm{\Sigma }_0`$ with the plane $`\{x=0\}`$. We first show that $`B_0`$ intersects $`F_n`$ in at least $`2^{n-1}`$ points. We then consider an arbitrary smooth spanning disk $`E_n`$, and show that the number of intersections of $`E_n`$ with $`B_0`$ is at least as large as that of $`F_n`$ with $`B_0`$. In the third step, we approximate an arbitrary PL disk $`D_n`$ by a smooth disk to obtain the same conclusion in the PL setting.
The standard disk $`F_n`$ is swept out by arcs joining points of $`K_n`$ in the level sets $`\{z=\text{ constant}\}`$, as they descend from $`\{z>n+2\}`$ to $`\{z<-n-2\}`$. One arc appears below $`\{z=n+2\}`$, the second below $`\{z=n+1\}`$. The arcs join together to form a single arc at $`\{z=-n-1\}`$, and this in turn disappears below $`\{z=-n-2\}`$. The height function given by the restriction of the $`z`$-coordinate to $`F_n`$ defines a Morse function on $`F_n`$, and this Morse function has no critical points in the interior of $`F_n`$.
The arc $`\gamma =\gamma _n`$ in $`F_n\cap \{z=n\}`$ connects $`p_1`$ and $`p_2`$, as shown in Figure 13. Denote by $`\gamma _t`$ the arc in $`F_n\cap \{z=t\}`$ that is in the same component of $`F_n\cap \{z\ge t\}`$ as $`\gamma `$. For each integer $`k`$ with $`0<k\le n`$, as we slide $`\gamma _k`$ down one unit along $`F_n`$, $`\gamma _k`$ is deformed along $`F_n`$ to $`\gamma _{k-1}=\phi (\gamma _k)`$. So as $`t`$ decreases to 0, the arc $`\gamma _n`$ is slid along $`F_n`$ to an arc $`\gamma _0`$ that is the image of $`\gamma _n`$ under $`n`$ iterations of $`\phi `$.
Let $`B_0`$ denote the closed curve along which the level set $`\mathrm{\Sigma }_0=f_n^{-1}(0)`$ intersects the $`yz`$-plane. Then $`B_0`$ separates the four points of intersection of $`\mathrm{\Sigma }_0`$ and $`K_n`$ into pairs, $`p_1,p_2`$ and $`p_3,p_4`$. The standard disk $`F_n`$ intersects $`B_0`$ in at least $`2^{n-1}`$ points by Corollary 6.
Our goal is to show that an arbitrary PL disk bounded by $`K_n`$ intersects $`B_0`$ in at least as many points as does $`F_n`$. Before considering PL disks, we first consider a smooth spanning disk $`E_n`$. In this setting we will apply some basic results from the Morse Theory of smooth functions on surfaces; see for an exposition of smooth Morse Theory. We will then shift back to the PL setting. The height function $`z`$, or the function $`f_n`$ that agrees with it on the region we are studying, will serve as the Morse functions. Let $`E_n`$ denote an arbitrary disk spanning $`K_n`$ such that
1. $`E_n`$ has smoothly embedded interior.
2. The height function $`z`$ restricted to $`E_n`$ is a Morse function.
We now show, using Morse theory, that the surface $`E_n`$ intersects the closed curve $`B_0`$ in at least as many points as does the “standard disk” $`F_n`$. Choose a value of $`R_n`$ large enough so that $`F_n`$ and $`E_n`$ both lie in the interior of the ball of radius $`R_n`$, and as before form the Morse function $`f_n`$ whose level sets are spheres $`\mathrm{\Sigma }_t`$ for $`t>-2R_n`$. The intersection of $`E_n`$ with the spheres $`\mathrm{\Sigma }_t`$ at non-critical levels is contained in $`\mathrm{\Sigma }_t\cap \{z=t\}`$. As $`t`$ decreases from $`\mathrm{\infty }`$ to $`-2R_n`$, the sphere $`\mathrm{\Sigma }_t`$ begins to intersect $`K_n=\partial E_n`$ when $`t=n+2`$. As $`t`$ decreases there are first one, then two arcs in $`\mathrm{\Sigma }_t\cap E_n`$, along with a (possibly empty) collection of simple closed curves. For $`n+1<t<n+2`$, $`\mathrm{\Sigma }_t\cap E_n`$ consists of a single arc $`\beta _t`$, along with a possibly empty collection of simple closed curves. For $`-n<t<n`$, $`\mathrm{\Sigma }_t\cap E_n`$ contains two arcs connecting the four points of $`\mathrm{\Sigma }_t\cap K_n`$. As $`t`$ decreases from $`n+1`$, $`\beta _t`$ is continuously deformed as long as $`E_n`$ is transverse to $`\mathrm{\Sigma }_t`$. As long as the transversality continues to hold, let $`\beta _t`$ denote the arc that is in the same component of $`E_n\cap \{z\ge t\}`$ as $`\beta `$.
As long as passing through the critical level $`\{t=c\}`$ does not change which pairs of points on $`K_n`$ are connected by the pairs of arcs, it is possible to extend the definition of $`\beta _t`$ to one of the two arcs below the critical point. We define $`\beta _{c-ϵ}`$ to be the arc connecting the same pair of points as $`\beta _{c+ϵ}`$. In these cases even the isotopy class of the arc is preserved, though the curve $`\beta _t`$ does not change continuously when $`t`$ passes through the critical level $`c`$.
For the values $`-n<t<n`$, the level sets of $`f_n`$ are transverse to $`K_n`$, so any critical points lie in the interior of the disk $`E_n`$. There are three types of changes in $`\beta _t`$ that can occur when descending from $`\mathrm{\Sigma }_{c+ϵ}`$ to $`\mathrm{\Sigma }_{c-ϵ}`$, as indicated in Figure 14.
1. Moving past a saddle critical point connects $`\beta _{c+ϵ}`$ to a simple closed curve of $`\mathrm{\Sigma }_{c+ϵ}\cap E_n`$ to form $`\beta _{c-ϵ}`$.
2. Moving past a saddle critical point connects $`\beta _{c+ϵ}`$ to itself to form $`\beta _{c-ϵ}`$ together with a simple closed curve.
3. Moving past a saddle critical point connects $`\beta _{c+ϵ}`$ to the second arc of $`\mathrm{\Sigma }_{c+ϵ}\cap E_n`$. No arc $`\beta _{c-ϵ}`$ is defined.
The first two types of moves are inverses, since reversing the direction of a type (1) move gives a type (2) move and vice-versa. The level surface in which $`\beta _{c+ϵ}`$ lies is a 2-punctured sphere if $`c\ge n+1`$ or $`c\le -n-1`$, and is a 4-punctured sphere otherwise. In a 2-punctured sphere there is a unique isotopy class of arcs connecting the two punctures, so the isotopy class of $`\beta _t`$ in $`\mathrm{\Sigma }_t`$ is unchanged when passing through the critical point. In a 4-punctured sphere there are many isotopy classes of arcs connecting two of the punctures, but as we pass a saddle critical point of type (1) or type (2), the curve $`\beta _t`$ remains in the complement of the second arc of intersection. The complement of an arc in a sphere is homeomorphic to an (open) disk, and in a disk there is a unique isotopy class of arcs connecting any two points. So the isotopy class of $`\beta _t`$ in $`\mathrm{\Sigma }_t`$ remains unchanged by a saddle move in these cases.
The third type of critical point does change the isotopy class of the arc, since it changes the boundary points connected by the arc. The following lemma asserts that this can occur at most once.
###### Lemma 7.
Suppose that $`f_n:D\to \mathbb{R}`$ is a Morse function on a topological disk $`D`$ that restricts to a Morse function on $`\partial D`$. Suppose also that $`f_n|_{\partial D}`$, the restriction of $`f_n`$ to $`\partial D`$, has at most four critical points on $`\partial D`$. Then $`f_n`$ can have at most one interior critical point of type (3) that is a saddle connecting distinct arcs in the level set of $`f_n`$.
Proof: Suppose there is an interior critical point with critical value $`c`$ that is a saddle connecting distinct arcs in the level set $`f_n=c+ϵ`$. The four arcs leaving the saddle point hit the boundary of $`D`$ at four distinct points. These arcs divide $`D`$ into four quadrants, which meet $`\partial D`$ in four arcs. Each of the boundary arcs has its two endpoints on the level set $`f_n=c`$, and $`f_n`$ is non-constant on these boundary arcs. So each contains at least one maximum or minimum of $`f_n|_{\partial D}`$. Suppose there were a second saddle critical point on a level set $`\{f_n=c^{\prime }\}`$. Since critical points of Morse functions have distinct values, we have that $`c^{\prime }\ne c`$. The four arcs emerging from the second saddle are therefore disjoint from the first critical level set, and contained in one of the previously defined quadrants. See Figure 15.
The intersection of this quadrant with $`\partial D`$ has four points on which $`f_n`$ takes the value $`c^{\prime }`$, and therefore $`f_n|_{\partial D}`$ has at least three critical points in this quadrant. It follows that $`f_n|_{\partial D}`$ has at least six critical points, contradicting the hypothesis. So only one saddle of type (3) can occur. $`\mathit{}`$
Since $`E_n`$ is a topological disk whose boundary has four critical points for the height function, at most one type (3) critical point can occur. Assume first that a type (3) critical point does not occur for $`t>0`$. On the standard disk, $`\gamma _0`$ is obtained from $`\gamma _n`$ by a continuous deformation involving no critical points, while $`\beta _0`$ is obtained from $`\beta _n`$ by a process that may include passing through critical points of types (1) and (2), but none of type (3). Therefore the isotopy class of $`\beta _0`$ in the 4-punctured sphere is the same as that of $`\beta _{n+2-ϵ}`$. Since $`\beta _t`$ is isotopic to $`\gamma _t`$ for $`t`$ close to $`n+2`$, we conclude that $`\beta _0`$ is isotopic to $`\gamma _0`$. By Lemma 4, $`\beta _0`$ intersects $`B_0`$ in at least as many points as $`\gamma _0`$.
Now consider the case where a type (3) critical point does occur for some $`t>0`$. By Lemma 7, there are no type (3) critical points for $`t<0`$. In this case we repeat the previous argument, but using the function $`-f_n`$ rather than $`f_n`$. Note that as $`z`$ increases from $`\{z=k\}`$ to $`\{z=k+1\}`$ for $`k`$ an integer with $`-n\le k\le -1`$, the level sets of $`F_n`$ are again transformed by an application of $`\phi `$. We replace $`\gamma `$ with the arc $`\gamma ^{}`$ in $`E_n\cap \mathrm{\Sigma }_{-n}`$ which joins $`p_2`$ to $`p_3`$, and $`\delta `$ with $`\delta ^{}`$, the boundary of a regular neighborhood of $`\gamma _{-n}^{}`$.
The curve $`\delta ^{}`$ is carried with weights $`a=1,b=0`$ by $`T`$ and $`\phi (\delta ^{})`$ is carried with weights $`a=1,b=1`$. By Lemma 3, $`\phi ^n(\delta ^{})`$ is carried by $`T`$ with weights $`a\ge 2^{n-1}`$ and $`b\ge 2^{n-1}`$. By Lemma 4 a curve isotopic to $`\phi ^n(\delta ^{})`$ intersects $`B_0`$ in at least $`2a+2b\ge 2^{n+1}`$ points. Then $`\phi ^n(\gamma ^{})`$ intersects $`B_0`$ in at least half as many, or $`2^n`$ points. As $`t`$ increases from $`-n`$ to 0, the arc $`\gamma _t^{}`$ in the component of $`E_n\cap \{z\le t\}`$ containing $`\gamma ^{}`$ is carried to an arc in $`\mathrm{\Sigma }_0`$ isotopic to $`\phi ^n(\gamma _{-n}^{})`$. So $`E_n`$ again must intersect $`B_0`$ in at least $`2^{n-1}`$ points. In every case the curve $`B_0`$ intersects $`E_n`$ in at least $`2^{n-1}`$ points.
Now consider an arbitrary PL disk $`D_n`$ with boundary $`K_n`$. After an arbitrarily small isometry of $`\mathbb{R}^3`$, we can arrange that $`D_n`$ intersects the $`y`$-axis transversely in a finite number of points. The disk $`D_n`$ can be approximated by a disk $`E_n`$, with smoothly embedded interior, that coincides with $`D_n`$ in a neighborhood of each intersection point with the $`y`$-axis, and that remains disjoint from other points of the $`y`$-axis. We choose $`R_n`$ larger, if necessary, so that in the passage from the Morse function $`z`$ to the Morse function $`f_n`$, the three surfaces $`D_n,E_n`$ and $`F_n`$ that we consider only intersect the flat parts of the spheres $`\mathrm{\Sigma }_t`$ which form the level sets of $`f_n`$. So the intersection of $`B_0`$ with $`D_n,E_n`$ and $`F_n`$ is the same as that of the $`y`$-axis with these surfaces, each intersection being in the interior of a triangular face of $`D_n`$.
But we have shown that $`E_n`$ intersects $`B_0`$ in at least $`2^{n-1}`$ points, and it therefore follows that the PL disk $`D_n`$ also intersects $`B_0`$ in at least $`2^{n-1}`$ points.
Since a triangle transversely intersects a line in at most one point, and $`B_0`$ agrees with the $`y`$-axis in a ball containing $`D_n`$, $`E_n`$ and $`F_n`$, this implies that $`D_n`$ contains at least $`2^{n-1}`$ triangles, and Theorem 1 is proved. $`\mathit{}`$
Remarks:
1. If we allow spanning disks that self-intersect, then the number of triangles required to span $`K_n`$ grows only linearly with $`n`$. If a spanning surface of arbitrary genus is allowed, it can be shown that the number of triangles required to span $`K_n`$ grows at most quadratically in $`n`$ .
2. The number of Reidemeister moves required to transform any unknotted curve constructed with $`n`$ polygonal edges into a single triangle has an exponential upper bound derived in . For the particular $`K_n`$ constructed here, the number of Reidemeister moves required to transform the projection of $`K_n`$ to a projection with no crossings grows only linearly with $`n`$.
3. The argument establishes somewhat better estimates than claimed above. If we embed $`K_n`$ into $`^3`$ in a more efficient way, then for large $`n`$, $`K_n`$ has at most $`6n`$ segments and the number of triangles contained in any disk spanning $`K_n`$ grows faster than a constant times $`\varphi ^{2n}`$, where $`\varphi `$ is the golden ratio.
## 6. Acknowledgments
This paper was completed while the first author was visiting the Institute for Advanced Study. The authors are grateful to J. Lagarias and the referee for helpful suggestions on the exposition.
# RELATIVISTIC THERMAL BREMSSTRAHLUNG GAUNT FACTOR FOR THE INTRACLUSTER PLASMA. III. ANALYTIC FITTING FORMULA FOR THE NONRELATIVISTIC EXACT GAUNT FACTOR
## 1 INTRODUCTION
The present authors have recently carried out accurate calculations on the relativistic thermal bremsstrahlung Gaunt factor for the intracluster plasma (Nozawa, Itoh, & Kohyama 1998). They have also presented accurate analytic fitting formulae which summarize the numerical results of the calculations (Itoh et al. 1999). Their calculation is based on the method of Itoh and his collaborators (Itoh, Nakagawa, & Kohyama 1985; Nakagawa, Kohyama, & Itoh 1987; Itoh, Kojo, & Nakagawa 1990; Itoh et al. 1991, 1997). In calculating the relativistic thermal bremsstrahlung Gaunt factor for the high-temperature, low-density plasma, Nozawa, Itoh, & Kohyama (1998) have made use of the Bethe-Heitler cross section (Bethe & Heitler 1934) corrected by the Elwert factor (Elwert 1939). They have also calculated the Gaunt factor by using the Coulomb-distorted wave functions for nonrelativistic electrons following the method of Karzas & Latter (1961). In Itoh et al. (1999), the present authors have constructed accurate analytic fitting formulae by combining the relativistic Elwert Gaunt factor with the nonrelativistic exact Gaunt factor. The former Gaunt factor is accurate at high temperatures, whereas the latter is accurate at low temperatures. For the plasmas with relatively low temperatures, the nonrelativistic exact Gaunt factor alone is sufficient for the analysis of the radiation which comes from these plasmas. Therefore, it is worthwhile to present an accurate analytic fitting formula which reproduces the numerical results of the calculations for the nonrelativistic exact Gaunt factor which have been reported in Nozawa, Itoh, & Kohyama (1998). The present paper is organized as follows. We will present the accurate analytic fitting formula in $`\mathrm{\S }`$2. Concluding remarks will be given in $`\mathrm{\S }`$3.
## 2 ANALYTIC FITTING FORMULA
The thermal bremsstrahlung emissivity in the nonrelativistic limit is expressed in terms of the nonrelativistic exact Gaunt factor $`g_{\mathrm{NR}}`$ (Nozawa, Itoh, & Kohyama 1998) by
$`<W(\omega )>_{\mathrm{NR}}d\omega `$ $`=`$ $`1.426\times 10^{-27}g_{\mathrm{NR}}(\gamma ^2,u)\left[n_e(\mathrm{cm}^{-3})\right]\left[n_j(\mathrm{cm}^{-3})\right]Z_j^2\left[T(\mathrm{K})\right]^{1/2}`$ (1)
$`\times `$ $`e^{-u}du\mathrm{ergs}\mathrm{s}^{-1}\mathrm{cm}^{-3},`$
$`u`$ $`\equiv `$ $`{\displaystyle \frac{\hbar \omega }{k_BT}},`$ (2)
$`\gamma ^2`$ $`\equiv `$ $`{\displaystyle \frac{Z_j^2\mathrm{Ry}}{k_BT}}=Z_j^2{\displaystyle \frac{1.579\times 10^5\mathrm{K}}{T}}.`$ (3)
In the above, $`\omega `$ is the angular frequency of the emitted photon, $`T`$ is the temperature of the electrons, $`n_e`$ is the number density of the electrons, and $`n_j`$ is the number density of the ions with charge $`Z_j`$. It should be noted that the thermal bremsstrahlung emissivity in the nonrelativistic limit is a function of $`\gamma ^2`$ and $`u`$ only. It does not depend on $`Z_j`$ and $`T`$ separately, but only on the ratio $`Z_j^2/T`$. This is a remarkable fact for nonrelativistic electrons.
In Figure 1 we show the nonrelativistic Gaunt factor as a function of $`u`$ for various values of $`\gamma ^2`$. In Figure 2 we show the nonrelativistic Gaunt factor as a function of $`\gamma ^2`$ for various values of $`u`$.
We give an analytic fitting formula for the nonrelativistic exact Gaunt factor. The range of the fitting is $`-3.0\le \mathrm{log}_{10}\gamma ^2\le 2.0`$, $`-4.0\le \mathrm{log}_{10}u\le 1.0`$. We express the Gaunt factor by
$`g_{\mathrm{NR}}`$ $`=`$ $`{\displaystyle \underset{i,j=0}{\overset{10}{}}}b_{ij}\mathrm{\Gamma }^iU^j,`$ (4)
$`\mathrm{\Gamma }`$ $`\equiv `$ $`{\displaystyle \frac{1}{2.5}}[\mathrm{log}_{10}\gamma ^2+0.5],`$ (5)
$`U`$ $`\equiv `$ $`{\displaystyle \frac{1}{2.5}}[\mathrm{log}_{10}u+1.5].`$ (6)
The coefficients $`b_{ij}`$ are presented in TABLE 1. The accuracy of the fitting is generally better than 0.1%.
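Evaluating the fit of Eqs. (4)-(6) is a direct double polynomial sum. The Python sketch below is illustrative: the function name is ours, and the $`11\times 11`$ coefficient array must be filled with the values $`b_{ij}`$ of TABLE 1, which is not reproduced here.

```python
import numpy as np

def gaunt_nr(gamma2, u, b):
    """Nonrelativistic exact Gaunt factor from the fit of Eq. (4).

    Valid for -3.0 <= log10(gamma2) <= 2.0 and -4.0 <= log10(u) <= 1.0;
    b is the (11, 11) array of coefficients b_ij from TABLE 1.
    """
    G = (np.log10(gamma2) + 0.5) / 2.5   # Eq. (5)
    U = (np.log10(u) + 1.5) / 2.5        # Eq. (6)
    i = np.arange(11)
    return float(np.sum(b * np.outer(G**i, U**i)))  # sum_ij b_ij G^i U^j
```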
## 3 CONCLUDING REMARKS
We have presented an accurate analytic fitting formula for the nonrelativistic exact Gaunt factor for thermal bremsstrahlung. The analytic fitting formula has been constructed to reproduce the numerical results of the calculation by the method of Karzas & Latter (1961) reported in Nozawa, Itoh, & Kohyama (1998). The accuracy of the fitting is generally better than 0.1%. The present fitting formula can be used widely as long as the electrons are nonrelativistic.
We thank Professor Y. Oyanagi for allowing us to use the least square fitting program SALS. This work is financially supported in part by the Grant-in-Aid of Japanese Ministry of Education, Science, Sports, and Culture under the contract #10640289.
Figure Legends
| FIG.1. | Nonrelativistic exact Gaunt factor as a function of $`u`$ for various values of $`\gamma ^2`$. |
| --- | --- |
| FIG.2. | Nonrelativistic exact Gaunt factor as a function of $`\gamma ^2`$ for various values of $`u`$. |
# First order phase transition in a 1+1-dimensional nonequilibrium wetting process
## A Definition of the model:
The model is defined in terms of growth of a one-dimensional interface on a lattice of $`N`$ sites with associated height variables $`h_i=0,1,\dots ,\mathrm{\infty }`$ and periodic boundary conditions. We consider a restricted solid-on-solid (RSOS) growth process, where the height differences between neighboring sites can take only values $`0,\pm 1`$. In addition, a hard-core wall at zero height is introduced. The model depends on three parameters $`q,q_0`$, and $`p`$. It evolves by random sequential updates, i.e., in each update attempt a site $`i`$ is randomly selected and one of the following processes is carried out:
– adsorption of an adatom with probability $`q_0\mathrm{\Delta }t`$ at the
bottom layer $`h_i=0`$ and probability $`q\mathrm{\Delta }t`$ at higher
layers $`h_i>0`$:
$$h_i\to h_i+1,$$
(1)
– desorption of an adatom from the edge of a terrace
with probability $`1\mathrm{\Delta }t`$:
$$h_i\to \mathrm{min}(h_{i-1},h_i,h_{i+1}),$$
(2)
– desorption of an adatom from the interior of a terrace
with probability $`p\mathrm{\Delta }t`$:
$$h_i\to h_i-1\qquad \text{if }h_{i-1}=h_i=h_{i+1}>0.$$
(3)
A process is carried out only if the resulting interface height $`h_i`$ is non-negative and does not violate the RSOS constraint $`|h_i-h_{i\pm 1}|\le 1`$. The time increment per sweep (N attempted updates) is $`\mathrm{\Delta }t=1/\mathrm{max}(1,q_0,q+p)`$.
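To make the dynamics concrete, one possible realization of a single update attempt is sketched below in Python. It is a minimal rejection-sampling scheme that reproduces the stated relative rates; it is not the authors' simulation code, and the bookkeeping of the time step is simplified.

```python
import random

def update_attempt(h, q, q0, p):
    """One random-sequential update attempt for the RSOS interface with a wall.

    h is the list of heights with periodic boundaries; the processes
    (1)-(3) are attempted with probabilities proportional to their rates.
    """
    N = len(h)
    R = max(q, q0) + 1.0 + p                  # bound on the total rate per site
    i = random.randrange(N)
    l, r = h[(i - 1) % N], h[(i + 1) % N]
    rate_ads = q0 if h[i] == 0 else q         # adsorption rate: bottom layer vs above
    x = random.random() * R
    if x < rate_ads:                          # adsorption, Eq. (1)
        hnew = h[i] + 1
    elif x < rate_ads + 1.0:                  # desorption at an edge, Eq. (2)
        hnew = min(l, h[i], r)
    elif x < rate_ads + 1.0 + p and l == h[i] == r and h[i] > 0:
        hnew = h[i] - 1                       # desorption inside a terrace, Eq. (3)
    else:
        return                                # no process selected (rejection)
    # carry out only if the wall and RSOS constraints are respected
    if hnew >= 0 and abs(hnew - l) <= 1 and abs(hnew - r) <= 1:
        h[i] = hnew
```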
The phase diagram for the case $`q_0=q`$ has been studied in , where a continuous wetting transition was found. Clearly, the moving state is not affected by $`q_0`$ and thus the transition line above which it is stable remains unchanged. However, the stability of the pinned state strongly depends on $`q_0`$, modifying the phase diagram and the nature of the wetting transition. In order to gain some insight into the mechanism leading to first-order transition, we first consider the $`p=1`$ case. Here detailed balance is obeyed , wherefore the transition can be described in the framework of equilibrium statistical mechanics . We then consider the case $`p1`$ numerically.
## B The case $`p=1`$:
For $`p=1`$ and $`q\le 1`$ the dynamic rules satisfy detailed balance and the probability of finding the interface in a configuration $`\{h_1,\dots ,h_N\}`$ can be expressed in terms of a potential $`V(h)`$ by
$$P(h_1,\dots ,h_N)=Z_N^{-1}\mathrm{exp}\left[-\underset{i=1}{\overset{N}{\sum }}V(h_i)\right],$$
(4)
where the partition sum $`Z_N=\sum _{h_1,\dots ,h_N}e^{-\sum _iV(h_i)}`$ runs over all interface configurations obeying the RSOS constraint. The potential is given by
$$V(h)=\{\begin{array}{cc}+\mathrm{\infty }\hfill & \text{if }h<0,\hfill \\ -\mathrm{ln}(q/q_0)\hfill & \text{if }h=0,\hfill \\ -h\mathrm{ln}(q)\hfill & \text{if }h>0.\hfill \end{array}$$
(5)
As shown in the inset of Fig. 1, the attractive interaction between substrate and bottom layer is incorporated as a potential well at zero height. For $`q<1`$ the slope of the potential is positive so that the interface is always pinned to the wall. For $`q>1`$, where the slope is negative, the interface can ‘tunnel’ through the potential barrier and eventually detach from the substrate. It should be noted that in this case, the equilibrium distribution (4) is no longer valid, i.e., the system enters a non-stationary nonequilibrium phase.
The nature of the transition depends on the depth of the potential well. For $`q_0<\frac{2}{3}`$, the potential well is deep enough to bind the interface to the wall at the transition point $`q_c=1`$, giving rise to a localized equilibrium distribution with a discontinuous transition. For $`q_0>\frac{2}{3}`$, no localized solution exists at $`q=1`$ and the transition becomes continuous. The two transition lines are separated by a tricritical point at $`q_0^{*}=\frac{2}{3},q_c=1`$.
In order to prove the existence of the first-order line, we apply a transfer matrix formalism . Defining a transfer matrix $`T`$ acting in spatial direction by
$$T_{h,l}=\{\begin{array}{cc}q/q_0\hfill & \text{if }|h-l|\le 1\text{ and }l=0,\hfill \\ q^l\hfill & \text{if }|h-l|\le 1\text{ and }l>0,\hfill \\ 0\hfill & \text{otherwise},\hfill \end{array}$$
(6)
we compute the eigenvector $`\varphi `$ of $`T`$ corresponding to the largest eigenvalue $`\mu `$, which determines the steady-state properties of the system. For $`q=1`$ the solution reads
$$\mu =(z+1)/q_0,\varphi _0=q_0,\varphi _h=z^h,$$
(7)
where $`h\ge 1`$ and
$$z=\frac{\sqrt{1+2q_0-3q_0^2}}{2(1-q_0)}-\frac{1}{2}.$$
(8)
The stationary density of exposed sites at the bottom layer is given by $`n_0=\varphi _0^2/\sum _{h=0}^{\mathrm{\infty }}\varphi _h^2`$. It is nonzero for $`q_0<\frac{2}{3}`$ and vanishes linearly at the tricritical point. This proves the existence of the first-order phase transition line in Fig. 1.
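The same result can be checked numerically by truncating the transfer matrix (6) at a maximal height and extracting the leading eigenvector. A short Python sketch follows; the cutoff `hmax` is an assumption of ours, and truncation effects become visible as $`q_0`$ approaches $`\frac{2}{3}`$.

```python
import numpy as np

def bottom_density(q0, q=1.0, hmax=300):
    """n0 = phi_0^2 / sum_h phi_h^2 from the transfer matrix (6), truncated at hmax."""
    T = np.zeros((hmax + 1, hmax + 1))
    for h in range(hmax + 1):
        for l in range(max(0, h - 1), min(hmax, h + 1) + 1):  # |h - l| <= 1
            T[h, l] = q / q0 if l == 0 else q**l
    w, v = np.linalg.eig(T)
    phi = np.abs(v[:, np.argmax(w.real)].real)   # Perron (leading) eigenvector
    return phi[0]**2 / np.sum(phi**2)

# n0 stays finite for q0 < 2/3 and vanishes as q0 approaches 2/3:
for q0 in (0.3, 0.5, 0.6, 0.65):
    print(q0, bottom_density(q0))
```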
In Ref. the density $`n_0`$ and the interface width $`w=\langle (h-\langle h\rangle )^2\rangle ^{1/2}`$ at $`q_0=q`$ were found to scale as
$$n_0\sim (q_c-q)^{x_0},\qquad w\sim (q_c-q)^{-\gamma },$$
(9)
with the critical exponents $`x_0=1`$ and $`\gamma =\frac{1}{3}`$. Using the transfer matrix approach, we can prove that these bulk exponents remain valid along the entire second order phase transition line, except for the tricritical point where $`x_0=\gamma =\frac{1}{3}`$. Moreover, approaching the tricritical point from the left along the first order transition line, it can be shown that the two quantities scale as
$$n_0\sim (q_0^{*}-q_0)^{x_0^{\prime }},\qquad w\sim (q_0^{*}-q_0)^{-\gamma ^{\prime }},$$
(10)
where $`x_0^{\prime }=\gamma ^{\prime }=1`$.
## C The case $`p\ne 1`$:
In this case the dynamic rules do not satisfy detailed balance and the model cannot be solved using the previous methods. Performing Monte-Carlo simulations we determined the phase diagrams for various values of $`p`$. For $`p<1`$ we find that the moving and the pinned phases coexist in a whole region of the parameter space rather than just on a line, as is the case for equilibrium first order transitions. As shown in Fig. 2, the coexistence regime for $`p=0.2`$ ends at the tricritical point $`q_0^{*}=0.515(10),q_c=0.6868(2)`$, where the second-order phase transition line starts. Unlike metastable states, the pinned phase is thermodynamically stable inside the coexistence regime, i.e., its lifetime $`\tau `$ grows exponentially with the system size, as shown in the inset of Fig. 2.
For $`p>1`$, however, there is no region of phase coexistence and the phase diagram is similar to that of Fig. 1. For instance, for $`p=2`$ the first-order phase transition line ends at the tricritical point $`q_0^{*}=0.73(1)`$, $`q_c=1.2326(3)`$. For $`p\ne 1`$, we expect that Eqs. (9)-(10) still describe the scaling behavior at the tricritical point, although with different sets of critical exponents.
In order to understand the mechanism leading to phase coexistence for $`p<1`$, let us consider the evolution of a large droplet (an interval where the interface is detached from the bottom layer) in the vicinity of the upper terminal point of the coexistence regime $`q=1,q_0=0`$. Because of the RSOS constraint, the growing droplet eventually reaches an almost triangular shape with unit slope at the edges. The interface of the triangular droplet fluctuates predominantly by diffusion of pairs of sites with equal height. Inspecting the dynamic rules, it is easy to verify that these ‘landings’ of the staircase move upwards with rate $`q`$ and downwards with rate $`1`$. Hence, for $`q>1,q_0=0`$ the droplet is stable with a life-time exponential in its lateral size. For $`q>1`$ and $`q_0>0`$, fluctuations of the bottom layer are biased to move upwards at the edges of the droplet. Thus the droplet grows and the interface eventually detaches from the bottom layer. On the other hand, if $`q_0=0`$ and $`q_c<q<1`$, fluctuations at the top of the triangular droplet are biased to diffuse downwards to the edges. Therefore, the droplet shrinks at constant velocity in a time proportional to its size, ensuring the stability of the pinned phase.
As shown in Fig. 3, this robust mechanism for the elimination of droplets also works for positive values of $`q_0`$. If the interface detaches from the substrate over some distance due to fluctuations, the resulting droplet grows and reaches an almost triangular shape. In the coexistence regime, the droplets are biased to shrink in a time proportional to their size, resulting in a stable pinned phase. However, spontaneously created small islands next to the bottom layer contribute to the broadening of the droplets, reducing the range of phase coexistence. This explains why the upper boundary of the coexistence regime decreases as $`q_0`$ is increased. At the upper boundary the stationary density of exposed sites at the bottom layer $`n_0`$ is found to change discontinuously.
The transition line above which the unbound phase is stable is independent of the growth rate $`q_0`$. This line is the lower curve in Fig. 4, which is common to all four diagrams. For $`q_0`$ smaller than some threshold $`\overline{q}_0`$, the pinned and the unbound phases coexist in a certain region of the phase diagram. As can be seen in the Figure, this region is bounded by two lines which intersect to the right at the equilibrium transition point $`p=q=1`$. For $`q_0<q_{c,0}\approx 0.399`$, this is the only intersection point of the two lines and the phase coexistence region extends down to $`p=0`$. On the other hand, for $`q_0>q_{c,0}`$ the two lines also intersect on the left at another tricritical point, reducing the size of the region of phase coexistence. This region disappears at $`q_0=\overline{q}_0`$. On the basis of our numerical simulations, it is not possible to conclude whether $`\overline{q}_0`$ is equal to or strictly smaller than $`2/3`$.
## D Discussion:
Within a more general framework, the coexistence of the moving and the pinned phase may be viewed as follows. The evolution of the interface may be described in terms of the KPZ equation
$$\partial _th=D\partial _x^2h+\lambda (\partial _xh)^2+\zeta (x,t)-V^{\prime }(h)+v_0$$
(11)
with positive heights $`h(x,t)>0`$, where the velocity $`v_0`$ plays the role of $`qq_c`$. Clearly, for $`\lambda =0`$ the transition takes place at $`v_0=0`$. For $`\lambda >0`$, the nonlinear term of Eq. (11) may be interpreted as an additional force acting on tilted parts of the interface in the direction of growth. This force supports the growth of droplets wherefore the interface detaches for any $`v_0>0`$. However, if $`\lambda <0`$ this force acts against the direction of growth. Consequently, a sufficiently tilted interface does not propagate and may even move downwards. For $`v_0>0`$ this leads to the formation of fluctuating droplets with a triangular shape and a finite slope at the edges. If the short-range force at the bottom layer is strong enough, such droplets, once formed, will shrink at constant velocity. Thus, the moving and the pinned phase can only coexist in those parts of the phase diagram where $`\lambda `$ is negative. In fact, as shown in , $`\lambda `$ is negative along the transition line for $`p<1`$ and changes sign at $`p=1`$.
The phenomenon of phase coexistence was first observed in Toom’s two-dimensional north-east-center voting model . It was also shown that open boundaries in certain one-dimensional diffusive models may exhibit similar phenomena . The model discussed in this work demonstrates that phase coexistence can also emerge in homogeneous one-dimensional driven systems.
We would like to thank M.R. Evans for valuable discussions. The support of the Israel Science Foundation, the Israel Ministry of Science, and the Inter-University High Performance Computation Center is gratefully acknowledged. H.H. would like to thank the Weizmann Institute for hospitality where parts of this work have been done.
# Self-Pulsating Semiconductor Lasers: Theory and Experiment
## I Introduction
Self-pulsating semiconductor lasers (SPSL’s) are of great interest owing to their potential application in telecommunication systems as well as in optical data storage applications. In particular, in the latter case they are realized as so-called narrow–stripe geometry CD lasers where the self-pulsation is achieved via saturable absorption in the transverse dimension limiting the active region. A profound knowledge and understanding of their operation dynamics is therefore desired.
SPSL’s have been studied since the first diode lasers became available in the late 1960s . These first semiconductor lasers, although designed to operate in continuous wave (CW) mode, showed self-induced pulsations of the light intensity due to a combination of two reasons: (i) the laser resonance is internally excited through the nonlinear interaction of various longitudinal laser modes, thus causing mode beating at very high frequency; (ii) defects in the active material act as saturable absorbing areas, thus causing absorptive Q-switching processes.
In the case of self-pulsations caused by saturable-absorbing effects, the self-pulsation frequency (SPF) dependence on the pump current was investigated in . In later works the self-pulsations were attributed to undamped relaxation oscillations (RO) . The precise value of the relaxation oscillation frequency (ROF), as calculated from a small-signal analysis, and the actual SPF, which is highly nonlinear, are however different, the SPF being always smaller than the ROF .
Saturable absorption effects, causing self-pulsations in stripe-geometry lasers, have been investigated since the early 1980s . Saturable absorption is also responsible for self-pulsations in double-section laser diodes . A similar mechanism of dispersive Q-switching has been invoked to describe self-pulsations in multisection Distributed Feedback Lasers .
In this paper we study both experimentally and theoretically the dependence of the self-pulsation frequency (SPF) of narrow–stripe geometry self-pulsating semiconductor lasers, also known as CD–lasers, on the bias pump current. In these lasers, self–pulsation is induced via saturable absorption in the transverse dimension of the active region. The rate–equation model of Ref. has proven quite successful in describing the mechanism of self-pulsation and has already been used to analyze such lasers subject to weak optical feedback . There it was found that, with and without feedback, there are two distinct regions in the SPF vs. pump–current curve, one where spontaneous emission dominates the laser dynamics between pulses and one where spontaneous emission always plays a minor role.
In section II we present detailed measurements of the SPF vs pump–current curve. This curve confirms most of the findings of , and also shows a distinct cross-over point distinguishing between linear and square–root–like behavior. In section III we confront the experimental results with a theoretical model, inspired by Ref. . Its results agree qualitatively well with the experimental results, showing a distinct cross-over region. The location of the cross-over region is shown to be determined by the spontaneous emission rate. In Section IV we discuss the relationship between the SPF and the ROF using a small signal analysis. We discuss the various bifurcations that are predicted by our model, and compare it with the model of Ref. .
## II Experiment
We use a SHARP CD semiconductor laser diode, model LTO22MD. The laser emits a continuous train of regular pulses with a frequency that depends on the bias pump current. A bulk layer of AlGaAs constitutes the active layer of this Fabry-Perot cavity that emits at $`800`$–nm wavelength. The gain section is defined by the p-electrical contact and has the following approximate dimensions: $`250`$$`\mu `$m long, $`2`$$`\mu `$m wide, and $`0.2`$$`\mu `$m thick. A very narrow contact of $`2\mu `$m allows for current injection. Since the region capable of stimulated emission extends to both sides beyond the narrow stripe of the current contact, the wings of the optical field distribution will interact with these unpumped, and therefore absorbing, regions. In fact, these regions are saturably absorbing; when the optical intensity in the wings of the mode is large enough, the electron–hole pair population in the unpumped region reaches transparency, thus allowing a “self–Q–switched” pulse. There is no sharp boundary between the pumped and unpumped regions, making carrier diffusion an important effect. Indeed, in the model of Ref. carrier diffusion between the pumped and unpumped regions is crucial for the appearance of self–pulsation.
The experimental set-up is illustrated in Fig. 1. The laser is temperature controlled by a Peltier cooler at 20 C. It is DC biased by a low noise current supply. Laser emission is collected by an anti-reflection coated $`0.65`$ N.A. laser diode lens. The resulting parallel beam is passed through a $`30`$–dB isolator to avoid spurious effects caused by optical feedback and is launched into a $`60`$–GHz photodiode (New Focus Model 1006). The converted electrical signal is observed with a $`22`$–GHz bandwidth spectrum analyzer (HP 8563A). The typical RF spectrum of the SPSL is characterized by a main peak at the SPF, followed by overtones. The uncertainty on the self-pulsation frequency measurement is mainly due to the measurement of bias current, which has an error of less than $`0.1`$ mA. The resolution of the spectrum analyzer is $`100`$ kHz and the video filter is $`30`$ kHz.
For values near the threshold current, the low power emission makes it difficult to observe the signal. The value of the spectral density of the self-pulsations is very close to the noise level, and the width of the feature in the power spectrum is wider than at higher currents. To overcome this problem, a small current modulation is applied to the device for injection currents below 47 mA . Its power is kept sufficiently low so that it does not affect the oscillation behaviour of the laser and does not induce any supplementary oscillation phenomena, e.g. relaxation oscillation or self-pulsations originating from a cross modulation of the carrier density. The self-pulsation frequency shows up as an enhancement of the oscillation of the laser emission if the two frequencies coincide. This allows an accurate determination of the SPF close to threshold.
Figure 2a shows the optical power and SPF as a function of the bias current. The L-I curve has been recorded using an integrating sphere. It is assumed that all emitted power is collected. The laser is characterized by a threshold current of $`44`$ mA and a slope efficiency of $`0.22`$ mW/mA. The SPF varies from $`1`$ to $`4`$ GHz in a bias current range of $`46`$ to $`64`$ mA, which was the maximum injection current we could reach with these devices. In the region of the lasing threshold the experimental values show a square-root-like dependence reminiscent of the standard relaxation oscillations exhibited by a CW semiconductor laser. For bias currents above 55 mA this dependence was no longer observed and the SPF appears to depend more linearly on the bias current.
## III Theory
In this section we use a simple model to explain the observed bias–current dependence of the SPF. The investigated laser has a narrow–stripe geometry, which can be modeled in a straightforward way using rate-equations for the optical intensity $`S`$ (suitably normalized to represent the number of photons in the cavity), the number of electron–hole pairs $`N_1`$ in the pumped region, and the number of electron–hole pairs $`N_2`$ in the unpumped (absorbing) region:
$`{\displaystyle \frac{dS}{dt}}`$ $`=`$ $`[g_1(N_1-N_{t1})+g_2(N_2-N_{t2})-\kappa ]S+R_{sp}+F_S(t),`$ (2)
$`{\displaystyle \frac{dN_1}{dt}}`$ $`=`$ $`{\displaystyle \frac{J}{e}}-{\displaystyle \frac{N_1}{\tau _s}}-g_1(N_1-N_{t1})S-{\displaystyle \frac{N_1-vN_2}{T_{12}}},`$ (3)
$`{\displaystyle \frac{dN_2}{dt}}`$ $`=`$ $`-{\displaystyle \frac{N_2}{\tau _s}}-g_2(N_2-N_{t2})S+{\displaystyle \frac{N_1/v-N_2}{T_{21}}}.`$ (4)
where $`g_1`$ ($`g_2`$) is the gain coefficient at the transparency number $`N_{t1}`$ ($`N_{t2}`$) in the pumped (unpumped) region, $`\kappa `$ is the total loss rate. $`R_{sp}=\beta _{sp}\eta _{sp}N_1/\tau _s`$ is the spontaneous emission rate, $`\eta _{sp}`$ is the spontaneous quantum efficiency, $`\beta _{sp}`$ is the spontaneous emission factor and $`\tau _s`$ is the carrier lifetime. $`F_S(t)`$ is a delta-correlated Langevin noise source with correlation $`<F_S(t_1)F_S(t_2)>=2R_{sp}S\delta (t_1-t_2)/\tau _s`$, $`J`$ is the bias pump-current, $`e`$ is the elementary charge, $`v=V_1/V_2`$ is the volume ratio of pumped and unpumped region, $`T_{12}`$ is the diffusion time from the pumped region to the unpumped region, and $`T_{21}`$ is the diffusion time from unpumped to pumped region. These two diffusion times are interrelated through the volume ratio $`v`$ :
$$v=\frac{V_1}{V_2}=\frac{T_{12}}{T_{21}}.$$
(5)
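For concreteness, a minimal deterministic integration of Eqs. (2)-(4) (with the Langevin term $`F_S`$ omitted) can be sketched as follows in Python. The routine and its argument names are ours; the parameter tuple must be filled with the values of Table 1, which is not reproduced here, and `Rsp_coef` stands for $`\beta _{sp}\eta _{sp}`$.

```python
import numpy as np

def derivs(y, par):
    """Right-hand side of Eqs. (2)-(4) with F_S = 0."""
    S, N1, N2 = y
    g1, g2, Nt1, Nt2, kappa, Rsp_coef, J, e, tau_s, T12, T21, v = par
    dS = (g1 * (N1 - Nt1) + g2 * (N2 - Nt2) - kappa) * S + Rsp_coef * N1 / tau_s
    dN1 = J / e - N1 / tau_s - g1 * (N1 - Nt1) * S - (N1 - v * N2) / T12
    dN2 = -N2 / tau_s - g2 * (N2 - Nt2) * S + (N1 / v - N2) / T21
    return np.array([dS, dN1, dN2])

def integrate(y0, par, dt=1e-13, nsteps=10**6):
    """Fixed-step fourth-order Runge-Kutta integration of the rate equations."""
    y = np.array(y0, dtype=float)
    traj = np.empty((nsteps, 3))
    for k in range(nsteps):
        k1 = derivs(y, par)
        k2 = derivs(y + 0.5 * dt * k1, par)
        k3 = derivs(y + 0.5 * dt * k2, par)
        k4 = derivs(y + dt * k3, par)
        y = y + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        traj[k] = y
    return traj
```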
Our model (2-4) is a simplification of the model used in Ref. , where the carrier dependence of the carrier lifetime $`\tau _s`$ is taken into account using the well-known second order expression for $`\tau _s^{-1}`$ in the carrier number $`N_j`$. Here, we neglect this dependence for the moment, as it simplifies the analytical work and qualitatively gives similar results.
Using the parameter values listed in Table 1, Eqs. (2-4) are numerically solved with a standard algorithm . In Figure 3 we show the resulting SPF-J curves, with and without spontaneous emission noise. Each value of the curves is calculated from an average over $`10^3`$ pulses. It is seen that the observed kink in the SPF-J curve is the result of spontaneous emission noise. There is a shift of the kink towards larger currents upon increasing the spontaneous emission level. For the values of table 1 and $`\beta _{sp}=1.3\times 10^{-6}`$, $`J_{xover}\approx 82`$ mA. It should be noticed that we do not expect a quantitative agreement between experimental and numerical results, since the model neglects important effects, such as gain saturation. Nevertheless, the qualitative trends are well reproduced, allowing us to physically understand the origin of the experimental features.
Figure 4 shows time traces of the intensity for different bias currents. Clearly, the interpulse intensity drastically increases with current in the vicinity of the kink. For currents $`J\ll J_{xover}`$ the interpulse intensity is dominated by the spontaneous emission (panel a), while for currents $`J>J_{xover}`$ spontaneous emission does not affect the intensity significantly. The kink-current $`J_{xover}`$ can be defined as the highest current at which the interpulse intensity is dominated by spontaneous emission noise. As can be seen in Eq. (2), spontaneous emission increases the intensity generation rate by an amount $`R_{sp}`$. The effect of this on the self–pulsation process depends on the generation rate through stimulated emission $`R_{stim}=[g_1(N_1-N_{t1})+g_2(N_2-N_{t2})]S`$. For currents $`J<J_{xover}`$, $`R_{sp}>R_{stim}`$ in the interpulse region, while for $`J>J_{xover}`$ the contrary happens. Therefore, the kink pump current $`J_{xover}`$ could be mathematically identified through
$$R_{sp}R_{stim}(J_{xover}),$$
(6)
where the current dependence of $`R_{stim}`$ reflects the need to solve Eq. (6) implicitly using all three equations (2-4) at the time at which the intensity reaches the minimum.
The long–dashed curve in Fig. 3 is obtained by putting $`R_{sp}=0`$ in Eq. (2). In that situation, the interpulse intensity becomes extremely small upon decreasing the pump current. The smaller the interpulse intensity becomes, the longer it takes for the absorber to reach transparency. When including noise ($`R_{sp}0`$), the interpulse intensity remains at a much higher level in the same pump current interval because of the spontaneous emission rate $`R_{sp}`$. This will significantly increase the speed with which a new pulse is generated after the previous one has depleted the absorber. We note that the Langevin noise source $`F_S(t)`$ in Eq. (2) is responsible for the timing jitter of the pulses. In the region $`J<J_{xover}`$ a single noise event in between pulses may significantly delay or advance the birth of the next pulse, causing substantial jitter. For pump currents above the cross–over, the relative effect of the noisy events, and hence the jitter, is much smaller. The existence of two pump current regions with very different jitter characteristics was also found in Ref. .
In figure 5 the maximum pulse intensity ($`S_{max}`$) and the minimum interpulse intensity ($`S_{min}`$) vs. the bias current are shown. An abrupt change (note that the scale in panel (c) is logarithmic) of $`S_{min}`$ can be seen at $`J_{xover}`$ (while $`S_{max}`$ takes its maximum value). The kink-current $`J_{xover}`$ is therefore identified as the highest current at which the interpulse intensity is dominated by spontaneous emission noise. The kink also denotes the boundary between two regimes that can be described as follows: For currents larger than $`J_{xover}`$ the self-pulsation has the character of undamped RO, while for currents below this value clear self-Q-switching takes place. Obviously, for currents $`J>J_{xover}`$ the absorber is not depleted deeply enough to cause a Q-switch: as soon as transparency is reached, the absorber is bleached but the pump is strong enough to prevent total bleaching. For currents $`J<J_{xover}`$, the pump is small enough to allow total bleaching of the absorbing regions, after which the number of electron–hole pairs in the absorbing region has to start all over again. No bifurcation in the usual sense can, however, be attributed to this critical current.
In the next section, we will look at the relationship between ROF and SPF in more detail.
## IV Relaxation Oscillations and Self-Pulsations
In the previous section we introduced a simple model which provides an explanation for the peculiar cross-over region in terms of the average level of spontaneous emission. Here we will put our numerical findings in an analytical framework, which leads to a clearer picture of the self-pulsation characteristics.
This is achieved by solving for the CW solutions of Eqs. (2-4) and investigating their stability properties. First we look for laser threshold, which is defined as the circumstance for which the trivial solution ($`S=0`$) looses stability in the absence of spontaneous emission. We therefore put $`R_{sp}=0`$ in Eq. (2) and obtain:
$`N_{th}`$ $`=`$ $`{\displaystyle \frac{g_1N_{t1}+g_2N_{t2}+\kappa }{g_1+\frac{g_2\tau _s}{T_{12}+v\tau _s}}}`$ (7)
$`{\displaystyle \frac{J_{th}}{e}}`$ $`=`$ $`N_{th}\left[{\displaystyle \frac{1}{\tau _s}}+{\displaystyle \frac{1}{T_{12}}}-{\displaystyle \frac{v}{T_{12}}}{\displaystyle \frac{\tau _s}{T_{12}+v\tau _s}}\right]`$ (8)
Using the parameters listed in Table 1, we find $`J_{th}=44.53`$ mA.
In total, Eqs. (2-4) have three possible CW solutions. Below threshold, only the solution with $`S=0`$ is physically meaningful (the other two have negative power). At threshold the solution $`S=0`$ becomes unstable while one of the other two becomes stable with positive power. This is found after performing a standard linear stability analysis, which yields for every CW solution a set of (complex) characteristic exponents $`\lambda =\lambda _r+i\lambda _i`$. When any of these exponents has a positive real part ($`\lambda _r>0`$), the CW solution is unstable. The imaginary part $`\lambda _i`$ denotes the frequency with which perturbations initially will grow. Figure 6 shows how the real parts of the characteristic exponents of the relevant CW solution vary with bias current. The CW solution is found to be unstable on the interval $`44.556\stackrel{<}{}J\stackrel{<}{}\mathrm{\hspace{0.17em}92}`$ mA. For bias currents $`J>92`$ mA, stable CW emission is found. On the other side of the interval, a more complex behavior is found. At $`J=44.53`$ mA, the CW solution is stable, but looses its stability already at $`J=44.556`$ mA. This sequence of bifurcations from the nonlasing ($`S=0`$) state to self pulsation occurs in a very narrow range of currents around threshold. Thus the sequence will be experimentally very hard to resolve due to different noise sources; the laser will seemingly begin to oscillate as soon as it crosses threshold.
Thus, our model (2-4) shows that there exists a CW solution that looses stability at $`J=44.556`$ mA and regains stability at $`J=92`$ mA. In between these values, the CW state is unstable, as indicated by a complex conjugate pair of characteristic exponents with positive real parts (Hopf-instability). The region of instability coincides obviously with the region of self-pulsating behavior, and is bounded by two Hopf-bifurcations. When the laser operates at a bias current $`44.556<J<92`$ mA, small perturbations to the CW state in question initially grow as $`\mathrm{exp}[(\lambda _r+i\lambda _i)t]`$, i.e., with angular frequency $`\lambda _i`$. The linear stability analysis does not provide any information on how this initial growth will saturate. Numerical results from Eqs. (2-4) show that the resulting SPF is always smaller than the ROF $`\lambda _i/2\pi `$. This is illustrated in Fig. 3. Both frequencies meet at $`J=44.556`$ mA and at $`J=92`$ mA, the two Hopf bifurcation points. In the former case it means that the SPF must increase when coming from higher bias currents to reach the RO value. However, this increase only occurs in a very small range of currents so that it would be very hard to observe in the experiment.
It should be noted that a different scenario is found in Ref. . There, the carrier lifetime $`\tau _s`$ is considered to be carrier dependent, to account for the radiative, non-radiative, and Auger processes :
$$\tau _{s,j}^1(N_j)=A_{nr,j}+B_jN_j+C_jN_j^2,$$
(9)
where $`j=1`$ denotes the pumped region and $`j=2`$ denotes the unpumped region. This carrier dependence is considered necessary because during the strong pulsations, large variations in the carrier numbers $`N_j`$ may occur .
It was found in Ref. that the carrier dependence of $`\tau _{s,j}(N_j)`$ plays a significant role around threshold. This is in sharp contrast with the well-known CW edge-emitting lasers where $`N`$ is clamped immediately above threshold. The kink region, lying far above threshold, is not affected significantly by taking into account the carrier dependence of $`\tau _s`$. This illustrates the robustness of the cross-over behavior. At the high end of the self-pulsation interval, also a Hopf bifurcation is found, but the dynamics at the low end differs from the one discussed here. First of all, there is no window of stability just after threshold. Figure 7 shows the location of the various CW solutions as a function of bias pump current. The $`S=0`$ solution (horizontal solid line) is only shown for currents where it is stable. It looses stability at $`J=J_{th}`$. Around $`J=0.85J_{th}`$ two CW solutions are born out of a bifurcation. Both CW solutions are linearly unstable. This is in contrast with the model discussed above where the upper branch CW solution is stable in a short pump interval after its birth. The bifurcation which starts the self-pulsation is not a Hopf one but a homoclinic bifurcation (collision of a limit cycle and saddle). Thus, in the model of Ref. self-pulsation occurs in a region bounded by a Hopf-bifurcation on the high bias side and a homoclinic bifurcation at the low bias side. This type is not uncommon in (passive Q-switching) self-pulsating lasers with saturable absorbers .
## V Conclusions
We have investigated, both experimentally and theoretically, the dependence of the self-pulsation frequency of semiconductor CD lasers upon changes in the bias current. A distinct kink is found in this dependence, which is investigated using a rate–equation model. We have identified that the kink is caused by spontaneous emission, whose average intensity sets a lower bound on the emitted laser intensity and thereby on the average intensity, which determines the relaxation oscillation frequency.
From our analysis we conclude that below the crossover point, the self-pulsations behave as passive Q-switching oscillations while above the crossover the behavior approaches undamped relaxation oscillations.
The relationship between the relaxation oscillation frequency and the self-pulsation frequency is investigated by means of a small signal analysis. We observe that the relaxation oscillation frequency so obtained is an upper limit for the self-pulsating frequency. It is also found that self-pulsation occurs in a bias current interval bounded by two Hopf bifurcations. A small window of stable CW emission is found very close to the laser threshold in the absence of spontaneous emission. The model of Ref. does not show such a window of stable emission terminated by a Hopf bifurcation, but a homoclinic bifurcation is responsible for the onset of the self-pulsating behavior. However, for the lasers we used in the experiment, such differences between the models are irrelevant since they occur in a very small range of currents too close to threshold to be resolved. These results raise an interesting question on the nature of the bifurcation at the lower side of the self-pulsation interval.
Note added: A bifurcation analysis of the Yamada model neglecting interstripe diffusion has been published. The bifurcation scenario is different from the one for our model (2)-(4) (and closer to the one in ). However, these differences in the deterministic behavior have no physical relevance, since they appear in parameter domains for which the dynamics is dominated by noise.
## Acknowledgments
This work was supported by the European Union through the Project HCM CHRX-CT94-0594.
# Pulse-order invariance of the initial-state population in multistate chains driven by delayed laser pulses
## Abstract
This paper shows that under certain symmetry conditions the probability of remaining in the initial state (the probability of no transition) in a chainwise-connected multistate system driven by two or more delayed laser pulses does not depend on the pulse order.
The process of stimulated Raman adiabatic passage (STIRAP) has received a great deal of attention in the past decade because of its potential for efficient and robust population transfer between two states $`\psi _1`$ and $`\psi _3`$ via an intermediate state $`\psi _2`$. STIRAP uses two delayed laser pulses, a pump pulse $`\mathrm{\Omega }_P(t)`$ linking states $`\psi _1`$ and $`\psi _2`$ and a Stokes pulse $`\mathrm{\Omega }_S(t)`$ linking states $`\psi _2`$ and $`\psi _3`$. By applying the Stokes pulse before the pump pulse (counterintuitive order) and maintaining adiabatic-evolution conditions and two-photon resonance between states $`\psi _1`$ and $`\psi _3`$, one ensures complete and smooth transfer of population from $`\psi _1`$ to $`\psi _3`$, regardless of whether the intermediate state is on or off single-photon resonance. Applying the two pulses in the intuitive order \[$`\mathrm{\Omega }_P(t)`$ before $`\mathrm{\Omega }_S(t)`$\] leads to oscillations in the on-resonance case and to STIRAP-like transfer in the off-resonance case. The success of STIRAP has prompted its extension to multistate chainwise-connected systems , where a similar distinction between the intuitive and counterintuitive pulse orders exists.
In view of the great difference in the final-state population for the two pulse orders, surprisingly, the initial-state population has been found to be the same for both orders in the three-state case, provided the Hamiltonian has a certain symmetry . The present paper extends this result to multistate chains. Thus it establishes another similarity between three-state and multistate systems.
The time evolutions of the probability amplitudes $`𝐜(t)=[c_1(t),c_2(t),\mathrm{},c_N(t)]^T`$ of the $`N`$ states satisfy the Schrödinger equation (in units $`\mathrm{}=1`$) ,
$$i\dot{𝐜}(t)=𝐇(t)𝐜(t).$$
(1)
In the rotating-wave approximation the Hamiltonian of the multistate chain is given by the tridiagonal matrix
$$𝐇=\left[\begin{array}{cccccc}0& \mathrm{\Omega }_{12}& 0& \mathrm{\cdots }& 0& 0\\ \mathrm{\Omega }_{12}& \mathrm{\Delta }_2& \mathrm{\Omega }_{23}& \mathrm{\cdots }& 0& 0\\ 0& \mathrm{\Omega }_{23}& \mathrm{\Delta }_3& \mathrm{\cdots }& 0& 0\\ \mathrm{\vdots }& \mathrm{\vdots }& \mathrm{\vdots }& \mathrm{\ddots }& \mathrm{\vdots }& \mathrm{\vdots }\\ 0& 0& 0& \mathrm{\cdots }& \mathrm{\Delta }_{N-1}& \mathrm{\Omega }_{N-1,N}\\ 0& 0& 0& \mathrm{\cdots }& \mathrm{\Omega }_{N-1,N}& 0\end{array}\right].$$
(2)
The system is supposed to have $`N=2n+1`$ states and the Rabi frequencies $`\mathrm{\Omega }_{j,j+1}(t)`$ obey the relations
$`\mathrm{\Omega }_{j,j+1}(t)`$ $`=`$ $`\{\begin{array}{cc}\xi _j\mathrm{\Omega }_P(t),\hfill & j\text{ odd},\hfill \\ \xi _j\mathrm{\Omega }_S(t),\hfill & j\text{ even},\hfill \end{array}`$ (6)
$`\xi _j`$ $`=`$ $`\xi _{N-j},`$ (7)
$`\mathrm{\Omega }_P(t)`$ $`=`$ $`\mathrm{\Omega }_0f(t-\tau ),`$ (8)
$`\mathrm{\Omega }_S(t)`$ $`=`$ $`\mathrm{\Omega }_0f(t+\tau ),`$ (9)
and $`f(-x)=f(x)`$. The functions $`\mathrm{\Omega }_P(t)`$ and $`\mathrm{\Omega }_S(t)`$ describe the envelopes of the two pulses, $`2\tau `$ is the pulse delay, $`\mathrm{\Omega }_0`$ is an appropriate unit of Rabi frequency, and the (constant) relative coupling strengths $`\xi _j`$ are proportional to the corresponding Clebsch-Gordan coefficients. The detunings are supposed to obey the relations
$`\mathrm{\Delta }_j(t)`$ $`=`$ $`\mathrm{\Delta }_{N+1-j}(t),`$ (11)
$`\mathrm{\Delta }_j(-t)`$ $`=`$ $`\mathrm{\Delta }_j(t),\qquad (j=2,3,\ldots ,n+1).`$ (12)
For example, Eqs. (2) and the symmetry conditions (6)–(12) apply to $`(2J+1)`$-state systems ($`J`$ integer), formed by the sublevels in a $`J\leftrightarrow J`$ or $`J\leftrightarrow J-1`$ transition, coupled by two laser pulses $`\mathrm{\Omega }_P(t)`$ and $`\mathrm{\Omega }_S(t)`$ with $`\sigma ^+`$ and $`\sigma ^{-}`$ polarizations .
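As a concrete illustration, the following sketch (my own, not from the original paper; the pulse shape $`f`$, the delay $`\tau `$, the couplings $`\xi _j`$ and the detunings are illustrative choices) builds the tridiagonal Hamiltonian (2) subject to the symmetry conditions (6)–(12):

```python
# Minimal sketch (not the author's code) of the Hamiltonian (2) under the
# symmetry conditions (6)-(12); all parameter values are illustrative.
import numpy as np

def hamiltonian(t, N, xi, delta, omega0=1.0, tau=1.0, f=lambda x: np.exp(-x**2)):
    """Tridiagonal RWA Hamiltonian of an N = 2n+1 state chain at time t.

    xi    : n couplings xi_1..xi_n, mirrored through xi_j = xi_{N-j}
    delta : n detunings Delta_2..Delta_{n+1}, mirrored through Delta_j = Delta_{N+1-j}
            (constants trivially satisfy Delta_j(-t) = Delta_j(t))
    """
    n = (N - 1) // 2
    H = np.zeros((N, N))
    for j in range(1, N):                                 # couplings Omega_{j,j+1}
        xij = xi[j - 1] if j <= n else xi[N - j - 1]      # xi_j = xi_{N-j}
        pulse = f(t - tau) if j % 2 == 1 else f(t + tau)  # pump (j odd) / Stokes (j even)
        H[j - 1, j] = H[j, j - 1] = omega0 * xij * pulse
    for j in range(2, N):                                 # detunings on states 2..N-1
        H[j - 1, j - 1] = delta[j - 2] if j <= n + 1 else delta[N - j - 1]
    return H
```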
I shall show that when conditions (2) and (6)–(12) are satisfied the probability of remaining in the initial state (the probability of no transition) does not depend on the pulse order, i.e., it is invariant upon the interchange of $`\mathrm{\Omega }_P(t)`$ and $`\mathrm{\Omega }_S(t)`$. Since the $`\mathrm{\Omega }_P\leftrightarrow \mathrm{\Omega }_S`$ swap is equivalent to the index change $`j\to N+1-j`$ in $`𝐇(t)`$, the $`\mathrm{\Omega }_P\leftrightarrow \mathrm{\Omega }_S`$ invariance of the population of the initial state $`\psi _j`$ is equivalent to the assertion that for a given pulse order, the probability of remaining in state $`\psi _j`$, provided the system is initially in state $`\psi _j`$, is equal to the probability of remaining in state $`\psi _{N+1-j}`$, provided the system is initially in state $`\psi _{N+1-j}`$. In terms of the transition matrix $`𝐔(+\infty ,-\infty )`$, defined by $`𝐜(+\infty )=𝐔(+\infty ,-\infty )𝐜(-\infty )`$, this invariance means that for any $`j=1,2,\ldots ,n+1`$,
$$U_{jj}(+\infty ,-\infty )=U_{N+1-j,N+1-j}(+\infty ,-\infty ).$$
(13)
The proof of Eq. (13) is carried out in several steps. The first step is to show that the eigenvalues and the eigenstates of $`𝐇(t)`$ have certain symmetric properties. These properties lead to symmetries of the Hamiltonian in the adiabatic basis, which determine certain symmetries of the adiabatic transition matrix, which in turn lead to the property (13) of the diabatic transition matrix.
It follows from Eqs. (2) and (6)–(12) that the $`\mathrm{\Omega }_P\leftrightarrow \mathrm{\Omega }_S`$ swap is equivalent to time reversal in $`𝐇(t)`$,
$$\mathrm{\Omega }_P(t)\leftrightarrow \mathrm{\Omega }_S(t)\text{ is equivalent to }t\to -t.$$
(14)
Hence, since the $`\mathrm{\Omega }_P\leftrightarrow \mathrm{\Omega }_S`$ swap does not change the eigenvalues of the Hamiltonian, $`𝐇(-t)`$ has the same eigenvalues as $`𝐇(t)`$. The eigenvalues $`\lambda _j(t)`$ of $`𝐇(t)`$ are therefore even functions of time,
$$\lambda _j(-t)=\lambda _j(t),\qquad (j=1,2,\ldots ,N).$$
(15)
Since $`𝐇(t)`$ is real and symmetric, its eigenvalues are real and its eigenstates can be chosen real too. The components of the eigenstates (the adiabatic states) $`𝐰^j(t)=[w_1^j(t),w_2^j(t),\ldots ,w_N^j(t)]^T`$ are expressed in terms of $`w_1(t)`$ (for simplicity, the label $`j`$ is omitted for the moment) as
$`{\displaystyle \frac{w_2(t)}{w_1(t)}}`$ $`=`$ $`{\displaystyle \frac{\lambda (t)}{\xi _1\mathrm{\Omega }_P(t)}}\equiv g_2(t),`$
$`{\displaystyle \frac{w_3(t)}{w_1(t)}}`$ $`=`$ $`{\displaystyle \frac{\lambda (t)\left[\lambda (t)-\mathrm{\Delta }_2(t)\right]-\xi _1^2\mathrm{\Omega }_P^2(t)}{\xi _1\xi _2\mathrm{\Omega }_P(t)\mathrm{\Omega }_S(t)}}\equiv g_3(t),`$
$`\mathrm{\ldots },`$
and in terms of $`w_N(t)`$ as
$`{\displaystyle \frac{w_{N-1}(t)}{w_N(t)}}`$ $`=`$ $`{\displaystyle \frac{\lambda (t)}{\xi _1\mathrm{\Omega }_S(t)}}=g_2(-t),`$
$`{\displaystyle \frac{w_{N-2}(t)}{w_N(t)}}`$ $`=`$ $`{\displaystyle \frac{\lambda (t)\left[\lambda (t)-\mathrm{\Delta }_2(t)\right]-\xi _1^2\mathrm{\Omega }_S^2(t)}{\xi _1\xi _2\mathrm{\Omega }_P(t)\mathrm{\Omega }_S(t)}}=g_3(-t),`$
$`\mathrm{\ldots }`$
Generally, one can write $`w_k(t)/w_1(t)=g_k(t)`$ and $`w_{N+1-k}(t)/w_N(t)=g_k(-t)`$. For $`k=n+1`$, one finds $`g_{n+1}(-t)w_N(t)=g_{n+1}(t)w_1(t)`$. It follows that
$`{\displaystyle \frac{w_{N+1-k}(t)}{w_k(t)}}={\displaystyle \frac{g_k(-t)}{g_k(t)}}{\displaystyle \frac{g_{n+1}(t)}{g_{n+1}(-t)}},`$
for any $`k=1,2,\ldots ,n+1`$. Hence
$`w_1(t)`$ $`=`$ $`g_{n+1}(-t)/\nu (t),`$ (16)
$`w_2(t)`$ $`=`$ $`g_2(t)g_{n+1}(-t)/\nu (t),`$ (18)
$`\mathrm{\ldots },`$
$`w_{n+1}(t)`$ $`=`$ $`g_{n+1}(t)g_{n+1}(-t)/\nu (t),`$ (20)
$`\mathrm{\ldots },`$
$`w_{N-1}(t)`$ $`=`$ $`g_{n+1}(t)g_2(-t)/\nu (t),`$ (21)
$`w_N(t)`$ $`=`$ $`g_{n+1}(t)/\nu (t).`$ (22)
The normalization factor $`\nu (t)`$ is obviously invariant upon time reversal, which means that $`\nu (-t)=\nu (t)`$. Equations (22), which are valid for $`g_{n+1}^j(t)\not\equiv 0`$ (case I), lead to the relation (with the label $`j`$ restored)
$$w_k^j(-t)=w_{N+1-k}^j(t),\qquad \text{(case I)},$$
(24)
with $`k=1,2,\ldots ,n+1`$.
If $`g_{n+1}^m(t)\equiv 0`$ (case II) for a certain $`\lambda _m(t)`$, we have $`w_{n+1}^m(t)=0`$ and $`w_{n+2}^m(-t)=-w_n^m(t)`$, which leads to
$$w_k^m(-t)=-w_{N+1-k}^m(t),\qquad \text{(case II)},$$
(25)
with $`k=1,2,\ldots ,n+1`$. Such a case arises for the zero-eigenvalue eigenstate in systems with $`N=3,7,11,\ldots `$ states and zero detunings.
The symmetry relations (24) and (25) for the adiabatic states determine certain symmetries of the Hamiltonian in the adiabatic basis. The transformation from the original (diabatic) basis to the adiabatic basis, $`𝐜(t)=𝐖(t)𝐚(t)`$, is carried out by the orthogonal matrix $`𝐖(t)`$, whose columns are the normalized eigenvectors $`𝐰^j(t)`$. Here $`𝐚(t)=[a_1(t),a_2(t),\ldots ,a_N(t)]^T`$ is the column-vector of the adiabatic probability amplitudes. The Schrödinger equation in the adiabatic basis reads
$$i\dot{𝐚}(t)=𝐇^a(t)𝐚(t),$$
(26)
where $`𝐇^a(t)=𝐇^{\text{adb}}(t)+𝐇^{\text{nonadb}}(t)`$ with
$`𝐇^{\text{adb}}(t)`$ $`=`$ $`𝐖^T(t)𝐇(t)𝐖(t),`$ (28)
$`𝐇^{\text{nonadb}}(t)`$ $`=`$ $`-i𝐖^T(t)\dot{𝐖}(t).`$ (29)
The adiabatic part $`𝐇^{\text{adb}}(t)`$ is a diagonal matrix containing the eigenvalues $`\lambda _j(t)`$ of $`𝐇(t)`$ on the main diagonal. The nonadiabatic part $`𝐇^{\text{nonadb}}(t)`$ has zeros on the main diagonal, while the off-diagonal elements are equal to the nonadiabatic couplings $`H_{jk}^{\text{nonadb}}(t)=-i𝐰^j(t)\dot{𝐰}^k(t)`$. It is readily seen from Eq. (24) that the nonadiabatic coupling between two case-I adiabatic states $`𝐰^j(t)`$ and $`𝐰^k(t)`$ is an odd function of time. Indeed,
$`H_{jk}^{\text{nonadb}}(-t)`$ $`=`$ $`-i{\displaystyle \underset{l=1}{\overset{N}{\sum }}}w_l^j(-t)\dot{w}_l^k(-t)`$ (31)
$`=`$ $`i{\displaystyle \underset{l=1}{\overset{N}{\sum }}}w_{N+1-l}^j(t)\dot{w}_{N+1-l}^k(t)`$ (32)
$`=`$ $`-H_{jk}^{\text{nonadb}}(t),\qquad \text{(case I – case I)}.`$ (33)
The nonadiabatic coupling between a case-I eigenstate $`𝐰^j(t)`$ and a case-II eigenstate $`𝐰^m(t)`$ is an even function,
$$H_{jm}^{\text{nonadb}}(-t)=H_{jm}^{\text{nonadb}}(t),\qquad \text{(case I – case II)}.$$
(34)
The symmetry of $`𝐇^a(t)`$ determines a certain symmetry of the adiabatic transition matrix $`𝐔^a(+\infty ,-\infty )`$, defined as $`𝐚(+\infty )=𝐔^a(+\infty ,-\infty )𝐚(-\infty )`$. In order to find it, I introduce the evolution matrix $`𝐆(t,0)`$ via $`𝐚(t)=𝐆(t,0)𝐚(0)`$. Evidently, the first column of $`𝐆(t,0)`$ is the solution of Eq. (26) for the initial condition $`𝐚(0)=(1,0,0,\ldots ,0)^T`$, the second column is the solution for $`𝐚(0)=(0,1,0,\ldots ,0)^T`$, and so on. When all nonadiabatic couplings are odd functions of time \[Eq. (33)\], time reversal in Eq. (26) is equivalent to complex conjugation of $`𝐚(t)`$ (case A). When a case-II eigenstate $`𝐰^m(t)`$ exists \[then the nonadiabatic couplings involving it are even functions, Eq. (34)\], time reversal in Eq. (26) is equivalent to complex conjugation of $`𝐚(t)`$ and a change of sign of $`a_m(t)`$ (case B). This means that
$$𝐆(-t,0)=\{\begin{array}{cc}𝐆^{\ast }(t,0),\hfill & \text{(case A)},\hfill \\ 𝐈𝐆^{\ast }(t,0)𝐈,\hfill & \text{(case B)},\hfill \end{array}$$
(35)
where $`𝐈`$ is a diagonal matrix with units on its diagonal, except the $`(m,m)`$-th element, which is $`-1`$. It follows from Eq. (35) and the unitarity of $`𝐆`$ that
$`𝐔^a(+\infty ,-\infty )`$ $`=`$ $`𝐆(+\infty ,0)𝐆(0,-\infty )`$
$`=`$ $`𝐆(+\infty ,0)𝐆^{\dagger }(-\infty ,0)`$
$`=`$ $`\{\begin{array}{cc}𝐆(+\infty ,0)𝐆^T(+\infty ,0),\hfill & \text{(case A)},\hfill \\ 𝐆(+\infty ,0)𝐈𝐆^T(+\infty ,0)𝐈,\hfill & \text{(case B)}.\hfill \end{array}`$
Hence
$$\left[𝐔^a(+\infty ,-\infty )\right]^T=\{\begin{array}{cc}𝐔^a(+\infty ,-\infty ),\hfill & \text{(case A)},\hfill \\ 𝐈𝐔^a(+\infty ,-\infty )𝐈,\hfill & \text{(case B)}.\hfill \end{array}$$
(37)
The transition matrices in the diabatic and adiabatic bases are related by
$`𝐔(+\infty ,-\infty )=𝐖(+\infty )𝐔^a(+\infty ,-\infty )𝐖^T(-\infty ).`$
Then one finds from Eqs. (24), (25), and (37) that in both cases A and B,
$`U_{N+1-j,N+1-j}(+\infty ,-\infty )`$
$`={\displaystyle \underset{k,l=1}{\overset{N}{\sum }}}w_{N+1-j}^k(+\infty )U_{kl}^a(+\infty ,-\infty )w_{N+1-j}^l(-\infty )`$
$`={\displaystyle \underset{k,l=1}{\overset{N}{\sum }}}w_j^k(-\infty )U_{lk}^a(+\infty ,-\infty )w_j^l(+\infty )`$
$`=U_{jj}(+\infty ,-\infty ).`$
This completes the proof.
It should be emphasized that the pulse-order invariance applies to the population of the initial state only, while the populations of all other states depend on the pulse order. This is clearly demonstrated in Fig. 1 where the initial-state population $`P_1`$ and the final-state population $`P_5`$ are plotted against the pulse delay $`\tau `$ in the case of a five-state system, initially in state $`\psi _1`$. The figure shows that $`P_5`$ behaves very similarly to STIRAP with a broad plateau of high transfer efficiency for $`\tau >0`$ and oscillations for $`\tau <0`$ . In contrast, $`P_1`$ is a symmetric function of $`\tau `$, as follows from the above results.
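A direct numerical check is easy to set up. The sketch below is my own (all pulse parameters are illustrative, and it reuses the `hamiltonian` helper sketched earlier); it integrates Eq. (1) for a five-state chain with both pulse orders and compares the survival probability $`P_1`$:

```python
# Rough numerical check of the pulse-order invariance for N = 5 (illustrative
# parameters; uses the hamiltonian() sketch above). sign = +1 puts the Stokes
# pulse first (counterintuitive order), sign = -1 the pump first (intuitive).
import numpy as np
from scipy.integrate import solve_ivp

N = 5
xi = np.array([1.0, np.sqrt(1.5)])     # xi_1, xi_2; mirrored via xi_j = xi_{N-j}
delta = np.array([0.5, -0.3])          # Delta_2, Delta_3; constants are even in t

def rhs(t, c, sign):
    H = hamiltonian(t, N, xi, delta, omega0=20.0, tau=sign * 1.0)
    return -1j * (H @ c)

for sign, order in [(+1, "counterintuitive"), (-1, "intuitive")]:
    c0 = np.zeros(N, dtype=complex)
    c0[0] = 1.0
    sol = solve_ivp(rhs, (-8.0, 8.0), c0, args=(sign,), rtol=1e-8, atol=1e-10)
    print(order, "P_1 =", abs(sol.y[0, -1])**2, " P_5 =", abs(sol.y[4, -1])**2)
# The two printed P_1 values agree (up to integration error), while P_5 differs
# markedly between the two orders, as in Fig. 1.
```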
Finally, the pulse-order invariance of the initial-state population has been derived without the assumption of adiabatic evolution. Hence it applies to the general nonadiabatic case, as long as the pulse duration is long enough to validate the rotating-wave approximation.
This work has been supported financially by the Academy of Finland.
|
no-problem/9906/gr-qc9906050.html
|
ar5iv
|
text
|
## 1 Introduction
Since Alcubierre published his ‘warp drive’ spacetime , the proposal has been criticized from various viewpoints by a number of authors . One of the problems, concerning the amount of exotic matter needed to support a warp bubble capable of transporting macroscopic objects , was partially solved in . Another objection, claiming a divergence of quantum fluctuations on a warp drive background, is probably not valid in the general case . However, serious problems remain, and as we will see, it is unlikely that the original ansatz can be modified to circumvent all of them, at least for superluminal bubbles.
In the next section, the warp drive geometry is introduced; in the subsequent sections, we review some of the objections that have been raised. Section 3 deals with the behaviour of quantum fluctuations on a fixed Alcubierre background. In section 4 we discuss the unreasonably high energies macroscopic warp bubbles would need. In section 5 we come to the crucial problem: part of the energy supporting the warp drive moves tachyonically. A summary is given in section 6.
## 2 The Alcubierre spacetime
The warp drive metric is
$$ds^2=-dt^2+(dx-v_s(t)f(r_s)dt)^2+dy^2+dz^2,$$
(1)
with $`r_s=\sqrt{(x-x_s(t))^2+y^2+z^2}`$, and $`v_s=\frac{dx_s}{dt}`$, where $`x_s(t)`$ is the path followed by the center of the warp bubble. The function $`f`$ has the properties $`f(0)=1`$ and $`f(r_s)\to 0`$ as $`r_s\to \infty `$. $`x_s(t)`$ is then a timelike geodesic with proper time equal to the coordinate time outside the bubble. We will assume that $`f`$ has compact support, the region of which we will call the warp bubble. This is a natural assumption, since it implies that the energy densities associated with the geometry do not stretch all the way to spacelike infinity.
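To make the wall structure concrete, the following sketch is my own (assumptions: $`G=c=1`$; Alcubierre's smooth tanh-shaped choice of $`f`$ rather than one of strictly compact support; and the Eulerian-observer energy density $`\rho =-\frac{1}{8\pi }\frac{v_s^2(y^2+z^2)}{4r_s^2}\left(\frac{df}{dr_s}\right)^2`$ as quoted from Alcubierre's original paper). It evaluates the total negative energy, which is concentrated in the bubble wall:

```python
# Sketch (assumptions stated above, not from this paper) of the exotic energy
# in an Alcubierre bubble of radius R and wall steepness sigma, in units G = c = 1.
import numpy as np

R, sigma, vs = 100.0, 0.5, 2.0

def f(r):
    return (np.tanh(sigma * (r + R)) - np.tanh(sigma * (r - R))) / (2 * np.tanh(sigma * R))

def dfdr(r, h=1e-5):
    return (f(r + h) - f(r - h)) / (2 * h)

r = np.linspace(1e-3, R + 20 / sigma, 4000)
th = np.linspace(0.0, np.pi, 400)
rr, tt = np.meshgrid(r, th, indexing="ij")
# y^2 + z^2 = r^2 sin^2(theta), so the r^2 factors cancel in rho;
# volume element dV = 2 pi r^2 sin(theta) dr dtheta
rho = -(vs**2 / (8 * np.pi)) * (np.sin(tt)**2 / 4.0) * dfdr(rr)**2
E = np.sum(rho * 2 * np.pi * rr**2 * np.sin(tt)) * (r[1] - r[0]) * (th[1] - th[0])
print(E)   # negative; for a thin wall E ~ -vs^2 R^2 sigma / 12
```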
At first sight there is no speed limit, in the sense that if $`v_s>1`$, a particle moving along $`x_s(t)`$ would be able to outrun a photon moving in the Minkowskian part of spacetime. This is also characteristic of traversable wormholes , but unlike wormholes, the warp drive does not need non–trivial topology. However, it will become clear that as soon as the bubble goes superluminal, the geometry (1) develops unphysical features, not all of which can be mended by simple modifications of the spacetime.
A problem the warp drive has in common with traversable wormholes is a violation of the energy conditions of general relativity. If quantum field theory (QFT) is introduced, this is no longer a crucial problem; for example, the well–known Casimir effect violates the Weak Energy Condition (WEC)<sup>1</sup><sup>1</sup>1Helfer et al. argued that this would not be true in realizable experimental set–ups due to the properties of known materials, but their results do not in principle rule out a Casimir WEC violation.. As has been known for a long time, QFT on curved spacetimes indicates that spacetime curvature itself could cause violations of the energy conditions, and recently a class of wormholes was found which would self–stabilize, in the sense that the negative energy (‘exotic matter’) densities needed to sustain the wormhole geometry would arise from vacuum fluctuations of conformal fields due to the curvature of the wormhole geometry itself .
## 3 Quantum fields on an Alcubierre background
Hiscock argued that the energy density due to fluctuations of conformally coupled quantum fields would diverge at particle horizons within the bubble, which are present as soon as $`v_s>1`$. The calculation was only performed for the $`1+1`$ dimensional version of the warp drive geometry, but it is reasonable to assume that a similar phenomenon would occur in four dimensions . Hiscock’s calculations involved a coordinate transformation making the warp drive spacetime manifestly static. In two dimensions, the geometry turned out to be similar to that of a 2–dimensional black hole. Calculations of the stress–energy tensor of a conformal field living on such a background indicate that if the field has reached thermal equilibrium, its temperature at spacelike infinity must be equal to the Hawking temperature of the black hole, otherwise vacuum fluctuations would diverge strongly at the horizon. In the case of the constant velocity warp drive, the temperature of the horizon would never be equal to that of a field on the background, from which a divergence was inferred. However, it is questionable that such a divergence would be present if the warp bubble had gone superluminal and developed horizons a finite time in the past, a situation comparable to that of a newly formed black hole not in thermal equilibrium with the field at infinity.
## 4 Unreasonably high energies
Ford and Roman suggested an uncertainty–type principle which places a bound on the extent to which the WEC is violated by quantum fluctuations of scalar and electromagnetic fields: The larger the violation, the shorter the time it can last for an inertial observer crossing the negative energy region. This so–called quantum inequality (QI) can be used as a test for the viability of would–be spacetimes allowing superluminal travel. By making use of the QI, Ford and Pfenning were able to show that a warp drive with a macroscopically large bubble must contain an unphysically large amount of negative energy. This is because the QI restricts the bubble wall to be very thin, and for a macroscopic bubble the energy is roughly proportional to $`R^2/\mathrm{\Delta }`$, where $`R`$ is a measure for the bubble radius and $`\mathrm{\Delta }`$ for its wall thickness. It was shown that a bubble with a radius of 100 meters would require a total negative energy of at least
$$E\le -6.2\times 10^{62}v_s\text{kg},$$
(2)
which, for $`v_s\approx 1`$, is ten orders of magnitude bigger than the total positive mass of the entire visible Universe.
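The order of magnitude in Eq. (2) can be reproduced by a one-line estimate. The sketch below is mine, with two inputs quoted from Ford and Pfenning rather than derived here: the thin-wall energy $`E-v_s^2R^2/(12\mathrm{\Delta })`$ in geometrized units, and the QI wall-thickness bound of roughly $`10^2v_s`$ Planck lengths.

```python
# Back-of-envelope reproduction of Eq. (2). Assumed inputs (both quoted from
# Ford and Pfenning): E ~ -vs^2 R^2 / (12 Delta) in geometrized units, and the
# QI wall-thickness bound Delta ~ 10^2 vs Planck lengths.
G, c, L_planck = 6.674e-11, 2.998e8, 1.616e-35   # SI units
vs, R = 1.0, 100.0                               # bubble speed (units of c), radius in m
Delta = 1e2 * vs * L_planck                      # maximal wall thickness allowed by the QI
E_geom = -(vs**2) * R**2 / (12 * Delta)          # energy in meters (geometrized units)
print(E_geom * c**2 / G, "kg")                   # ~ -7e62 kg, matching Eq. (2) within O(1)
```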
In , it was shown that this number is very much dependent on the details of the geometry. The total energy can be reduced dramatically by keeping the surface area of the warp bubble itself microscopically small, while at the same time expanding the spatial volume inside the bubble. The most natural way to do this is the following:
$$ds^2=-dt^2+B^2(r_s)\left[(dx-v_s(t)f(r_s)dt)^2+dy^2+dz^2\right].$$
(3)
$`B(r_s)`$ is a twice differentiable function such that, for some $`\stackrel{~}{R}`$ and $`\stackrel{~}{\mathrm{\Delta }}`$,
$`B(r_s)=1+\alpha `$ for $`r_s<\stackrel{~}{R},`$
$`1<B(r_s)\le 1+\alpha `$ for $`\stackrel{~}{R}\le r_s<\stackrel{~}{R}+\stackrel{~}{\mathrm{\Delta }},`$
$`B(r_s)=1`$ for $`\stackrel{~}{R}+\stackrel{~}{\mathrm{\Delta }}\le r_s,`$ (4)
where $`\alpha `$ will in general be a very large constant; $`1+\alpha `$ is the factor by which space is expanded. For $`f`$ one chooses a function with the properties
$`f(r_s)=1`$ for $`r_s<R,`$
$`0<f(r_s)\le 1`$ for $`R\le r_s<R+\mathrm{\Delta },`$
$`f(r_s)=0`$ for $`R+\mathrm{\Delta }\le r_s,`$
where $`R>\stackrel{~}{R}+\stackrel{~}{\mathrm{\Delta }}`$.
A spatial slice of the geometry one gets in this way can be easily visualized in the ‘rubber membrane’ picture. A small Alcubierre bubble surrounds a neck leading to a ‘pocket’ with a large internal volume, with a flat region in the middle. It is easily calculated that the center $`r_s=0`$ of the pocket will move on a timelike geodesic with proper time $`t`$.
Using this scheme, the required total energy can be reduced to stellar magnitude, in such a way that the QI is satisfied. On the other hand, the energy densities are still unreasonably large, and the spacetime has structure with sizes only a few orders of magnitude above the Planck scale.
## 5 Energy moving locally faster than light
The most problematic feature of the warp drive geometry is the behaviour of the negative energy densities in the warp bubble wall . If the Alcubierre spacetime is taken literally, part of the exotic matter will have to move superluminally with respect to the local lightcone. It is easy to see that all exotic matter outside some surface surrounding the center (let us call this the critical surface), will move in a spacelike direction. For $`v_s>1`$, there has to be exotic matter outside the critical surface, since the function $`f`$ must reach the value $`0`$ for some $`r_s`$ (which, of course, can be infinity), and the negative energy density is proportional to $`\left(\frac{df}{dr_s}\right)^2`$ for an ‘Eulerian observer’ . As noted in , the Alcubierre spacetime is an example of what can happen when the Einstein equations are run in the ‘wrong’ direction, first specifying a metric, then calculating the associated energy–momentum tensor.
The problem of tachyonic motion can be interpreted as meaning that part of the necessary exotic matter is not able to keep up with the rest of the bubble: if one tried to make a warp bubble go superluminal, the outer shell would be left behind, destroying the warp effect.
It is conceivable that the problem can be circumvented, for example by letting the distribution of exotic matter expand into a ‘tail’ in the back. It may be possible to do this in a way compatible with both the QI and the Quantum Interest Conjecture introduced in , which states that a pulse of negative energy must always be followed by a larger pulse of positive energy. However, it is unlikely that one can get rid of tachyonic motion of the exotic matter without introducing a naked curvature singularity in the front of the bubble .
## 6 Summary
It would seem that the main problems of the warp drive cannot be solved without retaining some unphysical features or introducing new ones, such as high energy densities, curvature radii of the order of the Planck length, and naked singularities. We have limited the discussion to superluminal warp bubbles. Subluminal bubbles are still an open possibility, and it is not inconceivable that microscopic ones might even occur naturally. Due to the absence of horizons, potential problems due to diverging vacuum fluctuations will not arise, and there will be no tachyonic motion of exotic matter. Possibly the geometry can be chosen in such a way that the necessary negative energy densities are partly supplied by the changes in vacuum fluctuations induced by the curvature, as in . An interesting question is whether one can construct a spacetime similar to the subluminal warp drive which avoids negative energy densities altogether, but no ansatz for this is available at present.
## Acknowledgements
I have benefited from the interesting comments of M. Alcubierre, D.H. Coule, and S.V. Krasnikov. It is a pleasure to thank the European Office for Aerospace Research and Development for financial support.
|
no-problem/9906/cs9906023.html
|
ar5iv
|
text
|
# Computational Geometry Column 35
## Abstract
The subquadratic algorithm of Kapoor for finding shortest paths on a polyhedron is described.
A natural shortest paths problem with many applications is: Given two points $`s`$ and $`t`$ on the surface of a polyhedron of $`n`$ vertices, find a shortest path on the surface from $`s`$ to $`t`$. This type of within-surface shortest path is often called a geodesic shortest path, in contrast to a Euclidean shortest path, which may leave the 2-manifold and fly through 3-space. Whereas finding a Euclidean shortest path is NP-hard \[CR87\], the geodesic shortest path may be found in polynomial time. After an early $`O(n^5)`$ algorithm \[OSB85\], an $`O(n^2\mathrm{log}n)`$ algorithm was developed that used a technique the authors dubbed the continuous Dijkstra method \[MMP87\]. This simulates the continuous propagation of a wavefront of points equidistant from $`s`$ across the surface, updating the wavefront at discrete events. It was another decade before this result was improved, by a clever $`O(n^2)`$ algorithm that does not track the wavefront \[CH96\]. This latter algorithm is simple enough to invite implementations, and several have appeared. Fig. 1 shows an example of using one implementation to find the shortest paths from $`s`$ to each vertex of a convex polyhedron.
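All of these algorithms rest on the same primitive: unfolding adjacent faces into a common plane, where a geodesic crossing the shared edge becomes a straight segment. A minimal sketch of that primitive (my own, not taken from any of the cited implementations):

```python
# Unfolding primitive: rotate triangle tri2 about its shared edge with tri1
# into the plane of tri1. A geodesic crossing the edge is then the straight
# segment between the (unfolded) endpoints.
import numpy as np

def unfold(tri1, tri2, e0, e1):
    """tri1, tri2: 3x3 arrays of vertices sharing the edge (e0, e1)."""
    n1 = np.cross(tri1[1] - tri1[0], tri1[2] - tri1[0])
    n2 = np.cross(tri2[1] - tri2[0], tri2[2] - tri2[0])
    n1, n2 = n1 / np.linalg.norm(n1), n2 / np.linalg.norm(n2)
    axis = (e1 - e0) / np.linalg.norm(e1 - e0)
    # signed dihedral angle taking the plane of tri2 onto the plane of tri1
    angle = np.arctan2(np.dot(np.cross(n2, n1), axis), np.dot(n1, n2))
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    Rot = np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)  # Rodrigues
    return (Rot @ (tri2 - e0).T).T + e0
```

Iterating this unfolding across a sequence of faces is how candidate geodesics are generated and compared in both the continuous Dijkstra and Chen–Han style algorithms.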
Although other geometric shortest path problems saw the breaking of the quadratic barrier (see \[Mit97\]), paths on polyhedra resisted. One impediment is evident from Fig. 1: even on a convex polyhedron, there can be $`\mathrm{\Omega }(n^2)`$ crossings between polyhedron edges and paths to the vertices. So any algorithm that maintains these paths and treats edge-path crossings as events will be quadratic in the worst case. The continuous Dijkstra paradigm faces a similar dilemma: Examples exist for which there are $`\mathrm{\Omega }(n^2)`$ wavefront arc-edge crossings. These obstacles have recently been surmounted by a new algorithm by Sanjiv Kapoor that achieves $`O(n\mathrm{log}^2n)`$ time complexity \[Kap99\].
Kapoor’s algorithm follows the wavefront propagation method, and is surprisingly similar in overall structure to the original continuous Dijkstra algorithm \[MMP87\].
The algorithm maintains two primary geometric objects throughout the processing: the wavefront itself, $`W`$, which is a sequence of circular arcs, each centered on either $`s`$ or a vertex of the polyhedron (where paths may turn on nonconvex polyhedra); and a collection $`B`$ of boundary edges, edges of the polyhedron yet to be crossed by the wavefront. Both of these have size $`O(n)`$. Elements of $`W`$ and elements of $`B`$ are related and grouped by a nearest neighbor relation: $`eB`$ is associated with arc $`aW`$ if $`a`$ is closer to $`e`$ than to any other arc in $`W`$. Boundary edges associated with one arc are grouped into a boundary section, and arcs associated with one boundary edge are grouped into a wavefront section. It is this grouping that permits avoiding the quadratic number of arc-edge crossing events. The number of wavefront section-edge events is only $`O(n)`$.
There remains another quadratic quagmire to be skirted: Identifying the next event requires computing the distance from an edge to a wavefront potentially composed of $`n`$ arcs. Kapoor handles this by building a hierarchical convex hull structure for both the wavefront sections and the boundary sections. Subhulls are connected by tangent bridges; internal nodes store an “alignment angle” that represents the unfolding relationship between sibling hulls. These structures permit computing the distance between a $`W`$-section and a $`B`$-section in (essentially) logarithmic time. Updating the data structures consumes $`O(\mathrm{log}^2n)`$ amortized time per event, which leads to the final $`O(n\mathrm{log}^2n)`$ time complexity.
The details are formidable, and implementation will be a challenge. But the many applications and the significant theoretical improvement suggest implementations will follow eventually.
|
no-problem/9906/nucl-ex9906013.html
|
ar5iv
|
text
|
# Centrality Dependence of Directed and Elliptic Flow at the SPS
## 1 Motivation
In the Fourier decomposition of the azimuthal distribution of particles, the first and second coefficients correspond to the directed, $`v_1`$, and elliptic, $`v_2`$, flow, respectively . The elliptic flow is expected to be sensitive to the system evolution at the time of maximum compression . Ollitrault showed that in a hydro model the elliptic flow is proportional to the initial space elliptic anisotropy of the overlapping region weighted by the number of nucleon collisions in the beam direction. This initial space elliptic anisotropy, which we will call $`\epsilon `$, has been calculated for a Woods-Saxon density distribution and shown to be almost insensitive to the nucleon-nucleon cross section . It is enlightening to plot $`v_2/\epsilon `$ versus centrality in order to look for changes in the reaction mechanism or properties of the nuclear matter. Thus the motivation for this work is to find a signature (elliptic flow), scan this signature as a function of a control parameter (centrality), and, after first dividing out the geometry of the initial state, look for a change in the physics (unexpected behavior).
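Operationally, $`v_1`$ and $`v_2`$ are the first two Fourier coefficients of the azimuthal distribution with respect to the reaction plane angle $`\mathrm{\Psi }_{RP}`$, i.e. $`v_n=\left\langle \mathrm{cos}n(\varphi -\mathrm{\Psi }_{RP})\right\rangle `$. A toy sketch (my own illustration, not NA49 analysis code; a real analysis must estimate the event plane from the data and correct for its resolution):

```python
# Toy extraction of v1 and v2 from particle azimuths, assuming a known
# reaction plane; dN/dphi ~ 1 + 2 v1 cos(phi - Psi) + 2 v2 cos 2(phi - Psi).
import numpy as np

def flow_coefficients(phi, psi_rp):
    v1 = np.mean(np.cos(phi - psi_rp))
    v2 = np.mean(np.cos(2.0 * (phi - psi_rp)))
    return v1, v2

rng = np.random.default_rng(0)
phi = rng.uniform(-np.pi, np.pi, 200000)
w = 1 + 2 * 0.02 * np.cos(phi) + 2 * 0.05 * np.cos(2 * phi)   # true v1=0.02, v2=0.05
phi = phi[rng.uniform(0.0, w.max(), phi.size) < w]            # accept-reject sampling
print(flow_coefficients(phi, 0.0))                            # ~ (0.02, 0.05)
```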
## 2 Experiment
NA49 has published directed and elliptic flow results from the NA49 Main Time Projection Chambers for a set of data taken with a medium impact parameter trigger . We now have a new set of data taken with a minimum bias trigger, so that we can study the flow centrality dependence. Also, the tracks from the Main and Vertex TPCs are combined, resulting in full coverage of the forward hemisphere. The data in the graphs below presenting flow as a function of rapidity have been reflected about mid-rapidity. The data have been integrated over $`p_t`$ and in some cases also over $`y`$, using as weights the measured double differential cross sections . The data have been sorted into six centrality bins using the Zero Degree Calorimeter, with “cen1” being the most central and “cen6” the most peripheral. The impact parameter values for these bins have been estimated from the number of participants, which were obtained by integrating the yields . Slightly higher values of $`b`$, used in the oral presentation of this paper, were determined from the fraction of the total cross section corresponding to each bin. Only some of the available data has been analyzed so far. Thus these data are preliminary and no systematic errors have been included yet.
## 3 Results
The rapidity dependence of directed and elliptic flow integrated over the whole range of measured impact parameters up to about 11 fm is shown in Fig. 2. The pion $`v_1`$ values hug the axis near mid-rapidity, and the $`v_2`$ values for both pions and protons appear to peak slightly away from mid-rapidity. For pions the $`v_1`$ and $`v_2`$ values are shown for different centrality bins in Fig. 2. Both sets of flow values increase continuously as the reaction becomes more peripheral. The elliptic flow values for pions have been integrated over rapidity up to $`y=6`$ and are shown in Fig. 4, together with simulations from RQMD v2.3 . The flow from RQMD peaks at a medium impact parameter, whereas the flow from experiment continues to rise. In Fig. 4 the $`v_2`$ values have been divided by the initial space elliptic anisotropy . In addition, results from RQMD v3.0, which includes a phase transition, are shown. Typical hydro results are also shown. The data are below hydro, indicating a lack of complete equilibration in the reaction . The data are above the RQMD resonance gas and tantalizingly close to the RQMD phase transition calculation. Clearly, it is important to process the full set of NA49 data and obtain final results.
|
no-problem/9906/astro-ph9906214.html
|
ar5iv
|
text
|
# Reddening of microlensed LMC stars vs. the location of the lenses
## 1 Introduction
One of the main puzzles of Galactic microlensing surveys is the poorly determined location of the lens population of the events towards the Magellanic Clouds. Currently there are two popular views on the issue: (a) the lenses are located in the halo, hence are likely baryonic dark matter candidates (Alcock et al. 1997); (b) both the lenses and sources are part of the Magellanic Clouds, hence are stars orbiting in the potential well of the Clouds (Sahu 1994, Wu 1994, Zhao 1998a,b, 1999a,b, Weinberg 1999). The amount of star-star lensing is sensitive to assumptions of the structure and equilibrium of the Magellanic Clouds (Gould 1995, Zhao 1998a, Aubourg et al. 1999, Salati et al. 1999, Gyuk & Gates 1999, Gyuk, Dalal & Griest 1999, Evans & Kerins 1999). For star-star self-lensing in the LMC to be efficient, the LMC should be fairly thick in the line of sight. To break the degeneracy of the models, we should design observations which are sensitive to the location of the lens and the thickness of the LMC. Several lines of attack have been proposed in Zhao (1999a). For example, a direct signature of self-lensing of a dense, but extended stellar component is that the lensed stars should be at the far side of the component, hence somewhat fainter than the unlensed ones. There is tantalizing evidence for this distance effect playing a role both in the events towards the Galactic bulge/bar, in particular the two clump giant events OGLE-BLG-3 and OGLE-BLG-10 (Stanek 1995), and in the events towards the LMC, particularly the clump giant event MACHO-LMC-1 (cf. Zhao et al. 1999). Unlike the end-on cigar-shaped Galactic bar, the LMC is an irregular disk galaxy and it is close to face-on, so its front-to-back thickness is hard to resolve with photometric or trigonometric parallax if the LMC is indeed thin: a $`\sim 500`$ pc spread in the line of sight translates to $`\sim 0.02`$ mag in distance modulus, or $`\sim 0.2`$ micro arcsec in parallax.
Here we propose a more practical test for the above two popular models of the location of the lenses. We propose to measure the distribution of the reddening of individual LMC stars in small patches of sky centered on the microlensed stars. Basically, a kind of “reddening parallax” can be derived for these stars from the line-of-sight depth effect, i.e., the dust layer in the LMC makes stars behind the layer systematically redder than those in front of the layer. This is a variation of the well-known technique of differentiating the near/far side and the trailing/leading of a spiral arm using the dust lane that runs across a close-to-edge-on spiral galaxy. Our method involves obtaining multi-band photometry and/or spectroscopy of fairly faint (19–21 mag) stars during or well after microlensing.
After a brief account of reddening in the LMC (§2), we describe our basic argument about the excess reddening of microlensed sources in §3. We discuss several complications of the method (e.g., the patchiness of dust) in §4. We model the dependence on the thickness of the dust layer in §5. We summarize the results and the observational strategy in §6.
## 2 Dust layer of the LMC and the Galactic foreground
The internal extinction in the LMC is fairly small because of its close to face-on geometry and it is patchy. Hence it is a subtle effect that we propose to measure. Internal extinction of the LMC has been studied many times in the past (e.g., Hill et al. 1994, Oestreicher & Schmidt-Kaler 1996). Harris et al. (1997) select a sample of a few thousand OB stars from their LMC UBVI multi-band photometric survey, and they map out dust patches in the LMC. They find that extinction largely follows a thin disk with a FWHM of about $`100`$–$`200`$ pc, and $`E(B-V)=0.2`$ mag, averaged over the whole LMC including the extinction by the Galactic foreground. The internal extinction (discounting Galactic foreground) in the optical U band
$$A_U=4.72E(B-V)=0.6\mathrm{mag}.$$
(1)
Stronger absorption is expected in the ultra-violet.
While dust distribution in the LMC and the Galactic foreground is known to be very clumpy and extinction is patchy, there is surprisingly little variation on sub-pc scales. Harris et al. find strong variations of reddening among members of OB associations within a few arcmins of each other in the LMC (their Figure 13). About 5% of the lines of sight have low-extinction “holes”. The typical size of the dust patches is between $`1^{}`$ and $`10^{}`$; $`1^{}`$ is about $`15`$ pc at the LMC’s distance.
The Galactic foreground extinction towards the LMC has also been mapped out by Oestreicher et al. (1995) with foreground stars. They find reddening varies from 0 to $`E(B-V)=0.15`$ mag across the surface of the LMC with a mean at $`E(B-V)=0.06`$ mag. Nevertheless the dust patches appear mostly larger than 30’, which translates to about 2 pc for a line of sight path of 200 pc in the thin layer of Galactic dust.
Extinction has also been studied in great detail towards Baade’s window ($`l=1^o`$, $`b=-4^o`$), a relatively clear field near the Galactic center. Stanek (1996) uses red clump giants in the Galactic bulge from the OGLE microlensing survey to map out the extinction, which is due to dust patches within about $`2`$ kpc of the Sun (Arp 1965). The extinction $`A_V`$ varies from about $`1.3`$ mag to $`2.5`$ mag (99% confidence interval) across a $`40^{}\times 40^{}`$ field. While dust patches of $`10^{}`$ are clearly visible, we find (cf. Fig. 1) that the variation of $`A_V`$ between neighbouring patches of $`30^{\prime \prime }`$ to $`3^{}`$ is rarely more than 20%-30%. The only modest variation on these scales in this line of sight means few clouds of size $`0.3`$ pc to $`2`$ pc.
## 3 Basic signal
Consider the effects of placing a thin layer of dust in the mid-plane of a relatively thicker stellar disk of the LMC (cf. Fig. 2); thin stellar disk models are less interesting because they do not provide enough microlensing events (Gould 1995). Here we assume a uniform stellar disk of the LMC, much thicker than the clumpy dust layer; their thicknesses are $`\sim 1000`$ pc and $`\sim 100`$ pc, respectively.
Now suppose that all current 30 microlensing events towards the LMC are due to machos; then we would observe about $`(15\pm 4)`$ lensed sources in front of the dust layer with negligible reddening, and $`(15\pm 4)`$ sources behind with some measurable amount of reddening.
In comparison, if the lenses were in the LMC disk, then there would be a higher probability, say $`1-p`$, of finding sources behind the dust layer than the probability, $`p`$, of finding sources in front. It is a simple calculation to show that $`p=1/8=12.5\%`$ for a uniform slab model of the stellar disk of the LMC, and $`p\approx 15\%`$ for a $`\mathrm{sech}^2`$-disk. So star-star self-lensing models would predict only about $`(4\pm 2)`$ stars out of the 30 stars to be in front of the dust layer with only the Galactic foreground reddening (cf. Fig. 2).
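These probabilities are quick to verify numerically. The sketch below is my own; it works in the thin-LMC limit, where the geometric factor $`(D_s-D_l)D_l/D_s`$ of the lensing cross section reduces to the line-of-sight separation $`D_s-D_l`$:

```python
# Quick check (thin-LMC limit) of the quoted self-lensing probabilities p
# that a microlensed source sits in front of the mid-plane dust layer.
import numpy as np

def p_front(nu, z):
    """Stars (= lenses) have vertical profile nu(z); z grows away from us,
    with the thin dust layer at z = 0."""
    dz = z[1] - z[0]
    dens = nu(z)
    # optical depth to a source at depth z_s: tau ~ int nu(z_l) (z_s - z_l) dz_l
    tau = np.array([np.sum(dens[:i] * (z[i] - z[:i])) * dz for i in range(z.size)])
    w = dens * tau                      # microlensed sources ~ nu(z_s) tau(z_s)
    return np.sum(w[z < 0.0]) / np.sum(w)

z = np.linspace(-8.0, 8.0, 4001)
print(p_front(lambda z: (np.abs(z) < 1.0).astype(float), z))  # uniform slab: 1/8
print(p_front(lambda z: np.cosh(z) ** -2, z))                 # sech^2 disk: ~0.15
```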
That self-lensing models grossly under-predict the number of microlensed stars with only the Galactic foreground reddening is the main signal that differentiates them from macho-lensing models. A difference at about the $`3\sigma `$ level is expected for the current 30 or so microlensing events. It appears that a long-term study of the reddening distribution of microlensed stars towards the Magellanic Clouds can set a firm limit on self-lensing and the fraction of machos in the halo; a quantitative analysis is given in Zhao (1999c).
## 4 Practical issues and solutions
The above arguments are robust. The key condition is that any measurement error of the reddening is small enough to separate stars with only Galactic foreground reddening from those with Galactic plus LMC reddening confidently (cf. Fig. 2). The arguments apply to a variety of star and patchy extinction distributions with a few conditions.
* They are insensitive to the thickness and vertical profile of the stellar disk as long as it is much thicker than the dust layer. This is to be examined in detail in §5.
* They are insensitive to the patchiness of the dust layer of the LMC as long as the LMC dust layer has very few “holes”; a star behind such a hole can be confused with one in front of the hole as far as reddening is concerned; this happens perhaps about 5% of the time (Harris et al.).
* They are insensitive to the patchiness of the dust layer of the Galaxy as long as the Galactic foreground reddening is indeed smooth on $`10^{}`$ scale (Oestreicher et al. 1995).
* They are insensitive to self-extinction in a very localized dusty cocoon as long as we avoid mass-losing AGB stars and early-type stars in star-forming regions; better choices might be Red Giant Branch and Clump stars and late A–F type bright main sequence stars, since they are generally old enough to drift away from the dusty cocoons at their birth place.
To check the validity of these conditions, we can use random unlensed stars in the immediate neighbourhood of the lensed stars to map out the dust patches in the LMC and Galactic foreground. Polarization maps or existing HI and CO maps of the LMC would also be helpful for this purpose. This way we can identify and stay away from regions with unmeasurably low extinction. We should exclude microlensing candidates which happen to fall in low-extinction holes with unmeasurable difference between stars in front and stars behind the LMC dust layer. We can then apply the Galactic foreground subtraction to individual stars in the remaining sample. The reddening distribution of microlensed stars can then be analyzed for signs of deficiency of “reddening-free” stars, an indication of self-lensing.
Existing photometry of the microlensing survey fields is typically in one or two broad passbands. This is generally not enough for an accurate determination of reddening. Reddening can be determined by constructing reddening-free indices with photometry in three to seven broad bands, or with low resolution spectroscopy; e.g., Terndrup et al. (1995) show that reddening towards the Baade’s window of the Galactic bulge can be derived from the $`H_\beta `$ index. Typical accuracy is about $`0.02`$–$`0.05`$ mag in $`E(B-V)`$ with these methods.
A practical definition of low-extinction holes might be regions with LMC internal extinction $`E(B-V)\le 0.05`$ mag; these regions cover perhaps of order 10% of the surface of the LMC. Harris et al. show that the reddening of individual OB stars can be determined with UBVI photometry to about $`\sigma (B-V)=0.04`$ mag, or about 30% of the average internal extinction of the LMC disk $`E(B-V)=0.13`$ mag.
A way to reduce variation is to select random unlensed stars as close to the microlensing line of sight as possible. This way they are likely to share the same dust patch. The dust maps of Oestreicher et al. (1995) suggest that Galactic foreground extinction is likely smooth on $`10^{}`$ scale or smaller, and can be subtracted out accurately. For the dust in the LMC, it appears safe to work within small patches of the sky of $`4^{\prime \prime }`$ scale, which corresponds to about $`1`$pc in the LMC, and less than $`0.004`$pc in the solar neighbourhood. At these scales variations of reddening are likely at 10%-20% level among stars behind the dust layer (cf. Fig. 1). Such low-level variations would hardly affect our results since there would be little chance of mis-classifying a star at the back of the LMC dust layer as a star in front of the layer, even after allowing for measurement errors at 30% level (cf. Fig. 2). It is challenging to find enough bright unblended LMC stars from the ground in such a tiny $`4^{\prime \prime }\times 4^{\prime \prime }`$ patch of the sky, though.
## 5 Reddening vs. lens location
For the clarity of the argument we adopt a set of simple models for the density distributions of the dust $`\nu _d(D)`$, the lenses $`\nu _l(D)`$ and the stars $`\nu _{\ast }(D)`$: they are distributed in three uniform layers with widths $`w_d`$, $`W_l`$ and $`W_{\ast }`$ and mean distances $`D_{\mathrm{LMC}}`$, $`D_l`$, $`D_{\ast }`$. We compute the excess reddening of the microlensed star
$$\xi \equiv \frac{A_s}{A_u}-1$$
(2)
where
$`A_u`$ $`=`$ $`{\displaystyle \frac{\int _0^{\infty }A(D)P(D)𝑑D}{\int _0^{\infty }P(D)𝑑D}},`$ (3)
$`A_s`$ $`=`$ $`{\displaystyle \frac{\int _0^{\infty }A(D_s)\tau (D_s)P(D_s)𝑑D_s}{\int _0^{\infty }\tau (D_s)P(D_s)𝑑D_s}},`$ (4)
are the average dust absorptions to the unlensed LMC stars and to the microlensed LMC source stars respectively, and
$$A(D)=C_1\int _0^D\nu _d(D_d)𝑑D_d$$
(5)
is the absorption to a star at distance $`D`$, and
$$P(D)dD=C_2\nu _{\ast }(D)D^2dD$$
(6)
is the probability of locating a star at distance $`D`$ to $`D+dD`$, and
$$\tau (D_s)=C_3\int _0^{D_s}𝑑D_l\nu _l(D_l)\frac{(D_s-D_l)D_l}{D_s}$$
(7)
is the optical depth to a source star at distance $`D_s`$.
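For uniform slabs these integrals are one-liners to evaluate numerically. The sketch below is my own (layer widths and distances, in kpc, are illustrative choices), and it reproduces the two limiting behaviours discussed next:

```python
# Numerical sketch of Eqs. (2)-(7) for uniform layers of dust, lenses and stars.
import numpy as np

D_lmc, W_star, w_d = 50.0, 1.0, 0.15             # distances/widths in kpc

def slab(center, width):
    return lambda D: (np.abs(D - center) < width / 2).astype(float)

def xi(D_lens, W_lens=1.0):
    Ds = np.linspace(D_lmc - 1.5, D_lmc + 1.5, 600)      # source distances
    Dl = np.linspace(0.0, Ds.max(), 4000)                # lens distances
    dDl = Dl[1] - Dl[0]
    nu_l = slab(D_lens, W_lens)(Dl)
    # Eq. (7): optical depth to each source distance
    tau = np.array([np.sum(nu_l * (d > Dl) * (d - Dl) * Dl / d) * dDl for d in Ds])
    A = np.cumsum(slab(D_lmc, w_d)(Ds)) * (Ds[1] - Ds[0])    # Eq. (5), LMC dust only
    P = slab(D_lmc, W_star)(Ds) * Ds**2                      # Eq. (6)
    A_u = np.sum(A * P) / np.sum(P)                          # Eq. (3)
    A_s = np.sum(A * tau * P) / np.sum(tau * P)              # Eq. (4)
    return A_s / A_u - 1.0                                   # Eq. (2)

print(xi(D_lmc))        # self-lensing (D_l = D_star): xi ~ 0.7
print(xi(0.5 * D_lmc))  # halo lens at D_l ~ D_star/2: xi at the percent level or below
```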
Fig. 3 shows the excess reddening $`\xi `$ as a function of the location of the lenses, here the renormalized typical lens distance $`D_l/D_{\ast }`$. Varying the thickness of the dust layer between $`100\mathrm{pc}\le w_d\le 200\mathrm{pc}`$ barely makes any difference as long as the dust layer is thinner than the stellar disk; the effect is marginally visible only for the thin disk model. The excess reddening is a constant 70% for purely self-lensing models ($`D_l/D_{\ast }=1`$), insensitive to the exact values of $`W_{\ast }`$ and $`w_d`$ as long as $`W_{\ast }\gg w_d`$. The prediction is somewhat sensitive to a plausible small offset between an unvirialized stellar disk and the dust layer, $`\mathrm{\Delta }_{\ast }\equiv D_{\ast }-D_{\mathrm{LMC}}`$; the excess reddening becomes even stronger if the stellar population of the LMC disk is shifted slightly closer to us than the dust layer. In general, the excess reddening is at the 1% level if the lens population is in the halo (the macho zone), and above 40% if the lens population coincides with the stellar disk of the LMC. So the two scenarios are distinguishable if we can control the patchiness of reddening and the measurement error to better than the 20% level.
## 6 Conclusion and strategy for observations
In summary, we have studied the effects of the dust layer in the LMC on the microlensing events. We find that self-lensing models of the LMC preferentially draw sources from behind the dust layer of the LMC, and hence can be distinguished from macho-lensing models once the reddening by dust is measured. The effect is insensitive to the exact thickness of the dust layer and the stellar disk (cf. Fig. 3). The deficiency of reddening-free microlensed stars is likely a robust discriminator between the two types of lensing scenarios.
The clumpiness of the dust, together with the fairly large error of the reddening vector derived from broad-band photometry, can lead to a large scatter in the relation between reddening and line-of-sight depth. The finer structure of the patchy extinction in the LMC remains to be studied as well, since previous reddening maps of the LMC, e.g., Harris et al.’s map from the sparsely distributed luminous OB stars, are limited to a spatial sampling of the order of $`1^{}`$ (or 10 pc). The trick here is to work only in small patches of the sky of $`4^{\prime \prime }\times 4^{\prime \prime }`$ scale, so that stars likely share the same patch of dust cloud (cf. Fig. 1). For example, the amounts of extinction by the Galactic foreground and the LMC change wildly from one microlensing line of sight to another (cf. Fig. 2), but within each small patch the extinction is well-correlated with the line-of-sight distance, and there is little ambiguity in classifying a star as in front of or behind the dust layer if we can measure the reddening. Another trick is to obtain as many lines of sight as possible to beat down all variations (20%-100% due to patchy extinction, and 30%-100% due to measurement error) by a factor $`1/\sqrt{N}`$, where $`N`$ is the number of microlensing lines of sight. We can differentiate the halo-lensing models from self-lensing models at the $`3\sigma `$ level with the reddening distribution of the current 30 microlensed stars and their neighbours.
It would take about a few nights with a 2.5m telescope at a good seeing site to obtain accurate (1%) photometry in several broad or narrow bands for the present sample of microlensed stars and neighbouring stars in the $`V=20`$–$`21`$ mag range. Seeing is critical to reach fainter stars. Nevertheless, there are two general problems of ground studies. From the ground the fainter ($`V>20`$ mag) LMC stars are often blended in the $`1^{\prime \prime }`$–$`2^{\prime \prime }`$ seeing disk, which results in spurious colors, hence unphysical reddening. A $`4^{\prime \prime }\times 4^{\prime \prime }`$ patch of the LMC might show only a handful of $`V=20`$ mag stars but contain hundreds of fainter objects at the resolution limit of the Hubble Space Telescope (HST). Second, it is difficult to access the ultra-violet band from the ground, which is the most sensitive band for measuring dust absorption. For these reasons, photometry or spectroscopy in the ultra-violet from HST is desirable for getting an unambiguous answer.
The author thanks Paul Hodge, Puraga Guhathakurta, David Spergel, Tim de Zeeuw for encouragements, Walter Jaffe and Frank Israel for enlightening discussions and Bryan Miller specially for many helpful comments on the presentation.
|
no-problem/9906/hep-th9906189.html
|
ar5iv
|
text
|
# A non-perturbative analysis of symmetry breaking in two-dimensional ϕ⁴ theory using periodic field methods
## 1 Introduction
Recently spherical field theory has been introduced as a non-perturbative method for studying quantum field theory . The starting point of this approach is to decompose field configurations in a $`d`$-dimensional Euclidean functional integral as linear combinations of spherical partial waves. Regarding each partial wave as a distinct field in a new one-dimensional system, the functional integral is rewritten as a time-evolution equation, with radial distance serving as the parameter of time. The core idea of spherical field theory is to reduce quantum field theory to a set of coupled quantum-mechanical systems. The technique used is partial wave decomposition, but this can easily be generalized to other modal expansions. Instead of concentric spheres and partial waves, we might instead consider a generic smooth one-parameter family of ($`d-1`$)-dimensional manifolds (disjoint and compact) and basis functions defined over each manifold. There are clearly many possibilities and we can optimize our expansion scheme to suit the specific problem at hand.
One important and convenient feature of spherical field theory is that it eliminates the need to compactify space — the spherical quantization surface is already compact. This is useful for studying phenomena which might be influenced by our choice of boundary conditions, such as topological excitations. Another interesting example arises in the process of quantization on noncommutative geometries. Spherical field methods have recently been adapted to study quantum field theory on the noncommutative plane . In many instances, however, maintaining exact translational invariance is of greater value than non-compactness or exact rotational invariance. In that case one could consider a system whose spatial dimensions have been compactified to form a periodic box. The next step would be to expand field configurations using free modes of the box and evolve in time. We will refer to this arrangement as periodic-mode field theory or, more simply, periodic field theory.<sup>1</sup><sup>1</sup>1Periodic field theory could be viewed as a hybrid of the Hamiltonian and momentum-space lattice formalisms. This combination, however, has not been discussed in the literature.
In periodic field theory the Hamiltonian is time independent and linear momentum is exactly conserved. In this work we describe the basic features of periodic field theory and use it to analyze spontaneous symmetry breaking in Euclidean two-dimensional $`\varphi ^4`$ theory. We use the method of diffusion Monte Carlo to simulate the dynamics of the theory. The techniques discussed here have several advantages over conventional Euclidean lattice Monte Carlo methods. One is that periodic boundary conditions are not imposed on the time variable, making it easier to determine the particle mass from an exponential decay fit. Another is that the zero-momentum mode is regarded as a single degree of freedom (rather than a collective mode on the lattice), which provides a simpler description of vacuum expectation values and symmetry breaking. Other advantages arise in systems not considered here, such as the absence of fermion doubling and extensions to Minkowski space using non-stochastic computational methods.
The organization of this paper is as follows. We begin with a derivation of periodic field theory. We then analyze two-dimensional $`\varphi ^4`$ theory and its corresponding periodic-field Hamiltonian using diffusion Monte Carlo methods. We compute the critical coupling at which $`\varphi \to -\varphi `$ reflection symmetry is broken, and determine the critical exponents $`\nu `$ and $`\beta `$.<sup>2</sup><sup>2</sup>2We are using standard notation. $`\nu `$ is associated with the inverse correlation length, and $`\beta `$ corresponds with the behavior of the vacuum expectation value of $`\varphi `$. We find that our value of the critical coupling is in agreement with recent lattice results , and our values for the critical exponents are consistent with predictions via universality and the two-dimensional Ising model.
## 2 Periodic fields
We start with free scalar field theory in two dimensions subject to periodic boundary conditions
$$\varphi (t,x-L)=\varphi (t,x+L).$$
(1)
In our discussion $`t`$ is Euclidean time, obtained by analytic continuation from Minkowskian time. Let $`𝒥`$ be an external source satisfying the same boundary conditions and which vanishes as $`\left|t\right|\to \infty `$. The Euclidean generating functional in the presence of $`𝒥`$ is given by
$$Z[𝒥]\equiv \int 𝒟\varphi \mathrm{exp}\left\{-\int _{-\infty }^{\infty }𝑑t\int _{-L}^L𝑑x\left[\frac{1}{2}\left(\left(\frac{\partial \varphi }{\partial t}\right)^2+\left(\frac{\partial \varphi }{\partial x}\right)^2\right)+\frac{\mu ^2}{2}\varphi ^2-𝒥\varphi \right]\right\}.$$
(2)
We now expand in terms of periodic-box modes,
$`\varphi (t,x)`$ $`=\sqrt{\frac{1}{2L}}\sum _{n=0,\pm 1,\ldots }\varphi _n(t)e^{in\pi x/L},`$ (3)
$`𝒥(t,x)`$ $`=\sqrt{\frac{1}{2L}}\sum _{n=0,\pm 1,\ldots }𝒥_n(t)e^{in\pi x/L}.`$
These are also eigenmodes of momentum and each $`\varphi _n`$ or $`𝒥_n`$ carries momentum $`\frac{n\pi }{L}`$. In terms of these modes, we have
$$Z[𝒥]=\prod _n\int 𝒟\varphi _n\mathrm{exp}\left\{-\int _{-\infty }^{\infty }𝑑t\sum _n\left[\frac{1}{2}\frac{d\varphi _n}{dt}\frac{d\varphi _{-n}}{dt}+\frac{1}{2}\left(\frac{n^2\pi ^2}{L^2}+\mu ^2\right)\varphi _n\varphi _{-n}-𝒥_{-n}\varphi _n\right]\right\}.$$
(4)
For notational purposes we will define
$$\omega _n=\sqrt{\frac{n^2\pi ^2}{L^2}+\mu ^2}.$$
(5)
Using the Feynman-Kac formula, we find
$$Z[𝒥]\propto \left\langle 0\left|T\mathrm{exp}\left\{-\int _{-\infty }^{\infty }𝑑tH_𝒥\right\}\right|0\right\rangle ,$$
(6)
where
$$H_𝒥=\sum _n\left[-\frac{1}{2}\frac{\partial }{\partial q_n}\frac{\partial }{\partial q_{-n}}+\frac{1}{2}\omega _n^2q_nq_{-n}-𝒥_{-n}q_n\right],$$
(7)
and $`|0\rangle `$ is the ground state of $`H_0`$. Since $`H_0`$ is the usual equal time Hamiltonian, $`|0\rangle `$ is the vacuum. $`H_0`$ consists of a set of decoupled harmonic oscillators, and it is straightforward to calculate the two-point correlation functions,
$$\left\langle 0\left|\varphi _n(t_2)\varphi _{-n}(t_1)\right|0\right\rangle =\frac{\delta }{\delta 𝒥_{-n}(t_2)}\frac{\delta }{\delta 𝒥_n(t_1)}Z[𝒥]|_{𝒥=0}=\frac{1}{2\omega _n}\mathrm{exp}\left[-\omega _n\left|t_2-t_1\right|\right].$$
(8)
We now include a $`\varphi ^4`$ interaction term as well as a counterterm Hamiltonian, which we denote as $`H_{c.t.}`$. The new Hamiltonian is
$`H_𝒥`$ $`=\sum _n\left[-\frac{1}{2}\frac{\partial }{\partial q_n}\frac{\partial }{\partial q_{-n}}+\frac{1}{2}\omega _n^2q_nq_{-n}-𝒥_{-n}q_n\right]`$ (9)
$`+\frac{\lambda }{4!2L}\sum _{n_1+n_2+n_3+n_4=0}q_{n_1}q_{n_2}q_{n_3}q_{n_4}+H_{c.t.}.`$
We will regulate the sums over momentum modes by choosing some large positive number $`N_{\mathrm{max}}`$ and throwing out all high-momentum modes $`q_n`$ such that $`\left|n\right|>N_{\mathrm{max}}`$. This corresponds to a momentum cutoff
$$\mathrm{\Lambda }^2=\left(\frac{N_{\mathrm{max}}\pi }{L}\right)^2.$$
(10)
In two-dimensional $`\varphi ^4`$ theory, renormalization can be implemented by normal ordering the $`\varphi ^4`$ interaction term. This corresponds to cancelling diagrams of the type shown in Figure 1. Using (8), we find
$$H_{c.t.}=-\frac{6\lambda b}{4!2L}\sum _{n=-N_{\mathrm{max}}}^{N_{\mathrm{max}}}q_nq_{-n},$$
(11)
where
$$b=\sum _{n=-N_{\mathrm{max}}}^{N_{\mathrm{max}}}\frac{1}{2\omega _n}.$$
(12)
For the remainder of our discussion we will use the Hamiltonian
$`H`$ $`=H_0=\sum _n\left[-\frac{1}{2}\frac{\partial }{\partial q_n}\frac{\partial }{\partial q_{-n}}+\frac{1}{2}\left(\omega _n^2-\frac{\lambda b}{4L}\right)q_nq_{-n}\right]`$ (13)
$`+\frac{\lambda }{4!2L}\sum _{n_1+n_2+n_3+n_4=0}q_{n_1}q_{n_2}q_{n_3}q_{n_4}.`$
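For orientation, the sketch below is mine (the values of $`\mu `$, $`L`$ and $`N_{\mathrm{max}}`$ are arbitrary); it evaluates the mode frequencies $`\omega _n`$ of Eq. (5) and the normal-ordering constant $`b`$ of Eq. (12), whose logarithmic growth with the cutoff is what the counterterm absorbs:

```python
# Mode frequencies omega_n and the normal-ordering constant b (illustrative values).
import numpy as np

def omega(n, mu, L):
    return np.sqrt((n * np.pi / L) ** 2 + mu**2)   # Eq. (5)

def b_const(mu, L, n_max):
    n = np.arange(-n_max, n_max + 1)
    return np.sum(1.0 / (2.0 * omega(n, mu, L)))   # Eq. (12)

mu, L = 1.0, 2.5 * np.pi
for n_max in (8, 16, 32, 64):
    print(n_max, b_const(mu, L, n_max))            # grows ~ (L/pi) log(N_max)
```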
## 3 The $`\varphi _2^4`$ phase transition
The existence of a second-order phase transition in two-dimensional $`\varphi ^4`$ theory has been derived in the literature . The phase transition is due to $`\varphi `$ developing a non-zero expectation value and the resultant spontaneous breaking of $`\varphi \to -\varphi `$ reflection symmetry. It is generally believed that this theory belongs to the same universality class as the two-dimensional Ising model and therefore shares the same critical exponents. In this section we apply periodic field methods to $`\varphi ^4`$ theory in order to determine the critical coupling and critical exponents $`\nu `$ and $`\beta `$. We have chosen $`\nu `$ and $`\beta `$ since these are, in our opinion, the easiest to determine from direct computations. All other exponents can be derived from these using well-known scaling laws. From the Ising model predictions we expect
$$\nu =1,\beta =\frac{1}{8}.$$
(14)
$`\nu `$ is the exponent associated with the inverse correlation length or, equivalently, the mass of the one-particle state. We will determine the behavior of the mass as we approach the critical point from the symmetric phase of the theory. Let $`|a\rangle `$ be any state even under reflection symmetry. We consider the matrix element
$$f(t)=\left\langle a\left|q_0\mathrm{exp}\left(-tH\right)q_0\right|a\right\rangle .$$
(15)
Inserting energy eigenstates $`|i\rangle `$ satisfying $`H|i\rangle =E_i|i\rangle `$, we have
$$f(t)=\sum_i\mathrm{exp}\left(-tE_i\right)\left|\left\langle i\left|q_0\right|a\right\rangle \right|^2.$$
(16)
Since $`|a\rangle `$ and $`|0\rangle `$ are even under reflection symmetry and $`q_0`$ is odd, the vacuum contribution to the sum in (16) vanishes. In the limit $`t\rightarrow \infty `$, (16) is dominated by the contribution of the next lowest energy state, the one-particle state at rest.<sup>3</sup><sup>3</sup>3We are assuming that this contribution does not also vanish. This is generally true, and we can always vary $`|a\rangle `$ to make it so. In this limit we have
$$f(t)\propto e^{-mt},$$
(17)
where $`m`$ is the mass. We can compute $`f(t)`$ numerically using the method of diffusion Monte Carlo (DMC). The idea of DMC is to model the dynamics of the imaginary-time Schrödinger equation using the diffusion and decay/production of simulated particles. The kinetic energy term in the Hamiltonian determines the diffusion rate of the particles (usually called replicas) and the potential energy term determines the local decay/production rate. A self-contained introduction to DMC can be found in .
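To make the replica picture concrete, the following is a minimal DMC sketch for a single oscillator mode (our toy illustration with arbitrarily chosen time step and population; it is not the production code used for the results below):

```python
import numpy as np

# Toy diffusion Monte Carlo for one mode, H = -1/2 d^2/dq^2 + 1/2 q^2
# (omega = 1, ground-state energy 1/2).  Our minimal illustration of the
# replica dynamics described in the text, not the authors' code.
rng = np.random.default_rng(0)
dt, n_steps, n_target = 0.01, 4000, 2000
q = rng.normal(size=n_target)        # replica positions
e_ref, e_samples = 0.5, []

for step in range(n_steps):
    q = q + rng.normal(scale=np.sqrt(dt), size=q.size)   # diffusion
    v = 0.5 * q**2
    # branching: each replica makes floor(w + u) copies of itself
    w = np.exp(-(v - e_ref) * dt)
    q = np.repeat(q, (w + rng.uniform(size=q.size)).astype(int))
    # population control: nudge the reference energy toward stability
    e_ref += 0.1 * (1.0 - q.size / n_target)
    if step > n_steps // 2:
        e_samples.append(np.mean(0.5 * q**2))   # mixed estimator of E_0

print("E_0 estimate:", np.mean(e_samples))       # exact value: 0.5
```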
$`\nu `$ is defined by the behavior of $`m`$ near the critical coupling $`\lambda _c,`$
$$m\propto (\lambda _c-\lambda )^\nu .$$
(18)
Once we determine $`f(t)`$ using DMC simulations we can extract $`m`$ and $`\nu `$ using curve-fitting techniques. We have calculated $`m`$ as a function of $`\lambda `$ for several different values of $`L`$ and $`N_{\mathrm{max}}`$. Results from these calculations are presented in the next section.
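As an illustration of the curve-fitting step, a hedged sketch (ours; the array f_t is a synthetic stand-in for the DMC estimate of (15)):

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch (ours): extract the mass from the exponential tail of f(t).
def tail(t, amp, m):
    return amp * np.exp(-m * t)

t = np.linspace(2.0, 10.0, 40)                      # large-t window only
f_t = 0.7 * np.exp(-0.35 * t)                       # synthetic stand-in
f_t *= 1 + 0.02 * np.random.default_rng(3).normal(size=t.size)

(amp, m), _ = curve_fit(tail, t, f_t, p0=(1.0, 0.3))
print("mass estimate:", m)
```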
$`\beta `$ is the critical exponent describing the behavior of the vacuum expectation value. In the symmetric phase the vacuum state is unique and invariant under the reflection transformation $`\varphi \rightarrow -\varphi `$ (or equivalently $`q_n\rightarrow -q_n`$ for each $`n`$). In the broken-symmetry phase the vacuum is degenerate as $`L\rightarrow \infty `$, and $`q_0`$, the zero-momentum mode, develops a vacuum expectation value. In the $`L\rightarrow \infty `$ limit tunnelling between vacuum states is forbidden. One ground state, $`|0^+\rangle `$, is non-zero only for values $`q_0>0`$ and the other, $`|0^{-}\rangle `$, is non-zero only for $`q_0<0`$. We will choose $`|0^+\rangle `$ and $`|0^{-}\rangle `$ to be unit normalized.
Let us now define the symmetric and antisymmetric combinations,
$`|0^s\rangle `$ $`=\frac{1}{\sqrt{2}}\left(|0^+\rangle +|0^{-}\rangle \right)`$ (19)
$`|0^a\rangle `$ $`=\frac{1}{\sqrt{2}}\left(|0^+\rangle -|0^{-}\rangle \right).`$
We will select the relative phases of $`|0^{-}\rangle `$ and $`|0^+\rangle `$ so that they transform from one to the other under reflection symmetry. $`|0^s\rangle `$ and $`|0^a\rangle `$ are then symmetric and antisymmetric (respectively) under reflection symmetry.
To avoid notational confusion in the following, we will write $`\widehat{q}_n`$ to denote the quantum-mechanical position operator and $`q_n`$ for the corresponding ordinary variable. Let $`\left\{\widehat{𝒫}_z\right\}_{z\in (-\infty ,\infty )}`$ be the spectral family associated with the operator $`\frac{\widehat{q}_0}{\sqrt{2L}}`$.<sup>4</sup><sup>4</sup>4The extra factor $`\frac{1}{\sqrt{2L}}`$ has been included for later convenience. This implies that $`\int_a^b𝑑\widehat{𝒫}_z`$ is a projection operator whose action on a general wavefunction $`\mathrm{\Psi }`$ is
$$\left(\int_a^b𝑑\widehat{𝒫}_z\right)\mathrm{\Psi }(q_0,q_1,\dots )=\theta (\frac{q_0}{\sqrt{2L}}-a)\,\theta (b-\frac{q_0}{\sqrt{2L}})\,\mathrm{\Psi }(q_0,q_1,\dots ).$$
(20)
From the support properties of $`|0^+\rangle `$ and $`|0^{-}\rangle `$, we deduce
$`|0^+\rangle `$ $`=\sqrt{2}\int_0^{\infty }𝑑\widehat{𝒫}_z|0^s\rangle =\sqrt{2}\int_0^{\infty }𝑑\widehat{𝒫}_z|0^a\rangle `$ (21)
$`|0^{-}\rangle `$ $`=\sqrt{2}\int_{-\infty }^0𝑑\widehat{𝒫}_z|0^s\rangle =-\sqrt{2}\int_{-\infty }^0𝑑\widehat{𝒫}_z|0^a\rangle .`$
Using our new spectral language, we can write
$$\frac{\widehat{q}_0}{\sqrt{2L}}=\int_{-\infty }^{\infty }z\,𝑑\widehat{𝒫}_z.$$
(22)
We now consider the vacuum expectation value $`\langle 0^+\left|\varphi \right|0^+\rangle `$.<sup>5</sup><sup>5</sup>5We could also consider $`\langle 0^{-}\left|\varphi \right|0^{-}\rangle `$. By reflection symmetry $`\langle 0^{-}\left|\varphi \right|0^{-}\rangle =-\langle 0^+\left|\varphi \right|0^+\rangle .`$ Making use of translational invariance, we have
$$\langle 0^+\left|\varphi \right|0^+\rangle =\frac{1}{2L}\int_{-L}^L𝑑x\,\langle 0^+\left|\varphi (t,x)\right|0^+\rangle =\langle 0^+\left|\frac{\widehat{q}_0}{\sqrt{2L}}\right|0^+\rangle .$$
(23)
From (21) and (22), we conclude that
$$\langle 0^+\left|\varphi \right|0^+\rangle =2\langle 0^s\left|\int_0^{\infty }z𝑑\widehat{𝒫}_z\right|0^s\rangle =2\int_0^{\infty }𝑑z\,zg(z),$$
(24)
where
$$g(z)=\langle 0^s\left|\frac{d\widehat{𝒫}_z}{dz}\right|0^s\rangle .$$
(25)
$`g(z)`$ satisfies the normalization condition
$$\int_{-\infty }^{\infty }𝑑z\,g(z)=2\int_0^{\infty }𝑑z\,g(z)=1.$$
(26)
In our calculations we will be working with large but finite $`L`$. In this case the ground state degeneracy is not exact, and the symmetric state $`|0^s`$ is slightly lower in energy than the antisymmetric state $`|0^a`$. We can now use this observation (that $`|0^s`$ is the lowest energy state) to rewrite $`g(z)`$ as
$$g(z)=\langle 0^s\left|\frac{d\widehat{𝒫}_z}{dz}\right|0^s\rangle =\underset{t\rightarrow \infty }{lim}\frac{\langle b\left|\mathrm{exp}\left\{-tH\right\}{\scriptscriptstyle \frac{d\widehat{𝒫}_z}{dz}}\mathrm{exp}\left\{-tH\right\}\right|b\rangle }{\langle b\left|T\mathrm{exp}\left\{-2tH\right\}\right|b\rangle },$$
(27)
where $`|b\rangle `$ is any state such that $`\langle 0^s|b\rangle `$ is non-zero.
For free field theory $`g(z)`$ can be exactly calculated,
$$g(z)=\sqrt{\frac{2\mu L}{\pi }}e^{-2\mu Lz^2}.$$
(28)
For non-trivial coupling we can calculate the right-hand side of (27) using DMC methods. In Figure 2 we have plotted $`g(z)`$ for $`L=2.5\pi `$ and $`L=5\pi `$. In each case $`\frac{\lambda }{4!}=2.76`$ and $`\mathrm{\Lambda }=4.`$ All quantities are measured in units where $`\mu =1`$. As can be seen, the distributions are bimodal and the maxima for both curves occur near $`\pm 0.55`$. We observe that the peaks are taller and narrower for larger $`L.`$ This is consistent with our intuitive picture of fluctuations in the functional integral. For a small but fixed deviation in the average value of $`\varphi `$, the net change in an extensive quantity such as the action or total energy scales proportionally with the size of the system. Consequently the average size of the fluctuations must decrease with $`L`$. We can estimate the amplitude of the fluctuations, $`\mathrm{\Delta }\varphi `$, by assuming a quadratic dependence in $`\mathrm{\Delta }\varphi `$ about the local minimum. The net effect of the fluctuation should not scale with $`L`$, and we conclude that<sup>6</sup><sup>6</sup>6The dimension of time does not enter here since we are considering properties of the vacuum, the ground state of the Hamiltonian defined at a given time. This is in contrast with lattice calculations which usually consider the quantity $`\varphi `$, the average of $`\varphi `$ over all space and time.
$$\mathrm{\Delta }\varphi \sim \left(\frac{1}{\sqrt{L}}\right)^{\mathrm{\#}\text{spatial dim.}}=\frac{1}{\sqrt{L}}.$$
(29)
This agrees with the free field result in (28) and also appears to be consistent with the peak widths plotted in Figure 2.
Let $`z_{\mathrm{max}}`$ be the location of the non-negative maximum of $`g(z)`$. Since $`g(z)`$ becomes sharply peaked as $`L\rightarrow \infty `$,
$$\langle 0^+\left|\varphi \right|0^+\rangle =2\int_0^{\infty }𝑑z\,zg(z)\underset{L\rightarrow \infty }{\longrightarrow }2z_{\mathrm{max}}\int_0^{\infty }𝑑z\,g(z)=z_{\mathrm{max}}.$$
(30)
This gives us another option for calculating the vacuum expectation value. We can either integrate $`2zg(z)`$ or read off the location of the maximum, $`z_{\mathrm{max}}`$. Both will converge to the same value as $`L\rightarrow \infty `$. However, the $`z_{\mathrm{max}}`$ result is less prone to systematic error generated by the $`O(\frac{1}{\sqrt{L}})`$ fluctuations described above.<sup>7</sup><sup>7</sup>7We can see this explicitly in free field theory, where the vacuum expectation value should vanish. $`z_{\mathrm{max}}=0`$ as desired, but $`2\int_0^{\infty }𝑑z\,zg(z)=\frac{1}{\sqrt{2\pi \mu L}}.`$ We will therefore use
$$\langle 0^+\left|\varphi \right|0^+\rangle =z_{\mathrm{max}}.$$
(31)
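Operationally, $`z_{\mathrm{max}}`$ can be read off a binned estimate of $`g(z)`$. A minimal sketch (ours; note that naively histogramming the long-time replica positions gives only the mixed estimator, while the ratio in (27) strictly requires descendant weighting, which we omit here):

```python
import numpy as np

# Sketch (ours): estimate g(z) from sampled replica positions q0 and read
# off the location of its non-negative maximum, z_max.
def z_max_from_samples(q0_samples, L, bins=80):
    z = q0_samples / np.sqrt(2 * L)
    hist, edges = np.histogram(z, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    positive = centers >= 0
    return centers[positive][np.argmax(hist[positive])]

# free-field check: q0 ~ N(0, 1/(2 mu)), so z_max should come out near 0
rng = np.random.default_rng(1)
q0 = rng.normal(scale=np.sqrt(0.5), size=200_000)   # mu = 1
print(z_max_from_samples(q0, L=2.5 * np.pi))
```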
$`\beta `$ is defined by the behavior of the vacuum expectation value as we approach the critical coupling,
$$\langle 0^+\left|\varphi \right|0^+\rangle \propto (\lambda -\lambda _c)^\beta .$$
(32)
Using DMC methods, we have computed the $`\lambda `$ dependence of the vacuum expectation value for several values of $`L`$ and $`N_{\mathrm{max}}.`$ The results are shown in the next section.
## 4 Results
The results of our diffusion Monte Carlo simulations are presented here. For each set of parameters $`L`$ and $`N_{\mathrm{max}},`$ the curves for $`m`$ and $`\langle 0^+\left|\varphi \right|0^+\rangle `$ near the critical coupling have been fitted using the parameterized forms
$$m=a\left(\frac{\lambda _c^m}{4!}-\frac{\lambda }{4!}\right)^\nu $$
(33)
and
$$\langle 0^+\left|\varphi \right|0^+\rangle =b\left(\frac{\lambda }{4!}-\frac{\lambda _c^\varphi }{4!}\right)^\beta .$$
(34)
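The following sketch (ours; the data arrays are synthetic stand-ins for the measured points and error bars) shows how such a fit and the reduced chi-squared quoted with each result below might be computed:

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch (ours): fit the order parameter to the form (34) and compute the
# reduced chi-squared used to judge fit quality.
def vev_model(lam, lam_c, beta, b):      # lam stands for lambda/4!
    return b * (lam - lam_c)**beta

lam = np.array([2.6, 2.8, 3.0, 3.4, 3.8, 4.2])
vev = 0.70 * (lam - 2.5)**0.13           # synthetic stand-in for DMC data
err = np.full_like(vev, 0.02)
vev = vev + err * np.random.default_rng(2).normal(size=lam.size)

popt, pcov = curve_fit(vev_model, lam, vev, p0=(2.4, 0.13, 0.7),
                       sigma=err, bounds=([2.0, 0.01, 0.1], [2.59, 1.0, 2.0]))
resid = (vev - vev_model(lam, *popt)) / err
chi2_red = np.sum(resid**2) / (lam.size - 3)   # d = N - 3 fit parameters
print(popt, chi2_red)
```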
For the mass curves, data points with $`m\lesssim L^{-1}`$ have correlation lengths exceeding the size of our system and are of questionable significance. We have therefore fit these curves two different ways, once using all data points and a second time excluding small $`m`$ values. The curves for the data set $`L=2.5\pi `$ and $`N_{\mathrm{max}}=10`$ are shown in Figures 3 and 4.<sup>8</sup><sup>8</sup>8As mentioned before, we are using units where $`\mu =1`$. The error bars represent an estimate of the error due to Monte Carlo statistical fluctuations and, in the case of the mass data, pollution due to higher energy states. Let $`\stackrel{~}{\chi }_d^2`$ denote the reduced chi-squared value for $`d`$ degrees of freedom. The results of the fits are as follows:
$`L=2.5\pi ,`$ $`N_{\mathrm{max}}=8`$ ($`\mathrm{\Lambda }=3.2`$):
$`\frac{\lambda _c^m}{4!}`$ $`=3.1\pm 0.2,\nu =1.1\pm 0.1,a=0.31\pm 0.02,\stackrel{~}{\chi }_{15}^2=0.81\text{ (all),}`$ (35)
$`\frac{\lambda _c^m}{4!}`$ $`=3.1\pm 0.2,\nu =1.0\pm 0.1,a=0.30\pm 0.03,\stackrel{~}{\chi }_8^2=0.96\text{(partial),}`$
$`\frac{\lambda _c^\varphi }{4!}`$ $`=2.5\pm 0.1,\beta =0.15\pm 0.02,b=0.65\pm 0.05,\stackrel{~}{\chi }_8^2=0.45`$
$`L=2.5\pi ,`$ $`N_{\mathrm{max}}=10,`$ ($`\mathrm{\Lambda }=4`$):
$`\frac{\lambda _c^m}{4!}`$ $`=2.9\pm 0.2,\nu =1.1\pm 0.1,a=0.35\pm 0.02,\stackrel{~}{\chi }_{12}^2=0.46\text{ (all),}`$ (36)
$`\frac{\lambda _c^m}{4!}`$ $`=2.9\pm 0.2,\nu =1.0\pm 0.1,a=0.37\pm 0.03,\stackrel{~}{\chi }_6^2=0.37\text{(partial),}`$
$`\frac{\lambda _c^\varphi }{4!}`$ $`=2.5\pm 0.1,\beta =0.18\pm 0.01,b=0.70\pm 0.03,\stackrel{~}{\chi }_9^2=0.29`$
$`L=5\pi ,`$ $`N_{\mathrm{max}}=16,`$ ($`\mathrm{\Lambda }=3.2`$):
$`\frac{\lambda _c^m}{4!}`$ $`=3.0\pm 0.2,\nu =1.2\pm 0.1,a=0.32\pm 0.03,\stackrel{~}{\chi }_{13}^2=0.87\text{ (all),}`$ (37)
$`\frac{\lambda _c^m}{4!}`$ $`=3.1\pm 0.2,\nu =1.2\pm 0.1,a=0.30\pm 0.03,\stackrel{~}{\chi }_{10}^2=0.88\text{(partial),}`$
$`\frac{\lambda _c^\varphi }{4!}`$ $`=2.7\pm 0.1,\beta =0.12\pm 0.02,b=0.62\pm 0.04,\stackrel{~}{\chi }_6^2=0.50`$
$`L=5\pi ,`$ $`N_{\mathrm{max}}=20,`$ ($`\mathrm{\Lambda }=4`$):
$`\frac{\lambda _c^m}{4!}`$ $`=2.8\pm 0.2,\nu =1.2\pm 0.1,a=0.35\pm 0.06,\text{ }\stackrel{~}{\chi }_{11}^2=1.2\text{(all),}`$ (38)
$`\frac{\lambda _c^m}{4!}`$ $`=2.9\pm 0.2,\nu =1.3\pm 0.1,a=0.36\pm 0.06,\text{ }\stackrel{~}{\chi }_8^2=1.1\text{(partial),}`$
$`\frac{\lambda _c^\varphi }{4!}`$ $`=2.5\pm 0.1,\beta =0.11\pm 0.02,b=0.65\pm 0.04,\stackrel{~}{\chi }_8^2=0.88.`$
These results are subject to errors due to the finite size $`L`$ and finite cutoff scale $`\mathrm{\Lambda }`$. We will use our data for different values of $`L`$ and $`\mathrm{\Lambda }`$ to extrapolate to the limit $`L\rightarrow \infty `$, $`\mathrm{\Lambda }\rightarrow \infty `$. For the parameters $`\nu `$, $`a`$, $`\beta `$, $`b`$ we use the naive asymptotic form
$$x(L,\mathrm{\Lambda })=x+\frac{1}{\mathrm{\Lambda }^2}x_{\mathrm{\Lambda }^2}+\frac{1}{L^2}x_{L^2}+\dots .$$
(39)
For the critical couplings $`\lambda _c^m`$ and $`\lambda _c^\varphi `$, however, we modify the finite $`L`$ correction according to the finite-size scaling hypothesis
$$\lambda (L,\mathrm{\Lambda })=\lambda +\frac{1}{\mathrm{\Lambda }^2}\lambda _{\mathrm{\Lambda }^2}+\frac{1}{\left|L\right|^{1/\nu }}\lambda _{\left|L\right|^{1/\nu }}+\dots .$$
(40)
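One simple way to implement this joint extrapolation is a linear least-squares fit in the basis $`\{1,1/\mathrm{\Lambda }^2,1/L^2\}`$; a minimal sketch (ours), applied to the $`\nu `$ values quoted above from the "all" fits:

```python
import numpy as np

# Sketch (ours): extrapolate a fitted quantity x(L, Lambda) to
# L, Lambda -> infinity using the asymptotic form (39), i.e. a linear
# least-squares fit in the basis {1, 1/Lambda^2, 1/L^2}.
L      = np.array([2.5 * np.pi, 2.5 * np.pi, 5 * np.pi, 5 * np.pi])
Lam    = np.array([3.2, 4.0, 3.2, 4.0])
nu_fit = np.array([1.1, 1.1, 1.2, 1.2])     # "all" values from (35)-(38)

basis = np.column_stack([np.ones_like(L), 1 / Lam**2, 1 / L**2])
coef, *_ = np.linalg.lstsq(basis, nu_fit, rcond=None)
print("extrapolated nu:", coef[0])
```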
The results we find are<sup>9</sup><sup>9</sup>9Due to the relatively weak dependence on $`L`$ and $`\mathrm{\Lambda }`$, the reduced chi-squared values for our extrapolation fits are quite small ($`\ll 1`$) and do not provide a useful statistical measure.
$`\frac{\lambda _c^m}{4!}`$ $`=2.5\pm 0.2\pm 0.1,\nu =1.3\pm 0.2\pm 0.1,a=0.43\pm 0.05\pm 0.02\text{ (all)},`$ (41)
$`\frac{\lambda _c^m}{4!}`$ $`=2.3\pm 0.2\pm 0.1,\nu =1.4\pm 0.2\pm 0.1,a=0.48\pm 0.06\pm 0.02\text{ (partial)},`$
$`\frac{\lambda _c^\varphi }{4!}`$ $`=2.5\pm 0.1\pm 0.1,\beta =0.13\pm 0.02\pm 0.01,b=0.71\pm 0.04\pm 0.03.`$
The first error bounds include inaccuracies due to Monte Carlo statistics, higher energy states (for the mass curves), and extrapolation. The second error bounds represent estimates of the systematic errors due to our choice of initial state, time step parameter, and bin sizes in the DMC simulations. For data generated from the mass curves, the main source of error was due to extrapolation. The extrapolation error for the vacuum expectation value data was still the most significant, though considerably smaller than that for the mass calculations. This reduction is probably a result of the method used to measure the vacuum expectation value.<sup>10</sup><sup>10</sup>10We are referring to the result $`\langle 0^+\left|\varphi \right|0^+\rangle =z_{\mathrm{max}}`$, which eliminates peak broadening effects for finite $`L`$. This is discussed at the end of the previous section. Our results for the critical exponents are consistent with the Ising model predictions
$$\nu =1,\beta =\frac{1}{8}.$$
(42)
The results for the critical coupling $`\frac{\lambda _c^m}{4!}`$ and $`\frac{\lambda _c^\varphi }{4!}`$ are in agreement with the recently obtained lattice result <sup>11</sup><sup>11</sup>11Critical exponents were not measured in this study.
$$\frac{\lambda _c}{4!}=2.56_{-.01}^{+.02}.$$
(43)
## 5 Summary
We have discussed the generalization of spherical field theory to other modal expansion methods, in particular, periodic field theory. Using periodic field methods we have analyzed two-dimensional $`\varphi ^4`$ theory and computed the critical coupling and critical exponents $`\nu `$ and $`\beta `$ associated with spontaneous breaking of $`\varphi \rightarrow -\varphi `$ reflection symmetry. Our value of the critical coupling is in agreement with a recent lattice calculation, and our values for the critical exponents are consistent with the critical exponents of the two-dimensional Ising model. This lends support to the popular belief that the two theories belong to the same universality class.
The full set of diffusion Monte Carlo computations used in our analysis required about 30 hours on a 350 MHz PC processor. Complete codes can be obtained upon request from the authors. The required computational time appears to be dominated by the number of operations required to update the Hamiltonian, which scales as $`N_{\mathrm{max}}^2`$. Errors can be reduced quite substantially by using larger values of $`L`$ and $`N_{\mathrm{max}}`$ and utilizing large-scale parallel processing. No less important, however, is that periodic field theory provides a simple and efficient approach to studying non-perturbative phenomena with only modest computer resources. Improvements are now under way to utilize fast Fourier transform methods and increase the computational speed. Future studies have been planned to analyze phase transitions in other field theory models.
Acknowledgment
We are grateful to Eugene Golowich for useful advice and discussions. We also thank Jon Machta for comments on finite-size scaling and the referee of the original draft for suggesting several improvements. Support provided by the National Science Foundation under Grant 5-22698.
Figures
Figure 1. The only divergent diagram, which can be cancelled by normal ordering.
Figure 2. Plot of $`g(z)`$ for $`L=2.5\pi `$ and $`L=5\pi `$. In each case $`\frac{\lambda }{4!}=2.76`$ and $`\mathrm{\Lambda }=4`$.
Figure 3. Plot of $`m`$ as a function of $`\frac{\lambda }{4!}`$ for $`L=2.5\pi ,`$ $`N_{\mathrm{max}}=10`$.
Figure 4. Plot of $`\langle 0^+\left|\varphi \right|0^+\rangle `$ as a function of $`\frac{\lambda }{4!}`$ for $`L=2.5\pi ,`$ $`N_{\mathrm{max}}=10`$.
# Parity violation through color superconductivity
## Abstract
We give a pedagogical discussion of how color superconductivity can produce parity violation in cold quark matter at very high densities.
In this note, we give a pedagogical discussion of how, for massless quarks at very high densities, the formation of a spin-zero color superconducting condensate spontaneously breaks both the axial $`U(1)`$ symmetry and parity . This observation is implicit in the seminal work of Bailin and Love, is noted by Alford, Rajagopal, and Wilczek, and is explicitly discussed by Evans, Hsu, and Schwetz .
For simplicity, consider two degenerate flavors of quarks, and assume that a quark-quark condensate forms in the color-antitriplet channel . For massless quarks, two of the four possible condensates with total spin $`J=0`$ are
$$\varphi _1^a=ϵ^{abc}ϵ_{fg}\,q_f^{bT}C\gamma _5\,q_g^c\quad \mathrm{and}\quad \varphi _2^a=ϵ^{abc}ϵ_{fg}\,q_f^{bT}C\,\mathbf{1}\,q_g^c,$$
(1)
where $`a,b,c=1,2,3`$ are $`SU(3)_c`$ color indices, $`f,g=1,2`$ are $`SU(2)_f`$ flavor indices, and $`C`$ is the charge conjugation matrix. $`\varphi _{1,2}^a`$ are antitriplets under $`SU(3)_c`$ gauge transformations and singlets under $`SU(2)_f`$ rotations . The condensate $`\varphi _1^a`$ is even under parity, $`J^P=0^+`$, while $`\varphi _2^a`$ is odd, $`J^P=0^{-}`$. There are two other condensates , but they do not change our qualitative arguments about parity violation, and so we omit them.
In the limit where mass and instanton-induced terms can be neglected, the effective Lagrangian for color superconductivity is
$$\mathcal{L}_0=\left|\partial _\mu \varphi _1\right|^2+\left|\partial _\mu \varphi _2\right|^2+\lambda \left(\left|\varphi _1\right|^2+\left|\varphi _2\right|^2-|v|^2\right)^2,$$
(2)
where $`|\varphi |^2\equiv \sum_a(\varphi ^a)^{\ast }\varphi ^a`$. When mass and instanton effects are neglected, the Lagrangian is symmetric under axial $`U(1)`$ transformations, which rotate $`\varphi _1^a`$ and $`\varphi _2^a`$ into each other. Therefore, there is only one quartic coupling, $`\lambda `$. The Lagrangian (2) generates nonzero vacuum expectation values for the $`\varphi ^a`$’s, which can be written as
$$\varphi _1^a=v^a\mathrm{cos}\theta ,\varphi _2^a=v^a\mathrm{sin}\theta .$$
(3)
Condensation picks out a given direction in color space for $`v^a`$, and a given value for $`\theta `$. $`v^a\ne 0`$ breaks the $`SU(3)_c`$ color symmetry, which produces color superconductivity. $`\theta \ne 0`$ breaks the axial $`U(1)`$ symmetry. Further, whenever $`\theta \ne 0`$, there is a nonzero $`J^P=0^{-}`$ condensate $`\varphi _2^a`$; this represents the spontaneous breaking of parity (relative to the external vacuum).
This breaking of parity is actually familiar from the spontaneous breaking of chiral symmetry. Consider two flavors of massless quarks; the effective potential is $`O(4)`$-symmetric, involving the $`J^P=0^+`$ $`\sigma `$\- and $`J^P=0^{-}`$ $`\pi `$-meson fields. For massless quarks, it is as likely for a parity-odd pion condensate to form as it is for a parity-even $`\sigma `$-meson condensate. This does not happen in nature, because nonzero quark masses break chiral symmetry explicitly, and thus favor a $`0^+`$ condensate.
Similarly, it is important to add to the effective Lagrangian (2) terms which explicitly break the axial $`U(1)`$ symmetry:
$$\mathcal{L}^{\prime }=-c\left(\left|\varphi _1\right|^2-\left|\varphi _2\right|^2\right)+m^2\left|\varphi _2\right|^2.$$
(4)
As shown by Berges and Rajagopal , the first term is due to instantons, with $`c`$ proportional to the instanton density. Instantons are attractive in the $`J^P=0^+`$ channel, and repulsive in the $`J^P=0^{}`$ channel, so $`c`$ is positive.
In the second term, each power of the current quark mass $`m_q`$ is accompanied by one power of $`\varphi _2^a`$. Since $`\varphi _2^a`$ itself is not gauge invariant, the simplest gauge-invariant term is $`m_q^2|\varphi _2|^2`$, so $`m\propto m_q`$. Thus, the pseudo-Goldstone boson for the axial $`U(1)`$ symmetry is extremely light, $`m\sim 10`$ MeV, taking $`m_q`$ to be the up or down quark mass and assuming the constant of proportionality between $`m`$ and $`m_q`$ to be of order 1. This is in contrast to the explicit breaking of chiral symmetry, where the corresponding term is linear in the quark mass. The pseudo-Goldstone bosons are the pions which are relatively heavy, $`m_\pi \simeq 140\mathrm{MeV}\propto \sqrt{m_q}`$.
Both instanton and mass terms act to favor the formation of the $`0^+`$ condensate $`\varphi _1`$ over that of the $`0^{-}`$ condensate $`\varphi _2`$. Consider, however, the limit of very high densities. When the quark chemical potential $`\mu \rightarrow \infty `$, the instanton density and so $`c`$ vanish like $`\mu ^{-29/3}`$ (for two flavors). The real question is whether at some density the current quark mass is negligible compared to the scale of the condensate. If this happens, we reach an “instanton-free” region in which quarks are effectively massless, $`\mathcal{L}^{\prime }`$ can be neglected, and parity is spontaneously broken.
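The approach to this instanton-free region can be visualized by scanning the $`\theta `$-dependent part of the potential obtained by inserting (3) into (4); a small numerical sketch (ours; the values of $`c`$ and $`m`$ are purely illustrative, and the sign conventions are those of eq. (4) as written above):

```python
import numpy as np

# Sketch (ours): theta-dependence of the explicit-breaking terms (4)
# on the parameterization (3), in units of |v|^2.  c and m are
# illustrative values, not numbers taken from the text.
def V(theta, c, m):
    return -c * np.cos(2 * theta) + m**2 * np.sin(theta)**2

theta = np.linspace(-np.pi, np.pi, 721)
for c, m in [(1.0, 0.3), (1e-3, 0.3), (1e-3, 1e-3)]:
    t_min = theta[np.argmin(V(theta, c, m))]
    barrier = V(np.pi / 2, c, m) - V(0.0, c, m)  # cost of the parity-odd state
    print(f"c={c:g}, m={m:g}: minimum at theta={t_min:+.3f}, barrier={barrier:.2e}")
# As c, m -> 0 the barrier 2c + m^2 vanishes: parity-odd directions
# become degenerate with the parity-even minimum at theta = 0.
```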
Because mass terms are always present, the true thermodynamic ground state is always the parity-even $`0^+`$ condensate, i.e., $`\theta =0`$. There is, however, a finite probability for the system to condense in a parity-odd state, i.e., $`\theta \ne 0`$. The size and lifetime of this state is set by the mass of the pseudo-Goldstone bosons. For chiral symmetry breaking, the characteristic scale is $`1/m_\pi \simeq 1.4`$ fm. This is small compared to the time and length scales of a heavy-ion collision, so that parity-odd fluctuations average to zero. On the other hand, the region in space-time over which a parity-odd color superconducting condensate forms is large, $`1/m\simeq 20`$ fm. If the collision time is shorter than this time scale, there is a finite probability that the system decays in a parity-odd state. We therefore propose to trigger on phase-space regions where nuclear matter is cool and dense, in order to observe the formation of parity-odd color-superconducting condensates on an event-by-event basis. A possible global parity-odd observable was discussed in .
# Characterizing the structure of interstellar turbulence
## 1 Introduction
Although numerical simulations of transsonic and supersonic turbulence appropriate to interstellar gas have been carried out for several years now (Porter, Pouquet, & Woodward 1992, 1994; Padoan & Nordlund 1999; Mac Low et al. 1998; Stone, Ostriker, & Gammie 1998), there are only a few direct comparisons between numerical results and astrophysical observations (e.g. Falgarone et al. 1994; Padoan et al. 1999; Rosolowsky et al. 1999). This is mainly due to the lack of appropriate measures applicable to both simulated and observed structures. Measures common for turbulence studies like the power spectrum of spatial or velocity fluctuations or the probability distribution of velocity increments are not easily applied to observations where their use is greatly impaired by the limitations due to finite signal to noise ratio and limited telescope resolution.
To obtain clues to the true physical nature of interstellar turbulence, characteristic scales and any inherent scaling laws have to be measured and modelled. A major problem with characterizing both the observations and the models is to determine what scaling behaviour, if any, is present in complex turbulent structures. Both the velocity and density fields need to be considered, but only the radial velocity and column densities can be observed.
One measure useful for characterizing structure and scaling in observed maps of molecular clouds is the $`\mathrm{\Delta }`$-variance, $`\sigma _\mathrm{\Delta }^2`$, introduced by Stutzki et al. (1998). It can better separate observational effects from the real cloud structure than e.g. the power spectrum or fractal dimensions. The $`\mathrm{\Delta }`$-variance spectrum clearly shows characteristic scales and scaling relations, and its logarithmic slope can be analytically related to the spectral index of the corresponding power spectrum.
Stutzki et al. (1998) and Bensch et al. (1999) have applied the $`\mathrm{\Delta }`$-variance analysis to observations of the Polaris Flare and the FCRAO survey of the outer galaxy. They found a relatively universal law describing these clouds, with a power law structure at scales below the cloud size and the general cloud size as the only characteristic scale within the resolution limit. Given the limited number of samples, however, it is not yet possible to draw conclusions on the scaling of turbulence in molecular clouds in general. To study the common behaviour and differences between several clouds and interstellar regions the analysis of more and larger maps obtained with a good signal-to-noise ratio is required.
In order to understand the physical significance of the characterization of the observational maps by $`\mathrm{\Delta }`$-variance spectra, we apply here the same analysis to simulated gas distributions resulting from MHD models. In this first paper, we try to get a general feeling for the scaling behaviour in different models, and for the influence of the different parameters and numerical approaches on the produced structures. We only perform a qualitative comparison to the observations here. In a subsequent paper we will attempt to make a detailed fit of several observed regions using MHD models including the solution of the radiative transfer problem.
## 2 Structure measure by the $`\mathrm{\Delta }`$-variance
### 2.1 Definitions
The $`\mathrm{\Delta }`$-variance was comprehensively introduced by Stutzki et al. (1998). We will repeat here only the formalism essential for the further analysis in this paper.
The $`\mathrm{\Delta }`$-variance is a type of averaged wavelet transform that measures the variance in an $`E`$-dimensional structure $`f(r)`$ filtered by a spherically symmetric down-up-down function of varying size (Zielinsky & Stutzki 1999). It is defined by
$$\sigma _\mathrm{\Delta }^2(l)=\int_{-\infty }^{\infty }\left(\left(f(r)-\langle f\rangle \right)\bigodot _l(r)\right)^2𝑑r$$
(1)
where the $``$ stands for a convolution and $`\bigodot _l`$ describes the down-up-down function with the length $`l`$ of each step
$$\bigodot _l(r)=𝒱_E^{-1}\left(\frac{2}{l}\right)^E\{\begin{array}{cc}1\hfill & |r|\le l/2\hfill \\ -1/(3^E-1)\hfill & l/2<|r|\le 3l/2\hfill \\ 0\hfill & |r|>3l/2\hfill \end{array}$$
(2)
with $`𝒱_E`$ being the volume of the $`E`$-dimensional unit sphere.
Thus, the $`\mathrm{\Delta }`$-variance measures the amount of structural variation on a certain scale, e.g. in a map or three-dimensional distribution. A familiar, slightly different kind of variance defined for one-dimensional problems is the Allan-variance commonly used for stability investigation (Schieder et al. (1989)). In contrast to the $`\mathrm{\Delta }`$-variance, it works with a non-symmetric up-down filter.
Instead of convolving the structure in ordinary space with a filter function one can carry out the $`\mathrm{\Delta }`$-variance analysis in Fourier space by simple multiplication. This directly relates the $`\mathrm{\Delta }`$-variance to the power spectrum of a structure. If $`P(k)`$ is the radially averaged power spectrum of the structure $`f(r)`$, the $`\mathrm{\Delta }`$-variance is given by
$$\sigma _\mathrm{\Delta }^2(l)=\int_0^{\infty }P(k)\left|\stackrel{~}{\bigodot }_l(k)\right|^2k^{E-1}𝑑k$$
(3)
where $`\stackrel{~}{\bigodot }_l`$ is the Fourier transform of the $`E`$-dimensional down-up-down function with the scale length $`l`$, and we are using $`k`$ to denote the spatial frequency or wavenumber.
If the power spectrum is given by a simple power law, $`P(k)\propto k^{-\zeta }`$, the $`\mathrm{\Delta }`$-variance also follows a power law $`\sigma _\mathrm{\Delta }^2\propto l^\alpha `$ with $`\alpha =\zeta -E`$ within the range $`0\le \zeta <E+4`$. (To avoid confusion with the ratio between thermal and magnetic pressure we use $`\zeta `$ for the power spectral index rather than the variable $`\beta `$ used by Stutzki et al. (1998) and Bensch et al. (1999).) The main advantages of the $`\mathrm{\Delta }`$-variance compared to the direct computation of the Fourier power spectrum are the clear spatial separation of different effects influencing observed structures like noise or finite observational resolution, and the robustness against singular variations due to the regular filter function.
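The power-law relation can be checked by substitution (a short sketch of the standard argument): inserting $`P(k)\propto k^{-\zeta }`$ into eq. (3) and using the dilation property $`\stackrel{~}{\bigodot }_l(k)=\stackrel{~}{\bigodot }_1(lk)`$ of the filter, the change of variables $`u=lk`$ gives

$$\sigma _\mathrm{\Delta }^2(l)\propto \int_0^{\infty }k^{-\zeta }\left|\stackrel{~}{\bigodot }_1(lk)\right|^2k^{E-1}𝑑k=l^{\zeta -E}\int_0^{\infty }u^{E-1-\zeta }\left|\stackrel{~}{\bigodot }_1(u)\right|^2𝑑u\propto l^{\zeta -E},$$

so $`\alpha =\zeta -E`$. Because the filter has zero mean, $`\stackrel{~}{\bigodot }_1(u)\propto u^2`$ for $`u\rightarrow 0`$, and the remaining integral stays finite at its lower end precisely for $`\zeta <E+4`$.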
Furthermore, for most astrophysical structures, where a periodic continuation is not possible, Bensch et al. (1999) have shown that the periodicity artificially introduced by the Fourier transform can lead to considerable errors. Here, even the $`\mathrm{\Delta }`$-variance has to be determined in ordinary space. For the simulations examined in this paper, periodic wrap around is not a problem because it is already explicitly assumed, so we apply the faster Fourier method to determine their $`\mathrm{\Delta }`$-variance spectra.
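For periodic data this Fourier route is easy to implement; a minimal two-dimensional sketch (ours, for a square periodic map; the discrete filter enforces zero mean directly, standing in for the continuum normalization of eq. (2)):

```python
import numpy as np

# Sketch (ours): Delta-variance of a periodic 2-D map via Fourier
# filtering: convolve the mean-subtracted map with the down-up-down
# filter and take the variance of the result, as in eq. (1).
def delta_variance(image, lag):
    ny, nx = image.shape
    y, x = np.indices((ny, nx))
    dy = np.minimum(y, ny - y)          # periodic distances from origin
    dx = np.minimum(x, nx - x)
    r = np.hypot(dx, dy)
    core = (r <= lag / 2).astype(float)
    ring = ((r > lag / 2) & (r <= 3 * lag / 2)).astype(float)
    filt = core / core.sum() - ring / ring.sum()   # zero-mean filter
    f = image - image.mean()
    conv = np.real(np.fft.ifft2(np.fft.fft2(f) * np.fft.fft2(filt)))
    return np.mean(conv**2)

# usage: a spectrum over a set of lags for a random test map
rng = np.random.default_rng(4)
img = rng.normal(size=(128, 128))
for lag in [2, 4, 8, 16, 32]:
    print(lag, delta_variance(img, lag))
```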
In Fig. 1 we compare the power spectrum and $`\mathrm{\Delta }`$-variance for three simulations described below. They have different driving scales, and therefore each shows a different characteristic scale visible as a turn-over at large lags in the $`\mathrm{\Delta }`$-variances and at small wavenumbers in the power spectrum, respectively. At smaller lags and higher wavenumbers power laws can be seen in both cases (with their slopes related by the analytic relation given above). A steep drop-off follows at the smallest scales indicating the resolution limit of the simulation. The power laws are equivalent in both cases, but the characteristic scale at one end and the resolution limit at the other end of the spectrum can be more clearly seen in the $`\mathrm{\Delta }`$-variance. The smooth spatial filter function in the $`\mathrm{\Delta }`$-variance analysis still provides a good measure for the behavior at large scales whereas the power spectrum suffers from the low significance of the few remaining points there.
The $`\mathrm{\Delta }`$-variance analysis of astronomical maps was extensively discussed and demonstrated by Bensch et al. (1999). In all the observations analyzed by them, the total cloud size was the only characteristic scale detected by means of the $`\mathrm{\Delta }`$-variance. Below that size they found a self-similar scaling behaviour reflected by a power law with index $`\alpha =0.5\dots 1.3`$ corresponding to a Fourier power spectral index $`\zeta =2.5\dots 3.3`$. The analysis of further maps will be discussed in future work.
### 2.2 Two-dimensional maps and three-dimensional structures
In the application to molecular cloud structure simulations we have to restrict the analysis either to the three-dimensional structure or to the two-dimensional projection of the structure which would be astronomically observed, e.g. in optically thin lines or the FIR dust emission.
Stutzki et al. (1998) have shown that the spectral index of the power spectrum $`\zeta `$ for a spatially isotropic structure remains constant on projection. This means that the projected map of three-dimensional density structures shows the same $`\zeta `$ as the original structure as long as we assume that the astronomical structure is on the average isotropic. Consequently the slope of the $`\mathrm{\Delta }`$-variance grows by 1 in projection.
In Fig. 2 we demonstrate this for a simulation where we determined the $`\mathrm{\Delta }`$-variance of the three-dimensional structure and the $`\mathrm{\Delta }`$-variances of the three perpendicular projections. The dashed lines show the three projected $`\mathrm{\Delta }`$-variances. Now we multiply the three-dimensional $`\mathrm{\Delta }`$-variance by the abscissa values to obtain the same local slope as measured in two dimensions (dotted line). However, we still have to correct for the scale length of the measure. The length of an arbitrary three-dimensional vector is reduced on projection to two dimensions by a factor $`\pi /4`$ on the average. Therefore, we adjust the local scale by this factor for the $`\mathrm{\Delta }`$-variance determined in three dimensional space. The resulting plot is shown as the thick solid line in Fig. 2. We obtain exactly the same general behaviour as for the projected maps. The equivalent plot for numerous other models verified this as a general behavior. Hence, we can either consider the three-dimensional variance or the projected variances and can simply translate them into each other.
Treating the three-dimensional variances is preferable in that it measures exactly the scales as they occur in the density structure. The projected maps, however, allow a direct comparison to astronomical observations. Putting the relation to the observations at first priority, we will show in the following the variances translated to the two-dimensional behaviour and we will only mention the physical three-dimensional scales if they appear to be especially prominent.
In this paper, we will restrict ourselves to simple projections, taking them as representations of the integrated map of optically thin lines or optically thin continuum emission. We will not treat the full radiative transfer problem which would have to be solved for a general treatment. Optical depth effects in fractal and random structures were discussed by Ossenkopf et al. (1998) and they will be taken into account in a subsequent paper dealing with the simulation of certain molecular clouds.
As a side-result of this comparison we find, however, that the treatment of maps instead of three-dimensional cubes by observers can easily lead to a misinterpretation of the structure scaling. Fig. 3 compares the three-dimensional $`\mathrm{\Delta }`$-variance computed in 3-D and the same variance corrected for 2-D projection, as it could be measured by an observer, for a hydrodynamic decaying turbulence model. Whereas the 3-D plot only shows a broad distribution of structures, the human eye tries to see in the 2-D curve at least a reasonable range with a power law between 0.03 and 0.2. Except for the smallest lags, which are dominated by numerical viscosity as discussed below, the plot is quite similar to variances obtained e.g. by Stutzki et al. (1998) for molecular clouds. Hence, uncritical observers might be led to see self-similar behaviour even where there is no strong indication for a power law.
## 3 Numerics
### 3.1 Computations
We use simulations of uniform decaying and driven turbulence with and without magnetic fields described by Mac Low et al. (1998) in the decaying case and by Mac Low (1998) in the driven case. These simulations were performed with the astrophysical MHD code ZEUS-3D<sup>1</sup><sup>1</sup>1Available by registration with the Laboratory for Computational Astrophysics of the National Center for Supercomputing Applications at the email address lca@ncsa.uiuc.edu (Clarke 1994). This is a three-dimensional version of the code described by Stone & Norman (1992a, b) using second-order advection (Van Leer 1977), that evolves magnetic fields using constrained transport (Evans & Hawley 1988), modified by upwinding along shear Alfvén characteristics (Hawley & Stone 1995). The code uses a von Neumann artificial viscosity to spread shocks out to thicknesses of three or four zones in order to prevent numerical instability, but contains no other explicit dissipation or resistivity. Structures with sizes close to the grid resolution are subject to the usual numerical dissipation, however.
In this paper, we attempt to use these simulations to derive some of the observable properties of supersonic turbulence. Although our dissipation is clearly greater than the physical value, we can still derive useful results for structure in the flow that does not depend strongly on the details of the behavior at the dissipation scale. Such structure exists in incompressible hydrodynamic turbulence (e.g. Lesieur 1997). In Mac Low et al. (1998) it was shown that the energy decay rate of decaying supersonic hydrodynamic and MHD turbulence was independent of resolution with a resolution study on grids ranging from $`32^3`$ to $`256^3`$ zones. Because both numerical dissipation and artificial viscosity act across a fixed number of zones, increasing resolution yields decreasing dissipation. The results we describe in this paper suggest that in some cases observable features may be independent enough of resolution, and thus of the strength of dissipation. Despite the limitations of our method we can therefore draw quantitative conclusions. Again, we support this assertion by appealing to resolution studies whenever possible.
The simulations used here were performed on a three-dimensional, uniform, Cartesian grid with side $`L=2`$, extending from -1 to 1 with periodic boundary conditions in every direction. For convenience, we have normalized the size of the cube to unity in the analyses described here, so that all length scales are in fractions of the cube size. An isothermal equation of state was used in the computations, with sound speed chosen to be $`c_s=0.1`$ in arbitrary units. The initial density and, in relevant cases, magnetic field were both initialized uniformly on the grid, with the initial density $`\rho _0=1`$ and the initial field parallel to the $`z`$-axis.
The turbulent flow is initialized with velocity perturbations drawn from a Gaussian random field determined by its power distribution in Fourier space, following the usual procedure. As discussed in detail in Mac Low et al. (1998), it is reasonable to initialize the decaying turbulence runs with a flat spectrum with power from $`k_d=1`$ to $`k_d=8`$ because that will decay quickly to a turbulent state. Note that the dimensionless wavenumber $`k_d=L/\lambda _d`$ counts the number of driving wavelengths $`\lambda _d`$ in the box. A fixed pattern of Gaussian fluctuations drawn from a field with power only in a narrow band of wavenumbers around some value $`k_d`$ offers a very simple approximation to driving by mechanisms that act on that scale. To drive the turbulence, this fixed pattern was normalized to produce a set of perturbations $`\delta \nu (x,y,z)`$, and at every time step a velocity field $`\delta v(x,y,z)=A\delta \nu `$ was added to the velocity $`v`$, with the amplitude $`A`$ chosen to maintain a constant kinetic energy input rate, as described by Mac Low (1998).
### 3.2 Resolution Studies
In Figure 4 we show how numerical resolution, or equivalently the scale of dissipation, influences the $`\mathrm{\Delta }`$-variance spectrum that we find from our simulations. We test the influence of the numerical resolution on the structure by comparing a simple hydrodynamic problem of decaying turbulence computed at resolutions from $`64^3`$ to $`256^3`$, with an initial rms Mach number $`M=5`$ (Model D from Mac Low et al. 1998).
In contrast to the results from Mac Low (1999) which showed little dependence of the energy dissipation rate on the numerical resolution, we find here remarkable differences in the scaling behaviour of the turbulent structures. At small scales we find a very similar decay in the relative structure variations up to scales of about 10 times the pixel size (0.03, 0.06, and 0.1 for the resolutions $`256^3`$, $`128^3`$, and $`64^3`$, respectively) in all three models. This constant length range starting from the pixel scale clearly identifies this decay as an artifact from the simulations which can be attributed to the numerical viscosity acting at the smallest available size scale.
Another very similar behaviour can be observed at the largest lags where the relative structure variations decay for all three simulations on a length scale covering a factor two below half the cube size. This structure reflects the original driving of the turbulence with a maximum wavenumber $`k_d=8`$ that manifests itself in the production of structure on the corresponding length scale. Only for the $`256^3`$ cubes we find a range of an approximately self-similar behaviour at intermediate scales that is not yet smoothed out by the influence of numerical viscosity.
Structures larger than at most half the cube size are suppressed by the use of periodicity in the simulations. Together with the viscosity range of about 10 pixels there is only a scale factor of about 10, 5, or 3 remaining for the three different resolutions where we can study true structure not influenced by the limiting conditions of the numerical treatment. For the derivation of reliable scaling laws, we must therefore use at least simulations on the $`256^3`$ grid. On the other hand we know, however, that the limits of the observations also constrain the scaling factor for structure investigations in observed maps to at most a factor 10 in general (Bensch et al. 1999).
Although we have plotted here only the results for a hydrodynamic model, there are no essential differences in the resolution dependence when magnetic fields are included, as discussed below.
### 3.3 Statistical Variations
Another question concerns the statistical significance of the structure in the simulations. Since each simulation and even each time step provides another structure there is a priori no reason to believe that a statistical measure like the $`\mathrm{\Delta }`$-variance is about the same for each realization of a given HD/MHD problem.
Restricted by the huge demand for computing power in each simulation we cannot provide a statistically significant analysis of many realizations for each problem. However, we will try to provide some general clues for the uncertainty of the $`\mathrm{\Delta }`$-variance measured for a certain structure.
A first impression can be obtained from the differences in the three projections of one cube in Fig. 2. Because each projection provides an independent view on the three-dimensional structure their variation can be considered a rough measure for the statistical significance of the $`\mathrm{\Delta }`$-variance plots. We see that the curves agree well up to lags of about a quarter of the cube size but deviate considerably at larger lags. This is explained by the number of structures contributing to the variations at each scale. Whereas we find many small fluctuations dominating the variance at small scales there is in general only one main structure responsible for the variance at the largest scale. Its different appearance from different directions then produces the uncertainty in the $`\mathrm{\Delta }`$-variance there.
Looking at the variance determined in three dimensions in Fig. 2 we see however that it provides already a kind of average over the three projected functions. Analyzing the three-dimensional cubes thus removes already part of the statistical variations that could be seen by an observer when looking at the two-dimensional projections only. The statistical uncertainty is reduced for the $`\mathrm{\Delta }`$-variances determined in three dimensions considered below.
As another estimate for the uncertainty in this case we study the variances for different time steps in the evolution of a continuously driven hydrodynamic model. In the evolution of the simulation different structures are produced which should behave statistically equal since the general process of their formation and destruction remains the same.
Fig. 5 shows four different timesteps in an HD model driven at wavenumber $`k_d=2`$ each separated by 0.75 the box crossing time at the rms velocity. The variations even at larger scales are much less than in Fig. 2. It appears that the $`\mathrm{\Delta }`$-variance does a good job of characterizing invariant properties of the structure. Only for high accuracy determinations of the slope or the reliable identification of self-similarity ensemble averages should be taken by computing many realizations.
## 4 Results
### 4.1 Decaying hydrodynamic turbulence
We have computed the $`\mathrm{\Delta }`$-variance spectra for two models of decaying hydrodynamic turbulence, one with initial rms Mach number $`M=5`$, noted as Model D in Mac Low et al. (1998), and one otherwise identical model with initial $`M=50`$, not published before. As noted above, these models were excited with a flat-spectrum pattern of velocity perturbations, which would correspond to a rather steep spectrum $`\sigma _\mathrm{\Delta }^2L^2`$ in 2D or $`L^3`$ in 3D, respectively. Both were run at a resolution of $`256^3`$.
The first time steps in Figure 6 show that only hypersonic turbulence provides a self-similar behavior, indicated by a power-law $`\mathrm{\Delta }`$-variance spectrum. In this case, there appears to be structure corresponding to a power-law spectrum of $`k^{-2.5}`$, somewhat steeper than the $`k^{-2}`$ that would be expected from a simple box full of step-function shocks, but approaching the steepness observed for real interstellar clouds. When the turbulence decays to supersonic rms velocities at later times, or in the model having only supersonic initial velocities, the spectrum indicates no self-similar structure but a distinctive physical scale that evolves with time to larger sizes.
We speculate that a physical explanation for this observation might be drawn from the nature of dissipation in supersonic turbulence. Energy does not cascade from scale to scale in a smooth flow through wavenumber space as is assumed by analyses following Kolmogorov (1941) for subsonic turbulence. Rather, energy on large scales is directly transferred to scales of the shock thickness by shock fronts, and there dissipated. As a result, energy is not added to small and intermediate scale structures at the same rate that it is dissipated. Combined with a fairly steep power spectrum, this means that the smaller scale structures will be lost to viscous dissipation first, moving the typical size to larger and larger scales.
We can quantify the change in typical scale by simply fitting a power-law to the lag $`L_{pk}`$ at which $`\sigma _\mathrm{\Delta }^2`$ reaches a peak. This was done for the 3D $`\mathrm{\Delta }`$-variance where the peak appears more prominent than in Fig. 6 and represents the true length scale without projection effects. For the model starting at Mach 5 we find a variation in time of $`L_{pk}\propto t^q`$ with $`q=0.51`$.
These decaying turbulence models were found by Mac Low et al. (1998) to lose kinetic energy at a rate $`E_{\mathrm{kin}}\propto t^{-\eta }`$, with $`\eta \simeq 1`$. However, Mac Low (1999) showed that driven hydrodynamic turbulence dissipates energy at a rate $`\dot{E}_{\mathrm{kin}}\propto v^3/\mathrm{\ell }`$, corresponding to a kinetic energy decay rate of $`\eta =2`$ if the effective decay length scale $`\mathrm{\ell }`$ were independent of time. From this observation, a time dependence of $`\mathrm{\ell }\propto t^{1/2}`$ was deduced. Mac Low (1999) also showed that the characteristic driving length-scale $`1/k_d`$ was the most likely identification for $`\mathrm{\ell }`$. Identifying $`\mathrm{\ell }`$ for decaying turbulence with the length scale containing the most power in the $`\mathrm{\Delta }`$-variance spectrum $`L_{pk}`$ seems natural, and yields excellent agreement in the time-dependent behavior of the length scale, since $`L_{pk}\propto t^{0.51}`$.
### 4.2 Driven hydrodynamic turbulence
In Figure 7 we show the $`\mathrm{\Delta }`$-variance spectra for models of supersonic hydrodynamic turbulence driven with a fixed pattern of Gaussian random perturbations having only a narrow range of wavelengths and two different energy input rates. The driving wavelengths are 1/2, 1/4, and 1/8 of the cube size, corresponding to driving wavenumbers of $`k_d=2`$, 4, and 8. In the upper graph (models HE2, HE4, and HE8 from Mac Low 1999), the driving power is by a factor 10 higher than in the lower graph (models HC2, HC4, and HC8). The equilibrium rms Mach numbers here are 15, 12, and 8.7, for the high energy models driven with $`k_d=2`$, 4, and 8 respectively, and 7.4, 5.3, and 4.1 for the low energy simulations. All of these models were run at $`128^3`$ resolution.
All spectra show a prominent peak characterizing the dominant structure length. It is obviously related to the scale on which the turbulence is driven but the exact position depends on the energy input rate. Whereas all peak positions in the strongly driven case are at about 0.5 times the driving wavelength (correcting the scales from Fig. 7 by the projection factor $`4/\pi `$), they change in the lower graph from 0.8 times the driving wavelength for $`k_d=2`$ to 0.6 $`\lambda _d`$ for $`k_d=8`$. Thus, only the strongly hypersonic models provide a constant relation between the driving scale and the dominant scale of the density structure.
Below the peak scale, a power-law distribution of structure is observed, while above this scale, the spectrum drops off very quickly. The power-law section of the spectrum has a slope between 0.45 for the high Mach number models and 0.75 for the lower Mach numbers, corresponding to a power spectrum power law of $`k^{-2.45}\dots k^{-2.75}`$. This agrees with the slope observed in the case of hypersonic decaying turbulence and is well in the range observed in real molecular clouds.
Further simulations should systematically study the transition from supersonic to hypersonic velocities in driven models to find the critical parameters for the onset of a self-similar behaviour and the exact relation between the peak position, the driving scale, and the viscous dissipation length in this case.
### 4.3 MHD models
Now we can examine what happens when magnetic fields are introduced to models of both decaying and driven turbulence. In Figure 8 we begin by examining the $`\mathrm{\Delta }`$-variance spectra of a decaying model with $`M=5`$ and initial rms Alfvén number $`A=1`$, equivalent to a ratio of thermal to magnetic pressure $`\beta =0.08`$. This $`256^3`$ model was described as Model Q in Mac Low et al. (1998).
No power law behavior is observed, with the spectra showing a uniformly curved shape remarkably devoid of distinguishing features. We emphasize that this behavior is preserved through a resolution study encompassing a factor of four in linear resolution, suggesting that it is not simply due to numerical diffusivity, but rather is a good characterization of the structure of a strongly magnetized plasma. Thus we conclude that self-similar, power-law behavior is not a universal feature of MHD turbulence, and that observations showing such curved $`\mathrm{\Delta }`$-variance spectra may reflect the true underlying structure, rather than being imperfect observations of self-similar structure. The magnetic field tends to transfer power from larger to smaller scales quickly, overpowering the evolution of the characteristic driving scale seen in the hydrodynamical models.
A similar behavior is visible in the driven turbulence models shown in Fig. 9. In the upper part of the figure the $`\mathrm{\Delta }`$-variance spectra for three $`128^3`$ models with driving wavenumber $`k_d=4`$ and ratios of thermal to magnetic pressure of $`\beta =0.02`$, 0.08, and 2.0 (models MC4X, MC45, and MC41 as described by Mac Low 1999) are shown along with a hydrodynamical model ($`\beta =\infty `$) with identical driving (HC4). The MHD models all have equilibrium rms Mach number $`M\simeq 5`$; their equilibrium rms Alfvén numbers are about 0.8, 1.6, and 8 respectively. In the lower graph we have plotted the equivalent extreme cases of $`\beta =0.02`$ and $`\beta =\infty `$ for the $`k_d=2`$ driving.
We find again that the magnetic fields have some tendency to transfer energy from large to small scales, presumably through the interactions of non-linear MHD waves. The more energy that is transferred down to the dissipation scale, the less power is seen in the $`\mathrm{\Delta }`$-variance spectra, suggesting that the strong field ($`\beta =0.02`$) is more efficient at energy transfer than the weaker, higher $`\beta `$ fields. The larger-scale $`k_d=2`$ driving admittedly shows much less drastic effects than the $`k_d=4`$ driving, emphasizing that the magnetic effects are secondary in comparison to the nature of the driving.
This transfer of energy to smaller scales has implications for the support of molecular clouds. There have been suggestions by Bonazzola et al. (1987) and Léorat et al. (1990) that turbulence can only support regions with Jeans length greater than the effective driving wavelength of the turbulence. The transfer of power to smaller scales might increase the ability of turbulence driven at large scales to support even small-scale regions against collapse. Computations including self-gravity that may confirm this are described by Mac Low, Heitsch, & Klessen (1999).
### 4.4 The velocity space
The $`\mathrm{\Delta }`$-variance measuring the density structure of the HD/MHD simulations can be compared directly to the analysis of astrophysical maps taken in optically thin tracers. However, there is much additional information in the velocity space which has to be addressed too.
Here, the $`\mathrm{\Delta }`$-variance cannot be applied to the observations since they retrieve only the line-of-sight integrated one-dimensional velocity component convolved with the density. Nevertheless, we can apply it to analyze the characteristic quantities in the simulations where we have the full information on the spatial distribution of the velocity vectors. As the $`\mathrm{\Delta }`$-variance measures the relative amount of structure on certain scales in the density cubes it can be applied in the same way to the velocity components or the energy density.
Fig. 10 shows the $`\mathrm{\Delta }`$-variances for the kinetic energy density, and the $`x`$-velocity component of the driven hydrodynamic model discussed in Sect. 7. The plots can be compared to the $`\mathrm{\Delta }`$-variances of the corresponding density structures shown in the upper part of Fig. 7. We see a shift of the dominant structure size from the driving wavelength that is directly seen in the velocity structure to smaller scales for the density structure. The energy density structure shows an intermediate behavior as a combination of density and velocity structure.
The same comparison for the supersonic model shown in the lower part of Fig. 7 provides much smaller differences in the peak position of the $`\mathrm{\Delta }`$-variance for the three quantities. This means that hypersonic velocities are not able to create density structures at the scale of injection but only on some smaller scales whereas smaller velocities produce void and compressed regions directly at the scale of their occurrence.
The slopes in the self-similar range at smaller scales are different for the density and velocity structure. The Gaussian perturbations in velocity space create a $`\mathrm{\Delta }`$-variance slope of 2.1 in the projected velocities but do not translate into the same structural variations in the other quantities. In the density structure, perturbations are created more efficiently at smaller scales so that we obtain a slope of 0.45. The energy density structure turns out to be dominated by the density variations so that we find about the same slope there.
The difference in the $`\mathrm{\Delta }`$-variances between the three quantities is however probably due to the special driving mechanism. If we apply the same analysis to the decaying turbulence models, we find that the peak position and slopes in all three quantities approach each other after some time, so that an equipartition of structure in density and velocity is produced. In the first steps of the decaying model from Fig. 6 we still find a difference in the slopes of the $`\mathrm{\Delta }`$-variances between the density and velocity structure of a factor 1.5 to 2 whereas the slopes are almost identical at the latest step. Applying the same line of reasoning to the astrophysical observations, the comparison of density and velocity structure there might help to clarify the state of relaxation and the driving mechanism creating structure in interstellar clouds.
## 5 Conclusions and Outlook
### 5.1 Conclusions
In this paper we have shown that wavelet transform methods, as exemplified by the $`\mathrm{\Delta }`$-variance described by Stutzki et al. (1998), offer a useful tool for comparison of observed structure in molecular clouds to simulations of magnetized turbulence. The $`\mathrm{\Delta }`$-variance spectrum can be analytically related to the more commonly used Fourier power spectrum, but has distinct advantages: it explicitly reveals finite map size and finite resolution effects; it works in the absence of periodic boundary conditions; and it will reveal characteristic structure scale even in the presence of shocks and other sharp discontinuities. One note of caution is called for in its use, however: 2D spectra are proportional to the 3D spectra multiplied by the lag, and this can introduce apparent power-law behavior even in cases where the 3D spectra do not appear to have any such behavior intrinsically.
We computed $`\mathrm{\Delta }`$-variance spectra for the numerical simulations of compressible, hydrodynamical and MHD turbulence described by Mac Low et al. (1998) in the freely decaying case, and by Mac Low (1999) in the case of driven turbulence, along with a few extra models run to expand the parameter space in interesting directions. Resolution studies reveal that the $`\mathrm{\Delta }`$-variance spectra cleanly pick out the scale on which artificial viscosity operates, which appears as a steeply dropping section of the spectrum at small lags. Examination of spectra from widely different times for driven models in equilibrium shows that the $`\mathrm{\Delta }`$-variance spectrum offers a stable characterization of the dynamically varying structure.
Decaying hydrodynamical turbulence excited initially with a range of length scales only appears to have self-similar, power-law behavior in the hypersonic regime. Once the rms Mach number drops below $`M\approx 4`$ or so, a distinct length scale appears that grows as the square root of time. This appears to confirm the prediction made by Mac Low (1999) that the effective driving scale must increase to explain the inverse linear dependence of the kinetic energy dissipation rate on the time.
Driven hydrodynamical turbulence can maintain self-similar, power-law behavior at scales less than the driving scale, with a slope that lies directly in the range of slopes observed for real molecular clouds. In the observations, such power-laws extend to the largest scales in the map that can be analyzed, suggesting that driving mechanisms may be acting that add power on scales larger than those of the individual clouds and clumps that are mapped.
Molecular clouds are observed to have magnetic fields strong enough for the Alfvén velocities to be of the same order of magnitude as the observed rms velocities (e.g. Crutcher 1999). We have therefore examined the effects of magnetic fields on our results from the hydrodynamic models. We find that even strong magnetic fields often have fairly small effects, but that they do tend to transfer power from large to small scales, with implications for the support of small Jeans-unstable regions by large-scale driving mechanisms. Contrary to some expectations, we find that magnetic fields do not tend to create self-similar behavior but rather tend to destroy it, although weaker fields do so to a lesser degree. Hypersonic turbulence with Alfvén numbers of a few appears to be consistent with the observations of both power-law behavior and relatively strong magnetic fields.
The combined analysis of the velocity and density structure in molecular clouds can help to distinguish between the possible mechanisms driving interstellar turbulence and to provide information on the internal relaxation or virialization of the clouds on different scales.
### 5.2 Outlook
The next step in this work is to move from a general characterization of supersonic turbulence to attempts to fit observations of specific real interstellar clouds, using what we have learned so far to guide our search. This should yield constraints on the effective Mach and Alfvén numbers in these clouds, and begin to show whether supersonic, super-Alfvénic turbulence can indeed give a good description of the structure of molecular clouds.
To get a detailed comparison between observations and simulations we have to solve the full radiative transfer problem relating the simulated structure to maps in common lines such as the lower CO transitions, which are often optically thick. Having the full radiative transfer computations also allows the fit to include not just the observed map scaling relations but also the peak intensities, line ratios, and line shapes, placing significant additional constraints on the models.
Furthermore, the structure analysis must be extended beyond the investigation of isotropic scaling behaviour. Appropriate measures for anisotropy or filamentarity, and for the relationship between the density and the velocity structure, have to be found. Our first results presented here have only scratched the surface of the possibilities for systematic comparison between cloud observations and direct turbulence simulations.
###### Acknowledgements.
We thank F. Bensch, A. Burkert, and J. Stutzki for useful discussions. V.O. acknowledges support by the Deutsche Forschungsgemeinschaft through the grant SFB 301C. Computations were performed at the Rechenzentrum Garching of the Max-Planck-Gesellschaft. ZEUS was used by courtesy of the Laboratory for Computational Astrophysics at the NCSA. This research has made use of NASA’s Astrophysics Data System Abstract Service.
# Conductance fluctuations near Anderson transition
## Abstract
In this paper we report measurements of conductance fluctuations in single-crystal samples of Si doped with P and B close to the critical composition of the metal-insulator transition ($`n_c\approx 4\times 10^{18}`$ cm<sup>-3</sup>). The measurements show that the noise, which arises from bulk sources, does not diverge as the Ioffe-Regel limit ($`k_Fl\approx 1`$) is approached from the metallic side. At room temperature, the magnitude of the noise shows a shallow maximum around $`k_Fl\approx 1.5`$ and drops sharply as the insulating state is approached.
Electron localization and the metal-insulator (MI) transition have been topics of considerable interest for quite some time, in particular in the last two decades after the scaling theory clarified some of the key physics ingredients . One of the most researched mechanisms of the MI transition is the Anderson-Mott transition that occurs in semiconductors doped to a critical concentration ($`n_c`$). A number of thermodynamic and transport studies have been done in the past to understand the nature of the transition . One very important physical quantity that has not been investigated in doped semiconductors close to the critical composition is the conductance fluctuations, or noise. In this paper, we report the results of measurements of conductance noise in single crystals of Si doped with P and B, which allow us to approach the critical region from the metallic side ($`n/n_c\gtrsim 1`$).
In this paper we sought an answer to one important question: does the magnitude of the conductance fluctuations diverge as we approach the Anderson transition in the heavily doped Si system? We think that this issue has not been rigorously looked into. The only reported experiment that has systematically studied the fluctuations as a function of disorder close to the Anderson transition is in thin films of In<sub>2</sub>O<sub>x</sub> . The authors reported a sharp rise in the magnitude of the conductance noise (measured close to room temperature) as the disorder is increased and the Ioffe-Regel limit ($`k_Fl\approx 1`$) is approached. It is thus of interest to investigate whether this behavior is universal.
Conductance fluctuations with spectral power density $`1/f`$ (often known as $`1/f`$ noise) have been seen in disordered metallic films (like those of Ag and Bi) and also in oxide and C-Cu composites near compositions close to the M-I transition. The main focus of these works, based on disordered films, was to investigate Universal Conductance Fluctuations (UCF) . The issue of divergence (or its absence) of the fluctuations as a function of disorder has been investigated in such films. However, no work has been reported so far on the experimental determination of conductance fluctuations in doped crystalline semiconductors (like Si doped with P or B) with concentrations close to the critical composition. Our choice of doped single-crystal Si was mainly guided by the fact that the Anderson transition has been most thoroughly studied in this system and most theoretical work has taken it as a model substance. It is also a well defined system in which it is possible to get well characterized samples.
Polished wafers of $`111`$ orientation (grown by the Czochralski method) and thickness $`300\mu `$m were sized down to a length of 2 mm and a width of 0.1-0.2 mm, and were thinned down by etching to a thickness of 15-25 $`\mu `$m. (The samples used in this experiment were kindly supplied by Prof. D.H. Holcomb of Cornell University.) These wafers were used extensively in previous conductivity studies . Details of growth and conductivity data can be found elsewhere. Table I contains the necessary numbers. In all, we investigated five different samples with $`k_Fl`$ varying from 2.8 to 0.78 (the calculation of $`l`$, the mean free path, is based on the room temperature resistivity). The set contains both uncompensated (Si(P)) and compensated (Si(P,B)) samples.
The noise was measured by a five-probe ac technique on samples in a bridge-type configuration, with an active volume for noise detection ($`\mathrm{\Omega }`$) of $`\sim 10^{-6}`$ cm<sup>3</sup> and a peak current density $`\sim 10^2`$ A/cm<sup>2</sup>. The noise was measured at $`T`$ = 300 K and $`T`$ = 4.2 K with temperature stability better than 10 mK. The background noise primarily consisted of the Johnson noise $`4k_BTR`$ of the sample. The spectral power density $`S_v(f)\propto V_{bias}^2`$. Leads of gold wires of diameter $`25\mu `$m were bonded to the sample by a specially fabricated wire bonder. The contacts were Ohmic and had a temperature independent contact resistance $`\sim 1\mathrm{\Omega }`$. All the relevant numbers for the samples studied are given in table I. In this table the mean free path $`l`$ in the parameter $`k_Fl`$ has been obtained from the room temperature resistivity data. The zero temperature conductivity, $`\sigma _0`$, shown in table I, has been obtained from the conductivity ($`\sigma (T)`$) below 4.2 K by using the power law $`\sigma (T)=\sigma _0+mT^\nu `$.
For all the samples studied, the spectral power density at a given frequency ($`\approx 1`$ Hz) was found to depend strongly on the sample volume $`\mathrm{\Omega }`$ when the latter was varied by more than a factor of 20. We show three examples in figure 1. Typically, $`S_v(f)\propto \mathrm{\Omega }^{-\nu }`$ with $`\nu \approx 1.1`$-1.3. This is seen at both 300 K and 4.2 K. This implies that the predominant contribution to the noise arises from the bulk. A strong surface or contact contribution weakens the dependence of the noise on $`\mathrm{\Omega }`$ and makes $`\nu <1`$. This is an important observation because in previous studies on semiconductors (done on films or devices with interfaces) the doping concentration was much smaller ($`n\ll n_c`$) and the noise had a substantial contribution from surfaces or interfaces . Our experiments clearly show that the noise in heavily doped single crystals arises from the bulk.
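As a simple illustration of how such an exponent is extracted, the following sketch fits $`\nu `$ by a straight line in log-log space; the numbers are made-up placeholders, not the measured data of figure 1.

```python
# A straight-line fit in log-log space to extract the volume exponent nu in
# S_v ~ Omega^(-nu).  The numbers below are made-up placeholders, not the
# data of figure 1.
import numpy as np

omega = np.array([5.0e-7, 1.0e-6, 2.0e-6, 5.0e-6, 1.0e-5])      # cm^3 (assumed)
s_v = np.array([8.0e-13, 3.8e-13, 1.7e-13, 6.2e-14, 2.9e-14])   # V^2/Hz (assumed)

slope, intercept = np.polyfit(np.log10(omega), np.log10(s_v), 1)
print(f"nu = {-slope:.2f}")   # values near 1 indicate bulk-dominated noise
```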
In figure 2 we show the noise (measured at $`f=3`$ Hz) as a function of the parameter $`k_Fl`$ for the 5 samples studied by us. The data at 4.2 K are shown in the inset. Here $`k_F`$ is determined from the carrier density $`n`$ using $`k_F=(3\pi ^2n)^{1/3}`$ and $`l`$ was determined from the room temperature resistivity $`\rho `$ using the free-electron relation $`l=\hbar k_F/(ne^2\rho )`$. The noise is expressed through the normalized value $`\gamma `$ defined as:
$$\gamma =fS_v(f)(\mathrm{\Omega }n)/V_{bias}^2$$
(1)
In this representation $`\gamma `$ is a dimensionless number which represents the normalized noise; $`\gamma `$ is often referred to as the Hooge parameter. Strictly speaking, this normalization to a frequency independent $`\gamma `$ is valid only for $`S_v(f)\propto 1/f`$. To be consistent we have evaluated $`\gamma `$ at $`f=`$ 3 Hz for all the samples. It can be seen in figure 2 that $`\gamma `$ has a distinct dependence on $`k_Fl`$. At $`T=300`$ K, $`\gamma `$ shows a shallow hump at $`k_Fl\approx 1`$-1.5. However, as the insulating state is approached, $`\gamma `$ shows a turnaround and actually decreases. At $`T=4.2`$ K (see inset of figure 2) $`\gamma `$ has a peak at $`k_Fl\approx 2.3`$. However, $`\gamma `$ stays close to 1 and does not diverge as $`k_Fl\to 1`$. The mechanisms of noise at 4.2 K and 300 K are expected to be different. As a result we do not expect the same dependence of $`\gamma `$ at two widely different temperatures. However, our data show that irrespective of the temperature, $`\gamma `$ does not diverge as we approach the insulating state. This is unlike what has been seen in disordered films of In<sub>2</sub>O<sub>x</sub> , where $`\gamma >10^5`$ when $`k_Fl\to 1`$. For the sake of comparison this is shown in figure 3 along with our data. In our case $`\gamma `$ never becomes as large as 10<sup>5</sup>-10<sup>7</sup> as seen in the oxide films, and over the whole range $`\gamma `$ is substantially smaller. In the same graph we have shown $`\gamma `$ of thin metal films. For the Si(P,B) samples the $`\gamma `$ are at least three orders of magnitude higher than those seen in conventional thin metallic films ($`\gamma \sim 10^{-3}`$ to 10<sup>-5</sup>). It is extremely interesting to note that lightly doped Si films on sapphire also show $`\gamma \sim 10^{-3}`$, although in that case it is likely that the noise arises from the surfaces/interfaces. In the lightly doped Si samples in which $`\gamma \sim 10^{-3}`$ has been observed, the doping level is $`10^{13}`$-$`10^{14}`$ /cm<sup>3</sup>. In our case the sample which has the smallest $`\gamma `$ at room temperature has a doping level of $`4\times 10^{18}`$ /cm<sup>3</sup>, and for this sample $`\gamma `$ is already down to 0.25. If this trend continues, then $`\gamma \sim 10^{-3}`$ for $`n\sim 10^{16}`$ /cm<sup>3</sup>. We believe that for $`n`$ less than this level of doping, the surface states will dominate the noise mechanism.
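For concreteness, the normalization of equation (1) can be evaluated as in the short sketch below; all input values are illustrative placeholders rather than the measured parameters of any sample in Table I.

```python
# Evaluating the normalized noise gamma of equation (1) at f = 3 Hz.
# All inputs are illustrative placeholders, not values from Table I.
f = 3.0          # Hz
s_v = 1.0e-15    # V^2/Hz, spectral power density at f (assumed)
omega = 1.0e-6   # cm^3, active sample volume (assumed)
n = 4.0e18       # cm^-3, carrier density (assumed)
v_bias = 0.1     # V, bias across the sample (assumed)

gamma = f * s_v * (omega * n) / v_bias**2
print(f"gamma = {gamma:.2f}")   # dimensionless; order unity for these samples
```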
We next investigate the spectral dependence of the noise power $`S_v(f)`$. At both $`T=`$ 4.2 K and 300 K the predominant spectral dependence is almost of $`1/f`$ type, with $`S_v\propto 1/f^\alpha `$ and $`\alpha \approx 0.9`$-1.25. This $`1/f`$ dependence has been seen over six orders of magnitude in frequency, in the range $`f\sim 10^{-4}`$ Hz to 10<sup>2</sup> Hz, for three samples with $`k_Fl=`$ 2.8 (PS24), 1.68 (D150) and 0.78 (E90). At $`T=`$ 4.2 K the spectral dependence of the noise tends to deviate from the pure $`1/f`$ form. This can be seen in figure 4, where we have plotted the data as $`f\times S_v(f)`$ vs. $`f`$. For all the samples $`f\times S_v(f)`$ is featureless at room temperature and the slope corresponds to $`\alpha \approx 1.05`$-1.2. At $`T=`$ 4.2 K, the uncompensated (and more metallic) samples retain their $`1/f`$ form. However, as the disorder increases on compensation by B doping, additional features show up, as can be seen in figure 4. This is quite prominent in the most disordered sample E90, which is rather close to the insulating side ($`\sigma _0\approx 0`$). Given the scope of the paper we do not elaborate on this point. However, we note the important observation that as the critical region is approached ($`k_Fl\to 1`$), the spectral dependence of the noise power undergoes a change.
As pointed out earlier, the dependence of $`\gamma `$ on $`k_Fl`$ at both $`T=`$ 4.2 K and 300 K is drastically different from that found in thin disordered films of In<sub>2</sub>O<sub>x</sub> near the Anderson transition . A study of conductance noise near the Anderson transition has also been done in C-Cu films . Interestingly, in that case $`\gamma \approx 1`$-5 at room temperature for all the samples close to the critical region. The noise at $`T=`$ 4.2 K in the same systems shows a somewhat larger $`\gamma \sim 10`$, but it is not as large as that seen in the In<sub>2</sub>O<sub>x</sub> films. Also, noise measurements near the Anderson transition in La<sub>1-x</sub>Sr<sub>x</sub>VO<sub>3</sub> thin films did not show any indication of divergence . Another study of noise where the metal-insulator boundary has been crossed is the investigation of the percolating Pt/SiO<sub>2</sub> composite system done at 300 K . In this case, however, $`\gamma `$ undergoes a change by 3-4 orders of magnitude when the percolation threshold is crossed. We can then conclude that the behavior of $`\gamma `$ close to the critical region of the Anderson transition may not be universal. In all likelihood, it depends on the mechanism that produces the noise. For Si(P,B) we are carrying out an extensive investigation of the temperature and field dependence of the noise, which can identify the mechanism causing it. This will be elaborated in a future publication.
Figure Captions
Fig.1. Volume ($`\mathrm{\Omega }`$) dependence of the noise in Si(P,B) samples at room temperature. A similar dependence has been observed at $`T=`$ 4.2 K as well. The relatively large error in the volume determination results from rounding-off of the edges of the samples during chemical etching. The dotted line has a slope of $`-1.1`$.
Fig.2. Variation of the normalized noise parameter $`\gamma `$ as a function of disorder, as measured by the parameter $`k_Fl`$, at $`T=`$ 300 K. The inset shows the data at $`T=`$ 4.2 K. The solid line is a guide to the eye.
Fig.3. Comparison of $`\gamma `$ for different solids with varying $`k_Fl`$. The data points show our data. The shaded and hatched regions have been taken from other published data, principally ref. 5.
Fig.4. Variation of spectral density of noise with frequency at $`T=`$ 300 K and $`T=`$ 4.2 K in three representative samples. Data for different samples have been shifted for clarity.
# MONOJET RATES IN ULTRARELATIVISTIC HEAVY ION COLLISIONS
Based on model concepts, we study the monojet-to-dijet ratio as a function of the jet energy detection threshold in ultrarelativistic collisions of nuclei. We present a comparative analysis of the contributions to the monojet yield from gluon radiation before the initial hard parton-parton scattering and from non-symmetric dijet energy losses in the quark-gluon plasma, which is expected to be created in ultrarelativistic heavy ion collisions.
Talk given at XIVth Conference on Ultrarelativistic Nucleus-Nucleus Collisions
”Quark Matter’99”, Torino, Italy, May 10-15, 1999
Hard jet production is considered to be an effective probe for the formation of super-dense matter – quark-gluon plasma (QGP) – in future heavy ion collider experiments at RHIC and LHC. A high-$`p_T`$ parton pair (dijet) from a single hard scattering is produced at the initial stage of the collision process (typically at $`\lesssim 0.01`$ fm/c). It then propagates through the QGP formed due to mini-jet production at larger time scales ($`\sim 0.1`$ fm/c), and interacts strongly with the comoving constituents of the medium. Various aspects of hard parton passage through dense matter are discussed intensively . In particular, the strong acoplanarity of the dijet transverse momentum , the dijet quenching (a suppression of high-$`p_T`$ jet pairs) and a monojet-to-dijet ratio enhancement were originally proposed as possible signals of dense matter formation in ultrarelativistic ion collisions.
In the simple QCD picture of a single hard parton-parton scattering without initial state gluon radiation (i.e. when the jets of the dijet pair escape from the primary hard scattering vertex back-to-back in the azimuthal plane with equal absolute transverse momentum values, $`p_{T1}=p_{T2}`$), a monojet is created only if one of the two hard partonic jets loses so much energy due to multiple scattering in the dense matter that effectively we can detect only a single jet in the final state. The monojet rate is obtained by integrating the dijet rate over the transverse momentum $`p_{T2}`$ of the second (unobserved) jet with the condition that $`p_{T2}`$ be smaller than the threshold value $`p_{cut}`$ (or the threshold jet energy $`E_T=p_{cut}`$ <sup>1</sup><sup>1</sup>1Due to fluctuations of the transverse energy flux arising from the huge multiplicity of secondary particles in the event, “true” jet recognition in ultrarelativistic heavy ion collisions is possible only above some energy threshold.). The rate of dijets $`R^{dijet}`$ with $`p_{T1},p_{T2}>p_{cut}`$ and of monojets $`R^{mono}`$ with $`p_{T1}>p_{cut}`$ ($`p_{T2}<p_{cut}`$) in central $`AA`$ collisions is then calculated as an integral over all possible jet transverse momenta $`p_{T1}`$, $`p_{T2}`$ and longitudinal rapidities $`y_1`$, $`y_2`$.
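The logic of this classification can be illustrated with a toy Monte Carlo: sample back-to-back dijets from a steeply falling spectrum, apply independent in-medium energy losses to the two jets, and count events with one or both jets above $`E_T=p_{cut}`$. The spectral index, mean losses and smearing widths below are illustrative assumptions, not the parameters of the model used for the figures.

```python
# A toy Monte Carlo of the monojet/dijet classification: back-to-back jet
# pairs from a steeply falling p_T spectrum, independent in-medium energy
# losses for the two jets, and counting against the threshold E_T = p_cut.
import numpy as np

rng = np.random.default_rng(1)

def mono_to_dijet_ratio(p_cut, n_events=200_000, spec_index=5.0,
                        mean_loss=10.0, loss_sigma=5.0, pt_min=20.0):
    # dN/dp_T ~ p_T^(-spec_index) above pt_min, sampled by inversion
    u = rng.random(n_events)
    pt0 = pt_min * (1.0 - u) ** (-1.0 / (spec_index - 1.0))
    # independent collisional losses for the two jets, truncated at zero
    loss1 = np.clip(rng.normal(mean_loss, loss_sigma, n_events), 0.0, None)
    loss2 = np.clip(rng.normal(mean_loss, loss_sigma, n_events), 0.0, None)
    pt1 = np.clip(pt0 - loss1, 0.0, None)
    pt2 = np.clip(pt0 - loss2, 0.0, None)
    hi, lo = np.maximum(pt1, pt2), np.minimum(pt1, pt2)
    n_dijet = np.count_nonzero((hi > p_cut) & (lo > p_cut))
    n_mono = np.count_nonzero((hi > p_cut) & (lo <= p_cut))
    return n_mono / max(n_dijet, 1)

for e_t in (50.0, 100.0, 150.0):   # GeV, jet energy detection thresholds
    print(e_t, mono_to_dijet_ratio(e_t))
```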
First, in the framework of the simple model, we demonstrate that the monojet-to-dijet ratio can be related to the mean acoplanarity measured in units of the jet threshold energy, namely
$$\frac{R^{mono}}{R^{dijet}}\simeq \frac{<|K_T|>}{E_T}.$$
(1)
The results of the physics simulation have been obtained in three scenarios for jet quenching due to collisional energy losses <sup>2</sup><sup>2</sup>2Although the radiative energy losses of a high energy parton dominate over the collisional losses by up to an order of magnitude, they will, in the first place, soften the particle energy distributions inside the jet and increase the multiplicity of secondary particles, but will not affect the total jet energy . On the other hand, the collisional energy loss turns out to be practically independent of the jet cone size and emerges outside the narrow jet cone. of jet partons in the mid-rapidity region $`y=0`$: $`(i)`$ no jet quenching, $`(ii)`$ jet quenching in a perfect longitudinally expanding QGP (the average collisional energy loss of a hard gluon $`<\mathrm{\Delta }E_g>\approx 10`$ GeV, $`<\mathrm{\Delta }E_q>=4/9<\mathrm{\Delta }E_g>`$), $`(iii)`$ jet quenching in a maximally viscous quark-gluon fluid, resulting in $`<\mathrm{\Delta }E_g>\approx 20`$ GeV. Initial state gluon radiation has been taken into account with the PYTHIA Monte Carlo model at c.m.s. energy $`\sqrt{s}=5.5A`$ TeV.
We then conclude that rescattering of hard partons in the medium results in a weaker $`E_T`$-dependence of the ratio $`R^{mono}/R^{dijet}`$ than that of $`<|K_T|>/E_T`$ (see fig.1). With growing energy losses the ratio we are interested in tends toward a constant, which could be interpreted as a signal of super-dense matter formation.
## 1 Introduction
Soon after quasars were discovered, it was suggested that they are powered by the accretion of gas onto supermassive black holes at the centres of galaxies (Lynden-Bell 1969). The space density of luminous quasars is, however, two orders of magnitude smaller than that of bright galaxies, and there has been a long-standing debate whether quasars occur only in a subset of galaxies or whether all galaxies harbour a quasar for a short time (Rees 1984, Cavaliere & Szalay 1986, Rees 1990). In recent years there has been mounting observational evidence that the evolution of normal galaxies and quasars is closely linked and that quasars are short-lived. The evolution of the total star formation rate density of the Universe, the space density of starbursting galaxies and that of luminous quasars appear to be remarkably similar. All three show a strong increase of more than an order of magnitude from $`z=0`$ to $`z\approx 2`$ (Boyle & Terlevich 1998, Dickinson et al. 1998, Sanders & Mirabel 1996). There is also increasing dynamical evidence that supermassive black holes reside at the centres of most galaxies with substantial spheroidal components. The masses of the black holes scale linearly with the masses of the spheroids, with a constant of proportionality in the range 0.002-0.006 (Kormendy & Richstone 1995; Magorrian et al. 1998; van der Marel 1999). The mass function of nearby black holes derived using the observed bulge luminosity–black hole mass relation is also consistent with that inferred from the 5GHz radio emission in galactic cores and from the quasar luminosity function itself (Franceschini, Vercellone & Fabian 1998; Salucci et al. 1999). These results strongly support the idea that QSO activity, the growth of supermassive black holes and the formation of spheroids are all closely linked (e.g. Richstone et al. 1998; Haehnelt, Natarajan & Rees 1998; Cattaneo, Haehnelt & Rees 1999).
The most striking observed property of quasars is their strong evolution with redshift. A number of papers have shown that the rise in the space density of bright quasars from the earliest epochs to a peak at $`z\approx 2`$ can be naturally explained in hierarchical theories of structure formation if the formation of black holes is linked to the collapse of the first dark matter haloes of galactic mass (e.g. Efstathiou & Rees 1988, Carlberg 1990, Haehnelt & Rees 1993, Cavaliere, Perri & Vittorini 1997, Haiman & Loeb 1998). None of these papers provided more than a qualitative explanation for why the present abundance of bright quasars is two orders of magnitude below that at $`z\approx 2`$ (see also Blandford & Small 1992).
In this paper, we focus on the low redshift evolution of the quasar population and its connection to the hierarchical build-up of galaxies predicted in cold dark matter (CDM) -type cosmologies. The formation and evolution of galaxies in such cosmologies have been studied extensively using semi-analytic models of galaxy formation. These models follow the formation and evolution of galaxies within a merging hierarchy of dark matter halos. Simple prescriptions are adopted to describe gas cooling, star formation, supernova feedback and merging rates of galaxies. Stellar population synthesis models are used to generate galaxy luminosity functions, counts and redshift distributions for comparison with observations. It has been shown in many recent papers that hierarchical galaxy formation models can reproduce many observed properties of galaxies both at low and at high redshifts. Some highlights include variations in galaxy clustering with luminosity, morphology and redshift (Kauffmann, Nusser & Steinmetz 1997; Kauffmann et al 1999b, Baugh et al 1999), the evolution of cluster galaxies (in particular of ellipticals) (Kauffmann 1995; Kauffmann & Charlot 1998) and the properties of the Lyman break galaxy population at $`z\approx 3`$ (Baugh et al 1998; Governato et al. 1998; Somerville, Primack & Faber 1999; Mo, Mao & White 1999).
In these models, the quiescent accretion of gas from the halo results in the formation of a disk. If two galaxies of comparable mass merge, a spheroid is formed. It has been demonstrated that a merger origin for ellipticals can explain both their detailed internal structure (see for example Barnes 1988; Hernquist 1992,1993; Hernquist, Spergel & Heyl 1993; Heyl, Hernquist & Spergel 1994) and global population properties such as the slope and scatter of the colour-magnitude relation and its evolution to high redshift (Kauffmann & Charlot 1998). Simulations including a gas component (Negroponte & White 1983; Barnes & Hernquist 1991, 1996; Mihos & Hernquist 1994) have also shown that mergers drive gas far enough inwards to fuel nuclear starbursts, and probably also central black holes. This is the standard paradigm for the origin of ultra-luminous infrared galaxies (ULIRGs), which can have star formation rates in excess of several hundred solar masses a year and are almost always associated with merging or interacting systems (Sanders & Mirabel 1996). A significant fraction of the ULIRGs also exhibit evidence of AGN activity (Genzel et al 1998).
In this paper, we assume that major mergers are responsible for the growth and fuelling of black holes in galactic nuclei. If two galaxies of comparable mass merge, the central black holes of the progenitors coalesce and a few percent of the gas in the merger remnant is accreted by the new black hole on a timescale of a few times $`10^7`$ years. Under these simple assumptions, the model is able to reproduce both the relation between bulge luminosity and black hole mass observed in nearby galaxies and the evolution of the quasar luminosity function with redshift. Our models also fit many aspects of the observed evolution of galaxies, including the present-day K-band luminosity function, the evolution of the star formation rate density as function of redshift, and the evolution of the total mass density in cold gas inferred from observations of damped Lyman-alpha systems. Finally, we study the evolution of the abundance of starbursting systems and the relationship between the luminosities of quasars and their host galaxies.
## 2 Review of the Semi-Analytic Models of Galaxy Formation
For most of this paper, the semi-analytic model we employ is that used by Kauffmann & Charlot (1998) to study the origin of the colour-magnitude relation of elliptical galaxies formed by mergers in a high-density ($`\mathrm{\Omega }=1`$, $`H_0`$ = 50 km s<sup>-1</sup> Mpc<sup>-1</sup>, $`\sigma _8=0.67`$) cold dark matter (CDM) Universe. The parameters we adopt are those of Model “A” listed in Table 1 of that paper. In order to study how changes in cosmological parameters affect our results, we have also considered a low-density model with $`\mathrm{\Omega }=0.3`$, $`\mathrm{\Lambda }=0.7`$, $`H_0=66`$ km s<sup>-1</sup> Mpc<sup>-1</sup> , $`\sigma _8=1`$. This model is discussed separately in section 7.
More details about semi-analytic techniques may be found in Kauffmann, White & Guiderdoni (1993), Kauffmann et al. (1999a), Cole et al. (1994) and Somerville & Primack (1999). Below we present a brief summary of the main ingredients of the model. Because quasar evolution in our model depends strongly on the evolution of cold gas, we pay particular attention to the parameters that control this.
1. Merging history of dark matter halos. We use an algorithm based on the extended Press-Schechter theory to generate Monte Carlo realizations of the merging paths of dark halos from high redshift until the present (see Kauffmann & White (1993) for details). This algorithm allows all the progenitors of a present-day object to be traced back to arbitrarily early times.
2. The cooling, star formation and feedback cycle. We have adopted the simple model for cooling first introduced by White & Frenk (1991). All the relevant cooling rate equations are described in that paper. Dark matter halos are modelled as truncated isothermal spheres and it is assumed that as the halo forms, the gas relaxes to a distribution that exactly parallels that of the dark matter. Gas then cools, condenses and forms a rotationally supported disk at the centre of the halo.
We adopt the empirically-motivated star formation law for disk galaxies suggested by Kennicutt (1998), which has the form $`\dot{M}_{\ast }=\alpha M_{\mathrm{cold}}/t_{\mathrm{dyn}}`$, where $`\alpha `$ is a free parameter and $`t_{\mathrm{dyn}}`$ is the dynamical time of the galaxy. If angular momentum is conserved, the cold gas becomes rotationally supported once it has collapsed by a factor of 10 on average, so the dynamical time may be written $`t_{\mathrm{dyn}}=0.1R_{\mathrm{vir}}/V_\mathrm{c}`$, where $`R_{\mathrm{vir}}`$ and $`V_\mathrm{c}`$ are the virial radius and circular velocity of the surrounding dark halo. Note that according to the simple spherical collapse model, the virial radius scales with circular velocity and with redshift as $`R_{\mathrm{vir}}\propto V_\mathrm{c}(1+z)^{-3/2}`$, so that $`t_{\mathrm{dyn}}`$ scales with redshift as $`(1+z)^{-3/2}`$ and is independent of $`V_c`$.
Once stars form from the gas, it is assumed that supernovae can reheat some of the cold gas to the virial temperature of the halo. The amount of cold gas lost to the halo in time $`\mathrm{\Delta }t`$ can be estimated, using simple energy conservation arguments, as
$$\mathrm{\Delta }M_{\mathrm{reheat}}=ϵ\frac{4}{3}\frac{\dot{M}_{\ast }\eta _{SN}E_{SN}}{V_c^2}\mathrm{\Delta }t,$$
(1)
where $`ϵ`$ is an efficiency parameter, $`E_{\mathrm{SN}}\approx 10^{51}`$ erg is the kinetic energy of the ejecta from each supernova, and $`\eta _{\mathrm{SN}}`$ is the number of supernovae expected per solar mass of stars formed. The parameters $`\alpha `$ and $`ϵ`$ together control the fraction of baryons in the form of hot gas, cold gas and stars in dark matter halos. In practice, adjusting $`ϵ`$ changes the stellar mass of the galaxies, whereas adjusting $`\alpha `$ changes their cold gas content. We choose $`ϵ`$ to obtain a good fit to the observed present-day K-band galaxy luminosity function and $`\alpha `$ to reproduce the present cold gas mass of our own Milky Way ($`4\times 10^9M_{\odot }`$). We have also run models where $`\alpha `$ is not a constant, but varies with redshift as $`\alpha (z)=\alpha (0)(1+z)^{-\gamma }`$. For positive $`\gamma `$, less gas is turned into stars per dynamical time in galaxies at high redshift than at low redshift, as may be the case if the star formation efficiency increases with time. As shown in sections 3 and 5, this scaling of $`\alpha `$ has very little effect on the properties of galaxies at $`z=0`$, but can produce a much stronger evolution of the cold gas fractions to high redshift and so a more strongly evolving quasar population (a schematic implementation of this star formation and feedback cycle is sketched after this list). In the following sections, we will show that $`\gamma =1`$-2 is required to fit both the evolution of the total mass density in cold gas inferred from damped Ly$`\alpha `$ absorption systems and the rise in the space density of bright quasars from $`z=0`$ to $`z=2`$.
3. Cooling flows in massive halos. As discussed in previous papers, the cooling rates given by the White & Frenk (1991) model lead to the formation of central cluster galaxies that are too bright and too blue to be consistent with observation if the cooling gas is assumed to form stars with a standard initial mass function. Our usual “fix” for this problem has been to assume that gas cooling in halos with circular velocity greater than some fixed value does not form visible stars. One question is whether this material should then be available to fuel a quasar. We have considered two cases: 1) Gas that cools in halos with circular velocity greater than $`600`$ km s<sup>-1</sup> does not form visible stars, nor does it accrete onto the central blackhole. 2) Gas that cools in these massive halos does not form stars, but it is available to fuel quasars.
4. The merging of galaxies and the formation of ellipticals and bulges. As time proceeds, a halo will merge with a number of others, forming a new halo of larger mass. All gas that has not already cooled is assumed to be shock heated to the virial temperature of this new halo. This hot gas then cools onto the central galaxy of the new halo, which is identified with the central galaxy of its largest progenitor. The central galaxies of the other progenitors become satellite galaxies, which are able to merge with the central galaxy on a dynamical friction timescale. If two galaxies merge and the mass ratio between the satellite and the central object is greater than 0.3, we add the stars of both objects together and create a bulge component. If $`M_{\mathrm{sat}}/M_{\mathrm{central}}<0.3`$, we add the stars and cold gas of the satellite to the disk component of the central galaxy. When a bulge is formed by a merger, the cold gas not accreted by the blackhole is transformed into stars in a “burst” with a timescale of $`10^8`$ years. Further cooling of gas in the halo may lead to the formation of a new disk. The morphological classification of galaxies is made according to their B-band disk-to-bulge ratios. If $`M(B)_{\mathrm{bulge}}-M(B)_{\mathrm{total}}<1`$ mag, then the galaxy is classified as early-type (elliptical or S0).
5. Stellar population models. We use the new metallicity-dependent stellar population synthesis models of Bruzual & Charlot (1999, in preparation), which include updated stellar evolutionary tracks and new spectral libraries. The chemical enrichment of galaxies is modelled as described in Kauffmann & Charlot (1998). As chemical evolution plays little role in our quasar models, we do not describe the recipes again in this paper.
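As a schematic illustration of the star formation and feedback recipes of item 2, the following sketch advances the cold gas and stellar mass of a single model galaxy over one timestep. The parameter values and the assumed supernova number per solar mass are placeholders; the paper fixes $`ϵ`$ and $`\alpha `$ against the K-band luminosity function and the Milky Way gas mass.

```python
# A minimal sketch of the quiescent star-formation/feedback cycle (item 2):
# Mdot_star = alpha(z)*M_cold/t_dyn with alpha(z) = alpha(0)*(1+z)^-gamma,
# and supernova reheating following equation (1).  All parameter values are
# illustrative assumptions, not the calibrated values of the paper.
KPC = 3.086e21       # cm
YR = 3.156e7         # s
M_SUN = 1.989e33     # g
ETA_SN = 5.0e-3      # supernovae per solar mass of stars formed (assumed IMF)
E_SN = 1.0e51        # erg of kinetic energy per supernova

def step(m_cold, m_star, v_c, r_vir, dt, z, alpha0=0.1, gamma=2.0, eps=0.1):
    """Advance cold-gas and stellar masses (M_sun) over dt (yr).

    v_c is the halo circular velocity in km/s, r_vir the virial radius in kpc.
    """
    alpha = alpha0 * (1.0 + z) ** (-gamma)          # efficiency lower at high z
    t_dyn = 0.1 * r_vir * KPC / (v_c * 1.0e5) / YR  # 0.1 R_vir / V_c, in yr
    dm_star = min(alpha * m_cold / t_dyn * dt, m_cold)
    # Equation (1): supernova energy per M_sun of stars, converted into a
    # reheated gas mass through the halo potential well depth ~ V_c^2.
    dm_reheat = eps * (4.0 / 3.0) * dm_star * ETA_SN * E_SN \
                / (v_c * 1.0e5) ** 2 / M_SUN
    dm_reheat = min(dm_reheat, m_cold - dm_star)
    return m_cold - dm_star - dm_reheat, m_star + dm_star

# Example: a Milky Way-like halo followed for 1 Gyr at z = 0
print(step(m_cold=5.0e9, m_star=5.0e10, v_c=220.0, r_vir=250.0,
           dt=1.0e9, z=0.0))
```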
## 3 The Global Evolution of Stars and Gas
Figure 1 shows the K-band luminosity function of galaxies at $`z=0`$ compared with recent observational results. The K-band luminosities of galaxies are a good measure of their total stellar masses, rather than their instantaneous star formation rates, and are affected very little by dust extinction. By normalizing in the K-band, we ensure that our models have produced roughly the correct total mass density of stars by the present day. As can be seen, virtually identical results are obtained for a model where the star formation efficiency $`\alpha `$ is constant and for a model where $`\alpha \propto (1+z)^{-2}`$. Figure 1 also shows that cutting off star formation in cooling flows is required in order to avoid producing too many very luminous galaxies.
Figure 2 shows the evolution of the star formation rate density in the models. The data points plotted in figure 2 have been taken from Lilly et al (1996), Connolly et al (1997), Madau et al (1996) and Steidel et al (1999), and have been corrected for the effects of dust extinction as described in Steidel et al (1999). As can be seen, the evolution of the SFR density in the models agrees reasonably well with the observations. Both show a factor $`\sim 10`$ increase from the present day to $`z\approx 1`$-2, followed by a plateau. In the models, there is no strong decrease in the SFR density until redshifts greater than 6. In the model where $`\alpha `$ evolves with redshift as $`(1+z)^{-2}`$, the percentage of stars formed in merger-induced bursts increases from less than 10% at $`z=0`$ to 50% at $`z=2.5`$. By redshift 4, two-thirds of the total star formation occurs in the burst mode. In the model with constant $`\alpha `$, the fraction of stars formed in bursts increases much less, from $`\sim 10`$% at $`z=0`$ to 25% at high redshift. As will be demonstrated in the next sections, the constant $`\alpha `$ model is unable to fit the observed increase in the quasar space densities at high redshift.
Figure 3 compares the predicted evolution of the mean mass density in the form of cold galactic gas in the models with the values derived from surveys of damped Ly$`\alpha `$ systems by Storrie-Lombardi, McMahon & Irwin (1996). As can be seen, the model with $`\alpha \propto (1+z)^{-1}`$ agrees well with the data, but the model with constant $`\alpha `$ severely underpredicts the mass density of cold gas at high redshifts. Note that the error bars on the data points in figure 3 are large and that taking into account the effects of dust extinction would tend to move the points upwards (Pei & Fall 1995). The model in which $`\alpha \propto (1+z)^{-2}`$ thus cannot be excluded. As we discuss in section 5, this model leads to the strongest evolution in the quasar space densities.
## 4 The Growth of Black Holes and the Bulge Luminosity–Black Hole Mass Relation
In our models, supermassive black holes grow by merging and accretion of gas during major mergers of galaxies. We assume that when any merger between two galaxies takes place, the two pre-existing black holes in the progenitor galaxies coalesce instantaneously. In major mergers, some fraction of the cold gas in the progenitor galaxies is also accreted onto the new black hole. As discussed in the previous section, the cold gas fractions of galaxies increase strongly with redshift. In hierarchical cosmologies, low mass bulges form at higher redshift than high mass bulges. To obtain the observed linear relation between black hole mass and bulge mass, the fraction of gas accreted by the black hole must be smaller for low mass galaxies, which seems reasonable because gas is more easily expelled from shallower potential wells (equation 1). We adopt a prescription in which the ratio of accreted mass to total available cold gas mass scales with halo circular velocity in the same way as the mass of stars formed per unit mass of cooling gas. For the parameters of our model, this may be written as
$$M_{\mathrm{acc}}=\frac{f_{\mathrm{BH}}M_{\mathrm{cold}}}{1+(280\mathrm{k}\mathrm{m}\mathrm{s}^1/\mathrm{V}_\mathrm{c})^2}.$$
(2)
$`f_{\mathrm{BH}}`$ is a free parameter, which we set by matching to the observed relation between bulge luminosity and black hole mass of Magorrian et al (1998) at a fiducial bulge luminosity $`M_V=-19`$. We obtain $`f_{\mathrm{BH}}=0.03`$ for the model with $`\alpha \propto (1+z)^{-2}`$, $`f_{\mathrm{BH}}=0.04`$ for the model with $`\alpha \propto (1+z)^{-1}`$ and $`f_{\mathrm{BH}}=0.095`$ for the constant $`\alpha `$ model, a value which is uncomfortably large.
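For illustration, the fuelling recipe of equation (2) amounts to the small function below; $`f_{\mathrm{BH}}=0.03`$ is the value quoted above for the $`\alpha \propto (1+z)^{-2}`$ model, while the example masses are arbitrary.

```python
# A minimal sketch of equation (2): the fraction of the remnant's cold gas
# swallowed by the merged black hole, suppressed in halos with low circular
# velocity.  f_bh = 0.03 corresponds to the alpha ~ (1+z)^-2 model.
def accreted_mass(m_cold, v_c, f_bh=0.03):
    """Gas mass (M_sun) accreted in a major merger; v_c in km/s."""
    return f_bh * m_cold / (1.0 + (280.0 / v_c) ** 2)

# The suppression factor is ~9 at v_c = 100 km/s but close to unity in
# massive halos, which turns the too-shallow relation of panel 1 of
# figure 4 into one with the observed linear slope.
print(accreted_mass(1.0e10, 100.0), accreted_mass(1.0e10, 500.0))
```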
There are no doubt physical processes other than major mergers that contribute to the growth of supermassive black holes. For example, we have neglected the accretion of gas during minor mergers when a small satellite galaxy falls into a much larger galaxy (Hernquist & Mihos 1995). We have also neglected the accretion of gas from the surrounding hot halo (Fabian & Rees 1995, Nulsen & Fabian 1999). It has been suggested that this may occur in the form of advection dominated accretion flows (Narayan & Yi 1995) and may produce the hard X-ray background (Di Matteo and Fabian 1997). If such processes contribute significantly to the growth of supermassive black holes, we would need to lower the fraction of the cold gas accreted by the black hole during major mergers. The general trends predicted by our model would not be affected.
We have produced absolute magnitude-limited catalogues of bulges from our models and show scatterplots of black hole mass versus bulge luminosity in figure 4. The thick solid line shows the relation derived by Magorrian et al and the dashed lines show the 1$`\sigma `$ scatter of their observational data around this relation. The first panel illustrates what happens if we add a fixed fraction of the gas to the black hole at each merging event – the relation is considerably too shallow. The second and third panels show the relation obtained if the prescription in equation 2 is adopted. As can be seen, the slope is now correct. The model with $`\alpha \propto (1+z)^{-2}`$ exhibits considerably more scatter than the constant $`\alpha `$ model. This scatter arises from the fact that bulges of given luminosity form over a wide range in redshift. Because galaxies are more gas rich at higher redshift, bulges that form early will contain bigger black holes than bulges that form late. The gas fractions of galaxies rise more steeply in the model with $`\alpha \propto (1+z)^{-2}`$ than in the model with $`\alpha `$ constant, giving rise to the increased scatter in panel 3.
One observationally-testable prediction of our model is that elliptical galaxies that formed recently should harbour black holes with smaller masses than the spheroid population as a whole. This is illustrated in the fourth panel of figure 4, where we show the relation between bulge luminosity and black hole mass for “isolated” ellipticals in the $`\alpha \propto (1+z)^{-2}`$ model. These are elliptical galaxies that reside at the centres of dark matter halos of intermediate mass; the fact that they have not yet accreted a new disk component means that they were formed by a major merger at most a few Gyr ago. To select such objects in the real Universe, one would look for ellipticals outside clusters that have no neighbours of comparable or larger luminosity within a radius of $`\sim 1.5`$ Mpc. Conversely, we predict that rich cluster ellipticals and bulges with large disks should have relatively massive black holes for their luminosity since these objects formed early.
## 5 Evolution of the Quasar Luminosity Function
Quasars have long been known to evolve very strongly. Their comoving space density increases by nearly two orders of magnitude from $`z\approx 0.1`$ to an apparent peak at $`z\approx 2.5`$. The evolution at redshifts exceeding $`2.5`$ is controversial: there is evidence that the number density of optically bright quasars declines from $`z\approx 2.5`$ to $`z\approx 5`$ (Shaver et al. 1996, see Madau 1999 for a review), but it remains to be seen whether the same is true for fainter objects or for active galactic nuclei detected at X-ray wavelengths (Miyaji, Hasinger & Schmidt 1998).
In a scenario where black holes grow by merging and by accretion of gas, the number density of the most massive black holes increases monotonically with time. This is illustrated in figure 5, where we plot the black hole mass function in our model at a series of redshifts. From now on, unless stated otherwise, we only show results for our “fiducial” model, in which the star formation efficiency $`\alpha `$ scales as $`(1+z)^{-2}`$ and gas that cools in halos with circular velocities greater than 600 km s<sup>-1</sup> is not accreted by black holes.
Quasars are activated when two galaxies (and their central black holes) merge and fresh gas is accreted onto the new black hole in the remnant. Merging rates increase with redshift in hierarchical cosmologies and it has been suggested that this alone might explain the observed evolution of quasars (Carlberg 1990). Figure 6 demonstrates that mergers alone will not do the job. We plot the evolution of the number density of the major mergers that produce black holes of various masses. The number density of merging events producing black holes of $`10^{10}M_{\odot }`$ decreases at high redshift, simply because such massive objects form very late. The number density of mergers producing smaller black holes does increase to high redshift, but the effect is too small to explain the observed increase in quasar space densities from $`z=0`$ to $`z=2`$.
Another hypothesis is that black holes run out of fuel at late times. That such an effect exists in our models is shown in figure 7, where we plot the redshift dependence of the amount of gas accreted by merged black holes of given mass. The amount of accreted gas increases by a factor of $`\sim 3`$ from $`z=0`$ to $`z=1`$. Note that in our models quasars do not run out of fuel because the gas supply is almost exhausted at present, but because cool gas is converted into stars more efficiently at low redshift.
In order to make direct comparisons with observational data, we must specify how a black hole accretion event turns into a quasar light curve. We assume that a fixed fraction $`ϵ_B`$ of the rest mass energy of the accreted material is radiated in the B-band. This results in the following transformation between the accreted gas mass $`M_{\mathrm{acc}}`$ and the absolute B-band magnitude of the quasar at the peak of its light curve,
$$M_B(\mathrm{peak})=-2.5\mathrm{log}(ϵ_BM_{\mathrm{acc}}/t_{\mathrm{acc}})-27.45.$$
(3)
The timescale $`t_{\mathrm{acc}}`$ over which gas is accreted onto the black hole during a merger is a second parameter of the model, and we have to make some assumption as to its scaling with mass and redshift. We explore two possibilities: 1) $`t_{\mathrm{acc}}\propto (1+z)^{-1.5}`$. In this case, $`t_{\mathrm{acc}}`$ scales with redshift in the same way as $`t_{\mathrm{dyn}}`$, as expected if the radius of the accretion disk were to scale with the radius of the galaxy. 2) $`t_{\mathrm{acc}}`$ = constant. In both cases we assume $`t_{\mathrm{acc}}`$ to be independent of mass.
Note that if we change the radiative efficiency parameter $`ϵ_B`$, the quasar luminosities simply scale by a constant factor. If we change the accretion timescale $`t_{\mathrm{acc}}`$, we affect both the number densities and the luminosities of the quasars. It is usually assumed that the luminosities of quasars should not exceed the Eddington limit and we therefore introduce an upper limit to the B-band luminosity of quasars that scales linearly with the mass of the black hole, $`L_B(\mathrm{max})=0.1L_{\mathrm{Edd}}\approx 0.14(M_{\mathrm{bh}}/10^8M_{\odot })\times 10^{46}`$ erg s<sup>-1</sup>. Some of our quasars accrete at rates higher than that necessary to sustain the Eddington luminosity, especially at high redshift and in models with short $`t_{\mathrm{acc}}`$. These quasars nevertheless obey the Eddington limit because a “trapping surface” develops within which the radiation is advected inwards rather than escaping. As a result, the emission efficiency declines inversely with the accretion rate for such objects (Begelman 1978).
Finally, we assume that the luminosity of a quasar a time $`t`$ after the merging event declines as
$$L_B(t)=L_B(\mathrm{peak})\mathrm{exp}(-t/t_{\mathrm{acc}}).$$
(4)
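Equations (2)-(4), together with the Eddington cap, can be strung together into a toy light-curve routine as below. The efficiency value, the ($`M_{\odot }`$, yr) units read into equation (3), and the magnitude zero point used for the Eddington cap are illustrative assumptions of this sketch.

```python
# A toy sketch chaining equations (2)-(4): gas accreted in a major merger,
# the peak B-band magnitude, the Eddington cap L_B(max) = 0.1 L_Edd, and the
# exponential fading of the light curve.  eps_b, the (M_sun, yr) units in
# equation (3), and the zero point of 88.7 for luminosities in erg/s are
# assumptions of this illustration.
import numpy as np

def peak_magnitude(m_acc, t_acc, eps_b=0.005):
    """Equation (3); m_acc in M_sun, t_acc in yr (assumed units)."""
    return -2.5 * np.log10(eps_b * m_acc / t_acc) - 27.45

def eddington_cap_mag(m_bh):
    """Magnitude of L_B(max) = 0.1 L_Edd ~ 0.14 (m_bh/1e8 Msun) 1e46 erg/s."""
    l_max = 0.14 * (m_bh / 1.0e8) * 1.0e46
    return -2.5 * np.log10(l_max) + 88.7   # assumed B-band zero point

def light_curve(t, m_acc, t_acc, m_bh):
    """Equation (4) expressed in magnitudes; fainter = larger M_B."""
    m_peak = max(peak_magnitude(m_acc, t_acc), eddington_cap_mag(m_bh))
    return m_peak + 2.5 * np.log10(np.e) * t / t_acc

t = np.linspace(0.0, 5.0e7, 6)                     # yr after the merger
print(light_curve(t, m_acc=5.0e8, t_acc=1.0e7, m_bh=1.0e9))
```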
We have chosen to normalize our model luminosity function to the abundance of bright ($`M_B<-24`$) quasars at redshift 2. We have chosen three values of $`t_{\mathrm{acc}}`$ for illustrative purposes: $`10^7`$, $`3\times 10^7`$, and $`10^8`$ years. The typical values derived for $`ϵ_B`$ are in the range 0.002-0.008. This leaves some room for emission in accretion modes other than that traced by optically bright quasars. Two possibilities we do not treat here are advection dominated accretion flows and dust-obscured accretion (see Haehnelt, Natarajan & Rees 1998 for a recent discussion).
The resulting luminosity functions in four different redshift intervals are shown for our fiducial model in figure 8. For this plot we assumed that the accretion time scales in the same way as the host galaxy dynamical time, $`t_{\mathrm{acc}}\propto (1+z)^{-1.5}`$. The data points plotted as filled circles are taken from the compilation by Hartwick & Schade (1990). The triangles are from the Hamburg/ESO bright quasar survey by Koehler et al (1997). These authors find that luminous quasars are much more numerous in the local Universe than previous smaller surveys indicated. We find that an accretion timescale of about $`10^7\mathrm{yr}`$ yields results that are in reasonable agreement with the data. Figure 9 shows the evolution of the space density of quasars with luminosity $`M_B<-26`$ and $`M_B<-24`$ as a function of redshift. The symbols on the plot represent data points compiled from a number of sources (see figure caption for details). Note that at B-band magnitudes brighter than $`-26`$, the quasar number densities decrease sharply. As a result, small changes in model parameters, such as the treatment of gas in cooling flows, can make quite a large difference to our results (see figure 11 below). We therefore regard the comparison at $`M_B<-24`$ as a more robust test of the model. Our “best fit” $`\mathrm{\Omega }=1`$ CDM model with $`\alpha \propto (1+z)^{-2}`$ reproduces the observed decline in quasar space density from $`z=2`$ to the present reasonably well, but the corresponding evolution in the mean mass density of cold gas is stronger than inferred from the damped Lyman-alpha systems (figure 3). As we will show in section 7, the $`\mathrm{\Lambda }`$CDM model provides a better overall fit to the observations.
In figure 10, we plot the mean ratio of quasar luminosity to Eddington luminosity at a series of redshifts for our best-fit model with $`t_{\mathrm{acc}}(z)=1.0\times 10^7(1+z)^{-1.5}`$ yr. The error bars show the 25th and 75th percentiles of the distribution. For faint quasars at low redshift, the values of $`L/L_{\mathrm{Edd}}`$ range from 0.01 to 0.1. These values increase for bright quasars and at high redshifts. By $`z=3`$, $`L/L_{\mathrm{Edd}}`$ has increased to values between 0.3 and 1. These results agree reasonably well with observed values of $`L/L_{\mathrm{Edd}}`$ inferred from the kinematics of the broad-line region, X-ray variability and spectral fitting of accretion disc models (see Wandel 1998 for a review; Laor 1998; Haiman & Menou 1998; Salucci et al 1999).
We now study the sensitivity of our results to our assumptions about cooling, star formation and accretion timescales. The leftmost panel in figure 11 shows the evolution of the B-band luminosity function for a model with $`\alpha \propto (1+z)^{-1}`$ and $`t_{\mathrm{acc}}\propto (1+z)^{-1.5}`$. In the other panels we show the effect of varying a single parameter in the model:
1. Redshift dependence of the accretion time. If $`t_{\mathrm{acc}}`$ is held constant, the space density of the brightest quasars evolves very little from $`z=0`$ to $`z=2`$.
2. Redshift dependence of $`\alpha `$. If $`\alpha `$ is a constant, the space density of quasars of all luminosities increases much less from $`z=0`$ to $`z=2`$. As we have shown, a very strong scaling of $`\alpha `$ with redshift ($`\alpha \propto (1+z)^{-2}`$) is required in order to come reasonably close to fitting the observed increase for an $`\mathrm{\Omega }=1`$ CDM model.
3. Cooling flow gas as fuel for black holes? If gas in cooling flows can fuel black holes, the number of very luminous quasars at low redshifts increases substantially. As a result, the evolution at the bright end of the luminosity function is weaker than before. This model can fit the Koehler et al (1997) luminosity function reasonably well at $`z=0`$. The model also produces a luminosity function shape that is closer to a power-law at all redshifts.
## 6 Further Model Predictions
### 6.1 The Host Galaxies of Quasars
Recent near-infrared and Hubble Space Telescope (HST) imaging studies of quasar host-galaxies at low redshifts show that luminous quasars reside mainly in luminous early-type hosts (McLeod & Rieke 1995; Hutchings 1995; Taylor et al 1996; McLeod 1997; Bahcall et al 1997; Boyce et al 1998; McLure et al 1998). There also appears to be an upper bound to the quasar luminosity as a function of host galaxy stellar mass (McLeod & Rieke 1995).
In our models, quasars are only activated by major mergers which also result in the formation of a spheroidal remnant galaxy. By definition, all quasar hosts are thus either ellipticals or spirals in the process of merging. This is no doubt an oversimplified picture. It is certainly possible that minor mergers or galaxy encounters trigger gas accretion onto the central black hole. It is also likely that this is more important for low luminosity quasars.
In figure 12, we show scatterplots of host galaxy luminosity versus quasar luminosity at a series of different redshifts. For reference, the horizontal line in each plot shows the present-day value of $`L_{\ast }`$ for galaxies. At low redshift, quasars with magnitudes brighter than $`M_B=-23`$ reside mostly in galaxies more luminous than $`L_{\ast }`$. The luminosity of the host correlates with the luminosity of the quasar, but there is substantial scatter (typically a factor of $`\sim 10`$ in host galaxy luminosity at fixed quasar B-band magnitude). At low quasar luminosities, the scatter is large because the sample includes both low mass galaxies at peak quasar luminosity and high mass galaxies seen some time after the accretion event. At high quasar luminosities, the scatter decreases and the sample consists only of massive galaxies “caught in the act”. Our results at low redshift agree remarkably well with a recent compilation of ground-based and HST observations of quasar hosts by McLeod, Rieke & Storrie-Lombardi (1999). In figure 12, we have drawn a triangle around the region spanned by their observational data points. At high redshifts, our models predict that the quasars should be found in progressively less luminous host galaxies. This is not surprising because in hierarchical models, the massive galaxies that host luminous quasars at the present epoch are predicted to have assembled recently (Kauffmann & Charlot 1998). We caution, however, that the luminosities of quasars hosted by galaxies at different epochs depend strongly on the redshift scaling of $`t_{\mathrm{acc}}`$. As discussed previously, smaller accretion timescales mean that luminous quasars can be located in smaller host galaxies. In section 7, we explore the extent to which the predicted masses of the host galaxies depend on the choice of cosmology.
### 6.2 Evolution of Starbursting Galaxies
An important result from the IRAS satellite was the discovery of galaxies with luminosities in the far-infrared (8-1000 $`\mu `$m) that exceed their optical or UV luminosities by factors of up to 80. The brightest ($`>10^{12}L_{\odot }`$) of these objects are often referred to as ultraluminous infrared galaxies (ULIRGs) and their space densities are comparable to those of quasars of similar power. The nature of the energy source powering the ULIRGs has been a subject of intense debate. One hypothesis is that they are powered by dust-embedded AGNs. Alternatively, their far-infrared luminosity may be provided by an intense burst of star formation, with implied star formation rates of 100-1000 $`M_{\odot }`$ yr<sup>-1</sup>. Recently, Genzel et al. (1998) have used ISO spectroscopy to argue that the majority (70-80 %) of these objects are dominated by star formation, with about $`25\%`$ powered by AGNs. The Hubble Space Telescope has been used to study the morphologies of a sample of 150 ULIRGs (Borne et al 1999) and in almost all cases, the objects are made up of multiple subcomponents. This is taken as evidence that most ULIRGs are interacting or merging systems. ULIRGs have now been detected at redshifts up to $`z\approx 1`$ (Van der Werf et al 1999, Clements et al 1999), but the total number of objects at high-z is currently too small to draw firm conclusions about evolution rates. This situation will no doubt change with the next generation of IR telescopes, SOFIA, SIRTF, IRIS and FIRST.
In figure 13, we show how the comoving number density of gas-rich mergers evolves with redshift in the model with $`\alpha \propto (1+z)^{-2}`$. We plot the number density of mergers between galaxies containing more than $`10^9`$, $`10^{10}`$ and $`3\times 10^{10}`$ $`M_{\odot }`$ of cold gas, averaged over an interval of 1 Gyr. In order to transform the values shown on the plot to a space density of starbursting galaxies, we would need to make some assumptions about the typical timescale and efficiency with which the gas is converted into stars, and whether or not this is likely to scale with redshift. Since there is not yet any available data to which we can fit our predictions, we prefer to leave our plot in its “raw” form. For example at $`z=0`$, if one assumes a typical star formation timescale of $`10^8`$ years and a gas-to-star conversion efficiency of 100%, then one obtains a space density of objects with star formation rates in excess of $`100M_{\odot }`$ yr<sup>-1</sup> of $`3\times 10^{-6}`$ Mpc<sup>-3</sup>, in reasonable agreement with what is observed. Our models predict that the space density of gas-rich mergers evolves strongly with redshift; the density of mergers involving more than $`10^{10}`$ $`M_{\odot }`$ of gas increases by more than a factor of 50 from redshift 0, reaches a very broad peak at $`z\approx 2`$-4, and then declines again. Note that this peak occurs at lower redshift for the most massive systems, simply because they form later. Finally, our models predict that the ratio of young stars formed in the burst to that of “old” stars formed in the progenitors should increase strongly with redshift. This is illustrated in the bottom panel of figure 13, where we show that the ratio of the total mass of gas $`M_{\mathrm{gas}}`$ to the total mass of stars $`M_{\mathrm{stars}}`$ in the two galaxies before they merge evolves by a factor of 5-10, from 0.1 at $`z=0`$ to 0.7 at $`z=3`$. In a $`\mathrm{\Lambda }`$CDM model (section 7), there is a milder evolution of the gas-to-star ratio with redshift.
### 6.3 Structural Properties of the Merger Remnants
Low-luminosity ellipticals and spiral bulges possess steeply rising central stellar density profiles that approximate power laws, whereas high-luminosity ellipticals have central profiles with much shallower slopes (termed cores). The power-law cusps in small spheroids, their disky isophotes and their rapid rotation, led Faber et al (1997) to suggest that they formed in gas-rich dissipative mergers. Large spheroids, with their low rotation and boxy isophotes, plausibly formed in near dissipationless mergers. In this case, the orbital decay of the massive black hole binary can scour out a core in the stellar mass distribution (Quinlan 1996; Quinlan & Hernquist 1997). One may ask whether such a scenario is viable in the hierarchical merger models presented in this paper. Figure 14 illustrates that the mergers that form low-luminosity ellipticals are indeed substantially more gas rich than the mergers that form high-luminosity ellipticals. This is because small ellipticals typically form at high redshift from mergers of low mass spirals with high gas fractions. Conversely, massive ellipticals form at low redshift from massive spirals that contain much less gas. More detailed dynamical modelling is required in order to demonstrate that this correlation is sufficient to explain the observed dichotomy of core profiles, isophote shapes and rotation speeds.
## 7 The Influence of Cosmological Parameters
All results shown so far have been for a CDM cosmology with $`\mathrm{\Omega }=1`$, $`H_0=`$ 50 km s<sup>-1</sup> Mpc<sup>-1</sup> and $`\sigma _8=0.67`$. Although we could explain the evolution of the quasar luminosity function in a qualitative sense, the detailed fits to the observational data were not quite satisfactory. In particular, our “fiducial” model in which the star formation efficiency parameter $`\alpha `$ scaled with redshift as $`\alpha \propto (1+z)^2`$ resulted in too much cold gas at high redshift.
In figures 15-17 we show fits to the data obtained for the popular $`\mathrm{\Lambda }`$CDM cosmology ($`\mathrm{\Omega }=0.3`$, $`\mathrm{\Lambda }=0.7`$, $`H_0=`$ 70 km s<sup>-1</sup> Mpc<sup>-1</sup> and $`\sigma _8=1`$). We note that quasar luminosity functions in the literature are almost always computed assuming a cosmology with $`q_0=0.5`$ and $`H_0=`$ 50 km s<sup>-1</sup> Mpc<sup>-1</sup>. In order to avoid having to re-analyze the real observational data, we choose to transform our model predictions to the values that would be obtained if the observations were analyzed assuming an Einstein-de Sitter universe with Hubble Constant $`h=0.5`$. Because structure forms earlier in the $`\mathrm{\Lambda }`$CDM cosmology, galaxy-galaxy merging rates are substantially lower at $`z=0`$ than in the high-density CDM cosmology. As a result, we do not require a very strong evolution in the cold gas content of galaxies to reproduce the strong decline in quasar space densities from $`z2`$ to the present day. Our best-fit model has $`\alpha \propto (1+z)`$ and $`t_{\mathrm{acc}}(z=0)=2.5\times 10^7`$ yr. In addition we assume that $`t_{\mathrm{acc}}`$ scales in the same way as the host galaxy dynamical time in a $`\mathrm{\Lambda }`$CDM cosmology: $`t_{\mathrm{acc}}\propto (0.7+0.3(1+z)^3)^{-1/2}`$ (Mo, Mao & White 1998). As seen in figures 15-17, the $`\mathrm{\Lambda }`$CDM model can simultaneously fit the evolution of cold gas inferred from the damped systems and the decline in quasar space density from $`z=2`$ to the present day.
We also find that the masses of quasar host galaxies do not decline as rapidly with redshift in this low-density model. As shown in figure 18, the host galaxies of bright quasars in the SCDM model are slightly more massive than those in the $`\mathrm{\Lambda }`$CDM at $`z=0.5`$. By $`z=2`$, however, the SCDM hosts are 60% less massive on average.
Most of the other observed properties of the galaxies and quasars in the best-fitting $`\mathrm{\Lambda }`$CDM model, including the K-band luminosity function, the evolution of the star formation rate density and the bulge luminosity/ black hole mass relation, are similar to those of the best-fitting SCDM model. For brevity, we will not show these again.
## 8 Summary & Discussion
The aim of this paper has been to demonstrate that the redshift evolution of galaxies and quasars can be explained in a unified way within hierarchical models of structure formation. This appears possible provided: a) black holes form and grow mainly during the major mergers that are responsible for the formation of ellipticals; and b) the gas consumption efficiency in galaxies scales with redshift so that galaxies have higher cold gas fractions at earlier times.
We have set the free parameters controlling star formation and feedback in our model to match the stellar mass function of present-day galaxies, the observed redshift evolution of the star formation rate density, and the evolution of the total cold gas content of the Universe as inferred from observations of damped Lyman alpha systems. We have assumed that during major mergers, the black holes in the progenitor galaxies coalesce and a few percent of the available cold gas is accreted by the new black hole. The fraction of accreted gas was chosen to match the observed relation between bulge mass and black hole mass at the present day. We obtain a linear relation if the fuelling of black holes is less efficient in low mass galaxies. The scatter in the model relation arises because bulges form over a wide range in redshift.
The greatest success of our model is its ability to explain the strong decrease in the space density of bright quasars from $`z=2`$ to $`z=0`$. We assume that when a black hole accretes gas, about 1 percent of the rest mass energy of this material is radiated in the B-band. The strong decrease in quasar activity results from a combination of three factors: i) a decrease in the merging rates of intermediate mass galaxies at late times, ii) a decrease in the gas available to fuel the most massive black holes and iii) the assumption that black holes accrete gas more slowly at late times. The evolution of merging rates is the most secure feature of the model, since it is a simple consequence of the growth of structure in the dark matter component. The Press-Schechter-based algorithms that we employ have been tested against N-body simulations and have been found to work reasonably well (Kauffmann & White 1993; Lacey & Cole 1994). The evolution of the gas supply, on the other hand, is strongly dependent on the chosen parametrization of star formation and feedback in the models. It is encouraging that the $`\mathrm{\Lambda }`$CDM model in particular can match both the observed evolution of quasars and the increase in cold gas with redshift inferred from observations of damped Lyman alpha absorbers. Future HI observations of galaxies at high redshift will yield more direct information on how the gas-to-stellar mass ratios of galaxies evolve with lookback time. The increase in the gas fractions of galaxies at high redshift also leads to a strong evolution in the space density of merger-induced starbursts. Our results favour rather short accretion times onto the central black hole ($`\sim 10^7`$ years).
Our model also reproduces the luminosities of the host galaxies of low-redshift quasars. We predict that quasar hosts are on average a factor of 10 less massive at $`z=2`$ than at $`z=0`$ if the accretion timescale evolves as $`(1+z)^{-3/2}`$, as in our fiducial $`\mathrm{\Omega }=1`$ model. If the accretion timescale evolves less strongly, the hosts will be brighter. Haehnelt, Natarajan & Rees (1998) have suggested that future measurements of the clustering strengths of high redshift quasars should determine whether they reside in low mass (and thus weakly clustered) galaxies or in high mass systems which should cluster more strongly. The low-density $`\mathrm{\Lambda }`$CDM model predicts a smaller drop in quasar host mass at high redshift: the hosts are on average a factor of $`5`$ less massive at $`z=2`$ than at $`z=0`$.
Finally, we have presented some results on the nature of the merging process by which ellipticals form. One basic feature of the hierarchical galaxy formation scenario is that the most massive spheroids form late. The strong decrease in the cold gas content of galaxies inferred from damped Ly$`\alpha `$ systems and required to explain the observed evolution of quasars means that the most massive ellipticals must form in nearly dissipationless mergers. It is interesting that dissipationless mergers may also be required to explain the core structure and the boxy isophotes observed in massive ellipticals.
Our assumptions regarding the growth of supermassive black holes and the resulting optical quasar emission are deliberately simple. In reality these processes will probably depend in a more complicated way on the properties of the merging galaxies. Furthermore there should be some accretion due to minor mergers and non-merging encounters, as well as accretion from the hot gas halos of elliptical galaxies and clusters. Nevertheless, our results demonstrate that the evolution of galaxies, the growth of supermassive black holes and the evolution of quasars and starbursts can all be explained in a consistent way in a hierarchical cosmogony.
Acknowledgments
We thank Christian Kaiser, Philip Best, Simon White and Huub Rottgering for helpful discussions and comments on the manuscript.
# Acknowledgments

I would like to thank E. Akhmedov for his useful comment on T violation. I also thank J. Maalampi for useful discussions and his hospitality at the University of Helsinki. This research is supported by a Grant-in-Aid for Science Research, Ministry of Education, Science and Culture, Japan (No. 10640274).
# How to evaluate ground-state landscapes of disordered systems thermodynamically correctly
## I Introduction
Recently, a new algorithm for studying the ground-state landscape of finite-dimensional spin glasses was introduced. It could be shown that this method is indeed able to calculate true ground states. The $`\pm J`$ spin glass (see below) exhibits a ground-state degeneracy, i.e. many different ground states exist for each realization. Results describing the distribution of the ground states depend on the statistical weights of the states, which are determined by the algorithm which is used. Usually, different ground states exhibit different weights, which is thermodynamically incorrect. Here, a new technique is applied which avoids this problem.
In this work, three-dimensional Edwards-Anderson (EA) $`\pm J`$ spin glasses are investigated. They consist of $`N`$ spins $`\sigma _i=\pm 1`$, described by the Hamiltonian
$$H=-\underset{\langle i,j\rangle }{\sum }J_{ij}\sigma _i\sigma _j.$$
(1)
The sum runs over all pairs of nearest neighbors. The spins are placed on a three-dimensional (d=3) cubic lattice of linear size $`L`$ with periodic boundary conditions in all directions. Systems with quenched disorder of the interactions (bonds) are considered. Their possible values are $`J_{ij}=\pm 1`$ with equal probability. To reduce the fluctuations, a constraint is imposed, so that $`_{i,j}J_{ij}=0`$.
One of the most important questions is whether many pure states exist for realistic spin glasses. For the infinitely ranged Sherrington-Kirkpatrick (SK) Ising spin glass this question was answered positively by the continuous replica-symmetry-breaking mean-field (MF) scheme of Parisi. But a completely different model has also been proposed: the Droplet Scaling (DS) theory suggests that only two pure states (related by a global flip) exist and that the most relevant excitations are obtained by reversing large domains of spins (the droplets). From the ground state point of view the existence of many pure states means that two ground states may differ by an arbitrary number of spins. Otherwise two ground states would only differ by the spin orientations in some finite domains, which is always possible in the $`\pm J`$ model because of the discrete structure of the interaction distribution. A detailed discussion can be found in the literature, where the metastate approach is used to thoroughly analyze MF, DS and other intermediate scenarios.
While earlier Monte-Carlo (MC) simulations suffer from small system sizes or equilibration problems, recent results of simulations at temperatures just below $`T_c`$ seem to find evidence for the MF picture. By applying a Migdal-Kadanoff approximation, MF behavior was found for small systems at temperatures slightly below $`T_c`$, where the correlation length exceeds the system size. But by going to lower temperatures or larger systems the DS picture turned out to be more appropriate. Consequently, the analysis of true ground states should clarify the issue. Ground states have been calculated using multicanonical MC sampling, but no discrimination between MF and DS could be made because of too small system sizes. Using cluster-exact approximation true ground states were studied and MF behavior was found. But, as mentioned at the beginning, these results suffer from the fact that not all ground states are generated with the same probability. This would indeed be the correct sampling method, since all ground-state configurations have exactly the same energy.
In this work, ground states of sizes up to $`L=14`$ are calculated, and a technique is applied which guarantees that all ground states enter the result with the same weight, i.e. the correct $`T=0`$ thermodynamical behavior is obtained. It will be shown that the main result changes dramatically: with increasing system size the ground-state behavior is not explained by the MF scenario.
The method presented here is not only useful when the ground state calculation is performed using cluster-exact approximation. Other methods like simulated annealing or multicanonical simulation also do not guarantee a priori that each ground state is calculated with the same probability, because a finite number of steps is always used. Thus, the technique presented here has a wide range of applications.
The paper is organized as follows: next a short description of the algorithms is presented. Then the definitions of the observables evaluated here are shown. In the main section the results are presented and finally a summary is given.
## II Algorithms
The calculation of ground states for three-dimensional spin glasses belongs to the class of NP-hard problems, i.e. only algorithms with exponentially increasing running time are available. Thus, only small systems can be treated. The basic method used here is the cluster-exact approximation (CEA) technique, which is a discrete optimization method designed especially for spin glasses. In combination with a genetic algorithm, this method is able to calculate true ground states up to $`L=14`$. Using this technique one does not encounter ergodicity problems or critical slowing down, as happens when using algorithms which are based on Monte-Carlo methods.
But, as mentioned before, by applying pure genetic CEA, one does not obtain the true thermodynamic distribution of the ground states, i.e. not all ground states contribute to physical quantities with the same weight. For small system sizes up to $`L=4`$ it is possible to avoid the problem by generating all $`T=0`$ states, i.e. averages can be performed simply by considering each ground state once. Since the ground state degeneracy increases exponentially with the number $`N`$ of spins, this is not possible for larger system sizes. Instead one has to choose a subset of all configurations. The following procedure is applied to ensure that all ground states appear with the same probability in this selection:
By performing the ballistic-search (BS) algorithm the ground states are grouped into clusters. All states which are accessible via flipping of free spins, i.e. without changing the energy, are considered to be in the same cluster. It has been shown that the number of clusters defined in this way diverges exponentially for the three-dimensional $`\pm J`$ spin glass. The sizes of these clusters can be estimated quite accurately using a variant of the BS method even if only a few ground states per cluster are available. Then a certain number of ground states is selected from each cluster. This number is proportional to the size of the cluster. It means that each cluster contributes with its proper weight. The selection is done in a manner that many small clusters may contribute as a collection as well; e.g. assume that 100 states are used to represent a cluster consisting of $`10^{10}`$ ground states, then for a set of 500 clusters of size $`10^7`$ each a total number of 50 states is selected. This is achieved by sorting the clusters in ascending order. The generation of states starts with the smallest cluster. For each cluster the number of states generated is proportional to its size multiplied by a factor $`f`$. If the number of states grows too large, only a certain fraction $`f_2`$ of the states which have already been selected is kept, the factor is recalculated ($`f\to ff_2`$) and the process continues with the next cluster.
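A minimal sketch of this bookkeeping, assuming hypothetical cluster-size estimates (the states themselves would be drawn by the $`T=0`$ Monte-Carlo procedure described next; only the per-cluster counts are computed here):

```python
import random

# Size-proportional selection of ground states from clusters, with the
# capping/rescaling step described above.  Cluster sizes are hypothetical
# estimates (e.g. from ballistic search); n_cap bounds the kept states.
def select_counts(cluster_sizes, f, n_cap, rng=random.Random(0)):
    counts = {}
    total = 0
    for c in sorted(range(len(cluster_sizes)), key=lambda i: cluster_sizes[i]):
        x = cluster_sizes[c] * f
        n = int(x) + (rng.random() < x - int(x))   # probabilistic rounding
        counts[c] = n
        total += n
        while total > n_cap:                       # too many: thin what we have
            f2 = 0.5
            for k in counts:                       # keep a fraction f2 of each
                kept = sum(rng.random() < f2 for _ in range(counts[k]))
                total += kept - counts[k]
                counts[k] = kept
            f *= f2                                # future clusters scaled too

    return counts

# e.g. 500 clusters of 1e7 states plus one of 1e10; f chosen so the
# largest cluster is represented by ~100 states:
sizes = [1e7] * 500 + [1e10]
print(sum(select_counts(sizes, f=100 / 1e10, n_cap=1000).values()))
```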
The states representing the clusters are generated by $`T=0`$ Monte-Carlo simulation, i.e. iteratively spins are selected randomly and flipped if they are free. The ground states which have been obtained before are used as initial configurations for the MC simulation. MC is able to reproduce the correct thermodynamic distribution, if the simulation time is long enough. Then, all ground-states within a cluster are visited with the same frequency. Later it will be shown that for the largest size $`L=14`$ and the largest clusters 100 MC steps per spin are sufficient.
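A sketch of such a $`T=0`$ Monte-Carlo sweep for the Hamiltonian (1), assuming a small toy lattice; only “free” spins (vanishing local field) are flipped, so the energy never changes:

```python
import numpy as np

# T=0 Monte Carlo inside a ground-state cluster: pick spins at random and
# flip them only if they are free, i.e. if the flip leaves the energy
# unchanged.  Lattice size and sweep count are illustrative choices.
L = 4
rng = np.random.default_rng(0)
J = {ax: rng.choice([-1, 1], size=(L, L, L)) for ax in range(3)}  # bonds +x,+y,+z
sigma = rng.choice([-1, 1], size=(L, L, L))   # in practice: a known ground state

def local_field(s, x, y, z):
    """Sum of J_ij * sigma_j over the 6 nearest neighbors of site (x,y,z)."""
    h = 0
    for ax, (dx, dy, dz) in enumerate([(1, 0, 0), (0, 1, 0), (0, 0, 1)]):
        h += J[ax][x, y, z] * s[(x + dx) % L, (y + dy) % L, (z + dz) % L]
        h += J[ax][(x - dx) % L, (y - dy) % L, (z - dz) % L] * \
             s[(x - dx) % L, (y - dy) % L, (z - dz) % L]
    return h

n_sweeps = 100                 # cf. the 100 MC steps per spin quoted below
for _ in range(n_sweeps * L**3):
    x, y, z = rng.integers(0, L, size=3)
    if local_field(sigma, x, y, z) == 0:   # free spin: flip costs no energy
        sigma[x, y, z] *= -1
```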
Since each cluster appears with a weight proportional to its size and each ground state within a cluster appears with the same probability, on total each ground state has the same likelihood of being generated. Thus, the correct thermodynamic distribution is obtained.
## III Observables
For a fixed realization $`J=\{J_{ij}\}`$ of the exchange interactions and two replicas $`\{\sigma _i^\alpha \},\{\sigma _i^\beta \}`$, the overlap is defined as
$$q^{\alpha \beta }\equiv \frac{1}{N}\underset{i}{\sum }\sigma _i^\alpha \sigma _i^\beta .$$
(2)
The ground state of a given realization is characterized by the probability density $`P_J(q)`$. Averaging over the realizations $`J`$, denoted by $`[]_{av}`$, results in ($`Z`$ = number of realizations)
$$P(q)\equiv [P_J(q)]_{av}=\frac{1}{Z}\underset{J}{\sum }P_J(q).$$
(3)
Because no external field is present the densities are symmetric: $`P_J(q)=P_J(-q)`$ and $`P(q)=P(-q)`$. Therefore, only $`P(|q|)`$ is relevant.
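Given a set of sampled ground states that enter with equal weight, $`P(|q|)`$ can be estimated directly; a minimal sketch (the toy states below are random and merely stand in for actual ground states):

```python
import numpy as np

# Estimate P(|q|) from equally weighted states (rows of +/-1 entries).
def overlaps(states):
    states = np.asarray(states)            # shape (n_states, N)
    n, N = states.shape
    q = (states @ states.T) / N            # q^{ab} for all pairs
    iu = np.triu_indices(n, k=1)           # distinct replica pairs a < b
    return q[iu]

rng = np.random.default_rng(1)
demo = rng.choice([-1, 1], size=(50, 27))  # toy "ground states", N = 3^3
hist, edges = np.histogram(np.abs(overlaps(demo)), bins=20, range=(0, 1),
                           density=True)   # normalized estimate of P(|q|)
```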
The Droplet model predicts that only two pure states exist, implying that $`P(|q|)`$ converges to a delta function $`P(q)=\delta (q-q_{EA})`$ for $`L\to \infty `$ (we don’t indicate the $`L`$ dependence by an index), while in the MF picture the density remains nonzero for a range $`0\le q\le q_1`$ with a peak at $`q_{\mathrm{max}}`$ ($`0<q_{\mathrm{max}}\le q_1`$). Consequently the variance
$$\sigma ^2(|q|)\equiv \int _{-1}^1(|q|-\overline{|q|})^2P(q)dq=\overline{|q|^2}-\overline{|q|}^2$$
(4)
stays finite for $`L\to \infty `$ in the MF pictures, while $`\sigma ^2(|q|)\sim L^{-y}\to 0`$ according to the DS approach. The combined average of a quantity $`X`$ over all ground states and over the disorder is denoted by $`\overline{X}`$. Here, $`y`$ is the zero-temperature scaling exponent, denoted $`\mathrm{\Theta }`$ elsewhere.
To characterize the contribution from small overlap values separately, which are due to a complex structure of the energy landscape, the weight $`X_{q_0}`$ of the distribution below a given threshold $`q_0`$ is calculated:
$$X_{q_0}\equiv \int _0^{q_0}P(|q|)dq.$$
(5)
The overlap defined in (2) can be applied to measure the distance $`d^{\alpha \beta }`$ between two states:
$$d^{\alpha \beta }\equiv 0.5(1-q^{\alpha \beta })$$
(6)
with $`0\le d^{\alpha \beta }\le 1`$. For three replicas $`\alpha ,\beta ,\gamma `$ the usual triangular inequality reads $`d^{\alpha \beta }\le d^{\alpha \gamma }+d^{\gamma \beta }`$. Written in terms of $`q`$ it reads
$$q^{\alpha \beta }\ge q^{\alpha \gamma }+q^{\gamma \beta }-1.$$
(7)
Another characteristic attributed to the MF scheme is that the state space exhibits ultrametricity. In an ultrametric space the triangular inequality is replaced by a stronger one, $`d^{\alpha \beta }\le \mathrm{max}(d^{\alpha \gamma },d^{\gamma \beta })`$, or equivalently
$$q^{\alpha \beta }\ge \mathrm{min}(q^{\alpha \gamma },q^{\gamma \beta }).$$
(8)
An example of an ultrametric space is given by the set of leaves of a binary tree: the distance between two leaves is defined by the number of edges on a path between the leaves.
Let $`q_1\le q_2\le q_3`$ be the overlaps $`q^{\alpha \beta }`$, $`q^{\alpha \gamma }`$, $`q^{\gamma \beta }`$ ordered according to their sizes. By writing the smallest overlap on the left side in equation (8), one realizes that two of the overlaps must be equal and the third may be larger or the same: $`q_1=q_2\le q_3`$. Therefore, for the difference
$$\delta q\equiv q_2-q_1$$
(9)
$`\delta q=0`$ holds. For a finite system ultrametricity may be violated, i.e. $`\delta q>0`$. If a system becomes more and more ultrametric with growing system size, $`\delta q`$ should decrease as $`L\to \infty `$. When evaluating $`\delta q`$, the influence of the absolute size of the overlaps should be excluded. Thus, the third overlap is fixed: $`q_3=q_{fix}`$. In practice, overlap triples are used for which $`q_3\in [q_{fix},q_{fix2}]`$ holds. This allows one to obtain sufficient statistics. In the next section the distribution $`P(\delta q)`$ is evaluated. For an ultrametric system this quantity should converge to a Dirac delta function with increasing size $`L`$.
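A sketch of this triple analysis (the window values $`q_{fix}`$, $`q_{fix2}`$ are illustrative choices):

```python
import numpy as np
from itertools import combinations

# Ultrametricity test: for triples whose largest overlap q3 falls in a
# fixed window [q_fix, q_fix2], record delta_q = q2 - q1  (Eq. (9)).
def delta_q_samples(states, q_fix=0.5, q_fix2=0.6):
    states = np.asarray(states)
    N = states.shape[1]
    out = []
    for a, b, c in combinations(range(len(states)), 3):
        qs = sorted([states[a] @ states[b] / N,
                     states[a] @ states[c] / N,
                     states[b] @ states[c] / N])   # q1 <= q2 <= q3
        if q_fix <= qs[2] <= q_fix2:
            out.append(qs[1] - qs[0])
    return np.array(out)                           # histogram -> P(delta q)
```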
## IV Results
Ground states were generated using genetic CEA for sizes $`L=3,\mathrm{},14`$. The number of realizations of the bonds per lattice size ranged from 100 realizations for $`L=14`$ up to 1000 realizations for $`L=3`$. One $`L=14`$ run typically needs 540 CPU-min on an 80MHz PPC601 processor (70 CPU-min for $`L=12`$, …, 0.2 CPU-sec for $`L=3`$); more details can be found elsewhere. Each run resulted in one configuration, which was stored if it exhibited the ground-state energy. For the smallest sizes $`L=3,4`$ all ground states were calculated for each realization by performing up to $`10^4`$ runs. For larger sizes it is not possible to obtain all ground states, because of the exponentially rising degeneracy. For $`L=5,6,8`$ practically all clusters are obtained using at most $`10^4`$ runs; only for about 25% of the $`L=8`$ realizations may some small clusters have been missed.
For $`L>8`$ not only the number of states but also the number of clusters is too large; consequently $`40`$ independent runs were made for each realization. For $`L=14`$ this resulted in an average of $`13.8`$ states per realization having the lowest energy, while for $`L=10`$ on average $`35.3`$ states were stored. This may seem a rather small number. However, the probability that genetic CEA returns a specific ground state increases (sublinearly) with the size of the cluster the state belongs to. Thus, ground states from small clusters do appear with a small probability. Because the behavior is dominated by the largest clusters, the results shown later on are the same (within error bars) as if all ground states were available. This was tested explicitly for 100 realizations of $`L=10`$ by doubling the number of runs, i.e. increasing the number of clusters found.
Using this initial set of states for each realization ($`L>4`$) a second set was produced using the techniques explained before, which ensures that each ground state enters the results with the same weight. The number of states was chosen in a way, that $`n_{\mathrm{max}}=100`$ states were available for the largest clusters of each realization, i.e. a single cluster smaller than one hundredth of the largest cluster does not contribute to physical quantities, but, as explained before, a collection of many small clusters contributes to the results as well. Finally, it was verified that the results did not change by increasing $`n_{\mathrm{max}}`$.
The number of MC steps used for generating the states was determined in the following way: a ground state was selected randomly from the largest clusters found for the $`L=14`$ realizations. 100 independent $`T=0`$ MC runs of length $`n_{MC}`$ MC steps were performed starting always from this initial state. For the set of 100 final states the distribution of overlaps was calculated. The whole process was averaged over different realizations. In Fig. 1 the average distribution $`P_c(q)`$ of overlaps is shown for different run lengths $`n_{MC}`$. It can be seen that by increasing the number of MC steps the ground-state cluster is explored better. By going beyond $`n_{MC}=100`$ steps $`P_c(q)`$ does not change, indicating that this number of MC steps is sufficient to generate ground states equally distributed within a cluster for $`L=14`$.
The order parameter selected here for the description of the complex ground state behavior of spin glasses is the total distribution $`P(|q|)`$ of overlaps. The result for the case where all ground states have the same weight is shown in Fig. 2 for $`L=6,10`$. The distributions are dominated by a large peak for $`q>0.8`$. Additionally there is a long tail down to $`q=0`$, which means that arbitrarily different ground states are possible. So far this is the same result as obtained earlier for the case where the weights of the states are determined by the genetic CEA algorithm. But there is a difference: for the old results the weight of the long tail remains the same for all system sizes. Here, small overlaps for $`L=10`$ are less likely than for $`L=6`$ by a factor of about $`3/4`$.
To study the finite-size dependence of this effect, the variance $`\sigma ^2(|q|)`$ of $`P(|q|)`$ was evaluated as a function of the system size $`L`$. The result is displayed in Fig. 3. Additionally, the data points from the earlier analysis are given. Obviously, by guaranteeing that every ground state has the same weight, the result changes dramatically. To extrapolate to $`L\to \infty `$, a fit of the data to $`\sigma _L^2=\sigma _{\infty }^2+a_0L^{-a_1}`$ was performed. A value of $`\sigma _{\infty }^2=0.01(1)`$ ($`a_1=0.61(15)`$) was obtained, indicating that the width of $`P(|q|)`$ is zero for the infinite system. Consequently, the MF picture with a continuous breaking of replica symmetry cannot be true for three-dimensional $`\pm J`$ spin glasses.
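Such an extrapolation amounts to a standard three-parameter fit; a sketch with placeholder data (not the measured values):

```python
import numpy as np
from scipy.optimize import curve_fit

# Finite-size extrapolation sigma_L^2 = sigma_inf^2 + a0 * L^(-a1).
def model(L, s_inf2, a0, a1):
    return s_inf2 + a0 * L ** (-a1)

L = np.array([3, 4, 5, 6, 8, 10, 12, 14], dtype=float)
var = np.array([0.20, 0.16, 0.13, 0.11, 0.08, 0.06, 0.05, 0.04])  # hypothetical

popt, pcov = curve_fit(model, L, var, p0=(0.01, 0.5, 0.6))
s_inf2, a0, a1 = popt
errors = np.sqrt(np.diag(pcov))   # one-sigma parameter uncertainties
```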
In Fig. 4 the behavior of the long tail is studied in more detail. The integrated weight $`X_{0.5}(L)`$ of all overlaps $`q<q_0=0.5`$ is shown as a function of the system size. Again a fit is used to extrapolate the behavior of the infinite system. A value of $`X_{\infty }=0.01(2)`$ is obtained, confirming the result obtained above.
One might suspect that the results can be explained by the fact that with increasing system size the behavior is dominated more and more by one ground-state cluster. To examine this issue the quantity $`Y=1-[\underset{c}{\sum }w_c^2]_{av}`$ is calculated, where $`w_c`$ is the relative size of cluster $`c`$. If one cluster really dominates, $`Y`$ must vanish with increasing system size $`L`$. In Fig. 5 $`Y`$ is shown as a function of $`L`$ for small system sizes $`L\le 8`$, where all ground-state clusters have been obtained. Obviously, $`Y`$ does not decrease. One reason is that the probability $`P(n_c=1)`$ that a realization exhibits just one ground-state cluster (and its inverse) decreases with growing system size (cf. inset). Consequently, there is no single reason explaining the behavior of $`P(|q|)`$. Additionally, for the interpretation of Fig. 5, one has to take into account that the definition of a cluster, although it is very useful for the evaluation of the ground-state landscape, may have no physical meaning.
By collecting all results one obtains the following description for the distribution of overlaps of the infinite system: It consists of a large delta-peak and a tail down to $`q=0`$, but the weight of that tail goes to zero. This expression is used to point out that by going to larger sizes small overlaps still occur: the number of arbitrarily different ground states diverges. But the size of the largest clusters, which determine the self overlap leading to the large peak, diverges even faster. The delta-peak is centered around a finite value $`q_{EA}`$. From further evaluation of the results $`q_{EA}=0.90(1)`$ was obtained.
Finally, it was tested whether the ground states are ultrametrically organized. In Fig. 6 the distribution $`P(\delta q)`$ is shown for system sizes $`L=4,8,12`$. Each realization enters the distribution with the same weight. With increasing system size the distributions move closer to $`\delta q=0`$, indicating that the systems become more and more ultrametric. The same conclusion can be drawn from the evaluation of the average value of $`\delta q`$ as a function of $`L`$ (cf. inset). This result is similar to the former calculations, where the correct $`T=0`$ distribution was not obtained. But it should be stressed that ultrametricity is only found within a restricted subset of states (here $`q_3\approx 0.5`$). By performing the thermodynamic limit the weight of all regions of state space restricted in this way disappears, i.e. ultrametricity disappears as well.
## V Conclusion
Using genetic cluster-exact approximation the ground-state landscape of three-dimensional $`\pm J`$ spin glasses is investigated. By applying ballistic search and $`T=0`$ Monte-Carlo simulation it is guaranteed that each ground state enters the result with the same probability, and thus a correct thermodynamic distribution is achieved. This technique can also be successfully combined with other methods which are used to generate several configurations from a degenerate ground-state landscape, e.g. with simulated annealing or multicanonical simulation.
The distribution of overlaps is evaluated. For the infinite system it consists solely of two symmetric delta-peaks. This does not imply that only two ground-state clusters remain. On the contrary, the number of ground-state clusters grows exponentially with increasing system size, but the ground-state behavior is dominated by a few large similar clusters (and their inverses). Therefore, a distinct impression emerges: a huge number of arbitrarily different ground-state clusters exist, but by going to larger and larger sizes most of them become unimportant. This rules out any (nonstandard) MF picture with continuous breaking of replica symmetry from being valid as a whole. Interestingly, the result is compatible with the one-step replica-symmetry-breaking scheme which was observed for the p-spin glass. It exhibits a simple distribution of overlaps while many different ground-state clusters are possible. However, further work is needed to determine which of the remaining scenarios really holds for finite-dimensional spin glasses.
Please note that the cluster interpretation depends on the definition of a cluster. By choosing a dynamics which allows flips of more than one spin at a time, a different definition of energy barriers is implied and thus another kind of cluster. But it should be stressed that the results presented in this work do not depend on the way a cluster is defined. Any method of sorting the ground states into groups will work, provided that the number of ground states selected from each group is proportional to the size of the group, and that each state within a group has the same probability of being used for the calculation.
Finally, it should be pointed out that not all results previously obtained using genetic CEA are biased by the imbalance of the ground-state distribution. The main outcomes of the earlier works are not affected. Additionally, although the old data are based on a wrong distribution, they prove that arbitrarily different clusters are present. The reason for $`P(|q|)\to \delta (q-q_{EA})`$ is that most of them become less important.
## VI Acknowledgements
The author thanks T. Aspelmeier, K. Bhattacharya, M. Otto and A. Zippelius for interesting discussions. He is grateful to A. Zippelius and O. Herbst for critically reading the manuscript. The work was supported by the Graduiertenkolleg “Modellierung und Wissenschaftliches Rechnen in Mathematik und Naturwissenschaften” at the Interdisziplinäres Zentrum für Wissenschaftliches Rechnen in Heidelberg and by the Paderborn Center for Parallel Computing through the allocation of computer time. The author obtained financial support from the DFG (Deutsche Forschungsgemeinschaft) under grant Zi209/6-1.
# Accurate Rotation Curves and Distribution of Dark Matter in Galaxies<sup>1</sup>

<sup>1</sup>To appear in the Proceedings of the XIXth Moriond Astrophysics Meeting “Building Galaxies: from the Primordial Universe to the Present”, Les Arcs, March 13-20, 1999, ed. F. Hammer et al. (Editions Frontieres, Gif-sur-Yvette).
## 1 Introduction
The rotation curve is the principal tool to derive the axisymmetric distribution of mass in disk galaxies in the first-order approximation. Rotation curves have been obtained by optical and HI-line spectroscopy (Rubin et al 1980, 1982; Bosma 1981; Mathewson et al 1996; Persic et al 1996). However, the inner rotation curves have not yet been thoroughly investigated, not only because the concern in these studies has been with the massive halo, but also because of the difficulty of observing inside central bulges. We have shown that the CO molecular line is useful for deriving central kinematics, owing to its high concentration in the center and low extinction (Sofue 1996, 1997, Sofue et al 1997, 1998: Papers I to IV). Recent CCD H$`\alpha `$ line spectroscopy has also provided us with accurate rotation curves for the inner regions (Rubin et al 1997; Sofue et al 1998). In this paper, we present high-accuracy rotation curves and discuss their general characteristics. We derive surface mass distributions, and discuss the radial variation of the mass-to-luminosity ratio and the dark mass fraction.
## 2 Universal Properties of Rotation Curves
### 2.1 Central-to-Outer Rotation Curves
For galaxies other than the Milky Way, it has been widely believed that inner rotation curves behave in a rigid-body fashion. In order to clarify whether such rigid rotation is common, or whether galaxies have rotation curves similar to that of the Milky Way, we have performed high-resolution CO-line observations. We have also obtained CCD spectroscopy in the H$`\alpha `$ and \[NII\] line emissions of the central regions of galaxies. In deriving rotation curves, we applied the envelope-tracing method to position-velocity (PV) diagrams. In Fig. 1a, we show the most completely sampled rotation curves (Papers I - IV).
### 2.2 Logarithmic Rotation Curves
Since the dynamical structure of a galaxy varies rapidly with radius toward the center, a logarithmic plot helps in overviewing the innermost kinematics. In fact, the logarithmic plots in Fig. 1b demonstrate their convenience for discussing the central kinematics. In such a plot, we may argue that high-mass galaxies show almost constant rotation velocities from the center to the outer edge.
### 2.3 Universal Properties
We may summarize the universal properties of the rotation curves in Figs. 1 and 2 as follows, which are similar to those of the Milky Way.
(1) Steep central rise and peak, often starting from high velocity at the nucleus;
(2) Bulge component, often causing the central peak of rotation curve;
(3) Broad maximum by the disk; and
(4) Halo component.
The steep nuclear rise of rotation is a universal property of massive Sb and Sc galaxies, regardless of their morphological peculiarities, while less massive galaxies tend to show a rigid-body rise. The fact that almost all massive galaxies show the steep rise indicates that it is not due to non-circular motion by chance. Even if there is a bar, we have more chance of observing shocked gas bound to the potential, and thus the pattern speed, than high-velocity flows, which would result in underestimating the true rotation velocity.
## 3 Mass-to-Luminosity Ratio and the Dark Mass Fraction (DMF)
Once an accurate rotation curve from the center to the outer edge is obtained, we can directly calculate the surface mass density. One extreme case is to assume spherical symmetry: the rotation velocity is used to calculate the total mass involved within a radius, which is then used to calculate the surface mass density. Another extreme case is to assume a thin disk: the surface mass density can be directly calculated by using the Poisson equation (e.g. Binney and Tremaine 1987). We may safely assume that the true mass distribution lies in between these two cases. We have thus calculated the mass distribution for the galaxies for which accurate rotation curves have been obtained. The results for the spherical and disk assumptions are found to agree usually within a factor of 1.5 to 2. We stress that this method does not rely on any potential models, unlike the widely adopted approach of fitting calculated rotation curves by assuming potentials (Kent 1987).
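A minimal sketch of the spherical variant of this direct calculation, assuming a toy rotation curve (the surface density below is the crude estimate obtained by spreading each spherical shell onto the disk plane; dividing it by an observed surface-luminosity profile gives the radial run of M/L discussed next):

```python
import numpy as np

# Spherical direct mass estimate: M(r) = v(r)^2 * r / G.
G = 4.30e-6                              # kpc (km/s)^2 / M_sun

r = np.linspace(0.1, 20.0, 200)          # radius [kpc]
v = 200.0 * r / np.sqrt(1.0 + r**2)      # toy curve rising to ~200 km/s

M = v**2 * r / G                         # enclosed mass [M_sun]
Sigma = np.gradient(M, r) / (2 * np.pi * r)   # crude surface density [M_sun/kpc^2]
```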
The surface mass density can then be directly compared with the observed surface luminosity, from which we can derive the mass-to-luminosity ratio (M/L). Fig. 2 shows the thus obtained radial distributions of M/L for a disk assumption (Takamiya and Sofue 1999). The figure indicates that the M/L is not constant at all, but varies significantly within a galaxy. Since the M/L of stars will not vary so drastically, this diagram can be interpreted, to first approximation, as representing the distribution of the dark mass fraction (DMF), namely the minimal DMF.
(1) M/L and DMF vary drastically within the central bulge. In some galaxies, they increase inward toward the center, suggesting a dark massive core. In others, they decrease toward the center, likely due to a luminosity excess such as that produced by an active nucleus.
(2) M/L and DMF gradually increase from the inner disk to the outer disk, and the gradient increases with the radius.
(3) M/L and DMF increase drastically from the outer disk toward the outer edge, indicating the massive dark halo. In many galaxies, the dark halo can be almost directly seen in this figure, where M/L exceeds ten, and sometimes one hundred.
References
Binney, J., Tremaine, S. 1987, Galactic Dynamics (Princeton Univ. Press).
Bosma A. 1981, AJ 86, 1825
Kent, S. M. 1987, AJ 93, 816.
Mathewson, D.S. and Ford, V.L., 1996 ApJS, 107, 97.
Persic, M., and Salucci, P. 1995, ApJS 99, 501.
Rubin V. C., Ford W. K., Thonnard N. 1980, ApJ 238, 471
Rubin, V. C., Ford, W. K., Thonnard, N. 1982, ApJ, 261, 439
Rubin, V., Kenney, J.D.P., Young, J.S. 1997 AJ, 113, 1250.
Sofue, Y. 1996, ApJ, 458, 120 (Paper I)
Sofue, Y. 1997, PASJ, 49, 17 (Paper II)
Sofue, Y., Tomita, A., Honma, M., Tutui, Y. and Takeda, Y. 1998, PASJ 50, 427. (Paper IV)
Sofue, Y., Tutui, Y., Honma, M., and Tomita, A., 1997, AJ, 114, 2428 (Paper III)
Takamiya, T., and Sofue, Y. 1999, in preparation.
# Does the generalized second law require entropy bounds for a charged system?
## I Introduction
One of the most remarkable developments in black hole physics is the relationship between the laws of black hole mechanics and thermodynamics. Classically, black holes obey laws that are analogous to the ordinary laws of thermodynamics. This correspondence becomes more than just an analogy when quantum effects are taken into account (Hawking’s discovery of the thermal radiation emitted by a black hole).
Furthermore, Bekenstein has conjectured a generalized second law (GSL) of thermodynamics: The sum of the black hole entropy and the ordinary entropy of the matter outside the black hole never decreases. More precisely, the GSL states that the generalized entropy $`S_g`$ defined by
$$S_g=S_{matter}+\frac{1}{4}A_{bh}$$
(1)
never decreases for any physical process (we use natural units such that $`\mathrm{}=G=c=k=1`$ throughout this paper), where $`S_{matter}`$ is the entropy of ordinary matter outside the black hole and $`A_{bh}/4`$, one quarter of the surface area of the black hole, plays the role of the entropy of the black hole. It is important to check the validity of this conjecture because this would strongly support the idea that the ordinary laws of thermodynamics apply to a self-gravitating quantum system containing a black hole and that $`A_{bh}/4`$ truly represents the physical entropy of the black hole.
There currently exists no general proof of the GSL based on the known microscopic laws of physics, although there are some proofs that rely on the semiclassical approximation. This is because the laws of quantum gravity are not well known. Thus, gedanken experiments to test the validity of the GSL are very important tools to bolster confidence in this conjecture.
Classically, it was already recognized that a promising possibility for achieving a violation of the GSL occurs when one slowly (adiabatically) lowers a box initially containing energy $`E`$ and entropy $`S`$ toward a black hole and then drops it in. The energy delivered to the black hole can be arbitrarily red-shifted by letting the dumping point approach the horizon. Near this limit, the black hole area increase is not large enough to compensate for the decrease of the matter’s entropy. A resolution of this difficulty was proposed by Bekenstein, who conjectured that there exists a universal upper bound on the entropy $`S`$ of matter with energy $`E`$ which is placed in a box of size $`R`$:
$$S\le 2\pi ER.$$
(2)
The intuitive reason why such a bound could rescue the GSL is that it prevents one from lowering a box close enough to a black hole to violate the GSL.
However, Unruh and Wald pointed out that Bekenstein failed to take into account certain quantum effects in his analysis. They noted that there is a quantum thermal atmosphere surrounding a black hole, which produces a buoyancy force on a box when one tries to lower the box slowly toward the black hole. As a result, one cannot lower the box down to the horizon (if one does not wish to inject energy by pushing it in) and the box will float at a finite distance from the horizon, which is determined by the condition that the energy contained in the box is exactly the same as the energy of the acceleration radiation displaced by the box. Since the total energy at infinity added to the black hole after the box has been dropped from the floating point is larger than the redshifted proper energy of the box, the box must be opened (this was recently extended to the “dropped” case) at the floating point in order to minimize the entropy increase of the black hole. Accordingly, they concluded that the GSL holds in this process provided only that unconstrained thermal matter maximizes entropy at fixed volume and energy:
$$S\le Vs(e),$$
(3)
where $`s(e)`$ is the entropy density as a function of energy density $`e`$ of unconstrained thermal matter. Thus, they concluded that no additional assumption on the quantum nature of the matter such as (2) is necessary to rescue the GSL.
Recently, Bekenstein and Mayo, and Hod have derived an upper bound on the entropy of a charged system by considering the polarization of the black hole by a nearby charge. They argued that the GSL could be saved only by assuming the existence of entropy bounds on confined systems of the type stated above. In their derivation, they regard the system as a “point particle” and used the test-particle approximation. That is, the system is assumed to follow the equation of motion of a charged particle on a black hole background and to have a conserved energy (the “backreaction” effects are negligible). However, since the system does not descend slowly (adiabatically) to the black hole in this process, there must be backreaction effects: the system radiates gravitational and electromagnetic radiation (these processes also carry entropy) and the generalized entropy should increase if all these effects are included. Furthermore, there is no justification for treating the system as a point particle: the thermodynamical properties in and outside the box are completely neglected, even though they play an important role in the validity of the GSL. Thus, it is doubtful whether this composite system can be considered to be thermal.
In order to avoid these difficulties, we carry out a gedanken experiment in which a (possibly “thick”) box initially containing energy $`E`$, entropy $`S`$ and charge $`Q`$ is lowered adiabatically toward a Reissner-Nordström black hole and then dropped in. This is an extension of the work of Unruh-Wald to a charged system (the contents of the box possess a charge $`Q`$). Their previous analysis showed that the effects of acceleration radiation (buoyancy force) prevent a violation of the GSL, as stated above. Here, in addition to adding charge to the box, we consider the more generic case in which the thermal atmosphere has a spherically distributed charge, too. In this case, we notice that, in addition to the Unruh-Wald entropy restriction, there is an equilibrium condition for the chemical potential of the thermal atmosphere. Indeed, we prove here that these two equilibrium conditions and the physical properties of ordinary matter are sufficient to enforce the generalized second law. Thus, no additional assumptions concerning entropy bounds on the contents of the box need to be made in this process.
In Sec. II, we derive the equations that hold for the thermal atmosphere around a black hole. In Sec. III, we show that the GSL holds in the aforementioned process. Sec. IV is devoted to a summary and discussion of our results and, in particular, a comparison with previous works.
## II Thermal atmosphere around a black hole
We carry out a gedanken experiment with a Reissner-Nordström black hole of mass $`M`$ and charge $`Q_{bh}`$, whose spacetime metric and electromagnetic vector potential are given by
$`ds^2`$ $`=`$ $`f(r)dt^2+{\displaystyle \frac{dr^2}{f(r)}}+r^2d\mathrm{\Omega }^2,`$ (4)
$`A_\mu dx^\mu `$ $`=`$ $`\mathrm{\Phi }(r)dt\equiv {\displaystyle \frac{Q_{bh}}{r}}dt,`$ (5)
where
$$f(r)=\frac{(r-r_+)(r-r_{})}{r^2},$$
(6)
with $`r_\pm \equiv M\pm \sqrt{M^2-Q_{bh}^2}`$. The event horizon is located at $`r=r_+`$ and has area $`A=4\pi r_+^2`$.
The temperature of the black hole is defined by
$$T_H=\frac{1}{2\pi }\kappa \equiv \frac{1}{4\pi }f^{}(r_+),$$
(7)
where $`f^{}`$ denotes $`df/dr`$ and $`\kappa =(r_+-r_{})/2r_+^2`$ is the surface gravity of the black hole. Physically this represents the temperature of the black hole measured at infinity.
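For concreteness, these background quantities are easy to evaluate numerically; a minimal sketch with example values of $`M`$ and $`Q_{bh}`$ (natural units), including the local temperature $`T_H/\chi `$ introduced in Eq. (8) below:

```python
import numpy as np

# Reissner-Nordstrom background quantities in natural units (Eqs. (4)-(8)).
M, Q_bh = 1.0, 0.5                             # example black hole parameters

r_plus = M + np.sqrt(M**2 - Q_bh**2)           # outer horizon
r_minus = M - np.sqrt(M**2 - Q_bh**2)          # inner horizon

def f(r):
    return (r - r_plus) * (r - r_minus) / r**2

kappa = (r_plus - r_minus) / (2 * r_plus**2)   # surface gravity
T_H = kappa / (2 * np.pi)                      # temperature at infinity

# Tolman's law: the local temperature diverges as r -> r_plus.
for r in (1.01 * r_plus, 1.1 * r_plus, 2.0 * r_plus):
    print(f"r/r+ = {r / r_plus:.2f}:  T_local = {T_H / np.sqrt(f(r)):.4f}")
```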
First, we give a definition of unconstrained thermal matter with charge. We define unconstrained thermal matter in a given region outside the black hole to be the state of matter that maximizes entropy at a fixed volume, energy and charge (with the electromagnetic potential given by (5)). Note that the properties of unconstrained thermal matter depend on location, i.e., the entropy density of unconstrained thermal matter, $`\stackrel{~}{s}`$, is a function of the energy density, $`\stackrel{~}{\rho }`$, and the charge density, $`\stackrel{~}{q}`$, at the given point outside the black hole. We assume that the thermal atmosphere of a black hole is described by unconstrained thermal matter.
Then, the local temperature of a thermal atmosphere which is in equilibrium with the black hole is given by using Tolman’s law as
$$\stackrel{~}{T}=T_H/\chi ,$$
(8)
where $`\chi =f^{1/2}`$ is the redshift factor.
In addition to (8), the chemical potential of the thermal atmosphere $`\stackrel{~}{\mu }_i`$ must satisfy the following condition in order that it be in an equilibrium state:
$$\stackrel{~}{\mu }_i\chi =\text{constant for each }i,$$
(9)
where index $`i`$ denotes particle species.
Following Unruh and Wald, we assume that the black hole has reached thermal equilibrium with the radiation, the whole system being enclosed in a large cavity. This is achieved by fixing the boundary condition at the boundary, i.e., by specifying the temperature and the electrostatic potential (these are determined by the Hawking radiation and the difference between the chemical potentials, respectively) at the boundary.
The first law for the thermal atmosphere is written as
$$d\stackrel{~}{\rho }=\stackrel{~}{T}d\stackrel{~}{s}+\stackrel{~}{q}d\varphi +\underset{i}{\sum }\stackrel{~}{\mu }_id\stackrel{~}{n}_i,$$
(10)
where $`\varphi =\mathrm{\Phi }/\chi `$. The integrated Gibbs-Duhem relation for this system is as follows. (See appendix A for a derivation.)
$$\stackrel{~}{\rho }=\stackrel{~}{T}\stackrel{~}{s}-\stackrel{~}{P}+\underset{i}{\sum }\stackrel{~}{\mu }_i\stackrel{~}{n}_i.$$
(11)
Note that the quantity $`\varphi `$ does not appear in this expression.
By using the above two equations, the following equation is derived from Eqs. (8) and (9).
$$d(\stackrel{~}{P}\chi )=-\stackrel{~}{\rho }d\chi -\stackrel{~}{q}\chi d\varphi .$$
(12)
Eq. (12) states that the pressure gradient is balanced by the gravitational and electromagnetic forces.
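For the reader’s convenience, here is a sketch of the intermediate step, using only Eqs. (8)–(11): differentiating Eq. (11) and subtracting Eq. (10) gives the local Gibbs-Duhem relation $`d\stackrel{~}{P}=\stackrel{~}{s}d\stackrel{~}{T}+\underset{i}{\sum }\stackrel{~}{n}_id\stackrel{~}{\mu }_i-\stackrel{~}{q}d\varphi `$; inserting $`\stackrel{~}{T}=T_H/\chi `$ and $`\stackrel{~}{\mu }_i\chi =\text{const}`$ and using Eq. (11) again yields

$$d\stackrel{~}{P}=-(\stackrel{~}{\rho }+\stackrel{~}{P})\frac{d\chi }{\chi }-\stackrel{~}{q}d\varphi ,$$

so that $`d(\stackrel{~}{P}\chi )=\chi d\stackrel{~}{P}+\stackrel{~}{P}d\chi =-\stackrel{~}{\rho }d\chi -\stackrel{~}{q}\chi d\varphi `$, which is Eq. (12).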
## III Validity of the generalized second law
In this section, following Unruh and Wald, we compute the change in generalized entropy occurring when matter in a (possibly “large”) box is slowly lowered toward a black hole and then dropped in. We consider a box of cross-sectional area $`A`$ and height $`b`$, which contains energy density $`\rho `$, charge density $`q`$ and total entropy $`S`$. As the box is lowered toward the black hole, the energy and charge density will depend both on the height $`l`$ of the center of the box above the horizon, and on the position within the box, $`y`$, as measured from the center.
We adopt the following notation for integrals
$$\int f(y)dV\equiv A\int _{-b/2}^{b/2}f(y)dy.$$
(13)
The energy of the box as measured at infinity is
$$E_{\infty }(l)=\int \rho (l,y)\chi (l+y)dV,$$
(14)
whereas the gravitational and electromagnetic forces as measured at infinity are of the forms
$`w(l)`$ $`=`$ $`{\displaystyle \int \rho (l,y)\frac{\partial \chi (l+y)}{\partial l}dV},`$ (15)
$`f_{em}(l)`$ $`=`$ $`{\displaystyle \int q(l,y)\chi (l+y)\frac{\partial \varphi (l+y)}{\partial l}dV}.`$ (16)
These external forces do work on the gas in the box. We denote the work by $`W_{ge}(l)`$:
$`W_{ge}(l)`$ $`=`$ $`E_{\infty }(l)-E_i`$ (17)
$`=`$ $`{\displaystyle \int _{\infty }^l}[w(l^{})+f_{em}(l^{})]dl^{},`$ (18)
where $`E_i`$ is the initial energy of the box. This is equivalent to
$$dE_{\infty }=(w+f_{em})dl.$$
(19)
Meanwhile, the buoyancy force acting on the box, as measured at infinity, is equal to
$$f_b(l)=A\left[(\stackrel{~}{P}\chi )_{l-b/2}-(\stackrel{~}{P}\chi )_{l+b/2}\right],$$
(20)
where $`\stackrel{~}{P}`$ is the radiation pressure of the thermal atmosphere. From Eq. (20), it is easy to show that the work done by the buoyancy force is given by
$`W_b(l)={\displaystyle \int _{\infty }^l}f_b(l^{})dl^{}={\displaystyle \int \stackrel{~}{P}(l,y)\chi (l+y)dV}.`$ (21)
Putting together Eqs. (17) and (21), the total work done on the box system is given by
$`W_{tot}(l)`$ $`=`$ $`W_{ge}(l)+W_b(l)`$ (22)
$`=`$ $`{\displaystyle \int [\rho (l,y)+\stackrel{~}{P}(l,y)]\chi (l+y)dV}-E_i.`$ (23)
If the contents of the box are dropped into the black hole from position $`l_0`$, the first law of black hole mechanics requires that the change $`\mathrm{\Delta }S_{bh}`$ in black hole entropy satisfy
$`\mathrm{\Delta }S_{bh}`$ $`=`$ $`{\displaystyle \frac{1}{T_H}}(E_i+W_{tot}(l_0)-\mathrm{\Phi }_{bh}Q)`$ (24)
$`=`$ $`{\displaystyle \frac{1}{T_H}}{\displaystyle \int [\rho (l_0,y)+\stackrel{~}{P}(l_0,y)]\chi (l_0+y)dV}-{\displaystyle \frac{\mathrm{\Phi }_{bh}Q}{T_H}},`$ (25)
where $`Q`$ and $`\mathrm{\Phi }_{bh}=Q_{bh}/r_+`$ are the charge thrown into the system by the agent at infinity and the electromagnetic potential of the black hole, respectively.
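As a quick numerical illustration of this bookkeeping (all values are hypothetical, in natural units; the background quantities are those of the sketch in Sec. II):

```python
import numpy as np

# Black-hole entropy change when energy-at-infinity E_inf and charge Q_dep
# are absorbed: dS_bh = (E_inf - Phi_bh * Q_dep) / T_H, cf. Eq. (24).
M, Q_bh = 100.0, 30.0                          # example black hole parameters
r_plus = M + np.sqrt(M**2 - Q_bh**2)           # outer horizon radius
kappa = np.sqrt(M**2 - Q_bh**2) / r_plus**2    # surface gravity
T_H = kappa / (2 * np.pi)                      # Hawking temperature at infinity
Phi_bh = Q_bh / r_plus                         # horizon electrostatic potential

E_inf, Q_dep = 1.0e-3, 1.0e-5                  # hypothetical deposited energy/charge
dS_bh = (E_inf - Phi_bh * Q_dep) / T_H
print(f"T_H = {T_H:.2e}, Phi_bh = {Phi_bh:.3f}, dS_bh = {dS_bh:.2f}")
```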
Hereafter, for simplicity, we consider a $`2`$-component system composed of a gas of particles with charge $`e`$ and anti-particles with opposite charge $`-e`$. This assumption does not affect our result, and it is easy to extend our argument to the $`2n`$-component system <sup>*</sup><sup>*</sup>* If there is a particle with charge $`e(>0)`$, there exists a corresponding anti-particle with opposite charge $`-e`$ in nature. So we consider an even number of particle species. , if we wish.
Therefore, by substituting Eqs. (8) and (11) into Eq. (25), we get the change in generalized entropy as
$`\mathrm{\Delta }S_g`$ $`=`$ $`\mathrm{\Delta }S_{bh}-S`$ (26)
$`=`$ $`{\displaystyle \frac{1}{T_H}}{\displaystyle \int [\rho (l_0,y)-\stackrel{~}{\rho }(l_0,y)]\chi (l_0+y)dV}+\stackrel{~}{S}(l_0)-S`$ (28)
$`+{\displaystyle \frac{1}{T_H}}\left\{{\displaystyle \underset{i=+,-}{\sum }}{\displaystyle \int \stackrel{~}{\mu }_i(l_0,y)\stackrel{~}{n}_i(l_0,y)\chi (l_0+y)dV}-\mathrm{\Phi }_{bh}Q\right\},`$
where $`\stackrel{~}{S}(l_0)=\int \stackrel{~}{s}(l_0,y)dV`$ and $`S=\int s(l_0,y)dV`$ are the entropy of the thermal atmosphere displaced by the box and the entropy of the matter in the box, respectively.
By using the equilibrium condition for the chemical potential of the thermal atmosphere (9), and noting that the chemical potential in the absence of the field can be neglected completely at the horizon because of its high (infinite) local temperature, we get
$`\stackrel{~}{\mu }_+\stackrel{~}{n}_+\chi +\stackrel{~}{\mu }_{-}\stackrel{~}{n}_{-}\chi `$ $`=`$ $`\stackrel{~}{\mu }_+^h\chi _h\stackrel{~}{n}_++\stackrel{~}{\mu }_{-}^h\chi _h\stackrel{~}{n}_{-}`$ (29)
$`=`$ $`e\mathrm{\Phi }_{bh}(\stackrel{~}{n}_+-\stackrel{~}{n}_{-}),`$ (30)
where the index $`h`$ denotes the quantity evaluated at the horizon.
Thus, we can rewrite Eq. (28) further.
$`\mathrm{\Delta }S_g`$ $`=`$ $`{\displaystyle \int \left\{\stackrel{~}{s}(l_0)-\frac{1}{T_H}[\stackrel{~}{\rho }(l_0,y)\chi (l_0+y)-\mathrm{\Phi }_{bh}\stackrel{~}{q}(l_0,y)]\right\}dV}`$ (32)
$`-{\displaystyle \int \left\{s(l_0)-\frac{1}{T_H}[\rho (l_0,y)\chi (l_0+y)-\mathrm{\Phi }_{bh}q(l_0,y)]\right\}dV},`$
where $`q=e(n_+-n_{-})`$ and $`\stackrel{~}{q}=e(\stackrel{~}{n}_+-\stackrel{~}{n}_{-})`$ are the charge density of the matter in the box and that of the thermal atmosphere, respectively.
Since Eq. (19) can be rewritten as
$$\int \frac{\partial \rho (l,y)}{\partial l}\chi (l+y)dV-\int q(l,y)\chi (l+y)\frac{\partial \varphi (l+y)}{\partial l}dV=0,$$
(33)
it is easily shown by differentiating (28) that
$$\frac{\partial }{\partial l_0}\mathrm{\Delta }S_{bh}=\frac{1}{T_H}\int \left\{[\rho (l_0,y)-\stackrel{~}{\rho }(l_0,y)]\frac{\partial \chi (l_0+y)}{\partial l_0}+[q(l_0,y)-\stackrel{~}{q}(l_0,y)]\chi (l_0+y)\frac{\partial \varphi (l_0+y)}{\partial l_0}\right\}dV,$$
(34)
where we have used Eqs. (9) and (12), and the fact that the total charge $`Q=\int qdV`$ of the particles in the box is conserved. Note that, since this process of lowering the box is adiabatic, no change in the entropy in the box can occur and thus $`\mathrm{\Delta }S_{bh}`$ can be replaced by $`\mathrm{\Delta }S_g`$.
First, for simplicity, we consider the case in which the box is sufficiently “small” in the sense that the changes in $`\chi `$, $`d\chi /dl`$ and $`d\mathrm{\Phi }/dl`$ across the box are small compared with their average values.
In this case, the floating point condition (34) and the total change in generalized entropy (32) reduce to
$$\rho (l_0)+q(l_0)\chi (l_0)\frac{\partial \varphi (l_0)}{\partial \chi (l_0)}=\stackrel{~}{\rho }(l_0)+\stackrel{~}{q}(l_0)\chi (l_0)\frac{\partial \varphi (l_0)}{\partial \chi (l_0)},$$
(35)
and
$$\mathrm{\Delta }S_g=\left\{\stackrel{~}{S}(l_0)-\frac{1}{T_H}[\stackrel{~}{E}(l_0)-\stackrel{~}{Q}(l_0)\mathrm{\Phi }_{bh}]\right\}-\left\{S(l_0)-\frac{1}{T_H}[E(l_0)-Q(l_0)\mathrm{\Phi }_{bh}]\right\},$$
(36)
respectively. Here, we wrote $`S=sV`$, $`E=\rho \chi V`$, $`Q=qV`$ and the quantities with $`(\stackrel{~}{})`$ refer to the thermal atmosphere.
Thus, our task is to seek the distribution function which maximizes the functional $`S-(E-Q\mathrm{\Phi }_{bh})/T_H`$ under the constraint (35).
The state of matter is encoded in some density operator $`\widehat{f}`$. By using it, we can express the energy density as $`\rho =Tr(\widehat{\rho }\widehat{f})`$ and the charge density as $`q=Tr(\widehat{q}\widehat{f})`$, while the entropy is defined by $`sV=-Tr(\widehat{f}\mathrm{ln}\widehat{f})`$. Then, Eq. (35) can be rewritten as
$$Tr[\widehat{O}\widehat{f}]=Tr[\widehat{O}\stackrel{~}{f}],$$
(37)
where $`\widehat{O}\equiv \widehat{\rho }\chi V+\widehat{q}\chi ^2\frac{\partial \varphi }{\partial \chi }V`$, and $`\widehat{f}`$ and $`\stackrel{~}{f}`$ ($`\propto \mathrm{exp}\{-(\stackrel{~}{H}_{\infty }-\stackrel{~}{Q}\mathrm{\Phi }_{bh})/T_H\}`$) denote the density operator of the matter in a box and that of the thermal atmosphere which is in equilibrium with the black hole, respectively.
Considering that the variation of $`S-(E-Q\mathrm{\Phi }_{bh})/T_H`$ under a small variation $`\delta \widehat{f}`$ is given by
$`\delta [S-(E-Q\mathrm{\Phi }_{bh})/T_H]`$ $`=`$ $`-Tr[\delta \widehat{f}(\mathrm{ln}\widehat{f}+1+(\widehat{\rho }\chi -\widehat{q}\mathrm{\Phi }_{bh})VT_{H}^{-1})]`$ (38)
$`\equiv `$ $`-Tr[\delta \widehat{f}(\mathrm{ln}\widehat{f}+1+(\widehat{H}_{\infty }-\widehat{Q}\mathrm{\Phi }_{bh})T_{H}^{-1})],`$ (39)
the functional $`S-(E-Q\mathrm{\Phi }_{bh})/T_H`$ has an extremum under variations that preserve $`Tr\widehat{f}=1`$ and $`Tr[\widehat{O}\widehat{f}]=Tr[\widehat{O}\stackrel{~}{f}]`$, where $`\widehat{f}`$ satisfies $`(\mathrm{ln}\widehat{f}+1)+(\widehat{H}_{\infty }-\widehat{Q}\mathrm{\Phi }_{bh})T_{H}^{-1}-\lambda _1-\lambda _2\widehat{O}=0`$. The quantities $`\lambda _{1,2}`$ are Lagrange multipliers for these constraints. Eliminating $`\lambda _1`$ by using $`Tr\widehat{f}=1`$, we get a unique solution
$$\widehat{f}=\frac{1}{Z}\mathrm{exp}\left\{-\beta _H(\widehat{H}_{\infty }-\widehat{Q}\mathrm{\Phi }_{bh})+\lambda _2\widehat{O}\right\},$$
(40)
where $`Z=Tr[\mathrm{exp}\{-\beta _H(\widehat{H}_{\infty }-\widehat{Q}\mathrm{\Phi }_{bh})+\lambda _2\widehat{O}\}]`$ and $`\beta _H\equiv T_H^{-1}`$. Then, by substituting Eq. (40) into Eq. (37), we get $`\lambda _2=0`$ and thus $`\widehat{f}=\stackrel{~}{f}=Z^{-1}\mathrm{exp}\{-\beta _H(\widehat{H}_{\infty }-\widehat{Q}\mathrm{\Phi }_{bh})\}`$.
Therefore, the maximum value of the functional $`S-(E-Q\mathrm{\Phi }_{bh})/T_H`$ is realized for the thermal state with the canonical distribution $`\widehat{f}=Z^{-1}\mathrm{exp}\{-\frac{1}{T_H}(\widehat{H}_{\infty }-\widehat{Q}\mathrm{\Phi }_{bh})\}`$, which, in our case, corresponds to the thermal atmosphere of the black hole.
Hence, we have
$$\mathrm{\Delta }S_g\ge \mathrm{\Delta }S_g(l=l_0)\ge 0.$$
(41)
Thus, the GSL is satisfied in this process.
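The maximization argument above can be checked in a small finite-dimensional toy setting. The sketch below is an illustration only: `H` and `Qop` are random Hermitian stand-ins for a truncated Hamiltonian and charge operator (assumptions for the demonstration, not the actual field operators of the thermal atmosphere), and it verifies numerically that the grand-canonical Gibbs state maximizes the functional $`S-(E-Q\mathrm{\Phi }_{bh})/T_H`$ over random density matrices:

```python
# Toy check of the Gibbs variational principle used above.  H and Qop are
# random Hermitian stand-ins (assumptions), T and Phi arbitrary values.
import numpy as np
from scipy.linalg import expm, logm

rng = np.random.default_rng(1)
dim, T, Phi = 6, 1.0, 0.3
H = rng.standard_normal((dim, dim)); H = (H + H.T) / 2
Qop = np.diag(rng.integers(-1, 2, dim).astype(float))

def U(f):
    """U = S - (E - Phi*Q)/T with S = -Tr f ln f."""
    S = -np.trace(f @ logm(f)).real
    return S - (np.trace(H @ f).real - Phi * np.trace(Qop @ f).real) / T

gibbs = expm(-(H - Phi * Qop) / T)
gibbs /= np.trace(gibbs)
for _ in range(100):
    A = rng.standard_normal((dim, dim))
    f = A @ A.T / np.trace(A @ A.T)          # a random density matrix
    assert U(f) <= U(gibbs) + 1e-9           # the Gibbs state is the maximizer
```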
Next, we analyze the case of a “larger” box. The same procedure as for the “small” box can be applied in this case, too.
Hereafter, we adopt the following notation
$$\int a\,dV\equiv Tr[\widehat{A}\widehat{f}],$$
(42)
where $`a`$, $`\widehat{A}`$ and $`\widehat{f}`$ denote some observable, corresponding operator and density operator, respectively.
With this notation, the total change in generalized entropy (32) can be written in the form
$$\mathrm{\Delta }S_g=U[\stackrel{~}{f};\beta _H,\mathrm{\Phi }_{bh}]-U[\widehat{f};\beta _H,\mathrm{\Phi }_{bh}],$$
(43)
where $`U`$ is a functional of a density matrix of the matter fields defined by
$$U[\widehat{f};\beta _H,\mathrm{\Phi }_{bh}]\equiv -Tr[\widehat{f}\mathrm{ln}\widehat{f}]-\beta _H(Tr[\widehat{H}_{\infty }\widehat{f}]-\mathrm{\Phi }_{bh}Tr[\widehat{Q}\widehat{f}]),$$
(44)
$`\widehat{f}`$ and $`\stackrel{~}{f}`$ $`(\propto \mathrm{exp}\{-\beta _H(\widehat{H}_{\infty }-\widehat{Q}\mathrm{\Phi }_{bh})\})`$ denote the density operators of the matter in the box and of the thermal atmosphere which is in equilibrium with the black hole, respectively. In this expression, $`\widehat{H}_{\infty }\equiv \int \widehat{\rho }\chi \,dV`$ and $`\widehat{Q}\equiv \int \widehat{q}\,dV`$ are operators corresponding to energy (at infinity) and charge. Note that the functional $`U`$ is essentially the negative of the free energy divided by the temperature.
Similarly, the floating point condition (34) can be reduced to
$`Tr[\widehat{O}\stackrel{~}{f}]`$ $`=`$ $`Tr[\widehat{O}\widehat{f}]`$ (45)
$`\equiv `$ $`{\displaystyle \int (\rho \frac{\partial \chi }{\partial l}+q\chi \frac{\partial \varphi }{\partial l})\,dV}.`$ (46)
Eqs. (43) and (45) have exactly the same form as Eqs. (36) and (37) for the “small” box. Therefore, by repeating the same procedure as in the “small” box’s case, we can show that a violation of the GSL cannot be achieved in the case of a “large” box either.
In obtaining these results, we have ignored any entropy emitted by the black hole. In fact, the entropy produced in spontaneous Unruh emission corresponding to the superradiant modes can be neglected by taking the black hole as a very massive one.
## IV Summary and discussion
We examined the gedanken experiment in which a box initially containing energy $`E`$, entropy $`S`$ and charge $`Q`$ is lowered toward a Reissner-Nordström black hole and then dropped in (an extension of the work of Unruh and Wald to the charged system). We have shown that the properties of the thermal atmosphere play an important role in this case, just as in Unruh and Wald’s case. Specifically, we used the assumption that unconstrained thermal matter maximizes entropy as a function of energy density and charge density, in addition to the Unruh-Wald buoyancy force. Note that an equilibrium condition for the chemical potential of the thermal atmosphere also plays an important role in this case. Indeed, we proved here that these assumptions are sufficient for the enforcement of the GSL and that no additional assumptions concerning entropy bounds on the contents of the box need to be made in this process.
Finally, we comment briefly on the relation between our work and the recent work of Bekenstein and Mayo , and Hod . They have derived an upper bound to the entropy of a charged system by considering the polarization of the black hole by a nearby charge (the gravitationally induced electrostatic self-force on a charged test particle ). They concluded that the GSL could be saved only by assuming the existence of entropy bounds on a confined charged system. On the other hand, in our derivation, we have neglected the electrostatic self-force until now. If we want to include the electrostatic self-energy in our analysis, we have only to replace $`\mathrm{\Phi }\to \mathrm{\Phi }\pm eM/2r^2`$. The only effect is a change in Eq. (30), i.e.,
$$\stackrel{~}{\mu }_+\stackrel{~}{n}_+\chi +\stackrel{~}{\mu }_{-}\stackrel{~}{n}_{-}\chi =e\mathrm{\Phi }_{bh}(\stackrel{~}{n}_+-\stackrel{~}{n}_{-})+\frac{e^2M}{2r_+^2}(\stackrel{~}{n}_++\stackrel{~}{n}_{-}).$$
(47)
This correction gives a positive contribution to the net change in generalized entropy Eq. (32). Thus, in this gedanken experiment, the GSL would hold even if we include those self-interaction forces.
There are several advantages to our analysis compared with theirs. They regarded the system as a “point particle” (test particle approximation) and assumed that it follows the equation of motion of a charged particle on a black hole background and has a conserved energy (i.e., that the “backreaction” effects are negligible). However, since the system does not descend slowly (adiabatically) to the black hole in their process, it would emit gravitational and electromagnetic radiation (these processes also carry entropy), and the generalized entropy should increase if all these effects are included. Of course, such an analysis including the backreaction effects would be too complicated to reach a definitive answer analytically. Compared with this, since we lower the box toward the black hole very slowly (adiabatically, i.e., as a quasi-static process), these effects can be neglected. Furthermore, there is no justification for treating the system as a point particle: the thermodynamical properties inside and outside the box are completely neglected (thus, it is doubtful whether their composite system can be considered a thermal one, in the sense of a thermally contacted system; on the other hand, since we adiabatically lower the box toward the black hole, this condition is naturally justified in our treatment), even though these properties play an important role in the validity of the GSL . Indeed, we took into account the energy change in the box and the effects of the thermal atmosphere, and showed that these effects play an important role in preventing the violation of the GSL.
Of course, our analysis is not perfect: for instance, we have neglected interactions between the constituents of the radiation and the thermal atmosphere. However, we can say that our analysis improves on the previous analyses, even if we have not resolved all the difficulties.
Acknowledgments
One of us (T.S.) would like to thank Dr. T. Okamura for useful discussions, Professors A. Hosoya, H. Ishihara and T. Mishima for their continuing encouragement. The other (S.M.) would like to thank Professor H. Kodama for his continuing encouragement and Professor W. Israel for his warmest hospitality in University of Victoria and careful reading of the manuscript. This work was supported partially (S.M.) by the Grant-in-Aid for Scientific Research Fund (No. 9809228).
## A Integrated Gibbs-Duhem relation
Provided that the system is in an equilibrium state, the first law of thermodynamics reads
$$dℰ=\stackrel{~}{T}d𝒮-\stackrel{~}{P}d𝒱+𝒬d\varphi +\sum _i\stackrel{~}{\mu }_id𝒩_i,$$
(A1)
where $`ℰ`$, $`\stackrel{~}{T}`$, $`𝒮`$, $`\stackrel{~}{P}`$, $`𝒱`$, $`𝒬`$, $`\varphi `$, $`\stackrel{~}{\mu }_i`$ and $`𝒩_i`$ are the energy measured by a local static observer, local temperature, entropy, pressure, volume, electromagnetic charge, electromagnetic potential, chemical potential and particle number, respectively.
Since
$$d(ℰ-𝒬\varphi )=\stackrel{~}{T}d𝒮-\stackrel{~}{P}d𝒱-\varphi d𝒬+\sum _i\stackrel{~}{\mu }_id𝒩_i,$$
(A2)
the quantity $`ℰ-𝒬\varphi `$ is a function of $`𝒮`$, $`𝒱`$, $`𝒬`$ and $`𝒩_i`$:
$$ℰ-𝒬\varphi =ℱ(𝒮,𝒱,𝒬,𝒩_i).$$
(A3)
Thus, since $`ℱ`$ is a homogeneous function of degree 1 in these extensive parameters, we get
$$\alpha (ℰ-𝒬\varphi )=ℱ(\alpha 𝒮,\alpha 𝒱,\alpha 𝒬,\alpha 𝒩_i).$$
(A4)
By differentiating this equation with respect to $`\alpha `$ and setting $`\alpha =1`$, we obtain
$$ℰ-𝒬\varphi =\left(\frac{\partial ℱ}{\partial 𝒮}\right)_{𝒱,𝒬,𝒩_i}𝒮+\left(\frac{\partial ℱ}{\partial 𝒱}\right)_{𝒮,𝒬,𝒩_i}𝒱+\left(\frac{\partial ℱ}{\partial 𝒬}\right)_{𝒮,𝒱,𝒩_i}𝒬+\sum _i\left(\frac{\partial ℱ}{\partial 𝒩_i}\right)_{𝒮,𝒱,𝒬}𝒩_i.$$
(A5)
Therefore, we get the integrated Gibbs-Duhem relation:
$$ℰ=\stackrel{~}{T}𝒮-\stackrel{~}{P}𝒱+\sum _i\stackrel{~}{\mu }_i𝒩_i.$$
(A6)
Eqs. (A1) and (A6) can also be rewritten as relations between local quantities:
$`d\stackrel{~}{\rho }`$ $`=`$ $`\stackrel{~}{T}d\stackrel{~}{s}+\stackrel{~}{q}d\varphi +{\displaystyle \sum _i}\stackrel{~}{\mu }_id\stackrel{~}{n}_i,`$ (A7)
$`\stackrel{~}{\rho }`$ $`=`$ $`\stackrel{~}{T}\stackrel{~}{s}-\stackrel{~}{P}+{\displaystyle \sum _i}\stackrel{~}{\mu }_i\stackrel{~}{n}_i,`$ (A8)
where $`\stackrel{~}{\rho }`$ ($`=ℰ/𝒱`$), $`\stackrel{~}{s}`$ ($`=𝒮/𝒱`$), $`\stackrel{~}{q}`$ ($`=𝒬/𝒱`$) and $`\stackrel{~}{n}_i`$ ($`=𝒩_i/𝒱`$) are the energy density measured by a local static observer, entropy density, charge density and number density, respectively.
# Complete Numerical Solution of the Temkin-Poet Three-Body Problem
## Abstract
Although the convergent close-coupling (CCC) method has achieved unprecedented success in obtaining accurate theoretical cross sections for electron-atom scattering, it generally fails to yield converged energy distributions for ionization. Here we report converged energy distributions for ionization of $`\mathrm{H}(1\mathrm{s})`$ by numerically integrating Schrödinger’s equation subject to correct asymptotic boundary conditions for the Temkin-Poet model collision problem, which neglects angular momentum. Moreover, since the present method is complete, we obtained convergence for all transitions in a single calculation (excluding the very highest Rydberg transitions, which require integrating to infinitely large distances; these cross sections may be accurately obtained from lower-level Rydberg cross sections using the $`1/n^3`$ scaling law). Complete results, accurate to 1%, are presented for impact energies of 54.4 and 40.8 eV, where CCC results are available for comparison.
The Temkin-Poet (TP) model of electron-hydrogen scattering is now widely regarded as an ideal testing ground for the development of general methods intended for the full three-body Coulomb problem. Although only $`s`$-states are included for both projectile and atomic electrons, this model problem still contains most of the features that make the real scattering problem hard to solve. Indeed, even in this simplified model, converged energy distributions for ionization cannot generally be obtained via the close-coupling formalism . Any general method that cannot obtain complete, converged results for this model problem will face similar difficulties when applied to the full electron-hydrogen system. Therefore we believe it is essential to develop a numerical method capable of solving the TP model completely before angular momentum is included. Here we report such a method. Complete, precision results for $`e^{-}+\mathrm{H}(1\mathrm{s})`$, accurate to 1%, are presented for total energies of 3 and 2 Rydbergs (Ryd). Atomic units (Ryd energy units) are used throughout this work unless stated otherwise.
Our numerical method may be summarized as follows. The model Schrödinger equation is integrated outwards from the atomic center on a grid of fixed spacing $`h`$. The number of difference equations is reduced each step outwards using an algorithm due to Poet , resulting in a propagating solution of the partial-differential equation. By imposing correct asymptotic boundary conditions on this general, propagating solution, the particular solution that physically corresponds to scattering is obtained along with the scattering amplitudes.
The Schrödinger equation in the TP model is given by
$$\left(\frac{\partial ^2}{\partial x^2}+\frac{\partial ^2}{\partial y^2}+\frac{2}{\mathrm{min}(x,y)}+E\right)\mathrm{\Psi }(x,y)=0,$$
(1)
with boundary conditions
$$\mathrm{\Psi }(x,0)=\mathrm{\Psi }(0,y)=0$$
(2)
and symmetry condition
$$\mathrm{\Psi }(y,x)=\pm \mathrm{\Psi }(x,y),$$
(3)
depending on whether the two electrons form a singlet ($`+`$) or triplet ($`-`$) spin state. Eq. (1) is separable in the two regions $`x\le y`$ and $`x\ge y`$. Because of the symmetry condition (3), we can solve Eq. (1) in just one of these regions and this is sufficient to determine all of the scattering information. For brevity, we do not explicitly indicate the total spin since the singlet and triplet cases require completely separate calculations. For $`x\ge y`$, the wave function may be written
$`\mathrm{\Psi }(x,y)=\psi _{ϵ_i}(y)e^{-ik_{ϵ_i}x}+{\displaystyle \sum _{j=1}^{\infty }}C_{ϵ_ji}\psi _{ϵ_j}(y)e^{ik_{ϵ_j}x}`$ (4)
$`+{\displaystyle \int _0^{\infty }}dϵ_bC_{ϵ_bi}\psi _{ϵ_b}(y)e^{ik_{ϵ_b}x}.`$ (5)
The $`\psi _ϵ`$ are bound and continuum states of the hydrogen atom with zero angular momentum:
$`\psi _ϵ(y)=ye^{-qy}{}_1F_1(1-1/q,2;2qy).`$ (6)
Here $`q^2=-ϵ`$, where $`ϵ`$ is the inner electron energy, and $`{}_1F_1`$ is the confluent hypergeometric function. The momenta in (5) are fixed by energy conservation according to
$$ϵ_i+k_{ϵ_i}^2=ϵ_j+k_{ϵ_j}^2=ϵ_b+k_{ϵ_b}^2=E,$$
(7)
where $`E>0`$ is the total energy. The $`C_{ϵi}`$ are related to S-matrix elements by normalization factors:
$$\mathrm{S}_{ϵ_ji}=\left(\frac{k_{ϵ_j}}{k_{ϵ_i}}\right)^{1/2}\left(\frac{j}{i}\right)^{3/2}C_{ϵ_ji}$$
(8)
for discrete transitions and
$$\mathrm{S}_{ϵ_bi}=\left(\frac{k_{ϵ_b}}{k_{ϵ_i}}\right)^{1/2}\left(\frac{1}{i}\right)^{3/2}\left[\frac{1-e^{-2\pi /k}}{4k}\right]^{1/2}C_{ϵ_bi}$$
(9)
for ionization, where $`k=\sqrt{ϵ_b}`$. Cross sections are obtained from S-matrix elements in the usual manner.
To convert the partial-differential equation (1) into difference equations we impose a grid of fixed spacing $`h`$ and approximate derivatives by finite differences. After applying the Numerov scheme in both the $`x`$ and $`y`$ directions, our difference equations have the form
$$𝐀^{(i)}\overline{\mathrm{\Psi }}^{(i-1)}+𝐁^{(i)}\overline{\mathrm{\Psi }}^{(i)}+𝐂^{(i)}\overline{\mathrm{\Psi }}^{(i+1)}=\mathrm{𝟎},$$
(10)
Here we have collected the various $`\mathrm{\Psi }_j^{(i)},j=1,2,\mathrm{},i`$, where $`\mathrm{\Psi }_j^{(i)}\mathrm{\Psi }(x=ih,y=jh)`$, into a vector $`\overline{\mathrm{\Psi }}^{(i)}`$. The matrices $`𝐀^{(i)}`$, $`𝐁^{(i)}`$ and $`𝐂^{(i)}`$ are completely determined by the formulas given by Poet .
At each value of $`i`$ we can solve our equations if we apply symbolic boundary conditions at $`i+1`$ \[solve for $`\mathrm{\Psi }_j^{(i)}`$ in terms of $`\mathrm{\Psi }_j^{(i+1)}`$ ($`j=1,2,\mathrm{},i`$)\]. This procedure yields a propagation matrix $`𝐃^{(i)}`$:
$$\overline{\mathrm{\Psi }}^{(i)}=𝐃^{(i)}\overline{\mathrm{\Psi }}^{(i+1)}.$$
(11)
We can obtain a recursion relation for $`𝐃^{(i)}`$ by using (11) to eliminate $`\overline{\mathrm{\Psi }}^{(i1)}`$ from equation (10):
$$\left[𝐁^{(i)}+𝐀^{(i)}𝐃^{(i-1)}\right]\overline{\mathrm{\Psi }}^{(i)}=-𝐂^{(i)}\overline{\mathrm{\Psi }}^{(i+1)}.$$
(12)
Comparing (12) with (11),
$$𝐃^{(i)}=-\left[𝐁^{(i)}+𝐀^{(i)}𝐃^{(i-1)}\right]^{-1}𝐂^{(i)}.$$
(13)
Thus each $`𝐃^{(i)}`$ is determined from the previous one ($`𝐃^{(1)}`$ can be determined by inspection).
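For illustration, the recursion (13) takes only a few lines to implement. In the sketch below, `build_ABC(i)` is a hypothetical placeholder for the construction of the Numerov matrices from Poet’s formulas; only their shapes are assumed:

```python
# A minimal sketch of the D-matrix recursion (13); `build_ABC` is a
# hypothetical constructor for the Numerov matrices A_i (i x (i-1)),
# B_i (i x i) and C_i (i x (i+1)).
import numpy as np

def propagate_D(build_ABC, D1, imax):
    D = D1                                   # D^{(1)}, determined by inspection
    for i in range(2, imax + 1):
        A, B, C = build_ABC(i)
        D = -np.linalg.solve(B + A @ D, C)   # Eq. (13)
    return D                                 # D^{(imax)}, an imax x (imax+1) matrix
```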
In the asymptotic region, the form of the wave function is known and is given in terms of the $`C_{ϵi}`$ by
$`\overline{\mathrm{\Psi }}^{(i)}\simeq 𝐈^{(i)}+𝐑^{(i)}𝐂.`$ (14)
Here the matrix $`𝐈^{(i)}`$ contains the incident part of the asymptotic solution while $`𝐑^{(i)}`$ contains the reflected part. The asymptotic solution is identical to the full solution, Eq. (5), except that the quadrature over the continuum extends only up to the total energy $`E`$. The infinite summation over discrete channels is truncated to some finite integer $`N_d`$ and the quadrature over the two-electron continuum is performed prior to matching by first writing the $`C_{ϵ_bi}`$ as a power series in $`ϵ_b`$:
$$C_{ϵ_bi}\simeq \sum _{n=1}^{N_c}\mathrm{c}_{ni}ϵ_b^n.$$
(15)
The matching procedure then determines the (in practice, much smaller set of) coefficients $`\mathrm{c}_{ni}`$, rather than the $`C_{ϵ_bi}`$ directly, which eliminates ill conditioning .
To extract an $`N\times N`$ coefficient matrix, where $`N=N_d+N_c`$, we need only $`N`$ of the $`i`$ equations (11). Alternatively, one may use all $`i`$ equations as in Poet . In this case, the system of equations is overdetermined. Nevertheless, a solution can be found by the standard method of minimizing the sum of the squares of the residuals \[the differences between the left- and right-hand sides of equations (11)\]. Previously , we found that the least-squares method is generally stabler than keeping any subset of just $`N`$ equations (11).
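In outline, the matching step reduces to a single least-squares solve. The sketch below assumes the incident vectors $`𝐈^{(i)}`$, reflected matrices $`𝐑^{(i)}`$ and the propagated $`𝐃^{(i)}`$ have already been assembled; substituting (14) into (11) gives $`(𝐑^{(i)}-𝐃^{(i)}𝐑^{(i+1)})𝐂=𝐃^{(i)}𝐈^{(i+1)}-𝐈^{(i)}`$:

```python
# Least-squares matching of the propagated solution to the asymptotic form.
# All inputs are assumed given: I_i (length i), R_i (i x N), and D_i (i x (i+1)).
import numpy as np

def match(I_i, R_i, I_ip1, R_ip1, D_i):
    G = R_i - D_i @ R_ip1          # coefficient matrix of the overdetermined system
    b = D_i @ I_ip1 - I_i
    c, *_ = np.linalg.lstsq(G, b, rcond=None)
    return c                        # the N coefficients of Eq. (14)
```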
Our numerical method is stable and rapidly convergent. For a given grid spacing $`h`$, we established convergence in propagation distance by performing the matching every 40 a.u. until convergence was obtained. At each matching radius, both the number of discrete channels $`N_d`$ and the number of expansion functions for the continuum $`N_c`$ were varied to obtain convergence. Finally, the entire calculation was repeated for a finer grid (using one-half the original grid spacing $`h`$).
The biggest advantage of having a general, propagating solution is that once the grid spacing is chosen, a “single” calculation is all that is needed to establish convergence for the remaining numerical parameters. This is because the D-matrix, the calculation of which consumes nearly all the computational effort, is independent of asymptotic boundary conditions. Thus, in a typical calculation, the same D-matrix is used for, e.g., $`N_c=0,1,\mathrm{},9`$ while $`N_d`$ runs from 1 to 30. This would have required 300 completely separate calculations (each taking about the same time as our “one” calculation) had we solved the original global matrix equations (10).
We have performed complete calculations for electrons colliding with atomic hydrogen at impact energies of 54.4 and 40.8 eV (total energies of 3 and 2 Ryd, respectively). In Table I, we present our calculated cross sections for $`e^{-}+\mathrm{H}(1s)\to e^{-}+\mathrm{H}(ns),n\le 8`$. The grid spacing is $`h=1/5`$ a.u. (results using one-half this spacing differed by less than 0.1% for discrete excitations and 0.5% for elastic scattering). One of the advantages of our direct approach is that we are able to obtain the amplitudes for higher-level (Rydberg) transitions as easily as those for low-level excitations, provided the matching radius is large enough to enclose the final Rydberg state. This is in contrast to some other approaches, such as the CCC, which lose accuracy for higher-level transitions.
In Figures 1-4, we present our results (labeled FDM for finite-difference method) for the single-differential cross section (SDCS). For a total energy of 3 Ryd, 240 a.u. proved to be a sufficient matching radius to get convergence of the SDCS and for E = 2 Ryd, a radius of 360 a.u. was required. The SDCS is more sensitive to the number of expansion functions for the continuum than the other observables, particularly about $`ϵ_b=E/2`$. Nevertheless, convergence to better than 1% was readily obtained using 7-8 functions (the largest discrepancy in the SDCS between $`N_c=7`$ and $`N_c=8`$ was smaller than 0.3%; even using just 6 expansion functions gave results accurate to 1%).
Also shown in Figs. 1-4 are the results of convergent close-coupling (CCC) calculations . The CCC method of Bray employs a “distinguishable electron” prescription, which produces energy distributions that are not symmetric about $`ϵ_b=E/2`$. Stelbovics has shown that a properly symmetrized CCC amplitude yields SDCS that are symmetric about $`E/2`$ as well as being four times larger at $`ϵ_b=E/2`$ than those assuming distinguishable electrons. (Note that our singlet FDM results at $`ϵ_b=E/2`$ are about four times larger than the corresponding CCC results.) Other than making the energy distributions symmetric, it is clear from the figures that symmetrization (coherent summation of the CCC amplitudes $`C_{ϵ_bi}`$ and $`C_{ϵ_ai}`$, where $`ϵ_a=E-ϵ_b`$, which correspond to physically indistinguishable processes) will significantly affect only singlet scattering (and then only near $`ϵ_b=E/2`$), since $`C_{ϵ_bi}`$ is practically zero for $`ϵ_b>E/2`$. For singlet scattering, the CCC oscillates about the true value of SDCS, except near (and beyond) $`ϵ_b=E/2`$. CCC results for triplet scattering, on the other hand, are in very good agreement with our results for $`0\le ϵ_b\le E/2`$.
Some very recent results from Baertschy et al. have also been included in the figures. Baertschy et al. rearrange the Schrödinger equation to solve for the outgoing scattered wave. They use a two-dimensional grid like ours, but scale the coordinates by a complex phase factor beyond a certain radius where the tail of the Coulomb potential is ignored. As a result, the scattered wave decays like an ordinary bound state beyond this cut-off radius, which makes the asymptotic boundary conditions very simple. By computing the outgoing flux directly from the scattered wave at several large cut-off radii, and extrapolating to infinity, they obtain the single-differential ionization cross section without having to use Coulomb three-body boundary conditions. This method, called exterior complex scaling (ECS), has just been extended to the full electron-hydrogen ionization problem . It is seen from Figs. 1-4 that the ECS results are in good agreement with our FDM results except when the energy fraction $`ϵ_b/E`$ approaches 0 or 1. Baertschy et al. note that their method may be unreliable as $`ϵ_b`$ approaches 0 or $`E`$ due to “contamination” of the ionization flux by contributions from discrete excitations.
We note also the recent work of Miyashita et al. , who have presented SDCS for total energies of 4, 2, and 0.1 Ryd using two different methods. One produces an asymmetric energy distribution similar to that of CCC while the other gives a symmetric distribution. Both contain oscillations. The mean of their symmetric curve at $`E=2`$ Ryd (40.8 eV impact energy) is in reasonable agreement with our calculations.
In conclusion, we have presented complete, precision results for the Temkin-Poet electron-hydrogen scattering problem for impact energies of 54.4 and 40.8 eV. It may be possible to improve the speed of the present method by using a variable-spaced grid, like that used by Botero and Shertzer in their finite-element analysis (this would greatly reduce storage requirements as well). Once we have optimized our code for this simplified model we will proceed to include angular momentum. When angular momentum is included, the ionization boundary condition is no longer separable and this is the major challenge for generalizing the present approach to the full electron-hydrogen scattering problem.
The authors gratefully acknowledge the financial support of the Australian Research Council for this work.
# Ergodic Properties of Classical SU(2) Lattice Gauge Theory
## I Introduction
Extensive experimental efforts are under way at Brookhaven National Laboratory and CERN to produce and investigate the new deconfined, chirally symmetric high-temperature phase of QCD, usually called the quark-gluon-plasma (QGP). While the very high energy densities generated in high-energy nuclear collisions virtually guarantee that some new state of matter is reached, there are still important unresolved theoretical problems relating to the description of this state. One missing, critical ingredient is a non-perturbative approach to dynamical QCD processes far from thermodynamical equilibrium.
The study of non-equilibrium dynamics of relativistic quantum fields is currently an active area of theoretical research . Approaches that go beyond perturbation theory include descriptions in terms of probabilistic transport equations , and deterministic or stochastic classical equations for the infrared degrees of freedom of the quantum fields . In the special, but important case of non-Abelian gauge theories, the extreme infrared limit has been long known to correspond to a dynamical system exhibiting classical as well as quantum chaos . Several years ago, this result was extended to spatially varying, lattice regulated Yang-Mills fields by numerical calculation of the maximal Lyapunov exponents and the complete ergodic Lyapunov spectrum of classical SU(2) gauge theory . The most intriguing results with implications for relativistic heavy ion physics are:
1. The ergodic Lyapunov spectrum looks exactly as expected for a globally hyperbolic system.
2. The largest Lyapunov exponent appears to be related to the plasmon damping rate as predicted by high temperature perturbation theory .
3. The magnitude of the maximal Lyapunov exponent for SU(3) indicates a rapid thermalization of gluons in heavy-ion collisions.
These results suggest the extension of this approach to a systematic semi-classical description of the dynamics of Yang-Mills field theories. The success of such an approach will ultimately depend on one’s ability to find practical methods for the application of periodic orbit theory to systems with many degrees of freedom. We discuss a very first step in this direction.
We present results of an investigation of the relation between the Lyapunov exponents of periodic and ergodic orbits. Periodic orbit theory, in the framework of the thermodynamic formalism, makes detailed predictions for the statistical properties of Lyapunov exponents of generic orbits in Anosov systems, but few studies of these relations appear to have been made for specific chaotic, non-linear dynamical systems. This motivated our numerical study of the relation between the Lyapunov exponents of periodic orbits and generic trajectories in a system for which the Lyapunov exponents of periodic orbits (henceforth simply called “periodic Lyapunov exponents”) are known for all orbits below a certain period: the two-dimensional hyperbola billiard . For this system, therefore, powerful mean value theorems can be invoked to predict analytical relations which can be checked numerically. Below, we present our numerical results confirming the general connection between the Lyapunov exponents for ergodic and periodic orbits as well as for their fluctuations.
We then discuss the corresponding properties of the Lyapunov exponents of ergodic trajectories for classical SU(2) Yang-Mills theory on a three-dimensional lattice. The observed similarities suggest that this system is also globally hyperbolic and could, in principle, be treated within the framework of periodic orbit theory. Our conjecture yields a prediction for the fluctuation properties of the ergodic Lyapunov exponents which is verified numerically. On the basis of the relation between these fluctuations and the fluctuations of the entropy growth rate we obtain a prediction of the magnitude of entropy fluctuations as a function of space-time volume. We find that for the conditions occurring in high energy nuclear collisions these fluctuations are expected to be very small, in agreement with observations.
We emphasize that it is presently impossible to predict how far this approach will carry toward a description of the dynamics of non-equilibrium processes in QCD. Classical Yang-Mills equations can only be used to estimate a very limited number of dynamical parameters of the QGP, namely those which have a well-defined classical limit, such as the logarithmic entropy growth rate, $`d(\mathrm{ln}S)/dt`$, but not quantities such as the energy or entropy density. Our analysis is only relevant to the fluctuation properties of such, essentially “classical” quantities. However, we hasten to point out that, independently of the specific application considered here, an improved understanding of the connection between quantum field theory and periodic orbit theory is of fundamental theoretical relevance for non-linear dynamics in general. To our knowledge for the first time, we propose a general relationship between the mean periodic Lyapunov exponents of a dynamical system, its mean ergodic Lyapunov exponents, and the ergodic autocorrelation time. This general relationship makes it possible to extract important new information for any higher-dimensional system for which the explicit construction of the periodic orbits is practically not feasible.
The basic assumption underlying periodic orbit theory is that the periodic orbits sample the phase space of a non-linear dynamical system in such a manner that its averaged properties can be systematically reconstructed from the properties of the periodic orbits. For each such orbit there is a spectrum of characteristic Lyapunov exponents that describe how fast the separation between neighboring orbits increases with time. While periodic orbit theory is an extremely powerful tool, its range of applicability is strongly limited by the difficulties encountered in determining the complete set of periodic orbits. For any field theory with its potentially infinitely many degrees of freedom, the task of numerically constructing the periodic orbits looks hopeless. It is, however, relatively easy to obtain ergodic Lyapunov exponents by numerical integration of the equations of motion . Since it seems plausible that every ergodic trajectory eventually comes close to any periodic orbit, any infinite ergodic orbit should sample all periodic orbits. Thus it appears as a natural conjecture that the average properties of ergodic Lyapunov exponents and the average properties of periodic Lyapunov exponents should be related. It is this relationship that we want to discuss in the following.
## II General Relations
Before we investigate and confirm the relationship between ergodic and periodic orbits for a simple but non-trivial system for which all periodic Lyapunov exponents (up to a certain period) are known, namely, the two-dimensional hyperbola billiard , we review some general relations between Lyapunov exponents of periodic and generic trajectories. In the next section, we will compare these analytic predictions for the properties of the ergodic Lyapunov exponents $`\lambda _\mathrm{r}`$ with those obtained by numerical integration of a randomly chosen ergodic trajectory $`\vec{x}(t)=\vec{x}_0(t)+\delta \vec{x}(t)`$:
$$\lambda _\mathrm{r}=\lim _{\delta \vec{x}(0)\to 0}\lim _{t\to \infty }\frac{1}{t}\mathrm{ln}\frac{|\delta \vec{x}(t)|}{|\delta \vec{x}(0)|},$$
(1)
where the index $`r`$ indicates the random starting point. (We remind the reader that for a fully ergodic system this yields the maximal ergodic Lyapunov exponent, which for $`d=2`$ degrees of freedom is the unique positive exponent.)
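For reference, Eq. (1) is evaluated in practice with the standard rescaling procedure. The sketch below is a generic illustration, not our production code; it assumes a user-supplied routine `step(x, dt)` that advances the state vector of the dynamical system by one time step:

```python
# Benettin-type estimate of the maximal Lyapunov exponent, Eq. (1).
import numpy as np

def max_lyapunov(step, x0, dt, n_steps, d0=1e-8, seed=0):
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    v = rng.standard_normal(x.shape)
    y = x + d0 * v / np.linalg.norm(v)       # initial separation of size d0
    log_sum = 0.0
    for _ in range(n_steps):
        x, y = step(x, dt), step(y, dt)
        d = np.linalg.norm(y - x)
        log_sum += np.log(d / d0)
        y = x + (d0 / d) * (y - x)           # rescale the separation back to d0
    return log_sum / (n_steps * dt)
```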
In a Hamiltonian hyperbolic dynamical system with $`d`$ degrees of freedom, ergodicity implies that the sum of its $`d-1`$ positive ergodic Lyapunov exponents can also be obtained as the ergodic mean of the local expansion rate,
$$\lim _{t\to \infty }h_\mathrm{r}(t)\equiv \lim _{t\to \infty }\frac{1}{t}\int _0^t\chi (\vec{x}(t^{\prime }))\,dt^{\prime }=\sum _{j=1}^{d-1}\lambda _{r,j}=h_{\mathrm{KS}}.$$
(2)
Here $`h_{\mathrm{KS}}`$ denotes the Kolmogorov-Sinai entropy and
$$\chi (\vec{x}(t))=\frac{d}{dt}\mathrm{ln}\,\mathrm{det}\left(\frac{\partial \vec{x}(t)}{\partial \vec{x}(0)}\right)_{\mathrm{expanding}}$$
(3)
is the local rate of expansion along the trajectory $`\vec{x}(t)`$. Due to the equidistribution of periodic orbits in phase space it is possible to evaluate the ergodic mean in (2) by weighted sums over periodic orbits. In fact, for hyperbolic systems the thermodynamic formalism allows one to express certain invariant measures on phase space in terms of averages over periodic orbits, see, e.g., . One is in particular able to obtain a relation that establishes a direct connection between the positive ergodic Lyapunov exponents $`\lambda _{r,j}`$ and those of periodic orbits. Labelling periodic orbits by $`\nu `$, and denoting their periods and positive Lyapunov exponents by $`T_\nu `$ and $`\lambda _{\nu ,j}`$, respectively, this relation reads
$$\sum _{j=1}^{d-1}\lambda _{r,j}=\lim _{t\to \infty }\frac{\sum _{t\le T_\nu \le t+\epsilon }\left(\sum _{j=1}^{d-1}\lambda _{\nu ,j}\right)\mathrm{exp}\left(-\sum _{j=1}^{d-1}\lambda _{\nu ,j}T_\nu \right)}{\sum _{t\le T_\nu \le t+\epsilon }\mathrm{exp}\left(-\sum _{j=1}^{d-1}\lambda _{\nu ,j}T_\nu \right)},$$
(4)
where $`\epsilon >0`$ is arbitrary. Within the thermodynamic formalism the topological pressure $`P(\beta )`$ was introduced as a useful tool to analyze invariant measures on phase space in terms of periodic orbits as, e.g., in (4). This function can be expressed as
$$P(\beta )=\lim _{t\to \infty }\frac{1}{t}\mathrm{ln}\sum _{t\le T_\nu \le t+\epsilon }\mathrm{exp}\left(-\beta \sum _{j=1}^{d-1}\lambda _{\nu ,j}T_\nu \right),$$
(5)
and it is not difficult to derive from (5) that $`P(\beta )`$ is monotonically decreasing and convex. The exponential proliferation of the number of periodic orbits immediately implies that $`P(0)=h_{top}`$ (topological entropy). Moreover, the arithmetic average of the sum of the positive periodic Lyapunov exponents is given by $`\overline{\lambda }=-P^{\prime }(0)`$. The relation (4) then follows from (2) and from the non-trivial identity $`-P^{\prime }(1)=h_{\mathrm{KS}}`$. One also concludes that the three quantities measuring a mean separation of neighboring trajectories are ordered in the following way: $`\overline{\lambda }\ge h_{top}\ge h_{\mathrm{KS}}`$. For further information see, e.g., .
Our next goal is to investigate the fluctuations of the local rate of expansion (3), when integrated up to a sampling time $`t_\mathrm{s}`$, about its ergodic mean (2). We recall that this quantity was denoted as $`h_\mathrm{r}(t_\mathrm{s})`$ in (2). For (uniformly) hyperbolic dynamical systems one expects that observables sampled along ergodic trajectories up to time $`t_\mathrm{s}`$ show Gaussian fluctuations about their ergodic mean. Indeed, in many cases a central limit theorem holds true that also predicts the widths of these Gaussians to scale as $`t_\mathrm{s}^{-1/2}`$ for large sampling times $`t_\mathrm{s}`$. More precisely, Waddington has shown that for Anosov systems (i.e., fully hyperbolic systems on compact phase spaces) the difference
$$\sqrt{t_\mathrm{s}}\left[h_\mathrm{r}(t_\mathrm{s})-h_{\mathrm{KS}}\right]$$
(6)
shows Gaussian fluctuations with variance $`P^{\prime \prime }(1)`$ in the limit $`t_\mathrm{s}\mathrm{}`$. This means that
$$\mathrm{\Delta }h_\mathrm{r}(t_\mathrm{s})\simeq \sqrt{P^{\prime \prime }(1)/t_\mathrm{s}},\qquad t_\mathrm{s}\to \infty .$$
(7)
According to (5) the quantity $`P^{\prime \prime }(1)`$ can be expressed in terms of periodic orbit sums as
$$P^{\prime \prime }(1)=\lim _{t\to \infty }t\left[\frac{\sum _\nu \left(\sum _j\lambda _{\nu ,j}\right)^2\mathrm{exp}\left(-\sum _j\lambda _{\nu ,j}T_\nu \right)}{\sum _\nu \mathrm{exp}\left(-\sum _j\lambda _{\nu ,j}T_\nu \right)}-\left(\frac{\sum _\nu \left(\sum _j\lambda _{\nu ,j}\right)\mathrm{exp}\left(-\sum _j\lambda _{\nu ,j}T_\nu \right)}{\sum _\nu \mathrm{exp}\left(-\sum _j\lambda _{\nu ,j}T_\nu \right)}\right)^2\right].$$
(8)
On the other hand, the variance of the distribution of the periodic Lyapunov exponents is related to $`P^{\prime \prime }(0)`$, since
$$P^{\prime \prime }(0)=\lim _{t\to \infty }t\left[\frac{\sum _{t\le T_\nu \le t+\epsilon }\left(\sum _{j=1}^{d-1}\lambda _{\nu ,j}\right)^2}{\sum _{t\le T_\nu \le t+\epsilon }1}-\left(\frac{\sum _{t\le T_\nu \le t+\epsilon }\left(\sum _{j=1}^{d-1}\lambda _{\nu ,j}\right)}{\sum _{t\le T_\nu \le t+\epsilon }1}\right)^2\right].$$
(9)
For the hyperbola billiard this variance was calculated numerically by Sieber , who found Gaussian distributions of the positive Lyapunov exponents of periodic orbits with $`N`$ bounces off the boundary. For large $`N`$ the widths of these Gaussians scale like
$$\stackrel{~}{\sigma }_N\simeq \frac{0.199}{\sqrt{N}}.$$
(10)
Taking into account that the mean length of periodic orbits with $`N`$ bounces scales as $`\overline{t}_N\simeq 2.027N`$ , this yields a prediction for the width of the distribution of periodic Lyapunov exponents expressed as a function of $`t`$ that scales as
$$\mathrm{\Delta }\lambda _\nu (t)\simeq \frac{0.283}{\sqrt{t}}$$
(11)
in the limit of long periodic orbits. One hence concludes that $`P^{\prime \prime }(0)=0.08`$.
The variance of the fluctuations (6) can also be related to the autocorrelation function
$$a(\tau )=\left\langle \chi (\vec{x}(\tau ))\chi (\vec{x}(0))\right\rangle -(h_{\mathrm{KS}})^2$$
(12)
of the local ergodic Lyapunov exponents, where $`\langle \cdots \rangle `$ denotes a phase space average. In order to derive this connection one averages the square of (6) over phase space, which then leads to
$$t(\mathrm{\Delta }h_\mathrm{r}(t))^2=\frac{1}{t}\int _{-t}^{+t}(t-|\tau |)a(\tau )\,d\tau .$$
(13)
A connection with the topological pressure can be established because (6) and (7) imply that the autocorrelation function (12) vanishes faster than $`1/\tau `$ as $`\tau \mathrm{}`$. One can therefore perform the limit $`t\mathrm{}`$ on both sides of (13), yielding
$$P^{\prime \prime }(1)=\lim _{t\to \infty }t(\mathrm{\Delta }h_\mathrm{r}(t))^2=\int _{-\infty }^{+\infty }a(\tau )\,d\tau .$$
(14)
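Relation (13) can be tested directly on sampled data: given a long, evenly spaced series of the local expansion rate $`\chi `$, one compares the variance of its windowed means with the lag-weighted sum over the empirical autocorrelation. A minimal sketch (the array `chi` and its spacing `dt` are assumed given):

```python
import numpy as np

def check_relation_13(chi, dt, t_window):
    """Return (lhs, rhs) of Eq. (13) estimated from the sampled series chi."""
    n = int(round(t_window / dt))
    k = len(chi) // n
    means = chi[:k * n].reshape(k, n).mean(axis=1)   # h_r over windows of length t
    lhs = t_window * means.var()
    c = chi - chi.mean()                             # subtract the h_KS estimate
    a = np.array([(c[:len(c) - l] * c[l:]).mean() for l in range(n)])
    # a(tau) is symmetric, so fold the (t - |tau|) weighting over positive lags
    w = t_window - np.arange(n) * dt
    rhs = (dt / t_window) * (w[0] * a[0] + 2.0 * (w[1:] * a[1:]).sum())
    return lhs, rhs
```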
Finally we want to discuss the probability for deviations of the sum of the positive ergodic Lyapunov exponents, sampled over time $`t`$, from its ergodic mean $`h_{\mathrm{KS}}`$. To this end let $`p_t(h)`$ denote the probability density for $`h_\mathrm{r}(t)`$ to have a value $`h`$. Waddington has shown that for Anosov systems which are such that $`P^{\prime \prime }(\beta )\ne 0`$ for all $`\beta `$, this probability density has the form
$$p_t(h)=f(h)\sqrt{t}\,\mathrm{exp}(-g(h)t),$$
(15)
where $`f(h)`$ is a complicated, though uniquely fixed function. Moreover,
$$g(h)=-\inf _\beta \{h\beta +P(\beta +1)\}$$
(16)
is a strictly convex, non-negative function with a unique minimum at the ergodic mean $`h_{min}=h_{\mathrm{KS}}`$, where $`g(h_{\mathrm{KS}})=0`$. This means that for large $`t`$ the probability of large deviations of $`h_\mathrm{r}(t)`$ from the ergodic mean is exponentially small.
## III The Two-Dimensional Hyperbola Billiard
We test the above statements in the two-dimensional hyperbola billiard, for which all periodic orbits and their Lyapunov exponents are known up to a certain orbit period . In order to be able to compare our numerical results with the analytical predictions, which are based on periodic orbits in a restricted length range, we have limited the motion into the arms of the hyperbola billiard, using the cut-off $`|x|,|y|\le x_{\mathrm{lim}}=10/\sqrt{2}`$ and reflecting the motion horizontally or vertically at the boundary. We have not studied the dependence of our results on $`x_{\mathrm{lim}}`$ in any systematic fashion, but a cursory exploration did not reveal a significant dependence.
Our numerical result for the KS-entropy was obtained as $`h_{\mathrm{KS}}=\lambda _\mathrm{r}=0.575`$ by exploiting the relation (1) for the positive ergodic Lyapunov exponent. The arithmetic average of the periodic Lyapunov exponents and the topological entropy have previously been determined numerically as $`\overline{\lambda }=0.703`$ and $`h_{\mathrm{top}}=0.5925`$, respectively, so that the ordering $`\overline{\lambda }\ge h_{\mathrm{top}}\ge h_{\mathrm{KS}}`$ is respected. This provides a non-trivial test since the general theoretical statement has only been proven for uniformly hyperbolic systems (Anosov systems) on compact phase spaces. The hyperbola billiard is only non-uniformly hyperbolic and, moreover, without the imposed cut-off its phase space fails to be compact.
For the (cut-off) hyperbola billiard we found that the distributions of the ergodic Lyapunov exponents (1) that we determined numerically up to sampling times $`t_\mathrm{s}`$ are very well described by Gaussians, see Fig. 1, if the sampling time is not too small ($`t_\mathrm{s}\ge 1`$). For small sampling times, most of the phase space divergence occurs during intervals $`t_\mathrm{s}`$ when the trajectory reflects off the hyperbolic boundary, making the distribution of $`h_\mathrm{r}(t_\mathrm{s})=\lambda _\mathrm{r}(t_\mathrm{s})`$ strongly non-Gaussian in the limit $`t_\mathrm{s}\to 0`$. We made power-law fits of the form $`at_\mathrm{s}^b`$ to the dependence of the widths of these Gaussians on $`t_\mathrm{s}`$. This gave the result (see Fig. 2):
$$\mathrm{\Delta }h_\mathrm{r}(t_\mathrm{s})\simeq 0.86\,t_\mathrm{s}^{-1/2}.$$
(17)
We also determined the correlation function $`a(\tau )`$ for the hyperbola billiard by sampling $`h_\mathrm{r}(t_\mathrm{s})`$ in small intervals $`t_\mathrm{s}=1/(2\sqrt{2})`$. The result is shown in Fig. 3. Clearly, $`a(\tau )`$ falls off rapidly with a time constant of about $`t_c=6`$. Therefore, we can test the relation (13) by integrating the right-hand side numerically. For $`t=28.3`$, corresponding to the lower plot in Fig. 1, we obtain in this way the prediction $`\mathrm{\Delta }h_\mathrm{r}=0.197`$ with an estimated numerical uncertainty of about 25%. The value obtained from the Gaussian fit to the histogram in Fig. 1 is $`\mathrm{\Delta }h_\mathrm{r}=0.159`$. The quality of this agreement must be judged with the fact in mind that the correlation function $`a(\tau )`$, as well as the distribution of $`\chi (\vec{x}(t))`$, are highly singular for the hyperbola billiard in the limit $`\tau \to 0`$.
## IV The SU(2) gauge theory on a lattice
Let us now turn to a comparison with results obtained for ergodic orbits in the classical SU(2) Yang-Mills theory regularized on a lattice. In , the complete (positive) Lyapunov spectra were obtained for lattice volumes $`L^3`$ with $`L=1,2,3`$. We have extended these calculations to lattices of size $`L=4,6`$. All our calculations were performed for an average energy per plaquette $`E_\mathrm{p}\approx 1.8`$. For sufficiently long trajectories and fixed energy per lattice site the Lyapunov spectrum has a unique shape, independent of the lattice size, as shown in Fig. 5. Indeed, for a completely hyperbolic system, physical intuition requires that the Kolmogorov-Sinai entropy $`-P^{\prime }(1)`$ is an extensive quantity. For this to be true, the sum over all positive Lyapunov exponents must scale like the lattice volume $`L^3`$ and the shape of the distribution of Lyapunov exponents must be independent of $`L`$. Figure 5 confirms this expectation.
In Fig. 6 we show distributions of the sum over positive Lyapunov exponents as a function of the length of the sampled ergodic trajectories (obtained as a function of the sampling time $`t_\mathrm{s}`$ on a single, very long trajectory). Obviously, the distributions are nicely fitted by Gaussians whose widths decrease like $`1/\sqrt{t_\mathrm{s}}`$ (see Fig. 7). This behavior is identical to that of the two-dimensional hyperbolic system studied before (cf. Fig. 1). We also determined again the autocorrelation function $`a(\tau )`$ defined in (12) by sampling the distribution $`p_t(h)`$ with small time steps (see top part of Fig. 4). For the $`L=4`$ lattice the result is shown in the lower part of Fig. 4. This allows us to test the relation (13) connecting $`a(\tau )`$ with the variance of the ergodic Lyapunov exponents. Using (13) we obtain the value $`\mathrm{\Delta }h_\mathrm{r}=0.88`$ for $`t_\mathrm{s}=6`$, whereas the Gaussian fit to the sampled distribution shown in the top part of Fig. 6 gives $`\mathrm{\Delta }h_\mathrm{r}=0.83`$.
One can also read off from the distributions shown in Fig. 6 how the widths of the Gaussians scale with the lattice size $`L`$. To a very good approximation we find that the width is proportional to $`\sqrt{L^3}`$. If one includes the sampling time dependence, the variance of $`h_{\mathrm{KS}}`$ scales like $`\sqrt{L^3/t_\mathrm{s}}`$. As the mean value $`h_{\mathrm{KS}}`$ of the distribution $`p_t(h)`$ scales like $`L^3`$, this result confirms the Gaussian nature of the fluctuations. Our result also has important consequences for heavy-ion collisions. If fluctuations are Gaussian with a dimensional scale given by the mean maximal ergodic Lyapunov exponent, which is found numerically to be of order $`(0.5\mathrm{fm})^{-1}`$ , then for typical volumes and reaction times encountered in nuclear reactions the relative fluctuations must be very small, of order $`\sqrt{(0.5\mathrm{fm})^4/(5\mathrm{fm})^4}=0.01`$. This result is in agreement with a recent measurement of the fluctuations in relativistic heavy-ion collisions, which show that the primary event-by-event fluctuations in the mean value of the transverse momentum do not exceed 1 percent .
Let us stress that while it is consistent to assume that the SU(2) gauge theory treated as a classical field theory on the lattice is a hyperbolic system, our positive evidence is limited. It should be clear that it is impossible to exclude, by numerical calculations for a limited number of trajectories, that there are regions in the high-dimensional phase space of our lattice field theory which are not hyperbolic. (Then the SU(2) field on the lattice would not be an Anosov system.) Also it is unproven, though highly probable, that the addition of the quarks will not change the picture.
## V Conclusions
We have shown by numerical simulations that for a two-dimensional billiard the mean values for the ergodic and periodic Lyapunov exponents and their fluctuations as a function of trajectory length (i.e. time) are closely related. We have derived a general relation between their mean values and checked it numerically. This demonstrates that we understand the relationship between ergodic and periodic Lyapunov exponents for the hyperbola billiard. We have then analyzed in a similar way classical SU(2) gauge theory on a lattice. For all investigated properties we found good agreement with the expectations for a globally hyperbolic (Anosov) system. We conclude that for all quantities of interest which have a well-defined classical limit (like the growth rate of entropy after the initial energy deposition by hard interactions) the probability for large fluctuations should be exponentially small. For typical high-energy heavy-ion collisions (Pb+Pb) such fluctuations are estimated to be at most of the order of a few percent.
## Acknowledgments
We thank T. Guhr and M. Brack for very helpful discussions. B.M. acknowledges support by the Alexander von Humboldt-Stiftung (U.S. Senior Scientist Award) and by a grant from the U.S. Department of Energy (DE-FG02-96ER40495). A.S. acknowledges support by GSI and DFG. We also acknowledge computational support by the NC Supercomputing Center and the Intel Corporation.
# An Exactly Soluble Hierarchical Clustering Model: Inverse Cascades, Self-Similarity, and Scaling
## I Introduction
Clustering and aggregation play an important role in many complex systems. In this paper, we present an inverse cascade model for the self-similar growth of clusters. Elements are introduced at the smallest scale, which then coalesce to form larger and larger clusters. The inverse cascade is terminated by the loss of the largest clusters. The system is thus in a quasi-steady state with the loss of elements in large clusters balanced by the introduction of new elements. The clustering process is recognized to be a branching network similar to a DLA cluster or a river network. Individual clusters are analogous to branches, and coalescence is equivalent to the joining of two branches.
There is a wide range of applications for this analysis. As a specific example, we consider the forest-fire model which has been said to exhibit self-organized criticality. In one version of the forest-fire model, a square grid of sites is considered. At each time step, a model tree or a model spark is dropped on a randomly chosen site. If the site is unoccupied when a tree is dropped, it is “planted.” The sparking frequency $`f`$ is the inverse number of attempted tree drops before a spark is dropped. If the spark is dropped on an empty site, nothing happens; if it is dropped on a tree, it ignites and “burns” all adjacent trees in a model forest fire. In this model, individual trees are introduced at the smallest scale, clusters of trees coalesce to form larger and larger clusters. Significant numbers of trees are lost only in the largest fires that terminate the inverse cascade. The noncumulative frequency-area distribution for the fires is well approximated by a power-law relation
$$N\sim \frac{1}{A^\alpha }$$
(1)
with $`\alpha \approx 1`$. If the sparking frequency $`f`$ is relatively large, the largest fires are relatively small and the self-similar inverse cascade is valid only over a relatively small range of cluster sizes. If the sparking frequency $`f`$ is small, the fires that terminate the cascade are large and if $`f`$ is sufficiently small the fires will span the entire grid. The noncumulative frequency-area distribution of cluster sizes satisfies equation (1.1) with $`\alpha \approx 2`$ and the cumulative distribution of clusters with area larger than $`A`$ satisfies equation (1.1) with $`\alpha \approx 1`$. The behavior of the one-dimensional forest-fire model has been discussed in terms of a cascade by Paczuski and Bak. The inverse cascade analysis is also applicable to the sandpile model and the slider-block model. In the sandpile model the clusters are the metastable regions that participate in avalanches once they are triggered. In the slider-block model, the clusters are the metastable regions that participate in slip events once they are initiated.
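The rules of the forest-fire model just described translate directly into a simulation. The following is a minimal sketch, not the original code of the model's authors: it assumes a square grid with von Neumann (nearest-neighbor) adjacency, a spark dropped with probability $`f`$ per attempted drop (on average one spark per $`1/f`$ tree drops), and the grid size, sparking frequency and run length are arbitrary illustrative values:

```python
import random

L = 128                        # linear grid size (illustrative)
f = 1.0 / 100.0                # sparking frequency (illustrative)
steps = 200_000
grid = [[False] * L for _ in range(L)]
fire_sizes = []

def burn(i, j):
    """Burn the connected tree cluster containing site (i, j); return its area."""
    stack, area = [(i, j)], 0
    grid[i][j] = False
    while stack:
        x, y = stack.pop()
        area += 1
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            u, v = x + dx, y + dy
            if 0 <= u < L and 0 <= v < L and grid[u][v]:
                grid[u][v] = False
                stack.append((u, v))
    return area

for _ in range(steps):
    i, j = random.randrange(L), random.randrange(L)
    if random.random() < f:            # drop a spark instead of a tree
        if grid[i][j]:
            fire_sizes.append(burn(i, j))
    else:
        grid[i][j] = True              # plant a tree (no effect if occupied)

print(len(fire_sizes), "fires; largest burned area:", max(fire_sizes, default=0))
```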
One of the most striking patterns in biology is clusters or aggregations of animals. Examples range from bacteria to whales and include insects, fish, and birds. Bonabeau et al. showed that the frequency-number distribution of whales satisfies equation (1.1) with $`\alpha \approx 1`$. The model we present here should also be applicable to these biological problems.
## II Hierarchical Clustering
We consider a system of stationary entities that we shall refer to as elements. In terms of the forest-fire model, the elements are the trees that are planted on a lattice. The system is growing due to the steady injection of new elements that are added to locations that are not already occupied by previously injected elements. We define connected sets of elements, i.e. groups of elements that are in contact, to be clusters. Note, however, that our model does not require that elements be confined to lattice points. Neighbors can be defined with any metric (e.g. distance) condition, or according to a defined graph structure (e.g. lattice). In the forest-fire model, clusters are the groups of adjacent trees that would burn in a fire if a spark dropped on one of the trees in the cluster. We construct rules for assigning rank to clusters in such a system, based in spirit on the Strahler classification that was originally developed for branching in river networks. In this classification system, a stream with no upstream tributaries is defined to be of rank one; when two rank-one streams combine, they form a stream of rank two, and so forth. However, when streams of different rank combine, the rank of the dominant stream prevails. Our model for the growth of clusters is an extension of a scheme developed earlier which only allowed for the coalescence of clusters of the same rank. The new model is much richer in that it accomodates the coalescence of clusters of all ranks and can, therefore, describe a much wider array of phenomena.
The rules for our cluster model are:
1. We define a single element that is added to a system to be a cluster of rank 1.
2. If a new element is added adjacent to an existing cluster, we say that it is added to the cluster without changing that cluster’s rank, unless the cluster is a single element. In that special case, we define the two elements as forming a cluster of rank 2.
3. If a new element connects two existing clusters of ranks $`i`$ and $`j`$, respectively, then the rank of this new cluster is defined as $`i+1`$ when $`i=j`$ and as $`\mathrm{max}\{i,j\}`$ when $`i\ne j`$. In words, this is equivalent to saying that when two clusters of equal rank coalesce, then the rank increases by one; however, if the two clusters are not of equal rank, then the rank of the larger cluster prevails.
4. If a new element connects three or more clusters, then the rank of the new cluster is defined to be
* the maximal rank of these clusters, when one of the clusters has a rank exceeding that of all of the others, or
* the maximal rank of these clusters plus one, when there are two or more clusters of the same maximal rank.
\[This is a rare event—akin to a four-body interaction—and it is ignored in the model equations given below.\]
5. We terminate the inverse cascade of elements from small to large clusters by eliminating clusters of a specified high rank.
In Figure 1, we illustrate how this model works.
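For illustration, rules 1-4 can be implemented compactly with a union-find structure; the sketch below assumes, purely for concreteness, that elements are added on a square lattice (the rules themselves do not require one). Treating each newly added element as a rank-1 cluster that merges with its distinct neighboring clusters reproduces all four rules, including the tie-breaking of rule 4:

```python
class Clusters:
    def __init__(self):
        self.parent, self.rank = {}, {}      # union-find root and cluster rank

    def find(self, s):
        while self.parent[s] != s:
            self.parent[s] = self.parent[self.parent[s]]   # path halving
            s = self.parent[s]
        return s

    def add(self, site):
        """Add one element at lattice point `site`; return its cluster's new rank."""
        self.parent[site], self.rank[site] = site, 1        # rule 1
        x, y = site
        roots = {self.find(n) for n in ((x+1, y), (x-1, y), (x, y+1), (x, y-1))
                 if n in self.parent}
        roots.add(site)
        ranks = [self.rank[r] for r in roots]
        top = max(ranks)
        new_rank = top + 1 if ranks.count(top) >= 2 else top  # rules 2-4
        root = roots.pop()
        for r in roots:
            self.parent[r] = root
        self.rank[root] = new_rank
        return new_rank
```

For example, adding an element adjacent to a single-element cluster yields two rank-1 clusters tied at the maximum, so the new rank is 2, as required by rule 2.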
We now wish to establish the dynamical equations governing the evolution of this system. Let us define $`N_i`$ to be the number of clusters with rank $`i`$, for $`i\ge 1`$. Let $`m_i`$ be the average mass—i.e., the number of elements—of a cluster of rank $`i`$. Then, the total mass $`M_i`$ of the clusters of rank $`i`$ is given by
$$M_i=N_im_i.$$
(2)
For convenience, we will define the mass of a single element to be one, namely $`m_1=1`$. For example, in two dimensions, we can regard $`m_i`$ as the mean area $`A_i`$ of a cluster of rank $`i`$. This would be the case in the forest-fire model.
We now develop a mean-field approximation describing the dynamical evolution prescribed by the mapping rules given above. As indicated, we ignore the simultaneous coalescence of more than two clusters. We denote the instantaneous change in all quantities using the mapping symbol $`\mapsto `$. Accordingly, when two clusters of ranks $`i`$ and $`j`$ coalesce, the values of $`N_i`$ and $`M_i`$ are modified as follows. For $`i=j`$,
$$N_{i+1}\mapsto N_{i+1}+1,\qquad N_i\mapsto N_i-2,$$
(3)
$$M_{i+1}\mapsto M_{i+1}+2m_i,\qquad M_i\mapsto M_i-2m_i,$$
(4)
and for $`i<j`$,
$$N_i\mapsto N_i-1,\qquad N_j\mapsto N_j,$$
(5)
$$M_j\mapsto M_j+m_i,\qquad M_i\mapsto M_i-m_i,$$
(6)
with equivalent expressions for $`j<i`$. In these equations for $`M_j`$, we have ignored the addition of an element that bridges or joins the two clusters. Since $`m_i`$ will be shown to increase in an essentially geometric progression with respect to the rank $`i`$, the omission of that solitary unit mass in the calculation does not influence the asymptotic properties as $`i\mathrm{}`$.
In our model, coalescence occurs when a new element connects two existing clusters. (We have already indicated that 4-body and higher order effects will be neglected.) Accordingly, in the mean field approximation, we assume that the rate $`r_{ij}`$ of coalescence between clusters of ranks $`i`$ and $`j`$ is proportional to the product of their total numbers, $`N_i`$ and $`N_j`$, and to the product of their boundary sizes, $`\ell _i`$ and $`\ell _j`$, and is naturally related to the joint probability of the new element connecting two pre-existing clusters. For example in two dimensions, $`\ell _i`$ refers to the effective length of the cluster boundary. Thus, we assume that
$$r_{ij}\propto N_i\ell _iN_j\ell _j.$$
(7)
This is an Euclidean approximation, and emerges in the spirit of classical kinetic theory, although the mechanics of this problem is entirely different. In sections IV and VII, this model will be modified to accommodate the possible fractal geometry of clusters.
We now define
$$L_i=N_i\ell _i$$
(8)
to be the total size of the boundary associated with clusters of rank $`i`$. We select the normalization for our time-scale so that $`r_{ij}=L_iL_j`$. Accordingly, let $`C`$ be the injection rate of single elements, utilizing this time scale. The evolution of the system can be determined by appropriately adapting equations (2.2)–(2.5). From equations (2.2) and (2.4), we write
$$\dot{N}_1=C-2L_1^2-\sum _{j=2}^{\infty }L_1L_j,$$
(9)
$$\dot{N}_i=L_{i-1}^2-2L_i^2-\sum _{j=i+1}^{\infty }L_iL_j,\qquad \mathrm{for}\ i>1.$$
(10)
In equation (2.8), we observe that the rate of change in the number of clusters of rank 1 is equal to the injection rate minus the rate of coalescence of rank 1 clusters together with the rate of coalescence of rank 1 clusters with clusters of larger rank. The factor of 2 appears because two rank 1 clusters were lost in coalescing to form a rank 2 cluster. Meanwhile, in equation (2.9), we observe that the rate of change in the number of clusters of rank $`i`$ is equal to the rate of rank $`i`$ cluster formation from the coalescence of pairs of rank $`i1`$ clusters, minus the rate of coalescence of pairs of rank $`i`$ clusters, together with the rate of coalescence of rank $`i`$ clusters with clusters of larger rank $`j>i`$.
In a similar way, taking into account $`m_1=1`$, we can express the mass-balance in the system, derived from equations (2.3) and (2.5), according to
$$\dot{M}_1=C2L_1^2\underset{j=2}{\overset{\mathrm{}}{}}L_1L_j,$$
(11)
$`\dot{M}_i=2L_{i1}^2m_{i1}`$ $`+`$ $`{\displaystyle \underset{k=1}{\overset{i1}{}}}L_iL_km_k2L_i^2m_i`$ (12)
$``$ $`{\displaystyle \underset{j=i+1}{\overset{\mathrm{}}{}}}L_iL_jm_i,\mathrm{for}i>1.`$ (13)
Note that equations (2.8) and (2.10) are identical since $`M_1=N_1`$.
We observe that the equations above have the potential for self-similarity, since most of the sums are infinite in extent, and might be expected to be convergent. Intuitively, we expect that $`L_j`$ will diminish as $`j`$ increases; while the boundary size of individual clusters of rank $`j`$ increase, their absolute numbers will decrease even more rapidly so that the total boundary size in clusters of rank $`j`$ will be monotone decreasing. The finite sum, which appears in equation (2.11), is somewhat more involved. Nevertheless, it is reasonable to expect that the product of $`m_k`$ with $`L_k`$ will steadily diminish as $`k`$ becomes smaller and that negligible contributions emerge from low values of $`k`$. Finally, it is easy to see that all of the governing rate equations will quickly converge, in the sense of an inverse cascade from $`i=1`$ to some finite cut-off, as $`t\mathrm{}`$. As $`N_1`$ begins to grow, it provides a stimulus to the growth of $`N_2`$, and so on. Similarly, as the masses at each rank in the system grow, they will in turn cause the boundary size $`\mathrm{}_i`$ of each cluster of rank $`i`$ to grow, basically in proportion to some power in $`m_i`$. With this intuition in hand, we now obtain the steady-state solution for this system.
## III Steady State Solution: Cluster and Mass Scaling
We derive a steady state solution for an inverse cascade from equations (2.8) through (2.11). In our inverse cascade, single elements are introduced at the lowest level, and they coalesce to form larger and larger clusters. The inverse cascade is terminated by assuming that very large clusters are removed from the system. We assume that our system develops in a sufficiently large region, so that edge effects can be ignored over a long time. Otherwise, we will have a completely space-filling solution and percolation effects will govern. We can regard this (limited) steady-state solution to be an intermediate asymptotics for our system—our solution will describe the similitude that emerges before percolation and space-filling issues become significant. The steady state solution follows when the time derivatives in the left hand sides of equations (2.8)–(2.11) vanish with the result
$$C=2L_1^2+\underset{j=2}{\overset{\mathrm{}}{}}L_1L_j.$$
(14)
$$L_{i1}^2=2L_i^2+\underset{j=i+1}{\overset{\mathrm{}}{}}L_iL_j,\mathrm{for}i>1.$$
(15)
$`2L_{i1}^2m_{i1}`$ $`+`$ $`{\displaystyle \underset{k=1}{\overset{i1}{}}}L_iL_km_k=`$ (16)
$`2L_i^2m_i`$ $`+`$ $`{\displaystyle \underset{j=i+1}{\overset{\mathrm{}}{}}}L_iL_jm_i,\mathrm{for}i>1.`$ (17)
As noted earlier, equations (2.8) and (2.10) are equivalent.
Equation (3.2) has a self-similar solution, since that equation is invariant under $`ii+1`$, and depends only on $`L_j/L_i`$. Thus, we seek a solution having the form
$$L_i=ax^{i1}$$
(18)
where $`0<x<1`$. The first of these constraints on $`x`$ corresponds to boundary sizes being positive, while the second is necessary for the summation to exist. We find that $`x`$ satisfies
$$2x^{2i2}+\underset{j=i+1}{\overset{\mathrm{}}{}}x^{i+j2}=x^{2i4}.$$
(19)
Summing the infinite geometric series explicitly and dividing by $`x^{2i4}`$, we obtain
$$2x^2+\frac{x^3}{1x}=1,\mathrm{or}x^32x^2x+1=0.$$
(20)
This equation has a single root in the range $`0<x<1`$, namely $`x=0.55495813\mathrm{}`$ . Given equations (3.1) and (3.6), we find that
$$C=a^2\left[2+x/\left(1x\right)\right]=a^2\mathrm{or}a=C^{1/2}.$$
(21)
Substitution of these results into equation (3.4) gives
$$L_i=C^{1/2}\left(0.55495813\right)^{i1}.$$
(22)
We now turn our attention to equation (3.3). We substitute equation (3.4) into equation (3.3), dividing by $`a^2x^{i3}`$ and taking into account equation (3.6). We then obtain
$`2x^{i1}m_{i1}+{\displaystyle \underset{k=1}{\overset{i1}{}}}x^{k+1}m_k`$ $`=`$ (23)
$`2x^{i+1}m_i+{\displaystyle \frac{x^{i+2}}{1x}}m_i`$ $`=`$ $`x^{i1}m_i.`$ (24)
This equation does not have an exactly self-similar solution, since it is not invariant under $`ii+1`$. Suppose that we make the substitution
$$x^{i1}m_i=y^{i1},$$
(25)
assuming that $`y>1`$, whereupon we obtain from summing the finite series
$$2xy^{i2}+x^2\frac{y^{i1}1}{y1}=y^{i1}.$$
(26)
We observe that the solution for $`y`$ in this equation depends upon $`i`$. However, for large $`i`$, equation (3.11) approximately implies, assuming that we can replace $`y^{i1}1`$ by $`y^{i1}`$, that $`2x+x^2y/(y1)=y`$ which we rewrite as
$$y^2(x+1)^2y+2x=0.$$
(27)
This equation has a unique solution for $`y>1`$, namely $`y=1/x=1.8019377\mathrm{}`$ . Accordingly, for large $`i`$, we have asymptotic self-similarity with
$$m_i\alpha x^{1i}y^{i1}=\alpha c^{i1},$$
(28)
where $`c=1/x^2=3.24697602\mathrm{}`$ . With $`m_1=1`$, we have
$$m_i\left(3.24697602\right)^{i1}.$$
(29)
Before moving to issues dealing with fractals and branching, the solutions we have just obtained for $`L_i`$ and for $`m_i`$ can be immediately exploited. Since $`L_ix^i`$ and, approximately, $`m_ix^{2i}`$, we observe that $`L_i\sqrt{m_i}\mathrm{const}.`$ For example in two dimensions, recalling that $`L_iN_i\mathrm{}_i`$ and introducing the Euclidean relation that $`\mathrm{}_i\sqrt{m_i}`$, it follows that $`N_im_i\mathrm{const}.`$ or, equivalently, we find the number-mass or number-area relationships
$$N_i1/m_i1/A_i.$$
(30)
This is equivalent to equation (1.1) with $`\alpha =1`$. The branch numbers $`N_i`$ are loosely equivalent to a logarithmic binning of cluster sizes. Logarithmic binning is equivalent to a cumulative distribution. Thus, the result given in equation (3.15) is in agreement with the distribution of cluster sizes obtained from the forest-fire model as discussed above. The concept of clusters can also be extended to both sandpile and slider-block models. In these cases, the clusters are the metastable regions that will avalanche or slip when an event is triggered. In both cases, the cumulative distribution of cluster sizes satisfy equation (1.1) with $`\alpha 1`$. These scaling relationships are archetypical of self-organized criticality. Remarkably, this scaling has been deduced using solely analytic means from our inverse-cascade hierarchical cluster model.
## IV Adaptation for Fractal Perimeter: Cluster and Mass Scaling
In the analysis given in the previous sections, we assumed that the rate of cluster coalescence $`r_{ij}`$ was proportional to the linear dimensions of the two clusters as given in Equation (2.6). We now generalize this dependence to account for the possibility of fractal clusters by introducing an “efficiency” factor $`ϵ<1`$, with an appropriate scaling such that
$$r_{ij}ϵ^{\left|ji\right|}N_i\mathrm{}_iN_j\mathrm{}_j=ϵ^{\left|ji\right|}L_iL_j.$$
(31)
As before, $`r_{ij}`$ is the rate of coalescence between clusters of ranks $`i`$ and $`j`$. This modification can, for example, describe the increased efficiency with which a smaller cluster can coalesce with a larger one, since the smaller cluster can become attached inside one of the nooks and crannies that can characterize a fractal perimeter.
With this modification, we obtain analogs of equations (2.8)–(2.11)
$$\dot{N}_1=\dot{M}_1=C2L_1^2\underset{j=2}{\overset{\mathrm{}}{}}ϵ^{1j}L_1L_j,$$
(32)
$$\dot{N}_i=L_{i1}^22L_i^2\underset{j=i+1}{\overset{\mathrm{}}{}}ϵ^{ij}L_iL_j,\mathrm{for}i>1,$$
(33)
$`\dot{M}_i`$ $`=`$ $`2L_{i1}^2m_{i1}+{\displaystyle \underset{k=1}{\overset{i1}{}}}ϵ^{ki}L_iL_km_k`$ (34)
$``$ $`2L_i^2m_i{\displaystyle \underset{j=i+1}{\overset{\mathrm{}}{}}}ϵ^{ij}L_iL_jm_i,\mathrm{for}i>1.`$ (35)
In the steady state, we obtain analogs of equations (3.1)–(3.3)
$$C=2L_1^2+\underset{j=2}{\overset{\mathrm{}}{}}ϵ^{1j}L_1L_j.$$
(36)
$$L_{i1}^2=2L_i^2+\underset{j=i+1}{\overset{\mathrm{}}{}}ϵ^{ij}L_iL_j,\mathrm{for}i>1.$$
(37)
$`2L_{i1}^2m_{i1}`$ $`+`$ $`{\displaystyle \underset{k=1}{\overset{i1}{}}}ϵ^{ki}L_iL_km_k=`$ (38)
$`2L_i^2m_i`$ $`+`$ $`{\displaystyle \underset{j=i+1}{\overset{\mathrm{}}{}}}ϵ^{ij}L_iL_jm_i,\mathrm{for}i>1.`$ (39)
Substituting equation (3.4) into (4.6) we obtain an analog of (3.6)
$`2x^2+{\displaystyle \frac{x^3}{ϵx}}=1,`$ $`\mathrm{or}`$ $`x^32ϵx^2x+ϵ=0,`$ (40)
$`\mathrm{or}`$ $`ϵ={\displaystyle \frac{xx^3}{12x^2}}.`$ (41)
As the $`L_i`$ are positive, $`x`$ must be positive from its definition in (3.4). Suppose that $`ϵ=x`$. Then $`x2x^3=xx^3`$, giving $`x=0`$, a contradiction. Accordingly, for positive $`x`$, the sign of $`ϵx=x^3/\left(12x^2\right)`$ changes only at $`x=\sqrt{1/2}`$ where $`ϵ`$ changes sign as it passes through infinity (due to the denominator). It is easy to see that the sign of both $`ϵx`$ and $`ϵ`$ is positive for $`0<x<\sqrt{1/2}`$, and negative for $`x>\sqrt{1/2}`$. As $`x<ϵ`$ is nesessary for the summation of the geometric series to exist, this implies that $`x<\sqrt{1/2}`$. In addition, the condition $`ϵ<1`$ requires that $`x<0.55495813\mathrm{}`$. For example, $`x=0.5`$ corresponds to $`ϵ=3/4`$. From equations (4.5) and (4.8), we obtain that $`a=xC^{1/2}`$, for any $`ϵ`$.
Let us turn now to the mass balance equation (4.7). Substituting (3.4) and assuming $`x^{i1}m_i=y^{i1}`$, we obtain an analog of equation (3.11)
$$2xy^{i2}+x^2ϵ^{1i}\frac{(ϵy)^{i1}1}{ϵy1}=y^{i1}.$$
(42)
We assume that $`ϵy>1`$. Precisely as in equation (3.11), we observe that the solution for $`y`$ in this equation depends upon $`i`$. However, for large $`i`$, equation (4.9) approximately implies that $`2x+x^2y/(ϵy1)=y`$ which we rewrite as
$$ϵy^2(x^2+2ϵx+1)y+2x=0.$$
(43)
Due to equation (4.8), equation (4.10) has a solution $`y=1/x`$, for any $`ϵ`$. Note that condition $`ϵy>1`$ is satisfied for $`y=1/x`$. Accordingly, for large $`i`$, we have asymptotic self-similarity with $`m_i\alpha c^{i1}`$, where $`c=1/x^2`$, as in equation (3.13). For example, when $`ϵ=3/4`$, we have $`c=4`$.
It is important to remember that $`ϵ`$ describes the perimetric fractal scaling for the clusters. The relationship between perimetric and areal scaling remains a controversial topic. However, assuming that one can identify an appropriate link between the two, for example in the context of forest fire or other models, then the preceding discussion makes it possible to identify the frequency-area relationship for fractal clusters, in analogy to the $`N1/A`$ relationship we identified previously for Euclidian clusters.
## V Branching Numbers
In the analogy between clustering and river networks that we have discussed above, we can write for our clusters
$$\frac{N_{i+1}}{N_i}=x^2$$
(44)
which is known as the bifurcation ratio for river networks. Also, we have
$$\frac{\mathrm{}_i}{\mathrm{}_{i+1}}=x$$
(45)
which is known as the length-order ratio for river networks. For river networks, the fact that these two ratios are almost constant is known as Horton’s laws.
A major step forward in classifying river networks was made by Tokunaga. He extended the Strahler ordering system to include side branching. A first-order branch joining another first-order branch is denoted by the subscript “11” and the number of such branches is $`N_{11}`$, a first-order branch joining a second-order branch is subscripted “12” and the number of such branches is $`N_{12}`$; a second-order branch joining a second-order branch is subscripted “22” and the number of such branches is $`N_{22}`$.
In order to apply the concept of side branching to the coalescence of clusters, let us suppose that we have a coalescence of two clusters, of ranks $`i`$ and $`j`$. In the case $`i<j`$, the cluster of rank $`i`$ becomes a branch of the cluster of rank $`j`$. Note that, if the smaller cluster has its own branches, these branches are not counted as branches of the larger cluster. However, these branches, together with all of their branches, etc. are counted as subclusters of the larger cluster. In analogy to river networks, branches are to tributaries as clusters are to drainage basins. A branch formed by the cluster of rank $`i`$ is considered to be a subcluster too, and is assigned the rank $`i`$. Any other subcluster is assigned the rank of a cluster from which it first formed as a branch. In analogy to river networks, subclusters of a cluster correspond with the streams in a drainage basin. The case $`i>j`$ is treated similarly. In the case $`i=j`$, both clusters of rank $`i`$ become branches of rank $`i`$ of the new cluster of rank $`i+1`$. Subclusters and their ranks are defined the same way as above.
Let $`t_{ij}`$ be the average number of branches of rank $`i`$ in a cluster of rank $`j`$, for $`i<j`$, and let $`n_{ij}`$ be the total number of sub-clusters of rank $`i`$ in a cluster of rank $`j`$. For $`i=j`$, we define $`t_{ii}=n_{ii}=1`$. By definition, for $`i<j`$ we have
$$n_{ij}=\underset{k=i}{\overset{j1}{}}n_{ik}t_{kj}.$$
(46)
Moreover, let $`N_{ij}=N_jn_{ij}`$ be the total number of sub-clusters of rank $`i`$ for all clusters of rank $`j`$, and let $`T_{ij}=N_jt_{ij}`$ be the total number of branches. This classification scheme is illustrated in Figure 2. In (a), we have a cluster of rank “1” which corresponds to a single tree in the forest-fire model. In (b), two clusters of rank “1” have coalesced to form a cluster of rank “2.” This cluster has been joined by a cluster of rank “1.” In the forest-fire model, two trees on adjacent grid points have been joined by a third tree. In (c) and (d), clusters of rank “3” and “4” are illustrated. For this example, we have $`n_{12}=n_{23}=n_{34}=3`$, $`n_{13}=n_{24}=11`$, $`n_{14}=43`$, $`t_{12}=t_{23}=t_{34}=3`$, $`t_{13}=t_{24}=2`$, and $`t_{14}=4`$.
As before, we regard the coalescence of more than two clusters as being exceedingly rare and neglect them in our treatment. When two clusters, of ranks $`i`$ and $`j`$ coalesce, we prescribe the mappings for $`N_{ki},N_{kj},T_{ki}`$, and $`T_{ij}`$ as described below. When $`i=j`$,
$$N_{k,i+1}N_{k,i+1}+2n_{ki},N_{ki}N_{ki}2n_{ki},\mathrm{for}k<i,$$
(47)
$$T_{i,i+1}T_{i,i+1}+2,T_{k,i}T_{ki}2t_{ki},\mathrm{for}ki;$$
(48)
and when $`i<j`$,
$$N_{kj}N_{kj}+n_{ki},N_{ki}N_{ki}n_{ki},\mathrm{for}ki,$$
(49)
$$T_{ij}T_{ij}+1,T_{ki}T_{ki}t_{ki},\mathrm{for}ki.$$
(50)
Given the rate of coalescence $`r_{ij}=L_iL_j`$, we describe the time evolution of the branching process by the following equations
$`\dot{N}_{kj}=2L_{j1}^2n_{k,j1}`$ $`+`$ $`{\displaystyle \underset{i=k}{\overset{j1}{}}}L_iL_jn_{ki}2L_j^2n_{kj}`$ (51)
$``$ $`{\displaystyle \underset{i=j+1}{\overset{\mathrm{}}{}}}L_iL_jn_{kj},\mathrm{for}k<j,`$ (52)
from equations (5.4) and (5.6), and
$`\dot{T}_{j1,j}`$ $`=`$ $`2L_{j1}^2+L_{j1}L_j2L_j^2t_{j1,j}`$ (53)
$``$ $`{\displaystyle \underset{k=j+1}{\overset{\mathrm{}}{}}}L_kL_jt_{j1,j},\mathrm{for}j>1,`$ (54)
$$\dot{T}_{ij}=L_iL_j2L_j^2t_{ij}\underset{k=j+1}{\overset{\mathrm{}}{}}L_kL_jt_{ij},\mathrm{for}i<j1,$$
(55)
from equations (5.5) and (5.7). As before, we turn our focus to the steady state solution of equations (5.8) through (5.10).
## VI Steady State: Branching Numbers
We begin for the steady state case by setting the time derivatives in the left hand sides of equations (5.8)–(5.10) to zero. We obtain
$`2L_{j1}^2n_{k,j1}`$ $`+`$ $`{\displaystyle \underset{i=k}{\overset{j1}{}}}L_iL_jn_{ki}=`$ (56)
$`2L_j^2n_{kj}`$ $`+`$ $`{\displaystyle \underset{i=j+1}{\overset{\mathrm{}}{}}}L_iL_jn_{kj},\mathrm{for}k<j.`$ (57)
$`2L_{j1}^2+L_{j1}L_j`$ $`=`$ (58)
$`2L_j^2t_{j1,j}`$ $`+`$ $`{\displaystyle \underset{k=j+1}{\overset{\mathrm{}}{}}}L_kL_jt_{j1,j},\mathrm{for}j>1.`$ (59)
$$L_iL_j=2L_j^2t_{ij}+\underset{k=j+1}{\overset{\mathrm{}}{}}L_kL_jt_{ij},\mathrm{for}i<j1.$$
(60)
We observe that, due to the finite summation present in equation (6.1), it is not invariant under $`jkjk+1`$ and its solution is not exactly self-similar in $`jk`$. However, we now employ the same methodology used in §III and obtain asymptotically valid approximate solution. In particular, we substitute (3.4) into equation (6.1) and divide by $`a^2x^{j+k4}`$, and we obtain
$`2x^{jk}n_{k,j1}`$ $`+`$ $`{\displaystyle \underset{i=k}{\overset{j1}{}}}x^{ik+2}n_{ki}=`$ (61)
$`2x^{jk+2}n_{kj}`$ $`+`$ $`{\displaystyle \frac{x^{jk+3}}{1x}}n_{kj}=x^{jk}n_{kj}.`$ (62)
Based on our result obtained using equation (3.10), we introduce
$$x^{jk}n_{kj}=z^{jk}$$
(63)
assuming $`z>1`$, and we obtain from summing the finite series in equation (6.4)
$$2xz^{jk1}+x^2\frac{z^{jk}1}{z1}=z^{jk}.$$
(64)
Approximating $`z^{jk}1`$ by $`z^{jk}`$ in the asymptotic limit $`jk`$, equation (6.6) approximately implies that $`2x+x^2z/(z1)=z`$, or
$$z^2(x+1)^2z+2x=0.$$
(65)
This latter equation is identical to equation (3.12), and has a unique solution $`z>1`$, namely $`z=1/x=1.8019377\mathrm{}`$ and, thereby, demonstrates that the branching network description preserves the same structural character. Accordingly, for $`jk`$, we have
$$n_{kj}\beta x^{kj}z^{jk}=\beta c^{jk},$$
(66)
where $`c=1/x^2=3.24697602\mathrm{}`$ as before. Thus, we have approximately
$$n_{kj}\left(3.24697602\right)^{jk}$$
(67)
in the limit $`jk`$. For the deterministic example given in Fig. 1, we have $`n_{kj}4^{j1}`$ for $`jk`$. Substituting (3.4) into equation (6.3) and dividing by $`a^2x^{2j4}`$, we obtain
$$x^{ij+2}=2x^2t_{ij}+\frac{x^3}{1x}t_{ij}=t_{ij},\mathrm{for}i<j1$$
(68)
which establishes that
$$t_{ij}=x^{ij+2},\mathrm{for}i<j1.$$
(69)
\[For the special case that $`i=j1`$, we have from equation (6.2) that $`2+x=t_{j1,j}`$.\] This, now, is functionally equivalent to the similitude relationship assumed by Tokunaga, namely
$$t_{ij}=t_{ji}=ax^{ij}.$$
(70)
Importantly, the behavior that Tokunaga assumed to be valid emerges in a completely natural way from the underlying mathematics of our inverse cascade. Since $`x=0.55495813`$, we have for our inverse cascade
$$t_{ij}=\left(0.55495813\right)^{ij+1}.$$
(71)
For the deterministic example given in Fig. 1, we have $`t_{ij}=\left(1/2\right)^{ij+1}`$.
Finally, the connection between our treatment of branching and our earlier treatment of clustering needs to be established. In particular, we observe that $`m_j`$ turns out to be equivalent to $`n_{1j}`$ and that both scale as $`c^{j1}`$ where, as we have already seen, $`c=1/x^2`$.
## VII Adaptation for Fractal Perimeter: Branching Numbers
The branching analysis given in the previous section is easily modified to include the fractal perimeter dependence introduced in equation (4.1). Introducing this relation into equations (5.8)–(5.10), we obtain
$`\dot{N}_{kj}`$ $`=`$ $`2L_{j1}^2n_{k,j1}+{\displaystyle \underset{i=k}{\overset{j1}{}}}ϵ^{ij}L_iL_jn_{ki}`$ (72)
$``$ $`2L_j^2n_{kj}{\displaystyle \underset{i=j+1}{\overset{\mathrm{}}{}}}ϵ^{ji}L_iL_jn_{kj},\mathrm{for}k<j,`$ (73)
$`\dot{T}_{j1,j}`$ $`=`$ $`2L_{j1}^2+ϵ^1L_{j1}L_j2L_j^2t_{j1,j}`$ (74)
$``$ $`{\displaystyle \underset{k=j+1}{\overset{\mathrm{}}{}}}ϵ^{jk}L_kL_jt_{j1,j},\mathrm{for}j>1,`$ (75)
$`\dot{T}_{ij}`$ $`=`$ $`ϵ^{ij}L_iL_j2L_j^2t_{ij}`$ (76)
$``$ $`{\displaystyle \underset{k=j+1}{\overset{\mathrm{}}{}}}ϵ^{jk}L_kL_jt_{ij},\mathrm{for}i<j1.`$ (77)
In the steady state, we obtain analogs of equations (6.1)–(6.3)
$`2L_{j1}^2n_{k,j1}`$ $`+`$ $`{\displaystyle \underset{i=k}{\overset{j1}{}}}ϵ^{ij}L_iL_jn_{ki}=`$ (78)
$`2L_j^2n_{kj}`$ $`+`$ $`{\displaystyle \underset{i=j+1}{\overset{\mathrm{}}{}}}ϵ^{ji}L_iL_jn_{kj},\mathrm{for}k<j.`$ (79)
$`2L_{j1}^2+ϵ^1L_{j1}L_j`$ $`=`$ (80)
$`2L_j^2t_{j1,j}`$ $`+`$ $`{\displaystyle \underset{k=j+1}{\overset{\mathrm{}}{}}}ϵ^{jk}L_kL_jt_{j1,j},\mathrm{for}j>1.`$ (81)
$`ϵ^{ij}L_iL_j`$ $`=`$ (82)
$`2L_j^2t_{ij}`$ $`+`$ $`{\displaystyle \underset{k=j+1}{\overset{\mathrm{}}{}}}ϵ^{jk}L_kL_jt_{ij},\mathrm{for}i<j1.`$ (83)
Substituting equation (3.4) into (7.4) and assuming that $`x^{jk}n_{kj}=z^{jk}`$, we obtain, due to (4.8), an analog of equation (6.6), namely
$$2xz^{jk1}+x^2ϵ^{kj}\frac{(ϵz)^{jk}1}{ϵz1}=z^{jk}.$$
(84)
Assuming $`ϵz>1`$, we approximate $`(ϵz)^{jk}1`$ by $`(ϵz)^{jk}`$ in the asymptotic limit $`jk`$. In this case, equation (7.7) approximately implies that $`2x+x^2z/(ϵz1)=z`$, or
$$ϵz^2(x^2+2ϵx+1)z+2x=0.$$
(85)
This latter equation is identical to equation (4.10), and has a solution $`z=1/x`$, for any $`ϵ`$. Accordingly, for $`jk`$, we have asymptotic self-similarity with $`n_{kj}\beta x^{kj}z^{jk}=\beta c^{jk}`$, where $`c=1/x^2`$ as before.
Substituting equation (3.4) into equation (7.6) and dividing by $`a^2x^{2j4}`$, we obtain
$$ϵ^{ij}x^{ij+2}=2x^2t_{ij}+\frac{x^3}{ϵx}t_{ij}=t_{ij},\mathrm{for}i<j1$$
(86)
which establishes that
$$t_{ij}=ϵ^{ij}x^{ij+2}\mathrm{for}i<j1.$$
(87)
\[For the special case that $`i=j1`$, we have from equation (7.5) that $`2+x/ϵ=t_{j1,j}`$.\] Thus, our modification of the Euclidean model to accommodate fractal perimetric behavior is complete, and the self-similar description of the branching process has been shown to follow in a completely analogous way.
## VIII Conclusions and Discussion
In this paper, we have presented an inverse cascade model for clustering. This model requires:
1. The addition of single elements at a prescribed small scale;
2. The consideration of the clustering process as a hierarchical tree with side branching;
3. The probability that a cluster of one order will coalesce with another cluster of the same or different order is proportional to the product of the number of trees of the two orders and the square root of their masses (or areas); and
4. Clusters are lost (destroyed) at a prescribed large scale.
Our inverse cascade model provides a general explanation for the behavior of several models that have been considered to exhibit behavior which has often been described as “self-organized criticality” and occurs in various settings including the “forest-fire” model. In this model, the planting of individual trees is the introduction of single elements, and coalescence occurs when a planted tree bridges the gap between two existing clusters. The model “fires” burn significant numbers of trees only in the largest clusters and this terminates the inverse cascade. Our model gives the number-mass (or area) distribution to be $`N1/A`$; this is also found to be the case for the forest-fire model. Our model is also applicable for the sandpile and slider-block models. In the sandpile model, the cluster is the region over which an avalanche will spread once it is initiated. In the slider-block model, the cluster is the region over which a slip event will spread once it is initiated. The initiation of an avalanche in the sandpile model and the initiation of a slip event in the slider-block model are equivalent to a spark being dropped on a tree. In both models the clusters grow by coalescence.
We conclude that these models, which are said to exhibit self-organized criticality, are neither critical nor self-organized. Instead, their behavior is associated with an inverse cascade which asymptotically approaches (so long as the largest scales are not involved) power-law (“fractal”) scaling. This behavior is related to the self-similar direct cascade associated with the inertial-range of fully-developed isotropic turbulence. This behavior qualifies as a form of “intermediate asymptotics”. It is interesting to note that earthquakes, landslides, and actual forest fires also have analogous power-law frequency-area distributions.
We have quantified our inverse cascade in terms of a branching tree hierarchy with side branching. We have adapted the taxonomy used for river networks to the growth of our clusters. The order of each cluster is specified and, in our mean-field approximation, the number of clusters of each order is obtained. We find that this distribution is identical to the self-similar side branching distribution introduced empirically by Tokunaga. This distribution has been found to be applicable for river networks, DLA clusters, and vein structures of leaves.
###### Acknowledgements.
We wish to acknowledge the support of NSF Grant EAR 9804859. We are also grateful to Gleb Morein for several useful discussions.
|
no-problem/9906/gr-qc9906086.html
|
ar5iv
|
text
|
# ON THE (NON) EXISTENCE OF SEVERAL GRAVITOMAGNETIC EFFECTS
## 1 Gravitomagnetic and Maxwell equations
Our starting point is the resemblance between Maxwell-Lorentz’s electromagnetic equations and the linear and slow motion aproximation of the Einstein’s equations of General Relativity. Hence, I do not start from the full non-linear Einstein’s equations, to develop, after the projection into the local rest spaces of a congruence of observers, the Maxwell analogy in General Relativity, based on the correspondence between the Faraday tensor of the electromagnetic field and the Weyl tensor of the gravitational tidal field. This analogy has been developed in several recent papers, however it was put forward and clearly exposed in . This approach is done without any aproximation and, in this framework, the Bianchi identities are dynamical and the Einstein equations can be interpreted as constitutive relations of a 4-dim non-linear elastic medium, (this can be seen in ).
In Newtonian theory of gravity, no fundamental gravitational force is associated with the rotation of a mass. In this theory, if a body rotates, the gravitational force it exerts on other masses, changes only to the extent that the matter distribution within the body is affected by the rotation. The Newtonian gravitational force is only associated to the distribution of mass at a time, but not with the state of intrinsic rotation of this mass.
However, Lense and Thirring (1918) and Thirring (1921) showed that, a certain gravitomagnetic field is indeed associated with the rotation of a mass, in the framework of the weak field aproximation to General Relativity. From the linearized Einstein’s equations, one obtains, when the first order effects of the motion of the sources are taken into account, the following Maxwell-like (gravitomagnetic) equations, which can be considered invariant under the Poincaré (or even Conformal) group:
$`𝒈`$ $`=`$ $`4\pi \rho ,`$ (1)
$`𝒃`$ $`=`$ $`4\pi \rho 𝒖+{\displaystyle \frac{𝒈}{t}},`$ (2)
$`𝒈`$ $`=`$ $`{\displaystyle \frac{𝒃}{t}},`$ (3)
$`𝒃`$ $`=`$ $`0.`$ (4)
Where $`𝒈`$ is the Newtonian gravitostatic field with source the density of mass-energy, $`\rho `$, and $`𝒃`$ is the gravitomagnetic field with source the density of mass current generated by the motion, in particular, an intrinsic rotation. This deduction can be seen, in the corresponding chapters of several books, see for instance , with some changes in the notation and new symbols. Moreover, for a stationary field one obtains an equation for $`𝒃`$, that is analogous to the electromagnetic one, changing the magnetic dipole moment by minus twice the spin angular momentum $`𝑺`$.
## 2 Magnetic dynamo theory
The term ”dynamo effect” in magnetohydrodynamics (hereafter MHD), is generically used to describe the systematic and sustained generation of magnetic energy as a result of the stretching action of a velocity field $`𝒖`$, on a magnetic field $`𝑩`$. In other words, if a conducting fluid moves in a magnetic field $`𝑩`$, the flow will be affected by the force due to the interaction between $`𝑩`$ and the currents of the fluid. Also, $`𝑩`$ will be modified (amplified) by the currents of the fluid and this is the dynamo effect.
The kinematic dynamo is the most simple case of self-excited one, due to the fact that the back reaction of the magnetic field to $`𝒖`$ is assumed negligible, and considers the evolution (amplification) of magnetic field according the induction equation:
$$\frac{𝑩}{t}=(𝒖𝑩)+\frac{1}{4\pi }\eta _e\mathrm{\Delta }𝑩,$$
(5)
being $`\eta _e`$, the resistivity or difussivity (for insulators is infinite, for plasmas is zero), the reciprocal of the electric conductivity $`\sigma `$. The induction equation is obtained from the ”macroscopic” magnetic Galilean limit (will be discussed) of Maxwell’s equations, in which case the displacement current is neglected, and Ohm’s law. I will try to propose a similar mechanism in gravitomagnetism, to amplify $`𝒃`$ and hence the intrinsic angular momentum $`𝑺`$, due to the fact that we have Maxwell-like equations for gravity at our disposal. However, the key equation of the kinematic magnetic dynamo, which stablish the loop to amplify $`𝑩`$ is the Ohm’s law. Do we have a similar equation in gravity?
## 3 An analog for gravitomagnetism of the Ohm’s law
Our main radical and new idea is that, in order to have a gravitomagnetic dynamo, the source fluid can not be a perfect fluid. The fluid must be ”not dry”, wet, and hence must have viscosity. But, as viscosity is a tensorial object and as we need a scalar, we only consider its trace, the viscous pressure, neglecting the shear viscosity. Viscosity (viscous pressure) $`\eta `$, will be the analog in gravitomagnetism of the resistivity $`\eta _e`$, for a conducting electrical medium. Our Ohm’s-like law for the moving viscous fluid, in a moving frame, will be:
$$𝒋=\rho 𝒖=\delta \left(𝒈+𝒖𝒃\right),$$
(6)
where $`𝒋`$ is the mass current that appears in the first term of the r.h.s. of (2) and being $`\delta =1/\eta `$, the ”dryness” of the viscous fluid. With the Ohm’s like law (6) and following the same procedure as in electromagnetism, one obtains an induction equation for $`𝒃`$. The difference with (5) is a change of sign in the second term of the r.h.s., i.e., the presence of a ”concentration” term instead of a diffusion one. This gives rise to some problems concerning the existence of a gravitomagnetic dynamo, however, in the next section I present a stronger reason against it.
## 4 Galilean limits of the gravitomagnetic equations
It is well-known that Maxwell’s equations and the Lorentz force law have two different kinds of Galilean limits: electric and magnetic. This is due, from the mathematical point of view, to the existence of two different kinds of Galilean four-vectors. Starting from a Lorentz four-vector, for instance $`(𝑬,𝑩)`$, this can be more timelike, i.e., $`|𝑬|>>|𝑩|`$, and in this electric Galilean limit, its transformation under the Galilean inertial one is:
$$𝑬^{}=𝑬,𝑩^{}=𝑩𝒗𝑬.$$
(7)
Physically, in the electric limit one describes situations where isolated electrical charges move at low velocities. On the other hand, the magnetic Galilean limit (in which the space-like parts are dominant) is the usual situation at the macroscopic level where magnetic effects are dominant, due to the balance between negative and positive electric charges. This magnetic Galilean limit is the proper one that is used in magnetic dynamo theory but it is not possible in gravitomagnetism, where we do not have negative masses at our disposal. Thus, in gravitomagnetism, if we take a Galilean limit, this must necessarily be of the electric (almost Newtonian) kind and describe situations where isolated masses move at low velocities. In this electric (almost Newtonian) limit, the gravitomagnetic equations (1,2 and 4) have the same expressions, but there is an important difference, in this limit the Faraday-like equation (3) has not induction term, this equation now read
$$𝒈=0,$$
(8)
Moreover, in this limit, the proper one for gravity, it is impossible to build a gravitomagnetic dynamo even if we use an Ohm’s like law for gravity as (6), because we do not have, at our disposal, an induction term in the Faraday’s equation.
The only possibility that remains, in my opinion, to construct a gravitomagnetic dynamo, would be to consider a non-relativistic generalized Newtonian theory of gravity of the kind introduced by Bel in . This possibility will be explored in a future work.
## 5 The gravitomagnetic Meissner effect does not exists
Working with the gravitomagnetic equations, several works have appeared (see for instance ), in which a gravitational analog of the electromagnetic Meissner-Ochsenfeld effect is presented. As a result, Lano, by using the classical London equations, suggests an expulsion of the gravitomagnetic field $`𝒃`$ from the core of the neutron stars to the exterior, i.e., a transport outwards of the spin angular momentum, due to the diamagnetic nature of the Meissner effect. I will show that this gravitomagnetic Meissner effect does not exists in gravitomagnetism and instead of a diamagnetic nature, at the classical level in spite of the fact that is a truly quantum effect, one finds a paramagnetic character. Surprisingly, this key difference comes from a trivial, but fundamental, error in the calculation of . Begin with the Lorentz force law in the electric (quasi-Newtonian) limit, $`𝒋/t=\rho 𝒈`$, that is the first London type equation. By substituing it into the Faraday’s law, (3), one obtains
$$\frac{}{t}\left(\frac{1}{\rho }𝒋+𝒃\right)=0.$$
(9)
One solution is the second London equation $`𝒋=\rho 𝒃`$. From the Ampere equation in the magnetic limit,
$$𝒃=4\pi 𝒋,$$
(10)
taking the curl, substituing the second London equation and finally appling (4), one obtains:
$$\mathrm{\Delta }𝒃=4\pi 𝝆𝒃.$$
(11)
So, we have found a ”paramagnetic” character instead of the Meissner-Ochsen feld effect. Our final criticism to our previous deduction is similar to the proposed for the dynamo effect. In this case one uses a mixture of Galilean limits (electric, for the Lorentz force and magnetic, for the field equations), thus the paramagnetic character of gravitomagnetism must be also put into question.
## 6 Conclusion
Our final remark is the following. The use of the linear slow motion aproximation of Einstein’s equations can lead to the appearance of spurious effects. The truly gravitomagnetic effects must be found using the Maxwell analogy of General Relativity, first exposed by Ll. Bel, i.e., working with tidal curvature fields instead of with kinematic connection fields.
## Acknowledgments
I am grateful to A. San Miguel and F. Vicente for discussions and TeX help, to A. Ferriz-Mas and M. Núñez for introducing me to the subject of magnetic dynamo theory and, after the completion of this paper, to Ll. Bel for drawing my attention to one of his publications. This work has been partially supported by the spanish research projects VA61/98 of Junta de Castilla y León and C.I.C.Y.T. PB97-0487.
## References
|
no-problem/9906/hep-th9906029.html
|
ar5iv
|
text
|
# 1 Introduction
## 1 Introduction
Several past years brought us new understanding of non-perturbative phenomena in supersymmetric (SUSY) quantum gauge theories. In particular it has become possible to take into account all instanton effects and write down the exact low-energy effective actions in $`𝒩=2`$ SUSY Yang-Mills theories . The proposed non-perturbative formulas imply an existence of underlying hidden geometric structures and, in a most elegant way, can be formulated in terms of integrable systems . This question has already a long story, but the origin of this relation still remains to be an open problem. The aim of these notes is, in particular, to discuss and partially fill this gap. We shall see, for example, that some aspects of this relation can be more clearly understood if one takes, first, SUSY gauge theory in the space-time with compactified dimensions .
The reason is that the compactified gauge theory has larger moduli space than its fully non-compact relative , and this moduli space can be thought of as a phase space of certain classical integrable system. We shall consider the compactified theory with broken SUSY by the Scherk-Schwarz mechanism (down to $`𝒩=1`$ in 4D sense), i.e. when the breaking mass parameter is given by $`\frac{ϵ}{R}`$, where $`R`$ is the radius of compact dimension and $`ϵ`$ – phase parameter of boundary conditions. In the decompactification limit $`R\mathrm{}`$ supersymmetry is restored but the extra, compact, degrees of freedom become ”heavy” and the integration over them leads to the ”averaging” of an integrable system in the Bogolyubov-Whitham sense – thus becoming an origin of the relation between Seiberg-Witten theories and Whitham integrable systems .
## 2 Integrable systems in Seiberg-Witten theory
We start with the formulation of the exact effective actions for the 4D SUSY gauge theories a la Seiberg and Witten (SW) which is very simple: the (Coulomb branch) low-energy effective action for the $`𝒩=2`$ SUSY Yang-Mills vector multiplets (supersymmetry requires the metric on moduli space of massless complex scalars from $`𝒩=2`$ vector supermultiplets to be of a ”special Kähler form” – or the Kähler potential $`K(𝐚,\overline{𝐚})=\mathrm{Im}_i\overline{a}_i\frac{}{a_i}`$ should be expressed through a holomorphic function $`=(𝐚)`$ – a prepotential) can be described in terms of auxiliary Riemann surface (complex curve) $`\mathrm{\Sigma }`$, endowed with a meromorphic 1-differential $`dS`$, which possess peculiar properties:
* The number of ”live” moduli (of complex structure) of $`\mathrm{\Sigma }`$ is strongly restricted (roughly ”3 times” less than for generic Riemann surface). The genus of $`\mathrm{\Sigma }`$ – for the $`SU(N)`$ gauge theories – is exactly equal to the rank of gauge group – i.e. to the number of independent moduli.
* The variation of generating 1-form $`dS`$ over these moduli gives holomorphic differentials
$$\begin{array}{c}\delta _{\mathrm{moduli}}dS=\mathrm{holomorphic}\end{array}$$
(1)
* The (canonical) $`𝐀`$\- and $`𝐁`$-periods of generating 1-form
$$\begin{array}{c}𝐚=_𝐀𝑑S𝐚_D=_𝐁𝑑S\end{array}$$
(2)
give the set of ”dual” BPS masses – the W-bosons and the monopoles while the period matrix $`T_{ij}(\mathrm{\Sigma })`$ – the set of couplings in the low-energy effective theory. From (1) and (2) one gets the relation between the BPS masses and couplings
$$\begin{array}{c}\frac{a_D^i}{a_j}=_{B_i}𝑑\omega _j=T_{ij}(\mathrm{\Sigma })\end{array}$$
(3)
* All above requirements can be summarized saying that the SW data are equivalent to defining an integrable system in the sense of . The periods (2) are the action variables and the (holomorphic) variation of the generating 1-form (1) gives rise to the dual (angle) variables. The corresponding class of integrable models include well-known integrable systems of particles (the periodic Toda chains, Calogero-Moser models and their relativistic Ruijsenaars generalizations) and classical spin chains (see and references therein for details).
* The prepotential is function of half of the variables (2), say $`=(𝐚)`$, then
$$\begin{array}{c}a_D^i=\frac{}{a_i}\\ T_{ij}=\frac{a_D^i}{a_j}=\frac{^2}{a_ia_j}\end{array}$$
(4)
The prepotential $``$ itself has no natural definition in the language of classical finite-gap integrable systems – in order to describe it one has to consider deformations of the finite-gap models. It is possible to show, for example, that it satisfies the following system of differential equations
$$\begin{array}{c}_i_k^1_j=_j_k^1_ii,j,k=1,\mathrm{},N1.\end{array}$$
(5)
where $`_i`$ denotes the matrix of the third derivatives
$$\begin{array}{c}(_i)_{mn}=\frac{^3}{a_ia_ma_n}=\frac{T_{mn}}{a_i}\end{array}$$
(6)
or
$$\begin{array}{c}_iG^1_j=_jG^1_ii,j,k=1,\mathrm{},N1.\end{array}$$
(7)
where $`G=_kg_k_k`$ for any $`g_k`$. The system of equations (5) holds non-perturbatively for most of SW prepotentials (an important exception is the case of broken $`𝒩=4`$ theory – the elliptic Calogero-Moser model ). The proof of the Eqs. (5), (7) is based on the existence of closed algebra of multiplication of holomorphic differentials on certain Riemann surfaces , which is a sort of generalization of polynomial rings.
* The SW prepotential $``$ can be also considered as a particular case of the tau-function of the Whitham hierarchy – a function of infinitely many extra parameters $`T_n`$ so that in the ”SW point”
$$\begin{array}{c}_{SW}(𝐚,\mathrm{\Lambda })(𝐚,𝐓)|_{T_n=\mathrm{\Lambda }\delta _{n,1}}\end{array}$$
(8)
It implies, in particular, that the generalized prepotential satisfies
$$\begin{array}{c}\frac{}{T_n}=\frac{N}{2\pi in}_mmT_m𝒜_{mn}=\frac{N}{2\pi in}T_1_{n+1}+O(T_2,T_3,\mathrm{})\\ \frac{^2}{a_iT_n}=\frac{N}{2\pi in}\frac{_{n+1}}{a_i}\\ \frac{^2}{T_mT_n}=\frac{N}{2\pi i}\left(𝒜_{mn}+\frac{N}{mn}\frac{_{m+1}}{a_i}\frac{_{n+1}}{a_j}_{ij}^2\mathrm{log}\theta _E(0|T)\right)\end{array}$$
(9)
where
$$\begin{array}{c}𝒜_{mn}=\frac{N}{mn}\mathrm{res}_{\mathrm{}}\left(P^{n/N}(\lambda )dP_+^{m/N}(\lambda )\right)=𝒜_{nm}\end{array}$$
(10)
and
$$\begin{array}{c}_{n+1}𝒜_{n1}=\frac{N}{n}\mathrm{res}_{\mathrm{}}P^{n/N}(\lambda )d\lambda =h_{n+1}+O(h^2)\end{array}$$
(11)
The most illustrative form is
$$\begin{array}{c}\frac{^2}{T_mT_n}\left((a,T)\frac{N}{4\pi i}^{GKM}(a,T)\right)=\\ =\frac{N^2}{2\pi imn}\frac{_{m+1}}{a_i}\frac{_{n+1}}{a_j}_{ij}^2\mathrm{log}\theta _E(0|T)\end{array}$$
(12)
where
$$\begin{array}{c}^{GKM}(a|T)\frac{1}{2}_{m,n}T_mT_n𝒜_{mn}\end{array}$$
(13)
is the prepotential of Generalized Kontsevich Model – a 2D topological string theory or the ”local” part which is not related to the structure of nontrivial SW spectral curve, while the r.h.s. of (12) is expressed through the derivatives of $`\theta `$-constant, corresponding to a particular SW curve (and certain choice of characteristic on this particular curve).
* Eqs. (5), (7), (9) and (12) are classical differential equations one can write for the quantum effective actions which, in particular, depend on the Planck constant $`\mathrm{}`$ or the string scale $`\sqrt{\alpha ^{}}`$. The same phenomenon was studied earlier in the case of 2D topological string models (see, for example and references therein).
These properties of the low-energy effective SUSY gauge theories follow from the Seiberg-Witten hypothesis and were derived using calculus on Riemann surfases. However they do not tell us anything about the origin of this non-perturbative structure. In particular the physical sense of arising integrable system remains to be unclear. One of the ways to try to understand it is to consider the compactified version of the SW theory.
There were two, at first glance different, consequences of adding a compact dimension in the context of Seiberg and Witten. A straightforward one is to add 5-th compact dimension to 4D SUSY gauge theory and to take into account the contribution of the soft Kaluza-Klein (KK) modes which leads to the ”relativization” of an integrable system. A different effect of enlarging moduli space arises when one considers (preserving SUSY!) compactification down to 3+1 (compact) dimension . Both ways are particular cases of generic compactified theory with Wilson lines, or new moduli – the monodromies of gauge fields and, in general, such compactification breaks supersymmetry (at least partially), giving rise, in particular, to new effective theories of SW type .
In order to discuss compactified theory, we shall consider, first, general properties of moduli spaces (vacua) of SUSY Yang-Mills, or, if speaking about bosonic sector, the so called Yang-Mills-Higgs theories. On these moduli spaces one can introduce holomorphic symplectic 2-forms which allow to construct an integrable system in the sense of or . The integrating change of variables can be considered as relation between bare or classical and quantum variables in the context of corresponding gauge theory. Partially this relation is known as a relation between ”bare” moduli $`u_k=\frac{1}{k}\mathrm{Tr}\mathrm{\Phi }^k`$ and the ”exact” quantum moduli – the periods of Seiberg-Witten differential or action variables in corresponding integrable system. The rest part of this relation – exactly the integrating change of variables in finite-gap integrable system has also the meaning of relation between the ”bare” Wilson loops (or dual moduli) and their exact quantum counterparts. We shall also discuss the symmetry between Wilson loops and scalar moduli under T-duality transformation and show that it is related with the duality between the co-ordinates and action variables in integrable systems .
As a particular example of compactified theory we shall consider 3+1 dimensional vector supermultiplet with Wilson line softly breaking SUSY from $`𝒩=2`$ in 4D sense down to $`𝒩=2`$ in 3 dimensions (from two complex Weyl spinor supercharges to one complex or two real Weyl supercharges). This, macroscopically 3-dimensional theory is known to generate Toda chain superpotential and we shall demonstrate that this superpotential is directly related to the Toda chain dynamics arising in the context of SW theory.
## 3 Perturbative gauge theories and ”degenerate” integrable systems
The relation between SW theories and integrable systems can be already discussed at the perturbative level , where $`𝒩=2`$ SUSY effective actions are completely defined by the 1-loop contributions (see and references therein). The scalar field $`𝚽=\mathrm{\Phi }_{ij}`$ of $`𝒩=2`$ vector supermultiplet acquires nonzero VEV $`𝚽=\mathrm{diag}(\varphi _1,\mathrm{},\varphi _N)`$ and the masses of “particles” – $`W`$-bosons and their superpartners are proportional to $`\varphi _{ij}=\varphi _i\varphi _j`$ due to the Higgs term $`[A_\mu ,𝚽]_{ij}=A_\mu ^{ij}(\varphi _i\varphi _j)`$ in the SUSY Yang-Mills action. These masses can be written altogether in terms of the generating polynomial
$$\begin{array}{c}w=P_N(\lambda )=det(\lambda 𝚽)=(\lambda \varphi _i)\end{array}$$
(14)
where $`𝚽`$ is the adjoint complex scalar ($`\mathrm{Tr}𝚽=_i\varphi _i=0`$), via residue formula
$$\begin{array}{c}m_{ij}_{C_{ij}}\lambda d\mathrm{log}w=_{C_{ij}}\lambda d\mathrm{log}P_N(\lambda )\end{array}$$
(15)
which for a particular ”$`\mathrm{}`$-like” contour $`C_{ij}`$ around the roots $`\lambda =\varphi _i`$ and $`\lambda =\varphi _j`$ gives rise exactly to the Higgs masses. The contour integral (15) is defined on a complex $`\lambda `$-plane with $`N`$ removed points: the roots of the polynomial (14) – a degenerate Riemann surface. The masses of monopoles are naively infinite in this limit, since the corresponding contours (dual to $`C_{ij}`$) start and end in the points where $`dS`$ obeys pole singularities. It means that the monopole masses, proportional to the squared inverse coupling, are renormalized in perturbation theory and defined naively up to the masses of particle states times some divergent constants.
The effective action (the prepotential) $``$, or the set of effective charges $`T_{ij}`$ (4), are defined in $`𝒩=2`$ perturbation theory completely by 1-loop diagram giving rise to the logarithmic corrections
$$\begin{array}{c}\left(\delta ^2\right)_{ij}=T_{ij}_{\mathrm{masses}}\mathrm{log}\frac{(\mathrm{mass})^2}{\mathrm{\Lambda }^2}=\mathrm{log}\frac{(\varphi _i\varphi _j)^2}{\mathrm{\Lambda }^2}\end{array}$$
(16)
where $`\mathrm{\Lambda }\mathrm{\Lambda }_{QCD}`$ and last equality is true only for pure gauge theories – since the only masses we have there are given by (15). That is all one has in the perturbative weak-coupling limit of the SW construction, when the instanton contributions to the prepotential (being proportional to the degrees of $`\mathrm{\Lambda }^{2N}`$ (or $`q^{2N}e^{2\pi i\tau N}`$ – in the UV-finite theories with bare coupling $`\tau `$) are (exponentially) suppressed so that one keeps only the terms proportional to $`\tau `$ or $`\mathrm{log}\mathrm{\Lambda }`$. These degenerated rational spectral curves can be already related to the family of trigonometric Ruijsenaars-Schneider and Calogero-Moser-Sutherland systems and the open Toda chain or Toda molecule.
For example, in the case of $`SU(2)`$ pure gauge theory Eq. (14) turns into
$$\begin{array}{c}w=\lambda ^2u\end{array}$$
(17)
with $`u=\frac{1}{2}\mathrm{Tr}𝚽^2`$. In the parameterization of $`X=w=e^z=\lambda ^2u`$, $`Y=w\lambda `$ the same equation can be written as
$$\begin{array}{c}Y^2=X^2(X+u)\end{array}$$
(18)
and the masses (15) are now defined by the contour integrals of
$$\begin{array}{c}dS=\lambda d\mathrm{log}w=2\frac{\lambda ^2d\lambda }{\lambda ^2u}=\frac{\lambda d\lambda }{\lambda \sqrt{u}}+\frac{\lambda d\lambda }{\lambda +\sqrt{u}}=\sqrt{X+u}\frac{dX}{X}\end{array}$$
(19)
One can easily notice that Eqs. (17), (18) and (19) can be interpreted as integration of the open $`SL(2)`$ (the Liouville) Toda chain with the co-ordinate $`X=w=e^q`$, momentum $`p=\lambda `$ and Hamiltonian (energy) $`u`$. The integration of generating differential $`dS=pdq`$ over the trajectories of the particles gives rise, in fact, to the monopole masses in the SW theory.
This is actually a general rule – the perturbative $`𝒩=2`$ theories of the ”SW family” give rise to the ”open” or trigonometric family of integrable systems – the open Toda chain, the trigonometric Calogero-Moser or Ruijsenaars-Schneider systems. This can be easily established at the level of spectrum (15) and the effective couplings (16) – the corresponding (rational) curves are (14) in the $`N`$-particle Toda chain case
$$\begin{array}{c}w=\frac{P_N^{(CM)}(\lambda )}{P_N^{(CM)}(\lambda +m)}dS=\lambda \frac{dw}{w}\end{array}$$
(20)
for the trigonometric Calogero-Moser-Sutherland model and
$$\begin{array}{c}w=\frac{P_N^{(RS)}(\lambda )}{P_N^{(RS)}(\lambda e^{2iϵ})}dS=\mathrm{log}\lambda \frac{dw}{w}\end{array}$$
(21)
for the trigonometric Ruijsenaars-Schneider system. It is easy to see that (perturbative) spectra are given by general formula
$$\begin{array}{c}M=\varphi _{ij}\frac{\pi n}{R}\frac{ϵ+\pi n}{R}\\ N𝐙.\end{array}$$
(22)
and contain in addition to the Higgs part $`\varphi _{ij}`$ the KK modes $`\frac{\pi n}{R}`$ and the KK modes for the fields with ”shifted” by $`ϵ`$ boundary conditions. The $`ϵ`$ parameter can be treated as a Wilson loop of gauge field along the compact dimension and in a subclass of models $`\frac{ϵ}{R}`$ plays the role of the mass of the adjoint matter multiplet.
Thus we see that the relation between effective actions of SUSY gauge theories and classical integrable systems is really established on perturbative level. Moreover, the perturbative limit can be considered as a self-consistent approximation since all ingredients of the relation between the SW theories and integrable systems we mentioned above can be consistently (and even explicitly), in particular:
* The associativity equations (5) possess an obvious class of perturbative solutions, for example (cf. with (16))
$$\begin{array}{c}_{\mathrm{pert}}=\frac{1}{2}_{\stackrel{i<j}{i,j=1}}^{N1}(a_ia_j)^2\mathrm{log}(a_ia_j)+\frac{1}{2}_{i=1}^{N1}a_i^2\mathrm{log}a_i\end{array}$$
(23)
(see a proof by straightforward calculation in ), which is a direct weak-coupling limit of the full SW prepotential when the non-perturbative terms (powers of $`\mathrm{\Lambda }`$) are suppressed. A generic approach to the computation of perturbative prepotentials (including the theories with the KK excitations) based on SW theory can be found in .
* Whitham hierarchy has a particular class of solutions corresponding to degenerate Riemann surfaces $`\mathrm{\Sigma }`$ where handles turn into (pairs of) points and holomorphic differentials into the differentials with first-order poles at these points. The generating differential in this case has general structure
$$\begin{array}{c}dS=T_n\lambda ^{n1}d\lambda +a_i\frac{d\lambda }{\lambda \lambda _i}\end{array}$$
(24)
This formula provides a straightforward way of computation of the ”perturbative” Whitham tau-function with non-zero times $`T_n`$ <sup>1</sup><sup>1</sup>1However, it would be nice to have an explicit expression..
Despite one can really check in the perturbative limit all the statements of sect. 2 by standard field theory computation, the nature of this relation at such level remains unclear.
## 4 String theory and Yang-Mills-Higgs system
In general, moduli spaces of SUSY gauge theories are described in terms of the (eigenvalues of the) scalar fields from SUSY multiplets and – in general situation when space-time has compact dimensions – one should also add the Wilson loops of gauge fields themselves. The scalar Higgs fields in string picture are associated with the positions of branes (the hypersurfaces) in some transverse directions and the effective world-volume gauge theories are coming from open strings ending on D-branes. The quantum moduli spaces of branes are formulated in terms of full matrices (rather than their eigenvalues) with the potential <sup>2</sup><sup>2</sup>2Moduli space can be also described in terms of holomorphic superpotential, for example, for 6 real fields
$$\begin{array}{c}W(\mathrm{\Phi })=\mathrm{Tr}ϵ_{ijk}\stackrel{~}{\mathrm{\Phi }}_i[\stackrel{~}{\mathrm{\Phi }}_j,\stackrel{~}{\mathrm{\Phi }}_k]\end{array}$$
(25) where $`\stackrel{~}{\mathrm{\Phi }}_i=\mathrm{\Phi }_{i_1}+i\mathrm{\Phi }_{i_2}`$ are already complex scalars.
$$\begin{array}{c}V(\mathrm{\Phi })=\mathrm{Tr}_{i<j}[\mathrm{\Phi }_i,\mathrm{\Phi }_j]^2\end{array}$$
(26)
which arises in the action under the compactification of 10D $`𝒩=1`$ SUSY Yang-Mills theory
$$\begin{array}{c}\mathrm{Tr}_{d^{10}x}𝐅_{MN}^2+\mathrm{fermions}\end{array}$$
(27)
down to 4D ($`𝒩=4`$ SUSY) theory
$$\begin{array}{c}\mathrm{Tr}_{d^4x}𝐅_{\mu \nu }^2+(D_\mu \mathrm{\Phi }_i)^2+_{i<j}[\mathrm{\Phi }_i,\mathrm{\Phi }_j]^2+\mathrm{fermions}\end{array}$$
(28)
so that $`M=(\mu ,i)`$, i.e. $`A_\mu =A_M`$ for $`M=0,\mathrm{},3`$ and $`\mathrm{\Phi }_i=A_M`$ for $`M=4,\mathrm{},9`$. In the vacuum sector one can neglect the fermionic terms in (28) and, thus, speak about the Yang-Mills-Higgs system. Minima of the potential (26) correspond to $`[\mathrm{\Phi }_i,\mathrm{\Phi }_j]=0`$, or to the simultaneously diagonal matrices
$$\begin{array}{c}\mathrm{\Phi }_i=\mathrm{diag}(\varphi _1^{(i)},\mathrm{},\varphi _N^{(i)})\end{array}$$
(29)
Distinct eigenvalues correspond to $`U(1)^{N1}`$ gauge theory, when the eigenvalues of $`\mathrm{\Phi }_i`$ coincide the gauge symmetry is restored up to $`SU(N)`$. In terms of modern string theory, the action (28) literally corresponds to a system of $`N`$ parallel D3-branes or to (bosonic sector of) $`𝒩=4`$ SUSY Yang-Mills theory. Six scalars $`\mathrm{\Phi }_i`$ can all obey non-zero VEV’s so that the dimension of moduli space is $`6(N1)`$. D3 branes can be thought of as D5 branes $`X_0,\mathrm{},X_5`$ compactified on two-torus, if, say, the dimensions $`X_4`$ and $`X_5`$ are compact.
In theories with $`𝒩=4`$ SUSY in four dimensions effective couplings and BPS masses are not renormalized – the theory has lots of symmetries including conformal group and there are no dimensionful (mass) parameters. It means, in particular that the eigenvalues of the scalar fields $`\{\varphi _i\}`$ are not renormalized, as well as $`\{q_i\}`$ – the eigenvalues of the Wilson loops $`A_\mu 𝑑x^\mu `$ – if we consider a theory with compact dimensions. The correspondence between SW theories and integrable systems implies that theory with $`𝒩=4`$ SUSY corresponds to a system of free particles – with the Hamiltonians $`u_k=\frac{1}{k}\mathrm{Tr}\mathrm{\Phi }^k=\frac{1}{k}_i\varphi _i^k`$, where $`\{\varphi _i\}`$ play the role of momenta. If one adds compact dimensions, the extra moduli $`\{q_i\}`$ can play the role of dual co-ordinates – at least in the sense that the volume of moduli space is (non-renormalized, dimensionless) constant and the volume form can be presented as a degree of symplectic form
$$\begin{array}{c}\mathrm{\Omega }=\mathrm{Tr}\delta A\delta \mathrm{\Phi }=d\varphi _idq_i\end{array}$$
(30)
Breaking supersymmetry down to $`𝒩=2`$ the potential acquires extra mass terms $`m_i^2\mathrm{Tr}\mathrm{\Phi }_i^2`$ for two of three scalar fields and the dimension of ”scalar” moduli space goes down to $`2(N1)`$ or to one complex ”diagonal matrix” field (29). Moreover, in contrast to $`𝒩=4`$ theory, in general case (of nontrivial boundary conditions) matrices of scalar fields and monodromies become dependent of each other, or satisfy nontrivial commutation relation like
$$\begin{array}{c}[A,\mathrm{\Phi }]mJ\end{array}$$
(31)
linear in the parameter of ”massive deformation” (for $`m0`$ one comes back to $`𝒩=4`$ theory) and $`J`$ is some matrix of ”gauge-covariant” form.
Quantum effects turn the bare symplectic form (30) into
$$\begin{array}{c}\sum _i\delta q_i\wedge \delta p_i\to \sum _k\delta \vartheta _k\wedge \delta a_k\end{array}$$
(32)
where
$$\begin{array}{c}a_i=\oint _{A_i}𝑑S\end{array}$$
(33)
are the correct quantum variables – the Seiberg-Witten integrals. One may consider $`a_i=a_i(\mathrm{\Phi },\mathrm{\Lambda })`$ as a transformation from the bare quantities – the eigenvalues $`\{\varphi _i\}`$ – to their exact quantum values $`\{a_i\}`$ in the effective theory, playing the role of the exact quantum BPS masses. In the same way one should consider the transformation $`q_i\to \vartheta _i`$ as a transformation from the bare value of the monodromy to its exact quantum value in the effective theory.
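For orientation (our addition; this is the standard choice in the pure gauge, i.e. periodic Toda, case), the integrals (33) are taken of the generating differential
$$\begin{array}{c}dS=\lambda \,dz,\qquad \mathrm{\Lambda }^N\mathrm{cosh}z=P_N(\lambda ),\end{array}$$
so that $`a_i=\oint _{A_i}\lambda 𝑑z`$ indeed reduces to the bare eigenvalue $`\varphi _i`$ in the weak-coupling limit.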
## 5 Compactification to 3+1 dimensions and SUSY breaking
SUSY breaking can be elegantly achieved by compactifying theories with nontrivial boundary conditions (the Scherk-Schwarz mechanism). In the framework of SW theory this makes it possible, for example, to formulate the whole family of the ”adjoint matter” SW theories as various limits of a unique integrable system – the elliptic Ruijsenaars-Schneider model .
Consider now the compactification of the $`𝒩=2`$ SUSY Yang-Mills theory with only the vector supermultiplet down to 3+1 dimensions (the compact dimension having radius $`R_3\equiv R`$). If one takes all fields to have periodic boundary conditions in the compact direction, this would be an $`𝒩=4`$ (in the 3D sense) SUSY theory. If, however, one puts
$$\begin{array}{c}\varphi (x+R)=e^{iϵ}\varphi (x)\end{array}$$
(34)
on half of the fields, the resulting theory has only $`𝒩=2`$ three-dimensional SUSY (i.e. $`𝒩=1`$ in the 4D sense): the supersymmetry is (partially) broken by the non-periodic boundary conditions.
As an example, let us consider the case of the $`𝒩=2`$ 4D vector supermultiplet in the adjoint representation, consisting of $`(A_\mu ,\psi )`$ – a 4D $`𝒩=1`$ vector multiplet – and $`(\varphi ,\chi )`$ – a 4D $`𝒩=1`$ scalar multiplet, where $`\psi `$ and $`\chi `$ are two complex Weyl spinors. Let the latter acquire a nontrivial phase (34) under a shift along the compact direction; it then becomes massive, with mass $`\frac{ϵ}{R}`$. The 4D $`𝒩=1`$ vector multiplet remains massless and can be represented as a 3D $`𝒩=2`$ supermultiplet $`(A_\alpha ,\psi ,\frac{q}{R})`$, where $`\alpha =0,1,2`$, $`q=RA_3`$ and $`\psi `$ is a 3D complex spinor.
In contrast to $`𝒩=4`$ SUSY in 3D, the $`𝒩=2`$ supersymmetric theory can generate a superpotential . Introducing the complexified variables $`q_i\to q_i+i\gamma _i`$, where $`q_i`$ are the (properly normalized) eigenvalues of the matrix $`A_3`$ and $`\gamma _i`$ are the 3D dual photons, $`A_i=ϵ_{ijk}\partial _j\gamma _k`$, the superpotential acquires the form
$$\begin{array}{c}W\sim \frac{1}{R}\left(ϵ\mathrm{Tr}\mathrm{\Phi }^2+\mathrm{\Lambda }^2\left(\sum _{i=1}^{N-1}e^{q_{i+1}-q_i}+e^{q_1-q_N}\right)\right)\end{array}$$
(35)
where the first term is the ”4D contribution” , the second term (the first term in the brackets) has a 3D origin, and the last one is induced by 3+1D instanton contributions (see, for example, the recent paper and references therein for details). All the simple roots (the first term in the brackets) are usual 3D instantons (BPS ”monopoles”), giving the potential of the open Toda chain, while the last term (the negative root) appears only in 3+1 dimensions and can be treated as a 4D instanton (or caloron) contribution.
In the weak coupling limit this term vanishes, giving rise to the open Toda chain . This term also vanishes in the literally 3D gauge theory (when the compact dimension shrinks to zero); this is not surprising, since dimensional reduction implies that $`\frac{R}{g_4^2}=\frac{1}{g_3^2}`$ and $`\mathrm{\Lambda }\sim e^{-\frac{1}{g_4^2}}=e^{-\frac{1}{Rg_3^2}}`$, so that the 3D limit $`R\to 0`$ coincides (for fixed 3D coupling $`g_3`$) with the weak coupling limit in the 4D gauge theory. In the weak-coupling limit (in the $`SL(2)`$ or ”Liouville” example) the superpotential is
$$\begin{array}{c}\mathrm{\Lambda }^2e^{2q}=u-p^2=u-\left(\frac{dq}{dt}\right)^2\end{array}$$
(36)
it follows that
$$\begin{array}{c}dt=\frac{dq}{\sqrt{u-\mathrm{\Lambda }^2e^{2q}}}\underset{X=\mathrm{\Lambda }^2e^{2q}}{=}\frac{dX}{2X\sqrt{u-X}}\underset{(\text{36})}{=}\frac{dp}{p^2-u}\end{array}$$
(37)
and integration gives
$$\begin{array}{c}p=\sqrt{u}\frac{1+e^\vartheta }{1-e^\vartheta }=\sqrt{u}\frac{1+e^{-2\sqrt{u}t}}{1-e^{-2\sqrt{u}t}}=\sqrt{u}\mathrm{coth}\sqrt{u}t\end{array}$$
(38)
so that
$$\begin{array}{c}\mathrm{\Lambda }^2e^{2q}=X=2u\frac{e^\vartheta }{1-e^\vartheta }=2u\sum _{n=1}^{\infty }e^{n\vartheta }\end{array}$$
(39)
For $`e^\vartheta \ll 1`$, the whole ”instanton series” $`e^{2q}\sim e^\vartheta `$ reduces to the classical, or ”bare”, result. We see, indeed, that quantization – at least in the sense of constructing the effective actions – has the form of a canonical transformation of some integrable system <sup>3</sup><sup>3</sup>3In the case we literally discuss now – the Toda chain.
The opposite limit $`R\to \infty `$ corresponds to the uncompactified 4D gauge theory; in this limit the ”3D” variables $`\{q_i\}`$ or $`\{\vartheta _i\}`$ are massive, i.e. cannot play the role of moduli. Thus the dimension of the moduli space becomes equal to half of the dimension of the phase space of the integrable system, and the computation of the effective action also requires an integration over the ”3D” variables, or the Bogolyubov-Whitham averaging of the integrable system. This leads to the Whitham integrable system, which can be formulated in pure geometric terms along the lines of .
Let us point out that the SUSY breaking we have considered above is different from a direct SUSY breaking to the $`𝒩=1`$ 4D theory (without compactification), which implies the massless-monopole limit of the $`𝒩=2`$ spectral curve $`\mathrm{\Lambda }^N\mathrm{cosh}z=P_N(\lambda )`$ or, in particular, that the polynomial $`P_N(\lambda )`$ (14) turns into
$$\begin{array}{c}P_N(\lambda )=\mathrm{\Lambda }^N\mathrm{cosh}z,\qquad \lambda =2\mathrm{cosh}\frac{z}{N}=\xi +\xi ^{-1}\end{array}$$
(40)
This degeneration corresponds to the solitonic limit of the corresponding finite-gap integrable system – for pure gauge theory, of the periodic Toda chain. In particular, the curve (40) is a ”solitonic” curve in the Toda chain, with the roots of the polynomial $`Q(\lambda )`$ given by
$$\begin{array}{c}Q(\lambda )=\prod _{j=1}^{N-1}\left(\lambda -2\mathrm{cos}\frac{\pi j}{N}\right)\end{array}$$
(41)
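As a quick consistency check (our addition; the normalization $`\mathrm{\Lambda }=1`$ and the Chebyshev property $`P_N(2\mathrm{cos}\theta )=2\mathrm{cos}N\theta `$ of the degenerate curve are assumed), one can verify numerically that the curve indeed acquires double points exactly at the roots of $`Q(\lambda )`$, i.e. that $`P_N^2-4=(\lambda ^2-4)Q(\lambda )^2`$:

```python
import numpy as np

# Verify P_N(lambda)^2 - 4 = (lambda^2 - 4) Q(lambda)^2 with Q as in (41),
# for Lambda = 1 and P_N(2 cos(theta)) = 2 cos(N theta).
N = 5
lam = np.linspace(-1.9, 1.9, 7)        # sample points with |lambda| < 2
theta = np.arccos(lam / 2.0)
P = 2.0 * np.cos(N * theta)
Q = np.prod([lam - 2.0 * np.cos(np.pi * j / N) for j in range(1, N)], axis=0)
assert np.allclose(P**2 - 4.0, (lam**2 - 4.0) * Q**2)
print("double points at lambda = 2 cos(pi j/N) confirmed for N =", N)
```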
The generic form of the Toda BA function is
$$\begin{array}{c}\mathrm{\Psi }_n^{(\pm )}(\xi ,t)=\xi ^{\pm n}e^{\pm \sum _kt_k(\xi ^k-\xi ^{-k})}\frac{R_n^{(\pm )}(\xi ,t)}{R(\xi )}\end{array}$$
(42)
where
$$\begin{array}{c}R(\xi )=\prod _{s=1}^{N-1}(\xi -\gamma _s)\\ R_n^{(\pm )}(\xi ,t)=\psi _n^{(\pm )}(t)\prod _{s=1}^{N-1}(\xi -\mu _s(n,t))=\sum _{k=0}^{N-1}r_k(n,t)\xi ^k\end{array}$$
(43)
and the Toda chain Lax equation
$$\begin{array}{c}\lambda \mathrm{\Psi }_n=C_{n+1}\mathrm{\Psi }_{n+1}+p_n\mathrm{\Psi }_n+C_n\mathrm{\Psi }_{n-1}\\ C_n=e^{\frac{1}{2}(q_n-q_{n-1})},\qquad \lambda =\xi +\frac{1}{\xi }\end{array}$$
(44)
implies that
$$\begin{array}{c}r_0(n)-C_nr_0(n-1)=0\\ r_1(n)-C_nr_1(n-1)-p_nr_0(n)=0\end{array}$$
(45)
i.e.
$$\begin{array}{c}r_0(n)=C_nr_0(n-1)=\dots =e^{\frac{1}{2}(q_n-q_0)}r_0(0)\propto e^{\frac{1}{2}q_n}\end{array}$$
(46)
For the solitons coming from degeneration of $`N`$-periodic Toda chain one should impose the ”gluing conditions”
$$\begin{array}{c}\mathrm{\Psi }_n(\xi _j)=\mathrm{\Psi }_n\left(\frac{1}{\xi _j}\right),\qquad j=1,\dots ,N-1\end{array}$$
(47)
which mean that the BA function remembers that it came originally from a genus $`N-1`$ Riemann surface, and each pair of points $`\xi _j,\frac{1}{\xi _j}`$ corresponds to a degenerate handle. The condition (47) together with the explicit form (42) and $`\mathrm{\Psi }_{n+N}=w\mathrm{\Psi }_n`$ gives
$$\begin{array}{c}w=\xi ^N,\qquad \xi _j^{2N}=1\end{array}$$
(48)
i.e.
$$\begin{array}{c}\xi _j=e^{\frac{i\pi j}{N}}\end{array}$$
(49)
where the label $`j`$ can be restricted to $`j=1,\dots ,N-1`$ since
$$\begin{array}{c}\varphi _j=\xi _j+\frac{1}{\xi _j}=2\mathrm{cos}\frac{\pi j}{N}=\varphi _{2N-j}\end{array}$$
(50)
Eq. (47) explicitly reads
$$\begin{array}{c}\frac{R_n(\frac{1}{\xi _j})}{R_n(\xi _j)}=\prod _{k=1}^{N-1}\frac{\xi _j^{-1}-\mu _k(n,t)}{\xi _j-\mu _k(n,t)}=e^{\frac{2\pi inj}{N}+4i\sum _lt_l\mathrm{sin}\frac{\pi jl}{N}+Z_j(\gamma )}\\ Z_j(\gamma )\equiv \sum _{s=1}^{N-1}\mathrm{log}\frac{\xi _j-\gamma _s}{1-\xi _j\gamma _s}\\ j=1,\dots ,N-1\end{array}$$
(51)
This is a system of linear equations for the coefficients $`r_k(n,t)`$ of the polynomial $`R_n(\xi ,t)`$
$$\begin{array}{c}e^{\frac{i\mathrm{\Phi }_j(n,t)}{2}}\sum _{k=1}^{N-1}\mathrm{sin}\left(\frac{\pi jk}{N}+\frac{\mathrm{\Phi }_j(n,t)}{2}\right)r_k=0\\ \mathrm{\Phi }_j(n,t)\equiv \frac{2\pi nj}{N}+4\sum _lt_l\mathrm{sin}\frac{\pi jl}{N}-iZ_j(\gamma )\end{array}$$
(52)
which can be easily solved. The conditions (50) can be interpreted as the values of the scalar fields at the critical points of the superpotential, while the soliton trajectories connect these critical points.
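In practice (our addition; the toy matrix below merely stands in for the actual sine coefficients of (52)), such a homogeneous system – $`N-1`$ linear conditions fixing the $`r_k`$ up to overall normalization – is conveniently solved by reading the kernel off an SVD:

```python
import numpy as np

def null_vector(M):
    # M: (N-1) x N matrix of linear conditions; returns r with M r ~ 0,
    # i.e. the right singular vector of the smallest singular value.
    _, _, Vh = np.linalg.svd(M)
    return Vh[-1].conj()

M = np.array([[1.0, 2.0, -1.0],
              [0.0, 1.0,  1.0]])     # toy stand-in for the coefficients of (52)
r = null_vector(M)
print(np.allclose(M @ r, 0.0))       # True
```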
Another way to see that SUSY breaking down to $`𝒩=1`$ should correspond to the solitonic limit comes, possibly, from a more detailed study of the Whitham hierarchy. Generating a superpotential in the $`𝒩=1`$ theory can be thought of as switching on the Whitham dynamics $`\delta t_k\mathrm{Tr}\mathrm{\Phi }^k`$ in the sense of . For small values of $`\delta t_k`$ this corresponds to a perturbation of the smooth $`𝒩=2`$ solution, or to the computation of certain correlators in the $`𝒩=2`$ SYM theory. The $`𝒩=1`$ theory itself rather corresponds to finding the solutions of the Whitham equations for large values of $`t_k`$. The large $`t_k`$ asymptotic of the Whitham solutions brings us to the boundaries of the moduli space or, in other words, corresponds to the decoupling of the smooth finite-gap solutions into solitons.
## 6 T-duality and dualities in integrable systems
T-duality is one of the basic features of string theory in a target space with compact dimensions and an example of coordinate-momentum duality. It is well-known and, in fact, easy to see that the spectrum of a string on a circle of radius $`R`$ is invariant under the transformation $`R\to \frac{\alpha ^{}}{R}`$, which interchanges the KK momenta, propagating along the compact direction, with the windings of strings along the circle. The T-duality transformation also replaces the Wilson loops of gauge fields by positions of branes, or VEV’s of scalars, and vice versa (see, for example, for details).
The two different ”pictures” of the moduli space (of the compactified theory) – in terms of Wilson lines and in terms of VEV’s of scalar fields – are thus ”T-dual” to each other. This coordinate-momentum duality can be trivially seen in the theory with $`𝒩=4`$ SUSY, where it literally corresponds to the exchange of the independent parameters $`\{\varphi _i\}`$ and $`\{q_i\}`$ – the momenta (action variables) and co-ordinates (angles) of a trivial integrable system – free motion of particles on some torus. Breaking the $`𝒩=4`$ SUSY relates the corresponding gauge theory to an already nontrivial integrable system, where the action variables are no longer identified with the momenta. However, certain finite-dimensional integrable systems (of the Calogero-Moser-Ruijsenaars family) still possess nice duality properties , and it is not quite a coincidence that this duality can be easily constructed in the systems corresponding to the theories with KK excitations .
In other words, T-duality can be considered as a symmetry between the $`A`$ and $`\mathrm{\Phi }`$ variables in the relation (31). In terms of integrable systems this leads to a symmetry between two different sets of commuting variables on the full phase space – the original co-ordinates and the Hamiltonians (or action variables) of the integrable system .
As an example, consider first the 2-particle trigonometric Ruijsenaars system with the Hamiltonian
$$\begin{array}{c}h=h(p,q)=\mathrm{cosh}p\sqrt{1-\frac{m^2}{\mathrm{sinh}^2q}}\end{array}$$
(53)
From the Hamiltonian equations
$$\begin{array}{c}\frac{dq}{dt}=\frac{\partial h}{\partial p}=\mathrm{sinh}p\sqrt{1-\frac{m^2}{\mathrm{sinh}^2q}}\\ \frac{dp}{dt}=-\frac{\partial h}{\partial q}=-\frac{m^2\mathrm{cosh}p\mathrm{cosh}q}{\mathrm{sinh}^3q\sqrt{1-\frac{m^2}{\mathrm{sinh}^2q}}}\end{array}$$
(54)
it follows that <sup>4</sup><sup>4</sup>4In this form the equation of motion coincides exactly with that of the non-relativistic limit – the trigonometric Calogero model; the same effect occurs for the relativistic and non-relativistic Toda chains (see, for example, ).
$$\begin{array}{c}\frac{d^2q}{dt^2}=\frac{\partial }{\partial q}\left(\frac{m^2}{2\mathrm{sinh}^2q}\right)\end{array}$$
(55)
or
$$\begin{array}{c}\left(\frac{dq}{dt}\right)^2-\frac{m^2}{\mathrm{sinh}^2q}=E\end{array}$$
(56)
with $`E=h^2-1`$. Solving (56) one gets
$$\begin{array}{c}\sqrt{E}t=\mathrm{log}\left(\mathrm{cosh}q+\sqrt{\frac{m^2}{E}+\mathrm{sinh}^2q}\right)-\mathrm{log}\sqrt{1-\frac{m^2}{E}}\end{array}$$
(57)
(with a particular choice for the integration constant). It is easy to check that the symplectic form is
$$\begin{array}{c}\mathrm{\Omega }=dp\wedge dq=dh\wedge dt=dx\wedge d\pi \end{array}$$
(58)
where $`h=\mathrm{cosh}x`$, $`\pi =t\sqrt{E}=t\sqrt{h^2-1}`$. Moreover, it is easy to see that, introducing $`H=\mathrm{cosh}q`$, one gets from Eq. (57)
$$\begin{array}{c}H=\frac{1}{2}\sqrt{1-\frac{m^2}{E}}\left(e^\pi +e^{-\pi }\right)=\mathrm{cosh}\pi \sqrt{1-\frac{m^2}{\mathrm{sinh}^2x}}\end{array}$$
(59)
i.e. the Hamiltonian of the dual system, which is again the trigonometric Ruijsenaars model with the same coupling constant $`m^2`$.
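A direct numerical sanity check of this chain of formulas is easy to set up (our addition; it assumes the sign conventions restored in (53)-(57) above, and the initial data are arbitrary): integrate the Hamiltonian equations (54) and compare with the closed-form trajectory (57),

```python
import numpy as np

# RK4 integration of (54) versus the closed-form trajectory (57):
# cosh q(t) = sqrt(1 - m^2/E) cosh(sqrt(E)(t + t0)), with E = h^2 - 1
# and t0 fixed by the initial condition (the integration constant).
m = 0.5

def rhs(y):
    q, p = y
    root = np.sqrt(1.0 - m**2 / np.sinh(q)**2)
    return np.array([np.sinh(p) * root,                         # dq/dt =  dh/dp
                     -m**2 * np.cosh(p) * np.cosh(q)
                     / (np.sinh(q)**3 * root)])                  # dp/dt = -dh/dq

y = np.array([1.0, 1.2])              # initial (q, p); needs sinh^2 q > m^2
h = np.cosh(y[1]) * np.sqrt(1.0 - m**2 / np.sinh(y[0])**2)
E = h**2 - 1.0
c = np.sqrt(1.0 - m**2 / E)
t0 = np.arccosh(np.cosh(y[0]) / c) / np.sqrt(E)   # branch with q increasing

dt, T = 1e-4, 2.0
for _ in range(int(T / dt)):
    k1 = rhs(y); k2 = rhs(y + 0.5*dt*k1)
    k3 = rhs(y + 0.5*dt*k2); k4 = rhs(y + dt*k3)
    y = y + dt * (k1 + 2*k2 + 2*k3 + k4) / 6.0
print(y[0], np.arccosh(c * np.cosh(np.sqrt(E) * (T + t0))))   # should agree
```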
This self-duality of the trigonometric Ruijsenaars system turns into a duality between the trigonometric Calogero-Moser model and the rational relativistic Ruijsenaars model in the almost obvious ”nonrelativistic” limit. The equation of motion (56) can be equally considered as an equation of motion for the non-relativistic Calogero-Moser model with the Hamiltonian
$$\begin{array}{c}h_{CM}=\frac{1}{2}p^2-\frac{m^2}{\mathrm{sinh}^2q}\end{array}$$
(60)
so that $`h_{CM}=E=\sqrt{h^2-1}`$. The result of integration of the equation of motion is again (57), but now one has to consider the ”non-relativistic” limit of ”small” $`x`$ and $`p`$, i.e. $`h_{CM}=E=x^2`$, but still $`\mathrm{cosh}q=H`$. It follows then that the system (60) is dual to the rational Ruijsenaars model with the Hamiltonian <sup>5</sup><sup>5</sup>5 Going further, it is easy to check that in the ”double” non-relativistic limit one comes to the (again self-dual) rational Calogero model with
$$\begin{array}{c}h_C=p^2-\frac{m^2}{q^2}=x^2\end{array}$$
(61) or
$$\begin{array}{c}H_C=\pi ^2-\frac{m^2}{x^2}=q^2\end{array}$$
(62)
$$\begin{array}{c}H=\mathrm{cosh}\pi \sqrt{1-\frac{m^2}{x^2}}\end{array}$$
(63)
For the general $`N`$-particle trigonometric Ruijsenaars-Schneider system it was shown in that the duality transformation can be interpreted as a modular transformation in the space of $`SL(N)`$-valued flat connections on a torus. These flat connections are described by two $`SL(N)`$ matrices in general position, say $`(A,B)`$, modulo common conjugation: $`(A,B)\to (gAg^{-1},gBg^{-1})`$. According to , this space is endowed with the Poisson bracket
$$\begin{array}{c}\{A\underset{,}{\otimes }A\}=r_aA\otimes A+A\otimes Ar_a+(1\otimes A)r_{21}(A\otimes 1)-(A\otimes 1)r_{12}(1\otimes A)\\ \{A\underset{,}{\otimes }B\}=r_{12}A\otimes B+A\otimes Br_{12}+(1\otimes B)r_{21}(A\otimes 1)-(A\otimes 1)r_{12}(1\otimes B)\\ \{B\underset{,}{\otimes }B\}=r_aB\otimes B+B\otimes Br_a+(1\otimes B)r_{21}(B\otimes 1)-(B\otimes 1)r_{12}(1\otimes B)\end{array}$$
(64)
with
$$\begin{array}{c}r_{12}=\sum _{\alpha >0}E_\alpha \otimes E_{-\alpha }+\frac{1}{2}\sum _iH_i\otimes H_i\\ r_{21}=\sum _{\alpha >0}E_{-\alpha }\otimes E_\alpha +\frac{1}{2}\sum _iH_i\otimes H_i\\ r_a=\frac{1}{2}\left(r_{12}-r_{21}\right)\end{array}$$
(65)
which is degenerate, but can be inverted, for example, on a symplectic leaf, defined as
$$\begin{array}{c}ABA^{-1}B^{-1}=m^2\mathrm{𝟏}+R^{(1)}\end{array}$$
(66)
where $`R^{(1)}\propto 𝝃 \otimes 𝜼 `$ is a matrix of unit rank (or $`R_{ij}^{(1)}\propto \xi _i\eta _j`$, with some vectors $`𝝃 `$ and $`𝜼 `$ depending on $`A`$ and $`B`$). Diagonalizing, for example, $`A=\mathrm{diag}(Q_1,\dots ,Q_N)`$ one can turn the relation (66) into
$$\begin{array}{c}\left(\frac{Q_i}{Q_j}-m^2\right)B_{ij}=\stackrel{~}{R}_{ij}^{(1)}=\xi _i\stackrel{~}{\eta }_j\end{array}$$
(67)
where $`\stackrel{~}{𝜼}=𝜼B`$. Now, using the freedom of conjugation by diagonal matrices (which leaves the diagonal form of $`A`$ untouched) one can put (in general position) all $`\xi _j=1`$, ending up with
$$\begin{array}{c}B_{ij}=\frac{\stackrel{~}{\eta }_iQ_i}{Q_i-m^2Q_j}\end{array}$$
(68)
The traces of this matrix, as functions of $`\stackrel{~}{\eta }_i=e^{p_i}`$ and $`Q_i=e^{q_i}`$, give the Hamiltonians of the trigonometric Ruijsenaars-Schneider system. In the nonrelativistic limit equation (66) turns into the commutator relation
$$\begin{array}{c}[A,B]=-m^2\mathrm{𝟏}+R^{(1)}\end{array}$$
(69)
with the solution
$$\begin{array}{c}B_{ij}=p_i\delta _{ij}+\frac{m^2}{Q_i-Q_j}\end{array}$$
(70)
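As a cross-check (our addition; $`\stackrel{~}{\eta }_i`$ and $`Q_i`$ are taken at random and the rank is measured numerically), one can confirm that the matrix (68), together with $`A=\mathrm{diag}(Q_1,\dots ,Q_N)`$, solves the group relation (66) with a rank-one remainder:

```python
import numpy as np

# A B A^{-1} B^{-1} - m^2 * Id should have rank one for B as in (68).
rng = np.random.default_rng(0)
N, m2 = 4, 0.3
Q = np.exp(rng.normal(size=N))               # Q_i = e^{q_i}
eta = np.exp(rng.normal(size=N))             # tilde-eta_i = e^{p_i}

B = eta[:, None] * Q[:, None] / (Q[:, None] - m2 * Q[None, :])   # eq. (68)
C = np.diag(Q) @ B @ np.diag(1.0 / Q) @ np.linalg.inv(B) - m2 * np.eye(N)
print(np.linalg.matrix_rank(C, tol=1e-8))    # -> 1
```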
As we discussed in sect. 3, the trigonometric Ruijsenaars-Schneider system literally corresponds to the perturbative limit of the SW theory with adjoint mass and KK excitations (22), so this is how the perturbative T-duality transformation is realized on the phase space of the corresponding integrable system.
## 7 Conclusion
In these notes we have tried to review the main ingredients of the approach based on the relation of the Seiberg-Witten effective theories to integrable systems. Recent studies have shown that the existence of the exact non-perturbative integrable differential equations makes it possible to compute explicitly some physical quantities in 4D SUSY gauge theories.
Moreover, it turns out that an integrable system is seen most straightforwardly in the compactified SUSY gauge theory – in 3 plus 1 (compact) dimensions. In this case, one finds that the symplectic transformation is nothing but a change of variables from bare to exact quantum variables, and the set of Hamiltonians (or the spectral curve equation) arises as superpotentials in the compactified theory with broken SUSY.
Note added. In a very recent paper one can find very similar conclusions concerning the relation between superpotentials and the structures of the integrable systems we have discussed above. Moreover, that paper contains a detailed analysis of breaking SUSY down to $`𝒩=1`$ in the theory with finite adjoint mass – the corresponding superpotential coincides, as it should follow from general reasoning and the results of , with that of the elliptic Calogero-Moser model.
## Acknowledgements
I am indebted to H.Braden, E.Corrigan, V.Fock, A.Gerasimov, S.Kharchev, A.Losev, A.Mironov, A.Morozov, A.Rosly and B.Voronov for illuminating discussions and I am grateful to T.Inami, R.Sasaki, T.Uematsu, all other organizers of the conference in Kyoto and T.Takebe for warm hospitality in Japan. The work was also supported by the RFBR grant 98-01-00344 and the INTAS grant 99-0103.
# Instabilities of Hexagonal Patterns with Broken Chiral Symmetry
## I Introduction
Convection has played a key role in the elucidation of the spatio-temporal dynamics arising in nonequilibrium pattern forming systems. The interplay of well-controlled experiments with analytical and numerical theoretical work has contributed to a better understanding of various mechanisms that can lead to complex behavior. From a theoretical point of view the effect of rotation on roll convection has been particularly interesting because it can lead to spatio-temporal chaos immediately above threshold where the small amplitude of the pattern allows a simplified treatment. Early work of Küppers and Lortz showed that for sufficiently large rotation rate the roll pattern becomes unstable to another set of rolls rotated with respect to the initial one. Due to isotropy the new set of rolls is also unstable and persistent dynamics are expected. Later Busse and Heikes confirmed experimentally the existence of this instability and the persistent dynamics arising from it. They proposed an idealized model of three coupled amplitude equations in which the instability leads to a heteroclinic cycle connecting three sets of rolls rotated by 120<sup>o</sup> with respect to each other. Recently the Küppers-Lortz instability and the ensuing dynamics have been subject to intensive research, both experimentally and theoretically . It is found that in sufficiently large systems the switching between rolls of different orientation loses coherence and the pattern breaks up into patches in which the rolls change orientation at different times. The shape and size of the patches change persistently due to the motion of the fronts separating them. Other interesting aspects induced by rotation are the modification of the dynamics of defects and an unexpected transition to square patterns .
In this paper we are interested in the effect of rotation on hexagonal rather than roll (stripe) patterns as they arise in systems with broken up-down symmetry (e.g. non-Boussinesq convection or surface-tension driven convection). Complex dynamics, if they are indeed induced by the rotation, are likely to differ qualitatively from those in roll patterns due to the difference in the symmetry of the pattern. Considering the small-amplitude regime close to onset, we use coupled Ginzburg-Landau equations. On this level the Coriolis force arising from rotation manifests itself as a breaking of the chiral symmetry. We therefore consider quite generally the effect of chiral symmetry breaking on weakly nonlinear hexagonal patterns. Since the equations are derived from the symmetries of the system we expect them to capture the generic behavior close to onset.
The dynamics of strictly periodic hexagon patterns with broken chiral symmetry have been investigated in detail by Swift and Soward . They found that the heteroclinic orbit of the Busse-Heikes model is replaced by a periodic orbit arising from a secondary Hopf bifurcation off the hexagons. Their results have been confirmed in numerical simulations of a Swift-Hohenberg-type model . The competition between hexagons, rolls, and squares in rotating Bénard-Marangoni convection has been considered in . In the present paper we focus on the impact of rotation on the side-band instabilities of steady hexagon patterns, i.e. on instabilities that introduce modes with wavelengths or orientation different than those of the hexagons themselves. Thus, we extend the work of Sushchik and Tsimring to the case of broken chiral symmetry. We find that rotation can increase the wavenumber range over which the hexagons are stable with respect to long-wave perturbations. For larger values of the control parameter, however, additional short-wave instabilities arise. The long- and the short-wave modes can be steady or oscillatory. While in most cases they eventually lead to stable hexagon or roll patterns with different wavevectors, they can also induce persistent dynamics that can apparently not be described with Ginzburg-Landau equations.
The paper is organized as follows. In the following section we use symmetry arguments to introduce the appropriate Ginzburg-Landau equations. The stability with respect to long-wave perturbations is addressed in section III in which the coupled phase equations for the system are derived. General perturbations (within the Ginzburg-Landau framework) are considered in section IV. In section V we investigate numerically the nonlinear behavior resulting from the side-band instabilities. Conclusions are given in section VI.
## II Amplitude equations
We consider small-amplitude hexagon patterns in systems with broken chiral symmetry. For strictly periodic patterns the amplitudes $`𝒜_i`$ of the three sets of rolls (stripes) that make up the hexagon satisfy then the equations
$$\partial _t𝒜_1=ϵ𝒜_1+\alpha _0\overline{𝒜}_2\overline{𝒜}_3-g_1𝒜_1|𝒜_1|^2-g_2𝒜_1|𝒜_2|^2-g_3𝒜_1|𝒜_3|^2,$$
(1)
where the equations for the other two amplitudes are obtained by cyclic permutation of the indices and $`ϵ`$ is a small parameter related to the distance from threshold. The overbar represents complex conjugation. These equations can be obtained from the corresponding physical equations (e.g. Navier-Stokes) using a perturbative technique with the usual scalings ($`𝒜\sim 𝒪(ϵ^{1/2})`$ and $`\partial _t\sim 𝒪(ϵ)`$). In order for all terms in (1) to be of the same order the coefficient of the quadratic term must be small, $`\alpha _0\sim 𝒪(ϵ^{1/2})`$. This term arises from a resonance of the wavevectors of the three modes in the plane. The broken chiral symmetry manifests itself by the cross-coupling coefficients not being equal, $`g_2\ne g_3`$.
For completeness it should be noted that rotation leads in convection not only to a chiral symmetry breaking but also to a (weak) breaking of the translation symmetry due to the centrifugal force. In the following we will consider it to be negligible. In addition, for sufficiently small Prandtl number rotation can render the primary instability oscillatory .
In order to analyze the possibility of modulational instabilities spatial derivatives must be included in Eq. (1). We take the gradients in both directions to be of the same order, $`𝒪(ϵ^{1/2})`$ , and retain both linear and quadratic gradient terms. After rescaling the amplitude, time, and space we arrive at the equations,
$`\partial _tA_1`$ $`=`$ $`\mu A_1+(𝐧_1\cdot \nabla )^2A_1+\overline{A}_2\overline{A}_3-A_1|A_1|^2-(\nu +\stackrel{~}{\nu })A_1|A_2|^2-(\nu -\stackrel{~}{\nu })A_1|A_3|^2`$ (4)
$`+i(\alpha _1+\stackrel{~}{\alpha })\overline{A}_2(𝐧_3\cdot \nabla )\overline{A}_3+i(\alpha _1-\stackrel{~}{\alpha })\overline{A}_3(𝐧_2\cdot \nabla )\overline{A}_2`$
$`+i\alpha _2\left(\overline{A}_2(𝝉_3\cdot \nabla )\overline{A}_3-\overline{A}_3(𝝉_2\cdot \nabla )\overline{A}_2\right)`$
where now all the coefficients are $`𝒪(1)`$, and $`𝐧_i`$ and $`𝝉_i`$ represent the unit vectors parallel and perpendicular to the wavevector $`𝐤_i`$ (Fig. 1). The cross-coupling coefficients have been rewritten in terms of $`\nu `$ and $`\stackrel{~}{\nu }`$, with $`\stackrel{~}{\nu }`$ being proportional to the rotation and therefore giving a measure of the chiral symmetry breaking. In the gradient terms the chiral symmetry breaking manifests itself in the terms proportional to $`\stackrel{~}{\alpha }`$.
The influence of the nonlinear gradient terms in (4) involving $`\alpha _1`$ and $`\alpha _2`$ has been studied by several authors . The origin of the new term involving $`\stackrel{~}{\alpha }`$ is best understood by considering the coefficient $`\alpha `$ of the quadratic term. The gradient terms arise from its dependence on the wavenumber of the modes involved, $`\alpha =\alpha (𝐤_1,𝐤_2,𝐤_3)`$, i.e. when the equation is considered in Fourier space. Due to the resonance condition ($`𝐤_1+𝐤_2+𝐤_3=0`$) we can drop the dependence on one of the wavevectors. Then, up to an arbitrary global rotation $`\mathrm{\Psi }`$, the system can be specified by the angle $`\mathrm{\Theta }`$ between the wavevectors $`𝐤_2`$ and $`𝐤_3`$ and their moduli $`k_2`$ and $`k_3`$ (see Fig. 1). Due to isotropy the coefficient $`\alpha `$ cannot depend on the global rotation and can therefore be expressed as $`\alpha =\alpha (k_2,k_3,\mathrm{\Theta })`$. When evaluated at the critical values of the wavenumbers $`\alpha _0`$ is given by $`\alpha _0=\alpha (k_2^c,k_3^c,\mathrm{\Theta }^c=2\pi /3)`$ (in Eq. (4) we take the normalization $`\alpha _0=1`$). Since a change in the modulus of $`𝐤_i`$ can be effected by the replacement $`A_i\to A_ie^{iK𝐧_i\cdot 𝐱}`$, the dependence of $`\alpha `$ on the moduli $`k_2`$ and $`k_3`$ can be represented in real space by
$$\frac{\partial \alpha }{\partial k_2}\overline{A}_3(𝐧_2\cdot \nabla )\overline{A}_2+\frac{\partial \alpha }{\partial k_3}\overline{A}_2(𝐧_3\cdot \nabla )\overline{A}_3.$$
(5)
When the chiral symmetry is broken $`\partial \alpha /\partial k_2`$ and $`\partial \alpha /\partial k_3`$ need not be equal and it is convenient to introduce the coefficients
$$\alpha _1=\frac{1}{2}\left(\frac{\partial \alpha }{\partial k_3}+\frac{\partial \alpha }{\partial k_2}\right)\qquad \text{and}\qquad \stackrel{~}{\alpha }=\frac{1}{2}\left(\frac{\partial \alpha }{\partial k_3}-\frac{\partial \alpha }{\partial k_2}\right)$$
(6)
with $`\alpha _1`$ even and $`\stackrel{~}{\alpha }`$ odd in the amplitude of the symmetry breaking.
On the other hand, a variation in the angle between $`𝐤_2`$ and $`𝐤_3`$ is represented by
$$\frac{i}{k_c}\frac{\partial \alpha }{\partial \mathrm{\Theta }}(\overline{A}_2(𝝉_3\cdot \nabla )\overline{A}_3-\overline{A}_3(𝝉_2\cdot \nabla )\overline{A}_2)$$
(7)
with only one coefficient $`\alpha _2=(\partial \alpha /\partial \mathrm{\Theta })/k_c`$. This term is invariant under reflections interchanging the modes $`A_2`$ and $`A_3`$, since, in contrast to the normal vector $`𝐧_i`$, the tangential vector $`𝝉_i`$ changes sign under this reflection. Therefore the coefficient $`\alpha _2`$ is even in the amplitude of the symmetry breaking.
Equation (4) admits hexagon solutions $`A_i=He^{iK\widehat{𝐧}_i𝐱+i\varphi _i}`$ with a slightly off-critical wavenumber ($`𝐤_i=𝐤_i^c+𝐊_i`$, $`K\ll k_c`$), with
$$H=\frac{(1+2K\alpha _1)\pm \sqrt{(1+2K\alpha _1)^2+4(\mu -K^2)(1+2\nu )}}{2(1+2\nu )},\qquad \mathrm{\Phi }\equiv \varphi _1+\varphi _2+\varphi _3=0.$$
(8)
The stability of this solution to perturbations with the same wavevectors has been studied by several authors and can be summarized in the bifurcation diagram shown in Fig. 2. The hexagons appear through a saddle-node bifurcation at $`\mu =\mu _{sn}`$,
$$\mu _{sn}=-\frac{(1+2K\alpha _1)^2}{4(1+2\nu )}+K^2,$$
(9)
and become unstable $`via`$ a Hopf bifurcation at $`\mu =\mu _H`$,
$$\mu _H=\frac{(1+2K\alpha _1)^2(2+\nu )}{(\nu -1)^2}+K^2,$$
(10)
with a critical frequency $`\omega _c=2\sqrt{3}\stackrel{~}{\nu }(1+2K\alpha _1)^2/(\nu 1)^2`$. Note that the Hopf frequency does not depend on $`\stackrel{~}{\alpha }`$. The Hopf bifurcation is supercritical and for $`\mu >\mu _H`$ stable oscillations in the three amplitudes of the hexagonal pattern arise with a phase shift of $`2\pi /3`$ between them , resulting in what we are going to call oscillating hexagons. As $`\mu `$ is increased further, eventually a point $`\mu =\mu _{het}`$ is reached at which the branch of oscillating hexagons ends on the branch corresponding to the mixed-mode solution in a global bifurcation involving a heteroclinic connection. Above this point the only stable solution is the roll solution whose stability region is bounded below by
$$\mu _R=\frac{1}{(\nu +\stackrel{~}{\nu }-1)(\nu -\stackrel{~}{\nu }-1)}+K^2.$$
(11)
When $`|\stackrel{~}{\nu }|>\nu -1`$ the rolls are never stable and the limit cycle persists for arbitrarily large values of $`\mu `$. In the absence of the quadratic terms in Eq. (1) this condition corresponds to the Küppers-Lortz instability of rolls.
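For concreteness, the thresholds (9)-(11) and the Hopf frequency are easily evaluated; a small helper (our addition; the values $`\nu =2`$, $`\stackrel{~}{\nu }=0.5`$ are purely illustrative):

```python
import numpy as np

def thresholds(nu, nut, K=0.0, alpha1=0.0):
    c = 1.0 + 2.0*K*alpha1
    mu_sn = -c**2 / (4.0*(1.0 + 2.0*nu)) + K**2          # saddle-node, eq. (9)
    mu_H = c**2 * (2.0 + nu) / (nu - 1.0)**2 + K**2      # Hopf, eq. (10)
    mu_R = 1.0 / ((nu + nut - 1.0)*(nu - nut - 1.0)) + K**2   # rolls, eq. (11)
    omega_c = 2.0*np.sqrt(3.0)*nut*c**2 / (nu - 1.0)**2  # Hopf frequency
    return mu_sn, mu_H, mu_R, omega_c

print(thresholds(nu=2.0, nut=0.5))   # (-0.05, 4.0, 1.333..., 1.732...)
```

Note that for these values $`\mu _R<\mu _H`$, so the rolls become stable before the hexagons lose stability, leaving a bistable interval, consistent with the bifurcation diagram described above.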
## III Long-Wave approximation: Phase equation
Already below the Hopf bifurcation to oscillating hexagons the hexagonal pattern can be unstable to side-band perturbations. The behavior of long-wavelength modulations is described by the dynamics of the phase of the periodic structure. The phases $`\varphi _i`$ of the three modes of the hexagonal pattern can be combined to define a phase vector $`\mathit{\varphi }\equiv (\varphi _x,\varphi _y)`$, the components $`\varphi _x\propto (\varphi _2+\varphi _3)`$ and $`\varphi _y\propto (\varphi _2-\varphi _3)/\sqrt{3}`$ of which are related to the translation modes in the $`x`$- and $`y`$-directions. In the chirally symmetric case the phase vector satisfies the coupled diffusion equations
$$\tau _0\partial _t\mathit{\varphi }=D_{\perp }\nabla ^2\mathit{\varphi }+(D_{\parallel }-D_{\perp })\nabla (\nabla \cdot \mathit{\varphi }),$$
(12)
and can be decomposed into a longitudinal (irrotational) and a transversal (divergence-free) part, $`\mathit{\varphi }=\nabla \psi _l+\nabla \times \widehat{𝐞}_z\psi _t`$. The fields $`\psi _{l,t}`$ each satisfy a diffusion equation, with diffusion constants $`D_{\parallel }`$ and $`D_{\perp }`$, respectively.
We can use symmetry arguments to derive the form of the phase equation when the chiral symmetry is broken. We consider a general diffusion coefficient which is a tensor of rank four,
$$\partial _t\varphi _i=D_i^{jkl}\partial _j\partial _k\varphi _l$$
(13)
with summation over repeated indices implied. Invariance under reflection and rotations of $`60^o`$ restricts the number of possible independent coefficients $`D_i^{jkl}`$. In the case of broken chiral symmetry we split the diffusion tensor into two parts: one even under reflections, the other odd,
$$D_i^{jkl}=\overline{D}_i^{jkl}+\mathrm{\Omega }_i^m\stackrel{~}{D}_m^{jkl},$$
(14)
where $`\mathrm{\Omega }_i^m`$ is the antisymmetric tensor of rank two given by
$$\mathrm{\Omega }=\left(\begin{array}{cc}0& \omega \\ \omega & 0\end{array}\right),$$
(15)
with $`\omega `$ giving the strength of the chiral symmetry breaking (e.g. the rotation frequency). $`\overline{D}`$ and $`\stackrel{~}{D}`$ are even functions of $`\omega `$. As generators of the symmetry group we can take rotations of $`60^o`$ and reflections in $`x`$,
$$R_{60}=\left(\begin{array}{cc}\frac{1}{2}& -\frac{\sqrt{3}}{2}\\ \frac{\sqrt{3}}{2}& \frac{1}{2}\end{array}\right),\qquad \kappa _x=\left(\begin{array}{cc}1& 0\\ 0& -1\end{array}\right).$$
(16)
Requiring that $`\overline{D}`$ and $`\stackrel{~}{D}`$ be invariant under the operations (16), one can show that the most general form of the phase equation with broken chiral symmetry is given by
$$\partial _t\mathit{\varphi }=D_{\perp }\nabla ^2\mathit{\varphi }+(D_{\parallel }-D_{\perp })\nabla (\nabla \cdot \mathit{\varphi })+D_{\times _1}(\widehat{𝐞}_z\times \nabla ^2\mathit{\varphi })+D_{\times _2}(\widehat{𝐞}_z\times \nabla )(\nabla \cdot \mathit{\varphi }),$$
(17)
where $`\widehat{𝐞}_z`$ is a unit vector in the direction perpendicular to the plane.
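The equivariance that underlies this derivation can also be checked directly (our addition; the coefficient values are arbitrary test numbers, and the signs are those of our reconstruction of (17)): in Fourier space the phase operator built from the four terms of (17) commutes with $`60^o`$ rotations, while the reflection $`\kappa _x`$ maps it to the operator with $`D_{\times _1},D_{\times _2}\to -D_{\times _1},-D_{\times _2}`$:

```python
import numpy as np

# M(Q): Fourier-space form of the right-hand side of (17) acting on phi.
Dperp, Dpar, Dx1, Dx2 = 0.3, 0.7, 0.2, -0.1   # arbitrary test values
J = np.array([[0.0, -1.0], [1.0, 0.0]])       # e_z x (.) acting in the plane

def M(Q, dx1=Dx1, dx2=Dx2):
    q2 = Q @ Q
    return (-Dperp*q2*np.eye(2) - (Dpar - Dperp)*np.outer(Q, Q)
            - dx1*q2*J - dx2*np.outer(J @ Q, Q))

c, s = np.cos(np.pi/3), np.sin(np.pi/3)
R = np.array([[c, -s], [s, c]])               # rotation by 60 degrees
kappa = np.diag([1.0, -1.0])                  # reflection kappa_x
Q = np.array([0.4, 1.1])
print(np.allclose(M(R @ Q), R @ M(Q) @ R.T))                          # True
print(np.allclose(M(kappa @ Q), kappa @ M(Q, -Dx1, -Dx2) @ kappa.T))  # True
```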
It is worth emphasizing that, although the coefficients of this equation can be derived from the amplitude equations, its form is given by symmetry arguments and is, therefore, generic and valid even far from threshold. To derive the phase equation from the amplitude equations (4) we consider a perfect hexagonal pattern with a wavenumber slightly different from critical ($`k=k_c+K`$) and perturb it, both in amplitude and phase, $`A_i=(H+r_i)e^{iK\widehat{𝐧}_i\cdot 𝐱+i\varphi _i}`$. Away from threshold and away from the saddle-node and Hopf bifurcations, the amplitude modes $`r_1,r_2,r_3`$, and the global phase $`\mathrm{\Phi }=\varphi _1+\varphi _2+\varphi _3`$ are strongly damped and can be eliminated adiabatically. Following the usual procedures (e.g. ) we arrive at Eq. (17) with
$`D_{\perp }`$ $`=`$ $`\frac{1}{4}+\frac{1}{u^2+\omega ^2}\left\{\frac{1}{4}H^2u[(\alpha _1+\sqrt{3}\alpha _2)^2+3\stackrel{~}{\alpha }^2]-\sqrt{3}H\omega \stackrel{~}{\alpha }K-uK^2\right\},`$ (18)
$`D_{\parallel }`$ $`=`$ $`D_{\perp }+\frac{1}{2}\frac{1}{v}\left\{H^2\alpha _1(\alpha _1-\sqrt{3}\alpha _2)-H(3\alpha _1-\sqrt{3}\alpha _2)K+2K^2\right\},`$ (19)
$`D_{\times _1}`$ $`=`$ $`\frac{1}{u^2+\omega ^2}\left\{\frac{1}{4}\omega H^2[(\alpha _1+\sqrt{3}\alpha _2)^2+3\stackrel{~}{\alpha }^2]+\sqrt{3}Hu\stackrel{~}{\alpha }K-\omega K^2\right\},`$ (20)
$`D_{\times _2}`$ $`=`$ $`\frac{\stackrel{~}{\alpha }}{v}\left\{\sqrt{3}H^2\alpha _1-\sqrt{3}HK\right\}.`$ (21)
where
$`\omega =2\sqrt{3}H^2\stackrel{~}{\nu },`$ (22)
$`u=2H^2(1-\nu )+2(1+2K\alpha _1)H,`$ (23)
$`v=2H^2(1+2\nu )-(1+2K\alpha _1)H.`$ (24)
The coefficients $`D_{\times _1}`$ and $`D_{\times _2}`$ are odd in the symmetry-breaking terms $`\stackrel{~}{\nu }`$ and $`\stackrel{~}{\alpha }`$. At the Hopf bifurcation curve $`u=0`$, implying $`H=(1+2K\alpha _1)/(\nu -1)`$ and $`\omega =\omega _c`$, while $`v=0`$ represents the saddle-node instability.
Expanding the phase in normal modes $`\mathit{\varphi }=\mathit{\varphi }^0e^{i𝐐\cdot 𝐱+\sigma t}`$ we obtain the dispersion relation
$$\sigma ^2+(D_{\parallel }+D_{\perp })Q^2\sigma +\left(D_{\parallel }D_{\perp }+D_{\times _1}(D_{\times _1}+D_{\times _2})\right)Q^4=0,$$
(25)
whose eigenvalues are
$$\sigma _{1,2}=-\frac{1}{2}\left[D_{\parallel }+D_{\perp }\pm \sqrt{(D_{\parallel }-D_{\perp })^2-4D_{\times _1}(D_{\times _1}+D_{\times _2})}\right]Q^2.$$
(26)
When $`D_{\times _1}=0`$ the eigenvalues become simply $`\sigma _1=-D_{\parallel }Q^2`$ and $`\sigma _2=-D_{\perp }Q^2`$, corresponding to the eigenvalues of the irrotational and the divergence-free phase modes, respectively. If the rotation rate is small ($`D_{\times _1},D_{\times _2}\ll D_{\parallel },D_{\perp }`$) we can expand (26) and obtain,
$$\sigma _1=-\left(D_{\parallel }-\frac{D_{\times _1}(D_{\times _1}+D_{\times _2})}{(D_{\parallel }-D_{\perp })}\right)Q^2,\qquad \sigma _2=-\left(D_{\perp }+\frac{D_{\times _1}(D_{\times _1}+D_{\times _2})}{(D_{\parallel }-D_{\perp })}\right)Q^2.$$
(27)
For $`D_{\times _1},D_{\times _2}\sim D_{\parallel },D_{\perp }`$ this approximation is not valid and the longitudinal and transverse perturbations become coupled. An important novelty in this case is that the phase instability can become oscillatory. It occurs when the following conditions are satisfied,
$`D_{\parallel }+D_{\perp }=0,`$ (28)
$`(D_{\parallel }-D_{\perp })^2-4D_{\times _1}(D_{\times _1}+D_{\times _2})<0.`$ (29)
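A small helper (our addition) makes this classification explicit: given the four diffusion coefficients, it evaluates (26) and reports whether the fastest long-wave mode is stable, steady unstable, or oscillatory unstable:

```python
import numpy as np

def phase_growth_rates(D_par, D_perp, Dx1, Dx2, Q=1.0):
    # The two eigenvalues (26); complex when the discriminant is negative.
    disc = (D_par - D_perp)**2 - 4.0*Dx1*(Dx1 + Dx2)
    s = np.sqrt(complex(disc))
    return -0.5*np.array([D_par + D_perp - s, D_par + D_perp + s])*Q**2

def classify(D_par, D_perp, Dx1, Dx2):
    sigma = phase_growth_rates(D_par, D_perp, Dx1, Dx2)
    lead = sigma[np.argmax(sigma.real)]
    if lead.real <= 0.0:
        return "phase stable"
    return "oscillatory" if abs(lead.imag) > 1e-12 else "steady"

print(classify(0.25, 0.25, 0.0, 0.0))   # no chiral coupling: stable
print(classify(-0.1, 0.3, 0.0, 0.0))    # D_par < 0: steady instability
print(classify(0.1, -0.2, 0.4, 0.3))    # complex pair, positive real part
```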
In Fig. 3 and Fig. 5 (below) we represent the phase instability curves for a number of cases. For small values of the rotation rate $`\stackrel{~}{\nu }`$, the phase stability diagram is similar to that obtained in the absence of rotation, especially for small $`\mu `$. As $`|K|`$ is increased, both real eigenvalues in (26) go through zero consecutively, as indicated by the dashed and solid lines in Fig. 3. As $`\mu `$ is increased towards the Hopf bifurcation ($`\mu \to \mu _H`$) the two lines merge and the phase instability becomes oscillatory, as indicated by the solid lines. Note that, in contrast to the chirally symmetric case, the left and right stability limits do not merge at $`K=0`$ as the transition to oscillating hexagons is reached. Instead, they are open and over a range of wavenumbers the hexagons remain stable with respect to long-wave perturbations all the way to the Hopf bifurcation at $`\mu =\mu _H`$. Furthermore, while in the chirally symmetric case the analog of the Hopf bifurcation is transcritical (and steady) and leads discontinuously to rolls, the Hopf bifurcation is supercritical and leads to oscillating hexagons .
## IV General Stability Analysis
We now consider arbitrary perturbations of the hexagonal pattern $`A_i=(H+a_ie^{i𝐐\cdot 𝐱+\sigma t})e^{iK\widehat{𝐧}_i\cdot 𝐱}`$, with $`a_1`$, $`a_2`$, $`a_3`$ complex, and solve the resulting $`6\times 6`$ linearized system. Two of the six eigenvalues correspond to the global phase $`\mathrm{\Phi }`$ and the overall amplitude involved in the saddle-node bifurcation. In the regime of interest both are strongly negative. The next two correspond to the translation modes, and can be real or complex. It turns out that these modes can destabilize the hexagons not only *via* the longwave instabilities (26) but also *via* short-wave instabilities as illustrated in Fig. 4a, where the solid and dashed lines correspond to the real parts of the complex and real eigenvalues, respectively. Finally, there is a pair of complex conjugate eigenvalues corresponding to the Hopf bifurcation to oscillating hexagons. For some parameter values these eigenvalues merge with the ones corresponding to the phase modes and their real parts become positive (Fig. 4b and 4c). Note that in the chirally symmetric case the eigenvalues are always real and the instabilities long-wave .
In what follows we will consider $`\alpha _1=\alpha _2=0`$ for simplicity. Although non-zero values for these coefficients change the stability boundaries quantitatively, they are not found to induce any qualitatively different instability.
In Figs. 3 and 5 we present the stability limits obtained for several values of $`\stackrel{~}{\nu }`$ and $`\stackrel{~}{\alpha }`$. The short-dashed and the dotted lines correspond to the Hopf and saddle-node bifurcations, respectively, while the dot-dashed line is the curve above which the rolls become stable with respect to the hexagons. We do not address their side-band instabilities. The circles correspond to the results of the general stability analysis while the solid and dashed lines are the stability limits in the long-wave approximation, as given by (26) (with eigenvalues either real or a complex conjugate pair). The solid circles in Fig. 5 correspond to instabilities at finite wavenumber due to the Hopf modes.
Fig. 3 shows the stability limits for $`\stackrel{~}{\alpha }=0`$ but $`\stackrel{~}{\nu }\ne 0`$, i.e. the chiral symmetry is only broken at cubic order. While for small values of the control parameter the long-wave analysis gives the correct stability limits, for larger $`\mu `$ a steady short-wave instability preempts the long-wave instability before it becomes oscillatory. In this case the eigenvalues are complex for $`Q\to 0`$, but they split into two real eigenvalues for larger $`Q`$, one of which becomes positive (Fig. 4a). As the coefficient $`\stackrel{~}{\nu }`$ is increased the region in which the long-wave instability is relevant decreases (Fig. 3b) and shrinks to almost zero (Fig. 3c).
When the quadratic gradient terms are different from zero the instability regions become asymmetric with respect to $`K=0`$. The system is, however, invariant under the change $`\stackrel{~}{\alpha }\to -\stackrel{~}{\alpha }`$, $`K\to -K`$, $`\alpha _i\to -\alpha _i`$, and we will therefore consider only positive values of $`\stackrel{~}{\alpha }`$. For small $`\stackrel{~}{\alpha }`$ the region of the steady long-wave instability becomes smaller at one side of the bandcenter and larger at the other (Fig. 5a). As $`\stackrel{~}{\alpha }`$ is increased, this region shrinks to zero for $`K<0`$ and disappears above the Hopf curve for $`K>0`$. At this point all the instabilities are oscillatory (Fig. 5c). For $`\stackrel{~}{\alpha }=0.4`$ and $`\stackrel{~}{\alpha }=0.7`$ (Fig. 5b,c) the stability limit is entirely given by the longwave results for $`K>0`$, but for still larger values of $`\stackrel{~}{\alpha }`$ it becomes short-wavelength (Fig. 5d). For $`K<0`$ there is a large region in which the instability is short-wave and oscillatory. Close to the Hopf bifurcation the instability switches from the translation modes to the Hopf mode (cf. Fig. 4). As $`\stackrel{~}{\alpha }`$ is increased the region in which the instability is due to the Hopf modes grows.
## V Numerical simulations
In order to study the nonlinear behavior arising from the instabilities, we have performed numerical simulations of Eqs. (4). A Runge-Kutta method with an integrating factor that computes the linear derivative terms exactly has been used. Derivatives were computed in Fourier space, using a two-dimensional fast Fourier transform (FFT). The numerical simulations were done in a rectangular box of aspect ratio $`2/\sqrt{3}`$ with periodic boundary conditions. This aspect ratio was used to allow for regular hexagonal patterns.
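A stripped-down sketch of such a scheme (our addition; it is first order in time instead of Runge-Kutta, uses a square box instead of the aspect ratio $`2/\sqrt{3}`$, and drops the nonlinear gradient terms of (4), i.e. $`\alpha _1=\alpha _2=\stackrel{~}{\alpha }=0`$):

```python
import numpy as np

# Pseudo-spectral integration of (4): linear terms handled exactly by an
# integrating factor in Fourier space, nonlinear terms evaluated in real space.
L, Ng, dt = 35.0, 64, 0.05
mu, nu, nut = 0.2, 2.0, 0.5                     # mu, nu, tilde-nu
k1d = 2*np.pi*np.fft.fftfreq(Ng, d=L/Ng)
kx, ky = np.meshgrid(k1d, k1d, indexing="ij")
n = [np.array([1.0, 0.0]),                      # the three unit vectors n_i
     np.array([-0.5,  np.sqrt(3)/2]),
     np.array([-0.5, -np.sqrt(3)/2])]
E = [np.exp(dt*(mu - (ni[0]*kx + ni[1]*ky)**2)) for ni in n]   # e^{dt L_i(k)}

rng = np.random.default_rng(1)
A = [0.1*(rng.normal(size=(Ng, Ng)) + 1j*rng.normal(size=(Ng, Ng)))
     for _ in range(3)]
g = [(1.0, nu+nut, nu-nut), (nu-nut, 1.0, nu+nut), (nu+nut, nu-nut, 1.0)]

for step in range(2000):
    N = [np.conj(A[(i+1) % 3])*np.conj(A[(i+2) % 3])
         - sum(g[i][j]*A[i]*abs(A[j])**2 for j in range(3)) for i in range(3)]
    A = [np.fft.ifft2(E[i]*(np.fft.fft2(A[i]) + dt*np.fft.fft2(N[i])))
         for i in range(3)]
print("max |A_1| =", np.abs(A[0]).max())
```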
We start with a perfect hexagonal pattern with a wavenumber in the unstable region and add noise. In all the cases we have considered the numerical simulations reproduce correctly the linear stability limits. Over most of the parameter regime the nonlinear evolution of the instabilities is qualitatively very similar. The perturbation grows (with or without oscillations, depending on the kind of instability) until it destroys the original hexagon pattern, and the system then settles down to a stable periodic pattern. Therefore all the instabilities appear to be subcritical. Furthermore, it seems to be irrelevant whether the instability comes from the translation or the Hopf modes. The branch switching does not change the value of the unstable wavenumber nor the frequency (cf. Fig. 4b,c) and the behavior of the dispersion relation at lower values of the perturbation wavenumber does not play a role. For values of the control parameter for which rolls are unstable, the instabilities lead to a rotation of the original hexagonal pattern and to a change in its wavelength. Usually penta-hepta defects appear in the process. In the presence of rotation they annihilate each other quite fast, yielding a perfect pattern as the final state. For larger control parameters rolls become stable and the side-band instabilities of the hexagons eventually lead to roll patterns, independently of the specific type of the instability.
For certain parameter values, however, more complicated behavior is found. This is shown in Fig. 6, where we represent a reconstruction of the hexagonal pattern, $`\mathrm{\Psi }=\sum _{i=1}^3A_ie^{i𝐤_i^c\cdot 𝐱}`$ (top panel), as well as the corresponding Fourier spectrum of the amplitude $`A_1`$, $`\widehat{A}_1(K)`$ (bottom panel). In this case the instability develops close to the initial wavenumber of the unstable hexagonal pattern (Fig. 6a). As time progresses, however, modes with ever increasing $`y`$-component of the wavevector are excited. Independent of the maximal wavevectors retained in the simulations ($`15.3<K_{max}<46`$ with system size in the range $`17.5\le L\le 52.5`$) eventually the wavevectors with the largest possible $`y`$-components are excited and the peak in the spectrum displayed in Fig. 6b reaches the top border of the figure. Then the peak reemerges at the bottom border again, i.e. the wavevectors have very large negative $`y`$-components. This is shown more clearly in Fig. 7 where a cross-section of the Fourier spectrum in the $`y`$-direction for $`K_x=0`$ is shown for three times. At time $`t=90`$ most of the excited modes have already reemerged at (large) negative values of $`K_y`$. Obviously, in these simulations the solutions cease to be numerically resolved already well before $`t=60`$. Within the Ginzburg-Landau equations (4) the curve of marginal modes corresponds to a vertical line in the Fourier spectrum in Fig. 6. Thus, the excited modes lie predominantly along the critical curve and the numerically observed behavior suggests that the correct evolution of the pattern would involve a trend towards a rotation of the pattern and a spreading of the Fourier modes over the circle of marginal modes. Such dynamics reflect explicitly the isotropy of the system. They cannot be captured within the Ginzburg-Landau equations, which break the isotropy through the choice of the wavevectors corresponding to the amplitudes $`A_i`$. To represent dynamics as suggested in Fig. 6 correctly, models that retain the isotropy have to be used. This motivates the use of Swift-Hohenberg-type models <sup>*</sup><sup>*</sup>*Formally, isotropy can be recovered by modifying the gradient terms in Eq. (4) . However, this requires that the amplitudes be allowed to vary rapidly in space.. Investigations of the complex dynamics that can arise from instabilities identified here have been performed in . They show indeed a bistability between the ordered hexagons and a spatio-temporally chaotic state with an almost isotropic Fourier spectrum.
## VI Conclusion
In this article we have analyzed the effect of chiral symmetry-breaking on the stability of hexagonal patterns. Such patterns arise, for instance, in non-Boussinesq Rayleigh-Bénard convection and in Marangoni convection, where the chiral symmetry can be broken by rotating the system. Focussing on the regime near threshold we have used the appropriate Ginzburg-Landau equations for the three modes making up the hexagon pattern. The chiral symmetry breaking introduces an asymmetry between the cubic coupling coefficients as well as a new nonlinear gradient term. The general linear stability analysis of these equations revealed long-wave as well as short-wave instabilities. The long-wave instabilities, which are captured with coupled phase equations, can be steady or oscillatory. For all parameter regimes investigated, the short-wave instabilities arise for larger values of the control parameter, but below the transition to oscillating hexagons. They can be due to the translation or the Hopf modes. In the latter case they are always oscillatory.
In contrast to the Küppers-Lortz instability of stripe patterns , no regime was identified in which hexagon patterns become unstable at all wavelengths. Nevertheless, persistent irregular dynamics of disordered hexagon patterns can apparently arise from the short-wave instability. Our numerical simulations of the Ginzburg-Landau equations indicate that the nonlinear evolution ensuing from the instability tends to introduce modes with wavevectors covering the whole critical circle. Of course, such a state in which the Fourier modes are distributed almost isotropically over the critical circle cannot be described by Ginzburg-Landau equations, since they break the isotropy at the very outset. This suggests the use of Swift-Hohenberg-type equations, which preserve the isotropy of the system. They are often used as truncated model equations (e.g. ) to study the qualitative behavior of systems, but can under certain conditions also be derived from the basic (fluid) equations as a long-wave description . Recently, in such investigations of hexagons with broken chiral symmetry spatio-temporally chaotic states have been found to arise from the corresponding oscillatory short-wave instability . As in our simulations of the Ginzburg-Landau equations the spatio-temporal chaos persists although for the same parameters there exist also stable ordered hexagon patterns. This bistability is somewhat reminiscent of the coexistence of spiral-defect chaos and ordered roll convection in Rayleigh-Bénard convection without rotation . In the Swift-Hohenberg model the oscillatory short-wave instability can also lead to a supercritical bifurcation to hexagons that are modulated periodically in space and time . No such state could be identified in the Ginzburg-Landau equations discussed here.
Our results suggest that rotation may induce irregular dynamics in hexagonal convection patterns quite close to threshold. So far, disordered hexagon patterns (without broken chiral symmetry) have been found in Marangoni convection far from threshold and also in experiments on chemical Turing patterns . In the latter case they appear to be due to the competition with the stripe pattern in a bistable regime.
From previous work it is well known that the chiral symmetry breaking delays the transition from hexagons to stripe patterns. More specifically, the steady bifurcation to the unstable mixed state is replaced by a Hopf bifurcation to a state of coherently oscillating hexagons . Their side-band instabilities can be investigated with the same Ginzburg-Landau equations as discussed here .
###### Acknowledgements.
We gratefully acknowledge interesting discussions with F. Sain, M. Silber and C. Pérez-García. The numerical simulations were performed with a modification of a code by G.D. Granzow. This work was supported by D.O.E. Grant DE-FG02-G2ER14303 and NASA Grant NAG3-2113.
# The physics of the centrality dependence of elliptic flow
## Abstract
The centrality dependence of elliptic flow and how it is related to the physics of expansion of the system created in high energy nuclear collisions is discussed. Since in the hydro limit the centrality dependence of elliptic flow is mostly defined by the elliptic anisotropy of the overlapping region of the colliding nuclei, and in the low density limit by the product of the elliptic anisotropy and the multiplicity, we argue that the centrality dependence of elliptic flow should be a good indicator of the degree of equilibration reached in the reaction. Then we analyze experimental data obtained at AGS and SPS energies. The observed difference in the centrality dependence of elliptic flow could imply a transition from a hadronic to a partonic nature of the system evolution. Finally we exploit the multiplicity dependence of elliptic flow to make qualitative predictions for RHIC and LHC.
The goal of the ultrarelativistic nuclear collision program is the creation of the QGP – quark-gluon plasma – the state of deconfined quarks and gluons. It is understood that such a state requires (local) thermalization of the system brought about by many rescatterings per particle during the system evolution. It is not clear when and if such a dynamical thermalization can really occur. An understanding of these phenomena can be achieved by considering elliptic flow recently studied at AGS and SPS energies. It will be shown how the centrality dependence of the strength of elliptic flow, $`v_2`$, defined as the second coefficient in the Fourier decomposition of the particle azimuthal distribution , is an indicator of the degree of equilibration (thermalization) achieved in the system.
Our qualitative conclusions are based on the observation that in the hydro limit (which we equate in our discussion to complete thermalization) and in the opposite limiting case, the low density limit (where dynamical thermalization is not expected), the centrality dependence of elliptic flow is different. In the hydro limit, the mean free path is much less than the geometrical size of the system. The centrality dependence of flow is totally governed in this case by the initial geometry (eccentricity), the latter being roughly proportional to the impact parameter. In the low density limit, the mean free path is comparable to or larger than the system size. The final anisotropy in this case should be proportional to the ratio of the system size to the mean free path (the number of collisions). The anisotropy vanishes in the limit of infinite mean free path. The latter in its turn depends on the particle density, which is largest for central collisions and vanishes for very peripheral collisions. Note that the factors involved change drastically with centrality. One could imagine other reasons for a centrality dependence of elliptic flow in the hydro model, such as the initial conditions, viscous corrections, resonances, or effective volume corrections, but we expect that all these other factors have a much weaker dependence on the impact parameter. By considering the two limiting cases we hope to highlight qualitative considerations important for understanding the degree of thermalization and the partonic or hadronic nature of the collisions. In essence we present a framework for examining these questions experimentally; at the moment, however, it is mainly the experimental data that are not adequate to answer these questions convincingly.
We will often use the term “physics of the collision”. By this we mean both the degree of equilibration and whether the hadronic picture in terms of nucleons, pions, etc., or the partonic picture in terms of deconfined quarks and gluons, is more applicable to the evolution of the system. The partonic picture in our view is similar to a QGP but the system is not necessarily thermalized.
Low Density Limit
To discuss the centrality dependence of $`v_2`$ more quantitatively, we start from the hypothesis that the system is not dense and its evolution can be described by the first correction to the collisionless limit . Physically this means that the rescattering occurring during the system evolution changes the particle momenta very little on average, and the corresponding change in the distribution functions can be treated in first order as a perturbation. Under this assumption the final elliptic flow, $`v_2`$, is proportional to the initial elliptic anisotropy of the overlapping region, $`\epsilon `$ (introduced in flow analyses in and in its present form in ), and to the initial particle space density, which defines the probability for particles to rescatter.
The initial geometry of the overlapping zone can be evaluated in a simple Glauber type model with a Woods-Saxon nuclear density. The results are weakly dependent on the weights used . What is important is that if one wants to compare different energies, e.g. AGS, SPS and RHIC, the nuclear geometry cancels out, and only the dependence on multiplicity is left. This is true provided that the “physics” of the system evolution stays the same. If it changes then the scaling with multiplicity will be violated. This is a very important point if one reads it the other way around: if scaling is not observed then probably the physics has changed.
Under the assumption that the system is relatively dilute the momentum anisotropy is proportional to the spatial anisotropy, but the particles must also scatter to probe that anisotropy. Thus, the spectral distortion is directly proportional to the spatial anisotropy and to the number of rescatterings, i.e. the particle density in the transverse plane. In this limit the final elliptic flow is (see a more detailed formula in )
$$v_2\propto \epsilon \frac{1}{S}\frac{dN}{dy},$$
(1)
where $`S=\pi R_xR_y`$ is the area of the overlapping zone, with $`R_x^2\equiv \langle x^2\rangle `$ and $`R_y^2\equiv \langle y^2\rangle `$ describing the initial geometrical sizes of the system in the $`x`$ and $`y`$ directions, respectively. (The x-z axes lie in the reaction plane). The averages include a weighting with the number of collisions along the beam axis. The initial space elliptic anisotropy is defined as
$$\epsilon =\frac{R_y^2-R_x^2}{R_x^2+R_y^2}.$$
(2)
In our calculation we use a Woods-Saxon parameterization of the nuclear density with parameters $`R_A=1.12A^{1/3}`$ and $`a=0.547`$ fm. More information on the effect of different weights and the values of $`R_x^2,R_y^2,S`$ and $`\epsilon `$ as a function of impact parameter can be found in . The proportionality coefficient in Eq. (1) is defined by the ”physics” of the rescattering. If the physics is the same in central and peripheral collisions then Eq. (1) yields the centrality dependence of $`v_2`$.
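A compact version of this estimate (our addition; the collision-number weighting via the product of thickness functions and the grid parameters are our implementation choices) gives $`\epsilon `$ and $`S`$ as functions of the impact parameter:

```python
import numpy as np

# Glauber-type estimate of eps (Eq. 2) and S = pi R_x R_y for Pb+Pb,
# weighting <x^2>, <y^2> with T_A(x+b/2, y) * T_A(x-b/2, y).
A_mass = 208
R_A, a = 1.12 * A_mass**(1/3), 0.547            # fm

def thickness(x, y, nz=200):
    z = np.linspace(-2*R_A, 2*R_A, nz)
    r = np.sqrt(x**2 + y**2 + z[:, None, None]**2)
    rho = 1.0 / (1.0 + np.exp((r - R_A) / a))   # Woods-Saxon (unnormalized)
    return np.trapz(rho, z, axis=0)

grid = np.linspace(-12.0, 12.0, 121)
x, y = np.meshgrid(grid, grid, indexing="ij")
for b in (0.0, 4.0, 8.0, 12.0):
    w = thickness(x + b/2, y) * thickness(x - b/2, y)
    x2, y2 = np.sum(w*x**2)/np.sum(w), np.sum(w*y**2)/np.sum(w)
    print(f"b = {b:4.1f} fm  eps = {(y2-x2)/(y2+x2):5.3f}"
          f"  S = {np.pi*np.sqrt(x2*y2):6.1f} fm^2")
```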
Hydro Limit
As follows from Eq. (1) the elliptic flow increases with the particle density. Eventually it will saturate at the hydro limit, which would mean complete thermalization of the system. In this regime the centrality dependence of elliptic flow is mainly determined by the initial elliptic anisotropy of the overlapping zone in the transverse plane , and the ratio of the two should be approximately constant as shown in the first such calculations done by Ollitrault . From his results it follows that $`(v_2/\epsilon )_{hydro}\approx 0.27`$–$`0.35`$, depending on the equation of state used (with or without QGP)<sup>*</sup><sup>*</sup>*To avoid confusion, note the difference in definitions of $`\epsilon `$ used in Eq. (2) of this paper and $`\alpha _x`$ from . For Pb+Pb collisions the maximal value of $`\epsilon \approx 0.44`$ compared to $`\alpha \approx 0.3`$. Then, the results yield $`v_2^{\{p_t^2\}}/\epsilon \approx 0.55`$–$`0.7`$, where $`v_2^{\{p_t^2\}}`$ means the elliptic flow weighted with $`p_t^2`$. Recent calculations show that the particle elliptic flow is related to this quantity as $`v_2\approx 0.5v_2^{\{p_t^2\}}`$. The calculations give a somewhat smaller flow, resulting in $`(v_2/\epsilon )_{hydro}\approx 0.21`$–$`0.23`$ (partly due to the realistic treatment of resonances, which decrease the pion flow by about 15%). Note that in both calculations, and , the longitudinal expansion of the system is treated analytically assuming Bjorken scaling. Real 3D hydro calculations would be very useful, although we do not expect that they would greatly change the centrality dependence.
RQMD
Before discussing the experimental data we first consider a realistic model. We take RQMD v2.3 for our calculations. Fig. 1 (top) shows a comparison of the directly calculated $`v_2`$ of pions in Pb+Pb collisions at 158 GeV$`\cdot `$A at mid-rapidity ($`-1<y<1`$) with the expectation from the low density limit, $`v_2^{LDL}`$ (Eq. (1), normalized to the same area under the curve in order to illustrate just the centrality dependence). One can see rather good agreement, which suggests that RQMD is close to the low density limit even as one scans the centrality from peripheral to central collisions. (In this version of RQMD no QGP is simulated.) This is not that striking a conclusion, considering that no hydro-type behavior has ever been observed in RQMD. Note that the low density limit does not mean a low total number of rescatterings. The number of rescatterings can be large, provided all of them are relatively soft and the particle momentum changes little compared to the initial momentum. The cross section which enters the equations is the transport (not the total) cross section (see ). The centrality dependence expected for the hydro limit is shown on the same plot by a dashed line, also normalized to the same area under the curve ($`v_2^{HYDRO}\simeq 0.059\epsilon `$). Note the large difference between the two curves, which was not noted in . Fig. 1 (bottom) shows that the ratio of $`v_2`$ to the expected functional form is flat for the low density limit but not for the hydro limit. A centrality dependence similar to the low density limit was also observed in , where a computer simulation of an expanding pion gas was studied.
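A minimal continuation of the geometry sketch above illustrates how the two normalized shapes can be built; here $`dN/dy`$ is crudely taken proportional to the number of binary collisions, an illustrative assumption rather than the RQMD multiplicity:

```python
# Continuation of the Glauber sketch: centrality shapes of the two limits,
# each normalized to the same area, as done for the curves in Fig. 1.
def ncoll_proxy(b, L=12.0, n=121):
    """Quantity proportional to the number of binary collisions at b (fm);
    used here as a crude stand-in for dN/dy (illustrative assumption)."""
    grid = np.linspace(-L, L, n)
    X, Y = np.meshgrid(grid, grid, indexing="ij")
    w = thickness(X + b / 2, Y) * thickness(X - b / 2, Y)
    return w.sum() * (grid[1] - grid[0]) ** 2

bs = np.linspace(0.5, 13.0, 26)
geom = [overlap_geometry(b) for b in bs]
ldl = np.array([eps * ncoll_proxy(b) / S for (eps, S), b in zip(geom, bs)])
hydro = np.array([eps for eps, _ in geom])
ldl /= ldl.sum()        # normalize both shapes to the same area,
hydro /= hydro.sum()    # so only the centrality dependence is compared
# ldl peaks at mid-central b and falls off toward both central and very
# peripheral collisions; hydro tracks eps alone and keeps rising with b.
```

This reproduces the qualitative difference between the two curves in Fig. 1: the low density limit shape is peaked at mid-centrality, while the hydro limit shape grows monotonically toward peripheral collisions.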
Data
Now let us turn to the experimental data. At AGS energies the elliptic flow of charged particles and of transverse energy was measured by the E877 Collaboration. Unfortunately, the publication containing the detailed pseudorapidity dependence for each centrality lacks a figure showing just the centrality dependence. Our estimates, based on their charged particle flow data at midrapidity, are presented in Fig. 2.
The data indicate that at AGS the flow peaks at mid-centrality<sup>†</sup><sup>†</sup>†A similar centrality dependence of the transverse energy flow (from the same data ) can be found in the thesis of Chang ., consistent with the low density limit prediction and with no change of the physics with centrality. At this energy some decrease of the elliptic flow in peripheral collisions can also be attributed to shadowing by spectator matter. At SPS , preliminary data indicate that the elliptic flow peak moves towards peripheral collisions. By itself this fact would hint at a hydrodynamical picture of the system evolution; a more detailed look at the data shows that this is unlikely. First, the maximal value of the elliptic flow ($`v_2\simeq 0.04`$) is significantly less than predicted by hydro calculations (about 0.09–0.1)<sup>‡</sup><sup>‡</sup>‡In , agreement was claimed between hydro calculations and the NA49 mid-central data, leading to the conclusion of complete equilibration. However, this comparison was done for $`p_t<0.3`$ GeV/c, and it could be that the $`p_t`$ dependence of $`v_2`$ in the hydro model does not agree with experiment.. Second, in the hydro limit the elliptic flow should depend only on the initial space elliptic anisotropy, $`\epsilon `$. The preliminary NA49 data indicate that the ratio $`v_2/\epsilon `$, at least for semi-central collisions, is likely increasing with centrality (see the data presented in Fig. 3 below). This centrality dependence (natural for the low density limit) implies that we could still be far from the hydro regime<sup>§</sup><sup>§</sup>§One can argue that, taking into account systematic uncertainties, the preliminary SPS data for $`v_2/\epsilon `$ are consistent with being constant as a function of centrality. In this case it would indeed mean that the system has equilibrated and the hydro regime has been reached. The low absolute strength of the elliptic flow would then indicate that equilibration happens at a rather late time, when the spatial anisotropy $`\epsilon `$ has already decreased due to initial “free streaming”. We do not exclude this possibility but must wait for the final SPS data and the coming RHIC data to answer the question..
Assuming that at SPS the hydro regime has not been reached yet, the observed centrality dependence of elliptic flow would indicate that the physics of the system evolution is different in central and peripheral collisions. The elliptic flow peaks at more peripheral collisions because central collisions exhibit too little flow compared to the expectation from the AGS data scaled with multiplicity. A natural explanation would be that peripheral collisions are described by hadronic (re)scatterings (the same as at the AGS in both peripheral and central collisions), while in central collisions partonic physics becomes important. One possible mechanism responsible for the change could be color percolation, which occurs at high parton densities in central collisions and is discussed in more detail below.
Discussion
Summarizing, our view of the overall picture is the following: at AGS energies the physics of rescattering which defines the system evolution is hadronic in nature; at SPS it remains hadronic for peripheral collisions, but for central collisions the physics is likely partonic. The partonic picture should persist at RHIC energies, extending toward more peripheral collisions. At RHIC equilibration becomes more important, but it is not clear whether complete thermalization will be reached. At LHC energies the parton densities could become so high that (partonic) rescattering would lead to dynamical equilibration of the (partonic) system (the creation of regions of real QGP) and consequently to a hydrodynamical type of system evolution.
The above picture for collisions of heavy nuclei implies that the shape of the centrality dependence of elliptic flow changes continuously with beam energy. At AGS, the elliptic flow peaks at an impact parameter slightly larger than $`R_A`$, just as prescribed by the low density limit. At SPS energies the peak moves toward more peripheral collisions, possibly because the physics of relatively central collisions has changed from hadronic to partonic, which leads to weaker flow than one would expect from the increased multiplicity. If thermalization is not reached at RHIC, the elliptic flow peak could move back toward mid-central collisions, because the physics of peripheral and central collisions will then be the same – partonic rescattering – unlike the situation at SPS, where peripheral collisions are driven by hadronic rescattering, resulting in a relatively large flow signal. At even higher energies, at LHC, the elliptic flow should peak at more peripheral collisions, just as predicted by hydrodynamic calculations.
The schematic overall picture based on these observations is presented in Fig. 3, where the ratio of the elliptic flow to the initial space elliptic anisotropy is shown as a function of the initial particle density. At the moment this plot is qualitative, as many of the quantities shown have large uncertainties. The hydro limits can depend slightly on the initial particle density and, more importantly, on the thermalization time of the system; the values shown are an average of the results of . The predictions for the case without QGP are only for the equation of state of a massless pion gas; resonances can soften the EoS and lead to weaker flow. The uncertainty in the experimental points comes mainly from the determination of the collision centrality, which is required for the calculation of the initial space elliptic anisotropy and of the area of the overlapping region. The data points correspond to the centrality determined from the fraction of the total cross section in each centrality bin. Higher centralities were estimated from the experimental measurement of the number of participants . Finally, the smooth dashed curves are schematic illustrations of the hadronic and partonic scenarios, and the solid curve includes a transition between the two. In this plot we use the experimental charged particle multiplicity, assuming that it is proportional to the total particle multiplicity and also to the initial particle multiplicity. For the experimental values we use $`dN_{ch}/dy`$ at mid-rapidity from .
In the limit of very low density the objects which rescatter must be hadrons. At some critical density a partial deconfinement happens: the parton density becomes high enough that a colored parton can propagate in the transverse plane without hadronizing, because each parton is always close enough to other partons which screen its color<sup>¶</sup><sup>¶</sup>¶This picture is very close to the deconfinement (color percolation) model discussed by Satz for $`J/\mathrm{\Psi }`$ suppression.. Once the motion in the transverse plane becomes easier (there is no need for hadronization), the elliptic flow decreases. Note that the system can still be far from dynamical thermalization, which would occur only at even higher particle densities. Even more importantly, such a significant change in the behavior of $`v_2/\epsilon `$ can only happen if the system is not thermalized. See also the discussion of this question in , along with the discussion of the possibility of observing the QGP to hadron gas phase transition.
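As a purely schematic sketch of how the solid curve in Fig. 3 can be parameterized — every slope, limit and the critical density below are invented for illustration, not fits to the data — one can write:

```python
# Schematic parameterization of the solid curve in Fig. 3; all numbers are
# illustrative. Each branch rises linearly at low density (low density
# limit) and saturates at its own hydro limit; the jump between branches
# at d_c mimics the drop of v2/eps at partial deconfinement.
def branch(d, slope, limit):
    """Linear rise, slope*d, saturating smoothly at the given hydro limit."""
    return limit * slope * d / (limit + slope * d)

def v2_over_eps(d, d_c=15.0):
    """d: transverse particle density (1/S) dN_ch/dy in fm^-2 (hypothetical
    scale). Hadronic branch below d_c, partonic branch above it."""
    if d < d_c:
        return branch(d, slope=0.030, limit=0.12)   # hadronic scenario
    return branch(d, slope=0.005, limit=0.22)       # partonic scenario

for d in (2, 5, 10, 20, 40, 80):
    print(f"(1/S) dN_ch/dy = {d:3d} fm^-2 : v2/eps = {v2_over_eps(d):.3f}")
```

The essential feature is the non-monotonic behavior: $`v_2/\epsilon `$ grows along the hadronic branch, drops at the onset of deconfinement, and then grows again toward the QGP hydro limit.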
To prove or disprove the picture described above, one needs more accurate data on the centrality dependence of elliptic flow. We would like to emphasize the importance of flow measurements not only at medium impact parameters but over the full range of centrality, including rather central collisions where the anisotropic flow is small. The measurement of elliptic flow and its centrality dependence at RHIC thus becomes very important. Different models predict different rapidity densities for RHIC and LHC; assuming that they are higher than at SPS by factors of 2 and 8, respectively, we have indicated the expected regions for Au+Au (Pb+Pb) collisions in Fig. 3. Measurements of elliptic flow in collisions of lighter systems (e.g. Cu+Cu) are also very important, since they would cover the region of the SPS Pb+Pb data and would be useful in testing the above picture. The new SPS data taken at 40 GeV$`\cdot `$A energy are also of great interest, since they would bridge the two other data sets and may probe the onset of the transition from hadronic to partonic physics.
Note that our picture of nuclear collisions and QGP production differs from the usual one, which assumes thermal equilibrium even at rather low beam energies, where QGP is not expected, followed by the formation of regions of QGP as the collision energy increases. We believe that deconfinement can occur before dynamical thermalization is achieved, and that the centrality dependence of elliptic flow would be a good indicator of this.
We are grateful to J.-Y. Ollitrault, U. Heinz, H. Heiselberg, G. Cooper, P. Seyboth, R. Snellings, H. Sorge, and H.G. Ritter for useful discussions.
This work was supported by the Director, Office of Energy Research, Office of High Energy and Nuclear Physics, Division of Nuclear Physics of the U.S. Department of Energy under Contracts DE-AC03-76SF00098 and DE-FG02-92ER40713.