# Magnetism and magnetic asphericity in NiFe alloys
## 1 Introduction
NiFe alloys have been studied extensively, and experimental data have been available for decades. Neutron scattering has probed the local magnetic moments. Theoretical approaches include the Hartree-Fock treatments of Hasegawa and Kanamori, and of Kanamori. These authors assumed simple steeple-shaped model densities of states, and their results were therefore only qualitative. They used the coherent potential approximation (CPA) of Velický et al to describe disorder and configuration averaging. Mishra and Mookerjee used the Hartree-Fock model of Hasegawa and Kanamori, but obtained the averaged density of states from the augmented-space-technique based cluster CPA. Podgórny studied NiFe using the supercell linearized muffin-tin orbitals (LMTO) method, while Lipiński applied the tight-binding LMTO-CPA.
NiFe alloys show interesting behaviour. The magnetic behaviour of Ni is anomalous in the sense that the local magnetic moment of Ni depends sensitively on its immediate surroundings: the local magnetic moment on a Ni atom depends upon the number of Ni neighbours it has. The CPA, which replaces the random neighbourhood of an atom in a solid solution by an effective, averaged medium, cannot reflect this behaviour. The aim of this communication is to study NiFe by the augmented space recursion (ASR) technique. Recently we have shown that the ASR allows us to go beyond the CPA, without violating the essential Herglotz analytic properties, and to take into account the effect of the local neighbourhood. The ASR will be based on the TB-LMTO Hamiltonian
$$H=\sum_{RL}\hat{C}_{RL}\,\mathcal{P}_{RL}+\sum_{RL}\sum_{R^{\prime}L^{\prime}}\hat{\Delta}_{RL}^{1/2}\,S_{RL,R^{\prime}L^{\prime}}\,\hat{\Delta}_{R^{\prime}L^{\prime}}^{1/2}\,\mathcal{T}_{RL,R^{\prime}L^{\prime}}$$
$$\hat{C}_{RL}=C_{RL}^{B}+\left(C_{RL}^{A}-C_{RL}^{B}\right)n_R$$
$$\hat{\Delta}_{RL}^{1/2}=\left(\Delta_{RL}^{B}\right)^{1/2}+\left[\left(\Delta_{RL}^{A}\right)^{1/2}-\left(\Delta_{RL}^{B}\right)^{1/2}\right]n_R$$
(1)
Here $`\mathcal{P}_{RL}`$ and $`\mathcal{T}_{RL,R^{\prime}L^{\prime}}`$ are projection and transfer operators in the Hilbert space spanned by the tight-binding basis $`|RL\rangle`$, and $`n_R`$ is a random occupation variable which is 1 if the site $`R`$ is occupied by an atom of type A and 0 otherwise. The augmented space Hamiltonian replaces the random occupation variable by operators $`M_R`$ of rank 2. For models without any short-range order,
$$M_R=x\,\mathcal{P}_{\uparrow}^{R}+(1-x)\,\mathcal{P}_{\downarrow}^{R}+\sqrt{x(1-x)}\left(\mathcal{T}_{\uparrow\downarrow}^{R}+\mathcal{T}_{\downarrow\uparrow}^{R}\right)$$
$$|\uparrow\rangle=\sqrt{x}\,|0\rangle+\sqrt{1-x}\,|1\rangle$$
$$|\downarrow\rangle=\sqrt{1-x}\,|0\rangle-\sqrt{x}\,|1\rangle$$
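As a quick sanity check, $`M_R`$ is a $`2\times 2`$ matrix in the configuration basis, and its spectrum must be $`\{0,1\}`$ since it replaces the binary occupation variable $`n_R`$. The snippet below is our own minimal illustration (the concentration $`x`$ is an arbitrary choice, not a value from this work):

```python
import numpy as np

x = 0.3                           # illustrative A-atom concentration
s = np.sqrt(x * (1.0 - x))
M = np.array([[x, s],             # x P + (1-x) P' + sqrt(x(1-x)) (T + T')
              [s, 1.0 - x]])

evals = np.sort(np.linalg.eigvalsh(M))
assert np.allclose(evals, [0.0, 1.0])   # eigenvalues of n_R: site empty/occupied
assert np.isclose(M[0, 0], x)           # diagonal element gives <n_R> = x
print("spectrum of M_R:", evals)
```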
The ASR was carried out with LDA self-consistency. The computations included scalar relativistic corrections. The von Barth-Hedin exchange potential was used. We chose flexible Wigner-Seitz radii for Ni and Fe, as suggested by Kudrnovský and Drchal, to ensure that the atomic spheres are neutral and to avoid the calculation of the Madelung energy.
The recursion method then expresses the Green functions as continued-fraction expansions. The continued-fraction coefficients are obtained exactly up to eight levels, and the terminator suggested by Luchini and Nex is used to approximate the asymptotic part. The convergence of this procedure has been discussed by Ghosh et al. The local charge densities are given by:
$$\rho_{\sigma}^{\lambda}(r)=-\frac{1}{\pi}\,\mathrm{Im}\sum_{L}\int_{-\infty}^{E_F}dE\;G_{LL}^{\lambda,\sigma}(r,r,E)$$
(2)
Here $`\lambda `$ is either $`A`$ or $`B`$. The local magnetic moment is
$$m^{\lambda}=\int_{r<R_{WS}}d^3r\left[\rho_{\uparrow}^{\lambda}(r)-\rho_{\downarrow}^{\lambda}(r)\right]$$
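The numerical chain from recursion coefficients to densities of states can be sketched compactly. The block below is an illustration only: the coefficients are placeholders, and the generic square-root terminator is used as a stand-in for the Luchini-Nex terminator (an assumption on our part):

```python
import numpy as np

def terminator(z, a_inf, b_inf):
    """Self-consistent tail t = 1/(z - a_inf - b_inf^2 t), retarded branch."""
    sq = np.sqrt((z - a_inf) ** 2 - 4.0 * b_inf ** 2)
    t = (z - a_inf - sq) / (2.0 * b_inf ** 2)
    return np.where(t.imag < 0.0, t, (z - a_inf + sq) / (2.0 * b_inf ** 2))

def greens_function(E, a, b, eta=1e-4):
    """G(z) = 1/(z - a0 - b1^2/(z - a1 - ...)), wound up from the terminator."""
    z = E + 1j * eta
    g = terminator(z, a[-1], b[-1])
    for a_n, b_n in zip(reversed(a), reversed(b)):
        g = 1.0 / (z - a_n - b_n ** 2 * g)
    return g

a = [0.0] * 8                     # placeholder level coefficients (Ryd)
b = [0.25] * 8
E = np.linspace(-0.8, 0.8, 801)
dos = -greens_function(E, a, b).imag / np.pi
print("integrated DOS:", np.sum(dos) * (E[1] - E[0]))   # ~1 state
```

A spin- and species-resolved version of `dos`, integrated up to $`E_F`$, would then give the local moments of the equation above.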
We have also obtained the spectral densities and complex band structures for 50-50 Ni-Fe using the ASR in k-space, as suggested by Biswas et al.
For high concentrations of Ni the alloy forms a face-centred cubic solid solution. With decreasing Ni concentration, at the invar concentration of about 30%, the system undergoes a structural phase transition to a body-centred cubic solid solution. At this transition there is a sharp decrease of the Wigner-Seitz radius. This leads to larger overlap of the $`d`$-bands and hence, by the Stoner criterion, to a decrease of the magnetization. We shall limit ourselves to the face-centred cubic region beyond the invar concentration.
## 2 Results and Discussion
Figures 1(a) and (b) show the Ni and Fe partial densities of states for Ni concentrations varying between 90 and 40%. We note that the Ni densities hardly change with concentration, while the Fe densities change considerably. For the minority-spin partial densities on Fe, the structures below the Fermi energy do not change with concentration. The peak at around 0.0 Ryd grows with increasing Ni concentration; however, this structure is above the Fermi level and does not contribute to the magnetic moment. For the majority-spin partial densities on Fe, the peak around -0.4 Ryd grows with increasing Ni concentration. This ensures that the Ni local magnetic moment remains almost concentration independent, while the Fe local moment increases with Ni concentration. Figure 2 shows the Ni and Fe moments as well as the average moment as functions of Ni concentration. The results agree rather well with earlier CPA results. In addition we have shown the experimental results on the average magnetic moment by Crangle and Hallam, as well as the experimental data on the local magnetic moments on Fe and Ni by Shull and Wilkinson, Collins et al and Nishi et al. The experimental data are reproduced reasonably, and as well as the CPA did earlier.
Both the CPA and our results indicate a slight increase of the Fe magnetic moment as the Ni concentration increases. The experimental data, with their large error bars, tell us little about this trend with certainty. The asphericity, which is measured by the ratio of the $`t_{2g}`$ and $`e_g`$ contributions to the magnetic moment, is shown in figure 3. The results indicate that the Ni moment distribution is highly anisotropic: the $`t_{2g}`$ contribution dominates both the magnetization and the density-of-states peaks near the Fermi level. We also compare our results with the neutron scattering data of Brown et al and Ito et al. The trend of asphericity with concentration is reproduced, but our average asphericities are slightly larger. The Ni asphericity decreases with decreasing Ni concentration. The Fe moment distribution is almost spherical throughout the concentration range, with a slight decrease with decreasing Ni concentration. The average asphericity agrees reasonably well with experiment.
Figures 4(a) and (b) give the spectral densities for k-vectors going from the $`\Gamma`$ to the $`X`$ point for the $`e_g`$ and $`t_{2g}`$ bands. The results clearly show the splitting between the majority and minority spin bands. The imaginary part of the self-energy, which measures the disorder-scattering life-time, is clearly k-dependent: it is largest at the $`X`$ point and smallest at the $`\Gamma`$ point. This is in contrast to CPA calculations, where the life-times are almost k-independent. Angle-resolved photoemission experiments show the spectral behaviour of alloys, and the nature of fuzzy Fermi surfaces can be obtained from Compton scattering and positron annihilation experiments. The complex bands are obtained from the peaks and widths of the spectral densities. These are shown in figures 5(a) and (b). We note that the d-bands of Fe and Ni overlap considerably, and although the $`e_g`$ and $`t_{2g}`$ type bands are evident, they are broadened by disorder. The lines shown in figure 5 are to guide the eye. The statements made above based on the spectral functions are clearly seen in this figure. The Fermi level crosses the minority spin bands. These bands have finite life-times, and the Fermi surface should have the corresponding width associated with it. It should be interesting to look for this fuzzy Fermi surface experimentally.
NiFe alloys tend to exhibit short-ranged order. The augmented space recursion is ideally suited to take into account effects of short-ranged order. The formalism to include this was introduced earlier by Mookerjee and Prasad and applied to alloy systems by Saha et al. We have calculated the magnetic moment of the 50-50 NiFe alloy as a function of the Warren-Cowley short-ranged order parameter $`\alpha`$. The magnetic moment of Fe is hardly affected by short-ranged order. However, in the region where $`\alpha<0`$, indicating ordering, Ni is more likely to have Fe neighbours than in the case without short-ranged order, and the magnetic moment on Ni increases. Similarly, when $`\alpha>0`$, Ni atoms segregate and are less likely to have Fe as their neighbours, and the Ni moment decreases. This is shown in figure 6. We conclude that extra moment is induced on Ni by Fe atoms in its vicinity, and that this induced moment on Ni is the quantity most sensitive to short-ranged order in the alloy.
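The mapping from $`\alpha`$ to the neighbour statistics used in this argument is elementary; the following lines are a minimal illustration for the 50-50 alloy (our own sketch):

```python
def p_fe_neighbour(x_fe, alpha):
    """Warren-Cowley: P(Fe neighbour | Ni site) = x_Fe * (1 - alpha)."""
    return x_fe * (1.0 - alpha)

for alpha in (-0.4, 0.0, 0.4):
    print(f"alpha = {alpha:+.1f}:  P(Fe|Ni) = {p_fe_neighbour(0.5, alpha):.2f}")
# alpha < 0 (ordering): more Fe neighbours around Ni -> larger Ni moment;
# alpha > 0 (segregation): fewer Fe neighbours -> smaller Ni moment.
```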
## Figure Captions
(a) Partial densities of states at Ni sites. (b) Partial densities of states at Fe sites. The concentrations of Ni are shown, as are the Fermi energies (vertical lines).
Magnetic moments on Fe and Ni sites and the averaged magnetic moment as functions of Ni concentration. The diamond marks indicate the experimental results of Crangle and Hallam, the dashed points those of Shull and Wilkinson, the square points those of Collins and Lowde, and the crossed points those of Nishi et al.
Asphericity of the moment distribution on Fe and Ni sites as functions of Ni concentration. The cross marks indicate the neutron scattering results of Brown et al and Ito et al.
The spectral densities for (a) $`e_g`$ and (b) $`t_{2g}`$ states along the $`\Gamma`$ to $`X`$ direction for 50-50 FeNi. The bold curves are for the majority spins and the dotted curves for the minority spins. For both (a) and (b), the wave vectors are (from top to bottom) (1,0,0), (0.75,0,0), (0.5,0,0), (0.25,0,0) and (0,0,0) in units of $`2\pi/a`$, where $`a`$ is the lattice constant.
Complex band structure for $`d`$-bands of 50-50 FeNi (a) For the majority spin bands (b) for the minority spin bands. The widths of the bands are marked and the lines are only to guide the eye.
The averaged magnetic moment and the magnetic moment of Ni as functions of the Warren-Cowley short-ranged order parameter for 50-50 NiFe.
# Finite Order $`q`$-Invariants of Immersions of Surfaces into 3-Space
## 1. Introduction
Let $`F`$ be a closed surface. Let $`Imm(F,\mathbb{R}^3)`$ denote the space of all immersions of $`F`$ into $`\mathbb{R}^3`$ and let $`I_0\subseteq Imm(F,\mathbb{R}^3)`$ denote the space of all generic immersions.
###### Definition 1.1.
A function $`f:I_0\to\mathbb{Z}/2`$ will be called a "$`q`$-invariant" if whenever $`H_t:F\to\mathbb{R}^3`$ ($`0\le t\le 1`$) is a generic regular homotopy with no quadruple points, then $`f(H_0)=f(H_1)`$.
###### Definition 1.2.
Let $`I_n\subseteq Imm(F,\mathbb{R}^3)`$ denote the space of all immersions whose unstable self intersection consists of precisely $`n`$ generic quadruple points, and let $`I=\bigcup_{n=0}^{\infty}I_n`$.
###### Definition 1.3.
Given a $`q`$-invariant $`f:I_0\to\mathbb{Z}/2`$ we extend it to $`I`$ as follows: For $`i\in I_n`$ let $`i_1,\dots,i_{2^n}\in I_0`$ be the $`2^n`$ generic immersions that may be obtained by slightly deforming $`i`$. Define
$$f(i)=\sum_{k=1}^{2^n}f(i_k).$$
For any $`q`$-invariant, we will always assume without mention that it is extended to the whole of $`I`$ as in Definition 1.3.
The following relation clearly holds:
###### Proposition 1.4.
Let $`f`$ be a $`q`$-invariant. Let $`i\in I_n`$, $`n\ge 1`$, and let $`p\in\mathbb{R}^3`$ be one of its $`n`$ quadruple points. Then: $`f(i)=f(i_1)+f(i_2)`$ where $`i_1,i_2\in I_{n-1}`$ are the two immersions that may be obtained by slightly deforming $`i`$ in a small neighborhood of $`p`$.
(Or equivalently, since we are in $`\mathbb{Z}/2`$, $`f(i_2)=f(i_1)+f(i)`$.)
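For concreteness, consider our own small example with $`i\in I_2`$ having quadruple points $`p`$ and $`q`$. Resolving $`p`$ gives $`j_1,j_2\in I_1`$, and resolving $`q`$ in each $`j_l`$ gives the four generic immersions $`i_1,\dots,i_4`$ of Definition 1.3, so

$$f(i)=\sum_{k=1}^{4}f(i_k)=\bigl(f(i_1)+f(i_2)\bigr)+\bigl(f(i_3)+f(i_4)\bigr)=f(j_1)+f(j_2),$$

which is Proposition 1.4 applied at $`p`$.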
###### Definition 1.5.
A $`q`$-invariant $`f`$ will be called "of finite order" if $`f|_{I_n}\equiv 0`$ for some $`n`$.
The "order" of a finite order $`q`$-invariant $`f`$ is defined as the minimal $`n`$ such that $`f|_{I_{n+1}}\equiv 0`$.
(Compare our Definitions 1.3 and 1.5 with 2.2 of \[O\].)
An example of a $`q`$-invariant of order 1 is the invariant $`Q`$, which is defined by the property that if $`H_t:F\to\mathbb{R}^3`$ ($`0\le t\le 1`$) is a generic regular homotopy in which $`m`$ quadruple points occur, then $`Q(H_1)=Q(H_0)+m\ (\mathrm{mod}\ 2)`$. In other words $`Q`$ is defined by the property that $`Q|_{I_1}\equiv 1`$. It was proved in \[N\] that $`Q`$ indeed exists for any surface $`F`$.
There are $`M=2^{2-\chi(F)}`$ regular homotopy classes (i.e. connected components) in $`Imm(F,\mathbb{R}^3)`$. Given a regular homotopy class $`A\subseteq Imm(F,\mathbb{R}^3)`$, we may repeat all our definitions with $`A`$ in place of $`Imm(F,\mathbb{R}^3)`$. Let then $`V_n(A)`$ (respectively $`V_n`$) denote the space of all $`q`$-invariants on $`A`$ (respectively $`Imm(F,\mathbb{R}^3)`$) of order $`\le n`$. $`V_n(A)`$ and $`V_n`$ are vector spaces over $`\mathbb{Z}/2`$, and $`V_n=\bigoplus_{\alpha=1}^{M}V_n(A_\alpha)`$ where $`A_1,\dots,A_M`$ are the regular homotopy classes in $`Imm(F,\mathbb{R}^3)`$. More precisely, a function $`f:I_0\to\mathbb{Z}/2`$ is a $`q`$-invariant of order $`\le n`$ iff for every $`1\le\alpha\le M`$, $`f|_{I_0\cap A_\alpha}`$ is a $`q`$-invariant of order $`\le n`$. And so studying $`q`$-invariants on $`Imm(F,\mathbb{R}^3)`$ is the same as studying $`q`$-invariants on the various regular homotopy classes.
The purpose of this work is to prove the following:
###### Theorem 1.6.
If $`F`$ is orientable then $`\dim(V_n(A)/V_{n-1}(A))\le 1`$ for any $`A`$ and $`n`$.
By \[N\], $`\dim(V_1(A)/V_0(A))\ge 1`$ for any $`A`$ (for all surfaces, not necessarily orientable) and so we get:
###### Corollary 1.7.
If $`F`$ is orientable then $`dim(V_1(A)/V_0(A))=1`$ for any $`A`$.
Since, as mentioned, $`V_n=\bigoplus_{\alpha=1}^{M}V_n(A_\alpha)`$, we get:
###### Corollary 1.8.
If $`F`$ is orientable of genus $`g`$ then $`\dim(V_n/V_{n-1})\le 2^{2g}`$ for every $`n`$, and $`\dim(V_1/V_0)=2^{2g}`$.
## 2. General $`q`$-Invariants
The results in this section will not assume that the $`q`$-invariant $`f`$ is of finite order.
###### Theorem 2.1 (The 10 Term Relation).
Let $`i:F\to\mathbb{R}^3`$ be any immersion whose non-stable self intersection consists of one generic quintuple point and some finite number of generic quadruple points. Let the quintuple point be located at $`p\in\mathbb{R}^3`$ and let $`S_1,\dots,S_5`$ be the five sheets passing through $`p`$. Let $`i_k^1`$ and $`i_k^2`$ ($`k=1,\dots,5`$) be the two immersions obtained from $`i`$ by slightly pushing $`S_k`$ away from $`p`$ to either side. Then for any $`q`$-invariant $`f`$:
$$\sum_{k=1}^{5}\sum_{l=1}^{2}f(i_k^l)=0.$$
###### Proof.
Starting with $`i`$, take $`S_1`$ and push it slightly to one side. Then take $`S_2`$ and push it away on a much smaller scale. What we now have is an immersion $`j`$ where sheets $`S_2,\dots,S_5`$ create a little tetrahedron, and $`S_1`$ passes outside this tetrahedron. We define the following regular homotopy $`H_t:F\to\mathbb{R}^3`$ beginning and ending with $`j`$; we describe it in four steps: (a) $`S_1`$ sweeps to the other side of the tetrahedron. In this step four quadruple points occur. (b) $`S_2`$ sweeps across the triple point of sheets $`S_3,S_4,S_5`$. This results in the vanishing of the tetrahedron and its inside-out reappearance. One quadruple point occurs here. (c) $`S_1`$ sweeps back to its place. Four more quadruple points occur. (d) $`S_2`$ sweeps back to its place. One more quadruple point occurs.
All together we have ten quadruple points; say the $`m`$th quadruple point occurs at time $`t_m`$. It is easy to verify that the ten immersions $`H_{t_1},\dots,H_{t_{10}}`$ are precisely (equivalent to) the ten immersions $`i_k^l`$ ($`l=1,2`$, $`k=1,\dots,5`$). Also, $`f(H_{t_m})=f(H_{t_m-\epsilon})+f(H_{t_m+\epsilon})`$ and so:
$$\sum_{k,l}f(i_k^l)=\sum_{m=1}^{10}f(H_{t_m})=\sum_{m=1}^{10}\left(f(H_{t_m-\epsilon})+f(H_{t_m+\epsilon})\right).$$
But $`f(H_{t_m+\epsilon})=f(H_{t_{m+1}-\epsilon})`$ (where $`m+1`$ means $`(m+1)\ \mathrm{mod}\ 10`$), so each value appears twice and the sum is 0.
###### Proposition 2.2.
Let $`B(1)\subseteq\mathbb{R}^3`$ be the unit ball. Let $`D_1(1),\dots,D_4(1)\subseteq F`$ be four disjoint discs, each parameterized as the unit disc, and let $`D(1)=\bigcup_{k=1}^4D_k(1)`$. Let $`i\in I`$ and assume $`i^{-1}(B(1))=D(1)`$ and that $`i|_{D(1)}`$ maps each $`D_k(1)`$ linearly onto $`L_k\cap B(1)`$ where $`L_k`$ is a plane through the origin, and $`L_1,\dots,L_4`$ are in general position. Let $`i^{\prime}:D(1)\to B(1)`$ be an immersion of the same sort as $`i|_{D(1)}`$ but with planes $`L_1^{\prime},\dots,L_4^{\prime}`$.
For $`0\le r\le 1`$ let $`B(r)\subseteq B(1)`$ and $`D_k(r)\subseteq D_k(1)`$ be the ball and discs of radius $`r`$, and let $`D(r)=\bigcup_{k=1}^4D_k(r)`$.
Then: There exists an immersion $`j:F\to\mathbb{R}^3`$ satisfying:
1. $`j`$ is regularly homotopic to $`i`$ via a regular homotopy that fixes $`F\setminus D(1)`$.
2. $`j^{-1}(B(\frac{1}{2}))=D(\frac{1}{2})`$
3. $`j|_{D(\frac{1}{2})}=i^{\prime}|_{D(\frac{1}{2})}`$
4. $`f(j)=f(i)`$ for any $`q`$-invariant $`f`$.
###### Proof.
Slightly perturb $`i`$ if necessary so that the eight planes $`L_k,L_k^{\prime}`$ will all be in general position. We define a regular homotopy $`H_t`$ from $`i`$ to an immersion $`\tilde{i}`$ as follows: Say $`a`$ is the point in $`D_1(1)`$ which is mapped to the origin. Keeping $`a`$ and $`F\setminus D_1(1)`$ fixed, we isotope $`D_1(1)`$ within $`B(1)`$ to get $`\tilde{i}`$ with $`\tilde{i}^{-1}(B(\frac{7}{8}))=D(\frac{7}{8})`$ and $`\tilde{i}|_{D_1(\frac{7}{8})}=i^{\prime}|_{D_1(\frac{7}{8})}`$.
Let $`i^1,i^2`$ be the two immersions obtained from $`i`$ by slightly pushing $`D_1(1)`$ off of the origin, and let $`\tilde{i}^1,\tilde{i}^2`$ be the corresponding slight deformations of $`\tilde{i}`$. $`H_t`$ induces regular homotopies $`H_t^l`$ ($`l=1,2`$) from $`i^l`$ to $`\tilde{i}^l`$, such that $`H_t^l|_{D_1(1)}`$ avoids the origin.
Now, the only triple point of $`\{L_2,L_3,L_4\}`$ is the origin, and $`H_t^l|_{D_1(1)}`$ is an isotopy which avoids the origin, and so $`H_t^l`$ will have no quadruple point, and so $`f(i^l)=f(\tilde{i}^l)`$ ($`l=1,2`$). And so (by Proposition 1.4) $`f(i)=f(i^1)+f(i^2)=f(\tilde{i}^1)+f(\tilde{i}^2)=f(\tilde{i})`$.
We now repeat this process in the ball $`B(\frac{7}{8})`$ and with $`D_2(\frac{7}{8})`$, obtaining an immersion $`\tilde{\tilde{i}}`$ with $`\tilde{\tilde{i}}|_{D_1(\frac{6}{8})\cup D_2(\frac{6}{8})}=i^{\prime}|_{D_1(\frac{6}{8})\cup D_2(\frac{6}{8})}`$. After four iterations we get the desired $`j`$.
## 3. $`q`$-Invariants of Order $`n`$
We now prove the following theorem, which clearly implies Theorem 1.6 (our main theorem):
###### Theorem 3.1.
Assume $`F`$ is orientable and let $`f`$ be a $`q`$-invariant of order $`n`$.
Then for any regular homotopy class $`A\subseteq Imm(F,\mathbb{R}^3)`$, $`f`$ is constant on $`I_n\cap A`$.
###### Proof.
Let $`i\in I`$ and let $`p\in\mathbb{R}^3`$ be a quadruple point of $`i`$. A ball $`B\subseteq\mathbb{R}^3`$ centered at $`p`$ as in Proposition 2.2, i.e. such that $`i^{-1}(B)`$ is a union of four disjoint discs intersecting in $`B`$ as four planes, will be called "a good neighborhood for $`i`$ at $`p`$."
For $`i\in I_n`$ let $`p_1,\dots,p_n\in\mathbb{R}^3`$ be the $`n`$ quadruple points of $`i`$ in some order, and let $`B_1,\dots,B_n`$ be disjoint good neighborhoods for $`i`$ at $`p_1,\dots,p_n`$. We define $`\pi_k(i):F\to\partial B_k`$ as follows: Push each one of the four discs in $`B_k`$ slightly away from $`p_k`$ into the preferred side determined by the orientation of $`F`$. We now have a map that avoids $`p_k`$. Define $`\pi_k(i)`$ as the composition of this map with the radial projection $`\mathbb{R}^3\setminus\{p_k\}\to\partial B_k`$.
Let $`d_k(i)`$ denote the degree of the map $`\pi _k(i)`$.
Let the symmetric group $`S_n`$ act on $`\mathbb{Z}^n`$ by $`\sigma(a_1,\dots,a_n)=(a_{\sigma(1)},\dots,a_{\sigma(n)})`$, and let $`\widetilde{\mathbb{Z}^n}=\mathbb{Z}^n/S_n`$. Let the class of $`(a_1,\dots,a_n)`$ in $`\widetilde{\mathbb{Z}^n}`$ be denoted by $`[a_1,\dots,a_n]`$. For $`i\in I_n`$ we define $`d(i)\in\widetilde{\mathbb{Z}^n}`$ by $`d(i)=[d_1(i),\dots,d_n(i)]`$.
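(As a computational aside, elements of $`\widetilde{\mathbb{Z}^n}`$ may be represented as sorted tuples, so that $`d(i)`$ compares as a multiset; the two assertions below are our own illustration.)

```python
def d_class(degrees):
    """Class of (a_1, ..., a_n) in Z^n / S_n, represented as a sorted tuple."""
    return tuple(sorted(degrees))

assert d_class([2, -1, 0]) == d_class([0, 2, -1])   # same class under S_n
assert d_class([2, -1, 0]) != d_class([3, -1, 0])   # raising a_1 by 1 changes it
```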
We break our proof into two steps. *Step 1:* If $`i,j\in I_n\cap A`$ and $`d(i)=d(j)`$ then $`f(i)=f(j)`$. *Step 2:* For any $`(a_1,\dots,a_n)\in\mathbb{Z}^n`$, there are immersions $`i,j\in I_n\cap A`$ with $`d(i)=[a_1,a_2,\dots,a_n]`$, $`d(j)=[a_1+1,a_2,\dots,a_n]`$ and $`f(i)=f(j)`$. The theorem clearly follows from these two claims.
*Proof of Step 1:* By composing $`i`$ with an isotopy $`U_t:\mathbb{R}^3\to\mathbb{R}^3`$ we may assume that $`p_1,\dots,p_n\in\mathbb{R}^3`$ are the quadruple points of both $`i`$ and $`j`$ and that $`d_k(i)=d_k(j)`$ for each $`1\le k\le n`$. Let $`B_1,\dots,B_n`$ be disjoint good neighborhoods for both $`i`$ and $`j`$ at $`p_1,\dots,p_n`$. By composing $`i`$ with an isotopy $`V_t:F\to F`$ we may further assume that $`i^{-1}(B_k)=j^{-1}(B_k)`$ for every $`k`$. We name the four discs in $`F`$ corresponding to $`p_k`$ by $`D^{kl}`$, $`l=1,\dots,4`$.
Using Proposition 2.2 we may now change $`i`$ such that (for smaller $`B_k`$'s) we will have $`i|_{D^{kl}}=j|_{D^{kl}}`$ for all $`1\le k\le n`$, $`1\le l\le 4`$. The process of Proposition 2.2 indeed does not change $`d_k(i)`$, since the slightly pushed discs appearing in the definition of $`\pi_k(i)`$ can follow the regular homotopy of Proposition 2.2, and this induces a homotopy between the corresponding $`\pi_k(i)`$'s.
So we may assume $`i|_{D^{kl}}=j|_{D^{kl}}`$ for all $`1\le k\le n`$, $`1\le l\le 4`$. We will now show that there exists a regular homotopy from $`i`$ to $`j`$ such that each $`D^{kl}`$ moves only within its image in $`\mathbb{R}^3`$, and $`F\setminus\bigcup_{kl}D^{kl}`$ moves only within $`\mathbb{R}^3\setminus\bigcup_kB_k`$. We will then be done, since such a regular homotopy cannot change $`f(i)`$. Indeed, no sheet will pass $`p_1,\dots,p_n`$, and so the only singularities that might be relevant are the quadruple points occurring in $`\mathbb{R}^3\setminus\bigcup_kB_k`$. But whenever such a quadruple point occurs, we have $`n+1`$ quadruple points all together, and so, since $`f`$ is of order $`n`$, $`f(i)`$ will not change (Proposition 1.4).
To show the existence of the above regular homotopy, we construct the following handle decomposition of $`F`$. Our discs $`D^{kl}`$ ($`1\le k\le n`$, $`1\le l\le 4`$) will be the 0-handles. If $`g`$ is the genus of $`F`$ we will have $`2g+4n-1`$ 1-handles, as follows: $`2g`$ 1-handles will have both ends glued to $`D^{11}`$ such that $`D^{11}`$ with these $`2g`$ handles decomposes $`F`$ in the standard way. Then choose an ordering of the discs $`D^{kl}`$ with $`D^{11}`$ first, and connect each two consecutive discs with a 1-handle. The complement of the 0- and 1-handles is one disc, which will be the unique 2-handle.
We first define our regular homotopy on the union of 0- and 1-handles. Take a 1-handle $`h`$ of the first type. Since $`i`$ and $`j`$ are regularly homotopic, their restrictions to the annulus $`D^{11}\cup h`$ are also regularly homotopic. We can construct such a regular homotopy of $`D^{11}\cup h`$ fixing $`D^{11}`$ and avoiding $`\bigcup_kB_k`$.
Next consider the 1-handles of the second type. Take the 1-handle $`h`$ connecting $`D^{11}`$ to the second disc in our ordering, call it $`D^{\prime}`$. If $`i|_h`$ and $`j|_h`$ are not regularly homotopic relative to the gluing of $`h`$ to $`D^{11}\cup D^{\prime}`$, then we perform one full rotation of $`D^{\prime}`$, so as to make them regularly homotopic. (This will require a motion of the next 1-handle too.) Again we perform all regular homotopies while avoiding $`\bigcup_kB_k`$. We can now go along the chain of 1-handles of the second type, and regularly homotope them one by one as we did the first one. At each step we might need to move the next 0-handle and 1-handle, but we never need to change what we have already done.
So far we have constructed the desired regular homotopy on the union of 0- and 1-handles. By \[S\] this regular homotopy may be extended to the whole of $`F`$ (still avoiding $`\bigcup_kB_k`$). And so, if we denote our 2-handle by $`D`$, we are left with regularly homotoping $`i|_D`$ to $`j|_D`$ (relative $`\partial D`$). Since $`d_k(i)=d_k(j)`$ for all $`k`$, these maps are homotopic in $`\mathbb{R}^3\setminus\bigcup_kB_k`$. By \[S\] they are also regularly homotopic in $`\mathbb{R}^3\setminus\bigcup_kB_k`$, since the obstruction to that would lie in $`\pi_2(SO_3)=0`$.
*Proof of Step 2:* Take any immersion $`i^{\prime}\in I_n\cap A`$ with $`d(i^{\prime})=[a_1,\dots,a_n]`$ and let $`p_1,\dots,p_n\in\mathbb{R}^3`$ be the quadruple points of $`i^{\prime}`$, ordered such that $`d_k(i^{\prime})=a_k`$, $`1\le k\le n`$. (Clearly any $`[a_1,\dots,a_n]\in\widetilde{\mathbb{Z}^n}`$ may be realized within any regular homotopy class.) Take a disc in $`F`$ which is away from the $`p_k`$'s and start pushing it (i.e. regularly homotoping it) into its preferred side, directing it towards $`p_1`$. Avoid any of the $`p_k`$'s on the way, so the immersion $`i`$ we get just before arriving at $`p_1`$ will still have $`d_k(i)=a_k`$ for all $`k`$. We then pass $`p_1`$, creating a quintuple point, and continue to the other side, arriving at an immersion $`j`$ which is again in $`I_n`$. Clearly $`d_1(j)=a_1+1`$ and $`d_k(j)=a_k`$ for $`k\ge 2`$. We will now use Step 1 and the 10 term relation (Theorem 2.1) to show that $`f(i)=f(j)`$. Indeed, let us name the five sheets of our quintuple point $`S_1,\dots,S_5`$, where $`S_1`$ is the sheet coming from the disc that we pushed into $`p_1`$. Let $`i_m^1`$ ($`m=1,\dots,5`$) denote the immersion obtained by pushing $`S_m`$ into its non-preferred side, and $`i_m^2`$ the immersion obtained by pushing $`S_m`$ into its preferred side. Then $`i=i_1^1`$ and $`j=i_1^2`$. Recall that $`\pi_1(i_m^l)`$ is constructed by pushing all four sheets involved in the quadruple point at $`p_1`$ into their preferred side. And so for each $`1\le m\le 5`$, $`\pi_1(i_m^1)`$ has one sheet pushed into the non-preferred side and four sheets into the preferred side, and so the $`d_1(i_m^1)`$ are all equal to each other. And, for each $`1\le m\le 5`$, $`\pi_1(i_m^2)`$ has all five sheets pushed into the preferred side, and so the $`d_1(i_m^2)`$ are also all equal to each other. Clearly all this has no effect on $`d_k`$ for $`k\ge 2`$, and so we have $`d(i_m^1)=d(i)`$ and $`d(i_m^2)=d(j)`$ for all $`1\le m\le 5`$. And so by Step 1, $`f(i_m^1)=f(i)`$ and $`f(i_m^2)=f(j)`$ for all $`1\le m\le 5`$. And so by the 10 term relation, $`0=\sum_{m,l}f(i_m^l)=5f(i)+5f(j)=f(i)+f(j)`$, i.e. $`f(i)=f(j)`$.
|
no-problem/9903/astro-ph9903413.html
|
ar5iv
|
text
|
# Curvature and Acoustic Instabilities in Rotating Fluid Disks
## 1 Introduction
Spiral galaxies are characterized by bright "arms" spiraling out from a region near the center. Differential rotation will shear and wind these arms quickly if they are material features, so Lindblad (1958) and Lin & Shu (1964) developed a theory of density waves to overcome this winding dilemma. Lin & Shu (1964, 1966) also obtained the dispersion relation for these waves, which is the relation between frequency and wavenumber. An important discriminant in this dispersion relation is the stability parameter $`Q`$ for axisymmetric disturbances (Toomre 1964); when $`Q>1`$, the disk is stable against ring-like disturbances.
Lau & Bertin (1978) included additional terms that treated tangential forces for fluid spiral waves in a uniform disk, finding an additional destabilizing term they called $`J`$. They used a WKB approximation and ignored curvature terms, which scale inversely with galactocentric radius. Goldreich & Lynden-Bell (1965), Zang (1976), and Toomre (1981) also studied azimuthal forces, by considering the temporal response of shearing wavelets. Toomre (1981) termed the mechanism responsible for the spectacular growth of shearing waves a "swing amplifier". He found that spiral waves can grow for a short time even when $`Q>1`$, as long as $`Q`$ is not too large.
In section 3.2 below, we discuss a new instability in the usual spiral wave equations derived by Bertin et al. (1989; hereafter BLLT) that is relevant beyond the Lindblad resonances, i.e., inside the ILR and outside the OLR, even when $`Q>1`$. This is a regime that BLLT did not consider. The new instability depends on shear and self-gravity as in the BLLT derivation, but it also has a component in the absence of self-gravity that arises only from pressure and rotation. We therefore refer to it as an acoustic instability.
We also derive dispersion relations for fluid disks considering the curvature terms and other terms that were ignored in these previous studies, such as radial variations of the basic properties of the disk. Our additional terms depend on two dimensionless parameters,
$$\epsilon\equiv\frac{2\pi G\sigma_0}{r\kappa^2},$$
(1)
and $`a/(\kappa r)`$, for mass column density $`\sigma_0`$, radius $`r`$, epicyclic frequency $`\kappa`$, and sound speed $`a`$. Typically $`\epsilon\sim 0.1`$, which is small, so our new results are not important modifications to previous studies that considered only small $`Q`$. However, in regions where $`Q`$ is large, the additional terms lead to residual instabilities that can be important in some situations.
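To make the smallness of $`\epsilon`$ concrete, it can be evaluated for round numbers typical of a spiral-galaxy disk; the values below are our illustrative choices, not parameters taken from this paper:

```python
import math

G = 4.30e-6          # gravitational constant [kpc (km/s)^2 / Msun]
sigma0 = 5.0e7       # mass column density [Msun / kpc^2] (~50 Msun/pc^2)
r = 8.0              # galactocentric radius [kpc]
kappa = 36.0         # epicyclic frequency [km/s/kpc]

eps = 2.0 * math.pi * G * sigma0 / (r * kappa ** 2)   # equation (1)
print(f"epsilon = {eps:.2f}")                          # ~0.1, as quoted
```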
Numerical and analytical solutions to the modified dispersion relation are found here for typical regions in galactic and other disks. These include the main disks of spiral galaxies, where the rotation curves are approximately flat (Rubin et al. 1985); the inner disks of galaxies, where the rotation curves are approximately solid body; inner solid-body gaseous disks that are not self-gravitating (e.g., NGC 2207; Elmegreen et al. 1998), and non-self-gravitating Keplerian disks, as might be appropriate for proto-planetary disks or galactic nuclear regions surrounding black holes (Nakai et al. 1993).
## 2 The General Dispersion Relation
The dynamical response of an infinitely thin fluid disk to perturbation density waves will be studied here, considering various degrees of approximation using algebraic expansions in terms of small parameters. The disk response to spiral waves is considered to be weak enough for the linearized equations of motion to be valid. The effects of self-gravity, pressure, and differential rotation are included. The pressure is assumed to depend only on the density; in the formulation, the enthalpy is used. In the analysis, perturbation variables are assumed to be of the form $`g_1(r,\theta,t)=G(r)e^{i\int k(r)\,dr}e^{i(\omega t-m\theta)}`$, where $`r`$ is the radius, $`\theta`$ is the azimuthal angle, $`\omega`$ is the frequency of oscillation if it is real, and the growth or decay rate if it is imaginary, $`m`$ is the number of arms, $`k(r)`$ is the radial wavenumber, and $`G(r)`$ is the slowly varying amplitude. The spiral waves have an interarm spacing that is much shorter than the radius, that is $`\zeta\equiv 1/|\hat{k}r|\ll 1`$ for total wavenumber $`\hat{k}=\sqrt{k^2+m^2/r^2}`$. This condition is satisfied either for very short waves or for open spirals with many arms, and it allows asymptotic solutions to the density response. The same condition is used to express the density as a linear function of the gravitational potential (Bertin & Mark 1979).
The linearized equations of motion are combined with the continuity equation to relate the perturbation enthalpy $`h_1`$ to the perturbation gravitational potential $`\varphi _1`$ (Goldreich & Tremaine 1979, Lin & Lau 1979):
$$\mathcal{L}\left(h_1+\varphi_1\right)=Ch_1,$$
(2)
where $`\mathcal{L}=d^2/dr^2+A\,d/dr+B`$ and the coefficients are $`A=-\left(1/r\right)d\mathrm{ln}\mathcal{A}/d\mathrm{ln}r`$, $`B=-m^2/r^2+\left(2m\Omega/r^2\kappa\nu\right)d\mathrm{ln}\left(\kappa^2\left(1-\nu^2\right)/\sigma_0\Omega\right)/d\mathrm{ln}r`$, and $`C=\kappa^2\left(1-\nu^2\right)/a^2`$; also $`\mathcal{A}=\kappa^2\left(1-\nu^2\right)/\left(\sigma_0r\right)`$, where $`\nu`$ is the dimensionless frequency, $`\nu=\left(\omega-m\Omega\right)/\kappa`$, $`m`$ is the number of arms in the spiral pattern, $`\kappa`$ is the epicyclic frequency, $`\sigma_0`$ is the surface density of the disk, $`\Omega(r)`$ is the angular frequency, and $`a`$ is the sound speed in the disk. The perturbation gravitational potential can be expressed in the form $`\varphi_1(r)=\Phi(r)e^{i\int k(r)\,dr}`$; then Poisson's equation is (Bertin & Mark 1979):
$$\sigma_1=-\frac{\sigma_0}{a^2}f(r)\,\varphi_1,$$
(3)
with the definition
$$f(r)\equiv\frac{a^2}{2\pi G\sigma_0\,rK(\alpha,m)}\left[1+i\hat{A}(\alpha)\,r\frac{d\alpha}{dr}+\hat{B}(\alpha)\,r^2\frac{d^2\alpha}{dr^2}+\hat{C}(\alpha)\left(r\frac{d\alpha}{dr}\right)^2\right],$$
(4)
and the approximation
$$K(\alpha,m)=\tau\left(1+\frac{m+1/2}{2}\,\tau^2\right),\qquad\tau=\left(\alpha^2+(m+1/2)^2\right)^{-1/2},\qquad\alpha=kr-i\,r\Phi^{\prime}/\Phi-i/2.$$
This expansion for $`f(r)`$ is correct to third order in $`\alpha `$. The terms $`\widehat{A}`$, $`\widehat{B}`$, and $`\widehat{C}`$ are defined in Bertin & Mark (1979); they are:
$$\hat{A}(\alpha)=K_2-K_1^{\,2},\qquad\hat{B}(\alpha)=K_1^{\,3}+K_3-2K_1K_2,\qquad\hat{C}(\alpha)=9K_1^{\,2}K_2-6K_1K_3+3K_4-3K_1^{\,4}-3K_2^{\,2},$$
where
$$K_n=\frac{1}{n!\,K(\alpha,m)}\frac{\partial^nK(\alpha,m)}{\partial\alpha^n}.$$
The enthalpy, $`h_1=a^2\sigma _1/\sigma _0`$, can be expressed in terms of the potential $`\varphi _1`$ using equation (3) to obtain
$$h_1=-f(r)\,\varphi_1.$$
(5)
The expression for $`f`$, equation (4), can be expanded in the small parameter $`\zeta`$ to get $`f(r)=(\hat{k}/k_J)\left[1+if_1\zeta+f_2\zeta^2+\left(f_3+if_4\right)\zeta^3+\dots\right]`$. Here, $`k_J\equiv 2\pi G\sigma_0/a^2`$ is the two-dimensional equivalent of the Jeans wavenumber. The terms $`f_i`$ are real and depend on derivatives of $`k`$ and $`\Phi`$. For example, $`f_1=(k/\hat{k})\left[1/2-r\Phi^{\prime}/\Phi-\left(1+rk^{\prime}/k\right)\left(m^2/2\hat{k}^2r^2\right)\right]`$.
If only the first term is kept in the expansion of $`f(r)`$, and all radial gradients and the $`m`$-dependence of $`\hat{k}`$ are dropped, the Lin & Shu (1966) dispersion relation is obtained:
$$\left(\omega-m\Omega\right)^2=\kappa^2-2\pi G\sigma_0|k|+k^2a^2.$$
(6)
In terms of the dimensionless frequency, $`\nu=\left(\omega-m\Omega\right)/\kappa`$, the dimensionless wavelength, $`\eta=k_{crit}/|k|\ge 0`$, where $`k_{crit}=\kappa^2/2\pi G\sigma_0`$, and Toomre's stability parameter $`Q=\kappa a/\pi G\sigma_0`$, the Lin-Shu relation is
$$\nu^2=1-\frac{1}{\eta}+\frac{Q^2}{4\eta^2}.$$
(7)
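A few lines suffice to scan equation (7) and recover Toomre's criterion numerically; this is our own check, not a calculation from the paper:

```python
import numpy as np

def nu2(eta, Q):
    """Lin-Shu relation, eq. (7): nu^2 < 0 means growing ring modes."""
    return 1.0 - 1.0 / eta + Q ** 2 / (4.0 * eta ** 2)

eta = np.linspace(0.05, 3.0, 600)
for Q in (0.8, 1.0, 1.2):
    print(f"Q = {Q}: unstable wavelengths exist:", bool((nu2(eta, Q) < 0).any()))
# the minimum of nu^2 lies at eta = Q^2/2, where nu^2 = 1 - 1/Q^2,
# so instability requires Q < 1, Toomre's criterion
```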
## 3 Tangential forces and the stability parameter $`J`$
### 3.1 The Bertin-Lin-Lowe-Thurstans dispersion relation
In the derivation of the Lin-Shu dispersion relation, which is equation (7) above, terms of magnitude $`m/kr`$ are ignored. Thus the dispersion relation is accurate for radial oscillations only. When the azimuthal wavenumber $`m/r`$ is included, the gravitational instability is stronger (Lau & Bertin 1978). In the derivation of the corresponding dispersion relation, Lau and Bertin made the assumptions that in Poisson's equation the out-of-phase (i.e., imaginary) terms can be ignored and that the wavenumber $`|k|\sim k_J/2`$. Defining the total wavelength to be $`\lambda_m=2\pi/\sqrt{k^2+m^2/r^2}`$, Poisson's equation becomes
$$\varphi_1=-G\sigma_1\lambda_m,$$
(8)
and equation (2) is
$$\left(\sigma_1/\sigma_0\right)_{in\ phase}=-\frac{h_1+\varphi_1}{\kappa^2-\left(\omega-m\Omega\right)^2}\left[\frac{4\pi^2}{\lambda_m^2}+\frac{T_1}{(1-\nu^2)}\right],$$
(9)
where $`T_1=-(2m\Omega/\kappa r)^2(d\mathrm{ln}\Omega/d\mathrm{ln}r)`$. Note that the last term in the equation above contains $`(1-\nu^2)`$, which was not present in (C15) of Lau & Bertin (1978) because they were considering solutions near corotation ($`\nu\ll 1`$). However, $`T_1/(1-\nu^2)`$ can be derived from their equations (B6) and (B9); it comes from the second term in their equation (B9), and in fact Bertin et al. (1989) included it in their dispersion relation. Lau & Bertin (1978) also dropped the fifth term in (C14) when they derived (C15) because it is of higher order in $`1/kr`$. We do the same for equation (9), because this section concerns the low order terms as well. We include all of these terms in the higher order analysis in the rest of the paper.
The dispersion relation for spiral waves, which is analogous to equation (7), is now
$$\frac{Q^2}{4}=\hat{\eta}-\frac{(1-\nu^2)}{\hat{\eta}^{-2}+J^2/(1-\nu^2)},$$
(10)
where $`\hat{\eta}=k_{crit}/\hat{k}`$ and $`J^2=T_1/k_{crit}^2`$, as defined in Bertin et al. (1989). We call equation (10) the Bertin-Lin-Lowe-Thurstans (BLLT) dispersion relation, with dimensionless frequency $`\nu_{BLLT}`$. It describes the response of a differentially rotating disk to spiral perturbations. Evidently, the response is stronger than for axisymmetric perturbations by a factor that depends on the parameter $`J`$.
Equation (10) was studied extensively by Lau & Bertin (1978) and Bertin et al. (1989) in the limit $`\nu\to 0`$, which is near corotation. In this limit, equation (10) predicts an instability when the frequency is purely imaginary, and this occurs when
$$1+\left(\frac{Q^2}{4\hat{\eta}^2}-\frac{1}{\hat{\eta}}\right)\left(1+J^2\hat{\eta}^2\right)<0.$$
(11)
For ring-like perturbations ($`m=0`$ and $`J=0`$), equation (11) is satisfied when $`Q<1`$; that is, equation (11) reduces to Toomre’s (1964) instability condition, $`Q<1`$, for the axisymmetric case.
It is seen from equations (7) and (10) that when the imaginary terms in the equation of motion and Poisson's equation are ignored (Hunter 1983), the dimensionless frequency is pure real or pure imaginary according to the values of $`\hat{\eta}`$ and $`Q`$ and for small values of $`J^2`$. The exclusion of these imaginary terms is justified in the limits $`|kr|\gg 1`$ and $`k_{crit}r\gg 1`$. This latter quantity is $`\epsilon^{-1}`$, defined by equation (1). If the complex terms are included in the equation of motion and Poisson's equation, then the frequency solutions are complex functions of $`\hat{\eta}`$ and $`Q`$. In that case, the frequency $`\nu`$ contains a non-vanishing imaginary part in all of the parameter space ($`\eta,Q`$). This means there is always some instability present, consisting of an oscillation plus growth, so $`Q`$ is not an absolute discriminant of stability for small $`J`$ when higher order terms in $`\epsilon`$ are included. These new instabilities will be discussed in detail in sections 4 and 5, but first we consider the low-order BLLT equation in the region beyond the Lindblad resonances.
### 3.2 A modification to the BLLT equation beyond the Lindblad Resonances
In addition to the instability condition given by equation (11), the BLLT dispersion relation (Eq. 10) predicts another instability when the frequency $`\nu `$ is complex and has a real component with an absolute value larger than 1.
This is a different regime of position relative to the resonances than considered by BLLT. They were concerned mostly with instabilities near corotation, where the waves are evanescent. For this reason, they took $`\nu\approx 0`$. In this section, we consider stability properties inside the inner Lindblad resonance ($`\nu<-1`$) and outside the outer Lindblad resonance ($`\nu>1`$), using the same order of approximation as in BLLT. These are regions that were considered to be damped and radiative, respectively, in the BLLT model. We show that the same dispersion relations also allow solutions that grow as they oscillate, i.e., with complex frequencies.
The condition for this second instability may be obtained from the square root part of the solution for $`\nu ^2`$ in equation (10), and is:
$$\left(\frac{Q^2}{4\hat{\eta}^2}-\frac{1}{\hat{\eta}}\right)\left(\frac{Q^2}{4\hat{\eta}^2}-\frac{1}{\hat{\eta}}-4J^2\hat{\eta}^2\right)<0.$$
This condition can be written in the form
$$\frac{1}{\hat{\eta}}<\frac{Q^2}{4\hat{\eta}^2}<\frac{1}{\hat{\eta}}+4J^2\hat{\eta}^2,$$
(12)
which is the same as
$$\epsilon\hat{k}r<\frac{a^2\hat{k}^2}{\kappa^2}<\epsilon\hat{k}r+\frac{4s^2m^2}{\hat{k}^2r^2}$$
(13)
if we substitute $`Q\epsilon/2=a/(\kappa r)`$ and $`\epsilon\hat{\eta}=1/\hat{k}r`$, and define $`J^2/\epsilon^2\equiv s^2m^2`$, where $`s=2(-\Omega r\Omega^{\prime})^{1/2}/\kappa`$ and is of order 1. Equation (13) is a new condition for instability. When this condition is satisfied, the self-gravitating disk is unstable to the growth of spiral waves. The right hand side of equation (13) contains two terms. The first term depends on the self-gravity of the disk and the second depends on shear. When gravity is negligible, there is still instability from the second term, coming entirely from pressure, shear, and Coriolis forces. We refer to this as an acoustic instability; it has apparently not been considered previously in the literature.
Figure 1 shows the unstable regions for a five-arm spiral ($`m=5`$) in the $`(k_{crit}/|k|,Q^2)`$ plane from the BLLT dispersion relation, equation (10), considering a self-gravitating disk with a flat rotation curve ($`s^2=2`$); this case is studied in more detail in the next section. The growth rate is represented as a gray scale, and the borders of the regions of instability are represented as lines, obtained from the instability conditions. The most unstable region is in the bottom left corner of the figure, where the bottom line shows the stability limit for the Lin-Shu dispersion relation ($`m=0`$), which is obtained from equation (7). For $`m\ne 0`$ and $`J^2\ne 0`$ the border of this region of instability shifts to the line given by the BLLT condition (Eq. 11). The acoustic instability is bracketed by the two upper lines described by equation (12). The lower line corresponds to $`\nu^2=1`$. This occurs at a Lindblad resonance when the Doppler-shifted frequency of oscillation, $`(\omega-m\Omega)`$, matches the epicyclic frequency, $`\kappa`$, which is where the self-gravity of the disk is balanced by the pressure force (the Jeans condition) according to equation (10). The upper line corresponds to $`\nu^2=1+2J^2\hat{\eta}^2`$.
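The three regimes in Figure 1 can be reproduced by solving equation (10) in its quartic form, $`\nu^4-(2-d_{20})\nu^2+1-d_{20}(1+J^2\hat{\eta}^2)=0`$ with $`d_{20}=1/\hat{\eta}-Q^2/4\hat{\eta}^2`$ (cf. eq. 20 below). The sketch here is our own illustration, with $`\epsilon=0.1`$ assumed:

```python
import numpy as np

def classify(eta_hat, Q2, J2):
    """Place a point of the (k_crit/|k|, Q^2) plane in one of the three regimes."""
    d20 = 1.0 / eta_hat - Q2 / (4.0 * eta_hat ** 2)
    disc = d20 * (d20 + 4.0 * J2 * eta_hat ** 2)
    if disc < 0.0:
        return "acoustic: nu^2 complex (eq. 12)"
    if 1.0 - d20 / 2.0 - 0.5 * np.sqrt(disc) < 0.0:
        return "gravitational: nu purely imaginary (eq. 11)"
    return "stable / oscillatory"

J2 = 0.1 ** 2 * 2.0 * 5 ** 2      # J^2 = eps^2 s^2 m^2, with s^2 = 2, m = 5
for eta_hat, Q2 in [(0.5, 0.5), (0.5, 2.2), (0.5, 4.0)]:
    print((eta_hat, Q2), "->", classify(eta_hat, Q2, J2))
```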
We can investigate the instability conditions (11) and (13) further by writing the BLLT dispersion relation without self-gravity. This can be done by multiplying equation (10) by $`\epsilon^2`$ and then substituting as above. We then let $`\epsilon\to 0`$ to turn off gravity. The BLLT dispersion relation becomes
$$\nu^4-\left(2+\frac{a^2\hat{k}^2}{\kappa^2}\right)\nu^2+1+\frac{a^2\left(\hat{k}^2+s^2m^2/r^2\right)}{\kappa^2}=0.$$
(14)
We combine the contributions to the dispersion relation from the sound speed and the epicyclic frequency by defining an angle $`\gamma=\tan^{-1}(a\hat{k}/\kappa)`$. We also define an angle $`p=\pi+\tan^{-1}(m/kr)`$ for $`k<0`$; this angle is between $`\pi/2`$ and $`\pi`$, giving $`\sin p>0`$ and $`\cos p<0`$. The standard definition of a spiral arm pitch angle is $`\pi-p`$. For $`\epsilon=0`$, equation (11) is never satisfied, so the BLLT instability disappears, as recognized by these authors. However, the acoustic instability remains, with an instability criterion given by equation (13) with $`\epsilon=0`$; this is
$$\frac{a}{\kappa r}<\frac{2s\sin p}{\hat{k}r}=\frac{2sm}{k^2r^2+m^2}$$
(15)
Another way to write equation (15) is to remove the explicit radial dependence; then the instability condition becomes
$$\tan\gamma\equiv\frac{a\hat{k}}{\kappa}<2s\sin p.$$
(16)
The left hand side of the inequality in equation (16) is the ratio of the length scale for the epicyclic oscillation to the interarm spacing. This ratio has to be less than order unity for the instability to develop, which means that there has to be room for epicyclic motions within the distance that separates the spiral arms. That is, spiral waves will grow at all wavelengths that have enough room for epicyclic motions at the local sound speed.
When equation (16) is satisfied, a non-self-gravitating fluid disk with differential rotation will be unstable to spiral perturbations inside the ILR and outside the OLR. For a disk with solid body rotation, $`s=0`$; for a flat rotation curve, $`s=\sqrt{2}`$; and for a Keplerian disk, $`s=\sqrt{6}`$, so condition (16) is more easily satisfied, and the growth of instabilities is stronger, with greater shear. From equation (14), the phase velocity, $`c_{ph}`$, and the group velocity, $`c_g`$, of the acoustic waves in the radial direction can be obtained. Define $`z=(2s\sin p/\tan\gamma)^2`$ and $`w_\pm=(1\pm\sqrt{1-z})/2`$; then
$$c_{ph}=\frac{m\Omega}{k}\pm\frac{\kappa}{k}\sqrt{1+w_\pm\tan^2\gamma},$$
$$c_g=\pm a\,\frac{w_\pm\cos p\,\tan\gamma}{\sqrt{1-z}\,\sqrt{1+w_\pm\tan^2\gamma}}.$$
In the unstable regime, $`z>1`$, which implies that $`c_{ph}`$ and $`c_g`$ are complex; only the real parts should be taken for the physical phase and group velocities. Note that for trailing waves, which are the only waves considered here, $`\cos p<0`$. This instability, along with additional instabilities resulting from higher order terms, will be studied further in section 5 for the cases with flat rotation curves and Kepler rotation.
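These expressions are straightforward to evaluate; the following sketch (our illustrative numbers, in arbitrary code units) exhibits a case with $`z>1`$, where $`w_+`$ and hence $`c_{ph}`$ become complex:

```python
import numpy as np

s, m = np.sqrt(2.0), 5              # flat rotation curve, five arms
kappa, a, r = 1.0, 0.02, 1.0        # illustrative code units
k = -20.0                           # trailing wave, k < 0
khat = np.hypot(k, m / r)
gamma = np.arctan(a * khat / kappa)
p = np.pi + np.arctan(m / (k * r))  # pi/2 < p < pi: sin p > 0, cos p < 0
z = (2.0 * s * np.sin(p) / np.tan(gamma)) ** 2
w_plus = 0.5 * (1.0 + np.sqrt(1.0 - z + 0j))       # complex when z > 1
Omega = kappa / np.sqrt(2.0)        # flat curve: kappa = sqrt(2) * Omega
c_ph = m * Omega / k + (kappa / k) * np.sqrt(1.0 + w_plus * np.tan(gamma) ** 2)
print(f"z = {z:.2f} (unstable for z > 1);  Re c_ph = {c_ph.real:.4f}")
```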
### 3.3 Physical insights to the BLLT extension beyond the Lindblad resonances
The acoustic instability determined by the condition (16) has a different physical origin than the higher-order instability discussed in the next sections. For example, the higher-order instability works with or without shear, but the BLLT extension beyond the Lindblad resonances requires shear ($`s\ne 0`$ in equation 16). We show in section 6 that the physical origin of the higher order instability is a geometric growth of incoming wavetrains near the nucleus of a galaxy. We do not actually think of this higher-order growth as an instability because it is limited in time to the propagation time over the radius. This is unlike the acoustic instability discussed in the previous section, which is a true instability. The acoustic instability is very similar to the gravity-driven instability of BLLT near corotation, i.e., between the Lindblad resonances, but it is pressure-driven instead, and beyond the Lindblad resonances. We explain here in physical terms how it works.
In normal galactic spirals between the Lindblad resonances, and in bars between corotation and the ILR, the spiral or bar perturbation grows with time because more and more stellar (or fluid particle) orbits lock into phase with the perturbation, and because each new aligned orbit reinforces the perturbation, causing greater and greater forcing. This works for two reasons: (1) In this radial range, an unperturbed epicycle precesses slower than the pattern speed, i.e., the precession speed, $`\Omega-\kappa/m`$, is less than the pattern speed, $`\Omega_p`$. (2) The inward forcing from the perturbation, gravity in this case, is greatest near the apocenter of the epicyclic orbit. For a spiral arm, this apocenter occurs just outside the potential minimum of the arm, and is directed inward because of the arm gravity. For a bar, the apocenter is on the bar major axis, and is directed inward because of the gravity of the bar.
Reason (1) implies that in the absence of forcing, a fluid element with its apocenter at the crest of one arm will come in and go out again to the next apocenter before it reaches the next arm. This is because the precession rate is slow and the apocenter of the epicycle twists around in angle more slowly than the spiral pattern. (In other words, the Coriolis force (in $`\kappa `$) is too large, so the angular velocity perturbation causes too large a radial velocity perturbation and the radial oscillation period is short.) However, with gravity, the excess inward forcing at the apocenter in the first arm crest gives the orbit an extra kick in the radial direction, and this flings the fluid element all the way around to the next arm before it has its next apocenter. Moreover, this kick occurs during the part of the orbit when the fluid is most susceptible to gaining momentum, i.e., when it is moving most slowly and spending the most time (at apocenter). Thus the forced orbit aligns with the perturbation, always having its apocenter in the arm crest. The same occurs for a bar: the presence of an excess inward bar forcing on the major axis of the bar flings the fluid elements around so they have their next apocenter at the other major axis, rather than too early. Thus we see how the gravitational force from spirals and bars causes the epicyclic motions of individual fluid elements to align with the perturbation and strengthen it.
Inside the inner Lindblad resonance, the precession speed of an unforced stellar orbit is greater than the pattern speed, i.e., $`\Omega-\kappa/m>\Omega_p`$, so normal spiral or bar gravity kicks the stellar orbits the wrong way. That is, the gravity forcing makes an epicycle that already has its next apocenter come too late, meet the next arm even later. For the case of the bar, this leads to a perpendicular alignment of the orbits, so the point of maximum inward forcing, on the bar axis, is at the place in the epicycle, its pericenter, where the fluid element is least susceptible to acquiring excess momentum, i.e., where it is moving most quickly. A previous description of this process was given in Elmegreen (1997).
Now consider the influence of pressure on these waves. The pressure forcing in a spiral is out of phase from the gravity forcing. When the gravity forcing is a maximum inward, just outside the spiral potential minimum, the pressure forcing is a maximum in the outward direction, because of the pressure gradient from the compression in the arm crest. Thus pressure is a stabilizing influence on normal spirals and bars between the Lindblad resonances, as is well known. Pressure forcing has to be less than gravity forcing for the spiral to grow. This is the usual condition for the dispersion relation, which equates the wave oscillation frequency to positive (and therefore stabilizing) contributions from acoustic and epicyclic oscillations, plus a negative (and therefore destabilizing) contribution from self-gravity.
Inside the ILR and outside the OLR, the role of pressure and gravity change. Whereas self-gravity opposes the alignment of epicycles beyond the Lindblad resonances, as discussed above, pressure is in the right phase to support this alignment of epicycles. The maximum outward force from pressure is near the epicycle apocenter both inside and outside the ILR, and the existence of this outward force slows down the fluid at its apocenter in both cases too. But inside the ILR and outside the OLR, this slow down causes the next apocenter to occur in the next arm, rather than after the next arm, which would be the case without the pressure forcing.
The acoustic instability beyond the Lindblad resonances is therefore due to a reversal in the role of gravity and pressure as driving agents for spiral density waves on either side of the Lindblad resonances. Between the ILR and OLR, gravity changes the orbits in such a way that they reinforce an initial perturbation, while pressure opposes this change. Beyond the Lindblad resonances, pressure changes the orbits to reinforce the initial perturbation, while gravity opposes. When gravity is weak beyond the Lindblad resonances, pressure alone is left to drive spiral instabilities.
The sensitivity of the instability condition (16) to shear ($`s`$) and pitch angle ($`\pi-p`$), which is the same as the requirement that $`\nu^2<1+2J^2\hat{\eta}^2`$, makes sense for such pressure driven spirals. When the pitch angle is large, the maximum inward pressure force occurs closest to the minor axis of the epicycle, and the maximum outward pressure force occurs closest to the major axis. This situation leads to the maximum possible forcing from the pressure gradients. The shear is important because this is what causes the epicycles to precess forward or backward relative to the pattern. Without shear, the precession speed is zero, and no amount of pressure forcing can enhance the spiral alignment of orbits.
## 4 Higher Order Terms in the General Dispersion Relation
For the general case with self-gravity, it is possible to solve for the complex frequency if we know the basic state of the disk. If the rotation curve, the density distribution, and the sound speed distribution in the disk are known, then the dispersion relation in the tightwinding approximation can be obtained to second order in $`ϵ`$.
The dispersion relation for $`\nu`$ is obtained by turning equation (2) into an algebraic expression. This is done by using the definition of the enthalpy and equation (3) to express the enthalpy as a function of the potential, and then using the asymptotic form of the potential, $`\varphi_1=\Phi e^{i\int k(r)\,dr}`$. We will consider only trailing spirals ($`k<0`$). Note that $`\nu^{\prime}=-(m\Omega^{\prime}/\kappa+\nu\kappa^{\prime}/\kappa)`$ for radial derivatives denoted by primes. Equation (2) can be written in the form:
$$\left[r^2\frac{d^2}{dr^2}+\left(Ar\right)r\frac{d}{dr}+Br^2\right]\left(h_1+\varphi_1\right)=\delta^{-1}(1-\nu^2)h_1,$$
(17)
where $`\delta=a^2/(\kappa^2r^2)`$. Multiply equation (17) by $`\delta/h_1`$ and define $`D_0=\delta\left(1-\frac{1}{f}\right)`$, $`D_1=\delta\frac{r}{h_1}\frac{d}{dr}(\varphi_1+h_1)`$, and $`D_2=\delta\frac{1}{h_1}\left(r^2\frac{d^2}{dr^2}-m^2\right)(\varphi_1+h_1)`$. Then
$$D_2+(Ar)D_1+(Br^2+m^2)D_0+\nu^2-1=0.$$
(18)
The terms $`D_i`$ are
$$D_0=\delta\left(1-\frac{1}{f}\right),$$
$$D_1=\delta\left[\left(ikr+\frac{r\Phi^{\prime}}{\Phi}\right)\left(1-\frac{1}{f}\right)+\frac{rf^{\prime}}{f}\right],$$
$$D_2=\delta\left(\left[-\hat{k}^2r^2+ikr\left(\frac{2r\Phi^{\prime}}{\Phi}+\frac{rk^{\prime}}{k}\right)+\frac{r^2\Phi^{\prime\prime}}{\Phi}\right]\left(1-\frac{1}{f}\right)+\frac{2rf^{\prime}}{f}\left(ikr+\frac{r\Phi^{\prime}}{\Phi}\right)+\frac{r^2f^{\prime\prime}}{f}\right).$$
These terms are used to find the roots of the dispersion relation numerically. They can be expanded in the small parameter $`1/\hat{k}r`$ by using the Bertin & Mark expression of Poisson's equation (Eqs. 4 and 5). Their expansion is correct to third order in $`1/\hat{k}r`$, so our dispersion relation is limited to third order in this quantity as well. In terms of $`Q^2`$ and $`\hat{\eta}`$, and to lowest order in $`\epsilon`$, the $`D_i`$ become:
$$D_0=\epsilon^2\left(\frac{Q^2}{4}-\hat{\eta}\right)+i\epsilon^3\hat{\eta}^2f_1+\dots=\epsilon^2d_{02}+i\epsilon^3d_{03}+\dots,$$
$$D_1=i\epsilon\cos p\left(\frac{Q^2}{4\hat{\eta}}-1\right)+\epsilon^2\left[\frac{Q^2}{4}\left(\frac{r\Phi^{\prime}}{\Phi}+\frac{r\hat{k}^{\prime}}{\hat{k}}-\frac{rk_J^{\prime}}{k_J}\right)+\hat{\eta}\left(f_1\cos p-\frac{r\Phi^{\prime}}{\Phi}\right)\right]+\dots=i\epsilon d_{11}+\epsilon^2d_{12}+\dots,$$
$$D_2=\frac{1}{\hat{\eta}}-\frac{Q^2}{4\hat{\eta}^2}+i\epsilon\left[\cos p\left(\frac{Q^2}{4\hat{\eta}}-1\right)\left(\frac{2r\Phi^{\prime}}{\Phi}+\frac{rk^{\prime}}{k}\right)-f_1+\cos p\,\frac{Q^2}{2\hat{\eta}}\left(\frac{r\hat{k}^{\prime}}{\hat{k}}-\frac{rk_J^{\prime}}{k_J}\right)\right]$$
$$\qquad+\epsilon^2\left(\frac{Q^2}{4}\frac{r^2\Phi^{\prime\prime}}{\Phi}-\hat{\eta}\left[\frac{r^2\Phi^{\prime\prime}}{\Phi}+\cos p\,f_1\left(\frac{2r\Phi^{\prime}}{\Phi}+\frac{rk^{\prime}}{k}\right)+f_1^{\,2}+f_2\right]\right)=d_{20}+i\epsilon d_{21}+\epsilon^2d_{22}+\dots
$$
These equations define the terms $`d_{ij}`$; note that alternate terms are imaginary, as is typical for WKB approximation methods. Also note that $`r\hat{k}^{\prime}/\hat{k}=\cos^2p\,(rk^{\prime}/k)-\sin^2p`$, and that $`rk_J^{\prime}/k_J=r\sigma_0^{\prime}/\sigma_0-2ra^{\prime}/a`$. We take $`k`$ to be constant and real. The terms ($`Ar`$) and ($`Br^2+m^2`$) contain contributions of order unity divided by $`\nu`$ and $`(\nu^2-1)`$. To get a polynomial expression for $`\nu`$, we calculate the expressions
$`\nu (\nu ^2-1)Ar`$ $`=`$ $`a_1\nu +a_2\nu ^2+a_3\nu ^3`$
$`\nu (\nu ^2-1)(Br^2+m^2)`$ $`=`$ $`b_0+b_1\nu +b_2\nu ^2`$
with
$`a_1`$ $`=`$ $`{\displaystyle \frac{2r\kappa ^{^{}}}{\kappa }}1{\displaystyle \frac{r\sigma _{0}^{}{}_{}{}^{^{}}}{\sigma _0}}`$
$`a_2`$ $`=`$ $`{\displaystyle \frac{2m\mathrm{\Omega }}{\kappa }}{\displaystyle \frac{r\mathrm{\Omega }^{^{}}}{\mathrm{\Omega }}}`$
$`a_3`$ $`=`$ $`1+{\displaystyle \frac{r\sigma _{0}^{}{}_{}{}^{^{}}}{\sigma _0}}`$
$`b_0`$ $`=`$ $`{\displaystyle \frac{2m\mathrm{\Omega }}{\kappa }}({\displaystyle \frac{r\sigma _{0}^{}{}_{}{}^{^{}}}{\sigma _0}}+{\displaystyle \frac{r\mathrm{\Omega }^{^{}}}{\mathrm{\Omega }}}{\displaystyle \frac{2r\kappa ^{^{}}}{\kappa }})`$
$`b_1`$ $`=`$ $`-\left({\displaystyle \frac{2m\mathrm{\Omega }}{\kappa }}\right)^2{\displaystyle \frac{r\mathrm{\Omega }^{^{}}}{\mathrm{\Omega }}}=J^2/ϵ^2=s^2m^2`$
$`b_2`$ $`=`$ $`{\displaystyle \frac{2m\mathrm{\Omega }}{\kappa }}({\displaystyle \frac{r\sigma _{0}^{}{}_{}{}^{^{}}}{\sigma _0}}+{\displaystyle \frac{r\mathrm{\Omega }^{^{}}}{\mathrm{\Omega }}}).`$
Equation (18) is now multiplied by $`\nu (\nu ^2-1)`$ to obtain a general dispersion relation for fluid disks:
$$\nu ^5+c_3\nu ^3+c_2\nu ^2+c_1\nu +c_0=0,$$
(19)
where
$`c_3`$ $`=`$ $`-2+d_{\mathrm{2\hspace{0.17em}0}}+iϵ(d_{\mathrm{2\hspace{0.17em}1}}+a_3d_{\mathrm{1\hspace{0.17em}1}})+ϵ^2(d_{\mathrm{2\hspace{0.17em}2}}+a_3d_{\mathrm{1\hspace{0.17em}2}})+\mathrm{\cdots }`$
$`=`$ $`c_{\mathrm{3\hspace{0.17em}0}}+iϵc_{\mathrm{3\hspace{0.17em}1}}+ϵ^2c_{\mathrm{3\hspace{0.17em}2}}+\mathrm{\cdots },`$
$`c_2`$ $`=`$ $`iϵa_2d_{\mathrm{1\hspace{0.17em}1}}+ϵ^2(a_2d_{\mathrm{1\hspace{0.17em}2}}+b_2d_{\mathrm{0\hspace{0.17em}2}})+\mathrm{\cdots }`$
$`=`$ $`iϵc_{\mathrm{2\hspace{0.17em}1}}+ϵ^2c_{\mathrm{2\hspace{0.17em}2}}+\mathrm{\cdots },`$
$`c_1`$ $`=`$ $`1-d_{\mathrm{2\hspace{0.17em}0}}+d_{\mathrm{0\hspace{0.17em}2}}J^2+iϵ(-d_{\mathrm{2\hspace{0.17em}1}}+a_1d_{\mathrm{1\hspace{0.17em}1}}+J^2d_{\mathrm{0\hspace{0.17em}3}})+\mathrm{\cdots }`$
$`=`$ $`c_{\mathrm{1\hspace{0.17em}0}}+iϵc_{\mathrm{1\hspace{0.17em}1}}+ϵ^2c_{\mathrm{1\hspace{0.17em}2}}+\mathrm{\cdots },`$
$`c_0`$ $`=`$ $`ϵ^2b_0d_{\mathrm{0\hspace{0.17em}2}}+iϵ^3b_0d_{\mathrm{0\hspace{0.17em}3}}+\mathrm{\cdots }`$
$`=`$ $`ϵ^2c_{\mathrm{0\hspace{0.17em}2}}+iϵ^3c_{\mathrm{0\hspace{0.17em}3}}+\mathrm{\cdots }.`$
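The roots of the quintic are easiest to obtain numerically once the coefficients are in hand. A minimal Python sketch, assuming the $`d_{ij}`$, $`a_i`$ and $`b_0`$ values have already been evaluated at a chosen point of the ($`k_{crit}/|k|`$, $`Q^2`$) plane; the numerical coefficients below are placeholders, not values from this paper:

```python
import numpy as np

def fastest_growing_mode(c3, c2, c1, c0):
    """Solve nu^5 + c3*nu^3 + c2*nu^2 + c1*nu + c0 = 0 (Eq. 19) and
    return the root with the most negative imaginary part, i.e. the
    fastest-growing mode for a time dependence exp(i*nu*kappa*t)."""
    # numpy.roots takes coefficients from the highest power down;
    # the nu^4 coefficient vanishes in Eq. (19).
    roots = np.roots([1.0, 0.0, c3, c2, c1, c0])
    return roots[np.argmin(roots.imag)]

# Placeholder complex coefficients (alternate terms carry factors of i*eps):
nu = fastest_growing_mode(c3=-2.1 + 0.03j, c2=0.02j, c1=0.95 + 0.01j, c0=-1.0e-3)
print("nu =", nu, "  growth rate -Im(nu) =", -nu.imag)
```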
This dispersion relation includes terms that have been neglected in previous studies. The effect of the higher order terms can be followed through the dependence of the coefficients $`c_i`$ on the small parameter $`ϵ`$. In the limit $`ϵ\to 0`$, but with a finite $`\widehat{\eta }`$ and finite $`Q^2/\widehat{\eta }^2`$, the general dispersion relation (Eq. 19) becomes the BLLT dispersion relation (Eq. 10).
We investigate the effects of the higher order terms by expressing $`\nu `$ as an expansion in the parameter $`ϵ`$, that is, $`\nu =\nu _0+ϵ\nu _1+ϵ^2\nu _2+\mathrm{\cdots }`$, and by solving for the roots of equation (18). Substituting the expansion for $`\nu `$ into equation (18) and setting coefficients of equal powers of $`ϵ`$ to zero, we obtain expressions for the expansion terms $`\nu _i`$. The zeroth-order root, $`\nu _0`$, satisfies the equation
$$\nu _0\left[\nu _{0}^{}{}_{}{}^{4}-\nu _{0}^{}{}_{}{}^{2}\left(2-d_{\mathrm{2\hspace{0.17em}0}}\right)+1-d_{\mathrm{2\hspace{0.17em}0}}\left(1+J^2\widehat{\eta }^2\right)\right]=0.$$
(20)
The expression in the square brackets of equation (20) is the BLLT dispersion relation as discussed above. The other solution ($`\nu _0=0`$) has no terms of order $`ϵ`$; i.e., it is of the form $`\nu =\nu _2ϵ^2+\nu _3ϵ^3+\mathrm{\cdots }`$.
The first-order term that corresponds to the nonzero solution $`\nu _0`$ is
$$\nu _1=-i\frac{\nu _0\left(c_{\mathrm{1\hspace{0.17em}1}}+c_{\mathrm{2\hspace{0.17em}1}}\nu _0+c_{\mathrm{3\hspace{0.17em}1}}\nu _{0}^{}{}_{}{}^{2}\right)}{c_{\mathrm{1\hspace{0.17em}0}}+3c_{\mathrm{3\hspace{0.17em}0}}\nu _{0}^{}{}_{}{}^{2}+5\nu _{0}^{}{}_{}{}^{4}};$$
for real $`\nu _0`$ (i.e., stability in the BLLT equation), this $`\nu _1`$ is purely imaginary; for imaginary $`\nu _0`$, it is complex.
The coefficients, $`c_{i\mathrm{\hspace{0.17em}2}}`$, for the next term in the expansion, $`\nu _2`$, are pure real and, if $`\nu _0`$ is real, then this term is real also, and a factor of $`ϵ^2`$ smaller. When $`\nu _0`$ is real, the growth rate to first order in $`ϵ`$ is attributed to $`\nu _1`$. The next contribution to the growth rate will be from $`\nu _3`$, which is of order $`ϵ^2`$ smaller.
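The first-order correction can be evaluated directly from the expression above once the zeroth-order root is known; a small helper, with the $`c_{ij}`$ coefficients treated as given inputs:

```python
def nu_1(nu0, c11, c21, c31, c10, c30):
    """First-order term of nu = nu0 + eps*nu1 + ..., obtained by
    linearizing the quintic dispersion relation around nu0; purely
    imaginary when nu0 is real, complex when nu0 is imaginary."""
    numerator = nu0 * (c11 + c21 * nu0 + c31 * nu0**2)
    denominator = c10 + 3.0 * c30 * nu0**2 + 5.0 * nu0**4
    return -1j * numerator / denominator
```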
In summary, we have found in this analysis a general dispersion relation that includes the effects of radial variations in the basic parameters of the disk and is accurate to higher order in the small parameter $`ϵ=\left(k_{crit}r\right)^{-1}`$. Furthermore, the effects included in this analysis change significantly the criterion for stability of the disk, as shown explicitly by the models in the next section.
## 5 Instability models including the high order terms
Several models will be studied to illustrate the effects of the higher order terms in the dispersion relation and to investigate how different assumptions affect the stability of the disk. Four models will be considered: a self-gravitating disk with a flat rotation curve, a self-gravitating disk with solid body rotation, a non-self-gravitating disk with solid body rotation, and a non-self-gravitating disk with Keplerian rotation. The amplitude of the wave is assumed to be slowly varying, so $`r\mathrm{\Phi }^{^{}}/\mathrm{\Phi }\ll 1`$. This gives an arm/interarm contrast that increases with radius beyond one scale length, in agreement with observations (Schweitzer 1980; Elmegreen & Elmegreen 1984).
All disks considered here are assumed to have an exponential mass column density profile with a scale length $`r_d`$ and a constant sound speed, $`a`$. Then $`r\sigma _{0}^{}{}_{}{}^{^{}}/\sigma _0=-r/r_d`$ and $`ra^{^{}}/a=0`$.
We are considering solutions to the dispersion relation obtained from a local analysis where there are gradients in the physical quantities of the equilibrium disk. The local analysis is relevant when the growth time for the perturbations is shorter than the time needed for the disturbances to travel to the boundaries (e.g., see Lin & Shu 1964; Toomre 1981). That is usually $`10^9`$ years to the outer boundary and $`10^7`$ years to the center for a circumnuclear disk, but in this case the center boundary usually serves as a sink, as waves are shocked and energy is dissipated. Therefore we are justified in using a local analysis in nuclear disks. For main galaxy disks the growth time of spiral waves is also typically less than the propagation time. Bertin et al. (1989) considered a non-local analysis, including the effects of gradients and boundary conditions. This leads to the standard modal theory of spiral structure.
Gradients of disk properties, as well as curvature, can lead to spatial variations in the amplitude of spiral waves, including singularities. The curvature effects are considered in more detail in section 6.
General dispersion relations like these can be solved by assuming $`k`$ real and $`\omega `$ complex, or $`k`$ complex and $`\omega `$ real. In the remainder of this section, we consider $`k`$ real and constant and look for solutions with imaginary $`\omega `$. The result will be sinusoidal waves that grow exponentially with time, as in the usual stability analyses.
A third method of analysis is to consider the initial value problem of time-dependent growth with shearing sinusoidal perturbations, as in Goldreich & Lynden-Bell (1965) and Toomre (1981). When gravity is important, this leads to the swing amplifier theory.
In the following subsections we will investigate analytically and numerically the dispersion relation for disks with different rotational properties. The relevant dispersion relation is Eq. (18). The same dispersion relation with explicit expansion in terms of the small parameter $`ϵ`$ is Eq. (19) for self-gravitating disks. Another dispersion relation is derived for non-self-gravitating disks in the indicated subsections.
### 5.1 Exponential self-gravitating disk with constant rotation velocity
We first find the roots of Eq. (18) at two scale lengths for an exponential disk with a constant rotation speed. In this case $`r\mathrm{\Omega }^{^{}}/\mathrm{\Omega }=r\kappa ^{^{}}/\kappa =-1`$, and $`\mathrm{\Omega }/\kappa =\mathrm{\hspace{0.17em}1}/\sqrt{2}`$. The value of $`ϵ=1/(k_{crit}r)`$ depends on the ratio of the disk to total mass (disk and halo) in the spiral region. A value of $`ϵ\approx 0.11`$ corresponds to the Solar radius in the Galaxy, using the rotation curve model in Schmidt (1983) and a disk mass surface density of $`48`$ M<sub>⊙</sub> pc<sup>-2</sup> (Kuijken & Gilmore 1989, 1991). We use a value of $`ϵ=0.1`$.
There are five roots of the dispersion relation. The root that corresponds to the greatest growth is always plotted in the figures here; this is the root with most negative imaginary component.
Figure 2 shows the components of the normalized frequencies $`\nu `$ in the ($`k_{crit}/|k|\equiv \eta `$, $`Q^2`$) plane for two values of the azimuthal wavenumber, $`m=`$ 2 and 5, obtained numerically using the full dispersion relation, equation (18), with coefficients up to third order in the small parameter $`\zeta \equiv ϵ\widehat{\eta }`$. To be clear, we write $`k_{crit}/|k|`$ instead of $`\eta `$ in the figures. The top figures show the negative of the imaginary component of the frequency, i.e., the growth rate normalized to the epicyclic frequency $`\kappa `$, with contour values $`2^{i/4}`$ for $`i=`$ -20 to 10. The bottom figures show the absolute values of the corresponding real frequencies with the same contours. The left figures correspond to $`m=`$ 2 and the right correspond to $`m=`$ 5. The values of the real and imaginary components are tabulated for some values of $`Q^2`$ and $`k_{crit}/|k|`$ in table 1; this will facilitate the interpretation of the contours.
The thick lines in the top plots of figure 2 indicate the loci of points where the normalized frequency, $`\nu _{BLLT}`$, equals 0 (corotation, lower line), $`\pm 1`$ (inner and outer Lindblad resonances, middle line) and $`\pm \sqrt{1+2J^2\widehat{\eta }^2}`$ (upper line) in the BLLT dispersion relation, equation (10). The BLLT instability condition, equation (11), is satisfied below the lower thick line. The new acoustic instability condition, equation (13), is satisfied between the middle and the upper thick lines.
The figure and table show that the growth rate decreases but remains finite for $`k_{crit}/|k|\to 0`$, and that at $`k_{crit}/|k|=0`$, it increases with increasing $`Q`$. At intermediate values of $`k_{crit}/|k|`$, say 0.5, the growth rate is largest for $`Q<1`$ and decreases to a minimum at $`Q^2\approx 2`$, but again increases for increasing $`Q^2`$. The growth rate decreases for increasing $`k_{crit}/|k|`$ beyond 0.5 for constant values of $`Q^2`$. This pattern is observed for both $`m`$ values. A significant difference between the figures for $`m=2`$ and $`m=5`$ is that for higher $`m`$, the growth rate is larger over the plotted ($`k_{crit}/|k|`$, $`Q^2`$) plane than for low $`m`$, and for high $`k_{crit}/|k|`$, the growth rate remains relatively large for moderate values of $`Q^2`$ above the line $`\nu _{BLLT}=0`$. This enhanced growth at high $`m`$ is because the $`J`$-parameter is proportional to $`m`$ and is contributing to the higher order terms in the dispersion relation.
Note that there is a kink in the lower right corner ($`k_{crit}/|k|\approx `$ 1.6) of the $`m=`$2 contour plot for the real component of the root. This occurs because, in the regions adjacent to the kink, different real components correspond to the most negative imaginary component.
Figure 2 and table 1 also indicate that the greatest growth occurs for small values of $`Q^2`$, just as predicted using the BLLT dispersion relation (cf. Sect. 3.2). Moreover, they indicate that the disk is unstable to form spirals for a wide range of $`Q`$ and $`m`$, although the growth rate is low, of order $`ϵ`$, when $`Q`$ is large. This implies there is still a spiral instability at low gravity. For most bright galaxies, however, the region where the rotation curve is flat is also the region where $`Q`$ is relatively small, so these high $`Q`$ solutions are not important. They could be important in early type galaxies (Caldwell et al. 1992) or low surface brightness galaxies (van der Hulst et al. 1993) where $`Q`$ is high in the main disk.
### 5.2 Exponential self-gravitating disk with solid body rotation
The inner parts of galaxies and small galaxies typically have rotation curves that are approximately solid body. This is the result of a strong bulge with a nearly uniform central density in some spiral galaxies, and a relatively dense dark matter halo in dwarf galaxies. Inner galaxy disks (Elmegreen et al. 1998) and dwarfs (Hunter et al. 1998) may also be weakly self-gravitating for some time (e.g., between accretion events and starbursts), and so the high-$`Q`$ cases studied here may have applications there. Furthermore, inner disks and dwarfs have short rotation times, so the actual growth factor of a spiral instability can be large even if the normalized growth rate is small.
For solid body rotation, $`r\mathrm{\Omega }^{^{}}/\mathrm{\Omega }=r\kappa ^{^{}}/\kappa =\mathrm{\hspace{0.17em}0}`$, and $`\mathrm{\Omega }/\kappa =\mathrm{\hspace{0.17em}1}/2`$. We assume a value for $`ϵ=0.1`$ as in the previous section. In this case the term $`A(r)`$ does not depend on $`\nu `$ and $`B(r)`$ has a $`1/\nu `$ dependence. The dispersion relation then becomes cubic in $`\nu `$:
$$\nu ^3+\left(-1+D_2+a_3D_1\right)\nu +b_2D_0=0,$$
where the terms $`D_i`$, $`a_3`$, and $`b_2`$ were defined in the previous section. The roots can be expressed as an expansion in $`ϵ`$, writing $`\nu =\nu _0+\nu _1ϵ+\nu _2ϵ^2+\mathrm{\cdots }`$. The zero order term is the Lin-Shu dispersion relation, equation (7), with $`\eta `$ replaced by $`\widehat{\eta }`$. The first order term is
$$\nu _1ϵ=iϵ\frac{\mathrm{cos}p}{2\nu _0}\left[\frac{Q^2}{4\widehat{\eta }}\left(1\frac{r\sigma _{0}^{}{}_{}{}^{^{}}}{\sigma _0}2\mathrm{sin}^2p\right)\frac{1}{2}\frac{r\sigma _{0}^{}{}_{}{}^{^{}}}{\sigma _0}+\frac{\mathrm{sin}^2p}{2}\right].$$
(21)
In the region where $`\nu _0`$ is real, the growth rate is dominated by the first order term. In the region where $`|\nu _0|`$ is of order 1, $`\nu _1\sim i`$ and the growth rate is of order $`i\nu _1ϵ\sim ϵ`$. For an exponential disk at two scale lengths $`\frac{r\sigma _{0}^{}{}_{}{}^{^{}}}{\sigma _0}=-2`$, so
$$\nu _1ϵ=iϵ\frac{\mathrm{cos}p}{2\nu _0}\left[\frac{Q^2}{4\widehat{\eta }}\left(3-2\mathrm{sin}^2p\right)+\frac{\mathrm{sin}^2p+3}{2}\right].$$
(22)
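For numerical work the cubic can be solved exactly and compared with this first-order estimate; a sketch, with $`D_0`$, $`D_1`$, $`D_2`$, $`a_3`$ and $`b_2`$ assumed to have been evaluated beforehand:

```python
import numpy as np

def solid_body_fastest_root(D0, D1, D2, a3, b2):
    """Roots of nu^3 + (-1 + D2 + a3*D1)*nu + b2*D0 = 0 for the
    solid-body case; returns the root with the most negative
    imaginary part (the fastest-growing mode)."""
    roots = np.roots([1.0, 0.0, -1.0 + D2 + a3 * D1, b2 * D0])
    return roots[np.argmin(roots.imag)]
```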
Figure 3 and table 2 show real and imaginary components of the normalized roots of the full dispersion relation (5.2) for the rising rotation curve model at $`r=2r_d`$. Again we display only the root that corresponds to the fastest growth. We can see from the left-hand regions in the ($`k_{crit}/|k|,Q^2`$) plot, where the absolute values of the real components are large, that the growth rates become small for small $`k_{crit}/|k|`$. The opposite occurs for small values of the real component, which are in the lower region of the plot. Where the real component is of order 1, in the center of the plot, the growth rate is of order $`ϵ`$.
The detailed behavior of the growth rate in this case can be followed from the approximate analytical solution written above as equation (21). For example, equation (21) gives the same growth rate as the full solution in table 2 for $`\eta =k_{crit}/|k|=0.2`$ for both $`m=`$ 2 and 5, because the approximate equation is relatively accurate for low $`\widehat{\eta }`$. Equation (21) gives slightly different rates than table 2 for $`\eta =0.6`$; at $`m`$ = 2 and $`Q^2`$ = 2, 5, and 10, equation (21) has growth rates of 0.232, 0.228 and 0.275 while table 2 has more precise growth rates of 0.227, 0.225 and 0.273. The rates given by equation (21) differ more significantly from those calculated by equation (5.2) when $`\eta >0.6`$.
One can observe from table 2 that the real component corresponding to the greatest growth rate in the ($`k_{crit}/|k|,Q^2`$) plane is always negative, i.e., it corresponds to the Lin-Shu and BLLT solutions inside of corotation in the disk.
Figures 2 and 3 show that there is a similarity between the growth rates for the flat and solid body rotation curve models. Both figures display a saddle shape for the growth rate contours; the greatest growth occurs for $`Q^2<1`$, and for $`k_{crit}/|k|\approx 0.5`$, the growth rate first decreases and then increases with increasing $`Q^2`$. The main difference between the two models occurs for large numbers of spiral arms, where the growth rate is smaller at $`m=5`$ than $`m=2`$ for the solid body case, and larger at $`m=5`$ than $`m=2`$ in the flat rotation curve case. This is because for solid body rotation, $`J=0`$, so the absence of differential rotation reduces the growth rate of waves at any $`m\ne 0`$. The zero order BLLT instability condition (Eq. (11)) is reduced to the Lin-Shu instability condition (Eq. (7)), and the acoustic instability disappears as the upper unstable region collapses around $`\nu ^2=1`$. In addition, the contributions of $`J^2`$ to the higher order terms $`\nu _i`$ are also absent so the growth rate is less than for $`J^2>0`$.
The solutions shown for all the self-gravitating models indicate that disks are weakly unstable to spiral waves when $`ϵ=1/(k_{crit}r)>0`$, even in the limit of weak self-gravity. This is the first time spiral disk instabilities have been found at large $`Q`$ in the absence of magnetic fields. We pursue this result further in the next section, which considers the growth of waves in the absence of self-gravity, that is, when $`ϵ=0`$.
### 5.3 Exponential disk with solid body rotation and no self-gravity
This section and the next consider fluid disks without self-gravity as an idealization of the high $`Q`$ cases found to be unstable in the previous sections. To be consistent with the radial dependence of the enthalpy amplitude, $`H(r)`$, used before, which was defined in terms of a slowly varying potential amplitude $`\mathrm{\Phi }`$, we now assume $`H(r)\propto f(r)`$, where $`f`$ was given in the discussion following equation (5).
From equation (21) we can see that the growth rate of the instability depends on both the self-gravity of the disk and the radial derivative of the background surface density. The normalized growth rate is, to first order, $`\nu _1ϵ`$, from the previous discussion. The first term in equation (21) is proportional to $`Q^2\mathrm{cos}pϵ/(4\widehat{\eta })=ka^2/\kappa ^2r`$, which is independent of the self-gravity of the disk. It depends primarily on the disk curvature, i.e., on the ratio of the square of the semimajor axis of an epicycle caused by random motions ($`a/\kappa `$), to the product of the wave scale ($`k^1`$) and the disk radius ($`r`$). The second term is proportional to $`ϵ\propto \mathrm{mass}_d/\mathrm{mass}_{total}`$, which comes from the self-gravity of the disk. If the disk self-gravity is neglected, $`ϵ=0`$ and the second term is zero, but there is still growth from the first term, depending on orbital curvature. When $`ϵ=0`$, the expansion has to be made in terms of the small parameter $`\zeta \equiv 1/\widehat{k}r`$. Then we get:
$`\nu _0`$ $`=`$ $`\pm \sqrt{1+(a\widehat{k})^2/\kappa ^2},`$
$`\nu _1\zeta `$ $`=`$ $`i{\displaystyle \frac{ka^2}{2\kappa ^2r\nu _0}}\left(2\mathrm{sin}^2p-{\displaystyle \frac{r}{r_d}}-1\right).`$ (23)
When the expression inside the parenthesis of equation (23) is zero for some particular pitch angle $`\pi p`$, there is no growth at that radius, but there is growth at adjacent radii.
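Both terms of equation (23) are elementary to evaluate; a sketch, where $`\widehat{k}`$ is interpreted as the total wavenumber with $`\widehat{k}^2=k^2+m^2/r^2`$ (an assumption consistent with how $`m`$ enters $`\nu _0`$):

```python
import numpy as np

def acoustic_growth(a, kappa, k, m, r, r_d):
    """Zeroth- and first-order frequency terms of Eq. (23) for the
    non-self-gravitating, solid-body case (expansion in 1/(khat*r))."""
    khat = np.hypot(k, m / r)            # khat^2 = k^2 + m^2/r^2 (assumed)
    sin2p = (m / r) ** 2 / khat**2       # sin^2 of the pitch angle
    nu0 = np.sqrt(1.0 + (a * khat / kappa) ** 2)
    nu1_zeta = 1j * k * a**2 / (2.0 * kappa**2 * r * nu0) * (
        2.0 * sin2p - r / r_d - 1.0)
    return nu0, nu1_zeta
```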
The numerical solutions to equation (5.2) when gravity is neglected are shown for $`r=2r_d`$ in figure 4 and table 3, using normalized axes $`(a/\kappa r)^2`$ instead of $`Q^2`$ and $`1/|kr|`$ instead of $`k_{crit}/|k|`$. To compare the growth rates with the previous models, recall that the value of the vertical axis in figure 4 is obtained by multiplying the value of the vertical axis in our previous figures by $`ϵ^2`$ = 0.01, and the value of the horizontal axis in figure 4 is obtained by multiplying the previous value of the horizontal axis by $`ϵ`$=0.1. This means that the growth rates in figure 4 are analogous to those in the upper right part of figure 3. Figure 4 and table 3 indicate that the growth rate remains finite, proportional to $`(a/\kappa r)`$, as $`|kr|\to \mathrm{\infty }`$. We infer from this behavior that the instability is acoustic in nature, similar to that described in section 3.2, but in the absence of self-gravity and shear. It is driven by curvature and pressure gradients in the disk (cf. section 6).
### 5.4 Exponential disk with Keplerian rotation and no self-gravity
Accretion disks around black holes (Nakai et al. 1993) and protostars have negligible self-gravity and may have Keplerian rotation. In this case $`r\kappa ^{^{}}/\kappa =r\mathrm{\Omega }^{^{}}/\mathrm{\Omega }=-3/2`$, and equation (2) becomes a fifth order polynomial in $`\nu `$, as for a flat rotation curve. As in the previous section, an acoustic instability is still present even in the absence of self-gravity. The dispersion relation for this acoustic instability is now obtained from equation (18) with the modifications
$`D_0`$ $`=`$ $`\delta `$
$`D_1`$ $`=`$ $`\delta \left(ikr+{\displaystyle \frac{rf^{^{}}}{f}}\right)`$
$`D_2`$ $`=`$ $`\delta \left(-\widehat{k}^2r^2+2ikr{\displaystyle \frac{rf^{^{}}}{f}}+{\displaystyle \frac{r^2f^{^{\prime \prime }}}{f}}\right).`$
Because there is no self-gravity, this fifth order dispersion relation has to be expanded to successive orders in $`\zeta =1/\widehat{k}r`$ instead of $`ϵ`$, giving $`\nu =\nu _0+\nu _1\zeta +\nu _2\zeta ^2+\mathrm{\cdots }`$. The zero-order term in this expansion is the modified BLLT dispersion relation, equation (14). Recall that the geometric term for a Keplerian disk is $`s=\sqrt{6}`$. The zero order term becomes complex when the instability condition, equation (16), is satisfied. When equation (16) is not satisfied, the growth rate is dominated by the first order term
$$\nu _1\zeta =i\frac{a^2k}{2\kappa ^2r}\frac{\left(1+r/r_d-2\mathrm{sin}^2p\right)\left(\nu _{0}^{}{}_{}{}^{2}-1\right)-3\left(1+m\nu _0\right)}{\nu _0\left(2\nu _{0}^{}{}_{}{}^{2}-2-a^2\widehat{k}^2/\kappa ^2\right)}.$$
(24)
Figure 5 and table 4 display numerical solutions to the fifth order polynomial, equation (18), for the dispersion relation in this Keplerian model using the modified expressions $`D_i`$ when self-gravity is neglected. The real and imaginary components of the root with the largest growth rate are plotted using the same axes as in the previous section, ($`1/|kr|`$, $`a^2/\kappa ^2r^2`$). The critical curve for stability, equation (15), is plotted in the top figures as a thick line. To the left of the critical curve, equation (15) is not satisfied and the disk is stable against acoustic instabilities to lowest order (higher order instabilities remain). To the right of the critical curve, equation (15) is satisfied and the acoustic instability to all orders dominates the growth of perturbations. The growth rate is larger than that in the case of solid body rotation without self-gravity because shear stimulates growth. There are discontinuities in the contours for the real component, with kinks at the same locations in the contours of the imaginary growth rate near the critical curve. To the left of these discontinuities, the real part of $`\nu `$ is negative, corresponding to radii inside the ILR; to the right, the real part is positive, corresponding to radii outside the OLR.
Equation (24), which is the first order approximation to the growth rate, matches the full numerical solutions in figure 5 and table 4 to two significant digits for $`\widehat{k}r>5`$ and $`(a^2/\kappa ^2r^2)<0.1`$.
## 6 Physical Insights to the Curvature Terms
We have just shown that differential rotation, curvature, and radial gradients in the basic properties of a fluid disk affect the propagation and growth of spiral disturbances. Here we simplify the problem by including only the effects of orbital curvature.
The curvature terms can be illustrated by considering an ideal disk with solid-body rotation, constant surface density, and negligible self-gravity. Such disks may be appropriate for the central regions of quiescent galaxies, such as NGC 2207 (Elmegreen et al. 1998). The governing equation (2) for such a disk, assuming constant surface density, becomes
$$\frac{d^2h_1}{dr^2}+\frac{1}{r}\frac{dh_1}{dr}+\left(\frac{\kappa ^2}{a^2}\left[\nu ^2-1\right]-\frac{m^2}{r^2}\right)h_1=0.$$
(25)
This equation was derived with the center of the coordinate system at the center of rotation. It is the well-known Bessel equation, and can have mathematical singularities at $`r=0`$. Often in wave equations, these mathematical singularities can be transformed away by a change in the coordinate system, adopting, for example, a rectilinear coordinate system instead of cylindrical. However, in the case of a galaxy, the singularities cannot be transformed away by a different coordinate system: rotation and galactic gravity define the coordinate system.
In the galactic Bessel equation, the time derivative in the equation of motion appears as the term $`\nu ^2`$, as it did in the previous sections. The spatial variation in the azimuthal direction is also assumed to be the same as before, $`e^{im\theta }`$, but in the radial direction it is written explicitly. We may look for the behavior of spirals by assuming radial solutions of the form $`h_1\propto e^{ikr}`$. These solutions are trailing spirals when $`k<0`$. They always contain pieces of waves that can come close to the origin, depending on their direction of propagation, so they can force out the singularities in the Bessel equation. The pure-ring case with $`m=0`$ may also approach the origin and increase in amplitude. In this case the increase is analogous to laboratory sonoluminescence, in which sound waves converge to the center of an air bubble in a liquid and increase in amplitude until they shock and emit light (Kondic, Gersten, & Yuan 1995).
In the case of spiral solutions, the radial derivatives in the Bessel equation are replaced by $`ik`$ and the frequency $`\nu `$ may be solved to give
$$\nu ^2=\left[\left(\frac{a}{\kappa r}\right)^2\left(r^2k^2+m^2\right)+1\right]-i\left(\frac{a}{\kappa r}\right)^2rk.$$
(26)
This frequency is necessarily complex because of the first derivative term in equation (25). Because of this term, the general solutions are growing or decaying oscillations with spiral shapes having $`m`$ arms. It will become apparent shortly that the incoming waves are growing, and the outgoing waves are decaying, as expected from the nuclear singularity.
For $`\nu _R\gg \nu _I`$, we recover the same result as equation (23) in the limit $`r_d\to \mathrm{\infty }`$, assuming constant enthalpy amplitude:
$$\nu _R=\pm \left[\left(\frac{a}{\kappa r}\right)^2\left(r^2k^2+m^2\right)+1\right]^{1/2},\nu _I=-\frac{1}{2}\left(\frac{a}{\kappa r}\right)^2\frac{rk}{\nu _R}.$$
(27)
This gives the growth rate of a wave with azimuthal wavenumber $`m`$ and radial wavenumber $`k`$. Note that $`|\nu _R|>1`$ in all cases here, which means the waves are only inside the inner Lindblad resonance or outside the outer Lindblad resonance. This was the case also for the new instability solutions discussed in section 3.2, which re-considered the BLLT equations in this new radial limit.
The nature of the growth implied by $`\nu _I`$ in equation (27) should be discussed more. Recall that the assumed time behavior of the wave in an inertial frame is $`e^{i\left(kr+\omega t-m\theta \right)}=e^{i\left(kr+\nu \kappa t\right)}`$ for $`\nu =(\omega -m\mathrm{\Omega })/\kappa `$ and $`\theta =\mathrm{\Omega }t`$ following the rotation. Also note that we have written $`\nu =\nu _R+i\nu _I`$. Thus we have a time behavior $`e^{i\nu _R\kappa t}e^{-\nu _I\kappa t}`$. When $`\nu _I`$ is negative, the wave grows in time. This occurs for trailing waves only when $`\nu _R`$ is the negative one of the two solutions given above, because $`\nu _I\propto -k/\nu _R`$, and $`k<0`$ for trailing waves. Moreover, the negative $`\nu _R`$ solution is an incoming trailing wave, because the wave-like part of the solution, $`e^{i\left(kr+\nu _R\kappa t\right)}`$, has constant phase for decreasing $`r`$ with increasing time when $`\nu _R<0`$, i.e., $`r=-\left(\nu _R/k\right)\kappa t=-|\nu _R/k|\kappa t`$ when both $`\nu _R`$ and $`k`$ are less than zero. As a result, the galactic Bessel equation has trailing spiral wave solutions that grow in time as they propagate toward the center of the galaxy.
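The sign logic is easy to check numerically from equation (26); a quick sketch in units with $`\kappa =r=1`$:

```python
import numpy as np

def nu_branches(a_over_kr, rk, m):
    """Both branches of nu from nu^2 = [(a/kr)^2 (r^2 k^2 + m^2) + 1]
    - i (a/kr)^2 rk  (Eq. 26)."""
    nu2 = a_over_kr**2 * (rk**2 + m**2) + 1.0 - 1j * a_over_kr**2 * rk
    nu = np.sqrt(nu2)   # principal branch
    return nu, -nu

# Trailing wave (k < 0): the branch with Re(nu) < 0 has Im(nu) < 0,
# i.e. an incoming, growing wave, as argued above.
for nu in nu_branches(a_over_kr=0.1, rk=-20.0, m=2):
    print(f"Re(nu) = {nu.real:+.4f},  Im(nu) = {nu.imag:+.4f}")
```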
These solutions are not instabilities in the usual sense, because $`t`$ cannot be allowed to go to infinity. The waves reach the center in finite time, i.e., in the time $`t\sim r/a`$. In this sense, the growing solutions are like those in the galactic swing amplifier (Goldreich & Lynden Bell 1965; Julian & Toomre 1966), in which spiral waves grow in the shearing part of a disk for a finite time ($`\mathrm{\Delta }t\sim 2/A`$ for Oort constant $`A`$). The instabilities in a galactic nucleus are also not stationary waves that grow in amplitude without any change in shape. This is because the spiral solution is always undefined at the nucleus and can never be considered present at all radii. The waves are only pieces of spirals, moving inward or outward with a growth or decay in time following the wave crest, respectively. Thus the growth is also unlike the growth of infinite plane waves in a sheet, as might be the case for the Kelvin-Helmholtz instability, for example. Spiral wave growth in galactic nuclei involves inward propagation of finite wave trains.
The dispersion relation (26) may also be regarded as an equation for $`k`$, in which $`\nu `$ is held as a real variable. Then equation (25) has normal Bessel function solutions $`J_m(k_Br)`$ and $`Y_m(k_Br)`$ for
$$\frac{\kappa ^2}{a^2}(\nu ^2-1)\equiv k_{B}^{}{}_{}{}^{2}>0;$$
$`k_B`$ is the radial wavenumber. When $`k_Br`$ is large, $`J_m`$ and $`Y_m`$ behave like sines and cosines, which may be combined as outgoing or incoming waves with $`\mathrm{exp}(i\omega t)`$. When $`k_Br`$ is small, the $`Y_m`$ solutions grow algebraically with decreasing $`r`$. The growth arises directly from the curvature terms, namely, the first derivative term and the $`m^2/r^2`$ term. These are the same terms that led to imaginary $`\nu `$ in equations (26) and (27). The $`m^2/r^2`$ term actually defines a region within which the waves start to grow out of bounds, i.e., when $`r<m/k_B`$ for $`m\ne 0`$, the $`Y_m(k_Br)`$ solution begins to increase. In terms of the growth discussed for the time-dependent case, this is the radius at which an incoming wave has only one more epicycle in time before it reaches the nucleus propagating at the sound speed.
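The onset of this growth can be seen directly with a standard Bessel routine; for example:

```python
from scipy.special import yv   # Bessel function of the second kind, Y_m

m, kB = 2, 1.0
for r in (5.0, 2.0, 1.0, 0.5, 0.1):
    # |Y_m(kB*r)| stays of order unity for kB*r > m and grows rapidly below it
    print(f"kB*r/m = {kB * r / m:4.2f}   |Y_m| = {abs(yv(m, kB * r)):.3g}")
```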
What happens to a trailing spiral wave in a real galaxy when it enters the $`rk_B/m<1`$ regime? We expect that the amplitude will begin to increase geometrically until nonlinear and dissipative effects come into play. This means that the waves will break in the form of shocks shortly after they enter the inner region. The condition $`rk_B/m<1`$ implies that the radius for this wave shocking increases with azimuthal wave number $`m`$. This explains, for the case of NGC 2207 (Elmegreen et al. 1998), why the multiple-arm features are only observed in the outer part of the nuclear disk, while the $`m=1`$ and ring-like feature is close to the center. That is, the multiple arms (high $`m`$) become non-linear and damp out before they reach the inner radii, leaving only the low-$`m`$ arms near the center. Other spirals that may travel outward in NGC 2207 are probably too weak to be seen because their amplitudes decrease as they propagate.
Galactic nuclear spiral waves also propagate in the azimuthal direction with angular speed $`\omega /m`$ as long as $`\nu ^2-1>0`$. This angular speed implies that waves with different $`m`$ will interact, forming complex structures. The waves are also dispersive, with dispersion relation
$$\frac{\omega -m\mathrm{\Omega }}{\kappa }=\pm \left(1+\frac{a^2k_{B}^{}{}_{}{}^{2}}{\kappa ^2}\right)^{1/2}.$$
They form wave packets that propagate with group velocity
$$c_g=\pm a\mathrm{sin}\gamma _B,$$
with $`\gamma _B=\mathrm{tan}^{-1}\left(ak_B/\kappa \right)`$. Undoubtedly the waves will interact because of these various phase and group speeds. They will also get sheared by differential rotation in reality to form complex spiral structures. When $`\nu ^2-1<0`$, the entire disk is evanescent; then we should not see any waves.
So far we have ignored the exponential density distribution of the disk. If it is taken into consideration, equation (25) will change to
$$\frac{d^2h_1}{dr^2}+\frac{1}{r}\left(1-\frac{r}{r_d}\right)\frac{dh_1}{dr}+\left[\frac{\kappa ^2}{a^2}\left(\nu ^2-1\right)-\frac{m^2}{r^2}+\frac{m}{\nu rr_d}\right]h_1=0,$$
(28)
where $`r_d`$ is the scale length of the exponential disk. The additional factor in the first derivative term will modify the behavior of the Bessel functions when $`r\gtrsim r_d`$, and the additional term in the last parenthesis will complicate the wave behavior. But the qualitative nature of the Bessel function solutions does not change.
## 7 Summary
We have obtained dispersion relations for spiral waves with multiple arms, considering curvature and gradient terms that were ignored in previous derivations. These dispersion relations suggest the presence of several new instabilities. Four specific cases were studied, flat and rising rotation curves with self-gravity, rising rotation curves without self-gravity, and Kepler rotation curves without self-gravity. These cases seem to have applications in various regions of galaxies and accretion disks.
When self-gravity is present, instability at lowest order in the parameter $`ϵ`$ (cf. Eq. 1) is driven by both shear and self-gravity. Then there are two independent instability conditions, either of which can cause spiral waves. These are equations (11) and (13). The first of these comes from Bertin et al. (1989), and contains the Toomre (1964) instability condition, $`Q<1`$, as a special case for ring-like perturbations ($`m=0`$, which gives $`J=0`$). This first instability is the spiral instability that is commonly discussed in the literature as a source of multiple arm and grand design spiral structure in galaxy and protoplanetary disks. The second of these conditions arises outside the Lindblad resonances from a combination of parameters different than the first when $`Q^2>4\widehat{\eta }`$ (cf. Eq. 12). When self-gravity is not present, this second case is still unstable as a result of pressure and differential rotation alone, as determined by the smallness of the parameter $`a/(\kappa r)`$ (cf. Eq. 15). This pressure-rotation instability is apparently new, and we call it an acoustic instability. A physical explanation for it was given in section 3.3.
We also found additional instabilities coming from higher order terms in an expansion of the dispersion relation (18) around the small parameter $`ϵ`$. These additional instabilities are present even when the BLLT and Toomre instability conditions are not satisfied, i.e., when the low order terms give stability. The source of these residual instabilities is a combination of orbital curvature \[terms of order $`1/(kr)`$\], self-gravity (terms of order $`ϵ`$), and various disk gradients ($`r\sigma ^{}/\sigma `$, $`ra^{}/a`$, etc.), including shear (the $`J`$ or $`s`$ terms). Growth rates for these residual instabilities were given to all orders in $`ϵ`$ for flat and rising rotation curves by figures 2 and 3 and tables 1 and 2, and they were given to first order in $`ϵ`$ by equation (21) for solid body rotation. The residual instability that arises from self-gravity and orbital curvature (through $`ϵ`$), discussed in sections 5.1 and 5.2, will be called a gravitational-curvature instability. The residual instability that arises from a combination of pressure and orbital curvature \[through $`a/(\kappa r)`$\], discussed in sections 5.3 and 5.4, will be called an acoustic-curvature instability, because it operates even without self-gravity.
These three new instabilities should be important for fluid disks with negligible or weak self-gravity, including proto-planetary disks, gaseous disks around black holes, some galactic nuclear disks, low surface brightness galaxy disks, and some dwarf galaxies. In these cases, zero-order acoustic and higher-order acoustic-curvature and gravitational-curvature instabilities can lead to the growth of spiral or other structures in about an orbital time. They are most important in the region close to the center where the orbital time is small.
Non-linear effects arising from these waves may ultimately lead to visible dust lanes (Elmegreen et al. 1998) and associated gaseous shocks (Roberts 1969) in even the most weakly self-gravitating disks, with the possibility of heightened self-gravity and star formation in some of the compressed regions (e.g., Elmegreen 1994). Non-linear effects might also promote accretion flows (e.g., Larson 1990). Indeed, the ubiquity of acoustic waves in disks implies that galactic nuclear accretion should occur in a wide variety of environments with or without shear, self-gravity, or magnetic fields.
# Quintessence, the Gravitational Constant, and Gravity
## I introduction
Recently a number of observations suggest that the Universe is dominated by an energy component with an effective negative pressure. One possibility for such a component is the cosmological constant. Another possibility is dynamical vacuum energy or quintessence, a slowly varying and spatially inhomogeneous component with negative pressure.
We face two problems when we consider such a nonzero vacuum energy. The first is the fine-tuning problem related to the energy scale of the vacuum energy density $`10^{-47}\mathrm{GeV}^4`$. The second is the coincidence problem: why the vacuum energy is beginning to dominate presently. While these two problems are separated in quintessence, they are degenerate for the cosmological constant, and one has to introduce a cosmological constant of extremely small energy scale at the very beginning of the universe.
As a solution of the coincidence problem, the notion of a tracker field is introduced in . It is shown that a very wide range of initial conditions approach a common evolutionary track, so that the cosmology is insensitive to the initial conditions, similar to inflation. Once one parameter relating to the energy scale of the vacuum energy is fixed, the present-day equation-of-state $`w_Q=p_Q/\rho _Q`$ is automatically determined: there is an $`\mathrm{\Omega }_Q`$–$`w_Q`$ relation.
Direct methods to verify the idea of quintessence are important. Proposed possibilities are the following: the direct reconstruction of the effective potential from the luminosity distance - redshift relation observed for type Ia supernovae; the detection of quintessence from the measurements of a rotation in the plane of polarization of radiation from distant radio sources. The direct interaction of the quintessence field to ordinary matter, however, is found to be strongly suppressed so as not to violate the equivalence principle and the constancy of the constants of nature.
The possibility of an explicit coupling between the scalar field and the curvature is not excluded theoretically. It is then natural to consider further the coupling of the quintessence field to the gravity itself. In this paper, we examine the cosmological consequence of the non-minimal coupling of the quintessence field to the gravity. Since such a scalar field gives rise to both the time variation of the gravitational constant and a gravity force of long range, such a coupling should be constrained by experiments.
## II Non-minimally Coupled Quintessence
The action we consider is
$$S=\int d^4x\sqrt{-g}\left[\frac{R}{2\kappa ^2}-\frac{1}{2}\xi \varphi ^2R-\frac{1}{2}g^{ab}\partial _a\varphi \partial _b\varphi -V(\varphi )\right]+S_m,$$
(1)
where $`\kappa ^2\equiv 8\pi G_{bare}`$ is the bare gravitational constant and $`S_m`$ denotes the action of matter. The effective gravitational “constant” is defined by $`\kappa _{eff}^2\equiv \kappa ^2(1-\xi \kappa ^2\varphi ^2)^{-1}`$. $`\xi `$ is the non-minimal coupling between the scalar field and the curvature. In our conventions, $`\xi =1/6`$ corresponds to the conformal coupling.
We assume that the universe is described by the flat homogeneous and isotropic universe model with the scale factor $`a`$, normalized so that $`a=1`$ at the present. The field equations are then
$`H^2\equiv \left({\displaystyle \frac{\dot{a}}{a}}\right)^2={\displaystyle \frac{\kappa ^2}{3}}\left[1-\xi \kappa ^2\varphi ^2\right]^{-1}\left(\rho _B+{\displaystyle \frac{1}{2}}\dot{\varphi }^2+V(\varphi )+6\xi H\varphi \dot{\varphi }\right),`$ (2)
$`\dot{H}=-{\displaystyle \frac{\kappa ^2}{2}}\left[1-\xi \kappa ^2\varphi ^2\right]^{-1}\left[\rho _B+p_B+\dot{\varphi }^2+2\xi \left(H\varphi \dot{\varphi }-\dot{\varphi }^2-\varphi \ddot{\varphi }\right)\right],`$ (3)
$`\ddot{\varphi }+3H\dot{\varphi }+6\xi \left(\dot{H}+2H^2\right)\varphi +V^{\prime }=0,`$ (4)
$`\dot{\rho }_B+3H(\rho _B+p_B)=0,`$ (5)
where $`\rho _B,p_B`$ denote the background energy density and pressure, respectively, and $`V^{\prime }=dV/d\varphi `$.
We consider a potential of inverse-power as an example of the tracker field for $`\xi =0`$:
$$V(\varphi )=M^4(\varphi /M)^{-\alpha }.$$
(6)
For $`\xi =0`$, there exists the following scaling solution during the background dominated epoch
$`H/H_0=a^{-3(1+w_B)/2}`$ (7)
$`\varphi /\varphi _0=a^{3(1+w_B)/(\alpha +2)}`$ (8)
$`\varphi _0=\left({\displaystyle \frac{2\alpha (\alpha +2)^2M^{\alpha +4}}{9H_0^2(1+w_B)(4+(1-w_B)\alpha )}}\right)^{1/(\alpha +2)}.`$ (9)
The equation-of-state $`w_Q`$ is
$$w_Q=\frac{\alpha w_B-2}{\alpha +2}.$$
(10)
Since we consider a potential whose present mass scale is extremely small ($`H_0\sim 10^{-33}\mathrm{eV}`$), the force mediated by the scalar field is of long range, and hence the usual solar system limit on $`\xi `$, likewise the Brans-Dicke parameter, does apply. The correspondence to the Brans-Dicke field $`\mathrm{\Phi }_{BD}`$ and the coupling function $`\omega (\mathrm{\Phi }_{BD})`$ of scalar-tensor theories of gravity is given by
$`\mathrm{\Phi }_{BD}`$ $`=`$ $`8\pi (1-\xi \kappa ^2\varphi ^2)/\kappa ^2,`$ (11)
$`\omega (\mathrm{\Phi }_{BD})`$ $`=`$ $`{\displaystyle \frac{1-\xi \kappa ^2\varphi ^2}{4\xi ^2\kappa ^2\varphi ^2}}={\displaystyle \frac{\kappa ^2\mathrm{\Phi }_{BD}}{4\xi (8\pi -\kappa ^2\mathrm{\Phi }_{BD})}}.`$ (12)
### A perturbative analysis
To consider the effect of $`\xi `$ qualitatively, we consider the case of $`|\xi |\kappa ^2\varphi ^2\ll 1`$. Then during the background dominated epoch, Eq.(2) and Eq.(4) are approximated to
$`H^2={\displaystyle \frac{\kappa ^2}{3}}\rho _B`$ (13)
$`\ddot{\varphi }+3H\dot{\varphi }+\xi \kappa ^2(1-3w_B)\rho _B\varphi +V^{\prime }=0,`$ (14)
where we have used Eq.(3) to derive Eq.(14). It has recently been established that the scaling solutions Eq.(7-8) with the same power-index persist even if $`\xi \ne 0`$ and that their stability does not depend on $`\xi `$.
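A minimal sketch of integrating this approximate system in e-folds $`N=\mathrm{ln}a`$ (units $`\kappa =1`$; the parameter values are illustrative, not those used for the figures below):

```python
import numpy as np
from scipy.integrate import solve_ivp

alpha, xi, w_B, M = 4.0, 0.01, 0.0, 0.1     # illustrative values
dV = lambda phi: -alpha * M**(alpha + 4.0) * phi**(-alpha - 1.0)

def rhs(N, y):
    """Background-dominated evolution of (phi, dphi/dN, rho_B), using
    H^2 = rho_B/3 and the approximate field equation (14)."""
    phi, dphi, rho_B = y
    H2 = rho_B / 3.0
    # Eq. (14) rewritten in N, with (dH/dN)/H = -3(1+w_B)/2:
    ddphi = -(3.0 - 1.5 * (1.0 + w_B)) * dphi \
            - (xi * (1.0 - 3.0 * w_B) * rho_B * phi + dV(phi)) / H2
    drho_B = -3.0 * (1.0 + w_B) * rho_B
    return [dphi, ddphi, drho_B]

sol = solve_ivp(rhs, (0.0, 5.0), [0.5, 0.0, 1.0], rtol=1e-8)
```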
To the lowest order in $`\xi `$, the corresponding Brans-Dicke parameter is given by
$$\omega _0=\frac{1-\xi \kappa ^2\varphi _0^2}{4\xi ^2\kappa ^2\varphi _0^2}\simeq \frac{3}{4\alpha (\alpha +2)}\frac{1}{\xi ^2},$$
(15)
where we have used the relation that holds for the potential of inverse-power to estimate the present-day value of the scalar field:
$$V^{\prime \prime }=\alpha (\alpha +1)\frac{V}{\varphi ^2}=\frac{9}{2}(1-w_Q^2)\frac{\alpha +1}{\alpha }H^2.$$
(16)
Up to $`𝒪(\xi )`$, the time variation of the gravitational constant is given by
$$\frac{\dot{G}}{G}|_0=\frac{2\xi \kappa ^2\varphi \dot{\varphi }}{1-\xi \kappa ^2\varphi ^2}|_0\simeq 2\xi \alpha H_0$$
(17)
Hence, for $`\xi >0`$ the gravitational “constant” increases with time, while it decreases for $`\xi <0`$. Eq.(17) also shows that $`|\dot{G}/G|`$ is larger for larger $`\alpha `$ since the potential then becomes steeper and the scalar field rolls down the potential more rapidly.
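Both leading-order observables follow immediately from Eqs. (15) and (17); a small helper, with $`1/H_0`$ in years for $`h\approx 0.6`$ as an illustrative input:

```python
def omega0_and_Gdot(xi, alpha, H0_inv_yr=1.63e10):
    """Leading-order Brans-Dicke parameter (Eq. 15) and Gdot/G (Eq. 17);
    Gdot/G is returned per year."""
    omega0 = 3.0 / (4.0 * alpha * (alpha + 2.0) * xi**2)
    gdot_over_g = 2.0 * xi * alpha / H0_inv_yr
    return omega0, gdot_over_g

omega0, gdot = omega0_and_Gdot(xi=0.01, alpha=4.0)
print(f"omega_0 ~ {omega0:.0f},  Gdot/G ~ {gdot:.1e} per yr")
```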
### B constraining $`\xi `$
We perform the numerical calculation to examine in detail the time variation of $`G`$ and the deviation from general relativity induced by the non-minimal coupling of the quintessence field to the curvature. The initial condition is set at $`a=10^{-14}`$. We vary the fraction of the energy density of the quintessence field relative to radiation from $`10^9`$ to $`10^{-30}`$. We also choose various initial $`\varphi `$ and $`\dot{\varphi }`$. We confirmed the tracking behavior: convergence to a common evolutionary track. Below we show typical results for the potential Eq.(6) with $`\alpha =4`$. We choose the following parameters: $`\mathrm{\Omega }_M\equiv \kappa _{eff}^2\rho _M/3H^2|_0=0.3`$ and $`H_0=100h\mathrm{km}/\mathrm{sec}/\mathrm{Mpc}`$ with $`h=0.6`$.
There exist a lot of experimental limits on the time variation of $`G`$. Radar ranging data to the Viking landers on Mars gives $`\dot{G}/G=(2\pm 4)\times 10^{-12}\mathrm{yr}^{-1}`$. Lunar laser ranging experiments yield $`\dot{G}/G=(0\pm 11)\times 10^{-12}\mathrm{yr}^{-1}`$, recently updated as $`\dot{G}/G=(1\pm 8)\times 10^{-12}\mathrm{yr}^{-1}`$. More recently, a tighter bound is found by analysing the measurements of the masses of young and old neutron stars in binary pulsars: $`\dot{G}/G=(-0.6\pm 2.0)\times 10^{-12}\mathrm{yr}^{-1}`$, although the uncertainties in the age estimation may weaken the constraint. Considering these experimental results, we will adopt the limit: $`\dot{G}/G=(0\pm 8)\times 10^{-12}\mathrm{yr}^{-1}`$, and the limit by Thorsett is treated separately.
In Fig. 1, we show the numerical results of $`\dot{G}/G`$. The shaded region is already excluded by the current experimental limits. To examine the model dependencies of the results, we also show $`\dot{G}/G`$ for the potential of the form $`M^4\left(\mathrm{exp}(1/\kappa \varphi )-1\right)`$ by a dotted curve. We find that negative $`\xi `$ is severely constrained, while positive $`\xi `$ is loosely constrained and the limit is dependent on the potential.
These results are intuitively understood via the conformally transformed picture. We perform the conformal transformation so that the scalar field is minimally coupled:
$$g_{ab}=\stackrel{~}{g_{ab}}|1-\kappa ^2\xi \varphi ^2|^{-1}.$$
(18)
Then the action Eq.(1) becomes
$$S=\int d^4x\sqrt{-\stackrel{~}{g}}\left[\frac{\stackrel{~}{R}}{2\kappa ^2}-\frac{1}{2}F^2(\varphi )(\stackrel{~}{\nabla }\varphi )^2-\stackrel{~}{V}(\varphi )\right]+S_m,$$
(19)
where
$`F^2(\varphi )`$ $`=`$ $`{\displaystyle \frac{1-\xi (1-6\xi )\kappa ^2\varphi ^2}{(1-\xi \kappa ^2\varphi ^2)^2}},`$ (20)
$`\stackrel{~}{V}(\varphi )`$ $`=`$ $`{\displaystyle \frac{V(\varphi )}{(1-\xi \kappa ^2\varphi ^2)^2}}.`$ (21)
Hence, after redefining the scalar field so that the kinetic term is canonical:
$$\mathrm{\Phi }=\int 𝑑\varphi F(\varphi ),$$
(22)
the action is reduced to that of the scalar field minimally coupled to the Einstein gravity. We can follow the dynamics qualitatively by simply looking at the effective potential $`\stackrel{~}{V}(\mathrm{\Phi })`$. Note that $`1/(1-\xi \kappa ^2\varphi ^2)^2`$ is a decreasing function of $`\varphi `$ for $`\xi <0`$, while an increasing function for $`\xi >0`$. For $`\xi <0`$ the effective potential $`\stackrel{~}{V}(\varphi )`$ decreases more rapidly than $`V(\varphi )`$ (in particular, $`\stackrel{~}{V}(\mathrm{\Phi })`$ decreases exponentially for large $`\kappa \varphi `$), and consequently the scalar field rolls down the potential more rapidly. On the other hand, for $`\xi >0`$, $`\stackrel{~}{V}(\mathrm{\Phi })`$ diverges at $`\kappa ^2\varphi ^2=1/\xi `$, so the slope of the effective potential becomes gentler and the scalar field rolls down the potential more slowly, and hence $`|\dot{G}/G|`$ becomes smaller than that for $`\xi <0`$.
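The field redefinition and the effective potential are straightforward to evaluate numerically; a sketch with $`\kappa =1`$ and the inverse-power potential, computing $`\mathrm{\Phi }(\varphi )`$ by quadrature:

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

xi, alpha, M = -0.05, 4.0, 0.1                     # illustrative values
phi = np.linspace(0.1, 3.0, 600)
F = np.sqrt((1.0 - xi * (1.0 - 6.0 * xi) * phi**2)
            / (1.0 - xi * phi**2) ** 2)            # Eq. (20)
Phi = cumulative_trapezoid(F, phi, initial=0.0)    # Eq. (22)
V_tilde = M**(alpha + 4.0) * phi**(-alpha) / (1.0 - xi * phi**2) ** 2  # Eq. (21)
# For xi < 0, V_tilde(Phi) falls off more rapidly than V(phi); for xi > 0
# the slope of V_tilde(Phi) becomes gentler, as described in the text.
```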
We may summarize that the current experimental limits on the time variation of $`G`$ constrain the non-minimal coupling as
$$-10^{-2}\lesssim \xi \lesssim 10^{-2}\text{ to }10^{-1},$$
(23)
while if the tighter limit by Thorsett is adopted, then we have
$$-10^{-2}\lesssim \xi \lesssim 10^{-2}.$$
(24)
However, the limit is sensitive to the shape of the potential.
The most important experimental limits we must consider are those from solar system experiments, such as the Shapiro time delay and the deflection of light, because the non-minimally coupled scalar field can mediate a long range gravity force in addition to that mediated by the metric tensor. The recent experiments set a constraint on the parameterized-post-Newtonian (PPN) parameter $`\gamma _{\mathrm{PPN}}`$ as
$$|\gamma _{\mathrm{PPN}}-1|<2\times 10^{-3},$$
(25)
which constrains the Brans-Dicke parameter through the relation $`\gamma _{\mathrm{PPN}}=(\omega +1)/(\omega +2)|_0`$
$$\omega _0>500.$$
(26)
In Fig. 2, we show the present-day Brans-Dicke parameter defined by Eq.(12) as a function of $`\xi `$. We also plot a curve derived under the assumption of $`|\xi |\kappa ^2\varphi ^2\ll 1`$, Eq.(15). We find a good agreement. Thus, using Eq.(15) and Eq.(26), the non-minimal coupling $`\xi `$ is found to be constrained as
$$|\xi |<3.9\times 10^{-2}\frac{1}{\sqrt{\alpha (\alpha +2)}}\le 2.2\times 10^{-2}$$
(27)
as long as $`\alpha \ge 1`$. The limit is less sensitive to the potential than that derived from $`|\dot{G}/G|`$ because $`\omega `$ does not explicitly depend on $`\dot{\varphi }`$ unlike $`|\dot{G}/G|`$. We note that the limit is found to be insensitive to $`\mathrm{\Omega }_M`$ as long as $`\mathrm{\Omega }_M\lesssim 0.7`$. There is another PPN parameter $`\beta _{\mathrm{PPN}}`$ which is written in terms of $`\omega `$ as $`\beta _{PPN}-1=(d\omega /d\mathrm{\Phi }_{BD})(2\omega +4)^{-1}(2\omega +3)^{-2}|_0`$. The most recent results of the lunar-laser-ranging, combined with Eq.(25), yields
$$|\beta _{\mathrm{PPN}}-1|<6\times 10^{-4}.$$
(28)
We find that $`|\beta _{\mathrm{PPN}}-1|\sim 𝒪(\xi ^3)`$ and consequently the experimental limit on $`\beta _{PPN}`$ is always satisfied if the condition Eq.(25) is satisfied.
## III summary
We have explored the possibility of an explicit coupling between the quintessence field and the curvature. Because the force mediated by the scalar field is of long range ($`\sim H_0^{-1}`$), such a coupling is constrained by the solar system experiments. Through both analytical estimates and numerical integration of the equations, we have found that the limit is given by $`|\xi |\lesssim 10^{-2}`$. The current limit on the non-minimal coupling, $`|\xi |\lesssim 10^{-2}`$, is not so strong when compared with other couplings to ordinary matter. For example, a coupling with the electromagnetic field is suppressed at the level of $`10^{-6}`$; the coupling with QCD is at most $`10^{-4}`$ . We have also found that the induced time variation of $`G`$ is sensitive to the shape of the potential. The future improvements in the limit of $`\dot{G}/G`$ may further constrain negative $`\xi `$ or might lead to a detection of $`\dot{G}/G<0`$ depending on the potential of the quintessence field.
###### Acknowledgements.
The author would like to thank Professor K.Sato for useful comments. He also would like to thank the hospitality of Aspen Center for Physics, where the final part of this work was done. This work was supported in part by the JSPS under Grant No.3596.
# Prospects for the Determination of Star Orbits Near the Galactic Center
## 1 Introduction
The proper motion studies of stars near the Galactic Center (Genzel et al. 1997; Eckart & Genzel 1997; Ghez et al. 1998) show the astonishing accuracy of the astrometric observations in the near infrared $`K`$ band. The closest to Sgr A studied star is at the projected distance $`\sim 100\mathrm{mas}`$ (corresponding to $`850\mathrm{AU}`$ at $`8.5\mathrm{kpc}`$) from its position, and moves with the velocity $`\sim 1400\mathrm{km}/\mathrm{s}`$ in the plane of the sky. The radial velocities of some of the stars at $`\sim 3^{\prime \prime }`$ from Sgr A are also measured (Genzel et al. 1996) and the comparison with the proper motion data shows that the velocity distribution is nearly isotropic. The published observations cover a relatively short span of time and the above authors use a statistical approach to find the mass in the central $`0.01\mathrm{pc}`$ region around the Galactic Center. The present accuracy of the observations makes it possible to study the orbits of individual stars and derive the mass in the central part of the Galaxy by more direct methods, similar to those used in classical binary systems of stars.
Recently Salim & Gould (1998) proposed the study of the orbits of individual stars in the vicinity of Sgr A to get its distance. Such measurement may be based on the accumulation of astrometric data augmented by the radial velocity data. Salim & Gould consider three stars with the smallest projected distances from Sgr A which can be found in the Ghez et al. (1998) catalog. They investigate the accuracy of the Sgr A distance estimate achieved after given observation time and its dependence on the actual orbit inclinations and periods of the chosen stars. They assume that the star positions will be obtained with the present accuracy ($`2\mathrm{mas}`$) and that the radial velocity will be measured with an error smaller than $`50\mathrm{km}/\mathrm{s}`$.
In this paper we address similar questions using a different approach. First we are interested in the accuracy of the determination of all orbit parameters and the accuracy of the determination of the central mass. We are also interested in determinations based on better astrometric accuracy and using fainter stars, which may in future be found closer to the Galactic Center. We do not use any particular stars with already measured proper motions, but rather simulate the orbits with given semimajor axis, not exceeding $`10^3\mathrm{AU}`$ and randomly chosen eccentricity and orientation in space. Similar approach has been used by Jaroszyński (1998b), in the study of the observability of relativistic effects in motion of stars close to Sgr A. According to this paper only the relativistic motion of periastron would be measurable for orbits $`1000\mathrm{AU}`$ in size and only if the accuracy of astrometric measurements is much higher than the present one, reaching the future capabilities of the Keck Interferometer (van Belle & Vasisht 1998). We neglect the relativistic effects altogether using purely Newtonian star trajectories in our studies.
While the presence of the black hole in the Galactic Center has not been proven yet, and the existence of a dense cluster of some kind of dark matter here (Munyaneza, Tsiklauri, & Viollier 1998) has not been excluded, we assume that there is in fact a point mass in the Galactic Center. We adopt the central black hole mass estimate of Ghez et al. (1998), $`M_0=2.6(\pm 0.2)\times 10^6\mathrm{M}_{\mathrm{\odot }}`$, and the distance to the Galactic Center, $`D_0=8.5\mathrm{kpc}`$, for our simulations. Our analysis is aimed at finding the relative error in possible mass and distance estimates and does not depend critically on their exact values used for simulations. With the adopted distance and mass the angular size of $`100\mathrm{mas}`$ corresponds to $`850\mathrm{AU}`$ and to the orbital period of $`15.4\mathrm{y}`$ for an elliptical orbit of this semimajor axis.
In the next Section we consider the modeling of orbits based on astrometric observations alone. In Sec. 3 we consider fits based on the combined astrometric and radial velocity data. In Sec. 4 we estimate the number of stars in the close vicinity of Sgr A, which may in future be used for the determination of the black hole mass and distance. The discussion follows in the last Section.
## 2 Simulations of Astrometric Observations of Star Motions
We consider stars on elliptic orbits with “true” semimajor axis $`a_0\lesssim 10^3\mathrm{AU}`$. The closest to Sgr A star with measured proper motion (Ghez et al. 1998) is at the projected distance $`114\mathrm{mas}`$, so its 3D distance $`r\gtrsim 969\mathrm{AU}`$ and the semimajor axis of its orbit must be greater than $`485\mathrm{AU}`$. (It can be much greater, of course.) We are also interested in faint stars ($`K\simeq 17`$, or fainter), which may in future be found at similar or smaller distances from the Galactic Center. We postpone the discussion of the probability of finding such stars until Sec. 4.
We assume the accuracy of the relative position measurements to be constant in time and the observations to take place once a year, as has been the practice until now. According to Ghez et al. (1998) the uncertainty of the relative position measurements for bright ($`K\lesssim 15`$) stars near the Galactic Center is typically $`2\mathrm{mas}`$, which corresponds to $`17\mathrm{AU}`$. The proper velocity measurements are less accurate for fainter stars, with the uncertainty doubling over each 2 magnitude interval. This suggests indirectly that for faint stars ($`K\simeq 17`$) the uncertainty in the relative position amounts to $`4\mathrm{mas}`$, or $`34\mathrm{AU}`$. These are the typical numbers we are using in our numerical experiments. In the future the Keck Interferometer (van Belle & Vasisht 1998) will achieve $`20\mu \mathrm{as}`$ accuracy in astrometric mode, corresponding to $`0.17\mathrm{AU}`$. This number is another characteristic value, which can be used for simulations.
We introduce a Cartesian coordinate system $`(x_0,y_0)`$ in the orbital plane with the origin at the position of the black hole and the $`x_0`$ axis pointing toward the periastron. The “true” orbit is given as:
$$\frac{2\pi }{P_0}(t-t_0)=u-e_0\mathrm{sin}u;x_0=a_0(\mathrm{cos}u-e_0);y_0=b_0\mathrm{sin}u$$
(1)
where $`P_0`$ is the orbital period, $`t_0`$ is the time of the passage through the periastron, $`a_0`$ and $`b_0`$ are the major and minor semiaxes of the ellipse, $`e_0`$ is its eccentricity, and $`u`$ is the eccentric anomaly. The orientation of the ellipse in space is given by three angles (the inclination $`i_0`$, the position angle of the line of nodes in the sky $`\mathrm{\Omega }_0`$, and the angle of the periastron measured from the ascending node of the orbit $`\omega _0`$). The position of the star in the sky is obtained after the projection of its position in space, which is given as:
$$𝐫(t)=x_0(t)𝐞_x+y_0(t)𝐞_y$$
(2)
where $`𝐞_x`$ and $`𝐞_y`$ are the 3D unit vectors along $`x_0`$ and $`y_0`$ axes.
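For concreteness, equation (1) translates into a short routine (a minimal sketch with assumed names, not the authors' code): Kepler's equation is solved for the eccentric anomaly $`u`$ by Newton iteration and the in-plane coordinates are returned.

```python
import math

def orbit_position(t, P, t0, a, e, tol=1e-12):
    """In-plane position (x0, y0) at time t for an ellipse with
    semimajor axis a, eccentricity e, period P, and periastron
    passage at t0, following equation (1)."""
    M = 2.0 * math.pi * (t - t0) / P       # mean anomaly
    u = M                                  # starting guess
    for _ in range(100):                   # Newton iteration
        du = (u - e * math.sin(u) - M) / (1.0 - e * math.cos(u))
        u -= du
        if abs(du) < tol:
            break
    b = a * math.sqrt(1.0 - e * e)         # semiminor axis
    return a * (math.cos(u) - e), b * math.sin(u)
```

Projection onto the sky with the angles $`(i_0,\mathrm{\Omega }_0,\omega _0)`$ then gives the model position used in the fits below.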
Our approach is a Monte Carlo simulation of synthetic data sets (e.g. Press et al. 1987). Usually one has a model fitted to the real data and is interested in the confidence limits on the estimated parameters. One possible way of obtaining them is to take the fitted parameters as true and simulate sets of observations of the system, assuming that the model is a good representation of the system. In our case the known orbit parameters allow the calculation of the exact star position on the sky at any time. The measured positions are in error. With the estimated uncertainty in position measurements $`\sigma `$ we assume the measured position to be normally distributed:
$$p(X)dX=\frac{1}{\sqrt{2\pi }\sigma }\mathrm{exp}\left(-(X-X_0)^2/2\sigma ^2\right)dX$$
(3)
where $`X`$ can be any of the two measured coordinates of the star on the sky, and $`X_0`$ is its “true” value. We draw the simulated positions of the star randomly from this probability distribution and obtain a synthetic data set. Repeating the procedure we get many such sets. Fitting models to these simulated observations we obtain different sets of model parameters scattered around the original values. The scatter in these fitted parameters is a good measure of the confidence limits of the original fit. Thus, starting with an orbit of known parameters and simulating many synthetic data sets with the same uncertainty $`\sigma `$, we can learn about the likely quality of the fitted model. In particular we can estimate the typical errors of the fit.
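A stripped-down sketch of this bootstrap is given below (illustrative only: a face-on circular orbit with parameters $`(R,P,\varphi )`$ stands in for the full seven-parameter model, and all names are ours, not the paper's).

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

def model(t, R, P, phi):
    """Toy sky positions for a face-on circular orbit."""
    return np.column_stack((R * np.cos(2*np.pi*t/P + phi),
                            R * np.sin(2*np.pi*t/P + phi)))

true = (850.0, 15.4, 0.3)          # "true" parameters: AU, yr, rad
t_obs = np.arange(10.0)            # one observation per year, N = 10
sigma = 17.0                       # position uncertainty in AU (2 mas)

fits = []
for _ in range(1000):              # many synthetic data sets
    data = model(t_obs, *true) + rng.normal(0.0, sigma, (t_obs.size, 2))
    res = least_squares(lambda p: (model(t_obs, *p) - data).ravel(),
                        x0=true)   # start from the "true" values
    fits.append(res.x)

# 68% scatter of each fitted parameter around its "true" value
lo, hi = np.percentile(fits, [16, 84], axis=0)
print((hi - lo) / 2)
```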
The mass of the black hole is related to the orbital parameters through Kepler’s law:
$$M=\frac{4\pi ^2a^3}{\mathrm{G}P^2}$$
(4)
where $`G`$ is the gravitational constant. In this Section we assume that the distance to the Galactic Center is known and equal to $`D_0=8.5\mathrm{kpc}`$, so the directly measurable angular sizes are equivalent to the corresponding linear sizes. In general the quantity which can be estimated from the astrometric data alone is the ratio $`M/D^3`$. In our simulations of star orbits we use $`M_0=2.6\times 10^6M_{\odot }`$ for the value of the black hole mass. The mass estimated from the models fitted to simulated data is scattered around $`M_0`$. For a different central mass the values of some of the fitted parameters would scale, but the procedure would remain the same.
We find the model parameters using least-squares minimization of the expression:
$$\chi ^2=\underset{j=1}{\overset{N}{\sum }}\frac{(𝐗_j-𝐗(t_j;a,e,P,i,\mathrm{\Omega },\omega ,t_0))^2}{\sigma ^2}$$
(5)
where $`𝐗_j`$ is the “measured” position of the star at the time $`t_j`$ and $`𝐗(t_j;a,e,P,i,\mathrm{\Omega },\omega ,t_0)`$ is the position resulting from a model with the given parameters and calculated for the same instant of time.
The semimajor axis $`a_0`$ of the “true” orbit serves as the main parameter of our simulations. Our study shows that the quality of the fits depends mostly on this parameter (and on the “true” period, since the two are related). For practical reasons we limit the number of iterations in the procedure finding the minima of $`\chi ^2`$. A deeper analysis of the fitting shows a weak dependence of its success on the orbit eccentricity, the cases with $`e`$ close to 1 being relatively more difficult. We neglect this fact in our simulations, which means that the orbits having the above property are slightly underrepresented among successful fits. For a given $`a_0`$ we choose the eccentricity $`0\le e_0<1`$ as a random number. The $`\mathrm{cos}i_0`$, $`\mathrm{\Omega }_0`$, and $`\omega _0`$ are also given random values to guarantee an isotropic distribution of the orbit orientations and positions of the periastron. The time of periastron passage has no physical meaning (any properties of the motion depend on $`t-t_0`$ only) so we choose it at random from the range $`(0,P)`$.
We assume observations to take place on a randomly chosen day of June and to be repeated through $`N`$ seasons. The basic number of observations we consider is $`N=10`$, but we also make fewer simulations for other numbers. For each synthetic data set $`\{𝐗_j\}`$ we find a model, starting the fitting procedure from the “true” values of the parameters. Since we expect the parameters fitted to the scattered data to be close to the “true” parameters, this starting point seems to be the best. If the fit converges, and if the obtained minimal $`\chi ^2`$ is smaller than the tabulated value for the given number of degrees of freedom and required confidence, we include the parameters of the model in the sample. Otherwise we neglect them, but we keep track of such unsuccessful fits.
The results of our simulations are shown in Figs. 1 and 2. In the upper panel of Fig. 1 we show the ratio of the median value of the fitted semimajor axis $`a`$ to its “true” value $`a_0`$. We also draw lines showing the region including 68% of the sample points. Since the accuracy of the position measurements is constant, the relative errors in the fitted values of $`a`$ are larger for smaller orbits. The opposite can be said of the accuracy of the period estimates, which become more accurate when the total span of the observations becomes longer than the orbital period. That means increasing accuracy for smaller orbits. The eccentricity is related to the orbit shape and can be better estimated for large orbits. In Fig. 2 we show the results for the mass estimation based on the estimated orbital parameters. In this plot we see that the estimates for small orbits become less accurate. Even if there are faint stars on close orbits near the Galactic Center, speckle interferometry with the Keck telescope, with a position uncertainty of $`2\mathrm{mas}`$ ($`17\mathrm{AU}`$), is not sufficient to give mass estimates better than those obtained with the existing methods. For stars on large ($`P\gtrsim 10\mathrm{y}`$) orbits the “once a year” strategy seems promising, but requires several years of data acquisition.
## 3 Simulations of Astrometric and Radial Velocity Observations
Measurements of the radial velocities for some of the Galactic Center stars with measured proper motions have been performed (Genzel et al. 1996) with an accuracy of $`\sigma _v=30\mathrm{km}/\mathrm{s}`$. These stars are rather far from the center (at projected distances $`\approx 3^{\prime \prime }`$), but similar measurements for stars closer to the center, in the crowded field of view, may be possible in the future. We optimistically assume that the same accuracy of radial velocities will be possible for the stars closer to the center.
With radial velocities measured and orbits determined from the proper motion study, it is possible to estimate the distance $`D`$ to the source (Salim & Gould 1998). We now use $`\alpha _0`$, the angular measure of the orbit semimajor axis, as an independent parameter. (One has $`a_0=D\alpha _0`$; $`b_0=D\beta _0`$.)
The velocity components of a star moving on an elliptic orbit are:
$$v_{0x}=-\frac{2\pi D\alpha _0}{P_0}\frac{\mathrm{sin}u}{1-e\mathrm{cos}u};v_{0y}=+\frac{2\pi D\beta _0}{P_0}\frac{\mathrm{cos}u}{1-e\mathrm{cos}u}$$
(6)
where we use the reference frame of equation (1). The velocity vector in space is given as:
$$𝐯_0=v_{0x}(t)𝐞_x+v_{0y}(t)𝐞_y$$
(7)
and its component along the line of sight can (in principle) be measured. The parameter $`V_0\equiv 2\pi D\alpha _0/P_0`$ measures the amplitude of the velocity and can be independently fitted using the radial velocity data. The model of the orbit including radial velocities has eight parameters and can be fitted after the minimization of the expression
$$\chi ^2=\underset{j=1}{\overset{N}{\sum }}\frac{(𝐗_j-𝐗(t_j;\alpha ,e,P,i,\mathrm{\Omega },\omega ,V,t_0))^2}{\sigma ^2}+\underset{j=1}{\overset{N_v}{\sum }}\frac{(v_j-v(t_j;\alpha ,e,P,i,\mathrm{\Omega },\omega ,V,t_0))^2}{\sigma _v^2}$$
(8)
where $`N_v`$ is the number of radial velocity measurements, $`v_j`$ is the j-th measured radial velocity, and $`v(t_j;\alpha ,e,P,i,\mathrm{\Omega },\omega ,V,t_0)`$ is the radial velocity at the instant $`t_j`$ resulting from the model with given parameter values.
With the velocity measured independently, the central mass and its distance can be estimated:
$$M=\frac{PV^3}{2\pi \mathrm{G}}$$
(9)
$$D=\frac{VP}{2\pi \alpha }$$
(10)
where all the variables on the RHS are given by the fit. In Figure 3 we show the results of our simulations including radial velocity measurements. The results of the velocity fitting, the mass estimate, and the distance estimate are shown. As can be seen in the plots, the relative error in the velocity fit becomes smaller for smaller orbits, and the same can be said about the mass estimation. Ten astrometric observations with an accuracy of $`2\mathrm{mas}`$, combined with 5 radial velocity measurements taken over a period of $`10`$ years, are sufficient to give the distance to the Galactic Center with an accuracy better than 5% ($`1\sigma `$) for large enough orbits ($`a\gtrsim 200\mathrm{AU}`$).
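As a numerical sanity check of equations (9)-(10) (our illustration, with the convenient units $`G=4\pi ^2`$ in $`\mathrm{AU}^3M_{\odot }^{-1}\mathrm{yr}^{-2}`$ assumed), a face-on circular orbit with $`a=850\mathrm{AU}`$ around $`M_0=2.6\times 10^6M_{\odot }`$ viewed from $`D_0=8.5\mathrm{kpc}`$ returns the input mass and distance:

```python
import math

G = 4.0 * math.pi**2                   # AU^3 / (M_sun yr^2)
a, M0, D0_pc = 850.0, 2.6e6, 8500.0    # assumed "true" values

P = math.sqrt(a**3 / M0)               # period, yr (Kepler's third law)
V = 2.0 * math.pi * a / P              # orbital speed, AU/yr
alpha_arcsec = a / D0_pc               # 1 AU at 1 pc subtends 1 arcsec
alpha_rad = alpha_arcsec / 206265.0

M = P * V**3 / (2.0 * math.pi * G)     # equation (9)
D_AU = V * P / (2.0 * math.pi * alpha_rad)   # equation (10), in AU
print(M / 1e6, D_AU / 206265.0)        # -> ~2.6 (10^6 M_sun), ~8500 (pc)
```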
### 3.1 Accuracy of parameter fitting
The improvement of interferometric equipment is expected to give much better accuracy of position measurements, reaching $`\sigma \approx 20\mu \mathrm{as}`$ (van Belle & Vasisht 1998) in the case of the Keck Interferometer. We investigate the influence of the position accuracy on the expected errors in the fitted parameters, for the whole range of $`\sigma `$ from $`20\mu \mathrm{as}`$ to $`4\mathrm{mas}`$ ($`0.17`$ to $`34\mathrm{AU}`$). We repeat our simulations for several values of $`\sigma `$ and several values of the ellipse semimajor axis, using the same “observational strategy” as above. “Observations” of each orbit of a star are simulated many times. For every simulation we get a set of estimated orbit parameters. We define the scatter in the estimated values of a parameter $`p`$ (where $`p\in \{a,e,P,t_0,i,\mathrm{\Omega },\omega ,V\}`$) of a given orbit as
$$\delta p=(p_+-p_-)/2$$
(11)
where 16% of the estimated values of $`p`$ are greater than $`p_+`$, another 16% are below $`p_-`$, and the remaining 68% are between them. The scatter in the estimation of the parameter $`p`$ for all the orbits of the same size $`a_0`$, observed with the same position accuracy $`\sigma `$, is given as:
$$\mathrm{\Delta }p=\langle \delta p\rangle $$
(12)
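In code, the scatter of equations (11)-(12) amounts to a percentile computation; a minimal sketch (numpy conventions assumed):

```python
import numpy as np

def scatter(fitted_values):
    """delta_p of equation (11): half-width of the central 68%
    of a sample of fitted values of one parameter."""
    p_minus, p_plus = np.percentile(fitted_values, [16.0, 84.0])
    return (p_plus - p_minus) / 2.0
```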
In Fig. 4 we show the scatter in the fitted parameters. The ratio $`\sigma /a_0`$ is a good estimate of the deformation introduced into the visual orbit, so we use it as the abscissa for our plots. In the left column we show the results for simulations based on astrometric measurements alone. In the right column the typical errors in the fitting of the orbit elements are shown for combined astrometric and radial velocity synthetic data. It can be seen that the radial velocity data, which have fixed accuracy in our simulations, can improve the fitting procedure in the case of poor astrometric accuracy.
We also consider the accuracy of the mass and distance determinations. In Fig. 5 we display the typical uncertainty in the mass estimation based on the two methods and the results for the distance determination. Again, the increased astrometric accuracy does not much improve the determinations based on radial velocity data.
## 4 Possibility of Observing Stars Closer to the Center
The central star cluster (Genzel et al. 1996, 1997) is the dominating stellar component within $`10^2\mathrm{pc}`$ of the Galactic Center. The density distribution follows the softened isothermal sphere model
$$n(r)=\frac{n_c}{1+(r/r_c)^2}$$
(13)
with the core radius $`r_c=0.22\mathrm{pc}`$ ($`5.33\mathrm{arcsec}`$). Projection along the line of sight gives the surface concentration of stars:
$$𝒩(R)=\pi n_c\frac{r_c^2}{\sqrt{r_c^2+R^2}}\equiv 𝒩_K\frac{r_c}{\sqrt{r_c^2+R^2}}$$
(14)
where $`R`$ is the distance from the black hole measured in the plane of the sky, and $`𝒩_{17}=20\mathrm{arcsec}^{-2}`$ for stars brighter than $`K=17^m`$. In the whole region of interest to us ($`R\ll r_c`$) the surface density of stars belonging to the cluster core is constant.
According to Ghez et al. (1998) the sample of stars in their proper motion studies constitutes a distinct cluster with the core radius $`r_{c1}=0.3^{\prime \prime }`$ ($`0.01\mathrm{pc}`$) and a peak surface density of $`𝒩\approx 15\mathrm{arcsec}^{-2}`$. Since the central surface densities of both clusters are similar, the volume density in the smaller one is $`r_c/r_{c1}`$ times larger than the volume density in the core of the background cluster. Thus approximately half of the stars that could be seen very close to Sgr A on the sky are indeed stars very close to the center in 3D.
The proper motion sample of stars (Ghez et al. 1998) is not complete and probably cannot be used as a tracer of the general population of stars in the very center of the Galaxy. On the other hand we expect that future proper motion studies of the Galactic Center will employ similar selection criteria, so the resulting samples will have a similar spatial distribution.
We are interested in the density of the observable stars which are well inside the core. The observability of sources is limited by the spatial resolution of the interferometer at a given limiting brightness. For a spatial resolution $`d`$ of the interferometer the surface density of stars should not be too high:
$$𝒩(2d)^2\lesssim 1\qquad \Rightarrow \qquad 𝒩\lesssim 𝒩_{\mathrm{max}}\equiv \frac{1}{4d^2}\approx 10^4\mathrm{arcsec}^{-2}$$
(15)
where we adopt $`d\approx 5\mathrm{mas}`$ as the resolution of the Keck Interferometer.
The combined surface star density in both clusters for $`K\le 17^m`$ is $`𝒩_{17}\approx 35\mathrm{arcsec}^{-2}`$ (Genzel et al. 1997; Alexander & Sternberg 1998; Ghez et al. 1998). The maximal surface density of stars $`𝒩_{\mathrm{max}}`$ is $`\approx 300`$ times larger. The integral luminosity function for the Galactic Center has the slope $`\beta =0.875`$ at $`K=17^m`$ (Blum et al. 1996). This slope flattens for stars less massive than $`0.7M_{\odot }`$ (Holtzman et al. 1998), which corresponds to $`K\approx 21^m`$ (Alexander & Sternberg 1998) if we adopt the extinction $`A_K=3.5`$ (Blum et al. 1996) to the Galactic Center. Rescaling to $`21^m`$ in $`K`$ we have $`𝒩_{21}\approx 25𝒩_{17}`$, much less than the maximal surface density introduced above. Thus the possibility of finding faint stars close to the Galactic Center is limited by their volume density and not by the limited resolution of the Keck Interferometer.
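These numbers can be checked directly (our arithmetic, with the resolution value assumed above):

```python
d = 5e-3                     # adopted interferometer resolution, arcsec
N_max = 1.0 / (4.0 * d**2)   # equation (15)
N17 = 35.0                   # combined surface density for K <= 17
print(N_max)                 # -> 10000.0 stars per square arcsec
print(N_max / N17)           # -> ~286, i.e. the quoted factor of ~300
print(25.0 * N17)            # -> 875.0 = N_21, well below N_max
```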
Applying the same luminosity function to the proper motion sample of Ghez et al. (1998) we find that there should be $`\approx 25`$ times more stars with $`K\le 21`$ and with measurable proper motions. Some of them may be located closer to the black hole than the stars already observed. For such faint stars the Keck Interferometer operating in the imaging mode will have a position accuracy of $`\approx 3\mathrm{mas}`$. The highest accuracy for astrometry ($`20\mu \mathrm{as}`$) will be possible only for much brighter stars, $`K\lesssim 17.6`$ (van Belle & Vasisht 1998).
## 5 Discussion and Conclusions
We have investigated elliptical orbits of stars in the Newtonian potential of a point mass. According to Jaroszyński (1998b) the periastron motion of the orbit due to relativistic effects may be measurable for small enough orbits ($`a\lesssim 10^3\mathrm{AU}`$). A similar effect, but of different sign, may be caused by a significant amount of matter distributed continuously around the central black hole. The gravitational lensing in the vicinity of Sgr A (Jaroszyński 1998a) may deform a part of the visual orbit if the observer is close to the orbital plane and the star passes behind the central mass. All these effects are easy to account for and can be introduced into the model. We neglect them here, since they have no significant influence on the accuracy of the parameter fitting or the mass and distance estimates.
Since we limit the number of iterations in the procedure finding the $`\chi ^2`$ minima, we have investigated the influence of this limit on the estimated errors in the fitted parameters. We are interested mostly in the semimajor axis, period, and mass estimates. As our analysis shows, the number of iterations necessary for the fitting procedure to converge increases with increasing orbit eccentricity. We have performed extra calculations for orbits with the true semimajor axis $`a_0=800\mathrm{AU}`$, eccentricity changing from $`e_0=0.01`$ to $`0.99`$, and random orientation in space, simulating the astrometric observations of the moving stars with a position accuracy of $`\sigma =17\mathrm{AU}`$. The calculations show that convergence is reached in $`\approx 98`$% of the cases for $`e_0\le 0.8`$ and falls to $`\approx 90`$% for $`e_0=0.99`$. The quality of the fits, as measured by the value of $`\chi ^2`$, does not depend on the orbit eccentricity, and the same is true of the scatter in the estimated orbital period. The relative error in the estimated semimajor axis is up to $`\approx 2`$ times larger for highly eccentric orbits ($`e_0\gtrsim 0.8`$) as compared to lower eccentricity orbits. Since this group of orbits is underrepresented, the error is underestimated, but only slightly. (An analysis taking into account the adequate number of high eccentricity orbits would give $`\approx 1.01`$ times larger error estimates for the semimajor axis and $`\approx 1.03`$ times larger scatter in the mass estimates. Calculations with a doubled number of allowed iterations confirm this reasoning.) A similar investigation shows that the orbit inclination has a negligible influence on the accuracy of the estimated semimajor axis, period, or central mass.
The investigation of the proper motion of stars at distances $`\lesssim 10^3\mathrm{AU}`$ from Sgr A can provide a robust test of the existence of a black hole there. If the mass is indeed in the form of a black hole, and the amount of mass distributed continuously is insubstantial, then the stars should move on elliptical orbits and the rate of the periastron motion should agree with the mass estimated from the orbit size and period. Each orbit directly probes the distribution of mass at distances $`(1-e)a\le r\le (1+e)a`$. Knowing a few orbits one may cover a substantial range in distances from the central mass. For such a test faint stars, which may be found closer to the center than the already observed, relatively bright stars with $`K\lesssim 17`$, can be used. The stars closer to the center have shorter periods, so their orbits can be found in a shorter time. The increased accuracy of the astrometric position measurements will give very accurate orbits for bright stars. This may eventually serve as a test of the point-mass hypothesis at distances $`\sim 10^3\mathrm{AU}`$.
The measurement of radial velocities allows for the measurement of the distance to Sgr A (Salim & Gould 1998) and an absolute estimate of its mass. With the present accuracy of astrometry ($`2\mathrm{mas}`$) and spectroscopy ($`30\mathrm{km}/\mathrm{s}`$) in the $`K`$ band, and with the “observational strategy” adopted in our study, the most promising orbits for the distance estimate are those with $`a\approx 600\mathrm{AU}`$, which would give $`\approx 3`$% accuracy of the measurement in 10 years. Better accuracy, also for slightly larger orbits, can be obtained after a longer time (Salim & Gould 1998). The mass estimate based on the radial velocity becomes more accurate for smaller orbits. Their investigation is so challenging observationally that it is probably better to use large orbits and wait longer for the results.
Special thanks are due to Bohdan Paczyński for suggesting the topic of this research, interest in its progress and kind hospitality during my stay at the Department of Astrophysical Sciences, Princeton University. This project was supported with the NASA grant NAG5-7016 to B. Paczyński and the Polish KBN grant 2P03D-012-12 to M. Jaroszyński.
# Algebraic description of a two-dimensional system of charged particles in an external magnetic field and periodic potential
## 1 Introduction
The magnetic translation operators
$$T(\mathrm{R})=\mathrm{exp}\left[\frac{i}{\hbar }\mathrm{R}\cdot \left(\mathrm{p}-\frac{e}{c}\mathrm{A}\right)\right]$$
introduced by Brown (1964), to describe the movement of a Bloch electron in an external magnetic field, form in fact a projective (ray) representation of the translation group with a factor system (Brown 1964, Zak 1964a, b)
$$T(\mathrm{R})T(\mathrm{R}^{\prime })[T(\mathrm{R}+\mathrm{R}^{\prime })]^{-1}=m(\mathrm{R},\mathrm{R}^{\prime })=\mathrm{exp}\left[\frac{1}{2}\frac{ie}{c\hbar }(\mathrm{R}\times \mathrm{R}^{\prime })\cdot \mathrm{H}\right]$$
where $`\mathrm{H}=\nabla \times \mathrm{A}`$. This is only one of many applications of projective representations, first investigated by Schur (1904, 1907, 1911), in quantum physics. However, its clarity and importance led Backhouse and Bradley to start their series of articles on projective representations with this example (Backhouse and Bradley 1970, Backhouse 1970, 1971, Backhouse and Bradley 1972). Another important application is illustrated by the construction of space groups (Altmann 1977); however in this case one considers projective representations of the point group (see also Bradley and Cracknell 1972).
The other, equivalent, description of Bloch electrons in a magnetic field was proposed by Zak (1964a, b) and applied, e.g., by Divakaran and Rajagopal (1995) and the author (Florek 1994, 1996a, b). This approach consists in the introduction of a covering group and investigation of its ordinary, i.e. vector, representations (see also Altmann 1977, 1986). The covering group contains pairs $`(\alpha ,\mathrm{R})`$, $`\alpha \in \mathrm{U}(1)`$, and its vector representation can be written as a product $`\mathrm{\Gamma }(\alpha )T(\mathrm{R})`$, where $`\mathrm{\Gamma }`$ is a representation of $`\mathrm{U}(1)`$ and $`T`$ is a projective representation of the translation group (Zak 1964a, Altmann 1977, Florek 1994). Zak rejected representations with $`\mathrm{\Gamma }(\alpha )\ne \alpha `$ as ‘non-physical’ (Zak 1964b). However, if $`T^{\prime }`$ is a projective representation with a factor system $`\mathrm{\Gamma }(m(\mathrm{R},\mathrm{R}^{\prime }))`$, then the product $`\mathrm{\Gamma }T^{\prime }`$ is a vector representation of the covering group and there are no rules that are contravened by considering this case. The first attempt to consider all representations was performed within Zak’s approach by the author (Florek 1997a); in that work, the physical consequences of taking into account all cases were indicated.
This paper is based on Brown’s approach; i.e. projective representations of the translation group are considered. It is shown that all projective representations are necessary in a description of the movement of a particle with the charge $`qe`$, where $`q`$ is an integer, in a magnetic field and a crystalline potential. Moreover, applying results of earlier articles (Florek 1997b, Florek and Wałcerz 1998), this is done for any vector potential $`\mathrm{A}`$ (strictly speaking, for $`\mathrm{A}`$ a linear function of the coordinates; however, by an appropriate gauge transformation each vector potential can be written in such a form for a constant, uniform magnetic field). This removes the restriction imposed by Brown (1964) and Zak (1964a, b) on $`\mathrm{A}`$ of being a fully antisymmetric function of the coordinates (i.e. $`\partial A_l/\partial x_k+\partial A_k/\partial x_l=0`$ for each pair $`k,l=1,2,3`$). Moreover, the proposed approach yields in a natural way the concept of magnetic cells (Zak 1964a, b) and proves the periodicity of physical properties with respect to the charge, in addition to the periodicity with respect to the magnitude of the magnetic field proven by Azbel (1963). Since projective representations correspond to energy levels of one-particle states, their direct products must describe two-particle states (or many-particle states in a more general case). A system of two particles with the charges $`qe`$ and $`q^{\prime }e`$ has the total charge $`(q+q^{\prime })e`$ and, therefore, should correspond to a projective representation with a factor system determined by this charge. It follows from the previous discussion that in a many-body problem one has to consider all representations, including those considered by Zak as ‘non-physical’.
## 2 Periodicity with respect to charge
The Hamiltonian describing the motion of a charged particle in a periodic potential $`V(\mathrm{r})`$ and an external magnetic field $`\mathrm{H}=\nabla \times \mathrm{A}`$ is given as
$$\mathcal{H}=\frac{1}{2m}\left(\mathrm{p}-\frac{qe}{c}\mathrm{A}\right)^2+V(\mathrm{r})$$
where $`m`$ denotes the effective particle mass, $`\mathrm{p}`$ its kinetic momentum, and $`qe`$, with $`q`$ an integer and $`e>0`$, its charge. If the vector potential $`\mathrm{A}`$ is a linear function of the coordinates, i.e.
$$A_\alpha =\underset{\beta }{\sum }a_{\alpha \beta }\beta \qquad \alpha ,\beta =x,y,z$$
then the magnetic translation operators can be written as (Florek 1997b, Florek and Wałcerz 1998)
$$T(\mathrm{R})=\mathrm{exp}\left[\frac{i}{\hbar }\mathrm{R}\cdot \left(\mathrm{p}-\frac{qe}{c}\mathrm{A}^{\prime }\right)\right]$$
where $`\mathrm{A}^{}`$ is a vector potential associated with $`\mathrm{A}`$, defined as
$$A_\alpha ^{\prime }=\underset{\beta }{\sum }a_{\beta \alpha }\beta .$$
It is well known (Brown 1964, Zak 1964a, b) that the periodic boundary conditions allow us to consider a two-dimensional crystal lattice (in the $`xy`$-plane, say) and $`\mathrm{H}=[0,0,H]`$ perpendicular to it. Hence, any lattice vector can be considered as two-dimensional:
$$\mathrm{R}=n_1\mathrm{a}_1+n_2\mathrm{a}_2.$$
The magnetically periodic boundary conditions (Brown 1964) yield quantization of a magnetic flux:
$$\mathrm{H}\cdot (\mathrm{a}_1\times \mathrm{a}_2)=\frac{2\pi }{N}\frac{\hbar c}{e}\frac{L}{q}$$
where an integer $`L`$ is mutually prime with the crystal period $`N`$. Replacing the left-hand side by the flux per unit cell
$$\varphi =(e/hc)\mathrm{H}\cdot (\mathrm{a}_1\times \mathrm{a}_2)$$
one obtains
$$N\varphi =\frac{L}{q}.$$
(1)
The factor system $`m(\mathrm{R},\mathrm{R}^{\prime })`$ depends on $`\mathrm{A}`$: for example the antisymmetric gauge $`\frac{1}{2}(\mathrm{H}\times \mathrm{r})`$ gives (Brown 1964, Zak 1964a, b)
$$m(\mathrm{R},\mathrm{R}^{\prime })=\omega _N^{(1/2)L(n_2n_1^{\prime }-n_1n_2^{\prime })}=\omega _{2N}^{L(n_2n_1^{\prime }-n_1n_2^{\prime })}$$
(2)
whereas for the Landau gauge $`\mathrm{A}=[0,Hx,0]`$ (and $`\mathrm{A}^{\prime }=[Hy,0,0]`$)
$$m_N^{(L)}(\mathrm{R},\mathrm{R}^{\prime })=\omega _N^{Ln_2n_1^{\prime }}.$$
(3)
In both formulae, $`\omega _N=\mathrm{exp}(2\pi i/N)`$. However, the group-theoretical commutator is gauge-independent, and for any linear gauge we have (Florek and Wałcerz 1998)
$$T(\mathrm{R})T(\mathrm{R}^{\prime })T^{-1}(\mathrm{R})T^{-1}(\mathrm{R}^{\prime })=\omega _N^{L(n_1n_2^{\prime }-n_2n_1^{\prime })}.$$
(4)
The matrices of the irreducible projective representation corresponding to the factor system (3) can be chosen as
$$D_{ij}^{NL}(\mathrm{R})=\delta _{i,j-n_2}\omega _N^{Ln_1i}\qquad i,j=0,1,\dots ,N-1.$$
(5)
It should be underlined that such a projective representation is normalized (cf. Altmann 1977, 1986, Florek and Wałcerz 1998), in contrast to those corresponding to the factor system (2) and considered by Brown (1964).
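As a quick numerical check (ours, not from the paper), the matrices of equation (5) can be built explicitly and the relation $`D(\mathrm{R})D(\mathrm{R}^{\prime })=\omega _N^{Ln_2n_1^{\prime }}D(\mathrm{R}+\mathrm{R}^{\prime })`$, i.e. the factor system (3), verified:

```python
import numpy as np

def D(n1, n2, N, L):
    """Matrix of equation (5): D_ij = delta_{i, j - n2} omega_N^{L n1 i},
    with indices taken modulo N."""
    w = np.exp(2j * np.pi / N)
    return np.array([[w**(L * n1 * i) if (j - i) % N == n2 % N else 0.0
                      for j in range(N)] for i in range(N)])

N, L = 5, 2
n1, n2, m1, m2 = 1, 3, 2, 4                  # R = (n1, n2), R' = (m1, m2)
w = np.exp(2j * np.pi / N)
lhs = D(n1, n2, N, L) @ D(m1, m2, N, L)
rhs = w**(L * n2 * m1) * D(n1 + m1, n2 + m2, N, L)
print(np.allclose(lhs, rhs))                 # -> True
```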
If $`\mathrm{gcd}(L,N)=\nu >1`$ the representations (5) are reducible and the corresponding factor system is
$$m_n^{(l)}(\mathrm{R},\mathrm{R}^{\prime })=\omega _n^{ln_2n_1^{\prime }}$$
(6)
where $`l=L/\nu `$, $`n=N/\nu `$ and $`\mathrm{gcd}(l,n)=1`$. Irreducible projective representations with such factors have to be $`n`$-dimensional, which directly leads to the concept of magnetic cells: one obtains $`D^{NL}(n\mathrm{R})=\mathrm{𝟏}`$, so the magnetic period is equal to $`n`$, though the crystal period is still $`N`$. Therefore, the $`N\times N`$ lattice can be viewed as a $`\nu \times \nu `$ lattice, with the translation group $`T_\nu =\mathbb{Z}_\nu ^2`$, of $`n\times n`$ magnetic cells. Let $`(\xi _1,\xi _2)`$ label magnetic cells, whereas $`(\eta _1,\eta _2)`$ is the position within a magnetic cell, i.e. $`n_i=\eta _i+\xi _in`$. Then the matrices
$$D_{ij}^{nl,\mathrm{k}}(\mathrm{R})=D_{ij}^{nl}(\eta _1,\eta _2)D^\mathrm{k}(\xi _1,\xi _2)=\delta _{i,j-\eta _2}\omega _n^{l\eta _1i}D^\mathrm{k}(\xi _1,\xi _2)$$
(7)
form an irreducible projective representation of $`\mathbb{Z}_N^2`$ with the factor system (6), where
$$D^\mathrm{k}(\xi _1,\xi _2)=\mathrm{exp}[2\pi i(k_1\xi _1+k_2\xi _2)/\nu ]=\omega _\nu ^{(k_1\xi _1+k_2\xi _2)}$$
(8)
is an irreducible representation of $`T_\nu `$ (Backhouse 1970). The character of the representation given by (7) is
$$\chi _{n,l;\mathrm{k}}(\mathrm{R})=\delta _{\eta _1,0}\delta _{\eta _2,0}n\omega _\nu ^{(k_1\xi _1+k_2\xi _2)}.$$
For given $`n`$ and $`l`$ (i.e. for a given factor system), we obtain all $`\nu ^2`$ inequivalent irreducible projective representations labelled by $`\mathrm{k}`$ (Altmann 1977, 1986), and all of them are normalized.
To determine a relation between the charge $`q`$ of a particle and the irreducible projective representation $`D^{nl,\mathrm{k}}`$, let us fix the magnetic flux $`\varphi `$ and the crystal period $`N`$. Then the condition (1) gives $`L=N\varphi q`$, i.e. $`L\propto q`$. However, this is not a one-to-one relation, since $`L`$ is limited to the range $`0,1,\dots ,N-1`$ with no condition imposed on $`q`$. The representation (5), its factor system (3), and the commutator (4) are determined by $`\omega _N^L`$, so all of them are periodic functions of $`L\propto q`$ and, therefore, periodic functions with respect to the charge of a moving particle. We see, in particular, that for $`q=zN`$, $`z\in \mathbb{Z}`$, vector representations with trivial factor systems (and trivial commutators) are obtained. This means that for a given crystal period $`N`$ and constant magnetic field, a particle with the charge $`zNe`$ behaves as a non-charged one. It is also easy to see that for some $`q`$ we can obtain $`L=l\nu `$, where $`\nu =\mathrm{gcd}(N,L)`$, and in this case the irreducible representations $`D^{nl,\mathrm{k}}`$ have to be used. Since $`\nu `$ is a co-divisor of $`n`$, then assuming $`\varphi =1/N`$ we obtain
$$q=N\frac{l}{n}$$
(9)
which relates the pair $`(n,l)`$ (the label of the irreducible representation) and the charge $`q`$ of a particle. It has to be underlined that this relation has been derived for a fixed $`\varphi `$ and does not depend on the irreducible representations $`D^\mathrm{k}`$ of $`T_\nu `$ given by (8).
## 3 Multi-particle states
It can be shown (see, for example, Altmann 1986) that a product of two projective representations $`D^{\prime }`$ and $`D^{\prime \prime }`$ of a given group $`G`$ with factor systems $`m^{\prime }`$ and $`m^{\prime \prime }`$, respectively, is another projective representation with a factor system $`m(g,g^{\prime })=m^{\prime }(g,g^{\prime })m^{\prime \prime }(g,g^{\prime })`$, which, in general, is different from the factor systems $`m^{\prime }`$ and $`m^{\prime \prime }`$. Let $`D`$ be a product of two irreducible projective representations $`D^{nl,\mathrm{k}}`$ and $`D^{n^{\prime }l^{\prime },\mathrm{k}^{\prime }}`$. Then their product has a factor system
$$m(𝐑,𝐑^{\prime })=\omega _N^{Ln_2n_1^{\prime }}\qquad \mathrm{with}\qquad L=l\nu +l^{\prime }\nu ^{\prime }$$
(10)
so it corresponds to the representation $`D^{NL,\mathrm{K}}`$ ($`\mathrm{K}`$ has not been determined, but it depends on the irreducibility of the representation obtained). The character of this representation is
$$\chi (\mathrm{R})=\delta _{\eta _1,0}\delta _{\eta _2,0}\delta _{\eta _1^{\prime },0}\delta _{\eta _2^{\prime },0}nn^{\prime }\omega _N^{n(k_1\xi _1+k_2\xi _2)+n^{\prime }(k_1^{\prime }\xi _1^{\prime }+k_2^{\prime }\xi _2^{\prime })}$$
so it is nonzero only for $`n_i=x_im`$, where $`m=nn^{\prime }/\gamma `$, $`\gamma =\mathrm{gcd}(n,n^{\prime })`$, $`0\le x_i<\mu =N/m=\mathrm{gcd}(\nu ,\nu ^{\prime })`$. Substituting $`m`$ and $`\mu `$ into the above formula one obtains
$$\chi (\mathrm{R})=\delta _{\eta _1,0}\delta _{\eta _2,0}m\gamma \omega _\mu ^{(k_1+k_1^{\prime })x_1+(k_2+k_2^{\prime })x_2},\qquad \eta _i=n_i\phantom{0}(\mathrm{mod}m).$$
(11)
Since $`\nu /\mu =n^{\prime }/\gamma `$, $`L`$ in (10) can be written as
$$L=\mu \left(\frac{l\nu }{\mu }+\frac{l^{\prime }\nu ^{\prime }}{\mu }\right)=\mu \left(\frac{ln^{\prime }}{\gamma }+\frac{l^{\prime }n}{\gamma }\right)=\mu \lambda .$$
(12)
It seems that this determines a factor system $`m_m^{(\lambda )}`$. However, we cannot exclude the case in which $`\mathrm{gcd}(\lambda ,m)=\ell >1`$. Therefore, the product considered has to be decomposed into irreducible representations with a factor system $`m_M^{(\mathrm{\Lambda })}`$, where $`\mathrm{\Lambda }=\lambda /\ell `$ and $`M=m/\ell `$. The scalar product of the appropriate characters gives us the multiplicity of $`D^{M\mathrm{\Lambda },\mathrm{K}}`$ in the product considered, as follows:
$$f(D^{M\mathrm{\Lambda };\mathrm{K}},D^{nl,\mathrm{k}}D^{n^{\prime }l^{\prime },\mathrm{k}^{\prime }})=\frac{\gamma }{\ell }\delta _{K_1,k_1+k_1^{\prime }}\delta _{K_2,k_2+k_2^{\prime }}.$$
(13)
There are $`\ell ^2`$ such representations with $`K_i=(k_i+k_i^{\prime })\mathrm{mod}\mu `$.
The most interesting is the case when $`n=n^{\prime }`$ and $`l=l^{\prime }`$, since $`n`$ and $`l`$ are determined by the magnetic flux, the charge, and the crystal period $`N`$; hence such a case can be interpreted as a system of two identical particles moving in the same lattice and the same magnetic field (Florek 1997a). The resultant representation is $`n^2`$-dimensional and its character is equal to
$$\chi (\mathrm{R})=\delta _{n_1,x_1n}\delta _{n_2,x_2n}n^2\omega _\nu ^{(k_1+k_1^{\prime })x_1+(k_2+k_2^{\prime })x_2}$$
with $`0\le k_i,k_i^{\prime },x_i<\nu `$. The factor system is given by (10) as
$$m(\mathrm{R},\mathrm{R}^{\prime })=\omega _n^{2ln_2n_1^{\prime }}=\omega _n^{\lambda n_2n_1^{\prime }}$$
so we have to check $`\mathrm{gcd}(\lambda ,n)`$. At this point the cases of odd and even $`n`$ have to be considered separately. In the first case, $`\ell =\mathrm{gcd}(n,2l)=1`$ and the representation obtained decomposes into $`n`$ copies of the representation $`D^{n2l,\mathrm{K}}`$ with $`K_i=(k_i+k_i^{\prime })\mathrm{mod}\nu `$. In the second case, however, $`\ell =2`$ and $`M=\frac{1}{2}n`$, so the considered product decomposes into representations $`D^{{\scriptscriptstyle \frac{1}{2}}nl,\mathrm{K}}`$: there are four such representations with $`K_i=(k_i+k_i^{\prime })\mathrm{mod}\nu `$ and each of them appears $`\frac{1}{2}n`$ times. In both cases we have
$$\frac{2l}{n}=\frac{l}{(n/2)}=2\frac{l}{n}$$
so the new representations correspond to a system with the charge $`2q`$, see (9). However, an even $`n`$ in the second case yields a change of the magnetic periodicity from $`n`$ to $`\frac{1}{2}n`$ and four times as many magnetic cells. In a similar way, the coupling of $`d`$ representations $`D^{n1,\mathrm{k}^{(j)}}`$, $`j=1,2,\dots ,d`$, with $`n=dM`$, changes the magnetic period from $`n`$ to $`M`$ (and yields $`d^2`$ times as many magnetic cells), however not by modification of the magnetic field, but by multiplication of the charge by $`d`$.
The irreducible representations (7) are written as a product of a one-dimensional irreducible representation $`D^\mathrm{k}`$ of $`T_\nu `$, equation (8), and a projective one of $`T_n`$. It means that products of such representations can also be separated into a part describing the addition of the quasi-momenta $`\mathrm{k}`$, $`\mathrm{k}^{\prime }`$ and a second part corresponding to the addition of the co-divisors $`\nu `$ and $`\nu ^{\prime }`$ or, more precisely, of $`l\nu +l^{\prime }\nu ^{\prime }`$, see (10). However, the last addition can change the magnetic periodicity, determined by $`M`$ and $`\mathrm{\Lambda }`$ in (12) and (13), in a way depending on the arithmetic structure of $`N`$, $`n`$, $`n^{\prime }`$, $`l`$, and $`l^{\prime }`$. In the above example, the label $`M`$ (the size of magnetic cells) of the resultant representation was equal to or smaller than $`n=n^{\prime }`$. One can easily obtain that for $`N=12`$
$$D^{3,1;[1,0]}D^{6,1;[1,0]}=\underset{K_1,K_2=0,2,4}{\bigoplus }D^{2,1;[K_1,K_2]}.$$
In this case one particle may have the charge $`4e`$ and the second $`2e`$, so the two-particle system has the charge $`6e`$. We must say ‘may have’ since the condition (1) involves both the magnetic flux $`\varphi `$ and the charge $`q`$. The chosen values of the charges correspond to the fixed $`\varphi =1/N`$. Therefore, the charge of the first particle yields $`3\times 3`$ magnetic cells, and the second one $`6\times 6`$, whereas the two-particle system demands $`2\times 2`$ magnetic cells. On the other hand we have ($`N=12`$, as above)
$$D^{3,1;[1,0]}D^{4,1;[1,0]}=D^{12,7;[0,0]}$$
so $`M>n,n^{\prime }`$ and there is only one magnetic cell. Therefore, the addition of the quasi-momenta $`\mathrm{k}`$, $`\mathrm{k}^{\prime }`$ has to be modified to reflect all possible changes of the magnetic periodicity.
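The arithmetic behind these two examples can be packaged into a small helper (our illustration, not from the paper) that returns the labels $`(M,\mathrm{\Lambda })`$ of the irreducible constituents:

```python
from math import gcd

def product_label(N, n1, l1, n2, l2):
    """(M, Lambda) for the product of D^{n1 l1, k} and D^{n2 l2, k'}
    at crystal period N, following equations (10)-(13)."""
    nu1, nu2 = N // n1, N // n2        # co-divisors
    L = l1 * nu1 + l2 * nu2            # factor system of equation (10)
    mu = gcd(nu1, nu2)
    m = n1 * n2 // gcd(n1, n2)         # m = n n' / gamma, so N = mu * m
    lam = L // mu                      # equation (12): L = mu * lambda
    ell = gcd(lam, m)
    return m // ell, lam // ell

print(product_label(12, 3, 1, 6, 1))   # -> (2, 1), as in the first example
print(product_label(12, 3, 1, 4, 1))   # -> (12, 7), as in the second
```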
## 4 Final remarks and conclusions
The projective representations used by Brown (1964) and in this paper can be replaced, in an equivalent approach, by vector representations of central extensions (Zak 1964a, b, Florek 1994, 1996a, b). Zak assumed that a factor $`\omega `$ has to be represented by itself, and rejected representations in which $`\omega `$ is represented by $`\omega ^r`$. However, as long as $`r`$ is mutually prime with $`N`$, such a change is an isomorphism of (inequivalent) central extensions (Altmann 1977, Florek 1994). Within the approach presented here this fact is realized by the freedom that one has in choosing the relation between the charge $`q`$ and the index $`l`$, given by (9). For $`q=1`$, we can take not only $`L=1`$ but also any $`r`$ mutually prime with $`n`$. All important properties, e.g. addition of charges and charge periodicity, are unaffected: since $`\mathrm{gcd}(r,n)=1`$, then $`\{r,2r,\dots ,nr\}=\{1,2,\dots ,n\}`$, but the elements of the first set are obtained in a different order ($`zr`$ is calculated $`\mathrm{mod}n`$). In physical terms, this means that if we observe only magnetic or charge periodicity, we cannot distinguish $`H_1=2\pi \hbar c/Ne`$ from $`H_r=rH_1`$ if $`\mathrm{gcd}(r,N)=1`$; see (1) and (9). In fact, it should be said that the condition (1) is not imposed on $`H`$ or $`q`$ but on their product $`qH`$, and has to be written as
$$qH=\frac{2\pi }{N}\frac{\hbar c}{e}L\qquad \mathrm{or}\qquad q\varphi =\frac{L}{N}.$$
(14)
This means that a particle with the charge $`2e`$ can be described by the same representation $`D^{NL}`$ as a particle with the charge $`e`$ if the magnetic field is halved. On the other hand, very strong magnetic fields may lead to observations of a fractional charge, if the product $`qH`$ has to satisfy (14).
The introduction of projective representations in this paper has been based on the magnetic translation operators determined by Brown (1964), and the notion of Bloch electrons in an external magnetic field was used throughout this work. Hence, the concept of magnetic cells has appeared in a natural way. However, these representations can be applied to any problem in quantum mechanics in which a symmetry group $`G`$ appears and phase factors play an important role. For example, Divakaran and Rajagopal (1991) used them in the theory of superconducting layered materials (they also included many general remarks in their work). If we assume that projective representations correspond to energy levels (and so representation vectors correspond to states) of a one-particle system, then products of two (or more) representations have to correspond to two-particle (or many-particle, in the general case) systems. Not straying far from the physical problems discussed above, we can look at a two-dimensional electron gas in an external magnetic field. The fractional quantum Hall effect (Tsui et al 1982, Das Sarma and Pinczuk 1997) is still a subject to which much effort is being devoted by theorists and experimentalists, but it has been accepted that Coulomb interactions play a very important role in the explanation of the observed features (Shankar and Murthy 1997, Heinonen 1998). Therefore, it seems possible to apply the results presented above to such problems.
It should be underlined that products of projective representations are well known in mathematics (Backhouse and Bradley 1972, Altmann 1986). On the other hand, products of vector representations are commonly used in quantum physics to describe multi-particle states. It is shown in this paper that products of projective representations also have to be applied in many-body problems.
It is a pleasure to thank Professor G. Kamieniarz for carefully reading the manuscript and many helpful remarks. Partial support from the State Committee for Scientific Research (KBN) within the project No 8 T11F 027 16 is acknowledged.
# The role of imaginary vector potential in composite fermion pairing theory
## Abstract
We show that the imaginary vector potential causes a pair-breaking effect in the composite fermion theory, and discuss its irrelevance in the pairing state. The Hamiltonian for pairing states of composite fermions is proposed. The advantage of non-unitary transformations and the meaning of the non-Hermitian term are discussed.
Recently, the $`\nu =5/2`$ enigma has been reconsidered in Refs. . These numerical works support the conclusion that the $`\nu =5/2`$ state is the spin-polarized p-wave pairing state of composite fermions. On the other hand, the possibility of a p-wave pairing state was discussed at $`\nu =1/2`$ .
However, the conditions for the occurrence of composite fermion pairing are still a controversial problem. The pair-breaking effect in the composite fermion pairing state was discussed by Bonesteel . He argued that there is a pair-breaking effect stemming from fluctuations of the Chern-Simons gauge field and that this pair-breaking effect is strong enough in the short-range interaction case to break composite fermion pairs. On the other hand, the effect of the non-Hermitian term was unclear in the composite fermion pairing theory based on the Rajaraman-Sondhi transformation .
The effect of the non-Hermitian term has been discussed in localization-delocalization phenomena. For two-dimensional non-interacting electron systems with random potentials, it is believed that all states are localized. However, when we introduce an imaginary vector potential and increase its strength, extended states appear. The origin of the imaginary vector potential is clear for vortex depinning phenomena with columnar defects: it is the transverse magnetic field. However, the merits of considering non-Hermitian quantum mechanics in general systems are not clear.
In this paper, we show that the non-Hermitian term causes a pair-breaking effect in composite fermion pairing states. By solving the gap equations, it is shown that the ground state is a gapless pairing state in the absence of the Coulomb energy. However, this gapless pairing state is an unstable one because it lies at a relative maximum of the ground state energy. When we take into account the Coulomb interaction, it is not the ground state any more. The advantage of non-unitary transformations and the meaning of the non-Hermitian term are discussed. Moreover, we give a guide for applying non-unitary transformations to strongly correlated electron systems.
The Hamiltonian at $`\nu =1/m`$ ($`m`$ is an even integer) for composite fermions based on the Rajaraman-Sondhi non-unitary transformation is given by
$$H=H^0+V^H+V^{NH}+V^C,$$
(1)
where $`H^0=\sum _𝐤\frac{\hbar ^2k^2}{2m_b}\pi _𝐤\varphi _𝐤`$, $`V^C`$ is the Coulomb interaction, and
$$V^H=\int d^2𝐫\frac{e}{c}𝐣_{CF}(𝐫)\cdot \delta 𝐚,$$
(2)
$$V^{NH}=\int d^2𝐫\frac{e}{c}𝐣_{CF}(𝐫)\cdot \left(i\widehat{e}_z\times \delta 𝐚\right),$$
(3)
with $`\pi _𝐤`$ ($`\varphi _𝐤`$) being the creation (annihilation) operator of composite fermions with momentum $`𝐤`$, $`𝐣_{CF}`$ being the current operator of composite fermions, and $`\delta 𝐚`$ being given by
$$\delta a_\alpha =\frac{\varphi _0}{2\pi }m\int d^2𝐫^{\prime }\delta \rho _{CF}(𝐫^{\prime })\frac{\partial }{\partial r_\alpha }\mathrm{Im}\mathrm{log}(z-z^{\prime }).$$
(4)
Here $`\varphi _0=\frac{ch}{e}`$ is the flux quantum, $`z=x+iy`$ is the complex coordinate, and $`\delta \rho _{CF}(𝐫)`$ is equal to $`\rho _{CF}(𝐫)-\overline{\rho }`$ with $`\overline{\rho }`$ being the average density of particles. Substituting Eq. (4) into Eqs. (2) and (3) directly, we obtain the interaction between composite fermions in two-body form.
Equation (2) has the form of the minimal coupling between composite fermions and the Chern-Simons gauge field fluctuation. Therefore, $`V^H`$ causes an interaction like the Lorentz force. Considering the classical equation of motion for composite fermions, we see that a composite fermion passing by another composite fermion in the counterclockwise direction, viewed from the positive $`z`$-axis, feels an attractive force toward it because $`\delta \rho >0`$ around the composite particles. On the other hand, if a composite fermion passes by another composite fermion in the clockwise direction, a repulsive interaction is caused between them. For that reason, a pairing state with positive angular momentum is expected to exist. From this argument, we expect that if there is some pairing state, its symmetry is not s-wave. On the other hand, we find that $`V^{NH}`$ has no effect on the equation of motion.
The gap equations for pairing states of composite fermions are derived by a pairing approximation. At zero temperature, they are given by
$$\mathrm{\Delta }_𝐤=-\frac{1}{2\mathrm{\Omega }}\underset{𝐤^{\prime }(\ne 𝐤)}{\sum }V_{𝐤𝐤^{\prime }}\frac{\mathrm{\Delta }_{𝐤^{\prime }}}{E_{𝐤^{\prime }}},$$
(5)
$$\overline{\mathrm{\Delta }}_𝐤=-\frac{1}{2\mathrm{\Omega }}\underset{𝐤^{\prime }(\ne 𝐤)}{\sum }V_{𝐤^{\prime }𝐤}\frac{\overline{\mathrm{\Delta }}_{𝐤^{\prime }}}{E_{𝐤^{\prime }}},$$
(6)
where $`V_{𝐤𝐤^{\prime }}`$ is the interaction for pairs with zero total momentum. The analysis of these gap equations with $`V^H`$ alone as the pairing interaction was done by Greiter, Wen, and Wilczek . They showed that the ground state is the p-wave pairing state of composite fermions. In their analysis, the usual Chern-Simons singular gauge transformation was performed. The quadratic term in the Chern-Simons gauge field was neglected and its effect was not clear. As we will show later, it has a close relation to the non-Hermitian term $`V^{NH}`$.
Now we take into account the non-Hermitian term $`V^{NH}`$ in the gap equations. Setting $`\mathrm{\Delta }_𝐤=\mathrm{\Delta }_k\mathrm{e}^{i\ell \theta _k}`$ and $`\overline{\mathrm{\Delta }}_𝐤=\overline{\mathrm{\Delta }}_k\mathrm{e}^{-i\ell \theta _k}`$ in Eqs. (5) and (6) for the $`\ell `$-wave pairing state ($`\mathrm{tan}\theta _k=\frac{k_x}{k_y}`$), we obtain
$$\mathrm{\Delta }_k=\frac{m}{M}\int _0^kdk^{\prime }\frac{k^{\prime }\mathrm{\Delta }_{k^{\prime }}}{E_{k^{\prime }}}\left(\frac{k^{\prime }}{k}\right)^{\ell },$$
(7)
$$\overline{\mathrm{\Delta }}_k=\frac{m}{M}\int _k^{\mathrm{\infty }}dk^{\prime }\frac{k^{\prime }\overline{\mathrm{\Delta }}_{k^{\prime }}}{E_{k^{\prime }}}\left(\frac{k}{k^{\prime }}\right)^{\ell },$$
(8)
where we have replaced the band mass with the effective mass $`M`$ of composite fermions. These gap equations are solved exactly and the solution is given by
$$\mathrm{\Delta }_k=\begin{cases}0&\text{for }k<k_F,\\ \mathrm{\Delta }ϵ_F\left[\left(\frac{k_F}{k}\right)^2-1\right]^m\left(\frac{k}{k_F}\right)^{-\ell }&\text{for }k>k_F,\end{cases}$$
(11)
$$\overline{\mathrm{\Delta }}_k=\begin{cases}\overline{\mathrm{\Delta }}ϵ_F\left(\frac{k}{k_F}\right)^{\ell }\left[1-\left(\frac{k}{k_F}\right)^2\right]^m&\text{for }k<k_F,\\ 0&\text{for }k>k_F,\end{cases}$$
(14)
where $`\mathrm{\Delta }`$ and $`\overline{\mathrm{\Delta }}`$ are constants, and $`ϵ_F`$ and $`k_F`$ are the Fermi energy and the Fermi wave vector for composite fermions, respectively. As a remarkable fact, this state is a gapless pairing state because $`\overline{\mathrm{\Delta }}_k\mathrm{\Delta }_k=0`$ for any $`k`$ but neither $`\overline{\mathrm{\Delta }}_k`$ nor $`\mathrm{\Delta }_k`$ is identically zero. Therefore, we understand that the effect of $`V^{NH}`$ is a pair-breaking effect. The conclusion that all of the pairing states are gapless if we neglect the Coulomb interaction is a natural one. In the absence of the Coulomb interaction, the original problem is the free electron gas in a magnetic field. In this situation, it is not an appropriate starting point to take an approximation to the Hamiltonian obtained by either the Chern-Simons singular gauge transformation or the Rajaraman-Sondhi transformation, because the two-body correlation effect, which is taken into account by such transformations, is absent. We do not expect any pairing correlation in such systems.
The pair-breaking effect caused by $`V^{NH}`$ has a close relation to the effect of an imaginary vector potential in localization-delocalization phenomena. From Eq. (3), we see that $`V^{NH}`$ corresponds to an imaginary vector potential $`i\widehat{e}_z\times \delta 𝐚`$. The effect of the imaginary vector potential was discussed in localization-delocalization phenomena. In the absence of the imaginary vector potential, it is believed that all eigenstates are localized in two-dimensional non-interacting systems. However, if we introduce an imaginary vector potential and increase its strength, extended states appear. In the composite fermion pairing theory, as we have seen above, the ground state of the system is the pairing state in the absence of the imaginary vector potential and the Coulomb interaction. When we take into account the imaginary vector potential, which is fixed, contrary to the vortex depinning phenomena , the gap of the pairing state goes to zero. Therefore, the imaginary vector potential in the composite fermion pairing state has a similar effect as in localization-delocalization phenomena.
As discussed above, the ground state is a gapless pairing state in the absence of the Coulomb interaction. However, this gapless pairing state is not stable because, from the discussion below, we see that it lies at a relative maximum of the ground state energy. The variation of the ground state energy with respect to $`\overline{\mathrm{\Delta }}_k\mathrm{\Delta }_k`$ is given by
$$H_{\overline{\mathrm{\Delta }}_k\mathrm{\Delta }_k+\delta \left(\overline{\mathrm{\Delta }}_k\mathrm{\Delta }_k\right)}-H_{\overline{\mathrm{\Delta }}_k\mathrm{\Delta }_k}=-\frac{1}{8}\underset{𝐤}{\sum }\frac{\overline{\mathrm{\Delta }}_k\mathrm{\Delta }_k}{\left(\overline{\mathrm{\Delta }}_k\mathrm{\Delta }_k+\xi _k^2\right)^{3/2}}\delta \left(\overline{\mathrm{\Delta }}_k\mathrm{\Delta }_k\right).$$
(16)
The factor $`\overline{\mathrm{\Delta }}_k\mathrm{\Delta }_k/\left(\overline{\mathrm{\Delta }}_k\mathrm{\Delta }_k+\xi _k^2\right)^{3/2}`$ is not lower than zero. Therefore, the function $`\overline{\mathrm{\Delta }}_k\mathrm{\Delta }_k\equiv 0`$ is a relative maximum of $`H`$. It means that any perturbation will destabilize the gapless pairing state. Hence, when we take into account the Coulomb interaction, this gapless pairing state is not stable. The effect of $`V^{NH}`$ is a pair-breaking effect, but it is irrelevant for the pairing state.
The irrelevance of $`V^{NH}`$ is understood by considering the meaning of the Rajaraman-Sondhi non-unitary transformation. It fully takes into account the most fundamental correlation in quantum Hall systems. Because of the Coulomb interaction between electrons and the external magnetic field, any pair of electrons has a correlation with non-zero relative angular momentum as its short-range correlation. The Rajaraman-Sondhi transformation fully takes account of it. Then, what is the meaning of a non-Hermitian term resulting from the Rajaraman-Sondhi transformation? Before discussing it, we consider the simplest problem. Suppose the harmonic oscillator in one dimension: $`H=-\frac{1}{2}\frac{d^2}{dx^2}+\frac{1}{2}x^2`$. As is well known, the ground state is $`\psi _0(x)\propto \mathrm{exp}(-\frac{x^2}{2})`$. Let us apply a non-unitary transformation $`U=\psi _0(x)`$ via $`\overline{U}HU`$, where $`\overline{U}=\mathrm{exp}(\frac{x^2}{2})`$. Of course the resulting Hamiltonian has a non-Hermitian term. However, it only affects excited states. For the ground state, it is not relevant. Similarly, the effect of $`V^{NH}`$ is only relevant for motions which deviate from the above fundamental correlation. However, such motions are high-energy modes in the presence of the Coulomb interaction. If it were relevant, the Hamiltonian obtained by the Rajaraman-Sondhi transformation, as well as by the Chern-Simons singular gauge transformation, would not be an effective one. As far as the above correlation is the most fundamental short-range correlation, $`V^{NH}`$ is irrelevant. The fact that the above correlation effect is the most fundamental one is demonstrated for the bilayer quantum Hall systems .
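For concreteness (this intermediate step is not written out above), the conjugation can be carried out explicitly, giving
$$\overline{U}HU=-\frac{1}{2}\frac{d^2}{dx^2}+x\frac{d}{dx}+\frac{1}{2},$$
so the transformed ground state is a constant with the unchanged eigenvalue $`1/2`$, while the first-derivative term, which is the non-Hermitian piece, acts nontrivially only on excited states.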
From the discussion above, we get a guide for the application of non-unitary transformations. The correlation effect in condensed matter physics is divided into two parts: short-range correlation effects and long-range correlation effects. The difficulty of dealing with strongly correlated electron systems theoretically lies in the fact that we do not know how to take into account the former. However, the short-range correlation originates from the two-body correlation, and it is found by considering the two-body problem. Therefore, the steps to seek the appropriate transformation are the following. First, we analyze the two-body correlation of the system. Next, we invent a transformation which properly takes account of it. When we use non-unitary transformations in this step, the non-Hermitian term is not relevant as far as low-energy excitations are concerned, and we can neglect it. Finally, we obtain an effective Hamiltonian of the system. Of course, the irrelevance of the non-Hermitian term must be shown in a self-consistent way.
Now we remark on the relation to the usual Chern-Simons composite fermion theory. In the Chern-Simons formalism, the term which corresponds to $`V^{NH}`$ is a three-body interaction term. When we perform the usual Chern-Simons singular gauge transformation, the Hamiltonian is given by
$$H=H_{CS}^0+V_{CS}^H+V^3,$$
(17)
where $`H_{CS}^0`$ and $`V_{CS}^H`$ are given by replacing $`\pi _𝐤`$ in Eq. (1) with $`\varphi _𝐤^{\dagger }`$, the creation operator of a composite particle with momentum $`𝐤`$, and $`V^3`$ is the three-body interaction term for composite particles. In the Chern-Simons gauge theory of the fractional quantum Hall effect at $`\nu =1/m`$, where $`m`$ is an odd integer, $`V^3`$ is irrelevant when the condensate of composite bosons occurs. We can apply this argument to the composite fermion pairing theory. The pairings of composite fermions play the same role as composite bosons. The power of the canonical dimension of the coupling constant for $`V^3`$ is half of that for composite bosons, but its sign is unchanged. Therefore, when the condensate of pairings of composite fermions occurs, $`V^3`$ is no longer relevant.
The effect of $`V^3`$ was discussed by Bonesteel as a current-current correlation effect. He concluded that it causes a pair-breaking effect. The conclusion is similar to ours; however, his argument that for the short-range interaction this pair-breaking effect is strong enough to destabilize the pairing state is not correct. The pair-breaking effect caused by $`V^3`$ seems to exist, but it is irrelevant for the pairing state. If the pairing state is destabilized, the pair-breaking effect is caused not by $`V^3`$ but by the direct repulsive interaction between composite fermions or by impurity potentials.
Here we remark on the Hamiltonian for the composite fermion theory. When we deal with the pairing of composite fermions, we can neglect $`V^{NH}`$ as mentioned above. Therefore, it is enough to consider the Hamiltonian
$$H=H^0+V^H+V^C,$$
(18)
for the pairing state of composite fermions. However, when we deal with non-pairing states of composite fermions, we cannot expect the irrelevance of $`V^{NH}`$ any more. We must properly take into account the effect of $`V^{NH}`$, or of $`V^3`$, for a compressible state of composite fermions.
This work is supported by Research Fellowships of the Japan Society for the Promotion of Science for Young Scientists.
## 1 Introduction
One of the aspects of Quantum Gravity that has been most actively investigated is the possibility that there be a minimum uncertainty for the measurement of distances. The simplest and best understood proposal is the one of a “minimum length”
$$\mathrm{\Delta }x\ge L_{min},$$
(1.1)
which fits well the expectation of certain studies of measurability in quantum gravity. Eq. (1.1) might also hold in critical string theory as suggested by analyses of string collisions at Planckian energies, which were found to be characterized by the following modified uncertainty relation
$$\mathrm{\Delta }x\ge \frac{\hbar }{\mathrm{\Delta }p}+\alpha G\mathrm{\Delta }p,$$
(1.2)
where $`G=c^3l_p^2/\hbar `$ is the gravitational coupling (Newton) constant, $`c`$ and $`l_p`$ are the speed-of-light and Planck-length constants respectively, and $`\alpha `$ is a constant related to the string tension (Regge slope). Clearly Eq. (1.2) implies $`L_{min}\sim \sqrt{\hbar \alpha G}`$.
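Minimizing the right-hand side of Eq. (1.2) over $`\mathrm{\Delta }p`$ makes the origin of this scale explicit:
$$\frac{d}{d\mathrm{\Delta }p}\left(\frac{\hbar }{\mathrm{\Delta }p}+\alpha G\mathrm{\Delta }p\right)=0\;\Rightarrow \;\mathrm{\Delta }p=\sqrt{\hbar /(\alpha G)},\qquad \mathrm{\Delta }x_{min}=2\sqrt{\hbar \alpha G},$$
so that $`L_{min}\sim \sqrt{\hbar \alpha G}`$ up to a numerical factor of order one.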
While modified uncertainty relations are not necessarily associated to quantum groups , it is interesting that quantum-group descriptions are often available ; in particular, in Ref. it was shown that the relations (1.1)-(1.2) can be associated to $`SU_q(n)`$ covariance.
The minimum-length scenario (1.1) would already require that the conceptual framework of Quantum Gravity be significantly different from the one of “ordinary” (non-gravitational) Quantum Mechanics<sup>3</sup><sup>3</sup>3In particular, in Ref. it was shown that when gravitational effects are taken into account in a (quantum) measurement process then the masses of the probes used in the measurement induce a change in the space-time metric and this is associated to the emergence of nonlocality. The nature of this gravitationally-induced nonlocality suggests a modification of the fundamental commutators.. However, as emphasized in Ref. , even more dramatic consequences for the measurability of distances appear to emerge from the analysis of additional contributions to the uncertainty relations which are associated to the fact that the limit of “classical” (infinitely massive) devices might not be accessible in Quantum Gravity. In particular, the analysis in Ref. has led to the proposal of an alternative to the bound (1.1). By taking into account both the quantum nature of the agents involved in the measurement and the gravitational effects associated to the devices, one finds that the measurability of distances is bound by a quantity that (as needed for the decoherence mechanism discussed in Ref. ) grows with the time required by the measurement procedure
$$\mathrm{min}\left[\mathrm{\Delta }L\right]\ge l_p\sqrt{\frac{cT}{s}}\sim l_p\sqrt{\frac{L}{s}},$$
(1.3)
where $`L`$ is the distance being measured, $`s`$ is a length scale characterizing the spatial extension of the devices (e.g., clocks) used in the measurement, $`T`$ is the time needed to complete the procedure of measuring $`L`$, and on the right-hand-side we used the fact that, assuming the measurement procedure uses massless probes, one has typically $`TL`$. Notice that for all acceptable values of $`s`$ (i.e. $`s<L`$) the bound (1.3) is more stringent than (1.1); this is a direct consequence of the fact that the analyses leading to (1.1) had implicitly relied on the availability of ideal classical devices in the measurement procedure.
While, as mentioned above, critical string theories provide a framework for the bound (1.1), it appears that noncritical string theories might provide a framework for the bound (1.3). In particular, within “Liouville” noncritical string theories, in which the target time is identified with the Liouville mode , the nature of the dynamics of the light probes exchanged in a typical procedure of measurement of a distance was shown to lead to a measurability bound of type (1.3).
An interesting problem is the one of finding a quantum-group (and quantum-Lie-algebra) framework for (1.3), just like Ref. has provided a quantum-group framework for (1.1)-(1.2). The notion of quantum group as a Hopf algebra permits to consider deformed symmetries; in fact, the Hopf algebra axioms provide simultaneously an algebraic generalization of the definition of Lie group as well as of Lie algebra. As exemplified by the formulae in the following section, the phase space containing the coordinate and momentum sectors can be described in the quantum-deformed case as a semidirect product of two dual Hopf algebras describing the coordinates and momentum sectors. Such a definition of quantum phase space has been first proposed by Majid , and it is endoved with the property that in the undeformed case (coordinates and momenta described by Abelian Hopf algebra with primitive coproducts) one obtains the standard quantum mechanical Heisenberg commutation relations.<sup>4</sup><sup>4</sup>4In the literature sometimes the semidirect product construction for two dual Hopf algebras describing respectively quantum Lie group and quantum Lie algebra is called “Heisenberg double” (see, e.g., Ref. ). The so-called $`\kappa `$-deformations provide an example of this type of quantum deformations of relativistic symmetries, and one of us recently argued that $`\kappa `$-deformed symmetries might provide an algebraic abstraction of the measurability bound (1.3). The analysis reported in was somewhat preliminary since only the coordinate sector was considered, but the bound (1.3) emerged rather compellingly, as a direct consequence of the noncommuting space-time coordinates of $`\kappa `$-deformed Minkowski space. Encouraged by the findings of Ref., in this paper we explore further the relation between $`\kappa `$-Poincaré and (1.3); specifically, we extend the analysis of Ref. from the confines of the space-time coordinate sector to the full structure of the $`\kappa `$-deformed phase space. We primarily consider the $`\kappa `$-deformed Poincaré symmetries in the bicrossproduct basis , which appears to be a very natural framework for the quantum deformations of semidirect product algebras, and outside of the coordinate sector we identify two structures which could affect the analysis of Ref.: the $`\kappa `$-deformed mass-shell condition, which is associated to the Casimir and suggests a modification of the propagation of the light probes exchanged during measurement, and the nontrivial commutation relation between three-momenta and quantum time coordinate, which we find to have important consequences for the analysis of the propagation of heavy probes exchanged during measurement. As discussed below, our analysis uncovers new nonnegligible contributions to the bound on the measurability of distances. These contributions are however comparable to the one identified in Ref., and therefore the order of magnitude of the effect discussed in Ref. is confirmed by our analysis. These findings provide additional evidence of a relation between $`\kappa `$-Poincaré and the bound (1.3), and thereby contribute to the development of a physical interpretation of this class of deformations. 
We hope that this will provide further motivation for experimentalists to investigate the theoretical framework here advocated, especially exploiting the recent remarkable discoveries in the phenomenology of gamma-ray bursts that allow a direct test<sup>5</sup><sup>5</sup>5Previous analyses attempting to bound the dimensionful parameter $`\kappa `$ only probed values of $`\kappa `$ that were several orders of magnitude below the Planck scale (see, e.g., Ref. ), but now that we have definitive evidence that gamma-ray bursts are at cosmological distances we can expect to probe values of $`\kappa `$ all the way up to the Planck scale. of some of the predictions of the $`\kappa `$ deformations of Poincaré symmetries.
## 2 $`\kappa `$-deformed quantum relativistic phase space
The standard form of the covariant four-dimensional Heisenberg commutation relations, describing the quantum-mechanical covariant phase space, looks as follows:
$$[x_\mu ,p_\nu ]=i\hbar g_{\mu \nu },\qquad g_{\mu \nu }=\mathrm{diag}(1,-1,-1,-1).$$
(2.1)
The space-time coordinates $`x_\mu `$ ($`\mu =0,1,2,3`$) can be identified with the translation sector of the Poincaré group, and the fourmomenta $`p_\mu `$ ($`\mu =0,1,2,3`$) are given by the translation generators of the Poincaré algebra. In considering quantum deformations of relativistic symmetries as describing the modification of space-time structure one is led to the study of the possible quantum Poincaré groups.<sup>6</sup><sup>6</sup>6We take into consideration here only the genuine 10-generator quantum deformations of $`D=4`$ Poincaré symmetries. In particular, the “standard” $`q`$-deformations are not considered. These $`q`$-deformations require adding an eleventh (dilatation) generator, i.e. one deals with the dilatation extended Poincaré algebra . In such a case the corresponding quantum phase space is much more complicated (see, e.g., ), and the deformation parameter is dimensionless, rendering difficult the physical separation between the ordinary regime of commutative space-time coordinates and the short-distance regime in which non-commutativity sets in. The classification of quantum deformations of $`D=4`$ Poincaré groups in the framework of Hopf algebras was given by Podleś and Woronowicz (; see also ) and provides the most general class of noncommutative space-time coordinates $`\widehat{x}_\nu `$ allowed by the quantum-group formalism. If we assume that the quantum deformation does not affect the nonrelativistic kinematics, i.e. we keep the nonrelativistic $`O(3)`$ rotations classical and preserve $`O(3)`$ covariance, the only consistent class of noncommuting space-time coordinates is described by the relations of the $`\kappa `$-deformed Minkowski space with commuting classical space coordinates. In order to describe the relativistic phase space we start with the deformed Hopf algebra of fourmomenta $`\widehat{p}_\mu `$ written as follows
$`[\widehat{p}_0,\widehat{p}_k]`$ $`=`$ $`0`$ (2.2)
$`\mathrm{\Delta }(\widehat{p}_0)`$ $`=`$ $`\widehat{p}_0\otimes 1+1\otimes \widehat{p}_0`$
$`\mathrm{\Delta }(\widehat{p}_k)`$ $`=`$ $`\widehat{p}_k\otimes e^{\alpha \widehat{p}_0}+e^{\beta \widehat{p}_0}\otimes \widehat{p}_k`$ (2.3)
with antipode and counit given by
$$S(\widehat{p}_k)=-e^{-(\alpha +\beta )\widehat{p}_0}\widehat{p}_k,\;\;S(\widehat{p}_0)=-\widehat{p}_0,\;\;ϵ(\widehat{p}_\mu )=0.$$
(2.4)
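As a consistency check (not needed in what follows), Eq. (2.4) indeed satisfies the Hopf-algebra antipode axiom $`m(S\otimes \mathrm{id})\mathrm{\Delta }=\eta \circ ϵ`$; for example, using $`S(e^{\beta \widehat{p}_0})=e^{-\beta \widehat{p}_0}`$ and the commutativity of the momenta,
$$m(S\otimes \mathrm{id})\mathrm{\Delta }(\widehat{p}_k)=S(\widehat{p}_k)e^{\alpha \widehat{p}_0}+e^{-\beta \widehat{p}_0}\widehat{p}_k=-e^{-(\alpha +\beta )\widehat{p}_0}\widehat{p}_ke^{\alpha \widehat{p}_0}+e^{-\beta \widehat{p}_0}\widehat{p}_k=0=ϵ(\widehat{p}_k)1.$$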
Using the duality relations involving the fundamental constant $`\hbar `$ (Planck’s constant)
$$\langle \widehat{x}_\mu ,\widehat{p}_\nu \rangle =i\hbar g_{\mu \nu },\qquad g_{\mu \nu }=\mathrm{diag}(1,-1,-1,-1)$$
(2.5)
we obtain the noncommutative deformed configuration space $`𝒳`$ as a Hopf algebra with the following algebra and coalgebra structure
$`[\widehat{x}_0,\widehat{x}_k]`$ $`=`$ $`i\hbar (\beta -\alpha )\widehat{x}_k,[\widehat{x}_k,\widehat{x}_l]=0,`$ (2.6)
$`\mathrm{\Delta }(\widehat{x}_\mu )`$ $`=`$ $`\widehat{x}_\mu \otimes 1+1\otimes \widehat{x}_\mu ,`$ (2.7a)
$`S(\widehat{x}_\mu )`$ $`=`$ $`-\widehat{x}_\mu ,ϵ(\widehat{x}_\mu )=0.`$ (2.7b)
The deformed phase space can be considered as the vector space $`𝒳\otimes 𝒫`$ with the product (see )
$$(x\otimes p)(\stackrel{~}{x}\otimes \stackrel{~}{p})=x(p_{(1)}\triangleright \stackrel{~}{x})\otimes p_{(2)}\stackrel{~}{p}$$
(2.8)
where the left action is given by
$$p\triangleright x=\langle p,x_{(2)}\rangle x_{(1)}$$
(2.9)
The product (2.8) can be rewritten as the commutators between coordinates and momenta by using the obvious isomorphism $`x\equiv x\otimes 1`$, $`p\equiv 1\otimes p`$. This procedure provides the following commutation relations (see also )
$$\begin{array}{cccccc}\hfill [\widehat{x}_k,\widehat{p}_l]& =& i\hbar \delta _{kl}e^{\alpha \widehat{p}_0},\hfill & \hfill [\widehat{x}_k,\widehat{p}_0]& =& 0,\hfill \\ \hfill [\widehat{x}_0,\widehat{p}_k]& =& i\hbar \beta \widehat{p}_k,\hfill & \hfill [\widehat{x}_0,\widehat{p}_0]& =& i\hbar .\hfill \end{array}$$
(2.10)
The set of relations $`(2.2)`$, $`(2.6)`$ and $`(2.10)`$ describes the deformed relativistic quantum phase space.
Introducing the dispersion of the observable $`a`$ in the quantum mechanical sense by
$$\mathrm{\Delta }(a)=\sqrt{\langle a^2\rangle -\langle a\rangle ^2}$$
(2.11)
we have
$$\mathrm{\Delta }(a)\mathrm{\Delta }(b)\ge \frac{1}{2}|\langle c\rangle |\text{ where }c=[a,b]$$
(2.12)
We obtain deformed uncertainty relations in the form
$`\mathrm{\Delta }\widehat{x}_0\mathrm{\Delta }\widehat{x}_k`$ $`\ge `$ $`{\displaystyle \frac{\hbar }{2}}|\beta -\alpha |\,|\langle \widehat{x}_k\rangle |`$ (2.13a)
$`\mathrm{\Delta }\widehat{p}_k\mathrm{\Delta }\widehat{x}_l`$ $`\ge `$ $`{\displaystyle \frac{\hbar }{2}}\delta _{kl}\langle e^{\alpha \widehat{p}_0}\rangle `$ (2.13b)
$`\mathrm{\Delta }\widehat{p}_0\mathrm{\Delta }\widehat{x}_0`$ $`\ge `$ $`{\displaystyle \frac{\hbar }{2}}`$ (2.13c)
$`\mathrm{\Delta }\widehat{p}_k\mathrm{\Delta }\widehat{x}_0`$ $`\ge `$ $`{\displaystyle \frac{\hbar }{2}}|\beta |\,|\langle \widehat{p}_k\rangle |`$ (2.13d)
Depending on the choice of the parameters $`\alpha `$ and $`\beta `$ we can distinguish the following cases ($`c`$ the speed of light, $`\kappa `$ the mass-like deformation parameter):
i) $`\alpha =\beta =0`$ ; standard form of nondeformed covariant phase space,
ii) $`\alpha =\beta `$ ; trivially deformed phase space with commuting configuration space,
iii) $`\alpha =\beta =\frac{1}{2\kappa c}`$ ; $`\kappa `$-deformed phase space in the standard basis (see ),
iv) $`\alpha =0,\beta =\frac{1}{\kappa c}`$ ; $`\kappa `$-deformed phase space in the bicrossproduct basis (see ),
v) $`\alpha =\frac{1}{\kappa c},\beta =0`$ ; $`\kappa `$-deformed phase space in the bicrossproduct basis (see ),
vi) $`\alpha =0,\beta =\frac{1}{\kappa c}`$ ; $`\kappa `$-deformed phase space in the bicrossproduct basis (the case (v)) with transposed coproduct.
The Quantum-Gravity arguments advocated in the next sections make contact with the bicrossproduct basis, and therefore (also for definiteness) in the following we focus on the case (vi). The set of relations $`(2.2)`$, $`(2.6)`$ and $`(2.10)`$ for our choice of parameters (vi) becomes the following
$`[\widehat{p}_0,\widehat{p}_k]`$ $`=`$ $`0`$ (2.14a)
$`[\widehat{x}_0,\widehat{x}_k]`$ $`=`$ $`{\displaystyle \frac{i\hbar }{\kappa c}}\widehat{x}_k,[\widehat{x}_k,\widehat{x}_l]=0`$ (2.14b)
$`[\widehat{x}_k,\widehat{p}_l]`$ $`=`$ $`i\hbar \delta _{kl},[\widehat{x}_k,\widehat{p}_0]=0`$ (2.14c)
$`[\widehat{x}_0,\widehat{p}_k]`$ $`=`$ $`{\displaystyle \frac{i\hbar }{\kappa c}}\widehat{p}_k,[\widehat{x}_0,\widehat{p}_0]=i\hbar `$ (2.14d)
and are $`\kappa `$-Poincaré covariant <sup>7</sup><sup>7</sup>7The $`\kappa `$-covariance of the relations (2.14b) was first shown in Ref. . The $`\kappa `$-covariance of the whole quantum $`\kappa `$-deformed Heisenberg algebra follows from the general properties of the semidirect product, defined by the relations (2.14b) and (2.14c-d). (see, e.g., ).
The modified covariant Heisenberg uncertainty relations follow from the relations $`(2.14)`$; we therefore obtain the $`\kappa `$-deformed uncertainty relations
$`\mathrm{\Delta }\widehat{t}\mathrm{\Delta }\widehat{x}_k`$ $`\ge `$ $`{\displaystyle \frac{\hbar }{2\kappa c^2}}|\langle \widehat{x}_k\rangle |={\displaystyle \frac{1}{2}}{\displaystyle \frac{l_\kappa }{c}}|\langle \widehat{x}_k\rangle |,`$ (2.15a)
$`\mathrm{\Delta }\widehat{p}_k\mathrm{\Delta }\widehat{x}_l`$ $`\ge `$ $`{\displaystyle \frac{1}{2}}\hbar \delta _{kl},`$ (2.15b)
$`\mathrm{\Delta }\widehat{E}\mathrm{\Delta }\widehat{t}`$ $`\ge `$ $`{\displaystyle \frac{1}{2}}\hbar ,`$ (2.15c)
$`\mathrm{\Delta }\widehat{p}_k\mathrm{\Delta }\widehat{t}`$ $`\ge `$ $`{\displaystyle \frac{\hbar }{2\kappa c^2}}|\langle \widehat{p}_k\rangle |={\displaystyle \frac{1}{2}}{\displaystyle \frac{l_\kappa }{c}}\left|\langle \widehat{p}_k\rangle \right|.`$ (2.15d)
where $`l_\kappa =\frac{\hbar }{\kappa c}`$ describes the fundamental length at which the time variable should already be considered noncommutative. In comparison with the discussion in Ref., which only considered the coordinate sector, the significant new element that emerged in our present analysis is the relation (2.15d). Interestingly, multiplying the three relations (2.15a), (2.15b) and (2.15d) one obtains
$`(\mathrm{\Delta }\widehat{t})^2(\mathrm{\Delta }\widehat{x}_l\mathrm{\Delta }\widehat{p}_l)^2`$ $`\ge `$ $`{\displaystyle \frac{\hbar }{8}}{\displaystyle \frac{l_\kappa ^2}{c^2}}\left|\langle \widehat{x}_l\rangle \langle \widehat{p}_l\rangle \right|`$ (2.16)
(where no sum over the index $`l`$ is to be understood). This suggests that a wave packet with minimal standard ($`\mathrm{\Delta }x\mathrm{\Delta }p`$) uncertainty has the largest uncertainty in the localization of time. (In ordinary quantum mechanics $`l_\kappa =0`$ and there is no such correlation.)
It is also interesting to consider the relation (2.15d) under the assumption that the three-momenta $`\widehat{p}_k`$ can be expressed by a general formula $`\widehat{p}_i=ℱ(v^2)v_i`$, in which case $`\mathrm{\Delta }\widehat{p}_i=ℱ_{ij}\mathrm{\Delta }v_j`$ with $`ℱ_{ij}=ℱ[\delta _{ij}+2v_iv_j(\mathrm{ln}ℱ)^{\prime }]`$. Then (2.15d) implies
$`\mathrm{\Delta }\widehat{t}\mathrm{\Delta }v_i`$ $`\ge `$ $`{\displaystyle \frac{l_\kappa }{c}}ℱ(v)ℱ_{ij}^{-1}(v)v_j`$ (2.17)
Because in part of our measurement analysis we shall consider light probes, we now discuss the modification of the kinematics of $`\kappa `$-deformed photons. We shall assume that the generators of the $`\kappa `$-deformed Poincaré algebra in the bicrossproduct basis describe the “physical” generators of space-time symmetries. In the bicrossproduct basis the $`\kappa `$-deformed mass Casimir takes the form
$$C_2^\kappa =\frac{1}{c^2}\vec{P}^2e^{-\frac{P_0}{\kappa c}}-(2\kappa \mathrm{sinh}\frac{P_0}{2\kappa c})^2=-M^2,$$
(2.18)
where $`P_\mu `$ are the generators of space-time translations and $`M`$ denotes the $`\kappa `$-invariant mass parameter. For $`M=0`$ ($`\kappa `$-deformed photons) from (2.18) one obtains that (we identify $`P_\mu \equiv \widehat{p}_\mu `$)
$$\widehat{p}_0=\kappa c\mathrm{ln}(1+\frac{|\vec{\widehat{p}}|}{\kappa c})=|\vec{\widehat{p}}|-\frac{|\vec{\widehat{p}}|^2}{2\kappa c}+O(\frac{1}{\kappa ^2})$$
(2.19)
and in particular the velocity formula for massless $`\kappa `$-deformed quanta looks as follows<sup>8</sup><sup>8</sup>8The relation (2.20a) is valid as a consequence of the Hamiltonian equation of motion $`\dot{x}_i=\partial H/\partial p_i-(x_i/\kappa )\partial H/\partial x_0`$. \[See Ref. , Eq. (4.22).\] For the $`\kappa `$-photon here considered, since $`H=H(p_i)`$, the velocities are classical ($`[v_i,v_j]=0`$). ($`E=c\widehat{p}_0`$)
$$v_i=\frac{\partial E}{\partial \widehat{p}_i}=\frac{c}{1+\frac{|\vec{\widehat{p}}|}{\kappa c}}\frac{\widehat{p}_i}{|\vec{\widehat{p}}|}$$
(2.20a)
or
$$v=|\vec{v}|=\frac{c}{1+\frac{|\vec{\widehat{p}}|}{\kappa c}}=c-\frac{|\vec{\widehat{p}}|}{\kappa }+O(\frac{1}{\kappa ^2})$$
(2.20b)
The inverse formula, which can be inserted in (2.17), looks as follows
$$\widehat{p}_i=\kappa \frac{c}{v}(\frac{c}{v}-1)v_i$$
(2.21)
and it is linear in the deformation parameter $`\kappa `$.
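As a simple numerical cross-check of Eqs. (2.19)-(2.21), one can verify that (2.21) inverts (2.20b) and that $`v=\partial E/\partial |\widehat{p}|`$. The following short script (an illustration added here, with $`c=\kappa =1`$ chosen purely for convenience) does this:

```python
import numpy as np

# Illustrative check of Eqs. (2.19)-(2.21), in units c = kappa = 1
# (an assumption made only for this snippet).
c = kappa = 1.0

def p0(p):                # Eq. (2.19): p0 = kappa*c*ln(1 + p/(kappa*c))
    return kappa * c * np.log(1.0 + p / (kappa * c))

def speed(p):             # Eq. (2.20b)
    return c / (1.0 + p / (kappa * c))

def momentum(v):          # Eq. (2.21), modulus: p = kappa*(c/v)*(c/v - 1)*v
    return kappa * (c / v) * (c / v - 1.0) * v

p = 0.3
v = speed(p)
assert np.isclose(momentum(v), p)                 # (2.21) inverts (2.20b)
dEdp = c * (p0(p + 1e-7) - p0(p - 1e-7)) / 2e-7   # E = c*p0
assert np.isclose(dEdp, v)                        # v = dE/dp as in (2.20a)
print(speed(1e-3), c - 1e-3 / kappa)              # v ~ c - p/kappa at small p
```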
## 3 Measurement of distance and covariant $`\kappa `$-deformed phase space
In this section we analyze the measurement of the distance $`L`$ between two bodies as it results from a plausible physical interpretation of the uncertainty relations (2.15a)-(2.15d). Like the related studies we consider the procedure of measurement of distances set out by Wigner , which relies on the exchange of a probe/signal between the bodies. The distance is therefore measured as $`L=vT/2`$, where $`v`$ is the velocity of the probe and $`T`$ is the time (being measured by a clock) spent by the probe to go from one body to the other and return. In general the quantum mechanical nature of the agents intervening in the experiment introduces uncertainties in the measurement of $`L`$, and in particular one finds that <sup>9</sup><sup>9</sup>9Of course there are other contributions to $`\mathrm{\Delta }L`$ (e.g., coming from the quantum mechanical nature of the other devices used in the experiment ); however, since they obviously contribute additively to the total uncertainty in the measurement of $`L`$, these uncertainties could only make stronger the bound derived in the following.
$$\mathrm{\Delta }L\ge [\mathrm{\Delta }L]_{clock}+[\mathrm{\Delta }L]_{probe},$$
(3.1)
i.e. the uncertainty in the measurement of $`L`$ receives of course contributions that originate from the quantum mechanical nature of the “clock” (the timing/triggering device employed in the measurement) and from the quantum mechanical nature of the probe exchanged between the bodies. A significant contribution to $`\mathrm{\Delta }L_{clock}`$ was uncovered in Ref. ; this results in the relation
$$[\mathrm{\Delta }L]_{clock}\ge l_p\sqrt{\frac{cT}{s}},$$
(3.2)
where $`s`$ is a length scale characterizing the spatial extension of the clock (e.g., the radius of a spherically-symmetric clock) and $`T`$ is the time needed to complete the procedure of measuring $`L`$ (i.e. $`T`$ is the time that the clock measures).
Within ordinary quantum mechanics the quantum mechanical nature of the probe (while contributing in general to the uncertainty) does not contribute to the bound on the measurability of $`L`$ (i.e. a suitable measurement set up can be found so that the quantum mechanical nature of the probe does not lead to a contribution to $`\mathrm{\Delta }L`$). It was shown in Ref. that instead the kinematics of quantum $`\kappa `$-Minkowski space-time does lead to a nontrivial $`[\mathrm{\Delta }L]_{probe}`$, and interestingly this turns out to be of the same form as the $`[\mathrm{\Delta }L]_{clock}`$ in (3.2). As announced in the Introduction we are interested in extending the analysis of Ref. to include structure from the full $`\kappa `$-deformed phase space. We are also more general than Ref. and other related work (see, e.g., Ref.) in that we not only consider massless particles as the probes exchanged in the Wigner measurement, but we also consider the opposite limit in which the probes are ultra-heavy.
### 3.1 Using a heavy probe
In general combining the contribution (3.2), which originates from the quantum mechanical nature of the clock, with uncertainties due to the quantum mechanical nature of the probe one finds that
$$\mathrm{\Delta }L\ge l_p\sqrt{\frac{cT}{s}}+\mathrm{\Delta }x+v\mathrm{\Delta }t+T\mathrm{\Delta }v$$
(3.3)
where $`\mathrm{\Delta }x`$ and $`\mathrm{\Delta }t`$ are the uncertainties on the space-time position <sup>10</sup><sup>10</sup>10As implicit in the terminology here adopted, the Wigner measurement procedure is essentially one-dimensional, and the only relevant spatial coordinate is the one along the axis passing through the bodies whose distance is being measured. of the probe at the “final time” $`T`$, while $`\mathrm{\Delta }v`$ is the uncertainty on the velocity of the probe.
The first contribution on the right-hand-side of (3.3) originates from the quantum mechanical nature of the clock, and it is interesting to notice that in the case of a heavy probe the proportionality to $`\sqrt{T}`$ of that term, which always signals decoherence effects (e.g., the more time goes by, the more the quantum clock decoheres according to the ideas in Refs.), can be turned into a proportionality to $`\sqrt{L/v}`$, i.e. the uncertainty actually diverges in the limit of vanishing velocity as expected in a context involving decoherence (small velocities imply large times).
Concerning the contributions on the right-hand-side of (3.3) that originate from the quantum mechanical nature of the probe, it is interesting to observe that in ordinary quantum mechanics $`\mathrm{\Delta }x`$, $`\mathrm{\Delta }t`$ and $`\mathrm{\Delta }v`$ are not correlated and therefore they do not lead to a contribution to the bound on the measurability of $`L`$. However, the $`\kappa `$ deformation induces correlations between $`\mathrm{\Delta }x`$, $`\mathrm{\Delta }t`$ and $`\mathrm{\Delta }v`$. In particular, we observe that (2.15a)-(2.15d) imply (for an ideal heavy/nonrelativistic probe with $`p=Mv`$ and interpreting the $`\langle \widehat{x}\rangle `$ on the right-hand-side of (2.15a) as the distance traveled by the probe)
$$\mathrm{\Delta }v\ge \frac{l_\kappa v}{2c\mathrm{\Delta }t}$$
(3.4)
and
$$\mathrm{\Delta }x\ge \frac{l_\kappa L}{2c\mathrm{\Delta }t}.$$
(3.5)
These relations, together with the fact that $`v\sim L/T`$, allow us to rewrite (3.3) as
$$\mathrm{\Delta }L\ge l_p\sqrt{\frac{cT}{s}}+\frac{l_\kappa L}{2c\mathrm{\Delta }t}+\frac{L}{T}\mathrm{\Delta }t+\frac{l_\kappa L}{2c\mathrm{\Delta }t}.$$
(3.6)
This uncertainty can be minimized by preparing the probe in a state with $`v\sim cl_p/\sqrt{sl_\kappa }`$, i.e. $`T\sim L\sqrt{sl_\kappa }/(cl_p)`$, and $`\mathrm{\Delta }t\sim \sqrt{l_\kappa T/c}`$, and this results in the measurability bound
$$min[\mathrm{\Delta }L]\simeq \sqrt{Ll_p\sqrt{\frac{l_\kappa }{s}}}.$$
(3.7)
The fact that this bound emerging from our analysis of Wigner measurement using a heavy probe manifests the same $`\sqrt{L}`$ behavior encountered in the heuristic quantum-gravity analysis of the clock involved in the measurement is a rather interesting aspect of the covariantly $`\kappa `$-deformed phase space. In fact, Eq.(3.4), which reflects the specific structure of the $`\kappa `$-deformed commutation relation between three-momenta and quantum time coordinate, plays a nontrivial role in establishing that the $`\kappa `$-deformed kinematics of the heavy probe leads to an uncertainty with this $`\sqrt{L}`$ behavior.
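For completeness, and dropping factors of order one, the minimization leading to Eq. (3.7) can be spelled out: minimizing (3.6) over $`\mathrm{\Delta }t`$ gives $`\mathrm{\Delta }t\sim \sqrt{l_\kappa T/c}`$ and
$$\mathrm{\Delta }L\ge l_p\sqrt{\frac{cT}{s}}+2L\sqrt{\frac{l_\kappa }{cT}},$$
and then minimizing over $`T`$ gives $`T\sim L\sqrt{sl_\kappa }/(cl_p)`$, which reproduces $`min[\mathrm{\Delta }L]\simeq \sqrt{Ll_p\sqrt{l_\kappa /s}}`$.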
### 3.2 Using a massless probe
Of course, also in the case of a Wigner measurement involving a massless probe one finds that
$$\mathrm{\Delta }L\ge l_p\sqrt{\frac{cT}{s}}+\mathrm{\Delta }x+c\mathrm{\Delta }t+T\mathrm{\Delta }v,$$
(3.8)
and again the $`\kappa `$ deformation induces correlations between $`\mathrm{\Delta }x`$, $`\mathrm{\Delta }t`$ and $`\mathrm{\Delta }v`$. In particular, concerning the correlation between $`\mathrm{\Delta }x`$ and $`\mathrm{\Delta }t`$ using again (2.15a) one finds
$$\mathrm{\Delta }t\ge \frac{\hbar L}{2\kappa c^2\mathrm{\Delta }x}.$$
(3.9)
Moreover, if the probe is massless with modified velocity<sup>11</sup><sup>11</sup>11It is interesting to notice that $`\kappa `$-deformed mass-shell condition and $`\kappa `$-commutation relation between three-momenta and quantum time coordinate are somewhat related. In fact, for a minimum-uncertainty state in the framework of $`\kappa `$-deformed kinematics one has $`\mathrm{\Delta }E\mathrm{\Delta }t\simeq \hbar /2`$ and $`\mathrm{\Delta }p\mathrm{\Delta }t\simeq l_\kappa p/(2c)`$, and this is consistent with a given dispersion relation $`E(p)`$ only if $`E(p)\simeq (c\hbar /l_\kappa )\mathrm{ln}(p/p^{\ast })`$ (with $`p^{\ast }`$ a constant to be otherwise determined) which coincides with the asymptotic behavior of the $`\kappa `$-deformed dispersion relation (see (2.19)). (2.20b) one finds that
$$\mathrm{\Delta }v\simeq \frac{\mathrm{\Delta }P}{\kappa }\ge \frac{\hbar }{2\kappa \mathrm{\Delta }x},$$
(3.10)
where on the right-hand-side we used (2.15b).
Using (3.9) and (3.10) one can rewrite (3.8) as
$$\mathrm{\Delta }L\ge l_p\sqrt{\frac{cT}{s}}+\mathrm{\Delta }x+\frac{\hbar L}{2\kappa c\mathrm{\Delta }x}+\frac{\hbar T}{2\kappa \mathrm{\Delta }x},$$
(3.11)
and therefore, also taking into account that $`L\simeq cT/2`$ and $`l_\kappa \equiv \hbar /(\kappa c)`$, one finds that the minimal value of $`\mathrm{\Delta }L`$ is obtained if $`(\mathrm{\Delta }x)^2\sim Ll_\kappa `$ and this implies that the minimal uncertainty in the measurement of the distance $`L`$ is
$$\mathrm{min}[\mathrm{\Delta }L]\simeq \sqrt{\frac{Ll_p^2}{s}}+\sqrt{Ll_\kappa }$$
(3.12)
Again we find the $`\sqrt{L}`$ behavior, and again the full structure of the covariantly $`\kappa `$-deformed phase space advocated here plays a rather central role in obtaining this result; in fact, the relation (2.18) ensures that the fourth term on the right-hand side of Eq. (3.8) (which was not considered in Ref. ) is of the same order as the second term on the right-hand side of Eq. (3.8), which is the one considered in Ref. .
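The scaling in (3.12) can also be checked numerically; the sketch below (illustrative only, in Planck-like units $`c=l_p=l_\kappa =s=1`$ and with $`T\simeq 2L/c`$, all assumptions of this snippet) minimizes the right-hand side of (3.11) over $`\mathrm{\Delta }x`$:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Minimize the RHS of Eq. (3.11) over Delta-x, with T ~ 2L/c and
# c = l_p = l_kappa = s = 1 (assumptions for this illustration).
c, lp, lk, s = 1.0, 1.0, 1.0, 1.0

def min_dL(L):
    T = 2.0 * L / c
    clock = lp * np.sqrt(c * T / s)
    probe = lambda dx: dx + lk * L / (2.0 * dx) + lk * c * T / (2.0 * dx)
    res = minimize_scalar(probe, bounds=(1.0, 10.0 * L), method="bounded")
    return clock + res.fun, res.x

for L in [1e10, 1e12, 1e14]:
    dL, dx = min_dL(L)
    print(L, dL / np.sqrt(L), dx / np.sqrt(L * lk))  # both ratios stay O(1)
```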
While the $`\sqrt{L}`$ behavior is of course the most robust outcome of these analyses, it is interesting to notice the interplay between the scale $`l_\kappa `$, which characterizes the $`\kappa `$ deformation, and the scales $`s`$ and $`l_p`$, which characterize heuristic quantum-gravity arguments. Although the relative magnitude of these scales could only be determined once a full Quantum Gravity formalism became available, it is quite natural to guess that if $`\kappa `$ deformations were to have physical applications it might be that $`l_\kappa \sim l_p`$. Moreover, from the role of $`s`$ in the measurement procedure it is clear that $`s\ge l_p`$, and since the measurability bound should be a general property of the theory it is conceivable that also $`s\sim l_p`$. This, for example, appears to fit rather well the schemes, such as the one discussed in Ref. , in which “fundamental clocks” are intrinsic to the formulation of the quantum-gravity approach. For $`l_\kappa \sim l_p\sim s`$ the heavy probe and the massless probe considered in this and in the previous subsection lead to exactly (up to an overall numerical factor of order 1) the same bound in the context of the Wigner measurement, and even the heuristic quantum-gravity measurement analysis of Ref. reproduces this bound exactly (again up to an overall numerical factor of order 1). Nevertheless, especially in light of the fact that very little will be known about $`s`$ until a fully consistent (and genuinely quantum) theory of gravity is available, it is interesting to observe that if $`s\gg l_p`$ (i.e. $`s>l_p`$) the Wigner measurement using a heavy probe is actually a “better measurement” (weaker bound) than its counterpart using a massless probe. Since most of the previous studies of quantum-gravity measurability bounds have relied on massless probes, our results suggest that a reanalysis of those studies might be necessary.
## 4 Closing Remarks
The covariant $`\kappa `$-deformation of relativistic symmetries here considered, and the associated covariant $`\kappa `$-deformation of the Heisenberg algebra (2.14), has several appealing properties as a candidate for the high-energy modification of classical relativistic symmetries. It provides a rather moderate (at least in comparison with some of its alternatives) deformation of classical relativistic symmetries, which in particular reflects the reasonable expectation that, if any of the space-time coordinates is to be special, the special coordinate should be time. (Interestingly this intuition appears to be also realized in certain approaches to string theory, see e.g. Ref. .) As manifest in the relations (2.15a)-(2.15d), the $`\kappa `$-modifications of the covariant Heisenberg commutation relations are of quantum mechanical nature, i.e. proportional to the Planck constant $`\hbar `$. This suggests that the $`\kappa `$-deformation can be related to the quantum corrections to the classical dynamics of the space-time geometry. In extending the analysis of Ref. from the space-time coordinate sector to the full structure of the $`\kappa `$-deformed phase space, our analysis has provided additional evidence that the bounds on the measurability of distances associated with the uncertainty relations characterizing the $`\kappa `$-deformed covariant Heisenberg algebra (2.14) are consistent with independent heuristic quantum-gravity analyses of such measurability bounds. Based on this consistency between heuristic quantum-gravity measurability analysis and $`\kappa `$-Poincaré measurability analysis one is prompted to consider the possibility that at length scales larger than the Planck length (but of course smaller than the length scales already probed experimentally) the $`\kappa `$-deformations of Poincaré symmetries might play a role in the effective description of Quantum Gravity. This is not completely surprising since some of the most popular Quantum-Gravity scenarios, such as the ones based on a spacetime lattice and the ones involving a foamy Quantum-Gravity vacuum, would plausibly lead to deformations of Poincaré symmetries, and, in particular, the analysis of Ref. appears to suggest that propagation in a foamy Quantum Gravity vacuum might be characterized by a $`\kappa `$-deformed dispersion relation.
The three-momentum-dependent (i.e. energy-dependent) “speed of light” (2.20b) is a novel phenomenon that arises in the framework here considered. As mentioned, it has the same functional form (upon appropriate identification between $`\kappa `$ and the string scale) as the energy-dependent speed of light recently discussed in the non-critical (“Liouville”) string literature. Both in the $`\kappa `$-Poincaré and in the string theory contexts the deviation from ordinary physics, while very significant at the conceptual level, is rather marginal from the phenomenological viewpoint. For example, for photons of energies of order 1 GeV, Eq. (2.20b) entails a minuscule $`10^{-19}c`$ correction with respect to the ordinary scenario with constant speed of light. At least when $`\kappa `$ is identified with the Planck scale, Eq. (2.20b) is completely consistent with presently available experimental data . However (as already emphasized in Refs. and references therein), some of the modern techniques of investigation of astrophysical phenomena could soon bring remarkable progress in the investigation of space-time symmetries. In our case the best laboratory appears to be provided by gamma-ray bursts, and, now that there is convincing evidence that these bursts have cosmological origin, we can expect that within a few years gamma-ray-burst data will test conclusively the $`\kappa `$ deformations we considered.
# Berry-phase theory of proper piezoelectric response
## I Introduction
The calculation of spontaneous polarization and piezoelectric response within the framework of first-principles methods of electronic structure theory has proven to be a rather subtle problem. In a landmark paper, Martin showed that the piezoelectric tensor is well-defined as a bulk quantity in a crystalline insulator. However, at that time it was far from clear whether the spontaneous polarization itself could be regarded as a bulk property in the same sense, and calculations of piezoelectric constants by finite differences of spontaneous polarization were therefore not possible.
The situation changed in 1993 with the development of the “Berry-phase” theory of polarization , which provided a direct and straightforward method for computing the electric polarization. (For a useful review, see Ref. .) Nevertheless, some subtleties remain regarding the computation of the piezoelectric tensor components by finite differences . First, the Berry-phase theory gives the polarization as a multivalued quantity, and the piezoelectric response that would be computed from a given one of the many branches is not invariant with respect to choice of branch. Second, a distinction is made between the “proper” and “improper” piezoelectric response , and it might not be clear which of these is to be associated with the finite-difference calculation.
The purpose of the present paper is to elucidate the physics of the spontaneous polarization, the piezoelectric response, and the relations between the two. It is clarified that the improper piezoelectric response is the one given by the naive finite-difference approach, and that while this quantity is indeed branch-dependent, the proper response, which should be compared with experiment, is not. As a result of this analysis, a simplified recipe for the direct finite-difference computation of the proper piezoelectric response is given.
## II Berry-phase theory of polarization
We consider a periodic insulating crystal in zero macroscopic electric field, and assume that the electronic ground state can be described by a one-electron Hamiltonian $`H`$ as in density-functional or Hartree-Fock theory. The eigenstates of $`H`$ are the Bloch functions $`\psi _{n𝐤}`$ with energies $`ϵ_{n𝐤}`$, and it is conventional to define the cell-periodic Bloch functions
$$u_{n𝐤}(𝐫)=e^{-i𝐤\cdot 𝐫}\psi _{n𝐤}(𝐫)$$
(1)
having periodicity $`u_{n𝐤}(𝐫)=u_{n𝐤}(𝐑+𝐫)`$, where $`𝐑`$ is any lattice vector. The contribution of the $`n`$’th occupied band to the spontaneous electric polarization of the crystal can then be written
$$𝐏_n=-\frac{ie}{(2\pi )^3}\int d^3k\,\langle u_{n𝐤}|\nabla _𝐤|u_{n𝐤}\rangle .$$
(2)
We take the convention that $`n`$ runs over bands and spin, so a factor of two would need to be inserted in Eq. (2) to account for paired spins. The total spontaneous polarization is then given by
$$𝐏=\frac{e}{\mathrm{\Omega }}\sum _\tau Z_\tau 𝐫_\tau +\sum _{n\in \mathrm{occ}}𝐏_n,$$
(3)
where $`Z_\tau `$ and $`𝐫_\tau `$ are the atomic number and cell position of the $`\tau `$’th nucleus in the unit cell, and $`\mathrm{\Omega }`$ is the unit cell volume.
Strictly speaking, Eq. (2) applies only to an isolated band, i.e., a band for which $`ϵ_{n𝐤}`$ does not become degenerate with any other band at any point in the Brillouin zone. This restriction is not essential; methods for extending the analysis to composite groups of occupied bands containing arbitrary degeneracies and crossings have been developed as described in Refs. . However, for simplicity of presentation, it will be assumed here that only isolated bands are present. For the same reason, spin degeneracy is suppressed throughout.
There is a certain arbitrariness inherent in Eq. (2) associated with the freedom to choose the phases of the Bloch functions $`\psi _{n𝐤}`$. For, suppose we make a different choice
$$|\stackrel{~}{\psi }_{n𝐤}\rangle =e^{-i\beta (𝐤)}|\psi _{n𝐤}\rangle .$$
(4)
We shall refer to this as a “gauge transformation” of the Bloch functions. Note that the choice of $`\beta (𝐤)`$ is restricted by the fact that $`𝐤`$ and $`𝐤+𝐆`$ label the same wavefunction (where $`𝐆`$ is a reciprocal lattice vector), so that $`\beta (𝐤+𝐆)-\beta (𝐤)`$ must be an integer multiple of $`2\pi `$ for any $`𝐆`$. Thus, the most general form of $`\beta (𝐤)`$ is
$$\beta (𝐤)=\beta _{\mathrm{per}}(𝐤)+𝐤\cdot 𝐑$$
(5)
where $`\beta _{\mathrm{per}}`$ is a periodic function in $`k`$-space and $`𝐑`$ is some real-space lattice vector. Letting $`\stackrel{~}{𝐏}_n`$ be the result of inserting the $`\stackrel{~}{u}_{n𝐤}`$ in place of the $`u_{n𝐤}`$ in Eq. (2), and using Eqs. (1), (4), and (5), one finds
$$\stackrel{~}{𝐏}_n=𝐏_n-\frac{e𝐑}{\mathrm{\Omega }}.$$
(6)
Thus, while the contribution of this band to the electronic polarization is not absolutely gauge-invariant, it is gauge-invariant modulo $`e/\mathrm{\Omega }`$ times a real-space lattice vector. Actually, this is precisely the type of qualified invariance we should have expected. After all, the choice of the location $`𝐫_\tau `$ of the atom representing sublattice $`\tau `$ in the unit cell has a similar ambiguity; we could just as well choose $`\stackrel{~}{𝐫}_\tau =𝐫_\tau +𝐑^{}`$, where $`𝐑^{}`$ is another lattice vector, leading to precisely the same kind of “modulo $`e𝐑/\mathrm{\Omega }`$” ambiguity in the expression for $`𝐏`$ in Eq. (3).
Perhaps the most natural way to incorporate this kind of ambiguity in the definition of the polarization is to regard $`𝐏`$ as a multivalued quantity; that is, it simultaneously takes on a lattice of values given by some $`𝐏^{(b)}`$ (here ‘$`b`$’ is a “branch” label) and all its periodic images $`𝐏^{(b)}+e𝐑/\mathrm{\Omega }`$ (with $`R`$ running over all lattice vectors of the crystal). To interpret this intuitively, we can say that from the point of view of its dipolar properties, the real insulator behaves like a fictitious crystal composed of two sublattices of point $`\pm e`$ charges, with the sublattice of $`-e`$ charges displaced relative to the $`+e`$ sublattice by $`-\mathrm{\Omega }𝐏/e`$. That is, choosing one of the $`+e`$ charges as the origin, $`-\mathrm{\Omega }𝐏/e`$ takes on a lattice of values that is precisely the lattice of positions of the $`-e`$ charges.
This is illustrated in Fig. 1 for an imaginary tetragonal crystal (dimensions $`a\times a\times c`$) with one monovalent ion located at the cell corners, and a single (spinless) electron band giving rise to the distributed electron charge indicated schematically by the contours in Fig. 1(a). (We assume that $`M_z`$ mirror symmetry is broken in some way.) Eq. (2) then gives the location
$$𝐫_n=-\mathrm{\Omega }𝐏_n/e$$
(7)
of the effective unit point charge $`e`$ illustrated in Fig. 1(b). As discussed in Refs. , this location is just the charge center of the Wannier function associated with the electron band. The polarization will then take on a lattice of values having $`x`$, $`y`$, and $`z`$ components of $`m_1e/ac`$, $`m_2e/ac`$, and $`(\gamma +m_3)e/a^2`$, respectively, where the $`m_i`$ are integers. More generally, when several occupied bands are present, one can rewrite Eq. (3) as
$$𝐏=\frac{e}{\mathrm{\Omega }}\sum _\tau Z_\tau 𝐫_\tau -\frac{e}{\mathrm{\Omega }}\sum _{n\in \mathrm{occ}}𝐫_n.$$
(8)
In practice, one proceeds by computing the component of $`𝐏_n`$ along a particular crystallographic direction $`\alpha `$ via the quantity
$$\varphi _{n,\alpha }=-\frac{\mathrm{\Omega }}{e}𝐆_\alpha \cdot 𝐏_n,$$
(9)
where $`𝐆_\alpha `$ is the primitive reciprocal lattice vector in direction $`\alpha `$. In cases of simple symmetry (e.g., tetragonal or rhombohedral ferroelectric phases), a single $`\varphi _n`$ suffices to determine $`𝐏_n`$, but in general $`𝐏_n`$ can be reconstructed from the three $`\varphi `$’s via
$$𝐏_n=-\frac{1}{2\pi }\frac{e}{\mathrm{\Omega }}\sum _\alpha \varphi _{n,\alpha }𝐑_\alpha ,$$
(10)
where $`𝐑_\alpha `$ is the real-space primitive lattice vector corresponding to $`𝐆_\alpha `$. The $`\varphi _{n,\alpha }`$ are angle variables (“Berry phases”) that are well-defined modulo $`2\pi `$, given by
$$\varphi _{n,\alpha }=\mathrm{\Omega }_{\mathrm{BZ}}^{-1}\int _{\mathrm{BZ}}d^3k\,\langle u_{n𝐤}|i𝐆_\alpha \cdot \nabla _𝐤|u_{n𝐤}\rangle ,$$
(11)
where $`\mathrm{\Omega }_{\mathrm{BZ}}=(2\pi )^3/\mathrm{\Omega }`$ is the Brillouin zone (BZ) volume.
The $`\varphi _{n,\alpha }`$ can be regarded as giving the position of the Wannier center for band $`n`$. For the toy crystal of Fig. 1, for example, and with the origin chosen at the cell corner, one would have $`\varphi _x=\varphi _y=0`$ and $`\varphi _z=-2\pi \gamma `$. The practical calculation of the $`\varphi _{n,\alpha }`$ proceeds on a discrete mesh in reciprocal space, arranged as a two-dimensional grid of $`𝐆_\alpha `$-oriented strings of k-points, as described in Refs. .
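In such a string calculation, the Berry phase of an isolated band is accumulated from the overlaps of neighboring cell-periodic functions. A minimal sketch of this step follows (an illustration added for concreteness; the array `u` of coefficient vectors along one closed string, including the periodic-gauge phase relating the endpoints, is assumed to be supplied by the band-structure code):

```python
import numpy as np

def string_berry_phase(u):
    """Discretized Berry phase of one band along one closed string,
    phi = -Im ln prod_j <u_j|u_{j+1}>  (sign convention of Eq. (11)).

    u : complex array of shape (J, N); u[j] holds the N expansion
    coefficients of the cell-periodic function at the j-th k-point,
    with the wrap-around u[J-1] -> u[0] incorporating the periodic
    gauge relating k and k+G (an assumption of this sketch).
    """
    J = u.shape[0]
    prod = 1.0 + 0.0j
    for j in range(J):
        overlap = np.vdot(u[j], u[(j + 1) % J])   # <u_j|u_{j+1}>
        prod *= overlap / abs(overlap)            # keep only the phase
    return -np.imag(np.log(prod))                 # defined modulo 2*pi
```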
## III Piezoelectric response
The piezoelectric tensor of a crystal reflects the first-order change in spontaneous electric polarization in response to a first-order deformation of the crystal. The “improper” piezoelectric tensor is defined as
$$c_{ijk}=\frac{dP_i}{dϵ_{jk}}$$
(12)
in terms of the deformation
$$dr_j=\sum _kdϵ_{jk}r_k,$$
(13)
where the symmetric and antisymmetric parts of $`dϵ`$ represent infinitesimal strains and rotations, respectively. On the other hand, the “proper” piezoelectric tensor can be defined as
$$\stackrel{~}{c}_{ijk}=\frac{dJ_i}{d\dot{ϵ}_{jk}},$$
(14)
where $`𝐉`$ is the current density that flows through the bulk of the sample in adiabatic response to a slow deformation $`\dot{ϵ}=dϵ/dt`$. According to the standard references , the relation between the improper and proper piezoelectric tensors is
$$\stackrel{~}{c}_{ijk}=c_{ijk}+\delta _{jk}P_i-\delta _{ij}P_k.$$
(15)
Writing out explicit tensor components, this last equation becomes
$`\stackrel{~}{c}_{zzz}`$ $`=`$ $`c_{zzz},`$ (16)
$`\stackrel{~}{c}_{zxx}`$ $`=`$ $`c_{zxx}+P_z,`$ (17)
$`\stackrel{~}{c}_{zxy}`$ $`=`$ $`c_{zxy},`$ (18)
$`\stackrel{~}{c}_{zxz}`$ $`=`$ $`c_{zxz},`$ (19)
$`\stackrel{~}{c}_{zzx}`$ $`=`$ $`c_{zzx}-P_x,`$ (20)
and similarly for permutations of the cartesian labels (but not for permutations of their position in the index triplet). It might seem strange at first sight that the expressions for $`\stackrel{~}{c}_{zxz}`$ and $`\stackrel{~}{c}_{zzx}`$ have a different form, but this just reflects the fact that the deformation tensor $`ϵ`$ has been allowed to contain an antisymmetric part.
Now in the Berry-phase theory, the polarization is a multivalued quantity, so that any particular value $`𝐏^{(b)}`$ has to be identified by its branch label ‘$`b`$’, and the corresponding improper piezoelectric tensor is
$$c_{ijk}^{(b)}=\frac{dP_i^{(b)}}{dϵ_{jk}}.$$
(21)
Since $`𝐏`$ is well-defined modulo $`e𝐑/\mathrm{\Omega }`$, and both $`𝐑`$ and $`\mathrm{\Omega }`$ vary with the deformation $`ϵ`$, Eq. (21) will clearly give different results for different choices of branch. This branch-dependence is problematic; the piezoelectric tensor is measurable, and a suitable theory ought to give a unique value for it.
Before proceeding, the reader is reminded that the piezoelectric response contains, in general, a “clamped-ion” part and an “internal-strain” part . That is, one decomposes the actual deformation into a sum of two parts: a homogeneous strain in which the nuclear coordinates follow Eq. (13) exactly (clamped-ion part), plus an internal distortion of the nuclear coordinates at fixed strain (internal-strain part). Since the latter occurs at fixed strain, all the subtleties about the branch-dependence and the proper-vs.-improper distinction disappear for this case. While the computation of the internal-strain part of the piezoelectric response may be tedious (requiring an iterative set of force calculations to determine the needed internal relaxations), it is straightforward in principle. Consequently, for the remainder of this paper, the discussion refers to the clamped-ion response only unless explicitly stated otherwise.
### A Branch-invariance of proper piezoelectric response
While it is true that the improper piezoelectric response depends, in general, on choice of branch, it is instead the proper piezoelectric tensor that should be compared with experiment. Figure 2 shows a sketch of one possible experimental setup, in which a block of piezoelectric material is sandwiched between shorted conducting electrodes, and the current $`I`$ that flows in response to a deformation $`ϵ`$ is measured. As suggested by Eq. (14), the proper piezoelectric response is related to the current that flows through the sample in response to the deformation, and is thus the experimentally measured quantity. Moreover, the induced current density $`𝐣(𝐫)`$ is periodic with the lattice, so that its unit cell average $`𝐉`$ in Eq. (14) is perfectly well-defined, and consequently the proper piezoelectric tensor $`\stackrel{~}{c}`$ cannot suffer from any dependence upon choice of branch.
It is straightforward to check this branch-independence of $`\stackrel{~}{c}`$ explicitly. Since the polarizations for two different branch choices are related by
$$𝐏^{(b^{})}=𝐏^{(b)}+\frac{e}{\mathrm{\Omega }}𝐑,$$
(22)
one finds
$`dP_i^{(b^{})}`$ $`=`$ $`dP_i^{(b)}-{\displaystyle \frac{e}{\mathrm{\Omega }^2}}d\mathrm{\Omega }R_i+{\displaystyle \frac{e}{\mathrm{\Omega }}}dR_i`$ (23)
$`=`$ $`dP_i^{(b)}+{\displaystyle \frac{e}{\mathrm{\Omega }}}{\displaystyle \sum _l}(-dϵ_{ll}R_i+dϵ_{il}R_l).`$ (24)
so that
$$c_{ijk}^{(b^{})}=c_{ijk}^{(b)}-\frac{e}{\mathrm{\Omega }}\delta _{jk}R_i+\frac{e}{\mathrm{\Omega }}\delta _{ij}R_k,$$
(25)
or, using Eq. (22),
$$c_{ijk}^{(b^{})}+\delta _{jk}P_i^{(b^{})}-\delta _{ij}P_k^{(b^{})}=c_{ijk}^{(b)}+\delta _{jk}P_i^{(b)}-\delta _{ij}P_k^{(b)}.$$
(26)
It is thus evident that $`\stackrel{~}{c}_{ijk}`$ as defined in Eq. (15) is indeed independent of choice of branch.
It is instructive to note that a similar argument applies to the part of the proper piezoelectric tensor arising from the ionic contribution $`𝐏_{\mathrm{ion}}=(e/\mathrm{\Omega })\sum _\tau Z_\tau 𝐫_\tau `$ in Eq. (3). Recalling that we are working in the clamped-ion approximation, so that $`d𝐫_\tau `$ follows the form of Eq. (13), one finds immediately that $`\stackrel{~}{c}_{\mathrm{ion}}=0`$ by the same logic as for the previous paragraph.
Indeed, the same logic would apply to Eq. (8) if the Wannier centers $`𝐫_n`$ were to undergo a homogeneous deformation of the type (13). In other words, the proper piezoelectric response is identically zero for a homogeneous deformation of both the ionic positions and the Wannier centers, in which case there is no charge flow through the interior of the crystal.
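This can be verified in one line: for a lattice of point charges $`q_s`$ at positions $`𝐫_s`$, $`P_i=\mathrm{\Omega }^{-1}\sum _sq_sr_{s,i}`$, and a homogeneous deformation (13) gives $`dr_{s,i}=\sum _jdϵ_{ij}r_{s,j}`$ and $`d\mathrm{\Omega }=\mathrm{\Omega }\sum _ldϵ_{ll}`$, so that
$$dP_i=\sum _jdϵ_{ij}P_j-\sum _ldϵ_{ll}P_i\;\Rightarrow \;c_{ijk}=\delta _{ij}P_k-\delta _{jk}P_i,$$
and Eq. (15) then gives $`\stackrel{~}{c}_{ijk}=0`$.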
### B Simplified finite-difference formula
Of course, there is no reason to expect the Wannier centers $`𝐫_n`$ to follow a homogeneous deformation, so $`\stackrel{~}{c}`$ is not generally zero. But from this point of view, it becomes evident that the proper piezoelectric response is precisely a measure of the degree to which the Wannier centers fail to follow a homogeneous deformation. Or equivalently, returning to Eq. (10), we see that the proper piezoelectric response measures just the variation of the Berry phases $`\varphi _{n,\alpha }`$ with the strain deformation. More precisely, starting from Eqs. (10), (12), and (15), one finds
$$\stackrel{~}{c}_{ijk}=-\frac{1}{2\pi }\frac{e}{\mathrm{\Omega }}\sum _{n,\alpha }\frac{d\varphi _{n,\alpha }}{dϵ_{jk}}R_{\alpha i}$$
(27)
We have been working in the clamped-ion approximation, but in general, if there are internal relaxations accompanying the deformation, one can define a total Berry phase in direction $`\alpha `$,
$$\varphi _\alpha =\sum _\tau Z_\tau 𝐆_\alpha \cdot 𝐫_\tau -\sum _n\varphi _{n,\alpha },$$
(28)
so that
$$\stackrel{~}{c}_{ijk}=\frac{1}{2\pi }\frac{e}{\mathrm{\Omega }}\sum _\alpha \frac{d\varphi _\alpha }{dϵ_{jk}}R_{\alpha i}.$$
(29)
Naturally, the ionic contributions to $`d\varphi _\alpha /dϵ_{jk}`$ vanish in the clamped-ion approximation.
Equation (27), or its generalization (29), is the central result of this paper, and provides a simple and practical recipe for calculating the desired proper piezoelectric response. One simply computes the needed $`d\varphi /dϵ`$ by finite differences, as $`(\varphi ^{\prime }-\varphi )/(ϵ^{\prime }-ϵ)`$ for nearby strain configurations $`ϵ`$ and $`ϵ^{\prime }`$. Then these $`d\varphi /dϵ`$ are inserted into Eq. (27) or (29) to obtain the elements of the proper piezoelectric tensor.
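A sketch of this recipe in code (illustrative only; `berry_phases` is a hypothetical user-supplied routine that performs the self-consistent calculation at fixed strain, relaxing the internal coordinates, and returns the total Berry phases $`\varphi _\alpha `$, the primitive lattice vectors $`𝐑_\alpha `$, and the cell volume $`\mathrm{\Omega }`$):

```python
import numpy as np

def proper_piezo_column(berry_phases, jk, eps=1e-4, e=1.0):
    """Finite-difference estimate of the proper piezoelectric
    components c~_{i,jk} from Eq. (29)."""
    strain_p = np.zeros((3, 3)); strain_p[jk] = +eps
    strain_m = np.zeros((3, 3)); strain_m[jk] = -eps
    phi_p, R, Omega = berry_phases(strain_p)  # R, Omega differ only at O(eps)
    phi_m, _, _ = berry_phases(strain_m)
    # Berry phases are defined modulo 2*pi; take the branch nearest zero.
    dphi = np.angle(np.exp(1j * (phi_p - phi_m))) / (2 * eps)
    # Eq. (29): (1/2pi)(e/Omega) sum_alpha (dphi_alpha/deps_jk) R_{alpha,i}
    return (e / (2 * np.pi * Omega)) * (dphi @ R)
```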
### C Relation to surface charges
At the end of Sec. III A, it was pointed out that a homogeneous deformation of the lattice of positive ionic and negative Wannier-center point charges would give rise to no internal current, and hence no proper piezoelectric response. This result can be made more intuitive by considering the connection between bulk polarization and surface charges .
Consider, for example, a crystallite composed of $`N\times N\times N`$ replicas of the unit cell shown in Fig. 1(b). In general there may be an arbitrariness in the choice of surface termination, as illustrated for the top surface of this crystallite in Fig. 3. For any given termination, the macroscopic surface charge density $`\sigma `$ is uniquely defined as $`\int dz\,\overline{\rho }(z)`$, where $`\overline{\rho }(z)`$ is the average charge contained in a unit cell centered at vertical coordinate $`z`$ (so that $`\overline{\rho }`$ vanishes either deep in the crystal or deep in the vacuum and its integral is convergent). For the crystal of Fig. 1, one finds $`\sigma =\gamma e/a^2`$ and $`\sigma =(\gamma -1)e/a^2`$ for the terminations of type I and II of Figs. 3(a) and 3(b), respectively. Referring back to Sec. II, where it was found that $`P_z=(\gamma +m_3)e/a^2`$, one confirms that the relation
$$\sigma =𝐏\cdot \widehat{𝐧}$$
(30)
is satisfied for both terminations, the ambiguity of termination corresponding to the choice of branch of $`𝐏`$.
For definiteness, assume that the surface terminations are such that the top and bottom surfaces of the crystallite have charge densities $`+\gamma e/a^2`$ and $`-\gamma e/a^2`$ on the top and bottom surfaces, respectively, and zero on the sides. Then the magnitude of the total charge on the top or bottom surface is just $`N^2a^2\sigma =N^2\gamma e`$, which is clearly independent of any homogeneous ($`\gamma `$-preserving) deformation of the crystal. Thus, if this crystallite were inserted between grounded capacitor plates as in Fig. 2, no current would flow through the wire as a result of the homogeneous deformation. This is consistent with the vanishing of the proper piezoelectric response associated with such a homogeneous deformation, as already illustrated via Eq. (27).
However, for the same situation, the improper piezoelectric tensor would have nonzero elements. For the chosen surface termination, the crystallite has a total dipole moment $`𝐝=N^3\gamma ec\widehat{𝐳}`$, and a polarization $`𝐏=𝐝/N^3a^2c=(\gamma e/a^2)\widehat{𝐳}`$ as expected. Clearly this $`𝐏`$ is invariant with respect to an elongation of the crystallite along the $`\widehat{𝐳}`$ axis (strain component $`ϵ_{zz}`$), but not to an elongation along the $`\widehat{𝐱}`$ or $`\widehat{𝐲}`$ axes ($`ϵ_{xx}`$ or $`ϵ_{yy}`$), thus explaining why there is a correction to $`c_{zxx}`$ but not to $`c_{zzz}`$ in Eq. (20). Similar considerations applied to shear strains and rotations explain the remaining entries in Eq. (20).
## IV Discussion
As is evident from Eqs.(15) and (20), the distinction between the proper and improper piezoelectric tensor is only present if a spontaneous polarization is present. If the spontaneous polarization is small, as for wurtzite semiconductors , it may be a good approximation to neglect the corrections to the improper tensor components. Alternatively, linear-response methods can be used to compute the proper piezoelectric response directly . However, for a finite-difference calculation of the proper piezoelectric response of a ferroelectric material, it is essential to take the corrections to the improper response explicitly into account, as was done in Ref. .
## V Summary
In this work, a simple and straightforward method for computing the proper piezoelectric response has been proposed. Instead of first computing the improper response and then the needed corrections, the proper response is computed directly from Eq. (27) or (29). It is thus clarified that the central quantities needed to determine the proper piezoelectric response are just the variations of the Berry phases with deformation.
###### Acknowledgements.
This work was supported by ONR Grant N0001497-1-0048. I wish to thank R. Cohen for calling my attention to the problem of the proper piezoelectric response and its relation to the Berry-phase theory of polarization.
# Effective temperatures out of equilibrium
Based on talks given at “Trends in Theoretical Physics II”, November 30 - December 4, 1998, Buenos Aires, Argentina and at the NATO Advanced Study Institute “Topological Defects and the Non-Equilibrium Dynamics of Symmetry Breaking Phase Transitions”, February 16 - 26, 1999, Les Houches, France. LPTENS/9910
## Abstract
We describe some interesting effects observed during the evolution of nonequilibrium systems, using domain growth and glassy systems as examples. We briefly discuss the analytical tools that have been recently used to study the dynamics of these systems. We mainly concentrate on one of the results obtained from this study, the violation of the fluctuation-dissipation theorem, and we discuss, in particular, its relation to the definition and measurement of effective temperatures out of equilibrium.
One of the major challenges in physics is to understand the behaviour of systems that are far from equilibrium. These systems are ubiquitous in nature. Some examples are phase separation, systems undergoing domain growth, all types of glasses, turbulent flows, systems driven by non-potential forces, etc. All these systems are “large” in the sense that they are composed of many, $`N\to \mathrm{\infty }`$, dynamic degrees of freedom. Apart from succeeding in predicting the time evolution of their macroscopic properties, one would like to know which, if any, of the thermodynamic notions apply to these nonequilibrium cases.
Systems undergoing domain growth, or phase separation, provide the best known example of a nonequilibrium evolution. Take for instance a magnetic system with ferromagnetic interactions in contact with a thermal bath. If the bath temperature is very high the sample is in its paramagnetic phase and the magnetic moments, or spins, point in random directions. If one next cools down the bath, and hence the sample, through a transition temperature $`T_c`$, the system enters the low temperature phase and starts forming domains or islands of the two ordered phases, say up and down. For definiteness, let us fix the final temperature to be $`0<T<T_c`$. At any time $`t_w`$ after crossing the transition at the initial time, two types of dynamics appear: (i) fast fluctuations of some spins, due to thermal fluctuations, inside the otherwise fully ordered domains; (ii) slow motion of the domain walls leading to the growth of the averaged domain size $`L(t_w)`$. If the size of the sample is infinite, in real life very large, the nonequilibrium domain-growth process can take so long that the sample simply does not equilibrate in the time-window that is accessible experimentally. In other words, below the critical temperature $`T_c`$ one always has $`\tau _{\mathrm{obs}}<\tau _{\mathrm{eq}}`$ with $`\tau _{\mathrm{obs}}`$ the observation time and $`\tau _{\mathrm{eq}}`$ the equilibration time. The two types of dynamics itemized above are clearly seen in Fig. 1 where three two-dimensional slices of a system undergoing domain-growth are displayed. The pictures are obtained at increasing waiting times after the quench. One sees the domains growing as well as the existence, in each of the snapshots, of some reversed spins inside the otherwise ordered domains.
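To make these two types of dynamics concrete, a minimal simulation sketch may help (illustrative only; single-spin-flip Metropolis dynamics for the two-dimensional Ising model, with all parameters chosen arbitrarily):

```python
import numpy as np

# Minimal sketch of a quench into the ordered phase of the 2D Ising
# model (single-spin-flip Metropolis dynamics; parameters arbitrary).
rng = np.random.default_rng(0)
Lside, T, sweeps = 64, 1.5, 50        # T < T_c ~ 2.269 (J = k_B = 1)
s = rng.choice([-1, 1], size=(Lside, Lside))  # quench from T = infinity

for tw in range(sweeps):              # snapshots at increasing t_w show
    for _ in range(Lside * Lside):    # growing ordered domains
        i, j = rng.integers(Lside, size=2)
        nb = (s[(i + 1) % Lside, j] + s[(i - 1) % Lside, j]
              + s[i, (j + 1) % Lside] + s[i, (j - 1) % Lside])
        dE = 2 * s[i, j] * nb
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            s[i, j] *= -1             # fast flips + slow wall motion
```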
Scaling arguments have been extensively used to describe the dynamics below $`T_c`$; they are based on the assumption (sometimes derivation) of the evolution of the averaged domain size $`L(t_w)`$ and on further proposals for the space and time-dependence of the correlation functions.
Understanding the physics of glassy materials is perhaps a problem of intermediate difficulty. Glassy materials can be of very many different types; one has for instance structural glasses, orientational glasses, spin-glasses, plastics, gels and clays, glycerol, etc. Their hallmark is that below some transition range they also fall out of equilibrium.
The easiest way of preparing a glassy system is again through an annealing. This is implemented by decreasing the temperature of the bath with a given cooling-rate. Take for example the case of a molecular liquid. For high enough bath temperatures the sample is in its liquid phase and it achieves equilibrium with the bath. At an intermediate bath-temperature range the liquid avoids the crystallization transition, enters a metastable phase and becomes a super-cooled liquid, that is to say a liquid with some peculiar properties such as, for example, an extremely high viscosity. At an even lower bath-temperature the liquid cannot follow the pace of the annealing and falls out of equilibrium; it becomes a glass. If one stops the annealing at any temperature below this range the system stays in its glassy phase for practical purposes forever and is typically an amorphous solid. We talk about a “transition range” since the transition might not be clear-cut but depend on the cooling-rate. Actually one can form a glass of probably any substance by choosing a fast enough cooling-rate. Many other routes to the glassy phase are also possible.
There have been proposals to describe the evolution of some glasses, notably spin-glasses, with scaling arguments based on domain growth ideas. The assumption is that the glassy dynamics is simply given by the growth of domains of two competing ground states. However, it has been very difficult to prove (or disprove!) either experimentally or numerically that this is indeed the scenario: no “ordered structures” have been identified in general as the growing phases.
Thus, domain growth, phase separation and glassy materials are all “self-sustained” (in the sense that no external perturbation is keeping them far from equilibrium) out of equilibrium systems. If one follows their time-evolution, keeping all parameters fixed, in particular the bath-temperature, some of the main features observed during their nonequilibrium evolution are:
Slow dynamics. The evolution is very slow. “One-time quantities”, such as the energy-density, approach their asymptotic limit with some slowly decaying function, say power law, logarithmic or more complicated. It is very important to notice though that even if these one-time quantities can get very close to their asymptotic values, this does not mean that the systems get frozen in a metastable state: they are not equilibrated in a restricted region of phase space characterised by these asymptotic values. This is most clearly demonstrated by the measurement of “two-time quantities”.
Two-time quantities and physical aging. The measurement of these quantities proves that, even if one-time quantities approach a limit, the system is still changing in an important way.
One can distinguish two types of two-time quantities: those measured during the free evolution of the system, quantifying the spontaneous fluctuations, such as any two-time correlation function, and those measured after applying a small perturbation, such as any response function. These quantities depend on both times involved in the measurement and not only on the time-difference. This shows that the systems are neither equilibrated with the bath nor have approached equilibrium in any metastable state. They are indeed rather far from equilibrium.
The measurement of the spontaneous fluctuations is quite easy to implement in a numerical simulation. One prepares the sample at an initial time $`t_o`$ and lets it evolve until a waiting time $`t_w`$ when the system configuration is recorded. One then lets the sample further evolve and computes, at all subsequent times $`t\equiv \tau +t_w`$, the correlation function between the reference configuration at $`t_w`$ and the configurations at $`\tau +t_w`$. These curves depend on both $`t_w`$ and $`\tau `$ and they are not invariant under time translations, showing that the system is out of equilibrium. Furthermore, the decay as a function of $`\tau `$ is slower the longer $`t_w`$. This is the phenomenon called physical aging: the younger (older) the sample the faster (slower) the decay.
The result of the measurement of a local auto-correlation function is very easy to visualize for domain growth. Take again the case of ferromagnetic domain growth. The dynamic variables are the Ising spins that we encode in a time-dependent $`N`$-dimensional vector $`\mathit{\varphi }(t)=(\varphi _1(t),\dots ,\varphi _N(t))`$ (the index $`i=1,\dots ,N`$ labels the spins) and the local auto-correlation function is just the scalar product of two configurations evaluated at different times, $`NC(t,t_w)=\mathit{\varphi }(t)\cdot \mathit{\varphi }(t_w)`$. For Ising spins, the auto-correlation function is normalized to one at equal times. A departure from one measures how different any two configurations are, such as those shown in Fig. 1. For any fixed $`t_w`$ the auto-correlation has two distinct regimes depending on the time-difference $`\tau \equiv t-t_w`$. Let us choose a waiting time $`t_{w_1}`$ and plot $`C(\tau +t_{w_1},t_{w_1})`$ as a function of $`\tau `$. The curve has a first fast decay from one to a bath-temperature dependent value $`q_{EA}(T)`$ (the temperature-dependent Edwards-Anderson parameter). This corresponds to the decorrelation associated to the fast flipping of the spins inside the domains. In this regime $`\tau `$ is small compared to an increasing function of the domain size, $`g(L(t_w))`$. When $`\tau `$ increases and becomes of the order of $`g(L(t_w))`$ one starts seeing the motion of the domain-walls, i.e. the growth of the domains, and the decay slows down. If one repeats this calculation choosing a longer waiting-time $`t_{w_2}>t_{w_1}`$, and its associated reference configuration, one observes that the first decay is identical to the one for $`t_{w_1}`$ though it lasts for longer, and that the second regime is notably slower than the one associated to $`t_{w_1}`$. These features can be easily understood. While $`\tau `$ is smaller than $`g(L(t_w))`$ the dynamics takes place only inside the domains as thermal fluctuations. The domain walls are ignored and the correlation behaves as if the system were a patchwork of the two equilibrium states. The correlation function decay is then independent of the waiting-time and approaches $`q_{EA}(T)=m_{\mathrm{eq}}^2(T)`$. However, after a time-difference of the order of $`g(L(t_w))`$ the system realizes it has domain walls, the subsequent decay is associated to the motion of the walls and is nonequilibrium in nature. The decay gets slower the longer the $`t_w`$ simply because the size of the domains reached at $`t_w`$ is larger.
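The protocol described above is straightforward to implement. The following is a minimal sketch (ours, with illustrative lattice size, temperature and times, not the code behind Fig. 1) for single-spin-flip Metropolis dynamics of a 2D Ising model: quench from a random configuration to $`T<T_c`$, record the configuration at $`t_w`$, and correlate it with the configurations at later times; repeating the run with a larger $`t_w`$ produces the slower decay discussed above.

```python
import numpy as np

rng = np.random.default_rng(0)
L, T = 64, 1.5                         # T < T_c ~ 2.269 of the 2D Ising model
spins = rng.choice([-1, 1], size=(L, L)).astype(float)  # quench from T = infinity

def metropolis_sweep(s, T):
    """One Monte Carlo sweep of single-spin-flip Metropolis dynamics."""
    for _ in range(s.size):
        i, j = rng.integers(L), rng.integers(L)
        nb = s[(i+1) % L, j] + s[(i-1) % L, j] + s[i, (j+1) % L] + s[i, (j-1) % L]
        dE = 2.0 * s[i, j] * nb
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            s[i, j] = -s[i, j]

t_w = 100                              # waiting time, in sweeps (illustrative)
for _ in range(t_w):
    metropolis_sweep(spins, T)
reference = spins.copy()               # configuration recorded at t_w

t = 0
for tau in (1, 10, 100, 1000):         # C(tau + t_w, t_w) for several tau
    while t < tau:
        metropolis_sweep(spins, T)
        t += 1
    print(tau, np.mean(reference * spins))  # decays from 1; slower for larger t_w
```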
For glassy systems the correlation functions have exactly the same qualitative behaviour though, as already mentioned, it is not easy to decide if there is any type of order growing. Plots like the one displayed in Fig. 2 have been obtained for an impressive number of glassy models of different nature. Some of them are the 3D Edwards-Anderson model, a polymer melt, a polymer in a random potential, a binary Lennard-Jones mixture, etc. Furthermore, the same kind of curves was found in several sandpile models and other kinds of systems.
The measurement of “dc-response” functions or, more precisely, integrated dc-responses is what is usually done experimentally. The starting procedure is similar to the preceding one: one prepares the sample at an initial time and lets it freely evolve until $`t_w`$. At this waiting time one applies a small, constant, perturbation and then measures the associated integrated response of the system as a function of $`\tau `$ for different $`t_w`$. For example, experimenting with spin-glasses one applies a small magnetic field and measures the increase of magnetization; manipulating with polymer glasses one applies a stress and measures the tensile creep compliance, etc. In all cases, the integrated responses are studied as functions of $`t_w`$ and $`\tau `$ and they all show aging effects that manifest in a similar way as in the correlation measurement. There is a first increase of the time-integrated susceptibility towards a value $`\chi _1(T)`$ that does not depend on the waiting-time while there is a second increase of the time-integrated susceptibility towards the equilibrium value $`\chi _{\mathrm{eq}}(T)`$ that is waiting-time dependent.
This can again be simply visualized in the domain growth problem. The first regime corresponds to the response of the spins inside the domains, i.e. to the response of the full system taken to be roughly a patchwork of independent equilibrium states. The second regime instead corresponds to the response of the domain walls. Since their density decreases as time elapses, one expects this nonequilibrium response to vanish in the long waiting-time limit.
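The dc-response protocol can be sketched in the same setting, continuing the previous snippet (it reuses `rng`, `L`, `T` and the configuration `reference` stored at $`t_w`$). Since a uniform field would itself bias the domain growth, numerical studies of this kind usually switch on a small random staggered field $`h_i=\pm h_0`$ at $`t_w`$ and record the conjugate staggered magnetization; the field amplitude below is an illustrative choice.

```python
h0 = 0.05                              # small random staggered field (illustrative)
eps_i = rng.choice([-1.0, 1.0], size=(L, L))

def metropolis_sweep_field(s, T, h):
    """Metropolis sweep with a site-dependent field h switched on at t_w."""
    for _ in range(s.size):
        i, j = rng.integers(L), rng.integers(L)
        nb = s[(i+1) % L, j] + s[(i-1) % L, j] + s[i, (j+1) % L] + s[i, (j-1) % L]
        dE = 2.0 * s[i, j] * (nb + h[i, j])
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            s[i, j] = -s[i, j]

spins = reference.copy()               # restart from the configuration at t_w
t = 0
for tau in (1, 10, 100, 1000):
    while t < tau:
        metropolis_sweep_field(spins, T, h0 * eps_i)
        t += 1
    print(tau, np.mean(eps_i * spins) / h0)  # integrated susceptibility chi(t, t_w)
```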
“Ac-response” measurements are also usually performed experimentally. In these experiments one applies a small ac field of fixed frequency $`\omega `$ at the initial time $`t_0`$ and keeps it applied until the measuring time $`t_w`$. The relation with the previous results is given by identifying $`\omega \sim 1/\tau `$. The out of equilibrium character of the evolution is given by an explicit $`t_w`$ dependence in the relaxation of the in- and out-of-phase susceptibilities.
It is important to notice that these effects are physical aging as opposed to chemical aging. Physical aging is totally reversible: it suffices to heat the sample above the transition range and cool it back again below it to recover a fully rejuvenated system.
The comparison of spontaneous fluctuations, e.g. a correlation function, to induced fluctuations, measured by its associated response function, is well established for systems evolving in equilibrium. Indeed, this relation involves the temperature of the bath and it is called the fluctuation-dissipation theorem. However, for systems that are far from equilibrium this relation does not necessarily hold.
The problem of reaching a theoretical understanding of nonequilibrium physics is important both from a practical and a theoretical point of view. It is obvious that for some applications, one would like to predict the time-evolution of the samples with great precision and avoid undesired changes that depend on the, sometimes unknown, age of the samples. From an analytic point of view, domain-growth and glassy materials are paradigms of out of equilibrium systems whose properties one could try to capture with simple models or simplified approaches to more complex models. The predictions thus obtained can then be experimentally (or numerically) tested in real systems. Importantly enough, one could also try to extend some of these predictions to other nonequilibrium systems such as those externally driven. Some connections between glassy systems and driven systems are discussed in Ref. \[\].
How can one model domain growth or glassy systems? The “microscopic” constituents and interactions and, consequently, the microscopic models differ from glass to glass. Spin-glasses are composed of magnetic impurities (spins) that occupy fixed random positions in the sample and interact via RKKY interactions; polymer glasses are composed of polymers (strings) that interact via potentials of (oversimplifying) Lennard-Jones type. In the former case, there is quenched (time-independent) disorder in the system that is associated to the random positions of the spins, which gives rise to random interactions between them (the RKKY interactions oscillate very rapidly with the distance between the magnetic impurities and change sign in an almost random manner). In the latter case nothing can be interpreted as being quenched disorder. Though one could expect that these two systems (and other types of glasses) behave very differently, the experimental results as well as the recent theoretical developments show that their dynamical behaviour is indeed rather similar. In other words, one can identify certain quantities that have the same qualitative behaviour.
In order to describe the dynamic evolution of a classical system in contact with an environment one starts by identifying the relevant variables of the system that evolve in time. One then proposes a Langevin equation, with noise and friction mimicking the coupling of the system to the thermal bath, to determine the time-evolution of the time-dependent variables. These variables are the (soft) spins in the case of a spin-glass, the monomer positions in the case of the polymer glass, etc. This procedure leads to a set of $`N`$ coupled differential equations of second (if inertia is included) or first (if inertia is neglected) order, with $`N`$ the number of dynamic variables in the system. Obviously, this huge system cannot be solved and one has to resort to some alternative method to further advance in the analysis.
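As an illustration of this step, the sketch below integrates $`N`$ overdamped Langevin equations $`\gamma \dot{\varphi }_i=-\partial V/\partial \varphi _i+\xi _i(t)`$, with Gaussian white noise of variance $`2\gamma T`$, by the Euler-Maruyama method. The quartic soft-spin potential with random symmetric couplings is only an assumption chosen to make the sketch concrete.

```python
import numpy as np

rng = np.random.default_rng(1)
N, T, gamma, dt = 500, 0.2, 1.0, 0.01  # illustrative parameters
J = rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))
J = 0.5 * (J + J.T)                    # symmetric random couplings (assumption)
np.fill_diagonal(J, 0.0)

def force(phi):
    # -dV/dphi for V(phi) = sum_i (phi_i^2 - 1)^2 - (1/2) phi . J . phi
    return -4.0 * phi * (phi**2 - 1.0) + J @ phi

phi = rng.normal(size=N)               # initial condition = quench at t_o
for step in range(10_000):
    noise = np.sqrt(2.0 * T * dt / gamma) * rng.normal(size=N)
    phi += (force(phi) / gamma) * dt + noise
# from trajectories like this one builds C(t, t_w) and R(t, t_w)
```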
Indeed, one would like to obtain information about “macroscopic” quantities, that can be related to experimental measurements, instead of following the erratic motion of any microscopic variable. The two-point correlation or response functions are macroscopic quantities we would like to monitor. A well-known theoretical method, known under the name of Martin-Siggia-Rose (MSR) formalism, allows us to obtain a generating functional, and from it the Schwinger-Dyson integro-differential equations, for these quantities. This method can be applied in complete generality, to any classical model. For models with non-linear interactions of finite range it requires the calculation of an infinite series of diagrams that one cannot in general resum and express in terms of two-point functions in an explicit form. One then faces the problem of choosing some approximation scheme to simplify this series expansion.
Before discussing how to deal with this problem, let us describe the formalism used to study a quantum system in contact with an environment. The Schwinger-Keldysh closed-time path (CTP) formalism was developed to monitor the nonequilibrium time-evolution of a quantum system, and to obtain information about two-time quantities. The environment is usually modelled by a set of harmonic oscillators (infinitely many for each variable in the system) with a spectral distribution of frequencies. The coupling of system and bath is usually chosen to be linear but of course more general situations can be considered. In this way, one obtains the CTP generating functional that, as in the classical case, involves a series expansion that, in general, cannot be obtained explicitly. (The classical limit of the CTP generating functional is the MSR generating functional.)
Typically, two routes are followed to approximate these generating functionals. They are equally applicable in the classical and quantum case and are the following:
* The microscopic models, namely the starting Hamiltonians, are simplified in such a way that the construction can be carried through and that explicit equations can be derived. This is the choice made when one uses, for example, the large $`N`$ limit of a $`O(N)`$-model to describe domain growth, fully connected spin models to describe spin-glasses, or when one embeds a finite dimensional manifold in an infinite dimensional space to describe the motion of an interface or of a polymer in a random medium.
* The microscopic models are realistic but some approximation scheme is chosen to select, from the infinite series, a still infinite subset of diagrams that can be resummed to yield an explicit set of dynamic equations for correlations and responses. Many such recipes exist in the literature, some of them are the mode-coupling approximation, the direct interaction approximation, the self-consistent screening approximation, etc.
These two procedures yield the same “form” of coupled integro-differential causal equations. Actually, in some cases one can show that a simplified microscopic model with infinite range interactions (e.g. the $`p`$ spin-glass model) yields the same dynamic equations as an approximation scheme (e.g. the mode-coupling approximation) applied to a more realistic model for a glassy material. The structure of these equations is always the same: there might be a second time-derivative term if there is inertia, some terms describe the interaction of the system with the bath and some other integral terms describe the interactions in the system (through the self-energy and vertex). It is the explicit form of the self-energy and vertex that is selected by the model or the approximation.
Once one has the equations governing the evolution of the two-time quantities, for all values of the parameters, the question then arises as to which is the phenomenology that they describe.
A combination of analytic and numeric methods is used to study these equations. One can attempt a numerical solution taking advantage of the fact that they are causal. The solution shows that they present a dynamic phase transition at a temperature $`T_d`$. Above $`T_d`$, the solution reaches, in the long waiting-time limit, a stationary form. All two-time correlations and responses are functions of the time-difference only and are related through the fluctuation-dissipation theorem. The high-temperature dynamic equations were studied in detail by Sompolinsky and Zippelius for spin-glass models, by Götze and collaborators for glass models, and the relation between these two was signalled and investigated by Kirkpatrick, Thirumalai and Wolynes in a series of beautiful papers.
Below $`T_d`$, a drawback of the numerical method is that, due to the slowness of the dynamics and the memory of the system, one cannot reach very long time intervals. The numerical solution gives us hints about the structure of the solution but does not give us extremely precise information about more detailed features such as the two-time scaling laws, etc. Nevertheless, the numerical solution sufficed to show that below $`T_d`$ two-time functions start depending on the waiting-time and that aging is captured by these equations.
Below $`T_d`$, and in the asymptotic limit of long waiting-time, an analytical solution was developed first for the $`p`$ spin-glass model and later for other mean-field disordered models such as Sherrington-Kirkpatrick or the motion of manifolds in infinite dimensional random potentials.
One of the main ingredients of this solution concerns the fluctuation-dissipation theorem that relates, in equilibrium, the spontaneous to the induced fluctuations. Indeed, if one follows the dynamics of a classical system that is in equilibrium with a bath, one can easily show that
$$R(t,t^{\prime })\equiv \left.\frac{\delta \langle O(t)\rangle }{\delta h(t^{\prime })}\right|_{h=0}=\frac{1}{T}\frac{\partial }{\partial t^{\prime }}\langle O(t)O(t^{\prime })\rangle \theta (t-t^{\prime })=\frac{1}{T}\frac{\partial }{\partial t^{\prime }}C(t,t^{\prime })\theta (t-t^{\prime }),$$
(1)
with $`O(t)`$ any observable taken to have zero mean for simplicity and $`h`$ an infinitesimal field acting at time $`t^{\prime }`$ that modifies the energy of the system according to $`V\to V-hO`$ and that is not correlated with the equilibrium configuration of the system.
In the glassy phase, this relation does not hold. This does not come as a surprise since the equilibrium condition under which it can be proven does not apply. What really comes as a surprise is that the modification of the relation between response and correlation takes a rather simple form for domain-growth and glassy systems.
A way to quantify the modification of FDT in the out of equilibrium phase and to use it to classify different systems is the following. Let us integrate the response function over a time-interval going from a waiting-time $`t_w`$ to a final time $`t`$:
$$\chi (t,t_w)=_{t_w}^t𝑑t^{}R(t,t^{}).$$
(2)
This yields a time-integrated susceptibility that is exactly what is measured experimentally. Next, we compare this integrated-susceptibility to the auto-correlation function. In equilibrium, one can use FDT to show that
$$\chi (t,t_w)=\frac{1}{T}\left(C(t,t)C(t,t_w)\right).$$
(3)
Hence, if one draws a plot of $`\chi `$ against $`C`$, for increasing $`t_w`$, using $`\tau =t-t_w`$ as a parameter, in the large $`t_w`$ limit the plot will approach a straight line of slope $`-1/T`$ joining $`(lim_{t\to \infty }C(t,t),0)`$ and $`(0,\chi _{\mathrm{eq}})`$. From now on and without loss of generality we take $`lim_{t\to \infty }C(t,t)=1`$. Any departure from this straight line signals a modification of FDT and a departure from equilibrium.
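Once $`C(t,t_w)`$ and $`\chi (t,t_w)`$ have been measured, the construction takes a few lines of code. The sketch below uses synthetic equilibrium data obeying Eq. (3) — an assumption standing in for measured arrays — and reads off the local slope, anticipating the effective-temperature interpretation discussed below.

```python
import numpy as np
import matplotlib.pyplot as plt

T = 0.5
tau = np.logspace(-2, 1, 200)
C = np.exp(-tau)                       # toy equilibrium correlation, C(t, t) = 1
chi = (1.0 - C) / T                    # FDT, Eq. (3)

plt.plot(C, chi)                       # a straight line of slope -1/T
T_eff = -1.0 / np.gradient(chi, C)     # local slope -> effective temperature
print(T_eff[:5])                       # recovers T everywhere for this toy data
plt.xlabel("C(t, t_w)"); plt.ylabel("chi(t, t_w)")
plt.show()
```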
The analytic solution of simplifed models shows that, in the nonequilibrium phase, this construction converges to a limiting curve given by
$`\underset{t_w\to \infty ,C(t,t_w)=C}{lim}\chi (t,t_w)`$ $`=`$ $`{\displaystyle \frac{1}{T_{\mathrm{eff}}(C)}}(1-C)`$ (4)
where $`T_{\mathrm{eff}}(C)`$ is a function of the correlation $`C`$. We shall discuss the notation and justify the name of this function below. In the large $`t_w`$ limit two distinct regimes develop in the $`\chi `$ vs $`C`$ curve. There is a first straight line of slope $`-1/T`$, joining $`(1,0)`$ and $`(q_{EA}(T),\chi _1(T))`$. This characterises what is called the FDT regime. The straight line then breaks and the $`\chi `$ vs $`C`$ curve goes on in a different manner. The subsequent behaviour depends on the model. Indeed, three families have been identified:
* Models describing domain growth like, for example, the $`O(N)`$ model in $`D`$ dimensions in the large $`N`$ limit. In this case, one follows the local correlation $`NC(t,t_w)\equiv \langle \mathit{\varphi }(𝒙,t)\cdot \mathit{\varphi }(𝒙,t_w)\rangle `$ and its associated local susceptibility. The plot for $`C\le q_{EA}(T)`$ is flat. The susceptibility gets stuck at its value $`\chi _1(T)`$ while the correlation continues decreasing towards zero. The same result holds for the Ohta-Jasnow-Kawasaki approximation to the $`\lambda \varphi ^4`$ model of phase separation.
* Models describing structural glasses like, for example, the so-called $`F_{p-1}`$ models of the mode-coupling approach or the $`p`$ spin-glass models. In this case the $`\chi `$ vs $`C`$ plot, for $`C\le q_{EA}(T)`$, is a straight line of slope larger than $`-1/T`$.
* Models describing spin-glasses like, for example, the Sherrington-Kirkpatrick model. In this case the $`\chi `$ vs $`C`$ plot, for $`C\le q_{EA}(T)`$, is a non-trivial curve.
This “classification” in three families has been checked numerically for more realistic models. Many numerical simulations using either Monte Carlo (MC) techniques or molecular dynamics (MD) have shown that several models fall into the expected categories. Some models belonging to the first group are the $`2D`$ Ising model with conserved and non-conserved order parameter, the site diluted ferromagnet and the random field Ising model and the $`2D`$ Ising model with ferromagnetic exchange and antiferromagnetic dipolar interactions. The binary Lennard-Jones mixtures are a standard model for the glass transition. Both MC and MD simulations show that they belong to the second class. Finally, MC simulations of finite dimensional spin-glass models, the three and four dimensional Edwards-Anderson model, yield the third kind of behaviour. Another particularly interesting problem, relevant for the physics of dirty superconductors, is the one of a manifold diffusing in a random potential. The analytic prediction using an infinite dimensional embedding space depends on the nature of the quenched random potential, namely on whether it is short- or long-range correlated. This prediction is partially confirmed by the simulations in finite dimensional transverse space with the proviso of a very interesting modification that is not captured by the infinite dimensional approach. Besides, numerical simulations of lattice-gas models with kinetic constraints and sandpile models also show FDT violations.
Once this modification of FDT in the nonequilibrium situation is identified, several questions arise, all connected with the initial purpose of checking which thermodynamic concepts can be applied, perhaps after some modifications, to the nonequilibrium case. In the following we discuss three interesting issues.
* Why is there always a two-time regime, when $`C`$ first decays from its equal times value to $`q_{EA}(T)`$, where FDT holds?
For the domain-growth problem the presence of this piece is easy to justify. In this time-scale one only sees the dynamics and the effect of the perturbation inside the domains. Since one can then ignore the presence of domain walls, the equilibrium relation between correlation and response is expected to hold. Of course, one cannot easily extend this argument to a more general situation. There is however a totally general reason for having FDT when $`Cq_{EA}(T)`$ and it is the following.
For any system in contact with an environment, with bounded correlation functions and without non-potential forces (other bounds can be found if diffusion and/or non-potential forces are allowed), the departure from FDT is bounded by
$$\left|T\chi (t,t_w)-C(t,t)+C(t,t_w)\right|\le K\int _{t_w}^t dt^{\prime }\left(-\frac{1}{\gamma N}\frac{d\mathcal{H}(t^{\prime })}{dt^{\prime }}\right)^{1/2}$$
(5)
where $`K`$ is a finite constant and $`\gamma `$ the friction coefficient that characterises the coupling to the bath. The Kubo $`\mathcal{H}`$-function is defined as
$$\mathcal{H}(t^{\prime })\equiv \int d\mathit{\varphi }\,d\dot{\mathit{\varphi }}\,P(\mathit{\varphi },\dot{\mathit{\varphi }},t^{\prime })\left(T\mathrm{ln}P(\mathit{\varphi },\dot{\mathit{\varphi }},t^{\prime })+V(\mathit{\varphi })+\frac{m\dot{\mathit{\varphi }}^2}{2}\right),$$
(6)
with $`P(\mathit{\varphi },\dot{\mathit{\varphi }},t)`$ the time-dependent probability distribution, $`V(\mathit{\varphi })`$ the potential energy and $`m`$ a mass. The $`\mathcal{H}`$-function satisfies $`\dot{\mathcal{H}}\le 0`$ for all times and it vanishes only for the canonical distribution.
From this bound one sees that if $`\mathcal{H}(t)`$ falls to zero faster than $`1/t`$ no FDT violations are allowed in the long $`t_w`$ limit since the right-hand-side in Eq. (5) vanishes. Instead, if $`\mathcal{H}(t)`$ falls to zero in a slower manner, FDT is imposed by the bound for small time-differences but violations are allowed for longer time-differences. This argument proves that there is always a region of correlations close to $`C=1`$ in which FDT holds, even for a system that is not close to equilibrium.
* Can one identify the slope of the plot with an inverse effective temperature and call it $`1/T_{\mathrm{eff}}(t,t_w)=1/T_{\mathrm{eff}}(C)`$?
About ten years ago, in the context of weak-turbulence, Hohenberg and Shraiman proposed to define an effective temperature through the departure from FDT. However, a detailed analysis of this quantity and its properties was not given in this reference.
Indeed, one expects that any quantity to be defined as a nonequilibrium effective temperature must fulfill the requirements associated to the intuitive idea of temperature. The first property to check is if this effective temperature is measurable by a thermometer that is weakly coupled to the system, in a statistical manner, at any chosen waiting time. This property can be proven by studying the time-evolution of the thermometer coupled to $`M`$ identical copies of the system, all of age $`t_w`$, and by verifying that its equation of motion becomes a Langevin equation in the presence of a thermal bath characterised by a coloured noise with correlation given by the system’s correlation and response given by the system’s response.
Thus, if the system has several time-scales characterized by different values of the effective temperatures (see Ref. \[\] for a precise definition of two-time scales)
$`C(t,t_w)`$ $`=`$ $`C^{\mathrm{fdt}}(t,t_w)+C^{(1)}(t,t_w)+C^{(2)}(t,t_w)+\cdots `$ (7)
$`R(t,t_w)`$ $`=`$ $`R^{\mathrm{fdt}}(t,t_w)+R^{(1)}(t,t_w)+R^{(2)}(t,t_w)+\cdots `$ (8)
with
$$R^{\mathrm{fdt}}(t,t_w)=\frac{1}{T}\frac{\partial }{\partial t_w}C^{\mathrm{fdt}}(t,t_w)\theta (t-t_w),\qquad R^{(i)}(t,t_w)=\frac{1}{T^{(i)}}\frac{\partial }{\partial t_w}C^{(i)}(t,t_w)\theta (t-t_w),$$
(9)
one can select which value $`T^{(i)}`$ is measured by choosing the internal time-scale of the thermometer. Say, for example, that the thermometer is a harmonic oscillator of internal frequency $`\omega _o`$. Then, one chooses the system time-scale to be explored, and hence the value of the effective temperature to be measured, by comparing $`1/\omega _o`$ to $`t_w`$.
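A synthetic example makes Eqs. (7)–(9) concrete. Assume (purely for illustration) an FDT part decaying on a fixed time-scale $`\tau _0`$ and a single slow part scaling as $`(t_w/t)^\gamma `$; integrating the corresponding responses gives a $`\chi `$ versus $`C`$ plot that is broken at $`C=q`$, with slopes $`-1/T`$ and $`-1/T^{(1)}`$:

```python
import numpy as np
import matplotlib.pyplot as plt

T, T1, q, tau0, g = 0.3, 1.0, 0.7, 1.0, 0.5   # illustrative parameters
t_w = 1.0e3
t = t_w + np.logspace(-2, 6, 400)             # times t > t_w
tau = t - t_w

# C = C^fdt + C^(1); chi is the exact integral of R^fdt + R^(1) from Eq. (9)
C = (1 - q) * np.exp(-tau / tau0) + q * (t_w / t)**g
chi = (1 - q) * (1 - np.exp(-tau / tau0)) / T + q * (1 - (t_w / t)**g) / T1

plt.plot(C, chi)   # slope -1/T for C > q, slope -1/T^(1) for C < q
plt.xlabel("C"); plt.ylabel("chi"); plt.show()
```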
Many desirable “thermodynamic” properties of $`T_{\mathrm{eff}}`$ defined in this way can also be checked, for example:
* $`T_{\mathrm{eff}}`$ controls the direction of heat flows.
* $`T_{\mathrm{eff}}`$ controls partial equilibrations between observables in a system that evolve in the same time-scales and interact strongly enough.
* Let us take two different glasses, in contact with a single bath of temperature $`T`$. These glasses are constructed in such a way that when they are not in contact each of them has a piecewise $`T_{\mathrm{eff}}(C)`$ of the form
$$T_{\mathrm{eff}}^{\mathrm{syst\,1}}(C)=\begin{cases}T&\text{if }C>q_{EA}^{(1)}\\ T^{(1)}&\text{if }C<q_{EA}^{(1)}\end{cases}\qquad T_{\mathrm{eff}}^{\mathrm{syst\,2}}(C)=\begin{cases}T&\text{if }C>q_{EA}^{(2)}\\ T^{(2)}&\text{if }C<q_{EA}^{(2)}\end{cases}$$
(14)
with $`T^{(1)}\ne T^{(2)}`$. One can then reproduce the experiment of setting two observables in contact by coupling these two systems through a small linear coupling between their microscopic variables. The result is that above a critical (though small) value of the coupling strength the two values of the effective temperature below $`q_{EA}`$ become equal, while below the same critical value of the coupling strength the values remain unaltered. One concludes that if the two observables interact strongly the systems arrange their time-scales in such a way as to partially thermalise.
The presence of non-trivial effective temperatures in glycerol out of equilibrium is presently being checked experimentally by Grigera and Israeloff. Their results show that, at fixed measuring frequency $`\omega _o\sim 8`$ Hz, this system has an effective temperature $`T_{\mathrm{eff}}>T=180`$ K until measuring times of at least $`10^5`$ sec, that is to say of the order of days! (Note that the bath temperature $`T`$ is below $`T_c=187`$ K.)
Further support to the notion of effective temperatures comes from the study of the effect of quantum fluctuations on the same family of models. Below a critical line, that separates glassy from equilibrium phases, and in the slow dynamic regime, one finds violations of the quantum fluctuation-dissipation theorem. These are characterised by the replacement of the bath temperature by an effective temperature $`T_{\mathrm{eff}}(t,t_w)`$. The effective temperature is again piecewise. It coincides with the bath-temperature $`T`$ when $`C`$ is larger than $`q_{EA}`$ and it is different when $`C`$ goes below $`q_{EA}`$. This nonequilibrium value has the nice property of being non-zero even at zero bath-temperature. Again, this result can be interpreted within the domain growth example. Whenever one looks at short time-differences with respect to the waiting-time one explores the quantum and thermal fluctuations in the bulk, i.e. one observes a quantum equilibrium dynamics that satisfies the quantum FDT. Instead, when $`\tau `$ is comparable to $`g(L(t_w))`$ one observes the domain wall motion. These are macroscopic objects for which quantum fluctuations do not have a strong effect. This can be seen, for example, in the form of the FDT violations: they look classical though with an effective temperature that depends on the strength of quantum fluctuations.
* Do effective temperatures in out of equilibrium systems emerge from a symmetry breaking?
In the classical case, one can study the structure of time-scales and effective temperatures with the help of the supersymmetric formulation of stochastic processes. Indeed, it is well-known that the effective action in the MSR generating functional is invariant under a supersymmetric group (with a possible symmetry breaking due to the initial condition).
In the kind of glassy systems we deal with, there is a neat separation of time-scales in the long waiting-time limit. This allows us to separate the dynamics in the fast scale from the dynamics in the slow time-scales. The equations governing the slow time-scales have an enlarged symmetry: they acquire an invariance under super-reparametrizations. The only solution that respects the full symmetry is a trivial, constant one. Hence, in order to have non-trivial dynamics in the long waiting-time limit, the system has to spontaneously break the super-reparametrization invariance. One can prove that the choice of effective temperatures is intimately related to the spontaneous breaking of this invariance.
A similar analysis in the quantum case remains to be developed.
In conclusion, we have summarized some interesting features of the slow out of equilibrium dynamics of domain growth and glassy systems. We have explained why these features arise in the domain-growth case. A similar understanding has not been reached for glassy systems yet. With the purpose of developing a “visual” understanding of glassy physics, a careful analysis of the statistics and organisation of the configurations visited by a glass model during its nonequilibrium evolution is in order.
The use of simplified models or, alternatively, of self-consistent approximations to more realistic ones has yielded a number of very interesting results and new predictions. In particular, these models capture much of the aging phenomenology of glassy systems. Surprisingly enough, even puzzling effects of temperature cyclings during aging in spin-glasses, and the absence of these effects in other kinds of glasses, can be described by fully-connected models. Some of these new predictions, notably the modification of FDT, have been tested numerically and experiments are now being performed. Obviously, it is desirable to go beyond these approximations and study more realistic models in finite dimensions. This, however, is a very difficult task.
There have been innumerable attempts to define a temperature for an out of equilibrium system. In particular, in the context of glassy materials, a “fictive temperature” is often introduced to describe some of the experimental findings. The effective temperature discussed in this article has the most welcome property of being measurable, hence being open to experimental tests. As far as we have checked the definition, it also has the welcome property of conforming to the common prejudices one has of a temperature. Of course there are still many open questions related to it. Just to mention one, let us say that it would be very interesting to extend the analytical experiment of “coupling a thermometer to a system” to the quantum case.
Acknowledgements
I wish to especially thank J. Kurchan with whom I have done much of the work on this subject and H. Castillo for suggestions concerning the preparation of this manuscript.
# The Schrödinger particle in an oscillating spherical cavity
## Abstract
We study a Schrödinger particle in an infinite spherical well with an oscillating wall. Parametric resonances emerge when the oscillation frequency is equal to the energy difference between two eigenstates of the static cavity. Whereas an analytic calculation based on a two-level system approximation reproduces the numerical results at low driving amplitudes $`ϵ`$, we observe a drastic change of behaviour for $`ϵ>0.1`$, where new resonance states appear bearing no apparent relation to the eigenstates of the static system.
We study in this article the behaviour of a Schrödinger particle confined in a spherical cavity with an oscillating boundary, which constitutes a particular kind of time-dependent perturbation. Our study provides a conceptually simple “laboratory” in which the subtle and nontrivial aspects of the resonant coupling between the oscillating wall and a particle trapped inside the cavity can be investigated. Our original motivation in this work comes from our attempt to construct a dynamical bag model of hadrons; however, our results may bear implications for a wide range of systems such as cavity QED and perhaps even sonoluminescence.
The system of a one-dimensional vibrating perfect cavity with quantized electromagnetic fields has been well studied. It was found that the electromagnetic field energy density inside a cavity vibrating at one of its resonance frequencies concentrates into narrow peaks regardless of the detailed trajectories of the oscillating cavity wall. Furthermore, the amplitudes of these energy wave packets grow rapidly in time, producing sharp and intense pulses of photons. The distortion of the vacuum fields arising from the cavity wall motions leads to dynamical modifications of the Casimir effects, which represents a fundamentally important and interesting feature of quantum physics. The problem of a quantum particle in a box with moving walls has also been studied with an analytical approach, but the possibility of resonances was not discussed, which is the main interest in this work.
If the oscillation amplitude $`ϵR_0`$ is small compared to the original cavity radius $`R_0`$, perturbation theory can be used to calculate the transition amplitudes between two states of the unperturbed system. This corresponds to what is usually observed in experiments. However the non-perturbative solutions of the complete time-dependent Hamiltonian ($`H=H_0+H_1(t)`$), where $`H_0`$ is the time-independent part of the Hamiltonian, can in principle be remarkably different from the perturbative ones and can give rise to non-trivial features.
We consider, as a first step, an infinite spherical well with oscillating walls:
$$V(r)=\{\begin{array}{cc}0\hfill & \text{if }r<R(t)\hfill \\ \mathrm{}\hfill & \text{if }rR(t)\hfill \end{array},$$
(1)
where $`R(t)=R_0(1+ϵ\mathrm{sin}\nu t)\equiv R_0/\alpha (t)`$. Transforming to a fixed spatial domain via $`\stackrel{}{y}\equiv \alpha (t)\stackrel{}{r}`$, $`y\equiv |\stackrel{}{y}|<R_0`$, and renormalizing the wavefunction $`\varphi (\stackrel{}{y},t)\equiv \alpha ^{-3/2}(t)\psi (\stackrel{}{r},t)`$ in order to preserve unitarity, we have
$$i\mathrm{}\frac{\varphi }{t}=H_0\varphi +H_1(t)\varphi ,$$
(2)
where
$$H_1(t)\equiv \left(\alpha ^2(t)-1\right)H_0-\frac{\dot{R}(t)}{R(t)}\left(\stackrel{}{y}\cdot \stackrel{}{p}-i\frac{3}{2}\hbar \right)$$
(3)
can be considered a small time-dependent perturbation if $`ϵ`$ and $`\nu `$ are small enough.
Since $`H_1(t)`$ commutes with $`L^2`$ and $`\stackrel{}{L}`$, we can look for solutions that are eigenstates of the angular momentum. This allows us to separate the angular dependence from the radial one in Eq. 2 to obtain:
$$\begin{array}{cc}\frac{\partial }{\partial t}\varphi (y)\hfill & =i\frac{\hbar }{2m}\alpha ^2(t)\left[\frac{\partial ^2}{\partial y^2}+\frac{2}{y}\frac{\partial }{\partial y}-\frac{l(l+1)}{y^2}\right]\varphi (y)\hfill \\ & +\frac{\dot{R}(t)}{R(t)}\left(y\frac{\partial }{\partial y}+\frac{3}{2}\right)\varphi (y).\hfill \end{array}$$
(4)
Using first-order perturbation theory, one can easily calculate the coefficients of the solution’s expansion in terms of the unperturbed eigenstates. If the initial state is chosen to be $`|i>=|n=k,l=0>`$ ($`\varphi _{n,0}=\sqrt{2}n\pi j_0(n\pi y)`$), we have
$$\begin{array}{l}c_n^0(t)=\delta _{nk}\hfill \\ c_n^1(t)=\frac{i}{\hbar }\delta _{nk}E_k\int _0^t dt^{\prime }\left(1-\alpha ^2(t^{\prime })\right)\hfill \\ \quad -(-1)^{n-k}\frac{2nk}{n^2-k^2}(1-\delta _{nk})\int _0^t dt^{\prime }e^{\frac{i}{\hbar }(E_n-E_k)t^{\prime }}\frac{\dot{R}(t^{\prime })}{R(t^{\prime })}.\hfill \end{array}$$
(5)
The term due to $`i\hbar \frac{\dot{R}(t)}{R(t)}\frac{3}{2}`$ is exactly canceled out by the diagonal contribution of $`\frac{\dot{R}(t)}{R(t)}\stackrel{}{y}\cdot \stackrel{}{p}`$. The last integral is analytically solvable for $`\nu =\omega _{nk}=(E_n-E_k)/\hbar `$ and yields
$$\begin{array}{l}\int _0^t dt^{\prime }e^{i\omega _{nk}t^{\prime }}\frac{\dot{R}(t^{\prime })}{R(t^{\prime })}=\frac{\omega _{nk}t}{ϵ}+\mathrm{cos}\omega _{nk}t-1\hfill \\ \quad -2\frac{\sqrt{1-ϵ^2}}{ϵ}\left[\mathrm{arctan}\left(\frac{ϵ+\mathrm{tan}\left(\frac{\omega _{nk}t}{2}\right)}{\sqrt{1-ϵ^2}}\right)-\mathrm{arctan}\left(\frac{ϵ}{\sqrt{1-ϵ^2}}\right)\right]\hfill \\ \quad +i\left[\mathrm{sin}\omega _{nk}t-\frac{1}{ϵ}\mathrm{ln}(1+ϵ\mathrm{sin}\omega _{nk}t)\right].\hfill \end{array}$$
(6)
The secular term $`\omega _{nk}t/ϵ`$ in Eq. 6 is a typical sign of a resonance. Notice that the secular term does not multiply a periodic function and that the amplitude $`ϵ`$, which we suppose to be small, appears in the denominator. We can easily check that this is not a problem if we make a Taylor expansion of $`\mathrm{arctan}\left[\left(ϵ+\mathrm{tan}\left(\omega _{nk}t/2\right)\right)/\sqrt{1-ϵ^2}\right]`$ in powers of $`ϵ`$ near $`ϵ=0`$, since the zeroth-order term exactly cancels the secular term. However the increase of $`c_n^1(t)`$ in time remains.
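Eq. 6 is easy to check numerically. The short sketch below (an illustration, not part of the original calculation) compares a direct quadrature of the integral with the closed form, for parameters chosen so that $`\omega _{nk}t/2`$ stays on the principal branch of the arctangent:

```python
import numpy as np
from scipy.integrate import quad

eps, w, t = 0.05, 14.8044, 0.15     # illustrative values; w*t/2 < pi/2 here

def integrand(tp, part):
    rdot_over_r = eps * w * np.cos(w * tp) / (1.0 + eps * np.sin(w * tp))
    val = np.exp(1j * w * tp) * rdot_over_r
    return val.real if part == "re" else val.imag

num = (quad(integrand, 0.0, t, args=("re",))[0]
       + 1j * quad(integrand, 0.0, t, args=("im",))[0])

s = np.sqrt(1.0 - eps**2)
closed = (w * t / eps + np.cos(w * t) - 1.0
          - (2.0 * s / eps) * (np.arctan((eps + np.tan(w * t / 2.0)) / s)
                               - np.arctan(eps / s))
          + 1j * (np.sin(w * t) - np.log(1.0 + eps * np.sin(w * t)) / eps))
print(num, closed)                  # the two agree to quadrature accuracy
```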
We can now calculate the expectation value of any observable as a function of time. We define the following dimensionless quantities:
$$\begin{array}{c}\stackrel{~}{E}\equiv mR_0^2E/\hbar ^2,\hfill \\ \stackrel{~}{\nu }\equiv mR_0^2\nu /\hbar .\hfill \end{array}$$
(7)
The perturbative results are in excellent agreement with the numerical ones when the cavity is oscillating out of the resonances. For example, at $`\stackrel{~}{\nu }=7,ϵ=0.01`$ the fluctuations of the energy (Fig. 1) correspond almost exactly to those of $`1/R^2(t)`$, as one can expect from a quasistatic approximation, even though our system is not quasistatic. Even at high frequencies such as at $`\stackrel{~}{\nu }=90,ϵ=0.01`$, the first-order perturbative results are still acceptable (Fig. 2a). Notice that in this case the energy is shifted up slightly and its fluctuations in time are smaller. This is due to the fact that the system is no longer able to follow the fast oscillations of the walls, and consequently the fluctuations as well as the value of the r.m.s. radius $`R_s\equiv \langle (y/R_0)^2\rangle ^{1/2}`$ are suppressed slightly (see Fig. 2b).
At resonances, the perturbative approach breaks down and gives only an indication that a resonance exists. In order to study these resonances we solved the Schrödinger equation numerically, using a unitary numerical algorithm. For $`\stackrel{~}{\nu }=\stackrel{~}{E}_2-\stackrel{~}{E}_1`$, we calculated the expectation values of the energy $`U\equiv \langle \stackrel{~}{E}\rangle `$ and $`R_s`$, choosing $`|n=1,l>`$ as the initial state. In Fig. 3 we plotted the results for $`l=0`$ and $`l=1`$ ($`l=0,\stackrel{~}{E}_2-\stackrel{~}{E}_1=14.8044;l=1,\stackrel{~}{E}_2-\stackrel{~}{E}_1=19.7444`$) and two different values of $`ϵ`$. The values for $`\stackrel{~}{\nu }=7`$ are also plotted for comparison. The drastic change of behaviour of the system at the resonant frequency is evident even for very small amplitudes such as $`ϵ=0.001`$.
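Purely as an illustration of such a scheme (the algorithm actually used is the one of the reference, which may differ in detail), a Crank-Nicolson integrator — unitary for a Hermitian, time-dependent Hamiltonian — can be written down for the $`l=0`$ case of Eq. 4 in the dimensionless units of Eq. 7, where $`\hbar =m=R_0=1`$ and $`\stackrel{~}{E}_n=n^2\pi ^2/2`$. With the standard substitution $`u(y,t)=y\varphi (y,t)`$ the equation becomes $`i\,\partial u/\partial t=-(\alpha ^2/2)\,\partial ^2u/\partial y^2+i(\dot{R}/R)(y\,\partial u/\partial y+u/2)`$ with $`u(0)=u(1)=0`$. The grid size and time step below are illustrative:

```python
import numpy as np

eps, nu = 0.01, 14.8044                # amplitude; driving at the l = 0 resonance
J, dt, nsteps = 200, 1e-4, 2000        # grid and time step (illustrative)
y = np.linspace(0.0, 1.0, J + 1)[1:-1] # interior points; u(0) = u(1) = 0
h = 1.0 / J
u = np.sqrt(2.0) * np.sin(np.pi * y)   # initial state n = 1, u = y * phi_{1,0}

def hamiltonian(t):
    a2 = 1.0 / (1.0 + eps * np.sin(nu * t))**2                     # alpha^2(t)
    rr = eps * nu * np.cos(nu * t) / (1.0 + eps * np.sin(nu * t))  # Rdot / R
    H = np.zeros((J - 1, J - 1), dtype=complex)
    idx = np.arange(J - 1)
    H[idx, idx] = a2 / h**2                    # kinetic term, diagonal part
    # dilation term i(Rdot/R)(y d/dy + 1/2) discretized via its symmetrized
    # form (1/2)(y d/dy + d/dy y), which keeps H exactly Hermitian on the grid
    ym = 0.5 * (y[:-1] + y[1:])
    H[idx[:-1], idx[:-1] + 1] = -a2 / (2 * h**2) + 1j * rr * ym / (2 * h)
    H[idx[1:], idx[1:] - 1] = -a2 / (2 * h**2) - 1j * rr * ym / (2 * h)
    return H

one = np.eye(J - 1)
t = 0.0
for _ in range(nsteps):
    H = hamiltonian(t + dt / 2.0)              # midpoint evaluation in time
    u = np.linalg.solve(one + 0.5j * dt * H, (one - 0.5j * dt * H) @ u)
    t += dt
print(h * np.sum(np.abs(u)**2))                # norm stays ~ 1 (unitarity)
```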
At resonances the maximum expectation value of the energy, $`U_{\mathrm{max}}`$, varies as a function of $`ϵ`$ because of the trivial adiabatic factor $`\alpha ^2(t)`$ and, more importantly, non-trivial excitation processes. In Fig. 4 we show $`\mathrm{max}[\alpha ^2(t)U]`$ vs. $`ϵ`$. For very small $`ϵ`$ ($`ϵ<0.002`$), the perturbation is not strong enough and the probability of exciting the second eigenstate never reaches $`1`$. The expectation value of the energy saturates (and equals $`\stackrel{~}{E}_2`$) for $`0.006<ϵ<0.1`$. In this regime, the frequency dependence of the energy maxima is well fitted by a Breit–Wigner function: $`U_{\mathrm{max}}=\stackrel{~}{E}_1+C/[(\stackrel{~}{\nu }-\stackrel{~}{\nu }_0)^2+\mathrm{\Gamma }^2/4]`$, and the width $`\mathrm{\Gamma }`$ increases linearly with $`ϵ`$ up to $`ϵ\sim 0.1`$. For $`ϵ>0.1`$, even higher states are excited.
Projecting the numerical solution on the eigenstates of the static system we found the expected result that for $`ϵ<0.1`$, the resonant dynamics is dominated by the lowest two eigenfunctions. This fact allows us to study the resonating system as a two-level system. In this case the differential equations for the coefficients reduce to:
$$\dot{c}_1=-\frac{i}{\hbar }\left[V_{11}(t)c_1+V_{12}(t)e^{-i\omega _{21}t}c_2\right],$$
(8)
$$\dot{c}_2=-\frac{i}{\hbar }\left[V_{21}(t)e^{i\omega _{21}t}c_1+V_{22}(t)c_2\right],$$
(9)
where $`V_{ij}(t)\equiv <i|H_1(t)|j>`$. Using the fact that $`c_i(t)`$ changes little in a period $`T=2\pi /\omega _{21}`$, we can average Eq. 8 and Eq. 9 over a period to cast them into two coupled first-order ODE’s with constant coefficients:
$$\dot{c}_i=\sum _jW_{ij}c_j.$$
(10)
Neglecting higher order terms in $`ϵ`$, we have
$$W_{11}=W_{22}=0,$$
(11)
$$W_{21}=-W_{12}=\frac{4\left(1-\sqrt{1-ϵ^2}\right)}{3ϵ}\omega _{21}\equiv \mathrm{\Omega }.$$
(12)
The system can then be diagonalized easily, giving $`c_1(t)=\mathrm{cos}\mathrm{\Omega }t`$ and $`c_2(t)=\mathrm{sin}\mathrm{\Omega }t`$.
When $`ϵ\ll 1`$ then $`\mathrm{\Omega }\simeq 2\omega _{21}ϵ/3`$ and the period of the resonance diverges, $`lim_{ϵ\to 0}T_r=lim_{ϵ\to 0}2\pi /\mathrm{\Omega }=\infty `$. In the other limit, when $`ϵ\to 1`$, then $`\mathrm{\Omega }\to 4\omega _{21}/3`$, but in this case our assumption that $`c_i(t)`$ changes little in a period is no longer true and the averaging method is no longer valid. In Fig. 5 we plot the expectation value of the energy $`U=\alpha ^2(t)(\stackrel{~}{E}_1\mathrm{cos}^2\mathrm{\Omega }t+\stackrel{~}{E}_2\mathrm{sin}^2\mathrm{\Omega }t)`$ and compare it with the numerical results. For amplitudes $`0.005<ϵ<0.1`$ the agreement is excellent.
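For reference, the averaged two-level prediction is trivial to evaluate; the following few lines (ours) compute $`\mathrm{\Omega }`$ from Eq. 12 for $`l=0`$, compare it with its small-$`ϵ`$ limit, and tabulate $`U(t)`$ over one resonance period $`T_r`$:

```python
import numpy as np

eps = 0.01
E1, E2 = np.pi**2 / 2.0, 2.0 * np.pi**2     # dimensionless, l = 0: n^2 pi^2 / 2
w21 = E2 - E1                                # = 14.8044, the resonant frequency
Omega = 4.0 * (1.0 - np.sqrt(1.0 - eps**2)) / (3.0 * eps) * w21
print(Omega, 2.0 * eps * w21 / 3.0)          # compare with the small-eps limit

t = np.linspace(0.0, 2.0 * np.pi / Omega, 1000)   # one resonance period T_r
alpha2 = 1.0 / (1.0 + eps * np.sin(w21 * t))**2
U = alpha2 * (E1 * np.cos(Omega * t)**2 + E2 * np.sin(Omega * t)**2)
print(U.min(), U.max())                      # oscillates between ~E1 and ~E2
```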
The matrix $`W_{ij}`$ can be written as $`-i\mathrm{\Omega }\sigma _2`$, where $`\sigma _2`$ is the second Pauli matrix. It follows that the vector formed by the coefficients $`c_1`$ and $`c_2`$ behaves like the spinor of a spin $`1/2`$ particle in a magnetic field along the $`\widehat{ȷ}`$ axis:
$$i\frac{\partial |\mathrm{\Psi }>}{\partial t}=\mathrm{\Omega }\sigma _2|\mathrm{\Psi }>.$$
(13)
Therefore, if the initial state of the particle inside the oscillating cavity is one of the two eigenstates involved in the resonance, which corresponds to an eigenstate of $`S_z`$, the evolution of the system will be a precession of $`<\stackrel{}{S}>`$ around the $`\widehat{ȷ}`$ axis. On the other hand, if the initial state corresponds to an eigenstate of $`S_y`$ we will obtain a stationary solution: $`|\mathrm{\Psi }(t)>=e^{\mp i\mathrm{\Omega }t}|\mathrm{\Psi }(0)>`$, which translates to
$$\varphi _\pm (y,t)=\sqrt{\frac{\alpha ^3(t)}{2}}e^{\mp i\mathrm{\Omega }t}\left[e^{-i\frac{E_1}{\hbar }t}\varphi _1(y)\pm ie^{-i\frac{E_2}{\hbar }t}\varphi _2(y)\right].$$
(14)
The wavefunction in Eq. 14 is periodic with period $`T=2\pi /\omega _{21}`$:
$$\begin{array}{ccc}\varphi _\pm (y,t+T)=e^{i\theta }\varphi _\pm (y,t),\hfill & & \end{array}$$
(15)
where $`\theta \equiv -2\pi [E_2/(\hbar \omega _{21})\pm 4(1-\sqrt{1-ϵ^2})/3ϵ]`$.
We calculated numerically the solution choosing as initial function one of the two of Eq. 14 at $`t=0`$, and in Fig. 5 we show the resulting $`U`$. Although $`\alpha ^2(t)U(t)`$ is not strictly constant, its variation is considerably smaller than for the other solutions. It is remarkable that such a highly dynamical system can show a quasi-stationary behaviour.
For $`ϵ>0.1`$ the two-level approximation starts to break down. For $`ϵ=0.15`$ the third and fourth eigenstates become as important as the first two, and even more states are involved as one increases $`ϵ`$ further. The behaviour of the system changes drastically for $`ϵ>0.1`$, and we even observe the emergence of several new resonances that seem to have no straightforward explanation in terms of the unperturbed eigenstates. In Fig. 6 we show the maxima of $`\alpha ^2(t)U(t)`$ computed numerically for several driving frequencies choosing as initial state $`|n=1,l=0>`$. The resonance at $`\nu =\omega _{21}`$ is indicated, and it is much broader and smaller in amplitude compared to the new non-trivial resonances. It is interesting to note that even at these new resonances, the coefficients of the expansion in the static eigenstates are still approximately periodic. It may be possible to understand these new resonances for $`ϵ>0.1`$ by including a few more levels in the two-level approximation. However the complexity of the system in this case warrants further study.
For $`ϵ<0.005`$ the two-level approximation fails again; it continues to give the maximum of the expected energy as $`\stackrel{~}{E}_2`$, typical of two-level systems, while in the complete system the energy maximum decreases as $`ϵ`$ is reduced. Also, the two-level approximation gives a period of the resonance $`T_r`$ greater than that of the complete system.
We emphasize that the resonances we studied here are caused exclusively by the motion of the cavity wall, since the system has no interaction with electromagnetic fields. Another interesting feature of our system is the independence of its dynamics on $`R_0`$ except for the rescaling of the oscillating frequency.
It is also possible to consider a real system, hence with the electromagnetic interaction, in which an “oscillating-cavity” resonance occurs but the Rabi resonances do not. In fact, to observe Rabi resonances we need a cavity with radius $`R_0`$ such that the fundamental frequency of the electromagnetic field $`\nu _0=2\pi c/R_0`$ is equal to the difference between two energy levels, $`E_n-E_k\sim \hbar ^2\pi ^2/2mR_0^2`$. It is hence not difficult to choose an $`R_0`$ such that the Rabi resonances are not excited. In practice though, maintaining a stable mechanical oscillation with frequencies higher than some MHz is difficult.
For simplicity we have only considered a spherically symmetric cavity with a perfect wall. However, we conjecture that the resonances should not be too sensitive to the symmetry of the perturbation and to the detailed shape of the potential as long as the matrix element $`V_{12}`$ (see Eq. 9) is different from zero. One possibility is to use a microcrystal of conducting material with separations between the levels inside the conduction band of the order of $`10^{-11}`$ eV ($`\sim 100`$ kHz). Forcing the crystal to vibrate at one of the resonant frequencies should excite many of the Fermi-level electrons, which decay by emitting radio waves. A second way could be to use a system with several, almost equispaced, energy levels. At a resonant frequency the particle, an electron or a trapped atom for example, absorbs energy from the driving oscillation to jump from one level to the next one and so on, as long as the resonance condition $`\stackrel{~}{\nu }\approx \stackrel{~}{E}_{n+1}-\stackrel{~}{E}_n`$ is satisfied. In this way the frequency of the emitted quanta can be higher than the oscillation frequency, making them distinguishable from the electromagnetic noise due to dipole radiation at the driving frequency.
In a further study we will consider a system with many equispaced energy levels and analyze the increase in energy with time. Ideally from such a system one can get quanta of frequency much higher than the driving frequency, and this is a major difference compared to the cavity QED situation, where at resonances typically a great increase in the number of photons with the same frequency as the driving force is expected.
We thank Dr. C. K. Law for his suggestion of the two-level approximation. This work is partially supported by the Hong Kong Research Grants Council grant CUHK 312/96P and a Chinese University Direct Grant (Project ID: 2060093).
# Dark matter halos and the anisotropy of ultra-high energy cosmic rays
## 1 Introduction
The problem of the origin of ultra high-energy cosmic rays (UHECR) is receiving considerable attention. The situation is very well known and need only be summarized briefly. Shortly after the discovery of the cosmic background radiation Greisen (1966) and Zatsepin and Kuzmin (1966) pointed out that interactions of cosmic ray protons and nuclei with the 2.7 K radiation field would severely deplete the number of events at energies beyond about $`4\times 10^{19}`$ eV. General acceptance that events exist beyond what has come to be known as the GZK cut-off has been long in coming but recently a consensus has emerged that there is indeed an excess of events beyond $`10^{20}`$ eV which cannot be explained by observational errors or uncertainties in energy estimates.
Very recently (Takeda et al. 1998) the Japanese AGASA project has reported 6 events above this energy with a spectrum which appears to be in contradiction with what would be expected if the sources of these particles were universal, although, as demonstrated by Medina Tanco (1998), the number of events is not large enough to rule out an association with nearby extragalactic luminous matter.
The agreement of the AGASA spectrum with those from the other giant shower detectors serves to underline the reality of the events of greater than $`10^{20}`$ eV reported from them. We note that $`\sim 13`$ events have been reported overall for which the energies are claimed to be above $`10^{20}`$ eV: AGASA (7) (Takeda et al. 1999), Volcano Ranch (1) (Linsley 1963), Haverah Park (4) (Lawrence, Reid and Watson 1991), Fly’s Eye (1) (Bird et al. 1993) and Yakutsk (1) (Efimov et al. 1991). The distribution of events recorded by each experiment is in reasonable agreement with their individual exposures (Watson 1998). Not only are the particles above $`10^{20}`$ eV unexpected in the face of the GZK cut-off but also many theorists find it impossible to envisage electromagnetic methods of acceleration to these energies.
The experimental situation with regard to the arrival direction distribution of UHECR is less clear cut than it is for the energy spectrum. Using a data set dominated by Haverah Park events, Stanev et al. (1995) claimed that cosmic rays above $`4\times 10^{19}`$ eV showed a correlation with the direction of the Super Galactic Plane: the level of significance was 2.5 - 2.8 sigma. Later studies with AGASA data (Hayashida et al. 1996) and with Fly’s Eye data (Bird et al. 1998) did not support this claim. Very recently the AGASA group (Takeda et al. 1999) have released details of 581 events above $`10^{19}`$ eV recorded by them. Of these 47 are above $`4\times 10^{19}`$ eV and 7 are above $`10^{20}`$ eV. There is no evidence within this consistent data set to support an anisotropy associated with the Super Galactic Plane but they find some evidence of clustering on an angular scale of $`2.5^{\circ }`$: there are three doublets and one triplet, the chance occurrence of which is calculated as less than 1%. The triplet and a doublet, which becomes a triplet if a $`10^{20}`$ eV event from Haverah Park is added, lie close to the Super Galactic Plane. This work extends a similar earlier analysis by Uchihori et al. (1997) using a set of data containing events from several experiments. If clustering of cosmic rays is established in very much larger data sets it will have profound implications for our ideas about cosmic ray origin. For example Farrar and Biermann (1998) have claimed an association with radio-loud QSOs for 5 of the most energetic events. While their statistical analysis has recently been challenged by Hoffman (1999), the idea is now capable of an independent test with the precise directions of the new AGASA events (Takeda et al. 1999). So far evidence for departures from isotropy has proved elusive.
At $`4\times 10^{19}`$ eV about 50% of the events are expected to come from within $`\sim 130`$ Mpc while at $`10^{20}`$ eV the 50% distance is only $`\sim 19`$ Mpc (Hillas, 1998b). The isotropy of these events, which must originate so close to our galaxy, has prompted a number of authors to propose that the particles may come from the decay of super-heavy relic particles gravitationally bound within the galactic halo. Such super-heavy relics are postulated as having been created in the re-heating which may follow early Universe inflation (Berezinsky, Kacheltiess and Vilenkin (1997), Benkali, Ellis and Nanopoulos (1998) and Birkel and Sarkar (1998)). That such a bold hypothesis is advocated is a measure of the difficult situation in which observation has placed theoretical expectation. The situation is so acute that ideas such as the acceleration of Dirac monopoles by the galactic magnetic field (Kephart and Weiler 1996) and the breakdown of Lorentz invariance (Gonzalez-Mestres 1997, Coleman and Glashow 1998) are amongst those proposed to solve the enigma.
The question of super-heavy relics residing in the galactic halo and providing a small fraction of the cold dark matter has attracted recent attention (Berezinsky, Blasi and Vilenkin 1998, Dubovsky and Tinyakov 1998, Hillas 1998a, Berezinsky and Mikhailov 1998 and Benson, Smialkowski and Wolfendale 1998). In the latter two papers estimates of the anisotropy expected have been made and Benson et al. have compared their predictions with observation. The present paper extends these analyses and presents the results of the calculation in a way which demonstrates acutely the need to have improved measurements of the UHECR from both the Northern and the Southern Hemispheres to help resolve the issue of a halo contribution to the UHECR.
## 2 Calculations and Discussion
### 2.1 Anisotropy associated with the halo
In what follows, we will limit the analysis to the anisotropy observed at Earth due to the possible origin of UHECR from the decay of primaries resident in the galactic halo. While we have been motivated by the idea of the decay of super-heavy relic particles our results are of relevance to any type of source of UHECR distributed throughout the galactic halo.
If UHECR are gamma-rays or neutrons, then their propagation is rectilinear and no further assumptions are required. If, on the other hand, UHECR are mainly charged particles, as seems more likely from the muon content of the largest AGASA event (Hayashida et al. 1996) and the profile of the largest Fly’s Eye event (Bird et al. 1993), then they will be deflected by the magnetic field inside the halo. In the latter case, a good description of the topology and intensity of the halo magnetic field, $`B_H`$, is necessary for a rigorous estimate of the anisotropy observed at Earth. Unfortunately, there are large uncertainties regarding $`B_H`$ (Kronberg 1994, Beck et al. 1996, Vallée 1997). However, the higher the particle energy, the smaller the deflection. Using an axisymmetric spiral field without reversals and with even (quadrupole type) parity in the direction perpendicular to the galactic plane (Stanev 1997), which is consistent with the observations of our own and other spiral galaxies (Beck et al. 1996, Kronberg 1994), it has been shown (Medina Tanco 1997, 1998, Medina Tanco et al. 1998) that, upon traversing a $`20`$ kpc halo: (a) protons with $`E\sim 4\times 10^{19}`$ eV are deflected through angles $`\alpha <10^o`$ ($`\alpha <5^o`$ at galactic latitude $`|b|>60^o`$) unless their trajectories cross the central regions of the galaxy; (b) the deflections suffered by protons are reduced to $`\alpha <5^o`$ at $`E\sim 10^{20}`$ eV for most directions; (c) heavier nuclei, in particular Fe, are deflected by up to $`40^o`$ for most arrival directions even at energies as high as $`E\sim 2\times 10^{20}`$ eV. In what follows only rectilinear propagation will be considered and so, unless the UHECR are neutral, the results should only be applied to the highest energy particles.
The emissivity of UHECR per unit volume is proportional to the number density of potential sources in the halo, $`n_{SHR}(\underset{¯}{\text{r}})`$ which, in turn, we will assume to be proportional to the dark matter density inside the galactic halo, $`n_H(\underset{¯}{\text{r}})`$ where $`\underset{¯}{\text{r}}`$ is the position vector in a galactocentric reference system. Therefore, the incoming flux of UHECR from a solid angle $`\delta \mathrm{\Omega }(\underset{¯}{\overset{^}{\text{r}}}^{})`$, around the direction $`\underset{¯}{\overset{^}{\text{r}}}^{}`$, defined in a geocentric coordinate system is:
$$\delta \mathrm{\Phi }\propto ∫_{V_{\delta \mathrm{\Omega }}}\frac{n_H\left[\underset{¯}{\text{r}}(\underset{¯}{\text{r}}^{})\right]}{r^2}𝑑V=∫_0^{r_H(\underset{¯}{\text{r}}^{})}n_H\left[\underset{¯}{\text{r}}(\underset{¯}{\text{r}}^{})\right]\delta \mathrm{\Omega }𝑑r^{}$$
(1)
where $`V_{\delta \mathrm{\Omega }}`$ is the volume of the cone of solid angle $`\delta \mathrm{\Omega }`$, $`r_H`$ is the external radius of the halo and $`\underset{¯}{\text{r}}(\underset{¯}{\text{r}}^{})`$ is the coordinate of the volume element $`dV`$ in the reference system with origin on the galactic center. Thus, the incoming UHECR flux per unit solid angle from the direction $`\underset{¯}{\text{r}}^{}`$ is:
$$\frac{\delta \mathrm{\Phi }}{\delta \mathrm{\Omega }}\propto ∫_0^{r_H(\underset{¯}{\text{r}}^{})}n_H\left(\underset{¯}{\text{R}}_{\odot }+\underset{¯}{\text{r}}^{}\right)𝑑r^{}$$
(2)
where $`\underset{¯}{\text{R}}_{\odot }`$ is the position of the Sun in the galactocentric reference system. To ensure that each direction on the celestial sphere has an equal weight and that the symmetry of the problem is preserved in the calculation of the anisotropy, an equal area Schmidt projection (Fisher, Lewis and Embleton 1993) of the sky onto a plane tangent to the appropriate celestial pole is used. The projected area is populated with pixels of equal area. The fluxes, $`\delta \mathrm{\Phi }/\delta \mathrm{\Omega }`$, are then calculated for each pixel and modulated by the exposure of a typical experiment, $`\mathrm{\Xi }(\delta )`$, which is a function that depends only on declination. For experiments in the Northern hemisphere, the Haverah Park exposure at $`E>10^{19}`$ eV was used as typical, since it is located at latitude $`54^o`$ N, mid-way between those of AGASA ($`36^o`$ N) and Yakutsk ($`62^o`$ N). However, Haverah Park used water-Cerenkov detectors, so that the declination response was broader than for the scintillator arrays of AGASA and Yakutsk.
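As an illustration of eq. (2), the sketch below performs the line-of-sight integration numerically for an arbitrary halo density. It is a minimal example rather than the code actually used for the maps: the callable `n_H`, the solar galactocentric distance of 8.5 kpc and a fixed outer radius in place of the direction-dependent $`r_H`$ are assumptions introduced here for concreteness, and the exposure modulation $`\mathrm{\Xi }(\delta )`$ and the Schmidt projection are left out.

```python
import numpy as np

def flux_per_sr(n_H, l, b, R_sun=8.5, r_halo=100.0, n_steps=2000):
    """Relative UHECR flux per unit solid angle from the galactic direction
    (l, b), in radians: the line-of-sight integral of eq. (2).  n_H must
    accept galactocentric cylindrical coordinates (rho, z) in kpc."""
    # Unit vector of the line of sight; the Sun sits at (-R_sun, 0, 0) in
    # galactocentric Cartesian coordinates, so l = b = 0 points at the centre.
    ux, uy, uz = np.cos(b) * np.cos(l), np.cos(b) * np.sin(l), np.sin(b)
    r = np.linspace(0.0, r_halo, n_steps)   # distance along the line of sight
    x, y, z = -R_sun + r * ux, r * uy, r * uz
    rho = np.hypot(x, y)                    # galactocentric cylindrical radius
    return np.trapz(n_H(rho, z), r)         # arbitrary overall normalisation
```

Evaluating `flux_per_sr` over a grid of equal-area pixels and weighting each pixel by the declination-dependent exposure would reproduce maps like those of figure 1.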
The distribution of dark matter inside the halo is by no means certain. Nevertheless, the flatness of the rotation curves of spiral galaxies implies that the density inside the halo must decrease roughly as $`1/r^2`$. Caldwell and Ostriker (1981) parametrised the density of dark matter in the plane of the galaxy by a core-halo type model ($`n_H\propto \left(1+r^2/r_c^2\right)^{-1}`$), and assumed that the halo is spherical (see also, Binney and Tremaine 1987, Sciama 1993). However, N-body simulations of the dissipationless formation of halos (Frenk et al. 1988, Katz 1991, Katz and Gunn 1991, Dubinski and Carlberg 1991, Dubinski 1992, Warren et al. 1992) indicate that the final shape is flattened. For the flattest halos obtained in the absence of dissipation the axial ratio, $`q`$, equals $`0.4`$. In an observational study of our own galaxy, van der Marel (1991) found $`q>0.34`$.
For our calculation we have assumed a bi-axial ellipsoid as an approximation to a flattened halo density profile; in cylindrical galactocentric coordinates $`(\rho ,\varphi ,z)`$:
$$n_H\propto \frac{1}{\left[1+\frac{1}{r_c^2}\left(\rho ^2+\frac{z^2}{q^2}\right)\right]}$$
(3)
where $`r_c`$ is a characteristic, essentially unknown, scale. The spherical limit, $`q=1`$, corresponds to the isothermal halo model of Caldwell and Ostriker (1981).
Navarro, Frenk and White (1996) (NFW), on the other hand, investigated the structure of dark halos in the standard Cold Dark Matter model, and found that the spherically averaged density profile can be fit over an interval of two decades in radius by scaling a "universal" profile. Their halo profiles are approximately isothermal over a large range in radii, but shallower than $`r^{-2}`$ in the central region and steeper than $`r^{-2}`$ near the virial radius:
$$n_H\propto \frac{1}{\frac{r}{r_s}\left(1+\frac{r}{r_s}\right)^2}$$
(4)
where $`r_s`$ is a characteristic radius (not the halo core). In our analysis we consider halo profiles given by both eqs. (3) and (4).
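For reference, both profiles are straightforward to code; the sketch below uses illustrative default scales (the choices $`r_c=8`$ kpc, $`q=0.4`$ and $`r_s=30`$ kpc are examples from the ranges discussed here, not unique values), with the irrelevant overall normalisation set to one.

```python
import numpy as np

def n_flattened(rho, z, r_c=8.0, q=0.4):
    """Eq. (3): flattened bi-axial halo in cylindrical coordinates;
    q = 1 recovers the spherical isothermal model of Caldwell and Ostriker."""
    return 1.0 / (1.0 + (rho**2 + (z / q)**2) / r_c**2)

def n_nfw(rho, z, r_s=30.0):
    """Eq. (4): spherically averaged NFW profile; r_s is a characteristic
    radius, not a core radius."""
    x = np.hypot(rho, z) / r_s
    return 1.0 / (x * (1.0 + x)**2)
```

Either function can be passed directly to `flux_per_sr` above to generate sky maps such as those in figure 1b.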
Figure 1 shows a step-by-step graphical description of our procedures. In figure 1a the density profile given by (3) with $`q=0.4`$ and $`r_c=8`$ kpc is shown. The horizontal axis, $`\rho `$, runs along the galactic plane, while the vertical axis, $`z`$, is perpendicular to the galactic plane. Figure 1b shows the flux of UHECR per unit solid angle, produced by the density profile in 1a, in galactic coordinates with the galactic centre at the center of the figure. Figure 1c shows $`\delta \mathrm{\Phi }/\delta \mathrm{\Omega }`$ from figure 1b rotated into equatorial coordinates. Figures 1d and 1f are the Schmidt projections of $`\delta \mathrm{\Phi }/\delta \mathrm{\Omega }`$ from 1c onto planes tangent to the North and South pole respectively. Figures 1e and 1g show the Schmidt projections 1d and 1f convoluted with the response in declination of Haverah Park ($`54^o`$ N) and Auger South (Malargüe, Argentina) respectively. For the Malargüe site ($`35^o`$ S) we have used the Haverah Park declination distribution (appropriately mirrored and shifted), as the actual declination distribution has yet to be measured and the Pierre Auger Observatory will use water-Cerenkov tanks of the same depth as those used at Haverah Park. It is from these latter figures, and similar ones for other halo models, that the anisotropies discussed below have been calculated.
We have used the amplitude and phase of the first harmonic to characterize the anisotropies. Thus (e.g., Linsley 1975), the amplitude is:
$$r_{1h}=\sqrt{a_{1h}^2+b_{1h}^2}$$
(5)
where:
$$a_{1h}=\frac{2}{N}∑_{i=1}^{N}cos\alpha _i,\qquad b_{1h}=\frac{2}{N}∑_{i=1}^{N}sin\alpha _i$$
(6)
the phase is
$$\mathrm{\Psi }_{1h}=\text{tan}^{-1}\left(\frac{b_{1h}}{a_{1h}}\right)$$
(7)
and $`\alpha _i`$ is the right ascension of an individual event.
The rms spread in amplitude and phase of the first harmonic are given by:
$$\mathrm{\Delta }r=\sqrt{\frac{2}{N}}$$
(8)
and
$$\mathrm{\Delta }\mathrm{\Psi }=\frac{1}{\sqrt{2k_0}}$$
(9)
where $`k_0=r_{1h}^2N/4`$. Another quantity of interest is the number of events required for a signal-to-noise ratio of $`n_\sigma `$ standard deviations either in amplitude or phase:
$$N_r(n_\sigma )=\frac{2n_\sigma ^2}{r_{1h}^2}\text{,}N_\mathrm{\Psi }(n_\sigma )=\frac{2n_\sigma ^2}{r_{1h}^2\mathrm{\Psi }_{1h}^2}$$
(10)
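The harmonic analysis of eqs. (5)-(10) amounts to a few lines of code. The sketch below is a direct transcription (using `arctan2` rather than $`\text{tan}^{-1}`$ so that the phase lands in the correct quadrant), assuming only an array of event right ascensions in radians.

```python
import numpy as np

def first_harmonic(alpha):
    """First-harmonic amplitude and phase in right ascension, eqs. (5)-(9).
    alpha: array of event right ascensions in radians."""
    N = len(alpha)
    a1 = 2.0 / N * np.sum(np.cos(alpha))     # eq. (6)
    b1 = 2.0 / N * np.sum(np.sin(alpha))     # eq. (6)
    r1 = np.hypot(a1, b1)                    # amplitude, eq. (5)
    psi = np.arctan2(b1, a1)                 # phase, eq. (7)
    dr = np.sqrt(2.0 / N)                    # rms amplitude spread, eq. (8)
    k0 = r1**2 * N / 4.0
    dpsi = 1.0 / np.sqrt(2.0 * k0)           # rms phase spread, eq. (9)
    return r1, psi, dr, dpsi

def n_events_for_amplitude(r1, n_sigma=3.0):
    """Events needed for an n_sigma amplitude determination, eq. (10)."""
    return 2.0 * n_sigma**2 / r1**2
```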
In figure 2 $`N_r(n_\sigma =3)`$ is shown for the set of models described by equation (3) as a function of the characteristic length $`r_c`$ and different values of $`q`$, covering very flat solutions ($`q=0.2`$) to the isothermal solution ($`q=1.0`$). The magnitude of $`r_{1h}`$ depends on the halo model: for the models described by equation (3), $`r_{1h}`$ decreases as $`q`$ increases at constant $`r_c`$, while at constant $`q`$, $`r_{1h}`$ decreases as $`r_c`$ increases. The curves have been calculated for Haverah Park, but they are also representative of what would be expected for AGASA and Yakutsk. We note that the grand total number of events with $`E>4\times 10^{19}`$ eV for the Northern Hemisphere sites is $`N\simeq 100`$. Therefore, it is not possible with the present data to measure the amplitude of the first harmonic at the $`3\sigma `$ level required to have statistically significant discriminators between the dark halo model density profiles.
Figure 3 shows phase vs. amplitude of the first harmonic for dark halo models (3) and (4) (NFW) for $`2<r_c<50`$ kpc and $`10<r_s<100`$ kpc respectively. For model (3) flattenings $`0.2≤q≤1`$ are shown. For every model, the larger the amplitude of the first harmonic the more centrally concentrated is the halo (i.e., smaller $`r_c`$ or $`r_s`$). The error bars represent 68% confidence levels for Volcano Ranch (6 events, Linsley 1980), Haverah Park (27 events, Reid and Watson 1980), Yakutsk (24 events, Afanasiev et al. 1995) and AGASA (47 events, Hayashida et al. 1996, Uchihori et al. 1997, Takeda et al. 1999) at $`E>4\times 10^{19}`$ eV, and 95% confidence for the 104 events of the four experiments combined. For the latter the error box is also shown in shades of gray in the background. Note the strong increase of the uncertainty range in phase as the amplitude decreases. It is evident that the data available at present are insufficient to restrict any particular dark matter halo model. At most it can be said that the data are not incompatible with UHECR originating in a spherical, or only slightly flattened, halo ($`q>0.6`$). An isothermal halo is as acceptable as, and is indistinguishable from, a NFW type of halo model, regardless of the value of their characteristic scales. Furthermore, the number of events detected so far by each experiment is so small that statistical fluctuations may even dominate the results.
Figures 4 and 5 show how much the situation can improve using the Southern site of the Auger experiment (Malargüe, Argentina, $`35^o`$ South), which is to be developed. Comparing figures 3 and 5 it is evident that an experiment located in the Southern Hemisphere has a larger potential to discriminate between halo models than one located in the Northern hemisphere for small $`N`$, provided $`r_c≳10`$ kpc. Location is not enough, however, and figures 2 and 4 imply that a significantly larger exposure is needed to make a difference from the current status. After three years of operation of the $`3000`$ km<sup>2</sup> Southern hemisphere Auger detector, roughly $`570`$ events are expected above $`4\times 10^{19}`$ eV, and that should allow $`3\sigma `$ amplitude determinations for the flatter halo models (the constraints on phase are always smaller). As an example, suppose that a measured harmonic amplitude is regarded as being established when the probability that it could have arisen from a random distribution through a chance fluctuation is less than $`10^{-3}`$. It follows that with 500 events an amplitude of $`24`$% would be detectable and the phase would have an uncertainty of $`\pm 15^o`$. Simulated error boxes are shown in figure 5 for this supposed amplitude and for one of $`70`$%. It is clear from the figure that such a result would eliminate a number of halo possibilities, depending on the value of the phase which is measured. Therefore, after three years of operation, it should be possible to exclude some dark halo models.
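The quoted detectable amplitude follows from the standard Rayleigh chance probability, $`P(\ge r_{1h})=e^{-k_0}`$ with $`k_0=Nr_{1h}^2/4`$ (Linsley 1975): requiring $`P<10^{-3}`$ for $`N=500`$ events reproduces the figure above.

```python
import numpy as np

# Rayleigh chance probability that an isotropic sample of N events gives a
# first-harmonic amplitude >= r:  P = exp(-k0) with k0 = N r^2 / 4.
def detectable_amplitude(N, p_chance=1e-3):
    return np.sqrt(4.0 * np.log(1.0 / p_chance) / N)

print(detectable_amplitude(500))   # ~0.235, i.e. the ~24% quoted above
```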
### 2.2 Anisotropy associated with Andromeda (M31)
It is well known in gamma-ray burst research that a halo origin of the bursts is ruled out by the non-observation of clustering of events in the direction of the Andromeda galaxy (M31, the largest galaxy in the local group, at a distance of only $`D\simeq 670`$ kpc). That much the same reasoning should apply to the present UHECR problem has been discussed most recently by several authors (Benson, Smialkowski and Wolfendale 1998, Dubovsky and Tinyakov 1998). However, very different values have been quoted in these works for the contribution of Andromeda in UHECR. We have therefore made an independent calculation of the magnitude of the effect. The ratio between the incoming UHECR flux originating in Andromeda and that originating in the halo of our own galaxy inside a given solid angle $`\delta \mathrm{\Omega }`$ can be expressed as:
$$\frac{\mathrm{\Phi }_{M31}}{\mathrm{\Phi }_{MW}}\approx \frac{\zeta }{D^2}\times \frac{∫_{V_H}n_H𝑑V}{∫_{V_{\delta \mathrm{\Omega }}}\frac{n_H}{r^2}𝑑V}$$
(11)
where the second factor on the right hand side of the equation is a function that depends only on the particular halo model assumed, and $`\zeta \approx 2`$ is the ratio between the masses of the halos of Andromeda and the Milky Way. The integration volume $`V_H`$ is the volume of the Galaxy halo and $`V_{\delta \mathrm{\Omega }}`$ is the volume defined by the cone of solid angle $`\delta \mathrm{\Omega }`$ pointing in the direction of Andromeda.
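A minimal numerical sketch of eq. (11) follows, assuming a spherical halo density $`n_H(r)`$ and treating the M31 halo as a point source at distance $`D`$; the M31 galactic coordinates used ($`l\approx 121.2^o`$, $`b\approx -21.6^o`$) and the reuse of `flux_per_sr` from the earlier sketch are assumptions made here, not inputs taken from the text.

```python
import numpy as np
from scipy import integrate

def m31_to_mw_ratio(n_H_r, zeta=2.0, D=670.0, r_halo=100.0):
    """Eq. (11) for a spherical halo: flux from Andromeda's halo relative to
    our own, within a 10 x 10 degree cone toward M31 (distances in kpc)."""
    domega = np.radians(10.0)**2                       # cone solid angle, sr
    total, _ = integrate.quad(lambda r: 4*np.pi*r**2*n_H_r(r), 0.0, r_halo)
    l, b = np.radians(121.2), np.radians(-21.6)        # direction of M31
    los = flux_per_sr(lambda rho, z: n_H_r(np.hypot(rho, z)), l, b,
                      r_halo=r_halo)
    return zeta * total / D**2 / (domega * los)
```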
Figure 6 shows $`\mathrm{\Phi }_{M31}/\mathrm{\Phi }_{MW}`$ for a $`10^o\times 10^o`$ solid angle (the expected spread due to deflection of a $`4\times 10^{19}`$ eV proton arriving from Andromeda - e.g., Medina Tanco et al. 1997) for several isothermal (i.e., $`q=1`$ in eq. (3)) halo models. The three models have been normalized in such a way as to give the same contribution to the galactic rotation curve at a galactocentric distance of $`r_o=18`$ kpc and differ in the ratio $`\eta `$ between the total halo mass and the dark matter mass inside $`r_o`$. Galactic dark halos with $`\eta =2`$, $`5`$ and $`10`$ are considered. The results show that the contribution from Andromeda increases faster than the contribution from our own galaxy as the mass of the halo is increased. Due to the limited size of the present UHECR sample ($`0.5`$ events per $`10^o\times 10^o`$ solid angle), nothing can yet be said about the existence of a UHECR contribution originating in the dark halo of Andromeda.
## 3 Comments on related work
Other authors have recently discussed the anisotropy expected if the UHECR are produced by the decay of super-heavy relic particles in the galactic halo (or indeed by any other sources distributed in a similar way). Berezinsky and Mikhailov (1998) have used the isothermal distribution of dark matter (Kravtsov et al. 1997) and the distribution predicted by the numerical simulations of Navarro, Frenk and White (1996) to predict the amplitudes of the first and second harmonics and the phase of the first harmonic for the geographical location of the Yakutsk array (latitude = $`62^o`$ N). This is an extension of the calculation outlined in Berezinsky, Blasi and Vilenkin (1998), in which a wide-ranging overview of the signatures from topological defects is given. The amplitudes and phases which they predict are very similar to those found in our calculation (figure 3). For the isothermal model they calculate the phase to be $`250^o`$ and find that the amplitude of the first harmonic varies from $`0.40`$ to $`0.14`$ as $`r_c`$ changes from 5 to 50 kpc. For the NFW model the same phase is found and the harmonic amplitude varies from $`0.38`$ to $`0.31`$ as $`r_s`$ changes from $`30`$ to $`100`$ kpc. These results are in reasonable agreement with our work (figure 3).
Berezinsky and Mikhailov state that dominance of a halo component at about $`10^{19}`$ eV can probably be excluded by the AGASA data, which contain nearly $`600`$ events above this energy. However, in our view this is not a very strong conclusion, as there is no acute problem at $`10^{19}`$ eV comparable to that which exists at higher energy. Particles of $`10^{19}`$ eV can probably be produced in several locations by electromagnetic processes. Additionally, there is no difficulty in explaining the isotropy, as a reasonable fraction of the particles may be iron nuclei. This is allowed by the necessarily model-dependent interpretation of the Fly’s Eye data and the limited statistics (Bird et al. 1995, Ding et al., 1997). Iron nuclei cannot, of course, be created by the decay of dark matter particles.
Benson, Smialkowski and Wolfendale (1998) have used data from a variety of experiments to discuss the dark matter contribution from two halo possibilities, one in which an extensive ($`100`$ kpc) magnetic halo is postulated and one in which the dark matter density distribution follows the Navarro, Frenk and White (1996) model. They make comparisons with their predictions using data at $`(1-5)\times 10^{18}`$ eV from Akeno and Yakutsk and above $`3\times 10^{19}`$ eV using data from AGASA, Volcano Ranch, Haverah Park, Yakutsk and Sydney as discussed in Chi et al. (1992).
It does not seem possible, to us, to extract meaningful information on the super-heavy relic content of the halo from the arrival direction distribution of events as low in energy as $`(1-5)\times 10^{18}`$ eV. Here there are likely to be many iron nuclei present and, as at $`10^{19}`$ eV, there is no enigma to be resolved which necessitates the postulate of dark matter particles.
In our discussion of the data above $`4\times 10^{19}`$ eV we have used the $`104`$ events (figure 3) from Volcano Ranch, Yakutsk, Haverah Park and AGASA. We have shown that this number of events is insufficient to discriminate against models other than those with rather flat distributions ($`q<0.4`$). We believe that it is inappropriate to try to draw conclusions using observations made with the Sydney array, as Benson, Smialkowski and Wolfendale (1998) have attempted. Of the 80 events with energies above $`4\times 10^{19}`$ eV in the Sydney catalogue, $`60`$ have zenith angles smaller than $`60`$ degrees. However, the mean multiplicity of struck stations is only $`5.0`$, and $`60`$% of these events have $`3`$- or $`4`$-fold multiplicity. This means that the core location, and hence the reconstructed muon size, is very uncertain. There are also well-documented difficulties with the instrumentation of the Sydney experiment (e.g. Watson 1991) and with the models used to estimate the energies (Hillas 1990). The conclusions reached by Chi et al. (1992) about the Sydney data result in an energy spectrum (figure 7a of Chi et al.) which is not consistent with the modern spectra from AGASA, Haverah Park and Fly’s Eye. For several reasons, therefore, we deem it prudent to ignore those data.
## 4 Conclusions
We have calculated the anisotropy of UHECR to be expected at specimen locations in the Northern and Southern hemispheres on the assumption that the particles are created in the decay of super-heavy relic particles within the galactic halo. Several models describing the distribution of cold dark matter have been considered. We conclude that our calculations are in good agreement with other work but that it is premature to draw inferences about the existence, or otherwise, of sources of UHECR lying within the halo of our galaxy. The issue could be resolved relatively quickly by the Pierre Auger Observatory, construction of the Southern part of which is scheduled to begin in 1999.
## 5 Acknowledgments
GAMT is also grateful to the High-Energy Astrophysics group of the University of Leeds for its kind hospitality. This work was partially supported by the Brazilian agency FAPESP through a fellowship to GAMT.
Figure Captions
Figure 1: A graphical example of the procedure followed. (a) Halo density (cylindrical galacto-centric coordinates) given by eq. (3) with $`q=0.4`$ and $`r_c=8`$ kpc; distances are in kpc and density contours are linear; the density distribution is shown for one-quarter of the Galaxy. (b) UHECR flux produced by dark matter distribution (a) as seen in galactic coordinates. (c) As (b) but in equatorial coordinates. (d) Schmidt projection of (c) onto the North Pole. (f) As (d) but for the South Pole. (e) and (g) are the projections (d) and (f) convoluted with the response in declination of Haverah Park and Auger South respectively. First harmonics have been calculated over figures of type (e) and (g) for a variety of halo models.
Figure 2: Number of events necessary for an amplitude determination significant at the $`3\sigma `$ level for several halo models. Note that the existing Northern hemisphere database (AGASA, Haverah Park, Volcano Ranch and Yakutsk) at $`E>4\times 10^{19}`$ eV comprises only 104 events.
Figure 3: Phase versus amplitude of the first harmonic for the several models described in the text. The heavy dots are NFW models for $`r_s=10`$, $`20`$, $`30`$, $`50`$ and $`100`$ kpc. The lines identify models described by equation (3) for $`2≤r_c≤50`$ kpc and $`0.2≤q≤1.0`$. $`r_s`$ and $`r_c`$ are explained in the text. Error bars correspond to 68% C.L. for the available data from Volcano Ranch (VR, 6 events), Yakutsk (YK, 24), AGASA (AG, 47) and Haverah Park (HP, 27) with $`E>4\times 10^{19}`$ eV. The 95% C.L. error bars for the combination of the experiments (AG+HP+YK+VR, 104) are also shown. The shaded region denotes the 95% C.L. combined error box, and stresses the increase of the error in phase as the amplitude decreases.
Figure 4: Same as figure 2 but calculated for Malargüe, Argentina, the Southern site of the Auger experiment.
Figure 5: Same as figure 3 but calculated for Malargüe. The error boxes are two simulated $`68`$% C.L. data points, corresponding to hypothetical first harmonic amplitudes equal to $`0.24`$ and $`0.7`$ respectively as would be found after $`3`$ years of observation (i.e., $`500`$ events with $`E>4\times 10^{19}`$ eV).
Figure 6: The contribution of Andromeda (M31). Ratio between the flux of UHECR originating in the halo of Andromeda and in our own halo, within a cone of $`10^o\times 10^o`$ centered on the direction of M31. The calculations shown are for the isothermal halo (eq. (3) with $`q=1`$). The models are normalized to reproduce the galactic rotation curve inside $`r_o=18`$ kpc, but differ in the total mass of the Galaxy halo, $`M_{MW}=\eta \times M(r≤r_o)`$, where $`\eta `$ is the mass of our halo in units of the mass inside $`r_o=18`$ kpc. At present, the average number of UHECR detected above $`E>4\times 10^{19}`$ eV is only $`0.5`$ events on a sky area of $`10^o\times 10^o`$, so no conclusion may be drawn.
LABORATORI NAZIONALI DI FRASCATI, SIS-Pubblicazioni, LNF-99/006(P), 12 February 1999, IISc/CTS/4/99, hep-ph/9903331

# Eikonalised minijet model predictions for cross-sections of photon induced processes
## 1 Introduction
The rise of hadronic total cross-sections $`\sigma (A+B→\mathrm{hadrons})`$ with energy has now been observed for a set of comparable values of $`\sqrt{s}`$ where both A, B are hadrons , when one of them is a photon and when both of them are photons . It is well known that interactions of a photon with another hadron or photon receive contributions from the ‘structure’ of a photon, which the photon develops due to its fluctuation into a virtual $`q\overline{q}`$ pair. The recent measurements of $`\sigma (\gamma \gamma →\mathrm{hadrons})`$ at higher energies of up to and beyond $`𝒪(100)`$ GeV have confirmed that the hadronic cross-sections rise with $`\sqrt{s}`$, and preliminary claims are that they rise faster than the $`p\overline{p}`$ cross-sections. A measurement of $`\gamma ^{}p`$ cross-sections by the ZEUS collaboration, extrapolated to $`Q^2=0`$ , lies above the photoproduction measurements . These extrapolated $`\gamma ^{}p`$ data and the new $`\gamma \gamma `$ data seem to indicate that the rise of cross-sections with $`\sqrt{s}`$ gets faster as one replaces hadrons with photons successively. In the Pomeron-Regge picture the total cross-section is given by
$$\sigma _{ab}^{\mathrm{tot}}=Y_{ab}s^{-\eta }+X_{ab}s^ϵ$$
(1)
where $`\eta `$ and $`ϵ`$ are related to the intercept at zero of the leading Regge trajectory and of the Pomeron, respectively. The value of the Pomeron intercept indicated by the unpublished results of ZEUS is $`0.157\pm 0.019\pm 0.04`$, whereas the corresponding value for the $`\gamma \gamma `$ data obtained by the L3 collaboration is $`0.158\pm 0.006\pm 0.028`$, which is to be compared with the value of $`0.08`$ for pure hadronic cross-sections.
Fig. 1 shows the energy dependence of the hadronic cross-sections as well as those for the photon induced processes. The latter are multiplied by a quark model motivated factor of $`3/2`$ and the inverse of the probability for a photon to fluctuate into a $`q\overline{q}`$ pair: $`P_\gamma ^{\mathrm{had}}`$. The value chosen for $`P_\gamma ^{\mathrm{had}}`$ is $`1/250`$, which is close to a value motivated by the VMD picture, i.e.
$$P_\gamma ^{\mathrm{had}}=P_{VMD}=∑_{V=\rho ,\omega ,\varphi }\frac{4\pi \alpha _{\mathrm{em}}}{f_V^2}\approx \frac{1}{250}.$$
(2)
Fig. 1 illustrates that all total cross-sections show a similar rise with energy when the difference between photon and hadron is taken into account, albeit with indications of a somewhat steeper energy dependence for photon induced processes (it should be noted here that although the rate of rise with $`\sqrt{s}`$ is similar for both OPAL and L3 , on this plot the OPAL data seem to stand a bit apart, which may be due to the difference in the normalisation of the two data sets). The similarity in the energy dependence makes it interesting to attempt to give a description of all three data sets in the same theoretical framework. We have analysed the cross-sections for photon-induced processes alone and we find that the EMM calculations generically predict a faster rise with $`\sqrt{s}`$ for the $`\gamma \gamma `$ case than would be expected from a universal pomeron hypothesis. I will further present arguments why, in the framework of the EMM, the Pomeron intercept is expected to increase as the number of colliding photons in the process increases.
## 2 Eikonalised Minijet Model
There are some basic differences between the purely hadronic cross-section measurements and those of the total cross-sections in photon induced processes. In the purely hadronic case the measurement of the total cross-section comes from the combined methods of extrapolation of the elastic diffraction peak and total event count, whereas in the case of photon induced processes it has to be extracted from the data using a Monte Carlo; for example, the total $`\gamma \gamma `$ cross-sections are extracted from a measurement of hadron production in untagged $`e^+e^-`$ interactions. Experimentally this gives rise to different types of uncertainties in the measurement (this is clear from the discussions, for example, in Ref. ). Theoretically, the concept of the ‘elastic’ cross-section cannot be well defined for photons. In models satisfying unitarity, like those which use the eikonal formulation, it is important to understand what is the definition of the elastic cross-section and whether the data include all of this elastic cross-section, part of it, or only a small fraction. To that end let us summarise some of the basic features of the eikonal formulation.
Let us start from the very beginning. Consider the eikonal formulation for the elastic scattering amplitude
$$f(\theta )=\frac{ik}{2\pi }∫d^2\vec{b}e^{i\vec{q}\cdot \vec{b}}[1-e^{i\chi (b,s)/2}]$$
(3)
which, together with the optical theorem leads to the expression for the total cross-sections
$`\sigma ^{\mathrm{el}}`$ $`=`$ $`{\displaystyle ∫d^2\vec{b}|1-e^{i\chi (b,s)/2}|^2}`$ (4)
$`\sigma ^{\mathrm{tot}}`$ $`=`$ $`2{\displaystyle ∫d^2\vec{b}[1-e^{-\chi _I(b,s)/2}cos(\chi _R)]}`$ (5)
$`\sigma ^{\mathrm{inel}}`$ $`=`$ $`\sigma ^{\mathrm{tot}}-\sigma ^{\mathrm{el}}={\displaystyle ∫d^2\vec{b}[1-e^{-\chi _I(b,s)}]}`$ (6)
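For a purely imaginary eikonal ($`\chi _R=0`$), which is the case used below for the photon-induced processes, eqs. (4)-(6) reduce to one-dimensional impact-parameter integrals. The following is a generic numerical sketch, not the authors' code; it assumes a user-supplied $`\chi _I(b,s)`$ and leaves units to the caller (with $`b`$ in $`\mathrm{GeV}^{-1}`$ the cross-sections come out in $`\mathrm{GeV}^{-2}`$, and $`1\mathrm{GeV}^{-2}\approx 0.3894`$ mb).

```python
import numpy as np
from scipy import integrate

def eikonal_xsecs(chi_I, s, b_max=200.0):
    """sigma_tot, sigma_el and sigma_inel of eqs. (4)-(6) for chi_R = 0;
    the azimuthally symmetric d^2b integral is done radially."""
    tot, _ = integrate.quad(
        lambda b: 2*np.pi*b * 2.0*(1.0 - np.exp(-chi_I(b, s)/2.0)), 0.0, b_max)
    inel, _ = integrate.quad(
        lambda b: 2*np.pi*b * (1.0 - np.exp(-chi_I(b, s))), 0.0, b_max)
    return tot, tot - inel, inel   # elastic part is sigma_tot - sigma_inel
```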
According to the minijet model the rise of total cross-sections can be calculated from QCD. In the model, there is an ad hoc sharp division between a soft component, which is of non-perturbative origin and for which the model is not able to make theoretical predictions, and a hard component, which receives input from perturbative QCD. The minijet model assumes that the rise with energy of total cross-sections is driven by the rise with energy of the number of low-x partons (gluons) responsible for hadron collisions and in its simplest formulation reads
$$\sigma _{ab}^{inel,u}=\sigma _0+∫_{p_{tmin}}d^2\vec{p}_t\frac{d\sigma _{ab}^{jet}}{d^2\vec{p}_t}=\sigma _0+\sigma _{ab}^{jet}(s,p_{tmin}),$$
(7)
the superscript $`u`$ indicating that this is the ununitarised cross-section. This concept can be embodied in a unitary formulation as in Eqs. (5)-(6), by writing
$$\sigma _{ab}^{inel}=P_{\mathrm{ab}}^{\mathrm{had}}∫d^2\vec{b}[1-e^{-n(b,s)}]$$
(8)
with
$$n(b,s)=A_{ab}(b)[\sigma _{h/a,h/b}^{soft}+\frac{\sigma _{ab}^{jet}(s,p_{tmin})}{P_{\mathrm{ab}}^{\mathrm{had}}}]$$
(9)
In eq.8, we have inserted, to include the generalization to photon processes, a factor $`P_{\mathrm{ab}}^{\mathrm{had}}`$ defined as the probability that particles $`a`$ and $`b`$ behave like hadrons in the collision. This parameter is unity for hadron-hadron processes, but of order $`\alpha _{\mathrm{em}}`$ or $`\alpha _{\mathrm{em}}^2`$ for processes with respectively one or two photons in the initial state. The definition of $`\sigma _{h/a,h/b}^{soft}`$ in eq.9 is such that, even in the photonic case, it is of hadronic size, just like $`\sigma _{\mathrm{ab}}^{\mathrm{jet}}(s,p_{tmin})/P_{\mathrm{ab}}^{\mathrm{had}}`$. A simple way to understand the need for this factor is to realise that the unitarisation in this formalism is achieved by multiple parton interactions in a given scatter of hadrons and once the photon has ‘hadronised’ itself, one should not be paying the price of $`P_\gamma ^{\mathrm{had}}`$ for further multiparton scatters.
At high energies, the dominant term in the eikonal is the $`jet`$ cross-section, which is calculable in QCD and depends on the parton densities in the colliding particles and on $`p_{tmin}`$, which admittedly is an ad hoc parameter separating the perturbative and nonperturbative contributions to the eikonal. The basic assumption in arriving at eq. 8 is that the multiple parton scatters responsible for the unitarisation are independent of each other at a given value of $`b`$. In this model $`n(b,s)`$ of eq. 9 is identified as the average number of collisions at any given energy $`\sqrt{s}`$ and impact parameter $`b`$. The $`b`$ dependence is assumed to be given by the function $`A_{ab}(b)`$, which is modelled in different ways. This function measures the overlap of the partons in the two hadrons $`a,b`$ in the transverse plane.
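Putting eqs. (8) and (9) together, the eikonalised inelastic cross-section is a short composition. A minimal sketch, with the same unit conventions as the previous snippet ($`\sigma ^{jet}`$ would come from a standard LO QCD calculation with the chosen parton densities and $`p_{tmin}`$, which is not reproduced here):

```python
import numpy as np
from scipy import integrate

def sigma_inel_emm(s, A_ab, sigma_soft, sigma_jet, P_had=1.0, b_max=200.0):
    """Eikonalised minijet inelastic cross-section, eqs. (8)-(9).
    A_ab(b): normalised overlap function; P_had = 1 for hadrons and of order
    alpha_em (alpha_em^2) for one (two) photons in the initial state."""
    n = lambda b: A_ab(b) * (sigma_soft + sigma_jet(s) / P_had)    # eq. (9)
    return P_had * integrate.quad(
        lambda b: 2*np.pi*b * (1.0 - np.exp(-n(b))), 0.0, b_max)[0]  # eq. (8)
```

In the total cross-section formulation adopted below, the same $`n(b,s)`$ plays the role of $`\chi _I`$ in the `eikonal_xsecs` sketch above.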
Before going to the discussion of different models of $`A_{ab}(b)`$, we note that the mini-jet model is particularly well suited for generalisation to photon-induced processes, where the concept of ‘elastic’ cross-section is not very well defined. Whereas for the hadronic case one starts from the elastic amplitude followed by the optical theorem, as done in eqs. 3-6, in this case the starting point is actually eq. 8, and then one defines $`\sigma _{ab}^{\mathrm{tot}}`$ using eq. 6 with $`\chi _R=0`$ and using $`\chi _I`$ as given by 8. The above discussion specifies the total cross-section formulation of the EMM for photon-induced processes. While our earlier analyses assumed that the $`\gamma \gamma `$ cross-sections presented were the inelastic cross-sections, the analysis of had used the total cross-section formulation but with a different ansatz for the eikonal. Our analysis uses the total cross-section formulation of the EMM with the perturbative part of the eikonal as given by QCD à la eq. 8.
## 3 Overlap function and jet cross-sections.
The overlap function $`A_{ab}(b)`$ is normally calculated in terms of the convolution of the matter distributions $`\rho _{a,b}(\vec{b})`$ of the partons in the colliding hadrons in the transverse plane
$$A_{ab}(b)=∫d^2\vec{b^{}}\rho _a(\vec{b^{}})\rho _b(\vec{b}-\vec{b^{}}).$$
(10)
If we assume that $`\rho (\vec{b})`$ is given by the Fourier transform of the form factor of the hadron, then $`A_{ab}(b)`$ is given by
$$A_{ab}(b)=\frac{1}{(2\pi )^2}∫d^2\vec{q}ℱ_a(q)ℱ_b(q)e^{i\vec{q}\cdot \vec{b}},$$
(11)
where $`ℱ_{a,b}`$ are the electromagnetic form factors of the colliding hadrons. For protons this is given by the dipole expression
$$ℱ_{prot}(q)=[\frac{\nu ^2}{q^2+\nu ^2}]^2,$$
(12)
with $`\nu ^2=0.71\mathrm{GeV}^2`$. For photons a number of authors , on the basis of Vector Meson Dominance, have assumed the same functional form as for the pion, i.e. the pole expression
$$ℱ_{pion}(q)=\frac{k_0^2}{q^2+k_0^2},$$
(13)
with $`k_0=0.735`$ GeV from the measured pion form factors, changing the value of the scale parameter $`k_0`$, if necessary in order to fit the data.
Yet another philosophy would be to assume that the b-space distribution of partons is the Fourier transform of the transverse momentum distribution of the colliding system . To leading order, this transverse momentum distribution can be entirely due to an intrinsic transverse momentum of partons in the parent hadron. While the intrinsic transverse momentum ($`k_T`$) distribution of partons in a proton is normally taken to be Gaussian, a choice which can be justified in QCD based models , in the case of the photon the origin of all partons can, in principle, be traced back to the hard vertex $`\gamma →q\overline{q}`$. Therefore, also in the intrinsic transverse momentum philosophy, one can expect the $`k_T`$ distribution of photonic partons to be different from that of the partons in the proton. The expected functional dependence can be deduced using the origin of photonic partons from the $`\gamma →q\overline{q}`$ splitting. For the photon one can argue that the intrinsic transverse momentum ansatz would imply the use of a different value of the parameter $`k_0`$ , which is extracted from data involving ‘resolved’ photon interactions , in the pole expression for the form factor. By varying $`k_0`$ one can then explore various possibilities, i.e. the $`\mathrm{VMD}/\pi `$ hypothesis if $`k_0=0.735`$ GeV, or the intrinsic transverse momentum distribution case for other values of $`k_0`$ .
The ansatz of eqs. 5-6 and 9 requires that the overlap function be normalised to unity, i.e.,
$$∫d^2\vec{b}A_{ab}(b)=1.$$
(14)
Taking the form factor ansatz for the proton we then have
$$A_{pp}(b)=\frac{\nu ^2}{96\pi }(\nu b)^3𝒦_3(\nu b),\qquad A_{\gamma \gamma }(b)=\frac{k_0^3b}{4\pi }𝒦_1(k_0b)$$
(15)
and
$$A_{\gamma \mathrm{p}}(b)=\frac{k_0^2\nu ^4}{2\pi (\nu ^2-k_0^2)^2}\left[𝒦_0(k_0b)-𝒦_0(\nu b)\right]+\frac{k_0^2\nu ^2}{4\pi (k_0^2-\nu ^2)}\nu b𝒦_1(\nu b)$$
(16)
where $`\nu `$ and $`k_0`$ are the scale factors mentioned earlier and $`𝒦_i`$ are the modified Bessel functions.
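The three overlap functions involve only modified Bessel functions, and the normalisation condition (14) provides a direct check. In the sketch below, $`\nu ^2=0.71`$ GeV² as above, while `K0 = 0.66` GeV anticipates the intrinsic-$`k_T`$ value used later in the text; the integration cutoff is an assumption of the sketch.

```python
import numpy as np
from scipy.special import kn
from scipy import integrate

NU, K0 = np.sqrt(0.71), 0.66       # GeV: proton dipole scale and photon k_0

def A_pp(b):                       # eq. (15), proton-proton; b in GeV^-1
    return NU**2/(96*np.pi) * (NU*b)**3 * kn(3, NU*b)

def A_gg(b):                       # eq. (15), photon-photon
    return K0**3 * b/(4*np.pi) * kn(1, K0*b)

def A_gp(b):                       # eq. (16), photon-proton
    pre = K0**2 * NU**4 / (2*np.pi*(NU**2 - K0**2)**2)
    return (pre * (kn(0, K0*b) - kn(0, NU*b))
            + K0**2*NU**2/(4*np.pi*(K0**2 - NU**2)) * NU*b*kn(1, NU*b))

# normalisation check, eq. (14): each radial integral should come out ~1
for A in (A_pp, A_gg, A_gp):
    print(integrate.quad(lambda b: 2*np.pi*b*A(b), 1e-6, 400.0)[0])
```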
If we look at eqs. 8-9 it is easy to see that $`A_{ab}(b)`$ and $`P_{\mathrm{ab}}^{had}`$ always appear in the combination $`A_{ab}(b)/P_{ab}^{had}`$ . Hence only one of them can be varied independently. Note also that $`\sigma ^{\mathrm{soft}}`$ can always be renormalised, since it is a function fitted to the low energy data. By looking at eq. 9 we can see that if the $`s`$-dependence of the $`jet`$ cross-sections were similar for all the colliding particles, then the difference in the $`s`$-dependence of the total/inelastic cross-sections can be estimated by looking at the behaviour of $`A_{ab}`$. It is also clear then that changing the scale parameter $`k_0`$ in $`A_{ab}(b)`$ is equivalent to changing $`P_{ab}^{had}`$. Note also that
$$P_{\gamma \mathrm{p}}^{had}=P_\gamma ^{had};P_{\gamma \gamma }^{had}=(P_\gamma ^{had})^2.$$
(17)
Hence, in analysing the photon-induced reactions, i.e., the $`\gamma p`$ and $`\gamma \gamma `$ cross-sections, the only hadronisation probability that is an independent parameter is $`P^{\mathrm{had}}\equiv P_\gamma ^{\mathrm{had}}`$.
Thus now we are ready to list the total number of inputs on which the EMM predictions depend:
* The soft cross-sections $`\sigma _{ab}^{\mathrm{soft}}`$ ,
* $`p_{tmin}`$ and the parton densities in the colliding hadrons,
* $`P^{\mathrm{had}}`$ and the ansatz as well as the scale parameters for the $`A_{ab}(b)`$,
Out of these the protonic and photonic parton densities are known from $`eP`$ and $`e\gamma `$ DIS. The nonperturbative part $`\sigma _{\gamma p}^{\mathrm{soft}},\sigma _{\gamma \gamma }^{\mathrm{soft}}`$ has to be determined from some fits. We outline the procedure used by us below. It is true that the jet cross-sections of eq. 7 depend very strongly on the value of $`p_{tmin}`$. Hence it would be useful to have an independent information of this parameter, which as said before separates the perturbative and nonperturbative contribution in an ad hoc manner. Luckily, there is more direct evidence that the ansatz of eq. 8 can describe some features of hadronic interactions. Event generators which have built in multiple parton interactions in a given $`ab`$ interaction, for the case of $`a,b`$ being $`p,\overline{p}`$, were shown to explain many features of the hadronic interactions such as multiplicity distributions with a $`p_{tmin}`$ around $`1.5`$ GeV. Recent analyses of the $`\gamma p`$ interactions seem to show again that a consistent description of many features such as energy flow, multiplicity distributions is possible with a $`p_{tmin}`$ value between $`1.5`$$`2.0`$ GeV.
The energy dependence of $`\sigma _{ab}^{\mathrm{jet}}`$ as defined in eq. 7 will of course get reflected in the energy rise of the eikonalised total or inelastic cross-section. It is therefore instructive to see how this depends on the type of the colliding particles. We compare this for the $`p\overline{p}`$, $`\gamma p`$ and $`\gamma \gamma `$ cases, where we have multiplied the $`\gamma p`$ and $`\gamma \gamma `$ jet cross-sections by factors of $`\alpha ^{-1}`$ and $`\alpha ^{-2}`$ respectively.
In the comparison of Fig. 2 we have used the GRV, LO parametrisations for both the proton and the photon . We note here that at high energies, the rise with energy of the jet cross-sections is very similar in all the three cases, when the difference between a photon and a proton is accounted for. A study of the $`b`$-dependence of the respective $`A_{ab}(b)`$ given by eqs. 15-16 shows that the photon is much ‘smaller’ as compared to the proton in the transverse space, which is also understandable as the photon after all owes its ‘structure’ to the hard $`\gamma →q\overline{q}`$ vertex. Hence, we expect that the damping of the cross-section rise due to multiple scattering for photons will be less than for a proton. This, coupled with the above observation of the $`jet`$ cross-section, implies that in the EMM, total/inelastic cross-sections are expected to rise faster with energy as we replace a proton by a photon. That is, an increase in the pomeron intercept as we go from $`\overline{p}p`$ to $`\gamma p`$ and $`\gamma \gamma `$, as indicated by the data, is expected in the EMM framework.
## 4 Results
Now in this section let us spell out our strategy for fixing the various inputs to the EMM. We restrict our analysis only to the photon-induced processes, i.e., the $`\gamma p`$ and $`\gamma \gamma `$ cross-sections. We follow the same procedure as we had adopted in , i.e., we fix all the inputs to the EMM by a fit to the available photoproduction data on $`\sigma _{\gamma p}`$. Here we do not include the data , shown in Fig. 1, which have been obtained by an extrapolation of the low-$`Q^2`$ data to $`Q^2=0`$. We determine $`\sigma _{\gamma p}^{\mathrm{soft}}`$ by a fit to the photoproduction data using a form suggested in ,
$$\sigma _{\gamma p}^{\mathrm{soft}}=\sigma _{\gamma p}^0+\frac{𝒜_{\gamma p}}{\sqrt{s}}+\frac{ℬ_{\gamma p}}{s}.$$
(18)
We then determine $`𝒜_{\gamma p},ℬ_{\gamma p}`$ and $`\sigma _{\gamma p}^0`$ from the best fit to the low-energy photoproduction data, starting from the quark-model motivated ansatz $`\sigma _{\gamma p}^0=2/3\sigma _{\overline{p}p}`$. In earlier work we had used the inelastic formulation. Now we have repeated the same exercise with the total cross-section formulation of the EMM, which we believe is the more appropriate to use . The results of our fit, using the total cross-section formulation of the EMM, are shown
in Fig. 3. The fit values of the parameters are
$$\sigma _{\gamma p}^0=31.2\mathrm{mb};𝒜_{\gamma p}=10\mathrm{mb}\mathrm{GeV};ℬ_{\gamma p}=37.9\mathrm{mb}\mathrm{GeV}^2.$$
(19)
As compared with the similar exercise done in , we find that the rise of the eikonalised cross-sections with $`\sqrt{s}`$ is faster in this case than in the inelastic formulation. However, $`p_{tmin}=2`$ GeV is still the best value to use, as seen from Fig. 3. We use here the form factor ansatz for the proton and the intrinsic $`k_T`$ ansatz for the photon with a value of the parameter $`k_0=0.66`$ GeV, which corresponds to the central value from the measurement of the intrinsic $`k_T`$ distribution. We have used GRV distributions for both the proton and photon and $`P^{\mathrm{had}}=1/240`$. We also find, similar to the analysis in the inelastic formulation by us and others , that the description of the photoproduction data in terms of a single eikonal leaves leeway for improvement. We restrict ourselves to the use of a single eikonal, so as to minimize our parameters, but note that this can perhaps be cured by using an energy dependent $`P^{\mathrm{had}}`$ or alternatively an energy dependent $`k_0`$.
Now, having fixed all the inputs for the $`\gamma p`$ case, we determine the corresponding parameters for the $`\gamma \gamma `$ case again by appealing to the Quark Model considerations and we use,
$$\sigma _{\gamma \gamma }^{soft}=\frac{2}{3}\sigma _{\gamma p}^{soft}.$$
(20)
All the other inputs are exactly the same as in the $`\gamma p`$ case. In this manner, we have really extrapolated our results from the $`\gamma p`$ case to the $`\gamma \gamma `$ case. The results of our extrapolation are shown
in Fig. 4.
Notice that the overall normalization of the photonic cross-section depends upon $`P^{had}`$. When extrapolating from photoproduction to photon-photon using the inelastic assumption, we had used $`P^{had}=1/200`$, which can be thought of as corresponding to a 20% non-VMD component. Using the total cross-section formulation, the low energy production data suggest rather to use $`P^{had}=1/240`$, a value which implies that the photon is practically purely a vector meson. Then, Fig. 4 shows that in the total cross-section formulation, the extrapolation from $`\gamma p`$ to $`\gamma \gamma `$ leads to a cross-section which lies lower at low energies, but rises faster, than in the inelastic fits, for the same values of the parameters.
Both the inelastic and the total cross-section are seen to rise faster than is expected in a universal pomeron picture. This feature is the same for both the $`\gamma p`$ and $`\gamma \gamma `$ cases. We show the dependence of our results on the scale parameter $`k_0`$. The band in the figure corresponds to using the Regge-Pomeron hypothesis of eq. 1, the measured values of $`X_{ab},Y_{ab}`$ for the $`\overline{p}p/pp`$ and $`\gamma p`$ cases, and the factorisation idea
$$X_{\gamma \gamma }=X_{\gamma p}^2/X_{pp};Y_{\gamma \gamma }=Y_{\gamma p}^2/Y_{pp}.$$
Here $`X(Y)_{pp}`$ stand for an average for $`pp`$ and $`\overline{p}p`$ case.
We see that while our analysis using the inelastic formulation and the default value of $`k_0=0.66`$ GeV gave predictions closer to the OPAL data, the total cross-section formulation, for the same value of $`k_0`$, gives results closer to the L3 data, as already pointed out in . The sensitivity of the predictions to the difference between different parametrisations for the photonic partons increases with energy. At higher energies one is sensitive to the low-$`x`$ region, about which not much is known. Our earlier analysis in the inelastic formulation had shown that the $`\gamma \gamma `$ cross-sections rise more slowly for the SAS parametrisation of the photonic parton densities. The dependence of our results in this analysis on the parton densities in the photon will be presented elsewhere .
Fig. 5 shows a comparison of the eikonalised $`\gamma p`$ and $`\gamma \gamma `$ cross-sections, in the total formulation, with the different parameters for the $`\gamma p`$ and $`\gamma \gamma `$ cases related as described before. We see indeed that in the EMM the $`\gamma \gamma `$ cross-sections rise faster with $`\sqrt{s}`$ than in the $`\gamma p`$ case, as was expected from the results shown in Fig. 3 and the arguments following them. However, the dependence of this observation on $`P^{\mathrm{had}}`$ and/or the scale parameter still needs to be explored.
## 5 Conclusions
In conclusion we have discussed the results of an analysis of total and inelastic hadronic cross-sections for photon induced processes in the framework of an eikonalised minijet model (EMM). We have fixed the various input parameters of the EMM calculations by using the data on photoproduction cross-sections and then made predictions for $`\sigma (\gamma \gamma →\mathrm{hadrons})`$ using the same values of the parameters. We then compare our predictions with the recent measurements of $`\sigma (\gamma \gamma →\mathrm{hadrons})`$ from LEP. We find that the total cross-section formulation of the EMM predicts a faster rise with $`\sqrt{s}`$ as compared to the inelastic one, for the same value of $`p_{tmin}`$ and scale parameter $`k_0`$. In the former case our extrapolations yield results closer to the L3 data, whereas in the latter case they are closer to the OPAL results. We also find that in the framework of the EMM it is natural to expect a faster rise with $`\sqrt{s}`$ for the $`\gamma \gamma `$ case as compared to the $`\gamma p`$ case.
## 6 Acknowledgement
It is a pleasure to thank Goran Jarlskog and Torbjorn Sjöstrand for organising this workshop, which provided such a pleasant atmosphere for very useful discussions. G.P. is grateful to Martin Block for clarifying discussions on the total cross-section formulation. R.M.G. wishes to acknowledge support from the Department of Science and Technology (India) and the National Science Foundation, under NSF grant Int-9602567, and G.P. from the EEC-TMR-00169.
# L-R asymmetries and signals for new bosons

Presented by M. C. Rodriguez at the VIII Mexican School of Particles and Fields, Oaxaca de Juárez, Oax., México, November 20-29, 1998.
## Abstract
Several left-right parity violating asymmetries in lepton-lepton scattering in fixed target and collider experiments are considered as signals for doubly charged vector bosons (bileptons).
preprint: IFT-P.024/99 March 1999
The left-right asymmetry when only one of the leptons is polarized is defined as follows
$$A_{RL}(ll→ll)=\frac{d\sigma _R-d\sigma _L}{d\sigma _R+d\sigma _L},$$
(1)
where $`d\sigma _{R(L)}`$ is the differential cross section for one right (left)-handed lepton $`l`$ scattering on an unpolarized lepton $`l`$. That is
$$A_{RL}(ll→ll)=\frac{(d\sigma _{RR}+d\sigma _{RL})-(d\sigma _{LL}+d\sigma _{LR})}{(d\sigma _{RR}+d\sigma _{RL})+(d\sigma _{LL}+d\sigma _{LR})},$$
(2)
where $`d\sigma _{ij}`$ denotes the cross section for incoming leptons with helicity $`i`$ and $`j`$, respectively, and they are given by
$$d\sigma _{ij}\propto ∑_{kl}|M_{ij;kl}|^2,\qquad i,j;k,l=L,R.$$
(3)
Another interesting possibility is the case when both leptons are polarized. We can define an asymmetry $`A_{R;RL}`$ in which one beam is always in the same polarization state, say right-handed, and the other is either right- or left-handed polarized (similarly we can define $`A_{L;LR}`$):
$$A_{R;RL}=\frac{d\sigma _{RR}-d\sigma _{RL}}{d\sigma _{RR}+d\sigma _{RL}},\qquad A_{L;RL}=\frac{d\sigma _{LR}-d\sigma _{LL}}{d\sigma _{LL}+d\sigma _{LR}}.$$
(4)
We can also define an asymmetry when one incident particle is right-handed and the other is left-handed and the final states are right- and left-handed or left- and right-handed:
$$A_{RL;RL,LR}=\frac{d\sigma _{RL;RL}-d\sigma _{RL;LR}}{d\sigma _{RL;RL}+d\sigma _{RL;LR}}$$
(5)
or similarly, $`A_{LR;RL,LR}`$. These asymmetries, in Eqs. (4) and (5), are dominated by QED contributions. However, this will not be the case if a bilepton resonance does exist at typical energies of the NLC. To show this fact is the goal of this paper. These asymmetries can be calculated for both fixed target (FT) and collider (CO) experiments.
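Given the four helicity cross-sections, the asymmetries above are simple ratios. A small helper is sketched below; the numerical values in the example call are placeholders for illustration, not measured cross-sections.

```python
def asymmetries(ds):
    """Polarisation asymmetries of eqs. (2) and (4) from differential
    cross-sections ds['RR'], ds['RL'], ds['LR'], ds['LL'] (common units)."""
    A_RL = ((ds['RR'] + ds['RL']) - (ds['LL'] + ds['LR'])) / \
           ((ds['RR'] + ds['RL']) + (ds['LL'] + ds['LR']))    # eq. (2)
    A_R_RL = (ds['RR'] - ds['RL']) / (ds['RR'] + ds['RL'])    # eq. (4)
    A_L_RL = (ds['LR'] - ds['LL']) / (ds['LL'] + ds['LR'])    # eq. (4)
    return A_RL, A_R_RL, A_L_RL

print(asymmetries({'RR': 1.00, 'RL': 0.90, 'LR': 0.95, 'LL': 1.05}))
```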
We can integrate in the scattering angle and define the asymmetry $`\overline{A}_{RL}`$ as
$$\overline{A}_{RL}(ll→ll)=\frac{(∫d\sigma _{RR}+∫d\sigma _{RL})-(∫d\sigma _{LL}+∫d\sigma _{LR})}{(∫d\sigma _{RR}+∫d\sigma _{RL})+(∫d\sigma _{LL}+∫d\sigma _{LR})},$$
(6)
where $`∫d\sigma _{ij}\equiv ∫_{5^o}^{175^o}d\sigma _{ij}`$. All these asymmetries can be studied in future accelerators .
The importance of this sort of parity breaking asymmetry in fixed target experiments in lepton-lepton scattering was first pointed out in Ref. . For the case of electron-electron scattering the mass of the electrons can be neglected. For an energy of $`E=50`$ GeV and for a scattering angle (in the center of mass frame) of $`\theta =90^o`$, the left-right asymmetry defined in Eq. (1) has a value of $`3\times 10^{-7}`$ in the standard model. Radiative corrections reduce this value by about $`40\pm 3\%`$ . It is expected that fixed target experiments like those at SLAC can measure this asymmetry . For muon-muon elastic scattering this asymmetry is $`5.4\times 10^{-5}`$ . We have also studied the non-diagonal elastic scattering $`e\mu →e\mu `$. In the last case we obtain a value of $`5.9\times 10^{-8}`$ for a muon energy of 50 GeV and $`2.9\times 10^{-7}`$ for a muon energy of 190 GeV. At these energies the muon mass cannot be neglected . This type of asymmetry can be measured using the high-energy muon beam M2 of the CERN SPS as in the NA47 experiment .
The relevance of these asymmetries in collider experiments was first pointed out in Refs. . In fixed target experiments the cross sections are large ($`\text{mb}`$) and the asymmetries small ($`10^{-7}`$). On the other hand, in collider experiments the cross sections are small ($`10^{-3}\text{nb}`$) but the asymmetries large ($`0.1`$ for the muon-muon case). Explicitly we have that at energies $`\sqrt{s}=300`$ GeV and $`\theta \simeq 90^o`$ the asymmetry is
$$A_{RL}^{\mathrm{CO},\mathrm{ESM}}(ee→ee)\simeq 0.05,$$
(7)
for the electron-electron case and
$$A_{LR}^{\mathrm{CO},\mathrm{ESM}}(\mu \mu →\mu \mu )\simeq 0.1436,$$
(8)
for the muon-muon case. Future colliders with polarized lepton-lepton scattering can have the appropriate luminosity to measure these parameters.
If a muon-electron collider is constructed in the future, it would be possible to measure $`A_{RL}^{CO;ESM}(\mu e)=0.024`$ for $`E_\mu =190`$ GeV ($`\sqrt{s}\simeq 380`$ GeV) and $`\theta =90^o`$. At high energies the mass effects are not important.
So far all the results were obtained in the standard model. In certain kinds of models there are doubly charged scalar ($`H^{--}`$) and/or vector ($`U^{--}`$) bileptons . As expected, the asymmetry is larger at the $`U`$-pole. For instance,
$$A_{RL}^{\mathrm{CO},\mathrm{ESM}+\mathrm{U}}(ee→ee)=0.099,$$
(9)
and
$$A_{RL}^{\mathrm{CO},\mathrm{ESM}+\mathrm{U}}(\mu \mu →\mu \mu )=0.1801,$$
(10)
when we add to the standard model asymmetry the contributions due to the bilepton $`U`$, for $`M_U=300`$ GeV and $`\mathrm{\Gamma }_U=36`$ MeV. In Fig. 1 we show the behaviour of the asymmetry $`A_{RL}^{\mathrm{CO},\mathrm{ESM}+\mathrm{U}}`$ as a function of the mass of the boson $`U^{--}`$.
For the electron-electron case we can define the quantity
$$\delta \overline{A}_{RL}(ee→ee)\equiv (\overline{A}_{RL}^{\mathrm{CO},\mathrm{ESM}+\mathrm{U}}-\overline{A}_{RL}^{\mathrm{CO},\mathrm{ESM}})/\overline{A}_{RL}^{\mathrm{CO},\mathrm{ESM}},$$
(11)
where the $`\overline{A}`$’s are defined in Eq. (6). Although $`\delta \overline{A}_{RL}`$ is large (near 50 for $`\sqrt{s}=300`$ GeV) at the $`U`$-resonance, we would like to stress that it remains appreciably large even far from the $`U`$-peak. That particular behavior suggests that this quantity could be the one to be considered in the search for new physics, like the bilepton $`U^{--}`$, in future colliders. On the other hand, the asymmetry is insensitive to the contributions of the doubly charged scalars.
We have used also the asymmetries defined in Eq. (4). In this case it is interesting to note that
$$A_{R;RL}^{\mathrm{CO},\mathrm{ESM}+\mathrm{U}}(ee→ee)\simeq -A_{R;RL}^{\mathrm{CO},\mathrm{ESM}}(ee→ee),$$
(12)
and we see that such a difference in sign is a good signature for the discovery of the vector bilepton.
The contributions of an extra neutral vector boson $`Z^{\prime }`$ have also been considered for the case $`A_{RL}(\mu e)`$. In this case the asymmetry is considerably enhanced, and it will be appropriate in searching for extra neutral vector bosons with masses up to 1 TeV. Since in the 331 model the $`Z^{\prime }`$ couplings with the leptons are flavor conserving, we do not have additional suppression factors coming from mixing . Hence, the $`\mu e`$ elastic scattering can be very helpful, even with the present experimental capabilities, for looking for non-standard physics effects.
###### Acknowledgements.
This work was supported by Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP), Conselho Nacional de Ciência e Tecnologia (CNPq) and by Programa de Apoio a Núcleos de Excelência (PRONEX).
# Fault-Tolerant Quantum Computation with Local Gates
## 1 Introduction
Quantum computation has the potential to offer vast speedups over classical computation. For instance, Shor’s factoring algorithm and Grover’s database search algorithm both offer great improvements over classical algorithms for the corresponding problems. However, quantum computers are likely to be highly susceptible to errors, whether caused by imperfect gates, decoherence due to interactions with the environment, or any other cause.
In classical computers, error correction is rarely necessary because the classical bits are stored using digital devices, which, at every time step, will restore the state of the system to a 0 or a 1. They also are made up of a large number of smaller particles (electrons, usually), and therefore act as a simple (classical) repetition code.
The theory of quantum fault-tolerant computation has developed in an attempt to allow a similar remedy to the buildup of errors in a quantum computer. Instead of performing our algorithm on some number of physical quantum bits (qubits), we implement it on a collection of logical qubits encoded using a somewhat larger number of physical qubits. The logical qubits live in a carefully chosen coding subspace of the full Hilbert space of the physical qubits. Then, by repeatedly performing an error correction algorithm during the computation, we can at every step restore (or nearly so) the data to the coding space, thus fending off errors.
In fact, by using concatenated quantum codes, we can produce an error threshold: if the fundamental error rates per gate and per time step are below some threshold, we can perform arbitrarily long quantum computations with arbitrarily low logical error rate. This threshold is known to be at least $`10^{-6}`$, but is probably at least a few orders of magnitude higher. Estimates range as high as $`1/700`$.
However, this result relies on a number of assumptions about the computer and errors, some of which are described below. One assumption is that gates can be performed between any pair of qubits. In practice, this may not be feasible at all, since in a large computer, qubits will be constrained by the dimensionality of space to be far apart from each other. For instance, in Kane’s proposal for a solid-state quantum computer using single-spin NMR, only adjacent qubits (in one or perhaps two dimensions) can directly communicate.
Luckily, this assumption is not critical to the result. As Aharonov and Ben-Or note, “the procedures …can be made …to operate only on nearest neighbors, by adding gates that swap between qubits.” As we shall see, this is true, but we must be careful in how we design the computer. The ancilla qubits we need to perform error correction must be placed sufficiently near the computational qubits to be corrected, or too many errors will accumulate in the time necessary to move the interacting qubits together, taking us above the error threshold. Furthermore, when a number of levels of concatenation are used (as is necessary for long computations), some ancillas will necessarily be far away from the data, and we must be certain this does not destroy the threshold result.
## 2 The Threshold Result
First, I will review the usual threshold result. Each qubit is encoded with a concatenated quantum code, usually using the seven-qubit code. That is, each qubit is encoded as seven qubits via the mapping
$$|0\rangle \to |0000000\rangle +|1111000\rangle +|1100110\rangle +|1010101\rangle +|0011110\rangle +|0101101\rangle +|0110011\rangle +|1001011\rangle$$
(1)
$$|1\rangle \to |1111111\rangle +|0000111\rangle +|0011001\rangle +|0101010\rangle +|1100001\rangle +|1010010\rangle +|1001100\rangle +|0110100\rangle ,$$
(2)
and each of those seven qubits is again encoded using the same map, and so on for $`L`$ levels.
The seven-qubit code has a number of properties that make it particularly favorable for fault-tolerant computation. The logical $`0`$ of the seven-qubit code is the superposition of the even codewords of the (classical) Hamming code, whereas the logical $`1`$ is the superposition of the odd codewords of the Hamming code. Therefore, to make a measurement of $`0`$ vs. $`1`$, we need only measure each qubit in the block individually, and we will be able to determine the measurement result from the parity of the resulting Hamming codeword. We do not need to perform an additional quantum error correction step before this measurement — phase errors will not affect the measurement result, and bit flip errors will show up as bit flip errors in the classical codeword, which can be corrected using classical methods. (The parity of the codeword should only be determined after this correction.)
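As a rough illustration of this classical decoding step, here is a minimal sketch (not from the paper; a brute-force nearest-codeword search stands in for standard Hamming syndrome decoding) using exactly the sixteen codewords listed in Eqs. (1) and (2):

```python
EVEN = ["0000000", "1111000", "1100110", "1010101",
        "0011110", "0101101", "0110011", "1001011"]
ODD = ["".join("1" if b == "0" else "0" for b in w) for w in EVEN]  # bitwise complements
CODEWORDS = EVEN + ODD          # the 16 classical Hamming codewords

def decode_block(measured):
    """Correct bit flips by nearest-codeword search, then read the logical
    value (0 or 1) off the parity of the corrected word."""
    hamming = lambda a, b: sum(x != y for x, y in zip(a, b))
    corrected = min(CODEWORDS, key=lambda c: hamming(c, measured))
    return corrected.count("1") % 2

assert decode_block("1111000") == 0    # clean even codeword -> logical 0
assert decode_block("1011000") == 0    # one bit flip, same logical result
assert decode_block("0000111") == 1    # clean odd codeword -> logical 1
```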
In addition, it is easy to perform a number of fault-tolerant operations on the seven-qubit code. The Hadamard transform $`H:|j\rangle \to (|0\rangle +(-1)^j|1\rangle )/\sqrt{2}`$, the phase gate $`P:|j\rangle \to i^j|j\rangle `$, and the controlled NOT (or XOR) $`|j,k\rangle \to |j,j\oplus k\rangle `$ can all be performed via simple transversal operations. A transversal operation only involves gates that interact the $`r`$th qubit in a block with itself and the $`r`$th qubits in other blocks. This prevents any errors from spreading within a block, so a single physical error cannot cause a whole block of seven to go bad.
The logical CNOT has another useful property: individual (or multiple) bit flip errors in the control block will propagate forwards, producing the corresponding bit flip errors in the target block, and phase errors in the target block will propagate backwards, producing the corresponding phase errors in the control block. In addition, the logical Hadamard transform will convert bit flip errors to phase errors and vice-versa, without changing the location of the errors.
We can take advantage of these facts to produce a simple fault-tolerant error correction circuit, shown in figure 1.
We use the CNOT to copy the bit flip errors from the data block into an ancilla block, and measure the ancilla to see where those errors are. A similar procedure tells us the phase errors. The ancillas begin in the logical $`|0\rangle `$ and logical $`|0\rangle +|1\rangle `$ states so that when the data is correct, it is unaffected by the error correction procedure. From just one measurement, there is no way to tell whether an error is originally native to the data or to the ancilla, so we should repeat this procedure multiple times. Using some decision process, we then decide what the most likely error is and correct it.
Also, note that while we are measuring phase errors, bit flip errors can pass from the ancilla into the data, and vice-versa. If the errors are just single-qubit errors, this is not a major problem — we can correct them using the regular error-correction procedure. However, the process of creating the ancilla blocks may introduce correlated errors, and if those errors enter the data, it will be a serious problem. Therefore, we must also verify the ancilla blocks to eliminate such correlated errors. Precisely how we do this is not important for the discussion below, but it will certainly involve a number of additional ancilla qubits.
It is not difficult to see that all of the above properties hold equally well for the concatenated seven-qubit code. CNOTs, Hadamard, and phase gates can all be performed transversally on blocks of size $`7^L`$, and error correction can be performed by interacting a full block with additional ancilla blocks of size $`7^L`$. Since the concatenated seven-qubit code is still a superposition of concatenated classical Hamming codewords, we can determine the error at all levels just from measuring these $`7^L`$-qubit blocks.
In order to complete a universal set of gates, we need an additional gate, such as the Toffoli gate ($`|i,j,k\rangle \to |i,j,k\oplus ij\rangle `$). The construction of the Toffoli gate is somewhat involved, but requires three additional ancilla blocks encoded at the same level as the data, plus some “cat” states ($`|00\cdots 0\rangle +|11\cdots 1\rangle `$) encoded with one less level. The construction also uses a number of Toffoli gates performed at the next lower level of concatenation. Therefore, construction of the Toffoli gate at level $`L`$ will first require the construction of Toffoli gates at level $`L-1`$, which requires the construction of Toffoli gates at level $`L-2`$, and so on.
Once we have all these tools, if we make a few assumptions about the form of the errors and our capabilities, we can prove the existence of an error threshold, below which arbitrarily long quantum computations are possible. At level $`k`$, there are a fixed number of places for errors to occur, and two errors are required to produce an error at level $`k+1`$ (our use of fault-tolerant procedures ensures this). Therefore, the error rate $`P_{k+1}`$ at level $`k+1`$ is related to the error rate $`P_k`$ at level $`k`$ by
$$P_{k+1}=CP_k^2,$$
(3)
for some constant $`C`$, which essentially counts all possible combinations of level $`k`$ errors that are fatal at level $`k`$ (though most such combinations will be correctable at level $`k+1`$). Then
$$P_L=(1/C)(CP_0)^{2^L}.$$
(4)
$`P_0`$, the error rate at level $`0`$, is the error rate on the physical qubits. If $`P_0<1/C`$, then $`P_L`$ will approach $`0`$ extremely rapidly, as a double exponential in the number of levels. In fact, since the number of qubits required to encode at level $`L`$ is exponential in $`L`$, the error rate is a single exponential in the number of qubits (or equivalently, the number of qubits required is polylogarithmic in the desired error rate per gate). The value of the threshold is (at least) $`1/C`$.
Because this scaling is so rapid, the result still holds true even if the number of possible places for errors increases, even exponentially, with the number of levels. Suppose
$$P_{k+1}=Cr^{k+1}P_k^2,$$
(5)
for constants $`C`$ and $`r`$. Then
$$P_L=\frac{1}{Cr^2}\left(Cr^2P_0\right)^{2^L}/r^L.$$
(6)
The threshold is still present, but is now $`1/(Cr^2)`$. This fact will be key to showing a threshold is still present when we use local gates. In this case, we cannot avoid the error rate per level increasing, but we will arrange the computer so that it only increases according to equation (5). Note that the final error rate is still a double exponential in the number of levels.
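As a quick numerical sanity check of the recursions (3)-(6), the following sketch iterates both maps and compares them with the closed forms; the values of $`C`$, $`r`$ and $`P_0`$ are purely illustrative, not taken from the text:

```python
C, r, L = 1.0e4, 2.0, 5            # hypothetical values
P0 = 1.0e-5                        # below both thresholds 1/C and 1/(C r^2)

def iterate(with_r):
    """Apply P_{k+1} = C r^{k+1} P_k^2 (or the r = 1 version) L times."""
    P = P0
    for k in range(L):
        P = C * (r ** (k + 1) if with_r else 1.0) * P * P
    return P

closed4 = (1 / C) * (C * P0) ** 2 ** L                       # Eq. (4)
closed6 = (C * r**2 * P0) ** 2 ** L / (C * r**2 * r**L)      # Eq. (6)

assert abs(iterate(False) / closed4 - 1) < 1e-9
assert abs(iterate(True) / closed6 - 1) < 1e-9
print(f"double-exponential suppression: {closed4:.2e} and {closed6:.2e}")
```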
This does not completely prove the existence of a threshold for full-fledged fault-tolerant computation. It also must be possible to reliably create ancilla states for error correction at level $`L`$, and it must be possible to perform Toffoli gates at level $`L`$. Creating a reliable level $`L`$ ancilla requires first a number of reliable level $`L-1`$ ancillas, then a number of level $`L-1`$ gates. For instance, to create an encoded $`|0\rangle `$ state at level $`L`$, we take $`7`$ level $`L-1`$ encoded $`|0\rangle `$ states and perform an appropriate circuit interacting them to produce the level $`L`$ state.
Since multiple blocks (of $`7^{L-1}`$ qubits) within the level $`L`$ block interact, this encoding network could introduce multiple correlated errors in the level $`L`$ block, so we must verify the encoded states. For instance, one way to do this is to compare them against each other (using essentially the regular error-correction procedure) — correlated errors in one block will be uncorrelated with errors in another block. This will produce a smaller number of more reliable ancillas, which we again compare against each other. With a constant number of rounds of comparison, we can reduce the chance of correlated errors to less than the chance of a similar number of errors arising individually (recall that uncorrelated errors are arising during the verification procedure as usual). Other methods will produce qualitatively similar results.
If the effective error rate in the level $`L-1`$ ancillas and the level $`L-1`$ gates is low enough, then we can create reliable level $`L`$ ancillas. The procedure is self-similar: if creating a reliable ancilla at level $`k`$ (including all verification steps) requires a total of $`N_a`$ level $`k-1`$ ancillas (in the example verification procedure above, $`N_a`$ would need to include not just the seven level $`k-1`$ ancillas encoded for the original block, but also all of the ancillas used to create the blocks against which the original level $`k`$ ancilla is compared, and for the blocks against which they are compared, and so on), a level $`k+1`$ ancilla also requires $`N_a`$ level $`k`$ ancillas. Therefore, to create a level $`L`$ ancilla ultimately requires $`N_a^L`$ level $`0`$ ancillas.
Essentially the same logic applies to the level $`L`$ Toffoli gate. The requirements for reliable preparation and reliable Toffoli gates will both somewhat lower the threshold, but will not destroy it.
Perhaps the most important assumption we make is that we can perform operations in parallel. Otherwise the situation becomes akin to spinning plates to keep them from falling — each requires a certain amount of time to spin, and once the number of plates becomes too large, we will not have time to spin all of them before the first one falls. The precise degree of parallelism we assume will heavily impact the error threshold. Generally I will assume maximum parallelism — all pairs of qubits that can interact can do so at the same time, provided no qubit participates in two different interactions at once — but since I will not be calculating an explicit error threshold, this assumption will not greatly impact the discussion.
Another important assumption is that errors are uncorrelated, both in time and in space. If there is a chance $`p`$ per time step that the whole computer will break down, we will not be able to perform more than about $`1/p`$ computational steps on average. We can allow small-scale correlations without much damage (though the lower levels of the code will be less effective than expected in that case), but long-range correlations in the errors have the potential to cause serious problems. Systematic gate errors can also be tolerated, but may significantly lower the threshold.
We also require a supply of fresh qubits during the computation to act as ancillas during error correction. These ancillas provide a place for us to dump entropy — otherwise the Second Law of Thermodynamics would forbid arbitrarily long computations. In this paper, I shall assume that qubits can be initialized and erased in place. This appears to be a strict requirement — if a qubit has to move a long distance from where it is initialized to where it is used, it will likely be randomized by the time it arrives.
Note that all three of the above assumptions apply equally well to fault-tolerant classical computation. The last could conceivably be circumvented by a careful use of irreversible gates, but the ability to perform an appropriate variety of gates is really just another form of the same assumption.
Some other assumptions are useful, but not necessary. For instance, we generally assume that errors may randomize the qubits, but will never cause them to leave the computational space. Since it is possible in principle to detect such a “leakage,” we can remove this assumption by adding in “stop leak” gates that watch for such an error. It is also frequently convenient to assume we have the capability to make measurements on the qubits during the computation, and to rapidly perform (modest) classical computations between quantum steps. We can remove this assumption by simulating the classical computation with a quantum circuit (though it must follow the design principles of a classical fault-tolerant computer). In the case of local quantum gates, we would intersperse quantum bits with regions designated for these classical computations (see, for instance, for a one-dimensional classical fault-tolerant architecture). Having a reliable classical computer available considerably simplifies the task of building a fault-tolerant quantum computer, since we can assume our decision processes (such as for which error occurred) are reliable.
Another unnecessary assumption is that arbitrary pairs of qubits can communicate directly. In this paper, I shall show that it is sufficient to interact nearby pairs of qubits. Then by moving the data around, we can allow originally distant pairs of qubits to interact. We must, however, be careful that the time required to do so is not too large, or by the time the qubits are brought together, they will both be erroneous.
## 3 Swapping Qubits
In order to perform effectively long-range interactions, we shall require the ability to move qubits around. We will accomplish this by swapping adjacent qubits. An appropriate series of swaps between adjacent qubits will allow us to perform an arbitrary permutation of the qubits. We will primarily be interested in cyclic rotations, moving a single qubit a distance $`d`$ (qubit $`1`$ becomes qubit $`d`$, while qubit $`s`$ becomes qubit $`s-1`$ for $`s=2,\mathrm{\dots },d`$), requiring $`d-1`$ swaps. To interact two qubits initially a distance $`d+1`$ apart, we perform one such rotation, bringing the first qubit adjacent to the second, then interact the two, then perform a second rotation, returning the first qubit to its original location. Altogether, we need $`2(d-1)`$ swaps for this interaction.
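A minimal sketch (not from the paper) of this routing pattern, treating qubits as labels on a line and counting swaps:

```python
def rotate(qubits, start, stop, back=False):
    """Move qubits[start] to position stop-1 via adjacent swaps
    (or undo that rotation if back=True); returns the swap count."""
    steps = range(stop - 2, start - 1, -1) if back else range(start, stop - 1)
    for i in steps:
        qubits[i], qubits[i + 1] = qubits[i + 1], qubits[i]
    return stop - 1 - start

d = 5
line = list(range(d + 1))     # the two qubits to interact sit at indices 0 and d
swaps = rotate(line, 0, d)    # d-1 swaps: qubit 0 now at index d-1, next to index d
assert line[d - 1] == 0
# ... the two-qubit gate would act on indices d-1 and d here ...
swaps += rotate(line, 0, d, back=True)      # d-1 swaps to restore the order
assert line == list(range(d + 1)) and swaps == 2 * (d - 1)
```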
While we do not need to worry about the swap operation propagating preexisting errors (it swaps the errors along with the data), we do have to worry about errors in the swap gate itself, which could introduce correlated errors in the two qubits being swapped. To solve this problem, we introduce an auxiliary qubit between the two computational qubits A and B (which may themselves be ancillary to the primary computation). Then the following series of gates will swap A and B without ever interacting them directly:
1. Swap (1, 2)
2. Swap (1, 3)
3. Swap (2, 3)
(see figure 2).
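The following sketch (again not from the paper) checks the three-swap network above on labels: A and B end up exchanged, and their carriers never meet in any single gate:

```python
state = {1: "A", 2: "x", 3: "B"}      # position -> carried value; x is auxiliary
met = []

def swap(i, j):
    met.append(frozenset((state[i], state[j])))   # record which values interact
    state[i], state[j] = state[j], state[i]

swap(1, 2); swap(1, 3); swap(2, 3)    # the three-gate network listed above

assert state == {1: "B", 2: "x", 3: "A"}          # A and B have been exchanged
assert frozenset(("A", "B")) not in met           # ...without ever meeting directly
```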
While the auxiliary qubit may acquire correlated errors with A or B, that does not matter, since the value of the auxiliary qubit is completely immaterial. (We might as well perform two unrelated quantum computations on this computer at the same time, with the auxiliary qubits for one being the computational qubits for the other.) Note that this network requires next-to-nearest-neighbor gates.
To allow swaps between arbitrary pairs of neighboring computational qubits, we need to alternate computational qubits with auxiliary qubits, as in figure 3a. Note that in two or more dimensions, we can manage with simple nearest-neighbor gates by arranging cul-de-sacs where we can temporarily store one computational qubit while moving another past it (figure 3b). For instance, to move a qubit A two positions up (figure 3c), we simply slide the two (computational) qubits above it into the cul-de-sacs down and to the right from their normal positions, using regular nearest-neighbor swaps. Then move A up to its destination, and move the two displaced qubits into the computational slots down and to the left. With or without cul-de-sacs, moving a qubit a distance $`d`$ requires $`O(d)`$ gates.
The remainder of the protocol only requires swaps and other interactions between nearest neighbors. One might think the Toffoli gate would require next-to-nearest neighbor interactions, but in fact, it can be built up from one- and two-qubit gates. Therefore, in two or more dimensions, we will be able to perform fault-tolerant computation with only nearest-neighbor interactions, whereas in one dimension, the inability to use cul-de-sacs to move the data out of the way requires us to go to next-to-nearest-neighbor interactions. However, in many almost one-dimensional systems (such as two parallel lines of qubits, or a single line with an occasional additional qubit on the side), we can again move to nearest-neighbor interactions.
In fact, since swap gates cannot propagate errors, it will likely be possible to use nearest-neighbor gates even in one-dimensional systems. However, since a single swap gate could introduce correlated errors on pairs of qubits in the same block, it might be necessary to use a concatenated code that corrects two errors per level instead of the concatenated seven-qubit code.
## 4 Three Dimensions
When our qubits lie on a three-dimensional cubic lattice, we use the arrangement of figure 4 for our computer.
Each logical qubit has associated with it many ancillas. We arrange a single encoded data qubit on a plane with all of its ancillas. Then we stack the planes, aligning the data qubits in all of the planes. Therefore, to perform a transversal interaction between adjacent data qubits, we do not need to move anything. To perform a transversal interaction between distant data qubits, we must first move them together. By treating each step in the move as a regular computational step, we can see that a computation involving $`K`$ logical qubits will be slowed down by a factor of at most $`K`$. When $`K`$ is large, we will have to perform error correction at intermediate stages during the move, but this does not present any particular extra burden, since we have the ancillas for error correction constantly available.
In a single plane, we have many lines of qubits. One line will consist of the data block of $`7^L`$ qubits. Other lines will consist of ancillas of various forms and functions. After being used, an ancilla is reinitialized to be used again (a process which we are assuming can be done in place). The ancillas are aligned with the data so that all interactions take place along a single “interaction” axis (perpendicular to the arrangement of the data block), thus realizing the transversal nature of the interactions. The only time interactions will occur along the other axis (the “data” axis) is when an ancilla is being encoded (to create a “cat” state, for instance, or a level $`k`$ ancilla from level $`k-1`$ ancillas). Any such encoding will always be followed by verification steps.
The ancillas necessary for level $`0`$ operations, including both error correction and the Toffoli gate, must be near the data — we cannot tolerate moving qubits a long distance for level $`0`$ operations. Therefore we place the level $`0`$ ancillas immediately adjacent to the data along the interaction axis. There are a fixed number $`N_o`$ of such ancillas, not increasing with the number of levels of encoding used in the computer (we only count the ancillas that directly interact with the data qubit). Depending on their function, these level $`0`$ ancillas may have a number of different forms. They are frequently encoded in blocks of $`7`$, which interact with corresponding $`7`$-qubit blocks of the code. Level $`0`$ ancillas which are adjacent along the data axis are completely independent — they interact with different blocks of the data, and no communication between them is necessary.
There are also a total of $`N_o`$ level $`1`$ ancillas for level $`1`$ operations and error correction on the data. We place the first of these after the last level $`0`$ ancilla on the interaction axis. (Note that a “level $`k`$ ancilla” is used for operations or error correction at level $`k`$; when it is used for error correction, it may in fact be a state encoded at level $`k+1`$.) Following the first level $`1`$ ancilla, we place a number $`N_t`$ of level $`0`$ ancillas, which are necessary for preparation and error-correction of the level $`1`$ ancilla. After that comes another level $`1`$ ancilla, followed by another set of level $`0`$ ancillas, and so on. We require the ability to correct errors on the level $`1`$ ancillas because they may have to move a considerable distance to interact with the data, and we may wish to preserve their state during the move.
After all of the level $`1`$ ancillas, we place the first level $`2`$ ancilla, followed by the level $`0`$ ancillas necessary to maintain it. After those comes the first of the level $`1`$ ancillas necessary to prepare and maintain the level $`2`$ ancilla, then the level $`0`$ ancillas to prepare and maintain the level $`1`$ ancilla, then another level $`1`$ ancilla, and so on. The level $`2`$ ancilla requires a total of $`N_t`$ level 1 ancillas, and each has associated with it $`N_t`$ level $`0`$ ancillas. Therefore, each level $`2`$ ancilla requires at most $`(N_t+1)^2`$ lines of qubits for its support structure.
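The support-structure bookkeeping can be sketched in a few lines (the value of $`N_t`$ is hypothetical); the pattern continues recursively, as described next:

```python
def support_lines(k, n_t):
    """Lines of qubits taken up by one level-k ancilla plus its full
    support structure: itself plus n_t supported level-(k-1) ancillas."""
    return 1 if k == 0 else 1 + n_t * support_lines(k - 1, n_t)

N_t = 4                                   # hypothetical
for k in range(4):
    print(k, support_lines(k, N_t), "<=", (N_t + 1) ** k)
# S(2) = 1 + N_t + N_t^2 <= (N_t + 1)^2, the bound quoted above; a level-k
# ancilla therefore sits O(N^k) lines from the data for some constant N.
```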
We follow this pattern as far as necessary — each level $`k`$ ancilla requires $`N_t`$ level $`k-1`$ ancillas, each of which requires $`N_t`$ level $`k-2`$ ancillas, each of which requires $`N_t`$ level $`k-3`$ ancillas, and so on. We can see that the total number of ancillas grows exponentially with level. This means that to interact with the data qubit, a level $`k`$ ancilla will have to move a distance $`N^k`$ for some constant $`N`$ (greater than $`N_t`$ and $`N_o`$). Consequently, the interaction takes $`O(N^k)`$ times as much time to occur as when we had long-distance interactions, so the possibilities for error also increase exponentially with $`k`$. However, this merely recovers equation (5),
$$P_{k+1}=Cr^{k+1}P_k^2,$$
(7)
which still has an error threshold. Therefore, we can perform fault-tolerant computation with local gates in three dimensions.
Note that a block encoded at level $`k`$ may have different error rates at level $`0`$, level $`1`$, and so on (a level $`0`$ error is a single erroneous qubit; in a level $`1`$ error, a whole block of $`7`$ has gone bad). It is important that only the probability of level $`k`$ errors, and not the probability of level $`0`$ errors (or errors at another fixed level), increases with $`k`$. If a level $`k`$ ancilla experiences an exponential (or even linear) increase in the probability of level $`0`$ errors, equation (5) will not be valid. It is to solve this potential problem that we need to be able to correct errors on the ancillas as well as prepare them. For instance, we might move the level $`k`$ ancilla $`s`$ spaces towards the data, then perform error correction using the local extra ancillas, staving off low-level errors, then resume its movement. Note that to perform level $`l`$ error correction on the ancilla (with $`l<k`$), we may need to move level $`l`$ ancillas around to bring them next to the level $`k`$ ancilla. However, these level $`l`$ ancillas will only have to move a distance at most $`N^l`$ (or perhaps $`2N^l`$, since partway through the move two level $`k`$ ancillas could be adjacent), so equation (5) holds. We should also perform low-level error correction on the data at the same time as on the ancilla, since the data is accumulating errors while it waits for the level $`k`$ ancilla to arrive. Some other ancillas may also contain important information, and we should perform error correction on them too.
Since level $`L`$ ancillas now have to move a distance $`N^L`$ to interact with the data, this protocol produces a slowdown by a factor of $`O(N^L)`$ relative to the usual fault-tolerant protocol. Since the procedure requires more stops for low-level error correction, which in turn means more ancillas, there will be a similar increase in the number of qubits needed. However, since the error rate is a double exponential in $`L`$, this means the additional overhead is only polylogarithmic in the desired error rate.
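A back-of-envelope sketch of this overhead claim (all parameter values hypothetical): the number of levels needed for a target logical error rate eps grows only like log log(1/eps), so the slowdown factor $`N^L`$ stays polylogarithmic in 1/eps:

```python
C, P0, N = 1.0e4, 1.0e-5, 10       # hypothetical; P0 is below the threshold 1/C

def levels_needed(eps):
    """Smallest L with P_L <= eps under the recursion P_{k+1} = C P_k^2."""
    L, P = 0, P0
    while P > eps:
        P = C * P * P
        L += 1
    return L

for eps in (1e-9, 1e-15, 1e-30):
    L = levels_needed(eps)
    print(f"eps={eps:.0e}: L={L}, extra slowdown N^L={N ** L:.0e}")
```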
## 5 Two Dimensions
In two dimensions, we adopt a somewhat similar arrangement. Now the individual data qubits and their ancillas form lines, which are again aligned so that transversal gates between adjacent data qubits are straightforward (see figure 5).
The arrangement of the individual lines can also be seen in figure 5. Next to each data qubit, we place the corresponding qubits from the $`N_o`$ level $`0`$ ancillas. We do this for a block of seven data qubits (since we are using the seven-qubit code), and then place the level $`1`$ blocks for the $`N_o`$ level $`1`$ ancillas required to perform error correction and Toffoli gates at level $`1`$ with the data. These blocks themselves contain, interspersed with the qubits in the level $`1`$ blocks, the $`N_t`$ level $`0`$ ancillas to create and maintain the level $`1`$ ancillas. We repeat this pattern (level $`1`$ data blocks next to $`N_t`$ level $`1`$ ancilla blocks) seven times, then position the level $`2`$ ancilla blocks with their support structures between the level $`2`$ data blocks.
As in the three dimensional case, this structure means that a level $`k`$ ancilla will have to move a distance $`N^k`$ to interact with the data (although $`N`$ may be bigger). Again, we may have to perform level $`l`$ error correction along the way (with $`l<k`$), but a level $`l`$ ancilla is never further than $`N^l`$ places away. We once again arrive with a recursion relation in the form (5), so we still have an error threshold.
## 6 One Dimension
Suppose we had just a two-qubit quantum computer. We could easily convert the two-dimensional model into a one-dimensional model by alternating qubits from the line associated with the first data qubit with the line associated with the second data qubit (so we would have a qubit from data block $`1`$ followed by a qubit from data block $`2`$, then a level $`0`$ ancilla qubit for data block $`1`$, then a level $`0`$ ancilla qubit for data block $`2`$, and so on). In this model, each ancilla will have to move exactly twice as far as in the two-dimensional case, so there will still be a threshold, though it will be half as large. To interact the two data qubits, we should perfectly align the blocks, so that qubit number $`57`$ from block $`1`$ is right next to qubit number $`57`$ from block $`2`$. However, if the logical qubits do not need to interact, there is no reason the blocks need to be aligned. We will still be able to perform error correction on the blocks separately, even if they are out of phase.
In the two-dimensional model, each data block with its ancillas (and the support structure for the ancillas) took up only a finite amount of space ($`T^L`$ for some constant $`T`$ when there are $`L`$ levels altogether). That means that we can create an arbitrarily large one-dimensional quantum computer by placing these blocks of $`T^L`$ alongside each other.
However, to interact two adjacent blocks would require moving qubits a distance $`T^L`$ — too far to go without error correction. The solution is to move the support structure for the data block along with the data block itself, interleaving the two blocks as in the two-qubit example. We will probably have to stop during the move to do error correction, and the blocks will still be out of phase at this point. Since we do not need to interact the blocks to perform error correction, this is not a problem. We can bring the blocks into phase, interact them, then move them back apart.
All in all, this process will slow the computer down by an additional factor of $`T^L`$ (beyond the two-dimensional case). Again, this only results in an additional polylogarithmic slowdown. Therefore, even for the one-dimensional case, fault-tolerant quantum computation is possible with local gates.
## Acknowledgements
I would like to thank David DiVincenzo and particularly John Preskill for helpful conversations. This research is supported by the Department of Energy under contract W-7405-ENG-36.
# Strong optical line variability in Mkn 110
Visiting Astronomer, German-Spanish Astronomical Centre, Calar Alto, operated by the Max-Planck-Institute for Astronomy, Heidelberg, jointly with the Spanish National Commission for Astronomy.
## 1 Introduction
Markarian 110 is a nearby (z=0.0355) Seyfert 1 galaxy with highly irregular morphology. The apparent magnitude of the total system is m<sub>V</sub>=15.4 mag (Weedman 1973) corresponding to M<sub>V</sub>=-20.4 mag ($`H_0`$=75 km s<sup>-1</sup> Mpc<sup>-1</sup>). A foreground star is projected on the host galaxy at a distance of 6 arcsec to the nucleus in north-east direction (see Fig. 1). Therefore, it has been supposed in some early papers that Mkn 110 might be a double nucleus galaxy (Petrosian et al. 1978).
On the other hand the peculiar morphology of Mkn 110 is an indication for a recent interaction and/or merging event in this galaxy (Hutchings & Craven 1988).
One can clearly recognize a tidal arm to the west with a projected length of 50 arcsec (corresponding to 35 kpc) and further signs of asymmetry in the disturbed host galaxy on the R-band image (Fig. 1).
Ten years ago we started a long-term variability campaign to study the continuum and emission line intensity variations in selected AGNs. Besides our principal interest in the long-term variability behaviour of these galaxies in its own right, we want to compare the individual variability properties with those of other galaxies from the international AGN watch campaign (Peterson et al. 1991) (e.g. NGC 5548, Kollatschny & Dietrich 1996) and the LAG campaign (Robinson 1994) (e.g. NGC 4593, Kollatschny & Dietrich 1997). Further results on continuum and H$`\beta `$ variations in Mkn 110 have been published in a recent paper on the variability of Seyfert 1 galaxies (Peterson et al. 1998a).
## 2 Observations and data reduction
We took optical spectra of Mkn 110 at 24 epochs from February 1987 until June 1995. The sampling of the observations extends from days to years. In Table 1 we list our observing dates and the corresponding Julian Dates. The spectra were obtained at Calar Alto Observatory in Spain with the 2.2 m and 3.5 m telescopes as well as at McDonald Observatory in Texas with the 2.1 m and 2.7 m telescopes. The individual exposure times range from 10 minutes to 1 hour (see Table 1). We used spectrograph slits with projected widths of 2 to 2.5 arcsec and 2 arcmin length under typical seeing conditions of 1 to 2 arcsec. We extracted spectra of the central 5 arcsec. The slit was oriented in north-south direction in most cases.
To investigate the spatial extension of the narrow line region we took spectra at different position angles: 0°, 45°, 90°, and 135°. A possible extended \[OIII\]$`\lambda `$5007 emission line flux was always less than 3% of the nuclear point-like emission. Mkn 110 has been inspected by Nelson et al. (1996) with the Hubble Space Telescope WFPC-1 in the near-infrared spectral range. They detected a dominant unresolved nucleus in this galaxy.
Our optical spectra typically cover a wavelength range from 4000 Å to 7200 Å with a spectral resolution of 3 to 7 Å per pixel. We used different CCD detectors in the course of this monitoring program: until 1989 a RCA-chip (1024x640), in January and July 1992 a GEC-chip (1155x768), and in August 1992 a Tektronix-chip (1024x1024).
The reduction of the spectra (flat fielding, wavelength calibration, night sky subtraction, flux calibration, etc.) was done in a homogeneous way using the ESO MIDAS package.
The absolute calibration of our spectra was achieved by scaling the \[OIII\]$`\lambda `$5007 line of all spectra to those obtained under photometric conditions. Our absolute \[OIII\]$`\lambda `$5007 flux corresponds within 5% to that obtained by Peterson et al. (1998a). For a better comparison of these two data sets we will use exactly the same \[OIII\]$`\lambda `$5007 flux of $`2.26\times 10^{13}`$ erg s<sup>-1</sup> cm<sup>-2</sup>. Furthermore, we corrected all our data for small spectral shifts and resolution differences with respect to a mean reference spectrum using an automatic scaling program of van Groningen & Wanders (1992).
The R-band image of Mkn 110 (Fig. 1) was taken with the 2.2m telescope at Calar Alto observatory on September 20, 1993 with an exposure time of 6 minutes. Again, we reduced this CCD image with the ESO MIDAS package.
In the course of our discussion we additionally will make use of archival IUE spectra of Mkn 110 taken on February 28 and 29, 1988.
## 3 Results
Some typical spectra are plotted in Fig. 2 showing the range of intensity variations.
Immediately one can recognize the strong variations in the continuum, in the Balmer lines and especially in the HeII$`\lambda `$4686 line. The continuum variations are most pronounced in the blue section. The continuum gradient changes as a function of intensity. The emission line profiles of the Balmer lines in Mkn 110 are quite narrow (FWHM(H$`\beta `$)=1800 km s<sup>-1</sup>), similar to those of the so-called narrow-line Seyfert 1 galaxies. Weak FeII emission is present in the spectra, blending the red line wings of \[OIII\]$`\lambda `$5007 and H$`\beta `$. We measured the integrated intensity of the FeII line blends between 5134 Å and 5215 Å. The main FeII components in this region are the 5169 Å and 5198 Å lines belonging to the multiplets 42 and 49. The FeII line flux (2.0×10<sup>-14</sup> erg s<sup>-1</sup> cm<sup>-2</sup>) remained constant during our variability campaign within the error of 10%.
Difference spectra with respect to our minimum stage in October 1988 are plotted in Fig. 3. All narrow line components cancel out. The FeII lines disappear in the difference spectra as well.
In the Balmer profiles (e.g. H$`\beta `$) very broad, slightly redshifted components stand out in the high intensity stages.
### 3.1 Line and continuum variations
The results of our continuum intensity measurements at 3750 Å, 4265 Å, and 5100 Å as well as the integrated line intensities of H$`\alpha `$, H$`\beta `$, HeII$`\lambda `$4686, HeI$`\lambda `$5876, and HeI$`\lambda `$4471 are given in Table 2. The individual light curves are plotted in Fig. 4.
The continuum intensities are mean values of the wavelength ranges given in Table 3, column (2). Line intensities were integrated in the listed limits after subtraction of a linear pseudo-continuum defined by the boundaries given in column (3). All wavelengths are given in the rest frame.
We started our monitoring program in 1987. Therefore, our 5100 Å continuum light curve covers a larger time interval than the light curve of Peterson et al. (1998a) beginning in 1992. The observing epochs are partly complementary in the common monitoring interval. However, spectra taken nearly simultaneously by both groups (within one week) agree with each other to better than 5% in the continuum fluxes. In Fig. 5 we compare our continuum light curve with that of Peterson et al. (1998a) in the common interval of observations. Both light curves are in very good agreement regarding the intensity variations. Fig. 5 shows that the data sets are not significantly undersampled.
However, our H$`\beta `$ intensities are systematically higher than those of Peterson et al. (1998a) as we integrated over a larger wavelength range and we carried out a slightly different continuum subtraction. This method led to a lower pseudo-continuum flux at 4790 Å. The H$`\beta `$ fluxes are in perfect agreement if we multiply the values given by Peterson et al. (1998a) by a factor of 1.15.
The pattern of the continuum light curves at 5100 Å and 4265 Å (Fig. 4) is identical apart from their different amplitudes. The HeII$`\lambda `$4686 light curve follows closely these continuum light curves. The light curves of the Balmer lines H$`\alpha `$ and H$`\beta `$ are similar among themselves and the light curves of both HeI lines as well.
Statistics of our measured continuum and emission line variations are presented in Table 4. We list minimum and maximum fluxes F<sub>min</sub> and F<sub>max</sub>, peak-to-peak amplitudes R<sub>max</sub> = F<sub>max</sub>/F<sub>min</sub>, the mean flux over the entire period of observations $`<`$F$`>`$, the standard deviation $`\sigma _F`$, and the fractional variation
$$F_{var}=\frac{\sqrt{\sigma _F^2-\mathrm{\Delta }^2}}{<F>}$$
as defined by Rodríguez-Pascual et al. (1997), where $`\mathrm{\Delta }`$ denotes the mean measurement error. The variability amplitudes of Mkn 110 are exceptionally large compared to other galaxies, e.g. NGC 4593 (Dietrich, Kollatschny et al. 1994) and the Seyfert 1 galaxies from the sample of Peterson et al. (1998a).
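A minimal sketch of this statistic (assuming the light curve is given as a flux array with a mean measurement error $`\mathrm{\Delta }`$; the numbers below are hypothetical):

```python
import numpy as np

def fractional_variation(flux, delta):
    """F_var = sqrt(sigma_F^2 - Delta^2) / <F> (Rodriguez-Pascual et al. 1997)."""
    excess = flux.var(ddof=1) - delta ** 2   # subtract the measurement-noise term
    return np.sqrt(max(excess, 0.0)) / flux.mean()

flux = np.array([4.1, 6.8, 3.2, 7.5, 5.0, 2.7])  # hypothetical, 1e-14 erg/s/cm^2
print(fractional_variation(flux, delta=0.2))
```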
The variability amplitudes of the continuum increase towards the short wavelength region. These amplitudes as well as those of the emission line intensities are exceptionally high. The variability amplitude of the HeII$`\lambda `$4686 line is unique compared to the other emission lines in Mkn 110 and compared to optical lines in other Seyfert galaxies. In Fig. 6 we plot the line intensity ratios HeII$`\lambda `$4686/H$`\beta `$ and HeI$`\lambda `$5876/H$`\beta `$ as a function of continuum intensity at 5100 Å. These line intensity ratios increase slightly for the HeI line but strongly for the highly ionized HeII line.
### 3.2 Balmer decrement
We calculated Balmer decrement H$`\alpha `$/H$`\beta `$ values in the range from 3.2 to 4.3. Simple photoionization calculations (Case B) result in a value of 2.8 for this line ratio (Osterbrock 1989). Deviations of the observed Balmer decrement from the theoretical value are often explained by wavelength-dependent dust absorption and/or by collisional excitation effects. We will show later on that the observed difference cannot be explained by dust absorption alone in the broad-line region clouds of Mkn 110. There is a clear anti-correlation of the Balmer decrement with the continuum flux (Fig. 7).
One has to keep in mind that the individual Balmer lines and the continuum of each spectrum originate in distinct regions of the BLR at different times. This will be confirmed by the cross-correlation analysis later on. A very tight correlation cannot be expected because of the short-term variations in this galaxy. The solid line in Fig. 7 is a linear fit to all our data points except for the lowest continuum intensity point. At this epoch (JD +5959) the H$`\alpha `$ intensity was extremely low (Fig. 4), in contrast to H$`\beta `$ and the continuum. An anti-correlation of the Balmer decrement with the continuum flux was first noted in NGC 4151 by Antonucci & Cohen (1983).
### 3.3 UV spectra
Two UV spectra have been taken with the IUE satellite within an interval of only one day. These spectra were taken nearly simultaneously (within 8 days) with our optical observations in March 1988. Therefore, these two spectra are suitable for a determination of optical/UV line intensity ratios. Fig. 8 shows an overplot of both short wavelength UV spectra taken with a time interval of 1 day.
The spectra are identical in the continuum and in the emission lines within the error limits. The different flux values in the center of the CIV$`\lambda `$1550 line are due to saturation effects in one of the spectra.
We determined an integrated Ly$`\alpha `$ flux of $`(4.2\pm 0.3)\times 10^{12}`$ erg s<sup>-1</sup> cm<sup>-2</sup>. Comparison with the optical spectra results in a Ly$`\alpha `$/H$`\beta `$ ratio of 11.0 at the observing epoch March, 1988.
The HeII$`\lambda `$1640 flux amounts to $`(1.4\pm 0.2)\times 10^{13}`$ erg s<sup>-1</sup> cm<sup>-2</sup>. The HeII$`\lambda `$1640/HeII$`\lambda `$4686 ratio of 2.45 is about a factor of two lower than that of typical photoionization models but consistent with other AGN observations (Seaton 1978, Dumont et al. 1998).
### 3.4 CCF analysis
An estimate of size and structure of the broad-line region can be obtained from the cross-correlation function (CCF) of a continuum light curve with emission line light curves.
We cross-correlated the 5100 Å continuum light curve with all our emission line light curves (Fig. 4) using an interpolation cross-correlation function method (ICCF) described by Gaskell & Peterson (1987). In Fig. 9 we plot the cross-correlation functions of the individual emission line light curves of HeII$`\lambda `$4686, HeI$`\lambda `$5876, H$`\beta `$ and H$`\alpha `$ with the continuum light curve.
The cross-correlation functions of HeI$`\lambda `$4471 and HeI$`\lambda `$5876 are identical within the errors; therefore, only one curve is shown in the plot.
First we determined the error of the centroids of the ICCFs by averaging the centroids $`\tau _{cent}`$ that were calculated for fractions of the peak ranging from 35% to 90% of the maximum value of the cross-correlation functions. Then we estimated the influence of two principal sources of cross-correlation uncertainty, namely flux uncertainties in individual measurements and uncertainties connected to the sampling of the light curves. We used a method similar to that described by Peterson et al. (1998b). We added random noise to our measured flux values and calculated the cross-correlation lags a large number of times. Owing to the large variability amplitudes of Mkn 110, these uncertainties carried less weight than those introduced by the sampling of the light curves. The sampling uncertainties were estimated by considering different subsets of our light curves and repeating the cross-correlation calculations. Typically we excluded 37% of our spectra from the data set (cf. Peterson et al. 1998b). In Table 5 we list our final cross-correlation results together with the total error.
Considering the entire observing period we obtained a lag of the H$`\beta `$ light curve of $`39.9_{-9.5}^{+33.2}`$ days. Peterson et al. (1998a) obtained for a similarly extended observing campaign a lag of $`31.6_{-7.3}^{+9.0}`$ days. However, they state that their best lag estimate, derived from an observing period of 123 days and yielding the smallest error, is about $`19.5_{-6.8}^{+6.5}`$ days.
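For illustration, a compact sketch of such an interpolation cross-correlation (our own minimal version, not the code used in the paper; the light curves and the 40-day input lag are synthetic):

```python
import numpy as np

rng = np.random.default_rng(0)

def iccf(t_c, f_c, t_l, f_l, lags):
    """Correlate the line light curve with the continuum shifted by each lag."""
    return np.array([
        np.corrcoef(np.interp(t_l - tau, t_c, f_c), f_l)[0, 1] for tau in lags
    ])

def ccf_centroid(lags, ccf, frac=0.8):
    """Centroid of the CCF from points above frac * peak."""
    sel = ccf >= frac * ccf.max()
    return np.sum(lags[sel] * ccf[sel]) / np.sum(ccf[sel])

t = np.sort(rng.uniform(0.0, 1000.0, 80))            # irregular sampling
cont = np.sin(t / 90.0) + 0.05 * rng.standard_normal(t.size)
line = np.interp(t - 40.0, t, cont)                  # line lags continuum by 40 d
lags = np.arange(-100.0, 161.0, 1.0)
print(ccf_centroid(lags, iccf(t, cont, t, line, lags)))   # close to 40
```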
### 3.5 Line profiles and their variations
Normalized mean and rms profiles of HeII$`\lambda `$4686, HeI$`\lambda `$5876, H$`\beta `$ and H$`\alpha `$ lines are shown in Figs. 10 and 11.
The rms profile is a measure of the variable part of the line profile. There is a very broad line component in the mean and rms profiles, especially to be seen in the HeII line. Even apart from this very broad component, the mean and rms profiles of the individual lines differ with respect to their shape and full width at half maximum (FWHM). In Table 6 we list the widths of the mean and rms profiles. The mean and rms H$`\beta `$ profiles are more similar to the HeI$`\lambda `$4471 profiles than to H$`\alpha `$. The rms profile of H$`\alpha `$, for example, is significantly narrower than that of H$`\beta `$. The profiles of the HeI$`\lambda `$4471 line are noisier than the other ones. They are identical to those of the HeI$`\lambda `$5876 line within the errors.
All mean and rms profiles show a red asymmetry. The asymmetry is mainly caused by a second line component at v=1200 km s<sup>-1</sup>. This second component does not vary with the same amplitude as the main component. Furthermore, this second component was stronger during the first half of our campaign, from 1987 until January 1992, than during the second half. The H$`\alpha `$ spectra taken at the intensity minima of February 1989 and August 1994 are plotted in Fig. 12. The additional component centered at v=1200 km s<sup>-1</sup> is clearly visible.
The mean spectra of the first half of our campaign are broader by 400 - 500 km s<sup>-1</sup> (FWHM) than those of the second half because of this component.
There is an independent very broad component present in the mean and rms HeII profiles (Figs. 10 and 11). This very broad component exists in addition to the broad component. There is no transition component visible in the profile. The peak of this very broad profile component is redshifted by 400$`\pm `$100 km s<sup>-1</sup> with respect to the narrow lines. This shift was measured in the difference spectra (cf. Fig. 13). This very broad component is the strongest contributor to the HeII variability as can be seen from the rms profile. The very broad component is visible in the Balmer line profiles also, especially at high continuum stages (see Fig. 3). The HeII and H$`\beta `$ profiles taken in January 1992 are shown in more detail in Fig. 13. We subtracted the minimum profile taken in October 1988 to remove the narrow line component.
The HeII line intensity has been divided by a factor of 1.3 for direct comparison with the very broad H$`\beta `$ profile. The striking similarity is immediately apparent. The blue wing of the H$`\beta `$ profile is stronger than that of the HeII profile because of the blending with the red wing of the HeII line. The very broad line component has a full width at zero intensity (FWZI) of 12 000 km s<sup>-1</sup>.
## 4 Discussion
Mkn 110 is one of the very few Seyfert galaxies with spectral variability coverage over a time interval of ten years. Different continuum ranges show different variability amplitudes; this holds for the different optical emission lines, too. However, the mean fluxes of the continuum and of all emission lines remain nearly constant when integrated over time scales of a few years (see Fig. 4). There are considerable variations over time scales of days to years. The blue spectral range shows the strongest variability amplitudes in the continuum (see Figs. 2 to 4), with intensity variations of a factor of $`>5`$. These strong blue amplitudes might be explained by a larger contribution of the non-thermal continuum relative to the underlying galaxy continuum.
The optical line variations of H$`\beta `$ are very strong in comparison to other Seyfert galaxies (e.g. Peterson et al. 1998a). The HeII$`\lambda `$4686 line shows the strongest variations, of nearly a factor of 8 within two years. On the other hand, H$`\beta `$ and the continuum ($`\lambda `$5100) vary only by factors of 1.7 and 3.0, respectively, within the same time interval. Apart from the variation of the HeII$`\lambda `$4686 line in NGC 5548 in 1984 (Peterson & Ferland 1986) these are the strongest optical line variations within such a time interval. In the case of Mkn 110 we can show that the appearance of the very broad HeII$`\lambda `$4686 and H$`\beta `$ component (see Figs. 3 and 13) is not a singular accretion event; it is connected to a very strong ionizing continuum flux, as can be seen from the light curves. The intensity ratio HeII$`\lambda `$4686/H$`\beta `$ reaches a value of 1.3 (see Fig. 13) in the very broad line region. Such a line ratio is still consistent with photoionization of broad emission-line clouds in quasars (Korista et al. 1997).
The very broad line region (VBLR) originates close to the central ionizing source at a distance of about 9 light days. It is not connected to the “normal” BLR. As can be seen from the line profiles there exists no continuous transition region between these BLRs. The center of the VBLR line profiles is shifted by 400$`\pm `$100 km s<sup>-1</sup> with respect to the “normal” BLR profiles (Figs. 10, 11, 13).
Apart from this VBLR component we could show that the line profiles of the Balmer and HeI lines are similar but not identical. The H$`\alpha `$ line profile is narrower than the H$`\beta `$ profile. Besides the cross-correlation results this is an independent indication that these two lines do not originate in exactly the same region.
The observed Ly$`\alpha `$/H$`\beta `$ ratio amounts to 11.0 in Mkn 110. This is about a factor of two higher than the mean observed Ly$`\alpha `$/H$`\beta `$ ratio in Seyfert galaxies (Wu et al. 1983). Photoionization models of Kwan & Krolik (1981) result in Ly$`\alpha `$/H$`\beta \simeq 10`$ without the presence of dust. Therefore, dust may not play an important role in the BLR of Mkn 110. The Balmer decrement in Mkn 110 varies as a function of the ionizing continuum flux. This might be explained by radiative transfer effects rather than by variation of dust extinction.
The profiles of the broad emission lines in Mkn 110 are neither symmetric nor smooth (Figs. 10, 11). This is a further indication that the broad-line regions in AGN are structured as e.g. in NGC 4593 (Kollatschny & Dietrich 1997). In Fig. 12 it is shown that during the first half of our campaign a red line component was present in the H$`\alpha `$ spectra. This component was not visible during the second half of the campaign.
The size of the H$`\beta `$ line emitting region (r = 40 ld, corresponding to 1.0×10<sup>17</sup> cm) and the optical continuum luminosity are compared to those of other Seyfert galaxies. The continuum luminosity amounts to $`L_{5100}=4.4\times 10^{39}`$ erg s<sup>-1</sup> Å<sup>-1</sup>. In this case we used $`H_0=100`$ km s<sup>-1</sup> Mpc<sup>-1</sup> in order to compare directly the radius and luminosity of Mkn 110 with those of other Seyfert galaxies compiled by Carone et al. (1996). The values of Mkn 110 fit nicely into the general radius-luminosity relationship for the broad-line regions in Seyfert galaxies. The Balmer line emitting region as well as the luminosity of Mkn 110 lie in the upper region of the radius-luminosity plane, close to the galaxies Mkn 590 and Mkn 335.
There is a trend that the broader emission lines originate closer to the center (see Table 7). A similar trend was found for NGC 5548, too (Kollatschny & Dietrich 1996).
The central mass in Mkn 110 can be estimated from the width of the broad emission line profiles (FWHM) under the assumption that the gas dynamics are dominated by the central massive object. Furthermore, one needs the distance of the dominant emission line clouds to the ionizing central source (e.g. Koratkar & Gaskell 1991, Kollatschny & Dietrich 1997). We presume that the characteristic velocity of the emission line region is given by the FWHM of the rms profile and the characteristic distance R is given by the centroid of the corresponding cross-correlation function:
$$M=\frac{3}{2}v^2G^{-1}R.$$
In Table 7 we list our virial mass estimations of the central massive object in Mkn 110.
Altogether we determine a central mass of:
$$M=6.4_{-2.7}^{+2.2}\times 10^7M_{\odot }.$$
We can independently estimate an upper limit of the central mass if we interpret the observed redshift of the very broad HeII component ($`\mathrm{\Delta }z=0.0013`$) as gravitational redshift (e.g. Zheng & Sulentic 1990):
$$M=c^2G^{-1}R\mathrm{\Delta }z$$
Again we presume that this line component originates at a distance of 9 ld from the central ionizing source. We derive an upper limit of the central mass of
$$M=2.1\times 10^8M_{\odot }$$
This second independent method confirms the former mass estimation.
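The arithmetic of both estimates can be reproduced in a few lines (a sketch in SI units; the gravitational-redshift inputs are taken from the text, while the FWHM used in the virial example is only an illustrative stand-in for the Table 7 values, which are not reproduced here):

```python
G, c = 6.674e-11, 2.998e8                        # SI constants
M_SUN, LIGHT_DAY = 1.989e30, 2.998e8 * 86400.0   # kg, m

def virial_mass(fwhm_km_s, r_ld):
    """M = (3/2) v^2 G^-1 R, with v the FWHM of the rms profile."""
    v = fwhm_km_s * 1.0e3
    return 1.5 * v ** 2 * r_ld * LIGHT_DAY / G / M_SUN

def grav_redshift_mass(dz, r_ld):
    """M = c^2 G^-1 R dz, treating the line shift as gravitational."""
    return c ** 2 * r_ld * LIGHT_DAY * dz / G / M_SUN

print(f"{grav_redshift_mass(0.0013, 9):.2e} M_sun")   # ~2.1e8, as in the text
print(f"{virial_mass(2000.0, 40):.2e} M_sun")         # ~5e7 with the stand-in FWHM
```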
## 5 Summary
Mkn 110 shows strong variations in the continuum and in the line intensities on time scales of days to years. The continuum - especially the blue range - varies by a factor of 3 to 5 on time scales of years. The Balmer line intensities vary by a factor of 2.5 while the HeII$`\lambda `$4686 line shows exceptionally strong variations by a factor of 8.
We cross-correlated the light curves of the emission lines with those of the continuum. The emission lines originate at distances of 9 to 80 light days from the central source, depending on their degree of ionization.
Not only the line intensities but also the line profiles varied. We detected a very broad line region (VBLR) component in the high intensity stages of the Balmer and HeII lines. This region is separated from the “normal” broad-line region and lies at a distance of only 9 light days from the central ionizing source.
We derived the central mass in Mkn 110 using two independent methods.
Mkn 110 is a prime target for further detailed variability studies with respect to the line and continuum variability amplitudes as well as with respect to the short-term variations.
###### Acknowledgements.
We thank M. Dietrich, D. Grupe, and U. Thiele for taking spectra for us. We are grateful to M. Dietrich, E. van Groningen, and I. Wanders who made available some software to us. This work has been supported by DARA grant 50 OR94089 and DFG grant Ko 857/13.
# Quasiparticles in the vortex lattice of unconventional superconductors: Bloch waves or Landau levels?
In conventional ($`s`$-wave) superconductors the single particle excitation spectrum is gapped and, consequently, no quasiparticle states are populated at low temperatures. The situation is dramatically different in unconventional superconductors which exhibit nodes in the gap. These lead to a finite density of fermionic excitations at low energies which then dominate the low-temperature physics. Among the known (or suspected) unconventional superconductors are high-$`T_c`$ copper oxides, organic and heavy fermion superconductors, and the recently discovered Sr<sub>2</sub>RuO<sub>4</sub>. Understanding the physics of the low-energy quasiparticles in the mixed state of these unconventional superconductors is an unsolved problem of considerable complexity. This complexity stems from the fact that (i) in the mixed state superconductivity coexists with the magnetic field $`𝐁`$ and the quasiparticles feel the combined effects of $`𝐁`$ and the spatially varying field of chiral supercurrents, and (ii) being composite objects, part electron and part hole, quasiparticles do not carry a definite charge. The corresponding low-energy theory therefore poses an entirely new intellectual challenge, which is simultaneously of considerable practical interest.
The initial theoretical investigations were based on numerical computations, semiclassical approximations and general scaling arguments. More recently Gorkov and Schrieffer made a remarkable prediction that in a $`d_{x^2-y^2}`$ superconductor at intermediate fields $`H_{c1}\ll B\ll H_{c2}`$ the quasiparticles will form Landau levels (LL) with a discrete energy spectrum
$$E_n=\pm \hbar \omega _H\sqrt{n},\qquad n=0,1,\mathrm{\dots },$$
(1)
where $`\omega _H=2\sqrt{\omega _c\mathrm{\Delta }_0/\hbar }`$, with $`\omega _c=eB/mc`$ being the cyclotron frequency and $`\mathrm{\Delta }_0`$ the maximum superconducting gap. Based on a somewhat different reasoning Anderson later arrived at a similar conclusion and argued that LL quantization could explain the anomalous magneto-transport in cuprates. Jankó proposed a direct test of Eq. (1) using scanning tunneling spectroscopy.
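To get a feel for the scale of Eq. (1), here is an order-of-magnitude sketch (all parameter values are hypothetical illustrations, not taken from the text; $`\omega _c=eB/mc`$ in Gaussian units equals $`eB/m`$ in SI):

```python
import math

hbar, e, m = 1.0546e-34, 1.6022e-19, 9.109e-31   # SI; free-electron mass assumed
B, Delta0 = 10.0, 30e-3 * e                      # 10 T and a 30 meV gap (hypothetical)

omega_c = e * B / m                              # cyclotron frequency (SI)
omega_H = 2.0 * math.sqrt(omega_c * Delta0 / hbar)

for n in range(4):
    print(f"E_{n} = {hbar * omega_H * math.sqrt(n) / e * 1e3:.1f} meV")
# the spacing ~ sqrt(omega_c * Delta0 / hbar) is much larger than
# omega_c alone (hbar*omega_c ~ 1 meV at 10 T for a free electron)
```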
The concept of LL quantization here is quite different from the one in conventional superconductors, where nodes in the gap arise as a result of the center-of-mass motion of pairs in strong magnetic fields near $`H_{c2}`$. The physics of Eq. (1) is based on the picture of a low-energy quasiparticle Larmor-precessing in a weak external magnetic field along an elliptic orbit of constant energy centered at the Dirac gap node in $`k`$-space. This motion corresponds to a closed elliptic orbit in real space, and the quantization condition (1) follows from demanding that such an orbit contains $`n`$ quanta of magnetic flux. This argumentation neglects the effect of spatially varying supercurrents in the vortex array, which were, however, recently shown by Melnikov to strongly mix the individual Landau levels.
In this paper we formulate a new approach to the problem which treats the effects of the magnetic field and supercurrents on equal footing. As a result the physics becomes transparent and for periodic vortex arrays in the intermediate field regime the low-energy theory can be solved in its entirety. Our principal result is that the natural low-energy quasiparticle states are Bloch waves of massless Dirac fermions and not the Landau levels discussed above.
At the heart of our approach is the observation that the collective response of the condensate to the external magnetic field on average exactly compensates its effect on the normal quasiparticles. More formally, the phase of the superconducting order parameter, $`\mathrm{\Delta }(𝐫)=\mathrm{\Delta }_0e^{i\varphi (𝐫)}`$, acts as an additional “gauge field” coupled to the quasiparticles. In the vortex state $`\varphi (𝐫)`$ is not a pure gauge: $`\nabla \times \nabla \varphi (𝐫)=2\pi \widehat{z}\sum _i\delta (𝐫-𝐑_i)`$ where $`\{𝐑_i\}`$ denotes vortex positions. From the vantage point of a quasiparticle the singularities in $`\nabla \times \nabla \varphi `$ act as magnetic half-fluxes concentrated in the vortex cores with polarity opposing the external field. Flux quantization ensures that on average this “spiked” field exactly cancels out the external applied field $`𝐁`$. In the mixed state, the quasiparticle therefore can be thought of as moving in an effective field $`𝐁_{\mathrm{eff}}`$ which is zero on average but derives from a vector potential that is highly nontrivial. The nature of the phenomenon is purely quantum-mechanical, and is closely related to the Aharonov-Bohm effect: a classical charged particle would be completely unaffected by the spiked field because the singularities occupy a set of measure zero in the real space.
Our solution consists in finding a gauge in which the Hamiltonian manifestly displays the physical property described above. In such a gauge the fermionic excitation spectrum can be found by band structure techniques suitably adjusted to the “off-diagonal” structure of the theory. Besides revealing the nature of the low-energy quasiparticles, this representation leads to new insights into the physics of the mixed state. One surprising finding is that in a perfectly periodic vortex lattice the original Dirac nodes survive the perturbing effect of a weak magnetic field.
We now supply the details. The quasiparticle wavefunction $`\mathrm{\Psi }^T(𝐫)=[u(𝐫),v(𝐫)]`$ is subject to the Bogoliubov-de Gennes equation $`\mathcal{H}\mathrm{\Psi }=E\mathrm{\Psi }`$, where
$$\mathcal{H}=\left(\begin{array}{cc}\widehat{H}_e& \widehat{\mathrm{\Delta }}\\ \widehat{\mathrm{\Delta }}^{\mathrm{\dagger }}& -\widehat{H}_e^{*}\end{array}\right)$$
(2)
with $`\widehat{H}_e=\frac{1}{2m}(𝐩-\frac{e}{c}𝐀)^2-ϵ_F`$ and $`\widehat{\mathrm{\Delta }}`$ the $`d`$-wave pairing operator. Following Simon and Lee we choose the coordinate axes in the direction of gap nodes, in which case $`\widehat{\mathrm{\Delta }}=p_F^{-2}\{\widehat{p}_x,\{\widehat{p}_y,\mathrm{\Delta }(𝐫)\}\}`$, where $`p_F`$ is the Fermi momentum, $`\widehat{𝐩}=-i\hbar \nabla `$, and curly brackets represent symmetrization, $`\{a,b\}=\frac{1}{2}(ab+ba)`$. In high-$`T_c`$ cuprates it is natural to concentrate on the low to intermediate field regime $`H_{c1}<B\ll H_{c2}`$, where the vortex spacing is large and we may assume the gap amplitude to be constant everywhere in space, $`\mathrm{\Delta }(𝐫)\simeq \mathrm{\Delta }_0e^{i\varphi (𝐫)}`$. Under such conditions the magnetic field distribution is described by a simple London model and the superfluid velocity, defined as $`𝐯_s(𝐫)=\frac{1}{m}(\frac{\hbar }{2}\nabla \varphi -\frac{e}{c}𝐀)`$, can be written in terms of vortex positions $`\{𝐑_i\}`$ as
$$𝐯_s(𝐫)=\frac{\pi \hbar }{m}\int \frac{d^2k}{(2\pi )^2}\frac{i𝐤\times \widehat{z}}{\lambda ^{-2}+k^2}\sum _ie^{i𝐤\cdot (𝐫-𝐑_i)},$$
(3)
where $`\lambda `$ is the London penetration depth.
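For a strictly periodic vortex array the $`k`$-integral in Eq. (3) collapses to a discrete sum over reciprocal lattice vectors, which makes $`𝐯_s(𝐫)`$ straightforward to evaluate numerically. The sketch below (Python) does this for a square lattice with one vortex per cell of area $`d^2`$; the units ($`\hbar =m=1`$), the $`\lambda \gg d`$ regime and the cutoff are illustrative assumptions rather than a prescription from the text.

```python
import numpy as np

# Superfluid velocity of a square vortex lattice from Eq. (3): with one
# vortex per cell of area d^2 the k-integral becomes a sum over
# reciprocal vectors K,
#   v_s(r) = (pi*hbar/(m*d^2)) sum_{K!=0} i(K x zhat)/(lam^-2 + K^2) e^{iK.r}
hbar, m = 1.0, 1.0
d = 1.0                       # vortex lattice constant
lam = 50.0 * d                # London depth, lambda >> d regime
nmax = 20                     # reciprocal-lattice cutoff

x = y = np.linspace(0.0, d, 64, endpoint=False)
X, Y = np.meshgrid(x, y, indexing="ij")
vx, vy = np.zeros_like(X), np.zeros_like(Y)
for mx in range(-nmax, nmax + 1):
    for my in range(-nmax, nmax + 1):
        if mx == 0 and my == 0:
            continue
        kx, ky = 2 * np.pi * mx / d, 2 * np.pi * my / d
        denom = 1.0 / lam**2 + kx**2 + ky**2
        phase = np.exp(1j * (kx * X + ky * Y))
        # i (K x zhat) = i (K_y, -K_x); the +-K pairs keep the field real
        vx += np.real(1j * ky * phase) / denom
        vy += np.real(-1j * kx * phase) / denom
pref = np.pi * hbar / (m * d**2)
vx, vy = pref * vx, pref * vy
print("max |v_s| on the grid:", np.hypot(vx, vy).max())
```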
In order to diagonalize (2) it is desirable to remove the phase factors $`e^{i\varphi (𝐫)}`$ from the off-diagonal components of $`\mathcal{H}`$. This is accomplished by a unitary transformation
$$\mathcal{H}\to U^{-1}\mathcal{H}U,\qquad U=\left(\begin{array}{cc}e^{i\varphi _e(𝐫)}& 0\\ 0& e^{-i\varphi _h(𝐫)}\end{array}\right),$$
(4)
where $`\varphi _e(𝐫)`$ and $`\varphi _h(𝐫)`$ are arbitrary functions satisfying
$$\varphi _e(𝐫)+\varphi _h(𝐫)=\varphi (𝐫).$$
(5)
Eq. (4) can be thought of as a singular gauge transformation since it changes the effective magnetic field seen by electrons and holes. We now discuss three specific choices for the functions $`\varphi _e`$ and $`\varphi _h`$.
The most natural choice satisfying (5) is the symmetric one, namely $`\varphi _e(𝐫)=\varphi _h(𝐫)=\varphi (𝐫)/2`$, resulting in
$`\mathcal{H}_S=\left(\begin{array}{cc}\frac{1}{2m}(\widehat{𝐩}+m𝐯_s)^2-ϵ_F& \frac{\mathrm{\Delta }_0}{p_F^2}\widehat{p}_x\widehat{p}_y\\ \frac{\mathrm{\Delta }_0}{p_F^2}\widehat{p}_x\widehat{p}_y& -\frac{1}{2m}(\widehat{𝐩}-m𝐯_s)^2+ϵ_F\end{array}\right).`$
This particular gauge makes the Hamiltonian very simple but unfortunately is not very useful because, as noted by Anderson and in a different context by Balents et al., the corresponding transformation (4) is not single valued. To see this, consider the situation on encircling the core of a vortex: $`\varphi `$ winds by $`2\pi `$ but $`\varphi _e`$ and $`\varphi _h`$ pick up only a phase of $`\pi `$, causing $`U`$ to have two branches. Consequently, one is forced to diagonalize $`\mathcal{H}_S`$ under the constraint that the original wavefunctions are single valued. Clearly, this is a difficult task. Nevertheless, the symmetric gauge reveals the physical essence of the problem: formally, $`𝐯_s`$ enters $`\mathcal{H}_S`$ as an effective vector potential, which corresponds to an effective magnetic field $`𝐁_{\mathrm{eff}}=\frac{mc}{e}(\nabla \times 𝐯_s)\ne 𝐁`$. It is easy to show from Eq. (3) that $`𝐁_{\mathrm{eff}}`$ vanishes on average, i.e. that $`\langle \nabla \times 𝐯_s\rangle =0`$, where angular brackets denote the spatial average. Aside from the single-valuedness problem, the low-energy physics described by $`\mathcal{H}_S`$ is that of a quasiparticle in zero average magnetic field. The external field is compensated by the array of magnetic half-fluxes giving rise to a non-trivial vector potential with the periodicity of the vortex lattice.
To avoid the problem of multiple valuedness, Anderson suggested taking $`\varphi _e(𝐫)=\varphi (𝐫)`$ and $`\varphi _h(𝐫)=0`$. This leads to a Hamiltonian of the form
$`\mathcal{H}_A=\left(\begin{array}{cc}\frac{1}{2m}(\widehat{𝐩}+\frac{e}{c}𝐀+2m𝐯_s)^2-ϵ_F& \widehat{D}\\ \widehat{D}& -\frac{1}{2m}(\widehat{𝐩}+\frac{e}{c}𝐀)^2+ϵ_F\end{array}\right),`$
with $`\widehat{D}=\frac{\mathrm{\Delta }_0}{p_F^2}(\widehat{p}_x+\frac{e}{c}A_x+mv_{sx})(\widehat{p}_y+\frac{e}{c}A_y+mv_{sy})`$. In this representation one could consider neglecting in the first approximation the $`𝐯_s`$ terms, on the grounds that they represent a perturbative correction. At low energies, expanding all the terms in $`\mathcal{H}_A`$ to leading order near the nodes, the Hamiltonian becomes that of massless Dirac fermions in a uniform magnetic field $`𝐁=\nabla \times 𝐀`$ with the energy spectrum given by Eq. (1). The effects of the supercurrent field $`𝐯_s`$ can in principle be treated perturbatively. This is, however, difficult in practice, because of the massive degeneracy of the Landau levels and also because $`𝐯_s(𝐫)`$ is not a small perturbation (it diverges as $`1/r`$ at the vortex cores) and will lead to strong LL mixing. In the absence of a reliable scheme to incorporate $`𝐯_s(𝐫)`$, the physical picture of a Larmor-precessing quasiparticle that leads to Eq. (1) appears incomplete.
We now introduce a new singular gauge transformation that combines the desirable features of the two transformations discussed above but has none of their drawbacks. Consider dividing vortices into two distinct subsets $`A`$ and $`B`$, each containing an equal number of vortices. Now denote by $`\varphi _A(𝐫)`$ the phase field associated with vortices in the subset $`A`$, with the analogous definition of $`\varphi _B(𝐫)`$. The choice $`\varphi _e(𝐫)=\varphi _A(𝐫)`$, $`\varphi _h(𝐫)=\varphi _B(𝐫)`$ clearly satisfies the condition (5) and the corresponding transformation $`U`$ is single valued. The resulting Hamiltonian is
$`\mathcal{H}_N=\left(\begin{array}{cc}\frac{1}{2m}(\widehat{𝐩}+m𝐯_s^A)^2-ϵ_F& \widehat{D}\\ \widehat{D}& -\frac{1}{2m}(\widehat{𝐩}-m𝐯_s^B)^2+ϵ_F\end{array}\right),`$
with $`\widehat{D}=\frac{\mathrm{\Delta }_0}{p_F^2}[\widehat{p}_x+\frac{m}{2}(v_{sx}^A-v_{sx}^B)][\widehat{p}_y+\frac{m}{2}(v_{sy}^A-v_{sy}^B)]`$ and
$$𝐯_s^\mu =\frac{1}{m}\left(\hbar \nabla \varphi _\mu -\frac{e}{c}𝐀\right),\qquad \mu =A,B.$$
(6)
As long as the physical field $`𝐁`$ remains approximately uniform,
$`𝐯_s^\mu (𝐫)`$ satisfy equations similar to (3), with $`𝐑_i`$ replaced by $`𝐑_i^\mu `$ and an overall prefactor of $`2`$.
It is easy to verify that $`𝐯_s^\mu `$ correspond to zero effective field, i.e. that $`\langle \nabla \times 𝐯_s^\mu \rangle =0`$. Evidently, there is no reason to expect LL quantization in the system. This property becomes more transparent if we focus on the low-energy excitations. By linearizing $`\mathcal{H}_N`$ in the vicinity of the four nodes as described in Ref. we obtain $`\mathcal{H}_N\simeq \mathcal{H}_0+\mathcal{H}^{\prime }`$, where
$$\mathcal{H}_0=\left(\begin{array}{cc}v_F\widehat{p}_x& v_\mathrm{\Delta }\widehat{p}_y\\ v_\mathrm{\Delta }\widehat{p}_y& -v_F\widehat{p}_x\end{array}\right)$$
(7)
is the free Dirac Hamiltonian and
$$\mathcal{H}^{\prime }=m\left(\begin{array}{cc}v_Fv_{sx}^A& \frac{1}{2}v_\mathrm{\Delta }(v_{sy}^A-v_{sy}^B)\\ \frac{1}{2}v_\mathrm{\Delta }(v_{sy}^A-v_{sy}^B)& v_Fv_{sx}^B\end{array}\right)$$
(8)
is the vector potential term, $`v_F`$ is the Fermi velocity and $`v_\mathrm{\Delta }=\mathrm{\Delta }_0/p_F`$ denotes the slope of the gap at the node.
Our considerations so far have been completely general and apply to an arbitrary distribution of vortices. In the following we illustrate the utility of the new Hamiltonian by finding the excitation spectrum in a periodic square vortex lattice. With minor modifications the same approach can be generalized to an arbitrary periodic lattice, such as, e.g., a triangular one. We take $`A`$ and $`B`$ subsets to coincide with the two sublattices of the square vortex lattice as illustrated in Figure 1(a). Expanding the quasiparticle wavefunction in the plane wave basis $`\mathrm{\Psi }(𝐫)=\sum _𝐪\mathrm{\Psi }_𝐪e^{i𝐪\cdot 𝐫}`$ we arrive at an equation of the form
$$\mathcal{H}_0(𝐪)\mathrm{\Psi }_𝐪+\sum _𝐊\mathcal{H}^{\prime }(𝐊)\mathrm{\Psi }_{𝐪+𝐊}=E\mathrm{\Psi }_𝐪.$$
(9)
Because of its periodicity, $`\mathcal{H}^{\prime }`$ only has non-vanishing Fourier components at the reciprocal lattice vectors $`𝐊=\frac{2\pi }{d}(m_x,m_y)`$, where $`(m_x,m_y)`$ are integers and $`d=\sqrt{2\mathrm{\Phi }_0/B}`$ is the size of the unit cell. Aside from the $`2\times 2`$ matrix structure, Eq. (9) is a standard Bloch equation which we solve by numerical diagonalization for $`𝐪`$ vectors in the first magnetic Brillouin zone (MBZ), sketched in Figure 1(b).
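A minimal numerical sketch of this diagonalization is given below (Python). It assumes the $`\lambda \gg d`$ limit, units $`\hbar =d=v_F=1`$, the node at $`(p_F,0)`$ (the other three follow by symmetry), and sublattice Fourier coefficients of the form implied by Eqs. (3), (6) and (8) with one vortex per magnetic unit cell on each sublattice; the normalization convention and the small plane-wave cutoff are illustrative choices, not the authors' production setup.

```python
import numpy as np

vF, vD, d, m = 1.0, 1.0, 1.0, 1.0     # vD sets alpha_D = vF/vD
nmax = 3                               # plane-wave cutoff
G = 2 * np.pi / d
Ks = [G * np.array([i, j]) for i in range(-nmax, nmax + 1)
                           for j in range(-nmax, nmax + 1)]

def vs_K(K, R):
    # Fourier coefficient of v_s^mu for a square sublattice (lambda >> d):
    #   v_s^mu(K) = (2*pi/(m*d^2)) * i(K x zhat)/K^2 * exp(-i K.R)
    K2 = K @ K
    if K2 == 0.0:
        return np.zeros(2, complex)    # <v_s^mu> = 0: no average field
    pref = 2 * np.pi / (m * d**2) * np.exp(-1j * (K @ R)) / K2
    return pref * 1j * np.array([K[1], -K[0]])

RA, RB = np.zeros(2), np.array([d / 2, d / 2])   # the two sublattices

def Hp(K):
    # Fourier components of the supercurrent perturbation, Eq. (8)
    vA, vB = vs_K(K, RA), vs_K(K, RB)
    off = 0.5 * vD * m * (vA[1] - vB[1])
    return np.array([[vF * m * vA[0], off], [off, vF * m * vB[0]]])

def H0(p):
    # free anisotropic Dirac Hamiltonian, Eq. (7)
    return np.array([[vF * p[0], vD * p[1]],
                     [vD * p[1], -vF * p[0]]], complex)

def bands(q):
    n = len(Ks)
    H = np.zeros((2 * n, 2 * n), complex)
    for a, Ka in enumerate(Ks):
        H[2*a:2*a + 2, 2*a:2*a + 2] = H0(q + Ka)
        for b, Kb in enumerate(Ks):
            if a != b:
                H[2*a:2*a + 2, 2*b:2*b + 2] = Hp(Ka - Kb)
    return np.linalg.eigvalsh(H)

n = len(Ks)
print(bands(np.array([0.1, 0.0]))[n - 2:n + 2])   # four bands nearest E = 0
```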
As long as $`\lambda \gg d`$ the results are independent of $`\lambda `$ and the only free parameter in the low-energy theory is the Dirac cone anisotropy $`\alpha _D=v_F/v_\mathrm{\Delta }`$. The band structure for the isotropic case, $`\alpha _D=1`$, is presented in Figure 2. When compared to the unperturbed Dirac bands the periodic potential has precisely the expected effect of opening up band gaps at the MBZ boundaries. The surprising finding is that the magnetic field does not destroy the original nodal point, but merely renormalizes the slope of the dispersion. This finding can be understood as a consequence of the exact electron-hole symmetry of the linearized Hamiltonian (7-8). The associated density of states (DOS) vanishes at the Fermi level, in contrast to the peak expected from the LL scenario (cf. Figure 2). The peaks which appear in the DOS are van Hove singularities related to the band structure and have nothing to do with the LL spectrum of Eq. (1). Observation of these peaks is a challenge to the experimental community.
In Figure 3 we display the band structure for $`\alpha _D=20`$, a value perhaps more relevant for the optimally doped YBa<sub>2</sub>Cu<sub>3</sub>O<sub>7-δ</sub>. The striking new feature is the formation of additional lines of nodes on the Fermi surface \[see also Figure 1(b)\] which give rise to a finite DOS at the Fermi surface. This structure can be understood by considering the $`\alpha _D\to \mathrm{\infty }`$ limit.
The lines of nodes first appear at $`\alpha _D\simeq 15`$ and with increasing nodal anisotropy their number increases. This finding is consistent with the prediction of a finite residual DOS based on the semiclassical approach, which is expected to be valid when $`\mathrm{\Delta }_0\ll ϵ_F`$, or equivalently $`\alpha _D\gg 1`$. Qualitatively similar results are found for different orientations of the vortex unit cell with respect to the underlying ionic lattice.
In conclusion, we have shown how the collective superfluid response of a superconductor ensures that the effective magnetic field $`𝐁_{\mathrm{eff}}`$ seen by a fermionic quasiparticle (and distinct from the physical field $`𝐁`$) is zero on average, even in the vortex state. The physics of a low energy quasiparticle in a superconductor with gap nodes is that of a massless Dirac fermion moving in a vector potential associated with physical supercurrents but zero average magnetic field. For a periodic vortex lattice the appropriate description is in terms of familiar Bloch waves. A mathematically equivalent description could be given (in a different gauge) in terms of Landau levels strongly scattered by supercurrents. The fact that no trace of LL structure remains in the exact spectrum of excitations suggests that the former is a more useful starting point. The LL quantization remains a domain of relatively high fields. Our conclusions are corroborated by the absence of LL spectra in the numerical computations as well as in the experimental tunneling data on cuprates and are consistent with scaling arguments given previously.
In the present study we have focused on the leading low-energy, long-wavelength behavior of the quasiparticles as embodied by the linearized Dirac Hamiltonian (7-8). In real materials and at higher energies our results may be modified by the corrections to the linearization, electron-hole asymmetry, possible internode scattering, ionic lattice effects and the vortex core physics. These issues, as well as a more rigorous discussion of the singular gauge transformation, are best addressed within the framework of a tight binding calculation to be reported in a forthcoming publication.
We emphasize that the central idea of this paper, i.e. that upon proper inclusion of the condensate screening the quasiparticles experience effective zero average magnetic field, is completely general and robust against any effects of short length scale physics. Consequently, our method is applicable to any pairing symmetry and arbitrary distribution of vortices and could be useful for understanding the physics of vortex glass and liquid phases, as well as the zero-field quantum phase-disordered states such as the nodal liquid. We expect that disorder in the vortex positions will smear the structure apparent in Figures 2 and 3, resulting in smooth DOS. Of obvious interest are implications for the quasiparticle thermodynamics, transport and localization properties in statically disordered or fluctuating vortex arrays.
The authors are indebted to A. A. Abrikosov, W. A. Atkinson, A. V. Balatsky, M. P. A. Fisher, B. Jankó, A. H. MacDonald, A. Melikyan, D. Rainer, J. A. Sauls, and O. Vafek for helpful discussions and to P. W. Anderson, A. S. Melnikov, J. R. Schrieffer and G. E. Volovik for correspondence. This research was supported in part by NSF grant DMR-9415549.
# The Fourth BATSE Gamma-Ray Burst Catalog (Revised)
## 1. Introduction
BATSE observations of gamma-ray bursts (GRBs) provided the first clear indication of their extragalactic origin. The angular distribution is isotropic, while the intensity distribution shows fewer weak bursts than would be expected from a homogeneous distribution of sources in Euclidean space (Meegan et al. 1992). No observed Galactic component has these spatial properties. Now that some bursts have been associated with optical counterparts that appear to be extragalactic (van Paradijs et al. 1997; Metzger et al. 1997; Sahu et al. 1998), it is reasonable to conclude that all bursts come from cosmological distances.
The first catalog (1B) of BATSE bursts (Fishman et al. 1994) consisted of 260 bursts, and covered the time interval from 1991 April 19 until 1992 March 5. The second catalog (2B) and third catalog (3B) extended the time interval to 1993 March 9 (585 bursts) and 1994 September 19 (1122 bursts), respectively (Meegan et al. 1994, 1996). We present here the fourth catalog (4Br), which includes 1637 bursts detected from launch through 1996 August 29. The current version (4Br) has been revised from the version first circulated on CD-ROM in September 1997 (4B) and described by Meegan et al. (1998), to include improved locations for a subset of bursts that have been reprocessed using additional data. The summary tables herein include only the revised 3B and post-3B bursts. The full catalog data are available electronically on the World Wide Web at http://www.batse.msfc.nasa.gov/batse/grb/catalog/4b/ or http://cossc.gsfc.nasa.gov/cossc/batse/4Bcatalog/4b_catalog.html.
## 2. Instrumentation
BATSE consists of eight detector modules situated at the corners of the CGRO spacecraft. Each module contains a 50.8 cm diameter by 1.27 cm thick NaI scintillator, sensitive to gamma-rays from ∼25–2000 keV. For details of the experiment, see Fishman et al. (1989). The nominal CGRO orbit altitude is 450 km. During the first few years of the mission, atmospheric drag brought the altitude down to ∼350 km, and in 1994 the spacecraft was re-boosted to the nominal altitude using an on-board propulsion system. In 1997, the altitude was further boosted to ∼550 km in order to keep the spacecraft in orbit through the next solar maximum. A plot of the CGRO altitude versus time is shown in Figure 1.
Bursts are nominally recognized on board as simultaneous statistically significant increases, above pre-set thresholds, in the count rates of two or more detectors in a specified energy range. The rates are tested at 64, 256, and 1024 ms intervals. The background is recomputed every 17.408 s. Prior to 1994 September 19, the trigger energy range was set to the nominal 50–300 keV. Since then, various scientific considerations have caused us to employ several alternative trigger energy ranges, as summarized in Table 1. This results in spectrum-dependent differences in trigger sensitivity, so that the different trigger energy settings should properly be treated as separate burst experiments. Table 2 summarizes the total amount of time in each trigger range and the corresponding number of bursts. Furthermore, the levels of the three thresholds are independently adjustable by command, specified in units of standard deviations $`\sigma `$ above the background rate, and the three trigger time scales have different sensitivities depending on the temporal structure of the bursts, so that each is best thought of as a separate burst experiment. The history of the threshold settings is also included in Table 1.
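Schematically, the test on a single timescale reduces to comparing Gaussian excess significances against the threshold; the sketch below (Python) is a simplified model of this logic only, and omits flight software details such as the 17.408 s background updates, channel selection and trigger disabling.

```python
import numpy as np

def burst_trigger(rates, bg, sigma_thresh=5.5, n_required=2):
    """Schematic single-timescale trigger test.

    rates : (n_detectors, n_bins) counts per bin in the trigger range
    bg    : (n_detectors,) expected background counts per bin
    Returns the index of the first triggering bin, or None.
    """
    # Gaussian significance of the excess in each detector and bin
    signif = (rates - bg[:, None]) / np.sqrt(bg[:, None])
    n_above = (signif > sigma_thresh).sum(axis=0)  # detectors above threshold
    hits = np.flatnonzero(n_above >= n_required)
    return int(hits[0]) if hits.size else None

# On board, the equivalent test runs in parallel on 64, 256 and 1024 ms
# bins, so one would call burst_trigger once per rebinned timescale.
```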
When a burst trigger occurs, BATSE enters a mode in which high-rate data are accumulated and stored for later transmission. During this data accumulation interval, further burst triggers are disabled. The duration of the interval, set by command, was 241.7 s from launch until 1992 July 4; then 180.2 s until 1992 July 7; then 241.7 s until 1992 December 17; then 573.4 s thereafter. At the end of this accumulation interval, a readout interval begins and the accumulated data are transmitted. During the readout interval, burst triggers are enabled, but only on the 64 ms time scale, and the trigger threshold is raised to correspond approximately to the maximum rate of the current burst. A burst that triggers during this time is referred to as an overwrite. When an overwrite occurs, the readout of the remaining data from the overwritten event is suspended. Consequently, some or all of the data from an overwritten trigger may be lost.
On 1992 December 17, flight software revisions were made to compensate for the failed CGRO tape recorders. A new burst data type, DISCLB, was added to enhance the computation of burst locations even if the burst occurred during a real-time data gap. Also, stored commands were initiated to suspend readout of the burst memory during telemetry gaps, which are predictable, and the readout interval for weak bursts was shortened from the standard ∼90 minutes to ∼28 minutes by eliminating some high time resolution data. A new TDRS satellite also significantly reduced the telemetry gaps. In addition, the CGRO flight software was revised on 1993 March 17 to transmit partial BATSE data at a lower telemetry rate using the omnidirectional antenna when TDRS coverage was not available. As a result of these changes, by March of 1993, the data recovery was almost as good as before the tape recorder failures.
## 3. Instrumental Considerations
### 3.1. Trigger Criteria
The most significant difference between the 4Br (and 4B) and the previous BATSE catalogs is that the energy range of the burst trigger was revised several times since the end of the 3B catalog, as summarized in Table 1. The Trigger Criteria Table in the 4Br catalog on-line database provides further details. Normally, all eight BATSE detectors are enabled for burst triggering, and a trigger requires the rates from two or more detectors to be above threshold. However, from 1995 July 20 to 1995 July 24, the requirement was changed to trigger if a single detector exceeded threshold. This was an engineering test to obtain high time resolution data on single detector phosphorescence events. Also, from 1995 December 11–14 only 2 detectors were enabled for burst triggering, and from 1995 December 14–18 only 4 detectors were so enabled. This was done to obtain a larger sample of events from the bursting pulsar GRO J1744−28, which was active at that time.
The variations in the trigger criteria as summarized in Table 1 can distort some burst global properties that involve spectral considerations, such as the burst rate, distributions of spectral parameters, and hardness-intensity correlations. For many studies, it may be necessary to use bursts that have a common trigger energy range. On the other hand, inter-comparison of samples obtained with different trigger criteria presents new opportunities for investigating burst properties.
### 3.2. Sky Exposure
The sky exposure is the total observing time as a function of celestial coordinates. In computing sky exposure, we consider Earth blockage and times that the burst trigger is disabled. After a burst trigger, there is a variable time interval during which the burst data are transmitted to the ground. During this time, the trigger thresholds are raised so that only a stronger event can abort the current readout. Early in the mission, the burst memory readout time interval corresponded approximately to one satellite orbit. As described in section 2, flight software changes were implemented subsequent to the tape recorder failures to suspend the memory readout during telemetry gaps and to read out only a portion of the burst memory for weaker bursts.
In the sky exposure calculation, burst readout times are considered dead time, so that bursts that are overwrites should not be included in calculations that use the sky exposure. The sky exposure is thus the total time during which BATSE could have triggered on a burst above the nominal threshold and is a function only of the declination of the burst, with a dipole moment due primarily to disabling the trigger during passages of CGRO through the South Atlantic Anomaly, and a quadrupole moment due to Earth blockage. Any dependence on burst right ascension averages out over sufficiently long periods.
The algorithm used for calculating sky exposure for the 1B catalog depended on continuous data coverage and could not be used for subsequent catalogs due to the data gaps arising from the tape recorder failures. A new algorithm has been developed (Hakkila et al. 1998a) and used to calculate the sky exposure for each of the BATSE catalogs and subsets of catalogs.
There are two major differences between the new algorithm and the previous algorithm. The previous algorithm included only time during which the trigger threshold was at the nominal setting of 5.5$`\sigma `$ in each of the three trigger time scales, while the new algorithm has no such restriction. This change increases the 1B exposure by ∼14%. The new algorithm handles the SAA passages more accurately than the 1B calculation, which overestimated the time spent in the SAA and therefore overestimated the dipole moment of the exposure. Correcting for sky exposure, the full sky burst rate above the BATSE threshold, triggering on 50–300 keV, is ∼666 bursts/year. Figure 2 shows the exposure vs. declination for the full 4Br catalog as well as for two important data subsets: the times when the trigger energy channels were channels 1+2 (25–100 keV) and 3+4 ($`>`$100 keV).
Table 3 indicates the average exposures, dipole moments, and pertinent quadrupole moments of the sky exposures shown in Figure 2. The exposures have produced similar anisotropies (due primarily to Earth blockage) in all published BATSE burst catalogs.
Table 4 lists the rates of bursts detected above BATSE’s minimum detection threshold for the three most commonly-used trigger criteria, corrected for sky exposure. The burst rates are significantly higher in trigger channels 2+3 than they are in channels 1+2 or 3+4. The high signal-to-noise ratios in channels 1 and 4 suggest that enhanced channel 2+3 rates are not entirely due to instrumental effects; it appears that more observable bursts exist in the 50 to 300 keV energy range (channels 2+3) than at other energies. This is consistent with the finding by Harris & Share (1998) that few extremely hard bursts exist that could not be detected by BATSE. The spectral response of the BATSE detectors will be discussed in more detail in section 3.3.
The exposure-corrected 2+3 burst rate can be compared to the exposure-corrected burst rate obtained by the untriggered burst search (J. Kommers, private communication). These rates indicate a burst distribution that continues to decline in number below the BATSE trigger threshold. There is no time-dependence to the BATSE exposure-corrected gamma-ray burst rate; this has remained roughly constant throughout the CGRO mission.
### 3.3. Trigger Efficiency
The trigger efficiency is the probability that a burst of a given peak flux will exceed the BATSE trigger threshold. Again, the algorithm used for the previous catalogs cannot be used if there are data gaps. Also, the older algorithm did not consider the increase in efficiency due to atmospheric scattering of photons into the detectors, the effect of the range of burst spectral properties, or the effect of non-nominal thresholds. An improved algorithm that overcomes these limitations is currently under development (Pendleton, Hakkila & Meegan 1998). Figure 3a compares results obtained with the old and new algorithms for the probability of exceeding the 1024 ms threshold, in the nominal 50–300 keV trigger energy range and at the nominal level of 5.5$`\sigma `$, as a function of peak flux. For this calculation, all bursts were assumed to have the same spectrum, viz., a typical Band function (Band et al. 1993). The greater efficiency near threshold is due to inclusion of atmospheric scattering in the new calculation. Figure 3b compares the triggering efficiency for three different trigger energy ranges (20–100 keV, 50–300 keV, and $`>`$100 keV), using the same assumed burst spectrum in all cases.
The spectral dependencies (summarized by the $`\alpha `$, $`\beta `$, and $`E_\mathrm{p}`$ parameters of the Band function) appear to strongly influence the trigger efficiency. The $`E_\mathrm{p}`$ parameter is relatively important, as is the low-energy spectral index $`\alpha `$. The efficiency is less sensitive to the high-energy spectral index $`\beta `$. The effects of the dependence on $`E_\mathrm{p}`$ will be demonstrated in section 4.2. As a result of these spectral dependencies, calculation of the trigger efficiency depends on an assumed distribution of spectral shapes, and is therefore model-dependent.
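For reference, the Band function used in these sensitivity studies has a compact closed form; the sketch below implements the standard Band et al. (1993) parameterization, with default parameter values chosen for illustration only.

```python
import numpy as np

def band_function(E, alpha=-1.0, beta=-2.3, Ep=267.0, A=1.0):
    """Band et al. (1993) photon spectrum N(E), with E in keV.

    Ep is the nu-F-nu peak energy, related to the e-folding energy by
    E0 = Ep/(2 + alpha); the two power laws join smoothly at
    Eb = (alpha - beta)*E0.  Defaults are illustrative, not fitted values.
    """
    E = np.asarray(E, dtype=float)
    E0 = Ep / (2.0 + alpha)
    Eb = (alpha - beta) * E0
    low = A * (E / 100.0)**alpha * np.exp(-E / E0)
    high = A * (Eb / 100.0)**(alpha - beta) * np.exp(beta - alpha) \
             * (E / 100.0)**beta
    return np.where(E < Eb, low, high)
```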
## 4. The Catalog
The 4Br catalog includes the time period of the 3B catalog plus an additional 515 bursts between 1994 September 20 and 1996 August 29. The 4B catalog was initially released in September 1997 on the World Wide Web and on CD-ROM (Meegan et al. 1998). A few revised entries for 3B bursts were included in the initial release: revised locations for triggers 741, 2311, and 3155; a corrected date for trigger 1694; revised CMAXMIN entries for triggers 111, 3118, and 3137; and a revised duration for trigger 148. Burst locations were computed using the same algorithm as was used for the 3B catalog. As will be described in section 4.1, the accuracy of the algorithm is improved substantially for certain bursts by fitting to data from 6 detectors rather than 4 detectors. The 4Br catalog is expanded from the September 1997 release by including revised locations for 208 bursts (199 of these were recomputed using the same data type, but fitting to 6 detectors, and the remaining 9 were recomputed using more appropriate choices of data type and$`/`$or time interval).
The 162 revised 3B bursts are listed in Table 5 and the 515 post-3B bursts are listed in Table 6. The columns are the same as were used in the previous catalogs. The first column is the BATSE trigger number. The next column specifies the trigger name in the format 4B yymmdd, where yy is year, mm is month, and dd is day of month. The trigger name may have a letter appended if there is more than one trigger in a day. The next column specifies the time of the trigger expressed as the Truncated Julian Day (TJD) and the seconds of day (s). The next column specifies the trigger time in the format day of year and UT time. The next five columns give the computed locations in equatorial (epoch 2000) and Galactic coordinates and the error in the location. The tabulated error radius represents the radius of a circle that has the same area as the 68% statistical confidence region. The actual 1$`\sigma `$ contours are not necessarily circular. These radii represent errors due to photon counting statistics only; there is also a systematic error (see section 4.1). The next column specifies the largest of the three values of $`C_{\mathrm{max}}/C_{\mathrm{min}}`$, the maximum count rate divided by the threshold count rate. The next two columns specify the threshold number of counts $`C_{\mathrm{min}}`$ and the relevant trigger time scale. Note that the latter is not necessarily the time scale for the trigger, but the time scale for which $`C_{\mathrm{max}}/C_{\mathrm{min}}`$ is the largest. The column labeled $`T_{90}`$ is a measure of the burst duration. It is the time during which the burst integrated counts increases from 5% to 95% of the total counts. The next column specifies the peak flux and error. The energy interval for the peak flux is 50–300 keV (the burst trigger range), and the integration time is 256 ms. The next column specifies the fluence and error in the 50–300 keV energy range. The next column presents the hardness ratio, defined as the ratio of fluence in the 100–300 keV range to the fluence in the 50–100 keV range. The next column specifies the total fluence (above ∼20 keV) and error over the duration of the burst. The last column contains codes for specific comments listed at the end of the table.
The $`V/V_{\mathrm{max}}`$ statistic (Schmidt, Higdon & Hueter 1988) is simply $`(C_{\mathrm{max}}/C_{\mathrm{min}})^{-3/2}`$. Table 7 summarizes the average value of $`V/V_{\mathrm{max}}`$ for the 4Br catalog and various subsets. It is clear from the table that none of the subsets are consistent with the value 0.5 that a homogeneous sample would produce.
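Given the tabulated $`C_{\mathrm{max}}/C_{\mathrm{min}}`$ values, averages of this kind reduce to a few lines; the sketch below assumes nothing beyond the definition above.

```python
import numpy as np

def mean_v_vmax(cmax_cmin):
    """<V/Vmax> and its standard error from tabulated Cmax/Cmin values;
    a homogeneous distribution in Euclidean space gives 0.5."""
    v = np.asarray(cmax_cmin, dtype=float) ** (-1.5)
    return v.mean(), v.std(ddof=1) / np.sqrt(v.size)
```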
The numerous missing entries are due to data gaps in one or more of the various data types that BATSE transmits. The various parameters have different requirements on data completeness. $`C_{\mathrm{max}}/C_{\mathrm{min}}`$ is most sensitive to missing data; durations, fluxes, and fluences less so. Since locations can be computed using several different data types and time intervals, they are available for all bursts.
### 4.1. Locations
The 4Br catalog incorporates 211 location changes from the 3B catalog (Meegan et al. 1996), of which 3 had already been made in the 4B catalog. While twelve of the changes were made for miscellaneous reasons, 199 of the changes are due to relocating using the data from six instead of four detectors. When a location obtained using the data of four detectors has a detector with an angle larger than 90°, this is an indication that the choice of four detectors is poor, since the geometry ensures that there will always be four detectors with source angles no greater than 90°. We postulated that such cases are due primarily to systematic errors in detector response and atmospheric scattering corrections, so that a better location would be obtained by fitting the data from six detectors. The idea was tested by comparing locations for 39 bursts from the 4Br catalog in this class for which interplanetary network (IPN) annuli are available (Hurley et al. 1998a,b). Two intersecting annuli have been derived for trigger 1121, so that its true location is precisely known. In this case, the six-detector BATSE location is better than the four-detector case by 0.4°, versus a statistical error of 0.3°. For the remaining 38 events, only single IPN annuli are available, so we can only determine the closest approach angle $`\rho `$ of the annuli to the BATSE locations. Figure 4 summarizes the differences $`\mathrm{\Delta }\rho `$ between the four- and six-detector locations. While not every case shows an improvement, the average change is to a significantly better location.
Tables 5 & 6 list only the statistical location errors $`\sigma _{\mathrm{stat}}`$ as determined by LOCBURST (the BATSE software for determining burst locations) from counting statistics. In addition to $`\sigma _{\mathrm{stat}}`$, the location errors also include a systematic component $`\sigma _{\mathrm{sys}}`$. In the 1B catalog, comparison of the LOCBURST locations with more precise, independently determined, locations for a small number of events indicated that $`\sigma _{\mathrm{sys}}=4^{\circ }`$ (Meegan et al. 1994). After improvements to the LOCBURST algorithm, this was reduced in the 3B catalog to $`\sigma _{\mathrm{sys}}=1.6^{\circ }`$ (Meegan et al. 1996). In the 3B catalog paper, it was noted that the error distribution might not be Gaussian and that some bursts might fall into an extended non-Gaussian tail with higher values of $`\sigma _{\mathrm{sys}}`$. Recently, Briggs et al. (1998) used IPN data for 411 GRBs (Hurley et al. 1998b) of the 4Br catalog to test various models for the BATSE location error distribution and to determine the optimum parameter values of these models. An excellent fit to the data is provided by a model in which the error distribution is a modified Gaussian with an extended tail: 78% of the time the systematic error belongs to the core with a value of 1.85°, while the remainder of the probability corresponds to a tail with systematic error of 5.1°. A more complex model, in which the systematic error depends on the data type used in LOCBURST to obtain the location, is modestly favored by the data. Further details and instructions for implementing the improved error models are presented by Briggs et al. (1998).
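A simple Monte Carlo sketch of such a core-plus-tail model is shown below; combining the statistical and systematic radii in quadrature is our simplification for illustration, and the exact prescription is the one given by Briggs et al. (1998).

```python
import numpy as np

rng = np.random.default_rng(0)

def total_error_radius(sigma_stat, n=100_000):
    """Draw total 68% location radii: the systematic term is 1.85 deg
    with probability 0.78 ('core') and 5.1 deg otherwise ('tail'),
    combined in quadrature with the statistical radius."""
    sigma_sys = np.where(rng.random(n) < 0.78, 1.85, 5.1)
    return np.sqrt(sigma_stat**2 + sigma_sys**2)

r = total_error_radius(sigma_stat=1.0)  # e.g. a burst with 1.0 deg stat error
print("mean total radius: %.2f deg" % r.mean())
```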
Figure 5 shows the distribution in Galactic coordinates of the locations of the 1637 bursts in the 4Br catalog. The dipole and quadrupole moments of the observed distribution of bursts are listed in Table 8, together with the values expected for an isotropic distribution after correction for sky exposure (Section 3.2). The error bars on the observed burst moments are from the sample statistics; the location errors make a negligible contribution. The error bars on the sky exposure are very rough estimates. After correction for the anisotropic sky exposure, the galactic moments and the coordinate system independent tests (Briggs 1993) are consistent with isotropy. Without correction, the equatorial quadrupole moment differs from zero by more than 3$`\sigma `$; after correction, both equatorial moments are consistent with isotropy.
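The moments themselves follow directly from the catalog coordinates; a minimal sketch in the convention used here ($`\mathrm{cos}\theta `$ measured from the Galactic center and $`\mathrm{sin}^2b-1/3`$), with the standard analytic 1$`\sigma `$ expectations for an isotropic sky:

```python
import numpy as np

def galactic_moments(l_deg, b_deg):
    """Dipole <cos(theta)> (theta from the Galactic center) and
    quadrupole <sin^2(b) - 1/3> of a burst sample, with the usual
    analytic 1-sigma errors expected for an isotropic sky."""
    l = np.radians(np.asarray(l_deg))
    b = np.radians(np.asarray(b_deg))
    cos_theta = np.cos(b) * np.cos(l)
    dipole = cos_theta.mean()
    quad = (np.sin(b)**2 - 1.0 / 3.0).mean()
    n = l.size
    return dipole, 1.0 / np.sqrt(3.0 * n), quad, 2.0 / np.sqrt(45.0 * n)
```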
We place limits on burst repetition by comparing the $`R`$ and $`w`$ clustering statistics to Monte Carlo simulations of the gamma-ray burst sky distribution. The $`R`$ statistic (Tegmark et al. 1996) is a measure of burst separation weighted by the stated burst error and a 1.6° systematic error, while $`w`$ is the two point correlation function averaged over burst separation values of 0° to 5.73°. Of these two statistics, $`R`$ produces somewhat stronger limits on burst repetition (Brainerd & Kippen 1998). In the simulations, we generated 10<sup>4</sup> gamma-ray burst sky distributions with the observed distribution of location errors for each model of a specific number of observed repetition sources. The simulations use location error model 2 of Briggs et al. (1998), and they take into account the sky exposure. From the simulations, the expectation values for no repeaters are $`R=0.428\pm 0.007`$ and $`w=0.007\pm 0.018`$, consistent with the 4Br catalog values $`R=0.422`$ and $`w=0.007`$. Upper limits to the repeater fraction depend on the assumed number of repetitions per source (Hakkila et al. 1998b). For models in which repeating sources contribute 2 gamma-ray bursts to the catalog, the repeater fraction limits at the 5% significance level are 8.4% using $`R`$ and 12.9% using $`w`$. At the 1% significance level, the corresponding limits are 17.1% using $`R`$ and 24.2% using $`w`$.
### 4.2. Fluxes and Fluences
Peak fluxes for each of the three trigger time scales are determined as in the previous BATSE catalogs. Peak count rates are converted to units of photons cm<sup>-2</sup> s<sup>-1</sup> using detector response matrices that include the effects of varying angles to bursts, detector efficiency, atmospheric scattering, and spectral response. Further details of the method and results are presented by Pendleton et al. (1996).
Peak flux is here defined as the maximum flux in photons cm<sup>-2</sup> s<sup>-1</sup>, integrated over 50–300 keV in energy, and integrated over 64 ms, 256 ms, or 1024 ms. Specifying the peak flux in the trigger energy range provides an intensity measurement for which instrument sensitivity can be more directly calculated.
The anisotropic response of the detectors implies that location inaccuracies produce systematic errors in determining peak flux and fluence. However, the effect is negligible compared to other systematic errors because the energy dependence of the detector response changes rather slowly with the angle of incidence, and because the use of multiple detectors tends to average out the angle dependence. The peak flux and fluence measurements presented here were derived using the original 4B locations. We verified that use of the 4Br locations would have a negligible effect by re-computing flux/fluence values for 10 bursts with the largest differences between 4Br and 4B locations. We find that the flux and fluence values are changed by less than 10% as a result of the location differences. This is substantially smaller than systematic errors due to energy calibration errors, assumed spectral forms, and detector response uncertainties, which are estimated to be ∼20%. Figures 6a-6c show the cumulative distributions of peak flux calculated on the 64 ms, 256 ms, and 1024 ms trigger time scales, respectively, for bursts observed when the trigger energy range was set to the nominal 50–300 keV range. Near trigger threshold each of these distributions diverges into three distinct branches, illustrating how the instrument trigger threshold influences the measurement of the GRB population intensity distribution near threshold. Our continuing studies of BATSE’s burst sensitivity using the enhanced sky map algorithm (Hakkila et al. 1998a; Pendleton, Hakkila & Meegan 1998) combined with fits of Band’s spectral function (Band et al. 1993) to a large population of bursts (Mallozzi et al. 1995) have shown that BATSE’s sensitivity to bursts is strongly dependent on the burst spectrum (specifically on $`E_\mathrm{p}`$, the energy for which $`\nu F_\nu `$ is a maximum), and that no single spectrum produces a well-behaved instrument sensitivity correction near threshold. To illustrate this, we calculated the BATSE sensitivity for several representative spectra using spectral parameters derived from fits of the Band function to actual burst data. These are indicative of the range of sensitivity corrections, but are not meant to be definitive, as a full exploration of spectral parameter space is beyond the scope of the present paper.
In Figure 6a, the highest curve shows the peak flux distribution using a burst sensitivity correction calculated for a single Band function fit to a burst with $`E_\mathrm{p}=267`$ keV. For this spectral shape, BATSE’s sensitivity is relatively poor for bursts with peak flux just above threshold. Conversely, the lowest curve in Figure 6a shows the effect of the sensitivity calculated using a spectrum with $`E_\mathrm{p}=1391`$ keV, for which essentially no correction to the observed data is required.
All the sensitivity corrections calculated with individual spectra produced corrections to the burst intensity distribution that are relatively smooth and span a small interval in intensity. If we define a peak flux $`P_0`$ above which the sensitivity correction is negligible, then $`P_0`$ and $`E_\mathrm{p}`$ are anti-correlated. Since we know that bursts exhibit a fairly broad range of $`E_\mathrm{p}`$ for all intensities studied to date (Mallozzi et al. 1995), it is reasonable to assume that a more accurate corrected burst intensity distribution will be obtained when a distribution of burst spectra are used in the sensitivity calculation. The thin line in Figure 6a shows the distribution corrected using five spectra in the sensitivity calculation with different $`E_\mathrm{p}`$ values distributed over the range observed by BATSE in the entire burst population. This correction more accurately characterizes the threshold effects than a correction obtained using a single spectrum. It is important to emphasize, however, that inferences concerning the “true” burst population statistics for weaker bursts are quite dependent on the assumed spectral parameters. More detailed studies are required to quantify these effects more accurately.
The dependence of the instrument sensitivity on the burst spectrum is further illustrated by the intensity distribution of bursts detected when the trigger energy range is different from the nominal. Figure 7a shows the intensity distribution for trigger energy 25–100 keV; near threshold, the lowest curve shows the uncorrected data, and the highest curve results from the assumption that all bursts have a spectrum with $`E_\mathrm{p}=267`$ keV. The second-lowest curve results from assuming that all burst spectra have $`E_\mathrm{p}=50`$ keV. The second-highest curve is the result of combining several burst spectra with $`E_\mathrm{p}\sim 50`$ keV, and it produces an intensity distribution with an abrupt change in slope near threshold. Further modeling is necessary in this case. Figure 7b shows the intensity distribution for the trigger energy range $`>`$ 100 keV. Again, the lowest curve shows the uncorrected data. The highest curve assumes an input spectrum with $`E_\mathrm{p}=147`$ keV. Assumption of an input spectrum with $`E_\mathrm{p}=1391`$ keV results in a negligible correction. In contrast to Figure 7a, the intermediate curve, which is an average of several spectra, appears to produce a reasonably smooth correction.
In summary, the burst intensity distributions collected in the three different trigger energy ranges each measure a different component of the burst spectral distribution near threshold, and deconvolution of the threshold effects is necessarily model dependent.
### 4.3. Durations
As with the previous catalogs, we use $`T_{50}`$ and $`T_{90}`$ as measures of burst duration. $`T_{50}`$ is the time interval in which the integrated counts from the burst increases from 25% to 75% of the total counts; $`T_{90}`$ is similarly defined. Figure 8 shows the $`T_{50}`$ and $`T_{90}`$ distributions for all bursts of the 4Br catalog, and separately for three different trigger energy ranges. The well-known bimodality (Kouveliotou et al. 1993) is clearly evident in the overall data and the 50–300 keV data. The distributions for the lower and higher trigger energy ranges are consistent, within their limited statistical accuracy, with the 50–300 keV distribution.
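Operationally, both measures come from the normalized cumulative background-subtracted light curve; a minimal sketch, assuming a single well-behaved burst so that the cumulative curve is effectively monotonic:

```python
import numpy as np

def t50_t90(t, counts, bg):
    """Durations from a light curve: the interval over which the
    background-subtracted integrated counts rise from 25% to 75% (T50)
    and from 5% to 95% (T90) of the total."""
    c = np.cumsum(counts - bg)
    c = c / c[-1]
    def width(lo, hi):
        return t[np.searchsorted(c, hi)] - t[np.searchsorted(c, lo)]
    return width(0.25, 0.75), width(0.05, 0.95)
```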
## 5. Summary
The 4Br catalog includes 1637 cosmic gamma-ray bursts detected by BATSE during more than five years of operation. After processing with the most up-to-date location algorithm, the sky distribution of the bursts remains highly isotropic. Samples of bursts obtained with trigger energy ranges different from the BATSE nominal (50–300 keV) are, within the limited statistics, also isotropic and inhomogeneous, and have similar duration distributions.
The authors gratefully acknowledge the efforts of the BATSE Operations Team in processing the 4Br catalog data, and the assistance of B. Schuft, P. Welti, X. Chen, M. Ivanushkina, and L. Raschke in producing the trigger efficiencies. Support for UAH & USRA personnel was provided through cooperative agreement NCC 8-65. JH was supported through grant NAG 5-3591. JPL was supported through cooperative agreement NCC 8-82. Additional support for summer students was provided through the NSF Research Experience for Undergraduates program at UAH.

Fig. 8. Duration distributions ($`T_{50}`$) for bursts in the 4Br catalog that do not have significant data gaps and that are above the 64 ms trigger threshold. The latter selection reduces the effect of an instrumental cutoff of short bursts. a) all bursts, b) trigger energy range 25–100 keV, c) trigger energy range 50–300 keV, d) trigger energy range $`>`$ 100 keV.
# The identification of the long–period X–ray pulsar 1WGA J1958.2+3232 with a Be–star/X–ray binary

Based on data collected at the Astronomical Observatory of Loiano, Italy
## 1 Introduction
The X–ray source 1WGA J1958.2+3232 was serendipitously detected in May 1993 within the field of view of the Position Sensitive Proportional Counter (PSPC; 0.1–2.4 keV) in the focal plane of the ROSAT X–ray telescope. Highly significant pulsations at a period of 721$`\pm `$14 s were discovered in the ROSAT data (Israel et al. 1998). An ASCA observation performed in May 1998 detected 1WGA J1958.2+3232 at the flux level expected from the ROSAT pointing and confirmed the presence of a strong periodic signal at 734$`\pm `$1 s (Israel et al. 1999). A luminosity of ∼10<sup>33</sup>($`d`$/1 kpc)<sup>2</sup> erg s<sup>-1</sup> in the 2–10 keV energy band was obtained (assuming an absorbed power–law model). Due to the large uncertainty in the period determined by ROSAT, it was not possible to determine whether the system contains an accreting magnetic white dwarf or a neutron star, based on the period derivative. Even the spectral characteristics were consistent with both scenarios. Accreting neutron stars in binary systems are often associated with O–B stars, while cataclysmic variables with K–M main sequence companion stars; in both cases strong emission–lines are expected to be detected. So far no unambiguous association of an accreting white dwarf to an OB star has been found. Expected X–ray luminosities are in the ∼10<sup>32</sup> erg s<sup>-1</sup> range for wind accretors. Identifying the optical counterpart of 1WGA J1958.2+3232 and studying its spectrum provides decisive clues on the nature of the system.
We present here the results of an optical program aimed at studying the stars included in the X–ray 30″ radius error circle of 1WGA J1958.2+3232. The observations were performed between May and September 1998 at the Loiano Astronomical Observatory. In order to select objects with peculiar emission–lines, as expected from the companion star of this kind of binary system, slitless multiobject spectroscopy as described by Polcaro & Viotti (1998) was used. This method allows one to obtain a good spectrum for a large number of stars and quickly select stars with strong emission–lines, down to magnitudes of m<sub>V</sub> ∼ 16–18. Moreover, the absence of a slit eliminates the light loss due to poor seeing, while sky and nebular lines are spread out over the whole image, resulting only in a small increase of the background level.
A Be spectral–type star was found well within the X–ray error circle. The probability of finding by chance a Be star with V∼16.0 mag within the small position uncertainty region is ∼10<sup>-6</sup>. Therefore the Be star represents a very likely optical counterpart of 1WGA J1958.2+3232, making this source one of the few accreting X–ray pulsars with a pulse period P $`>`$ 500 s in a Be/X–ray binary system.
## 2 Observations and results
The observations were all performed with the 1.52 m “Cassini” telescope equipped with the Bologna Faint Objects Spectrometer and Camera BFOSC (Bregoli et al. 1987; Merighi et al. 1994). During the first run (May 1998) the camera was equipped with the Thomson $`1024\times 1024`$ CCD with 0.56″ pixel size and 9′.6×9′.6 field of view (FOV). In July and September 1998 a Loral $`2048\times 2048`$ CCD detector with 0.41″ pixel size and a FOV of 13′.5×13′.5 was used instead. We performed V, R and I photometry, and low–resolution spectroscopy. The data were reduced using standard ESO–MIDAS and IRAF procedures for bias subtraction, flat–field correction, and one dimension stellar and sky spectra extraction. Cosmic rays were removed from each image and spectrum and the sky–subtracted stellar spectra were obtained, corrected for atmospheric extinction and flux calibrated (when possible).
### 2.1 Photometry and Spectroscopy
V, R, and I images of the whole 9′.6×9′.6 wide 1WGA J1958.2+3232 field were first obtained on 1998 May 30. In Figure 1 the ROSAT PSPC error circle is shown together with the ASCA error circle (Israel et al. 1998, 1999). Note that at the time of the first optical observation only the ROSAT PSPC position uncertainty was known. Photometry for each stellar object in the image was derived with the DAOPHOT II package (Stetson 1987), and Color-Magnitude Diagrams (CMDs) were then computed for all color combinations. By the analysis of the $`V`$-$`R`$ vs. $`V`$ CMD a well defined main sequence is clearly visible (Fig. 2), as expected in an area projected along the galactic plane like the one of 1WGA J1958.2+3232 ($`l^{II}\simeq 69^{\circ }`$; $`b^{II}\simeq 2^{\circ }`$). However, a number of objects with peculiar colors (redder or bluer) with respect to the bulk of the stars were selected; three of them (stars A, B and C) were found to lie within the ROSAT position uncertainty region (see Table 1).
Comparison between photometry obtained at the beginning and the end of the run showed no signs of variability to a limit of $`0.2`$ mag for any of the selected objects. A comparison between May and September 1998 gave similar results.
On 1998 May 30 we obtained a low–resolution (20 Å) slitless spectroscopic image of the field with an R filter and a large band grism covering the spectral region (4000–8000 Å). We note that, in general, the choice of the grism, filter and exposure time depends on the grism dispersion and CCD size (which set the maximum spectral range allowed), and the crowding level of the field (which sets the maximum number of non–overlapping spectra to be analysed). The selected filter and grism combination gives a bandpass of ∼1000 Å, centered on H$`\alpha `$. A detailed analysis of the spectra obtained in this way allowed us to single out a relatively strong H$`\alpha `$ emission line associated with one (star B) of the three stars previously selected.
Slit spectroscopy was performed over selected stars on 1998 May 30 – June 2, July 27 – 31, and September 14 – 15 (see Table 2).
### 2.2 Results
The spectra of star A are undoubtedly those of a classical OB star. However, neither emission–lines nor other peculiarities are present that would associate this star with the X–ray source. Similar results were obtained for stars D and E, for which a slitless spectrum was obtained in May 1998.
Due to its faintness (R=17.0), star C is more difficult to study. Even with a 1.5 h exposure, the S/N ratio of the spectrum barely reached unity with our instrumental set–up. The steep rise of its spectrum in the UV argues for a very hot object. However, it is unlikely to be associated with the X–ray source since no obvious emission features are present. Moreover, after the ASCA observation, its position lies outside the intersection region of the two X–ray error circles.
We note that at the border of the ROSAT error circle lies a bright (V∼12) star. Its 4000–8000 Å spectrum ($`\mathrm{\Delta }\lambda `$=5.5 Å) was taken in an earlier observation (June 1996), also from the Loiano Observatory using the BFOSC; the star turned out to be a strongly reddened B8V spectral–type star, without any spectral peculiarity.
The spectrum of star B clearly shows a very high ionization state (see Fig. 3). The Balmer series lines are all in emission, up to the blue edge of our spectrogram, as are many He I ($`\lambda `$$`\lambda `$ 4471, 4922, 5875, 6678, 7065, 8361 Å) and He II ($`\lambda `$$`\lambda `$ 4686, 5412 Å) lines. An emission feature centered at 4634 Å is present, that could be attributed to the N III 4634–40 Å doublet, but we cannot exclude the possibility that this feature is mainly due to Fe II lines. The presence of He II emission rules out a classical Be star, whereas the presence of emissions up to H7 excludes that of an Of star (see e.g. Jaschek & Jaschek, 1987).
Table 3 gives the parameters of the strongest emission lines. These lines (H$`\alpha `$, H$`\beta `$, H$`\gamma `$, He I 5875 Å, He II 4686 Å) show extended and asymmetric profiles, with a red wing more pronounced than the blue wing. At our spectral resolution, this might indicate a line splitting, possibly due to the presence of a disk. The absence of forbidden lines rules out the possibility of a pre–main sequence object or a cataclysmic variable. All this evidence clearly points to the association of this object with the X–ray source.
## 3 Discussion
The spectrum of star B is similar to those of massive Be/X–ray binaries. As usual, a more detailed spectral classification is made more difficult by the fact that most of the classical criteria are unusable, since the H and most of the He lines are in emission and the Fe II complex masks most of the stellar atmospheric features. However, the presence of N III, C III and O II lines, and the ∼0.1 Å equivalent width of the Mg II 4481 Å line suggest a B0 spectral type (see e.g. Jaschek & Jaschek, 1987). The few absorption lines that are clearly visible are wide, thus suggesting a main sequence star. We can thus conclude that most probably the optical companion of the collapsed object is a B0V star.
The equivalent width of ∼0.5 Å of the interstellar Na I line (5890 Å) indicates an intermediate reddening (E<sub>B-V</sub> ∼ 0.6; following Hobbs, 1974), corresponding, in the Cygnus region, to a distance of ∼800 pc (Ishida, 1969). We also note that E<sub>B-V</sub> ∼ 0.6 translates into a hydrogen column of ∼3 $`\times `$ 10<sup>21</sup> cm<sup>-2</sup> (Zombeck 1990), which is in good agreement with the N<sub>H</sub> values inferred from the spectral analysis of the merged ROSAT and ASCA data (N<sub>H</sub>∼4 $`\times `$ 10<sup>21</sup> cm<sup>-2</sup>; Israel et al. 1999). This result implies a 1–10 keV X–ray luminosity of 1.2 $`\times `$ 10<sup>33</sup> erg s<sup>-1</sup>. Even though the optical data argue against an accreting white dwarf, only a measurement of the spin period derivative will firmly assess the nature of the accreting object. If the system hosts an accreting neutron star, then it would be one of the faintest Be/X–ray pulsars. Its closest analogue would be X Per, which shows a 1–10 keV X–ray luminosity of 0.7–3.0 $`\times `$ 10<sup>34</sup> erg s<sup>-1</sup> in its low state (Haberl et al. 1998). Two recently identified Be/X–ray pulsars, namely RX J0440.9+4431/BSD 24–491 and RX J1037.5–564/LS 1698 (Reig & Roche, 1999), also share similar optical and X–ray characteristics with 1WGA J1958.2+3232. If the accreting object is a white dwarf, 1WGA J1958.2+3232 would be the first example of a Be/white dwarf interacting binary system.
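Both consistency checks reduce to one-line conversions. In the sketch below the gas-to-dust conversion factor (a standard Bohlin-type value) and the unabsorbed 1–10 keV flux are our assumptions, chosen to reproduce the numbers quoted above.

```python
import numpy as np

N_H_PER_EBV = 5.8e21   # cm^-2 mag^-1; assumed Bohlin-type conversion factor
PC_IN_CM = 3.086e18

def nh_from_ebv(ebv):
    return N_H_PER_EBV * ebv           # hydrogen column density, cm^-2

def luminosity(flux, d_pc):
    return 4.0 * np.pi * (d_pc * PC_IN_CM)**2 * flux   # L = 4 pi d^2 F

print("N_H ~ %.1e cm^-2" % nh_from_ebv(0.6))             # ~3.5e21
# assumed unabsorbed 1-10 keV flux of ~1.6e-11 erg cm^-2 s^-1 at 800 pc:
print("L_X ~ %.1e erg/s" % luminosity(1.6e-11, 800.0))   # ~1.2e33
```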
## 4 Conclusion
Based on the optical results we obtained for the stars within the error circle of 1WGA J1958.2+3232, we conclude that star B, a B0Ve spectral–type star, is the very likely optical counterpart of 1WGA J1958.2+3232. We compared the optical findings with recent ASCA X–ray data obtained for this source (Israel et al. 1999), with other similar X–ray sources (Haberl et al. 1998; Reig & Roche, 1999), and with the current knowledge on accreting white dwarfs. We conclude that 1WGA J1958.2+3232 is a new likely faint Be/X–ray pulsar, probably belonging to a new subclass of Be X–ray binary systems with long periods, persistent low-luminosity X–ray emission, and small flux variations. If the accreting white dwarf interpretation proves instead correct, then 1WGA J1958.2+3232 would be the first unambiguous example of a Be/white dwarf binary system.
###### Acknowledgements.
The authors thank R. Gualandi and S. Bernabei for their help during the observations. The authors also thank F. Haberl, whose comments helped to improve this paper. This work was partially supported through ASI grants.
# Saturating Cronin effect in ultrarelativistic proton-nucleus collisions
nucl-th/9903012, KSUCNR-102-99
## Abstract
Pion and photon production cross sections are analyzed in proton-proton and proton-nucleus collisions at energies 20 GeV $`\lesssim \sqrt{s}\lesssim `$ 60 GeV. We separate the proton-proton and nuclear contributions to transverse-momentum broadening and suggest a new mechanism for the nuclear enhancement in the high transverse-momentum region.
PACS number(s): 12.38.Bx, 13.85.Ni, 13.85.Qk, 24.85.+p, 25.75.-q
Experimental results on pion and photon production in high-energy hadron-nucleus collisions show an extra increase at high transverse momentum ($`p_T`$) over what would be expected based on a simple scaling of the appropriate proton-proton ($`pp`$) cross sections. The nuclear enhancement is referred to as the Cronin effect , and is most relevant at moderate transverse momenta (3 GeV $`\lesssim p_T\lesssim `$ 6 GeV) . In relativistic nuclear collisions this momentum region is at the upper edge of the $`p_T`$ window in Super Proton Synchrotron (SPS) experiments at $`\sqrt{s}=20`$ AGeV, and is measurable at the Relativistic Heavy Ion Collider (RHIC) at $`\sqrt{s}=200`$ AGeV. The importance of a better theoretical understanding of the Cronin effect continues to increase as new data appear from the SPS heavy-ion program and as the commissioning of RHIC approaches. This calls for a systematic study of particle production moving from $`pp`$ to proton-nucleus ($`pA`$) collisions. In the present work we analyze the Cronin effect in inclusive $`\pi ^0`$ and $`\gamma `$ production.
In the past two decades the perturbative QCD (pQCD) improved parton model has become the description of choice for hadronic collisions at large $`p_T`$. The pQCD treatment of hadronic collisions is based on the assumption that the composite structure of hadrons is revealed at high energies, and the parton constituents become the appropriate degrees of freedom for the description of the interaction at these energies. The partonic cross sections are calculable in pQCD at high energy to leading order (LO) or next-to-leading-order (NLO) . The parton distribution function (PDF) and the parton fragmentation function (FF), however, require the knowledge of non-perturbative QCD and are not calculable directly by present techniques. The PDFs and FFs, which are believed to be universal, are fitted to reproduce the data obtained in different reactions. In recent NLO calculations the various scales ($`Q`$,$`\mathrm{\Lambda }_{QCD}`$, etc.) are optimized to improve the agreement between data and theory .
In theoretical investigations of $`\pi ^0`$ and $`\gamma `$ production in $`pA`$ collisions another method appeared and became popular : the different scales of the pQCD calculations are fixed and the NLO pQCD theory is supplemented by an additional non-perturbative parameter, the intrinsic transverse momentum ($`k_T`$) of the partons. The presence of an intrinsic transverse momentum, as a Gaussian type broadening of the transverse momentum distribution of the initial state partons in colliding hadrons, was investigated as soon as pQCD calculations were applied to reproduce large-$`p_T`$ hadron production . The average intrinsic transverse momentum needed was small, $`k_T\sim 0.3`$–$`0.4`$ GeV, and could be easily understood in terms of the Heisenberg uncertainty relation for partons inside the proton. This simple physical interpretation was ruled out as the only source of intrinsic $`k_T`$ by the analysis of new experiments on direct photon production, where $`k_T\sim 1`$ GeV was obtained in the fixed-target Tevatron experiments and $`k_T\sim 4`$ GeV was found at the Tevatron collider for muon, photon and jet production . New theoretical efforts were ignited to understand the physical origin of $`k_T`$ . Parallel to these developments, $`k_T`$ smearing was applied successfully to describe ultrarelativistic nucleus-nucleus collisions and $`J/\psi `$ production at the Tevatron and HERA . One possible explanation of the enhanced $`k_T`$-broadening is in terms of multiple gluon radiation , which makes $`k_T`$ reaction and energy dependent. In the absence of a full theoretical description, intrinsic $`k_T`$ can be used phenomenologically in $`pp`$ collisions. A reasonable reproduction of the $`pp`$ data is a prerequisite for the isolation of the nuclear enhancement we intend to focus on.
In the lowest-order pQCD parton model, direct pion production can be described in $`pp`$ collisions by
$`E_\pi {\displaystyle \frac{d\sigma _\pi ^{pp}}{d^3p}}`$ $`=`$ $`{\displaystyle \underset{abcd}{\sum }}{\displaystyle \int 𝑑x_1𝑑x_2f_{a/p}(x_1,Q^2)f_{b/p}(x_2,Q^2)}`$ (2)
$`\times K{\displaystyle \frac{d\sigma }{d\widehat{t}}}(ab\rightarrow cd){\displaystyle \frac{D_{\pi /c}(z_c,\widehat{Q}^2)}{\pi z_c}},`$
where $`f_{a/p}(x,Q^2)`$ and $`f_{b/p}(x,Q^2)`$ are the PDFs for the colliding partons $`a`$ and $`b`$ in the interacting protons as functions of momentum fraction $`x`$ and momentum transfer $`Q`$, and $`\sigma `$ is the LO hard scattering cross section of the appropriate partonic subprocess. The K-factor accounts for higher order corrections . Comparing LO and NLO calculations one can obtain a constant value, $`K\approx 2`$, as a good approximation of the higher order contributions in the $`p_T`$ region of interest . In eq.(2) $`D_{\pi /c}(z_c,\widehat{Q}^2)`$ is the FF for the pion, with the scale $`\widehat{Q}=p_T/z_c`$, where $`z_c`$ indicates the momentum fraction of the final hadron. We use a NLO parameterization of the FFs . Direct $`\gamma `$ production is described similarly .
The generalization to incorporate the $`k_T`$ degree of freedom is straightforward . Each integral over the parton distribution functions is extended to $`k_T`$-space,
$$\int dx\,f_{a/p}(x,Q^2)\rightarrow \int dx\int d^2k_T\,g(\stackrel{}{k}_T)\,f_{a/p}(x,Q^2),$$
(3)
and, as an approximation, $`g(\stackrel{}{k}_T)`$ is taken to be a Gaussian: $`g(\stackrel{}{k}_T)=\mathrm{exp}(-k_T^2/\langle k_T^2\rangle )/(\pi \langle k_T^2\rangle )`$. Here $`\langle k_T^2\rangle `$ is the 2-dimensional width of the $`k_T`$ distribution and it is related to the average transverse momentum of one parton as $`\langle k_T^2\rangle =4\langle k_T\rangle ^2/\pi `$.
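As a quick numerical check of this relation, $`k_T`$ can be sampled from the two-dimensional Gaussian by inverse-transform sampling. The following minimal Python sketch (the width value is only an example) recovers both the width and $`\langle k_T^2\rangle =4\langle k_T\rangle ^2/\pi `$:

```python
# Inverse-transform sampling of the 2D Gaussian intrinsic-kT distribution:
# p(k_T) dk_T ~ (2 k_T / w) exp(-k_T^2 / w) dk_T with w = <k_T^2>,
# so k_T = sqrt(-w ln u) for u uniform in (0, 1].
import numpy as np

w = 3.0                                    # GeV^2, example width <k_T^2>
rng = np.random.default_rng(0)
u = 1.0 - rng.random(1_000_000)            # uniform in (0, 1], avoids log(0)
kT = np.sqrt(-w * np.log(u))

print((kT**2).mean())                      # ~3.0: the width w is recovered
print(4.0 * kT.mean()**2 / np.pi)          # ~3.0: <k_T^2> = 4 <k_T>^2 / pi
```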
We applied this model to describe the measured data in $`pp\rightarrow \pi ^0X`$ reactions . If $`\pi ^+`$ and $`\pi ^{-}`$ production was measured, we constructed the combination $`\pi ^0=(\pi ^++\pi ^{-})/2`$, given by the FFs we use. The calculations were corrected for the finite rapidity windows of the data. The Monte-Carlo integrals were carried out by the standard VEGAS-routine . For the PDFs we used the MRST98 set, which incorporates an intrinsic $`k_T`$. The scales are fixed, $`\mathrm{\Lambda }_{\overline{MS}}(n_f=4)=300`$ MeV and $`Q=p_T/2`$. We fitted the data minimizing $`\mathrm{\Delta }^2=(Data-Theory)^2/Theory^2`$ in the midpoints of the data. Fig.1. shows the obtained fit values for $`\langle k_T^2\rangle `$. The error bars display a $`\mathrm{\Delta }^2=\mathrm{\Delta }_{min}^2\pm 0.1`$ uncertainty in the fit procedure. The uncertainty is small at $`\sqrt{s}=20`$–$`30`$ GeV and relatively large at $`\sqrt{s}\gtrsim 60`$ GeV, indicating that $`\langle k_T^2\rangle `$ is more sharply determined at lower energies. We use a value of $`\langle k_T^2\rangle =3`$ GeV<sup>2</sup> for $`\pi ^0`$ at $`\sqrt{s}=27.4`$ GeV. The value of $`\langle k_T^2\rangle `$ appears to increase with energy. The dashed line serves to guide the eye and indicates a linear increase to a value of $`\langle k_T\rangle =3.5`$ GeV at $`\sqrt{s}=1800`$ GeV .
Furthermore, we analyzed the data from $`pp\rightarrow \gamma X`$ reactions . These results are also shown in Fig.1. In this case much lower values are obtained for $`\langle k_T^2\rangle `$, with large uncertainty. The dotted line represents the results obtained in Ref. from diphoton and dimuon data. In the following we use $`\langle k_T^2\rangle =`$1.2 and 1.5 GeV<sup>2</sup> at $`\sqrt{s}=31.6`$ and $`38.8`$ GeV, respectively.
Fig.2. displays the ratio of data to our calculations as a function of $`x_T=2p_T/\sqrt{s}`$ for $`pp\rightarrow \pi ^0X`$, applying the obtained best fit values for $`\langle k_T^2\rangle `$ in the different experiments. For $`pp\rightarrow \gamma X`$ data we achieve a similarly good agreement. Further details of the comparisons with data and with other calculations will be published elsewhere .
With $`\pi ^0`$ and $`\gamma `$ production in $`pp`$ collisions reasonably under control, we turn to the nuclear enhancement in $`pA`$ collisions. In minimum-bias $`pA`$ collisions the pQCD description of the inclusive pion cross section is based on
$`E_\pi {\displaystyle \frac{d\sigma _\pi ^{pA}}{d^3p}}={\displaystyle \underset{abcd}{\sum }}{\displaystyle \int d^2b\,t_A(b)\int 𝑑x_1𝑑x_2\int d^2k_{T,a}d^2k_{T,b}\,g(\stackrel{}{k}_{T,a})}`$ (5)
$`g(\stackrel{}{k}_{T,b})f_{a/p}(x_1,Q^2)f_{b/p}(x_2,Q^2)K{\displaystyle \frac{d\sigma }{d\widehat{t}}}{\displaystyle \frac{D_{\pi /c}(z_c,\widehat{Q}^2)}{\pi z_c}}.`$
Here $`b`$ is the impact parameter and $`t_A(b)`$ is the nuclear thickness function normalized as $`\int d^2b\,t_A(b)=A`$. For simplicity, we use a sharp sphere nucleus with $`t_A(b)=2\rho _0\sqrt{R_A^2-b^2}`$, where $`R_A=1.14A^{1/3}`$ fm and $`\rho _0=0.16`$ fm<sup>-3</sup>. Next we discuss the nuclear enhancement of $`\langle k_T^2\rangle `$.
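The normalization of the sharp-sphere thickness function can be verified directly, since $`\int d^2b\,t_A(b)=(4\pi /3)\rho _0R_A^3\approx A`$ with the quoted $`R_A`$ and $`\rho _0`$. A minimal Python sketch (with scipy; the mass numbers are illustrative):

```python
# Check that the sharp-sphere thickness function integrates to A:
# int d^2b t_A(b) = (4*pi/3) * rho_0 * R_A^3 ~ A for R_A = 1.14 A^(1/3) fm.
import numpy as np
from scipy.integrate import quad

rho0 = 0.16                              # fm^-3
for A in (9, 48, 184):                   # Be, Ti, W (illustrative targets)
    R_A = 1.14 * A**(1.0 / 3.0)          # fm
    val, _ = quad(lambda b: 2.0 * np.pi * b * 2.0 * rho0
                  * np.sqrt(R_A**2 - b**2), 0.0, R_A)
    print(A, round(val, 1))              # ~0.99*A: normalization holds
```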
The standard physical explanation of the Cronin effect is that the proton traveling through the nucleus gains extra transverse momentum due to random soft collisions, and the partons enter the final hard process with this extra $`k_T`$. In our approximation initial soft processes increase the value of $`\langle k_T^2\rangle `$, but this effect does not depend on the scale, $`Q^2`$, of the hard process that occurs later. However, it may depend on the initial center-of-mass energy, $`\sqrt{s}`$. Furthermore, it is important to note that in our description not all participant protons are automatically endowed with the extra $`\langle k_T^2\rangle `$ enhancement; only the parton distribution of the colliding protons is affected, according to the number of soft collisions suffered. To characterize the $`\langle k_T^2\rangle `$ enhancement, we write the width of the transverse momentum distribution of the partons in the incoming proton as
$$\langle k_T^2\rangle _{pA}=\langle k_T^2\rangle _{pp}+C\,h_{pA}(b).$$
(6)
Here $`h_{pA}(b)`$ describes the number of effective nucleon-nucleon (NN) collisions at impact parameter $`b`$ which impart an average transverse momentum squared $`C`$.
Naively all possible soft interactions are included, $`h_{pA}^{all}(b)=\nu _A(b)-1`$, where $`\nu _A(b)=\sigma _{NN}t_A(b)`$ is the collision number at impact parameter $`b`$, with $`\sigma _{NN}`$ being the total inelastic cross section. Applying this model to $`\pi ^0`$ production in $`pA`$ ($`A=Be,Ti,W`$) collisions at $`\sqrt{s}=27.4`$ GeV , we extract $`C_{pBe}^{all}=`$ $`0.8\pm 0.2`$ GeV<sup>2</sup>, $`C_{pTi}^{all}=0.4\pm 0.2`$ GeV<sup>2</sup>, and $`C_{pW}^{all}=0.3\pm 0.2`$ GeV<sup>2</sup>. The target dependence of the extra enhancement in $`\langle k_T^2\rangle `$ is inconsistent with the assumption of a target-independent average transverse momentum transfer per NN collision and/or with the form of $`h_{pA}^{all}(b)`$.
Inspired by the $`A`$-dependence, we propose another physical picture of the nuclear enhancement effect. According to this mechanism, the incoming nucleon first participates in a semi-hard ($`Q^2\sim 1`$ GeV<sup>2</sup>) collision resulting in an increase of the width of its $`k_T`$ distribution. There is at most one collision able to impart this increased value of $`\langle k_T^2\rangle `$, characteristic of the no-longer coherent nucleon. Further soft or semi-hard collisions do not affect the $`k_T`$ distribution of the partons in the incoming nucleon. This saturation effect can be approximated well by a smoothed step function $`h_{pA}^{sat}(b)`$ defined as
$`h_{pA}^{sat}(b)=\{\begin{array}{cc}0\hfill & \text{ if }\nu _A(b)<1\text{ }\hfill \\ \nu _A(b)-1\hfill & \text{ if }1\le \nu _A(b)<2\text{ }.\hfill \\ 1\hfill & \text{ if }2\le \nu _A(b)\text{ }\hfill \end{array}`$
The saturated Cronin factor is denoted by $`C^{sat}`$.
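A minimal sketch contrasting the two prescriptions $`h_{pA}^{all}`$ and $`h_{pA}^{sat}`$ may be helpful; the inelastic cross section $`\sigma _{NN}=32`$ mb used below is an assumed round number, not a value taken from the text:

```python
# The two enhancement prescriptions compared in the text: h_all counts all
# soft collisions, h_sat saturates after one semi-hard collision.
def nu_A(b, A, sigma_NN=3.2, rho0=0.16):
    """nu_A(b) = sigma_NN * t_A(b); sigma_NN = 3.2 fm^2 (32 mb) is an
    assumed inelastic NN cross section; b and R_A are in fm."""
    R_A = 1.14 * A**(1.0 / 3.0)
    t_A = 2.0 * rho0 * (R_A**2 - b**2)**0.5 if b < R_A else 0.0
    return sigma_NN * t_A

def h_all(nu):
    return max(nu - 1.0, 0.0)            # nu_A(b) - 1, clipped at zero

def h_sat(nu):
    return 0.0 if nu < 1.0 else min(nu - 1.0, 1.0)  # smoothed step

for b in (0.0, 3.0, 6.0):                # fm, tungsten target (A = 184)
    nu = nu_A(b, A=184)
    print(b, round(nu, 2), round(h_all(nu), 2), h_sat(nu))
```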
Using the same $`pA`$ data as previously , $`C^{sat}=1.2`$ GeV<sup>2</sup> gives a good fit for all three targets, $`Be`$, $`Ti`$ and $`W`$. Fig.3. displays our results with (full line) and without (dashed line) the Cronin enhancement. The lower panel shows the data/theory ratio on a linear scale for the $`pW`$ case. It is interesting to note that with the saturated Cronin effect the remaining deviation from one in the data/theory ratio is similar in shape to the $`p_T`$-dependent K factor obtained in Ref. . The $`pBe\rightarrow \pi ^0X`$ data at energies $`\sqrt{s}=31.6`$ GeV and 38.8 GeV are also described quite well with the same saturating Cronin effect .
Let us now discuss photon production in the same energy region. We speculate that the nuclear enhancement does not depend on the outgoing particle, and thus use the same $`C^{sat}`$ for $`\gamma `$ as for $`\pi ^0`$ production. Fig.4. shows the $`pBe\rightarrow \gamma X`$ reaction at energies $`\sqrt{s}=31.6`$ GeV and $`\sqrt{s}=38.8`$ GeV. The data are from Ref. . The calculations are carried out with $`C^{sat}=1.2`$ GeV<sup>2</sup> (full lines). For comparison we show the results without nuclear enhancement, $`C^{sat}=`$0 (dashed lines). In the lower panel the data/theory ratio is displayed for $`\sqrt{s}=31.6`$ GeV.
The common value of $`C^{sat}`$ in the studied energy range indicates that the extra enhancement in $`pA`$ collisions is independent of the produced final state particle. We interpret this $`C^{sat}`$ as the square of the characteristic transverse momentum imparted in one semi-hard collision prior to the hard scattering. Independent of the details of the mechanism, the extra nuclear enhancement in eq. (6) appears to have the same total value, which is on the scale of the intuitive dividing line between hard and soft physics, $`C\,h_{pA}(0)\approx 1`$ GeV<sup>2</sup>. For RHIC predictions it would be necessary to know the energy dependence of $`C^{sat}`$, and whether the same nuclear enhancement is obtained at much higher energies.
In the present letter we separated the $`pp`$ and nuclear contributions to the width of the parton transverse momentum distribution, and proposed a saturating model of the Cronin effect in proton-nucleus collisions. According to our picture, the incoming proton suffers at most one semi-hard scattering prior to the hard parton scattering. In the semi-hard collision the width of the transverse momentum distribution of the partons inside the incoming proton increases. This prescription describes the $`p_T`$ dependence of the nuclear enhancement in $`\pi ^0`$ and $`\gamma `$ production remarkably well. We are confident that complete NLO calculations, which are left for future work, will improve the agreement further. Systematic $`pA`$ experiments are needed to determine the energy dependence of $`C^{sat}(s)`$ and to extrapolate to $`AA`$ collisions at RHIC.
We thank X.N. Wang and C.Y. Wong for stimulating discussions and M. Begel and the E706 Collaboration for providing their data. We are grateful to G. David for his continued interest in the project. Work supported in part by DOE grant No. DE-FG02-86ER40251, by US/Hungarian Science and Technology Joint Fund No.652/1998, and OTKA Grant No. F019689.
# Sedimentation of strongly and weakly charged colloidal particles: Prediction of fractional density dependence
## 1 Introduction
The sedimentation velocity $`U`$ of interacting colloidal particles depends both on the indirect hydrodynamic interactions (HI) mediated by the suspending solvent, and on the microstructure of the suspension. In equilibrium, the latter is determined by direct potential forces arising for example from the steric repulsion between the particles and from the electrostatic repulsion of overlapping double layers. Different pair potentials $`u(r)`$ lead to rather different microstructures, and this in turn strongly affects the sedimentation. It is well established by theories and experiments that the sedimentation velocity of a dilute suspension of monodisperse hard spheres is given by
$$\frac{U}{U_0}=1-6.55\varphi +𝒪(\varphi ^2),$$
(1)
where $`\varphi `$ is the particle volume fraction, and $`U_0`$ is the sedimentation velocity at infinite dilution . On the other hand, the long-ranged electrostatic repulsion occurring in suspensions of charged particles can give rise to a reduction in $`U`$, as compared to a hard sphere dispersion at the same volume fraction . This decrease in $`U`$ is mainly due to the cumulative backflow of displaced fluid, which becomes particularly effective because the probability of two or more charged particles coming close to each other is very small. Conversely, hard spheres are effectively attracted to each other at small interparticle distances $`r\lesssim 3a`$, where $`a`$ is the radius of the spheres. This can be readily seen from the potential of mean force $`w(r)=-k_BT\mathrm{ln}g(r)`$, where $`g(r)`$ is the radial distribution function of hard spheres, which shows its maximum value at contact distance $`r=2a`$ due to excluded volume effects. Closely spaced particles are mutually exposed to the downflow of nearby fluid, dragged along with the sedimenting particles. Consequently, the retardation from backflow is reduced, whereas the influence of near-field HI is enhanced for hard sphere suspensions.
While the reduction in $`U`$ due to long-ranged electrostatic forces has been known for many years, it was not realized until recently that, in particular for deionized charge-stabilized suspensions, $`U`$ can be significantly smaller than $`U_0`$ even for volume fractions as small as $`\varphi \sim 10^{-4}`$. In fact, in the past the effects of HI have been frequently considered to be negligible for dilute suspensions of charged particles . However, recent calculations have clearly demonstrated for these systems that HI is of importance at small $`\varphi `$ not only for sedimentation , but also for short-time and long-time collective diffusion, and for long-time self-diffusion . Moreover, it was shown theoretically for dilute suspensions of charged colloids without added electrolyte that $`U`$ follows a non-linear $`\varphi `$-dependence of the parametric form
$$\frac{U}{U_0}=1-p\varphi ^{\frac{1}{3}}.$$
(2)
The numerically calculated coefficient $`p\approx 1.8`$ was found to be nearly independent of the (effective) particle charge $`Z`$, provided that $`Z`$ is kept large enough to completely mask the hard core of the particles. Experimental results of sedimentation experiments on charged colloids agree favorably with the scaling-prediction of eq. (2) . Similar non-linear volume fraction dependencies are found for the short-time translational and rotational diffusion coefficients of charged particles .
The same $`\varphi `$-dependence as found for salt-free fluid suspensions of charged colloids is known to be valid for the sedimentation velocity of dilute ordered arrays of fixed spheres . For such arrays, the coefficient $`p`$ is determined analytically as $`p=1.76`$ for a $`sc`$ lattice, and as $`p=1.79`$ for a $`fcc`$ or a $`bcc`$ lattice . As discussed in detail in Refs. , the main peak position $`r_m`$ of the radial distribution function $`g(r)`$ for highly charged particles in salt-free suspension scales as $`r_m\propto \varphi ^{-1/3}`$, which is also typical of a crystalline solid. This, in fact, turns out to be relevant for eq. (2) to be valid for strongly repelling particles like charged colloids and for ordered arrays of fixed spheres .
In the present article, we analyse the sedimentation velocity of charged colloidal dispersions at low salinity both as a function of $`\varphi `$ and of $`Z`$. Our numerical results for $`U/U_0`$, with system parameters representing monodisperse modified PMMA particles investigated very recently in sedimentation experiments , are well described by eq. (2) if the effective macroion charge is chosen such that $`Z>100`$. We physically explain the scaling relation eq. (2) in terms of an effective hard sphere (EHS) model by using Wertheim's analytical expression for the static structure function of hard sphere dispersions obtained in Percus-Yevick approximation. In this specific form of the EHS model, the numerically determined value $`p\approx 1.8`$ is also recovered very accurately. We will further demonstrate for weakly charged spheres that $`U/U_0-1`$ displays a volume fraction dependence proportional to $`\varphi ^{1/2}`$, provided the suspension is completely deionized and $`\varphi `$ is chosen small enough that the radial distribution function of the macroions can be approximated by its zero-density limit. This result, however, turns out to be valid only for extremely small values of $`\varphi `$ and $`Z`$, which we believe are not accessible in sedimentation experiments.
## 2 Theory of sedimentation in charged colloids
We start this section by summarizing the theoretical method used to calculate the reduced sedimentation velocity $`U/U_0`$ of charged colloidal spheres. Our results for $`U/U_0`$ are based on the effective macroion fluid model of charge-stabilized suspensions . In this model, the effective pair potential $`u(r)`$ between two charged colloidal particles consists of a hard-core part with radius $`a`$, and of a longer-ranged screened Coulomb potential $`u_{el}(r)`$ for $`r>2a`$, i.e.
$$\beta u_{el}(r)=2Ka\frac{e^{-\kappa (r-2a)}}{r}.$$
(3)
Here, $`K`$ is a dimensionless coupling parameter given by
$$K=\frac{L_B}{2a}\left(\frac{Z}{1+\kappa a}\right)^2,$$
(4)
where $`L_B=\beta e^2/ϵ`$ is the so-called Bjerrum length, and $`\beta =(k_BT)^{-1}`$ is the inverse thermal energy. The suspending fluid is treated as a continuum without internal structure, only characterized by its dielectric constant $`ϵ`$. Moreover, $`Z`$ is the effective charge of a colloidal particle in units of the elementary charge $`e`$. The screening parameter $`\kappa `$ is given by
$$\kappa ^2=4\pi L_B\left[n|Z|+2n_s\right]=\kappa _c^2+\kappa _s^2,$$
(5)
where $`n_s`$ is the number density of added 1-1-electrolyte, and $`n=3\varphi /(4\pi a^3)`$ is the number density of colloidal particles. Notice that $`\kappa `$ comprises a contribution $`\kappa _c`$ due to counterions, which are assumed to be monovalent, and a second contribution $`\kappa _s`$ arising from added electrolyte. Eq. (3) is a good approximation for $`u(r)`$ even for strongly charged colloids and for values of $`\kappa a`$ significantly larger than one, provided the effective charge number $`Z`$ is regarded as an adjustable parameter .
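As an aside, the screened Coulomb part of eqs. (3)–(5) is straightforward to evaluate numerically. The following minimal Python sketch computes $`\beta u_{el}(r)`$ for a deionized suspension; the parameter values anticipate those of Sec. 4 and are examples only:

```python
# Minimal evaluation of the screened Coulomb pair potential, eqs. (3)-(5),
# for a deionized suspension (n_s = 0); lengths in nm, energies in k_B T.
import numpy as np

def beta_u_el(r, a, Z, L_B, phi, n_s=0.0):
    n = 3.0 * phi / (4.0 * np.pi * a**3)                 # macroion density
    kappa = np.sqrt(4.0 * np.pi * L_B * (n * abs(Z) + 2.0 * n_s))
    K = (L_B / (2.0 * a)) * (Z / (1.0 + kappa * a))**2   # coupling, eq. (4)
    return 2.0 * K * a * np.exp(-kappa * (r - 2.0 * a)) / r

# contact value beta*u_el(2a) = K for a weakly charged sphere
print(beta_u_el(r=2 * 695.0, a=695.0, Z=5, L_B=26.12, phi=0.01))  # ~0.41
```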
On the time scales probed by dynamic light scattering and by sedimentation experiments, the effect of HI on the translational motion of the colloidal particles is embodied in the hydrodynamic diffusivity tensors $`𝑫_{ij}^{tt}(𝑹^N)`$; $`i,j=1,\mathrm{\dots },N`$ . The short-hand notation $`𝑹^N=(𝑹_1,\mathrm{\dots },𝑹_N)`$ denotes the configuration of the $`N`$ spherical particles. Without HI, $`𝑫_{ij}^{tt}(𝑹^N)=\delta _{ij}D_0\mathrm{𝟏}`$, where $`D_0=k_BT/(6\pi \eta a)`$ is the Stokesian diffusion coefficient of a particle with radius $`a`$ in a solvent with viscosity $`\eta `$, and $`\mathrm{𝟏}`$ denotes the unit tensor.
The reduced short-time sedimentation velocity $`U/U_0`$ of a macroscopically homogeneous suspension of monodisperse colloidal spheres is then given by the zero-wavenumber limit
$$\frac{U}{U_0}=\underset{q\rightarrow 0}{lim}H(q)$$
(6)
of the so-called hydrodynamic function $`H(q)`$ . This function is defined as
$$H(q)=\frac{1}{ND_0}\left\langle \underset{l,j=1}{\overset{N}{\sum }}\widehat{𝒒}𝑫_{lj}^{tt}(𝑹^N)\widehat{𝒒}\,e^{i𝒒(𝑹_l-𝑹_j)}\right\rangle $$
(7)
with wavevector $`𝒒`$ of magnitude $`q`$ and corresponding unit vector $`\widehat{𝒒}=𝒒/q`$. The brackets indicate an equilibrium ensemble average. $`H(q)`$ can be regarded as a generalized (short-time) sedimentation coefficient of particles exposed to spatially periodic external forces aligned with $`\widehat{𝒒}`$, and derived from a weak potential proportional to $`\mathrm{exp}\left[i𝒒\cdot 𝒓\right]`$ .
In principle, one needs to distinguish the short-time sedimentation velocity $`U`$, defined through eq. (6), from the long-time sedimentation velocity, which is determined in conventional sedimentation experiments. In dilute suspensions, however, when the HI are well described as a sum of pairwise additive interactions, both quantities are identical, since then sedimentation does not perturb the equilibrium microstructure . At larger volume fractions, the microstructure becomes distorted from its equilibrium form, since $`n`$-body HI with $`n\ge 3`$ becomes important. This distortion leads to an additional change in the sedimentation velocity. Nevertheless, simulations of hard sphere suspensions show that the differences between the short-time and the long-time sedimentation velocities due to memory effects are rather small .
The assumption of pairwise additive HI is justified for the important case of dilute (typically $`\varphi \lesssim 0.1`$) charge-stabilized suspensions at sufficiently low ionic strength, as considered here. For such systems, the particles are kept apart from each other due to their strong electrostatic repulsion. Then the $`N`$-body diffusivity tensors $`𝑫_{ij}^{tt}(𝑹^N)`$ are well approximated by the two-body tensors
$$𝑫_{ij}^{tt(2)}(𝑹^N)=\delta _{ij}\left[\mathrm{𝟏}+\underset{l=1}{\overset{N}{\sum ^{}}}𝑨(𝑹_i-𝑹_l)\right]+(1-\delta _{ij})𝑩(𝑹_i-𝑹_j),$$
(8)
where the term $`l=i`$ is excluded from the sum. This approximation has been verified in the case of translational and rotational self-diffusion by considering also the leading three-body contribution to HI , and in case of the sedimentation velocity by comparison with a more elaborate method, known as the lowest order form of the $`\delta \gamma `$-expansion . The two-body translational mobility tensors $`𝑨(𝒓)`$ and $`𝑩(𝒓)`$ are calculated by means of series expansions in powers of $`(a/r)`$ . We only quote the leading terms for further reference
$`𝑨(𝒓)`$ $`=`$ $`-{\displaystyle \frac{15}{4}}\left({\displaystyle \frac{a}{r}}\right)^4\widehat{𝒓}\widehat{𝒓}+𝒪(r^{-6})`$ (9)
$`𝑩(𝒓)`$ $`=`$ $`{\displaystyle \frac{3}{4}}\left({\displaystyle \frac{a}{r}}\right)\left[\mathrm{𝟏}+\widehat{𝒓}\widehat{𝒓}\right]+{\displaystyle \frac{1}{2}}\left({\displaystyle \frac{a}{r}}\right)^3\left[\mathrm{𝟏}-3\widehat{𝒓}\widehat{𝒓}\right]+𝒪(r^{-7}),`$ (10)
with $`\widehat{𝒓}=𝒓/r`$. Using eqs. (7,8), $`H(q)`$ is expressed in terms of an integral
$$H(q)=1+n\int 𝑑𝒓\,g(r)\left[\widehat{𝒒}𝑨(𝒓)\widehat{𝒒}+\widehat{𝒒}𝑩(𝒓)\widehat{𝒒}\mathrm{cos}(𝒒\cdot 𝒓)\right],$$
(11)
involving these mobility tensors together with the radial distribution function $`g(r)`$. In this work, we use eqs. (6,11) together with the series expansions of $`𝑨(𝒓)`$ and $`𝑩(𝒓)`$ for calculating $`U/U_0`$, by including contributions up to order $`(a/r)^{20}`$.
When terms only up to $`𝒪(r^{-4})`$ in the series expansions of $`𝑨(𝒓)`$ and $`𝑩(𝒓)`$ are employed, $`U/U_0`$ is given explicitly as
$$\frac{U}{U_0}=1-\varphi \left[5+3\int _2^{\mathrm{\infty }}𝑑x\,x(1-g(x))+\frac{15}{4}\int _2^{\mathrm{\infty }}𝑑x\,\frac{g(x)}{x^2}\right],$$
(12)
with $`x=r/a`$.
For charge-stabilized suspensions, it is not necessary to account for many terms in the expansion of the two-body mobility tensors, since the integrals in eq. (11) converge rapidly because $`g(r)`$ is essentially zero at small interparticle distances. In contrast, many terms are needed for hard spheres. For example, using the zero-density form $`g_0(r)=\mathrm{\Theta }(r-2a)`$ for the radial distribution function of hard spheres, we obtain from eq. (12) that $`U/U_0=1-p\varphi `$ with $`p=6.87`$. Here, $`\mathrm{\Theta }(x)`$ is the unit step function. By including terms only up to $`𝒪(r^{-3})`$, the result is $`p=5.0`$, as can be seen from eq. (12) when the last term on the right hand side, arising from the term proportional to $`(a/r)^4`$ in eq. (9), is omitted. On the other hand, if terms up to $`𝒪(r^{-20})`$ are considered, the result for $`p`$ is improved to $`p=6.54`$, which is close to the exact value $`p=6.55`$, first obtained by Batchelor using tabulated numerical results for the near-field HI .
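The hard-sphere checks just quoted can be reproduced directly from eq. (12). A minimal numerical sketch (Python with scipy; the radial distribution function is supplied as a callable) recovers the bracket value $`6.875`$ for $`g_0(x)=\mathrm{\Theta }(x-2)`$:

```python
# Evaluating the bracket of eq. (12) for a given g(x), x = r/a; with the
# zero-density hard-sphere input g_0(x) = Theta(x - 2) it returns
# 5 + 0 + (15/4)*(1/2) = 6.875, i.e. the p = 6.87 quoted above.
import numpy as np
from scipy.integrate import quad

def p_coefficient(g):
    backflow, _ = quad(lambda x: x * (1.0 - g(x)), 2.0, np.inf)
    near_field, _ = quad(lambda x: g(x) / x**2, 2.0, np.inf)
    return 5.0 + 3.0 * backflow + (15.0 / 4.0) * near_field

print(p_coefficient(lambda x: 1.0))      # 6.875 for g_0(x) = Theta(x - 2)
```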
To obtain $`U/U_0`$ for dilute charge-stabilized suspensions, we calculate $`g(r)`$ in the effective macroion fluid model by using, for simplicity, the well-established rescaled mean spherical approximation (RMSA). On the basis of the pairwise additivity approximation of the HI combined with the $`(a/r)`$-expansion of the mobility tensors, henceforth referred to as PA-scheme, $`U/U_0`$ can then be calculated as a function of $`\varphi `$.
## 3 Fractional density dependence of $`U/U_0`$
### 3.1 Strongly charged particles
We show in the following that the exponent $`1/3`$ in eq. (2) and the charge-independence of the parameter $`p`$, both found from numerical calculations, can be understood quantitatively in terms of a model of effective hard spheres (EHS), which can be treated analytically.
The EHS model accounts for the most important feature of the radial distribution function $`g(r)`$ of highly charged particles, namely the so-called correlation hole. For an illustration of that, consider fig. 1, which shows a typical $`g(r)`$ for a salt-free suspension of strongly charged particles at $`\varphi =0.08`$. Due to the strong electrostatic repulsion between the particles, $`g(r)`$ has a well developed first maximum, and a spherical region with nearly zero probability of finding another particle. This region is referred to as the correlation hole. For small $`\varphi `$, the correlation hole usually extends over several particle diameters . Therefore, we can approximate the actual $`g(r)`$ of the charge-stabilized system by the radial distribution function of an effective hard sphere (EHS) system with an effective radius $`a_{EHS}>a`$ and an effective volume fraction $`\varphi _{EHS}=\varphi (a_{EHS}/a)^3`$. The EHS-radius $`a_{EHS}`$ accounts for the electrostatic repulsion between the particles and can be identified as $`a_{EHS}=r_m/2`$, where $`r_m`$ is the principal peak position of the actual $`g(r)`$.
Since the extension of the correlation hole is substantially larger than $`a`$ for small volume fractions and large particle charges, it is then a good approximation to consider only the leading Oseen-term in $`𝑩(𝒓)`$ (cf. eqs. (9,10,11)) in calculating the sedimentation velocity. $`U/U_0`$ is then well approximated by
$$\frac{U}{U_0}=1+\frac{3\varphi }{a^2}\int _0^{\mathrm{\infty }}𝑑r\,r\,h(r)=1+\frac{3\varphi }{a^2}\stackrel{~}{H}(s=0).$$
(13)
Here, $`h(r)=g(r)-1`$ is the total correlation function and
$$\stackrel{~}{H}(s)=\int _0^{\mathrm{\infty }}𝑑r\,r\,h(r)\,e^{-sr}$$
(14)
denotes the Laplace transform of $`rh(r)`$. Next, we approximate $`h(r)`$ by the total correlation function $`h_{EHS}(r;\varphi _{EHS})`$ of the EHS model, evaluated at the effective volume fraction $`\varphi _{EHS}`$. With this approximation for $`h(r)`$ used in eq. (13), we readily obtain the result $`U/U_0=1-p\varphi ^{1/3}`$ with a fractional exponent $`1/3`$, and the parameter $`p`$ given by
$$p=-3\varphi _{EHS}^{2/3}\stackrel{~}{H}_{EHS}(z=0),$$
(15)
where $`\stackrel{~}{H}_{EHS}(z)`$ is the Laplace transform of $`yh_{EHS}(y;\varphi _{EHS})`$ with $`y=r/a_{EHS}`$ and $`z=sa_{EHS}`$.
We notice that $`\varphi _{EHS}`$ and hence $`p`$ are indeed independent of $`\varphi `$ and $`Z`$ ($`\gtrsim 100`$), provided that $`a_{EHS}`$ is identified with $`r_m/2`$. This follows from the fact that, for deionized suspensions of strongly charged particles where $`\kappa _c\gg \kappa _s`$ holds, $`r_m`$ coincides within $`3\%`$ with the average geometrical distance $`\overline{r}=a(3\varphi /(4\pi ))^{-1/3}`$ of two particles. For an illustration of this fact consider the inset of fig. 1, which shows RMSA results for $`r_m/\overline{r}`$ as a function of $`\varphi `$. Thus, the scaling relation $`r_m\approx \overline{r}\propto \varphi ^{-1/3}`$ holds, and this gives rise to the exponent $`1/3`$ in eq. (2): By identifying $`a_{EHS}`$ with $`\overline{r}/2`$, we obtain an effective volume fraction $`\varphi _{EHS}=\pi /6`$ independent of $`Z`$ and $`\varphi `$.
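Since this identification is purely geometric, it can be checked in a few lines. The following minimal Python sketch (assuming only the definitions $`\varphi _{EHS}=\varphi (a_{EHS}/a)^3`$ and $`a_{EHS}=\overline{r}/2`$ used above) shows that $`\varphi _{EHS}=\pi /6`$ for every $`\varphi `$:

```python
# Geometric check of the EHS mapping: with a_EHS = r_bar/2 and
# r_bar = a * (3*phi/(4*pi))**(-1/3), one finds
# phi_EHS = phi * (a_EHS/a)**3 = pi/6 independently of phi (and hence of Z).
import math

for phi in (1e-4, 1e-3, 1e-2, 8e-2):
    r_bar_over_a = (3.0 * phi / (4.0 * math.pi))**(-1.0 / 3.0)
    phi_EHS = phi * (0.5 * r_bar_over_a)**3
    print(phi, phi_EHS)                  # always pi/6 = 0.5235987...
```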
To obtain a numerical value of $`p`$ according to eq. (15), we now take advantage of an analytic expression for $`\stackrel{~}{H}_{EHS}(z)`$ given in the Percus-Yevick (PY) approximation . By taking the zero-$`z`$ limit, we obtain in PY approximation after a straightforward calculation the intermediate result
$$\stackrel{~}{H}_{EHS}(z=0)=-\frac{10-2\varphi _{EHS}+\varphi _{EHS}^2}{5(1+2\varphi _{EHS})}.$$
(16)
Substitution of eq. (16) into eq. (15) gives
$$p=\frac{3}{5}\varphi _{EHS}^{2/3}\frac{10-2\varphi _{EHS}+\varphi _{EHS}^2}{1+2\varphi _{EHS}}.$$
(17)
Since $`\varphi _{EHS}=\pi /6`$, we obtain from eq. (17) a value $`p=1.76`$ remarkably close to $`p=1.80`$ as determined from the parametric fit of our numerical PA-result for $`U/U_0`$ (cf. following section).
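For reference, a minimal numerical evaluation of eq. (17), under the same assumptions as in the text, reproduces this value:

```python
# Percus-Yevick value of p from eq. (17) at phi_EHS = pi/6.
import math

def p_PY(phi_ehs):
    return (3.0 / 5.0) * phi_ehs**(2.0 / 3.0) \
        * (10.0 - 2.0 * phi_ehs + phi_ehs**2) / (1.0 + 2.0 * phi_ehs)

print(p_PY(math.pi / 6.0))               # ~1.76, close to the fitted 1.80
```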
For an illustration of the replacement of the actual $`g(r)`$ by the EHS-$`g(r)`$ within the EHS model, see again fig. 1, which shows, besides a typical RMSA-$`g(r)`$ of charge-stabilized particles, the corresponding EHS-$`g(r)`$ obtained in PY approximation for $`\varphi _{EHS}=\pi /6`$.
We finish this section with two remarks. First, we would like to stress again the close connection between the relation eq. (2), found for highly charged colloidal suspensions, and the corresponding result for ordered arrays of fixed spheres. For both systems, the $`\varphi ^{1/3}`$-scaling behavior of $`U/U_0`$ is caused by a strong structural correlation of the particles, resulting in the scaling relation $`r_m\propto \varphi ^{-1/3}`$ . Although Saffman already mentioned the possibility of finding a scaling behavior like eq. (2) for highly correlated fluid suspensions , the numerical method described here provides the first quantitative results on the sedimentation velocity of charged suspensions, leading to the predicted $`\varphi ^{1/3}`$-scaling in eq. (2) together with the calculation of the prefactor $`p`$. It is further important to notice that the scaling relation $`r_m\propto \varphi ^{-1/3}`$ is not valid if significantly large amounts of excess electrolyte are added to the suspension, since then the particle diameter becomes another physically relevant length scale besides the mean particle distance $`\overline{r}`$. The $`\varphi `$-dependence of $`U/U_0-1`$ at small $`\varphi `$ then changes gradually with increasing $`n_s`$ from a $`\varphi ^{1/3}`$-dependence to the linear $`\varphi `$-dependence of eq. (1) characteristic for hard spheres .
### 3.2 Weakly charged particles
It is also of interest to consider the opposite limiting case of dilute suspensions of weakly charged spheres in the framework of the effective macroion fluid model. This case was addressed first by Petsev and Denkov . Suppose $`Z`$ is so small that
$$\beta u_{el}(r=2a)=K\ll 1,$$
(18)
i.e. the electrostatic repulsion can be treated as a small perturbation of the hard-core repulsion between the particles. For very small $`\varphi `$, $`g(r)`$ can then be approximated as
$$g(r)\approx g_0(r)\left[1-\beta u_{el}(r)\right]$$
(19)
with $`g_0(r)=\mathrm{\Theta }(r-2a)`$. In other words, $`g(r)`$ is approximated by its zero-density limit $`g(r)\approx \mathrm{exp}\left(-\beta u(r)\right)=g_0(r)\mathrm{exp}\left(-\beta u_{el}(r)\right)`$, linearized with respect to $`\beta u_{el}(r)`$. Substitution of eq. (19) in eqs. (6,11) gives two additive contributions to $`U/U_0`$, the first one arising from the hard-core part $`g_0(r)`$, and the second one from the electrostatic perturbation of $`g(r)`$ in eq. (19). Explicitly
$$\frac{U}{U_0}\approx 1-\varphi \left[6.55+6\frac{K}{\kappa a}\right],$$
(20)
where the first term in the bracket is the result of eq. (1), arising from the hard-core part of $`g(r)`$. As far as the electrostatic part is concerned, it was argued by Petsev and Denkov that the leading Oseen-term in $`𝑩(𝒓)`$ should give the dominant contribution to $`U/U_0`$. Therefore only the Oseen-term is considered here, leading to the second term in the bracket of eq. (20). We will provide a critical discussion of this approximation in the following section.
Next, we further simplify eq. (20) for the case of vanishing excess electrolyte ($`n_s=0`$), where $`\kappa a=A\varphi ^{1/2}`$ with $`A=(3|Z|L_B/a)^{1/2}`$. When $`\varphi `$ and/or $`|Z|`$ are sufficiently small so that $`A\varphi ^{1/2}\ll 1`$, eq. (20) is further simplified to
$$\frac{U}{U_0}\approx 1-\left[6\left(\frac{L_B}{2a}\right)|Z|^3\right]^{\frac{1}{2}}\varphi ^{\frac{1}{2}}+𝒪(\varphi ).$$
(21)
Thus, we expect the reduced sedimentation velocity of dilute deionized suspensions of weakly charged particles to scale like $`U/U_0=1-p\varphi ^{1/2}`$, with a parameter $`p`$ depending on the macroion charge $`Z`$ and on the ratio $`L_B/(2a)`$.
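To give a feeling for the magnitude of the predicted coefficient, the following sketch evaluates $`p=[6(L_B/2a)|Z|^3]^{1/2}`$ for a few small charge numbers, using the cis-decalin parameters introduced only later in Sec. 4 ($`L_B=26.12`$ nm, $`a=695`$ nm); the numbers are purely illustrative:

```python
# Size of the phi^(1/2) coefficient of eq. (21) for a few small charges,
# with L_B and a as in Sec. 4 (cis-decalin, a = 695 nm); illustrative only.
import math

L_B, a = 26.12, 695.0                    # nm
for Z in (2, 5, 7):
    p = math.sqrt(6.0 * (L_B / (2.0 * a)) * abs(Z)**3)
    print(Z, round(p, 2))                # 0.95, 3.75, 6.22: p ~ |Z|^(3/2)
```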
Notice that the hard core contribution $`6.55\varphi `$ to $`U/U_0`$ in eq. (20) is omitted against the term proportional to $`\varphi ^{1/2}`$ in proceeding from eq. (20) to eq. (21). Therefore, eqs. (20) and (21) do not become equal to each other for $`Z\rightarrow 0`$. Whereas eq. (20) reduces to the hard sphere result of eq. (1) for $`Z=0`$, eq. (21) results in $`U=U_0`$ for vanishing particle charge. Obviously, the range of validity of eq. (21) is restricted to small values of $`Z`$, but still sufficiently large that the second term in eq. (20) plays the dominant role compared to the hard core contribution $`6.55\varphi `$. We mention already here that due to the strong approximations made in deriving eqs. (20,21), the range of validity of these two equations is restricted to extremely small values of $`\varphi `$ and $`Z`$ (cf. following section).
For eqs. (20,21) to be valid, it is further assumed that the van der Waals attraction among the particles is negligibly small. Experimentally, this might be achieved by coating the particle surface with a thin layer of polymer chains and/or by matching the solvent refractive index to the refractive index of the colloidal particles. If attractions between the particles are present, not only do eqs. (20,21) cease to be valid, but sedimentation velocities larger than for hard spheres may also be measured at the same density . This enhanced sedimentation rate is due to the enhanced probability, as compared to hard spheres, of two particles being close together, thus leading to a reduced retardation from backflow .
## 4 Numerical results and discussion
In this section, we present numerical results for the reduced sedimentation velocity $`U/U_0`$ as a function of the volume fraction $`\varphi `$ and of the particle charge number $`Z`$. Our PA-scheme calculations of $`U/U_0`$ account, unless stated otherwise, for two-body contributions to the hydrodynamic mobility tensors up to order $`(a/r)^{20}`$. The system parameters employed in our calculations are $`ϵ=2.183`$ (corresponding to cis-decalin as dispersing fluid at temperature $`T=293`$K), and particle radius $`a=695`$nm, representing modified PMMA particles investigated very recently in sedimentation experiments . We further use $`n_s=0`$, i.e. the ionic strength is essentially determined by the (monovalent) counterions, which counterbalance the charge of the colloidal particles. Obviously, the dielectric constant $`ϵ`$ used in the calculations presented here is rather small, leading to strong long-ranged repulsions between the particles even for small surface charges, which, most probably, are present in any polar organic solvent . Furthermore, for these solvents, residual water can cause strong electric charging of the PMMA particles . We therefore present calculations for both weakly and strongly charged particles. Our calculations for strongly charged particles recover the results obtained in Refs. for different system parameters, demonstrating clearly the independence of the scaling relation eq. (2) from specific system parameters, in particular from the solvent dielectric constant $`ϵ`$.
### 4.1 Strongly charged particles
In fig. 2, we show the PA-scheme result for $`U/U_0`$, obtained by choosing an effective charge number $`Z=150`$ large enough that the physical hard-core radius $`a`$ of the particles constitutes no relevant physical length scale. This numerical result is perfectly fitted by the form $`1-p\varphi ^\alpha `$, with $`p=1.80`$ and $`\alpha =0.34\approx 1/3`$. As mentioned before, this is in remarkably good quantitative agreement with our EHS model result obtained in PY approximation. As discussed in detail in Ref. , adding small amounts of excess electrolyte leads to a significant increase in $`U`$, and the $`\varphi ^{1/3}`$-scaling behaviour of $`U/U_0`$ does not hold any more. For comparison, fig. 2 also includes the reduced sedimentation velocity of uncharged hard spheres according to eq. (1). Evidently, the sedimentation velocity of charged particles decreases much faster with increasing $`\varphi `$ than one would expect for hard spheres at the same volume fraction.
As shown in fig. 1, the RMSA-$`g(r)`$ corresponding to the largest volume fraction $`\varphi =0.08`$ considered in fig. 2 has well developed undulations, with the maximum value approximately located at the mean geometrical particle distance $`\overline{r}`$. We only quote that the corresponding static structure factor $`S(q)`$ has its principal peak height well below the range of values $`2.8`$–$`3.1`$, where the system starts to freeze according to the empirical Hansen-Verlet rule . A few comments on the quality of the RMSA input for $`g(r)`$, used in the PA-scheme calculations, are in order here. It is well known that the RMSA underestimates to some extent the oscillations of $`g(r)`$ and $`S(q)`$ in case of strongly correlated systems of highly charged particles. In principle, we could use instead of the RMSA an alternative scheme like the Rogers-Young (RY) scheme . The RY scheme is quite accurate within the effective macroion fluid model, but has the disadvantage of being numerically far more involved than the RMSA. Fortunately, the RMSA has been found to give nearly identical results to the RY-scheme, provided that a somewhat larger value of $`Z`$ is used in the RMSA calculations . We have argued before and will show in the following that the parameter $`p`$ in eq. (2) is nearly independent of $`Z`$, typically as long as $`Z\gtrsim 100`$. Consequently, using the RY-scheme instead of the RMSA for the same value of $`Z`$ should lead to practically identical results for $`U/U_0`$. We have verified this assertion by explicit RY calculations .
In fig. 3, we display PA-scheme results for $`U/U_0`$ versus $`\varphi `$ for various values of $`Z`$. For $`Z=0`$, we recover the hard sphere result $`U/U_0=1-p\varphi `$ with $`p=6.54`$. An increase in $`Z`$ leads to a strong reduction in $`U`$, with a gradual transition from the linear $`\varphi `$-dependence of $`U/U_0`$ towards the non-linear form of eq. (2), with $`p\approx 1.80`$. This figure nicely illustrates that $`U/U_0`$ becomes independent of $`Z`$ for $`Z\gtrsim 100`$. For these large values of $`Z`$, $`r_m`$ stays practically constant when $`Z`$ is increased with fixed $`\varphi `$ .
The effect on the PA-results for $`U/U_0`$ caused by truncating the $`(a/r)`$-expansion of the two-particle mobilities after various terms of increasing order in $`(a/r)`$ can be assessed from fig. 4. The solid line represents the full PA-scheme result where all two-body contributions up to $`𝒪(r^{-20})`$ are accounted for. Nearly identical results for $`U/U_0`$ are obtained, even at $`\varphi =0.08`$, when contributions only up to $`𝒪(r^{-4})`$ are considered. In contrast, the result for $`U/U_0`$ obtained by accounting only for hydrodynamic contributions up to $`𝒪(r^{-3})`$ shows clear deviations from the full PA-scheme result at volume fractions $`\varphi \gtrsim 0.01`$. As a conclusion, we can state that for dispersions of highly charged particles it is justified to use a truncated far-field expansion of the mobility tensors with only the first terms included, provided a good static input for $`g(r)`$ is used. The finding that an expansion up to $`𝒪(r^{-4})`$ already leads to good results further indicates that hydrodynamic $`n`$-body contributions to $`U`$ with $`n\ge 3`$ are indeed negligibly small in the considered $`\varphi `$-range. For practical purposes, it is therefore suitable to use the simple eq. (12) for calculating the reduced sedimentation velocity of highly charged suspensions.
### 4.2 Weakly charged particles
We discuss now the behaviour of $`U/U_0`$ when $`Z`$ is small. To be specific, consider first the value $`Z=5`$. For this charge number, $`\beta u_{el}(r)<(L_B/(2a))Z^2\approx 0.5`$. This implies that the microstructure is affected not only by the electrostatic repulsion but also by the physical hard core of the particles. In fig. 5, we have redrawn from fig. 3 the PA-scheme result for $`U/U_0`$ with $`Z=5`$ and RMSA-input for $`g(r)`$. This graph should be compared with the corresponding result for $`U/U_0`$ obtained from the expression given in eq. (20) (henceforth referred to as PD-result). The PD-result for $`U/U_0`$ is completely different from the corresponding PA-scheme result, in particular at larger $`\varphi `$, where the PD-$`U/U_0`$ even turns negative. The failure of the PD-expression in describing $`U/U_0`$ arises from the fact that the approximations entering its derivation, i.e. in particular the approximation eq. (19) for $`g(r)`$, but also the truncation of the electrostatic contribution to $`U/U_0`$ after the leading Oseen-term, are not justified even for a charge number as small as $`Z=5`$. The consideration of more terms in the hydrodynamic pair mobility contributions up to $`𝒪(r^{-20})`$, however, leads only to a small increase in $`U/U_0`$, as can be seen from fig. 5, which further includes the PA-scheme result for $`U/U_0`$ with $`g(r)`$ approximated by the linearized zero-density form of $`g(r)`$, eq. (19). Another slight improvement in $`U/U_0`$ is achieved when the non-linearized zero-density form $`g(r)=\mathrm{\Theta }(r-2a)\mathrm{exp}\left(-\beta u_{el}(r)\right)`$ is used as input in the PA-scheme. In fact, the contact value $`\beta u_{el}(2a)`$ is not small enough for $`Z=5`$ to fully justify a linearization in $`\beta u_{el}(r)`$. Therefore, the main reason for the failure of the PD-expression is the significant difference of the actual $`g(r)`$ at $`Z=5`$ from its zero-density form already for $`\varphi \approx 0.01`$.
To obtain further insight into the range of validity of the PD-result eq. (20), we display in fig. 6 results for $`U/U_0`$ for an even smaller particle charge $`Z=2`$ and volume fractions smaller than $`\varphi =0.01`$. As seen from this figure and from fig. 5, the differences between the RMSA-PA-scheme result for $`U/U_0`$ and the PA-scheme result using $`g(r)=\mathrm{\Theta }(r-2a)\mathrm{exp}(-\beta u_{el}(r))`$ or $`g(r)=\mathrm{\Theta }(r-2a)\left[1-\beta u_{el}(r)\right]`$ as static input become smaller with decreasing $`Z`$, but are still significant even for $`Z=2`$ and $`\varphi \gtrsim 10^{-3}`$. For an explanation of this finding, we refer to fig. 7, where the radial distribution function $`g(r)`$ obtained from the RMSA and from the zero-density expression $`g(r)=\mathrm{\Theta }(r-2a)\mathrm{exp}(-\beta u_{el}(r))`$ for $`Z=2`$ and $`\varphi =0.005`$ are shown. Obviously, there are still small differences between the zero-density expression for $`g(r)`$ and the corresponding RMSA result even for these small values of $`Z`$ and $`\varphi `$. We show now that these small differences in $`g(r)`$ cause the large differences in the results for $`U/U_0`$ as illustrated in fig. 6. For this purpose, we plot in the inset of fig. 7 the function $`r(1-g(r))`$ as obtained from the radial distribution functions shown in the main figure. As easily seen, the two curves for $`r(1-g(r))`$, corresponding to the two different inputs for $`g(r)`$, are remarkably different from each other even up to large distances $`r`$. It is now crucial to notice that according to eq. (12) it is essentially the function $`r(1-g(r))`$, and not $`g(r)`$ itself, which appears in the integrand when $`U/U_0`$ is calculated including terms up to $`𝒪(r^{-1})`$ in the series expansions in eqs. (9,10). Therefore, the small differences in the two $`g(r)`$’s considered in fig. 7 give rise to substantial differences in the corresponding sedimentation velocities shown in fig. 6. We can therefore conclude that it is absolutely important to employ an accurate $`g(r)`$-input for calculating $`U/U_0`$ in the PA-scheme. Due to the use of the (linearized) zero-density approximation of $`g(r)`$, the expression given by Petsev and Denkov in eq. (20) does not predict the sedimentation velocity of weakly charged particles correctly.
After having explored the high sensitivity of $`U/U_0`$ to the form of the $`g(r)`$-input used in the PA-scheme, it is now apparent that the second approximation made in deriving eq. (20), i.e. the omission of terms of $`𝒪(r^{-3})`$ in the far-field expansions of the hydrodynamic mobilities in the electrostatic contribution to $`U/U_0`$, is not valid for the small values of $`Z`$ and $`\varphi `$ used in figs. 6 and 7. This conclusion follows from fig. 6, when the results for $`U/U_0`$ according to eq. (20) and derived from the PA-scheme using eq. (19) as static input are compared. Although the same approximation is employed for $`g(r)`$, the two results for $`U/U_0`$ do not agree, since the approximations in truncating the series expansions of the hydrodynamic mobility tensors after the terms of $`𝒪(r^{-1})`$ and $`𝒪(r^{-20})`$, respectively, are not the same. On the other hand, we have shown in fig. 4 for highly charged particles that the first few terms in the series expansion of eqs. (9,10) are sufficient to calculate $`U/U_0`$. As pointed out before, this is due to the existence of an extended correlation hole in the $`g(r)`$ of highly charged particles in deionized suspensions (cf. fig. 1). The correlation hole gives rise to a fast convergence of the integrals in eq. (11). However, in case of the weakly charged particles considered here, there is no correlation hole present, as can be seen, e.g., from fig. 7. Therefore, a large number of terms in the series expansions of eqs. (9,10) are needed for weakly charged particles to calculate $`U/U_0`$. In fact, as discussed before in the case of hard spheres, the terms of $`𝒪(r^{-3})`$ and $`𝒪(r^{-4})`$ in the series expansions in eqs. (9,10) contribute with approximately the same weight to $`U/U_0`$, still remarkably large compared to the leading Oseen-term of $`𝒪(r^{-1})`$, which is the only term considered in the electrostatic contribution in the PD-result eq. (20).
To summarize the last presented results, we have shown that the range of validity of eq. (20) is restricted to values of $`\varphi `$ considerably smaller than $`10^{-3}`$, where the macroion radial distribution function is described very accurately by its linearized zero-density form. Furthermore, for the PD-result eq. (20) to hold, the effective charge $`Z`$ has to be small enough that the approximation in eq. (19) can be used to calculate $`U/U_0`$, but still not so small that the actual $`g(r)`$ is too close to its zero-charge limiting form $`g_0(r)=\mathrm{\Theta }(r-2a)`$. Otherwise, the omission of higher order hydrodynamic terms of $`𝒪(r^{-3})`$ in the electrostatic contribution in eq. (20) ceases to be a good approximation. These severe restrictions limit the range of validity of eq. (20) to values of the system parameters most probably not accessible in sedimentation experiments. Moreover, the relative differences between $`U`$ and $`U_0`$ become small in the parameter range where eq. (20) should apply.
Let us now turn to some remarks on the $`\varphi ^{1/2}`$-scaling of $`U/U_0-1`$ proposed in eq. (21). Since eq. (21) is derived from the PD-result eq. (20), the previously discussed restricted range of validity of eq. (20) applies also to eq. (21). Furthermore, in eq. (21), the terms linear in $`\varphi `$ are neglected against the term proportional to $`\varphi ^{1/2}`$. We have shown in fig. 6 that this additional approximation becomes invalid already at small volume fractions, typically $`\varphi \gtrsim 10^{-3}`$. For larger $`\varphi `$, the two results for $`U/U_0`$ in eq. (20) and eq. (21) start to deviate strongly, since the contributions to $`U/U_0`$ linear in $`\varphi `$ (especially the hard core contribution $`6.55\varphi `$) are no longer small compared to the term proportional to $`\varphi ^{1/2}`$. We therefore also expect the $`\varphi ^{1/2}`$-scaling of $`U/U_0-1`$ according to eq. (21) not to be measurable in a sedimentation experiment. Nevertheless, we wish to point out that there is a specific range of particle charges $`Z`$ where $`U/U_0-1`$ indeed scales as $`\varphi ^{1/2}`$ for a broad range of volume fractions. This range of $`Z`$-values is determined when the effective particle charge is successively lowered from large values $`Z\gtrsim 100`$, where $`U/U_0-1`$ scales like $`\varphi ^{1/3}`$, to $`Z=0`$, where $`U/U_0-1`$ behaves linearly in $`\varphi `$ according to eq. (1). To demonstrate the occurrence of a $`\varphi ^{1/2}`$-dependence of $`U/U_0-1`$, we have redrawn in fig. 8 the RMSA-PA result from fig. 5 for $`Z=5`$ together with the corresponding results for $`Z=6`$ and $`Z=7`$. Obviously, all three curves are well described by the form $`U/U_0=1-p\varphi ^\alpha `$ with a parameter $`\alpha `$ very close to $`1/2`$. Such a $`\varphi ^{1/2}`$-dependence of $`U/U_0`$ might indeed be measurable in sedimentation experiments on dispersions with suitably chosen particle charges. However, we stress that the occurrence of the parameter $`\alpha `$ close to $`1/2`$ for a broad range of volume fractions as depicted in fig. 8 is completely different from the prediction eq. (21), which has been shown to be valid only for very small $`\varphi `$.
Finally, we briefly comment on the dependence of our results on the system parameters $`T`$ and $`ϵ`$, which are held fixed in our calculations. Obviously, these two parameters enter into our calculations mainly by determining the Bjerrum length $`L_B`$, which has a rather large value $`L_B=26.12`$nm due to the small dielectric constant $`ϵ=2.183`$ used in this work. As a consequence, the ratio $`L_B/(2a)`$ is also comparatively large. This ratio determines, together with $`Z`$, the strength of the electrostatic repulsion between the particles. For that reason, one might argue that our discussion concerning the use of eq. (19) for $`g(r)`$ in calculating $`U/U_0`$ is no longer valid for smaller values of $`L_B`$. However, explicit PA-scheme calculations for system parameters as used in Ref. and/or a simple estimate show that our conclusions derived above remain valid even for considerably smaller values of $`L_B`$: Consider a value of $`L_B`$ ten times smaller than the one used in the above presented calculations, e.g. a value like in sedimentation experiments on charged silica particles in ethanol . Then, according to eqs. (3,4), one might choose a particle charge approximately $`\sqrt{10}\approx 3.16`$ times larger than in our calculations to achieve the same strength of repulsion between the particles. As shown above for our choice of system parameters, the use of eq. (19) for $`g(r)`$ is a poor approximation even for low volume fractions $`\varphi \lesssim 0.01`$ and a particle charge $`Z=2`$. Therefore, even for a system with $`L_B`$ ten times smaller, eqs. (20,21) are applicable only for particle charges considerably smaller than $`Z=6`$. This slightly extended parameter range for eqs. (20,21) to be valid is most probably still too restricted to be experimentally accessible.
## 5 Concluding remarks
We have presented theoretical results for the reduced sedimentation velocity $`U/U_0`$ of monodisperse charged suspensions in dependence of the volume fraction $`\varphi `$ and of the particle charge number $`Z`$. Our theoretical model for $`U/U_0`$ is based on the effective macroion fluid model and on the assumption that pairwise additive HI prevails at sufficiently small $`\varphi `$. The numerical results for $`U/U_0`$ in case of strongly charged particles at low salinity are well parametrized by the form $`1-p\varphi ^{1/3}`$.
We have shown that the exponent $`1/3`$ and the value of the charge-independent parameter $`p=1.8`$ can be quantitatively understood in terms of a model of effective hard spheres of radius $`a_{EHS}`$ which depends on the volume fraction. Using Percus-Yevick input for the static pair correlation function of hard spheres, the EHS model can be treated analytically and leads to a value of $`p`$ very close to $`1.8`$.
It was further demonstrated that $`U/U_0-1`$ can scale like $`\varphi ^{1/2}`$ in the case of dilute suspensions of very weakly charged particles. This peculiar volume-fraction dependence of $`U/U_0`$ derives from an expression given by Petsev and Denkov , when it is further assumed that the ionic strength in the system is mainly due to counterions. Our numerical calculations reveal that both the original expression for $`U/U_0`$ given by Petsev and Denkov, and the derived expression showing the $`\varphi ^{1/2}`$-scaling of $`U/U_0-1`$, are valid only for very small particle volume fractions, which we expect not to be accessible in conventional sedimentation experiments.
## Acknowledgements
We are grateful to Barbara Löhle (University of Konstanz) for providing Rogers-Young calculations of various static distribution functions, and to Bruce Ackerson (Oklahoma State University) and Barbara Mandl (formerly University of Konstanz) for helpful discussions. We further thank one of the referees for calling our attention to Ref. . M.W. thanks the Deutsche Forschungsgemeinschaft for financial support within SFB 513 and SFB 237.
# Two lifetimes and the pseudogap in the orbital magnetoresistance of Zn–substituted La1.85Sr0.15CuO4
## Abstract
The effect of zinc doping on the anomalous temperature dependence of the magnetoresistance and the Hall effect in the normal state was studied in a series of La<sub>1.85</sub>Sr<sub>0.15</sub>Cu<sub>1-y</sub>Zn<sub>y</sub>O<sub>4</sub> films, with values of $`y`$ between zero and 0.12. The orbital magnetoresistance at high temperatures is found to be proportional to the square of the tangent of the Hall angle, as predicted by the model of two relaxation rates, for all Zn–doped specimens, including nonsuperconducting films. The proportionality constant is equal to 13.7 $`\pm `$ 0.5 independent of doping. This is very different from the behavior observed in underdoped La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub> films where a decrease of $`x`$ destroys the proportionality. In addition, the behavior of the orbital magnetoresistance at low temperatures is found to be different depending on whether $`x`$ is changed or $`y`$. We suggest that these differences reflect a different evolution of the pseudogap in the two cases.
The anomalous normal–state transport properties of cuprate superconductors represent a major challenge on the way toward the understanding of the physics of these materials. The assumption that there are two different relaxation times is an attempt to explain anomalies observed in the resistivity, Hall effect and magnetoresistance. It predicts a simple relation between the orbital magnetoresistance and the Hall angle ($`\mathrm{\Delta }\rho /\rho \propto \mathrm{tan}^2\mathrm{\Theta }_H`$), which is indeed observed, at least at high temperatures in optimally doped cuprates . One explanation relates the behavior of the Hall coefficient to the opening of the pseudogap in the normal state . Recent photoemission studies even point to the possibility of two different pseudogaps, a low–energy pseudogap related to the evolution of the Fermi surface into discontinuous Fermi discs in the underdoped cuprates , and a high–energy pseudogap, possibly related to the magnetic interactions .
Recently we described measurements of the orbital magnetoresistance (OMR) and the Hall effect on a wide range of La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub> (LSCO) films, from $`x`$ = 0.048 with no superconductivity down to 4 K, through the superconducting range, to $`x`$ = 0.275 with properties approaching those of a normal metal . We found that the predicted relation between the Hall angle and the magnetoresistance is not followed except in optimally doped films at temperatures above 100 K. At lower temperatures there is a point of inflection in the curve of the OMR as a function of $`T`$, below which the OMR increases rapidly. The large positive OMR observed below the inflection point survives in the nonsuperconducting specimens, indicating that it cannot be attributed solely to superconducting fluctuations as originally suggested . The point of inflection is seen to move to higher temperatures as $`x`$ decreases in the underdoped specimens, and we have suggested that this feature may be related to the opening of a pseudogap as the metal–insulator transition is approached.
To reach a fuller understanding we have made measurements of OMR and Hall effect on films of La<sub>1.85</sub>Sr<sub>0.15</sub>Cu<sub>1-y</sub>Zn<sub>y</sub>O<sub>4</sub> with $`y`$ from zero to 0.12. Superconductivity is absent when $`y`$ is greater than 0.055. We find that the change of $`y`$ gives rise to a distinctly different evolution of the OMR than a change of $`x`$. In particular, the inflection point on the OMR curve does not shift with $`y`$, consistent with a pseudogap–opening that is unaffected by a change of $`y`$. Moreover, the proportionality between the OMR and $`\mathrm{tan}^2\mathrm{\Theta }_H`$ is followed for all specimens, including nonsuperconducting films, with a proportionality constant which does not change with $`y`$ and remains equal to the value reported previously for zinc–free LSCO . This unexpected result is easily explained by the models which use two relaxation times , but is much more difficult to understand on the basis of more conventional Fermi–liquid theories which assume anisotropic relaxation rates .
The $`c`$–axis oriented films, about 6000 Å thick, were grown by pulsed laser deposition on LaSrAlO<sub>4</sub> substrates. The values of $`y`$ are those of the targets, but have been shown to be the same as in the films . The specimens for the present study were selected for their small residual resistivity. The dependence of their in–plane resistivity on temperature is shown in Fig. 1(a). The inset shows the room–temperature resistivity, $`\rho _{RT}`$, as a function of $`y`$. It exceeds by about 30% that of similar single crystals . However, while $`y`$ was no greater than 0.04 in the single crystals, we are able to reach values three times as high without any deterioration of the film quality .
The films were patterned by photolithography and the wires soldered with indium to evaporated silver pads. Standard six–probe geometry was used to measure Hall voltage and magnetoresistance simultaneously. The measurements were made in magnetic fields up to 8 T, in longitudinal fields (parallel to the $`ab`$–plane) and in transverse fields (perpendicular to the $`ab`$–plane and to the current), with $`T`$ between 25 K and 300 K. The temperature was stabilized to about 3 ppm, as described previously .
The Hall voltage is a linear function of field for all fields. Fig. 1(b) shows the Hall coefficient, $`R_H`$, as a function of $`T`$. It is seen that the increase of $`y`$ causes a decrease of $`R_H`$ without affecting the shape of $`R_H(T)`$. The change in $`R_H`$ is about an order of magnitude less than in LSCO when $`x`$ is decreased from optimal ($`x`$ = 0.15) toward the strongly underdoped regime ($`x=0.048`$) . The decrease of $`R_H`$ confirms the results previously observed in ceramic specimens up to $`y`$ = 0.03 . Note that a decrease of $`x`$ causes an increase of $`R_H`$ while an increase in $`y`$ causes an opposite trend. The change in $`y`$ does not lead to a superconductor–insulator transition , but rather to a metallic nonsuperconducting phase . The fact that the shape of $`R_H(T)`$ remains unaffected by the variation of $`y`$ is different from what happens with overdoping of LSCO, which also leads to a metallic phase, but destroys the anomalous $`T`$–dependence of $`R_H`$ .
Fig. 2 shows that the data for the Hall angle can be described by $`\mathrm{cot}\mathrm{\Theta }_H=bT^2+c`$ from 25 K to 200 K. The variation of the coefficients $`b`$ and $`c`$ is shown in the inset. The coefficient $`c`$ increases linearly with $`y`$ for all superconducting films at a rate equal to 38 $`\pm `$ 4 per at.% of Zn (and faster in nonsuperconducting films). This is about three times as fast as in zinc–substituted YBa<sub>2</sub>Cu<sub>3</sub>O<sub>7-δ</sub> . The parameter $`b`$ is not constant, as suggested for YBa<sub>2</sub>Cu<sub>3</sub>O<sub>7-δ</sub> , but increases with $`y`$.
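Since $`\mathrm{cot}\mathrm{\Theta }_H`$ is linear in $`T^2`$, the coefficients $`b`$ and $`c`$ follow from a first-order polynomial fit in $`x=T^2`$. A minimal sketch (Python; synthetic data in place of the measured Hall angles):

```python
import numpy as np

# Synthetic Hall-angle data on the 25-200 K interval.
T = np.linspace(25.0, 200.0, 30)                               # [K]
cot_theta = 0.03 * T**2 + 150.0 + 5.0 * np.random.randn(T.size)

# cot(Theta_H) = b*T^2 + c: a linear fit in T^2 returns (b, c).
b, c = np.polyfit(T**2, cot_theta, 1)
print(f"b = {b:.4f} K^-2, c = {c:.1f}")
```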
The inset to Fig. 3 shows a typical example of the dependence of the magnetoresistance on temperature. In all specimens the transverse magnetoresistance (TMR) is positive down to 25 K, and it is always larger than the longitudinal magnetoresistance (LMR). The LMR is negative and very small above 200 K, approaching the experimental resolution of the measurement. At lower T the LMR becomes positive, and larger when $`x`$ decreases, or when $`y`$ increases. In nonsuperconducting specimens with small $`x`$ or large $`y`$ the LMR becomes negative and large below 25 K, consistent with the expectation that the magnetic interactions, and the isotropic spin scattering, which is presumably responsible for the LMR, then play an increasingly important role .
To obtain the OMR we subtract the longitudinal component from the transverse magnetoresistance. The temperature dependence of the OMR is shown in Fig. 3. A dramatic suppression of the positive OMR occurs at low temperatures as $`y`$ increases, until it becomes negative in nonsuperconducting specimens. This is very different from the behavior of the OMR in LSCO, where a large positive OMR survives in nonsuperconducting films . This difference supports our previous conclusion that superconducting fluctuations are not solely responsible for the positive OMR observed at low temperatures. Since superconducting fluctuations are expected to exist in the vicinity of $`T_c`$ in underdoped and in zinc–doped LSCO, they would lead to the same behavior in both types of specimens. Evidently this is not the case.
Fig. 4 shows the OMR on a logarithmic scale. It may be seen that the point of inflection, which is at about 70 K in the film with $`y=0`$, does not change its position along the $`T`$–axis with increasing $`y`$, while with decreasing $`x`$ the point of inflection moves to higher temperatures. We suggest that the fact that the shift is not observed in the zinc–doped films indicates that the temperature at which the pseudogap opens is not affected by the change of $`y`$. This may be understood if one assumes that the zinc doping affects the pseudogap behavior only locally, in the immediate vicinity of the impurity, but not in the bulk of the specimen. Thus, while the doping affects the scattering in the bulk of the specimen as seen by the fact that both the Hall angle and the OMR above the point of inflection change with $`y`$, the temperature of the pseudogap opening does not change. A similar suggestion was made in a study of the thermopower in zinc–doped YBa<sub>2</sub>Cu<sub>3</sub>O<sub>7-δ</sub> and YBa<sub>2</sub>Cu<sub>4</sub>O<sub>8</sub> .
Further insight into the nature of the scattering comes from testing the relation between the OMR and the square of the tangent of the Hall angle. We find that this relation is followed for all zinc–doped specimens at temperatures above the inflection point.
Examples of this dependence are shown in the inset to Fig. 5 for three films. The dotted lines are fits to the equation $`a/(bT^2+c)^2`$. A comparison of the experimental values of the OMR and $`\mathrm{tan}^2\mathrm{\Theta }_H`$, measured at temperatures above the inflection point for all of the zinc–doped films, is shown on a log–log plot in Fig. 5. With the exception of data for $`y`$ = 0.12, which are close to the limit of resolution in our experiment, the data fall on straight lines which have approximately the same slope. Small parallel shifts between them probably result from experimental uncertainty of the sample sizes. Excluding the data for $`y`$ = 0.12, we can fit the data with a straight line with slope 0.94 $`\pm `$ 0.06. The proportionality constant $`a`$, averaged over all data, is equal to 13.7 $`\pm `$ 0.5, in excellent agreement with the value 13.6 reported for LSCO with $`x`$ = 0.17 by Harris et al. .
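The proportionality test reduces to a linear fit in log-log coordinates: a slope of one confirms OMR $`\propto \mathrm{tan}^2\mathrm{\Theta }_H`$, and the intercept gives the constant $`a`$. A minimal sketch with synthetic data:

```python
import numpy as np

# Synthetic high-temperature data: OMR and tan^2(Theta_H) at the same
# temperatures, generated with a = 13.7 and 2% scatter.
tan2 = np.logspace(-5.0, -3.0, 20)
omr = 13.7 * tan2 * (1.0 + 0.02 * np.random.randn(tan2.size))

slope, log_a = np.polyfit(np.log(tan2), np.log(omr), 1)
print(f"slope = {slope:.2f}, a = {np.exp(log_a):.1f}")
```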
The observation that the proportionality constant $`a`$ is unaffected by doping puts strong constraints on the theoretical models which attempt to explain the anomalous properties of the normal state in cuprates. These models may be divided into two classes. Those in one class, the Fermi–liquid models, are based on the assumption that some strong, unusual anisotropy of the relaxation rates around the Fermi surface leads to the anomalies . Although details of these models vary, it would be expected that the ratio of the OMR to $`\mathrm{tan}^2\mathrm{\Theta }_H`$ would depend on temperature and doping so that our observation would require some fortuitous cancellation. The models in the second class assume the existence of two different relaxation rates at all points of the Fermi surface . It is a fundamental property of these models that the ratio of the OMR to $`\mathrm{tan}^2\mathrm{\Theta }_H`$ is constant, and should not be affected by doping. These models thus appear to be favored by our results. Finally we note that high–magnetic–field studies of the magnetoresistance in Tl<sub>2</sub>Ba<sub>2</sub>CuO<sub>6+δ</sub> also favor the two lifetime models . Their microscopic understanding remains, however, elusive.
We conclude that the metallic phase created by zinc doping retains the anomalous characteristics that are observed at high temperatures in optimally doped LSCO, in both the OMR and the Hall effect, together with the relationship between them. We suggest that the striking contrast between this result and our previous observation of the disappearance of the proportionality between OMR and Hall angle in underdoped LSCO is related to the opening of a pseudogap. Apparently the opening of the pseudogap destroys this characteristic feature of the anomalous normal state.
We would like to thank M. Gershenson for his cooperation and sharing of laboratory facilities, and Piers Coleman and Andrew Millis for helpful discussions. We also thank Richard Newrock and the Physics Department of the University of Cincinnati for help with the construction of the target chamber. This work was supported by the Polish Committee for Scientific Research, KBN, under grant 2 P03B 09414, by the Naval Research Laboratory, and by the Rutgers Research Council.
# Ground-state dispersion and density of states from path-integral Monte Carlo. Application to the lattice polaron
## I Introduction
Usually, quantum Monte Carlo (QMC) methods are used to study ground-state, thermodynamic, or static properties of quantum-mechanical systems. Some important dynamical characteristics can also be obtained through various forms of fluctuation-dissipation relations. Examples of these are the superfluid fraction of Bose-liquids , the Drude weight of conductors , the Meissner fraction of superconductors , and the effective mass of defects and polarons . Beyond that, dynamical calculations are less straightforward. For instance, calculation of the excitation spectrum normally requires measurement of the Green’s function at imaginary times and subsequent analytical continuation to real times.
However, there exists one special type of excitation spectrum that can be measured directly by QMC. This is the ground-state dispersion, i.e., the total energy of the system $`E_𝐏`$ as a function of the total momentum P. In a translationally-invariant system, P is a constant of motion, and the Hamiltonian does not mix subspaces with different P. Then, if a QMC is designed so as to operate within a given P-subspace only, it may be able to access the ground state for the given P, thereby providing $`E_𝐏`$. The dispersion $`E_𝐏`$ is not of interest for every physical system. For a collection of identical particles, for instance, one has simply $`E_𝐏=𝐏^2/(2M)`$, $`M`$ being the total mass, which corresponds to free movement of the system as a whole. Positive examples include cases when the system can be divided into a tagged particle and an environment (usually bosons). The best known example of this kind is the polaron, i.e., an electron strongly interacting with phonons. In this case $`E_𝐏`$ is nothing but the polaron spectrum. The polaron spectrum will be the main subject of this paper.
There exist at least two different strategies for operating within a restricted P-subspace. The first one is to work in momentum space and to fix the total momentum of the system from the outset. An example of this approach is the diagrammatic method of Prokof’ev and Svistunov . In this method QMC is used to sum the entire diagrammatic series for an imaginary-time Green’s function $`G(𝐏,\tau )`$. Since the total momentum is an external parameter of the series, it is possible to extract $`E_𝐏`$ from the $`\tau \to \infty `$ limit behavior of $`G`$, by fitting it to a single exponential $`e^{-E_𝐏\tau }`$. This method is exact and universal but requires a separate simulation for each P-point.
The second strategy is to work in real space but to use Fourier-type projection operators to project onto states with definite P. This amounts to free boundary conditions in imaginary time. Usually, the projections are used to calculate the second derivative of the energy with respect to momentum (effective mass) . In Ref. the projection was applied for the first time to the whole polaron spectrum. In this scheme, the ground-state dispersion is measured directly, and all $`E_𝐏`$ are calculated simultaneously. Unfortunately, at non-zero P the weight of the polaron path is no longer positive definite, and one needs to deal with a sign problem. It turns out, however, that the main idea can be reformulated in a way that does not require division by the average sign, but only taking its logarithm. While the new formulation does not constitute a complete elimination of the sign problem, it is more statistically stable and extends the parameter domain accessible in practical simulations. Below we derive the new formula, discuss its properties, and apply it to the physically interesting example of the lattice polaron.
## II A formula for the ground-state dispersion
To the best of our knowledge, the projection relations required for our purposes were first derived by Basile . For completeness a derivation is given below. Let $`R`$ denote a many-body real-space configuration, and $`R+𝐫`$ a many-body configuration which results from the parallel transport of $`R`$ by a vector $`𝐫`$. (Note that the sum of $`R`$ and r is only symbolic. The dimensionality of $`R`$ is equal to the number of degrees of freedom, i.e., very large or infinite, while the dimensionality of r is the dimensionality of space, i.e. 1, 2, or 3.) States $`|R\rangle `$ form a complete orthogonal basis, $`𝐈=\int dR\,|R\rangle \langle R|`$, and $`\langle R|R^{\prime }\rangle =\delta (R-R^{\prime })`$. A different basis is formed by the states $`|n\rangle `$ which are characterized by the definite total momentum $`𝐏`$. One is interested in the projected partition function $`Z_𝐏`$ which includes only states with the given P:
$$Z_𝐏\equiv \sum_n \langle n|e^{-\beta H}|n\rangle \,\delta _{𝐏,𝐏_n}$$
(1)
$$=\int dR\,dR^{\prime }\,\langle R^{\prime }|e^{-\beta H}|R\rangle \,Q_𝐏,$$
(2)
$$Q_𝐏=\sum_n \langle R|n\rangle \langle n|R^{\prime }\rangle \,\delta _{𝐏,𝐏_n}=\langle R|𝐏\rangle \langle 𝐏|R^{\prime }\rangle ,$$
(3)
where $`\beta =(k_BT)^{-1}`$ is the inverse temperature and $`H`$ is the full Hamiltonian. The meaning of Eq. (3) is that the two configurations, $`R`$ and $`R^{\prime }`$, have to be projected on the given momentum $`𝐏`$. This is achieved as follows (below $`\hbar =1`$ is set). Any arbitrary configuration $`R`$ generates a set of states $`|𝐏\rangle _R=V^{-1/2}\int d𝐫\,e^{-i𝐏𝐫}|R+𝐫\rangle `$, where $`V`$ is the volume. Inversely, $`|R+𝐫\rangle =V^{-1/2}\sum_𝐏e^{i𝐏𝐫}|𝐏\rangle _R`$. Upon projection, only P-components of both configurations survive. As a result
$$Q_𝐏=\frac{1}{V}\int d𝐫\,\langle R+𝐫|𝐏\rangle \langle 𝐏|R^{\prime }+𝐫\rangle $$
(4)
$$=\frac{1}{V^2}\int d𝐫\sum_{𝐏^{\prime }𝐏^{\prime \prime }}e^{-i(𝐏^{\prime }-𝐏^{\prime \prime })𝐫}\,\langle 𝐏_{R}^{\prime }|𝐏\rangle \langle 𝐏|𝐏_{R^{\prime }}^{\prime \prime }\rangle $$
(5)
$$=\frac{1}{V}\langle 𝐏_R|𝐏_{R^{\prime }}\rangle $$
(6)
$$=\frac{1}{V^2}\int d𝐫\,d𝐫^{\prime }\,\langle R+𝐫|R^{\prime }+𝐫^{\prime }\rangle \,e^{i𝐏(𝐫-𝐫^{\prime })}$$
(7)
$$=\frac{1}{V}\int d(\mathrm{\Delta }𝐫)\,\langle R+\mathrm{\Delta }𝐫|R^{\prime }\rangle \,e^{i𝐏\mathrm{\Delta }𝐫}$$
(8)
$$=\frac{1}{V}\int d(\mathrm{\Delta }𝐫)\,e^{i𝐏\mathrm{\Delta }𝐫}\,\delta ((R+\mathrm{\Delta }𝐫)-R^{\prime }),$$
(9)
where $`\mathrm{\Delta }𝐫=𝐫-𝐫^{\prime }`$. Substitution in Eq. (2) and integration over $`R^{\prime }`$ yields
$$Z_𝐏=\frac{1}{V}\int d(\mathrm{\Delta }𝐫)\,e^{i𝐏\mathrm{\Delta }𝐫}\int dR\,\langle R+\mathrm{\Delta }𝐫|e^{-\beta H}|R\rangle $$
(10)
$$=\frac{1}{V}\int d(\mathrm{\Delta }𝐫)\,e^{i𝐏\mathrm{\Delta }𝐫}\int dR\,\rho (R,R+\mathrm{\Delta }𝐫;\beta ),$$
(11)
where $`\rho (R,R^{\prime };\beta )`$ is the full many-body density matrix. Next, we assume that for each P the state with the lowest energy $`E_𝐏`$ is non-degenerate, and in the low-temperature limit the projected partition function is dominated by the contribution from this state, $`Z_𝐏\propto \mathrm{exp}(-\beta E_𝐏)`$. Now take the ratio of $`Z_𝐏`$ and $`Z_{𝐏=0}`$:
$$e^{-\beta (E_𝐏-E_0)}=\underset{\beta \to \infty }{\mathrm{lim}}\frac{Z_𝐏}{Z_{𝐏=0}}$$
(12)
$$=\underset{\beta \to \infty }{\mathrm{lim}}\frac{\int d(\mathrm{\Delta }𝐫)\,e^{i𝐏\mathrm{\Delta }𝐫}\int dR\,\rho (R,R+\mathrm{\Delta }𝐫;\beta )}{\int d(\mathrm{\Delta }𝐫)\int dR\,\rho (R,R+\mathrm{\Delta }𝐫;\beta )},$$
(13)
where $`E_0`$ is the ground-state energy. The rhs is nothing but the average value of $`\mathrm{cos}𝐏\mathrm{\Delta }𝐫`$ taken over the distribution $`\rho `$. \[We have assumed that $`\int dR\,\rho (R,R+\mathrm{\Delta }𝐫;\beta )`$ is an even function of $`\mathrm{\Delta }𝐫`$.\] A simple formula for $`E_𝐏`$ now follows
$$E_𝐏-E_0=-\underset{\beta \to \infty }{\mathrm{lim}}\frac{1}{\beta }\mathrm{ln}\langle \mathrm{cos}𝐏\mathrm{\Delta }𝐫\rangle ,$$
(14)
which is the main result of this section.
Eq. (14) shows that the ground-state dispersion can be obtained from the end-to-end distribution of many-body paths. It offers a direct way of evaluating the ground-state dispersion by QMC methods in cases when $`\rho (R,R^{\prime };\beta )`$ is positive-definite. However, the QMC process must be organized in a special way, as apparent from Eq. (12). It must generate only such paths whose end configurations at imaginary time $`\tau =\beta `$ are exact images of the end configurations at $`\tau =0`$, except for a parallel transport by an arbitrary vector $`\mathrm{\Delta }𝐫`$. It is allowed to change $`\mathrm{\Delta }𝐫`$, to make simultaneous changes of both end configurations, and to make arbitrary changes of paths at internal times $`0<\tau <\beta `$, but the end configurations must always be kept identical up to a shift. It is important that this restriction affects neither the ergodicity nor the applicability of the Metropolis algorithm.
Formula (14) involves only one measured quantity, $`\langle \mathrm{cos}𝐏\mathrm{\Delta }𝐫\rangle `$, instead of two in the previous formulation . Moreover, it does not require division by the measured quantity, but only taking its logarithm. Additionally, Eq. (14) provides the difference between two large numbers, $`E_𝐏`$ and $`E_0`$, and a large cancellation of errors may occur. This makes Eq. (14) much more stable statistically than explicit energy estimators.
On the other hand, at small temperatures the average cosine becomes exponentially small and cannot be measured reliably. This reflects the fact that configurations with $`E_𝐏-E_0\gg k_BT`$ are very rare because of the Boltzmann factor. Thus, the present method is limited to excitation energies of the order of several $`k_BT`$.
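In practice, the estimator of Eq. (14) amounts to averaging a cosine over the sampled end-to-end displacements. A minimal sketch (Python; the Gaussian displacements stand in for the output of an actual path-integral simulation):

```python
import numpy as np

def dispersion(dr_samples, P, beta):
    """E_P - E_0 from Eq. (14): -(1/beta) * ln <cos(P . dr)>."""
    return -np.log(np.mean(np.cos(dr_samples @ P))) / beta

# Consistency check: a Gaussian end-to-end distribution of width sigma
# has <cos(P.dr)> = exp(-P^2 sigma^2 / 2), so the estimator returns the
# parabola E_P - E_0 = P^2 sigma^2 / (2 beta), i.e. mass beta/sigma^2.
beta, sigma = 50.0, 2.0
dr = np.random.normal(0.0, sigma, size=(200000, 1))
P = np.array([0.3])
print(dispersion(dr, P, beta))  # ~ 0.3**2 * 2.0**2 / (2 * 50) = 0.0036
```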
## III Application to the lattice polaron
We now demonstrate the practical importance of Eq. (14) on the model problem of the lattice polaron, which is often considered a paradigmatic example of a particle strongly interacting with a boson field. We consider a hypercubic lattice with nearest-neighbor hopping, dispersionless phonons, and the “density-displacement” electron-phonon interaction. The model Hamiltonian reads
$$H=-t\sum_{\langle 𝐧𝐧^{\prime }\rangle }c_𝐧^{\dagger }c_{𝐧^{\prime }}-\sum_{\mathrm{𝐧𝐦}}f_𝐦(𝐧)\,c_𝐧^{\dagger }c_𝐧\,\xi _𝐦+\hbar \omega \sum_𝐦 b_𝐦^{\dagger }b_𝐦.$$
(15)
Here $`t`$ is the hopping amplitude (it will be used as the energy unit), $`\omega `$ is the phonon (oscillator) frequency, $`\xi _𝐦`$ is the internal coordinate of the mth oscillator, and $`f_𝐦(𝐧)`$ is the force between the mth oscillator and the particle at site n ($`f`$ is a function of the distance $`|𝐦-𝐧|`$ only). The model is parametrized by the dimensionless frequency $`\overline{\omega }=\hbar \omega /t`$ and by the dimensionless coupling constant $`\lambda =[\sum_𝐦f_𝐦^2(0)]/(2M\omega ^2D)`$, where $`M`$ is the mass of the oscillator and $`D`$ is the half-bandwidth of the bare band. (For an isotropic band with nearest-neighbor hopping, $`D=zt`$, $`z`$ being the number of neighbors.)
For the polaron problem, a many-body configuration $`R`$ is specified by the position of the electron r and the oscillator displacements $`\xi _𝐦`$. Making use of Feynman’s idea of analytic integration over $`\xi _𝐦`$, the problem is reduced to a single-particle system with retarded self-interaction. The latter can be simulated exactly, using the continuous-time representation of polaron paths . The resulting algorithm is very efficient and allows accurate determination of the ground-state energy and effective mass of the polaron for a wide class of models. In this paper, it will be shown that the method also produces accurate polaron spectra when combined with Eq. (14). There are other reasons why the polaron is an ideal system on which to try formula (14). First, due to a constant phonon frequency, excited states are, at any P, separated from the restricted ground state by a finite energy gap $`\hbar \omega `$. Therefore, instead of performing numerically the limit procedure to $`\beta =\infty `$, one can study the system at finite $`\beta `$, provided $`\mathrm{exp}(-\beta \hbar \omega )\ll 1`$ and the contribution from excited states is negligible. Second, by increasing the coupling constant $`\lambda `$ one can always decrease $`E_𝐏-E_0`$, i.e., substantially increase $`\langle \mathrm{cos}𝐏\mathrm{\Delta }𝐫\rangle `$, and stabilise the simulations. Third, the polaron momentum P is not a parameter of the simulations. This implies that statistics can be collected for all momenta simultaneously. In other words, the whole polaron spectrum is measured in a single QMC run. This will enable us to calculate for the first time exact polaron densities of states.
We begin with the simplest Holstein model which has local electron-phonon interaction, $`f_𝐦(𝐧)=\kappa \delta _{\mathrm{𝐦𝐧}}`$. In one dimension, the polaron spectrum has been extensively studied by exact diagonalisation , strong-coupling perturbation , and variational techniques. Our QMC data for the one-dimensional Holstein model are shown in Fig. 1.
The most interesting feature of the spectrum is its non-cosine shape in the adiabatic regime $`\overline{\omega }1.0`$ (triangles and diamonds). At large momenta, the spectrum is more flat than at small momenta. The nature of this flattening was understood a long time ago . In the weak-coupling limit, the free-particle state hybridizes with the single-phonon state and creates a mixed ground state which is free-particle-like at small P and phonon-like at large P, hence the weak dispersion. With increasing $`\lambda `$, the free-particle state is replaced with a polaron state with an effective mass $`m^{}`$, which now hybridizes with the single-phonon state, still leading to a more flat dispersion at large momenta. The flattening effect weakens with growing $`\lambda `$ and $`\overline{\omega }`$ because both processes increase the energy separation of the two hybridizing states. In recent years the flattening of the polaron spectrum was observed in numerical studies .
Our Quantum Monte Carlo data fully confirm the previous analytical and numerical results, see Fig. 1. We found that the spectrum shape is more sensitive to the phonon frequency than to the coupling constant. At a small frequency $`\overline{\omega }=1.0`$, the increase of the coupling constant from $`\lambda =2.0`$ to $`\lambda =2.5`$ results in a 3.5-times increase of the effective mass, and in a 2.8-times drop of the bandwidth, yet the spectrum shape changes only slightly, cf. triangles and diamonds in Fig. 1. At the same time, a simultaneous 10-times increase of $`\lambda `$ and $`\overline{\omega }`$ results in a similar, 4.8 times, increase of effective mass but brings the spectrum shape very close to cosine, cf. triangles and squares in Fig. 1. Again, a doubling of $`\lambda `$ strongly affects $`E_0`$ and $`W`$ but not the spectrum shape, cf. circles and squares in Fig. 1.
It is instructive to compare the exact QMC results with the Lang-Firsov (LF) approximation which is believed to be the correct description of the polaron in the antiadiabatic regime (high phonon frequency). The LF formula for the spectrum reads
$$E_P-E_0=2te^{-z\lambda /\overline{\omega }}(1-\mathrm{cos}P),$$
(16)
which also implies the relation between the renormalized mass and bandwidth
$$\frac{m^{*}}{m_0}=\frac{W_0}{W}=e^{z\lambda /\overline{\omega }},$$
(17)
where $`W_0=2zt`$ is the bare bandwidth and $`m_0=\hbar ^2/(2ta^2)`$ is the bare mass, $`a`$ being the lattice constant. For $`\overline{\omega }=10.0`$ and $`\lambda =10.0`$ the QMC results are $`W=0.543(2)t`$ and $`m^{*}=6.06(2)m_0`$ while LF yields $`W_{LF}=0.541t`$ and $`m_{LF}^{*}=7.39m_0`$. For $`\overline{\omega }=10.0`$ and $`\lambda =20.0`$ one has $`W=0.0739(2)t`$ and $`m^{*}=47.6(1)m_0`$ from QMC and $`W_{LF}=0.0733t`$ and $`m_{LF}^{*}=54.6m_0`$ from LF. One can see that LF predicts very accurate values of the polaron bandwidth. This fact was established in the previous studies of the Holstein model . On the other hand, LF slightly overestimates the polaron mass. This is due to small deviations from the cosine shape, still present in the true spectrum at these model parameters. Still, the LF masses are reasonably close to the exact ones, and the agreement improves with the further increase of $`\overline{\omega }`$ and $`\lambda `$.
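The Lang-Firsov numbers quoted above follow directly from Eqs. (16,17); a short sketch reproducing them (with $`z=2`$ for the one-dimensional chain and $`t=1`$):

```python
import math

def lang_firsov(z, lam, omega_bar, t=1.0):
    """Lang-Firsov bandwidth and mass enhancement, Eqs. (16)-(17)."""
    factor = math.exp(z * lam / omega_bar)  # m*/m0 = W0/W
    W0 = 2 * z * t                          # bare bandwidth
    return W0 / factor, factor

print(lang_firsov(2, 10.0, 10.0))  # (0.541, 7.39) vs QMC: 0.543(2)t, 6.06(2)m0
print(lang_firsov(2, 20.0, 10.0))  # (0.0733, 54.6) vs QMC: 0.0739(2)t, 47.6(1)m0
```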
Consider now the two-dimensional Holstein model. The only exact polaron spectra published so far were calculated by Wellein, Fehske, and Loos with the exact diagonalization method . These authors found a flattening of the spectrum in the outer part of the Brillouin zone, even stronger than in the one-dimensional case. We checked that for the model parameters used in , formula (14) yields precisely the same values of $`E_𝐏-E_0`$ as the exact diagonalization method. A definite advantage of the present method is that it allows simultaneous calculations at any desired number of P-points, while exact diagonalization studies are limited to a small number of P-points due to the finite size of the clusters. On the other hand, the QMC method is limited to the condition $`W\lesssim \hbar \omega `$, which prevents us from studying the weak-coupling regime and such an interesting phenomenon as the limit point of the polaron spectrum . (The latter is possible with the diagrammatic QMC .) Figure 2 shows our QMC data for a new set of parameters in the adiabatic regime, $`\overline{\omega }=1.0`$, $`\lambda =1.4`$, where 30 P-points have been used to represent the spectrum. One can see that the dispersion is indeed weak for $`|𝐏|>\pi /2`$, i.e. in the larger part of the Brillouin zone. Note that at these parameters the polaron bandwidth is reduced by $`8t/0.12t\approx 67`$ times, and the mass enhancement is 8.7, so we are already in the small polaron regime. Yet, the spectrum shape is profoundly non-cosine. With increasing $`\lambda `$, it will approach the cosine shape, but this is expected to happen only at such large $`\lambda `$ where the polaron is very heavy and easily localized.
In two dimensions, the volume of the outer part of the Brillouin zone is larger than that of the inner one. Therefore the linear representation of the spectrum, like in Fig. 2, does not fully convey the changes in the band structure caused by the flattening of the spectrum. The proper physical quantity which takes into account all the states of the Brillouin zone is the density of states (DOS). The QMC method, coupled with formula (14), provides the unique opportunity to calculate the polaron DOS exactly, since it allows a simultaneous measurement of the whole spectrum. In this work, the two-dimensional Brillouin zone was divided in $`200^2`$ points at which the spectrum was measured. In the end, the total of $`\mathrm{40\hspace{0.17em}000}`$ polaron energies were distributed over 50 energy intervals between 0 and $`W`$. The resulting DOS for $`\overline{\omega }=1.0`$, $`\lambda =1.4`$ (the same parameters as in Fig. 2) is shown in Fig. 3. One can see that the effect of the spectrum flattening is indeed quite dramatic. The upper half-band is jammed into a narrow, $`0.015t`$-wide, energy interval, thereby increasing the DOS at the top of the band to $`\sim 50`$ times the DOS at the bottom of the band. The van Hove singularity is shifted from the middle to the top of the band. The lower half of the band contains only $`13\%`$ of all states. Overall, the DOS looks qualitatively different from the free-particle one.
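The binning step is elementary once the spectrum has been measured on a grid; a minimal sketch (Python; the bare 2D cosine band is used for illustration, since the QMC energies are not tabulated here — substituting the measured $`E_𝐏`$ yields the curve of Fig. 3):

```python
import numpy as np

def density_of_states(E, n_bins=50):
    """Histogram band energies into n_bins intervals over the bandwidth."""
    E = E - E.min()
    dos, edges = np.histogram(E, bins=n_bins, range=(0.0, E.max()))
    return dos / dos.sum(), 0.5 * (edges[1:] + edges[:-1])

# Bare 2D tight-binding band on a 200 x 200 Brillouin-zone grid.
k = np.linspace(-np.pi, np.pi, 200, endpoint=False)
kx, ky = np.meshgrid(k, k)
dos, centers = density_of_states((-np.cos(kx) - np.cos(ky)).ravel())
```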
As in the one-dimensional case, the band structure approaches the free-particle-like one as the phonon frequency increases. As an example, we considered a frequency equal to the bare bandwidth, $`\overline{\omega }=8.0`$, and $`\lambda =8.0`$. (This value of the coupling constant was chosen to have the polaron bandwidth, $`W`$, close to the previously considered adiabatic case.) The polaron spectrum and DOS are shown in Figs. 4 and 5, respectively. Deviations from the free-particle behavior are still visible, but they are small and not qualitative. The DOS at the top of the band is just $`\sim 1.5`$ times larger than at the bottom, and the singularity appears close to the middle of the band. The “normalization” of the spectrum at large $`\overline{\omega }`$ is quite understandable. With increasing phonon frequency, the retardation effects become less important, the lattice deformation more readily follows the particle movement, and the whole complex behaves more like a free particle with a renormalized hopping integral. The Lang-Firsov formula (17) predicts $`W_{LF}=0.1465t`$ and $`m_{LF}^{*}=54.6m_0`$ which is to be compared with the QMC results $`W=0.1510(3)t`$ and $`m^{*}=38.4(1)m_0`$. Again, the LF approximation yields the correct bandwidth but overestimates the effective mass by some 40%.
The most spectacular transformation of DOS occurs in the three-dimensional Holstein model. In three dimensions, the volume of the outer part of the Brillouin zone is much larger than that of the inner part, and it should completely dominate the total DOS. We do not show the polaron spectrum which is not very informative. Density of states was calculated by measuring the polaron spectrum at $`60^3`$ points of the full Brillouin zone, and then distributing them among 50 energy intervals between 0 and $`W`$. DOS in the adiabatic regime, $`\overline{\omega }=1.0`$ and $`\lambda =1.2`$, is shown in Fig. 6. The states of the flat part of the spectrum form a massive peak at the top of the band. The width of the peak is about $`10\%`$ of the total bandwidth. DOS at the bottom of the band is negligible, the capacity of the lower half-band is less than $`1\%`$ of the total number of states. The two van Hove singularities are not visible, at least on the chosen level of energy resolution, they are absorbed into the peak. Overall, DOS is completely different from the free-particle one. Should the three-dimensional Holstein model with such parameters exist in nature, extreme care would be necessary in interpreting experimental data. In any real material, the lowest states would likely be localized, and any response to an external perturbation would be dominated by the peak. Then, for instance, fitting to a free-particle-like form of DOS would lead to wrong estimates of the coupling constant and other errors.
As before, the band structure returns to the free-particle shape in the anti-adiabatic regime. We considered the case of the phonon frequency being equal to the bare bandwidth, $`\overline{\omega }=12.0`$, and coupling constant $`\lambda =10.0`$, when the polaron bandwidth is close to the just considered adiabatic case, see Fig. 7. Although still distorted, the DOS shape is close to the free-particle one, with the square-root behavior at the top and the bottom of the band, and with two van Hove singularities fully developed at the “right” places. The polaron bandwidth is $`W=0.0827(2)t`$, which is in good agreement with the Lang-Firsov value $`W_{LF}=0.0809t`$, while the polaron mass $`m^{*}=112(1)m_0`$ is 24% lighter than the LF mass $`m_{LF}^{*}=148m_0`$.
The polaron QMC algorithm of Ref. is not limited to the Holstein model. In fact, it allows studies of arbitrary forms of the electron-phonon interaction (of the density-displacement type), and arbitrary forms of the particle kinetic energy. Combined with Eq. (14), it provides an efficient and exact way of calculating the band structure of a whole class of polaron models. As the possibilities are numerous, we have chosen to illustrate the point with two particular examples.
The first example is the anisotropic two-dimensional Holstein model with $`\overline{\omega }=1.0`$, $`\lambda =1.4`$ (these parameters are the same as in Figs. 2 and 3), and the bare anisotropy ratio $`t_y/t_x=0.2`$. For a free particle with such an anisotropy, the saddle points at $`(\pm \pi ,0)`$ and $`(0,\pm \pi )`$ have different energies, which results in two singularities in DOS, positioned symmetrically with respect to the center and edges of the band. Polaron DOS, calculated by QMC, is shown in Fig. 8. The flattening effect creates a strong peak at the top of the band which absorbs the higher-energy singularity \[at $`(\pm \pi ,0)`$\]. At the same time, the second singularity is still clearly visible. Now it appears in the vicinity of the middle of the band.
The second example is the two-dimensional polaron model with long-range electron-phonon interaction \[to be combined with the Hamiltonian (15)\]:
$$f_𝐦(𝐧)=\frac{\kappa }{(|𝐦-𝐧|^2+1)^{3/2}},$$
(18)
where the distance $`|𝐦-𝐧|`$ is measured in lattice constants. For this form of the force, $`\lambda =1.742\kappa ^2/(2M\omega ^2D)`$. The model describes a two-dimensional particle interacting with a parallel plane of ions vibrating perpendicular to the plane. It was proposed in Ref. , where it was used to model the interaction of holes doped into copper-oxygen planes with apical oxygens in the layered cuprates. It was found that this long-range (Fröhlich) polaron is much lighter than the short-range Holstein polaron. Here we present the density of states for $`\overline{\omega }=1.0`$ and $`\lambda =2.75`$, see Fig. 9. The DOS shape is close to the free-particle one, with a single, well-developed singularity in the middle of the band. Note that we are in the adiabatic regime, at the same frequency $`\overline{\omega }=1.0`$ where the two-dimensional Holstein polaron has a very distorted DOS, cf. Fig. 3. The comparison of Figs. 9 and 5 shows that a long-range electron-phonon interaction plays the same role as an increasing phonon frequency, as far as the flattening effect is concerned.
## IV Discussion and conclusions
The main message of this paper is that a path-integral imaginary-time Quantum Monte Carlo is quite capable of directly measuring real-time spectra. By relaxing the boundary conditions in imaginary time, one allows many-body paths to have an arbitrary real-space shift $`\mathrm{\Delta }𝐫`$. Then the Fourier transform projects out configurations with a certain total momentum P. The result, expressed by the formula (14), is the ground-state energy as a function of the total momentum. The physical system where such a ground-state dispersion is of interest should be carefully chosen.
The role of temperature in this process is two-fold. On one hand, the temperature should be made as low as possible to exclude the contribution from the excited states with the same P. On the other hand, only states with $`E_𝐏-E_0\lesssim k_BT`$ are excited in the system. This means that the corresponding configurations will be generated by the QMC process in amounts sufficient for good statistics. For higher-energy states, configurations will be exponentially rare. This is the reason for the average cosine in Eq. (14) to become exponentially small in the low-temperature limit. In this case, the measurement process will be statistically unstable. Thus, the temperature should be of the order of the energy interval one is interested in. The two conditions on the temperature can be reconciled if the energy scale of the ground-state dispersion is much smaller than the energy gap between the ground state and the excited states within the same P-sector. This is realized in the polaron system where one has $`W\lesssim \hbar \omega `$ for a wide range of parameters. If this condition is not satisfied, Eq. (14) will be measuring the difference of projected free energies rather than that of ground-state energies.
Eq. (14) shows that in a many-body system there exists a general and simple relation between the ground-state dispersion and the end-to-end distribution of imaginary-time paths. It does not involve the energy estimator, although the latter is required for the separate calculation of $`E_0`$ and $`E_𝐏`$. This might be useful in cases when the evaluation of the energy estimator is computationally costly. There are other computational advantages. First, Eq. (14) involves only the logarithm of one measured quantity, the average cosine. Second, Eq. (14) calculates the difference of two energies both of which may be large. In the polaron problem, typical energies are of the order of a few $`t`$ but the bandwidth is $`W\sim 0.1t`$. Both $`E_0`$ and $`E_𝐏`$ can be calculated with a typical accuracy of $`0.3-0.5\%`$. This may result in a sizable error in their difference if the two energies are calculated separately and then subtracted. Eq. (14) produces much more stable energy differences because of the large cancellation of errors between $`E_𝐏`$ and $`E_0`$. In all the spectra presented in this paper, Figs. 1, 2, 4, the statistical errors are smaller than the symbols representing the data. Finally, the whole dispersion (as well as $`E_0`$, and derivatives at any P-point) can be calculated during a single QMC run. This property of Eq. (14) also allows fast computation of the density of states.
To demonstrate the practical usefulness of Eq. (14), we have combined it with an exact continuous-time algorithm for the lattice polaron , and calculated the first detailed polaron spectra and densities of states in two and three dimensions. Although our method of calculating the spectrum is limited to the condition $`W\lesssim \hbar \omega `$, i.e., to the intermediate and strong coupling, this is the most physically interesting regime. Together with the weak- and strong-coupling perturbation expansions, the exact diagonalization, density-matrix renormalization group , variational, and diagrammatic QMC techniques, the present method covers the whole parameter range of the polaron problem. One can say that the problem of calculating the exact polaron spectrum has found its solution.
Apart from being a test for Eq. (14), the lattice polaron still has considerable interest of its own. We have seen in this paper that the polaron spectrum in the adiabatic limit of the Holstein model is generically non-cosine, as was predicted theoretically and recently observed numerically . The flattening has been found to continue well into the strong-coupling regime, where the polaron mass is a few dozen bare masses and approaches a hundred. The same conclusion was reached previously in . For physical applications the most interesting regime is when polarons are not very heavy and can be mobile. We conclude that the non-cosine spectrum is typical for the Holstein model in the physically relevant parameter region, i.e., small phonon frequencies and intermediate couplings.
A surprising finding has been the great extent to which the flattening changes the band structure in high dimensions. The density of states is completely changed, van Hove singularities are shifted, low-lying states are almost irrelevant, and the relation between the effective mass and the bandwidth is broken. We have seen that the combination of dimensionality, phonon frequency, coupling strength, and anisotropy may produce densities of states of various shapes. In such a situation, one should be careful about the interpretation of any experimental data, like the estimation of $`\lambda `$ or the polaron mass from the bandwidth or from the location of singularities. The use of a “naive” model band structure would lead to wrong conclusions.
We have also found that in all dimensions the polaron band structure becomes free-particle-like with increasing phonon frequency. In particular, the spectrum approaches the cosine shape and the effective mass approaches the inverse of the bandwidth. Moreover, numerical values of $`W`$ are well described by the Lang-Firsov approximation. Thus, our QMC data support the LF approximation as the right description of the polaron in the antiadiabatic regime. At the same time, we have found that LF could still overestimate the polaron mass by a few dozen percent in cases where the bandwidth is predicted correctly.
Finally, we considered a long-range electron-phonon interaction and found a free-particle-like band structure even in the adiabatic regime. In Ref. it was found that in the adiabatic regime polaron masses are well described by the LF approximation. Two conclusions follow from these facts. First, all the unusual properties of the Holstein model caused by the flattening may be specific to the local electron-phonon interaction and may not be generic polaron properties. Second, the long-range electron-phonon interaction on a lattice seems to have the same effect on the band structure as an increasing phonon frequency. Although it is clear intuitively that a long-range interaction leads to higher mobility of the lattice deformation, details of this mechanism are yet to be fully understood.
The author is grateful to A. S. Alexandrov, D. M. Ceperley, V. Elser, W. M. C. Foulkes, and V. V. Kabanov for useful discussions and communications. This work was supported by EPSRC under grant GR/L40113.
# Gamma-Ray Line Emission from Superbubbles
## Abstract
ABSTRACT We present an evolutionary model for $`\gamma `$-ray line emission from superbubbles based on shock acceleration of metal-rich stellar ejecta. Application to the Orion OB1 association shows that $`\gamma `$-ray lines at the detection threshold of the SPI spectrometer aboard INTEGRAL are expected, making this region an interesting target for studies of the interaction of supernova shocks with the interstellar medium.
<sup>1</sup>Service d’astrophysique, CEA/DSM/DAPNIA, CEA-Saclay, 91191 Gif-sur-Yvette, France <sup>2</sup>Centre d’Etude Spatiale des Rayonnements, CNRS/UPS, B.P. 4346, 31028 Toulouse Cedex 4, France
KEYWORDS: gamma-ray lines; superbubbles; OB associations; mean wind composition; Orion.
1. INTRODUCTION
The claim for a detection by COMPTEL (Bloemen et al. 1994) of an intense flux of 3-7 MeV gamma-rays from the Orion molecular complex, attributed most naturally to <sup>12</sup>C and <sup>16</sup>O de-excitation lines, has led many authors to re-consider the nature and impact of energetic particles (EPs) in the interstellar medium (ISM). Although re-analysis of COMPTEL data suggests now that the observed emission was an instrumental artefact (Bloemen, these proceedings), the former “detection” raised the question about the possible existence of a low-energy, C and O enriched cosmic-ray component. Indeed, independent of the COMPTEL result, new observations relating to the Be and B abundances in the early Galaxy support the existence of such a component (e.g. Gilmore et al. 1992; Duncan et al. 1992; Cassé et al. 1995). A mechanism revived to explain low-energy C and O rich cosmic-rays has been the acceleration of particles in a superbubble resulting from the intense energetic activity of an OB association inside a molecular cloud (Bykov & Bloemen 1994; Parizot 1998). Strong stellar winds and supernova (SN) explosions fill the superbubble with both energy and enriched material to be accelerated by the numerous secondary shocks and by magnetic turbulence resulting from the interaction of shock waves (from winds and SNe) with each other and with dense clumps inside the bubble (Bykov & Toptygin 1990; Bykov & Fleishman 1992). The resulting energy spectrum is expected to be very hard ($`E^{-1}`$) up to a cut-off energy, $`E_0`$, of $`\sim 100`$ MeV/n. As for the chemical composition of the EPs, it is clearly related to the composition of stellar winds and SN ejecta, although some contamination by swept-up and/or evaporated material is likely to occur.
In this paper we calculate the $`\gamma `$-ray line emission associated with such a scenario. As we believe that the Orion complex associated with the Orion-Eridanus superbubble represents the most favoured target for a detection, we normalise our results to the distance of Orion (450 pc) and the stellar content as inferred from observations of the Orion OB1 association (Brown et al. 1994).
2. BASIC INGREDIENTS OF THE MODEL
The first step in our model calculation consists in the evaluation of the enrichment of the superbubble by stellar winds and SN ejecta as a function of bubble age. For this purpose we follow the evolution of a coevally formed OB association, characterised by an IMF of slope $`\mathrm{\Gamma }`$. The enrichment is calculated using the stellar yield compilation of Portinari et al. (1998) who combined the Padova stellar evolutionary models with SN models of Woosley & Weaver (1995). Additionally, yields for the production of radioactive <sup>26</sup>Al have been taken from Meynet et al. (1997), Woosley & Weaver (1995), and Woosley et al. (1995). To determine the parameters of the superbubble “blown” by the association, we derive the time-dependent mechanical luminosity of the OB association from the evolutionary tracks of the Padova group. Using this luminosity, we solve the dynamical equation for a spherical, homogeneous bubble (e.g. Shull & Saken 1995). The characteristic density and temperature of the bubble interior are dominated by the “mass loading” from gas evaporated off the shell. This mass loading dilutes the bubble interior with ambient ISM material which we assume to have solar composition. We calculate the conductive mass evaporation from the shell into the bubble by solving the equation of classical, unsaturated conductivity (e.g. Shull & Saken 1995). Even if we had at our disposal reliable stellar evolutionary tracks giving the composition of the winds and the SN ejecta, we would still have to evaluate the mixing of the ejecta with the evaporated ISM. To avoid such a hazardous attempt, we consider two extreme scenarios, in which the EPs are made of the stellar ejecta alone (models P), or of a perfect mixture of the ejecta and the evaporated ambient material (models D).
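For orientation, in the constant-luminosity limit the bubble dynamics reduces to the well-known similarity solution $`R(t)\approx 0.76\,(Lt^3/\rho _0)^{1/5}`$ (Weaver et al. 1977). The full calculation uses the time-dependent luminosity and mass loading, but this scaling already sets the order of magnitude; a sketch:

```python
import math

def bubble_radius_pc(L, t, n0, mu=1.4):
    """Constant-luminosity wind-blown bubble (Weaver et al. 1977):
    R ~ 0.76 (L t^3 / rho0)^(1/5). L [erg/s], t [s], n0 [cm^-3] -> R [pc]."""
    rho0 = mu * 1.67e-24 * n0  # ambient mass density [g/cm^3]
    return 0.76 * (L * t**3 / rho0) ** 0.2 / 3.086e18

yr = 3.156e7
print(bubble_radius_pc(3e37, 5e6 * yr, 1.0))  # ~1.4e2 pc after 5 Myr
```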
The second step of our calculation consists of accelerating the enriched material within the superbubble, assuming a constant acceleration rate during a time $`\tau _0`$ following each SN explosion. The EP spectrum is thus normalised so that the energy injection rate $`\dot{E}`$ is equal to $`E_{\mathrm{SN}}/\tau _0`$, where $`E_{\mathrm{SN}}\approx 10^{51}`$ erg is the SN energy. To calculate the time scale $`\tau _0`$, we assume that each new supernova influences and provides energy to a region of size $`L`$ around its explosion site, in which particles are accelerated with an efficiency $`\eta \approx 10^{-3}`$ (Bykov and Fleishman 1992; Parizot 1998). Further assuming that the extension of the region in which particles are accelerated increases as $`L=v_\mathrm{A}t`$, where $`v_\mathrm{A}\approx 200`$ km/s is the Alfvén velocity, we find that the total energy injected in the form of EPs after time $`t=\tau _0`$ is $`E_{\mathrm{EP}}=\eta n_\mathrm{b}\frac{4}{3}\pi v_\mathrm{A}^3\tau _0^3E,`$ where $`E`$ is the mean EP energy, averaged over the assumed spectrum, and $`n_\mathrm{b}`$ is the density of the superbubble interior. Equating $`E_{\mathrm{EP}}`$ to $`E_{\mathrm{SN}}`$, we obtain an estimate for $`\tau _0`$, which we then use to normalise the EP spectrum and thus the $`\gamma `$-ray fluxes. For typical values of $`E=100`$ MeV/n and $`n_\mathrm{b}=10^{-2}\mathrm{cm}^{-3}`$, we obtain $`\tau _0\approx 10^5`$ years, corresponding to an acceleration power of $`3\times 10^{38}`$ erg/s. As argued by Bykov and Fleishman (1992), the energy spectrum of the EPs depends on their feedback on the magnetic turbulence and the shock-wave system inside the bubble. Any detailed calculation of this spectrum being beyond the scope of this paper, we consider here the cut-off energy, $`E_0`$, as a free parameter with values in the range $`3-3000`$ MeV/n.
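The quoted numbers follow from equating $`E_{\mathrm{EP}}`$ to $`E_{\mathrm{SN}}`$ and solving for $`\tau _0`$; an order-of-magnitude sketch (treating the EPs as 100 MeV nucleons):

```python
import math

eta  = 1e-3            # acceleration efficiency
n_b  = 1e-2            # bubble density [cm^-3]
v_A  = 200e5           # Alfven velocity [cm/s]
E    = 100 * 1.602e-6  # mean EP energy, 100 MeV in erg
E_SN = 1e51            # supernova energy [erg]

# eta * n_b * (4/3) * pi * v_A^3 * tau_0^3 * E = E_SN
tau0 = (E_SN / (eta * n_b * (4.0/3.0) * math.pi * v_A**3 * E)) ** (1.0/3.0)
yr = 3.156e7
print(f"tau_0 ~ {tau0/yr:.1e} yr")       # ~1e5 years
print(f"power ~ {E_SN/tau0:.1e} erg/s")  # a few 1e38 erg/s
```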
3. APPLICATION TO ORION AND DISCUSSION
The evolution of the Orion-Eridanus superbubble composition and the predicted $`\gamma `$-ray line emission is summarised in Fig. 1. On the one hand we populated the IMF using Monte Carlo samples that are compatible with the present Orion population (Brown et al. 1994). On the other hand we studied the academic case of an ‘analytic’ stellar population where the IMF is densely populated by ‘fractional’ stars. While the latter case provides the average $`\gamma `$-ray line emission, the Monte Carlo sampling gives us the possible scatter around this average.
Our models predict 4.44 and 6.13 MeV $`\gamma `$-ray line fluxes of the order of a few $`10^{-5}`$ ph cm<sup>-2</sup>s<sup>-1</sup>, i.e. around the expected threshold of SPI for broadened lines (Jean 1996). We want to emphasise that this value is only an order-of-magnitude estimate due to the intrinsic uncertainties in our simplified model. Nevertheless, the result indicates that Orion is still an interesting target for the observation of $`\gamma `$-ray excitation lines due to its proximity and star formation activity. Our model predicts an <sup>26</sup>Al production around $`10^{-4}M_{\odot }`$, corresponding to 1.809 MeV line fluxes of $`6\times 10^{-6}`$ ph cm<sup>-2</sup>s<sup>-1</sup>. This is compatible with the upper limit of COMPTEL (Oberlack et al. 1995), and again is at the detection threshold of SPI. In particular, the observation of either the 1.809 MeV line or the excitation lines (or both) will severely constrain the model parameters and hence provide important information about shock-induced particle acceleration.
Among the most interesting observables is the $`{}^{12}\mathrm{C}^{*}/{}^{16}\mathrm{O}^{*}`$ line ratio. For ejecta mixed with the evaporated ISM, the ratio is always very close to the *solar value* ($`0.76`$ for $`E_0=100`$ MeV/n). For pure ejecta, this ratio may deviate significantly from the solar value, with values depending on the presence of a very massive star ($`M\gtrsim 50M_{\odot }`$) in the association (like in simulation MC1). However, $`{}^{12}\mathrm{C}^{*}/{}^{16}\mathrm{O}^{*}`$ is also very sensitive to the cut-off energy $`E_0`$ due to the different energy dependencies of the excitation cross sections (cf. Fig. 2). Additionally, for $`E_0<20`$ MeV/n the acceleration time scale $`\tau _0`$ becomes too long, and hence the injection power too small, for significant $`\gamma `$-ray line emission. The ambiguity of interpreting a given line ratio from the 3D-space of parameters ($`E_0`$, association age, dilution) may be removed by jointly studying additional line ratios, e.g. $`\mathrm{LiBe}^{*}/{}^{16}\mathrm{O}^{*}`$ (where LiBe refers to the so-called Li-Be feature around 450 keV). We will discuss the expected correlations in a separate paper where we also give more detailed information about the modelling procedure (Parizot & Knödlseder 1999, in preparation).
REFERENCES
Bloemen, H., et al. 1994, A&A, 281, L5
Brown, A.G.A., de Geus, E.J., & de Zeeuw, P.T. 1994, A&A, 289, 101
Bykov, A.M. & Bloemen, H. 1994, A&A, 283, L1
Bykov, A.M. & Fleishman, G.D. 1992, MNRAS, 255, 269
Bykov, A.M. & Toptygin, I.N. 1990, Sov. Phys.-JETP, 71, 702
Cassé, M., Lehoucq, R., & Vangioni-Flam, E. 1995, Nature, 373, 318
Duncan, D., Lambert, D., & Lemke, M. 1992, ApJ, 401, 584
Gilmore, G., et al. 1992, Nature, 357, 379
Jean, P. 1996, PhD thesis, Univ. Paul Sabatier, Toulouse
Meynet, G., et al. 1997, A&A, 320, 460
Oberlack, U., et al. 1995, Proc. ICRC, Rome, p. 207
Parizot, E. 1998, A&A, 331, 726
Portinari, L., Chiosi, C., & Bressan, A. 1998, A&A, 334, 505
Shull, J.M. & Saken, J.M. 1995, ApJ, 444, 663
Woosley, S.E., Langer, N., & Weaver, T.A. 1995, ApJ, 448, 315
Woosley, S.E. & Weaver, T.A. 1995, ApJS, 101, 181
# Spin-spin interaction and spin-squeezing in an optical lattice
## Abstract
We show that by displacing two optical lattices with respect to each other, we may produce interactions similar to the ones describing ferro-magnetism in condensed matter physics. We also show that particularly simple choices of the interaction lead to spin-squeezing, which may be used to improve the sensitivity of atomic clocks. Spin-squeezing is generated even with partially, and randomly, filled lattices, and our proposal may be implemented with current technology.
Simulation of quantum many-body problems on a classical computer is difficult because the size of the Hilbert space grows exponentially with the number of particles. As suggested by Feynman the growth in computational requirements is only linear on a quantum computer , which is itself a quantum many-body system, and such a computer containing only a few tens of quantum bits may outperform a classical computer. A quantum computer aimed at the solution of a quantum problem is expected to be easier to realize in practice than a general purpose quantum computer, because the desired solution is governed by physical interactions which are constrained, e.g., by locality . In essence, such a quantum computer is a quantum simulator with the attractive feature that the experimentalist can control and observe the dynamics more precisely than in the physical system of interest. In this Letter we describe how atoms in an optical lattice may be manipulated to simulate spin-spin interactions which are used to describe ferro-magnetism in condensed matter physics. We also show that with a specific choice of interaction we may generate spin squeezed states which may be used to enhance spectroscopic resolution, e.g., in atomic clocks.
In Refs. two different methods to perform a coherent evolution of the joint state of pairs of atoms in an optical lattice were proposed. Both methods involve displacement of two identical optical lattices with respect to each other. Each lattice traps one of the two internal states $`|0`$ and $`|1`$ of the atoms. Initially, the atoms are in the same internal state $`|0`$, the two lattices are on top of each other, and the atoms are assumed to be cooled to the vibrational ground state in the lattice. Using a resonant pulse the atoms may be prepared in any superposition of the two internal states. The lattice containing the $`|1`$ component of the wavefunction is now displaced so that if an atom (at the lattice site $`k`$) is in $`|1`$, it is transferred to the vicinity of the neighbouring atom (at the lattice site $`k+1`$) if this is in $`|0`$, causing an interaction between the two atoms. See Fig. 1. The procedures described in this Letter follow the proposal in Ref. , where the atoms interact through controlled collisions. Also the optically induced dipole-dipole interactions proposed in may be adjusted to fit into this framework. After the interaction, the lattices are returned to their initial position and the internal states of each atom may again be subject to single particle unitary evolution. The total effect of the displacement and the interaction with the neighbour can be tailored to yield a certain phaseshift $`\varphi `$ on the $`|1_k|0_{k+1}`$ component of the wavefunction, i.e.,
$`|0_k|0_{k+1}`$ $`\rightarrow `$ $`|0_k|0_{k+1}`$ $`|0_k|1_{k+1}\rightarrow |0_k|1_{k+1}`$ (1)
$`|1_k|0_{k+1}`$ $`\rightarrow `$ $`e^{i\varphi }|1_k|0_{k+1}`$ $`|1_k|1_{k+1}\rightarrow |1_k|1_{k+1},`$ (2)
where $`|a_k`$ ($`a=0`$ or $`1`$) refers to the state of the atom at the $`k`$’th lattice site. In it is suggested to build a general purpose quantum computer in an optical lattice. Such a general computer requires two-atom gates, which may be accomplished through the dynamics in (2) and single atom control, which is possible by directing a laser beam on each atom. We shall show that even without allowing access to the individual atoms, the lattice may be used to perform a highly non-trivial computational task: Simulation of a ferro-magnet.
Our two level quantum systems conveniently describe spin $`1/2`$ particles with the two states $`|0_k`$ and $`|1_k`$ representing $`|jm_k=|1/2,-1/2_k`$ and $`|1/2,1/2_k`$, where states $`|jm_k`$ are eigenstates of the $`j_{z,k}`$-operator $`j_{z,k}|jm_k=m|jm_k`$ ($`\hbar =1`$). The phase-shifted component of the wavefunction in Eq. (2) may be isolated by applying the operator $`(j_{z,k}+1/2)(j_{z,k+1}-1/2)`$, and the total evolution composed of the lattice translations and the interaction induced phaseshift may be described by the unitary operator $`e^{-iHt}`$ with Hamiltonian $`H=\chi (j_{z,k}+1/2)(j_{z,k+1}-1/2)`$ and time $`t=\varphi /\chi `$. In a filled lattice the evolution is described by the Hamiltonian $`H=\chi \sum_k(j_{z,k}+1/2)(j_{z,k+1}-1/2)`$, and if we are only interested in the bulk behaviour of the atoms we may apply periodic boundary conditions, so that the Hamiltonian reduces to
$$H_{zz}=\chi \sum_{<k,l>}j_{z,k}j_{z,l},$$
(3)
where the sum is over nearest neighbours. By appropriately displacing the lattice we may extend the sum to nearest neighbours in two and three dimensions. $`H_{zz}`$ coincides with the celebrated Ising-model Hamiltonian introduced to describe ferro-magnetism. Hence, by elementary lattice displacements we perform a quantum simulation of a ferro-magnet.
A more general Hamiltonian of the type
$$H_f=\sum_{<k,l>}\chi j_{z,k}j_{z,l}+\eta j_{x,k}j_{x,l}+\lambda j_{y,k}j_{y,l}$$
(4)
may be engineered using multiple resonant pulses and displacements of the lattices: A resonant $`\pi /2`$-pulse acting simultaneously on all atoms rotates the $`j_z`$-operators into $`j_x`$-operators, $`e^{ij_{y,k}\pi /2}j_{z,k}e^{-ij_{y,k}\pi /2}=j_{x,k}`$. Hence, by applying $`\pi /2`$-pulses, in conjunction with the displacement sequence, we turn $`H_{zz}`$ into $`H_{xx}`$, the second term in Eq. (4). Similarly we may produce $`H_{yy}`$, the third term in Eq. (4), and by adjusting the duration of the interaction with the neighbours we may adjust the coefficients $`\chi `$, $`\eta `$ and $`\lambda `$ to any values. We cannot, however, produce $`H_f`$ by simply applying $`H_{zz}`$ for the desired time $`t`$, followed by $`H_{xx}`$ and $`H_{yy}`$, because the different Hamiltonians do not commute. Instead we apply a physical implementation of a well-known numerical scheme: The split operator technique. If we choose short time steps, i.e., small phaseshifts $`\varphi `$ in Eq. (2), the error will only be of order $`\varphi ^2`$, and by repeated application of $`H_{zz}`$, $`H_{xx}`$ and $`H_{yy}`$, we may stroboscopically approximate $`H_f`$.
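As a concrete illustration of this stroboscopic scheme, the following minimal Python sketch (our own construction, not the paper's numerics; the chain length, time step and the isotropic choice $`\chi =\eta =\lambda `$ are arbitrary assumptions) compares the split-operator evolution with direct exponentiation of $`H_f`$ for a small chain:

```python
import numpy as np
from scipy.linalg import expm

N = 6  # small periodic chain, kept small so dense matrices are cheap
sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]], dtype=complex) / 2
sz = np.array([[0.5, 0], [0, -0.5]], dtype=complex)

def site(op, k):
    """Embed a single-spin operator at site k in the 2^N-dimensional space."""
    out = np.array([[1.0 + 0j]])
    for j in range(N):
        out = np.kron(out, op if j == k else np.eye(2))
    return out

def coupling(op):
    """Nearest-neighbour sum over <k,l> with periodic boundary conditions."""
    return sum(site(op, k) @ site(op, (k + 1) % N) for k in range(N))

chi = 1.0
Hzz, Hxx, Hyy = chi * coupling(sz), chi * coupling(sx), chi * coupling(sy)
Hf = Hzz + Hxx + Hyy

dt, nsteps = 0.1, 50  # small phase shift per step, as in the text
Ustep = expm(-1j * dt * Hyy) @ expm(-1j * dt * Hxx) @ expm(-1j * dt * Hzz)
Ustrob = np.linalg.matrix_power(Ustep, nsteps)
Uexact = expm(-1j * dt * nsteps * Hf)

psi = np.zeros(2**N, dtype=complex); psi[0] = 1.0  # all spins in the same state
psi = 2 * site(sx, N // 2) @ psi                   # flip the central spin
overlap = abs(np.vdot(Uexact @ psi, Ustrob @ psi))**2
print(f"fidelity of stroboscopic vs exact evolution: {overlap:.4f}")
```

The splitting error per step is of order $`\varphi ^2`$, so the fidelity approaches unity as the individual phase shifts are reduced.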
For a few atoms the system may be simulated numerically on a classical computer. In Fig. 2 we show the propagation of a spin wave in a one-dimensional string of 15 atoms which are initially in the $`|1/2,-1/2`$ state. The central atom is flipped at $`t=0`$ and a spin wave propagates to the left and right. The figure shows the evolution of $`<j_{z,k}>`$ for all atoms, obtained by repeatedly applying the Hamiltonians $`H_{zz}`$, $`H_{xx}`$ and $`H_{yy}`$ with $`\chi =\eta =\lambda `$ and periodic boundary conditions. Small time steps $`dt=0.1\chi ^{-1}`$ result in a stroboscopic approximation almost indistinguishable from the results of a direct numerical integration of $`H_f`$.
A host of magnetic phenomena may be simulated on our optical lattice: Solitons, topological excitations, two magnon bound states, etc. By pumping a fraction of the atoms into the $`|1/2,1/2`$ state, we may also perform micro-canonical ensemble calculations for non-vanishing temperature. Other procedures for introducing a non-vanishing temperature are described in Ref. .
We now show how to generate spin squeezed states using the same techniques as discussed above. Signals obtained in spectroscopic investigations of a sample of two level atoms are expressed by the collective spin operators $`J_i=\sum_kj_{i,k}`$, and their quantum mechanical uncertainty limits the measurement accuracy, and e.g., the performance of atomic clocks. In standard spectroscopy with $`N`$ uncorrelated atoms starting in the $`|1/2,-1/2`$ state, the uncertainties $`\mathrm{\Delta }J_x=\sqrt{\langle J_x^2\rangle -\langle J_x\rangle ^2}`$ and $`\mathrm{\Delta }J_y`$ are identical, and the standard quantum limit resulting from the uncertainty relation of angular momentum operators
$$(\mathrm{\Delta }J_x)^2(\mathrm{\Delta }J_y)^2\geq \left|<J_z/2>\right|^2$$
(5)
predicts a spectroscopic sensitivity proportional to $`1/\sqrt{N}`$. Polarization rotation spectroscopy and high precision atomic fountain clocks are now limited by this sensitivity . In it is suggested to produce spin squeezed states which redistribute the uncertainty unevenly between components like $`J_x`$ and $`J_y`$ in (5), so that measurements, sensitive to the component with reduced uncertainty, become more precise. Spin squeezing resulting from absorption of non-classical light has been suggested and demonstrated experimentally . Ref. presents an analysis of squeezing obtained from the non-linear couplings $`H=\chi J_x^2`$ and $`H=\chi (J_x^2-J_y^2)`$. For neutral atoms, such a coupling has been suggested in the spatial overlap of two components of a Bose-Einstein condensate . Spin squeezing in an optical lattice has two main advantages compared to the condensates: The interaction can be turned on and off easily, and the localization at lattice sites increases the density and thus the interaction strength. The product of two collective spin operators involves terms $`j_{x,k}j_{x,l}`$ for all atoms $`k`$ and $`l`$, and this coupling may be produced by displacing the lattices several times so that the $`|1/2,1/2`$ component of each atom visits every lattice site and interacts with all other atoms. In a large lattice such multiple displacements are not desirable. We shall show, however, that substantial spin-squeezing occurs through interaction with only a few nearby atoms, i.e., for Hamiltonians
$$H=\sum_{k,l}\chi _{k,l}j_{x,k}j_{x,l}$$
(6)
and
$$H=\sum_{k,l}\chi _{k,l}(j_{x,k}j_{x,l}-j_{y,k}j_{y,l}),$$
(7)
where the coupling constants $`\chi _{k,l}`$ between atoms $`k`$ and $`l`$ vanish except for a small selection of displacements of the lattices.
Expectation values of relevant angular momentum operators and the variance of the spin operator $`J_\theta =\mathrm{cos}(\theta )J_x+\mathrm{sin}(\theta )J_y`$ may be calculated for an initially uncorrelated state with all atoms in $`|1/2,-1/2`$, propagated by the simple coupling (6). If each atom visits one neighbour, $`\chi _{k,l}=\chi \delta _{k+1,l}`$, we get the time dependent variance of the spin component $`J_{\pi /4}=\frac{1}{\sqrt{2}}(J_x-J_y)`$
$$(\mathrm{\Delta }J_{\pi /4})^2=\frac{N}{4}\left[1+\frac{1}{4}\mathrm{sin}^2(\chi t)-\mathrm{sin}(\chi t)\right].$$
(8)
The mean spin vector is in the negative $`z`$ direction and has the expectation value
$$<J_z>=-\frac{N}{2}\mathrm{cos}^2(\chi t).$$
(9)
For small values of $`\chi t`$, $`\mathrm{\Delta }J_{\pi /4}`$ decreases linearly with $`\chi t`$ whereas $`|<J_z>|`$ decreases proportional to $`(\chi t)^2`$, hence $`\mathrm{\Delta }J_{\pi /4}`$ falls below $`|<J_z/2>|`$, and the spin is squeezed.
In Fig. 3 we show numerical results for 15 atoms in a one-dimensional lattice with periodic boundary conditions. Fig. 3 (a) shows the evolution of $`(\mathrm{\Delta }J_\theta )^2`$ when we apply the coupling (6) and visit 1, 2, and 3 neighbours. The squeezing angle $`\theta =\pi /4`$ is optimal for short times $`\chi t<<1`$. For longer times the optimal angle deviates from $`\pi /4`$, and we plot the variance $`(\mathrm{\Delta }J_\theta )^2`$ minimized with respect to the angle $`\theta `$. We assume the same phaseshift for all collisions, i.e., all non-vanishing $`\chi _{k,l}`$ are identical.
For spectroscopic investigations not only the variance of a spin component is relevant. In it is shown that if spectroscopy is performed with $`N`$ particles, the reduction in the frequency variance due to squeezing is given by the quantity
$$\xi ^2=\frac{N(\mathrm{\Delta }J_\theta )^2}{<J_z>^2}.$$
(10)
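For the one-neighbour coupling, the squeezing predicted by Eqs. (8)-(10) can be evaluated directly; the short Python sketch below (our own illustration, with $`\theta =\pi /4`$ fixed rather than optimised) locates the minimum of $`\xi ^2`$:

```python
import numpy as np

N = 15                                # number of atoms, as in the simulations
chi_t = np.linspace(1e-3, 1.0, 1000)
varJ = N / 4 * (1 + 0.25 * np.sin(chi_t)**2 - np.sin(chi_t))  # Eq. (8)
Jz = -N / 2 * np.cos(chi_t)**2                                # Eq. (9)
xi2 = N * varJ / Jz**2                                        # Eq. (10)

i = np.argmin(xi2)
print(f"min xi^2 = {xi2[i]:.3f} at chi*t = {chi_t[i]:.3f}")   # xi^2 < 1: squeezing
```

Already this single-neighbour interaction pushes $`\xi ^2`$ below unity; visiting further neighbours, or using the coupling (7), improves on this, as shown in Fig. 3 (b).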
In Fig. 3 (b) we show the minimum value of $`\xi ^2`$ obtained with the couplings (6) and (7) as functions of the number of neighbours visited. Fig. 3 (b) shows that the coupling (7) produces better squeezing than (6). The coupling (6), however, is more attractive from an experimental viewpoint. Firstly, all $`j_{x,k}`$ operators commute and we do not have to apply several displacements with infinitesimal durations to produce the desired Hamiltonian. We may simply displace the atoms so that they interact with one neighbour to produce the desired phaseshift $`\varphi `$, and then go on to interact with another neighbour. Secondly, if the $`j_{x,k}j_{x,l}`$ coupling involves a phaseshift $`\varphi `$, the operator $`j_{y,k}j_{y,l}`$ requires the opposite phaseshift $`-\varphi `$. This requires a long interaction producing $`2\pi -\varphi `$, or a change of the interaction among the atoms, i.e., a change of the sign of the scattering length in the implementation of .
Like the analytic expression for $`\xi ^2`$ obtained from (8,9), the results shown in Fig. 3 (b) are independent of the total number of atoms as long as this exceeds the “number of neighbours visited”. When all lattice sites are visited we approach the results obtained in Ref. , i.e., a variance scaling as $`N^{1/3}`$ and a constant for the couplings (6) and (7).
So far we have assumed that the lattice contains one atom at each lattice site and that all atoms are cooled to the vibrational ground state. The present experimental status is that atoms can be cooled to the vibrational ground-state, but with a filling factor below unity . A mean filling factor of unity is reported in , but when at most a single atom is permitted at each lattice site a mean occupation of 0.44 is achieved. It has been suggested that a single atom per lattice site may be achieved by filling the lattice from a Bose-Einstein condensate .
To describe a partially filled lattice it is convenient to introduce stochastic variables $`h_k`$, describing whether the $`k`$’th lattice site is filled ($`h_k=1`$) or empty ($`h_k=0`$). The interaction may be described by the Hamiltonian $`H=\sum_{k,l}\chi _{k,l}h_k(j_{z,k}+1/2)h_l(j_{z,l}-1/2)`$, where the sum is over all lattice sites $`k`$ and $`l`$. If we, rather than just displacing the atoms in one direction, also displace the lattice in the opposite direction, so that $`\chi _{k,l}`$ is symmetric in $`k`$ and $`l`$, we may produce the Hamiltonian
$$H=\sum_{k,l}\chi _{k,l}h_kj_{x,k}h_lj_{x,l}.$$
(11)
This Hamiltonian models ferro-magnetism in random structures, and it might shed light on morphology properties, and, e.g., percolation . Here we shall restrict our analysis to spin-squeezing aspects, since these are both of practical interest, and they represent an ideal experimental signature of the microscopic interaction.
In Fig. 4 we show the result of a simulation of squeezing in a partially filled one dimensional lattice. Each lattice site contains an atom with a probability $`p`$, and the size of the lattice is adjusted so that it contains 15 atoms. In Fig. 4 (a) we show the decrease in the variance of $`J_\theta `$, averaged over 20 realizations and minimized with respect to $`\theta `$. Lines indicate the predictions from the time derivatives at $`t=0`$
$`{\displaystyle \frac{d}{dt}}(\mathrm{\Delta }J_{\pi /4})^2`$ $`=`$ $`-{\displaystyle \frac{1}{2}}{\displaystyle \sum_{k,l}}\chi _{k,l}<h_kh_l>`$ (12)
$`{\displaystyle \frac{d}{dt}}<J_z>`$ $`=`$ $`0,`$ (13)
where $`<h_kh_l>`$ denotes the ensemble average over the distribution of atoms in the lattice, i.e., the two atom correlation function. In Fig. 4 (b) we show the minimum value of $`\xi ^2`$ for different filling factors $`p`$ as a function of the number of neighbours visited. The calculations confirm that even in dilute lattices, considerable squeezing may be achieved by visiting a few neighbours.
In conclusion we have suggested a method to simulate condensed matter physics in an optical lattice, and we have shown how the dynamics may be employed to produce spin-squeezing. We emphasize the moderate experimental requirement for our scheme. With the two internal states represented as hyperfine structure states in alkaline atoms, all spin rotations may be performed by Raman or RF-pulses acting on all atoms simultaneously, and lattice displacements may be performed by simply rotating the polarisation of the lasers . With the parameters in , the duration of the sequence in Fig. 1 can be as low as a few micro-seconds. Following our suggestion spin-squeezing may be produced in dilute optical lattices, and implementation is possible with current technology. The resulting macroscopic decrease in projection noise has several promising applications in technology and quantum physics, and it provides an experimental signature of the microscopic interaction between the atoms.
# Symplectic algorithm for constant-pressure molecular dynamics using a Nosé-Poincaré thermostat
## 1 Introduction
Traditionally, molecular-dynamics simulations are performed using constant particle number $`N`$, volume $`V`$ and energy $`E`$. However, these are not usually the conditions under which experiments are done and there has been much attention to the development of simulation methods designed to sample from other, experimentally more relevant ensembles, such as constant temperature (canonical) and/or constant pressure. Some of the most popular and useful of these are those based on so-called “extended” Hamiltonians, i.e., Hamiltonians in which extra degrees of freedom have been added to the system in order to ensure that the trajectory samples from the statistical distribution corresponding to the desired thermodynamic conditions.
For a constant pressure system, for example, Andersen introduced the volume $`V`$, along with its corresponding conjugate momentum $`\pi _V`$, as extra variables. The new variables are coupled to the system in such a way as to guarantee that the trajectory (if ergodic) samples from an isobaric statistical distribution. Similarly, to generate a constant temperature distribution, Nosé introduced a new mechanical variable $`s`$ (with conjugate momentum $`\pi _s`$) that couples into the system through the particle momenta and acts to effectively rescale time in such a way as to guarantee canonically distributed configurations. These two extensions can be combined to give a Hamiltonian whose trajectories can be shown to sample from an isothermal-isobaric ensemble.
This combined Nosé-Andersen (NA) Hamiltonian is given by
$$\mathcal{H}_{NA}=V^{-2/3}\sum_i\frac{p_i^2}{2m_is^2}+U(V^{1/3}𝐪)+\frac{\pi _V^2}{2Q_V}+\frac{\pi _s^2}{2Q_s}+gkT\mathrm{ln}s+P_{ext}V,$$
(1)
where $`p_i`$ is the conjugate momentum to the scaled position $`q_i=V^{-1/3}r_i`$, $`P_{ext}`$ is the external pressure and $`g`$ is given by $`N_f+1`$ where $`N_f`$ is the number of degrees of freedom of the original system. The quantities $`Q_V`$ and $`Q_s`$ are the masses of the Andersen “piston” and the Nosé thermostat variable, respectively.
The equations of motion for this system are
$`\dot{p_i}`$ $`=`$ $`-V^{1/3}\nabla _iU(V^{1/3}𝐪)`$ (2a)
$`\dot{q_i}`$ $`=`$ $`{\displaystyle \frac{p_i}{s^2m_iV^{2/3}}}`$ (2b)
$`\dot{\pi _V}`$ $`=`$ $`𝒫-P_{ext}`$ (2c)
$`\dot{V}`$ $`=`$ $`\pi _v/Q_V`$ (2d)
$`\dot{\pi _s}`$ $`=`$ $`V^{-2/3}s^{-3}{\displaystyle \sum_i}{\displaystyle \frac{p_i^2}{m_i}}-{\displaystyle \frac{gkT}{s}}`$ (2e)
$`\dot{s}`$ $`=`$ $`\pi _s/Q_s,`$ (2f)
where the instantaneous pressure $`𝒫`$ is given by
$$𝒫=\frac{2}{3V}\sum_i\frac{p_i^2}{2m_iV^{2/3}s^2}-\frac{1}{3V}\sum_i\frac{\partial U}{\partial q_i}q_i$$
(3)
There are two major drawbacks to this approach: First, because of the time rescaling, the time variable in Nosé dynamics is not “real” time, so any discretized trajectory generated by numerically integrating the Nosé equations of motion must be transformed back into “real” time, leading to the configurations that are spaced at unequal “real”-time intervals. This is inconvenient for the construction of equilibrium averages, especially of dynamical quantities. Second, the Hamiltonian is not separable (that is, the kinetic and potential terms in the Hamiltonian are not functions only of momenta and position variables, respectively), making standard Verlet/leapfrog approaches inapplicable.
By a change of variables and a time rescaling of the equations of motion, Hoover derived new equations of motion that generate the same trajectories (for the exact solution) as the original Nosé Hamiltonian, but in real time. This Nosé-Hoover dynamics has become a standard method in molecular simulation. However, the change of variables that links the Nosé Hamiltonian to the Nosé-Hoover equations of motion is a non-canonical transformation - the total energy function of the system is still conserved, but it is no longer a Hamiltonian, since the equations of motion cannot be derived from it. Although a variety of very good time-reversible methods have been put forward, the lack of Hamiltonian structure precludes the use of symplectic integration schemes, which have been shown to have superior stability over non-symplectic methods.
## 2 The Nosé-Poincaré-Andersen (NPA) Hamiltonian
Recently, Bond, Leimkuhler and Laird have developed a new formulation of Nosé constant-temperature dynamics in which a Poincaré time transformation is applied directly to the Nosé Hamiltonian, instead of applying a time transformation to the equations of motion as in Nosé-Hoover. The result of this is a method that runs in “real” time, but is also Hamiltonian in structure. In this work we combine this new thermostat with the Andersen method for constant pressure to give an algorithm for isothermal-isobaric molecular dynamics. For a system with an Andersen piston, the new Nosé-Poincaré-Andersen (NPA) Hamiltonian is given by
$$\mathcal{H}_{NPA}=[\mathcal{H}_{NA}-\mathcal{H}_{NA}(t=0)]s,$$
(4)
where $`\mathcal{H}_{NA}`$ is given in Eq. 1. As discussed in Ref. 12, the above form of the Hamiltonian (a specific case of a Poincaré time transformation) will generate the same trajectories as the original Nosé-Andersen Hamiltonian, except with time rescaled by $`s`$ (which puts the trajectories back into real time). The resulting equations of motion (except for $`\pi _s`$) for this constant pressure and temperature Nosé-Poincaré Hamiltonian are the same as those given above for the Nosé-Andersen system (Eqs. 2a-2f), except that the right-hand side is multiplied by the thermostat variable $`s`$. For $`\pi _s`$ we have
$$\dot{\pi _s}=-\frac{\partial }{\partial s}\left(s\,\mathrm{\Delta }\mathcal{H}\right)=V^{-2/3}\sum_i\frac{p_i^2}{m_is^2}-gkT-\mathrm{\Delta }\mathcal{H}$$
(5)
where $`\mathrm{\Delta }\mathcal{H}\equiv \mathcal{H}_{NA}-\mathcal{H}_{NA}(t=0)`$.
## 3 Integrating the NPA Equations of motion
The NPA Hamiltonian is nonseparable since the kinetic energy contains the extended “position” variables $`s`$ and $`V`$. The equations of motion for a general time-independent, nonseparable Hamiltonian can be written (for general positions $`Q`$ and conjugate momenta $`P`$)
$`\dot{Q}`$ $`=`$ $`G(P,Q)`$
$`\dot{P}`$ $`=`$ $`F(P,Q),`$ (6)
where $`G(P,Q)=\partial \mathcal{H}/\partial P`$ and $`F(P,Q)=-\partial \mathcal{H}/\partial Q`$. (For a separable Hamiltonian $`G`$ is only a function of $`P`$ and $`F`$ is only a function of $`Q`$.) For such a nonseparable system, standard symplectic splitting methods, such as the Verlet/leapfrog algorithm, are not directly applicable. However, symplectic methods specifically for nonseparable systems have been developed. One simple example that is second-order and time-reversible is the Generalized Leapfrog Algorithm (GLA)
$`P_{n+1/2}`$ $`=`$ $`P_n+hF(P_{n+1/2},Q_n)/2`$
$`Q_{n+1}`$ $`=`$ $`Q_n+h[G(P_{n+1/2},Q_n)+G(P_{n+1/2},Q_{n+1})]/2`$
$`P_{n+1}`$ $`=`$ $`P_{n+1/2}+hF(P_{n+1/2},Q_{n+1})/2,`$ (7)
where $`h`$ is the time step and $`P_n`$ and $`Q_n`$ are the approximations to $`P(t)`$ and $`Q(t)`$ at $`t=t_n=nh`$. (This method can be obtained as the concatenation of the Symplectic Euler method,
$`P_{n+1}`$ $`=`$ $`P_n+hF(P_{n+1},Q_n)`$
$`Q_{n+1}`$ $`=`$ $`Q_n+hG(P_{n+1},Q_n),`$ (8)
with its adjoint. The concatenation of an integrator with its adjoint guarantees a time-reversible method.) This method is a simple example of a class of symplectic integrators for nonseparable Hamiltonians.
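The implicit relations in the GLA can be solved by simple fixed-point iteration when the time step is small. The sketch below (our own illustration; the toy Hamiltonian $`H=(P^2+Q^2)/2+P^2Q^2/2`$ is an arbitrary assumption, not the NPA system) shows the structure of one step:

```python
def gla_step(P, Q, h, F, G, iters=50):
    """One Generalized Leapfrog step, Eq. (7); implicit stages are solved
    by fixed-point iteration (adequate for small time steps h)."""
    P_half = P
    for _ in range(iters):  # P_{n+1/2} = P_n + (h/2) F(P_{n+1/2}, Q_n)
        P_half = P + 0.5 * h * F(P_half, Q)
    Q_new = Q
    for _ in range(iters):  # Q_{n+1} = Q_n + (h/2)[G(P_half,Q_n) + G(P_half,Q_{n+1})]
        Q_new = Q + 0.5 * h * (G(P_half, Q) + G(P_half, Q_new))
    P_new = P_half + 0.5 * h * F(P_half, Q_new)  # explicit final half-step
    return P_new, Q_new

# Toy nonseparable Hamiltonian (assumption): H = (P^2 + Q^2)/2 + P^2 Q^2 / 2
H = lambda P, Q: 0.5 * (P * P + Q * Q) + 0.5 * P * P * Q * Q
F = lambda P, Q: -Q - P * P * Q   # -dH/dQ
G = lambda P, Q: P + P * Q * Q    #  dH/dP

P, Q = 1.0, 1.0
E0 = H(P, Q)
for _ in range(10000):
    P, Q = gla_step(P, Q, 0.01, F, G)
print(f"relative energy error after 10^4 steps: {abs(H(P, Q) - E0) / E0:.2e}")
```

Being symplectic and time-reversible, the method shows a bounded energy error of order $`h^2`$ rather than a secular drift.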
Applying the GLA to the NPA equations of motion gives
$`p_{i,n+1/2}`$ $`=`$ $`p_{i,n}-{\displaystyle \frac{h}{2}}s_nV_n^{1/3}\nabla _iU(V_n^{1/3}𝐪_n)`$ (9a)
$`\pi _{v,n+1/2}`$ $`=`$ $`\pi _{v,n}+{\displaystyle \frac{h}{2}}s_n[𝒫(𝐪_n,𝐩_{n+1/2},V_n,s_n)-P_{ext}]`$ (9b)
$`\pi _{s,n+1/2}`$ $`=`$ $`\pi _{s,n}+{\displaystyle \frac{h}{2}}\left({\displaystyle \sum_{i=1}^{N}}{\displaystyle \frac{p_{i,n+1/2}^2}{m_iV_n^{2/3}s_n^2}}-gk_BT\right)`$ (9c)
$`-{\displaystyle \frac{h}{2}}\mathrm{\Delta }\mathcal{H}(𝐪_n,𝐩_{n+1/2},V_n,\pi _{v,n+1/2},s_n,\pi _{s,n+1/2})`$
$`s_{n+1}`$ $`=`$ $`s_n+{\displaystyle \frac{h}{2}}(s_n+s_{n+1}){\displaystyle \frac{\pi _{s,n+1/2}}{Q_s}}`$ (9d)
$`V_{n+1}`$ $`=`$ $`V_n+{\displaystyle \frac{h}{2}}(s_n+s_{n+1}){\displaystyle \frac{\pi _{v,n+1/2}}{Q_v}}`$ (9e)
$`q_{i,n+1}`$ $`=`$ $`q_{i,n}+{\displaystyle \frac{h}{2}}\left({\displaystyle \frac{1}{s_nV_n^{2/3}}}+{\displaystyle \frac{1}{s_{n+1}V_{n+1}^{2/3}}}\right){\displaystyle \frac{p_{i,n+1/2}}{m_i}}`$ (9f)
$`\pi _{s,n+1}`$ $`=`$ $`\pi _{s,n+1/2}+{\displaystyle \frac{h}{2}}\left({\displaystyle \sum_{i=1}^{N}}{\displaystyle \frac{p_{i,n+1/2}^2}{m_iV_{n+1}^{2/3}s_{n+1}^2}}-gk_BT\right)`$ (9g)
$`-{\displaystyle \frac{h}{2}}\mathrm{\Delta }\mathcal{H}(𝐪_{n+1},𝐩_{n+1/2},V_{n+1},\pi _{v,n+1/2},s_{n+1},\pi _{s,n+1/2})`$
$`\pi _{v,n+1}`$ $`=`$ $`\pi _{v,n+1/2}+{\displaystyle \frac{h}{2}}s_{n+1}[𝒫(𝐪_{n+1},𝐩_{n+1/2},V_{n+1},s_{n+1})-P_{ext}]`$ (9h)
$`p_{i,n+1}`$ $`=`$ $`p_{i,n+1/2}-{\displaystyle \frac{h}{2}}s_{n+1}V_{n+1}^{1/3}\nabla _iU(V_{n+1}^{1/3}𝐪_{n+1})`$ (9i)
As in the case of the constant volume Nosé-Poincaré algorithm, the GLA for the NPA is explicit - this is not necessarily the case for a general nonseparable Hamiltonian. Note that Eq. 9c requires the solution of a scalar quadratic equation for $`\pi _{s,n+1/2}`$. Details of how to solve this equation without involving subtractive cancellation can be found in Ref. 12.
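Since $`\mathrm{\Delta }\mathcal{H}`$ contains $`\pi _s^2/2Q_s`$, Eq. 9c is quadratic in $`\pi _{s,n+1/2}`$. Writing it as $`(h/4Q_s)x^2+x-b=0`$, where $`x=\pi _{s,n+1/2}`$ and $`b`$ collects all $`x`$-independent terms (our own shorthand, not the paper's notation), the rationalised root avoids the cancellation; a minimal sketch:

```python
import math

def pi_s_half(b, h, Q_s):
    """Stable root of (h/(4 Q_s)) x^2 + x - b = 0.

    Algebraically equal to x = (-1 + sqrt(1 + h*b/Q_s)) * 2*Q_s/h, but free of
    the subtractive cancellation that occurs when h*b/Q_s is small.
    """
    return 2.0 * b / (1.0 + math.sqrt(1.0 + h * b / Q_s))

# small-h check: the root reduces to b, the Delta-H-free explicit update
print(pi_s_half(b=0.3, h=1e-4, Q_s=2.5))   # ~0.3
```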
## 4 Test Simulation Results and Summary
In order to evaluate this method, simulations were performed using an embedded atom potential for aluminum. We report test simulations on a system of 256 particles with periodic boundary conditions for an aluminum melt at $`T=1000K`$ and $`P=0`$. For this model mass is measured in amu, distance in Å and energy in eV; the natural time unit of the simulation is then 10.181fs, that is, a simulation time step of 0.1 corresponds to an actual time step of 1.0181fs.
First the stability of the method was tested. Fig. 1(a) shows the value of the NPA Hamiltonian (a conserved quantity) as a function of time in a long run. The trajectory shown here was begun after initial equilibration at 1000K for 2$`\times 10^6`$ time steps (2.03ns). In this simulation, the values of $`Q_v`$ and $`Q_s`$ (in reduced units) were 10<sup>-4</sup> and 2.5, respectively. The stability of the method is excellent, giving no noticeable drift in $`\mathcal{H}_{\text{NPA}}`$ over the course of a long trajectory. The pressure and temperature trajectories for this run are also shown in Figs. 1(b) and 1(c), respectively.
In Fig. 2(a)-(c), we show the ability of the method to regulate the pressure, temperature and density, respectively, for three sets of extended variable masses. (The values for $`Q_s`$ were larger here than those used in Figure 1 because small values of that variable lead to instabilities when the initial temperature is far from the target temperature.) The system was initialized to an fcc lattice of initial density 0.06021Å<sup>-3</sup> with the individual velocity components chosen from a Maxwell-Boltzmann distribution at 100K. The simulation was then run with the NPA with $`T=1000K`$ and $`P=0`$. In all cases, the instantaneous pressure, temperature and density evolve quickly and stabilize about their desired values.
The GLA has a global error that is second-order in the time step. To demonstrate that this also is true in our results, a series of simulations were performed using various values for the time step. The system was initialized in an identical manner to that described in the last paragraph and then run for a total time of 2.0362 ps. Figure shows, for several combinations of extended variable masses, a log-log plot of the energy error, as estimated by the standard deviation of $`\mathcal{H}_{\text{NPA}}`$ (Eq. 4), versus the time step. One notes that here smaller values of $`Q_v`$ lead to smaller fluctuations in $`\mathcal{H}_{\text{NPA}}`$.
Finally, to demonstrate that the method yields relevant dynamical quantities, the normalized velocity autocorrelation function, $`C(t)=𝐯(t)𝐯(0)/𝐯(0)𝐯(0)`$, was calculated using our constant NPT algorithm (with $`Q_v`$ and $`Q_s`$ as in Fig. 1) and compared to the same quantity calculated using standard constant NVE molecular dynamics (with a velocity-Verlet integrator). The NVE simulations were run at an energy and density corresponding to the average energy and density for the constant NPT simulations. This comparison is shown in Figure . Both systems were first equilibrated at 1000K for 200,000 steps (203.6 ps) and run for 20,000 steps (20.36 ps) to collect averages. $`C(t)`$ for the Nosé-Poincaré-Andersen method for constant NPT molecular dynamics is indistinguishable in this figure from that of the NVE simulation.
## 5 Acknowledgements
We gratefully acknowledge Steve Bond and Ben Leimkuhler for helpful conversations and invaluable advice, as well as the Kansas Center for Advanced Scientific Computing for the use of their computer facilities. We also would like to thank the National Science Foundation (under grants CHE-9500211 and DMS-9627330) as well as the University of Kansas General Research Fund for their generous support of this research.
# Effects of carrier concentration on the superfluid density of high-Tc cuprates
## Abstract
The absolute values and temperature, T, dependence of the in-plane magnetic penetration depth, $`\lambda _{ab}`$, of La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub> and HgBa<sub>2</sub>CuO<sub>4+δ</sub> have been measured as a function of carrier concentration. We find that the superfluid density, $`\rho _s`$, changes substantially and systematically with doping. The values of $`\rho _s(0)`$ are closely linked to the available low energy spectral weight as determined by the electronic entropy just above T<sub>c</sub> and the initial slope of $`\rho _s(T)/\rho _s(0)`$ increases rapidly with carrier concentration. The results are discussed in the context of a possible relationship between $`\rho _s`$ and the normal-state (or pseudo) energy gap.
Superconductivity arises from the binding of electrons into Cooper pairs thereby forming a superfluid with a superconducting energy gap, $`\mathrm{\Delta }`$, in the single-particle excitation spectrum. In high-temperature superconductors (HTS) $`\mathrm{\Delta }`$ has essentially $`d_{x^2y^2}`$ symmetry in k-space with $`\mathrm{\Delta }_k=\mathrm{\Delta }_0cos(2\varphi )`$ , where $`\varphi =arctan(k_y/k_x)`$ and $`\mathrm{\Delta }_0`$ is the superconducting gap amplitude which will in general be $`\varphi `$ dependent. Changes in carrier concentration affect the superconducting \[2-8\] and normal state (NS) \[4-7\] properties of HTS and there is evidence \[3-7\] that in addition to $`\mathrm{\Delta }_k`$ there is a normal state (or pseudo) gap, $`\mathrm{\Delta }_N`$, in the NS energy excitation spectrum in under- and optimally doped samples which increases with decreasing doping. The maximum gap amplitude shows little variation with underdoping even though T<sub>c</sub> is reduced \[3-8\], in disagreement with the standard mean-field Bardeen-Cooper-Schrieffer (BCS) theory. This unusual behaviour is probably linked to the presence of $`\mathrm{\Delta }_N`$ . However, fundamental problems such as the origin of $`\mathrm{\Delta }_N`$ and its possible effect on the superfluid density, $`\rho _s`$, have not been clearly resolved experimentally or theoretically.
The physical quantity most directly associated with $`\rho _s`$ is the magnetic penetration depth $`\lambda `$. Appropriate systems to investigate $`\rho _s`$ as a function of doping are La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub> and HgBa<sub>2</sub>CuO<sub>4+δ</sub>. Both have a simple crystal structure with one CuO<sub>2</sub> plane per unit cell, can have their carrier concentration controlled, and there is experimental evidence suggesting the presence of $`\mathrm{\Delta }_N`$ which closes with doping \[3-5,9\]. Here we report in-plane penetration depth, $`\lambda _{ab}`$, measurements for high-quality La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub> (LSCO) with x = 0.10, 0.15, 0.20, 0.22, 0.24 measured by the ac-susceptibility ($`acs`$) and muon spin relaxation ($`\mu `$SR) techniques and for HgBa<sub>2</sub>CuO<sub>4+δ</sub> (Hg-1201) with $`\delta `$ = 0.10, 0.37 measured only by $`\mu `$SR. We find systematic changes in $`\rho _s`$ with carrier concentration and a correlation with $`\mathrm{\Delta }_N`$.
Single-phase polycrystalline samples of LSCO were prepared in Cambridge using solid-state reaction procedures. No other phases were detected by powder x-ray diffraction and the phase purity is thought to be better than 1%. Lattice parameters were in good agreement with published work . High field magnetic susceptibility measurements showed no signatures of excess paramagnetic centres. The measured T<sub>c</sub>’s are 30, 37.7, 36, 27.5 and 20.3 K for x = 0.10, 0.15, 0.20, 0.22 and 0.24, respectively. These values are also in very good agreement with previous measurements . $`\mu `$SR experiments as a function of T were performed on the same powders for x = 0.10 and 0.15. Although unoriented powders can be used to determine $`\lambda _{ab}`$ by $`\mu `$SR , the $`acs`$ technique requires the powders to be magnetically aligned . To eliminate grain agglomerates, powders were ball-milled in ethanol and dried after adding a deflocculant. Scanning electron microscopy confirmed the absence of grain boundaries and showed that the average grain diameter was $`5\mu m`$. The powders were mixed with a 5 min curing epoxy and aligned in a static field of 12T at room temperature. Debye-Scherrer x-ray scans showed that $`90\%`$ of the grains were aligned within $`2.0^o`$. Low-field susceptibility measurements were performed at an $`ac`$-field H<sub>ac</sub> = 1 G rms (parallel to the c-axis) and a frequency f = 333 Hz down to 1.2K. Details of the application of London’s model for deriving $`\lambda `$ from the measured susceptibility can be found in an earlier publication . Transverse-field-cooled $`\mu `$SR experiments were performed at 400 Gauss at ISIS, Rutherford Appleton Laboratory. The field produced a flux-line lattice whose field distribution was probed by muons. The depolarisation rate, $`\sigma (T)`$, of the initial muon spin is proportional to $`\lambda _{ab}^{-2}(T)`$ . Checks were made to ensure that the values of $`\lambda _{ab}`$ obtained were independent of the applied field. The Hg-1201 \[$`\delta `$ = 0.10 (T<sub>c</sub> = 60K) and 0.37 (T<sub>c</sub> = 35K)\] samples were prepared in Houston by the controlled solid-vapour reaction technique .
The values of $`\lambda _{ab}(0)`$ derived from the $`acs`$ data for LSCO are 0.28, 0.26, 0.197, 0.193, 0.194 $`\mu `$m for x = p = 0.10, 0.15, 0.20, 0.22 and 0.24, respectively. Here p is the hole content per planar copper atom. The estimated error for $`\lambda _{ab}(0)`$ obtained by the $`acs`$ technique is $`\pm 15\%`$ and within this uncertainty the $`\lambda _{ab}(0)`$ values agree with the $`\mu `$SR measurements. We thus find that $`\lambda _{ab}^{-2}(0)`$ is strongly suppressed on the underdoped side, including optimal doping, but there is no suppression with increasing overdoping (up to p = 0.24) in contrast to reports for Tl<sub>2</sub>Ba<sub>2</sub>CuO<sub>6+δ</sub> . Values of $`\lambda _{ab}(0)`$ as obtained by $`\mu `$SR for Hg-1201 are 0.194 and 0.148 $`\mu `$m for $`\delta `$ = 0.10 and 0.37, respectively. We note that $`\delta `$ = 0.10 and 0.37 in Hg-1201 correspond to p = 0.075 and 0.22, respectively .
The T-dependence of $`\lambda _{ab}`$ for LSCO is shown in Fig. 1(a) as a plot of $`[1/\lambda _{ab}(T)]^2\propto \rho _s(T)`$. Data for x = 0.10 and 0.15 obtained by $`\mu `$SR are also included for comparison. Overall there is good agreement between the results from the two techniques. From the $`acs`$ data we find that the initial linear term in $`\lambda _{ab}(T)`$, characteristic of a clean d-wave superconductor, persists up to the highest doping measured (x = 0.24) in agreement with electronic specific heat studies on polycrystalline LSCO samples from the same batch as those studied here . Figure 1(b) depicts data for Hg-1201 powders measured only by $`\mu `$SR, including data from Ref. for a Hg-1201 sample (also from Houston) with $`\delta `$ = 0.154 (p = 0.17). As in LSCO, we observe a change in the shape of $`\sigma (T)\propto [1/\lambda _{ab}(T)]^2`$ of Hg-1201 with doping. Namely, in the underdoped region $`[1/\lambda _{ab}(T)]^2`$ shows a more pronounced curvature. Taking the slope of the low-T linear term to be proportional to $`\rho _s(0)/\mathrm{\Delta }_0`$, the observed trend of $`[1/\lambda _{ab}(T)]^2`$ with p would imply that $`\mathrm{\Delta }_0`$ remains approximately constant in the underdoped region and decreases rapidly with overdoping.
Figure 2 shows a comparison of the present results for LSCO with specific heat data taken on the same samples where $`\mathrm{\Delta }_N`$ was observed for x = p $`<`$ 0.19. In the inset we observe a good correlation between $`[1/\lambda _{ab}(0)]^2`$ and \[S/T(T<sub>c</sub>) - S/T(2K)\] both plotted versus Sr content x, where S(T) is the electronic entropy obtained by integrating the electronic specific heat coefficient $`\gamma `$(T) from 0 to T. The quantity \[S/T(T<sub>c</sub>) - S/T(2K)\] is a measure of the energy-dependent NS electronic density of states (DOS), $`g_n`$(E), averaged over $`\pm 2k_BT_c`$ around the Fermi energy E<sub>F</sub>. The effect of an energy-dependent DOS on the London penetration depth $`\lambda _L`$ or $`\rho _s(0)`$ is not usually considered in standard theory which implicitly assumes a constant DOS and a parabolic E($`\stackrel{}{k}`$) dispersion relation. It has been argued elsewhere that $`\rho _s(0)=4\pi ^2\langle v_x^2g_n(E)\rangle /e^2`$ where the average is over an (anisotropic) energy shell $`E_F\pm \mathrm{\Delta }_0`$. Note that this result agrees with the standard expression for the NS conductivity and the usual relation between $`\lambda _L(0)`$ and the real part of the frequency-dependent electronic conductivity in the normal and superconducting states $`\sigma _1^n(\omega )`$ and $`\sigma _1^s(\omega )`$, respectively. Namely, $`\lambda _L(0)`$ is determined by the area under the \[$`\sigma _1^n(\omega )-\sigma _1^s(\omega )`$\] curve in the frequency range $`0<h\omega /2\pi <2\mathrm{\Delta }_0`$. Thus the inset to Fig. 2 suggests that the strong decrease of $`\rho _s(0)`$ with x from x = 0.20 to 0.10 is related to the suppression of spectral weight within the energy range $`E_F\pm \mathrm{\Delta }_0`$ due to the presence of $`\mathrm{\Delta }_N`$.
The main panel in Fig. 2 shows a correlation between the doping dependence of the initial linear terms of $`\lambda _{ab}(T)`$ and the low-T specific heat coefficient $`\gamma `$, both quantities being related to the number of excited quasi-particles n<sub>e</sub>(T). For low values of x, n<sub>e</sub>(T=10) is much smaller than expected from the T<sub>c</sub> value and this probably implies that the average value of $`\mathrm{\Delta }_0(\varphi )`$ is significantly larger than T<sub>c</sub>. The rapid rise above x = 0.20 may arise from the combined effects of the closure of $`\mathrm{\Delta }_N`$ at x = 0.19 , the decreasing T<sub>c</sub> plus the fact that for LSCO there is significant pile up of states near E<sub>F</sub> in the overdoped region $`0.20<x<0.35`$ .
In Fig. 3(a) we present the LSCO $`acs`$ data as $`[\lambda _{ab}(0)/\lambda _{ab}(T)]^2`$ versus T/T<sub>c</sub> and compare the data with the mean-field calculation for a d-wave weak-coupled BCS superconductor with a cylindrical Fermi surface (FS) which gives $`\mathrm{\Delta }_0/T_c\approx 2.14`$ . There appears to be a systematic deviation of the data from the weak-coupling T-dependence with a greater (weaker) curvature on the underdoped (overdoped) side. We note, however, that particularly in the overdoped samples there is positive curvature near T<sub>c</sub> which may arise from a small amount of doping inhomogeneity giving a distribution of T<sub>c</sub> values in this region where $`dT_c/dp`$ is maximal . The effect of this is to rescale the curves with a slightly lower value of T<sub>c</sub>. We have modelled $`\rho _s(T)`$ using the d-wave T-dependence and a normal distribution of T<sub>c</sub> values with standard deviation of 3%, 5% and 9% for x = 0.20, 0.22 and 0.24, respectively. These corrections, plotted in Fig. 3(b), bring the curves for x = 0.20 and 0.22 into good agreement with weak-coupling BCS with $`\mathrm{\Delta }_0/T_c\approx 2.14`$. Similar deductions, as to the magnitude and p-dependence of $`\mathrm{\Delta }_0/T_c`$, were made in the specific heat studies on these overdoped samples . However, the x = 0.24 sample still shows significant deviations that possibly reflect changes in the electronic structure. This would not be surprising given the changes in the FS with the rapid crossover from hole-like to electron-like states near x = 0.27 . We note that the data for x = 0.24 is in excellent agreement with a weak-coupling d-wave calculation for a rectangular FS .
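The T<sub>c</sub>-distribution correction can be reproduced with a short numerical model. The Python sketch below is our own illustration, not the analysis code used here: it assumes the standard gap interpolation $`\mathrm{\Delta }(T)=\mathrm{\Delta }_0\mathrm{tanh}(1.74\sqrt{T_c/T-1})`$ with $`\mathrm{\Delta }_0/T_c=2.14`$ and a cylindrical FS, and averages the resulting d-wave superfluid density over a normal distribution of T<sub>c</sub> values:

```python
import numpy as np

def rho_s(T, Tc, D0_over_Tc=2.14):
    """rho_s(T)/rho_s(0) for a d-wave gap Delta_0*cos(2*phi), cylindrical FS."""
    if T >= Tc:
        return 0.0
    D0 = D0_over_Tc * Tc * np.tanh(1.74 * np.sqrt(Tc / T - 1.0))
    phi = np.linspace(0.0, 2 * np.pi, 181)[:-1]
    eps = np.linspace(0.0, 60.0 * Tc, 3001)
    E = np.sqrt(eps[None, :]**2 + (D0 * np.cos(2 * phi[:, None]))**2)
    dfdE = 1.0 / (2 * T * (1.0 + np.cosh(np.clip(E / T, 0.0, 500.0))))  # -df/dE
    y = dfdE.sum(axis=1) * (eps[1] - eps[0])   # quasiparticle (Yosida-like) integral
    return 1.0 - 2.0 * float(np.mean(y))

def rho_s_spread(T, Tc0, spread, n=25, seed=0):
    """Average over a normal distribution of Tc, std = spread * Tc0."""
    rng = np.random.default_rng(seed)
    return float(np.mean([rho_s(T, Tc) for Tc in rng.normal(Tc0, spread * Tc0, n)]))

for t in (0.2, 0.5, 0.8, 0.95):
    print(f"T/Tc = {t:.2f}:  clean {rho_s(t, 1.0):.3f},  "
          f"5% Tc spread {rho_s_spread(t, 1.0, 0.05):.3f}")
```

The spread mainly rounds the curve near T<sub>c</sub>, mimicking the positive curvature seen in the overdoped samples, while leaving the low-temperature linear regime essentially unchanged.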
In contrast to the overdoped samples the optimal and underdoped samples \[Fig. 3(a)\], both possessing very small rounding near T<sub>c</sub>, diverge significantly from the weak-coupling curve and in the opposite direction. We note that accounting for inhomogeneities in these samples will, if anything, move the curves even further from the weak-coupling BCS fit.
A central conclusion of the present work is that there is a crossover in both $`\rho _s(0)`$ and $`\rho _s(T)`$ near p = 0.20. Such behaviour is characteristic of many other NS and superconducting properties which have been interpreted in terms of the presence of $`\mathrm{\Delta }_N`$ in the underdoped region. The rate of depression of T<sub>c</sub> due to impurity scattering ($`1/\gamma `$ at T<sub>c</sub>) remains constant across the overdoped region, then rises sharply with the opening of $`\mathrm{\Delta }_N`$, beginning in the lightly overdoped region at p $`\approx `$ 0.19 . Boebinger and coworkers using intense pulsed magnetic fields observe a crossover from insulating to metallic behaviour at T = 0 occurring near p = 0.18 and angle-resolved photoemission studies show the development, in the overdoped region, of a full NS Fermi surface . In this region the resistivity coefficient, $`[\rho (T)-\rho (0)]/T`$, exhibits a low-T suppression due to the opening of $`\mathrm{\Delta }_N`$ .
The proper means of incorporating the pseudogap effects within a realistic model, and indeed the very nature of the pseudogap is a matter of current debate. However, a key characteristic of $`\mathrm{\Delta }_N`$ is the loss of NS spectral weight near E<sub>F</sub>. The loss of spectral weight can cause, as discussed above, both a strong reduction in $`\rho _s(0)`$ and, in a simple model, enhanced curvature in $`\rho _s(T)/\rho _s(0)`$ above the BCS weak-coupling d-wave T-dependence , the very features we observe for the optimal and underdoped samples.
We note that our data are in reasonable agreement with earlier reports for slightly underdoped grain-aligned HgBa<sub>2</sub>Ca<sub>2</sub>Cu<sub>3</sub>O<sub>8+δ</sub> and single crystal LSCO with x = 0.15 . In contrast to the strong p dependence we have found in $`[\lambda _{ab}(0)/\lambda _{ab}(T)]^2`$ for LSCO and Hg-1201, studies in YBCO reported that $`[\lambda _{ab}(0)/\lambda _{ab}(T)]^2`$ scaled approximately with $`T/T_c`$ for various dopings, at all temperatures. However, systematic changes in $`[\lambda _{ab}(0)/\lambda _{ab}(T)]^2`$ with p were noted at the time although these were too small to allow further analysis. This may simply be due to the fact that the YBCO samples were not as heavily underdoped as the x = 0.10 LSCO sample. We also note that YBCO is complicated by a mixed s+d order parameter and the effect of the Cu-O chains on the total $`\rho _s`$ .
In summary, using the $`acs`$ and $`\mu `$SR techniques we have obtained consistent and systematic results on the effects of carrier concentration on $`\rho _s`$ of monolayer cuprates. In the overdoped region we find a more or less constant value of $`\rho _s(0)`$ (up to p = 0.24), and $`\rho _s(T)/\rho _s(0)`$ is in reasonably good agreement with the weak-coupling d-wave T-dependence. In the optimal and underdoped regions $`\rho _s(0)`$ is rapidly suppressed and above 0.1T<sub>c</sub> there is a marked departure of $`\rho _s(T)/\rho _s(0)`$ from the weak-coupling curve. In a comparative study with available specific heat data we found evidence supporting a link in the behaviour of $`\rho _s`$ and the normal state gap $`\mathrm{\Delta }_N`$.
We thank P.A. Lee, P.B. Littlewood, T. Xiang and J.F. Annett for stimulating discussions; J. Chrosch for assistance with part of the x-ray analysis of the grain-aligned samples, and P. King and C. Scott (ISIS) for technical support during the $`\mu `$SR measurements. C.P. thanks Trinity College, Cambridge for financial support and J.B. the European Union for a Marie Curie grant.
# LARGE SCALE STRUCTURES IN THE UNIVERSE
## Abstract
In this brief communication we show why superclusters would naturally arise in the universe.
E-mail: birlasc@hd1.vsnl.net.in
It is well known in the theory of the Random Walk or Brownian motion that
$$R=\sqrt{N}l$$
(1)
holds where $`R`$ denotes the dimension of the system, $`N`$ the number of steps or events and $`l`$ represents a mean free path. It has already been argued that in the context of the universe as a whole, with $`N`$ representing the total number of particles, (1) gives the Eddington relation with $`l`$ being the pion Compton wavelength.
Let us now consider $`N\sim 10^6`$ constituents in the universe. Then, taking $`R\sim 10^{28}`$ cm for the radius of the universe, (1) gives
$$l\sim 10^{25}cm$$
(2)
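The arithmetic behind (2) is immediate; a one-line check (where $`R\sim 10^{28}`$ cm is the input assumed above):

```python
import math
R, N = 1e28, 1e6   # radius of the universe [cm], number of constituents
print(f"l = R / sqrt(N) = {R / math.sqrt(N):.1e} cm")   # ~1e25 cm, Eq. (2)
```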
Indeed there is observational evidence for (2): We can easily see that (2) holds for superclusters, both in terms of their size $`l`$ and their number $`N`$. Further, (1) shows that these superclusters must have a two-dimensional character, which is also true.
It is of course well known that one cannot apply the theory of Brownian motion to stars or even galaxies because they are gravitationally bound. However, for superclusters with the huge separating voids, Brownian motion would be a reasonable approximation, as can be seen by the fact that (2) is valid. So it is natural that such superclusters should arise.
It is interesting to note also that recently, in a completely different context, it was suggested that there could be a large scale quantization, giving precisely (2) as the quantized length.
## 1 Introduction
The matrix theory is the large-$`N`$ limit of the 10-dimensional supersymmetric Yang–Mills theory dimensionally reduced to 0 spatial dimensions. When the coupling constant $`g_{\mathrm{YM}}^2`$ is large, the matrix theory describes 11-dimensional M-theory while the limit of small $`g_{\mathrm{YM}}^2`$ is associated with 10-dimensional IIA superstring. The matrix theory correctly reproduces properties of D-branes in the superstring theory including their interactions to the leading order in violation of supersymmetry, e.g. at small velocities or large separations between D-branes or weak magnetic fields living on D-branes.
In this talk I consider the formulation of the matrix theory at finite temperature given by an Euclidean path integral with boundary conditions along the compactified “time” which are periodic for the Yang–Mills fields and antiperiodic for fermionic superpartners. I present the result of the computation of the effective potential between static D0-branes in the one-loop approximation and show that it agrees with an analogous computation in superstring theory, where an integration is to be performed over the non-trivial holonomies of the temporal components of Abelian gauge fields living on the D0-branes. This agreement is to the leading order in the supersymmetry violation by temperature, where the one-loop approximation is reliable, thus providing one more argument supporting the validity of the matrix theory. The computed effective static potential which is short-ranged and attractive has consequences for thermal properties of D0-branes.
## 2 Matrix theory at finite temperature
The matrix theory is formulated by the reduction of ten dimensional supersymmetric Yang-Mills theory
$$S_{\mathrm{YM}}[A,\theta ]=\frac{1}{g_{YM}^2}\int d\tau \,\mathrm{Tr}\left(\frac{1}{4}F_{\mu \nu }^2+\frac{i}{2}\theta \gamma _\mu D_\mu \theta \right)$$
(1)
to one temporal and zero spatial dimensions: $`A_\mu =A_\mu \left(\tau \right)`$, $`\theta =\theta \left(\tau \right)`$.
The thermal partition function is given by the Euclidean path integral
$$Z_{\mathrm{YM}}=\int \left[dA\left(\tau \right)\right]\left[d\theta \left(\tau \right)\right]e^{-S_{\mathrm{YM}}[A,\theta ]},$$
(2)
where the time-coordinate is periodic. The bosonic and fermionic coordinates have, respectively, periodic and antiperiodic boundary conditions
$`A_\mu \left(\tau +\beta \right)=A_\mu \left(\tau \right),\theta \left(\tau +\beta \right)=-\theta \left(\tau \right),\beta =1/T,`$ (3)
where $`T`$ is the temperature. Gauge fixing involves introducing ghost fields which have periodic boundary conditions.
The representation (2) of the thermal partition function can be derived in the standard way starting from the known Hamiltonian of the matrix theory and representing the thermal partition function
$$Z_{\mathrm{YM}}=\mathrm{Tr}\,e^{-\beta H}$$
(4)
via the path integral. The trace is calculated over all states obeying Gauss’s law, which is taken care of by the integration over $`A_0`$ in (2). This representation of the matrix theory at finite temperature has been discussed in Refs. .
The diagonal components of the gauge fields, $`\stackrel{}{a}^\alpha \equiv \stackrel{}{A}^{\alpha \alpha }`$, are interpreted in the matrix theory as the position of the $`\alpha `$-th D0-brane and should be treated as collective variables. Static configurations play a special role since they satisfy classical equations of motion with the periodic boundary conditions and dominate the path integral as $`g_{\mathrm{YM}}^2\rightarrow 0`$. There are no such static zero modes for fermionic components since they would not satisfy the antiperiodic boundary conditions. This is an important difference from the zero temperature case and a manifestation of the fact that supersymmetry is explicitly broken by non-zero temperature.
An effective action for these coordinates is constructed by integrating the off-diagonal components of the gauge fields, the fermionic variables and the ghosts:
$$S_{\mathrm{eff}}\left[\stackrel{}{a}^\alpha \right]\equiv -\mathrm{ln}\int \prod_{\beta \ne \alpha }\left[da_0^\alpha \right]\left[dA_\mu ^{\alpha \beta }\right]\left[d\theta \right]\left[d\mathrm{ghost}\right]e^{-S_{\mathrm{YM}}-S_{\mathrm{gf}}-S_{\mathrm{gh}}}.$$
(5)
Generally, this integration can only be done in a simultaneous loop expansion and expansion in the number of derivatives of the coordinates $`\stackrel{}{a}^\alpha `$. Such an expansion is accurate in the limit where $`\left|\stackrel{}{a}^\alpha -\stackrel{}{a}^\beta \right|`$ are large for each pair of D0-branes and where their velocities are small. The remaining dynamical problem then defines the statistical mechanics of a gas of D0-branes:
$$Z_{\mathrm{YM}}=\int \prod_{\tau ,\alpha }\left[d\stackrel{}{a}^\alpha \left(\tau \right)\right]e^{-S_{\mathrm{eff}}\left[\stackrel{}{a}^\alpha \right]}.$$
(6)
## 3 One-loop computation in matrix theory
The computation of the effective action $`S_{\mathrm{eff}}`$ for the interaction between static D0-branes at one loop is standard for the matrix theory.
The gauge field is decomposed into a diagonal part, which satisfies the classical equation of motion, and a fluctuating off-diagonal part:
$$A_\mu ^{\alpha \beta }=a_\mu ^\alpha \delta ^{\alpha \beta }+g_{\mathrm{YM}}\overline{A}_\mu ^{\alpha \beta }.$$
(7)
The gauge is fixed by
$$D_\mu ^{\alpha \beta }\overline{A}_\mu ^{\alpha \beta }=0,$$
(8)
where
$`D_0^{\alpha \beta }=\partial _0-i\left(a_0^\alpha -a_0^\beta \right),\stackrel{}{D}^{\alpha \beta }=-i\left(\stackrel{}{a}^\alpha -\stackrel{}{a}^\beta \right).`$ (9)
(9)
This adds the Faddeev-Popov ghosts to the action
$$S_{\mathrm{gh}}=\sum_{\alpha \beta }\left\{\overline{c}^{\alpha \beta }\left(D_\mu ^{\alpha \beta }\right)^2c^{\beta \alpha }+ig_{\mathrm{YM}}\overline{c}^{\beta \alpha }D_\mu ^{\alpha \beta }[\overline{A}_\mu ,c]\right\}.$$
(10)
There is a residual Abelian gauge invariance
$$\overline{A}_\mu ^{\alpha \beta }\rightarrow \overline{A}_\mu ^{\alpha \beta }e^{i\left(\chi ^\alpha -\chi ^\beta \right)},\qquad a_\mu ^\alpha \rightarrow a_\mu ^\alpha +\partial _\mu \chi ^\alpha ,$$
(11)
which can be used to make $`a_0^\alpha `$ independent of the compactified time-variable ($`\partial _0a_0^\alpha =0`$). In contrast to the zero-temperature case, $`a_0^\alpha `$’s cannot be completely removed because of the existence of the nontrivial holonomy
$$\mathrm{P}e^{i\int_0^\beta d\tau A_0\left(\tau \right)}=\mathrm{\Omega }^{\dagger }\mathrm{diag}(e^{i\beta a_0^1},\ldots ,e^{i\beta a_0^\mathrm{N}})\mathrm{\Omega },$$
(12)
which is known as the Polyakov loop winding around the compact Euclidean time, whose trace is gauge invariant. Due to periodicity, it is chosen $`-\pi /\beta <a_0^\alpha \leq \pi /\beta `$.
Expanding the action to the quadratic order in $`\overline{A},c,\overline{c},\theta `$ and doing the Gaussian integration, one obtains in the standard way
$$S_{\mathrm{eff}}=8\sum_{\alpha <\beta }\left\{\mathrm{Tr}_B\mathrm{ln}\left(\left(D_\mu ^{\alpha \beta }\right)^2\right)-\mathrm{Tr}_F\mathrm{ln}\left(\left(D_\mu ^{\alpha \beta }\right)^2\right)\right\},$$
(13)
where the subscript $`B`$ denotes contributions from the gauge fields and ghosts, whereas $`F`$ denotes those from the adjoint fermions. The determinants should be evaluated with periodic boundary conditions for bosons and antiperiodic boundary conditions for fermions.
The boundary conditions are taken into account by proper Matsubara frequencies, so that
$$e^{-S_{\mathrm{eff}}}=\beta ^\mathrm{N}\int_{-\pi /\beta }^{\pi /\beta }\prod_{\gamma >\alpha }\frac{da_0^\alpha }{2\pi }\prod_{n=-\infty }^{\infty }\left(\frac{\left(\frac{2\pi n}{\beta }+\frac{\pi }{\beta }+a_0^\alpha -a_0^\gamma \right)^2+\left|\stackrel{}{a}^\alpha -\stackrel{}{a}^\gamma \right|^2}{\left(\frac{2\pi n}{\beta }+a_0^\alpha -a_0^\gamma \right)^2+\left|\stackrel{}{a}^\alpha -\stackrel{}{a}^\gamma \right|^2}\right)^8.$$
(14)
Using the formula
$$\prod_{n=-\infty }^{\infty }\left(\frac{2\pi n}{\beta }+\omega \right)=\mathrm{sin}\left(\frac{\beta \omega }{2}\right),$$
(15)
we obtain finally
$$e^{-S_{\mathrm{eff}}}=\beta ^\mathrm{N}\int_{-\pi /\beta }^{\pi /\beta }\prod_{\gamma >\alpha }\frac{da_0^\alpha }{2\pi }\left(\frac{\mathrm{cosh}\beta \left|\stackrel{}{a}^\alpha -\stackrel{}{a}^\gamma \right|+\mathrm{cos}\beta \left(a_0^\alpha -a_0^\gamma \right)}{\mathrm{cosh}\beta \left|\stackrel{}{a}^\alpha -\stackrel{}{a}^\gamma \right|-\mathrm{cos}\beta \left(a_0^\alpha -a_0^\gamma \right)}\right)^8.$$
(16)
The integration over the temporal components $`a_0^\alpha `$ implements the projection onto the gauge invariant eigenstates of the matrix theory Hamiltonian.
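For two D0-branes (N = 2), Eq. (16) reduces to a single integral over $`a_0=a_0^1-a_0^2`$, and the static free energy $`S_{\mathrm{eff}}(L)`$ can be evaluated numerically. The Python sketch below is our own illustration (units with $`\beta =1`$ are an arbitrary choice, and the $`L`$-independent $`\beta ^N`$ prefactor is dropped):

```python
import numpy as np

def S_eff(L, beta=1.0, npts=4001):
    """Effective static potential between a pair of D0-branes from Eq. (16)."""
    a0 = np.linspace(-np.pi / beta, np.pi / beta, npts)
    ratio = (np.cosh(beta * L) + np.cos(beta * a0)) / \
            (np.cosh(beta * L) - np.cos(beta * a0))
    f = ratio**8 * beta / (2.0 * np.pi)
    integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(a0))  # trapezoid rule
    return -np.log(integral)

for L in (0.5, 1.0, 2.0, 4.0, 8.0):
    print(f"L = {L:4.1f}   S_eff = {S_eff(L):+.4e}")
```

The output is negative at all separations and vanishes as $`e^{-2\beta L}`$ at large $`L`$: the short-ranged attractive static potential referred to in the abstract.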
If both bosons and fermions had periodic boundary conditions the determinants would cancel because of supersymmetry. This would give the well-known result that the lowest energy state is a BPS state whose energy does not depend on the relative separation of the D0-branes.
## 4 Comparison with superstring theory
### 4.1 Open string language
The starting point is the thermal partition function of the single open superstring:
$$Z_{1\mathrm{str}}\left(\beta \right)=\mathrm{Tr}\,e^{-\beta H},$$
(17)
where $`H`$ is the superstring Hamiltonian and the trace is over all physical (GSO projected) superstring states.
Since the open string ends on two D0-branes separated by the distance $`L`$, the superstring has Neumann boundary conditions along the temporal direction and Dirichlet boundary conditions along the nine spatial directions. The corresponding superstring spectrum is given by
$$\sqrt{\alpha ^{\prime }}E_N=\sqrt{\frac{L^2}{4\pi ^2\alpha ^{\prime }}+N},$$
(18)
where $`N`$ are eigenvalues of the oscillator number operator.
Knowing the spectrum (18), the thermal partition function of the string gas can be immediately written as
$$Z_{\mathrm{str}}(\beta ,L,\nu )=e^{-\beta F}=\prod _{N=0}^{\infty }\left|\frac{1+e^{-\beta E_N+i\pi \nu }}{1-e^{-\beta E_N+i\pi \nu }}\right|^{2d_N},$$
(19)
where $`d_N`$ stands for the degeneracy of either bosonic or fermionic superstring states at level $`N`$:
$$8\prod _{n=1}^{\infty }\left(\frac{1+e^{-nl}}{1-e^{-nl}}\right)^8=\sum _{N=0}^{\infty }d_Ne^{-Nl}.$$
(20)
For the lowest levels, $`d_0=8`$ and $`E_0=L/2\pi \alpha ^{\prime }`$. The factor of 2 in the exponent $`2d_N`$ in (19) is the famous one by Polchinski and is due to the interchange of the superstring ends. It is crucial in providing the agreement with the matrix theory computation.
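As a cross-check, the degeneracies $`d_N`$ can be read off numerically by expanding the generating function (20) as a truncated power series in $`q=e^{-l}`$. The Python sketch below is ours (the helper names and the truncation order are arbitrary choices); it reproduces $`d_0=8`$.

```python
import numpy as np

def mul_trunc(a, b, nmax):
    """Multiply two power series (coefficient arrays), truncated at degree nmax."""
    return np.convolve(a, b)[:nmax + 1]

def degeneracies(nmax=10):
    """Coefficients d_N of 8*prod_n ((1+q^n)/(1-q^n))^8, Eq. (20), with q = e^{-l}."""
    series = np.zeros(nmax + 1)
    series[0] = 8.0
    for n in range(1, nmax + 1):
        num = np.zeros(nmax + 1); num[0] = 1.0; num[n] = 1.0   # (1 + q^n)
        den = np.zeros(nmax + 1); den[::n] = 1.0               # 1/(1 - q^n) = sum_k q^{kn}
        factor = mul_trunc(num, den, nmax)
        for _ in range(8):                                     # eighth power of the factor
            series = mul_trunc(series, factor, nmax)
    return series

print(degeneracies()[:3])   # starts with d_0 = 8
```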
The physical meaning of Eq. (19) is obvious: the partition function is equal to the ratio of the Fermi and Bose distributions with the power being (twice) the degeneracy of the states.
Equation (19) is derived in Ref. (in Ref. for Dp-branes) by calculating the annulus diagram for the open superstring in compactified Euclidean time of circumference $`\beta `$. The parameter $`\nu `$ has the meaning of the constant U(1) gauge field which enters through the quantized temporal momentum $`p^0=2\pi \left(r-\nu \right)/\beta `$ of the open string whose world-sheet winds around the space-time cylinder, as depicted in Fig. 1.
In the above formula, $`r`$ is integer in the NS sector (associated with space-time bosons) and half-integer in the R sector (associated with space-time fermions).
In order to compare with the Yang-Mills computation, we identify the coordinates of the D0-branes via $`\vec{q}^\alpha =2\pi \alpha ^{\prime }\vec{a}^\alpha `$, so that the separation is $`L=2\pi \alpha ^{\prime }\left|\vec{a}^1-\vec{a}^2\right|`$. Then the integrand in (16) coincides for N=2 with (19) truncated to the massless modes ($`N=0`$), provided $`\pi \nu =\beta \left(a_0^1-a_0^2\right)`$.
The truncation of the stringy modes is justified for $`\beta \gg L`$ (or $`TL\ll 1`$), when the energy gap $`\mathrm{\Delta }`$ between the first two levels is finite. From (18) we get
$$\mathrm{\Delta }=\sqrt{\left(\frac{L}{2\pi \alpha ^{\prime }}\right)^2+\frac{1}{\alpha ^{\prime }}}-\frac{L}{2\pi \alpha ^{\prime }}$$
(21)
and the spectrum can be truncated at the first level when and only when $`\beta \mathrm{\Delta }\gg 1`$. If the temperature is small, this condition is always satisfied unless the length $`L`$ is too large.
The integration over $`a_0`$’s in Eq. (16) corresponds to the integration over $`\nu `$ in Eq. (19). This integration comes about in the string theory as follows. The open-string gauge field $`A_0`$ interacts with D0-branes adding the surface term to the action:
$$S_{\mathrm{int}}=\int dq^\mu A_\mu =\int _0^\beta d\tau \left(A_0(\tau ,\vec{q}^1)-A_0(\tau ,\vec{q}^2)\right)=\pi \nu .$$
(22)
The matrix theory automatically takes into account the integration over $`A_0`$, while in the string theory calculation of Ref. the open-string gauge field is fixed. This integration over $`A_0`$ is needed to enforce Gauss’s law for the charges which the ends of the open string induce on the D-branes. Therefore, the effective potential between static D0-branes in the superstring theory is given by
$$S_{\mathrm{eff}}\left[\vec{a}^\alpha \right]=-\mathrm{ln}\int _{-1}^1d\nu \,Z_{\mathrm{str}}(\beta ,L,\nu ).$$
(23)
This issue will be further discussed in the next Subsection.
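The projection is easy to exhibit numerically. The sketch below (all names and the normalisation convention are ours) evaluates (23) with $`Z_{\mathrm{str}}`$ truncated to the massless level $`N=0`$, i.e. the same truncation that matches the matrix-theory integrand of (16); the $`\nu `$-average is normalised so that the potential vanishes at large separation, which shifts (23) by an $`L`$-independent constant.

```python
import numpy as np

def s_eff(beta, L, alpha_p=1.0, npts=4001):
    """-ln of the nu-average of Z_str truncated to the massless level N = 0."""
    E0 = L / (2.0 * np.pi * alpha_p)            # lowest open-string level, Eq. (18)
    nu = np.linspace(-1.0, 1.0, npts)
    x = np.exp(-beta * E0 + 1j * np.pi * nu)    # Boltzmann factor dressed by the U(1) phase
    Z = np.abs((1.0 + x) / (1.0 - x)) ** 16     # Eq. (19) with 2*d_0 = 16
    return -np.log(np.mean(Z))                  # Eq. (23) up to an L-independent constant

# the attraction deepens as the branes approach (cf. the discussion in Section 5)
for L in (3.0, 2.0, 1.0):
    print(L, s_eff(beta=1.0, L=L))
```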
### 4.2 Closed string language
At least two issues remain unclear in the open string language. Firstly, why should one exponentiate the single string partition function to get the string gas and, secondly, why integrate over $`\nu `$ the string gas partition function (19) rather than, say, the single string partition function (17)? This has a natural explanation in the closed string language.
The passage to a closed string is performed by the standard modular transformation which converts the annulus diagram for an open string into a cylinder diagram for a closed string. The right-hand side of Eq. (17) can then be represented as
$$Z_{1\mathrm{str}}(L,\beta ,\nu )=\frac{8\pi ^4}{\sqrt{2\pi \alpha ^{\prime }}}\int _0^{\infty }\frac{ds}{s^{9/2}}\,e^{-sL^2/2\pi ^2\alpha ^{\prime }}\,\mathrm{\Theta }_2\left(\nu \,\Big|\,\frac{i\beta ^2s}{2\pi ^3\alpha ^{\prime }}\right)\prod _{n=1}^{\infty }\left(\frac{1-e^{-\left(2n+1\right)s}}{1-e^{-2ns}}\right)^8,$$
(24)
where
$$\mathrm{\Theta }_2\left(\nu |iz\right)=\sum _{q=-\infty }^{\infty }e^{-\pi z\left(2q+1\right)^2/4+i\pi \left(2q+1\right)\nu }.$$
(25)
The meaning of Eq. (24) is that of the closed-string propagator, which describes the interaction between D0-branes, rather than the thermal partition function as for an open string.
The sum over $`q`$ in Eq. (24) represents the sum over all possible winding numbers $`w=2q+1`$ of the closed string around the compact dimension $`X_0`$. Only odd winding numbers survive since the contribution of the even ones vanishes due to supersymmetry. The vanishing of the term with zero winding number is analogous to that at zero temperature and is due to the cancellation between the NS-NS and R-R sectors.
When two D0-branes interact, they can exchange several closed strings, not necessarily one. All such exchanges are of the same order of magnitude in the string coupling constant and exponentiate since the closed strings are identical. This is analogous to the exponentiation of the single-gluon exchange when the interaction between static quark and antiquark in the Yang–Mills theory is calculated via the correlator of two Polyakov loops. Therefore, Eq. (19) naturally emerges in the closed string language. It is also clear why there is only one $`\nu `$ for each multi-string term: we have just two interacting D0-branes rather than a gas of D0-branes. This results in Eq. (23).
Each of the closed strings mediating the interaction between D0-branes has its own winding number $`w_i`$. In the open string language, this induces on the D-brane the charge $`\sum _iw_i`$ with respect to the open-string gauge field. Such charged states look suspicious since they are missing at zero temperature, where $`X_0`$ is not compact and there are no windings along the $`X_0`$ direction, so that the total charge equals zero at each value of the time $`\tau `$. But the integration over $`\nu `$ picks up exactly the state with $`\sum _iw_i=0`$, i.e. the one which is not charged! In particular, all states with a single closed string vanish after the integration over $`\nu `$. The leading order contribution to the D0-brane interaction comes from the state with two closed strings having unit winding numbers of opposite signs.
It is worth noting that in the closed string language $`w_i`$ is associated with the NS-NS charge of the closed string. Therefore, the condition $`\sum _iw_i=0`$ implies the vanishing of the total NS-NS charge.
## 5 Discussion
The effective static potential between two D0-branes emerges because supersymmetry is broken by finite temperature. This effect of breaking supersymmetry is somewhat analogous to the velocity effects at zero temperature where the matrix theory and superstring computations agree to the leading order of the velocity expansion . It is thus shown that the leading term in a low temperature expansion is correctly reproduced by the matrix theory.
The effective static potential between D0-branes at one loop is logarithmic and attractive at short distances. The singularity occurs when the distance between the D0-branes vanishes, so that the SU(N) symmetry, which is broken at finite separations, is restored. The integration over the off-diagonal components of the gauge field can no longer be treated in the one-loop approximation! (This issue has been further discussed recently in Ref. .) In the superstring theory, the singularity is exactly the same as in the matrix theory since it is determined only by the massless bosonic modes in the NS sector.
The computed partition functions take into account only thermal fluctuations of the superstring stretched between the D0-branes, but not the fluctuations of the D0-branes themselves. This separation of the degrees of freedom is justified by the fact that D0-branes have a mass $`1/g_s\sqrt{\alpha ^{\prime }}`$ and are very heavy as $`g_s\rightarrow 0`$. To calculate the thermal partition function of D0-branes, a further path integration over their periodic trajectories $`\vec{a}\left(\tau \right)`$ is to be performed as in (6). Classical statistics is not applicable to this problem, due to the singularity of the one-loop effective static potential at small distances, in spite of the fact that D0-branes are very heavy. However, this singularity is only in the classical partition function. The path integral over the periodic trajectories $`\vec{a}\left(\tau \right)`$ cannot diverge since the two-body quantum mechanical problem has a well-defined spectrum.
Acknowledgements
This work is supported in part by the grants INTAS 96–0524 and RFFI 97–02–17927.
no-problem/9903/hep-ph9903519.html
# Gaugino pair production at LHC for the case of nonuniversal gaugino masses
## 1 Introduction
One of the LHC goals is the discovery of supersymmetry. In particular, it is very important to investigate the possibility of discovering non-strongly interacting superparticles (sleptons, higgsinos, gauginos). In ref. (see also references ) the LHC gaugino discovery potential has been investigated within the minimal SUGRA-MSSM framework, where all sparticle masses are determined mainly by two parameters: $`m_0`$ (common squark and slepton mass at the GUT scale) and $`m_{\frac{1}{2}}`$ (common gaugino mass at the GUT scale). The signature used for the search for gauginos at LHC is 3 isolated leptons \+ no jets \+ $`E_T^{miss}`$ events. The conclusion of ref. is that LHC is able to detect gauginos with $`m_{\frac{1}{2}}`$ up to 170 GeV and in some cases (small $`m_0`$) up to 420 GeV.
In this paper we investigate the gaugino discovery potential of LHC for the case of nonuniversal gaugino masses. Despite its simplicity, the SUGRA-MSSM framework is a very particular model. The mass formulae for sparticles in the SUGRA-MSSM model are derived under the assumption that at the GUT scale ($`M_{GUT}\approx 2\times 10^{16}`$ GeV) the soft supersymmetry breaking terms are universal. However, in general, we can expect that the real sparticle masses differ in a drastic way from the sparticle mass pattern of the SUGRA-MSSM model for many reasons, see for instance refs. . Therefore, it is more appropriate to investigate the LHC SUSY discovery potential in a model-independent way (an early version of this study has been published in ref. ). The cross section for $`\stackrel{~}{\chi }_1^\pm \stackrel{~}{\chi }_2^0`$ (chargino–second neutralino) pair production depends mainly on the mass of the chargino, which is approximately degenerate in mass with the second neutralino, $`M(\stackrel{~}{\chi }_1^\pm )\approx M(\stackrel{~}{\chi }_2^0)`$. The two lightest neutralinos and the lightest chargino $`(\stackrel{~}{\chi }_1^0,\stackrel{~}{\chi }_2^0,\stackrel{~}{\chi }_1^\pm )`$ have the gauginos as their largest mixing components, and hence their masses are determined by the common gaugino mass $`m_{\frac{1}{2}}`$. Within the mSUGRA model $`M(\stackrel{~}{\chi }_1^0)\approx 0.4m_{\frac{1}{2}}`$ and $`M(\stackrel{~}{\chi }_2^0)\approx M(\stackrel{~}{\chi }_1^\pm )\approx 2M(\stackrel{~}{\chi }_1^0)`$.
The lightest chargino $`\stackrel{~}{\chi }_1^\pm `$ has several leptonic decay modes giving an isolated lepton and missing energy:
three-body decay
* $`\stackrel{~}{\chi }_1^\pm \rightarrow \stackrel{~}{\chi }_1^0+l^\pm +\nu `$,
two-body decays
* $`\stackrel{~}{\chi }_1^\pm \rightarrow \stackrel{~}{l}_{L,R}^\pm +\nu `$, followed by $`\stackrel{~}{l}_{L,R}^\pm \rightarrow \stackrel{~}{\chi }_1^0+l^\pm `$,
* $`\stackrel{~}{\chi }_1^\pm \rightarrow \stackrel{~}{\nu }_L+l^\pm `$, followed by $`\stackrel{~}{\nu }_L\rightarrow \stackrel{~}{\chi }_1^0+\nu `$,
* $`\stackrel{~}{\chi }_1^\pm \rightarrow \stackrel{~}{\chi }_1^0+W^\pm `$, followed by $`W^\pm \rightarrow l^\pm +\nu `$.
Leptonic decays of $`\stackrel{~}{\chi }_2^0`$ give two isolated leptons and missing energy:
three-body decays
* $`\stackrel{~}{\chi }_2^0\rightarrow \stackrel{~}{\chi }_1^0+l^+l^-`$,
* $`\stackrel{~}{\chi }_2^0\rightarrow \stackrel{~}{\chi }_1^\pm +l^{\mp }+\nu `$, followed by $`\stackrel{~}{\chi }_1^\pm \rightarrow \stackrel{~}{\chi }_1^0+l^\pm +\nu `$,
two-body decay
* $`\stackrel{~}{\chi }_2^0\rightarrow \stackrel{~}{l}_{L,R}^\pm +l^{\mp }`$, followed by $`\stackrel{~}{l}_{L,R}^\pm \rightarrow \stackrel{~}{\chi }_1^0+l^\pm `$.
For a relatively large $`\stackrel{~}{\chi }_2^0`$ mass there are two-body decays $`\stackrel{~}{\chi }_2^0\rightarrow \stackrel{~}{\chi }_1^0h`$, $`\stackrel{~}{\chi }_2^0\rightarrow \stackrel{~}{\chi }_1^0Z`$ which suppress the three-body decays of $`\stackrel{~}{\chi }_2^0`$. Direct production of $`\stackrel{~}{\chi }_1^\pm \stackrel{~}{\chi }_2^0`$ followed by leptonic decays of both gauginos gives three high-$`p_T`$ isolated leptons accompanied by missing energy due to the escaping $`\stackrel{~}{\chi }_1^0`$’s and $`\nu `$’s. These events do not contain jets, except for jets coming from initial state radiation. Therefore the signature for $`\stackrel{~}{\chi }_1^\pm \stackrel{~}{\chi }_2^0`$ pair production is $`3l`$ \+ no jets \+ missing energy.
As mentioned above, this signature has been used in ref. for the investigation of the LHC gaugino discovery potential within the mSUGRA model, where the gaugino masses $`M(\stackrel{~}{\chi }_1^0)`$, $`M(\stackrel{~}{\chi }_2^0)`$ are determined mainly by a common gaugino mass $`m_{\frac{1}{2}}`$ and $`M(\stackrel{~}{\chi }_2^0)\approx 2.5M(\stackrel{~}{\chi }_1^0)`$. In our study we consider the general case when the relation between $`M(\stackrel{~}{\chi }_1^\pm )`$ and $`M(\stackrel{~}{\chi }_1^0)`$ is arbitrary. We find that the LHC gaugino discovery potential depends rather strongly on the relation between the $`\stackrel{~}{\chi }_1^0`$ and $`\stackrel{~}{\chi }_2^0`$ masses. For $`M_{\stackrel{~}{\chi }_2^0}-M_{\stackrel{~}{\chi }_1^0}\ge M_Z`$ the decay $`\stackrel{~}{\chi }_2^0\rightarrow \stackrel{~}{\chi }_1^0Z`$ dominates, and due to the real Z-boson in the final state the signal is as a rule too small to be observable on top of the huge background. For $`M_{\stackrel{~}{\chi }_2^0}`$ close to $`M_{\stackrel{~}{\chi }_1^0}`$ the leptons in the final state are rather soft, which also prevents the signal detection. We also give some preliminary results on the investigation of squark and gluino production at LHC for the case of nonuniversal gaugino masses.
## 2 Simulation of detector response. Backgrounds
Our simulations are made at the particle level with parametrized detector responses based on a detailed detector simulation. To be concrete, our estimates have been made for the CMS (Compact Muon Solenoid) detector. The CMS detector simulation program CMSJET 3.2 is used. The main aspects of CMSJET relevant to our study are the following.
* Charged particles are tracked in a 4 T magnetic field. 90 percent reconstruction efficiency per charged track with $`p_T>1`$ GeV within $`|\eta |<2.5`$ is assumed.
* The geometrical acceptances for $`\mu `$ and $`e`$ are $`|\eta |<2.4`$ and 2.5, respectively. The lepton momentum is smeared according to parametrizations obtained from full GEANT simulations. For a 10 GeV lepton the momentum resolution $`\mathrm{\Delta }p_T/p_T`$ is better than one percent over the full $`\eta `$ coverage. For a 100 GeV lepton the resolution becomes $`(1-5)\times 10^{-2}`$ depending on $`\eta `$. We have assumed a 90 percent triggering-plus-reconstruction efficiency per lepton within the geometrical acceptance of the CMS detector.
* The electromagnetic calorimeter of CMS extends up to $`|\eta |=2.61`$. There is a pointing crack in the ECAL barrel/endcap transition region between $`|\eta |=1.478`$ and $`1.566`$ (6 ECAL crystals). The hadronic calorimeter covers $`|\eta |<3`$. The Very Forward calorimeter extends from $`|\eta |=3`$ to $`|\eta |=5`$. Noise terms have been simulated with Gaussian distributions and zero suppression cuts have been applied.
* $`e/\gamma `$ and hadron shower development are taken into account by parametrization of the lateral and longitudinal profiles of showers. The starting point of a shower is fluctuated according to an exponential law.
* For jet reconstruction we have used a slightly modified UA1 Jet Finding Algorithm, with a cone size of $`\mathrm{\Delta }R=0.8`$ and 25 GeV transverse energy threshold on jets.
All SUSY processes with full particle spectrum, couplings, production cross section and decays are generated with ISAJET 7.32, ISASUSY . The Standard Model backgrounds are generated with PYTHIA 5.7 .
The following SM processes give the main contribution to the background:
$`WZ,ZZ,t\overline{t},Wtb,Zb\overline{b},b\overline{b}`$. In this paper we use the results of the background simulation of ref. . Namely, following ref. , we require 3 isolated leptons with $`p_T^l>15`$ GeV in $`|\eta ^l|<2.4(2.5)`$ for muons (electrons), among them a same-flavour opposite-sign lepton pair. As a lepton isolation criterion we require the absence of charged tracks with $`p_T>1.5`$ GeV in a cone of $`R=0.3`$ around the lepton. We also require the absence of jets with $`E_T^{jet}>30`$ GeV in $`|\eta |<3`$. The last requirement is that the two same-flavour opposite-sign lepton invariant mass satisfy $`M_{l^+l^-}<81`$ GeV. Lepton isolation is useful for the suppression of background events with leptons originating from semileptonic decays of b-quarks. The central jet veto requirement allows one to get rid of the internal SUSY background coming from $`\stackrel{~}{g}`$ and $`\stackrel{~}{q}`$ cascade decays, which otherwise overwhelms $`\stackrel{~}{\chi }_1^\pm `$ $`\stackrel{~}{\chi }_2^0`$ direct production. This cut also reduces the $`t\overline{t}`$, $`Wtb`$, $`Zb\overline{b}`$, $`b\overline{b}`$ SM backgrounds.
For this set of cuts the background cross section is $`\sigma _{back}=10^{-2}`$ pb, which corresponds to $`N_b=10(100)`$ background events for a total luminosity $`L=10^3(10^4)`$ $`pb^{-1}`$. See ref. for details. If we drop the $`M_Z`$-cut, namely the requirement that the two same-flavour opposite-sign lepton invariant mass satisfy $`M_{l^+l^-}<81`$ GeV, then the background cross section is $`\sigma _{back}=0.11`$ pb .
## 3 Results
The results of our calculations are presented in Tables 1-9. In estimating the LHC (CMS) gaugino discovery potential we have used the significance $`S_{12}=\sqrt{N_s+N_b}-\sqrt{N_b}`$, which is appropriate for estimating the discovery potential of future experiments . For comparison we also give the values of the often used significance $`S=\frac{N_S}{\sqrt{N_S+N_B}}`$. Here $`N_s=\sigma _sL`$ is the number of signal events and $`N_b=\sigma _bL`$ is the number of background events for a given total luminosity $`L`$. As follows from our results, for a given value of the chargino mass $`M(\stackrel{~}{\chi }_1^\pm )`$ the number of signal events depends rather strongly on the mass of the lightest superparticle $`M(\stackrel{~}{\chi }_1^0)`$, and for $`M(\stackrel{~}{\chi }_1^0)\gtrsim 0.7M(\stackrel{~}{\chi }_1^\pm )`$ the signal is too small to be observable. For small LSP masses $`M(\stackrel{~}{\chi }_1^0)`$ the two-body decay $`\stackrel{~}{\chi }_2^0\rightarrow \stackrel{~}{\chi }_1^0Z`$ dominates, and due to the $`M_Z`$-cut the signal is as a rule too small to be observable. However, if we drop the $`M_Z`$-cut, in some cases it is possible to detect the two-body decay mode $`\stackrel{~}{\chi }_2^0\rightarrow \stackrel{~}{\chi }_1^0(Z\rightarrow l^+l^-)`$. As an illustration consider several examples which correspond to a total luminosity $`L=3\times 10^4`$ $`pb^{-1}`$ (for such a luminosity the number of background events is expected to be 3300).
A. $`M_{\stackrel{~}{q}}=M_{\stackrel{~}{l}}=M_{\stackrel{~}{g}}=2`$ TeV, $`\mathrm{tan}(\beta )=5`$, $`M(\stackrel{~}{\chi }_2^0)\approx M(\stackrel{~}{\chi }_1^\pm )=104`$ GeV, $`M(\stackrel{~}{\chi }_1^0)=11`$ GeV: $`N_{ev}=1921`$, $`S_{12}=14`$, $`S=26`$.
B. $`M_{\stackrel{~}{q}}=M_{\stackrel{~}{l}}=M_{\stackrel{~}{g}}=2`$ TeV, $`\mathrm{tan}(\beta )=5`$, $`M(\stackrel{~}{\chi }_2^0)\approx M(\stackrel{~}{\chi }_1^\pm )=126`$ GeV, $`M(\stackrel{~}{\chi }_1^0)=21`$ GeV: $`N_{ev}=924`$, $`S_{12}=8`$, $`S=14`$.
C. $`M_{\stackrel{~}{q}}=M_{\stackrel{~}{l}}=M_{\stackrel{~}{g}}=500`$ GeV, $`\mathrm{tan}(\beta )=5`$, $`M(\stackrel{~}{\chi }_2^0)\approx M(\stackrel{~}{\chi }_1^\pm )=122`$ GeV, $`M(\stackrel{~}{\chi }_1^0)=26`$ GeV: $`N_{ev}=744`$, $`S_{12}=6`$, $`S=11`$.
D. $`M_{\stackrel{~}{q}}=M_{\stackrel{~}{l}}=M_{\stackrel{~}{g}}=1`$ TeV, $`\mathrm{tan}(\beta )=5`$, $`M(\stackrel{~}{\chi }_2^0)\approx M(\stackrel{~}{\chi }_1^\pm )=124`$ GeV, $`M(\stackrel{~}{\chi }_1^0)=32`$ GeV: $`N_{ev}=864`$, $`S_{12}=7.1`$, $`S=13`$.
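Both significance definitions are elementary to evaluate. As a sanity check, the snippet below (variable names ours) reproduces case A from $`N_s=1921`$ signal events on top of the 3300 expected background events, up to rounding.

```python
import math

def significances(n_s, n_b):
    """S12 = sqrt(Ns+Nb) - sqrt(Nb) and S = Ns/sqrt(Ns+Nb)."""
    return math.sqrt(n_s + n_b) - math.sqrt(n_b), n_s / math.sqrt(n_s + n_b)

print(significances(1921, 3300))   # ~(14.8, 26.6), i.e. S12 ~ 14 and S ~ 26
```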
## 4 Squark and gluino production for the case of nonuniversal gaugino masses
Here we give first preliminary results on the squark and gluino production for the case of nonuniversal gaugino masses . We investigated 3 cases:
* $`m_{\stackrel{~}{q}}\gg m_{\stackrel{~}{g}}`$,
* $`m_{\stackrel{~}{q}}\ll m_{\stackrel{~}{g}}`$,
* $`m_{\stackrel{~}{q}}\approx m_{\stackrel{~}{g}}`$.
We used the signature $`n\ge 2`$ jets \+ $`E_T^{miss}`$ for the supersymmetry search. As for the signature $`n\ge 2`$ jets \+ $`n\ge 1`$ leptons \+ $`E_T^{miss}`$, it depends on the relation among the $`\stackrel{~}{\chi }_2^0,\stackrel{~}{\chi }_1^0,\stackrel{~}{q}`$ and $`\stackrel{~}{g}`$ masses. For instance, for $`m_{\stackrel{~}{\chi }_2^0}\ge min(m_{\stackrel{~}{g}},m_{\stackrel{~}{q}})`$ there are no cascade decays of $`\stackrel{~}{\chi }_2^0`$ and, hence, the signature with $`n\ge 1`$ leptons is not essential.
We have found that for the case of arbitrary relations among the gaugino masses the number of signal events for the signature $`n\ge 2`$ jets \+ $`E_T^{miss}`$ depends rather strongly on the relation among $`m_{\stackrel{~}{g}}`$, $`m_{\stackrel{~}{q}}`$ and $`m_{\stackrel{~}{\chi }_1^0}`$. For $`m_{\stackrel{~}{\chi }_1^0}`$ close to $`min(m_{\stackrel{~}{g}},m_{\stackrel{~}{q}})`$ the prospects of SUSY detection become very problematic. For instance, for $`min(m_{\stackrel{~}{g}},m_{\stackrel{~}{q}})\approx 1`$ TeV and $`m_{\stackrel{~}{\chi }_1^0}\ge 0.75\,min(m_{\stackrel{~}{g}},m_{\stackrel{~}{q}})`$ it is extremely difficult or even impossible to detect SUSY using the channel $`n\ge 2`$ jets \+ $`E_T^{miss}`$.
As a concrete example consider SUSY detection for $`m_{\stackrel{~}{q}}=1550`$ GeV, $`m_{\stackrel{~}{g}}=1500`$ GeV. For the cuts $`n_{jet}\ge 3`$, $`p_{T_{jet1}}\ge 350`$ GeV, $`p_{T_{jet2}}\ge 290`$ GeV, $`p_{T_{jet3}}\ge 230`$ GeV, $`E_T^{miss}\ge 1200`$ GeV and a luminosity $`L=10^5`$ $`pb^{-1}`$, the number of background events is 35, whereas the number of signal events is 334, 66, 9, 5 for $`m_{\stackrel{~}{\chi }_1^0}=250`$ GeV, $`750`$ GeV, $`1125`$ GeV and $`1350`$ GeV, correspondingly.
## 5 Conclusion
In this paper we have presented the results of calculations for $`\stackrel{~}{\chi }_1^\pm \stackrel{~}{\chi }_2^0`$ pair production at LHC (CMS), with their subsequent decays into leptons, for the case of nonuniversal gaugino masses. We have found that the visibility of the signal as an excess over the SM background in $`3l`$ \+ no jets \+ $`E_T^{miss}`$ events depends rather strongly on the relation between the LSP mass $`M_{\stackrel{~}{\chi }_1^0}`$ and the chargino mass $`M_{\stackrel{~}{\chi }_1^\pm }`$. For a relatively heavy LSP mass, $`M_{\stackrel{~}{\chi }_1^0}`$ close to $`M_{\stackrel{~}{\chi }_1^\pm }`$, the signal is too small to be observable. Also, for small values of the LSP mass $`M_{\stackrel{~}{\chi }_1^0}`$ the two-body decay $`\stackrel{~}{\chi }_2^0\rightarrow \stackrel{~}{\chi }_1^0Z`$ complicates the observation of the signal. For a total luminosity $`L=3\times 10^4`$ $`pb^{-1}`$ the signal could be observable for chargino masses $`M(\stackrel{~}{\chi }_1^\pm )`$ up to 150 GeV .
Acknowledgments
We are indebted to I.N. Semeniouk for his help in writing the code of the events selections. This work has been supported by RFFI grant 99-02-16956.
no-problem/9903/nucl-th9903039.html
# Neutron Star Vortex Dynamics and Magnetic Field Decay: Implications for High Density Nuclear Matter
(March, 1999)
## Abstract
We investigate the effect of the density-dependent proton and neutron gaps on vortex dynamics in neutron stars. We argue that the persistence of neutron star magnetic fields on timescales of $`10^9`$ y suggests a superconducting gap curve with local maximum at intermediate density. We discuss the implications for exotic core phenomena such as pion/kaon condensation or a transition to quark matter.
In this letter we address the evolution of magnetic fields in neutron stars, in particular the distribution of magnetic vortices inside the star. Residual magnetic fields are believed to persist over very long timescales ($`10^9`$ y) in neutron stars. While naively attributed to the confinement of magnetic flux into vortices (henceforth, flux vortices or FVs) due to proton superconductivity , the phenomenon is more involved, and may involve the interaction of FVs with neutron superfluid vortices (henceforth, SVs) . Here we will argue that a prerequisite for the persistence of magnetic fields, as well as for the applicability of models in , is that the proton gap curve $`\mathrm{\Delta }_p`$ have a certain shape as a function of density within the neutron star. The point is that the density-dependent proton gap leads to a force which acts on FVs. At low densities (in the outer core), this force will always act to eject vortices into the non-superconducting crust. A simple calculation shows that this proton gap force dominates any vortex buoyancy effects , and leads to ejection on timescales of $`10^6`$ y. However, if the proton gap decreases at higher densities after reaching a local maximum at some intermediate density, the sign of the force will reverse and act to anchor vortex segments to the core of the neutron star . We will argue that without this effect, interactions between pinned SVs and FVs are insufficient to prevent FV ejection.
The phenomenology of magnetic fields in neutron stars has long been of interest to those studying pulsar glitches , and has recently been given a prominent role in the magnetar model of local gamma ray bursters . Our main interest here will be in the dynamics of fluxoids deep within the core of the neutron star, in particular the forces which act to either anchor or expel them. We will conclude with a discussion of the implications of our work on exotic states of matter in neutron stars.
Below we list some neutron star properties of relevance to our analysis
$``$ Neutron star structure: In the outer layer of thickness $`1`$ km, a lattice of neutron-rich nuclei is surrounded by a neutron superfluid. As the density increases, conversion of protons and captured electrons into neutrons becomes more efficient, and eventually the proton and electron fraction becomes of order a few percent, sufficient to prevent neutron decay by Fermi blocking. The neutron superfluid order parameter (see for recent computations) is initially in the $`{}_{}{}^{1}S_{0}^{}`$ channel, but probably shifts to the $`{}_{}{}^{3}P_{2}^{}`$ channel at higher density, due to the repulsive core of the neutron-neutron potential. The gap size is of order 1 MeV. A proton gap of similar size, leading to superconductivity, is also expected in the core region. Due to uncertainties in the equation of state at high density, the maximum core density is unknown. Various exotic phenomena such as pion or kaon condensation, or even a transition to quark matter may occur deep in the core. We note that in all of these scenarios a superconducting gap which is larger than the proton gap is to be expected. (See for recent progress on the quark color-superconducting gap.)
$``$ Superfluid vortices (SVs) carry the star’s angular momentum in quantized lines parallel to the spin axis. They have an area density
$$n_{SV}=2m_n\mathrm{\Omega }/\pi \simeq 10^4/P(\mathrm{sec})\;\mathrm{cm}^{-2}.$$
(1)
Because of the strong coupling between neutrons and protons, the circulation of neutrons leads in turn to circulation of protons, and the SVs are themselves expected to carry magnetic fields.
$``$ Magnetic flux vortices (FVs) are the result of the type II superconductivity in the inner crust and core region. The magnetic field of the star is confined into individual vortices of flux $`\mathrm{\Phi }_0=\pi /e=2\times 10^{-7}\,\mathrm{G}\,\mathrm{cm}^2`$. The number density of such vortices is
$$n_{FV}=B/\mathrm{\Phi }_0\simeq 10^{19}B_{12}\;\mathrm{cm}^{-2},$$
(2)
where $`B_{12}=B/10^{12}`$ Gauss. Note that the density of FVs is enormously larger than that of the SVs.
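For orientation, both densities and their enormous ratio, which drives the collective force estimate of Eq. (7) below, follow from a few lines of arithmetic. This is a sketch in cgs units; the physical constants are standard values, everything else is ours.

```python
import math

hbar = 1.055e-27      # erg s
m_n  = 1.675e-24      # neutron mass, g
Phi0 = 2.0e-7         # flux quantum, G cm^2

def n_sv(P):
    """Eq. (1): superfluid-vortex areal density for spin period P in seconds."""
    return 2.0 * m_n * (2.0 * math.pi / P) / (math.pi * hbar)

def n_fv(B):
    """Eq. (2): flux-vortex areal density for a field B in Gauss."""
    return B / Phi0

print(n_sv(1.0))                 # ~6e3 cm^-2, i.e. of order 1e4/P
print(n_fv(1e12))                # ~5e18 cm^-2, of order 1e19*B_12
print(n_fv(1e12) / n_sv(1.0))    # ~1e15 flux vortices per superfluid vortex
```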
Now let us consider the effect on vortex dynamics of the shapes of the relevant gap curves. Because the string tension (energy per unit length) of a vortex behaves as
$$\mu =c\mathrm{\Delta }^2,$$
(3)
(where c is a dimensionless constant of order 1) there is a force per unit length exerted on the vortex due to the variation of the gap with radial position (density) within the star:
$$\vec{f}_\mathrm{\Delta }=-2c\mathrm{\Delta }\frac{\partial \mathrm{\Delta }}{\partial r}\widehat{r}.$$
(4)
The magnitude of this force per unit length is of order
$$f_\mathrm{\Delta }\sim \mathrm{MeV}^2/R,$$
(5)
where R is the characteristic length scale over which the gap varies. For the FV gap $`R\approx R_{NS}\approx 10^4`$ m, while for the $`{}_{}{}^{1}S_{0}^{}`$ superfluid gap $`R\approx 10^3`$ m. Comparing with the buoyancy effect of Muslimov and Tsygan , we see that this effect is of similar but somewhat larger size.
In the region where $`\mathrm{\Delta }`$ is increasing with density, the force will act to expel vortices. The characteristic time for this to occur depends on the drag force exerted on the vortex due to interactions with leptons (at high densities there may be muons present). Since the protons and neutrons form a superfluid their contribution to the drag is negligible. The lepton drag force has been considered in some detail by Jones , and is of the order
$$f_{drag}\sim \mathrm{MeV}^3v_{vortex},$$
(6)
Using this result, the terminal velocity can be found, and therefore the expulsion time, which is $`\tau _\mathrm{\Delta }\approx 10^6`$ y for magnetic vortices. Once a vortex has been expelled into the outer crust, the magnetic field can decay by ohmic dissipation on timescales of $`\tau _\omega \approx r_c^2\sigma \approx 10^7`$ y, where $`r_c`$ is the crust thickness and $`\sigma `$ the conductivity. (Some calculations, such as that of Sang and Chanmugam , have obtained timescales for ohmic decay which are larger than the usual estimate. However, it is important to note that the mechanism described here ejects the magnetic fields into the outer crust ($`\rho <10^{12}\mathrm{g}/\mathrm{cm}^3`$), where the conductivity is lower and where even the calculations of yield decay timescales of order $`10^7`$ y.) These timescales are inconsistent with the observed persistence of magnetic fields of order $`10^9`$–$`10^{10}`$ G in millisecond pulsars with ages $`\approx 10^9`$ y.
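The expulsion timescale follows from balancing $`f_\mathrm{\Delta }`$ against $`f_{drag}`$, which gives a terminal velocity $`v/c\approx \hbar c/(\mathrm{\Delta }R)`$ and $`\tau _\mathrm{\Delta }\approx R/v`$. The sketch below is a crude dimensional estimate only: all order-one (and potentially larger) coefficients in $`f_\mathrm{\Delta }`$ and $`f_{drag}`$ are dropped here, so the $`10^4`$–$`10^5`$ y it returns should be read as consistent with the quoted $`10^6`$ y only at the level of such order-of-magnitude arguments.

```python
hbar_c = 1.97e-13     # MeV m
c      = 3.0e8        # m/s
year   = 3.15e7       # s

R   = 1.0e4           # m: length scale of the gap variation (~ R_NS)
gap = 1.0             # MeV: typical gap size

# f_gap ~ gap^2/R balanced against f_drag ~ MeV^3 * (v/c):
v_over_c  = hbar_c / (gap * R)          # ~2e-17
tau_years = R / (v_over_c * c) / year   # ~5e4 y
print(v_over_c, tau_years)
```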
In regions where the gap decreases with increasing density, the force acts to pull the vortex deeper into the star. For example, in the case of a superfluid vortex, the $`{}_{}{}^{1}S_{0}^{}`$ gap falls off after reaching its maximum at a Fermi momentum $`p_F\approx 150`$ MeV. In this region an SV is pulled toward the center of the star, until the sign of the gradient switches again. The case of superfluid vortices is complicated, because the superfluid order parameter switches from $`{}_{}{}^{1}S_{0}^{}`$ to $`{}_{}{}^{3}P_{2}^{}`$ at $`p_F\approx 300`$ MeV. In addition, because SVs also carry magnetic fields, they are also affected by the proton gap force gradient. In figure 1 we show the likely behavior of the neutron and proton gap functions. The leftmost curve shows the likely behavior of the $`{}_{}{}^{1}S_{0}^{}`$ superfluid gap, while the two curves on the right display possibilities for the superconducting gap. We will refer to the upper curve, which increases monotonically with density, as curve 1, and the lower curve as curve 2. Superfluid vortices can minimize their energy in the region where the superfluid and superconducting gap curves intersect. The evolution of FVs depends crucially on the shape of the $`\mathrm{\Delta }_p`$ curve. If there is no local maximum (as shown in curve 1), then all FVs will eventually be ejected from the star. Alternatively, if curve 2 is correct then FVs with sufficient length in the attractive core will be anchored against ejection (see figure 2). Some sub-population of the FVs could presumably remain indefinitely.
One might think that the interactions between FVs and SVs, or their respective pinning to the crust, might be enough to prevent FV ejection even in the case of curve 1. However, since the number of FVs is so much greater than the number of SVs, they will either carry the SVs along in their motion, or cut through them on their way to the surface. (Note that intercommutation of vortices is highly efficient , so if a vortex line is cut through it will almost always reconnect with itself afterwards.) The crustal pinning force on an SV is at most of order $`\mathrm{MeV}^2`$, so it is easily overcome by the combined force exerted by $`f_\mathrm{\Delta }`$ through $`n_{FV}/n_{SV}`$ flux vortices, each of order $`R_{NS}`$ in length. The total force exerted on a single SV is
$$F_\mathrm{\Delta }\sim \frac{n_{FV}}{n_{SV}}\mathrm{MeV}^2,$$
(7)
which completely dominates any restraining effects on the SV.
The general form of curve 2 in figure 1 is to be expected from standard calculations, given that pp interactions are attractive at long distances and have a repulsive core. Of course, medium effects due to the large density of neutrons will be important and are difficult to account for. The particular values of curve 2 were obtained using the Fermi surface effective field theory technique of , using experimentally determined pp phase shifts and the beta-stability condition to determine the proton density relative to the neutron density. The result should be accurate at lower densities, but the eventual behavior of the curve (i.e. curve 1 vs curve 2) is subject to large uncertainties. We have argued that the long time persistence of pulsar magnetic fields favors case 2.
As previously mentioned, many of the exotic possibilities for the inner core behavior (pion or kaon condensation, quark matter) imply superconducting gaps larger than of order 1 MeV, due to condensation of electrically charged degrees of freedom: $`\pi ^\pm `$, $`K^+`$ or a diquark pair, at densities of several times $`\rho _0=2\times 10^{14}\mathrm{g}/\mathrm{cm}^3`$. In the case of quark matter , the gap size is expected to be at least 10 MeV, and perhaps as large as 50-100 MeV. This would be hard to reconcile with curve 2. The transition from normal matter to the exotic phase would have to occur at sufficiently high density (i.e. at the far right of figure 1) to allow for a region in the star which remains attractive to FVs. The maximum of the proton gap curve in figure 1 is already at a density of $`2\rho _0`$ (and density increases with the cube of the Fermi momentum), so this at most leaves room for a thin shell of attractive volume. We conclude that exotic phases (if they occur at all) (1) can only occur at very high density ($`>(\mathrm{few})\rho _0`$) and (2) will occupy at most only a small fraction of the volume of the star.
In summary, we have argued that the proton gap curve is likely to exhibit a local maximum at intermediate density, implying a region at higher density which traps flux vortices and disfavoring an exotic phase at the core. Vortices which are formed with insufficient length in this region will be ejected on timescales of order $`\tau _\mathrm{\Delta }\approx 10^6`$ y, and decay in the outer crust. As mentioned, the asymptotic values of neutron star magnetic fields are estimated to be less than of order $`10^{10}`$ G, compared to $`10^{12}`$ G or more at formation. It is not known whether the decay of the magnetic field is due to accretion or flux decay. If the cause is flux decay, it would imply that in any (young) neutron star the ejection process is under way, with some FVs being pushed into the crust at all times. It is not clear what the phenomenological implications of this are, although the presence of large magnetic fields confined to the outer crust presumably leads to significant crustal stresses and perhaps starquake activity. Another issue worth considering is the fate of SVs if they are carried along in the expulsion of FVs to the surface of the star. This may lead to spin down which is correlated to the decay of the magnetic fields. While the causality is different, the phenomenology might resemble that of models in which magnetic field decay is caused by the flow of SVs during spin down .
The author would like to thank Jim Hormuzdiar for discussions. This work was supported in part under DOE contracts DE-AC02-ERU3075 and DE-FG06-85ER40224.
no-problem/9903/astro-ph9903172.html
# Image Reconstruction of COMPTEL 1.8 MeV <sup>26</sup>Al Line Data
## 1 Introduction
Imaging the $`\gamma `$-ray sky at MeV energies by the COMPTEL telescope aboard the Compton Gamma-Ray Observatory (CGRO) presents a major methodological challenge. Registered events are dominated by instrumental background, and additionally, source signals are widespread over the event parameter space. Consequently, image recovery relies on a complex deconvolution procedure and on the accurate modelling of the instrumental background component. A maximum entropy algorithm has been employed extensively for the reconstruction of intensity maps from COMPTEL data (Strong et al. 1992). Recent examples of maximum entropy all-sky maps can be found in Strong et al. (1999) and Bloemen et al. (1999a) for galactic continuum emission or in Oberlack et al. (1996), Oberlack (1997) and Bloemen et al. (1999b) for 1.809 MeV $`\gamma `$-ray line radiation, attributed to the radioactive decay of <sup>26</sup>Al.
Simulations revealed a tendency towards clumpy reconstruction of emission in our maximum entropy images, leading to artificial ‘hot spots’ of $`\gamma `$-ray emission in the reconstructions of diffuse emission distributions (Knödlseder et al. 1996). From the images alone, these ‘hot spots’ are indistinguishable from real point-like $`\gamma `$-ray sources, leading to considerable difficulties in the interpretation of the sky maps. Indeed, the assessment of the significance of individual ‘hot spots’ requires a substantial analysis effort, using simulations, Bootstrap analysis, and model fitting (e.g. Oberlack 1997).
We understand the image lumpiness as the result of the weak constraints that are imposed on individual image pixels by our data. COMPTEL images are usually reconstructed on a $`1\mathrm{°}\times 1\mathrm{°}`$ pixel grid in order to exploit the telescope’s angular location accuracy for point sources. The fine pixelisation implies, however, that for weak diffuse emission, $`\gamma `$-ray intensities in individual pixels are generally not significant. Increasing the pixel size could in principle avoid this problem, at the expense, however, of a reduced angular resolution. We note that this is not a particular property of the maximum entropy algorithm, but of every method that operates on a fixed grid of independent image pixels. Apparently, significance and angular resolution are intimately related quantities (this relation is more generally known as the bias-variance tradeoff).
Algorithms that rely on a pre-defined pixel grid require an a priori choice of the angular resolution (by defining the pixel size) without constraining the significance of the fluxes in individual image pixels. Alternatively, one may follow the opposite approach by choosing a priori the significance of image structures without constraining the angular resolution in the reconstruction. An implementation of such an algorithm was discussed by Piña & Puetter (1993) who introduced generalised image cells (called ‘pixons’) to correlate adjacent image pixels according to the signal strength. However, the application of Pixon based image reconstruction to COMPTEL 1.8 MeV did not provide satisfactory results (Knödlseder et al. 1996).
In this paper we present a new algorithm, called Multiresolution Regularized Expectation Maximization (MREM), which we developed in particular for the reconstruction of diffuse $`\gamma `$-ray emission. We combined an expectation maximization (EM) algorithm with a multi-resolution analysis based on wavelets, which explicitly accounts for spatial correlations in the reconstructed image. This leads to a convergent algorithm which automatically stops when the significant structure has been extracted from the data (by significant structure we mean structure that will not change much under perturbation of the data). The method requires an a priori choice of the significance level of emission structures while adapting the angular resolution according to the signal.
In the following we will present the MREM algorithm (§2) and illustrate its performance by means of simulations of COMPTEL observations (§3). The MREM algorithm is then applied for the reconstruction of an 1.809 MeV all-sky map based on COMPTEL data obtained between May 1991 to June 1996 (§4). This sky map will be compared to 1.8 MeV all-sky maps presented previously which have been derived by the maximum entropy method (Oberlack et al. 1996) or the Richardson-Lucy algorithm (Knödlseder et al. 1996). A more theoretical description of the MREM algorithm will be given in a separate paper (Dixon et al., in preparation).
## 2 The MREM algorithm
MREM is based on the Richardson-Lucy (RL) algorithm which has been proposed by Richardson (1972) and Lucy (1974) for the restoration of degraded images. Given an initial estimate $`f_j^0`$ for the image, RL iteratively improves this estimate using
$$f_j^{k+1}=f_j^k\left(\frac{\sum _{i=1}^N\frac{n_i}{e_i^k}R_{ij}}{\sum _{i=1}^NR_{ij}}\right)$$
(1)
($`k`$ denotes the iteration). $`R_{ij}`$ is the instrumental response matrix which links the data space (indexed by $`i`$) to the image space (indexed by $`j`$). For a given image $`f_j^k`$ and a given background model $`b_i`$, the expected number of counts in a data space cell is given by $`e_i^k=\sum _{j=1}^MR_{ij}f_j^k+b_i`$. The number of events observed in data space cell $`i`$ is given by $`n_i`$. It is easily seen that Eq. (1) may also be written in the additive form
$$f_j^{k+1}=f_j^k+\delta f_j^k,$$
(2)
where
$$\delta f_j^k=f_j^k\left(\frac{\sum _{i=1}^N\left(\frac{n_i}{e_i^k}-1\right)R_{ij}}{\sum _{i=1}^NR_{ij}}\right)$$
(3)
is the additive RL correction.
Shepp & Vardi (1982) demonstrated that the Richardson-Lucy scheme is a special case of the expectation maximization (EM) algorithm (Dempster et al. 1977), and consequently it converges to the positively constrained maximum likelihood solution for Poisson data. Due to the slow convergence of the algorithm, several modifications have been proposed to accelerate convergence (Fessler & Hero 1994). For COMPTEL data we found that the ML-LINB-1 algorithm of Kaufman (1987) gives reasonable acceleration without degrading the reconstruction properties. For ML-LINB-1, Eq. (2) is replaced by
$$f_j^{k+1}=f_j^k+\lambda ^k\delta f_j^k,$$
(4)
where $`\lambda ^k`$ is determined for each iteration using a line-search in order to maximise the likelihood for $`f_j^{k+1}`$, subject to the constraint $`\lambda ^k\delta f_j^k>-f_j^k`$ (this constraint ensures the positivity of the intensities).
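A compact transcription of Eqs. (1)–(4) might look as follows. This is a sketch only: a crude grid search stands in for the line search of Kaufman (1987), the flat starting image and all names are ours, and a strictly positive background model $`b_i`$ is assumed; setting $`\lambda ^k=1`$ recovers the plain RL iteration.

```python
import numpy as np

def rl_correction(f, n, R, b):
    """Additive Richardson-Lucy correction delta f, Eq. (3)."""
    e = R @ f + b                                # expected counts e_i^k
    return f * (R.T @ (n / e - 1.0)) / R.sum(axis=0)

def loglike(f, n, R, b):
    """Poisson log-likelihood up to an f-independent constant."""
    e = R @ f + b
    return np.sum(n * np.log(e) - e)

def accelerated_rl(n, R, b, n_iter=30):
    """ML-LINB-1 style iteration f <- f + lam*df with a crude grid line search."""
    f = np.full(R.shape[1], n.sum() / R.sum())   # flat starting image
    for _ in range(n_iter):
        df = rl_correction(f, n, R, b)
        neg = df < 0                             # largest step keeping f positive
        lam_max = np.min(-0.99 * f[neg] / df[neg]) if neg.any() else 10.0
        lams = np.linspace(0.0, min(lam_max, 10.0), 51)[1:]
        f = f + lams[np.argmax([loglike(f + l * df, n, R, b) for l in lams])] * df
    return f
```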
It is obvious from Eqs. (2) to (4) that RL operates on a pre-defined pixel grid without any direct correlation between individual pixels. In particular, apart from the convolution with the transpose of the response matrix $`R_{ij}`$, there is nothing which prevents RL from matching the estimates $`e_i^k`$ to the measurement $`n_i`$, and noise can easily propagate into the reconstruction where it is generally amplified.
For these reasons we added a multiresolution analysis to the iterative procedure which aims to correlate the image pixels and to extract only the significant structure from the data.
Each iteration of our MREM algorithm is composed of four steps: First, we evaluate the normalised correction map
$$\delta h_j^k=\frac{\sum _{i=1}^N\left(\frac{n_i}{e_i^k}-1\right)R_{ij}}{\sqrt{\sum _{i=1}^N\frac{R_{ij}^2}{e_i^k}}},$$
(5)
for which $`Var(\delta h_j^k)=1`$. Second, $`\delta h_j^k`$ is transformed into the wavelet domain where it is represented by a set of wavelet coefficients $`w_m^l`$, $`l`$ representing the scales, and $`m`$ denoting the wavelet coefficients at this scale. At scales $`l>1`$ ($`l=1`$ represents a DC offset) the coefficients falling below a given threshold $`\tau ^l`$ are zeroed by applying the operator
$$\eta (w,\tau ^l)=\{\begin{array}{cc}\hfill 0:& |w|<\tau ^l\hfill \\ \hfill w:& |w|\tau ^l\hfill \end{array}.$$
(6)
This method is generally referred to as wavelet thresholding and has been proven successful for removing noise from a dataset without smoothing out sharp structures (Donoho 1993; Graps 1995). Backtransformation of the nonzero coefficients from the wavelet domain into the image domain then provides a de-noised correction map $`\delta \widehat{h}_j^k`$. In compact matrix notation, the second step is given by
$$\delta \widehat{h}^k=𝐖^T\eta 𝐖\delta h^k$$
(7)
where $`𝐖`$ is the discrete wavelet transform (throughout this paper we use the translation invariant ‘cycle spinning’ transformation of Coifman & Donoho (1995) and employ Coiflet wavelets with 4 parameters). Third, we calculate
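In terms of a standard wavelet library, the de-noising step of Eqs. (6) and (7) could be sketched as below. For brevity the sketch uses PyWavelets’ plain decimated transform rather than the translation-invariant ‘cycle spinning’ transform employed in our analysis, so it is illustrative only; image dimensions are assumed compatible with the decomposition depth.

```python
import pywt

def denoise_correction(dh, thresholds, wavelet="coif4"):
    """Apply the thresholded transform W^T eta W of Eq. (7) to the correction map dh.

    thresholds: one tau^l per detail level, ordered coarse to fine; the
    coarsest (DC-like) approximation coefficients are left untouched.
    """
    coeffs = pywt.wavedec2(dh, wavelet, level=len(thresholds))
    kept = [coeffs[0]]                    # approximation coefficients: no thresholding
    for detail, tau in zip(coeffs[1:], thresholds):
        kept.append(tuple(pywt.threshold(d, tau, mode="hard") for d in detail))
    return pywt.waverec2(kept, wavelet)
```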
$$\delta f_j^k=f_j^k\delta \widehat{h}_j^k\left(\frac{\sqrt{\sum _{i=1}^N\frac{R_{ij}^2}{e_i^k}}}{\sum _{i=1}^NR_{ij}}\right),$$
(8)
which, in the absence of any wavelet thresholding, is equivalent to the original RL correction map Eq. (3). In the last step, the previous estimate $`f_j^k`$ is updated using Eq. (4). Due to the wavelet thresholding, positivity of the pixel intensities is not implicitly assured, and we explicitly require $`f_j^{k+1}\ge f_ϵ`$, where $`f_ϵ`$ is a negligible intensity level.
For efficient de-noising, the scale-dependent thresholds $`\tau ^l`$ have to be related to the expected statistical noise $`\sigma ^l`$ in the wavelet domain at each scale. We estimate $`\sigma ^l`$ by simulations, where we replace $`n_i`$ in Eq. (5) by a Poisson deviate of $`e_i^k`$ and transform the resulting ‘mock correction map’ into the wavelet domain. In this approach it is important that the statistical noise in the MREM correction map $`\delta h_j^k`$ is independent of the pixel location $`j`$. For this reason we normalised $`\delta h_j^k`$ so that $`Var(\delta h_j^k)=1`$. We then define $`\tau ^l=s\sigma ^l`$, where $`s`$ specifies the significance level below which structures should be suppressed in the reconstruction. In the examples presented below we will vary $`s`$ between $`2.5`$ and $`3.5`$ in order to demonstrate the impact of the choice of $`s`$ on the reconstructed images.
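The Monte-Carlo calibration of $`\sigma ^l`$ can be sketched in the same spirit (again with our own names and conventions; the number of mock realisations is an arbitrary choice):

```python
import numpy as np
import pywt

def noise_thresholds(e, R, shape, s=3.0, levels=4, n_mock=50,
                     wavelet="coif4", seed=0):
    """Estimate tau^l = s*sigma^l from Poisson mock realisations of the model e."""
    rng = np.random.default_rng(seed)
    norm = np.sqrt((R.T ** 2) @ (1.0 / e))       # denominator of Eq. (5)
    sigma = np.zeros(levels)
    for _ in range(n_mock):
        mock = rng.poisson(e)                    # Poisson deviate of e_i^k
        dh = ((R.T @ (mock / e - 1.0)) / norm).reshape(shape)
        for l, detail in enumerate(pywt.wavedec2(dh, wavelet, level=levels)[1:]):
            sigma[l] += np.std(np.hstack([d.ravel() for d in detail]))
    return s * sigma / n_mock
```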
The final critical aspect of MREM is that we do not calculate corrections for all wavelet scales simultaneously. We begin by admitting only corrections corresponding to the largest wavelet scale, with the other scales simply being zeroed out; thus our estimate in the initial iterations corresponds only to the average large scale structure in the map. Once this converges, we then admit corrections from the next smallest wavelet scale, allow it to converge, and so forth. The resulting reconstruction is the final product of the algorithm, which generally requires between 20 and 40 iterations for our data.
This procedure of progressively admitting smaller wavelets is crucial to the performance of MREM, and has some rather interesting ramifications which we shall discuss in detail in a subsequent paper (Dixon et al., in preparation). We briefly note here that its main purpose is to aid in the discrimination of noise-induced corrections from that structure which is “stable” in the sense of being reproducible from different datasets. Examination of Eq. (3) indicates that if the estimates $`e_i^k`$ are very different from the data $`n_i`$, the corresponding corrections will also be large. Ideally, we would like this to occur only if the correction corresponds to some statistically interesting structure, but for arbitrary $`e_i^k`$ (e.g., the initial flat guess) this won’t be the case. By first converging to a coarse approximation, we get an estimate that is “close” to the next coarsest approximation, which generally forces the noise-induced corrections at that next smallest scale to be small compared to those which we deem “interesting”. It is further interesting to note that the unregularised RL iteration tends to pick out the larger scale average structure first, only adding details in later iterations, and in this sense the progressive scale procedure dovetails nicely with the known characteristics of the iteration.
The de-noising using the wavelet transform has the desired property of introducing pixel-to-pixel correlations in the image where the correlation length depends on the amount of structure in the data. Regions of the sky with uniform emission will be represented by few large-scale wavelet coefficients, while point sources are represented by few small-scale coefficients. An important feature of our algorithm is that it is convergent. If all significant structure has been extracted from the data, where ‘significant’ is defined by the choice of $`s`$, the thresholding operator will zero all wavelet coefficients and consequently the correction map will be structureless. At this point, further iterations won’t alter the reconstructed image anymore, hence we stop the iterations.
## 3 Simulations
To illustrate the performance of MREM with respect to the maximum entropy (ME) and the Richardson-Lucy (RL) algorithms, we apply them to simulated COMPTEL observations of 1.809 MeV $`\gamma `$-ray line emission. The mock data that are used in the simulations are based on a two-component data space model, composed of the instrumental background and adopted models for the $`\gamma `$-ray line distribution for two typical cases: a smooth large-scale emission model, and a rather structured model with emission on many spatial scales. The instrumental response and background were calculated as expected for the combination of observation periods 0.1–522.5, corresponding to data taken between May 1991 and June 1996. From both components of the data space model, mock datasets were created independently by means of a random number generator assuming Poisson noise. Both components were then added, and images have been reconstructed from the combined mock dataset. For the reconstructions it has been assumed that the instrumental background is known precisely, hence the resulting images are not subject to possible systematic uncertainties of the employed background model. They are sensitive, however, to statistical uncertainties which are due to the particular data ‘realisation’ as obtained by the random sampling procedure. To illustrate this sensitivity, the same mock dataset has been used for the instrumental background component in all simulations.
The following 1.809 MeV model intensity distributions have been chosen. First, we use an exponential disk model, i.e. the intensity distribution that is expected if the galactic <sup>26</sup>Al mass density followed a double exponential law with scale radius of $`R_0=4.5`$ kpc and scale height of $`z_0=90`$ pc. Model fitting has confirmed that these parameters provide a reasonable first-order description of 1.809 MeV emission (Knödlseder 1997). The total galactic <sup>26</sup>Al mass has been normalised to $`3M_{\mathrm{\odot }}`$, a value slightly in excess of recent findings (e.g. Diehl et al. 1997). Second, the EGRET $`>100`$ MeV all-sky map was taken as template for the 1.809 MeV intensity distribution. The 1.809 MeV intensity level of the map was adjusted to a plausible level by fitting the map to COMPTEL 1.8 MeV $`\gamma `$-ray line data. The first case tests the response of the image reconstruction algorithms to a smooth intensity distribution, while the second case represents probably a more realistic situation with structure on all spatial scales, from point-like to diffuse galactic plane emission.
### 3.1 Exponential disk model
The results of the exponential disk simulation are compiled in Fig. 1. For comparison, the intensity distribution of the exponential disk model is also shown. Since in our implementation ME and RL provide no criterion for where to stop the iterative procedure, we used the correlation coefficient between the reconstruction and the model intensity map to determine the iteration which provides the smallest discrepancy with the model. This is the case after iteration 8 for ME and the accelerated RL algorithm.
Both the ME and the RL reconstructions clearly pick up the emission ridge along the galactic plane with the highest intensities found towards the central radian ($`-30\mathrm{°}<l<30\mathrm{°}`$). The most striking difference between the model and the ME and RL reconstructions, however, is the lumpiness of the recovered sky maps. Although the emission follows on average the model intensity profile, it exhibits strong oscillations around this average, leading to ‘hot spots’ and emission gaps along the galactic plane. Indeed, these oscillations are already present in the first iterations of the reconstruction process and become more and more amplified with proceeding iterations. If the iterations are pursued beyond those shown in Fig. 1, the oscillations will break up, and the image will be composed of nearly isolated point sources (Knödlseder et al. 1996).
It is also interesting to recognise that the ME reconstruction is virtually identical to the RL reconstruction. The difference between both algorithms is that ME imposes an additional constraint on the reconstructed image in that it ‘pushes’ the image towards a ‘flat’ sky map – especially if the data are not very constraining. This results in systematically lower fluxes for the ME reconstructions with respect to RL, which can be seen from the intensity profiles in Fig. 1. If the ME iterations are continued further, and hence the entropy criterion is gradually weakened, the flux discrepancy between ME and RL disappears. For this reason we always use ‘high’ (e.g. 20-30) iterations when we determine fluxes from our ME sky maps.
In contrast to ME and RL, the MREM reconstructions provide rather smooth emission distributions. While some lumpiness remains for $`s=2.5`$, the images obtained with $`s=3.0`$ and $`3.5`$ show no ‘hot spots’ or emission gaps. In particular, the longitude profile is reasonably well reproduced and shows only small deviations from the model distribution. The most striking difference between the MREM sky maps and the model is the larger latitude extent of the reconstructions. This, however, is not surprising since the width of the exponential disk model of $`2.7\mathrm{°}`$ (FWHM) is considerably smaller than the instrument’s angular resolution of $`4\mathrm{°}`$ (FWHM) at 1.8 MeV (Schönfelder et al. 1993). Together with the weakness of the signal, this limits the achievable resolution in the reconstructions. Indeed, the width of the latitude profile depends on the selected significance level $`s`$, rising from $`5.3\mathrm{°}`$ for $`s=2.5`$ to $`9.6\mathrm{°}`$ for $`s=3.5`$. Obviously, the significance of the recovered emission features and the angular resolution are intimately related quantities. Note that the width of the latitude profiles obtained by ME and RL is $`5.7\mathrm{°}`$ (FWHM), which is also considerably wider than that of the model.
To judge the quality of the reconstructed images we determine the 1.8 MeV $`\gamma `$-ray line residuals by means of a maximum likelihood ratio test (de Boer et al. 1992). For this purpose the sky maps of Fig. 1 are convolved into the COMPTEL data space and added to the instrumental background model. Residual emission is then searched for by fitting point source models on top of the combined data space model for a grid of source positions. The results of this point source search are shown in Fig. 2. The quantity plotted is $`-2\mathrm{ln}\lambda `$, where $`\lambda `$ is the maximum likelihood ratio $`L(M)/L(S+M)`$, $`M`$ represents the (two-component) data space model, and $`S`$ the source model which is moved over the sky area searched for residual emission. In such a search, $`-2\mathrm{ln}\lambda `$ obeys a $`\chi _3^2`$ distribution; in studies of a given source, $`\chi _1^2`$ applies. In the latter case, the point source significance (in Gaussian $`\sigma `$) is given by $`\sqrt{-2\mathrm{ln}\lambda }`$.
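For a single trial position, the test statistic can be computed as in the following sketch (scipy’s bounded scalar minimiser and the upper bound on the source amplitude stand in for the actual fitting scheme; all names are ours):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def neg_loglike(alpha, n, m, src):
    """-ln L for Poisson data n, given the model m plus alpha times a source template."""
    e = m + alpha * src
    return -np.sum(n * np.log(e) - e)

def test_statistic(n, m, src, alpha_max=1e3):
    """-2 ln lambda = 2 [ln L(S+M) - ln L(M)] for one trial source position."""
    fit = minimize_scalar(neg_loglike, bounds=(0.0, alpha_max),
                          args=(n, m, src), method="bounded")
    return 2.0 * (neg_loglike(0.0, n, m, src) - fit.fun)
```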
The top panel in Fig. 2 shows the residuals of the instrumental background sample only, and hence reflects the statistical noise in the mock datasets (due to the dominance of the instrumental background component, the statistical noise is dominated by the background fluctuations). In the ideal case, the residuals of the reconstructed images should be almost identical to those of the background sample. Indeed, the residuals found on top of the MREM ($`s=3.0`$) reconstruction (panel d) are very similar to those expected for an ideal reconstruction (panel a). The features are basically identical; only small deviations are found in their amplitude, e.g. at $`l\sim 110\mathrm{°}`$ where MREM slightly overestimates the emission. The ME reconstruction (panel b) forces image flatness; hence the residuals reflect a prominent flux suppression of the entire plane emission. We therefore cannot easily tell to what extent noise has been included in the ME reconstruction. For RL (panel c), no residual 1.8 MeV emission is seen that correlates with the galactic plane. On the contrary, the likelihood ratios are even too small for the RL reconstruction with respect to the noise simulation, as expected if the data were overfitted by the intensity map. Additionally, prominent background features, like those at $`(l,b)=(113\mathrm{°},2\mathrm{°})`$ or at $`(l,b)=(81\mathrm{°},16\mathrm{°})`$, are drastically reduced in both the ME and RL residual maps, yet are perceptible in the reconstructed intensity maps (cf. Fig. 1). This illustrates that statistical noise in the data is at least partially fitted by the ME and RL sky maps. It follows that the lumpiness of the ME and RL reconstructions is due to overfitting of the data.
### 3.2 EGRET $`>100`$ MeV sky map
Figure 3 presents the results of the simulations based on the EGRET $`>100`$ MeV all-sky map. This map shows a ridge of diffuse emission along the galactic plane with a notable intensity enhancement towards the inner Galaxy, some prominent galactic point sources, some localised emission regions, and some extragalactic point sources. The intensity level of the EGRET $`>100`$ MeV all-sky map has been adjusted to a plausible 1.8 MeV intensity level by fitting the map to COMPTEL 1.8 MeV $`\gamma `$-ray line data, resulting in a 1.809 MeV line flux from the inner radian ($`|b|<20\mathrm{°}`$) of $`3\times 10^{-4}`$ ph cm<sup>-2</sup>s<sup>-1</sup>rad<sup>-1</sup>. This adjustment pushed most of the point sources in the EGRET map below the sensitivity limit of COMPTEL at 1.8 MeV, leaving Vela with $`3\times 10^{-5}`$ ph cm<sup>-2</sup>s<sup>-1</sup> and Geminga with $`1\times 10^{-5}`$ ph cm<sup>-2</sup>s<sup>-1</sup> as the most prominent objects.
Indeed, the only point source recovered in the ME and RL reconstructions is Vela ($`l=264\mathrm{°}`$, $`b=-3\mathrm{°}`$), while only a small hint of $`\gamma `$-ray emission is seen at the position of Geminga ($`l=195\mathrm{°}`$, $`b=4\mathrm{°}`$). Similar to the exponential disk simulation, ‘hot spots’ and emission gaps appear along the galactic plane which only occasionally coincide with localised features in the model map. Such coincidences are found e.g. at $`l\sim 80\mathrm{°}`$ (Cygnus) or at $`l\sim 45\mathrm{°}`$. However, localised emission features are also found in the exponential disk simulations at these positions (cf. Fig. 1) where no such features are present in the model. Since only the mock dataset of the (dominant) instrumental background component is common to both simulations, this strongly suggests that the observed features are at least partially due to positive statistical fluctuations of the background data. Other localised emission features in the EGRET map, like the spots at $`l\sim 20\mathrm{°}`$, $`l\sim 18\mathrm{°}`$, or $`l\sim -75\mathrm{°}`$ (Carina), coincide with negative statistical fluctuations of the background component and are cancelled out; consequently no feature is seen in the reconstructions at these positions. Additionally, artificial ‘hot spots’ appear in the ME and RL maps where no such features are present in the model. Examples are the strong feature at $`l\sim 115\mathrm{°}`$ and the spur towards negative latitudes at $`l\sim 15\mathrm{°}`$. Again, these features can also be perceived in the exponential disk simulations, confirming that they arise from the statistical noise of the background sample.
In contrast to ME and RL, MREM again provides much smoother reconstructions of the data, avoiding most of the artifacts. In the $`s=2.5`$ run, only the most prominent artifacts are visible (e.g. the spot at $`l\sim 115\mathrm{°}`$), but many of the real localised features are recovered ($`l\sim 80\mathrm{°}`$, $`l\sim 135\mathrm{°}`$, $`l\sim 45\mathrm{°}`$, and Vela). Increasing the requirement on the significance of the emission structures to $`s=3.0`$ removes the remaining artifacts, but also eliminates most of the localised emission features. Nevertheless, weak hints of Vela and the $`l\sim 135\mathrm{°}`$ source are still present in the sky map. These hints disappear when $`s`$ is increased to $`3.5`$. Again, the latitude extent of the reconstructions is slightly larger than that of the models, the combined result of the instrument’s angular resolution of only $`4\mathrm{°}`$ (FWHM) and a low signal-to-noise ratio. Yet, the extended diffuse emission above the galactic centre is still recovered in the maps.
The residual analysis of the MREM $`s=3.0`$ reconstruction reveals only weak emission at the position of the localised features, indicating that they are not very significant (cf. Fig. 4). The most prominent residuals are found at the position of Vela and at $`(l,b)=(81\mathrm{°},16\mathrm{°})`$, with likelihood ratios of $`-2\mathrm{ln}\lambda =16.6`$ and $`14.3`$, respectively. While the first residual corresponds to a real source in the EGRET map, the second one is a clear background fluctuation. If the existence of the Vela source were not known a priori, the likelihood ratio of $`16.6`$ would convert to a detection significance of $`3.3\sigma `$ (3 d.o.f.). Taking into account the number of trials made in the point source search, this value cannot be interpreted as a significant detection. However, if the Vela source is considered a known object, the likelihood ratio converts to a $`4.1\sigma `$ detection significance (1 d.o.f.). The major objective of a 1.809 MeV all-sky map, however, is the discovery of unknown objects, hence it is desirable that the Vela source is not recovered in the reconstruction. Otherwise, as demonstrated by the ME and RL or the MREM ($`s=2.5`$) reconstructions, artifacts will also enter the reconstruction, making the interpretation of the sky map difficult.
To illustrate that MREM indeed recovers point sources if they are significant, we performed an additional simulation in which we increased the intensity of the EGRET $`>100`$ MeV template by a factor of 5 with respect to the 1.809 MeV intensity. This corresponds to an increase in the signal-to-noise ratio by a factor of 5, which is equivalent to a sensitivity enhancement of the same magnitude. The resulting MREM reconstruction is shown in Fig. 5 for $`s=3.0`$. As expected, much more structure along the galactic plane is now recovered. Prominent point sources, such as Vela, Geminga, or the Crab, and localised emission features, e.g. in Cygnus and Carina, are now clearly visible. This demonstrates that the absence of these features in the MREM map derived for the 1.809 MeV intensity level (Fig. 3) is a matter of their limited significance.
## 4 The COMPTEL 1.8 MeV sky
The MREM algorithm is now applied to real COMPTEL 1.8 MeV data, taken during observation periods 0.1 - 522.5 (May 1991 to June 1996). The instrumental background component was estimated using contemporaneous data at adjacent energies, following the procedure described in Knödlseder et al. (1996). The time-variability of the instrumental background was taken into account by determining the background model on a single-observation basis, and by its proper relative normalisation using the activation history of major background components (Oberlack 1997). Since the absolute normalisation as well as the $`\overline{\phi }`$ distribution of the background model are only weakly constrained, we added an additional step in which we determine both by all-sky model fitting (maximum likelihood optimisation). For this purpose we fitted the instrumental background model together with a template for the 1.809 MeV intensity distribution to the COMPTEL 1.8 MeV data, determining independent scaling factors for all $`\overline{\phi }`$ layers of the background model as well as a global scaling factor for the 1.8 MeV intensity template. This procedure provides an estimate of the total 1.809 MeV sky flux and an improved estimate of the instrumental background component, which is then used for image reconstruction. For the 1.809 MeV template we used the 53 GHz free-free emission map derived from COBE/DMR data (Bennett et al. 1992), which was found to provide the best description of the COMPTEL 1.8 MeV data in a recent study using a wide variety of models (Knödlseder et al. 1999). An alternative method for deriving an instrumental background model for the analysis of 1.8 MeV $`\gamma `$-ray line data is described in Bloemen et al. (1999b).
Figure 6 shows the COMPTEL 1.809 MeV $`\gamma `$-ray line all-sky maps that are obtained by the different reconstruction algorithms. The maximum entropy and Richardson-Lucy maps are similar to those presented in previous work (Diehl et al. 1995; Oberlack et al. 1996; Knödlseder et al. 1996; Oberlack 1997; Bloemen et al. 1999b) with minor differences being due to differences in the analysed data volume or the employed background modelling procedure. The most distinct feature in these maps is emission along the ridge of the galactic plane. Again we see the lumpiness that our above simulations also show for these methods, indicating overfitting of the data. According to the discussion above we cannot decide from the sky maps alone which of the lumps may correspond to real emission and which are artifacts due to the background noise.
MREM avoids this confusion, efficiently suppressing the noise components of the image. The reconstructed intensity profiles are characterised by a notable asymmetry with respect to the galactic centre and some localised emission features. The most prominent of these features is located in the Cygnus region around $`(l,b)\sim (80\mathrm{°},0\mathrm{°})`$ where a bright extended emission spot is clearly separated from the inner galactic ridge emission by a bridge of relatively low 1.8 MeV intensity. The same feature is also seen in the ME and RL maps, where it shows a much more complex structure. The MREM reconstructions suggest that most of this structure is not individually significant, and could as well be more diffuse or located differently. We may therefore safely conclude that there is significant Cygnus region emission, separated from the inner galactic ridge.
This inner ridge appears smooth in the MREM image, yet here too reveals a pronounced asymmetry with respect to the galactic centre. While at positive longitudes the intensity drops steeply from $`l\sim 30\mathrm{°}`$ to $`l\sim 50\mathrm{°}`$, the 1.8 MeV emission extends continuously to $`l\sim 240\mathrm{°}`$ at negative longitudes. Along the ridge the MREM reconstructions reveal only little structure. Of the many ‘hot spots’ seen in the ME and RL reconstructions, only the most prominent ones are still perceptible in the MREM image obtained for $`s=2.5`$. Increasing the significance level to $`s=3.0`$ removes most of them, leaving only two emission spots at $`l=317\mathrm{°}`$ and $`332\mathrm{°}`$ which are separated by a weak emission gap at $`l=324\mathrm{°}`$. This is the most persistent structure along the inner galactic plane ridge and is still clearly visible for a significance level of $`s=3.5`$. It is also very pronounced in the ME and RL maps. Additional hints of weak excess emission are found in the longitude profile at $`l=21\mathrm{°},30\mathrm{°},44\mathrm{°},286\mathrm{°}`$, and $`345\mathrm{°}`$, but they disappear if $`s`$ is increased to $`3.5`$. Obviously, the significance of these excesses is close to the sensitivity limit of COMPTEL, and the assessment of their reality needs more dedicated studies.
The distinct emission gap which separates two localised emission regions at $`l=266\mathrm{°}`$ (Vela) and $`l=286\mathrm{°}`$ (Carina) in the ME and RL maps is not seen in the MREM reconstructions. Yet, a weak intensity dip is found in the $`s=2.5`$ and $`3.0`$ MREM maps at this location, indicating that some structure may indeed be present. The prominent ‘hot spot’ towards the galactic centre, clearly visible in the ME and RL maps at $`l=4\mathrm{°}`$, is only present in the MREM map for $`s=2.5`$, but disappears for higher significance levels. Apparently, the data are also consistent with a smooth emission profile in this region.
Comparison of the 1.8 MeV maps with the EGRET map simulations indicates that the 1.809 MeV emission is strongly confined to the galactic plane. In particular, there is no hint of an extended emission component similar to that seen in the EGRET map above the inner Galaxy. Indeed, model fitting using exponential disk models revealed a small scale height of $`z_0=90`$ pc for the galactic <sup>26</sup>Al distribution (Knödlseder 1997). Comparison of the 1.8 MeV maps with the exponential disk simulations (for which a scale height of $`z_0=90`$ pc was assumed) confirms this result. In particular, the width of the MREM ($`s=3.0`$) 1.8 MeV latitude profile ($`7.5\mathrm{°}`$ FWHM) is even smaller than that obtained for the exponential disk simulation ($`8.8\mathrm{°}`$ FWHM), indicating that the scale height of the <sup>26</sup>Al distribution may be even below 90 pc.
Near the anticentre, all maps of Fig. 6 show indications of extended 1.8 MeV $`\gamma `$-ray line emission. In the ME and RL reconstructions, weak emission spots are spread over a region extending from $`125\mathrm{°}`$ to $`170\mathrm{°}`$ in galactic longitude and from $`20\mathrm{°}`$ to $`30\mathrm{°}`$ in galactic latitude. The MREM algorithm combines these spots into a more concentrated emission structure, roughly located at $`l\sim 160\mathrm{°}`$ with an angular extent of $`\sim 20\mathrm{°}`$. This again illustrates that the spots in the ME and RL images are not individually significant, but combined they provide a significant 1.809 MeV emission feature.
Residual maximum likelihood ratio maps of the COMPTEL 1.8 MeV all-sky maps are compiled in Fig. 7. The ME reconstruction shows significant residual emission along the galactic plane which is strongly correlated with the reconstructed sky intensity distribution. The intensity profiles in Fig. 6 illustrate that iteration 6 of the ME reconstruction considerably underestimates 1.8 MeV intensities with respect to RL and MREM. For higher ME iterations, this underestimation disappears as the maximum entropy reconstruction approaches the maximum likelihood solution. Yet, the diffuse intensity distribution breaks up into nearly isolated point sources for late iterations due to overfitting of the data (Knödlseder et al. 1996). Therefore we typically present COMPTEL ME images and longitude profiles from ‘early’ iterations in order not to emphasise artificial structures, while ‘late’ iterations are used to derive 1.809 MeV fluxes and latitude profiles in order to recover the correct flux values (Diehl et al. 1995; Oberlack et al. 1996). Alternatively, intensity distributions for ‘late’ iterations have been smoothed to the instrumental resolution for image presentation to reduce the artificial lumpiness (Oberlack 1997; Strong et al. 1999).
The RL reconstruction also shows residuals that are correlated with the galactic plane, although they are much smaller than for ME. Yet there are regions where almost no residuals are found, in particular at negative galactic longitudes ($`l<0\mathrm{°}`$) above and below the galactic plane. Comparison with the simulations suggests that the lack of residuals is again due to overfitting of the data. In contrast, the MREM $`(s=3.0)`$ reconstruction provides residuals that appear uncorrelated with the galactic plane. This clearly illustrates that the MREM map presents a statistically satisfactory description of the COMPTEL 1.8 MeV data.
## 5 Conclusions
An alternative imaging method for COMPTEL $`\gamma `$-ray data is presented, using a newly developed multiresolution reconstruction algorithm based on wavelets. The maximum entropy and Richardson-Lucy algorithms, which have been used previously for COMPTEL image reconstruction, are very sensitive to statistical noise in the data, leading to image lumpiness and ‘hot spots’ in the reconstructed intensity maps. In particular, artificial ‘hot spots’ are indistinguishable from real point sources on the basis of the sky maps alone, requiring a substantial additional analysis effort to assess their reality. We present the resulting reconstructed image of the 1.809 MeV sky as an alternative view, complementing previously presented images from the other methods, and pointing out their limitations. In particular, we caution against overinterpreting structure in the ME and RL sky maps when modelling the galactic <sup>26</sup>Al emission for other studies (e.g. Lentz et al. 1998).
Applying our new algorithm to COMPTEL data largely reduces or even removes artificial ‘hot spots’ and image lumpiness, depending on the selected significance requirement $`s`$. Simulations indicate that $`s=3.0`$ provides a reasonable choice for the reconstruction: while artifacts are mostly removed from the image, hints of weak ($`3-4\sigma `$) point sources are still present. Nevertheless, it should be clear that the MREM sky map obtained in this work does not necessarily provide a realistic view of the 1.809 MeV sky. The real 1.809 MeV intensity profile is probably much more confined to the galactic plane than the emission in the MREM sky map, but with an angular resolution of $`4\mathrm{°}`$ (FWHM), COMPTEL is not capable of resolving this confinement. The 1.809 MeV emission along the galactic plane may be much more structured than shown in the MREM map, but the sensitivity of COMPTEL is not sufficient to map this structure. In this sense, MREM provides a more reliable image of the 1.809 MeV $`\gamma `$-ray sky than ME and RL since it does not show emission structures for which there is no strong evidence. Weak emission features which are close to the sensitivity limit of COMPTEL may however be suppressed in the MREM maps. Therefore ME and RL maps are used as complementary analysis tools.
###### Acknowledgements.
JK is supported by the European Community through grant number ERBFMBICT 950387. The COMPTEL project is supported by the German government through DARA grant 50 QV 90968, by NASA under contract NAS5-26645, and by the Netherlands Organisation for Scientific Research NWO. This research has made use of data obtained through the High Energy Astrophysics Science Archive Research Center Online Service, provided by the NASA/Goddard Space Flight Center.
## 1 Introduction
In reactions between particles which lead to multi-hadronic final states, the constructive interference between two identical bosons, the so-called Bose-Einstein Correlation (BEC), is well known. These correlations lead to an enhancement of the number of identical bosons over that of non-identical bosons when the two particles are close to each other in phase space. Experimentally this effect, also known as the GGLP effect, was first observed in particle physics by Goldhaber et al. in like-sign charged pions produced in $`\overline{p}p`$ annihilations at $`\sqrt{s}`$ = 2.1 GeV. In addition to the quantum mechanical aspect of the BEC, these correlations are also used to estimate the dimension of the emitting source of the identical bosons . Recently the interest in the BEC of identical charged pions has been extended to their possible effect on the mass measurement of the $`W`$ gauge boson in the reaction $`e^{-}e^+\to W^+W^{-}`$, which has been dealt with theoretically and experimentally . Finally, in high energy reactions the BEC of charged pions may influence the properties of the multi-hadron final states. Therefore more effort is now devoted to estimating these effects by incorporating BEC in the various model-based Monte Carlo programs that are confronted with the data.
The relation to the size of the emission region, which is often assumed to be spherical with a Gaussian distribution, is given by the well known formula for the correlation function $`C_2`$ of two identical particles with four momenta $`q_i\ (i=1,2)`$ and $`Q=\sqrt{-(q_1-q_2)^2}`$, namely,
$$C_2(Q)\equiv \sigma _{tot}\frac{d^2\sigma _{12}}{d\sigma _1d\sigma _2}=1+\lambda e^{-r^2Q^2},$$
(1)
where $`\lambda `$, the chaoticity parameter, which can vary between 0 and +1, measures the strength of the effect. The term $`e^{-r^2Q^2}`$ is the normalised Fourier transform of $`\rho `$, the source density. Some recent reviews which summarize the underlying BEC theoretical aspects and the experimental results are given, for example, in references .
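To make the shape of Eq. (1) concrete, a short numerical sketch follows (added for illustration only; the parameter values, the helper name `c2`, and the unit conversion $`\mathrm{}c\simeq 0.197`$ GeV fm are assumptions, not results from the text):

```python
# Minimal sketch of the correlation function of Eq. (1),
# C2(Q) = 1 + lambda * exp(-r^2 Q^2); illustrative parameters only.
import numpy as np

HBARC = 0.1973  # GeV fm: converts a radius in fm to GeV^-1

def c2(q_gev, lam=0.5, r_fm=1.0):
    """Correlation function for Q in GeV and a Gaussian source of radius r_fm."""
    r = r_fm / HBARC  # source radius in GeV^-1
    return 1.0 + lam * np.exp(-(r * q_gev) ** 2)

for q in (0.0, 0.1, 0.2, 0.5):
    print(q, c2(q))   # enhancement 1 + lambda at Q = 0, tending to 1 at large Q
```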
It has further been shown that, under certain conditions, a Bose-Einstein like enhancement can also be expected in a boson-antiboson system like the $`K^o\overline{K}^o`$ pair if the $`\lambda `$ parameter is larger than zero. In fact, in a sample of spinless boson-antiboson pairs, which is a mixture of C = +1 and C = –1 states, the C = +1 part behaves like identical bosons, that is, it produces a BEC like low mass enhancement, whereas the C = –1 part decreases to zero as $`Q_{K\overline{K}}\to 0`$. This has been experimentally demonstrated by the OPAL collaboration in their study of the $`K_S^oK_S^o`$ system and was later confirmed by the DELPHI and ALEPH groups at LEP. This GGLP low mass enhancement is the result of the spatially symmetric state, which is always the case for identical bosons and is also true for the C = +1 part of the boson-antiboson system.
Recently it has been pointed out that a pair of identical fermions, $`ff`$, having a total spin of $`S`$ = 1, will have a $`Q_{ff}`$ dependence near threshold similar to that of a boson-antiboson pair with the eigenvalue C = –1. This follows from the Pauli exclusion principle and the expectation that, in the absence of low mass p-wave resonances, the relative contribution of the $`\mathrm{}=0`$ state increases as $`Q_{ff}`$ decreases due to the angular momentum barrier. The measurement of this behaviour allowed a first estimate of the dimension $`r`$ of the identical-fermion $`\mathrm{\Lambda }\mathrm{\Lambda }(\overline{\mathrm{\Lambda }}\overline{\mathrm{\Lambda }})`$ emitter, in a similar way to what has been done in the past for identical-boson emitters.
Just as the Pauli principle can be applied to particles in the same spin multiplet to show BEC-like effects, and can be generalised to particles in the same isospin multiplet, one can also consider using generalised Bose statistics to treat hadrons in the same isospin multiplet as identical.
Bose statistics allows only even partial waves for states of two identical spinless bosons. Similarly, generalised Bose statistics allows only even partial waves for states of two spinless bosons in the same isospin multiplet if they are in an isospin eigenstate which is symmetric under isospin permutations; e.g. I=1 for two kaons or I=0 or 2 for two pions. Thus pairs of bosons which are in these symmetric isospin eigenstates can be expected to show the analogue of Bose enhancement. These effects should be easily observable in cases where kaon or pion pairs are produced inclusively from an initial isoscalar state. In this case we shall see that the s-wave amplitudes for $`K^+K^o`$ and $`\pi ^+\pi ^o`$ states should be exactly the same as those for the corresponding identical boson states $`K^+K^+`$ and $`\pi ^+\pi ^+`$ produced in the same experiment<sup>3</sup><sup>3</sup>3Whenever we refer to a specific two-boson state we also mean its charge conjugate one..
Here we discuss the consequences of generalised Bose statistics and isospin invariance for the properties of two-boson systems belonging to the same isospin multiplet which emerge from an I=0 state. As an example we address our study to the $`K^+K^o`$ and $`\pi ^+\pi ^o`$ pairs present in hadronic $`Z^o`$ decays. In Section 2 we present the basic relations between pairs of kaons obtained from generalised Bose statistics. Section 3 deals specifically with questions concerning the interpretation of the observed $`K_S^oK_S^o`$ low mass enhancement in multi-hadronic final states. In Section 4 we deal with the two-pion system and in Section 5 we discuss the possible deviations from an isoscalar dominance. Finally a summary and conclusions are presented in Section 6.
## 2 Isospin Invariance applied to hadrons produced from an I=0 state
In many cases boson pairs, like the $`KK`$ and $`\pi \pi `$ systems, are produced from an initial state which is isoscalar to a very good approximation. Among them are pairs produced from an initial multi-gluon state as in the central region of a high energy collision, pairs from hadronic decays of isoscalar heavy quarkonium resonances, e.g. $`J/\psi `$ or $`\mathrm{\Upsilon }`$, or pairs produced in $`Z^o`$ decays. In some of these cases the initial state is however not pure I=0 but also has some contamination from an I=1 component which is mixed in, as in those processes where the $`J/\psi `$ and $`\mathrm{\Upsilon }`$ decay to hadrons via one-photon annihilation. Depending on the specific case, methods can be applied to reduce, or even to eliminate, this contamination. For example, the subsample of the C = –1 quarkonia which decay into an odd number of pions ensures, due to G-parity, that the hadronic final state is in an I=0 state. This method is not useful for the BEC measurements of the hadronic $`Z^o`$ decays. Multi-hadron final states however which originate from the $`Z^o`$ decay to the heavy quarks, $`s\overline{s},c\overline{c}`$ and $`b\overline{b}`$, are in an I=0 state. An efficient signature for a $`Z^o\to s\overline{s}`$ decay produced in $`e^+e^{-}`$ collisions is for example given by the hadronic final states which contain a high momentum $`\mathrm{\Lambda }`$ . In fact for $`Z^0`$ decay events with a $`\mathrm{\Lambda }`$ momentum $`P_\mathrm{\Lambda }/P_{beam}>0.4`$ the fraction of I=0 may amount to more than 70$`\%`$.
In the following we restrict our study to the three di-kaon systems with strangeness +2, namely the $`K^+K^+`$, $`K^oK^o`$ and $`K^+K^o`$ pairs, which are part of a multi-particle final state produced from an isoscalar state. We note that $`K^+K^+`$ and $`K^oK^o`$ are pure I=1 isospin states, while the $`K^+K^o`$ is a mixture of two isospin states, I=1 and I=0. Bose statistics further tells us that in the KK centre-of-mass system the $`K^+K^+`$ and $`K^oK^o`$ are built of two identical particles and therefore have only even partial waves. The $`K^+K^o`$ has both even and odd partial waves, because these two kaons are not identical. Generalised Bose statistics, however, tells us that the I=1 state of this system has only even partial waves while the I=0 state has only odd partial waves.
Since the $`K^+K^+`$ state has isospin quantum numbers $`(I,I_z)=(1,+1)`$, the accompanying multi-particle state is required by isospin invariance to be an isospin eigenstate with the isospin quantum numbers $`(I,I_z)=(1,-1)`$, which we denote as $`X_{1,-1}`$. Isospin invariance further requires that the other multi-particle states in the same isospin multiplet, denoted by $`X_{1,I_z}`$, be equally produced together with the two-kaon states carrying the quantum numbers $`(1,-I_z)`$. We can therefore write the following relation between the amplitudes for production of these states from an initial isoscalar state, denoted by $`i_o`$,
$$\sqrt{2}A[i_o\to K^+(\vec{p})K^+(-\vec{p})X_{1,-1}]=\sqrt{2}A[i_o\to K^o(\vec{p})K^o(-\vec{p})X_{1,+1}]=$$

$$=A[i_o\to K^+(\vec{p})K^o(-\vec{p})X_{1,0}]+A[i_o\to K^o(\vec{p})K^+(-\vec{p})X_{1,0}],$$
$`(2)`$
where $`\vec{p}`$ denotes the momentum of the kaon in the centre of mass system of the two kaons.
There is an additional isospin-zero amplitude for the $`K^+K^o`$ final state which can be produced together with an isoscalar multi-particle state which we denote by $`X_{0,0}`$. Since the generalized Bose statistics requires the isoscalar state to be antisymmetric in space, the amplitude for the production of this state satisfies the relation:
$$A[i_o\to K^+(\vec{p})K^o(-\vec{p})X_{0,0}]=-A[i_o\to K^o(\vec{p})K^+(-\vec{p})X_{0,0}].$$
$`(3)`$
This isoscalar contribution clearly vanishes in the limit $`\vec{p}\to 0`$ which is relevant for the BEC. Since the multi-particle states $`X_{1,0}`$ and $`X_{0,0}`$ accompanying the kaon pair have different isospin, they are orthogonal and all interference terms between the two isospin amplitudes must cancel if there is no measurement on the accompanying multi-particle state. Plots of the number of pairs versus Q should be identical for $`K^oK^+`$, $`K^oK^o`$ and $`K^+K^+`$ in the low Q region where only s-waves contribute. This means that if a BEC enhancement is seen in the $`K^+K^+`$ it should also be present in the $`K^oK^+`$ and the $`K^oK^o`$ systems. At higher energy, as soon as p-waves can contribute to $`K^oK^+`$ but of course not to $`K^+K^+`$ or $`K^oK^o`$, there should be an excess of events for $`K^oK^+`$.
We now note that the $`K^o`$ states are in general not detected as such but rather as $`K_S^o`$ and $`K_L^o`$, which are mixtures of $`K^o`$ and $`\overline{K}^o`$. The contribution from the $`K^o`$ component is well defined and satisfies the isospin predictions. However, the contribution from the $`\overline{K}^o`$ component is completely unrelated except for the fact that it is positive definite and thus the isospin relations provide lower bounds. A detailed analysis is given below for the $`K_S^oK_S^o`$ system where the simplifications from the Bose symmetry of the identical particles allow the testing of reasonable assumptions for the additional $`K^o\overline{K}^o`$. contribution. The $`K^+K_S^o`$ system is more complicated because of the additional unknown contributions from odd partial waves, and is not treated further here.
## 3 The $`K_S^oK_S^o`$ system
The $`K_S^oK_S^o`$ system has only even partial waves and is a linear combination of $`K^oK^o`$ and $`K^o\overline{K}^o`$. The $`K^oK^o`$ component is related to the $`K^+K^+`$ component by the isospin relation (2). We can thus write the following relations for the probabilities, denoted by P, for the detection of these states.
$$P[i_o\to K^o(\vec{p})K^o(-\vec{p})X_{1,+1}\to K_S^o(\vec{p})K_S^o(-\vec{p})X_{1,+1}]=$$

$$(1/2)P[i_o\to K^o(\vec{p})K^o(-\vec{p})X_{1,+1}]=(1/2)P[i_o\to K^+(\vec{p})K^+(-\vec{p})X_{1,-1}],$$
$`(4)`$
where the factor (1/2) arises from the division of $`K^oK^o`$ into $`K_L^oK_L^o`$ and $`K_S^oK_S^o`$.
In a realistic experiment the final $`K_S^oK_S^o`$ is identified but the specific multi-particle state $`X_{I,I_z}`$ is not, and the result is obtained by summing over all possible multi-particle final states, here denoted simply by $`X`$. Since these states include both states of strangeness $`\pm 2`$ and strangeness 0, the final $`K_S^oK_S^o`$ pairs included in the sum may come not only from $`K^oK^o`$ and $`\overline{K}^o\overline{K}^o`$ states but also from the even parity $`\overline{K}^oK^o`$ and $`K^o\overline{K}^o`$ states. Thus we can write
$$\sum _XP[i_o\to K_S^o(\vec{p})K_S^o(-\vec{p})X]=$$

$$(1/4)\sum _XP[i_o\to K^+(\vec{p})K^+(-\vec{p})X]+(1/4)\sum _XP[i_o\to K^{-}(\vec{p})K^{-}(-\vec{p})X]$$

$$+\sum _XP[i_o\to K^o\overline{K}^oX\to K_S^o(\vec{p})K_S^o(-\vec{p})X],$$
$`(5a)`$
where $`P[i_o\to K^o\overline{K}^oX\to K_S^o(\vec{p})K_S^o(-\vec{p})X]`$ denotes the sum of the probabilities of all transitions from the initial state $`i_o`$ to the final state $`K_S^o(\vec{p})K_S^o(-\vec{p})X`$ via any $`K^o\overline{K}^oX`$ state. This can be conveniently rewritten
$$\frac{\sum _XP[i_o\to K_S^o(\vec{p})K_S^o(-\vec{p})X]}{\sum _XP[i_o\to K^+(\vec{p})K^+(-\vec{p})X]+\sum _XP[i_o\to K^{-}(\vec{p})K^{-}(-\vec{p})X]}=$$

$$=\frac{1}{4}+\frac{\sum _XP[i_o\to K^o\overline{K}^oX\to K_S^o(\vec{p})K_S^o(-\vec{p})X]}{\sum _XP[i_o\to K^+(\vec{p})K^+(-\vec{p})X]+\sum _XP[i_o\to K^{-}(\vec{p})K^{-}(-\vec{p})X]}.$$
$`(5b)`$
This last relation has a bearing on the analysis of the enigmatic $`f_o(980)`$ scalar resonance which has a long history regarding its nature and decay modes . Its existence is well established by its decay into a pair of pions. As for its decay into the $`K\overline{K}`$ final states the situation is more complicated. The $`f_o(980)`$ central mass value lies below the $`K\overline{K}`$ threshold; however, the upper part of its width extends above it. At the same time the $`f_o(980)\to K\overline{K}`$ decay branching is a major tool in nailing down the nature of this resonance and its total width. Since the analyses of the final $`K^+K^{-}`$ state are handicapped by the strong presence of the $`\varphi (1020)\to K^+K^{-}`$ decay, many of the analyses utilised instead the $`K_S^oK_S^o`$ final state. The origin of the excess of $`K_S^oK_S^o`$ near threshold however could a priori have two sources: the decay products of the $`f_o(980)`$ resonance and the BEC enhancement.
Since the denominators of Eq. (5b) come from exotic kaon pair states which have no resonances, the presence of resonances in the numerator of the second term on the right hand side of (5b) will show up as energy-dependent enhancements over the background, which is expected to be similar to that of the first term, namely $`1/4`$. Note that the observed number of counts for decays into the $`K_S^oK_S^o`$ mode is reduced by an additional factor of (4/9) because in general only the $`\pi ^+\pi ^{-}`$ decay mode of each $`K_S^o`$ is detected. Thus the statistical errors in the denominators of Eq. (5b) are expected to be much lower than those of the numerators.
Eqs. (5) relate the BEC excess of the $`K_S^oK_S^o`$ system to that of the $`K^+K^{-}`$ system which, for example, was measured and found to exist in the hadronic $`Z^o`$ decay events . The experimental results on the low mass enhancement seen in the $`K^+K^{-}`$ and $`K_S^oK_S^o`$ systems determine the contribution from the $`K^o\overline{K}^o`$ states, which may be taken as the upper limit of the $`f_o(980)\to K^o\overline{K}^o`$ decay rate.
## 4 The generalised Bose statistics applied to pion pairs
As in the case of the kaon pairs, the production of $`\pi ^+\pi ^+`$ and $`\pi ^+\pi ^o`$ pairs in a multi-particle final state can be related if produced from an initial isoscalar state. Here $`\pi ^+\pi ^+`$ is a pure isospin state with I=2, while $`\pi ^+\pi ^o`$ is a mixture of two isospin states with I=2 and I=1. Bose statistics tells us that in the $`\pi \pi `$ centre-of-mass system the $`\pi ^+\pi ^+`$ system has two identical particles and therefore has only even partial waves. The $`\pi ^+\pi ^o`$ has both even and odd partial waves, because these two pions are not identical. But the generalized Bose statistics tells us that the I=2 state of this system has only even partial waves and the I=1 state has only odd partial waves.
Since the $`\pi ^+\pi ^+`$ state has isospin quantum numbers $`(I,I_z)=(2,+2)`$, the remaining multi-particle state, which we denote by $`X_{2,-2}`$, is required by isospin invariance to be an isospin eigenstate with the isospin quantum numbers $`(I,I_z)=(2,-2)`$. Isospin invariance further requires that the other multi-particle states in the same isospin multiplet, denoted by $`X_{2,I_z}`$, be equally produced together with the two-pion states carrying the quantum numbers $`(2,-I_z)`$. We can therefore write the following relation between the amplitudes for production of these states from an initial isoscalar state, denoted by $`i_o`$
$$\sqrt{2}A[i_o\to \pi ^+(\vec{p})\pi ^+(-\vec{p})X_{2,-2}]=A[i_o\to \pi ^+(\vec{p})\pi ^o(-\vec{p})X_{2,-1}]+$$

$$+A[i_o\to \pi ^o(\vec{p})\pi ^+(-\vec{p})X_{2,-1}],$$
$`(6)`$
where $`\vec{p}`$ is the momentum of the pion in the centre of mass system of the two pions.
There is an additional isospin-one amplitude for the $`\pi ^+\pi ^o`$ final state which can be produced together with an I=1 multi-particle state which we denote by $`X_{1,-1}`$. Since the generalized Bose statistics requires the I=1 state to be antisymmetric in space, the amplitude for the production of this state satisfies the relation:
$$A[i_o\to \pi ^+(\vec{p})\pi ^o(-\vec{p})X_{1,-1}]=-A[i_o\to \pi ^o(\vec{p})\pi ^+(-\vec{p})X_{1,-1}].$$
$`(7)`$
This I=1 contribution clearly vanishes in the limit $`\vec{p}\to 0`$ which is relevant for BEC. The I=2 and I=1 amplitudes have opposite parity. Again the interference terms between the two isospin amplitudes cancel out if there is no measurement on the accompanying multi-particle state. Plots of the number of events versus Q should be identical for $`\pi ^o\pi ^+`$ and $`\pi ^+\pi ^+`$ in the low Q region where only s-waves contribute. As soon as p-waves can contribute to $`\pi ^o\pi ^+`$ but of course not to $`\pi ^+\pi ^+`$ there should be an excess of events for $`\pi ^o\pi ^+`$. One can expect a large p-wave contribution because of the presence of the $`\rho `$ resonance, and the tail of the $`\rho `$ may still be appreciable at the $`\pi \pi `$ threshold.
In the $`\pi \pi `$ system there are more states and more isospin amplitudes than in the KK system. There are also the $`\pi ^+\pi ^{-}`$ and $`\pi ^o\pi ^o`$ states and an additional I=0 amplitude. The $`\pi ^o\pi ^o`$ state has two identical particles and only even partial waves in the $`\pi \pi `$ center of mass system. But it is a linear combination of two isospin states, I=0 and I=2, and therefore is not related directly to the even partial waves of the I=2 system. The $`\pi ^+\pi ^{-}`$ state has all three isospin eigenvalues, 0, 1 and 2, and both even and odd partial waves. The odd partial wave amplitudes have I=1 and are directly related to the odd partial waves of the $`\pi ^o\pi ^+`$ system. The even partial waves have both I=0 and I=2 components and can be related to the other I=2 and I=0 amplitudes by a full amplitude analysis. In the low Q region where only s-waves contribute, the $`\pi ^+\pi ^+`$, $`\pi ^+\pi ^{-}`$ and $`\pi ^o\pi ^o`$ amplitudes depend upon only two isospin amplitudes, I=0 and I=2. Their intensities in this region satisfy a triangular inequality
$$\sum _X\left|\sqrt{(2/3)P[i_o\to (\pi ^o\pi ^o)X]}-\sqrt{(1/3)P[i_o\to (\pi ^+\pi ^{-})_eX]}\right|\le $$

$$\le \sum _X\sqrt{P[i_o\to (\pi ^\pm \pi ^\pm )X]}=\sum _X\sqrt{P[i_o\to (\pi ^\pm \pi ^o)_eX]}\le $$

$$\le \sum _X\left|\sqrt{(2/3)P[i_o\to (\pi ^o\pi ^o)X]}+\sqrt{(1/3)P[i_o\to (\pi ^+\pi ^{-})_eX]}\right|,$$
$`(8)`$
where the notation $`(\pi \pi )_e`$ is used to indicate that only even partial waves for the $`(\pi \pi )`$ final state are included in the sum.
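As a small numerical illustration of Eq. (8), the following sketch computes the bounds it places on the like-sign pair intensity; the intensity values and the helper name are hypothetical, chosen only to show the mechanics:

```python
# Sketch of the s-wave triangular inequality of Eq. (8): given (pi0 pi0)
# and even-partial-wave (pi+ pi-) intensities, it bounds the like-sign
# pair intensity.  All numbers below are hypothetical.
from math import sqrt

def like_sign_bounds(p_00, p_pm_even):
    a = sqrt(2.0 / 3.0 * p_00)        # sqrt((2/3) P[pi0 pi0])
    b = sqrt(1.0 / 3.0 * p_pm_even)   # sqrt((1/3) P[(pi+ pi-)_e])
    return (a - b) ** 2, (a + b) ** 2

lo, hi = like_sign_bounds(p_00=1.2, p_pm_even=2.4)  # hypothetical intensities
print(lo, hi)   # allowed range for P[pi+- pi+-]
```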
## 5 Possible deviations from isoscalar dominance
Precise calculations of the errors in the predictions based on an assumed charge-symmetric or isoscalar final state are difficult. We present here some arguments supporting the neglect of deviations from symmetry. But rather than trying to improve these arguments it seems more profitable to test these predictions by experiment. Since the isospin predictions are energy independent, they can be tested over an energy range which includes not only the Bose-Einstein and resonance regions but also higher relative momenta where both these effects should be absent.
In processes leading to multi-particle states, nearly all of the final quarks and antiquarks are produced by isoscalar gluons and this should enhance isoscalar dominance. For example, the charge asymmetry in a fragmentation process initiated by a $`u\overline{u}`$ pair produced in a $`Z^o`$ decay can easily be washed out quickly in a fragmentation process dominated by isoscalar gluons. We present two examples of how this can occur.
When a leading u-quark in the final stage of the fragmentation process picks up a strange antiquark, the spins of the quark and antiquark are expected to be uncorrelated. This gives a 3:1 ratio favoring $`K^{*+}`$ production over $`K^+`$. The final states $`K^+`$, $`K^+\pi ^o`$ and $`K^o\pi ^+`$ are then produced with the ratio 1:1:2, giving equal probabilities of producing a $`K^+`$ and a $`K^o`$ in the final state. The charge asymmetry is completely lost in this approximation where effects of the $`KK^{*}`$ mass difference are neglected.
If the leading u-quark in the final stage of the fragmentation process picks up a nonstrange antiquark from a $`(u\overline{u})`$ or $`(d\overline{d})`$ pair created with equal probability by gluons, the leading u-quark will combine with the $`\overline{u}`$ or $`\overline{d}`$ to make a final nonstrange meson, leaving the remaining $`u`$ or $`d`$ to continue the fragmentation process with equal probabilities.
$$u+(u\overline{u})\to M^o(u\overline{u})+u$$
$`(9a)`$
$$u+(d\overline{d})\to M^+(u\overline{d})+d$$
$`(9b)`$
where $`M^o`$ and $`M^+`$ denote neutral and positive final meson states. Thus the initial charge asymmetry leaves the process as a charge asymmetry between the $`M^o`$ and $`M^+`$ mesons and the remaining quark which continues the fragmentation process is charge symmetric.
These two mechanisms for washing out the charge asymmetry by gluons assume mass degeneracies whose breaking can introduce charge asymmetry. The $`KK^{*}`$ mass difference suppresses $`K^{*}`$ production and the $`d`$ quark production in the final state. But the $`\eta `$, $`\eta ^{\prime }`$, $`\pi `$ mass differences suppress the reaction (9a) while leaving (9b) unaffected and thus suppress $`u`$ quark production. Thus these two symmetry-breaking mechanisms work in opposite directions and the violations of charge symmetry observed in the total final state are expected to be small.
We also note that in $`Z^o`$ decay one can expect that the isoscalar final states arising from initially produced $`s\overline{s}`$, $`c\overline{c}`$ and $`b\overline{b}`$ pairs will occur at least as often as the production of $`u\overline{u}`$ and $`d\overline{d}`$ pairs. This gives an additional factor of two favoring isoscalar states over those discussed above.
## 6 Summary and conclusions
It follows from generalised Bose statistics and isospin invariance that boson pairs with different charges which belong to the same isospin multiplet may also show a low mass BEC enhancement. In the case that these pairs, together with their accompanying hadrons, are produced from a pure I=0 state, relations can be derived between their low energy production amplitudes and those of two identical bosons in the same isospin multiplet. In verifying this effect experimentally, e.g. in the hadronic $`Z^o`$ decays, one should however remember that the condition of a pure I=0 state is not 100$`\%`$ satisfied. Furthermore, pions originating from weak or electromagnetic decays, such as the $`\eta `$ decay or unresolved $`K_S^o`$ decays, may also slightly affect the relation given in Eq. (6).
Inasmuch as BEC effects of identical bosons are relevant for the description of the data in terms of model-dependent Monte Carlo programs, one should also consider these effects in the $`\pi ^\pm \pi ^o`$ and $`K^+K^o(K^{-}\overline{K}^o)`$ systems. In the experimental studies of the BEC the choice of the reference sample against which the effect is measured is crucial. Ideal reference samples are those which retain all the features and correlations of the data apart from the one due to the BEC. The relations given by generalised Bose statistics restrict the choice of these reference samples. For example, for the study of the BEC of $`\pi ^\pm \pi ^\pm `$ pairs a data reference sample constructed out of $`\pi ^\pm \pi ^o`$ pairs is forbidden. Due to the triangle inequality given by Eq. (8), it may even be wise to avoid the use of $`\pi ^+\pi ^{-}`$ data pairs as a reference sample and to use instead a Monte Carlo generated sample. Finally it is worthwhile to point out that in trying to associate the observed $`K_S^oK_S^o`$ low mass enhancement, seen in multi-hadron final states, with the $`f_o(980)\to K_S^oK_S^o`$ decay channel one should also examine the possible contribution of the BEC to this final state.
# Introduction to Public Key Cryptography and Modular Arithmetic.
## 1 Introduction.
These notes are an introduction to the RSA algorithm, and to the mathematics needed to understand it. The RSA algorithm — the name comes from the initials of its inventors, Rivest, Shamir, and Adleman — is the foundation of modern public key cryptography. It is used for electronic commerce and many other types of secure communication over the Internet.
The RSA algorithm is based on a type of mathematics known as modular arithmetic. Modular arithmetic is an interesting variation of ordinary arithmetic, but whereas everyday arithmetic is familiar to school children everywhere, modular arithmetic is a somewhat obscure subject. It’s not that modular arithmetic is particularly difficult, or confusing; one could teach it in high school, or even earlier. Conventional thinking, however, places a higher value on the ability to balance a checkbook than on the ability to communicate in code; this is probably the reason why most people have never heard of modular arithmetic. Today, the rapid proliferation of the Internet and the growing popularity of electronic financial transactions are causing a shift in attitudes. These days, even an introductory understanding of public key cryptography can be enormously useful. Consequently, today’s students deserve an opportunity to become acquainted with the methods and ideas of modular arithmetic.
First, a quick word about these notes. I have tried to make the material here as down to earth and accessible as possible. As such, the emphasis is on concrete calculations, rather than abstruse theory. My goal is to guide you through the concrete steps needed to implement and understand RSA encryption/decryption. Therefore in the present context, calculations are in and abstraction is out. A number of simple exercises are included; you should work them as you go along. The answers to the exercises are collected in a final appendix.
So set aside a few hours, go grab a pencil, and pour yourself a cup of coffee. It shouldn’t take very long to go through these notes; the material in here just isn’t all that hard.
## 2 Modular Arithmetic
I remember that as a child, after I learned the general system of counting, I was fascinated by and frequently thought about the fact that numbers never end. In principle, one can count as high as one wants — an activity I experimented with as a child, and one that serves me well in adulthood when insomnia comes calling. What, however, would be the consequences if numbers did end? What if numbers behaved like the numbers on a clock? What if, as one counted higher and higher, one would come full circle and begin counting again from the beginning, from zero?
To make this idea concrete suppose that there are only five different numbers: $`0,1,2,3,4`$; and that these numbers are arranged on a circular clock in the usual clock-wise direction, as in the figure below. It’s not hard to imagine how to do addition using this alternate system of counting.
|  | 0 |  |
| --- | --- | --- |
| 4 |  | 1 |
| 3 |  | 2 |
Doing $`2+2`$ one starts with $`2`$, increments twice, and obtains the answer $`4`$. Doing $`2+4`$, however, is a little unusual, because this time the answer is $`1`$. The reason is that after $`3`$ steps one comes full circle, i.e. to $`0`$; from there, one more increment brings the final total to $`1`$. Subtraction isn’t hard to understand either; one counts backwards, i.e. counter-clockwise, rather than forwards. So, in the clock-like way of counting, $`2-4=3`$; one starts with $`2`$ and turns the clock back $`4`$ times.
The usual name for this kind of counting is modular arithmetic (some people have also called it “clock arithmetic”). It is important to note that there are many different modular arithmetics; it all depends on how big the clock is. The size of the clock, i.e. the total of all possible numbers, is called the modulus. In the example of the preceding paragraph the modulus was $`5`$. One can just as happily work with other moduli: $`7`$, $`10`$, $`123`$; any integer greater than $`1`$ will do. For example, the binary arithmetic of ones and zeroes that underlies the workings of modern computers is nothing but arithmetic with a modulus of $`2`$.
In addition to the clock analogy there is another explanation of modular arithmetic that needs to be mentioned. This alternate system is made up of all the possible integers; however any two numbers that generate the same displacement on a clock are considered to be equivalent. Thus modulo $`5`$ the numbers $`-8,-3,2,7,12,17,22`$ are all considered to be equivalent, because they all correspond to a displacement of $`2`$ spaces on a $`5`$-space clock. To put it another way, two numbers are considered equivalent if and only if they differ by a multiple of $`5`$. The mathematical notation for this equivalence works like this. One writes:
$$6\equiv 16(\text{mod}5),$$
and says out loud: “six is equivalent to sixteen modulo five”. The meaning of this statement is that $`6`$ and $`16`$ differ by a multiple of $`5`$; or, equivalently that $`6`$ and $`16`$ generate an equivalent displacement on a clock with $`5`$ spaces. In summary, in the modulo $`5`$ system, there are exactly five classes of numbers: the numbers equivalent to $`0`$, the numbers equivalent to $`1`$, to $`2`$, to $`3`$, and to $`4`$.
###### Exercise 1
Which of the following statements are true, and which are false?
$$18\equiv 0\left(\text{mod}7\right),18\equiv 3\left(\text{mod}5\right),9+9+9\equiv 8\left(\text{mod}13\right),2+1133\equiv 0\left(\text{mod}10\right),$$
In a system of modular arithmetic with modulus $`n`$ it is possible to reduce every integer to an equivalent number between $`0`$ and $`n-1`$. To do the reduction one divides and calculates the remainder. For example, let’s say one wants to calculate the reduced form of $`2040`$ modulo $`209`$. One has
$$2040÷209=9\frac{159}{209},\text{i.e. }2040=9\times 209+159.$$
Therefore $`2040\equiv 159(\text{mod}209)`$. Again, the meaning of this statement is that a total displacement of $`2040`$ spaces on a clock with $`209`$ places yields a net displacement of $`159`$ spaces.
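In a programming language such as Python, this reduction is a one-liner (a small illustration added to these notes; `//` is integer division and `%` is the remainder operator):

```python
# Reducing 2040 modulo 209 with integer division and remainder.
print(2040 // 209)  # quotient: 9
print(2040 % 209)   # remainder: 159, so 2040 is equivalent to 159 (mod 209)
```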
By the way, save yourself some grief and check the documentation of that expensive calculator you bought when you came to university. Many modern calculators have a built-in remainder function, and possibly other functions for performing the operations of modular arithmetic.
### 2.1 Multiplication
As everyone knows, multiplication is nothing but repeated addition. This is the meaning of multiplication in everyday arithmetic; this is how multiplication works in modular arithmetic as well. Say one wants to calculate $`2\times 3`$, i.e. $`2+2+2`$, in the modulo $`5`$ system. The answer of course is $`6`$, which modulo $`5`$ is equivalent to $`1`$. Writing this more succinctly: $`2\times 3\equiv 1(\text{mod}5)`$.
How sensible is the above definition of multiplication? Does it obey the same algebraic laws as ordinary multiplication? The answer is, yes! For example, ordinary multiplication obeys a rule called the distributive law. As a particular instance of this rule, $`(3+4)\times 2=7\times 2`$ is equal to $`3\times 2+4\times 2=6+8`$.
What about modular arithmetic; does the distributive law continue to work? Consider the last calculations modulo $`5`$. For the first expression one gets
$$(3+4)\times 2=7\times 2\equiv 2\times 2\equiv 4(\text{mod}5).$$
For the second expression one gets
$$3\times 2+4\times 2=6+8\equiv 1+3\equiv 4(\text{mod}5);$$
the two answers agree.
What about something like $`8\times 2`$ versus $`3\times 2`$? Since $`3`$ and $`8`$ are equivalent modulo $`5`$, it stands to reason that the two answers should also be equivalent. This is indeed the case; one gets an answer of $`1(\text{mod}5)`$ for both calculations. The answers are the same because $`8`$ and $`3`$ differ by a multiple of $`5`$; indeed, $`8=3+5`$. Therefore
$$2\times 8=2\times (3+5)=2\times 3+2\times 5.$$
Now $`2\times 5`$ is a multiple of $`5`$ and is consequently equivalent to $`0(\text{mod}5)`$. Therefore
$$2\times 8=2\times 3+2\times 5\equiv 2\times 3+0(\text{mod}5).$$
In summary, multiplication is a perfectly valid, consistent operation in the world of modular arithmetic.
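The same checks can be run mechanically; here is a minimal Python sketch of the calculations above (added for illustration):

```python
# Verifying the distributive law and the equivalence of factors modulo 5.
n = 5
print(((3 + 4) * 2) % n)          # 4
print((3 * 2 + 4 * 2) % n)        # 4, the same answer
print((2 * 8) % n, (2 * 3) % n)   # 1 and 1: equivalent factors, equal products
```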
###### Exercise 2
Complete the following table of multiplication modulo $`5`$.
| $`\times `$ | $`1`$ | $`2`$ | $`3`$ | $`4`$ |
| --- | --- | --- | --- | --- |
| $`1`$ | $`1`$ | $`2`$ | $`3`$ | $`4`$ |
| $`2`$ | $`2`$ | $`4`$ | $`1`$ | $`3`$ |
| $`3`$ | . | . | . | . |
| $`4`$ | . | . | . | . |
A surprising aspect of modular arithmetic is the fact that one can also do division. Recall that every division operation in ordinary arithmetic can be recast as a corresponding multiplication by a reciprocal. Thus, $`a÷b=a\times b^{-1}`$, where $`b^{-1}`$ is the one particular number such that $`b\times b^{-1}=1`$. A moment’s worth of reflection will reveal that this definition makes perfect sense for modular arithmetic. Say one wanted to find the reciprocal of $`2`$ modulo $`5`$. A consultation of the multiplication table from the preceding exercise readily yields the answer: $`2\times 3\equiv 1(\text{mod}5)`$, and therefore one writes $`2^{-1}\equiv 3(\text{mod}5)`$. In a sense, division in modular arithmetic is easier than it is in ordinary arithmetic; one doesn’t have to worry about fractions. For example, division by $`2`$ is equivalent to a multiplication by $`3`$. Thus $`3÷2\equiv 3\times 3\equiv 4(\text{mod}5)`$. One can even check the answer: $`4\times 2\equiv 3(\text{mod}5)`$; everything is correct.
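The “consult the multiplication table” method for finding reciprocals translates directly into a brute-force search; here is a small Python sketch (the helper name `reciprocal` is mine):

```python
# Brute-force search for a reciprocal modulo n.
def reciprocal(b, n):
    for x in range(1, n):
        if (b * x) % n == 1:
            return x
    return None  # b has no reciprocal modulo n

print(reciprocal(2, 5))             # 3, since 2 * 3 = 6 is equivalent to 1 (mod 5)
print((3 * reciprocal(2, 5)) % 5)   # 3 divided by 2 gives 4 (mod 5)
```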
###### Exercise 3
Find the reciprocals of $`3`$ and $`4`$ modulo $`5`$. Perform the following divisions modulo $`5`$ and then check the answers by performing the necessary multiplications: $`4÷3`$, $`3÷4`$, $`3÷3`$, $`1÷3`$.
### 2.2 It’s a strange world, after all.
Who can forget that old bugaboo of elementary school arithmetic: the problem of division by zero? “Division by zero is …” There are a number of typical endings to this sentence, and all of them serve to imply that division by zero is somehow bad, if not outright impossible. My preference in the face of such a question is to be as undogmatic as possible, and to pass the ball back to the questioner: “I don’t know what one divided by zero is, my friend, but if you would like to hazard an answer, I will be happy to check it for you. What did you say? One divided by zero is three? I don’t think so. You see, zero times three is zero, not one; so I’m afraid you shall have to try again.” In this fashion the questioner should quickly become convinced that $`1÷0`$, whatever such an answer may be, cannot be an ordinary, everyday number. So perhaps what one should really be saying is that the equation $`\text{?}\times 0=1`$ has no solutions.
Division by zero is just as problematic in modular arithmetic as it is in ordinary arithmetic. Furthermore, in the world of modular arithmetic, there exist, in addition to zero, other “division unfriendly” numbers. In order to see this, consider the following operation in the modulo $`6`$ system: $`1÷4`$. In other words, try to solve the following equation: $`4\times \text{?}\equiv 1(\text{mod}6)`$. No such solution exists, of course. Think about the multiplication table modulo $`6`$. The row that begins with $`4`$ reads: $`4,2,0,4,2`$. In contrast to the way multiplication worked modulo $`5`$, certain rows of the multiplication table modulo $`6`$ have repeated entries, and consequently, in such rows, certain other numbers do not appear at all. So $`1÷4`$ cannot be given a value, because the number $`1`$ doesn’t occur in row number $`4`$. Notice that something like $`4÷4`$ doesn’t work either, but now ambiguity is to blame. There are two possible solutions to the equation $`4\times x\equiv 4(\text{mod}6)`$; both $`x=1`$ and $`x=4`$ will work.
###### Exercise 4
Write down the multiplication table modulo $`6`$.
These strange “division unfriendly” numbers possess another curious property. Notice that $`4\times 3\equiv 0(\text{mod}6)`$. One would never expect to see an equation like that in ordinary arithmetic; one simply doesn’t expect to multiply two non-zero numbers and have the answer turn out to be zero. As unusual as this may appear at first glance, such goings on are quite commonplace in the world of modular arithmetic. In fact, there is some standard terminology that serves to describe such situations. If the product of two given numbers is zero, one calls these numbers divisors of zero. It’s an apt title, because such numbers are literally able to divide zero and have a non-zero number be the answer. Modulo $`6`$ the divisors of zero are $`2`$, $`3`$, and $`4`$, because
$$2\times 3\equiv 4\times 3\equiv 0(\text{mod}6).$$
What about the remaining numbers in the mod $`6`$ system; what about $`1`$ and $`5`$? These are the “division friendly” numbers. Again there is some standard terminology that one should learn at this point. A number is called a unit if and only if it possesses a multiplicative reciprocal. Thus, both $`1`$ and $`5`$ have reciprocals and are therefore called units: $`1^{-1}\equiv 1(\text{mod}6)`$ and $`5^{-1}\equiv 5(\text{mod}6)`$.
###### Exercise 5
Explain why in modular arithmetic a unit can never be a divisor of zero, and why a divisor of zero can never be a unit.
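If you would like to check your explanation experimentally, the following Python sketch (again just an illustration of mine) scans the multiplication table modulo $`n`$ and sorts the non-zero numbers into units and divisors of zero.

```python
def classify(n):
    """Split 1..n-1 into units and divisors of zero, modulo n."""
    units, zero_divisors = [], []
    for x in range(1, n):
        if any((x * y) % n == 1 for y in range(1, n)):
            units.append(x)
        if any((x * y) % n == 0 for y in range(1, n)):
            zero_divisors.append(x)
    return units, zero_divisors

print(classify(6))  # ([1, 5], [2, 3, 4])
```

Notice that for every modulus you try, no number ever lands in both lists; that is exactly the content of the exercise.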
### 2.3 GCD: the Greatest Common Divisor
The acronym GCD stands for greatest common divisor. A common divisor of numbers $`a`$ and $`b`$ is a number that evenly divides both $`a`$ and $`b`$. The greatest common divisor of $`a`$ and $`b`$ is simply the largest such common divisor. Consider for example the numbers $`12`$ and $`16`$. Obviously $`2`$ is a common divisor of both. However, $`4`$ is also a common divisor, and in fact it is the largest common divisor. Therefore $`4`$ is the GCD of $`12`$ and $`16`$.
Note that $`1`$ is a common divisor of every possible pair of numbers. If $`1`$ is the only common divisor of $`a`$ and $`b`$, then one says that $`a`$ and $`b`$ are relatively prime. For example, $`9`$ and $`16`$ are relatively prime.
###### Exercise 6
Calculate the GCD of the following pairs of numbers: $`6`$ and $`8`$; $`6`$ and $`18`$; $`7`$ and $`15`$; $`30`$ and $`20`$.
Common divisors are a very important idea in modular arithmetic; they are the key concept for understanding divisors of zero.
> Fact. If numbers $`x`$ and $`n`$ have a common divisor other than $`1`$, then $`x`$ is a divisor of zero modulo $`n`$.
For example, $`2`$ is a common divisor of $`6`$ and $`8`$, and consequently $`6`$ is a divisor of zero, modulo $`8`$. Indeed, $`6\times 4=24\equiv 0(\text{mod}8)`$. It isn’t difficult to explain this fact. Let $`d>1`$ be a common divisor of $`x`$ and $`n`$. By definition, $`x÷d`$ and $`n÷d`$ are both whole numbers, and therefore
$$x\times (n÷d)=(x÷d)\times n\equiv 0(\text{mod}n).$$
###### Exercise 7
Write down the multiplication table modulo $`10`$. Identify the units and the divisors of zero. Notice that $`10=2\times 5`$. How is this fact relevant to your search for units and divisors of zero?
### 2.4 The Euclidean Algorithm
It is easy enough to guess a GCD of two small numbers, but larger numbers require a more systematic approach. Fortunately there is a straightforward method, called the Euclidean algorithm, for calculating the GCD of two whole numbers. The algorithm is based on the following idea. Suppose the goal is to find the GCD of a pair of whole numbers $`x`$ and $`y`$. It isn’t hard to see that if a number $`d`$ evenly divides both $`x`$ and $`y`$, then $`d`$ will also divide all of the following numbers: $`x-y`$, $`x-2y`$, $`x-3y`$, etc. Indeed $`d`$ will evenly divide any number of the form $`x-p\times y`$. Conversely if $`d`$ evenly divides $`y`$ and $`x-p\times y`$, then $`d`$ will also evenly divide $`x`$.
> Conclusion: for every whole number $`p`$, the GCD of $`x`$ and $`y`$ is equal to the GCD of $`y`$ and $`x-p\times y`$.
For example, say one wants to compute the GCD of $`168`$ and $`91`$. One may as well be looking for the GCD of $`91`$ and $`168-91=77`$. Repeating this reasoning, one should next look for the GCD of $`77`$ and $`91-77=14`$, and then for the GCD of $`14`$ and $`77-5\times 14=7`$. Now $`14=2\times 7`$, and therefore it is clear that $`7`$ is the desired GCD. Indeed: $`168÷7=24`$ and $`91÷7=13`$. The algorithm is summarized below.
> The Euclidean Algorithm. Goal: to find the GCD of whole numbers $`x>y`$. If $`y`$ divides $`x`$ evenly then $`y`$ is the GCD. Otherwise, calculate the remainder, $`r`$, from the division of $`x`$ by $`y`$. In other words, $`r=x-p\times y`$ where $`p`$ is the whole part of $`x÷y`$. Since GCD($`x,y`$)=GCD($`y,r`$), one goes back to the beginning of the algorithm and restarts the calculation with the new $`x`$ equal to the old $`y`$, and the new $`y`$ equal to the old $`r`$. Since $`y<x`$ and $`r<y`$, the inputs to the algorithm become smaller with each iteration. The algorithm is, therefore, guaranteed to terminate after a finite number of repetitions.
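The boxed procedure translates almost line for line into code. Here is one possible Python rendering (mine, for illustration), with the remainder computed by the `%` operator:

```python
def gcd(x, y):
    """GCD of whole numbers by the Euclidean algorithm."""
    while y != 0:        # when y divides x evenly, the next remainder is 0
        x, y = y, x % y  # new x = old y, new y = remainder r
    return x

print(gcd(168, 91))  # 7, as in the worked example above
```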
###### Exercise 8
Use the Euclidean algorithm to find the GCD of $`1113`$ and $`504`$.
It is easy enough to see that if $`d`$ evenly divides a pair of numbers $`x`$ and $`y`$, then $`d`$ will also evenly divide $`a\times x+b\times y`$ for all integers $`a`$ and $`b`$. The following fact is a little more surprising.
> Fact. Given whole numbers $`x`$ and $`y`$, one can find integers $`a`$ and $`b`$ so that $`a\times x+b\times y`$ is exactly equal to the GCD of $`x`$ and of $`y`$. In particular, if $`x`$ and $`y`$ are relatively prime, then one can find $`a`$ and $`b`$ so that $`a\times x+b\times y=1`$.
This fact is enormously useful in connection with the calculation of reciprocals in modular arithmetic; this point will be explained shortly. First, however, one needs to master a modified version of the Euclidean algorithm, a version tailored to the calculation of the critical $`a`$ and $`b`$. Proceeding by way of example, suppose one is interested in the GCD of $`1113`$ and $`504`$. Consider the following table.
| $`n`$ | $`p`$ | $`a`$ | $`b`$ |
| --- | --- | --- | --- |
| $`1113`$ | | $`1`$ | $`0`$ |
| $`504`$ | $`2`$ | $`0`$ | $`1`$ |
| $`105`$ | $`4`$ | $`1`$ | $`-2`$ |
| $`84`$ | $`1`$ | $`-4`$ | $`9`$ |
| $`21`$ | $`4`$ | $`5`$ | $`-11`$ |
| $`0`$ | |
In what follows, subscripts are used to specify the number of the row. Thus $`n_3`$ refers to $`105`$, the $`n`$ entry in the third row. The first column contains the sequence of numbers obtained in the course of applying the Euclidean algorithm (cf. Exercise 8). In other words, $`n_3`$ is the remainder of $`n_1÷n_2`$, $`n_4`$ is the remainder of $`n_2÷n_3`$, etc. The $`p`$ column contains the whole part of each of these divisions: $`p_2`$ is the whole part of $`n_1÷n_2`$, $`p_3`$ is the whole part of $`n_2÷n_3`$, etc. In other words,
$`n_3`$ $`=`$ $`n_1-p_2\times n_2,`$
$`n_4`$ $`=`$ $`n_2-p_3\times n_3,`$
$`n_5`$ $`=`$ $`n_3-p_4\times n_4,`$
The entries in the $`a`$ and $`b`$ columns are chosen so that $`n=a\times 1113+b\times 504`$ in each row of the table. For example, row four contains $`84=-4\times 1113+9\times 504`$. The final row (not counting the last zero) contains the final, desired $`a`$ and $`b`$. Indeed, $`21=5\times 1113-11\times 504`$. For obvious reasons, $`a=1`$, $`b=0`$ in the first row, and $`a=0`$, $`b=1`$ in the second row. For the subsequent rows the entries are calculated in the same fashion as the entries in the $`n`$ column. To be more specific:
$`a_3`$ $`=`$ $`a_1-p_2\times a_2,`$
$`a_4`$ $`=`$ $`a_2-p_3\times a_3,`$
$`a_5`$ $`=`$ $`a_3-p_4\times a_4,`$
and so on. The $`b`$ column is computed in the same way. I will leave it to you to figure out why this method ensures that $`n=a\times 1113+b\times 504`$ in every row.
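The table method is also easy to mechanize. The Python sketch below (one way of coding it, not the only one) carries the triple $`(n,a,b)`$ through exactly the recurrences displayed above:

```python
def extended_gcd(x, y):
    """Return (g, a, b) with a*x + b*y == g, the GCD of x and y,
    following the (n, a, b) recurrences of the table method."""
    n1, a1, b1 = x, 1, 0   # first row of the table
    n2, a2, b2 = y, 0, 1   # second row
    while n2 != 0:
        p = n1 // n2       # whole part of n1 / n2
        n1, a1, b1, n2, a2, b2 = (
            n2, a2, b2, n1 - p * n2, a1 - p * a2, b1 - p * b2)
    return n1, a1, b1

print(extended_gcd(1113, 504))  # (21, 5, -11): 21 = 5*1113 - 11*504
```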
###### Exercise 9
Calculate the GCD of $`1466`$ and $`237`$, and find the integers $`a`$, $`b`$ such that $`a\times 1466+b\times 237`$ is equal to the GCD.
You are now in possession of a method that makes it easy to calculate reciprocals in modular arithmetic. Remember that $`x`$ is a unit modulo $`n`$ if and only if $`x`$ and $`n`$ are relatively prime. If this is the case, then one simply finds $`a`$ and $`b`$ so that $`a\times x+b\times n=1`$, and voila: $`a\times x\equiv 1(\text{mod}n)`$, i.e. $`a\equiv x^{-1}(\text{mod}n)`$.
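In code, the recipe is a short wrapper around the `extended_gcd` sketch of the previous section. (Recent versions of Python, 3.8 and later, can also produce the same answer directly via the built-in `pow(x, -1, n)`.)

```python
def mod_inverse(x, n):
    """Reciprocal of x modulo n, or None when x is not a unit mod n."""
    g, a, _ = extended_gcd(x, n)   # a*x + b*n == g
    if g != 1:
        return None                # x and n share a divisor: no reciprocal
    return a % n                   # a*x is 1 (mod n); reduce a into 0..n-1

print(mod_inverse(237, 1466))  # 433, which is just what Exercise 10 needs
```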
###### Exercise 10
Calculate $`59÷237`$ modulo $`1466`$. Check your answer.
### 2.5 Curiouser and curiouser ….
Presented for your reading pleasure, the following curious tidbit from the world of recreational mathematics. Take out a calculator and randomly punch in a whole number. Note the last digit of your chosen number, and then raise your number to the fifth power. If your starting number wasn’t too large, and if your calculator has sufficiently many digits on its display, then you will notice that the last digit of the result is the same as the last digit of the starting number. For example: $`17^5=1419857`$ and $`22^5=5153632`$. This interesting phenomenon occurs for other powers as well: the powers $`1,5,9,13,17,21,`$ etc, will all work. For example, $`7^9=40353607`$ and $`2^{13}=8192`$.
You may have realized that the business of looking at the last digit can be handled most conveniently in terms of modular arithmetic. The fact is that modulo 10, a number is equivalent to its last digit, and therefore this curious fact can be written down as the following identity:
$$x^p\equiv x(\text{mod}10),\text{as long as }p=1,5,9,13,17,21,25,\dots $$
The above identity is far from being a mere curiosity. It is, in fact, the essential component of the RSA encryption/decryption process. The application to cryptography will be explained in the next section. First, it will be necessary to take a closer look at the above “curious fact”, and try to find the analogous trick for moduli other than $`10`$.
As you may have noticed, the critical exponents, i.e. the values of $`p`$ for which $`x^p\equiv x(\text{mod}10)`$, are precisely the numbers $`p`$ such that $`p-1`$ is a multiple of $`4`$. To put it differently, the list of critical exponents is obtained by starting with $`1`$ and then repeatedly adding $`4`$. Why the number $`4`$ though? Look back at Exercise 7, and count the number of units in that particular system of modular arithmetic. That’s right; there are $`4`$ units, and for reasons that we won’t get into here, the spacing in the list of critical exponents is always equal to the total number of units in one’s chosen system of modular arithmetic. This all-important principle is summarized below.
> A Curious Fact. Suppose that $`n`$ is a square-free whole number, and let $`\varphi `$ be the total number of units in the system of arithmetic modulo $`n`$. In such circumstances the following identity holds: $`x^p\equiv x(\text{mod}n)`$ where the exponent can be any of the critical values $`p=1,1+\varphi ,1+2\varphi ,1+3\varphi ,\dots `$
The business about square-free numbers is an important, but technical detail. The problem is that the above “curious fact” does not work for certain values of the modulus $`n`$. The values of $`n`$ for which the trick fails to work are precisely those whole numbers that can be divided evenly by a square. For example, working modulo $`8`$ and starting with $`2`$ one has
$$2^2\equiv 4,2^3\equiv 0,2^4\equiv 0,2^5\equiv 0,\text{etc}$$
and so one never gets back to $`2`$, no matter which power is used. In other words, the trick doesn’t work when $`8`$ is the modulus. Now $`8`$ can be divided by $`4`$, and the latter is a square, and that is why the trick fails to work modulo $`8`$. The bottom line is that the “curious fact” will work if and only if one chooses a clock size (i.e. a modulus $`n`$) that does not have any squares as a factor.
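The whole discussion can be probed numerically. The sketch below (illustrative only) counts the units modulo $`n`$ by brute force and then tests whether raising to the power $`1+\varphi `$ really gives back every starting number; it reports failure for $`8`$ and success for square-free moduli such as $`10`$ and $`22`$.

```python
def count_units(n):
    """phi: how many of 1..n-1 possess a reciprocal modulo n."""
    return sum(1 for x in range(1, n)
               if any((x * y) % n == 1 for y in range(1, n)))

def curious_fact_holds(n):
    """Does x**(1 + phi) == x (mod n) for every x in the system?"""
    phi = count_units(n)
    return all(pow(x, 1 + phi, n) == x for x in range(n))

for n in (8, 10, 22):
    print(n, curious_fact_holds(n))  # 8 False, 10 True, 22 True
```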
###### Exercise 11
What are the critical exponents $`p`$ in the system of arithmetic modulo $`22`$? Do some calculations and verify that $`x^p\equiv x(\text{mod}22)`$ for these critical values of $`p`$.
###### Exercise 12
Explain why, in the modulo $`22`$ system, it suffices to consider only the exponents from $`1`$ to $`10`$. Calculate $`3^{32}(\text{mod}22)`$ just “by looking at it”.
## 3 RSA
Those of you who navigate the World Wide Web using Netscape browsers may have noticed that the very first screen shown by Navigator and Communicator has a bunch of corporate looking logos, including one that has a pair of keys alongside the letters RSA. The icon in question is the logo of RSA Data Security, the owners of the patented RSA public key encryption algorithm. The letters RSA stand for Rivest, Shamir and Adleman; these are the names of the M.I.T. (Massachusetts Institute of Technology) academics who invented the algorithm in 1977. In 1982 these three individuals went on to found RSA Data Security, a company that specializes in secure digital communication. Another significant acronym in the world of computer cryptography is PGP, which stands for pretty good privacy. This is a loosely organized collection of (mostly) free software that puts the power of private and secure digital communication in the hands of ordinary citizens. PGP relies on the RSA algorithm for its core functionality.
### 3.1 Public key cryptography.
What is the basic goal of cryptography? Speaking generally, one wants a method for encoding human-readable messages into an unreadable cipher, as well as a corresponding method for translating the coded information back into everyday language. Typically, a cryptographic method involves some sort of a secret key. One simple example is the so-called substitution cipher. One assigns a number to each letter of the alphabet, and then encodes and decodes messages using this code. In theory, people who are not privy to the translation table of letter-number pairs should not be able to decode and understand the message. (A skilled codebreaker, however, can break a substitution cipher without much trouble.)
The substitution cipher is a basic example of traditional, single key cryptography. The reason for the name “single key” (“secret key” is also used) is that both the sender and receiver of an encoded message make use of the same, secret key — in this case the table of letter-number pairs — to generate and decipher the message. All single key encryption schemes share a fundamental difficulty: the encryption key must be agreed on beforehand and then kept absolutely secret. Thus, single key encryption is both inconvenient (one has to find a secure means of sharing the encryption key between the communicating parties) and fundamentally insecure in a group setting (it takes just one set of “loose lips” to blow the security of a single-key code that is shared by a group of individuals).
Public key cryptography is based on a very simple, very beautiful idea that effectively deals with both of the above issues. The idea is that everybody should possess two encryption keys: a public key, and a private key. Furthermore, the encryption method should work so that messages encoded using person X’s public key can only be decoded using person X’s private key. Likewise a message encoded with a private key should only be decodable using the corresponding public key. Next, everyone who wants to communicate shares their public keys, but keeps their private key strictly hidden. With such an arrangement, in order to communicate with person X, the sender will encode a message using person X’s public key. Now everyone in the world knows person X’s public key, but it doesn’t do them any good, because it will take person X’s private key to actually decode the message.
A system of public key cryptography can also be used to establish identity, and to enable secure financial transactions. Let’s say that I show up at a bank and claim that I am person X. If I am indeed person X, then I should be able to encode a simple message, something to the effect of “Hello, my name is X.”, using my secret, private key. Now everyone, including the bank, has access to person X’s public key, and so should be able to decipher the simple message, and verify that it really was encrypted using person X’s private key. The point is that person X, and only person X, could have created a message that is decodable using person X’s public key. In this way an arrangement of public and private keys can be made to serve as a secure authentication system (very useful for bank and credit card transactions, as you may well imagine).
Credit for the invention of public key cryptography is typically given to Diffie and Hellman: two computer scientists who publicized the first public key method back in 1975.
### 3.2 Modular arithmetic to the rescue
The next issue to consider is the means by which one can implement such a scheme of public and private key encryption. If you’ve read the preceding section on modular arithmetic, then you already have in your possession all of the mental equipment required for such a task.
First, recall the following rule of high school algebra:
$$(a^b)^c=a^{bc}.$$
There is nothing mysterious about this rule. As a particular example think of the rule with $`b=2`$ and $`c=3`$. In this instance the rule is saying that if one squares a number $`a`$ and then cubes the result, the final answer will be $`a`$ raised to the sixth power. Or to put it another way:
$$(a^2)^3=(a^2)\times (a^2)\times (a^2)=a\times a\times a\times a\times a\times a=a^6.$$
Now in modular arithmetic, there are certain powers that do absolutely nothing; this is the “curious fact” discussed in Section 2.5. For example, in Exercise 11 we saw that $`x^{21}\equiv x(\text{mod}22)`$. Note that $`21=3\times 7`$. It therefore stands to reason that working modulo $`22`$, if I first raise a number to the power $`3`$, and then raise the result to the power $`7`$, the final result will be the number that I started with. Eureka! Why don’t I then use the number $`3`$ as a private key, the number $`7`$ as the public key, and encrypt messages by raising numbers to these powers in modulo $`22`$ arithmetic? The following calculation illustrates this. Say you want to send me a message that consists of the numbers: $`2,3,8`$. You raise each of these numbers to the power $`7`$ — that being my public key — and so the message I receive is $`18,9,2`$, because
$$2^7=128\equiv 18,3^7=2187\equiv 9,8^7=2,097,152\equiv 2,$$
where all equivalences are $`(\text{mod}22)`$. I then take the message, raise each number to the power $`3`$, and obtain
$$18^3=5832\equiv 2,9^3=729\equiv 3,2^3\equiv 8;$$
I get back the original message! That’s all there is to it; the above example illustrates the essentials of the RSA public key encryption system.
###### Exercise 13
Construct a table that shows the effect of raising numbers in the modulo $`22`$ system to the powers $`3`$ and $`7`$. Inspect this table and confirm that, indeed, the two operations undo one another.
### 3.3 Some final details.
The above discussion of the RSA algorithm has not, as yet, addressed the following simple, but crucial question: how does one know that the RSA public key encryption method is secure? The fact of the matter is that the method is not secure unless one is careful about the way one chooses the modulus, $`n`$. Consider again the example of the preceding section: it is public knowledge that $`n=22`$ and that $`e=7`$, but the other exponent, $`f`$, is not publicized. A moment of reflection suffices to show that knowledge of $`n`$ and $`e`$ allows one to guess the supposedly secret value of $`f`$. Indeed, knowing that $`n=22`$, one also knows that $`\varphi =10`$ (see Exercise 11), and hence that whatever $`f`$ is, $`e\times f`$ must be one of the following numbers: $`1,11,21,31,41,\dots `$ In other words, $`f`$, whatever it is, must satisfy the equation $`7\times f\equiv 1(\text{mod}10)`$. A quick check of the numbers from $`1`$ to $`9`$ will reveal that $`f=3`$ is a solution to the above equation. The supposedly secret, private key is thereby revealed, and the entire method of encryption rendered useless.
The point of the above example is that one has to be careful about choosing the modulus $`n`$. Indeed, part of the RSA methodology, something that has not been mentioned so far, is a certain procedure for choosing $`n`$ that makes the subsequent process of encryption/decryption secure. According to RSA, one begins by choosing two prime numbers, call them $`p`$ and $`q`$, and sets $`n=p\times q`$. (Recall that a number is called prime if it has no divisors except for $`1`$ and itself.) An added advantage of choosing $`n`$ in this manner is the ease with which one can then calculate $`\varphi `$, the number of units modulo $`n`$.
> Fact. If $`n`$ is the product of distinct primes $`p`$ and $`q`$, then $`\varphi =(p-1)(q-1)`$.
This formula is confirmed by the examples considered so far. Look back at Exercise 7. The modulus was $`10=2\times 5`$, and the number of units was $`4=(2-1)\times (5-1)`$; just as predicted by the formula. In Exercise 11 $`n`$ was $`22=2\times 11`$, and the number of units was $`10=(2-1)\times (11-1)`$, also in agreement with the formula.
I won’t prove the above formula for $`\varphi `$ in these notes, but rather indicate why the formula is valid by considering a particular example. Let us consider $`n=15=3\times 5`$. Arrange the numbers in the modulo $`15`$ system in a three by five rectangular array like so:
| | $`0`$ | $`1`$ | $`2`$ | $`3`$ | $`4`$ |
| --- | --- | --- | --- | --- | --- |
| $`0`$ | $`0`$ | $`6`$ | $`12`$ | $`3`$ | $`9`$ |
| $`1`$ | $`10`$ | $`1`$ | $`7`$ | $`13`$ | $`4`$ |
| $`2`$ | $`5`$ | $`11`$ | $`2`$ | $`8`$ | $`14`$ |
What is the meaning of the above arrangement? Every number $`x`$ in the modulo $`15`$ system can be described equally well by the following two items of information: the value of $`x`$ modulo $`3`$, and the value of $`x`$ modulo $`5`$. Look, for example, in row $`1`$, column $`3`$ of the above table, and you will find the number $`13`$. The reason for this placement is that $`13\equiv 1(\text{mod}3)`$ and that $`13\equiv 3(\text{mod}5)`$. Similarly $`12\equiv 0(\text{mod}3)`$ and $`12\equiv 2(\text{mod}5)`$, and therefore $`12`$ is located in row $`0`$, column $`2`$.
Having understood the arrangement you will see that the numbers that are divisible by $`3`$ are located in row $`0`$, while the numbers that are divisible by $`5`$ are located in column $`0`$. Furthermore, the divisors of zero in the system modulo $`15`$ are the numbers that are divisible either by $`3`$ or by $`5`$. (I leave it to you to figure out the reason for this.) Therefore the units of the system are going to be all the numbers outside row $`0`$ and column $`0`$ of the above table. In other words, the units are the numbers found in the two by four rectangle formed by rows $`1`$ and $`2`$, and columns $`1`$ through $`4`$. No wonder then that there is a total of $`2\times 4=8`$ units.
Returning to considerations of security, one needs to choose the $`n`$ so that the corresponding $`\varphi `$ is enormously difficult to discover. This is accomplished by using very large primes $`p`$ and $`q`$; it is not unusual to take $`p`$ and $`q`$ to be a hundred digits long! Now multiplying two numbers a hundred digits long will produce a number two hundred digits long — a tedious process but one that a computer can handle easily. However, factoring such a large number into its prime components is a task that cannot be accomplished even by today’s fastest computers. Factorization is fundamentally a process of trial and error, and with numbers that large, there are just too many possibilities to check.
The upshot is that with very large $`p`$ and $`q`$, one can publicize $`n`$, and the values of $`p`$ and $`q`$ will remain safely uncomputable. Since the knowledge of $`p`$ and $`q`$ remains secret, there is no way for an outside agency to calculate $`\varphi `$, and therefore no way to guess the private key $`f`$, either. The cryptosystem can then be considered to be secure.
It is important to make one final remark regarding computations in modular arithmetic. At first glance, the calculation of powers when the modulus is large poses a considerable computational challenge. Say that one wanted to calculate $`48^{29}(\text{mod}221)`$. Ostensibly,
$`48^{29}=5,701,588,684,667,867,878,541,238,858,441,350,344,816,132,620,288`$
$`=2,579,904,382,202,655,148,661,194,053,593,371,196,749,381,276\times 221+107.`$
In other words, this gargantuan power is equivalent to $`107(\text{mod}221)`$. This sort of brute-force calculation is beyond the capability of most hand-held calculators. Fortunately there is a more elegant way to do the calculation. Remember that $`48^{29}`$ is obtained by multiplying $`48`$ together $`29`$ times. The trick is to reduce modulo $`221`$ at the intermediate stages of the calculation, rather than waiting to reduce at the very end. Note that $`29=6\times 4+5`$, and hence that
$$48^{29}=(48^6)^4\times 48^5.$$
The overall calculation can therefore be done like this (all equivalences are modulo $`221`$):
$$48^6=12,230,590,464\equiv 66,(48^6)^4\equiv 66^4=18,974,736\equiv 118,$$
$$48^5=254,803,968\equiv 29,(48^6)^4\times 48^5\equiv 118\times 29=3,422\equiv 107.$$
There are many ways to break up this calculation. It all depends on how one chooses to break up the exponent $`29`$.
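Packaged as a routine, the reduce-as-you-go idea keeps every intermediate product smaller than the square of the modulus. Here is a sketch using the standard repeated-squaring scheme; Python’s built-in `pow(base, exp, mod)` performs the same computation.

```python
def power_mod(base, exp, mod):
    """Compute base**exp modulo mod, reducing at every intermediate step."""
    result = 1
    base %= mod
    while exp > 0:
        if exp % 2 == 1:             # current binary digit of exp is 1
            result = (result * base) % mod
        base = (base * base) % mod   # square, then reduce straight away
        exp //= 2
    return result

print(power_mod(48, 29, 221))  # 107, matching the calculation above
```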
###### Exercise 14
Calculate $`29^{48}`$ modulo $`221`$.
### 3.4 A step by step description of the RSA algorithm.
* Choose prime numbers $`p`$ and $`q`$, and set $`n=p\times q`$. The larger the values of $`p`$ and $`q`$, the more secure the resulting encryption system.
* Let $`\varphi `$ denote the total number of units in the corresponding system of modular arithmetic. Having chosen $`n`$ to be the product of primes $`p`$ and $`q`$, one has $`\varphi =(p1)(q1)`$.
* Next, one chooses two whole numbers $`e`$ and $`f`$ in such a way that $`e\times f`$ is equal to one of the critical exponents, i.e. to one of the following numbers: $`1,1+\varphi ,1+2\varphi ,1+3\varphi ,\dots `$. This is done by first, choosing $`e<\varphi `$ so that $`e`$ and $`\varphi `$ are relatively prime; second, calculating the reciprocal of $`e`$ modulo $`\varphi `$ using the modified Euclidean algorithm; and finally setting $`f`$ equal to that reciprocal. A useful trick is to use as $`e`$ a prime number that does not evenly divide $`\varphi `$. This way one is certain that $`e`$ and $`\varphi `$ are relatively prime, and hence that $`e`$ is a unit modulo $`\varphi `$.
* One tells the whole world that one’s public key consists of the modulus $`n`$ and the exponent $`e`$. The exponent $`f`$ is kept a secret.
* Whenever someone wants to send a message — and let us suppose that the message consists of a string of numbers modulo $`n`$ — the sender will raise each number in the message to the power $`e`$. To decode the original message, one raises each of the encoded numbers to the power $`f`$.
* Conversely, to establish one’s identity, one would compose a signature message, and raise the numbers in the message to the power $`f`$. The receiver would then perform the authentication by raising the numbers in the encoded message to the power $`e`$, and thereby decode the original signature message.
### 3.5 A final example.
The present section illustrates the RSA algorithm with a simple example. First, one needs to choose a pair of primes — for instance, $`p=13`$ and $`q=17`$. From there, $`n=13\times 17=221`$, and $`\varphi =12\times 16=192.`$
Next, one needs to choose a public key $`e`$ and a private key $`f`$ in such a way that $`e\times f\equiv 1(\text{mod}192)`$. For this example let $`e=29`$; this is a prime number that does not divide $`\varphi =192`$, and is therefore guaranteed to be a unit modulo $`192`$. Using the modified Euclidean algorithm one calculates that $`-8\times 192+53\times 29=1`$. Hence $`f=53`$, because $`29\times 53\equiv 1(\text{mod}192)`$.
In order to send messages there must be a standard way to encode letters in terms of numbers. For the purposes of this exercise, the following encoding will be used:
| A | B | C | D | E | F | G | H | I | J | K | L | M |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| $`1`$ | $`2`$ | $`3`$ | $`4`$ | $`5`$ | $`6`$ | $`7`$ | $`8`$ | $`9`$ | $`10`$ | $`11`$ | $`12`$ | $`13`$ |
| N | O | P | Q | R | S | T | U | V | W | X | Y | Z | space |
| $`14`$ | $`15`$ | $`16`$ | $`17`$ | $`18`$ | $`19`$ | $`20`$ | $`21`$ | $`22`$ | $`23`$ | $`24`$ | $`25`$ | $`26`$ | $`27`$ |
Thus, the message “HELLO” corresponds to the following string of numbers: $`8`$, $`5`$, $`12`$, $`12`$, $`15`$. Encoding the message using the public key means raising each of these numbers to the power $`29`$ modulo $`221`$. The result is the following encoded message: $`60,122,116,116,19`$. To decode the message using the private key, one raises each of these numbers to the $`53^{\text{rd}}`$ power modulo 221, and recovers $`8,5,12,12,15`$ — the original message!
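Every number in this example can be checked with a few lines of code. The sketch below (illustrative only; real RSA uses far larger primes) encodes and decodes the message with the keys chosen above.

```python
n, e, f = 221, 29, 53          # modulus, public exponent, private exponent

message = [8, 5, 12, 12, 15]   # "HELLO" in the letter code of the table
encoded = [pow(m, e, n) for m in message]
decoded = [pow(c, f, n) for c in encoded]

print(encoded)  # [60, 122, 116, 116, 19]
print(decoded)  # [8, 5, 12, 12, 15], the original message
```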
## 4 Appendix. Answers to exercises.
Exercise 1. T, T, F, T. Note that $`9+9+9=27=1+2\times 13\equiv 1(\text{mod}13)`$.
Exercise 2.
| $`\times `$ | $`1`$ | $`2`$ | $`3`$ | $`4`$ |
| --- | --- | --- | --- | --- |
| $`1`$ | $`1`$ | $`2`$ | $`3`$ | $`4`$ |
| $`2`$ | $`2`$ | $`4`$ | $`1`$ | $`3`$ |
| $`3`$ | $`3`$ | $`1`$ | $`4`$ | $`2`$ |
| $`4`$ | $`4`$ | $`3`$ | $`2`$ | $`1`$ |
Exercise 3. $`3\times 2\equiv 1(\text{mod}5)`$, and hence $`3^{-1}\equiv 2(\text{mod}5)`$. Similarly, $`4\times 4\equiv 1(\text{mod}5)`$, and therefore $`4^{-1}\equiv 4(\text{mod}5)`$. Regarding the division problems one has: $`4÷3\equiv 3,3÷4\equiv 2,3÷3\equiv 1,1÷3\equiv 2,`$ where all equivalences are $`(\text{mod}5)`$.
Exercise 4.
| $`\times `$ | $`1`$ | $`2`$ | $`3`$ | $`4`$ | $`5`$ |
| --- | --- | --- | --- | --- | --- |
| $`1`$ | $`1`$ | $`2`$ | $`3`$ | $`4`$ | $`5`$ |
| $`2`$ | $`2`$ | $`4`$ | $`0`$ | $`2`$ | $`4`$ |
| $`3`$ | $`3`$ | $`0`$ | $`3`$ | $`0`$ | $`3`$ |
| $`4`$ | $`4`$ | $`2`$ | $`0`$ | $`4`$ | $`2`$ |
| $`5`$ | $`5`$ | $`4`$ | $`3`$ | $`2`$ | $`1`$ |
Exercise 5. If a number $`a`$ is a unit then, by definition, there is a number $`b`$ such that $`b\times a\equiv 1`$. Therefore if $`c\not\equiv 0`$, then $`a\times c`$ cannot possibly be zero either, because $`b\times a\times c\equiv 1\times c\equiv c`$. On the other hand, if $`a`$ is a divisor of zero, then $`a\times b\equiv 0`$ for some non-zero $`b`$. Therefore it is useless to look for a number $`c`$ such that $`c\times a\equiv 1`$. The reason is that if such a $`c`$ were to exist then $`c\times a\times b`$ would be equal to $`1\times b\equiv b`$. However we know that $`c\times a\times b\equiv 0`$.
Exercise 6 Answers: $`2`$, $`6`$, $`1`$, $`10`$.

Exercise 7
| $`\times `$ | $`1`$ | $`2`$ | $`3`$ | $`4`$ | $`5`$ | $`6`$ | $`7`$ | $`8`$ | $`9`$ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| $`1`$ | $`1`$ | $`2`$ | $`3`$ | $`4`$ | $`5`$ | $`6`$ | $`7`$ | $`8`$ | $`9`$ |
| $`2`$ | $`2`$ | $`4`$ | $`6`$ | $`8`$ | $`0`$ | $`2`$ | $`4`$ | $`6`$ | $`8`$ |
| $`3`$ | $`3`$ | $`6`$ | $`9`$ | $`2`$ | $`5`$ | $`8`$ | $`1`$ | $`4`$ | $`7`$ |
| $`4`$ | $`4`$ | $`8`$ | $`2`$ | $`6`$ | $`0`$ | $`4`$ | $`8`$ | $`2`$ | $`6`$ |
| $`5`$ | $`5`$ | $`0`$ | $`5`$ | $`0`$ | $`5`$ | $`0`$ | $`5`$ | $`0`$ | $`5`$ |
| $`6`$ | $`6`$ | $`2`$ | $`8`$ | $`4`$ | $`0`$ | $`6`$ | $`2`$ | $`8`$ | $`4`$ |
| $`7`$ | $`7`$ | $`4`$ | $`1`$ | $`8`$ | $`5`$ | $`2`$ | $`9`$ | $`6`$ | $`3`$ |
| $`8`$ | $`8`$ | $`6`$ | $`4`$ | $`2`$ | $`0`$ | $`8`$ | $`6`$ | $`4`$ | $`2`$ |
| $`9`$ | $`9`$ | $`8`$ | $`7`$ | $`6`$ | $`5`$ | $`4`$ | $`3`$ | $`2`$ | $`1`$ |
The divisors of zero are $`2`$, $`4`$, $`6`$, $`8`$, and $`5`$. These are the numbers that are divisible either by $`2`$ or by $`5`$. All the other numbers are units: $`1`$, $`3`$, $`7`$, $`9`$.
Exercise 8
$$1113=2\times 504+105,504=4\times 105+84,105=1\times 84+21,84=4\times 21+0.$$
Therefore $`21`$ is the GCD of $`1113`$ and $`504`$.
Exercise 9
| $`n`$ | $`p`$ | $`a`$ | $`b`$ |
| --- | --- | --- | --- |
| $`1466`$ | | $`1`$ | $`0`$ |
| $`237`$ | $`6`$ | $`0`$ | $`1`$ |
| $`44`$ | $`5`$ | $`1`$ | $`-6`$ |
| $`17`$ | $`2`$ | $`-5`$ | $`31`$ |
| $`10`$ | $`1`$ | $`11`$ | $`-68`$ |
| $`7`$ | $`1`$ | $`-16`$ | $`99`$ |
| $`3`$ | $`2`$ | $`27`$ | $`-167`$ |
| $`1`$ | $`3`$ | $`-70`$ | $`433`$ |
Answer: the GCD is $`1`$. It is given by $`1=-70\times 1466+433\times 237`$.
Exercise 10 From the preceding exercise we know that $`433\times 237\equiv 1(\text{mod}1466)`$, and hence that $`237^{-1}\equiv 433(\text{mod}1466)`$. Therefore
$$59÷237\equiv 59\times 433=25,547\equiv 625(\text{mod}1466).$$
Checking this answer:
$$625\times 237=148,125=101\times 1466+59\equiv 59(\text{mod}1466).$$
Exercise 11 Note that $`22=2\times 11`$. Therefore, the divisors of zero in the mod $`22`$ system are going to be the numbers that are divisible either by $`2`$ or by $`11`$; i.e. $`2,4,6,8,10,12,14,16,18,20`$, and $`11`$ — a total of eleven divisors of zero, twelve if one includes $`0`$ itself. That leaves a total of $`22-12=10`$ units, i.e. $`\varphi =10`$, and therefore the critical exponents are going to be $`p=1,11,21,31,41,51,\dots `$.
Consider a computation with $`p=11`$ and $`x=7`$. Now $`7^{11}=1,977,326,743`$. What is this number modulo $`22`$? To get the answer one must divide by $`22`$ and calculate the remainder. The division gives $`89,878,488`$. As for the remainder, it is $`7^{11}-89,878,488\times 22=7`$, the starting number. Try it again with $`p=21`$ and $`x=3`$. One gets $`3^{21}=10,460,353,203`$. Dividing the latter by $`22`$ one gets $`475,470,600`$ plus a remainder. To calculate the remainder one does $`3^{21}-475,470,600\times 22`$; the answer is $`3`$, just as expected.
Exercise 12 It is evident that in the modulo $`22`$ system, exponents that differ by a multiple of $`10`$ end up giving the same result. Here is why. We already noted that
$$\mathrm{\cdots }\equiv x^{31}\equiv x^{21}\equiv x^{11}\equiv x^1(\text{mod}22).$$
Multiplying through by $`x`$ one observes that
$$\mathrm{\cdots }\equiv x^{32}\equiv x^{22}\equiv x^{12}\equiv x^2(\text{mod}22),$$
and of course one can keep going to conclude that
$$\mathrm{\cdots }\equiv x^{37}\equiv x^{27}\equiv x^{17}\equiv x^7(\text{mod}22),$$
and so on and so forth. In conclusion, if one starts with an exponent that is greater than $`10`$ (such as $`32`$ for example) one can subtract a multiple of $`10`$ from the exponent without affecting the answer. In particular, $`3^{32}\equiv 3^2\equiv 9(\text{mod}22)`$.
Exercise 13
| $`x`$ | $`1`$ | $`2`$ | $`3`$ | $`4`$ | $`5`$ | $`6`$ | $`7`$ | $`8`$ | $`9`$ | $`10`$ | $`11`$ | $`12`$ | $`13`$ | $`14`$ | $`15`$ | $`16`$ | $`17`$ | $`18`$ | $`19`$ | $`20`$ | $`21`$ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| $`x^3`$ | $`1`$ | $`8`$ | $`5`$ | $`20`$ | $`15`$ | $`18`$ | $`13`$ | $`6`$ | $`3`$ | $`10`$ | $`11`$ | $`12`$ | $`19`$ | $`16`$ | $`9`$ | $`4`$ | $`7`$ | $`2`$ | $`17`$ | $`14`$ | $`21`$ |
| $`x^7`$ | $`1`$ | $`18`$ | $`9`$ | $`16`$ | $`3`$ | $`8`$ | $`17`$ | $`2`$ | $`15`$ | $`10`$ | $`11`$ | $`12`$ | $`7`$ | $`20`$ | $`5`$ | $`14`$ | $`19`$ | $`6`$ | $`13`$ | $`4`$ | $`21`$ |
Exercise 14 The answer is $`1`$. Note that $`48=6\times 4\times 2`$, and hence that $`29^{48}=((29^6)^4)^2`$. From there
$$29^6=594,823,321\equiv 53,53^4=7,890,481\equiv 118,$$
$$29^{48}=((29^6)^4)^2\equiv 118^2=13,924\equiv 1,$$
where all equivalences are, of course, modulo $`221`$.
# The optical variability of the narrow line Seyfert 1 galaxy IRAS 13224-3809
## 1 Introduction
X-ray observations of IRAS 13224-3809 by Boller et al (1997) have shown persistent giant variability on short timescales, with an amplitude of variability far in excess of that seen in a typical broad line Seyfert 1 galaxy. Such extreme variability is determined by the emission and variability mechanisms in the nucleus. The degree of variability at other wavelengths may be used to constrain the conditions and mechanisms. Optical variability will occur if, for example, the X-ray emitting electron population is rapidly changing and Compton scattering infrared radiation in the nucleus, or if the mechanism responsible for the X-ray variations causes all the emission processes to vary together. It has also been suggested (Boller et al 1997) that such large amplitude X-ray variability may be due to relativistic boosting, the degree of which is spectrally dependent. We have therefore observed IRAS 13224-3809 on three consecutive nights in order to determine the level of optical variability in the source.
Previous studies of the narrow line Seyfert 1 galaxy IRAS 13224-3809 have shown it to be variable both in the UV and X-ray wavebands. Long timescale Ly$`\alpha `$ line variability has been observed, with the line profile and flux changing between three IUE observations over four months (Mas-Hesse et al. 1994). UV continuum variability has been observed in 11 IUE observations spanning three years, with the flux varying by 24 per cent (Rodríguez-Pascual 1997).
## 2 Optical observations
IRAS 13224-3809 ($`\alpha _{2000}=13^\mathrm{h}25^\mathrm{m}19^\mathrm{s}`$, $`\delta _{2000}=-38^{\circ }24^{\prime }53^{\prime \prime }`$) was observed in *B*, *V*, *R* and *I* on three consecutive nights from 1997 March 13 to 16 using the blue-sensitive TEK4 CCD at the Cassegrain focus of the 1m Jacobus Kapteyn Telescope (JKT). The $`5.6\times 5.6`$ arcmin CCD field contained the galaxy and a number of foreground stars (see Fig. 1) allowing us to perform relative photometry. The images were bias subtracted and flat fielded using sky flats. Flux calibration was performed using the standard stars PG0942-029B, C and D of Landolt (1992). IRAS 13224-3809 has a foreground star 7.6 arcsec from the optical nucleus (10.7 arcsec from the X-ray centroid position (Boller et al. 1997)), and the aperture used to collect the light from the nucleus was chosen to have a diameter of 6.6 arcsec. This was found to collect most of the light from the nucleus whilst minimising that from the surrounding galaxy and the foreground star. Repeating our analysis with different sized apertures did not significantly affect our results. The same size aperture was used to obtain the flux from a comparison star in the field. Fig. 2 shows the radial profile of pixel values from the centre of IRAS 13224-3809. The scatter in pixel value is due to the different contributions to the radial profile from different azimuths. The contribution from the nucleus, underlying galaxy, the nearby star and background may be seen.
We follow the method of Done et al (1990) in which the variations seen in a comparison star due to small-scale rapid changes in seeing are used as a scalable template to model the variability one would expect to see in the nucleus as a result of those seeing changes. During each night the zenith distance of IRAS 13224-3809 varied between $`67^{\circ }`$ and $`72^{\circ }`$, and over such a small range we expect the extinction to remain almost constant. Since the images were taken through a number of different filters we grouped the images by night and by filter and then normalised them so that they have the same mean. This allows us to compare images taken with different filters. Fig. 3 shows the stellar and galactic residuals about the mean for a typical night. We would expect the slope of the best fitting line to be 1 if the seeing had an equal effect on both the galaxy and the comparison star. The actual fit has a slope of 1.10 indicating that seeing has a larger effect on the flux measured for the galaxy than the star, assuming both are not variable. The stellar residuals are then used as a scalable template for the variations of the galaxy about the mean flux of the galaxy. The normalisation of this template is chosen to minimise the difference between the predicted and observed galactic fluxes. Fig. 4 shows the template light curve, created using the scaled stellar residuals, and the actual light curve. There is very good agreement between the two.
Fig. 5 shows the light curve for the three nights using data from all filters, with $`1\sigma `$ Poisson error bars. The light curve is consistent with less than 1 per cent variability during each night. One may expect the amplitude of variability to be wavelength dependent, but we are unable to detect significant variability in the data from any one filter. The light curve for each night is normalised to the mean for that night although there is less than 1 per cent variability between nights. The large breaks in the light curve are due to high winds at the telescope preventing observing.
## 3 Discussion
### 3.1 Optical
The optical light curve of IRAS 13224-3809 is consistent with the short time-scale variability of the source being less than 1 per cent. We are unable to comment on the wavelength dependence of any optical variability. Our aperture, however, also collects flux from the underlying galaxy which dilutes the signal from the nucleus. As may be seen from Fig. 2 the three main components of the flux received within a radius of 10 pixels, namely the contribution of the background, the underlying galaxy and the nucleus, may be separated. If the nucleus is assumed to be a Gaussian point source and we (over-) estimate the contribution from the galaxy then the variability of the nucleus may be constrained. It is found that a 1 per cent variation of the entire source may correspond to a $`\sim 2`$ per cent variation in the nucleus.
The continuum optical spectrum of IRAS 13224-3809 (Boller et al 1993) has a photon index of approximately $`\mathrm{\Gamma }_\mathrm{o}=0`$ across the optical waveband. The V band magnitude of the nucleus is 15.2, corresponding to a $`\nu F_\nu `$ flux in the V band of $`1.5\times 10^{-11}`$ erg cm<sup>-2</sup> s<sup>-1</sup> at 5500Å.
### 3.2 X-ray
Simultaneous X-ray data are not available but if the 30 day ROSAT light curve of Boller et al (1997) is assumed to be typical of IRAS 13224-3809 it is extremely unlikely that the X-ray flux did not at least double during the period of our observations. This light curve is shown in Fig. 6 and there is no set of three consecutive days during which the X-ray flux did not change considerably. The only period of relative ‘quiescence’ is between days 8 and 10, and even during this the X-ray flux at least doubles. A more recent HRI monitoring campaign has confirmed the continued existence of such extreme variability (private communication).
The X-ray photon index of IRAS 13224-3809 was observed to be very high, $`\mathrm{\Gamma }_\mathrm{x}\approx 4`$, and the average HRI count rate was $`4.7\times 10^{-3}`$ count s<sup>-1</sup> (Boller et al 1997) corresponding to a $`\nu F_\nu `$ flux of $`2.9\times 10^{-13}`$ erg cm<sup>-2</sup> s<sup>-1</sup> at 1.25 keV.
### 3.3 The nuclear X-ray and optical emitting regions
Optical emission can originate from the soft X-ray emission region, or very close to it, by several means. It may be emitted by the same electrons that produce the X-rays, by cyclotron or synchrotron emission or Comptonization (perhaps of photons produced by the cyclo-synchrotron process; see e.g. Di Matteo, Celotti & Fabian 1997). Comptonization is however unlikely to make a large contribution to the observed flux since the steep X-ray spectrum implies a low Compton $`y`$-parameter which gives a similarly steep optical spectrum. Optical emission may also originate from reprocessing of the X-ray flux, and therefore would have a thermal spectrum (probably from a range of temperatures). Finally it may be triggered by whatever mechanism causes the X-ray variability but not share the same emitting electrons as the X-rays.
The X-ray flux is expected to have at least varied by a factor of two during our observations. Let us assume that the X-ray flux doubled at some point and consider the implications of the lack of optical variability.
A lack of variability in the optical band can be explained in several ways. It may simply be that most (more than 98 per cent) of the optical emission emerges from a region unconnected with that producing the soft X-ray flux. Note however that the average soft X-ray flux (below 1 keV) is comparable to the optical flux and the peak soft X-ray flux may be significantly greater. It could also be due to the presence of many very dense clouds in the nucleus which free-free absorb the intrinsic optical flux (Celotti, Rees & Fabian 1994). Alternatively, the spectrum of the X-ray source may be such that relativistic effects preferentially enhance the variability of the X-ray emission over the optical emission.
The exceptionally large and rapid X-ray variability of IRAS 13224-3809 may be explained if the X-ray emission is relativistically boosted (Boller et al 1997). For a source moving at a fraction $`\beta `$ of the speed of light inclined at an angle $`i`$ to the observer the Doppler parameter is given by $`\delta =[\gamma (1-\beta \mathrm{sin}i)]^{-1}`$, where $`\gamma =(1-\beta ^2)^{-\frac{1}{2}}`$. If the source is a power law of photon index $`\mathrm{\Gamma }`$ this gives rise to a boost in the amplitude of variability by a factor of $`\delta ^{3+\mathrm{\Gamma }}`$. Fig. 7 shows the X-ray and optical boost factors that may be produced as a function of radius for different continuum photon indices and accretion disc inclinations. It is possible for the X-ray boost factor to be many times greater than the optical boost factor. (The ratio of the X-ray to optical boost factor is $`\delta ^{\mathrm{\Gamma }_\mathrm{x}-\mathrm{\Gamma }_\mathrm{o}}`$). The fraction of the optical emission that may be produced by, or whose variability is tied to, the soft X-ray emitting regions may then be constrained. The maximum fraction $`0.02\delta ^{\mathrm{\Gamma }_\mathrm{x}-\mathrm{\Gamma }_\mathrm{o}}`$ of the optical emission that is due to the X-ray source is shown in Fig. 8 assuming values of $`\mathrm{\Gamma }_\mathrm{x}=4`$ and $`\mathrm{\Gamma }_\mathrm{o}=0`$.
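To give a feel for the numbers involved, the following short Python sketch evaluates the Doppler parameter and the resulting boosts from the formulae above. The Keplerian velocity law $`\beta =(R_\mathrm{s}/2r)^{1/2}`$ adopted here is an assumption made purely for illustration, not a detail taken from Fig. 7.

```python
import math

def doppler(beta, incl_deg):
    """Doppler parameter delta = [gamma (1 - beta sin i)]^-1."""
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    return 1.0 / (gamma * (1.0 - beta * math.sin(math.radians(incl_deg))))

gamma_x, gamma_o = 4.0, 0.0           # X-ray and optical photon indices
for r in (5.0, 10.0, 50.0):           # radius in Schwarzschild radii (assumed)
    beta = math.sqrt(0.5 / r)         # assumed Keplerian orbital speed
    d = doppler(beta, incl_deg=60.0)
    x_boost = d ** (3.0 + gamma_x)    # X-ray variability boost, delta^(3+Gamma)
    ratio = d ** (gamma_x - gamma_o)  # X-ray to optical boost ratio
    print(r, round(x_boost, 2), round(ratio, 2))
```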
The observed photon index of the optical continuum may differ from that of the optical emission associated with the X-ray source. The lowest possible $`\mathrm{\Gamma }_\mathrm{o}`$ is $`-1`$ from the Rayleigh-Jeans part of the blackbody spectrum, and it is possible that $`\mathrm{\Gamma }_\mathrm{x}`$ exceeds 4. It remains unlikely, however, that more than about 20 per cent of the optical emission is closely associated with the soft X-ray emission. Differential relativistic effects between the X-ray and optical bands can therefore amount to a factor of up to 10, but cannot completely explain the lack of optical variability.
The hot electron population responsible for the X-ray emission can inverse-Compton-scatter lower energy infrared photons into the optical waveband and similarly ultraviolet photons into the X-ray waveband. The observed lack of variability can be used to place a limit on the ratio of the energy density of infrared to ultraviolet radiation in the nucleus. The observed ratio of 1.25 keV X-ray to optical luminosity, $`L_\mathrm{x}/L_\mathrm{o}\approx 10^{-2}`$, and the lack of optical variability suggest that the variable X-rays have $`L_{\mathrm{x}(\mathrm{var})}/L_{\mathrm{o}(\mathrm{var})}`$ greater than or comparable with 1, so, in the nucleus, the energy density of infrared radiation $`\epsilon _{\mathrm{IR}}`$ is at most that of the ultraviolet radiation $`\epsilon _{\mathrm{UV}}`$. If we assume the source to have an X-ray photon index $`\mathrm{\Gamma }_\mathrm{x}=4`$, the 1.25 keV flux implies the 0.1 keV $`\nu F_\nu `$ flux will be $`4.5\times 10^{-11}`$ erg cm<sup>-2</sup> s<sup>-1</sup>, allowing the stronger constraint $`\epsilon _{\mathrm{IR}}<0.01\times \epsilon _{\mathrm{UV}}`$.
### 3.4 Location of any reprocessing material
The lack of optical variability implies that very little of the primary soft X-ray flux is reprocessed into the optical waveband within approximately a light day of the source. This may be due to the inner disc where the X-rays originate being hot and small, radiating mostly in the EUV, and to a lack of optically thick material oriented to reprocess these X-rays at larger radii. For a $`3\times 10^7M_{\odot }`$ black hole two light days corresponds to roughly 1000 Schwarzschild radii.
## 4 Conclusion
The Narrow Line Seyfert 1 galaxy IRAS 13224-3809 shows persistent giant amplitude X-ray variability yet no optical variability exceeding 2 per cent. Such an extreme difference in variability between the two wavebands suggests the X-ray emitting regions do not produce any optical variability. This is expected if the electron populations responsible for X-ray and optical emission are physically distinct and the spectrum of the X-ray source intrinsically produces almost no optical emission. The energy density of infrared radiation in the nucleus is at most equal to that of the ultraviolet radiation since the X-ray emitting electrons are not responsible for significant inverse Compton scattering into the optical waveband. If relativistic boosting occurs then the conclusions are weakened somewhat, with at most about 20 per cent of the optical emission associated with the X-ray source. Any significant reprocessing of primary soft X-rays into the optical waveband occurs on scales greater than $`1000R_\mathrm{s}`$. The lack of optical variability is consistent with observations of less extremely X-ray variable Seyfert 1 galaxies such as NGC 4051 (Done et al 1990).
## 5 Acknowledgements
AJY thanks PPARC for support. ACF and CSC thank the Royal Society.
# Triggering BTeV
## I Introduction
The BTeV experiment at Fermilab is expected to begin running in the new Tevatron C0 interaction region by the year 2005. The physics goals include studies of CP violation and mixing, rare decays, and high sensitivity searches for decays forbidden within the Standard Model. The primary focus of BTeV is on precision studies of CP violation and mixing in $`B`$ decays.
BTeV benefits from the new Fermilab Main Injector, which was built to achieve higher luminosity in the Tevatron. With a luminosity of $`2\times 10^{32}\mathrm{cm}^{-2}\mathrm{s}^{-1}`$ the Tevatron can produce $`4\times 10^{11}`$ $`B`$ hadrons in $`10^7`$ seconds of running. This rate of $`B`$ production is almost four orders of magnitude larger than the $`B`$ production rate anticipated for $`e^+e^-`$ colliders operating at the $`\mathrm{\Upsilon }(4S)`$ resonance.
In this paper I describe the Level 1 trigger for BTeV. I begin with an overview of the BTeV detector and the operating environment in the C0 interaction region. I describe the baseline design of the silicon pixel vertex detector, which provides the data for the Level 1 trigger. The Level 1 trigger performs track and vertex reconstruction to select events with detached vertices. The goal is to trigger on $`B`$ decays with high efficiency, while rejecting minimum bias (light quark) events.
Details of the baseline design of the Level 1 trigger have been published . Here I provide an overview of the Level 1 trigger, report on results from trigger simulations, and present some new ideas for track and vertex reconstruction algorithms that challenge our baseline design. Our design for both the trigger and the vertex detector will undoubtedly evolve as we refine our understanding of detector hardware and physics simulations.
## II The BTeV Detector
BTeV is optimized for $`B`$ physics. The detector is a two-arm forward-geometry spectrometer (see FIG. 1) designed to run at a luminosity of $`2\times 10^{32}\mathrm{cm}^{-2}\mathrm{s}^{-1}`$ and a production rate of $`4\times 10^{11}`$ $`B`$ hadrons per year. This rate of $`B`$ production is high. However, the background from light quark events is also large, with only 1 in 1000 events expected to be a $`B`$ event. To select a broad spectrum of $`B`$ events efficiently, BTeV will reconstruct tracks and vertices with a Level 1 trigger that receives data from a state-of-the-art vertex detector. The Level 1 trigger selects $`B`$ decays by reconstructing primary-interaction vertices, and by identifying tracks that are *not* associated with a primary vertex. The goal is to trigger on tracks that come from $`B`$ decays, which are found as secondary vertices detached from a primary vertex.
The trigger and vertex detector operate in a high-rate hadron-collider environment. Like all of the new collider experiments in the Tevatron, BTeV is being designed to operate with a Tevatron bunch spacing of 132 ns. Unlike most hadron collider experiments, BTeV operates in a region with high track density, due to the forward geometry. BTeV also operates close to the Tevatron beams (the innermost edge of the vertex detector is within 6 mm of the beams), and design considerations for both the vertex detector and the Level 1 trigger involve studies with beam luminosities in excess of $`2\times 10^{32}\mathrm{cm}^{-2}\mathrm{s}^{-1}`$. At $`2\times 10^{32}`$ the mean number of interactions per beam crossing is expected to be 2. Although events with two or more interaction vertices could pose a problem for a Level 1 trigger designed to trigger on detached vertices, the BTeV trigger benefits from a long interaction region with $`\sigma _z\approx 30\mathrm{cm}`$, and is designed to be relatively insensitive to multiple interactions per beam crossing. Simulations show that the Level 1 trigger selects less than 1$`\%`$ of minimum bias interactions, even up to a luminosity of $`2.5\times 10^{32}\mathrm{cm}^{-2}\mathrm{s}^{-1}`$.
The two-arm forward-geometry design of the BTeV spectrometer is accompanied by significant design challenges. However, the forward geometry provides considerable advantages for $`B`$ physics , compared to a more central detector. The main advantage is a larger Lorentz boost, which increases the reconstruction efficiency for $`B`$ decays and improves the proper time resolution. Another significant advantage is that a forward spectrometer can have a much longer detector volume, and we exploit this aspect of the geometry by including state-of-the-art particle identification for hadrons. Furthermore, having a two-arm configuration with a dipole magnet centered on the interaction region means that BTeV has two spectrometers, thereby doubling the acceptance for $`B`$ physics compared to a single-arm spectrometer. As a double-arm spectrometer, BTeV covers both the forward and the backward rapidity regions with a combined coverage of $`1.5<|y|<4.5`$.
BTeV includes detectors (see FIG. 1) featured in many collider experiments, with inner and outer tracking systems, E&M calorimetry, and muon detection. BTeV also includes a Ring Imaging Cherenkov (RICH) counter in each arm of the spectrometer. The RICH detectors are ideal for charged-hadron identification, so that different types of $`B`$ decays can be distinguished from one another. The spectrometer has inner and outer tracking systems that are highly segmented. The inner tracking system, used for precision tracking and vertex reconstruction, is the “centerpiece” of the BTeV spectrometer, and it consists of planar pixel arrays located inside the Tevatron beam vacuum. FIG. 2 shows a close-up view of 13 out of a total of 31 inner tracking stations. In our baseline design the inner tracking system consists of silicon pixel detectors; however, diamond pixel detectors are also being considered.
The inner tracker/vertex detector is a key component of the BTeV spectrometer. There are 31 pixel tracking stations. In the central part of the vertex detector the tracking stations are separated by 3.2 cm (center-to-center); at the outer ends of the vertex detector the spacing for 8 of the 31 tracking stations is increased, and the total length of the detector is 128 cm. Each station has three pixel planes that are arranged in views with respect to the magnetic field. There are two bend views and one non-bend view. Each pixel plane has over 500,000 pixels in a 10 $`\times `$ 10 cm area, excluding the beam region. Each pixel provides an X and Y position measurement, and a pulse-height measurement. The pixels, which are 50 $`\times `$ 400 $`\mu `$m in size, are arranged on sensor chips that are tiled to provide close to 100 $`\%`$ coverage over the active area of the vertex detector. FIG. 3 shows a diagram of sensor chips for three quadrants of a pixel plane, and shows the 1.2 $`\times `$ 1.2 cm beam hole at the center of the vertex detector. The beam hole is larger during injection of the Tevatron beams, and is brought into the configuration shown in FIG. 3 after the beams have stabilized.
## III Level 1 Trigger
The Level 1 trigger selects events by detecting the presence of $`B`$ decays. These decays are detected by first reconstructing primary interaction vertices. Tracks from $`B`$ decays are then found with an impact parameter cut that selects the tracks that miss the primary vertex by a significant amount. To accomplish these tasks quickly, the Level 1 trigger hardware receives data directly from the pixel detectors (at a rate of 100 gigabytes per second). The trigger itself consists of three stages. The first stage is the *segment finder*, where hits from the three pixel planes per tracking station are assembled into 3-dimensional space points, each with a track-direction mini-vector. These mini-vectors are used for track reconstruction in the second stage of the Level 1 trigger. In the third stage, the reconstructed tracks are used to reconstruct primary interaction vertices, calculate impact parameters, and select tracks coming from secondary vertices.
The Level 1 trigger is heavily pipelined throughout the 3-stage reconstruction process, and performs many operations in parallel to accommodate the data flow from the pixel detectors. Data from an individual pixel consists of an X and Y position measurement, a pulse height measurement, and a time stamp (in units of 132 ns) that is used by the trigger to assemble all of the data belonging to a particular beam crossing. Data from adjacent pixels are combined into a pixel *hit* by a clustering algorithm that uses pulse height and position measurements. Pixel hits are processed in parallel, for each pixel station, by the *segment finder*. Additional parallelism is obtained by subdividing pixel hits into $`\varphi `$ slices (see Section IV). The hits for each station are combined into groups of three hits (one from each of three pixel planes), and are used to calculate a 3-dimensional space point and a track-direction mini-vector. FIG. 2 shows a schematic representation of the mini-vectors found for a simulated $`B^o\to \pi ^+\pi ^-`$ event. The red line segments, which appear to penetrate each tracking station, represent the mini-vectors that are found by the first stage of the Level 1 trigger.
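As a rough illustration of the segment-finder idea (this is my sketch, not the actual FPGA/DSP implementation), a mini-vector can be formed from a triplet of hits by checking that the three measured points of a station lie on a straight line:

```python
def make_minivector(hits, tol=0.1):
    """Build a space point plus direction from three hits (x, y, z),
    one per plane; return None when the hits do not line up."""
    (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) = sorted(hits, key=lambda h: h[2])
    dxdz = (x3 - x1) / (z3 - z1)   # slope of the candidate track in x
    dydz = (y3 - y1) / (z3 - z1)   # slope in y
    xp = x1 + dxdz * (z2 - z1)     # predicted position on the middle plane
    yp = y1 + dydz * (z2 - z1)
    if abs(xp - x2) > tol or abs(yp - y2) > tol:
        return None                # inconsistent triplet; reject it
    return (x2, y2, z2), (dxdz, dydz)
```

In the real trigger the two pixel coordinates are measured with very different precisions (50 versus 400 microns), and the combinatorics of forming triplets is what makes this stage so computationally expensive and so heavily parallelized.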
In the second stage of the trigger, the slope and position measurements for each mini-vector in a pixel station are used to find matching mini-vectors in a neighboring station. Mini-vectors with compatible measurements are combined to form particle trajectories. In the third stage of the trigger, the curvature of these reconstructed trajectories is used to eliminate low momentum tracks, which tend to have large multiple Coulomb scattering errors. We also eliminate tracks with large impact parameters (greater than 2 mm, for example). This reduces the number of incorrect trigger decisions caused by tracks associated with other interactions that occurred during the same beam crossing. The decision to keep or reject the data for a particular beam crossing is based on the number of tracks that are found with a normalized impact parameter ($`b/\sigma _b`$) greater than some value. For example, the requirement that there be at least 2 tracks with $`b/\sigma _b>3.5`$ yields a trigger efficiency of 40$`\%`$ for $`B^o\pi ^+\pi ^{}`$ events , while providing a rejection factor of $`5\times 10^3`$ for minimum bias events. This result comes from a simulation of the full pattern recognition with simulated pixel hits, and is presented in greater detail elsewhere .
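As an illustration of this decision logic, the sketch below applies the example cuts quoted above (at least 2 tracks with $`b/\sigma _b>3.5`$, impact parameters capped at 2 mm); the track representation and the numerical value of the low-momentum cut are illustrative assumptions, not the actual trigger parameters.

```python
def level1_accept(tracks, n_required=2, sig_cut=3.5,
                  b_max_cm=0.2, p_min_gev=0.5):
    """Keep a beam crossing if enough tracks miss the primary vertex.

    Each track is a dict with impact parameter `b` (cm), its error
    `sigma_b` (cm), and momentum `p` (GeV/c); `p_min_gev` stands in for
    the curvature-based low-momentum cut and is only a placeholder value.
    """
    n_detached = 0
    for trk in tracks:
        if trk["p"] < p_min_gev:      # drop low-momentum tracks (large MCS errors)
            continue
        if trk["b"] > b_max_cm:       # drop impact parameters above 2 mm
            continue
        if trk["b"] / trk["sigma_b"] > sig_cut:
            n_detached += 1
    return n_detached >= n_required
```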
## IV Trigger Hardware and Simulations
With 100 gigabytes of pixel data coming into the Level 1 trigger every second, a beam crossing every 132 ns, and an average of 2 interactions per crossing, the time required for each pipeline step in the trigger is critical. Many of the timing issues have been studied by performing a variety of trigger simulations. These include Monte Carlo studies of the pattern recognition and reconstruction algorithms, and simulations of the trigger hardware. The trigger hardware and software algorithms will be extensively tested with a trigger prototype that is being built with components specified in our baseline trigger design. The design utilizes FPGAs (field programmable gate arrays) and two types of DSPs (digital signal processors): a fixed-point DSP, and a floating-point DSP. The fixed-point DSP is made by Texas Instruments and belongs to the TMS320C6X family of DSPs. The floating-point DSP is an Analog Devices ADSP-2106x SHARC. The layout of the logic for the FPGAs is mostly complete. We are currently developing the software algorithms (including optimizations of the code) for the DSPs. I should note that future implementations of the Level 1 trigger may use different DSPs (as new DSPs with better performance are introduced), and that our DSP algorithms will undoubtedly be modified as we optimize the performance of the trigger.
Simulations of the Level 1 baseline design show that the first stage of the trigger, the segment finder, requires the most computational power compared to other parts of the trigger. A large number of operations are performed by the segment finder, due to the number of combinations of pixel hits that must be sampled to find the hits that define a mini-vector. To address this problem, we introduce considerable *parallelism* in the trigger architecture. This is a key feature of the baseline trigger design. The parallelism begins with the organization of the data read out from the pixel detectors. Pixel hits are read out in parallel for each quadrant of a pixel plane. The hits from the three planes that belong to a pixel station are brought together in a *quadrant processor*, which represents one of 124 identical circuit boards that make up the first stage of the trigger. Additional parallelism is introduced by subdividing the pixel hits in a quadrant into 8 $`\varphi `$ slices, for a total of 32 $`\varphi `$ slices per pixel station. The hits in each $`\varphi `$ slice are processed in parallel by one of the 992 TMS320C6X DSPs (a total of 992 DSPs for 31 pixel stations, each with 32 $`\varphi `$-slice DSPs). FIG. 3 shows the “trigger view” of one quadrant, and shows that the subdivision of hits into $`\varphi `$ slices reduces the mean number of hits per pixel plane to a value of 0.23 hits for minimum bias events. This is a small number of hits (on average), and with suitable buffering in the Level 1 trigger we can average over numerous beam crossings in our timing studies.
Even with the small number of hits per $`\varphi `$ slice, the timing for the segment finder is critical. Table I shows timing results for the segment finder algorithm developed for the TMS320C6X DSP. In the table we define four cases to present our results. For a given $`\varphi `$ slice and with three planes (A, B, and C) in a pixel station (as shown in FIG. 3), we have a single hit in each plane for one track passing through a pixel station. This is the (1,1,1) case in Table I. For two tracks passing through the same $`\varphi `$ slice we have two hits per plane, or (2,2,2). With three and four tracks in the same $`\varphi `$ slice we get the (3,3,3) and (4,4,4) cases, respectively. The timing results for the four cases are divided into three categories that represent different levels of optimization for the segment finder algorithm. The first category involves $`C`$ code that has *not* been optimized (the code is compiled without any optimization), and we use this as a reference for the measurements at the other levels of optimization. For the second category we allow the $`C`$ compiler for the TMS320C6X DSP to optimize the algorithm, and for the third category the programming is done directly in assembly language. Although programming in assembly language is more cumbersome, we can optimize the code so that all hardware components in the DSP are performing useful operations most of the time. Our timing results in Table I show that the $`C`$ compiler for the TMS320C6X DSP is not very effective at optimizing the segment finder algorithm; the compiler achieves only a minor performance gain for the (1,1,1) case, and at best a factor of two improvement for cases with more hits in a $`\varphi `$ slice. Table I also shows that we are able to achieve an order of magnitude improvement in the timing by optimizing the algorithm in assembly language.
To obtain an estimate of the average time required to find mini-vectors in minimum bias events, we compute a weighted average by using the results in Table I weighted by the distribution of hits in FIG. 3. We use a value of zero for the (0,0,0) case, since the DSP does not perform any operations when there are no hits in a $`\varphi `$ slice. The average time for each level of optimization is shown in the last row of Table I. Although these results are somewhat oversimplified, they do indicate that an assembly language implementation of the segment finder algorithm is necessary so that the first stage of the trigger stays within a budget of 132 ns per beam crossing. The average time of 37 ns for the segment finder is encouraging, since it suggests that we are close to having a feasible Level 1 trigger design. We continue to work on more refined simulations of the trigger, and on developing the trigger prototype for detailed studies of the trigger hardware.
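The weighted average itself is a few lines of arithmetic. In the sketch below the per-case times are placeholders (Table I is not reproduced here), and the hit-multiplicity weights are assumed to follow a Poisson distribution with the mean of 0.23 hits per plane quoted above; both are illustrative assumptions rather than the measured inputs.

```python
from math import exp, factorial

# Segment-finder times (ns) for the (n,n,n) cases of Table I; the numbers
# below are placeholders, not the measured values from the table.
time_ns = {0: 0.0, 1: 40.0, 2: 150.0, 3: 330.0, 4: 580.0}

# Hit-multiplicity weights: assume the number of tracks per phi slice is
# Poisson distributed with the 0.23 hits/plane mean quoted in Section IV.
mean_hits = 0.23
weight = {n: exp(-mean_hits) * mean_hits**n / factorial(n) for n in time_ns}

avg = sum(weight[n] * time_ns[n] for n in time_ns) / sum(weight.values())
print(f"weighted average segment-finder time: {avg:.0f} ns per crossing")
```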
## V Alternatives to the Baseline Design
Although most of our trigger-design efforts are devoted to the development of the baseline design of the Level 1 trigger, we are also exploring alternative designs. These alternatives invariably involve design changes in both the Level 1 trigger and the vertex detector, since the two systems are interdependent. For example, we are investigating a design that entails two pixel planes per tracking station instead of the three-plane tracking stations in our baseline design. The two-plane design for the vertex detector has less material, which reduces multiple Coulomb scattering errors. Other advantages include a reduction in the heat load for the pixel cooling system, and perhaps a reduction in cost. These improvements in the vertex detector require significant changes in the Level 1 trigger. The segment finder in the baseline design must be replaced by a different algorithm to initiate the track-reconstruction phase of the trigger. We are working on algorithms that find track segments spanning three tracking *stations*, compared to the three-plane mini-vectors in the baseline design. In one approach, we are investigating a massively parallel system based on an FPGA design that can handle the large number of pixel-hit combinations that must be sampled to identify three-station track segments. In a second approach, we reduce the number of pixel-hit combinations that must be sampled by finding the first three pixel hits for each track as it passes from the beam region into the vertex detector. In this approach each track is found once, and is then projected to neighboring stations to extend the track and improve the momentum measurement for the track. Both of the alternative trigger designs require additional work before they can be considered as viable alternatives to our baseline trigger design.
## VI BTeV Status
BTeV is an approved R&D project. The goal of this project is to conduct all detector R&D, and to design the BTeV experiment. Although the forward-geometry of the BTeV detector offers numerous design challenges, the benefits of a second-generation experiment dedicated to *precision* studies of $`B`$ physics are substantial. A technical design report will be submitted in 14 months.
# New Possibilities for Investigation of TRI Violation with the use of Aligned Nuclei
## 1 Introduction
Polarized neutron beams are an excellent tool for investigating the violation of fundamental symmetries, namely parity (P) violation and time reversal invariance (TRI) violation. By now a great body of data has been obtained on P violation in the interaction of polarized neutrons with unpolarized nuclei. On the other hand, a number of possible tests of TRI violation in similar experiments with polarized and aligned targets have been discussed.
The method to search for the P- and T-odd nuclear interaction by investigating the three-fold correlation $`(𝐧_s[𝐧_k\times 𝐧_I])`$ in the transmission of polarized neutrons through a polarized target was proposed in . Here $`𝐧_s`$, $`𝐧_I`$ and $`𝐧_k`$ are unit vectors along the neutron and nuclear polarization axes and the neutron momentum, respectively. The possibility of searching for a P-even T-odd nuclear interaction by studying the five-fold correlation $`(𝐧_s[𝐧_k\times 𝐧_I])(𝐧_k𝐧_I)`$ was first considered in -. This method needs an aligned target (note that here $`𝐧_I`$ is a unit vector along the alignment axis).
The difficulty of nuclear alignment presents a major obstacle for the five-fold correlation experiment. Up to now the number of nuclei which have been aligned is very limited. We propose a new method of dynamic nuclear alignment (DNA) which can significantly increase the number of nuclei accessible for the appropriate physical experiments.
## 2 TRI tests
Both three- and five-fold correlation experiments are unique null tests of TRI. The commonly used methods, such as comparison of the cross sections of direct and inverse reactions or of the polarization and the analyzing power in scattering experiments (see, e.g., ), are based on measurements of two values which should coincide if TRI holds. Clearly, a measurement of a single value, which is nonzero only if TRI breaks, is much more reliable. Three- and five-fold correlations arise in the forward scattering amplitude $`f(0)`$. Thus, they appear in the total cross section, which is linear in $`f(0)`$ as a result of the optical theorem
$$\sigma _{tot}=\frac{4\pi }{k}\mathrm{Im}\mathrm{Sp}(\rho f(0)).$$
(1)
Here $`k`$ is the relative wave vector of colliding particles, and $`\rho `$ is their spin density matrix.
Generally, the forward scattering amplitude may be represented in terms of S-matrix elements. Let us consider the elastic scattering of two particles with spins $`s`$ and $`I`$. The sum of the channel spin $`F`$ ($`𝐅=𝐬+𝐈`$) and the relative orbital momentum $`l`$ in the entrance channel gives the total angular momentum $`J`$ ($`𝐉=𝐅+𝐥`$). In the exit channel the channel spin $`F^{}`$ and the relative orbital momentum $`l^{}`$ can differ from $`F`$ and $`l`$, respectively, provided the rules of angular momentum summation $`𝐉=𝐅^{}+𝐥^{}`$ and $`𝐅^{}=𝐬+𝐈`$ are satisfied. Such a transition is thus described by the S-matrix element $`S_J(lFl^{}F^{})`$. If TRI holds, the S-matrix should be symmetric: $`S_J(lFl^{}F^{})=S_J(l^{}F^{}lF)`$. It can be shown that the terms in the total cross section (1) related to the correlations $`(𝐧_s[𝐧_k\times 𝐧_I])`$ and $`(𝐧_s[𝐧_k\times 𝐧_I])(𝐧_k𝐧_I)`$ are proportional to differences
$$S_J(lFl^{}F^{})-S_J(l^{}F^{}lF).$$
(2)
Clearly, such terms are nonzero only if TRI breaks. When a light particle is scattered by a heavy particle, in particular, in neutron-nucleus interaction, it is more convenient to use the total angular momentum of the light particle $`j`$ ($`𝐣=𝐬+𝐥`$, $`𝐉=𝐣+𝐈`$) instead of the channel spin $`F`$.
The first five-fold correlation test of TRI was performed in the interaction of 2 MeV polarized neutrons with aligned <sup>165</sup>Ho nuclei . A bound of $`10^{-2}`$ on the ratio of T-odd forces to T-even ones in the effective nucleon-nucleon interaction was obtained. A similar test in the interaction of polarized protons with aligned deuterons is now under preparation at the cooler synchrotron COSY at Jülich .
Both three- and five-fold correlations were proposed to be studied in the interaction of resonance p-wave neutrons with heavy nuclei . Such tests of TRI have the advantage that the effects may be enhanced in a p-wave resonance by a factor of $`10^3`$ . The reason is the smallness of a resonance width and, hence, an increase of the time during which the T-odd forces act. Note that the description of the effect for slow neutrons is quite simple because only three partial waves participate in the scattering, namely $`lj`$=s1/2, p1/2 and p3/2, where $`l`$ is the neutron orbital momentum and $`j`$ is its total angular momentum. Thus all scattering effects for resonance neutrons are determined by nine S-matrix elements $`S_J(ljl^{}j^{})`$. Let us consider the total cross section of the interaction of polarized neutrons with nuclei, which may be both polarized and aligned.
To describe the nuclear orientation we choose the axis $`z`$ along the unit vector $`𝐧_I`$. Let $`m`$ be the projection of the nuclear spin $`I`$ on the $`z`$ axis, and let $`n_m`$ be the population of the $`m`$-substate ($`\underset{m}{}n_m=1`$). We then define the nuclear polarization and alignment as
$$p_1(I)=\frac{\overline{m}}{I},p_2(I)=\frac{3\overline{m^2}-I(I+1)}{I(2I-1)},$$
(3)
where $`\overline{m^k}=\underset{m}{}m^kn_m`$. In the case of pure alignment $`n_m=n_{-m}`$, thus $`p_1(I)=0`$. Both parameters $`p_1(I)`$ and $`p_2(I)`$ equal unity when only the substate with the maximal projection $`m=I`$ is populated ($`n_m=\delta _{mI}`$). In the same way the neutron polarization is defined by $`p_1(s)=\overline{\sigma }/s`$, where $`\sigma `$ is the projection of the neutron spin $`s=1/2`$ on the $`z`$ axis along the unit vector $`𝐧_s`$.
The total cross section of the interaction of slow neutrons with nuclei is of the form
$$\begin{array}{c}\sigma _{tot}=\sigma _0+a_1p_1(s)p_1(I)(𝐧_s𝐧_I)+a_2p_1(s)p_1(I)(3(𝐧_s𝐧_k)(𝐧_I𝐧_k)-(𝐧_s𝐧_I))+\hfill \\ +a_3p_2(I)(3(𝐧_k𝐧_I)^2-1)+\hfill \\ +b_1p_1(s)(𝐧_s𝐧_k)+b_2p_1(I)(𝐧_k𝐧_I)+b_3p_1(s)p_2(I)(3(𝐧_s𝐧_I)(𝐧_k𝐧_I)-(𝐧_s𝐧_k))+\hfill \\ +c_1p_1(s)p_1(I)(𝐧_s[𝐧_k\times 𝐧_I])+c_2p_1(s)p_2(I)(𝐧_s[𝐧_k\times 𝐧_I])(𝐧_k𝐧_I).\hfill \end{array}$$
(4)
Here $`\sigma _0`$ is the total cross section for unoriented neutrons and nuclei. It can be presented in terms of S-matrix elements, as well as the quantities $`a_i`$, $`b_i`$ and $`c_i`$ (see ). The terms related with $`b_i`$ are P-odd, while for T-odd terms we have
$$c_1=\frac{2\pi }{k^2}\underset{j}{}C_1^{Jj}\mathrm{Im}\left(S_J(0\frac{1}{2}1j)-S_J(1j0\frac{1}{2})\right),$$
(5)
$$c_2=\frac{2\pi }{k^2}C_2^J\mathrm{Im}\left(S_J(1\frac{1}{2}1\frac{3}{2})-S_J(1\frac{3}{2}1\frac{1}{2})\right),$$
(6)
where $`C_1^{Jj}`$ and $`C_2^J`$ are numerical coefficients of order unity. The three-fold correlation arises from the asymmetry between scattering from the s1/2-wave to the p1/2- and p3/2-waves and vice versa. It is P-odd, as the transitions between s- and p-waves are parity violating. The five-fold correlation tests the equality of the transition rates from the p1/2-wave to the p3/2-wave and vice versa, thus it is P-even. Clearly, both correlations should be studied in p-wave resonances to maximize the p-wave contribution to the scattering. There exist additional possibilities to test TRI using neutron spin rotation in transmission through polarized and aligned targets (see, e.g., ).
An experimental setup for developing the research technique for investigation of TRI violation by studying the three-fold and five-fold correlations is now being constructed at a neutron beam of the pulsed reactor IBR-30 (JINR). The setup will include the well known neutron polarizer , a neutron analyzer (constructed in ITEP ), a system for precise control and adjustment of the neutron spin, and an oriented nuclear target. A description of the setup will be published elsewhere.
## 3 Dynamic nuclear alignment (DNA) method
Let us consider a nucleus with spin $`I\ge 1`$ and quadrupolar moment $`Q`$ which is acted upon by an axially symmetric electric field gradient (EFG) directed along an axis $`z`$. The interaction of $`Q`$ with the EFG results in a set of $`(2I+1)/2`$ sublevels (we assume for simplicity that $`I`$ takes half-integer values). Each of them is a degenerate doublet of substates with projections $`\pm m`$ of the spin $`I`$ on the axis $`z`$. The energy splitting of the sublevels is determined by a parameter proportional both to the nuclear quadrupolar moment $`Q`$ and to the EFG value. In the case under consideration the energy differences between sublevels are equal to $`a`$, $`2a`$, $`3a`$… (counting from the sublevel with $`m=\pm 1/2`$). One can observe the signals of nuclear quadrupolar resonance (NQR) at frequencies determined by the energy splitting. Their intensities depend on the substate populations $`n_m`$ and define the value of the nuclear alignment .
If the spins are in equilibrium at the temperature $`T_0`$, the distribution $`n_m`$ over substates is given by the Boltzmann law. For the case of $`I=3/2`$ the quadrupolar spectrum consists of two sublevels with $`m=\pm 1/2`$ and $`\pm 3/2`$ separated by the energy $`a`$. Then $`n_2/n_1=\mathrm{exp}(-a/kT_0)`$, and for $`a/h\approx 100`$ MHz (a typical value of $`a`$ for heavy nuclei) and $`T_0=0.5`$ K the equilibrium value of the alignment is $`p_2(I)=4.9\times 10^{-3}`$. To obtain a higher nuclear alignment the method of dynamic nuclear alignment (DNA) is proposed.
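As a numerical cross-check of this estimate, the sketch below evaluates the equilibrium populations and $`p_2(I)`$ directly from the Boltzmann law; the physical constants and the sign convention (which doublet lies lower) are the only assumptions, so it is the magnitude of the result that should be compared with the value quoted above.

```python
import numpy as np

h = 6.626e-34   # Planck constant, J s
kB = 1.381e-23  # Boltzmann constant, J/K

def p2_equilibrium(I, nqr_freq_hz, T):
    """Equilibrium alignment p2(I) for a pure quadrupole splitting.

    Sublevel energies follow E_m ~ 3m^2 - I(I+1), scaled so that the
    spacing between the m=+-1/2 and m=+-3/2 doublets is h*nqr_freq_hz
    (this scaling is exact for I=3/2).
    """
    m = np.arange(-I, I + 1)                               # projections -I..I
    E = h * nqr_freq_hz * (3 * m**2 - I * (I + 1)) / 6.0   # doublet spacing = a
    n = np.exp(-E / (kB * T))
    n /= n.sum()                                           # Boltzmann populations
    return float(np.sum(n * (3 * m**2 - I * (I + 1))) / (I * (2 * I - 1)))

# a/h = 100 MHz, T0 = 0.5 K gives |p2| ~ 4.8e-3, matching the estimate above.
print(p2_equilibrium(1.5, 100e6, 0.5))
```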
The DNA method is similar to the dynamic nuclear polarization (DNP) method; however, in the case of DNA there is no need for an external magnetic field. The idea is the following. The ground states of paramagnetic ions with electron spin $`S\ge 1`$ are split in the electric field of a crystal in the same way as the states of quadrupolar nuclei. This splitting results from the interaction of the quadrupolar (and higher order) moments of the electron shell of the paramagnetic ion with the EFG. The energy differences between sublevels, $`h\mathrm{\Delta }_0`$ (where $`\mathrm{\Delta }_0`$ is the frequency of electron paramagnetic resonance, EPR, in zero magnetic field), may be several orders of magnitude larger than those for the nuclear quadrupolar splitting, and $`\mathrm{\Delta }_0`$ may be of the order of tens of GHz. Taking, for example, $`S=3/2`$, $`\mathrm{\Delta }_0=50`$ GHz and T=0.3 K, we obtain completely aligned electron spins, $`p_2(S)=1`$ . We emphasize that in this case no external magnetic field is required, and the quantization axis $`z`$ coincides with the main axis of the crystal electric field. The dynamic nuclear alignment method is based on the transmission of the high alignment of the electron spins of the paramagnetic admixture to the nuclei of the basic crystal lattice.
As in the case of DNP, it may be realized by saturating irradiation of the target by microwaves at a frequency $`\mathrm{\Delta }_0+\delta `$ near the resonance frequency of the paramagnetic ions. The shift $`\delta `$ from the exact resonance frequency (which corresponds to the center of the EPR line in zero magnetic field) leads to a decrease of the spin temperature $`T_d`$ of the electron dipole-dipole subsystem
$$\frac{T_0}{T_d}\approx \frac{\mathrm{\Delta }_0}{2\omega _L}.$$
(7)
Here $`\omega _L`$ is a parameter of the electron spin-spin interaction, which is of the order of the EPR linewidth at zero magnetic field (typical values of $`\omega _L`$ are 100-300 MHz). As a consequence of the electron-nucleus dipole interaction this temperature is transmitted to the spin subsystem of the quadrupolar nuclei. Thus, an enhancement of the nuclear alignment $`p_2(I)`$ by a factor of $`T_0/T_d\approx 10^2`$–$`10^3`$ arises both in the case of the low energy ($`\delta <0`$) and the high energy ($`\delta >0`$) sublevels . A change of the sign of $`\delta `$ leads to a change of the sign of $`p_2`$. Values of 0.4-0.8 for the nuclear alignment parameter $`p_2(I)`$ can be obtained. Besides, a further nuclear alignment can be provided by the ”solid effect”, which can be used when the EPR linewidth of the paramagnetic admixture is less than the NQR linewidth . In this case it is possible to obtain a full transmission of the electron alignment to the nuclei.
The theory of dynamic cooling and of the solid effect is described in . A decrease of the spin temperature at zero magnetic field by a factor of 10<sup>2</sup> was observed in with the use of Cr<sup>3+</sup> ions in a rutile crystal (TiO<sub>2</sub>) at 1.7 K and a microwave frequency $`\mathrm{\Delta }_0=43`$ GHz; the method of dynamic cooling was thus realized. That work did not involve nuclear alignment. However, according to our estimate, the alignment of quadrupolar nuclei included in such a crystal at 0.3 K would be about 0.5. The value of the nuclear alignment can be further increased by lowering the temperature and increasing the pumping frequency.
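For orientation, the cooling gain implied by Eq. (7) for the rutile numbers just quoted can be estimated in one line; since $`\omega _L`$ is only specified as lying in the typical 100-300 MHz range, both ends of the range are shown in this illustrative sketch.

```python
# T0/Td ~ Delta0 / (2 * omega_L), Eq. (7), for TiO2:Cr3+ at Delta0 = 43 GHz.
delta0 = 43e9                      # EPR frequency at zero field, Hz
for omega_L in (100e6, 300e6):     # spin-spin parameter, Hz (typical range)
    print(f"omega_L = {omega_L/1e6:.0f} MHz -> T0/Td ~ {delta0/(2*omega_L):.0f}")
# Both values are of order 1e2, consistent with the observed cooling factor.
```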
The requirement on the precision of the crystal orientation relative to a fixed direction is less strict in the case of DNA than in the case of DNP. Some misalignment would result in a lower alignment; however, it would not influence the EPR linewidth, which is constant in the absence of a magnetic field. A misalignment of the order of 20° is therefore acceptable. This makes it possible to produce a sample of large volume using a set of small crystals.
The main problem in the realization of the DNA method is the choice of an appropriate sample, which has to fulfill the following criteria:
1) high content of nuclei of interest,
2) high quadrupolar energy splitting of nuclear sublevels (NQR frequency should be more than 30 MHz),
3) high energy splitting of the sublevels of the paramagnetic admixture (the EPR frequency at zero magnetic field should be more than 30 GHz),
4) the absence of disoriented and nonequivalent sites of the quadrupolar nuclei and paramagnetic ions in the crystal.
The last requirement is essential for achieving an acceptable degree of orientation along a selected axis.
## 4 Summary
From the above consideration it is clear that any nuclei with spin $`I\ge 1`$ having low-lying p-wave resonances are good candidates for TRI tests with aligned nuclei. Note that about twenty nuclei with low energy p-wave resonances were involved in P violation experiments; however, only eight of them have a quadrupolar moment . From our point of view the most appropriate nuclei among them are <sup>35</sup>Cl, <sup>81</sup>Br and <sup>139</sup>La.
It should be pointed out that an aligned target is of interest not only for TRI violation experiments. The alignment of deformed nuclei makes it possible to study deformation effects in nuclear reactions. Up to now, such effects have been investigated only in the cross section of the elastic scattering of neutrons by the aligned nuclei <sup>59</sup>Co and <sup>165</sup>Ho . Angular correlations of secondary radiation also provide valuable information on the spin dependent reaction amplitudes. For example, the energy and spin dependence of the fission amplitudes was recently obtained in measurements of fission fragment angular distributions in the fission of aligned <sup>235</sup>U nuclei by resonance neutrons .
In conclusion we list some materials containing quadrupolar nuclei and doped with paramagnetic admixtures which have a large splitting of levels in the EFG of the crystal (the NQR frequencies for these materials have not been measured): ScO<sub>2</sub> doped with Cr<sup>3+</sup> ions (quadrupolar nucleus <sup>45</sup>Sc, I=7/2, $`\mathrm{\Delta }_0=70`$ GHz ), Ga<sub>2</sub>O<sub>3</sub> doped with Cr<sup>3+</sup> ions (quadrupolar nuclei <sup>69</sup>Ga, <sup>71</sup>Ga, I=3/2, $`\mathrm{\Delta }_0=36`$ GHz), Hf<sub>2</sub>O<sub>3</sub> doped with Fe<sup>3+</sup> ions (quadrupolar nucleus <sup>177</sup>Hf, I=7/2, $`\mathrm{\Delta }_0=60`$ GHz) and LiNbO<sub>3</sub> doped with Cr<sup>3+</sup> ions (quadrupolar nucleus <sup>93</sup>Nb, I=9/2, $`\mathrm{\Delta }_0=30`$ GHz). This list is preliminary and can be significantly extended.
This work was supported by the ISTC (grant N 608) and RFBR (grant N 96-15-96548).
# Theory of Tunneling for Rough Junctions
## INTRODUCTION
The study of the quantum mechanical tunneling of electrons between two metallic electrodes separated by a thin barrier is an important method for investigating condensed matter systems (e.g. see Ref. ). Although the vast majority of tunneling experiments have been carried out on tunnel junctions whose interfaces have a significant roughness, the impressive theoretical literature treating the properties of different types of tunnel barriers and tunneling mechanisms has almost without exception (see however Ref. ) discussed only the case of flat tunnel junctions. This article presents the first detailed theory of tunneling appropriate for tunnel junctions with rough interfaces. The potential significance of such a development is apparent from one of our conclusions, namely that for junctions where the interface roughness fluctuations exceed an electron wavelength in magnitude, the contribution of the diffuse transmission of electrons to the tunneling current dominates the specular transmission that is usually calculated.
A central idea in the flat interface theory of tunneling is that for thick barriers the electrons which dominate the tunneling are those whose momenta are directed close to the forward direction . This “tunneling cone” effect is the basis for attempts to determine the anisotropy of the superconducting energy gap (see page 126 of Ref. ), and has also recently been invoked in the explanation of tunneling phenomena in high temperature superconductors, where the spectrum of quasiparticle excitation energies is highly anisotropic. The investigation carried out below of tunneling directionality in the case of rough interfaces (where flat interface tunneling cone ideas are not applicable) thus has important implications for these studies.
The theory of wave scattering at rough surfaces is a highly developed subject with applications in many areas of physics. Below, some established ideas from these studies, such as the use of certain scattering amplitudes and of ensemble averages over the random variables describing the rough interfaces, are used to derive a formal expression for the tunneling current and to separate it into specular and diffuse components. This expression is then evaluated within the framework of two complementary classical approximation schemes, a small perturbation method valid for roughness fluctuations smaller than the electron wavelength, and a quasiclassical approximation (implemented via the tangent plane method) valid in the opposite limit. The approach of this article is thus quite different from a previous discussion of diffuse scattering in tunneling which has no way to separate the specular from the diffuse scattering, to calculate their relative magnitudes, or to investigate the factors influencing directionality in the case of rough interfaces.
The SUMMARY AND CONCLUSIONS section at the end of the paper gives an overview of the main results.
## FLAT TUNNEL JUNCTION INTERFACES AND THE TUNNELING CONE
Consider an electron tunneling from one metal to another through an insulating barrier. In the prototypical problem the electron is described by the Schrödinger equation
$$-(\hbar ^2/2m)\nabla ^2\psi +V(z)\psi =E\psi .$$
(1)
The potential $`V(z)`$ is shown in Fig. 1. The energy of the electron $`E`$ lies between $`0`$ and $`V_0`$ so that the insulating slab is a classically forbidden region.
The electron wave function in the insulating region has the form
$`\psi _I(𝐑)`$ $`=`$ $`e^{i𝐤𝐫}(Be^{\kappa _{Ik}z}+B^{}e^{-\kappa _{Ik}z}),`$ (3)
$`\kappa _{Ik}=[\kappa _I^2+k^2]^{1/2},\kappa _I=[(2m/\hbar ^2)(V_0-E)]^{1/2}.`$
Throughout this article three-dimensional vectors are denoted by boldface uppercase letters and two-dimensional vectors by boldface lowercase letters. Thus, $`𝐑=(𝐫,z)`$.
The current tunneling across a thick ($`\kappa _It\gg 1`$) barrier in the presence of an applied voltage $`V`$ is
$`J_z={\displaystyle \frac{2e}{h}}`$ $`{\displaystyle \int 𝑑E[f(E)-f(E-eV)]\int \frac{d^2k}{(2\pi )^2}D(E,𝐤)},`$ (4)
$`D(E,𝐤)`$ $`=`$ $`g(𝐤)exp(-2\kappa _It)2\pi (\mathrm{\Delta }k_I)^2P_{\mathrm{\Delta }k_I}(𝐤),`$ (5)
$`P_{\mathrm{\Delta }k_I}(𝐤)`$ $`=`$ $`[2\pi (\mathrm{\Delta }k_I)^2]^{-1}exp\{-k^2/[2(\mathrm{\Delta }k_I)^2]\},`$ (6)
where $`D`$ is the transmission coefficient and $`P_{\mathrm{\Delta }k_I}`$ is a normalized (i.e. $`\int P_{\mathrm{\Delta }k_I}(𝐤)d^2k=1`$) two dimensional Gaussian function of width $`\mathrm{\Delta }k_I=[\kappa _I/(2t)]^{1/2}.`$ The factor $`exp(-2\kappa _It)`$ in Eq. 5 comes from the exponential decay of the wave function in the insulating layer, whereas the Gaussian function reflects the fact that for thick barriers, only electrons which have momenta close to the forward direction contribute to the tunneling current. (This last property is often called the “tunneling cone” effect.) The prefactor $`g`$ is a relatively weakly varying function of momentum and energy of order of magnitude unity, and can usually be neglected in calculations of the tunneling current (e.g. see the discussion on p. 21 of Ref. ). Below, $`g`$ and its analogues will be put equal to unity.
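For a feel for the magnitudes entering Eqs. 4-6, the short sketch below evaluates $`\kappa _I`$, the attenuation factor, and the tunneling cone width; the barrier height and thickness used are illustrative assumptions, not parameters taken from a specific junction.

```python
import numpy as np

hbar = 1.055e-34   # J s
m_e = 9.109e-31    # electron mass, kg
eV = 1.602e-19     # J

V0_minus_E = 2.0 * eV   # assumed barrier height above the electron energy
t = 2e-9                # assumed barrier thickness, m

kappa_I = np.sqrt(2 * m_e * V0_minus_E) / hbar   # decay constant of Eq. (3)
attenuation = np.exp(-2 * kappa_I * t)           # exp(-2 kappa_I t) of Eq. (5)
dk_I = np.sqrt(kappa_I / (2 * t))                # tunneling-cone width Delta k_I

print(f"kappa_I = {kappa_I:.2e} 1/m, exp(-2 kappa_I t) = {attenuation:.1e}")
print(f"Delta k_I / kappa_I = {dk_I / kappa_I:.2f}")
```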
## ROUGH TUNNEL JUNCTION INTERFACES - FORMAL THEORY
Consider an electron which is incident on the tunnel junction from the metal M in Fig. 1. The electron passes into the classically forbidden region at $`z=-t`$ and then, after having been attenuated by a factor $`exp(-\kappa _{Ik}t)`$, it arrives at the $`z=0`$ interface where it is finally transmitted into the metal $`M^{}`$. Because $`\kappa _{Ik}`$ is given by Eq. 3, independently of whether the interfaces are rough or not, it is primarily electrons with small parallel momentum components $`𝐤`$ that arrive at the interface $`z=0`$. If the interface at $`z=0`$ is rough, however, it will impart a random parallel component of momentum to electrons entering the metal $`M^{}`$. Thus the tunneling electrons will sample states in $`M^{}`$ having a wide distribution of parallel momentum components, in spite of the tunneling cone effect in the insulator. The essential problem is to find how the transmitted current from electrons of wave vector $`𝐤`$ in the metal M is distributed over the various wave vectors $`𝐤^{}`$ in metal $`M^{}`$.
The calculations below use the methods described in Ref. , adapted here to the problem of tunneling. Consider first a general description of the transmission and reflection of a plane wave incident from medium 1 onto a rough interface separating media 1 and 2. The wave reflected back into medium 1 is written as a linear combination of waves with all possible parallel momentum components, as is the wave transmitted into the medium 2. The wave functions in media 1 and 2 are thus
$`\psi _1(𝐑)`$ $`=`$ $`{\displaystyle \frac{e^{i𝐊_\mathrm{𝟏}𝐑}}{q_{1k}^{1/2}}}+{\displaystyle \int \frac{d^2k^{}}{(2\pi )^2}S_{11}(𝐤^{},𝐤)\frac{e^{i𝐊_\mathrm{𝟏}^{}𝐑}}{q_{1k^{}}^{1/2}}},`$ (7)
$`\psi _2(𝐑)`$ $`=`$ $`{\displaystyle \int \frac{d^2k^{}}{(2\pi )^2}S_{21}(𝐤^{},𝐤)\frac{e^{i𝐊_\mathrm{𝟐}^{}𝐑}}{q_{2k^{}}^{1/2}}}.`$ (8)
Here $`𝐊_\alpha =(𝐤,q_{\alpha k})`$, $`\alpha =1,2`$ where $`q_{\alpha k}=i\kappa _{Ik}`$ if $`\alpha `$ refers to the insulating region, and $`q_{\alpha k}=(K_M^2-k^2)^{1/2}`$ with $`K_M=[(2m/\hbar ^2)E]^{1/2}`$ if $`\alpha `$ refers to a metallic region. The division by $`q^{1/2}`$ in Eqs. 7 and 8 represents the conventional normalization of the plane waves . The quantities $`S_{11}`$ and $`S_{21}`$ are called scattering amplitudes.
For the tunneling problem, the basic scattering amplitudes are $`S_{I,M}`$ (i.e. $`1\to M`$ and $`2\to I`$ in Eq. 8) and $`S_{M^{},I}`$ describing the transmission of an electron from the metal $`M`$ to the insulator $`I`$, and from the insulator $`I`$ to the metal $`M^{}`$, respectively. An important simplification in the calculation of $`S_{I,M}`$ is that for the thick junctions considered here, only the exponentially decaying waves need be considered in the insulating region, and the exponentially increasing waves can be neglected (e.g. see Ref. ). It can be shown that in this approximation the scattering amplitude $`S_{M^{},M}`$ describing the transmission from metal $`M`$ to metal $`M^{}`$ is given by
$`S_{M^{},M}(𝐤^{},𝐤)=`$ (10)
$`{\displaystyle \int d^2k^{\prime \prime }S_{M^{},I}(𝐤^{},𝐤^{\prime \prime })S_{I,M}(𝐤^{\prime \prime },𝐤)e^{i(q_{Ik^{\prime \prime }}-q_{Mk})t}}.`$
The metal-insulator interfaces are given in terms of the random functions $`h_\alpha (𝐫)`$, $`\alpha =1,2`$ by the equations $`z=-t+h_1(𝐫)`$ and $`z=h_2(𝐫)`$. The ensemble average of each $`h_\alpha (𝐫)`$ is taken to be zero, so that the average interfaces are flat. The boundary conditions satisfied by the wave function are that both the wave function and its normal derivative are continuous on both interfaces.
The ensemble average of Eq. 8 must give an average wave function $`\overline{\psi _2}`$ corresponding to flat interfaces; the average scattering amplitude thus has the form $`\overline{S}_{21}(𝐤^{},𝐤)=\overline{V}_{21}(𝐤)\delta (𝐤^{}-𝐤)`$. The scattering amplitude is now written as the sum of its average value and a fluctuating part:
$$S_{21}(𝐤^{},𝐤)=\overline{V}_{21}(𝐤)\delta (𝐤^{}-𝐤)+\mathrm{\Delta }S_{21}(𝐤^{},𝐤).$$
(11)
Furthermore the correlation function of the scattering amplitude fluctuations can be written in the form
$$\overline{\mathrm{\Delta }S_{21}(𝐤^{\prime \prime },𝐤)\mathrm{\Delta }S_{21}^{\ast }(𝐤^{},𝐤)}=\sigma _{21}(𝐤^{},𝐤)\delta (𝐤^{\prime \prime }-𝐤^{}).$$
(12)
Given that the ensemble average of the current density normal to the junction in the metal $`M^{}`$ can be calculated using the formula $`\overline{J_z}=(\hbar /m)Im\overline{(\psi _{M^{}}^{\ast }d\psi _{M^{}}/dz)}`$, the transmission coefficient $`D(E,𝐤)`$ appearing in Eq. 4 can now be found using Eqs. 8, 11 and 12, with the result that
$$D(E,𝐤)=|\overline{V}_{M^{},M}(𝐤)|^2+\int \sigma _{M^{},M}(𝐤^{},𝐤)d^2k^{},$$
(13)
where the integration is restricted to $`|𝐤^{}|<K_M`$. From Eq. 13 it is clear that the fraction of the incoming current in M with parallel wave vector $`𝐤`$ transmitted into states in $`d^2k^{}`$ is $`\sigma _{M^{},M}(𝐤^{},𝐤)d^2k^{}`$ whereas the fraction transmitted without change in the parallel component of momentum is $`|\overline{V}_{M^{},M}(𝐤)|^2`$. In terms of quantities characterizing the two junction interfaces, one finds
$$|\overline{V}_{M^{},M}(𝐤)|^2=|\overline{V}_{M^{},I}(𝐤)\overline{V}_{I,M}(𝐤)|^2e^{-2\kappa _{Ik}t}$$
(14)
and
$`\sigma _{M^{}M}(𝐤^{},𝐤)`$ $`=`$ $`\sigma _{M^{}I}(𝐤^{},𝐤)|\overline{V}_{I,M}(𝐤)|^2e^{-2\kappa _{Ik}t}`$ (15)
$`+`$ $`|\overline{V}_{M^{},I}(𝐤^{})|^2\sigma _{IM}(𝐤^{},𝐤)e^{-2\kappa _{Ik^{}}t}`$ (16)
$`+`$ $`{\displaystyle \int d^2k^{\prime \prime }\sigma _{M^{}I}(𝐤^{},𝐤^{\prime \prime })e^{-2\kappa _{Ik^{\prime \prime }}t}\sigma _{IM}(𝐤^{\prime \prime },𝐤)}`$ (17)
The diffuse contribution to the tunneling current, Eq. 17, contains contributions in which the transmission is diffuse at one interface and specular at the other, as well as a contribution (the last term) which is diffuse at both interfaces.
## THE SMALL PERTURBATION METHOD
The small perturbation method works when the flat surface problem (i.e. the problem for $`h_\alpha (𝐫)=0`$) is a good first approximation. The corrections are calculated by expanding in powers of $`h_\alpha (𝐫)`$. The quantities $`h_\alpha (𝐫)`$ appear in the calculations because expressions such as Eqs. 7 and 8 are evaluated at the interfaces $`z=-t+h_1(𝐫)`$ and $`z=h_2(𝐫)`$ when applying the boundary conditions. Thus $`h_\alpha (𝐫)`$ appears in expressions such as $`exp[\kappa _{Ik}h_\alpha (𝐫)]`$ and $`exp[iq_{Mk}h_\alpha (𝐫)]`$, and expansions in powers of $`h_\alpha (𝐫)`$ will be expansions in powers of the parameters $`[\kappa _{Ik}h_\alpha (𝐫)]`$ and $`[q_{Mk}h_\alpha (𝐫)]`$. The quantities $`q_{Mk}`$ and $`\kappa _{Ik}`$ are of the order of magnitude of $`2\pi /\lambda `$ where the electron’s wavelength $`\lambda `$ is expected to be comparable in magnitude to the lattice constant. The small perturbation approach will therefore be valid only when the root mean square fluctuations in $`h_\alpha (𝐫)`$ are smaller than a lattice constant, i.e. for atomically flat interfaces. Since the results of the section on flat interfaces are already a good first approximation when the small perturbation method is applicable, no further results of this approximation will be given.
## THE TANGENT PLANE APPROXIMATION
This section evaluates the transmission coefficient $`D(E,𝐤)`$ occurring in Eq. 4 for the tunneling current within the framework of the tangent plane approximation . This approximation works best when the spatial scale of the roughness is larger than the electron wavelength, and is thus complementary to the small perturbation approach outlined in the previous section.
Consider first the general case of the transmission of a plane wave from medium 1 to medium 2 across the random interface $`z=h(𝐫)`$, which is described in terms of Eqs. 7 and 8. The method begins with a mathematical formulation of Huygens’ principle in which the wave function of the electron in the medium 2 is given in terms of its value and the value of its normal derivative on the interface $`z=h(𝐫)`$, namely,
$`\psi _2(𝐑)`$ $`=`$ $`-{\displaystyle \int \psi _2(𝐑^{})\frac{\partial G_0(𝐑^{}-𝐑)}{\partial n^{}}𝑑S^{}}`$ (19)
$`+{\displaystyle \int \frac{\partial \psi _2(𝐑^{})}{\partial n^{}}G_0(𝐑^{}-𝐑)𝑑S^{}}.`$
Here $`S^{}`$ is the surface $`z=h(𝐫)`$, $`𝐑^{}`$ is on this surface, and $`\partial /\partial n^{}`$ is the normal derivative into medium 2. Also, the Green’s function $`G_0(𝐑^{}-𝐑)`$ satisfies the equation $`(\nabla ^2+K_2^2)G_0(𝐑^{}-𝐑)=\delta (𝐑^{}-𝐑)`$ and can be represented as
$$G_0(𝐑)=-\frac{i}{8\pi ^2}\int \frac{exp[i𝐤𝐫+iq_{2k}|z|]}{q_{2k}}d^2k.$$
(20)
The next step is to find the electron wave function, $`\psi _2(𝐑^{})`$, at points $`z=h(𝐫)`$ on the interface. This is done in the tangent plane approximation by considering a given point on the interface, constructing a tangent plane there, and then considering the reflection and transmission of the incoming plane wave (which is taken to be the first term in Eq. 7) at this tangent plane. This gives $`\psi _2(𝐑^{})`$ in terms of the amplitude and phase of the incoming plane wave, and this result can be combined with Eqs. 8, 19 and 20 to yield
$`S_{21}(𝐤^{},𝐤)=`$ (22)
$`{\displaystyle \int exp[i(𝐤^{}-𝐤)𝐫+i(q_{2k^{}}-q_{1k})h(𝐫)]\frac{d^2r}{(2\pi )^2}}.`$
Here, a complicated function of the wave vectors of order unity, and analogous to the prefactor $`g`$ in Eq. 5, has been omitted.
The quantities $`\overline{V}_{M^{},M}`$ and $`\sigma _{M^{},M}`$ necessary for an evaluation of the transmission coefficient $`D(E,𝐤)`$ (Eq. 13) can now be evaluated by combining Eqs. 10, 11, 12 and 22. In carrying out the necessary ensemble averages the function $`h(𝐫)`$ is assumed to be Gaussian, and the theorem $`\overline{exp(h)}=exp(\overline{h^2}/2)`$ (valid for any Gaussian variable having $`\overline{h}=0`$) is used. The results are
$$|\overline{V}_{M^{},M}(𝐤)|^2=F\beta _1\beta _2P_{\mathrm{\Delta }k_I}(𝐤)$$
(23)
and
$`\sigma _{M^{},M}(𝐤^{},𝐤)`$ $`=`$ $`F\beta _1P_{\mathrm{\Delta }k_2}(𝐤^{}-𝐤)P_{\mathrm{\Delta }k_I}(𝐤)`$ (24)
$`+`$ $`F\beta _2P_{\mathrm{\Delta }k_I}(𝐤^{})P_{\mathrm{\Delta }k_1}(𝐤^{}-𝐤)`$ (25)
$`+`$ $`F{\displaystyle \int d^2k^{\prime \prime }P_{\mathrm{\Delta }k_2}(𝐤^{}-𝐤^{\prime \prime })P_{\mathrm{\Delta }k_I}(𝐤^{\prime \prime })P_{\mathrm{\Delta }k_1}(𝐤^{\prime \prime }-𝐤)}.`$ (26)
where
$`F`$ $`=`$ $`2\pi (\mathrm{\Delta }k_I)^2e^{-2\kappa _It}e^{2\kappa _I^2(\overline{h_1^2}+\overline{h_2^2})},`$ (27)
$`\beta _\alpha `$ $`=`$ $`e^{-(\kappa _I^2+K_M^2)\overline{h_\alpha ^2}},`$ (28)
$`(\mathrm{\Delta }k_\alpha )^2`$ $`=`$ $`(\kappa _I^2+K_M^2)\overline{s_\alpha ^2}.`$ (29)
The $`P_{\mathrm{\Delta }k_\alpha }`$’s are the normalized Gaussian functions defined by Eq. 6 with widths $`\mathrm{\Delta }k_\alpha `$ given by Eq. 29, where $`\overline{s_\alpha ^2}=\overline{(\partial h_\alpha /\partial x)^2}=\overline{(\partial h_\alpha /\partial y)^2}`$ is the mean square slope of the roughness.
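To make the scale of these factors concrete, the sketch below evaluates the specular weight $`\beta _\alpha `$ of Eq. 28 for several rms roughness values, taking $`\kappa _I`$ and $`K_M`$ to be of order $`2\pi /\lambda `$ with an assumed electron wavelength of one lattice constant (3 Å); the numbers are purely illustrative.

```python
import numpy as np

k_scale = 2 * np.pi / 3e-10       # assumed kappa_I ~ K_M ~ 2*pi/lambda, in 1/m
for h_rms in (0.3e-10, 1e-10, 3e-10):           # rms roughness, m
    beta = np.exp(-2 * k_scale**2 * h_rms**2)   # kappa_I^2 + K_M^2 ~ 2*k_scale^2
    print(f"h_rms = {h_rms*1e10:3.1f} A -> specular weight beta = {beta:.1e}")
# Prints roughly 4.5e-01, 1.5e-04 and 5e-35: the specular component collapses
# once the roughness fluctuations exceed an electron wavelength.
```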
## SUMMARY AND CONCLUSIONS
The approach to tunneling theory introduced above allows a calculation of the consequences of rough tunnel junction interfaces on the tunneling current and on its directionality. The general formula for the tunneling current is given by Eqs. 4, 13, 14, and 17. Eq. 13 shows the separation of the current into specular and diffuse parts, and the scattering cross section $`\sigma _{M^{}M}`$ gives the directional dependence of the diffuse part. Eqs. 14 and 17 reduce the problem to the determination of the transmission properties of the individual junction interfaces. These expressions give a formally exact theory of tunneling for thick rough tunnel junctions, and can be evaluated using any appropriate approximation scheme.
The small perturbation method treats the roughness as a perturbation of a flat interface model, and shows that flat interface models represent a good first approximation when the amplitude of the roughness fluctuations is less than the electron wavelength (which normally requires atomically flat interfaces).
The results obtained in the tangent plane approximation show that for rough tunnel junction interfaces (i.e. surface height fluctuations significantly greater than the electron wavelength) the transmitted current is nearly totally diffuse. To see this recall that the functions $`P`$ occurring in Eqs. 23 and 26 are normalized Gaussians. Thus the relative weights of the different contributions in Eqs. 23 and 26 to the tunneling current are determined by the factors $`\beta _\alpha `$. This means that for root mean square fluctuations in the height functions $`h(𝐫)`$ much greater than an electron wavelength, i.e. such that the factors $`\beta _\alpha `$ are small, the purely diffuse contribution, namely the last term in Eq. 26, dominates.
Now examine the directionality in the rough junction case where the tunneling current is dominated by the last term in Eq. 26. For sufficiently thick tunnel junctions, the factor $`P_{\mathrm{\Delta }k_I}(𝐤^{\prime \prime })=\delta (𝐤^{\prime \prime })`$ and the integration over $`𝐤^{\prime \prime }`$ is easily carried out. The incoming and outgoing electrons contributing to the tunneling current thus have their parallel momentum components within $`\mathrm{\Delta }k_1`$ and $`\mathrm{\Delta }k_2`$ (see Eq. 29) of zero, respectively. For root mean square (rms) roughness slopes $`s_\alpha `$ which are of order unity or not too much smaller, there is no significant directionality of the tunneling. On the other hand, for rms roughness slopes much less than unity, the smaller the roughness slope, the closer to the forward direction is the momentum of the electrons contributing to the tunneling current, both for incoming and outgoing electrons.
It is of interest to examine the physical reasons for the dominance of the diffuse component of the tunneling current in the case of rough junctions. As for flat junctions, the tunneling current is reduced by the factor $`exp(-2\kappa _It)`$ depending exponentially on the average thickness $`t`$ of the junction (see Eq. 27). This effect is reduced by the interface height fluctuations, which give regions where the potential barrier has a smaller than average thickness; hence the factor $`exp(2\kappa _I^2(\overline{h_1^2}+\overline{h_2^2}))`$ in Eq. 27. The reduction of the attenuation due to barrier thickness fluctuations is not as great for the specular component of the transmission, as indicated by the factor $`exp(-\kappa _I^2\overline{h_\alpha ^2})`$ in $`\beta _\alpha `$. The other factor contributing to $`\beta _\alpha `$, $`exp(-K_M^2\overline{h_\alpha ^2})`$, gives the reduction in the specularly transmitted component of transmission due to destructive interference of waves with the different phase lags due to having traveled different distances in the insulator. These two factors combine to make the specular component of the tunneling negligible relative to the diffuse component for sufficiently rough interfaces. Clearly, tunneling theory must account for the roughness of tunnel junctions in order to correctly describe the dominant diffuse contribution to the tunneling current.
This article has given a formally exact expression for the tunneling current valid for thick, rough tunnel junctions, has shown that for rough tunnel junctions the diffuse component of the tunneling current dominates the specular component, and has also shown that even when the tunneling current is entirely diffuse, a tunneling cone effect can exist if the root mean square roughness slope is sufficiently small.
## ACKNOWLEDGEMENTS
I wish to thank M. Aprili, J.P. Carbotte, and J.R. Kirtley for stimulating discussions, and the Natural Sciences and Engineering Research Council of Canada for support.
# 1 Introduction
## 1 Introduction
It is now widely accepted that the superconductivity of cuprates is closely related to their unusual magnetic properties, and it is increasingly clear that magnetic pairing is the most realistic mechanism of cuprate superconductivity. However, the mechanism of pairing as well as other unusual properties are far from completely understood. The problem has been attacked along several directions. First we have to mention the empirical or semi-empirical approach, which allows one to relate different characteristics measured experimentally. This approach is to a large extent based on the Hubbard model. For a review see article . In the low energy limit the Hubbard model can be reduced to the $`t-J`$ model. Another approach to cuprates is based on numerical studies of the $`t-J`$ model (see review ). Our studies are also based on this model. We used the ordered Néel state at zero doping as a starting point to develop the spin-wave theory of pairing . The method we used was not fully satisfactory, since it violated spin-rotational symmetry; nevertheless, it allowed us to calculate from first principles all of the most important properties, including the critical temperature, the spin-wave pseudogap and the low energy spin triplet excitations .
A sharp collective mode with very low energy has been revealed in YBCO by spin polarized inelastic neutron scattering . A number of theoretical explanations have been suggested for this effect , all of them based on the idea that the system is close to an AF instability. However, all known explanations use some uncontrolled approximations and assumptions.
In the present work we investigate the close-to-half-filling regime of the 2D $`t-J`$ model, where it can be solved analytically without any uncontrolled approximations. This can be done in the region of parameters where the long-range AF order is preserved under doping. This is the regime where non Fermi liquid behavior can be studied in detail. We analyze the superconducting pairing in this regime and consider the spin triplet collective excitation. It is demonstrated that close to the point of AF instability the energy of this excitation is very small. The excitation exists only at very small momenta. The idea of this work is somewhat similar to that of our previous paper ; however, here we investigate a different regime.
## 2 Hamiltonian and single hole dispersion
Let us consider a $`tJJ^{\prime \prime }V`$ model defined by the Hamiltonian
$$H=-t\underset{ij\sigma }{}c_{i\sigma }^{}c_{j\sigma }-t^{\prime \prime }\underset{ij_2\sigma }{}c_{i\sigma }^{}c_{j_2\sigma }+\underset{ij}{}\left[J\left(𝐒_i𝐒_j-\frac{1}{4}n_in_j\right)+Vn_in_j\right].$$
(1)
$`c_{i\sigma }^{}`$ is the creation operator of an electron with spin $`\sigma `$ $`(\sigma =\uparrow ,\downarrow )`$ at site $`i`$ of the two-dimensional square lattice. The $`c_{i\sigma }^{}`$ operators act in the Hilbert space with no double electron occupancy. Here $`ij`$ denotes nearest neighbor sites, and $`ij_2`$ denotes next-next-nearest-neighbor sites. The spin operator is $`𝐒_i=\frac{1}{2}_{\alpha ,\beta }c_{i\alpha }^{}\sigma _{\alpha \beta }c_{i\beta }`$, and the number density operator is $`n_i=_\sigma c_{i\sigma }^{}c_{i\sigma }`$. In addition to the minimal $`t-J`$ model (see Ref. ) we have introduced the next-next-nearest-neighbor hopping $`t^{\prime \prime }`$ and a Coulomb repulsion $`V`$ at nearest sites. Note that we do not introduce the next-nearest-neighbor (diagonal) hopping $`t^{}`$ because we do not need it for the purposes of this study.
In the paper we analyzed the model defined by the Hamiltonian (1) in the limit $`t,t^{\prime \prime }\gg J`$. In the present work we consider the limit
$$t,J\ll t^{\prime \prime }.$$
(2)
It is well known that the $`t-J`$ model at half filling describes a Mott insulator. It is equivalent to the 2D Heisenberg model, and the ground state of the model has long range AF order. At small doping the holes are concentrated near the points $`(\pm \pi /2,\pm \pi /2)`$ where the single hole dispersion has its minima. In the leading approximation the dispersion is of the form (we take the energy at the minimum as the reference point)
$`ϵ_k=\beta \left(\gamma _𝐤^2+(\gamma _𝐤^{})^2\right),`$ (3)
$`\beta \approx 0.8\times 8t^{\prime \prime }=6.4t^{\prime \prime },`$
$`\gamma _𝐤=\frac{1}{2}(\mathrm{cos}k_x+\mathrm{cos}k_y)`$, $`\gamma _𝐤^{}=\frac{1}{2}(\mathrm{cos}k_x-\mathrm{cos}k_y)`$. The calculation of the dispersion (3) is straightforward because it is due to hopping within the same magnetic sublattice. The coefficient $`0.8`$ appears because of spin quantum fluctuations: $`0.8=1-0.2`$, where $`0.2`$ is the spin flip probability in the Heisenberg model. Along with quasimomentum, the hole in the AF background has an additional quantum number: pseudospin. We denote the hole creation operator by $`h_{𝐤\sigma }^{}`$, where $`\sigma =\pm 1/2`$ is the pseudospin. The relation between pseudospin and usual spin is discussed in the paper .
We will consider the case of very small doping, $`\delta \ll 1`$, with respect to half filling (the total filling is $`1-\delta `$). In this case all holes are concentrated in small pockets around the points $`𝐤_\mathrm{𝟎}=(\pm \pi /2,\pm \pi /2)`$. The single hole dispersion (3) can be expanded near each of these points:
$$ϵ_k=\frac{1}{2}\beta 𝐩^2,$$
(4)
where $`𝐩=(p_1,p_2)`$ is the deviation from the center of the pocket, $`𝐩=𝐤-𝐤_\mathrm{𝟎}`$; $`p_1`$ is orthogonal to the face of the magnetic Brillouin zone, and $`p_2`$ is parallel to the face (see Fig. 1). The Fermi energy and Fermi momentum of the holes are $`ϵ_F\approx \frac{1}{2}\pi \beta \delta `$ and $`p_F\approx (\pi \delta )^{1/2}`$.
## 3 Hole-spin-wave interaction and instability of the Néel state
Spin-wave excitations on an AF background are usual spin waves with dispersion $`\omega _𝐪=2J\sqrt{1-\gamma _𝐪^2}\approx \sqrt{2}Jq`$ at $`q\ll 1`$, see Ref. for a review. The hole-spin-wave interaction is well known (see, e.g. Ref.)
$`H_{h,sw}={\displaystyle \underset{𝐤,𝐪}{}}g_{𝐤,𝐪}\left(h_{𝐤+𝐪,\downarrow }^{}h_{𝐤\uparrow }\alpha _𝐪+h_{𝐤+𝐪,\uparrow }^{}h_{𝐤\downarrow }\beta _𝐪+\text{H.c.}\right),`$ (5)
$`g_{𝐤,𝐪}=4t\sqrt{2}(\gamma _𝐤U_𝐪+\gamma _{𝐤+𝐪}V_𝐪),`$
where $`h_{𝐤\sigma }^{}=c_{𝐤,\sigma }`$ is the hole creation operator with pseudospin $`\sigma `$, $`\alpha _𝐪^{}`$ and $`\beta _𝐪^{}`$ are the spin wave creation operators for $`S_z=\pm 1`$, and $`U_𝐪=\sqrt{\frac{J}{\omega _𝐪}+\frac{1}{2}}`$ and
$`V_𝐪=sign(\gamma _𝐪)\sqrt{\frac{J}{\omega _𝐪}-\frac{1}{2}}`$ are parameters of the Bogoliubov transformation diagonalizing the spin-wave Hamiltonian, see Ref.. Virtual spin wave emission gives a correction to the hole dispersion, see Fig.2. However this correction is small, $`\delta ϵ\sim t^2/t^{\prime \prime }`$, and therefore can be neglected compared with (3).
To describe renormalization of the spin wave under doping, it is convenient to introduce the set of Green’s functions
$`D_{\alpha \alpha }(t,𝐪)`$ $`=`$ $`-i\langle T[\alpha _𝐪(t)\alpha _𝐪^{}(0)]\rangle ,`$ (6)
$`D_{\alpha \beta }(t,𝐪)`$ $`=`$ $`-i\langle T[\alpha _𝐪(t)\beta _{-𝐪}(0)]\rangle ,`$
$`D_{\beta \alpha }(t,𝐪)`$ $`=`$ $`-i\langle T[\beta _{-𝐪}^{}(t)\alpha _𝐪^{}(0)]\rangle ,`$
$`D_{\beta \beta }(t,𝐪)`$ $`=`$ $`-i\langle T[\beta _{-𝐪}^{}(t)\beta _{-𝐪}(0)]\rangle .`$
In the present work we consider only the long-range dynamics, $`q\sim k\sim p_F\ll 1`$. In this limit all possible polarization operators coincide, $`P_{\alpha \alpha }(\omega ,𝐪)=P_{\alpha \beta }(\omega ,𝐪)=P_{\beta \alpha }(\omega ,𝐪)=P_{\beta \beta }(\omega ,𝐪)=\mathrm{\Pi }(\omega ,𝐪)`$, where $`\mathrm{\Pi }(\omega ,𝐪)`$ is given by the diagram presented in Fig. 3.
For stability of the system the condition (Stoner criterion)
$$\omega _q+2\mathrm{\Pi }(0,𝐪)>0$$
(7)
must be fulfilled . Otherwise the Green’s functions (6) would possess poles with imaginary $`\omega `$. Considering the holes as a “normal Fermi liquid” one can easily calculate the polarization operator at $`q\gg p_F`$: $`\mathrm{\Pi }(0,𝐪)\approx -4t^2\sqrt{2}q/\pi \beta `$, Ref. . The relatively weak pairing, which we consider below, does not influence this result. Then the condition of stability can be rewritten as
$$\beta =6.4t^{\prime \prime }>\frac{8t^2}{\pi J}.$$
(8)
To provide stability of the AF order we have to choose
$$t^{\prime \prime }>t_c^{\prime \prime }\approx 0.4t^2/J.$$
(9)
If $`t<J`$ or $`t\sim J`$ the stability condition is automatically fulfilled, since in the present work we consider $`t,J\ll t^{\prime \prime }`$. However, at $`t\gg J`$ one can violate the condition (8). In this case we will assume that $`t^{\prime \prime }>t_c^{\prime \prime }`$. If $`t^{\prime \prime }`$ is close to $`t_c^{\prime \prime }`$ it is convenient to introduce the parameter $`\eta `$,
$$\eta ^2=1-\frac{8t^2}{\pi J\beta }=(t^{\prime \prime }-t_c^{\prime \prime })/t^{\prime \prime }$$
(10)
as a measure of this closeness. The criterion (7) is proportional to this parameter.
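A minimal numerical sketch of the stability criterion, Eqs. (8)-(10), follows; the values of $`t`$, $`J`$ and $`t^{\prime \prime }`$ below are illustrative choices consistent with the limit $`t,J\ll t^{\prime \prime }`$, not parameters taken from a specific material.

```python
import math

def eta(t, J, tpp):
    """Closeness parameter of Eq. (10); real only above the critical t''."""
    beta = 6.4 * tpp                       # band parameter of Eq. (3)
    return math.sqrt(1.0 - 8 * t**2 / (math.pi * J * beta))

t, J = 1.0, 0.3                            # energies in units of t
t_c = 0.4 * t**2 / J                       # critical hopping, Eq. (9)
for tpp in (1.5 * t_c, 2 * t_c, 5 * t_c):  # t'' above the instability point
    print(f"t'' = {tpp:.2f} t -> eta = {eta(t, J, tpp):.2f}")
```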
## 4 Spin-singlet p-wave pairing caused by the short-range attraction
It is not convenient to consider the superconducting pairing in the magnetic Brillouin zone with four half-pockets (see Fig. 1). Because of this we translate the picture to the shifted zone with two whole pockets, Fig. 4. We stress that this is a question of convenience only; the representations are absolutely equivalent because of the translational invariance.
There are two mechanisms for the superconducting pairing: short-range attraction and long-range attraction. First we consider the short-range effect. The attraction between holes at nearest sites (short-range) is due to the reduction in the number of missing AF links. The value of this attraction follows immediately from Eq. (1):
$$U=J\left(\langle 𝐒_𝐢𝐒_𝐣\rangle -\frac{1}{4}\right)+V\approx -0.58J+V.$$
(11)
Strong enough Coulomb repulsion ($`V>0.58J`$) kills this mechanism. In the momentum representation the interaction (11) can be rewritten as
$$H_U=8U\underset{𝐤_\mathrm{𝟏},𝐤_\mathrm{𝟐},𝐤_\mathrm{𝟑},𝐤_\mathrm{𝟒}}{}\gamma _{𝐤_\mathrm{𝟏}-𝐤_\mathrm{𝟑}}h_{𝐤_\mathrm{𝟑}}^{}h_{𝐤_\mathrm{𝟒}}^{}h_{𝐤_\mathrm{𝟐}}h_{𝐤_\mathrm{𝟏}}\delta _{𝐤_\mathrm{𝟏}+𝐤_\mathrm{𝟐},𝐤_\mathrm{𝟑}+𝐤_\mathrm{𝟒}}.$$
(12)
For scattering inside a hole pocket the interaction is practically momentum independent because $`𝐤_\mathrm{𝟏}\approx 𝐤_\mathrm{𝟐}\approx 𝐤_\mathrm{𝟑}\approx 𝐤_\mathrm{𝟒}\approx (\pi /2,\pi /2)`$, and hence $`\gamma _{𝐤_\mathrm{𝟏}-𝐤_\mathrm{𝟑}}\approx 1`$. Such an interaction gives “s-wave pairing” with a gap without nodes at the Fermi surface. The value of the superconducting gap can easily be found using the results of papers . This gives
$$\mathrm{\Delta }=Ct^{\prime \prime }\sqrt{\delta }e^{-\pi \beta /4|U|}=Ct^{\prime \prime }\sqrt{\delta }e^{-5t^{\prime \prime }/(0.58J-V)},$$
(13)
where $`C\sim 10`$ is some dimensionless constant. The solution is valid only if $`V<0.58J`$; for stronger Coulomb repulsion the pairing disappears. It is important to stress the peculiar symmetry properties of the above pairing. This peculiarity comes from the presence of long-range AF order. As we already mentioned, the gap has no nodes at the Fermi surface, and from this point of view it is “s-wave pairing”. However, we recall that we have considered the pairing in the shifted zone, and in this zone it is not easy to classify the states by parity. For a well defined parity we have to return to the magnetic Brillouin zone, so we have to translate the outside parts of the Fermi surface by the inverse vector of the magnetic lattice $`𝐆=(\pi ,\pi )`$, see Fig. 5.
The point is that under such a translation the superconducting gap changes sign, as shown in Fig. 5. This property follows from the fact that the coefficient in the interaction (12) changes sign under the translation: $`\gamma _{𝐤_\mathrm{𝟏}-𝐤_\mathrm{𝟑}+𝐆}=-\gamma _{𝐤_\mathrm{𝟏}-𝐤_\mathrm{𝟑}}`$ (for details see paper ).
Thus in reality we have negative-parity pairing, which is usually called p-wave. The above consideration was relevant to the hole pocket centered at $`(\pi /2,\pi /2)`$. A similar construction is valid for the other pocket, centered at $`(\pi /2,-\pi /2)`$. The existence of two solutions corresponds to the double degeneracy of the E-representation of the $`C_{4v}`$ group. Taking linear combinations of the single-pocket solutions we find two degenerate solutions for the entire Brillouin zone with lines of nodes $`𝐤_𝐱=0`$ or $`𝐤_𝐲=0`$ well outside the Fermi surface. We would like to stress that we have considered spin-singlet (more exactly pseudospin-singlet) pairing! This situation is very different from the usual one, in which p-wave pairing implies a spin triplet. We repeat that the peculiarity is due to the presence of long-range AF order.
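For orientation, the magnitude of the gap (13) can be evaluated directly; in the sketch below the prefactor $`C=10`$ and the values of $`t^{\prime \prime }`$ and $`\delta `$ are illustrative assumptions:

```python
# Illustrative evaluation of the short-range ("p-wave") gap, eq. (13).
import numpy as np

J, tpp, delta, C = 1.0, 2.0, 0.1, 10.0       # assumed values, units of J

def gap_p(V):
    """Gap of eq. (13); the pairing disappears once V exceeds 0.58 J."""
    if V >= 0.58 * J:
        return 0.0
    return C * tpp * np.sqrt(delta) * np.exp(-5.0 * tpp / (0.58 * J - V))

for V in (0.0, 0.3 * J, 0.5 * J):
    print(f"V = {V:.1f} J:  Delta = {gap_p(V):.3e} J")
```

The result is exponentially sensitive to $`t^{\prime \prime }/(0.58J-V)`$, so the absolute numbers should not be taken literally.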
## 5 D- and g-wave pairings caused by the long-range attraction
The long-range attraction comes from the spin-wave exchange shown in Fig. 6. In this exchange the typical spin-wave momenta are $`q\sim p_F\sim \sqrt{\delta }`$, and hence the typical distances are $`r\sim 1/q\sim 1/\sqrt{\delta }\gg 1`$.
Similarly to the previous section, it is convenient to consider first the pairing inside a hole pocket, say the one centered at $`(\pi /2,\pi /2)`$, see Fig. 4. This pairing has been considered in detail in our previous work . It has been shown that for the case of “isotropic” dispersion (4) the only solution is the one with a single node line in the pocket. The gap at the Fermi surface ($`ϵ_F=\frac{1}{2}\pi \beta \delta `$) is of the form
$`\mathrm{\Delta }(\varphi )=\mathrm{\Delta }_0\mathrm{sin}\varphi ,`$ (14)
$`\mathrm{\Delta }_0=Cϵ_Fe^{-\pi J\beta /2t^2}\approx 10Ct^{\prime \prime }\delta e^{-10Jt^{\prime \prime }/t^2},`$
where $`\mathrm{sin}\varphi =p_2/p_F`$, $`p_F^2=p_1^2+p_2^2`$, and $`C\sim 1`$ is some constant.
The eqs. (14) describe pairing within a single pocket of the shifted zone. The translation of this solution to the magnetic Brillouin zone is shown in Fig. 7. This is absolutely identical to what we did in the previous section (change of sign under the translation).
There are effectively two pockets in the Brillouin zone, see Fig. 4. Taking symmetric and antisymmetric combinations between the pockets, we get the d- and g-wave pairings respectively. The symmetries of the corresponding superconducting gaps are shown in Fig. 8. It is clear that the d-wave belongs to the $`B_1`$ representation of the $`C_{4v}`$ group and the g-wave belongs to the $`A_2`$ representation.
Both solutions originate from (14), therefore they are close in energy. Nevertheless the constant $`C`$ in eq. (14) is smaller for the g-wave. This is the price for the additional lines of nodes ($`𝐤_𝐱=0`$ and $`𝐤_𝐲=0`$). The above consideration did not include the short-range interaction (12). This is absolutely correct for g-wave pairing, which is not sensitive to the interaction (12) at all. However the d-wave is sensitive. Therefore at $`V<0.58J`$ the d-wave pairing is enhanced because of (12), while, on the contrary, at larger Coulomb repulsion $`V>0.58J`$ the d-wave is suppressed and can even disappear. To avoid misunderstanding we stress that in the limit under consideration ($`t^{\prime \prime }\gg t,J`$) the short-range interaction (12) is too weak (even at $`V=0`$) to produce d-wave pairing without spin-wave exchange. However the short-range interaction influences the dimensionless constant $`C`$ (see eq. (14)) which arises in the spin-wave exchange mechanism.
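The competition described above can be made explicit by evaluating eqs. (13) and (14) side by side. In the sketch below the prefactors ($`C=10`$ in (13) and $`C=1`$ in (14)) and the parameter values are rough assumptions; only the trend with $`t`$ is meaningful:

```python
# p-wave gap, eq. (13), versus d-/g-wave gap, eq. (14), as t varies.
import numpy as np

J, tpp, delta, V = 1.0, 2.0, 0.1, 0.0        # assumed values, units of J

gap_p = 10.0 * tpp * np.sqrt(delta) * np.exp(-5.0 * tpp / (0.58 * J - V))

def gap_dg(t):
    return 10.0 * tpp * delta * np.exp(-10.0 * J * tpp / t**2)

for t in (0.5, 1.0, 1.5, 2.0):
    print(f"t = {t:.1f} J:  Delta_p = {gap_p:.2e},  Delta_dg = {gap_dg(t):.2e}")
```

Note that (13) does not depend on $`t`$ while (14) grows rapidly with it; with these assumed numbers the two gaps cross near $`t\approx J`$, consistent with the critical value $`t_c\sim J`$ quoted below for $`V=0`$.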
## 6 The phase diagram
The phase diagram of the model under consideration is given in Fig. 9. To be specific we present the case of not too strong Coulomb repulsion at the nearest sites: $`V<0.58J`$. At stronger $`V`$ the p-wave superconductivity disappears, see eq. (13). Comparing eqs. (13) and (14) we see that the p-wave pairing is stronger at $`t<t_c`$, while at $`t>t_c`$ the d-g-wave pairing dominates. At $`V=0`$ the critical value is $`t_c\sim J`$. In the p-wave phase the gap, as well as the critical temperature, is proportional to the square root of the hole concentration: $`\mathrm{\Delta }\sim T_c\propto \sqrt{\delta }`$. But in the d-g-wave phase they are proportional to the first power of the concentration: $`\mathrm{\Delta }\sim T_c\propto \delta `$.
According to eq. (9), at $`t<t_{cN}\approx 1.6\sqrt{t^{\prime \prime }J}`$ the long-range AF order at zero temperature is preserved under doping, so we have coexistence of the superconductivity and the Neel order. At $`t>t_{cN}`$ the Neel order is destroyed by the doping and one gets a transition into the quantum disordered phase. However, as long as the magnetic correlation length is larger than the superconducting correlation length, the mechanism of pairing remains valid and one still has the d-g-wave superconductor. At a temperature higher than the critical one the system behaves as a metal with very strong scattering of mobile holes on spin-wave excitations. Following the tradition we call this state the “strange metal”.
## 7 The spin-wave collective excitation
We will see that the spin-wave collective excitation has nontrivial behaviour in the vicinity of the quantum phase transition from the Neel to the disordered phase. Therefore we study this excitation only in the d-g-wave superconducting phase at $`T=0`$. The energy spectrum and Bogoliubov parameters are given by the usual BCS formulas
$`E_𝐤`$ $`=`$ $`\sqrt{(ϵ_𝐤-ϵ_F)^2+\mathrm{\Delta }_𝐤^2},`$ (15)
$`u_𝐤^2,v_𝐤^2`$ $`=`$ $`{\displaystyle \frac{1}{2}}\left(1\pm {\displaystyle \frac{ϵ_𝐤-ϵ_F}{E_𝐤}}\right)`$
with the gap $`\mathrm{\Delta }_𝐤`$ from eq. (14). The spin-wave polarization operator due to mobile holes is given by the diagram in Fig. 3 plus a similar diagram with anomalous fermionic Green’s functions. A straightforward calculation gives (see e.g. Ref. )
$$\mathrm{\Pi }(\omega ,𝐪)=\underset{𝐤,𝐤_\mathrm{𝟎}}{}g_{𝐤_\mathrm{𝟎}𝐪}^2\frac{2(E_𝐤+E_{𝐤+𝐪})}{\omega ^2(E_𝐤+E_{𝐤+𝐪})^2}\left(u_𝐤^2v_{𝐤+𝐪}^2+u_𝐤v_𝐤u_{𝐤+𝐪}v_{𝐤+𝐪}\right).$$
(16)
This equation includes summation over the pockets $`𝐤_\mathrm{𝟎}=(\pi /2,\pm \pi /2)`$. In these pockets the vertex (5) is $`g_{𝐤_\mathrm{𝟎},𝐪}\approx 2^{5/4}t(q_x\pm q_y)/\sqrt{q}`$. Let us consider the case of very small momenta and frequencies: $`v_Fq<\mathrm{\Delta }_0`$ and $`\omega <\mathrm{\Delta }_0`$. In this limit one can put $`q=0`$ in eq. (16) everywhere except at the vertex, and therefore the polarization operator can be evaluated analytically
$$\mathrm{\Pi }(\omega ,𝐪)=-\frac{4t^2\omega _𝐪}{\pi J\beta }\left(1+i\frac{\pi \omega }{8\mathrm{\Delta }_0}\right)$$
(17)
Note that the imaginary part is nonzero even at $`\omega <2\mathrm{\Delta }_0`$ because the gap (14) has a line of nodes. Any of the Green’s functions (6) has the denominator $`\omega ^2-\omega _𝐪^2-2\omega _𝐪\mathrm{\Pi }(\omega ,𝐪)`$, see e.g. Refs. . The zero of this denominator gives the energy and width of the spin-triplet collective excitation. Using eqs. (17) and (10) we find
$`o_𝐪`$ $`=`$ $`\eta \omega _𝐪,`$ (18)
$`\mathrm{\Gamma }_𝐪`$ $`=`$ $`{\displaystyle \frac{\pi }{8}}{\displaystyle \frac{1\eta ^2}{\eta }}{\displaystyle \frac{\omega _𝐪}{\mathrm{\Delta }_0}}o_𝐪.`$
In essence this is the renormalized spin wave. Far from the point of AF instability the parameter $`\eta \sim 1`$, therefore the renormalization is relatively weak and the decay width is small. The situation is different when approaching the point of instability $`t\to t_{cN}\approx 1.6\sqrt{t^{\prime \prime }J}`$. Here, according to eq. (10), $`\eta \to 0`$ and therefore the energy of the renormalized spin wave is much smaller than the energy of the bare spin wave, $`o_𝐪/\omega _𝐪=\eta \ll 1`$. Moreover this collective excitation exists as a narrow peak only at very small $`q`$, when
$$\pi \omega _𝐪/(8\eta \mathrm{\Delta }_0)<1.$$
(19)
At higher $`q`$ the width is larger than the frequency because of decay into particle-hole excitations. We stress that the closer we are to the point of instability, the smaller is $`\eta `$, and therefore the smaller is the region of $`q`$ where the excitation exists.
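This behaviour is easy to tabulate. In the sketch below $`\mathrm{\Delta }_0`$ is an assumed illustrative value; eqs. (18) and (19) then give the renormalized energy, the width, and the window of $`q`$ in which the mode stays narrow:

```python
# Renormalized spin wave: energy, width and narrow-mode window,
# following eqs. (18)-(19).  Delta_0 is an assumed value (units of J).
import numpy as np

Delta0 = 0.05

for eta in (0.8, 0.3, 0.1):
    w_max = 8.0 * eta * Delta0 / np.pi     # eq. (19): narrow only below this
    omega_q = 0.5 * w_max                  # a bare energy inside the window
    o_q = eta * omega_q                    # renormalized energy, eq. (18)
    Gamma = (np.pi / 8.0) * (1 - eta**2) / eta * (omega_q / Delta0) * o_q
    print(f"eta = {eta}: window omega_q < {w_max:.4f}, "
          f"o_q = {o_q:.4f}, Gamma = {Gamma:.4f}")
```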
## 8 Conclusions
We have considered the $`t`$–$`t^{\prime \prime }`$–$`J`$–$`V`$ model close to half filling at $`t^{\prime \prime }\gg t,J`$. We restrict our consideration to the case of small doping $`\delta \ll 1`$. It is demonstrated that at $`t<t_{cN}\approx 1.6\sqrt{t^{\prime \prime }J}`$ the Neel order is preserved under the doping, and at $`t>t_{cN}`$ the order is destroyed and the system undergoes a transition to the quantum spin-disordered phase, see the phase diagram in Fig. 9.
If the hole-hole Coulomb repulsion at nearest sites is not too strong ($`V<0.58J`$), then at small $`t`$ the model has pseudospin-singlet p-wave superconductivity. As $`t`$ increases, at some point $`t_c`$ (at $`V=0`$ the critical point is $`t_c\sim J`$) the system undergoes a phase transition from the p-wave to the d-g-wave superconductor, see Fig. 9. Which state is realized (d- or g-wave) crucially depends on the Coulomb repulsion $`V`$. If $`V`$ is small the d-wave is preferable, while at larger $`V`$ the g-wave superconductivity is realized.
In the Neel state we found the collective spin triplet excitation (renormalized spin wave). In the vicinity of the quantum phase transition to the spin disordered state the excitation exists as a narrow mode only at very small momenta and its energy is substantially below the energy of the bare spin wave.
# The fate of LSB galaxies in clusters and the origin of the diffuse intra-cluster light
## 1. Introduction
Clusters of galaxies provide a unique environment wherein the galaxy population has been observed to evolve rapidly over the past few billion years (Butcher & Oemler 1978, Dressler et al. 1998). At a redshift $`z>0.4`$, clusters are dominated by spiral galaxies that are predominantly faint irregular or Sc-Sd types. Some of these spirals have disturbed morphologies; many have high rates of star formation (Dressler et al. 1994a). Conversely, nearby clusters are almost completely dominated by spheroidal (dSph), lenticular (S0) and elliptical galaxies (Binggeli et al. 1987, 1988, Thompson & Gregory 1993). Observations suggest that the elliptical galaxy population was already in place at much higher redshifts, at which time the S0 population in clusters was deficient compared to nearby clusters (Couch et al. 1998, Dressler et al. 1998). This evolution of the morphology-density relation appears to be driven by an increase in the S0 fraction with time and a corresponding decrease in the luminous spiral population.
Low surface brightness (LSB) galaxies appear to avoid regions of high galaxy density (Bothun et al. 1993, Mo et al. 1994). This is somewhat puzzling since recent work by Mihos et al. (1997) demonstrated that LSB disk galaxies are actually more stable to close tidal encounters than HSB disk galaxies. In fact, LSB galaxies have lower disk mass surface densities and higher mass-to-light ratios; therefore their disks are less susceptible to internal global instabilities, such as bar formation. However, in a galaxy cluster, encounters occur frequently and very rapidly, on a shorter timescale than that investigated by Mihos et al., and the magnitudes of the tidal shocks are potentially very large.
Several physical mechanisms have been proposed that can strongly affect the morphological evolution of disks: ram-pressure stripping (Gunn & Gott 1972), galaxy merging (Icke 1985, Lavery & Henry 1988, 1994) and galaxy harassment (Moore et al. 1996a, 1998). The importance of these mechanisms varies with environment: mergers are frequent in groups but rare in clusters (Ghigna et al. 1998), while ram-pressure removal of gas is inevitable in rich clusters but will not alter disk morphology (Abadi et al. 1999). The morphological transformation of the dwarf galaxy populations ($`M_b>-16`$) in clusters since $`z=0.4`$ can be explained by rapid gravitational encounters between galaxies and accreting substructure: galaxy harassment. The impulsive and resonant tidal heating from rapid fly-by interactions causes a transformation from disks to spheroidals.
The numerical simulations of Moore et al. focussed on the evolution of fainter Sc-Sd spirals in static cluster-like potentials and their transition into dSph’s. In this work we shall examine the role of gravitational interactions in driving the evolution of luminous spirals in dense environments. We will use more realistic simulations that follow the formation and growth of a large cluster that is selected from a cosmological simulation of a closed CDM universe. The parameter space for the cluster model is fairly well constrained once we have adopted hierarchical structure formation. The structure and substructure of virialised clusters is nearly independent of the shape and normalisation of the power spectrum. Clusters that collapse in low Omega universes form earlier, thus their galaxies have undergone more interactions. The cluster that we follow virialises at $`z\approx 0.3`$, leaving about 4 Gyrs for the cluster galaxies to evolve.
The parameter space for the model spirals is much larger. Mihos et al. examined the effects of a single encounter at a fixed number of disk scale lengths, whilst varying the disk surface brightness and keeping other properties fixed. The key parameter that determines whether or not dark matter halos survive within a cluster N-body simulation is the core radius of the substructure, which is typically dictated by the softening length (Moore, Katz & Lake 1996b). We suspect that the “softness” of the dark matter potentials may also be the key factor that governs whether or not a given disk galaxy will survive within a real cluster.
## 2. The model galaxies
We use the technique developed by Hernquist (1993) to construct equilibrium spiral galaxies with disk, bulge and halo components that represent “standard” HSB and LSB disk galaxies. In each model the disk mass is $`4.0\times 10^{10}M_{\odot }`$ and the rotation curves both peak at $`200\mathrm{km}\mathrm{s}^{-1}`$. They are a little less massive than “$`L_{\ast }`$” galaxies, the characteristic luminosity of the break in the galaxy luminosity function, and would have absolute magnitudes $`-17.8`$ for a mass-to-light ratio of 5. The “HSB” spiral has an exponential disk scale length $`r_d=3.0`$ kpc and a bulge with a mass of one third of the total disk mass. The “LSB” disk scale length is 10 kpc and it has no bulge. The scale height, $`r_z`$, of each disk is $`0.1r_d`$ and they are constructed with a Toomre $`Q`$ parameter of 1.5. Each galaxy has a dark halo modeled as a truncated isothermal sphere with core radius set equal to the disk scale length. This scaling ensures that each galaxy lies at the same point on the Tully-Fisher relation, yet the galaxies have different internal mass distributions (Zwaan et al. 1995).
Figure 1. The curves show the contributions from stars and dark matter to the total rotational velocity of the disk within (a) the HSB galaxy and (b) the LSB galaxy.
Figure 1 shows the contribution to the rotation velocity of the disks from each component. Note that the bulge component of the HSB galaxy ensures that the rotation curve is close to flat over the inner 5 disk scale lengths, whereas the rotation curve of the LSB galaxy rises slowly over this region. These rotation curves are typical of those measured for LSB galaxies (de Blok & McGaugh 1996) and HSB galaxies (Persic & Salucci 1997).
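The decomposition of Fig. 1 can be sketched numerically. The snippet below combines a thin exponential disk (the standard Freeman circular-speed formula) with a cored isothermal halo whose core radius equals the disk scale length, as specified above; the halo normalisation `v_inf` is our assumption, chosen to bring the total curve near the 200 $`\mathrm{km}\mathrm{s}^{-1}`$ peak, the HSB bulge is omitted, and the snippet is an illustration rather than the actual initial-conditions code:

```python
# Schematic disk + cored-isothermal-halo rotation curve (HSB-like).
# v_inf is an assumed halo normalisation; the bulge is omitted.
import numpy as np
from scipy.special import i0, i1, k0, k1

G = 4.30e-6                          # G in kpc (km/s)^2 / Msun
M_disk, r_d = 4.0e10, 3.0            # disk mass [Msun], scale length [kpc]
Sigma0 = M_disk / (2 * np.pi * r_d**2)

def v2_disk(R):
    """Thin exponential disk: v^2 = 4 pi G Sigma0 r_d y^2 (I0 K0 - I1 K1)."""
    y = R / (2 * r_d)
    return 4 * np.pi * G * Sigma0 * r_d * y**2 * (i0(y)*k0(y) - i1(y)*k1(y))

def v2_halo(R, v_inf=160.0, r_c=r_d):
    """Cored isothermal sphere: v^2 = v_inf^2 (1 - (r_c/R) arctan(R/r_c))."""
    return v_inf**2 * (1 - (r_c / R) * np.arctan(R / r_c))

R = np.linspace(0.5, 15.0, 30)
v_tot = np.sqrt(v2_disk(R) + v2_halo(R))
print(f"total curve peaks at ~{v_tot.max():.0f} km/s")
```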
Each disk is modeled using 20,000 star particles of mass $`2\times 10^6M_{\odot }`$ and 40,000 dark matter halo particles of mass $`2\times 10^7M_{\odot }`$ in the LSB and $`6\times 10^6M_{\odot }`$ in the HSB galaxy. The force softening is $`0.1r_d`$ for the star particles and $`0.5r_d`$ for the halo particles. Their disks are stable and they remain in equilibrium when simulated in isolation. Discreteness in the halo particles causes the disk scale height to increase with time, as quantified in Section 4 for the LSB galaxy.
## 3. The response to impulsive encounters
For a given orbit through a cluster, the visible response of a disk galaxy to a tidal encounter depends primarily upon its internal dynamical timescale. Galaxies with cuspy central mass distributions, such as ellipticals, have short orbital timescales at their centres and they will respond adiabatically to tidal perturbations. Sa-Sb spirals have flat rotation curves, therefore a tidal encounter will cause an impulsive disturbance down to a distance $`v_cb/V`$, where $`b`$ is the impact parameter, $`V`$ is the encounter velocity and $`v_c`$ is the galaxy’s rotation speed. LSB galaxies and Sc-Sd galaxies have slowly rising rotation curves, indicating that the central regions are close to a uniform density. The central dynamical timescale is constant throughout the inner disk, and an encounter that is impulsive at the core radius will be impulsive throughout the galaxy.
The strength of an encounter scales as $`M_p^2/V^2`$, where $`M_p`$ is the perturbing mass. The typical galaxy-galaxy encounter within a virialised cluster occurs at a relative velocity of $`\sqrt{2}\sigma _{1d}`$. Substituting typical parameters for an Sa–Sb spiral orbiting within a cluster, we find that such encounters will not perturb the disk within $`3r_d`$. However, tidal shocks from the mean cluster field also provide a significant heating source for galaxies on eccentric orbits (Byrd & Valtonen 1990, Valluri 1993). Ghigna et al. (1998) studied the orbits of several hundred dark halos within a cluster that formed hierarchically in a cold dark matter universe. The median ratio of apocenter to pericenter was 6:1, with a distribution skewed towards radial orbits. More than 20% of the halos were on orbits more radial than 10:1. A galaxy on such an orbit would move past pericenter at several thousand $`\mathrm{km}\mathrm{s}^{-1}`$ and would be heated across the entire disk.
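For the encounter simulated below (a $`2\times 10^{12}M_{\odot }`$ perturber at $`b=60`$ kpc and $`V=1500`$ $`\mathrm{km}\mathrm{s}^{-1}`$), the impulsive kicks can be estimated directly. The expressions below are the standard impulse-approximation estimates with order-unity factors dropped, so the numbers are indicative only:

```python
# Back-of-envelope impulse approximation for a fast tidal encounter.
G = 4.30e-6                      # kpc (km/s)^2 / Msun
M_p, b, V = 2.0e12, 60.0, 1500.0 # perturber mass, impact parameter, speed

dv = 2 * G * M_p / (b * V)                 # net kick to the galaxy centre
r = 10.0                                   # fiducial outer-disk radius [kpc]
dv_tide = dv * r / b                       # differential (tidal) kick at r
print(f"dv ~ {dv:.0f} km/s; tidal dv at r = {r:.0f} kpc ~ {dv_tide:.0f} km/s")
```

A tidal kick of tens of $`\mathrm{km}\mathrm{s}^{-1}`$ in the outer disk is a substantial perturbation for stars orbiting at $`200\mathrm{km}\mathrm{s}^{-1}`$, consistent with the strong response seen in the simulation described next.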
We illustrate the effect of a single impulsive encounter on each of our model disks in Figures 2 and 3. At time t=0 we send a perturbing halo of mass $`2\times 10^{12}M_{\odot }`$ perpendicular to the plane of the disk at an impact parameter of 60 kpc and a velocity of 1500 $`\mathrm{km}\mathrm{s}^{-1}`$. This encounter would be typical of one occurring in a rich cluster with a tidally truncated $`L_{\ast }`$ elliptical galaxy near the cluster core. Any one galaxy in the cluster will have suffered several encounters stronger than this since the cluster formed. Although we simulate a perpendicular orbit here, we do not expect the encounter geometry to make a significant difference, since the difference between direct and retrograde encounters will be small, i.e. $`V\gg v_c`$.
At t=0.1 Gyrs after the encounter, the perturber has moved 150 kpc away, yet the visible response to the encounter is hardly apparent. After 0.2 Gyrs, we can begin to see the response to the tidal shock as material is torn from the disk into extended tidal arms. Even at this epoch there is a clear difference in the response of each galaxy to the perturbation. After 0.4 Gyrs, the LSB galaxy is dramatically altered over the entire disk and a substantial fraction of material has been removed past the tidal radius. Remarkably, the central disk of the HSB galaxy remains intact and only the outermost stars have been strongly perturbed.
Figure 2. Snapshots of the distribution of disk stars from the HSB galaxy after a single high-speed encounter with a massive galaxy. Each frame is 120 kpc on a side and the encounter takes place perpendicular to the disk at the box edge (60 kpc).
Figure 3. Snapshots of the distribution of disk stars from the LSB galaxy after a single high-speed encounter with a massive galaxy. Each frame is 120 kpc on a side and the encounter takes place perpendicular to the disk at the box edge (60 kpc).
## 4. Simulating disk evolution within a hierarchical universe
Previous simulations of tidal shocks and galaxy harassment focussed upon the evolution of disk galaxies in static clusters with substructure represented by softened potentials with masses drawn from a Schechter function (Moore et al. 1996a & 1998). Here we use a more realistic approach of treating the perturbations by following the growth of a cluster within a hierarchical cosmological model. The cluster was extracted from a large CDM simulation of a closed universe within a $`50`$ Mpc box and was chosen to be virialised by the present epoch. (We assume a Hubble constant of 100 $`\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$.) Within the turn-around region there are $`10^5`$ CDM particles of mass $`10^{10}M_{\odot }`$ and their softening length is 20 kpc. At a redshift $`z=0`$ the cluster has a one-dimensional velocity dispersion of $`700\mathrm{km}\mathrm{s}^{-1}`$ and a virial radius of $`2h^{-1}`$ Mpc. The tidal field from the mass distribution beyond the cluster’s turn-around radius is simulated with massive particles to speed the computation.
Between redshifts z=2 and z=0.5 we follow the merger histories of several candidate dark matter halos from the cosmological simulation that end up within the cluster at later times. We select three halos with circular velocities $`\sim 200\mathrm{km}\mathrm{s}^{-1}`$ that have suffered very little merging over this period and would therefore be most likely to host disk galaxies. We extract these halos from the simulation at z=0.5 and replace each entire halo with a pre-built high resolution model galaxy. We rescale the disk and halo scale lengths by $`(1+z)^{-1}`$ according to the prescription of Mao et al. (1998) to represent the galaxies entering the cluster at higher redshifts. On a 32 node parallel computer, each run takes several hours; three runs were performed in which the halos were replaced with LSB disks and a further three runs using HSB disks.
Figure 4. The vertical scale height, $`r_z`$, of the disk in units of the initial disk scale length, $`r_d`$, measured at $`r_d`$ and plotted against time. The circles show the HSB galaxy placed in a void to test the numerical heating. The squares and triangles show one of the HSB and LSB galaxies that enter the cluster, respectively.
At z=0.5, the cluster is only just starting to form from a series of mergers of several individual group- and galaxy-sized halos. The cluster quickly virialises, although several dark matter clumps survive the collapse and remain intact, orbiting within the cluster’s virial radius. Between $`z=0.4`$ and $`z=0.3`$ the model galaxy receives a series of large tidal shocks from the halos that are assembling the cluster. Once the galaxy enters the virialised cluster, it continues to suffer encounters with infalling and orbiting substructure. By a redshift z=0.1, many stars have been stripped from the disk and now orbit through the cluster, closely following the rosette orbit of the parent galaxy. Of the three LSB galaxy runs, between 60% and 90% of the stars were harassed from the disk, whereas the stellar mass loss in the HSB runs was between 10% and 30%.
## 5. Summary
The response of a disk galaxy to tidal shocks is governed primarily by the concentration of the mass distribution that encompasses the visible disk. LSB galaxies have slowly rising rotation curves and dynamical timescales that are constant within their central regions. LSB galaxies cannot survive the chaos of cluster formation; gravitational tidal shocks from the merging substructure literally tear these systems apart, leaving their stars orbiting freely within the cluster and providing the origin of the intra-cluster light (c.f. Moore et al. 1999).
Recent observations of individual planetary nebulae within clusters, but outside of galaxies, lend support to this scenario. Estimates of the total diffuse light within clusters, using CCD photometry (Bernstein et al. 1995, Tyson & Fischer 1995) or the statistics of intra-cluster stars (Theuns & Warren 1997, Feldmeier et al. 1998, Mendez et al. 1998, Ferguson et al. 1998), range from 10% to 45% of the light attached to galaxies. Presumably, these stars must have originated within galactic systems. The integrated light within LSB galaxies may be equivalent to the light within “normal” spirals (Bothun, Impey & McGaugh 1997, and references therein). This is consistent with the entire diffuse light in clusters originating from harassed LSB galaxies.
High surface brightness disk galaxies and galaxies with luminous bulges have steep mass profiles that give rise to flat rotation curves over their visible extent. The orbital time within a couple of disk scale lengths is short enough for the disk to respond adiabatically to rapid encounters. Tidal shocks cannot remove a large amount of material from these galaxies, nor transform them between morphological types, but they will heat the disks and drive instabilities that can funnel gas into the central regions (Lake et al. 1998). A few Gyrs after entering a cluster, their disks are thickened and no spiral features remain. If ram pressure is efficient at removing gas from disks, we speculate that these galaxies will rapidly evolve into S0’s. Since the harassment process and ram-pressure stripping are both more effective near the cluster centers, we expect that a combination of these effects may drive the morphology–density relation within clusters.
## References
Abadi M., Moore B. & Bower R.G. 1999, MNRAS, in press.
Bernstein, G.M., Nichol R.C., Tyson J.A. & Wittman D. 1995, AJ, 110, 1507.
de Blok W.J.G. & McGaugh S.S. 1996, ApJ, 469, L89.
Bothun G.D., Schombert J.M., Impey C.D., Sprayberry D. & McGaugh S.S. 1993, AJ, 106, 530.
Butcher, H. and Oemler, A. 1978, ApJ, 219, 18.
Byrd G. and Valtonen M. 1990, ApJ, 350, 89.
Couch W.J., Barger A.J., Smail I., Ellis R.S. & Sharples R.M. 1998, ApJ, 497, 188.
Dressler A, Oemler A., Butcher H. and Gunn J.E. 1994, ApJ, 430, 107.
Dressler A., Oemler A., Couch W.J., Smail I., Ellis R.S., Barger A., Butcher H., Poggianti B.M., Sharples R.M. 1998, ApJ, 409, 577.
Dubinski J., 1998, ApJ, 502, 141.
Feldmeier J, Ciardullo R. & Jacoby G. 1998, ApJ, 503, 109.
Ferguson H.C., Tanvir N.R. & von Hippel T. 1998, Nature, 391, 461.
Ghigna S., Moore B., Governato F., Lake G., Quinn T. & Stadel J. 1998, MNRAS, 300, 146.
Gunn J.E. & Gott J.R. 1972, ApJ, 176, 1.
Hernquist L. 1993, ApJS, 86, 389.
Icke V. 1985, Astr. Ap. 144, 115-23.
Lake, G., Katz, N. and Moore, B. 1998, ApJ, 495, 152.
Lavery R.J. & Henry J.P. 1988, ApJ, 330, 596.
Lavery R.J. & Henry J.P. 1994, ApJ, 426, 524.
Mao S., Mo. H.J & White S.D.M. 1998, MNRAS, 297, 71.
Mendez, R.H., Guerrero M.A., Freeman K.C., Arnaboldi M., Kudritzki R.P., Hopp U., Capacciolo M. & Ford H. 1997, ApJ, 491, 23.
Mihos J.C., McGaugh S.S. & de Blok W.J.G. 1997, ApJ, 477, L79.
Mo H.J., McGaugh S.S. & Bothun G.D. 1994, MNRAS, 267, 129.
Moore B., Katz N., Lake G., Dressler A. and Oemler A. 1996a, Nature 379, 613.
Moore B., Katz N. and Lake G. 1996b, ApJ, 457, 455.
Moore B., Lake G. & Katz N. 1998, ApJ, 495, 139.
Moore B., Lake G., Quinn T & Stadel J. 1999, MNRAS, in press.
Persic M. & Salucci P., 1997, Dark and visible matter in galaxies ASP Conference series, 117 ed. M. Persic P. Salucci.
Theuns T. & Warren S.J. 1997, MNRAS, 284, L11.
Thompson, L.A. and Gregory, S.A. 1993, AJ, 106, 2197.
Tyson J.A. & Fischer P. 1995, ApJ, 446, L55.
Valluri M. and Jog C. J. 1991, ApJ, 374, 103.
Zwaan M.A. van der Hulst J.M. de Blok W.J.G. & McGaugh S.S. 1995, MNRAS, 273, L35.
## I Introduction
This paper summarizes the work done for the Tevatron Run II Higgs/Supersymmetry workshop on supersymmetry models with Gauge Mediation/Low Scale Supersymmetry Breaking (GMSB) . Six final states in which new physics might manifest itself are investigated using the parameters of the upgraded DØ detector . All of these final states are expected to have small physics and instrumentation backgrounds. The implications of the analyses of these final states in future Tevatron runs for the minimal (and not-so-minimal) GMSB models are discussed. Estimated discovery reaches in the supersymmetry parameter space are presented.
## II Object Identification
Due to the large number of Monte Carlo (MC) events generated, no detector simulation is done for the supersymmetry signals. All studies described in this paper, except those extrapolated from Run I analyses, are carried out at the particle level of the Isajet MC program . A 2 TeV Tevatron center-of-mass energy is assumed throughout the studies. Leptons ($`\mathrm{}=e,\mu `$) and photons ($`\gamma `$) are ‘reconstructed’ from the generated particle list by requiring them to have transverse energy ($`E_T`$) or momentum ($`p_T`$) greater than 5 GeV and to be within the pseudorapidity ranges:
* $`e`$: $`|\eta |<1.1`$ or $`1.5<|\eta |<2.0`$;
* $`\mu `$: $`|\eta |<1.7`$;
* $`\gamma `$: $`|\eta |<1.1`$ or $`1.5<|\eta |<2.0`$.
These fiducial ranges are dictated by the coverage of the electromagnetic calorimeter and the central tracker of the DØ detector. Furthermore, the leptons and photons must be isolated: the additional energy in a cone of radius $`\sqrt{(\mathrm{\Delta }\varphi )^2+(\mathrm{\Delta }\eta )^2}=0.5`$ in $`\eta `$–$`\varphi `$ space around the lepton/photon is required to be less than 20% of its energy.
Jets are reconstructed using a cone algorithm with a radius of 0.5 in $`\eta `$–$`\varphi `$ space and are required to have $`E_T^j>20`$ GeV and $`|\eta ^j|<2.0`$. All particles except neutrinos, the lightest supersymmetric particles (lsp), and the identified leptons and photons are used in the jet reconstruction. The transverse momentum imbalance ($`\text{/}E_T`$) is defined to be the total transverse energy of the neutrinos and the lsps.
Energies and momenta of leptons, photons and jets in Monte Carlo events are taken from their particle-level values without any detector effects. Smearing the energies or momenta of leptons, photons and jets according to their expected resolutions typically changes signal efficiencies by less than 10% (relative) and therefore has a negligible effect on the study.
The reconstruction efficiencies are assumed to be 90% for leptons and photons. For the purpose of background estimations, the probability ($`𝒫`$) for a jet to be misidentified as a lepton ($`j\to \mathrm{}`$) or a photon ($`j\to \gamma `$) is assumed to be $`10^{-4}`$. The probability for an electron to be misidentified as a photon ($`e\to \gamma `$) is assumed to be $`4\times 10^{-4}`$. These probabilities are a factor of three or more smaller than those obtained in Run I (the typical numbers determined in Run I were $`𝒫(j\to e)=5\times 10^{-4}`$, $`𝒫(j\to \gamma )=7\times 10^{-4}`$, and $`𝒫(e\to \gamma )=4\times 10^{-3}`$). With a new magnetic central tracking system and a finely segmented preshower detector, they should be achievable in Run II.
In Run I, tagging of b-jets was limited to the use of soft muons in DØ. Secondary vertex tagging of b-jets will be a powerful addition in Run II. For the studies described below, a tagging efficiency of 60% is assumed for b-jets with $`E_T>20`$ GeV and $`|\eta |<2.0`$. The probability $`𝒫(j\to b)`$ for a light-quark or gluon jet to be tagged as a b-jet is assumed to be $`10^{-3}`$. These numbers are optimistic extrapolations of what CDF achieved in Run I.
Heavy stable charged particles can be identified using their expected large ionization energy losses ($`dE/dx`$) in the silicon detector, fiber tracker, preshower detectors and calorimeter. Based on Ref. , a generic $`dE/dx`$ cut is introduced with an efficiency of 68% for heavy stable charged particles and a rejection factor of 10 for the minimum ionization particles (MIP). Note that the efficiency for identifying at least one such particle in events with two heavy stable charged particles is 90%.
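The quoted per-event number is simple combinatorics, e.g.:

```python
# Probability that at least one of two heavy stable charged particles
# passes a dE/dx cut with 68% single-particle efficiency.
eff = 0.68
print(f"{1 - (1 - eff)**2:.2f}")   # -> 0.90
```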
With the addition of preshower detectors, DØ will be able to reconstruct the distance of closest approach (dca) of a photon with a resolution of $`\sim 1.5`$ cm . Here the dca is defined as the distance between the primary event vertex and the reconstructed photon direction. It will thereby enable us to identify photons produced at secondary vertices. In the following, a photon is called displaced if its dca is greater than 5.0 cm and is denoted by $`\gamma ^{}`$. We further assume that the probability for a photon produced at the primary vertex to have a measured dca$`>5`$ cm is $`𝒫(\gamma \to \gamma ^{})=2\times 10^{-3}`$ (about $`3\sigma `$).
All final states studied have large $`E_T`$ ($`p_T`$) leptons/photons with or without large $`\text{/}E_T`$. Triggering on these events is not expected to be a problem. Nevertheless, we assume a 90% trigger efficiency for all the final states.
## III Final States
Signatures for supersymmetry vary dramatically from one model to another. They can also be very different for different regions of the parameter space of a model. Furthermore, these signatures are generally not unique to supersymmetry. In fact, some of the signatures are also expected from other theories beyond the Standard Model. Instead of chasing after theoretical models (all of which, except perhaps one, are wrong anyway), we identify a set of final states which are somewhat generic to many new physics models, including supersymmetric models. All of these final states are characterized by high $`E_T`$ ($`p_T`$) isolated leptons/photons, with or without large missing transverse momentum, and are thus expected to have small physics and instrumental backgrounds. In the following, we discuss selection criteria and estimate observable background cross sections for six such final states:
### A $`\gamma \gamma \text{/}E_T`$ Final State
The DØ Collaboration reported a search for di-photon events with large $`\text{/}E_T`$ ($`\gamma \gamma \text{/}E_T`$ events), motivated by supersymmetric models with a light gravitino ($`\stackrel{~}{G}`$) as the lsp, from a data sample with an integrated luminosity ($`\mathcal{L}`$) of $`106.3\pm 5.6`$ $`\mathrm{pb}^{-1}`$ in Run I. The $`\gamma \gamma \text{/}E_T`$ events were selected by requiring two identified photons, one with $`E_T^\gamma >20`$ GeV and the other with $`E_T^\gamma >12`$ GeV, each within pseudorapidity $`|\eta ^\gamma |<1.1`$ or $`1.5<|\eta ^\gamma |<2.0`$, and $`\text{/}E_T`$ greater than 25 GeV. Two events satisfied all requirements.
The principal backgrounds were multijet, direct photon, $`W+\gamma `$, $`W+\mathrm{jets}`$, $`Z\to ee`$, and $`Z\to \tau \tau \to ee`$ events from Standard Model processes with misidentified photons and/or mismeasured $`\text{/}E_T`$. The numbers of estimated background events were $`2.1\pm 0.9`$ from $`\text{/}E_T`$ mismeasurement (QCD) and $`0.2\pm 0.1`$ from misidentified photons (fakes). This led to an observed background cross section of 20 fb from QCD and of 2 fb from fakes in Run I. The $`\text{/}E_T`$ distributions before the $`\text{/}E_T`$ cut for both the candidates and the background events are shown in Fig. 1. Note that events with large $`\text{/}E_T`$ are rare.
Since the backgrounds are dominated by the $`\text{/}E_T`$ mismeasurement, they can be significantly reduced by raising the $`\text{/}E_T`$ cut. Therefore, the following selection criteria are used for the Run II studies:
* At least two photons with $`E_T^\gamma >20`$ GeV;
* $`\text{/}E_T`$$`>50`$ GeV.
The backgrounds with this set of selection criteria are expected to be significantly reduced by the increased cutoffs on $`\text{/}E_T`$ and photon $`E_T`$ and by the improved photon identification. The total observable background cross section in Run II is estimated to be $`\sigma _b=0.4(\mathrm{QCD})+0.2(\mathrm{fakes})=0.6`$ fb, assuming reduction factors of 5 from the raised $`\text{/}E_T`$ cutoff, 4 from the improved $`𝒫(j\to \gamma )`$, 3 from the higher photon $`E_T`$ requirement, and 10 from the decreased $`e\to \gamma `$ fake probability.
### B $`\gamma bj\text{/}E_T`$ Final State
The DØ Collaboration carried out a search for single-photon events with at least two jets and large $`\text{/}E_T`$ ($`\gamma jj\text{/}E_T`$ events) in Run I. The $`\gamma jj\text{/}E_T`$ events were selected by requiring at least one identified photon with $`E_T^\gamma >20`$ GeV and within the pseudorapidity ranges $`|\eta ^\gamma |<1.1`$ or $`1.5<|\eta ^\gamma |<2.0`$, two or more jets having $`E_T^j>20`$ GeV and $`|\eta ^j|<2.0`$, and $`\text{/}E_T>25`$ GeV. A total of 318 events were selected from a data sample corresponding to an integrated luminosity of $`99.4\pm 5.4`$ $`\mathrm{pb}^{-1}`$.
The principal backgrounds were found to be QCD direct photon and multijet events, where there was mismeasured $`\text{/}E_T`$ and a real or fake photon. The number of events from this source was estimated to be $`315\pm 30`$. Other backgrounds such as those from $`W`$ with electrons misidentified as photons were found to be small, contributing $`5\pm 1`$ events. This led to an observed background cross section of 3,200 fb from the $`\text{/}E_T`$ mismeasurement and of 50 fb from the fakes. The $`\text{/}E_T`$ distribution before the $`\text{/}E_T`$$`>25`$ GeV cut is shown in Fig. 2. As shown in the figure, the backgrounds can be significantly reduced by raising the requirement on $`\text{/}E_T`$.
Events with a high $`E_T`$ photon, b-jets and large $`\text{/}E_T`$ are expected in some new physics models. These events, referred to as $`\gamma bj\text{/}E_T`$, are in many ways similar to the $`\gamma jj\text{/}E_T`$ events and thereby can be selected similarly:
* At least one photon with $`E_T^\gamma >20`$ GeV;
* At least two jets with $`E_T^j>20`$ GeV;
* At least one jet is tagged as a b-quark jet with $`E_T^b>20`$ GeV;
* $`\text{/}E_T`$$`>50`$ GeV;
* No leptons with $`E_T^{\mathrm{}}>20`$ GeV.
The backgrounds from the QCD multijet events with real or misidentified photons and from the W events with electrons faking photons are estimated to be 0.63 fb, assuming background reduction factors of 5 from the raised $`\text{/}E_T`$ requirement and 2 from the improved photon identification, and using the assumed value of $`𝒫(j\to b)`$. The dominant background sources are expected to be $`\gamma b\overline{b}`$ and $`\gamma t\overline{t}`$ events. These background sources cannot be reduced by the tagging of b-jets. However, the $`\gamma b\overline{b}`$ contribution is expected to be small due to the large $`\text{/}E_T`$ requirement; Monte Carlo studies show that it is negligible. The $`\gamma t\overline{t}`$ (with $`t\overline{t}\to W^+W^{-}b\overline{b}`$) contribution is reduced by requirements 4) and 5) and is estimated using the cross section of Ref. . A total observable background cross section of 0.9 fb is assumed.
### C $`\gamma ^{}jj\text{/}E_T`$ Final State
Photons produced at secondary vertices are predicted in a class of new physics models. These photons will appear to have large values of dca. Though dramatic, they alone are unlikely to be sufficient to reduce backgrounds from cosmic rays or from mismeasurement. We therefore select events with displaced photons accompanied by jets and large $`\text{/}E_T`$:
* At least one displaced photon with $`E_T^\gamma ^{}>20`$ GeV;
* At least two jets with $`E_T^j>20`$ GeV;
* $`\text{/}E_T`$$`>50`$ GeV.
These are called $`\gamma ^{}jj\text{/}E_T`$ events. The dominant backgrounds are the same as those for the $`\gamma jj\text{/}E_T`$ events, with a vertex-pointing photon being misidentified as a displaced photon. Using $`𝒫(\gamma \to \gamma ^{})`$, the observable background cross section from QCD and W events is estimated to be 0.6 fb.
### D High $`p_T`$ $`\mathrm{}\mathrm{}+dE/dx`$ Final State
One possible new physics signature is the presence of heavy stable charged particles. These particles, if produced, will manifest themselves in the detector as slowly moving muons with large ionization energy losses. Though DØ had several di-lepton analyses in Run I, none of them can be extrapolated to Run II because of the replacement of the central tracker. Based on the expected signatures of several supersymmetric models with heavy stable charged particles discussed below, we select high $`p_T`$ di-lepton events ($`\mathrm{}\mathrm{}+dE/dx`$) with large $`dE/dx`$ losses using the following requirements:
* At least two leptons with $`p_T^{\mathrm{}}>50`$ GeV;
* $`M_{\mathrm{}\mathrm{}}>150`$ GeV;
* At least one lepton passing the $`dE/dx`$ requirement.
The di-lepton mass requirement is intended to reduce Drell-Yan backgrounds. The principal backgrounds are: QCD dijet events with jets misidentified as leptons, $`t\overline{t}`$, and Drell-Yan events. Using $`𝒫(j\to \mathrm{})=10^{-4}`$ and the assumed rejection factor of the $`dE/dx`$ cut for the MIP particles, the observable background cross sections are estimated to be 0.1 fb from QCD dijets, 0.2 fb from $`t\overline{t}`$ events, and 0.2 fb from Drell-Yan processes. The QCD dijet cross section for $`p_T>50`$ GeV is assumed to be 1 $`\mu `$b in the estimation. The total observable cross section is therefore 0.5 fb for the above selection.
### E $`\mathrm{}\mathrm{}\mathrm{}j\text{/}E_T`$ Final State
DØ searched for gaugino pair production using the tri-lepton signature in Run I. The lepton $`p_T`$ cut was typically 15 GeV for the leading lepton and 5 GeV for the non-leading leptons. The analysis also had a small $`\text{/}E_T`$ requirement. The observable background cross section was estimated to be around 13 fb. Most of these backgrounds are due to Drell-Yan processes. We select the $`\mathrm{}\mathrm{}\mathrm{}j\text{/}E_T`$ events using the following criteria:
* $`p_T^\mathrm{}_1>15`$ GeV, $`p_T^\mathrm{}_2>5`$ GeV, $`p_T^\mathrm{}_3>5`$ GeV;
* $`\text{/}E_T`$$`>20`$ GeV;
* At least one jet with $`E_T^j>20`$ GeV.
The Drell-Yan production, a major background source for the Run I analysis, is significantly reduced by the new jet requirement. The total observable background cross section is estimated to be 0.3 fb, assuming background reduction factors of 10 from the jet requirement, 2 from the improved particle identification, and 2 from the higher $`\text{/}E_T`$ cut.
### F $`\mathrm{}^\pm \mathrm{}^\pm jj\text{/}E_T`$ Final State
Like-sign di-lepton events are expected from processes such as gluino pair production. They are also expected from processes with three or more leptons in the final state, of which only two are identified. This final state is expected to have small backgrounds. Again, without a magnetic tracker, DØ had no analysis of this nature in Run I. Based on Monte Carlo studies for several supersymmetric models, we select $`\mathrm{}^\pm \mathrm{}^\pm jj\text{/}E_T`$ events using the following criteria:
* Two like-sign leptons with $`p_T^{\mathrm{}}>15`$ GeV;
* At least two jets with $`E_T^j>20`$ GeV;
* $`\text{/}E_T`$$`>25`$ GeV.
Events with three or more identified leptons are removed to make the sample orthogonal to the $`\mathrm{}\mathrm{}\mathrm{}j\text{/}E_T`$ sample. Since the leptons are relatively soft in $`p_T`$ for the new physics model we investigate with this selection, the effect of charge confusion due to the limited tracking resolution is neglected in this study. The major backgrounds are: $`W+\mathrm{jets}`$ events with one of the jets misidentified as a lepton, $`t\overline{t}`$ events with energetic leptons from b-quark decays, and Drell-Yan ($`WZ`$, $`ZZ`$) events. The $`W+\mathrm{jets}`$ background is estimated, using the number of $`W+3j`$ events observed in Run I folded with $`𝒫(j\to \mathrm{})`$, to be 0.2 fb. The $`t\overline{t}`$ and Drell-Yan backgrounds are estimated using Monte Carlo to be 0.1 and 0.1 fb respectively. Adding the three background sources together yields a total observable background cross section of 0.4 fb.
## IV Constraints on GMSB Models
The supersymmetric models with gauge mediated supersymmetry breaking are characterized by a supersymmetry breaking scale $`\mathrm{\Lambda }`$ as low as 100 TeV and a light gravitino which is naturally the lightest supersymmetric particle. In these models, supersymmetry is assumed to be broken in a hidden sector and the symmetry breaking is transmitted to the visible sector of Standard Model particles and their superpartners through the Standard Model gauge interactions. The minimal gauge mediated supersymmetry breaking model is described by five parameters: the supersymmetry breaking scale $`\mathrm{\Lambda }`$, the messenger mass scale $`M_m`$, the number of messenger fields $`N`$, $`\mathrm{tan}\beta `$, and the sign of $`\mu `$.
The phenomenology is largely determined by the next-to-lightest supersymmetric particle (nlsp), which in turn depends on the values of the above five parameters. For a review of GMSB models, see Ref. . In the following, we discuss expected sensitivities with integrated luminosities of 2 and 30 $`\mathrm{fb}^{-1}`$ for four different model lines defined by the working group. Each model line has a different nlsp. All theoretical expectations and signal efficiencies are obtained from the Isajet MC program . A minimum $`p_T`$ of 50 GeV of the hard scattering is applied for all signal processes. We define the significance ($`N_s/\delta N_b`$) as the ratio between the number ($`N_s`$) of expected signal events and the error ($`\delta N_b`$) on the estimated number of background events. Here a 20% systematic uncertainty is assumed for all estimated observable background cross sections. Therefore,
$$\delta N_b=\sqrt{\sigma _b\mathcal{L}+(0.2\sigma _b\mathcal{L})^2}$$
We characterize the sensitivity using the minimum signal cross section $`\sigma _{dis}`$ for a 5 standard deviation ($`5\sigma `$) discovery:
$$\frac{N_s}{\delta N_b}=\frac{\sigma _{dis}\mathcal{L}ϵ}{\delta N_b}=5$$
where $`ϵ`$ is the efficiency for the signal. The minimum observable signal cross section $`\sigma _{obs}`$, defined as $`\sigma _{dis}ϵ`$, for the discovery is therefore independent of the signal process. The $`\sigma _{obs}`$ as a function of $`\mathcal{L}`$ for several different values of $`\sigma _b`$ is shown in Fig. 3. It decreases dramatically as $`\mathcal{L}`$ increases for small $`\mathcal{L}`$ values and flattens out for large $`\mathcal{L}`$ values. Clearly, the sensitivity can be improved for large $`\mathcal{L}`$ values by tightening the cuts to reduce the backgrounds further.
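The content of Fig. 3 follows directly from these definitions; a minimal sketch (with $`\sigma _b`$ in fb and $`\mathcal{L}`$ in $`\mathrm{fb}^{-1}`$; the sampled values are illustrative) is:

```python
# Minimum observable cross section for a 5-sigma discovery,
# with a 20% systematic uncertainty on the background.
import numpy as np

def sigma_obs(sigma_b, lum, nsig=5.0, syst=0.2):
    n_b = sigma_b * lum
    dn_b = np.sqrt(n_b + (syst * n_b)**2)
    return nsig * dn_b / lum            # = sigma_dis * efficiency

for lum in (2.0, 30.0):
    for sb in (0.3, 0.6, 0.9):
        print(f"L = {lum:4.0f} fb^-1, sigma_b = {sb:.1f} fb: "
              f"sigma_obs = {sigma_obs(sb, lum):.2f} fb")
```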
In the following, we express the $`5\sigma `$ discovery cross sections as functions of the supersymmetry breaking scale $`\mathrm{\Lambda }`$ and the lighter chargino ($`\stackrel{~}{\chi }_1^\pm `$) mass for the four different model lines.
### A Model Line 1: $`\stackrel{~}{\chi }_1^0`$ as the NLSP
Within the framework of the minimal GMSB models, $`\stackrel{~}{\chi }_1^0`$ is the nlsp for most of the parameter space. If the $`\stackrel{~}{\chi }_1^0`$ has a non-zero photino component, it is unstable and decays to a photon plus a gravitino ($`\stackrel{~}{\chi }_1^0\to \gamma \stackrel{~}{G}`$) with a branching ratio of nearly 100%. Depending on its lifetime, pair production of supersymmetric particles will result in $`\gamma \gamma \text{/}E_T`$, $`\gamma \text{/}E_T`$, and $`\text{/}E_T+X`$ events. For the purpose of this study, we consider a class of models with the following parameters fixed:
$$N=1,\frac{M_m}{\mathrm{\Lambda }}=2,\mathrm{tan}\beta =2.5,\mu >0$$
while $`\mathrm{\Lambda }`$ is allowed to vary. For the range of $`\mathrm{\Lambda }`$ values of interest at the Tevatron, the supersymmetry production cross section is dominated by $`\stackrel{~}{\chi }_1^+\stackrel{~}{\chi }_1^{-}`$ and $`\stackrel{~}{\chi }_1^\pm \stackrel{~}{\chi }_2^0`$ production. Figure 4 shows the schematic decay chains of $`\stackrel{~}{\chi }_1^\pm `$ and $`\stackrel{~}{\chi }_2^0`$ with their branching ratios for $`\mathrm{\Lambda }=100`$ TeV. In the following, scenarios with prompt and delayed $`\stackrel{~}{\chi }_1^0`$ decays are discussed. If the $`\stackrel{~}{\chi }_1^0`$ is quasi-stable, i.e. has a long lifetime, the signature will be identical to that of supersymmetric models with gravity mediation.
#### 1 Prompt $`\stackrel{~}{\chi }_1^0\to \gamma \stackrel{~}{G}`$ Decay
If the $`\stackrel{~}{\chi }_1^0`$ decays in the vicinity of the production vertex, $`\gamma \gamma \text{/}E_T`$ events are expected. The distributions of photon $`E_T`$ and event $`\text{/}E_T`$ for $`\mathrm{\Lambda }=80,140`$ TeV are shown in Fig. 5. These events typically have high $`E_T`$ photons together with large transverse momentum imbalances, and therefore can be selected using the $`\gamma \gamma \text{/}E_T`$ criteria discussed in Section III A. Table I shows the detection efficiencies and significances, along with the total theoretical supersymmetry cross sections and the chargino and neutralino masses, for different values of $`\mathrm{\Lambda }`$. Figure 6 compares the $`5\sigma `$ discovery cross sections $`\sigma _{dis}`$ with the theoretical cross sections expected from supersymmetry for two different values of $`\mathcal{L}`$ as functions of the lighter chargino mass $`m_{\stackrel{~}{\chi }_1^\pm }`$ (and the supersymmetry breaking scale $`\mathrm{\Lambda }`$). The lighter chargino with mass up to 290, 340 GeV can be discovered for $`\mathcal{L}=2`$, 30 $`\mathrm{fb}^{-1}`$, respectively.
#### 2 Delayed $`\stackrel{~}{\chi }_1^0\to \gamma \stackrel{~}{G}`$ Decay
If the $`\stackrel{~}{\chi }_1^0`$ has a significant lifetime, the photon from its decay may not point back to the primary vertex. If the decay occurs inside the tracking volume of the DØ detector, the photon is expected to traverse the standard electromagnetic detectors (the preshower detectors and the electromagnetic calorimeter); it can therefore be identified. However, if the decay occurs outside the tracking detector, the photon identification is problematic. For this study, we assume that the photon is identifiable if it is produced inside a cylinder defined by the DØ tracking volume ($`r<50`$ cm and $`|z|<120`$ cm) and is lost if it is produced outside the cylinder. Figure 7(a) shows the average decay distance and distance of closest approach of the $`\stackrel{~}{\chi }_1^0`$ as functions of its proper decay length ($`c\tau `$) for $`\mathrm{\Lambda }=100`$ TeV. Due to its heavy mass, the Lorentz boost of the $`\stackrel{~}{\chi }_1^0`$ is typically small ($`\gamma \approx 1.5`$). The probabilities that a photon is identifiable and that an identifiable photon has dca$`>5`$ cm as functions of the $`\stackrel{~}{\chi }_1^0`$ proper decay distance $`c\tau `$ are shown in Fig. 7(b), again for $`\mathrm{\Lambda }=100`$ TeV. Distributions for other $`\mathrm{\Lambda }`$ values are similar. Figure 8 shows the jet multiplicity and $`E_T`$ distributions for $`\mathrm{\Lambda }=80,140`$ TeV. Most of these events have large $`E_T`$ jets and thus can be selected using the $`\gamma ^{}jj\text{/}E_T`$ selection criteria discussed above. The detection efficiencies and the expected significances of the $`\gamma ^{}jj\text{/}E_T`$ selection for $`c\tau =50`$ cm are tabulated in Table II as an example. The estimated $`5\sigma `$ discovery reaches in $`\mathrm{\Lambda }`$ and chargino mass for different values of $`c\tau `$ are shown in Fig. 9, along with those expected from the $`\gamma \gamma \text{/}E_T`$ analysis. As expected, the $`\gamma \gamma \text{/}E_T`$ analysis has a stronger dependence on $`c\tau `$ than the $`\gamma ^{}jj\text{/}E_T`$ analysis.
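The geometric acceptance for such delayed decays can be estimated with a toy Monte Carlo. The sketch below uses the tracking-cylinder boundary quoted above, an exponential decay length with the typical boost $`\gamma \approx 1.5`$, and isotropic decay directions; all three are simplifying assumptions:

```python
# Toy MC: fraction of neutralino decays inside the assumed tracking
# cylinder (r < 50 cm, |z| < 120 cm) versus proper decay length c*tau.
import numpy as np

rng = np.random.default_rng(1)

def frac_inside(ctau_cm, n=200_000, gamma=1.5):
    beta = np.sqrt(1.0 - 1.0 / gamma**2)
    d = rng.exponential(gamma * beta * ctau_cm, n)   # lab decay distance
    cth = rng.uniform(-1.0, 1.0, n)                  # isotropic direction
    sth = np.sqrt(1.0 - cth**2)
    inside = (d * sth < 50.0) & (np.abs(d * cth) < 120.0)
    return inside.mean()

for ctau in (10.0, 50.0, 200.0):
    print(f"c*tau = {ctau:5.0f} cm: P(decay in tracker) = {frac_inside(ctau):.2f}")
```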
### B Model Line 2: $`\stackrel{~}{\tau }_1`$ as the NLSP
If $`\stackrel{~}{\tau }_1`$ (the lighter of the two mixed states of $`\stackrel{~}{\tau }_R`$ and $`\stackrel{~}{\tau }_L`$) is lighter than $`\stackrel{~}{\chi }_1^0`$, all supersymmetric particles will cascade into the $`\stackrel{~}{\tau }_1`$, which in turn decays to $`\tau \stackrel{~}{G}`$ with a 100% branching ratio. This class of models is defined by the following parameter values:
$$N=2,\frac{M_m}{\mathrm{\Lambda }}=3,\mathrm{tan}\beta =15,\mu >0$$
with varying $`\mathrm{\Lambda }`$. Again, $`\stackrel{~}{\chi }_1^+\stackrel{~}{\chi }_1^{-}`$ and $`\stackrel{~}{\chi }_1^\pm \stackrel{~}{\chi }_2^0`$ production dominates the cross section for $`\mathrm{\Lambda }<75`$ TeV. For $`\mathrm{\Lambda }`$ values above 75 TeV, $`\stackrel{~}{\tau }_1\stackrel{~}{\tau }_1`$, $`\stackrel{~}{e}_R\stackrel{~}{e}_R`$, and $`\stackrel{~}{\mu }_R\stackrel{~}{\mu }_R`$ production becomes important. As an example, the branching ratios of $`\stackrel{~}{\chi }_1^\pm `$ and $`\stackrel{~}{\chi }_2^0`$ for $`\mathrm{\Lambda }=40`$ TeV are displayed graphically in Fig. 10. In the following, two cases corresponding to short-lived and quasi-stable $`\stackrel{~}{\tau }_1`$s are discussed. It should be noted that the two analyses discussed below are also sensitive to the case of an intermediate $`\stackrel{~}{\tau }_1`$ lifetime.
#### 1 Prompt $`\stackrel{~}{\tau }_1\to \tau \stackrel{~}{G}`$ Decay
If the $`\stackrel{~}{\tau }_1`$ is short-lived and decays in the vicinity of the production vertex (i.e. with a decay distance $`\gamma c\tau <10`$ cm), anomalous $`\tau `$ production is expected from supersymmetry. Together with the $`W^{*}/Z^{*}`$ production from the cascade decays of the primary supersymmetric particles, these events give rise to $`\mathrm{}\mathrm{}\mathrm{}j\text{/}E_T`$ and $`\mathrm{}^\pm \mathrm{}^\pm jj\text{/}E_T`$ final states. The lepton $`p_T`$ distributions of the $`\mathrm{}\mathrm{}\mathrm{}j\text{/}E_T`$ and $`\mathrm{}^\pm \mathrm{}^\pm jj\text{/}E_T`$ events are shown in Fig. 11. Since most leptons are produced in $`\tau `$ decays, their $`p_T`$s are relatively soft. Table III shows the efficiencies of the $`\mathrm{}\mathrm{}\mathrm{}j\text{/}E_T`$ and $`\mathrm{}^\pm \mathrm{}^\pm jj\text{/}E_T`$ selection criteria for these events, along with the theoretical cross sections and the $`\stackrel{~}{\chi }_1^\pm `$ and $`\stackrel{~}{\tau }_1`$ masses. Note that the $`\mathrm{}\mathrm{}\mathrm{}j\text{/}E_T`$ and $`\mathrm{}^\pm \mathrm{}^\pm jj\text{/}E_T`$ criteria are orthogonal. The efficiencies are relatively small, largely due to the small branching ratio of the events to tri-leptons. We note that the total efficiencies shown in the table are somewhat conservative: they do not take into account the migration of $`\mathrm{}\mathrm{}\mathrm{}j\text{/}E_T`$ events to the $`\mathrm{}^\pm \mathrm{}^\pm jj\text{/}E_T`$ sample due to inefficiency in the lepton identification. The $`5\sigma `$ discovery curves are shown in Fig. 12. The lighter chargino with mass up to 160 and 230 GeV can be discovered for $`\mathcal{L}=2`$, 30 $`\mathrm{fb}^{-1}`$, respectively.
The conventional wisdom is that this analysis should benefit from $`\tau `$ identification. However, we doubt that it will have a dramatic impact on the reach in the supersymmetry parameter space. Though $`\tau `$ identification could improve the efficiency for the signal, it will undoubtedly come with large backgrounds. Nevertheless, $`\tau `$ identification is essential to narrow down theoretical models if an excess is observed in the tri-lepton final state.
#### 2 Quasi-stable $`\stackrel{~}{\tau }_1`$
If the $`\stackrel{~}{\tau }_1`$ has a long lifetime (quasi-stable) and decays outside the detector ($`\gamma c\tau `$ greater than $`\sim 3`$ m), it will appear in the detector like a muon, with the exception of a large ionization energy loss. The signature is, therefore, two high $`p_T`$ ‘muons’ with large $`dE/dx`$ values. These events can be selected using the criteria described in Section III D. The expected $`p_T`$ distributions of the $`\stackrel{~}{\tau }_1`$ for two different values of $`\mathrm{\Lambda }`$ are shown in Fig. 13(a). The cut of $`p_T>50`$ GeV of the $`\mathrm{}\mathrm{}+dE/dx`$ selection is efficient for the signal, while it is expected to reduce backgrounds significantly. The typical invariant mass of the two ‘muons’ (assumed massless) is very large, as shown in Fig. 13(b). A $`M_{\mathrm{}\mathrm{}}>150`$ GeV requirement does little harm to the signals. Due to its large mass, the $`\stackrel{~}{\tau }_1`$ is expected to move slowly. However, since most of the $`\stackrel{~}{\tau }_1`$’s are produced in the decays of massive $`\stackrel{~}{\chi }_1^\pm `$s and $`\stackrel{~}{\chi }_2^0`$s, the average speed $`\beta =v/c`$ is relatively large; it is around 0.7 for the $`\mathrm{\Lambda }`$ values studied. Note that the $`\beta `$ distribution is very similar to that shown in Fig. 18(b) for the models with $`\stackrel{~}{\mathrm{}}`$ as the Co-nlsp. Nevertheless, the not-so-slowly moving $`\stackrel{~}{\tau }_1`$’s are expected to deposit large ionization energies in the detector, differentiating them from other high $`p_T`$ MIP particles. Since the backgrounds after the requirements $`p_T>50`$ GeV and $`M_{\mathrm{}\mathrm{}}>150`$ GeV are already small, it pays to have a $`dE/dx`$ requirement with a relatively high efficiency for the signal and a reasonable rejection for the MIP particles. The $`\text{/}E_T`$ distribution of these events, shown in Fig. 14, exhibits two distinct regions: small and large $`\text{/}E_T`$. The decays $`\stackrel{~}{\chi }_1^\pm \to \stackrel{~}{\chi }_1^0W\to \stackrel{~}{\tau }_1\tau W`$ and $`\stackrel{~}{\chi }_2^0\to e\stackrel{~}{e},\mu \stackrel{~}{\mu }`$ contribute to events with small $`\text{/}E_T`$. The decays $`\stackrel{~}{\chi }_1^\pm \to \stackrel{~}{\tau }_1\nu `$ and $`\stackrel{~}{\chi }_2^0\to \tau \stackrel{~}{\tau }_1`$ are responsible for events with large $`\text{/}E_T`$. The detection efficiencies and the expected significances of the $`\mathrm{}\mathrm{}+dE/dx`$ selection for different values of $`\mathrm{\Lambda }`$ are tabulated in Table IV. The high efficiency is largely due to the high momentum expected for the quasi-stable $`\stackrel{~}{\tau }_1`$. The $`5\sigma `$ discovery curves are shown in Fig. 12 for the two values of $`\mathcal{L}`$. The lighter chargino with mass up to 340, 410 GeV and the $`\stackrel{~}{\tau }_1`$ with mass up to 160, 200 GeV can be discovered for the two integrated luminosities, respectively.
### C Model Line 3: $`\stackrel{~}{\mathrm{}}`$ as the Co-NLSP
For some regions of the parameter space, three light sleptons ($`\stackrel{~}{e}_R`$, $`\stackrel{~}{\mu }_R`$, and $`\stackrel{~}{\tau }_1`$) are essentially degenerate in mass and they can be lighter than $`\stackrel{~}{\chi }_1^0`$. As a result, the sleptons ($`\stackrel{~}{\mathrm{}}\equiv \stackrel{~}{\tau }_1,\stackrel{~}{e}_R,\stackrel{~}{\mu }_R`$) effectively share the role of the nlsp. Quantitative studies of this type of models are done for the following GMSB parameter values:
$$N=3,\frac{M_m}{\mathrm{\Lambda }}=3,\mathrm{tan}\beta =3,\mu >0$$
with again varying $`\mathrm{\Lambda }`$ values. For small values of $`\mathrm{\Lambda }`$, $`\stackrel{~}{\chi }_1^+\stackrel{~}{\chi }_1^{}`$ and $`\stackrel{~}{\chi }_1^\pm \stackrel{~}{\chi }_2^0`$ dominate the production cross section. As shown in Fig. 15, $`\stackrel{~}{\chi }_1^+\stackrel{~}{\chi }_1^{}`$ and $`\stackrel{~}{\chi }_1^\pm \stackrel{~}{\chi }_2^0`$ production will yield events with multileptons in the final state. The slepton pair production surpasses chargino-neutralino production if $`\mathrm{\Lambda }>50`$ TeV. The total supersymmetry cross section for several different $`\mathrm{\Lambda }`$ values can be found in Table V. The lifetime of $`\stackrel{~}{\mathrm{}}`$ determines the event topology. In the following, we discuss the cases with short-lived and quasi-stable $`\stackrel{~}{\mathrm{}}`$s. Again, the analyses should also be sensitive to the $`\stackrel{~}{\mathrm{}}`$ nlsp with an intermediate lifetime.
#### 1 Prompt $`\stackrel{~}{\mathrm{}}\to \mathrm{}\stackrel{~}{G}`$ Decay
If the decay $`\stackrel{~}{\mathrm{}}\to \mathrm{}\stackrel{~}{G}`$ is prompt ($`\gamma c\tau <10`$ cm), $`\mathrm{}\mathrm{}\text{/}E_T`$ events are expected from supersymmetry. Unfortunately, this final state is swamped by backgrounds from Standard Model processes such as $`t\overline{t}`$, $`WW`$, $`WZ`$ and $`ZZ`$ production, as well as from $`W+\mathrm{jets}`$ production with one of the jets misidentified as a lepton. However, we note that these events typically have multiple leptons in the final state and most of them are in the central pseudorapidity region with good lepton identification. Apart from those from $`\stackrel{~}{\mathrm{}}`$ decays, leptons are also expected from the $`W^{*}`$’s and $`Z^{*}`$’s produced in the cascade decays of $`\stackrel{~}{\chi }_1^\pm `$ and $`\stackrel{~}{\chi }_2^0`$ in supersymmetry-originated events. Therefore, they can be selected using the $`\mathrm{}\mathrm{}\mathrm{}j\text{/}E_T`$ criteria. The $`p_T`$ distributions of the leading lepton and the third lepton of these events are shown in Fig. 16(a). Since most of the leading leptons are produced in the direct decays of heavy $`\stackrel{~}{\mathrm{}}`$’s, their $`p_T`$ spectrum is relatively hard, as shown in the figure. The detection efficiencies and the expected significances are summarized in Table V. The reduction in the relative cross section of the tri-lepton producing $`\stackrel{~}{\chi }_1^+\stackrel{~}{\chi }_1^{}`$ and $`\stackrel{~}{\chi }_1^\pm \stackrel{~}{\chi }_2^0`$ processes is responsible for the decrease in efficiency as $`\mathrm{\Lambda }`$ increases. For $`\mathrm{\Lambda }>50`$ TeV, the $`\stackrel{~}{\mathrm{}}\stackrel{~}{\mathrm{}}`$ production cross section surpasses that of $`\stackrel{~}{\chi }_1^+\stackrel{~}{\chi }_1^{}`$ and $`\stackrel{~}{\chi }_1^\pm \stackrel{~}{\chi }_2^0`$. With the $`\stackrel{~}{\mathrm{}}\to \mathrm{}\stackrel{~}{G}`$ decay, $`\stackrel{~}{\mathrm{}}\stackrel{~}{\mathrm{}}`$ events will result in a high-$`p_T`$ $`\mathrm{}\mathrm{}\text{/}E_T`$ final state. We note that the improvement from adding the $`\mathrm{}^\pm \mathrm{}^\pm jj\text{/}E_T`$ selection is minimal in this case. The $`5\sigma `$ discovery curves are compared with the theoretical cross sections in Fig. 17. With integrated luminosities of 2 and 30 fb<sup>-1</sup>, the lighter chargino with mass up to 310 and 360 GeV can be discovered, respectively.
#### 2 Quasi-stable $`\stackrel{~}{\mathrm{}}`$
If the $`\stackrel{~}{\mathrm{}}`$ has a long lifetime, it can decay outside the detector ($`\gamma c\tau >3`$ m). In this case, the $`\stackrel{~}{\mathrm{}}`$ will appear in the detector like a ‘muon’ except that the ionization energy loss will be large. This signature is identical to that of a quasi-stable $`\stackrel{~}{\tau }_1`$ discussed above. Therefore, the signal events can be identified using the $`\mathrm{}\mathrm{}+dE/dx`$ selection. The expected $`p_T`$ and $`\beta `$ distributions of the $`\stackrel{~}{\mathrm{}}`$ for $`\mathrm{\Lambda }=40,60`$ TeV are shown in Fig. 18. The $`\stackrel{~}{\mathrm{}}`$s typically have very large $`p_T`$ and are mostly central. For example, about 90% of the $`\stackrel{~}{\mathrm{}}`$s are in the central pseudorapidity region within the tracking coverage for the case of $`\mathrm{\Lambda }=70`$ TeV. Table VI shows the detection efficiencies and the expected significances for different $`\mathrm{\Lambda }`$ values. The $`5\sigma `$ discovery curves are shown in Fig. 17. The lighter chargino mass discovery reach is about 390 GeV for an integrated luminosity of 2 fb<sup>-1</sup> and 480 GeV for 30 fb<sup>-1</sup>.
### D Model Line 4: $`\stackrel{~}{h}`$ as the NLSP
For most of the parameter space, $`\stackrel{~}{\chi }_1^0`$ will predominantly decay to $`\gamma \stackrel{~}{G}`$ if it is the nlsp. However, if $`\stackrel{~}{\chi }_1^0`$ is higgsino-like ($`\stackrel{~}{h}`$), the $`\stackrel{~}{\chi }_1^0\to Z\stackrel{~}{G}`$ and $`\stackrel{~}{\chi }_1^0\to h\stackrel{~}{G}`$ decays could have significant branching ratios in some regions of the parameter space of non-minimal GMSB models. For the Run II studies, these models are defined to have fixed values of
$$N=2,\frac{M_m}{\mathrm{\Lambda }}=3,\mathrm{tan}\beta =3,\frac{\mu }{M_1}=0.75$$
with $`\mathrm{\Lambda }`$ varying. Here $`M_1`$ is the mass parameter associated with the $`U(1)_Y`$ symmetry. In these models, the lightest neutral higgs boson $`h`$ has a mass around 104 GeV. Pair production of supersymmetric particles may result in the $`\stackrel{~}{\chi }_1^0\stackrel{~}{\chi }_1^0\to \gamma \stackrel{~}{G}h\stackrel{~}{G}\to \gamma b\overline{b}\stackrel{~}{G}\stackrel{~}{G}`$ final state, which would appear as $`\gamma b\overline{b}\text{/}E_T`$ events in the detector assuming prompt $`\stackrel{~}{\chi }_1^0`$ decays. These events are characterized by high $`E_T`$ photons and large $`\text{/}E_T`$, as shown in Fig. 19 for $`\mathrm{\Lambda }=80,110`$ TeV. They can be selected using the $`\gamma bj\text{/}E_T`$ selection criteria discussed in Section III B. The detection efficiencies and $`N_s/\delta N_b`$ significances are shown in Table VII for different values of $`\mathrm{\Lambda }`$. Most of the events selected are due to $`\gamma h`$ production with $`h\to b\overline{b}`$. However, a non-negligible fraction of the events is actually due to $`\gamma Z+X`$ with $`Z\to b\overline{b}`$. The discovery reach in $`\mathrm{\Lambda }`$ and $`m_{\stackrel{~}{\chi }_1^\pm }`$ is shown in Fig. 20 for integrated luminosities of 2 and 30 fb<sup>-1</sup>.
If an excess is seen in the $`\gamma bj\text{/}E_T`$ final state, it will be of great interest to reconstruct the di-jet invariant mass. Figure 21 shows the invariant mass distribution of the two leading jets for $`\mathrm{\Lambda }=80`$ TeV and an integrated luminosity of 2 fb<sup>-1</sup>. A mass peak around $`m_h=104`$ GeV is clearly identifiable. The asymmetry in the mass distribution is partly due to the $`\gamma Z+X`$ contribution and partly due to the effect of gluon radiation and the energy outside the jet cone.
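For reference, the invariant mass of two (approximately massless) jets follows from their transverse momenta, pseudorapidities and azimuthal angles; the sketch below uses hypothetical jet kinematics, purely for illustration:

```python
import numpy as np

def dijet_mass(pt1, eta1, phi1, pt2, eta2, phi2):
    # invariant mass of two massless jets:
    # m^2 = 2 * pt1 * pt2 * (cosh(eta1 - eta2) - cos(phi1 - phi2))
    return np.sqrt(2*pt1*pt2*(np.cosh(eta1 - eta2) - np.cos(phi1 - phi2)))

# illustrative jet kinematics (GeV, dimensionless, radians)
print(dijet_mass(80.0, 0.3, 0.0, 55.0, -0.5, 2.6))   # ~139 GeV
```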
## V Summary
In this paper, observable background cross sections for the six final states in which new physics might manifest itself are estimated. All the final states studied are found to have small backgrounds. Implications of the analyses of these final states for future Tevatron runs are discussed in the framework of Gauge Mediated Supersymmetry Breaking models. Potential discovery reaches in the supersymmetry parameter space for integrated luminosities of 2 and 30 fb<sup>-1</sup> are examined for models with different nlsp. Though the selection criteria are not optimized for the models discussed and not all final states are investigated, the study does show that the upgraded DØ experiment at the improved Tevatron collider has great potential for discovery.
## VI Acknowledgement
The author would like to thank D. Cutts, K. Del Signore, G. Landsberg, S. Martin, H. Montgomery, S. Thomas, D. Toback, A. Turcot, H. Weerts, and J. Womersley for their assistance in the course of this study and/or their critical reading of this writeup and X. Tata for pointing out a mistake in one of the background estimations.
# The rare decays $`B\to X_{s,d}\nu \overline{\nu }`$ and $`B_{s,d}\to l^+l^{}`$ in the Multiscale Walking Technicolor Model Supported by the National Natural Science Foundation of China under Grant No.19575015 and by the Sino-British Friendship Scholarship Scheme.
## Abstract
We calculate the contributions to the rare B-decays $`B\to X_{s,d}\nu \overline{\nu }`$ and $`B_{s,d}\to l^+l^{}`$ from the unit-charged technipions. Within the considered parameter space we find that: (a) the enhancements to the branching ratios in question can be as large as three orders of magnitude; (b) the ALEPH data on $`B\to X_s\nu \overline{\nu }`$ lead to strong mass bounds on $`m_{p1}`$ and $`m_{p8}`$: $`m_{p8}>620,475GeV`$ for $`F_Q=40GeV`$ and $`m_{p1}=100,400GeV`$, respectively; (c) the CDF data on $`B_s\to \mu ^+\mu ^{}`$ lead to a relatively weak limit: $`m_{p8}>320GeV`$ for $`F_Q=40GeV`$ and $`m_{p1}=200GeV`$.
PACS: 12.60.Nz, 12.15.Ji, 13.20.Jf
In the framework of the Standard Model (SM), the rare decays $`B\to X_{s,d}\nu \overline{\nu }`$ and $`B_{s,d}\to l^+l^{}`$ are theoretically very clean and dominated by similar $`Z^0`$-penguin and W-box diagrams involving top quark exchanges . These rare B-decay modes therefore may play an important role in searching for new physics beyond the SM .
In ref., Lane and Ramana constructed a specific multiscale walking technicolor model (MWTCM) and investigated its phenomenology. This model predicts a rich spectrum of technipions. Among them are the unit-charged color-octets $`\pi _{\overline{D}U}^a`$ ($`a`$ is the color index) and the unit-charged color-singlets $`P_1^+`$ and $`P_2^+`$. In this letter, we use the symbols $`\pi _8`$ and $`m_{p8}`$ to denote the color-octet $`\pi _{\overline{D}U}`$ and its mass. The mixed state $`\pi _1`$ of the $`P_1^+`$ and $`P_2^+`$ is the same kind of technipion as the $`P^+`$ given in ref.. We will study the new contributions to the rare B-decays from the physical mixed state $`\pi _1`$ instead of the two technipions $`P_1^+`$ and $`P_2^+`$, for the sake of simplicity. According to the estimations in ref., $`m_{p1}\sim 200GeV`$ and $`m_{p8}\sim 300GeV`$.
In ref., the authors calculated the contributions to the rare decay $`b\to s\gamma `$ due to the effective $`bs\gamma `$ coupling induced by the $`\pi _1`$ and $`\pi _8`$ appearing in the MWTCM and found that the resultant enhancement to the branching ratio $`B(B\to X_s\gamma )`$ can be two orders of magnitude. The CLEO data led to a strong limit on the charged technipion masses: $`m_{p8}>600GeV`$ for $`m_{p1}=300GeV`$. The MWTCM itself is therefore strongly disfavored by the CLEO data.
In this letter, we calculate the new contributions to the rare decays $`B\to X_{s,d}\nu \overline{\nu }`$ and $`B_{s,d}\to l^+l^{}`$ due to the effective $`b\overline{s}Z`$ coupling induced, again, by the $`\pi _1`$ and $`\pi _8`$. This work is complementary to the relevant studies of the new effects in the $`b\to s\gamma `$ decay and the rare K-decays, etc. , in order to test or constrain Technicolor (TC) models with the currently available data on rare B- and K-decays.
In the numerical calculation, we treat $`m_{p1}`$ and $`m_{p8}`$ as semi-free parameters, varying in the ranges $`100GeV\le m_{p1}\le 400GeV`$ and $`200GeV\le m_{p8}\le 800GeV`$, respectively. The relevant Yukawa and gauge couplings of the charged technipions to fermion pairs and to the $`Z^0`$ gauge boson can be found in refs..
The new penguin diagrams for the induced $`b\overline{s}Z`$ couplings due to the exchange of the $`\pi _1`$ and $`\pi _8`$ are shown in Fig.1. The corresponding one-loop diagrams in the SM were evaluated long ago and can be found in ref.. Only the color-singlet $`\pi _1`$ couples to lepton pairs, and therefore may contribute to the rare B-decays in question through the box diagrams. But the Yukawa couplings between $`\pi _1`$ and $`l\nu `$ pairs are strongly suppressed by the lightness of the lepton masses $`m_l`$ ($`l=e,\mu ,\tau `$). Consequently, we can safely neglect the tiny contributions from $`\pi _1`$ through the box diagrams.
Because of the lightness of the $`s`$ and $`b`$ quarks when compared with the top quark mass $`m_t`$ and the technipion masses $`m_{p1}`$ and $`m_{p8}`$ we set $`m_s=0`$ and $`m_b=0`$ in the calculation. We will use dimensional regularization to regulate all the ultraviolet divergences in the virtual loop corrections and adopt the modified minimal subtracted ($`\overline{MS}`$) renormalization scheme.
By analytical evaluations of the Feynman diagrams as shown in Fig.1, we find the effective $`b\overline{s}Z`$ vertex induced by the $`\pi _1`$ and $`\pi _8`$ exchanges,
$`\mathrm{\Gamma }_{Z_\mu }^I={\displaystyle \frac{1}{16\pi ^2}}{\displaystyle \frac{g^3}{\mathrm{cos}\theta _W}}{\displaystyle \underset{j}{\sum }}V_{js}^{*}V_{jb}\overline{s_L}\gamma _\mu b_LC_0^{New}(y_j)`$ (1)
$`\mathrm{\Gamma }_{Z_\mu }^{II}={\displaystyle \frac{1}{16\pi ^2}}{\displaystyle \frac{g^3}{\mathrm{cos}\theta _W}}{\displaystyle \underset{j}{\sum }}V_{js}^{*}V_{jb}\overline{s_L}\gamma _\mu b_LC_0^{New}(z_j)`$ (2)
with
$`C_0^{New}(y_j)=\eta _{TC}^a\left[{\displaystyle \frac{y_j(1+2\mathrm{sin}^2\theta _W-3y_j+2\mathrm{sin}^2\theta _Wy_j)}{8(1-y_j)}}-{\displaystyle \frac{\mathrm{cos}^2\theta _Wy_j^2}{2(1-y_j)^2}}\mathrm{ln}(y_j)\right]`$ (3)
$`C_0^{New}(z_j)=\eta _{TC}^b\left[{\displaystyle \frac{z_j(1+2\mathrm{sin}^2\theta _W-3z_j+2\mathrm{sin}^2\theta _Wz_j)}{8(1-z_j)}}-{\displaystyle \frac{\mathrm{cos}^2\theta _Wz_j^2}{2(1-z_j)^2}}\mathrm{ln}(z_j)\right]`$ (4)
and
$`\eta _{TC}^a={\displaystyle \frac{m_{p1}^2}{3\sqrt{2}F_Q^2G_FM_W^2}},\eta _{TC}^b={\displaystyle \frac{8m_{p8}^2}{3\sqrt{2}F_Q^2G_FM_W^2}}`$ (5)
where $`y_j=m_j^2/m_{p1}^2`$, $`z_j=m_j^2/m_{p8}^2`$, $`V_{ij}`$ ($`i=u,c,t`$ and $`j=d,s,b`$) are the elements of the CKM mixing matrix, $`M_W`$ is the mass of the W gauge boson, $`F_Q`$ is the technipion decay constant in the MWTCM, $`\mathrm{sin}\theta _W`$ is the Weinberg angle, and $`G_F=1.16639\times 10^{-5}GeV^{-2}`$ is the Fermi coupling constant. The functions $`C_0^{New}(y_j)`$ and $`C_0^{New}(z_j)`$ in eqs.(3, 4) are just the same kind of functions as the basic function $`C_0(x_i)`$ in eq.(2.18) of ref.. The functions $`C_0(y_t)`$ and $`C_0(z_t)`$ describe the contributions to the $`b\overline{s}Z`$ vertex from the $`\pi _1`$ and $`\pi _8`$, respectively. $`C_0(y_t)`$ is always positive, but $`C_0(z_t)`$ changes its sign from “+” to “−” at $`m_{p8}=531GeV`$.
Within the SM, the rare B-decays under consideration depend on the functions $`X(x_t)`$ and/or $`Y(x_t)`$, which are currently known at the next-to-leading order level . When the new contributions from the charged technipions are included, one has
$`X(x_t,y_t,z_t)`$ $`=`$ $`X(x_t)+C_0^{New}(y_t)+C_0^{New}(z_t),`$ (6)
$`Y(x_t,y_t,z_t)`$ $`=`$ $`Y(x_t)+C_0^{New}(y_t)+C_0^{New}(z_t).`$ (7)
where $`x_t=m_t^2/m_W^2`$, $`y_t=m_t^2/m_{p1}^2`$ and $`z_t=m_t^2/m_{p8}^2`$.
In the following numerical calculations, we fix the relevant parameters as follows and use them as the Standard Input: $`M_W=80.2GeV`$, $`\alpha _{em}=1/129`$, $`\mathrm{sin}^2\theta _W=0.23`$, $`m_t\equiv \overline{m_t}(m_t)=170GeV`$, $`\tau (B_s)=\tau (B_d)=1.6ps`$, $`\mathrm{\Lambda }_{\overline{MS}}^{(5)}=0.225GeV`$, $`F_{B_s}=0.210GeV`$, $`m_{B_s}=5.38GeV`$, $`m_{B_d}=5.28GeV`$, $`A=0.84`$, $`\lambda =0.22`$, $`\rho =0`$ and $`\eta =0.36`$. For the definitions and values of these input parameters, one can see refs..
Within the SM, normalizing to the semi-leptonic branching ratio $`B(B\to X_ce\overline{\nu })`$ and summing over the three neutrino flavors one finds
$`B(B\to X_s\nu \overline{\nu })=B(B\to X_ce\overline{\nu }){\displaystyle \frac{3\alpha _{em}^2}{4\pi ^2\mathrm{sin}^4\theta _W}}{\displaystyle \frac{|V_{ts}|^2}{|V_{cb}|^2}}{\displaystyle \frac{X(x_t)^2}{f(z)}}{\displaystyle \frac{\overline{\eta }}{\kappa (z)}}`$ (8)
where $`\overline{\eta }=\kappa (0)`$, and $`f(z)`$ and $`\kappa (z)`$ with $`z=m_c/m_b=0.29`$ are the phase-space and quantum chromodynamics (QCD) correction factors for the decay $`B\to X_ce\overline{\nu }`$,
$`f(z)`$ $`=`$ $`1-8z^2+8z^6-z^8-24z^4\mathrm{ln}(z),`$ (9)
$`\kappa (z)`$ $`=`$ $`1-{\displaystyle \frac{2\alpha _s(m_b)}{3\pi }}\left[(\pi ^2-{\displaystyle \frac{31}{4}})(1-z)^2+{\displaystyle \frac{3}{2}}\right]`$ (10)
where $`\alpha _s(m_b)`$ is the QCD coupling constant at the energy scale $`\mu =m_b`$.
Within the SM, using the Standard Input parameters and setting $`B(B\to X_ce\overline{\nu })=10.4\%`$ and $`|V_{ts}/V_{cb}|^2=0.95`$, one finds $`B(B\to X_s\nu \overline{\nu })=3.52\times 10^{-5}`$ and $`B(B\to X_d\nu \overline{\nu })=2.03\times 10^{-6}`$. When the new contributions due to $`\pi _1`$ and $`\pi _8`$ are included, one has $`B(B\to X_s\nu \overline{\nu })=1.57\times 10^{-3}`$ for the typical values of $`F_Q=40GeV`$, $`m_{p1}=200GeV`$ and $`m_{p8}=300GeV`$, which is two orders of magnitude higher than the SM prediction. The color-octet $`\pi _8`$ dominates the total contribution.
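As a rough cross-check of these central values, the minimal sketch below evaluates eqs.(8)-(10) with the leading-order Inami-Lim function $`X_0(x_t)`$ in place of the NLO value used in the text, and an assumed $`\alpha _s(m_b)=0.21`$; it reproduces $`B(B\to X_s\nu \overline{\nu })\approx 3.6\times 10^{-5}`$, close to the quoted $`3.52\times 10^{-5}`$:

```python
import numpy as np

# phase-space and QCD correction factors of eqs.(9)-(10)
z = 0.29
alpha_s_mb = 0.21          # assumed alpha_s(m_b) for Lambda_MSbar^(5) = 0.225 GeV
f = 1 - 8*z**2 + 8*z**6 - z**8 - 24*z**4*np.log(z)
kappa = lambda x: 1 - (2*alpha_s_mb/(3*np.pi))*((np.pi**2 - 31/4)*(1 - x)**2 + 3/2)
eta_bar = kappa(0.0)

# leading-order Inami-Lim function X0 (the text uses its NLO value)
xt = (170.0/80.2)**2
X = (xt/8)*((xt + 2)/(xt - 1) + 3*(xt - 2)/(xt - 1)**2*np.log(xt))

alpha_em, sin2w = 1/129, 0.23
BR = 0.104*(3*alpha_em**2/(4*np.pi**2*sin2w**2))*0.95*(X**2/f)*(eta_bar/kappa(z))
print(f, kappa(z), X, BR)   # ~0.54, ~0.89, ~1.56, ~3.6e-5
```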
In Fig.2 the dot-dash (solid, dots) curve shows the branching ratio when the new contributions from $`\pi _1`$ and $`\pi _8`$ are taken into account for $`F_Q=40GeV`$ and $`m_{p1}=100GeV`$ ($`200GeV`$, $`400GeV`$). The horizontal short-dash line shows the ALEPH bound on the branching ratio: $`B(B\to X_s\nu \overline{\nu })<7.7\times 10^{-4}`$, which is a factor of 20 above the SM expectation but sensitive enough to put stringent limits on the technipion masses. Assuming $`F_Q=40GeV`$, one has $`m_{p8}>620GeV`$ for $`m_{p1}=100GeV`$, $`530GeV<m_{p8}<890GeV`$ for $`m_{p1}=200GeV`$ and $`475GeV<m_{p8}<740GeV`$ for $`m_{p1}=400GeV`$. For smaller $`F_Q`$, the constraints become much stronger.
In the case of the decay $`B\to X_d\nu \overline{\nu }`$ one has to replace $`V_{ts}`$ in eq.(8) by $`V_{td}`$, which results in a decrease of the branching ratio by roughly an order of magnitude. Unfortunately, no experimental bound on the decay $`B\to X_d\nu \overline{\nu }`$ is currently available.
Within the SM, using the effective Hamiltonian as given in ref. one finds
$`B(B_s\to l^+l^{})={\displaystyle \frac{\tau (B_s)G_F^2}{\pi }}({\displaystyle \frac{\alpha _{em}}{4\pi \mathrm{sin}^2\theta _W}})^2F_{B_s}^2m_l^2m_{B_s}\sqrt{1-{\displaystyle \frac{4m_l^2}{m_{B_s}^2}}}|V_{tb}^{*}V_{ts}|^2Y(x_t)^2`$ (11)
with $`s`$ replaced by $`d`$ in the case of $`B_d\to l^+l^{}`$.
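A direct numerical evaluation of eq.(11) with the Standard Input (and the leading-order Inami-Lim function $`Y_0`$ as a stand-in for the NLO value) is sketched below; it gives a SM branching ratio of order $`10^{-9}`$, far below the experimental bounds discussed next:

```python
import numpy as np

GF, hbar = 1.16639e-5, 6.582e-25     # GeV^-2, GeV*s
tau_Bs = 1.6e-12/hbar                # B_s lifetime converted to GeV^-1
alpha_em, sin2w = 1/129, 0.23
FBs, mBs, mmu = 0.210, 5.38, 0.10566 # GeV; mmu is the muon mass
VtbVts2 = 0.0021                     # |V_tb^* V_ts|^2

xt = (170.0/80.2)**2                 # leading-order Inami-Lim function Y0
Y = (xt/8)*((xt - 4)/(xt - 1) + 3*xt/(xt - 1)**2*np.log(xt))

BR = (tau_Bs*GF**2/np.pi)*(alpha_em/(4*np.pi*sin2w))**2 \
     * FBs**2*mmu**2*mBs*np.sqrt(1 - 4*mmu**2/mBs**2)*VtbVts2*Y**2
print(Y, BR)                         # Y ~ 1.0, BR of order a few times 1e-9
```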
In the numerical calculations, we use the standard input parameters and assume that $`F_Q=40GeV`$, $`m_{p1}=200GeV`$, $`200GeV\le m_{p8}\le 800GeV`$, and set $`|V_{tb}^{*}V_{ts}|^2=0.0021`$ and $`|V_{tb}^{*}V_{td}|^2=1.3\times 10^{-4}`$. The numerical results with the inclusion of the new contributions from the technipions $`\pi _1`$ and $`\pi _8`$ are listed in Table 1.
For the decays $`B_s\to l^+l^{}`$, the available experimental bound is $`B(B_s\to \mu ^+\mu ^{})\le 2.0\times 10^{-6}`$, which leads to the lower bound on $`m_{p8}`$: $`m_{p8}>320GeV`$ for $`F_Q=40GeV`$ and $`m_{p1}=200GeV`$. But this bound is much weaker than that from the ALEPH data on $`B\to X_s\nu \overline{\nu }`$. For the decays $`B_d\to l^+l^{}`$, the available experimental bound is $`B(B_d\to \mu ^+\mu ^{})\le 6.8\times 10^{-7}`$ , which is still not sensitive enough to put any limits on $`m_{p1}`$ and $`m_{p8}`$.
In summary, the ALEPH data on $`B\to X_s\nu \overline{\nu }`$ lead to strong limits on the charged technipion masses $`m_{p1}`$ and $`m_{p8}`$. The assumed mass ranges of $`\pi _1`$ and $`\pi _8`$ in the MWTCM are excluded, and therefore the model itself is strongly disfavored by the ALEPH data. Other relevant studies have also led to similar conclusions. The major problem of the MWTCM is that the heavy top quark mass is assumed to be generated by the extended technicolor interaction, which is clearly unreasonable.
Figure Captions
New $`Z^0`$-penguin diagrams contributing to the induced $`b\overline{s}Z`$ vertex from the internal exchanges of the technipions $`\pi _1`$ and $`\pi _8`$. The dashed lines are the $`\pi _1`$ and $`\pi _8`$ lines and $`u_j`$ stands for the quarks $`(u,c,t)`$.
The $`m_{p8}`$ dependence of the branching ratio $`B(B\to X_s\nu \overline{\nu })`$ when the new contributions are included. The horizontal short-dash line shows the ALEPH upper bound, while the dot-dash (solid, dots) curve shows the branching ratio for $`m_{p1}=100GeV`$ ($`200GeV`$, $`400GeV`$).
# Floppy Membranes
## Abstract
Floppy membranes are tensionless surfaces without extrinsic stiffness whose fluctuations are governed by fourth-order bending elasticity. This suppresses spiky superstructures and ensures that floppy membranes remain smooth over any distance, with Hausdorff dimension $`D_H=2`$, in contrast to surfaces with stiffness, which are rough beyond the scale of some finite persistence length.
PACS: 68.10.-m
1. Under deformations, fluid membranes behave approximately like ideal tensionless surfaces with curvature stiffness , and have a model energy
$$H=\kappa \int d^2\xi \sqrt{g}\,\mathrm{Tr}C^2,$$
(1)
where $`C_{ab}`$ is the second fundamental form of the surface described by an embedding function $`𝐱=𝐱(\xi ^1,\xi ^2)`$. The symbol $`g`$ denotes the determinant of the induced metric $`g_{ab}\equiv \partial _a𝐱\cdot \partial _b𝐱`$.
Thermal fluctuations are known to soften the curvature stiffness with increasing membrane size, such that there exists a finite persistence length $`\zeta =\zeta _0\mathrm{exp}(4\pi \kappa /3T)`$, where $`\zeta _0`$ is the molecular size, beyond which the membrane loses its stiffness completely and begins exhibiting a surface tension proportional to $`\zeta ^{-2}`$, which was initially absent. Thus the tangential correlation functions have only the range $`\zeta `$, and surfaces much larger than $`\zeta `$ appear rough. The typical behavior of bilipid vesicles can therefore be observed in the laboratory only for sizes smaller than $`\zeta `$. Surfaces much larger than $`\zeta `$ will crumple and fill the embedding space with spiky structures .
Apart from these common membranes, nature may provide us also with another type, not described by the Hamiltonian (1). If the molecules in a bilayer are strongly conical, the bending stiffness can be zero or negative . Such a situation can also arise for charged or dipolar molecules . In this case we shall speak of floppy membranes. The purpose of this note is to study the statistical properties of such floppy membranes, which will turn out to be quite different from those of ordinary membranes. In particular we shall find that, in contrast to ordinary membranes, floppy membranes without tension and stiffness are smooth over long distances. If the stiffness is negative, floppy membranes are able to form disordered superstructures similar to those recently observed in the laboratory . These are thought to be molten versions of the egg carton-like crystalline arrangement of local maxima and minima on the surface. This has not yet been confirmed experimentally, but was suggested by recent numerical simulations .
2. In order to describe floppy membranes, we must stabilize their fluctuations by adding to the energy (1) a higher-gradient term. Here we shall consider only one of several possibilities, focusing our attention upon the following Hamiltonian:
$`H`$ $`=r{\displaystyle \int d^2\xi \sqrt{g}}+{\displaystyle \frac{\kappa }{2}}{\displaystyle \int d^2\xi \sqrt{g}\,\mathrm{Tr}C^2}`$ (3)
$`+{\displaystyle \frac{1}{2m}}{\displaystyle \int d^2\xi \sqrt{g}\left[g^{cd}𝒟_aC_{ac}𝒟_bC_{bd}+\mathrm{Tr}C^4\right]}.`$
The properties of a surface with this Hamiltonian will be studied non-perturbatively in the limit of a large number $`D`$ of embedding dimensions.
The first term in (3) parametrizes the surface tension; the third term provides the surface with the stabilizing higher-order bending stiffness, whose parameter $`m`$ has the dimension $`(\mathrm{energy}\times \mathrm{surface})^1`$. Although this term is irrelevant in a perturbative renormalization group analysis, it becomes relevant non-perturbatively in the limit $`D\mathrm{}`$ by a mechanism familiar from the three-dimensional Gross-Neveu model . The new term stabilizes the surface against growing spikes and makes it smooth over long distances.
The Hamiltonian (3) can be reformulated alternatively in terms of the tangent vectors $`_a𝐱`$ of the surface, or in terms of the normal vectors $`𝐧`$:
$`H`$ $`=r{\displaystyle \int d^2\xi \sqrt{g}}-{\displaystyle \frac{1}{m}}{\displaystyle \int d^2\xi \sqrt{g}\,K^2}`$ (5)
$`+{\displaystyle \frac{1}{2}}{\displaystyle \int d^2\xi \sqrt{g}\,𝐧\left[-\kappa 𝒟^2+\frac{1}{m}𝒟^4\right]𝐧},`$
where $`K`$ is the Gaussian curvature. This form exposes an important physical aspect of the model. As pointed out in , the first term in the second line is analogous to the continuous version of the Heisenberg model of ferromagnets , albeit with an additional integrability condition for the $`𝐧(\xi )`$-field. It tries to make an ordinary membrane with a positive stiffness smooth, corresponding to a ferromagnetic alignment of the normal vectors. The fact that ordinary membranes cannot be smooth over long distances has its parallel in the absence of an ordered phase in the two-dimensional Heisenberg ferromagnet. By the same analogy, we see that our new term introduces (apart from an intrinsic term $`K^2`$) an antiferromagnetic next-to-nearest neighbours interaction between normal vectors. This generates frustration, and it is due to this non-local interaction that the surface can have an ordered phase after all, although with an antiferromagnetic type of order.
To exhibit these features analytically, a formulation of the Hamiltonian (3) in terms of the tangent vectors will be most convenient:
$$H=\frac{1}{2}\int d^2\xi \sqrt{g}\,g^{ab}𝒟_ax_\mu \left(r-\kappa 𝒟^2+\frac{1}{m}𝒟^4\right)𝒟_bx_\mu .$$
(6)
3. We analyze the model (6) in the large-$`D`$ approximation along the lines of Refs. . To this end we introduce a Lagrange multiplier matrix $`\lambda ^{ab}`$ to enforce the constraint $`g_{ab}=\partial _a𝐱\cdot \partial _b𝐱`$, extending the Hamiltonian (6) to
$$H_{\mathrm{ext}}=H+\frac{1}{2}\int d^2\xi \sqrt{g}\,\lambda ^{ab}\left(\partial _a𝐱\cdot \partial _b𝐱-g_{ab}\right).$$
(7)
Then we parametrize the surface in a Gauss map by $`𝐱(\xi )=(\xi _1,\xi _2,\varphi ^i(\xi ))`$, $`(i=3,\mathrm{},D)`$, where $`-R_1/2\le \xi _1\le R_1/2`$, $`-R_2/2\le \xi _2\le R_2/2`$ and $`\varphi ^i(\xi )`$ describe the ($`D`$-2) transverse fluctuations. In the limit of large $`D`$, to be studied here, the large number of components suppresses the fluctuations of $`\lambda ^{ab}`$ and $`g_{ab}`$. These fields take extremal values which, for large surface areas, are homogeneous and isotropic: $`g_{ab}=\rho \delta _{ab}`$, $`\lambda ^{ab}=\lambda g^{ab}`$. Thus we may replace (7) for large $`D`$ by
$`H_{\mathrm{ext}}`$ $`={\displaystyle \int d^2\xi \left[r+\lambda (1-\rho )\right]}`$ (9)
$`+{\displaystyle \frac{1}{2}}{\displaystyle \int d^2\xi \,\partial _a\varphi ^iK\left(𝒟^2\right)\partial _a\varphi ^i},`$
where $`K`$ represents the differential operator
$`K\left(𝒟^2\right)`$ $`=r+\lambda -\kappa 𝒟^2+{\displaystyle \frac{1}{m}}𝒟^4.`$ (10)
Integrating out the transverse fluctuations, always for large areas, we get the free energy
$`F`$ $`=A_{\mathrm{ext}}\left[r+\lambda (1-\rho )\right]`$ (12)
$`+A_{\mathrm{ext}}{\displaystyle \frac{D-2}{8\pi ^2}}\rho {\displaystyle \int d^2p\,\mathrm{ln}\left[p^2K\left(p^2\right)\right]},`$
where, for simplicity, we have chosen natural units by setting $`\beta =1/k_\mathrm{B}T=1`$, and $`A_{\mathrm{ext}}=R_1R_2`$ is the extrinsic, projected area in the coordinate plane. The factor $`(D-2)`$ in the second term ensures that, for large $`D`$, the fields $`\lambda `$ and $`\rho `$ are extremal and thus satisfy the saddle-point (“gap”) equations
$$0=f(r,\kappa ,m,\lambda ),\rho =\frac{1}{f^{}(r,\kappa ,m,\lambda )}.$$
(13)
The prime denotes derivatives with respect to $`\lambda `$ and the saddle-point function $`f`$ is defined by
$$f(r,\kappa ,m,\lambda )\equiv \lambda -\frac{D-2}{8\pi }\int dp\,p\,\mathrm{ln}\left[p^2K\left(p^2\right)\right].$$
(14)
Inserting (13) in (12) we get $`F=\left(r+\lambda \right)A_{\mathrm{ext}}`$, showing that $`r_{\mathrm{ph}}\equiv \left(r+\lambda \right)`$ is the physical surface tension.
We now introduce the following combinations of the parameters of the model:
$$R^2\equiv \frac{1}{2}\sqrt{m(r+\lambda )}+\frac{\kappa m}{4},\qquad I^2\equiv \frac{1}{2}\sqrt{m(r+\lambda )}-\frac{\kappa m}{4}.$$
(15)
In terms of these, the kernel $`K`$ can be written as
$$mK\left(p^2\right)=\left(R^2+I^2\right)^2+2\left(R^2-I^2\right)p^2+p^4,$$
(16)
from where we deduce the stability condition for the homogeneous saddle-point as being $`R^2>0`$, ensuring that the spectrum of transverse fluctuations is positive for all $`p>0`$. Since we shall mostly consider negative or vanishing stiffnesses $`\kappa `$ we shall also assume that $`I^2>0`$, so that both $`R`$ and $`I`$ are real. Note that the spectrum of transverse fluctuations depends drastically on the sign of the stiffness: for negative $`\kappa `$ we have $`I>R`$ and $`K`$ develops a minimum at $`p=\sqrt{I^2-R^2}`$. Correspondingly (but for a slightly higher value of $`I/R`$), the spectrum $`E\left(p^2\right)=p^2K\left(p^2\right)`$ develops a roton-like minimum, which, for $`R/I\ll 1`$, lies at $`p\simeq I\left(1-8R^2/3I^2\right)`$, as shown schematically in Fig. 1.
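From eq.(16), setting $`dK/dp^2=0`$ immediately gives the minimum at $`p^2=I^2-R^2`$; a quick numerical check with illustrative values of $`R`$ and $`I`$ (arbitrary units):

```python
import numpy as np

def mK(p2, R, I):
    # m*K(p^2) = (R^2 + I^2)^2 + 2*(R^2 - I^2)*p^2 + p^4,  Eq. (16)
    return (R**2 + I**2)**2 + 2*(R**2 - I**2)*p2 + p2**2

R, I = 0.3, 1.0                        # negative-stiffness regime (I > R)
p2 = np.linspace(0.0, 3.0, 300001)
p_min = np.sqrt(p2[np.argmin(mK(p2, R, I))])
print(p_min, np.sqrt(I**2 - R**2))     # both ~0.954: minimum at p = sqrt(I^2 - R^2)
```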
Having established the stability conditions we proceed to the evaluation of the saddle-point function. This contains an ultraviolet divergent integral which must be regularized. To this end we use standard dimensional regularization, computing the integral in $`(2-ϵ)`$ dimensions. For small $`ϵ`$, this leads to
$`f(r,\kappa ,m,\lambda )`$ $`=\lambda +{\displaystyle \frac{1}{4\pi }}\left(R^2-I^2\right)\mathrm{ln}{\displaystyle \frac{R^2+I^2}{\mathrm{\Lambda }^2}}`$ (18)
$`-{\displaystyle \frac{1}{2\pi }}RI\left({\displaystyle \frac{\pi }{2}}+\mathrm{arctan}{\displaystyle \frac{I^2-R^2}{2RI}}\right),`$
where $`\mathrm{\Lambda }\equiv \mu \mathrm{exp}(2/ϵ)`$ and $`\mu `$ is a reference scale which must be introduced for dimensional reasons. The scale $`\mathrm{\Lambda }`$ plays the role of an ultraviolet cutoff, diverging for $`ϵ\to 0`$.
In order to distinguish the various phases of our model we compute two correlation functions. First, we consider the orientational correlation function $`g_{ab}(\xi -\xi ^{})\equiv \langle \partial _a\varphi ^i(\xi )\partial _b\varphi ^i(\xi ^{})\rangle `$ for the normal components of tangent vectors to the surface. From (9) this is given by
$$g_{ab}(\xi -\xi ^{})=\frac{\delta _{ab}}{4\pi ^2}\int d^2p\frac{1}{K\left(p^2\right)}\mathrm{e}^{i\sqrt{\rho }p(\xi -\xi ^{})}.$$
(19)
In terms of $`R`$ and $`I`$, the Fourier components can be written as
$$\frac{1}{K\left(p^2\right)}=\frac{m}{2RI}\mathrm{Im}\frac{1}{p^2+(R-iI)^2},$$
(20)
from where we obtain the following exact result for the diagonal elements $`g\equiv g_{aa}`$ of (19):
$$2\pi g(d)=\frac{m}{2RI}\mathrm{Im}K_0\left(\left(R-iI\right)\sqrt{\rho }d\right),$$
(21)
where $`d\equiv |\xi -\xi ^{}|`$ and $`K_0`$ is a Bessel function of imaginary argument .
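Eq.(21) is easy to evaluate with an arbitrary-precision Bessel routine. The sketch below (illustrative $`R`$, $`I`$, with $`m=\rho =1`$) contrasts the monotonic decay obtained for $`R\gg I`$ with the $`J_0`$-like oscillations of the opposite regime, anticipating eqs.(24) and (25) below:

```python
import mpmath as mp

def two_pi_g(d, R, I, m=1.0, rho=1.0):
    # 2*pi*g(d) = (m / 2RI) * Im K_0((R - iI) * sqrt(rho) * d),  Eq. (21)
    return (m/(2*R*I))*mp.im(mp.besselk(0, (R - 1j*I)*mp.sqrt(rho)*d))

for d in (1.0, 3.0, 5.0, 8.0):
    stiff  = two_pi_g(d, R=1.0, I=0.05)   # R >> I: monotonic exponential decay
    floppy = two_pi_g(d, R=0.05, I=1.0)   # I >> R: J0-like oscillations
    print(d, stiff, floppy)
```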
Secondly, we compute the scaling law of the distance $`d_E`$ in embedding space between two points on the surface when changing its projection $`d`$ on the reference plane. The exact relation between the two lengths is
$$d_E^2=d^2+\underset{i}{\sum }\langle |\varphi ^i(\xi )-\varphi ^i(\xi ^{})|^2\rangle .$$
(22)
With a computation analogous to the one leading to (21) we obtain the following behaviour:
$`d_E^2=\{\begin{array}{cc}{\displaystyle \frac{\left(R^2+I^2\right)}{8\pi r_{\mathrm{ph}}RI}}\mathrm{arctan}(I/R)\,\alpha d^2,\hfill & d^2\ll \frac{1}{\alpha }\hfill \\ & \\ {\displaystyle \frac{1}{2\pi r_{\mathrm{ph}}}}\left[\mathrm{ln}\left(\alpha d^2/4\right)+c(R,I)\right],\hfill & \frac{1}{\alpha }\ll d^2\ll \frac{1}{2\pi r_{\mathrm{ph}}}\hfill \\ & \\ d^2,\hfill & d^2\gg \frac{1}{2\pi r_{\mathrm{ph}}}\hfill \end{array}`$ (23)
where $`\alpha \equiv \left(R^2+I^2\right)\rho `$ and $`c(R,I)=C+\left[\left(I^2-R^2\right)/RI\right]\mathrm{arctan}(I/R)`$ with $`C`$ = Euler’s constant.
These results show that the model has three possible phases. The first is realized when there are no solutions to the saddle-point equations in the allowed range of parameters. For this choice of parameters, there exist no homogeneous, isotropic surfaces. In this phase the surfaces will form inhomogeneous structures.
If a solution to the saddle-point equations exists, two situations can be realized. For large positive stiffness $`\kappa `$ we have $`R\gg I`$, the asymptotic region is $`d\gg 1/R\sqrt{\rho }`$ and $`I`$ can be neglected. In this region we have
$$g(d)\sim \frac{1}{\sqrt{R\sqrt{\rho }d}}\mathrm{e}^{-R\sqrt{\rho }d},$$
(24)
exhibiting short-range orientational order. For short distances $`d\ll 1/R\sqrt{\rho }`$, the surfaces behave as two-dimensional objects. If $`\rho `$ becomes large we have a region $`1/R\sqrt{\rho }\ll d\ll 1/\sqrt{2\pi r_{\mathrm{ph}}}`$ in which $`d_E`$ scales logarithmically with $`d`$ and distances along the surface become large. The transition to this regime happens on the scale of the persistence length $`d_E^{\mathrm{PL}}=1/\sqrt{r_{\mathrm{ph}}}`$. Above this scale world-sheets are crumpled, with no orientational correlations (if the tension is not large enough to dominate over the entire surface, causing $`\rho \simeq 1`$). This phase corresponds to the familiar behaviour of stiff membranes .
For large negative stiffness $`\kappa `$, in contrast, we have $`I\gg R`$, the asymptotic region is $`d\gg 1/I\sqrt{\rho }`$, and $`R`$ can be neglected in $`K_0`$ for $`d\ll 1/R\sqrt{\rho }`$. In this region we have
$$g(d)=\frac{m}{8RI}J_0\left(I\sqrt{\rho }d\right),$$
(25)
with $`1/R\sqrt{\rho }`$ playing the role of an infrared cutoff for the oscillations on the scale $`1/I\sqrt{\rho }`$ over which the Bessel function $`J_0`$ varies. We have thus a new scale (in embedding space)
$$d_E^O=\sqrt{\frac{I}{32Rr_{\mathrm{ph}}}},$$
(26)
on which the transverse fluctuations create oscillations characterized by the “antiferromagnetic” orientational correlations (25). Crumpling takes place only if $`1/R\sqrt{\rho }\ll 1/\sqrt{2\pi r_{\mathrm{ph}}}`$ and the corresponding persistence length is now
$$d_E^{\mathrm{PL}}=\frac{1}{\sqrt{4\pi r_{\mathrm{ph}}}}\sqrt{\frac{\pi I}{2R}+\mathrm{ln}\frac{I^2}{4R^2}},$$
(27)
which is much larger than $`1/\sqrt{r_{\mathrm{ph}}}`$. Otherwise, the oscillating superstructure goes over directly into the tension dominated region. In this case $`d_E`$ scales logarithmically with $`d`$ for $`1/I\sqrt{\rho }\ll d\ll 1/\sqrt{2\pi r_{\mathrm{ph}}}`$, and $`\rho `$ is large not because of crumpling but because of the oscillating superstructure. Indeed there are strong orientational correlations in this region. Note that the oscillations represent a disordered superstructure caused by fluctuations on an otherwise homogeneous and isotropic ground-state described by the solution of the saddle-point equations.
4. One might imagine that our disordered superstructure undergoes a transition to a crystalline egg-carton-type structure when $`R\to 0`$, so that the spectrum of transverse fluctuations develops an instability at a finite value $`p=I`$. However this is not so, as can be seen from the explicit expression for $`\rho `$ obtained from (13),
$$\rho =\left[1-\frac{m}{16\pi RI}\left(\frac{\pi }{2}+\mathrm{arctan}\frac{I^2-R^2}{2RI}\right)\right]^{-1}.$$
(28)
When lowering $`R`$ at fixed $`I`$ one hits a pole where $`\rho `$ diverges. This means that the surface crumples before one reaches the crystal instability.
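The pole can be located explicitly from eq.(28); for the illustrative choice $`I=m=1`$ a simple root finder places it at $`R\approx 0.06>0`$, i.e. the homogeneous solution is lost well before the would-be crystal instability at $`R=0`$ is reached:

```python
import numpy as np
from scipy.optimize import brentq

def inv_rho(R, I=1.0, m=1.0):
    # 1/rho from Eq. (28); rho diverges where this expression vanishes
    return 1 - (m/(16*np.pi*R*I))*(np.pi/2 + np.arctan((I**2 - R**2)/(2*R*I)))

R_star = brentq(inv_rho, 1e-3, 1.0)    # bracket the zero for I = m = 1
print(R_star)                           # ~0.06: rho diverges well before R -> 0
```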
For symmetry reasons, the transition from the stiff to the disordered superstructure phase occurs on the line $`R=I`$, where $`\kappa =0`$ and the kernel $`K`$ develops its minimum.
The fourth-order bending elasticity term dominates the fluctuations of surfaces which have both vanishing bare tension and stiffness. These can be studied further analytically since, for $`\kappa =0`$ ($`R=I`$), the saddle-point equations become polynomial, with solution
$$\lambda =\frac{m}{128}\left(1+\sqrt{1+256\frac{r}{m}}\right).$$
(29)
This gives
$$r_{\mathrm{ph}}=\frac{a^2}{64}m,\qquad \rho =\left(1-\frac{1}{2a}\right)^{-1},$$
(30)
where $`a`$ is the following function of the dimensionless parameter $`r/m`$:
$$a^2=\frac{1+128r/m+\sqrt{1+256r/m}}{2}.$$
(31)
This is the previously announced result. The fourth-order bending elasticity term, although irrelevant in perturbation theory, becomes relevant non-perturbatively by contributing a term $`m/64`$ to the physical surface tension. This is the reason why, contrary to stiff membranes , floppy membranes do not crumple. The physical tension can be decreased arbitrarily with $`\rho `$ remaining in the range $`1\le \rho \le 2`$. In other words one can lower the two scales $`r`$ and $`m`$ so that the range of orientational correlations $`d=1/R\sqrt{\rho }=(4/a\sqrt{m})\sqrt{a-1/2}`$ is always of the same order or larger than the inverse of the square root of the physical surface tension $`1/\sqrt{r_{\mathrm{ph}}}=8/a\sqrt{m}`$.
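The algebra of eqs.(29)-(31) is easily verified numerically: the sketch below confirms that $`r+\lambda `$ from eq.(29) coincides with $`(a^2/64)m`$ from eq.(31), and that $`\rho `$ stays within the quoted range:

```python
import numpy as np

m = 1.0                                                   # arbitrary units
for r_over_m in (0.0, 0.01, 0.1, 1.0, 10.0):
    r = r_over_m * m
    lam = (m/128.0)*(1.0 + np.sqrt(1.0 + 256.0*r/m))              # Eq. (29)
    a = np.sqrt(0.5*(1.0 + 128.0*r/m + np.sqrt(1.0 + 256.0*r/m))) # Eq. (31)
    rho = 1.0/(1.0 - 0.5/a)                                       # Eq. (30)
    print(r_over_m, r + lam, a**2*m/64.0, rho)  # r_ph both ways; 1 <= rho <= 2
```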
The corresponding $`\gamma `$-functions are easily obtained as
$`\gamma (r)`$ $`\equiv \mathrm{\Lambda }{\displaystyle \frac{d}{d\mathrm{\Lambda }}}\mathrm{ln}{\displaystyle \frac{r}{\mathrm{\Lambda }^2}}=-2,`$ (32)
$`\gamma (m)`$ $`\equiv \mathrm{\Lambda }{\displaystyle \frac{d}{d\mathrm{\Lambda }}}\mathrm{ln}{\displaystyle \frac{m}{\mathrm{\Lambda }^2}}=-2,`$ (33)
showing the absence of anomalous dimensions for $`\kappa =0`$. Correspondingly, we have
$$\beta \left(\frac{r}{m}\right)=\mathrm{\Lambda }\frac{d}{d\mathrm{\Lambda }}\frac{r}{m}=\frac{r}{m}\left[\gamma (r)-\gamma (m)\right]=0,$$
(34)
which means that $`r/m`$, and thus also $`a`$, are renormalization group invariants.
5. In conclusion we see that the physics of floppy membranes is governed by an infrared-stable fixed-point characterized by vanishing tension and a dimensionless renormalization group invariant parameter $`a^{*}\equiv \lim _{r\to 0,m\to 0}a(r,m)`$. At this point, the surface exhibits long-range order in which the diagonal elements of the correlation functions $`g_{ab}(\xi -\xi ^{})`$ do not depend on the distance, $`g(d)=4\pi ^2/a^{*}`$, and in which the length (22) scales with the distance in coordinate space like $`d_E^2=\left(\pi ^2\rho ^{*}/a^{*}\right)d^2`$, from which we deduce a Hausdorff dimension $`D_H=2`$ for floppy membranes.
# Fluctuations and correlations in sandpile models
## Abstract
We perform numerical simulations of the sandpile model for non-vanishing driving fields $`h`$ and dissipation rates $`ϵ`$. Unlike simulations performed in the slow driving limit, the unique time scale present in our system allows us to measure unambiguously response and correlation functions. We discuss the dynamic scaling of the model and show that fluctuation-dissipation relations are not obeyed in this system.
PACS numbers: 05.65.+b, 05.70.Ln
The sandpile automaton is one of the simplest models of avalanche transport, a phenomenon of growing experimental and theoretical interest. In the model introduced by Bak, Tang and Wiesenfeld (BTW) , grains of “energy” are injected into the system. Open boundary conditions or bulk dissipation ensure a balance between input and output flow and allow for a non-equilibrium stationary state. In the limit of slow external driving and small dissipation, which corresponds to an infinite time scale separation between driving and response, the model displays a highly fluctuating avalanche behavior, indicative of a critical point. Despite the impressive theoretical effort devoted to understanding the critical behavior of the model , several important issues still remain to be addressed.
Numerical simulations are usually performed under slow driving and boundary dissipation, since the limit of infinite time scale separation is easily implemented in the computer and provides a simple way to access the avalanche critical behavior . However, due to the presence of two infinitely separated time scales, an unambiguous definition of dynamic response and correlation functions is not possible . This hinders a clear characterization of the non-equilibrium stationary state in terms of static and dynamic response and correlation functions. Evaluation of these quantities helps to elucidate the nature of the critical point and provides a test of fluctuation-dissipation relations, at least in some weaker sense. Recently, it has been proposed to interpret the behavior of sandpile models in analogy with other non-equilibrium critical phenomena, such as absorbing phase transitions , driven interfaces in random media and branching processes . These theoretical studies suggest new ways to perform numerical simulations in which a unique time scale is considered .
In this letter, we present numerical simulations of the sandpile model for different driving rates $`h`$ and study how the system approaches the critical point when $`h\to 0`$. In this way, we are able to measure quantities that are not accessible in the time scale separation regime. The local density of active sites, which can be identified as the order parameter of the model , is homogeneous only in the case of bulk dissipation. For boundary dissipation, it displays a marked curvature, which was anticipated in Refs. and could explain several scaling anomalies found in the BTW model. The energy landscape is instead homogeneous in both cases and its statistical properties do not depend on the dissipation rate $`ϵ`$ in the limit $`h\to 0`$.
We measure correlation and response functions in the time and space domains and observe the scaling of the related characteristic lengths and times. We find two different characteristic times, implying that fluctuation-dissipation relations are not obeyed. We observe, however, a well defined scaling behavior, and the values of the critical exponents are in agreement with recent large scale numerical simulations of slowly driven sandpiles . Finally, the present numerical analysis opens the way to future studies to resolve some longstanding problems such as the precise identification of universality classes for these models .
In sandpile models , each site $`i`$ of a $`d`$-dimensional lattice bears an integer variable $`z_i\ge 0`$, which we call energy. At each time step an energy grain is added on a randomly chosen site. When a site reaches or exceeds a threshold $`z_c`$ it topples: $`z_i\to z_i-z_c`$, and $`z_j\to z_j+1`$ at each of the $`g`$ nearest neighbors (nn) of $`i`$. Each toppling can trigger nn to topple and so on, generating an avalanche. The original BTW model is conservative and energy is dissipated only at the boundary, i.e. energy grains from toppling boundary sites flow out of the system. Infinitely slow driving is implicitly built into the model: during the avalanche the energy input stops, until the system is again quiescent (no active sites are present), so that we can identify two distinct time scales $`T_d`$ and $`T_a`$, for driving and activity, respectively. A single driving time step can in principle be followed by an infinite number of avalanche time steps and $`T_a/T_d\to 0`$. For this reason, there are two possible definitions for the correlation function, depending on the choice of the scale used to measure time (slow or fast) .
Here we simulate the BTW sandpile model for a non-vanishing driving field: each site has a probability $`h`$ per unit time to receive an energy grain, even if active sites are present in the system. This defines a unique time step for both driving and activity updating. The parameter $`h`$ sets the driving rate, and in the limit $`h\to 0^+`$ we recover the slow driving limit; i.e., during the evolution of an avalanche the system does not receive energy. We consider two possible mechanisms for energy dissipation: (i) the usual boundary dissipation and (ii) bulk dissipation, simulated by introducing a probability $`\alpha `$ that a toppling site loses its energy without transferring it to the neighbors, which corresponds to an effective average dissipation $`ϵ=\alpha z_c`$. In case (ii), we impose periodic boundary conditions. We use two dimensional lattices with linear sizes ranging from $`L=64`$ to $`L=300`$, and parameters in the ranges $`10^{-6}<h<10^{-3}`$ and $`10^{-3}<ϵ<10^{-2}`$.
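A minimal sketch of such a simulation (illustrative lattice size and parameters, parallel update, bulk dissipation with probability $`\alpha `$ per toppling) reads:

```python
import numpy as np

rng = np.random.default_rng(0)
L, zc, h, alpha = 64, 4, 4e-3, 0.05            # eps = alpha * zc = 0.2
z = rng.integers(0, zc, size=(L, L))

rho_a, steps, transient = [], 4000, 1000
for t in range(steps):
    z += (rng.random((L, L)) < h)                      # uniform driving at rate h
    active = z >= zc
    if active.any():
        z[active] -= zc                                # parallel toppling
        keep = active & (rng.random((L, L)) >= alpha)  # bulk dissipation, prob alpha
        for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            z += np.roll(keep, shift, axis=(0, 1))     # periodic boundaries
    if t >= transient:
        rho_a.append(active.mean())

print(np.mean(rho_a), h/(alpha*zc))    # measured vs predicted rho_a = h/eps
```

With these illustrative values the measured density of active sites fluctuates around $`h/ϵ=0.02`$, the flux-balance prediction derived in the next paragraph.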
The order parameter in sandpile models is the density $`\rho _a`$ of active sites, whose energy is larger than $`z_c`$ . The dependence of the order parameter on the control parameters $`h`$ and $`ϵ`$ is readily obtained by means of conservation arguments : since energy is conserved in the stationary state, the incoming energy flux $`J_{in}=hL^d`$ must be equal to the dissipated energy $`J_{out}=ϵ\rho _aL^d`$. By equating the two fluxes we obtain $`\rho _a=h/ϵ`$. In systems with boundary dissipation, the effective dissipation rate scales with the system size as $`ϵ\sim L^{-\mu }`$, with $`\mu =2`$ , yielding $`\rho _a\sim hL^2`$. It has to be noticed that the model is critical only in the double limit $`ϵ\to 0`$ and $`h/ϵ\to 0`$. The onset of a nonvanishing driving thus destroys criticality in that it enforces a nonzero dissipation. For $`h\ll ϵ`$ the cutoff length scaling is dominated only by dissipation, while for larger driving fields more complicated scaling behaviors occur.
We study the behavior of the density of active sites in the system and measure the stationary average density of local energies, i.e. the density $`\rho _i`$ of sites with $`i`$ energy grains. In Fig. 1 we report the behavior of the densities as a function of $`h/ϵ`$. For small values of $`h/ϵ`$ we find $`\rho _i=\rho _i^0+c_ih/ϵ+𝒪((h/ϵ)^2)`$, where the $`\rho _i^0`$ are the values extrapolated in the limit $`h\to 0^+`$ and are given by $`\rho _0^0=0.075(1)`$, $`\rho _1^0=0.176(1)`$, $`\rho _2^0=0.307(1)`$ and $`\rho _3^0=0.442(1)`$. These values are in excellent agreement with the exact values obtained for the slowly driven sandpile (with boundary dissipation) and are independent of the dissipation rate. This implies that the energy substrate over which avalanches propagate is the same in the case of bulk and boundary dissipation. For $`i>3`$ we obtain $`\rho _i^0=0`$ and for small $`h`$ we observe $`\rho _a\simeq \rho _4`$, while for larger $`h`$ higher energy levels become populated and $`\rho _a`$ has non negligible contributions coming from $`\rho _i`$ with $`i>4`$. Finally we confirm that $`\rho _a=h/ϵ`$ for the whole range of parameters. In the case of boundary dissipation we recover $`\rho _a\sim hL^2`$.
To elucidate the differences between bulk and boundary dissipation, we measure the local density of active sites $`\rho _a(r)`$. In the case of bulk dissipation the density profile is flat, $`\rho _a(r)=\rho _a`$, while in the case of boundary dissipation we obtain a surface that can be well approximated by a paraboloid (see Fig. 2). This is due to the highly inhomogeneous dissipation, which imposes a zero density of active sites on the lattice boundary and corresponds to an elastic interface pinned at the boundaries, as discussed in Refs. . This effect can explain the anomalies encountered in the numerical evaluation of avalanche exponents and the persistent deviations from simple scaling observed in the BTW model .
In order to obtain a quantitative description of the stationary state, we study the effect on the stationary density of a small perturbation in the driving field
$$\mathrm{\Delta }\rho _a(r,t)=\int \chi _{h,ϵ}(r-r^{},t-t^{})\mathrm{\Delta }h(r^{},t^{})dr^{}dt^{}$$
(1)
where $`\chi _{h,ϵ}(r,t)`$ is the local response function. In the limit $`h\to 0^+`$ the integrated susceptibility $`\chi \equiv \int dt\int d^dr\,\chi _{h,ϵ}(r,t)`$ scales as the average avalanche size, $`\chi \sim \langle s\rangle `$, and the time integrated response function scales as
$$\overline{\chi }_{h\to 0,ϵ}(r)\equiv \int \chi _{h\to 0,ϵ}(r,t)dt\sim \frac{1}{r^{d-2}}e^{-r/\xi }$$
(2)
where $`\xi `$ is the characteristic length. Since $`\chi =\rho _a/h`$ and $`\rho _a=h/ϵ`$, the response function diverges in the limit of vanishing driving and dissipation as $`\chi =1/ϵ`$. By noting that $`\chi \sim \xi ^2`$, we obtain that $`\xi \sim ϵ^{-\nu }`$ with $`\nu =1/2`$ . These results can be obtained in mean-field (MF) theory but hold in all dimensions due to conservation .
To measure the response function, we drive the system in the stationary state with a given $`h`$ and we then add $`n`$ energy grains (i.e. $`\mathrm{\Delta }h=n/L^2`$) on a given lattice site. The time integrated response function is equivalent to the average difference of activity $`\overline{\chi }_{h,ϵ}(r)=\mathrm{\Delta }\rho _a(r)=\langle \rho _a(r)\rangle _{h+\mathrm{\Delta }h}-\langle \rho _a(r)\rangle _h`$, where $`r`$ denotes the distance from the perturbed site. We observe that this function decays exponentially as predicted by Eq. 2 and measure the correlation length $`\xi `$ (see Fig. 3). In the case of bulk dissipation, for small driving fields $`\xi `$ depends only on the dissipation rate and scales as $`\xi \sim ϵ^{-\nu }`$, with $`\nu =0.50\pm 0.01`$ (see Fig. 4). In the case of boundary dissipation, to evaluate $`\overline{\chi }_{h,L}(r)`$ we have to consider explicitly the spatial inhomogeneity of the stationary density: $`\rho _a(r)\ne h/ϵ`$. We observe also in this case that the integrated response function decays exponentially and defines a correlation length increasing linearly with the lattice size, i.e. $`\xi \sim L`$. This result does not agree with the anomalous scaling found in a continuous energy sandpile. We perform analogous simulations in $`d=3`$ and find that Eq. 2 is still verified .
Furthermore, we study the response function in the time domain, defined as $`\stackrel{~}{\chi }_{h,ϵ}(t)\equiv \int dr\,\chi _{h,ϵ}(r,t)`$, after a small variation $`\mathrm{\Delta }h`$ of the driving field. Also in this case we obtain a clear exponential behavior defining the characteristic time scale $`\tau `$. For small driving fields, $`\tau `$ scales as a function of the dissipation rate as $`\tau \sim ϵ^{-\mathrm{\Delta }}`$, with $`\mathrm{\Delta }=0.75\pm 0.05`$ (see Fig. 4). We then evaluate the dynamical exponent $`z=\mathrm{\Delta }/\nu =1.5\pm 0.1`$ relating the time and spatial characteristic lengths: $`\tau \sim \xi ^z`$. In the limit $`h\to 0^+`$, we expect that the critical exponents $`\nu `$ and $`z`$ express the divergence of the avalanche characteristic size and time, respectively. The numerical results confirm this observation . For increasing values of $`h`$, the driving field enters the scaling form and the results will be reported elsewhere.
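The quoted uncertainty follows from standard error propagation on $`z=\mathrm{\Delta }/\nu `$; a one-line check:

```python
import numpy as np

nu, dnu = 0.50, 0.01
Delta, dDelta = 0.75, 0.05
z = Delta/nu
dz = z*np.sqrt((dDelta/Delta)**2 + (dnu/nu)**2)
print(z, dz)    # 1.5 +/- ~0.1, the exponent in tau ~ xi^z
```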
We now turn to the analysis of the correlation function defined as $`C(r,t)=\langle \rho _a(r,t)\rho _a(0,0)\rangle -\rho _a^2`$. In previous simulations, performed in the slow driving limit, correlation functions were usually measured with respect to the slow time scale and the fast time scale was explored studying the avalanche propagation. The introduction of non vanishing driving and dissipation allows us to bridge the gap between the two regimes. We study the correlation function in the time and space domains and find an exponential decay at long times and distances , defining the correlation lengths $`\xi _c`$ and $`\tau _c`$ for space and time, respectively. The scaling of these correlation lengths is in agreement with the one obtained analyzing the response functions (i.e. $`\xi _c\sim ϵ^{-\nu }`$, with $`\nu \simeq 0.5`$, and $`\tau _c\sim ϵ^{-\mathrm{\Delta }}`$, with $`\mathrm{\Delta }\simeq 0.75`$) and confirms the existence of a unique critical behavior in time and space (see Fig. 4).
In order to clarify the interplay between slow and fast dynamical modes, we analyze fluctuation-dissipation relations. In equilibrium phenomena, the fluctuation-dissipation theorem ensures that the response of the system to a small perturbation is related to the correlation function. In particular, the response function is given by
$$\chi (t)=-\frac{1}{T}\frac{dC(t)}{dt},$$
(3)
where $`T`$ is the temperature. Eq. 3 is strictly verified only in equilibrium systems, but it has recently been generalized to some classes of non-equilibrium systems, namely systems displaying “aging” . In those examples the fluctuation-dissipation relation provides information on an effective non-equilibrium temperature that rules the dynamical evolution of the system.
We test Eq. 3 and find that the usual linear behavior does not hold. On the contrary, we show that the parametric plot of $`\chi (t)`$ versus $`C(t)`$ defines a power law behavior, as shown in the double logarithmic plot of Fig. 5. This is striking evidence that the fluctuation-dissipation relation does not hold in these systems. Since we are in the presence of two exponential functions, the linear behavior on the logarithmic scale is the signature of two different values for the characteristic times $`\tau _c`$ and $`\tau `$ of the correlation and response function, respectively. The slope indicates that the ratio between the two time scales is given by $`\tau _c/\tau \simeq 0.4`$ and does not depend on the driving and dissipation rates. This observation reflects the fact that the correlation and the response times scale with the same exponents with respect to dissipation and define unambiguously the critical behavior of the model. In particular, it implies that the dynamical exponent $`z\simeq 1.5`$ is unique and can be estimated either by measuring avalanche distributions or the correlation functions. Previous simulations revealed two different dynamical exponents in the fast and in the slow time scale. These differences are probably due to the ambiguous definition of time in the infinite time scale separation limit.
Finally, we note that it is not possible to define an effective temperature for the dynamics of sandpile models. It is interesting to compare this observation with a recent work showing that the stationary state of non-equilibrium threshold models, similar to the one studied here, can be described by Boltzmann statistics in the mean-field limit. The validity of the claim of Ref. has been debated in the literature . We measure fluctuation-dissipation relations in a random neighbor sandpile model, which is described by mean-field theory, and find that fluctuation-dissipation relations are not satisfied , in disagreement with the conclusions of Ref. .
We thank A.Chessa, D.Dhar, R. Dickman, S. Franz, K.B. Lauritsen, E.Marinari, M. A. Muñoz, R. Pastor-Satorras and A. Stella for comments and discussions. A.V. and S.Z. acknowledge partial support from the European Network Contract ERBFMRXCT980183.
# Dust-to-Gas Ratio and Phase Transition of Interstellar Medium
## 1 Introduction
Recent chemical evolution models of galaxies including the dust content are successful in explaining the dust amount of nearby galaxies (Wang wang (1991); Lisenfeld & Ferrara lisenfeld (1998); Dwek dwek (1998); Hirashita 1999a, hereafter H99; see also Takagi, Arimoto, & Vansevičius takagi (1999)). Supernovae (SNe) are the dominant source of the formation of dust grains (Dwek & Scalo dwekscalo (1980)), and SN shocks destroy grains (Jones et al. jones (1994); Borkowski & Dwek borkowski (1995)). Thus, in those models the dust content is connected with star formation histories.
In our previous work, H99, the dust-to-gas ratio was expressed as a function of metallicity (see also Lisenfeld & Ferrara lisenfeld (1998)), which is also related to star formation histories. It confirmed the suggestion proposed by Dwek (dwek (1998)) that the accretion process onto preexisting dust grains is efficient in spiral galaxies. However, since the accretion is effective in cold clouds, the global efficiency of the accretion depends on the fraction of the gas in the cold phase (Seab seab (1987); McKee mckee (1989); Draine draine (1990)). Thus, the efficiency varies on a timescale of $`10^7`$–$`10^8`$ yr by the phase transition of the ISM (Ikeuchi ikeuchi (1988); McKee mckee (1989)).
In this Letter, we combine the framework of H99 with a theoretical work on multiphase ISM by Ikeuchi & Tomita (ikeuchitomita (1983)), whose limit-cycle model of the ISM phase transition is applied to the result in Tomita, Tomita, & Saitō (tomita (1996)) by Kamaya & Takeuchi (kamaya (1997); hereafter KT97). In the limit-cycle model, the mass fraction of each phase oscillates continuously because of mass exchange among the components of the phases. The timescale of the phase transition in the model is determined by a few parameters intrinsic to a spiral galaxy: the sweeping rate of SN shocks, the evaporation rate of the cold gas, and the cooling rate of the gas heated by SN shocks. Actually, a static solution as well as the limit-cycle solution for the filling factors of the three components is possible depending on the parameters. However, the oscillatory behaviour (i.e., the limit-cycle model) of the filling factors is supported observationally. Indeed, the observed scatter of the far-infrared-to-optical flux ratios of spiral galaxies (Tomita, Tomita, & Saitō tomita (1996)) is interpreted through the limit-cycle model in KT97. KT97 suggested that the fraction of the gas mass (i.e., the mass filling factor) in the cold phase changes in the range of 0.1 to 0.7 (or more) on the timescale of $`10^7`$–$`10^8`$ yr (see also Korchagin, Ryabtsev, & Vorobyov korchagin (1994)).
This Letter is organized as follows. In §2, we investigate the variation of the dust-to-gas ratio due to the phase transition of the ISM in spiral galaxies. We then discuss the result in §3.
## 2 Grain growth in multiphase interstellar medium
The ISM in a spiral galaxy is composed of multiphase gas. McKee & Ostriker (mckeeostriker (1977)) constructed a model of the ISM with three components in pressure equilibrium: the cold phase ($`T\sim 10^2`$ K and $`n\sim 10`$ cm<sup>-3</sup>), the warm phase ($`T\sim 10^4`$ K and $`n\sim 10^{-1}`$ cm<sup>-3</sup>), and the hot phase ($`T\sim 10^6`$ K and $`n\sim 10^{-3}`$ cm<sup>-3</sup>). Since the mass of the hot component is negligible in a galactic disc compared with the others, we only consider the warm and the cold gas.
Dwek (dwek (1998)) and H99 showed that the accretion of heavy elements onto preexisting dust grains is the dominant process for the growth of the dust content in spiral galaxies. Thus, we concentrate on the effect of the phase transition on the accretion.
The timescale of the grain growth through the accretion of heavy elements, $`\tau _{\mathrm{grow}}`$, can be estimated as the timescale of collisions between heavy-element atoms and grains. According to Draine (draine (1990)), $`\tau _{\mathrm{grow}}\sim 5\times 10^7`$ yr in cold gas. Here, we should note that the accretion process is more effective in denser environments. The efficiency of the accretion is proportional to the square of the gas density if the metallicity and dust-to-gas ratio are the same, since the densities of both metals and dust contribute to the efficiency. Therefore, among the three components of the ISM, we only consider the accretion process in the cold gas, the densest component of the ISM.
According to H99, the increase rate of dust mass by the accretion process in a galaxy, $`[dM_\mathrm{d}/dt]_{\mathrm{acc}}`$, is expressed as
$`\left[{\displaystyle \frac{dM_\mathrm{d}}{dt}}\right]_{\mathrm{acc}}={\displaystyle \frac{𝒟M_\mathrm{g}(1-f)}{\tau _{\mathrm{acc}}}},`$ (1)
where $`𝒟`$ is the dust-to-gas mass ratio, $`M_\mathrm{g}`$ is the total mass of the ISM in the galaxy (i.e., $`M_\mathrm{d}=𝒟M_\mathrm{g}`$), $`f`$ is the fraction of the metal in the dust phase, and $`\tau _{\mathrm{acc}}`$ is the accretion timescale of heavy elements onto dust grains (see Eq. 3 in H99). We note that the newly introduced parameter $`\tau _{\mathrm{acc}}`$ is different from $`\tau _{\mathrm{grow}}`$, since $`\tau _{\mathrm{acc}}`$ is the accretion timescale averaged over all the ISM phases. As commented above, the dust in the cold gas dominantly contributes to the accretion process. Thus, $`[dM_\mathrm{d}/dt]_{\mathrm{acc}}`$ can also be expressed in the following way:
$`\left[{\displaystyle \frac{dM_\mathrm{d}}{dt}}\right]_{\mathrm{acc}}={\displaystyle \frac{𝒟X_{\mathrm{cold}}M_\mathrm{g}(1-f)}{\tau _{\mathrm{grow}}}},`$ (2)
where $`X_{\mathrm{cold}}`$ represents the mass fraction of the cold phase relative to the total mass of the ISM. Here, we assume that the values of $`𝒟`$ and $`f`$ are the same in each phase. McKee (mckee (1989)) showed that the mixing of phases makes the difference in the $`𝒟`$ values among phases negligible. We also expect that $`f`$ can be treated as constant for all phases because of the mixing (Tenorio-Tagle tenorio (1996)). Combining equations (1) and (2), we finally obtain
$`\tau _{\mathrm{acc}}={\displaystyle \frac{\tau _{\mathrm{grow}}}{X_{\mathrm{cold}}}}.`$ (3)
According to Ikeuchi (ikeuchi (1988)) and KT97, $`X_{\mathrm{cold}}`$ can vary within the range $`0.1\lesssim X_{\mathrm{cold}}\lesssim 0.7`$ in $`10^{7\text{–}8}`$ yr. Therefore, from equation (3), we see that $`\tau _{\mathrm{acc}}`$ varies in the range $`1.4\tau _{\mathrm{grow}}\lesssim \tau _{\mathrm{acc}}\lesssim 10\tau _{\mathrm{grow}}`$ on that timescale.
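As a quick numerical check of equation (3), here is a minimal sketch in plain Python (the value of $`\tau _{\mathrm{grow}}`$ is the Draine (draine (1990)) estimate quoted in §2, and the endpoints of $`X_{\mathrm{cold}}`$ are the KT97 range):

```python
# Check of equation (3): tau_acc = tau_grow / X_cold.
tau_grow = 5.0e7  # yr, grain-growth timescale in the cold gas (Draine 1990)

for x_cold in (0.7, 0.1):  # endpoints of the cold-gas mass fraction (KT97)
    tau_acc = tau_grow / x_cold
    print(f"X_cold = {x_cold:.1f} -> tau_acc = {tau_acc:.1e} yr "
          f"({tau_acc / tau_grow:.1f} tau_grow)")

# X_cold = 0.7 -> tau_acc = 7.1e+07 yr (1.4 tau_grow)
# X_cold = 0.1 -> tau_acc = 5.0e+08 yr (10.0 tau_grow)
```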
## 3 Discussions
We have shown in the previous section that the parameter $`\tau _{\mathrm{acc}}`$, the typical timescale of accretion of heavy elements onto dust grains, changes on a timescale of $`10^{7\text{–}8}`$ yr through the phase transition of the ISM. The range of $`\tau _{\mathrm{acc}}`$ is estimated as $`1.4\tau _{\mathrm{grow}}\lesssim \tau _{\mathrm{acc}}\lesssim 10\tau _{\mathrm{grow}}`$, which is typically $`7\times 10^7\mathrm{yr}\lesssim \tau _{\mathrm{acc}}\lesssim 5\times 10^8\mathrm{yr}`$. This means that the parameter $`\beta _{\mathrm{acc}}`$ (proportional to the efficiency of the accretion of heavy elements onto preexisting dust grains), defined in H99, changes by nearly an order of magnitude. Moreover, the timescale of the variation is much shorter than the typical timescale of the gas consumption in a galactic disc ($`\gtrsim 1`$ Gyr; Kennicutt, Tamblyn, & Congdon kennicutt (1994)). Thus, the dust-to-gas ratio in a spiral galaxy experiences a short-term ($`10^{7\text{–}8}`$ yr) variation with an amplitude of an order of magnitude.
The short-term variation can be tested by examining nearby spiral galaxies. The dust-to-gas ratios of spiral galaxies show scatter around their mean values even when the metallicity is almost the same (Issa, MacLaren, & Wolfendale issa (1990); see also H99). According to Figure 1 in H99, the theoretical lines almost reproduce the observed values. However, the dust-to-gas ratios of the Galaxy and M31 differ by several times. Both galaxies lie in the range $`5\lesssim \beta _{\mathrm{acc}}\lesssim 20`$. This means that we can explain the dust-to-gas ratios of these galaxies if $`\beta _{\mathrm{acc}}`$ changes by more than a factor of 4. Indeed, the discussion in §2 demonstrated that $`\beta _{\mathrm{acc}}`$ can change by more than a factor of 7 on a timescale of $`10^{7\text{–}8}`$ yr because of the phase change of the ISM. Thus, it is possible to explain the scatter of the dust-to-gas ratios of spiral galaxies by considering the phase transition.
As for dwarf galaxies, we need another approach, since the heavy-element accretion in dwarf galaxies is much less efficient than in spiral galaxies owing to their low metallicity (Hirashita 1999b). Because of their shallow gravitational potentials, mass outflow (e.g., Mac Low & Ferrara mac (1999)) can be responsible for the spread of the dust-to-gas ratio, as emphasized by Lisenfeld & Ferrara (lisenfeld (1998)).
We have only considered the dust formation process; however, we should also consider dust destruction. The dominant dust destruction occurs in the warm and hot phases, in which SN shock waves propagate (Seab seab (1987)). This means that the destruction efficiency is expected to anticorrelate with $`X_{\mathrm{cold}}`$. If a galaxy is in a higher-$`X_{\mathrm{cold}}`$ state, the dust destruction is less efficient whereas the dust growth is faster. Thus, the variation of the dust-to-gas ratio may become larger if we take dust destruction into account.
Finally, we should note that it is still possible that the scatter is caused by observational uncertainty, since the dust-to-gas ratio is not a direct observable. However, from the discussion in §2, we can still propose that the dust-to-gas ratio varies on a timescale of $`10^{7\text{–}8}`$ yr by nearly an order of magnitude.
###### Acknowledgements.
We would like to thank A. Ferrara, the referee, for careful reading and useful comments that improved this Letter. We acknowledge T. T. Takeuchi for his kind help and invaluable discussions. We are grateful to S. Mineshige for continuous encouragement. This work is supported by the Research Fellowship of the Japan Society for the Promotion of Science for Young Scientists. We made extensive use of NASA's Astrophysics Data System Abstract Service (ADS).
# The R-Mode Oscillations in Relativistic Rotating Stars
## 1 Introduction
In recent years, the r-mode oscillations in rotating stars have been found to have significant implications. The axial oscillations are driven unstable by the gravitational radiation reaction (Andersson (1998), Friedman and Morsink (1998)). The mechanism of the instability can be understood from the generic argument for the gravitational radiation driven instability, the so-called CFS instability, which was originally examined for polar perturbations (Chandrasekhar (1970), Friedman and Schutz (1978), Friedman (1978)). All rotating stars become unstable in the absence of internal fluid dissipation, irrespective of the mode parity. Viscosity, however, damps out the oscillations and stabilizes them in general. The polar f-mode instability is believed to act only in rapidly rotating neutron stars (Lindblom (1995)). The axial instability, however, is found to set in even in much more slowly rotating stars, and to play an important role in the evolution of hot newly-born neutron stars (Lindblom et al. (1998), Andersson et al. (1998)). The instability carries away most of the angular momentum and rotational energy of the star through gravitational radiation. The gravitational wave emitted during the spin-down process is expected to be one of the promising sources for the laser interferometer gravitational wave detectors (Owen et al. (1998)).
Most of the estimates for the r-mode instability are based on Newtonian calculations. That is, the oscillation frequencies are determined by inviscid hydrodynamics under Newtonian gravity, and the gravitational radiation reaction is incorporated by evaluating the (current) multipole moments. Relativity has a great influence on stellar structure, the redshift of oscillation frequencies, frame dragging, radiation and so on. It is important to explore the relativistic effects, which may not change the general features of the oscillations, but may shift the critical angular velocity. The relativistic calculation is not an easy task, even using the linear perturbation method with respect to the oscillation amplitude. Both relativity and rotation complicate the problem considerably.
As the first step toward a clear understanding of relativistic corrections, we have examined the r-mode oscillations with two approximations, the slow rotation approximation (Hartle (1967)) and the Cowling approximation (Cowling (1941)). The rotation is assumed to be slow, and is treated as a small perturbation of the non-rotating case. We also assume that the perturbation of the gravity can be neglected in the oscillations. The accuracy of the Cowling approximation has been tested in rotating relativistic systems (Finn (1988), Lindblom and Splinter (1990), Yoshida and Kojima (1997)). The calculations give the same qualitative results and reasonable accuracy for the oscillation frequencies, as in the Newtonian stellar pulsation theory (e.g., Cox (1980)). In Section 2, we summarize the basic equations for the inviscid fluid in slowly rotating relativistic stars and the perturbation scheme to solve them. The r-mode solution is given at the lowest order with respect to the rotational parameter in Section 3. The mode cannot be specified by a single frequency, unlike the Newtonian r-mode oscillation, although the frequency is bounded to a certain range. The spatial function of the mode is arbitrary at this order. In order to determine the radial structure, we need the rotational corrections up to the third order in the background. In Section 4, we include the rotational effects and derive the equation governing the r-mode oscillations in the non-barotropic case. The resulting equation becomes singular in the barotropic case, since the second-order differential term vanishes. In Section 5, we separately consider the mode function and the correction to the leading order for barotropic stars. In Section 6, we discuss the implications of our results. We use geometrical units with $`c=G=1.`$
## 2 Pulsation equations of axial mode
We consider a slowly rotating star with a uniform angular velocity $`\mathrm{\Omega }\sim O(\epsilon )`$. The rotational effects can be treated as corrections to the non-rotating spherical star. We will take account of the corrections up to the third order in $`\epsilon `$. The metric tensor describing the stationary axisymmetric star is given by (Hartle (1967), Chandrasekhar and Miller (1974), Quintana (1976))
$`ds^2`$ $`=`$ $`e^\nu \left[1+2(h_0+h_2P_2)\right]dt^2+e^\lambda \left[1+{\displaystyle \frac{2e^\lambda }{r}}\left(m_0+m_2P_2\right)\right]dr^2`$ (1)
$`+r^2(1+2k_2P_2)\left\{d\theta ^2+\mathrm{sin}^2\theta \left[d\varphi \left(\omega +W_1W_3{\displaystyle \frac{1}{\mathrm{sin}\theta }}{\displaystyle \frac{dP_3}{d\theta }}\right)dt\right]^2\right\},`$
where $`P_l(\mathrm{cos}\theta )(l=2,3)`$ denotes the Legendre polynomial of degree $`l`$. The metric functions except $`g_{t\varphi }`$ should be expanded in even powers of $`\epsilon `$, while $`g_{t\varphi }`$ is expanded in odd powers of $`\epsilon `$ due to the rotational symmetry, i.e., $`\omega \sim O(\epsilon ),`$ $`(h_0,h_2,m_0,m_2,k_2)\sim O(\epsilon ^2)`$ and $`(W_1,W_3)\sim O(\epsilon ^3).`$ These quantities are functions of the radial coordinate $`r`$.
The equilibrium state is assumed to be described by a perfect-fluid stress-energy tensor. The 4-velocity of the fluid element inside the star has the components, correct to $`O(\epsilon ^3),`$
$$U^\varphi =\mathrm{\Omega }U^t,U^t=(-g_{tt}-2\mathrm{\Omega }g_{t\varphi }-\mathrm{\Omega }^2g_{\varphi \varphi })^{-1/2}.$$
(2)
The pressure and density distributions are subject to the centrifugal deformation, which is the effect of $`O(\epsilon ^2).`$ These distributions are expressed as
$$p=p_0(r)+\{p_{20}(r)+p_{22}(r)P_2(\mathrm{cos}\theta )\},$$
(3)
$$\rho =\rho _0(r)+\{\rho _{20}(r)+\rho _{22}(r)P_2(\mathrm{cos}\theta )\},$$
(4)
where $`p_0`$ and $`\rho _0`$ are the values for the non-rotating star, and the quantities in the braces are the rotational corrections. The non-rotating spherical configuration and the rotational corrections in eqs.(1)-(4) are determined by successively solving the perturbed Einstein equations with the same power of $`\epsilon `$ (Hartle (1967),Chandrasekhar and Miller (1974), Quintana (1976)).
We consider the linear perturbations of the equilibrium fluid state. We use the Eulerian change of the pressure, density and 4-velocity, which are represented by $`\delta p`$, $`\delta \rho `$ and $`\delta U_\nu `$. The perturbations of the gravitational field are neglected in the Cowling approximation. It is convenient in the following calculation to expand these functions in terms of the spherical harmonics $`Y_{lm},`$
$`\delta p`$ $`=`$ $`{\displaystyle \underset{lm}{}}\delta p_{lm}(t,r)Y_{lm}(\theta ,\varphi ),`$ (5)
$`\delta \rho `$ $`=`$ $`{\displaystyle \underset{lm}{}}\delta \rho _{lm}(t,r)Y_{lm}(\theta ,\varphi ),`$ (6)
$`(\rho +p)\delta U_r`$ $`=`$ $`e^{\nu /2}{\displaystyle \underset{lm}{}}R_{lm}(t,r)Y_{lm}(\theta ,\varphi ),`$ (7)
$`(\rho +p)\delta U_\theta `$ $`=`$ $`e^{\nu /2}{\displaystyle \underset{lm}{}}\left[V_{lm}(t,r)_\theta Y_{lm}(\theta ,\varphi ){\displaystyle \frac{U_{lm}(t,r)}{\mathrm{sin}\theta }}_\varphi Y_{lm}(\theta ,\varphi )\right],`$ (8)
$`(\rho +p)\delta U_\varphi `$ $`=`$ $`e^{\nu /2}{\displaystyle \underset{lm}{}}\left[V_{lm}(t,r)_\varphi Y_{lm}(\theta ,\varphi )+U_{lm}(t,r)\mathrm{sin}\theta _\theta Y_{lm}(\theta ,\varphi )\right],`$ (9)
$`\delta U_t`$ $`=`$ $`\mathrm{\Omega }\delta U_\varphi .`$ (10)
With these definitions, the pulsation equations are derived from the conservation laws of the perturbed energy-momentum,
$$\delta T_{\mu ;\nu }^\nu =0,$$
(11)
where
$$\delta T_\mu ^\nu =(\delta \rho +\delta p)U_\mu U^\nu +\left[(\rho +p)\delta U_\mu U^\nu +(\rho +p)\delta U^\nu U_\mu \right]+\delta p\delta _\mu ^\nu .$$
(12)
Since we assume that the perturbation is adiabatic, the thermodynamic relation between the perturbed pressure and density can be written as
$$\delta p+\xi p=\frac{\mathrm{\Gamma }p}{p+\rho }\left(\delta \rho +\xi \rho \right),$$
(13)
where $`\mathrm{\Gamma }`$ is the adiabatic index and $`\xi `$ is the Lagrange displacement.
We will solve eqs.(11) and (13) by expansion in $`\epsilon `$. In a spherical star, the perturbations decouple into purely polar and purely axial modes for each $`l`$ and $`m`$. A set of functions $`𝒫_{lm}\equiv (\delta p_{lm},\delta \rho _{lm},R_{lm},V_{lm})`$ describes the polar mode, while the function $`𝒜_{lm}\equiv U_{lm}`$ describes the axial mode. In the presence of rotation, the modes will be mixed among terms with different $`l`$, while a mode can still be specified by a single $`m`$ (e.g., Kojima (1992), Kojima (1997)). The coupled equations are schematically expressed as
$$0=[𝒜_{lm}]+\ast \times [𝒫_{l\pm 1m}]+\ast ^2\times [𝒜_{lm},𝒜_{l\pm 2m}]+\mathrm{},$$
(14)
$$0=[𝒫_{lm}]+\ast \times [𝒜_{l\pm 1m}]+\ast ^2\times [𝒫_{lm},𝒫_{l\pm 2m}]+\mathrm{},$$
(15)
where the symbol $``$ means some functions of order $`\epsilon ,`$ and the square bracket formally represents the relation among the axial perturbation function $`𝒜_{lm}`$, or the polar perturbation function $`𝒫_{lm}`$ appeared therein. We also assume that the time variation of the oscillation is slow and proportional to $`\mathrm{\Omega },`$ i.e., $`_t\mathrm{\Omega }O(\epsilon ).`$ This is true in the r-mode oscillation, as will be confirmed soon. We look for the mode which is described by a single axial function with index $`(l,m)`$ in the limit of $`\epsilon 0.`$ That is, the polar part should vanish, while the axial part becomes finite. Hence, the perturbation functions are expanded as
$$𝒜_{lm}=𝒜_{lm}^{(1)}+\epsilon ^2𝒜_{lm}^{(2)}+\mathrm{},𝒫_{lm}=\epsilon (𝒫_{lm}^{(1)}+\epsilon ^2𝒫_{lm}^{(2)}+\mathrm{}).$$
(16)
Substituting these functions into eqs.(14) and (15), and comparing the coefficients of $`\epsilon ^n(n=0,1,2)`$, we have
$`0`$ $`=`$ $`[𝒜_{lm}^{(1)}],`$ (17)
$`0`$ $`=`$ $`[\epsilon 𝒫_{l\pm 1m}^{(1)}+\ast \times 𝒜_{lm}^{(1)}],`$ (18)
$`0`$ $`=`$ $`[\epsilon ^2𝒜_{lm}^{(2)}]+\ast \times [\epsilon 𝒫_{l\pm 1m}^{(1)}]+\ast ^2\times [𝒜_{lm}^{(1)},𝒜_{l\pm 2m}^{(1)}]=[\epsilon ^2𝒜_{lm}^{(2)}+\ast ^2\times 𝒜_{lm}^{(1)}],`$ (19)
where we have used $`𝒜_{l\pm 2m}^{(1)}=0`$ and the relation (18) in the last part of (19). Equation (17) represents the axial oscillation at the lowest order, which can be specified by $`U_{lm}^{(1)}.`$ Equation (19) is the second-order form of it, and the term $`\ast ^2\times 𝒜_{lm}^{(1)}`$ can be regarded as the rotational correction. We will show the explicit forms corresponding to eqs.(17)-(19) in subsequent sections.
## 3 First-order solution
The leading term of eq.(11) is reduced to
$$\dot{U}_{lm}^{(1)}-im\chi U_{lm}^{(1)}=0,$$
(20)
where
$$\chi =\frac{2}{l(l+1)}\varpi =\frac{2}{l(l+1)}(\mathrm{\Omega }-\omega ),$$
(21)
and a dot denotes the time derivative in the co-rotating frame, i.e., $`\dot{U}_{lm}=(\partial _t+im\mathrm{\Omega })U_{lm}`$. The evolution of the perturbation can be solved by the Laplace transformation,
$$u(s,r)={\displaystyle \int _0^{\mathrm{\infty }}}U_{lm}^{(1)}(t,r)e^{-st}𝑑t.$$
(22)
The Laplace transformation of eq.(20) is written as
$$\left(s+im(\mathrm{\Omega }-\chi )\right)u(s,r)-f_{lm}^{(1)}(r)=0,$$
(23)
where $`f_{lm}^{(1)}`$ describes the initial disturbance at $`t=0`$. After solving $`u`$ and using the inverse transformation, the solution in $`t`$-domain is easily constructed as
$$U_{lm}^{(1)}(t,r)=f_{lm}^{(1)}(r){\displaystyle \frac{1}{2\pi i}}{\displaystyle \int }{\displaystyle \frac{e^{st}}{s+im(\mathrm{\Omega }-\chi )}}𝑑s=f_{lm}^{(1)}(r)e^{-im(\mathrm{\Omega }-\chi )t}H(t),$$
(24)
where $`H(t)`$ is the Heaviside step function. We will consider the $`t>0`$ region only, so the function $`H(t)`$ may well be omitted from now on. For the Newtonian star, $`m(\mathrm{\Omega }-\chi )`$ becomes a constant
$$\sigma _N=\left(1-\frac{2}{l(l+1)}\right)m\mathrm{\Omega }.$$
(25)
This is the r-mode frequency measured in the non-rotating frame (Papaloizou and Pringle (1978), Provost et al. (1981), Saio (1982)). In relativistic stars, $`\varpi `$ is a monotonically increasing function of $`r`$, $`\varpi _0\le \varpi \le \varpi _R.`$ The possible range of the r-mode frequency is thus spread out. If one regards eq.(24) as a sum of Fourier components $`e^{-i\sigma t},`$ then the spectrum is continuous in the range
$$\left(1-\frac{2}{l(l+1)}\frac{\varpi _R}{\mathrm{\Omega }}\right)m\mathrm{\Omega }\le \sigma \le \left(1-\frac{2}{l(l+1)}\frac{\varpi _0}{\mathrm{\Omega }}\right)m\mathrm{\Omega }.$$
(26)
This result is the same even if metric perturbations are considered (Kojima (1998)). (Note that this conclusion is derived within the lowest-order approximation of the perturbation scheme; the consistency at higher order will affect it in the barotropic case, as will be shown in the following sections.) The time dependence is determined, whereas the radial dependence $`f_{lm}^{(1)}`$ is arbitrary at this order. The function $`f_{lm}^{(1)}`$ in eq.(24) is constrained by the equations of motion for the polar part, as will be shown in the subsequent sections. In this manner, the perturbation scheme (17)-(19) is a degenerate perturbation.
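To make the bounds in eq.(26) concrete, here is a minimal numerical sketch (Python; the frame-dragging ratios $`\varpi _0/\mathrm{\Omega }`$ and $`\varpi _R/\mathrm{\Omega }`$ used below are illustrative placeholders, not values computed from a stellar model):

```python
# Bounds (26) on the continuous r-mode spectrum, in units of m*Omega.
def sigma_bounds(l, varpi0_over_Omega, varpiR_over_Omega):
    lo = 1.0 - 2.0 / (l * (l + 1)) * varpiR_over_Omega
    hi = 1.0 - 2.0 / (l * (l + 1)) * varpi0_over_Omega
    return lo, hi

# Newtonian limit: varpi -> Omega everywhere, so the range collapses to the
# single value of eq. (25), sigma_N = (1 - 2/(l(l+1))) * m * Omega.
print(sigma_bounds(2, 1.0, 1.0))   # (0.6667, 0.6667) for l = 2

# Illustrative relativistic star: placeholder dragging ratios,
# varpi_0 ~ 0.6 Omega at the centre, varpi_R ~ 0.9 Omega at the surface.
print(sigma_bounds(2, 0.6, 0.9))   # (0.70, 0.80): a genuinely broadened range
```

In this sketch the broadening of the spectrum comes entirely from the radial variation of $`\varpi `$, which is the point of eq.(26).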
## 4 Second-order equation
The pressure and density perturbations are induced by the rotation, while the axial mode in non-rotating stars is never coupled to them. We may assume that the perturbation can be specified by a single function $`U_{lm}^{(1)}`$ at the leading order, since the general case can be described by a linear combination. The axial part induces the polar parts with indices $`(l\pm 1,m)`$ according to the perturbation scheme (18). It is straightforward to solve for $`\delta p_{l\pm 1m}`$ and $`\delta \rho _{l\pm 1m}`$ from two components of eq.(11) in terms of the first-order corrections and $`U_{lm}^{(1)}`$. The explicit results are
$`\delta p_{l\pm 1m}`$ $`=`$ $`2S_\pm \varpi U_{lm}^{(1)},`$ (27)
$`\delta \rho _{l\pm 1m}`$ $`=`$ $`4S_\pm {\displaystyle \frac{e^{\nu /2}}{\nu ^{}}}(e^{\nu /2}\varpi U_{lm}^{(1)})^{}+2T_\pm {\displaystyle \frac{e^\nu }{r^2\nu ^{}}}(r^2\varpi e^\nu )^{}U_{lm}^{(1)},`$ (28)
where
$`S_+`$ $`=`$ $`{\displaystyle \frac{l}{l+1}}Q_+,S_{-}={\displaystyle \frac{l+1}{l}}Q_{-},`$ (29)
$`T_+`$ $`=`$ $`lQ_+,T_{-}=(l+1)Q_{-},`$ (30)
$`Q_+`$ $`=`$ $`\sqrt{{\displaystyle \frac{(l+1)^2-m^2}{(2l+1)(2l+3)}}},Q_{-}=\sqrt{{\displaystyle \frac{l^2-m^2}{(2l-1)(2l+1)}}}.`$ (31)
Here we denote a derivative with respect to $`r`$ by a prime. The lowest-order form of eq.(13) also decouples into an equation for each $`l,m`$ component,
$$\delta \dot{p}_{l\pm 1m}C^2\delta \dot{\rho }_{l\pm 1m}=AC^2e^\nu \left(e^\lambda R_{l\pm 1m}\frac{3im\xi _2}{r^2}Q_\pm U_{lm}^{(1)}\right),$$
(32)
with
$`C^2`$ $`=`$ $`{\displaystyle \frac{\mathrm{\Gamma }p_0}{p_0+\rho _0}},`$ (33)
$`A`$ $`=`$ $`{\displaystyle \frac{\rho _0^{\prime }}{p_0+\rho _0}}-{\displaystyle \frac{p_0^{\prime }}{\mathrm{\Gamma }p_0}}.`$ (34)
In eq.(32), the displacement $`\xi _2`$ represents the quadrupole deformation of the stationary star, and is related to the quantities of $`O(\epsilon ^2)`$ as
$$\xi _2=\frac{2}{\nu ^{}}\left(h_2+\frac{1}{3}\varpi ^2r^2e^\nu \right).$$
(35)
The region with $`A>0`$ is convectively unstable, while that with $`A<0`$ is stably stratified. We here assume $`A\ne 0`$ in the following calculations, but will separately consider the case $`A=0`$ in Section 5. From eq.(32), the function $`R_{l\pm 1m}`$ is solved in terms of $`U_{lm}^{(1)}`$ for the region $`A\ne 0`$. The function $`V_{l\pm 1m}`$ describing the horizontal motion is calculated from the remaining component of eq.(11). The explicit form is given by
$`\left[l^{}(l^{}+1)V_{l^{}m}\right]_{l^{}=l\pm 1}`$ $`=`$ $`S_\pm (v_2+l(l+1)r^2\mathrm{\Omega }e^\nu \dot{U}_{lm}^{(1)})+T_\pm (v_1+{\displaystyle \frac{1}{2}}r^2\varpi e^\nu \dot{U}_{lm}^{(1)})`$ (36)
$`+l(l+1)Q_\pm v_0,`$
where
$`v_2`$ $`=`$ $`4e^{(3\nu +\lambda )/2}\left[{\displaystyle \frac{r^2e^{\lambda /2}}{A\nu ^{}}}\left((e^{\nu /2}\varpi \dot{U}_{lm}^{(1)})^{}+{\displaystyle \frac{\nu ^{}}{2C^2}}(e^{\nu /2}\varpi \dot{U}_{lm}^{(1)})\right)\right]^{}`$ (37)
$`\left({\displaystyle \frac{4r^2e^{3\nu /2}}{\nu ^{}}}\right)(e^{\nu /2}\varpi \dot{U}_{lm}^{(1)})^{},`$
$`v_1`$ $`=`$ $`2e^{(3\nu +\lambda )/2}\left[{\displaystyle \frac{e^{(3\nu +\lambda )/2}}{A\nu ^{}}}(r^2\varpi e^\nu )^{}\dot{U}_{lm}^{(1)}\right]^{}\left[{\displaystyle \frac{2e^{\nu /2}}{\nu ^{}}}(r^2\varpi e^{\nu /2})^{}+\mathrm{\Omega }r^2e^\nu \right]\dot{U}_{lm}^{(1)},`$ (38)
$`v_0`$ $`=`$ $`{\displaystyle \frac{3}{2}}{\displaystyle \frac{\xi _2}{\varpi }}\dot{U}_{lm}^{(1)}\left[{\displaystyle \frac{r^2e^\nu }{2}}(\varpi +2\mathrm{\Omega }){\displaystyle \frac{3}{2}}{\displaystyle \frac{e^\lambda m_2}{\varpi r}}{\displaystyle \frac{3}{2}}e^{(\nu +\lambda )/2}\left[e^{(\nu +\lambda )/2}{\displaystyle \frac{\xi _2}{\varpi }}\right]^{}\right]\dot{U}_{lm}^{(1)}.`$ (39)
In eq.(36), we have used eq.(20) to simplify it.
These corrections in the polar functions affect the axial parts with indices $`(l\pm 2,m)`$ and $`(l,m).`$ We are interested in the term with index $`(l,m)`$ as the corrections to the leading equation. In deriving the axial equation with these corrections, the terms up to $`O(\epsilon ^3)`$ in the background field also affect it. We include both corrections to the axial equation and have the following form,
$$0=\dot{U}_{lm}^{(2)}-im\chi U_{lm}^{(2)}+\mathcal{L}[\dot{U}_{lm}^{(1)}],$$
(40)
where $`\mathcal{L}`$ is the Sturm-Liouville differential operator defined by
$`\mathcal{L}[\dot{U}_{lm}^{(1)}]`$ $`=`$ $`8c_3\varpi e^{(\lambda /2-\nu )}(\rho _0+p_0)\left[{\displaystyle \frac{r^2e^{(\lambda -\nu )/2}}{A\nu ^{\prime }(\rho _0+p_0)}}(e^{\nu /2}\varpi \dot{U}_{lm}^{(1)})^{\prime }\right]^{\prime }-(F+G)\dot{U}_{lm}^{(1)},`$ (41)
$`F`$ $`=`$ $`4c_3\varpi ^2e^{(\lambda /2\nu )}\left({\displaystyle \frac{r^2e^{\lambda /2}\rho _0^{}}{Ap_0^{}}}\right)^{}{\displaystyle \frac{4c_2\varpi ^2e^{(\lambda 3\nu )/2}}{\rho _0+p_0}}\left[{\displaystyle \frac{e^{(\lambda +3\nu )/2}}{A\nu ^{}\varpi }}(\varpi r^2e^\nu )^{}(\rho _0+p_0)\right]^{}`$ (42)
$`+\left({\displaystyle \frac{2c_1e^\nu }{A\nu ^{}r^2}}\right)[(r^2\varpi e^\nu )^{}]^2,`$
$`G`$ $`=`$ $`8c_3\varpi ^2e^{(\lambda /2\nu )}\left({\displaystyle \frac{r^2e^{\lambda /2}}{\nu ^{}}}\right)^{}+{\displaystyle \frac{4c_2}{\nu ^{}r^2}}(\varpi ^2r^4e^\nu )^{}`$ (43)
$`3c_1\left[re^{\lambda /2}\left({\displaystyle \frac{e^{\lambda /2}\xi _2}{r}}\right)^{}{\displaystyle \frac{3}{2}}{\displaystyle \frac{\varpi ^{}}{\varpi }}\xi _2k_2+{\displaystyle \frac{e^\lambda }{r}}m_2+{\displaystyle \frac{5W_3}{\varpi }}\right]`$
$`{\displaystyle \frac{3m^2}{l(l+1)}}\left[{\displaystyle \frac{\xi _2}{r}}+{\displaystyle \frac{1}{2}}{\displaystyle \frac{\varpi ^{}}{\varpi }}\xi _2+k_2{\displaystyle \frac{5W_3}{\varpi }}\right]\left({\displaystyle \frac{W_1}{\varpi }}+{\displaystyle \frac{6W_3}{\varpi }}\right),`$
$`c_n`$ $`=`$ $`{\displaystyle \frac{l+1}{l^n}}Q_{-}^2+(-1)^{n-1}{\displaystyle \frac{l}{(l+1)^n}}Q_+^2.`$ (44)
In order to solve eq.(40), we introduce a complete set of functions for the operator $`\mathcal{L},`$
$$\mathcal{L}[y_\kappa ]+\kappa y_\kappa =0.$$
(45)
where $`\kappa `$ is the eigenvalue and the eigenfunction $`y_\kappa (r)`$ is characterized by $`\kappa `$, e.g., by the number of its nodes. The eigenvalue is a real number of $`O(\epsilon ^2)`$, since $`\mathcal{L}`$ is a Hermitian operator of $`O(\epsilon ^2)`$. The eigenvalue problem is solved with appropriate boundary conditions. For example, the function should satisfy the regularity condition at the center, and the Lagrangian pressure perturbation should vanish at the stellar surface. Certain initial data $`f_{lm}^{(1)}`$ in eq.(24) can be decomposed in this set. We may restrict our consideration to a single function labeled by $`\kappa `$, since the general case is described by a discrete sum or an integration over a certain range. By putting $`y_\kappa =im\chi f_{lm;\kappa }^{(1)},`$ we have
$$\mathcal{L}[\dot{U}_{lm}^{(1)}]=\mathcal{L}[im\chi f_{lm;\kappa }^{(1)}e^{-im(\mathrm{\Omega }-\chi )t}]=-im\kappa \chi f_{lm;\kappa }^{(1)}e^{-im(\mathrm{\Omega }-\chi )t},$$
(46)
where $`(e^{-im(\mathrm{\Omega }-\chi )t})^{\prime }`$ gives a first-order correction and is neglected here. Using $`f_{lm;\kappa }^{(1)}`$, we can integrate eq.(40) with respect to $`t`$, and obtain the function $`U_{lm}^{(2)}(t,r)`$ of $`O(\epsilon ^2)`$ as
$$U_{lm}^{(2)}=(im\kappa \chi tf_{lm;\kappa }^{(1)}+f_{lm}^{(2)})e^{-im(\mathrm{\Omega }-\chi )t},$$
(47)
where the function $`f_{lm}^{(2)}`$ of $`O(\epsilon ^2)`$ is unknown at this order. The sum of the first and second order forms is approximated as
$`U_{lm}^{(1)}+U_{lm}^{(2)}`$ $`=`$ $`\left[(1+im\kappa \chi t)f_{lm;\kappa }^{(1)}+f_{lm}^{(2)}\right]e^{-im(\mathrm{\Omega }-\chi )t}`$ (48)
$`=`$ $`\left[f_{lm;\kappa }^{(1)}+f_{lm}^{(2)}\right]e^{-im(\mathrm{\Omega }-(1+\kappa )\chi )t},`$ (49)
where we have exploited the freedom in $`f_{lm}^{(2)}`$ to eliminate the unphysical growing term in eq.(48). The value $`\kappa `$, which originates from fixing the initial data, becomes evident for large $`t`$, since the accumulation of small effects from the higher-order terms is no longer negligible. As a result, the frequency should be adjusted with the second-order correction to remain a good approximation even for slightly large $`t`$, as in eq.(49). This renormalization of the frequency is closely related to treating $`t`$ as a strained coordinate in the perturbation method (see e.g. Hinch (1991)). In this way, the specification of the initial data at the leading order influences the second-order correction $`\kappa `$.
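The step from eq.(48) to eq.(49) is the usual removal of a secular term; a minimal numerical sketch (Python with NumPy, with arbitrary placeholder parameter values) shows that the truncated expansion tracks the renormalized exponent only while $`m\kappa \chi t`$ is small:

```python
import numpy as np

# Truncated form (48) versus renormalized form (49); the numbers below are
# arbitrary placeholders, chosen only to illustrate the secular behaviour.
m, Omega, chi, kappa = 2, 1.0, 0.3, 0.05
t = np.linspace(0.0, 40.0, 5)

naive = (1 + 1j * m * kappa * chi * t) * np.exp(-1j * m * (Omega - chi) * t)
exact = np.exp(-1j * m * (Omega - (1 + kappa) * chi) * t)

# The discrepancy grows like (m*kappa*chi*t)**2 / 2, so the truncated
# expansion degrades at large t while the renormalized frequency does not.
print(np.abs(naive - exact))
```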
## 5 Oscillations in barotropic stars
The structure of neutron stars is well approximated as barotropic. The pulsation equation is quite different from that of the non-barotropic case, as in the Newtonian pulsation theory. The relation (32) for the case $`A=0`$ is replaced by
$$\delta p_{l\pm 1m}=C^2\delta \rho _{l\pm 1m}=\frac{p_0^{\prime }}{\rho _0^{\prime }}\delta \rho _{l\pm 1m}=-\frac{\nu ^{\prime }(p_0+\rho _0)}{2\rho _0^{\prime }}\delta \rho _{l\pm 1m}.$$
(50)
In the last part of eq.(50), we have used the hydrostatic equation of the non-rotating star. In this case, the function $`R_{l\pm 1m}`$ is not determined through eq.(32), unlike in the $`A\ne 0`$ case; rather, we have two restrictions on a single function $`U_{lm}^{(1)}`$. These conditions are never satisfied simultaneously except for $`m=\pm l`$, for which one condition is trivial, $`\delta p_{l-1m}=\delta \rho _{l-1m}=0`$, due to $`Q_{-}=0`$. The other condition for $`m=\pm l`$ becomes
$$(e^{\nu /2}\varpi U_{lm}^{(1)})^{\prime }+\left(\frac{\nu ^{\prime }}{2C^2}-\frac{l+1}{2}\frac{(\varpi r^2e^\nu )^{\prime }}{\varpi r^2e^\nu }\right)e^{\nu /2}\varpi U_{lm}^{(1)}=0.$$
(51)
Substituting the form (24) into this and neglecting the higher order term due to $`(e^{im(\mathrm{\Omega }\chi )t})^{}`$, we have the same differential equation for $`f_{lm}^{(1)}.`$ The integration with respect to $`r`$ results in
$`U_{lm}^{(1)}(t,r)`$ $`=`$ $`f_{lm;}^{(1)}e^{-im(\mathrm{\Omega }-\chi )t}`$ (52)
$`=`$ $`\left[N_0(\rho _0+p_0)r^{l+1}e^\nu (\varpi e^\nu )^{(l-1)/2}\right]e^{-im(\mathrm{\Omega }-\chi )t},`$
where $`N_0`$ is a normalization constant. In this way, the lowest-order function is determined. The corresponding 3-velocity at $`t\rightarrow +0`$ is given by
$$\dot{\xi }_\varphi =\delta v_\varphi =N_0r^{l+1}(\varpi e^\nu )^{(l-1)/2}.$$
(53)
As shown by Friedman and Morsink (1998), the canonical energy of the perturbation is negative for the Lagrangian displacement $`\xi _\varphi ,`$ and the solution is therefore unstable once the gravitational radiation reaction sets in.
We now specify $`R_{l\pm 1m}`$ of $`O(\epsilon ^2)`$ to proceed to the pulsation equation with the second order corrections. The function $`X_{l\pm 1m}`$ describing the radial motion is introduced as
$$R_{l\pm 1m}=X_{l\pm 1m}+\frac{3ime^\lambda \xi _2}{r^2}Q_\pm U_{lm}^{(1)}.$$
(54)
The function $`X_{l\pm 1m}`$ is arbitrary at this order, but there are two special choices. One is $`X_{l\pm 1m}=0,`$ which in a sense is the limiting case of eq.(32). The other is chosen so that the Lagrangian change of the pressure vanishes within the entire star. This condition corresponds to $`X_{l\pm 1m}=4S_\pm \varpi e^{\lambda -\nu }\dot{U}_{lm}^{(1)}/\nu ^{\prime }.`$ For the first choice, the Lagrangian change of the pressure never vanishes at the surface unless $`\rho _0`$ vanishes there. The second is the usual treatment of the Newtonian r-mode, as shown by Lindblom and Ipser (1998). With the second choice of $`X_{l\pm 1m}`$ and eq.(52), the pulsation equation for $`m=\pm l`$ leads to
$$\dot{U}_{lm}^{(2)}-im\chi U_{lm}^{(2)}-G\dot{U}_{lm}^{(1)}=0,$$
(55)
where $`G`$ is defined in eq.(43). This equation is solved for $`U_{lm}^{(2)}`$ as in Section 4. The solution up to $`O(\epsilon ^2)`$ is written as
$$U_{lm}^{(1)}+U_{lm}^{(2)}=\left[f_{lm;}^{(1)}+f_{lm}^{(2)}\right]e^{im(\mathrm{\Omega }(1+G)\chi )t},$$
(56)
where the function $`f_{lm}^{(2)}`$ of $`O(\epsilon ^2)`$ is unknown at this order. This expression (56) is formally the same as eq.(49) for the non-barotropic case with $`\kappa =G.`$ The second-order correction to the frequency, however, depends on the position, owing to the particular choice of the function $`f_{lm;}^{(1)}.`$
We show the second-order rotational correction $`G`$ for the $`l=m=2`$ mode in Fig.1. We adopt the polytropic stellar model with index $`n=1.`$ For the Newtonian star, $`G`$ is a positive function which increases monotonically from the center to the surface. The value ranges from $`G=0.55(\mathrm{\Omega }^2R^3/M)`$ to $`G=0.75(\mathrm{\Omega }^2R^3/M),`$ where $`R`$ and $`M`$ are the radius and the mass of the non-rotating star. These values are rather smaller than that of the incompressible case, $`G=37/27(\mathrm{\Omega }^2R^3/M).`$ (Only in the Newtonian incompressible case is the factor $`G`$ a constant; see Appendix.) The stellar deformation $`\xi _2,`$ which is the most important contribution to $`G,`$ diminishes in the compressible fluid. As the star becomes relativistic, other relativistic factors become significant. As a result, the factor $`G`$ is scaled down as a whole, and eventually becomes negative in some regions. In all cases, the frequencies satisfy the criterion of the radiation reaction instability, which implies $`0<(1+G)\chi /\mathrm{\Omega }<1`$; that is, they are retrograde in the rotating frame and prograde in the inertial frame. Therefore, the second-order correction never changes the qualitative picture of the instability.
In Fig.2, we show the frequency range of the r-mode oscillations in the first-order rotational calculation. The upper and lower limits of the continuous spectrum in eq.(26) are shown by two lines. The intermediate values between the two lines are allowed for a fixed model $`M/R.`$ The frequency is a single value $`\sigma _N`$, given by eq.(25), in the Newtonian limit. The frame-dragging effect of relativity broadens the allowed range, as shown by eq.(26). In Fig. 3, the allowed range is shown including the second-order corrections for the extreme case $`\mathrm{\Omega }^2=M/R^3.`$ Even in the Newtonian case, the oscillation frequency is not a single value, since the factor $`G`$ depends on $`r,`$ as seen in Fig.1. The allowed range of the frequency further increases with the relativistic factor. From this result, we expect that the r-mode frequency ranges from $`0.8\sigma _N`$ to $`1.2\sigma _N`$ for a typical neutron star model with $`M/R\sim 0.2.`$
We now examine the effect on the spectrum of the gravitational waves emitted by the r-mode oscillations. For simplicity, we neglect all relativistic corrections except in the frequency, and estimate the spectrum by the Newtonian radiation theory. The dimensionless gravitational amplitude $`h(t)`$ at infinity is determined by evaluating the time variation of the current multipole moment, $`h(t)\propto d^lS_{lm}/dt^l`$ (Thorne (1980)). The current multipole moment $`S_{lm}`$ for the $`l=m=2`$ mode and the velocity (53) is given by
$$S_{22}=N{\displaystyle \int }\rho _0(\varpi e^\nu )^{1/2}e^{-2i(\mathrm{\Omega }-\chi )t}r^6𝑑r,$$
(57)
where the normalization $`N`$ is also included the constant from the integration over angular parts. The Fourier component $`h(\sigma )`$ can be expressed as
$$h(\sigma )={\displaystyle \int }h(t)e^{+i\sigma t}𝑑t\propto {\displaystyle \int }\rho _0(\varpi e^\nu )^{1/2}(\mathrm{\Omega }-\chi )^2\delta (\sigma -2(\mathrm{\Omega }-\chi ))r^6𝑑r.$$
(58)
The spectrum has a finite line breadth, as shown in Fig.4. The spectrum is non-zero only at $`\sigma =\sigma _N`$ in the Newtonian treatment, but is broad in the relativistic one. The width at half maximum is $`\mathrm{\Delta }\sigma /\sigma _N\sim 0.1.`$
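A minimal numerical sketch of eq.(58) (Python with NumPy): each radial shell contributes a $`\delta `$-function at its local frequency, and binning those contributions reproduces a broadened line. The background profiles below are schematic placeholders (with $`e^\nu \approx 1`$), not solutions of the stellar-structure equations.

```python
import numpy as np

# Eq. (58): each shell radiates at sigma(r) = 2*(Omega - chi(r)) for l = m = 2,
# with weight rho_0 * varpi**0.5 * (Omega - chi)**2 * r**6 dr (e^nu set to 1).
R, Omega, l, m = 1.0, 1.0, 2, 2
r = np.linspace(1e-3, R, 4000)
rho0 = 1.0 - (r / R) ** 2                 # placeholder density profile
varpi = Omega * (0.6 + 0.3 * r / R)       # placeholder frame-dragging profile
chi = 2.0 * varpi / (l * (l + 1))         # eq. (21)
sigma = m * (Omega - chi)                 # local emission frequency
weight = rho0 * np.sqrt(varpi) * (Omega - chi) ** 2 * r ** 6

hist, edges = np.histogram(sigma, bins=60, weights=weight)   # ~ |h(sigma)|
sigma_N = m * Omega * (1.0 - 2.0 / (l * (l + 1)))            # eq. (25)
print(f"spectrum spans [{sigma.min():.3f}, {sigma.max():.3f}] "
      f"(Newtonian sigma_N = {sigma_N:.3f})")
```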
## 6 Discussion
In this paper, we have calculated the r-mode oscillations in relativistic rotating stars, neglecting the gravitational perturbations. The evolution can be described by oscillatory solutions which are neutral, i.e., never decay or grow in the absence of dissipation. The oscillation is described not by a single frequency, but by frequencies within a broad range, unlike in the Newtonian case. The reason is that the local rotation rate depends on position due to the frame dragging effect, even for uniform rotation. The r-mode oscillation frequency forms a continuous spectrum within a certain range. This property distinguishes it from the well-known stellar oscillation modes, such as the polar f- and p-modes, for which the frequency spectrum is discrete. The r-mode frequencies lie in the unstable region for the gravitational radiation reaction instability, but it will be an important issue whether or not the different spectrum of the unstable modes leads to different growth, e.g., in the non-linear regime.
###### Acknowledgements.
We would like to thank Prof. Y. Eriguchi for enlightening discussions on this topic. This work was supported in part by the Grant-in-Aid for Scientific Research Fund of the Ministry of Education, Science and Culture of Japan (08640378).
## Appendix A Newtonian limit
In this appendix, we will consider the r-mode oscillations in the Newtonian limit, in which
$$(e^\nu ,e^\lambda ,(\rho _0+p_0)/\rho _0,\varpi /\mathrm{\Omega })\rightarrow 1,(\varpi ^{\prime },m_2,k_2,W_1,W_3)\rightarrow 0.$$
(A1)
Equation (40) reduces to the equation derived by Provost et al. (1981), once the variables are matched. They solved the eigenvalue problem assuming the form $`e^{-i(m\mathrm{\Omega }-\sigma _0(1+\sigma _1))t}`$, where $`\sigma _0`$ of $`O(\epsilon )`$ is given by $`2m\mathrm{\Omega }/(l(l+1))`$. The correction $`\sigma _1`$ of $`O(\epsilon ^2)`$ was determined by solving the eigenvalue problem for the operator $`\mathcal{L}_N=𝒟_N-(F_N+G_N)`$ in the case $`A\ne 0`$,
$$\mathcal{L}_N[y]+\sigma _1y=0,$$
(A2)
with
$`𝒟_N[y]`$ $`=`$ $`4c_3\mathrm{\Omega }^2\rho _0\left[{\displaystyle \frac{r^2}{Ag\rho _0}}y^{}\right]^{},`$ (A3)
$`F_N`$ $`=`$ $`4c_3\mathrm{\Omega }^2\left({\displaystyle \frac{r^2\rho _0^{}}{Ap_0^{}}}\right)^{}{\displaystyle \frac{4c_2\mathrm{\Omega }^2}{\rho _0}}\left[{\displaystyle \frac{\rho _0r}{Ag}}\right]^{}+\left({\displaystyle \frac{4c_1\mathrm{\Omega }^2}{Ag}}\right),`$ (A4)
$`G_N`$ $`=`$ $`4c_3\mathrm{\Omega }^2\left({\displaystyle \frac{r^2}{g}}\right)^{}+{\displaystyle \frac{8c_2\mathrm{\Omega }^2r}{g}}+2c_1r\alpha ^{}+{\displaystyle \frac{2m^2}{l(l+1)}}\alpha ,`$ (A5)
where we have used the gravitational acceleration $`g=\nu ^{\prime }/2`$ and the ellipticity $`\alpha =3\xi _2/(2r).`$ The second-order correction $`\sigma _1`$ exactly corresponds to $`\kappa `$ in eq.(45).
As shown previously (Provost et al. (1981), Saio (1982)), the eigenvalue problem becomes singular for the barotropic case $`A=0`$, since the second-order differential term vanishes. (Recently, Lockitch and Friedman (1998) showed the resolution of the singularity in Newtonian isentropic stars.) The second-order solution (56) in the Newtonian limit reduces to
$$U_{lm}^{(1)}+U_{lm}^{(2)}=\left[N_0\rho _0r^l+f_{lm}^{(2)}\right]e^{-i(m\mathrm{\Omega }-\sigma _0(1+\sigma _1))t}.$$
(A6)
This equation is valid only for $`l=\pm m`$, since the Newtonian counterpart of eq.(50) is never satisfied otherwise. When the correction $`G_N`$ is not a constant, the eigenvalue problem is ill-posed. The function $`G_N`$ indeed depends on $`r`$ for compressible matter, so that Provost et al. (1981) concluded that no solution exists in this case. The exceptional case is the incompressible fluid, for which $`G_N`$ is a constant since $`\alpha =5/4(\mathrm{\Omega }^2R^3/M),g=Mr/R^3.`$ The correction to the frequency for $`l=m`$ is
$$G_N=-\frac{4l}{(l+1)^3}\frac{\mathrm{\Omega }^2R^3}{M}+\frac{2l}{l+1}\alpha ,$$
(A7)
which should be the same value as calculated by Provost et al. (1981), except for a misprint in their expression.
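As an arithmetic cross-check, eq.(A7) for $`l=m=2`$ reproduces the incompressible value $`G=37/27(\mathrm{\Omega }^2R^3/M)`$ quoted in Section 5; a minimal verification in Python, using exact fractions:

```python
from fractions import Fraction

# Eq. (A7) for l = m = 2, with the ellipticity alpha = 5/4 in units of
# Omega^2 R^3 / M, should reproduce G = 37/27 from Section 5.
l = 2
alpha = Fraction(5, 4)
G_N = -Fraction(4 * l, (l + 1) ** 3) + Fraction(2 * l, l + 1) * alpha
print(G_N)                      # 37/27
assert G_N == Fraction(37, 27)
```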
# The X-ray Luminosity Function of Nearby Rich and Poor Clusters of Galaxies: A Cosmological Probe
## 1. Introduction and Background
Much of the work on the luminosity distributions of rich clusters has been motivated by the results of Henry et al. (1992) who found evidence for statistically significant negative evolution in the XLF (i.e. fewer high $`L_X`$ clusters at higher z) at $`z\gtrsim 0.3`$ for $`L_{X(0.3\text{–}3.5\mathrm{keV})}\gtrsim 5\times 10^{44}h_{50}^{-2}\mathrm{ergs}\mathrm{sec}^{-1}`$ from 67 clusters in the Einstein Extended Medium-Sensitivity Survey (EMSS). Recently, Vikhlinin et al. (1998) have confirmed the EMSS result at $`z>0.3`$ for $`L_{X(0.5\text{–}2\mathrm{keV})}>3\times 10^{44}h_{50}^{-2}ergs/sec`$ from a 160 $`deg^2`$ survey from pointed ROSAT fields. They found a factor of 3-4 decrease in the number density of these high $`L_X`$ clusters as compared to a zero-evolution model. Several other studies have claimed no evolution in the XLF out to redshifts as high as $`z=0.8`$ (Burke et al. 1997; Jones et al. 1998; Rosati et al. 1998). However, none of these studies have sufficiently large search volumes to address evolution in the XLF at the highest X-ray luminosities, and thus they do not contradict the original EMSS result.
Of prime importance in any evolutionary study is an accurate determination of the local XLF as a baseline against which to compare the distant cluster XLF. Until recently, even the local XLF was quite poorly constrained due to small cluster numbers. The largest local samples compiled to date are the X-ray Brightest Abell Clusters (XBACS) (Ebeling et al. 1993, 1996) and the Brightest Cluster Sample (BCS) of Ebeling et al. (1997, 1998). The BCS includes 199 X-ray selected clusters down to $`5\times 10^{42}\mathrm{ergs}\mathrm{sec}^{-1}`$ in the $`0.1\text{–}2.4`$ keV band out to $`z\sim 0.3`$. Consistent with most previous claims, no evidence was found for evolution in the XLF within $`z\sim 0.2\text{–}0.3`$ (Ebeling et al. 1998).
We have examined a statistically complete sample of 294 Abell rich clusters within $`z\le 0.09`$ using the ROSAT All-Sky-Survey (RASS) over the energy band $`0.5\text{–}2\mathrm{keV}`$ as part of a multiwavelength study of nearby galaxy clusters. Unlike most other studies, our sample is purely optically-selected within the criteria for inclusion in Abell's catalog. There is some overlap with both the BCS and XBACs samples, with the primary differences being that we have used only Abell's northern catalog (Abell 1958), and that our X-ray flux limit is approximately a factor of 8 lower than that of the BCS sample. Our sample is larger than the BCS while our volume is nearly 30 times smaller. Given our large sample size, we have reduced statistical errors in the local XLF for $`L_X\ge 10^{43}h_{50}^{-2}\mathrm{ergs}\mathrm{sec}^{-1}`$ by up to a factor of 2 compared to previous work. Combined with the poor cluster XLF of Burns et al. (1996) (BLL96), we examine the composite local XLF over more than 3 orders of magnitude in $`L_X`$ in order to understand the cosmological constraints imposed by the tight power-law shape noted in BLL96.
In section 2 we describe the sample and the derivation of the local XLF, and discuss the limitations imposed by our sample selection. In section 3 we compare our new XLF with previous work. In section 4 we explore the consequences of the shape of the local XLF with regard to Press-Schechter analytic predictions of the mass function and possible constraints on $`\mathrm{\Omega }_0`$ and $`\mathrm{\Lambda }`$. We list our conclusions in section 5. We adopt $`H_0=50h_{50}\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$ and $`q_0=0.5`$ when dealing with the observational data.
## 2. The Sample and Derivation of the XLF
Our cluster sample is derived from Abell's Northern catalog, and includes all Abell clusters in the range $`0.016\le z\le 0.09`$ with galactic absorption less than 0.1 magnitudes at R-band ($`\mathrm{log}\mathrm{N}_\mathrm{H}\le 20.73`$). See Voges et al. (1999) and Ledlow & Owen (1995) for more details on the sample selection. The total sample includes 294 Abell clusters. All clusters have measured redshifts, and we include all richness classes in the sample. We calculate a survey area of 14,155 $`deg^2`$, or 34% of the sky. Within our observed volume we find the number density of clusters to be constant as a function of richness class and redshift, suggesting that our sample is nearly complete and volume-limited within the limits of Abell's selection criteria. These findings are consistent with those of Briel & Henry (1993) and Mazure et al. (1996) with regard to the completeness of Abell's catalog over this redshift regime.
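As a quick arithmetic check of the quoted coverage (taking the standard full-sky solid angle of about 41,253 $`deg^2`$):

```python
# 14,155 deg^2 out of a full sky of ~41,253 deg^2 (4*pi steradians)
print(14155 / 41253)   # ~0.343, i.e. 34% of the sky
```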
The X-ray luminosity function was derived from images produced by the RASS as described in Voges et al. (1999). X-ray luminosities were calculated within a metric aperture of 0.75 $`h_{50}^{-1}`$ Mpc in diameter over the energy band 0.5-2 keV assuming a thermal spectrum with $`T=5keV`$. Corrections for missing flux were made according to the prescription of Briel & Henry (1993) (using $`\beta =2/3`$) to produce a total $`L_X`$ for each cluster over our ROSAT band. The primary effect of using a different $`\beta `$ would be to shift the total luminosities to higher or lower values (a larger $`\beta `$ results in a smaller correction, thus lower total $`L_X`$), while not significantly changing the shape or amplitude of the XLF within the error bars.
Voges et al. found a total detection rate of 83% for this sample of Abell clusters. For non-detections, we adopt the $`3\sigma `$ upper limits given in their Table 1. Because of variations in exposure time (and slight variations in galactic absorption) across the sky with the RASS, each cluster has a different flux limit, or maximum volume out to which the cluster could have been detected. We follow the prescription of Avni & Bahcall (1980), and calculate the observed volume separately for each cluster. The volume is evaluated from $`z_{min}`$=0.016 to the maximum redshift at which the cluster could have been detected with $`3\sigma `$ confidence. For clusters with only upper limits to $`L_X`$ we set $`z_{max}`$ equal to the redshift of the cluster. The XLF is then found by calculating $`dn(L)/dL`$ as the sum over all clusters of the inverse of each cluster's maximum search volume. Each binned data point is then found by dividing the above sum by the binwidth ($`\mathrm{\Delta }L_X`$). For the entire sample, we find $`\langle V/V_{max}\rangle =0.56\pm 0.02`$. Error bars on the data points were calculated assuming Poisson statistics following the prescription of Rosati et al. (1998).
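A minimal sketch of the binned $`1/V_{max}`$ estimator just described (Python with NumPy; the luminosities, maximum volumes, and bin edges are placeholder inputs, and the simple quadrature error below stands in for the exact Poisson prescription of Rosati et al. 1998):

```python
import numpy as np

def xlf_vmax(L, V_max, edges):
    """Differential XLF: sum of 1/V_max in each bin, divided by binwidth."""
    phi = np.zeros(len(edges) - 1)
    err = np.zeros(len(edges) - 1)
    for i in range(len(edges) - 1):
        in_bin = (L >= edges[i]) & (L < edges[i + 1])
        dL = edges[i + 1] - edges[i]
        phi[i] = np.sum(1.0 / V_max[in_bin]) / dL
        err[i] = np.sqrt(np.sum(1.0 / V_max[in_bin] ** 2)) / dL  # stand-in error
    return phi, err

# Placeholder data: 10 clusters with made-up luminosities (1e44 erg/s units)
# and maximum search volumes (Mpc^3) between z_min = 0.016 and each z_max.
rng = np.random.default_rng(0)
L = 10 ** rng.uniform(-1, 1, 10)
V_max = 10 ** rng.uniform(5, 7, 10)
print(xlf_vmax(L, V_max, np.array([0.1, 1.0, 10.0])))
```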
## 3. The X-ray Luminosity Function
In Figure 1 we show the differential XLF for our low-redshift cluster sample. Also on this plot are the measurements of BLL96 derived from 49 poor clusters and the BCS sample of Ebeling et al. (1998). The steady decline in volume density observed in our rich cluster sample for $`L_X<10^{43}h_{50}^{-2}ergs/sec`$ can be understood from the limitations of Abell's optical selection criteria. Because $`L_X`$ varies considerably for a given optical richness (Voges et al. 1999), there are a significant number of optically poor clusters with $`L_X`$ in the range of Richness Class 0 clusters which are not in our sample. Thus, our sample is truly volume-limited only for clusters above this cutoff in $`L_X`$. Note, however, that for $`L_X>10^{43}ergs/sec`$, our Abell cluster sample and the BCS sample are in excellent agreement. The BCS also extends to higher $`L_X`$ because of its larger search volume ($`z\le 0.3`$). Our XLF shown in Figure 1 is also consistent with those of Edge et al. (1990) and Briel & Henry (1993).
Fig. 1. The X-ray luminosity function derived from our low-redshift ($`z0.09`$) Abell cluster sample (solid circles), the poor-cluster data points from Burns et al. (1996) (open squares), and the XLF from the Brightest-Cluster Sample (BCS) of Ebeling et al. (1998) (open circles).
The local, differential XLF is remarkably well represented by a power law over more than three orders of magnitude in $`L_X`$. The high-luminosity break in the XLF occurs at $`>10^{45}h_{50}^{-2}ergs/sec`$, and can be seen when we include the highest-luminosity point from the BCS sample. Using the combined XLF of BLL96 and our new determination of the local rich-cluster XLF (for $`L_X>10^{43}h_{50}^{-2}ergs/sec`$), we find a power-law fit of the form $`\varphi (L)=KL_{44}^{-\alpha }`$ where $`L_{44}`$ is the X-ray luminosity in units of $`10^{44}ergssec^{-1}`$ and K is in units of $`10^{-7}Mpc^{-3}L_{44}^{\alpha -1}`$. We find best-fit values of $`\alpha =1.83\pm 0.04`$ and $`K=2.35_{-0.22}^{+0.24}`$. For completeness, we also fit a Schechter function after including the highest-$`L_X`$ point from the BCS. For a fit of the form $`\frac{dN}{dL_X}=A\mathrm{exp}\left(-L_X/L_X^{\ast }\right)L_X^{-\alpha }`$, we find A=$`(2.93\pm 0.14)\times 10^{-7}`$ ($`Mpc^{-3}L_{44}^{\alpha -1}`$), $`L_{X(0.5\text{–}2\mathrm{keV})}^{\ast }=5.49\pm 0.39`$ ($`10^{44}ergs/sec`$), and $`\alpha =1.77\pm 0.01`$. These values are consistent within the errors with the BCS, the RDCS XLF (Rosati et al. 1998) out to z=0.6, and the Southern SHARC survey (Burke et al. 1997) for $`0.3<z<0.7`$. Note that these results do not conflict with the claimed negative evolution in the XLF observed by Henry et al. (1992), and most recently by Vikhlinin et al. (1998) at the highest luminosities.
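For illustration, a fit of this form can be reproduced with a standard least-squares routine; a minimal sketch (Python with SciPy), in which the data arrays are placeholders standing in for the binned composite XLF points rather than the actual measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

# Schechter form used in the text: dN/dL = A * exp(-L/Lstar) * L**(-alpha),
# with L in units of 1e44 erg/s and dN/dL in Mpc^-3 per L_44.
def schechter(L, A, Lstar, alpha):
    return A * np.exp(-L / Lstar) * L ** (-alpha)

# Placeholder binned XLF points with mock scatter and 20% errors.
L = np.array([0.03, 0.1, 0.3, 1.0, 3.0, 10.0])
phi = schechter(L, 2.9e-7, 5.5, 1.77) * (1 + 0.05 * np.sin(L))
sig = 0.2 * phi

popt, _ = curve_fit(schechter, L, phi, sigma=sig, p0=(3e-7, 5.0, 1.8),
                    absolute_sigma=True)
print("A, Lstar, alpha =", popt)
```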
As noted by BLL96, the remarkable power-law shape over such a large range in $`L_X`$ suggests a continuity, in that the bulk X-ray properties of poor clusters must not be fundamentally different from those of richer systems. We explore the consequences of this result in the next section.
## 4. Derivation of the Theoretical XLF
In order to assess the constraints our local XLF imposes on cosmological models, we compare it with various analytic predictions. We proceed by using the Press-Schechter (PS) formalism (e.g., Press & Schechter 1974; Bond et al. 1991) to construct theoretical mass functions and then convert these to XLFs assuming a form for the X-ray mass-to-light ratio (c.f., Evrard & Henry 1991; hereafter EH91).
We begin with the set of cosmological models whose parameters are listed in Table 1. These models form a representative sample of current views, as they include open and flat universes spanning a range in $`\mathrm{\Omega }_o`$. For each model, the rms density fluctuation on 8 $`h^{-1}`$ Mpc scales ($`\sigma _8`$) was determined from the $`\sigma _8`$–$`\mathrm{\Omega }_o`$ relation of Viana & Liddle (1996) which, in turn, was fixed by the local number density of 7 keV clusters. The Hubble constant was chosen to give an age for the Universe of roughly 12.5 Gyr (consistent with globular cluster age determinations; e.g., Chaboyer et al. 1998). For each model we list the relative contributions of matter ($`\mathrm{\Omega }_o`$), baryonic matter ($`\mathrm{\Omega }_b`$), and the cosmological constant ($`\mathrm{\Omega }_\mathrm{\Lambda }`$) to the overall energy density. Power spectra for all the models were generated using the code described in Klypin & Holtzman (1997) and then PS mass functions (with $`\delta _c=1.3`$) were computed at $`z=0`$.
Our PS mass functions can be converted to XLFs by assuming a form for the mass-luminosity relation and correcting to our bandpass. We assume the bolometric X-ray luminosity is related to cluster mass as $`L_{bol}=cM^p`$ and will later fit for the parameters $`c`$ and $`p`$. There exist at least two theoretical predictions for the value of the exponent $`p`$. The self-similar model of Kaiser (1986), derived assuming a power-law initial perturbation spectrum and purely adiabatic gas physics, predicts $`p=4/3`$, but it is well known that this fails to give the correct shape for the XLF (e.g., EH91; see also below). However, pre-heating of the ICM at an early epoch (possibly by galaxy formation) results in a different scaling relation and also resolves several discrepancies between theoretical and observational results concerning evolution in the XLF (e.g., Evrard 1990, Navarro, Frenk & White 1995). For the case of a constant entropy core, EH91 derived a scaling which implies $`p=(10\beta -3)/3\beta `$ where $`\beta =\mu m_p\sigma ^2/kT`$ is the usual ratio of dark matter to gas ‘temperatures’.
We correct our bolometric luminosities to the 0.5-2 keV bandpass by calculating temperatures and applying a correction appropriate for a plasma with a metallicity of $`Z=0.3Z_{\odot }`$. Specifically, the temperature corresponding to a given mass can be calculated from the analytic M–T relation derived from the virial theorem (e.g., Bryan & Norman 1998): $`kT=\frac{1.39}{\beta }\left(\frac{M}{10^{15}M_{\odot }}\right)^{2/3}\mathrm{\Delta }_c^{1/3}h^{2/3}\mathrm{keV}`$ where $`\mathrm{\Delta }_c`$ is the current density contrast within the cluster virial radius. The luminosity in our bandpass is then calculated by applying the usual bremsstrahlung correction factor as well as a multiplicative factor to account for the presence of metals (Bryan & Norman 1998; eqn. 21).
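The conversion from mass function to XLF is a change of variables; here is a minimal sketch of the Jacobian step (Python with NumPy), using a toy mass function in place of the actual Press-Schechter output of the Klypin & Holtzman code, with placeholder values of $`c`$ and $`p`$:

```python
import numpy as np

# dn/dL = (dn/dM) / (dL/dM), with L = c * M**p  =>  dL/dM = c * p * M**(p-1).
def xlf_from_mass_function(M, dn_dM, c, p):
    L = c * M ** p
    return L, dn_dM / (c * p * M ** (p - 1))

# Toy mass function standing in for the computed PS curves (placeholder).
M = np.logspace(13, 16, 200)                             # solar masses
dn_dM = M ** -2.0 * np.exp(-(M / 5e15) ** (2.0 / 3.0))

L_bol, dn_dL = xlf_from_mass_function(M, dn_dM, c=1e-3, p=3.2)
# A bandpass factor L_band(T(M)) / L_bol would multiply L_bol here (see text).
```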
Using the relation for $`L_{bol}`$ and the bandpass correction, we converted our PS mass functions to differential luminosity functions and made $`\chi `$-squared fits to a subset of the observational data. The observational points used in the fits are all four poor cluster points (BLL96), the five highest-luminosity Abell cluster points, and the highest-luminosity BCS point from Figure 1. We first set $`\beta =1`$ in the M–T relation and fit for $`c`$ and $`p`$. The fitted value for $`p`$ is included in Table 1 and examples of two of the fits are shown in Figure 2. The dashed curve in Figure 2a is the best fit when the exponent is kept fixed at the analytic prediction $`p=4/3`$. Clearly, the shape of the XLF derived using this prediction is in gross disagreement with the observed function. Figure 2b also shows the importance of the low- and/or high-luminosity data points. If only our five Abell cluster data points are used (dotted line), the fitted value of $`p`$ increases by at least 0.2 in all cases (from $`p=3.18`$ to $`p=3.88`$ in this case). We get virtually identical results if we redo our fits without the BCS point, whereas dropping the poor cluster points results in slightly greater discrepancies.
Fig. 2. Fits to the observed XLF for two models. The solid lines are the best fits to a subset of the observational XLF points for models $`\mathrm{\Lambda }`$CDM1 (left panel) and $`\mathrm{\Lambda }`$CDM5 (right panel). The dashed line in the left panel is the best-fit to the data assuming $`p`$=4/3 as predicted by the self-similar analytic models. The dotted line in the right panel represents the best-fit to the data when the BCS and poor cluster points are ignored.
If we invoke the constant entropy core model of EH91, then the exponent in the mass-luminosity relation is actually a function of $`\beta `$ ($`p=(10\beta -3)/3\beta `$). In this case, we fit for $`c`$ and $`\beta `$ and find the values listed in Table 1. Interestingly, only the models with $`0.1<\mathrm{\Omega }_0<0.4`$ are consistent with the expected value $`\beta \simeq 1`$. \[A recent observational analysis found $`\beta =0.94\pm 0.08`$ (Lubin & Bahcall 1993), which is in good agreement with numerical results (e.g., Eke, Navarro, & Frenk 1998).\] Thus, if the constant entropy core model of EH91 applies, the present-day XLF observations suggest a low-density universe but cannot distinguish between open and flat cases.
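For reference, the EH91 scaling evaluated at the canonical value, and its inversion mapping a fitted $`p`$ back onto $`\beta `$ (a one-line check in Python):

```python
# EH91 constant-entropy-core scaling: p = (10*beta - 3) / (3*beta).
def p_EH91(beta):
    return (10 * beta - 3) / (3 * beta)

print(p_EH91(1.0))                   # 7/3 ~ 2.33 for the canonical beta = 1
# Inverting gives beta = 3 / (10 - 3*p), so each fitted p corresponds to a
# definite beta; values far from 1 disfavour the model for that cosmology.
print(3 / (10 - 3 * p_EH91(1.0)))    # recovers beta = 1
```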
## 5. Conclusions
Starting from an optically-selected, statistical sample of Abell clusters, we have made a new determination of the local XLF to compare with previous work and more distant cluster samples. Our cluster sample is larger than those of all previous studies, and is contained within a smaller volume. For this reason, we have reduced statistical uncertainties in the local XLF by nearly a factor of two for a limited range in $`L_X`$ ($`L_X>10^{43}h_{50}^{-2}ergs/sec`$). It is only for $`L_X<10^{43}ergs/sec`$ that incompleteness due to the optical selection of our sample is apparent. The observed incompleteness is not a failing of Abell's catalog, but rather results from the contribution of poor clusters and groups below Abell's richness limit.
Combined with the poor-cluster XLF of BLL96, we have examined the local XLF over nearly three orders of magnitude in $`L_X`$. We find that the local XLF is remarkably well represented by a power law over nearly this entire range in $`L_X`$. This is significant evidence that hierarchical formation results in similar cluster properties over a large range in $`L_X`$ and mass. Including the brightest $`L_X`$ clusters from the BCS sample, which fall above the break in the XLF at $`L_X>10^{45}h_{50}^{-2}ergs/sec`$, we also performed a Schechter-function fit which is in good agreement with other recent surveys to much higher redshift ($`z<0.7`$), confirming a lack of significant evolution at these luminosities.
We have used our new local XLF to derive a constraint on $`\mathrm{\Omega }_0`$. This would appear to contradict a common claim that the $`\sigma _8\mathrm{\Omega }_0`$ degeneracy can be broken only by including the evolution with redshift (e.g. Bahcall & Fan 1998). In fact, PS mass functions for combinations of $`\sigma _8`$ and $`\mathrm{\Omega }_0`$ that satisfy a $`\sigma _8\mathrm{\Omega }_0`$ constraint differ in shape. Borgani et al. (1999) have recently used the shape of the local XLF in order to constrain $`\sigma _8\mathrm{\Omega }_0`$ and the shape of the L-T relation. Including clusters at higher redshift, they concluded that $`\mathrm{\Omega }_0=0.4_{-0.2}^{+0.3}`$ for open models, and $`\mathrm{\Omega }_0\lesssim 0.6`$ for flat models assuming no evolution in the L-T relation; both of these are consistent with our results. In this work, we have used the shape of the local XLF, the local number density of 7 keV clusters, and the PS formalism in order to constrain the cluster M-L relation, $`L_X\propto M^p`$. There is a clear trend for $`p`$ to increase with $`\mathrm{\Omega }_0`$ (see also Mathiesen & Evrard 1998). None of the theoretical models are consistent with the analytic prediction $`p=4/3`$ from Kaiser (1986). If we adopt the constant core-entropy model of EH91, and the additional constraint that $`\beta \approx 1`$, the shape of the local XLF suggests that $`0.1\lesssim \mathrm{\Omega }_0\lesssim 0.4`$, with no constraint on $`\mathrm{\Lambda }`$.
Acknowledgements
This work was supported in part by NASA grants NAG5-6739 and NAGW-3152, and NSF grant AST-9896039. We thank Anatoly Klypin and Jon Holtzman for use of their code and useful discussions. We also thank Neta Bahcall and an anonymous referee for helpful suggestions. JOB and FNO thank MPE for their hospitality during several visits.
|
no-problem/9903/astro-ph9903482.html
|
ar5iv
|
text
|
# The Intra-cluster Medium Influence on Spiral Galaxies
## 1 Introduction
The study of cluster galaxies helps us understand and constrain models of galaxy evolution. Recently, several authors have used samples of thousands of galaxies to outline the main global properties of galaxies in clusters (see Adami et al. 1998 and references therein), such as the relation between galaxy type and environment or cluster-centric distance. Some other studies (Rubin et al. 1988 and Whitmore et al. 1988: RWF hereafter; Amram et al. 1992 to 1996: hereafter AmI, II, III, IV and V; or Sperandio et al. 1995: Sp hereafter) have made a more precise analysis (although with smaller samples) of the spiral morphological types to determine if their rotation curves are related to the cluster-centric distance. However, these last studies suffer from an evident bias: they are based on a bidimensional analysis, and projection effects could explain the contradictory results among the authors. We analysed the sample of 45 galaxies observed by Amram et al. (AmI, AmIII and AmIV) in various clusters and used a statistical deprojection technique (based on the density profile of each of the spiral morphological types) to extract the main properties of the spiral galaxies in clusters and to constrain the models. In sections 2 and 3 we describe the samples and the methods used to discard interlopers. Section 4 describes the deprojection technique and the results. We discuss the implications in the last section.
We have assumed here H<sub>0</sub>=75 km.s<sup>-1</sup>Mpc<sup>-1</sup>.
## 2 The sample
We use here the shape of the Rotation Curve (RC hereafter) of spiral galaxies to study the effect of the intra-cluster medium on the halos of these galaxies. Amram et al. (AmI, AmIII and AmIV) observed 45 cluster galaxies with a scanning Fabry-Perot interferometer and with 3.6m telescopes (CFHT or ESO) at H$`\alpha `$ wavelength. They derived detailed velocity fields with high resolution (both spectral and spatial) and obtained the RCs with a much better precision than can be achieved with slit spectroscopy. A detailed comparison of both techniques is given in AmI.
The shape of the outer part of the RCs is quantified by the outer gradient "OG", defined as the difference between the velocity at 0.8R<sub>25</sub> and that at 0.4R<sub>25</sub>, normalized to the maximum velocity of the RC. R<sub>25</sub> is the optical radius of a galaxy, defined as the radius where the surface brightness falls to 25 mag/arcsec<sup>2</sup>.
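For concreteness, a minimal sketch of this measurement is given below. It assumes a smooth, tabulated RC, linear interpolation, the sign convention in which rising curves give positive OG, and a percent normalization; the toy curve is purely illustrative.

```python
import numpy as np

def outer_gradient(r, v, r25):
    """OG as defined above (a sketch): velocity difference between
    0.8*R25 and 0.4*R25, normalized to the maximum rotation velocity
    and expressed in percent (positive for a rising curve)."""
    v04 = np.interp(0.4 * r25, r, v)   # linear interpolation of the RC
    v08 = np.interp(0.8 * r25, r, v)
    return 100.0 * (v08 - v04) / v.max()

# Toy rotation curve: radii in kpc, velocities in km/s
r = np.linspace(0.5, 15.0, 50)
v = 200.0 * (1.0 - np.exp(-r / 3.0))   # rising, then flattening
print(outer_gradient(r, v, r25=12.0))  # positive OG for a rising RC
```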
From the total sample of 45 galaxies, Amram et al. could measure 39 OGs (AmV). In some cases the OG was obtained by a slight extrapolation of the RC. These 39 galaxies are listed in Table 1, together with the corresponding OG and the distance parameters that are discussed in section 4.
Although the OG is perhaps not the best parameter to characterize the tendency of a rotation curve to rise or decline beyond the optical radius, it has been used by many authors and thus serves as a common reference. It is also easy to compute as soon as the RC of a galaxy is available.
In order to get a sample as clean as possible, we decided to discard the galaxies for which the OG was obtained with too large an extrapolation. We only kept those for which the RC reached at least 0.7R<sub>25</sub>. We thus excluded WR 42 and NGC 4921. The original data for NGC 4921 had been obtained in rather bad weather conditions and were of poor quality, providing a RC with large error bars and an abnormally large OG (22). However, we re-observed it at CFHT in 1995 in better conditions, with the same instrument, and now find a flat curve with an OG close to zero. It nevertheless remains the result of an extrapolation, since the RC barely reaches 0.6R<sub>25</sub>.
We also discarded galaxies with a high extinction: we thus removed the galaxies with an inclination greater than 75 degrees, namely NGC 669 and UGC 4386 (both having an inclination of 80 degrees).
We also removed interacting galaxies to avoid effects of the interaction on the RC. Six galaxies were thus excluded (see appendix for details).
This severe selection finally limits the total sample to 29 galaxies, marked with a * in Table 1. They were homogeneously selected in 8 distinct nearby clusters and have morphological types lying between 2 and 6 (following the RC3 classification: de Vaucouleurs et al. 1989), although six of them have no clearly defined type.
In this respect the case of DC 10 deserves an explanation. Although it is referred to as a type 1 galaxy in the RC3, its type is most probably around 4, as suggested in AmIV, and we decided to attribute no definite type to this galaxy. Furthermore, its OG is abnormally high (32), although it successfully passed all the selection criteria, placing it clearly outside our average distribution of points on Fig. 1 (its d2D and d3D values are respectively 0.11 and 1.3 Mpc). We have no explanation for this, but we suspect that this galaxy suffers from projection effects that are impossible to correct, given its apparent position very close to the center of its cluster.
We also checked the projected cluster-centric distances given by the authors by using an X-ray determination of the center whenever possible (for A262, A539, A1656, A2151 and Pegasus). The differences between the centers used by the authors and the X-ray centers are quite small (typically less than 0.1 Mpc) and may be neglected with respect to the cluster-centric distances used here (see Tab. 1).
## 3 Other samples of Rotation Curves
We looked through the literature for other RCs of cluster galaxies, in order to enlarge our set of OG values. The other main samples of RCs were all obtained through slit spectroscopy. We now discuss these samples.
Mathewson et al. (1992): Of the 965 spiral galaxies for which they give H$`\alpha `$ RCs, only 261 belong to clusters. Applying our selection criteria (i.e. removing overly inclined galaxies, interacting ones, and those whose RCs do not reach 0.7R<sub>25</sub>), we keep fewer than 100 galaxies. Many of the remaining galaxies exhibit RCs with strong asymmetries or dispersions and are not suited for obtaining reliable OG values. Furthermore, most of them are found in loose clusters, which are not useful for the study of environmental effects. In the end we are left with fewer than 10 galaxies in rich clusters (some of them already in our own sample: AmI + AmIII + AmIV), and we decided not to use this sample.
Mathewson and Ford (1996): They added 1051 H$`\alpha `$ RCs of galaxies to the previous sample. Applying the same selection criteria, we keep fewer than 40 galaxies. Furthermore, it was hard to get reliable values of OG from the published curves, and we decided not to use this sample either.
Persic and Salucci (1995): This is a subsample of Mathewson et al. containing 80 high quality RCs. Most of them are field galaxies and, after rejecting those that are too inclined, only a few galaxies in loose clusters remain. Hence, we did not use this sample.
Courteau (1997) : The sample contains 304 field galaxies from the UGC catalogue, but none in clusters.
Corradi and Capaccioli (1991): This catalogue of kinematical data for 245 galaxies is based on kinematical studies found in the literature. It is not a compilation of RCs, but it provides an interesting classification of the shape of the RC around the optical radius, with three classes: rising, flat and decreasing. We use these data in section 5 (discussion and conclusion).
Dale et al. (1997, 1998): Dale published a huge sample of 522 RCs of late-type cluster galaxies in his PhD thesis. The data, now completed by the morphological type information, are being prepared for publication. The first two papers already published contain 145 RCs of cluster galaxies, of which 89 remain after removing highly inclined ones and those with abnormal radial velocities. The use of these RCs is not straightforward, however, since no R<sub>25</sub> is available for these galaxies (the parameters given by the authors are R<sub>23.5</sub> and R<sub>83</sub>, the radius of the isophote containing 83% of the total emission in the I band), so there is no direct way of getting OG from these data. Moreover, some of these 89 galaxies have a RC not extended enough to allow a determination of OG.
Almost all of the samples listed above were intended to check the Tully-Fisher relation. Let us now examine the samples specifically devoted to the study of OG in cluster galaxies:
Rubin et al. (1988) and Whitmore et al. (1988): They measured 16 OGs, 10 for early type galaxies (Sa + Sb) and 6 for late type galaxies (Sc). Most of them were remeasured with more accuracy, owing to the scanning Fabry-Perot, by AmI and AmIII. Only 2 galaxies satisfying our selection criteria could be added to the sample discussed in this paper, namely UGC 12417 and WR 66.
Distefano et al. (1990): They measured 15 OGs of galaxies, most of them in the Virgo cluster. For the 9 OG values that were obtained by a combination of H$`\alpha `$ and HI data, the optical RC could not be drawn far enough out, and we discarded them. Finally, there are only 5 galaxies from this sample (1 early and 4 late) satisfying our selection criteria: NGC 4254, NGC 4294, NGC 4501, NGC 4651 and NGC 4654.
Sperandio et al. (1995): They measured OGs of galaxies in the Virgo cluster (2 early and 16 late), 4 of which (2 early and 2 late) satisfy our selection criteria: NGC 4178, NGC 4480, NGC 4639 and NGC 4713.
From the 3 above samples, and according to our selection criteria, it is possible to add 11 galaxies to our own sample of 29 galaxies from Fabry-Perot observations discussed in section 2. We did not plot them on Fig. 1 for the sake of homogeneity (the 11 extra galaxies having been observed through slit spectroscopy), but it is worth noting already that these additional data reinforce the conclusions reached from the analysis discussed hereafter (see end of section 4).
## 4 Analysis
### 4.1 Deprojection
We have used the results of Adami et al. (1998) to deproject statistically the cluster-centric distances of our sample of galaxies. The method is based on the distribution profile of the spiral galaxies. Using a very large sample of about 2000 galaxies in 40 clusters, they have shown that the Sa+Sb galaxies (resp. the Sc+Sd+Sm+Irr galaxies) follow a King profile with a core radius of 0.212 Mpc (resp. 0.263 Mpc). We have assumed these profiles for the present galaxies and, via the Abel inversion, we have statistically deprojected these King profiles (they have the same core radius in 2 or 3 dimensions). We note that the values we get are only statistical estimations and must not be interpreted as reliable values for individual galaxies.
For each galaxy we have made 10<sup>6</sup> random realizations of the spatial cluster-centric distance according to the given 3D King density law. We have kept only the distances greater than the observed projected distance (the projected distance in the sky plane being always lower than the 3D distance). The dispersion of these distances gives an estimation of the error for each galaxy (given the size of this error, some of our galaxies are consistent with a central location in the cluster).
Finally, we have imposed a maximum radius for the cluster equal to 4 Mpc. This is about the maximum virial radius observed for clusters (see e.g. Carlberg et al. 1996). A larger distance would imply that the isothermal sphere model (and hence the deprojection method) is not valid.
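A minimal sketch of this Monte Carlo deprojection is given below. It assumes the late-type core radius quoted above and the 4 Mpc truncation, and uses rejection sampling of the 3D King radial law restricted to radii beyond the projected distance; the input distance is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def deproject(d2d, rc=0.263, rmax=4.0, n=10**6):
    # Draw candidate 3D radii beyond the projected distance d2d and
    # accept them with probability proportional to the 3D King radial
    # law, p(r) ~ r^2 * (1 + (r/rc)^2)^(-3/2), truncated at rmax (Mpc).
    r = rng.uniform(d2d, rmax, size=n)
    p = r**2 * (1.0 + (r / rc) ** 2) ** -1.5
    keep = rng.uniform(0.0, p.max(), size=n) < p   # rejection sampling
    r = r[keep]
    return r.mean(), r.std()      # mean d3D and its dispersion

d3d, sigma_d3d = deproject(0.5)   # a galaxy projected at 0.5 Mpc
print(f"d3D = {d3d:.2f} +/- {sigma_d3d:.2f} Mpc")
```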
We note that, owing to the large number of realizations, the value computed here is similar to the analytical mean weighted by a truncated King profile. Two galaxies with the same projected cluster-centric distance will thus have the same deprojected distance. To take this degeneracy into account, we have used a second deprojected distance: a single value produced by a random generator and weighted by the King profile (taking into account the 2D constraint), as suggested by the referee. This gives different values of the deprojected distance for a given projected distance, and allows us to evaluate the robustness of the two methods.
In Table 1 we give, for each galaxy, the mean likely 3D distance from the center of the cluster (hereafter d3D) computed from the observed projected distance in the sky (d2D), the second estimation of the 3D distance (d3DII), and the error on d3D ($`\sigma `$d3D).
The adopted maximum radius of 4 Mpc led us not to apply any deprojection to the galaxies with projected distances greater than this radius: they were considered to be field galaxies in the present paper (this is the case for the 3 outermost galaxies of our sample).
### 4.2 Results
Fig. 1 shows the variation of OG as a function of d3D for the 39 galaxies of AmV having a measured OG. The plot is the same as Fig. 13 of AmV, with d3D instead of d2D (only one OG value has changed in the meantime, that of NGC 4921, which is now 0 according to the new data obtained at CFHT).
AmV found no correlation between OG and the distance to the cluster center. Using the d3D distance does not change this result since, as explained below, it may only amplify an already existing gradient.
With the selected sample of 29 galaxies following the selection criteria discussed in section 2 (that is to say, discarding the galaxies plotted as simple points on Fig. 1), there is still no clear tendency of OG to vary with distance, especially since the dispersion is quite large.
Discarding now the galaxies with no defined type (23 galaxies remaining, shown with open and filled circles), things become clearer and there is a marked tendency of OG to increase with distance.
OG is found to increase as a function of distance following the relation:
OG = (35.8$`\pm `$20.8)$`\times `$log10(d3D) + (-8.9$`\pm `$18.3)
N.B. This linear relation uses the bisector method from Isobe et al. (1990). The value of the Spearman’s rank correlation (Press et al. 1992) is 0.11.
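As an illustration of these estimators, the sketch below computes the OLS-bisector slope (following the prescription of Isobe et al. 1990, without their error formulae) and the Spearman rank correlation; the data arrays are toy values, not the measurements of Table 1.

```python
import numpy as np
from scipy import stats

def ols_bisector(x, y):
    """OLS bisector slope and intercept (Isobe et al. 1990 prescription,
    error estimates omitted in this sketch)."""
    b1 = np.polyfit(x, y, 1)[0]          # OLS(Y|X) slope
    b2 = 1.0 / np.polyfit(y, x, 1)[0]    # OLS(X|Y) slope, as dY/dX
    slope = (b1 * b2 - 1.0
             + np.sqrt((1.0 + b1**2) * (1.0 + b2**2))) / (b1 + b2)
    intercept = y.mean() - slope * x.mean()
    return slope, intercept

# x = log10(d3D), y = OG (illustrative numbers only)
x = np.log10(np.array([0.3, 0.6, 1.0, 1.5, 2.5, 3.5]))
y = np.array([-15.0, -5.0, 2.0, 8.0, 12.0, 20.0])
slope, intercept = ols_bisector(x, y)
rho, pval = stats.spearmanr(x, y)
print(slope, intercept, rho)
```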
Using the d3DII estimator, we get consistent results, with however a lower value of the slope: (23.2$`\pm `$21.9). The two results being consistent, we have chosen to use hereafter only the d3D statistical estimator.
Using the more usual projected distance d2D, instead of the corrected spatial distance d3D, one finds:
OG = (12.6$`\pm `$9.8)$`\times `$log10(d2D) + (3.1$`\pm `$8.7)
The measured slope is about three times steeper when using the deprojected distance. This is because the deprojection method significantly increases the distance of galaxies close to the cluster center, thus shortening the distance range and amplifying any gradient of OG as a function of distance. We remark that the deprojection method, when applied to galaxies at the same projected distance, will place late types slightly farther out than early types, although the difference is not significant (for instance this difference is less than 3% for d2D = 2 Mpc, whereas it would be more than 10% between ellipticals and late types).
The slope found here with the 2D distances (with the sample limited to galaxies of well defined type) is significant, although it is about three times smaller than the slope originally claimed by RWF.
Let us now look at the behaviour of galaxies depending on their type:
On Fig. 1 one can see that OG remains constant for early type galaxies, whatever their location within the cluster, whereas for late type ones there is a marked tendency of OG to increase with the distance to the cluster center.
The linear relation found for early type galaxies (bisector method from Isobe et al. 1990) is (with Spearman’s rank correlations respectively equal to 0.21 and 0.19 for the early and late types, indicating a better correlation):
OG = (-0.10$`\pm `$0.14)$`\times `$log10(d3D) + (0.55$`\pm `$3.3)
and that for late type ones:
OG = (38.6$`\pm `$18.4)$`\times `$log10(d3D) + (-9.13$`\pm `$16.2)
Both relations are plotted respectively as a dotted line and a dashed line on Fig. 1.
N.B. The 11 galaxies (4 early and 7 late) discussed at the end of section 3 (observed through slit spectroscopy by other authors) show the same trend, although with a high dispersion since this is a very small sample, and the tendency remains the same when adding them to our own sample, leading to the following linear relations:
OG = (4.7$`\pm `$15.3)$`\times `$log10(d3D) + (4.00$`\pm `$9.0) for the early types
OG = (32.9$`\pm `$20.3)$`\times `$log10(d3D) + (-6.98$`\pm `$13.1) for the late types
## 5 Discussion and conclusion
We have shown, from a selected homogeneous sample of 29 spiral galaxies, that OG increases with the 3D cluster-centric distance with a significant slope for late type galaxies, whereas early type ones show no particular trend.
Looking at the morphological types, one can see that the early types dominate in the inner parts of the clusters, whereas late types are found in the outer parts (see open and filled circles on Fig. 1).
This segregation phenomenon between early type and late type spirals has been demonstrated by Adami et al. (1998) from the analysis of a sample of 2000 galaxies in 40 clusters. It appears as the natural extension of the segregation already well known for elliptical, S0 and spiral galaxies in clusters (e.g. Melnick and Sargent, 1977; Dressler, 1980; Whitmore and Gilmore, 1991; Stein, 1997). This distribution of galaxies, depending on their type, may explain the observed tendency for OG to increase with the distance to the center of a cluster, since there is a correlation between the morphological type and the shape of the RC, as shown by Corradi and Capaccioli (1990). They have shown indeed, from a sample of 167 galaxies, that flat curves are associated with the earlier types and rising curves with the late types. We confirm this effect when analysing the larger sample (245 galaxies) catalogued by Corradi and Capaccioli (1991), for which they give $`\sim `$ 200 shapes of RCs.
Fig. 2 shows the histogram of the distribution of types for the 73 rising RCs of that sample (there are only 14 decreasing curves, evenly distributed among the different types, and 112 flat curves, for which the histogram is practically the complement of Fig. 2). Among the 73 galaxies with rising RCs, 15 are isolated and may be considered as field galaxies, while 58 are found in groups or clusters. On Fig. 2 we have indeed plotted two histograms side by side, one for the 51 field galaxies (hatched areas) and another for the 148 cluster galaxies (gray areas). The same trend is observed in both cases, namely that the percentage of rising curves increases regularly from early to late type spirals (we also checked this on our own sample of 39 cluster galaxies).
This suggests that the value of OG is more dependent on the type of a galaxy than on its evolution within a cluster. We conclude that the correlation between OG and the morphological type of galaxies seen on Fig. 1 mainly reflects the importance of the dark halo, increasing with the type.
Then the tendency of OG to increase with the distance to a cluster center, although remaining a controversial subject, could mainly reflect the morphological segregation of galaxies, those with larger values of OG being found preferentially in the outer parts of the clusters.
We now speculate on why there is a marked tendency of OG to increase with distance for late type spirals, while the OG of early type spirals remains around zero.
Our results suggest that the effects of evolution within clusters predominantly affect late type galaxies, since the shape of their RC depends more clearly on their position within the cluster than that of the early type galaxies.
We propose the following scenario:
Galaxies in a cluster undergo interactions which make them evolve. Those getting closer to the center will experience a larger number of interactions, thus losing a significant fraction of their halo and exhibiting less steeply rising RCs. This is why OG is found to be smaller in the central part of the clusters. Finally, as time elapses, galaxies gradually reach more stable orbits closer to the cluster center while they evolve toward earlier types, as suggested by the trend observed in both the distribution and the velocity dispersion profiles of the different morphological types by Adami et al. (1998). The early type galaxies have almost finished their evolution within the cluster, now remaining on more circular orbits (also closer to the center) than the late type ones. Their behaviour is more homogeneous and they show no special trend. The late type galaxies are still evolving, those closer to the cluster center being more evolved and displaying smaller OG values, because they have experienced a larger number of interactions. As time passes they will eventually turn into early type galaxies.
A question remains, however: early types being closer to the center should experience more interactions, hence losing more material from their halo and exhibiting more strongly decreasing RCs. On the contrary, we have seen that their OG remains around zero and does not show any significant variation with the distance to the cluster center. This could be because they are moving in the denser parts of the cluster, hence accreting material that more or less compensates the fraction of halo lost during interactions. Another possible explanation is that we have no type 1 (Sa) nor type 0 (S0a) galaxies in our sample (because the data we used are based on emission-line measurements), which may prevent us from seeing a tendency that could show up more clearly with very early types.
Besides explaining the tendency of OG to increase with the distance to the cluster center, the proposed scenario would also explain why the behaviour is different for early and late type galaxies.
Our severe selection criteria, requiring high quality RCs of cluster galaxies, led us to work on a rather limited sample. The main result of our study, namely that early type galaxies exhibit flat RCs whatever their location in the cluster, while late type ones have increasingly rising RCs the farther they are from the cluster center, needs however to be confirmed with larger homogeneous samples.
###### Acknowledgements.
AC acknowledges the staff of the Dearborn Observatory for their hospitality during his postdoctoral fellowship. DR acknowledges the staff of the Anglo Australian Observatory for their hospitality during her postdoctoral fellowship; she thanks the French Ministry of Foreign Affairs for its support through a Lavoisier grant. The authors thank the referee for very useful and constructive comments. The authors thank M. Ulmer for a detailed reading of the revised manuscript.
## Appendix A Appendix: Discussion on the six interacting galaxies removed from the sample
Z 160-106: the interaction with its companion may explain the strange shape of the RC, with a very high value of OG, found in AmI; it is also quite surprising to find H$`\alpha `$ emission in this type of object, which is the only type -2 galaxy in our sample, making it all the more suspect.
DC 47: probably an interacting pair as explained in AmIV.
NGC 6045: already at the limit of being excluded for its high inclination, 75 degrees, it appears warped with a companion at the end of the eastern arm, see AmI.
NGC 6050 / IC 1179: they appear to be an interacting pair, although they have flat RCs. Since there is a 1500 km.s<sup>-1</sup> difference in systemic velocities, we suggest that this may be a chance superposition, see AmIII.
NGC 3861: has a close companion superimposed on a spiral arm, see AmIII.
|
no-problem/9903/astro-ph9903113.html
|
ar5iv
|
text
|
# BeppoSAX Detection and Follow-up of GRB980425
## 1 Introduction
The GRB of 1998 April 25, detected both by the BeppoSAX GRBM and BATSE and localized with arcminute accuracy by the BeppoSAX WFC, stands out for its spatial and temporal coincidence with the Type Ic supernova SN 1998bw (Galama et al. 1998; Kulkarni et al. 1998), exceedingly bright at both optical and radio wavelengths, in the nearby galaxy ESO 184-G82 ($`z=0.0085`$). Since the other GRBs for which a redshift measurement is available are located at larger distances ($`z\stackrel{>}{}1`$) and are characterized by power-law decaying optical afterglows, in agreement with the “classical” fireball model (Rees & Mészáros 1992), this has raised a debate about a possible association between GRBs and supernovae. Following the detection of GRB980425, observations of its error box with the BeppoSAX NFI were activated 10 hours, one week, and six months later. We present here some results and discuss their implications in view of the detection of SN 1998bw in the GRB field. A detailed presentation will be given in Pian et al. (1999).
## 2 Data analysis and results
GRB980425 triggered the BeppoSAX GRBM at 21:49:11 UT, and was simultaneously detected by the WFC unit 2 (Soffitta et al. 1998). The event had a duration of 31 s in the range 40-700 keV and of 40 s in the range 2-26 keV, and exhibited a single, unstructured peak profile in both bands (Fig. 1). The fluences at $`\gamma `$\- and hard X-ray energies are ($`2.8\pm 0.5`$) $`\times 10^{-6}`$ erg cm<sup>-2</sup> and ($`1.8\pm 0.3`$) $`\times 10^{-6}`$ erg cm<sup>-2</sup>, respectively. (The Galactic absorption in the direction of GRB980425, $`N_{HI}=4\times 10^{20}`$ cm<sup>-2</sup>, is negligible at energies higher than 2 keV.) The BeppoSAX NFI were pointed at the 8′-radius error box determined by the WFC at three epochs, starting 10 hours after the GRB (see Table 1; note that the first pointing has been split into two parts). The preliminary analysis of the LECS and MECS data of the first portion of the first pointing shows that, inside the WFC error box, two point-like, previously unknown X-ray sources are detected with a positional uncertainty of 1.5′: 1SAXJ1935.0-5248 (hereafter S1), at RA = 19h 35m 05.9s and Dec = -52° 50′ 03′′, and 1SAXJ1935.3-5252 (hereafter S2), at RA = 19h 35m 22.9s and Dec = -52° 53′ 49′′. Note that the coordinates distributed by Pian et al. (1998) were revised in November 1998 (see in this regard Piro et al. 1998). The revised position of S1 is consistent within the uncertainty with the position of the optical and radio supernova SN 1998bw (Galama et al. 1998; Kulkarni et al. 1998), while the revised position of S2 is 4′ away from SN 1998bw, and therefore inconsistent with it (see Fig. 1 in Galama et al. 1999). The MECS count rates and upper limits for both sources during the three pointings are reported in Table 1. The upper limits have been estimated by taking into account, besides the normal photon statistics, the fact that, at these flux levels, the MECS background may be dominated by the fluctuations of the cosmic X-ray background. The observation of November 1998 (taken about a week after the conclusion of this Conference) shows a decrease in the X-ray flux of S1 by approximately a factor of two with respect to the level measured in April-May, and a suggestion of slightly extended X-ray emission around the source. During the second portion of the first pointing, as well as in the November pointing, S2 is not detected, while it is detected in the May pointing, at a marginally lower level than in the first observation (see Table 1).
Table 1: Journal of BeppoSAX-MECS Observations

| Date (UT) | t<sup>a</sup> (s) | S1 flux<sup>b</sup> ($`\times 10^{-3}`$ cts s<sup>-1</sup>) | S2 flux<sup>b</sup> ($`\times 10^{-3}`$ cts s<sup>-1</sup>) |
| --- | --- | --- | --- |
| 1998 Apr 26.334-27.458 | 37220 | $`4.6\pm 0.6`$<sup>c</sup> | $`2.4\pm 0.5`$ |
| Apr 27.469-28.160 | 21805 | $`4.5\pm 0.7`$ | $`<2.5`$ |
| May 02.605-03.621 | 31975 | $`3.0\pm 0.5`$ | $`1.4\pm 0.5`$ |
| Nov 10.754-12.004 | 53122 | $`1.8\pm 0.4`$ | $`<2.0`$ |

<sup>a</sup> On-source exposure time.
<sup>b</sup> In the energy range 1.6-10 keV.
<sup>c</sup> Uncertainties are at 1-$`\sigma `$; upper limits are at 3-$`\sigma `$.
## 3 Discussion
The count rates in the first line of Table 1 correspond to $`F_{2-10\mathrm{keV}}\sim 3\times 10^{-13}`$ erg s<sup>-1</sup> cm<sup>-2</sup> for S1 and to $`F_{2-10\mathrm{keV}}\sim 1.6\times 10^{-13}`$ erg s<sup>-1</sup> cm<sup>-2</sup> for S2. The following data points show a decay for S1 of a factor of two in $`\sim `$6 months. Assuming, as suggested by the positional coincidence and by variability, that S1 is associated with SN 1998bw, the observed variation represents a lower limit on the amplitude of X-ray variability of SN 1998bw. In fact, the possible NFI detection of extended emission indicates that S1 might contain a non-negligible contribution from the host galaxy of the supernova. This is the first detection of hard X-ray emission from a Type I supernova. At the distance of SN 1998bw, the luminosity observed in the range 2-10 keV, $`5\times 10^{40}`$ erg s<sup>-1</sup>, is compatible with the luminosity observed in the 0.1-2.4 keV range for the Type Ic SN 1994I, the only case of soft X-ray Type I supernova emission so far detected (Immler et al. 1998). If SN 1998bw is the counterpart of GRB980425, the production of $`\gamma `$-rays could be accounted for by the explosion of a very massive star ($`\sim 40M_{\odot }`$) and by the subsequent expansion of a relativistic shock, in which non-thermal electrons radiate photons of $`\sim `$100 keV, provided the explosion is asymmetric, i.e. the GRB is produced in a relativistic jet (Iwamoto et al. 1998; Woosley et al. 1998; see however Kulkarni et al. 1998). This raises the hypothesis that two classes of GRBs might exist, with apparently indistinguishable high energy characteristics, but with different progenitors. On the other hand, disregarding the extremely low probability of a chance coincidence of GRB980425 and SN 1998bw, one might consider S2 as the X-ray counterpart candidate of the burst. Assuming a power-law decay between the X-ray flux measured by the WFC in the 2-10 keV range during the GRB and the flux measured in the first NFI observation, we derive a power-law index of $`1.4`$. The X-ray flux measured in May is however a factor $`\sim `$10 larger than implied by the power-law decay. This behavior is unlike that of previously observed X-ray afterglows, although it could still be reconciled with them under the hypothesis of a re-bursting episode superposed on a “typical” power-law decline.
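For reference, an index of this kind follows from comparing the two flux measurements under an assumed power-law decay; schematically (the WFC/NFI labels are ours, introduced only for this illustration, with times measured from the burst onset),

$$F(t)\propto t^{-\alpha },\qquad \alpha =\frac{\mathrm{ln}\left(F_{\mathrm{WFC}}/F_{\mathrm{NFI}}\right)}{\mathrm{ln}\left(t_{\mathrm{NFI}}/t_{\mathrm{WFC}}\right)}.$$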
###### Acknowledgements.
We thank the BeppoSAX Mission Planning Team and the BeppoSAX SDC and SOC personnel for help and support in the accomplishment of this project.
|
no-problem/9903/astro-ph9903252.html
|
ar5iv
|
text
|
# Nature of Clustering in the Las Campanas Redshift Survey
## 1 Introduction
Many surveys have been carried out to chart the positions of galaxies in large regions of the universe around us, and many more surveys which go deeper into the universe are currently underway or are planned for the future. These surveys give us detailed information about the distribution of matter in the universe, and identifying the salient features that characterize this distribution has been a very important problem in cosmology. The statistical properties, the geometry and the topology are some of the features that have been used to characterize the distribution of galaxies, and a large variety of tools have been developed and used for this purpose.
The correlation functions which characterize the statistical properties of the distribution have been widely applied to quantify galaxy clustering. Of the various correlation functions (2-point, 3-point, etc.), the galaxy-galaxy two point correlation function $`\xi (r)`$ is very well determined on small scales (Peebles 1993 and references therein) and it has been found to have the form
$$\xi (r)=\left(\frac{r}{r_0}\right)^{-\gamma }\quad \mathrm{with}\quad \gamma =1.77\pm 0.04\quad \mathrm{and}\quad r_0=5.4\pm 1h^{-1}\mathrm{Mpc}$$
(1)
This power-law form of the two point correlation function suggests that the universe exhibits a scale invariant behaviour on small scales $`r<r_0`$. The two point correlation function becomes steeper at larger scales $`r>r_0`$. It is, however, not very well determined on very large scales where the observations are consistent with the correlation function being equal to zero. The standard cosmological model and the correlation function analysis are both based on the underlying assumption that the universe is homogeneous on very large scales and the indication that the correlation function vanishes at very large scales is consistent with this.
Fractal characterization is another way of quantifying the gross features of the galaxy distribution. Fractals have been invoked to describe many physical phenomena which exhibit a scale invariant behaviour and it is very natural to use fractals to describe the clustering of galaxies on small scales where the correlation function analysis clearly demonstrates a scale invariant behaviour.
Coleman and Pietronero (1992) applied the fractal analysis to galaxy distributions and concluded that they exhibit a self-similar behaviour up to arbitrarily large scales. Their claim that the fractal behaviour extends out to arbitrarily large scales implies that the universe is not homogeneous on any scale and hence that it is meaningless to talk about the mean density of the universe. These conclusions are in contradiction with the Cosmological Principle, and the entire framework of cosmology as we understand it today will have to be revised if they are true.
On the other hand, several others (Martinez and Jones, (1990); Borgani, (1995)) have applied the fractal analysis to arrive at conclusions that are more in keeping with the standard cosmological model. They conclude that while the distribution of galaxies does exhibit self similarity and scaling behaviour, the scaling behaviour is valid only over a range of length scales and the galaxy distribution is homogeneous on very large scales. Various other observations including the angular distribution of radio sources and the X-ray background testify to the universe being homogeneous on large scales (Wu, Lahav and Rees, (1998); Peebles (1998)).
Recent analysis of the ESO slice project (Guzzo (1998)) also indicates that the universe is homogeneous over large scales. The fractal analysis of volume limited subsamples of the SSRS2 (Cappi et al. (1998)) studies the spatial behaviour of the conditional density on scales up to $`40h^{-1}\mathrm{Mpc}`$. Their analysis is unable to conclusively determine whether the distribution of galaxies is fractal or homogeneous, and it is consistent with both scenarios. A similar analysis carried out for the APM-Stromlo survey (Labini & Montuori, (1997)) seems to indicate that the distribution of galaxies exhibits a fractal behaviour with a dimension of $`D=2.1\pm 0.1`$ on scales up to $`40h^{-1}\mathrm{Mpc}`$. In a more recent paper (Amendola & Palladino, (1999)) the fractal analysis has been applied to volume limited subsamples of the Las Campanas Redshift Survey. This uses the conditional density to probe scales up to $`200h^{-1}\mathrm{Mpc}`$. They find evidence for a fractal behaviour with dimension $`D\approx 2`$ on scales up to $`20`$-$`40h^{-1}\mathrm{Mpc}`$. They also conclude that there is a tendency to homogenization on larger scales ($`50`$-$`100h^{-1}\mathrm{Mpc}`$), where the fractal dimension has a value $`D\approx 3`$, but the scatter in the results is too large to conclusively establish homogeneity and rule out a fractal universe on large scales.
In this paper we study the scaling properties of the galaxy distribution in the Las Campanas Redshift Survey (LCRS) (Shectman et al. (1996)). This is the deepest redshift survey available at present. Here we apply the multi-fractal analysis (Martinez and Jones, (1990); Borgani, (1995)), which is based on a generalization of the concept of a mono-fractal. In a mono-fractal the scaling behaviour of the point distribution is the same around each point, and the whole distribution is characterized by a single scaling index which corresponds to the fractal dimension. A multi-fractal allows for a sequence of scaling indices known as the multi-fractal spectrum of generalized dimensions. This allows for the possibility that the scaling behaviour is not the same around each point. The spectrum of generalized dimensions tells us how the scaling properties of the galaxy distribution change from the very dense regions (clusters) to the sparsely populated regions (voids) in the survey.
In this paper we compute the spectrum of generalized dimensions ($`D_q`$ vs $`q`$) by calculating the Minkowski-Bouligand dimension (Borgani, (1995)) for both volume limited and magnitude limited subsamples of the LCRS. We also investigate how the spectrum of generalized dimensions depends on the length scales over which it is measured, and whether the distribution of galaxies in the LCRS exhibits homogeneity on very large scales or the fractal nature extends to arbitrarily large scales.
We next present a brief outline of the organization of the paper. Section 2 describes the method we adopt to compute the spectrum of generalized dimensions. In section 3 we describe the basic features of the LCRS and discuss the issues related to processing the data so as to bring it into a form usable for our purpose. Section 4 gives the details of the method of analysis, specifically in the context of the LCRS. The discussion of the results is presented in section 5 and the conclusions in section 6.
In several parts of the analysis it is required to use definite values for the Hubble parameter $`H_0(=100h\mathrm{km}/\mathrm{s}/\mathrm{Mpc})`$ and the deceleration parameter $`q_0`$, and we have used $`h=1`$ and $`q_0=0.5`$.
## 2 Generalized Dimension
A fractal point distribution is usually characterized by its dimension and there exists a large variety of ways in which the dimension can be defined and measured. Of these possibilities two which are particularly simple and can be easily applied to a finite distribution of points are the box-counting dimension and the correlation dimension. In this section we discuss the “working definitions” of these two quantities that we have adopted for analyzing a distribution of a finite number of points. For more formal definitions of these dimensions the reader is referred to Borgani (1995) and references therein. The formal definitions usually involve the limit where the number of particles tends to infinity and they cannot be directly applied to galaxy distributions.
We first consider the box-counting dimension. In calculating the box-counting dimension for a distribution of points, the space is divided into identical boxes and we count the number of boxes which contain at least one point inside them. We then progressively reduce the size of the boxes while counting the number of boxes with at least one point inside them at every stage of this process. This gives the number of non-empty boxes $`N(r)`$ as a function of the size of one edge of the box $`r`$ at every stage of the procedure. If the number of non-empty boxes exhibits a power-law scaling as a function of the size of the box i.e.
$$N(r)\propto r^{-D}$$
(2)
we then define $`D`$ to be the box-counting dimension. In practice the nature of the scaling may be different on different length scales, so we look for a sufficiently large range of $`r`$ over which $`N(r)`$ exhibits a particular scaling behaviour and then use equation (2) to obtain the box-counting dimension valid over those scales. We may thus finally get more than one value of the box-counting dimension for the distribution, each valid over a limited range of length scales.
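A minimal sketch of this counting procedure for a two-dimensional point set is given below; the uniform toy distribution, the box sizes and the point count are all illustrative, and no boundary corrections are applied.

```python
import numpy as np

rng = np.random.default_rng(2)

def box_counting_dimension(points, sizes):
    # Count the non-empty boxes N(r) for each box edge r, then fit the
    # log-log slope of N(r) against r; N(r) ~ r^(-D) gives D.
    counts = []
    for r in sizes:
        # label each point by the integer index of the box containing it
        idx = np.floor(points / r).astype(np.int64)
        counts.append(len(np.unique(idx, axis=0)))
    slope = np.polyfit(np.log(sizes), np.log(counts), 1)[0]
    return -slope

# Toy data: a homogeneous 2D distribution in the unit square, for which
# the recovered dimension should approach 2 for dense enough sampling.
points = rng.uniform(0.0, 1.0, size=(50000, 2))
sizes = np.logspace(-2.0, -1.0, 8)
print(box_counting_dimension(points, sizes))
```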
To compute the correlation dimension for a point distribution with N points we proceed by first labeling the points using an index j which runs from $`1`$ to $`N`$. We then randomly select $`M`$ of the $`N`$ points and the index $`i`$ is used to refer to these $`M`$ randomly chosen points.
For every point $`i`$, we count the total number of points which are within a distance $`r`$ from the $`i^{th}`$ point and this quantity $`n_i(r)`$ can be written as
$$n_i(r)=\sum _{j=1}^{N}\mathrm{\Theta }\left(r-|x_i-x_j|\right)$$
(3)
where $`x_i`$ is the position vector of the $`i^{th}`$ point and $`\mathrm{\Theta }`$ is the Heaviside step function: $`\mathrm{\Theta }(x)=0`$ for $`x<0`$ and $`\mathrm{\Theta }(x)=1`$ for $`x\ge 0`$.
$`C_2(r)={\displaystyle \frac{1}{MN}}{\displaystyle \sum _{i=1}^{M}}n_i(r).`$ (4)
If the probability $`C_2`$ exhibits a scaling relation
$$C_2(r)\propto r^{D_2}$$
(5)
we then define $`D_2`$ to be the correlation dimension.
As with the box-counting dimension, the nature of the scaling behaviour may be different on different length scales and we may then get more than one value for the correlation dimension, each different value being valid over a range of scales.
It is very clear that $`C_2(r)`$, which is the probability of finding a point within a sphere of radius $`r`$ centered on another point, is closely related to the volume integral of the two point correlation function. In a situation where the two point correlation function exhibits a power-law behaviour $`\xi (r)=(r/r_0)^{-\gamma }`$ on scales $`r<r_0`$, we expect the correlation dimension to have a value $`D_2=3-\gamma `$ over these scales.
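This expectation can be made explicit with a one-line calculation (a sketch, assuming the power law holds over the whole integration range and dropping constant prefactors):

$$C_2(r)\propto \int _0^r\left[1+\xi (s)\right]s^2ds=\frac{r^3}{3}+\frac{r_0^{\gamma }}{3-\gamma }r^{3-\gamma },$$

so that for $`r\ll r_0`$ the clustering term dominates, $`C_2(r)\propto r^{3-\gamma }`$, and hence $`D_2=3-\gamma `$.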
For a mono-fractal the box-counting dimension and the correlation dimension will be the same, and for a homogeneous, space filling point distribution they are both equal to the dimension of the ambient space in which the points are embedded.
The box-counting dimension and the correlation dimension quantify different aspects of the scaling behaviour of a point distribution, and they will have different values in a generic situation. The concept of a generalized dimension connects these two definitions and provides a continuous spectrum of dimensions $`D_q`$ for a range of the parameter $`q`$. The definition of the Minkowski-Bouligand dimension $`D_q`$ (Falconer (1990); Feder (1989)) closely follows the definition of the correlation dimension, the only difference being that we use the $`(q-1)^{th}`$ moment of the galaxy distribution $`n_i(r)`$ (eq. 3) around any point. Equation (4) can then be generalized to define
$`C_q(r)={\displaystyle \frac{1}{NM}}{\displaystyle \sum _{i=1}^{M}}[n_i(<r)]^{q-1}.`$ (6)
which is used to define the generalized dimension
$$D_q=\frac{1}{q-1}\frac{d\mathrm{ln}C_q(r)}{d\mathrm{ln}r}.$$
(7)
The quantity $`C_q(r)`$ may exhibit different scaling behaviour over different ranges of length scales and we will then get more than one spectrum of generalized dimensions each being valid over a different range of length scales.
From equations (6) and (7) it is clear that the generalized dimension $`D_q`$ corresponds to the correlation dimension at $`q=2`$. In addition, $`D_q`$ corresponds to the box-counting dimension at $`q=1`$.
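As a concrete illustration of equations (3), (6) and (7), the sketch below estimates $`D_q`$ for a two-dimensional point set from the log-log slope of $`C_q(r)`$. It ignores the survey weights and boundary corrections discussed later, the homogeneous toy catalogue is illustrative, and the estimator as written requires $`q\ne 1`$ (the $`q=1`$ limit needs a separate treatment).

```python
import numpy as np

rng = np.random.default_rng(1)

def generalized_dimension(points, q, radii, n_centers=100):
    # Estimate D_q (q != 1) from the log-log slope of C_q(r), eqs. (6)-(7),
    # for a 2D point set; no boundary corrections or galaxy weights.
    n = len(points)
    centers = points[rng.choice(n, size=n_centers, replace=False)]
    # distances from every center to every point
    d = np.linalg.norm(centers[:, None, :] - points[None, :, :], axis=2)
    cq = []
    for r in radii:
        n_i = (d <= r).sum(axis=1)                # n_i(<r) of eq. (3)
        cq.append(np.mean(n_i ** (q - 1.0)) / n)  # eq. (6)
    slope = np.polyfit(np.log(radii), np.log(cq), 1)[0]
    return slope / (q - 1.0)                      # eq. (7)

points = rng.uniform(0.0, 1.0, size=(2000, 2))    # homogeneous 2D mock
radii = np.logspace(-1.5, -0.8, 10)
print(generalized_dimension(points, q=2.0, radii=radii))  # expect ~2
```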
For a mono-fractal the generalized dimension is a constant, i.e. $`D_q=D`$, which reflects the fact that a mono-fractal point distribution is characterized by a unique scaling behaviour. For a generic multi-fractal the values of $`D_q`$ will be different for different values of $`q`$. The positive values of $`q`$ give more weight to the over-dense regions. Thus, for $`q>0`$, $`D_q`$ probes the scaling behaviour of the distribution of points in the over-dense regions, like the interiors of clusters. The negative values of $`q`$, on the other hand, give more weight to the under-dense regions and, hence, for negative $`q`$, $`D_q`$ probes the scaling behaviour of the distribution of points in the under-dense regions, like voids.
Finally, it should be pointed out that the Minkowski-Bouligand generalized dimension $`D_q`$ is one of many possible definitions of a generalized dimension. The minimal spanning tree used by van der Weygaert and Jones (van der Weygaert and Jones, (1992)) is another possible method. The Minkowski-Bouligand generalized dimension has the advantage of being easy to compute. In addition, the various selection effects which have to be taken into account when analyzing redshift surveys can easily be accounted for in its determination, and hence we have chosen this particular method for the multi-fractal characterization of the galaxy distribution in the LCRS.
## 3 A Brief Description of the Survey and the Data.
The LCRS consists of 6 alternating slices, each subtending $`80^{\circ }`$ in right-ascension and $`1.5^{\circ }`$ in declination, 3 each in the Northern and Southern Galactic Caps, centered at $`\delta =-3^{\circ },-6^{\circ },-12^{\circ }`$ and $`\delta =-39^{\circ },-42^{\circ },-45^{\circ }`$ respectively. The survey extends to a redshift of $`0.2`$, corresponding to $`600h^{-1}\mathrm{Mpc}`$ in the radial direction, and contains about 24000 galaxies with a mean redshift of $`z=0.1`$, corresponding to $`300h^{-1}\mathrm{Mpc}`$.
We next elaborate a little on the shape of the individual slices. Consider two cones, both with the same axis and with their vertices at the same point. Let the angle between the first cone and the axis be $`90^{\circ }-(\delta -0.75^{\circ })`$ and that between the second cone and the axis be $`90^{\circ }-(\delta +0.75^{\circ })`$, so that the angle between the two cones is $`1.5^{\circ }`$. Next truncate both cones at a radial distance of $`600h^{-1}\mathrm{Mpc}`$ from the vertex. Finally, a slice centered at a declination $`\delta `$ corresponds to an $`80^{\circ }`$ wedge of the region between these two cones. The effect of the extrinsic curvature of the cones is small for the three northern slices, and we have restricted our analysis to only these three slices, for which we have neglected the effect of the curvature.
Each slice in the LCRS is made up of $`1.5^{\circ }\times 1.5^{\circ }`$ fields, some of which were observed with a 50 object fibre system and others with a 112 object fibre system. Of the three northern slices, the one at $`\delta =-12^{\circ }`$ is exclusively made up of 112 fibre fields, while the slice at $`\delta =-6^{\circ }`$ is mostly 50 fibre, and the slice at $`\delta =-3^{\circ }`$ has both 50 and 112 fibre fields.
For each field, redshifts were determined for those galaxies which satisfy the magnitude limits and the central brightness limits of the survey. These limits are different for the 50 fibre and the 112 fibre fields. In addition, for those fields where the number of galaxies satisfying the criteria for inclusion in the survey exceeded the number of fibres, redshifts were determined for only a fraction of the galaxies in the field. This effect is quantified by the “galaxy sampling function” $`f`$, which varies from field to field and is around $`80\%`$ for the 112 fibre fields and around half this number for the 50 fibre fields. In addition to the field-to-field variation of the galaxy sampling function, there are two other effects which have to be accounted for when analyzing the galaxy distribution: (1) apparent magnitude and surface brightness incompleteness, and (2) central surface brightness selection. These are quantified by two factors $`F`$ and $`G`$, respectively, which are discussed in detail in Lin et al. (1996). The survey data files provide the product of these three factors $`sf=fFG`$ for each galaxy, and the contribution from the $`i`$th galaxy has to be weighted with the factor
$$W_i=\frac{1}{f_iF_iG_i}$$
(8)
when analyzing the survey.
The factor $`W_i`$ discussed above takes into account the effects of the field-to-field sampling fraction and the incompleteness as a function of the apparent magnitude and central surface brightness. In addition, the selection function $`s(z)`$ has also to be taken into account, and this depends on both the differential luminosity function $`\varphi (M)`$ and the magnitude limits of the survey. The luminosity function of the LCRS has been studied by Lin et al. (1996), who have determined the luminosity function for different sub-samples of the LCRS.
They find that the Schechter form with the parameters $`M^{}=-20.29+5\mathrm{log}(h)`$, $`\alpha =-0.70`$ and $`\varphi ^{}=0.019h^3\mathrm{Mpc}^{-3}`$ provides a good fit to the luminosity function in the absolute magnitude range $`-23.0\le M\le -17.5`$. They have obtained these parameters from the analysis of the combined Northern and Southern 112 fibre fields, and we shall refer to the Schechter luminosity function with this set of parameters as the NS112 luminosity function. The analysis of Lin et al. (1996) shows that this luminosity function can be used for the Northern 50 fibre fields in addition to the Northern and Southern 112 fibre fields, and we have used the NS112 luminosity function for most of our analysis.
Lin et al. (1996) have also separately provided the luminosity function determined using just the Northern 112 fibre fields. This has the Schechter form with the parameters $`M^{}=-20.28+5\mathrm{log}(h)`$, $`\alpha =-0.75`$ and $`\varphi ^{}=0.018h^3\mathrm{Mpc}^{-3}`$, and we refer to this as the N112 luminosity function. We have used this in some of our analysis of the $`\delta =-12^{\circ }`$ slice, which contains only 112 fibre fields.
The selection function $`s(z)`$ quantifies the fact that the fraction of the galaxies which are expected to be included in the survey varies with the distance from the observer. For a magnitude limited survey the apparent magnitude limits $`m_1`$ and $`m_2`$ can be converted to absolute magnitude limits $`M_1(z)`$ and $`M_2(z)`$ at some redshift $`z`$. In addition, if we impose further absolute magnitude criteria $`M_1\le M\le M_2`$, then the selection function can be expressed as
$$s(z)=\int _{\mathrm{max}[M_1(z),M_1]}^{\mathrm{min}[M_2(z),M_2]}\varphi (M)dM\bigg/\int _{M_1}^{M_2}\varphi (M)dM.$$
(9)
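A rough numerical sketch of equation (9) is given below. The low-redshift distance modulus and the quoted apparent magnitude limits are simplifying placeholders rather than the survey's exact field-by-field limits, and the Schechter parameters are the NS112 values quoted above.

```python
import numpy as np
from scipy.integrate import quad

# Schechter luminosity function phi(M), NS112 parameters (h = 1)
M_STAR, ALPHA, PHI_STAR = -20.29, -0.70, 0.019

def phi(M):
    x = 10.0 ** (0.4 * (M_STAR - M))
    return 0.4 * np.log(10.0) * PHI_STAR * x ** (ALPHA + 1.0) * np.exp(-x)

def selection_function(z, m1=15.0, m2=17.7, M1=-23.0, M2=-17.5):
    """s(z) of eq. (9); a sketch using the low-z approximation
    d_L = cz/H0 for the distance modulus (the apparent magnitude
    limits here are illustrative, not the exact survey limits)."""
    dl = 2.998e5 * z / 100.0           # luminosity distance, h^-1 Mpc
    mu = 25.0 + 5.0 * np.log10(dl)     # distance modulus
    lo = max(m1 - mu, M1)              # bright-end absolute limit
    hi = min(m2 - mu, M2)              # faint-end absolute limit
    if lo >= hi:
        return 0.0
    num, _ = quad(phi, lo, hi)
    den, _ = quad(phi, M1, M2)
    return num / den

print(selection_function(0.1))
```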
The apparent magnitude limits are different for the 50 and 112 fibre fields, and we have used the appropriate magnitude limits and the N112/NS112 luminosity functions to calculate the selection function at the redshift of each galaxy. This is then used to calculate a weight factor for each galaxy, and the contribution of the $`i`$th galaxy in the survey has to be weighed by
$$w_i=\frac{W_i}{s(z_i)}.$$
(10)
Another effect that we have to correct for arises because we would like to treat the distribution of galaxies in each slice as a two dimensional distribution. Each slice consists of galaxies contained within a thin conical shell of thickness $`1.5^{\circ }`$, and we construct a two dimensional distribution by collapsing the thickness of the slice. The thickness of each slice increases with the distance from the observer, and in order to compensate for this effect we weigh each galaxy by the inverse of the thickness of the slice at its redshift. Taking this effect into account, the weight factor gets modified to
$$w_i=\frac{W_i}{z_is(z_i)}.$$
(11)
which we use to weigh the contribution from the $`i`$th galaxy in the LCRS.
We should also point out that, through the process of flattening the conical slices and collapsing their thickness, the three dimensional galaxy distribution has been converted to a 2-dimensional distribution, and the whole of our multi-fractal analysis is for a planar 2-dimensional point distribution.
In our analysis we have considered various subsamples of the LCRS, all chosen from the 3 northern slices. In addition to the apparent magnitude limits of the survey, we have imposed further absolute magnitude and redshift cutoffs to construct both volume and apparent magnitude limited subsamples, whose details are presented in Table I.
## 4 Method of Analysis
We first extract various subsamples of the LCRS using the criteria given in Table I for each of the subsamples. For each subsample we next calculate the weight function $`w_i`$ (equation 11) for all the galaxies in the subsample. In addition, the 3-dimensional distribution of galaxies in the subsample is converted into a corresponding 2-dimensional distribution using the steps outlined in the previous section, and we finally have a collection of $`N`$ galaxies distributed over a region of a plane.
We next choose $`M`$ of these galaxies at random and count the number of galaxies inside a circle of radius $`r`$ drawn around each of these $`M`$ randomly chosen galaxies. In determining this we use a modified version of equation (3) where each galaxy in the circle has an extra weight factor $`w_j`$ as calculated in the previous section, i.e.
$$n_i(r)=\sum _{j=1}^{N}w_j\mathrm{\Theta }\left(r-|x_i-x_j|\right).$$
(12)
The different moments of this quantity are averaged over the $`M`$ galaxies to obtain $`C_q(r)`$ defined in equation (6) for a range of $`q`$. The exercise is repeated with circles of different radii (different values of $`r`$) to finally obtain $`C_q(r)`$ for a large range of $`r`$.
It should be noted that the region from which the $`M`$ points can be chosen at random depends on the size of the circle which we are considering. For very large values of $`r`$, a large region around the boundaries of the survey has to be excluded, because a circle of radius $`r`$ drawn around a galaxy in that region would extend beyond the boundaries of the survey. As a consequence, for large values of $`r`$ we do not have many galaxies which can serve as centers, while for small values of $`r`$ there are many galaxies which can serve as centers for circles of radius $`r`$. For $`r`$ between $`80h^{-1}\mathrm{Mpc}`$ and $`200h^{-1}\mathrm{Mpc}`$ we use $`M=60`$, which is of the same order as the total number of galaxies available for use as centers. To estimate the statistical significance of our results on this range of length-scales, we have randomly divided the 60 centers into independent groups of 20 centers and repeated the analysis for each of these. We have used the variation in the results from the different subsets to estimate the statistical errors of our results on large scales. In the range $`r<80h^{-1}\mathrm{Mpc}`$ we have used $`M=100`$, which is only a small fraction of the total number of galaxies which could possibly serve as centers, around 1500. On this range of length-scales it is possible to choose many independent sets of 100 centers. We have performed the analysis for a large number of such sets of 100 centers, and these have been used to estimate the mean generalized dimension $`D_q`$ and the statistical errors in the estimated $`D_q`$ at small scales. For both ranges of length-scales considered, we have repeated the analysis with different numbers of centers, and we find that the results do not vary drastically as we vary the number of centers used in the analysis.
The value of the generalized dimension $`D_q`$ is determined for a fixed value of $`q`$ by looking at the scaling behaviour of $`C_q(r)`$ as a function of $`r`$ (e.g. Figures 4 and 5). We have considered $`q`$ in the range $`-10\le q\le +10`$. In principle we could also have considered arbitrarily large (or small) values of $`q`$, but the fact that there are only a finite number of galaxies in the survey implies that only a finite number of the moments carry independent information. This point has been discussed in more detail by Bouchet et al. (Bouchet et al. (1991)).
In addition to the subsamples of galaxies listed in Table 1, we have also carried out our analysis for mock versions of these subsamples. The mock version of each subsample contains the same number of galaxies as the actual subsample. The galaxies in the mock versions are selected from a homogeneous random distribution using the same selection function and geometry as the actual subsample. We have carried out the whole analysis for many different random realizations of each of the subsamples listed in Table 1. The main aim of this exercise was to test the reliability of the method of analysis adopted here.
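The mock construction amounts to rejection sampling from a homogeneous distribution. A sketch of the idea, in which `draw_position`, `in_slice`, and `selection` are hypothetical stand-ins for the survey-specific volume sampler, slice geometry, and selection function of a given subsample:

```python
import numpy as np

def mock_subsample(n_gal, draw_position, in_slice, selection, seed=0):
    """Draw n_gal mock galaxies from a homogeneous random distribution,
    imposing the survey's slice geometry and selection function.

    draw_position()   : returns (ra, dec, z) uniform in the survey volume
    in_slice(ra, dec) : True if the direction lies inside the slice
    selection(z)      : probability of including a galaxy at redshift z
    """
    rng = np.random.default_rng(seed)
    kept = []
    while len(kept) < n_gal:
        ra, dec, z = draw_position()
        if in_slice(ra, dec) and rng.random() < selection(z):
            kept.append((ra, dec, z))
    return np.array(kept)
```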
## 5 Results and Discussion
We first discuss our analysis of the mock subsamples. Since the effect of the selection function and the geometry of the slices have both been included in generating these subsamples, our analysis of these subsamples allows us to check how well these effects are being corrected for. In the ideal situation, for all the mock subsamples we should recover a flat spectrum of generalized dimensions with $`D_q=2`$, corresponding to a homogeneous point distribution. The actual results of the multi-fractal analysis of the mock subsamples are presented below, where we separately discuss the behaviour of $`D_q`$ at small scales $`(r<80h^{-1}\mathrm{Mpc})`$ and at large scales $`(r>80h^{-1}\mathrm{Mpc})`$.
The results for mock versions of the subsample d-12.1 are shown in Figure 1. This is a magnitude limited subsample from a slice that has only 112 fibre fields, and it contains the largest number of galaxies. We get a nearly flat spectrum with $`D_q=2`$, corresponding to a homogeneous point distribution, at both small and large scales. Similar results are also obtained for mock versions of the other subsamples of the $`\delta =-12^{}`$ slice.
The analysis of mock versions of the subsample d-03.1, which contains both 112 and 50 fibre fields, gives a spectrum with a weak $`q`$ dependence (Figure 2). This effect is more noticeable at small scales than at large scales. The analysis of mock versions of the d-06.1 subsample (Figure 3) gives similar results at small scales. At large scales we get a nearly flat curve with $`D_q\approx 1.8`$. This subsample d-06.1 has mostly 50 fibre fields, and it has around half the number of galaxies of the d-12.1 subsample.
We thus find that the analysis is most effective for the subsample from the $`\delta =-12^{}`$ slice, where $`D_q`$ shows very little $`q`$ dependence and $`1.9\le D_q\le 2.1`$. For the other two slices we find a weak $`q`$ dependence with $`1.8\le D_q\le 2.2`$. This clearly demonstrates that our method of multi-fractal analysis correctly takes into account the different selection effects and the complicated sampling and geometry for all the subsamples that we have considered.
We next discuss our analysis of the actual data. The analysis of the curves of $`C_q(r)`$ versus $`r`$ for the different subsamples shows the existence of two very different scaling behaviours - one at small scales and another at large scales, with the transition occurring around $`80h^{-1}\mathrm{Mpc}`$ to $`100h^{-1}\mathrm{Mpc}`$. The scaling behaviour of $`C_q(r)`$ is shown in Figures 4 and 5 for $`q=0`$ and $`q=2`$, respectively, for the subsample d-12.1. The other subsamples all exhibit a similar behaviour. Based on this we have treated the scales $`20h^{-1}\mathrm{Mpc}\le r\le 80h^{-1}\mathrm{Mpc}`$ (small scales) and $`80h^{-1}\mathrm{Mpc}\le r\le 200h^{-1}\mathrm{Mpc}`$ (large scales) separately, and the multi-fractal analysis has been performed separately for the small and large scales. Figures 6, 7 and 8 show the spectrum of generalized dimensions $`D_q`$ vs. $`q`$ at both small and large scales for three of the subsamples.
We find that at small scales the plots of $`D_q`$ versus $`q`$ for the actual data (Figures 6, 7, and 8) are quite different from the corresponding plots for the mock versions of the data (Figures 1, 2, and 3). This clearly shows that the distribution of galaxies is not homogeneous over the scales $`20h^{-1}\mathrm{Mpc}\le r\le 80h^{-1}\mathrm{Mpc}`$. In addition we find that all the subsamples exhibit a multi-fractal behaviour over this range of length-scales. The interpretation of the different values of the multi-fractal dimension $`D_q`$ is complicated by the geometry of the survey, and we do not attempt this here.
At large scales the behaviour of the generalized dimension $`D_q`$ is quite different. For the subsample d-12.1 the spectrum shows a weak $`q`$ dependence (Figure 6), and $`D_q`$ changes gradually from $`D_q\approx 2`$ to $`D_q\approx 1.8`$ as $`q`$ varies from $`-10`$ to $`10`$. This is quite different from the behaviour at small scales, where the change in $`D_q`$ is larger and more abrupt. The behaviour of the other subsamples of the $`\delta =-12^{}`$ slice is similar. For the subsample d-03.1 we find that the spectrum is nearly flat (Figure 7) with $`D_q\approx 2`$, and for d-06.1 (Figure 8) the spectrum is nearly flat with $`D_q\approx 1.8`$. These values are within the range we recover from our analysis of the mock subsamples, which are constructed from an underlying random homogeneous distribution of galaxies. This agreement between the actual data and the random realizations, with $`1.8\le D_q\le 2.2`$ in all the subsamples, shows that the distribution of galaxies in the LCRS is homogeneous at large scales.
The work presented here contains significant improvements over the earlier work of Amendola & Palladino (1999) on two counts, and these are explained below:
(1). Unlike the earlier work, which analyzed volume limited subsamples of one of the slices ($`\delta =-12^{}`$) of the LCRS, we have analyzed both volume and magnitude limited subsamples of all three northern slices of the LCRS. The magnitude limited samples contain more than four times the number of galaxies in the volume limited samples, and they extend to higher redshifts. This allows us to make better use of the data in the LCRS to improve the statistical significance of the results and to probe scales larger than those studied in the previous analysis.
(2). We have calculated the full spectrum of generalized dimensions which has information about the nature of clustering in different environments. The integrated conditional density used by the earlier workers is equivalent to a particular point $`(q=2)`$ on the spectrum and it does not fully characterize the scaling properties of the distribution of galaxies.
## 6 Conclusion
Here we present a method for carrying out the multi-fractal analysis of both magnitude and volume limited subsamples of the LCRS. Our method takes into account the various selection effects and the complicated geometry of the survey.
We first apply our method to random realizations of the LCRS subsamples, for which we ideally expect a flat spectrum of generalized dimensions with $`D_q=2`$. Our analysis gives a nearly flat spectrum with $`1.8\le D_q\le 2.2`$ on large scales. The deviation from the expected value includes statistical errors arising from the finite number of galaxies and systematic errors arising from our treatment of the selection effects and the complicated geometry. The fact that the errors are small clearly shows that our method correctly accounts for these effects.
Our analysis of the actual data shows the existence of two different regimes, and the distribution of galaxies on scales $`20h^{-1}\mathrm{Mpc}\le r\le 80h^{-1}\mathrm{Mpc}`$ shows clear indication of a multi-fractal scaling behaviour. On large scales $`80h^{-1}\mathrm{Mpc}\le r\le 200h^{-1}\mathrm{Mpc}`$ we find a nearly flat spectrum with $`1.8\le D_q\le 2.2`$. This is consistent with our analysis of the random realizations, which have been constructed from a homogeneous underlying distribution of galaxies.
Based on the above analysis we conclude that the distribution of galaxies in the Las Campanas Redshift Survey is homogeneous at large scales, with the transition to homogeneity occurring somewhere around $`80h^{-1}\mathrm{Mpc}`$ to $`100h^{-1}\mathrm{Mpc}`$.
###### Acknowledgements.
TRS would like to thank T. Padmanabhan, K. Subramanian, J. S. Bagla, F. S. Labini and L. Pietronero for several useful discussions. AKG and TRS gratefully acknowledge the project grant (SP/S2/009/94) from the Department of Science and Technology, India. All the authors are extremely grateful to the LCRS team for making the catalogue publicly available.
Table 1.
| Subsample | $`\delta `$ | $`z`$ range | Absolute magnitude range | Luminosity function | Number of galaxies | Vol./Mag. limited |
| --- | --- | --- | --- | --- | --- | --- |
| d-12.1 | -12.0 | 0.017-0.2 | -23.0 to -17.5 | NS112 | 4458 | M |
| d-12.2 | -12.0 | 0.017-0.2 | -23.0 to -17.5 | N112 | 4458 | M |
| d-12.3 | -12.0 | 0.05-0.1 | -21.0 to -20.0 | N112 | 869 | V |
| d-12.4 | -12.0 | 0.065-0.125 | -21.5 to -20.5 | N112 | 923 | V |
| d-06.1 | -6.0 | 0.017-0.2 | -23.0 to -17.5 | NS112 | 2316 | M |
| d-03.1 | -3.0 | 0.017-0.2 | -23.0 to -17.5 | NS112 | 4055 | M |
# Photoemission Evidence for a Remnant Fermi Surface and d-Wave-Like Dispersion in Insulating Ca<sub>2</sub>CuO<sub>2</sub>Cl<sub>2</sub>
## I Introduction
A consensus on the d$`_{x^2-y^2}`$ pairing state and the basic phenomenology of the anisotropic normal state gap (pseudo gap) in high-T<sub>c</sub> superconductivity has been established, partially on the basis of angle-resolved photoemission spectroscopy (ARPES) experiments, in which two energy scales have been identified in the pseudo gap: a leading-edge shift of 20-25 meV and a high-energy hump at 100-200 meV. Both of these features have an angular dependence consistent with a d-wave gap. For simplicity in the discussion below, we refer to these as the low- and high-energy pseudo gaps, respectively, in analogy to the analysis of other data. The evolutions of these two pseudo gaps as a function of doping are correlated, but the microscopic origin of the pseudo gap and its doping dependence are still unestablished. Theoretical ideas about the pseudo gap range from pre-formed pairs or pairing fluctuations and damped spin density waves (SDW) to resonating valence bond (RVB) singlet formation and spin-charge separation.
To further differentiate these ideas, it is important to understand how the pseudo gap evolves as the doping is lowered and the system becomes an insulator. We present experimental data from Ca<sub>2</sub>CuO<sub>2</sub>Cl<sub>2</sub>, the insulating analog of the superconductor La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub>, which suggest that the high energy pseudo gap is a remnant property of the insulator that evolves continuously with doping, as first pointed out by Laughlin. The compound Ca<sub>2</sub>CuO<sub>2</sub>Cl<sub>2</sub>, a half-filled Mott insulator, has the crystal structure of La<sub>2</sub>CuO<sub>4</sub>, and it can be doped by replacing Ca with Na or K to become a high-temperature superconductor. As in the case of Sr<sub>2</sub>CuO<sub>2</sub>Cl<sub>2</sub>, Ca<sub>2</sub>CuO<sub>2</sub>Cl<sub>2</sub> has much better surface properties than La<sub>2</sub>CuO<sub>4</sub> and is thus better suited for ARPES experiments. Although the data from Ca<sub>2</sub>CuO<sub>2</sub>Cl<sub>2</sub> are consistent with earlier results from Sr<sub>2</sub>CuO<sub>2</sub>Cl<sub>2</sub>, the improved spectral quality obtainable from this material allows us to establish that: (I) the Fermi surface, which is destroyed by the strong Coulomb interactions, has left a remnant in this insulator with a volume and shape similar to what one expects if the strong electron correlation in this system is turned off; (II) the strong correlation effect deforms this otherwise iso-energetic contour (the non-interacting Fermi surface) into a form that matches the $`|`$cos($`k`$<sub>x</sub>a)-cos($`k`$<sub>y</sub>a)$`|`$ function very well, but with a very high energy scale of 320 meV. Thus, a d-wave like dispersive behavior exists even in the insulator.
Comparison with data from underdoped Bi<sub>2</sub>Sr<sub>2</sub>CaCu<sub>2</sub>O<sub>8+δ</sub> (Bi2212) with T<sub>c</sub>’s of 0, 25 and 65 K indicates that the high energy d-wave like pseudo gap in the underdoped regime originates from the d-wave like dispersion in the insulator. Once doped to a metal, the chemical potential drops to the maximum of this d-wave like function, but the dispersion relation retains its qualitative shape, albeit with a magnitude that decreases with doping. Thus, only the states near the d-wave node touch the Fermi level and form small segments of the Fermi surface, with the rest of the Fermi surface gapped. In this way, the d-wave high energy pseudo gap in the underdoped regime is naturally connected to the properties of the insulator. Since the high energy pseudo gap correlates with the low energy pseudo gap, which is likely to be related to superconductivity, it is likely that the same physics that controls the d-wave dispersion in the insulator is responsible for the d-wave like normal state pseudo gap and the superconducting gap in the doped superconductors.
## II Methodology
To investigate the strong correlation effect, we contrast our experimental data with the conventional results for the case when the correlation effects are neglected. We can obtain the occupation probability, n(k), by integrating A(k,$`\omega `$), obtained by ARPES, over energy. Experimentally, A(k,$`\omega `$) cannot be integrated over all energies due to contributions from secondary electrons and other electronic states. Instead an energy window for integration must be chosen, and the resulting quantity we define as the relative n(k). Fortunately, the features we are interested in are clearly distinguishable from any other contributions. We note that n(k) is a ground state property, and hence is different from the integration of the single-particle spectral weight, A(k,$`\omega `$), over all energies; however, under the sudden approximation, integration of A(k,$`\omega `$) as measured by ARPES gives n(k). We then use the drop of the relative n(k) to determine the Fermi surface, as illustrated in Fig. 1. For a metal with non-interacting electrons, the electron states are filled up to the Fermi momentum, k<sub>F</sub>, and n(k) shows a sudden drop (Fig. 1A). As more electrons are added, the electron states are eventually filled and the system becomes an insulator with no drop in n(k) (Fig. 1B). Therefore, the drop in n(k) characterizes the Fermi surface of a metal with non-interacting electrons. When correlation increases, n(k) begins to deform (Fig. 1C), although there is still some discontinuity at k<sub>F</sub> when the correlation is moderate. Note that the electrons that used to occupy states below k<sub>F</sub> have moved to the states that were unoccupied. For a non-Fermi liquid with very strong correlation, n(k) drops smoothly without a discontinuity (Fig. 1D). Several theoretical calculations using very different models have found that n(k) of the interacting system mimics that of the non-interacting system, even when the material is fully gapped. Hence we can recover the remnant of a Fermi surface, or an underlying Fermi surface, by following the contour of steepest descent of n(k), even when correlation is strong enough that the system becomes a Mott insulator. The volume obtained by this procedure is consistent with half-filling, as expected in a Mott insulator.
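The extraction of the relative n(k) and the steepest-descent criterion lend themselves to a short sketch. The code below is illustrative only: it assumes ARPES intensities on a grid (one spectrum per k-point along a cut) and an integration window chosen as described above; the array and function names are ours.

```python
import numpy as np

def relative_nk(energies, spectra, e_lo, e_hi):
    """Relative n(k): integrate each spectrum A(k, w) over a fixed
    energy window [e_lo, e_hi].  spectra has shape (n_k, n_energy)."""
    mask = (energies >= e_lo) & (energies <= e_hi)
    return np.trapz(spectra[:, mask], energies[mask], axis=1)

def steepest_descent_index(nk):
    """Index along the k-cut where the relative n(k) falls fastest --
    the (remnant) Fermi-surface crossing in the prescription above."""
    return int(np.argmin(np.gradient(nk)))
```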
We apply this method to determine the Fermi level crossing of a real system. The traditional way (Fig. 2A) is shown for the ARPES spectra on the (0,0) to $`(\pi ,\pi )`$ cut taken from Bi2212, which is metallic. As we move from (0,0) toward $`(\pi ,\pi )`$, the peak disperses to the Fermi level, E<sub>F</sub>. As the peak reaches E<sub>F</sub> and passes it, it begins to lose spectral weight (this again is k<sub>F</sub>). Alternatively, we simply integrate the spectral function from 0.6 eV to -0.1 eV relative to E<sub>F</sub>, and the resulting relative n(k) is plotted in Fig. 2C. We can now define k<sub>F</sub> as the point of steepest descent in the relative n(k). The same conclusion can be drawn here independent of the method we use. Note that n(k) also drops as we approach (0,0); this is a photoemission artifact, because the photoemission cross-section of the d$`_{x^2-y^2}`$ orbital vanishes due to symmetry.
We can show that the n(k) procedure is still valid for strongly correlated systems with a gapped Fermi surface by presenting ARPES spectra of ferromagnetic La<sub>3-x</sub>Sr<sub>x</sub>Mn<sub>2</sub>O<sub>7</sub> on the $`(\pi ,0)`$ to $`(\pi ,\pi )`$ cut (Fig. 2B). They show a dispersive feature initially moving toward E<sub>F</sub> and then pulling slightly back away from E<sub>F</sub> around $`(\pi ,0.27\pi )`$, but never reaching E<sub>F</sub>. However, the feature suddenly loses its spectral weight when it crosses $`(\pi ,0.27\pi )`$, as if it crosses the Fermi surface, as shown in panel D. Furthermore, the Fermi surface determined by a local density approximation calculation coincides with the Fermi surface determined by n(k), despite the spectra of this ferromagnetic metallic material having a significant gap. Thus, the underlying Fermi surface can survive a strong interaction, and the n(k) method is effective in identifying it even when the peak does not disperse across E<sub>F</sub>.
## III Experimental Results
The low-energy feature along the (0,0) to $`(\pi ,\pi )`$ cut on Ca<sub>2</sub>CuO<sub>2</sub>Cl<sub>2</sub> (Fig. 3A) has the same origin as the low-energy peak seen in Bi2212, the Zhang-Rice singlet on the CuO<sub>2</sub> plane. As k increases from (0,0) toward $`(\pi ,\pi )`$, the peak moves to lower energy and subsequently pulls back to higher energy as it crosses $`(\pi /2,\pi /2)`$. Its spectral weight increases as it moves away from the (0,0) point, for the reason described earlier, and then drops as it crosses $`(0.43\pi ,0.43\pi )`$. These changes along the (0,0) to $`(\pi ,\pi )`$ cut are consistent with the earlier reports on Sr<sub>2</sub>CuO<sub>2</sub>Cl<sub>2</sub>. Similar to the drop of n(k) across the Fermi surface seen in Bi2212, Ca<sub>2</sub>CuO<sub>2</sub>Cl<sub>2</sub> also shows that the peak intensity n(k) drops as if there is a crossing of E<sub>F</sub>, even though the material is an insulator. The intensity along the (0,0) to $`(\pi ,0)`$ cut (Fig. 3B) goes through a maximum around $`(2\pi /3,0)`$, as in Sr<sub>2</sub>CuO<sub>2</sub>Cl<sub>2</sub>. This behavior is also seen in superconducting cuprates. Earlier works on Sr<sub>2</sub>CuO<sub>2</sub>Cl<sub>2</sub> show that the spectral weight along the $`(\pi ,0)`$ to $`(\pi ,\pi )`$ cut is strongly suppressed. However, for Ca<sub>2</sub>CuO<sub>2</sub>Cl<sub>2</sub>, the improved spectral quality allows us to clearly observe the spectral weight drop along the $`(\pi ,0)`$ to $`(\pi ,\pi )`$ cut (Fig. 3C). Note that the spectral weight drops as we move toward the $`(\pi ,\pi )`$ point, which we attribute to the crossing of a remnant Fermi surface. We also show another cut (Fig. 3D) which exhibits essentially the same behavior. The relative n(k)’s of the cuts are summarized in Fig. 3E in arbitrary units. The relative n(k) here and in Fig. 4 were obtained by integrating from 0.5 eV to -0.2 eV relative to the peak position at $`(\pi /2,\pi /2)`$. All of the n(k)’s show a drop (after the maximum) as we cross the remnant Fermi surface. Here we emphasize that we are using the same method as we do for metals, where the identification of a Fermi surface is convincing.
The remnant Fermi surface can be identified in the contour plot of n(k) of Ca<sub>2</sub>CuO<sub>2</sub>Cl<sub>2</sub> (Fig. 4A). The little crosses in the figure denote the k-space points where spectra were taken. The data points here and in Fig. 4C have been reflected about the line k<sub>y</sub> =k<sub>x</sub> to better illustrate the remnant Fermi surface. Again, it should be emphasized that the suppressed n(k) near (0,0) comes from the vanishing photoemission cross section due to the d$`_{x^2-y^2}`$ orbital symmetry, rather than from a remnant Fermi surface crossing. For the same reason, the photon polarization suppresses the overall spectral weight along the (0,0) to $`(\pi ,\pi )`$ line as compared with the (0,0) to $`(\pi ,0)`$ line, with a monotonic change between the two directions. In Fig. 4B we present the relative n(k) of an optimally doped Bi2212 sample in the normal state. In this case the identification of the Fermi surface is unambiguous, but the same matrix element effects that were seen in the insulator can be seen in the metallic sample as well. However, for both samples, the drop in n(k) near the diagonal line connecting $`(\pi ,0)`$ and $`(0,\pi )`$ cannot be explained by the photoemission cross section. In the metallic case, the Fermi surface is clearly identified (the white-hashed region in Fig. 4B). For the insulator, the drop is approximately where band theory predicts the Fermi surface. Therefore, we attribute the behavior in the insulator to a remnant of the Fermi surface that existed in the metal. The similarity of the results in the insulator and the metal makes the identification of the remnant Fermi surface unambiguous. The white hashed area in Fig. 4A represents the area where the remnant Fermi surface may reside, as determined by the relative n(k). Although there is some uncertainty in the detailed shape of this remnant Fermi surface, this does not affect the discussion and the conclusions drawn below. The relative n(k) we presented is a very robust feature. In metallic samples with partially gapped Fermi surfaces, underlying Fermi surfaces have also been identified in the gapped region. This effect is similar to what we report here in the insulator. The remnant Fermi surface in underdoped Bi2212 was also identified, at locations similar to the n(k) drop in these materials, with a different criterion, the minimum gap locus. Calculations also show that the Fermi surface defined by n(k) is robust in the presence of strong correlation. Given that there is a remnant Fermi surface, as shown by the white hashed lines in Fig. 4, A and C, the observed energy dispersion along this line has to stem from the strong electron correlation. In other words, the electron correlation disperses the otherwise iso-energetic contour of the remnant Fermi surface. This dispersion is consistent with the non-trivial d-wave $`|`$cos($`k`$<sub>x</sub>a)-cos($`k`$<sub>y</sub>a)$`|`$ form. These results also support our identification of the remnant Fermi surface in a Mott insulator.
Fig. 4C plots the energy contour of the peak position of the lowest energy feature of Ca<sub>2</sub>CuO<sub>2</sub>Cl<sub>2</sub>, referenced to the energy of the $`(\pi /2,\pi /2)`$ peak. The hashed area indicates the remnant Fermi surface determined in Fig. 4A. The ‘Fermi surface’ is no longer a constant energy contour as it would be in the non-interacting case. Instead it disperses by as much as the total dispersion width of the system. In Fig. 4D we plot the dispersion at different points on the remnant Fermi surface, referenced to the lowest energy state at $`(\pi /2,\pi /2)`$. The dispersion of the peaks along the Fermi surface is plotted against $`|`$cos($`k`$<sub>x</sub>a)-cos($`k`$<sub>y</sub>a)$`|`$. The straight line shows the dispersion expected along the ‘Fermi surface’ for a d-wave energy gap. The figure in the inset presents the same data in a more illustrative fashion. On a line drawn from the center of the Brillouin zone to any point, either experimental (blue) or theoretical (red), the distance from this point to the intersection of the line with the antiferromagnetic Brillouin zone boundary gives the value of the ‘gap’ at the k-point of interest. The red line is for a d-wave dispersion along $`(\pi ,0)`$ to $`(0,\pi )`$. The good agreement is achieved without the need for free parameters. This d-wave like dispersion can only be attributed to the many-body effect. The relative energy difference between the energy at $`(\pi /2,\pi /2)`$ and $`(\pi ,0)`$ has been referred to as a gap, a convention which we follow.
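The comparison underlying Fig. 4D amounts to evaluating the d-wave form along the antiferromagnetic zone boundary, which tracks the remnant Fermi surface. A minimal sketch (lattice constant a = 1; the 0.32 eV scale is the $`(\pi ,0)`$-$`(\pi /2,\pi /2)`$ energy difference quoted above, and the factor of 1/2 simply normalizes the form to that maximum):

```python
import numpy as np

# d-wave form along the antiferromagnetic zone boundary k_y = pi - k_x,
# which runs through (pi/2, pi/2) and (pi, 0).
delta_max = 0.32                            # eV, 'gap' at (pi, 0)
kx = np.linspace(0.0, np.pi, 101)
ky = np.pi - kx
dform = np.abs(np.cos(kx) - np.cos(ky))     # 0 at (pi/2,pi/2), 2 at (pi,0)
gap = 0.5 * delta_max * dform               # energy below the (pi/2,pi/2) peak
```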
This gap differs from the usual optical Mott gap (Fig. 5) and may correspond to the momentum dependent gap once the system is doped. This gap monotonically increases as we move away from $`(\pi /2,\pi /2)`$, as also reported earlier. As well as summarizing the data presented, Fig. 5 also shows the intriguing similarity between the data from the insulator and a slightly overdoped d-wave superconductor (Bi2212), and thus gives the reason for comparing the dispersion along the remnant Fermi surface with the $`|`$cos($`k`$<sub>x</sub>a)-cos($`k`$<sub>y</sub>a)$`|`$ form. In the superconducting case, n(k) helps determine the Fermi surface. The anisotropic gapping of this surface below T<sub>c</sub> reveals the d-wave nature of the gap. In the insulator, n(k) helps determine the remnant Fermi surface. The k-dependent modulation along this surface reveals the d-wave like dispersion. Whether this similarity between the insulator and the doped superconductor is a reflection of some underlying symmetry principle is a question which needs to be investigated.
The above analysis is possible only because we now observe the remnant Fermi surface. Although the dispersion for Sr<sub>2</sub>CuO<sub>2</sub>Cl<sub>2</sub> was similar to the present case, the earlier results did not address the issue of a remnant Fermi surface because the smaller photoemission cross section along the $`(\pi ,0)`$ to $`(\pi ,\pi )`$ cut prevented this identification. Therefore the analysis shown above was not possible. With only the energy contour information (such as in Fig. 4C), it is plausible to think that the Fermi surface evolves to a small circle around the $`(\pi /2,\pi /2)`$ point. However, with the favorable photoemission cross section, the results from Ca<sub>2</sub>CuO<sub>2</sub>Cl<sub>2</sub> show that the Fermi surface leaves a clear remnant, although it may be broadened and weakened. Therefore, the energy dispersion along the original Fermi surface of a non-interacting system is due to the opening of an anisotropic ‘gap’ along the same remnant Fermi surface.
The same analysis is shown in Fig. 6 for Bi2212 with different Dy dopings, together with the Ca<sub>2</sub>CuO<sub>2</sub>Cl<sub>2</sub> results. The corresponding doping level and T<sub>c</sub> as a function of Dy concentration are also shown. The energy for Ca<sub>2</sub>CuO<sub>2</sub>Cl<sub>2</sub> is referenced to the peak position at $`(\pi /2,\pi /2)`$, and that for Dy doped Bi2212 to E<sub>F</sub>. However, the two references are essentially the same energy, since the peak on the (0,0) to $`(\pi ,\pi )`$ cut for all Bi2212 samples reaches the Fermi level. Note that the gaps for the Dy doped Bi2212 data also follow a function that is qualitatively similar to the d-wave function, with reduced gap sizes, as shown with the $`(\pi ,0)`$ spectra in Fig. 6B. This result suggests that the d-wave gap originating in the insulator continuously evolves with doping, but retains its anisotropy as a function of momentum, and that the high energy pseudo gap in the underdoped regime is the same gap as the d-wave gap seen in the insulator, as discussed above. Of course, the high energy pseudo gap in the underdoped regime is smaller than the gap in the insulator. In a sense, the doped regime is a diluted version of the insulator, with the gap getting smaller with increasing doping. The two extremes of this evolution are illustrated in the quasiparticle dispersions shown in Fig. 5. The insulator shows a large d-wave like dispersion along the remnant Fermi surface. In the overdoped case, no gap is seen in the normal state along an almost identical curve in k-space; however, a d-wave gap is observed in the superconducting state. Although their sizes vary, the d-wave superconducting gap and the d-wave ‘gap’ of the insulator have the same non-trivial form, and are thus likely to stem from the same underlying mechanism.
## IV Discussion
We do not know the full implications of the data we report, but can offer the following possibilities. First, we compare the experimental dispersion with a simple spin-density wave picture. Starting with the Hubbard model
$$H=\sum_{\mathbf{k}\sigma}\epsilon_{\mathbf{k}}\,c_{\mathbf{k}\sigma}^{\dagger}c_{\mathbf{k}\sigma}+U\sum_{i}n_{i\uparrow}n_{i\downarrow}$$
with
$$\epsilon_{\mathbf{k}}=-2t\left[\mathrm{cos}(k_xa)+\mathrm{cos}(k_ya)\right]-4t^{\prime}\,\mathrm{cos}(k_xa)\mathrm{cos}(k_ya)-2t^{\prime\prime}\left[\mathrm{cos}(2k_xa)+\mathrm{cos}(2k_ya)\right]$$
and adding a SDW picture, the following dispersion relation will be found
$$E\approx-4t^{\prime}\,\mathrm{cos}(k_xa)\mathrm{cos}(k_ya)-2t^{\prime\prime}\left[\mathrm{cos}(2k_xa)+\mathrm{cos}(2k_ya)\right]\pm\left[U/2+J\left(\mathrm{cos}(k_xa)+\mathrm{cos}(k_ya)\right)^{2}\right]$$
with J=t<sup>2</sup>/U. With realistic values of t’ = -0.12 eV and t” = 0.08 eV, and an experimental value of J = 0.125 eV, we find that the experimental dispersion deviates significantly from this mean field result, which gives a bandwidth of 1.1 eV. It is crucial to note the observed isotropic dispersion around the $`(\pi /2,\pi /2)`$ point, with almost identical dispersions from $`(\pi /2,\pi /2)`$ to (0,0) and from $`(\pi /2,\pi /2)`$ to $`(\pi ,0)`$. This result is unlikely to be a coincidence of the parameters t’, t”, and J, as suggested by the SDW picture above.
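For orientation, the mean-field dispersion above is easy to evaluate numerically. The sketch below uses the parameter values just quoted (a = 1, lower band; the constant U/2 piece is dropped since it only shifts the band rigidly) and compares the two directions out of $`(\pi /2,\pi /2)`$:

```python
import numpy as np

tp, tpp, J = -0.12, 0.08, 0.125   # eV: t', t'', J from the text

def E_sdw(kx, ky, sign=-1):
    """Mean-field SDW dispersion quoted above (constant U/2 offset dropped)."""
    band = (-4 * tp * np.cos(kx) * np.cos(ky)
            - 2 * tpp * (np.cos(2 * kx) + np.cos(2 * ky)))
    return band + sign * J * (np.cos(kx) + np.cos(ky)) ** 2

k = np.linspace(0, np.pi / 2, 50)
e_to_00 = E_sdw(np.pi / 2 - k, np.pi / 2 - k)   # (pi/2,pi/2) -> (0,0)
e_to_pi0 = E_sdw(np.pi / 2 + k, np.pi / 2 - k)  # (pi/2,pi/2) -> (pi,0)
# In this mean-field form the two cuts are generally unequal, unlike the
# nearly isotropic dispersion observed in the data.
```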
We now compare the data with numerical calculations that, unlike the mean field SDW picture, appropriately account for the dynamics. Being mainly concerned with the dispersion relation, we concentrate our discussion on the t-J model, as more extensive literature exists and as J can be independently measured. Qualitatively, the same conclusion is expected for the Hubbard model, which has the added advantage of yielding n(k), but has more uncertainty in the parameter U. Although the t-J model correctly predicts the dispersion along (0,0) to $`(\pi /2,\pi /2)`$ quantitatively, with the band width along this direction solely determined by J, it incorrectly predicts the energy at $`(\pi ,0)`$ to be nearly degenerate with that at $`(\pi /2,\pi /2)`$. This is a serious deficiency of the t-J model, because the evolution of the $`(\pi ,0)`$ feature is crucial to understanding the d-wave-like pseudo gap. The inclusion of the next nearest neighbor hoppings t’ and t” can resolve this problem. In fact, the t-t’-t”-J model can account for both the dispersion and the lineshape evolution over all doping levels, which is a remarkable success of this model. With a J/t ratio in the realistic range of 0.2 to 0.6, the t-t’-t”-J model shows that the dispersions from $`(\pi /2,\pi /2)`$ to (0,0) and to $`(\pi ,0)`$ are equal and scale with J. This result supports the notion that the isotropic dispersion is controlled by a single parameter, J, as stressed by Laughlin.
The above discussion indicates that we have a model which, when solved by Monte Carlo or exact diagonalization, can account for the data; but what does it mean that the data fit the non-trivial $`|`$cos($`k`$<sub>x</sub>a)-cos($`k`$<sub>y</sub>a)$`|`$ function so well? As pointed out, the key to the inclusion of t’ and t” is that the additional hole mobility destabilizes the one-hole Néel state with the hole at $`(\pi ,0)`$ and moves the one-hole system closer to a spin liquid state rather than to the Néel state that is stable in the t-J model. This point is relevant to some early literature on the resonating valence bond (RVB) state. Anderson conjectured that the ground state of the insulator at half filling is a RVB spin liquid state. This idea was extended in the context of a mean field approach to the t-J model that yields a d-wave RVB or flux phase solution. The mean-field solution also predicts a phase diagram similar to what is now known about the cuprates, with the d-wave like spin gap in the underdoped regime being the most successful example. The problem with the mean-field solution of the t-J model is that it does not agree with exact numerical calculation results, and the half-filled state was found by neutron scattering to have long range order. If these numerical calculations are right, then the d-wave RVB is not the right solution of the t-J model. However, the d-wave RVB like state may still be a reasonable way to think about the experimental data, in that it describes the situation of the spin state near a hole. It is just that one has to start with a model where the single hole Néel state is destabilized, as in the t-t’-t”-J model. We leave this open question as a challenge to theory.
The presence of d-wave like dispersion along the remnant Fermi surface shows that the high energy pseudo gap is a remnant of the d-wave ‘gap’ seen in the insulator. The details of the evolution of this gap, and its connection to the low energy pseudo gap (which is likely due to pairing fluctuations) as well as to the superconducting gap, are unclear at the moment. However, we believe that there has to be a connection between these gaps of the similar $`|`$cos($`k`$<sub>x</sub>a)-cos($`k`$<sub>y</sub>a)$`|`$ form, as they are correlated with one another.
# A Complete Set of Solutions For Caustic-Crossing Binary Microlensing Events
## 1 Introduction
Binary-lens microlensing events, especially those with caustic crossings, have a number of potentially important applications. First, if the caustic crossing is well sampled, the proper motion of the lens relative to the observer-source line of sight can be measured. Since different populations have different proper-motion distributions, such a measurement can help determine the nature of the lens. For example, five groups observed the event MACHO 98-SMC-1 found by the MACHO collaboration (Alcock et al. 1999) in observations toward the Small Magellanic Cloud (SMC) and all concluded that its proper motion is consistent with the lens being in the SMC rather than the Galactic halo (Afonso et al. 1998 \[EROS\]; Albrow et al. 1999a \[PLANET\]; Alcock et al. 1999 \[MACHO\]; Udalski et al. 1998 \[OGLE\]; Rhie et al. 1999 \[MPS\]). This provides an important clue regarding the controversy (e.g. Sahu 1994; Gould 1995) over the location and nature of the lenses currently being discovered toward the Magellanic Clouds (Aubourg et al. 1993; Alcock et al. 1997a). Second, caustic-crossing binaries are one of the few classes of microlensing events for which it is possible, at least in principle, to obtain a complete solution of the mass, distance, and velocity of the lens (Hardy & Walker 1995; Gould & Andronov 1999). Third, caustic-crossing events (both binary and point lens) can be used to measure the limb-darkened profile of the source star (Albrow et al. 1999b). Fourth, binary-lens events can potentially tell us about the distributions of binary mass ratios and separations. The light-curve solution directly yields the mass ratio and also gives the projected separation in units of the Einstein ring. By calibrating the binary-lens detection efficiency (Gaudi & Sackett 1999), the observed distribution can be compared with that predicted for various models. Finally, planet-star systems are, from a microlensing standpoint, extreme mass-ratio binaries and hence can be discovered by looking for binary-lens type events (Mao & Paczyński 1991).
For most of these applications, one must correctly and uniquely measure the parameters that describe the observed binary lens and quantify the uncertainties in this solution. Or, if an unambiguous determination is not possible, one must at least find the entire set of degenerate solutions.
Nine parameters are required to specify the most basic caustic-crossing binary-lens event. These are usually taken to be $`t_0`$, $`u_0`$, $`t_\mathrm{E}`$, $`q`$, $`d`$, $`\alpha `$, $`\rho _{*}`$, $`F_s`$, and $`F_b`$. Here $`t_0`$ is the time of closest approach to the origin of the binary, $`t_\mathrm{E}`$ is the Einstein crossing time, and $`u_0`$ is the angular impact parameter at time $`t_0`$ in units of the angular Einstein radius, $`\theta _\mathrm{E}`$,
$$\theta _\mathrm{E}=\left(\frac{4GMD_{\mathrm{LS}}}{D_\mathrm{L}D_\mathrm{S}c^2}\right)^{1/2},$$
(1)
$`D_\mathrm{L}`$, $`D_\mathrm{S}`$, and $`D_{\mathrm{LS}}`$ are the observer-lens, observer-source, and lens-source distances, and the mass $`M`$ is the total mass of the binary. Note that $`\theta _\mathrm{E}=\mu t_\mathrm{E}`$, where $`\mu `$ is the proper motion. The three parameters specific to the binary character of the lens are the mass ratio $`q=M_2/M_1`$ of the secondary to the primary ($`0<q\le 1`$), the projected angular binary separation $`d`$ in units of $`\theta _\mathrm{E}`$, and the angle $`\alpha `$ $`(0\le \alpha <2\pi )`$ between the binary-separation vector ($`M_2`$ to $`M_1`$) and the proper motion of the source relative to the origin of the binary. Our convention is that the center of the binary lies on the right hand side of the moving source, and we adopt the midpoint of the lenses as the origin of the binary. The angular size of the source in units of $`\theta _\mathrm{E}`$ is $`\rho _{*}`$, the source flux is $`F_s`$, and $`F_b`$ is the light from any unlensed sources (including the lens) that enters the aperture. If the event is observed from more than one observatory, then two additional parameters are required for each additional observatory to account for the different fluxes and backgrounds registered by different telescopes. One may include more than these basic parameters to account for other higher-order effects, such as limb darkening of the source, orbital motion of the binary, and parallax effects due to the motion of the Earth, but we will ignore these effects in this paper. This means that we will be implicitly assuming that the source can be approximated as uniform.
Using the above parameterization, fitting binary-lens light curves poses a significant challenge for several reasons. First, $`\chi ^2`$ is very sensitive to small changes in most of the parameters, and furthermore responds in a complicated manner. The sheer size of parameter space, combined with the sensitivity of $`\chi ^2`$ to subtle changes in the parameters, makes brute force searches practically impossible. Second, choosing suitable initial guesses for possible solutions is difficult because most of the parameters have no direct relationship to observable features in the light curve. Thus, even if one finds a trial solution, it is difficult to be sure that all possible solutions have been found. Finally, the magnification of a binary lens is nonanalytic. While this poses no significant challenge for calculating light curves for events that can be approximated as having a point source, such as binary-lens events with no caustic crossings, finite-source caustic crossing light curves are notoriously difficult to calculate. Although many efficient and robust methods have been proposed to do this (Kayser & Schramm 1988; Gould & Gaucherel 1997; Wambsganss 1997; Dominik 1998), they are invariably time consuming. This is a serious detriment to fitting a light curve because of the large number of models that must be calculated.
Mao & Di Stefano (1995) attacked the problem of fitting binary-lens light curves by developing a densely-sampled library of point-source binary microlensing events, each of which is characterized by catalogued “features” such as the number of maxima, heights of peaks, time between peaks, etc. They can then examine individual events, characterize their “features,” and search their library for events that are consistent with these features. This alleviates many of the problems discussed above, as it reduces the search to a relatively few regions of parameter space. Mao & Di Stefano (1995) report that their method is robust for caustic crossing events since these have well defined features. However, this method does have some shortcomings that make it difficult to apply to well-sampled caustic-crossing binary-lens events. First, the method relies on the approximate magnification of the observed peaks to reduce the possible space of solutions. However, the magnification of the observed peaks depends on the baseline magnitude, which can be unknown or poorly determined. Furthermore, even if the baseline is exactly measured, the magnification is not a direct observable, as it depends not only on the binary model and trajectory, but also on the amount of blended light, $`F_b`$. Finally, the peak magnification also depends on the unknown size of the source $`\rho _{}`$. While it may be possible to extend the method of Mao & Di Stefano (1995) to take into account these difficulties, the search space would increase by two dimensions and thus the efficiency would decrease. Di Stefano & Perna (1997) suggested that binary lenses could be fitted by decomposing the observed light curve into a linear combination of basis functions. The coefficients of these functions could then be compared to those fitted to a library of events in order to isolate viable regions of parameter space. This is essentially the same method as Mao & Di Stefano (1995), except that, rather than use gross features to identify similar light curves, one uses the coefficients of the polynomial expansion, which is more quantitative and presumably more robust. However, this method has the same shortcomings as that of Mao & Di Stefano (1995) for the same reasons. Also, the method of Di Stefano & Perna (1997) requires that, before the basis function fitting, one map the observed light curve onto the same temporal interval for which the event library light curves were fitted. This is impossible to do for only partially sampled events, or events where the fraction of blended light is unknown.
Here we propose an alternative method to systematically search for solutions in the specific case of a binary lens with one well-sampled caustic crossing. Initially, binary-lens events were monitored only by the primary search groups and so were observed only once or twice per night. Since caustic crossings generally take less than one day, this implied that the crossings were not well sampled. For example, the first binary-lens event with caustic crossings, OGLE-7, was observed by OGLE only once near the first caustic and not at all near the second (Udalski et al. 1994), although MACHO did serendipitously observe one point on the second crossing of this event thereby resolving the source (Mao et al. 1994). Dominik (1999a) showed that a large variety of binary-lens parameters are consistent with the photometric data for this event as well as for another caustic-crossing binary, DUO-2.
However, at present the three primary search groups, OGLE, MACHO, and EROS, all have alert systems by which they can recognize microlensing events in real time. Three other groups, GMAN (Alcock et al. 1997b), PLANET, and MPS then monitor these alerted events much more frequently. Once a source crosses the first caustic, it is possible to predict the second crossing at least a day in advance on the basis of these frequent follow-up observations by observing the rise to the crossing (although it is not possible to predict the second caustic crossing from observations of the first alone, as we demonstrate in § 4.2). The second caustic crossing can then be observed very intensively. Indeed, one caustic crossing was even observed spectroscopically by making use of target-of-opportunity time (Lennon et al. 1996). Thus, well-sampled caustic crossings should become more common in the future.
We present our method for searching for binary-lens solutions in § 2. In § 3 and § 4, we illustrate the method using PLANET data for MACHO 98-SMC-1. We show that a broad range of parameter combinations are consistent with the PLANET data. In § 5, we therefore examine what sort of data are required to break these degeneracies.
We emphasize that our treatment of MACHO 98-SMC-1 is not intended to be definitive, but merely illustrative. A thorough investigation of this event will be made by Afonso et al. (1999) by combining data from all five groups.
## 2 The Method
We assume that the binary-lens light curve can be decomposed into two parts. The first part characterizes the caustic crossing itself, and is described by a five-parameter semi-analytic function. The five parameters are not directly related to any of the traditional parameters, but are more directly related to observables, so that $`\chi ^2`$ is less sensitive to small changes in these parameters. Furthermore, the function is semi-analytic, and thus very simple and quick to compute. We fit the data near the caustic crossing to this function. Four of the five parameters extracted from the fit, along with a measurement of the baseline, are then used to constrain the search of parameter space. We then search for fits to the non-caustic crossing light-curve data in this restricted space. We calculate the magnification of these images from the full binary-lens equation with the standard parameters. Since the magnification arising from the diverging images associated with the caustic is not being considered, $`\chi ^2`$ behaves much more sensibly. Furthermore, no finite-source effects need be considered when fitting to the non-caustic crossing data, greatly improving the computational efficiency of the search. The end result of this search is a complete set of trial solutions. We then perform refined searches beginning with these trial solutions, incorporating all the data, and using a variant of the method just described.
In the next section, we describe in detail the method of fitting and extracting parameters from the five-parameter function that describes generic caustic crossings. The section following then describes how the parameters extracted from the caustic-crossing fit can be used to constrain the search for the global fit to the remaining data, and an effective method for performing this search. Figure 1 is a flow chart which illustrates the relations among the various steps of the method.
### 2.1 Parameterized Fit to the Caustic Crossing Data
Imagine a point at the center of a source as it crosses a caustic. While inside the caustic, the point source has five images. As it crosses the caustic, the magnifications of two of these images diverge toward a square-root singularity, until the images suddenly disappear. If we neglect any changes of the lens properties in the neighborhood of the caustic crossing, then the magnification of these two divergent images can be written (e.g. Schneider & Weiß 1986a),
$$A_{\mathrm{div}}^0(\mathbf{u})=\left(\frac{\Delta u_{\perp}}{u_r}\right)^{-1/2}\mathrm{\Theta}(\Delta u_{\perp}),$$
(2)
where
$$\Delta u_{\perp}\equiv \Delta \mathbf{u}\cdot \mathbf{n}_{\mathrm{cc}},\qquad \Delta \mathbf{u}\equiv \mathbf{u}-\mathbf{u}_{\mathrm{cc}},$$
(3)
$`𝐮_{\mathrm{cc}}`$ is the position of the caustic crossing, $`𝐧_{\mathrm{cc}}`$ is the unit vector at $`𝐮_{\mathrm{cc}}`$ pointing inward normal to the caustic, $`\mathrm{\Theta }`$ is a step function, and $`u_r`$ is the characteristic rise length of the caustic. The other three images are unaffected by the caustic crossing, so their total magnification can be Taylor expanded,
$$A_{\mathrm{non\text{-}div}}^0(\mathbf{u})=A_{\mathrm{cc}}+\mathbf{Z}\cdot \Delta \mathbf{u},$$
(4)
where $`A_{\mathrm{cc}}`$ is the magnification of the three images at the caustic crossing, and $`𝐙`$ is the gradient of the magnification with respect to $`𝐮`$. Hence the full magnification in the neighborhood of the caustic crossing can be approximated as,
$$A^0(\mathbf{u})=\left(\frac{\Delta u_{\perp}}{u_r}\right)^{-1/2}\mathrm{\Theta}(\Delta u_{\perp})+A_{\mathrm{cc}}+\mathbf{Z}\cdot \Delta \mathbf{u}.$$
(5)
For an extended source of angular radius $`\theta _{*}\equiv \rho _{*}\theta _\mathrm{E}`$, the magnification is given by the convolution of $`A^0`$ with the source surface brightness profile, which yields (e.g. Schneider et al. 1992, p. 215f),
$$A(\mathbf{u})=\left(\frac{u_r}{\rho _{*}}\right)^{1/2}G\left(-\frac{\Delta u_{\perp}}{\rho _{*}}\right)+A_{\mathrm{cc}}+\mathbf{Z}\cdot \Delta \mathbf{u}.$$
(6)
Note that $`\Delta u_{\perp}`$ is positive and the argument of $`G`$ is negative when $`\mathbf{u}`$ is inside the caustic.
Here $`G`$ is a characteristic profile function which depends only on the shape of the stellar profile, and not on the size of the source. That is, the source size affects the width of the caustic crossing only through the argument $`\Delta u_{\perp}/\rho _{*}`$ of $`G`$ and the magnification only through the factor $`\rho _{*}^{-1/2}`$. For uniform surface brightness, the profile function $`G`$ reads (Schneider & Weiß 1986b),
$$G_0(\eta )\equiv \frac{2}{\pi }\int _{\mathrm{max}(\eta ,-1)}^{1}\left(\frac{1-x^2}{x-\eta }\right)^{1/2}dx\,\mathrm{\Theta }(1-\eta ),$$
(7)
which can be expressed in terms of elliptical integrals. The case of limb-darkened profiles has been discussed by Schneider & Wagoner (1987). Consider an extended source moving over the caustic with proper motion $`\mu =\theta _E/t_E`$, at an angle $`\varphi `$ relative to the caustic. The time required for the radius to cross the caustic is
$$\Delta t=\frac{\theta _{*}}{\mu \mathrm{sin}\varphi }=\rho _{*}t_\mathrm{E}\mathrm{csc}\varphi .$$
(8)
Note that the width of the caustic crossing $`\Delta t`$ can be measured from the caustic-crossing data alone, while the three quantities whose product forms $`\Delta t`$ ($`\rho _{*}`$, $`t_\mathrm{E}`$, and $`\mathrm{csc}\varphi `$) can only be determined from an analysis of the complete light curve (see § 2.3). The angular separation (normalized to $`\theta _\mathrm{E}`$) of the source from the caustic crossing as a function of time is,
$$\Delta \mathbf{u}=\frac{\boldsymbol{\mu }(t-t_{\mathrm{cc}})}{\theta _\mathrm{E}},$$
(9)
where $`\boldsymbol{\mu }`$ is the vector proper motion, and $`t_{\mathrm{cc}}`$ is the time of the caustic crossing. This implies,
$$\Delta u_{\perp}=\frac{\mu (t-t_{\mathrm{cc}})\mathrm{sin}\varphi }{\theta _\mathrm{E}}=\frac{t-t_{\mathrm{cc}}}{t_\mathrm{E}}\mathrm{sin}\varphi .$$
(10)
Hence the magnification as a function of time is given by,
$$A(t)=\left(\frac{t_\mathrm{r}}{\Delta t}\right)^{1/2}G_0\left(\frac{t-t_{\mathrm{cc}}}{\Delta t}\right)+A_{\mathrm{cc}}+\omega (t-t_{\mathrm{cc}}),$$
(11)
where
$$t_r=u_rt_\mathrm{E}\mathrm{csc}\varphi ,$$
(12)
is the characteristic rise time of the caustic crossing, and $`\omega \equiv \boldsymbol{\mu }\cdot \mathbf{Z}/\theta _\mathrm{E}`$.
If the flux of the source star is $`F_\mathrm{s}`$ and the flux of the blend is $`F_\mathrm{b}`$, the total flux is given by
$$F(t)=F_sA(t)+F_b=Q^{1/2}G_0\left(\frac{t-t_{\mathrm{cc}}}{\Delta t}\right)(\Delta t)^{-1/2}+F_{\mathrm{cc}}+\tilde{\omega }(t-t_{\mathrm{cc}}),$$
(13)
where $`Q=F_s^2t_r`$, $`F_{\mathrm{cc}}=F_sA_{\mathrm{cc}}+F_b`$, and $`\stackrel{~}{\omega }=F_s\omega `$. Thus, a caustic crossing can be fit to a five-parameter function of the form of equation (13), the parameters being $`Q,t_{\mathrm{cc}},F_{\mathrm{cc}},\mathrm{\Delta }t`$, and $`\stackrel{~}{\omega }`$. Below, we will use the three parameters $`Q`$, $`t_{\mathrm{cc}}`$, and $`F_{cc}`$ to constrain the search for fits to the non-caustic-crossing points on the light curve. The caustic-crossing time scale $`\mathrm{\Delta }t`$ summarizes information about the caustic crossing only, and does not affect the remainder of the light curve or its analysis. The slope $`\stackrel{~}{\omega }`$ was introduced only to allow a more accurate estimate of $`F_{cc}`$ and will be of no further interest.
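The five-parameter form is simple enough to write down directly. A sketch in Python, with $`G_0`$ evaluated by quadrature from equation (7); in an actual fit, a function like this would simply be handed to a standard nonlinear least-squares routine:

```python
import numpy as np
from scipy.integrate import quad

def G0(eta):
    """Uniform-source profile function of equation (7)."""
    if eta >= 1.0:
        return 0.0
    lo = max(eta, -1.0)
    # Integrable inverse-square-root singularity at x = eta.
    val, _err = quad(lambda x: np.sqrt((1.0 - x * x) / (x - eta)), lo, 1.0)
    return (2.0 / np.pi) * val

def caustic_flux(t, Q, t_cc, dt, F_cc, omega_t):
    """Five-parameter caustic-crossing flux of equation (13)."""
    t = np.atleast_1d(np.asarray(t, dtype=float))
    g = np.array([G0((ti - t_cc) / dt) for ti in t])
    return np.sqrt(Q / dt) * g + F_cc + omega_t * (t - t_cc)
```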
It is also possible to parameterize the total flux by,
$$F(t)=F_{\mathrm{cc}}\left[\left(\frac{t_{\mathrm{r},\mathrm{eff}}}{\Delta t}\right)^{1/2}G\left(\frac{t-t_{\mathrm{cc}}}{\Delta t}\right)+1+\omega _{\mathrm{eff}}(t-t_{\mathrm{cc}})\right],$$
(14)
where
$$t_{\mathrm{r},\mathrm{eff}}=\left(\frac{F_\mathrm{s}}{F_{\mathrm{cc}}}\right)^2t_\mathrm{r}=\frac{Q}{F_{\mathrm{cc}}^2},$$
(15)
is an “effective” rise time, and $`\omega _{\mathrm{eff}}=\tilde{\omega }/F_{\mathrm{cc}}`$. This parameterization seems to be more appealing, as it replaces the unintuitive parameter $`Q`$ with $`t_{\mathrm{r},\mathrm{eff}}`$, the effective rise time of the caustic crossing. Unfortunately, in this parameterization, $`F_{\mathrm{cc}}`$ and $`t_{\mathrm{r},\mathrm{eff}}`$ are very highly correlated: we find below for a specific example that the fractional error in $`t_{\mathrm{r},\mathrm{eff}}`$ is about 7 times larger than the fractional error in $`Q`$, which makes $`t_{\mathrm{r},\mathrm{eff}}`$ substantially less suitable for numerical calculations. We will therefore use the parameterization in equation (13).
Note that in the neighborhood of the end of the caustic crossing,
$$G_0(\eta )\rightarrow 2^{1/2}(1-\eta )\mathrm{\Theta }(1-\eta ),$$
(16)
and thus an abrupt change of slope occurs at $`\eta =1`$. Hence, while for most points on the light curve it is appropriate to use simply the midpoint of the exposure for the time, this approximation breaks down when the time between the midpoint of the exposure and the end of the caustic crossing ($`\eta =1`$), $`\delta t=t-\Delta t-t_{\mathrm{cc}}`$, is less than half the exposure time, $`t_{\mathrm{exp}}`$, i.e. $`|\delta t|<t_{\mathrm{exp}}/2`$. For this case we integrate equation (16) over the exposure time and find,
$$G_0\left(\frac{t-t_{\mathrm{cc}}}{\Delta t}\right)\rightarrow 2^{-1/2}\frac{(\delta t-t_{\mathrm{exp}}/2)^2}{t_{\mathrm{exp}}\Delta t},\qquad \left(|\delta t|<\frac{t_{\mathrm{exp}}}{2}\right).$$
(17)
### 2.2 Relations Between Parameterizations
As shown in the previous subsection, the caustic-crossing fit yields the four parameters $`Q`$, $`t_{\mathrm{cc}}`$, $`\Delta t`$, and $`F_{\mathrm{cc}}`$. Three of the remaining parameters are the same as in the conventional parameterization: the Einstein time scale, $`t_\mathrm{E}`$, the normalized projected separation between the lenses, $`d`$, and the mass ratio, $`q`$. The eighth parameter is the baseline flux $`F_{\mathrm{base}}`$, which is often, but not always, well measured. For the final parameter, we adopt the path length $`\ell `$ along the caustic curve(s) for the configuration ($`d,q`$). This is a logical choice, since we know that the light curve contains a caustic crossing, and the trajectory must therefore cross a caustic at some value of $`\ell `$. Below we show how the local properties of the binary lens at $`\ell `$ can be used to relate our non-standard parameters to the more familiar parameters. In our parameterization, the binary-lens event is described by the 9 parameters $`(Q,t_{\mathrm{cc}},\Delta t,F_{\mathrm{cc}},d,q,\ell ,t_\mathrm{E},F_{\mathrm{base}})`$ rather than by the 9 “standard” parameters $`(t_\mathrm{E},t_0,u_0,d,q,\alpha ,\rho _{*},F_s,F_b)`$. In order to use the caustic crossing parameters $`(Q,t_{\mathrm{cc}},F_{\mathrm{cc}})`$ to constrain the fit to the non-caustic crossing data, we must know the relation between the two parameter sets. This is trivial for $`t_\mathrm{E}`$, $`d`$, and $`q`$. Given a binary configuration ($`d,q`$), one can determine at each $`\ell `$ the following five local properties of the binary lens. The first two are simply the x and y positions of the caustics at $`\ell `$, $`u_{\mathrm{cc},\mathrm{x}}(\ell )`$ and $`u_{\mathrm{cc},\mathrm{y}}(\ell )`$, with respect to the standard coordinate system (i.e. the origin located at the midpoint of the binary and the x-axis coincident with the binary axis). These values can be determined using the algorithm of Witt (1990). The third property is the angle of the caustic with respect to the binary axis at $`\ell `$, $`\gamma (\ell )`$, which can be found by the same algorithm and by fitting a line to positions offset by $`\delta \ell `$ from $`\ell `$. The last two properties must be calculated by solving the full binary-lens equation. The near-caustic magnification $`A_{\mathrm{cc}}`$ is the sum of the magnifications of the three non-diverging images at the position of the caustic. The caustic divergence, $`u_r`$, is defined by equation (2) and can be determined by fitting an inverse square-root function to the sum of the magnifications of the two diverging images in the neighborhood of $`\ell `$. Note that all five quantities are functions of $`(\ell ,d,q)`$. Using these quantities, the relations between the standard parameters and those used in this paper are simple to determine and are given in Table 1. Figure 2 shows the relation between the two sets of parameters graphically for the parameters that do not involve the finite size of the source. Figure 3 shows a detailed view of the finite source crossing the caustic. Note that several of the quantities shown in Figure 3 are not discussed in the text until equation (28) in § 4.1, below.
### 2.3 Fitting Non-Caustic-Crossing Data: Idealized Case
We now use our parameterization and the results of the fit to the caustic-crossing data to find corresponding binary-lens configurations that contain the observed caustic crossing. For illustrative purposes, let us initially assume that both the baseline flux of the event, $`F_{\mathrm{base}}`$, and the three caustic-crossing parameters $`Q`$, $`F_{\mathrm{cc}}`$, and $`t_{\mathrm{cc}}`$ have been measured with high precision. (Recall that the fourth caustic-crossing parameter, $`\Delta t`$, is not used in the analysis of the non-caustic-crossing data.) The search for solutions would then be reduced to a four-dimensional space and could be conducted as follows. First, one begins with a binary configuration $`(d,q)`$, varying the parameter $`\ell `$ over the total length of the caustic. At each $`\ell `$ in geometry $`(d,q)`$, one has two equations relating the source and background fluxes: $`F_{\mathrm{base}}=F_s+F_b`$ and $`F_{\mathrm{cc}}=F_sA_{\mathrm{cc}}+F_b`$. Thus,
$$F_s=\frac{F_{\mathrm{cc}}-F_{\mathrm{base}}}{A_{\mathrm{cc}}-1}.$$
(18)
If $`F_s>F_{\mathrm{base}}`$ (i.e., $`A_{\mathrm{cc}}<F_{\mathrm{cc}}/F_{\mathrm{base}}`$), then there would be negative background flux. Hence any position $`\ell `$ yielding $`F_s>F_{\mathrm{base}}`$ does not correspond to a physical solution, and one can move on to the next value of $`\ell `$. At each physical $`\ell `$, $`t_\mathrm{E}`$ is varied, and for each $`t_\mathrm{E}`$ the angle $`\varphi `$ at which the source crosses the caustic is determined using equation (12) and the definition $`Q\equiv F_s^2t_r`$,
$$\mathrm{sin}\varphi =\frac{u_rt_\mathrm{E}F_s^2}{Q}.$$
(19)
Of course, $`\varphi `$ must satisfy $`\mathrm{sin}\varphi \le 1`$, which means that only values of $`t_\mathrm{E}`$ in the range
$$t_\mathrm{E}\le \frac{Q}{u_rF_s^2},$$
(20)
need to be searched. Note that $`\varphi `$ is restricted to lie in the range $`0\le \varphi \le \pi `$, and the orientation of $`\gamma `$ is set to enforce the relation in Table 1: $`\alpha =\varphi +\gamma `$. At this point all of the standard parameters needed to evaluate the magnifications at all the times of the observations have been determined. Since $`F_s`$ and $`F_b(=F_{\mathrm{base}}-F_s)`$ are completely determined for this geometry, these magnifications can be used to predict the flux, $`F(t)=F_sA(t)+F_b`$, and these predictions can be compared to the data using $`\chi ^2`$. However, before doing the calculation for the entire (non-caustic-crossing) light curve, the following checks should be done. From inspection of the light curve, it is often clear which measurements are inside the caustic and which are outside the caustic. One can then evaluate the number of images at the most restrictive of these measurements (i.e., the last measurement before the first crossing and the first measurement after this crossing), and determine whether the model is consistent with these observational constraints. If it is not, no further evaluation need be done and one can continue to the next parameter combination. Thus one can calculate $`\chi ^2`$ for each combination $`(d,q,t_\mathrm{E},\ell )`$ and find the best fit (or fits) to the data. The search is over a four-dimensional space, but with the restrictions described above.
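As a concrete summary of this idealized procedure, the following sketch lays out the nested search with the physicality cut of equation (18) and the bound of equation (20). The helpers `local_props` and `chi2_of_model` are hypothetical stand-ins for the caustic calculation and the light-curve comparison.

```python
import numpy as np

def idealized_search(geometries, ell_grid, tE_grid, Q, F_cc, F_base, data):
    """Sketch of the idealized 4-D search over (d, q, ell, t_E).

    local_props and chi2_of_model are hypothetical helpers: the first
    returns (A_cc, u_r) at arc length ell for geometry (d, q); the second
    scores the implied model against the non-caustic-crossing data.
    """
    best = (np.inf, None)
    for d, q in geometries:
        for ell in ell_grid(d, q):
            A_cc, u_r = local_props(d, q, ell)
            F_s = (F_cc - F_base) / (A_cc - 1.0)     # eq. (18)
            if F_s > F_base:                          # negative blend flux:
                continue                              # unphysical position
            for t_E in tE_grid:                       # ascending values
                if t_E > Q / (u_r * F_s**2):          # eq. (20) violated;
                    break                             # sin(phi) would be > 1
                sin_phi = u_r * t_E * F_s**2 / Q      # eq. (19)
                chi2 = chi2_of_model(d, q, ell, t_E, sin_phi,
                                     F_s, F_base - F_s, data)
                if chi2 < best[0]:
                    best = (chi2, (d, q, ell, t_E, sin_phi))
    return best
```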
### 2.4 Non-Caustic-Crossing Data: Realistic Case
In practice, $`Q`$, $`F_{\mathrm{cc}}`$, and $`F_{\mathrm{base}}`$ are not known with infinite precision, and so one must take account of the uncertainties in these parameters. For well-sampled caustic crossings, the time of caustic crossing $`t_{\mathrm{cc}}`$ is measured to much higher precision than is required, so for this purpose we assume that it is known perfectly. The parameter $`\mathrm{\Delta }t`$ has no effect on the non-caustic-crossing data, so uncertainties in this parameter are unimportant. The uncertainties in $`Q`$, $`F_{\mathrm{cc}}`$, and $`F_{\mathrm{base}}`$ introduce two major changes into the above procedure. First, one must consider a range of $`\mathrm{sin}\varphi `$ at each parameter combination $`(d,q,\ell ,t_\mathrm{E})`$ rather than a single value. That is, there is a fifth dimension to the search, albeit over a truncated domain. Second, once a parameter combination $`(d,q,\ell ,t_\mathrm{E})`$ is chosen, and the range in $`\varphi `$ to be explored is determined, one must fit for the two flux parameters $`F_s`$ and $`F_b`$, since these are no longer determined with infinite precision. This appears to add two dimensions to the search, but in fact this is a linear fit and can be computed much more quickly than the other steps required for each combination $`(d,q,\ell ,t_\mathrm{E},\varphi )`$ (see also Rhie et al. 1999). Thus, the search is effectively increased to 4.5 dimensions. Good constraints on the time of the first caustic crossing restrict the search further, as discussed following equation (20).
We now consider the realistic case more closely. Since $`F_{\mathrm{cc}}`$ and $`F_{\mathrm{base}}`$ have uncertainties, so will $`F_s`$ through equation (18). Then, the uncertainties in $`F_s`$ and $`Q`$ will propagate to the estimate of $`\mathrm{sin}\varphi `$ through equation (19). Since $`Q`$ and $`F_{\mathrm{cc}}`$ are highly anti-correlated, the error in $`\mathrm{sin}\varphi \propto F_s^2/Q`$ will be higher than given by naive error propagation. One then needs to explore a range for $`\mathrm{sin}\varphi `$ (say 2 or 3 $`\sigma `$) rather than the single value derived in the previous section.
Usually, the uncertainty in $`F_{\mathrm{base}}`$ will lie at one of two extremes. Either the baseline is very well known from many observations before or after the event, or it is very poorly known because the event is not yet over. In the latter case, there will of course be baseline measurements made using the telescope from which the event was discovered, but these may not be generally available. Even if they are, they will usually be in a different filter with different seeing conditions and so not directly useful for establishing a baseline for the observations of the caustic crossing (but see § 5.3). In the first case, the error in $`F_s`$ is simply $`(A_{\mathrm{cc}}-1)^1`$ times the error in $`F_{\mathrm{cc}}`$. In the second case, one has only an upper limit, $`F_{\mathrm{base}}<F_{\mathrm{lim}}`$. This leads to a range of allowed values for $`F_s`$,
$$\frac{F_{\mathrm{cc}}-F_{\mathrm{lim}}}{A_{\mathrm{cc}}-1}\le F_s\le \frac{F_{\mathrm{cc}}}{A_{\mathrm{cc}}},$$
(21)
with the second relation coming from $`A_{\mathrm{cc}}F_s=F_{\mathrm{cc}}-F_b\le F_{\mathrm{cc}}`$. This range must then be expanded to allow for errors in $`F_{\mathrm{cc}}`$ before being combined with equation (19) and its associated uncertainties in $`Q`$.
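As a small illustration, the allowed interval of equation (21) can be packaged as follows (the function name is hypothetical):

```python
def Fs_interval(F_cc, F_lim, A_cc):
    """Allowed range of F_s from eq. (21), given only an upper limit
    F_lim on the baseline flux (a sketch)."""
    lo = (F_cc - F_lim) / (A_cc - 1.0)   # from F_base <= F_lim
    hi = F_cc / A_cc                     # from F_b >= 0
    return max(lo, 0.0), hi
```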
Once the four trial parameters $`(d,q,\ell ,t_\mathrm{E})`$ are chosen, the allowed range in $`\varphi `$ can be determined. The standard parameters $`t_0,u_0`$, and $`\alpha `$, which completely specify the trajectory, can be found from the relations in Table 1, and using $`d`$, $`q`$, and $`t_\mathrm{E}`$, the magnification can be determined as a function of time. The best fit for the remaining two parameters needed for the non-crossing data, $`F_s`$ and $`F_b`$, can be determined by linear regression. That is, for each non-caustic-crossing observation (to be defined more precisely below) at time $`t_i`$, one predicts the flux,
$$F_{\mathrm{pred},i}=A^0(t_i)F_s+F_b,$$
(22)
and then forms $`\chi ^2=\sum _i(F_{\mathrm{pred},i}-F_{\mathrm{obs},i})^2/\sigma _i^2`$, where $`F_{\mathrm{obs},i}`$ is the observed flux, and $`\sigma _i`$ is the error of the observation at $`t_i`$. This does not yet take into account the information about $`F_s`$ and $`F_b`$ contained in the caustic-crossing data. To include this information, we simply invert equation (19) and note that in the present context, $`\varphi `$ and $`t_\mathrm{E}`$ should both be regarded as constants. That is, $`F_s=[Q\mathrm{sin}\varphi /(u_rt_\mathrm{E})]^{1/2}`$ and $`\sigma _{F_s}=\sigma _Q[\mathrm{sin}\varphi /(4Qu_rt_\mathrm{E})]^{1/2}`$, where $`\sigma _Q`$ is the error in $`Q`$ taken from the fit to the caustic-crossing data. Hence, $`\chi ^2`$ is given by,
$$\chi ^2=\sum _i\frac{(A^0(t_i)F_s+F_b-F_{\mathrm{obs},i})^2}{\sigma _i^2}+4\frac{[(Qu_rt_\mathrm{E}\mathrm{csc}\varphi )^{1/2}F_s-Q]^2}{\sigma _Q^2},$$
(23)
which can be solved for $`F_s`$ and $`F_b`$ by standard linear techniques.
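Since the prior term of equation (23) is linear in $`F_s`$, the minimization reduces to 2×2 weighted normal equations. The following sketch (a hypothetical interface, not the authors' code) treats the caustic-crossing constraint as one extra linear “observation”:

```python
import numpy as np

def fit_fluxes(A0, F_obs, sig, Q, sig_Q, u_r, t_E, sin_phi):
    """Minimize the chi^2 of eq. (23) over (F_s, F_b); a sketch.

    The caustic-crossing prior is one extra linear 'observation'
    c*F_s = Q with weight 4/sig_Q**2, where c = sqrt(Q u_r t_E / sin_phi).
    """
    M = np.column_stack([A0, np.ones_like(A0)])   # rows: (A0(t_i), 1)
    w = 1.0 / sig**2
    c = np.sqrt(Q * u_r * t_E / sin_phi)
    M = np.vstack([M, [c, 0.0]])
    y = np.append(F_obs, Q)
    w = np.append(w, 4.0 / sig_Q**2)
    MTW = M.T * w                                  # weighted normal equations
    F_s, F_b = np.linalg.solve(MTW @ M, MTW @ y)
    return F_s, F_b
```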
Clearly all the points that were not used in the caustic-crossing fit can be incorporated into equation (23). In addition, one might also wish to use the points outside the caustic which were included in the caustic-crossing fit in order to determine $`F_{\mathrm{cc}}`$ and the slope $`\stackrel{~}{\omega }`$. Since $`F_{\mathrm{cc}}`$ does not directly enter equation (23), this may appear to be permissible. Actually, since $`F_{\mathrm{cc}}`$ is highly correlated with $`Q`$, inclusion of these points is not strictly permitted. Nevertheless, we advocate including them (and thus slightly overcounting the information they contain) because the method is being used to find allowed regions of parameter space, not to determine the errors of the best fit.
## 3 A Worked Example: PLANET data for MACHO 98-SMC-01
To illustrate how the method works, we apply it to the PLANET data for MACHO 98-SMC-1. These data differ from those analyzed by Albrow et al. (1999a) in two ways: the SAAO data have been re-reduced using a better template, and a few late-time points that became available only later have been added. In addition, we now report the Heliocentric Julian Date (HJD) rather than Julian Date (JD) and uniformly report the midpoints of the exposures, rather than their beginnings as was previously done for some of the observatories. We choose this example because it has a well-covered caustic and the data are publicly available (http://www.astro.rug.nl/~planet).
### 3.1 Choosing the Data Set for the Caustic-Crossing Fit
The first step is to fit the caustic crossing, and to do this we must choose which data points should be used for the fit. The entire data set is shown in Figure 4. Data within 1.5 days of the caustic crossing (HJD-2450000.0$`=982.6\pm 1.5`$) are shown as individual points while the rest are shown as daily averages. Figure 5 shows the immediate neighborhood of the crossing in more detail.
What should be the first point included in the caustic-crossing fit? When the source is too close to the caustic, it cannot be approximated as a point source, and so cannot be included in the non-caustic-crossing fit. Hence, these observations should be included in the caustic-crossing fit. This condition can be made precise because, from equation (7), $`G_0(\eta )`$ can be expanded in the limit $`\eta \rightarrow -\mathrm{\infty }`$,
$$G_0(\eta )=(-\eta )^{-1/2}\left(1+\frac{3}{32}\eta ^{-2}+\cdots \right),\qquad (\eta \ll -1).$$
(24)
Hence, for typical daily-averaged photometry errors of $`1\%`$, this cutoff should be about 3 source-radius crossing times before the time of the caustic crossing, i.e., at $`t=t_{\mathrm{cc}}-3\mathrm{\Delta }t`$, where the fractional effect of the finite source is $`(3/32)\eta ^{-2}\approx 1\%`$. For well-sampled crossings, one can estimate $`\mathrm{\Delta }t`$ and $`t_{\mathrm{cc}}`$ by eye, and use these estimates to determine which data should be included. As we show below, for this data set $`\mathrm{\Delta }t\approx 0.18`$ days and $`t_{\mathrm{cc}}\approx 982.62`$ days, so data after $`t\approx 982.08`$ should be included. Another important consideration is that the magnifications too long before the crossing will not be well approximated by equation (7), primarily because the two divergent images will not be well approximated by equation (2). If the time that the source spends inside the caustic is not long compared to $`\mathrm{\Delta }t`$, then this condition cannot be satisfied simultaneously with the previous one, and the method breaks down. This might happen either because the caustic is very small (e.g., a planetary caustic) or because the crossing is close to a cusp. Other methods must then be used (e.g., Gaudi & Gould 1997; Albrow et al. 1999b). From Figure 4, however, the ratio of these two times is at least 50 in the present case, so this is not a major concern.
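A quick numerical check of this selection rule, using the by-eye estimates quoted above:

```python
# numerical check of the data-selection window, using the by-eye
# estimates quoted above
eta = -3.0                          # cutoff, in source radii before crossing
print((3.0 / 32.0) * eta**-2)       # ~0.0104: finite-source effect ~1%

t_cc, dt = 982.62, 0.18
print(t_cc - 3.0 * dt)              # ~982.08: include later data in the fit
```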
What should be the last point included? Sufficient data after the crossing are required to establish the slope $`\stackrel{~}{\omega }`$ well enough to extrapolate back “under” the high magnification peak ($`t_{\mathrm{cc}}\pm \mathrm{\Delta }t`$) and so establish the value of $`F_{\mathrm{cc}}`$. In the present case, the three Yale points near 982.8 days are too close to the end of the caustic crossing for this purpose. The next set of points are the SAAO data near 983.6 days. Fortunately, the cusp-approach “bump” centered near 988 days is sufficiently far from these SAAO observations that they can be used. In general, one might not be so lucky, and the choice of a final cut off for data to be included in the caustic-crossing fit should be made carefully.
Altogether, there are 74 data points in the caustic-crossing region, 71 from SAAO and 3 from Yale. Since these data come from two different telescopes, they could in principle have different values of $`F_s`$ and $`F_b`$. Since the caustic crossing itself does not possess sufficient information to determine the relative values of $`F_s`$ and $`F_b`$, either external information must be applied or data from one of the observatories must be ignored. The latter choice would be tolerable in the present case because there are only 3 Yale points and, as we will show, these reduce the error bars of the caustic-crossing parameters by only about 20%. In general, however, fitting the crossing may depend critically on data from several observatories. Even in the present case, using all the data would be preferable. To do so, we first make an initial educated guess as to the relative values, namely that $`F_s`$ and $`F_b`$ are both the same for the two observatories. (In the present case, $`F_s`$ is known a priori to be the same because the two observatories use similar filters and the photometric measurements are made relative to the same reference stars. However, the two $`F_b`$ could be different because the reductions are not carried out with the same template, and so different amounts of background light could enter the photometric apertures.) We then search for solutions using the non-caustic-crossing data. We find that all viable solutions have $`F_b^{\mathrm{SAAO}}\simeq F_b^{\mathrm{Yale}}+0.039F_{20}`$, where $`F_{20}`$ is the flux from an $`I=20`$ star. The scatter ($`\sim 0.04F_{20}`$) in these determinations is smaller than the $`0.08F_{20}`$ combined error for the three Yale measurements, so we simply employ the offset without incorporating an additional uncertainty.
### 3.2 Caustic-Crossing Parameters
We find fit parameters,
$$Q=(15.73\pm 0.35)F_{20}^2\mathrm{day},t_{\mathrm{cc}}=(982.62439\pm 0.00087)\mathrm{day},\mathrm{\Delta }t=(0.1760\pm 0.0015)\mathrm{day},$$
(25)
$$F_{\mathrm{cc}}=(1.378\pm 0.096)F_{20},\qquad \stackrel{~}{\omega }=(0.02\pm 0.10)F_{20}\mathrm{day}^{-1},$$
(26)
with a matrix of coefficients of local correlation
$$\left(\begin{array}{ccccc}1.00& 0.45& 0.64& 0.97& 0.91\\ 0.45& 1.00& 0.76& 0.39& 0.32\\ 0.64& 0.76& 1.00& 0.57& 0.52\\ 0.97& 0.39& 0.57& 1.00& 0.93\\ 0.91& 0.32& 0.52& 0.93& 1.00\end{array}\right),$$
(27)
where the order of the rows and columns corresponds to the order of the parameters in equations (25) and (26). The effective rise time of the caustic crossing is $`t_{\mathrm{r},\mathrm{eff}}=(8.28\pm 1.34)\mathrm{day}`$. Note that the midpoint of the first Yale data point occurs 4 minutes before the best-fit time for the end of the crossing. Since the exposure time was $`t_{\mathrm{exp}}=20`$min, we use equation (17) for this point.
For completeness, we note that if we ignore the Yale data, we obtain $`Q=(15.73\pm 0.41)F_{20}^2\mathrm{day}`$, $`t_{\mathrm{cc}}=(982.62444\pm 0.00096)\mathrm{day}`$, $`\mathrm{\Delta }t=(0.1761\pm 0.0017)\mathrm{day}`$, $`F_{\mathrm{cc}}=(1.379\pm 0.115)F_{20}`$, and $`\stackrel{~}{\omega }=(0.02\pm 0.12)F_{20}\mathrm{day}^{-1}`$.
Figure 5 shows the best-fit curve to the caustic crossing. It has $`\chi ^2=113`$ for 69 degrees of freedom. We therefore estimate that the formal DoPHOT errors should be multiplied by $`(113/69)^{1/2}=1.28`$, and we use these higher errors in all subsequent work. This ratio between formal errors and true uncertainties is typical of DoPHOT reduced PLANET data (Albrow et al. 1998).
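In code, this rescaling is a one-liner; a small sketch:

```python
import numpy as np

# inflate the formal DoPHOT errors so that the caustic-crossing fit
# has chi^2 per degree of freedom of unity (numbers quoted above)
chi2, dof = 113.0, 69
scale = np.sqrt(chi2 / dof)               # ~1.28
sigma_formal = np.array([0.010, 0.015])   # example formal errors
sigma_used = scale * sigma_formal         # errors adopted in later fits
```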
We note in passing that the end of the caustic crossing occurred at $`t=t_{\mathrm{cc}}+\mathrm{\Delta }t=982.8004\pm 0.0028`$. This may be compared to the values obtained by EROS from their detailed observations of the end of the caustic crossing, $`982.7987\pm 0.0012`$ and $`982.7997\pm 0.0021`$ for their blue and red filters, respectively (Afonso et al. 1998), where we have converted the EROS numbers from JD to HJD.
### 3.3 Grid of Lens Parameters
In principle, the lens could have any geometry $`(d,q)`$, with $`0<d<\mathrm{\infty }`$ and $`0<q\le 1`$. We must therefore choose a grid of geometries that adequately samples this space. We initially choose arrays of values $`q=0.005`$, 0.01, 0.03, 0.05, 0.1, 0.3, 0.5, 0.75, 1.0 and $`d=0.3`$, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 1.2, 1.4, 1.7, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5. We will see in § 4 that this is adequate. For the Einstein crossing times, we choose a range $`20\mathrm{day}\le t_\mathrm{E}\le 200\mathrm{day}`$. Our observations display significant structure for at least 25 days beginning at about the minimum of the caustic region, so it is very unlikely that the event could be shorter than 20 days. In fact the event could be longer than 200 days if it were heavily blended, in which case only the inner, highly-magnified portions of the Einstein ring would give rise to significant structure. In this case, we would find that for each geometry near the geometry characterizing the actual event, the lowest $`\chi ^2`$ fit would have durations at or near our upper limit of $`t_\mathrm{E}=200`$ days.
For each geometry, we first create a very densely sampled representation of the caustic using the algorithm of Witt (1990), which is unevenly sampled with much wider spacing near the cusps than between them. We then resample each of the 1 to 3 closed caustic curves with about 800 roughly equally spaced points, $`\ell _i`$. At each point we evaluate $`A_{\mathrm{cc}}(\ell _i)`$ directly on the caustic and $`u_r(\ell _i)`$ by sampling the magnification at distances $`\mathrm{\Delta }u_{\perp }=0.0001,0.00004,0.00002,`$ and $`0.00001`$ and applying equation (2). This procedure of course fails in the neighborhood of the cusps, but since the largest trial value of $`\mathrm{\Delta }u_{\perp }`$ is more than 10 times smaller than the source, any caustic position where the procedure fails is not a viable candidate for a fold caustic crossing in any case.
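A minimal sketch of the $`u_r`$ estimate, assuming the divergent-image magnification follows $`A_2=(u_r/\mathrm{\Delta }u_{\perp })^{1/2}`$ at the sampled offsets (a least-squares average in log space; the interface is hypothetical):

```python
import numpy as np

def estimate_ur(du_perp, A2):
    """Estimate the caustic divergence u_r of eq. (2) from the summed
    magnification A2 of the two divergent images, sampled at several
    perpendicular distances du_perp inside the caustic (a sketch).
    Assumes A2 = (u_r / du_perp)**0.5, averaged in log space."""
    return np.exp(np.mean(2.0 * np.log(A2) + np.log(du_perp)))

du = np.array([1e-4, 4e-5, 2e-5, 1e-5])   # the offsets quoted above
```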
The event was not yet at baseline at the last data point. We therefore can estimate only an upper limit $`F_{\mathrm{lim}}>F_{\mathrm{base}}`$ for the baseline flux. We choose $`F_{\mathrm{lim}}=0.55F_{20}`$ based on the upper limit from the last three measurements (see Fig. 4).
We find that stepping through the $`800`$ caustic points $`\ell _i`$ yields a $`\chi _{\mathrm{min}}^2(\ell _i)`$ as a function of position $`\ell _i`$ that is sufficiently well sampled to obtain at least one point with $`\chi _{\mathrm{min}}^2`$ that is within 1 or 2 of the true local minimum.
At each position we sample the range of time scales $`t_\mathrm{E}`$ in increments of 5%. This choice is dictated by the character of the cusp-approach structure seen in Figure 4, with a peak near $`t=988`$ days. The full width at half maximum and the time from the caustic crossing are about equal, approximately 6 days. The 14 daily-averaged measurement errors are typically $`\sigma \sim 4\%`$. Hence a deviation of the trial $`t_\mathrm{E}`$ from the true value by $`\delta \sim 2.5\%`$ would lead to a change in $`\chi ^2`$ of $`14(\sigma /2\delta )^{-2}\gg 1`$. For each $`t_\mathrm{E}`$, we explore the range of $`\mathrm{sin}\varphi `$ described by equations (19) and (21), and augmented by the $`3\sigma `$ errors for $`Q`$ (eq. 25), stepping in 5% increments. Other choices of increment size could be made. Empirically we find that with our adopted choice of 5% timescale increments, the search can miss a local minimum in $`\chi ^2`$ by $`\mathrm{\Delta }\chi ^2\sim 10`$. This means that all local minima lying within $`\mathrm{\Delta }\chi ^2\lesssim 15`$ of the global minimum must be checked (see § 4).
For each geometry $`(d,q)`$ we record the lowest value of $`\chi ^2`$ and examine the resulting map. We find three very broad areas of $`(d,q)`$ space with very similar values of $`\chi ^2\sim 130`$–135. These are roughly described by $`(0.4\le d\le 0.7)\times (0.3\le q\le 1)`$, $`(2.5\le d\le 3.5)\times (0.1\le q\le 1)`$, and $`(0.6\le d\le 0.7)\times (0.05\le q\le 0.1)`$. That is, we appear to have found an extremely broad class of solutions rather than a single unique minimum or even a few well-defined isolated minima.
For several individual $`(d,q)`$ pairs, we also examine the minimum $`\chi ^2`$ as a function of position $`\ell `$ around the caustics. Typically, we find two distinct minima with comparable $`\chi ^2`$, one with $`\alpha `$ close to 0 (or $`2\pi `$) and the other with $`\alpha \sim \pi `$. These describe second caustic crossings on opposite sides of the caustic region. We therefore conduct two automated searches at each $`(d,q)`$, one with $`\pi /2\le \alpha <3\pi /2`$ and the other in the complementary region. We remark on the relation between these two solutions at the end of § 4.
## 4 Worked Example II: Refined Search for Minima
In order to investigate this preliminary result further, we search for local minima near the solutions with $`\chi ^2\lesssim 145`$ found at each $`(d,q)`$, encompassing a slightly larger region than the broad apparent plateau discussed at the end of the previous section. We adopt this somewhat looser criterion because, as we discussed above, the initial systematic search could miss the true minimum by $`\mathrm{\Delta }\chi ^2\sim 10`$.
### 4.1 Basic Approach
Although the standard procedure in such a search would be to allow all parameters to vary simultaneously, we specifically do not follow this usual approach. Instead, we hold $`d`$ and $`q`$ fixed and allow only the remaining parameters to vary. This will permit a test of the hypothesis that there are a set of very broad minima in $`(d,q)`$ space. If the $`\chi ^2`$ minimum at each of these points is essentially the same, then $`d`$ and $`q`$ are indeed highly degenerate. On the other hand, if the minimum $`\chi ^2`$ is found to differ substantially for different fixed $`(d,q)`$, then it would be worthwhile to allow these parameters to vary simultaneously with the others.
For each set of trial parameters, we proceed as follows. For each observation (not binned by day as in the preliminary search), we evaluate the magnification by one of two methods, both semi-analytic. If the source lies entirely outside of the caustics or if its center lies at least 3.5 source radii from a caustic, we simply use the magnification at the source center. Otherwise, we use an approximation for the magnification that is similar in spirit to the approximation used to fit the caustic crossing that we introduced in § 2.1,
$$A(𝐮_p)=A_3^0(𝐮_p)+A_2^0(𝐮_q)\left(\frac{\mathrm{\Delta }u_{q,\perp }}{\rho _{*}}\right)^{1/2}G_0\left(\frac{\mathrm{\Delta }u_{p,\perp }}{\rho _{*}}\right),$$
(28)
where $`𝐮_p`$ is the position in the Einstein ring of the center of the source, $`𝐮_q`$ is another position in the Einstein ring to be described below, $`\mathrm{\Delta }u_{p,\perp }`$ and $`\mathrm{\Delta }u_{q,\perp }`$ are respectively the perpendicular distances from $`𝐮_p`$ and $`𝐮_q`$ to the nearest caustic, $`A_3^0(𝐮_p)`$ is the magnification of the 3 non-divergent images at the position $`𝐮_p`$, $`A_2^0(𝐮_q)`$ is the magnification of the 2 divergent images at the position $`𝐮_q`$, and $`\rho _{*}`$ is the source size in units of the Einstein ring. If $`\mathrm{\Delta }u_{p,\perp }>\rho _{*}`$, then we assign $`𝐮_q=𝐮_p`$. Otherwise, we take $`𝐮_q`$ to lie along the perpendicular to the caustic through $`𝐮_p`$ and halfway from the caustic to the limb of the star that is inside the caustic. The argument of $`G_0`$ is negative if the center of the source lies inside the caustic and positive if it lies outside.
Note that for the second term in equation (28) to be well defined, the argument of $`A_2^0`$ must be a point inside the caustic. This is the reason for choosing a $`𝐮_q`$ different from $`𝐮_p`$. If the approximation given by equation (2) (and so eq. 6) were exact, then equation (28) would be valid with any choice of $`𝐮_q`$ inside the caustic. Since equation (2) is not exact, we choose $`𝐮_q`$ at the middle of the part of the source inside the caustic in order to minimize the error.
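A sketch of this prescription in code; the helper `signed_dist`, returning a signed perpendicular distance that is negative inside the caustic together with the inward unit normal, is an assumed interface, as are the callables for the image magnifications:

```python
import numpy as np

def finite_source_A(u_p, rho, A3_0, A2_0, G0, signed_dist):
    """Sketch of the semi-analytic magnification of eq. (28).

    A3_0, A2_0, G0 are assumed callables (three non-divergent images,
    two divergent images, profile integral G_0 of eq. 7). signed_dist
    is assumed to return the perpendicular distance to the nearest
    caustic (negative inside, the G_0 sign convention) and the inward
    unit normal. Valid when the source at least partly overlaps or
    lies inside the caustic.
    """
    s, n_in = signed_dist(u_p)
    if s <= -rho:                       # source entirely inside: u_q = u_p
        u_q = u_p
    else:                               # halfway from caustic to inner limb
        u_q = u_p + 0.5 * (rho + s) * n_in
    s_q, _ = signed_dist(u_q)
    return A3_0(u_p) + A2_0(u_q) * np.sqrt(-s_q / rho) * G0(s / rho)
```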
As we discussed in § 2, this approximation should work well whenever the source is small compared to the distance between caustic crossings and to the distance from a caustic crossing to the nearest cusp. It will not work for small (e.g. planetary) caustics or cusp crossings.
The advantage of this approximation is that it allows one to evaluate the magnifications for the several hundred points on the light curve in less than one second, compared to several minutes required for a numerical integration over the source. This advantage will come into play when we discuss our minimization technique below.
Once the magnifications have been calculated we fit for the flux parameters. Recall that for a single observatory there are two parameters, $`F_s`$ and $`F_b`$. In this example, there are four observatories, Canopus 1m, CTIO 0.9m, SAAO 1m, and Yale 1m, which seems to imply 8 flux parameters. However, since all four observatories use very similar $`I`$ band filters and reduce the images relative to a common set of local standards, we take $`F_s`$ to be the same for all four. In addition, we take the $`F_b`$ for Canopus to be the same as SAAO because there is only one data point (and so no room for another parameter) and because it is a high magnification point so differences in $`F_b`$ are unlikely to be important. The linear fit to the remaining 4 parameters requires very little time to compute (see also Rhie et al. 1999).
Since $`d`$ and $`q`$ are held fixed, and $`F_b`$ and $`F_s`$ are determined by linear regression, there remain 5 parameters to fit. These are normally taken to be $`t_0`$, $`u_0`$, $`t_\mathrm{E}`$, $`\alpha `$, and $`\rho _{}`$. However, as discussed in § 2, the time of the caustic crossing, $`t_{\mathrm{cc}}`$, and the caustic-crossing time, $`\mathrm{\Delta }t`$, are much better determined than either $`t_0`$ or $`\rho _{}`$; we therefore use the former in place of the latter. While both $`t_{\mathrm{cc}}`$ and $`\mathrm{\Delta }t`$ are allowed to vary, both tend to move over very small ranges that are consistent with the results from the caustic-crossing fit in § 3.2. Nevertheless, despite the fact that two parameters are held fixed and two others are relatively well constrained, we find that it is not easy to locate local minima. We suspect that the $`\chi ^2`$ function is quite complicated. Moreover, in order to properly explore parameter space, it is necessary to repeat the minimization procedure for several dozen different $`(d,q)`$ pairs, and this will be multiplied several fold in the next section.
We therefore take advantage of the efficient method of magnification calculation summarized in equation (28), which makes possible a rather cumbersome, but fairly robust, method of minimization. For each parameter $`a_i`$ we establish a grid size, $`\delta _i`$, and for every new set of trial parameters evaluate $`\chi ^2`$ at the 51 positions $`(a_i+ϵ_i\delta _i)`$, where $`ϵ_i=-1,0,+1`$ and $`\sum _i|ϵ_i|\le 2`$. At each step, the operator is allowed one of three options: move to the lowest value of the 51 positions, move to (or toward by a specified fraction) the predicted minimum of the best-fit quadratic to the $`\chi ^2`$ surface, or adjust the grid size. In practice, the procedure is semi-automated so as not to bother the operator while the routine is making adequate progress.
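The probe pattern is easy to generate; for 5 free parameters the constraint $`\sum _i|ϵ_i|\le 2`$ indeed yields 51 displacements (1 center, 10 single-parameter moves, 40 two-parameter moves):

```python
import itertools
import numpy as np

def probe_offsets(n_par=5):
    """All displacement patterns epsilon in {-1, 0, +1}^n_par with
    sum(|epsilon_i|) <= 2; for n_par = 5 this gives the 51 probe
    positions described above (1 center + 10 single + 40 double)."""
    eps = [e for e in itertools.product((-1, 0, 1), repeat=n_par)
           if sum(abs(x) for x in e) <= 2]
    return np.array(eps)

assert len(probe_offsets()) == 51
```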
We find that even with this extensive probing of the neighborhood of the trial solution, the path to lower $`\chi ^2`$ is not always apparent. For example, sometimes none of the 50 probes of parameter space has a lower $`\chi ^2`$ than the central position, even if the grid size is decreased by a factor 2 or 4. We then move toward the best estimate of the minimum that is derived from the quadratic fit and find that this also has higher $`\chi ^2`$. However, starting from this new central position, some of the 50 new probes have substantially lower $`\chi ^2`$. Moreover for the next iterations the path downward is clear, and $`\chi ^2`$ may drop by 2–10 over these next few steps. We do not understand the nature of these “hang-ups.” In principle, it is possible they are due to genuine local minima, but we suspect that the $`\chi ^2`$ surface is just extremely complicated and that the paths toward lower values are narrow and not well probed even by our 50 trial points. Although skepticism is warranted, we believe that the true local minimum is eventually reached, for two reasons. First, as we show in the next section, we find many different solutions with almost exactly the same $`\chi ^2`$. It would be a remarkable coincidence if the search process always stalled at the same value of $`\chi ^2`$. Second, if the first attempt does not approach this minimum, we try several other “paths” and we find that there are no significant improvements after the second or third try. Nevertheless, this experience counsels us to be cautious about the interpretation of apparent minima.
### 4.2 Solutions
We search for refined solutions (see § 4.1) near each of the rough solutions found in § 3, considering only those within $`2.5\sigma `$ (i.e., $`\mathrm{\Delta }\chi ^2<6.25`$) of the minimum value found for the entire grid. We find 41 such solutions, including all combinations of $`d=(0.4,0.5,0.6,2.5,3.0,3.5)`$ and $`q=(0.3,0.5,0.75,1.0)`$, plus additional solutions at $`(d,q)=(0.6,0.1)`$, (0.7,0.05), (0.7,0.1), and (0.7,0.3). This appears to be only 28 solutions, but for many $`(d,q)`$ pairs we find two distinct solutions, one at $`\alpha \approx 0`$ (or $`2\pi `$) and the other at $`\alpha \approx \pi `$.
Table 2 shows the 41 solutions. The first seven columns are the parameters $`d,q,\alpha ,u_0,t_\mathrm{E},t_{\mathrm{cc}}`$, and $`\mathrm{\Delta }t`$. The next two are the $`x`$ and $`y`$ components of $`𝐮_{\mathrm{cc}}`$, the point of the caustic crossing. These are shown in order to allow easy transformation into other parameterizations of the geometry. Columns 10 and 11 show $`F_s`$ and $`F_b`$ (from SAAO), and column 12 is their sum, $`F_{\mathrm{base}}=F_s+F_b`$, expressed as a magnitude, $`I_{\mathrm{base}}=20-2.5\mathrm{log}F_{\mathrm{base}}`$. Column 13 is $`t_{*}\equiv \mathrm{\Delta }t\mathrm{sin}\varphi `$. Recall that the proper motion is given by $`\mu =\theta _{*}/t_{*}`$. Finally, column 14 is $`\mathrm{\Delta }\chi ^2`$, defined by,
$$\mathrm{\Delta }\chi ^2\equiv \frac{\chi ^2-\chi _{\mathrm{min}}^2}{\chi _{\mathrm{min}}^2/\mathrm{dof}},$$
(29)
where $`\chi _{\mathrm{min}}^2=467.95`$ is the minimum value of $`\chi ^2`$ found in our search, at $`(d,q)=(0.5,0.3)`$, and $`\mathrm{dof}=212-11=201`$ is the number of data points minus the number of parameters. (Note that the fluxes are calibrated to the Cousins system based on comparison to OGLE stars. See Albrow et al. 1999a.)
The basic result illustrated by Table 2 is that a broad range of parameters is permitted by the data. Two very broad regions in ($`d,q`$) space, one with $`d<1`$ and the other with $`d>1`$, are permitted. Indeed, there is a rough symmetry $`dd^{-1}`$ which was theoretically predicted by Dominik (1999b). There is also a third, smaller region centered at $`(d,q)\approx (0.7,0.1)`$. As expected, $`t_{\mathrm{cc}}`$ and $`\mathrm{\Delta }t`$ lie in an extremely narrow range, since they are primarily determined by the caustic structure and not the global parameters. The full solutions for both $`t_{\mathrm{cc}}`$ and $`\mathrm{\Delta }t`$ deviate by about $`0.0015`$ days ($`\sim 2`$ minutes) from the caustic-crossing solution given in equation (25). This shows that the global parameters do have some influence on the determination of the caustic-crossing parameters, although it is quite small. More striking is the large variation in permitted Einstein crossing timescales $`t_\mathrm{E}`$, from 81 to 227 days. Also of note is the wide variation in allowed values of $`t_{*}`$, from 0.70 to 3.42 hours.
Of course, the fact that there are 41 solutions rather than some other number is a result of our specific choice of grid. Since Table 2 shows very little structure in $`\mathrm{\Delta }\chi ^2`$ over broad ranges of $`(d,q)`$ space, we expect that a finer grid would not yield any additional information.
How different are the light curves associated with these various fits? Figures 6–8 show four representative examples, two from the $`d>1`$ region and two with $`d<1`$. Each figure contains a curve for $`(d,q)=(0.5,0.3)`$, the nominal “best fit,” and also one other, for $`(d,q)=(3.5,0.75)`$, $`(0.7,0.05)`$, and $`(0.6,0.75)`$, respectively. All four solutions have $`\alpha \sim \pi `$ (see Table 2). In Figure 6, the caustic-crossing region is shown separately. This is not done for the other two figures because the caustic-crossing regions look identical for all four solutions. All light curves are normalized to the SAAO data by subtracting $`\mathrm{\Delta }F=F_{b,i}-F_{b,\mathrm{SAAO}}`$, i.e., the difference in the fit values for the backgrounds as measured at the two observatories. In each case, the two light curves are barely distinguishable over the time period covered by the data. This shows that a wide variety of geometries can produce essentially identical light curves if one is restricted to data covering the “second half” of the event. On the other hand, in the regions that are not covered, the light curves can differ dramatically.
An important corollary to this observation is that, by time reversal, it is impossible to accurately predict the time of the second caustic crossing even from extremely good data covering the first. The second caustic crossing can only be predicted by frequent monitoring of the event and looking for the inverse square root behavior as the second caustic approaches. Indeed, this is how the second caustic crossing of MACHO 98-SMC-1 was predicted by PLANET; the predictions of MACHO close to the caustic crossing were made in this way as well.
It is interesting to examine the relation between the two solutions with the same $`(d,q)`$. From Table 2, one finds that these generally have similar time scales $`t_\mathrm{E}`$ and angles $`\alpha `$ that differ by approximately $`180^{\circ }`$. However, the caustic-crossing angles $`\varphi `$ can be quite different. (Since $`\mathrm{sin}\varphi =t_{*}/\mathrm{\Delta }t`$, and $`\mathrm{\Delta }t`$ is essentially identical for all solutions, $`\mathrm{sin}\varphi \propto t_{*}`$.) For $`d<1`$, these differences are severe for small values of $`q`$ and diminish as $`q\rightarrow 1`$. This behavior can be understood by examining Figure 2: since $`\alpha `$ changes by about $`180^{\circ }`$ and $`u_0`$ remains similar for the two solutions, the trajectory followed in the second solution is roughly the reverse of the first. For $`q=1`$, the caustic becomes symmetric, so the angles of the first and second caustic crossings become the same. For $`q`$ different from 1 (e.g., $`q=0.3`$ as in Fig. 2) the caustic is asymmetric, so the two angles are different. This reasoning does not apply to the $`d=3.5`$ solutions because the trajectories are not approximately time reversals of each other.
## 5 Additional Observations to Break Model Degeneracies
Since the PLANET data set covers only a portion of the light curve (albeit very well), one might well ask what additional observations would be required to break the degeneracies presented in Table 2. The light curves of the fits at each $`(d,q)`$ differ substantially in the regions not covered by the data, so it might appear that even data of modest quality in these regions would be adequate to distinguish among the various models. However, it is possible that for a given $`(d,q)`$ there are other models in which the first caustic is at a different time, or the baseline flux has a different value, and while not the absolute “best” fit to the PLANET data, are still compatible with it. If this is the case, then additional data may leave the degeneracies essentially intact.
We therefore explore three examples of additional data that typically might be available: a precise measurement of the baseline, moderately good coverage of the first caustic, and lower quality coverage of the full light curve (including the early part) but that misses the first caustic.
To investigate the role of additional data, we will assume that our “best fit” $`(d,q,\alpha )=(0.5,0.3,177^{\circ })`$ is in fact the true geometry. We emphasize that our data cannot in fact distinguish between the various solutions shown in Table 2. We make this assumption solely for the purpose of exploring the value of additional data.
### 5.1 Baseline
A year (or certainly two) after the caustic crossing, the event will be over and a precise measurement of the baseline can be made. For definiteness, we will assume that this measurement is accurate to 1% and is taken when the event has ended. Inspection of Table 2 shows that $`I_{\mathrm{base}}`$ varies by more than 0.2 mag for the various allowed solutions.
If we add an additional baseline measurement and repeat the entire search procedure, many solutions are eliminated but 27 remain, including examples from all three regions. In particular, all combinations of $`d=(0.4,3.5)`$ and $`q=(0.3,0.5,0.75,1.0)`$ are allowed, as well as $`(d,q)=(0.6,0.1)`$, $`(0.7,0.1)`$, and $`(0.7,0.05)`$, and various combinations of $`d=(0.5,0.6,2.5,3.0)`$ with $`q=(0.3,0.5,0.75,1.0)`$. Among these solutions, $`t_\mathrm{E}`$ varies in the range 100 to 227 days, and $`t_{*}`$ varies in the range 0.90 to 3.42 hours. A broad range of solutions survives partly because many of the original solutions had baselines close to that of the “best fit” and so were not affected by the addition of a baseline “measurement.” However, a number of $`(d,q)`$ pairs whose solutions shown in Table 2 would be ruled out at the $`\sim 7\sigma `$ level by a baseline measurement have alternative solutions that nevertheless manage to meet the baseline constraint. This is not true of all solutions. For example, the solution $`(d,q)=(0.6,0.75)`$, which is shown in Figure 8, did not survive the addition of a baseline measurement.
The broad degeneracy in the space of solutions, even with the addition of a precise baseline measurement, confirms the conclusion drawn at the end of § 4, that it is impossible to predict the time of the second caustic crossing from detailed observations of the “first half” of the light curve.
### 5.2 First Caustic Crossing
If the event were alerted before the first caustic crossing, this caustic might be reasonably well covered as a result of routine monitoring by follow-up teams. In this case they might notice the crossing and begin monitoring more intensively. Nevertheless, it is instructive to ask how well simple follow-up monitoring (i.e., without the extra observations triggered by an anomaly alert) over the first caustic crossing would do at resolving the degeneracies seen in Table 2. To be specific, we assume that a total of 5 measurements are made at equal intervals between $`t_{\mathrm{cc},1}-0.2`$ days and $`t_{\mathrm{cc},1}+0.2`$ days, and that these have precision similar to the SAAO data at similar magnitudes, i.e., errors of 7%, 7%, 1.0%, 1.5%, and 2%. Scaling from the error estimates in equation (25) derived from 74 data points, these data should be sufficient to fix the time of the first caustic crossing to $`\sim 1`$ hour. By contrast, the curves shown in Figures 6–8 differ in their times of first caustic crossing by several days. In addition, these few measurements also strongly constrain the first-caustic crossing duration (analogous to $`\mathrm{\Delta }t`$) and the scale of the first caustic (analogous to $`Q`$). We find that these few data points are sufficient to exclude all solutions found in § 4, except the assumed “true” solution $`(d,q)=(0.5,0.3)`$.
We argued in § 4.1 that the grid sampling was sufficiently fine because $`\chi ^2`$ was approximately flat over large contiguous regions of the grid. In the present case, one point on the grid has significantly lower $`\chi ^2`$ than all others, so this argument fails. However, at least for the region $`d<1`$, the sampling is still adequate to find an approximate local minimum, which could then act as a starting point to find the actual local minimum (as described in the first paragraph of § 4.1). On the other hand, because of the generic nature of the $`dd^{-1}`$ degeneracy (Dominik 1999b), one should be cautious about claiming that there are no $`d>1`$ solutions simply because there are none on the grid. To truly rule this out, it would be necessary to search on a much finer grid, where the grid spacing was set by the range of $`(d,q)`$ values around the minimum at $`(d,q)\approx (0.5,0.3)`$ for which $`\mathrm{\Delta }\chi ^2\lesssim 1`$. Since we have not conducted such a search, we cannot absolutely rule out the possibility that a $`d>1`$ solution survives the addition of data from the first caustic.
### 5.3 Constant Coverage
Next we assume that the event was covered by routine survey monitoring, once every other day (to allow for weather) for 1000 days before the second crossing and continuing until the end of the PLANET observations on day 1026. In order to complement the investigation in § 5.2, we assume that no observations were taken within two days of either caustic crossing. However, to take account of the fact that survey data are usually taken in non-standard bands, we add two extra parameters to the fit, $`F_s`$ and $`F_b`$ for the survey observations. We assume 20% errors at baseline and that the errors scale inversely as the square root of the flux.
Formally we find that only two solutions survive in addition to the “true solution” at $`(d,q)=(0.5,0.3)`$, both in its immediate neighborhood at (0.5,0.5) and (0.6,0.3). However we also find a cluster of spurious solutions centered at $`(d,q)=(3.5,1)`$ which has $`\mathrm{\Delta }\chi ^2=8.5`$. While it might be possible to formally rule out such a solution in this particular case, this low value of $`\mathrm{\Delta }\chi ^2`$ suggests that additional data of this type may often leave some degeneracies intact.
### 5.4 Summary
In brief, excellent coverage of a single fold caustic is not sufficient to uniquely determine the parameters of the binary lens, even with the addition of a good late-time baseline measurement. On the other hand, a few measurements over the other caustic can break the degeneracy completely. This degeneracy implies that observations of the first caustic crossing alone cannot be used to reliably predict the time of the second crossing. The addition of survey-type data (infrequent sampling with large errors but covering the whole light curve – even if the caustics are missed) can certainly lift some of the degeneracies, but may leave the $`dd^{-1}`$ degeneracy intact.
This work was supported by grants AST 97-27520 and AST 95-30619 from the NSF, by grant NAG5-7589 from NASA, by a grant from the Dutch ASTRON foundation through ASTRON 781.76.018, and by a Marie Curie Fellowship from the European Union.
# Core-Polarization Contribution to the Nuclear Anapole Moment
## I Introduction
The recent observation of the anapole moment in <sup>133</sup>Cs has spurred considerable interest in this subject. As remarked, this is the first observation of a static moment that is due to the violation of reflection symmetry. The existence of the anapole moment was suggested by Vaks and Zeldovich soon after the discovery of parity violation in $`\beta `$-decay.
The anapole moment exists in the situation when parity is violated but time reversal is preserved. Pioneering calculations of the anapole moment were done in ref. . It was suggested that the anapole moment could provide information about the nature of the parity-violating N-N force, in particular about the $`\pi `$ and $`\rho `$ exchange contributions to the weak N-N interaction. One of the immediate applications of the recent <sup>133</sup>Cs measurement was the attempt to deduce the pion-nucleon weak coupling constant $`f_\pi `$ , by comparing the value of the measured anapole moment with the one calculated using a pure single-particle model. This comparison leads to a value for $`f_\pi `$ that exceeds by a factor of 4 the value deduced from a hadronic parity-violation measurement in <sup>18</sup>F . At present this controversy still stands. In this paper we wish to examine the role of core polarization and calculate its contribution to the value of the anapole moment. Some work in this direction has been done in ref. . Our calculations are analogous to the calculation of effective charges in nuclei, a concept well rooted in the field of nuclear structure.
The anapole operator is given by
$$\widehat{a}=-\pi \int d\stackrel{}{r}\,r^2\stackrel{}{j}(\stackrel{}{r})$$
(1)
where $`\stackrel{}{j}(\stackrel{}{r})`$ is the nuclear electromagnetic current density. It has been found that the dominant part of the anapole operator stems from the spin part of $`\stackrel{}{j}(\stackrel{}{r})`$ and is given by:
$$\widehat{a}_s=\frac{\pi e\mu }{m}(\stackrel{}{r}\times \stackrel{}{\sigma })$$
(2)
where $`\mu `$ is the nucleon magnetic moment, $`m`$ the nucleon mass, $`\stackrel{}{\sigma }`$ is the nucleon spin operator and $`\stackrel{}{r}`$ is its coordinate.
For the nucleus we write this part of the anapole operator as
$$\widehat{a}_s=\frac{\pi e}{m}\sum _{i=1}^{A}[\mu +(\mu _p-\mu _n)t_Z(i)](\stackrel{}{r}_i\times \stackrel{}{\sigma }_i)$$
(3)
where $`\mu =\frac{\mu _p+\mu _n}{2}`$; $`\mu _p`$ and $`\mu _n`$ are the proton and neutron magnetic moments in units of nuclear magnetons, and $`t_Z`$ is the Z-component of the isospin operator ($`t_Z`$ is $`+\frac{1}{2}`$ for a proton and $`-\frac{1}{2}`$ for a neutron). The sum over $`i`$ runs over all nucleons in the nucleus. The operator, written in vector spherical harmonics and in terms of a tensor-coupled product, is:
$$\stackrel{}{r}_i\times \stackrel{}{\sigma }_i=-i\sqrt{2}\sqrt{\frac{4\pi }{3}}r_i[Y_{L=1}(\widehat{r}_i)\otimes \stackrel{}{\sigma }]^{\mathrm{\Delta }J=1}$$
(4)
which is the $`\mathrm{\Delta }J^\pi =1^{-}`$, L=1, S=1 spin-dipole operator . The anapole operator therefore involves the isoscalar and isovector $`J^\pi =1^{-}`$ spin-dipole operators.
The distribution of isovector spin-dipole strength was studied extensively, both experimentally and theoretically, in the eighties, so a considerable amount of information about the isovector spin-dipole strength distribution is available. For the isoscalar spin-dipole there is little information.
The anapole moment is defined as the expectation value of $`\widehat{a}`$.
$$a=<\psi |\widehat{a}|\psi >_{(J_z=J)}$$
(5)
where $`\psi `$ is the ground state wave function of the nucleus. It is clear that since the $`\mathrm{𝑜𝑝𝑒𝑟𝑎𝑡𝑜𝑟}`$ $`\widehat{a}`$ is odd under the parity and time reversal operations (P-odd, T-odd), for a nuclear state of non-zero spin $`a`$ will be non-zero only if parity mixing occurs in the wave function $`\psi `$. \[Note that one does not need time-reversal violation in this case because the operator $`\widehat{a}`$ contains, unlike for example the electric dipole, the spin operator $`\stackrel{}{\sigma }`$. The anapole $`\mathrm{𝑚𝑜𝑚𝑒𝑛𝑡}`$ is therefore P-odd and T-even.\]
## II General Formalism
### A The Single-Particle Contribution
Let us consider a nucleus with a particle occupying an orbit $`j_+`$ with positive parity. (The considerations that follow can be made by starting with a negative-parity state $`j_{-}`$ and simply interchanging the $`+`$ and $`-`$ indices.) The ground state in first approximation can be written as:
$$|\varphi _+>=|0^+j_+>$$
(6)
($`0^+`$ denotes the ground state spin of the core). Consider now an orbit with the same spin $`j`$ but opposite (negative) parity, lying above the $`j_+`$ orbit. We denote this orbit as $`j_{-}`$.
$$|\varphi _{-}>=|0^+j_{-}>$$
(7)
In general the negative-parity $`j_{-}`$ orbit will be energetically about $`1\hbar \omega `$ above $`j_+`$. A parity violating force will mix the two and we will have:
$$|\varphi _+^{\prime }>=|0^+j_+>_j+\eta _0|0^+j_{-}>_j$$
(8)
with
$$\eta _0=\frac{<\varphi _+|W|\varphi _{-}>}{ϵ_{-}-ϵ_+}$$
(9)
where $`W`$ is the parity-violating interaction and $`ϵ_+`$, $`ϵ_{-}`$ are the single-particle energies. We should remark here that $`W`$ is the effective parity-violating interaction and may also include some many-body contributions, such as the excitation of the $`0^{-}`$ spin-dipole . The anapole moment from this admixture is:
$$a_{sp}^{(part)}=<\varphi _+^{\prime }|\widehat{a}|\varphi _+^{\prime }>=2\eta _0<0^+j_+|\widehat{a}|0^+j_{-}>$$
(10)
In addition to the single-particle contribution involving the $`j_{-}`$ orbit that lies above the given $`j_+`$ orbit, there is the equally important contribution of the orbit with spin $`j`$ equal to $`j_+`$ but of negative parity, lying $`1\hbar \omega `$ below the orbit $`j_+`$. We denote this orbit as $`j_{-}^{-1}`$, indicating that it is a hole state. The ground state configuration $`|0^+j_+>`$ will mix with the 2p-1h configuration $`|0^+j_+^2j_{-}^{-1}>`$.
$$|\stackrel{~}{\varphi }_+^{\prime }>=|0^+j_+>_j+\stackrel{~}{\eta }_0|0^+j_+^2j_{-}^{-1}>$$
(11)
The contribution to the anapole of this mixing, we denote as $`a_{sp}^{(hole)}`$ and it is:
$$a_{sp}^{(hole)}=<\stackrel{~}{\varphi }_+^{\prime }|\widehat{a}|\stackrel{~}{\varphi }_+^{\prime }>=2\stackrel{~}{\eta }_0<0^+j_+|\widehat{a}|0^+j_+^2j_{-}^{-1}>$$
(12)
As mentioned, the contribution of this term is of the same magnitude as $`a_{sp}^{(part)}`$. The sum of the two:
$$a_{sp}=a_{sp}^{(part)}+a_{sp}^{(hole)}$$
(13)
will be considered here to represent the single-particle contribution. Let us now advance a bit and include in the wave function configurations involving excitation of the core.
### B The Core-Polarization Model
Of the many possible types of core excitations, let us single out the components $`|1^{-}\otimes j_+^{\prime }>`$ and $`|1^{-}\otimes j_{-}^{\prime }>`$, involving single-particle states of positive and negative parity coupled to the spin-dipole resonances (isoscalar and isovector) to give total spin $`j`$. The symbol $`\otimes `$ denotes angular momentum coupling. We will from now on omit this symbol in order to simplify the notation.
$$|\psi _+>=\alpha |0^+j_+>_j+\sum _{j_{-}^{\prime }}\beta _{j_{-}^{\prime }}|1^{-}j_{-}^{\prime }>_j$$
(14)
and
$$|\psi _{-}>=\overline{\alpha }|0^+j_{-}>_j+\sum _{j_+^{\prime }}\overline{\beta }_{j_+^{\prime }}|1^{-}j_+^{\prime }>_j$$
(15)
In the following we will drop the index $`j`$ under the kets. For the sake of simplicity of our presentation, let us limit ourselves to one orbit $`j^{\prime }`$, taking $`j_{-}^{\prime }=j_{-}`$ and $`j_+^{\prime }`$ to be the next higher positive-parity orbit after $`j_+`$. An extension to many orbits $`j^{\prime }`$ is immediate but complicates matters and notations. We consider therefore:
$$|\psi _+>=\alpha |0^+j_+>+\beta |1^{-}j_{-}>$$
(16)
$$|\psi _{-}>=\overline{\alpha }|0^+j_{-}>+\overline{\beta }|1^{-}j_+^{\prime }>$$
(17)
$$\beta =\frac{<0^+j_+|V_N|1^{-}j_{-}>}{\mathrm{\Delta }E_\beta }$$
(18)
$$\overline{\beta }=\frac{<0^+j_{-}|V_N|1^{-}j_+^{\prime }>}{\mathrm{\Delta }E_{\overline{\beta }}}$$
(19)
where $`V_N`$ is the nuclear interaction.
The $`W`$ interaction will mix these two states and the parity mixed ground state will be:
$$|\stackrel{~}{\psi }_+>=|\psi _+>+\eta |\psi _{-}>$$
(20)
with
$$\eta =\frac{<\psi _+|W|\psi _{-}>}{E_{-}-E_+}$$
(21)
Since we expect $`\alpha \gg \beta `$, we can take $`\eta \approx \eta _0`$. We evaluate $`<\stackrel{~}{\psi }_+|\widehat{a}|\stackrel{~}{\psi }_+>`$ (dropping the terms quadratic in $`\eta _0`$) and take $`\alpha \approx \overline{\alpha }\approx 1`$. We find
$$a=2\eta _0[<0^+j_+|\widehat{a}|0^+j_{-}>+\beta <0^+j_{-}|\widehat{a}|1^{-}j_{-}>+\overline{\beta }<1^{-}j_+^{\prime }|\widehat{a}|0^+j_+>]$$
(22)
At this point we should note that because $`\widehat{a}`$ is a one-body operator, the term involving $`\overline{\beta }`$ will be zero unless $`j_+^{\prime }=j_+`$. In this case $`|\psi _{-}>=\overline{\alpha }|0^+j_{-}>+\overline{\beta }|1^{-}j_+>`$. Then the two configurations $`|0^+j_{-}>`$ and $`|1^{-}j_+>`$ might be close in energy (if the spin-dipole is close to its unperturbed position). The contribution of this state will be large because of a large $`\overline{\beta }`$; however, its contribution will be cancelled by the nearby orthogonal partner state $`|\psi _{-}^{\prime }>=\overline{\beta }|0^+j_{-}>-\overline{\alpha }|1^{-}j_+>`$.
Therefore:
$$a=2\eta _0<0^+j_+|\widehat{a}|0^+j_{-}>\times \left[1+\frac{\beta <0^+j_{-}|\widehat{a}|1^{-}j_{-}>}{<0^+j_+|\widehat{a}|0^+j_{-}>}\right]\equiv a_0[1+\chi ]$$
(23)
where $`\chi `$ is the core contribution to the anapole moment.
$$\chi =\frac{\beta <0^+j_{-}|\stackrel{}{r}_i\times \stackrel{}{\sigma }_i|1^{-}j_{-}>}{<0^+j_+|\stackrel{}{r}_i\times \stackrel{}{\sigma }_i|0^+j_{-}>}$$
(24)
### C A Simple Estimate
We now proceed with some simple estimates, treating only the isovector spin-dipole. First, $`ϵ_{j_{-}}-ϵ_{j_+}\approx \hbar \omega `$ ($`\hbar \omega =41A^{-1/3}`$ MeV) and $`E_{1^{-}}=1\hbar \omega +\mathrm{\Delta }V`$, where $`\mathrm{\Delta }V`$ is a collective shift due to the interaction energy of the 1p-1h states forming the spin-dipole. The denominator in the expression for $`\beta `$ is therefore approximately $`\mathrm{\Delta }E_\beta \approx 2\hbar \omega +\mathrm{\Delta }V`$. One can rewrite the expression for $`\chi `$ as:
$`\chi ={\displaystyle \frac{<0^+j_+|V_N|1^{-}j_{-}><1^{-}j_{-}|\stackrel{}{r}_i\times \stackrel{}{\sigma }_i|0^+j_{-}>}{<0^+j_+|\stackrel{}{r}\times \stackrel{}{\sigma }|0^+j_{-}>(2\hbar \omega +\mathrm{\Delta }V)}}`$ (25)
Proceeding with approximations we may now limit the $`\stackrel{}{r}_i\times \stackrel{}{\sigma }_i`$ in the matrix elements of the numerator to the core nucleons and write:
$$\chi =\frac{<1^{-}|\stackrel{}{r}_i\times \stackrel{}{\sigma }_i|0^+>}{<j_+|\stackrel{}{r}\times \stackrel{}{\sigma }|j_{-}>}\frac{<0^+j_+|V_N|1^{-}j_{-}>}{2\hbar \omega +\mathrm{\Delta }V}$$
(26)
We first note that the ratio
$$\frac{<1^{-}|\stackrel{}{r}_i\times \stackrel{}{\sigma }_i|0^+>}{<j_+|\stackrel{}{r}\times \stackrel{}{\sigma }|j_{-}>}\sim \sqrt{N}$$
(27)
could be a quite large number depending on the collectivity of the spin-dipole giant resonance. The symbol $`N`$ stands here for the effective number of particles that contribute to the collective spin-dipole. This number could be a source of enhancement of the core contribution to $`a`$. Let us now estimate the other quantities appearing in eq. (22).
The values of the matrix elements $`<0^+j_+|V_N|1^{-}j_{-}>`$ or $`<0^+j_{-}|V_N|1^{-}j_+^{\prime }>`$ can be estimated from the value of the collective shift $`\mathrm{\Delta }V=<1^{-}|V_N|1^{-}>`$. (See, for example, the discussion of particle + core coupling models in ref. .) On average the values of the above particle + core coupling matrix elements should be equal to $`\frac{\mathrm{\Delta }V}{\sqrt{N}}`$, where $`N`$ is again the number of particles active in the formation of the spin-dipole. We may write the estimate
$$\chi =\frac{\mathrm{\Delta }V}{2\hbar \omega +\mathrm{\Delta }V}$$
(28)
denoting $`\lambda =\hbar \omega /\mathrm{\Delta }V`$
$$\chi =\frac{1}{2\lambda +1}$$
(29)
For a large collective shift $`(\mathrm{\Delta }V\approx \hbar \omega )`$, $`\chi `$ is large. In ref. the $`1^{-}`$ isovector spin-dipole is found to be at an energy $`\sim 2\hbar \omega `$, meaning that $`\mathrm{\Delta }V\approx 1\hbar \omega `$. In this case $`\chi \approx \frac{1}{3}`$. We should stress that we are dealing with small admixtures of the core states. The admixtures of $`|1^{-}j>`$ implied here are less than $`1\%`$. It is the factor $`\sqrt{N}`$ in the spin-dipole strength that makes the $`\chi `$ correction sizable. If the collectivity of the spin-dipole is not high, one will find $`\chi `$ to be small. Our estimates are crude, and one must have a more precise evaluation of the contribution of the core. In the next section, we describe such calculations.
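A trivial numerical illustration of equation (29):

```python
# numerical illustration of eq. (29): chi = 1/(2*lam + 1),
# with lam = hbar*omega / Delta_V
for lam in (2.0, 1.0, 0.5):
    print(lam, round(1.0 / (2.0 * lam + 1.0), 3))
# lam = 1 (Delta_V ~ 1 hbar*omega, spin-dipole near 2 hbar*omega)
# reproduces chi ~ 1/3
```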
## III Shell-Model Calculations
### A Matrix Elements and Operators
In this section we describe details of the single-particle and configuration mixed anapole moment calculations. The spin-current contribution to the anapole moment is given by
$$a_s=<\psi |\widehat{a}_s|\psi >_{(J_z=J)}=\left(\begin{array}{ccc}J& 1& J\\ -J& 0& J\end{array}\right)<\psi ||\widehat{a}_s||\psi >,$$
(30)
where $`J`$ is the nuclear spin, ( ) is the three-j symbol, and we use the reduced matrix element convention of Edmonds . In units where $`\hbar =c=1`$, the operator $`\widehat{a}_s`$ is given by
$$\widehat{a}_s=\frac{\pi e}{m}\sum _{i=1}^{A}\mu _i(\stackrel{}{r}_i\times \stackrel{}{\sigma }_i)=\frac{-i\sqrt{2}\pi e}{m}\sqrt{\frac{4\pi }{3}}\widehat{a}_s^{\prime },$$
(31)
where
$$\widehat{a}_s^{\prime }=\sum _{i=1}^{A}\mu _ir_i[Y_{L=1}(\widehat{r}_i)\otimes \stackrel{}{\sigma }_i]^{\mathrm{\Delta }J=1},$$
(32)
and where $`\mu _i`$ are the nucleon magnetic moments in units of nuclear magnetons; $`\mu _p=2.79`$ and $`\mu _n=-1.91`$.
It is conventional to relate the anapole moment to a dimensionless constant $`\kappa _s`$ defined by:
$$a_s=\frac{1}{e}\frac{G}{\sqrt{2}}\frac{K\kappa _s}{J(J+1)}<\psi |\stackrel{}{J}|\psi >_{(J_z=J)}=\frac{1}{e}\frac{G}{\sqrt{2}}\frac{K\kappa _s}{(J+1)},$$
(33)
where
$$K=(J+\frac{1}{2})(-1)^{\ell +\frac{1}{2}-j}.$$
(34)
The $`\ell `$ and $`j`$ in the phase factor are chosen to be those of the dominant single-particle orbital associated with $`\psi `$.
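As a quick numerical cross-check of these angular-momentum factors, the sketch below evaluates the three-j symbol of Eq. (30) and the $`K`$ factor of Eq. (34) for a few representative orbitals. The orbital list is our own illustrative choice, and the closed form used for comparison is the standard one for this particular three-j symbol; the check is done with sympy rather than with any code from the original work.

```python
# Sketch: verify the 3j symbol in Eq. (30) and the K factor of Eq. (34)
# for a few (l, j) single-particle orbitals (illustrative choices).
from sympy import Rational, simplify, sqrt
from sympy.physics.wigner import wigner_3j

def k_factor(l, j):
    """K = (J + 1/2) * (-1)^(l + 1/2 - j), Eq. (34), with J = j."""
    return (j + Rational(1, 2)) * (-1)**int(l + Rational(1, 2) - j)

for l, j in [(0, Rational(1, 2)),   # 3s1/2  (207Tl-like)
             (1, Rational(1, 2)),   # 3p1/2  (207Pb-like)
             (4, Rational(7, 2))]:  # 1g7/2  (133Cs-like)
    J = j
    threej = wigner_3j(J, 1, J, -J, 0, J)
    # standard closed form for (J 1 J; -J 0 J)
    closed = sqrt(J / ((J + 1) * (2 * J + 1)))
    assert simplify(threej - closed) == 0
    print(f"l={l}, j={j}:  3j = {threej},  K = {k_factor(l, j)}")
```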
The perturbation expansion of the reduced matrix element gives:
$$<\psi \|\widehat{a}_s\|\psi >=2\sum _f\frac{<\psi \|\widehat{a}_s\|\varphi _f><\varphi _f|\widehat{W}|\psi >}{\mathrm{\Delta }E},$$
(35)
where $`\widehat{W}`$ is the weak interaction. For the weak interaction we use the approximation of Eq. (7) of :
$$\widehat{W}=\frac{i}{m}\frac{G}{\sqrt{2}}\sum _{i=1}^{A}g_i\frac{\stackrel{}{\sigma }_i}{2}\cdot [\stackrel{}{\nabla }_i\rho +\rho \stackrel{}{\nabla }_i]=\frac{i}{m}\frac{G}{\sqrt{2}}\widehat{W}^{},$$
(36)
with
$$\widehat{W}^{}=\sum _{i=1}^{A}g_i\frac{\stackrel{}{\sigma }_i}{2}\cdot [\stackrel{}{\nabla }_i\rho +\rho \stackrel{}{\nabla }_i],$$
(37)
where $`g_p`$ and $`g_n`$ are dimensionless constants representing the strength of the weak interaction of the valence protons and neutrons, respectively, with the nuclear matter of density $`\rho `$. The nuclear-matter density is normalized by:
$$\int \rho (r)𝑑\stackrel{}{r}=A$$
(38)
Finally, we express the dimensionless constant $`\kappa _s`$ in terms of the matrix elements of $`\widehat{a}_s^{}`$ and $`\widehat{W}^{}`$:
$$\kappa _s=\frac{\pi e^2}{m^2}\frac{\sqrt{2}(J+1)}{K}\left(\begin{array}{ccc}J& 1& J\\ -J& 0& J\end{array}\right)\sqrt{\frac{4\pi }{3}}\mathrm{\hspace{0.17em}}2\sum _f\frac{<\psi \|\widehat{a}_s^{}\|\varphi _f><\varphi _f|\widehat{W}^{}|\psi >}{\mathrm{\Delta }E}$$
(39)
Introducing the units of $`\hbar `$ and $`c`$, the constant in front becomes $`\frac{\pi e^2\hbar ^2c^2}{m^2c^4}=0.199`$ MeV fm<sup>3</sup>. The dimension of the $`\widehat{a}_s^{}`$ matrix element is fm and the dimension of the $`\widehat{W}^{}`$ matrix element is MeV<sup>-1</sup> fm<sup>-4</sup>.
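The quoted prefactor is easy to reproduce; a minimal arithmetic sketch, assuming that $`m`$ is the nucleon mass (consistent with the nuclear magnetons appearing in Eq. (31)) and taking standard values of the constants:

```python
# Quick check of the prefactor pi*e^2*hbar^2*c^2 / (m^2 c^4), quoted as
# 0.199 MeV fm^3.  Here e^2 = alpha*hbar*c, and m is taken to be the
# nucleon mass (an assumption consistent with Eq. (31)).
import math

hbarc = 197.327          # MeV fm
alpha = 1.0 / 137.036    # fine-structure constant
e2 = alpha * hbarc       # e^2 in MeV fm  (~1.44 MeV fm)
mc2 = 938.92             # average nucleon rest energy, MeV

const = math.pi * e2 * hbarc**2 / mc2**2
print(f"pi e^2 (hbar c)^2 / (m c^2)^2 = {const:.3f} MeV fm^3")  # ~0.199
```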
In terms of the weak interaction, $`\kappa _s`$ is a linear combination of the single-particle coupling constants $`g_p`$ and $`g_n`$ which we will write as
$$\kappa _s=g_p\kappa _{sp}+g_n\kappa _{sn}.$$
(40)
The total value of $`\kappa _s`$ will be obtained using the values $`g_p=4.5`$ and $`g_n=0`$ based upon the DDH “best value” estimates . The ultimate goal of comparing these types of calculations to experimental values for $`\kappa `$ will be to extract empirical values for $`g_p`$ and $`g_n`$ and use these to understand the nucleon-nucleon PNC weak interaction. The total value of $`\kappa `$ also involves the smaller convection-current contribution (see Eq. 67 of ). In this paper we focus only on the nuclear structure properties of the most important spin-current term $`\kappa _s`$.
### B Single-Particle Terms
First we consider the single-particle contributions to the intermediate states in Eq. 39. The cases we consider are those given in Table 3 of . Specifically we start with closed shells for <sup>132</sup>Sn and <sup>208</sup>Pb. For <sup>133</sup>Cs we take a valence 1g<sub>7/2</sub> proton particle relative to <sup>132</sup>Sn, for <sup>203,205</sup>Tl we take a 3s<sub>1/2</sub> proton hole relative to <sup>208</sup>Pb, for <sup>207</sup>Pb we take a 3p<sub>1/2</sub> neutron hole relative to <sup>208</sup>Pb and for <sup>209</sup>Bi we take a 1h<sub>9/2</sub> proton particle relative to <sup>208</sup>Pb. We will discuss here the results based upon densities and radial wave functions in the matrix elements of $`\widehat{a}_s^{}`$ and $`\widehat{W}^{}`$ which were obtained from Hartree-Fock (HF) calculations based on the SKX Skyrme interaction of . The HF results will be compared to those from the Woods-Saxon (WS) potential of for <sup>208</sup>Pb, and the interpolated parameter set of for <sup>132</sup>Sn.
In Eq. 39 we sum over all single-particle states $`\varphi `$. The $`\mathrm{\Delta }E`$ is the single-particle energy difference. This sum includes two types of terms: (1) the “hole” term in which the nucleons in occupied states are excited up to orbit $`\psi `$, e.g. 1p<sub>1/2</sub> and 2p<sub>1/2</sub> to 3s<sub>1/2</sub> for <sup>207</sup>Tl, and (2) the “particle” term in which the nucleon in the orbit $`\psi `$ is excited into unoccupied states $`\varphi `$, e.g. 3s<sub>1/2</sub> to $`n`$p<sub>1/2</sub> with $`n\geq 3`$ for <sup>207</sup>Tl. In the oscillator limit the $`\stackrel{}{r}`$ matrix element is nonzero only for the intermediate states which are 1$`\hbar \omega `$ away, e.g. 2p<sub>1/2</sub> and 3p<sub>1/2</sub> for <sup>207</sup>Tl, and we find with the more realistic HF and Woods-Saxon radial wave functions that these “1$`\hbar \omega `$” terms are the only important ones. In cases where the unoccupied states are loosely bound or unbound, their radial wave functions are calculated by adding an external square-well potential with a radius of 14 fm and a depth of 20 MeV to the HF potential. This has a negligible effect on the HF solution, but gives the unbound states a realistic excitation energy as well as an exponential fall-off at large distance which is similar to that of the bound state $`\psi `$. The results are not sensitive to the exact values of the depth and radius of the external potential as long as it is sufficiently deep to bind the orbits and sufficiently large not to affect the HF bound-state solution. The many-body aspect of the problem introduces an extra phase factor of $`-1`$ for the “hole” term in Eq. 39.
The HF and WS results are given in Table I. The WS results for the particle plus hole contributions are very close to the WS results given by Dmitriev et al. . Furthermore, the HF results are very similar to WS. Most of the difference between HF and WS is due to the difference in the single-particle energy denominator of Eq. 39.
### C Core-Polarization Correction
Next we consider the admixture of particle-hole states in <sup>208</sup>Pb. The calculation is based on the model space shown in Fig. 1 of which for the <sup>208</sup>Pb closed shell has 1g<sub>7/2</sub>, 2d<sub>5/2</sub>, 2d<sub>3/2</sub>, 3s<sub>1/2</sub> and 1h<sub>11/2</sub> filled orbits for protons; 1h<sub>9/2</sub>, 2f<sub>7/2</sub>, 2f<sub>5/2</sub>, 3p<sub>3/2</sub>, 3p<sub>1/2</sub> and 1i<sub>13/2</sub> empty orbits for protons; 1h<sub>9/2</sub>, 2f<sub>7/2</sub>, 2f<sub>5/2</sub>, 3p<sub>3/2</sub>, 3p<sub>1/2</sub> and 1i<sub>13/2</sub> filled orbits for neutrons; and 1i<sub>11/2</sub>, 2g<sub>9/2</sub>, 2g<sub>7/2</sub>, 3d<sub>3/2</sub>, 4s<sub>1/2</sub>, 3d<sub>5/2</sub> and 1j<sub>15/2</sub> empty orbits for neutrons. We note that this model space includes the necessary orbits for the “particle” admixtures in <sup>207</sup>Tl and <sup>207</sup>Pb and thus we will focus our calculations of the spin-dipole correlation effects on these two nuclei.
The Hosaka G matrix was used for the residual strong interaction, and the single-particle energies were fixed to reproduce the experimental single-particle energies as given in Fig. 1 of . The use of the Hosaka G matrix for the <sup>208</sup>Pb and its comparison with other G matrix interactions is discussed in .
For the 1<sup>-</sup> states of interest here there are 27 particle-hole configurations. One of these is spurious and it was removed from the spectrum by using the method of Glockner and Lawson of applying a center-of-mass hamiltonian to raise the energy of the spurious state and remove its effect from the low-lying states of interest. (Even though this is a large model space, there are six dipole excitations missing from this space, e.g. 1h<sub>11/2</sub> to 1i<sub>11/2</sub> for protons.) We show in Fig. 1 the single-particle dipole and spin-dipole response for transitions to 1<sup>-</sup> states in <sup>208</sup>Pb. Specifically, the spin-dipole strength $`B(a)`$ is given by the reduced matrix element of $`\widehat{a}_s^{}`$. The unperturbed results shown in Fig. 1 were obtained using the single-particle energies of and with the center-of-mass hamiltonian, but with no residual interaction. When the Hosaka G matrix is used, the dipole strength moves from its unperturbed position of 7-8 MeV to a collective state at 11.7 MeV. The experimental giant dipole in <sup>208</sup>Pb lies at about 13.5 MeV with an energy-weighted sum-rule strength of about 100$`\%`$ of the classical sum-rule value of $`14.8NZ/A=735`$ e<sup>2</sup> fm<sup>2</sup> MeV. The total experimental B(E1) strength is thus about 54 e<sup>2</sup> fm<sup>4</sup> and the total one-particle one-hole (1p-1h) strength in our calculation is 88 e<sup>2</sup> fm<sup>4</sup>. The results for the dipole and spin-dipole strength functions are shown in Fig. 2. We note that the spin-dipole strength is collective and is pushed up in energy compared to the single-particle limit of Fig. 1, but it is not as collective as the isovector dipole. The levels for the low-lying mixed 1p-1h states in <sup>208</sup>Pb obtained with the G matrix interaction are in excellent agreement with experiment, typically within 130 keV .
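The sum-rule arithmetic quoted above is easy to verify; a minimal sketch whose inputs are just the numbers stated in the text:

```python
# Arithmetic behind the dipole numbers quoted for 208Pb: the classical
# (TRK) energy-weighted sum rule 14.8*N*Z/A e^2 fm^2 MeV, and the total
# B(E1) it implies if the strength sits near the giant-dipole energy.
N, Z = 126, 82
A = N + Z
ewsr = 14.8 * N * Z / A     # ~735 e^2 fm^2 MeV
E_gdr = 13.5                # MeV, experimental GDR energy quoted above
print(f"EWSR  = {ewsr:.0f} e^2 fm^2 MeV")
print(f"B(E1) = {ewsr / E_gdr:.0f} e^2 fm^4 (~54, vs 88 in the 1p-1h space)")
```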
Next we recalculate the “particle” anapole matrix elements for <sup>207</sup>Tl and <sup>207</sup>Pb by using ground state wave functions which include mixing from the 1p-1h states in <sup>208</sup>Pb. For example, for <sup>207</sup>Tl this admixture consists of the 3p<sub>1/2</sub> particle coupled to all of the 1p-1h states. Since this is a “2$`\hbar \omega `$” admixture the spurious center-of-mass motion can only be removed approximately. Its strength was chosen so that the mostly nonspurious admixed states were stabilized around 8-20 MeV excitation energy above the single-particle ground state, whereas the mostly spurious state was pushed to about 100 MeV excitation energy. If the spurious state is not removed it comes at a very low excitation energy and mixes strongly with the single-particle ground state. On the other hand, if the center-of-mass hamiltonian is too strong, all states are moved up too high (because of their small but non-zero spurious component) and there is no mixing of the non-spurious components of interest with the single-particle state.
The results are given in Table II in the column labeled “HF part$`+`$CP”. It turns out that the effect of the core-polarization is rather small, resulting in about a 10$`\%`$ reduction for <sup>207</sup>Tl and a 5$`\%`$ enhancement for <sup>207</sup>Pb. These core-polarization corrections arise from both the $`\widehat{a}_s^{}`$ and $`\widehat{W}^{}`$ matrix elements in Eq. 39, but the dominant effect is on the $`\widehat{a}_s^{}`$ term. The core-polarization admixtures in the ground states are only about 1$`\%`$, yet the effect is rather significant. These calculations indicate that the core-polarization is not too large, but not negligible. The calculations for the core-polarization contributions might be expanded in future work by using a perturbation approach.
### D Configuration Mixing and Comparison to Experiment
The final step for the anapole moment calculations will be to go from the “single-particle” nuclei around <sup>132</sup>Sn and <sup>208</sup>Pb to the multi-valence-particle configurations involved in those nuclei where measurements have been carried out, in particular <sup>133</sup>Cs and <sup>205</sup>Tl. The main complication here is that the anapole moment will consist of a linear combination of diagonal (e.g. $`<\psi \|\widehat{a}\|\psi >`$) and off-diagonal (e.g. $`<\psi \|\widehat{a}\|\psi ^{}>`$) reduced matrix elements within the valence space.
Good shell-model hamiltonians exist for <sup>205</sup>Tl . It is known in this case that the diagonal matrix element for the 3s<sub>1/2</sub> orbit gets reduced by a factor of about 0.80 compared to its value in <sup>207</sup>Tl due to configuration mixing. We have used the HF wave functions to calculate all of the single-particle anapole matrix elements involved in the <sup>205</sup>Tl ground state. When these are combined with the $`\mathrm{\Delta }J=1`$ one-body transition densities for the <sup>205</sup>Tl ground state, the anapole moment comes out to be 0.40, close to the 3s<sub>1/2</sub> single-particle value. As mentioned, the diagonal matrix element is reduced, but the smaller off-diagonal terms give some enhancement which cancels out the reduction. The core-polarization correction considered above would reduce this to about 0.35. To obtain the final value of the anapole moment we add the additional (smaller) term from the convection-current contribution as given in Table 3 of Dmitriev et al. , which is about -0.09. The total calculated anapole moment is thus about 0.24, which should be compared with the experimental value of $`-0.22\pm 0.30`$ . The agreement is fair given the large experimental error.
The single-particle value for the spin contribution to the anapole moment for <sup>133</sup>Sb from Table I is 0.29 which, when added to the convection-current contribution from Table 3 of Dmitriev (-0.05), gives a total anapole moment of 0.25. This is in fair agreement with the result of $`0.36\pm 0.06`$ deduced in on the basis of atomic physics considerations from the experiment on <sup>133</sup>Cs atoms . However, one cannot make a final comparison between theory and experiment until one has a calculation for the core-polarization contribution as well as for the structure of <sup>133</sup>Cs. A hamiltonian for this mass region has been developed and applied to the magnetic moment of <sup>137</sup>Cs , where one finds that the diagonal 1g<sub>7/2</sub> term is within a few percent of its value in <sup>133</sup>Sb. However, for <sup>133</sup>Cs the spherical shell-model dimensions involved in the valence space are extremely large (on the order of 10<sup>9</sup>) and one will have to explore the use of the deformed single-particle model or the shell-model Monte Carlo method in order to carry out a reliable calculation.
## IV Conclusions
We have investigated anapole moments in heavy nuclei in the framework of shell-model configuration mixing. The single-particle anapole moments are broken down into their components coming from the weak-interaction mixing of particle and hole terms. The sums of these terms calculated with Woods-Saxon radial wave functions are close to the values obtained by Dmitriev et al. . We have also evaluated these matrix elements with Skyrme Hartree-Fock radial wave functions and the results are similar to those obtained with the Woods-Saxon potential. We discuss the general principle behind the core-polarization corrections to the anapole moment. Specific calculations are carried out for <sup>208</sup>Pb with a G matrix interaction which incorporates realistic collective states for the spin-dipole excitation. The core-polarization corrections for the “particle” contributions in <sup>207</sup>Tl and <sup>207</sup>Pb turn out to be on the order of $`10\%`$; not very large, but not negligible. The single-particle matrix elements which we could consider in these calculations were, however, limited, and it would be desirable to carry out more extensive calculations along these lines. We have also made a configuration-mixing calculation for the anapole moment of <sup>205</sup>Tl which gave a value close to the previous single-particle estimate and in fair agreement with experiment.
Acknowledgements: This work was supported in part by the US-Israel Binational Science Foundation and by NSF grant PHY-9605207.
Figure Captions:
Figure 1: $`B(E1)`$ and $`B(a)`$ strength distributions in <sup>208</sup>Pb obtained with the unperturbed 1p-1h wave functions.
Figure 2: $`B(E1)`$ and $`B(a)`$ strength distributions in <sup>208</sup>Pb obtained with the mixed 1p-1h wave functions.
# A Compact Fireball Model of Gamma Ray Bursts
## 1 Introduction
Gamma ray burst spectra typically peak at several hundred KeV, and frequently (though not always) have an extended non-thermal tail that contains a significant fraction of the total energy. Although the high energy side of the peak is hard to characterize with reliable generalizations, classical GRB’s seem to typically peak above 100 KeV; a significantly softer peak is a sign that the GRB is in a separate class of ”soft repeaters”. Though this has been suggested to be a selection effect, we believe it to be real. The total luminosity in soft X-rays is typically below the luminosity above 100 KeV by an order of magnitude or more. When one considers that, realistically, an emitting surface will have some dispersion in its local temperature (e.g. a hot spot is likely to have cool edges, relativistic beaming from material moving directly at the observer is likely to be mixed with contributions from fluid with a sideways component to its motion etc.) the X-ray paucity seems all the more clean and significant.
We believe that the characteristic peak energy of about 200 KeV is not merely a chance value or selection effect, but is rather telling us something important about GRB’s. In this paper we construct a model that, by design, yields this characteristic peak energy naturally. The basic assertion is that the gamma rays near the peak are emitted from a compact region of radius R not much greater than about $`10^9\mathrm{\Gamma }`$cm, where $`\mathrm{\Gamma }`$ is the bulk Lorentz factor of the expanding fireball. Earlier discussions and variations of this basic idea include those by Eichler (1994) and Thompson (1997). Energization of the pair plasma at this radius could come from internal shocks (e.g., Eichler 1994; Rees & Meszaros 1994; Sari & Piran 1997) but could also come from, say, collimation by surrounding baryonic material. The possibility of collimation may be motivated by the huge fluence from recently discovered GRB’s such as GRB 990123. Interaction with the collimating material could produce strong internal shock, or could proceed via radiative viscosity with the wall of the collimating material.
In the latter process, dissipation (i.e. creation of additional quanta) would proceed very efficiently when the average photon energy in the observer frame approached $`m_ec^2`$, for then large angle scattering by the walls of the collimating material of photons back into the jet, where they could appear in the local frame to be blueshifted to even higher energy, would allow creation of additional pairs. Once the average energy per quantum in the observer frame goes far enough below $`m_ec^2`$, this dissipation mechanism would taper off. This is true of any dissipation mechanism that proceeds by pair creation. The point is that, if the clustering of GRB peaks near or just below $`m_ec^2`$ is to be regarded as a signature of dissipation by pair production, then such dissipation would probably occur at modest $`\mathrm{\Gamma }`$, or else the product of large $`\mathrm{\Gamma }`$ with a small fireball frame energy per quantum consistently yielding a given value for the observer's frame would seem a puzzling coincidence. If the dissipation is to take place at modest $`\mathrm{\Gamma }`$, we argue that this probably indicates compact scales, before the fireball accelerates to its asymptotic value of $`\mathrm{\Gamma }`$, which according to several considerations is quite large ($`\gtrsim 300`$).
At compact scales, thermalization is a strong possibility, and sets a convenient reference point for the analysis, but there is no implication that non-thermal effects are absent, or that the photon spectrum need look thermal on either side of the peak. Our view is that the non-thermal component in GRB’s is not, insofar as can be currently established by observations, as remarkably consistent from burst to burst as is the spectral peak. Thus, non-thermal acceleration might proceed downstream of the photosphere, as in optically thin synchrotron models of bursts with only the constraint that the non-thermal component never dwarf the thermal component (in total photons produced). The rationale for why both components might be of the same order of magnitude is given in section 3, where it is shown that the energy budget of the burst released in the internal shocks of an unsteady flow might be distributed logarithmically over a broad range of radii.
It is also possible that direct shock energization of photons (Blandford and Payne, 1982) can create a non-thermal photon spectrum up to 100 KeV in the frame of the fluid just below the photosphere by a shock that passes through the photosphere of a GRB (Eichler 1994). The photons would escape before they are thermalized or saturatedly Comptonized. We will argue in this paper that the material at the photosphere could be moving with a bulk Lorentz factor $`\mathrm{\Gamma }`$ that exceeds 300, depending on the extent to which non-thermal processes maintain a pair population well above the Boltzmann equilibrium value. Thus the high energy cutoff of the non-thermal spectrum could be above 30 MeV in the frame of the observer. The total photon number would have been established to within a factor of order unity well within the photosphere (at an optical depth of order $`1/\alpha `$ if by bremsstrahlung). On the other hand, the photon number density could change by a factor of order unity over the hydrodynamical timescale if the expansion, assumed to be unsteady, deviates from adiabaticity by a substantial margin. In this case, self absorption, which would keep the photon number steady in a static environment, fails to do so in the course of the expansion. As the Comptonization timescale just below the photosphere is at least as long as the hydrodynamical timescale, the spectrum of soft photons is not fully saturated by Comptonization.
Since the photospheric radius in our model is considerably smaller than in some other models, which by design recognize the restrictions on R and $`\mathrm{\Gamma }`$ implied by the escape of energetic, non-thermal gamma rays, we redo the calculation of this restriction in a transparent manner and show that it is consistent with the other restrictions that we invoke here. These calculations show that prompt very energetic gamma rays are possible, even prompt TeV ones, depending on the strength of the magnetic field. Conceivably, detection and time resolution of very energetic gamma rays in future experiments could set a useful diagnostic of the field strength and bulk Lorentz factor.
While the constraint of X-ray paucity might not be rigorous, and reconcilable with a ”just-so” selection of parameters, we note that, historically, it was taken seriously (Imamura & Epstein 1987) before the cosmological GRB picture was made popular by the all sky isotropy established by BATSE observations.
To avoid confusion, we define several distinct time intervals: The elapsed time in the frame of the fireball as the fireball evolves through radius R is called $`\mathrm{\Delta }t^{}`$ and is equal to $`R/c\mathrm{\Gamma }`$. The characteristic timescale of variation of the emission as seen by the external observer is called $`\delta t`$ and is generally taken to be of order (in any case, at least) $`R/c\mathrm{\Gamma }^2`$.
Henceforth, primed quantities denote quantities measured in the comoving frame, whereas unprimed quantities refer to quantities measured in the frame of the central engine. Subscripted quantities refer to that quantity expressed as the power of ten in cgs units denoted by the subscript. $`R_{12}`$ for example means the radius R expressed in units of $`10^{12}`$ cm.
## 2 X-Ray Paucity
A noteworthy feature of gamma ray burst spectra is the so-called paucity of X-rays. Although some BATSE spectra (though apparently not all, Preece $`\mathrm{𝑒𝑡}`$ $`\mathrm{𝑎𝑙}.`$ 1999) may be consistent with the instantaneous synchrotron radiation spectrum of a mono-energetic electron population, which would go as $`\nu I(\nu )\propto \nu ^{4/3}`$, they are often too hard to be the time integrated spectrum of such a population, $`\nu I(\nu )\propto \nu ^{\frac{1}{2}}`$, often too hard to be the optically thin instantaneous (i.e. thin target) emission of a shock accelerated spectrum, $`\nu I(\nu )\propto \nu ^{\frac{1}{2}}`$, and usually too hard to be the thick target emission spectrum of shock accelerated electrons, $`\nu I(\nu )`$ constant. For reviews of spectral distribution see Pendleton et al. (1994) and Band et al. (1993). Popular fireball models often assume that the primary peak of the burst radiation comes from thermal electrons that have been accelerated to large enough energies to synchrotron radiate at several hundred KeV, but this, in our opinion, requires much fine tuning. It requires that the magnetic field value, the electron energy and the bulk Lorentz factor always conspire to put the peak at several hundred KeV; it is seldom below that for most GRB's. (It has been debated that the narrow range of $`\nu I_\nu `$ peaks seen in GRB's can be attributed to instrumental effects \[Dermer, et al. 1998\]. Here we are motivated to find a physical explanation.) Another problem with optically thin synchrotron emission is that invoking a thin target spectrum means low radiation efficiency, unless there is additional fine tuning. On the other hand allowing the electrons to radiate most of their energy in the magnetic field (”thick target emission”) would make too much softer X-radiation.
Invoking optically thin inverse Compton emission (IC) for the peak gamma rays would seemingly exacerbate the problem, because the energy of the seed photons is less likely to be monoenergetic than the virtual photons in the magnetic field, and even more fine tuning would be required to ensure that the upscattered photons always peak at $`300`$ KeV. \[A saturated IC spectrum has far fewer free parameters (Liang 1998) and is indeed considered below, but the location of the peak still depends on the total number of quanta produced per unit energy. So we will consider such production first.\]
Here we interpret the X-ray paucity as being due to self absorption. The simplest version of this interpretation is to identify the typical GRB peak at $`\sim 300`$ KeV as being the Wien peak of a thermal spectrum with a temperature of $`10^2/\mathrm{\Gamma }`$ KeV, where $`\mathrm{\Gamma }`$ is the bulk Lorentz factor. If we interpret the paucity of X-rays from GRB's as being due to a black-body limit, then we can infer that the temperature of the radiation as seen by the observer must typically be $`\sim 10^2`$ KeV or more. If the brightness temperature at the frequency of maximum brightness temperature $`\omega _m`$ is indeed at the black body limit $`kT_m\simeq \hbar \omega _m`$, then the energy density in the frame of the fireball, $`aT_m^4`$, is limited by the condition that
$$4\pi R^2c\mathrm{\Gamma }^2aT_m^4\lesssim 10^{51}L_{51}\mathrm{erg}\mathrm{s}^{-1}.$$
(1)
The condition that the spectra typically peak at $`ϵ_\gamma \simeq 300ϵ_{300}`$ KeV implies that
$$\mathrm{\Gamma }T_m\simeq 10^9\mathrm{K}ϵ_{300}.$$
(2)
This implies that
$$\mathrm{\Gamma }\gtrsim 10^{4.25}L_{51}^{-\frac{1}{2}}R_{13}ϵ_{300}^2.$$
(3)
Although this admits unlimited values for both $`\mathrm{\Gamma }`$ and R, we note the following simple argument that could guide a choice of both: Liberating $`10^{51}`$ ergs within neutron star dimensions creates fewer photons, with larger average energy, than dictated by GRB observations, so some additional photon production must have taken place. If the (photon creating) dissipation takes place at modest $`\mathrm{\Gamma }`$ (say $`\mathrm{\Gamma }\sim 10`$), then equation (3) implies that $`R\lesssim 10^{10}`$ cm for $`L_{51}`$ and $`ϵ_{300}`$ of unity. If the dissipation were to take place at large $`\mathrm{\Gamma }`$ and perhaps larger R, the question would arise as to why the combination of the two always conspired to yield average photon energies of order several hundred KeV. On the other hand, assuming that the dissipation takes place at modest $`\mathrm{\Gamma }`$ provides a natural reason why the photon production should stop once the average photon energy is comfortably below (but not much further below) the pair production threshold: the cycle of pair production by photons and photon production by pairs naturally terminates at this point (e.g. Cavallo & Rees, 1978; Blandford & Levinson, 1995). For example, we might associate the additional photon production with the sharp, large angle deflections the flow might experience while being collimated into jets by a surrounding baryonic outflow. If thermalized at modest $`\mathrm{\Gamma }`$, say at a dissipation radius $`R_{dis}`$, and the flow expands more or less adiabatically beyond that point, then the temperature would from there on scale as 1/R, while $`\mathrm{\Gamma }`$ would scale as R as long as the fireball remained baryon pure (to avoid a minimum rest mass per unit energy release) and sufficiently optically thick (so that pairs were in thermal equilibrium with radiation). Thus equation (3) would continue to be satisfied as long as there were no further processing of the thermal photons. Neglecting, tentatively, non-thermal pair production, the thermal pair density would become negligible at a temperature well below $`3\times 10^8`$K (Paczynski 1986; Goodman 1986), and would thus establish a photosphere at a radius of about $`10R_{dis}`$, with a $`\mathrm{\Gamma }`$ of order 10 or less. The value of $`\mathrm{\Gamma }`$ could increase to $`10^3`$ within a radius of $`10^{12}`$ cm as long as there were no baryon loading.
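For orientation, the sketch below tabulates the minimum bulk Lorentz factor implied by equation (3) at a few radii; the parameter values are illustrative, not fits:

```python
# Numerical form of the black-body constraint, Eqs. (1)-(3): given the
# luminosity and the observed peak energy, the bulk Lorentz factor at
# radius R must satisfy Gamma >= 10^4.25 * L51^(-1/2) * R13 * eps300^2.
def gamma_min(R_cm, L51=1.0, eps300=1.0):
    R13 = R_cm / 1e13
    return 10**4.25 * R13 * eps300**2 / L51**0.5

for R in (1e10, 1e11, 1e12):
    print(f"R = {R:.0e} cm  ->  Gamma_min = {gamma_min(R):.1f}")
# For Gamma ~ 10 the dissipation radius must be <~ 1e10 cm, as in the text.
```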
The possibility of non-thermal or grey body (rather than black body) pair production would allow for a somewhat larger dissipation radius than given by equation (3). This possibility should not alter many of the conclusions of the paper much (though see immediately below). The basic point is that the modest $`\mathrm{\Gamma }`$ that we invoke for the production site of the GRB photons seems much smaller than the asymptotic values ($`\gtrsim 300`$) that are deduced from other considerations, and the implication is that much of the photon production in GRB's comes prior to the acceleration of the flow from $`\mathrm{\Gamma }=10`$ to $`\mathrm{\Gamma }=300`$.
Given equation (3), with $`\mathrm{\Gamma }`$ of order 10, the characteristic timescale of variation of photons from such a photosphere, if of order $`R_{dis}/c\mathrm{\Gamma }^2`$, comes out naturally to be of order 3 milliseconds.
As has been noted many times, gamma rays observed by COMPTEL at about 20 MeV would pair produce at these radii unless the bulk Lorentz factor $`\mathrm{\Gamma }`$ exceeded about 20. On the other hand, beyond the photosphere, the bulk Lorentz factor of a baryon poor wind can grow in proportion to R, so that the condition $`\mathrm{\Gamma }\gtrsim 20`$ is attained not much beyond the photosphere. We contend that any COMPTEL gamma ray emitted at $`R\gtrsim 3\times 10^{10}`$ cm at $`\mathrm{\Gamma }\gtrsim 30`$ would not be subject to pair production and could in principle reproduce time variations as short as 10 ms.
In the particularly well studied burst GRB 930131, COMPTEL detected gamma rays out to 20 MeV or so with a time profile very similar to that of the BATSE profile, and we suggest that this be interpreted as both energy ranges reflecting the intrinsic time profile of the energy release. On the other hand, the EGRET gamma rays at 30 MeV, of which there were several, typically arrived several seconds after the BATSE and COMPTEL peaks, and this is consistent with the hypothesis that the shortest time profile for the EGRET gamma rays was of order seconds.
## 3 High Energy Gamma Rays: Production
Although it is tempting, if only for simplicity, to model GRB's as coming from a particular characteristic radius, the following argument suggests that a broad range of emission radii is no less natural. As long as the fireball is not baryon loaded and not complicated by non-spherical expansion, the bulk Lorentz factor increases linearly with R. The proper time in the frame of the fireball is then described by
$$dt^{}=dt/\mathrm{\Gamma }\propto dR/R.$$
(4)
It follows that the proper time evolves only logarithmically with R. The duration of shock activity (e.g. due to very clumpy baryon contamination, fluctuations in $`\mathrm{\Gamma }`$ at launch, etc.) in the proper frame $`\mathrm{\Delta }t^{}`$, which is likely to be of order the mean time $`t^{}`$ around which such activity is centered, thus persists over a large range of radii R. That is, the range of radii over which there is shock activity, $`\mathrm{\Delta }R`$, is given by $`\mathrm{\Delta }\mathrm{ln}R\sim \mathrm{ln}(\frac{R}{R_0})`$, where $`R_0`$ denotes the beginning of the acceleration zone and might be close to $`R_{dis}`$. If $`\frac{R}{R_0}\gg 1`$, then shocks could persist over many decades of R, particularly shocks associated with the launch of the fireball, which can be delayed or ”dragged out” to $`R\gg R_o`$ by the time dilation. Previous discussions of internal shocks occurring at some characteristic radius (Levinson & Eichler 1993, Eichler 1994, Rees & Meszaros 1994) have perhaps not emphasized this point. (We agree, however, that shocks resulting purely from fluctuations in the saturation radius, at which $`\mathrm{\Gamma }`$ approaches its asymptotic value, are likely to form near the mean value of that radius if the launch conditions are otherwise identical and if the baryonic contamination is not too clumpy.) We contend that there is no reason to expect all of the gamma rays from GRB's to come from the same radius. We therefore consider the evaluation of gammaspheric radii as a function of gamma ray energy.
In order to emit gamma rays at energy E, the fireball must be able to accelerate the particles to at least energy E, and the gamma rays must then be able to escape.
The first condition is expressed as
$$\gamma _m\sigma _TU^{}/m_ec=\eta eB^{}/\gamma _mm_ec$$
(5)
where $`\gamma _m`$ is the maximum Lorentz factor in the frame of the fluid that is achievable by the acceleration, $`U^{}`$ is the energy density of the radiation field in the frame of the fireball fluid element, $`B^{}`$ is the field strength in the fluid frame in cgs units and $`\eta `$ is the acceleration rate in units of the electron gyrofrequency. For any reasonable acceleration mechanism, $`\eta `$ should be less than unity and for shock acceleration a value of order 1 to 10 percent seems like a reasonable guess. Writing $`U^{}`$ as $`10^{51}L_{51}\mathrm{erg}\mathrm{s}^{-1}/4\pi R^2c\mathrm{\Gamma }^2`$, equation (5) implies that
$$\gamma _m/\mathrm{\Gamma }=0.5\eta ^{\frac{1}{2}}R_{12}B^{\frac{1}{2}}L_{51}^{-\frac{1}{2}}.$$
(6)
For $`L_{51}=1`$, and assuming that the Poynting flux is limited by $`10^{51}L_{51}`$ ergs per $`4\pi `$ steradians, the last equation reduces to
$$\gamma _m\mathrm{\Gamma }=6\times 10^3\eta ^{\frac{1}{2}}\mathrm{\Gamma }^{\frac{3}{2}}R_{12}^{\frac{1}{2}}.$$
(7)
Note that if $`B=\mathrm{\Gamma }B^{}`$ scales as $`1/R`$, then the maximum energy in the frame of the fireball to which electrons can be accelerated increases as $`R^{1/2}\mathrm{\Gamma }^{1/2}`$, as seen from eq. (6). This implies that the ability of the fireball to accelerate electrons to extremely high energies increases sharply with radius. At small radii,
$$R_{12}\lesssim 2\eta ^{-\frac{1}{2}}B^{-\frac{1}{2}}$$
(8)
a shock cannot accelerate electrons to a high enough energy to pair produce via inverse Compton scattering of thermal photons (i.e. to a Lorentz factor $`\gamma _m\gtrsim \mathrm{\Gamma }`$, which would be needed given that the photons near the spectral peak have energies less than $`m_ec^2/\mathrm{\Gamma }`$ in the fluid frame).
At a radius of order $`10^{12}`$ cm, with $`\mathrm{\Gamma }\sim 40`$, equation (7) admits TeV gamma rays which could arrive within a fraction of a second if they escape freely.
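A rough numerical illustration of this point, using equation (7) with an assumed acceleration efficiency (our choice $`\eta =0.1`$ is only a guess of the kind mentioned above):

```python
# Maximum photon energy ~ gamma_m * Gamma * m_e c^2 implied by Eq. (7),
# for L51 = 1 and R12 = 1; eta = 0.1 is an assumed efficiency.
def e_max_TeV(Gamma, R12=1.0, eta=0.1):
    gamma_m_Gamma = 6e3 * eta**0.5 * Gamma**1.5 * R12**0.5  # Eq. (7)
    return gamma_m_Gamma * 0.511e-6  # m_e c^2 = 0.511 MeV = 0.511e-6 TeV

for Gamma in (10, 40, 300):
    print(f"Gamma = {Gamma:3d}:  E_max ~ {e_max_TeV(Gamma):.2f} TeV")
# Gamma ~ 40 at R ~ 1e12 cm already allows photons approaching a TeV.
```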
If non-thermal pairs are made at larger radii than given in equation (8), they could conceivably Compton scatter the thermal photons that were made at smaller radii. If the thermal photons constitute most of the burst’s photons and most (or much of) its energy, further scattering by pairs downstream cannot greatly alter the parameters of the thermal component.
## 4 High Energy Gamma Rays: Escape
In order to compute the pair production opacity contributed by the beamed photons and the associated gammaspheric radii, we consider a conical beam of emission with an opening semiangle $`\theta _b`$, and a power law spectrum which is taken to be constant with radius: $`n(ϵ_s)=n_oϵ_s^{-\alpha }`$; $`ϵ_{min}<ϵ_s<ϵ_{max}`$, where $`n_o`$ is expressed in terms of the apparent luminosity, $`L`$, as
$$n_o=\frac{L}{4\pi ^2m_ec^3r^2}(1-\mu _b^2)^{-1}G(ϵ_{max},ϵ_{min})\simeq \frac{L\mathrm{\Gamma }^2}{4\pi ^2m_ec^3r^2}G(ϵ_{max},ϵ_{min}),$$
(9)
where $`\mu _b=\mathrm{cos}\theta _b`$, $`G(ϵ_{max},ϵ_{min})=[\mathrm{ln}(ϵ_{max}/ϵ_{min})]^{-1}`$ for $`\alpha =2`$, and $`G(ϵ_{max},ϵ_{min})=(2-\alpha )/(ϵ_{max}^{2-\alpha }-ϵ_{min}^{2-\alpha })`$ for $`\alpha \ne 2`$. The pair production opacity at energy $`ϵ_\gamma `$ (measured in units of $`mc^2`$) can then be expressed as
$$\kappa _p(ϵ_\gamma )=2\pi \int _{ϵ_{min}}^{ϵ_{max}}𝑑ϵ_s\int _{\mu _b}^{\mu _{max}}𝑑\mu \{(1-\mu )n(ϵ_s)\sigma _p(\beta )\}.$$
(10)
Here $`\mu _{max}=`$ max$`\{\mu _b;1-2/ϵ_sϵ_\gamma \}`$, and $`\sigma _P(\beta )`$ is the pair-production cross section, where $`\beta `$ is the speed of the electron and the positron in the center of momentum frame, and is given by
$$(1-\beta ^2)=\frac{2}{(1-\mu )ϵ_\gamma ϵ_s}.$$
(11)
The threshold energy, obtained for $`\beta =0`$ and $`\mu =\mu _b`$, now reads: $`ϵ_{thrs}=2/(1-\mu _b)ϵ_\gamma \simeq 4\mathrm{\Gamma }^2/ϵ_\gamma `$. Substituting eq. (9) into eq. (10) yields
$$\kappa _p=\frac{3\sigma _TL}{16\pi m_ec^3r^2}\frac{1}{2^{2\alpha }\mathrm{\Gamma }^{2\alpha }}G(ϵ_{max},ϵ_{min})A(\alpha ,\beta _{max})ϵ_\gamma ^{\alpha -1},$$
(12)
where
$$A(\alpha ,\beta _{max})=\frac{1}{\alpha +1}\int _0^{\beta _{max}}𝑑\beta \beta (1-\beta ^2)^{\alpha -1}\left[(3-\beta ^4)\mathrm{ln}\left(\frac{1+\beta }{1-\beta }\right)-2\beta (2-\beta ^2)\right].$$
(13)
The function $`A`$ can be evaluated numerically. It vanishes for energies below the threshold energy at which $`\beta _{max}=0`$. A plot of $`A`$ versus $`\alpha `$ for $`\beta _{max}=1`$ (which is a good approximation well above the threshold energy) is given in Blandford & Levinson (1995). Numerically we find $`A=0.2`$ for a photon index $`\alpha =2`$.
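The integral in eq. (13) is straightforward to evaluate numerically; a minimal sketch (scipy quadrature with $`\beta _{max}=1`$), for comparison with the value quoted in the text:

```python
# Direct numerical evaluation of A(alpha, beta_max) of Eq. (13), for
# comparison with the value A = 0.2 quoted in the text for alpha = 2.
import numpy as np
from scipy.integrate import quad

def A(alpha, beta_max=1.0):
    def integrand(b):
        return (b * (1.0 - b**2)**(alpha - 1.0)
                * ((3.0 - b**4) * np.log((1.0 + b) / (1.0 - b))
                   - 2.0 * b * (2.0 - b**2)))
    val, _ = quad(integrand, 0.0, beta_max)
    return val / (alpha + 1.0)

print(f"A(alpha = 2) = {A(2.0):.3f}")
```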
Now, the gamma-spheric radius, $`r_\gamma (ϵ_\gamma )`$, is defined implicitly by the equation
$$\int _{r_\gamma }^{\mathrm{\infty }}𝑑r\kappa _p(r,ϵ_\gamma )=1.$$
(14)
Using the above expression for the opacity, one obtains
$$r_\gamma (ϵ_\gamma )=1.6\times 10^{21}L_{51}(2\mathrm{\Gamma })^{-2\alpha }G(ϵ_{max},ϵ_{min})A(\alpha ,\beta _{max})ϵ_\gamma ^{(\alpha -1)}\mathrm{cm},$$
(15)
up to a numerical factor that depends on the radial profile of the Lorentz factor $`\mathrm{\Gamma }`$. Adopting, for illustration, $`\alpha =2`$, $`ϵ_{min}=2`$, $`ϵ_{max}=2\times 10^3`$, yields a threshold energy of 10$`\mathrm{\Gamma }_2^2`$ MeV, a gammaspheric radius
$$r_\gamma (ϵ_\gamma )=2.8\times 10^{10}\frac{L_{51}}{\mathrm{\Gamma }_2^4}ϵ_\gamma \mathrm{cm}$$
(16)
above the threshold energy, and a corresponding variability time
$$\delta t=\frac{r_\gamma }{2c\mathrm{\Gamma }^2}=5\times 10^{-5}\frac{L_{51}}{\mathrm{\Gamma }_2^6}ϵ_\gamma \mathrm{s}.$$
(17)
These last equations admit rather high energy gamma rays emerging from rather compact regions. Even the GeV gamma rays observed by EGRET can in principle emerge from within $`10^{12}`$ cm if the bulk Lorentz factor is 300 or more.
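A short numerical illustration of eqs. (16)-(17) for a GeV photon (the luminosity and Lorentz factors below are illustrative):

```python
# Gammaspheric radius and variability time, Eqs. (16)-(17), for alpha = 2.
def r_gamma_cm(eps_gamma, Gamma, L51=1.0):
    """eps_gamma in units of m_e c^2; Gamma2 = Gamma/100."""
    Gamma2 = Gamma / 100.0
    return 2.8e10 * L51 * eps_gamma / Gamma2**4

def dt_var_s(eps_gamma, Gamma, L51=1.0):
    c = 3e10  # cm/s
    return r_gamma_cm(eps_gamma, Gamma, L51) / (2 * c * Gamma**2)

eps_GeV = 1e3 / 0.511   # a 1 GeV photon in units of m_e c^2
for Gamma in (100, 300):
    print(f"Gamma = {Gamma}: r_gamma = {r_gamma_cm(eps_GeV, Gamma):.2e} cm, "
          f"dt = {dt_var_s(eps_GeV, Gamma):.2e} s")
# For Gamma = 300 a GeV photon escapes from within ~1e12 cm, as stated.
```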
The above results may be considerably altered in the presence of an additional radiation component. For instance, large angle scattering of the beamed photons by ambient gas surrounding the fireball, e.g., a confining, baryon-rich outflow (Nakamura 1998; MacFadyen & Woosley 1998; Eichler & Levinson 1999), can produce an unbeamed component that will dominate the pair production opacity within the baryon-poor jet even if its luminosity is only a small fraction of the total. It may then be that high energy gamma rays can escape only from parts of the baryon-poor jet that have run ahead of surrounding baryons.
## 5 Comparison of Constraints on Baryon Contamination
The cooling of the plasma to $`2\times 10^8`$ K, so that the Boltzmann factor for pairs becomes minuscule, is a necessary condition for small photospheres, but it is not a sufficient one. Electrons may be entrained from the sides (e.g. via neutron seepage) and non-thermal pairs may be produced under some conditions. Let us then consider the limits on the net baryon number within a jet that features a compact photosphere.
Writing
$$\dot{M}=4\pi m_pnR^2\beta c$$
(18)
and using $`\sigma _TnR=\tau \mathrm{\Gamma }^2`$, where $`\tau `$ is optical depth, and letting $`\tau `$ be unity at the photospheric radius $`R_{ph}`$, we easily obtain
$$\mathrm{\Gamma }_a\mathrm{\Gamma }_{ph}^2\simeq 2.5\times 10^5L_{51}R_{ph13}^{-1}$$
(19)
where $`\mathrm{\Gamma }_a`$ is defined as $`L/\dot{M}c^2`$.
If the photosphere were to occur at $`\mathrm{\Gamma }_{ph}\sim 10`$, and $`R_{ph}`$ of order $`10^{10}`$ cm, then $`\mathrm{\Gamma }_a`$ would have to be enormous, of order $`10^6`$, and might be convincing evidence that an event horizon enforces baryon purity on field lines that thread it. On the other hand, the compactness constraint that motivates our model does not imply that the photons that have been produced within $`10^{10}`$ cm necessarily make their last scattering there; it merely implies that they are not swamped by a much larger number of photons made at larger radii, nor drained of their energy by mass loading (which would terminate the linear increase of $`\mathrm{\Gamma }`$ with R). They may be scattered by material that is both dynamically and thermodynamically passive; the timescale and spectrum could remain the same.
We may combine equations (3), (19) and the obvious condition that $`\mathrm{\Gamma }_a\geq \mathrm{\Gamma }_{ph}`$, to obtain that at $`R_{ph}\sim 10^{11}`$ cm, $`\mathrm{\Gamma }_a`$ must exceed $`\sim 200`$. If the photosphere were constrained by some other consideration to be larger than this, then equations (3) and (19) similarly constrain $`\mathrm{\Gamma }_{ph}`$ to be greater as well.
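The coefficient in equation (19) can be checked to order of magnitude; the sketch below is our own evaluation of $`L\sigma _T/4\pi m_pc^3R_{ph}`$, which reproduces the quoted value up to an $`O(1)`$ factor that depends on the exact $`\tau =1`$ convention adopted:

```python
# Order-of-magnitude check of the coefficient in Eq. (19):
# Gamma_a * Gamma_ph^2 ~ L * sigma_T / (4 pi m_p c^3 R_ph).
import math

sigma_T = 6.65e-25   # Thomson cross section, cm^2
m_p = 1.67e-24       # proton mass, g
c = 3e10             # speed of light, cm/s

def gamma_a_gamma_ph2(L=1e51, R_ph=1e13):
    return L * sigma_T / (4 * math.pi * m_p * c**3 * R_ph)

print(f"Gamma_a * Gamma_ph^2 ~ {gamma_a_gamma_ph2():.1e}")
# ~1e5 for L51 = R_ph13 = 1; the quoted 2.5e5 differs only by an O(1)
# factor implicit in the adopted tau = 1 condition.
```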
## 6 Conclusions
We have suggested that a significant fraction of the photons from gamma ray bursts are emitted at a radius of about $`10^{10}`$ cm or less, where the bulk Lorentz factor is modest. This allows the burst entropy to be established at modest $`\mathrm{\Gamma }`$, so that the peak energy in the observer's frame can be tied directly to $`m_ec^2`$ as opposed to a small fraction of $`m_ec^2`$ multiplied by a large $`\mathrm{\Gamma }`$. It also provides a natural account of why GRBs have time substructure of milliseconds.
The photosphere (surface of last scattering) may be larger than the radius of emission, but whether inequality (3) is satisfied is independent of R as long as $`\mathrm{\Gamma }`$ remains proportional to R. Thus, the fulfillment of (3) is unaffected by passive scattering material.
Higher energy, non-thermal gamma rays may be produced even beyond the photosphere of lower energy ones, but we found in section 4 that even GeV gamma rays could escape from within $`10^{12}`$ cm for $`\mathrm{\Gamma }`$ of order 300.
TeV gamma rays could, in principle, escape if produced within $`10^{12}`$ to $`10^{13}`$ cm for $`\mathrm{\Gamma }`$ of order $`10^3`$ and would be prompt. Thus, it seems worthwhile to run experiments that could detect prompt, ultrahigh, non-thermal gamma rays from an unannounced direction. The MILAGRO project is such a detector, though the universe may be opaque to the photons ($`ϵ_\gamma \gtrsim 1`$ TeV) to which it is sensitive. A similar experiment with a lower energy threshold - say $`10^2`$ GeV - might detect prompt UHE emission from a sizeable fraction of GRB's. Detection of prompt gamma rays above 100 GeV might be the best diagnostic of the bulk Lorentz factors of GRB fireballs.
To conclude, the radius of emission and radius of last scattering of thermal gamma rays in GRB's may be different, but both may occur well within $`10^{12}`$ cm. Non-thermal, high energy gamma rays may be produced at and escape from somewhat larger radii than the thermal ones, but they too can emerge from compact regions provided that the bulk Lorentz factor is sufficiently high. Soft echoes of primary gamma radiation scattered off baryonic matter might provide (via their time scales) limits on the photospheric radius for the bulk of the gamma rays.
We acknowledge support from the Israel Science Foundation. We thank Dr. E. Waxman for useful discussions.
# The Ground State of the “Frozen” Electron Phase in Two-Dimensional Narrow-Band Conductors with a Long-Range Interelectron Repulsion. Stripe Formation and Effective Lowering of Dimension.
## 1 Introduction.
Studies of high-T<sub>c</sub> superconductors have caused a surge of interest in properties of narrow-band layered and two-dimensional (2D) conductors. An important consequence of the layerness is substantial weakening of the screening of a Coulomb interaction between the charge carriers. (The screening radius cannot, under any circumstances, be less than the interlayer distance.) Besides, in layered conductors it is possible to separate the charge carriers (for the sake of definiteness, we consider them electrons) well from the donors, so that the mean energy, $`u_{ee}`$, of the long-ranged interelectron repulsion prevails over the energy of electron attraction to the donors. Under these conditions it is the mutual repulsion of narrow-band electrons that can suppress their tunneling between the equivalent orbits of the conductor lattice, resulting in formation of a “frozen” electron phase (FEP) which differs principally from any known macroscopic self-localized electron state including the Wigner crystal . The FEP occurs when the electron bandwidth, $`t`$, is less than $`\delta u=(a/r_{ee})u_{ee}`$, where $`\delta u`$ is the typical change in the narrow-band electron Coulomb energy in electron hopping, $`a`$ is the range of hopping, $`r_{ee}`$ is the mean electron separation. The high-$`T_c`$ cuprates, grain boundaries of polycrystalline electroceramic materials , as well as some artificial 2D conductors appear to be most favorable for the 2D FEP to come into existence.
The macroscopic behavior of the 2D FEP is rather unconventional. Its distinctive features are rooted in the properties of its ground state (GS) at $`t\ll \delta u`$. In the limit $`t/\delta u\to 0`$ the GS of the 2D FEP is much the same as that of other 2D lattice systems with a long-ranged interparticle repulsion. (An example is an ensemble of adsorbed atoms strongly interacting with their substrate and mutually repelling each other .) As far as we know, neither the thermodynamics nor the GS of such systems have been studied adequately. Here we offer a unified approach to the description of the GS of the 2D zero-bandwidth FEP (and similar lattice systems) with an isotropic pair potential of the interelectron repulsion, $`v(r)>0`$ ($`r`$ is the distance between interacting electrons). The key point of our consideration is a new phenomenon, a zero-temperature effective lowering of dimension (LOD), which we have found to underlie (despite the pair potential isotropy) the main GS properties of the 2D FEP for: i/ an arbitrary arrangement of the sites which can be occupied by electrons, provided the sites constitute a primitive lattice (called the host lattice below); ii/ any filling factor, $`\rho =N/𝒩`$ ($`N\to \mathrm{\infty }`$ and $`𝒩\to \mathrm{\infty }`$ are the total numbers of the electrons and the host-lattice sites respectively); iii/ any physically reasonable $`v(r)>0`$. We take the term LOD to mean that the GS of the 2D FEP is a set of different effective 1D FEPs whose “particles” are periodic stripes on the lattice of the 2D conductor. For each 1D system of the set there is its own $`\rho `$ interval where this 1D FEP represents the 2D one, the whole range, $`0\le \rho \le 1`$, comprising all the intervals. The LOD enables us to offer a rigorous analytical procedure for the 2D FEP GS description, using the exact results of the general theory of the 1D lattice systems with a long-ranged interparticle repulsion.
## 2 Hamiltonian. S-crystals.
The Hamiltonian, $`H`$, of the system under consideration has the form
$$H\{n(\stackrel{}{r})\}=\frac{1}{2}\sum _{\stackrel{}{r}\ne \stackrel{}{r^{}}}v\left(|\stackrel{}{r}-\stackrel{}{r^{}}|\right)n(\stackrel{}{r})n(\stackrel{}{r^{}}),$$
(1)
where $`\stackrel{}{r}=m_1\stackrel{}{a}_1+m_2\stackrel{}{a}_2`$ are radius vectors of the host-lattice sites, $`m_{1,2}`$ are integers, $`\stackrel{}{a}_{1,2}`$ are host-lattice primitive translation vectors (PTVs); the occupation numbers of the host-lattice sites, $`n(\stackrel{}{r})=0\text{ or }1`$, are microscopic variables ; the sum is taken over the whole host lattice. The pair potential is assumed to be an everywhere convex function of the form $`v(r)=\stackrel{~}{v}(r)/r`$, where the function $`\stackrel{~}{v}(r)`$ depends on the character of the screening medium and its position with respect to the 2D FEP. In any case $`\stackrel{~}{v}(r)`$ tends to zero as $`r^{-2}`$ or faster when $`r\to \mathrm{\infty }`$; $`\stackrel{~}{v}(0)=e^2/\kappa `$ ($`e`$ is the electron charge, $`\kappa `$ is the dielectric permittivity). Otherwise $`\stackrel{~}{v}(r)`$ can be reckoned as arbitrary: as will be shown below, its specific form is immaterial to our approach.
Among the GS configurations $`\{n(\stackrel{}{r})\}`$ with different $`\rho `$ the simplest ones are 2D crystals with one electron per cell (“S-crystals”). Their inverse $`\rho `$ values make up an infinite set of integers $`Q_j=|det(m_{\kappa \lambda }^j(\stackrel{}{a}_1,\stackrel{}{a}_2))|`$, where $`j`$ indexes S-crystals, integers $`m_{\kappa \lambda }^j`$ ($`\kappa ,\lambda =1,2`$) are components of S-crystal PTVs in the $`\stackrel{}{a}_\lambda `$ basis; $`Q_j`$ is the $`j`$-th S-crystal elementary-cell area measured in units of that of the host lattice, $`\sigma _0=|\stackrel{}{a}_1\times \stackrel{}{a}_2|`$.
Our strategy is to derive the full description of the GS for any $`\stackrel{}{a}_{1,2}`$, starting with consideration of small vicinities of $`\rho =1/Q_j`$. Since specific $`m_{\kappa \lambda }^j`$ values are irrelevant to this reasoning, we drop index $`j`$ at $`Q`$ and at other characteristics of the S-crystals for a while.
Due to discreteness of the system with the Hamiltonian (1) a macroscopically small change, $`\delta \rho `$, in $`\rho `$ ($`\delta \rho \to 0`$, $`N^{1/2}|\delta \rho |\to \mathrm{\infty }`$ when $`N\to \mathrm{\infty }`$) produces only isolated defects in an S-crystal, the space structure of the defects essentially depending on whether they result from an increase or a decrease in $`\rho `$. This fact is expressed by the identity
$$E_g(N\pm |\delta N|,𝒩\pm |\delta 𝒩|)-E_g(N,𝒩)=\pm \mu _\pm |\delta N|\mp P_\pm |\delta 𝒩|,$$
(2)
where $`E_g`$ is the GS energy, $`\delta N`$ and $`\delta 𝒩`$ are changes in $`N`$ and $`𝒩`$ producing $`\delta \rho `$. The proportionality coefficients, $`\mu _{-}<\mu _+`$, $`P_{-}<P_+`$, are the values of the chemical potential, $`\mu `$, and the pressure, $`P`$, which are the endpoints of the $`\mu `$ and $`P`$ intervals of S-crystal existence. They are determined by the energies of formation of corresponding defects. Thus, in some vicinity of $`\rho =1/Q`$ the GS is bound to be a superstructure of the defects. Our next step is to find them.
## 3 Zero-dimensional defects and their coalescence.
Adding to or removing from an S-crystal one electron results in formation of a zero-dimensional defect, a “$`+`$defecton” or a “$`-`$defecton” respectively. One can be inclined to think that $`\delta N`$ should be identified exactly with the total number of $`\pm `$defectons spatially separated, $`\pm \mu _\pm `$ being simply the energy of $`\pm `$defecton formation, $`ϵ_\pm `$. However, this seemingly evident statement is actually incorrect due to a coalescence of defectons of the same “sign”. In other words, if the number, $`|\nu |`$, of S-crystal electrons removed ($`\nu <0`$) or added ($`\nu >0`$) is more than $`1`$, a bound state of $`|\nu |`$ $`\pm `$defectons arises whose energy is less than $`|\nu |ϵ_\pm `$. We have revealed the coalescence by computation, using a “dipole” description of the GS with $`\nu =\pm 1,\pm 2,\mathrm{\dots }`$, which we have specially worked out for this purpose.
At $`\nu \ne 0`$ a perturbed S-crystal is formed where beside electrons placed at host-lattice sites in the interstices of the S-crystal ($`\nu >0`$) or empty S-crystal sites, “holes” ($`\nu <0`$), there are generally a certain number of S-crystal electrons shifted from their native S-crystal sites. The latter can be considered as “antiparticles” whose charge is equal to the electron one in magnitude but is opposite in sign, a pair “an electron shifted by a vector $`\stackrel{}{\xi }`$ + its antiparticle located at an S-crystal site $`\stackrel{}{r}`$ ” being the “$`\stackrel{}{r},\stackrel{}{\xi }`$-dipole”. Thus, the perturbation of the S-crystal can be envisioned as an ensemble consisting of several dipoles and $`|\nu |`$ interstitial particles/holes (IP/Hs). The dipoles interact with the IP/Hs and with each other. The energy of interaction between the IP/H (at $`\stackrel{}{r}=0`$) and the $`\stackrel{}{r},\stackrel{}{\xi }`$-dipole is $`u_\stackrel{}{\xi }(\stackrel{}{r})=\text{sign}\nu (v(|\stackrel{}{r}-\stackrel{}{\xi }|)-v(|\stackrel{}{r}|))\equiv \text{sign}\nu \widehat{\mathrm{\Delta }}_\stackrel{}{\xi }v(|\stackrel{}{r}|)`$; the energy of interaction between the $`\stackrel{}{r},\stackrel{}{\xi }`$- and $`\stackrel{}{r}^{},\stackrel{}{\xi }^{}`$-dipole is $`u_{\stackrel{}{\xi },\stackrel{}{\xi }^{}}(\stackrel{}{r}-\stackrel{}{r}^{})=\widehat{\mathrm{\Delta }}_\stackrel{}{\xi }\widehat{\mathrm{\Delta }}_{\stackrel{}{\xi }^{}}v(|\stackrel{}{r}-\stackrel{}{r}^{}|)`$. The IP/Hs, in turn, undergo a mutual repulsion and are exposed to an “external” field, $`u(\stackrel{}{r})`$, which is equal to $`-2u_0`$ for holes (here and further on $`u_{ee}`$ of the S-crystal is denoted by $`u_0`$), and for IPs it is the field produced at a point $`\stackrel{}{r}`$ by all electrons of the S-crystal. In these terms the change in the GS energy at a given $`\nu `$, $`U_{\text{GS}}(\nu )`$, takes the form
$$U_{\text{GS}}(\nu )=\mathrm{min}\left(V_{\text{rep}}+U_d+U_{\text{exc}}+U\right).$$
(3)
Here $`V_{\text{rep}}=\sum _{\alpha <\beta }v(|\stackrel{}{r}_{\alpha \beta }|)`$ is the energy of the mutual repulsion of the IP/Hs; $`U_d=\sum _{\alpha ,i}u_{\stackrel{}{\xi }_i}(\stackrel{}{r}_{\alpha i})`$ is the energy of their interaction with the dipoles; $`U_{\text{exc}}=\sum _i\delta u_{\stackrel{}{\xi }_i}+\sum _{i<k}u_{\stackrel{}{\xi }_i,\stackrel{}{\xi }_k}(\stackrel{}{r}_{ik})>0`$ is the excitation energy of the S-crystal with $`n_d`$ dipoles at $`\nu =0`$; $`\delta u_\stackrel{}{\xi }\sim u_0|\stackrel{}{\xi }|^2/r_{ee}^2>0`$ is the energy of formation of one dipole; $`U=\sum _\alpha u(\stackrel{}{r}_\alpha )`$ is the energy of the IP/Hs in the external field mentioned; indexes $`\alpha =1,\mathrm{\dots },|\nu |`$ and $`i=1,\mathrm{\dots },n_d`$ enumerate the IP/H radius-vectors and dipoles respectively, $`n_d`$ is the total number of the dipoles; $`\stackrel{}{r}_{ab}\equiv \stackrel{}{r}_a-\stackrel{}{r}_b`$. The minimum is taken with respect to $`n_d`$, the dipole variables, $`\stackrel{}{r}_i,\stackrel{}{\xi }_i`$, and $`\stackrel{}{r}_\alpha `$. Therefore, the dipole approach allows one to work with only a few discrete variables. This facilitates considerably the Monte-Carlo computer simulation of the $`\pm `$defectons ($`U_{\text{GS}}(\pm 1)=\pm ϵ_\pm `$) and their coalescence at $`|\nu |>1`$.
The mechanism of the coalescence can be elucidated by the following heuristic arguments. The GS total dipole energy, $`E_d(\nu )=U_{\text{exc}}(\nu )+U_d(\nu )`$, is negative, so that for any $`|\nu |`$ the GS space structure is determined by an interplay between the negative $`U_d`$ and the positive $`U_{\text{exc}}`$, $`V_{\text{rep}}`$. The IP/H – dipole interaction gives the maximal gain in energy when each IP/H is embedded in a “shell” of four dipoles which are attracted to it, the dipoles’ antiparticles forming a parallelogram of a size $`\sim r_{ee}\sim Q^{1/2}a_0`$ (Fig.1). The shells of neighboring IP/Hs are bound to share some of their dipoles for $`U_{\text{exc}}`$ (and hence $`n_d`$) to be as small as possible. This requirement can be fulfilled only when all IP/Hs are aligned in a row, the near-neighbor IP/Hs being shifted relative to one another by the same S-crystal PTV with the modulus $`\sim r_{ee}`$ (Fig.1). In such a case $`|E_d(\nu )|`$ is more than the magnitude of the dipole energy of $`|\nu |`$ infinitely separated defectons, $`E_d^{\mathrm{\infty }}=|\nu |E_d(\pm 1)`$. The coalescence arises when the energy gain, $`\mathrm{\Delta }=|E_d(\nu )|-|E_d^{\mathrm{\infty }}|`$, exceeds $`V_{\text{rep}}`$ of the IP/Hs aligned in the row. Since $`\mathrm{\Delta }\sim |\nu |v(r_{ee})`$, this condition is met if $`v(r)`$ decreases not too slowly, or, more exactly, if
$$\gamma =\int _{r_{ee}}^{\mathrm{\infty }}v(r)𝑑r/r_{ee}v(r_{ee})\lesssim 1.$$
(4)
The computer simulation carried out with the model potential $`v(r)\propto r^{-\beta }\mathrm{exp}(-r/R)`$ over a wide range of the parameters $`\beta ,R`$ has confirmed that the condition (4) is really the criterion of the coalescence for any $`|\nu |`$ (and any $`\vec{a}_{1,2}`$).
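As a quick illustration of criterion (4), the parameter $`\gamma `$ can be evaluated numerically for the model potential just quoted. This is a sketch under our own normalization (any overall constant in $`v`$ drops out of $`\gamma `$):

```python
import numpy as np
from scipy.integrate import quad

def gamma(beta, R, r_ee=1.0):
    """Coalescence parameter of eq. (4) for v(r) = r**(-beta) * exp(-r/R)."""
    v = lambda r: r ** (-beta) * np.exp(-r / R)
    integral, _ = quad(v, r_ee, np.inf)
    return integral / (r_ee * v(r_ee))

print(gamma(1.0, 0.5))   # ~0.4: strongly screened, coalescence expected
print(gamma(0.5, 50.0))  # >> 1: nearly unscreened, no coalescence at large |nu|
```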
Criterion (4) is for the most part fulfilled. It holds for any $`\stackrel{~}{v}(r)`$ (section 2) such that $`\stackrel{~}{v}(0)-\stackrel{~}{v}(r_{ee})\sim \stackrel{~}{v}(0)`$. This case will be the focus of our attention from here on. The parameter $`\gamma `$ becomes $`\gg 1`$ if $`\stackrel{~}{v}(r)`$ decreases substantially only for $`r`$ which are exponentially large in $`\gamma `$. In this limit the mutual repulsion of the IP/Hs disrupts their row, and there is no coalescence, at least for sufficiently large $`|\nu |`$. However, in section 8 it is outlined that the LOD governs the GS in this rather special case, too.
## 4 The lowering of dimension.
### 4.1 The elementary stripes in the 2D FEP.
As follows from the aforesaid, the bound state of $`|\nu |`$ defectons is transformed into a periodic stripe-like structure as $`|\nu |`$ increases without bound (Fig. 1). It consists of elementary 1D defects which, as will be shown below, repel each other. Therefore, it is the simplest 1D defects that are expected to form the GS superstructure. An arbitrary 1D defect of this type is a stripe of rarefaction or compression which arises when the part of the S-crystal adjacent to a line of electrons with some PTV, $`\vec{d}`$, is shifted as a whole relative to the other part by a host-lattice translation vector, $`\vec{\xi }`$. Formation of one stripe of length $`L_s`$ changes $`𝒩`$ by $`\delta 𝒩=\pm \sigma L_s`$ ($`\sigma =|\vec{d}\times \vec{\xi }|`$; $`L_s`$ is measured in units of $`|\vec{d}|`$). The corresponding change in energy, $`\delta E`$, is proportional to $`\delta 𝒩`$:
$$\delta E/|\delta 𝒩|=\epsilon (\vec{d},\vec{\xi })=\sigma ^{-1}\sum _{n=1}^{\infty }{\sum _{\vec{r}}}^{\prime }u_{\vec{\xi }}(\vec{r}-n\vec{f}).$$
(5)
Here $`{\sum _{\vec{r}}}^{\prime }`$ means summation over the S-crystal semiplane $`\vec{r}=k\vec{d}+l\vec{f}`$ ($`-\infty <k<\infty `$, $`-\infty <l\le 0`$); $`\vec{f}`$ is any S-crystal PTV other than $`\vec{d}`$. The GS is realized by the stripes with $`\vec{d}=\vec{d}_\pm `$ and $`\vec{\xi }=\vec{\xi }_\pm `$ which minimize $`\epsilon (\vec{d},\vec{\xi })`$ at a given sign of $`\delta 𝒩`$ ($`-`$ or $`+`$ symbolizes rarefaction or compression respectively). We will call these stripes “$`-`$stripes” or “$`+`$stripes”.
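The lattice sum in Eq. (5) is straightforward to evaluate once cutoffs well beyond the range of $`v(r)`$ are imposed. The following sketch uses our own truncation scheme and parameter names, not the authors' code:

```python
import numpy as np

def v(r, beta=1.0, R=5.0):
    # model pair potential, as in the simulations of section 3
    return r ** (-beta) * np.exp(-r / R)

def u_xi(r, xi, sign_nu):
    # IP/H -- dipole interaction u_xi(r) of section 3
    return sign_nu * (v(np.linalg.norm(r - xi)) - v(np.linalg.norm(r)))

def stripe_energy(d, f, xi, sign_nu=1, kmax=30, lmax=30, nmax=30):
    """Truncated evaluation of eq. (5): energy per unit |delta N| of a stripe.

    d, f : S-crystal PTVs (d along the stripe, f transverse to it)
    xi   : host-lattice vector by which the half-plane is shifted
    The cutoffs kmax, lmax, nmax must be large compared to R/|f|.
    """
    d, f, xi = map(np.asarray, (d, f, xi))
    sigma = abs(d[0] * xi[1] - d[1] * xi[0])   # sigma = |d x xi|
    total = 0.0
    for n in range(1, nmax + 1):
        for k in range(-kmax, kmax + 1):
            for l in range(-lmax, 1):          # semiplane l <= 0
                r = k * d + l * f
                total += u_xi(r - n * f, xi, sign_nu)
    return total / sigma
```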
The energies $`\epsilon _\pm =|\epsilon (\vec{d}_\pm ,\vec{\xi }_\pm )|`$ are the quantities $`P_\pm `$ (see (2)) associated with the formation of $`\pm `$stripes. The corresponding $`\mu _\pm `$, as follows from general thermodynamic considerations, are
$$\stackrel{~}{\epsilon }_\pm =u_0+Q\epsilon _\pm .$$
(6)
For there to be no contradiction with the fact of the coalescence, the energies $`\stackrel{~}{\epsilon }_\pm `$ and $`ϵ_\pm `$ are bound to satisfy the inequalities
$$ϵ_-<\stackrel{~}{\epsilon }_-<\stackrel{~}{\epsilon }_+<ϵ_+.$$
(7)
When $`Q\gg 1`$ and $`v(r)`$ goes to zero over distances $`R\ll r_{ee}\sim a_0Q^{1/2}`$, they follow from simple estimates. Taking into account that $`|\vec{\xi }_\pm |\sim a_0`$, and, correspondingly, $`|\vec{d}_\pm \times \vec{\xi }_\pm |\sim Q^{1/2}\sigma _0`$, from Eq. (5) we obtain $`\epsilon _\pm \sim (a_0Q^{1/2}/R)u_0`$. On the other hand, $`|ϵ_-|\sim u_0\sim v(r_{ee})`$, and hence $`\stackrel{~}{\epsilon }_-\gg |ϵ_-|`$. In the case under consideration $`ϵ_+\sim v(r_{min})`$, where $`r_{min}`$ is the least of the distances between the IP and the S-crystal sites. This energy is much greater than $`\epsilon _+`$ as $`R\ll r_{ee}`$.
To make sure that the inequalities (7) hold for other $`v(r)`$ and $`R/r_{ee}`$, we have computed $`\epsilon _\pm `$ (based on Eq. (5) and Eq. (6)) in parallel with the Monte-Carlo computer studies of the coalescence. These computations have confirmed that the inequalities do hold for all $`v(r)`$ under consideration and for all $`Q`$, except possibly $`Q=2`$.
Together with the mutual repulsion of $`\pm `$stripes of the same sign, the inequalities (7) lead to the conclusion that $`+`$stripes or $`-`$stripes do constitute the GS superstructure in the vicinity of $`\rho =1/Q`$. The position of each $`\pm `$stripe – a constituent of the superstructure – is determined by the stripe “coordinate”, $`l`$, which is the total number of particle lines (with the PTV $`\vec{d}_\pm `$) between this stripe and some fixed one ($`l=0`$). The set of these coordinates determines uniquely the 2D FEP space structure. Therein lies the LOD.
### 4.2 The GS superstructure of stripes.
The GS arrangement of the $`\pm `$stripes is governed by the pair potential of the stripe-stripe interaction,
$$V_{\text{ss}}^\pm (l)=\sum _{n=l+1}^{\infty }{\sum _{\vec{r}}}^{\prime }u_{\vec{\xi }_\pm ,\vec{\xi }_\pm }(\vec{r}-n\vec{f}_\pm -\vec{\xi }_\pm )$$
(8)
where the inter-stripe “distance” $`l=1,2,\dots `$; $`\vec{f}_\pm `$ is an S-crystal PTV other than $`\vec{d}_\pm `$, and $`{\sum _{\vec{r}}}^{\prime }`$ means the same as in Eq. (5) ($`\vec{d},\vec{f}=\vec{d}_\pm ,\vec{f}_\pm `$). For all $`v(r)`$ under consideration $`{\sum _{\vec{r}}}^{\prime }(n)>0`$ and $`{\sum _{\vec{r}}}^{\prime }(n)>{\sum _{\vec{r}}}^{\prime }(n+1)`$. Hence, $`V_{\text{ss}}(l)>0`$ is a convex function of $`l`$. This enables us to describe the $`\pm `$stripe superstructure at $`\vartheta -Q\to 0`$ ($`\vartheta =1/\rho `$) on the basis of the universal 1D algorithm , considering the stripes as the “particles” of an effective 1D FEP:
$$l_m=[m/c_\pm ];\qquad c_\pm =|\vartheta -Q|/\sigma _\pm ,\qquad \sigma _\pm =|\vec{d}_\pm \times \vec{\xi }_\pm |$$
(9)
where $`[\dots ]`$ is the integral part of a number and $`m`$ enumerates the $`\pm `$stripes; the integer $`l_m`$ is the coordinate of the $`m`$-th stripe, which is a pair of neighboring lines of electrons $`\vec{r}_{m,1}(k)=k\vec{d}_\pm +l_m\vec{f}_\pm +m\vec{\xi }_\pm `$ and $`\vec{r}_{m,2}(k)=\vec{r}_{m,1}(k)+\vec{f}_\pm +\vec{\xi }_\pm `$ ($`k=0,\pm 1,\dots `$). The superstructure described by Eq. (9) is thus a mixture of $`-`$stripes ($`\vartheta -Q>0`$) or $`+`$stripes ($`\vartheta -Q<0`$) and unperturbed stripes of the S-crystal which are parallel to $`\vec{d}_\pm `$, so that $`c_\pm =N_s/𝒩_s`$ is the concentration of the $`\pm `$stripes; $`N_s`$ is their number and $`𝒩_s`$ is the total number of the $`\pm `$stripes and the S-crystal ones. The number of unperturbed stripes between the $`m`$-th and $`(m+1)`$-th $`\pm `$stripes equals $`l_{m+1}-l_m-1`$.
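For concreteness, the arrangement prescribed by Eq. (9) can be generated in a few lines; this is an illustrative sketch (the function and variable names are ours):

```python
def stripe_coordinates(n_stripes, c):
    # ground-state stripe "coordinates" l_m = [m / c] of eq. (9)
    return [int(m / c) for m in range(1, n_stripes + 1)]

# e.g. c = 2/5 gives l_m = 2, 5, 7, 10, 12, ...: between consecutive
# +/- stripes there are l_{m+1} - l_m - 1 = 2 or 1 unperturbed stripes
print(stripe_coordinates(5, 2 / 5))
```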
### 4.3 An algorithm for the arrangement of electron lines
S-crystals with $`\vec{f}_\pm =q_\pm \vec{\xi }_\pm `$, where $`q_\pm =Q/\sigma _\pm `$ is an integer, are of frequent occurrence. In particular, this occurs of necessity for a triangular host lattice (section 7), and also for $`\sigma _\pm =\sigma _0`$, as is typical of S-crystals on a host lattice of lower symmetry. In such a case the above-mentioned electron lines of both types, $`\vec{r}_{m,1}(k)`$ and $`\vec{r}_{m,2}(k)`$, fall into the class of electron lines $`k\vec{d}_\pm +l\vec{\xi }_\pm `$ ($`k=0,\pm 1,\dots `$; $`l`$ is an integer), which can be considered as 1D “particles” with “coordinates” $`l`$. Their arrangement, as follows from Eq. (9), obeys the algorithm:
$$l_m=[\overline{s}m],\qquad \overline{s}=q_\pm -c_\pm ,$$
where $`l_m`$ is the “coordinate” of the $`m`$-th line and $`\overline{s}`$ is the mean line separation measured in units of $`|\vec{\xi }_\pm |`$.
## 5 Devil's staircase.
The dependence of $`c_\pm `$ (or $`\rho `$) on $`\mu `$, much as in the 1D FEP , is a well-developed fractal structure, a devil's staircase whose steps occur at all rational $`c_\pm =M/L\le 1`$ ($`M,L`$ are coprime integers). At given $`M,L`$ the GS configuration of the 2D FEP is thus a “FEP crystal” with $`L`$ electrons per cell and with PTVs $`\vec{d}_\pm ,L\vec{f}_\pm +M\vec{\xi }_\pm `$.
In the commonly occurring case that $`\vec{f}_\pm `$ is a multiple of $`\vec{\xi }_\pm `$ (section 4), the steps’ widths, $`\mathrm{\Delta }\mu =\mathrm{\Delta }\mu (M/L)`$, can be found by direct application of the 1D theory , considering the energy of the line-line repulsion,
$$𝒱(l)=\sum _{k=-\infty }^{\infty }v(|k\vec{d}_\pm +l\vec{\xi }_\pm |)$$
($`l`$ is the distance between the interacting lines), as the 1D pair potential. This produces
$$\mathrm{\Delta }\mu =\sum _{m=1}^{\infty }m\ell \left(𝒱(m\ell -1)-2𝒱(m\ell )+𝒱(m\ell +1)\right),$$
where $`\ell =q_\pm L-M`$ is the period of the lines’ pattern. The expression in the brackets is positive since, in the case under consideration, $`𝒱(l)`$ is a convex function. In the general case the $`\mathrm{\Delta }\mu (M/L)`$ are expressed in terms of $`V_{\text{ss}}^\pm (l)`$ by a slight modification of the 1D theory.
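A direct numerical transcription of the last two formulas might look as follows; the truncation cutoffs and the model potential are our own illustrative choices:

```python
import numpy as np

def v(r, beta=1.0, R=5.0):
    return r ** (-beta) * np.exp(-r / R)

def V_line(l, d, xi, kmax=200):
    # line-line repulsion V(l): sum of v over one line of electrons
    d, xi = np.asarray(d), np.asarray(xi)
    return sum(v(np.linalg.norm(k * d + l * xi))
               for k in range(-kmax, kmax + 1)
               if k != 0 or l != 0)           # guard against the r = 0 term

def step_width(M, L, q, d, xi, mmax=50):
    # width of the devil's-staircase step at c = M/L; period ell = qL - M
    ell = q * L - M
    return sum(m * ell * (V_line(m * ell - 1, d, xi)
                          - 2 * V_line(m * ell, d, xi)
                          + V_line(m * ell + 1, d, xi))
               for m in range(1, mmax + 1))
```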
## 6 j-branches and first-order transitions in the ground state of the 2D FEP.
The algorithm (9) can be extended over the whole $`c_\pm `$ range, $`0<c_\pm <1`$, provided the crystal with one particle per cell (the “S-crystal” with PTVs $`\vec{d}_\pm ,\vec{f}_\pm +\vec{\xi }_\pm `$) which arises at $`c_\pm =1`$ ($`\vartheta =Q\pm \sigma _\pm `$) is stable (i.e. it is another S-crystal) or metastable. This follows from the fact that i/ owing to the coalescence of defectons, macroscopically small variations in $`\vartheta `$ generate, at any $`c_\pm `$, 1D defects only; ii/ these 1D defects, according to our computer calculations, have the same PTV, $`\vec{d}_\pm `$, for all $`c_\pm `$.
Moreover, due to the (meta)stability of the S-crystal, the algorithm (9) holds over a $`\vartheta `$ range adjacent to the interval $`[Q-\sigma _-,Q+\sigma _+]`$. In such a case Eq. (9) determines a mixture of stripes of a new geometry which are characterized by a new triple of vectors, $`\vec{d}_\pm ^{\prime },\vec{f}_\pm ^{\prime },\vec{\xi }_\pm ^{\prime }`$, the analogues of $`\vec{d}_\pm ,\vec{f}_\pm ,\vec{\xi }_\pm `$, and by the $`\pm `$stripe concentration $`c_\pm ^{\prime }=|\vartheta -Q\mp \sigma _\pm |/|\vec{d}_\pm ^{\prime }\times \vec{\xi }_\pm ^{\prime }|`$. The transition from one geometry to another is continuous in $`\vartheta `$ since $`c_\pm ^{\prime }`$ goes to zero when $`\vartheta \to Q\pm \sigma _\pm `$.
Continuously extending the algorithm (9) in the manner shown above, we obtain the “$`j`$-branch” (we introduce the index $`j`$ again), which comprises all (meta)stable structures of Eq. (9) connected in continuity with the starting S-crystal. The corresponding energy, $`E_j(\vartheta )`$, can easily be found in terms of $`V_{\text{ss}}^\pm (l)`$, using Eq. (9). As a rule, there exist different S-crystals belonging to the same $`j`$-branch. On the other hand, as we have computed, intersections of different $`E_j(\vartheta )`$, and hence zero-temperature first-order transitions in the variables $`\mu `$ or $`P`$ (a type of polymorphism), are universally present in the 2D FEP (see the example in section 7). The dependence of $`E_g`$ on $`\vartheta `$ is the function which comprises all stable portions of all the $`E_j(\vartheta )`$.
Thus, owing to the LOD described above, the GS of the 2D FEP is fully determined by the S-crystal PTVs, $`m_{\kappa \lambda }^j`$, the “directors”, $`\vec{d}_\pm ^j`$, and the displacement vectors, $`\vec{\xi }_\pm ^j`$, together with the set of $`E_j(\vartheta )`$ intersection points, which are the only GS characteristics changing under small variations in $`v(r)`$. All these quantities can be computed on the basis of Eq. (5) and Eq. (9) by a self-consistent procedure, finding the S-crystals together with the $`j`$-branches. We have found the GS for triangular and square host lattices as well as for a number of lattices with central symmetry only. The computation has not revealed principal differences between the GS properties of 2D FEPs with different host-lattice geometries, at least for those which are not significantly anisotropic.
## 7 Example.
Here we illustrate the above general results with a triangular host lattice (THL). All triangular lattices on the THL are necessarily S-crystals. This follows from the fact that it is the triangular lattice that realizes the absolute energy minimum of a system whose electrons are free to move. Such S-crystals are “$`p,q`$-crystals” with PTVs $`p\vec{a}_1+q\vec{a}_2,\;p\vec{a}_2+q\vec{a}_3`$ and $`\vartheta =p^2+q^2-pq`$ ($`p,q`$ are arbitrary integers; $`\vec{a}_{1,2,3}`$ is a triple of THL PTVs which are equal in modulus and form an angle of $`120^{\circ }`$ with each other). Using the procedure discussed in section 6, we have found that all $`0,q`$-crystals belong to the same $`j`$-branch (the main branch), which covers the range $`4\le \vartheta <\infty `$. The crystals nearest to the $`0,q`$-ones are S-crystals too. They occur at $`\vartheta =q(q+1)`$ ($`2\le q<\infty `$) and have PTVs $`q\vec{a}_\kappa ,(q+1)\vec{a}_\lambda `$ ($`\kappa ,\lambda =1,2,3`$; $`\kappa \ne \lambda `$). The stripe structures (9) have the same PTV, $`q\vec{a}_\kappa `$, for all $`\vartheta `$ of the interval $`[q(q-1),q(q+1)]`$, their $`\vec{\xi }_\pm `$ being $`\pm \vec{a}_\lambda `$ ($`\lambda \ne \kappa `$).
When $`p,q\ne 0`$, the $`j`$-branches of different $`p,q`$-crystals are distinct. They do not have mutual intersections, but all intersect the main branch, the intersections occurring at rather small concentrations of the $`p,q`$-crystals’ $`\pm `$stripes. In other words, the intervals of $`p,q`$-crystal stability ($`p,q\ne 0`$), and correspondingly of main-branch metastability, turn out to be narrow.
## 8 The limit of $`\gamma \gg 1`$.
So far the case of $`\gamma \lesssim 1`$ (section 3) has been discussed. Here we outline the limiting case $`\gamma \gg 1`$. It is realized when the Coulomb interelectron forces are screened by conductors which are at distances $`\gg r_{ee}`$ from the 2D FEP. Modelling such a situation by the potential $`v(r)\propto r^{-1}\mathrm{exp}(-r/R)`$ with $`R\gg r_{ee}`$, we have computed that the energies $`ϵ_\pm ,\epsilon _\pm `$ satisfy the inequalities $`\epsilon _-<ϵ_-<ϵ_+<\epsilon _+`$, which are opposite to those of Eq. (7). Due to this fact it is separated zero-dimensional defects of the S-crystal that form the GS superstructure for $`\rho `$ sufficiently close to $`1/Q`$. We have revealed that these zero-dimensional defects are “bidefectons”, complexes consisting of two bound defectons. Well-separated bidefectons can be considered as new particles on the S-crystal as the host lattice, the mean bidefecton separation, $`r_d`$, being equal to $`|2(\rho -1/Q)|^{-1/2}`$. The effective pair potential of the mutual bidefecton repulsion is characterized by the same space parameter, $`R`$, as $`v(r)`$. If $`r_d\ll R`$, the bidefectons, according to the general results of sections 3, 4, are bound to be ordered into stripes arranged by the algorithm (9). Extension of this reasoning to the case of $`R\lesssim r_d`$ leads to new stripe-like superstructures consisting of zero-dimensional defects of “new” S-crystals, and so on. Eventually a well-developed fractal arises. Though the details of its structure are still to be determined, it is safe to say now that the LOD does take place for $`\gamma \gg 1`$, too.
## 9 Summary.
The above considerations show that the ordering of electrons into stripes and the effective lowering of dimension are universally inherent in the 2D FEP. In essence, the combination of the discreteness of the electrons’ positions with a long-ranged interelectron repulsion is the only factor which gives rise to this phenomenon. For this reason it is also bound to arise in the presence of external disorder, the stripes being fractured and pinned by the disorder. Thus, stripe formation in 2D and layered narrow-band conductors can be considered the principal signature of a 2D FEP.
### 9.1 The charge ordering in cuprates as a manifestation of a 2D FEP.
From the above standpoint the charge ordering in the $`CuO_2`$ planes of high-temperature superconductors (cuprates) (neutron scattering), (channeling) is of especial interest. The fact that it takes place even at very low doping suggests that a 2D FEP might be present in these systems primordially. One can envision that the formation of ionized oxygen molecules, $`O_2^{-}`$, in the oxygen planes gives a certain energy gain even in cuprates of stoichiometric composition . In consequence, a part of the electrons leaves the oxygen planes for the $`s`$-orbitals of the $`Cu^{++}`$ ions in the $`CuO_2`$ planes, resulting in the formation of a number of $`Cu^+`$ ions. Since the amplitude of the electron hopping $`Cu^+\to Cu^{++}`$ is relatively small, the $`Cu^+`$ ensemble should be expected to be a 2D FEP, the concentration of the $`Cu^+`$ and, correspondingly, of the $`O_2^{-}`$ being determined by thermodynamic equilibrium between the 2D FEP and the ensemble of the $`O_2^{-}`$. It is evident that stripe formation in the 2D FEP of $`Cu^+`$ ions inevitably brings into existence $`O^{-}`$ superstructures in the $`CuO_2`$ planes. Their PTVs are likely to be the same as those of the $`Cu^+`$ FEP.
In connection with the aforesaid it should be noted that a simple explanation of high-temperature superconductivity can be offered in terms of the 2D FEP, taking into account the finiteness of the bandwidth . It lies in the fact that a virtual exchange of 2D FEP elementary excitations between oxygen holes (which are known to be the free charge carriers in the doped cuprates) leads inevitably to a mutual effective attraction of the holes and thereby to superconductivity (of purely Coulomb origin) with a high $`T_c`$. Our preliminary studies have shown that the lowest-energy elementary excitations in the cuprate 2D FEP are kinks on the disorder-fractured stripes.
### 9.2 Some expected features of the 2D FEP thermodynamics and conductivity as a consequence of stripe formation.
Our preliminary studies have shown that the effective lowering of dimension in the ground state of the 2D FEP gives rise to fairly interesting and unusual low-temperature thermodynamics. It is characterized by first-order transitions in the $`T,\mu `$-plane ($`T`$ is the temperature) from the FEP crystals (section 5), slightly perturbed by an ideal gas of separate defectons (which arise due to thermal activation), to a strongly correlated liquid of thermally fractured stripes (the “FEP liquid”) where there is no long-ranged order. The melting temperature as a function of $`\mu `$ turns out to go to zero at the endpoints of the intervals of the devil's staircase. Therefore, at any $`T\ne 0`$ there is a set of alternating $`\mu `$ intervals which correspond to the FEP crystals or to the FEP liquid.
Conduction in the 2D FEP liquid is expected to proceed by the movement of kinks of the fractured stripes, each kink carrying a fractional charge (measured in units of $`e`$). Conduction in the FEP crystals is of the common Drude type, the charge carriers being $`\pm `$defectons with charge $`\pm e`$. With a change in $`\mu `$ (at fixed $`T`$) these conduction mechanisms alternate, resulting in pronounced oscillations of the 2D FEP resistivity which reflect the ground-state devil's-staircase dependence of $`\rho `$ on $`\mu `$: the oscillation peaks are bound to occur close to the rational filling factors of the FEP crystals which survive at a given $`T`$. This phenomenon is yet another distinctive mark of the 2D FEP. We have found it to be very similar to the resistivity oscillations of a conductive sheet in a metal – n-type semiconductor – p-type semiconductor system , which still remain to be explained. We are going to publish the results concerning this issue in the near future.
It is remarkable that an artificially created external perturbation localized within a small region can block conduction over the whole FEP liquid by pinning only one stripe. The most appropriate systems in which to test this experimentally are perhaps granular thin films like those described in . A similar phenomenon was reported in . Yet the granular films used in those experimental studies were highly disordered, and it is unclear at present whether the above theory works in such a situation.
### Acknowledgments.
We gratefully acknowledge discussions with M. Pepper and P. Wiegmann.
## 1 Introduction
Spinodal instabilities were suggested by Bertsch, Siemens and Cugnon already 15 years ago as a possible mechanism leading to multifragmentation of hot expanding nuclear matter.
In heavy-ion collisions we expect that hot nuclear droplets are formed, which subsequently expand leading to low densities in the interior. Below certain values of the density, bulk and surface instabilities may occur and lead to multifragmentation.
In this contribution we report on a study of bulk and surface instabilities of spherical nuclear droplets as function of temperature and density.
## 2 Bulk instabilities
We consider the normal modes of a nuclear droplet. Let us define a complete orthonormal set of functions $`\mathrm{\Psi }_\lambda (r,\vartheta ,\phi )=𝒩_{ln}j_l(k_{nl}r)𝒴_l^m(\vartheta ,\phi )`$, where $`𝒴_l^m(\vartheta ,\phi )`$ are modified (real) spherical harmonics, $`j_l(kr)`$ are spherical Bessel functions and $`𝒩_{ln}`$ are normalization constants. As we aim to consider the distortions of a spherical droplet with radius $`R_0`$, we impose the condition $`j_l(k_{nl}R_0)=0`$. We consider the distortions defined by the irrotational displacement field
$$\vec{s}(\vec{r},t)=\vec{\nabla }\sum _\lambda q_\lambda (t)\mathrm{\Psi }_\lambda (\vec{r})\equiv \vec{\nabla }w(\vec{r},t)$$
(1)
from the general form of the displacement potential $`w(\vec{r},t)=\sum _\lambda q_\lambda (t)\mathrm{\Psi }_\lambda (\vec{r})`$ . Note that the surface is not kept fixed with this definition of the displacement field. The density varies according to the continuity equation
$$\frac{\partial \varrho (\vec{r},t)}{\partial t}+\text{div}[\varrho (\vec{r},t)\vec{v}(\vec{r},t)]=0,$$
(2)
which guarantees exact conservation of mass, and – together with the condition $`\mathrm{\Psi }_\lambda (R_0,\mathrm{\Omega })=0`$ – also of the center of mass.
In harmonic approximation small oscillations around equilibrium are determined by the set
$$\sum _{\lambda ^{\prime }}B_{\lambda \lambda ^{\prime }}\ddot{q}_{\lambda ^{\prime }}+\sum _{\lambda ^{\prime }}C_{\lambda \lambda ^{\prime }}q_{\lambda ^{\prime }}=0$$
(3)
of coupled equations. The eigenmodes are obtained from diagonalizing
$$(C_{\lambda \lambda ^{\prime }}-\omega ^2B_{\lambda \lambda ^{\prime }})q_{\lambda ^{\prime }}=0.$$
(4)
For $`\omega ^2>0`$ the corresponding mode is stable ($`q_\lambda \sim \mathrm{sin}(\omega t)`$). Exponential instability ($`q_\lambda \sim \mathrm{exp}(\gamma t)`$) occurs for $`\omega ^2=-\gamma ^2<0`$.
Analytic expressions for the mass $`(B_{\lambda \lambda ^{\prime }})`$ and stiffness $`(C_{\lambda \lambda ^{\prime }})`$ tensors are derived from the velocity field $`\dot{\vec{s}}`$ and the Skyrme energy-density functional , respectively. The mass tensor is diagonal. The stiffness tensor $`C`$ is the sum of the contributions $`C^V`$, $`C^\tau `$, $`C^W`$, $`C^S`$ and $`C^C`$ resulting, respectively, from the volume, intrinsic kinetic energy, Weizsäcker, surface and Coulomb terms in the energy density. The analytic expressions of the tensors are given in . Modes belonging to different $`l,m`$ (multipoles and their components) are decoupled. The only couplings left are those corresponding to different numbers $`n`$ of nodes in the displacement field for the same multipolarity. The eigenvalues are obtained by numerical diagonalization of the matrix $`B^{-1}C`$. The calculations have been performed for two Skyrme forces, i.e. SkM and SIII, implying, respectively, a soft and a stiff equation of state (EOS). The parameters of both forces are listed in Table 1. The contribution from the intrinsic kinetic energy has been calculated in the adiabatic limit (constant entropy with isotropic momentum distribution).
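The stability analysis of Eqs. (3)–(4) amounts to a generalized symmetric eigenvalue problem. A minimal sketch of this step (our own illustration, assuming $`B`$ and $`C`$ are supplied as symmetric matrices coupling the modes of different $`n`$ at fixed $`l`$) is:

```python
import numpy as np
from scipy.linalg import eigh

def mode_analysis(B, C):
    """Solve (C - omega^2 B) q = 0, eq. (4), and classify each mode.

    B is diagonal and positive definite (see text) and C is assumed
    symmetric, so the generalized symmetric solver applies directly.
    Returns omega2 and the growth times tau = 1/gamma for the unstable
    modes (tau = inf where omega^2 > 0, i.e. for stable modes).
    """
    omega2, _ = eigh(C, B)            # generalized eigenproblem C q = w^2 B q
    tau = np.full_like(omega2, np.inf)
    unstable = omega2 < 0
    tau[unstable] = 1.0 / np.sqrt(-omega2[unstable])
    return omega2, tau
```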
Fig. 1. Borders of spinodal instabilities for infinite symmetric nuclear matter (heavy solid line), for a gold nucleus (A=197, dashed line) and for a smaller nucleus (N=Z=50, dotted line) as calculated for the soft (left) and the stiff (right) EOS, respectively. Temperatures $`T`$ are given in MeV.
From now on, for all figures, $`\rho /\rho _0`$ denotes the ratio of the actual density to normal nuclear density ($`\rho _0=0.16`$ fm<sup>-3</sup>). Fig. 1 displays the areas of spinodal instability for the density modes with $`l=2`$ for three cases, i.e. infinite symmetric nuclear matter, a gold nucleus (Z=79, A=197), and a symmetric, roughly two times smaller fragment (Z=N=50) calculated with SkM (left) and SIII (right) forces, respectively. For infinite nuclear matter the area of spinodal instability is substantially larger than those for finite systems. The difference between the spinodal lines of the two finite systems is small. The areas of spinodal instability for the higher multipoles $`l=3,4,\mathrm{}`$ are further reduced. The results for infinite symmetric nuclear matter have been obtained by neglecting surface, Coulomb and Weizsäcker terms ($`C_{\lambda \lambda ^{}}^{(S)}=C_{\lambda \lambda ^{}}^{(C)}=C_{\lambda \lambda ^{}}^{(W)}=0`$) and taking the limit ($`R\mathrm{}`$) in the volume and kinetic-energy terms ($`C_{\lambda \lambda ^{}}^{(V)}`$, $`C_{\lambda \lambda ^{}}^{(\tau )}`$).
Fig. 2 shows quantitatively the importance of the different contributions to the lowest eigenfrequency as a function of the density at a typical temperature of 4 MeV. Due to their small values, the surface and Coulomb terms have practically no influence on the stability of the nuclear droplet. There is a delicate balance between the contribution from the kinetic energy term $`C_{\lambda \lambda ^{\prime }}^{(\tau )}`$ and the volume term $`C_{\lambda \lambda ^{\prime }}^{(V)}`$, such that the role of the Weizsäcker term $`C_{\lambda \lambda ^{\prime }}^{(W)}`$ becomes crucial. This term grows substantially with density and thus is important for the increasing stability with increasing density.
Fig. 2. Contributions of different terms to the lowest diagonal element of the $`B^{-1}C`$ matrix at $`T=4`$ MeV for a gold nucleus and the soft (left) and stiff (right) EOS. Additionally, the lowest eigenvalue $`\omega ^2`$ is displayed by the solid line. The difference of this eigenvalue from $`C^{tot}/B`$ is due to couplings to higher-$`n`$ modes. Note that the contributions from the Coulomb and surface terms are very small.
The crucial quantity in the multifragmentation process is the instability growth rate $`\gamma `$ (for $`\omega ^2<0`$, $`q_\lambda \sim \mathrm{exp}(\gamma t)`$) or the characteristic growth time $`\tau =1/\gamma `$ for a particular mode. Multifragmentation, initiated by such instabilities, can occur only if these characteristic times are short compared to the characteristic evolution time of the system. In Fig. 3 (left) we present the shortest characteristic times as a function of density and temperature for the bulk instabilities of the gold system calculated with the soft EOS.
## 3 Surface instabilities
For early stages of the expansion, where densities and temperatures are still high, no bulk instabilities exist. Instead, e.g. for initial temperatures higher than 8 MeV, the surface vibrations are found to be unstable. However, the characteristic times for these instabilities are about an order of magnitude larger than those for bulk oscillations, in accordance with . We follow here the standard Bohr and Mottelson theory . Fig. 3 (right) presents the smallest characteristic times for these surface instabilities. Although our basis is complete and – in principle – a linear combination of the collective displacement fields $`\mathrm{\Psi }_\lambda `$ can describe an arbitrary collective motion, including surface vibrations of incompressible matter, such surface motion requires a very large number of terms due to slow convergence. Therefore, we calculated the surface instabilities separately. As we see from Fig. 3, bulk instabilities are dominant at small densities and temperatures, while for large enough temperatures and densities only surface modes are unstable.
Fig. 3. Minimal characteristic times for bulk instabilities of a gold nucleus calculated with a soft EOS (left 3d-plot). All modes with $`l=2,3,4,5`$ and $`n=1,\dots ,6`$ are taken into account. The contour lines correspond to the values 20, 25, 30, 40, 60 and 100 fm/c. The temperature $`T`$ is given in MeV. The right 3d-plot shows the same (rotated by $`\pi `$ around the vertical axis) for surface instabilities. Here, the contour lines correspond to the values 150, 200, 250, 300, 400 and 600 fm/c. Note that in both diagrams the system is stable in the regions outside the holes.
## 4 Consequences for multifragmentation
The expansion of hot nuclei has been studied in refs. for soft and stiff equations of state and compared with experimental data . For the soft EOS the expansion trajectories in the $`(T,\rho )`$ plane reach turning points around $`T\approx 4`$ MeV and $`\rho /\rho _0\approx 0.25-0.45`$. Around these turning points the collective motion is very slow, and hence fast enough unstable modes can develop and initiate multifragmentation. Although finite-size effects substantially reduce the area of spinodal instabilities with respect to that of infinite nuclear matter, it is clear from Fig. 1 that the turning points essentially remain in the instability regime. From Fig. 3 (left) we see that the characteristic times for density instabilities around these turning points are of the order of 25–40 fm/c. Such times seem to be short enough to cause an irreversible decay of the system towards multifragmentation. Usually several modes become unstable in this region, with comparable characteristic times, so that the production of many different fragments is possible. Our study of surface modes shows that initially deformed nuclear droplets can hardly become spherical, because the characteristic times for shape restoration are large compared to the expansion time. For initial temperatures larger than 8 MeV such initial deformations will even tend to grow.
Disclaimer
> This document was prepared as an account of work sponsored by the United States Government. While this document is believed to contain correct information, neither the United States Government nor any agency thereof, nor The Regents of the University of California, nor any of their employees, makes any warranty, express or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. Reference herein to any specific commercial products process, or service by its trade name, trademark, manufacturer, or otherwise, does not necessarily constitute or imply its endorsement, recommendation, or favoring by the United States Government or any agency thereof, or The Regents of the University of California. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States Government or any agency thereof, or The Regents of the University of California.
Lawrence Berkeley National Laboratory is an equal opportunity employer.
Prologue
I first met Lev Okun at the 1976 “Rochester” conference held in the USSR, in Tbilisi, Georgia. Sakharov was under strong attack by the government for his human rights activities and was originally not invited but was permitted to attend after he protested the lack of an invitation to the Soviet Academy. Understandably even those Soviet physicists who were sympathetic to Sakharov and his ideas were cautious about associating with him during the meeting. While there may well have been others I did not observe, to me it was remarkable to see one Soviet physicist who did not hesitate to stroll openly with Sakharov on the streets of Tbilisi. This was of course Okun. His behavior then demonstrated the same simple idealism and courage that is reflected now in the decision he has taken since the dissolution of the USSR to remain in Moscow, to preserve the unique physics environment at ITEP, when he could easily have accepted more comfortable positions outside of Russia.
Not unrelated to his moral character is the clarity, depth and humanity with which Okun practices physics. This gives me a selfish reason for submitting the work presented here: I would like to have his view of it. It has a plausible conclusion reached by a strange method and raises questions I do not understand. It is based on a representation of an exactly unitary model of nonresonant $`WW`$ scattering in terms of an effective scalar propagator with simple poles in the complex energy plane. The method was applied and verified for tree approximation amplitudes and is used here to estimate quantum corrections.
Introduction
The electroweak symmetry may be broken by weakly coupled Higgs bosons below 1 TeV or by a new sector of quanta at the TeV scale that interact strongly with one another and with longitudinally polarized $`W`$ and $`Z`$ bosons. Precision electroweak data favors the first scenario, but the conclusion is not definitive, because the relevant quantum corrections are open to contributions from many forms of new physics. Occam’s (an archaic spelling of Okun’s?) razor favors the simplest interpretation, which assumes that the only new physics contributing significantly are the quanta that directly form the symmetry breaking condensate. In that case the data do favor weak symmetry breaking by Higgs scalars. But nature may have dealt us a more complicated hand, with other, probably related, new physics also contributing to the radiative corrections. Then the precision data tells us nothing about the symmetry breaking sector — unless we can “unscramble” the different contributions, which in general we do not know how to do — and implementation of the Higgs mechanism by strong, dynamical symmetry breaking remains a possibility. The nature of the symmetry breaking sector can only be established definitively by its direct discovery and detailed study in experiments at high energy colliders.
Strong $`WW`$ scattering is a generic feature of strong, dynamical electroweak symmetry breaking. The longitudinal polarization modes $`W_L`$ scatter strongly above 1 TeV because the enforcement of unitarity is deferred to the mass scale of the heavy quanta that form the symmetry breaking condensate. To the extent that QCD might be a guide to dynamical symmetry breaking we expect the $`a_{00}`$ partial wave to smoothly saturate unitarity between 1 and 2 TeV. Like the SM (standard model) Higgs boson, nonresonant strong $`WW`$ scattering would also contribute to the low energy radiative corrections probed in precision electroweak measurements. This note presents an estimate of those corrections, based on a novel representation of nonresonant strong $`WW`$ scattering as an effective-Higgs boson exchange amplitude.
Strong $`WW`$ scattering models are customarily formulated in R-gauges. The effective-Higgs representation allows them to be reexpressed gauge invariantly and, in particular, in unitary gauge. It applies to the leading $`s`$-wave amplitudes with $`I=0,2`$. The effective-Higgs representation has a significant practical advantage: it predicts the experimentally important transverse momentum distributions of the final state quark jets and the $`WW`$ diboson in the collider process $`qq\to qqWW`$, which cannot be obtained from the conventional method based on the effective $`W`$ approximation. The method has been verified numerically for tree amplitudes, and gauge (i.e., BRST) invariance has been demonstrated .
The K-matrix model is a useful model of strong $`WW`$ scattering which smoothly extrapolates the $`WW`$ low energy theorems in a way that exactly satisfies elastic unitarity. The effective-Higgs representation of the K-matrix model has a surprisingly simple form: the singularities of the propagator are simple poles in the complex $`s`$ plane, like an elementary scalar. It is then easy to compute the contribution to the $`W`$ and $`Z`$ vacuum polarization tensors from which the “oblique” corrections are obtained.
The final result for the oblique parameters $`S`$ and $`T`$ is like the SM Higgs contribution with $`m_H`$ replaced by a combination of the unitarity scales for strong scattering in the $`I=0,2`$ channels, determined in turn by the low energy theorems as noted in . $`S`$ and $`T`$ are given by
$$S=\frac{1}{18\pi }\left[\mathrm{ln}\left(\frac{16\pi v^2}{\mu ^2}\right)+\frac{1}{2}\mathrm{ln}\left(\frac{32\pi v^2}{\mu ^2}\right)\right]$$
$`(1)`$
$$T=-\frac{1}{8\pi \mathrm{cos}^2\theta _W}\left[\mathrm{ln}\left(\frac{16\pi v^2}{\mu ^2}\right)+\frac{1}{2}\mathrm{ln}\left(\frac{32\pi v^2}{\mu ^2}\right)\right]$$
$`(2)`$
where $`v^2=(\sqrt{2}G_F)^{-1}`$, $`\theta _W`$ is the weak interaction mixing angle and $`\mu `$ is the reference scale. For $`\mu =1`$ TeV the corrections are $`S\approx 0.036`$ and $`T\approx -0.11`$. Similar results follow from the cut-off nonlinear sigma model when the unitarity scales are used for the cutoffs.
In the following sections I review the K-matrix model, derive the effective scalar propagator, deduce the oblique corrections, raise some theoretical issues, and finally discuss the physical interpretation of the result.
K-matrix model for $`WW\to ZZ`$
In the SM the Higgs sector contribution to $`WW\to ZZ`$ is given by just the $`s`$-channel Higgs pole. Therefore we use the K-matrix model for $`WW\to ZZ`$ to abstract the effective-Higgs propagator. The model is summarized in this section.
As is conventional we use the ET (equivalence theorem) to define the model in terms of the unphysical Goldstone bosons, $`w^\pm `$ and $`z`$. Partial wave unitarity is conveniently formulated as
$$\mathrm{Im}\frac{1}{a_{IJ}}=-1.$$
$`(3)`$
The K-matrix model is constructed to satisfy the low energy theorems and partial wave unitarity. It is defined by
$$\frac{1}{a_{IJ}^K}=\frac{1}{R_{IJ}}-i$$
$`(4)`$
where $`R_{IJ}`$ are the real threshold amplitudes that follow from the low energy theorems,
$$R_{00}=\frac{s}{16\pi v^2}$$
$`(5a)`$
$$R_{20}=-\frac{s}{32\pi v^2}.$$
$`(5b)`$
The corresponding $`s`$-wave T-matrix amplitudes are
$$\mathcal{M}_I^K(s)=16\pi a_{I0}^K$$
$`(6)`$
for $`I=0,2`$. Finally the $`ww\to zz`$ amplitude is
$$\mathcal{M}^K(w^+w^-\to zz)=\frac{2}{3}(\mathcal{M}_0^K-\mathcal{M}_2^K)$$
$`(7)`$
Effective-Higgs propagator
To obtain the effective-Higgs propagator we “transcribe” the K-matrix model from R-gauge to U-gauge. The heart of the matter is to find the contribution of the symmetry-breaking sector in U-gauge, which encodes the dynamics specified in the original R-gauge formulation of the model. This is accomplished using the ET as follows.
Suppose that the longitudinal gauge boson modes scatter strongly. At leading order in the weak gauge coupling $`g`$ we write the amplitude for $`W_L^+W_L^-\to ZZ`$ as a sum of gauge-sector and Higgs-sector terms,
$$\mathcal{M}_{\mathrm{Total}}=\mathcal{M}_{\mathrm{Gauge}}+\mathcal{M}_{\mathrm{SB}}$$
$`(8)`$
where SB denotes the symmetry breaking (i.e., Higgs) sector. Gauge invariance ensures that the contributions to $`_{\mathrm{Gauge}}`$ that grow like $`E^4`$ cancel, leaving a sum that grows like $`E^2`$, given by
$$\mathcal{M}_{\mathrm{Gauge}}=g^2\frac{E^2}{\rho m_W^2}+\mathrm{O}(E^0,g^4)$$
$`(9)`$
where $`\rho =m_W^2/(\mathrm{cos}^2\theta _Wm_Z^2)`$. The neglected terms of order $`E^0`$ and of higher order in $`g^2`$ include the electroweak corrections to the leading strong amplitude.
The order $`E^2`$ term in equation (9) is the residual “bad high energy behavior” that is cancelled by the Higgs mechanism. It is also precisely the low energy theorem amplitude,
$$\mathcal{M}_{\mathrm{LET}}=\frac{s}{\rho v^2}=\mathcal{M}_{\mathrm{Gauge}}+\mathrm{O}(s^0,g^4)$$
$`(10)`$
using $`m_W=gv/2`$ and $`s=4E^2`$. Eqs. (8) and (9) may be used to derive the low energy theorem without invoking the ET.<sup>3</sup><sup>3</sup>3 If the symmetry breaking force is strong, the quanta of the symmetry breaking sector are heavy, $`m_{SB}\gg m_W`$, and decouple in gauge boson scattering at low energy, $`\mathcal{M}_{SB}\ll \mathcal{M}_{\mathrm{Gauge}}`$. Then the quadratic term in $`\mathcal{M}_{\mathrm{Gauge}}`$ dominates $`\mathcal{M}_{\mathrm{Total}}`$ for $`m_W^2\ll E^2\ll m_{SB}^2`$, which establishes the low energy theorem without using the ET.
Now consider an arbitrary strong scattering model, designated as model “X”, formulated in the usual way in an R-gauge in terms of the unphysical Goldstone bosons, $`\mathcal{M}_{\mathrm{Goldstone}}^\mathrm{X}(ww\to zz)`$. The total gauge boson amplitude is gauge invariant, and the ET tells us that for $`E\gg m_W`$ it is approximately equal to the Goldstone boson amplitude, i.e.,
$$\mathcal{M}_{\mathrm{Total}}^\mathrm{X}(W_LW_L)\simeq \mathcal{M}_{\mathrm{Goldstone}}^\mathrm{X}(ww)$$
$`(11)`$
in the same approximation as eq. (9). Eq. (8) holds in any gauge. Specifying U-gauge we combine it with eqs. (9-11) to obtain the U-gauge Higgs sector contribution for model X,
$$\mathcal{M}_{\mathrm{SB}}^\mathrm{X}(W_LW_L)=\mathcal{M}_{\mathrm{Goldstone}}^\mathrm{X}(ww)-\mathcal{M}_{\mathrm{LET}}.$$
$`(12)`$
The preceding result applies to any strong scattering amplitude. Now we specialize to $`s`$-wave $`WW\to ZZ`$ scattering and use eq. (12) to obtain an effective-Higgs propagator with standard “Higgs”–gauge boson couplings. Neglecting $`m_W^2\ll s`$ and higher orders in $`g^2`$ as always, the effective scalar propagator is
$$P_X(s)=-\frac{v^2}{s^2}\mathcal{M}_{\mathrm{SB}}^\mathrm{X}(W_LW_L)$$
$`(13)`$
Eqs.(10) and (12) with $`\rho =1`$ then imply
$$P_X(s)=-\frac{v^2}{s^2}\mathcal{M}_R^X(ww)+\frac{1}{s}$$
$`(14)`$
The term $`1/s`$, corresponding to a massless scalar, comes from $`\mathcal{M}_{\mathrm{LET}}`$ in eq. (12). It ensures good high energy behavior, while the other term in eq. (14) expresses the model dependent strong dynamics.
Finally we substitute the K-matrix amplitude, eq. (7), into eq. (14) to obtain the effective propagator for the K-matrix model as the sum of two simple poles
$$P_K=\frac{2}{3}\left(\frac{1}{sm_0^2}+\frac{1}{2}\frac{1}{sm_2^2}\right)$$
$`(15)`$
where $`m_0`$ and $`m_2`$ are
$$m_0^2=-16\pi iv^2$$
$`(16)`$
and
$$m_2^2=+32\pi iv^2.$$
$`(17)`$
It is surprising to find such a simple expression involving only simple poles. It is not surprising that the poles are far from the real axis, since they describe nonresonant scattering. Interpreted heuristically as Breit-Wigner poles, they correspond to resonances with widths twice as big as their masses.
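Continuing the sketch above (it reuses `V`, `M_wwzz`, and `numpy as np` from there), one can check numerically that the two-pole form of Eq. (15) is exactly equivalent to Eq. (14), with the sign conventions reconstructed here, $`P_X=-(v^2/s^2)\mathcal{M}+1/s`$:

```python
def P_K(s):
    """Effective scalar propagator, eq. (15), with the complex pole masses."""
    m0_sq = -16j * np.pi * V**2   # eq. (16)
    m2_sq = +32j * np.pi * V**2   # eq. (17)
    return (2.0 / 3.0) * (1.0 / (s - m0_sq) + 0.5 / (s - m2_sq))

s = 1.5**2
assert abs(P_K(s) - (-(V**2 / s**2) * M_wwzz(s) + 1.0 / s)) < 1e-12
```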
Oblique corrections
The oblique corrections are evaluated from the vacuum polarization diagrams that in the SM include the Higgs boson. In place of the SM propagator, $`P_{\mathrm{SM}}=1/(s-m_H^2)`$, we substitute $`P_K`$ from eq. (15). Where the SM contribution depends on the log of the Higgs boson mass, $`L_{SM}=\mathrm{ln}(m_H^2/\mu ^2)`$, we now find instead the combination $`L_K`$,
$$L_{SM}=\mathrm{ln}\left(\frac{m_H^2}{\mu ^2}\right)\;\to \;L_K=\frac{2}{3}\mathrm{ln}\left(\frac{m_0^2}{\mu ^2}\right)+\frac{1}{3}\mathrm{ln}\left(\frac{m_2^2}{\mu ^2}\right)$$
$`(18)`$
where $`m_{0,2}`$ are complex masses defined in eqs. (16-17).
The results quoted in eqs. (1-2) follow from the usual expressions for $`S,T`$ where we use the real part of $`L_K`$ in place of $`L_{SM}`$,
$$S=\frac{\mathrm{Re}\left(L_K\right)}{12\pi }$$
$`(19)`$
and
$$T=-\frac{3\mathrm{Re}\left(L_K\right)}{16\pi \mathrm{cos}^2\theta _W}$$
$`(20)`$
The imaginary part of $`L_K`$ is an artifact which we discard; it results from the fact that our approximation neglects the $`W`$ mass, as in any application of the ET. At $`q^2=0`$, where the oblique corrections are computed, there is no contribution to the imaginary part of the vacuum polarization from the relevant diagrams.
Combining the $`I=0`$ and $`I=2`$ terms in eq. (18) we have
$$\mathrm{Re}\left(L_K\right)=\mathrm{ln}\left(\frac{2^{1/3}16\pi v^2}{\mu ^2}\right).$$
$`(21)`$
Evaluating eq. (21) we find that the oblique correction from the K-matrix model is like that of a Higgs boson with mass 2.0 TeV.
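Putting in numbers (a sketch; the value of $`\mathrm{sin}^2\theta _W\simeq 0.23`$ used here is our assumption for illustration):

```python
import numpy as np

V, mu = 0.246, 1.0                 # v and the reference scale, in TeV
ReL_K = (2 / 3) * np.log(16 * np.pi * V**2 / mu**2) \
      + (1 / 3) * np.log(32 * np.pi * V**2 / mu**2)

S = ReL_K / (12 * np.pi)                          # eq. (19) -> ~ +0.036
cos2 = 1 - 0.23                                   # assumed sin^2(theta_W)
T = -3 * ReL_K / (16 * np.pi * cos2)              # eq. (20) -> ~ -0.11
m_eff = np.sqrt(2 ** (1 / 3) * 16 * np.pi) * V    # from eq. (21): ~ 2.0 TeV
print(S, T, m_eff)
```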
Questions
The $`I=2`$ component of the effective propagator has peculiar properties, perhaps due to the fact that for the $`I=2`$ channel we are representing $`t`$\- and $`u`$-channel dynamics by an effective $`s`$-channel exchange. The minus sign in the $`I=2`$ low energy theorem, eq. (5b), which may be thought of as arising from the identity $`t+u=-s`$, leads to interesting differences between the $`I=0`$ and $`I=2`$ components of the effective propagator $`P_K`$.
First, the $`I=2`$ component of the effective scalar propagator has a negative pole residue, which would correspond to a unitarity violating ghost if it described an asymptotic state (which it does not). In fact the sign is required to ensure unitarity, since it is needed to cancel the bad high energy behavior of the gauge sector amplitude which has a negative sign in the $`I=2`$ channel. In eq. (15) for $`P_K`$ the $`I=2`$ pole appears with a positive sign because of a second minus sign from the isospin decomposition, eq. (7). Neither pole of the effective propagator has a negative (ghostly) residue. In any case the amplitude is exactly unitary by construction.
The sign difference between the pole positions, $`m_0^2`$ and $`m_2^2`$ in eqs. (16) and (17), may also be traced to the phases of the low energy theorems in eq. (5). The position of $`m_0^2`$ on the negative imaginary axis of the complex $`s`$ plane corresponds to poles in the fourth and second quadrants of the complex energy plane, consistent with causal propagation as in the conventional $`m^2-iϵ`$ prescription. But the position of $`m_2^2`$ on the positive imaginary axis corresponds to poles in the first and third quadrants of the complex energy plane. This would imply acausal propagation if the poles are on the first sheet, but not if they are on the second sheet. Working in the limit of massless external particles as we are, it is not apparent on which sheet they occur.<sup>4</sup><sup>4</sup>4I thank Henry Stapp for a discussion of this point.
I conclude that the sign of the pole residue arising from the $`I=2`$ amplitude is not problematic, but that the implications of the pole position require better understanding.
Physical interpretation
We have used a convenient representation of the K-matrix model to estimate the low energy radiative corrections from strong $`WW`$ scattering. The result that the corrections are like those of a Higgs boson with mass at the unitarity scale is plausible and agrees with an earlier estimate using the cut-off nonlinear sigma model. The estimate establishes a ‘default’ radiative correction from the strongly coupled longitudinal gauge bosons in theories of dynamical symmetry breaking. In general there will be additional contributions from other quanta in the symmetry breaking sector. Those contributions are model dependent as to magnitude and sign. In computing their effect it is important to avoid double-counting contributions that are dual to the contribution considered here.
Current SM fits to the electroweak data prefer a light Higgs boson mass of order 100 GeV with a 95% CL upper limit that I will conservatively characterize as $`<300`$ GeV. Since the corrections computed here are equivalent to those of a Higgs boson with a mass of 2 TeV, they are excluded at 4.5 standard deviations. Therefore there must be additional, cancelling contributions to the radiative corrections from other quanta in the theory if strong $`WW`$ scattering occurs in nature. This would not require fine-tuning although it would require a measure of serendipity.
There are good reasons for the widespread view that a light Higgs boson is likely and for the popular designation of SUSY (supersymmetry) as The People’s Choice. But SUSY also begins to require a measure of serendipity to meet the increasing lower limits on sparticle and light Higgs boson masses. While the community of theorists has all but elected SUSY, the question is not one that can be decided by democratic processes. At the end of the day only experiments at high energy colliders can tell us what the symmetry breaking sector contains. Collider experiments, particularly those at the LHC, should be prepared for the full range of possibilities, including the capability to measure $`WW`$ scattering in the TeV region.
Acknowledgements: I wish to thank David Jackson and Henry Stapp for helpful discussions. This work was supported by the Director, Office of Energy Research, Office of High Energy and Nuclear Physics, Division of High Energy Physics of the U.S. Department of Energy under Contract DE-AC03-76SF00098.
# Small-world networks: Evidence for a crossover picture
## Abstract
Watts and Strogatz \[Nature 393, 440 (1998)\] have recently introduced a model for disordered networks and reported that, even for very small values of the disorder $`p`$ in the links, the network behaves as a small-world. Here, we test the hypothesis that the appearance of small-world behavior is not a phase transition but a crossover phenomenon which depends both on the network size $`n`$ and on the degree of disorder $`p`$. We propose that the average distance $`\ell `$ between any two vertices of the network is a scaling function of $`n/n^{*}`$. The crossover size $`n^{*}`$ above which the network behaves as a small-world is shown to scale as $`n^{*}(p\ll 1)\sim p^{-\tau }`$ with $`\tau \approx 2/3`$.
PACS numbers: 05.10.-a, 05.40.-a, 05.50.+q, 87.18.Sn
Two limiting-case topologies have been extensively considered in the literature. The first is the regular lattice, or regular network, which has been the chosen topology of innumerable physical models such as the Ising model or percolation. The second is the random graph, or random network, which has been studied in mathematics and used in both natural and social sciences .
Erdös and co-workers studied extensively the properties of random networks —see for a review. Most of this work concentrated on the case in which the number of vertices is kept constant but the total number of links between vertices increases: The Erdös-Rényi result states that for many important quantities there is a percolation-like transition at a specific value of the average number of links per vertex. In physics, random networks are used, for example, in studies of dynamical problems, spin models and thermodynamics, random walks, and quantum chaos. Random networks are also widely used in economics and other social sciences to model, for example, interacting agents.
In contrast to these two limiting topologies, empirical evidence suggests that many biological, technological or social networks appear to be somewhere in between these extremes. Specifically, many real networks seem to share with regular networks the concept of neighborhood, which means that if vertices $`i`$ and $`j`$ are neighbors then they will have many common neighbors —which is obviously not true for a random network. On the other hand, studies on epidemics show that it can take only a few “steps” on the network to reach a given vertex from any other vertex. This is the foremost property of random networks, which is not fulfilled by regular networks.
To bridge the two limiting cases, and to provide a model for real-world systems, Watts and Strogatz have recently introduced a new type of network which is obtained by randomizing a fraction $`p`$ of the links of the regular network. As in Ref. , we consider as the initial structure ($`p=0`$) the one-dimensional regular network where each vertex is connected to its $`z`$ nearest neighbors. For $`0<p<1`$, we denote these networks disordered, and keep the name random network for the case $`p=1`$. Reference reports that for a small value of the parameter $`p`$ —which interpolates between the regular ($`p=0`$) and random ($`p=1`$) networks— there is an onset of small-world behavior. The small-world behavior is characterized by the fact that the distance between any two vertices is of the order of that for a random network and, at the same time, the concept of neighborhood is preserved, as for regular lattices \[Fig.1\]. The effect of a change in $`p`$ is extremely nonlinear, as is visually demonstrated by the difference between Figs. 1a,d and Figs. 1b,e, where a very small change in the adjacency matrix leads to a dramatic change in the distance between different pairs of vertices.
Here, we study the origins of the small-world behavior. In particular, we investigate whether the onset of small-world behavior is a phase transition or a crossover phenomenon. To answer this question we consider changes not only in the value of $`p`$ but also in the system size $`n`$.
The motivation for this study is the following. In a regular one-dimensional network with $`n`$ vertices and $`z`$ links per vertex, the average distance $`\ell `$ between two vertices increases as $`n/(2z)`$ —the distance is defined as the minimum number of steps between the two vertices. The regular network is similar to Manhattan: Walking along $`5^{th}`$ Avenue from Washington Square Park at $`4^{th}`$ Street to Central Park at $`59^{th}`$ Street, we have to go past 55 blocks. On the other hand, for a random network, each “block” brings us to a point with $`z`$ new neighbors. Hence, the number of vertices increases with the number of steps $`k`$ as $`n\sim z^k`$, which implies that $`\ell `$ increases as $`\mathrm{ln}n/\mathrm{ln}z`$. The random network is then like a strange subway system that would directly connect different parts of Manhattan and enable us to go from Washington Square Park to Central Park in just one stop. In view of these facts, it is natural to enquire if the change from large-world ($`\ell \sim n`$) to small-world ($`\ell \sim \mathrm{ln}n`$) in disordered networks occurs through a phase transition at some given value of $`p`$ or if, for any value of $`p`$, there is a crossover size $`n^{*}(p)`$ below which our network is a large-world and above which it is a small-world.
In the present Letter we report that the appearance of the small-world behavior is not a phase transition but a crossover phenomenon. We propose the scaling ansatz
$$\ell (n,p)\sim n^{*}F\left(\frac{n}{n^{*}}\right),$$
(1)
where $`F(u\ll 1)\sim u`$, $`F(u\gg 1)\sim \mathrm{ln}u`$, and $`n^{*}`$ is a function of $`p`$. Naively, we would expect that when the average number of rewired links, $`pnz/2`$, is much less than one, the network should be in the large-world regime. On the other hand, when $`pnz/2\gg 1`$, the network should be a small-world. Hence, the crossover should occur at $`n^{*}p=O(1)`$, which implies $`n^{*}\sim p^{-\tau }`$ with $`\tau =1`$. This argument relies on the assumption that the crossover from large to small worlds is reached with only a small but finite fraction of rewired links. We find that the scaling ansatz (1) is indeed verified by the average distance $`\ell `$ between any two vertices of the network. We also identify the crossover size $`n^{*}`$ above which the network behaves as a small-world, and find that it scales as $`n^{*}\sim p^{-\tau }`$ with $`\tau \approx 2/3`$, distinct from the trivial expectation $`\tau =1`$.
Next, we define the model and present our results. We start from a regular one-dimensional network with $`n`$ vertices, each connected to $`z`$ neighbors. We then apply the “rewiring” algorithm of to this network. The algorithm prescribes that every link has a probability $`p`$ of being broken and replaced by a new random link. We replace the broken link by a new one connecting one of the original vertices to a new randomly selected vertex. Each of the other $`n-2`$ vertices —we exclude the other vertex of the broken link— has an a priori equal probability of being selected, but we then make sure that there are no duplicate links. Hence, the algorithm preserves the total number of links, which is equal to $`nz/2`$. A quantity that is affected by the rewiring algorithm is the probability distribution of local connectivities. For $`p\to 0`$, this probability is narrowly peaked around $`z`$, but it gets broader with increasing $`p`$. For $`p=1`$, the average and the standard deviation of the local connectivity are of the same order of magnitude and equal to $`z`$.
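A minimal transcription of this rewiring step (our own code, not the authors'; the tie-breaking details are assumptions) is:

```python
import random

def rewire(n, z, p, seed=0):
    """Ring of n vertices with z neighbors each; each link is broken with
    probability p and reconnected to a random vertex, forbidding self-links
    and duplicates, so the total link number n*z/2 is preserved."""
    rng = random.Random(seed)
    links = {tuple(sorted((i, (i + k) % n)))
             for i in range(n) for k in range(1, z // 2 + 1)}
    edges = set(links)
    for a, b in links:
        if rng.random() < p:
            edges.discard((a, b))
            while True:                      # draw until a valid new link
                c = rng.randrange(n)
                e = tuple(sorted((a, c)))
                if c != a and e not in edges:
                    edges.add(e)
                    break
    return edges
```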
Once the disordered network is created, we calculate the distance between any two vertices of the network and its average value $`\ell `$. To calculate all the distances between vertices, we use the Moore-Dijkstra algorithm, whose execution time scales with network size as $`n^3\mathrm{ln}n`$. We perform between 100 and 300 averages over realizations of the disorder for each pair of values of $`n`$ and $`p`$.
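Since the rewired links are unweighted, the same quantity can also be obtained with a plain breadth-first search from every vertex; the sketch below is our stand-in for the Moore-Dijkstra step used in the paper:

```python
from collections import deque

def mean_distance(n, edges):
    """Average shortest-path length over all connected vertex pairs."""
    adj = [[] for _ in range(n)]
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    total = pairs = 0
    for s in range(n):
        dist = [-1] * n
        dist[s] = 0
        queue = deque([s])
        while queue:                       # BFS from source s
            u = queue.popleft()
            for w in adj[u]:
                if dist[w] < 0:
                    dist[w] = dist[u] + 1
                    queue.append(w)
        total += sum(d for d in dist if d > 0)
        pairs += sum(1 for d in dist if d > 0)
    return total / pairs

# e.g. mean_distance(1000, rewire(1000, 10, 0.01))
```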
Here, we present results for three values of the connectivity, $`z=10,20`$ and $`30`$, and system sizes up to $`1000`$. The scaling ansatz (1) enables us to determine $`n^{*}(p)`$ from $`\ell (n)`$ at fixed $`p`$. Indeed, $`\ell (n\gg n^{*})\sim n^{*}\mathrm{ln}n`$, which implies that $`n^{*}`$ is the asymptotic value of $`d\ell /d(\mathrm{ln}n)`$ \[Fig. 2(a)\]. Figure 2(b) shows the dependence of $`n^{*}`$ on $`p`$ for different values of $`z`$. We hypothesize that
$$n^{}\sim \frac{1}{\mathrm{ln}z}p^{-\tau }g(p),$$
(2)
where the term in $`z`$ arises from the fact that $`\mathrm{}=\mathrm{ln}n/\mathrm{ln}z`$ for a random network ($`p=1`$), and $`g(p\to 1)\to 0`$. Moreover, $`g(p)`$ approaches a constant as $`p\to 0`$, leading to
$$n^{}\sim p^{-\tau },$$
(3)
for small $`p`$. Due to the effect of $`g`$ and the fact that $`n<1000`$ in our numerical simulations, we are constrained to estimate $`\tau `$ from the region $`2.5\times 10^{-4}<p<2\times 10^{-2}`$. For all values of $`z`$ we obtain $`\tau =0.67\pm 0.10`$ \[Fig. 2\].
Using this value of $`\tau `$ and the scaling form (1) we are able to collapse all the values of $`\mathrm{}(n,p)`$ onto a single curve \[Fig. 3\]. This data collapse confirms our scaling ansatz and estimate of $`\tau `$.
In summary, we have shown that the onset of small-world behavior is a crossover phenomenon and not a phase transition from a large-world to a small one. The crossover size scales as $`p^{-\tau }`$ with $`\tau \approx 2/3`$. The surprising fact that $`\tau <1`$ shows that the rewiring process is highly nonlinear and can have dramatic consequences on the global behavior of the network. This implies that in order to decrease the radius of a network it is necessary to rewire only a few links. We also note that the value of the exponent $`\tau `$ will likely depend on the dimensionality of the initial regular network. This point will be addressed in future work.
We believe that the disordered networks introduced in may constitute a promising topology for more realistic studies of many important problems such as flow in electric-power or information networks, spread of epidemics, or financial systems. The results reported here support this hypothesis because they suggest that, for any given degree of disorder of the network, if the system is larger than the crossover size, the network will be in the small-world regime.
We thank S.V. Buldyrev, L. Cruz, P. Gopikrishnan, P. Ivanoc, H. Kallabis, E. La Nave, T.J.P. Penna, A. Scala, and H.E. Stanley for stimulating discussions. L.A.N.A. thanks the FCT/Portugal and M.B. thanks the DGA for financial support.
# Sub-Milliarcsecond Precision of Pulsar Motions: Using In-Beam Calibrators with the VLBA
## 1 Introduction
The determination of pulsar proper motions and parallaxes using position measurements at the sub-milliarcsecond level over many years is relevant for many astrophysical questions: (1) The proper motion may indicate the birth area of the pulsar and its ejection velocity (e.g., Kaspi (1996)); (2) The unambiguous distance determined from the parallax can determine the intrinsic properties of the pulsar and calibrate dispersion-based distance measurements (e.g., Taylor et al. (1993)); (3) Comparison of pulsar positions derived from Very Long Baseline Interferometry (VLBI) with those determined from pulsar timing analysis can be used to compare the quasar reference frame and planetary ephemerides (Fomalont et al. (1984)).
Since the opening of the Very Long Baseline Array (VLBA) in 1994, we have conducted an experimental program to investigate methods for obtaining high precision pulsar motions and parallaxes. With the use of VLBI techniques, radio images are routinely made with a resolution of a few milliarcseconds. If the object being imaged is relatively bright and small in angular extent, positional accuracies of $`\sim 1`$% of the resolution are possible. However, because pulsars are generally much stronger at lower frequencies, most observations are made below 3 GHz where ionospheric refraction is large and variable, leading to position errors and image distortion. The primary goal of our program was to characterize these ionospheric errors and minimize their effect with appropriate observational and reduction techniques.
Most previous VLBI astrometric observations of pulsars used the measured group delay and rates to derive accurate positions (Gwinn et al. (1986)). This technique does not use the interferometer phase information directly, but rather the rate of change of phase with frequency (group delay) and time (delay rate). Although this technique does not require long term phase stability, the accuracy is proportional to the spanned bandwidth divided by the observing frequency, generally less than 20% compared with using the interferometer phase directly.
In order to obtain sub-milliarcsecond positional accuracy for pulsars, especially the weaker pulsars, phase connection (also known as phase-referencing) techniques must be used. With these techniques the position of a pulsar is measured with respect to an adjacent celestial source (calibrator) by alternating observations between the pulsar and calibrator (which we will call the nodding calibrator) every few minutes. Interpolation of the phase corrections derived for the calibrator source to the target source removes first order effects of instrumental and electronic delays, and unknown atmospheric, ionospheric and geometric errors (Beasley and Conway (1995)). As long as the calibrator source is stationary and unchanging, it provides a firm fiducial point over time for measuring the relative position of the pulsar position.
However, the temporal and spatial properties of this phase connection process—source/calibrator angular separation, temporal switching cycle, frequency coverage, multi-calibration sources—may limit the positional sensitivity of the technique, rather than the noise limits of the observations. Since most previous work has focused on observations at frequencies higher than 3 GHz (e.g., Beasley and Conway (1995)) where tropospheric effects dominate, additional tests were needed to examine the temporal and spatial properties of ionospheric refraction. Below a frequency of about 2 GHz, differential ionospheric delays between the calibrator and target sources may be large even for very fast switching times and small separations.
This paper discusses one particular solution to this problem – the calibration of a pulsar position using a faint radio source which is within the primary beam of the VLBA antenna—with recommendations on procedures to obtain high precision positional accuracy. The advantages of using in-beam calibrators are twofold: (1) there is no repointing of the antenna required (only recorrelation at the calibrator position), and (2) the angular separation of the target and calibrator (e.g. $`<25^{\prime }`$) minimizes the errors due to spatial variations in ionospheric delay. This technique is not limited to pulsars, of course, but is only applicable at relatively low frequencies where the chance of finding a suitable in-beam calibrator is reasonable.
## 2 The Selection of Pulsars and the Radio Observations
We selected four pulsars which were previously observed with a VLA pulsar astrometric program between 1984 and 1993 (Fomalont et al. (1992, 1996)). These pulsars were generally strong enough to be detected with the VLBA without pulsar gating and all had at least one nearby background source within the 25 m antenna primary beam region ($`<25^{\prime }`$) from the pulsar. Subsequent VLA observations at higher frequency identified those background sources with a flat radio spectrum and angular sizes less than $`2^{\prime \prime }`$. Information on these four pulsars, their nodding calibrator and the possible in-beam calibrator, is given in Table 1.
VLBA observations were made on November 9, 1994, September 23, 1995 and April 1, 1996, each day for 16 hours. These dates were chosen to maximize the parallax offset in right ascension for the sources. In order to obtain relatively long periods of phase connection data on each pulsar, we observed each pulsar and its nodding calibrator for one contiguous hour, alternating hourly among the four pulsar fields. All observation cycles were five minutes on pulsar and two minutes on calibrator, with about 30 seconds lost in slewing between two observations. This cycle time was considered to be short enough to allow interpolation of atmospheric and ionospheric phase changes between calibrator observations most of the time.
The eight independent frequency channels (called IF’s) were tuned to 1410, 1418, 1442, 1583, 1586, 1642, 1678, 1694 MHz, each with 8 MHz bandwidth, in order to span a large frequency range. It is possible to remove ionospheric refraction effects by comparing the images obtained at different frequencies, although this calibration was not needed with the in-beam calibration approach we will discuss in this paper. The pulsar data were correlated twice, at the pulsar position, and at the position of the in-beam calibrator. The data were sampled every 2 seconds and 32 frequency channels were provided for each of the 8 observing frequencies.
Two additional observations of B0919+06 were made on March 26 and March 30, 1998, as part of a larger project not originally intended for use with this in-beam project. The B0919+06 data were recorrelated at the in-beam calibrator position (J0923+068). These additional fourth and fifth epochs (only four days apart) for this pulsar can be used to estimate possible systematic errors, which cannot be determined with only three epochs. The nodding calibrator used for these observations was not 0906+015, but J0914+0245 (Beasley (1998)); however, the nodding calibrator is used only to determine the gross calibration of the observations and not for phase connection.
## 3 The Data Reduction
The first part of the calibration of these data is identical to that used for typical nodding calibration as practiced at the VLBA. These procedures are summarized below. The second part, using the weak in-beam calibration to improve further the phase calibration, is described in more detail. While this calibration method has been used and discussed previously (e.g., Marcaide and Guirado (1994), Bradshaw et al. (1999)), we are attempting to push this reduction method to fainter levels, and this requires somewhat different considerations in this part of the phase calibration.
### 3.1 Phase Connection to the Nodding Calibrator
The data reduction steps used for phase-connection between a nodding calibrator and a target source have been described by Beasley and Conway (1995). The separations of these calibrators from the pulsar fields were in the range 4 to 11 degrees, now considered too large for good phase connection at 1.4 GHz; but this was not known at the beginning of the project. In addition, we hoped to rely on the next stage, in-beam calibration, for more accurate imaging.
Images for the four pulsars and their in-beam calibrators were made after phase connection to their nodding calibrator. As summarized in Table 1, we did detect B0919+06 and B1857-26 and their in-beam calibrator, but failed to detect the B1822-09 pulsar or its in-beam calibrator. The pulsar B0950+08 was easily detected, but its in-beam calibrator was not. The images of the detected pulsars and in-beam calibrators were significantly distorted because of the large angular distance between the nodding calibrator and the pulsar field. We estimate that the position accuracy was no better than about 10 mas and most images showed several secondary peaks.
For further analysis of the strong pulsar B0950+08, where there is no in-beam calibrator available for further phase calibrations, other methods are being investigated to determine and remove the residual ionospheric phase errors and will be reported on elsewhere (Brisken et al. (1999)). More specifically, the dependence of pulsar position (or visibility phase) with frequency can be used to determine the amount of ionospheric refraction (the cause for the image distortions) which can then be removed to produce an improved image.
### 3.2 Using the In-beam Calibrator
For the pulsars B0919+06 and B1857-26, where both the pulsars and in-beam calibrators were detected, we proceeded with the next stage of phase connection between the in-beam calibrator and the pulsar (or vice-versa). Since the in-beam calibrator and/or pulsar are likely to be relatively weak, special considerations are needed, especially those needed to increase the coherence time in order to determine the phase calibration with small errors. For this reason the following section will be somewhat detailed and associated with the AIPS reduction package, generally used for VLBA calibrations.
Choose the stronger of the in-beam calibrator or the pulsar (it may be gated in order to increase the signal to noise) as the primary phase reference. If the correlation position of this source is not within about 50 mas of the true source position, shift the phase center of the data appropriately. Otherwise, phase drifts due to this large position error will decrease the effective coherence time of the data and produce phase differences between the individual frequency channels. Since we are dealing with weak sources, a more accurate phase solution can be made with longer integration times, or with combined frequency channels.
The AIPS calibration program, CALIB, determines the calibration phase $`\varphi `$ as a function of time $`t`$, frequency $`\nu `$ and telescope $`i`$, $`\varphi (t,\nu ,i)`$, for the in-beam calibrator. This program essentially determines the calibration phase needed in order to produce a point source from the existing data. Since the data have already been calibrated with respect to the nodding calibrator (and an image, even if distorted, already made), the additional phase calibrations should not be very variable in time, permitting the averaging of the data for many minutes. For this reason the initial use of the nodding calibration is important to increase the coherence time of the data associated with the in-beam calibrator in order to use weaker sources.
Before running CALIB, the expected signal to noise of the solutions should be estimated. For VLBA observations the nominal rms noise associated with a phase calibration solution, using one minute of integration, with 8 MHz bandwidth, is 20 mJy. <sup>1</sup><sup>1</sup>1See VLBA sensitivities on NRAO web site. For all subsequent calculations we will assume that the observations used the entire VLBA at 1.5 GHz, system temperature of 40K, with 8 recorded frequencies each with bandwidth of 8 MHz. This sensitivity should be scaled by the relative sensitivity of the array, the number of telescopes used for the self-calibration solution, the integration time and the total bandwidth used in the solution. For example, if the correlated flux density of the source is $`>50`$ mJy, then the phase solution for each frequency channel of 8 MHz bandwidth (there are eight of them) with one minute integration will have a signal to noise of about 2.5 to 1, which will produce an rms phase error of about $`20^{\circ }`$. Since this phase error is nearly stochastic, the averaging of eight frequency channels over long periods will average out these fluctuations.
For relatively weak in-beam calibrators, solution times of many minutes and averaging of the frequency channels are required to obtain valid solutions. As another example, a source with 5 mJy correlated flux density will require sufficient averaging to obtain a $`<2.5`$ mJy noise level for each phase integration. This would require an integration time of 8 minutes and averaging of all 8 frequency channels, each of 8 MHz, assuming the use of the VLBA. While much longer integration times and frequency averaging may increase the signal to noise of the solutions, coherence may be lost.
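These scalings are easy to tabulate. The sketch below hard-codes the 20 mJy reference value quoted above (full VLBA, one 8 MHz channel, one minute, 1.5 GHz) and the rule-of-thumb rms phase error of $`1/\mathrm{SNR}`$ radians; both numbers come from the text, while the function names and the $`1/\sqrt{Bt}`$ scaling applied here are our simplification (the array-dependent factors are held fixed).

```python
import math

def solution_noise_mjy(bw_mhz, t_min, ref_mjy=20.0):
    """RMS noise of a phase solution, scaled from the 20 mJy reference
    quoted above (one 8 MHz channel, 1 minute); noise ~ 1/sqrt(B * t)."""
    return ref_mjy / math.sqrt((bw_mhz / 8.0) * t_min)

def rms_phase_error_deg(flux_mjy, bw_mhz, t_min):
    """Rule of thumb: rms phase error of a solution ~ 1/SNR radians."""
    return math.degrees(solution_noise_mjy(bw_mhz, t_min) / flux_mjy)

# 50 mJy source, one 8 MHz channel, 1 min: SNR 2.5, roughly 20 deg rms error
print(rms_phase_error_deg(50.0, 8.0, 1.0))
# 5 mJy source: 8 channels (64 MHz) averaged over 8 min gives 2.5 mJy noise
print(solution_noise_mjy(64.0, 8.0))
```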
An illustration of the phase determination from a weak source is shown in Table 2. We have used the in-beam calibrator J1900-2602 for the pulsar B1857-26 for the 1996 data. For a range of parameters, we have taken the phase calibration determined from J1900 and applied it to B1857 which was then imaged. Because this calibration method determines those phases which make J1900 a point source, the image quality obtained for J1900 is no longer relevant. If the phase calibration is sound, then the image of B1857 should display a reasonable point source.
The averaging time for the solutions ranged from 1 to 20 minutes with a solution made for EACH frequency channel, or for ALL frequency channels combined for better sensitivity. In all cases the resultant in-beam source looked point-like and its peak flux density was approximately equal to the expected solution noise per averaging time. In other words, the phase determination algorithm does produce a point source even with noisy data. However, when this phase calibration is transferred to the pulsar and images are made, the peak flux density and the quality of the pulsar images indicate the accuracy of the phase calibration. The solutions with at least five minutes solution time, with all frequencies added together, produced good pulsar images. The slight decrease in pulsar peak flux from 5-min integration to 20-min integration may be caused by loss of coherence over this relatively long period of time. The images in which all frequencies have been averaged with a solution interval of 5, 10 or 20 minutes are acceptable and do not differ in the location of the peak (See Table 2).
## 4 THE RADIO IMAGES AND ASTROMETRIC RESULTS
The procedure outlined above was used, with a calibration solution interval of five minutes with all eight frequencies averaged together. The peak flux density, and its spread over the three observations, of the in-beam calibrator for B0919+06 and for B1857-26 are given in Table 1. Both were substantially unresolved. A typical calibrated phase solution is shown in Figure 1 for the 1996 observations of J1900-2602, the in-beam calibrator for B1857-26.
After these in-beam calibration phases were applied to the pulsar data, images were made for each of the three epochs. They all showed essentially a point source. The images were then CLEANED with tight boxes. This process does increase the positional accuracy somewhat by removing the distortions associated with the point-spread function and permitting a better check on the quality of the image. The position of the pulsar was determined from a Gaussian-fit to the image. The position error is proportional to the resolution divided by the signal-to-noise of the peak of the pulsar.
Since the pulsar observations are tied to the same calibrator in all three observations, they are on the same position grid and the results from the three images can be directly compared to show the motion of the pulsar. Figure 2 shows the composite image for B0919+06, where we have simply summed the three epoch images. This increases the noise background by a factor 1.7, but is illustrative of the general results.
Figure 3 shows the similar results for B1857-26. Since the pulsar was relatively weak during the 1994 observation (upper component) and the locations of the 1995 and 1996 positions were relatively close, we did not use a simple sum of the three epochs to obtain this image. This figure is composed of the representative parts of the images from the three epochs of B1857-26, with no overlapping. Both figures are illustrative, with the proper motion and parallax fits made on the positions derived from each observation.
The results for all five epochs for B0919+06 are listed in Table 3. With the additional two observations in 1998, a better analysis of the accuracy of this experiment can be ascertained. For simplicity we have listed the relative position of B0919+06 for the five observations; these relative positions are with respect to a nominal position of B0919+06. All positions have been tied to the in-beam calibrator J0923+0638 with the assumed position given in the table.
For B0919+06 the fit to the five epochs gives $`\mu _\alpha =17.7\pm 0.3`$ mas/yr, $`\mu _\delta =79.2\pm 0.5`$ mas/yr, $`\pi =0.31\pm 0.14`$ mas. In Figure 4 the position of the pulsar for the five epochs, after removal of the best-fit proper motion and position, is compared with the parallax of 0.31 mas, shown by the sinusoid. The error bars are those expected from the image noise and are equal to the image resolution divided by the signal to noise at the peak of the pulsar. Since three parameters have been determined (pulsar position, proper motion and parallax) using four well-separated epochs, there is only one degree of freedom, making the fit look better than it really is. The agreement of the two 1998 observations, separated by four days, is also better than expected. The north/south motion of the pulsar is much less sensitive to the parallax since the observations were preferentially scheduled at maximum E/W parallax signal. The N/S scatter from the best position and proper motion is considerably larger than that for the E/W direction.
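The astrometric fit itself is a linear least-squares problem once the parallax factors at each epoch are known. The following sketch (unweighted for brevity; the real fit would weight each epoch by its positional error, and the parallax factors would come from a solar-system ephemeris; all names are ours) solves simultaneously for position offset, proper motion, and parallax:

```python
import numpy as np

def fit_astrometry(t_yr, ra_mas, dec_mas, f_ra, f_dec):
    """Solve ra(t) = ra0 + mu_ra*t + pi*f_ra(t), and likewise for dec.
    t_yr: epochs in years; ra_mas/dec_mas: offsets from a nominal position;
    f_ra/f_dec: parallax factors per epoch (displacement for pi = 1 mas)."""
    t = np.asarray(t_yr, dtype=float)
    n = t.size
    A = np.zeros((2 * n, 5))      # columns: ra0, dec0, mu_ra, mu_dec, pi
    A[:n, 0] = 1.0; A[:n, 2] = t; A[:n, 4] = f_ra
    A[n:, 1] = 1.0; A[n:, 3] = t; A[n:, 4] = f_dec
    b = np.concatenate([ra_mas, dec_mas])
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return params                 # (mas, mas, mas/yr, mas/yr, mas)
```

With five epochs and five unknowns in each coordinate pair, the residuals of such a fit directly expose systematic errors of the kind discussed below.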
Our measured distance of B0919+06 is 3.2 (+2.6,-1.0) kpc and is consistent with a limit of $`>3`$ kpc determined by Taylor et al. (1993). The previous estimate of this pulsar’s proper motion is $`\mu _\alpha =13\pm 29`$ mas/yr, $`\mu _\delta =64\pm 37`$ mas/yr, consistent with, but an order of magnitude less accurate than, the present VLBA results (Harrison et al. (1993)).
The best fit proper motion and parallax for B1857-26, using just three epochs, is $`\mu _\alpha =19.9\pm 0.3`$ mas/yr, $`\mu _\delta =47.3\pm 0.9`$ mas/yr, $`\pi =0.5\pm 0.6`$ mas. The VLA results (Fomalont et al. (1996)) give $`\mu _\alpha =26\pm 5`$ mas/yr, $`\mu _\delta =47\pm 6`$ mas/yr, in excellent agreement with the VLBA results. The distance limit of $`>0.9`$ kpc derived from these observations is consistent with the distance of 1.7 kpc derived by Taylor et al. (1993).
## 5 DISCUSSION
The parallax and proper motion obtained for B0919+06 and B1857-26 are among the most accurate yet obtained for pulsars or other galactic objects (e.g., Bradshaw et al. (1999)). The precision limits are consistent with the signal to noise of the observations. The additional epochs for B0919+06 clearly improve the precision and decouple the proper motion and parallax solutions, and we suggest that a minimum of five well-separated epochs should be considered for obtaining accurate parallaxes. The consistency of the data with the fit for B0919+06 suggests that systematic errors at the level of 0.1 or 0.2 mas are not significant when using in-beam calibrators within about $`10^{\prime }`$ from the target source.
From analysis now underway on determining and removing the ionospheric effects associated with B0950+08 (Brisken et al. (1999)), we estimate that the ionospheric refraction can lead to a systematic error of about 5 mas for a source-calibrator separation of about $`7^{\circ }`$. Assuming that this systematic error is caused by the differential ionospheric refraction between the calibrator and source, it should decrease linearly with the source-calibrator separation since the residual phase is a coherent difference rather than a stochastic difference. For a $`12^{\prime }`$ separation we would expect such errors to be about 0.15 mas in size. This value is about the rms level of accuracy of the present experiment. With the additional epochs and the use of pulsar gating, systematic errors (probably caused by differential ionospheric refraction) may start to dominate the errors. However, removal of the ionospheric content by using the image changes over the frequency range 1.4 to 1.7 GHz may reduce this error.
When attempting to reach 0.1 mas astrometric precision, the variability in structure of the calibrator source can introduce uncertainties at this level. Many bright sources are 10 mas in size, with variable core flux densities and moving components. The cores often shift with frequency because of optical depth effects. Reaching astrometric limits which are only 1% of the source angular size can be difficult. Weaker calibrator sources, at the 10 mJy level, also tend to be smaller in angular size than their stronger counterparts, because of the $`10^{12}`$ K Compton limit for extragalactic radio sources.
A general rule is that the closer the phase calibrator is to the target source, the higher the quality of the images made with phase referencing. The strength of the calibrator is secondary as long as it can be detected. In other words, a source which is just barely detected (say 3-sigma for a solution) will produce a phase error of about 10 degrees which will be stochastic since it is determined by random noise processes. In contrast, the use of a very strong calibrator, further away from the target, to obtain phase solutions will have virtually no phase error component caused by noise, but systematic phase errors of tens of degrees may persist over many solution intervals and limit the dynamic range of the resulting images of the target source. Thus, the use of in-beam calibrators which are weak generally provides better astrometric accuracy and image fidelity. These in-beam calibrators also provide simultaneous calibration of the target source, whereas nodding calibrator phases must be interpolated in time before being applied to the target source.
The problem with routinely using in-beam calibrators is that the field of view for which two sources can be simultaneously observed is limited. For the VLBA at 1.5 GHz, with a maximum separation of $`25^{\prime }`$ of target and calibrator (both positioned somewhat within the half-power circle of the primary beam), the NVSS Catalog (Condon et al. (1998)) contains an average of 20 sources above 2.5 mJy in such an area, and about eight sources above 5.0 mJy. At these flux density levels, however, many sources may not have sufficient correlated flux density to be detectable at 5000 km baselines. With the present sensitivity limits of the VLBA, a source should have about 5 mJy correlated flux density to be detectable with 64 MHz bandwidth. Weaker sources can be detected using the phased VLA with the VLBA with 128 MHz bandwidth. If the pulsar is detectable, then an in-beam calibrator is useful as the reference even with a correlated flux density of 1 mJy. Observations are now underway to search several pulsar fields for possible in-beam calibrators and to determine the proportion of faint mJy sources which are detectable with VLBI resolution at 1.5 GHz.
The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. This research at Cornell is supported by an NSF grant AST 95-28391.
# Resonance-induced effects in photonic crystals
## I Introduction
Photonic crystals, i.e., dielectrics with spatial periodicity, have triggered much interest recently . One can picture a photonic crystal as a periodic arrangement of dielectric scatterers. For example, its dielectric constant $`\epsilon (𝐫)`$ equals $`\epsilon _s`$ for $`𝐫`$ inside the scatterer and $`\epsilon _b`$ otherwise. Similarly to the case of an electron moving in a periodic potential, a photon traveling in a photonic crystal encounters a periodically changing dielectric constant. As a result, a gap can open in the electromagnetic wave spectrum, independent of its polarization and direction of propagation . In a given frequency interval the density of states (DOS) can either be reduced down to zero (photonic band gap) or enhanced with respect to its vacuum value. Such a change in the DOS affects many physical quantities. The most transparent is the change in the spontaneous emission (SE) rate of embedded atoms and molecules. This can be demonstrated already at a relatively low refractive index contrast ($`\approx 1.15`$) . Suppression of the SE may have applications for semiconductor lasers, solar cells, heterojunction bipolar transistors, and thresholdless lasers . On the other hand, the enhancement of the SE is a way to create new sources of light for ultra-fast optical communication systems . Unlike conventional (electronic) crystals, photonic crystals are essentially man-made structures and their parameters can be changed at will. There is a common belief that in the near future such systems will allow us to perform many functions with light that ordinary crystals do with electrons.
Thus far, the main emphasis in the study of photonic crystals has been on calculating the band structure . Let $`f`$ be the scatterer filling fraction, i.e., the volume of the scatterer(s) in the unit cell per unit cell volume. Once $`f`$ is fixed, the spectrum is only a function of the dielectric contrast $`\delta =\mathrm{max}(\epsilon _s/\epsilon _b,\epsilon _b/\epsilon _s)`$. In our paper we pose the question of to what extent the single sphere resonance frequencies are related to band gaps, whether a gap width can be enlarged due to nearby resonances, and what other effects, if any, single-scatterer resonances may have on the band structure and properties of a photonic crystal.
In the following, we focus on the case of a simple face-centered-cubic (fcc) photonic crystal of homogeneous spheres. There are at least two reasons to consider this case. Firstly, single-sphere resonances, known also as Mie resonances , are well understood and an analytic solution exists for them. In each angular-momentum channel characterized by the angular-momentum number $`l`$ a single sphere has an infinite number of Mie’s resonances. The properties of Mie’s resonances are discussed in many monographs (see, for example, ) and we only emphasize the following ones. The sharpness of a Mie resonance decreases with increasing $`\sigma r_s`$, where $`r_s`$ is the sphere radius, $`\sigma =\omega \sqrt{\epsilon _b}/c`$, $`\omega `$ is the frequency, and $`c`$ is the speed of light in vacuum. On the other hand, reflecting the centrifugal barrier increasing with $`l`$ as $`l(l+1)`$, the sharpness increases with $`l`$. The spacing between resonances in the higher frequency range is accounted for by the resonance condition $`\sigma r_s=m\pi /2`$, where $`m`$ is an integer (if $`\epsilon _b=1`$).
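For a homogeneous sphere the resonance positions can be located numerically from the standard Mie coefficients. The sketch below follows the Bohren-Huffman conventions (our choice; $`a_l`$ are the electric and $`b_l`$ the magnetic multipoles, so sharp peaks of $`|b_1|^2`$ mark the 1M resonances) and scans $`|b_1|^2`$ for local maxima as a function of the size parameter $`\sigma r_s`$; the example parameters are illustrative.

```python
import numpy as np
from scipy.special import spherical_jn, spherical_yn

def mie_ab(l, m, x):
    """Mie coefficients a_l (electric) and b_l (magnetic) for a homogeneous
    sphere; m = sqrt(eps_s/eps_b), x = size parameter sigma * r_s."""
    psi = lambda z: z * spherical_jn(l, z)
    dpsi = lambda z: spherical_jn(l, z) + z * spherical_jn(l, z, derivative=True)
    h = lambda z: spherical_jn(l, z) + 1j * spherical_yn(l, z)
    dh = lambda z: (spherical_jn(l, z, derivative=True)
                    + 1j * spherical_yn(l, z, derivative=True))
    xi = lambda z: z * h(z)
    dxi = lambda z: h(z) + z * dh(z)
    mx = m * x
    a = ((m * psi(mx) * dpsi(x) - psi(x) * dpsi(mx))
         / (m * psi(mx) * dxi(x) - xi(x) * dpsi(mx)))
    b = ((psi(mx) * dpsi(x) - m * psi(x) * dpsi(mx))
         / (psi(mx) * dxi(x) - m * xi(x) * dpsi(mx)))
    return a, b

x = np.linspace(0.2, 6.0, 4000)
m = np.sqrt(12.0)                      # e.g. eps_s = 12 in a vacuum background
w = np.abs([mie_ab(1, m, xv)[1] for xv in x]) ** 2
peaks = x[1:-1][(w[1:-1] > w[:-2]) & (w[1:-1] > w[2:])]
print(peaks)                           # size parameters of the 1M resonances
```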
Secondly, fcc structures of homogeneous spheres are among the most promising candidates to achieve a full photonic gap at optical and near-infrared frequencies. Indeed, in this frequency range one often uses colloidal systems of microspheres which can self-assemble into three-dimensional fcc crystals with excellent long-range periodicity . This long-range periodicity gives rise to the strong optical Bragg scattering, clearly visible to the naked eye, and already described in 1963 . Both the case of “dense” spheres ($`\epsilon _s>\epsilon _b`$) and “air” spheres ($`\epsilon _s<\epsilon _b`$) can be realized experimentally.
The outline of our paper is as follows. In the next section, main features of the method that is used to calculate the spectrum of electromagnetic waves in periodic structures are discussed. We also discuss the differences between and similarities of the electronic and electromagnetic bands and give their rough classification. In section III we summarize our numerical results on the effects induced by resonant scattering in a simple fcc crystal of dielectric spheres. Contrary to the suggestions made previously in the literature , no spectacular effects may be expected. Finally, in section IV our conclusions are presented.
## II Resonance scattering and band structure of photonic crystals
Multiple scattering of classical waves in the presence of resonant scatterers was first studied in a disordered medium. There, resonances cause a scattering delay due to the storage of wave energy inside a single scatterer, resulting in a sharp decrease (depending on the filling fraction) of the transport velocity $`v_E`$ of light . The discussion of resonance-induced effects in an ordered medium was initiated by John . He suggested that coinciding resonance and Bragg scattering is the most favourable condition for opening a gap in the spectrum. In the case of spheres this leads to $`f=1/(2\sqrt{\delta })`$ . Later on, Zhang and Satpathy noticed that a pseudogap in the band structure of an fcc lattice of dense spheres corresponds to a Mie resonance. Recently, Ohtaka and Tanabe , using the KKR method described earlier in , made an attempt to relate the Mie resonances to the photonic bands. They paid attention to flat bands which correspond to “heavy photons” in analogy to their electronic analogue, heavy fermions. For heavy photons, the group velocity can be as low as $`c/100`$ . The role of Mie resonances in the bonding of spheres in photonic crystals was investigated by Antonoyiannakis and Pendry .
In our paper, we shall attempt to investigate the relation complementary to that discussed by Ohtaka and Tanabe , namely, to what extent the single sphere resonance frequencies are related to band gaps, a question asked first by John . In obtaining his result, John used a simplified picture in which, regardless of the propagation and polarization, the photon always encounters precisely the same periodic $`\epsilon (𝐫)`$ variation, resulting in a one-dimensional Kronig-Penney (KP) model . However, this approximation is only partially justified in the short-wavelength limit when light in a sphere behaves like a field in a slab with thickness $`2r_s`$. For real crystals the situation is different and the above coincidence condition can only be met in restricted regions on the surface of the Brillouin zone. Also, with increasing dielectric contrast $`\delta `$, dispersion curves $`\omega =\omega (𝐤)`$ become nonlinear and the diffraction condition is modified as compared to the Bragg case and fulfilled at lower frequency . Moreover, the model neglects polarization dependent effects.
However, it is quite difficult to relax the simplifying assumptions of the one-dimensional KP model made by John . In our discussion of resonance-induced effects in photonic crystals we shall employ the machinery of the on-shell multiple-scattering theory (MST) (see for electrons and for photons). Note that the on-shell MST is already required in the one-dimensional KP model if one wants to go beyond the dispersion relation and obtain Green’s function or the local density of states . The unique feature of the on-shell MST is that, for nonoverlapping (muffin-tin) scatterers (the present situation), it disentangles single-scattering and multiple-scattering effects (see for a recent discussion). For example, the total T-matrix per unit cell can be written in the form
$$T=1/(R^{-1}-B)=R/(1-BR).$$
(1)
Here all quantities are ordinary matrices. $`R`$ stands for the on-shell reaction matrix (also known as the k-matrix ) of the scattering sphere (of all scatterers inside the lattice unit cell in the case of a complex lattice). It is diagonal in the angular-momentum basis and can be written as $`R_{LL^{}}=\delta _{LL^{}}\mathrm{tan}\eta _L/\sigma `$, where $`\eta _L`$ is a phase shift. Here $`L`$ is a composite index which labels all the spherical harmonics in the irreducible representation of the rotation group characterized by the principal angular-momentum number $`l`$ and, in the case of electromagnetic waves, it carries an additional polarization dependent index . $`B=B(\sigma ,𝐤)`$ in Eq. (1) is a matrix of so-called structure constants which accounts for the periodicity of the lattice. It depends on $`\sigma `$ and the Bloch momentum $`𝐤`$. The R-matrix is singular at Mie resonance frequencies and $`B(\sigma ,𝐤)`$ is singular whenever $`\sigma ^2=(𝐤+𝐊_n)^2`$, where $`𝐊_n`$ is a vector of the dual lattice.
The exact eigenmodes of a crystal are determined by poles of the total T-matrix, the latter being the zeros of the determinant of the hermitian matrix $`R^{-1}-B`$,
$$det(R^{-1}-B)=0.$$
(2)
Equation (2) is the familiar Korringa-Kohn-Rostoker (KKR) equation in band structure theory. Recently, we have successfully used this approach to calculate the band structure of electromagnetic waves in a simple fcc lattice of dielectric spheres , to establish a simple analytic formula describing the width of the lowest lying stop gap in the $`(111)`$ crystal direction (the L point of the Brillouin zone ) , and to calculate the properties of the local DOS in one dimension . In general, the higher the frequency, the higher the value of $`l_{max}`$ needed to ensure convergence. In order to reproduce the first band and the linear part of the spectrum, $`l_{max}=1`$ is enough. In general, the size of a secular equation is reduced by almost a factor $`10`$ compared with that in the plane-wave method which customarily requires well above a thousand plane waves. The precision of the elements of the secular equation is determined by the standard Ewald summation which yields structure constants up to six digits.
At first sight, there seem to be large differences in band formation between electrons and electromagnetic waves. In the case of electrons, in the medium between two atoms, waves are evanescent in nature, while an electromagnetic wave propagates unattenuated between two scatterers. However, electrons are strongly interacting with each other. If the interactions are taken into account, an effective-Hamiltonian single-electron picture emerges of a near-free electron with a positive energy which moves unattenuated between two scatterers like the electromagnetic wave does . Therefore, the same principles apply to the classification of the electronic and the photonic bands. For electrons, a rough classification of the bands can be obtained if the singularities of $`R`$ and $`B`$ are well separated . Thus if $`R(\sigma )`$ is sufficiently small near a singularity of $`B(\sigma ,𝐤)`$, where $`\sigma =\sqrt{E}`$ and $`E`$ is the electron energy, a band is formed with the dispersion relation
$$\omega (𝐤)\approx |𝐤+𝐊_n|+R(\sigma ).$$
(3)
In more intuitive terms, the formation of such a band results from the formation of standing waves in a crystal and it is appropriate to call such a band the Bragg band. The second type of band can form near a singularity of $`R`$ at $`\omega _0`$, $`R=\mathrm{\Gamma }/(\omega ^2-\omega _0^2)`$. This is, for example, the case in the transition and noble metals. If $`\mathrm{\Gamma }`$ and $`B(\sigma ,𝐤)`$ in the vicinity of $`\omega _0`$ are sufficiently small, a (usually very narrow) resonance band is formed with the dispersion relation
$$\omega (𝐤)\approx [\omega _0^2+\mathrm{\Gamma }B(\sigma _0,𝐤)]^{1/2}.$$
(4)
Eq. (4) is in agreement with the observation made in that the resonance-band width is comparable with the lifetime broadening of the DOS profile for a single sphere. In more intuitive terms, formation of the resonance band can be understood as resulting from the broadening of individual resonances when they start to feel the presence of each other, similar to the formation of the electronic bands from individual atomic levels in the tight-binding limit . Such usually very narrow resonance bands describe “heavy photons”.
This shows that one can associate with a resonance a (resonance) band. Is it possible to associate with a resonance a gap? The answer is yes. However, a hybridization of bands must take place. We speak of hybridization if the singularities of $`R`$ and $`B`$ cannot be well separated and neither a pure Bragg nor a pure resonance band is formed. Under certain conditions the two bands can hybridize in such a way as to create a gap over the approximate energy range of the original unhybridized resonance band. An example of such a hybridization is provided by transition metals with characteristic $`d`$ (or $`f`$) resonance which couples with extended band states by tunneling . In the latter case a broad $`sp`$ band hybridizes with a narrow $`d`$ band in such a way as to create a gap over the approximate energy range of the original unhybridized $`d`$ resonance band .
In the following section, we investigate the hybridization of photonic bands in an fcc lattice of dielectric spheres. Although the same principles apply to the classification of the photonic bands as to those for electrons, this does not mean that the respective band structures are qualitatively similar. It is significantly more difficult to open a gap in the spectrum of electromagnetic waves than in the case of electrons. Moreover, a gap often does not open between the lowest lying bands, as in the case of electrons, but in an intermediate region. The origin of this difference lies in a different behaviour of $`R`$ and $`B`$ and can be rather easily understood. Indeed, in the tight-binding picture of band formation , individual bands result from the broadening of corresponding atomic levels when the atoms start to feel the presence of each other. The largest gap between atomic levels is between the lowest-lying energy levels. Therefore, for a lattice of atoms, one expects to find a gap essentially between the first and the second energy band, with the gap between higher bands scaling down to zero. On the other hand, for a dielectric scatterer and Maxwell’s equations, bound states are absent. They are replaced by resonances. Moreover, if the wavelength is small compared to the size of the spheres, one can use geometric optics, while in the opposite limit of long wavelengths, the Rayleigh approximation applies. In neither case does a gap open in the spectrum. Therefore, if a gap is present in the spectrum, it should be in the intermediate region between the two limiting cases (see, however, the case of a diamond lattice (, figure 2), which is a complex lattice). The same applies to the localization of light which is also expected at some intermediate frequencies.
Since opening a gap in the spectrum of electromagnetic waves is much more difficult than in the electronic case, one also expects that hybridization will be weaker in the former case. These expectations are confirmed in the following section.
## III Results for an fcc lattice of dielectric spheres
In our case of an fcc crystal of homogeneous spheres, we looked for a correspondence between Mie resonance frequencies and (i) the lowest lying stop gap in the (111) crystal direction (the L point of the Brillouin zone ) in the case of both “dense” spheres ($`\epsilon _s>\epsilon _b`$) and “air” spheres ($`\epsilon _s<\epsilon _b`$), (ii) the full gap in the case of “air” spheres. (Note that there is no full gap in the case of “dense” spheres .) Apart from numerous experimental data now available , there are at least two other reasons to choose the L-gap. First, the width of the first stop gap often takes on its maximum at the L point and, second, experimental techniques make it possible to grow colloidal crystals such that the L direction corresponds to normal incidence on the crystal surface.
In the case of “dense” spheres we found that for sufficiently high dielectric contrast $`\delta `$ the L-gap can be associated with the 1M1 resonance. (Our notation $`lAn`$ for a Mie resonance is such that $`l`$ is the angular momentum, $`A`$ stands for either electric (E) or magnetic (M) mode, and $`n`$ is the order of the resonance with increasing frequency in a given $`lA`$ channel.) For all filling fractions one finds that as $`\delta `$ increases over a critical value (see Figure 1), the 1M1 resonance “descends” from above to the L-gap and stays inside it close to the midgap frequency. We verified this behaviour for $`\delta `$ up to $`100`$.
In the case of “air” spheres the hybridization of bands is much less pronounced compared to the case of “dense” spheres. We observe a correspondence between the L-gap and a Mie resonance only for particular filling fractions and dielectric contrasts. A filling fraction $`f\approx 0.5`$ or higher is required for hybridization to occur. However, for a given $`f`$, the required dielectric contrast for the onset of hybridization can be as little as half the value for the dense sphere case. For $`f=0.6`$ one finds the 1E1 resonance trapped inside the L-gap already for $`\delta \gtrsim 8`$. In the close-packed case the lowest lying 1E1 resonance descends actually below the L-gap and, if $`\delta \gtrsim 16`$, a hybridization occurs at the frequency corresponding to the second, 1M1, resonance.
Hybridization in the case of the lowest L-gap is only partial since the gap does not extend over the whole Brillouin zone. Let us, therefore, look at the full band gap which can be opened only in the air sphere case (one can open just a single full gap here ). In contrast to that of the L-gap, the opening of the full gap requires a certain threshold dielectric contrast which rapidly increases as $`f`$ decreases from the close-packed case (see figure 2). Hybridization follows the irregular pattern seen in the case of the L-gap. For example, for $`f=0.6`$ hybridization occurs if $`\delta \gtrsim 36`$ where the 2M1 resonance descends from above to the full gap, soon followed by the 3E1 resonance (2M1 $`<`$ 3E1). Both resonances seem to be locked inside the full gap (at least up to $`\delta =100`$). Interesting behaviour is found for $`f=0.64`$. First, the full gap opens at $`\delta \approx 10.6`$ between the closely lying 2M1 and 3E1 resonances. As $`\epsilon _b`$ increases, the 3E1 resonance moves across the gap and stays closely below the lower edge of the full gap. Hybridization around the 3E1 resonance occurs only for $`\delta \in (11.5,13)`$. Outside this interval of $`\delta `$ no resonance is inside the full gap. The case in which $`f=0.68`$ shows an anomalous behaviour: no resonance frequency is inside the full gap or close to the full gap edge. On the other hand, in the close-packed case ($`f\approx 0.74`$) one can find three closely lying resonances, namely, 3M1, 4E1, and 1E2 (3M1 $`<`$ 4E1 $`<`$ 1E2), inside the full gap. 3M1 descends to the full gap around $`\epsilon _b=12`$, soon followed by the 4E1 and 1E2 resonances. For $`\epsilon _b>16`$ all three resonances are already inside the full gap and seem to remain trapped there (at least up to $`\epsilon _b=100`$).
Apart from the hybridization, another intriguing issue is that of whether the width of a gap can be enlarged due to the Mie resonances as the one-dimensional KP model suggests . Once a resonance was inside a gap we did not find any significant effect on the gap width. A good illustration of this is the behaviour of the relative width $`\mathrm{\Delta }^r`$ of the full gap (the full gap width divided by the midgap frequency) at $`f=0.68`$ on the one hand and $`f=0.6,\mathrm{\hspace{0.17em}0.64}`$ and $`0.74`$ (close-packed) on the other hand (see figure 2). In the first case, the gap does not correspond to any Mie resonance, while in the second case, there are up to three Mie resonances inside the gap. Had there been some effect of a Mie resonance on the gap width, one would have observed either a sudden increase in $`\mathrm{\Delta }^r`$ after hybridization sets in, or an anomalously small gap width for $`f=0.64`$ (if $`\delta >13`$) and $`f=0.68`$. Instead, one sees a monotonic increase of $`\mathrm{\Delta }^r`$ as $`\delta `$ and $`f`$ increase. However, something different may happen if a Mie resonance frequency is close to a gap edge, before hybridization sets in. Under certain circumstances one can observe a resonance-induced widening of a relative gap width by up to $`5\%`$. Figure 3 shows the relative L-gap width $`\mathrm{\Delta }_L^r`$ for air spheres and $`f=0.6`$. Around $`\delta =7.99`$ the 1E1 resonance crosses the upper edge of the L-gap, which results in a local enhancement of $`\mathrm{\Delta }_L^r`$ by $`5\%`$. As a function of $`\delta `$, this widening occurs within the very narrow interval $`\delta \in (7.986,7.998)`$ where it shows a flat peak. In the air-sphere case one can be sure that this widening of a gap can be entirely attributed to a resonance. As shown recently , the lowest lying L-gap for an fcc lattice of dielectric spheres can be understood in terms of simple quantities, namely, the volume averaged dielectric constant, $`\overline{\epsilon }=f\epsilon _s+(1-f)\epsilon _b,`$ the volume averaged $`\epsilon ^2(𝐫)`$, $`\overline{\epsilon ^2}=[f\epsilon _s^2+(1-f)\epsilon _b^2],`$ and the effective dielectric constant $`\epsilon _{eff}`$. The effective dielectric constant can be well approximated by Maxwell-Garnett’s formula ,
$$\epsilon _{eff}\approx \epsilon _b(1+2f\alpha )/(1-f\alpha )$$
(5)
where, for a homogeneous sphere, the polarizability factor
$$\alpha =(\epsilon _s-\epsilon _b)/(\epsilon _s+2\epsilon _b).$$
(6)
Then the absolute L-gap width $`\mathrm{}_L`$ is approximated within 3-6% (depending on $`f`$) by the formula
$$\mathrm{}_L=C(f)\left(\sqrt{\overline{\epsilon ^2}}-\epsilon _{eff}\right)^{1/2}/\overline{\epsilon }$$
(7)
where
$$C(f)=C_0+0.14f(2f_m-f)/f_m^2.$$
(8)
Here $`C_0\approx 0.74`$ is the minimal value of $`C`$ and $`f_m`$ is the filling fraction for which $`C(f)`$ takes on its maximal value. $`C(f)`$ takes on its minimal value $`C_0`$ at the extreme filling fractions $`f=0`$ and $`f=0.74`$, and its maximal value is $`C_m\approx 0.88`$ at $`f_m\approx 0.74/2`$. The factor $`0.14`$ in the interpolation formula (8) is the difference $`C_m-C_0`$. Let $`k_L`$ be the length of the Bloch vector at the $`L`$ point of the Brillouin zone. In units where the length of the side of the conventional unit cell of the cubic lattice is $`A=2`$, one has $`k_L/\pi =\sqrt{0.75}`$. The formula
$$\mathrm{\Delta }_L^r=2\pi n_{eff}\mathrm{}_L/k_L$$
(9)
then describes $`\mathrm{\Delta }_L^r`$ for $`1\le \delta \le 100`$ with the relative error ranging from $`4\%`$ for $`f`$ around $`0.2`$, to $`8\%`$ for the close-packed case. Formula (9) describes $`\mathrm{\Delta }_L^r`$ in the air sphere case irrespective of whether there is ($`f\gtrsim 0.5`$) or is not ($`f<0.5`$) a Mie resonance within the L-gap frequencies. However, the formula is violated in the narrow interval $`\delta \in (7.986,7.998)`$ where the local widening of $`\mathrm{\Delta }_L^r`$ takes place.
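For reference, the analytic estimate of Eqs. (5)-(9) amounts to a few lines of code. The sketch below is a literal transcription in the unit convention of the text ($`A=2`$, so $`k_L=\pi \sqrt{0.75}`$), taking $`n_{eff}=\sqrt{\epsilon _{eff}}`$, which the text leaves implicit; the example parameters are ours.

```python
import numpy as np

def l_gap_relative_width(eps_s, eps_b, f):
    """Relative L-gap width from Eqs. (5)-(9)."""
    eps_bar = f * eps_s + (1.0 - f) * eps_b            # volume-averaged eps
    eps2_bar = f * eps_s**2 + (1.0 - f) * eps_b**2     # volume-averaged eps^2
    alpha = (eps_s - eps_b) / (eps_s + 2.0 * eps_b)    # Eq. (6)
    eps_eff = eps_b * (1 + 2 * f * alpha) / (1 - f * alpha)   # Eq. (5)
    f_m = 0.74 / 2.0
    C = 0.74 + 0.14 * f * (2.0 * f_m - f) / f_m**2     # Eq. (8)
    gap = C * np.sqrt(np.sqrt(eps2_bar) - eps_eff) / eps_bar  # Eq. (7)
    k_L = np.pi * np.sqrt(0.75)
    return 2.0 * np.pi * np.sqrt(eps_eff) * gap / k_L  # Eq. (9)

# air spheres (eps_s = 1) in an eps_b = 9 background at f = 0.6
print(l_gap_relative_width(1.0, 9.0, 0.6))
```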
## IV Conclusion
Using the photonic analog of the KKR method , we have investigated Mie-resonance-induced effects for an ordered medium, namely, for a simple fcc lattice of homogeneous spheres in three dimensions. Our work partially fills the gap in the understanding of resonance-induced effects compared to a disordered medium . We showed that the same principles apply to the classification of both electronic and electromagnetic bands in a periodic medium, although their respective qualitative behaviour may be different. For example, due to the absence of bound states for a single scatterer, it is much more difficult to open a full gap in the spectrum of electromagnetic waves than for the case of electrons. Still, under certain conditions, one can identify a resonance band, and a hybridization of the Bragg and resonance bands can take place, leaving behind a gap over the approximate energy range of the original unhybridized resonance band . We investigated both a partial hybridization in the case of the first stop gap in the (111) crystal direction (the L-gap) and a full hybridization in the case of the full band gap. For dense spheres ($`\epsilon _s>\epsilon _b`$), partial hybridization around the lowest Mie resonance occurs for all filling fractions $`f`$, once the dielectric contrast $`\delta `$ reaches a critical value $`\delta _c(f)`$ (see Figure 1). For air spheres ($`\epsilon _s<\epsilon _b`$), the hybridization follows an irregular pattern and it can be observed only if $`f\gtrsim 0.5`$. However, the value $`\delta _c(f)`$ can be as little as half the corresponding $`\delta _c(f)`$ in the dense-sphere case. Near close packing ($`f\approx 0.74`$), the partial hybridization occurs around the second Mie resonance. The full hybridization in the case of air spheres (there is no full gap in the dense sphere case ) follows again the irregular pattern seen in the case of partial hybridization. Sometimes the full gap opens in a frequency region which does not correspond to any Mie resonance ($`f=0.68`$), but in other situations there are up to three Mie resonances in the frequency range corresponding to the full gap ($`f=0.74`$). The resonance-induced widening of a gap can occur if a Mie resonance is about to cross the edge of the gap (see Figure 3). The values of the dielectric contrast required to observe some of these effects are within experimental reach at microwaves and, in the air-sphere case, even at optical and near-infrared frequencies. However, unlike in the case of the one-dimensional Kronig-Penney model , we did not find any evidence that if some Mie resonance frequencies fall inside a gap, this leads to its significant widening. Contrary to the suggestions made previously in the literature , no spectacular effects may be expected. This is probably related to the known fact that the hybridization of bands in higher dimensions is weaker . Also, since the opening of a gap in the spectrum of electromagnetic waves is much more difficult than in the electronic case, it is natural that hybridization effects are also weaker in the photonic case.
## V Acknowledgement
We would like to thank A. van Blaaderen and B. Noordam for careful reading of the manuscript and useful comments, and members of the photonic crystals interest group for discussion. This work is part of the research program of the Stichting voor Fundamenteel Onderzoek der Materie (Foundation for Fundamental Research on Matter) which was made possible by financial support from the Nederlandse Organisatie voor Wetenschappelijk Onderzoek (Netherlands Organization for Scientific Research). SARA computer facilities are also gratefully acknowledged.
# Traveling time and traveling length in critical percolation clusters
## Abstract
We study traveling time and traveling length for tracer dispersion in two-dimensional bond percolation, modeling flow by tracer particles driven by a pressure difference between two points separated by Euclidean distance $`r`$. We find that the minimal traveling time $`t_{min}`$ scales as $`t_{min}\sim r^{1.33}`$, which is different from the scaling of the most probable traveling time, $`\stackrel{~}{t}\sim r^{1.64}`$. We also calculate the length of the path corresponding to the minimal traveling time and find $`\mathrm{}_{min}\sim r^{1.13}`$ and that the most probable traveling length scales as $`\stackrel{~}{\mathrm{}}\sim r^{1.21}`$. We present the relevant distribution functions and scaling relations.
47.55.Mh, 05.60.Cd, 64.60.Ak
The study of flow in porous media has many applications, such as hydrocarbon recovery and ground-water pollution . Here we study an incompressible flow on two-dimensional bond percolation clusters at criticality where fluid is injected at point $`A`$ and recovered at point $`B`$ separated from point $`A`$ by Euclidean distance $`r`$. At time $`t=0`$ we add a passive tracer at the injection point . We investigate the scaling properties of the distributions of traveling time, traveling length, minimal traveling time, and the length of the path corresponding to the minimal traveling time of the tracer particles. We find new dynamical scaling exponents associated with these distributions.
Our first step is to calculate the pressure difference across each bond by solving Kirchhoff’s law, which is equivalent to solving the Laplace equation. The velocity across a given bond is proportional to the pressure difference across the bond; we normalize the velocities assuming the total flow between $`A`$ and $`B`$ is fixed, independent of the distance between $`A`$ and $`B`$ and the realization of the porous media .
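Solving Kirchhoff's law on the cluster amounts to one sparse linear solve for the node pressures. A minimal sketch of this step (our own construction, not the authors' code): unit bond conductances are assumed, and `nodes`/`bonds` must describe the cluster connected to both terminals so that the system is nonsingular.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def bond_velocities(nodes, bonds, a, b, p_a=1.0, p_b=0.0):
    """Node pressures from Kirchhoff's law with the pressure fixed at the
    injection node a and the recovery node b; returns v_ij = p_i - p_j
    for every bond (unit conductances)."""
    idx = {u: k for k, u in enumerate(nodes)}
    n = len(nodes)
    L = sp.lil_matrix((n, n))
    for i, j in bonds:                    # graph Laplacian of the cluster
        ii, jj = idx[i], idx[j]
        L[ii, ii] += 1.0
        L[jj, jj] += 1.0
        L[ii, jj] -= 1.0
        L[jj, ii] -= 1.0
    rhs = np.zeros(n)
    for u, val in ((a, p_a), (b, p_b)):   # Dirichlet rows at the terminals
        k = idx[u]
        L.rows[k], L.data[k] = [k], [1.0]
        rhs[k] = val
    p = spla.spsolve(L.tocsr(), rhs)
    return {(i, j): p[idx[i]] - p[idx[j]] for i, j in bonds}
```

The resulting velocities can then be rescaled so that the total outflow from the injection point is the same for every realization, as described above.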
We simulate the flow of tracers using a particle-launching algorithm (PLA) , where a tracer particle starting from the injection point $`A`$ travels through the medium along a path connected to recovery point $`B`$ . The probability $`p_{ij}`$ that a tracer particle at node $`i`$ selects an outgoing bond $`(ij)`$ is proportional to the velocity of flow on that bond; $`p_{ij}=v_{ij}/_kv_{ik}`$, where the $`k`$ summation should be taken over all outgoing bonds, i.e., for $`v_{ik}>0`$. In this process, the time taken to pass through the bond $`(ij)`$ is inversely proportional to the velocity of that bond, i.e., $`t_{ij}=1/v_{ij}`$.
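Given these velocities, the particle-launching algorithm is a biased random walk. The sketch below (ours; it assumes the `velocity` map from the previous sketch, queried in both orientations with opposite signs) accumulates a time of $`1/v_{ij}`$ per traversed bond and returns the traveling time and length of one tracer; flow conservation guarantees that every visited node has outgoing flow until $`b`$ is reached.

```python
import random

def launch_tracer(velocity, neighbors, a, b, seed=1):
    """One tracer from a to b: at each node choose an outgoing bond with
    probability proportional to its velocity; the transit time of bond
    (i, j) is 1/v_ij.  Returns (traveling time, traveling length)."""
    rng = random.Random(seed)
    v_dir = lambda i, j: velocity[(i, j)] if (i, j) in velocity else -velocity[(j, i)]
    node, t, steps = a, 0.0, 0
    while node != b:
        out = [(j, v_dir(node, j)) for j in neighbors[node] if v_dir(node, j) > 0]
        r = rng.random() * sum(v for _, v in out)
        acc = 0.0
        for nxt, v in out:
            acc += v
            if r <= acc:
                break
        t += 1.0 / v
        steps += 1
        node = nxt
    return t, steps
```

Repeating this for many tracers and realizations builds up the histograms $`P(\stackrel{~}{t})`$ and $`P(\stackrel{~}{\mathrm{}})`$; keeping, for each realization, the fastest tracer gives an estimate of $`t_{min}`$ and of the corresponding path length.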
We measure the distributions, $`P(\stackrel{~}{t})`$ and $`P(\stackrel{~}{\mathrm{}})`$, of the traveling time $`\stackrel{~}{t}`$ and the traveling length $`\stackrel{~}{\mathrm{}}`$ between $`A`$ and $`B`$ for $`10000`$ tracer particles for each realization. We sample over $`10000`$ different realizations with the two points $`A`$ and $`B`$ fixed. For each realization, we also find the minimal traveling time and the path which corresponds to the minimal traveling time to obtain $`P(t_{min})`$ and $`P(\mathrm{}_{min})`$. We run the simulation for system size $`L\times L`$ where $`L=1000\gg r`$, and find a well-defined region where the distributions follow the scaling form
$$P(x)=A_x\left(\frac{x}{x^{}}\right)^{-g_x}f\left(\frac{x}{x^{}}\right)$$
(1)
where $`x`$ denotes $`\mathrm{}_{min}`$, $`t_{min}`$, $`\stackrel{~}{\mathrm{}}`$ or $`\stackrel{~}{t}`$. The normalization constant is given by $`A_x\sim (x^{})^{-1}`$ and we find the scaling functions to be of the form $`f(y)=\mathrm{exp}(-a_xy^{-\varphi _x})`$. The maximum of the probability is at $`x^{}`$. Simulation shows that $`x^{}`$ has a power-law dependence on the distance $`r`$,
$$x^{*}\sim r^{d_x}.$$
(2)
The exponents $`\varphi _x`$ and $`d_x`$ are related by $`\varphi _x=1/(d_x-1)`$ . The scaling function $`f`$ decreases sharply when $`x`$ is smaller than $`x^{*}`$. The lower cutoff is due to the fact that the traveling distance cannot be smaller than the distance $`r`$.
The path which takes minimal time is not always the shortest path. However we find that the distribution of $`\mathrm{}_{min}`$ coincides with the distribution of the chemical lengths between points separated by distance $`r`$ studied in detail in Ref. .
In Figs. $`1(a)`$, $`2(a)`$, and $`3(a)`$, we show the log-log plots of the distributions $`P(t_{min})`$, $`P(\stackrel{~}{\mathrm{}})`$, and $`P(\stackrel{~}{t})`$, respectively. For different distances $`r=4,8,16,32,64`$, and $`128`$, we determine the characteristic size $`x^{*}`$ as the peak of the distribution. In Figs. $`1(b)`$, $`2(b)`$, and $`3(b)`$, we plot $`x^{*}`$ versus distance $`r`$ on a double logarithmic scale, and linear fitting yields the exponents $`d_x`$ for each distribution. In Figs. $`1(c)`$, $`2(c)`$, and $`3(c)`$ we collapse the data by rescaling $`x`$ by its characteristic size $`x^{*}`$. All distributions are consistent with the scaling form of Eq. (1). The measured values of the scaling exponents are summarized in Table I.
As shown in Fig. 2(b), the most probable traveling length $`\stackrel{~}{\mathrm{}}^{*}`$ scales as $`\stackrel{~}{\mathrm{}}^{*}\sim r^{d_\stackrel{~}{\mathrm{}}}`$ where $`d_\stackrel{~}{\mathrm{}}=1.21\pm 0.02`$. Note that $`d_\stackrel{~}{\mathrm{}}`$ is significantly different from the minimal path exponent $`d_{min}=1.130\pm 0.002`$ , while it is within the error bars of the exponent for the optimal path in random energy landscapes, $`d_{opt}=1.2\pm 0.02`$ , and the shortest path in invasion percolation with trapping, $`d_{opt}=1.22\pm 0.01`$ .
In many transport problems, the characteristic time scales with the characteristic length with a power law, $`t^{*}\sim (\mathrm{}^{*})^z`$. Since $`t^{*}`$ scales as $`r^{d_t}`$ and $`\mathrm{}^{*}`$ scales as $`r^{d_{\mathrm{}}}`$, it is reasonable to assume that $`z=d_t/d_{\mathrm{}}`$. Combining this relation, the relation $`t\sim \mathrm{}^z`$, Eq. (1), and the identity $`P(\mathrm{}_{min})d\mathrm{}_{min}=P(t_{min})dt_{min}`$, we obtain scaling relations between exponents,
$$(g_{\mathrm{}_{min}}-1)d_{\mathrm{}_{min}}=(g_{t_{min}}-1)d_{t_{min}}$$
(3)
This scaling relation is well satisfied by the set of scaling exponents given in Table I.
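As a quick arithmetic check of Eq. (3), here is a sketch using only the central values and quoted uncertainties of Table I (simple independent-error propagation is our assumption):

```python
import math

# l_min and t_min rows of Table I: d_x, its error, g_x, its error
d_l, sd_l, g_l, sg_l = 1.13, 0.01, 2.14, 0.05
d_t, sd_t, g_t, sg_t = 1.33, 0.05, 1.90, 0.05

lhs = (g_l - 1) * d_l                        # (g_lmin - 1) d_lmin
rhs = (g_t - 1) * d_t                        # (g_tmin - 1) d_tmin
s_lhs = math.hypot(sg_l * d_l, (g_l - 1) * sd_l)
s_rhs = math.hypot(sg_t * d_t, (g_t - 1) * sd_t)
print(f"{lhs:.2f} +/- {s_lhs:.2f}  vs  {rhs:.2f} +/- {s_rhs:.2f}")
# -> 1.29 +/- 0.06  vs  1.20 +/- 0.08: equal within one standard deviation
```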
Because of flow conservation, the velocity at distance $`r^{\prime }`$ from point $`A`$ should scale as the inverse of the number of bonds at this distance, which scales as $`(r^{\prime })^{d_B-1}`$, where $`d_B`$ is the fractal dimension of the transport backbone. Then the traveling time for a particle to travel the distance $`r`$ is given by
$$\stackrel{~}{t}^{*}(r)\sim \int _0^r\frac{1}{v(r^{\prime })}𝑑r^{\prime }\sim r^{d_B}.$$
(4)
Note that $`\stackrel{~}{t}^{*}(r)`$ is the most probable traveling time in our system, so we obtain the scaling relation $`d_{\stackrel{~}{t}}=d_B`$. Thus, the most probable traveling time is characterized by the transport backbone dimension of the medium. This result is consistent with the homogeneous case, where $`d_B=2`$. The most recently reported value for the fractal dimension of the backbone is $`d_B=1.6432\pm 0.0008`$ for $`d=2`$, which is in agreement with our results (Table I).
The minimal traveling time is the sum of inverse velocities over the fastest path, where, as noted above, the fastest path is statistically identical to the shortest path. While the velocity distribution has been studied extensively (e.g., it is known to be multifractal), the velocities along the path are correlated, so how the minimal traveling time distribution is related to the local velocity distribution is an open challenge for further research.
We thank A. Coniglio, D. Stauffer, and especially M. Barthélémy for fruitful discussions, and BP Amoco for financial support. We also thank J. Koplik and S. Redner for discussions concerning the limitation of a PLA.
| $`x`$ | $`d_x`$ | $`g_x`$ |
| --- | --- | --- |
| $`\mathrm{}_{min}`$ | $`1.13\pm 0.01`$ | $`2.14\pm 0.05`$ |
| $`t_{min}`$ | $`1.33\pm 0.05`$ | $`1.90\pm 0.05`$ |
| $`\stackrel{~}{\mathrm{}}`$ | $`1.21\pm 0.02`$ | $`2.00\pm 0.05`$ |
| $`\stackrel{~}{t}`$ | $`1.64\pm 0.02`$ | $`1.62\pm 0.05`$ |
Table I. Results for the exponents. Our $`d=2`$ results for $`d_{\mathrm{}_{min}}`$ and $`g_{\mathrm{}_{min}}`$ are within error bars of $`d_{min}`$ and $`g_{\mathrm{}}^{}`$ in Ref. . For comparison, the theoretical values of $`d_x`$ and $`g_x`$ for $`d=6`$ are all $`2`$.
# Clustering of volatility as a multiscale phenomenon
## I Introduction
One of the most challenging problems in finance is the stochastic characterization of market returns. This topic not only has academic relevance but also an obvious technical interest. Think, for example, of option pricing models, where the distribution and correlations of volatility play a central role.
It is now well established that returns of the most important indices and foreign exchange markets have a distribution with fat tails, and that they are uncorrelated on lags larger than a single day, in agreement with the hypothesis of an efficient market. On the contrary, the distribution of volatility and its correlations are still poorly understood. What is known is that absolute returns (which are a measure of volatility) have memory over a long time range; this phenomenon is known in the financial literature as clustering of volatility. Recent studies provide strong evidence for power-law correlations of absolute returns . Notice that in the ARCH-GARCH approach volatility memory is longer than a single time step but decays exponentially, which implies that ARCH-GARCH modeling is inappropriate. Indeed, GARCH models have been extended in order to take these long-memory properties into account .
In this paper we analyze the daily returns of the New York Stock Exchange (NYSE) composite index from January 1966 to June 1998, and the US Dollar/Deutsch Mark (USD-DM) noon buying rates certified by the Federal Reserve Bank of New York from October 1989 to September 1998. We not only find that volatility correlations are power laws on long time scales, up to a year for the NYSE index and six months for the USD-DM exchange rate, but, more importantly, that they exhibit a non-unique exponent (multiscaling). This kind of multiscale phenomenology is known to be relevant in fully developed turbulence and in disordered systems , and it has recently been pointed out for a financial series . Our result is based on the fluctuation analysis of a new class of variables that we call generalized cumulative absolute returns.
The second main result of the paper is the study of the volatility probability distribution, which is derived by means of a Fourier transform analysis. It is shown to be well approximated by a log-normal distribution for the NYSE index, while a log-normal shape is a reasonable fit only around the maximum for the USD-DM rate.
The paper is organized as follows: in section II we show that volatility has a long memory by considering the autocorrelation of absolute returns. Nevertheless, the power-law behavior cannot be inferred by simply considering autocorrelations. In order to obtain sharper evidence for the nature of the long-memory phenomenon, in section III we perform a scaling analysis on the standard deviation of a new class of observables, the generalized cumulative absolute returns. This analysis implies power-law correlations with a non-unique exponent. In section IV the attention is focused on the volatility probability distribution, computed from the returns data by means of a Fourier transform analysis, which turns out to be log-normal at least for the NYSE index. In section V some final remarks can be found.
## II Correlations for returns
We consider the New York Stock Exchange (NYSE) daily composite index closes (January 1966 to June 1998) and the US Dollar/Deutsch Mark (USD-DM) noon buying rates certified by the Federal Reserve Bank of New York (October 1989 to September 1998). In the first case the dataset contains $`8180`$ quotes, in the second $`2264`$. The quantity we consider is the (de-trended) daily return, defined as
$$r_t=\mathrm{log}\frac{S_{t+1}}{S_t}-\left\langle \mathrm{log}\frac{S_{t+1}}{S_t}\right\rangle $$
(1)
where $`S_t`$ is the index quote or the exchange quote at time $`t`$. The time $`t`$ ranges from 1 to $`N`$, where $`N`$ is the total number of quotes ($`8180`$ for the NYSE index and $`2264`$ for the USD-DM exchange rate). The notation $`\langle \rangle `$ indicates the average over the whole sequence of $`N`$ data.
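In code, the de-trended return series of Eq. (1) is obtained in one step (a minimal sketch; `S` stands for the array of daily quotes and the function name is ours):

```python
import numpy as np

def detrended_returns(S):
    """r_t = log(S_{t+1}/S_t) - <log(S_{t+1}/S_t)>, Eq. (1),
    with <.> the average over the whole sequence."""
    logret = np.diff(np.log(np.asarray(S, dtype=float)))
    return logret - logret.mean()
```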
As pointed out by several authors , the distribution of returns is leptokurtic. In , a symmetric Lévy stable distribution was first proposed, and more recently in it has been argued that the distribution is Lévy stable except for the tails, which are approximately exponential. The estimate is that a Gaussian shape is recovered only on longer scales, typically for monthly returns.
Let us introduce the autocorrelation for returns, defined as
$$C(L)=\left\langle r_tr_{t+L}\right\rangle -\left\langle r_t\right\rangle \left\langle r_{t+L}\right\rangle .$$
(2)
A direct numerical analysis of (2) for the NYSE index (fig. 1a) and for the USD-DM rate (fig. 1b) shows that the returns autocorrelation is a vanishing quantity for all $`L`$. This simple evidence could lead to the wrong conclusion that the description is complete, i.e. that returns are i.i.d. variables whose distribution is a truncated Lévy. The situation is much more complicated: in fact, even if the returns autocorrelation vanishes, one cannot conclude that returns are independent variables. Independence implies that all functions of returns are uncorrelated variables. This is known to be false; in fact, volatility has a long memory. On the other hand, the daily volatility is not directly observable, and information about it can be derived by means of the absolute returns $`\left|r_t\right|`$.
It is useful to consider the following autocorrelation for powers of absolute returns
$$C(L,\gamma )=\left\langle \left|r_t\right|^\gamma \left|r_{t+L}\right|^\gamma \right\rangle -\left\langle \left|r_t\right|^\gamma \right\rangle \left\langle \left|r_{t+L}\right|^\gamma \right\rangle .$$
(3)
This quantity is plotted for $`\gamma =1`$ in fig. 1a (NYSE index) and in fig. 1b (USD-DM exchange rate). Unlike the returns autocorrelation, it turns out to be a non-vanishing quantity, at least up to $`L\sim 150`$ (see and the references therein). This is clear evidence that it is not correct to assume returns to be independent random variables.
On the other hand, figs. 1 cannot give a satisfactory answer about the shape of the absolute returns autocorrelations. In fact, the data show a wide spread compatible with different scaling hypotheses. In figs. 1 we have reported two power-law functions with exponents derived from the scaling analysis, which will be performed in the next section. The proposed interpolations are consistent with the numerical data.
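For reference, the empirical estimator of Eq. (3) behind figs. 1 can be written in a few lines (a sketch; `r` is the de-trended return series):

```python
import numpy as np

def abs_return_autocorr(r, L, gamma=1.0):
    """Estimator of C(L, gamma), Eq. (3), from a return series r."""
    x = np.abs(np.asarray(r)) ** gamma
    a, b = x[:-L], x[L:]
    return (a * b).mean() - a.mean() * b.mean()
```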
## III Scaling analysis
In the previous section we have seen that, consistently with the efficient market hypothesis, daily returns have no autocorrelations on lags larger than a single day. This fact can also be checked by means of a scaling analysis. Consider the cumulative returns $`\varphi _t(L)`$, defined as the sum of $`L`$ successive returns $`r_t,\mathrm{\dots },r_{t+L-1}`$, divided by $`L`$
$$\varphi _t(L)=\frac{1}{L}\underset{i=1}{\overset{L}{}}r_{t+i-1}=\frac{1}{L}\left[\mathrm{log}\frac{S_{t+L}}{S_t}-L\left\langle \mathrm{log}\frac{S_{t+1}}{S_t}\right\rangle \right].$$
(4)
One can define $`N/L`$ non overlapping variables of this type, and compute the associated variance $`Var\left(\varphi (L)\right)`$. Assuming that $`r_t`$ are uncorrelated (or short range correlated), it follows that $`Var\left(\varphi (L)\right)`$ has a power-law behavior with exponent $`\alpha =1`$ for large $`L`$ (see Appendix A), i.e.
$$Var\left(\varphi (L)\right)\sim L^{-1}.$$
(5)
The exponent $`\alpha `$ both for the NYSE index and USD-DM exchange market turns out to be around 1 (see figs. 2 and also see ), confirming that returns are uncorrelated.
On the contrary, this is not true for other quantities related to absolute returns. In order to perform the appropriate scaling analysis, let us introduce the generalized cumulative absolute returns defined as the sum of $`L`$ successive returns $`\left|r_t\right|^\gamma ,\mathrm{\dots },\left|r_{t+L-1}\right|^\gamma `$, divided by $`L`$
$$\varphi _t(L,\gamma )=\frac{1}{L}\underset{i=1}{\overset{L}{}}\left|r_{t+i-1}\right|^\gamma $$
(6)
where $`\gamma `$ is a real exponent and, again, these quantities are not overlapping.
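The whole fluctuation analysis of this section amounts to a block-average computation; a minimal sketch (window ranges and names are our illustrative choices):

```python
import numpy as np

def variance_scaling(r, gamma, windows):
    """Var(phi(L, gamma)) of Eq. (6), computed over non-overlapping
    windows of length L, for each L in `windows`."""
    x = np.abs(np.asarray(r)) ** gamma
    out = []
    for L in windows:
        n = len(x) // L                       # number of non-overlapping blocks
        phi = x[:n * L].reshape(n, L).mean(axis=1)
        out.append(phi.var())
    return np.array(out)

# alpha(gamma) is minus the slope of a log-log fit, cf. Eq. (7):
# windows = np.arange(10, 251)
# slope, _ = np.polyfit(np.log(windows),
#                       np.log(variance_scaling(r, 1.0, windows)), 1)
# alpha = -slope
```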
In appendix A we show that if the autocorrelation for powers of absolute returns (3) exhibits a power law with exponent $`\alpha (\gamma )\le 1`$ for large $`L`$, i.e. $`C(L,\gamma )\sim L^{-\alpha (\gamma )}`$, it would imply that
$$Var\left(\varphi (L,\gamma )\right)\sim L^{-\alpha (\gamma )}.$$
(7)
On the contrary, if the $`\left|r_t\right|^\gamma `$ are short-range correlated or power-law correlated with an exponent $`\alpha (\gamma )>1`$, we would not detect anomalous scaling in the analysis of variance, i.e. $`Var\left(\varphi (L,\gamma )\right)\sim L^{-1}`$.
Our numerical analysis shows very sharply an anomalous power-law behaviour, after a very short transient time, in the range up to one year ($`L=250`$) for the NYSE index (fig. 2a), and up to six months ($`L=150`$) for the USD-DM exchange market (fig. 2b). For larger $`L`$ the number of non-overlapping variables $`\varphi (L,\gamma )`$ becomes too small for a statistical analysis, as revealed also by the increasing fluctuations of the variance $`Var\left(\varphi (L,\gamma )\right)`$ as a function of $`L`$. The best-fit straight lines are computed in the range $`10\le L\le 250`$ for the NYSE index, and in the range $`10\le L\le 150`$ for the USD-DM rate.
The crucial result is that $`\alpha (\gamma )`$ is not a constant function of $`\gamma `$, showing the presence of different anomalous scales. The interpretation is that different values of $`\gamma `$ select different typical fluctuation sizes, each of them being power-law correlated with a different exponent. The case $`\gamma =0`$ corresponds to the cumulative logarithm of absolute returns. Approximately, in the region $`\gamma \gtrsim 4`$ the averages are dominated by only a few events, corresponding to very large returns, and therefore the statistics becomes insufficient.
In fig. 3, $`\alpha (\gamma )`$ is plotted as a function of $`\gamma `$ with error bars for both cases. In the NYSE index case, the exponent $`\alpha (\gamma )`$ exhibits a large spread, reaching an ordinary scaling exponent $`\alpha (\gamma )=1`$ for $`\gamma \gtrsim 4`$. On the contrary, the USD-DM exponent turns out to be less variable, rising slowly towards $`\alpha (\gamma )=1`$.
We would like to stress that the scaling analysis in figs. 2 definitively proves the power-law behaviour and precisely determines the coefficients $`\alpha (\gamma )`$, while a direct analysis of the autocorrelations (as in figs. 1) would not have provided analogously clear evidence for multiscale power-law behaviour, since the data show a wide spread compatible with different scaling hypotheses.
The anomalous power-law scaling can eventually be tested against the plot of the autocorrelations. For instance, the autocorrelations of $`r_t`$ and of $`|r_t|`$ are plotted in figs. 1 as a function of the correlation length $`L`$, and the full line, which is in good agreement with the data, is not a best fit but a power law whose exponent $`\alpha (1)`$ is obtained from the scaling analysis of the variance.
## IV Distribution of volatility
All the discussion in the previous section concerns absolute returns. An obvious question is: what is the relation with volatility? The answer is not completely trivial since, from an operative point of view, the volatility is often assumed to coincide with the intra-day absolute cumulative return or, alternatively, with the implied volatility which can be extracted from option prices.
Our point of view is that the exact definition of volatility cannot be independent of the theoretical framework. It is usually assumed that the volatility $`\sigma _t`$ is defined by
$$r_t=\sigma _t\omega _t$$
(8)
where the $`\omega _t`$ are identically distributed random variables with vanishing average and unitary variance. The usual choice for the distribution of the $`\omega _t`$ is the normal Gaussian. This picture is completed by assuming the probabilistic independence between $`\sigma _t`$ and $`\omega _t`$.
In other terms, the returns series can be considered as a realization of a random process based on a zero-mean Gaussian, with a standard deviation $`\sigma _t`$ that changes at each time step. According to the above definition, all the scaling properties we have found for absolute returns directly apply to volatility.
Volatility $`\sigma _t`$ is a hidden variable, since we can directly evaluate only daily returns. Nevertheless, in appendix B we show how to derive the volatility probability distribution $`p(\sigma )`$ starting from the returns series. The key point is to move the problem to the space of the characteristic functions (Fourier transforms).
The probability distribution $`p(\sigma )`$ is plotted in fig. 4, both for the NYSE index and the USD-DM exchange rate. The results corresponding to extreme values of volatility ($`\sigma \sim 0`$ and $`\sigma \gtrsim 0.02`$) are not reliable due to insufficient statistics.
The astonishing fact is that the NYSE volatility distribution is well fitted by a log-normal distribution
$$p(\sigma )=\frac{1}{\sqrt{2\pi }s\sigma }\mathrm{e}^{-\frac{1}{2}\left(\frac{\mathrm{log}\sigma -m}{s}\right)^2}.$$
(9)
The fit is performed in the range $`0.0035\le \sigma \le 0.01`$ and gives $`m=-4.94\pm 0.01`$ and $`s=0.44\pm 0.01`$, while the USD-DM volatility distribution is consistent with a log-normal distribution with $`m=-5.27\pm 0.01`$ and $`s=0.54\pm 0.01`$ only in a narrow region around the maximum ($`0.0025\le \sigma \le 0.005`$).
This unexpected log-normal shape of the volatility distribution suggests the existence of some underlying multiplicative process for volatility, at least for the NYSE index. This result implies that not only are index prices multiplicative processes, but so are the associated returns. On the other hand, the USD-DM rate analysis might be affected by insufficient statistics, which leads to an over-estimation of the distribution tail in the range $`\sigma \gtrsim 0.01`$. Under this hypothesis, a log-normal shape could be consistent with the USD-DM volatility distribution, and an underlying multiplicative process might be present also for foreign exchange returns.
A possible and reasonable attempt to explain this peculiar behaviour of the volatility distribution can be found in , where a multiplicative cascade process for volatility is proposed, borrowing well-known arguments from turbulence theory.
## V Conclusions
The first result we have found is that the scaling of the variance of the generalized cumulative absolute returns is a power law with a non-unique exponent, for both the NYSE daily index and the USD-DM exchange rate. This fact implies power-law correlations whose exponent depends on the variable considered. The main theoretical consequence is that models with exponential correlations, like ARCH-GARCH, fail to describe the dynamics of financial markets, and that new models should account for the coexistence of long memory with different scales.
The second result is that volatility distribution is log-normal, at least for NYSE index. This fact suggests that volatility itself evolves as a multiplicative process.
These two results show the existence of an underlying process that drives daily returns, and indicate that new models of financial markets have to treat returns as a process subordinated to volatility.
Acknowledgements
We thank Roberto Baviera, Rosario Mantegna and Angelo Vulpiani for many interesting conversations concerning data analysis and models for dynamics of prices.
## Appendix A
In this appendix we show that if the correlations $`C(L,\gamma )`$ exhibit a long-range memory, $`C(L,\gamma )\sim L^{-\alpha (\gamma )}`$, then the variance $`Var(\varphi (L,\gamma ))`$ of the generalized cumulative absolute returns also behaves at large $`L`$ as $`L^{-\alpha (\gamma )}`$.
The explicit expression of variance is
$$Var\left(\varphi (L,\gamma )\right)=\frac{1}{L^2}\underset{i=1}{\overset{L}{}}\underset{j=1}{\overset{L}{}}\left[\left\langle |r_{t+i}|^\gamma |r_{t+j}|^\gamma \right\rangle -\left\langle |r_{t+i}|^\gamma \right\rangle \left\langle |r_{t+j}|^\gamma \right\rangle \right].$$
Taking into account that $`r_t`$ is a stationary process, and using the definition of $`C(L,\gamma )`$ (3), one has:
$$Var\left(\varphi (L,\gamma )\right)=\frac{1}{L}C(0,\gamma )+\frac{2}{L^2}\underset{L\ge i>j\ge 1}{}C(i-j,\gamma )$$
where
$$C(0,\gamma )=\left\langle |r_t|^{2\gamma }\right\rangle -\left\langle |r_t|^\gamma \right\rangle ^2.$$
The previous expression can be rewritten as
$$Var\left(\varphi (L,\gamma )\right)=\frac{1}{L}C(0,\gamma )+\frac{2}{L^2}\underset{i=1}{\overset{L-1}{}}(L-i)C(i,\gamma ).$$
Under the hypothesis $`C(L,\gamma )\sim L^{-\alpha (\gamma )}`$, one has for large $`L`$
$$\frac{2}{L^2}\underset{i=1}{\overset{L-1}{}}(L-i)C(i,\gamma )\sim L^{-\alpha (\gamma )}$$
which leads to
$$Var\left(\varphi (L,\gamma )\right)=O(L^{-1})+O(L^{-\alpha (\gamma )}).$$
For our data $`\alpha (\gamma )\le 1`$, and then
$$Var\left(\varphi (L,\gamma )\right)\sim L^{-\alpha (\gamma )}.$$
On the contrary, if $`\alpha (\gamma )>1`$ or, worse, correlations exhibit a faster decay, the variance $`Var(\varphi (L,\gamma ))`$ would be a power law with scaling exponent equal to $`-1`$.
A similar argument can be repeated for the cumulative returns $`\varphi (L)`$. In this case, since the correlation decays fast, we have
$$Var\left(\varphi (L)\right)\sim L^{-1}.$$
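The exact decomposition of the block variance used above can be checked numerically on any stationary series; a sketch with synthetic short-range-correlated data (the series and all sizes are our illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
# moving average of i.i.d. noise: stationary, correlated up to lag w
noise = rng.normal(size=200_000)
w = 50
x = np.convolve(noise, np.ones(w) / w, mode="valid")

L = 100
n = len(x) // L
var_direct = x[:n * L].reshape(n, L).mean(axis=1).var()

# C(0) plus lagged covariances, then the decomposition derived above
C = [x.var()] + [np.cov(x[:-i], x[i:], bias=True)[0, 1] for i in range(1, L)]
var_formula = C[0] / L + (2 / L**2) * sum((L - i) * C[i] for i in range(1, L))
print(var_direct, var_formula)   # agree up to sampling fluctuations
```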
## Appendix B
Let us introduce the variables $`ℛ_t,𝒮_t,𝒲_t`$, defined as
$$\begin{array}{ccc}ℛ_t=\mathrm{log}|r_t|\hfill & & \\ 𝒮_t=\mathrm{log}\sigma _t\hfill & & \\ 𝒲_t=\mathrm{log}|\omega _t|\hfill & & \end{array}$$
which are related among them by virtue of (8) by
$$ℛ_t=𝒮_t+𝒲_t.$$
For the associated probability distributions (respectively $`Q(ℛ),P(𝒮),T(𝒲)`$) the following relation holds
$$Q(ℛ)=\int _{-\infty }^{+\infty }𝑑𝒮P(𝒮)T(ℛ-𝒮).$$
(10)
The distribution $`P(𝒮)`$ retains full information on the volatility probability distribution $`p(\sigma )`$, since $`p(\sigma )=P(\mathrm{log}\sigma )/\sigma `$.
In order to derive from (10) an explicit expression for $`P(𝒮)`$, it is convenient to consider the characteristic functions (Fourier transforms) $`\stackrel{~}{Q}(\stackrel{~}{ℛ}),\stackrel{~}{P}(\stackrel{~}{𝒮}),\stackrel{~}{T}(\stackrel{~}{𝒲})`$ of $`Q(ℛ),P(𝒮),T(𝒲)`$. In fact, the following simple relation holds
$$\stackrel{~}{Q}(\stackrel{~}{𝒮})=\stackrel{~}{P}(\stackrel{~}{𝒮})\stackrel{~}{T}(\stackrel{~}{𝒮})$$
and the inverse Fourier transform gives
$$P(𝒮)=\frac{1}{2\pi }\int _{-\infty }^{+\infty }𝑑\stackrel{~}{𝒮}\frac{\stackrel{~}{Q}(\stackrel{~}{𝒮})}{\stackrel{~}{T}(\stackrel{~}{𝒮})}\mathrm{e}^{-i𝒮\stackrel{~}{𝒮}}.$$
Notice that $`\stackrel{~}{Q}(\stackrel{~}{𝒮})`$ and $`\stackrel{~}{T}(\stackrel{~}{𝒮})`$ are complex objects, but we may consider only the real part of the integrand, since the result of the integration has to be real
$$P(𝒮)=\frac{1}{2\pi }\int _{-\infty }^{+\infty }𝑑\stackrel{~}{𝒮}\mathrm{Re}\left[\frac{\stackrel{~}{Q}(\stackrel{~}{𝒮})}{\stackrel{~}{T}(\stackrel{~}{𝒮})}\mathrm{e}^{-i𝒮\stackrel{~}{𝒮}}\right]$$
(11)
where
$$\mathrm{Re}\left[\frac{\stackrel{~}{Q}(\stackrel{~}{𝒮})}{\stackrel{~}{T}(\stackrel{~}{𝒮})}\mathrm{e}^{-i𝒮\stackrel{~}{𝒮}}\right]=\frac{\left(\mathrm{Re}\stackrel{~}{Q}\mathrm{Re}\stackrel{~}{T}+\mathrm{Im}\stackrel{~}{Q}\mathrm{Im}\stackrel{~}{T}\right)\mathrm{cos}(𝒮\stackrel{~}{𝒮})}{(\mathrm{Re}\stackrel{~}{T})^2+(\mathrm{Im}\stackrel{~}{T})^2}+$$

$$+\frac{\left(\mathrm{Im}\stackrel{~}{Q}\mathrm{Re}\stackrel{~}{T}-\mathrm{Re}\stackrel{~}{Q}\mathrm{Im}\stackrel{~}{T}\right)\mathrm{sin}(𝒮\stackrel{~}{𝒮})}{(\mathrm{Re}\stackrel{~}{T})^2+(\mathrm{Im}\stackrel{~}{T})^2}.$$
From a practical point of view, $`\mathrm{Re}\stackrel{~}{Q}(\stackrel{~}{𝒮})`$ and $`\mathrm{Im}\stackrel{~}{Q}(\stackrel{~}{𝒮})`$ can be directly computed from the returns series
$$\mathrm{Re}\stackrel{~}{Q}(\stackrel{~}{𝒮})=\int _{-\infty }^{+\infty }𝑑ℛQ(ℛ)\mathrm{cos}(ℛ\stackrel{~}{𝒮})\simeq \frac{1}{N}\underset{t=1}{\overset{N}{}}\mathrm{cos}(\stackrel{~}{𝒮}ℛ_t)$$

$$\mathrm{Im}\stackrel{~}{Q}(\stackrel{~}{𝒮})=\int _{-\infty }^{+\infty }𝑑ℛQ(ℛ)\mathrm{sin}(ℛ\stackrel{~}{𝒮})\simeq \frac{1}{N}\underset{t=1}{\overset{N}{}}\mathrm{sin}(\stackrel{~}{𝒮}ℛ_t).$$
The Fourier transforms $`\mathrm{Re}\stackrel{~}{T}(\stackrel{~}{𝒮})`$ and $`\mathrm{Im}\stackrel{~}{T}(\stackrel{~}{𝒮})`$ can be evaluated numerically starting from their definitions:
$$\mathrm{Re}\stackrel{~}{T}(\stackrel{~}{𝒮})=\int _{-\infty }^{+\infty }𝑑𝒲T(𝒲)\mathrm{cos}(𝒲\stackrel{~}{𝒮})$$

$$\mathrm{Im}\stackrel{~}{T}(\stackrel{~}{𝒮})=\int _{-\infty }^{+\infty }𝑑𝒲T(𝒲)\mathrm{sin}(𝒲\stackrel{~}{𝒮})$$
where
$$T(𝒲)=\sqrt{\frac{2}{\pi }}\mathrm{e}^{𝒲-\frac{1}{2}\mathrm{e}^{2𝒲}}.$$
Finally, the probability distribution $`P(𝒮)`$, and then $`p(\sigma )`$, can be computed via the numerical evaluation of integral (11).
The key step of this procedure is the numerical inverse Fourier transform; therefore the delicate point is the evaluation of the tails of the probability distribution $`P(𝒮)`$, where the limited number of data leads to spurious fluctuations.
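A numerical sketch of the whole procedure of this appendix is given below; the grid ranges, the use of the analytic form of $`T(𝒲)`$ for a standard normal $`\omega _t`$, and the truncation of the $`\stackrel{~}{𝒮}`$ grid are our illustrative choices, not the authors’ implementation:

```python
import numpy as np

def volatility_log_distribution(r, S_grid, k_grid):
    """P(S) of Eq. (11) for S = log(sigma); then p(sigma) = P(log sigma)/sigma.
    `r` is the return series; `k_grid` (the conjugate variable) should be
    truncated where |T~| becomes small, since the division amplifies noise."""
    r = np.asarray(r, dtype=float)
    R = np.log(np.abs(r[r != 0.0]))                       # R_t = log|r_t|
    Qt = np.exp(1j * np.outer(k_grid, R)).mean(axis=1)    # empirical Q~(k)

    # T(W) = sqrt(2/pi) exp(W - exp(2W)/2) for W = log|w|, w standard normal
    W = np.linspace(-12.0, 3.0, 4000)
    TW = np.sqrt(2.0 / np.pi) * np.exp(W - 0.5 * np.exp(2.0 * W))
    Tt = np.trapz(TW * np.exp(1j * np.outer(k_grid, W)), W, axis=1)

    dk = k_grid[1] - k_grid[0]
    return np.array([np.real(Qt / Tt * np.exp(-1j * S * k_grid)).sum()
                     * dk / (2.0 * np.pi) for S in S_grid])
```

As the closing remark above suggests, the division by $`\stackrel{~}{T}`$ amplifies statistical noise, so the tails of the reconstructed distribution are the least reliable part.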
# Final State Interaction Phase in B Decays
## I Introduction
Strong final state interactions play an important role in the analysis of CP-violating effects in B decays. Direct CP violation, such as a difference in rates for $`B^+\to F`$ and $`B^{-}\to \overline{F}`$, vanishes in the limit that there are no strong phase shifts. Final state phases play a critical role in amplitude analyses of a set of $`B^0`$ decay experimental results.
An approach to final state phases in inclusive decays was given by Bander, Silverman, and Soni. For decays corresponding to the transition $`b\to u\overline{u}s`$ they considered at the quark level $`b\to c\overline{c}s\to u\overline{u}s`$, where the second transition on mass shell yielded the final state phase. Whether or not this is reasonable for inclusive decays, its application to exclusive decays such as $`B\to \overline{K}\pi `$ has been criticized because the major final state interactions of $`\overline{K}\pi `$ are “soft” scattering to $`\overline{K}n\pi `$ and not to $`c\overline{c}`$ states.
We concentrate here on the decay of $`B`$ to two mesons, referring to $`B\to \pi \pi `$ to be specific. Arguments have been given that the final state phase shifts should be small. For example, Bjorken argues that there is little final state scattering in $`B\to \pi \pi `$ because the $`B`$ decays directly to colorless $`q\overline{q}`$ pairs that do not interact as they evolve into $`\pi \pi `$. Taken literally this is not correct, since the $`s`$-state scattering at 5 GeV is expected to be sizable, as we discuss below. In the present note we seek to analyze the relations between the weak decay amplitude and the strong interaction S-matrix of final states that might lead to large or small final state phases.
## II Multichannel final state interaction
Consider the decay matrix element $`\langle f^{out}|𝒪_i|B\rangle `$ for the $`B`$ meson into the hadronic final state $`f`$, where $`𝒪_i`$ is a weak decay operator. The strong interaction S-matrix is defined with the “in” and “out” states by
$$S_{ff^{\prime }}=\langle f^{out}|f^{\prime in}\rangle .$$
(1)
We choose states that are eigenstates of $`J`$, not of individual meson momenta. The phases of the in and out states are fixed by the time-reversal transformation T:
$`T|f^{in}\rangle `$ $`\to `$ $`\langle f^{out}|,`$ (2)
$`T|f^{out}\rangle `$ $`\to `$ $`\langle f^{in}|.`$ (3)
With this phase convention, time reversal invariance of strong interactions requires that $`S_{ff^{}}`$ be symmetric matrix:
$$S_{ff^{\prime }}=S_{f^{\prime }f}.$$
(4)
Applying the time-reversal operation to $`M_f=\langle f^{out}|𝒪_i|B\rangle `$, one obtains
$$M_f\stackrel{T}{\to }\langle B|T𝒪_iT^{-1}|f^{in}\rangle .$$
(5)
If one inserts a complete set of out states and uses Eq.(4), this relation becomes $`M_f=\underset{f^{\prime }}{}S_{ff^{\prime }}M_{f^{\prime }}^{*}`$ for a T-even decay operator $`𝒪_i`$. One can express it in the operator form as
$$M=SM^{*},$$
(6)
where $`M`$ is represented in a column vector. This matrix equation is formally solvable as
$$M=S^{1/2}M^0,$$
(7)
where $`M^0`$ is an arbitrary real vector of the same dimension as $`M`$. If one uses the eigenstates $`|\alpha `$ of the S-matrix as a basis, Eq.(7) reduces to the Watson theorem: $`M_\alpha =M_\alpha ^0e^{i\delta _\alpha }`$. We thus may consider the vector $`M^0`$ as representing the decay amplitude in the absence of the final state phases due to the strong interaction. Since $`M`$ and $`M_0`$ are related by a unitary matrix, it holds that $`_f|M_f|^2=_f|M_f^0|^2`$.
If one subtracts the complex conjugate of $`M`$ from both sides in Eq.(6) and divides by $`2i`$, the familiar form $`\mathrm{Im}M=𝐭M^{*}`$ emerges for the imaginary part of $`M`$, where $`𝐭=(S-1)/2i`$. In components, it reads
$$\mathrm{Im}M_f=\underset{f^{\prime }}{}t_{ff^{\prime }}M_{f^{\prime }}^{*}.$$
(8)
This form is commonly derived starting with Lehmann-Symanzik-Zimmermann’s reduction formula. In applications of interest, the weak decay Hamiltonian $`H_w`$ is given in the form
$$H_w=\underset{i}{}\lambda _i𝒪_i,$$
(9)
where $`\lambda _i`$ is a combination of the CKM matrix elements and $`𝒪_i`$ is a T-even operator. It is to be understood that $`M_f`$ is to be evaluated separately for different operators $`𝒪_i`$.
## III Strong interaction S-matrix
When two mesons such as $`\pi ^+\pi ^{-}`$ interact in the $`s`$-state, we believe that they will scatter into a large number of multi-particle final states. Indeed we expect similar inelastic behavior for all partial waves of $`l<kr`$ where $`r`$ is a characteristic hadron radius. The sum over these partial waves can be described by a diffractive scattering formula such as that given by Pomeron exchange. For the case of meson-meson scattering we extrapolate from the analysis of meson-baryon and baryon-baryon scattering and write for the invariant elastic scattering amplitude ,
$$T(s,t)=i\sigma _{tot}se^{bt},$$
(10)
where the constant in front has been fixed by the optical theorem. Defining the Pomeron residue by $`\beta (t)\equiv \sigma _{tot}e^{bt}`$, we obtain with the factorization relation $`\beta (t)_{MM^{\prime }}\beta (t)_{pp}=\beta (t)_{Mp}\beta (t)_{M^{\prime }p}`$ <sup>*</sup><sup>*</sup>*The factorization can be proved only for simple $`l`$-plane singularities. It is an assumption for the Pomeron.
$$\sigma _{tot}^{\pi \pi }\simeq 12\mathrm{m}\mathrm{b},\sigma _{tot}^{\pi K}\simeq 10\mathrm{m}\mathrm{b},\sigma _{tot}^{K\overline{K}}\simeq 8\mathrm{m}\mathrm{b},$$
(11)
where $`\sigma _{tot}^{pp}=37`$mb, $`\sigma _{tot}^{\pi p}=21`$mb, and $`\sigma _{tot}^{Kp}=17`$mb have been used for the diffractive contribution of $`\sigma _{tot}`$ at $`\sqrt{s}\simeq m_B`$. For the diffractive peak width, the factorization gives
$$b^{\pi \pi }\simeq 3.6\mathrm{GeV}^{-2},b^{K\pi }\simeq 2.8\mathrm{GeV}^{-2},b^{K\overline{K}}\simeq 2.0\mathrm{GeV}^{-2}$$
(12)
if we choose $`b^{pp}\simeq 5.0\mathrm{GeV}^{-2}`$, $`b^{\pi p}\simeq 4.3\mathrm{GeV}^{-2}`$, and $`b^{Kp}\simeq 3.5\mathrm{GeV}^{-2}`$. For $`D\pi `$ scattering, we use the quark counting rule for $`\sigma _{tot}`$ and the assumption that the charmed quark interacts with the light quarks much more weakly. Then we obtain a crude estimate
$$\sigma _{tot}^{D\pi }\simeq \frac{1}{2}\sigma _{tot}^{\pi \pi },$$
(13)
and $`b^{D\pi }`$ is a little smaller than $`b^{K\pi }`$.
Projecting out the $`s`$-wave from the amplitude in Eq.(10),
$$a_{l=0}(s)=\frac{1}{16\pi s}\int _{-s}^0T(s,t)𝑑t$$
(14)
yields
$$\mathrm{Im}a_{l=0}\simeq \{\begin{array}{cc}0.16& (\pi \pi )\\ 0.17& (K\pi )\\ 0.18& (K\overline{K})\\ 0.12& (D\pi )\end{array}$$
(15)
at $`\sqrt{s}=5`$–$`6`$ GeV.
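These numbers follow from a one-line estimate, $`\mathrm{Im}a_{l=0}\simeq \sigma _{tot}(1-e^{-bs})/16\pi b`$; a sketch of the arithmetic (the unit conversion $`1`$ mb $`=2.568\mathrm{GeV}^{-2}`$ is standard; small differences from the quoted values presumably reflect rounding of the inputs):

```python
import math

GEV2_PER_MB = 2.568        # 1 mb = 0.1 fm^2 = 2.568 GeV^-2

def im_a0(sigma_tot_mb, b, s):
    """Im a_{l=0} from Eqs. (10) and (14):
    (1/16 pi s) * integral_{-s}^{0} sigma_tot s e^{bt} dt
    = sigma_tot (1 - e^{-b s}) / (16 pi b)."""
    sigma = sigma_tot_mb * GEV2_PER_MB
    return sigma * (1.0 - math.exp(-b * s)) / (16.0 * math.pi * b)

print(round(im_a0(12.0, 3.6, 30.0), 2))   # pi-pi at sqrt(s) ~ 5.5 GeV -> 0.17
```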
Extraction of the $`s`$-wave amplitude from the diffractive formula may arouse suspicion since one thinks of diffraction as a peripheral process. It would be better to consider $`T(s,t)`$ as describing the scattering from an absorbing gray sphere of radius $`r`$. The values of $`a_l`$ up to $`l\simeq kr`$ vary gradually with $`l`$ thus adding up to a large forward peak. As a result about 90% of the contribution to the integral in Eq.(14) comes from $`|t|<1\mathrm{G}\mathrm{e}\mathrm{V}^2`$ even though the $`l=0`$ amplitude itself is, of course, independent of $`t`$.
In what follows we use the estimate
$$S_{ff}\simeq 0.7$$
(16)
corresponding to
$$a_{l=0}\equiv t_{ff}=\frac{S_{ff}-1}{2i}\simeq 0.15i.$$
(17)
This corresponds to the case of a gray sphere with an inelasticity of 0.85. In the limiting case of a black sphere $`S_{ff}`$ goes to zero and the inelasticity goes to 0.5. If one goes beyond the diffractive scattering approximation, $`S_{ff}`$ is not purely real. In the Regge theory, the real part arises from exchange of the non-Pomeron trajectories such as $`\rho `$ and $`f_2`$. In $`\pi ^\pm p`$ scattering a real-to-imaginary ratio of $`10`$–$`20\%`$ was observed in the forward scattering amplitudes at $`\sqrt{s}=5`$–$`6`$ GeV. We can make an estimate of the real part for meson-meson scattering by using the factorization. We first determine the Regge parameters at $`t=0`$ from the total cross section differences and then extract their $`t`$-dependences from the angular dependence of the differential cross sections. The analysis is simpler if exchange degeneracy is imposed. The smaller $`\sigma _{tot}`$ and the larger $`\rho `$-$`f_2`$ residues tend to enhance the real to imaginary amplitude ratio for $`\pi \pi `$ scattering over $`\pi p`$ scattering, while the smaller $`b^{\pi \pi }`$ partially compensates the trend. Particularly for $`\pi ^+\pi ^0`$, the real parts of the $`\rho `$ and $`f_2`$ terms add up close to $`30\%`$ of the imaginary part. However, our major goal is to understand the implications of the sizable inelastic scattering; for this purpose we use the simplifying approximation that $`S_{ff}`$ is real.
## IV Two channels
The relation (6) was studied in the case of two channels assuming that the diagonal S-matrix elements $`S_{ff}`$ are purely real. This requirement on the S-matrix turns out to be so strong in the case of two channels that there is only a single parameter left:
$`S`$ $`=`$ $`\left(\begin{array}{cc}\frac{1}{\sqrt{2}}& \frac{1}{\sqrt{2}}\\ \frac{1}{\sqrt{2}}& -\frac{1}{\sqrt{2}}\end{array}\right)\left(\begin{array}{cc}e^{2i\theta }& 0\\ 0& e^{-2i\theta }\end{array}\right)\left(\begin{array}{cc}\frac{1}{\sqrt{2}}& \frac{1}{\sqrt{2}}\\ \frac{1}{\sqrt{2}}& -\frac{1}{\sqrt{2}}\end{array}\right),`$ (24)
$`=`$ $`\left(\begin{array}{cc}\mathrm{cos}2\theta & i\mathrm{sin}2\theta \\ i\mathrm{sin}2\theta & \mathrm{cos}2\theta \end{array}\right).`$ (27)
When $`S^{1/2}`$ is computed from $`S`$ and substituted in Eq.(7), a simple relation results:
$`M_1`$ $`=`$ $`M_{01}\mathrm{cos}\theta +iM_{02}\mathrm{sin}\theta ,`$ (28)
$`M_2`$ $`=`$ $`iM_{01}\mathrm{sin}\theta +M_{02}\mathrm{cos}\theta .`$ (29)
The phase of the decay amplitude in channel 1 is large if the particle decays preferentially to channel 2, while it is small if channel 1 is the dominant decay channel, for small values of $`\theta `$. Though this is an interesting conclusion, the picture turns out to be specific to the two-channel case. When we add one more channel in the final state, there are three real parameters even after Im$`S_{ff}=0`$ is assumed. The nice simple relation of Eqs.(28) and (29) no longer holds. If we go to $`N`$ final channels, there are $`N(N-1)/2`$ real parameters even with Im$`S_{ff}=0`$, and no meaningful prediction results. Therefore we must change our strategy in studying the case of $`N\gg 1`$ such as the $`B`$ decay.
## V Randomness of weak decay amplitudes
In the presence of many decay channels, strong interactions are so complicated that it is beyond our ability to predict final state interactions accurately. We must make up for our lack of knowledge with reasonable dynamical assumptions and/or approximations. In search of such an assumption, we notice that since $`M_{f^{\prime }}`$ and $`S_{ff^{\prime }}`$ come from two different sources, weak and strong interactions, the phase of the product $`S_{f^{\prime }f}M_{f^{\prime }}^{*}`$ for $`f^{\prime }\ne f`$ takes equally likely a positive or a negative value as $`f^{\prime }`$ is varied with $`f`$ fixed. While $`M_{f^{\prime }}`$ is related to $`M_f`$ ($`f^{\prime }\ne f`$) by rescattering, there exist so many states that the influence of $`f`$ on $`f^{\prime }`$ can be disregarded. We therefore introduce the postulate that the phase of $`S_{ff^{\prime }}M_{f^{\prime }}^{*}`$ takes random values as $`f^{\prime }`$ varies. It should be noted that randomness is postulated here for the relative phase and sign of the decay matrix element to the S-matrix element, not for the dynamical phases and mixing of the S-matrix as it was introduced in the random S-matrix theory of nuclear physics.
We start our analysis with Eqs.(8) and (16). Taking the $`f^{\prime }=f`$ term in the sum to the left-hand side in Eq.(8) and using $`t_{ff}\simeq i\mathrm{Im}t_{ff}`$, we write Eq.(8) in the form
$$(1+it_{ff})\mathrm{Im}M_f-t_{ff}\mathrm{Re}M_f=\underset{f^{\prime }\ne f}{}t_{ff^{\prime }}M_{f^{\prime }}^{*}.$$
(30)
The first and the second term of the left-hand side are real and imaginary, respectively, for Re$`t_{ff}=0`$. Given the estimate of Eq.(17) the coefficient of the first term is much larger than that of the second term and we consider this primarily as an equation for Im$`M_f`$. With the randomness postulate, the phase of $`M_f`$ is equally often positive or negative if we consider a large ensemble of final states $`f`$. It is some kind of average of the magnitude of the phase of $`M_f`$, not values for individual $`M_f`$, that we can study with our randomness postulate. For this purpose, we take the absolute value squared for both sides of Eq.(30). Then the right-hand side is:
$$\mathrm{RHS}=\frac{1}{4}\underset{f^{\prime },f^{\prime \prime }\ne f}{}S_{ff^{\prime }}M_{f^{\prime }}^{*}S_{ff^{\prime \prime }}^{*}M_{f^{\prime \prime }},$$
(31)
where $`t_{ff^{\prime }}=S_{ff^{\prime }}/2i`$ has been used for $`f^{\prime }\ne f`$. The random phase postulate allows us to retain only the terms of $`f^{\prime }=f^{\prime \prime }`$ and to reduce the double sum to a single sum:
$`\mathrm{RHS}`$ $`\simeq `$ $`{\displaystyle \frac{1}{4}}{\displaystyle \underset{f^{\prime }\ne f}{}}S_{ff^{\prime }}S_{f^{\prime }f}^{*}|M_{f^{\prime }}|^2`$ (32)
$`\simeq `$ $`{\displaystyle \frac{1}{4}}\overline{|M_{f^{\prime }}|^2}{\displaystyle \underset{f^{\prime }\ne f}{}}|S_{ff^{\prime }}|^2,`$ (33)
where the second line defines $`\overline{|M_{f^{\prime }}|^2}`$ as the weighted average of the decay amplitudes into states $`f^{\prime }`$. Then, using the unitarity of the S-matrix, we reach
$$\mathrm{RHS}\simeq \frac{1}{4}(1-S_{ff}^2)\overline{|M_{f^{\prime }}|^2}.$$
(34)
While our estimate of $`S_{ff}`$ is made on the basis of the Pomeron exchange, it should be noted that contributions to $`S_{ff^{\prime }}`$ from quantum number exchange may be important in determining $`\overline{|M_{f^{\prime }}|^2}`$ from Eq.(33) if they correspond to states $`f^{\prime }`$ with large values of $`|M_{f^{\prime }}|^2`$. Identifying Eq.(34) with the absolute value squared of the left-hand side of Eq.(30), we obtain the prediction of our random phase approximation:
$$(1+S_{ff})^2(\mathrm{Im}M_f)^2+(1-S_{ff})^2(\mathrm{Re}M_f)^2=(1-S_{ff}^2)\overline{|M_{f^{\prime }}|^2}.$$
(35)
Defining $`\rho `$ by
$`\rho `$ $`\equiv `$ $`\overline{|M_{f^{\prime }}|^2}^{1/2}/|M_f|,`$ (36)
$`|M_f|^2`$ $`=`$ $`(\mathrm{Im}M_f)^2+(\mathrm{Re}M_f)^2,`$ (37)
the ratio of the imaginary-to-real part of $`M_f`$ is solved from Eq.(35) as
$$\frac{(\mathrm{Im}M_f)^2}{(\mathrm{Re}M_f)^2}\equiv \mathrm{tan}^2\delta _f=\frac{\tau ^2(\rho ^2-\tau ^2)}{1-\rho ^2\tau ^2},$$
(38)
where
$$\tau =\left(\frac{1-S_{ff}}{1+S_{ff}}\right)^{1/2}.$$
(39)
Note that $`\tau ^2`$ is equal to the ratio of elastic to inelastic scattering cross sections $`\sigma _{el}/\sigma _{inel}`$ of the relevant partial wave. Since the left-hand side of Eq.(38) is nonnegative, $`\tau `$ and $`\rho `$ are constrained for $`S_{ff}>0`$ by
$$\tau ^2\le \rho ^2\le 1/\tau ^2.$$
(40)
For $`S_{ff}=0.7`$,
$$\tau ^2=0.18,$$
(41)
so that rescattering among the final states does not allow $`\overline{|M_{f^{\prime }}|^2}`$ and $`|M_f|^2`$ to differ too greatly in magnitude. In the weak limit of rescattering ($`\tau \to 0`$), of course, Eq.(40) allows any value for $`\rho `$. In the black sphere limit ($`\tau \to 1`$) Eq.(38) is useless and Eq.(40) constrains $`\rho =1`$. Our approach is only useful to the extent that inelastic scattering dominates very much over elastic scattering for the final state $`f`$. It should be noticed that Eq.(38) reduces to the two-channel case of Eqs.(28) and (29) with
$$\left|\frac{M_2^0}{M_1^0}\right|^2=\frac{\rho ^2-\tau ^2}{1-\rho ^2\tau ^2}.$$
(42)
Our random approximation amounts to lumping all inelastic channels together as if they were a single inelastic channel with an “average” decay amplitude. However, we now interpret this as something like the standard deviation of the phase for an ensemble of independent final states $`f`$ with a given value of $`\rho `$.
If the relevant states $`f^{}`$ were similar to the state $`f`$, then we might expect $`\rho `$ to be close to unity. For $`\rho =1`$ Eq.(38) reduces to
$`\mathrm{tan}^2\delta _f`$ $`=`$ $`\tau ^2={\displaystyle \frac{1-S_{ff}}{1+S_{ff}}},`$ (43)
$`\mathrm{sin}\delta _f`$ $`=`$ $`\sqrt{{\displaystyle \frac{1-S_{ff}}{2}}}.`$ (44)
With $`S_{ff}=0.7`$ this gives $`|\delta _f|\simeq 23^{\circ }`$. Thus a typical value of the final strong interaction phase in this case is not small. This result for a typical state has a simple heuristic interpretation. The original real decay amplitude $`M_1^0`$ is reduced as a result of absorption by a factor $`a`$, but an imaginary term arises due to rescattering from other states. Since the total decay rate is not changed by final-state scattering the final value of $`|M_f|`$ for a typical state will be equal to $`|M_1^0|`$. Thus $`M_f`$ takes the form
$`M_f`$ $`=`$ $`M_1^0[a+i\sqrt{1-a^2}],`$ (45)
$`{\displaystyle \frac{\mathrm{Im}M_f}{\mathrm{Re}M_f}}`$ $`=`$ $`\sqrt{1-a^2}/a.`$ (46)
This agrees with the result above if the absorption factor is identified as
$$a=\sqrt{(1+S_{ff})/2}.$$
(47)
Any argument that a final state phase is small must be an argument that $`\rho `$ is small. It should be noted that $`\rho `$ depends on the particular final state $`f`$ and on the weak interaction operator $`𝒪_i`$. The quantity $`\overline{|M_f^{}^2|}`$ is an average of the square of the decay amplitude to state $`f^{}`$ via $`𝒪_i`$ weighted by the square of the scattering amplitude from $`f`$ to $`f^{}`$ (cf Eqs.(33) and (34)). Thus a value of $`\rho `$ much smaller than unity means that on average the states to which $`f`$ scatters are much less likely than $`f`$ to be final states in the decay due to operator $`𝒪_i`$. Conversely, if $`f`$ is a particularly unfavored final state $`\rho `$ may well be above unity. Figure 1 shows the dependence of the phase on $`\rho `$ for the choice of $`S_{ff}=0.7`$.
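Figure 1 follows directly from Eq. (38); a sketch of its evaluation (the grid of $`\rho `$ values and the function name are ours):

```python
import numpy as np

def delta_f_degrees(rho, S_ff=0.7):
    """|delta_f| from Eq. (38), valid for tau^2 <= rho^2 <= 1/tau^2."""
    tau2 = (1.0 - S_ff) / (1.0 + S_ff)
    tan2 = tau2 * (rho**2 - tau2) / (1.0 - rho**2 * tau2)
    return np.degrees(np.arctan(np.sqrt(tan2)))

print(delta_f_degrees(1.0))                 # ~23 degrees, as quoted above
print(delta_f_degrees(np.sqrt(0.3 / 1.7)))  # rho = tau: the phase vanishes
```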
## VI Possibility of small or large strong phases for two-body channels
While our randomness postulate leads to sizable strong phases for “typical” decay channels, dynamical arguments are necessary to estimate the parameter $`\rho `$ for a specific channel and a specific decay operator. We ask what dynamical arguments could let our prediction agree with Bjorken’s argument, which favors small phases for decays such as $`B^0\to \pi ^+\pi ^{-}`$. The observed branching fraction of $`B^0\to \pi ^+\pi ^{-}`$ is about $`10^{-3}`$ of the inclusive branching fraction for $`b\to d\overline{q}q`$ within large uncertainties. This might seem to indicate that $`\pi ^+\pi ^{-}`$ is not a favored final state and that the strong phase of $`\pi ^+\pi ^{-}`$ might be of order of $`20^{\circ }`$. However this conclusion can be evaded if certain conditions are satisfied.
It should be noted that we are only interested in whether $`\pi ^+\pi ^{-}`$ is a favored decay channel relative to those to which it is connected by $`S_{f^{\prime }f}`$. Most of the states $`f^{\prime }`$ are multi-meson states with little jet-like character. It could be argued that these states are not favored as $`B`$ decay products because they are not likely to develop from three quark jets into which $`B`$ naturally decays. It is not clear whether this distinction is really operative for the energy $`m_B`$.
If the above argument were true, it could be considered as an interpretation of the Bjorken argument. In order to produce a $`\pi \pi `$ final state the final quarks must emerge as colorless pairs; alternative quark configurations are unlikely to hadronize into $`\pi \pi `$. Thus $`\rho `$ would be close to its minimum value and Im$`M_f`$ would be small. It is not true, however, that the final $`\pi \pi `$ state has little interaction, but the effect would primarily be a moderate absorption correction to the real part. In the two-channel reduction this corresponds to $`M_{20}/M_{10}`$ close to zero and the change of the real part of $`M_1`$ from $`M_{10}`$ to $`M_{10}\mathrm{cos}\theta `$ would be considered as the absorption correction.
An alternative possibility, in which one might expect a large final state phase shift, has been emphasized in a number of recent papers. These are cases in which $`M_f`$ vanishes in the naive factorization approximation. An example is the “tree” amplitude proportional to $`\lambda _u=V_{ub}V_{us}^{*}`$ for the decay $`B^{-}\to \overline{K}^0\pi ^{-}`$. In this case the major contribution to $`M_f`$ is expected to arise from rescattering from favored states, so that $`\rho `$ might be larger than unity. It can be argued that in this case, even though the final state scattering mainly goes to multi-particle states, the main contribution to the strong phase arises from quantum number exchange involving two-particle to two-particle transitions.
## VII Conclusion
For a “typical” final state the strong final-state phase shift is not small; a typical magnitude is $`20^{\circ }`$. We understand this to be the magnitude averaged over the states that are interconnected by the final-state $`S`$-matrix. We expect there to be sizable fluctuations about the average; in fact, since we cannot predict the sign of the phases, our analysis suggests that the algebraic average phase may be zero.
A simple heuristic understanding of this phase is that the final state absorption reduces the value of the original real decay amplitudes whereas rescattering from other states provides an imaginary amplitude. For a “typical” state the absolute value of the amplitude is not changed since the final state interaction does not change the total decay rate. Our magnitude estimate of $`20^{\circ }`$ is derived from the expected inelasticity of the meson-meson scattering.
It may be possible to argue for a particular final state $`f`$ that the phase is small. Any such argument must show that the states to which $`f`$ is connected by the $`S`$-matrix are generally less likely to be a decay product of $`B`$ than $`f`$. Thus it might be argued that the four-quark operator leads easily to $`\pi ^+\pi ^{}`$ whereas there are many states to which $`\pi ^+\pi ^{}`$ scatters that are not easily reached directly via $`B`$ decay. Conversely it might be argued that for states which are not easily reached via the four-quark operator, the strong phase is large.
###### Acknowledgements.
One of the authors (LW) acknowledges Miller Institute for Basic Research in Science for a Visiting Miller Research Professorship during this work at Berkeley. He is also supported by the U.S. Department of Energy under Contract No. DE-FG02-91-ER-40682. The other author (MS) was supported in part by the Director, Office of Energy Research, Office of High Energy and Nuclear Physics, Division of High Energy Physics of the U.S. Department of Energy under Contract No.DE-AC03-76SF00098 and in part by the National Science Foundation under Grant No. PHY-95-14797.
# Experimental Evidence for Topological Doping in the Cuprates

To appear in: Proc. of Univ. of Miami Conf. on High-Temperature Superconductivity, Jan. 7–13, 1999 (AIP).
## Introduction
One of the striking features of the layered cuprates is the coexistence of local antiferromagnetism with homogeneous superconductivity. After recognizing that the superconductivity is obtained by doping holes into an antiferromagnetic (AF) insulator, the simplest way to understand the survival of the correlations is in terms of spatial segregation of the doped holes emer99a . If the segregated holes form periodic stripes, then time-reversal symmetry requires that the phases of the intervening AF domains shift by $`\pi `$ on crossing a charge stripe zach98 ; zaan89 ; whit98a . This topological effect is quite efficient at destroying commensurate AF order without eliminating local antiferromagnetism neto96 .
The clearest evidence for stripe correlations has been provided by neutron and x-ray scattering studies of Nd-doped La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub>. Much of this work, together with related phenomena in hole-doped nickelates, has been reviewed recently tran98c ; tran98b and some further details are given in tran99a ; tran99b ; ichi99 . In the Nd-doped system, the maximum magnetic stripe ordering temperature corresponds to an anomalous minimum in the superconducting $`T_c`$. This fact has caused some people to argue that stripes are a special type of order, unique to certain cuprates, that competes with superconductivity. However, there has been a significant number of recent papers that provide experimental evidence for stripe correlations in other cuprates. Some of these are briefly discussed in the next section.
One corollary of the stripe picture is that the dynamic spin susceptibility measured by neutron scattering and nuclear magnetic resonance (NMR) comes dominantly from the Cu spins in instantaneously-defined AF domains and not directly from the doped holes. This has implications for the interpretation of features such as the “resonance” peak found in YBa<sub>2</sub>Cu<sub>3</sub>O<sub>6+x</sub>. Some discussion of this issue is presented in the last section.
## Evidence Supporting Stripes
In La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub>, long-range AF order is destroyed at $`x\simeq 0.02`$; however, a recent muon-spin-rotation ($`\mu `$SR) study by Niedermayer et al. nied98 (presented at this conference) shows that the change in local magnetic order is much more gradual. At $`T\simeq 1`$ K, the average local hyperfine field remains unchanged even as LRO disappears, and it decreases only gradually as $`x`$ increases to $`\sim 0.07`$. In particular, local magnetic order is observed to coexist with bulk superconductivity.
In contrast, Wakimoto et al. waki99 have shown, using neutron scattering, that the static spatial correlations change dramatically as $`x`$ passes through 0.05. The magnetic scattering near the AF wave vector is commensurate for $`x\le 0.04`$, and incommensurate for $`x\ge 0.06`$, consistent with stripes running parallel to the Cu-O-Cu bonds. The scattering is also incommensurate at x=0.05, but with the peaks rotated by $`45^{\circ }`$ compared to the case for $`x\ge 0.06`$, suggesting the presence of diagonal stripes, as in La<sub>2-x</sub>Sr<sub>x</sub>NiO<sub>4</sub> tran98c .
Local magnetic inhomogeneity at $`x=0.06`$, consistent with a stripe glass, is confirmed by a <sup>63</sup>Cu and <sup>139</sup>La NMR/NQR study by Julien et al. juli99 . One particularly striking observation is a splitting of the <sup>139</sup>La NMR peak for $`T<100`$ K, in a manner very similar to that observed below the charge-stripe–ordering temperature in La<sub>1.67</sub>Sr<sub>0.33</sub>NiO<sub>4</sub> yosh98 . Another feature noticed by Julien et al. juli99 is a loss of <sup>63</sup>Cu NQR intensity at low temperature. Independently, Hunt et al. hunt99 have investigated this intensity anomaly in a number of systems, including Nd- and Eu-doped La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub>, and shown that the intensity loss correlates with the charge-stripe order parameter observed by neutron and x-ray diffraction tran98c . Their results imply that static charge-stripe order occurs in La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub> for $`x\simeq 0.12`$. This result is quite compatible with recent neutron-scattering work that shows static incommensurate magnetic order at $`x=0.12`$ ($`T\simeq 31`$ K) and $`x=0.10`$ ($`T\simeq 17`$ K), but not at $`x=0.14`$ aepp97 .
Static stripes are not restricted to Sr-doped La<sub>2</sub>CuO<sub>4</sub>. Lee et al. lee99 have demonstrated that incommensurate magnetic order occurs, with an onset very close to $`T_c`$ (42 K), in an oxygen-doped sample with a net hole concentration of $`\sim 0.15`$. Furthermore, the $`Q`$ dependence of the magnetically-scattered neutron intensity indicates interlayer spin correlations very similar to those found in undoped La<sub>2</sub>CuO<sub>4</sub>, thus showing a clear connection with the AF insulator state.
Stripe spacing, which is inversely proportional to the incommensurability, varies with doping. Yamada et al. yama98 have shown that, for a number of doped La<sub>2</sub>CuO<sub>4</sub> systems with hole concentrations up to $`\sim 0.15`$, $`T_c`$ is proportional to the incommensurability. Recently, Balatsky and Bourges bala99 have found a similar relationship in YBa<sub>2</sub>Cu<sub>3</sub>O<sub>6+x</sub>, in which the incommensurability is replaced by the $`Q`$ width of the magnetic scattering about the AF wave vector. Indications that the magnetic scattering might be incommensurate were noted some time ago tran92 ; ster94 ; however, it is only recently that Mook and collaborators mook98 ; dai98 have definitively demonstrated that there is a truly incommensurate component to the magnetic scattering in underdoped YBa<sub>2</sub>Cu<sub>3</sub>O<sub>6+x</sub>. They have also shown that the modulation wave vector is essentially the same as in La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub> with the same hole concentration.
As discussed by Mook mook99 and by Bourges bour99 , there is also a commensurate component to the magnetic scattering in YBa<sub>2</sub>Cu<sub>3</sub>O<sub>6+x</sub>. This component, which sharpens in energy below $`T_c`$, is commonly referred to as the “resonance” peak. It has now been observed in an optimally doped crystal of Bi<sub>2</sub>Sr<sub>2</sub>CaCu<sub>2</sub>O<sub>8+δ</sub> by Fong et al. fong99 . This observation demonstrates a commonality, at least amoung the double-layer cuprates studied so far. Of course, the significance of the resonance peak itself depends on the microscopic source of the signal, and this is the topic of the next section.
## Magnetic Scattering Comes from Copper Spins
Comparisons of the spin-fluctuation spectra in un- and optimally-doped La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub> hayd96a and in un- and under-doped YBa<sub>2</sub>Cu<sub>3</sub>O<sub>6+x</sub> sham93 ; bour97 ; hayd98 show that, although doping causes substantial redistributions of spectral weight as a function of frequency, the integrated spectral weight (over the measured energy range of 0 to $`200`$ meV) changes relatively little. The limited change in spectral weight is most easily understood if the magnetic scattering in the doped samples comes from the Cu spins in magnetic domains defined by the spatially segregated holes.
The spin fluctuations in YBa<sub>2</sub>Cu<sub>3</sub>O<sub>6.5</sub> look very similar to overdamped spin waves bour97 . With increasing $`x`$, the spin fluctuations measured at low temperature gradually evolve into a peak that is sharp in energy mook99 ; regn98 . The intensity of this resonance peak has a well defined dependence on the component of the scattering wave vector perpendicular to the CuO<sub>2</sub> planes, $`Q_z`$. If $`d_{}`$ is the spacing between Cu atoms in nearest-neighbor layers, then
$$I(Q_z)\mathrm{sin}^2(\frac{1}{2}Q_zd_{}).$$
(1)
(It should be noted that the spacing between oxygen atoms in neighboring planes is significantly different from the Cu spacing, and is incompatible with the observed modulation tran92 .) It so happens that this response is precisely what one would get for Cu spin singlets formed between the layers sasa97 . Thus, both the evolution of the resonance peak with doping and the $`Q_z`$ dependence of its intensity suggest that the scattering is coming from antiferromagnetically coupled Cu spins.
Is commensurate scattering compatible with stripe correlations? In order to observe incommensurate peaks, it is necessary that there be interference in the scattered beam between contributions from neighboring antiphase magnetic domains. If the spin-spin correlation length along the modulation direction becomes smaller than the width of two domains, then the scattering from the neighboring domains becomes incoherent, and one observes a broad, commensurate scattering peak. To the extent that singlet correlations form within an individual magnetic domain, the coupling between domains will be frustrated. If the charge stripes in nearest neighbor layers align with each other, then the magnetic domains will also be aligned, and the magnetic coupling between them should enhance singlet correlations. Thus, the weak interlayer magnetic coupling in bilayer systems may enhance commensurate scattering and the spin gap relative to the incommensurate scattering that dominates at low energies in La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub>.
If the resonance peak is associated with the spin fluctuations in itinerant magnetic domains, then it is not directly associated with the superconducting holes. Instead, it corresponds to the response of the magnetic domains to the hole pairing. The temperature and doping dependence of the resonance peak indicates that the Cu spin correlations are quite sensitive to the hole pairing.
## Acknowledgments
While I have benefited from interactions with many colleagues, I would especially like to acknowledge frequent stimulating discussions with V. J. Emery and S. A. Kivelson. Work at Brookhaven is supported by the Division of Materials Sciences, U.S. Department of Energy under contract No. DE-AC02-98CH10886.
# Quantum Time Evolution in Terms of Nonredundant Expectation Values
## Acknowledgements
I would like to acknowledge both helpful discussions with J.-P. Amiet and financial support by the Schweizerische Nationalfonds.
# L/𝐸-Flatness of the Electron-Like Event Ratio in Super-Kamiokande and a Degeneracy in Neutrino Masses
## 1 Introduction
One of the most remarkable features of the latest Super-Kamiokande data on the atmospheric neutrino anomaly is the $`L/E`$-flatness of the electron-like event ratio . Given the definitions,
$`\mathcal{R}_e`$ $`\equiv `$ $`{\displaystyle \frac{\text{Experimentally observed }e\text{-like events}}{\text{Theoretically expected }e\text{-like events (without }\nu \text{ oscillations)}}},`$ (1)
$`\mathcal{R}_\mu `$ $`\equiv `$ $`{\displaystyle \frac{\text{Experimentally observed }\mu \text{-like events}}{\text{Theoretically expected }\mu \text{-like events (without }\nu \text{ oscillations)}}},`$ (2)
the Super-Kamiokande data reveal a significant $`L/E`$ dependence for $`\mathcal{R}_\mu `$, while $`\mathcal{R}_e`$ is consistent with unity with no $`L/E`$ dependence. In this Letter, following an earlier study , we establish that this $`L/E`$ flatness of $`\mathcal{R}_e`$, consistent with $`\mathcal{R}_e=1`$, constrains the neutrino mixing matrix dramatically. All resulting neutrino-oscillation matrices are equivalent to each other under appropriate redefinitions of the underlying mass eigenstates. They all carry the property that the expectation values of the muon and tau neutrino masses are equal. Moreover, the mixing matrices derived in this Letter naturally lead to a consistency between the mixing angles observed by LSND and Super-Kamiokande. This circumstance makes us suspect that the solar neutrino anomaly points towards a richer phenomenology, perhaps beyond neutrino masses, in the physics of neutrino oscillations.
## 2 Constraints on the Neutrino-Oscillation Matrices
Let us assume that at the top of the atmosphere, at $`t=0`$, the number of $`\nu _e`$ and $`\nu _\mu `$ produced is $`N_e`$ and $`N_\mu `$, respectively. Although both neutrinos and antineutrinos are produced in both flavours, we shall use the terms “electron neutrinos” and “muon neutrinos” loosely, to include both the $`\nu `$ and $`\overline{\nu }`$. In general, the ratio of $`\nu _\mu `$ to $`\nu _e`$ neutrinos,
$`r={\displaystyle \frac{N_\mu }{N_e}}`$ (3)
is a function of energy. However, for the relevant energy range in Super-Kamiokande, it may be assumed constant (as shall be done in this Letter). Within a detector at a distance $`L\simeq t`$ from the production point, the number of electron neutrinos, $`N_e^{\prime }`$, is given by
$`N_e^{\prime }=N_eP_{ee}+N_\mu P_{\mu e},`$ (4)
where $`P_{ee}`$ and $`P_{\mu e}`$ are the neutrino oscillation probabilities $`P(\nu _e\rightarrow \nu _e)`$ and $`P(\nu _\mu \rightarrow \nu _e)`$, respectively. Assuming that the underlying mass eigenstates are relativistic , the neutrino oscillation probability $`P(\nu _{\ell }\rightarrow \nu _{\ell ^{\prime }})`$ takes the form
$`P(\nu _{\ell }\rightarrow \nu _{\ell ^{\prime }})=\delta _{\ell \ell ^{\prime }}`$
$`-4Re(U_{\ell ^{\prime }1}U_{\ell 1}^{\ast }U_{\ell ^{\prime }2}^{\ast }U_{\ell 2})\mathrm{sin}^2(\phi _{12})+2Im(U_{\ell ^{\prime }1}U_{\ell 1}^{\ast }U_{\ell ^{\prime }2}^{\ast }U_{\ell 2})\mathrm{sin}(2\phi _{12})`$
$`-4Re(U_{\ell ^{\prime }1}U_{\ell 1}^{\ast }U_{\ell ^{\prime }3}^{\ast }U_{\ell 3})\mathrm{sin}^2(\phi _{13})+2Im(U_{\ell ^{\prime }1}U_{\ell 1}^{\ast }U_{\ell ^{\prime }3}^{\ast }U_{\ell 3})\mathrm{sin}(2\phi _{13})`$
$`-4Re(U_{\ell ^{\prime }2}U_{\ell 2}^{\ast }U_{\ell ^{\prime }3}^{\ast }U_{\ell 3})\mathrm{sin}^2(\phi _{23})+2Im(U_{\ell ^{\prime }2}U_{\ell 2}^{\ast }U_{\ell ^{\prime }3}^{\ast }U_{\ell 3})\mathrm{sin}(2\phi _{23}).`$ (5)
The kinematic phases $`\phi _{ij}`$, which appear in the above expression, are defined as
$`\phi _{ij}=1.27\mathrm{\Delta }m_{ij}^2{\displaystyle \frac{L}{E}}.`$ (6)
In the above equation, $`E`$ is the neutrino energy (expressed in MeV), $`L`$ is the distance between the generation point and the detection point (expressed in meters), and $`\mathrm{\Delta }m_{ij}^2\equiv m_i^2-m_j^2`$ (expressed in eV<sup>2</sup>). For Dirac neutrinos, the matrix $`U`$ is a $`3\times 3`$ unitary matrix, parameterized in terms of three mixing angles $`(\theta ,\beta ,\psi )`$ and a CP-violating phase, $`\delta `$:
$$U=\left(\begin{array}{ccc}c_\theta c_\beta & s_\theta c_\beta & s_\beta \\ -c_\theta s_\beta s_\psi e^{-i\delta }-s_\theta c_\psi & c_\theta c_\psi -e^{-i\delta }s_\theta s_\beta s_\psi & c_\beta s_\psi e^{-i\delta }\\ -c_\theta s_\beta c_\psi +s_\theta s_\psi e^{i\delta }& -s_\theta s_\beta c_\psi -c_\theta s_\psi e^{i\delta }& c_\beta c_\psi \end{array}\right)$$
(7)
in the Maiani representation . We use the abbreviated notations $`c_\theta =\mathrm{cos}\theta `$, $`s_\theta =\mathrm{sin}\theta `$, etc., and we shall henceforth set the CP-violating phase $`\delta =0`$. Assuming that the Super-Kamiokande data for electron-like events,
$`\mathcal{R}_e={\displaystyle \frac{N_e^{\prime }}{N_e}},`$ (8)
is unity over its relevant $`L/E`$ range (an assumption which is certainly valid within the systematic and statistical errors) implies that
$`P_{ee}+rP_{\mu e}=1.`$ (9)
Furthermore, from the unitarity condition, we have
$`P_{ee}+P_{e\mu }+P_{e\tau }=1.`$ (10)
Consequently, for a vanishing CP-violating phase $`\delta =0`$ one has $`P_{e\mu }=P_{\mu e}`$, and we obtain
$`(r-1)P_{e\mu }=P_{e\tau }.`$ (11)
Using the explicit expressions for the oscillation probabilities $`P_{e\mu }`$ and $`P_{e\tau }`$ yields
$`U_{e1}U_{e2}\left[(r-1)U_{\mu 1}U_{\mu 2}-U_{\tau 1}U_{\tau 2}\right]\mathrm{sin}^2(\phi _{12})`$ $`+`$
$`U_{e1}U_{e3}\left[(r-1)U_{\mu 1}U_{\mu 3}-U_{\tau 1}U_{\tau 3}\right]\mathrm{sin}^2(\phi _{13})`$ $`+`$
$`U_{e2}U_{e3}\left[(r-1)U_{\mu 2}U_{\mu 3}-U_{\tau 2}U_{\tau 3}\right]\mathrm{sin}^2(\phi _{23})`$ $`=`$ $`0.`$ (12)
Since this condition should hold for all relevant values of $`L/E`$, we obtain the following system of three equations with three unknowns ($`\theta `$, $`\beta `$, and $`\psi `$):
$`U_{e1}U_{e2}\left[(r-1)U_{\mu 1}U_{\mu 2}-U_{\tau 1}U_{\tau 2}\right]`$ $`=`$ $`0,`$ (13)
$`U_{e1}U_{e3}\left[(r-1)U_{\mu 1}U_{\mu 3}-U_{\tau 1}U_{\tau 3}\right]`$ $`=`$ $`0,`$ (14)
$`U_{e2}U_{e3}\left[(r-1)U_{\mu 2}U_{\mu 3}-U_{\tau 2}U_{\tau 3}\right]`$ $`=`$ $`0.`$ (15)
In the following we shall first investigate the possible solutions of this system of equations. We then obtain the advertised result on the neutrino-mass degeneracy. Finally, we briefly study the compatibility of the resulting neutrino-oscillation mixing matrix for the LSND and Super-Kamiokande experiments, as well as its consequences for the solar neutrino deficit.
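Before enumerating the solutions, it may help to see Eqs. (5)–(7) and the flatness constraint in executable form. The sketch below is our illustration only (NumPy, with function names of our choosing, restricted to the real, $`\delta =0`$ case used throughout this Letter), not code from the original analysis.

```python
import numpy as np

def mixing_matrix(theta, beta, psi):
    """Eq. (7) with the CP phase delta set to zero (real, orthogonal U)."""
    ct, st = np.cos(theta), np.sin(theta)
    cb, sb = np.cos(beta), np.sin(beta)
    cp, sp = np.cos(psi), np.sin(psi)
    return np.array([
        [ct*cb,              st*cb,              sb],
        [-ct*sb*sp - st*cp,   ct*cp - st*sb*sp,   cb*sp],
        [-ct*sb*cp + st*sp,  -st*sb*cp - ct*sp,   cb*cp]])

def osc_prob(U, phi):
    """P(nu_l -> nu_l') of Eq. (5) for real U; phi = (phi_12, phi_13, phi_23),
    with the kinematic phases of Eq. (6) supplied by the caller."""
    P = np.eye(3)
    for (i, j), ph in zip([(0, 1), (0, 2), (1, 2)], phi):
        for l in range(3):          # initial flavour: 0=e, 1=mu, 2=tau
            for lp in range(3):     # final flavour
                P[l, lp] -= 4.0*U[lp, i]*U[l, i]*U[lp, j]*U[l, j]*np.sin(ph)**2
    return P

def flatness_violation(U, phi, r=2.0):
    """P_ee + r*P_{mu e} - 1; its vanishing for all phi is the condition
    behind Eqs. (9) and (12)."""
    P = osc_prob(U, phi)
    return P[0, 0] + r*P[1, 0] - 1.0
```

A generic choice of angles gives a nonzero, $`\phi `$-dependent violation; the special matrices constructed below make it vanish identically.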
## 3 The Resulting Neutrino-Oscillation Mixing Matrices
One possible class of solutions to the system of equations above is given by requiring that the expressions in the square brackets in Eqs.(13)–(15) be zero simultaneously. However, this leads to a rather uninteresting class of mixing matrices, namely the unit matrix and others that are basically equivalent to it, under appropriate redefinitions of the mass eigenstates. Hence, the problem of the $`L/E`$ flatness in $`_e`$ is trivially solved, but there would be no neutrino oscillations either. This would contradict the existing data and we therefore discard such solutions.
A more interesting class of solutions follows when one of $`U_{e1}`$, $`U_{e2}`$, or $`U_{e3}`$ is zero. This determines one of the mixing angles and two of the three equations are trivially satisfied. The remaining equation fully determines a second mixing angle. We discuss this class of solutions below.
### 3.1 The $`U_{e1}=0`$ Case
Since $`U_{e1}=c_\theta c_\beta `$, one solution to $`U_{e1}=0`$ is $`c_\beta =0`$. However, this implies that both $`U_{e1}`$ and $`U_{e2}`$ vanish identically, which in turn means that there are no $`\nu _e\leftrightarrow \nu _\mu `$ and no $`\nu _e\leftrightarrow \nu _\tau `$ oscillations. We shall, therefore, discard this solution, as it is not of interest in the context of the existing data. The other solution to $`U_{e1}=0`$ is $`c_\theta =0`$, which implies $`s_\theta =\pm 1`$. Considering only the $`s_\theta >0`$ solution (as an illustration), the mixing matrix becomes:
$$U=\left(\begin{array}{ccc}0& c_\beta & s_\beta \\ -c_\psi & -s_\beta s_\psi & c_\beta s_\psi \\ s_\psi & -s_\beta c_\psi & c_\beta c_\psi \end{array}\right).$$
(16)
From Eq.(15) we have
$`s_\beta c_\beta [(r-1)s_\psi ^2-c_\psi ^2]=0,`$ (17)
with two trivial solutions, $`s_\beta =0`$ and $`c_\beta =0`$. They imply that $`U_{e1}=0`$ and $`U_{e3}=0`$, or $`U_{e1}=0`$ and $`U_{e2}=0`$, respectively. Both cases lead to no $`\nu _e\leftrightarrow \nu _\mu `$ and no $`\nu _e\leftrightarrow \nu _\tau `$ oscillations, and are discarded as discussed above. More generally, however:
$`s_\psi `$ $`=`$ $`1/\sqrt{r},`$ (18)
$`c_\psi `$ $`=`$ $`\sqrt{r1}/\sqrt{r}.`$ (19)
Notice that solutions are allowed for both $`s_\psi <0`$ and $`c_\psi <0`$, but we shall restrict our illustrative discussion solely to the $`s_\psi >0`$ and $`c_\psi >0`$ case (the remaining $`U`$’s can be easily enumerated by the reader). At this point we explicitly set $`r=2`$ and thus the full mixing matrix becomes:
$`U=\left(\begin{array}{ccc}0& c_\beta & s_\beta \\ -1/\sqrt{2}& -s_\beta /\sqrt{2}& c_\beta /\sqrt{2}\\ 1/\sqrt{2}& -s_\beta /\sqrt{2}& c_\beta /\sqrt{2}\end{array}\right).`$ (23)
### 3.2 The $`U_{e2}=0`$ Case
Since $`U_{e2}=s_\theta c_\beta `$, one solution to $`U_{e2}=0`$ is $`c_\beta =0`$, which is discarded as discussed above. The $`s_\theta =0`$ solution implies $`c_\theta =\pm 1`$, and considering only the $`c_\theta >0`$ solution, the mixing matrix becomes:
$$U=\left(\begin{array}{ccc}c_\beta & 0& s_\beta \\ -s_\beta s_\psi & c_\psi & c_\beta s_\psi \\ -s_\beta c_\psi & -s_\psi & c_\beta c_\psi \end{array}\right).$$
(24)
From Eq.(14) we have:
$`s_\beta c_\beta [(r-1)s_\psi ^2-c_\psi ^2]=0,`$ (25)
and disregarding the trivial solutions $`s_\beta =0`$ and $`c_\beta =0`$, the general mixing matrix yields (for $`r=2`$):
$`U=\left(\begin{array}{ccc}c_\beta & 0& s_\beta \\ -s_\beta /\sqrt{2}& 1/\sqrt{2}& c_\beta /\sqrt{2}\\ -s_\beta /\sqrt{2}& -1/\sqrt{2}& c_\beta /\sqrt{2}\end{array}\right).`$ (29)
### 3.3 The $`U_{e3}=0`$ Case
Since $`U_{e3}=s_\beta `$, $`U_{e3}=0`$ implies simply that $`s_\beta =0`$ and thus $`c_\beta =\pm 1`$. Considering only the $`c_\beta >0`$ solution, the mixing matrix becomes:
$`U=\left(\begin{array}{ccc}c_\theta & s_\theta & 0\\ -s_\theta c_\psi & c_\theta c_\psi & s_\psi \\ s_\theta s_\psi & -c_\theta s_\psi & c_\psi \end{array}\right).`$ (33)
From Eq.(13) we have:
$`s_\theta c_\theta [s_\psi ^2-(r-1)c_\psi ^2]=0,`$ (34)
and disregarding the trivial solutions $`s_\theta =0`$ and $`c_\theta =0`$, the general solution leads to the following mixing matrix (for $`r=2`$):
$`U=\left(\begin{array}{ccc}c_\theta & s_\theta & 0\\ -s_\theta /\sqrt{2}& c_\theta /\sqrt{2}& 1/\sqrt{2}\\ s_\theta /\sqrt{2}& -c_\theta /\sqrt{2}& 1/\sqrt{2}\end{array}\right).`$ (38)
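Using the functions sketched at the end of Section 2 (again purely illustrative, and our own construction), one can check numerically that each of these one-parameter families leaves $`\mathcal{R}_e`$ flat; for instance, for Eq. (38):

```python
U38 = mixing_matrix(theta=0.1, beta=0.0, psi=np.pi/4)   # reproduces Eq. (38)
rng = np.random.default_rng(1)
for phi in rng.uniform(0.0, 10.0, size=(100, 3)):
    assert abs(flatness_violation(U38, phi, r=2.0)) < 1e-12
```

The cancellation is exact because $`U_{e3}=0`$ removes the $`\phi _{13}`$ and $`\phi _{23}`$ terms from $`P_{ee}`$ and $`P_{e\mu }`$, while $`s_\psi ^2=c_\psi ^2=1/2`$ balances the remaining $`\phi _{12}`$ term.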
## 4 The Degenerate Muon and Tau Neutrino Masses
Referring to the obtained neutrino-oscillation matrices $`U`$ above, and taking note of the definition for the expectation value of the neutrino masses (recall that for $`\delta =0`$ all the $`U_{\ell j}`$ elements are real)
$`m(\nu _{\ell })\equiv {\displaystyle \sum _j}U_{\ell j}^2m_j,`$ (39)
we immediately come to the general conclusion on the mass degeneracy of the muon and tau neutrinos:
$`m(\nu _\mu )=m(\nu _\tau ).`$ (40)
For the three cases enumerated above, one readily sees that the “degenerate mass” carries the values
$`{\displaystyle \frac{1}{2}}\left(m_1+s_\beta ^2m_2+c_\beta ^2m_3\right),`$ (41)
$`{\displaystyle \frac{1}{2}}\left(s_\beta ^2m_1+m_2+c_\beta ^2m_3\right),`$ (42)
$`{\displaystyle \frac{1}{2}}\left(s_\theta ^2m_1+c_\theta ^2m_2+m_3\right),`$ (43)
respectively. The enumerated mixing matrices $`U`$ are immediately noted to be equivalent to each other under redefinitions of the underlying mass eigenstates $`m_1`$, $`m_2`$, and $`m_3`$.
## 5 Implications for the Muon-Like Event Ratio: LSND versus Super-Kamiokande
Apart from the mass degeneracy in the muon and tau neutrino masses, a remarkable result that follows directly from the derived neutrino-oscillation matrices is the consistency between the mixing angles observed by LSND and Super-Kamiokande. We briefly discuss this in the following.
The number of muon neutrinos at the Super-Kamiokande detector, $`N_\mu ^{\prime }`$, is given by
$`N_\mu ^{\prime }=N_\mu P_{\mu \mu }+N_eP_{e\mu },`$ (44)
and thus, the muon-like event ratio reads
$`\mathcal{R}_\mu ={\displaystyle \frac{N_\mu ^{\prime }}{N_\mu }}=P_{\mu \mu }+{\displaystyle \frac{1}{r}}P_{e\mu }.`$ (45)
Without loss of generality, let us consider the mixing matrix given by Eq.(38). The $`\nu _\mu `$ survival probability, $`P_{\mu \mu }`$, and the $`P_{e\mu }`$ oscillation probability read
$`P_{\mu \mu }=1-s_\theta ^2c_\theta ^2\mathrm{sin}^2(\phi _{12})-s_\theta ^2\mathrm{sin}^2(\phi _{13})-c_\theta ^2\mathrm{sin}^2(\phi _{23}),`$ (46)
and
$`P_{e\mu }=2s_\theta ^2c_\theta ^2\mathrm{sin}^2(\phi _{12}),`$ (47)
respectively. Therefore, the Super-Kamiokande muon-like event ratio, $`\mathcal{R}_\mu `$, yields
$`\mathcal{R}_\mu =1-s_\theta ^2\mathrm{sin}^2(\phi _{13})-c_\theta ^2\mathrm{sin}^2(\phi _{23}).`$ (48)
Here we have explicitly set $`r=2`$. At this point one cannot proceed any further without additional information on either the mixing angle $`\theta `$, or the mass differences $`\mathrm{\Delta }m_{13}^2`$ and $`\mathrm{\Delta }m_{23}^2`$.
This is the point where the LSND evidence comes into play. As reported in Refs. , the LSND experiment has obtained evidence for neutrino oscillations in both the $`\overline{\nu }_\mu \rightarrow \overline{\nu }_e`$ and $`\nu _\mu \rightarrow \nu _e`$ channels. Although interpreted in terms of the simpler, two-generations neutrino mixing, the allowed regions obtained by LSND help us gain further insight. Within this framework, the $`\nu _\mu \rightarrow \nu _e`$ oscillation probability is given by
$`P_{\mu e}^{LSND}=\mathrm{sin}^2\left(2\mathrm{\Theta }_{LSND}\right)\mathrm{sin}^2\left(\phi _{12}\right),`$ (49)
which is very similar to the expression in Eq.(47). Indeed for the $`\overline{\nu }_\mu \rightarrow \overline{\nu }_e`$ and $`\nu _\mu \rightarrow \nu _e`$ oscillation channels one may effectively identify $`\mathrm{sin}^2\left(2\mathrm{\Theta }_{LSND}\right)`$ with $`2s_\theta ^2c_\theta ^2=\frac{1}{2}\mathrm{sin}^2(2\theta )`$. Therefore, a very small mixing angle $`\mathrm{\Theta }_{LSND}`$, approximately of $`𝒪(10^{-1})`$ – as indeed favored by the allowed regions indicated by LSND – implies a very small mixing angle $`\theta `$, also of $`𝒪(10^{-1})`$, in our formalism. This in turn implies that the muon-like event ratio, $`\mathcal{R}_\mu `$, reads
$`\mathcal{R}_\mu =1-c_\theta ^2\mathrm{sin}^2(\phi _{23})+𝒪(10^{-2}),`$ (50)
as opposed to simply
$`\mathcal{R}_\mu =1-\mathrm{sin}^2\left(2\mathrm{\Theta }_{SK}\right)\mathrm{sin}^2\left(\phi _{23}\right),`$ (51)
if $`\mathcal{R}_\mu `$ were to be expressed in the two-generations neutrino oscillations formalism, where only $`\nu _\mu \rightarrow \nu _\tau `$ transitions are allowed – as interpreted by the Super-Kamiokande group. Therefore, since $`c_\theta \simeq 1`$, it follows that $`\mathrm{sin}^2(2\mathrm{\Theta }_{SK})\simeq 1`$ as well, as indeed reported in Ref. .
## 6 The Solar Neutrino Deficit
If the Super-Kamiokande/LSND consistency is firmly established by future experiments, then the physics of neutrino oscillations shall be found not only to contain massive neutrinos, but may also point towards new physics. This arises from the long-standing solar neutrino deficit, as measured by a variety of experiments, with different sensitivities and detection techniques . Within the framework of neutrino oscillations, the ratio of measured to predicted solar neutrinos, $`R_e`$, is simply given by the $`\nu _e`$ survival probability, $`P_{ee}`$. Using the mixing matrix in Eq.(38), this reads
$`P_{ee}=1-4s_\theta ^2c_\theta ^2\mathrm{sin}^2(\phi _{12}),`$ (52)
with an underlying mass scale $`\mathrm{\Delta }m_{12}^2=𝒪(1)\text{eV}^2`$, as indicated by the LSND experiment. Consequently, the kinematic term $`\mathrm{sin}^2(\phi _{12})`$ effectively averages out to 1/2. Furthermore, since the mixing angle $`\theta `$ is of $`𝒪(10^{-1})`$, as we have argued in the previous Section, the predicted solar neutrino ratio is practically $`R_e=1`$, i.e., no solar neutrino deficit. This is obviously in disagreement with the measured solar neutrino ratio of $`R_e\simeq 0.5`$, as reported by the above-mentioned experiments. A popular solution to the solar neutrino deficit conjectures the existence of a sterile neutrino(s). This may very well be the way nature is. However, before the sterile neutrino solution is invoked, one must make a fundamental observation that the flavour and mass measurements do not commute . This incompatibility of the flavour and mass measurements can lead to a violation of the principle of equivalence, which in turn modifies the standard neutrino oscillation phenomenology in a fundamental manner . Such a violation of the principle of equivalence will take us into new physics. At the same time, this might provide an elegant solution to the solar neutrino anomaly .
## 7 Conclusions
We conclude that the $`L/E`$ flatness of the electron-like event ratio in the Super-Kamiokande data on atmospheric neutrinos implies a mass degeneracy for the muon and tau neutrinos. The obtained results support recent considerations on maximal mixing, bi-maximal mixing, and degenerate neutrino masses . More precise data on the discussed $`L/E`$ flatness would be most helpful in testing the insights gained in this Letter. If the $`L/E`$ flatness of $`\mathcal{R}_e`$ is firmly established in the future by the data, then one would be able to severely constrain the theoretical models for neutrino masses and neutrino oscillations. The general cases considered by us require that one of the $`U_{ej}`$ vanishes. This result is in agreement with the conclusions reached by several authors, see e.g. Ref. . Since all neutrino-oscillation matrices obtained by us are physically equivalent, we have arrived at a unique $`3\times 3`$ neutrino-oscillation mixing matrix that depends only on one angle. This matrix clearly shows the consistency between the mixing angles observed by the LSND and Super-Kamiokande experiments.
We explicitly note that the constraint implied by the $`L/E`$ flatness of the Super-Kamiokande e-like event ratio, as contained in this Letter, is a generalized version of that contained in Ref. . However, this is not the main purpose of this Letter. Within the standard three-neutrino oscillation framework, the mixing matrices that we now obtain exhaust all possibilities consistent with the Super-Kamiokande implied constraint. In particular, we show that the Super-Kamiokande inferred $`\nu _\mu \rightarrow \nu _\tau `$ oscillation (with a complete decoupling from the $`\nu _e`$ oscillations) is too strong a conclusion. The class of solutions consistent with the Super-Kamiokande data, as systematically derived and analyzed here, is significantly richer and in particular leaves important room for oscillations away from, and into, the $`\nu _e`$ channel. In Ref. only a very specific solution was obtained. This Letter obtains a class of new non-trivial and physically interesting solutions. In particular, the LSND and Super-Kamiokande compatibility (as contained in bins where $`L/E`$ for LSND is close to that for Super-Kamiokande) emerges as a significant new result. With this compatibility established, it should now be clear to the neutrino-oscillation community that Super-Kamiokande and LSND have an overlapping and mutually consistent regime in the neutrino-oscillation parameter space. Thus, either something in the non-overlapping regime of the Super-Kamiokande and LSND results must change to accommodate the solar neutrino anomaly – or, we must accept seriously that some new physics is hinted at.
# Waves in Open Systems via Bi-orthogonal Basis
## Abstract
Dissipative quantum systems are sometimes phenomenologically described in terms of a non-hermitian hamiltonian $`H`$, with different left and right eigenvectors forming a bi-orthogonal basis. It is shown that the dynamics of waves in open systems can be cast exactly into this form, thus providing a well-founded realization of the phenomenological description and at the same time placing these open systems into a well-known framework. The formalism leads to a generalization of norms and inner products for open systems, which in contrast to earlier works is finite without the need for regularization. The inner product allows transcription of much of the formalism for conservative systems, including perturbation theory and second-quantization.
## Introduction
Dissipative systems can be discussed in many ways. The fundamental approach recognizes that energy flows from the system $`S`$ to a bath $`B`$, whose degrees of freedom are then eliminated from the path integral or equations of motion . While rigorous, this approach is inevitably complicated, and often leads to integro-differential equations for time evolution. An alternate phenomenological approach postulates a non-hermitian hamiltonian (NHH) $`H`$, whose left and right eigenvectors form a bi-orthogonal basis (BB) . These NHHs with discrete BBs can sometimes be obtained from a full quantum theory, but usually under some approximations .
This Letter discusses a class of models of waves in open systems. These are scalar fields $`\varphi (x,t)`$ in 1 d, described by the wave equation. Outgoing wave boundary conditions cause the system to be dissipative. We show that these open systems are exactly described by an NHH with a BB formed by the resonances or quasinormal modes (QNMs). This connection on the one hand provides the phenomenological approach with a realization which has an impeccable pedigree rigorously traceable to the fundamental approach, and on the other hand places earlier work on such open systems into a familiar framework. A generalized inner product emerges; in contrast to previous works, it is finite and requires no regularization. Under the generalized inner product, the hamiltonian $`H`$ is symmetric, which opens the way to a clean formulation of perturbation theory and second-quantization in terms of the QNMs of the system.
## Waves in Open Systems
We consider waves in 1 d described by $`\left[\rho (x)\partial _t^2-\partial _x^2\right]\varphi (x,t)=0`$ on the half line $`[0,\infty )`$, with $`\varphi (x=0,t)=0`$ and $`\varphi (x,t)`$ approaching zero rapidly as $`x\rightarrow \infty `$ . Let the system $`S`$ be the “cavity” $`I=[0,a]`$, and the bath $`B`$ be $`(a,\infty )`$, where $`\rho (x)=1`$. Energy is exchanged between $`S`$ and $`B`$ only through the boundary $`x=a`$. We impose the outgoing wave condition $`\partial _t\varphi (x,t)=-\partial _x\varphi (x,t)`$ for $`x>a`$.
This mathematical model is relevant for many physical systems: the vibrations of a string with mass density $`\rho `$ ; the scalar model of EM in an optical cavity (the node at $`x=0`$ is a totally reflecting mirror, and a partially transmitting mirror at $`x=a`$ can be modeled by $`\rho (x)=M\delta (x-a)`$) ; or gravitational radiation from a star with radius $`a`$ . The wave equation can be mapped to the Klein-Gordon equation with a potential $`V(x)`$ , which is relevant for gravitational waves ; here $`\varphi `$ is the perturbation about the spherical background metric of a star, $`x`$ is a radial coordinate related to the circumferential radius $`r`$, and $`V`$ describes the wave scattering by the background metric. Gravitational waves carrying the signature of the QNMs of black holes may soon be observed by new detectors such as LIGO and VIRGO .
For the “cavity” $`I=[0,a]`$, the outgoing condition is imposed at $`x=a^+`$ only. The QNMs are factorized solutions on $`I`$: $`\varphi (x,t)=f_n(x)e^{-i\omega _nt}`$, with $`[\partial _x^2+\rho (x)\omega _n^2]f_n(x)=0`$. These are observed in the frequency domain as resonances of finite width (e.g., the EM spectrum seen outside an optical cavity) or in the time domain as damped oscillations (e.g., the numerically simulated gravitational wave signal from the vicinity of a black hole). It would obviously be interesting to be able to describe these QNMs in a manner parallel to the normal modes (NM) of a conservative system.
These QNMs form a complete set on $`I`$ if (a) $`\rho (x)`$ has a discontinuity at $`x=a`$ to provide a natural demarcation of the “cavity”, and (b) $`\rho (x)=1`$ for $`x>a`$, so that outgoing waves are not scattered back into the system . Under these conditions, one can expand $`\varphi (x,t)=\sum _na_nf_n(x)e^{-i\omega _nt}`$ for $`x\in I`$ and $`t\ge 0`$, thus allowing an exact description of the system in terms of discrete variables (modes spaced by $`\mathrm{\Delta }\omega \simeq \pi /a`$) rather than a continuum. Nevertheless, the analogy with conservative systems is still not apparent: Is there a natural inner product (with which to do projections and thus to prove the uniqueness of expansions)? Is there a norm to scale wavefunctions (noting that $`f_n`$ diverges at spatial infinity)? Can perturbation theory be formulated (noting that the usual proofs require an inner product to define orthogonality)? Can the theory be second-quantized? This Letter shows that all these questions have natural answers in the language of a BB.
## Phenomenological non-Hermitian Hamiltonians and Bi-orthogonal Bases
Though not rigorously founded upon a genuine quantum theory, NHHs with BBs are nevertheless well developed as a postulatory system . Consider a space $`W`$ on which is defined a non-hermitian operator $`H`$ and a conjugate linear duality transformation $`D`$: $`D\left(\alpha |\mathrm{\Phi }\rangle +\beta |\mathrm{\Psi }\rangle \right)=\alpha ^{\ast }D|\mathrm{\Phi }\rangle +\beta ^{\ast }D|\mathrm{\Psi }\rangle `$, such that $`DH=H^{\dagger }D`$ . The BB consists of the two sets of eigenvectors $`|F_n\rangle \in W`$ and $`|G_n\rangle =D|F_n\rangle \in \stackrel{~}{W}=D(W)`$ satisfying $`H|F_n\rangle =\omega _n|F_n\rangle `$, $`H^{\dagger }|G_n\rangle =\omega _n^{\ast }|G_n\rangle `$, where the two eigenvalues are related by duality. By projecting the eigenvalue equations on $`\langle G_n|`$ and $`|F_n\rangle `$, it follows easily that $`\langle G_n|F_m\rangle =0`$ for $`m\ne n`$.
It is usually assumed that these eigenstates are complete, so that any vector can be expanded as $`|\mathrm{\Phi }\rangle =\sum _na_n|F_n\rangle `$, with $`a_n=\langle G_n|\mathrm{\Phi }\rangle /\langle G_n|F_n\rangle `$, leading immediately to the resolution of the identity and of the time-evolution operator
$`1`$ $`=`$ $`{\displaystyle \sum _n}{\displaystyle \frac{|F_n\rangle \langle G_n|}{\langle G_n|F_n\rangle }}`$ (1)
$`e^{-iHt}`$ $`=`$ $`{\displaystyle \sum _n}{\displaystyle \frac{|F_n\rangle e^{-i\omega _nt}\langle G_n|}{\langle G_n|F_n\rangle }}`$ (2)
which in principle solves all the dynamics .
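The algebra above is easy to see at work numerically. The following sketch is our own (NumPy, finite-dimensional, with tolerances chosen loosely); it builds the BB of a generic NHH and checks bi-orthogonality and Eq. (1):

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.normal(size=(5, 5)) + 1j*rng.normal(size=(5, 5))   # generic NHH

w, F = np.linalg.eig(H)             # right eigenvectors: H|F_n> = w_n |F_n>
wd, G = np.linalg.eig(H.conj().T)   # left eigenvectors: H^dag|G_n> = w_n^*|G_n>
G = G[:, [int(np.argmin(np.abs(wd - wn.conj()))) for wn in w]]  # pair G_n, F_n

overlap = G.conj().T @ F            # <G_n|F_m>: diagonal up to rounding
assert np.allclose(overlap, np.diag(np.diag(overlap)), atol=1e-8)

identity = sum(np.outer(F[:, n], G[:, n].conj())/overlap[n, n] for n in range(5))
assert np.allclose(identity, np.eye(5), atol=1e-8)          # Eq. (1)
```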
## Bi-orthogonal Basis for the Wave Equation
BBs are widely used in many disciplines, for example in the theory of wavelets and to describe excited molecular systems . The left and right eigenvectors of the Maxwell operator are typically used to represent the Green’s function for EM fields in open cavities , or to evaluate Fox-Li states . Here we seek a parallel with quantum mechanics, similar to earlier works for generalized oscillators and the classical wave equation (without dissipation due to leakage) . The problem at hand, where there is dissipation due to outgoing waves, was formulated in this manner recently , and is briefly sketched below, especially as it relates to the BB.
It is natural to introduce the conjugate momentum $`\widehat{\varphi }=\rho (x)\partial _t\varphi `$, and the two-component vector $`|\mathrm{\Phi }\rangle =(\varphi ,\widehat{\varphi })^\mathrm{T}`$. In terms of this, the dynamics can be cast into the Schrödinger equation with the NHH
$$H=i\left(\begin{array}{cc}0& \rho (x)^{-1}\\ \partial _x^2& 0\end{array}\right)$$
(3)
The identification $`\widehat{\varphi }=\rho \partial _t\varphi `$ follows from the evolution equation .
The natural definition of an inner product between $`|\mathrm{\Psi }\rangle =(\psi ,\widehat{\psi })^\mathrm{T}`$ and $`|\mathrm{\Phi }\rangle =(\varphi ,\widehat{\varphi })^\mathrm{T}`$ on $`[0,\infty )`$ is
$$\langle \mathrm{\Psi }|\mathrm{\Phi }\rangle =\int _0^{\infty }\left(\psi ^{\ast }\varphi +\widehat{\psi }^{\ast }\widehat{\varphi }\right)𝑑x$$
(4)
However, on account of the assumed asymptotic behavior, the integral is convergent.
For outgoing waves, we consider only the space $`U`$ of such vectors $`|\mathrm{\Phi }\rangle `$ defined on $`[0,\infty )`$ which satisfy the outgoing condition $`\widehat{\varphi }=-\varphi ^{\prime }`$ for $`x>a`$. The bath variables are eliminated simply but exactly by projecting to the space $`W`$ of vectors $`|\mathrm{\Phi }\rangle `$ defined on $`I`$ and which satisfy $`\widehat{\varphi }=-\varphi ^{\prime }`$ at $`x=a^+`$. The QNMs are right-eigenvectors of $`H`$: $`|F_n\rangle \equiv (f_n,\widehat{f}_n)^\mathrm{T}=(f_n,-i\omega _n\rho f_n)^\mathrm{T}`$. The duality transformation is $`D(\varphi _1,\varphi _2)^\mathrm{T}=-i(\varphi _2^{\ast },\varphi _1^{\ast })^\mathrm{T}`$.
For open systems, a crucial concept is the inner product between one vector and the dual of another, to which we give a compact notation:
$$(\mathrm{\Psi },\mathrm{\Phi })\equiv \langle D\mathrm{\Psi }|\mathrm{\Phi }\rangle =i\int _0^{\infty }\left(\widehat{\psi }\varphi +\psi \widehat{\varphi }\right)𝑑x$$
(5)
which is linear in both vectors, and cross-multiplies the two components, properties to be emphasized below. This bilinear map plays the role of the inner product for conservative systems.
Our notation does not distinguish between functions (say $`|\mathrm{\Phi }`$) defined on $`[0,\mathrm{})`$ and their restrictions to $`I`$; the former are in $`U`$ and the latter are in $`W`$, with the association between them being many-to-one. As written in (5), the inner product involves the wavefunctions outside $`I`$, i.e., it appears to be defined on $`U`$ rather than $`W`$. However, one can completely eliminate the bath degrees of freedom: because of the outgoing conditions, the integrand on $`(a,\mathrm{})`$ reduces to a total derivative, and (5) can be written purely in terms of the inside variables :
$$(\mathrm{\Psi },\mathrm{\Phi })=i\left\{\int _0^{a^+}\left(\widehat{\psi }\varphi +\psi \widehat{\varphi }\right)𝑑x+\psi (a^+)\varphi (a^+)\right\}$$
(6)
The surface term is the only remnant of the outside. Thus, (6) can be regarded as a bilinear map (or loosely an inner product) defined on $`W`$ . The somewhat peculiar structure (e.g., the cross-multiplication between the two components and the appearance of the surface term) is now seen to arise naturally from (4) upon the introduction of the duality transformation. In the limit where the escape of the waves is small, the generalized norm of an eigenvector $`(F_n,F_n)`$ reduces to $`2\omega _n`$ times the conventional norm; this is the reason for choosing the phase convention for $`D`$. The ability to normalize QNM wavefunctions is nontrivial, since $`f_n`$ diverges at spatial infinity, and a naive expression such as $`\int _0^{\infty }|f_n|^2𝑑x`$ would not be appropriate.
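The total-derivative step connecting Eqs. (5) and (6) is worth spelling out (our explicit rendering of the statement above): for $`x>a`$ the outgoing condition gives $`\widehat{\psi }=-\psi ^{\prime }`$ and $`\widehat{\varphi }=-\varphi ^{\prime }`$, so that

$$\widehat{\psi }\varphi +\psi \widehat{\varphi }=-(\psi \varphi )^{\prime },\qquad i\int _{a^+}^{\infty }\left(\widehat{\psi }\varphi +\psi \widehat{\varphi }\right)𝑑x=i\psi (a^+)\varphi (a^+),$$

with the boundary term at $`x\rightarrow \infty `$ dropping for the rapidly vanishing fields assumed at the outset.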
The diagonal version $`(\mathrm{\Phi },\mathrm{\Phi })`$ for the special case of QNMs was first introduced by Zeldovich in a form that involved (a) $`\varphi `$ outside $`I`$ (so that it is defined on $`U`$ rather than $`W`$) and (b) regularization of the divergent integral rather than a surface term; it was later re-cast into the form (6) and generalized to 3 d and EM fields . The off-diagonal form $`(\mathrm{\Psi },\mathrm{\Phi })`$ was later introduced . Here, by relating the discussion to bi-orthogonal states and the duality transformation, it is seen that these concepts emerge naturally, including the specific form of (6).
An inner product equivalent to (6) has also been discussed extensively from other perspectives . In these works, the inner product is defined on $`[0,\infty )`$ rather than a finite interval, with the consequent divergence (e.g., for the inner product between two QNMs each growing exponentially at infinity) handled either (a) by a regulating factor $`\mathrm{exp}(-ϵx^2)`$, $`ϵ\rightarrow 0^+`$, (b) analytic continuation in the wavenumber $`k`$, or (c) complex rotation in the coordinate $`x`$. Each of these procedures has its limitations; in contrast, (6) makes no reference to the outside or bath, and is computationally convenient and manifestly finite.
Under this bilinear map, $`H`$ is symmetric: $`(\mathrm{\Psi },H\mathrm{\Phi })=(\mathrm{\Phi },H\mathrm{\Psi })`$, which follows very simply from $`DH=H^{\dagger }D`$. This key property is analogous to the hermiticity of $`H`$ for conservative systems. It is nontrivial, in that surface terms that arise in the integration by parts are exactly compensated by the surface terms in (6). This symmetry property leads, in the usual way, to the orthogonality of non-degenerate eigenfunctions.
The completeness relation (1) is a dyadic equation. Its $`(1,2)`$ and $`(1,1)`$ components lead to the sum rules
$`{\displaystyle \sum _n}{\displaystyle \frac{f_n(x)f_n(y)}{2\omega _n}}`$ $`=`$ $`0`$ (7)
$`{\displaystyle \sum _n}{\displaystyle \frac{1}{2}}f_n(x)f_n(y)\rho (x)`$ $`=`$ $`i\delta (x-y)`$ (8)
for $`x,y\in I`$, which have been derived and discussed extensively .
The completeness and orthogonality relationships establish the QNMs as a BB, and moreover allow the time evolution to be solved as $`|\mathrm{\Phi }(x,t)\rangle =\sum _na_ne^{-i\omega _nt}|F_n\rangle `$, where $`a_n=\langle G_n|\mathrm{\Phi }(x,0)\rangle /(2\omega _n)`$. This is a discrete and exact representation of the dynamics, even though $`I`$ is open to an infinite universe with a continuum of states. Completeness is not proved in most other applications of NHHs to physical systems.
## Perturbation theory
These notions allow much of the standard formalism in quantum mechanics to be carried over. As one example consider time-independent perturbation theory. Let $`\rho _0(x)^{-1}`$ be changed to $`\rho (x)^{-1}=\rho _0(x)^{-1}\left[1+\mu V(x)\right]`$, where $`|\mu |\ll 1`$ and $`V(x)`$ has support in $`I`$. Then the perturbation to the eigenvalues and eigenfunctions can be written in the standard Rayleigh-Schrödinger form, in terms of a discrete series . These formulas, though superficially identical with textbook formulas for conservative systems, are nontrivial in two ways. First, the perturbative formulas apply to complex eigenvalues. Second, the use of resonances implies that there is no “background”, and expressing the corrections in terms of discrete modes also means that the small parameter of expansion is $`\mu /|\mathrm{\Delta }\omega |\sim \mu a/\pi `$, which would not have been apparent in terms of the states of the continuum.
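For concreteness, the transcription takes the familiar form (our explicit rendering, with $`\mathrm{\Delta }H=H-H_0`$ the change induced by $`\mu V`$, and the QNMs normalized so that $`(F_n,F_n)=2\omega _n`$; the placement of the normalization in the denominators is a convention on our part):

$$\omega _n^{(1)}=\frac{(F_n,\mathrm{\Delta }HF_n)}{(F_n,F_n)},\qquad |F_n^{(1)}\rangle =\sum _{m\ne n}\frac{(F_m,\mathrm{\Delta }HF_n)}{(\omega _n-\omega _m)(F_m,F_m)}|F_m\rangle ,$$

the only change from the conservative case being the bilinear map in place of the hermitian inner product.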
The derivation of these results simply follows the conservative case (everywhere replacing inner products by the bilinear map $`(\mathrm{\Psi },\mathrm{\Phi })`$), and need not be repeated.
## Discussion
We have established an exact correspondence between phenomenological NHHs and waves in a class of open systems. This relationship provides a well-founded realization of NHHs. Because we start with a hamiltonian system and remove the bath degrees of freedom without approximations, these open systems can be second-quantized . In other words, one can discuss photons in open cavities using BBs, which makes this class of examples unique and interesting. The relationship also places these open systems into a well-known and convenient framework. Thus, the linear space structure, orthogonality and completeness can all be derived naturally, by transcribing usual derivations for conservative systems and everywhere replacing the inner product by $`(\mathrm{\Psi },\mathrm{\Phi })`$.
The formalism discussed here also applies to the Klein-Gordon equation with a potential $`V(x)`$ , which applies, among other things, to linearized gravitational waves propagating away from a black hole. The first-order perturbation result for the QNM frequencies has been used to understand the shifts in the gravitational wave frequencies when a black hole is surrounded by an accretion shell .
The wave equation discussed here may be regarded as a physical realization of BBs for open systems. Many other inequivalent realizations arise when one considers outgoing waves in a spherically symmetric 3-d system; each angular momentum $`l`$ leads to realizations in which the surface terms in the inner product involve $`l`$ radial derivatives .
However, the entire formalism refers to systems described by second-order differential equations, so that two sets of initial data, namely $`\varphi `$ and $`\widehat{\varphi }`$, are required, and the outgoing condition is expressed as a constraint between them. The formalism does not apply in its entirety to systems described by first-order differential equations, e.g., $`\alpha `$-decays described by the Schrödinger equation with Gamow boundary condition. In any event, the Schrödinger equation formally gives unbounded signal speeds and does not possess outgoing and incoming sectors related by time reversal; thus the concept of outgoing waves is actually quite different. Nevertheless, if one is interested only in frequency domain problems, e.g., eigenvalue problems and time-independent perturbation theory, then the formalism survives even in this case. This is most easily appreciated by starting with the Klein-Gordon equation and simply relabelling $`\omega ^2\rightarrow \omega `$.
Using $`(\mathrm{\Psi },\mathrm{\Phi })`$ rather than the equivalent form $`\langle D\mathrm{\Psi }|\mathrm{\Phi }\rangle `$ allows all reference to $`D`$ to be avoided. However, $`(\mathrm{\Psi },\mathrm{\Phi })`$ is a bilinear map (rather than being linear in the ket and conjugate linear in the bra). This property is quite general, since $`D`$ is conjugate linear. But in most applications of the inner product (e.g., for projections), it does not matter whether the map is linear or conjugate linear in the bra; this is why results from conservative systems can be carried over. The only property that is lost is the positivity of $`(\mathrm{\Phi },\mathrm{\Phi })`$, which is unsurprising for a dissipative system. Thus it is useful to think of the states of quantum dissipative systems as vectors in a linear space $`W`$ endowed with such a bilinear map, which is the generalization of the notion of a Hilbert space. Time-evolution is then generated by an operator $`H`$ which is symmetric.
The open systems described here are genuinely dissipative, with $`\text{Im }\omega _n<0`$. This contrasts with some models with NHHs which are nevertheless conservative . For infinite-dimensional NHH models, completeness of the BB is usually assumed, but difficult to prove. Through these wave systems, we have provided explicit examples where completeness can be proved (if the discontinuity and “no tail” conditions are met), as well as examples where the basis is not complete (if these conditions are not met). These should also be useful in furthering understanding of NHH models.
We thank C. K. Au, E. S. C. Ching, S. Y. Liu and A. Maassen van den Brink for discussions. This work is supported in part by the Hong Kong Research Grants Council (Grant no. 452/95P). The work of WMS is also supported by the US NSF (Grant no. PHY 96-00507), and by the Institute of Mathematical Sciences of The Chinese University of Hong Kong. The work of CPS at The Chinese University of Hong Kong is also supported by a C. N. Yang Fellowship.
## 1 Introduction
It is very well known that the center symmetry is of crucial importance for QCD on the lattice. At the beginning of the eighties it was shown that the deconfinement phase transition is connected with the spontaneous breaking of center symmetry . In the confined phase field configurations are center symmetric, leading to symmetric distributions of Polyakov loops and to infinite energy of single quarks. In the deconfined phase one center element is favoured by the Polyakov loop distribution, and quark charges can be screened by the gluon field.
There is now increasing evidence that center symmetry is not only relevant as an order parameter for confinement, but is also the crucial concept in understanding how confinement comes about. The idea that center vortices are responsible for confinement was put forward at the end of the 70’s , but numerical evidence in favor of this idea is rather recent . Our principal tool for locating vortices, and investigating their effect on gauge-invariant quantities, is “center projection” in maximal center gauge. Maximal center gauge is a gauge where all link variables are rotated as close as possible to center elements of the gauge group. Center projection is a mapping from the SU(2) link variables to $`Z_2`$ link variables. “P-vortices”, formed from plaquettes with link product equal to $`-1`$ (we will call them P-plaquettes), are simply the center vortices of the projected $`Z_2`$ gauge-field configurations.
P-vortices on the projected lattice are thin, surface-like objects, while center vortices in the unprojected configuration should be surface-like objects of some finite thickness. Our numerical evidence indicates that P-vortices on the projected lattice locate thick center vortices on the unprojected lattice. We also find that these thick vortices are physical objects, and that the disordering effect of such vortices is responsible for the entire QCD string tension. Details may be found in ref. ; a discussion of Casimir scaling in the context of the center vortex theory is found in ref. and .
In this article we discuss the structure of P-vortices in center-projected field configurations. In section two we show that P-vortices tend to form very large vortices (as required if they are the driving mechanism behind quark confinement, see also ref. ), and that small-scale fluctuations of the vortex surfaces don’t contribute to the string tension. Further, we determine the orientability and the Euler characteristic of these vortices. In section three we investigate the structure of P-vortices at finite temperature. We find, in agreement with Langfeld et al. , that in the deconfined phase the vortices are oriented along timelike surfaces in the dual lattice, and are closed by the lattice periodicity; thereby explaining the differing behaviour of timelike and spacelike Wilson loops in this phase.
## 2 P-vortices at zero temperature
### 2.1 Finding P-vortices
P-vortices are identified by first fixing to the direct version of maximal center gauge, which in SU(2) gauge theory maximizes
$$\sum _{x,\mu }\left|\text{Tr}U_\mu (x)\right|^2.$$
(1)
Then we map the SU(2) link variables $`U_\mu (x)`$ to $`Z_2`$ elements
$$Z_\mu (x)=\text{sign Tr}[U_\mu (x)].$$
(2)
The plaquettes with $`Z_{\mu ,\nu }(x)=Z_\mu (x)Z_\nu (x+\widehat{\mu })Z_\mu (x+\widehat{\nu })Z_\nu (x)=-1`$ are the “P-plaquettes.” The corresponding dual plaquettes, on the dual lattice, form the closed surfaces (in D=4 dimensions) associated with P-vortices. This can be easily understood by constructing such a P-vortex out of a trivial field configuration with links $`Z_\mu =1`$. By flipping a single link to $`Z_\mu =-1`$ the six space-time plaquettes attached to this link form an elementary P-vortex. In dual space the corresponding plaquettes form the surface of a cube. By flipping neighbouring links in real space we get cubes in dual space attached to each other. The dual vortex is the connected surface of these cubes.
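A minimal sketch of this identification step (our own NumPy illustration; the array layout and function name are assumptions, not code from the original study):

```python
import numpy as np

def p_plaquettes(Z):
    """Z has shape (4, L, L, L, L); Z[mu, x] = +-1 is the projected link
    Z_mu(x) on a periodic L^4 lattice.  For each plane (mu, nu) a boolean
    array marks the P-plaquettes, i.e. plaquettes with Z_{mu,nu}(x) = -1."""
    marked = {}
    for mu in range(4):
        for nu in range(mu + 1, 4):
            # ordered product of the four +-1 links around the plaquette
            plaq = (Z[mu] * np.roll(Z[nu], -1, axis=mu)
                    * np.roll(Z[mu], -1, axis=nu) * Z[nu])
            marked[(mu, nu)] = (plaq == -1)
    return marked
```

The fraction $`p`$ of marked plaquettes then feeds directly into Eq. (5) below.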
The distribution of these P-vortices in space-time determines the string tension. The number $`n`$ of piercings of P-vortices through a Wilson loop determines the value of the projected Wilson loop $`W_{cp}(I,J)`$ of size $`A=I\times J`$
$$W_{cp}(I,J)=(-1)^n.$$
(3)
If $`p`$ is the probability that a plaquette belongs to a P-vortex, then by assuming the independence of piercings we get for the expectation value of $`W_{cp}(I,J)`$
$$\langle W_{cp}(I,J)\rangle =\left[(1-p)\cdot 1+p\cdot (-1)\right]^A=(1-2p)^A=e^{-\sigma _{cp}A}\approx e^{-2pA},$$
(4)
where the string tension in center projection is
$$\sigma _{cp}=-\mathrm{ln}(1-2p)\approx 2p.$$
(5)
However, for small vortices the independence assumption of piercings is not fulfilled, simply because one piercing is always correlated with another piercing nearby, and therefore small vortices on average do not contribute to the area law fall-off of large Wilson loops. For this reason, confining vortex configurations will have to be very large, with an extension comparable to the size of the lattice. Fig. 1 shows the P-vortex plaquettes in a single time-slice of an equilibrium field configuration on a $`12^4`$-lattice at $`\beta =2.3`$.
The above-defined probability $`p`$ scales nicely with the inverse coupling $`\beta `$. This was shown in refs. . However, already from Fig. 1 of ref. it can be seen that the $`\chi (1,1)`$ Creutz ratios lie above the asymptotic string tension $`\sigma `$, and therefore the values of $`p`$ come out higher than those of $`f`$ that can be inferred from $`\sigma `$ using
$$f=(1-e^{-\sigma })/2.$$
(6)
The values of $`p`$ and $`f`$ are shown in Fig. 2 for various $`\beta `$-values. Here $`p`$ is denoted as “unsmoothed” (to stress the difference from values extracted using a smoothing procedure that will be introduced in section 2.2), while $`f`$ is denoted as “stringtension” in Fig. 2 and was extracted from $`\chi (3,3)`$ Creutz ratios of the center projected field configurations. As shown in ref. the values of $`\chi (3,3)`$ are in good agreement with the determinations of the string tension by Bali et al. . Other data sets shown in Fig. 2 denoted as “$`n`$-smoothing” will be discussed in section 2.2.
There arises the interesting question of the origin of the difference between $`p`$, depicted in Fig. 2 under the name “unsmoothed”, and $`f`$. Introducing various smoothing methods for P-vortices we will show below that this difference originates in short-distance fluctuations of P-vortex surfaces.
### 2.2 Size of P-vortices
One of the most basic properties of vortices which is connected with the above raised questions is their size. In order to check the size and number of P-vortices we determine neighbouring P-plaquettes, which can be easily done on the dual lattice. In most cases there is no doubt about the neighbouring plaquettes, since most dual links connect only two dual P-plaquettes. But in some cases there appear ambiguities, when dual links are attached to 4 or 6 dual P-plaquettes. Below we investigate these ambiguities in more detail and discuss possible resolutions.
In general P-vortices can be in contact at sites, links and plaquettes. Let’s have a look at these different possibilities. A contact point is of no importance since we define the connectedness via common links. For our case of the $`Z_2`$ center group a plaquette belonging to two independent vortices leads to a fusion of these vortices. In this context, it is important to understand whether the irregular structure of P-vortices depicted in Fig. 1 is due to such fusion processes of simpler vortices. The remaining possibility, a contact of P-vortices at links, needs a more detailed discussion.
A simple example of such field configurations is two cubes touching as in Fig. 3.
For such a configuration in dual space it is not clear whether it forms one or two vortices. At the given length scale there is no unique solution to the question of connectedness. Connecting 1 with 2 and 3 with 4 would result in one vortex, connecting 1 with 3 and 2 with 4 in two separate vortices. In most cases the situation can be resolved by postponing the decision about the connectedness of these plaquettes until, by following the vortex surface in all other directions, the indicated plaquettes (usually) turn out to be members of the same vortex. There appear some cases where no decision is possible by these means, as in the simple example of Fig. 3. In order to get a lower limit for the size of vortices we decide in such cases to treat the configuration as two separate vortices.
It may even occur that vortices touch along closed lines. In these cases parts of the vortex surface can’t be reached following regular connections of plaquettes. These cases are not even so rare; their percentage is shown in Fig. 4. The length of the closed line is usually very small and includes on average 5 to 7 links.
With the above-mentioned rules for deciding connectivity of P-vortices in ambiguous cases, we determine the P-vortex sizes. Since most of the plaquettes turn out to belong to the same vortex, the most interesting vortex is the largest.
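The size determination is a connected-component search over dual plaquettes. A minimal union-find sketch of it (our own; `neighbours` is a placeholder for the dual-lattice adjacency test, with the ambiguous 4- and 6-fold links resolved beforehand as described above):

```python
def vortex_sizes(plaqs, neighbours):
    """plaqs: hashable labels of the dual P-plaquettes of one configuration;
    neighbours(p): plaquettes sharing a dual link with p.  Returns the sizes
    of the connected P-vortices, largest first."""
    index = {p: i for i, p in enumerate(plaqs)}
    parent = list(range(len(plaqs)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    for p in plaqs:
        for q in neighbours(p):
            ri, rj = find(index[p]), find(index[q])
            if ri != rj:
                parent[ri] = rj             # merge the two components

    sizes = {}
    for i in range(len(plaqs)):
        r = find(i)
        sizes[r] = sizes.get(r, 0) + 1
    return sorted(sizes.values(), reverse=True)
```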
The full line in Fig. 5 shows the percentage of P-vortices belonging to the largest vortex for various values of $`\beta `$, see also . For the evaluation at $`\beta =2.2`$, 2.3, 2.4 and 2.5 we used 2000, 2000, 800 and 240 field configurations on $`12^4`$-, $`12^4`$-, $`16^4`$- and $`22^4`$-lattices, respectively. It is obvious that for $`T=0`$ and all investigated $`\beta `$-values there is mainly one huge vortex, which contains around 90% of all P-plaquettes. All other vortices are rather small and should not contribute to the string tension according to the above-given arguments. Small vortices mean strongly correlated piercings, resulting in a perimeter contribution to Wilson loops. P-vortices of diameter $`d`$ lead to correlations for distances larger than $`d`$. Only vortices extending over the whole space contribute to an area law at all length scales. We conclude that the string tension is determined by the area of a single huge P-vortex.
The fact that P-vortices in the confined phase have an extension comparable to the lattice size is a recent result of Chernodub et al. , and our finding is consistent with theirs. The fact that there appears to be only *one* very large vortex may also be related to results reported by Hart and Teper . These authors find that large monopole loops, identified in abelian projection, intersect to form one huge cluster. Since our previous studies indicate that abelian-projection monopoles loops occur mainly on P-vortex surfaces, it seems quite natural that large loops on a single large surface would tend to intersect.
### 2.3 Small-scale fluctuations of P-vortices
In the preceding section we gave an argument why small vortices do not contribute to the string tension. By the same argument, small fluctuations of the vortex surface affect only perimeter law contributions. We will remove these short-range fluctuations from P-vortex surfaces and show that the percentage of plaquettes belonging to such smoothed vortices directly gives the string tension $`\sigma `$.
In order to follow this idea we introduce several smoothing steps which are depicted in Fig. 6. In a first step we identify single isolated P-vortex cubes consisting of six dual P-plaquettes only and remove them. Since we substitute in this step 6 plaquettes by 0 we call this step 0-smoothing. In the next step called 1-smoothing we identify cubes covered by 5 P-vortex plaquettes. Such cubes can be substituted by one complementary plaquette which closes the cubes. Finally, we substitute cubes with 4 plaquettes by the complementary 2 plaquettes. There are two different arrangements for these 4, resp. 2 plaquettes. In the 2-smoothing step we substitute both of them in accordance with Fig. 6. In order to visualize the effect of smoothing on the appearance of vortices we show in Fig. 7 the result of 2-smoothing for the configuration in Fig. 1.
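In pseudocode form, the three steps amount to repeatedly replacing nearly complete dual cubes by their complements. The sketch below is our own rendering under stated assumptions (the cube bookkeeping and the sweep schedule are our choices, not taken from the original procedure):

```python
def smooth(pvort, cubes, max_step=2):
    """pvort: set of dual P-plaquettes of a configuration; cubes: all dual
    3-cubes, each given as a frozenset of its 6 boundary plaquettes.  A cube
    carrying 6, 5 or 4 P-plaquettes is replaced by its 0-, 1- or 2-plaquette
    complement (0-, 1- and 2-smoothing); we sweep until nothing changes."""
    threshold = 6 - max_step              # 6, 5 or 4 for 0-, 1-, 2-smoothing
    for _ in range(100):                  # sweep cap, reached only pathologically
        changed = False
        for cube in cubes:
            if len(cube & pvort) >= threshold:
                pvort ^= cube             # symmetric difference = complement in cube
                changed = True
        if not changed:
            break
    return pvort
```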
In Fig. 5 we compare for various smoothing steps the probability that a P-plaquette belongs to the largest vortex. It is clearly seen that the largest reduction in the number of vortex plaquettes is already achieved with 0-smoothing. After 2-smoothing the probability reaches more than 99 % which shows that only very few small vortices survive the smoothing procedure. In all investigated field configurations we found a single huge P-vortex, we never met a configuration with two large vortices.
The relation between the percentage $`p_i`$ of P-plaquettes after $`i`$-smoothing and the fraction $`f`$ which one would expect from the physical string tension $`\sigma `$ according to Eq. 6 can be seen from Fig. 2. One can see that the values of $`p_i`$ nicely approach $`f`$, especially for the larger values of $`\beta `$ where the P-plaquettes get less dense.
Further we check the Creutz ratios extracted from P-configurations after various smoothing steps. The results are shown in Fig. 8.
It is clearly seen that 0- and 1-smoothing does not change the extracted Creutz ratios. At $`\beta =2.2`$, where the percentage of P-vortex plaquettes is of the order of 10%, 2-smoothing causes some small ($`5\%`$) reduction in $`\chi (3,3)`$, but even this small deviation goes away at larger $`\beta `$.
Center-projected Creutz ratios $`\chi (R,R)`$ are nearly independent of $`R`$. Only $`\chi (1,1)`$ (minus the logarithm of the $`1\times 1`$ loop) deviates from the Creutz ratios for large loops. This behaviour indicates that the only significant correlation among P-plaquettes is at a distance of one lattice spacing. The short-range fluctuations which may be responsible for this correlation are removed by our smoothing procedure. When this is done, the values of $`p`$ and $`f`$ defined above come together, and the string tension is almost unchanged.
### 2.4 Topological properties
The basis for the investigation of the topology of P-vortices is the following rule: the homeomorphism type of a surface can be determined if the surface is connected, compact and closed. Then, it is determined by a) the orientation behaviour, b) the Euler characteristic. P-vortices in dual space would fulfill the required conditions if every link joined only two plaquettes. As mentioned above this is not always the case.
Therefore, we will proceed in the following way. First, we will treat the determination of the orientability for the case that every link joins exactly two attached P-vortex plaquettes. For each link of every P-plaquette we specify a sign. We start with an arbitrary P-plaquette and fix an arbitrary rotational direction. Those two links which are run through in positive axis direction get a plus sign, the other two get a minus sign. We continue at an arbitrary neighbouring P-plaquette. Its rotational sense we fix in such a way that the joining link gets the opposite sign to the one assigned before. We continue this procedure for every plaquette of the given P-vortex. If at the end every link of the P-vortex has two opposite signs we call such P-vortices orientable. The simplest example is a three-dimensional cube. If some links appear with two equal signs the P-vortex is unorientable, e.g. with the topology of a Klein bottle.
We already gave a certain classification for those cases where four or six P-plaquettes are joined by a single link, see also Fig. 3. In cases of this kind, where different pairs of P-plaquettes can be treated as belonging to independent vortices, we determine the orientability as though the vortices did not touch at any link. With this procedure we get an upper bound for the orientability of vortices. Analogously we proceed for the case of a vortex touching itself at a link. We determine the orientability for a configuration where this touching is avoided.
The simulation shows that without exceptions the large vortices in all investigated field configurations, for all investigated $`\beta `$-values, turned out to be unorientable surfaces. We checked for the various employed smoothing steps whether this behaviour remains unchanged. It turns out that P-vortices remain unorientable after smoothing; apparently the smoothing procedure does not remove all of the local structures (e.g. “cross-caps”) responsible for the global non-orientability.
The second property which determines the topological properties of P-vortices is the Euler characteristic $`\chi `$ which is defined by
$$\chi =𝒩_0-𝒩_1+𝒩_2,$$
(7)
where $`𝒩_k`$ is the number of $`k`$-simplices: $`𝒩_0`$ is the number of vertices, $`𝒩_1`$ the number of links, and $`𝒩_2`$ the number of plaquettes. $`\chi `$ is directly related to the genus $`g`$ of a surface, in the orientable case by
$$\chi =2-2g$$
(8)
and in the unorientable case by
$$\chi =2-g.$$
(9)
An orientable surface of genus $`g`$ is homeomorphic to a sphere with $`g`$ attached handles. An unorientable surface of genus $`g`$ corresponds to a sphere with $`g`$ attached Möbius strips (also known as “cross-caps”).
The determination of the Euler characteristic of a P-vortex is not inhibited by possible self-touchings. We can simply treat the vortex as it is; the result is the average between a possible separation and a real fusing of the two parts of the vortex. For a detailed discussion of this case, we refer the interested reader to ref. .
In Fig. 9 we show the genus $`g=2-\chi `$ per fm<sup>4</sup> of P-vortices for various values of $`\beta `$. For the conversion to physical units we used $`\sqrt{\sigma }/\mathrm{\Lambda }=58`$ and $`\sqrt{\sigma }=440`$ MeV. Without smoothing the genus takes a maximal value around $`\beta =2.3`$. With 0-smoothing, only elementary vortices are removed and therefore $`\chi `$ is unchanged. With 1-smoothing, contact points and contact links can be removed; therefore a reduction of the genus of a vortex by 1-smoothing is to be expected. In Fig. 9 this reduction is of the order of 15 %. By 2-smoothing also regular bridges in vortices can be removed, decreasing the genus further. This reduction amounts to 55% at $`\beta =2.2`$ and 43% at $`\beta =2.5`$. After 2-smoothing the genus stays more nearly constant with $`\beta `$ than without smoothing. It remains to be investigated how $`g`$ behaves at still higher $`\beta `$-values. The trend in the investigated region seems compatible with a scaling behaviour for the genus $`g`$, and is not compatible with a self-similar short-range structure below the confinement length scale. A fractal structure of that kind would lead to an increase of the genus with $`\beta `$, as more handles are uncovered at ever-shorter length scales. Of course, even a smoothed P-vortex surface will be rough at length scales beyond the confinement scale, and an appropriate fractal dimension can be defined. The fractal dimension of unsmoothed P-vortex surfaces, using the definition of dimension $`D=1+2A/L`$, where $`A`$ is the number of plaquettes and $`L`$ the number of links on the vortex surface, has been reported in refs. and .
These investigations of the topology of P-vortices show that they are not topologically 2-spheres. This is not so surprising; there was no particular reason why vortices *should* have this topology. The structure which we identified, huge vortices extending over the whole lattice, unorientable and with a lot of handles, is quite consistent with rotational symmetry in four dimensions. But as we will see, at finite temperature this symmetry can be destroyed.
## 3 Topology of P-vortices at finite temperature
The first discussion of the confinement/deconfinement phase transition in the context of the vortex theory and center-projection methods was made by Langfeld et al. in ref. , who give a nice explanation of the space-space string tension in the deconfined phase in terms of vortices closed in the time direction by lattice periodicity. Another very interesting investigation of the effect of finite temperature on the vortex structure is due to Chernodub et al. . In this section we extend our study of P-vortex topology, and of the effect of our smoothing steps on P-vortices, to the finite-temperature case.
We performed our finite-temperature calculations on a $`2\times 12^3`$ lattice for $`\beta `$-values between 1.6 and 2.6 and on a $`4\times 12^3`$ lattice for $`\beta `$-values between 1.8 and 2.6. Using a heat-bath algorithm, after 1000 equilibration steps we measured 1000 configurations, separated by 20 update sweeps, for each investigated $`\beta `$-value.
The most striking difference from the zero-temperature calculations is the strong asymmetry of the P-plaquette distributions in the deconfined phase, which can be seen in Fig. 10.
As a short notation we use “E-plaquette” for space-time and “B-plaquette” for space-space P-plaquettes. An investigation of this asymmetry was also performed in ref. . Just below the phase transition the density of E-plaquettes is slightly larger than the density of B-plaquettes. The excess is even larger for a time extent of 2 lattice units, where it amounts to more than 10 %, as one can see in Fig. 11. The excess at $`N_t=2`$ seems to be connected with short-range fluctuations of the vortices, since it is greatly reduced by smoothing. The detected strong asymmetry in the deconfined phase gives a very intuitive explanation for the behaviour of space-time and space-space Wilson loops, as previously discussed in ref. . The dominant vortex which percolates through the lattice is a (mostly) timelike surface on the dual lattice, which is closed via periodicity in the time direction. Polyakov lines are not affected by timelike vortex surfaces, and timelike Wilson loops are also unaffected. Therefore the string tension of timelike loops is lost in the deconfinement phase. On the other hand, large timelike vortex surfaces (composed of B-plaquettes) do disorder spacelike Wilson loops, which accounts for the string tension of spatial loops (cf. ) in the deconfinement regime.
In the deconfined phase the density of E-plaquettes decreases strongly with smoothing and for 2-smoothing soon reaches values close to 0%, as seen in Figs. 10 and 11. E-plaquettes in the deconfined phase thus obviously appear due to short-range fluctuations and cannot contribute to an area-law behaviour.
Fig. 12 displays the dual P-plaquettes of a typical field configuration at $`\beta =2.6`$ on a $`2\times 12^3`$ lattice. The dual P-plaquettes form cylinders in the time direction, closed via the periodicity of the lattice. Vortices of this shape are also well known in finite-temperature theory under the name of ordered-ordered interfaces .
The density of P-plaquettes is depicted in Fig. 13 for various smoothing steps. The decrease in the number of P-plaquettes in the 0-smoothing step is almost independent of $`\beta `$. The number of P-plaquettes above the phase transition decreases rather slowly.
As expected from the zero-temperature results, in the confined phase most of the P-plaquettes belong to a single large vortex, as can be seen in Fig. 14. The situation changes drastically at the phase transition, where the percentage of P-plaquettes in the largest vortex drops considerably, especially for the unsmoothed configurations. The smoothing procedure shows that there is still one largest vortex, but its dominance is not as strong as in the zero-temperature case. The increase of the percentage from $`\beta =2.5`$ to $`\beta =2.6`$ could be just a finite-size effect; this will require investigation on larger lattices. In any case, the existence of a large space-time vortex on the dual lattice is required, at finite temperature in the deconfinement phase, in order to explain the area-law behaviour of spacelike Wilson loops.
With the decrease in the percentage of E-plaquettes we find an increasing orientability of P-vortices in Fig. 15. The orientability approaches 100% for large $`\beta `$. The smoothing procedure shows which part of the unorientability is due to short-range fluctuations of the vortex.
The relations between the genus $`g`$ and the Euler characteristic $`\chi `$ are different for orientable (8) and unorientable surfaces (9). Since both types of surfaces appear in the finite-temperature calculations, we investigate the value of $`2-\chi `$ as in the zero-temperature case. For unorientable surfaces this expression equals the genus $`g`$; for orientable surfaces it equals twice the genus. In Fig. 16 we show the value of $`2-\chi `$ for the largest vortex. These data are not scaled with the lattice constant $`a`$ as in the zero-temperature case (Fig. 9). In the confined phase P-vortices are again complicated surfaces, and especially 2-smoothing reduces the number of handles. Above the phase transition $`2-\chi `$ approaches the value 2. This is a consequence of the vanishing density of E-plaquettes. The largest P-vortex becomes orientable with genus $`g=1`$ and $`\chi =0`$; it has the topology of a torus.
## 4 Conclusions
We have investigated the size and topology of P-vortices in SU(2) lattice gauge theory; P-vortices are surfaces in the dual lattice which lie at or near the middle of thick center vortices. We have found that in the confined phase the four-dimensional lattice is penetrated by a single huge P-vortex (see also ) of very complicated topology. This huge P-vortex is a closed surface on the dual lattice which is unorientable and has many (about $`10/\mathrm{fm}^4`$) handles. There also exist a few very small vortices. The short-range fluctuations of the large P-vortex contribute only to the perimeter-law falloff of projected Wilson loops. These short-range fluctuations may simply be due to a slight ambiguity in the precise location of the middle of a thick center vortex, as was discussed in section 4 of ref. , and are not necessarily characteristic of the thick center vortices themselves.
By a smoothing procedure, we were able to remove these perimeter contributions due to short-range fluctuations, keeping the Creutz ratios constant. Thus the short-range P-vortex fluctuations are found to account for the difference between the percentage $`p`$ of plaquettes which are pierced by P-vortices, and the comparatively smaller fraction $`f`$ which, in the simplest version of the vortex model (with uncorrelated P-plaquettes), contribute to the string tension. Upon smoothing away the short-range fluctuations, we find the fraction $`p`$ closely approaching the value of $`f`$ extracted from the asymptotic string tension.
The density of vortices does not vanish in the deconfined phase, but there is found to be a strong space-time asymmetry. P-vortices at finite temperature are mainly composed of space-space plaquettes forming *timelike* surfaces on the dual lattice. These surfaces are closed via the periodicity of the lattice in the time direction; they are orientable and have the topology of a torus, i.e. genus $`g=1`$. The dominance of the largest vortex is not as strong as in the zero-temperature case. The space-time asymmetry of P-vortices in the deconfined phase nicely explains the corresponding asymmetry of Wilson loops, which show area-law falloff for spacelike loops and vanishing string tension for timelike loops.
Acknowledgements
This work was supported in part by Fonds zur Förderung der Wissenschaftlichen Forschung P11387-PHY (M.F.), the U.S. Department of Energy under Grant No. DE-FG03-92ER40711 (J.G.), the “Action Austria-Slovak Republic: Cooperation in Science and Education” (Project No. 18s41) and the Slovak Grant Agency for Science, Grant No. 2/4111/97 (Š.O.).
## 1 ABSTRACT
Preliminary results are presented of a project aiming at unveiling the nature of the extremely red galaxies (ERGs), by which we hereafter mean objects with colours $`R-K>6`$ and $`I-K>5`$, found in deep optical-NIR surveys. Very little is known about these objects, the critical issue being whether they are old ellipticals at z$`>`$1 or distant star-forming galaxies strongly reddened by dust extinction. We expect to shed light onto the unknown nature of these galaxies by completing our three-step project: (1) the construction of two very deep optical/NIR surveys to select ERGs; (2) subsequent VLT/NIR spectroscopy; (3) observations in the submm-mm region with SCUBA at the JCMT and with MPIfRbolo at the IRAM 30m antenna.
## 2 INTRODUCTION
Optical and NIR deep surveys have recently boosted observational cosmology and allowed important advances in our understanding of young galaxies. It has been possible to detect star-forming objects at high redshifts and to construct the star-formation rate (SFR) history of the universe by converting the detected UV-optical or line rest-frame luminosities into a star-formation rate (Madau et al. 1996; see also Calzetti, these Proceedings).
Several ways of inferring SFRs have been used:
* The star-formation rate (SFR) in Lyman-break galaxies deduced from their UV rest-frame luminosity is of the order of 4-50 $`h_{50}^{-2}`$ M$`_{\odot }`$/yr (Steidel et al. 1996; Madau et al. 1998).
* Lyman-$`\alpha `$ emitters present a comparable SFR, obtained from their line luminosity, of 1-10 $`h_{50}^{-2}`$ M$`_{\odot }`$/yr (Hu et al. 1998).
* OII ($`\lambda `$3727) emitters have a SFR of 5-60 $`h_{50}^{-2}`$ M$`_{\odot }`$/yr (Cowie et al. 1996).
* Because of a smaller bias against reddening, higher SFR values are inferred from H-$`\alpha `$ and H-$`\beta `$ line luminosities: 5-200 and 10-140 $`h_{50}^{-2}`$ M$`_{\odot }`$/yr respectively (Beckwith et al. 1998; Mannucci et al. 1998; Teplitz et al. 1998; Pettini et al. 1998).
When these results are combined with those from galaxy surveys at $`z\lesssim 1`$, strong constraints on the comoving SFR are inferred (Madau et al. 1996). A still debated question, however, is the reality of the peak seen at $`z\sim 1.5`$, since surveys based only on UV-optical light are strongly biased towards dust-free objects.
Several arguments suggest that the population of high-$`z`$ galaxies detected so far may not represent the progenitors of all local galaxies, and that the consequent history of the SFR may be strongly underestimated. The presence of other populations of objects is therefore very likely, and may be invoked to explain the following.
* There is an absence of passively evolving ellipticals in deep surveys. Where are the associated protospheroids?
* There is a FIR-submm background detected by COBE, very likely due to the integrated emission of a so-far-hidden population of dusty galaxies (Puget et al. 1996; Hauser et al. 1998).
* ISO/SCUBA FIR/submm-mm surveys (Rowan-Robinson et al. 1997; Kawara et al. 1997; Barger et al. 1998; Hughes et al. 1998; Smail, Ivison & Blain 1998; Eales et al.) are key to testing the presence of these dusty objects and are indeed showing a large number of sub-mm luminous galaxies (at 850$`\mu m`$, $`\sim `$ 0.08-0.5 objects arcmin<sup>-2</sup> with flux $`>`$ 3 mJy and 2 objects arcmin<sup>-2</sup> with $`>`$ 1 mJy; Hughes et al. 1998). Is the detected dust obscuring a large fraction of the galaxy UV luminosity?
Maybe some/all of these dusty objects are already showing up in OPTICAL/NIR surveys.
## 3 THE QUEST FOR ERGS
It is important to check whether the new very red galaxies showing up when we combine optical and NIR images represent part of the population of high-z dusty galaxies. These ERGs are missed by the traditional optical surveys because of their optical faintness, and they do not show up in the surveys devoted to high-z galaxies such as those mentioned above. They are found thanks to the combination of deep optical and NIR images, both in random fields and in fields containing an AGN. Their surface density in the field at K$`<`$20 is of the order of 0.1-0.2 arcmin<sup>-2</sup> for R-K$`>`$6 or I-K$`>`$5 and of $`\sim `$ 0.01-0.05 arcmin<sup>-2</sup> for R-K$`>`$7 or I-K$`>`$6 (Hu & Ridgway 1994; Cowie et al. 1996; Thompson et al. 1998; Barger et al. 1998) (for comparison, the surface density of Lyman-break galaxies is 0.5 arcmin<sup>-2</sup> while that of QSOs with B$`<`$21.5 is 0.015 arcmin<sup>-2</sup>).
If resolved in ground-based and HST images, ERGs usually show compact morphologies. They sometimes have asymmetric and distorted morphologies, suggesting the presence of an interacting system or a tidal arm. Their faintness hampers the exploitation of optical/NIR spectroscopy with 4m telescopes to obtain redshifts and to investigate their nature. So far the Keck 10m telescope has provided the only available redshift for one of these galaxies, HR10 (Dey et al. 1999).
The existence of a significant population of objects that are extremely red, moderately bright and at high redshifts is difficult to explain using the known properties of nearby galaxies. It is likely that ERGs form a heterogeneous sample, with observed properties alike not because of intrinsic similarity but only because of the selection criteria. It is very unlikely that they are at very high redshifts (z$`>`$3), since this would require these objects to be exotic and very luminous. It is also unlikely that most of them lie at low redshifts, since no population with the properties of ERGs is yet known to exist locally. Hints about their nature can be extracted from their extremely red colours, and the likely explanations are actually twofold:
* Are they old $`L^{*}`$ ellipticals at z$`>`$1? In this case their red colours are simply due to the passively evolving population of stars (Cohen et al., 1998).
* Are they strongly extincted starbursts or AGNs? In this case their UV-optical light is reddened by dust. Are they then similar to the sub-mm selected galaxies detected with SCUBA?
Even if they may not represent a large hidden population of galaxies, in both scenarios they play an important role in understanding the integrated star formation in the high-z population.
## 4 OUR ON-GOING PROJECTS
A multi-wavelength approach has therefore been adopted to unveil the nature of these objects. Two surveys are presently being carried out to select two complete samples of ERGs: one in random fields and one around AGNs at z$`>`$1.5. Their surface abundance will be inferred, and targets for VLT spectroscopy will then be selected. The surveys make use of ESO ground-based (optical+NIR) and HST (WFPC2) data (see Pozzetti et al. 1998).
Meanwhile, a subsample of the selected ERGs will be observed at sub-mm and mm wavelengths using SCUBA+JCMT and MPIfRbolo+IRAM. The aim is to check whether thermal emission from dust in the ISM of these galaxies can be observed. The detection of the dust emission also allows one to determine the FIR luminosity and to infer the SFR, which can then be compared with the sub-mm selected galaxies. An important outcome of this research will therefore be the inference of the ERG contribution to the global star-formation history of the Universe and to the FIR/sub-mm background.
## 5 RESULTS OF OUR SUB-MM/MM INVESTIGATION: HR10
The ERG HR10 (the only one with a redshift available) was independently detected with the IRAM 30m telescope equipped with the MPIfRbolo and with the JCMT equipped with the SCUBA double arrays (Cimatti et al. 1998; see also Dey et al., 1999). The radio emission of this object is extremely weak in comparison with the millimetric flux ($`\frac{S_{15GHz}}{S_{1mm}}<0.02`$); it is therefore very likely that the detected submm/mm fluxes are due not to synchrotron emission but to thermal emission from a dusty medium. Combining these measurements with the ISO upper limit at 175 $`\mu `$m (Ivison et al. 1997), one can derive the dust properties. For a range of dust temperatures between 30 and 45 K the corresponding total dust mass lies in the range 8-4 $`\times 10^8h_{50}^{-2}`$ M$`_{\odot }`$ (for q<sub>0</sub>=0.5 and a dust emissivity index $`\beta `$ of 2). One must note, however, that although the thermal spectrum is not well constrained, due to a lack of data in the Wien region, the uncertainty on the dust mass is not larger than a factor of 2. The total rest-frame far-IR luminosity in the range 10-2000 $`\mu `$m rest-frame is 2-2.5 $`\times 10^{12}h_{50}^{-2}`$ L$`_{\odot }`$. This luminosity places HR10 in the class of ultraluminous infrared galaxies and implies a SFR (adopting the relationship SFR=$`\mathrm{\Psi }10^{-10}L_{FIR}`$ and assuming no AGN contribution) of $`\sim `$ 200-500 $`h_{50}^{-2}`$ M$`_{\odot }`$/yr. It is worthwhile mentioning here that the SFR deduced from the H$`\alpha `$ emission was 80 $`h_{50}^{-2}`$ M$`_{\odot }`$/yr and that from the UV continuum only 1 $`h_{50}^{-2}`$ M$`_{\odot }`$/yr. The SFR is thus severely underestimated due to the strong dust extinction.
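As a rough numerical cross-check of the numbers quoted above, the SFR follows from the far-IR luminosity through the relationship SFR=$`\mathrm{\Psi }10^{-10}L_{FIR}`$; in the sketch below the IMF-dependent factor $`\mathrm{\Psi }`$ is assigned an assumed range of 0.8-2.1, which is our own illustrative choice and is not quoted in the text:

```python
# SFR = Psi * 1e-10 * L_FIR, with L_FIR in solar luminosities;
# the range of Psi below is an assumption for illustration only
for L_FIR in (2.0e12, 2.5e12):          # h_50^-2 L_sun, 10-2000 um rest-frame
    for Psi in (0.8, 2.1):
        sfr = Psi * 1e-10 * L_FIR
        print(f"L_FIR = {L_FIR:.1e}, Psi = {Psi}: SFR ~ {sfr:.0f} M_sun/yr")
```

The resulting values, roughly 160-525 M$`_{\odot }`$/yr, bracket the quoted range of $`\sim `$ 200-500 $`h_{50}^{-2}`$ M$`_{\odot }`$/yr.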
The nature of this galaxy will be further investigated via interferometric imaging of the 1.3mm continuum and CO line emission with the Plateau de Bure IRAM interferometer.
## 6 FURTHER SUBMM-MM OBSERVATIONS
A sample of 8 further ERGs with $`K<20`$ and $`I-K>6`$ has been observed so far with SCUBA at the JCMT and with the IRAM 30m antenna, and further observations have been scheduled. The final data reduction is currently under way. For 4 ERGs we could reach the sensitivity required by our survey (rms $`<`$ 2 mJy at 850$`\mu `$m), whereas the weather was not good enough to obtain deep data at 450$`\mu `$m. A preliminary analysis suggests that we obtained three marginal detections at 850$`\mu `$m that need to be confirmed with deeper observations. We have also searched for the presence of a positive signal from the population of ERGs by coadding the data of the whole sample. The weighted average of the 850$`\mu `$m flux density of the entire sample provides a signal at the $`3\sigma `$ level. One further object was detected at both 850 and 1250$`\mu m`$. Together with the detection of HR10, this hints that at least part of the ERGs are dusty, even if not as extreme as HR10. These findings, however, still need to be confirmed, and the final results will be presented in a forthcoming paper (Cimatti et al., in preparation). It should be recalled here that the physical interpretation of the submm-mm observations is not severely hampered by the fact that the redshifts of the ERGs are unknown (with the exception of HR10). Thanks to the strong K-correction caused by the steep grey-body dust spectra, the expected flux at $`\lambda _{obs}=850\mu `$m of a dusty star-forming galaxy at $`1<z<10`$ is not a strong function of redshift. Thus, since ERGs are expected to be at $`z>1`$, a detection at $`\lambda _{obs}=850\mu `$m directly implies a large dust content and high $`L_{FIR}`$ and SFR irrespective of the redshift (see also Hughes et al. 1998; Barger et al. 1998).
## 7 CONCLUSIONS
For at least one galaxy (HR10) a large amount of star formation is missing from a UV-only census. As such, one could consider it as observational proof of the predictions of models such as those by Zepf and Silk (1996), Franceschini et al. (1998) and Guiderdoni et al. (1998). One can argue that the space density of ERGs is not greatly different from those predicted by these models at these flux levels: for instance, Guiderdoni et al. (1998) predict a sky surface density of dusty galaxies at 175 $`\mu m`$ of 0.05 arcmin<sup>-2</sup>, similar to that of the extremely red galaxies. Sources in the redshift range 0.5-2.5, which contribute to the cosmic FIR background, should have observed fluxes at 200 $`\mu `$m in the range 10-100 mJy. For $`T_d`$=18-45 K the expected observed flux of HR10 at this wavelength would be 10-40 mJy. At this point extrapolation from one object to the entire class is entirely speculative, and better statistics are required before carrying out any meaningful comparison.
HR10 can fully be considered a ULIRG, since most of its energy is emitted in the FIR. Caution must be used when extrapolating the star-formation rates of UV-selected galaxies to the whole history of star formation in the Universe: at least part of it occurs in highly extincted environments from which UV and optical light cannot escape. Objects like HR10 would be missed by optical imaging based on the continuum break or on strong emission lines, by IRAS and by traditional quasar surveys. Our results demonstrate the powerful tool provided by the combination of deep optical/NIR imaging with sub-mm/mm observations.
# Generation of entanglement in a system of two dipole-interacting atoms by means of laser pulses
## Abstract
The effectiveness of using a laser field to produce entanglement between two dipole-interacting identical two-level atoms is considered in detail. The entanglement is achieved by driving the system with a carefully designed laser pulse, transferring the system’s population to one of the maximally entangled Dicke states in a way analogous to population inversion by a resonant $`\pi `$-pulse in a two-level atom. It is shown that for optimally chosen pulse frequency, power and duration, the fidelity of generating a maximally entangled state approaches unity as the distance between the atoms goes to zero.
With recent experimental advances in the methods for coherent manipulation of quantum systems at the level of individual particles, many previously purely speculative problems have become surprisingly topical. In particular, much activity of physicists from different research fields is currently devoted to clarification of the entanglement concept, ways for its quantification, purification, and creation. Controlled creation of entangled states of atoms by coherent manipulation with light currently poses one of the biggest challenges in the field of quantum optics. Conversely, the resonant dipole-dipole interaction (RDDI) and the cooperative relaxation effects associated with it are rather traditional topics of research. Recently the RDDI has been investigated as a source of interference phenomena in emission spectra, super- and sub-radiance, photon bunching, collisions in laser cooling processes, and as a mechanism for realizing quantum logic gates. In this paper we address the question: how effectively can the RDDI, along with coherent laser pulses, be used for the creation of multi-atomic entangled states?
In our model two identical two-level atoms are located at a fixed distance $`R`$ and can be driven by a laser beam that is either parallel or perpendicular (in geometries identified, respectively, as antisymmetric and symmetric, Fig. 1) to the radius vector $`\stackrel{}{R}`$ connecting the atoms. Within the interaction picture and the rotating wave approximation, the evolution is described by the following master equation:
$$\frac{\partial \widehat{\rho }}{\partial t}=-\frac{i}{\hbar }[\widehat{H}_{\mathrm{eff}},\widehat{\rho }]+\underset{i,j=1,2}{\sum }\frac{\gamma _{ij}}{2}(2\widehat{\sigma }_i^{-}\widehat{\rho }\widehat{\sigma }_j^{+}-\widehat{\sigma }_j^{+}\widehat{\sigma }_i^{-}\widehat{\rho }-\widehat{\rho }\widehat{\sigma }_j^{+}\widehat{\sigma }_i^{-}),$$
(1)
where
$$\widehat{H}_{\mathrm{eff}}=\frac{\hbar }{2}\left[\delta \left(\widehat{\sigma }_1^z+\widehat{\sigma }_2^z\right)+\mathrm{\Omega }_1\widehat{\sigma }_1^++\mathrm{\Omega }_2\widehat{\sigma }_2^++\chi \widehat{\sigma }_1^+\widehat{\sigma }_2^{-}+\text{h.\hspace{0.17em}c.}\right].$$
(2)
is the effective Hamiltonian describing the atoms’ self-evolution and interaction with the laser field. Here $`\delta =\omega _L-\omega _a`$ is the laser detuning from the atomic transition frequency, $`\mathrm{\Omega }_{1,2}`$ are the complex laser driving Rabi frequencies for each atom, $`\widehat{\sigma }_i^x,\widehat{\sigma }_i^y,\widehat{\sigma }_i^z,\widehat{\sigma }_i^\pm =\widehat{\sigma }_i^x\pm i\widehat{\sigma }_i^y,i=1,2,`$ are (using the well-known analogy between two-level atoms and spins) the spin-$`\frac{1}{2}`$ Cartesian component and transition operators of the $`i`$-th atom, and we define $`g`$, $`f`$ and $`\gamma `$ so that $`\gamma _{11}=\gamma _{22}=\gamma ,\gamma _{12}=\gamma _{21}=g\gamma ,\chi =f\gamma `$. The distance-dependent parameters $`g`$ and $`f`$, describing, respectively, the photon exchange rate and the coupling due to the RDDI, are defined differently for the different types of atomic transition in question. Defining $`p=0,q=2`$ for $`\mathrm{\Delta }m=0`$ and $`p=1,q=1`$ for $`\mathrm{\Delta }m=\pm 1`$ transitions (with the quantization axis coinciding with $`\stackrel{}{R}`$) and assuming that the dipole matrix elements of the atoms are collinear with each other and perpendicular to $`\stackrel{}{R}`$, we have the following expressions for $`g(\phi )`$ and $`f(\phi )`$ :
$$g(\phi )=\frac{3}{2}\left(p\frac{\mathrm{sin}\phi }{\phi }+q\frac{\mathrm{sin}\phi }{\phi ^3}-p\frac{\mathrm{cos}\phi }{\phi ^2}\right),f(\phi )=\frac{3}{2}\left(p\frac{\mathrm{cos}\phi }{\phi }+q\frac{\mathrm{cos}\phi }{\phi ^3}+q\frac{\mathrm{sin}\phi }{\phi ^2}\right),$$
(3)
where $`\phi =kR`$ is the “dimensionless distance” between the atoms and $`k=\omega _a/c`$. Throughout the rest of the article we consider the $`\mathrm{\Delta }m=\pm 1`$ case for definiteness, as for the $`\mathrm{\Delta }m=0`$ case the results are qualitatively the same.
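For the numerical estimates below it is convenient to evaluate eq. (3) directly. The following sketch is a literal transcription of the formula as printed (the relative minus sign in $`g`$ was restored by hand in the reconstruction above, so the small-distance limits should be checked against the original derivation); the $`\phi ^{-3}`$ divergence of $`f`$, which drives the level splitting exploited below, is insensitive to this detail:

```python
import numpy as np

def g_f(phi, dm=1):
    """Direct transcription of eq. (3): photon-exchange rate g and RDDI
    coupling f; dm=0 gives (p, q) = (0, 2), dm=+-1 gives (p, q) = (1, 1)."""
    p, q = (0, 2) if dm == 0 else (1, 1)
    s, c = np.sin(phi), np.cos(phi)
    g = 1.5 * (p * s / phi + q * s / phi**3 - p * c / phi**2)
    f = 1.5 * (p * c / phi + q * c / phi**3 + q * s / phi**2)
    return g, f

# at small distances f ~ 3q/(2 phi^3), so the splitting 2*chi = 2*f*gamma
# between the symmetric and antisymmetric Dicke states diverges as phi -> 0
for phi in (1.0, 0.5, 0.1):
    print(phi, g_f(phi))
```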
It can easily be shown that the Dicke states, $`|\psi _e=|e_1|e_2,|\psi _g=|g_1|g_2,|\psi _s=\frac{1}{\sqrt{2}}(|g_1|e_2+|e_1|g_2)`$, and $`|\psi _a=\frac{1}{\sqrt{2}}(|g_1|e_2-|e_1|g_2)`$ (where $`|e_i`$ and $`|g_i`$ are the upper and lower levels of the $`i`$th atom) are the eigenvectors of $`\widehat{H}_{\mathrm{eff}}`$ (when the laser driving is excluded), while the rest of the atomic dynamics can be described as radiative decay to/from the antisymmetric $`|\psi _a`$ state with the rate $`\gamma _{-}=(1-g)\gamma `$ and to/from the symmetric $`|\psi _s`$ one with the rate $`\gamma _+=(1+g)\gamma `$.
Analytical stationary solutions of (1) can be found for the case of symmetric excitation $`\mathrm{\Omega }_1=\mathrm{\Omega }_2=\mathrm{\Omega }`$ (without loss of generality we can assume that $`\mathrm{\Omega }`$ here is real and positive) with the stationary populations of the Dicke states given by
$$\begin{array}{c}N_e=N_a=\frac{\mathrm{\Omega }^4}{\left(\gamma ^2+4\delta ^2+2\mathrm{\Omega }^2\right)^2+\gamma (\gamma ^2+4\delta ^2)(f^2\gamma +g^2\gamma +2g\gamma -4f\delta )},\\ N_s=\frac{2\mathrm{\Omega }^2(2\gamma ^2+8\delta ^2+\mathrm{\Omega }^2)}{\left(\gamma ^2+4\delta ^2+2\mathrm{\Omega }^2\right)^2+\gamma (\gamma ^2+4\delta ^2)(f^2\gamma +g^2\gamma +2g\gamma -4f\delta )},\\ N_g=1-N_e-N_a-N_s.\end{array}$$
(4)
The graph of $`N_s(\mathrm{\Omega },\delta )`$, corresponding to $`\phi =0.5`$, is shown in Fig. 2a. The antisymmetric case, when the laser beam is parallel to $`\stackrel{}{R}`$, allows no such simple analytical solution, since in this case the relation between the Rabi frequencies of the two atoms is more complex: $`\mathrm{\Omega }_1=e^{i\phi }\mathrm{\Omega }_2=\mathrm{\Omega }`$ (although $`\mathrm{\Omega }`$ here is again real and positive). However, the numerical solution is easily obtained, and the corresponding dependence $`N_a(\mathrm{\Omega },\delta )`$ is shown in Fig. 2b.
If we aim to transfer the maximum amount of population into one of the maximally entangled states $`|\mathrm{\Psi }_a`$ or $`|\mathrm{\Psi }_s`$ by a short coherent pulse, a good criterion for finding the optimal values of the laser field parameters, $`\mathrm{\Omega }`$ and $`\delta `$, is whether the population of the corresponding level is close to 0.5 in the stationary solution. From the analysis of (4) and the graphs in Fig. 2 we deduce that the optimal parameters can be well approximated by $`\delta _{opt}=\pm \chi (\phi )/2`$ and $`\mathrm{\Omega }_{opt}=\sqrt{|\chi (\phi )|\gamma _\pm }`$, with the upper/lower sign for the symmetric/antisymmetric geometries, respectively. We then have $`|\delta _{opt}|=|\chi |/2\gg \mathrm{\Omega }_{opt}\gg \gamma _\pm `$ for sufficiently small distances, so that the transition of interest is saturated while at the same time the Rabi frequency is moderate enough to avoid broadband excitation of the Dicke level we are not interested in.
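As an illustration, the stationary solution (4) can be evaluated at this optimal working point; the values of $`f`$ and $`g`$ below are illustrative numbers of the order of eq. (3) at $`\phi \sim 0.5`$ in units of $`\gamma =1`$, not the exact ones behind Fig. 2:

```python
import numpy as np

def dicke_populations(Omega, delta, f, g, gamma=1.0):
    """Stationary Dicke-state populations of eq. (4), symmetric excitation."""
    denom = (gamma**2 + 4*delta**2 + 2*Omega**2)**2 \
        + gamma * (gamma**2 + 4*delta**2) * (f**2*gamma + g**2*gamma
                                             + 2*g*gamma - 4*f*delta)
    N_e = N_a = Omega**4 / denom
    N_s = 2 * Omega**2 * (2*gamma**2 + 8*delta**2 + Omega**2) / denom
    return N_e, N_a, N_s, 1.0 - N_e - N_a - N_s

f, g = 10.0, 0.95                      # illustrative values, gamma = 1 units
delta_opt = 0.5 * f                    # delta_opt = chi/2 with chi = f*gamma
Omega_opt = np.sqrt(f * (1.0 + g))     # Omega_opt = sqrt(|chi| * gamma_plus)
print(dicke_populations(Omega_opt, delta_opt, f, g))
```

At this working point the symmetric-state population is strongly enhanced relative to off-resonant driving, in qualitative agreement with Fig. 2a.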
Given the optimal parameters obtained above, we proceed to find the fidelity of creation of the maximally entangled states, that is, the maximum amount of population one can transfer using pulses of radiation. Considering the dynamics of the populations under optimal-parameter laser driving that is turned on at the time instant $`t=0`$ (when all of the population is in the $`|\mathrm{\Psi }_g`$ state), we define the optimal pulse duration as the time when the population of the state we are interested in reaches its first maximum. For the parameters so chosen, we plot in Fig. 3a the populations achieved by applying the optimal pulse as a function of the interatomic distance $`\phi `$. The populations approach unity for both geometries as $`\phi `$ goes to zero, suggesting that almost perfect transfer of population is attainable at small interatomic distances.
It is easy to explain this result. As $`R`$ goes to zero, the energy splitting between the symmetric and antisymmetric states, equal to $`2\chi =2f\gamma `$, grows to infinity (within the evident limitations set by the negligibility of exchange effects ). The “parasitic” excitation of the levels we are not interested in is then avoided by shifting the laser frequency so that we are in resonance with only one of the excited Dicke states. And if we have at our disposal an arbitrarily strong and arbitrarily tunable laser, as $`\phi `$ goes to zero we can produce shorter and shorter pulses, thereby decreasing the decay probability during the pulse.
In addition, in the case of symmetric excitation the laser driving matrix element for transitions involving the antisymmetric state vanishes. This means that its population can only come from decay of the upper $`|\mathrm{\Psi }_e`$ level. In fact, in the stationary solution (4) the two states even have the same populations, which are negligible for large detunings. In the antisymmetric excitation case, however, the situation works against us. As the interatomic distance goes to zero, the Rabi frequencies of the two atoms become closer in phase, diminishing the matrix elements of transitions involving the antisymmetric state to zero. But even with such a small excitation efficiency we can still manage to transfer the population to the antisymmetric state, because the symmetric excitation remains far off resonance; as a reward we get the increased lifetime $`\tau =1/\left((1-g(R))\gamma \right)\gg 1/\gamma `$ of our maximally entangled state (using this “durability” of the antisymmetric $`|\mathrm{\Psi }_a`$ state, we can also create some entanglement passively, as described in ref. , but it is difficult to obtain a high fidelity that way).
Now, instead of calculating different entanglement measures (and then figuring out which of them best suits our purposes), we consider how much our created states violate a simple Bell inequality. It can easily be shown that for any classical local-variable distribution the probabilities of finding the atoms’ “spins” aligned along the $`\stackrel{}{n}_z`$ direction after coherent rotations fulfill the following Bell inequality:
$$P_{\mathrm{diff}}(0,2\pi /3)+P_{\mathrm{diff}}(2\pi /3,-2\pi /3)+P_{\mathrm{diff}}(0,-2\pi /3)\geq 1,$$
(5)
where $`P_{\mathrm{diff}}(\phi _1,\phi _2)`$ is the probability of obtaining different results in the measurements of the two spins, i.e., of finding one spin aligned along the $`\stackrel{}{n}_z`$ direction and the other against it, after the first spin is rotated by $`\phi _1`$ and the second by $`\phi _2`$ around the $`OX`$ or $`OY`$ axis. In quantum mechanics, however, the left-hand side of (5) amounts to only 0.75 for the pure maximally entangled $`|\mathrm{\Psi }_a`$ state. To apply the same Bell inequality (5) to the case of the symmetric excitation geometry, we first perform a $`180^{}`$ rotation about the $`\stackrel{}{n}_z`$-axis on one of the atoms (thus turning the $`|\mathrm{\Psi }_s`$ state into the $`|\mathrm{\Psi }_a`$ one) and then perform the measurement as described above. In Fig. 3b the l.h.s. of (5) is plotted against the dimensionless distance $`\phi `$, showing that for $`\phi `$ less than $`0.5`$ we have a violation of the inequality, and as $`\phi 0`$ we recover the pure-state limit of 0.75.
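The quantum value 0.75 of the left-hand side of (5) for the pure $`|\mathrm{\Psi }_a`$ state is easily verified with explicit four-dimensional state vectors; the short sketch below is our own illustration:

```python
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)

def rot_x(phi):
    """Coherent rotation of a single spin by the angle phi about the x axis."""
    return np.cos(phi / 2) * I2 - 1j * np.sin(phi / 2) * sx

def p_diff(state, phi1, phi2):
    """Probability of opposite sigma_z outcomes after the two rotations."""
    psi = np.kron(rot_x(phi1), rot_x(phi2)) @ state
    return abs(psi[1])**2 + abs(psi[2])**2   # basis |uu>, |ud>, |du>, |dd>

singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)   # |Psi_a>
a = 2 * np.pi / 3
lhs = p_diff(singlet, 0, a) + p_diff(singlet, a, -a) + p_diff(singlet, 0, -a)
print(lhs)   # 0.75, violating the classical bound of 1 in inequality (5)
```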
In summary, while being admittedly unrealistic, our model offers a few insights into how efficiently the RDDI can be used to entangle atoms or implement quantum logic gates . We have shown that considerable fidelities (up to 0.8) of creation of one of the maximally entangled Dicke states and Bell inequality violations can be realized if the atoms are placed within distances of the order of a tenth of a wavelength of the working transition. Such distances can be achieved, for example, in the ground vibrational states of two atoms in optical lattices .
However, all of the practical applications (say, quantum teleportation ) require stable entanglement. Even the Bell inequality violation considered above can be verified only if the produced entanglement lives long enough for the atoms to be spatially separated for individual addressing and photodetection (the imperfection of the detectors constitutes another problem that has not yet been addressed). Of course, this cannot be achieved in our model, since the dipole interaction and the decay have the same physical nature and we cannot avoid the latter while making use of the former.
Relatively stable coherences (with lifetimes of the order of seconds) can be generated if we use the Zeeman sublevels of the atoms as the working levels (qubits). The RDDI is negligible for them, but by using three-level atoms in a $`\mathrm{\Lambda }`$-configuration instead of two-level atoms we can overcome that difficulty. We can use Raman pulses transferring the population between the Zeeman sublevels via a higher-lying “transit” Dicke level (STIRAP techniques might be an alternative), so that the transitions between each of the Zeeman sublevels and the excited “transit” state benefit from substantial RDDI. Then, by choosing the one-photon detunings to be in resonance with only one of the higher-level Dicke states, we can generate entanglement of the Zeeman sublevels.
In this paper we present the first tentative quantitative model of the process of entanglement of atoms with the help of the RDDI. Much remains to be done before we can compare the results of the theory with possible practical implementations, but the conclusions presented here are promising and therefore encourage further theoretical developments.
This work was partially supported by Volkswagen Stiftung (grant No. 1/72944) and the Russian Ministry of Science and Technical Policy.
## 1 Introduction
The ATLAS Collaboration is building a general-purpose pp detector which is designed to exploit the full discovery potential of CERN’s Large Hadron Collider (LHC), a superconducting ring providing proton-proton collisions at a centre-of-mass energy around 14 TeV . The LHC will open up new physics horizons, probing interactions between proton constituents at the 1 TeV level, where new behaviour is expected to reveal key insights into the underlying mechanisms of Nature .
The bulk of the hadronic calorimetry in the ATLAS detector is provided by a large (11 m in length, 8.5 m in outer diameter, 2 m in thickness, 10000 readout channels) scintillating tile hadronic barrel calorimeter (TILECAL). The technology for this calorimeter is based on a sampling technique using steel absorber material and scintillating plates readout by wavelength shifting fibres. An innovative feature of this design is the orientation of the scintillating tiles which are placed in planes perpendicular to the colliding beams staggered in depth (Fig. 1).
In order to test this concept, five 1m prototype modules and the Module-0 were built and exposed to high-energy pion, electron and muon beams at the CERN Super Proton Synchrotron.
In the following we consider two test-beam setups. Setup 1, shown in Fig. 3-1 of , consists of five 1m prototype modules. The results obtained on the electron and pion responses and the $`e/h`$ ratio for this setup are used in this paper for comparison. The setup in question (setup 2), shown in Fig. 3-2 of , is based on the Module-0.
In this work, detailed experimental information is presented on the electron and pion responses and on the $`e/\pi `$ and $`e/h`$ ratios (the intrinsic non-compensation) of the Tile calorimeter Module-0.
## 2 The 1m Prototype Modules
Each module spans 100 cm in the $`Z`$ direction and 180 cm in the $`X`$ direction (about 9 interaction lengths at $`\eta =0`$, or about 80 effective radiation lengths), and has a front face of 100 $`\times `$ 20 cm<sup>2</sup> . The iron structure of each module consists of 57 repeated ”periods”. Each period is 18 mm thick and consists of four layers. The first and third layers are formed by large trapezoidal steel plates (master plates) spanning the full longitudinal dimension of the module. In the second and fourth layers, smaller trapezoidal steel plates (spacer plates) and scintillator tiles alternate along the $`X`$ direction. These layers consist of 18 different trapezoids of steel or scintillator, each spanning 100 mm along X.
The master plates, spacer plates and scintillator tiles are 5 mm, 4 mm and 3 mm thick, respectively. The iron to scintillator ratio is 4.67:1 by volume.
Wavelength shifting fibres collect scintillation light from the tiles at both of their open edges and bring it to photo-multipliers (PMTs) at the periphery of the calorimeter. Each PMT views a specific group of tiles through the corresponding bundle of fibres.
The modules are longitudinally segmented into four depth segments by grouping fibers from different tiles. As a result, each module is divided into $`5(alongZ)\times 4(alongX)`$ separate cells. The readout cells have lateral dimensions of $`200mm(alongZ)\times (200380)mm`$ (along Y, depending on the depth number) and longitudinal dimensions of 300, 400, 500, 600 mm for depths 1 – 4, corresponding to 1.5, 2, 2.5 and 3 $`\lambda _I`$ at $`\eta =0`$. At the output we have, for each event, 200 response values $`Q_{ijkl}`$ from the PMTs, properly calibrated and pedestal-subtracted. Here $`i=1,\mathrm{},5`$ is the column-of-cells (tower) number, $`j=1,\mathrm{},5`$ is the module number, $`k=1,\mathrm{},4`$ is the depth number and $`l=1,2`$ is the PMT number.
## 3 The Module-0
The layout of the readout cell geometry for the Module-0 is shown in Fig. 3-3 of . The Module-0 has three depth segments. The thickness of the Module-0 at $`\mathrm{\Theta }=0^o`$ is 1.5 $`\lambda `$ in the first depth sampling, 4.2 $`\lambda `$ in the second and 1.9 $`\lambda `$ in the third, with a total depth of 7.6 $`\lambda `$. The Module-0 samples the shower with 11 tiles varying in depth from 97 to 187 mm. The front face area is $`560\times 22cm^2`$.
In setup 2 (see Fig. 3-2 of ) the 1m prototype modules are placed on a scanning table on top of and at the bottom of the Module-0, with a 10 cm gap between them. This scanning table allowed movement in any direction. Upstream of the calorimeter a trigger counter telescope (S1-S3) was installed, defining a beam spot of 2 cm in diameter. Two delay-line wire chambers (BC1-BC2), each with $`Z`$, $`Y`$ readout, allowed the impact point of beam particles on the calorimeter face to be reconstructed to better than $`\pm 1`$ mm . A helium Čerenkov threshold counter was used to tag $`\pi `$-mesons and electrons for $`E`$ =10 and 20 $`GeV`$. For the measurement of the longitudinal and lateral hadronic shower leakage, back ($`80\times 80cm^2`$) and side ($`40\times 115cm^2`$) ”muon walls” were placed behind and on the side of the calorimeter.
## 4 Data Taking and Event Selection
Data were taken with electron and pion beams of E = 10, 20, 60, 80, 100 and 180 GeV at $`\eta =0.25`$ and $`0.55`$. The following 6 cuts were used. Cuts 1 and 2 removed beam halo. Cut 3 removed muons and non-single-track events. Cuts 4, 5 and 6 carried out the electron-pion separation. Cut 4 is based on the Čerenkov counter amplitude. Cut 5 is the relative shower energy deposition in the first two calorimeter depths:
$$C_i=\sum_{\mathrm{selected}\;i}\;\sum_{j=3}\;\sum_{k=1}^{2}\;\sum_{l=1}^{2}Q_{ijkl}/E,$$
(1)
where
$$E=\sum_{ijkl}Q_{ijkl}.$$
(2)
and the indexes $`i`$ and $`k`$ in $`Q_{ijkl}`$ determine the regions of electromagnetic shower development. The values of $`C_i`$ depend on the particle’s entry angle $`\mathrm{\Theta }`$. The basis for the electron-hadron separation using cut 5 is the very different longitudinal energy deposition of electrons and hadrons.
Cut 6 is related to the lateral shower spread:
$$E_{cut}=\frac{\sqrt{\sum_c\left(E_c^\alpha-\sum_cE_c^\alpha/N_{cell}\right)^2}}{\sum_cE_c^\alpha},$$
(3)
where $`1\le c\le N_{cell}`$ and $`N_{cell}`$ is the number of cells used. The power parameter $`\alpha =0.6`$ has been tuned to achieve maximum separation efficiency.
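For concreteness, the two separation variables can be computed from the response array as sketched below (Python); since the text does not spell out the exact tower selection entering eq. (1), summing all towers of the central module is our own simplifying assumption:

```python
import numpy as np

def separation_variables(Q, alpha=0.6):
    """Sketch of the cut-5 and cut-6 variables of eqs. (1) and (3).
    Q[i,j,k,l]: calibrated responses for tower i, module j, depth k, PMT l;
    the tower/module selection entering C is schematic (central module j=3)."""
    E = Q.sum()                                  # total response, eq. (2)
    C = Q[:, 2, :2, :].sum() / E                 # first two depths, module 3
    Ec = Q.sum(axis=3).reshape(-1) ** alpha      # cell energies, power alpha
    E_cut = np.sqrt(((Ec - Ec.mean()) ** 2).sum()) / Ec.sum()
    return C, E_cut

Q = np.random.rand(5, 5, 4, 2)                   # stand-in for one event
print(separation_variables(Q))
```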
The distributions of events as a function of $`C_i`$ and $`E_{cut}`$ for various energies at $`\eta =0.25`$ and $`\eta =0.55`$ are shown in Fig. 2 and Fig. 3. Fig. 4 shows the scatter plots $`E_{cut}`$ versus $`C_i`$. Two groups of events are clearly separated: the left group corresponds to pions, the right group corresponds to electrons.
## 5 Electron Response
With respect to the electron response, our calorimeter is a very complicated object. It may be imagined as a continuous set of calorimeters with variable absorber and scintillator thicknesses (from $`t`$ = 58 to 28 mm and from $`s`$ = 12 to 6 mm for $`14^o\le \mathrm{\Theta }\le 30^o`$), where $`t`$ and $`s`$ are the thicknesses of the absorber and scintillator respectively.
Therefore the electron response ($`R=E_e/E_{beam}`$) is a rather complicated function of $`E_{beam}`$, $`\mathrm{\Theta }`$ and $`Z`$. The energy response spectrum for a given run (the beam has a transverse spread of $`\pm 10mm`$) is as a rule non-Gaussian (Fig. 5 and Fig. 6), since it is a superposition of different response spectra, but it becomes Gaussian for fixed E, $`\mathrm{\Theta }`$, Z values. Fig. 7 and Fig. 8 show the normalized electron response for E = 10, 20, 60, 80, 100, 180 GeV at $`\eta =0.25`$ and $`0.55`$ as a function of the impact-point $`Z`$ coordinate. One can see a clear periodic structure of the response with an 18 mm period. The mean values (parameter $`P_2`$) and the amplitudes (parameter $`P_1`$) of these spectra have been extracted by fitting the sine function:
$$f(Z)=P_2+P_1\mathrm{sin}(2\pi Z/P_3+P_4)$$
(4)
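The extraction of the parameters is a standard least-squares fit of eq. (4); a minimal sketch on synthetic data, standing in for one of the scans of Fig. 7, could look as follows:

```python
import numpy as np
from scipy.optimize import curve_fit

def response(Z, P2, P1, P3, P4):
    """Eq. (4): periodic electron response along the impact coordinate Z."""
    return P2 + P1 * np.sin(2 * np.pi * Z / P3 + P4)

# synthetic scan with the 18 mm tile period and a 7.6% amplitude
Z = np.linspace(0.0, 90.0, 180)                               # mm
y = response(Z, 1.0, 0.076, 18.0, 0.3) + np.random.normal(0, 0.01, Z.size)
popt, pcov = curve_fit(response, Z, y, p0=[1.0, 0.05, 18.0, 0.0])
print(popt)           # recovered mean P2, amplitude P1, period P3 ~ 18 mm
```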
Fig. 9 (top) shows the parameter $`P_1`$ as a function of the beam energy. As can be seen, this parameter does not depend on the beam energy within errors and decreases with increasing $`\eta `$, from $`(7.6\pm 0.3)\%`$ at $`\eta =0.25`$ to $`(2.9\pm 0.2)\%`$ at $`\eta =0.55`$.
Fig. 9 (bottom) shows the mean normalized electron response as a function of energy for the two values of $`\eta `$. As can be seen, there is some increase of the mean normalized electron response with increasing energy. There is no difference between the values for the two $`\eta `$ settings. Note that there are additional systematic errors in these values (not given in this Figure) due to the uncertainties in the average beam energies. These uncertainties are determined by the expression
$$\frac{\mathrm{\Delta }E_{beam}}{E_{beam}}=\frac{25\%}{E_{beam}}\oplus 0.5\%$$
and range from 2.5 % for $`E_{beam}=10`$ GeV to 0.5 % for $`E_{beam}=180`$ GeV.
We attempted to explain the electron response as a function of the $`Z`$ coordinate by calculating the total number of shower electrons (positrons) crossing the scintillator tiles, taking into account the arrangement of the tiles and their sizes and using the shower curve (the number of particles in the shower, $`N_e`$, as a function of the longitudinal shower development) given in . These calculations were performed for several energies and angles for trajectories entering at four different elements of the calorimeter periodic structure: spacer, master, tile, master. The results for E = 10, 100, 180 GeV at $`\eta =0.25`$ are shown in Fig. 10. There is a maximum at the impact point corresponding to a tile and a minimum at a spacer plate. These simple calculations agree with the experimental data as regards the energy independence and the periodicity of the electron response, but they do not reproduce the value of the amplitude. The latter is connected with neglecting the lateral spread of the shower.
## 6 Electron Energy Resolution
The relative electron energy resolutions, extracted from the energy distributions (Fig. 5 and Fig. 6), are shown in Fig. 11, together with the 1m prototype data, as a function of $`1/\sqrt{E}`$. A fit of these data with expression (5) produced the parameters $`a_{exp}`$ and $`b_{exp}`$ given in Table 1, together with the data for various iron-scintillator calorimeters.
$$\frac{\sigma }{E}=\frac{a}{\sqrt{E}}\oplus b,$$
(5)
We compared our results on the energy resolution with the parameterization suggested in :
$$\frac{\sigma }{E}=\frac{a}{\sqrt{E}}=\frac{\sigma _o}{\sqrt{E}}\left(\frac{t}{X_t}\right)^\gamma \left(\frac{s}{X_s}\right)^\delta ,$$
(6)
where $`\sigma _o=6.33\%\sqrt{GeV}`$, $`\gamma `$ = 0.62, $`\delta `$ = 0.21 are the parameters, $`X_t`$ and $`X_s`$ are the radiation lengths of iron and scintillator respectively. In our case the values of $`t`$ and $`s`$ are equal to: $`t=14mm/\mathrm{sin}\mathrm{\Theta }`$, $`s=3mm/\mathrm{sin}\mathrm{\Theta }`$. This formula is purely empirical and the parameters $`\sigma _o,\gamma ,\delta `$ were determined by fitting the Monte Carlo data.
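The evaluation of eq. (6) for our geometry is then straightforward; in the sketch below the radiation lengths of iron (about 17.6 mm) and of polystyrene-based scintillator (about 413 mm) are assumed literature values, not numbers quoted in this paper:

```python
import numpy as np

def sampling_term(theta_deg, sigma0=6.33, gamma=0.62, delta=0.21,
                  X_t=17.6, X_s=413.0):
    """Sampling term a of eq. (6) in % sqrt(GeV); thicknesses in mm.
    X_t, X_s are assumed literature radiation lengths, not text values."""
    t = 14.0 / np.sin(np.radians(theta_deg))   # effective iron thickness
    s = 3.0 / np.sin(np.radians(theta_deg))    # effective scintillator thickness
    return sigma0 * (t / X_t) ** gamma * (s / X_s) ** delta

for theta in (14.0, 30.0):
    print(theta, sampling_term(theta))         # to be compared with Table 1
```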
The results of these calculations are given in Table 1. As can be seen from this Table, the energy resolutions obtained for the “ideal” calorimeter are better (by about a factor of 1.5) than the experimental ones.
## 7 Pion Response
Fig. 12 shows the normalized pion response ($`E_\pi /E_{beam}`$) for $`E_{beam}`$ = 20, 100, 180 GeV at $`\eta =0.25`$ and $`0.55`$. Fig. 13 shows the normalized pion response for $`E_{beam}`$ = 20, 100, 180 GeV at $`\eta =0.25`$ and $`0.55`$ as a function of the impact-point $`Z`$ coordinate. Contrary to electrons, these pion Z-dependences do not show any significant periodic structure.
Fig. 17 shows the mean normalized pion response, extracted from Fig. 13, as a function of energy for the two values of $`\eta `$ (the meaning of the lines is explained below). As can be expected, since the $`e/\pi `$ ratio is not equal to 1, the mean normalized pion response increases with increasing beam energy.
As can be seen, the pion response differs between the two values of $`\eta `$: the values for $`\eta =0.55`$ are larger than those for $`\eta =0.25`$. We investigated whether the reason for this difference could be the lateral leakage through the gaps between the 1m prototype modules, estimating the leakage into the gaps by taking into account the longitudinal energy deposition and the radial shower profile. It turned out that the leakage for $`\eta =0.25`$ is larger than for $`\eta =0.55`$, but it is insufficient, at less than 1 %, to explain the observed difference in the pion responses.
## 8 $`e/h`$ Ratio
The responses obtained for $`e`$ and $`\pi `$ make it possible to determine the $`e/h`$ ratio, the intrinsic non-compensation of a calorimeter. In our case the electron-pion ratios reveal a complicated structure, $`e/\pi =f(E,\mathrm{\Theta },Z)`$. Fig. 14 and Fig. 15 show the $`e/\pi `$ ratios for the Module-0 for E = 10, 20, 60, 80, 100 and 180 GeV at $`\eta =0.25`$ and $`0.55`$ as a function of the $`Z`$ coordinate. While for the 1m prototype modules local compensation has been observed (for some $`Z`$ points at 20 GeV and $`\mathrm{\Theta }=10^o`$, see Fig. 4 of ), for the Module-0 this is not the case.
The $`e/\pi `$ ratios, averaged over two 18 mm periods, are shown in Fig. 16 as a function of the beam energy. The errors include the statistical errors and a systematic error of 1 %, added in quadrature.
For extracting the $`e/h`$ ratio we have used two methods: the standard $`e/\pi `$ method and the pion response method.
In the first method, the relation between the $`e/h`$ ratio and the $`e/\pi `$ ratio is:
$$e/\pi =\frac{<E_e>}{<E_\pi >}=\frac{e/h}{1+(e/h-1)f_{\pi ^0}},$$
(7)
where $`f_{\pi ^0}`$ is the average fraction of the energy of the incident hadron going into $`\pi ^0`$ production .
In the second method, the relation between the $`e/h`$ ratio and the pion response, $`<E_\pi >`$, is:
$$\frac{<E_\pi >}{E_{beam}}=\frac{e}{e/h}(1+(e/h-1)f_{\pi ^0}),$$
(8)
where $`e`$ is the efficiency for electron detection. Note that this is usually a two-parameter fit, with parameters $`e`$ and $`e/h`$. In principle, the value of $`e`$ can be determined from the ratio $`e=<E_e>/E_{beam}`$.
There are two analytic forms for the intrinsic $`\pi ^o`$ fraction suggested by Groom
$$f_{\pi ^o}=1-\left(\frac{E}{E_o}\right)^{m-1}$$
(9)
and Wigmans
$$f_{\pi ^o}=k\,\mathrm{ln}\left(\frac{E}{E_o}\right),$$
(10)
where $`E_o=1`$ GeV, $`m=0.85`$, $`k=0.11`$.
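The $`e/\pi `$ method then amounts to a one-parameter fit of eq. (7) using one of the parameterizations (9) or (10); schematically, with toy data points standing in for Fig. 16:

```python
import numpy as np
from scipy.optimize import curve_fit

def f_pi0_groom(E, m=0.85, E0=1.0):
    return 1.0 - (E / E0) ** (m - 1.0)          # eq. (9)

def f_pi0_wigmans(E, k=0.11, E0=1.0):
    return k * np.log(E / E0)                   # eq. (10)

def e_over_pi(E, eh, f_pi0=f_pi0_wigmans):
    """Eq. (7): the e/pi ratio for a given intrinsic e/h."""
    return eh / (1.0 + (eh - 1.0) * f_pi0(E))

E = np.array([10., 20., 60., 80., 100., 180.])  # beam energies in GeV
ratio = e_over_pi(E, 1.45) + np.random.normal(0, 0.01, E.size)  # toy data
eh_fit, _ = curve_fit(lambda E, eh: e_over_pi(E, eh), E, ratio, p0=[1.3])
print(eh_fit)   # recovers e/h close to the input value of 1.45
```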
We used both parameterizations. Fig. 16 shows the $`e/\pi `$ ratio as a function of the beam energy for the Module-0, together with the fit of equation (7) using the Wigmans (Groom) parameterization of $`f_{\pi ^o}(E)`$.
Fig. 17 shows the pion response as a function of the beam energy for the Module-0, together with the fit of equation (8) using the Wigmans (solid line) and Groom (dashed line) parameterizations of $`f_{\pi ^o}(E)`$.
The confidence levels of the fits for these parameterizations are good, i.e., $`\chi ^2`$ is less than the number of degrees of freedom. We could thus obtain four values for the $`e/h`$ ratio. The results are presented in Table 2.
As can be seen, the $`e/h`$ ratios obtained by the pion-response method have errors about 10 times larger than those obtained by the $`e/\pi `$ method. In addition, there is some systematic difference: the $`e/h`$ ratios obtained by the pion-response method are 20 – 40 % larger than those obtained by the $`e/\pi `$ method. This can be explained by some increase in the electron response in the 60 – 180 GeV energy range; this systematic effect cancels in the $`e/\pi `$ method. It is remarkable that ref. , in which the $`e/h`$ ratio for the 1m prototype modules was determined, reached the opposite conclusion concerning the relative merits of these methods: there, an advantage was observed for the pion-response method, and the standard $`e/\pi `$ method led to an error larger (by about a factor of 2) than that of the pion-response method, called in that work the non-linearity method. This can be explained by the different scale of errors in the corresponding input data. In that work the $`e/\pi `$ ratios had 3 % errors and the pion response values had 0.3 % errors, whereas in our case the $`e/\pi `$ ratios and the pion response values have errors at the same 1 % level.
We gave preference to the $`e/\pi `$ method, and our final results are: $`e/h=1.45\pm 0.014`$ for $`\eta =0.25`$ and $`e/h=1.36\pm 0.014`$ for $`\eta =0.55`$. Fig. 18 shows these values together with those for the 1m prototype modules as a function of the $`\mathrm{\Theta }`$ angle. A difference in the $`\mathrm{\Theta }`$ behaviour is observed. This can be explained by the different behaviour of the electron and pion responses as functions of $`\mathrm{\Theta }`$ for these two calorimeters, as shown in Fig. 19. For the Module-0 a slight decrease of the electron response and some increase of the pion response are observed. As a result, the $`e/h`$ ratio decreases by 6 %.
Simple calculations of the responses, obtained by summing the energy depositions in the tiles crossed along the shower axis and taking into account the arrangement and sizes of the tiles and the longitudinal shower profiles, confirmed these observations.
The $`e/h`$ values obtained are given in Table 3 together with the other existing experimental data and Monte Carlo calculations for various iron-scintillator calorimeters. The corresponding values of the iron absorber thickness ($`t`$), the readout scintillator layer thickness ($`s`$), the ratio $`R_d=t/s`$ and the symbols used are also given. These $`e/h`$ values are also shown in Fig. 20 as a function of the $`R_d`$ ratio and of the iron thickness. As can be seen, the $`e/h`$ ratio has a very complicated behaviour, being a function of the thickness of the passive (iron) layers, the sampling fraction and, in our case, of the $`\mathrm{\Theta }`$ angle and of the sizes and placement of the scintillator tiles.
Besides, a considerable disagreement between the different Monte Carlo calculations and the experimental data is observed.
## 9 Conclusions
Detailed experimental information is obtained on the electron and pion responses, the electron energy resolution and the $`e/h`$ ratios of the Module-0 of the ATLAS iron-scintillator barrel hadron calorimeter with longitudinal tile configuration, as a function of the incident energy $`E`$, the impact point $`Z`$ and the incidence angle $`\mathrm{\Theta }`$. The results are compared with the existing experimental data, obtained for the 1m prototype modules and for various iron-scintillator calorimeters, and with Monte Carlo calculations. It is shown that the $`e/h`$ ratio has a very complicated behaviour, being a function of the thickness of the passive (iron) layers, the sampling fraction and, in our case, of the $`\mathrm{\Theta }`$ angle and of the sizes and placement of the scintillator tiles.
## 10 Acknowledgments
This work is the result of the efforts of many people from the ATLAS Collaboration. The authors are greatly indebted to the whole Collaboration for the test-beam setup and data taking. The authors thank M. Nessi and J. Budagov for their attention to and support of this work.
# BES Results on Inclusive D Meson Decays
## I Introduction
In an era of high-precision experiments such as the B factories and the LHC, accurate measurements of b-flavored particles can benefit from a better knowledge of charm decays and their branching fractions. The inclusive decay $`\mathrm{D}\to \varphi \mathrm{X}`$ has not been measured (throughout this paper, charge conjugation is implied). This branching fraction can serve as an independent check of the existence of additional exclusive decays of D mesons that contain a $`\varphi `$ meson, and is useful for $`\mathrm{B}_\mathrm{s}^0`$ physics studies that use the $`\varphi \ell `$ pair to tag the $`\mathrm{B}_\mathrm{s}^0`$ meson. In addition, this branching fraction would be helpful in understanding the charm meson decay mechanisms.
In this paper, we report a first measurement of the inclusive $`\varphi `$ decay branching fractions of charged and neutral $`\mathrm{D}`$ mesons and a new search for the exclusive semileptonic decay $`\mathrm{D}^+\to \varphi \mathrm{e}^+\nu `$.
## II Data Sample and Analysis Methods
This measurement is based on 22.3 $`\mathrm{pb}^{-1}`$ of data collected in $`\mathrm{e}^+\mathrm{e}^{-}`$ annihilations at $`\sqrt{s}=4.03`$ GeV at the BEPC during 1992-1994. The BES detector has been described in detail elsewhere.
At $`\sqrt{s}`$=4.03 GeV charm mesons D<sup>0</sup> and D<sup>+</sup> are produced via
$`\mathrm{e}^+\mathrm{e}^{-}\to \mathrm{D}^{*+}\mathrm{D}^{*-},\mathrm{D}^{*0}\overline{\mathrm{D}^{*0}},`$
$`\mathrm{D}^{*+}\mathrm{D}^{-},\mathrm{D}^{+}\mathrm{D}^{*-},\mathrm{D}^{*0}\overline{\mathrm{D}^0},`$
$`\mathrm{D}^{+}\mathrm{D}^{-},\mathrm{D}^0\overline{\mathrm{D}^0},`$
followed by cascade decays of the D mesons. However, the D<sup>∗-</sup> can decay either to $`\pi ^{-}\overline{\mathrm{D}^0}`$ or $`\pi ^0(\gamma )\mathrm{D}^{-}`$, so that reconstructing a D meson does not necessarily determine whether the recoiling D meson is charged or neutral. In order to measure specifically $`B(\mathrm{D}^0\to \varphi \mathrm{X})`$ and $`B(\mathrm{D}^+\to \varphi \mathrm{X})`$, the numbers of neutral and charged D mesons recoiling against a reconstructed D meson, and the type of the D meson from which the $`\varphi `$ mesons come, must be determined. To this end two methods have been developed and are used to measure the inclusive branching fractions of the D mesons.
### A The $`\mathrm{D}^0`$ and $`\mathrm{D}^+`$ combinative double tag method (CDTM)
To measure the inclusive $`\varphi `$ branching fractions of the $`\mathrm{D}^0`$ and $`\mathrm{D}^+`$ mesons, the $`\varphi `$ is searched for on the recoil side against a fully reconstructed D meson, and the numbers of $`\varphi `$ events recoiling against the $`\mathrm{D}^0`$ and $`\mathrm{D}^+`$ tags, $`\mathrm{N}_{\mathrm{D}_{\mathrm{tag}}^0}^\varphi `$ and $`\mathrm{N}_{\mathrm{D}_{\mathrm{tag}}^+}^\varphi `$, are determined. These are related via
$$\mathrm{N}_{\mathrm{D}_{\mathrm{tag}}^0}^\varphi =ϵ\mathrm{N}_{\mathrm{D}_{\mathrm{tag}}^0}^{\mathrm{D}^{-}}B(\mathrm{D}^{-}\to \varphi \mathrm{X})+ϵ\mathrm{N}_{\mathrm{D}_{\mathrm{tag}}^0}^{\overline{\mathrm{D}^0}}B(\overline{\mathrm{D}^0}\to \varphi \mathrm{X}),$$
(1)
$$\mathrm{N}_{\mathrm{D}_{\mathrm{tag}}^+}^\varphi =ϵ\mathrm{N}_{\mathrm{D}_{\mathrm{tag}}^+}^{\mathrm{D}^{-}}B(\mathrm{D}^{-}\to \varphi \mathrm{X})+ϵ\mathrm{N}_{\mathrm{D}_{\mathrm{tag}}^+}^{\overline{\mathrm{D}^0}}B(\overline{\mathrm{D}^0}\to \varphi \mathrm{X}),$$
(2)
to the branching fractions of their decays, B($`\mathrm{D}^{-}\to \varphi \mathrm{X}`$) and B($`\mathrm{D}^0\to \varphi \mathrm{X}`$), where $`\mathrm{N}_{\mathrm{D}_{\mathrm{tag}}^0}^{\mathrm{D}^{-}}`$, $`\mathrm{N}_{\mathrm{D}_{\mathrm{tag}}^0}^{\overline{\mathrm{D}^0}}`$, $`\mathrm{N}_{\mathrm{D}_{\mathrm{tag}}^+}^{\mathrm{D}^{-}}`$, and $`\mathrm{N}_{\mathrm{D}_{\mathrm{tag}}^+}^{\overline{\mathrm{D}^0}}`$ are respectively the numbers of $`\mathrm{D}^{-}`$ and $`\overline{\mathrm{D}^0}`$ decays in the recoil against $`\mathrm{D}^0`$ and $`\mathrm{D}^+`$ tags, and $`ϵ`$ is the detection efficiency of the $`\varphi `$. The values of $`\mathrm{N}_{\mathrm{D}_{\mathrm{tag}}^0}^{\mathrm{D}^{-}}`$, $`\mathrm{N}_{\mathrm{D}_{\mathrm{tag}}^0}^{\overline{\mathrm{D}^0}}`$, $`\mathrm{N}_{\mathrm{D}_{\mathrm{tag}}^+}^{\mathrm{D}^{-}}`$, and $`\mathrm{N}_{\mathrm{D}_{\mathrm{tag}}^+}^{\overline{\mathrm{D}^0}}`$ are determined from a measurement of the total production cross-sections of the reactions $`\mathrm{e}^+\mathrm{e}^{-}\to \mathrm{D}^{*}\overline{\mathrm{D}^{*}},\mathrm{D}^{*}\overline{\mathrm{D}}`$ at 4.03 GeV by BES.
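Given the recoil compositions and the tagged $`\varphi `$ yields, Eqs. (1)-(2) form a 2$`\times `$2 linear system for the two branching fractions. The sketch below solves it; all numerical inputs are placeholders, not the measured BES yields.

```python
import numpy as np

eps = 0.084                       # phi detection efficiency (illustrative)
# rows: (D0-tag, D+-tag) equations; columns: (D-, D0bar) recoil counts
N_recoil = np.array([[1500.0, 5300.0],
                     [1900.0,  350.0]])
N_phi = np.array([4.0, 6.0])      # phi yields against D0 and D+ tags (placeholders)

B = np.linalg.solve(eps * N_recoil, N_phi)
print(f"B(D- -> phi X) = {B[0]:.3%},  B(D0bar -> phi X) = {B[1]:.3%}")
```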
### B The recoil charge method
At $`\sqrt{s}=`$4.03 GeV, $`\mathrm{D}^{*}\overline{\mathrm{D}^{*}}`$ and $`\mathrm{D}^{*}\overline{\mathrm{D}}`$ pairs are produced with no additional charged tracks. Charged pions arising from direct D decays are very slow, and are mostly undetected in the BES detector. As a result, only decay products of the $`\mathrm{D}^+`$ and $`\mathrm{D}^0`$ are visible for most events. Let Q<sub>D</sub> be the charm flavor of the reconstructed D meson, and Q<sub>rec</sub> be the total charge of the tracks recoiling against this D meson. The Q<sub>rec</sub> distribution for D<sup>0</sup> (D<sup>+</sup>) tags centers at 0 ($`-1`$), and has a spread of $`\pm `$1. The recoil charge method selects neutral and charged D mesons according to
$$Q_{rec}=0\;\;\mathrm{or}\;\;Q_{rec}=Q_\mathrm{D}=1\qquad \mathrm{for}\;\mathrm{D}^0\;\mathrm{tags}$$
(3)
and
$$Q_{rec}\cdot Q_\mathrm{D}<0\qquad \mathrm{for}\;\mathrm{D}^+\;\mathrm{tags}$$
(4)
For inclusive D decays, the efficiency and the misidentification rate are 0.74$`\pm `$0.02 and 0.25$`\pm `$0.02, respectively, as obtained from Monte Carlo simulations, and are approximately the same for both charged and neutral D mesons. These numbers are confirmed using kinematically selected data events $`\mathrm{e}^+\mathrm{e}^{-}\to \mathrm{D}^+\mathrm{D}^{-}`$ and $`\mathrm{e}^+\mathrm{e}^{-}\to \mathrm{D}^0\overline{\mathrm{D}^0}`$. For events in which a D tag and a recoil $`\varphi `$ have been fully reconstructed, the efficiency of the recoil charge method is improved over that for inclusive D events. A Monte Carlo study of various D decay modes into final states containing a $`\varphi `$ has been performed, and the variations among their efficiencies are included in the systematic errors. For these events, the recoil charge method selects the D meson type correctly 0.91$`\pm `$0.01$`\pm `$0.02 of the time, and misidentifies it for 0.09$`\pm `$0.01$`\pm `$0.02 of the events, where the first error is due to Monte Carlo statistics, and the second is systematic.
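The quoted efficiency and misidentification rate can be arranged into a 2$`\times `$2 response matrix and inverted to unfold the true numbers of neutral and charged D decays from the tagged yields. A sketch, with placeholder yields, follows.

```python
import numpy as np

# Response matrix for fully reconstructed D + recoil-phi events:
# columns are the true D type, rows the tagged type.
e, m = 0.91, 0.09
M = np.array([[e, m],
              [m, e]])
tagged = np.array([9.0, 1.0])     # (D0-tagged, D+-tagged) yields, placeholders
true = np.linalg.solve(M, tagged)
print(f"unfolded: N(D0) = {true[0]:.1f}, N(D+) = {true[1]:.1f}")
```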
## III Data Analysis
### A Reconstruction of D, $`\varphi `$ Mesons
Charged tracks are required to have good helix fits which have a normalized chi-square of less than 9 per degree of freedom. These tracks must satisfy $`|\mathrm{cos}\theta |<`$ 0.8, where $`\theta `$ is the polar angle, and be consistent with coming from the primary event vertex. For charged particles, a particle identification procedure is applied. A combined particle confidence level calculated using the d$`E`$/d$`x`$ and TOF measurements is required to be greater than $`1\%`$ for the $`\pi `$ hypothesis. For the kaon hypothesis, $`L_k>L_\pi `$, where $`L`$ is the likelihood for a particle type, is required.
Charged and neutral D mesons are reconstructed via the decays $`\mathrm{D}^0\to \mathrm{K}^{-}\pi ^+`$, $`\mathrm{K}^{-}\pi ^{-}\pi ^+\pi ^+`$ and $`\mathrm{D}^+\to \mathrm{K}^{-}\pi ^+\pi ^+`$. To reduce combinatorial backgrounds, only D mesons from $`\mathrm{e}^+\mathrm{e}^{-}\to \overline{\mathrm{D}}\mathrm{D}^{*},\mathrm{D}^{*}\overline{\mathrm{D}^{*}}`$ reactions are selected, with cuts on the momenta of the Kn$`\pi `$ combinations. Figures 1(a), 1(b) and 1(c) show the invariant mass distributions for events that pass the selections. The signals are fitted, and after having accounted for double counting, the number of D events is determined to be $`9054\pm 309\pm 416`$, where the first error is statistical and the second systematic. These D events are used as tagged $`\mathrm{e}^+\mathrm{e}^{-}\to \overline{\mathrm{D}}\mathrm{D}^{*},\mathrm{D}^{*}\overline{\mathrm{D}^{*}}`$ events in which the recoil side contains an unbiased $`\overline{\mathrm{D}}`$ decay.
Table 1 summarizes the numbers of neutral and charged D mesons in the recoil against the reconstructed D tags. The averages from the CDTM method and the recoil charge method, calculated assuming a full correlation between their statistical errors, are 6803$`\pm `$303$`\pm `$322 and 2251$`\pm `$77$`\pm `$112 for D<sup>0</sup> and D<sup>+</sup>, respectively.
The $`\varphi `$ meson is reconstructed through its decay to $`\mathrm{K}^+\mathrm{K}^{-}`$. Figure 2 shows the invariant mass distribution of the selected $`\mathrm{K}^+\mathrm{K}^{-}`$ pairs. Using a Breit-Wigner convolved with a Gaussian plus a third-order polynomial background to fit the mass spectrum, a mass of $`1.0194\pm 0.0002`$ GeV$`/c^2`$ and a total of $`1108\pm 70`$ $`\varphi `$ events are obtained. In this measurement, a $`\varphi `$ signal window is defined as the region from 1.00 to 1.04 GeV/$`c^2`$, as indicated by the arrows in Figure 2.
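The fit model just described can be sketched as a numerical convolution of a Breit-Wigner with a Gaussian resolution, added to a third-order polynomial; the width, resolution and normalizations below are illustrative starting values, not the fitted BES parameters.

```python
import numpy as np

def bw_conv_gauss(m, m0=1.0194, gamma=0.0044, sigma=0.003):
    """Breit-Wigner (width gamma) convolved with a Gaussian (width sigma)."""
    t = np.linspace(-5 * sigma, 5 * sigma, 201)
    bw = gamma / (2 * np.pi) / ((m[:, None] - t[None, :] - m0) ** 2 + gamma**2 / 4)
    g = np.exp(-t**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)
    return np.trapz(bw * g[None, :], t, axis=1)

def model(m, n_sig, poly):
    # signal shape plus polynomial background, evaluated on a mass grid
    return n_sig * bw_conv_gauss(m) + np.polyval(poly, m)

m = np.linspace(0.99, 1.10, 111)                   # GeV/c^2
print(model(m, n_sig=1100.0, poly=[0.0, 0.0, 0.0, 50.0])[:3])
```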
### B Inclusive $`\mathrm{D}\to \varphi \mathrm{X}`$
Figures 3(a) and 3(b) show the invariant mass distributions of $`\mathrm{K}^+\mathrm{K}^{-}`$ pairs from D<sup>+</sup> and D<sup>0</sup>, respectively, as identified by the recoil charge selection criteria. The $`\mathrm{Kn}\pi `$ invariant masses for the single tags are within $`\pm 2.5\sigma _{\mathrm{M}_\mathrm{D}}`$ of the $`\mathrm{D}`$ masses. In this measurement, $`\mathrm{K}^+\mathrm{K}^{-}`$ pairs with masses in the ranges 0.98 - 1.00 GeV/$`c^2`$ and 1.04 - 1.15 GeV/$`c^2`$ are taken as background for the $`\varphi `$. The $`\mathrm{Kn}\pi `$ mass regions from 1.7 to 2.1 GeV/$`c^2`$, excluding regions within $`\pm 3\sigma _{\mathrm{M}_\mathrm{D}}`$ of the fitted D masses, are defined as background control regions for the D mesons. As shown in Figures 3(a) and 3(b), 15 events are found as $`\mathrm{D}\varphi `$ candidates, and 14 events are selected as background outside the $`\varphi `$ mass region. Using the D sideband events, a background of 0.5$`\pm `$0.5 events is estimated among the D candidates. Subtracting the background contributions to both the D and the $`\varphi `$, we obtain an excess of 10.2$`\pm `$4.0 events in the $`\varphi `$ signal region.
The two D type identification methods, CDTM and the recoil charge method, are applied to these events to extract the numbers of $`\varphi `$ from specific D<sup>0</sup> and D<sup>+</sup> decays. Subtracting backgrounds estimated using the $`\varphi `$ and D sidebands, the two methods yield 3.7$`\pm `$4.7 (CDTM) and 9.7$`\pm `$4.2 (recoil charge) $`\mathrm{D}^0\to \varphi \mathrm{X}`$ events, and 6.5$`\pm `$5.5 (CDTM) and 0.5$`\pm `$1.7 (recoil charge) $`\mathrm{D}^+\to \varphi \mathrm{X}`$ events, respectively. Averaging over the two methods and assuming a complete correlation in their statistical errors, the numbers of $`\mathrm{D}^0\to \varphi \mathrm{X}`$ and $`\mathrm{D}^+\to \varphi \mathrm{X}`$ events are determined to be 6.7$`\pm `$4.5 and 3.5$`\pm `$3.6, respectively, and are used to determine the branching fractions.
### C Search for the decay $`\mathrm{D}^+\to \varphi \mathrm{e}^+\nu `$
Among the 15 $`\varphi `$ candidates observed on the recoil side of the events, 4 are accompanied by at least one charged track within $`|\mathrm{cos}\theta |<0.85`$. Each of these tracks is checked for consistency with being an electron using the d$`E`$/d$`x`$ information. This electron identification requires the electron confidence level to be greater than $`1\%`$, and $`\mathrm{L}_\mathrm{e}>\mathrm{L}_\pi `$. None of the accompanying tracks is identified as an electron.
## IV Results
Assuming 10.2$`\pm `$4.0 signal $`\mathrm{D}\to \varphi \mathrm{X}`$ events, and correcting for the $`\varphi `$ meson detection efficiency of 0.084$`\pm `$0.006 obtained from a Monte Carlo simulation, the average branching fraction for the BES mixture of D<sup>0</sup> and D<sup>+</sup> mesons is measured to be
$$B(\mathrm{D}\to \varphi \mathrm{X})=(1.29\pm 0.51\pm 0.12)\%,$$
where the first error is statistical and the second systematic.
Based on 6.7$`\pm `$4.5 $`\mathrm{D}^0\to \varphi \mathrm{X}`$ and 3.5$`\pm `$3.6 $`\mathrm{D}^+\to \varphi \mathrm{X}`$ events, as determined in the previous section, 90% C. L. upper limits are set on the specific D<sup>0</sup>, D<sup>+</sup> decays:
$$B(\mathrm{D}^0\to \varphi \mathrm{X})<2.5\%,$$
$$B(\mathrm{D}^+\to \varphi \mathrm{X})<5.0\%.$$
The results include systematic errors arising from uncertainties ($`\pm 0.05\%`$, $`\pm 0.06\%`$ and $`\pm 0.04\%`$) in the numbers of singly tagged D mesons due to the choice of a background function and fit interval for the single tag samples and uncertainties ($`\pm 0.08\%`$, $`\pm 0.13\%`$ and $`\pm 0.09\%`$) in the inclusive $`\varphi `$ efficiency. The combined effect of these sources is obtained by adding the uncertainties in quadrature, which yields total systematic errors of $`\pm 0.10\%`$, $`\pm 0.14\%`$ and $`\pm 0.10\%`$ for the D<sup>0</sup>, D<sup>+</sup>, and their sum, respectively.
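As a quick consistency check, the two systematic sources can be added in quadrature as described above:

```python
import numpy as np

tag_sys = np.array([0.05, 0.06, 0.04])   # %: D0, D+, sum (single-tag counting)
eff_sys = np.array([0.08, 0.13, 0.09])   # %: inclusive phi efficiency
print(np.round(np.hypot(tag_sys, eff_sys), 2))
# -> [0.09 0.14 0.1 ]; consistent with the quoted 0.10, 0.14, 0.10
#    totals to within the rounding of the inputs
```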
Based on zero candidate $`\mathrm{D}^+\to \varphi \mathrm{e}^+\nu `$ events, and a detection efficiency of 0.0652, a 90% C. L. upper limit is set for the decay at
$$B(\mathrm{D}^+\to \varphi \mathrm{e}^+\mathrm{X})<1.6\%.$$
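This limit follows from standard Poisson counting: zero observed events correspond to an upper limit of 2.30 signal events at 90% C.L. A minimal check, using the efficiency quoted above and the number of charged D tags quoted earlier:

```python
import math

N_up = -math.log(0.10)            # 2.303 events: exp(-N) = 0.10 for zero observed
eff = 0.0652                      # quoted detection efficiency
N_Dplus = 2251                    # charged D tags (average quoted in Sec. III A)
print(f"B < {N_up / (eff * N_Dplus):.2%}")    # -> B < 1.57%, i.e. 1.6%
```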
## V Conclusion
In summary, the inclusive branching fractions of the charged and neutral D mesons into a $`\varphi `$ have been directly measured. Compared with the sums of the existing measurements of the exclusive $`\mathrm{D}^0`$ and $`\mathrm{D}^+`$ decays containing a $`\varphi `$ in the final state, these BES branching fraction values indicate little room for additional $`\varphi `$ decay modes of D<sup>0</sup> and D<sup>+</sup> mesons.
|
no-problem/9903/astro-ph9903047.html
|
ar5iv
|
text
|
# Observations of OJ 287 from the Geodetic VLBI Archive of the Washington Correlator
## 1. Introduction
OJ 287 is a highly variable radio source at z = 0.306 (Miller, French, & Hawley 1978; Sitko & Junkkarinen 1985). The nature of the underlying galaxy is not well known; Kinman (1975) and Hutchings et al. (1984) detected a nebulosity that might be a galaxy. The light curve of OJ 287 presents a complex structure, but its most interesting aspect is the series of prominent flares which recur with a period of 12 years (Sillanpää et al. 1988). Recent V-band observations have revealed double-peaked outbursts (Sillanpää et al. 1996; Lehto & Valtonen 1996). Because of the cyclic 12 year optical flares, OJ 287 has become the best candidate to harbor a supermassive binary black hole system. Sillanpää et al. (1988) used such a system, in which the light variations were related to tidally induced mass flows from the accretion disk into the black hole, to explain the periodicity in OJ 287’s light curve. The orbit of the secondary black hole was assumed to be coplanar with the accretion disk. Lehto & Valtonen (1996) suggested a binary system model where the primary is surrounded by an accretion disk with a high inclination angle relative to the orbit of the secondary. A major flare would be observed whenever the secondary crossed the disk of the primary. The binary system model of Villata et al. (1998) with two independent relativistic jets and the nodding disk model of Katz (1997) use a sweeping beam approach to explain the same optical flares.
The radio light curves are dominated by outbursts at irregular intervals of about a year, and modulated by a much longer non-periodic timescale (Usher 1979). Analysis of radio-optical events in OJ 287 has shown correlated activity. Usher (1979) compared events from 1966 to 1978 and proposed that most optical outbursts are synchronous with radio counterparts. Valtaoja, Sillanpää, & Valtaoja (1987) studied isolated flaring events and found that optical variations preceded the radio ones by a few months.
VLBI observations of OJ 287 have been made at 5 GHz (Roberts, Gabuzda, & Wardle 1987; Gabuzda, Wardle, & Roberts 1989), 8 GHz (Vicente, Charlot, & Sol 1996), and 43 and 100 GHz (Tateyama et al. 1996). Superluminal motion was first detected from VLBI polarization data at 5 GHz (Roberts et al. 1987). Prior to this paper, three superluminal knots — named K1, K2 and K3 — had been followed from 1981 to 1988. The observed proper motion of these components corresponds to an apparent superluminal speed between 3.7 and 5.1$`c`$ (H<sub>0</sub> = 65 km s<sup>-1</sup> Mpc<sup>-1</sup>, q<sub>0</sub> = 0.5). In a single epoch VLBI observation at 8.3 GHz, a new jet component K4 and another polarization component located between the core and K4 named K5 have been added to the list of components of OJ 287 by Gabuzda & Cawthorne (1996). More recently, Vicente et al. (1996) fitted a helical path with two consecutive loops to the trajectory of component K3.
We present 27 geodetic VLBI maps at 8.3 GHz obtained from the Washington correlator’s database. Geodetic VLBI maps have been successfully used for astrophysical studies of compact sources (e.g., Guoquiang, Rönnäng & Bååth 1987; Charlot 1990; Britzen et al. 1994; Vicente et al. 1996). More recently, Piner & Kingham (1998), and Tateyama et al. (1998) have obtained interesting results for a group of EGRET blazars and BL Lac respectively using the Washington VLBI correlator’s database.
## 2. Observations
The VLBI observations used in this paper are from geodetic VLBI data archived at the Washington correlator. These data were obtained from geodetic dual-frequency VLBI experiments (Rogers et al. 1983) carried out by the Naval Observatory (Eubanks et al. 1991), the National Oceanographic and Atmospheric Administration (Carter, Robertson, & MacKay 1985), the Crustal Dynamics Project, and the Space Geodesy Project, (Coates et al. 1985; Smith & Turcotte 1993). The VLBI observations were processed at the Washington VLBI Correlator at the U.S. Naval Observatory (USNO).
All observations were obtained with Mark-III dual-frequency VLBI receivers at both X- and S-band (centered at 8.5 and 2.3 GHz respectively), providing noise temperatures of 70 K - 200 K. X-band consists of 8 individual channels of 2 MHz bandwidth spanning the range 8.2 to 8.9 GHz. All stations were equipped with H-masers as the local frequency standards. Geodetic VLBI data useful for imaging OJ 287 begins to appear in the Washington Correlator archive around 1990. Table 1 lists all observations used in this work: column 1 gives the epochs of the observations, column 2 the names of the experiments, column 3 the antennas participating in the experiments, column 4 the peak brightness of the images, column 5 the interferometric dirty beams, column 6 the position angles of the beams measured from north through east, and the last column the number of scan-baselines in the observations, where the number of scan-baselines is the sum over all the scans of the number of baselines per scan. The dirty beams were about 0.5 - 0.6 mas in size for all maps. We have used a restored circular beam of 0.6 mas on the maps. The dynamic range defined as the ratio of the peak flux per beam to the lowest positive contour in the maps is about 300:1. The total length in which the moving components in OJ 287 can be followed with the geodetic network appears to be limited to radii not much greater than 1.5 mas. This is about half the distance reached by the knots in the 5 GHz VLBI observations made by Roberts et al. (1987) and Gabuzda et al. (1989). Except for CRD96GH (Table 1) where the resulting map is the sum of the experiments CRD96G and CRD96H observed on Sep 19 and Sep 20 of 1996 respectively, all maps were obtained from a single geodetic observation.
The data were coherently averaged over four seconds to determine the visibilities. The data were calibrated and fringe-fitted using standard routines from the AIPS software package, and the images were produced using the self-calibration procedures (e.g. Pearson & Readhead 1984) of the Caltech Difmap software package.
## 3. Results
Throughout this paper we assume a standard Friedmann cosmology with H<sub>0</sub> = 65 km s<sup>-1</sup> Mpc<sup>-1</sup> and q<sub>0</sub> = 0.5. At the distance of OJ 287 (z = 0.306), an angular size of 1 mas corresponds to a linear distance of 4.3 pc. The 8 GHz radio maps are presented in Figure 1. The source shows a core-jet structure, with jet components moving in a westerly direction. Table 2 lists the flux, position, and size of each component obtained by model fitting the observed visibilities with elliptical Gaussian components. Position angle is measured from north through east.
The motion of the components away from the core can be better appreciated by inspecting Figure 2, where the separation of components from the core is plotted against time. The straight lines provide estimates of the presumed zero separation of the components C1 to C6 under the assumption of rectilinear motion at constant velocity. The measured proper motions of the components C1 to C6 are 0.74 $`\pm `$ 0.40, 0.44 $`\pm `$ 0.05, 0.52 $`\pm `$ 0.09, 0.40 $`\pm `$ 0.13, 0.46 $`\pm `$ 0.05, and 0.58 $`\pm `$ 0.07 mas yr<sup>-1</sup> respectively, corresponding to an average superluminal speed of approximately 9$`c`$. These values are almost twice as high as those seen in VLBI observations made at 5 GHz by Roberts et al. (1987) and Gabuzda et al. (1989). A similar proper motion has also been obtained by Vicente et al. (1996) for the fastest part of the motion of component K3 (0.4 mas yr<sup>-1</sup>). The number of components observed over this time range, together with their measured speeds, suggests more frequent ejection of VLBI components than has been previously estimated for this source by Vicente et al. (1996), who concluded that VLBI component ejections occurred at intervals of one-half the orbital period of the putative binary black hole system, or once every six years. The small separation between successive components obtained by Gabuzda & Cawthorne (1996) at 8 GHz and Tateyama et al. (1996) at 43 GHz also suggests a higher ejection rate. We can constrain these results using radio light curves, provided that radio outbursts are connected with VLBI components.
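As a cross-check of the quoted numbers, the sketch below recomputes the angular scale and the apparent speed for the assumed cosmology (H<sub>0</sub> = 65 km s<sup>-1</sup> Mpc<sup>-1</sup>, q<sub>0</sub> = 0.5), using the Mattig relation for the angular-size distance; only the mean proper motion of about 0.5 mas yr<sup>-1</sup> is taken as input.

```python
import math

z, H0 = 0.306, 65.0
c_kms = 2.998e5
# Angular-size distance for q0 = 1/2 (Mattig relation), in Mpc
D_A = (2 * c_kms / H0) * (1 - 1 / math.sqrt(1 + z)) / (1 + z)
scale = D_A * 1e6 * math.radians(1.0 / 3.6e6)   # pc per mas
mu = 0.5                                        # mas/yr, mean of C1-C6
beta_app = mu * scale * (1 + z) / 0.3066        # c = 0.3066 pc/yr; (1+z): time dilation
print(f"scale = {scale:.1f} pc/mas, beta_app = {beta_app:.1f} c")
# -> 4.3 pc/mas and ~9 c, as quoted in the text
```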
## 4. Discussion
### 4.1. Radio Light Curves
Figure 3 shows the optical and radio light curves of OJ 287. The beginning of each radio outburst is indicated by a vertical line. There is a good agreement between the epochs of the radio outbursts and the epochs of zero separation of components C1 to C6 which are 1989.5, 1990.0, 1991.7, 1992.5, 1993.3, and 1995.4 respectively. Only C2 would require a higher proper motion for its zero separation epoch to match with the beginning of the radio outburst around 1990.2. However, we may place the 43 GHz data point a little closer to the core, accounting for a possible frequency-dependent separation between it and the 8 GHz data. Since the 8 GHz measurement of Gabuzda & Cawthorne (1996) is not clearly resolved from the core, it may also be placed closer to the core, resulting in a zero separation epoch for C2 more consistent with the beginning of the outburst.
The zero separation epoch of C1 is estimated from only two points. The point closer to the core was taken from 8 GHz VLBI observations obtained by Gabuzda & Cawthorne (1996). The extrapolation of the fitted motion to zero separation agrees well with the start of the radio outburst at 1989.5. Component C3 would be related to the outburst that started in 1991.5. There is also a weak knot observed near the core in 1991.27 at 43 GHz, which may either be component C3 itself or another component. The radio light curve shows a substructure which may be linked with this component, suggesting that it might have been an intrinsically weak or short-lived component, and faded or merged with C3. Component C4 has 3 data points; the farthest point from the core is a measurement obtained from VLBA data (Fey, Clegg, & Fomalont 1996). This component is associated with a single radio outburst that started in 1992.5. The next component, C5, has the largest separation from the core, and is probably associated with a radio outburst that started in 1993.4. Two points from VLBA data at 8 GHz (Fey & Charlot 1997) have been included in the curve. The small separation between C4 and C5 raises the question of whether or not they can be regarded as one component. In the one radio outburst per VLBI component scheme, C4 and C5 are individual components which merge later into C5. One weak point of this interpretation is the non-detection of C5 closer to the core in the 1993 December data. However, it could be too close to C4 to be discernible as a separate component at this time. If C4 and C5 are separate components, we would expect a merging of these two components around the end of 1994. In fact, an increase in the flux density of component C5 in 1994 December could be taken as an indication of such an event. Finally, component C6 could be related to the outburst that started in 1995.4.
We have also examined a series of geodetic VLBI maps at 8 GHz obtained by Vicente et al. (1996). They have interpreted the structure of OJ 287 in terms of helical motion, with an indication of a small loop at a distance of about 0.6 mas from the core. They claim a second loop at 2.4 mas is consistent with VLBI data at 5 GHz (Roberts et al. 1987; Gabuzda et al. 1989). Even so, they found no evidence of boosting of K3 as would be expected from the geometrical effects of a helically moving feature, and the positions of K2 appeared well below the calculated helical trajectory. Even revising the K2 positions upwards (Vicente et al. 1996), the fit does not improve much. Now, if we are allowed to break the component corresponding to epoch 1986.81 (Figure 3 of Vicente et al. 1996) into 2 subcomponents at 0.4 and 0.8 mas, a resolution of 0.5 mas would hardly distinguish between the two models. We could then have rectilinear motion for both components with a proper motion of about 0.4 mas yr<sup>-1</sup>. An appealing support for the two rectilinear components is the existence of well-defined radio outbursts corresponding to their estimated zero-separation epochs, as shown in Figure 3.
We have also looked for VLBI/radio-outburst correlations for the older VLBI components K1, K2, and K3 (Roberts et al. 1987). The derived proper motion for these components (0.20 - 0.28 mas yr<sup>-1</sup>) is about half of our measured proper motion. This indicates that, in the past, different components may have been regarded as the same component. The imbalance between the number of knots and the number of radio outbursts is also noticeable in Figure 3. There are at least 2 well-defined radio outbursts between the one related to the birth of K2 in 1978.4 and that of K3 in 1982.3 (Roberts et al. 1987). A similar occurrence is also present for component K1, indicating that VLBI components may have been missed.
### 4.2. Optical Light Curve
The top part of Figure 3 shows the optical light curve from Sillanpää et al. (1996). There is a remarkable similarity between it and the radio light curves. Despite the complexity of the structures, most features can be found in both emission bands. The most easily recognizable correlated radio-optical feature is the double structure that occurred in 1983-84, which had a similar amplitude in the radio and in the optical. The double outbursts of 1971-73 and 1994-95 present a pair of almost identical optical flares, while in the radio these features appear almost as single outbursts coincident with the second optical flare. However, upon more detailed inspection, small protrusions on the radio light curves coincident with the first flare of the double peak are also present in 1971 and 1994. Even weak optical-radio substructures can be pinpointed in the feature beginning in 1994. This can be seen in Figure 4, which shows an expanded section of the light curve corresponding to the period of VLBI observations (1990 to 1997). It is also clear that radio features are delayed by a few months relative to optical features. The inclined dashed lines on the figure correspond to a delay of 0.14 yr. The smoother profile at lower frequencies and the time delay between the optical and radio bands give strong support to the synchrotron self-absorption process (Melnikov 1998). We propose that the radio-optical structures are self-absorbed synchrotron sources, and the absence of radio features not correlated with optical features may be due to a more compact synchrotron source.
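The 0.14 yr delay can be estimated by maximizing the discrete cross-correlation between the optical and radio light curves. The sketch below illustrates the procedure on synthetic stand-in curves (a train of optical flares and a smoothed, delayed radio copy); none of the numbers are the measured data.

```python
import numpy as np

t = np.arange(0.0, 10.0, 0.02)                              # years
optical = np.exp(-0.5 * ((t % 1.1) - 0.4) ** 2 / 0.05**2)   # synthetic flare train
radio = np.interp(t - 0.14, t, optical)                     # delayed copy...
radio = np.convolve(radio, np.ones(15) / 15, mode="same")   # ...then smoothed

lags = np.arange(-0.5, 0.52, 0.02)
cc = [np.corrcoef(optical, np.interp(t + l, t, radio))[0, 1] for l in lags]
print(f"recovered delay: {lags[np.argmax(cc)]:.2f} yr")     # -> 0.14 yr
```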
### 4.3. VLBI Light Curves
Provided that the radio and optical emission are synchrotron, it remains to be seen whether this emission originates in the core, the jet or the accretion disk. The dense time coverage offered by the geodetic VLBI observations, particularly for components C5 and C6, enabled us to study the evolution of the flux of the VLBI components (core and moving components) with time, as shown in Figure 4 along with the radio light curves. It is clear that the time variation of the flux of the core follows closely the shape of the radio light curves, while the flux of the moving components remains nearly constant with time. With less data, similar results can be seen for C4 in 1993 December. The same behavior can also be seen in the geodetic VLBI data studied by Vicente et al. (1996), as shown in Figure 5. Another aspect of the flux of the moving components is that it does not depend on the strength of the radio outbursts. For instance, a radio outburst peaking at 7 Jy (K3 of Vicente et al. 1996) has an associated moving component with a flux similar to those associated with outbursts peaking at 2 Jy (e.g., C6 of the present work). These results seem to indicate that the VLBI components contribute to the profile of the radio light curve only when they are just emerging from the core at the time of a radio outburst, and are still merged with the core at the resolution of these VLBI observations.
### 4.4. Binary Systems
We may speculate that the flares are related to an increase in the accretion rate induced by the passage of the companion mass (Sillanpää et al. 1988). We assume that the synchrotron emission is optically thin at optical bands and self-absorbed at radio bands. The absence of radio outbursts coincident with the first peak of the double-peaked optical outbursts could be an indication of a more compact synchrotron source. We also assume that the major flux variations come from the core. Keeping in mind a synchrotron emission mechanism for the optical and radio outbursts, and knowing that radio outbursts recur at irregular intervals of about a year while major optical flares appear every 12 years, we may conceive a binary model where the roughly annual outbursts of the core-jet system of the primary are enhanced every 12 years by the passage of the secondary, which increases the flux of the outbursts.
A narrow jet placed on the disk axis of either the coplanar binary model of Sillanpää et al. (1988) or the steeply inclined orbit model of Lehto & Valtonen (1996) would provide an appropriate scenario to explain the observed evolution of the VLBI components as well as the radio and optical outbursts. Lehto & Valtonen (1996) have already suggested a narrow cone on the disk axis to explain a sudden fading “eclipse” caused by the secondary passing over the line of sight. However, binary systems based on the sweeping beam models of Villata et al. (1998) and Katz (1997) would be discarded because they would not account for the observed evolution of the VLBI components and the number of radio outbursts. While a binary system through its accretion disk increases the accretion rate at evenly spaced intervals, the non-periodic nature of nonthermal outbursts would spread the exact timing of the observable effects of this increased accretion rate. This is indeed in accordance with the small variation observed around the period of 12 years. It would also be possible, depending on the duration of the increased accretion rate and the timing of the irregular ejection rate, to observe a flare with a triple structure as reported by Sundelius et al. (1997).
## 5. Conclusions
We have presented 27 geodetic VLBI maps obtained from the Washington VLBI Correlator Facility at the U.S. Naval Observatory. These maps showed a sequence of 6 VLBI components associated with radio outbursts. The proper motion of the components was found to be around 0.5 mas yr<sup>-1</sup>, which is almost twice as high as that seen in previous VLBI observations of this source. Such a higher proper motion, along with the larger number of VLBI components, implied a higher component ejection rate for OJ 287, a result supported by the close relationship between the radio outbursts and the appearance of VLBI components.
We have assumed that the radio-optical outbursts are synchrotron emission. The increase in the accretion rate due to the pericenter passage of the companion mass would not directly produce a VLBI component or radio outburst, but would rather provide a means to energize the system and increase the flux of the synchrotron emission. The irregular generation of radio outbursts/VLBI components (about once a year) is intrinsic to the engine and operates continuously. Every 12 years the system is more apt to produce a higher flux; however, the ejection rate of VLBI components, and hence the number of radio outbursts, is not affected by this process.
C.E.T. thanks FAPESP - Fundação de Amparo a Pesquisa do Estado de São Paulo - for the grant (proc. n. 96/6267-1) that supported 3 months of work with geodetic VLBI data at the USNO (U.S. Naval Observatory). We also thank Aimo Sillanpää for supplying the optical data, and the referee for his suggestions and comments which helped to improve the paper. The University of Michigan Radio Astronomy Observatory is supported by the National Science Foundation and by funds from the University of Michigan. The Fortaleza VLBI facility was built and is operated with partial support from the U.S. NASA, USNO and NOAA, the Brazilian Ministry of Science and Technology, MCT-FINEP, Mackenzie, INPE and CRAAE (a joint center between Mackenzie, INPE, USP and UNICAMP, Brazil).
|
no-problem/9903/cond-mat9903036.html
|
ar5iv
|
text
|
# Dynamic States of a Continuum Traffic Equation with On-Ramp
## I INTRODUCTION
Traffic flow, a many body system of strongly interacting vehicles, shows various complex behaviors. Numerous empirical data of the highway traffic have been obtained, which demonstrate the existence of distinct dynamic states and dynamic phase transitions between them. Recent studies reveal physical phenomena such as hysteresis, self-organized criticality, and phase transitions in the traffic flow .
The transition from the homogeneous free flow to the jammed state has been studied by microscopic and macroscopic models without any inhomogeneity in the system . The traffic jam, one of the dynamic phases of the traffic flow, appears spontaneously when the vehicle density is between the two critical values $`\rho _{\mathrm{c1}}`$ and $`\rho _{\mathrm{c2}}`$ $`(>\rho _{\mathrm{c1}})`$. The traffic jam, however, can appear even below $`\rho _{\mathrm{c1}}`$. The traffic jam can be triggered by localized perturbations provided that the density is larger than a different critical value $`\rho _\mathrm{b}`$ ($`<\rho _{\mathrm{c1}}`$). As a result, in the density range between $`\rho _\mathrm{b}`$ and $`\rho _{\mathrm{c1}}`$, both the free flow and the traffic jam can exist, resulting in metastability and hysteresis. It is observed that some features of the traffic jam are uniquely determined by underlying dynamics, and independent of initial conditions of the traffic flow that lead to the jam. The presence of such characteristic features is also reproduced by analytic and numerical studies of traffic flow models.
The synchronized traffic flow, another dynamic phase of the traffic flow, is identified in recent measurements on highways . The synchronized traffic flow resembles the traffic jam in the sense that both states produce inhomogeneous density and flow profiles. The dynamics of the synchronized flow is however much more complicated than that of the traffic jam. One notable property of the synchronized traffic flow is the high level of its average flow, which almost matches the flow of the free flow state. The synchronized traffic flow is observed, on nearly all occasions, localized near ramps, and it is thus believed that ramps are important for the stability of the synchronized traffic flow. The discontinuous transition from the free flow to the synchronized flow can be induced by localized perturbations of finite amplitudes. Measurements show a hysteresis effect in the phase transitions between the free flow and the synchronized flow: the transition from the synchronized flow to the free flow occurs at a lower on-ramp flux, or lower upstream flux, than that for the reverse transition. In Ref. , the recurring hump (RH) state is proposed as an origin of the (nonstationary type) synchronized traffic flow , and the dynamic phase transitions between the RH state and the free flow are investigated using continuum traffic equations that take into account the effect of ramps. In the RH state, the vehicle density and the velocity show temporal oscillations which are localized near on-ramps. That the synchronized flow is maintained for several hours can be explained by one important property of the RH state, its being a limit cycle of the traffic equations. The RH state can be characterized as a self-excited oscillator, where constant vehicle flux from an on-ramp serves as a source of the repeated excitation and each excitation is subsequently relaxed within a localized region. The traffic equations also describe the hysteresis phenomena between the RH state and the free flow.
The traffic jam and the synchronized traffic flow are distinct phases of traffic flow. However, the conditions distinguishing the appearance of the jam state from that of the synchronized flow have not been clearly identified yet, either in measurements or in model studies. Analysis of highway measurements reports that almost identical initial states of the traffic flow can evolve to both the traffic jam and the synchronized flow .
To describe the hysteretic phase transitions between the free flow and the synchronized flow, a different macroscopic model based on a gas-kinetic approach is also proposed . In this model, a peak of the inflow from an on-ramp produces a congested but homogeneous region near the on-ramp, which spreads in the upstream direction. This homogeneous congested traffic (HCT) state is proposed as an explanation for the (stationary type) synchronized traffic flow . The subsequent study investigated the phase diagram of the model and identified additional dynamic phases such as the standing localized cluster (SLC), the triggered stop and go (TSG), and the oscillating congested traffic (OCT) states. Analytical conditions for the existence of these phases are provided and it is suggested that the phase diagram is universal for a class of traffic models. The study is however restricted to the traffic states generated from a particular initial condition and thus important issues such as multistability and hysteresis are not addressed.
In this paper, we investigate the phase diagram of the traffic flow in the presence of an on-ramp using a different continuum model , which tests the idea of the universal phase diagram. Various traffic states in Ref. are reproduced. However, the phase diagram is found to be qualitatively different. For instance, some traffic states, which represent distinct phases in Ref. , make smooth crossovers to other traffic states without any sharp phase boundaries in between, implying that they are different limiting behaviors of a single dynamic phase. The investigation is also performed for a large variety of initial conditions using two effective search methods. The conditions for the stable existence of the free flow, the RH state, and the traffic jam are examined. In some parameter ranges, it is found that multiple dynamic phases can remain stable with respect to sufficiently small perturbations. In such parameter ranges, finite perturbations may induce transitions between those phases, resulting in metastability. Due to the presence of the on-ramp, the evolution process of the jam shows several different patterns and the phase boundaries for the formation of the jam are significantly modified.
The paper is organized as follows. In the next section, we investigate the possible traffic phases for given values of the upstream flux and the input flux through the on-ramp. Various new features are discovered, which are absent in homogeneous highways. We examine the conditions for the stabilities of the traffic jam and the RH state. The metastability among the free flow, the RH state, and the traffic jam is investigated, and the travel time distributions of the three states are compared. We also discuss the several different evolution processes of the jam due to the presence of the on-ramp. Based on the phase diagrams, we find that the on-ramp flux becomes a more important factor for the formation of the traffic jam than the total flux, the sum of the on-ramp flux and the upstream flux. In Sec. III, we demonstrate analytically that our macroscopic model possesses nontrivial solutions which are indeed found in numerical simulations. Finally, Sec. IV summarizes our results.
## II Phase Diagrams of Traffic Equations with an On-Ramp
In this work, we adopt the continuum model of the highway traffic flow proposed by Kerner and Konhäuser ,
$`{\displaystyle \frac{\partial \rho }{\partial t}}+{\displaystyle \frac{\partial (\rho v)}{\partial x}}`$ $`=`$ $`q_{\mathrm{in}}(t)\phi (x),`$ (1)
$`\rho \left({\displaystyle \frac{\partial v}{\partial t}}+v{\displaystyle \frac{\partial v}{\partial x}}\right)`$ $`=`$ $`{\displaystyle \frac{\rho }{\tau }}[V(\rho )-v]-c_0^2{\displaystyle \frac{\partial \rho }{\partial x}}+\mu {\displaystyle \frac{\partial ^2v}{\partial x^2}},`$ (2)
where $`\rho (x,t)`$ is the local vehicle density and $`v(x,t)`$ the local velocity. $`q_{\mathrm{in}}(t)\phi (x)`$ is the source term representing the external flux through an on-ramp. The spatial distribution of the external flux $`\phi (x)`$ is localized near $`x=0`$ (on-ramp position) and normalized so that $`q_{\mathrm{in}}(t)`$ denotes the total incoming flux. $`V(\rho )`$ is the safe velocity that is achieved in the time-independent and homogeneous traffic flow. In Eq. (2), the second term on the right hand side represents an effective “pressure” gradient on vehicles due to the anticipation driving and the velocity fluctuations , and the third term takes into account an intrinsic damping effect that is required to fit the experimental data . Here $`\tau ,c_0,\mu `$ are appropriate constants. The flux or flow, $`\rho v`$, is denoted below by either $`q`$ or $`f`$.
In order to investigate the effects of a single on-ramp, we use the open boundary condition. The upstream boundary values of the density and velocity are fixed at $`\rho (x=-L/2,t)=\rho _{\mathrm{up}}`$ and $`v(x=-L/2,t)=V(\rho _{\mathrm{up}})`$, respectively. On the other hand, the values at the downstream boundary ($`x=L/2`$) are linearly extrapolated from their values at neighboring points, $`x=L/2-\mathrm{\Delta }x`$ and $`L/2-2\mathrm{\Delta }x`$, where $`\mathrm{\Delta }x`$ is the spacing used in the discretization. The numerical simulations are performed using the two-step Lax-Wendroff scheme. We choose the following parameters: $`\tau =`$ 0.5 min, $`\mu =`$ 600 vehicles km/h, $`c_0=54`$ km/h, and $`V(\rho )=V_0(1-\rho /\widehat{\rho })/(1+E(\rho /\widehat{\rho })^4)`$ where the maximum density $`\widehat{\rho }=`$ 140 vehicles/km, $`V_0=120`$ km/h, and $`E=100`$ . Concerning the discretization, spatial intervals of $`\mathrm{\Delta }x=37.8`$ m and time intervals of $`\mathrm{\Delta }t=10^{-4}`$ min are used. We choose the spatial distribution of the external flux as $`\phi (x)=(2\pi \sigma ^2)^{-1/2}\mathrm{exp}(-x^2/2\sigma ^2)`$ with $`\sigma =56.7`$ m. With this choice of parameters, the critical values are $`\rho _\mathrm{b}=21.1`$ vehicles/km, $`\rho _{\mathrm{c1}}`$=25.3 vehicles/km, $`\rho _{\mathrm{c2}}`$=62.3 vehicles/km, $`f_\mathrm{b}\equiv \rho _\mathrm{b}V(\rho _\mathrm{b})=2047`$ vehicles/h, $`f_{\mathrm{c1}}\equiv \rho _{\mathrm{c1}}V(\rho _{\mathrm{c1}})=2249`$ vehicles/h, and $`f_{\mathrm{c2}}\equiv \rho _{\mathrm{c2}}V(\rho _{\mathrm{c2}})=843`$ vehicles/h. The maximum flow that can be achieved in the time-independent homogeneous flow is $`f_{\mathrm{max}}=\text{max}_\rho \{\rho V(\rho )\}=2336`$ vehicles/h. Below we are interested mainly in the low density regime ($`<\rho _{\mathrm{c1}}`$), and thus we will use for brevity the subscript c in place of c1.
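The quoted maximum flow follows directly from the homogeneous flow relation $`f(\rho )=\rho V(\rho )`$. A short numerical check with the parameters above:

```python
import numpy as np

rho_hat, V0, E = 140.0, 120.0, 100.0        # vehicles/km, km/h
rho = np.linspace(0.1, 139.0, 100000)
f = rho * V0 * (1 - rho / rho_hat) / (1 + E * (rho / rho_hat) ** 4)
i = np.argmax(f)
print(f"f_max = {f[i]:.0f} vehicles/h at rho = {rho[i]:.1f} vehicles/km")
# -> f_max = 2336 vehicles/h, matching the value quoted above
```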
In real highway traffic, there are many kinds of noise which perturb the traffic out of its steady states. The real traffic state is hence subject to an endless sequence of perturbations and subsequent responses of the dynamic states. Previous studies of highway traffic without ramps, however, showed that many observed features of the traffic flow can be explained from the steady state properties of the continuum model without any noise. Motivated by these successes, we will ignore noise in this paper.
In the absence of noise, each dynamic phase of the traffic flow corresponds to a steady state, or equivalently an attractor of the nonlinear hydrodynamic model \[Eqs. (1,2)\]. A steady state may exhibit complicated time dependences depending on the nature of the corresponding attractor. We examine in this Section the linearly stable steady states (or phases of traffic flow) for a given upstream flux $`f(-L/2)=f_{\mathrm{up}}\equiv \rho _{\mathrm{up}}V(\rho _{\mathrm{up}})`$ and vehicle flux $`q_{\mathrm{in}}(t)=f_{\mathrm{rmp}}`$ through the on-ramp at $`x=0`$. Linearly stable states are, however, often unstable to large perturbations, and multistability can occur. Here it is worth emphasizing that the concept of multistability in dynamic systems is somewhat different from that in equilibrium systems. In equilibrium systems the free energy selects one particular state as a “true” stable state and other states become metastable. In dynamic systems, on the other hand, the free energy cannot be defined and the concept of the “true” stable state is not applicable. In this sense, all states are metastable and they all should be treated equally.
The possible presence of multiple steady states makes it very difficult to search completely for all phases that are stable for given parameters, since this requires the examination of many different initial conditions. To search for all the multiple steady states, we use two methods: one is to apply a triggering pulse to a steady state, for example, by changing the value of $`f_{\mathrm{rmp}}`$ for a short time. For a sufficiently strong pulse, a transition to a different steady state can be induced, allowing the identification of a new steady state. The other is the adiabatic sweeping method. Starting with a given steady state for a particular set of the system parameters, one increases or decreases one parameter adiabatically. This way, one can find the range of parameter values where a dynamic state remains stable. These two methods effectively simulate a large variety of initial conditions. Using these, we investigate the steady states for given system parameters $`f_{\mathrm{up}}`$ and $`f_{\mathrm{rmp}}`$. In particular, we concentrate on three representative values of $`f_{\mathrm{up}}`$, and for each of them construct a phase diagram for the entire range of $`f_{\mathrm{rmp}}`$. However, since too large a value of $`f_{\mathrm{rmp}}`$ is unrealistic, we restrict our attention to the range $`f_{\mathrm{rmp}}\le f_{\mathrm{rmp}}^{\mathrm{max}}\equiv f_{\mathrm{max}}-f_{\mathrm{up}}`$.
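Schematically, the two search protocols can be organized as the driver below, where `evolve(state, f_up, f_rmp, T)` stands in for a Lax-Wendroff integration of Eqs. (1,2) over a time T; the routine and the hold times are assumptions for illustration, not the actual code used.

```python
def triggering_pulse(state, evolve, f_up, f_rmp, df, t_pulse=2.0, t_relax=600.0):
    """Kick f_rmp by df for t_pulse (min), then relax back to a steady state."""
    state = evolve(state, f_up, f_rmp + df, t_pulse)
    return evolve(state, f_up, f_rmp, t_relax)

def adiabatic_sweep(state, evolve, f_up, f_start, f_stop, df, t_hold=300.0):
    """Step f_rmp slowly, re-equilibrating at each value; yields the branch
    of steady states followed, so stability limits show up as sudden jumps."""
    f = f_start
    while (df > 0 and f <= f_stop) or (df < 0 and f >= f_stop):
        state = evolve(state, f_up, f, t_hold)
        yield f, state
        f += df
```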
The values of $`f_{\mathrm{up}}`$ studied in this work are chosen from the following considerations. In previous studies of homogeneous highways without ramps, it was found that the flux $`f_\mathrm{b}`$ provides an important boundary. Whereas the free flow is the only stable phase below $`f_\mathrm{b}`$, the traffic jam can be created above this value. In the presence of ramps, one can expect that the appearance of the traffic jam depends on whether the upstream flux $`f_{\mathrm{up}}`$ is larger or smaller than $`f_\mathrm{b}`$. (Below we show that this expectation is not true, due to the nontrivial effect of an on-ramp.) This property motivated us to choose one representative value of $`f_{\mathrm{up}}`$ in the range larger than $`f_\mathrm{b}`$ and another smaller. We also choose a very small value of $`f_{\mathrm{up}}`$ ($`\ll f_\mathrm{b}`$), which later reveals the importance of $`f_{\mathrm{rmp}}`$ for the formation of the congested traffic.
### A $`f_{\mathrm{up}}>f_\mathrm{b}`$
The phase diagram of the traffic states for $`f_{\mathrm{up}}=2119`$ vehicles/h appears in Fig. 1(a). Here $`f_{\mathrm{rmp}}^\mathrm{c}`$ is the critical input flux through the on-ramp above which the free flow in the downstream of the on-ramp becomes linearly unstable. The critical on-ramp flux $`f_{\mathrm{rmp}}^\mathrm{c}`$ is determined from
$$f_{\mathrm{rmp}}^\mathrm{c}=f_\mathrm{c}-f_{\mathrm{up}},$$
(3)
where $`f_\mathrm{c}=f_{\mathrm{c1}}`$. For $`0\le f_{\mathrm{rmp}}\le f_{\mathrm{rmp}}^\mathrm{c}`$, the flux, both in the upstream and downstream, is lower than $`f_\mathrm{c}`$ but higher than $`f_\mathrm{b}`$. Hence the traffic jam can be created from the free flow by triggering events, but it does not appear spontaneously. A finite amplitude perturbation of $`f_{\mathrm{rmp}}`$ generates a cluster, which grows into a traffic jam since the upstream flux is larger than $`f_\mathrm{b}`$. The traffic jam propagates upstream with its characteristic group velocity.
When $`f_{\mathrm{rmp}}>f_{\mathrm{rmp}}^\mathrm{c}`$, the flux of the free flow in the downstream is larger than the critical flux $`f_\mathrm{c}`$ and the free flow is linearly unstable with respect to long wavelength perturbations of infinitesimal amplitude. The growth of infinitesimal perturbations leads to spontaneously formed clusters and, as pointed out in Ref., complex sequences of traffic jams may appear in the downstream region. In a certain range of $`f_{\mathrm{rmp}}`$, we also observe that clusters form a periodic regular sequence. It turns out that this regular sequence is caused by the presence of the on-ramp; a detailed discussion will be given in the next subsection.
### B $`f_{\mathrm{up}}<f_\mathrm{b}`$
In Fig. 1(b), we present the phase diagram for $`f_{\mathrm{up}}=1948`$ vehicles/h. The free flow can exist until $`f_{\mathrm{rmp}}`$ reaches $`f_{\mathrm{rmp}}^\mathrm{c}`$, as in the previous subsection. When $`f_{\mathrm{rmp}}`$ is smaller than 92 vehicles/h, the free flow (with a transition layer) is the only stable phase.
For $`f_{\mathrm{rmp}}>92`$ vehicles/h, we find another time-independent state besides the free flow, which is shown in Fig. 2(a). In our simulation, this new state can be generated from the free flow by applying a triggering pulse in $`f_{\mathrm{rmp}}`$ for a short time. Far away from the on-ramp, the density and flow are homogeneous both in the upstream and downstream. Near the on-ramp, a localized cluster appears, which does not propagate in either direction but stays motionless. Due to this immobility, such a state is named the “standing localized cluster” (SLC) state in Ref. . The immobility of the SLC state is in contrast to the situation without ramps, where all inhomogeneities should propagate. Hence this property is due to a novel effect of the on-ramp. Another interesting property of the SLC state becomes manifest in the density-flow relations. Notice that the density-flow relations \[circles in Fig. 2(b)\] measured at several locations near the on-ramp do not necessarily fall on the homogeneous density-flow relation curve ($`\rho ,\rho V(\rho )`$) \[solid line in Fig. 2(b)\] even though the relation at each measurement location remains stationary with time. More remarkably, the circles lie in the linearly unstable density region.
Incidentally, experimental data which may be relevant to this have been reported . It was observed that when the traffic is in the stationary synchronized flow state, the density and flux can remain stationary during a relatively long time interval (2-5 min). Their stationary values often lie in the linearly unstable density region, and they form a two-dimensional area in the density-flow plane instead of falling on a single well-defined density-flow relation curve. In Ref., the stationary values are interpreted as an indication of spatially homogeneous traffic, and Helbing, Hennecke, and Treiber proposed the HCT state as an origin of the stationary synchronized flow. The HCT state provides an explanation for the stability of the traffic in the linearly unstable density region, but it leads to the formation of a well-defined density-flow relation curve, failing to explain the absence of such a curve in the measurement.
The present analysis of the SLC state raises an alternative possibility. The SLC state shows that being stationary does not necessarily imply homogeneity, and it also explains the stability in the linearly unstable density region. Furthermore, it can explain the absence of a well-defined density-flow relation curve. We mention that upon adiabatic variations of $`f_{\mathrm{up}}`$, $`f_{\mathrm{rmp}}`$ and the external flux profile $`\phi (x)`$, the density-flow relation at a single measurement location can cover a two-dimensional area in the density-flow plane. These agreements raise the interesting possibility of an alternative explanation of the stationary synchronized traffic flow based on the SLC state. We judge however that it is yet premature to draw a definite conclusion from these agreements alone. Further experimental investigation of the stationary synchronized traffic flow is necessary. In the next Section, we demonstrate analytically that the traffic equations (1,2) do have the SLC state solution.
As the on-ramp flux $`f_{\mathrm{rmp}}`$ increases adiabatically, one finds the phase transition from the SLC state to the recurring hump (RH) state \[Fig. 3(a)\]. In the RH state, a cluster, or a hump, does not remain stationary but moves back and forth in a localized region near the on-ramp. Its drift to far upstream is not allowed since the upstream vehicle density is lower than the boundary value $`\rho _\mathrm{b}`$. The RH state is investigated in detail in Ref. using the periodic boundary condition, and many interesting properties are found such as the discontinuous transition from the free flow to the RH state induced by localized perturbations of finite amplitudes, hysteresis, gradual spatial transitions from the RH state to the free flow, and synchronized oscillations. These properties are identical to those of the synchronized flow (nonstationary type) , and based on these common properties, the RH state is proposed as the origin of the synchronized flow.
In addition to the properties of the RH state discussed in Ref. , we investigate here the transition between the RH state and the SLC state. Our simulation shows that the transition from the SLC state to the RH state and the reverse transition occur at the same critical value of $`f_{\mathrm{rmp}}`$ without hysteresis. We also examine the oscillation amplitude of the RH state. The amplitude decreases to zero continuously as $`f_{\mathrm{rmp}}`$ approaches the critical value \[Fig. 3(b)\]. Below the critical value, the hump does not oscillate and it becomes a standing cluster. These properties suggest that these transitions are a result of the supercritical (or very weak subcritical) Hopf bifurcation of the SLC state to the RH state.
These transitions between the SLC state and the RH state are not observed in the previous study , where the adiabatic decrease of the ramp flux leads to the discontinuous transition of the RH state to the free flow instead (Fig. 3 in Ref. ). We attribute this difference to the different boundary condition adopted in this paper. Unlike the open boundary condition where $`f_{\mathrm{up}}`$ and $`f_{\mathrm{rmp}}`$ can be controlled independently, the periodic boundary condition used in is such that the increase (decrease) of $`f_{\mathrm{rmp}}`$ is always accompanied by the decrease (increase) of $`f_{\mathrm{up}}`$ since the average density of the total system is fixed. Therefore the “scanning” direction in Ref. is different from that in this paper.
We next discuss the traffic jam state. In homogeneous highways without ramps, the formation and propagation of the jam cannot occur when the flux is smaller than $`f_\mathrm{b}`$. In the present case with an on-ramp, the flux $`f_{\mathrm{up}}`$ in the upstream region is lower than $`f_\mathrm{b}`$ while the flux in the downstream region can be controlled by $`f_{\mathrm{rmp}}`$. Thus a usual jam that consists of a single localized cluster should decay after it reaches the upstream region. In this sense, a usual traffic jam is not a steady state. Our investigations show that a different type of traffic jam (Fig. 4) can occur even when $`f_{\mathrm{up}}<f_\mathrm{b}`$, due to the nontrivial effect of the on-ramp: clusters are self-generated near the on-ramp repeatedly, forming a “train” of clusters moving upstream. Although each constituting cluster decays during its upstream propagation, the train can still remain stable provided that the decay rate is smaller than the self-generation rate, which is controlled by the extent of the inhomogeneity, $`f_{\mathrm{rmp}}`$, rather than by the upstream or downstream flux. Thus the stability limits of this new traffic jam do not coincide with those of usual traffic jams \[Fig. 1(b)\]. Notice that this train structure is different from usual traffic jams in homogeneous highways. To indicate the structural difference, we will call this state the “oscillating congested traffic” (OCT) state.
We note that a traffic jam state very similar to the OCT state appears for $`f_{\mathrm{up}}>f_\mathrm{b}`$. Clusters are self-generated near the on-ramp repeatedly, forming a regular sequence of clusters. For $`f_{\mathrm{up}}>f_\mathrm{b}`$, however, the clusters do not decay during their upstream movement because of the high upstream density.
The structure of the OCT state can be compared to the RH state. In both states, clusters appear recurrently near the on-ramp. In the OCT state, however, the area of the congested region expands with time, while in the RH state, clusters are localized. This difference is due to the larger size of clusters in the former.
It is also worth mentioning that the structure of the OCT state shows an interesting crossover as $`f_{\mathrm{rmp}}`$ varies. For small values of $`f_{\mathrm{rmp}}`$ (close to the lower stability limit), the distance between the clusters is relatively large so that there exist homogeneous flow regions in between \[Fig. 4(a)\]. As $`f_{\mathrm{rmp}}`$ increases, the distance between the clusters shrinks and for sufficiently large values of $`f_{\mathrm{rmp}}`$, the homogeneous regions between them disappear \[Fig. 4(b)\], and the clusters are “closely packed” inside the congested region.
In Ref. , these structural differences are discovered using a different hydrodynamic model and the OCT states for small and large $`f_{\mathrm{rmp}}`$ are identified as two distinct phases. The former was called the “triggered stop and go” (TSG) flow and the latter OCT. In this paper, however, we find that these apparently different states transform smoothly to each other as $`f_{\mathrm{rmp}}`$ is varied, without any signature of singularities. Thus we group these two states as a single dynamic phase in this paper. This difference between this paper and Ref. may be due to the different models used, but presently we do not know the precise origin of the difference.
We emphasize that in a certain range of $`f_{\mathrm{rmp}}`$, three phases, the free flow, the RH state, and the traffic jam (OCT) can coexist. In this metastable region of $`f_{\mathrm{rmp}}`$, small differences in the initial traffic condition may result in quite different final states. We mention that in a recent measurement , very similar initial states of the free flow are observed to undergo different phase transitions either to the synchronized flow or to the traffic jam. Here we obtain the three phases from the traffic equations with a fixed parameter set.
The difference between the three phases, the free flow, the RH state, and the traffic jam (OCT), is manifest in the travel time distributions which are shown in Fig. 5. In order to calculate the exact travel time distributions, we determine the trajectory of a vehicle, which is initially located at $`x_{\mathrm{veh}}(t_0)`$, as follows:
$$x_{\mathrm{veh}}(t)=x_{\mathrm{veh}}(t_0)+\int _{t_0}^tdt^{\prime }v(x_{\mathrm{veh}}(t^{\prime }),t^{\prime }).$$
(4)
From the trajectory of each vehicle, we obtain the vehicle travel time through the region from $`x_0=-5`$ km to $`x_1=5`$ km. With the same $`f_{\mathrm{up}}=1948`$ vehicles/h and $`f_{\mathrm{rmp}}=222`$ vehicles/h, the travel time distributions of the three states show different behaviors. While the distribution for the free flow consists of a single peak, those for the RH state and the traffic jam (OCT) are broad due to the nonstationary nature of these phases. Also notice that, on average, the travel time for the traffic jam (OCT) is greater than those for the free flow and the RH state.
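As an illustration of this procedure, the following minimal Python sketch integrates the trajectory equation (4) through a velocity field; the field $`v(x,t)`$, the units, and the release times used here are synthetic placeholders rather than output of Eqs. (1,2).

```python
import numpy as np

# Synthetic velocity field v(x, t) [km/h] with a slow region near the
# on-ramp at x = 0; in an actual calculation this would come from
# solving Eqs. (1,2) on a space-time grid.
def v_field(x, t):
    return 80.0 - 50.0 * np.exp(-(x - 0.5 * np.sin(0.5 * t)) ** 2)

def travel_time(t0, x0=-5.0, x1=5.0, dt=1e-3):
    """Integrate dx/dt = v(x(t), t), Eq. (4), from x0 until x1 is passed.
    Times are in hours, positions in km."""
    x, t = x0, t0
    while x < x1:
        # midpoint (RK2) step of the trajectory equation
        k1 = v_field(x, t)
        k2 = v_field(x + 0.5 * dt * k1, t + 0.5 * dt)
        x, t = x + dt * k2, t + dt
    return t - t0

# Release vehicles at different entry times to sample the distribution
times = [travel_time(t0) for t0 in np.linspace(0.0, 20.0, 200)]
print(f"mean travel time: {np.mean(times) * 60.0:.1f} min")
```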
### C $`f_{\mathrm{up}}\ll f_\mathrm{b}`$
Fig. 1(c) shows the phase diagram for $`f_{\mathrm{up}}=1497`$ vehicles/h (about 25% lower than $`f_\mathrm{b}=2047`$ vehicles/h). The free flow remains linearly stable for $`f_{\mathrm{rmp}}<f_{\mathrm{rmp}}^\mathrm{c}`$. In a narrow range of $`f_{\mathrm{rmp}}`$, 480 vehicles/h $`\le f_{\mathrm{rmp}}<`$ 492 vehicles/h, the SLC state is found, and for $`f_{\mathrm{rmp}}\ge `$ 492 vehicles/h, the OCT state is found. For this low upstream flux, however, the RH state does not appear. We find that the critical value of $`f_{\mathrm{up}}`$ below which the RH state is absent is about 1872 vehicles/h.
It is interesting to notice that the upper stability limit of the SLC state and the lower stability limit of the OCT state coincide within our numerical accuracy. We verified that upon the adiabatic increase of $`f_{\mathrm{rmp}}`$, the SLC state undergoes the phase transition to the OCT state, and upon the adiabatic decrease of $`f_{\mathrm{rmp}}`$, the reverse phase transition occurs, both at $`f_{\mathrm{rmp}}=492`$ vehicles/h. This coincidence raises an interesting possibility of a close relation between the two phases. This possibility is also supported by the expansion rate of the congested region, which seems to approach zero smoothly as $`f_{\mathrm{rmp}}`$ is reduced to the lower stability limit of the OCT state.
We next examine the evolution of the traffic jam state. Fig. 6 shows the evolution of the structure of the congested region as $`f_{\mathrm{rmp}}`$ is increased. For relatively small $`f_{\mathrm{rmp}}`$, the structure is the same as in Fig. 4. As $`f_{\mathrm{rmp}}`$ increases, however, a homogeneous flow region appears near the on-ramp, which expands with time \[Fig. 6(a)\]. Hence the congested region is partitioned into an inhomogeneous part and a homogeneous one. For an even higher value of $`f_{\mathrm{rmp}}`$ (=730 vehicles/h), the inhomogeneous part shrinks in length with time and after this transient process, the whole congested region consists of a homogeneous part \[Fig. 6(b)\]. In Ref., this state of traffic flow is named the “homogeneous congested traffic” (HCT) state and is identified as a distinct phase.
Unlike Ref. , however, it is not so clear in our simulations whether the OCT and HCT states are distinct phases. The distinction between the OCT and the HCT state is obscured further in our simulations by the presence of an intermediate traffic state where both the OCT-like inhomogeneous part and the HCT-like homogeneous part expand with time. We call this intermediate state the “mixed congested traffic” (MCT) state \[Fig. 6(a)\]. As $`f_{\mathrm{rmp}}`$ increases, the change from the OCT state to the MCT state and then to the HCT state seems to occur in a smooth way. We thus infer that the OCT, MCT, and HCT are different forms of a single jam phase.
We now focus on the HCT state. Notice that even though $`f_{\mathrm{up}}<f_\mathrm{b}`$, the area of the congested traffic increases monotonically, the group velocity of the upstream front being about $`-7.68`$ km/h, which is considerably smaller in magnitude than the usual jam propagation velocity of about $`-15`$ km/h . This monotonic widening of the congested region is caused by the “blockage” effect of the ramp, which can be understood easily by recalling that in real highways, a large flux from the on-ramp can almost block the flow of vehicles on main highways. We also mention that the structure of the HCT state is identical to the traffic jam (shock) caused by a blockage , which implies that for large $`f_{\mathrm{rmp}}`$, the ramp works as a bottleneck.
Interestingly, the density in the congested region lies in the linearly unstable region of the homogeneous flow, $`\rho _{\mathrm{c1}}(=\rho _\mathrm{c})<\rho <\rho _{\mathrm{c2}}`$. Hence according to Refs. , long wavelength fluctuations of even infinitesimal amplitude should grow in this region. In our simulations, we find that small inhomogeneities in the initial state indeed grow to form clusters. These clusters however disappear when they reach the upstream boundary of the congested region, and the congested region becomes homogeneous afterwards. This result implies that inside the linearly unstable region, there exists a range of density where the homogeneous flow is convectively stable (that is, the instability drifts away in one particular direction leaving the regions behind unaffected). The same observation is made in Ref. , where the HCT state is related to the stationary synchronized flow. As mentioned in the preceding subsection, however, the density-flow relation in the congested region of the HCT state lies on the single curve $`(\rho ,\rho V(\rho ))`$, which differs from the experimental observations of scattered data . We mention that the HCT state may be very sensitive to the presence of noise since it is only convectively stable.
It is instructive to compare the stability ranges in Fig. 1(c) with those in Fig. 1(b). The stability range of the SLC state lies entirely below $`f_{\mathrm{rmp}}^\mathrm{b}`$ in Fig. 1(c) while it is mostly above $`f_{\mathrm{rmp}}^\mathrm{b}`$ in Fig. 1(b). Also the stability range of the OCT reaches below $`f_{\mathrm{rmp}}^\mathrm{b}`$ in Fig. 1(c) while it lies entirely above $`f_{\mathrm{rmp}}^\mathrm{b}`$ in Fig. 1(b). This suggests that when the sum $`f_{\mathrm{up}}+f_{\mathrm{rmp}}`$ is the same, the formation of the cluster is easier for larger $`f_{\mathrm{rmp}}`$ and thus the actual total flux level in the downstream is lower. Physically this tendency can be understood as resulting from the larger density gradient near the on-ramp when the relative portion of $`f_{\mathrm{rmp}}`$ is larger. This trend is indeed observed in highway measurements and is called the “capacity reduction”.
## III Analytic solutions of the SLC and the HCT states
In Sec. II, we showed that various forms of traffic flow occur near an on-ramp. The free flow and the usual traffic jam are affected by the on-ramp only in minor ways and their properties are essentially the same as those without an on-ramp, which have already been investigated intensively . For other phases, however, the presence of the on-ramp is crucial and the understanding of their properties is relatively poor. In this Section, we present analytic studies of two forms of traffic flow, the SLC state and the HCT state.
### A Standing localized cluster (SLC) state
The analytic examination of the SLC state is relatively simple since all time dependence disappears. This analysis also provides a good starting point for future analysis of the RH state and the OCT state since they are closely related to the SLC state as discussed in the preceding Section. Hence we present below the analysis of the SLC state in detail.
In a homogeneous highway, inhomogeneities in the density or the velocity always propagate. In the presence of an on-ramp, on the other hand, the numerical investigation in the previous Section shows that inhomogeneities may form a standing cluster without propagation. Here we demonstrate analytically that the model \[Eqs. (1,2)\] indeed allows standing cluster solutions in the presence of an on-ramp.
To obtain the SLC solution, one imposes
$$\frac{\partial \rho }{\partial t}=\frac{\partial v}{\partial t}=0.$$
(5)
By integrating Eq. (1) with respect to $`x`$, one obtains
$$\rho (x)v(x)=f_{\mathrm{rmp}}\int _{-\mathrm{\infty }}^x\phi (x^{\prime })dx^{\prime }+f_{\mathrm{up}}\equiv q(x).$$
(6)
Since the function $`q(x)`$ is completely determined for given $`f_{\mathrm{rmp}}`$ and $`f_{\mathrm{up}}`$, one can use this equation to express $`\rho (x)`$ in terms of $`v(x)`$. Using this, one can rewrite Eq. (2) as follows:
$$\mu \frac{d^2v}{dx^2}=q(x)\left(1-\frac{c_0^2}{v^2}\right)\frac{dv}{dx}-\frac{q(x)}{\tau v}\left[V\left(\frac{q(x)}{v}\right)-v\right]+\frac{c_0^2}{v}q^{\prime }(x),$$
(7)
where $`\partial /\partial x`$ has been replaced by $`d/dx`$ since all time dependence disappears.
For further analysis, it is convenient to assume a particular form of the influx profile $`\phi (x)`$. We take the localized influx limit and choose $`\phi (x)=\delta (x)`$. Then $`q(x)`$ becomes $`f_{\mathrm{up}}`$ for $`x<0`$, $`f_{\mathrm{up}}+f_{\mathrm{rmp}}`$ for $`x>0`$, and $`q^{\prime }(x)=f_{\mathrm{rmp}}\delta (x)`$. Then Eq. (7) can be decomposed into two separate problems defined on two semi-infinite regions, $`x<0`$ and $`x>0`$, with the matching conditions at $`x=0`$ ,
$`v(x)|_{x=0^+}`$ $`=`$ $`v(x)|_{x=0^-},`$ (8)
$`{\displaystyle \frac{dv}{dx}}|_{x=0^+}`$ $`=`$ $`{\displaystyle \frac{dv}{dx}}|_{x=0^-}+{\displaystyle \frac{c_0^2}{\mu v(0)}}f_{\mathrm{rmp}}.`$ (9)
For each semi-infinite region, it is instructive to rewrite Eq. (7) as follows,
$`\mu {\displaystyle \frac{dw}{dx}}`$ $`=`$ $`q_\mathrm{s}\left(1-{\displaystyle \frac{c_0^2}{v^2}}\right)w-{\displaystyle \frac{q_\mathrm{s}}{\tau v}}\left[V\left({\displaystyle \frac{q_\mathrm{s}}{v}}\right)-v\right],`$ (10)
$`{\displaystyle \frac{dv}{dx}}`$ $`=`$ $`w,`$ (11)
where s=p(ositive) for $`x>0`$ and s=n(egative) for $`x<0`$, and $`q_\mathrm{p}=f_{\mathrm{up}}+f_{\mathrm{rmp}}`$, $`q_\mathrm{n}=f_{\mathrm{up}}`$. Notice that after the variable transformations $`v\to y,x\to t,\mu \to m`$, Eq. (11) can be regarded as the equation of motion of a particle subject to a potential $`U_\mathrm{s}(y)`$ where $`dU_\mathrm{s}/dy=(q_\mathrm{s}/\tau y)[V(q_\mathrm{s}/y)-y]`$ and to the “strange” coordinate-dependent damping force.
For the safe velocity $`V(\rho )`$ adopted in this paper, Eq. (11) is highly nonlinear and it does not seem feasible to write down solutions in a closed form. However, qualitative properties of the solutions can still be investigated by taking Eq. (11) as a set of flow equations defined on the phase space $`(v,w)`$.
Since the global structure of the flow is largely determined from properties of fixed points, we first find fixed points of Eq. (11). Simple algebra shows that there are three fixed points, $`(v,w)=(0,0),(v_{\mathrm{s1}},0),(v_{\mathrm{s2}},0)`$. The first one is unphysical since $`v=0`$ implies $`\rho \to \mathrm{\infty }`$. This unphysical fixed point appears since we set $`V(\rho )=0`$ for $`\rho >\widehat{\rho }`$, and we ignore this below. The other two come from the two solutions $`v_{\mathrm{s1}}`$, $`v_{\mathrm{s2}}`$ $`(<v_{\mathrm{s1}})`$ of $`V(q_\mathrm{s}/v)=v`$. (It can be easily verified that for $`q_\mathrm{s}<f_{\mathrm{max}}`$, there are always two solutions, the larger one corresponding to the maximum point of the potential $`U_\mathrm{s}(y)`$ and the smaller to the minimum point.)
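The two physical fixed points can be located numerically by bracketing the sign changes of $`V(q_\mathrm{s}/v)-v`$. The sketch below assumes a Kerner–Konhäuser-type form for the safe velocity with placeholder parameter values; it is meant only to illustrate the structure, not to reproduce the exact $`V(\rho )`$ of this paper.

```python
import numpy as np
from scipy.optimize import brentq

rho_hat, v_free = 140.0, 120.0   # assumed maximal density and velocity scale

def V(rho):
    """Assumed Kerner-Konhauser-type safe velocity (placeholder values)."""
    return v_free * (1.0 / (1.0 + np.exp((rho / rho_hat - 0.25) / 0.06))
                     - 3.72e-6)

def fixed_points(q_s):
    """The two roots v_s1 > v_s2 of V(q_s / v) = v for a given flux q_s."""
    g = lambda v: V(q_s / v) - v
    # sample g on a velocity grid and bracket its sign changes
    vs = np.linspace(1.0, v_free, 4000)
    roots = [brentq(g, a, b) for a, b in zip(vs[:-1], vs[1:])
             if g(a) * g(b) < 0.0]
    return max(roots), min(roots)

v_s1, v_s2 = fixed_points(q_s=1800.0)   # q_s in vehicles/h
print(f"v_s1 = {v_s1:.1f} km/h (potential maximum), "
      f"v_s2 = {v_s2:.1f} km/h (minimum)")
```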
We are interested in the solutions where
$$v(x)\to \{\begin{array}{cc}v_{\mathrm{n1}}\hfill & \text{for }x\to -\mathrm{\infty }\hfill \\ v_{\mathrm{p1}}\hfill & \text{for }x\to \mathrm{\infty }.\hfill \end{array}$$
(12)
Thus $`(v_{\mathrm{n1}},0)`$ \[point $`A`$ in Fig. 7(a,b)\] is the relevant fixed point for $`x<0`$. By linearizing Eq. (11), it can be verified that it is a saddle point. Then the flow \[path 1 in Fig. 7(a,b)\] that is associated with the unstable eigen-direction of the fixed point determines the entire flow in the semi-infinite region $`x<0`$. Similarly $`(v_{\mathrm{p1}},0)`$ \[point $`C`$ in Fig. 7(a,b)\] is the relevant fixed point for $`x>0`$, which is again a saddle point. The entire flow in the positive semi-infinite region is then determined by the flow \[path 2 in Fig. 7(a,b)\] that is associated with the stable eigen-direction of the fixed point.
In order to construct a legitimate solution from paths 1 and 2, one should join the two paths using the matching conditions \[Eq. (9)\]. It is convenient to regard the conditions \[Eq. (9)\] as a definition of a mapping in the phase space $`(v,w)`$, from a point $`(v,w)`$ to $`(v,w+c_0^2f_{\mathrm{rmp}}/\mu v)`$. The effect of the mapping is shown in Fig. 7(a), where the path 1 is mapped to the path 3. The path 3 crosses the path 2 at a point denoted as $`F`$ in the inset. Then one can construct a full solution by connecting the curve $`AE`$ with the curve $`FC`$. This solution exists for an arbitrarily small value of $`f_{\mathrm{rmp}}`$ and represents the free flow solution (with a transition layer at the on-ramp).
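The mapping itself is trivial to code; the following fragment states it explicitly (the parameter values are placeholders), with the surrounding shooting construction indicated only in the comments.

```python
def ramp_map(v, w, f_rmp, c0=54.0, mu=600.0):
    """Matching conditions, Eq. (9), viewed as a map in the phase space
    (v, w): the state reached at x = 0- is sent to the state at x = 0+.
    The values of c0 [km/h] and mu are placeholders."""
    return v, w + c0 ** 2 * f_rmp / (mu * v)

# Schematic shooting construction (not executed here):
#  1. integrate Eq. (11) with q_n = f_up from near the saddle (v_n1, 0)
#     along its unstable eigendirection to obtain path 1 (region x < 0);
#  2. apply ramp_map pointwise to path 1 to obtain path 3;
#  3. integrate Eq. (11) with q_p = f_up + f_rmp backwards from near the
#     saddle (v_p1, 0) along its stable eigendirection to obtain path 2;
#  4. each crossing of path 3 with path 2 yields one stationary solution
#     (the transition layer, or the stable/unstable SLC pair).
```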
Different solutions appear for sufficiently large $`f_{\mathrm{rmp}}`$. The mapping of the path 1 to the path 3 for a larger $`f_{\mathrm{rmp}}`$ is depicted in Fig. 7(b). Now the path 3 crosses the path 2 at three points $`F,H,I`$, which implies the presence of three solutions. The solution associated with the crossing point $`F`$ again corresponds to the transition layer solution. The two other solutions associated with the crossing points $`H`$ and $`I`$ correspond to the desired SLC solutions. Each solution provides the velocity profile for $`-\mathrm{\infty }<x<\mathrm{\infty }`$, from which the density profile can be obtained from $`\rho (x)=q_\mathrm{s}/v(x)`$. Fig. 7(c) compares the density profile associated with the crossing point $`H`$ with that from the direct numerical simulation of the traffic model \[Eqs. (1,2)\] with the Gaussian form of $`\phi (x)`$. Notice that the two profiles are essentially identical except for a small difference in the ramp region, which arises due to the approximation of the input flux profile by a delta function. The density profile associated with the crossing point $`I`$ can be obtained in a similar way. This solution, however, is not found in the direct numerical simulation, which suggests that this solution is linearly unstable.
This analysis implies that the SLC solutions appear only for $`f_{\mathrm{rmp}}`$ larger than a critical value. Precisely at the critical value, the two crossing points $`H`$ and $`I`$ coincide. For $`f_{\mathrm{rmp}}`$ larger than the critical value, numerical simulations indicate that only one solution (the one through the crossing point $`H`$) is stable and the other is not. Thus one finds that a turning point connecting the stable and unstable SLC solutions appears at the critical ramp flux.
The above considerations use a specific form of the influx profile. We believe that the precise profile does not change the qualitative nature of the above discussion. In fact, the value of the critical ramp flux for the SLC state obtained using the approximation $`\phi (x)=\delta (x)`$ is found to be identical, within numerical accuracy, to that obtained from the direct numerical simulation of Eqs. (1,2) (with $`\phi (x)`$ Gaussian).
### B Homogeneous congested traffic (HCT) state
When the on-ramp influx is added, we observe a new kind of traffic jam: The downstream front is fixed at the on-ramp and the upstream front moves with a fixed group velocity. Between the downstream and the upstream fronts, the congested region of the homogeneous flow is maintained \[Fig. 6(b)\]. We notice that the upstream front is a steady structure in a proper reference frame. Below we show analytically that Eqs. (1,2) possess the HCT state solution. Since the congested region is homogeneous, one can split the discussion into two parts, one for the moving upstream front and the other for the fixed downstream front. For the upstream front, the relevant equations of motion are
$`{\displaystyle \frac{\partial \rho }{\partial t}}+{\displaystyle \frac{\partial (\rho v)}{\partial x}}`$ $`=`$ $`0,`$ (13)
$`\rho \left({\displaystyle \frac{\partial v}{\partial t}}+v{\displaystyle \frac{\partial v}{\partial x}}\right)`$ $`=`$ $`{\displaystyle \frac{\rho }{\tau }}[V(\rho )-v]-c_0^2{\displaystyle \frac{\partial \rho }{\partial x}}+\mu {\displaystyle \frac{\partial ^2v}{\partial x^2}}.`$ (14)
Since this front is surrounded by wide regions of the homogeneous flow both in the upstream and the downstream, one can impose the following boundary conditions,
$`\rho (x=-\mathrm{\infty },t)`$ $`=`$ $`\rho _-=\rho _{\mathrm{up}},`$ (15)
$`v(x=-\mathrm{\infty },t)`$ $`=`$ $`v_-=V(\rho _-),`$ (16)
$`\rho (x=+\mathrm{\infty },t)`$ $`=`$ $`\rho _+,`$ (17)
$`v(x=+\mathrm{\infty },t)`$ $`=`$ $`v_+=V(\rho _+).`$ (18)
Here $`\rho _-`$ is the density of the far upstream region and $`\rho _+`$ that of the congested region. Also the spatial coordinate is chosen in such a way that $`x=+\mathrm{\infty }`$ corresponds to a location deep in the congested region instead of the far downstream in the original equations .
Let us assume that Eqs. (13) and (14) allow a steady state solution that satisfies the boundary conditions Eqs. (15), (16), (17), and (18). Here the steady state means that in a proper reference frame, all time-dependence disappears. We perform the change of the reference frame:
$`x^{\prime }`$ $`=`$ $`x-v_gt,`$ (19)
$`t^{\prime }`$ $`=`$ $`t,`$ (20)
and neglect all time-dependence in this new reference frame to find
$`{\displaystyle \frac{dq^{\prime }}{dx^{\prime }}}`$ $`=`$ $`0,`$ (21)
$`q^{\prime }{\displaystyle \frac{dv}{dx^{\prime }}}`$ $`=`$ $`\rho {\displaystyle \frac{V(\rho )-v}{\tau }}-c_0^2{\displaystyle \frac{d\rho }{dx^{\prime }}}+\mu {\displaystyle \frac{d^2v}{dx^{\prime 2}}},`$ (22)
where $`q^{\prime }\equiv \rho v-\rho v_g`$. We can determine the constants $`q^{\prime }`$ and $`v_g`$ from the boundary conditions Eqs. (15), (16), (17), and (18). From the condition that the in-flux to and the out-flux from the front should be the same in the primed reference frame, we obtain
$`v_g`$ $`=`$ $`{\displaystyle \frac{\rho _+v_+-\rho _-v_-}{\rho _+-\rho _-}},`$ (23)
$`q^{\prime }`$ $`=`$ $`{\displaystyle \frac{\rho _+\rho _-(v_--v_+)}{\rho _+-\rho _-}}.`$ (24)
Using Eq. (21), one has
$$\rho =\frac{q^{\prime }}{v-v_g}.$$
(25)
Plugging this expression into Eq. (22), one finds
$$\mu \frac{d^2v}{dx^{\prime 2}}+q^{\prime }\left[\frac{c_0^2}{(v-v_g)^2}-1\right]\frac{dv}{dx^{\prime }}+F(v;q^{\prime },v_g)=0,$$
(26)
where
$$F(v;q^{\prime },v_g)\equiv \frac{q^{\prime }}{\tau (v-v_g)}\left[V\left(\frac{q^{\prime }}{v-v_g}\right)-v\right].$$
(27)
Eq. (26) is again an equation of motion of a particle subject to a conservative force $`F`$ and the coordinate-dependent damping force. The boundary conditions Eqs. (15), (16) ensure that $`F(v_\pm ;q^{\prime },v_g)`$=0 automatically. Since $`v_->v_+`$, the root $`v_-`$ ($`v_+`$) corresponds to the potential maximum (minimum). Thus the nature of the stationary state is clear in the particle motion analogy. At $`x=-\mathrm{\infty }`$, the particle is at the unstable maximum point $`v=v_-`$. As “time” $`x`$ increases, the particle slides down the hill and after some time, it settles down at $`v=v_+`$ due to the friction, provided the damping coefficient in Eq. (26) remains positive. This particle motion describes the upstream front of the HCT state \[Fig. 6(b)\].
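This rolling-down construction is easy to reproduce numerically. In the sketch below, the safe velocity and all parameter values, including the asymptotic densities $`\rho _\pm `$, are placeholders assumed for illustration; the front is obtained by integrating Eq. (26) from slightly below the potential maximum $`v_-`$ and checking that the “particle” settles at $`v_+`$.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Placeholder model parameters (NOT the values used in the paper)
c0, tau, mu = 54.0, 1.0 / 120.0, 600.0      # km/h, h, viscosity
rho_hat, v_free = 140.0, 120.0

def V(rho):
    return v_free * (1.0 / (1.0 + np.exp((rho / rho_hat - 0.25) / 0.06))
                     - 3.72e-6)

# Asymptotic states: free upstream flow (-) and congested region (+)
rho_m, rho_p = 15.0, 43.0
v_m, v_p = V(rho_m), V(rho_p)

# Group velocity and flux constant of the front, Eqs. (23) and (24)
v_g = (rho_p * v_p - rho_m * v_m) / (rho_p - rho_m)
q = rho_p * rho_m * (v_m - v_p) / (rho_p - rho_m)

def rhs(x, y):
    """Eq. (26) written as a first-order system, y = (v, dv/dx')."""
    v, w = y
    damping = q * (c0 ** 2 / (v - v_g) ** 2 - 1.0)
    force = q / (tau * (v - v_g)) * (V(q / (v - v_g)) - v)
    return [w, -(damping * w + force) / mu]

# Start slightly below the potential maximum v_- and let the particle roll
sol = solve_ivp(rhs, (0.0, 40.0), [v_m - 0.1, 0.0], max_step=0.005)
print(f"v_g = {v_g:.2f} km/h;  v(end) = {sol.y[0, -1]:.2f}  vs  v_+ = {v_p:.2f}")
```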
Before we begin the next analysis of the downstream front of the HCT state \[Fig. 6(b)\], one remark is in order. In the direct numerical simulation of Eqs. (1,2), the vehicle velocity in the congested region of the HCT state is fixed for given $`f_{\mathrm{up}}`$ and $`f_{\mathrm{rmp}}`$. However, in the above analysis of the upstream front, where $`f_{\mathrm{up}}`$ is used to fix $`v_-=v_{\mathrm{up}}`$, the value of $`v_+`$ is still a free parameter. We show below that this degree of freedom should be used to allow the downstream front solution.
Analysis of the stationary downstream front is very similar to the analysis of the SLC state in the preceding subsection. Using the stationarity condition and integrating Eq. (1), one obtains Eqs. (5) and (6), respectively. Adopting the approximation $`\phi (x)=\delta (x)`$, one recovers the matching conditions at the ramp \[Eq. (9)\] and the set of flow equations \[Eq. (11)\]. The value of $`q_\mathrm{n}`$ should be $`\rho _+v_+`$ to allow a continuous connection of $`v(x)`$ to the upstream front solution, and $`q_\mathrm{p}=q_\mathrm{n}+f_{\mathrm{rmp}}`$. The asymptotic behavior of $`v(x)`$ far away from the on-ramp should be chosen differently as follows:
$$v(x)\to \{\begin{array}{cc}v_{\mathrm{n2}}=v_+\hfill & \text{for }x\to -\mathrm{\infty }\hfill \\ v_{\mathrm{p1}}\hfill & \text{for }x\to \mathrm{\infty },\hfill \end{array}$$
(28)
where $`v_{\mathrm{p1}}`$ and $`v_{\mathrm{n2}}`$ are defined in the same way as in the preceding subsection, and $`x\to -\mathrm{\infty }`$ corresponds to a location deep in the congested region.
Notice that for $`x\to -\mathrm{\infty }`$, $`v(x)`$ approaches $`v_{\mathrm{n2}}`$ instead of $`v_{\mathrm{n1}}`$ since $`v_+`$ corresponds to the smaller of the two solutions of $`V(q_\mathrm{n}/v)=v`$ \[or since the corresponding density $`\rho _+`$ lies on the descending slope of the homogeneous density-flow relation\]. This difference in the asymptotic behavior results in a qualitative change. It can be verified through the linearization of Eq. (11) that $`(v_{\mathrm{n2}},0)`$ is a stable fixed point. Then we find $`v(x)=v_{\mathrm{n2}}`$ for the entire semi-infinite region $`x<0`$, which should be contrasted to the preceding subsection. Near the fixed point $`(v_{\mathrm{p1}},0)`$, on the other hand, the situation is similar to the preceding subsection and the flow in the positive semi-infinite region becomes a continuous path, like the path 2 in Fig. 7(a,b).
To construct a full solution of the HCT downstream front, the separate solutions for $`x<0`$ and $`x>0`$ should be joined using the matching condition \[Eq. (9)\]. Since the velocity for $`x<0`$ is constant, the matching conditions reduce to
$`v(x)|_{x=0^+}`$ $`=`$ $`v_{\mathrm{n2}},`$ (29)
$`{\displaystyle \frac{dv}{dx}}|_{x=0^+}`$ $`=`$ $`{\displaystyle \frac{c_0^2}{\mu v(0)}}f_{\mathrm{rmp}}.`$ (30)
These matching conditions can be satisfied only when the phase space trajectory for $`x>0`$ passes the point $`(v_{\mathrm{n2}},c_0^2/(\mu v_{\mathrm{n2}})f_{\mathrm{rmp}})`$. For given $`v_{\mathrm{n2}}`$ and $`f_{\mathrm{rmp}}`$, the path in general does not pass this point except for a particular value of $`v_{\mathrm{n2}}=v_+`$. This tuning thus fixes the free parameter $`v_+`$ as mentioned before. In general, $`v_+`$ is a function of the influx profile $`\phi (x)`$ since the matching conditions depend on $`\phi (x)`$.
Fig. 8 shows the density profile (solid line) of the HCT state obtained from this matching method for $`\phi (x)=\delta (x)`$. It is essentially identical to the profile (dotted line) obtained from the numerical simulation of Eqs. (1,2), except for the larger density peak at the on-ramp caused by the approximation $`\phi (x)=\delta (x)`$. We confirm from the simulation result that the group velocity of the moving front is consistent with $`-7.68`$ km/h, which is given by Eq. (23), and that the damping coefficient $`q^{\prime }[c_0^2/(v-v_g)^2-1]`$ is positive in the congested region.
## IV CONCLUSION
The traffic equation with a source term representing the on-ramp influx of a highway displays a variety of novel traffic flow states not present in the homogeneous equations. To understand the role of the source term, we map out the phase diagram using the continuum traffic model \[Eqs. (1,2)\] proposed by Kerner and Konhäuser . In our numerical simulation, we use the open boundary condition, which allows one to handle the single on-ramp without using very large system sizes. Due to the possible presence of multiple metastable states, detailed simulations are carried out for a limited number of representative values of the upstream flux $`f_{\mathrm{up}}`$ and for the whole range of the on-ramp flux $`f_{\mathrm{rmp}}`$. Various traffic states are identified and characterized. The phase diagrams thus obtained are summarized in Fig. 1. It is found that an inhomogeneous but stationary traffic state (SLC) can appear near the on-ramp due to $`f_{\mathrm{rmp}}`$. This state is related to the recent measurement of the homogeneous synchronized flow. The on-ramp also generates a new kind of traffic jam, which can appear even below the stability limit of the usual traffic jam in homogeneous highways. The structure of the new traffic jam varies qualitatively with $`f_{\mathrm{rmp}}`$. The capacity reduction due to $`f_{\mathrm{rmp}}`$ is also observed. In a certain range of $`f_{\mathrm{rmp}}`$, the free flow, the RH state, and the new traffic jam can all coexist so that the free flow can undergo phase transitions either to the RH state or to the traffic jam state. Analytic investigations are also performed and two nontrivial solutions of Eqs. (1,2) are found. These solutions describe the SLC state and the HCT state.
## ACKNOWLEDGMENTS
H.Y.L. thanks Daewoo Foundation for financial support, and M. Schreckenberg for hospitality during her stay at Duisburg University. H.-W.L. is supported by the Korea Science and Engineering Foundation through a fellowship. This work is supported by the Korea Science and Engineering Foundation through the SRC program at SNU-CTP, and also by Korea Research Foundation (1998-015-D00055).
# Inflation
## Inflation
The most attractive models of inflation occur while the internal dimensions are still small, far away from their final stabilized value $`b_0`$ . In particular, we avoid all the problems which are met if inflation occurs after the new dimensions reach their current, large, size. Also note that post-stabilization inflation cannot explain the age of the universe, as wall-only inflation cannot begin before $`t\sim H^{-1}\sim M_{\mathrm{pl}}/M_*^2\gg M_*^{-1}`$, when the universe is already very large and old . Concretely, the basic story is:
* The quantum creation of the universe takes place with the initial size of all dimensions close to the fundamental Planck scale $`M_*^{-1}`$.
* A prolonged period of inflation in a direction parallel to our brane takes place. The approximately scale-invariant nature of the observed primordial perturbation spectrum implies that, during inflation, the internal dimensions must expand more slowly than the universe on the wall. Thus we are led to consider a form of asymmetric inflationary expansion of the higher-dimensional world ($`b(t)=b_I`$ is essentially static). Since the internal dimensions are small, the effective 4-dimensional Newton’s constant is large: $`G_{\mathrm{N},\mathrm{initial}}=1/(b_I^nM_*^{n+2})\sim 1/M_*^2`$.
* Thus the Hubble constant during this initial period of inflation can be large even though the energy density is quite small, $`V\sim \mathrm{TeV}^4`$,
$$H_{\mathrm{infl}}^2\simeq \frac{V}{6b_I^nM_*^{n+2}}\sim \frac{V}{M_*^2}.$$
(1)
Therefore inflation can be rapid, and moreover, the density perturbations can be large, being determined to be
$$\frac{\delta \rho }{\rho }=\frac{5}{12\pi \sqrt{2n(n-1)}}\frac{H_{\mathrm{infl}}}{M_*(M_*b_I)^{n/2}S},$$
(2)
where $`S`$ is a potential-dependent parameter that encapsulates both the duration of $`a(t)`$ inflation and the deviation of the perturbation spectrum from scale-invariant Harrison-Zeldovich. (We argue that $`S<1/50`$.)
* Specifically, the deviation of the spectral index $`n_\rho `$ of density perturbations $`\delta \rho /\rho `$ from scale invariance is given by:
$$n_\rho -1\simeq -\frac{n(n+2)}{2}\left(S^2+ST(b/b_I-1)^2\right),$$
(3)
where $`T`$ is another potential-dependent parameter. Thus to have sufficient scale invariance we need just mild tuning, $`S,T<1/50`$, or alternatively $`S<10^{-3}`$ and $`T\sim 1`$ (see the numerical sketch at the end of this list). Similarly, the number of efolds is given by
$$N_e\simeq \frac{1}{S+T}\left(\frac{2\sqrt{T}\mathrm{tan}^{-1}(1/\sqrt{S})}{\sqrt{S}}-\mathrm{log}(1+1/S)+2\mathrm{log}(1+1/\sqrt{T})\right)$$
(4)
which for $`N_e>100`$ is actually slightly more stringent than (3).
* By going to an effective 4d theory on the wall where $`b(t)`$ appears as a Brans-Dicke like field (in Einstein frame), it is possible to see that the conditions on $`S`$ and $`T`$ are nothing but the usual slow-roll conditions for a scalar-gravity theory.
* In the minimal approach, the inflaton field is just the modulus describing the size of the new dimensions (the radion field of ), the role of the inflationary potential being played by the stabilizing potential of this internal space. In the case of a wall-localized inflaton, this early inflation might even result from the electroweak phase transition, in which case the inflaton is the Higgs. Actually, an important remark in this regard is that when the internal dimensions are small, $`b_I\sim M_*^{-1}`$, the distinction between on-the-wall and off-the-wall physics is not meaningful: e.g., the inflationary features in $`V(b)`$ at small $`b`$ could be due to Higgs physics on the wall.
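The tilt and efold formulas quoted above, Eqs. (3) and (4), are easy to evaluate numerically. The following sketch uses the sign conventions as reconstructed in the text and the two quoted tunings; it is an illustration on our part, not code from the original analysis.

```python
import numpy as np

def tilt(S, T, b_over_bI=1.0, n=2):
    """n_rho - 1 from Eq. (3), with the overall sign as reconstructed."""
    return -0.5 * n * (n + 2) * (S ** 2 + S * T * (b_over_bI - 1.0) ** 2)

def efolds(S, T):
    """Number of efolds N_e from Eq. (4)."""
    return (2.0 * np.sqrt(T) * np.arctan(1.0 / np.sqrt(S)) / np.sqrt(S)
            - np.log(1.0 + 1.0 / S)
            + 2.0 * np.log(1.0 + 1.0 / np.sqrt(T))) / (S + T)

for S, T in ((1.0 / 50.0, 1.0 / 50.0), (1.0e-3, 1.0)):
    print(f"S = {S:g}, T = {T:g}:  n_rho - 1 = {tilt(S, T):.1e},  "
          f"N_e = {efolds(S, T):.0f}")
```

With these inputs, $`S=T=1/50`$ gives $`N_e\approx 78`$, consistent with the remark above that demanding $`N_e>100`$ is slightly more stringent than the tilt condition (3).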
## Evolution towards stabilization point (this section contains material in addition to that presented at COSMO98)
Under quite general conditions, the inflationary era is followed by an epoch where the scale factor of our brane-universe undergoes a slow contraction while the internal dimensions expand towards their final stabilized value. Even with the inclusion of a potential for $`b`$, it is possible to exactly solve for the evolution during this epoch, which generalizes the usual vacuum Kasner solutions. The history can be summarized as:
* Wall inflation ends, and simultaneously the radion starts to evolve to its minimum at $`b_0`$. Almost immediately our scale factor $`a(t)`$ begins to contract. Both $`a`$ and $`b`$ have determined (subluminal) power-law dependence on time.
* At generic values of $`b`$ away from the stabilization point the potential $`V(b)`$ can be approximated by $`V=Wb^{-p}`$ ($`V`$ is the effective 4d potential, $`[W]=4-p`$). An additional logarithmic dependence of $`V`$ on $`b`$ is quite possible and harmless. Then the asymptotic form of the exact solutions places an upper bound on the amount of contraction of the brane as a function of $`b`$ (see the numerical sketch at the end of this list):
$$\frac{a_f}{a_i}\geq \left(\frac{b_i}{b_f}\right)^\zeta $$
(5)
where the parameter $`\zeta `$ is given by ($`\mathrm{\Delta }=6n-4np-n^2p^2`$)
$$\zeta =\frac{n(n+p-2)}{2(2n+p)}\mathrm{for}\mathrm{\Delta }>0,\zeta =\frac{3n-\sqrt{3n(n+2)}}{6}\mathrm{for}\mathrm{\Delta }<0.$$
(6)
* Remarkably this epoch of contraction ends. There are two generic possibilities for how this can happen: The first involves the reheating of the wall. Contraction of $`a(t)`$ stops and reverses when $`\rho _{wall}`$ satisfies
$$\rho _{wall}=n(n-1)M_*^{n+2}b^nH_b^2\equiv \rho _b$$
(7)
Here $`\rho _b`$ is the radion (kinetic) energy. Interestingly there is a model-independent form of such reheating that results from the primordial $`\rho _{wall}`$ left over from the inflationary epoch. If there is sufficient $`a(t)`$ contraction, this blue-shifting de Sitter phase remnant $`\rho _{wall}`$ can become comparable to $`\rho _b`$ before $`b`$ reaches the stabilization point, $`b_0`$. It is also possible that a form of reheating takes place on the wall which is totally unconnected with the contraction of $`a(t)`$, but that again leads to $`\rho _{wall}\rho _b`$. Possibilities in this class include the decay of some metastable state on the wall, or the collision of some other brane with our brane. After the contraction of $`a(t)`$ reverses, the radion and wall-localized energy densities scale together until the stabilization point is reached. The second case is where $`b(t)`$ reaches the stabilization point before $`\rho _{wall}=\rho _b`$. By going to the Einstein frame it is possible to show that once stabilization of $`b`$ occurs, the period of $`a(t)`$ contraction automatically ceases.
* The total amount of contraction of $`a(t)`$ is bounded, and varies from at most $`7`$ efoldings in the case of $`n=2`$ to at most $`12`$ efoldings when $`n=6`$.
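The exponents of Eq. (6) and the resulting contraction bound, Eq. (5), can be evaluated directly. In the sketch below, the hierarchy $`M_{\mathrm{pl}}/M_*`$, the initial radius $`b_i\sim M_*^{-1}`$, the identification $`(M_{\mathrm{pl}}/M_*)^2=(b_0M_*)^n`$, and the use of the vacuum-Kasner branch of $`\zeta `$ are all assumptions made for illustration; with them one recovers the 7 and 12 efolding figures quoted above.

```python
import numpy as np

def zeta(n, p):
    """Contraction exponent of Eq. (6) for a potential V = W b**(-p)."""
    Delta = 6 * n - 4 * n * p - n ** 2 * p ** 2
    if Delta > 0:
        return n * (n + p - 2) / (2.0 * (2 * n + p))
    return (3 * n - np.sqrt(3.0 * n * (n + 2))) / 6.0

def max_contraction(n, Mpl_over_Mstar=1.0e16):
    """Bound of Eq. (5) on |ln(a_f/a_i)|, taking b_i ~ M_*^{-1} and b_f = b_0
    with (M_pl/M_*)**2 = (b_0 M_*)**n, and the vacuum-Kasner exponent (the
    second branch of Eq. (6)); these choices are our assumptions."""
    zeta_kasner = (3 * n - np.sqrt(3.0 * n * (n + 2))) / 6.0
    return zeta_kasner * (2.0 / n) * np.log(Mpl_over_Mstar)

for n in (2, 6):
    print(f"n = {n}: zeta(n, p=1) = {zeta(n, 1):.2f}, "
          f"max contraction ~ {max_contraction(n):.1f} efoldings")
```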
## Bulk graviton production, and moduli problem?
Naively one would worry that during the era of radion evolution the bulk becomes full of (dangerous) gravitons. There are actually two slightly different issues here: i) the energy density in Kaluza-Klein (KK) excitations of the graviton in the bulk, and, ii) the energy density in the (would-be) zero mode of the bulk graviton, namely the radion. The constraint on the energy density in KK excitations is actually more severe than that on the radion energy density, which just comes from overclosure. The reason for this is the diffuse gamma-ray background constraint arising from KK decays . Again an executive summary of the relevant arguments is:
* A time-dependent gravitational field can produce particles from the vacuum. Including this, the equation for the effective 4d KK energy density becomes
$$\dot{\rho }_{KK}+3H_a\rho _{KK}+H_b\rho _{KK}=H^5.$$
(8)
* From this, and the expressions for $`a(t)`$ and $`b(t)`$ during the contraction phase, one can show that the dominant contribution to bulk KK graviton production arises from early times. In fact $`\rho _{KK,\mathrm{final}}`$ differs from the final energy density of the blue-shifted wall-localized radiation at the end of the epoch of contraction only in that it is further suppressed by a factor of $`(a_f/a_i)(b_i/b_f)`$, which comes from the fact that the KK gravitons are red-shifted by the bulk expansion, and only concentrated and not blue-shifted by the wall contraction. Thus we get
$$\frac{\rho _{\mathrm{KK},\mathrm{f}}}{\rho _{\mathrm{wall},\mathrm{f}}}\sim \left(\frac{a_fb_i}{a_ib_f}\right)<\left(10^{1-30/n}\right)^{1+\zeta },$$
(9)
using the conservative estimates $`b_i\sim 10M_*^{-1}`$ and $`b_f=b_0`$. Evaluating this in, for example, the case of the simple Kasner contraction with exponents given in (6) leads to $`\rho _{KK}/\rho _{wall}`$ varying from $`3\times 10^{-17}`$ for $`n=2`$ to $`1\times 10^{-8}`$ for $`n=6`$ (see the numerical sketch at the end of this list).
* This shows that the effective temperature of the KK gravitons is well below the diffuse gamma-ray bound, even before any dilution necessary to solve the radion moduli problem. It also demonstrates that the vast majority of the energy in the bulk is in the motion of the zero mode radion $`\rho _b`$, rather than in the bulk KK modes. This is simply because $`\rho _b\sim \rho _{wall}`$ is the natural circumstance at the end of the epoch of contraction.
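Eq. (9) is evaluated in the following sketch. The hierarchy $`M_{\mathrm{pl}}/M_*\sim 10^{15}`$, the identification $`b_0M_*=10^{30/n}`$, and the vacuum-Kasner value of $`\zeta `$ are assumptions on our part, chosen because they reproduce the suppression factors quoted above.

```python
import numpy as np

def kk_to_wall(n, Mpl_over_Mstar=1.0e15, b_i_times_Mstar=10.0):
    """Bound of Eq. (9): (a_f b_i / a_i b_f) = (b_i/b_f)**(1 + zeta), using
    the maximal contraction a_f/a_i = (b_i/b_f)**zeta with the vacuum-Kasner
    exponent.  The hierarchy and b_i are assumptions for this illustration."""
    zeta = (3 * n - np.sqrt(3.0 * n * (n + 2))) / 6.0
    b_ratio = b_i_times_Mstar / Mpl_over_Mstar ** (2.0 / n)   # = b_i / b_f
    return b_ratio ** (1.0 + zeta)

for n in (2, 6):
    print(f"n = {n}: rho_KK/rho_wall < {kk_to_wall(n):.1e}")
```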
We now turn to the radion moduli problem:
* Finally around the stabilization point $`b_0`$ the radion field starts to oscillate freely. Since this energy density scales as $`1/a^3`$, and the wall-to-radion energy densities are initially comparable at the start of the oscillation era, the radion energy eventually dominates the total energy density.
* The most serious question that early universe cosmology presents in the world-as-a-brane scenario is how to dilute this energy in radion oscillations to an acceptable level. The radion is long-lived, its decay width back to light wall states being given by (the width is increased if there are many branes in the bulk: the “brane crystallization” mechanism of stabilization requires $`N_{\mathrm{wall}}\sim (M_{\mathrm{pl}}/M_*)^{2(n-2)/n}`$ branes in the bulk, and if each of these has $`O(1)`$ light modes then the total decay width to all branes is greatly enhanced)
$$\mathrm{\Gamma }_\phi \sim \frac{m_\phi ^3}{M_{\mathrm{pl}}^2}.$$
(10)
We thus require some dilution in the radion energy density, either by a short period of late inflation followed by reheating, or by a delayed reheating after $`\rho _b`$ has sufficiently red-shifted. The amount of dilution of the radion energy density that we require is relatively modest, given roughly by $`10^{-7}`$, so that only about 5 efolds of late inflation are needed.
## Conclusions
We have argued that early inflation when the internal dimensions are still small can successfully accomplish all that is required of inflation, including generation of suitable $`\delta \rho /\rho `$ without the unpleasant introduction of very light or fine-tuned wall fields. Remarkably the era of post-inflation brane-contraction is harmless, and automatically ends via a “Big Bounce”. During the phase of $`b(t)`$ evolution to the stabilization point, the production of bulk gravitons by the time-varying metric remains completely suppressed, ensuring that the bulk is very cold at, and after, the stabilization of the internal dimensions. The primary remaining issue is the radion moduli problem, which is no more severe than in gauge-mediated supersymmetry breaking models. Overall, then, early universe cosmology in these models is quite interesting!
# 𝑒⁺𝑒⁻ pair production from 𝛾A reactions
## I Introduction
In-medium properties of hadrons are of fundamental interest with respect to an understanding of QCD in the non-perturbative regime (cf. ). During the past decade especially the properties of vector mesons have found widespread attention as they may be related to chiral symmetry .
A comparison of experimental data on dilepton production in nucleus-nucleus collisions at SPS energies with transport theoretical calculations seems to indicate a lowering of the $`\rho `$-meson mass in the nuclear medium. However, since in a heavy ion collision the final dilepton yield is obtained by an integration over dileptons emitted at different densities and temperatures a discrimination between different scenarios of in-medium modifications for the vector mesons is difficult .
Moreover, there is a more fundamental concern: ultrarelativistic heavy ion reactions proceed, at least in their initial stages, far from equilibrium, whereas all theoretical predictions of in-medium properties are based on equilibrium assumptions. Therefore it is necessary to probe in-medium properties of vector mesons under ’cleaner’ conditions. For that purpose photon or pion induced reactions are promising tools. In such reactions the nuclear medium is very close to equilibrium at normal nuclear density and temperature zero. The predicted in-medium effects for the vector mesons are so large that they should have observable consequences already for densities $`\rho =\rho _0`$. Here one should note that even the dileptons seen in ultrarelativistic heavy ion reactions stem to a large part from densities $`\rho \lesssim 2\rho _0`$.
Our calculations are based on a semi-classical BUU transport model that has recently been very successfully applied to the description of heavy ion collisions at SIS energies and photoproduction of pions and etas in nuclei . Meanwhile we have extended the model to the description of dilepton as well as strangeness production and to the high energy regime by including the Lund string fragmentation model FRITIOF . This allows us to calculate inclusive particle production in heavy ion collisions from 200 AMeV to 200 AGeV, in photon and in pion induced reactions with the very same physical input. In our opinion, the simultaneous description of as many experimental observables as possible is necessary because of the large number of parameters, like unknown cross sections, and the strong assumptions that enter semi-classical transport models. In heavy-ion collisions particle production depends not only on a large number of elementary reaction channels but also on the global space-time evolution of the system. On the other hand photon, pion or proton induced reactions allow one to fix certain ingredients of the transport model as observables depend in general only on a few reaction channels. In Ref. we have, for example, determined the in-medium $`\eta `$-nucleon cross section from photoproduction of $`\eta `$-mesons. Just recently we were able to improve considerably our treatment of the $`\mathrm{\Delta }`$-resonance by comparing our calculations to experimental data on photoproduction of pions .
Within our model we have already given predictions for dilepton production in pion nucleus reactions that will be measured by the HADES collaboration at GSI . In the present paper we want to look at $`e^+e^{}`$ production in photonuclear reactions in the energy range from 0.8 to 2.2 GeV that will be accessible at TJNAF and at its lowest energy also at MAMI.
Our paper is organized as follows: In Section II we describe our model. Here we focus on the treatment of broad resonances and our description of elementary photon nucleon reactions. In Section III we present our results for dilepton production in photon nucleus reactions and discuss the possibility to subtract the Bethe-Heitler contribution. In Section IV we include in-medium modifications for the vector mesons into our calculations and present their effect on the dilepton yield. We close with a summary in Section V.
## II The BUU model
The transport model used here has been developed starting from the model that has been described in full detail in Refs. . Here we restrict ourselves to the description of the essential new features of our method.
### A Resonance properties
Instead of the baryonic resonances in Ref. that were taken with their parameters from the PDG we now use the resonances from the analysis of Manley and Saleski . This has the advantage of supplying us with a consistent set of resonances. In particular, now the resonance parameters are consistent with the parameterizations of the resonance widths. We take into account all resonances that are rated with at least 2 stars in Ref. : $`P_{33}`$(1232), $`P_{11}`$(1440), $`D_{13}`$(1520), $`S_{11}`$(1535), $`P_{33}`$(1600), $`S_{31}`$(1620), $`S_{11}`$(1650), $`D_{15}`$(1675), $`F_{15}`$(1680), $`P_{13}`$(1879), $`S_{31}`$(1900), $`F_{35}`$(1905), $`P_{31}`$(1910), $`D_{35}`$(1930), $`F_{37}`$(1950), $`F_{17}`$(1990), $`G_{17}`$(2190), $`D_{35}`$(2350). The resonances couple to the following channels: $`N\pi `$, $`N\eta `$, $`N\omega `$, $`\mathrm{\Lambda }K`$, $`\mathrm{\Delta }(1232)\pi `$, $`N\rho `$, $`N\sigma `$, $`N(1440)\pi `$, $`\mathrm{\Delta }(1232)\rho `$. The cross section for the production of a resonance $`R`$ in a collision of a meson $`m`$ with a baryon $`B`$ is given by:
$$\sigma _{mB\to R}=\frac{2J_R+1}{(2J_m+1)(2J_B+1)}\frac{4\pi }{k^2}\frac{s\mathrm{\Gamma }_{mB}^{in}\mathrm{\Gamma }_{tot}^{out}}{(s-M_R^2)^2+s\left(\mathrm{\Gamma }_{tot}^{out}\right)^2},$$
(1)
where $`J_R`$, $`J_m`$, $`J_B`$ denote the spins of the resonance, the baryon and the meson, respectively. $`k`$ is the cms momentum of the incoming particles, $`s`$ is the squared invariant energy, and $`M_R`$ is the pole mass of the resonance. The total decay width $`\mathrm{\Gamma }_{tot}^{out}`$ is given as sum over the partial decay widths of the resonance. For a specific channel $`mB`$ it is:
$$\mathrm{\Gamma }_{mB}^{out}=\mathrm{\Gamma }_{mB}^0\frac{\rho _{mB}(s)}{\rho _{mB}(M_R)},$$
(2)
with $`\mathrm{\Gamma }_{mB}^0`$ being the partial decay width at the pole of the resonance and $`\rho _{mB}(s)`$ is given as:
$$\rho _{mB}(s)=\int d\mu _m\int d\mu _B𝒜_m(\mu _m)𝒜_B(\mu _B)\frac{q(s,\mu _m,\mu _B)}{\sqrt{s}}B_{l_{mB}}^2(qR),$$
(3)
where $`𝒜_m`$ and $`𝒜_B`$ denote the spectral functions of the outgoing particles. $`q`$ is their cms momentum, $`l_{mB}`$ is their relative orbital angular momentum, and $`B_{l_{mB}}`$ is a Blatt-Weisskopf barrier penetration factor (interaction radius $`R=1\mathrm{fm}`$). For the spectral function $`𝒜_i`$ of an unstable particle $`i`$ we use:
$$𝒜_i(\mu )=\frac{2}{\pi }\frac{\mu ^2\mathrm{\Gamma }_{tot}(\mu )}{(\mu ^2-M_i^2)^2+\mu ^2\mathrm{\Gamma }_{tot}^2(\mu )},$$
(4)
where $`M_i`$ denotes the pole mass and $`\mathrm{\Gamma }_{tot}(\mu )`$ is the total width. Here we neglect any spin degrees of freedom as well as a momentum dependence of the real part of the self-energy. For a stable particle (with respect to the strong interaction) we simply have:
$$𝒜_i(\mu )=\delta (\mu -M_i).$$
(5)
The incoming width $`\mathrm{\Gamma }_{mB}^{in}`$ in Eq. (1) is given by:
$$\mathrm{\Gamma }_{mB}^{in}=C_{mB}^{I_R}\mathrm{\Gamma }_{mB}^0\frac{kB_{l_{mB}}^2(kR)}{\sqrt{s}\rho _{mB}(M_R)},$$
(6)
where $`C_{mB}^{I_R}`$ is the appropriate Clebsch-Gordan coefficient for the coupling of the isospins of the baryon and the meson to the isospin $`I_R`$ of the resonance.
We note that we use – in contrast to Ref. – relativistic propagators in Eqs. (1),(4) and momentum dependent widths in the spectral functions of the outgoing particles. However, this has only a very small effect on the resonance production cross sections and does not require a readjustment of the resonance parameters.
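As an illustration of Eqs. (1)–(6), the following sketch evaluates the production cross section for a single $`P_{33}(1232)`$-like resonance with both outgoing particles treated as stable, Eq. (5), so that the mass integrals in Eq. (3) collapse. The masses, the width and the $`l=1`$ Blatt-Weisskopf factor are generic textbook values, not the Manley-Saleski fit parameters.

```python
import numpy as np

m_pi, m_N = 0.138, 0.938                           # GeV (isospin-averaged)
M_R, Gamma0, l, R = 1.232, 0.118, 1, 1.0 / 0.197   # pole mass/width, R = 1 fm

def q_cms(srt, m1, m2):
    """Two-body cms momentum at invariant energy sqrt(s) = srt."""
    s = srt ** 2
    return np.sqrt((s - (m1 + m2) ** 2) * (s - (m1 - m2) ** 2)) / (2.0 * srt)

def bw_factor_sq(x, l):
    """Squared Blatt-Weisskopf penetration factor (only l = 0, 1 here)."""
    return 1.0 if l == 0 else x ** 2 / (1.0 + x ** 2)

def rho_mB(srt):
    """Eq. (3) with stable pi and N, Eq. (5): only phase space remains."""
    q = q_cms(srt, m_pi, m_N)
    return q / srt * bw_factor_sq(q * R, l)

def sigma_piN_to_R(srt, J_R=1.5, C_iso=1.0):
    """Resonance production cross section, Eq. (1), in GeV^-2."""
    s, k = srt ** 2, q_cms(srt, m_pi, m_N)
    g_out = Gamma0 * rho_mB(srt) / rho_mB(M_R)                       # Eq. (2)
    g_in = C_iso * Gamma0 * k * bw_factor_sq(k * R, l) / (srt * rho_mB(M_R))
    spin = (2 * J_R + 1) / ((2 * 0 + 1) * (2 * 0.5 + 1))
    return spin * 4 * np.pi / k ** 2 * s * g_in * g_out \
        / ((s - M_R ** 2) ** 2 + s * g_out ** 2)

# Peak value; 1 GeV^-2 = 0.3894 mb
print(f"sigma(sqrt(s) = 1.232 GeV) ~ {sigma_piN_to_R(1.232) * 0.3894:.0f} mb")
```

At the pole this expression reduces to $`\sigma =(2J_R+1)/2\cdot 4\pi /k^2`$ for $`C_{mB}^{I_R}=1`$, about 190 mb, close to the measured $`\pi ^+p`$ peak of roughly 200 mb.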
The mesonic resonances are treated analogously to the baryonic ones, i.e. their two-body decay widths are calculated according to Eq. (2). The used parameters are given in Table I. The three-pion decay width of the $`\omega `$-meson is assumed to be constant since it is very small.
### B The collision term
#### 1 Baryon-baryon collisions
For invariant energies $`\sqrt{s}<2.6\mathrm{GeV}`$ we describe baryon-baryon collisions as in Ref. with the same matrix elements. Our modified treatment of the resonance properties preserves the very good agreement with the experimental data on one- and two-pion production in nucleon-nucleon collisions shown in Ref. . For higher energies we use the string fragmentation model FRITIOF with the same parameters as in Ref. . This approach is similar to the ones in the hadronic transport models described in Refs. .
#### 2 Meson-baryon collisions
For meson-baryon collisions we use the string fragmentation model for $`\sqrt{s}>2.2\mathrm{GeV}`$. For lower energies the most important contributions come from intermediate nucleon resonances which are described according to Eq. (1).
In case of $`\pi ^-p`$-scattering the incoherent sum of all resonance contributions gives a very good agreement with experimental data for pion momenta $`p_\pi \lesssim 1.1\mathrm{GeV}`$, corresponding to invariant energies $`\sqrt{s}\lesssim 1.73\mathrm{GeV}`$. In Fig. 1 we show the total $`\pi ^-p`$ cross section. For higher energies ($`1.73\mathrm{GeV}<\sqrt{s}<2.2\mathrm{GeV}`$) we additionally include a background $`\pi N\to \pi \pi N`$ cross section in order to reproduce the total cross section, for which we use a parameterization from the PDG (shown as the solid line in Fig. 1). In Fig. 2 we show that the resonance contributions give a very good description of experimental data on elastic scattering, charge exchange, two-pion and eta production cross sections.
For $`\pi ^+p`$-scattering one gets, also for lower energies, a good agreement with experimental data only if one takes into account all resonances from Ref. , including the one-star ones, as can be seen from Figs. 3 and 4. Since we do not explicitly propagate the one-star resonances we put their contributions into background cross sections for $`\pi N\to \pi N`$ and $`\pi N\to \pi \pi N`$. For higher energies we, again, include a two-pion production background term that is fitted to the total cross section from Ref. .
For $`\pi ^0p`$-scattering we have from isospin symmetry:
$`\sigma _{\pi ^0p}={\displaystyle \frac{1}{2}}(\sigma _{\pi ^+p}+\sigma _{\pi ^-p})`$
for the total cross section. The cross sections for pion-neutron scattering also follow from isospin symmetry.
In Fig. 5 we compare the resonance contributions to $`\pi ^-p\to n\rho ^0`$ to the experimental data from Ref. . While for invariant energies $`\sqrt{s}`$ above 1.8 GeV the experimental data are reasonably well described by the resonances, there is a strong disagreement at lower energies, in particular in the region of the $`D_{13}(1520)`$-resonance. The $`\rho `$-mesons produced at these energies have invariant masses essentially below the pole mass $`m_\rho ^0`$. In Ref. the $`\rho `$-meson cross section has been obtained by a fit to invariant mass spectra of the outgoing pions in $`\pi ^-p\to n\pi ^+\pi ^-`$. For low $`\sqrt{s}`$ the shapes of the $`\rho `$-meson and the ’background’ contributions become very similar and a determination of the $`\rho `$-meson cross section gets very difficult.
In Ref. the couplings of the baryonic resonances to the $`N\rho `$ channel have been determined by using amplitudes for $`\pi N\to N\rho `$ that were obtained by a partial wave analysis of all available $`\pi N\to N\pi \pi `$ data in Ref. . We also note that the large coupling of the $`D_{13}(1520)`$-resonance to the $`N\rho `$ channel found in Ref. is in line with other similar analyses . Therefore we consider the experimental data in Ref. to be wrong for low $`\sqrt{s}`$.
In addition to the resonance contributions we include the following processes:
$`\pi N\to \omega N`$
$`\pi N\to \omega \pi N`$
$`\omega N\to \pi \pi N`$
$`\omega N\to \omega N`$
$`\pi N\to \varphi N`$
$`\pi N\to \varphi \pi N`$
$`\varphi N\to \pi \pi N`$
$`\varphi N\to \varphi N,`$
where we adopt the cross sections from Ref. . For cross sections involving an $`\omega `$-meson we, of course, subtract our resonance contributions from these cross sections.
### C Treatment of broad resonances
In our model broad resonances, like the baryon resonances or the $`\rho `$-meson, are not just produced and propagated with their pole mass but according to their spectral function. The transport equation for a system of $`N`$ particle species reads:
$$\left(\frac{\partial }{\partial t}+\frac{\partial H_i}{\partial \stackrel{}{p}}\frac{\partial }{\partial \stackrel{}{r}}-\frac{\partial H_i}{\partial \stackrel{}{r}}\frac{\partial }{\partial \stackrel{}{p}}\right)F_i=G_i𝒜_i-L_iF_i\hspace{1em}(i=1,\mathrm{\dots },N),$$
(7)
where $`F_i(\stackrel{}{r},\stackrel{}{p},\mu ,t)`$ denotes the one-particle spectral phase space density of particle species $`i`$ with $`\stackrel{}{r}`$ and $`\stackrel{}{p}`$ being the spatial and momentum coordinates of the particle. $`\mu `$ is the invariant mass of the particle and $`H_i(\stackrel{}{r},\stackrel{}{p},\mu ,F_1,\mathrm{},F_N)`$ stands for the single particle mean field Hamilton function which, in our numerical realization , is given as:
$$H_i=\sqrt{(\mu +S_i)^2+\stackrel{}{p}^2},$$
(8)
where $`S_i(\stackrel{}{r},\stackrel{}{p},\mu ,F_1,\mathrm{},F_N)`$ is a scalar potential. We note that we neglect a vector potential and $`S_i`$ is an effective scalar potential that is – for the nucleons – obtained from a non-relativistic potential (for details see Ref. ). The terms $`G_i(\stackrel{}{r},\stackrel{}{p},\mu ,F_1,\mathrm{},F_N)`$ and $`L_i(\stackrel{}{r},\stackrel{}{p},\mu ,F_1,\mathrm{},F_N)`$ stand for a gain and a loss term, respectively, and $`𝒜_i(\stackrel{}{r},\stackrel{}{p},\mu ,F_1,\mathrm{},F_N)`$ is the spectral function of particle $`i`$. The distribution function $`f_i`$ is defined by:
$$f_i(\stackrel{}{r},\stackrel{}{p},\mu ,t)=\frac{F_i(\stackrel{}{r},\stackrel{}{p},\mu ,t)}{𝒜_i(\stackrel{}{r},\stackrel{}{p},\mu ,t)};$$
(9)
for stable particles it reduces to the usual phase space density.
In order to be more specific about $`G_i`$ and $`L_i`$ let us consider, as an example, a system of nucleons, rho-mesons, pions and a single baryonic resonance species R that are coupled only via $`R\leftrightarrow N\rho `$ and $`\rho \leftrightarrow \pi \pi `$. Then the gain term $`G_\rho `$ is:
$$G_\rho =\frac{1}{𝒜_\rho }\int \frac{d^3p_R}{(2\pi )^3}\int d\mu _RF_R(\stackrel{}{r},\stackrel{}{p}_R,\mu _R,t)\frac{d\mathrm{\Gamma }_{R\to N\rho }}{d^3p_\rho d\mu _\rho }(1-f_n(\stackrel{}{r},\stackrel{}{p}_n,t)),$$
(10)
where the factor $`(1-f_n)`$ accounts for the Pauli principle of the outgoing nucleon and $`\stackrel{}{p}_n=\stackrel{}{p}_R-\stackrel{}{p}_\rho `$ due to momentum conservation. For simplicity, we have neglected a possible finite width of the nucleons as well as a Bose enhancement factor $`(1+f_\rho )`$ for the $`\rho `$-meson. In the differential decay width $`d\mathrm{\Gamma }_{R\to N\rho }`$ the spectral function of the $`\rho `$-meson $`𝒜_\rho `$ enters simply as a multiplicative factor (Eqs. (2),(3)). Therefore, here $`G_\rho `$ does not depend on $`𝒜_\rho `$ since we neglect any process where more than one $`\rho `$-meson is produced, like e.g. $`R\to \rho \rho N`$.
The loss term $`L_\rho `$ reads:
$$L_\rho =\mathrm{\Gamma }_{\rho \to \pi \pi }+\int \frac{d^3p_n}{(2\pi )^3}f_n(\stackrel{}{r},\stackrel{}{p}_n,t)v_{n\rho }\sigma _{\rho n\to R},$$
(11)
with $`v_{n\rho }`$ being the relative velocity of the nucleon and the $`\rho `$-meson and $`\sigma _{\rho n\to R}`$ their cross section for the production of resonance $`R`$ (Eq. (1)). $`\mathrm{\Gamma }_{\rho \to \pi \pi }`$ denotes the two-pion decay width of the $`\rho `$-meson in the calculational frame.
The total in-medium width $`\mathrm{\Gamma }_{tot,\rho }`$ appearing in the spectral function from Eq. (4) is directly related to the loss rate $`L_\rho `$:
$$\mathrm{\Gamma }_{tot,\rho }=\gamma L_\rho ,$$
(12)
where $`\gamma `$ is a Lorentz factor which appears since $`\mathrm{\Gamma }_{tot,\rho }`$ is the decay rate in the rest frame of the $`\rho `$-meson.
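A minimal numerical sketch of this relation, using the low-density limit of the loss term with nucleons at rest and a constant, assumed $`\rho N`$ absorption cross section in place of the resonance sum of Eq. (11), reads:

```python
HBARC = 0.197  # GeV fm

def gamma_coll(p, m=0.770, rho_N=0.168, sigma_abs=2.5):
    """Schematic collisional width from Eq. (12), Gamma = gamma * L, in the
    low-density limit with nucleons at rest: L = rho_N * v_rel * sigma.
    sigma_abs = 2.5 fm^2 (25 mb) is an assumed constant rho-N cross section
    standing in for the resonance sum of Eq. (11); p and m are in GeV,
    rho_N in fm^-3.  Returns the rest-frame width in GeV."""
    # gamma * v_rel = p/m, so the boosted loss rate is rho * sigma * (p/m)
    return rho_N * sigma_abs * (p / m) * HBARC

for p in (0.2, 0.5, 1.0):
    print(f"p = {p:.1f} GeV:  Gamma_coll ~ {1000.0 * gamma_coll(p):.0f} MeV")
```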
The loss and gain terms for the resonance $`R`$ can be written down in an analogous way. One immediately sees that the transport equations of the $`\rho `$-meson and the resonance $`R`$ are coupled in a highly non-linear way. In particular, the in-medium widths of both particles, which are functions of space-time and 4-momentum, depend on each other through integral equations (Eqs. (1),(2),(3),(4),(11)).
The above described equations can easily be extended to a transport model with more particle species and a realistic collision term. We also note that it is straightforward to formulate the theory consistently for real and imaginary part of the self-energies of the particles. However, in our model we treat real and imaginary part of the self-energies completely independently. This violates analyticity, but in the present stage it would already require a considerable effort to treat the imaginary parts of all particles in a realistic transport model in a completely self-consistent way.
In Ref. it has recently been stressed that, because of unitarity, it is important to respect Eq. (12) in transport calculations. This means that in the population of a resonance the same width has to be used in the spectral function that enters the dynamical calculation via the collision term. In Refs. we have already taken into account the in-medium widths of the baryonic resonances $`\mathrm{\Delta }(1232)`$, $`N(1520)`$, $`N(1535)`$, and $`N(1680)`$ in a consistent way for population and propagation. Since during a photoproduction reaction the nucleus remains, in the time interval relevant for meson production, close to its ground state the calculational effort is in this case manageable. As we describe the nuclear ground state in a local Thomas-Fermi approximation we can use nuclear matter values for the in-medium widths that depend only on the invariant mass $`\mu `$, the absolute value of the 3-momentum $`|\stackrel{}{p}|`$, and the density $`\rho `$:
$\mathrm{\Gamma }(\vec{r},t,\vec{p},\mu )\approx \mathrm{\Gamma }(\rho (\vec{r},t),|\vec{p}|,\mu ).$
In the present paper we only take into account the in-medium widths of the $`\rho `$\- and $`\omega `$-mesons. In particular, we neglect any medium-modifications of the $`N\rho `$-widths of the baryonic resonances that would, in a self-consistent calculation of the self-energies, directly follow from a modification of the spectral function of the $`\rho `$-meson. As was shown in Ref. such effects might give large modifications of the baryon widths and also influence the spectral function of the $`\rho `$-meson. However, an inclusion of such effects would enhance the numerical effort dramatically, especially if one takes also the modifications of the real parts of the self-energies into account. Moreover, we do not expect such effects to be relevant for photon energies above 1 GeV because here the nucleon resonances that lie below $`M_N+m_\rho ^0`$ and might thus get strongly modified, like the $`D_{13}(1520)`$, play only a minor role. Observable effects of different medium modifications of the $`D_{13}`$ will be reported elsewhere.
The transport equation Eq. (7) does not yet give the correct asymptotic spectral phase space densities for particles that are stable in vacuum. This can be seen by noting that a collision-broadened particle does not automatically lose its collisional width when being propagated out of the nuclear environment. The same problem appears for resonances whose imaginary part of the self-energy is non-zero in the medium in kinematical regimes where it is zero in vacuum.
The reason for this deficiency is directly related to the semi-classical approximations on which the transport equation Eq. (7) is based, in particular the neglect of coherence effects. However, since a realistic quantum transport theory is numerically not yet realizable, we will nevertheless work with Eq. (7), as it is certainly a step beyond the usual on-shell approximation. Moreover, if the time evolution of the system is such that the rate of particle production and absorption is much larger than the change of the width with time, the problem with surviving off-shell contributions will be negligible. Under the assumption that the gain term in the collision term is comparable in magnitude to the loss term, we can formulate this in the following way:
$$\mathrm{\Gamma }\,dt\gg \frac{d\mathrm{\Gamma }}{\mathrm{\Gamma }}.$$
(13)
In the case of a particle moving in a static nuclear medium with density profile $\rho(\vec{r})$ we can rewrite this condition as:
$$\frac{\left|\vec{\nabla }\rho \cdot \vec{e}_p\right|}{\rho }\ll \frac{1}{\lambda },$$
(14)
where $\vec{e}_p$ is a unit vector along the momentum direction of the particle and $\lambda$ denotes its mean free path. In Section IV A we will discuss the validity of this condition for our calculations and present possible remedies to the problem.
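As a rough numerical illustration of condition (14), the following sketch (our own illustration, not part of the transport code; the Woods-Saxon parameters and the constant 40 mb cross section are assumptions chosen for this example) compares both sides of the inequality along a radial path through a Pb-like nucleus:

```python
import numpy as np

# Woods-Saxon density profile; parameters are illustrative values for a Pb-like nucleus
rho0 = 0.168   # saturation density [fm^-3]
R    = 6.6     # half-density radius [fm]
a    = 0.55    # surface diffuseness [fm]

def rho(r):
    return rho0 / (1.0 + np.exp((r - R) / a))

def drho_dr(r):
    e = np.exp((r - R) / a)
    return -rho0 * e / (a * (1.0 + e) ** 2)

sigma = 40.0 * 0.1   # assumed total meson-nucleon cross section: 40 mb = 4 fm^2

for r in (4.0, 6.0, 6.6, 7.5, 9.0):
    lhs = abs(drho_dr(r)) / rho(r)   # |grad(rho) . e_p| / rho for a radial path [fm^-1]
    rhs = rho(r) * sigma             # 1 / lambda = rho * sigma [fm^-1]
    print(f"r = {r:4.1f} fm:  lhs = {lhs:6.3f} fm^-1,  1/lambda = {rhs:6.3f} fm^-1,"
          f"  ratio = {lhs / rhs:6.2f}")
```

As expected, the condition is comfortably fulfilled in the nuclear interior but is violated in the surface region, where the density gradient is steep.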
### D Parameterization of the elementary $`\gamma N`$ cross sections
For invariant energies $\sqrt{s}<2.1\,\mathrm{GeV}$, corresponding to $E_\gamma <1.88\,\mathrm{GeV}$ on a free nucleon at rest, we describe one-pion, two-pion and eta production as in Ref. . For the two-pion production cross sections on the neutron we now use the experimental data from Refs. instead of the recipe described in Ref. . For larger energies we use, as for the hadronic interactions, the string fragmentation model FRITIOF, where we initialize a zero-mass $\rho^0$-meson for the photon following a VMD picture. For the total cross section we use a parameterization from Ref. . The Lund model is then used to determine the probabilities for the different final states. In Fig. 6 we show that this gives a very good description of charged-particle multiplicities in photon-proton collisions; the agreement seen there is better than could be expected from a model that was developed for applications at high energies. However, we do not expect the Lund model to give correct predictions for all specific channels, especially with respect to isospin. The role of the Lund model in our calculations is to supply us with an overall description of the elementary reaction dynamics, allowing us to take into account multi-step processes where, for example, a primarily produced pion produces a vector meson on a second nucleon.
The vector meson ($\rho,\omega,\varphi$) production in $\gamma N\to VN$ collisions is fitted to experimental data and treated independently of the Lund model, also at high energies. The cross section is given as:
$$\sigma _{\gamma N\to NV}=\frac{1}{p_i\,s}\int d\mu \,|\mathcal{M}_V|^2\,p_f\,\mathcal{A}_V(\mu ),$$
(15)
where $\sqrt{s}$ is the total energy of the $\gamma N$ system, $p_i,p_f$ are the momenta of the initial and final particles in the center-of-mass system, and $\mathcal{A}_V$ is the spectral function of the vector meson $V$ (Eq. 4). The matrix elements $\mathcal{M}_V$ are parameterized in the following way:
$|\mathcal{M}_\rho |^2=0.16\,\mathrm{mb\,GeV}^2$ (16)
$|\mathcal{M}_\omega |^2=\frac{0.08\,p_f^2}{2(\sqrt{s}-1.73\,\mathrm{GeV})^2+p_f^2}\,\mathrm{mb\,GeV}^2$ (17)
$|\mathcal{M}_\varphi |^2=0.004\,\mathrm{mb\,GeV}^2.$ (18)
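For orientation, the following sketch (our own illustration; it approximates the spectral function of Eq. (4) by a fixed-width relativistic Breit-Wigner, whereas the model uses a mass-dependent width) evaluates Eq. (15) for $\gamma N\to N\rho$ with the constant matrix element of Eq. (16):

```python
import numpy as np

mN, mpi = 0.938, 0.138          # nucleon and pion masses [GeV]
m_rho, Gam_rho = 0.770, 0.151   # rho pole mass and vacuum width [GeV]
M2_rho = 0.16                   # |M_rho|^2 [mb GeV^2], Eq. (16)

def pcm(sqrts, m1, m2):
    """CM momentum of a two-body system; 0 below threshold."""
    s = sqrts ** 2
    arg = (s - (m1 + m2) ** 2) * (s - (m1 - m2) ** 2)
    return np.sqrt(np.maximum(arg, 0.0)) / (2.0 * sqrts)

def A_rho(mu):
    """Fixed-width relativistic Breit-Wigner, normalized to int A dmu ~ 1
    (the paper's Eq. (4) uses a mass-dependent width instead)."""
    return (2.0 / np.pi) * mu**2 * Gam_rho / ((mu**2 - m_rho**2)**2
                                              + (mu * Gam_rho)**2)

def sigma_gamma_N_to_N_rho(sqrts):
    """Eq. (15) for gamma N -> N rho with the constant matrix element [mb]."""
    s = sqrts ** 2
    p_i = (s - mN**2) / (2.0 * sqrts)            # photon CM momentum
    mu = np.linspace(2 * mpi, sqrts - mN, 400)   # integration over the rho mass
    p_f = pcm(sqrts, mN, mu)
    return np.trapz(M2_rho * p_f * A_rho(mu), mu) / (p_i * s)

for sqrts in (1.6, 1.8, 2.0, 2.1):
    print(f"sqrt(s) = {sqrts:.1f} GeV : sigma ~ {sigma_gamma_N_to_N_rho(sqrts):.3f} mb")
```

The resulting values of a few times $10^{-2}$ mb near $\sqrt{s}=2$ GeV are of the expected order of magnitude for the exclusive $\gamma p\to p\rho^0$ cross section.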
In Fig. 7 we show the resulting cross sections (dash-dotted curves) for $\gamma p\to p\rho^0$ (upper part), $\gamma p\to p\omega$ (middle part) and $\gamma p\to p\varphi$ (lower part) in comparison with the experimental data. For the angular distribution of the produced vector mesons we use
$$\frac{d\sigma }{dt}\propto \mathrm{exp}(Bt),$$
(19)
where $t$ denotes the square of the 4-momentum transfer from the photon to the vector meson. In Ref. the parameter $B$ was fitted, as a function of photon energy, to $\rho^0$ production. Here we adopt these values and use them also for $\omega$- and $\varphi$-production.
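In a Monte Carlo realization the distribution of Eq. (19) can be drawn by inverse-transform sampling; the following sketch is a minimal illustration (the slope $B=6\,\mathrm{GeV}^{-2}$ and the $t$-limits are assumed example values, not the fitted ones from Ref.):

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_t(B, t_min, t_max, n):
    """Draw t from dsigma/dt ~ exp(B*t) on [t_max, t_min] (t <= 0, t_max < t_min)
    by inverse-transform sampling."""
    u = rng.random(n)
    w_min, w_max = np.exp(B * t_min), np.exp(B * t_max)
    return np.log(w_max + u * (w_min - w_max)) / B

B = 6.0                                            # assumed slope [GeV^-2]
t = sample_t(B, t_min=-0.05, t_max=-1.5, n=100000) # assumed kinematic limits
print(f"<t> = {t.mean():.3f} GeV^2, expected spread ~ 1/B = {1.0/B:.3f} GeV^2")
```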
In our calculations there is an additional contribution to $\gamma N\to N\rho^0$ coming from the decays of the $N(1520)$ and $N(1680)$ resonances, which is shown in Fig. 7 by the dashed line. These decays predominantly contribute to low-mass $\rho$-mesons below the experimentally seen $\rho$ production threshold.
Besides the exclusive process $\gamma N\to VN$ we also have to take into account, for the photon energies considered here, additional channels for the photoproduction of vector mesons. For energies above 2.1 GeV we calculate these cross sections using the Lund model. Below 2.1 GeV we absorb everything into the channel $\gamma N\to V\pi N$, for which we use the following cross section:
$$\sigma _{\gamma N\to V\pi N}=\frac{16(2\pi )^7}{p_i\sqrt{s}}\int d\mu \int d\mathrm{\Phi }_3\,|\mathcal{M}_{V\pi }|^2\,\mathcal{A}_V(\mu ),$$
(20)
where $d\Phi_3$ denotes the 3-body phase space element as, for example, given by Eq. (35.11) in Ref. . The matrix elements are adjusted to give a continuous transition to the string fragmentation model at $\sqrt{s}=2.1\,\mathrm{GeV}$. We use:
$|\mathcal{M}_{\rho ^0\pi }|^2=|\mathcal{M}_{\omega \pi }|^2=0.5\,\mathrm{mb}.$
The calculated inclusive $\rho$ and $\omega$ production cross sections (after subtraction of the exclusive part) are shown as dotted lines in Fig. 7. The total inclusive vector meson production cross sections are indicated as solid lines.
For photon energies above 1 GeV we take into account nuclear shadowing of the incoming photon by adopting the model of Ref. . Since this has, for the photon energies considered here, only a small impact on the final results our specific transport theoretical realization will be described elsewhere .
### E Dilepton production
In our analysis we calculate dilepton production by taking into account the contributions from the Dalitz decays $\Delta \to Ne^+e^-$, $\eta \to \gamma e^+e^-$, $\omega \to \pi^0e^+e^-$, $\pi^0\to \gamma e^+e^-$ and the direct dilepton decays of the vector mesons $\rho,\omega,\varphi$.
The Dalitz decays of the $`\pi ^0`$ and the $`\eta `$ are parameterized according to Ref. . For the Dalitz decay of the $`\omega `$ we use the parameterization from Ref. . The $`\mathrm{\Delta }`$ Dalitz-decay is described in line with Ref. where we, however, use $`g=5.44`$ for the coupling constant in order to reproduce the photonic decay width $`\mathrm{\Gamma }_0(0)=0.72`$ MeV.
The dilepton decay of the vector mesons is calculated assuming strict vector meson dominance as in Ref. :
$$\mathrm{\Gamma }_{V\to e^+e^-}(M)=C_V\,\frac{m_V^4}{M^3},$$
(21)
where $C_\rho =8.814\times 10^{-6}$, $C_\omega =0.767\times 10^{-6}$ and $C_\varphi =1.344\times 10^{-6}$, respectively. Within an extended vector meson dominance picture one has a dilepton decay amplitude that consists of two terms, one describing the coupling of the virtual photon to the hadron with a strength proportional to $M^2$ and another with strength proportional to $M^0$. However, since we neglect a direct coupling of the virtual photon, the use of strict vector meson dominance is more appropriate. We also note that this dilepton decay width, together with our parameters for the $\rho$-meson, gives a very good description of the experimental data for $e^+e^-\to \pi^+\pi^-$.
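The following lines evaluate Eq. (21) at the pole masses as a trivial consistency check of the constants $C_V$ (the pole mass values are assumed nominal ones; the resulting widths come out at the few-keV level):

```python
# Direct dilepton widths of the vector mesons, Eq. (21):
# Gamma_{V -> e+e-}(M) = C_V * m_V^4 / M^3  (strict vector meson dominance)
C = {"rho": 8.814e-6, "omega": 0.767e-6, "phi": 1.344e-6}
m0 = {"rho": 0.770, "omega": 0.782, "phi": 1.020}   # assumed pole masses [GeV]

def gamma_ee(V, M):
    """Mass-dependent dilepton decay width [GeV]."""
    return C[V] * m0[V] ** 4 / M ** 3

for V in ("rho", "omega", "phi"):
    print(f"{V:5s}: Gamma_ee(m0) = {gamma_ee(V, m0[V]) * 1e6:.2f} keV")
```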
In our transport model the dilepton yield is obtained from the phase space distributions of the respective sources by a time integration. For the vector mesons the mass-differential dilepton production is given as:
$$\frac{dN_{V\to e^+e^-}}{d\mu }=\int _0^\infty dt\int d^3r\int \frac{d^3p}{(2\pi )^3}\,F_V(\vec{r},t,\vec{p},\mu )\,\frac{\mathrm{\Gamma }_{V\to e^+e^-}}{\gamma },$$
(22)
where $\gamma$ is a Lorentz factor which appears since $\Gamma_{V\to e^+e^-}$ is the width in the rest frame of the vector meson. The Dalitz decay contributions contain an additional mass integration. For the $\Delta$-resonance we have, for example:
$$\frac{dN_{\mathrm{\Delta }\to Ne^+e^-}}{d\mu }=\int _0^\infty dt\int d^3r\int \frac{d^3p}{(2\pi )^3}\int d\mu _2\,F_\mathrm{\Delta }(\vec{r},t,\vec{p},\mu _2)\,\frac{1}{\gamma }\,\frac{d\mathrm{\Gamma }_{\mathrm{\Delta }\to Ne^+e^-}}{d\mu }.$$
(23)
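Schematically, Eqs. (22) and (23) are evaluated in the test particle representation by letting every test particle "shine" dileptons at each time step with weight $(\Gamma_{V\to e^+e^-}/\gamma)\,\Delta t$. The sketch below illustrates this for the $\rho$-meson with toy mass and Lorentz-factor ensembles (the distributions, time step, and number of steps are illustrative assumptions, not the model's actual phase space output):

```python
import numpy as np

C_rho, m_rho, two_mpi = 8.814e-6, 0.770, 0.276   # [GeV]

def gamma_ee(mu):
    """Eq. (21) for the rho meson [GeV]."""
    return C_rho * m_rho**4 / mu**3

rng = np.random.default_rng(1)
n_tp = 10000
# Toy ensembles standing in for F_V: masses near the pole, mild boosts
mu = np.clip(rng.normal(m_rho, 0.075, n_tp), two_mpi + 1e-3, 1.2)
gam = 1.0 + rng.exponential(1.0, n_tp)

dt = 0.5 / 0.197                 # time step 0.5 fm/c in GeV^-1 (hbar*c = 0.197 GeV fm)
bins = np.linspace(0.3, 1.2, 46)
hist = np.zeros(len(bins) - 1)
for _ in range(100):             # "shining" over 100 time steps
    # (in the real code the test particles are propagated, produced, and
    #  absorbed between the shining steps; here the ensemble is static)
    w = gamma_ee(mu) / gam * dt  # dilepton weight per test particle and step
    hist += np.histogram(mu, bins=bins, weights=w)[0]

dN_dmu = hist / np.diff(bins) / n_tp    # dN/dmu per rho meson [GeV^-1]
print(f"dN/dmu peaks at mu ~ {bins[np.argmax(dN_dmu)]:.2f} GeV")
```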
## III Dilepton production in $`\gamma A`$ reactions
In Figs. 8–10 we present the calculated dilepton spectra $d\sigma/dM$ for $\gamma$C, $\gamma$Ca, $\gamma$Pb reactions at photon energies $E_\gamma =0.8, 1.5, 2.2$ GeV. A mass resolution of 10 MeV is included through a convolution of our calculated spectrum with a Gaussian. Here neither collisional broadening nor a mass shift of the vector mesons was taken into account.
The thin lines indicate the individual contributions from the different production channels, i.e., starting from low $M$: the Dalitz decays $\pi^0\to \gamma e^+e^-$ (short-dotted line), $\eta \to \gamma e^+e^-$ (dotted line), $\Delta \to Ne^+e^-$ (dot-dashed line), and $\omega \to \pi^0e^+e^-$ (dot-dot-dashed line); for $M\gtrsim 0.8$ GeV: $\rho^0\to e^+e^-$ (dashed line), $\omega \to e^+e^-$ (dot-dot-dashed line), and $\varphi \to e^+e^-$ (dashed line). The full solid lines represent the sum of all sources. The dominant processes in the low-mass region up to $M\approx 500$ MeV are the $\eta$, $\omega$ and $\Delta$ Dalitz decays. Above $M\approx 0.6$ GeV the spectrum is dominated by the vector meson decays with a low background from other hadronic sources.
In our calculations we only take into account $\rho$-mesons with masses above $2m_\pi$, which is the threshold of the strong decay, since in the calculation of the spectral function (Eq. (4)) we neglect contributions from electroweak decays. Therefore we get a discontinuity in our spectra in Figs. 8–10 at the two-pion mass which is, however, hardly visible because of the other sources and the finite mass resolution.
### A Bethe-Heitler contribution
Besides the ’hadronic’ contributions discussed above we also have to take into account dilepton production via the so-called Bethe-Heitler (BH) process, whose Feynman diagrams, depicted in Fig. 11, contribute to lowest order in the electromagnetic coupling constant $\alpha$.
On a single nucleon, with the electromagnetic form factors known from electron scattering, the BH process is completely determined by QED dynamics and can easily be calculated. For a detailed description of the involved matrix elements we refer to Ref. from which we also adopted the parameterizations of the electromagnetic form factors $`W_1(Q^2,\nu )`$ and $`W_2(Q^2,\nu )`$ of the nucleon.
For our calculations we take into account only the incoherent sum of BH contributions on single nucleons and neglect contributions where the intermediate photon couples to the charge of the whole nucleus. While the latter will, because of the $Z^2$ dependence, dominate all integrated cross sections, it can easily be suppressed experimentally by appropriate missing-mass cuts.
In Fig. 12 we compare the BH contributions for $`\gamma `$Pb at 1.5 (upper part) and 2.2 GeV (lower part) with the ’hadronic’ contributions that we have already shown in Fig. 10. One sees that the sum of the elementary cross sections on the proton and the neutron (dashed lines) is much larger than the ’hadronic’ contributions (solid lines) for dilepton masses below 0.6 GeV. In the region of the $`\rho `$ and $`\omega `$ meson the BH contribution is about a factor of 4 smaller but still non-negligible. With the inclusion of Fermi motion and Pauli blocking (dotted lines) the BH contribution is reduced significantly for low invariant masses but hardly affected for masses larger than 700 MeV.
In order to suppress the BH contribution we implemented the cuts $(kp_-),(kp_+)>0.01$ GeV$^2$, where $k$ is the 4-momentum of the incoming photon, and $p_-$, $p_+$ are the 4-momenta of the electron and positron, respectively. These cuts, which reflect the pole-like behaviour of the intermediate electron propagator, suppress the BH contribution by a factor of 10 (dot-dashed lines in Fig. 12) and have practically no influence on the ’hadronic’ contributions.
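These cuts amount to simple conditions on Minkowski products of the lepton 4-momenta, as in the following sketch (the example 4-vectors are made up for illustration):

```python
import numpy as np

def minkowski_dot(a, b):
    """4-vector product with metric (+,-,-,-); a, b = (E, px, py, pz)."""
    return a[0]*b[0] - a[1]*b[1] - a[2]*b[2] - a[3]*b[3]

def passes_bh_cuts(k, p_minus, p_plus, cut=0.01):
    """Cuts (k.p-), (k.p+) > 0.01 GeV^2 suppressing the electron-propagator poles."""
    return minkowski_dot(k, p_minus) > cut and minkowski_dot(k, p_plus) > cut

# Example: a 1.5 GeV photon along z and a nearly collinear electron
k  = np.array([1.5, 0.0, 0.0, 1.5])
pe = np.array([0.75, 0.02, 0.0, 0.7497])     # illustrative e- 4-momentum
pp = np.array([0.60, -0.10, 0.05, 0.58])     # illustrative e+ 4-momentum
# The nearly collinear electron sits close to the propagator pole and fails the cut:
print(passes_bh_cuts(k, pe, pp))   # -> False
```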
In our calculations we do not take into account interference terms between the BH and the ’hadronic’ contributions. For an inclusive cross section that contains, for each $e^+e^-$ pair, also the pair with exchanged momenta, the interference term vanishes and the BH contribution can easily be subtracted. Therefore we will in the following discuss only the hadronic contributions.
## IV In-medium effects in dilepton production
### A Collisional broadening
The in-medium widths of the $`\rho `$ and $`\omega `$ mesons are calculated as sketched in Section II C. In the rest frame of the meson the total in-medium width is given as:
$$\mathrm{\Gamma }_{tot}^V(\mu ,|\vec{p}|,\rho )=\mathrm{\Gamma }_{vac}^V(\mu )+\mathrm{\Gamma }_{coll}^V(\mu ,|\vec{p}|,\rho ),$$
(24)
where the collisional width $`\mathrm{\Gamma }_{coll}^V`$ reads:
$$\mathrm{\Gamma }_{coll}^V(\mu ,|\vec{p}|,\rho )=\gamma \,\rho \,\langle v_{VN}\,\sigma _{VN}^{tot}\rangle ,$$
(25)
and $\Gamma_{vac}^V$ is the vacuum decay width. In Eq. (25) the brackets stand for an average over the Fermi sea of the nucleons, $v_{VN}$ is the relative velocity between the vector meson and the nucleon, and $\sigma_{VN}^{tot}$ is their total cross section. $\rho$ is the nuclear density and $\gamma$ the Lorentz factor for the boost to the rest frame of the vector meson.
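A minimal numerical sketch of Eq. (25) (our illustration: it assumes a constant $\sigma_{VN}^{tot}=20$ mb, whereas the model uses the energy-dependent cross sections of Eq. (1)) reads:

```python
import numpy as np

mN = 0.938                      # nucleon mass [GeV]
rng = np.random.default_rng(0)

def fermi_sample(n, pF=0.268):
    """Uniform filling of the Fermi sphere |p| <= pF (local Thomas-Fermi, rho = rho0)."""
    p = pF * rng.random(n) ** (1.0 / 3.0)
    cos_t = rng.uniform(-1.0, 1.0, n)
    phi = rng.uniform(0.0, 2.0 * np.pi, n)
    sin_t = np.sqrt(1.0 - cos_t ** 2)
    return np.stack([p*sin_t*np.cos(phi), p*sin_t*np.sin(phi), p*cos_t], axis=1)

def gamma_coll(mu, p_meson, rho_fm3, sigma_mb, n=200000):
    """Eq. (25): rest-frame collisional width [GeV] for an assumed constant
    total VN cross section."""
    pV = np.array([0.0, 0.0, p_meson])
    EV = np.hypot(mu, p_meson)
    pN = fermi_sample(n)
    EN = np.sqrt(mN ** 2 + (pN ** 2).sum(axis=1))
    pdot = EV * EN - pN @ pV                                 # 4-product p_V . p_N
    vrel = np.sqrt(np.maximum(pdot**2 - (mu * mN)**2, 0.0)) / (EV * EN)
    gam = EV / mu                                            # boost to the meson rest frame
    rho_nat = rho_fm3 * 0.197 ** 3                           # fm^-3 -> GeV^3
    sigma_nat = sigma_mb * 0.1 / 0.197 ** 2                  # mb -> fm^2 -> GeV^-2
    return gam * rho_nat * sigma_nat * vrel.mean()

w = gamma_coll(mu=0.782, p_meson=1.0, rho_fm3=0.168, sigma_mb=20.0)
print(f"Gamma_coll(omega, |p| = 1 GeV, rho_0) ~ {1000.0 * w:.0f} MeV")
```

With the assumed 20 mb cross section this reproduces the order of magnitude of the $\omega$ collisional width quoted below.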
The upper part of Fig. 13 shows the collisional width of the $`\rho `$ meson as a function of momentum and mass at nuclear matter density $`\rho =\rho _0`$. The structure at low $`\mu `$ comes from the resonance contributions, especially from the $`D_{13}(1520)`$. Note that the width becomes very large (up to about 600 MeV), corresponding to a complete melting of the $`\rho `$-meson. The lower part of Fig. 13 shows the momentum and density dependence of the $`\omega `$ collisional width calculated at the pole mass $`\mu =m_\omega ^0`$. At nuclear matter density $`\rho _0`$ and a momentum of $`p=1\mathrm{GeV}`$ we obtain a collisional width of about 80 MeV which is about a factor of 10 larger than the vacuum decay width.
In Fig. 14 (upper part) we show the contribution of the $\rho$-meson to the $e^+e^-$ yield for $\gamma$Pb at 1.5 GeV. The solid line indicates the bare mass case (as in Fig. 10), i.e., without collisional broadening. The curve labelled ’coll. broadening’ (dash-dotted line) results if we take into account the collisional broadening effect in the production of the $\rho$-mesons. Here we calculate the $\rho$-meson production cross sections in photon-nucleon collisions (Eqs. (15),(20)) with the in-medium spectral function. Since the in-medium spectral function depends on the momentum of the $\rho$-meson with respect to the nuclear medium, an additional angular integration has to be performed. For the case of exclusive production $\gamma N\to N\rho$ we use the angular distribution from Eq. (19). For $\gamma N\to N\rho \pi$ we assume an isotropic three-body phase space distribution.
As discussed in Section II C we do not modify the $N\rho$-widths of the baryonic resonances, but the masses of the $\rho$-mesons stemming from these decays are distributed according to the (phase space weighted) in-medium spectral function. We again neglect $\rho$-mesons with masses below the two-pion threshold.
From Fig. 14 one sees that the inclusion of collisional broadening leads to a depletion of the $\rho$-meson peak by about 30% and a shift of strength to lower dilepton masses. One also observes a very large peak at $M=2m_\pi$, which is in fact a pole, rendered finite here only by our numerical solution. The reason for this divergence is directly related to our discussion in Section II C of the asymptotic solutions of the semi-classical transport equation when in-medium spectral functions are included. At the two-pion mass the vacuum spectral function of the $\rho$-meson is zero, while the in-medium spectral function has some finite value since the collisional width from Eq. (25) does not vanish. When travelling to the vacuum, the respective component of the spectral phase space density becomes infinitely long-lived and leads to the divergence. If we included the electroweak decay width of the $\rho$-meson in the collision term of the transport equation, the problem would not be solved: the pole would only be replaced by a numerically indistinguishable large peak.
In Fig. 14 (lower part) we show the contribution of the $`\omega `$-meson when including collisional broadening (dash-dotted line) in comparison to the calculation with the vacuum spectral function (solid line, as in Fig. 10). One observes a strong broadening of the $`\omega `$-peak which is partly covered up by the inclusion of a mass resolution of 10 MeV. However, such a strong broadening is not realistic because most of the $`\omega `$-mesons that contribute to the dilepton spectrum decay in the vacuum. This is also reflected by the violation of the condition Eq. (14) for the validity of our off-shell transport equation in this case.
#### 1 Prescriptions to obtain correct asymptotic behaviour
In the following we discuss a prescription that allows us to obtain a divergence-free $\rho$-meson and a reasonable $\omega$-meson contribution. For that purpose we introduce a potential that shifts a particle back to its vacuum spectral function as it propagates out into the vacuum. Such a potential cannot be defined on the level of the transport equation Eq. (7). However, we can introduce it on the level of our numerical realization. We recall that we solve the transport equation by a so-called test particle method, i.e., we make an ansatz for the spectral phase space density $F$:
$$F(\vec{r},t,\vec{p},\mu )\sim \sum _i\delta (\vec{r}-\vec{r}_i(t))\,\delta (\vec{p}-\vec{p}_i(t))\,\delta (\mu -\mu _i),$$
(26)
where $\vec{r}_i(t)$, $\vec{p}_i(t)$, and $\mu_i$ denote the spatial coordinate, the momentum and the mass of the test particle $i$, respectively. Now, we can define for each test particle a density-dependent scalar potential $s_i$ in the following way:
$$s_i(\rho _i(t))=\left(\mu _i^{med}-\mu _i^{vac}\right)\frac{\rho _i(t)}{\rho _i^{cr}},$$
(27)
where $\mu_i^{med}$ is the ’in-medium’ mass of the test particle, chosen according to the mass-differential production cross section, and $\mu_i^{vac}$ is the ’vacuum’ test particle mass, chosen according to the production cross section with a vacuum spectral function. $\rho_i^{cr}$ is the baryon density at the creation point, whereas $\rho_i(t)=\rho(\vec{r}_i(t))$ is the baryon density during the propagation. Setting the test particle mass $\mu_i$ to the ’vacuum’ mass $\mu_i^{vac}$, the effective in-medium mass $\mu_i^{*}$ is given as:
$$\mu _i^{*}(\rho _i(t))=\mu _i^{vac}+s_i(\rho _i(t)).$$
(28)
Eq. (28) gives the correct asymptotic behaviour: the effective mass at the creation point is kept unchanged ($\mu_i^{*}(\rho_i^{cr})=\mu_i^{med}$); during the propagation the test particle mass changes linearly with density, and outside the nucleus it becomes equal to the bare mass, $\mu_i^{*}(\rho_i=0)=\mu_i^{vac}$. The potential $s_i$ enters into the test particle propagation as a usual potential and therefore guarantees that this prescription does not violate energy conservation. After this paper was submitted, two other papers dealing with the propagation of broad resonances appeared on the preprint server .
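In the test particle code this prescription reduces to a few lines; the sketch below (with made-up numbers for the creation density and masses) illustrates Eqs. (27) and (28):

```python
def effective_mass(mu_vac, mu_med, rho_now, rho_cr):
    """Eqs. (27)-(28): linear-in-density interpolation between the in-medium
    mass at the creation point and the vacuum mass outside the nucleus."""
    s = (mu_med - mu_vac) * rho_now / rho_cr   # scalar potential s_i
    return mu_vac + s

# A test particle created at rho_cr = 0.8 rho0 with in-medium mass 0.60 GeV
# and vacuum mass 0.77 GeV, propagating outward through decreasing density:
rho0 = 0.168
for rho in (0.8 * rho0, 0.5 * rho0, 0.2 * rho0, 0.0):
    print(f"rho = {rho / rho0:4.2f} rho0 -> mu* = "
          f"{effective_mass(0.77, 0.60, rho, 0.8 * rho0):.3f} GeV")
```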
The potential Eq. (27) can assume rather large values in the case of broad resonances, even if the in-medium corrections are small. Its effect is nevertheless negligible in this case. As a check we have performed a calculation of photoproduction of $\rho$-mesons in which we distributed the masses $\mu_i^{med}$ according to the vacuum spectral function. This gave practically the same result as the calculation without the potential, since the lifetime of the $\rho$-mesons is so short that only very few propagate through a relevant density gradient for which the potential becomes important.
In Fig. 14 we show the results of our calculations with the described prescription for the $`\rho `$ and the $`\omega `$ mesons (curves labelled ’propagation to bare mass’, dashed lines). For the $`\rho `$-meson the divergence at the two-pion threshold is removed. For invariant masses above 500 MeV we get practically the same result as without the prescription because of the reason described in the preceding paragraph. The broadening of the $`\omega `$-peak is reduced because now only the $`\omega `$-mesons that decay inside the nucleus contribute to the broadening.
Another possibility to overcome the problems with the transport theoretical treatment of broad resonances is to avoid their explicit propagation. To this end we have also performed simulations in which we calculated the cross sections for elementary dilepton production via vector mesons as instantaneous one-step processes. The results are shown in Fig. 14 by the curves labelled ’instantaneous decay’ (dot-dot-dashed lines). The $\rho$-meson contribution is almost identical to the one calculated with the prescription described above because of its very short lifetime. For the $\omega$-meson we get a reduction by about a factor of 3, since in the instantaneous decay scheme we neglect the possibility that an $\omega$-meson can escape from the nucleus and then contribute to dilepton production with a much larger branching ratio than inside the nucleus. As it is important to take such dynamical effects into account, we consider the description via an instantaneous process to be inadequate.
The easiest way to cope with the divergence of the $\rho$-meson contribution is to use a minimum two-pion decay width. In Fig. 14 we show the result of a calculation where we set this minimum width to 10 MeV (dotted line). One sees that a large peak still remains. Moreover, this approach is in any case questionable.
For the reasons described above we consider the prescription ’propagation to bare mass’ as the only possibility to implement collisional broadening effects in our calculations. However, we want to stress that this prescription is not fully satisfactory, since it is only formulated on the level of our specific numerical solution of the transport equation and since it neglects any memory effects.
### B ’Dropping’ vector meson mass
In order to explore the observable consequences of vector meson mass shifts at finite nuclear density, the in-medium vector meson masses are modeled according to Brown/Rho scaling or Hatsuda and Lee by introducing a scalar potential $S_V(\vec{r})$:
$$S_V(\vec{r})=-\alpha \,m_V^0\,\frac{\rho (\vec{r})}{\rho _0},$$
(29)
where $\rho(\vec{r})$ is the nuclear density, $m_V^0$ the pole mass of the vector meson, $\rho_0=0.168\,\mathrm{fm}^{-3}$, and $\alpha \approx 0.18$ for the $\rho$ and $\omega$. The effective mass $\mu^{*}$ is then given as:
$$\mu ^{*}=\mu +S_V.$$
(30)
For the effective pole mass $\mu_0^{*}$ we thus get:
$\mu _0^{*}=\left(1-\alpha \,\frac{\rho (\vec{r})}{\rho _0}\right)m_V^0.$
In photon-nucleus reactions vector mesons are produced with large momenta relative to the nuclear medium. Within a resonance-hole model for the $\rho$-meson self-energy in the nuclear medium it has been shown in Ref. that the real part of the in-medium $\rho$-meson self-energy increases with momentum and crosses zero for a momentum of about 1 GeV. In order to explore the implications of such a behaviour we also use a momentum-dependent scalar potential $S_V^{mom}$ for the vector mesons:
$$S_V^{mom}(\vec{r},\vec{p})=S_V(\vec{r})\left(1-\frac{|\vec{p}|}{1\,\mathrm{GeV}}\right).$$
(31)
In our calculations we take the vector meson potentials into account in the calculation of the phase space factors in $\gamma N\to NV$ (Eq. (15)) and $\gamma N\to NV\pi$ (Eq. (20)). We neglect these modifications for the vector meson production in the string fragmentation model FRITIOF and also do not modify the $N\rho$-widths of the baryonic resonances.
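For reference, the two potentials of Eqs. (29) and (31) are trivial to evaluate; the sketch below does so for the $\omega$-meson at saturation density (the pole mass of 782 MeV is an assumed nominal value):

```python
def S_V(rho, m_V0, alpha=0.18, rho0=0.168):
    """Eq. (29): density-dependent scalar potential (dropping mass) [GeV]."""
    return -alpha * m_V0 * rho / rho0

def S_V_mom(rho, p_abs, m_V0, alpha=0.18, rho0=0.168):
    """Eq. (31): momentum-dependent version, vanishing at |p| = 1 GeV."""
    return S_V(rho, m_V0, alpha, rho0) * (1.0 - p_abs / 1.0)

m_omega = 0.782   # assumed omega pole mass [GeV]
rho0 = 0.168
print("pole mass at rho0:          ", m_omega + S_V(rho0, m_omega), "GeV")
print("same, but at |p| = 1 GeV:   ", m_omega + S_V_mom(rho0, 1.0, m_omega), "GeV")
print("same, but at |p| = 1.5 GeV: ", m_omega + S_V_mom(rho0, 1.5, m_omega), "GeV")
```

Note that above $|\vec{p}|=1$ GeV the momentum-dependent potential changes sign, mimicking the behaviour of the resonance-hole model discussed above.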
### C Dileptons from $`\gamma A`$ reactions: In-medium effects
In Fig. 15 we show the contribution of the $\omega$-meson to $e^+e^-$ production in $\gamma$Pb at a photon energy of 1.5 GeV. A dropping mass scenario according to Eq. (29) (dot-dashed line) gives a two-peak structure corresponding to $\omega$-mesons decaying inside and outside the nucleus, respectively. The additional inclusion of collisional broadening, as described in Section IV A, gives a substantial broadening of the dilepton yield from $\omega$-mesons that decay inside the nucleus. The height of the peak around the vacuum pole mass of the $\omega$-meson is hardly affected by the dropping mass scenario. On the one hand, the lowering of the mass reduces the vacuum peak because $\omega$-mesons decaying inside the nucleus contribute at lower masses. On the other hand, the total production of $\omega$-mesons is enhanced since the phase space factors entering the elementary photoproduction cross sections (Eqs. (15), (20)) are increased for lower masses. In our calculation both effects nearly cancel each other for masses around the vacuum pole.
In Fig. 16 (upper part) we show the total $e^+e^-$ yield for the same reaction. A dropping mass scenario for the vector mesons (dashed line) leads to a second peak structure at invariant masses of about 650 MeV. The peak around 780 MeV remains practically unchanged since it is dominated by $\omega$-mesons decaying outside the nucleus.
With the inclusion of collisional broadening the in-medium peak gets completely washed out (dotted line). The dilepton yield at intermediate masses is about a factor of 2 larger than in the bare mass case. At the two-pion threshold there is a visible discontinuity which results from our neglect of $`\rho `$-mesons with effective masses below the two-pion mass.
Using the momentum dependent scalar potential from Eq. (31) we obtain the curves labelled ’momentum dependent potential’ (dot-dot-dashed lines). The result is very close to the bare mass case as the vector mesons are mainly produced with momenta around 1 GeV for which the potential is zero.
In the lower part of Fig. 16 we show that the effect of a dropping vector meson mass at a photon energy of 2.2 GeV is qualitatively the same as for 1.5 GeV. The calculation with a momentum dependent potential gives again a result that is practically the same as for the bare mass case.
In photonuclear reactions vector mesons are in general produced with larger momenta relative to the nuclear medium than in heavy-ion collisions. Since the in-medium spectral functions of the vector mesons are momentum dependent one might thus observe rather different in-medium effects in both reactions. These, together with additional information from pion-nucleus and proton-nucleus reactions , might help to discriminate between different scenarios of medium modifications. Therefore a calculation of all reactions within one model is necessary for a conclusive interpretation of the experimental data. Our BUU transport model provides such a tool.
## V Summary
We have studied $e^+e^-$ production in $\gamma$C, $\gamma$Ca, and $\gamma$Pb reactions at photon energies of 0.8, 1.5, and 2.2 GeV within a semi-classical transport model. Various contributions to dilepton production were taken into account: the Dalitz decays of the $\Delta(1232)$, $\pi^0$, $\eta$, and $\omega$ as well as the direct dilepton decays of the vector mesons $\rho$, $\omega$, and $\varphi$. We have focused on observable effects of in-medium modifications of the vector mesons $\rho$ and $\omega$.
It was shown that the Bethe-Heitler process, which dominates all integrated cross sections for dilepton production, can be sufficiently suppressed by appropriate cuts on the lepton momenta. For dilepton invariant masses above 600 MeV the spectrum is dominated by the decays of the vector mesons ($\rho$, $\omega$, $\varphi$). A mass shift of these mesons as proposed in Refs. leads to a substantial enhancement of the dilepton yield at invariant masses of about 650 MeV, by about a factor of 3, and should clearly be visible in experiments that will be carried out at TJNAF . However, a calculation in which we used a linearly momentum-dependent potential for the vector mesons gave practically no effect compared to the bare mass case.
We have stressed the necessity of a simultaneous description of vector meson production in different nuclear reactions as one probes in-medium properties at different momenta relative to the nuclear medium.
Using the $\rho$ and $\omega$ mesons as examples, we have discussed the conceptual problems in the treatment of broad resonances in semi-classical transport models. We have presented a prescription that allows us to obtain reasonable results when taking into account in-medium spectral functions.
###### Acknowledgements.
The authors are grateful for discussions with W. Cassing. This work was supported by DFG and GSI Darmstadt.
# Theory of Bipolar Outflows from High-Mass Young Stellar Objects
## 1 Introduction
The subject of accretion-driven outflows from luminous, high-mass stars is still a fairly new area of research. In order to define the framework within which the significance of the various observational findings could be assessed and meaningful theoretical models formulated, it is useful to first review what is already known, and what the remaining open questions are, in the more mature field of research concerning accretion-driven outflows from low-luminosity stars.
There are now over 150 catalogued optical outflow sources associated with low-luminosity ($L_{\mathrm{bol}}<10^3L_\odot$) young stellar objects (YSOs). They appear as high-velocity (radial speeds $\sim 200$–$400$ km s$^{-1}$) ionized and neutral gas jets and as bipolar (molecular) flows, which evidently represent ambient gas entrained and driven by the jets (see Edwards et al. 1993a and Bachiller 1996 for reviews). There is a strong apparent correlation between the presence of outflow signatures (such as P Cyg line profiles, forbidden line emission, and thermal radio radiation) and accretion disk diagnostics (such as ultraviolet, infrared, and millimeter excess emission) in these sources (e.g., Hartigan et al. 1995). Most notably, the so-called classical T Tauri stars (cTT’s) consistently exhibit both types of properties, whereas the weak-lined T Tauri stars (wTT’s), which in most other respects closely resemble cTT’s, lack both outflow and accretion characteristics. Direct evidence for the presence of disks in YSOs has been obtained from millimeter and submillimeter interferometric mappings, which have resolved the structure and velocity fields of disks down to scales of a few tens of AU (e.g., Sargent 1996; Guilloteau et al. 1997; Kitamura et al. 1997; Wilner and Lay 1999). High-resolution images of disks in low-luminosity YSOs have also been obtained in the near infrared (NIR) using adaptive optics and in the optical using the Hubble Space Telescope (e.g., Stapelfeldt et al. 1997; McCaughrean et al. 1999).
Another important observational finding in low-mass ($M\lesssim 2M_\odot$) YSOs (where, from here on, “low-$M$” is used interchangeably with “low-$L_{\mathrm{bol}}$”) is that many of them have been inferred to possess a strong ($\sim 10^3$ G) stellar magnetic field that truncates the disk at a distance of a few stellar radii from the YSO and channels the flow toward high-latitude accretion shocks on the stellar surface. The evidence for this comes from the detection of periodic surface “hot spots” (e.g., Herbst et al. 1994) as well as from spectral line profiles, particularly of the upper Balmer lines and Na D (e.g., Edwards et al. 1994), Br$\gamma$ (e.g., Najita et al. 1996a), He I and He II (e.g., Guenther and Hessman 1993; Hamann and Persson 1992; Lamzin 1995), and the Ca II infrared triplet (e.g., Muzerolle et al. 1998). Direct measurements of stellar magnetic field strengths are difficult, but several kilogauss-strength detections have already been reported (e.g., Basri et al. 1992; Guenther et al. 1999; Johns-Krull et al. 1999). The magnetic interaction between the star and the disk could in principle account for the typically low rotation rates of cTT’s (e.g., Königl 1991) as well as for the systematically shorter rotation periods measured in wTT’s (Bouvier et al. 1993; Edwards et al. 1993b).
Finally, it is worth mentioning that accretion onto low-mass YSOs is evidently nonsteady. In particular, these objects exhibit episodic accretion events that have been inferred to last $\sim 10^2$ yr and to repeat on a time scale of $\sim 10^3$ yr during the initial $\sim 10^5$ yr of the YSO lifetime (e.g., Hartmann and Kenyon 1996). The mass accretion rate during these episodes is quite high, and it has been estimated that most of the mass that ends up in the central star could be accreted in this fashion. It has also been determined that these so-called FU Orionis outbursts give rise to high-velocity gas outflows that originate at the surfaces of the circumstellar accretion disk (Calvet et al. 1993).
The current “paradigm” of bipolar outflows in low-$`M`$ YSOs, which attempts to interpret the above observational results, can be summarized as follows. The outflows are powered by accretion, and probably represent centrifugally driven winds from the disk surfaces (see Königl and Ruden 1993 and Königl and Pudritz 1999 for reviews). The accretion and outflow are mediated by a magnetic field that corresponds either to interstellar field lines that had been advected by the inflowing matter (e.g., Wardle and Königl 1993) or to a stellar, dynamo-generated magnetic field (e.g., Shu et al. 1994). The origin of the field (and, correspondingly, the origin of the outflow in relation to the YSO), as well as the precise manner by which a sufficiently strong open field configuration is maintained along the disk, or, alternatively, the manner by which a stellar field can both channel an inflow and drive an outflow, are among the key issues of the theory that are not yet fully resolved. The currently favored interpretation of FU Orionis outbursts is that they represent a dwarf nova-like thermal instability in the innermost, weakly ionized (in quiescence) region of the disk. The effect of a magnetic field on the evolution of this instability and its possible role in driving the associated outflows are other important open questions in the theory.
Having outlined the relevant observations of low-luminosity YSOs, I now turn to examine the data on high-luminosity outflow sources. I focus attention on pre–main-sequence (PMS) stars and examine, first, whether the observations of higher-mass YSOs can be interpreted within the same framework as their lower-mass counterparts, and, second, whether the new information on high-luminosity outflow sources can shed light on any of the outstanding questions in the theory of low-$`L_{\mathrm{bol}}`$ YSOs.
## 2 Observations of Outflows from High-Luminosity Stars
Energetic outflows from luminous young stars have been detected by similar means to those used in identifying bipolar outflows in low-luminosity YSOs, namely, through molecular line emission from the swept-up ambient gas, and through optical and radio emission from the ionized gas component in stellar jets. Since high-mass YSOs are often found in regions of low-mass star formation, confusion with low-luminosity objects may complicate the determination of the flow structure as well as the identification of the driving source (which, for example, may be based on the presence of an isolated IRAS source or of an ultracompact HII region on or near the flow axis). For example, in the case of NGC 2024, Chernin (1996) has argued (on the basis of $4''$-resolution maps) that several outflows are, in fact, present in the region and that they do not appear to be driven by the known far-infrared sources. He suggested that the outflows might be driven, instead, by as yet unidentified low-mass stars. (Footnote 1: In this connection it is worth noting that NIR speckle interferometry of HAeBe stars has revealed that a significant ($31\pm 10\%$) fraction of them possess a close IR companion with a projected separation in the range $50$–$1300\,\mathrm{AU}$; Leinert et al. 1997.) In a similar vein, radio continuum sources interpreted as ultracompact HII regions could instead trace the sites of shock excitation by the outflow: such a misidentification has, for example, been claimed to have occurred in the case of the outflow from the high-luminosity YSO Cep A (see Corcoran et al. 1993 and Hughes and Wouterloot 1984). Nevertheless, the number of bipolar flows studied with adequate resolution by means of molecular line interferometry has been gradually increasing, and there is accumulating evidence that, as in the case of low-mass objects, they are a common property also of newly formed high-mass stars (see Richer et al. 1999 for a review). It appears that the higher-luminosity objects typically produce less well-collimated molecular flows than their lower-$L_{\mathrm{bol}}$ counterparts, although this could possibly be due to the fact that these outflows tend to emerge from their surrounding molecular cloud cores at a relatively early stage. The basic spatial and kinematic structures of the flows do not, however, seem to depend on the underlying source luminosity, and the momentum deposition rate in the outflow evidently increases as a simple power law of the luminosity for $L_{\mathrm{bol}}$ ranging all the way from $\sim 1$ to $10^6L_\odot$.
Optical observations of jets from high-luminosity sources are subject to several detection biases (Mundt and Ray 1994), including short evolutionary timescales, typical association with comparatively distant star-formation regions, and confusion by bright, extended reflection nebulae as well as by background HII emission. Another bias can be traced to the effect of the ionizing flux from the central object (see Fig. 1 below). Despite these complicating factors, a significant number of sources with outflow signatures have already been detected. Mundt and Ray (1994) list 24 examples of optical jets associated with Herbig Ae/Be stars (HAeBe’s) and other high-luminosity YSOs, whereas Corcoran and Ray (1997) report that 28 out of 56 HAeBe’s that they studied had detectable \[OI\]$\lambda$6300 forbidden line emission. As discussed in the above references, the jet outflow speeds in high-luminosity ($L_{\mathrm{bol}}\gtrsim 10^3L_\odot$) sources lie in the range $600$–$900$ km s$^{-1}$, which are a factor $\sim 2$–$3$ higher than the corresponding speeds in the low-$L_{\mathrm{bol}}$ sources, and have inferred mass outflow rates that are a factor $\sim 10$–$100$ higher than in the low-luminosity YSOs. There is also an indication that a larger fraction of the jets in luminous sources are poorly collimated.
## 3 The Accretion Disk Connection
There is now growing evidence that the correlation found in low-mass YSOs between the signatures of energetic outflows and accretion disks extends also to the more massive HAeBe stars. Corcoran and Ray (1997) discovered that, in most cases, the centroid velocity of the low-velocity component of the \[OI\]$\lambda$6300 emission line is blueshifted with respect to the stellar rest velocity. The same behavior is found in cTT’s and has been convincingly interpreted as evidence for the presence of extended, optically thick disks that block the redshifted line-emission region from our view. The forbidden emission lines in cTT’s often exhibit both a low-velocity component (LVC), which has been attributed to a disk-driven outflow, and a high-velocity component (HVC), whose interpretation is still controversial but which evidently originates in the vicinity of the YSO. The HVC is also observed in the \[OI\] line emission from some HAeBe’s, but it is found less frequently than in cTT’s (see also Böhm and Catala 1994). The latter finding was attributed by Corcoran and Ray (1997) to an evolutionary effect (wherein the HVC disappears before the LVC as the outflow activity gradually diminishes), although it is conceivable that at least in some luminous sources the absence of a high-velocity neutral oxygen component may be the result of photoionization near the outflow axis (S. Martin 1994, personal communication). In view of the fact that the ionization potential of neutral oxygen is nearly identical to that of hydrogen, one would not expect to detect \[OI\] emission within the Strömgren surface bounding the HII region around the star. If the disk is a source of a centrifugally driven outflow, the density distribution around the star will be highly stratified (e.g., Safier 1993) and the Strömgren surface will have a roughly conical shape centered on the symmetry axis (see Fig. 1). Under these circumstances, the HVC \[OI\] emission, produced near the stellar surface, will be absent, but the LVC, presumably generated above the disk surface further out in a region that is shielded from the ionizing radiation, will be detectable. This interpretation is supported by observations of a source like LkH$\alpha$ 234, in which a well-collimated, high-velocity jet is detected (Ray et al. 1990) (Footnote 2: It has been suggested, however, that the jet in this source is driven by a cold mid-infrared companion rather than by LkH$\alpha$234 itself; Cabrit et al. 1997.) even though only a low-velocity \[OI\] component is seen in the vicinity of the central star.
Another robust correlation, identified by Corcoran and Ray (1998), relates the \[OI\]$\lambda$6300 line luminosity (a signature of an outflow) and the infrared excess luminosity (a possible signature of a disk). It appears that the relationship between these two quantities, originally found in cTT’s (e.g., Cabrit et al. 1990), extends smoothly to YSOs with masses of up to $10M_\odot$ and spans 5 orders of magnitude in infrared luminosity. Corcoran and Ray (1998) analyzed additional correlations between the forbidden-line and NIR emission properties of HAeBe’s and pointed out that they all follow the same trends as in cTT’s. They also found that all the HAeBe stars in their sample that exhibit both forbidden line emission and IR excesses have NIR colors that are consistent with the presence of an optically thick disk or a disk surrounded by a dusty envelope. Previous infrared studies of HAeBe’s (e.g., Hillenbrand et al. 1992) have revealed that many of these objects show infrared excesses with a spectral shape $\lambda F_\lambda \propto \lambda ^{-4/3}$ ($\lambda \gtrsim 2.2\,\mu$m). Such spectra are characteristic of optically thick disks that are either “active” viscous accretion disks or “passive” reprocessing flat disks. The apparent spectral decline below $2.2\,\mu$m has been interpreted as indicating the presence of effective “holes” in the optically thick disks on scales $3$–$25$ times the stellar radius (see also Lada and Adams 1992). It was originally suggested that the holes could represent regions where the disk is either truncated by a stellar magnetic field or else is optically thin. For reasonable accretion rates, a stellar magnetic field is unlikely to truncate a disk beyond a few stellar radii (e.g., Königl 1991). However, the innermost regions might be optically thin because of a low local mass accretion rate (Bell 1994; see §5). The mass accretion rate required to reproduce the $3\,\mu$m peak in the NIR spectral energy distribution is too large ($\sim 10^{-6}M_\odot\,\mathrm{yr}^{-1}$) for the innermost disk regions to remain optically thin, but Hartmann et al. (1993) argued that the observed peak could, instead, be due to the transient heating of grains in a dusty envelope by ultraviolet photons from the central star (see also Natta et al. 1992, 1993).
The interpretation of the infrared and sub-mm spectra of HAeBe’s in terms of disks has not been universally accepted: several authors have, in fact, claimed that the spectra can be explained entirely in terms of dusty spherical envelopes (e.g., Miroshnichenko et al. 1997; Pezzuto et al. 1997). It was similarly suggested that much of the millimeter emission in these systems arises in extended envelopes, and, furthermore, that the contribution from ionized gas may have led to an overestimate of the dust emission in many sources (e.g., Di Francesco et al. 1997). Moreover, in some cases there are indications that the measured far-infrared emission may not even arise in the immediate vicinity of the HAeBe’s (Di Francesco et al. 1998). However, several strong disk candidates have by now been identified by mm-wavelength interferometry (e.g., Mannings and Sargent 1997). Among the sources observed by Mannings and Sargent, two appear as elongated molecular line-emission regions and exhibit ordered velocity gradients along their major axes, which is strongly suggestive of the presence of rotating disks. The disk radii and masses determined by these authors are similar to those found in cTT’s, although, in view of the short clearing time of optically thick disks inferred for HAeBe’s ($\lesssim 0.3\,\mathrm{Myr}$, as compared with $\gtrsim 0.3\,\mathrm{Myr}$ for cTT’s; Hillenbrand et al. 1992), this may reflect the comparatively large age of the objects in their sample ($\sim 5$–$10\,\mathrm{Myr}$, compared to $\lesssim 1\,\mathrm{Myr}$ for typical cTT’s). Further support for the presence of disks around HAeBe’s has come from adaptive-optics IR imaging polarimetry and HST optical images of the object at the origin of the R Mon outflow, which were interpreted in terms of a $\sim 10^2$ AU optically thick accretion disk surrounding a $\sim 10M_\odot$ HAeBe star (Close et al. 1997).
Another suggestive piece of evidence is the detection in several HAeBe’s of CO overtone bandhead emission that exhibits broadening by a peristellar velocity distribution that scales with radius as $v(r)\propto r^{-1/2}$ (e.g., Chandler et al. 1995; Najita et al. 1996b). This velocity field is consistent with Keplerian rotation and the emission has therefore been attributed to a circumstellar disk. An alternative interpretation (which, like the disk model, also applies to low-luminosity YSOs in which CO bandhead emission has been detected) is that the emission originates in a magnetic accretion funnel that channels the inflowing matter from an accretion disk, with the observed broadening produced as the gas free-falls along the stellar magnetic field lines toward the stellar surface (Martin 1997). There are, in fact, other tantalizing observational clues that point to the presence of magnetospheric accretion in certain HAeBe’s. These include, in particular, the detection of inverse P Cygni (IPC) H$\beta$ line profiles in a number of such stars (see Fig. 2). In one survey of HAeBe stars (Ghandour et al. 1994), 4 out of 29 objects were found to show clear evidence for such profiles, with the redshifted absorption feature occurring between $100$ and $700$ km s$^{-1}$ relative to the rest velocity (L. Hillenbrand 1996, personal communication). (Footnote 3: For comparison with this $14\%$ apparent frequency, $40\%$ (6/15) of the cTT’s surveyed by Edwards et al. (1994) exhibited IPC profiles in H$\beta$. The detection frequency for the above sample of HAeBe’s would, however, increase to $25\%$ if one also included objects with a more tentative IPC classification; see Ghandour et al. 1994.) Sorelli et al. (1996) proposed a similar interpretation of the redshifted Na D absorption components observed in several HAeBe’s. As the latter authors have noted, the Na D lines are likely to originate in a region containing neutral hydrogen, which could give rise to detectable Ly$\alpha$ absorption features. These may, however, prove difficult to identify in the presence of broad emission features originating in an associated outflow. (Footnote 4: Blondel et al. (1993) detected Ly$\alpha$ emission lines in several HAeBe stars and attributed them to radiation from magnetic accretion funnels; however, one can argue that a wind origin is more likely in this case (L. Hartmann 1996, personal communication).) It is, however, important to bear in mind that alternative interpretations of the IPC profiles have been proposed. For example, they have been attributed to the evaporation of comet-like bodies that reach the vicinity of the star (e.g., Grinin et al. 1994) as well as to ongoing quasi-spherical infall onto the YSO (Bertout et al. 1996). A detailed comparison between the data and the specific predictions of each model would therefore be necessary before one could accept the presence of IPC profiles in these stars as unequivocal evidence for magnetospheric accretion (see Sorelli et al. 1996). Firm evidence might be provided, for example, by the discovery of periodic surface “hot spots” of the type previously found in some cTT’s, although the low expected contrast between the effective temperatures of the accretion shock and of the surrounding stellar photosphere in a hot star would make such detections difficult.
Disk signatures (including NIR excess, optical veiling, and CO bandhead emission) have been reported also in higher-mass ($5$–$20M_\odot$) stars (Hanson et al. 1997). These findings are of particular interest, since, for a representative mass inflow rate $\dot{M}_{\mathrm{in}}=10^{-5}M_\odot\,\mathrm{yr}^{-1}$, stars with masses $M>8M_\odot$ are predicted to reach the main sequence before disk accretion has ceased (Palla and Stahler 1993). Such disks may therefore be expected to show the effects of the interaction with the strong radiation field and stellar wind that are produced by a high-mass main-sequence star. The direct imaging of disk candidates in high-$M$ stars and a comparison between their properties and those of disks in low-$M$ YSOs are thus an important challenge for future observations. Since the structure of a circumstellar disk likely depends on how the central star is formed, these studies could also help to test the suggestion (Stahler et al. 1999) that stars with masses $M\gtrsim 10M_\odot$ are assembled through the coalescence of lower-mass objects at the centers of young stellar clusters. One piece of evidence that has been cited in support of this interpretation is the apparent stellar clustering around HAeBe stars: Testi et al. (1997; see also Hillenbrand 1995) found that such clustering becomes significant for stars of spectral type B7 or earlier and argued that this is consistent with intermediate-mass YSOs representing a transition between the low-mass and high-mass modes of star formation. If so, then the study of disks in HAeBe’s could also be useful for testing the coalescence picture.
It is noteworthy that several independent studies have established correlations of the type $\dot{M}\propto L_{\mathrm{bol}}^{0.6}$ for the mass accretion rate (from IR continuum measurements; Hillenbrand et al. 1992), the ionized mass outflow rate in the jets (from radio continuum observations; Skinner et al. 1993), and the bipolar molecular outflow rate (from CO line measurements; Levreault 1988) in both low-luminosity and high-luminosity YSOs. Taken together, these relationships suggest that a strong link between accretion and outflow exists in both low-mass and high-mass stars.
## 4 Modeling Issues
The detection of similar accretion and outflow signatures in low-$`L_{\mathrm{bol}}`$ and high-$`L_{\mathrm{bol}}`$ YSOs and the evidence for a strong correlation between them that continues smoothly from low- to high-luminosity objects provide strong arguments in favor of a similar underlying physical mechanism operating in all newly formed stars. In particular, the disk-driven hydromagnetic wind scenario, which is currently the leading model for the origin of bipolar outflows in low-$`L_{\mathrm{bol}}`$ stars (see §1), may also apply to HAeBe’s and even higher-mass stars. It is important to note, however, that the basic model worked out for the low-luminosity objects would need to be extended and modified by the inclusion of several new effects that are specific to high-luminosity objects. Some of these effects have already been considered before in a different context, but incorporating them all together into a self-consistent accretion/outflow model is one of the main theoretical challenges in this new field of research. Among the anticipated new elements of the theory, one can list the following:
* Enhanced field–matter coupling near the disk surface due to both the direct and the diffuse ionizing radiation from the central star, leading to higher mass accretion and outflow rates (e.g., Pudritz 1985).
* Disk photoevaporation (e.g., Hollenbach et al. 1994; Yorke and Welz 1996), creating a low-velocity ($v$ of order the sound speed $c_\mathrm{s}\sim 10\,\mathrm{km\,s}^{-1}$) disk outflow beyond the “gravitational radius” $r_\mathrm{g}=GM/c_\mathrm{s}^2\approx 10^{15}(M/10M_\odot)\,\mathrm{cm}$ (or even further in the presence of a strong stellar wind). Photoevaporation may facilitate the injection of mass into a centrifugally driven disk outflow, but a hydromagnetic wind from the inner disk could reduce the mass evaporation rate further out by intercepting some of the ionizing radiation from the central star.
* A strong, radiatively driven stellar wind, which may be transformed into a highly collimated jet through a dynamical interaction with a disk-driven wind (e.g., Frank and Mellema 1996) or a disk magnetic field (e.g., Kwan and Tademaru 1995). A strong, radiatively driven outflow may also be induced in the disk by the intercepted stellar radiation: this outflow is predicted to originate within a few stellar radii from the YSO’s surface, to be predominantly equatorial, and to have significantly lower speeds than those of the stellar wind (Drew et al. 1998).
* Radiation pressure effects on dust. Because of the large dust scattering cross section at UV and optical wavelengths, the effective Eddington luminosity for a dusty gas is much lower than the electron-scattering critical luminosity, and is given by $L_{\mathrm{crit},\mathrm{dust}}\approx 4\times 10^2(M/10M_\odot)\,L_\odot$ (e.g., Wolfire and Cassinelli 1987). Radiation pressure effects could thus be important beyond the dust sublimation radius $r_{\mathrm{sub}}\sim 1\,(L_{\mathrm{bol}}/10^2L_\odot)^{1/2}\,\mathrm{AU}$ and might contribute to the flow acceleration and also lead to the “opening up” of the streamlines (see Königl and Kartje 1994); these characteristic scales are evaluated numerically in the sketch following this list. The latter effect could be at least partially responsible for the apparently lower degree of collimation of jets from high-$L_{\mathrm{bol}}$ sources (Mundt and Ray 1994; see §2).
* Photoionization heating and radiative excitation. The strong stellar radiation field is expected to be the main heating mechanism of the gas in the stellar vicinity. In particular, it may dominate the ambipolar diffusion heating that is important for weakly ionized outflows in low-luminosity YSOs (Safier 1993) and could, in fact, cut it off altogether within the Strömgren surface. Radiative excitation may give rise to unique emission signatures, including Ly$`\alpha `$ lines (e.g., Blondel et al. 1993) and enhanced overtone emission from the higher CO bandheads (Martin 1997).
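To make the scalings quoted in this list concrete, the following sketch (our own illustration; the stellar mass and luminosity are arbitrary example values) evaluates the gravitational radius, the dust sublimation radius, and the dust Eddington ratio for a $10M_\odot$ YSO with $L_{\mathrm{bol}}=10^3L_\odot$:

```python
G   = 6.674e-8                    # gravitational constant [cm^3 g^-1 s^-2]
Msun, Lsun = 1.989e33, 3.828e33   # solar mass [g] and luminosity [erg s^-1]
AU  = 1.496e13                    # astronomical unit [cm]

def r_g(M_Msun, cs_kms=10.0):
    """Gravitational radius r_g = G M / c_s^2 beyond which photoevaporation
    can unbind disk gas [cm]."""
    return G * M_Msun * Msun / (cs_kms * 1e5) ** 2

def r_sub(L_Lsun):
    """Dust sublimation radius, r_sub ~ 1 (L_bol / 10^2 Lsun)^(1/2) AU."""
    return (L_Lsun / 1e2) ** 0.5

def L_crit_dust(M_Msun):
    """Effective Eddington luminosity for dusty gas,
    L_crit,dust ~ 4e2 (M / 10 Msun) Lsun."""
    return 4e2 * (M_Msun / 10.0)

M, L = 10.0, 1e3   # example: a 10 Msun YSO with L_bol = 10^3 Lsun
print(f"r_g     = {r_g(M):.2e} cm  (~{r_g(M) / AU:.0f} AU)")
print(f"r_sub   = {r_sub(L):.1f} AU")
print(f"L / L_crit,dust = {L / L_crit_dust(M):.1f}")
```

For these example parameters the YSO is super-Eddington with respect to dust ($L/L_{\mathrm{crit},\mathrm{dust}}\simeq 2.5$), so radiation pressure on grains is indeed expected to affect the outflow beyond $r_{\mathrm{sub}}$.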
The incorporation of low-luminosity and high-luminosity YSOs into the same theoretical framework may help resolve some of the outstanding issues in the modeling of bipolar outflows from low-mass stars. For example, the origin of the LVC and HVC forbidden line emission is not yet fully understood. Magnetically driven outflows have been leading candidates for their interpretation, but both stellar field-based (e.g., Shang et al. 1998) and disk field-based (e.g., Cabrit et al. 1999) models have been proposed. As noted in §3, the apparent decrease in the detection frequency of the \[OI\] HVC (relative to the \[OI\] LVC) in higher-luminosity YSOs might be related to the locations of the HVC and LVC emission regions with respect to the star. If so, this could prove useful in the attempt to discriminate between the competing models.
## 5 Further Questions
### 5.1 The Role of a Stellar Magnetic Field in Channeling Accretion and Driving an Outflow
In one of the proposed models for the origin of bipolar outflows and jets from low-mass stars, the stellar magnetic field plays a pivotal role in the generation of the underlying centrifugally driven wind (e.g., Shu et al. 1999). If all YSO outflow sources can indeed be described by the same general model (see §4), then this interpretation requires that HAeBe’s (as well as higher-mass stars) should have a strong stellar magnetic field if they give rise to an energetic outflow. (As discussed by Catala et al. 1986, at least $`20\%`$ of HAeBe’s, those belonging to the P Cygni subclass (see Finkenzeller and Mundt 1984), give rise to powerful winds with mass loss rates in the range $`10^{-8}`$–$`10^{-6}M_\odot \mathrm{yr}^{-1}`$. The existence of a stellar magnetic field is also required in models that attribute the acceleration of these winds to hydromagnetic waves; e.g., Strafella et al. 1998.) As was pointed out in §3, there is, in fact, persuasive evidence in at least some HAeBe’s for the existence of a magnetic field that is strong enough to channel accreting gas onto the stellar surface. There is also evidence for a strong, ordered magnetic field in nonaccreting HAeBe stars. In particular, VLBI radio measurements have revealed the presence of extended ($`3`$–$`10R_{\ast }`$), organized magnetic field configurations that are similar to those observed in certain wTT’s, and which also resemble the field structures of magnetic Ap and Bp stars, in at least two such objects (André et al. 1992).
The evidence for strong magnetic fields in at least some HAeBe’s is puzzling, since such stars (HAeBe stars are commonly taken to be YSOs in the mass range $`2M_\odot \lesssim M\lesssim 8M_\odot `$) are not expected to have a deep convective layer, and therefore, according to the conventional picture, could not generate a strong field through dynamo action. In fact, according to the calculations of Palla and Stahler (1993), stars that accrete at a constant rate of $`10^{-5}M_\odot \mathrm{yr}^{-1}`$ during the PMS phase should be fully convective for $`M<2.4M_\odot `$, have a subsurface convection layer due to deuterium burning for $`2.4M_\odot <M<3.9M_\odot `$, and be fully radiative for $`M>3.9M_\odot `$. One possible resolution of this apparent difficulty is that the observed structures represent a fossil magnetic field and that HAeBe’s possessing such strong fields are, in fact, the precursors of the main-sequence Ap/Bp stars (André et al. 1992). The association with Ap/Bp stars could in principle be tested by comparing the rotation rates of HAeBe’s that are strong nonthermal radio sources (accepting André et al.’s argument that such emission is a reliable tracer of a large-scale, organized stellar field) with the rotation rates of other HAeBe’s. As a class, HAeBe’s are intermediate rotators (rotating at $`\sim 0.3`$ of breakup with mean projected speeds of $`\sim 100\mathrm{km}\mathrm{s}^{-1}`$; e.g., Böhm and Catala 1995), but those objects that have extended magnetospheres may be expected to rotate more slowly, in accordance with the observed trend in Ap/Bp stars. It remains to be explained, however, how a closed magnetic field configuration would arise from the unipolar field that is likely to be incorporated into the star during the PMS accretion phase (e.g., Li and Shu 1996).
An alternative interpretation of the field is that it originates in a stellar dynamo that taps directly into the large rotational energy reservoir of the star without requiring convection to also be present. Tout and Pringle (1995) and Lignières et al. (1996) have explored specific models along these lines. A dynamo mechanism is a promising candidate for the enhanced surface activity exhibited by HAeBe stars of the P Cygni subclass (e.g., Böhm and Catala 1995), and it may also account for the X-ray emission detected toward a fair number of HAeBe’s (Zinnecker and Preibisch 1994). (Zinnecker and Preibisch 1994 discuss stellar wind shocks as the most likely source of the X-rays. However, Skinner and Yamauchi 1996, analyzing ASCA data for a bright Herbig Ae star, have concluded that the X-rays in that case originate from the immediate stellar vicinity, although possibly from a late-type companion. In fact, confusion with nearby low-mass YSOs is a potentially serious problem for measurements with existing X-ray telescopes, whose angular resolution typically does not exceed $`5^{\prime \prime }`$.) Tout and Pringle’s (1995) model requires the mean poloidal field to be rather weak and thus cannot account for either the large-scale organized field or the magnetically channeled accretion inferred in several of these objects. It is also unclear how any model that relates the dynamo action to the stellar rotation can explain the apparent lack of a correlation between the various activity tracers and the projected rotation speed (Zinnecker and Preibisch 1994; Böhm and Catala 1995), which contrasts with the observed trend in T Tauri stars (e.g., Neuhäuser 1997). (Note, however, that if some of the observed activity in HAeBe’s is powered by magnetically channeled accretion, then the absence of a clear correlation with the stellar rotation might be due to the interaction between the stellar magnetic field and a circumstellar disk, which would tend to reduce the rotation rate; see §1. As a test of the latter effect, it would be useful to search for a correlation, similar to the one found in T Tauri stars, between the rotation speeds of HAeBe’s and the strength of their accretion signatures.) It is interesting to note in this connection that even the youngest low-mass YSOs, which likely are not yet fully convective objects, already show evidence for strong outflows. In fact, the momentum discharges inferred in these so-called Class 0 sources are a factor of $`\sim 10`$ higher than the values implied by the tight correlation between the momentum discharge and the source bolometric luminosity that is found in older (Class I) low-mass YSO’s (Bontemps et al. 1996). If the Class 0 outflows are driven by a stellar magnetic field, then the mechanisms invoked for producing strong fields in non-convective YSOs could be relevant not only to HAeBe’s but also to very young low-mass objects. In order to properly address the above questions, it would be necessary to carry out a self-consistent calculation of the stellar and magnetic field evolution that would take into account the spatial distribution of radiative and convective regions within the star as well as the effects of field-mediated accretion onto the YSO, magnetic braking of the stellar rotation, etc.
### 5.2 Does the FU Orionis Phenomenon Have an Analog in Higher-Mass Stars?
Bell (1994) suggested that the large apparent “holes” that have been inferred from the infrared spectral modeling of HAeBe’s (see §3) may be associated with the “low” phase of a thermal ionization instability that operates in the inner disk region. As was mentioned in §1, the “high” phase of this instability has been proposed as the origin of the FU Ori outburst phenomenon in low-mass YSOs (e.g., Bell and Lin 1994). Given that several other arguments also point to a comparatively low ($`\lesssim 10^{-7}M_\odot \mathrm{yr}^{-1}`$; e.g., Hartmann et al. 1993) current mass accretion rate onto the central object in a number of HAeBe’s, it is conceivable that a similar mode of nonsteady accretion is present in higher-mass stars. One possible check on this idea might be a search for multiple bow shocks along the associated jets: such shocks have been detected in several low-$`L_{\mathrm{bol}}`$ objects, and it has been argued that they are likely associated with the strong disk-outflow episodes that characterize FU Ori outbursts (e.g., Reipurth 1989). In the case of the higher-luminosity YSOs it would, however, be necessary to examine the effect of the strong stellar radiation field on the evolution of the instability. In particular, one would need to check to what extent the inner disk could be maintained in a state of low temperature and ionization during the “low” phase of the instability (see van Paradijs 1996 and King et al. 1996).
### 5.3 Disk Evolution around Massive Stars
Accretion disks around high-mass stars are probably the predecessors of the Vega-type systems first discovered by IRAS; in particular, $`\beta `$ Pictoris has likely evolved from an HAeBe star. The evolution of such disks could in principle be traced by comparing the high-$`M`$ analogs of cTT’s with the corresponding analogs of wTT’s as well as with Vega-type systems (e.g., Strom et al. 1991). The analogs of cTT’s and wTT’s can be identified through their accretion signatures (in particular, their infrared spectra; Lada and Adams 1992, Hillenbrand et al. 1992) as well as through their outflow signatures (in particular, their forbidden line emission and the H$`\alpha `$ equivalent width; Corcoran and Ray 1998). A systematic search for the high-mass analogs of wTT’s may potentially be carried out by means of an X-ray survey (cf. Casanova et al. 1995), although this approach is subject to the caveats that the X-ray emission from HAeBe’s might be associated with outflows rather than being intrinsic to the YSOs (Zinnecker and Preibisch 1994) or that it originates in close, low-mass companions (e.g., Skinner and Yamauchi 1996). It is, however, also conceivable that a sufficiently large data base might be obtained as part of a more general mapping project, such as the Sloan Digital Sky Survey.
Hillenbrand et al. (1992) concluded from an analysis of a sample of 47 HAeBe’s that the shorter evolutionary timescales of more massive stars are reflected in the clearing timescales of their respective disks, with optically thick disks around such stars surviving for less than $`\sim 0.3\mathrm{Myr}`$ (as compared with $`\sim 3\mathrm{Myr}`$ for a typical T Tauri star). It has, in fact, been surmised that the disk clearing time may in some cases be even shorter than the stellar evolution time, resulting in a possibly significant shortening of the evolutionary phase over which these YSOs would be classified as HAeBe stars (de Winter et al. 1997). The faster disk evolution in the more luminous YSOs may reflect the effects of photoevaporation and strong stellar winds in such objects (see §4), although it is also possible that the time-averaged mass accretion rate through these disks is higher (perhaps as a result of a stronger ionizing flux from the central star; see §4). The disk clearing mechanism in YSOs is still an open question, and its resolution could benefit from continued comparative studies of low-mass and high-mass objects. Such studies, coupled with searches for low-luminosity companions, could also shed light on the issue of planet formation in protostellar disks (e.g., by constraining the timescale of planetesimal growth; see Strom et al. 1991).
I am grateful to Steve Martin for his valuable input into the contents of this paper. I also thank Lee Hartmann, Lynne Hillenbrand, Debra Shepherd and Frank Shu for useful discussions and correspondence, Lynne Hillenbrand for kindly providing data in advance of publication, and Joseph Cassinelli for inspiring this undertaking. This work was supported in part by NASA grant NAG 5-3687.
# The static 𝑄𝑄̄ interaction at small distances and OPE violating terms.
## Abstract
Nonperturbative contribution to the one-gluon exchange produces a universal linear term in the static potential at small distances $`\mathrm{\Delta }V=\frac{6N_c\alpha _s\sigma r}{2\pi }`$. Its role in the resolution of long–standing discrepancies in the fine splitting of heavy quarkonia and improved agreement with lattice data for static potentials is discussed, as well as implications for OPE violating terms in other processes.
1. Possible nonperturbative contributions from small distances have drawn a lot of attention recently . In terms of the interquark potential, the appearance of linear terms in the static potential $`V(r)=`$ const $`\times r`$, where $`r`$ is the distance between charges, implies violation of the OPE, since const $`\sim `$ (mass)<sup>2</sup> and this dimension is not available in terms of field operators. There is, however, some analytic and numerical evidence for the possible existence of such terms $`O(m^2/Q^2)`$ in the asymptotic expansion at large $`Q`$.
On a more phenomenological side the presence of linear term at small distances, $`r<T_g`$, where $`T_g`$ is the gluonic correlation length , is required by at least two sets of data.
First, the detailed lattice data do not support the much weaker quadratic behaviour $`V(r)\sim `$ const $`\times r^2`$, following from the OPE and the field correlator method , and instead prefer the same linear form $`V(r)=\sigma r`$ at all distances (in addition to the perturbative $`-\frac{C_2\alpha _s}{r}`$ term). Second, the small–distance linear term is necessary for the description of the fine splitting in heavy quarkonia, since the spin–orbit Thomas term $`V_t=-\frac{1}{2m^2r}\frac{dV}{dr}`$ is sensitive to the small-$`r`$ region and an additional linear contribution at $`r<T_g`$ is needed to fit the experimental splitting . Moreover, lattice calculations display the $`1/r`$ behaviour of $`V_t`$ over the whole measured region, down to $`r=0.1\mathrm{fm}`$.
Of crucial importance is the sign of the $`O(m^2/Q^2)`$ term, since the usual screening correction (real $`m`$) leads to a negative sign of the linear potential, and one needs small–distance nonperturbative (NP) dynamics, which produces a negative (tachyonic) sign of $`m^2`$ . Phenomenological implications of such contributions have been thoroughly studied . In what follows we show that the interaction of the gluon spin with the NP background indeed yields a tachyonic gluon mass at small distances.
2. In this letter we report the first application of the systematic background perturbation theory to the problem in question. One starts with the decomposition of the full gluon vector potential $`A_\mu `$ into nonperturbative (NP) background $`B_\mu `$ and perturbative field $`a_\mu `$,
$$A_\mu =B_\mu +a_\mu ,$$
(1)
and makes use of the ’t Hooft identity for the partition function
$$Z=\int DA_\mu \,e^{-S(A)}=\frac{1}{N}\int DB_\mu \,\eta (B)\int Da_\mu \,e^{-S(B+a)}$$
(2)
where $`\eta (B)`$ is the weight for nonperturbative fields, defining the vacuum averages, e.g.
$$\langle F_{\mu \nu }^B(x)\mathrm{\Phi }^B(x,y)F_{\lambda \sigma }^B(y)\rangle _B=\frac{\widehat{1}}{N_c}(\delta _{\mu \lambda }\delta _{\nu \sigma }-\delta _{\mu \sigma }\delta _{\nu \lambda })D(x-y)+\mathrm{\Delta }_1$$
(3)
where $`F_{\mu \nu }^B,\mathrm{\Phi }^B`$ are field strength and parallel transporter made of $`B_\mu `$ only; $`\mathrm{\Delta }_1`$ is the full derivative term not contributing to the string tension $`\sigma `$, which is
$$\sigma =\frac{1}{2N_c}\int d^2x\,D(x)+O(\langle FFFF\rangle )$$
(4)
The background perturbation theory is an expansion of the last integral in (2) in powers of $`ga_\mu `$ and averaging over $`B_\mu `$ with the weight $`\eta (B_\mu )`$, as shown in (3). Referring the reader to for explicit formalism and renormalization, we concentrate below on the static interquark interaction at small $`r`$. To this end we consider the Wilson loop of size $`r\times T`$, where $`T`$ is large, $`T\to \mathrm{\infty }`$, and define
$$\langle W\rangle _{B,a}=\langle P\,\mathrm{exp}\left(ig\oint _C(B_\mu +a_\mu )\,dz_\mu \right)\rangle _{B,a}\equiv \mathrm{exp}\{-V(r)T\}$$
(5)
Expanding (5) in powers of $`ga_\mu `$, one obtains
$$\langle W\rangle =W_0+W_2+\mathrm{\cdots };\qquad V=V_0(r)+V_2(r)+V_4(r)+\mathrm{\cdots },$$
(6)
where $`V_n(r)`$ corresponds to $`(ga_\mu )^n`$ and can be expressed through $`D,\mathrm{\Delta }_1`$ and higher correlators .
Coming now to $`V_2(r)`$, describing one exchange of perturbative gluon in the background, one finds from the quadratic in $`a_\mu `$ term in $`S(B+a)`$ in the background Feynman gauge the gluon Green’s function
$$G_{\mu \nu }=(D_\lambda ^2\delta _{\mu \nu }+2igF_{\mu \nu }^B)^{-1},\qquad D_\lambda ^{ca}=\partial _\lambda \delta _{ca}+gf^{cba}B_\lambda ^b$$
(7)
Expanding in powers of $`gF_{\mu \nu }^B,G_{\mu \nu }`$ can be written as
$$G=D^{-2}-D^{-2}\,2igF^B\,D^{-2}+D^{-2}\,2igF^B\,D^{-2}\,2igF^B\,D^{-2}-\mathrm{\cdots },$$
(8)
the first term on the r.h.s. of (8) corresponds to the spinless gluon exchange, propagating in the confining film covering the Wilson loop . As was shown recently , the term $`D^{-2}`$ produces only weak corrections $`O(r^3)`$ to the usual perturbative potential at small distances, while it corresponds to the massive spinless propagator with mass $`m_0`$ at large distances.
In what follows we concentrate on the third term in (8), yielding for $`W_2^{(3)}`$
$$W_2^{(3)}=T\int \frac{\alpha _s(k^2)}{\pi ^2}\,\frac{d^3k\,e^{i\vec{k}\cdot \vec{r}}\,\mu ^2(k^2)}{(\vec{k}^2+m_0^2)^2}=\mathrm{\Delta }V_2(r)\,T$$
(9)
where we have defined, having in mind (4)
$$\mu ^2(k^2)=6\int \frac{D(z)\,e^{ikz}\,d^4z}{4\pi ^2z^2};\qquad \mu ^2(0)=\frac{6\sigma N_c}{2\pi },$$
(10)
From (10) one obtains the following positive contribution to the potential $`V_2(r)`$ at small $`r`$ (we neglect a constant term $`O(1/m_0)`$):
$$\mathrm{\Delta }V_2(r)=\mu ^2(k_{eff})\,\alpha _s(k_{eff})\,r+O(r^2),\qquad r\lesssim T_g.$$
(11)
Analysis of the integral (9) shows that $`k_{eff}\sim 1/r`$, and therefore $`\mathrm{\Delta }V_2(r)`$ is defined mostly by the short–distance dynamics.
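For orientation, Eqs. (10) and (11) are easy to evaluate numerically. In the minimal sketch below (Python), the string tension $`\sigma \approx 0.18\mathrm{GeV}^2`$ and a frozen $`\alpha _s\approx 0.39`$ are reference values assumed purely for illustration, not results of this paper:

```python
import math

# Eq. (10): tachyonic mass parameter mu^2(0) = 6 * sigma * N_c / (2*pi),
# and Eq. (11): the induced small-r linear slope mu^2 * alpha_s.
sigma   = 0.18   # string tension [GeV^2] -- assumed typical value
N_c     = 3      # number of colors
alpha_s = 0.39   # coupling at k_eff ~ 1/r -- assumed for illustration

mu2 = 6.0 * sigma * N_c / (2.0 * math.pi)
print(f"mu^2(0)        = {mu2:.2f} GeV^2  (|mu| = {math.sqrt(mu2):.2f} GeV)")
print(f"Delta V2 slope = {mu2 * alpha_s:.2f} GeV^2")  # comparable to sigma itself
```

With these inputs the extra slope is a sizable fraction of $`\sigma `$ itself, which is why this term can be relevant for the fine splitting discussed above.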
3. The analysis done heretofore concerns the static interquark potential and reveals that even at small distances the NP background ensures contributions, which are encoded in the negative mass-squared term $`-\mu ^2`$.
Applying the same NP background formalism to other processes of interest, one would get similar corrections of the order of $`\frac{\mu ^2}{p^2}`$, as was investigated in .
To check the self-consistency of our results, one can find the contribution of $`\mu ^2(k)`$ to the correlator $`D`$,
$$D(q)\sim \alpha _s(q)\int \frac{d^4p\,\mu ^2(p)}{(p^2+m_0^2(p))^2(q-p)^2}\sim \frac{1}{q^2},\qquad q^2\to \mathrm{\infty }$$
(12)
which is positive and consistent with recent lattice data . Insertion of (12) into (10) yields a constant $`\mu ^2(p)`$ at large $`p`$ (modulo logarithms), which implies self-consistent NP dynamics at small distances (large $`p`$). It is worthwhile to note also that the negative sign of the $`\mu ^2`$ contribution is directly connected to asymptotic freedom, where the same paramagnetic term in the effective action $`S_{eff}`$ enters with the negative sign, and one can take into account that $`\mu ^2(x,y)\sim \frac{\delta ^2S_{eff}}{\delta a_\mu (x)\delta a_\mu (y)}.`$
The author is grateful for discussions, correspondence and very useful remarks to V.I.Zakharov, and for useful discussions to V.A.Novikov and V.I.Shevchenko.
The financial support of RFFI through the grants 97-02-16404 and 97-02-17491 is gratefully acknowledged.
# Tunneling Proximity Resonances: Interplay between Symmetry and Dissipation
## Abstract
We report the first observation of bound-state proximity resonances in coupled dielectric resonators. The proximity resonances arise from the combined action of symmetry and dissipation. We argue that the large ratio between the widths is a distinctive signature of the multidimensional nature of the system. Our experiments shed light on the properties of 2D tunneling in the presence of a dissipative environment.
Tunneling and dissipation are ubiquitous phenomena in physics. A detailed understanding of their combined action would be highly desirable given the relevance of the problem for atomic physics, condensed matter physics, chemistry and biology . However, the incorporation of dissipative effects is by no means trivial. Due to limitations of the available analytical and computational methods, up-to-date descriptions are still restricted to a few manageable cases, the prototype situation involving a bistable potential in 1D .
In this Letter we report the observation of novel aspects of tunneling in 2D potentials and its interplay with classical dissipation. In experiments utilizing microwave dielectric resonators, we find that symmetry not only plays a crucial role while shaping the eigenstates of the system, but also influences the way they couple to the external environment, acquiring a finite width. In the observed resonance multiplets, we find that one of the members is extremely sharp due to the symmetry of the configuration. The large ratios of the observed widths appear to be a peculiar consequence of the multidimensional nature of the system.
The experiments were carried out using $`MgTi`$ dielectric cylinders placed between two parallel copper plates, 30 cm square, separated by a gap $`l=6.38`$ mm (Fig. 1). The disks had diameter $`D=12.65`$ mm and dielectric constant $`\epsilon _r=16`$. After establishing input/output coupling to the near field of the resonators by inserting coax lines terminated by loops, measurements of the transmission amplitude as a function of the frequency were performed using an HP8510B network analyzer.
The eigenvalue problem of a single dielectric resonator can be solved analytically by regarding the system as a waveguide along the direction $`z`$ orthogonal to the plates . The entire field configuration can be derived from the knowledge of the longitudinal components $`\{H_z,E_z\}`$ alone, that separately obey the Helmholtz equation:
$$(\nabla ^2+k^2)\{H_z(𝐫),E_z(𝐫)\}=0.$$
(1)
Here, $`𝐫=(\varphi ,\rho ,z)`$ in cylindrical coordinates and $`k=\sqrt{\epsilon _r}(\omega /c)`$ denotes the medium wave number for a mode at frequency $`\omega =2\pi f`$ ($`\epsilon _r=1`$ outside the dielectric). For perfectly conducting walls, boundary conditions require $`k_z`$ $`=p\pi /l,`$ $`p`$ integer. We have verified through explicit measurements of the field profile that $`p=1`$, and all modes are evanescent with a decay constant close to the expected value $`\kappa _r=\sqrt{k_z^2-\omega ^2/c^2}`$ . A generic mode of the dielectric is classified according to its azimuthal, radial and vertical quantum numbers $`(m,n,p)`$. If $`m=0,`$ the mode has cylindrical symmetry and can be either TE or TM. Hybrid HEM modes arise whenever $`m>0`$. There is no TEM mode for a dielectric guide. The agreement between the calculated resonances and the data is found to be within $`2\%`$ for all the peaks .
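This decay constant can be checked directly against the geometry. A minimal numerical check (Python), using the plate gap $`l=6.38`$ mm given above and the $`p=1`$, 9.45 GHz TM mode discussed below:

```python
import math

# kappa_r = sqrt(k_z^2 - (omega/c)^2) for the p = 1 evanescent waveguide mode.
c = 2.998e11          # speed of light [mm/s]
l = 6.38              # plate separation [mm]
f = 9.45e9            # TM_011 single-disk resonance frequency [Hz]

k_z    = math.pi / l                 # p*pi/l with p = 1   [1/mm]
k_free = 2.0 * math.pi * f / c       # free-space wave number [1/mm]
print(f"kappa_r = {math.sqrt(k_z**2 - k_free**2):.2f} 1/mm")   # -> 0.45 1/mm
```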
The quality factors $`Q`$ of the resonances are determined by the observed widths $`\gamma `$ in the frequency domain, $`Q=f/\gamma `$. For a single resonator, estimates of $`Q`$ are possible by calculating the ratio between the energy stored and the average power dissipated per cycle . Owing to the localized nature of the field eigenmodes, losses due to the open boundary conditions at the edge of the plates are irrelevant, power dissipation being introduced by dielectric and conductor losses. Detailed calculations, which yield results consistent with the measurements, indicate that finite absorption in the metallic plates outweighs dielectric losses by at least a factor 5, thus providing the leading dissipation mechanism . In an equivalent time-domain picture, this implies that the metal acts as an environmental decay channel for the bound electromagnetic modes, the coupling between the dielectric and the metal being proportional to the copper surface resistance.
When two dielectric resonators are placed in proximity to each other, each resonance splits into two. The doublets have the structure of a broad resonance at a lower frequency $`f_l`$ accompanied by a narrow resonance at higher frequency $`f_h`$. This effect is particularly pronounced for TM modes ($`H_z=0`$). The most noteworthy example is the TM<sub>011</sub> single-disk resonance found at $`9.45`$ GHz with $`Q_0\approx 70`$, which splits into two peaks with $`Q_h\approx 2400`$ and $`Q_l\approx 50`$ for an edge distance $`d=1.0`$ mm (Fig. 2). The narrow peak can be experimentally assigned to an antisymmetric $`E_z`$-field configuration by establishing electric field coupling with the pick-up antenna and by probing the behavior at the mirror symmetry plane . The doublet splitting as a function of the disk separation $`\mathrm{\Delta }f(d)=f_h-f_l`$ is displayed in Fig. 3(a). The splitting vanishes exponentially with $`d`$ until the limit of noninteracting resonators is approached. The measured decay constant is in good agreement with the single-disk value $`\kappa _r=0.45`$ mm<sup>-1</sup>, as expected on the basis of semiclassical estimates in a tunneling regime where $`\kappa _rd\gg 1`$ . The widths $`\gamma _l`$, $`\gamma _h`$ and their ratio $`\gamma _l/\gamma _h`$ are plotted in Fig. 3(b) and in Fig. 4 (dots), respectively. Again, single-disk behavior is recovered for sufficiently large $`d`$, where $`\gamma _l,\gamma _h\to \gamma _0`$. For small separations, the width $`\gamma _h`$ is highly suppressed, leading to the high $`Q`$-values noted above. As indicated by Fig. 4, a maximum ratio $`\gamma _l/\gamma _h\approx 50`$ is seen at $`d=0.73`$ mm. It is very remarkable that, thanks to the proximity effect, $`Q`$’s in the range of $`10^3`$ are achievable without resorting to closed-wall cavities.
A quantitative account of the above results can only be achieved by numerically solving Eq. (1) with the appropriate boundary conditions. Even if the problem is simplified since the $`z`$-dependence is separable, an accurate calculation of the lineshape factors $`Q`$ requires the complete knowledge of the electric and magnetic field distribution within the cavity volume. By referring to for more detail on the full electromagnetic analysis, our primary goal here is to gain simple qualitative insights. Let us focus henceforth on the TM<sub>011</sub> configuration. In the two-disk system, the splitting into modes of well defined parity is easily understood as a consequence of the perturbation introduced by the resonator-resonator coupling. The latter is known to take contributions only from the unperturbed evanescent fields $`E_z(𝐫)=\mathrm{cos}(k_zz)E_z(x,y)`$ of each resonator . Experimentally, the antisymmetric mode is found to be able to store an extra amount of electromagnetic energy compared to the symmetric one . This takes place through an amplification of field components (e.g., $`E_z,H_z`$), which do not contribute to power dissipation, whereby the higher observed $`Q`$.
The fact that the stabilisation of the antisymmetric mode only manifests at small separations suggests picturing the phenomenon in terms of a collective effect arising when two discrete states (the bound single-disk modes) are coupled to each other and, in addition, to a common environment (the metal) that renders them unstable. Similar effects are found in quantum physics, where they require an appropriate modification of the standard Weisskopf-Wigner decay theory . The correspondence between electromagnetic (em) and quantum mechanical (qm) systems is well established for stationary problems . In particular, taking into account the boundary conditions at the dielectric surface, the component $`E_z(x,y)`$ in the waveguide plays the role of the wavefunction $`\psi `$ in a 2D quantum mechanical system, the dielectric medium corresponding to a square potential well at a fixed energy . Accordingly, the two-disk system maps into a 2D tunneling problem. In the presence of losses, the damping of the em field amplitude is usually accounted for through a complex frequency $`f-i\gamma /2`$ whose imaginary part provides the time decay rate . Despite the fact that, due to the different time-dependent equations of motion, the correct mapping to complex energies of quantum unstable states is a nonlinear relation of the form $`(f-i\gamma /2)_{\text{em}}^2\leftrightarrow (\epsilon -i\gamma /2)_{\text{qm}}`$, a qm configuration which is stabilized against decay will still be mapped into an em non-decaying state.
A simple argument supporting the stability of the antisymmetric state goes as follows. Let $`|L\rangle ,|R\rangle `$ denote degenerate ket states localized in the left, right well respectively. The two levels are coupled to each other by a tunneling perturbation of the form $`H_T=T[|L\rangle \langle R|+|R\rangle \langle L|]`$, $`T>0`$, and to a common continuum of states with a strength $`W_L,W_R`$. It is possible to show that the coupling to the environment mediates an extra interaction between the two discrete states, which strongly affects the decay properties of the combined system and thereby its spectral response . If $`W_L=W_R`$, one predicts that the symmetric combination $`[|L\rangle +|R\rangle ]`$ corresponds to a Lorentzian resonance line at frequency $`f_S`$, whose width is twice as large as the width of each single level coupled to a continuum of the same strength, while the antisymmetric combination $`[|L\rangle -|R\rangle ]`$, found at frequency $`f_A`$, is completely stabilized. This behavior can be regarded as the counterpart of Heller’s predictions for the proximity effect based on a point scatterer model . It is worth mentioning that the overall effect of the extra interaction indicated above is to dress the tunneling matrix element $`T`$ with an additional imaginary part.
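The essence of this two-level argument can be made explicit with a $`2\times 2`$ effective Hamiltonian. In the sketch below (Python/NumPy), each isolated level would have width $`\mathrm{\Gamma }`$; coupling to one *common* channel with $`W_L=W_R`$ makes the decay matrix rank one ($`\mathrm{\Gamma }_{ij}=\mathrm{\Gamma }`$ for all $`i,j`$). The numerical values are illustrative only:

```python
import numpy as np

# Two degenerate levels |L>, |R>: tunneling T plus a common decay channel.
E0, T, Gamma = 9.45, 0.10, 0.05           # illustrative units (e.g., GHz)

H_eff = np.array([[E0, T], [T, E0]], dtype=complex) \
        - 0.5j * Gamma * np.ones((2, 2))  # rank-one decay matrix

for E in sorted(np.linalg.eigvals(H_eff), key=lambda z: z.real):
    print(f"f = {E.real:.3f}   width = {-2.0 * E.imag:.3f}")
# Output: one mode with width 2*Gamma (twice the single-level width) and one
# with width ~0 -- the combination decoupled from the common channel.
# The sign of T fixes which member of the doublet lies higher in frequency.
```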
The actual situation, where the width of the antisymmetric mode is limited by the dielectric losses, would be more adequately modeled by invoking two distinct environments. To complement the previous analysis, we also explored a phenomenological description of dissipation in terms of an effective non-hermitian Hamiltonian . By introducing a drastic approximation, we only consider an effective 1D potential projected along the horizontal symmetry axis . Within the theory of multidimensional tunneling, this is supported by the fact that the interaction is dominated by instanton orbits between the two centers . Thus, we use a potential energy function of the form $`V_{\text{pot}}(x)=V_0-iV_1`$ (for $`|x|>d/2+D`$), $`-iV_2`$ (for $`d/2<|x|<d/2+D`$), $`V_b-iV_1`$ (for $`|x|<d/2`$), all parameters being real numbers. The imaginary terms $`-iV_1,`$ $`-iV_2`$ account for the losses outside and inside the double-well region, respectively. We assume $`\left|V_{1,2}\right|/V_{0,b}\ll 1`$.
The real part of $`V_{\text{pot}}(x)`$ is depicted in Fig. 4 (inset). We allow for the possibility of a barrier height $`V_b\le V_0`$ to effectively include corrections arising from the 2D nature of the problem. The presence of extra-contributions to the tunneling interaction, which are lost in the 1D model, is simulated by a more transparent barrier. The noninteracting limit corresponds to $`d\to \mathrm{\infty }`$. For finite $`d`$, even and odd states are generated, with eigenenergies $`E_P=\epsilon _P-i\gamma _P/2`$, $`P=S,A`$. If $`\kappa =\kappa _r+i\kappa _i`$ denotes the inter-well wave vector, each complex eigenvalue has a structure involving exponentials $`e^{-\kappa _rd}`$, convoluted with oscillating functions of $`\kappa _id`$ whose details depend on the state $`|L\rangle `$, $`|R\rangle `$ . The signature of tunneling shows up through the exponential dependence of the energy splitting, $`\mathrm{\Delta }\epsilon =\epsilon _A-\epsilon _S\sim e^{-\kappa _rd}F(\kappa _id)`$. The oscillatory terms in $`F`$ are responsible for a “rippled” structure of the splitting decay, which is apparent in the data (Fig. 3(a)).
The transcendental equations determining $`E_A`$ and $`E_S`$ have been solved numerically for different sets of parameters with both $`V_b=V_0`$ and $`V_b<V_0`$ and the results compared with the experimental ones . The leading exponential decay of the energy separation and the asymmetric small-distance splitting of the widths are correctly predicted. However, we find a major difference between the two models in their capability to reproduce the exceedingly large ratio between the widths. In the simulations with $`V_b=V_0,`$ we were unable to reach ratios larger than 4, regardless of the values of the parameters $`V_1,V_2,`$ mainly affecting the absolute range of the widths. This order of magnitude is in agreement with independent results on symmetry splittings of resonances due to semiclassical creeping orbits . In the reduced-height configuration, the ratio $`\gamma _s/\gamma _a`$ can be controlled over a broad range (up to 50) by varying $`V_b/V_0`$. Some representative behaviors are summarized in Fig. 4 for barrier opacity $`V_b/V_0=1,`$ $`1/4,`$ $`1/9`$. Maximum stability of the antisymmetric mode is reached at an intermediate distance $`d`$, corresponding to the $`\gamma _s/\gamma _a`$ -peak value. The better qualitative agreement attainable with the reduced-height model suggests that dimensionality effects also play a key role in the experiment.
In order to confirm the conclusion that a judicious use of symmetry leads to a $`Q`$-sharpening, we carried out experiments on a 3-disk system, with the disks placed at the vertices of an equilateral triangle. Resonance triplets are observed. In analogy to the 2-disk system, we predict that modes transforming antisymmetrically with respect to reflections in each of the vertical mirror planes show enhanced stability against dissipation . We focus on a single-resonator mode with $`m=3`$ at 10.8 GHz, whose behavior is displayed in Fig. 5. A very sharp component at intermediate frequency is clearly seen. The $`Q`$ factor is increased by roughly a factor 20 compared to the original one. The sharpening effect turns out to be very sensitive to symmetry-breaking effects. The influence of a geometric symmetry-breaking has been studied by shifting one of the disks by $`b`$ along the bisectrix of the triangle. It is evident from Fig. 5 (inset) that the sharp resonance is dramatically affected, the $`Q`$ factor being exponentially degraded. No sensible change is observed for any of the broad resonances.
In the spirit of the original definition by Heller , our observations indicate that interesting proximity phenomena arise in the spectral response of nearby systems. Proximity resonances have been recently detected in the scattering of a TEM electromagnetic mode in a parallel-plate waveguide . Despite some superficial similarity with the present work, it is essential to realize that our experiments probed a completely different regime of the microwave field, where direct evanescent-wave coupling between bound modes rather than scattering resonances from two dielectrics illuminated by the same wave field were investigated. In particular, the power law behavior characterizing scattering states should be contrasted with the exponential dependences that are intrinsically associated with tunneling. Thus, bound-state proximity resonances form a novel complementary manifestation of a similar physical phenomenon, whose detailed understanding poses new challenges to both numerical simulations and semi-classical treatments.
Our results have a variety of implications. First, the tunneling interaction in a 2D integrable potential has been probed sensitively. There is no difficulty, in principle, to extend the experimental work to chaotic potentials where important results on complex periodic orbits theory have been obtained . Second, the experiments demonstrate how symmetry properties can be usefully exploited to protect a system against the effects of its environment. This general mechanism provides a unifying explanation for proximity resonances, regardless of the unbound (scattering) or bound (confined) nature of the wave field. Third, our work can be related to recent observations of the symmetry splitting between optical modes in photonic molecules where, however, the behavior of widths was not addressed. It is conceivable that some counterpart of proximity phenomena may be relevant on the mesoscopic scale as well. From the practical perspective, the possibility of symmetry-based $`Q`$-amplification in electromagnetic or photonic structures represents another exciting area of applications. Finally, the electromagnetic phenomenon evidenced here displays intriguing similarities with concepts investigated in the context of quantum dissipative processes , . The possibility of establishing some mapping between the electromagnetic and quantum realm even in the presence of dissipative mechanisms would clearly open up a fruitful arena of interchange and deserves further investigation.
Work at Northeastern was supported by NSF-PHY-9752688. We thank V. Kidambi for help with the data analysis software and N. D. Whelan for discussions.
# Structure and energetics of the Si-SiO2 interface
Silicon has long been synonymous with semiconductor technology. This unique role is due largely to the remarkable properties of the Si-SiO<sub>2</sub> interface, especially the (001)-oriented interface used in most devices. Although Si is crystalline and the oxide is amorphous, the interface is essentially perfect, with an extremely low density of dangling bonds or other electrically active defects. With the continual decrease of device size, the nanoscale structure of the silicon/oxide interface becomes more and more important. Yet despite its essential role, the atomic structure of this interface is still unclear. Using a novel Monte Carlo approach, we identify low-energy structures for the interface. The optimal structure found consists of Si-O-Si “bridges” ordered in a stripe pattern, with very low energy. This structure explains several puzzling experimental observations.
Experiments offer many clues to the interface structure, but their interpretation remains controversial, because of the complexities inherent in studying disordered materials. Proposed models range from a graded interface to a sharp interface and even to a crystalline oxide layer at the interface . Most theoretical studies have involved guessing reasonable structures , sometimes even using hand-built models . More recently, there have been attempts to obtain an unbiased structure using unconstrained molecular dynamics (MD) and Monte Carlo (MC) studies . However, because of kinetic limitations these studies have not been able to identify the equilibrium structure. Calculations of the interface energy are also not possible with existing methods.
Here we employ a novel approach in which the Si-SiO<sub>2</sub> system is modeled as a continuous network of bonds connecting the atoms, and the thermodynamic ensemble of possible network topologies is explored via Monte Carlo (MC) sampling. The basic method has been described elsewhere in a simpler context. But the present application demonstrates that this approach makes possible an entirely new class of computational studies of disordered systems.
Our approach samples only defect-free configurations, in which Si and O have four and two bonds respectively, and there are no O-O bonds. Because of this restriction, the energy may be reasonably approximated by a valence-force model:
$$E_{\{r\}}=\frac{1}{2}\sum _ik_b(b_i-b_0)^2+\frac{1}{2}\sum _{i,j}k_\theta (\mathrm{cos}\theta _{ij}-\mathrm{cos}\theta _0)^2.$$
(2)
Here $`\{𝐫\}`$ is the set of atom positions, $`E_{\{r\}}`$ is the total energy for a given network topology and given $`\{𝐫\}`$. $`i`$ represents the $`i`$th bond, $`b_i`$ is its length, $`\theta _{ij}`$ is the angle between bonds $`i`$ and $`j`$ connected to a common atom. The material parameters depend implicitly on the type of atom, where $`b_0`$ is the preferred bond length, $`\theta _0`$ is the preferred bond angle, and $`k_\theta `$ and $`k_b`$ are “spring constants.” We take $`k_{b,\text{Si-Si}}=9.08eV/\AA ^2`$, $`k_{\theta ,\text{Si-Si-Si}}=3.58eV`$, $`b_{0,\text{Si-Si}}=2.35\AA `$, $`\mathrm{cos}(\theta _{0,\text{Si}})=-1/3`$, $`k_{b,\text{Si-O}}=27.0eV/\AA ^2`$, $`b_{0,\text{Si-O}}=1.6\AA `$, $`k_{\theta ,\text{O-Si-O}}=4.32eV`$ and $`k_{\theta ,\text{Si-O-Si}}=0.75eV`$, and $`\mathrm{cos}(\theta _{0,\text{O}})=-1`$. For Si-Si-O bonds we set the spring constant to be the geometric mean $`k_{\theta ,\text{Si-Si-O}}=(k_{\theta ,\text{Si-Si-Si}}k_{\theta ,\text{O-Si-O}})^{1/2}`$. (There is an additional term in the energy which simply enforces the restriction of two and four neighbors for O and Si, respectively .)
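A minimal sketch of how Eq. (2) is evaluated for a given topology is given below (Python/NumPy). The data-structure choices (the `bonds` and `angles` lists and the type labels) are ours, introduced for illustration, and are not the authors' code:

```python
import numpy as np

# Valence-force parameters from the text (energies in eV, lengths in Angstrom).
K_BOND  = {("Si", "Si"): (9.08, 2.35), ("Si", "O"): (27.0, 1.60)}
K_ANGLE = {"Si-Si-Si": (3.58, -1.0/3), "O-Si-O": (4.32, -1.0/3),
           "Si-O-Si": (0.75, -1.0),
           "Si-Si-O": ((3.58 * 4.32) ** 0.5, -1.0/3)}  # geometric-mean rule

def valence_energy(pos, bonds, angles):
    """Eq. (2). pos: (N,3) array; bonds: (i, j, type); angles: (i, vertex, j, type)."""
    E = 0.0
    for i, j, t in bonds:                      # bond-stretching term
        kb, b0 = K_BOND[t]
        E += 0.5 * kb * (np.linalg.norm(pos[i] - pos[j]) - b0) ** 2
    for i, v, j, t in angles:                  # bond-bending term at vertex v
        kt, cos0 = K_ANGLE[t]
        u1, u2 = pos[i] - pos[v], pos[j] - pos[v]
        cth = u1 @ u2 / (np.linalg.norm(u1) * np.linalg.norm(u2))
        E += 0.5 * kt * (cth - cos0) ** 2
    return E
```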
In order to focus on the role of network structure, we treat the energy as a function solely of bond topology, minimizing $`E_{\{r\}}`$ with respect to the geometrical coordinates $`\{𝐫\}`$. Thus for a given network topology
$$E=\underset{\{𝐫\}}{\mathrm{min}}E_{\{r\}}.$$
(3)
The structure of the system is allowed to evolve toward thermodynamic equilibrium through Monte-Carlo bond-switching moves . (We adapt the original move to preclude O-O bonds.) At each step a random trial bond-switch is accepted with probability $`e^{-\mathrm{\Delta }E/kT}`$ (or 1 for energy-lowering moves), guaranteeing that the system will evolve toward thermodynamic equilibrium.
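One Monte Carlo step is then ordinary Metropolis sampling over network topologies. In the outline below (Python), `propose_bond_switch` and `relaxed_energy` are hypothetical placeholders for the O-O-excluding bond switch and for the geometry minimization of Eq. (3); neither is spelled out here:

```python
import math, random

def mc_step(network, E_old, kT):
    """One bond-switch move: accept with probability min(1, exp(-dE/kT))."""
    trial = propose_bond_switch(network)   # random WWW-type switch, no O-O bonds
    E_new = relaxed_energy(trial)          # Eq. (3): minimize over positions {r}
    if E_new <= E_old or random.random() < math.exp(-(E_new - E_old) / kT):
        return trial, E_new                # accept (always, if energy-lowering)
    return network, E_old                  # reject
```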
Our model for the energy is rather simple compared with the more accurate ab initio methods used in some recent studies . Since the Si-SiO<sub>2</sub> system is dominated by steric constraints, our approach should nevertheless be reasonably accurate for the defect-free structures considered here. More important, it allows the large-scale MC sampling necessary for the system to move toward thermodynamic equilibrium, which is not feasible with ab initio methods. It also allows us to determine the actual interface energy, using thermodynamic averaging. More accurate methods are not at present able to determine the interface energy, even for a given interface structure, because it is impractical to average over the statistical ensemble of configurations of the amorphous oxide.
We begin with a convenient though unphysical structure, a perfect interface between crystalline Si and highly strained $`\alpha `$-cristobalite. We use 10 layers of Si, and SiO<sub>2</sub> containing an equal number of Si atoms, periodically repeated in the interface-normal (z) direction. In the other two dimensions, we use cells with 2$`\times `$2 and 4$`\times `$4 periodicity, for a total of 160 and 640 atoms respectively. To accurately describe an oxide at a free Si surface, the cell size is constrained to match Si(001) in two directions, while the period normal to the interface is allowed to vary to maintain zero stress in that direction.
We first perform MC bond-switching within the oxide, allowing it to amorphize and to relax the large strain by viscous flow. We then perform unconstrained MC switching of the entire system, allowing it to equilibrate for up to a total of 300,000 MC steps. To accelerate the evolution, the MC “temperature” used is quite high, 2600°C; but this refers only to the degree of disorder allowed in the network topology .
We have carried out 10 independent MC simulations for a 2$`\times `$2 cell. The resulting interface structures are shown in Fig. 1. The key structural element is an oxygen bridge between each pair of Si atoms terminating the Si crystal. This eliminates half the bonds from the Si side, correcting the mismatch between the bond densities in the two very different materials. This structure allows each atom to maintain its preferred coordination, with essentially no additional distortion of the bond angles or bond length beyond that already present due to the amorphous nature of the oxide. Bridge bonds have appeared in several previous models of the Si-SiO<sub>2</sub> interface. However, it has apparently not been previously recognized that these are the key element, giving an ideal low-energy interface.
All 10 simulations gave fully bridge-bonded structures. However, two distinct arrangements are possible within our 2$`\times `$2 periodicity, and both occur in the simulations. We refer to these as the “stripe” and “check” phases, respectively, and they are compared in Fig. 1. We have obtained the same structures using a somewhat different energy function , showing that it does not depend on the precise parameter values used.
Similar runs with a 4$`\times `$4 cell also give bridge-bonded structures, but the system typically becomes “stuck” in a metastable state with incomplete (of order 75%) bridge bonding. The energy is consistently lower in structures with more complete bridge bonding. The key role of the bridge bonds is illustrated in Fig. 2, where the total energy of the system and the number of bridge bonds at the interface are plotted against MC “time” for a typical 4$`\times `$4 simulation. There is a clear drop in energy each time a new bridge bond is formed. A fully bridge-bonded structure has the lowest energy and is stable under annealing. Thus it seems clear that, with sufficiently long annealing, the 4$`\times `$4 cell would always reach the ideal stripe structure. A side view of this structure is shown in Fig. 3a.
In equilibrium, the actual interface structure is that which minimizes the interface energy (or more precisely, the free energy). The interface energy can be calculated by subtracting the bulk energy of the amorphous oxide and crystalline Si (obtained in independent calculations) from the total energy. In all cases the energy is averaged over roughly 10,000 MC steps after the system reaches equilibrium. For the stripe phase, the calculated interface energy is 6.8$`\pm 1.3`$ meV/Å<sup>2</sup> (0.10 eV per 1$`\times `$1 cell), an order of magnitude smaller than the energy of a free Si surface. For the check phase, we find a slightly higher energy of 9.5$`\pm `$1.9 meV/Å<sup>2</sup> (0.14 eV per 1$`\times `$1 cell).
We can gain further insight into the energetics by decomposing the total energy of the system into individual atomic contributions. This decomposition is not unique, but a natural choice is to divide the bond-stretching energy in Eq. 1 equally between the two atoms. Half of the bond-angle energy is assigned to the vertex atom, and one quarter to each of the other atoms.
In Fig. 3b, the energy of each atom is plotted versus the z coordinate, for the low-energy 4$`\times `$4 stripe structure after equilibration for 300,000 MC steps. A striking feature is that the main contribution to the interface energy comes from local distortions inside the crystalline Si. The energy within the oxide is rather uniform, even right up to the interface.
There has been considerable interest in the possibility of a crystalline interfacial oxide . We can form an interface between Si(001) and tridymite (0001) which resembles the stripe phase above, but the tridymite is under considerable strain (about 7$`\%`$ in one direction and 13$`\%`$ in the other). The properties of this interface are summarized in Fig. 4. The interface energy is actually much higher than that for amorphous SiO<sub>2</sub>, about 29 meV/Å<sup>2</sup> (0.43 eV per 1$`\times `$1 cell). Thus there appears to be no interfacial driving force for formation of a crystalline oxide.
Yet several experiments have suggested the presence of a crystalline oxide layer roughly 5Å thick at the interface, based on both electron microscopy and x-ray diffraction . These results have remained an outstanding puzzle, but they are immediately explained by our structure.
Electron microscopy suggested a 5Å layer of tridymite at the interface . Comparison of Fig. 3a and Fig. 4a shows that the structure of the Si-tridymite interface is indistinguishable from the more realistic crystal-amorphous interface, in a region several angstroms thick at the interface. Thus our proposed interface structure is entirely consistent with the electron microscopy results. However, it is best viewed as an ordered interface structure, without reference to any crystalline bulk phase. In no case did we see evidence for an ordered phase extending further into the oxide.
X-ray diffraction experiments show an ordered 2$`\times `$1 structure at the interface, with a thickness of under 6 Å and a domain size comparable to the step spacing . The stripe phase exactly satisfies these characteristics. It has an overall 2$`\times `$1 periodicity. Moreover, every interface step causes a 90° rotation from 2$`\times `$1 to 1$`\times `$2, so the step spacing sets an upper bound on the domain size. The presence of random small atomic displacements, reflected in Figs. 1 and 3, explains the inability of Ref. to determine precise atomic positions from the diffraction data.
Finally we note that in several experiments, photoemission has been used to measure the number of Si atoms at the interface having intermediate oxidation states . Many theoretical studies have attempted to reproduce or explain these statistics , but the interpretation is surprisingly subtle . Nevertheless, there appears to be some concensus that the primary connection between Si and SiO<sub>2</sub> occurs via Si$`^{\text{+2}}`$ , as in our model.
In conclusion, we have identified a simple, defect-free, ordered structure for the Si-SiO<sub>2</sub> interface. It has low energy, and appears to reconcile the various puzzling experimental observations. The computational method focuses on a more complete exploration of the thermodynamic ensemble, even when this requires significant approximations in treating the energetics. It is our hope that this approach will open the door to a new class of computational studies of disordered systems.
Figure Captions
Fig. 1. Plan view of two Si-SiO<sub>2</sub> interfaces. The last three layers of Si are shown in gold, with atoms further from the interface shown smaller. The first layer of O is shown in red. (a) Stripe phase, having (2$`\times `$1) symmetry. (b) Check phase, having c(2$`\times `$2) symmetry.
Fig. 2. Total energy $`E`$, and number of interfacial bridge bonds, versus number of accepted Monte Carlo steps. The decrease in the energy each time a bridge bond forms illustrates their crucial role in giving a low interface energy.
Fig. 3. (a) Side view of 4$`\times `$4 stripe phase ( projection). The Si and O atoms are represented by gold and red spheres respectively. Each arrow points to a row of oxygen atoms that form the bridges at the interface. Notice the substantial voids above each bridge bond. (b) Energy of each atom versus its z coordinate. Red circles represent oxygen atoms and gold circles represent silicon atoms. The green line is the local energy per atom, averaged over 20 configurations (and over a z range of $`1\AA `$ for smoothness).
Fig. 4. Same as Fig. 3, for interface between Si and tridymite.
# Formation of Liesegang patterns: A spinodal decomposition scenario
## Acknowledgments
We thank M. Zrinyi, P. Hantz, and T. Unger for useful discussions. This work has been supported by the Swiss National Science Foundation and by the Hungarian Academy of Sciences (Grant No. OTKA T 029792).
# Implications of Precision Electroweak Measurements for Physics Beyond the SM

*Talk presented at the Division of Particles and Fields Conference (DPF 99), Los Angeles, CA, January 5–9, 1999.*
## I Introduction
Using the top quark and $`Z`$ boson masses, $`m_t`$ and $`M_Z`$, the QED coupling, $`\alpha `$, and the Fermi constant, $`G_F`$, as input, other precision observables can be computed within the SM as functions of the Higgs boson mass, $`M_H`$. For relatively low values of $`M_H`$, the agreement with the measurements is found to be excellent, establishing the SM at the one-loop level. I will briefly review the constraints on $`M_H`$ and the experimental situation before moving beyond the SM.
Besides the recent high precision measurements of the $`W`$ boson mass , $`M_W`$, the most important input into precision tests of electroweak theory continues to come from the $`Z`$ factories LEP 1 and SLC . The vanguard of the physics program at LEP 1 with about 20 million recorded $`Z`$ events is the analysis of the $`Z`$ lineshape. Its parameters are $`M_Z`$, the total $`Z`$ width, $`\mathrm{\Gamma }_Z`$, the hadronic peak cross section, $`\sigma _{\mathrm{had}}`$, and the ratios of hadronic to leptonic decay widths, $`R_\ell =\frac{\mathrm{\Gamma }(\mathrm{had})}{\mathrm{\Gamma }(\ell ^+\ell ^{-})}`$, where $`\ell =e`$, $`\mu `$, or $`\tau `$. They are determined in a common fit with the leptonic forward-backward (FB) asymmetries, $`A_{FB}(\ell )=\frac{3}{4}A_eA_\ell `$. With $`f`$ denoting the fermion index,
$$A_f=\frac{2v_fa_f}{v_f^2+a_f^2}$$
(1)
is defined in terms of the vector ($`v_f=I_{3,f}-2Q_f\mathrm{sin}^2\theta _f^{\mathrm{eff}}`$) and axial-vector ($`a_f=I_{3,f}`$) $`Zf\overline{f}`$ coupling; $`Q_f`$ and $`I_{3,f}`$ are the electric charge and third component of isospin, respectively, and $`\mathrm{sin}^2\theta _f^{\mathrm{eff}}\equiv \overline{s}_f^2`$ is an effective mixing angle.
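For reference, Eq. (1) with tree-level SM couplings already sets the scale of the asymmetries quoted below; a quick numerical check (Python; the input $`\mathrm{sin}^2\theta ^{\mathrm{eff}}=0.2315`$ is an assumed typical value, not a fit result of this analysis):

```python
def A(I3, Q, s2_eff=0.2315):
    """Eq. (1): A_f = 2 v_f a_f / (v_f^2 + a_f^2)."""
    v = I3 - 2.0 * Q * s2_eff   # vector coupling v_f
    a = I3                      # axial-vector coupling a_f
    return 2.0 * v * a / (v * v + a * a)

print(f"A_l = {A(-0.5, -1.0):.3f}")     # ~0.147, cf. Eq. (2) below
print(f"A_b = {A(-0.5, -1.0/3):.3f}")   # ~0.935, the SM expectation for A_b
```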
An average of about 73% polarization of the electron beam at the SLC allows for a set of competitive and complementary measurements with a much smaller number of $`Z`$’s ($`\gtrsim 500,000`$). In particular, the left-right (LR) cross section asymmetry, $`A_{LR}=A_e`$, represents the most precise determination of the weak mixing angle by a single experiment (SLD) . Mixed FB-LR asymmetries, $`A_{LR}^{FB}(f)=\frac{3}{4}A_f`$, single out the final state coupling of the $`Z`$ boson.
For several years there has been an experimental discrepancy at the $`2\sigma `$ level between $`A_\ell `$ from LEP and the SLC. With the 1997/98 high statistics run at the SLC, and a revised value for the FB asymmetry of the $`\tau `$ polarization, $`𝒫_\tau ^{FB}`$, the two determinations are now consistent with each other,
$$\begin{array}{c}A_\ell (\mathrm{LEP})=0.1470\pm 0.0027,\hfill \\ A_\ell (\mathrm{SLD})=0.1503\pm 0.0023.\hfill \end{array}$$
(2)
The LEP value is from $`A_{FB}(\ell )`$, $`𝒫_\tau `$, and $`𝒫_\tau ^{FB}`$, while the SLD value is from $`A_{LR}`$ and $`A_{LR}^{FB}(\ell )`$. The data is consistent with lepton universality, which is assumed here. There remains, however, a $`2.5\sigma `$ discrepancy between the two most precise determinations of $`\overline{s}_{\mathrm{}}^2`$, namely $`A_{LR}`$ and $`A_{FB}(b)`$ (assuming no new physics in $`A_b`$).
Of particular interest are the results on the heavy flavor sector including $`R_q=\frac{\mathrm{\Gamma }(q\overline{q})}{\mathrm{\Gamma }(\mathrm{had})}`$, $`A_{FB}(q)`$, and $`A_{LR}^{FB}(q)`$, with $`q=b`$ or $`c`$. There is a theoretical prejudice that the third family is the one which is most likely affected by new physics. Interestingly, the heavy flavor sector has always shown the largest deviations from the SM. E.g., $`R_b`$ deviated at times by almost $`4\sigma `$. Now, however, $`R_b`$ is in good agreement with the SM, and thus puts strong constraints on many types of new physics. At present, there is some discrepancy in $`A_{LR}^{FB}(b)=\frac{3}{4}A_b`$, and $`A_{FB}(b)=\frac{3}{4}A_eA_b`$, both at the $`2\sigma `$ level. Using the average of Eqs. (2), $`A_\ell =0.1489\pm 0.0018`$, both can be interpreted as measurements of $`A_b`$. From $`A_{FB}(b)`$ one would obtain $`A_b=0.887\pm 0.022`$, and the combination with $`A_{LR}^{FB}(b)=\frac{3}{4}(0.867\pm 0.035)`$ would yield $`A_b=0.881\pm 0.019`$, which is almost $`3\sigma `$ below the SM prediction. Alternatively, one could use $`A_\ell (\mathrm{LEP})`$ above (which is closer to the SM prediction) to determine $`A_b(\mathrm{LEP})=0.898\pm 0.025`$, and $`A_b=0.888\pm 0.020`$ after combination with $`A_{LR}^{FB}(b)`$, i.e., still a $`2.3\sigma `$ discrepancy. An explanation of the 5–6% deviation in $`A_b`$ in terms of new physics in loops would need a 25–30% radiative correction to $`\widehat{\kappa }_b`$, defined by $`\overline{s}_b^2\equiv \widehat{\kappa }_b\mathrm{sin}^2\widehat{\theta }_{\overline{\mathrm{MS}}}(M_Z)\equiv \widehat{\kappa }_b\widehat{s}_Z^2`$. Only a new type of physics which couples at the tree level preferentially to the third generation , and which does not contradict $`R_b`$ (including the off-peak measurements by DELPHI ), can conceivably account for a low $`A_b`$. Given this and that none of the observables deviates by $`2\sigma `$ or more, we can presently conclude that there is no compelling evidence for new physics in the precision observables, some of which are listed in Table I. Very good agreement with the SM is observed. Only $`A_{LR}`$ and the two measurements sensitive to $`A_b`$ discussed above, show some deviation, but even those are below $`2\sigma `$.
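The combinations quoted in this paragraph are simple inverse-variance weighted averages; a minimal sketch (Python) reproducing them:

```python
def wavg(measurements):
    """Inverse-variance weighted average of (value, error) pairs."""
    weights = [1.0 / err ** 2 for _, err in measurements]
    mean = sum(w * v for w, (v, _) in zip(weights, measurements)) / sum(weights)
    return mean, sum(weights) ** -0.5

print(wavg([(0.1470, 0.0027), (0.1503, 0.0023)]))  # A_l -> (0.1489, 0.0018)
print(wavg([(0.887, 0.022), (0.867, 0.035)]))      # A_b -> (0.881, 0.019)
```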
The data show a strong preference for a low $`M_H𝒪(M_Z)`$. Unlike in previous analyses, the central value of the global fit to all precision data, including $`m_t`$ and excluding further constraints from direct searches,
$$M_H=107_{-45}^{+67}\text{ GeV},$$
(3)
is now above the direct lower limit, $`M_H>90\text{ GeV [95\% CL]}`$, from searches at LEP 2. It coincides with the $`5\sigma `$ discovery limit from LEP 2 running at 200 GeV center of mass energy with 200 pb<sup>-1</sup> integrated luminosity per experiment. The 90% central confidence interval from precision data only is given by $`39\text{ GeV}<M_H<226\text{ GeV}`$. The fit result (3) is consistent with the predictions for the lightest neutral Higgs boson, $`m_{h^0}\lesssim 130`$ GeV, within the Minimal Supersymmetric Standard Model (MSSM) \[and its extensions\].
For the determination of the proper $`M_H`$ upper limits, we scan equidistantly over $`\mathrm{ln}M_H`$, combining the likelihood $`\chi ^2`$ function from the precision data with the exclusion curve (interpreted as a prior probability distribution function) from LEP 2. This curve is from Higgs searches at center of mass energies up to 183 GeV. We find the 90 (95, 99)% confidence upper limits,
$$M_H<220\text{ (255, 335) GeV}.$$
(4)
Notice that the LEP 2 exclusion curve increases the 95% upper limit by almost 30 GeV. The upper limits (4) are rather insensitive to the $`\alpha (M_Z)`$ used in the fits. This is due to compensating effects from the larger central value of $`\alpha (M_Z)`$ (corresponding to lower extracted Higgs masses) and the larger error bars in the data-driven approach as compared to evaluations relying more strongly on perturbative QCD. While the limits are therefore robust within the SM, it should be cautioned that the results on $`M_H`$ are strongly correlated with certain new physics parameters, as discussed in Section II.
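Schematically, the limit-setting procedure behind Eq. (4) can be summarized as follows. This is only an illustration of the scan described above: `chi2_of_mh` stands for the $`\chi ^2(M_H)`$ profile from the precision data and `lep2_prior` for the LEP 2 exclusion curve, both of which are placeholders here since the actual functions are not reproduced in this text.

```python
import numpy as np

def higgs_upper_limits(chi2_of_mh, lep2_prior, mh_min=60.0, mh_max=1000.0, n=2000):
    """Scan equidistantly in ln(M_H), combine the likelihood from the
    precision data with the LEP 2 exclusion curve treated as a prior,
    and return the 90/95/99% CL upper limits on M_H (GeV)."""
    mh = np.exp(np.linspace(np.log(mh_min), np.log(mh_max), n))
    posterior = np.exp(-0.5 * chi2_of_mh(mh)) * lep2_prior(mh)
    cdf = np.cumsum(posterior)
    cdf /= cdf[-1]
    return {cl: float(mh[np.searchsorted(cdf, cl)]) for cl in (0.90, 0.95, 0.99)}
```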
The accurate agreement of theory and experiment allows severe constraints on possible TeV scale physics, such as unification or compositeness. For example, the ideas of technicolor and non-supersymmetric Grand Unified Theories (GUTs) are strongly disfavored. On the other hand, supersymmetric unification, as generically predicted by heterotic string theory, is supported by the observed approximate gauge coupling unification at an energy slightly below the Planck scale, and by the decoupling of supersymmetric particles from the precision observables. As I will discuss in the following Sections, those types of new physics which tend to decouple from the SM are favored, while non-decoupling new physics generally conflicts with the data.
## II Oblique parameters: bounds on extra matter
The data is precise enough to constrain additional parameters describing physics beyond the SM. Of particular interest is the $`\rho _0`$-parameter, which is a measure of the neutral to charged current interaction strength and is defined by
$$\rho _0=\frac{M_W^2}{M_Z^2\widehat{c}_Z^2\widehat{\rho }(m_t,M_H)}.$$
(5)
The SM contributions are absorbed in $`\widehat{\rho }`$. Examples for sources of $`\rho _0\ne 1`$ include non-degenerate extra fermion or boson doublets, and non-standard Higgs representations.
In a fit to all data with $`\rho _0`$ as an extra fit parameter, there is a strong (73%) correlation with $`M_H`$ ($`\rho _0`$ is also strongly anticorrelated with the strong coupling $`\alpha _s`$ ($`-53\%`$) and $`m_t`$ ($`-46\%`$)). As a result, upper limits on $`M_H`$ are weaker when $`\rho _0`$ is allowed. Indeed, $`\chi ^2(M_H)`$ is very shallow with $`\mathrm{\Delta }\chi ^2=\chi ^2(1\text{ TeV})-\chi ^2(M_Z)=4.5`$, and its minimum is at $`M_H=46`$ GeV, which is already excluded. For comparison, within the SM a 1 TeV Higgs boson is excluded at the $`5\sigma `$ level. We obtain,
$$\begin{array}{ccc}\rho _0\hfill & =& \hfill 0.9996_{-0.0006}^{+0.0009},\\ m_t\hfill & =& \hfill 172.9\pm 4.8\text{ GeV},\\ \alpha _s(M_Z)\hfill & =& \hfill 0.1212\pm 0.0031,\end{array}$$
(6)
in excellent agreement with the SM ($`\rho _0=1`$). The central values are for $`M_H=M_Z`$, and the uncertainties are $`1\sigma `$ errors and include the range, $`M_Z\le M_H\le 167`$ GeV, in which the minimum $`\chi ^2`$ varies within one unit. Note that the uncertainties for $`\mathrm{ln}M_H`$ and $`\rho _0`$ are non-Gaussian: at the $`2\sigma `$ level ($`\mathrm{\Delta }\chi ^2\le 4`$), Higgs boson masses up to 800 GeV are allowed, and we find
$$\rho _0=0.9996_{-0.0013}^{+0.0031}\text{ (}2\sigma \text{)}.$$
(7)
This implies strong constraints on the mass splittings of extra fermion and boson doublets,
$$\mathrm{\Delta }m^2=m_1^2+m_2^2-\frac{4m_1^2m_2^2}{m_1^2-m_2^2}\mathrm{ln}\frac{m_1}{m_2}\ge (m_1-m_2)^2,$$
(8)
namely, at the $`1\sigma `$ and $`2\sigma `$ levels, respectively ($`C_i`$ is the color factor)
$$\sum _i\frac{C_i}{3}\mathrm{\Delta }m_i^2<\text{ (38 GeV)}^2\text{ and (93 GeV)}^2.$$
(9)
Due to the restricted Higgs mass range in the presence of supersymmetry (SUSY), stronger $`2\sigma `$ constraints result here,
$$\rho _0\text{ (MSSM) }=0.9996_{-0.0013}^{+0.0017}\text{ (}2\sigma \text{)}.$$
(10)
The $`2\sigma `$ constraint in (9) would therefore tighten from $`\text{(93 GeV)}^2`$ to $`\text{(64 GeV)}^2`$.
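Equation (8) is straightforward to evaluate numerically; the following sketch (Python) is a minimal aid for translating the bounds of Eq. (9) into allowed mass splittings, not part of the original fits.

```python
import numpy as np

def delta_m2(m1, m2):
    """Effective splitting of Eq. (8) (units of mass^2); it vanishes for
    degenerate doublets and is bounded below by (m1 - m2)**2."""
    if np.isclose(m1, m2):
        return 0.0
    return m1**2 + m2**2 - 4 * m1**2 * m2**2 / (m1**2 - m2**2) * np.log(m1 / m2)

# For a single extra quark doublet (color factor C_i = 3), Eq. (9) at
# 1 sigma requires delta_m2(m1, m2) < (38 GeV)**2 = 1444 GeV^2:
print(delta_m2(230.0, 200.0))   # ~1.2e3 GeV^2, i.e. still allowed at 1 sigma
```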
Constraints on heavy degenerate chiral fermions can be obtained through the $`S`$ parameter, defined as a difference of $`Z`$ boson self-energies,
$$\frac{\widehat{\alpha }(M_Z)}{4\widehat{s}_Z^2\widehat{c}_Z^2}S\equiv \frac{\mathrm{\Pi }_{ZZ}^{\mathrm{new}}(M_Z^2)-\mathrm{\Pi }_{ZZ}^{\mathrm{new}}(0)}{M_Z^2}.$$
(11)
The superscripts indicate that $`S`$ includes new physics contributions only. Likewise, $`T=(1-\rho _0^{-1})/\widehat{\alpha }`$ and the third oblique parameter, $`U`$, also vanish in the SM. A fit to all data with $`S`$ allowed yields,
$$\begin{array}{ccc}S\hfill & =& \hfill -0.20_{-0.17}^{+0.24},\\ M_H\hfill & =& \hfill 390_{-310}^{+690}\text{ GeV},\\ m_t\hfill & =& \hfill 172.9\pm 4.8\text{ GeV},\\ \alpha _s\hfill & =& \hfill 0.1221\pm 0.0035.\end{array}$$
(12)
It is seen that in the presence of $`S`$ constraints on $`M_H`$ virtually disappear. In fact, $`S`$ and $`M_H`$ are almost perfectly anticorrelated ($`-92\%`$). By requiring $`M_Z\le M_H\le 1`$ TeV, we find at the $`3\sigma `$ level,
$$S=-0.20_{-0.33}^{+0.40}\text{ (}3\sigma \text{)}.$$
(13)
A heavy degenerate ordinary or mirror family contributes $`2/(3\pi )`$ to $`S`$. A degenerate fourth generation is therefore excluded at the 99.8% CL on the basis of the $`S`$ parameter alone. Due to the correlation with $`T`$, the fit becomes slightly better in the presence of a non-degeneracy of the new doublets. A non-vanishing $`T=0.15\pm 0.08`$ is favored, but even in this case a fourth family is excluded at least at the 98.2% CL. This is in agreement with a different constraint on the generation number, using very different assumptions: allowing the invisible $`Z`$ width as a free parameter yields the constraint, $`N_\nu =2.992\pm 0.011`$, on the number of light standard neutrino flavors.
A simultaneous fit to $`S`$, $`T`$, and $`U`$ can be performed only relative to a specified $`M_H`$. If one fixes $`M_H=600`$ GeV, as is appropriate in QCD-like technicolor models, one finds
$$\begin{array}{ccc}\hfill S& =& \hfill -0.27\pm 0.12,\\ \hfill T& =& \hfill 0.00\pm 0.15,\\ \hfill U& =& \hfill 0.19\pm 0.21.\end{array}$$
(14)
Notice that in such a fit the $`S`$ parameter is significantly smaller than zero. From this an isodoublet of technifermions, assuming $`N_{TC}=4`$ technicolors, is excluded by almost 6 standard deviations, and a full technigeneration by more than $`15\sigma `$. However, the QCD-like models are excluded on other grounds, such as flavor changing neutral currents (FCNC). These can be avoided in models of walking technicolor in which $`S`$ can also be smaller or even negative.
## III Extra $`Z^{\prime }`$ bosons
Many GUTs and string models predict extra gauge symmetries and new exotic states. For example, $`SO(10)`$ GUT contains an extra $`U(1)`$ as can be seen from its maximal subgroup, $`SU(5)\times U(1)_\chi `$. The $`Z_\chi `$ boson is also the unique solution to the conditions of (i) no extra matter other than the right-handed neutrino, (ii) absence of gauge and mixed gauge/gravitational anomalies, and (iii) orthogonality to the hypercharge generator. Relaxing condition (iii) allows other solutions (including the $`Z_{LR}`$ appearing in left-right models with $`SU(2)_L\times SU(2)_R\times U(1)`$ gauge symmetry) which differ from the $`Z_\chi `$ boson by a shift proportional to the third component of the right-handed isospin generator. Equivalently, a non-vanishing kinetic mixing term can also parametrize these other solutions.
Similarly, $`E_6`$ GUT contains the subgroup $`SO(10)\times U(1)_\psi `$, giving rise to another $`Z^{\prime }`$. It possesses only axial-vector couplings to the ordinary fermions. As a consequence its mass, $`M_{Z_\psi ^{\prime }}`$, is generally less constrained (see Fig. 1).
The $`Z_\eta `$ boson is the linear combination $`\sqrt{3/8}Z_\chi -\sqrt{5/8}Z_\psi `$. It occurs in Calabi-Yau compactifications of the heterotic string if $`E_6`$ breaks directly to a rank 5 subgroup via the Hosotani mechanism.
The potential $`Z^{\prime }`$ boson is in general a superposition of the SM $`Z`$ and the new boson associated with the extra $`U(1)`$. The mixing angle $`\theta `$ satisfies the relation,
$$\mathrm{tan}^2\theta =\frac{M_{Z_1^0}^2-M_Z^2}{M_{Z^{\prime }}^2-M_{Z_1^0}^2},$$
(15)
where $`M_{Z_1^0}`$ is the SM value for $`M_Z`$ in the absence of mixing. Note that $`M_Z<M_{Z_1^0}`$, and that the SM $`Z`$ couplings are changed by the mixing. If the Higgs $`U(1)^{\prime }`$ quantum numbers are known as well, there will be an extra constraint,
$$\theta =C\frac{g_2}{g_1}\frac{M_Z^2}{M_{Z^{\prime }}^2},$$
(16)
where $`g_{1,2}`$ are the $`U(1)`$ and $`U(1)^{\prime }`$ gauge couplings with $`g_2=\sqrt{\frac{5}{3}}\mathrm{sin}\theta _W\sqrt{\lambda }g_1`$. $`\lambda =1`$ (which we assume) if the GUT group breaks directly to $`SU(3)\times SU(2)\times U(1)\times U(1)^{\prime }`$. $`C`$ is a function of vacuum expectation values (VEVs). For minimal Higgs sectors it can be found in Table III of reference . Fig. 1 shows allowed contours for $`\rho _0`$ free (see Section II), as well as $`\rho _0=1`$ (only Higgs doublets and singlets). Notice that in the cases of minimal Higgs sectors the $`Z^{\prime }`$ mass limits are pushed into the TeV region. For more details and other examples see Ref. .
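For orientation, Eqs. (15) and (16) can be evaluated directly; the sketch below (Python) simply encodes the two relations, with all numerical inputs left as placeholders.

```python
import numpy as np

def mixing_angle_from_masses(M_Z, M_Z1_0, M_Zp):
    """Eq. (15): Z-Z' mixing angle from the mass shift; M_Z1_0 is the
    SM value of M_Z in the absence of mixing (M_Z < M_Z1_0 < M_Zp)."""
    tan2 = (M_Z1_0**2 - M_Z**2) / (M_Zp**2 - M_Z1_0**2)
    return np.arctan(np.sqrt(tan2))

def mixing_angle_from_higgs(C, g2_over_g1, M_Z, M_Zp):
    """Eq. (16): the extra constraint that applies when the U(1)'
    quantum numbers of the Higgs sector (entering through C) are known."""
    return C * g2_over_g1 * (M_Z / M_Zp) ** 2
```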
## IV Supersymmetry
The good agreement between the SM predictions and the data favors those types of new physics for which contributions decouple from the precision observables. In particular, supersymmetric extensions of the SM with heavy (decoupling) superpartners are in perfect agreement with observation. Other regions of parameter space, however, where some of the supersymmetric states are relatively light, are strongly constrained by the data.
In a recent analysis we systematically studied these constraints within the MSSM with various assumptions about the mediation of SUSY breaking (i.e. about the soft SUSY breaking terms). In a first step, we identified the allowed region in parameter space taking into account all direct search limits on superparticles, but ignoring the additional information from the precision data. We then added the indirect constraints arising from SUSY loop contributions. We found that a significant region of MSSM parameter space has to be excluded, and that the lower limits on superparticles and extra Higgs states strengthen. See the results in Fig. 2 from an update of our analysis for this conference.
## V Conclusions
The precision data confirms the validity of the SM at the electroweak loop level, and there is presently no compelling evidence for deviations. A low Higgs mass is strongly favored by the data. While the precise range of $`M_H`$ is rather sensitive to $`\alpha (M_Z)`$, the upper limit is not. However, in the presence of non-standard contributions to the $`S`$ or $`T`$ parameters, no strong $`M_H`$ bounds can be found.
There are stringent constraints on parameters beyond the SM, such as $`S`$, $`T`$, $`U`$, and others. This is a serious problem for models of dynamical symmetry breaking, compositeness, and the like, and excludes a fourth generation of quarks and leptons at the $`3\sigma `$ level. Those constraints are, however, consistent with the MSSM, favoring its decoupling limit. Moreover, the low favored $`M_H`$ is in agreement with the expected mass range for the lightest neutral Higgs boson in the MSSM. Precision tests also impose stringent limits on extra $`Z^{}`$ bosons suggested in many GUT and string models. They limit their mixing with the ordinary $`Z`$, and put competitive lower limits on their masses, especially in concrete models in which the $`U(1)^{}`$ charges of the Higgs sector are specified.
## Acknowledgement
It is a pleasure to thank Paul Langacker and Damien Pierce for collaboration.
# X-ray Nova XTE J1550-564: Discovery of a QPO Near 185 Hz
## 1 Introduction
The X-ray nova and black hole candidate XTE J1550-564 was first detected on 1998 September 6 (Smith et al. (1998)) with the All Sky Monitor (ASM; Levine et al. (1996)) aboard the Rossi X-ray Timing Explorer (RXTE). It is the brightest X-ray nova yet observed with RXTE. The ASM light curve and further background information for this source are provided in a companion paper by Sobczak et al. (1999b; hereafter paper I). Extensive observations of the optical counterpart are described by Jain et al. (1999; hereafter paper III). The 2.5-20 keV spectrum of XTE J1550-564 resembles the sources that are dynamically established to be black hole binaries. The X-ray and optical intensities of this source suggest a distance of roughly 6 kpc (paper I).
We present results from 60 RXTE observations of XTE J1550-564 that were made between 12 and 83 days after the outburst began. Results from earlier RXTE observations (outburst days 2–10) were reported by Cui et al. (1999), who found that during the initial rise (0.6–2.1 Crab) the source exhibited QPOs with a frequency that systematically increased from 0.08 to 8 Hz as the X-ray flux increased. These QPOs were very strong, with rms amplitudes typically $`\sim `$15% of the mean flux over the full PCA band. In each observation, the QPO amplitude (rms / mean flux) increased by a factor of two between 2 and 30 keV. The temporal variability displayed by XTE J1550-564 also resembles some of the black hole systems observed during outburst (Cui et al. (1999); paper I).
Herein we show a series of power spectra in which there are frequent appearances of QPOs in the range 2-13 Hz. There are also a few occasions in which we detect a high frequency QPO near 185 Hz which is analogous to the stationary QPOs observed for two black hole candidates: GRS1915+105 (67 Hz; Morgan, Remillard, & Greiner 1997) and GRO J1655-40 (300 Hz; Remillard et al. 1999b).
## 2 Observations and Analysis
The times of the 60 RXTE pointed observations and a summary of some X-ray properties of XTE J1550-564 are given in Table 1 of paper I. We have analyzed the X-ray timing properties of XTE J1550-564 using data from the PCA instrument (Jahoda et al. (1996)). Within the constraints of spacecraft telemetry, we obtained moderately good time resolution in at least a few energy bands by conducting the observations as follows. In most cases, the PCA Event Analyzers (EAs) were configured to deliver 122 $`\mu `$s time resolution in three broad energy bands, which are approximately 2-6 keV, 6-12 keV, and 12-30 keV. The 30 keV boundary is an effective limit imposed by the source spectrum, not by exclusion of high energy events in the data processing. Lower time resolution was occasionally used to avoid possible telemetry saturation due to high count rates: the time resolution was 250 $`\mu `$s for observations #4–6 (see paper I, Table 1) and 500 $`\mu `$s for observations #9–10. In parallel, we usually used a fourth EA to provide 8 energy bands with 4 ms time resolution within the energy range 2-13 keV.
For each PCA observation, we computed power spectra for each of the 3 energy bands sampled with high time resolution and also for the 2–30 keV sum band. Power spectra were computed for every 256 s data segment. Then for each observation and energy band, we averaged together all of the 256 s power spectra. We subtracted the contribution from counting statistical noise, corrected for dead-time effects, as described by Morgan et al. (1997). The power spectra are normalized such that the power in each frequency bin is the square of the rms amplitude divided by the mean count rate. At high frequencies, residual continuum power $`<10^{-6}`$ Hz<sup>-1</sup> is likely to represent inaccuracies in our subtraction of statistical noise, rather than source behavior.
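For illustration, the construction of the averaged power spectra can be sketched as follows (Python/NumPy). The fragment assumes a binned count series; the dead-time correction of Morgan et al. (1997) is omitted, so the counting-noise level is taken at its ideal Poisson value.

```python
import numpy as np

def averaged_power_spectrum(counts, dt, seg_time=256.0):
    """Average the power spectra of 256-s segments, in units of
    (rms/mean)^2 per Hz, with the ideal Poisson noise level 2/rate
    subtracted (no dead-time correction in this sketch)."""
    nbin = int(round(seg_time / dt))
    acc, nseg = 0.0, 0
    for k in range(len(counts) // nbin):
        seg = counts[k * nbin:(k + 1) * nbin]
        nph = seg.sum()
        rate = nph / seg_time
        amp = np.fft.rfft(seg)[1:]               # drop the DC term
        acc += 2.0 * np.abs(amp) ** 2 / (nph * rate) - 2.0 / rate
        nseg += 1
    freq = np.fft.rfftfreq(nbin, dt)[1:]
    return freq, acc / nseg
```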
We used a chi-squared minimization technique to derive the central frequency and the width of an X-ray QPO. We fit each QPO feature with a Lorentzian function, while the continuum on both sides of the QPO was modeled with a power law function. On some occasions (e.g., see Sept 21a,b below), it was necessary to add a quadratic term to the relationship between log power density and log frequency, in order to adequately model the curvature in the power continuum. The QPO fit parameters include the QPO central frequency ($`\nu `$) and the full width at half maximum ($`\mathrm{\Delta }\nu `$). The amplitude of the QPO, expressed as a fraction of the mean count rate, is the square root of the integrated power in the QPO feature. The central frequencies of the 2–13 Hz QPOs are included in Table 1 of Paper I, while the results for the fast ($`\sim 185`$ Hz) QPOs are given in Table 1 below.
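The profile fit described above can be reproduced with any standard nonlinear least-squares routine. A minimal sketch (Python/SciPy; the optional quadratic term in log-log space is omitted) in which the Lorentzian is parametrized by its integrated power, so that the rms amplitude is its square root:

```python
import numpy as np
from scipy.optimize import curve_fit

def continuum_plus_qpo(f, norm, index, pint, nu0, fwhm):
    """Power-law continuum plus a Lorentzian QPO whose integrated
    power is pint (in units of (rms/mean)^2)."""
    hwhm = fwhm / 2.0
    lorentz = pint / np.pi * hwhm / ((f - nu0) ** 2 + hwhm ** 2)
    return norm * f ** (-index) + lorentz

def fit_qpo(freq, power, p0):
    pars, _ = curve_fit(continuum_plus_qpo, freq, power, p0=p0)
    norm, index, pint, nu0, fwhm = pars
    return {"nu": nu0, "dnu": fwhm, "Q": nu0 / fwhm,
            "rms": np.sqrt(pint)}    # fractional rms amplitude
```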
## 3 Results
It can be seen in paper I that there are both short-term and long-term variations in the intensity, X-ray spectrum, and QPO properties of XTE J1550-564. Furthermore, the changes in timing and spectral parameters are highly correlated (see Table 1 of paper I). To improve our sensitivity to the high frequency QPOs, we average together power spectra from sequential time intervals in which the changes in source behavior are relatively minor. There are 12 such groups, and their power spectra are shown in Figure 1. The averaging has smeared the low frequency QPOs (3–7 Hz) in panels d–g of Figure 1, as can be discerned from Table 1 of paper I, but this does not alter our conclusions below.
In our observations of 1998 September and October, XTE J1550-564 is bright in X-rays with 2-30 keV intensity above 0.5 Crab. The majority of the power spectra during this interval show a continuum that is relatively flat below a few Hz; the power density values ($`\sim `$0.01) imply $`\sim `$10% rms variations at timescales longer than $`\sim 0.2`$ s. There is a sharp break in the continuum power near 5–10 Hz, with a QPO feature near or somewhat above the break frequency. A second peak is typically seen at the first harmonic frequency (2$`\nu `$), and a weaker peak often appears at the first subharmonic (0.5$`\nu `$). These power spectra resemble the earlier results for XTE J1550-564 that were reported by Cui et al. (1999).
The QPOs in the range 2.6–13.1 Hz have the following characteristics. While the source is bright, the detected QPOs have a coherence parameter, $`Q=\nu /\mathrm{\Delta }\nu `$, that is in the range 3.5 $`<Q<`$12.0. However, as the source dims, there are infrequent detections of broad QPOs with $`Q\sim 1.6`$ (see Figure 1 and Table 1 of paper I: Oct 29, Nov 9, and after Nov 20). The narrow QPOs are further distinguished by their high amplitudes. In particular, the individual observations between September 22 and October 13 generally yield rms amplitudes of 8–14% of the mean count rate. Thus the X-ray luminosity, which is $`1.5\times 10^{38}`$($`d`$/6kpc)<sup>2</sup> erg s<sup>-1</sup> (paper I), is modulated at 3–6 Hz with a crest-to-trough ratio as high as 1.5. Previously, only the microquasar GRS1915+105 has shown QPOs with such a large amplitude and a high luminosity (Morgan, Remillard, & Greiner (1997)). These QPOs place significant constraints on physical models designed to explain the power-law spectrum in accreting black hole systems (e.g. Molteni, Sponholz, & Chakrabarti (1996); Titarchuk, Lapidus, & Muslimov (1998)). Further analyses of these QPOs will be presented in a later publication.
While XTE J1550-564 is still in a bright state, the general shape of the power spectrum diverges from the norm on September 19 and during October 20–29. There is less power at low frequencies, and the continuum can be very roughly described as a single power law with index between 0.5 and 1.0. More importantly, these observations reveal an additional QPO near 185 Hz. This high frequency QPO is strongest in panels b and h of Figure 1. These QPOs are shown more clearly, along with the profile fits, in the left panels of Figure 2. The detections are significant at the level of 6–7 $`\sigma `$, and the central frequencies are located at $`184\pm 6`$ and $`186\pm 7`$ Hz, respectively.
Power spectra and QPO fits in two different energy bands are shown for the same two observations in Figure 3, with fit parameters given in Table 1. The uncertainty (1 $`\sigma `$) in each parameter is calculated while fixing the other parameters at their best-fit value. To investigate whether this method underestimates the uncertainty, we plotted the surface contours of the chi-square statistic for each pair of QPO parameters in the 2–30 keV fits reported in Table 1. The asymmetries in these contours have only minor significance, and they imply that the multi-parameter uncertainties would be larger than the given ones by only 2–10%. For the QPO fits in the individual energy bands, where the statistics are less reliable, the centroid frequency and width were fixed at the values determined from the corresponding 2–30 keV fit. As shown in Table 1, we are able to measure the QPO amplitude independently at 2–6 keV and 6–12 keV, and there is clearly an increase in the QPO amplitude with photon energy. In addition, there is a weak indication that this trend continues into the 12–30 keV band. These results for XTE J1550-564 are qualitatively consistent with the increasing amplitude with energy seen in the 67 Hz QPO of GRS 1915+105 (Morgan, Remillard, & Greiner (1997)). Thus, the fast X-ray oscillations in black hole candidates are intimately tied to the hard X-ray component in the energy spectrum.
Weaker high-frequency QPOs are seen during the days following each QPO detection at 185 Hz. The QPO fits for September 21a,b and October 24-29 are shown in the right panels of Figure 2, and the fit parameters are included in Table 1. The detections are significant at the level of 4–5 $`\sigma `$, and these QPOs are centered at 161 $`\pm `$ 7 Hz and 238 $`\pm `$ 18 Hz, respectively. These frequencies are inconsistent with 185 Hz at the $`3\sigma `$ confidence level. On the other hand the derived $`Q`$ values and the amplitudes per energy band are consistent with the results for the stronger QPOs at 185 Hz. We must conclude that the fast QPO in XTE J1550-564 shows significant variations in frequency, the first such evidence among the three black hole candidates that display high frequency QPOs. At the 90% confidence level, the high frequency QPO in XTE J1550-564 must vary by $`\pm 10`$% to be consistent with these observations. We further note that while this paper was under review and as the X-ray outburst of XTE J1550-564 continued, there were reports of even larger variations in frequency, as a QPO appeared at 284 Hz and then settled back to 182 Hz (Homan, Wijnands, & van der Klis (1999); Remillard et al. 1999a).
## 4 Discussion
Spectral analyses of these 60 RXTE observations (paper I) were made using the standard model composed of a disk blackbody plus a power-law component. The results characterize the spectral evolution of XTE J1550-564 through the peak and initial decay phases of the outburst. The September 19 detection of the QPO at 184 Hz is coincident with a huge (6.8 Crab) X-ray flare that lasted between one and two days. On the other hand, the October 20-23 detection occurs during a very minor increase in intensity ($`\sim `$1.5 Crab; note the small arrows in Fig. 1 of paper I). Thus intensity is a poor diagnostic of the conditions that produce the fast QPO. Far more useful indicators are the color temperature ($`T_{col}`$) and normalization of the disk component, which is proportional to the square of the color radius ($`R_{col}`$). The data in columns 8 & 9 of Table 1 in paper I show that all of the fast QPOs occur when the color temperature is relatively high ($`T_{col}>0.84`$ keV), while the disk color radius (scaled to 6 kpc with a pole-on view) is relatively small ($`<40`$ km). We further note that the power law component contributes more than half of the total X-ray luminosity during all of our observations that occur before October 29. This entire scenario, i.e. the detection of fast QPOs when the inner disk appears small and hot while the hard X-ray power law is very strong, is the same suite of conditions that accompanied the 300 Hz QPO in GRO J1655-40 (Remillard et al. 1999b; Sobczak et al. 1999a). Furthermore, the rms amplitude of the fast QPO in XTE J1550-564 ($`\sim `$1%) and its broad profile ($`Q\sim 3.5`$) are also very similar to the values derived for the 300 Hz QPO seen in GRO J1655-40.
As noted in the previous studies of fast QPOs in black hole candidates, it is natural to hypothesize that these msec timing signatures in the emission from very hot material represent a fundamental timescale of the inner disk. However the cause of this QPO appears to involve both the disk and power-law components, since the onset of the QPO is related to the temperature of the disk, while the energy dependence of the QPO amplitude implies that the oscillation is tied to the power-law component.
Two different physical models have been advanced for these high-frequency QPOs, and both are effects of general relativity that depend on the mass and spin of the black hole. In the case of “Lense-Thirring” precession, or the “frame-dragging” model, vertical structure in the inner disk gives rise to a relativistic precession, and the precession frequency could impose a timing signature on the X-ray emission (Stella & Vietri (1998); Cui, Zhang, & Chen (1998); Merloni et al. (1998)). Merloni et al. (1998) have shown that fast precession ($`\nu >10`$ Hz) signifies a rapidly rotating black hole with spin parameter $`a>0.5`$. A change in the precession frequency might correspond to a shift in the radius of peak X-ray emissivity. Computations by Markovic & Lamb (1998) indicate that some high-frequency modes of this oscillation may survive against strong damping. However, the means to initially excite these modes is unclear. Furthermore, it is unclear how precession of the inner disk would produce a QPO amplitude that increases with photon energy (Table 1).
An alternative model is the “diskoseismic” oscillation in which normal mode oscillations are trapped via relativistic effects in the inner disk (Perez et al. (1997); Wagoner (1998); see also Chen & Taam (1995)). This model predicts oscillations in density and disk thickness that could produce observable effects in the X-ray emission. The oscillation frequency depends on the mass and spin of the black hole, as well as the radius of peak emissivity. Some of the oscillation modes also depend on the thickness of the disk, which is expected to depend on the mass accretion rate (Wagoner (1998)). Therefore this model can also accommodate observed changes in the QPO frequency. Again, the mechanism by which these oscillations would reproduce the energy dependence of the QPO amplitude is not evident.
Clearly, the accumulation of numerous high-quality measurements of fast QPOs from a variety of black hole systems is a necessary step in developing a sound physical theory for this phenomenon. Guidance on interpreting the fast QPO of XTE J1550-564 may come from optical observations which may yield a determination of the mass of the black hole (paper III).
This work was supported, in part, by NASA contract NAS5-30612 and NASA grant NAG5-3680. Partial support for J.E.M. was provided by the Smithsonian Institution Scholarly Studies Program.
## 1 Introduction
The importance of Cepheids is well known in many fields of astronomy. In this contribution I would like to show how it is possible to obtain indications about the internal structure of a Cepheid and how we can test models of this class of variable stars.
Section 2 introduces the Fourier decomposition, a tool to describe quantitatively the light curves of pulsating stars. In the past years I have been involved in a project concerning Cepheids with $`P<`$ 8 d and in Sect. 3 I will show how an observational result was progressively built on the basis of old and new data; the latter were collected on selected targets just to clarify some controversial points. Since different pulsation modes were suspected among these stars, an independent confirmation was searched for.
To do that we applied the least–squares technique to Double Mode Cepheids. By obtaining a very satisfactory description of their pulsational content (Sect. 4), we demonstrated how powerful the method is. Moreover, we could confirm that among Cepheids with $`P<`$ 8 d there are both fundamental and first overtone pulsators (Sect. 5). The detection of small amplitude cross–coupling terms and higher harmonics in the light curves of Double Mode Cepheids allowed us to quantify the properties of the high–order terms and hence to discover other peculiarities (Sect. 6), very useful to complete the scenario of the resonance effects and to test some theoretical models (Sect. 7).
## 2 The application of Fourier decomposition to Cepheid light curves
Following the notation proposed by Simon and Lee (1981), the Fourier decomposition consists in interpolating the measurements by means of the series
$$V(t)=A_o+\underset{i=1}{\overset{N}{}}A_i\mathrm{cos}[2\pi i(tT_o)f+\varphi _i]$$
(1)
$`V(t)`$ is the magnitude observed at time $`t`$, $`A_0`$ the mean magnitude, $`A_i`$ the amplitudes of each component, $`f`$ the frequency ($`f`$=1/$`P`$, where $`P`$ is the period of the light variation), $`\varphi _i`$ the $`i`$–th phase at $`t=T_o`$. The components $`2f,3f,4f`$ … are the first, second, third … harmonics of the main frequency $`f`$. Note that the use of a sine term instead of the cosine one can lead to spurious results, owing to the $`\pi /2`$ shift of the phase component. Another pitfall can originate from a different convention, for example writing $`2\pi \varphi _i`$ in the argument of the cosine.
This technique provides quantitative parameters to define the shape of the light curves and it is therefore a powerful tool for classification purposes. The Fourier parameters can be subdivided into two groups: the amplitude ratios $`R_{ij}=A_i/A_j`$ (i.e. $`R_{21}=A_2/A_1`$, $`R_{31}=A_3/A_1`$, $`R_{32}=A_3/A_2`$, …) and the phase shifts $`\varphi _{ji}=i\varphi _j-j\varphi _i`$ (i.e. $`\varphi _{21}`$=$`\varphi _2-2\varphi _1`$, $`\varphi _{31}`$=$`\varphi _3-3\varphi _1`$, $`\varphi _{32}=2\varphi _3-3\varphi _2`$, …).
Figure 1 shows how the light curve of a pulsating star is progressively changed by adding harmonic terms. The upper panel represents a perfect sine–shaped curve having frequency $`f`$. When adding the first harmonic $`2f`$ ($`R_{21}=`$0.30 and $`\varphi _{21}`$=4.5 rad), the light curve immediately becomes asymmetrical (middle panel). Adding the second harmonic $`3f`$ ($`R_{31}`$=0.10 and $`\varphi _{31}`$=2.5 rad), the brightness increase becomes much steeper; the effect of higher harmonics is to produce curves which are even more asymmetrical (i.e. with a decreasing $`M-m`$ value, where $`M`$ is the phase of maximum brightness and $`m`$ the phase of minimum brightness) and to fit some small jumps of the light curves. The case of S Cru ($`P`$=4.68997 d, 6<sup>th</sup> order fit) is shown in Fig. 2.
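Since the period is known beforehand, the fit of Eq. (1) is a linear least-squares problem. The following sketch (Python/NumPy; `t` and `mag` are assumed arrays of observing times and magnitudes) returns the Fourier parameters in the cos convention stressed above; it is an illustration, not the code used in the analyses quoted here.

```python
import numpy as np

def fourier_parameters(t, mag, f, order=6, T0=0.0):
    """Fit V(t) = A0 + sum_i A_i cos(2 pi i f (t - T0) + phi_i) and
    return A0, the amplitudes A_i, the ratios R_i1 and shifts phi_i1."""
    x = 2.0 * np.pi * f * (t - T0)
    cols = [np.ones_like(t)]
    for i in range(1, order + 1):
        cols += [np.cos(i * x), np.sin(i * x)]
    coef, *_ = np.linalg.lstsq(np.column_stack(cols), mag, rcond=None)
    A0, ab = coef[0], coef[1:].reshape(order, 2)
    A = np.hypot(ab[:, 0], ab[:, 1])
    # a cos(x) + b sin(x) = A cos(x + phi)  with  phi = atan2(-b, a)
    phi = np.arctan2(-ab[:, 1], ab[:, 0])
    i = np.arange(1, order + 1)
    R_i1 = A / A[0]                                 # R_21 = A_2/A_1, ...
    phi_i1 = (phi - i * phi[0]) % (2.0 * np.pi)     # phi_21 = phi_2 - 2 phi_1, ...
    return A0, A, R_i1, phi_i1
```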
## 3 The application to Cepheids with $`P<`$ 8 d
We could verify that the sequence formed by the classical Cepheids is very narrow and it can be described by the linear fit
$$\varphi _{21}=3.332+0.216P$$
(2)
This fit is the mathematical representation of the well-known Hertzsprung progression. An observed scatter of 0.30 rad in the $`\varphi _{21}`$ value puts a star well outside of the progression. By applying the Fourier decomposition to all the available light curves of Cepheids with $`P<`$8 d, we could identify two other sequences: an upper one with 2.0 d$`<P<`$ 3.5 d, $`\varphi _{21}`$$`>`$4.2 rad and a lower one with 3.0 d$`<P<`$5.5 d, $`\varphi _{21}`$$`<`$ 4.0 rad (Fig. 3, first panel). The four panels of Fig. 3 show the successive modifications of the $`\varphi _{21}`$–$`P`$ plot from the first analysis (Antonello & Poretti 1986) to the last one (Poretti 1994).
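In numerical form, the membership criterion just described amounts to the following one-line test (Python), with the 0.30 rad threshold quoted above:

```python
def on_classical_progression(P, phi21, tol=0.30):
    """True if a Cepheid of period P (days) has a phi_21 value lying on
    the Hertzsprung progression of Eq. (2) within the 0.30 rad scatter."""
    return abs(phi21 - (3.332 + 0.216 * P)) <= tol
```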
After the first work, it was not evident what the reason for the three sequences was. Gieren et al. (1990) showed that the stars located on the upper sequence are pulsating in the first overtone mode. Different interpretations were proposed for the stars located on the lower sequence: Gieren et al. (1990) suggested that these stars are fundamental pulsators but differ from classical Cepheids for another reason, perhaps a different $`M`$–$`L`$ relationship. Antonello et al. (1990) suggested the presence of a resonance between the first and a higher overtone at $`P\sim 3`$ d. They also called $`Ca`$ stars those forming the classical progression and $`Cb`$ stars those forming the upper and lower sequences. Let us adopt hereinafter this latter nomenclature; in Sect. 6 we shall define the pulsational properties of the Cepheids and we shall propose a unique classification.
The controversial aspects of the matter were an incentive to observe a greater number of Cepheids. The upper and lower sequences were not as well defined as the other one; hence we decided to perform new decompositions and, if necessary, new observations. For this reason our group supplied very accurate photoelectric photometry of some selected stars whose position was not very clear in the Fourier parameter spaces.
The new observational data collected by Mantegazza & Poretti (1992) brought some clarification into the matter. The link between the two sequences could be established by considering the $`\varphi _{31}`$–$`P`$ plane. In this plane the progression is continuous and it is formed by stars located on the upper and lower sequences in the $`\varphi _{21}`$–$`P`$ plane. Hence, all the $`Cb`$ stars have a common nature. But is the $`\varphi _{21}`$ sequence really interrupted by a resonance? To verify this point, let us consider the case of BY Cas. Its $`\varphi _{21}`$ value is quite normal (Fig. 4, left panel), as it is located on the $`Ca`$ sequence; however, its $`\varphi _{31}`$ value falls exactly on the $`Cb`$ sequence (Fig. 4, right panel). It is quite evident that a star falling on the resonance interval can display any $`\varphi _{21}`$ value: a very high one, as the stars on the upper sequence do, a very low one, as the stars in the lower sequence do, a “$`Ca`$” one, as BY Cas does. This is the signature of a resonance.
Hence, we can summarize 6 years of investigations on the light curves of Cepheids with $`P<`$ 8 d in this way:
* The $`\varphi _{31}`$–$`P`$ plane strengthens the hypothesis that $`Cb`$ stars (i.e. the stars forming the upper and lower sequences in the $`\varphi _{21}`$–$`P`$ plane) have a common nature;
* The case of BY Cas demonstrates that at $`P\sim 3`$ d the $`\varphi _{21}`$ values are spread over a wide interval. The two sequences are then ideally connected; this is the fingerprint of a resonance;
* Since the theoretical models of Cepheids do not support resonances at $`P\sim 3`$ d involving the fundamental mode, we are forced to consider $`Cb`$ stars as first overtone pulsators;
* the $`Ca`$ stars are fundamental mode pulsators, the $`Cb`$ stars are first–overtone pulsators. The $`\varphi _{21}`$ values can then be successfully used to discriminate between pulsation modes.
Hence, the suggestion firstly made by Antonello et al. (1990) was confirmed. The effect of the $`P\sim 3`$ d resonance was the same as the one observed for the classical Cepheid sequence at $`P\sim 10`$ d; the complication here is the simultaneous presence of the two classes of pulsators, which partly masks the discontinuity.
Looking at Fig. 5 we can compare the light curves of two Cepheids having similar periods, one belonging to the $`F`$–pulsator class (BE Mon, $`P`$=2.705510 d) and the other to the $`1O`$–pulsator class (V526 Mon, $`P`$=2.674985 d). As can be easily noted, the light curve of BE Mon is much more asymmetric and moreover its maximum is sharper.
## 4 The double–mode Cepheids: the detection of the frequency content
The Double–Mode Cepheids (DMCs) supply the laboratory where the conclusions described in the previous sections can be verified: it is a well established fact that in 13 cases out of 14 the two excited modes are the fundamental and the first overtone mode. The light curve of a DMC can be considered as the sum of the contributions of a set of frequencies. Two are really independent ($`f_1`$ and $`f_2`$); since each of the corresponding curves is not perfectly sine–shaped, the harmonics 2$`f_1`$, 3$`f_1`$, …, 2$`f_2`$, 3$`f_2`$, … are also observed. Moreover, the two independent modes are interacting and the cross coupling terms are observed; they are defined as $`|if_1\pm jf_2|`$ (i.e. $`f_2-f_1`$, $`f_1+f_2`$, $`2f_1+f_2`$, $`2f_2-f_1`$ and so on). Pardo & Poretti (1997) submitted all the available photometry on DMCs to a frequency analysis with the following objectives:
* to quantitatively determine the importance of harmonics and of the cross–coupling terms;
* to compare the Fourier parameters with those of $`Ca`$ and $`Cb`$ stars;
* to search for the fingerprints of resonances between modes, by using Fourier parameter plots;
* to establish properties of Fourier parameters as a function of their order.
In the approach to the light curve analysis we took advantage of our experience on small amplitude pulsating variables ($`\delta `$ Sct and $`\gamma `$ Dor stars). As a matter of fact, after finding the main constituents, the other terms have a very small amplitude and a well tested procedure is recommended to detect them in a reliable way. Hence, we used the least–squares power spectrum method (Vanicek 1971). Let us discuss the methodology in detail using the available measurements on VX Pup. In the first power spectrum of Fig. 6 the peak at $`f_1`$=0.3320 cd<sup>-1</sup> and its alias at 1.33 cd<sup>-1</sup> are clearly visible. The aliases are particularly strong in this dataset since the measurements were obtained in a single site; when merging measurements obtained at two or more sites the height of the aliases will decrease. Then we introduced $`f_1`$ as a known constituent (hereinafter k.c.) searching for the second term: in the second power spectrum the $`f_2`$=0.4674 cd<sup>-1</sup> term and its whole alias structure appeared (i.e. the 1–$`f`$, $`f`$+1, 2–$`f`$, $`f`$+2, 3–$`f`$ terms). It is important to note that no prewhitening was done: only the frequency value $`f_1`$ was considered as a k.c. and in the second search the unknowns were $`V_o,A_1,\varphi _1,f_2,A_2,\varphi _2`$. Before proceeding further with a new frequency search, the values of $`f_1`$ and $`f_2`$ were refined by a simultaneous least–squares fit and then they were introduced as k.c.’s in the third search, which allowed us to detect the $`f_1+f_2`$ term (third panel). Now, frequency refinement is a delicate step because the third component must always satisfy the relationship $`f_3=f_1+f_2`$; to do this refinement, we use the MTRAP code (Carpino et al. 1987) which keeps this relationship locked throughout the best fit search. After the refinement, we introduced the $`f_1`$, $`f_2`$, $`f_1+f_2`$ terms as k.c.’s ($`V_o,A_1,\varphi _1,A_2,\varphi _2,A_{f_1+f_2},\varphi _{f_1+f_2},f_3,A_3,\varphi _3`$ are the unknowns) searching for the new light curve component: we detected $`f_2-f_1`$. Once again, the refinement was performed by keeping the $`f_1+f_2`$ and $`f_2-f_1`$ relationships locked; new frequency values were then obtained and introduced as k.c.’s, the fifth component 2$`f_1`$ was detected and so on. Following this process, we detected 11 terms. We note that in some spectra, especially in the $`f_1+f_2`$ and 2$`f_1`$ cases, the highest peaks are not the expected term, but its alias at 1 cd<sup>-1</sup>. This overtaking is due to the interaction between noise and spectral window (we were dealing here with single–site measurements). When this happens, the exact value of the expected term is adopted to proceed further. The decision to stop the term selection was taken when no further term was visible above the noise distribution, i.e. when all the terms giving a significant contribution to the light curve shape were presumably identified. In Fig. 6 the 12<sup>th</sup> panel clearly shows that no other term can be detected, as the noise distribution is quite uniform. Of course, very small amplitude terms can remain hidden in the noise level, especially when dealing with inaccurate measurements.
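The essence of the procedure is that at every search only the frequency values of the known constituents are kept fixed, while all amplitudes and phases are refitted together with the trial term. A bare-bones sketch of such a least-squares spectrum (Python/NumPy) follows; it is a didactic illustration, not the code actually employed.

```python
import numpy as np

def least_squares_spectrum(t, y, f_kc, f_grid):
    """Vanicek (1971) spectrum: for each trial frequency, fit the mean,
    the known constituents f_kc (frequencies only) and a trial sinusoid;
    return the fractional reduction of variance at each trial frequency."""
    y = y - np.mean(y)
    var0 = np.sum(y ** 2)
    base = [np.ones_like(t)]
    for f in f_kc:
        base += [np.cos(2 * np.pi * f * t), np.sin(2 * np.pi * f * t)]
    spectrum = np.empty(len(f_grid))
    for k, f in enumerate(f_grid):
        A = np.column_stack(base + [np.cos(2 * np.pi * f * t),
                                    np.sin(2 * np.pi * f * t)])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ coef
        spectrum[k] = 1.0 - np.sum(resid ** 2) / var0
    return spectrum
```

The third panel of Fig. 6, for instance, corresponds to `f_kc = [0.3320, 0.4674]` cd<sup>-1</sup>.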
## 5 Comparison between double– and single–mode Cepheids
In the previous sections we mentioned the separation between $`Ca`$ and $`Cb`$ stars in the space of the Fourier parameters. The very reliable Fourier parameters now at our disposal for the galactic DMCs allow us to give an independent confirmation of the proposed interpretations. Figure 7 shows the distribution of the $`\varphi _{21}`$ values of the galactic DMCs superimposed on the single–mode ones. The $`\varphi _{21}`$ values corresponding to the $`F`$ radial mode occupy the same region as the Classical Cepheids. In like manner, the $`\varphi _{21}`$ values of the 1$`O`$ radial mode mimic the “$`Z`$” shape: note the overlap between DMCs and $`Cb`$ in the upper part, the high value at 3.0 d (BQ Ser) and the positioning of the two $`\varphi _{21}`$ values belonging to the longest period DMCs (EW and V367 Sct) just on the lower part. It appears quite evident that in the DMCs the light curves of the $`F`$–radial mode and the 1$`O`$–mode are very similar to the curves of the $`Ca`$ and $`Cb`$ stars, respectively. In turn, this fact proves without any doubt that $`Cb`$ stars are pulsating in the 1$`O`$ mode and that the $`\varphi _{21}`$ value can be considered a powerful discriminant between these modes. It should be also noted that the $`F`$–mode light curve of a DMC follows the Hertzsprung progression. A discontinuity is present near 3.0 d in the light curves of 1$`O`$ modes of DMCs.
As a result of our step–by–step analysis, we can conclude that Cepheids can be subdivided into two groups on the basis of the different pulsation mode:
* the fundamental radial mode pulsators. They are classified as CEP by the GCVS, are the Classical Cepheids in the current literature and are designated as $`Ca`$ stars in Antonello et al. (1990);
* the first overtone radial mode pulsators. They are classified as DCEPS by the GCVS, are the $`s`$–Cepheids in the current literature and are designated as $`Cb`$ stars in Antonello et al. (1990).
It should be noted that the old definition of $`s`$–Cepheid, i.e. a star showing a sinusoidal light curve, should now be dropped as too generic. The Fourier decomposition supplies us with a quantitative tool to describe the light curve, and small asymmetries can be measured. As Fig. 8 shows, the stars forming the “$`Z`$” sequence also show a small $`R_{21}`$ value and hence the light curve deviates very slightly from a sinewave shape. However, it can be noted that the $`R_{21}`$ values for the $`F`$–mode of some DMCs are smaller than the expected ones. We stress once more that the Fourier parameters have to be considered globally to perform a reliable identification.
## 6 The generalized phase differences
Pardo & Poretti (1997) fitted the $`V`$ magnitudes of DMCs by means of the formula
$$V(t)=V_o+\sum _zA_z\mathrm{cos}[2\pi f_z(t-T_o)+\varphi _z]$$
(3)
where $`f_z`$ is the generic frequency, which can be an independent frequency ($`f_1`$ and $`f_2`$), a harmonic or a cross coupling term. Their analysis demonstrated that each component in the DMC light curves can be defined as a combination of the two basic frequencies $`f_1`$ and $`f_2`$; by defining $`z=(i,j)`$, we have $`f_z=f_{i,j}`$=$`i`$$`f_1`$+$`j`$$`f_2`$. Some examples: for $`(i,j)`$=2,0 we have the harmonic 2$`f_1`$; for $`(i,j)`$=1,1 the $`f_1+f_2`$ term; for $`(i,j)`$=–1,1 the $`f_2-f_1`$ term; for $`(i,j)`$=3,–2 the $`3f_1-2f_2`$ term, and so on.
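For bookkeeping purposes, the whole family of expected frequencies $`f_{i,j}`$ up to a given order can be generated automatically; a short sketch (Python) with the VX Pup values of Sect. 4:

```python
f1, f2 = 0.3320, 0.4674      # cd^-1, the VX Pup frequencies of Sect. 4

def combination_frequencies(f1, f2, max_order=3):
    """All positive f_{i,j} = i*f1 + j*f2 with 1 <= |i|+|j| <= max_order."""
    terms = {}
    for i in range(-max_order, max_order + 1):
        for j in range(-max_order, max_order + 1):
            if 1 <= abs(i) + abs(j) <= max_order and i * f1 + j * f2 > 0:
                terms[(i, j)] = i * f1 + j * f2
    return terms

terms = combination_frequencies(f1, f2)
# e.g. terms[(1, 1)] = 0.7994 (f1+f2), terms[(-1, 1)] = 0.1354 (f2-f1)
```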
In order to define the properties of the Fourier parameters of the DMC light curves it is very useful to recall to mind the generalized phase differences introduced by Antonello (1994b), here noted as $`G_{i,j}`$. They are a linear combination of the phases of each term $`f_{i,j}`$ and of the phases $`\mathrm{\Phi }_1`$ and $`\mathrm{\Phi }_2`$ of the independent frequencies $`f_1`$and $`f_2`$. Their expression is given by
$$G_{i,j}=\varphi _{i,j}-i\mathrm{\Phi }_1-j\mathrm{\Phi }_2+2k\pi $$
(4)
The numerical application to the U TrA fit provides some examples (the integer $`k`$ values have to be selected so that $`G_{i,j}\in [0,2\pi ]`$):
$$G_{1,1}=\varphi _{1,1}-\mathrm{\Phi }_1-\mathrm{\Phi }_2+2k\pi =$$

$$=2.93-5.20-6.25+4\pi =4.04$$

$$G_{-1,1}=\varphi _{-1,1}+\mathrm{\Phi }_1-\mathrm{\Phi }_2+2k\pi =$$

$$=4.79+5.20-6.25=3.74$$

$$G_{4,1}=\varphi _{4,1}-4\mathrm{\Phi }_1-\mathrm{\Phi }_2+2k\pi =$$

$$=6.00-4\times 5.20-6.25+8\pi =4.07$$
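Since the $`+2k\pi `$ shift is just a reduction modulo $`2\pi `$, Eq. (4) is a one-line computation. The sketch below (Python) reproduces the three U TrA examples; the small differences in the last digit come from the rounding of the input phases.

```python
import numpy as np

def generalized_phase_difference(phi_ij, i, j, Phi1, Phi2):
    """G_{i,j} of Eq. (4), reduced to the interval [0, 2*pi)."""
    return (phi_ij - i * Phi1 - j * Phi2) % (2.0 * np.pi)

Phi1, Phi2 = 5.20, 6.25                                       # U TrA
print(generalized_phase_difference(2.93,  1, 1, Phi1, Phi2))  # 4.05 (text: 4.04)
print(generalized_phase_difference(4.79, -1, 1, Phi1, Phi2))  # 3.74
print(generalized_phase_difference(6.00,  4, 1, Phi1, Phi2))  # 4.08 (text: 4.07)
```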
It is quite interesting to plot the $`G_{i,j}`$ values against their fit order. Light curves of DMCs are often quoted as an example of erratic behaviour and cycle–to–cycle variations, both in amplitude and in phase. Pardo & Poretti (1997) have already proved that these light curves seem to be much more stable than reported and that a frequency locked fit yields a satisfactory representation.
The suspicion that the DMC light curves have a predictable behaviour is confirmed by the natural upper and lower limits that can be easily observed in Fig. 9. The second order terms are confined in the region just below 3/2$`\pi `$; the third order terms have $`\pi /2<G_{i,j}<\pi `$, the fourth order ones cluster around 2$`\pi `$ (or 0), the fifth order ones seem to have $`\pi <G_{i,j}<3/2\pi `$.
The mean $`G_{i,j}`$ values are 4.30$`\pm `$0.34 rad for the second order (i.e. $`|i|+|j|`$=2), 2.20$`\pm `$0.23 rad for the third one, 6.24$`\pm `$0.31 for the fourth one, 3.85$`\pm `$0.21 for the fifth one. These mean values are roughly equispaced, with a slight tendency to increase: indeed, the differences between the mean $`G_{i,j}`$ of adjacent orders are 2.10, 2.24, 2.39 rad, respectively. The latter result and the boundary values established above yield an experimental confirmation of the rule of uniformity of phase differences in monoperiodic Cepheids. However, the observed separation ($`\sim `$2.2 rad) is a bit larger than expected ($`\pi `$/2) in the case of adiabatic pulsations in a one–zone model (see Poretti & Pardo 1997 for a more detailed discussion).
The second order $`G_{i,j}`$ values (2$`f_1`$, $`f_1+f_2`$, $`f_2-f_1`$, 2$`f_2`$) range from 3.00 to 5.23 rad; it was expected to see a little spread of the $`G_{0,2}`$ values owing to the resonance at 3.0 d. Indeed the two extrema are just related to the 2$`f_2`$ components of the BQ Ser (the DMC approaching the resonance from the shorter periods) and EW Sct (the DMC approaching the resonance from the longer periods) light curves. Antonello (1994a) reported another possible resonance between the third overtone and the $`f_1+f_2`$ term near 6.5 d; Fig. 10 definitely proves it. The last point (4.02$`\pm `$0.09 rad, V367 Sct, $`P`$=6.293 d) is clearly out of the progression followed by the other points. Combined with the progressive weakening of the amplitude of the $`f_1+f_2`$ term, this fact strongly supports the action of a resonance effect involving the $`f_1+f_2`$ term.
## 7 The resonance signatures and the influence on the models
There are many effects ascribed to resonances between modes:
1. The resonance at $`P\sim 10`$ d was first evidenced by Simon & Lee (1981). The values of the $`\varphi _{21}`$ parameter were spread over a very large interval and the progression is abruptly interrupted. The involved modes are the second overtone and the fundamental mode ($`2O/F`$=2);
2. In this paper we reconstructed the methodological procedure used to show how we recognized the effect of the resonance at $`P\sim 3`$ d in the 1$`O`$ pulsator light curves; the involved modes are the fourth and the first overtone. Aikawa (1993) tried to obtain the first implications of its effect in nonlinear models;
3. A resonance is expected around 6–7 d for fundamental pulsators, involving the fourth overtone and the fundamental modes (Moskalik et al. 1989). The small feature related to it was noted by Antonello (1994a);
4. As regards the DMCs, the Fourier decomposition of the light curve of V367 Sct suggests a possible resonance around 6.5 d. In such a case, the cross–coupling term $`f_1`$+$`f_2`$ and the third overtone were involved;
5. When considering longer periods, features suggesting the action of two resonances were observed (Antonello & Morelli 1997). The fundamental and the third overtone are the involved modes for that at $`P\sim `$ 27 d ($`3O/F`$=3), while the fundamental and the first overtone are those for that at $`P\sim `$ 24 d ($`1O/F`$= 3/2)
It should be noted that their effects are not very strong and a careful analysis is necessary to identify them. Moreover, other dedicated photometric observations can be useful. For an application of the same technique to radial velocity curves and related results see Kienzle et al. (1999).
Figure 11 shows the positions of the resonances and the involved modes in a $`(B-V)`$–$`P`$ plot. They were predicted by linear models and for a nonstandard mass–luminosity ($`M`$–$`L`$) relationship. Theoretical models can be obtained by varying some input parameters such as opacity, overshooting effects, and the $`M`$–$`L`$ relationship. A general agreement between theoretical models and observational effects was found. Sequences of models were realized using both standard and overshooting–type $`M`$–$`L`$ relationships (Antonello 1997). Other models were obtained by using different artificial viscosity parameters and temperature values. In all these cases the theoretical light curves were decomposed and the Fourier parameters directly compared with the observed ones. Recent results indicated that the models with mild overshooting have $`M`$–$`L`$ relationships in better agreement with observations than others based on standard assumptions (Antonello 1997).
## 8 Conclusion
This paper shows how the analysis of the light curves can be used to probe the structure of the Cepheids. Therefore, we can study the Cepheids from the point of view of asteroseismology, since finding resonance effects amounts to sounding stellar interiors. The discontinuities at $`P\sim 10`$ d and $`P\sim 3`$ d were confirmed by the data obtained in the framework of large–scale projects such as MACHO and EROS (observations of Cepheids located in the Small and Large Magellanic Clouds). It will be interesting to carefully check the other features in the same large databases as well.
# Vortex phase transformations probed by the local ac response of Bi2Sr2CaCu2O8+δ single crystals with various doping
## Abstract
The linear ac response of the vortex system is measured locally in Bi<sub>2</sub>Sr<sub>2</sub>CaCu<sub>2</sub>O<sub>8+δ</sub> crystals at various doping, using a miniature two-coil mutual-inductance technique. It was found that a step-like change in the local ac response takes place exactly at the first-order transition (FOT) temperature $`T_{FOT}(H)`$ determined by a global dc magnetization measurement. The $`T_{FOT}(H)`$ line in the $`H`$-$`T`$ phase diagram becomes steeper with increasing doping. In the higher-field region where the FOT is not observed, the local ac response still shows a broadened but distinct feature, which can be interpreted to mark the growth of a short-range order in the vortex system.
The vortex phase diagram of the highly-anisotropic high-$`T_c`$ superconductor Bi<sub>2</sub>Sr<sub>2</sub>CaCu<sub>2</sub>O<sub>8+δ</sub> (BSCCO) has been intensively studied in the past few years. In superconductors, strong supercurrents flow near the surface, which produce a nonuniform magnetic-field distribution in the sample. Such nonuniformity broadens the thermodynamic phase transitions and thereby hinders the study of the phase diagram. Also, the surface (or edge) currents produce a geometrical barrier in flat samples at low fields, which gives rise to a hysteretic behavior. In higher fields, the Bean-Livingston surface barrier is known to be strong in BSCCO and this makes the global properties of the vortex system complicated. A useful way to get rid of the effects of the surface and the magnetic-field nonuniformity is to measure the electromagnetic properties locally. There have been a number of efforts along this line, and the true nature of the vortex phases of BSCCO is beginning to be fully understood. For example, local magnetization measurements using microscopic Hall probes have found, quite conclusively, the presence of a first-order transition (FOT) of the vortex system. With the improvement of instrumentation and crystal quality, it has become clear that the first-order transition can also be determined as a step in the global dc magnetization measured with a SQUID magnetometer.
The miniature two-coil mutual-inductance technique has been used for the study of the vortex phase diagram of BSCCO. With this technique, a small ac perturbation field is applied near the center of the crystal and therefore the surface barrier, which hinders vortex entry and exit at the edge, has minimal effect on the measured response. Because of this advantage, a sharp distinct change in the local ac response has been observed and such a feature has been associated with a decoupling transition of the vortex lines. It is naturally expected that the “decoupling line” thus determined is identical to the FOT measured by dc magnetization measurements, although there has been no direct comparison between the two phenomena measured on an identical sample. Since it is known that the first-order transition in the dc magnetization has a critical point and thus disappears above a certain field, it is intriguing how the “decoupling” signal of the miniature two-coil technique transforms at higher fields, above the critical point. In fact, the nature of the vortex matter in the field range above the critical point is still controversial; since the ac technique can probe the growth of the correlation lengths of the vortex system, it is expected that the local ac measurement using the two-coil technique gives a new insight into the vortex phase transformations.
In this paper, we present the results of our miniature two-coil measurements and the global dc magnetization measurements on the same crystals. It is found that these two techniques detect the anomaly at the same temperature $`T_{FOT}(H)`$, directly demonstrating that the two phenomena are of the same origin. We measured crystals with three different dopings and confirmed that the result is reproducible among systems with different anisotropy. In higher fields where the FOT is not observed by the global dc magnetization measurement, a distinct feature is still observable in the local ac response, and the position of such a feature is weakly frequency dependent. We argue that the frequency-dependent feature above the critical point is likely to originate from the growth of a short-range order in the vortex system.
The single crystals of BSCCO are grown with a floating-zone method and are carefully annealed and quenched to obtain uniform oxygen content inside the sample. We obtained three different dopings by annealing the crystals at different temperatures in air; annealing at 800 °C for 72 hours gives an optimally-doped sample with $`T_c`$=91 K (sample A), 650 °C for 100 hours gives a lightly-overdoped sample with $`T_c`$=88 K (sample B), and 400 °C for 10 days gives an overdoped sample with $`T_c`$=80 K (sample C). All the samples have a transition width of less than 1.5 K. Careful quenching at the end of the anneal is essential for obtaining such a narrow transition width. $`T_c`$ is defined as the onset temperature of the Meissner signal in the dc magnetization measurement. The crystals are cut into platelets with lateral sizes larger than 3 $`\times `$ 3 mm<sup>2</sup> and the thickness of the samples is typically 0.02 mm. We used a very small (0.6 mm diameter) coaxial set of pickup and drive coils for our two-coil mutual-inductance measurements (see the inset to Fig. 1). The details of our technique have been described elsewhere. The amplitude of the drive current $`I_d`$ was 7.5, 7.5, and 1.0 mA for the measurements of samples A, B, and C, respectively. The linearity of the measured voltage with respect to $`I_d`$ was always confirmed. These $`I_d`$ produce an ac magnetic field of about 0.01–0.1 G at the sample. We emphasize that our two-coil geometry mainly induces and detects shielding currents flowing near the center of the sample, while usual ac-susceptibility measurements are most sensitive to shielding currents flowing near the edge of the sample. All the two-coil measurements are done in the field-cooled procedure. The global dc magnetization measurements are done with a Quantum Design SQUID magnetometer equipped with a slow temperature-sweep operation mode.
Figure 1(a) shows the temperature dependence of the in-phase signals of our two-coil measurement on sample A in 190 G, taken at various frequencies from 3 kHz to 24 kHz. To compare the signals at different frequencies, the data are plotted in units of inductance change. It is apparent that there is a frequency-independent step-like change at a temperature $`T_d`$, which is 68.5 K here. The temperature dependence of the global dc magnetization in the same field is shown in Fig. 1(b), which shows that the FOT takes place at exactly the same temperature as the step-like change in the two-coil signal.
According to the linear ac-response theory of the vortex system, the ac response is governed by the ac penetration depth $`\lambda _{ac}`$ . $`\lambda _{ac}`$ in our configuration is related to the in-plane resistivity $`\rho _{ab}`$ in the manner $`\rho _{ab}`$=Re$`(i\omega \mu _0\lambda _{ac}^2)`$ . It has been reported that the apparent resistivity measured in the mixed state of BSCCO is largely dominated by the surface current . A recent measurement of the bulk and surface contributions to the resistivity found that the bulk contribution shows a sharp change at the FOT, while the surface contribution is governed by the surface barrier and shows a broader change. Since our measurement is not sensitive to the edge current, it is expected that the $`\lambda _{ac}`$ of our measurement reflects mainly the bulk resistivity. Therefore, the step-like change in the local ac response most likely originates from the reported sharp change in the bulk resistivity . We note that there has been some confusion about the origin of the step-like change in the local ac response measured by the miniature two-coil technique, and it has been discussed that the source of the sudden change may be related to a change in the $`c`$-axis resistivity .
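To give a feel for the numbers this relation implies, the sketch below (Python) inverts $`\rho _{ab}`$=Re$`(i\omega \mu _0\lambda _{ac}^2)`$ under the simplifying assumption of a purely dissipative response, $`\lambda _{ac}^2=\rho _{ab}/(i\omega \mu _0)`$; the resistivity value is hypothetical and serves only to set the scale.

```python
import numpy as np

MU0 = 4e-7 * np.pi                      # vacuum permeability [H/m]

def lambda_ac(rho_ab, f):
    """Complex ac penetration depth, assuming the purely dissipative limit
    lambda_ac**2 = rho_ab/(i*omega*mu0) of rho_ab = Re(i*omega*mu0*lambda_ac**2)."""
    omega = 2.0 * np.pi * f
    return np.sqrt(rho_ab / (1j * omega * MU0))

# hypothetical in-plane resistivity of 1e-9 Ohm*m probed at 12 kHz:
print(abs(lambda_ac(1e-9, 12e3)))       # ~1e-4 m, i.e. ~0.1 mm
```

For these illustrative numbers $`|\lambda _{ac}|`$ is comparable to the sample thickness, which is consistent with the transition being visible in the transmitted two-coil signal.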
Figure 2(a) shows the $`T`$ dependence of the in-phase signals of our two-coil measurement on sample A in three different magnetic fields. We observed that the sharp step-like change in the two-coil signal becomes broadened when the magnetic field exceeds a certain limit $`H_{lim}`$; in the case of sample A, the step-like change is observed at fields up to 400 G, but becomes broadened at 500 G. It was found that this $`H_{lim}`$ corresponds to the magnetic-field value at the critical point of the FOT; namely, the FOT in the dc magnetization measurement also disappears in fields above $`H_{lim}`$. Figures 2(b) and 2(c) show that the FOT is observed in the dc magnetization at 400 G but is not detectable at 500 G. This is also clear evidence that the origin of the step-like change in the two-coil signal is the FOT.
In Fig. 2(a), the 500-G data do not show a step-like change, but clear changes in the slope at two separate temperatures, $`T_{k1}`$ and $`T_{k2}`$, are discernible. The signal changes much more rapidly between $`T_{k1}`$ and $`T_{k2}`$ than at temperatures outside of this region, so the data look as if the step-like change at $`T_d`$ has been broadened over the temperature region $`T_{k1}<`$$`T`$$`<T_{k2}`$. Figure 3 shows the in-phase signals of sample A in 600 G, which is above $`H_{lim}`$, taken at various frequencies. Apparently, $`T_{k1}`$ and $`T_{k2}`$ inferred from the 600-G data change with frequency, although the change is small. This indicates that $`T_{k1}`$ and $`T_{k2}`$ do not mark a true phase transition but a crossover.
Figures 4(a) and 4(b) show the in- and out-of-phase signals of samples B and C, respectively, in two selected magnetic fields below and above $`H_{lim}`$. Also in these two samples, the $`T`$ dependence of the two-coil signals shows a step-like change in magnetic fields below $`H_{lim}`$, while the change is broadened in $`H`$$`>`$$`H_{lim}`$. Figure 5 shows the $`T_d(H)`$ lines for the three samples determined by our two-coil measurements. Clearly, the $`T_d(H)`$ line tends to be steeper for more overdoped samples. The $`T_{FOT}`$ data obtained from the dc magnetization are also plotted in Fig. 5; apparently, $`T_d(H)`$ and $`T_{FOT}(H)`$ agree very well in all the three samples. The inset to Fig. 5 shows the $`T_d(H)`$ lines together with the $`T_{k1}(H)`$ and $`T_{k2}(H)`$ lines at higher fields (determined at 12 kHz), plotted versus the normalized temperature $`T/T_c`$. The $`T_{k1}(H)`$ and $`T_{k2}(H)`$ lines are much steeper than the $`T_d(H)`$ line.
After the existence of the first-order transition of the vortex system in BSCCO was established , much effort has been devoted to clarifying the details of the phase diagram. There is accumulating evidence that the FOT line is a sublimation line, at which a solid of vortex lines transforms into a gas of pancake vortices . In the $`H`$-$`T`$ phase diagram, there are two lines other than the FOT line, called the “depinning line” and the “second-peak line” . The three lines merge at the critical point; the depinning line separates the low- and high-temperature regions at fields above $`H_{lim}`$, and the second-peak line separates the high- and low-field regions at low temperatures. Apparently, our $`T_{k1}(H)`$ and $`T_{k2}(H)`$ lines are very similar to the depinning line; thus, an examination of the $`T_{k1}(H)`$ and $`T_{k2}(H)`$ lines is expected to give an insight into the nature of the depinning line.
Since the step-like change at $`T_d(H)`$ marks an abrupt onset of the long-range correlation in the vortex system, the broadened change between $`T_{k1}(H)`$ and $`T_{k2}(H)`$ is expected to indicate an increase of a (short-range) correlation in the vortex system. In general, a probe with a higher frequency looks at the physics at a shorter length scale ; in the case of our local ac response, $`\lambda _{ac}(\omega )`$ is smaller for larger $`\omega `$. With decreasing temperature, the local ac response is expected to show a qualitative change when the $`c`$-axis correlation length $`L_c`$ of the vortex system starts to grow, and another qualitative change at a lower temperature is expected when $`L_c`$ becomes comparable to $`\lambda _{ac}(\omega )`$. This is one possible scenario for what is happening at $`T_{k2}`$ and $`T_{k1}`$. The facts that $`T_{k1}`$ and $`T_{k2}`$ depend on frequency and that a higher frequency gives a higher apparent $`T_{k1}`$ are consistent with the above scenario.
Recently, Fuchs et al. used the change in the surface-barrier height for the determination of the vortex phase transformations (note here that the surface barrier is different from the geometrical barrier, which is only effective at low fields near $`H_{c1}`$), and the presence of a new transition line, the $`T_x`$ line, at temperatures higher than the depinning line (and above the FOT line) was suggested. Since it is almost clear that the vortex phase above the FOT line is a gas of pancake vortices at temperatures higher than this new $`T_x`$ line , the existence of the $`T_x`$ line implies that the depinning line separates a highly disordered entangled vortex solid (low-temperature side) from either (a) a disentangled liquid of lines with hexatic order or (b) some kind of solid which consists of an aligned stack of ordered two-dimensional pancake layers . Our data suggest that the latter possibility (b) is more likely, because the growth of the short-range correlation between $`T_{k1}`$ and $`T_{k2}`$ naturally corresponds to a growth of the alignment of the pancake layers in the latter picture. Note that we did not observe any feature which can be associated with the $`T_x`$ line; this is reasonable because the $`T_x`$ line only manifests itself in a change in the surface barrier, which has little effect on our measurement.
Finally, let us briefly discuss the magnetic-field dependence of $`T_d`$. As has been reported , the $`T_d(H)`$ line measured with the two-coil technique can be well fitted with the formula for the decoupling line . This is actually a matter of course, because our $`T_d(H)`$ line is identical to the $`T_{FOT}`$ line and the FOT is most likely to be a sublimation transition, which is essentially a decoupling transition. Fittings of our data to the decoupling formula $`H\simeq H_0(T_cT_d)/T_d`$ give anisotropy ratios $`\gamma `$ of $`\simeq `$100, $`\simeq `$85, and $`\simeq `$77 for samples A, B, and C, respectively (the prefactor is given by $`H_0\simeq \alpha _D\gamma ^{2}\varphi _0^3/(4\pi \lambda (0))^2T_cd`$, where $`\alpha _D\simeq `$0.1 is a constant, $`d`$=15 $`\mathrm{\AA }`$ is the spacing between the bilayers, and $`\lambda (0)\simeq `$2000 $`\mathrm{\AA }`$ is the penetration depth).
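To illustrate the fitting procedure, the sketch below recovers $`H_0`$ and $`T_c`$ from hypothetical $`(T_d,H)`$ points and then inverts the prefactor for $`\gamma `$; CGS units are used, and a factor $`k_B`$ is assumed to accompany $`T_c`$ in the prefactor, which the formula leaves implicit.

```python
import numpy as np
from scipy.optimize import curve_fit

def decoupling_line(Td, H0, Tc):
    """H = H0 * (Tc - Td) / Td."""
    return H0 * (Tc - Td) / Td

# hypothetical (T_d [K], H [G]) points read off a measured T_d(H) line
Td = np.array([75.0, 70.0, 65.0, 60.0])
H = np.array([123.0, 173.0, 231.0, 299.0])
(H0, Tc), _ = curve_fit(decoupling_line, Td, H, p0=(500.0, 91.0))

# invert H0 = alpha_D * gamma**(-2) * phi0**3 / ((4*pi*lam0)**2 * kB*Tc * d)
phi0, kB = 2.07e-7, 1.38e-16            # flux quantum [G cm^2], k_B [erg/K]
alpha_D, d, lam0 = 0.1, 15e-8, 2000e-8  # [-], bilayer spacing [cm], lambda(0) [cm]
gamma = np.sqrt(alpha_D * phi0**3
                / ((4.0 * np.pi * lam0)**2 * kB * Tc * d * H0))
print(H0, Tc, gamma)                    # gamma comes out of order 100 here
```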
In summary, we measured the local ac response of three BSCCO crystals (optimally doped, lightly overdoped, and overdoped samples) using a miniature two-coil technique and compared the result with global dc magnetization measurements. The origin of the step-like change in the two-coil measurement is identified as the first-order transition (FOT), where the bulk resistivity (which is free from the edge contribution) is reported to show a sharp change . The sudden step-like change in the two-coil signal starts to be broadened at fields above $`H_{lim}`$, where the FOT is no longer observed. This broadened change takes place between $`T_{k1}`$ and $`T_{k2}`$, and these two temperatures are still well defined, although they are frequency dependent. We argue that the observation of the feature at $`T_{k1}`$ and $`T_{k2}`$ likely indicates the growth of a short-range correlation of the vortex matter, which gives a clue to identifying the nature of the depinning line.
# PRECISION CRYSTAL CALORIMETRY IN HIGH ENERGY PHYSICS
## I INTRODUCTION
Total absorption shower counters made of inorganic scintillating crystals have been known for decades for their superb energy resolution and detection efficiency. In high energy and nuclear physics, large arrays of scintillating crystals have been assembled for precision measurements of photons and electrons. Recently, several crystal calorimeters have been designed and are under construction for the next generation of high energy physics experiments. Table I summarizes design parameters for these crystal calorimeters. One notes that each of these calorimeters requires several cubic meters of high quality crystals.
CsI(Tl) crystals are known to have a high light yield, so they were chosen by the two B Factory experiments, where low noise is essential for the low end of the energy reach. PbWO<sub>4</sub> crystals are distinguished by their high density, short radiation length and small Molière radius, so they were chosen by the CMS experiment to construct a compact crystal calorimeter of 25 radiation lengths. The low light yield of PbWO<sub>4</sub> crystals can be overcome by the gain of the photo-detector, such as PMTs and avalanche photodiodes (APDs). The unique physics capability of crystal calorimetry is the result of its superb energy resolution, hermetic coverage and fine granularity . Recently designed crystal calorimeters, however, face a new challenge: radiation damage caused by the increased center of mass energy and luminosity. While the dose rate is expected to be a few rad per day for CsI(Tl) crystals at the two B Factories, it would reach 15 to 600 rad per hour for PbWO<sub>4</sub> crystals at the LHC.
This paper discusses two key issues related to the precision of crystal calorimetry in situ, and the cause and cure of radiation damage in crystals. Light response uniformity and calibration in situ are discussed in Sections II and III. The effect of radiation damage is elaborated in Section IV. Section V discusses the damage mechanism for alkali halides, such as BaF<sub>2</sub> and CsI, and for oxides, such as bismuth germanate (Bi<sub>4</sub>Ge<sub>3</sub>O<sub>12</sub>, BGO) and PbWO<sub>4</sub>. Finally, a brief summary is given in Section VI.
All measurements, except where specified otherwise, were carried out at Caltech with samples from Beijing Glass Research Institute (BGRI), Bogoroditsk Techno-Chemical Plant (BTCP), Khar’kov and Shanghai Institute of Ceramics (SIC).
## II Crystal Light Response Uniformity
GEANT simulation shows that an adequate light response uniformity profile is a key to the precision of a crystal calorimeter.
The left side of Figure 1 shows a GEANT prediction of the energy fraction (top) and the intrinsic resolution (bottom) calculated by summing the energies deposited in a 3 $`\times `$ 3 sub-array, consisting of tapered BaF<sub>2</sub> crystals of 25 radiation lengths, as a function of the light response uniformity. In this simulation, the light response ($`y`$) of the crystal was parametrized as a normalized linear function:
$$\frac{y}{y_{mid}}=1+\delta (x/x_{mid}-1),$$
(1)
where y<sub>mid</sub> represents the light response at the middle of the crystal, $`\delta `$ represents the deviation of the light response uniformity, and x is the distance from the small (front) end of the tapered crystal.
While changes in the amplitude of the light output can be inter-calibrated, the loss of energy resolution caused by a degradation of the light response uniformity is not recoverable. To preserve the crystal’s intrinsic energy resolution, the light response uniformity must thus be kept within tolerance. According to the above simulation, the $`\delta `$ value is required to be less than 5% so that its contribution to the constant term of the energy resolution is less than 0.5%. A recent GEANT simulation for CMS PbWO<sub>4</sub> crystals confirmed this conclusion. The right side of Figure 1 shows the specification of the CMS PbWO<sub>4</sub> uniformity profile. While the slope at the front 3 X<sub>0</sub> is not restricted, it must be kept within 0.3%/X<sub>0</sub> in the middle 10 X<sub>0</sub>, and it is required to have a positive value of 8% in the back 12 X<sub>0</sub>, so that rear leakage at high energies can be compensated.
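As an aside, the uniformity deviation $`\delta `$ of Equation 1 is easily extracted from a longitudinal pulse-height scan by a least-squares fit; the sketch below does this for a fake nine-point scan of the kind described in Section IV (the geometry and numbers are illustrative only).

```python
import numpy as np

def uniformity_slope(x, y):
    """Least-squares estimate of delta in y/y_mid = 1 + delta*(x/x_mid - 1);
    x: distances from the small (front) end, y: measured pulse heights."""
    x_mid = 0.5 * (x.min() + x.max())
    y_mid = np.interp(x_mid, x, y)       # response at the crystal middle
    u = x / x_mid - 1.0
    return np.sum(u * (y / y_mid - 1.0)) / np.sum(u * u)

# nine evenly spaced points along the crystal axis (illustrative numbers)
x = np.linspace(1.0, 22.0, 9)            # cm from the front face
y = 1.0 + 0.04 * (x / 11.5 - 1.0)        # a fake scan with delta = 0.04
print(uniformity_slope(x, y))            # recovers 0.04
```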
By using PbWO<sub>4</sub> crystals tuned according to this specification, an energy resolution of $`\frac{\delta E}{E}=\frac{4.1\%}{\sqrt{E}}\oplus 0.37\%\oplus 0.15/E`$ ($`E`$ in GeV) was achieved in the CMS test beam at CERN using current production PbWO<sub>4</sub> crystals with a Si APD of 25 mm<sup>2</sup> . Figure 2 shows the distributions of the stochastic (left) and constant (middle) terms of the energy resolution, and the 0.45% energy resolution reconstructed in 3 $`\times `$ 3 PbWO<sub>4</sub> crystals for 280 GeV electrons (right). This 4.1% stochastic term will be reduced to 3% by using two APDs instead of one in the final design . Note that this constant term of 0.37% does not include uncertainties of the calibration in situ.
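The three terms of the resolution add in quadrature. A short numerical check, assuming the noise term of 0.15 is in GeV so that it enters as $`0.15/E`$, reproduces the quoted 0.45% at 280 GeV:

```python
import numpy as np

def sigmaE_over_E(E, a=0.041, b=0.0037, c=0.15):
    """(sigma_E/E)^2 = (a/sqrt(E))^2 + b^2 + (c/E)^2, with E in GeV."""
    return np.sqrt((a / np.sqrt(E))**2 + b**2 + (c / E)**2)

for E in (20.0, 100.0, 280.0):
    print(f"{E:5.0f} GeV: {100.0 * sigmaE_over_E(E):.2f}%")
# 280 GeV gives 0.45%, matching the value reconstructed in the text
```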
## III Precision Calibration in situ
Precision calibration is the key factor in maintaining the crystal calorimetry precision in situ. Although all individual cells of a crystal calorimeter may be calibrated in a test beam at several different energies to provide a set of initial calibration constants before installation, the change in response over time differs from one calorimeter element (crystal, photo detector, readout chain) to the next. The left plot in Figure 3 shows BGO aging as a function of time of operation for two half barrels and two endcaps . Inter-calibrations in situ are therefore required to track the evolution of each channel independently.
Calibration in situ is most commonly achieved by using physics processes produced by the beam, such as electrons or photons of known energy, electron or photon pairs reconstructible to a known invariant mass, the E/p of electrons with measured momentum, and the energy of minimum ionizing particles with known path length. Low energy $`\gamma `$-rays from radioactive sources or from radiative capture reactions are often used as a low energy calibration source. This is particularly important for those crystal calorimeters where physics processes do not occur at a high enough rate for a frequent calibration, such as the L3 BGO . Finally, a light pulser system is a useful tool to monitor the light collection in a crystal and the readout response. As discussed in Section IV, it can also serve as an inter-calibration in situ, if the scintillation mechanism of the crystals is not damaged.
Because of the limited statistics of physics processes, the L3 experiment uses a Radiofrequency Quadrupole (RFQ) based accelerator system for the BGO crystal calibration. The 17.6 MeV $`\gamma `$-rays from the radiative capture reaction
$$\mathrm{p}+_3^7\mathrm{Li}\rightarrow _4^8\mathrm{Be}+\gamma $$
(2)
are used as the calibration source; they are produced by bombarding a Li target, mounted inside the calorimeter, with a proton beam. Shown in the middle of Figure 3 is the installation of the RFQ calibration system in the L3 detector. Combined with Bhabha events, the RFQ system provides a sub-percent calibration in situ, as shown in the right plot of Figure 3 .
For the recently designed crystal calorimeters listed in Table I, KTeV uses the E/p of electrons from $`K_L\rightarrow \pi ^+e^{-}\nu `$, $`BaBar`$ and BELLE will use electrons from Bhabha scattering, and CMS will use the E/p from electrons and Z $`\rightarrow e^+e^{-}`$ mass reconstruction. In addition, $`BaBar`$ also uses 6.13 MeV $`\gamma `$-rays from a metastable state of <sup>16</sup>O with $`t_{1/2}`$ of 7 sec, which is produced by circulating a fluorine-containing fluid through a neutron source, and CMS also uses a light monitoring system to catch changes of the light collection in PbWO<sub>4</sub> crystals in situ. By using the E/p calibration, the KTeV CsI calorimeter has achieved a 0.6% resolution for electrons with energies larger than 20 GeV, indicating that an accuracy of better than 0.5% is achieved in the calibration in situ . The goal of the CMS experiment is to calibrate the PbWO<sub>4</sub> calorimeter to a similar or better level by using physics processes combined with light monitoring.
## IV Radiation Damage in Scintillating Crystals
All known crystal scintillators suffer from radiation damage. The most common damage phenomenon is the appearance of radiation-induced absorption bands caused by color center formation. Absorption bands reduce the crystal’s light attenuation length (LAL), and hence the light output. Color center formation, however, may or may not cause a degradation of the light response uniformity. Radiation also causes phosphorescence (afterglow), which leads to an increase of the readout noise. Additional effects may include a reduced intrinsic scintillation light yield (damage of the scintillation mechanism), which would lead to a reduced light output and a deformation of the light response uniformity. Damage may recover at room temperature, which leads to a so-called “dose rate dependence”. Finally, thermal annealing and optical bleaching may be effective in eliminating color centers in crystals. Reference and the references therein provide detailed information for readers with interest. Because of the limited scope we only highlight two points below.
First, the scintillation mechanism in scintillating crystals is usually not damaged by radiation. The degradation of the light output is thus due only to radiation-induced absorption, i.e. color center formation. As a consequence, irradiation does not change the light response uniformity. Figure 4 shows the light response uniformity as a function of accumulated dose for full size CsI(Tl) (left) and PbWO<sub>4</sub> (right) crystals. Pulse heights measured at nine points evenly distributed along the longitudinal axis of the crystal are fit to Equation 1, showing clearly that the slope ($`\delta `$) does not change up to 10 krad for a CsI(Tl) sample, even though only the front few cm of the sample were irradiated , and up to 2.2 Mrad for a PbWO<sub>4</sub> sample . This result is understood, as the intensities of all light rays attenuate equally after passing the same radiation-induced absorption zone in the crystal. A ray-tracing simulation shows that the slope of the light response uniformity depends only on the crystal geometry for crystals with a long enough light attenuation length, and will change only if the light attenuation length degrades to less than about 4 times the crystal length . This leads to the conclusion that a crystal’s energy resolution is not degraded by radiation although its calibration does change, which was confirmed by the beam test at CERN . Since the degradation of the amplitude of the light output can be inter-calibrated with physics events, or by a light monitoring system if it is caused by optical absorption, the crystal precision can be maintained in situ even if radiation damage does occur.
Second, the level of light output degradation under continuous irradiation at a certain dose rate approaches an equilibrium, leading to a dose-rate-dependent damage, which was also later confirmed in the CERN beam test . This “dose rate dependence” of the light output degradation is understood to be caused by the kinetics of the creation and annihilation of radiation-induced color centers .
If both annihilation and creation coexist, the color center density at the equilibrium depends on the dose rate applied. Assuming the annihilation speed of a color center $`i`$ is proportional to a constant $`a_i`$ and its creation speed is proportional to a constant $`b_i`$ and the dose rate ($`R`$), the differential change of the color center density when both processes coexist can be written as :
$$dD=\sum _{i=1}^{n}\{-a_iD_i+(D_i^{all}-D_i)b_iR\}dt,$$
(3)
where $`D_i`$ is the density of the color center $`i`$ in the crystal and the summation goes through all centers. The solution of Equation 3 is
$$D=\sum _{i=1}^{n}\{\frac{b_iRD_i^{all}}{a_i+b_iR}[1-e^{-(a_i+b_iR)t}]+D_i^0e^{-(a_i+b_iR)t}\},$$
(4)
where $`D_i^{all}`$ is the total density of the trap related to the center $`i`$ and $`D_i^0`$ is its initial density. The color center density in equilibrium ($`D_{eq}`$) thus depends on the dose rate ($`R`$).
$$D_{eq}=\sum _{i=1}^{n}\frac{b_iRD_i^{all}}{a_i+b_iR}.$$
(5)
By using color center kinetics, one can calculate, or predict, crystal damage at one dose rate by using data collected at another dose rate .
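A minimal numerical sketch of Equations 4 and 5 is given below; the per-center parameters $`a_i`$, $`b_i`$ and $`D_i^{all}`$ are hypothetical stand-ins for values that would, in practice, be extracted from data taken at a reference dose rate.

```python
import numpy as np

def density(t, R, a, b, D_all, D0):
    """Color center density at time t under dose rate R, Eq. (4)."""
    a, b, D_all, D0 = (np.asarray(q, dtype=float) for q in (a, b, D_all, D0))
    s = a + b * R
    return np.sum(b * R * D_all / s * (1.0 - np.exp(-s * t))
                  + D0 * np.exp(-s * t))

def density_eq(R, a, b, D_all):
    """Equilibrium density, Eq. (5); note the explicit dose-rate dependence."""
    a, b, D_all = (np.asarray(q, dtype=float) for q in (a, b, D_all))
    return np.sum(b * R * D_all / (a + b * R))

# two hypothetical centers: a_i [1/h], b_i [1/rad], D_i^all (arbitrary units)
a, b, D_all = [1e-2, 1e-3], [1e-3, 5e-4], [1.0, 0.5]
for R in (15.0, 600.0):                  # rad/h, cf. the LHC range quoted above
    print(R, density_eq(R, a, b, D_all))
```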
## V Damage Mechanism in Scintillating Crystals
Understanding the damage mechanism in scintillators helps to improve the quality of mass produced crystals, which is usually achieved through material analysis. Glow Discharge Mass Spectroscopy (GDMS) analysis was tried at Charles Evans & Associates and Shiva Technologies, looking for correlations between the trace impurities in crystals and their radiation hardness. Samples were taken 3 to 5 mm below the surface of the crystal to avoid surface contamination. For both CsI(Tl) and PbWO<sub>4</sub> crystals, a survey of 76 elements, including all of the lanthanides, indicates that there are no obvious correlations between the detected trace impurities and the crystal’s susceptibility to radiation damage. This indicates a possible role of other defects, such as oxygen contamination or stoichiometric vacancies, which cannot be determined by GDMS.
### A Damage Mechanism in Alkali Halides
Oxygen contamination is known to cause radiation damage in alkali halide scintillators. In BaF<sub>2</sub> , for example, hydroxyl (OH<sup>-</sup>) may be introduced into the crystal through a hydrolysis process, and later decomposed into interstitial and substitutional centers by radiation through a radiolysis process. Equation 6 shows a scenario of this process:
$$OH^{-}\rightarrow H_i^0+O_s^{-}\text{ or }H_s^{-}+O_i^0,$$
(6)
where subscripts $`i`$ and $`s`$ refer to interstitial and substitutional centers, respectively. Both $`O_s^{-}`$ and U ($`H_s^{-}`$) centers were identified .
Following the BaF<sub>2</sub> experience, an effort was made to remove oxygen contamination in CsI(Tl) crystals. A scavenger was used at SIC to remove the oxygen contamination, leading to a significant improvement of the CsI(Tl) quality . The left side of Figure 5 shows the light output as a function of accumulated dose for full size CsI(Tl) samples, compared to the $`BaBar`$ radiation hardness specification (solid line). While the later samples SIC-5, 6, 7 and 8 (with scavenger) satisfy the $`BaBar`$ specification, the early samples SIC-2 and 4 did not. The function of the scavenger is to form an oxide with a density less than that of CsI, which therefore migrates to the top of the ingot during the growth process, similar to zone refining. By doing so, both the oxygen and the scavenger are removed from the crystal.
Quantitative identification of the oxygen contamination in CsI(Tl) samples needs additional analysis. Gas Fusion (LECO) at Shiva Technologies West, Inc., found that the oxygen contamination in all CsI(Tl) samples is below the detection limit of 50 ppm. Secondary Ionization Mass Spectroscopy (SIMS) was tried at Charles Evans & Associates. A Cs ion beam of 6 keV and 50 nA was used to bombard the CsI(Tl) sample. All samples were freshly cleaved prior to being loaded into the UHV chamber. An area of 0.15 $`\times `$ 0.15 mm<sup>2</sup> on the cleaved surface was analyzed. To further avoid surface contamination, the starting point of the analysis is about 10 $`\mu `$m deep inside the freshly cleaved surface. The right side of Figure 5 shows the depth profile of the oxygen contamination for two rad-soft (SIC-T1 and SIC-2) and two rad-hard (SIC-T3 and Khar’kov) CsI(Tl) samples. Crystals with poor radiation resistance have an oxygen contamination of 10<sup>18</sup> atoms/cm<sup>3</sup> or 5.7 ppmw, which is 5 times higher than the background count (2$`\times 10^{17}`$ atoms/cm<sup>3</sup>, or 1.4 ppmw). The radiation damage in CsI(Tl) is indeed caused by oxygen contamination.
### B Damage Mechanism in Oxides
Crystal defects, such as oxygen vacancies, are known to cause radiation damage in oxide scintillators. In BGO, for example, three common radiation-induced absorption bands at 2.3, 3.0 and 3.8 eV were found in a series of 24 doped samples , indicating defect-related color centers, such as oxygen vacancies. Following the BGO experience, an effort was made at SIC to reduce oxygen vacancies in PbWO<sub>4</sub> crystals by oxygen compensation through post-growth thermal annealing in an oxygen-rich atmosphere, and the result was positive .
Particle Induced X-ray Emission (PIXE) and quantitative wavelength dispersive Electron Micro-Probe Analysis (EMPA) were tried at Charles Evans & Associates to quantify the stoichiometry deviation and oxygen vacancies in PbWO<sub>4</sub> crystals. Crystals with poor radiation hardness were indeed found to have a non-stoichiometric W/Pb ratio . However, neither PIXE nor EMPA provided an oxygen analysis. With X-ray Photoelectron Spectroscopy (XPS) at Charles Evans & Associates it was found to be very difficult to reach a stable quantitative conclusion because of the large systematic uncertainties in the oxygen analysis .
By using Transmission Electron Microscopy (TEM), a localized stoichiometry analysis made it possible to identify oxygen vacancies. A TOPCON-002B scope was first used at 200 kV and 10 $`\mu `$A. Samples were ground to powders with an average grain size of a few $`\mu `$m, and then placed on a supporting membrane. Figure 6 shows TEM pictures taken for a pair of samples of poor (left) and good (right) radiation hardness. Black spots with diameters of 5 – 10 nm were clearly observed in the poor sample, but not in the good sample. These black spots were identified as regions with a severe oxygen deficit by a localized stoichiometry analysis using TEM coupled to Energy Dispersion Spectrometry (EDS). Approaches to reduce oxygen deficits were taken by the crystal vendors, leading to production crystals of much improved quality.
## VI Summary
Precision crystal calorimetry extends the physics reach in experimental high energy physics because of its best achievable resolution for electrons and photons. An optimized light response uniformity is the key to reaching the crystal’s intrinsic energy resolution. A precision calibration is the key to maintaining the crystal precision in situ.
The predominant radiation damage effect in crystal scintillators is radiation-induced absorption, or color center formation, not damage to the scintillation mechanism. For precision calorimetry, a crystal scintillator must preserve its light response uniformity under irradiation, which requires a long enough initial light attenuation length and a low enough radiation-induced color center density. A precision light monitoring system may function as an inter-calibration for such crystals.
Radiation damage in alkali halides is caused by oxygen and/or hydroxyl contamination, as evidenced by a SIMS analysis and the effectiveness of a scavenger in removing oxygen contamination in CsI(Tl) crystals. Radiation damage in oxides is caused by stoichiometry-related defects, e.g. oxygen vacancies, as evidenced by a localized stoichiometry analysis using TEM/EDS, and the effectiveness of the oxygen compensation for PbWO<sub>4</sub> crystals.
## Acknowledgements
Measurements at Caltech were carried out by Mr. Q. Deng, H. Wu, D.A. Ma, Z.Y. Wei and T.Q. Zhou. Part of the PbWO<sub>4</sub> related work was carried out by Dr. C. Woody and his group at Brookhaven National Laboratory.
# Thermal Relaxation in One-Dimensional Self-Gravitating Systems
## 1 Introduction
The one-dimensional self-gravitating many-body system was originally discussed mainly as a simple toy model to understand violent relaxation, because the thermal relaxation timescale of its discrete realization, the sheet model, was believed to be long. Until the 1980s, it had been generally accepted that the thermal relaxation time of the system of $`N`$ equal-mass sheets is of the order of $`N^2t_c`$, where $`t_c`$ is the crossing time of the system.
However, by means of numerical simulation Luwel et al. have demonstrated that the relaxation time is of the order of $`Nt_c`$. Reidel and Miller reached a similar conclusion, though they reported the presence of systems which apparently did not relax over a much longer timescale.
In a series of papers, Tsuchiya et al. have studied the thermal relaxation process of one-dimensional self-gravitating systems in detail, by means of numerical integration over very long timescales (some of their experiments covered $`5\times 10^8t_c`$). They claimed that the thermal relaxation of the sheet model proceeds in a highly complex manner. On the “microscopic relaxation timescale” of $`Nt_c`$, each sheet forgets its initial condition, and the system is well mixed. However, according to them, the system does not really reach the thermal equilibrium on this timescale, and the distribution function remains different from that of the isothermal state. They called this state a quasiequilibrium.
By pursuing the time integration for a much longer timescale, Tsuchiya et al. found that the system exhibits transitions from one quasiequilibrium to another, and they claimed that the thermal equilibrium is only realized by averaging over timescales longer than that of these transitions. Thus, they argued that there exists a timescale for “macroscopic” relaxation, which is much longer than that of the usual thermal relaxation (what they called “microscopic relaxation”).
In this paper, we try to examine the nature of this “macroscopic” relaxation of the one-dimensional sheet model. In section 2, we describe the numerical model. In section 3, we present the result of the measurement of the relaxation time. It is shown that the relaxation time, defined as the timescale in which individual sheets change their energies, depends very strongly on the energy itself, and is very long for high energy sheets. This strong dependence of the relaxation timescale on the energy naturally explains the apparent “transient” phenomena observed by Tsuchiya et al. Section 4 discusses the implication and relevance of our results.
## 2 The Model
### 2.1 Sheet model
The Hamiltonian of the sheet model is given by
$$H=\frac{m}{2}\sum _{i=1}^{N}v_i^2+2\pi Gm^2\sum _{i<j}|x_i-x_j|,$$
(1)
where $`x_i`$ and $`v_i`$ are the position and velocity of sheet $`i`$, $`m`$ is the mass of the sheets, $`N`$ is the number of the sheets and $`G`$ is the gravitational constant. The crossing time is defined as
$$t_c=\frac{1}{4\pi GM}\sqrt{\frac{4E}{M}},$$
(2)
where $`M=mN`$ is the total mass of the system. Following Tsuchiya et al. and others, we use the system of units in which $`M=4E=4\pi G=1`$. In this system, $`t_c=1`$.
A unique feature of the one-dimensional gravitational system is that the thermal equilibrium exists, unlike for its counterpart in three dimensions. Rybicki obtained the distribution function
$$f(\epsilon )=\frac{1}{8}\left(\frac{1}{2\pi }\right)^{1/2}\left(\frac{3M}{2E}\right)^{3/2}\mathrm{exp}\left(-\frac{3M}{2E}\epsilon \right),$$
(3)
where $`\epsilon `$ is the specific binding energy defined as
$$\epsilon =\frac{v^2}{2}+\mathrm{\Psi }(x)-\mathrm{\Psi }(0).$$
(4)
Here, $`\mathrm{\Psi }(x)`$ is the specific potential energy. This distribution function satisfies the relation
$$\mathrm{exp}\left(-\frac{3M}{2E}\epsilon \right)=\mathrm{sech}^2\left(\frac{3x}{8E}\right).$$
(5)
We performed the time integration of the system with $`N=16`$, 32, 64, 128 and 256. For all systems, the initial condition is a water-bag with the aspect ratio $`x_{max}/v_{max}=2.5`$.
### 2.2 Numerical method
An important characteristic of the sheet model is that one can calculate the exact orbit of each sheet until two sheets cross each other. Thus, we can integrate the evolution of the system precisely (except for the round-off error). This is a great advantage compared to the systems in higher dimensions, whose orbits can be calculated only numerically. Instead of numerically integrating the orbit of each sheet, we calculate the exact orbit of each sheet until it collides with a neighboring sheet. Thus, by arranging the pair crossing times in a heap, we can handle each collision at an $`O(\mathrm{log}N)`$ calculation cost.
Note, however, that typically each sheet collides with all other sheets in one crossing time. Thus, the calculation cost is $`O(N^2\mathrm{log}N)`$ per crossing time. Our simulation with $`N=64`$ for $`2\times 10^7t_c`$ took 8 hours on a VT-Alpha workstation with a DEC Alpha 21164A CPU running at 533 MHz. For this run, the total energy of the system was conserved to better than $`3\times 10^{-12}`$. A minimal sketch of the event-driven scheme is given below.
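To make the bookkeeping concrete, here is a minimal Python sketch of the scheme. The rank-ordered accelerations and the velocity-swap treatment of equal-mass crossings are the standard sheet-model devices; for brevity all sheets are drifted at every event, an $`O(N)`$ step, whereas a production code keeps per-sheet update times to realize the $`O(\mathrm{log}N)`$ cost per collision quoted above.

```python
import heapq
import numpy as np

def evolve_sheets(x, v, m, t_end):
    """Event-driven sheet-model integration (units 4*pi*G = 1).

    Rank-ordered sheet i feels the constant acceleration
    a_i = 0.5*m*(N - 1 - 2*i) between crossings, so every orbit is an
    exact parabola; a crossing of two equal-mass sheets is handled by
    swapping their velocities, which keeps the ordering fixed."""
    x = np.asarray(x, dtype=float)
    v = np.asarray(v, dtype=float)
    order = np.argsort(x)
    x, v = x[order], v[order]
    N = len(x)
    a = 0.5 * m * (N - 1 - 2 * np.arange(N))
    stamp = np.zeros(N, dtype=np.int64)   # counters for lazy event invalidation
    t_now = 0.0

    def cross_time(i):
        """Absolute time when pair (i, i+1) next crosses (rel. accel. is -m)."""
        dx = max(x[i + 1] - x[i], 0.0)    # guard against round-off
        dv = v[i + 1] - v[i]
        return t_now + (dv + np.sqrt(dv * dv + 2.0 * m * dx)) / m

    events = [(cross_time(i), i, 0, 0) for i in range(N - 1)]
    heapq.heapify(events)

    def drift(t_new):
        nonlocal t_now
        dt = t_new - t_now
        x += (v + 0.5 * a * dt) * dt
        v += a * dt
        t_now = t_new

    while events and events[0][0] < t_end:
        t_ev, i, si, sj = heapq.heappop(events)
        if (si, sj) != (stamp[i], stamp[i + 1]):
            continue                       # stale event, discard
        drift(t_ev)
        v[i], v[i + 1] = v[i + 1], v[i]    # the sheets pass through each other
        stamp[i] += 1
        stamp[i + 1] += 1
        for j in (i - 1, i, i + 1):        # reschedule the affected pairs
            if 0 <= j < N - 1:
                heapq.heappush(events, (cross_time(j), j,
                                        stamp[j], stamp[j + 1]))
    drift(t_end)
    return x, v
```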
## 3 Results
### 3.1 Approach to the thermal equilibrium
Figure 1 shows the time-averaged energy distribution function $`N(\epsilon )`$ for different time periods and numbers of sheets. In all figures, the thin solid curve is the energy distribution of the isothermal distribution function of equation (3). What we see is quite clear. As we make the time interval longer, the time-averaged distribution function approaches the isothermal distribution. Thus, the numerical result suggests the system is ergodic. However, it also shows that the time needed to populate the high-energy region is very long. The sampling time interval is 128 time units for $`N=16`$, and 512 time units for $`N=64`$ and 128. Thus, in the case of $`T=2^{18}`$ and $`N=16`$ (dash-dotted curve in figure 1a), the total number of sample points is $`2^{15}=32768`$.
If we can assume that the sample points are uncorrelated, the probability that no sample exceeds the energy level $`\epsilon _0`$ is given simply by
$$P(\epsilon _0,N)=[1-P(\epsilon <\epsilon _0)]^n,$$
(6)
where
$$P(\epsilon <\epsilon _0)=\int _0^{\epsilon _0}N(\epsilon )𝑑\epsilon ,$$
(7)
and $`n`$ is the number of sample points. Figure 2 shows $`1-P(\epsilon )`$ as a function of $`\epsilon `$. For $`\epsilon =1.25`$, $`P(\epsilon )=0.996`$, and therefore the probability that none of the 32768 samples exceeds $`\epsilon =1.25`$ is practically zero ($`<e^{-100}`$), as the short estimate below illustrates. In other words, the numerical result seems to suggest that the system is not in the thermally relaxed state even after $`2\times 10^5`$ crossing times.
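The quoted bound follows directly from equation (6):

```python
import numpy as np
# Eq. (6) with P(eps < eps_0) = 0.996 at eps_0 = 1.25 and n = 2**15 samples:
print(np.exp(2**15 * np.log(1.0 - 0.004)))   # ~1e-57, far below e**-100
```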
Of course, this result is not surprising if the relaxation time is long. Samples taken with a time interval shorter than the relaxation time have a strong correlation, and therefore the effective number of independent samples can be smaller than $`n`$. Roughly speaking, if the relaxation time is longer than $`10^4`$, our numerical result is consistent with the assumption that the system is in the thermal equilibrium. In the next subsection, we investigate the relaxation time itself.
### 3.2 Relaxation timescale
We measured the following quantities:
$`D_1`$ $`=`$ $`{\displaystyle \frac{<\epsilon _i(t_0)-\epsilon _i(t_0+\mathrm{\Delta }t)>}{\mathrm{\Delta }t}},`$ (8)
$`D_2`$ $`=`$ $`{\displaystyle \frac{<[\epsilon _i(t_0)-\epsilon _i(t_0+\mathrm{\Delta }t)]^2>}{\mathrm{\Delta }t}}.`$ (9)
These quantities correspond to the coefficients of the first and second-order terms in the Fokker-Planck equation for the distribution function, and have been used as the measure of the relaxation in many studies (see, e.g., Hernquist and Barnes, Hernquist et al. ), for three-dimensional systems. However, to our knowledge this measure has not been used for the study of the sheet model.
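A sketch of how these binned coefficients, and the relaxation timescale defined in equation (10) below, can be estimated from two snapshots of the sheet energies is given here; function names and bin edges are illustrative.

```python
import numpy as np

def diffusion_coefficients(eps0, eps1, dt, bins):
    """Binned estimates of D1 and D2 (Eqs. 8, 9) from the specific energies
    of the sheets at t0 (eps0) and at t0 + dt (eps1); empty bins give NaN."""
    d = eps0 - eps1
    idx = np.digitize(eps0, bins)
    D1 = np.array([d[idx == k].mean() if np.any(idx == k) else np.nan
                   for k in range(1, len(bins))]) / dt
    D2 = np.array([(d[idx == k]**2).mean() if np.any(idx == k) else np.nan
                   for k in range(1, len(bins))]) / dt
    return D1, D2

# energy bins of width 0.15 as in the text; t_r = eps^2 / D2 then follows
bins = np.arange(0.0, 2.11, 0.15)
centers = 0.5 * (bins[:-1] + bins[1:])
# D1, D2 = diffusion_coefficients(eps0, eps1, dt=N, bins=bins)
# t_r = centers**2 / D2
```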
In order to see the dependence of these diffusion coefficients on the energy, we calculated them in intervals of $`\mathrm{\Delta }\epsilon =0.15`$. Figure 3 shows the results, for $`N=16,64`$ and 256. The time interval $`\mathrm{\Delta }t`$ was taken equal to $`Nt_c`$. We used smaller values for $`\mathrm{\Delta }t`$ and confirmed that the choice of $`\mathrm{\Delta }t`$ has a negligible effect if $`\mathrm{\Delta }t`$ is larger than $`10t_c`$ and smaller than $`4Nt_c`$. The time average is taken over the whole simulation period. We can see that both the first- and second-order terms show a very strong dependence on the energy of the sheets, and are of the order of $`1/(100N)`$ for $`\epsilon \simeq 1.5`$. Figure 3 suggests that the relaxation timescale grows exponentially as the energy grows. This behavior is independent of the value of $`N`$.
We can define the relaxation timescale as
$$t_r=\epsilon ^2/D_2,$$
(10)
that is, the timescale in which the energy changes significantly. Figure 4 shows this relaxation timescale for different values of $`N`$ and $`\epsilon `$. The relaxation time shows a very strong dependence on the energy, and the relaxation of high-energy sheets is much slower than that of sheets at lower energies. This is partly because of the dependence of $`t_r`$ on $`\epsilon `$ itself. However, as we can see in figure 3, the dependence of the diffusion coefficient is the main reason.
This result resolves the apparent contradiction between the fact that the relaxation timescale is of the order of $`Nt_c`$ and the fact that the system reaches the true thermal equilibrium only on a much longer timescale. It is true that the relaxation timescale is $`O(N)`$, but the coefficient in front of $`N`$ is quite large, in particular for sheets with high energies.
An important question is why the relaxation timescale depends so strongly on the energy. This is probably due to the fact that high-energy sheets have orbital periods significantly longer than the crossing time. Typical sheets have periods comparable to the crossing time, and therefore they are in strong resonance with each other. However, a high-energy sheet has a period longer than the crossing time, and thus it is out of resonance with the rest of the system. Therefore, the coupling between high-energy sheets and the rest of the system is much weaker than the coupling between sheets with average energy. This explains why the relaxation of high-energy sheets is slow.
## 4 Summary and Discussion
In this paper, we studied the thermal relaxation process in one-dimensional self-gravitating systems. We confirmed the result obtained by Tsuchiya et al. that the system reaches the thermal equilibrium only on a timescale much longer than $`Nt_c`$. However, we found that this is simply because the thermal relaxation timescale itself is much longer than $`Nt_c`$. Even for typical sheets, the relaxation timescale is around $`10Nt_c`$. In order to obtain good statistics, we need to take an average over many relaxation times. Moreover, the relaxation time for sheets in the high-energy end of the distribution function is even longer, since the relaxation timescale grows exponentially as the energy grows. Thus, it is not surprising that we have to wait for more than $`10^4Nt_c`$ to obtain good statistics.
Does this finding have any theoretical/practical relevance? Theoretically, there is nothing new in our result. What we found is simply that a numerical simulation should cover a period much longer than the relaxation timescale to obtain the statistical properties of the system, and that the relaxation timescale of a sheet depends on its energy. Both are obvious, but some of the previous studies neglected one or both of the above, and claimed to have found a complex behavior, which, in our view, is just a random walk.
Our finding of the long relaxation time by itself has rather little astrophysical significance, since in the large $`N`$ limit, the relaxation time is infinite anyway. However, since any numerical simulation suffers some form of numerical relaxation, it is rather important to understand how the relaxation effect changes the system. To illustrate this, we examine the claims by Tsuchiya et al. in some detail here.
They argued that the evolution of the mass sheet model proceeds in the following four steps: (1) virialization, (2) dynamical equilibrium, (3) quasiequilibrium, and (4) thermal equilibrium. According to them, the virialization timescale is of the order of $`t_c`$, and the energy of each sheet is “conserved” in the dynamical equilibrium phase, which continues up to $`t\sim Nt_c`$. Then, “microscopic relaxation” takes place on the timescale of $`t\sim Nt_c`$, where the energy of each sheet is relaxed, but the whole system needs a much longer timescale to reach the true equilibrium, because of some complex structure in the phase space.
Our numerical results are in good agreement with those of Tsuchiya et al., but our interpretation is much simpler: first the system virializes, and then relaxation proceeds on the timescale of thermal relaxation, which depends on the energy of the individual sheets. Thus, the central region with a short relaxation time relaxes to a distribution close to the thermal equilibrium in less than $`100Nt_c`$, but the distribution in the high-energy tail takes much longer to settle. In addition, the small number statistics in the high-energy region makes it necessary to average over many relaxation times to obtain good statistics. In other words, there is no distinction between the “microscopic” and “macroscopic” relaxation, and the evolution of the system is perfectly understood in terms of the standard thermal relaxation.
## Acknowledgments
I would like to thank Yoko Funato, Toshiyuki Fukushige and Daiichiro Sugimoto for stimulating discussions, and Shunsuke Hozumi for comments on the manuscript. This work is supported in part by the Research for the Future Program of Japan Society for the Promotion of Science (JSPS-RFTP97P01102).
# Best-fit parameters of MDM model from Abell-ACO power spectra and mass function
## 1 Introduction
The observable data on the large scale structure of the Universe obtained during the last years, coming from current experiments and observational programs, make it possible to determine more precisely the parameters of cosmological models and the nature of the dark matter. Up till now the most certain data concern the largest scale inhomogeneities, of the order of the current particle horizon of $`7000h^{-1}\text{ Mpc }`$ ($`h\equiv H_0/100km/s/Mpc`$, where $`H_0`$ is the present Hubble constant), which are obtained from the study of the all-sky temperature fluctuations of the cosmic microwave background (CMB) with $`10^o`$ angular resolution by the space experiment COBE (\[Smoot et al. 1992, Bennett et al. 1994, Bennett et al. 1996\]). According to them the primordial power spectrum of density fluctuations is approximately scale invariant, $`P_{pi}(k)=Ak^n`$ with $`n=1.1\pm 0.2`$, which agrees well with the predictions of the standard inflation model of the Early Universe ($`n=1`$, $`\mathrm{\Omega }_0=1`$). Besides, these data most reliably determine the amplitude of the linear power spectrum (or the normalization constant $`A`$), which does not depend on any transition processes, nonlinearity effects or other phenomena connected with the late stages of large scale structure formation. On the contrary, the CMB temperature fluctuations at degree and sub-degree scales, as well as the space distributions of clusters of galaxies, galaxies, quasars, Lyman-$`\alpha `$ clouds, etc., are defined by those processes and also depend essentially on the nature of the dark matter. Theoretically this is taken into account by introducing the transfer function $`T(k)`$ which transforms the primordial (post-inflation) spectrum into the postrecombination (initial) one, $`P(k)=P_{pi}(k)T^2(k)`$, which defines all characteristics of the large scale structure of the Universe. The transfer function depends also on the curvature of the Universe or the present energy density in units of the critical density, $`\mathrm{\Omega }_0`$, the vacuum energy density or cosmological constant $`\mathrm{\Omega }_\mathrm{\Lambda }`$, the content of baryons $`\mathrm{\Omega }_b`$, and the value of the Hubble constant. The theory of large scale structure formation is so far advanced today that all these dependencies can be accurately calculated for a fixed model by publicly available codes (e.g. the CMBfast one by \[Seljak & Zaldarriaga 1996\]). The actual problem now is the determination of the nature of the dark matter and of the rest of the above mentioned parameters by means of a comparison of the theoretically predicted and observable characteristics of the large scale structure of the Universe.
The most advanced candidates for the dark matter are cold dark matter (CDM) particles like axions, hot dark matter (HDM) particles like massive neutrinos with $`m_\nu `$ in the range 1–20 eV, and baryonic low luminosity compact objects. The last ones cannot dominate, as follows from the cosmological nucleosynthesis constraints ($`\mathrm{\Omega }_bh^2\le 0.024`$, \[Tytler et al. 1996, Songalia et al. 1997, Schramm and Turner 1997\]) and the observation of microlensing events in experiments like MACHO, DUO, etc. The pure HDM model conflicts with the existence of high redshift objects; the pure CDM one, on the contrary, overpredicts them. Therefore the mixed dark matter model (CDM+HDM+baryons) with $`\mathrm{\Omega }_{HDM}\equiv \mathrm{\Omega }_\nu \le 0.3`$ looks more viable. The advantage of these models is a small number of free parameters. But it is already understood today that models with the minimal number of free parameters, such as the standard cold dark matter (sCDM, one parameter) or the standard cold plus hot mixed dark matter (sMDM, two parameters), only marginally match the observable data. A better agreement between theoretical predictions and observable data is achieved in models with a larger number of free parameters (tilted CDM, open CDM, CDM or MDM with the cosmological term; see the review in \[Valdarnini et al. 1998\] and references therein).
The oscillations of solar and atmospheric neutrinos registered by the SuperKamiokande experiment show that the difference of rest masses between the $`\tau `$\- and $`\mu `$-neutrinos is $`0.02<\mathrm{\Delta }m_{\tau \mu }<0.08eV`$ \[Fukuda et al. 1998, Primack & Gross 1998\]. This also gives a lower limit for the mass of the neutrino, $`m_\nu \ge |\mathrm{\Delta }m|`$, and does not exclude models with cosmologically significant values of 1–20 eV. Therefore, at least two species of neutrinos can have approximately equal masses in this range. Some versions of elementary particle theories predict $`m_{\nu _e}\approx m_{\nu _\tau }\approx 2.5eV`$ and $`m_{\nu _\mu }\approx m_{\nu _s}\approx 10^{-5}eV`$, where $`\nu _e`$, $`\nu _\tau `$, $`\nu _\mu `$ and $`\nu _s`$ denote the electron, $`\tau `$, $`\mu `$ and sterile neutrinos accordingly (e.g. \[Berezhiani et al. 1995\]). The strongest upper limit for the neutrino mass comes from the data on the large scale structure of our Universe: $`\mathrm{\Sigma }_im_{\nu _i}/(93h^2\mathrm{eV})\le 0.3`$ (\[Holtzman 1989, Davis, Summers & Schlegel 1992, Schaefer & Shafi 1992, Van Dalen & Schaefer 1992\], Novosyadlyj 1994, \[Pogosyan & Starobinsky 1995, Ma 1996, Valdarnini et al. 1998\]), which for $`h=0.8`$ (the upper observable limit for $`h`$) gives $`\mathrm{\Sigma }_im_{\nu _i}\le 18eV`$. It is interesting that the upper limit for the mass of the electron neutrino obtained from the supernova burst SN1987A neutrino signal is approximately the same, $`m_{\nu _e}\le 20eV`$.
Is it possible to find the best fit neutrino mass from the experimental data on the large scale structure of the Universe? The problem is that it must be determined together with a large number of other uncertain parameters such as $`h`$, $`\mathrm{\Omega }_m`$, $`\mathrm{\Omega }_b`$, etc. Here we study the possibility of finding them by the $`\chi ^2`$ minimization method. The realization of such a task became possible in principle after the appearance in the literature of accurate analytical approximations of the transfer function for the mixed dark matter model in the at least 4-dimensional space of the above mentioned cosmological parameters, $`T(k;\mathrm{\Omega }_b,m_\nu ,N_\nu ,h)`$ (\[Eisenstein & Hu 1997b, Novosyadlyj et al. 1998\]). Indeed, even the CMBfast codes are still too bulky and slow for searching the cosmological parameters by $`\chi ^2`$ minimization methods, like the Levenberg-Marquardt one (see \[Press et al. 1992\]).
The next problem is the choice of the observable data suitable for the solution of this task. They must be accurate enough, sensitive to those parameters, and not too dependent on the model assumptions about the formation and nature of the objects. The most sensitive to the presence of a neutrino component are scales of the order of and smaller than its free-streaming (or Jeans) scale, $`k\gtrsim k_J(z)=8\left(\frac{m_\nu }{10eV}\right)/\sqrt{1+z}h\text{ Mpc}^{-1}`$, because perturbations at these scales are suppressed, and this is imprinted in the transfer function of the HDM component. At $`z\simeq 0`$, for cosmologically significant neutrino masses, this is approximately the galaxy cluster scale. The power spectrum reconstructed from the space distributions of galaxies is distorted significantly by nonlinearity effects, the accounting of which is model dependent (\[Peacock & Dodds 1994\]). The models of the formation of smaller scale structures or high redshift objects (e.g. Lyman-$`\alpha `$ damped systems, Lyman-$`\alpha `$ clouds, quasars etc.) contain additional assumptions and parameters, which makes their use rather problematic in such an approach. The CMB temperature anisotropy at subdegree angular scales (first and second acoustic peaks) involves minimal additional assumptions (e.g. secondary ionization), but its sensitivity to the presence of a neutrino component is low ($`\sim 10\%`$, \[Dodelson et al. 1995\]). These data are sensitive and suitable for determining by $`\chi ^2`$ minimization methods another set of parameters, such as the tilt of the primordial spectrum $`n`$, $`\mathrm{\Omega }_0`$, $`h`$, $`\mathrm{\Omega }_b`$, $`\mathrm{\Omega }_\mathrm{\Lambda }`$ or/and the parameters of scaling seed models of structure formation (see \[Lineweaver & Barbosa 1997, Durrer et al. 1997\]).
The data on the Abell-ACO power spectrum and the mass function of rich clusters seem to be suitable for determining the best fit values of $`m_\nu `$ and $`N_\nu `$ because they do not depend on the above mentioned additional assumptions.
The data on the rich cluster power spectrum (\[Einasto et al. 1997\]) were used by Eisenstein et al. 1997 and Atrio-Barandela et al. 1997 for analyzing the $`100h^{-1}\text{ Mpc }`$ clustering. The first collaboration group tried to explain the narrow peak in the power spectrum at the $`100h^{-1}\text{ Mpc }`$ scale by baryonic acoustic oscillations in low- and high-$`\mathrm{\Omega }_0`$ models ($`\mathrm{\Omega }_0=\mathrm{\Omega }_{CDM}+\mathrm{\Omega }_b`$). In both cases such an approach needs a very high content of baryons, $`\mathrm{\Omega }_b>0.3`$, which is essentially outside the cosmological nucleosynthesis constraints. The second one has shown that this feature is in agreement with the Saskatoon data (\[Netterfield et al. 1997\]) on the $`\mathrm{\Delta }T/T`$ power spectrum at subdegree angular scales. They concluded that these data prefer models with a built-in scale in the primordial power spectrum, which can be generated in more complicated inflation scenarios (e.g. double inflation).
To reduce the number of free parameters we restrict ourselves to the analysis within the framework of the matter dominated Universe and the standard inflation scenario: $`\mathrm{\Omega }_m=\mathrm{\Omega }_0=1`$, $`n=1`$, without the tensor mode of cosmological perturbations. The free parameters in our task will be the baryon content $`\mathrm{\Omega }_b`$, the dimensionless Hubble constant $`h`$, the neutrino mass $`m_\nu `$, and the number of species of neutrinos with equal masses, $`N_\nu `$.
The outline of this paper is as follows: the observable data which will be used here are described in Section 2. The method of determination of the parameters and its testing are described in Sect. 3. The results of the search for best fit parameters under different combinations of free and fixed ones are presented in Sect. 4. A discussion of the results and the conclusions are given in Sects. 5 and 6 accordingly.
## 2 Experimental data set
The most favorable data for the search of the best fit cosmological parameters are the real power spectrum reconstructed from the redshift-space distribution of Abell-ACO clusters of galaxies (\[Einasto et al. 1997, Retzlaff et al. 1997\]). It is a biased linear spectrum reliably estimated for $`0.03\le k\le 0.2h/Mpc`$, whose position of maximum ($`k_{max}\approx 0.05h/Mpc`$) and slopes before and after it are sensitive to the baryon content $`\mathrm{\Omega }_b`$, the Hubble constant $`h`$, the neutrino mass $`m_\nu `$ and the number of species of massive neutrinos $`N_\nu `$ (see Fig.1-4). Here in the numerical calculations the data of the latest estimation of the power spectrum by \[Retzlaff et al. 1997\] will be used. All the sources of systematic and statistical uncertainties, as well as the window function and the differences between the Abell and ACO parts of the sample, have been accurately taken into account there. The values of the Abell-ACO power spectrum for 13 values of $`k`$, $`\stackrel{~}{P}_{A+ACO}(k_j)`$ ($`j=1,\mathrm{},13`$), and their $`1\sigma `$ errors are presented in Table 1 and are shown in the figures.
Other observable data which will be used here are the constraints on the amplitude of the fluctuation power spectrum at the cluster scale derived from the cluster mass and X-ray temperature functions. It is usually formulated as a constraint on the density fluctuations in a top-hat sphere of 8$`h^{-1}`$ Mpc radius, $`\sigma _8`$, which can be easily calculated for a given initial power spectrum $`P(k)`$:
$$\sigma _8^2=\frac{1}{2\pi ^2}\int _0^{\mathrm{\infty }}k^2P(k;\mathrm{\Omega }_b,h,m_\nu ,N_\nu )W^2(8k/h)𝑑k,$$
$`(1)`$
where $`W(x)=3(\mathrm{sin}x-x\mathrm{cos}x)/x^3`$ is the Fourier transform of the top-hat window function. The different collaboration groups gave similar results, which are in the range of $`\stackrel{~}{\sigma }_8\approx 0.5-0.7`$. The new optical determination of the mass function of nearby galaxy clusters (\[Girardi et al. 1998\]) gives the median value $`\stackrel{~}{\sigma }_8=0.60\pm 0.04`$<sup>1</sup><sup>1</sup>1The Jeans scale for the neutrino component in all cases analysed here is smaller than the cluster scales, therefore all the matter is clustered and the term $`\mathrm{\Omega }^{0.6}`$ in the original form is omitted. It matches very well the cluster X-ray temperature function (\[Viana & Liddle 1996\]). To take into account the data of other authors I shall be more conservative and will use it with $`3\sigma `$ error bars instead of $`1\sigma `$ ones. But, as we will see, for the best fit parameters determined here the predicted $`\sigma _8`$ value is not ruled out even by the $`1\sigma `$ limit of the observable one by \[Girardi et al. 1998\].
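Equation (1) is straightforward to evaluate by direct quadrature; the sketch below does this for a toy spectrum with an arbitrary normalization, for illustration only ($`k`$ is taken in $`\mathrm{Mpc}^{-1}`$).

```python
import numpy as np
from scipy.integrate import quad

def W(x):
    """Top-hat window in Fourier space: W(x) = 3(sin x - x cos x)/x^3."""
    return 3.0 * (np.sin(x) - x * np.cos(x)) / x**3

def sigma8(P, h, kmin=1e-5, kmax=1e2):
    """Eq. (1) with k in Mpc^-1; P is a callable linear power spectrum."""
    val, _ = quad(lambda k: k**2 * P(k) * W(8.0 * k / h)**2,
                  kmin, kmax, limit=200)
    return np.sqrt(val / (2.0 * np.pi**2))

# toy spectrum, P(k) ~ k at small k with an arbitrary turnover
print(sigma8(lambda k: 2.0e4 * k / (1.0 + (k / 0.05)**3), h=0.5))
```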
The COBE 4-year data will be used here for the normalization of the power spectra. A useful fit to them is the amplitude of the density perturbation at the horizon crossing scale, $`\delta _h`$, which for a flat model with $`n=1`$ equals $`\delta _h=1.94\times 10^{-5}`$ (\[Liddle et al. 1996, Bunn and White 1997\]). Taking into account the definition of $`\delta _h`$ (\[Liddle et al. 1996\]) and the power spectrum, the normalization constant $`A`$ is calculated as
$$A=2\pi ^2\delta _h^2(3000/h)^4Mpc^4.$$
## 3 Method and its testing
The Abell-ACO power spectrum is connected with the matter one by means of the cluster biasing parameter $`b_{cl}`$:
$$P_{A+ACO}(k)=b_{cl}^2P(k;\mathrm{\Omega }_b,h,m_\nu ,N_\nu ).$$
$`(2)`$
For fixed parameters $`\mathrm{\Omega }_b`$, $`h`$, $`m_\nu `$, $`N_\nu `$ and $`b_{cl}`$, the values of $`P_{A+ACO}(k_j)`$ are calculated for the same $`k_j`$ as in Table 1, and $`\sigma _8`$ according to (1). Let us denote them by $`y_j`$ ($`j=1,\mathrm{},14`$), where $`y_1,\mathrm{},y_{13}`$ correspond to $`P_{A+ACO}(k_1),\mathrm{},P_{A+ACO}(k_{13})`$, and $`y_{14}`$ is $`\sigma _8`$. Their deviation from the observable data set (denoted by the tilde) can be described by $`\chi ^2`$:
$$\chi ^2=\sum _{j=1}^{14}\left(\frac{\stackrel{~}{y}_j-y_j}{\mathrm{\Delta }\stackrel{~}{y}_j}\right)^2,$$
$`(3)`$
where $`\stackrel{~}{y}_j`$ and $`\mathrm{\Delta }\stackrel{~}{y}_j`$ are the experimental data and their dispersions, respectively. The parameters $`\mathrm{\Omega }_b`$, $`h`$, $`m_\nu `$, $`N_\nu `$ and $`b_{cl}`$, or a subset of them, can then be determined by minimizing $`\chi ^2`$ with the Levenberg-Marquardt method (\[Press et al. 1992\]). The derivatives of the predicted values with respect to the search parameters, required by this method, are calculated numerically. The step size for their calculation was tuned empirically and set to $`10^{-5}`$ of the parameter values.
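In code, this minimization might look as follows (a minimal sketch using SciPy's Levenberg-Marquardt implementation; the routine `model(params)` returning the 14 predicted values $`y_j`$ is an assumed ingredient, and the relative finite-difference step mirrors the $`10^{-5}`$ quoted above):

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(params, y_obs, y_err, model):
    # chi^2 of Eq. (3) is the sum of squares of these residuals
    return (y_obs - model(params)) / y_err

def fit(p0, y_obs, y_err, model):
    sol = least_squares(residuals, np.asarray(p0, dtype=float),
                        args=(y_obs, y_err, model),
                        method="lm", diff_step=1e-5)  # numerical derivatives
    return sol.x, np.sum(sol.fun**2)  # best-fit parameters and chi^2
```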
The analytical approximation of the MDM transfer function will be used in the form:
$$\begin{array}{c}T_{MDM}(k;\mathrm{\Omega }_b,h,m_\nu ,N_\nu ;z)=\hfill \\ T_{CDM+b}(k;\mathrm{\Omega }_b,h;z)D(k;\mathrm{\Omega }_b,h,\mathrm{\Omega }_\nu ,N_\nu ;z),\hfill \end{array}$$
where $`T_{CDM+b}(k;\mathrm{\Omega }_b,h;z)`$ is the transfer function of \[Eisenstein & Hu 1997a\] for the CDM+baryon system ($`z`$ is the redshift); the correction factor for the HDM component, $`D(k)`$, is used in the form given by \[Novosyadlyj et al. 1998\]. It is accurate in a sufficiently wide range of the search parameters (for a more detailed analysis of its accuracy see \[Novosyadlyj et al. 1998\]). We assume a scale-invariant primordial power spectrum, so the initial power spectrum of the MDM models is $`P_{MDM}(k)=AkT_{MDM}^2(k;\mathrm{\Omega }_b,h,m_\nu ,N_\nu ;z)`$.
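Schematically, the model spectrum is then assembled as a product of fitting formulae. In the sketch below, `t_cdm_b` and `d_nu` stand for the \[Eisenstein & Hu 1997a\] and \[Novosyadlyj et al. 1998\] approximations, assumed to be implemented elsewhere:

```python
def p_mdm(k, A, omega_b, h, m_nu, n_nu, z=0.0):
    """Scale-invariant primordial spectrum times the factorized MDM transfer function."""
    T = t_cdm_b(k, omega_b, h, z) * d_nu(k, omega_b, h, m_nu, n_nu, z)
    return A * k * T**2
```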
The method was tested in the following way. I calculated the MDM power spectrum for given parameters (e.g. $`\mathrm{\Omega }_b=0.15`$, $`\mathrm{\Omega }_\nu =0.2`$, $`N_\nu =1`$, $`h=0.5`$) using the CMBfast code, normalized it to the 4-year COBE data, calculated $`\stackrel{~}{\sigma }_8`$ and interpolated $`P(k)`$ at the same $`k_j`$ ($`j=1,\dots ,13`$) as in Table 1. Then I took the cluster biasing parameter $`b_{cl}=3`$ and calculated the model $`\stackrel{~}{P}_{A+ACO}(k_j)`$. The 'experimental' errors for them, as well as for $`\stackrel{~}{\sigma }_8`$, were taken to be the same relative errors as in Table 1. These model experimental data were then used, like the real ones in Table 1, to search for the parameters $`\mathrm{\Omega }_b`$, $`h`$, $`\mathrm{\Omega }_\nu `$, and $`b_{cl}`$ ($`N_\nu `$ is fixed and the same). The initial (start) values of the parameters were randomly offset from the given ones. In all cases the code recovered all the given parameters with high accuracy.
## 4 Dependence of the density fluctuation power spectrum at cluster scale on cosmological parameters
Before finding the best-fit parameters, let us examine how the power spectrum of density fluctuations at the cluster scale depends on the search parameters. For this we leave only $`b_{cl}`$ as a free parameter and fix the remaining ones. In Fig. 1 the dependence of the rich cluster power spectrum on $`\mathrm{\Omega }_\nu `$ is shown for $`h=0.5`$, $`\mathrm{\Omega }_b=0.05`$ and $`N_\nu =1`$. The r.m.s. density fluctuations in the top-hat sphere of $`8h^{-1}\text{ Mpc }`$ radius in models with $`\mathrm{\Omega }_\nu =`$0.1, 0.2, 0.3, 0.4 are $`\sigma _8=`$ 0.93, 0.81, 0.75, 0.71 respectively. The best-fit values of $`b_{cl}`$ are given in the caption of Fig. 1. The deviations of the predicted rich cluster power spectra and mass functions in these models from the observed ones are $`\chi ^2=17.3,\mathrm{\hspace{0.33em}9.88},\mathrm{\hspace{0.33em}6.64},\mathrm{\hspace{0.33em}5.33}`$ respectively. Therefore, for the MDM model with $`h=0.5`$, $`\mathrm{\Omega }_b=0.05`$ and $`N_\nu =1`$ the Abell-ACO power spectrum and mass function prefer high $`\mathrm{\Omega }_\nu `$ ($`0.3-0.4`$).
Now we repeat the same calculations for different numbers of species of massive neutrinos, $`N_\nu =1,\mathrm{\hspace{0.33em}2},\mathrm{\hspace{0.33em}3}`$, with fixed $`\mathrm{\Omega }_\nu =0.2`$ (Fig. 2). The $`\sigma _8`$'s for these 3 models are 0.81, 0.73, 0.68 respectively; the corresponding deviations of the predicted rich cluster power spectra and mass functions from the observed ones are $`\chi ^2=`$9.88, 6.48, 5.54. So, the MDM model with three species of equal-mass neutrinos is preferred.
In the first two cases ($`h`$ fixed and equal to 0.5) the neutrino mass was different for different $`\mathrm{\Omega }_\nu `$ ($`N_\nu `$ fixed) and $`N_\nu `$ ($`\mathrm{\Omega }_\nu `$ fixed), because they are connected by the relation
$$m_\nu =93\mathrm{\Omega }_\nu h^2/N_\nu \mathrm{eV}.$$
$`(4)`$
Let us fix the neutrino mass ($`m_\nu =2.5`$ eV), assume $`N_\nu =2`$, and repeat the calculations for different $`h=`$0.5, 0.6, 0.7. The results are shown in Fig. 3. The $`\sigma _8`$ values for these 3 models are 0.71, 0.98, 1.24. The $`\chi ^2`$ over all points of the power spectrum and $`\sigma _8`$ are 5.72, 19.9 and 42.6 respectively. Therefore, when the neutrino mass is fixed (by laboratory experiments, for example) the data prefer low $`h`$.
Similarly, we calculate the rich cluster power spectra for different $`\mathrm{\Omega }_b`$ with the rest of the parameters fixed. The results for $`\mathrm{\Omega }_b=`$0.05, 0.1, 0.15, 0.2, 0.25, 0.3 are presented in Fig. 4. The corresponding $`\sigma _8`$'s are 0.71, 0.64, 0.58, 0.53, 0.48, 0.44; the $`\chi ^2`$ values characterizing the deviations of the predicted values from the observed ones are 5.72, 4.28, 3.61, 3.70, 4.59, 6.39 for these models. The minimum of $`\chi ^2`$ is reached for the model with $`\mathrm{\Omega }_b=0.15`$.
As we see, the theoretically predicted values of the chosen data are sensitive to the search parameters $`m_\nu `$, $`N_\nu `$, $`\mathrm{\Omega }_b`$ and $`h`$. The question now is where the global minimum of $`\chi ^2`$ lies in the space of these parameters when all or some of them are left free.
## 5 Results
The search for $`m_\nu `$, $`N_\nu `$, $`\mathrm{\Omega }_b`$ and $`h`$ by the $`\chi ^2`$ Levenberg-Marquardt minimization method is realized in the following way. We leave $`m_\nu `$, $`\mathrm{\Omega }_b`$, $`h`$ and $`b_{cl}`$, or a subset of them, free and find the minimum of $`\chi ^2`$ for $`N_\nu `$=1, 2, 3 in turn. The lowest of these values is taken as the minimum of $`\chi ^2`$ for each set of free parameters, because $`N_\nu `$ takes only discrete values.
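In code, this outer loop over the discrete $`N_\nu `$ might look as follows (a sketch reusing the `fit` routine above; `model` is the assumed routine computing the 14 predictions for given parameters and $`N_\nu `$):

```python
results = []
for n_nu in (1, 2, 3):
    params, chi2 = fit(p0, y_obs, y_err, lambda p, n=n_nu: model(p, n))
    results.append((chi2, params, n_nu))
chi2_min, best_params, best_n_nu = min(results, key=lambda r: r[0])
```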
The key point is narrowing the range of the search parameter values. The analytical approximation of the MDM power spectra used here is accurate enough in the following range of parameters: $`0.3\le h\le 0.7`$, $`\mathrm{\Omega }_\nu \le 0.5`$, $`\mathrm{\Omega }_b\le 0.3`$, $`N_\nu \le 3`$ (\[Novosyadlyj et al. 1998\]). The range of validity of this analytical approximation thus sets the upper and lower boundaries of the search range for $`h`$, $`\mathrm{\Omega }_\nu `$ and $`\mathrm{\Omega }_b`$; we treat these boundaries as 'mirror walls'.
### 5.1 All parameters are free
The minima of $`\chi ^2`$ in the 4-dimensional space of parameters $`\mathrm{\Omega }_\nu `$, $`\mathrm{\Omega }_b`$, $`h`$ and $`b_{cl}`$ for models with 1, 2 and 3 species of massive neutrinos are achieved for the sets of parameters presented in Table 2. The spectra for them are shown in Fig. 5 and the $`\sigma _8`$'s are given in Table 2. (The accuracy of the analytical approximation of the MDM spectra is better than 5%.)
As we can see, $`\chi ^2`$ is a few times lower than the formal number of degrees of freedom, $`d=n-m`$, where $`n`$ is the number of data points and $`m`$ is the number of free parameters. The reason is that not all the points of the Abell-ACO power spectrum presented in Table 1 are independent. A numerical experiment has shown that the minimal number of points which determine the same MDM parameters is $`\approx 7`$ (the odd points of $`P_A(k_i)`$ in Table 1, for example). Indeed, such a spectrum can be described by the amplitude and inclination in the small and large scale ranges and a second-order curve in the peak (or maximum) range. It means that the real $`d\approx 3-4`$.
Therefore, in the 5-dimensional space of free parameters ($`\mathrm{\Omega }_\nu `$, $`N_\nu `$, $`\mathrm{\Omega }_b`$, $`h`$ and $`b_{cl}`$) the global minimum of $`\chi ^2`$ is achieved for the MDM model with 3 species of massive neutrinos. It has the lowest $`m_\nu `$ and the highest $`h`$, which better matches the direct measurements of the Hubble constant. However, it is unexpected that the resulting $`\mathrm{\Omega }_\nu `$ is so high and $`\mathrm{\Omega }_b`$ so low. They contradict the data on high redshift objects and the nucleosynthesis constraint ($`0.007\le \mathrm{\Omega }_bh^2\le 0.024`$, \[Tytler et al. 1996, Songalia et al. 1997, Schramm and Turner 1997\]) respectively. The MDM models with such a high $`\mathrm{\Omega }_\nu `$ ($`0.4-0.5`$) also have a problem with galaxy formation, since $`\sigma _0\lesssim 1`$ for them. Let us analyze the cases with additional constraints which can lead us out of this difficulty.
### 5.2 Consistency with the nucleosynthesis constraint
Increasing the baryon content can alleviate this difficulty (see \[Eisenstein et al. 1997\]). We fix the baryon content at the upper limit resulting from the nucleosynthesis constraint, $`\mathrm{\Omega }_bh^2=0.024`$, and keep the remaining parameters free. The best-fit parameters found are given in Table 3; the rich cluster power spectrum for the case with 3 species of massive neutrinos is shown in Fig. 6 (dotted line). The spectra for the cases with 1 and 2 species are close to this one.
As we can see, $`\mathrm{\Omega }_\nu `$ increases when $`\mathrm{\Omega }_b`$ decreases, and the minima of $`\chi ^2`$ are again achieved at high $`\mathrm{\Omega }_\nu `$. But they are quite close to the corresponding minima of the previous table ($`\mathrm{\Delta }\chi ^2<1`$).
### 5.3 When the neutrino mass is known
An interesting question ensuing from the last two items is: which best-fit values of $`\mathrm{\Omega }_b`$ and $`h`$ can be obtained from these data on the Abell-ACO power spectrum and mass function when the neutrino mass is known, having been determined by some physical or astrophysical experiment. Let us assume that $`m_\nu `$ is fixed but the number of species $`N_\nu `$ is unknown. We fix $`\mathrm{\Omega }_\nu `$ by relation (4) and leave the rest of the parameters free. The search in this approach was unsuccessful because it halted at the upper limit $`\mathrm{\Omega }_b`$=0.3. When this 'mirror wall' was removed, solutions were found, but with an extremely high baryon content for which the accuracy of the analytical approximation of the MDM spectra is worse ($`15-20\%`$). Results for $`m_\nu `$=2.5 eV and 3 eV are presented in Table 4. The rich cluster power spectrum for $`m_\nu =2.5`$ eV and $`N_\nu =3`$ is shown in Fig. 6 (dashed line). The spectra for 1 and 2 species are close to this one.
The $`\chi ^2`$'s in all cases here are lower than in Table 2 because the performance of the analytical approximation of the MDM spectra for such high $`\mathrm{\Omega }_b`$ and $`h`$ is essentially worse than within its range of validity. Therefore we cannot conclude that the global minimum of $`\chi ^2`$ in the 4-dimensional space of parameters $`m_\nu `$, $`N_\nu `$, $`\mathrm{\Omega }_b`$ and $`h`$ is in the range of high $`\mathrm{\Omega }_b`$ and $`h`$; it is at the point with the parameters given in the last row of Table 2. But we certainly conclude that when $`m_\nu \approx 2-3`$ eV the minimum is absent in the range $`\mathrm{\Omega }_b\le 0.3`$, $`0.3\le h\le 0.7`$. Therefore, among the MDM models with $`m_\nu \le 4`$ eV and $`N_\nu \le 3`$, the Abell-ACO power spectrum and mass function prefer $`\mathrm{\Omega }_b>0.3`$ and $`h\approx 0.8`$, which agrees well with the results of \[Eisenstein et al. 1997\].
### 5.4 $`\mathrm{\Omega }_b`$ and $`m_\nu `$ are fixed
One can now ask which $`h`$ is preferred by the Abell-ACO power spectrum and mass function when the neutrino mass and the baryon content are fixed by other observational constraints or theoretical arguments. Let us set $`m_\nu =2.5`$ eV ($`\mathrm{\Omega }_\nu =m_\nu N_\nu /93h^2`$) and fix $`\mathrm{\Omega }_b=0.024/h^2`$ at the upper limit of the nucleosynthesis constraint. Only $`h`$ and $`b_{cl}`$ are free parameters. Their best-fit values for 1, 2 and 3 species of massive neutrinos are presented in Table 5. The rich cluster power spectrum for the $`N_\nu =3`$ MDM model with those parameters is shown in Fig. 6 (dash-dotted line). The spectra for 1 and 2 species are close to this one.
As we see, in the MDM model with 3 species of 2.5 eV neutrinos the best-fit values of $`h`$ and $`\sigma _8`$ are closer to the corresponding observational data than in the models with 1 or 2 species.
## 6 Discussion
The rich cluster power spectra of the models with the best-fit parameters are within the error bars of the corresponding experimental data (Figs. 5-6). But none of them explains the peak at $`k\approx 0.05h/Mpc`$, which corresponds to the linear scale $`\approx 120h^{-1}\text{ Mpc }`$. It has an excess power of $`\approx 50\%`$ in comparison with the best-fit model and of $`\approx 30\%`$ in comparison with the high-$`\mathrm{\Omega }_b`$ one. It is even more prominent in the data of \[Einasto et al. 1997\]. Apparently, it is a real feature of the power spectrum. The necessity of a similar feature in the power spectrum was argued earlier on the basis of the explanation of the Great Attractor phenomenon (\[Hnatyk et al. 1995, Novosyadlyj 1996\]). The sample of Abell-ACO clusters of galaxies used by \[Retzlaff et al. 1997\] is placed in a $`60^o`$ double-cone with the axis pointing towards the Milky Way pole. The Great Attractor, on the contrary, is located in the plane of our galaxy. Therefore, they are independent experimental demonstrations of the reality of this peak. Other important arguments for its validity come from the pencil-beam redshift survey of \[Broadhurst et al. 1990\] and from the 2-dimensional power spectrum of the Las Campanas Redshift Survey (\[Landy et al. 1996\]). The angular correlations in the APM survey (\[Gaztanaga & Baudh 1997\]) and the high-redshift absorption lines in quasar spectra (\[Quashnock et al. 1996\]) also show similar features at these scales. It was also shown by \[Atrio-Barandela et al. 1997\] that this $`120h^{-1}\text{ Mpc }`$ peak agrees well with the Saskatoon data on the $`\mathrm{\Delta }T/T`$ power spectrum. Therefore, the data on the rich cluster power spectrum used here are based on surveys which represent a fair sample of $`120h^{-1}\text{ Mpc }`$ structures, and that peak is significant despite the large error bars of the experimental data.
Obviously, turning to open ($`\mathrm{\Omega }_0<1`$) models or to flat models with a cosmological term ($`\mathrm{\Omega }_0+\mathrm{\Omega }_\mathrm{\Lambda }=1`$) does not improve the situation with the explanation of this peak in our approach, because the maximum of the power spectrum in those models is shifted to larger scales in comparison with the matter-dominated flat models analyzed here. Explaining it by baryonic acoustic oscillations calls for an extremely high baryon content, which disagrees with the nucleosynthesis constraint (see \[Eisenstein et al. 1997\]). Therefore we again face the necessity of considering models with a built-in scale in the primordial power spectrum.
Let us determine the parameters of this peak. The comparison of the rich cluster power spectrum predicted by the MDM model with the best-fit parameters (Table 2) with the observed one shows that the peak has an approximately Gaussian form. Therefore we approximate it by the function $`p(k)=1+a_pexp(-2(k_p-k)^2/w_p^2)`$, where $`a_p`$, $`k_p`$ and $`w_p`$ are the amplitude, center and width of the peak respectively. We set the power spectrum in the form $`P_{MDM+p}(k)=P_{MDM}(k;\mathrm{\Omega }_b,h,m_\nu ,N_\nu )p(k;a_p,k_p,w_p)`$, and repeat the previous calculations with the additional free parameters $`a_p`$, $`k_p`$ and $`w_p`$.
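A sketch of this parametrization (names are illustrative; `p_mdm` is the routine from the earlier sketch):

```python
import numpy as np

def peak(k, a_p, k_p, w_p):
    """Gaussian peak factor p(k) = 1 + a_p exp(-2(k_p - k)^2 / w_p^2)."""
    return 1.0 + a_p * np.exp(-2.0 * (k_p - k)**2 / w_p**2)

def p_mdm_peak(k, A, omega_b, h, m_nu, n_nu, a_p, k_p, w_p):
    return p_mdm(k, A, omega_b, h, m_nu, n_nu) * peak(k, a_p, k_p, w_p)
```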
One might suspect that this peak is what causes the high best-fit values of $`\mathrm{\Omega }_\nu `$ or $`\mathrm{\Omega }_b`$ in Tables 2-4. The results of the search for the best-fit parameters in the 8-dimensional space of the MDM+peak model parameters show that this is not so, which agrees well with the numerical results of \[Retzlaff et al. 1997\]. Introducing the peak does decrease $`\chi ^2`$, but the MDM model parameters change only weakly, because they are determined mainly by the inclination of the Abell-ACO power spectrum after the peak and by $`\stackrel{~}{\sigma }_8`$, the most accurate value of the data set used here. The models with 3 species of massive neutrinos are preferred, as in the previous cases. In Table 6 the best-fit parameters of the MDM models with 3 species of massive neutrinos, as well as the best-fit parameters of the peak, are presented for 4 cases: all the MDM parameters free (1st row), the baryon content $`\mathrm{\Omega }_b`$ fixed at the upper limit of the nucleosynthesis constraint (2nd), the neutrino mass fixed at $`m_\nu =2.5`$ eV (3rd), both $`\mathrm{\Omega }_b`$ and $`m_\nu `$ fixed (4th). The $`\chi ^2`$ values for them are 0.81, 0.86, 1.11, 1.04 respectively. In all the cases except (3) $`\sigma _8=0.6`$; in case (3) $`\sigma _8=0.66`$. The rich cluster power spectra for these cases are shown in Fig. 7.
Introducing such a peak increases the predicted bulk velocities in a top-hat sphere of radius R, whose r.m.s. values can be calculated according to
$$V_R^2=\frac{H_0^2}{2\pi ^2}\int _0^{\mathrm{\infty }}dkP_{MDM}(k)W^2(kR),$$
where $`W(kR)`$ is the Fourier transform of the top-hat window for this sphere. So, for $`R=50h^{-1}\text{ Mpc }`$ it increases from $`\approx 340`$ km/s to $`\approx 360`$ km/s for the best-fit model (3rd row of Table 2, 1st row of Table 6) and from $`\approx 330`$ km/s to $`\approx 345`$ km/s for the model with fixed $`m_\nu `$ and $`\mathrm{\Omega }_b`$ (3rd row of Table 5, last row of Table 6). The observed value of the bulk velocity at this scale is $`\stackrel{~}{V}_{50}=375\pm 85`$ km/s, which follows from the Mark III POTENT results (\[Kolatt & Dekel 1997\]). Therefore, this peak is also favored by the data on the large scale peculiar velocities of galaxies and on Great Attractor like structures. However, the models with high values of $`\mathrm{\Omega }_\nu \approx 0.4-0.5`$ ($`m_\nu \approx 4-7`$ eV), which are the best-fit ones for the Abell-ACO data, have problems with the explanation of galaxy scale structures and high redshift objects. But models with intermediate $`\mathrm{\Omega }_\nu \approx 0.2-0.3`$ ($`m_\nu \approx 2.5`$ eV, $`N_\nu =2-3`$) are not ruled out by these data ($`\mathrm{\Delta }\chi ^2<1`$). On the contrary, the CDM model with $`\mathrm{\Omega }_b\le 0.2`$ and $`h\le 0.5`$ is ruled out by these data at a high confidence level, since for it $`\mathrm{\Delta }\chi ^2\approx 15`$.
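A sketch of this bulk-velocity integral, reusing `tophat_window` from the earlier sketch (with $`k`$ in $`h`$/Mpc and $`R`$ in $`h^{-1}`$ Mpc, $`H_0`$ enters as 100 km/s per $`h^{-1}`$ Mpc; the finite integration limits are illustrative):

```python
import numpy as np
from scipy.integrate import quad

def v_bulk(pk, R):
    """r.m.s. bulk velocity (km/s) in a top-hat sphere of radius R in h^-1 Mpc."""
    H0 = 100.0  # km/s per h^-1 Mpc, consistent with k in h/Mpc
    integrand = lambda k: pk(k) * tophat_window(k * R)**2
    value, _ = quad(integrand, 1e-5, 10.0, limit=500)
    return H0 * np.sqrt(value / (2.0 * np.pi**2))
```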
Finally, it must be noted that a primordial spectrum feature like this peak is inherent to double inflation models (\[Kofman et al. 1985, Kofman & Linde 1987\], Kofman $`\&`$ Pogosyan 1988, Gottloeber et al. 1991, \[Polarski & Starobinsky 1992\]) and to the inflationary model wherein the inflaton field evolves through a kink in the potential (\[Starobinsky 1992\]). These two classes of models were confronted with the observational data on the Abell-ACO power spectrum by \[Lesgourgues et al. 1997\] and \[Retzlaff et al. 1997\] respectively.
## 7 Conclusions
The Abell-ACO power spectrum of \[Retzlaff et al. 1997\] and the mass function of \[Girardi et al. 1998\] prefer, in the parameter space of the MDM model ($`\mathrm{\Omega }_0=1`$), a region with high $`\mathrm{\Omega }_\nu `$ ($`0.4-0.5`$), low $`\mathrm{\Omega }_b`$ ($`\approx 0.01`$) and $`h`$ in the range $`0.4-0.6`$. The best-fit parameters are as follows: $`N_\nu =3`$, $`m_\nu =4.4`$ eV, $`h=0.56`$, $`\mathrm{\Omega }_b\approx 0.01`$. Unfortunately, the experimental uncertainties of the data used here for the determination of these parameters give no chance to rule out models with different sets of parameters at a sufficiently high confidence level. The MDM models with the baryon content at the upper limit of the nucleosynthesis constraint ($`\mathrm{\Omega }_bh^2=0.024`$) do not exceed $`\mathrm{\Delta }\chi ^2=1`$ relative to the best-fit model (see Table 3). The high-$`\mathrm{\Omega }_b`$ ($`0.4-0.5`$) solutions are obtained when the neutrino mass is fixed at $`\le 3`$ eV.
Introducing into the primordial power spectrum an artificial peak of Gaussian form decreases $`\chi ^2`$ and increases the bulk motions, but does not essentially change the best-fit parameters of the MDM models. It means that these parameters are determined mainly by the inclination of the Abell-ACO power spectrum at scales smaller than the scale of the peak position, and by $`\stackrel{~}{\sigma }_8`$, the most accurate value of the data set used here.
Hence, the power spectrum of the Abell-ACO clusters of galaxies and the mass function are a sensitive test of the MDM model parameters. But more accurate data on the power spectrum of the matter density fluctuations are necessary for a more certain determination of the cosmological parameters.
Acknowledgments This work was performed thanks to the financial support granted by the Swiss National Science Foundation (grant NSF 7IP050163) and the DAAD in Germany (Ref. 325). The author also thanks the AIP for hospitality and S. Gottloeber for useful discussions.
# Leptogenesis and Yukawa textures
## I Introduction
Relationships between fermion masses and mixings have been the subject of much theoretical interest, starting with a postulated relationship between the Cabibbo angle and the down and strange quark masses. Most of the unknown parameters in the Standard Model (SM) occur in the Yukawa sector, and any relationship between these parameters is welcome theoretically and experimentally testable. The interest in models of fermion masses and mixings accelerated with the advent of grand unified theories. In these models, the gauge multiplets of the Standard Model are unified into multiplets of the grand unified gauge group, and relationships between the parameters emerge naturally as a consequence of the larger symmetry. These models can be augmented by global symmetries, or texture zeros in the Yukawa coupling matrices can be assumed, leading to further predictions.
A mystery of the Yukawa sector is the obvious hierarchy that exists in fermion masses and mixings. The top quark mass is much larger than the charm quark mass, which in turn is much larger than the up quark mass, for example. The Cabibbo-Kobayashi-Maskawa (CKM) matrix is measured experimentally to have small mixing angles. Clearly a fundamental theory that explains the origin of the couplings in the Yukawa sector, rather than just parameterizing them, should explain these features. The Yukawa sector of the Standard Model is parametrized in terms of $`3\times 3`$ matrices, so the hierarchy exhibits itself as a hierarchy among the elements of these matrices.
The usual predictions from these models of fermion masses and mixings are relationships between the masses of quarks and leptons or between the mixing angles of the CKM matrix and dimensionless ratios of the quark and lepton masses. In addition there are often predictions for the amount of charge conjugation-parity (CP) violation and predictions for the CP asymmetries of meson decays. It is of interest to consider whether further constraints are obtained after making reasonable assumptions about some other physical observable. In this paper we consider textures (or patterns of zero entries) of Yukawa coupling matrices and assume that the baryon asymmetry has its origins in the decay of heavy Majorana neutrinos which violate lepton number. The asymmetry in lepton number is recycled into a baryon number asymmetry via the sphaleron process. This idea was first put forward by Fukugita and Yanagida and has come to be known as baryogenesis via leptogenesis. While this constraint is admittedly more speculative than the comparison of masses and mixing angles derived from experiment, it is instructive to determine which properties of the Yukawa textures are essential for the baryogenesis via leptogenesis to work.
Sakharov pointed out that a small baryon asymmetry may have been produced in the early universe if three conditions are satisfied: 1) baryon number is violated, 2) charge conjugation symmetry (C) and CP are violated, and 3) there is a departure from thermal equilibrium (a baryon asymmetry could arise even in thermal equilibrium if CPT is violated; see, for example, Ref. ). Since both C and CP are violated in the Standard Model, and baryon number is violated by a nonperturbative effect called sphalerons, the natural place to look first to explain the generation of a baryon asymmetry is within the Standard Model itself. However, this line of reasoning does not work: the required Higgs mass is too small and has been ruled out by the direct searches at LEP. One can try to extend the Standard Model: one popular approach is to consider the Minimal Supersymmetric Standard Model (MSSM) and assume that the source of the CP violation is still contained within the CKM matrix. One can achieve the observed baryon asymmetry, but only at the expense of going to a corner of parameter space; namely, one requires a light scalar top quark (top squark or stop). This kind of solution should rightly be regarded as unnatural on theoretical grounds, although it is of much experimental and phenomenological interest, primarily because it is just out of reach.
We pursue in this paper the alternative approach of baryogenesis via leptogenesis. With the confirmation of the atmospheric neutrino anomaly by the SuperKamiokande experiment, it seems plausible to start from the point of view that neutrino mixing is occurring and that the neutrinos have masses generated by a see-saw mechanism. The heavy Majorana neutrino mass matrix is obtained by inverting the type-I see-saw formula
$$m_{\mathrm{eff}}=m_D^T\frac{1}{M_N}m_D,$$
(1)
and using the neutrino masses and mixings required by the solar and atmospheric neutrino oscillation experiments as input<sup>§</sup><sup>§</sup>§We neglect the possibility that there are contributions from a left-handed triplet Higgs. The neutrino sector contains a new source of CP violation: the interference between tree-level and one-loop contributions to the Majorana neutrino decays can give rise to a lepton asymmetry. In this scenario, the amount of CP violation that gives rise to a lepton asymmetry, and ultimately to a baryon asymmetry, depends critically on the Dirac mass matrix of the neutrinos in two ways: (1) the overall hierarchy pattern of the matrix, and (2) the placement of the texture zeros<sup>¶</sup><sup>¶</sup>¶If a texture zero predicts a small level of CP violation at the GUT scale, this suppression will be preserved by the renormalization group scaling.. Furthermore, the generated lepton asymmetry can be erased by subsequent lepton-number violating scattering, and this dilution can depend on the placement of the texture zeros.
In this paper we start from the position that the positive observations in the solar and atmospheric neutrino experiments suggest that there is a new scale of heavy physics. We assume that heavy right-handed neutrinos exist and that the lightness of the observed neutrinos is the result of a see-saw mechanism. In this framework we study the plausibility of leptogenesis for neutrino Dirac mass matrices with texture zeros and hierarchical structure similar to the ones that are consistent with low-energy data in the quark sector.
## II Yukawa Textures
Ramond, Ross and Roberts (RRR) performed a systematic search for all possible symmetric quark and lepton mass matrices with five texture zeros at the unification scale that are compatible with low-energy measurements. They found a total of five possible solutions, which we display again in Table I for convenience.
We assume the Dirac neutrino mass matrix at the GUT scale has the same texture zeros as the up quark matrix
$$m_D^{\prime }\equiv m_u=\lambda _uv\mathrm{sin}\beta .$$
(2)
In certain situations where the Yukawa interactions are minimal, the grand unified symmetry enforces an exact equality. More generally one might expect the elements of the neutrino texture and the up quark texture not to be exactly equal, but to be related by Clebsch coefficients (very often a factor of 3). These Clebsch factors are typically small and do not upset the hierarchy of the matrices. The general qualitative features of our analysis are not affected by these factors of order one, since the amount of baryon asymmetry generated in a model with a particular texture is governed by the hierarchy (given in terms of the parameter $`\lambda `$, which is fixed by the Cabibbo angle) and the position of the texture zeros.
## III Numerical Solutions
We consider the small angle MSW solution of the solar neutrino problem through the mixing $`\nu _e\to \nu _\mu `$ and the maximal mixing solution of the atmospheric neutrino oscillations through the mixing $`\nu _\mu \to \nu _\tau `$. We take as inputs the following neutrino oscillation parameters, consistent with the experimental measurements:
$`0.8<\mathrm{sin}^22\theta _{23}<1,`$ $`10^{-3}<\mathrm{\Delta }m_{23}^2<10^{-2}\mathrm{eV}^2,`$ (3)
$`3\times 10^{-3}<\mathrm{sin}^22\theta _{12}<2\times 10^{-2},`$ $`5\times 10^{-6}<\mathrm{\Delta }m_{12}^2<10^{-5}\mathrm{eV}^2.`$ (4)
The inverse neutrino mass matrix is
$`m_{\mathrm{eff}}^{-1}`$ $`=`$ $`\left(\begin{array}{ccc}1/m_1& 0& 0\\ 0& 1/m_2& 0\\ 0& 0& 1/m_3\end{array}\right)`$ (5)
This can be rotated by a mixing matrix
$`V`$ $`=`$ $`V_{13}V_{12}V_{23},`$ (6)
where $`V_{ij}`$ is a rotation by an angle $`\theta _{ij}`$ between the $`i`$th and $`j`$th rows and columns. For example,
$`V_{12}`$ $`=`$ $`\left(\begin{array}{ccc}c_{12}& s_{12}& 0\\ -s_{12}& c_{12}& 0\\ 0& 0& 1\end{array}\right),`$ (7)
where $`s_{ij}=\mathrm{sin}\theta _{ij}`$ and $`c_{ij}=\mathrm{cos}\theta _{ij}`$. We have taken the mixing matrix to be real for simplicity. Then the Majorana mass matrix in this basis is
$`M_N`$ $`=`$ $`m_D^{\prime T}Vm_{\mathrm{eff}}^{-1}V^Tm_D^{\prime },`$ (8)
The lepton asymmetry is created by the decay of the lightest of the heavy Majorana neutrinos. Consequently we have to go to a basis in which the Majorana mass matrix is diagonal. It can be diagonalized by a matrix $`K`$ so that
$`M_N^{\mathrm{diag}}`$ $`=`$ $`K^TM_NK,`$ (9)
Note that the Dirac and Majorana mass matrices are related through Eq. (1). It can easily be seen that the Dirac neutrino mass matrix in such a basis is
$`m_D`$ $`=`$ $`Km_D^{\prime }K^T.`$ (10)
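The construction of Eqs. (5)-(10) is straightforward to sketch numerically. For the real mixing matrix used here $`M_N`$ is real and symmetric, so `numpy.linalg.eigh` supplies the diagonalizing matrix $`K`$ (a sketch; all names are illustrative):

```python
import numpy as np

def rotation(i, j, theta):
    """Rotation by theta in the (i, j) plane, cf. Eq. (7)."""
    V = np.eye(3)
    c, s = np.cos(theta), np.sin(theta)
    V[i, i] = V[j, j] = c
    V[i, j], V[j, i] = s, -s
    return V

def to_majorana_diagonal_basis(m_D_prime, m_light, th12, th23, th13):
    V = rotation(0, 2, th13) @ rotation(0, 1, th12) @ rotation(1, 2, th23)       # Eq. (6)
    M_N = m_D_prime.T @ V @ np.diag(1.0 / np.asarray(m_light)) @ V.T @ m_D_prime  # Eq. (8)
    eigvals, K = np.linalg.eigh(M_N)                                             # Eq. (9)
    m_D = K @ m_D_prime @ K.T                                                    # Eq. (10)
    return np.abs(eigvals), m_D   # heavy masses (up to Majorana signs), rotated m_D
```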
The CP asymmetries in the neutrino decays arise from the interference between the tree-level and one-loop decay amplitudes:
$`ϵ_j`$ $`=`$ $`{\displaystyle \frac{1}{8\pi v_2^2}}{\displaystyle \frac{1}{(m_D^{\dagger }m_D)_{jj}}}{\displaystyle \underset{n\ne j}{\sum }}\mathrm{Im}\left[(m_D^{\dagger }m_D)_{nj}^2\right]g\left({\displaystyle \frac{a_n}{a_j}}\right),`$ (11)
where
$`g(x)`$ $`=`$ $`\sqrt{x}\left[\mathrm{ln}\left({\displaystyle \frac{1+x}{x}}\right)+{\displaystyle \frac{2}{x-1}}\right],`$ (12)
$`a_i=M_i^2`$, and $`v_2=v\mathrm{sin}\beta `$. The other parameter of most interest is the mass parameter
$`\stackrel{~}{m}_1`$ $`=`$ $`{\displaystyle \frac{(m_D^{\dagger }m_D)_{11}}{M_1}},`$ (13)
which largely controls the amount of dilution caused by the lepton number violating scattering (this parameter is especially important in supersymmetric scenarios, where there exist a large number of lepton number violating scattering diagrams and the Yukawa interactions are much more important). A large enough lepton asymmetry can result only if $`\stackrel{~}{m}_1`$ is in the range $`10^{-5}\mathrm{eV}<\stackrel{~}{m}_1<10^{-2}\mathrm{eV}`$. For too small values of $`\stackrel{~}{m}_1`$, the Yukawa interactions are too weak to bring the neutrinos into equilibrium at high temperatures. For too high $`\stackrel{~}{m}_1`$, the lepton number violating scatterings wash out most of the asymmetry after it is generated.
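A sketch of Eqs. (11)-(13) for the lightest state ($`j=1`$, index 0 below), given the Dirac matrix in the $`M_N`$-diagonal basis and the heavy masses; to mirror the procedure described below, a (maximal) CP phase must be inserted into $`m_D`$ by hand:

```python
import numpy as np

def g(x):
    """Loop function of Eq. (12)."""
    return np.sqrt(x) * (np.log((1.0 + x) / x) + 2.0 / (x - 1.0))

def epsilon_1(m_D, M, v2):
    """CP asymmetry of Eq. (11) for the lightest heavy neutrino."""
    h = m_D.conj().T @ m_D   # (m_D^dagger m_D); a_i = M_i^2
    s = sum(np.imag(h[n, 0]**2) * g(M[n]**2 / M[0]**2) for n in (1, 2))
    return s / (8.0 * np.pi * v2**2 * h[0, 0].real)

def m_tilde_1(m_D, M):
    """Dilution mass parameter of Eq. (13)."""
    h = m_D.conj().T @ m_D
    return h[0, 0].real / M[0]
```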
Scanning over the allowed ranges for the neutrino mixing angles and taking $`\lambda =0.22`$, one can find the regions in the $`m_1m_2m_3`$ parameter space for which the following two conditions are satisfied:
* $`|ϵ_1|>10^{-6}`$
* $`10^{-5}\mathrm{eV}<\stackrel{~}{m}_1<10^{-2}\mathrm{eV}`$
We assume for definiteness in our numerical results that the neutrino Dirac mass matrix is identical to the up quark mass matrix<sup>\**</sup><sup>\**</sup>\**Relaxing this choice, or choosing a different sign for the neutrino mixing angles, will change the quantitative results, but not the qualitative ones.. There is an undetermined phase in this procedure, which we can assume to be such that the maximal CP asymmetry is obtained, since we are determining the points for which it is possible to obtain the required baryon asymmetry.
As an example, the allowed region for Texture 4 is shown in Fig. 1, with the neutrino mixings set to $`s_{23}=0.55`$, $`s_{12}=0.07`$ and $`s_{13}=0.03`$. For a particular choice of the neutrino mixing angles only a narrow three-dimensional region of the full parameter space is allowed. This region is characterized by larger values of the lightest right-handed neutrino mass ($`10^7<M_1<10^8`$ GeV) and smaller values of the neutrino mass matrix parameter $`(m_D^{\dagger }m_D)_{11}`$. This represents a moderately fine-tuned solution, which can be understood from the hierarchical structure of the mass matrices as follows.
Fig. 1. The allowed solutions for $`m_1`$ and $`m_3`$ that generate a sufficient baryon asymmetry for Texture 4.
Consider a generic matrix exhibiting the hierarchy given by the textures
$`\lambda _D^{\mathrm{generic}}`$ $`=`$ $`\left(\begin{array}{ccc}A\lambda ^8& B\lambda ^6& C\lambda ^4\\ B^{\prime }\lambda ^6& D\lambda ^4& E\lambda ^2\\ C^{\prime }\lambda ^4& E^{\prime }\lambda ^2& 1\end{array}\right),`$ (14)
where the coefficients $`A,B,B^{\prime },\dots ,E^{\prime }`$ are of order one. The leading $`\lambda `$ dependence of the parameters most important to leptogenesis for this generic hierarchy is shown in Table II. Also shown are the $`\lambda `$ dependences of the five RRR textures; one sees that Textures 2, 4 and 5 have CP violation $`ϵ_1`$ of the same order as the generic case, while Textures 1 and 3 are further suppressed by powers of $`\lambda `$ (the suppression by $`\lambda ^4`$ assumes that the texture zeros are exact). Furthermore, Texture 1 has an enhanced $`\stackrel{~}{m}_1`$, which implies that for this texture the dilution factor is very large.
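The leading powers in such a table follow from simple power counting on the generic matrix (14). A sympy sketch, with all order-one coefficients set to unity for illustration (so only the powers, not the phases, are meaningful):

```python
import sympy as sp

lam = sp.symbols("lambda", positive=True)
m = sp.Matrix([[lam**8, lam**6, lam**4],
               [lam**6, lam**4, lam**2],
               [lam**4, lam**2, 1]])
h = m.T * m   # (m_D^dagger m_D) for a real matrix

def leading_power(expr):
    """Lowest power of lambda, i.e. the dominant term for lambda << 1."""
    return min(mon[0] for mon in sp.Poly(sp.expand(expr), lam).monoms())

for label, entry in [("(h)_11", h[0, 0]), ("(h)_21", h[1, 0]), ("(h)_31", h[2, 0])]:
    print(label, "~ lambda^", leading_power(entry))   # powers 8, 6, 4
```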
If one assumes the hierarchy suggested by the RRR textures then there must be some fine-tuning of the light neutrino masses to get a solution with a sufficient amount of leptogenesis. That is because the parameter $`\stackrel{~}{m}_1`$ which governs the dilution of the produced lepton asymmetry is too large for typical values of the light neutrino masses, related via the seesaw as
$`\stackrel{~}{m}_1\sim \lambda ^8{\displaystyle \frac{v_2^2}{M_1}}.`$ (15)
The RRR textures fall into the category of the generic texture defined above, where the eigenvalues are in the ratio $`1:\lambda ^4:\lambda ^8`$. We find that in this case some fine-tuning is required to achieve the required amount of leptogenesis; in Fig. 1 there is only a small pencil-like region which produces an adequate lepton asymmetry. For other choices of the neutrino mixing angles there is a different linear correlation between the neutrino masses, but the same fine-tuning is required. This is because the $`\stackrel{~}{m}_1`$ predicted by the RRR textures is typically too large by a factor $`\lambda ^{-4}`$ and the CP violation $`ϵ_1`$ is too small by the same factor. If instead a more modest hierarchy for the neutrino Dirac masses is assumed, say $`1:\lambda ^2:\lambda ^4`$, then one finds that typical values of the light neutrino masses allowed by the solar and atmospheric neutrino experiments can generate the required level of leptogenesis. This suggests that leptogenesis occurs more naturally in cases with the reduced hierarchy, and that the RRR textures must be fine-tuned to achieve the required leptogenesis.
Since leptogenesis can only occur after the end of inflation, subsequent thermal production of massive gravitinos can occur. Gravitinos interact weakly, and their late decays can modify the observed abundances of the light elements or overclose the universe. The right-handed neutrino mass $`M_1`$ in our numerical solutions is sufficiently light to admit a solution to the gravitino problem.
## IV Boltzmann Equations
The size of the lepton asymmetry that results can be calculated using the full set of Boltzmann equations. These have been studied in scenarios where there are only the Standard Model particles, and have recently become available for the full supersymmetric case as well. We consider the supersymmetric case here since the supersymmetric Yukawa interactions are sufficient to produce a thermal population of right-handed neutrinos after reheating (the nonsupersymmetric model requires the introduction of new interactions). The Boltzmann equations become quite involved in the supersymmetric case, where it is known that the dilution factor can be enhanced over the non-supersymmetric case because of the enhanced effect of the Yukawa interactions.
In principle it would be desirable to scan over all possible values of the neutrino masses and mixings consistent with the solar and atmospheric neutrino oscillation experiments and to determine the viable parameter choices. We do not do that here for three reasons: (1) computational power is exhausted after a few points, as each solution of the Boltzmann equations for a parameter choice involves numerically integrating a set of differential equations, each of which involves a further numerical integration (needed to calculate the reaction densities for the two-body scatterings over the full kinematic range); (2) the exact equality between the neutrino and up quark Dirac matrices is probably only approximate, so our results must be considered qualitative only; and (3) there is an unknown phase in the Dirac neutrino mass matrix that controls the amount of CP violation in the heavy neutrino decays, so one can only determine an upper bound on the amount of lepton asymmetry generated. So we confine ourselves here to demonstrating that a particular parameter and texture choice can produce a baryon asymmetry consistent with the observed result
$`Y_B={\displaystyle \frac{n_B}{s}}=(0.6-1)\times 10^{-10},`$ (16)
where $`n_B`$ is the number density of baryons and $`s`$ is the entropy density. This quantity is conveniently insensitive to the dilution that comes about from the expansion of the universe. Similar densities $`Y_i`$ can be defined for all number densities $`n_i`$.
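To give the flavor of the computation, a toy version of the Boltzmann system retaining only decays and inverse decays is sketched below; the full supersymmetric network solved here also contains all the two-body scattering reaction densities, and the normalizations in the sketch are illustrative only:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import kn   # modified Bessel functions K_n

def rhs(z, y, D0, eps1, Y_l_eq=2.0e-3):
    Y_N, Y_L = y
    Y_N_eq = 1.0e-3 * z**2 * kn(2, z)      # equilibrium abundance (toy normalization)
    D = D0 * z * kn(1, z) / kn(2, z)       # decay/inverse-decay rate over Hubble
    W = 0.5 * D * Y_N_eq / Y_l_eq          # washout by inverse decays
    return [-D * (Y_N - Y_N_eq),
            eps1 * D * (Y_N - Y_N_eq) - W * Y_L]

sol = solve_ivp(rhs, (0.1, 50.0), [0.0, 0.0], args=(50.0, 2.1e-6),
                method="LSODA", rtol=1e-8, atol=1e-18)
Y_L_final = sol.y[1, -1]   # asymptotic lepton asymmetry of the toy model
```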
Figure 2 shows the evolution of the neutrino densities and the lepton asymmetry as a function of the temperature $`T`$ through $`z=M_1/T`$ for Texture 4, with neutrino masses $`m_1=1.5\times 10^{-5}`$ eV, $`m_2=3.0\times 10^{-3}`$ eV and $`m_3=4.0\times 10^{-2}`$ eV. This point is one of the allowed solutions shown in Fig. 1 that satisfies the requirements for neutrino oscillations and the requirements on $`ϵ_1`$ and $`\stackrel{~}{m}_1`$. For these masses the right-handed Majorana mass is $`M_1=2.9\times 10^7`$ GeV. Assuming a maximal CP-violating phase, the amount of CP violation from the decays of the lightest Majorana neutrino is $`ϵ_1=2.1\times 10^{-6}`$. These masses are consistent with the constraints from the solar and atmospheric neutrino experiments in Eq. 4. The evolution of the densities proceeds to the right as the temperature of the universe decreases. The figure shows the equilibrium density of the lightest Majorana neutrino $`Y_{N_1}^{\mathrm{eq}}`$ along with the computed density $`Y_{N_1}`$. Nonzero asymmetries of lepton number from fermions $`Y_{L_f}`$ and from scalars $`Y_{L_s}`$ develop, change sign (hence the dip in the figure), and finally asymptote to constants for values of $`z=M_1/T\gtrsim 3-4`$. Scattering processes involving the exchange of supersymmetric particles enforce $`Y_{L_f}\approx Y_{L_s}`$. Finally, for completeness, we show the scalar neutrino asymmetry for the supersymmetric partner of the lightest Majorana neutrino, $`Y_{1-}=Y_{\stackrel{~}{N_1^c}}-Y_{\stackrel{~}{N_1^c}^{\dagger }}`$. This asymmetry also changes sign before eventually vanishing at large $`z`$. The total density of scalar neutrinos $`Y_{1+}=Y_{\stackrel{~}{N_1^c}}+Y_{\stackrel{~}{N_1^c}^{\dagger }}`$ is indistinguishable from $`Y_{N_1}`$ and is omitted from the figure.
Fig. 2. The evolution of the neutrino densities and the fermionic and scalar lepton asymmetries. The asymmetries $`Y_{L_f}`$ and $`Y_{L_s}`$ asymptote to a constant value, which is recycled into a baryon asymmetry sufficient to account for experiment if $`Y_{L_f}=Y_{L_s}=(0.9-1.4)\times 10^{-10}`$.
The baryon asymmetry is related to the lepton asymmetry (in the supersymmetric case) via
$`Y_B={\displaystyle \frac{8}{15}}Y_L,`$ (17)
so the observed baryon asymmetry is generated provided that the asymptotic value (for small $`T`$) of the lepton asymmetry is
$`Y_{L_f}=Y_{L_s}=(0.6-0.9)\times 10^{-10}.`$ (18)
Thus Fig. 2 shows a consistent solution for Texture 4 that explains the baryon asymmetry of the universe, arising after a fine-tuning of the neutrino masses.
## V Conclusion
We studied the possibility that the baryon asymmetry of the universe could result from the lepton number violating decays of heavy Majorana neutrinos. We assumed the Dirac neutrino texture was given by the set of Ramond-Roberts-Ross textures with five zeros (which gives rise to the correct masses and mixings of the charged fermions). The heavy Majorana neutrino mass matrix is obtained by inverting the type-I see-saw formula, where the contributions from a left-handed triplet Higgs are neglected, and using the neutrino masses and mixings required by the solar and atmospheric neutrino oscillation experiments. The lepton asymmetry is produced by the lepton number violating decay of the lightest right-handed neutrino. Contrary to naive expectations, the lightest eigenvalue of the heavy Majorana neutrino mass matrix is in the range $`10^5-10^7`$ GeV even though the right-handed gauge symmetry breaks at $`M_X=10^{16}`$ GeV. This is due to the hierarchy of the Dirac-type neutrino texture. We obtained the following results on the feasibility of each texture for generating the required baryon asymmetry: (a) A generic neutrino Dirac mass matrix with eigenvalues in the ratio $`1:\lambda ^4:\lambda ^8`$ can produce the observed baryon asymmetry via the baryogenesis via leptogenesis scenario only in narrow ranges of light neutrino masses. This predicts a strong correlation between the light neutrino masses, but only because the masses must be carefully tuned to achieve the required magnitude of leptogenesis. A neutrino Dirac mass matrix with eigenvalues in the ratio $`1:\lambda ^2:\lambda ^4`$ naturally gives a lepton asymmetry of the required level. (b) Textures 2, 4 and 5 generate the amount of leptogenesis expected in models with a neutrino Dirac mass hierarchy with eigenvalues in the ratio $`1:\lambda ^4:\lambda ^8`$. (c) The position of the texture zeros in Textures 1 and 3 results in a further suppression of the generated lepton asymmetry.
We carried out a detailed analysis of the generated baryon asymmetry by numerically solving the Boltzmann equations of a supersymmetric model for Texture 4. This demonstrates that the required texture can be compatible with baryogenesis via leptogenesis for specific values of the light neutrino masses consistent with the observations of the solar and atmospheric neutrino experiments.
# Sensitivities of one-prong tau branching fractions to tau neutrino mass, mixing, and anomalous charged current couplings Invited talk at the tau-charm Workshop, 6-9 March 1999, SLAC, USA
## 1 INTRODUCTION
We analyse the sensitivity to new physics of the $`\tau `$ partial widths for the following decays<sup>1</sup><sup>1</sup>1Throughout this paper the charge-conjugate decays are also implied. We denote the branching ratios for these processes as $`_e,_\mu ,_\pi ,_K`$ respectively; $`_{\mathrm{}}`$ denotes either $`_e`$ or $`_\mu `$ while $`_h`$ denotes either $`_\pi `$ or $`_K`$. : $`\tau ^{-}\to e^{-}\overline{\nu }_e\nu _\tau `$, $`\tau ^{-}\to \mu ^{-}\overline{\nu }_\mu \nu _\tau `$, $`\tau ^{-}\to \pi ^{-}\nu _\tau `$, and $`\tau ^{-}\to K^{-}\nu _\tau `$. We determine constraints on the mass $`m_{\nu _3}`$ of the third generation neutrino $`\nu _3`$, its mixing with a fourth generation neutrino $`\nu _4`$ of mass $`>M_Z/2`$, anomalous weak charged current magnetic and electric dipole couplings, and the Michel parameter $`\eta `$. In each case, we present quantitative results using current experimental data (which update our previous analyses) and estimate the future constraints which would be achievable using the expected precision of measurements at a tau-charm factory. The results for the $`\eta `$ parameter are used to constrain extensions of the Standard Model which contain more than one Higgs doublet and hence charged Higgs bosons.
## 2 THEORETICAL PREDICTIONS
### 2.1 Tau neutrino mass and mixing
The theoretical predictions for the branching fractions $`_{\mathrm{}}`$, allowing for the $`\nu _\tau `$ mass and mixing with a fourth lepton generation, are given by:
$`_{\mathrm{}}^{\mathrm{th}.}`$ $`=`$ $`{\displaystyle \frac{G_\mathrm{F}^2m_\tau ^5\tau _\tau }{192\pi ^3}}\left(1-8x-12x^2\mathrm{ln}x+8x^3-x^4\right)`$ (1)
$`\times `$ $`\left[\left(1-{\displaystyle \frac{\alpha (m_\tau )}{2\pi }}\left(\pi ^2-{\displaystyle \frac{25}{4}}\right)\right)\left(1+{\displaystyle \frac{3}{5}}{\displaystyle \frac{m_\tau ^2}{m_W^2}}\right)\right]`$
$`\times `$ $`\left[1-\mathrm{sin}^2\theta \right]\left[1-8y(1-x)^3+\dots \right]`$
where $`x=m_{\mathrm{\ell }}^2/m_\tau ^2`$, $`y=m_{\nu _3}^2/m_\tau ^2`$, $`G_\mathrm{F}=(1.16639\pm 0.00002)\times 10^{-5}\mathrm{GeV}^{-2}`$ is the Fermi constant, and $`\tau _\tau `$ is the tau lifetime. The tau mass, $`m_\tau `$, is taken only from production measurements at tau-pair threshold, since values derived from kinematic reconstruction of tau decays depend on the tau neutrino mass.
The tau neutrino weak eigenstate is given by the superposition of two mass eigenstates $`|\nu _\tau =\mathrm{cos}\theta |\nu _3+\mathrm{sin}\theta |\nu _4`$, such that the mixing is parametrised by the Cabibbo-like mixing angle $`\theta `$. The second term in square brackets describes mixing with a fourth generation neutrino which, being kinematically forbidden, causes a suppression of the decay rate. The third term in brackets parametrises the suppression due to a non-zero mass of $`\nu _3`$, where the ellipsis denotes negligible higher order terms .
The branching fractions for the decays $`\tau ^{-}\to h^{-}\nu _\tau `$, with $`h=\pi ,K`$, are given by
$`_h^{\mathrm{th}.}`$ $`=`$ $`\left({\displaystyle \frac{G_\mathrm{F}^2m_\tau ^3}{16\pi }}\right)\tau _\tau f_h^2|V_{\alpha \beta }|^2\left(1-x\right)^2`$ (2)
$`\times `$ $`\left(1+{\displaystyle \frac{2\alpha }{\pi }}\mathrm{ln}\left({\displaystyle \frac{m_Z}{m_\tau }}\right)+\dots \right)\left[1-\mathrm{sin}^2\theta \right]`$
$`\times `$ $`\left[1-y\left({\displaystyle \frac{2+x-y}{1-x}}\right)\right]\left(1-{\displaystyle \frac{y(2+2x-y)}{(1-x)^2}}\right)^{\frac{1}{2}}`$
where $`x=m_h^2/m_\tau ^2`$, $`m_h`$ is the hadron mass, $`f_h`$ are the hadronic form factors, and $`V_{\alpha \beta }`$ are the CKM matrix elements, $`V_{ud}`$ and $`V_{us}`$, for $`\pi ^{-}`$ and $`K^{-}`$ respectively. The quantities $`f_\pi |V_{ud}|=(127.4\pm 0.1)\mathrm{MeV}`$ and $`f_K|V_{us}|=(35.18\pm 0.05)\mathrm{MeV}`$ are obtained from analyses of $`\pi ^{-}\to \mu ^{-}\overline{\nu }_\mu `$ and $`K^{-}\to \mu ^{-}\overline{\nu }_\mu `$ decays \[13, and references therein\]. The ellipsis represents terms, estimated to be $`𝒪(\pm 0.01)`$, which are neither explicitly treated nor implicitly absorbed into $`G_\mathrm{F}`$, $`f_\pi |V_{ud}|`$, or $`f_K|V_{us}|`$. The first term in square brackets describes mixing with a fourth generation neutrino, while the second parametrises the effects of a non-zero $`m_{\nu _3}`$.
The fourth generation neutrino mixing affects all the tau branching fractions by a common factor, whereas a non-zero tau neutrino mass affects the different channels by different kinematic factors. Therefore, given sufficient experimental precision, these two effects could in principle be separated.
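This channel dependence can be made explicit with a short sketch of the suppression factors in the final brackets of Eqs. (1) and (2), as written above (masses in MeV; the leptonic factor keeps only the leading term in $`y`$, and the numerical conventions are illustrative):

```python
import numpy as np

M_TAU = 1777.0   # MeV

def leptonic_suppression(m_l, m_nu3, sin2_theta):
    """Last two brackets of Eq. (1), to leading order in y."""
    x = (m_l / M_TAU)**2
    y = (m_nu3 / M_TAU)**2
    return (1.0 - sin2_theta) * (1.0 - 8.0 * y * (1.0 - x)**3)

def hadronic_suppression(m_h, m_nu3, sin2_theta):
    """Mixing and kinematic factors of Eq. (2)."""
    x = (m_h / M_TAU)**2
    y = (m_nu3 / M_TAU)**2
    kinematic = (1.0 - y * (2.0 + x - y) / (1.0 - x)) * np.sqrt(
        1.0 - y * (2.0 + 2.0 * x - y) / (1.0 - x)**2)
    return (1.0 - sin2_theta) * kinematic
```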
Analyses which determine the tau mass from a kinematic reconstruction of the tau decay products are also sensitive to the tau neutrino mass. For example, from an analysis of $`\tau ^+\tau ^{-}\to (\pi ^+n\pi ^0\overline{\nu }_\tau )(\pi ^{-}m\pi ^0\nu _\tau )`$ events (with $`n\le 2`$, $`m\le 2`$, $`1\le n+m\le 3`$), CLEO determined the $`\tau `$ mass to be $`m_\tau =(1777.8\pm 0.7\pm 1.7)+[m_{\nu _3}(\mathrm{MeV})]^2/1400`$ MeV. Such measurements may be used to further constrain $`m_{\nu _3}`$.
### 2.2 Anomalous couplings
The theoretical predictions for the branching fractions $`_{\mathrm{}}`$ for the decay $`\tau ^{-}\to \mathrm{\ell }^{-}\overline{\nu }_{\mathrm{\ell }}\nu _\tau (X_{\mathrm{EM}})`$, with $`\mathrm{\ell }^{-}=\mathrm{e}^{-},\mu ^{-}`$ and $`X_{\mathrm{EM}}=\gamma ,\gamma \gamma ,e^+e^{-},\dots `$, are given by:
$`_{\mathrm{}}^{\mathrm{th}.}`$ $`=`$ $`{\displaystyle \frac{G_\mathrm{F}^2m_\tau ^5\tau _\tau }{192\pi ^3}}\left(1-8x-12x^2\mathrm{ln}x+8x^3-x^4\right)`$ (3)
$`\times `$ $`\left(1-{\displaystyle \frac{\alpha (m_\tau )}{2\pi }}\left(\pi ^2-{\displaystyle \frac{25}{4}}\right)\right)\left(1+{\displaystyle \frac{3}{5}}{\displaystyle \frac{m_\tau ^2}{m_W^2}}\right)`$
$`\times `$ $`\left[1+\mathrm{\Delta }_{\mathrm{}}\right].`$
The term in square brackets describes the effects of new physics where the various $`\mathrm{\Delta }_{\mathrm{}}`$ we consider are defined below.
The effects of anomalous weak charged current dipole moment couplings at the $`\tau \nu _\tau W`$ vertex are described by the effective Lagrangian
$``$ $`=`$ $`{\displaystyle \frac{g}{\sqrt{2}}}\overline{\tau }\left[\gamma _\mu +{\displaystyle \frac{i\sigma _{\mu \nu }q^\nu }{2m_\tau }}(\kappa _\tau -i\stackrel{~}{\kappa }\gamma _5)\right]P_L\nu _\tau W^\mu `$
$`+(\mathrm{Hermitian}\mathrm{conjugate}),`$
where $`P_L`$ is the left-handed projection operator and the parameters $`\kappa `$ and $`\stackrel{~}{\kappa }`$ are the (CP-conserving) magnetic and (CP-violating) electric dipole form factors respectively. They are the charged current analogues of the weak neutral current dipole moments, measured using $`Z\to \tau ^+\tau ^{-}`$ events, and of the electromagnetic dipole moments recently measured by L3 and OPAL using $`Z\to \tau ^+\tau ^{-}\gamma `$ events. In conjunction with Eq. 3, the effects of non-zero values of $`\kappa `$ and $`\stackrel{~}{\kappa }`$ on the tau leptonic branching fractions may be described by
$`\mathrm{\Delta }_{\mathrm{}}^\kappa `$ $`=`$ $`\kappa /2+\kappa ^2/10;`$ (5)
$`\mathrm{\Delta }_{\mathrm{}}^{\stackrel{~}{\kappa }}`$ $`=`$ $`\stackrel{~}{\kappa }^2/10.`$ (6)
The dependence of the tau leptonic branching ratios on $`\eta `$ is given, in conjunction with Eq. 3, by
$`\mathrm{\Delta }_{\mathrm{}}^\eta `$ $`=`$ $`4\eta _{\tau \mathrm{\ell }}\sqrt{x},`$ (7)
where the subscripts on $`\eta `$ denote the initial and final state charged leptons. Both leptonic tau decay modes probe the charged current couplings of the transverse $`W`$ and are sensitive to $`\kappa `$ and $`\stackrel{~}{\kappa }`$. In contrast, only the $`\tau ^{-}\to \mu ^{-}\overline{\nu }_\mu \nu _\tau `$ channel is sensitive to $`\eta `$, due to a relative suppression factor of $`m_e/m_\mu `$ for the $`\tau ^{-}\to \mathrm{e}^{-}\overline{\nu }_\mathrm{e}\nu _\tau `$ channel. Semi-leptonic tau branching fractions are not considered since they are insensitive to $`\kappa `$, $`\stackrel{~}{\kappa }`$, and $`\eta `$.
## 3 RESULTS
Three sets of fits are performed, as follows.
* Case 1
We use current world averages of the experimental measurements.
* Case 2
We use estimated errors on measurements which would be possible with a tau-charm factory assuming that there is no improvement in the tau lifetime compared to current measurements.
* Case 3
This is identical to Case 2 except that, in order to assess the limiting factors of our method, we assume somewhat arbitrarily that CLEO and the b-factories succeed in reducing the tau-lifetime error by a factor of two.
For Cases 2 and 3 the central values are clearly unknown; therefore, in making our predictions we set the branching fractions to their Standard Model values, so that our predictions are not arbitrarily biased by the current experimental central values. The input parameters for the three cases are summarised in Tab. 1.
We derive constraints on $`m_{\nu _\tau }`$ and $`\mathrm{sin}^2\theta `$ from combined likelihood fits to the four tau decay channels, using equations 1 and 2. The likelihood for the CLEO and BES measurements of $`m_\tau `$ to agree, as a function of $`m_{\nu _3}`$, is included in the global likelihood. We derive constraints on $`\kappa `$, $`\stackrel{~}{\kappa }`$, and $`\eta _{\tau \mu }`$ using the two leptonic tau decay channels and Eq. 3. Each of the five parameters is analysed separately, conservatively assuming in each case that the other four parameters are zero.
In the fit, the uncertainties on all the quantities in Eqs. 12, and 3 are taken into account. The likelihood is constructed numerically following the procedure of Ref. by randomly sampling all the quantities used according to their errors.
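A sketch of such a numerical likelihood construction, in which each input quantity is sampled according to its error and the prediction is compared with the measured branching fraction (the names and the Gaussian-sampling choice are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def likelihood(theory_param, inputs, b_obs, b_err, predict, n_samples=100_000):
    """Monte Carlo likelihood for one tau branching fraction.

    inputs:  dict mapping input name -> (central value, error), e.g. tau lifetime
    predict: function(theory_param, sampled_inputs) -> predicted branching fraction
    """
    sampled = {name: rng.normal(mu, sig, n_samples)
               for name, (mu, sig) in inputs.items()}
    b_pred = predict(theory_param, sampled)
    # average the Gaussian experimental likelihood over the sampled inputs
    return np.mean(np.exp(-0.5 * ((b_obs - b_pred) / b_err)**2))
```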
Tab. 2 summarises the results obtained.
For Cases 2 and 3 the limiting error is that on the tau lifetime; arbitrarily setting all other errors to zero yields negligible improvement in the fit results.
## 4 DISCUSSION
### 4.1 Tau neutrino mass
The limit on $`m_{\nu _3}`$ can reasonably be interpreted as a limit on $`m_{\nu _\tau }`$, since $`\mathrm{sin}^2\theta `$ is small, as is the mixing of $`\nu _3`$ with lighter neutrinos. The best direct experimental constraint on the tau neutrino mass is $`m_{\nu _\tau }<18.2`$ MeV at the 95% confidence level, which was obtained using many-body hadronic decays of the $`\tau `$. While our constraint is less stringent, it is statistically independent. Moreover, it is insensitive to fortuitous or pathological events close to the kinematic limits, to the absolute energy scale of the detectors, and to the details of the resonant structure of multi-hadron $`\tau `$ decays.
Although the constraint on $`m_{\nu _\tau }`$ which we estimate does improve with the tau-charm input, this method would not be competitive with direct reconstruction analyses which are predicted to be sensitive at the $`O(2\mathrm{MeV})`$ level .
### 4.2 Fourth generation mixing
Our upper limit on $`\mathrm{sin}^2\theta `$ is already the most stringent experimental constraint on mixing of the third and fourth neutrino generations. This constraint will improve by a factor of up to two using future tau-charm factory data, depending on the improvement in the error on $`\tau _\tau `$. We anticipate that this technique will continue to provide the most stringent constraints in the foreseeable future.
### 4.3 Anomalous couplings and tau compositeness
Our results for $`\kappa `$ and $`\stackrel{~}{\kappa }`$ are currently the most precise. The less stringent constraint on $`\stackrel{~}{\kappa }`$ compared to that on $`\kappa `$ is due to the lack of linear terms in Eq. 6.
Derivative couplings necessarily involve the introduction of a length or mass scale. Anomalous magnetic moments due to compositeness are expected to be of order $`m_\tau /\mathrm{\Lambda }`$, where $`\mathrm{\Lambda }`$ is the compositeness scale. We can then interpret the 95% confidence level on $`\kappa `$, the quantity for which we have the more stringent bound, as a statement that the $`\tau `$ appears to be a point-like Dirac particle up to an energy scale of $`\mathrm{\Lambda }\approx m_\tau /0.017=105`$ GeV. These results are comparable to those obtained from anomalous weak neutral current couplings and more stringent than those obtained for anomalous electromagnetic couplings. While the decay $`W\to \tau \nu `$, which is measured at LEP II, is also sensitive to charged current dipole terms, the energy scale there is $`m_W`$, so the interpretation in terms of the static properties $`\kappa `$ and $`\stackrel{~}{\kappa }`$ is less clear.
The results for $`\kappa `$ and $`\stackrel{~}{\kappa }`$ will improve with tau-charm factory data, probing the point-like nature of the tau up to a scale of $`\mathrm{\Lambda }=O(180\mathrm{GeV})`$ (assuming no improvement in $`\tau _\tau `$) or $`\mathrm{\Lambda }=O(300\mathrm{GeV})`$ (assuming a factor of two improvement in the error on $`\tau _\tau `$).
### 4.4 $`\eta _{\tau \mu }`$ and extended Higgs sector models
Our value for $`\eta _{\tau \mu }`$ is currently the most precise. Its uncertainty is significantly smaller than that of determinations using the shape of the momentum spectrum of muons from $`\tau `$ decays ($`\eta _{\tau \mu }=0.04\pm 0.20`$).
Many extensions of the Standard Model, such as supersymmetry (SUSY), involve an extended Higgs sector with more than one Higgs doublet. Such models contain charged Higgs bosons which contribute to the weak charged current with couplings which depend on the fermion masses. Of all the Michel parameters, $`\eta _{\tau \mu }`$ is especially sensitive to the exchange of a charged Higgs. Following Stahl , $`\eta _{\tau \mu }`$ can be written as
$$\eta _{\tau \mu }=-\left(\frac{m_\tau m_\mu }{2}\right)\left(\frac{\mathrm{tan}\beta }{m_H}\right)^2$$
(8)
where $`\mathrm{tan}\beta `$ is the ratio of vacuum expectation values of the two Higgs fields, and $`m_H`$ is the mass of the charged Higgs. This expression applies to type II extended Higgs sector models in which the up-type quarks get their masses from one doublet and the down-type quarks get their masses from the other. From current data we determine the one-sided constraint $`\eta _{\tau \mu }>-0.0232`$ at the 95% C.L. which rules out the region $`m_H<(2.01\mathrm{tan}\beta )\mathrm{GeV}`$ at the 95% C.L. as shown in Fig. 1.
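The coefficient in this exclusion follows directly from Eq. 8; a minimal numerical check (standard lepton masses assumed):

```python
import math

m_tau, m_mu = 1.777, 0.10566   # GeV, standard values
eta = 0.0232                   # magnitude of the one-sided 95% C.L. bound
# Eq. 8 implies m_H > sqrt(m_tau*m_mu/(2*|eta|)) * tan(beta)
print(math.sqrt(m_tau * m_mu / (2 * eta)))  # ~2.01 GeV per unit tan(beta)
```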
An almost identical constraint on the high $`\mathrm{tan}\beta `$ region of type II models may be obtained from the process $`B\to \tau \nu `$. The most stringent constraint, from the L3 experiment, rules out the region $`m_H<(2.09\mathrm{tan}\beta )\mathrm{GeV}`$ at the 95% C.L. Within the specific framework of the minimal supersymmetric standard model, the process $`B\to \tau \nu X`$ rules out the region $`m_H<(2.33\mathrm{tan}\beta )\mathrm{GeV}`$ at the 95% C.L. This limit, however, depends on the value of the Higgsino mixing parameter $`\mu `$ and can be evaded completely for $`\mu >0`$. The non-observation of proton decay also tends to rule out the large $`\mathrm{tan}\beta `$ region but these constraints are particularly model-dependent. The very low $`\mathrm{tan}\beta `$ region is ruled out by measurements of the partial width $`\mathrm{\Gamma }(Z\to b\overline{b})`$. For type II models the approximate region excluded is $`\mathrm{tan}\beta <0.7`$ at the $`2.5\sigma `$ C.L. for any value of $`M_H`$. Complementary bounds for the full $`\mathrm{tan}\beta `$ region are derived from the CLEO measurement of $`BR(b\to s\gamma )=(2.32\pm 0.57\pm 0.35)\times 10^{-4}`$ which rules out, for type II models, the region $`M_H<244+63/(\mathrm{tan}\beta )^{1.3}`$ GeV. This constraint can, however, be circumvented in SUSY models where other particles in the loops can cancel out the effect of the charged Higgs. Direct searches at LEP II exclude the region $`m_H<54.5`$ GeV for all values of $`\mathrm{tan}\beta `$. The CDF search for charged Higgs bosons in the process $`t\to bH^+`$ rules out the region of low $`m_H`$ and high $`\mathrm{tan}\beta `$. The 95% C.L. constraints in the $`m_H`$ vs. $`\mathrm{tan}\beta `$ plane, from this and other analyses, are shown in Fig. 1.
We anticipate that the constraints from $`Z\to b\overline{b}`$ and $`b\to s\gamma `$ will improve somewhat with new measurements from LEP, CLEO, and the b-factories and from refinements in the theoretical treatment. CLEO and the b-factories may also improve the measurements of $`B\to \tau \nu (X)`$ which rule out a similarly-shaped region of the $`m_H`$–$`\mathrm{tan}\beta `$ plane as that of this analysis.
Some caution is advised in the interpretation of the large $`\mathrm{tan}\beta `$ regime which becomes non-perturbative for $`\mathrm{tan}\beta >O(70)`$. Future improved measurements of the tau branching fractions and lifetime will, however, extend the constraints on $`\mathrm{tan}\beta `$ towards lower values, where perturbative calculations are more applicable.
In particular, for the tau-charm factory we estimate the one-sided constraint $`\eta _{\tau \mu }>-0.014`$ at the 95% C.L. This rules out the region $`m_H<(2.55\mathrm{tan}\beta )\mathrm{GeV}`$ at the 95% C.L., as shown in Fig. 1, and corresponds to an approximately 25% reduction in the maximum allowed value of $`\mathrm{tan}\beta `$ for a given value of $`m_H`$, compared to current constraints.
## 5 SUMMARY
From an analysis of tau leptonic and semileptonic branching fractions we determine constraints on $`m_{\nu _\tau }`$, $`\mathrm{sin}^2\theta `$, $`\kappa `$, $`\stackrel{~}{\kappa }`$, and $`\eta _{\tau \mu }`$ using current experimental data. We then assess the future sensitivity to these parameters using predictions for the uncertainties on experimental quantities measured at a tau-charm factory. We find that in each case the future sensitivity is completely limited by the uncertainty on the tau lifetime.
The constraint on $`m_{\nu _\tau }`$ using current data is complementary to, but less stringent than, that already obtained from multi-hadronic tau decays. Our technique will benefit slightly from improved tau-charm factory data but will be considerably less competitive than other techniques available at such a facility.
Using current experimental data we find that our technique yields the most stringent constraints to-date on $`\mathrm{sin}^2\theta `$, $`\kappa `$, $`\stackrel{~}{\kappa }`$, and $`\eta _{\tau \mu }`$. All these constraints are expected to improve by a factor of approximately two using future data from a tau-charm factory and, in the absence of novel competing techniques, will continue to yield the most precise determinations of these quantities.
The result for $`\kappa `$ indicates that the tau is point-like up to an energy scale of approximately 105 GeV (today) and $`O(300\mathrm{GeV})`$ (using tau-charm data and assuming a factor of two improvement in the tau-lifetime error).
The result for $`\eta _{\tau \mu }`$ constrains the charged Higgs of type II two-Higgs doublet models such that we can exclude, at the 95% C.L., the region $`m_H<(2.01\mathrm{tan}\beta )\mathrm{GeV}`$ (today) and $`m_H<(2.55\mathrm{tan}\beta )\mathrm{GeV}`$ (using tau-charm data and assuming a factor of two improvement in the tau-lifetime error).
## Acknowledgements
J.S. would like to thank the organisers and participants for a stimulating and productive workshop. We would like to thank CONICET, Argentina (M.T.D.) and the NSF, USA (J.S. and L.T.) for financial support.
# Thermodynamics of charged anti-de Sitter black holes in canonical ensemble
## Acknowledgment
This work was completed in the Theory Division of CERN, whose support is gratefully acknowledged.
## 1 Introduction
Chemical evolution models have been developed to reproduce the observational features obtained from the solar neighbourhood and the whole Galaxy structure studies. These data give us information about the galactic evolution up to a galactocentric distance of 15–25 kpc and in the last 11–16 Gyr. However, in many cases, models which give similar results for the present time diverge significantly at different epochs. New data on high redshift objects – i.e. objects far away in space and time – can now be used to test model predicted past galaxy properties which can then constitute important constraints for evolution models.
We use the multiphase model results calculated for MWG and compare them with observational data from , and , in order to test if damped Ly$`\alpha `$ systems have abundances consistent with those expected for high redshift spiral galaxies. We have assumed values of $`q_0`$ and $`H_0`$ of 0.1 and 50 km s<sup>-1</sup> Mpc<sup>-1</sup> respectively. The redshift $`z`$ corresponds to the redshift of damped Ly$`\alpha `$ systems.
## 2 The chemical evolution model
We use the evolutionary history of the Milky Way, assumed to be a typical spiral galaxy. We take the multiphase model results from for MWG, where we computed the element distributions, in the halo and in the disk, by varying the radial galactocentric distance of every considered region. We have extracted the outputs for the regions centered on 8 kpc (solar) and 5 kpc and 14 kpc (here-on called inner and outer zones) from the galactic center. Assuming that the absorbing gas in the line of sight to the damped Ly$`\alpha `$ systems may belong to both halo and disk components, we evaluate the abundance by averaging the abundances for the two populations with their corresponding weights.
Our results are shown in Figure 1. In the left panel, we show the average abundance for the three defined regions. The inner and solar region evolutions fit better the higher abundance data points, while the outer region abundance fits the lower observed abundances. The history of the different regions can reproduce the spread in the observations. Observations possibly average also over a great part of the Galactic structure: the dashed line shows our average for both disk and halo at all radii. In the right panel we plot the gas column density results which also have contributions from the gas present both in the spheroidal and disk components of the galaxy. The column density is sensitive to the inclination angle of the disk with respect to the line of sight: the two solid lines represent the evolution for a face–on galaxy (lower line) and for an edge–on galaxy (upper line). The distribution of the observed values is bound by these simple estimates.
## 3 Conclusions
The possibility that damped Lyman $`\alpha `$ systems are indeed normal spiral galaxies must be considered: observations probably refer to gas belonging to different components and therefore the observed abundances are averaged over the halo and disk gas, if not over the whole galactic content. We confirm that the large metallicity spread observed in DLA systems can be explained by the heterogeneous mixture of galactic regions and galaxy types.
# GIST: A tool for Global Ionospheric Tomography using GPS ground and LEO data and sources of opportunity with applications in instrument calibration
## 1. Introduction
Radio waves traversing the ionosphere suffer a delay of a well-known dispersive nature and it is common to suppress this effect by using a combination of signals at two separated frequencies. However, there are two aspects to be considered here: first, the electronic equipment of on-board instrumentation has to be periodically calibrated, and second, duplicating systems to operate at two frequencies adds cost and complexity to the instruments. Therefore it is desirable to have a system able to reproduce the status of the ionosphere, and use it for monitoring, and single- and dual-frequency instrument calibration. Tomographic techniques are applied to this end ingesting data from different sources. In previous references , , we have discussed the tomographic methodology and some different implementations, which we will here briefly summarize. This work highlights the successful development of a software package that implements those techniques and emphasizes the possibility of ingesting data other than GPS to densify the receiver network.
## 2. Tomographic technique
The ionospheric delay can be determined in a bistatic dual-frequency system from phase measurements following the equation:
$`L_I(\vec{r},t)=L_1-L_2=\gamma {\displaystyle \int _{ray}}dl\,\rho (\vec{r},t)+c_r+c_t,`$ (2.1)
where we have noted the phase measurements with $`L`$. The factor $`\gamma `$ depends on the frequencies in use (for GPS $`\gamma =1.05\times 10^{-17}`$ m<sup>3</sup>/el) and $`\rho `$ is the electron density. The two constants $`c_r`$ and $`c_t`$ are the biases associated to the transmitter and receiver. Tomographic analysis consists in obtaining the solution fields ($`\rho `$) from the integrated value along the ray paths and Equation 2.1 is termed the “tomographic equation”. If $`\rho `$ is expressed as a linear combination of a set of basis functions $`\rho =\sum _jx_j(t)\mathrm{\Psi }_j(\vec{r})+ϵ(\vec{r},t)`$ then the above equation becomes $`L_I=y_i=\sum _Jx_J(t)\int _{s.l.}\mathrm{\Psi }_J(\vec{r})d\vec{l}+\zeta (\vec{r},t)+c_r+c_t`$ and can be written for each ray to obtain a set of linear equations such as $`𝐲=𝐀𝐱`$. In our tomographic system, we choose voxels as the basis functions. Voxels are 3-D pixels or functions valued 1 inside the volume of the voxel and 0 elsewhere. Empirical Orthogonal Functions can also be used as shown in . The system, however, may not have a solution because data are not uniformly distributed, and thus we seek to minimize the functional
$`\chi ^2(x)=(y-Ax)^T(y-Ax).`$ (2.2)
In earlier work we discussed the use of a correlation functional to confine the spatial spectrum of the solution to the low portion of the frequency space. The same concept can be expressed by adding new equations (constraints) that impose that the density in a voxel be a weighted average of its neighbours. To take into account variation in time, a Kalman filter is implemented, considering the density to behave as a random walk stochastic process. Instrumental constants are also considered and resolved as constants or eliminated by differencing. While differencing reduces the number of unknowns, estimation furnishes the solution with more information and provides nuisance parameters to absorb noise from the system.
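As an illustration of this constrained least-squares step, the following sketch stacks a toy design matrix with neighbour-averaging constraint rows and solves the augmented system (sizes, weights, and the one-dimensional neighbour stencil are illustrative choices only, not the GIST implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
nrays, nvox = 30, 50                       # fewer rays than voxels: ill-posed alone
A = rng.random((nrays, nvox))              # toy path-length matrix
x_true = np.sin(np.linspace(0, np.pi, nvox))
y = A @ x_true + 0.01 * rng.standard_normal(nrays)

# Constraint rows: each interior voxel ~ average of its two neighbours
C = np.zeros((nvox - 2, nvox))
for j in range(1, nvox - 1):
    C[j - 1, j - 1:j + 2] = [-0.5, 1.0, -0.5]

w = 1.0                                    # constraint weight (tunable)
M = np.vstack([A, w * C])                  # augmented system: data + constraints
b = np.concatenate([y, np.zeros(nvox - 2)])
x_hat, *_ = np.linalg.lstsq(M, b, rcond=None)
print(np.max(np.abs(x_hat - x_true)))      # small for this smooth toy profile
```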
## 3. The GIST tool for ionospheric tomography
The software tool GIST implements the above described technique including differencing and constant estimation strategies (for a block diagram see Figure 1). In addition, since the previous equations are valid for any dual-frequency system, different sources of data should be used. It has to be remembered, however, that the tomographic solution is possible thanks to the different directions of the rays received from different satellites which permit the system to distinguish between layers. Therefore, GPS data serve as the basic source on which the solution is based and any additional data such as altimetric data (which is always in the same direction) should be fed as an aiding source of information and with the main goal of constraining the values of $`\rho `$ to obtain the calibration constants. In monostatic systems the two constants are merged into one. In this fashion, we can calibrate the instrument as part of the overall solution.
The package GIST shares common modules with the package LOTTOS, oriented to Tropospheric Tomography, and has the following features:
* Raw RINEX data conditioning: cycle slip detection, phase alignment, and data decimation.
* Altimeter Data conditioning
* Linear System Construction
* Kalman Filtering with Random Walk Stochastic Process.
* Different Constraints Strategies
The input data are GPS raw phases and pseudoranges, precise orbits for all the satellites in ECI format and time-tagged Total Electron Content data from other sources. In earlier work we discussed the convenience of the constant estimation in the data processing due to the robustness of the system and the existence of systematic noise sinks. However, this approach is computationally intensive and in some cases, for system testing, it is interesting to have a rapid solution even if it is with low accuracy. In such cases, differencing is an attractive approach because it reduces the number of unknowns and it is hence included as an option in the GIST package; it has to be advised, however, that this technique is more sensitive to systematic noise in the data or mismodeling.
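For the time-dependent part, one Kalman update under the random-walk model described above looks schematically as follows (the identity state transition encodes the random walk; $`Q`$, $`R`$, and the geometry matrix here are arbitrary placeholders, not GIST values):

```python
import numpy as np

def kalman_step(x, P, A, y, Q, R):
    """One update for the random walk model x_k = x_{k-1} + w_k, y_k = A x_k + v_k."""
    P_pred = P + Q                            # predict: identity transition + process noise
    S = A @ P_pred @ A.T + R                  # innovation covariance
    K = P_pred @ A.T @ np.linalg.inv(S)       # Kalman gain
    x_new = x + K @ (y - A @ x)
    P_new = (np.eye(len(x)) - K @ A) @ P_pred
    return x_new, P_new

# toy usage: 20 voxels observed by 8 rays in one batch
rng = np.random.default_rng(1)
A = rng.random((8, 20))
x, P = np.zeros(20), np.eye(20)
Q, R = 0.01 * np.eye(20), 0.1 * np.eye(8)
x, P = kalman_step(x, P, A, rng.random(8), Q, R)
```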
## 4. Results
We have taken data from 106 IGS ground stations for 21st February 1997, GPS/MET low rate data and TOPEX/POSEIDON data from the on-board GPS receiver (zenith-looking for navigation purposes) and the on-board NRA altimeter data. A global grid with 20 divisions in longitude, 10 divisions in latitude and 6 layers (5 below the TOPEX/POSEIDON orbit and 1 above to absorb the protonosphere) has been used and the data divided into 3-hour batches for Kalman filtering. The data were weighted according to the sigma value of the measurements (0.1 m for GPS data and 1 TECU for TOPEX/POSEIDON) and the orbits for the LEO were estimated using the GIPSY-OASIS II software. In Figures 2 and 3 we see the 6 layers of the ionosphere, and in Figure 4 the residues for the T/P altimeter data. The bias constant is 2.98 TECU with a formal error of 2.58 mTECU for the T/P Radar Altimeter, which agrees fairly well with previously reported values.
## 5. Conclusions
We have successfully developed a solid software tool GIST for ionospheric tomography and applied it to one day of data to yield 4D ionospheric maps. These maps are consistent with previous work and, in addition, the ingestion of altimeter data into the model permits the direct calibration of the instrumentation. We foresee this technique to be a very useful technique particularly when other sources of opportunity such as GPS data from satellites or airplanes are included because of the great densification of measurements.
## 6. Acknowledgements
The authors would like to thank N. Picot (CNES), B. Haines (JPL) and C. Rocken (UCAR) for providing the data. This work was supported by the EC grant WAVEFRONT PL-952007 and the Comissionat per a Universitats i Recerca de la Generalitat de Catalunya.
# Quantum Mechanics of Extended Objects
## I Introduction
The representation of a particle as an idealized point has long been used in physics. In fact, this representation is central to classical mechanics and serves us well even in quantum mechanics. In this paper we adopt a viewpoint in which the finite extent or fuzziness of a particle is taken into consideration thereby treating the particle as an extended object. Such a treatment becomes important and necessary when the confines of the quantum system in which the particle is placed become comparable to the finite extent of the particle. The finite extent or fuzziness of a particle is quantified via its Compton wavelength which can be defined as the lower limit on how well a particle can be localized. In nonrelativistic quantum mechanics, the lower limit is zero since we admit position eigenkets $`|x\rangle `$. But in reality, as we try to locate the particle with greater accuracy we use more energetic probes, say photons to be specific. To locate a particle to some $`\mathrm{\Delta }x`$ we need a photon of momentum
$$\mathrm{\Delta }p\sim \frac{\hbar }{\mathrm{\Delta }x}.$$
(1)
The corresponding energy of the photon is
$$\mathrm{\Delta }E\sim \frac{\hbar c}{\mathrm{\Delta }x}.$$
(2)
If this energy exceeds twice the rest energy of the particle, relativity allows the production of a particle-antiparticle pair in the measurement process. So we demand
$$\frac{\hbar c}{\mathrm{\Delta }x}\le 2mc^2\quad \text{or}\quad \mathrm{\Delta }x\ge \frac{\hbar }{2mc}\sim \frac{\hbar }{mc}.$$
(3)
Any attempt to further localize the particle will lead to pair creation and we will have three (or more) particles instead of the one we started to locate. Therefore, the Compton wavelength of a particle measures the distance over which quantum effects can persist. The point particle approximation used in nonrelativistic quantum mechanics suffices to describe the dynamics since the confines of the quantum systems under consideration are much larger than the finite extent of the confined particles. For example, in the analysis of the hydrogen atom, the fuzziness or the size of the electron is $`\alpha `$ times smaller than the size of the atom $`a_0`$
$$\frac{\hbar /mc}{a_0}=\alpha \approx \frac{1}{137}.$$
(4)
Thus, in the case of the hydrogen atom and in general, for the quantum theory of atoms, the quantum mechanics of point particles gives an accurate description.
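As a quick numerical illustration of Eq. (4) (a sketch with standard constants):

```python
hbar_c = 197.327e-15   # MeV*m
m_e = 0.510999         # MeV, electron mass
a_0 = 0.529177e-10     # Bohr radius, m
print((hbar_c / m_e) / a_0)  # ~7.30e-3 ~ 1/137 = alpha
```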
In this paper we develop the Hilbert space representation theory of the quantum mechanics of extended objects. We use this representation to demonstrate the quantization of spacetime following which we analyze two paradigm examples: fuzzy harmonic oscillator and the Yukawa potential. In the second example, the quantum mechanics of extended objects enables us to predict the phenomenological coupling constant of the $`\omega `$ meson as well as the radius of the repulsive nucleon core.
## II Quantum Mechanics of Extended Objects
We have established the necessity for taking into consideration the nonzero size of a particle. In order to incorporate the fuzziness or size of a particle into our dynamics we introduce the following representation for position and momentum in one dimension in units where $`\hbar =c=1`$. For position space,
$`X_f`$ $`=`$ $`(Xe^{-P^2/m^2})\equiv (xe^{-P^2/m^2})`$ (5)
$`P`$ $`\equiv `$ $`-i{\displaystyle \frac{d}{dx}}`$ (6)
$`[X_f,P]`$ $`=`$ $`ie^{-P^2/m^2},`$ (7)
and for momentum space,
$`X_f`$ $`=`$ $`e^{-P^2/2m^2}Xe^{-P^2/2m^2}\equiv ie^{-P^2/2m^2}{\displaystyle \frac{d}{dp}}e^{-P^2/2m^2}`$ (8)
$`P`$ $`\equiv `$ $`p`$ (9)
$`[X_f,P]`$ $`=`$ $`ie^{-p^2/m^2}.`$ (10)
where $`(AB)(AB+BA)/2`$. Symmetrization has also been employed in the momentum space representation in order to preserve the Hermiticity of the noncommuting fuzzy position operator $`X_f`$. In contradistinction to the quantum mechanics of point particles where the position operator has a smooth coordinate representation consisting of a sequence of points, the fuzzy position operator is convolved with a Gaussian in momentum space which has as its width the Compton wavelength $`1/m`$. The convolution with the Gaussian has the effect of smearing out these points and in the limit as the Compton wavelength vanishes we recover the standard operator assignments of ordinary quantum mechanics. For simplicity, consider the effect of the fuzzy position operator $`X_f`$ on an acceptable wavefunction in position space, that is, one which is square integrable and has the right behavior at infinity:
$`X_f\psi (x)`$ $`=`$ $`(xe^{-P^2/m^2})\psi (x)`$ (11)
$`=`$ $`{\displaystyle \frac{m}{4\sqrt{\pi }}}\left[{\displaystyle \int _{-\mathrm{\infty }}^{\mathrm{\infty }}}d\lambda \,xe^{iP\lambda -m^2\lambda ^2/4}\psi (x)+{\displaystyle \int _{-\mathrm{\infty }}^{\mathrm{\infty }}}d\lambda \,e^{iP\lambda -m^2\lambda ^2/4}[x\psi (x)]\right]`$ (12)
$`=`$ $`{\displaystyle \frac{m}{4\sqrt{\pi }}}{\displaystyle \int _{-\mathrm{\infty }}^{\mathrm{\infty }}}d\lambda \,(x+{\displaystyle \frac{\lambda }{2}})\psi (x+\lambda )e^{-m^2\lambda ^2/4}.`$ (13)
The translation of $`\psi (x)`$ by $`\lambda `$ and the subsequent integration over all possible values of $`\lambda `$ weighted by a Gaussian measure has the effect of smearing out the position. The commutation relation obeyed by $`X_f`$ and $`P`$ is manifestly noncanonical and does not depend on the representation. A direct consequence of this commutation relation is the uncertainty relation.
$$\mathrm{\Delta }X_f\mathrm{\Delta }P\ge \frac{1}{2}|\langle e^{-P^2/m^2}\rangle |.$$
(14)
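The noncanonical commutator behind this relation can be checked numerically; the sketch below builds $`X_f`$ and $`P`$ spectrally on a periodic grid and verifies $`[X_f,P]=ie^{-P^2/m^2}`$ on a smooth test state (grid size, $`m`$, and the test state are arbitrary choices):

```python
import numpy as np

N, L, m = 1024, 40.0, 1.0
x = (np.arange(N) - N / 2) * (L / N)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)       # momentum grid (hbar = 1)

def f_of_P(psi):
    """Apply the smearing factor exp(-P^2/m^2) spectrally."""
    return np.fft.ifft(np.exp(-k**2 / m**2) * np.fft.fft(psi))

def P(psi):
    """Momentum operator P = -i d/dx via FFT."""
    return np.fft.ifft(k * np.fft.fft(psi))

def X_f(psi):
    """Symmetrized fuzzy position (X exp(-P^2/m^2) + exp(-P^2/m^2) X)/2."""
    return 0.5 * (x * f_of_P(psi) + f_of_P(x * psi))

psi = np.exp(-x**2 / 8.0)                        # smooth, well-localized test state
lhs = X_f(P(psi)) - P(X_f(psi))                  # [X_f, P] psi
rhs = 1j * f_of_P(psi)                           # i exp(-P^2/m^2) psi
print(np.max(np.abs(lhs - rhs)))                 # close to machine precision
```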
Now, for any two observables $`A`$ and $`B`$ which satisfy $`[A,B]|\psi \rangle =0`$ for some nontrivial $`|\psi \rangle `$, with uncertainties $`\mathrm{\Delta }A`$ and $`\mathrm{\Delta }B`$ such that $`|\mathrm{\Delta }A/\langle A\rangle |\ll 1`$ and $`|\mathrm{\Delta }B/\langle B\rangle |\ll 1`$, we have the relation
$$\mathrm{\Delta }((AB))=\langle A\rangle \mathrm{\Delta }B+\langle B\rangle \mathrm{\Delta }A,$$
(15)
where again $`(AB)(AB+BA)/2`$. The special case $`[A,B]=0`$ corresponds to compatible variables. We observe that whenever simultaneous eigenkets exist
$`\langle AB\rangle `$ $`=`$ $`{\displaystyle \int da\,db\,P(a,b)\,ab}={\displaystyle \int da\,db\,P(a)P(b)\,ab}`$ (16)
$`=`$ $`\langle A\rangle \langle B\rangle `$ (17)
where $`P(a,b)=|\langle ab|\psi \rangle |^2`$ and the proof of Eq. (15) follows. In our case,
$$[X,e^{-P^2/m^2}]|\psi \rangle =0\text{ only if }|\psi \rangle =\mathrm{constant}.$$
(18)
Hence, there exists at least one nontrivial simultaneous eigenket for which $`[X,e^{-P^2/m^2}]`$ has a zero eigenvalue. We can always choose this eigenket to establish the validity of Eq. (15) for our operators $`X`$ and $`e^{-P^2/m^2}`$ along the lines shown above. As a consequence, we obtain the modified uncertainty principle (reinserting $`\hbar `$ for clarity)
$$\mathrm{\Delta }X\mathrm{\Delta }P\ge \frac{\hbar }{2}+\frac{2\langle X\rangle \langle P\rangle }{m^2}(\mathrm{\Delta }P)^2.$$
(19)
The uncertainty product goes up because of the fuzziness we have introduced in the position. Consequently, there exists a minimal uncertainty in position given by
$$\mathrm{\Delta }X_0=\frac{2}{m}\sqrt{\langle X\rangle \langle P\rangle \hbar }.$$
(20)
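For completeness, Eq. (20) follows by treating Eq. (19) as an equality and minimizing the bound over $`\mathrm{\Delta }P`$ (a short sketch):

$$\frac{d}{d\mathrm{\Delta }P}\left[\frac{\hbar }{2\mathrm{\Delta }P}+\frac{2\langle X\rangle \langle P\rangle }{m^2}\mathrm{\Delta }P\right]=0\quad \text{at}\quad \mathrm{\Delta }P=\frac{m}{2}\sqrt{\frac{\hbar }{\langle X\rangle \langle P\rangle }},$$

and substituting this $`\mathrm{\Delta }P`$ back into the bound reproduces $`\mathrm{\Delta }X_0=(2/m)\sqrt{\langle X\rangle \langle P\rangle \hbar }`$.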
The existence of minimal uncertainties and their consequences for structure were first examined by Kempf, albeit in a different context. We note that the product $`\langle X\rangle \langle P\rangle `$ is in general nonnegative. It can be made negative by moving the center of coordinates but this would imply that the Hamiltonian of the underlying system is translationally invariant such as the free particle or the particle in a box (for bound systems $`\langle P\rangle =0`$). For all such systems the Hamiltonian does not depend on the position (or fuzzy position) and incorporating the fuzziness of the particle into our quantum description is irrelevant to the dynamics. Hence, the Compton wavelength can be set to zero in such cases which is the correspondence limit with ordinary quantum mechanics. If we view the uncertainty product as a measure of the cell volume of phase space we observe that quantized phase space acquires an added fuzziness and the cell volume no longer has a uniform value equal to the Planck constant. Fuzzy phase space has a direct implication for the quantization of spacetime as we will demonstrate in section V.
In view of the special theory of relativity, particles are actually located at spacetime points. The introduction of smearing in the spatial direction demands that we introduce fuzziness in the time direction, otherwise, the instantaneous annihilation of a particle of finite extent would violate causality. As was the case with the fuzzy position the smearing is achieved by convolving the time coordinate with a Gaussian in the zeroth component of the momentum operator (the Hamiltonian) giving rise to
$`T_f`$ $`=`$ $`(Te^{-H^2/m^2})\equiv (te^{-H^2/m^2})`$ (21)
$`H`$ $`\equiv `$ $`i{\displaystyle \frac{d}{dt}}.`$ (22)
We observe that in our representation we choose to view time as an operator on the same footing as the position operator. This is in keeping with the modern unified view of spacetime and is further evidenced when we discuss the nontrivial commutation relations between the 4-positions. The smeared time operator $`T_f`$ reverts to its smooth time coordinate representation in the limit as the characteristic times of the quantum system become much longer than the flight time of the particle. The time of flight of a particle is defined as the time it takes to traverse a distance of the Compton wavelength at the maximally allowable speed c. Due to the fuzziness we have introduced in the time direction the energy-time uncertainty principle gets modified in a manner analogous to the phase space uncertainty product giving rise to
$$\mathrm{\Delta }H\mathrm{\Delta }T\ge \frac{\hbar }{2}+\frac{2\langle H\rangle \langle T\rangle }{m^2}(\mathrm{\Delta }H)^2.$$
(23)
This relation implies a minimal uncertainty in time given by
$$\mathrm{\Delta }T_0=\frac{2}{m}\sqrt{\langle H\rangle \langle T\rangle \hbar }$$
(24)
which is expected since the time operator has been smeared out. The product $`\langle H\rangle \langle T\rangle `$ is in general non-negative. It can be made negative by moving the center of the time coordinate but this would imply that the Hamiltonian of the underlying system obeys time translational invariance. For all such systems the Hamiltonian is time independent and incorporating the time smearing into our quantum description is irrelevant to the dynamics. Hence, the Compton wavelength can be set to zero in such cases which is the correspondence limit with ordinary quantum mechanics. Thus, by introducing these self-adjoint operator representations for position and time we are able to quantify and characterize the finite extent of a particle. We now proceed to formulate the Hilbert space representation theory of these operators.
## III Hilbert Space Representation
The fuzzy position operator $`X_f`$ and the momentum operator P satisfy the uncertainty relation Eq. (14). This relation does not imply a minimal uncertainty in the fuzzy position or the momentum. As a consequence, the eigenstates of the self-adjoint fuzzy position and momentum operators can be approximated to arbitrary precision by sequences $`|\psi _n`$ of physical states of increasing localization in position or momentum space:
$$\underset{n\to \mathrm{\infty }}{lim}\mathrm{\Delta }X_{f_{|\psi _n\rangle }}=0\quad \text{or}\quad \underset{n\to \mathrm{\infty }}{lim}\mathrm{\Delta }P_{|\psi _n\rangle }=0.$$
(25)
Hence, the fuzzy position and momentum operators admit a continuous position or momentum space representation in the Hilbert space. Since the momentum operator is identical to the one used in ordinary quantum mechanics it has the usual orthogonal plane wave eigenstates. The eigenvalue problem of the fuzzy position operator
$$X_f\psi =\lambda \psi $$
(26)
can be written in the momentum basis (which we choose for convenience) as
$$e^{-p^2/2m^2}\frac{d}{dp}(e^{-p^2/2m^2}\psi )=-i\lambda \psi .$$
(27)
Defining the function $`\varphi =e^{-p^2/2m^2}\psi `$ and introducing the measure transformation $`dr=e^{p^2/m^2}dp`$ we obtain the eigensolutions as
$$\psi (p)=\frac{1}{\sqrt{2\pi }}e^{p^2/2m^2-i\lambda r},$$
(28)
where freedom in scale has been used to normalize the solution. The eigenfunctions are orthogonal with respect to the transformed measure $`L^2(e^{p^2/m^2}dr)`$ because
$$\langle \psi _\lambda (p)|\psi _{\lambda ^{\prime }}(p)\rangle =\frac{1}{2\pi }\int _{-\mathrm{\infty }}^{\mathrm{\infty }}e^{i(\lambda -\lambda ^{\prime })r}dr=\delta (\lambda -\lambda ^{\prime }).$$
(29)
The inner product $`\langle \psi _\lambda (p)|\psi _{\lambda ^{\prime }}(p)\rangle `$ is divergent in the space $`L^2(dp)`$ but is equal to the Dirac delta function in the space $`L^2(e^{-p^2/m^2}dr)`$. As $`p`$ ranges from $`-\mathrm{\infty }`$ to $`\mathrm{\infty }`$ the volume element $`dp`$, under the measure transformation, is squeezed into a Gaussian width times the line element $`dr`$, and consequently the orthogonality of the fuzzy position eigenstates is preserved. We note that had we tried to construct the formal position eigenstates (eigenstates of $`X`$) we would have had to sacrifice orthogonality due to the appearance of the minimal uncertainty in position. The eigenfunctions of the fuzzy position operator in the position representation will be Fourier transforms of the eigensolutions in the momentum representation since the Fourier transform of an $`L^2`$ function will be an $`L^2`$ function in the same measure.
## IV Translational and Rotational Invariance
We will now examine the behavior of the quantum mechanics of extended objects under translations and rotations and solve the eigenvalue problem of fuzzy angular momentum.
### A Translational Invariance
Under a translation of the coordinate $`x\to x+ϵ`$ we have the fuzzy translation
$`X_f`$ $`\to `$ $`X_f+ϵe^{-P^2/m^2},`$ (30)
$`P`$ $`\to `$ $`P.`$ (31)
In the passive transformation picture
$`T^{\dagger }(ϵ)X_fT(ϵ)`$ $`=`$ $`X_f+ϵe^{-P^2/m^2},`$ (32)
$`T^{\dagger }(ϵ)PT(ϵ)`$ $`=`$ $`P,`$ (33)
where $`T(ϵ)`$ is the translation operator which translates the state $`|\psi \rangle `$. Expanding $`T(ϵ)`$ to first order and feeding into Eq. (32) we obtain
$$[X_f,G]=ie^{-P^2/m^2},$$
(34)
where $`G`$ is the generator of infinitesimal translations. Thus, the momentum is still the generator of fuzzy spatial translations and analogously, we find that the Hamiltonian is the generator of fuzzy time translations. Since these are the same generators as found in ordinary quantum mechanics, we can conclude by similar reasoning and by Ehrenfest’s theorem that fuzzy space (time) translational invariance will ensure the time independence of the momentum (Hamiltonian).
### B Rotational Invariance
Let us denote the operator that rotates two-dimensional vectors by $`R(\varphi _0\widehat{k})`$ for a rotation by $`\varphi _0`$ about the z-axis. Let $`U[R]`$ be the operator associated with this rotation. For an infinitesimal rotation $`ϵ_z\widehat{k}`$ we set
$$U[R]=I-iϵ_zL_{f_z},$$
(35)
where $`L_{f_z}`$ is the generator of fuzzy rotations. We can determine $`L_{f_z}=X_fP_y-Y_fP_x`$ by feeding this $`U[R]`$ into the passive transformation equations for an infinitesimal rotation:
$$U^{\dagger }[R]X_fU[R]=X_f-Y_fϵ_z,$$
(36)
and so on. $`L_{f_z}`$ is conserved in a problem with rotational invariance: if
$$U^{\dagger }[R]H(X_f,P_x;Y_f,P_y)U[R]=H(X_f,P_x;Y_f,P_y)$$
(37)
it follows (by choosing an infinitesimal rotation) that
$$[L_{f_z},H]=0\quad \text{or}\quad \langle \dot{L}_{f_z}\rangle =0$$
(38)
by Ehrenfest’s theorem.
### C The eigenvalue problem of $`L_{f_z}`$
In the momentum basis the two dimensional fuzzy angular momentum operator can be written as
$$L_{f_z}\equiv e^{-p^2/2m^2}(i\frac{\partial }{\partial p_x}e^{-p^2/2m^2}p_y-i\frac{\partial }{\partial p_y}e^{-p^2/2m^2}p_x),$$
(39)
where $`p^2=p_x^2+p_y^2`$. This is the correct generalization of the smeared position operator to higher dimensions (in this case two) as can be seen by letting $`X_f`$ act on a wavefunction in two dimensions. We can further simplify the derivatives in $`L_{f_z}`$ and switch to polar coordinates to obtain
$$L_{f_z}\equiv ie^{-p^2/2m^2}\frac{\partial }{\partial p_\varphi }e^{-p^2/2m^2}.$$
(40)
The eigenvalue problem of $`L_{f_z}`$,
$$L_{f_z}\psi (p_\rho ,p_\varphi )=l_{f_z}\psi (p_\rho ,p_\varphi ),$$
(41)
can be written in the momentum basis as
$$ie^{-p^2/2m^2}\frac{\partial }{\partial p_\varphi }(\psi e^{-p^2/2m^2})=l_{f_z}\psi .$$
(42)
Defining $`\varphi =\psi e^{-p^2/2m^2}`$ and using the transformed measure,
$$dp_\varphi =\frac{1}{2\pi }\left[\frac{\sqrt{\pi }m}{2i}erf(2\pi i/m)\right]e^{-p_\varphi ^2/m^2}dr$$
(43)
we arrive at
$$\psi (p_\rho ,p_\varphi )\propto e^{il_{f_z}e^{p_\rho ^2/m^2}r+p^2/2m^2},$$
(44)
where the numerical factor in the measure transformation has been chosen so that as $`p_\varphi `$ ranges from 0 to $`2\pi `$, $`r`$ also ranges from 0 to $`2\pi `$. The eigenfunctions are orthogonal with respect to the transformed measure $`L^2(e^{-p_\varphi ^2/m^2}p_\rho dp_\rho dr)`$ where the numerical factor has been suppressed. We observe that $`l_{f_z}`$ seems to be arbitrary and even complex since the range of $`r`$ is restricted. The fact that complex eigenvalues enter the solution signals that we are overlooking the Hermiticity constraint. Imposing this condition we have
$$\langle \psi _1|L_{f_z}|\psi _2\rangle =\langle \psi _2|L_{f_z}|\psi _1\rangle ^{*},$$
(45)
which becomes in the momentum basis
$$\int _0^{\mathrm{\infty }}\int _0^{2\pi }\varphi _1^{*}(i\frac{\partial }{\partial p_\varphi })\varphi _2\,p_\rho dp_\rho dp_\varphi =\left[\int _0^{\mathrm{\infty }}\int _0^{2\pi }\varphi _2^{*}(i\frac{\partial }{\partial p_\varphi })\varphi _1\,p_\rho dp_\rho dp_\varphi \right]^{*},$$
(46)
where $`\varphi =\psi e^{-p^2/2m^2}`$. If this requirement is to be satisfied by all $`\varphi _1`$ and $`\varphi _2`$, one can show (by integrating by parts) that it is enough if each $`\varphi (p_\rho ,p_\varphi )`$ obeys
$$\varphi (p_\rho ,0)=\varphi (p_\rho ,2\pi ).$$
(47)
If we impose this constraint on the $`L_{f_z}`$ eigenfunctions we find that the eigenvalues $`l_{f_z}`$ have to obey the following relation
$$l_{f_z}=e^{-p_\rho ^2/m^2}k,$$
(48)
where $`k`$ is an integer. The fuzzy angular momentum is equal to an integral multiple of $`\hbar `$ times a smearing factor. This is an example of smeared or fuzzy quantization and as the Compton wavelength vanishes we regain the usual relation for ordinary quantized angular momentum.
## V Quantization of Spacetime
The raised phase space uncertainty product which we have discussed before implies that phase space acquires an added fuzziness due to the smearing of the position operator. By considering the algebra of smooth functions over fuzzy phase space generated by fuzzy positions and momenta, and by using the Gel’fand and Naimark reconstruction theorem one can recover all information about the underlying space. However, since we already know the mathematical form of the fuzzy position operator, we use a more simple approach and directly construct the nontrivial commutators between the fuzzy positions. In the momentum basis the commutator between fuzzy positions in 4-dimensional spacetime is
$$[X_{f_\mu },X_{f_\nu }]=-e^{-p^2/2m^2}(\partial _{p_\mu }e^{-p^2/m^2}\partial _{p_\nu }-\partial _{p_\nu }e^{-p^2/m^2}\partial _{p_\mu })e^{-p^2/2m^2}.$$
(49)
The derivative terms can be further simplified and introducing $`X_\mu \equiv i\partial _{p_\mu }`$ and $`P\equiv p`$ we obtain
$$[X_{f_\mu },X_{f_\nu }]=\frac{i}{m^2}e^{-P^2/2m^2}(P_\nu X_\mu -P_\mu X_\nu )e^{-P^2/2m^2}.$$
(50)
The nontrivial commutation relation between the fuzzy positions implies that fuzzy spacetime is quantized. When the confines are much larger than the Compton wavelength, that is, when we are viewing a larger patch of spacetime, $`p^2/m^2\ll 1`$, and the Gaussian (smearing) factors in Eq. (50) become negligible. In this limit $`X_{f_\mu }\to X_\mu `$, and we obtain
$$[X_{f_\mu },X_{f_\nu }]\to [X_\mu ,X_\nu ]=\frac{i}{m^2}(P_\nu X_\mu -P_\mu X_\nu ).$$
(51)
Thus, as long as the Compton wavelength is nonzero, the ordinary 4-positions also exhibit a nontrivial commutation relation given by Eq. (51).This result is identical to the one obtained by Snyder in 1947. In his paper Snyder demonstrates that the assumption of Lorentz covariance does not exclude a quantized spacetime which he develops by defining the 4-positions in terms of the homogenous (projective) coordinates of a De Sitter space. In the limit as the natural unit of length (the Compton wavelength) vanishes our quantized spacetime changes to the ordinary continuous spacetime and the commutators revert to their standard values. Therefore, our formulation of the quantum mechanics of extended objects implies that spacetime is quantized and that it has a Lorentz covariant structure.
## VI Fuzzy (extended object) Harmonic Oscillator
Before we study the quantum mechanical fuzzy harmonic oscillator let us understand the classical analog of such an oscillator. Classically, we can model an extended object as a point mass connected to a nonlinear spring of stiffness constant, say $`k_1`$. When this spring-mass system is connected to another linear spring of stiffness constant, say $`k_2`$ we essentially have a classical, one dimensional, extended object oscillator. When the wavelength of oscillation is small compared to the size of the extended object (in this case the length of the nonlinear spring of stiffness constant $`k_1`$) the oscillator will exhibit harmonic behavior since the small oscillations do not disturb the configuration of the extended object. As the wavelength of oscillation becomes comparable to the size of the extended object, anharmonic vibrations set in. Again, as the wavelength of oscillation becomes much larger than the size of the extended object, the point particle approximation becomes tenable and harmonic vibrations are recovered. We would expect the quantum version of the extended object oscillator to exhibit similar behavior albeit with quantized energy levels. In the first regime, when the wavelength of oscillation is small compared to the size of the extended object, since small oscillations do not disturb the configuration of the extended object to any appreciable extent we will obtain the usual quantized energy levels of the simple harmonic oscillator. It is in the second and third regimes where we would need to apply the quantum mechanics of extended objects. The Hamiltonian for a one dimensional fuzzy harmonic oscillator can be written as
$$H=\frac{P^2}{2m}+\frac{1}{2}m\omega ^2X_f^2.$$
(52)
Introducing the operator representation for the fuzzy position and momentum in the momentum basis and simplifying terms, we obtain
$$\frac{1}{2}m\omega ^2\left[\frac{d^2\varphi }{dp^2}-(\frac{p^2}{m^4}-\frac{1}{m^2})\varphi \right]=(\frac{p^2}{2m}-E)e^{2p^2/m^2}\varphi ,$$
(53)
where $`\varphi =e^{-p^2/m^2}\psi `$, $`H\psi =E\psi `$, and $`\varphi `$ lies in $`L^2(dp)`$. When the wavelength of oscillation (the confines) is large compared to the size of the extended object, $`p^2/m^2\ll 1`$, in which case we can approximate $`e^{2p^2/m^2}\approx 1+2p^2/m^2`$. In this approximation Eq. (53) can be rewritten as:
$$\frac{d^2\varphi }{dp^2}+2m(\stackrel{~}{E}-\frac{1}{2}m\mathrm{\Omega }^2p^2)\varphi =0,$$
(54)
where
$`2m\stackrel{~}{E}`$ $`=`$ $`{\displaystyle \frac{2E}{m\omega ^2}}+{\displaystyle \frac{1}{m^2}},`$ (55)
$`m^2\mathrm{\Omega }^2`$ $`=`$ $`-{\displaystyle \frac{4E}{m^3\omega ^2}}+{\displaystyle \frac{1}{m^4}}+{\displaystyle \frac{1}{m^2\omega ^2}}.`$ (56)
This is simply the differential equation for a simple harmonic oscillator in terms of the dummy energy $`\stackrel{~}{E}`$ and frequency $`\mathrm{\Omega }`$. For well behaved solutions we require the quantization condition
$$\stackrel{~}{E}_n=(n+\frac{1}{2})\mathrm{\Omega },\quad n=0,1,2,\mathrm{\dots }.$$
(57)
Re-expressing this relation in terms of the physical energy $`E`$ and frequency $`\omega `$ and retaining terms up to $`o(\mathrm{}^2)`$, we obtain
$$E_n=(n+\frac{1}{2})\omega -\frac{\omega ^2}{2m},\quad n=0,1,2,\mathrm{\dots }.$$
(58)
As we would expect, the fuzzy particle exhibits harmonic behavior when the wavelength of oscillation is large compared to the size of the particle. In this approximation, the eigenvalue spectrum of the fuzzy harmonic oscillator is equivalent to the spectrum of a displaced simple harmonic oscillator. The shift in the energy spectrum can be understood by observing that in the classical spring-mass model, the extended object (the nonlinear spring) would undergo compression due to the oscillations of the linear spring thereby displacing the equilibrium position. The quantum counterpart exhibits the same behavior and when $`\omega m`$ in Eq. (58), that is, when the point particle approximation becomes tenable we obtain the eigenspectrum of the simple harmonic oscillator. In the classical analog this would mean that, at sufficiently large oscillation wavelengths the compression of the nonlinear spring becomes insignificant. Retaining terms up to $`o(\mathrm{}^2)`$, the eigenfunctions of the harmonic oscillator in this approximation are given by:
$$\psi (p)\propto e^{\frac{p^2}{m^2}(1-\frac{m}{2\omega })}H_n[\sqrt{(m\omega )^{-1}}p],$$
(59)
where $`H_n`$ are the Hermite polynomials. Since $`\psi `$ lies in $`L^2(e^{-2p^2/m^2}dp)`$, the eigenfunctions will be normalizable. By inserting these approximate solutions into the exact differential equation Eq. (53) we find that they do not differ by derivative terms and hence they are close in some sense to the exact solutions.
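The approximate spectrum can also be checked by brute force; the sketch below diagonalizes the full fuzzy-oscillator Hamiltonian on a momentum grid (our own discretization, with illustrative parameters $`\omega /m=0.1`$):

```python
import numpy as np

m, w = 10.0, 1.0                          # Compton scale and frequency, w << m
N, L = 1200, 24.0                         # momentum grid: p in [-L, L]
p = np.linspace(-L, L, N)
h = p[1] - p[0]

# Exactly antisymmetric central-difference d/dp, so Xf2 below is symmetric
D = (np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N - 1), -1)) / (2 * h)
G  = np.diag(np.exp(-p**2 / (2 * m**2)))  # exp(-P^2/2m^2)
G2 = np.diag(np.exp(-p**2 / m**2))        # exp(-P^2/m^2)

Xf2 = -G @ D @ G2 @ D @ G                 # X_f^2 with X = i d/dp in the momentum basis
H = np.diag(p**2 / (2 * m)) + 0.5 * m * w**2 * Xf2
print(np.sort(np.linalg.eigvalsh(H))[:3])
# compare with (n+1/2)w - w^2/(2m) = 0.45, 1.45, 2.45 from Eq. (58);
# residual deviations reflect the anharmonic corrections of Eq. (64)
```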
If we include higher values of momenta in our approximation and write $`e^{2p^2/m^2}\approx 1+2p^2/m^2+2p^4/m^4,`$ we obtain the differential equation
$$\frac{d^2\varphi }{dp^2}+2m(\frac{\alpha }{2m}-\frac{\beta }{2m}p^2-\frac{\gamma }{2m}p^4)\varphi =0,$$
(60)
where
$`\alpha `$ $`=`$ $`{\displaystyle \frac{2E}{m\omega ^2}}+{\displaystyle \frac{1}{m^2}},`$ (61)
$`\beta `$ $`=`$ $`-{\displaystyle \frac{4E}{m^3\omega ^2}}+{\displaystyle \frac{1}{m^4}}+{\displaystyle \frac{1}{m^2\omega ^2}},`$ (62)
$`\gamma `$ $`=`$ $`{\displaystyle \frac{2}{m^4\omega ^2}}-{\displaystyle \frac{4E}{m^5\omega ^2}}.`$ (63)
This is the differential equation for an anharmonic oscillator. As we would expect when higher momentum values become important or equivalently as the wavelength of oscillation becomes comparable to the size of the fuzzy particle, anharmonic vibrations set in. We can compute the eigenspectrum of the anharmonic oscillator using perturbation theory. We note that the perturbation expansion breaks down for some large enough $`n`$. Retaining terms up to $`o(\mathrm{}^2)`$ the eigenspectrum is found to be
$$E_n=(n+\frac{1}{2})\omega -\frac{\omega ^2}{2m}+\frac{3\omega ^2}{4m}(1+2n+2n^2),\quad n=0,1,2,\mathrm{\dots }.$$
(64)
Figure 1 shows a plot of the first two anharmonic oscillator eigenfunctions. For comparison the first two harmonic oscillator eigenfunctions are also shown. The anharmonic oscillator eigenfunctions have a steeper slope because the particle is placed in a stronger potential as compared to the harmonic oscillator potential. If we include even higher values of momenta in our approximation we find that the anharmonicity increases and in the limit of large quantum numbers our quantum descriptions pass smoothly to their classical counterparts. Therefore, the quantum mechanics of extended objects provides a description of the fuzzy harmonic oscillator which augments our classical intuition. Such a description could be useful when we study harmonic excitations of quasiparticles which cannot be localized to arbitrary precision. The quantum mechanics of extended objects can also be used to describe compound particles such as baryons or mesons in situations where their nonzero size matters but the details of the internal structure do not contribute. One such situation is the description of the nucleon-nucleon interaction at very short distances which we proceed to examine.
## VII The Yukawa Potential
At present the physics of the nucleon-nucleon interaction can be divided into three major regions
1. The long-distance region $`r\gtrsim 2`$ fm $`\approx 1.5m_\pi ^{-1}`$ where one-pion exchange dominates and the quantitative behavior of the potential is very well established;
2. The intermediate region $`0.8`$ fm $`\lesssim r\lesssim 2`$ fm where the dynamical contributions from two-pion exchange (effective boson exchange) compete with or exceed the one-pion exchange potential;
3. The inner region $`r\lesssim 0.8`$ fm has a complicated dynamics not readily accessible to a quantitative theoretical description. This region is expected to be influenced by heavy mesons and/or by quark/gluon degrees of freedom. It is usually approached in a phenomenological way.
Moreover, the inner region contains a repulsive hard core of radius $`0.6`$ fm which was first proposed by Jastrow in 1951 in order to fit nucleon-nucleon scattering data. The presence of a repulsive nucleon core is necessary to explain the saturation of nuclear forces. This short range and repulsive nucleon force is believed to be mediated by an $`\omega `$ meson of mass $`782`$ MeV and the intermediate range attractive nucleon force is mediated by a $`\sigma `$ meson (effective boson) of mass $`550`$ MeV. Once the masses are fixed, the coupling constants which measure the strength of the coupling between a meson and a baryon are chosen to reproduce nucleon-nucleon scattering phase shifts and deuteron properties. These phenomenological coupling constants are found to be $`g_\omega ^2/4\pi =10.83`$ and $`g_\sigma ^2/4\pi =7.303`$. It is our objective to theoretically determine the radius of the repulsive nucleon core and to reproduce the phenomenological $`\omega `$ meson coupling constant using the quantum mechanics of extended objects which becomes relevant to the dynamics in the inner region due to the finite extent of the nucleon.
In order to reproduce consistent results we will focus attention on the bound state nucleon-nucleon interaction, namely, the deuteron. The deuterium nucleus ($`A=2,Z=N=1`$) is a bound state of the neutron-proton system, into which it may be disintegrated by irradiation with $`\gamma `$ rays of energy above the binding energy of $`2.226`$ MeV. The ground state of the deuteron is a triplet $`S`$ state and it has no excited states. The force between the proton and the neutron can be described in good approximation by a potential energy function of the form
$$V(r)=-V_0\frac{e^{-r/r_0}}{r/r_0}.$$
(65)
This is the well known Yukawa potential and is central to the mesonic theory of nuclear forces. The range of the force $`r_0`$ is equal to $`1/\mu `$, where $`\mu `$ is the mass of the associated meson and the strength $`V_0`$, or depth of the potential well is connected with the strength of the coupling between the meson and the nucleon field. In the center-of-mass coordinates the Hamiltonian for the $`S`$ state of the deuteron is
$$H=\frac{p^2}{2m}+V(r),$$
(66)
where $`m`$ is the reduced mass of the deuteron and $`r`$ determines the neutron-proton separation. For ease of comparison with the quantum mechanics of extended objects in which the momentum basis is more convenient, we can transcribe the Hamiltonian to the momentum basis by virtue of the exchange transformation
$$r\to pr_0^2,\quad \text{and}\quad p\to -r/r_0^2.$$
(67)
The exchange transformation is a canonical transformation and does not affect the dynamics. The Hamiltonian in the momentum basis is
$$H=\frac{r^2}{2mr_0^4}+V(p),$$
(68)
where $`𝐫\equiv i\mathrm{\nabla }_𝐩`$ is the position operator and $`V(p)=-V_0e^{-pr_0}/pr_0`$. The binding energy $`E_0=2.226`$ MeV can be estimated by means of the variational principle using the simple trial wavefunction
$$\psi (p)=e^{-\alpha pr_0},$$
(69)
in which we treat $`\alpha `$ as a variable parameter. Our choice of the trial wavefunction is motivated by the fact that we expect the ground state wavefunction to have no angular momentum, no nodes, and for $`p\psi (p)`$ to vanish as $`p\mathrm{}`$ as required for bound states. The variational method determines the energy as
$$E=\frac{\langle \psi |H|\psi \rangle }{\langle \psi |\psi \rangle }.$$
(70)
The energy $`E`$ serves as an upper bound on the ground state energy $`E_0`$. If we substitute $`E_0=-2.226`$ MeV for $`E`$ we can perform an approximate calculation of the relation between $`V_0`$ and $`r_0`$ (range-depth relation) that must hold if the potential function $`V(p)`$ is to give the value $`E_0=-2.226`$ MeV for the ground state energy. Figure $`2`$ shows a plot of the range-depth relation for the Yukawa potential (deuteron) as determined by this method. By comparing the values of $`V_0`$ for various values of $`r_0`$ with the results of an exact calculation using numerical integration we are able to estimate the accuracy of our approximate result. The approximate result is within a few percent of the exact result and the error decreases with increasing $`r_0`$. Therefore, our choice of the trial wavefunction is justified.
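A compact numerical rendering of this variational calculation is sketched below. The closed-form $`E(\alpha )`$ follows from elementary momentum-space integrals for the trial state (our own evaluation, not taken from the text), and the depth obtained for the one-pion range should come out near 50 MeV, to be compared with the corresponding point of Figure 2:

```python
from scipy.optimize import minimize_scalar, brentq

hbarc = 197.327                 # MeV fm
mred  = 938.92 / 2.0            # reduced neutron-proton mass, MeV (assumed value)
Egs   = -2.226                  # deuteron ground state energy, MeV
r0    = 1.43                    # fm, one-pion range

def Emin(V0):
    """Variational minimum of E(alpha) for psi = exp(-alpha p r0):
    E(alpha) = alpha^2/(2 m r0^2) - 4 V0 alpha^3/(2 alpha + 1)^2 ."""
    kin = hbarc**2 / (2.0 * mred * r0**2)    # kinetic coefficient, MeV
    E = lambda a: kin * a**2 - 4.0 * V0 * a**3 / (2.0 * a + 1.0)**2
    return minimize_scalar(E, bounds=(1e-3, 20.0), method="bounded").fun

V0 = brentq(lambda V: Emin(V) - Egs, 1.0, 500.0)  # depth reproducing Egs
print(V0)                                         # ~50 MeV for r0 = 1.43 fm
```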
Let us now analyze the same potential problem using the quantum mechanics of extended objects. In the momentum basis the fuzzy Hamiltonian for the $`S`$ state of the deuteron is
$$H=\frac{r_f^2}{2mr_0^4}+V(p),$$
(71)
where
$$𝐫_f\equiv ie^{-p^2/2m^2}\mathrm{\nabla }_𝐩e^{-p^2/2m^2}$$
(72)
is the fuzzy position operator which now determines the neutron-proton separation. Figure $`3`$ shows a plot of the $`S`$ state eigenfunctions as a function of momentum for $`r_0=1.43`$ fm, which corresponds to a $`\pi `$ meson of mass $`139.6`$ MeV, and for $`r_0=0.3596`$ fm, which corresponds to a $`\sigma `$ meson of mass $`550`$ MeV. The eigenfunctions obtained from ordinary quantum mechanics are also shown for comparison. The eigenfunctions obtained from the quantum mechanics of extended objects are pushed out in comparison to the usual eigenfunctions implying that there is a repulsive component to the potential which has the effect of pushing out the eigenfunctions as at the edge of an infinite well (compare with figure 1). By examining the plots of $`\varphi (p)=e^{-p^2/m^2}\psi (p)`$ (figure 4 shows one such plot for $`r_0=1.43`$ fm) where $`\psi (p)`$ are the eigenfunctions obtained from the quantum mechanics of extended objects, we observe that $`\varphi (p)`$ lies in $`L^2(d^3p)`$. Therefore, the eigenfunctions obtained from the extended object analysis are normalizable with respect to $`L^2(e^{-2p^2/m^2}d^3p)`$. This motivates us to choose as our trial wavefunction
$$\psi (p)=e^{-p^2/m^2-\alpha pr_0}.$$
(73)
The normalizability criterion in this measure ensures that
$$e^{-p^2/m^2}p\psi (p)\to 0\text{ as }p\to \mathrm{\infty }$$
(74)
as required for bound states (and as is the case with our trial wavefunction). Furthermore, when the confines are large ($`p^2/m^21`$), $`\psi (p)`$ in Eq. (73) passes smoothly into the trial wavefunction we had used when we applied ordinary quantum mechanics and which had yielded an accurate range-depth relation. Hence, our choice of the trial wavefunction is justified and with the given volume element we can determine the approximate range-depth relation that must hold if the potential function $`V(p)`$ is to give the value $`E_0=2.226`$ MeV for the binding energy. Numerical calculations performed in Mathematica reveal the range-depth relation shown in figure 5. The strength of the potential or depth of the well $`V_0^{}`$ in figure 5 is lower than the strength of the potential $`V_0`$ obtained from ordinary quantum mechanics (figure 2) particularly for smaller values of $`r_0`$. The existence of a repulsive component to the potential which we have already observed from a plot of the eigenfunctions shown in figure 3 is verified. Moreover, the depth of the well $`V_0^{}`$ in figure 5 is negative for $`r_00.563`$ fm. This implies the existence of a repulsive nucleon core with a radius $`r_c=0.563`$ fm, which is consistent with the phenomenologically obtained value of $`0.6`$ fm.
Let us model the effective nucleon-nucleon interaction by a potential of the form
$$V(r)=-V_0\frac{e^{-r/r_0}}{r/r_0}+V_1\frac{e^{-r/r_1}}{r/r_1},$$
(75)
where $`r_0=0.3596`$ fm corresponding to $`\sigma `$ meson exchange (attraction) and $`r_1=0.2529`$ fm corresponding to $`\omega `$ meson exchange (repulsion). This potential describes the main qualitative features of the nucleon-nucleon interaction: a short range repulsion between baryons coming from $`\omega `$ exchange and an intermediate range attraction coming from $`\sigma `$ exchange. The repulsive component of the effective nucleon-nucleon interaction must be held accountable for the drop in the well depth from $`V_0`$ to $`V_0^{\prime }`$, which is observed at $`r_0=0.3596`$ fm. Since the $`\omega `$ exchange occurs at a range of $`r_1=0.2529`$ fm we require that
$$V(r=r_1)=-V_0^{\prime }\frac{e^{-r_1/r_0}}{r_1/r_0}.$$
(76)
The quantities $`V_0=660.77`$ MeV and $`V_0^{\prime }=-81.0`$ MeV can be computed numerically or can be read from figures 2 and 5. A simple calculation yields the strength of the repulsive potential as $`V_1=1419.07`$ MeV. Figure 6 shows a plot of the effective nucleon-nucleon interaction. The potential is attractive at large distances and repulsive for small $`r`$. In terms of the coupling constants we can rewrite the effective nucleon-nucleon interaction as
$$V(r)=-\frac{g_\sigma ^2}{4\pi }\frac{e^{-r/r_0}}{r}+\frac{g_\omega ^2}{4\pi }\frac{e^{-r/r_1}}{r}.$$
(77)
Comparison with Eq. (75) yields $`g_\sigma ^2/4\pi =1.20`$ and $`g_\omega ^2/4\pi =1.815`$. Note that we are working in units with $`\mathrm{}=c=1`$. These theoretically obtained values of the coupling constants will differ from the phenomenological coupling constants because in our simple Yukawa model of the effective nucleon-nucleon interaction we have neglected important tensor interactions and spin-orbit terms which contribute to the form of the potential. However, the ratio of the theoretical coupling constants $`g_\omega ^2/g_\sigma ^2=1.512`$ which compares the relative strength of the repulsive coupling and the attractive coupling must be equal to the ratio of the phenomenologically determined coupling constants $`g_{\omega _p}^2/g_{\sigma _p}^2`$ in order for our simple Yukawa model to successfully describe the effective nucleon-nucleon interaction and to ensure the stability of the deuteron. Using the value $`g_{\sigma _p}^2/4\pi =7.303`$ and multiplying by the ratio 1.512 we obtain the value of the phenomenological coupling constant of the $`\omega `$ meson as $`g_{\omega _p}^2/4\pi =11.03`$. This value of the coupling constant differs by $`1.85`$ percent from the value obtained from fitting the nucleon-nucleon scattering phase shifts and deuteron properties which is equal to $`10.83`$. Therefore, the quantum mechanics of extended objects leads us to values of the $`\omega `$ meson coupling constant and of the repulsive core radius which are consistent with the phenomenologically obtained values.
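The arithmetic behind these couplings is simply $`g^2/4\pi =Vr/(\hbar c)`$ in conventional units; a short check using the values quoted above:

```python
hbarc = 197.327                      # MeV fm
g2_sigma = 660.77 * 0.3596 / hbarc   # ~1.20
g2_omega = 1419.07 * 0.2529 / hbarc  # ~1.82
ratio = g2_omega / g2_sigma          # ~1.51
print(g2_sigma, g2_omega, ratio, ratio * 7.303)  # last value ~11.0
```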
## VIII Conclusion
In this paper we have developed the Hilbert space representation theory of the quantum mechanics of extended objects and applied it to the fuzzy harmonic oscillator and the Yukawa potential. The results of the fuzzy harmonic oscillator are consistent with our classical intuition and in the case of the Yukawa potential we obtain accurate theoretical predictions of the hitherto phenomenologically obtained nucleon core radius and the $`\omega `$ meson coupling constant. In an age of increasing miniaturization, it is conceivable that as the confines of various quantum systems become comparable to the finite extent of the confined particles, the quantum mechanics of extended objects will play an important role in determining the dynamics. Furthermore, the infinite dimensional generalization of the quantum mechanics of extended objects, namely, the quantum field theory of extended objects needs to be understood. Since the ubiquitous and troublesome vertex in quantum field theory is effectively smeared out in such a treatment, it is possible that the problem of nonrenormalizable quantum field theories can be rendered tractable. The author is pursuing investigations in this direction.
Acknowledgements
I would like to thank E.C.G. Sudarshan and L. Sadun for insightful discussions. I would also like to thank R. Zgadzaj for helping me with the numerical calculations in Mathematica.
# Debye-Waller factor in He→Cu(001) collisions revisited: the role of the interaction potentials
## Abstract
Following the recently accumulated information on the vibrational properties of the Cu(001) surface acquired through single- and multiphonon He atom scattering experiments and the concomitant theoretical investigations, we have reexamined the properties of the Debye-Waller factor (DWF) characteristic of the He$`\to `$Cu(001) collisions using the recently developed fully quantal and three-dimensional model of inelastic He atom scattering from surfaces. We have focused our attention on the role which the various He-surface model potentials with their characteristic interaction parameters (range of the interaction, momentum and energy transfer cut-offs etc.) employed in the interpretation of the scattering data may play in determining the magnitude of the DWF. By combining the He-Cu(001) potential whose repulsive and attractive components are both allowed to vibrate with the substrate phonon density of states encompassing anharmonic effects, we obtain the values of the DWF which agree nicely with the experimental data without invoking additional fitting parameters. On the other hand, by taking the phonon momentum transfer cut-off $`Q_c`$ as an adjustable parameter, as has been frequently exploited in the literature, all the considered potentials can produce agreement with experiments by varying $`Q_c`$. The magnitudes of such best fit $`Q_c`$ values are compared with those available in the literature and their physical significance is discussed.
1. Introduction
A number of experimental studies of the vibronic properties of single crystal surfaces carried out in the past decade by utilizing thermal energy He atom scattering (HAS) have emphasized the importance of the microscopic properties of the interactions between He beam atoms and the surfaces investigated. The interpretation of both the single phonon HAS data and the multiphonon scattering spectra required a relatively detailed specification of the form of the interaction potentials and the corresponding matrix elements. The latter proved to be one of the key quantities in establishing a meaningful comparison between experiment and theory .
In a number of investigations of the vibrational properties of solid surfaces a successful theoretical interpretation of the relative single-phonon intensities was achieved by using adatom-substrate pairwise potentials and the distorted wave Born approximation (DWBA) for the description of the projectile motion in the static component of this potential . The studies of Cu(001) surfaces by HAS emerged as a particularly illustrative example in the context of the atom-surface interaction potentials. To interpret the He→Cu(001) time of flight (TOF) spectra certain refinements of the theory based on the pairwise potentials have been proposed, either through the concept of a pseudocharge model or through the introduction of anisotropic He-substrate atom pair interactions . In both cases the modified matrix elements were typified by some cut-off parameters whose variation could significantly affect the one-phonon intensities. However, the one-phonon scattering spectra thus calculated could correctly reproduce only the relative intensities of distinct phonon modes.
An additional test of the accuracy of some aspects of the scattering potentials may be provided by the multiphonon spectra, whose dependence on the details of the interaction is more complex. Namely, in the multiphonon collision regime already small changes in the projectile-phonon coupling may give rise to a cumulative effect in the scattering intensity. Surprisingly enough, the intensities of the multiphonon scattering spectra of the He→Cu(001) collision system were relatively successfully reproduced theoretically by using the earlier expressions for He-surface potentials and two somewhat different approaches to multiphonon HAS. However, no consensus on the values of the characteristic potential parameters has been achieved in these two studies, as in one of them different sets of interaction parameters were introduced for the single- and multiphonon scattering regimes.
Another basic quantity characteristic of the atom-surface scattering spectra which is sensitive to the features of the atom-surface interaction potentials is the so-called Debye-Waller factor (DWF). In the surface scattering experiments it gives a measure of the intensity $`I_0`$ of the elastically scattered specular beam relative to the incoming beam intensity $`I_{in}`$, and in this respect it differs from the notion introduced in neutron scattering from crystals . The DWF so defined, commonly written in the form $`I_0/I_{in}=\mathrm{exp}(-2W)`$ where $`2W`$ is the corresponding Debye-Waller exponent, becomes essential in the multiphonon scattering regime because it provides the proper normalization of the scattering spectrum, which should obey the unitarity principle (optical theorem). Also, in contrast to the measurements of the single phonon scattering spectra, the measurements of the DWF provide values which in a sense are ”absolute”, i.e. unaffected by the time of flight (TOF) technique, angular and kinematic restrictions, etc.
A rough estimate of the magnitude of the Debye-Waller attenuation in atom-surface scattering was given long ago by Weare and later rederived by Levi and Suhl . They arrived at an approximate expression for the DW exponent as a function of the substrate temperature $`T_s`$ in the form:
$$\underset{T_s>\mathrm{\Theta }_D}{lim}2W(T_s)=\frac{3(\hbar \mathrm{\Delta }k_z)^2}{M_{crys}k_B\mathrm{\Theta }_D}\left(\frac{T_s}{\mathrm{\Theta }_D}\right)=24\frac{M_{He}E_i\mathrm{cos}^2\theta _i}{M_{crys}k_B\mathrm{\Theta }_D}\left(\frac{T_s}{\mathrm{\Theta }_D}\right).$$
(1)
Here $`\mathrm{\Theta }_D`$ is the surface Debye temperature of the substrate, $`\mathrm{\Delta }k_z`$ is the change of the projectile momentum normal to the surface, $`E_i`$ and $`\theta _i`$ are the incoming energy and angle of scattering of the projectile, respectively, $`M_{crys}`$ is the mass of the crystal atoms and $`k_B`$ is the Boltzmann constant. $`E_i`$ in this expression is sometimes also corrected for the surface potential well depth $`D`$ (Beeby’s correction ), in which case $`(\hbar \mathrm{\Delta }k_z)^2`$ is replaced by $`(\hbar \mathrm{\Delta }k_z)^2+8M_{He}D`$. However, the form of the DW exponent (1) can be justified only in the regime of impulsive scattering and therefore its validity is of limited range. In particular, for incident energies typical of thermal energy He atom scattering from surface phonons and soft projectile-surface interactions the approximation of impulsive scattering has been shown to become unreliable for making quantitative comparisons with the experimental data.
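For orientation, Eq. (1) is simple enough to evaluate directly. The sketch below (Python) implements the impulsive high-temperature estimate, including the optional Beeby well-depth correction; the example values (He on Cu with $`\mathrm{\Theta }_D=290`$ K) are taken from the calculations described later in the paper, and the function is only this rough estimate, not the full multiphonon result.

```python
import numpy as np

K_B = 8.617e-2  # Boltzmann constant in meV/K

def two_w_impulsive(E_i, theta_i_deg, T_s, theta_D, m_proj, m_crys, D=0.0):
    """Impulsive high-temperature Debye-Waller exponent 2W of Eq. (1).

    E_i and D in meV, theta_i_deg in degrees, T_s and theta_D in K; the
    masses enter only through their ratio.  D > 0 applies Beeby's correction,
    i.e. E_i cos^2(theta_i) -> E_i cos^2(theta_i) + D.
    """
    e_perp = E_i * np.cos(np.radians(theta_i_deg))**2 + D
    return 24.0 * (m_proj / m_crys) * e_perp / (K_B * theta_D) * (T_s / theta_D)

# Example: He -> Cu(001), E_i = 63 meV, theta_i = 39 deg, T_s = 300 K.
print(two_w_impulsive(63.0, 39.0, 300.0, theta_D=290.0, m_proj=4.0, m_crys=63.5))
# ~ 2.4, i.e. a specular attenuation exp(-2W) of roughly 0.1
```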
The measurements of the DWF in HAS from metal surfaces have been systematically performed for Ag(111) , Pt(111) , Cu(001) , Cu(110) and Ni(115) , and theoretical interpretations have been given within two different approaches developed to treat multiphonon excitations in atom-surface collisions. The calculations of the DWF in HAS from Ag(111) and Pt(111) were based on a three-dimensional scattering formalism developed in Ref. . They successfully reproduced the experimentally observed magnitude and the linear $`T_s`$-dependence of the Debye-Waller exponent $`2W`$ up to $`T_s=700`$ K with the ”vibrating soft atom model potential” . The experimental data on the DWF available for Cu(001) and Cu(110) surfaces were interpreted by carrying out a perturbation expansion of the scattering matrix in a distorted wave basis . Although these calculations were essentially one-dimensional as regards the scattered particle dynamics, their real merit lies in the finding that for atom-phonon coupling to all orders in the lattice displacements the repeated single-phonon exchange processes give a much larger contribution to the DWF than the simultaneous many-phonon exchanges of the same multiplicity . Upon introducing some adjustable parameters the DWF was calculated by assuming only the repulsive component of the total potential to vibrate and retaining in the scattering matrix the lowest-order dominant contributions in powers of $`T_s`$. Such a truncation of the series for the DWF, which violates the unitarity of the scattering spectrum, gives rise to an artifact in the curvature of the DWF versus $`T_s`$ on a semilogarithmic plot.
The recently developed fully quantum, three-dimensional (as regards the particle dynamics) and unitary multiphonon He atom scattering formalism also embodies a general expression for the DWF as an essential ingredient. The application of this formalism to multiphonon He→Cu(001) collisions produced nice agreement of the theoretical predictions with the experimental multiphonon spectra by using the same interaction parameters as in the one-phonon theory, i.e. different from those used by the authors of Ref. to fit the experimental data. Hence, the state of affairs regarding the forms of the interaction potentials in HAS from Cu(001) surfaces seems to be critical, as different expressions and parameters have been used to optimally describe the same physical situation.
The selection of an adequate model atom-surface potential to describe inelastic processes in He→Cu(001) collisions can also be appropriately tested in the evaluation of an ”absolute” quantity such as the complete DWF, provided the latter is calculated within a genuine three-dimensional scattering model and by taking into account all dominant multiphonon contributions in powers of $`T_s`$. This task is carried out in the present work by making use of the quoted novel multiphonon scattering formalism . In the following sections we focus our attention specifically on the question of what effect the various model potentials employed in recent interpretations of the one- and multiphonon HAS spectra from Cu(001) may have on the magnitude of the DWF and its variation with $`T_s`$.
2. Atom-surface scattering potentials and the DWF
In a series of earlier publications we have shown that in the case of linear projectile-phonon coupling, which gives by far the largest contribution to the multiphonon processes , one can derive a closed-form unitary expression for the scattering spectrum valid both in the one- and multiphonon scattering regimes. The point of departure in this approach is a distorted wave basis set of projectile wave functions which exactly describe elastic reflections from a flat surface. Inelastic processes are then treated as a perturbation of the distorted waves brought about by the vibrations of the surface. In the weak coupling limit the inelastic scattering spectrum thus calculated reduces, up to a kinematic Jacobian-like factor, to the standard inelastic reflection coefficient calculated in the DWBA . On the other hand, for strong coupling the scattering spectrum is expressed in terms of uncorrelated and correlated multiphonon scattering processes in which only one phonon can be exchanged at a certain instant, and the normalization of the entire spectrum is given by the DWF . In the collision regimes typical of HAS experiments the uncorrelated multiphonon processes are dominant over the correlated ones, and in this limit the scattering spectrum acquires the form of the exponentiated Born approximation (EBA) expression . The corresponding DWF to all orders in the coupling constant reads :
$$e^{-2W^{EBA}}=\mathrm{exp}[-\underset{f\ne i}{\sum }R_{fi}^{DWBA}],$$
(2)
where $`R_{fi}^{DWBA}`$ is the temperature dependent one-phonon inelastic reflection coefficient calculated in the DWBA. $`R_{fi}^{DWBA}`$ is quadratic in the projectile-phonon coupling and describes the transition of the collision system from the initial state $`i`$, characterized by the particle distorted wave quantum numbers $`𝐤_𝐢=(𝐊_𝐢,k_{iz})`$ and the initial phonon distribution, to a final state $`f`$ in which the particle quantum numbers are $`𝐤_𝐟=(𝐊_𝐟,k_{fz})`$ and the final phonon distribution differs from the initial one by one phonon quantum. Here $`\hbar 𝐊`$ denotes the lateral particle momentum and $`\hbar k_z`$ the perpendicular particle momentum outside the range of the static distorting potential $`\overline{U}(z)`$ of the planar surface. Explicitly, one has :
$`2W^{EBA}`$ $`=`$ $`{\displaystyle \underset{𝐐,j,k_z}{\sum }}\left[|𝒱_{k_z,k_{zi},j}^{𝐊_𝐢,𝐐}(+)|^2[\overline{n}_{ph}(\hbar \omega _{𝐐,j})+1]+|𝒱_{k_z,k_{zi},j}^{𝐊_𝐢,𝐐}(-)|^2\overline{n}_{ph}(\hbar \omega _{𝐐,j})\right],`$ (3)
where $`𝐐`$, $`j`$ and $`\omega _{𝐐,j}`$ denote the phonon wavevector parallel to the surface, the branch index and frequency, respectively, and $`\overline{n}_{ph}(\hbar \omega _{𝐐,j})`$ is the Bose-Einstein distribution of phonons in thermal equilibrium at $`T_s`$. The on-the-energy-shell one phonon absorption and emission matrix elements are given by
$$𝒱_{k_z,k_z^{},j}^{𝐊,𝐐}(\pm )=2\pi V_{k_z,k_z^{},j}^{𝐊,𝐊^{},𝐐}\delta _{𝐊^{},𝐊\pm 𝐐}\delta (E_{𝐊^{},k_z^{}}-E_{𝐊,k_z}\pm \hbar \omega _{𝐐,j})$$
(4)
where $`E_{𝐊,k_z}`$ denotes the particle energy. $`|𝒱_{k_z,k_z^{},j}^{𝐊,𝐐}(\pm )|^2`$ represent the first-order DWBA probabilities for a state-to-state transition $`𝐊,k_z\rightarrow 𝐊^{},k_z^{}`$ of the particle. Their seemingly divergent contribution to (3) disappears upon conversion of the $`\delta `$-functions to Kronecker symbols and summation over $`𝐐`$ and $`k_z`$. The matrix elements $`V_{k_z,k_z^{},j}^{𝐊,𝐊^{},𝐐}`$ of the projectile-phonon interaction $`V(𝐫)`$ are taken with respect to the distorted waves corresponding to the projectile motion in $`\overline{U}(z)`$. A detailed derivation of these formulae and their application to HAS problems has been given in .
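The structure of Eqs. (2)-(4) is easy to mirror in code even though the physics sits in the matrix elements. The minimal sketch below (Python) performs only the bookkeeping of Eq. (3): thermally occupied one-phonon emission and absorption probabilities are accumulated into $`2W`$ and exponentiated into the DWF. The squared matrix elements are supplied here as placeholder numbers; in the full theory they follow from Eqs. (4) and (7) for a specific potential.

```python
import numpy as np

K_B = 8.617e-2  # Boltzmann constant in meV/K

def bose(hw, T_s):
    """Bose-Einstein occupation of a phonon of energy hw (meV) at T_s (K)."""
    return 1.0 / np.expm1(hw / (K_B * T_s))

def dwf_eba(modes, T_s):
    """Eqs. (2)-(3): 2W = sum over modes of |V(+)|^2 (n+1) + |V(-)|^2 n,
    and DWF = exp(-2W).

    `modes` is a list of (hw_meV, v_plus_sq, v_minus_sq): phonon energy and
    the squared on-shell emission/absorption matrix elements, already summed
    over k_z (placeholders here, not computed from a potential).
    """
    two_w = sum(vp * (bose(hw, T_s) + 1.0) + vm * bose(hw, T_s)
                for hw, vp, vm in modes)
    return np.exp(-two_w), two_w

# Toy input: three representative modes with made-up coupling strengths.
toy = [(5.0, 0.08, 0.08), (12.0, 0.05, 0.05), (20.0, 0.02, 0.02)]
print(dwf_eba(toy, T_s=300.0))
```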
It is evident from Eqs. (3) and (4) that the DWF in atom-surface scattering will be sensitive to the form and variations of the dynamical atom-surface potential $`V(𝐫)`$ through its matrix elements $`V_{k_z,k_z^{},j}^{𝐊,𝐊^{},𝐐}`$. These potentials are neither known a priori nor readily available in analytical form but have to be determined from independent calculations using various approximate schemes, often yielding them only numerically. On the other hand, in the three dimensional numerical calculations it is often convenient to have analytical representations of both the static and dynamic He atom-surface interactions as this greatly simplifies the computing. This has stimulated the development of the various approximate analytical expressions for the potentials and their matrix elements as functions of the characteristic interaction parameters such as the strength and the range of the potential, typical cut-offs etc.
In the majority of theoretical interpretations of inelastic one-phonon HAS experiments on smooth metal surfaces the static projectile-surface interaction is conveniently represented by a sum of pair potentials $`v(𝐫-𝐫_𝐥)`$ acting between the He atom at $`𝐫`$ and substrate atoms at $`𝐫_𝐥`$:
$$U(𝝆,z)=\underset{𝐥}{\sum }v(𝐫-𝐫_𝐥),$$
(5)
and then averaged over the surface to yield the static projectile-surface interaction potential $`\overline{U}(z)`$.
The matrix elements of the dynamical interaction in the case of linear atom-phonon coupling acquire a simple form :
$$V_{k_z,k_z^{},j}^{𝐊,𝐊^{},𝐐}=\underset{𝐆,\kappa }{\sum }𝐮_\kappa (𝐐,j)𝐅_\kappa (𝐊-𝐊^{},k_z^{},k_z)\delta _{𝐊-𝐊^{},𝐐+𝐆},$$
(6)
where the sum ranges over all reciprocal lattice vectors $`𝐆`$ of the surface mesh and the positions $`𝐫_\kappa `$ of the crystal atoms in the surface unit cell. $`𝐮_\kappa (𝐐,j)`$ is the quantized displacement of the crystal atoms corresponding to the phonon mode characterized by the quantum numbers $`(𝐐,j)`$ and the polarization vector $`𝐞_\kappa (𝐐,j)`$. The matrix element of the force $`𝐅_\kappa (𝐐,k_z^{},k_z)`$ exerted on the projectile atom is expressed as :
$$𝐅_\kappa (𝐐,k_z^{},k_z)=e^{i\mathrm{𝐐𝐫}_\kappa }\left\langle \chi _{k_z^{}}\left|\left(i𝐐,\frac{\partial }{\partial z}\right)v_{vib}(𝐐,z)\right|\chi _{k_z}\right\rangle ,$$
(7)
where $`\chi _{k_z}`$ is the distorted wave describing the particle motion normal to the surface and $`v_{vib}(𝐐,z)`$ is a two-dimensional Fourier transform of the vibrating part $`v_{vib}(𝐫)`$ of the pair interaction $`v(𝐫)`$. As yet, there is no general consensus on which part of the total pair potential contributes to $`v_{vib}(𝐫)`$ and several models have been proposed in the literature.
Another important parameter characteristic of the various expressions for $`v(𝐐,z)`$, and thereby also of $`v_{vib}(𝐐,z)`$, is the cut-off wavevector (or wavevectors) $`Q_c`$ which gives an approximate upper bound of the lateral momentum transfer in the one-phonon exchange processes (Hoinkes-Armand effect ). In the case of exponential potentials of range $`1/\beta `$ the value of $`Q_c`$ is approximately given by :
$$Q_c=\sqrt{\frac{\beta }{z_t}}$$
(8)
where $`z_t`$ is the average value of the He atom turning point in the surface potential $`U(𝝆,z)`$. Numerical evaluation of the matrix elements (7) avoids the explicit introduction of $`Q_c`$ but the physical effect of the cut off in the space of lateral momentum transfer persists.
Expressions (2)-(8) provide a framework for a fully three dimensional calculation of the DWF and thereby enable a test of the adequacy of the various expressions for the interaction potentials employed in the derivation of the matrix elements (7).
The characteristics of the static He atom-Cu surface potentials have been extensively discussed in the literature . Quite generally, this interaction is repulsive at short distances and exhibits an attractive van der Waals (vdW) tail at large distances, with a shallow potential well (6-7 meV) whose minimum is located around 7 bohrs away from the last crystal plane. The dynamic He atom-surface potential in the case of linear atom-phonon coupling is obtained as a gradient of the vibrating part of the pair potential, as implicit in Eq. (7). However, there is no unanimous agreement as to which part of the total potential this gradient should involve, which is equivalent to the question of whether only the repulsive or both the repulsive and attractive components of the total potential vibrate. We shall calculate the force matrix elements (7) by using several different forms of $`\overline{U}(z)`$ and $`v_{vib}(𝐐,z)`$ employed recently in the interpretation of both single and multiphonon scattering data on He→Cu(001) collisions and then test their relevance by making a comparison with the experimental values of the DWF.
3. The effect of the potentials on the DWF
In our assessment of the effects of the various interaction potentials on the DWF pertinent to He→Cu(001) collisions we shall investigate all three possibilities of different forms of the static and dynamic interactions. Our point of departure will be the earlier calculated static He-Cu(001) potential , which was also in good agreement with empirically determined potentials and experimental fits . By requiring that the surface averaged sum of model pair interactions reproduces this potential as closely as possible we may in principle determine the characteristic potential parameters corresponding to $`\overline{U}(z)`$ and $`v_{vib}(𝐐,z)`$, which both derive from $`v(𝐫)`$. Then, by assuming various forms of $`v_{vib}(𝐐,z)`$ (originating either from only the repulsive or from both the repulsive and attractive components of $`v(𝐫)`$) we may be able to select the physically relevant $`\overline{U}(z)`$ and $`v_{vib}(𝐐,z)`$ after comparing the calculated and measured values of the DWF. To pursue this goal, but also to remain in correspondence with earlier treatments of the single- and multiphonon spectra , in obtaining the wavefunctions needed for the calculation of the matrix elements (7) we shall fit the $`z`$ dependence of the potential of Ref. to the following analytical expressions:
(i) Exponentially repulsive potential
$$\overline{U}(z)=U_{exp}(z)=U_0e^{-\beta z}$$
(9)
by requiring that the value and the slope of the two potentials at the He-atom turning points be the same. This implies the pair potential in the form $`v(𝐫)=v_0e^{-\beta r}`$ and renders $`U_0`$ and $`\beta `$ as functions of the incoming energy $`E_i`$ and angle of incidence $`\theta _i`$ of the projectile. This procedure also enables us to estimate the values of $`Q_c`$ using Eq. (8) because the latter can be most easily justified in the case of exponentially repulsive surface potentials . Here the whole potential is assumed to vibrate, i.e. $`v_{vib}(𝝆,z)=v(𝝆,z)`$, and the calculation outlined in Ref. leads to:
$$v_{vib}^{exp}(𝐐,z)=U_0e^{-\beta z}e^{-Q^2/2Q_c^2}$$
(10)
with $`Q_c`$ given by (8). The nontrivial component of the interaction matrix elements in (7), i.e. $`\left\langle \chi _{k_z^{}}^{exp}\left|\left(i𝐐,\frac{\partial }{\partial z}\right)v_{vib}^{exp}(𝐐,z)\right|\chi _{k_z}^{exp}\right\rangle `$, is then obtainable in analytical form .
(ii) The Morse potential
$$\overline{U}(z)=U^M(z)=U_{rep}^M(z)+U_{att}^M(z)=D(e^{-2\alpha (z-z_0)}-2e^{-\alpha (z-z_0)})$$
(11)
where the well depth $`D`$, position $`z_0`$ of its minimum and the range $`\alpha `$ are determined by requiring that for given $`E_i`$ and $`\theta _i`$ the potential (11) and the one computed in Ref. have the same values at the minimum and at the classical turning point. An alternative requirement that the values of the minima and the derivatives at the turning point coincide leads to practically the same fitted Morse potential. The pair potential which leads to (11) is of the form $`v(𝐫)=v_0(e^{-2\alpha r}-e^{-\alpha r})`$ and the different range of the repulsive and attractive components gives rise to a different $`Q`$-dependence of $`v(𝐐,z)`$. Here we assume that both components of the pair potential $`v(𝐫)`$ vibrate, which leads to
$$v_{vib}^M(𝐐,z)=D(e^{-2\alpha (z-z_0)}e^{-Q^2/2Q_c^2}-2e^{-\alpha (z-z_0)}e^{-Q^2/Q_c^2})$$
(12)
with $`Q_c=\sqrt{2\alpha /z_t}`$. Here the factor of 2 in the second term in the bracket on the RHS of (12) and the different cutoffs in the $`Q`$-space ($`Q_c`$ versus $`Q_c/\sqrt{2}`$) arise as a consequence of the different range of the two components of $`v(𝐫)`$. Again in this case the corresponding matrix elements $`\left\langle \chi _{k_z^{}}^M\left|\left(i𝐐,\frac{\partial }{\partial z}\right)v_{vib}^M(𝐐,z)\right|\chi _{k_z}^M\right\rangle `$ are available in analytical form .
(iii) The static potential $`\overline{U}(z)`$ is given by the Morse potential as in (ii) but only the repulsive component of $`v(𝐫)`$ is allowed to vibrate. This yields:
$$v_{vib}^{rep}(𝐐,z)=De^{-2\alpha (z-z_0)}e^{-Q^2/2Q_c^2},$$
(13)
and the corresponding matrix elements are also expressible in analytical form .
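For reference, the vibrating form factors (10), (12) and (13), together with the cut-off wavevector of Eq. (8), transcribe directly into code. The sketch below (Python) does just that; all parameter values are to be supplied by the fits described in the text, and the functions are not the full matrix-element machinery.

```python
import numpy as np

def q_cutoff(range_const, z_t):
    """Eq. (8): Q_c = sqrt(beta/z_t) for an exponential potential of range
    1/beta; for the Morse repulsion use range_const = 2*alpha, which gives
    the Q_c = sqrt(2*alpha/z_t) quoted in the text."""
    return np.sqrt(range_const / z_t)

def v_vib_exp(Q, z, U0, beta, Qc):
    """Eq. (10): fully vibrating exponentially repulsive potential."""
    return U0 * np.exp(-beta * z) * np.exp(-Q**2 / (2.0 * Qc**2))

def v_vib_morse_total(Q, z, D, alpha, z0, Qc):
    """Eq. (12): Morse potential with both components vibrating; note the
    sharper Q-space cut-off (Qc/sqrt(2)) of the longer-ranged attractive term."""
    rep = np.exp(-2.0 * alpha * (z - z0)) * np.exp(-Q**2 / (2.0 * Qc**2))
    att = 2.0 * np.exp(-alpha * (z - z0)) * np.exp(-Q**2 / Qc**2)
    return D * (rep - att)

def v_vib_morse_rep(Q, z, D, alpha, z0, Qc):
    """Eq. (13): only the repulsive Morse component vibrates."""
    return D * np.exp(-2.0 * alpha * (z - z0)) * np.exp(-Q**2 / (2.0 * Qc**2))
```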
The interaction potentials (10), (12) and (13) and the corresponding matrix elements should be representative enough to span the range of possible but physically different dynamical regimes of the projectile-surface encounter which the DWF can be sensitive to. The calculations were performed by using these potentials and the Debye model of surface phonons in Cu parametrized in terms of the surface Debye temperature $`\mathrm{\Theta }_D=290`$ K which is a mean over the surface projected modes and directions in the surface Brillouin zone. This effective value is a little higher than the value of 267 K reported in for the Rayleigh wave and the longitudinal resonance in $`\langle 100\rangle `$ and $`\langle 110\rangle `$ directions of the Cu(001) surface, and the value of 280 K found in as appropriate to the regime of multiphonon scattering from the same surface. The difference between the present and the other two temperatures arises because the latter were determined from fitting the strength of the projectile-surface coupling using a potential in which the effect of the attractive well was neglected.
Lattice vibrations of copper surfaces exhibit anharmonicity even at moderately high temperatures and in order to take this effect into account we have additionally corrected the surface phonon density of states for anharmonic effects as discussed in Refs. . The actual effect of anharmonicity on surface phonon dispersion within the two dimensional Brillouin zone pertinent to the Cu(001) surface was estimated from the data presented in Ref. .
In Fig. 1 we show a comparison of the results of calculations of the DWF for the three potentials described above, for $`E_i`$=63 meV and $`\theta _i=39^0`$ for which the shadowing effect in scattering from defects should not be very important. In addition to this we have also shown for illustration the DWF calculated by using Eq. (1) with the same $`\mathrm{\Theta }_D`$ as in other expressions. As is seen from Fig. 1, the Morse potential with both components vibrating gives an excellent agreement with the experimental data. The Morse potential with only the repulsive component vibrating and the adjusted exponentially repulsive potential do not produce results which would adequately describe the experiments for the given set of collision parameters, and neither does expression (1).
Fig. 2 is analogous to Fig. 1 in that it displays the same comparisons, but here for $`E_i=21`$ meV and $`\theta _i=31.8^0`$. The same general trends as observed in Fig. 1 persist for this set of collision parameters as well, although the agreement is not as good as at $`E_i=63`$ meV. Presumably this is due to the presence of diffuse elastic scattering which is more pronounced at lower collision energies but has not been accounted for in our model. The relatively good agreement between the measured DWF values and those calculated from Eq. (1) is a mere coincidence which does not occur at other collision energies and scattering angles (cf. Fig. 1).
The poor description of the DWF in terms of the adjusted exponential potential (designated by (10) above) in Figs. 1 and 2 signifies the importance of the presence of the potential well, and this becomes more apparent as the normal component of the projectile incoming energy is lowered. On the other hand, the large difference between the results for the same Morse potential but with different vibrating components arises from the smaller interaction matrix elements in the case of the total vibrating potential. This is mainly because the derivative of the complete potential changes sign at the well bottom, giving rise to cancellation effects in the scattering amplitude (7). Another point worth emphasizing is that the lateral momentum cut-off function associated with the attractive component of the total vibrating potential attenuates much faster than the one associated with the repulsive vibrating component, giving a correspondingly smaller contribution to (7). This simply reflects the fact that potentials of longer range give rise to smaller inelastic contributions for given energy and momentum transfer. The same trends regarding the three model potentials are also retrieved for other scattering angles at He incoming energies of 63 and 21 meV. Hence, in the scattering regime characterized by the present collision parameters, all these findings single out the total vibrating Morse potential (12), among the potentials (10), (12) and (13), as the best model potential underlying the physics of the DWF in the present collision system.
In Figs. 3 and 4 we present further comparisons of the available experimental values of the DWF with the ones calculated using the potential (12). Here we want to reiterate that in these calculations we have not introduced any free parameters which could be adjusted so as to obtain a good fit to the experimental data; all the parameters typical of the potential and the phonon density of states have been either predetermined or obtained within the present model.
Fig. 3 displays the results of the calculations for incident energy $`E_i=63`$ meV and the angles of incidence used in the experiments . The agreement between the calculated and experimental results is very good in the whole range of experimental $`\theta _i`$ in which also the earlier empirical fits were successful . An exception occurs only at $`\theta _i=71.6^0`$, for which, due to the low normal component of the incident energy, the contribution to scattering by surface defects (steps, kinks, adatoms, vacancies etc.) may become important. In this situation two effects come into play. First, the diffuse scattering by static defects reduces the DWF at incident angles nearer to grazing because of the shadowing effect and, second, the enhanced anharmonicity of vibrations associated with defects would further suppress the magnitude of the DWF.
The general trend of reduction of the DWF with the angle of incidence closer to the normal signifies the strongest coupling of the He atom to perpendicular surface vibrations. A deviation of the experimental results from this general behaviour for $`\theta _i=19^0`$ and 400 K $`\le T_s\le `$ 800 K is probably an artifact connected with the early measurements .
Fig. 4 displays the experimental and theoretical DWF values for incident energy $`E_i=21`$ meV. Although the trends observed here are the same as in Fig. 3, due to the lower incoming energy the deviations between the two sets of data become apparent even at lower incoming angles. For $`\theta _i=73.5^0`$ the corresponding $`E_i^z`$ is already so small (1.7 meV) that the scattering from defects may give rise to contributions to the Debye-Waller exponent $`2W`$ which are larger than those induced by phonons. As our model does not encompass this type of effect, the difference between the experimental and calculated values of the DWF for such collision parameters is not surprising.
A crucial element in obtaining a good agreement between the calculated and experimental values of the DWF without invoking fitting parameters was a realistic form of the He-surface interaction potential with both components allowed to vibrate, and the fact that the attractive component is much more sharply cut off in the $`Q`$-space due to its longer range. Thereby the attractive component gives only a correction to the one-phonon matrix elements (7), whose magnitude is dominantly determined by the repulsive component of the interaction.
Quite generally, the matrix elements (7) are so sensitive to even small variations of $`Q_c`$ that the latter may be taken as a fit parameter which can be adjusted so as to produce a good agreement between the calculated and measured values of the DWF for any of the potentials discussed under (i)-(iii) above. Fig. 5 shows a comparison of the magnitudes of such best fit $`Q_c`$ values together with the values available in the literature. However, a physical justification of such best fit $`Q_c`$’s may not be easy in all the cases considered because some of them deviate considerably from the values predicted by the expression $`\sqrt{2\alpha /z_t}`$, here found to provide a consistent description of the experimental results for the DWF in He→Cu(001) collisions.
4. Conclusions
In this work we have studied the effect of the characteristics of the various model interaction potentials on the magnitude of the Debye-Waller factor (DWF) in He→Cu(001) scattering. For this collision system the experimental data on the DWF , single phonon and multiphonon spectra are available for a wide range of collision parameters (incident energy and angle) and surface temperature, all of which facilitates a comparison between the theoretical predictions and experimental results. In our theoretical description we have employed the earlier developed realistic, fully three-dimensional quantum scattering model to calculate the DWF to all orders in the projectile-phonon coupling , here assumed to be linear in the phonon displacements. The calculations were carried out within the so-called exponentiated Born approximation (EBA) which takes into account uncorrelated multiple phonon exchange processes, shown earlier to give a dominant contribution to the scattering spectra in the collision regime studied . By considering several types of He-surface interaction potentials we have shown that the model potential which correctly reproduces the gross features of the earlier calculated static He-Cu(001) interaction , and both of whose components, repulsive and attractive, are assumed to vibrate, produces results which agree nicely with the experimental data without invoking any fitting parameters. Important elements in obtaining this agreement were the difference in the cut-off wavevector $`Q_c`$ characterizing the longer-range attractive and shorter-range repulsive components of the vibrating potential, and the variation of the phonon dispersion with temperature due to the anharmonicity of surface vibrations. Other model potentials which have also been frequently used in the interpretation of the DWF and the single phonon data, like the vibrating exponentially repulsive potential, fail to reproduce the experimental values of the DWF in the He→Cu(001) collision system. This is mainly because they do not take into account the attractive component of either the static or dynamic potentials, which becomes important at lower incoming energies. On the other hand, at very low normal incoming energies the scattering from defects starts to affect the magnitude of the DWF. In this limit the agreement between the measured values and those predicted by the present model, which does not account for scattering by defects, is lost.
We have further shown that it is in principle possible to reproduce the measured DWF data with all the model potentials considered by treating the characteristic cut-off wavevector as an adjustable parameter for each set of collision parameters. Thereby one can determine the best fit $`Q_c`$ values as a function of the normal component of the projectile incident energy for the studied model potentials and correlate them with the values given in the literature . The best fit $`Q_c`$ values for the potential with both vibrating components were found to be practically identical to those of our no-fit calculations, thereby confirming the consistency of this potential and the validity of the results obtained by using it. They are also found to be very close to the majority of the values cited in the literature in connection with the theoretical interpretation of both single- and multiphonon scattering spectra, which gives additional credibility to these and the present calculations.
Acknowledgments:
One of the authors (B.G.) would like to acknowledge the Senior Associateship Award which enabled his stay at the International Centre for Theoretical Physics in Trieste. The work in Zagreb has been supported in part by the National Science Foundation grant JF-133.
Fig. 1. Debye-Waller factor as function of the substrate temperature for He incoming energy $`E_i=63`$ meV and incident angle $`\theta _i=39^0`$ calculated with three different interaction potentials. Full squares denote experimental datapoints and the long dashed, full and dashed-dotted lines denote the values calculated with the adjusted exponentially repulsive potential (10), total vibrating Morse potential (12) and repulsive vibrating Morse potential (13), respectively. The short dashed line denotes the DWF calculated from Eq. (1).
Fig. 2. Same as in Fig. 1 but for $`E_i=21`$ meV and $`\theta _i=31.8^0`$.
Fig. 3. Comparison of calculated (full lines) and experimental values of the DWF for $`E_i=63`$ meV and angles of incidence 19<sup>0</sup> (squares), 39<sup>0</sup> (circles), 51<sup>0</sup> (triangles), 61.7<sup>0</sup> (inverted triangles) and 71<sup>0</sup> (diamonds).
Fig. 4. Same as in Fig. 3 for $`E_i`$=21 meV and incident angles 31.8<sup>0</sup> (squares), 55.5<sup>0</sup> (circles) and 73.5<sup>0</sup> (triangles).
Fig. 5. Values of the cut-off wavevector $`Q_c`$ as a function of the normal component of the He atom incident energy as determined from various procedures. Dashed line: $`Q_c=(2\alpha /z_t)^{1/2}`$ where $`\alpha `$ is the range parameter of the Morse potential. This form of $`Q_c`$ has been used in the calculations leading to Figs. 3 and 4. Full squares: best fit $`Q_c`$ values for the vibrating repulsive component of the Morse potential. Full triangles: best fit $`Q_c`$ values for the adjusted exponential potential. Full circles denote the best fit $`Q_c`$ values for the vibrating Morse potential and they practically coincide with the dashed line for $`E_i^z\ge 15`$ meV. Open symbols denote $`Q_c`$ values pertinent to exponentially repulsive potentials given in the literature: open squares are from , open circle from and open triangle from .
Figure 1: Log-log plot of the $`E/U`$ function for the multi-centered $`AdS_5\times S^5`$. The solid line is for $`K/N=10^{-4}`$. The dashed line is for $`K/N=0`$.
## Acknowledgements
I would like to thank Steve Gubser, Sunny Itzhaki, Finn Larsen, and Joe Polchinski for illuminating discussions. This work was supported in part by the National Science Foundation under Grant No. PHY94-07194.
# WFPC2 OBSERVATIONS OF THE URSA MINOR DWARF SPHEROIDAL GALAXY Based on observations made with the NASA/ESA Hubble Space Telescope, obtained from the data archive at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc. under NASA contract NAS5-26555.
## 1 INTRODUCTION
The Ursa Minor (UMi) dwarf spheroidal (dSph) galaxy was independently discovered by Wilson (wilson1955 (1955)) and Hubble. Ursa Minor is the second closest satellite galaxy of the Milky Way at a distance of $`69\pm 4`$ kpc ($`\sim `$220,000 light years) from the Sun. Color-magnitude diagrams of the brightest stars of this faint ($`M_V\simeq -8.9`$ mag: Kleyna et al. kleyna\_etal\_1998 (1998)) small ($`r_{\mathrm{tidal}}=628\pm 74`$ pc: Irwin & Hatzidimitriou irha1995 (1995)) galaxy feature a strong blue horizontal branch (e.g., van Agt vanagt1967 (1967); Cudworth, Olszewski, & Schommer cuolsc1986 (1986); Kleyna, et al. kleyna\_etal\_1998 (1998)) — a unique horizontal branch morphology amongst the nine Galactic dSph satellite galaxies. The deep $`BV`$ CCD observations of Olszewski & Aaronson (olaa1985 (1985)) indicate that Ursa Minor has an age and abundance very similar to those of the ancient metal-poor Galactic globular cluster M92 (NGC 6341). Ursa Minor may be the only dwarf galaxy in the Local Group which is composed exclusively of stars older than 10 Gyr (Mateo mateo1998 (1998)).
In this work we investigate the star formation history of the Ursa Minor dwarf spheroidal galaxy using archival Hubble Space Telescope WFPC2 data. Section 2 is a discussion of the observations and photometric reductions. We present and compare our results with previous work in Sec. 3. The paper is summarized in Sec. 4. Appendix A describes a new robust algorithm for the computation of fiducial sequences from high-quality stellar photometry.
## 2 OBSERVATIONS AND PHOTOMETRY
The Ursa Minor dwarf spheroidal galaxy was observed with the Hubble Space Telescope (HST) Wide Field Planetary Camera 2 (WFPC2) on 1995 July 4 through the F555W ($`\sim V`$) and F814W ($`\sim I`$) filters. The WFPC2 WFALL aperture (Biretta et al. biet1996 (1996)) was centered on the target position given in Table 1 and shown in Figure 1. Two low-gain observations were obtained in each filter. These observations were secured as part of the HST Cycle 5 program GTO/WFC 6282 (PI: Westphal) and were placed in the public data archive at the Space Telescope Science Institute on 1996 July 5. The datasets were recalibrated at the Canadian Astronomy Data Centre and retrieved electronically by us using a guest account which was kindly established for KJM.
These WFPC2 observations contain several types of image defects. Figure 2 shows a negative mosaic image of the U2PB0103T dataset. Besides exhibiting normal cosmic ray damage, this 1100-s F555W exposure also shows (1) a satellite trail on the WF4 CCD, (2) an elevated background near the inner corner of the PC1 CCD, and (3) shadows on all four CCDs. The elevated background near the inner corner of the PC1 CCD is probably due to stray light patterns from a bright star just outside of the PC1 field-of-view (cf. Fig. 7.1.a of Biretta, Ritchie, & Rudloff biet1995 (1995)). The shadows seen on all four CCDs are indicative of a serious problem with these observations because the shadows are generally seen against an elevated background throughout the entire WFPC2 field-of-view. This phenomenon is due to light from the bright sun-lit Earth reflecting off the optical telescope assembly (OTA) baffles and the secondary mirror supports (“spider”) and into the WFPC2 instrument. Elevated backgrounds occur when the angle between the Earth and the OTA axis is $`<`$25 degrees (cf. Fig. 11.2.a of Biretta et al. biet1995 (1995)). The background “sky” brightened significantly during the course of these observations (see Figure 3), indicating that the Hubble Space Telescope experienced earthrise during these WFPC2 observations of the Ursa Minor dwarf spheroidal galaxy.
The experimental design of these WFPC2 observations was nearly identical to that of the Carina dwarf spheroidal program GTO/WFC 5637 (PI: Westphal) which was analyzed by Mighell (mighell1997 (1997)). We therefore planned to follow Mighell’s Carina photometric reduction procedures in this investigation of the Ursa Minor dwarf spheroidal. Unfortunately, the standard cosmic-ray removal procedure failed spectacularly because earthrise caused the background sky level to change rapidly. We had to improvise more complicated analysis techniques than the ones used by Mighell in his Carina study in order to obtain stellar photometry of comparable quality.
We found stellar candidates on cosmic-ray cleaned images which were suitable for the detection of point sources but unsuitable for further photometric analysis. The cosmic rays were removed by using the crrej task of the iraf stsdas.hst\_calib.wfpc package with the sky subtraction parameter set to sky=mode instead of the default value of sky=none — this unusual option was required because the sky levels of the observations did not scale with exposure time. We used crrej to make a clean F555W observation of 2100 s from the U2PB0102T and U2PB0103T datasets and a clean F814W observation of 2300 s from the U2PB0105T and U2PB0106T datasets. Figure 4 shows that this procedure repaired most of the cosmic-ray damage seen in Figure 2. This procedure is clearly not perfect since traces of the satellite trail are still visible. The sky=mode option produces cosmic-ray cleaned images with modal pixel values near zero. Many background pixels will thus have negative values, which implies negative background-flux values. Such physically unrealistic background data values are quite rightly rejected by many standard CCD stellar photometry packages.
Unsharp mask images of the clean F555W and F814W observations were made using the lpd (low-pass difference) digital filter which was designed by Mighell to optimize the detection of faint stars in HST WF/PC and WFPC2 images (Appendix A of Mighell & Rich miri1995 (1995), and references therein). The F555W unsharp mask image (see Figure 5) and the F814W unsharp mask image were then added together to create a master unsharp mask image of each WF CCD. A simple peak detector algorithm was then used on the master unsharp mask images to create a list of point source candidates with coordinates $`60\le x\le 790`$ and $`60\le y\le 790`$ on each WF CCD. This allowed the use of almost the entire field-of-view of each WF camera while avoiding edge-effects in the outer regions. We only present the analysis of data obtained from the WF cameras in this paper.
The data were analyzed with the ccdcap<sup>1</sup><sup>1</sup>1 IRAF implementations of ccdcap are now available over the Wide World Web at the following site: http://www.noao.edu/staff/mighell/ccdcap/ digital circular aperture photometry code developed by Mighell to analyze HST WFPC2 observations (Mighell et al. mighell1997 (1997), and references therein). A fixed aperture with a radius of 2.5 pixels was used for all stars on the WF CCDs. The local background level was determined from a robust estimate of the mean intensity value of all pixels between 2.5 and 6.0 pixels from the center of the circular stellar aperture. Point source candidates were rejected if either (1) the measured signal-to-noise ratio of either instrumental magnitude was SNR$`<`$10; or (2) the center of the aperture \[which was allowed to move in order to maximize the SNR\] changed by more than 1.8 pixels from its detected position on the master unsharp mask. The Charge Transfer Effect was removed from the instrumental magnitudes by using a 4% uniform wedge along the Y-axis of each CCD as described in Holtzman et al. (1995b ). We used the standard WFPC2 magnitude system (Holtzman et al. 1995b ) which is defined using 1″ diameter apertures. We measured the stars with a smaller aperture (0.5″ diameter) in order to optimize the measured stellar signal-to-noise ratios; usage of 1″ diameter apertures resulted in significantly poorer photometry for the faint stars. The instrumental magnitudes, $`v_r`$ and $`i_r`$, were transformed to Johnson $`V`$ and Cousins $`I`$ magnitudes using the following equations $`V=v_r+\mathrm{\Delta }_r+\delta _r-[0.052\pm 0.007](V-I)+[0.027\pm 0.002](V-I)^2+[21.725\pm 0.005]`$ and $`I=i_r+\mathrm{\Delta }_r+\delta _r-[0.062\pm 0.009](V-I)+[0.025\pm 0.002](V-I)^2+[20.839\pm 0.006]`$ where an instrumental magnitude of zero is defined as one DN s<sup>-1</sup> at the high gain state ($`\sim `$14 e<sup>-</sup> DN<sup>-1</sup> ). The constants come from Table 7 of Holtzman et al. (1995b ). The values for average aperture corrections<sup>2</sup><sup>2</sup>2 Observed WFPC2 point spread functions (PSFs) vary significantly with wavelength, field position, and time (Holtzman et al. 1995a ). There were not enough bright isolated stars in these WFPC2 observations to adequately measure the variation of the point spread function across each WF CCD using the observations themselves. We measured artificial point spread functions synthesized by the tiny tim version 4.4 software package (Krist kr1993 (1993), Krist & Hook krho1997 (1997)) to determine the aperture corrections, $`\mathrm{\Delta }_r`$, required to convert instrumental magnitudes measured with an aperture of radius 2.5 pixels to a standard aperture of radius 5.0 pixels (1″ diameter). A catalog of 289 synthetic point spread functions of a G-type star was created with a $`17\times 17`$ square grid for each filter (F555W and F814W) and CCD (WF2, WF3, and WF4). The spatial resolution of one synthetic PSF every 50 pixels in $`x`$ and $`y`$ allowed the determination of aperture corrections for any star in the entire WFPC2 field-of-view with a spatial resolution of $`\sim `$35 pixels.
, $`\mathrm{\Delta }_r`$, for each filter/CCD combination are listed in Table 2. The zero-order (“breathing”) aperture corrections<sup>3</sup><sup>3</sup>3 Spacecraft jitter during exposures and small focus changes caused by the HST expanding and contracting (“breathing”) once every orbit are another two important causes of variability in observed WFPC2 point spread functions. These temporal variations of WFPC2 PSFs can cause small, but significant, systematic offsets in the photometric zeropoints when small apertures are used. Fortunately, these systematic offsets can be easily calibrated away by simply measuring bright isolated stars on each CCD twice: once with the small aperture and again with a larger aperture. The robust mean magnitude difference between the large and small apertures is then the zero-order aperture correction, $`\delta _r`$, for the small aperture which, by definition, can be positive or negative. Zero-order aperture corrections are generally small for long exposures; however, they can be quite large for short exposures that were obtained while the WFPC2 was slightly out of focus (by a few microns) due to the expansion/contraction of the HST during its normal breathing cycle. for these observations ($`\delta _r`$ : see Table 3) were computed using a large aperture with a radius of 3.5 pixels and a background annulus of $`3.5\le r_{\mathrm{sky}}\le 7.0`$ pixels.
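Since the transformation equations couple $`V`$ and $`I`$ through the $`(V-I)`$ color, they have to be solved iteratively for each star. A minimal sketch (Python) is given below, assuming the coefficients quoted above (with their negative first-order color terms) and taking instrumental magnitudes as input; the helper name and calling convention are illustrative, not part of the ccdcap package.

```python
def instrumental_to_vi(v_r, i_r, corr_v, corr_i, n_iter=20):
    """Solve the coupled V and I transformations by fixed-point iteration.

    v_r, i_r : instrumental magnitudes (one DN/s at high gain = 0 mag);
    corr_v, corr_i : the summed aperture corrections Delta_r + delta_r
    for F555W and F814W.  The color terms are small, so the iteration
    converges in a few passes.
    """
    v = v_r + corr_v + 21.725   # zeroth-order guess: ignore the color terms
    i = i_r + corr_i + 20.839
    for _ in range(n_iter):
        c = v - i
        v = v_r + corr_v - 0.052 * c + 0.027 * c**2 + 21.725
        i = i_r + corr_i - 0.062 * c + 0.025 * c**2 + 20.839
    return v, i
```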
Two ($`V,I`$) dataset pairs, (U2PB0102T, U2PB0105T) and (U2PB0103T, U2PB0106T), were reduced independently using ccdcap and the resulting instrumental magnitudes were transformed to Johnson $`V`$ and Cousins $`I`$ magnitudes. We determined which objects probably had acceptable photometry from these independent measurements. The $`V`$ measurements of a star, $`V_1`$ \[$`\equiv `$ (U2PB0102T, U2PB0105T) \] and $`V_2`$ \[$`\equiv `$ (U2PB0103T, U2PB0106T) \] , with photometric errors, $`\sigma _{V_1}`$ and $`\sigma _{V_2}`$, were determined to be acceptable if the following condition was true: $`|V_1-V_2|\le \mathrm{max}(\left[3\sqrt{2}\mathrm{min}(\sigma _{V_1},\sigma _{V_2})\right],0.06)`$ . If the condition was satisfied, we then adopted the quantity, $`\mathrm{max}(V_1,V_2)-2.5\mathrm{log}\left[(1+10^{0.4|V_1-V_2|})/2\right]`$, as the $`V`$ magnitude of the star and adopted the quantity, $`\sqrt{(\sigma _{V_1}^2+\sigma _{V_2}^2)/2}`$, as a conservative estimate of its $`V`$ photometric error, $`\sigma _V`$. We assumed that cosmic rays would be the primary cause of poor photometry and therefore adopted the photometry of the faintest measurement of the star whenever the acceptability condition failed. The adopted $`I`$ magnitude and $`I`$ photometric error, $`\sigma _I`$, were determined from both $`I`$ measurements, $`I_1`$ \[$`\equiv `$ (U2PB0102T, U2PB0105T) \] and $`I_2`$ \[$`\equiv `$ (U2PB0103T, U2PB0106T) \] , in an analogous manner. Figure 6 shows the outlier measurements we have identified in this manner. Figure 7 gives our preliminary $`V`$ versus $`V-I`$ color-magnitude diagram (CMD) of the observed stellar field in the Ursa Minor dwarf spheroidal galaxy.
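The pairing logic just described reduces to a few lines of code. The sketch below (Python) transcribes the acceptability test and the flux-averaged adopted magnitude; the 0.06 mag floor and the $`3\sqrt{2}`$ factor are the values quoted above, a base-10 logarithm is assumed, and the function name is illustrative.

```python
import math

def combine_pair(m1, s1, m2, s2, floor=0.06):
    """Combine two independent magnitude measurements as described above."""
    d = abs(m1 - m2)
    if d <= max(3.0 * math.sqrt(2.0) * min(s1, s2), floor):
        # Acceptable: adopt the flux-averaged magnitude of the two epochs.
        m = max(m1, m2) - 2.5 * math.log10((1.0 + 10.0**(0.4 * d)) / 2.0)
        s = math.sqrt((s1**2 + s2**2) / 2.0)
    else:
        # Discrepant: assume a cosmic-ray hit and keep the fainter measurement.
        m, s = (m1, s1) if m1 > m2 else (m2, s2)
    return m, s
```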
We present our WFPC2 stellar photometry of 696 stars in the central region of the Ursa Minor dwarf spheroidal galaxy in Table 4. The first column gives the identification (ID) of the star. The second and third columns give the $`V`$ magnitude and its rms ($`1\sigma `$) photometric error $`\sigma _V`$. Likewise, the fourth and fifth columns give the $`V-I`$ color and its rms ($`1\sigma `$) photometric error $`\sigma _{(V-I)}`$. The sixth column gives the quality flag value of the star. We only present photometry of stars with signal-to-noise ratios SNR$`\ge `$10 in both the F555W and F814W filters.
## 3 DISCUSSION
### 3.1 Color-Magnitude Diagram
The $`V`$ versus $`V-I`$ color-magnitude diagram of the observed stellar field in Ursa Minor is shown in Figure 8. This CMD features a sparsely populated blue horizontal branch, a steep thin red giant branch, and a narrow subgiant branch. The main sequence reaches $`\sim `$2 magnitudes below the turnoff of the main stellar population of the Ursa Minor galaxy.
Figure 8 shows a small amount of contamination by foreground stars in our Galaxy. Ratnatunga & Bahcall (ratnatunga\_bahcall\_1985 (1985)) used the Bahcall and Soneira Galaxy model (Bahcall & Soneira bahcall\_soneira\_1980 (1980), bahcall\_soneira\_1984 (1984); Bahcall et al. bahcall\_etal\_1985 (1985)) to predict that 2.3 foreground stars brighter than $`V=25`$ mag would be found in one square arcmin in the direction of Ursa Minor. Our observation surveys 4.44 arcmin<sup>2</sup> of Ursa Minor and we would therefore expect, from the prediction of Ratnatunga and Bahcall, to find $`\sim `$10 foreground stars brighter than $`V=25`$ mag in our color-magnitude diagrams. A direct check with observations is provided by Figure 2 of Kleyna et al. (kleyna\_etal\_1998 (1998)) which indicates that while foreground contamination towards Ursa Minor is small it cannot be ignored. The 4 bright blue stars near $`V\sim 20`$ mag with colors $`(V-I)<0.3`$ mag will be shown below to be probable Ursa Minor horizontal branch stars. There are a few fainter blue stars seen in Fig. 8 which are within $`<`$2 magnitudes of the main-sequence turnoff of the main Ursa Minor stellar population. Determining whether these “blue stragglers” are actually members of the Ursa Minor galaxy or are simply Galactic foreground stars is beyond the scope of this paper.
### 3.2 Fiducial Sequence
The robust median $`V-I`$ color as a function of $`V`$ magnitude of the Ursa Minor main sequence, subgiant branch, and base of the red giant branch ($`21.5\le V\le 25.0`$ mag) is listed in Table 5 and shown in Figure 9. The robust median $`V-I`$ color of a given $`\mathrm{\Delta }V=0.2`$ mag data subsample was determined after $`>`$2.4$`\sigma `$ outliers were iteratively rejected in 5 iterations of a robust fiducial sequence algorithm (see Appendix A) recently developed by Mighell. The data in Table 5 are given in intervals of $`\mathrm{\Delta }V=0.1`$ mag. Since a sampling of $`\mathrm{\Delta }V=0.2`$ mag was used to determine the robust median $`V-I`$ colors, we see that there are actually two realizations of the Ursa Minor fiducial sequence given in Table 5, since only every other row in that table represents an independent measurement of the true Ursa Minor fiducial sequence. The first fiducial sequence is given at $`V_{\mathrm{UMi}}=21.6,21.8,\ldots ,24.8`$ mag in Table 5 and is shown with open diamonds in Figure 9.
The second fiducial sequence is given at $`V_{\mathrm{UMi}}=21.7,21.9,\mathrm{},24.9`$ mag and is plotted with open squares.
We compare the Ursa Minor fiducial sequences (Table 5) with those of the ancient metal-poor Galactic globular cluster M92 (Table A1 in Appendix A). We obtain an excellent fit of the Ursa Minor fiducial sequences to the M92 fiducial sequences when we make the M92 fiducial sequence fainter by $`\mathrm{\Delta }V=4.60`$ mag and add a small color offset of $`\mathrm{\Delta }(V-I)=0.01`$ mag. We show below that the fit is so good that these fiducials are statistically equivalent over a 3 magnitude range ($`22.0\le V_{\mathrm{UMi}}<25.0`$ mag) from the base of the red giant branch of Ursa Minor to $`\sim `$1.7 magnitudes below its main-sequence turnoff. This suggests that the ancient metal-poor Galactic globular cluster M92 is an excellent stellar population analog for the median stellar population of the Ursa Minor dwarf spheroidal galaxy. It would not be surprising if the M92 analogy weakens sometime in the future when deeper observations with smaller photometric scatter are analyzed, especially if those observations survey a significantly larger fraction of the Ursa Minor galaxy.
### 3.3 $`\mathrm{\Delta }V_{\mathrm{UMi}-\mathrm{M92}}`$ and $`\mathrm{\Delta }(V-I)_{\mathrm{UMi}-\mathrm{M92}}`$
The $`V`$ magnitude offset, $`\mathrm{\Delta }V_{\mathrm{UMi}-\mathrm{M92}}`$, and the color offset, $`\mathrm{\Delta }(V-I)_{\mathrm{UMi}-\mathrm{M92}}`$, between the Ursa Minor dwarf spheroidal galaxy and the Galactic globular cluster M92 may be determined by comparing our fiducial sequences of Ursa Minor (Table 5) and M92 (Table A1). The parameter space may be investigated with the following chi-square statistics:
$$\chi _{22.2}^2\equiv \sum _{j=1}^{14}\frac{\left[(V-I)_{\mathrm{UMi}}(V_j)-(V-I)_{\mathrm{M92}}^{\prime }(V_j-\mathrm{\Delta }V_{\mathrm{UMi}-\mathrm{M92}})-\mathrm{\Delta }(V-I)_{\mathrm{UMi}-\mathrm{M92}}\right]^2}{\left[\sigma _{\mathrm{UMi}}(V_j)\right]^2+\left[\sigma _{\mathrm{M92}}^{\prime }(V_j-\mathrm{\Delta }V_{\mathrm{UMi}-\mathrm{M92}})\right]^2}$$
(1)
where $`V_j\equiv 22.0+0.2j`$ mag and
$$\chi _{22.1}^2\equiv \sum _{k=1}^{15}\frac{\left[(V-I)_{\mathrm{UMi}}(V_k)-(V-I)_{\mathrm{M92}}^{\prime }(V_k-\mathrm{\Delta }V_{\mathrm{UMi}-\mathrm{M92}})-\mathrm{\Delta }(V-I)_{\mathrm{UMi}-\mathrm{M92}}\right]^2}{\left[\sigma _{\mathrm{UMi}}(V_k)\right]^2+\left[\sigma _{\mathrm{M92}}^{\prime }(V_k-\mathrm{\Delta }V_{\mathrm{UMi}-\mathrm{M92}})\right]^2}$$
(2)
where $`V_k\equiv 21.9+0.2k`$ mag. The color errors are approximated as
$$\sigma \approx \frac{1.25\,\mathrm{adev}}{\sqrt{n}}$$
(3)
where adev is the average deviation (column 3 of Tables 5 and A1) and $`n`$ is the number of stars in the subsample (column 6 of Tables 5 and A1). We use cubic spline interpolations wherever the M92 fiducial sequence (Table A1) does not have a tabulated value at the shifted magnitudes $`V_j-\mathrm{\Delta }V_{\mathrm{UMi}-\mathrm{M92}}`$ mag and $`V_k-\mathrm{\Delta }V_{\mathrm{UMi}-\mathrm{M92}}`$ mag. The use of cubic spline interpolations is denoted by prime superscripts on the appropriate terms in the definitions of these chi-square statistics.
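The grid search over the two offsets is straightforward to express in code. The following is a minimal sketch rather than the code actually used for this analysis: the array and function names are ours, and the fiducial-sequence data (magnitudes, median colors, average deviations, and star counts from Tables 5 and A1) are assumed to be already loaded as NumPy arrays.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def color_sigma(adev, n):
    """Color error of a robust median color: sigma ~ 1.25 adev / sqrt(n) (Eq. 3)."""
    return 1.25 * np.asarray(adev) / np.sqrt(n)

def chi2_offsets(dV, dVI, V_umi, VI_umi, sig_umi, V_m92, VI_m92, sig_m92):
    """Chi-square of Eqs. (1)-(2) for a trial pair of offsets (dV, dVI).

    Cubic splines supply the M92 fiducial color and its error at the
    shifted magnitudes V_j - dV (the primed quantities in the text).
    """
    vi_m92 = CubicSpline(V_m92, VI_m92)
    sg_m92 = CubicSpline(V_m92, sig_m92)
    resid = VI_umi - vi_m92(V_umi - dV) - dVI
    var = sig_umi**2 + sg_m92(V_umi - dV)**2
    return np.sum(resid**2 / var)

# Grids spanning the offset ranges explored in the text.
dV_grid = np.arange(4.400, 4.8001, 0.025)    # mag
dVI_grid = np.arange(-0.010, 0.0301, 0.005)  # mag
```

Evaluating `chi2_offsets` on every grid point and dividing by the number of fitted points (14 or 15) reproduces the kind of reduced chi-square grids discussed in the following paragraphs.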
We now use these chi-square statistics to determine the $`V`$ magnitude and $`V-I`$ color offsets between Ursa Minor and M92. Two accompanying tables give the reduced chi-square values $`\chi _{22.2}^2/14`$ and $`\chi _{22.1}^2/15`$, respectively, for $`V`$ magnitude offsets of $`4.400\le \mathrm{\Delta }V_{\mathrm{UMi}-\mathrm{M92}}\le 4.800`$ mag and color offsets of $`-0.010\le \mathrm{\Delta }(V-I)_{\mathrm{UMi}-\mathrm{M92}}\le 0.030`$ mag. The residuals of individual fits (see footnotes a–i in those tables) are shown in an accompanying residual figure.
The tabulated fits indicate that a color offset of $`\mathrm{\Delta }(V-I)_{\mathrm{UMi}-\mathrm{M92}}=+0.010`$ mag always produces the lowest reduced chi-square value at any given $`V`$ magnitude offset. This is clearly seen in the residual figure: the residuals become systematically more negative as the color offset is increased from $`-0.01`$ to $`+0.03`$ mag, and the residual scatter is minimized (the best fits occur) at $`+0.01`$ mag. We have thus established that the color offset between Ursa Minor and M92 is approximately $`+0.01`$ mag.
The top-center panel of the residual figure shows that a $`V`$ magnitude offset of $`\mathrm{\Delta }V_{\mathrm{UMi}-\mathrm{M92}}=4.5`$ mag combined with a $`V-I`$ color offset of $`\mathrm{\Delta }(V-I)_{\mathrm{UMi}-\mathrm{M92}}=+0.010`$ mag gives systematically large positive residuals in the range $`22\le V<23`$ mag. This poor fit in the subgiant-branch region of Ursa Minor indicates that the UMi SGB is systematically fainter than the shifted M92 SGB. We have thus established a lower limit on the $`V`$ magnitude offset between Ursa Minor and M92: $`\mathrm{\Delta }V_{\mathrm{UMi}-\mathrm{M92}}>4.5`$ mag.
The bottom-center panel of the residual figure shows that using offsets of $`\mathrm{\Delta }V_{\mathrm{UMi}-\mathrm{M92}}=4.7`$ mag and $`\mathrm{\Delta }(V-I)_{\mathrm{UMi}-\mathrm{M92}}=+0.010`$ mag gives systematically large negative residuals in the range $`22\le V<23`$ mag. This poor fit in the subgiant-branch region of Ursa Minor indicates that the UMi SGB is systematically brighter than the shifted M92 SGB. We have thus established an upper limit on the $`V`$ magnitude offset between Ursa Minor and M92: $`\mathrm{\Delta }V_{\mathrm{UMi}-\mathrm{M92}}<4.7`$ mag.
The 90%, 95%, and 99% confidence limits of the $`\chi _{22.2}^2/14`$ fits (1.50, 1.69, and 2.08, respectively) are shown in an accompanying contour figure. The tabulated fits show that a $`V`$ magnitude offset of $`\mathrm{\Delta }V_{\mathrm{UMi}-\mathrm{M92}}=4.575`$ mag produces the smallest reduced chi-square value for any given $`V-I`$ color offset; correspondingly, the confidence contours are widest at that $`V`$ magnitude offset.
The 90%, 95%, and 99% confidence limits of the $`\chi _{22.1}^2/15`$ fits (1.48, 1.66, and 2.04, respectively) are shown in the same way. Here the tabulated fits show that a $`V`$ magnitude offset of $`\mathrm{\Delta }V_{\mathrm{UMi}-\mathrm{M92}}=4.625`$ mag produces the smallest reduced chi-square value for any given $`V-I`$ color offset, and again the confidence contours are widest at that $`V`$ magnitude offset.
A conservative analysis of the confidence contours yields a $`V`$ magnitude offset between the Ursa Minor dSph galaxy and the Galactic globular cluster M92 of $`\mathrm{\Delta }V_{\mathrm{UMi}-\mathrm{M92}}=4.60\pm 0.03`$ mag, with 99% confidence limits of $`4.500\le \mathrm{\Delta }V_{\mathrm{UMi}-\mathrm{M92}}\le 4.700`$ mag. Similarly, a conservative estimate of the $`V-I`$ color offset between Ursa Minor and M92 is $`\mathrm{\Delta }(V-I)_{\mathrm{UMi}-\mathrm{M92}}=0.010\pm 0.005`$ mag, with 99% confidence limits of $`0.005\le \mathrm{\Delta }(V-I)_{\mathrm{UMi}-\mathrm{M92}}\le 0.020`$ mag.
Our Ursa Minor color-magnitude diagram, with the M92 fiducial sequence of Johnson & Bolte (1998) overplotted using a $`V`$ magnitude offset of 4.6 mag and a $`V-I`$ color offset of 0.01 mag, shows that the 4 bright blue stars near $`V\approx 20`$ mag with colors $`(V-I)<0.3`$ mag lie underneath the shifted M92 blue horizontal branch; these stars are probable Ursa Minor horizontal branch stars. The brighter part of the Ursa Minor red giant branch ($`V<22`$ mag, where our fiducials were not compared) is seen to be slightly redder than the M92 red giant branch. This could be evidence that Ursa Minor is slightly more metal-rich than M92; however, we caution the reader not to over-interpret such a small sample of Ursa Minor red giants. The current observations cannot rule out that the main stellar population of Ursa Minor has the same metallicity as M92.
### 3.4 Distance, Reddening, and Age of UMi
With an accurate estimate of $`\mathrm{\Delta }V_{\mathrm{UMi}-\mathrm{M92}}`$ in hand, we are now able to determine the apparent $`V`$ distance modulus of the Ursa Minor dSph galaxy if we know the apparent $`V`$ distance modulus of M92: $`(m-M)_V^{\mathrm{UMi}}\equiv (m-M)_V^{\mathrm{M92}}+\mathrm{\Delta }V_{\mathrm{UMi}-\mathrm{M92}}`$. The above analysis suggests that the uncertainty in the $`V`$ magnitude offset between Ursa Minor and M92 is small ($`\sim `$0.03 mag). This implies that the largest source of uncertainty in the value of $`(m-M)_V^{\mathrm{UMi}}`$ will probably be the error associated with the apparent $`V`$ distance modulus of M92 itself. Pont et al. (1998) recently estimated $`(m-M)_V^{\mathrm{M92}}=14.67\pm 0.08`$ mag from their analysis of Hipparcos subdwarf parallaxes. A conservative estimate of the apparent $`V`$ distance modulus of the Ursa Minor dSph is then $`(m-M)_V^{\mathrm{UMi}}=(14.67+4.60)\pm (0.08+0.03)=19.27\pm 0.11`$ mag.
Let us now assume that the $`V-I`$ color offset between M92 and Ursa Minor is entirely due to reddening. The difference in $`B-V`$ reddening between M92 and Ursa Minor would then be $`\mathrm{\Delta }E(B-V)_{\mathrm{UMi}-\mathrm{M92}}=\mathrm{\Delta }(V-I)_{\mathrm{UMi}-\mathrm{M92}}/1.3=0.008\pm 0.004`$ mag, assuming that $`E(V-I)\approx 1.3E(B-V)`$ (Dean, Warren, & Cousins 1978). Adopting a $`B-V`$ reddening for M92 of $`E(B-V)_{\mathrm{M92}}=0.02\pm 0.01`$ mag (e.g., Stetson & Harris 1988; Bolte & Hogan 1995), we determine the $`B-V`$ reddening for Ursa Minor to be $`E(B-V)_{\mathrm{UMi}}=E(B-V)_{\mathrm{M92}}+\mathrm{\Delta }E(B-V)_{\mathrm{UMi}-\mathrm{M92}}=0.03\pm 0.01`$ mag. The absorption in $`V`$ is then $`A_V^{\mathrm{UMi}}=0.09\pm 0.03`$ mag, assuming that $`A_V=3.1E(B-V)`$ (Savage & Mathis 1979). Our new $`B-V`$ reddening estimate for UMi agrees well with previous estimates in the literature: 0.03 mag (Zinn 1981) and $`0.02_{-0.02}^{+0.03}`$ mag (Nemec, Wehlau, & de Oliveira 1988).
Reddening estimates based on COBE/DIRBE and IRAS/ISSA data give $`E(B-V)`$ values of 0.023$`\pm `$0.003 mag and 0.033$`\pm `$0.004 mag at the respective positions of the Ursa Minor dwarf spheroidal galaxy \[Galactic coordinates $`(l,b)_{\mathrm{UMi}}=(105.00°,44.85°)`$\] and the Galactic globular cluster M92 \[$`(l,b)_{\mathrm{M92}}=(68.34°,34.86°)`$\] (Schlegel et al. 1998). The difference between these two values, $`\mathrm{\Delta }E(B-V)_{\mathrm{UMi}-\mathrm{M92}}=0.010\pm 0.005`$ mag, agrees well with our own estimate of the difference in $`B-V`$ reddening ($`0.008\pm 0.004`$ mag), which we determined above with a completely different method (fiducial-sequence fitting).
King et al. (1998) recently suggested that the $`B-V`$ reddening of M92 may be $`0.04`$–$`0.05`$ mag greater than canonical values: $`E(B-V)_{\mathrm{M92}}=0.06`$–$`0.07`$ mag. Reid & Gizis (1998) observed that the standard $`B-V`$ reddening estimate of M92, $`E(B-V)_{\mathrm{M92}}=0.02`$ mag, is confirmed by Schlegel et al. (see the above paragraph); they also note that the high reddening estimate of King et al. is at odds with other studies. Our determination of the $`B-V`$ reddening difference between UMi and M92 could be consistent with the high reddening estimate of King et al. only if the $`B-V`$ reddening of UMi is also $`0.04`$–$`0.05`$ mag greater than canonical values. While it is true that reddening is patchy across the sky, it is rather unlikely that both M92 and UMi have exactly the same amount of extra reddening beyond that predicted from maps of infrared dust emission. We have therefore adopted the traditional $`B-V`$ reddening estimate for M92, for the sake of consistency with Schlegel et al. (1998) and older studies of Galactic extinction (e.g., Burstein & Heiles 1982).
We calculate the true distance modulus of the Ursa Minor dwarf spheroidal galaxy to be $`(m-M)_0^{\mathrm{UMi}}=19.18\pm 0.12`$ mag, based on $`(m-M)_V^{\mathrm{M92}}=14.67\pm 0.08`$ mag (Pont et al. 1998), which was derived assuming $`E(B-V)_{\mathrm{M92}}=0.02`$ mag and $`\text{[Fe/H]}_{\mathrm{M92}}=-2.2`$ dex (cf. Carretta & Gratton 1997; Zinn & West 1984). Decreasing the adopted $`B-V`$ reddening for M92 by 0.01 mag decreases the distance modulus estimate by 0.02 mag, and increasing the metallicity for M92 by 0.1 dex increases the distance modulus by 0.03 mag (Pont et al. 1998).
Our new distance estimate for Ursa Minor is in good agreement with previous determinations based on early CCD observations in the 1980s, once those earlier estimates are placed on the same distance scale. For example, Cudworth, Olszewski, & Schommer (1986) derived a distance modulus for Ursa Minor of $`(m-M)_0^{\mathrm{UMi}}=19.0\pm 0.1`$ mag based on a sliding fit to M92. They obtained the same value from their measurement of the $`V`$ magnitude of the horizontal branch at the RR Lyrae gap, $`V_{\mathrm{RR}}=19.7`$ mag, their absorption value, $`A_V^{\mathrm{UMi}}=0.1`$ mag, and the assumption that the absolute $`V`$ magnitude of the RR Lyraes is $`M_V^{\mathrm{RR}}=0.6`$ mag. Harris (1996) gives the $`V`$ magnitude of the horizontal branch of M92 as $`V_{\mathrm{HB}}^{\mathrm{M92}}=15.10`$ mag. With our $`V`$ magnitude offset between Ursa Minor and M92, we expect the $`V`$ magnitude of the Ursa Minor horizontal branch to be $`V_{\mathrm{HB}}^{\mathrm{UMi}}=19.70\pm 0.03`$ mag, which agrees exactly with the measurement of Cudworth et al. (1986). Our distance modulus estimate for Ursa Minor implies that the absolute visual magnitude of the horizontal branch (at a metallicity of $`\text{[Fe/H]}=-2.2`$ dex) is $`M_V^{\mathrm{HB}}=V_{\mathrm{HB}}^{\mathrm{UMi}}-(m-M)_V^{\mathrm{UMi}}=0.43\pm 0.12`$ mag, which is consistent with the Lee, Demarque, & Zinn (1990, hereafter LDZ) distance scale value $`M_{V,\mathrm{LDZ}}^{\mathrm{RR}}=0.17\text{[Fe/H]}+0.82=0.45`$ mag, assuming, of course, that $`M_V^{\mathrm{HB}}\approx M_V^{\mathrm{RR}}`$. Placing the Ursa Minor distance modulus estimate of Cudworth et al. (1986) on the LDZ distance scale, and assuming our $`V`$ absorption value of $`A_V^{\mathrm{UMi}}=0.09\pm 0.03`$ mag, we get a revised estimate of $`(m-M)_0^{\mathrm{UMi}}=19.16\pm 0.11`$ mag, which is just 0.02 mag lower than our own estimate.
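The arithmetic in this paragraph, and the distance quoted in the summary below, can be verified directly:
$$M_V^{\mathrm{HB}}=19.70-19.27=0.43\text{ mag},\qquad M_{V,\mathrm{LDZ}}^{\mathrm{RR}}=0.17\times (-2.2)+0.82=0.446\approx 0.45\text{ mag},$$
$$d=10^{[(m-M)_0^{\mathrm{UMi}}+5]/5}\text{ pc}=10^{(19.18+5)/5}\text{ pc}\approx 68.5\text{ kpc}\approx 69\text{ kpc}.$$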
How old is the main stellar population of Ursa Minor? We have shown that the ancient metal-poor Galactic globular cluster M92 is an excellent stellar population analog for the median stellar population of the Ursa Minor dwarf spheroidal galaxy. Continuing further with the M92 analogy, we propose that Ursa Minor and M92 are coeval. The determination of the age of the main population of Ursa Minor then reduces to the problem of determining the age of M92. The Harris et al. (1997) analysis of the Galactic globular clusters NGC 2419 and M92 found that, while the full impact of Hipparcos data and improving stellar models has yet to be felt, an age range of 12–15 Gyr for the most metal-poor Galactic globular clusters is well supported by the current mix of theory and observations. Pont et al. (1998) estimated that M92 is 14 Gyr old based on their analysis of the luminosities of cluster turnoff and subgiant-branch stars. They noted that their age estimate for M92 should probably be reduced by $`\sim `$1 Gyr if diffusion is important in the cores of globular cluster stars. Our above analysis used the Pont et al. (1998) distance to M92, and so we now adopt their age estimate for M92. Using the M92 analogy one last time, we conclude that the age of the main stellar population of the Ursa Minor dwarf spheroidal galaxy is $`\sim `$14 Gyr.
## 4 SUMMARY
The findings of this paper can be summarized as follows:
* Our comparison of the fiducial sequences of the Ursa Minor dwarf spheroidal galaxy with those of the Galactic globular cluster M92 (NGC 6341) indicates that the median stellar population of the UMi dSph galaxy is metal-poor ($`[\mathrm{Fe}/\mathrm{H}]_{\mathrm{UMi}}\approx [\mathrm{Fe}/\mathrm{H}]_{\mathrm{M92}}\approx -2.2`$ dex) and ancient ($`\mathrm{age}_{\mathrm{UMi}}\approx \mathrm{age}_{\mathrm{M92}}\approx 14`$ Gyr).
* The $`V`$ magnitude offset and $`V-I`$ color offset between Ursa Minor and M92 are estimated to be $`\mathrm{\Delta }V_{\mathrm{UMi}-\mathrm{M92}}=4.60\pm 0.03`$ mag and $`\mathrm{\Delta }(V-I)_{\mathrm{UMi}-\mathrm{M92}}=0.010\pm 0.005`$ mag.
* The Ursa Minor $`B-V`$ reddening and the absorption in $`V`$ are estimated to be $`E(B-V)=0.03\pm 0.01`$ mag and $`A_V^{\mathrm{UMi}}=0.09\pm 0.03`$ mag, assuming that the $`B-V`$ reddening for M92 is $`0.02\pm 0.01`$ mag.
* We have determined the true distance modulus of the Ursa Minor dwarf spheroidal galaxy to be $`(m-M)_0^{\mathrm{UMi}}=(m-M)_V^{\mathrm{M92}}+\mathrm{\Delta }V_{\mathrm{UMi}-\mathrm{M92}}-A_V^{\mathrm{UMi}}=19.18\pm 0.12`$ mag, based on the adoption of the apparent $`V`$ distance modulus for M92 of $`(m-M)_V^{\mathrm{M92}}=14.67\pm 0.08`$ mag (Pont et al. 1998). The Ursa Minor dwarf spheroidal galaxy is then at a distance of $`69\pm 4`$ kpc from the Sun.
These HST observations indicate that Ursa Minor has had a very simple star formation history, consisting mainly of a single major burst of star formation about 14 Gyr ago which probably lasted $`<`$2 Gyr. While we may have missed minor younger stellar populations due to the small field of view of the WFPC2 instrument, these observations clearly show that most of the stars in the central region of the Ursa Minor dwarf spheroidal galaxy are ancient. If the ancient Galactic globular clusters, like M92, formed concurrently with the early formation of the Milky Way galaxy itself, then the Ursa Minor dwarf spheroidal is probably as old as the Milky Way.
We would like to thank Sylvia Baggett for helping us understand the cause of all the image defects we encountered in these archival images. We thank the anonymous referee, whose comments and suggestions have improved this article. We wish to thank Don VandenBerg for bringing to our attention the article on the distance to NGC 6397 by Reid & Gizis, which appeared while we were finishing the manuscript. KJM was supported by a grant from the National Aeronautics and Space Administration (NASA), Order No. S-67046-F, which was awarded by the Long-Term Space Astrophysics Program (NRA 95-OSS-16). One of our figures was created with an image from the Digitized Sky Survey, which is based on photographic data obtained using The UK Schmidt Telescope. The UK Schmidt Telescope was operated by the Royal Observatory Edinburgh, with funding from the UK Science and Engineering Research Council, until 1988 June, and thereafter by the Anglo-Australian Observatory. Original plate material is copyright (c) the Royal Observatory Edinburgh and the Anglo-Australian Observatory. The plates were processed into the present compressed digital form with their permission. The Digitized Sky Survey was produced at the Space Telescope Science Institute under US Government grant NAG W-2166. This research has made use of NASA's Astrophysics Data System Abstract Service and the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory at the California Institute of Technology, under contract with NASA.
## Appendix A A ROBUST FIDUCIAL-SEQUENCE ALGORITHM
Johnson & Bolte (1998, hereafter JB98) recently published a $`V`$ versus $`V-I`$ fiducial sequence for the ancient Galactic globular cluster M92, which is shown in an accompanying figure on top of their stellar photometry, kindly provided to us by Jennifer Johnson. JB98 found that mean and mode fitting proved to be susceptible to outliers, because some areas of the color-magnitude diagram do not have enough stars to form a strong ridge line; their M92 fiducial sequence was determined from the best-measured stars and was subsequently drawn by hand and eye. We now demonstrate that, given enough stars, it is possible to obtain similar results with a new robust fiducial-sequence algorithm, which we present herein.
The median value of a normal (a.k.a. Gaussian) distribution is the mean value of the distribution. The mean value, $`\overline{x}`$, of a small nearly-normally-distributed sample is sensitive to the presence of outlier data values; the median value is less sensitive to outliers and is therefore considered to be a more robust statistic than the mean. Likewise, the average deviation (a.k.a. mean deviation), $`a\equiv \frac{1}{N}\sum _{i=1}^N|x_i-\overline{x}|`$, of a nearly-normally-distributed sample is, by definition, less sensitive to outliers than the standard deviation, $`\sigma \equiv [\frac{1}{N-1}\sum _{i=1}^N(x_i-\overline{x})^2]^{1/2}`$, of the sample. The average deviation of a normal distribution with mean zero and standard deviation $`\sigma `$ is
$$a=\int _{-\mathrm{\infty }}^{\mathrm{\infty }}|x|\left[\frac{1}{\sigma \sqrt{2\pi }}e^{-x^2/(2\sigma ^2)}\right]dx=\sigma \sqrt{\frac{2}{\pi }}\approx 0.7989\,\sigma ,$$
i.e., about 0.8 times the standard deviation of the distribution. Consequently, approximately 98% of a normal distribution is found within 3.0 average deviations of the mean of the distribution:
$$\int _{-3.0a}^{+3.0a}\left[\frac{1}{\sigma \sqrt{2\pi }}e^{-x^2/(2\sigma ^2)}\right]dx\approx \int _{-2.4\sigma }^{+2.4\sigma }\left[\frac{1}{\sigma \sqrt{2\pi }}e^{-x^2/(2\sigma ^2)}\right]dx\approx 0.9836.$$
A robust estimate of the mean of a nearly-normally-distributed sample can be determined by deriving the median of a subsample of the parent sample that is within 3.0 average deviations of the median of the parent sample. This process can, of course, be repeated until the difference between the parent median and the subsample median is negligibly small. Five iterations will generally suffice for the determination of fiducial sequences from high-quality stellar photometry.
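A minimal implementation of this iterative clipping scheme might look as follows; the function name and return convention are ours, but the rule itself (keep points within 3.0 average deviations of the current median, iterate 5 times) is exactly the one described above.

```python
import numpy as np

def robust_median(x, n_iter=5, clip=3.0):
    """Iteratively clipped median of a nearly-normally-distributed sample.

    Each pass keeps only points within `clip` average deviations of the
    current median (3.0 adev ~ 2.4 sigma, i.e. ~98% of a normal
    distribution) and then recomputes the median.
    """
    x = np.asarray(x, dtype=float)
    for _ in range(n_iter):
        med = np.median(x)
        adev = np.mean(np.abs(x - med))
        x = x[np.abs(x - med) <= clip * adev]
    med = np.median(x)
    adev = np.mean(np.abs(x - med))
    return med, adev, x.size
```

Applied to the $`V-I`$ colors of the stars in each 0.2-mag slice of $`V`$, the returned median, average deviation, and surviving star count are exactly the quantities needed for the color errors of Eq. (3).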
We now apply this algorithm (with 5 iterations) to the M92 $`V-I`$ color photometry as a function of $`V`$ magnitude in order to determine its fiducial sequence, $`[V_\mathrm{M},(V-I)_\mathrm{M}]`$. The results of the algorithm, computed in 0.2-mag slices in $`V`$, are given in tabular form in Table A1 and graphically in the accompanying figures.
We see that our M92 fiducial sequence (Table A1) matches the fit-by-eye fiducial sequence of JB98 near the main-sequence turnoff region ($`18\le V\le 21`$) to a remarkable degree, with a mean and rms difference of just 0.0004$`\pm `$0.0047 mag. The scatter increases slightly for stars brighter than $`V\approx 18`$, which is not at all surprising given the small sample sizes on the subgiant branch and red giant branch of M92 (see column 6 of Table A1). At the faintest main-sequence magnitudes ($`V>21`$ mag), our fiducial sequence for M92 is slightly redder than that of JB98. Noting that the number of stars in the sample gradually decreases below $`V\approx 21`$ even though the M92 stellar luminosity function is known to be increasing over this magnitude range (see, e.g., Stetson & Harris 1988), we conclude that completeness effects become increasingly significant in the JB98 data below $`V\approx 21`$ mag. The well-known tendency for faint stars to be measured too bright explains why the algorithm gave redder $`V-I`$ colors than the fit-by-eye values of Johnson & Bolte, who consciously compensated for this effect in their determination of the M92 fiducial sequence (see the discussion in §3 of JB98).
# Electronic Structure and Thermoelectric Prospects of Phosphide Skutterudites
## Abstract
The prospects for high thermoelectric performance in phosphide skutterudites are investigated based on first principles calculations. We find that stoichiometric $`CoP_3`$ differs from the corresponding arsenide and antimonide in that it is metallic. As such, the band structure must be modified if high thermopowers are to be achieved. In analogy with the antimonides, it is expected that this may be done by filling with La. Calculations for $`LaFe_4P_{12}`$ show that a gap can in fact be opened by La filling, but that the valence band is too light to yield reasonable p-type thermopowers at appropriate carrier densities; n-type La-filled material may be more favorable.
There has been considerable recent interest in the electronic and thermal transport properties of skutterudites. This is driven primarily by the discovery of two new high performance thermoelectric (TE) materials in this class. The TE performance is characterized by a dimensionless figure of merit $`ZT=\sigma S^2T/k`$, where $`\sigma `$ is the electrical conductivity, $`S`$ is the thermopower and $`k`$ is the thermal conductivity; $`ZT`$ up to 1.4 at $`T=`$ 600 K has been measured in skutterudites. Much of the effort has focused on antimonides, based on the expectation of lower lattice thermal conductivity related to the heavier atoms, as well as the likelihood of better carrier mobilities due to the chemistry of Sb as compared to, say, P. In fact, the two high $`ZT`$ compositions discovered are both antimonides: $`CeFe_4Sb_{12}`$ and $`La(Fe,Co)_4Sb_{12}`$.
The high $`ZT`$ values in these compounds derive from two important features: (1) high power factors $`\sigma S^2`$ related to their particular electronic structures, which are apparently different both between the two compounds and from the corresponding binary, $`CoSb_3`$; and (2) a strong suppression of the thermal conductivity of the binary upon filling. This latter effect, though crucial for the TE performance, is understood only qualitatively in terms of phonon scattering related to rare earth vibrations. Attempts to obtain even better performance by various alloying and substitutions on each of the three sites have thus far been unsuccessful, although there are still many possibilities remaining to be explored. These efforts are complicated by the large variety of realizable modifications of these skutterudites and the general lack of detailed understanding of their effects on properties relevant to TE. Moreover, TE performance typically is a strong function of the doping level, further complicating the search.
In this brief report we present electronic structure calculations for the phosphides $`CoP_3`$ and $`LaFe_4P_{12}`$, and discuss them in terms of the implications for TE performance and in relation to the corresponding antimonide materials, in order to elucidate trends. All previous first principles calculations point to a particularly important role, in determining transport properties, for bands associated with the chemical bonding of the pnictogen four-membered rings in the skutterudite structure - a point that was emphasized early on by Jung et al. based on tight binding calculations. Calculations for $`CeFe_4P_{12}`$, $`CeFe_4As_{12}`$ and $`CeFe_4Sb_{12}`$ have shown these materials to be hybridization gap semiconductors, with decreasing gaps as the lattice parameter increases and the $`Ce`$-f hybridization decreases down the pnictogen column. Previous calculations for the binaries $`CoSb_3`$ and $`CoAs_3`$ reveal generally similar electronic structures, but with differences that are particularly significant in the region near the Fermi energy (E<sub>F</sub>) that dominates electronic transport. $`CoSb_3`$ is a narrow gap semiconductor with a highly non-parabolic valence band dispersion, while $`CoAs_3`$ was found to be a zero gap semiconductor with parabolic bands.
Zhukov has reported first principles band structure calculations for $`CoP_3`$, finding the material to be a narrow indirect gap semiconductor. The relatively heavy conduction bands with their multi-valley minima would seem initially favorable for the electronic aspect of TE performance with n-type doping. However, those calculations were done with the linear muffin-tin orbital method in the atomic sphere approximation (LMTO-ASA). Because the skutterudite crystal structure features large voids, low site symmetries, and strong covalent bonding, such calculations are particularly difficult and may have band shifts of several tenths of an eV compared to more accurate general potential calculations. Because of the small indirect gap, this is enough to qualitatively change the picture from a transport point of view, implying the need for a general potential investigation as presented here.
Our calculations were done in the framework of density functional theory using the general potential linearized augmented plane wave (LAPW) method, which does not make any shape approximations and uses a flexible basis set including LAPW functions and local orbital extensions to relax linearization errors and treat semicore states. Valence states were treated in a scalar relativistic scheme, while fully relativistic calculations were done for core states in the atomic spheres (R<sub>MT</sub> = 2.1, 2.1, 1.9, 2.5 a.u. for $`Co`$, $`Fe`$, $`P`$ and $`La`$, respectively). The basis set convergence was tested using $`R_{min}k_{MAX}`$ from 5.0 to 8.5; a value of 7.0 was found to yield a reasonable computational effort with only a small error with respect to the highest $`R_{min}k_{MAX}`$ ($`\mathrm{\Delta }E_T=0.3\frac{mRy}{atom}`$). We used a (4,4,4) special-points grid for the Brillouin zone integration, which we found to be converged. The electronic density of states (DOS) was based on a 35 k-point tetrahedral sampling in the irreducible BZ. The Hedin-Lundqvist parameterization of the LDA exchange-correlation functional was used.
As discussed below, we find for $`CoP_3`$ a band structure globally similar to that of Zhukov, but with changes near E<sub>F</sub> that are large enough to drastically change the picture in an unfavorable direction from the point of view of TE.
As mentioned, the binary $`CoSb_3`$ has a relatively high thermal conductivity and a highly non-parabolic valence band dispersion, which is unfavorable for high p-type TE performance. Meanwhile $`La(Fe,Co)_4Sb_{12}`$ has both a strongly reduced $`k`$ and a band structure that is modified in such a way as to improve the electronic properties, by shifting the valence band edge downwards due to repulsion from the La f-resonance above the Fermi level. One may conjecture that a similar effect could be present in $`La(Fe,Co)_4P_{12}`$, as the band edge states have the same character, and if so the question arises as to whether the electronic properties relevant to TE may be improved. There is also interest in the electronic structure of filled phosphide skutterudites because of the observation of superconductivity in $`LaFe_4P_{12}`$, with critical temperature T<sub>c</sub> = 4.1 K.
The skutterudite structure (space group $`Im\overline{3}`$) consists of a simple cubic transition metal sublattice partially filled by almost square pnictogen groups ($`P_4`$). Three quarters of these sites are filled with such rings, oriented along three mutually perpendicular directions in accordance with the cubic symmetry, and the remaining one quarter are left empty. In filled skutterudites these remaining sites are occupied by a rare earth ion, which modifies the thermal and electronic properties. Two symmetry-independent parameters, $`u`$ and $`v`$, determine the position of the $`P`$ atoms with respect to the metal ion in the center of the cubic cell; they control the size and the squareness of the rings. We start from the experimental crystal structures, fixing the position of the pnictogen group with respect to the transition metal at $`u_e=`$ 0.1453 $`a`$ and $`v_e=`$ 0.3482 $`a`$, where the lattice parameter is $`a`$ = 0.77073 nm, for $`CoP_3`$, and at $`u_e=`$ 0.1504 $`a`$ and $`v_e=`$ 0.3539 $`a`$, where $`a`$ = 0.78316 nm, for $`LaFe_4P_{12}`$.
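To make the geometric role of $`u`$ and $`v`$ concrete, the sketch below generates the 24 pnictogen sites of the body-centered $`Im\overline{3}`$ cell from the two free parameters. It is an illustration only: the generation convention (sites of the form (0, $`u`$, $`v`$) plus sign changes, cyclic permutations, and the body-centering translation) is our assumption, not a prescription from the text.

```python
import numpy as np
from itertools import product

def skutterudite_p_sites(u, v):
    """Fractional coordinates of the 24 pnictogen sites built from (u, v)."""
    base = []
    for su, sv in product((+1.0, -1.0), repeat=2):
        p = (0.0, su * u, sv * v)
        # cyclic permutations give the three orthogonal ring orientations
        base += [p, (p[2], p[0], p[1]), (p[1], p[2], p[0])]
    sites = np.array(base)
    sites = np.vstack([sites, sites + 0.5])  # body-centering translation
    return sites % 1.0

# Experimental CoP3 parameters quoted above:
sites = skutterudite_p_sites(0.1453, 0.3482)
assert sites.shape == (24, 3)
```

Multiplying the fractional coordinates by the lattice parameter $`a`$ gives the Cartesian positions of the rings, and varying $`u`$ and $`v`$ changes their size and squareness as described above.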
The band structure of $`CoP_3`$, shown in Fig. 1, is metallic because the pseudogap near E<sub>F</sub>, which is characteristic of skutterudites, is crossed entirely by a single, mostly phosphorus p-derived band. This is the same band that crosses the pseudogap in $`CoSb_3`$ and $`CoAs_3`$, but in $`CoP_3`$ it crosses into the conduction bands above the Fermi level. As such, $`CoP_3`$ is not very promising for TE applications unless filling or other modifications alter the band structure enough to open a gap. The corresponding DOS and projections are shown in Fig. 2. In the region most relevant for TE properties (near E<sub>F</sub>), our LAPW band structure is quite different from the LMTO-ASA results, which predict an indirect $`\mathrm{\Gamma }`$–$`H`$ energy gap. As already found in Ref. for the antimonides, there are (1) a singly degenerate band, (2) a two-fold degenerate band, and (3) a three-fold degenerate band at $`\mathrm{\Gamma }`$ and above E<sub>F</sub>. The first is mostly P p-derived, while the other two are more hybridized, with higher contributions from $`Co`$ d-states. The energy ordering of these bands at the $`\mathrm{\Gamma }`$ point differs among the $`Co`$-derived skutterudites, passing from (1)-(3)-(2) in $`CoSb_3`$ to (3)-(2)-(1) in $`CoAs_3`$ to (2)-(1)-(3) in $`CoP_3`$.
What we need in order to “realize” a favorable band structure for p-type TE properties in $`CoP_3`$ is to lower the singly degenerate band (1) until it crosses band (2) and approaches the heavy-mass bands forming the bottom of the pseudogap, so that we obtain a semiconductor with potentially high Seebeck coefficients, analogous to $`La(Fe,Co)_4Sb_{12}`$. The gap opening by means of filling that provides the TE performance of the antimonides relies on the interaction between rare earth f-states or resonances and the crossing band. This effect is strong because the wavefunctions of the pseudogap-crossing band have f-like symmetry, as discussed in detail in Ref.
Our results for $`LaFe_4P_{12}`$ show that the crossing band is indeed pushed down, as expected, by repulsion from $`La`$ f-resonance states at about 3.0 eV above E<sub>F</sub> (Fig. 2, lower panel), resulting in a small direct gap between bands (1) and (2): $`E_g=98`$ meV. However, the top of the band crossed by the Fermi level is not close to any lower heavy bands (Fig. 3), so the p-type thermopower cannot be high enough at a reasonable band filling for $`La(Fe,Co)_4P_{12}`$, assuming rigid band behavior upon alloying with $`Co`$ as found in $`La(Fe,Co)_4Sb_{12}`$ (N.B. strong non-rigid band behavior, which we do not expect, would also be detrimental to TE as it would indicate strong alloy scattering and low carrier mobility). A study of the effective masses (Tab. I) for the three bands closest to $`E_F`$ also indicates that, because of the double degeneracy and the reasonably high $`m^{}`$, $`La(Fe,Co)_4P_{12}`$ could be more interesting for n-type TE applications, but only if the thermal conductivity can be strongly reduced and well-filled n-type material with low defect concentrations and high mobility can be produced.
The importance of the four-membered pnictogen rings for thermoelectricity, and also for superconductivity, suggests an investigation of the Raman-active $`A_g`$ phonon frequencies at the Brillouin zone center, which are associated with normal modes involving variations of the symmetry-independent parameters ($`u`$,$`v`$) of the skutterudite structure or, in other words, distortions of the pnictogen rings. We obtain LDA structural parameters ($`u_0=`$ 0.1462 and $`v_0=`$ 0.3478) near the experimental ones, and $`\omega _1=`$ 228 cm<sup>-1</sup> and $`\omega _2=`$ 177 cm<sup>-1</sup>, for $`CoP_3`$. Similar calculations for $`LaFe_4P_{12}`$ give $`u_0=`$ 0.1537 and $`v_0=`$ 0.3522 for the LDA equilibrium parameters, with $`\omega _1=`$ 189 cm<sup>-1</sup> and $`\omega _2=`$ 160 cm<sup>-1</sup>. The difference presumably reflects $`La`$–$`P`$ interactions.
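Frequencies of this kind are typically obtained from frozen-phonon calculations: the total energy is evaluated for several small distortions along a mode, and the curvature of the fitted parabola is converted into a frequency. The post-processing step can be sketched as follows, assuming the energies from the LAPW runs are already in hand; in general the two $`A_g`$ modes mix $`\delta u`$ and $`\delta v`$, so this one-dimensional helper would be applied to each eigen-displacement of a 2$`\times `$2 dynamical matrix in ($`u`$,$`v`$).

```python
import numpy as np

def frozen_phonon_freq(disp_ang, energy_ev, mass_amu):
    """Harmonic frequency in cm^-1 from a frozen-phonon energy scan.

    disp_ang  : mode displacement amplitudes (Angstrom)
    energy_ev : total energies at those displacements (eV)
    mass_amu  : effective mass carried by the mode (amu)
    """
    # quadratic fit E = E0 + 0.5*k*x^2 gives k in eV / Angstrom^2
    k = 2.0 * np.polyfit(disp_ang, energy_ev, 2)[0]
    EV, AMU, ANG = 1.602176634e-19, 1.66053906660e-27, 1e-10
    C_CM = 2.99792458e10  # speed of light in cm/s
    omega = np.sqrt(k * EV / (mass_amu * AMU)) / ANG  # rad/s
    return omega / (2.0 * np.pi * C_CM)
```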
In summary we have presented electronic structure calculations for $`CoP_3`$ and $`LaFe_4P_{12}`$. These show that while $`CoP_3`$ is a metal, a gap is opened up upon filling with $`La`$. Nonetheless the band structure does not allow for high p-type thermopowers with reasonable carrier concentrations. The conduction bands are more favorable having a degenerate heavy mass structure, though we note that n-type filled skutterudites are difficult to prepare.
We are grateful for useful discussions with R. S. Feigelson. This work is supported by ONR and DARPA.
# The X-ray Timing Behavior of the X-ray Burst Source SLX 1735–269
## 1 Introduction
The galactic bulge source SLX 1735–269 was discovered in 1985 by Skinner et al. (1987) during the Spacelab 2 mission. Although observed on several occasions with other X-ray instruments (e.g., GRANAT/SIGMA: Goldwurm et al. 1996 and references therein; ASCA: David et al. 1997), little is known about this source. Goldwurm et al. (1996) detected the source up to about 150 keV, with a spectral index above 30 keV of about 3. This is steeper than usually observed for black-hole candidates, and therefore they tentatively suggested that the compact object in the system is a neutron star. ASCA observations of this source below 10 keV also could not uniquely identify the nature of the compact object (David et al. 1997), but they were consistent with the neutron star hypothesis. The issue of the nature of the compact object in SLX 1735–269 was finally settled by the discovery of a type I X-ray burst from this source using the Wide Field Cameras onboard BeppoSAX (Bazzano et al. 1997a, 1997b; Cocchi et al. 1998), demonstrating that SLX 1735–269 is a low-mass X-ray binary (LMXB) containing a neutron star.
So far, the rapid X-ray variability of this source has not been studied in detail. The neutron star nature of this system motivated us to analyze the timing behavior of this source as observed by the Rossi X-ray Timing Explorer (RXTE). We searched for quasi-periodic oscillations (QPOs) between 300 and 1200 Hz, which are often observed in neutron star LMXBs (see van der Klis 1998, 1999 for reviews), and coherent pulsations such as observed in the accretion-driven millisecond X-ray pulsar SAX J1808.4–3658 (Wijnands & van der Klis 1998a). Although those phenomena were not detected, we discovered one characteristic of the timing behavior which, so far, has only been observed for SAX J1808.4–3658, increasing the similarity between SAX J1808.4–3658 and the other neutron star LMXBs.
## 2 Observations, analysis, and results
SLX 1735–269 was observed using RXTE on several occasions (see Table 1 for a log of the observations) for a total of 11 ksec of on-source data. Data were collected simultaneously with 16 s time resolution in 129 photon energy channels (effective energy range 2–60 keV), and with 1 $`\mu `$s time resolution in 256 channels (2–60 keV). A light curve, an X-ray color-color diagram (CD), and an X-ray hardness-intensity diagram (HID) were created using the 16 s data, and power spectra (for the energy range 2–60 keV) were calculated from the 1 $`\mu `$s data using 256 s FFTs.
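To illustrate this step, a Leahy-normalized power spectrum can be built from a photon event list as in the sketch below. This is a standard construction (not code from the original analysis); the segment length matches the 256 s FFTs quoted above, while the rebinning time `dt` is a free choice that sets the Nyquist frequency.

```python
import numpy as np

def avg_leahy_power_spectrum(t, seg=256.0, dt=1.0 / 4096.0):
    """Average Leahy-normalized power spectrum of photon arrival times t (s)."""
    nbin = int(round(seg / dt))
    t = np.sort(np.asarray(t))
    t0 = t[0]
    powers = []
    for i in range(int((t[-1] - t0) // seg)):
        lo, hi = t0 + i * seg, t0 + (i + 1) * seg
        counts, _ = np.histogram(t, bins=nbin, range=(lo, hi))
        nph = counts.sum()
        if nph == 0:
            continue
        ft = np.fft.rfft(counts)
        # Leahy normalization: pure Poisson noise averages to 2
        powers.append(2.0 * np.abs(ft[1:]) ** 2 / nph)
    freq = np.fft.rfftfreq(nbin, dt)[1:]
    return freq, np.mean(powers, axis=0)
```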
Figure 1 shows the background subtracted (using PCABACKEST version 2.1b and the faint sources L7/240 background model) light curve (2.0–16.0 keV; all 5 detectors on; Fig. 1a), the CD (Fig. 1b), and the HID (Fig. 1c) (for the energy bands used to calculate the colors used in those diagrams see the caption of the figure). The 2.0–16.0 keV count rate varies between $``$100 and $``$160 counts s<sup>-1</sup> (Fig. 1a and c). The hard color tends to increase when the count rate increases (Fig. 1c); the soft color does not have a clear correlation with count rate.
We selected power spectra based on the background-corrected count rates in the 2.0–16.0 keV band and averaged them. We obtained two average power spectra: the first one corresponds to a count rate $`<`$128 counts s<sup>-1</sup>, the second one to $`>`$128 counts s<sup>-1</sup> (see also Fig. 1a). These two power spectra are shown in Figure 2. Although the count rate differs only slightly between the two selections (10%–20%), the difference in the power spectra is remarkable. Both power spectra show a broad band-limited noise component. They were fitted (after subtraction of the Poisson level) with a broken power law. For the high count rate power spectrum a Lorentzian was also added, in order to fit the bump superimposed on the band-limited noise near 0.9 Hz.
The fit parameters are presented in Table 2. The break frequency was higher when the count rate was low (2.3 Hz) than when it was high, probably by an order of magnitude (see also Fig. 2). The index below the break during the highest count rates is $`\sim `$0; during the lowest count rates this parameter had to be fixed to 0 due to the low statistics. The indices above the break in the two count rate regimes are consistent with each other at $`\sim `$0.9. The strength of the noise is $`\sim `$24% rms in the high count rate selection, and $`\sim `$17% rms in the low count rate selection. The bump present on top of the broad-band noise at high count rate had an amplitude of 4.7% rms (3.3$`\sigma `$), a FWHM of 0.3 Hz, and a frequency of 0.87 Hz.
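The fit model just described, a broken power law plus a Lorentzian for the bump, can be written down compactly; the parameter names below are ours, and the normalization conventions are one common choice rather than necessarily those used for Table 2.

```python
import numpy as np

def broken_power_law(f, amp, f_break, index_low, index_high):
    """Band-limited noise: P ~ f^-index_low below the break and
    P ~ f^-index_high above it, continuous at f_break."""
    return amp * np.where(f < f_break,
                          (f / f_break) ** (-index_low),
                          (f / f_break) ** (-index_high))

def lorentzian(f, norm, f0, fwhm):
    """Lorentzian bump centered at f0; norm is its integrated power,
    which fixes the fractional rms amplitude."""
    hw = 0.5 * fwhm
    return (norm / np.pi) * hw / ((f - f0) ** 2 + hw ** 2)

def noise_model(f, amp, f_break, ilo, ihi, norm, f0, fwhm):
    return broken_power_law(f, amp, f_break, ilo, ihi) + lorentzian(f, norm, f0, fwhm)
```

Fitting `noise_model` to the Poisson-subtracted power spectra (e.g., with `scipy.optimize.curve_fit`) yields break frequencies, indices, and bump parameters of the kind listed in Table 2.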
We searched for kHz QPOs but none were found, with upper limits (95% confidence level) between 13% and 26% rms (depending on count rate selection, frequency range, and assumed FWHM of the kHz QPO). These upper limits are higher than the strengths of kHz QPOs detected in other low-luminosity neutron star LMXBs. Therefore, we cannot exclude the presence of QPOs with frequencies between 100 and 1500 Hz. Upper limits (95% confidence level) on coherent pulsations in the frequency range 100–1000 Hz were 2.2% rms, which is significantly lower than the 4%–6% rms detected for the accretion-driven millisecond X-ray pulsar SAX J1808.4–3658 (Wijnands & van der Klis 1998a; Cui, Morgan, & Titarchuk 1998). However, it is possible that SAX J1808.4–3658 has a low system inclination (see Chakrabarty & Morgan 1998) and that SLX 1735–269 has a much larger inclination. The pulsations in SLX 1735–269 would then be smeared out over many frequency bins, making a 4%–6% rms amplitude pulsation undetectable in our analysis.
We fitted the X-ray spectra corresponding to the two power spectral selections. The X-ray spectrum corresponding to a count rate of $`>`$128 counts s<sup>-1</sup> could be adequately fitted with an absorbed power law with a photon index of $`\sim `$2.2 (using an N<sub>H</sub> of $`1.47\times 10^{22}`$ atoms cm<sup>-2</sup>; David et al. 1997). The 3–25 keV flux was $`3.8\times 10^{-10}`$ ergs cm<sup>-2</sup> s<sup>-1</sup>, corresponding to an intrinsic luminosity of $`3.3\times 10^{36}`$ ergs s<sup>-1</sup> (assuming a distance of 8.5 kpc). The X-ray spectrum corresponding to a count rate of $`<`$128 counts s<sup>-1</sup> was fitted with an absorbed power law with index 2.4. The fit was considerably improved when a gaussian line, near 6.7 keV with a width of 0.8 keV, was added. The 3–25 keV flux was $`2.8\times 10^{-10}`$ ergs cm<sup>-2</sup> s<sup>-1</sup>, corresponding to an intrinsic luminosity of $`2.4\times 10^{36}`$ ergs s<sup>-1</sup>. These fluxes are in the range previously observed for SLX 1735–269. The steeper power law for the low count rate selection is consistent with the smaller hard color in the CD (Fig. 1, left), compared to that of the high count rate selection.
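The flux-to-luminosity conversion used here is the usual isotropic one; for the high count rate selection, for example,
$$L=4\pi d^2F=4\pi (8.5\text{ kpc})^2\times 3.8\times 10^{-10}\text{ ergs cm}^{-2}\text{ s}^{-1}\approx 3.3\times 10^{36}\text{ ergs s}^{-1},$$
with 1 kpc $`=3.086\times 10^{21}`$ cm.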
## 3 Discussion
We have presented the first analysis of the rapid X-ray variability properties of the X-ray burst source SLX 1735–269. The timing properties are very similar to those of other low-luminosity, low-magnetic-field-strength neutron star LMXBs and of black hole candidates during their lowest observed mass accretion rates (Wijnands & van der Klis 1999 and references therein). The power spectrum is dominated by a broad band-limited noise component which roughly follows a power law at high frequency, but breaks at a certain frequency below which the power spectrum is approximately flat. When the statistics are sufficient, a broad bump can be detected superimposed on this band-limited noise, as is also often observed in other X-ray binaries (see Wijnands & van der Klis 1999 and references therein). The power spectra resemble those obtained for the low-luminosity neutron star LMXBs called the atoll sources when they accrete at their lowest observed mass accretion rates (i.e., when they are in their so-called island state). We therefore suggest that SLX 1735–269, assuming it is an atoll source, was in the island state during the RXTE observations. Also similar to other X-ray binaries is that when the break frequency changes, the high frequency part of the spectrum above the break remains approximately the same (see, e.g., Belloni & Hasinger 1990).
However, we observe one uncommon feature of the broad-band noise component: the break frequency increased when the X-ray flux, and therefore possibly the mass accretion rate, decreased. In atoll sources the break frequency usually decreases when the inferred mass accretion rate decreases (e.g., Prins & van der Klis 1998; Méndez et al. 1997; Ford & van der Klis 1998). So far, only one other source is known for which the break frequency has been observed to increase with decreasing inferred mass accretion rate: the accretion-driven millisecond X-ray pulsar SAX J1808.4–3658 (see Wijnands & van der Klis 1998b). During the beginning of the decay of the 1998 April outburst of this transient source, the break frequency decreased with decreasing X-ray flux. However, halfway through the decay the break frequency suddenly increased again while the X-ray flux kept on decreasing. Wijnands & van der Klis (1998b) tentatively proposed that this unexpected behavior of the break frequency could be due to the unique pulsating nature of SAX J1808.4–3658 compared to the non-pulsating neutron star LMXBs and black hole candidates, or that it could simply reflect the fact that this was the first detailed study of the timing properties of a neutron star LMXB at such low mass accretion rates. Our analysis of SLX 1735–269, which is a persistent LMXB and for which no coherent millisecond pulsations could be detected, shows that the latter is most likely the case. Thus, the unexpected behavior of the accretion-driven millisecond X-ray pulsar is not a unique feature of that system, increasing its similarity to the other, non-pulsating neutron star LMXBs.
During the highest count rates a bump is present on top of the band-limited noise. Wijnands & van der Klis (1999) showed that the frequency of this bump correlates well with the frequency of the break in low-luminosity neutron star LMXBs (including the accretion-driven millisecond X-ray pulsar) and black hole candidates. Figure 3 shows the same data as plotted in Figure 2a of Wijnands & van der Klis (1999), but now including the data point of SLX 1735–269 (triangle). SLX 1735–269 falls right on the relation defined by the other sources. Again, SLX 1735–269 is very similar to other low-luminosity LMXBs. The point obtained for SLX 1735–269 is at the low end of the neutron star points (the lower-frequency points are mostly for black-hole candidates) and very similar to the data for the X-ray bursters 4U 1812–12 and 1E 1724–3045 (see Wijnands & van der Klis 1999). The latter two sources are, therefore, good candidates to display the same increase of the break frequency with decreasing mass accretion rate.
It is also interesting to note that the 3–25 keV luminosity of SLX 1735–269 ($`\sim `$2–3 $`\times 10^{36}`$ ergs s<sup>-1</sup>) is very close to the 3–25 keV luminosity of SAX J1808.4–3658 at the point where the break frequency and the mass accretion rate became anti-correlated in that source ($`\sim `$2 $`\times 10^{36}`$ ergs s<sup>-1</sup>; Wijnands & van der Klis 1999). It is possible that this anti-correlation sets in at a specific X-ray luminosity, which might be similar in all neutron star LMXBs. This can be checked by studying neutron star LMXBs in detail at such low luminosities.
###### Acknowledgements.
This work was supported in part by the Netherlands Foundation for Research in Astronomy (ASTRON) grant 781-76-017 and by the Netherlands Research School for Astronomy (NOVA). This research has made use of data obtained through the High Energy Astrophysics Science Archive Research Center Online Service, provided by the NASA/Goddard Space Flight Center.
# Stability and Evolution of Galactic Discs
## 1 Bar instability
### 1.1 Mechanism
The mechanism for the bar instability in galaxy discs was clearly described by Toomre (1981) and is also reviewed by Binney & Tremaine (1987, chapter 6). The key idea is that waves can reflect from both the centre of a galaxy and the corotation circle, allowing a standing wave to be set up. As for all resonant cavities (organ pipes, guitar strings, etc.), the phase change around a complete loop is a multiple of $`2\pi `$ only for certain values of the frequency, or pattern speed in the case of a galaxy; the spectrum of modes is therefore discrete.
The standing-wave pattern is, as usual, the superposition of two travelling waves. The direction of propagation of small-amplitude wave packets depends on a number of factors; for the relevant cases (the short-wavelength branch of the dispersion relation inside corotation), leading spiral waves propagate outwards while trailing waves travel inwards. As waves bounce off the centre, they reflect from trailing to leading, and at corotation they switch back to trailing.
The important difference between bar-forming modes and other more familiar standing wave patterns is that as incident leading waves reflect off the corotation circle they are swing-amplified into stronger trailing waves. The amplification process was first described in the early papers by Goldreich & Lynden-Bell (1965) for gaseous discs and by Julian & Toomre (1966) for stellar discs and was reviewed by Toomre (1981 and this conference). Since the reflected wave has larger amplitude than the incident wave, conservation of wave action requires that there also be a transmitted wave; waves inside corotation are negative-energy, negative-angular momentum disturbances (Lynden-Bell & Kalnajs 1972) whereas these quantities are both positive for the transmitted wave outside corotation.
### 1.2 Strategies for stabilising discs
Toomre’s mechanism for the instability suggests three distinct methods by which it can be prevented, as summarised by Binney & Tremaine (1987, §6.3).
The simplest to understand is to make the disc dynamically hot; the radial velocity dispersion of the stars, $`\sigma _u`$, is measured by Toomre’s parameter
$$Q\equiv \frac{\sigma _u}{\sigma _{u,\mathrm{crit}}}=\frac{\sigma _u\kappa }{3.36G\mathrm{\Sigma }},$$
where $`\mathrm{\Sigma }`$ is the disc surface density and $`\kappa `$ is the epicyclic frequency. If $`Q\gtrsim 2`$, collective density waves become very weak and growth rates of all instabilities are reduced to the point that the disc is effectively stable (Sellwood & Athanassoula 1986). This is unlikely to be how real spiral galaxies are stabilised, however, since a high $`Q`$ would both require the disc to be unrealistically thick (Sellwood & Merritt 1994 and references therein) and would also inhibit spiral patterns.
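For orientation, $`Q`$ can be evaluated directly from its definition. The snippet below is our own sketch; the input values are illustrative, solar-neighbourhood-style assumptions rather than numbers taken from the text.

```python
# Toomre's Q = sigma_u * kappa / (3.36 * G * Sigma), with
# G = 4.301e-3 pc (km/s)^2 / Msun, kappa in km/s/pc, Sigma in Msun/pc^2.
G = 4.301e-3

def toomre_Q(sigma_u, kappa, Sigma):
    return sigma_u * kappa / (3.36 * G * Sigma)

# Illustrative values: sigma_u = 35 km/s, kappa = 0.036 km/s/pc,
# Sigma = 50 Msun/pc^2 -> Q ~ 1.7: responsive, but not violently unstable.
print(toomre_Q(35.0, 0.036, 50.0))
```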
A second strategy, proposed by Ostriker & Peebles (1973), is to immerse the disc in a dynamically hot bulge/halo. In swing-amplification parlance, this strategy works by increasing the parameter
$$X\equiv \frac{\lambda _y}{\lambda _{\mathrm{crit}}}=\frac{2\pi R}{m}\frac{\kappa ^2}{4\pi ^2G\mathrm{\Sigma }},$$
where the spiral arm multiplicity $`m=2`$ for a bar. The effect of adding a halo can be thought of either as increasing $`\kappa `$ while holding $`\mathrm{\Sigma }`$ fixed, or as reducing $`\mathrm{\Sigma }`$ while holding $`\kappa `$ fixed. Either way, if $`X`$ is increased to the point where it exceeds 3 (for a flat rotation curve), the swing-amplifier is tamed and the global bar instability is suppressed. The disadvantage of this strategy is that the swing-amplifier simply prefers higher values of $`m`$ instead; galaxies should then exhibit mostly multi-arm spiral patterns (Sellwood & Carlberg 1984). While it is hard to quantify the number of spiral arms in a galaxy, the overall impression from the majority of spiral galaxies is of an underlying bi-symmetry, which is inconsistent with the Ostriker-Peebles strategy for global stability. Furthermore, there is no evidence for a difference in halo fraction between barred and unbarred galaxies (Sellwood 1999).
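A companion sketch (again our own, using the same assumed units and illustrative values as above) evaluates $`X`$ for an $`m=2`$ disturbance:

```python
import math

G = 4.301e-3  # pc (km/s)^2 / Msun

def swing_X(R, m, kappa, Sigma):
    """X = (2*pi*R/m) / lambda_crit, with lambda_crit = 4*pi^2*G*Sigma/kappa^2."""
    lambda_crit = 4.0 * math.pi**2 * G * Sigma / kappa**2
    return (2.0 * math.pi * R / m) / lambda_crit

# R = 8000 pc, m = 2, kappa = 0.036 km/s/pc, Sigma = 50 Msun/pc^2
print(swing_X(8000.0, 2, 0.036, 50.0))  # ~3.8; values above 3 tame the m = 2 amplifier
```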
A third, and probably the most promising, strategy was advocated by Toomre (1981). A key aspect of the instability mechanism is that amplified, ingoing, trailing waves are able to reach the centre where they can reflect into outgoing, leading waves. Toomre therefore proposed that if the centre of the galaxy were made inhospitable to density waves, the feedback loop would be cut and the disc would avoid this particularly virulent instability.
### 1.3 Hard or soft centres
The Lin-Shu-Kalnajs (Lin & Shu 1966; Kalnajs 1965) dispersion relation for collisionless particle discs indicates that small-amplitude density waves are able to propagate only between corotation and the Lindblad resonances on either side. The system is unable to sustain waves beyond the Lindblad resonances because particles cannot oscillate at frequencies higher than $`\kappa `$. (Lovelace, Jore & Haynes (1997) point out that there are higher-frequency solutions to the dispersion relation, near higher-order resonances, which are analogous to Bernstein waves in plasmas. These solutions have a severely limited frequency range, lie much further from corotation and are inaccessible to waves propagating on the fundamental branches except through non-linear effects; they therefore seem unlikely to be of importance for our purposes.) In fact, the last few frames of Toomre’s (1981) Figure 8, aptly dubbed “dust-to-ashes,” provide a graphic illustration of the ultimate fate of an amplified wave packet that encounters a Lindblad resonance – it is damped “as a wave on a beach” in the manner predicted by Mark’s (1974) second-order treatment.
The stability of the disc is therefore profoundly influenced by whether the centre is hard or soft, as illustrated in Figure 1. A galaxy with a hard centre, such as the Mestel ($`V=\text{const.}`$) disc shown in the top panels, has a high central density which keeps the rotation speed high close to the centre, so that an inner Lindblad resonance must be present for every reasonable disturbance frequency. For the soft-centred isochrone disc, on the other hand (lower panels), disturbances with angular frequencies in the range $`0.06<\mathrm{\Omega }_p<0.5`$ (in units of $`\sqrt{GM/a^3}`$, with $`M`$ the disc mass and $`a`$ the length scale) do not have inner Lindblad resonances, and are therefore able to reflect off the centre. The dramatically different stability properties of these two models (Kalnajs 1978; Zang 1976; Evans & Read 1998) can therefore be understood.
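The distinction can be made concrete by computing $`\mathrm{\Omega }\kappa /2`$ numerically. In the sketch below (our own), a constant rotation curve stands in for the hard-centred Mestel disc and an arbitrary soft-core curve stands in for a soft-centred disc; the latter is not the isochrone disc itself.

```python
# Omega - kappa/2 rises without bound toward the centre of a hard-centred
# disc but has a finite maximum in a soft-centred one, using
# kappa^2 = R * d(Omega^2)/dR + 4 * Omega^2 (units with V = G = 1).
import numpy as np

R = np.linspace(0.01, 5.0, 500)

def omega_minus_half_kappa(Vc):
    Omega2 = (Vc / R) ** 2
    kappa2 = R * np.gradient(Omega2, R) + 4.0 * Omega2
    return np.sqrt(Omega2) - 0.5 * np.sqrt(kappa2)

hard = omega_minus_half_kappa(np.ones_like(R))          # V = const (Mestel-like)
soft = omega_minus_half_kappa(R / np.sqrt(R**2 + 1.0))  # soft core of unit radius

print(hard[0], hard.max())  # grows toward R = 0: an ILR exists for any Omega_p
print(soft.max())           # finite: no ILR for Omega_p above this maximum
```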
These last-cited global stability analyses are far from straightforward, but they have been confirmed, at least for unstable models, by quiet start $`N`$-body simulations. Earn & Sellwood (1995) were able to construct the mode shapes and determine the eigenfrequencies from the time evolution of their models, obtaining essentially perfect agreement with linear theory. Confirmation that the Mestel disc is linearly stable is proving more difficult, but it seems likely that particle noise, which can never be completely eliminated in a system having a finite number of particles, is once again responsible for the discrepancy (Sellwood & Evans, in preparation).
Sellwood & Moore (1999) have, at last, provided a robust example of an $`N`$-body disc that is stabilised by a hard centre. In their model (shown in Figure 2), a small dense bulge-like mass in the centre is able to prevent an almost fully self-gravitating disc from developing a bar whilst the disc is able to support a sequence of large-amplitude 2- and 3-arm spiral disturbances. (I discuss the possible origin of these spiral patterns in §3.) The rotation curve of their model is not unlike those of many of the Sc galaxies observed by Rubin, Ford & Thonnard (1980) and the absence of bars in real galaxies having steeply rising inner rotation curves can therefore be understood without invoking either a high $`Q`$ or a massive halo.
## 2 Dynamical friction on bars
It was customary, in early work, to assume that dynamically hot spheroidal components, especially dark matter halos, could be modelled as rigid, unresponsive mass distributions. In a test of this assumption (Sellwood 1980), I found that it was inadequate once a bar had formed, but I also suggested that it may not be too bad an approximation for spiral waves. Recent work (Dubinski & Kuijken 1995; Nelson & Tremaine 1995; Binney, Jiang & Dutta 1998) has shown that the dynamics of warps is also profoundly influenced by a responsive halo. These experiences caution that treating a halo as a rigid mass distribution may be inadequate in other contexts also.
Dynamical friction between a bar and a live halo was studied by Tremaine & Weinberg (1984). In a follow-up paper, Weinberg (1985) estimated the frictional force for reasonable parameters and concluded that it could be strong enough to stop a bar from rotating altogether on a time scale of a few initial bar periods!
This rather surprising prediction has only recently been confirmed in fully self-consistent disc-bulge $`N`$-body simulations with adequate spatial resolution (Debattista & Sellwood 1996; Athanassoula 1996). They found that bars which formed in discs embedded in a dense halo (but not so dense as to suppress the bar instability entirely, §1.2) were slowed dramatically by the strong frictional forces predicted by Weinberg.
A convenient dimensionless, and therefore distance-independent, estimate of the angular speed of a bar is the ratio $`D_\mathrm{L}/a_\mathrm{B}`$, where $`a_\mathrm{B}`$ is the semi-major axis of the bar and $`D_\mathrm{L}`$ is the distance from the centre to the major-axis Lagrange point (corotation). Direct estimates of this ratio are $`D_\mathrm{L}/a_\mathrm{B}=1.4\pm 0.3`$ for NGC 936 (Merrifield & Kuijken 1995 and this volume) and a similar value for NGC 4596 (Gerssen 1998). By modelling the gas flow pattern in a 2-D rotating potential derived from near-IR surface photometry, Lindblad et al. (1996) for NGC 1365 and Weiner (1998) for NGC 4123 concluded that this ratio should be about 1.3 in both galaxies. Athanassoula (1992) argued that the morphology of dust lanes in barred galaxies requires $`D_\mathrm{L}/a_\mathrm{B}\approx 1.2`$. Other, still more model-dependent, estimates of bar pattern speeds can be made from the locations of rings (e.g. Buta & Combes 1996 for a review). While the data are meagre, there are no credible estimates which suggest $`D_\mathrm{L}/a_\mathrm{B}\gtrsim 1.5`$ for any galaxy.
These values differ from those found in simulations having moderately dense halos. Debattista & Sellwood (1998) report that $`D_\mathrm{L}/a_\mathrm{B}`$ rose from just greater than unity at about the time the bar formed to significantly more than two by the time dynamical friction against the halo effectively ceased, which occurred in about 20 rotation periods in the inner galaxy. While their result confirms the prediction from perturbation theory, it appears to be quite inconsistent with real galaxies. However, they also found that in models in which the central halo density was much lower, such that the disc contribution to the circular speed at two disc scale lengths was $`\gtrsim 85`$% of the total, friction was reduced to the level at which the bar could continue to rotate with the Lagrange point at a distance $`<1.5a_\mathrm{B}`$. (The conclusion is not dramatically different for anisotropic and rotating halos – Debattista & Sellwood, in preparation.) Debattista & Sellwood (1998) used this result to argue that real dark matter halos must have large, low-density cores – in apparent contradiction with the predictions from cosmological simulations (e.g. Navarro 1998).
## 3 Spiral structure
There have been no major developments in the theory of spiral structure in recent years, yet there is still no consensus that we have reached a basic understanding of the phenomenon. Since passing companions (Toomre 1981 and this conference) do excite a swing-amplified response, and some spiral patterns also appear to be driven by bars, the most insistent problem remains the spiral patterns of unbarred and isolated galaxies.
### 3.1 Long-lived or transient spiral waves?
There is still considerable disagreement over the lifetime of spiral waves; C. C. Lin and his co-workers favour quasi-stationary patterns while Toomre, myself and others prefer to think of spirals as short-lived. Unfortunately, direct observational evidence to determine the lifetimes of spiral patterns is unobtainable.
Bertin et al. (1989) imagine that the equilibrium model (their “basic state”) is a cool disc ($`Q\gtrsim 1`$) with a smooth DF. They seek global instabilities having a low growth rate, and suggest that a “quasi-steady” wave can be maintained when various non-linear effects, such as shock damping, are taken into account. The pattern must evolve slowly due to secular changes. A key ingredient of the instabilities they favour is a “$`Q`$ barrier” in the inner galaxy that shields the waves from the inner Lindblad resonance.
In the alternative picture of recurrent, short-lived spiral patterns, the random motions of the stars rise steadily over time as a direct result of the non-adiabatic potential fluctuations from the spiral patterns themselves (Barbanis & Woltjer 1967; Carlberg & Sellwood 1986; Jenkins & Binney 1990). Some cooling is therefore required to keep the disc responsive ($`Q\lesssim 2`$) and the spiral patterns active (Sellwood & Carlberg 1984; Toomre 1990). All possible cooling mechanisms involve dissipation in the gas component, which therefore accounts immediately for the absence of spiral arms in S0 galaxies, which lack gas. The most efficient cooling mechanism is through infall of fresh gas to the disc, but dissipation in the existing gas, mass loss from old stars, etc. can also be important.
The origin of the fluctuating spiral patterns is less well understood, however. Toomre (1990) and Toomre & Kalnajs (1991) argue that chaotic spiral patterns in galaxies result from the vigorous response of the disc to co-orbiting mass clumps within the disc, such as giant molecular clouds. The spiral patterns therefore change shape and amplitude continuously on a time-scale of less than an orbital period. These authors do not expect strong Lindblad resonances to be present, essentially because the large-amplitude waves do not have well-defined pattern speeds.
Short-lived patterns, with fresh spirals appearing in rapid succession, have been observed in $`N`$-body simulations for several decades (e.g. Lindblad 1960; Hohl 1970; James & Sellwood 1978). Sellwood & Carlberg (1984) showed that such patterns appear to be swing-amplified but from a level that seemed too high to be consistent with the above shot noise interpretation for their finite number of particles – the amplitude seemed independent of $`N`$. Further analysis by Sellwood (1989) showed that the transient spirals resulted from the superposition of a small number of somewhat longer lived waves, which had density maxima near corotation and for which the Lindblad resonances were not shielded. Sellwood & Lin (1989) also showed that the DF did not remain smooth and that resonant scattering by one wave seeded a new instability, at least in their low-mass disc. Some echoes of this idea have been found in higher mass discs with more realistic rotation curves (Sellwood 1991) but the details of exactly how instabilities recur remain obscure.
### 3.2 Could resonant scattering be observed?
Before investing more effort to try to unravel the behaviour of the $`N`$-body simulations, it seemed appropriate to ask whether some observational consequence could be found to indicate whether or not these ideas were on the right track. With the HIPPARCOS mission already underway, I proposed (Sellwood 1994) that the data on the full space motions of Solar neighbourhood stars be examined for evidence of resonant scattering peaks in the local DF.
Stars interacting with a steady non-axisymmetric potential disturbance rotating at angular rate $`\mathrm{\Omega }_p`$ conserve neither their energy nor their angular momentum, but the combination
$$I_\mathrm{J}\equiv E-\mathrm{\Omega }_pL,$$
known as the Jacobi invariant, is conserved. Here, $`E`$ and $`L`$ are the instantaneous energy and angular momentum per unit mass. Thus the changes in these quantities are related as
$$\mathrm{\Delta }E=\mathrm{\Omega }_p\mathrm{\Delta }L.$$
For a steady wave, scattering occurs only at the principal resonances for a pattern (Lynden-Bell & Kalnajs 1972); the resonances are somewhat broadened when the pattern has a finite lifetime. We will be most interested in the change in random energy of a star – the excess energy the star has over one on a circular orbit with the same angular momentum. Since $`dE/dL=\mathrm{\Omega }`$ for circular orbits, we expect no change in random motion at corotation, where $`\mathrm{\Omega }=\mathrm{\Omega }_p`$. Stars losing (gaining) angular momentum at inner (outer) Lindblad resonances gain random energy and move onto more eccentric orbits, as shown in the Lindblad diagram of Figure 3(a).
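Spelling out the step implicit here (our elaboration, using only the relations already quoted): writing $`E_{\mathrm{ran}}=E-E_c(L)`$, where $`E_c(L)`$ is the energy of a circular orbit of angular momentum $`L`$ and $`dE_c/dL=\mathrm{\Omega }`$, the change in random energy produced by a resonance is
$$\mathrm{\Delta }E_{\mathrm{ran}}=\mathrm{\Delta }E-\mathrm{\Omega }\mathrm{\Delta }L=(\mathrm{\Omega }_p-\mathrm{\Omega })\mathrm{\Delta }L,$$
which vanishes at corotation and is positive at both Lindblad resonances, since $`\mathrm{\Delta }L<0`$ where $`\mathrm{\Omega }>\mathrm{\Omega }_p`$ (ILR) and $`\mathrm{\Delta }L>0`$ where $`\mathrm{\Omega }<\mathrm{\Omega }_p`$ (OLR).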
We now put ourselves in the position of an observer who is able to measure both $`E_{\mathrm{ran}}`$ and $`L`$ for many stars in a small region of the galaxy. The distribution of local stars in the space of these two variables might reveal the presence of a scattering peak. In Figure 3(b), $`L_0`$ is the angular momentum of a circular orbit at the position of the observer and stars in the shaded areas would never visit the neighbourhood of the observer. The density of stars in this plot will decrease for higher $`E_{\mathrm{ran}}`$ (since the DF is likely to be a decreasing function of energy) and the asymmetric drift implies there will be an excess of stars with $`L<L_0`$.
The ILR of a spiral wave will scatter stars upwards and to the left in this plot. Since the density of stars is higher for small $`E_{\mathrm{ran}}`$ (nearly circular orbits), we might hope to be able to observe an excess of stars along some trajectory, such as the dashed curve shown; the precise location of this trajectory will depend upon the value of $`\mathrm{\Omega }_p`$. Such scattering peaks are observed in $`N`$-body simulations (Sellwood 1994).
### 3.3 HIPPARCOS stars
Local kinematics of stars in the HIPPARCOS sample have been studied in some detail by Binney & Dehnen (1998). The satellite determined the position on the sky, a parallactic distance and the two components of proper motion transverse to the line of sight. The only one of the six phase space coordinates lacking, therefore, is the radial component of velocity, which is also needed for the above analysis. Dehnen (1998) deduced the missing component in a statistical sense, reasoning that the full, intrinsic distribution of velocities of local stars should be identical over the whole sky. With this assumption, differing viewing directions give us different projections of the same intrinsic velocity distribution, which can be combined to yield the missing information.
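A toy numerical illustration of the underlying principle (ours, not Dehnen’s actual estimator) is that, for sightlines distributed isotropically, the average transverse velocity is $`2/3`$ of the mean space velocity, so proper motions seen from many directions recover information that a single direction cannot:

```python
# For isotropic unit sightlines n, <(I - n n^T)> = (2/3) I, so the mean
# transverse (proper-motion) velocity is 2/3 of the mean space velocity.
import numpy as np

rng = np.random.default_rng(0)
n_stars = 200_000
v_mean = np.array([10.0, 20.0, 7.0])                    # km/s, assumed
v = v_mean + 15.0 * rng.standard_normal((n_stars, 3))   # isotropic dispersion

n_hat = rng.standard_normal((n_stars, 3))
n_hat /= np.linalg.norm(n_hat, axis=1, keepdims=True)   # random sightlines

v_los = np.sum(v * n_hat, axis=1, keepdims=True)        # unobserved component
v_trans = v - v_los * n_hat                             # proper-motion part

print(1.5 * v_trans.mean(axis=0))                       # ~ [10, 20, 7]
```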
With this technique, Dehnen (private communication) has kindly computed $`E_{\mathrm{ran}}`$ and $`L`$ for his samples B4 and GI of the HIPPARCOS catalogue, which include some 14 000 mostly main-sequence stars with a broad range of ages, and prepared the plot shown in Figure 4. The asymmetric drift is clearly visible.
The local DF manifestly is not smooth; the density contours in this plane show significant and coherent distortions from those expected if the velocity distribution were close to Gaussian. Dehnen (1998) interprets the substructure at small $`E_{\mathrm{ran}}`$ as confirmation of the star streams and moving groups, but the structure at high $`E_{\mathrm{ran}}`$ has not been seen before.
There is a clear hint of at least one scattering line, and maybe a second, with the morphology of that expected from a strong ILR. These features, if confirmed when the radial velocities become available, support the picture of recurrent transient spiral patterns and are quite inconsistent with the idea that ILRs are shielded by a $`Q`$-barrier.
### 3.4 Effect of small scale features in DF
Resonant scattering depopulates the DF at small $`E_{\mathrm{ran}}`$ over a narrow range of $`L`$ and moves these stars to higher $`E_{\mathrm{ran}}`$ and smaller $`L`$. Strong density variations over narrow ranges of angular momentum are likely to be destabilising (Lovelace & Hohlfeld 1978). As these modes are excited by phase-space density gradients at corotation, the mechanism could be described as Landau excitation.
The simplest such instability to understand is the “groove mode,” which was described by Sellwood & Kahn (1991) using both $`N`$-body simulations and local theory. They showed that a half-mass Mestel disc with $`Q=1.5`$, which was globally stable when the DF was smooth, became strongly unstable to global 2- and 3-arm spiral modes when they removed particles over a narrow range in $`L`$. They referred to the narrow feature as a “groove,” but owing to random motion in the disc the surface density is imperceptibly reduced over a broad radial range – the feature is narrow only in integral space.
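As a picture of what “narrow only in integral space” means, the sketch below (ours; the notch profile and numbers are arbitrary choices, not those of Sellwood & Kahn) removes particles over a tiny range of $`L`$ from a smooth distribution:

```python
# A smooth distribution of angular momentum with a narrow "groove";
# random motions smear each L over a broad radial range, so the matching
# dip in surface density Sigma(R) is imperceptible.
import numpy as np

L = np.linspace(0.0, 4.0, 2000)
f_smooth = np.exp(-0.5 * (L - 2.0) ** 2)              # stand-in smooth DF
notch = 1.0 - 0.9 / (1.0 + ((L - 2.0) / 0.02) ** 2)   # narrow Lorentzian dip
f_groove = f_smooth * notch

removed = 1.0 - f_groove.sum() / f_smooth.sum()       # fraction of stars removed
print(removed)  # ~2%: tiny overall, yet strongly destabilising
```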
The mechanism for the instability is illustrated in Figure 5. The top panel shows a local patch of the disc, a shearing sheet, with a groove in which the surface density is lower between the dotted lines – the groove width is greatly exaggerated and, for simplicity, the blurring effects of random motions have been ignored. The diagram is drawn in a frame which co-rotates with the centre of the groove. (For definiteness, we will assume the galactic centre is far down the page and that the mean angular momentum of material in the sheet therefore increases up the page.) If wave-like disturbances are present on the edges of the groove as shown, the shaded areas mark regions where a larger density excess is created by the wave. If the two waves on opposite sides of the groove have a phase difference, the density excesses created by each attract the other; the azimuthal components of these force vectors are marked by the heavy arrows. Angular momentum is therefore exchanged between the density excesses, which causes the density maxima to grow if the phase difference has the sign shown in the illustration. To understand why the density excesses grow, focus first on the lower edge; material in the shaded density excess is urged forward by the forces from the density excess on the other edge and therefore gains angular momentum. Increased angular momentum causes the home radius of this material to increase, causing the bulge to grow. Similarly, material in the bulge on the upper edge loses angular momentum, causing it to sink further into the groove. In the absence of the other wave, each edge wave would be neutrally stable, but they aggravate each other through their mutual interaction to make the combined disturbance unstable.
If this were all that occurs, the instability would be mild and inconsequential, but it develops in a background disc that responds enthusiastically to orbiting density inhomogeneities. A possible example of the supporting response is contoured in the lower panel. In our case, the density disturbance is periodic along the groove and the supporting response is therefore also periodic with the same wavelength and extends as far as the Lindblad resonances on either side (shown by the dashed lines). Once again, the response is due to swing-amplification and, as shown by Julian & Toomre (1966), the disturbance in the supporting response is considerably more massive than the co-orbiting mass clump, unless $`Q\gtrsim 2`$. The supporting response therefore converts a mild local disturbance into a large-scale spiral instability. In principle, the groove supports instabilities of many possible wavelengths, but the strongest spirals will be those for which the swing-amplifier is most responsive.
The distribution of stars in the Galaxy is clearly more complicated than a smooth distribution with a single “groove,” but almost any narrow feature in the density of stars as a function of $`L`$ is destabilising (Sellwood & Kahn 1991). These modes therefore seem promising candidates for the generation of spiral patterns in the Milky Way.
## 4 Conclusions
The mechanism for the bar mode, which is the dominant global mode of a smooth disc, and ways in which it can be suppressed are well understood. We now believe that real galaxy discs, which possess most of the mass in the inner parts of spiral galaxies, avoid the bar-forming instability by having a dense centre.
Recent work has indicated that the usual rigid halo approximation is often inadequate and that a responsive halo strongly influences the mechanics of barred and/or warped galaxies. A moderately dense live halo slows a galactic bar through dynamical friction on a very short time-scale; the apparently high pattern speeds of real bars therefore require that the halo have a large core, with a central density little more than the minimum needed to prevent it from being hollow.
The theory of spiral structure seems set for a major step forward now that the HIPPARCOS data indicate that it is probably wrong to assume a smooth DF. Spiral structure is likely to result from local instabilities, caused by small-scale variations in the DF, which give rise to large-scale spiral patterns with the assistance of the swing-amplifier. While the mechanism for linear instabilities of this form is already reasonably clear, exactly how the structure in the DF arose is not. The existence of resonant scattering peaks suggests that resonant scattering is at least one of the processes that sculpt the DF, but it is unclear whether it is the only, or even the dominant, source of local inhomogeneities in the DF. The HIPPARCOS data have provided a much-needed pointer to the way forward in this erstwhile stalled area of research and suddenly there is plenty to do!
###### Acknowledgements.
The author wishes to thank the director of the Isaac Newton Institute, Keith Moffat, for generous hospitality. This work was also supported by NSF grant AST 96/17088 and NASA LTSA grant NAG 5-6037.