# Spin quantization axis dependent magnetic properties and x-ray magnetic circular dichroism of FePt and CoPt

## I Introduction

The disordered equiatomic binary alloys of the XY (X = Fe, Co – Y = Pd, Pt) type crystallize in the fcc structure and the magnetization is along the [111] axis. At low temperatures these alloys tend to order in the $`L1_0`$ layered-ordered structure, and in this case the spontaneous magnetization tends to align perpendicular to the layer stacking, which explains the behavior of CoPt films during magnetic annealing. The strong perpendicular magnetic anisotropy (PMA) is due to the highly anisotropic $`L1_0`$ structure, and it makes these alloys very attractive for magnetic recording devices. The $`L1_0`$ structure can also be obtained by molecular beam epitaxy (MBE) of alternating layers of pure X and Y atoms, thanks to substrate-induced constraints. The first observation of $`L1_0`$ long-range order for a CoPt film grown by MBE was made by Harp et al. in 1993, and for a FePt film by Cebollada et al. Lately, other techniques have also been employed to grow films presenting PMA: CoPt films were grown by sputtering by Visokay et al. and by evaporation by Lin and Gorman, and FePt films were grown by various sputtering techniques. The magneto-crystalline anisotropy energy (MCA) can be probed by many techniques, such as torque or ferromagnetic resonance measurements. Both of these methods describe the MCA in terms of phenomenological anisotropy constants. It has been demonstrated by Weller et al. that x-ray magnetic circular dichroism (XMCD) is also a suitable technique for probing the MCA, via the determination of the anisotropy of the orbital magnetic moment of a specific shell and site. X-ray absorption spectroscopy (XAS) using polarized radiation probes element-specific magnetic properties of alloys by applying the XMCD sum rules to the experimental spectra.
However, for itinerant systems, and in particular for low-symmetry systems, the use of the sum rules is debated because they are derived from an atomic theory. Lately, angle-dependent XMCD experiments have been used to provide a deeper understanding of the relation between the MCA and the orbital magnetic moments. The x-ray absorption of CoPt multilayers has already been studied experimentally by Nakajima et al., Koide et al., Rüegg et al., and Schütz et al. Nakajima et al. revealed a strong enhancement of the cobalt orbital moment when PMA was present, Koide et al. showed that with decreasing cobalt thickness the easy axis rotates from in-plane to out-of-plane, and Rüegg et al. showed that the platinum polarization also increases with decreasing cobalt thickness. Hlil et al. showed by x-ray absorption spectroscopy that modifications of the platinum edges in different compounds are correlated with the change in the number of holes. Several ab-initio calculations have already been performed to investigate the XMCD. The $`L_2`$\- and $`L_3`$-edges, involving electronic excitations of 2$`p`$-core electrons towards $`d`$-valence states, have attracted much attention, primarily due to the dependence of the dichroic spectra on the exchange splitting and the spin-orbit coupling of both the initial core and the final valence states. For 5$`d`$ elements dissolved in 3$`d`$ transition metals, the spin-orbit coupling of the initial 2$`p`$-core states is large and the resulting magnetic moment is small, while the opposite is true for the 3$`d`$ elements. This can lead to pronounced dichroic spectra, as seen by Schütz in the case of 5$`d`$ elements dissolved in iron. In this work we study the correlation between the quantization axis dependent XMCD and the magnetic properties of both ordered alloys FePt and CoPt.
Our method is based on an all-electron relativistic and spin-polarized full-potential linear muffin-tin orbital (FP-LMTO) method, in conjunction with both the von Barth and Hedin parameterization of the local spin density approximation (LSDA) and the generalized gradient approximation (GGA) to the exchange-correlation potential. The implementation of the calculation of the XMCD spectra has been presented in a previous work. In section 2 we present the details of the calculations, while in sections 3 and 4 we discuss our MCA and the magnetic spin and orbital moments, respectively. In section 5 we present our calculated XMCD as a function of the spin quantization axis, and in section 6 we discuss the interpretation of the MCA using the band structure and the total density of states anisotropy.

## II Computational Details

To compute the electronic properties of CoPt and FePt we used the experimental lattice constants ($`a`$=3.806 Å and $`c`$/$`a`$=0.968 for CoPt; $`a`$=3.861 Å and $`c`$/$`a`$=0.981 for FePt) and a unit cell containing one atom of cobalt (iron) and one of platinum. The $`L1_0`$ structure can be seen as a system of alternating cobalt (iron) and platinum layers along the [001] direction. The MCA and the XMCD are computed with respect to the angle $`\gamma `$ between the [001] axis and the spin quantization axis in the (010) plane, so $`\gamma `$=0<sup>o</sup> corresponds to the [001] axis and $`\gamma `$=90<sup>o</sup> to the [100] axis. The MCA can be computed directly using ab-initio methods; it is defined as the difference between the total energies for two different spin quantization axes. The spin-orbit coupling contribution to the MCA is implicitly included in our ab-initio calculations, and we do not take into account the many-body interactions of the spin magnetic moments since their contribution to the MCA is negligible.
The number of k-points needed to perform the Brillouin zone (BZ) integration depends strongly on the interplay between the contribution to the MCA from the Fermi surface and the remaining band structure contribution to the total energy. When the former contribution to the MCA is important, a large number of k-points is needed to describe the Fermi surface accurately. For the two systems studied we found that 6750 k-points in the BZ are enough to converge the MCA to within 0.1 meV. To perform the integrals over the BZ we use a Gaussian broadening method, which convolutes each discrete eigenvalue with a Gaussian function of width 0.1 eV. This method is known to lead to a fast and stable convergence of the spin and charge densities compared to the standard tetrahedron method. To develop the potential inside the MT spheres we used a basis set of lattice harmonics including functions up to $`\mathrm{}=8`$, while for the FFT we used a real-space grid of 16$`\times `$16$`\times `$20. We used a double set of basis functions, one set to describe the valence states and one for the unoccupied states. For the valence electrons we used a basis set containing 3$`\times s`$, 3$`\times p`$ and 2$`\times d`$ wave functions, and for the unoccupied states 2$`\times s`$, 2$`\times p`$ and 2$`\times d`$ wave functions.

## III Magneto-crystalline Anisotropy

CoPt and FePt films are known to present a strong uniaxial MCA because of the highly anisotropic $`L1_0`$ inter-metallic phase. Experimentally the magnetization axis is found to be along the [001] direction. In a first step we performed calculations with 250 k-points in the BZ. This number of k-points is large enough to produce an accurate total energy when Gaussian smearing is used for the integration in the Brillouin zone, but not accurate enough to compute the MCA. In figure 1 we present calculations with respect to the angle $`\gamma `$ between the [001] axis and the spin quantization axis in the (010) plane for the CoPt compound within the LSDA.
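The Gaussian broadening scheme described above can be illustrated with a minimal numerical sketch: each discrete eigenvalue is replaced by a normalized Gaussian of the quoted 0.1 eV width, and the contributions are summed on an energy grid. The eigenvalues below are hypothetical, chosen only to show the mechanics.

```python
import math

def broadened_dos(energies, e_grid, sigma=0.1):
    """Density of states from discrete eigenvalues, each convoluted
    with a normalized Gaussian of width sigma (eV), as in the
    Gaussian-smearing BZ integration described in the text."""
    norm = 1.0 / (sigma * math.sqrt(2.0 * math.pi))
    return [sum(norm * math.exp(-0.5 * ((e - ei) / sigma) ** 2)
                for ei in energies)
            for e in e_grid]

# Hypothetical eigenvalues (eV) collected from a few k-points:
eigs = [-0.3, -0.1, 0.0, 0.2]
grid = [i * 0.01 - 0.5 for i in range(101)]   # -0.5 .. 0.5 eV
dos = broadened_dos(eigs, grid)
```

Integrating the broadened DOS over the grid recovers (up to small Gaussian tails outside the window) the number of states, which is why the method converges the charge density smoothly.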
We observe that the total energy increases with the angle $`\gamma `$, so the ground state corresponds to $`\gamma `$=0<sup>o</sup>, i.e. the [001] axis. The same behavior occurs for the FePt compound. Because 250 k-points are not enough to produce an accurate value for the MCA, we present in figure 2 the convergence of the MCA with the number of k-points for both compounds within the LSDA. The MCA is the difference in the total energy between the in-plane [100] axis and the easy [001] axis. We have found that the total energy difference between the [100] and the [110] directions is negligible compared to the difference between the in-plane [100] axis and the [001] axis; this confirms the assumption that the system is isotropic within the plane. Our values are converged at 6750 k-points and are given per unit cell (one atom of X and one of platinum). The MCA converges to 2.2 meV for CoPt and 3.9 meV for FePt. The GGA MCA calculations converged to 1.9 meV and 4.1 meV for CoPt and FePt, respectively, in good agreement with the LSDA results. Daalderop et al. performed calculations of the MCA in CoPt and FePt by means of an LMTO method in the atomic sphere approximation (ASA) within the LSDA using the force theorem. The easy magnetization axis found by this latter calculation is the [001] axis for both systems, in agreement with our LSDA and GGA results. Their MCA value is 2 meV for CoPt and 3.5 meV for FePt. Solovyev et al. used the same structure within a real-space Green's function technique and also found that the magnetization is along the [001] axis. Including the spin-orbit interaction for all the atoms, they found a CoPt MCA value of 2.3 meV and a FePt value of 3.4 meV. Our computed values also agree with the values of 1.5 meV for CoPt and 2.8 meV for FePt obtained by Sakuma using an LMTO method in the atomic sphere approximation (ASA) in conjunction with the force theorem.
The drawback of the LMTO-ASA method is that it accounts only for the spherical part of the potential and ignores the interstitial region. Furthermore, the force theorem does not account directly for the exchange-correlation contribution to the MCA. Finally, Oppeneer used the augmented spherical waves (ASW) method in the atomic sphere approximation and found MCA values of 2.8 meV for FePt and 1.0 meV for CoPt, which are the smallest among all the ab-initio calculations. Grange et al., using a torque measurement on an MBE-deposited CoPt film on a MgO(001) substrate, obtained an MAE of 1.0 meV. An early measurement on a single crystal of CoPt by Eurin and Pauleve produced a value of 1.3 meV. The large value of Eurin and Pauleve is due to the fact that their sample was completely ordered. For FePt the first experiment, by Ivanov et al., produced an anisotropy value of 1.2 meV and showed that for a thin film the shape anisotropy should be one order of magnitude smaller than the MCA. Farrow et al. and Thiele et al. found for MBE-deposited FePt films on a MgO(001) substrate an anisotropy value of 1.8 meV. These films were highly ordered (more than 95% of the atoms were on the correct site). All experiments have been carried out at room temperature, which explains to some extent the difference between the calculated and experimental MCA values (the MCA decreases with temperature). It is worth mentioning that the experimental MCA for FePt is much larger than that of CoPt, in agreement with our calculations. For thick films the volume shape anisotropy (VSA) also contributes to the MAE, and it always favors an in-plane magnetization axis. We can estimate the VSA as $`2\pi M_V^2`$ in c.g.s. units, where $`M_V`$ is the mean magnetization density, and obtain a value of -0.1 meV for FePt and -0.06 meV for CoPt. These values are one order of magnitude smaller than the MCA values, in agreement with the speculation of Ivanov et al.
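The $`2\pi M_V^2`$ estimate can be reproduced with a few lines. This is a sketch under stated assumptions: the moment per formula unit is approximated by the sum of the LSDA spin moments quoted in section IV (orbital contributions neglected), and the volume per formula unit is taken as $`a^2c/2`$ of the fcc-like cell with the lattice constants given in section II.

```python
import math

MU_B = 9.274e-21        # Bohr magneton in emu (erg/G)
ERG_PER_MEV = 1.602e-15  # 1 meV in erg

def vsa_mev(moment_mu_b, a_ang, c_over_a):
    """Volume shape anisotropy 2*pi*M^2 per formula unit, in meV.
    One X + one Pt atom occupy a volume a^2*c/2 of the fcc-like
    cell with lattice constant a (Angstrom)."""
    vol_cm3 = a_ang ** 2 * (c_over_a * a_ang) / 2.0 * 1e-24
    m_density = moment_mu_b * MU_B / vol_cm3          # emu/cm^3
    energy_density = 2.0 * math.pi * m_density ** 2   # erg/cm^3
    return energy_density * vol_cm3 / ERG_PER_MEV

# LSDA spin moments per formula unit, from the text:
e_fept = vsa_mev(2.87 + 0.33, 3.861, 0.981)   # roughly 0.12 meV
e_copt = vsa_mev(1.79 + 0.36, 3.806, 0.968)   # roughly 0.06 meV
```

Both numbers come out at the 0.1 meV scale quoted in the text, an order of magnitude below the MCA, consistent with the speculation of Ivanov et al.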
## IV Magnetic moments

### A Density of States

In figure 3 we present the cobalt-projected partial density of states for three spin quantization axes corresponding to the angles $`\gamma `$= 0<sup>o</sup>, 45<sup>o</sup> and 90<sup>o</sup>, calculated with 6750 k-points. The 3$`d`$ states dominate the electronic structure of cobalt. The spin-up band is practically fully occupied, while the spin-down band is almost half-occupied. The general form of the cobalt DOS, as well as of the iron DOS, does not change appreciably with the angle $`\gamma `$. The platinum-projected density of states shows similar behavior for both the FePt and CoPt compounds. Bulk platinum is paramagnetic, and the small changes in the DOS come from the polarization of the 5$`d`$ electrons via hybridization with the 3$`d`$ electrons of cobalt (iron). It is worth mentioning that the DOS is calculated inside each muffin-tin sphere and the interstitial region is not taken into account. To minimize the contribution of the interstitial region we use almost touching muffin-tin spheres. In addition, we have found that the remaining region has a negligible spin polarization.

### B Spin Magnetic Moments

The spin magnetic moments are isotropic with respect to the spin quantization axis, as expected. They were calculated by attributing all the charge inside each muffin-tin sphere to the atom located in that sphere. As outlined above, we have found that the interstitial contribution to the spin magnetic moment is one order of magnitude smaller than that of the platinum site. In CoPt we found a cobalt spin magnetic moment of 1.79 $`\mu _B`$ and a platinum moment of 0.36 $`\mu _B`$ within the LSDA. The GGA cobalt spin magnetic moment of 1.83 $`\mu _B`$ is slightly larger than the LSDA value. This is due to a more atomic-like description of the atoms in a solid within the GGA compared to the LSDA.
Although the GGA underestimates the hybridization between the cobalt and platinum $`d`$ valence electrons compared to the LSDA, the larger cobalt spin moment leads to a slightly larger GGA platinum spin moment of 0.37 $`\mu _B`$. Our calculated values are in good agreement with the experimental values of Grange et al. (1.75 $`\mu _B`$ for cobalt and 0.35 $`\mu _B`$ for platinum). Previous experiments by van Laar on a powder sample gave a value of 1.7 $`\mu _B`$ for the cobalt atom and 0.25 $`\mu _B`$ for the platinum atom. The spin magnetic moments have been previously calculated by Solovyev et al. (1.72 for cobalt and 0.37 for platinum), by Daalderop et al. (1.86 for cobalt), by Sakuma (1.91 for cobalt and 0.38 for platinum), and finally by Kootte et al. by means of a localized spherical wave method (1.69 for cobalt and 0.37 for platinum). All these previous calculations are in good agreement with our full-potential results. As expected, the iron spin moments are much larger than the cobalt ones. The LSDA produced a value of 2.87 $`\mu _B`$, while the GGA produced a slightly larger value, 2.96 $`\mu _B`$. The hybridization between the iron 3$`d`$ states and the platinum 5$`d`$ states is less intense than in the case of CoPt, resulting in a smaller platinum moment for FePt. The LSDA platinum spin moment is 0.33 $`\mu _B`$ (compared to 0.36 $`\mu _B`$ in CoPt) and the GGA platinum spin moment is 0.34 $`\mu _B`$ (compared to 0.37 $`\mu _B`$ in CoPt). The spin magnetic moments have been previously calculated by Solovyev et al. (2.77 for iron and 0.35 for platinum), by Daalderop et al. (2.91 for iron), by Sakuma (2.93 for iron and 0.33 for platinum), and finally by Osterloch et al. (2.92 for iron and 0.38 for platinum). Here again all previous calculations are in good agreement with our full-potential results. All the methods produced a smaller induced spin moment for the platinum atom in FePt than in CoPt, verifying that the hybridization effect in CoPt is much stronger than in FePt.
### C Orbital Magnetic Moments

Contrary to the spin moments, the orbital moments are anisotropic. Figure 4 presents the behavior of the orbital moments as a function of the angle $`\gamma `$ between the [001] direction and the spin quantization axis within the LSDA for both the CoPt and FePt compounds (the lines are guides to the eye). The orbital moments decrease with the angle $`\gamma `$, but the values for the [100] axis do not follow this general trend. All four lines seem to have the same behavior, but it is interesting to notice that for CoPt with the magnetization along the [100] axis the cobalt orbital moment is smaller than the platinum one (for the values see Tables I and II). The cobalt moments decrease faster than the platinum moments in CoPt. For the iron and platinum atoms in FePt the two lines are practically parallel. The platinum orbital moments in FePt are smaller than those in CoPt by a factor that varies from 73% for $`\gamma `$=0<sup>o</sup> to 67% for $`\gamma `$=90<sup>o</sup>. The ratio of the iron and platinum orbital moments in FePt varies from 1.56 for $`\gamma `$=0<sup>o</sup> down to 1.35 for $`\gamma `$=90<sup>o</sup>, which is considerably smaller than the ratio of the cobalt and platinum orbital moments in CoPt. We notice here that in our calculations we can estimate only the projection of the total orbital moment on the spin quantization axis, and we have no information concerning the real value of the total orbital moment or its direction in space. It seems that for the magnetization along the [100] direction, the direction of the orbital magnetic moment undergoes a discontinuous jump, resulting in a large projection on the spin quantization axis in the vicinity of $`\gamma `$=90<sup>o</sup>. This behavior is also reproduced by the GGA. In Table I we present the values of the orbital moments of iron and cobalt within both the LSDA and the GGA as a function of the angle $`\gamma `$.
The GGA values are smaller than the LSDA ones but follow exactly the same trends. The orbital moment anisotropy is more pronounced in the case of cobalt. The LSDA cobalt orbital moment changes by 0.048 $`\mu _B`$ and the GGA moment by 0.027 $`\mu _B`$ as we pass from the easy axis [001] to the hard axis [100]. The LSDA iron moment changes by only 0.002 $`\mu _B`$, and the GGA moment is the same for the two high-symmetry directions. In the case of cobalt, the difference between the values calculated within the two functionals, LSDA and GGA, becomes smaller as the angle increases, and at 75<sup>o</sup> it changes sign. In the case of iron this difference decreases only slightly with the angle, but since the difference is considerably smaller than in CoPt (less than 0.004), we conclude that the orbital moments in the LSDA and the GGA are roughly the same. In Table II we present the values of the platinum orbital moment within both functionals. We see that the GGA produces larger moments than the LSDA for platinum in FePt, contrary to CoPt. The platinum moments are in general smaller than the moments of the 3$`d`$ ferromagnets, and the difference between the values calculated within the LSDA and GGA is small. The absolute values for platinum are comparable to the cobalt (iron) orbital moments, even though the spin moments on platinum are one order of magnitude smaller than for cobalt (iron). The large orbital moments for platinum are due to the much larger spin-orbit coupling of the $`d`$ electrons of platinum compared to the 3$`d`$ ferromagnets. The orbital moments of FePt and CoPt have been previously calculated by Daalderop and collaborators and by Solovyev and collaborators for the [001] direction using the LSDA. The orbital moment of the cobalt site was found to be 0.12 $`\mu _B`$ by Daalderop and 0.09 $`\mu _B`$ by Solovyev. The value of Daalderop is closer to our LSDA value of 0.11 $`\mu _B`$.
For the iron site, Daalderop found a value of 0.08 $`\mu _B`$ and Solovyev 0.07 $`\mu _B`$, in good agreement with our LSDA value. The platinum orbital moment has been calculated by Solovyev. He found a value of 0.06 $`\mu _B`$ for platinum in CoPt and 0.044 $`\mu _B`$ for platinum in FePt, close to our values of 0.06 $`\mu _B`$ and 0.05 $`\mu _B`$, respectively. On the other hand, experimental data are available for CoPt by Grange, obtained by applying the sum rules to the experimental XMCD spectra. The sum rules give the moments per hole in the $`d`$-band. To compare experiment with theory we calculated the number of $`d`$-holes by integrating the $`d`$-projected density of states inside each muffin-tin sphere. We found 2.63 and 2.48 $`d`$-holes for cobalt and platinum, respectively. The experimental cobalt orbital moment varies from 0.26 $`\mu _B`$ for $`\gamma `$=10<sup>o</sup> down to 0.11 $`\mu _B`$ for $`\gamma `$=60<sup>o</sup>. Measured values are also available for two other angles: 0.24 $`\mu _B`$ for $`\gamma `$=30<sup>o</sup> and 0.17 $`\mu _B`$ for $`\gamma `$=45<sup>o</sup>. Our theory reproduces the experimental trends qualitatively but underestimates the absolute values by more than 50% (see Tables I and II for all the values). The calculated values show a less sharp decrease with the angle than the experimental ones. For platinum the experimental data are available only for two angles, 10<sup>o</sup> and 60<sup>o</sup>. For $`\gamma `$=10<sup>o</sup> the orbital moment is 0.09 $`\mu _B`$ and for $`\gamma `$=60<sup>o</sup> it is 0.06 $`\mu _B`$. For the platinum site the calculated values are in much better agreement than for the cobalt site, but we must keep in mind that the sum rules have their origin in an atomic theory and their use for itinerant 5$`d`$ electrons like those of platinum is still debated. The discrepancy between theory and experiment comes mainly from the approximation to the exchange and correlation.
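The arithmetic of the orbital sum rule used in this comparison can be sketched in a few lines. The formula below is the standard atomic-theory result (orbital moment proportional to the integrated dichroism over both edges, divided by the integrated isotropic absorption, times the number of $`d`$-holes); the integrated intensities are hypothetical numbers chosen only to illustrate the procedure, not data from the text.

```python
def orbital_moment(delta_l3, delta_l2, total_abs, n_holes):
    """Orbital sum rule for the L2,3 edges:
    m_orb = -(4/3) * [integrated XMCD over L3+L2] /
            [integrated isotropic absorption over L3+L2] * n_d_holes,
    in units of mu_B."""
    return -4.0 / 3.0 * (delta_l3 + delta_l2) / total_abs * n_holes

# Hypothetical integrated intensities (arbitrary units), combined with
# the 2.63 d-holes quoted for cobalt in the text:
m_l = orbital_moment(delta_l3=-0.9, delta_l2=0.6, total_abs=10.5,
                     n_holes=2.63)
```

Note how the result depends on the near-cancellation of the two edge integrals: a small error in the $`L_2`$ intensity (the quantity the text says theory overestimates) shifts the extracted moment strongly, which is one reason the sum-rule comparison for itinerant systems is delicate.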
Both the LSDA and the GGA approximations to density functional theory are known to underestimate the orbital moment values, because the orbital moment is a property directly associated with the current in the solid, and a static picture is not sufficient. But until now a DFT formalism like the current and spin density functional theory (CS-DFT), which can treat the Kohn-Sham and the Maxwell equations on the same footing, has been too heavy to implement in a full-potential ab-initio method. The other problem is that no form is known for the exchange energy of a homogeneous electron gas in a magnetic field, and this is the main quantity entering the CS-DFT formalism. Brooks has also developed an ad hoc correction to the Hamiltonian to account for the orbital polarization, but this correction originates from an atomic theory and its application to itinerant systems is not satisfactory.

## V XMCD

XMCD spectroscopy became popular after the development of the sum rules, which enable the extraction of reliable information on the micro-magnetism directly from the experimental spectra. The great advantage of XMCD is that we can probe each atom and orbital in the system and so obtain information on the local magnetic properties. Lately, angle-dependent XMCD experiments have allowed the determination of magnetic properties for different spin quantization axes. All experimental spectra for CoPt have been obtained by Grange et al., and all calculated spectra presented in this section were obtained using the LSDA. The GGA produced the same results, which are therefore not presented. Figure 5 presents the XMCD spectra of the cobalt and iron atoms for the [001] and [100] magnetization axes. We convoluted our theoretical spectra using a Lorentzian width of 0.9 eV and a Gaussian width of 0.4 eV, as proposed by Ebert in the case of iron, to account for the core-hole effect and the experimental resolution, respectively.
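This broadening step can be sketched numerically. A raw spectrum (here a single delta-like peak, a stand-in for an unbroadened absorption edge) is convoluted first with a Lorentzian of 0.9 eV FWHM for the core-hole lifetime and then with a Gaussian of 0.4 eV FWHM for the experimental resolution; the grid and peak are illustrative choices, only the two widths come from the text.

```python
import math

def convolve(spectrum, kernel, de):
    """Discrete convolution of a sampled spectrum with a normalized
    kernel, both on an energy grid with spacing de (eV)."""
    n, m = len(spectrum), len(kernel)
    out = [0.0] * n
    for i in range(n):
        for j in range(m):
            k = i - (j - m // 2)
            if 0 <= k < n:
                out[i] += spectrum[k] * kernel[j] * de
    return out

def lorentzian(x, w):
    # w is the full width at half maximum
    return (w / (2.0 * math.pi)) / (x ** 2 + (w / 2.0) ** 2)

def gaussian(x, w):
    # w is the full width at half maximum
    s = w / (2.0 * math.sqrt(2.0 * math.log(2.0)))
    return math.exp(-0.5 * (x / s) ** 2) / (s * math.sqrt(2.0 * math.pi))

de = 0.05
grid = [(-200 + i) * de for i in range(401)]                 # -10 .. 10 eV
raw = [1.0 / de if abs(e) < de / 2 else 0.0 for e in grid]   # delta peak
lor = [lorentzian(e, 0.9) for e in grid]
gau = [gaussian(e, 0.4) for e in grid]
broadened = convolve(convolve(raw, lor, de), gau, de)
```

Applying the Lorentzian and the Gaussian in sequence is equivalent to convoluting once with their Voigt-profile combination; the spectral weight (the integral) is conserved up to the Lorentzian tails leaving the energy window.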
The energy difference between the $`L_3`$ and $`L_2`$ peaks is given by the spin-orbit splitting of the $`p_{\frac{1}{2}}`$ and $`p_{\frac{3}{2}}`$ core states. It is larger in the case of cobalt, 14.8 eV, than for iron, 12.5 eV. The intensities of the peaks are comparable for both atoms, but the cobalt peak intensities are larger for the [001] axis, contrary to the iron spectrum. The most important feature of these spectra is the integrated $`L_3`$/$`L_2`$ branching ratio, as it enters the sum rules. It is larger in the case of the cobalt site. This is expected, since the sum rules predict that a larger $`L_3`$/$`L_2`$ ratio is equivalent to a larger orbital moment. For both atoms the integrated $`L_3`$/$`L_2`$ branching ratio is larger for the [001] axis. But the ratio anisotropy is larger for the cobalt site (1.32 for the magnetization along [001] and 1.17 along [100]) than for the iron site (1.15 along [001] and 1.14 along [100]), reflecting the larger orbital moment anisotropy of cobalt compared to iron (see Table I). In particular, for iron both the orbital moment and the integrated $`L_3`$/$`L_2`$ branching ratio are practically the same for both magnetization axes. These changes in the XMCD signal are related to the change in the orbital magnetic moment between the two magnetization directions. The XMCD anisotropy should therefore be roughly proportional to the underlying MCA, but no direct relation exists that connects these two anisotropies; they are connected indirectly through the orbital moment anisotropy. In figure 6 we have plotted the platinum XMCD spectra for the two angles $`\gamma `$=10<sup>o</sup> and 60<sup>o</sup> for both compounds. The lifetime of the core hole in platinum is shorter than for cobalt, so the broadening used to account for it should be larger. We used both a Lorentzian (1 eV) and a Gaussian (1 eV) to represent this lifetime, and a Gaussian of 1 eV width for the experimental resolution.
As in the case of cobalt and iron, the platinum XMCD spectra change with the angle and depend on the surrounding neighbors. The peak intensities are larger in CoPt. Also, the difference in intensity due to the anisotropy has a different sign in the two compounds: the intensities of the FePt spectra for $`\gamma `$=10<sup>o</sup> are much larger than for $`\gamma `$=60<sup>o</sup>, contrary to the CoPt behavior. As is the case for cobalt in CoPt, the integrated $`L_3`$/$`L_2`$ branching ratio of the platinum site shows a larger anisotropy in CoPt than in FePt. In CoPt it is 1.49 for $`\gamma `$=10<sup>o</sup> and 1.19 for $`\gamma `$=60<sup>o</sup>, while in FePt it is 1.20 for $`\gamma `$=10<sup>o</sup> and 1.14 for $`\gamma `$=60<sup>o</sup>. Here again the XMCD follows the anisotropy of the orbital magnetic moment in these compounds. The energy difference between the two peaks is 1727 eV for both compounds. The 2$`p`$ electrons of platinum are deep in energy and are little influenced by the local environment, so their spin-orbit splitting does not depend on the neighboring atoms of platinum. We expect a better agreement between the theoretical and experimental XMCD spectra for the platinum site than for the cobalt site, because the core hole is deeper and affects the final states of the photo-excited electron less. In figure 7 we have plotted the absorption and the XMCD spectra of cobalt for $`\gamma `$=0<sup>o</sup>. We have scaled our spectra so that the experimental and theoretical $`L_3`$ peaks in the absorption spectra have the same intensity. The energy difference between the $`L_3`$ and $`L_2`$ peaks is in good agreement with experiment, but the intensity of the $`L_2`$ peak is larger than that of the corresponding experimental peak. The high intensity of the calculated $`L_2`$ edge makes the theoretical XMCD integrated $`L_3`$/$`L_2`$ branching ratio of 1.32 much smaller than the experimental ratio of 1.72.
This is because the LSDA fails to represent the physics of the recombination of the core hole with the photo-excited electron. In the case of the 3$`d`$ ferromagnets the core hole is shallow and influences the final states seen by the photo-excited electron. A formalism that can treat this electron-hole interaction has been proposed by Schwitalla and Ebert, but it failed to improve the $`L_3`$/$`L_2`$ branching ratio of the XAS of the late transition metals. Benedict and Shirley have also developed a scheme to treat this phenomenon, but its application is limited to crystalline insulators. In figure 8 we have plotted the experimental and theoretical total absorption for the platinum atom in CoPt. For both the $`L_2`$ and $`L_3`$ edges the theory gives a sharp peak which does not exist in experiment. As expected, the $`L_3`$ peak is much more intense than the $`L_2`$. In contrast to what is obtained for cobalt, the results for the platinum XMCD (see figure 9) show better agreement with experiment, due to the fact that the core-hole effect is less intense (the core hole is much deeper than for cobalt). The experimental and theoretical $`L_2`$ and $`L_3`$ edges are separated by a spin-orbit splitting of the 2$`p`$ core states of 1709 and 1727 eV, respectively. The width of both the $`L_2`$ and $`L_3`$ edges is comparable to experiment, but the calculated $`L_2`$ edge is much larger. This produces a calculated integrated branching ratio of 1.49, which is much smaller than the experimental ratio of 2.66. Here again the theory underestimates the branching ratio.

## VI Band Structure and Density of States Anisotropy

In figure 10 we present the band structure along two high-symmetry directions in reciprocal space for different angles $`\gamma `$ for the CoPt compound within the LSDA. We know that it is essentially the area around the Fermi level that changes with the spin quantization axis. For this reason we have enlarged a region of $`\pm `$1 eV around the Fermi level.
In the first panel we plot the relativistic band structure for $`\gamma `$=0<sup>o</sup>, and in the second and third panels for $`\gamma `$=45<sup>o</sup> and $`\gamma `$=90<sup>o</sup>. We remark that as the angle increases there are bands that approach the Fermi level and cross it. However, this information concerns just two high-symmetry directions. For this reason we limited ourselves to the changes in the total DOS. In figure 11 we notice that just below the Fermi level the DOS for the hard axis is lower than for the easy axis, which would seem to favor the hard axis. This means that the anisotropy does not originate from the changes in the vicinity of the Fermi level but from those of the whole DOS. It is difficult to investigate this phenomenon by inspection of the changes in the vicinity of the Fermi surface and so to explain the sign of the MCA. Our results confirm the work of Daalderop and collaborators, who argued that not only states in the vicinity of the Fermi surface contribute to the MAE, as originally thought, but that states far away make an equally important contribution.

## VII Conclusion

We have performed a theoretical ab-initio study of the magnetic properties of the ordered CoPt and FePt fct alloy systems. The calculated easy axis is the [001] axis for both compounds, in agreement with other calculations and with experiments on films, which found a strong perpendicular magnetization axis. The density of states is found to change very little with the direction of the spin quantization axis, and hence the spin magnetic moments are isotropic with respect to the magnetization axis. Contrary to the spin moments, the orbital magnetic moments decrease with the angle $`\gamma `$ up to 75<sup>o</sup> ($`\gamma `$ is the angle between the spin quantization axis and the [001] axis). The calculated x-ray magnetic circular dichroism (XMCD) for all the atoms reflects the behavior of the orbital moments.
In particular, the platinum-resolved spectra present large differences between the two compounds. The cobalt XMCD spectra are in agreement with experiment, but as usual the $`L_3`$/$`L_2`$ ratio is underestimated by the theory. The platinum site shows better agreement with experiment because the core hole is much deeper than in the case of cobalt. Finally, we showed that for both CoPt and FePt all the occupied electronic states contribute to the magneto-crystalline anisotropy, and not just the states near the Fermi level.

## Acknowledgments

We thank J. M. Wills for providing us with his FPLMTO code. I.G. is supported by a European Union grant N<sup>o</sup> ERBFMXCT96-0089. The calculations were performed using the SGI Origin-2000 supercomputer of the Université Louis Pasteur de Strasbourg and the IBM SP2 supercomputer of the CINES under grant gem1917.
# The masses of the mesons and baryons. Part I. The integer multiple rule

## 1 The spectrum of the masses of the particles

The masses of the elementary particles are the best-known and most characteristic property of the particles. It seems to be important for the theoretical explanation of the masses of the particles to find a simple relationship between the different masses, just as the formula for the Balmer series was important for the explanation of the spectrum of hydrogen. We will limit attention here to the mesons and baryons, all of which are unstable except for the proton. However, the lifetime of the mesons and baryons is so long compared to the period of the basic frequency $`\nu =mc^2/h`$ that the mesons and baryons have often been categorized as “stable” or “elementary” particles. The masses of the so-called “stable” mesons and baryons are given in the “Particle Physics Summary”, and are reproduced with other data in Tables I and II. It is obvious that any attempt to explain the masses of the elementary particles should begin with the particles that are affected by the fewest parameters. These are certainly the particles without isospin ($`I=0`$) and without spin ($`J=0`$), but also with strangeness $`S=0`$ and charm $`C=0`$. Looking at the particles with $`I,J,S,C=0`$ it is startling to find that their masses are quite close to integer multiples of the mass of the $`\pi ^0`$ meson. It is $`m(\eta )=(1.0140\pm 0.0003)\cdot 4m(\pi ^0)`$, and $`m(\eta ^{\prime })=(1.0137\pm 0.00015)\cdot 7m(\pi ^0)`$. We also note that the average mass ratios $`[m(\eta )/m(\pi ^0)+m(\eta )/m(\pi ^+)]/2=3.9892=0.9973\cdot 4`$, and $`[m(\eta ^{\prime })/m(\pi ^0)+m(\eta ^{\prime })/m(\pi ^+)]/2=6.9791=0.9970\cdot 7`$ are good approximations to the integers 4 and 7. Three particles seem hardly to be sufficient to establish a rule. However, if we look a little further we find that $`m(\mathrm{\Lambda })=1.0332\cdot 8m(\pi ^0)`$ or $`m(\mathrm{\Lambda })=1.0190\cdot 2m(\eta )`$.
We note that the $`\mathrm{\Lambda }`$ particle has spin $`\frac{1}{2}`$, not spin 0 like the $`\pi ^0`$, $`\eta `$, $`\eta ^{\prime }`$ mesons. Nevertheless, the mass of $`\mathrm{\Lambda }`$ is close to $`8m(\pi ^0)`$. Furthermore we have $`m(\mathrm{\Sigma }^0)=0.9817\cdot 9\,m(\pi ^0)`$, $`m(\mathrm{\Xi }^0)=0.9742\cdot 10\,m(\pi ^0)`$, $`m(\mathrm{\Omega }^{-})=1.0325\cdot 12\,m(\pi ^0)=1.0183\cdot 3\,m(\eta )`$ ($`\mathrm{\Omega }^{-}`$ is charged and has spin $`\frac{3}{2}`$). Finally the masses of the charmed baryons are $`m(\mathrm{\Lambda }_c^+)=0.9958\cdot 17\,m(\pi ^0)=1.0232\cdot 16\,m(\pi ^+)=1.024\cdot 2\,m(\mathrm{\Lambda })`$, $`m(\mathrm{\Sigma }_c^0)=1.0093\cdot 18\,m(\pi ^0)`$, $`m(\mathrm{\Xi }_c^0)=1.0167\cdot 18\,m(\pi ^0)`$, and $`m(\mathrm{\Omega }_c^0)=1.0017\cdot 20\,m(\pi ^0)`$. Now we seem to have enough material to formulate the integer multiple rule, according to which the masses of the $`\eta `$, $`\eta ^{\prime }`$, $`\mathrm{\Lambda }`$, $`\mathrm{\Sigma }^0`$, $`\mathrm{\Xi }^0`$, $`\mathrm{\Omega }^{-}`$, $`\mathrm{\Lambda }_c^+`$, $`\mathrm{\Sigma }_c^0`$, $`\mathrm{\Xi }_c^0`$ and $`\mathrm{\Omega }_c^0`$ particles are, in a first approximation, integer multiples of the mass of the $`\pi ^0`$ meson, although some of the particles have spin, and may also have charge as well as strangeness and charm. A consequence of the integer multiple rule must be that the ratio of the masses of any two mesons or baryons is equal to the ratio of two integer numbers. And indeed, for example, $`m(\eta )/m(\pi ^0)`$ is practically two times (exactly $`0.9950\cdot 2`$) the ratio $`m(\mathrm{\Lambda })/m(\eta )`$; there is also the ratio $`m(\mathrm{\Omega }^{-})/m(\mathrm{\Lambda })=0.9993\cdot \frac{3}{2}=0.9993\cdot 1.5`$. We have furthermore the ratios $`m(\mathrm{\Lambda })/m(\eta )=1.019\cdot 2`$, $`m(\mathrm{\Omega }^{-})/m(\eta )=1.018\cdot 3`$, $`m(\mathrm{\Lambda }_c^+)/m(\mathrm{\Lambda })=1.02399\cdot 2`$, $`m(\mathrm{\Sigma }_c^0)/m(\mathrm{\Sigma }^0)=1.0281\cdot 2`$, and $`m(\mathrm{\Omega }_c^0)/m(\mathrm{\Xi }^0)=1.0282\cdot 2`$.
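The near-integer ratios quoted above are easy to reproduce. The following Python sketch (our own illustration, not part of the original analysis) uses approximate masses in MeV/c², close to the 1996 Particle Physics Summary values, and prints each ratio $`m/m(\pi ^0)`$ together with its nearest integer:

```python
# Approximate masses in MeV/c^2, close to the 1996 "Particle Physics
# Summary" values; quoted here for illustration only.
masses = {
    "eta":        547.3,
    "eta'":       957.8,
    "Lambda":    1115.7,
    "Sigma0":    1192.6,
    "Xi0":       1314.9,
    "Omega-":    1672.5,
    "Lambda_c+": 2284.9,
    "Sigma_c0":  2452.2,
    "Xi_c0":     2470.3,
    "Omega_c0":  2704.0,
}
m_pi0 = 134.98

for name, m in masses.items():
    ratio = m / m_pi0
    n = round(ratio)               # nearest integer multiple N
    print(f"{name:10s} m/m(pi0) = {ratio:7.4f} ~ {n:2d} (factor {ratio / n:.4f})")
```

The factors printed in the last column correspond to the numbers quoted in the text, e.g. roughly 1.014 for the $`\eta `$ meson and 1.033 for the $`\mathrm{\Lambda }`$.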
We will call, for reasons to be explained later, the particles discussed above, which follow in a first approximation the integer multiple rule, the $`\gamma `$-branch of the particle spectrum. The mass ratios of these particles are listed in Table I. The deviation of the mass ratios from exact integer multiples of $`m(\pi ^0)`$ is at most 3.3%; the average of the factors in front of the integer multiples of $`m(\pi ^0)`$ of the ten $`\gamma `$-branch particles in Table I is $`1.0073\pm 0.0184`$. A least-squares analysis shows that the masses of the eleven particles in Table I obey the formula $`m=1.0059N+0.0074`$ (with $`m`$ in units of $`m(\pi ^0)`$), with a correlation coefficient $`r=0.999`$. The consequences of the combination of spin, isospin, strangeness and charm are difficult to assess. Nevertheless, even the combination of four different parameters does not change the integer multiple rule by more than 3.3%. To put this into perspective we note that the masses of the $`\pi ^\pm `$ mesons and the $`\pi ^0`$ meson differ already by 3.4%. Spin seems to have a profound effect on the mass of a particle. Changing the spin from zero for the $`\pi ^0`$, $`\eta `$, $`\eta ^{\prime }`$ mesons to spin $`\frac{1}{2}`$ for the $`\mathrm{\Lambda }`$ baryon is accompanied by a mass twice the mass of the $`\eta `$ meson; it is $`m(\mathrm{\Lambda })=1.0190\cdot 2\,m(\eta )`$. The isospins of $`\eta `$ and $`\mathrm{\Lambda }`$ are both zero. The change of $`S`$ and the baryon number $`B`$ which accompanies the formation of $`\mathrm{\Lambda }`$ seems to have little effect on the mass of the $`\mathrm{\Lambda }`$ particle and the other baryons, as follows from the masses of the baryons whose $`S`$ changes from $`-1`$ to $`-3`$. We find it most interesting that spin $`\frac{3}{2}`$ is accompanied by a mass three times $`m(\eta )`$, the $`\mathrm{\Omega }^{-}`$ particle; whereas spin $`\frac{1}{2}`$ is accompanied by a mass two times $`m(\eta )`$, the $`\mathrm{\Lambda }`$ particle.
Searching for what the $`\pi ^0`$, $`\eta `$, $`\eta ^{\prime }`$, $`\mathrm{\Lambda }`$, $`\mathrm{\Sigma }^0`$, $`\mathrm{\Xi }^0`$, $`\mathrm{\Omega }^{-}`$ particles have else in common, we find that the principal decays (decays with a fraction $`>1`$%) of these particles, as listed in Table I, involve primarily $`\gamma `$’s; the characteristic case is $`\pi ^0\rightarrow \gamma \gamma `$ (98.8%). The next most frequent decay product of the heavier particles of the $`\gamma `$-branch are $`\pi ^0`$ mesons, which again decay into $`\gamma \gamma `$. To describe the decays in another way: the principal decays of the particles listed above take place always without the emission of neutrinos; see Table I. There the decays and the fractions of the principal decay modes are listed, as they are given in the Particle Physics Summary. We cannot consider decays with fractions $`<1`$%. We will refer to the particles whose masses are approximately integer multiples of the mass of the $`\pi ^0`$ meson, and which decay without the emission of neutrinos, as the $`\gamma `$-branch of the particle spectrum. To summarize the facts concerning the $`\gamma `$-branch: within about 3% the masses of the particles of the $`\gamma `$-branch are integer multiples (namely 4, 7, 8, 9, 10, 12, and even 17, 18, 20) of the mass of the $`\pi ^0`$ meson. It is improbable that nine particles have masses so close to integer multiples of $`m(\pi ^0)`$ if there is no correlation between them and the $`\pi ^0`$ meson. It has, on the other hand, been argued that the integer multiple rule is a numerical coincidence. But the probability that the mass ratios of the $`\gamma `$-branch fall by coincidence on integer numbers between 1 and 20, instead of on all possible percentage values between 1 and 20, is smaller than $`10^{-20}`$, i.e., nonexistent. The integer multiple rule is not affected by more than 3% by the spin, the isospin, the strangeness, and by charm.
The integer multiple rule seems even to apply to the $`\mathrm{\Omega }^{-}`$ and $`\mathrm{\Lambda }_c^+`$ particles, although they are charged. In order for the integer multiple rule to be valid, the relative deviation of the ratio $`m/m(\pi ^0)`$ from an integer number must be smaller than $`1/2N`$, where $`N`$ is the integer number closest to the actual ratio $`m/m(\pi ^0)`$. That means that the permissible deviation decreases rapidly with $`N`$. All particles of the $`\gamma `$-branch have deviations smaller than $`1/2N`$. The remainder of the stable mesons and baryons are the $`\pi ^\pm `$, $`K^{\pm ,0}`$, $`p`$, $`n`$, $`D^{\pm ,0}`$ and $`D_S^\pm `$ particles. These are in general charged, excepting the $`K^0`$ and $`D^0`$ mesons and the neutron $`n`$, in contrast to the particles of the $`\gamma `$-branch, which are in general neutral. It does not make a significant difference whether one considers the mass of a particular charged or neutral particle. After the $`\pi `$ mesons, the largest mass difference between charged and neutral particles is that of the $`K`$ mesons (0.81%), and thereafter all mass differences between charged and neutral particles are $`<0.5`$%. The integer multiple rule does not immediately apply to the masses of the charged particles if $`m(\pi ^\pm )`$ (or $`m(\pi ^0)`$) is used as reference, because $`m(K^\pm )=0.8843\cdot 4\,m(\pi ^\pm )`$, and $`0.8843\cdot 4=3.537`$ is far from an integer. Since the masses of the $`\pi ^0`$ meson and the $`\pi ^\pm `$ meson differ by only 3.4% it has been argued that the $`\pi ^\pm `$ mesons are, but for the isospin, the same particles as the $`\pi ^0`$ mesons, and that therefore the $`\pi ^\pm `$ cannot start another particle branch. However, this argument is not supported by the completely different decays of the $`\pi ^0`$ mesons and the $`\pi ^\pm `$ mesons.
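The $`1/2N`$ criterion can be checked mechanically. In the Python sketch below (a helper of our own; the masses are approximate MeV/c² values, for illustration only), the bound is read as a condition on the relative deviation of the factor in front of $`N\,m(\pi ^0)`$ from 1:

```python
m_pi0 = 134.98
# Approximate gamma-branch masses in MeV/c^2 (illustrative values only).
gamma_branch = [547.3, 957.8, 1115.7, 1192.6, 1314.9,
                1672.5, 2284.9, 2452.2, 2470.3, 2704.0]

def passes_rule(m, m_ref=m_pi0):
    """True if the relative deviation of m/m_ref from the nearest
    integer N is smaller than 1/(2N)."""
    n = round(m / m_ref)
    deviation = abs(m / (n * m_ref) - 1.0)
    return deviation < 1.0 / (2 * n)

results = [passes_rule(m) for m in gamma_branch]
```

With these inputs every $`\gamma `$-branch particle satisfies the bound; the tightest case is $`\mathrm{\Omega }_c^0`$, whose permitted relative deviation is only $`1/40`$.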
The $`\pi ^0`$ meson decays almost exclusively into $`\gamma \gamma `$ (98.8%), whereas the $`\pi ^\pm `$ mesons decay practically exclusively into $`\mu `$ mesons and neutrinos, e.g. $`\pi ^+\rightarrow \mu ^++\nu _\mu `$ (99.987%). Furthermore, the lifetimes of the $`\pi ^0`$ and the $`\pi ^\pm `$ mesons differ by nine orders of magnitude, being $`\tau (\pi ^0)=8.4\times 10^{-17}`$ sec versus $`\tau (\pi ^\pm )=2.6\times 10^{-8}`$ sec. If we make the $`\pi ^\pm `$ mesons the reference particles of the $`\nu `$-branch, then we must multiply the mass ratios $`m/m(\pi ^\pm )`$ of the above listed particles by a factor $`0.861\pm 0.022`$, as follows from the mass ratios listed in Table II. The integer multiple rule may, however, apply directly if one makes $`m(K^\pm )`$ the reference for masses larger than $`m(K^\pm )`$. The mass of the proton is $`0.9503\cdot 2\,m(K^\pm )`$, which is only a fair approximation to an integer multiple. There are, on the other hand, outright integer multiples in $`m(D^\pm )=0.9961\cdot 2\,m(p)`$, and in $`m(D_S^\pm )=0.9968\cdot 4\,m(K^\pm )`$. We note that the spin $`\frac{1}{2}`$ of the proton is associated with a mass twice the mass of the spinless $`K`$ meson, just as the spin of the $`\mathrm{\Lambda }`$ particle is associated with a mass twice the mass of the spinless $`\eta `$ meson. We note further that the spin of the $`D^\pm `$ mesons, whose mass is nearly $`2m(p)`$, is zero, whereas the spin of the proton is $`\frac{1}{2}`$. It appears that the superposition of two particles of the same mass and with spin $`\frac{1}{2}`$ can cancel the spin. On the other hand, it appears that the superposition of two particles of equal mass without spin can create a particle with spin $`\frac{1}{2}`$. Contrary to the particles of the $`\gamma `$-branch, the charged particles decay preferentially with the emission of neutrinos; the foremost example is $`\pi ^+\rightarrow \mu ^++\nu _\mu `$ with a fraction of 99.987%. Neutrinos characterize the weak interaction.
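The $`\nu `$-branch ratios quoted in this paragraph can be verified the same way; the sketch below uses approximate masses in MeV/c² (our values, for illustration only):

```python
# Approximate masses in MeV/c^2 (illustrative values, roughly those of
# the 1996 Particle Physics Summary).
m_pi_pm = 139.57
m_K_pm = 493.68
m_p = 938.27
m_D_pm = 1869.3
m_Ds_pm = 1968.5

# With m(pi+-) as reference the ratio is far from an integer ...
factor_K = m_K_pm / (4 * m_pi_pm)      # ~0.884, the non-integer factor
# ... but with m(K+-) (or m(p)) as reference the factors are close to 1:
factor_p = m_p / (2 * m_K_pm)          # ~0.950
factor_D = m_D_pm / (2 * m_p)          # ~0.996
factor_Ds = m_Ds_pm / (4 * m_K_pm)     # ~0.997
```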
We will refer to the charged particles listed in Table II as the neutrino branch ($`\nu `$-branch) of the particle spectrum. We emphasize that a weak decay of the particles of the $`\nu `$-branch is by no means guaranteed: the proton is stable, and there are decays such as $`K^+\rightarrow \pi ^+\pi ^{-}\pi ^+`$ (5.59%); the subsequent decays of the $`\pi ^\pm `$ mesons lead, on the other hand, to neutrinos and $`e^\pm `$. There is also the $`K^0`$ particle, which poses a problem because the principal primary decays of $`K_S^0`$ take place without the emission of neutrinos, but many of the secondary decays emit neutrinos. On the other hand, 2/3 of the primary decays of $`K_L^0`$ emit neutrinos. For comparison, 63.6% of $`K^+`$ decay into $`\mu ^++\nu _\mu `$. The decays of the particles in the $`\nu `$-branch follow a mixed rule, either weak or electromagnetic. To summarize the facts concerning the $`\nu `$-branch of the mesons and baryons: the masses of these particles seem to follow an integer multiple rule if one uses the $`K^\pm `$ meson as reference; however, the masses are not integer multiples of the mass of the $`\pi ^\pm `$ mesons but share a common factor $`0.86\pm 0.02`$. We do not want to discuss here the bottom mesons and the bottom baryon. It is easy to associate the masses of the bottom mesons with integer multiples of the $`\pi ^0`$ meson. For example, $`m(B^0)=1.0029\cdot 39\,m(\pi ^0)`$ and $`m(B_S^0)=0.9945\cdot 40\,m(\pi ^0)`$ or $`1.0199\cdot 39\,m(\pi ^0)`$. The latter numbers show the difficulty in identifying the proper multiple. In this case the difference of the integer multiples is one out of forty, or 2.5%, which is too small to permit a proper identification of the multiple.
There are, however, the $`c\overline{c}`$ mesons $`\eta _c`$ and $`\chi _{c0}`$, with $`I,J=0`$, whose masses are $`1.0035\cdot 22\,m(\pi ^0)`$ and $`1.0120\cdot 25\,m(\pi ^0)`$, which therefore fall into the range of the masses of the $`\gamma `$-branch considered here, and which are good approximations to integer multiples of $`m(\pi ^0)`$.

## 2 Summary

In spite of differences in charge, spin, strangeness, and charm, the masses of the stable mesons and baryons of the $`\gamma `$-branch are, within at most 3.3%, integer multiples of the mass of the $`\pi ^0`$ meson. Correspondingly, the masses of the particles of the $`\nu `$-branch are, after multiplication by a factor $`0.86\pm 0.02`$, integer multiples of the mass of the $`\pi ^\pm `$ mesons. The validity of the integer multiple rule can easily be verified from the Particle Physics Summary using a calculator. The integer multiple rule suggests that the particles are the result of superpositions of modes and higher modes of a wave equation. Such a theory will be presented in a following paper. The author is grateful to Professor I. Prigogine for his support.

REFERENCE

R. Barnett et al., Rev. Mod. Phys. 68, 611 (1996).
no-problem/0002/nucl-ex0002013.html
ar5iv
text
# Very high rotational frequencies and band termination in <sup>73</sup>Br

## Abstract

Rotational bands in <sup>73</sup>Br have been investigated up to spins of $`I`$ = 65/2 using the EUROBALL III spectrometer. One of the negative-parity bands displays the highest rotational frequency $`\hbar \omega `$ = 1.85 MeV reported to date in nuclei with $`A\gtrsim 25`$. At high frequencies, the experimental dynamic moments of inertia $`𝒥^{(2)}`$ of all bands decrease to very low values, $`𝒥^{(2)}\approx `$ 10 $`\hbar ^2`$MeV<sup>-1</sup>. The bands are described in the configuration-dependent cranked Nilsson-Strutinsky model. The calculations indicate that one of the negative-parity bands is observed up to its terminating single-particle state at spin 63/2. This result establishes the first band termination in the $`A\approx 70`$ mass region.

PACS: 23.20.lv, 23.20.En, 21.60.Ev, 27.50.+e

Exploring nuclei at very high excitation energies and angular momentum is of fundamental importance for our understanding of many-body systems. Under such extreme conditions, one of the most interesting quantum phenomena is the termination of rotational bands. This means that a specific configuration manifests itself as a collective rotational band at low spin values, but gradually loses its collectivity with increasing spin and finally terminates at the maximum spin in a fully aligned state of single-particle nature . The origin of this phenomenon lies in the limited angular momentum content of the fixed configuration. Experimentally, terminating bands were first observed in <sup>158</sup>Er , then in the $`A\approx 110`$ mass region and recently in several other mass regions . In a terminating band, the nuclear shape gradually traces a path through the triaxial plane, starting as collective (often at near-prolate shape, $`\gamma \approx 0^{\circ }`$) and evolving over many transitions to a non-collective state at oblate ($`\gamma =+60^{\circ }`$), spherical or prolate ($`\gamma =-120^{\circ }`$) shape .
In the experiment, this has been most clearly demonstrated in the $`A\approx `$ 110 and $`A\approx `$ 60 mass regions . The $`A\approx `$ 70-80 mass region displays a large variety of structural effects. At low spins, there is a competition between prolate and oblate configurations due to the existence of deformed shell gaps at different particle numbers and deformations . At higher spins, there is a shell gap at near-prolate and near-oblate collective shapes at neutron number 38 . A further interesting observation is that the bands in <sup>81</sup>Sr show features which are generally associated with terminating bands , even though they do not appear to become fully non-collective. It is now an interesting question whether bands can be observed up to termination in the somewhat lighter nuclei in the $`A=70`$ mass region. This is the motivation behind the present study of the $`{}_{35}^{73}`$Br<sub>38</sub> nucleus at very high rotational frequencies. High-spin states in <sup>73</sup>Br were populated in the reaction <sup>40</sup>Ca(<sup>40</sup>Ca,$`\alpha 3p`$), using the 185 MeV beam delivered by the XTU Tandem accelerator of the Laboratori Nazionali di Legnaro. The experiment was performed using an enriched (99.96 %) <sup>40</sup>Ca target with a thickness of 0.9 mg/cm<sup>2</sup>. $`\gamma `$-rays were registered with 15 Cluster and 26 Clover detectors of the EUROBALL III array. Charged particles were detected with the Italian SIlicon Sphere (ISIS), consisting of 40 Si $`\mathrm{\Delta }E-E`$ telescopes . At forward angles, 15 segmented detector units filled with BC501A liquid scintillator were mounted to detect neutrons . A total number of 2$`\times `$10<sup>9</sup> $`\gamma \gamma \gamma `$ events were recorded. The $`\alpha 3p`$ exit channel leading to <sup>73</sup>Br is predicted to carry 3.4% of the total cross section, according to PACE calculations .
$`\gamma \gamma `$-particle coincidences and $`\gamma \gamma \gamma `$ coincidences were sorted into two-dimensional $`E_\gamma E_\gamma `$ matrices and three-dimensional $`E_\gamma E_\gamma E_\gamma `$ cubes. Examples of doubly-gated coincidence spectra are shown in Fig. 1. These coincidence data were analysed using the Radware package . The resulting level scheme of the <sup>73</sup>Br nucleus is presented in Fig. 2. The sequences A, B and C have been known from previous studies up to the states of spins and parities $`I^\pi `$ = (45/2<sup>+</sup>), (47/2<sup>-</sup>) and (49/2<sup>-</sup>), respectively. In this work, these rotational bands were extended up to states with $`I^\pi `$ = (65/2<sup>+</sup>), (63/2<sup>-</sup>) and (65/2<sup>-</sup>), respectively, at excitation energies $`E\approx `$ 26 MeV. Moreover, a new sequence D was established, which feeds the 2855 keV, 4020 keV and 5335 keV states of band A. An analysis of the directional correlations from oriented states (DCO) was performed to assign spins to the newly observed states. The coincidence data were added up for the 35 most backward-angle detectors, located at an average angle of 156° (149°, 155°, 157°, 163°) to the beam, and the 108 detectors near 90° (72°, 81°, 99°, 107°). A 156° versus 90° $`\gamma \gamma `$ matrix was created in coincidence with $`\alpha `$ particles. From this matrix we extracted DCO-ratios defined as: $`R_{\mathrm{DCO}}^{exp}={\displaystyle \frac{I_{156^{\circ }}^{\gamma _2}(Gate_{90^{\circ }}^{\gamma _1})}{I_{90^{\circ }}^{\gamma _2}(Gate_{156^{\circ }}^{\gamma _1})}}`$ (1) where $`I_{156^{\circ }}^{\gamma _2}(Gate_{90^{\circ }}^{\gamma _1})`$ denotes the efficiency-corrected intensity of the $`\gamma _2`$ transition observed at 156° when gating on $`\gamma _1`$ at 90°. According to the calculations for the DCO-ratios by Krane et al. , values of about 0.5 and 1 are expected for stretched and pure transitions of multipole order 1 and 2, respectively, if the gate is set on an $`E`$2 transition.
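As a minimal illustration of Eq. (1) and of the rule of thumb just quoted (values near 0.5 for stretched dipoles and near 1 for stretched quadrupoles when gating on an E2 transition), the following Python sketch classifies a transition from invented gated intensities; the function names and thresholds are ours, not part of the analysis:

```python
def dco_ratio(i_156_gate90, i_90_gate156):
    """R_DCO = I(gamma2 at 156 deg; gate at 90 deg) /
               I(gamma2 at 90 deg; gate at 156 deg)."""
    return i_156_gate90 / i_90_gate156

def multipolarity(r_dco):
    """Rough classification, valid when gating on a stretched E2 transition."""
    if abs(r_dco - 1.0) < 0.2:
        return "stretched quadrupole (or DI=0 dipole)"
    if abs(r_dco - 0.5) < 0.2:
        return "stretched dipole"
    return "ambiguous"

# Invented efficiency-corrected intensities for a hypothetical transition:
r = dco_ratio(620.0, 1030.0)   # ~0.60, i.e. the dipole group of Fig. 3
```

Note the ambiguity built into the first branch: a $`\mathrm{\Delta }I=0`$ dipole can also give a ratio near 1, which is exactly the situation of the 455 keV M1 transition discussed below.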
The DCO-ratios of transitions in <sup>73</sup>Br are plotted in Fig. 3. There are two clearly separated groups of transitions, around DCO-ratios of 0.6 and 1, which are assigned as dipole and quadrupole transitions, respectively. The DCO-ratio very close to 1 for the 455 keV M1 transition is related to the spin difference $`\mathrm{\Delta }I=0`$ between the initial and final states (see Fig. 2). On the basis of the DCO-ratios we confirmed the character of all previously known transitions. We furthermore firmly derived an E2 character for the 1651 keV $`\gamma `$-ray in band A, for the 1471 keV, 1637 keV, 1780 keV $`\gamma `$-rays in band B, for the 462 keV and 1593 keV $`\gamma `$-rays in band C, and for the 1210 keV $`\gamma `$-ray depopulating the lowest observed state in band D. Hence, band D has the same signature and parity as band A. The rotational bands in <sup>73</sup>Br display very high $`\gamma `$-ray energies (see Fig. 2). For example, the $`\gamma `$-ray energy on the top of sequence C is 3696 keV. This corresponds to a rotational frequency of $`\hbar \omega `$ = $`E_\gamma `$/2 = 1.85 MeV. This is the highest frequency ever observed in a rotational cascade in nuclei with $`A\gtrsim 25`$. For comparison, the highest rotational frequencies reported so far are $`\hbar \omega `$ = 1.82 MeV in the <sup>60</sup>Zn nucleus and $`\hbar \omega `$ = 1.4 MeV in the <sup>109</sup>Sb nucleus . A collective parameter which is very sensitive to changes in the nuclear structure is the dynamic moment of inertia, $`𝒥^{(2)}=(dE_\gamma /dI)^{-1}`$. In Fig. 1 a gradual increase in the $`\gamma `$-ray energy spacings as the $`\gamma `$-ray energies increase within the bands can be clearly seen. This implies a corresponding decrease of the dynamic moments of inertia $`𝒥^{(2)}`$, which are presented in Fig. 4-top. For $`\hbar \omega \lesssim `$ 1 MeV, the irregularities of $`𝒥^{(2)}`$ signal band crossings.
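For a stretched-E2 cascade ($`\mathrm{\Delta }I=2`$) the two quantities used here reduce to simple differences of consecutive γ-ray energies, $`\hbar \omega \approx E_\gamma /2`$ and $`𝒥^{(2)}\approx 4\hbar ^2/\mathrm{\Delta }E_\gamma `$. A Python sketch with mostly invented transition energies (only the 3.696 MeV value is taken from the text) illustrates the bookkeeping:

```python
# Consecutive stretched-E2 gamma-ray energies in MeV, lowest to highest.
# Only the 3.696 MeV value is taken from the text; the others are
# invented to mimic the widening energy spacings of band C.
e_gamma = [2.5, 2.86, 3.26, 3.696]

# Rotational frequency at each transition: hbar*omega ~ E_gamma / 2.
hw = [e / 2.0 for e in e_gamma]

# Dynamic moment of inertia between consecutive transitions,
# J2 = 4 hbar^2 / Delta(E_gamma), in units of hbar^2 MeV^-1.
j2 = [4.0 / (e2 - e1) for e1, e2 in zip(e_gamma, e_gamma[1:])]
```

With these numbers $`\hbar \omega `$ reaches 1.848 MeV at the top of the cascade while $`𝒥^{(2)}`$ drops from about 11 to about 9 $`\hbar ^2`$MeV<sup>-1</sup>, the kind of decrease discussed in the text.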
For each band there are mainly two irregularities in $`𝒥^{(2)}`$, which could be caused by proton and neutron $`g_{9/2}`$ alignments. In contrast, for frequencies higher than $`\approx `$ 1 MeV, the dynamic moments of inertia of the bands converge and decrease to approximately 40 % of the rigid-rotor value. This smooth down-sloping of the dynamic moment of inertia may indicate that, starting at $`\hbar \omega \approx 1.0`$ MeV, the configurations of bands A and C do not change up to the highest rotational frequency, and that the rotational band is gradually losing its collectivity. For band B a negative spike in $`𝒥^{(2)}`$ occurs at $`\hbar \omega `$ = 1.28 MeV; it is caused by the decrease of the transition energy, which indicates a band crossing. This behavior is reminiscent of <sup>158</sup>Er, where a band terminating at $`I=46^+`$ crosses the more collective yrast band at $`I\approx 40`$ . The bottom graph of Fig. 4 shows the kinematical moment of inertia $`𝒥^{(1)}`$ = $`I/\omega `$. The latter stays close to the rigid-body value. For high rotational frequencies the relation $`𝒥^{(1)}>𝒥^{(2)}`$ indicates that pairing correlations play no important role. In order to assign configurations to bands A, B and C we have used the configuration-dependent Cranked Nilsson-Strutinsky (CNS) approach based on the cranked Nilsson potential . Since we are interested in the high-spin properties, pairing correlations have been neglected in the calculations. The calculations minimize the total energy of a specific configuration for a given spin with respect to the deformation parameters ($`\epsilon _2,\epsilon _4,\gamma `$). Thus, the total energy and the shape trajectory (the evolution of the minimum of the total energy in the ($`\epsilon _2,\gamma `$) plane as a function of spin) are obtained for each configuration. The configurations of interest can be described by excitations within the $`N=3`$ $`p_{3/2}`$, $`f_{5/2}`$, $`p_{1/2}`$ and the $`N=4`$ $`g_{9/2}`$ orbitals.
Thus, there are no holes in the $`f_{7/2}`$ subshell and no particles in the orbitals above the spherical shell gap at 50. In Ref. it was shown that at a deformation of $`\epsilon _2`$ = 0.35 the lowest neutron intruder orbital from the $`h_{11/2}`$ subshell comes down and crosses the Fermi surface at $`\hbar \omega `$ = 1.4 MeV. However, the CNS calculations including one neutron in the $`h_{11/2}`$ orbital result in bands which behave very differently from the observed bands. Furthermore, the highest $`N=3`$ $`p_{1/2}`$ orbital will not become occupied in the low-lying configurations of <sup>73</sup>Br; therefore we can omit $`p_{1/2}`$ in the labeling. Note also that all orbitals are treated on the same footing in the cranking calculations, which means that, for example, the polarization of the core is taken care of. In the following, the configurations will be specified with respect to a $`{}_{28}^{56}`$Ni<sub>28</sub> core as having 7 active protons and 10 active neutrons. They will be labeled by the shorthand notation \[$`p_1p_2,n_1n_2`$\], where $`p_1(n_1)`$ stands for the number of protons (neutrons) in the ($`p_{3/2},f_{5/2}`$) orbitals and $`p_2(n_2)`$ stands for the number of protons (neutrons) in $`g_{9/2}`$ orbitals. In addition, the sign of the signature $`\alpha `$ of the last occupied orbital (given as a subscript) is used if the number of occupied orbitals in the specific group is odd. Considering that the bands A, B and C extend to high spins of $`I=(65/2)`$ or $`(63/2)`$, we will first outline the possible proton and neutron configurations using their maximum spins $`I_{\mathrm{max}}^{p,n}`$ as criteria. The maximum spin is defined from the distribution of particles and holes over the $`j`$-shells at low spin. Including 2 protons in the $`g_{9/2}`$ orbital, there are 5 $`(f_{5/2}p_{3/2})`$ protons, which lead to a maximum proton spin of $`I_{\mathrm{max}}^p=27/2`$ or 29/2 for the proton subsystem, depending on signature.
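The maximum spins quoted here follow from filling the highest-$`m`$ magnetic substates of the valence $`j`$-shells. The bookkeeping can be sketched as follows (our own illustration, not the CNS code):

```python
from fractions import Fraction as F

def m_states(shells):
    """All magnetic substates (m, j) of a list of j-shells."""
    states = []
    for j in shells:
        m = j
        while m >= -j:
            states.append((m, j))
            m -= 1
    return states

def i_max(n_particles, shells):
    """Maximum aligned spin of n_particles distributed over the shells."""
    ms = sorted(m_states(shells), reverse=True)
    return sum(m for m, _ in ms[:n_particles])

pf = [F(5, 2), F(3, 2)]   # the (f5/2, p3/2) orbitals
g = [F(9, 2)]             # the g9/2 orbital

# 5 (f5/2, p3/2) protons plus 2 g9/2 protons -> 29/2, the maximum over
# both signatures (the signature-restricted alternative is 27/2):
ip = i_max(5, pf) + i_max(2, g)
# 7 (f5/2, p3/2) neutrons plus 3 g9/2 neutrons -> 16 (or 15):
ineu = i_max(7, pf) + i_max(3, g)
```

The same counting reproduces the other values quoted in the text, e.g. 33/2 for 4 such protons plus 3 $`g_{9/2}`$ protons and 18 for 6 such neutrons plus 4 $`g_{9/2}`$ neutrons.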
The maximum proton spin becomes 33/2 for proton configurations with 3 $`g_{9/2}`$ protons if the last proton occupies the favoured ($`\alpha =+1/2`$) $`g_{9/2}`$ signature. Including 3 neutrons in the $`g_{9/2}`$ orbital, the maximum neutron spin $`I_{\mathrm{max}}^n`$ is 15 or 16, depending on the signature, for the 7 $`(f_{5/2}p_{3/2})`$ neutrons. Adding one more neutron to the $`g_{9/2}`$ orbital, the maximum neutron spin is $`I_{\mathrm{max}}^n=18`$. The occupation of additional $`g_{9/2}`$ orbitals adds only marginally to the maximum spin, but costs a lot of energy. Thus, it is reasonable to expect that such configurations will be considerably above the yrast line in the spin range of interest. There are five out of all possible combinations where the proton and the neutron spins add to a maximum total spin of $`I_{\mathrm{max}}\ge 63/2`$. The calculations show that one of these configurations, \[43<sub>+</sub>,7<sub>-</sub>3<sub>+</sub>\], does not build any ‘collective band’ for spin values $`I\gtrsim 30`$. Therefore, only four configurations, namely \[43<sub>+</sub>,64\], \[5<sub>+</sub>2,64\], \[5<sub>-</sub>2,64\] and \[43<sub>+</sub>,7<sub>+</sub>3<sub>+</sub>\], are left as possible candidates for the observed bands. In Fig. 5 the calculated energies relative to a rigid-rotor reference $`(E-E_{\mathrm{RR}})`$ of the above mentioned configurations are compared with the experimental data. The CNS calculations indicate that these configurations are indeed the lowest-lying ones capable of building angular momentum up to the values observed. Since pairing correlations have been neglected, the calculated energies are expected to be realistic only at high spins, $`I\gtrsim 15`$. In the calculations, the configurations follow a parabola-like energy curve. As seen in Fig. 5, the \[43<sub>+</sub>,64\] configuration has its minimum around spin 61/2, while band A seems to approach the minimum at 65/2.
Amongst the considered configurations, the \[5<sub>+</sub>2,64\] configuration is calculated to be lowest up to spin $`I=57/2`$, which reproduces the experimental situation with band C. Moreover, the minimum in $`(E-E_{\mathrm{RR}})`$ occurs at similar spin values, 57/2 in band C and 53/2 in the \[5<sub>+</sub>2,64\] configuration. Based on this comparison of theoretical and experimental energies we assign the \[43<sub>+</sub>,64\] and \[5<sub>+</sub>2,64\] configurations to bands A and C, respectively. Band B is observed up to $`I`$ = (63/2). The calculations suggest that the \[5<sub>-</sub>2,64\] configuration can be assigned to this band at low and medium spin. It is the signature partner of the \[5<sub>+</sub>2,64\] configuration assigned to band C. These configurations reproduce well the slope of the experimental $`(E-E_{\mathrm{RR}})`$ curves for bands B and C, but underestimate the signature splitting between them. The \[5<sub>-</sub>2,64\] configuration is crossed at spin 57/2 by the \[43<sub>+</sub>,7<sub>+</sub>3<sub>+</sub>\] configuration (see Fig. 5). The predicted crossing is indeed observed in band B at spin (55/2<sup>-</sup>) (Fig. 2). Thus, the latter configuration can be related to band B above the band crossing. In the calculations, the band built on the \[43<sub>+</sub>,7<sub>+</sub>3<sub>+</sub>\] configuration terminates at the maximum spin of the particle configuration, i.e. at 63/2. This can be seen in the shape trajectories, which are presented in Fig. 6. Whereas the trajectories related to bands A and C include collective near-prolate deformations over the whole spin range, the configuration assigned to band B after the band crossing at 55/2 (empty triangles) undergoes a shape change from a collective $`\gamma \approx +30^{\circ }`$ shape to a non-collective oblate $`\gamma =+60^{\circ }`$ shape between spins 55/2 and 63/2. Thus, the band built on this configuration terminates at the maximum spin $`I_{\mathrm{max}}=63/2`$.
Since this coincides with the maximum spin observed in band B, we have observed this band up to its termination. Our interpretation of band B requires a relatively strong interaction between two configurations which differ in their occupation of the $`j`$-shells for both protons and neutrons. This might appear unlikely, but one should also note that a high-spin crossing in a terminating band in <sup>108</sup>Sn has been interpreted as built from configurations which differ in an analogous manner . In band A a branching at spin $`I`$ = (53/2) is experimentally seen (see Fig. 2). The configuration assigned to this band shows a shape change from $`\gamma \approx -15^{\circ }`$ to $`\gamma \approx +5^{\circ }`$ (see Fig. 6) between $`I`$ = 53/2 and 57/2. This suggests that there is a smooth configuration in each minimum and that the calculated yrast configuration jumps from one minimum to the other with increasing spin. Thus, the two states at (53/2) might belong to these different minima. A similar branching is also observed for band C, but appears even more difficult to describe in the calculations, even though the irregularities in the shape trajectories (Fig. 6) suggest the possibility of competition between different minima in all configurations shown in Figs. 5 and 6. However, these branchings remain an interesting experimental result and a challenge to theory. In summary, we observed in the <sup>73</sup>Br nucleus the highest rotational frequency in a cascade in nuclei with $`A\gtrsim 25`$. One out of the four observed rotational bands could be followed up to its termination. Moreover, we give arguments concerning the particle structure of this band. According to the calculations the other two high-spin bands are not terminating; instead they stay collective beyond the maximum spin defined from the distribution of particles and holes over the $`j`$-shells at low spin. These results establish the phenomenon of band termination in the $`A\approx 70`$ mass region for the first time.
The predicted decrease in the quadrupole moment along the terminating band should be observable via Doppler shift lifetime measurements. Further studies will be important in order to obtain greater insight into the systematics of the physical observables in connection with band termination in this mass region. This work was supported by the German Ministry of Education and Research under contracts 06 DR 827, 06 OK 862, 06 GOE 851, by the Swedish Natural Science Research Council, by the UK EPSRC, and by the European Union within the TMR project. A.V. Afanasjev acknowledges the support by the Alexander von Humboldt Foundation.
no-problem/0002/astro-ph0002176.html
ar5iv
text
# An eigenfunction method for particle acceleration at ultra-relativistic shocks

## The method

We study a stationary shock front in the $`xy`$-plane. The accelerated particles are assumed to be test-particles without influence on the dynamics of the plasma or the jump conditions at the shock front. The plasma flows along the $`z`$-axis, with constant velocities $`u_{-}`$ in the upstream ($`z<0`$) region and $`u_+`$ downstream ($`z>0`$); the velocities are related by the Rankine-Hugoniot jump conditions. Test-particles are injected into the acceleration process and their interaction with the plasma flow is assumed to give rise to diffusion in the angle $`\mathrm{cos}^{-1}\mu `$ between a particle’s velocity and the shock normal. In the frame of the shock front this leads to a stationary transport equation, valid in the local plasma rest frame and given in mixed coordinates as (Kirk & Schneider 1987) $$\mathrm{\Gamma }(u+\mu )\frac{\partial f}{\partial z}=\frac{\partial }{\partial \mu }\left[D_{\mu \mu }(1-\mu ^2)\frac{\partial f}{\partial \mu }\right]$$ (1) where the plasma speed $`u`$ is measured in units of the speed of light, $`\mathrm{\Gamma }=\left(1-u^2\right)^{-1/2}`$ is the Lorentz factor, and $`f(p,\mu ,z)`$ is the (Lorentz invariant) phase-space density as a function of the particle momentum $`p`$, direction $`\mu `$ and position. $`p`$ and $`\mu `$ are measured in the local rest frame of the plasma, whereas $`z`$ is measured in the rest frame of the shock front.
Equation (1) is solved using the separation Ansatz aguthmann:kirkschneider87 $$f(p,\mu ,z)=\sum _{i=-\mathrm{\infty }}^{+\mathrm{\infty }}g_i(p)Q_i(\mu ,u)\mathrm{exp}\left(\mathrm{\Lambda }_iz/\mathrm{\Gamma }\right),$$ (2) valid in each half-plane, with $`\mathrm{\Lambda }_i`$ and $`Q_i`$ the eigenvalues and eigenfunctions of the equation $$\left\{\frac{\partial }{\partial \mu }\left[D_{\mu \mu }\frac{\partial }{\partial \mu }\right]-\mathrm{\Lambda }_i(u+\mu )\right\}Q_i(\mu ,u)=0$$ (3) The momentum distribution of particles with energy far above the injection energy range – those in which we are interested – takes the shape of a power law $`g_i(p)\propto p^{-s}`$ with power-law index $`s`$, since there is no preferred momentum scale in this range. Matching the expansion (2) across the shock front according to Liouville’s theorem and imposing physically realistic boundary conditions upstream and downstream leads to a nonlinear algebraic equation for the power-law index $`s`$. In aguthmann:kirkschneider87 and aguthmann:heavensdrury88 only the eigenfunctions with $`i<0`$ were used and the method was applied to mildly relativistic shock speeds ($`\mathrm{\Gamma }_{-}\lesssim 5`$). Here, we use the eigenfunctions with $`i>0`$ and calculate them directly with a numerical scheme. In the limit $`u_{-}\to 1`$ an analytic expression is available aguthmann:kirkschneider89 . Four eigenfunctions ($`i=1,3,5,7`$) are shown in Fig. 1A as functions of the cosine $`\mu _\mathrm{s}=(\mu +u)/(1+\mu u)`$ of the angle between the particle direction and the shock normal, measured in the shock rest frame. For $`i>1`$ they are oscillatory for $`-1<\mu _\mathrm{s}<0`$, and for all $`i>0`$ they fall off monotonically in the range $`0<\mu _\mathrm{s}<1`$. ## Results The index $`s`$ of the momentum spectra of the accelerated particles in different cases is shown in Fig. 1B.
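The structure of Eq. (3) can be checked numerically by discretising the pitch-angle operator on a $`\mu `$ grid and solving the resulting matrix eigenproblem. The sketch below is a toy illustration, not the paper's numerical scheme: it assumes isotropic scattering with a diffusivity $`D(\mu )=1-\mu ^2`$ that vanishes at $`\mu =\pm 1`$ and a plasma speed $`u=0.5`$. It recovers the trivial eigenvalue $`\mathrm{\Lambda }=0`$ and eigenvalues of both signs, corresponding to the $`i>0`$ and $`i<0`$ families mentioned in the text.

```python
import numpy as np

def angular_eigenproblem(u=0.5, n=200):
    """Discretize d/dmu[ D(mu) dQ/dmu ] = Lambda (u + mu) Q on (-1, 1)
    with D(mu) = 1 - mu^2; the vanishing diffusivity at mu = +-1
    enforces the zero-flux boundary behaviour."""
    mu, h = np.linspace(-1.0, 1.0, n, retstep=True)
    D = 1.0 - mu**2
    Dp = 0.5 * (D[1:] + D[:-1])            # diffusivity at half-grid points
    A = np.zeros((n, n))
    for i in range(1, n - 1):              # flux-conservative stencil
        A[i, i - 1] = Dp[i - 1] / h**2
        A[i, i + 1] = Dp[i] / h**2
        A[i, i] = -(Dp[i - 1] + Dp[i]) / h**2
    # one-sided closure at the end points (D -> 0 there)
    A[0, 0], A[0, 1] = -Dp[0] / h**2, Dp[0] / h**2
    A[-1, -1], A[-1, -2] = -Dp[-1] / h**2, Dp[-1] / h**2
    W = u + mu                             # sign-indefinite weight u + mu
    lam = np.linalg.eigvals(A / W[:, None])
    return np.sort(lam.real)
```

Because the weight $`u+\mu `$ changes sign inside the interval, positive and negative eigenvalues coexist, which is what allows separate expansions upstream and downstream.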
The jump conditions investigated are those for a relativistic gas both upstream and downstream, $`u_{-}u_+=1/3`$, and for a strong shock in a medium with adiabatic index $`4/3`$ aguthmann:kirkduffy99 . We also investigate two different scattering operators, $`D_{\mu \mu 1}=1-\mu ^2`$ (isotropic small-angle scattering) and $`D_{\mu \mu 2}=(1-\mu ^2)\times (\mu ^2+0.01)^{1/3}`$, corresponding to scattering in weak Kolmogorov turbulence together with a rough prescription for avoiding the lack of scattering at $`\mu =0`$ aguthmann:heavensdrury88 . For high upstream Lorentz factors the power-law index settles at a value around $`4.23`$ for all equations of state, which is reproduced in the limiting case $`u_{-}\to 1`$. The scattering operator has only a minor effect. ## Summary These results are in agreement with the asymptotic Monte-Carlo results of Gallant et al. aguthmann:gallant and those of Bednarz & Ostrowski aguthmann:bednarz for $`\mathrm{\Gamma }_{-}\gtrsim 200`$. Anisotropic scattering, which has not been treated by Monte-Carlo simulations, leads to a slight steepening of the power-law spectrum, because fewer particles are able to cross the region $`\mu \approx 0`$ and return to the shock. From observations of GRB afterglows, Galama et al. aguthmann:galama and Waxman aguthmann:waxmann have found synchrotron spectral indices corresponding to $`s\approx 4.25`$, implying that the particles could indeed have been accelerated by the first-order Fermi mechanism operating at an ultrarelativistic shock front. ### This work was supported by the European Commission under the TMR programme, contract number ERBFMRX-CT98-0168
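For orientation, the ultrarelativistic value $`s\approx 4.23`$ can be contrasted with the familiar nonrelativistic diffusive-shock result, where the index depends only on the velocity jump. The helper below (the function name and the chosen velocities are illustrative, not taken from the paper) evaluates that standard limit:

```python
def fermi_index_nonrel(u_minus, u_plus):
    """Standard nonrelativistic first-order Fermi index for the
    phase-space density f ~ p^(-s):  s = 3 u_- / (u_- - u_+)."""
    return 3.0 * u_minus / (u_minus - u_plus)

# A strong nonrelativistic shock (compression ratio 4) gives s = 4,
# slightly harder than the ultrarelativistic asymptote s ~ 4.23.
s_strong = fermi_index_nonrel(4.0e-3, 1.0e-3)
```

A weaker shock (smaller velocity jump) steepens the spectrum in this limit, mirroring the trend that reduced confinement near the shock yields larger $`s`$.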
# Vibrational States of Glassy and Crystalline Orthoterphenyl ## 1 Introduction Orthoterphenyl (OTP) has been studied for more than forty years as a prototype of a non-associative, non-polar molecular glass former AnUb55 . Early studies concentrated on viscosity AnUb55 ; LaUb58 ; GrTu67 ; LaUh72 ; CuLU73 ; ScKB98 and $`\alpha `$-relaxation WiHa71 ; WiHa72 ; Arr75 ; VaFl80 ; FyWL81 ; DiNa88 ; MeFi88 ; FiBH89 as direct manifestations of the frequency-dependent glass transition, or on slow $`\beta `$ relaxation JoGo70a ; JoGo70b ; WuNa92 . In the past decade, a new framework has been set by mode-coupling theory Got91 ; GoSj95 ; Cum99 which describes the onset of structural relaxation on microscopic time scales. Consequently, the fast dynamics of OTP has been studied by incoherent BaFK89b ; PeBF91 ; KiBD92 ; WuKB93 ; ToSW98a and coherent BaFL95 ; ToSW97b ; ToWS98b inelastic neutron scattering as well as by depolarised light scattering StPG94 ; SoSR95 ; CuLD97b and is found to be in good accord with asymptotic results of the theory. Closed formulations of mode-coupling theory exist only for very simple systems consisting of spherical particles BeGS84 or mixtures thereof BoTh87 , and the generalisation to dipolar molecules already requires intimidating formal efforts FrFG97c ; ScSc97 . Intramolecular vibrations have not yet been explicitly considered, and therefore the fit of mode-coupling asymptotes to complex systems like OTP remains heuristic. Worse: theoretical studies of a hard-sphere system FrFG97b ; FuGM98 have shown that the full asymptotic behaviour is only reached when the relaxational time scale is separated by several orders of magnitude from the microscopic dynamics, whereas experimental studies are restricted to at best three decades in time or frequency.
On this background, the present communication shall supplement our neutron scattering studies of relaxational dynamics in OTP BaFK89b ; PeBF91 ; KiBD92 ; WuKB93 ; ToSW98a ; BaFL95 ; ToSW97b ; ToWS98b by an explicit investigation of vibrational states. Like other glasses, OTP possesses more low-frequency modes than expected from the Debye model. This excess, for obscure reasons named “boson peak”, is usually taken as a characteristic feature of disordered systems. To substantiate this interpretation, glasses and crystals must be compared systematically, which so far has been done for very few systems Lea69 ; GoLa81 ; Gom81 ; CrBA94 ; BeCM94 ; BeCA96 ; MeWP96a ; DoHH97 ; CuBF98 ; TaRV98 ; BeCC98 . OTP crystallises easily into a polycrystalline powder. Some effort is needed to prevent it from crystallising when cooling below the melting point T<sub>m</sub>=329 K. It forms a stable glassy structure when it is supercooled below the caloric glass transition temperature T<sub>g</sub>=243 K. One particular advantage is that large single-crystal specimens of several cm<sup>3</sup> can be grown. So we are able to compare scattering from a single crystal, from a powder-like polycrystal, and from the glass. The paper is organised as follows: the analysis of single-crystal dispersion relations in Sec. 2 enables us to fix the frequency range of acoustic modes in the ordered state. The vibrational density of states and the derived thermodynamic quantities of the glass and the polycrystal are compared with each other in Sec. 3. The temperature effect on the frequency distributions in the crystal and the glass is presented in Sec. 4. ## 2 Single Crystal: Phonon Dispersions ### 2.1 Structural Information The OTP molecule (1,2-diphenyl-benzene: C<sub>18</sub>H<sub>14</sub>) consists of a central benzene ring and two lateral phenyl rings in ortho position.
The crystal structure belongs to the orthorhombic space group $`P2_12_12_1`$ with four molecules per unit cell and lattice parameters $`a=18.583`$ Å, $`b=6.024`$ Å, $`c=11.728`$ Å at room temperature BrLe79 ; AiMO78 . A sketch of the structure is given in Fig. 1 of Ref. BrLe79 . For steric reasons, the lateral phenyl rings are necessarily rotated out of plane. In addition, the overcrowding in the molecule leads to significant bond-angle and out-of-plane distortions of the phenyl-phenyl bonds. Such structural irregularities may explain why OTP can be undercooled far more easily than $`m`$\- or $`p`$-terphenyl AnUb55 . In the crystal, the angles for the out-of-plane rotation of the lateral phenyl rings are $`\varphi _1\approx 43^{\circ }`$ and $`\varphi _2\approx 62^{\circ }`$ BrLe79 ; AiMO78 . For isolated molecules, an old electron diffraction study had suggested $`\varphi _1=\varphi _2=90^{\circ }`$, but newer experiments and calculations agree that in the gas or liquid phase $`40^{\circ }\lesssim \varphi _1,\varphi _2\lesssim 65^{\circ }`$ BaPo85 . ### 2.2 Triple-Axis Experiment For coherent neutron scattering, perdeuterated OTP (C<sub>18</sub>D<sub>14</sub>, $`>`$ 99 % deuteration) was used. Single crystals of high quality and considerable size (several cm<sup>3</sup>) were grown out of hot methanol solution, either by very slow cooling or by evaporation over several months. They grew preferentially along the shortest axis $`b`$. The phonon dispersion measurements were carried out on a specimen of about $`1\times 1\times 2`$ cm<sup>3</sup> for temperatures between 100 and 310 K. The experiments were performed on the cold triple-axis spectrometer IN12 at the Institut Laue Langevin (ILL), mostly with constant final wave vector $`k_f=1.55`$ Å<sup>-1</sup>. The lattice parameters as measured on IN12 at room temperature, $`a=18.53`$ Å, $`b=6.02`$ Å, $`c=11.73`$ Å, are in good agreement with the literature results from neutron BrLe79 and X-ray diffraction AiMO78 .
On cooling to 100 K, $`a`$ and $`c`$ change only by a few ppm, whereas $`b`$ contracts by about 3 %. ### 2.3 Acoustic Phonons In Fig. 1 we present the measured phonon dispersion of crystalline OTP at 200 K along the three main symmetry directions $`[100]`$, $`[010]`$, and $`[001]`$. Within the available beam time not all acoustic modes could be investigated. Some optic phonons were detected as well, but not studied in detail; as an example, the lowest optic branch in one direction is included in Fig. 1. Due to the crystal symmetry, modes of longitudinal and transverse character are pairwise degenerate at the Brillouin zone boundary. A first interesting point is that purely acoustic modes are confined to a rather small frequency range: crossing with optic-like branches occurs already at about 0.6 THz. Lattice dynamics calculations CrBA94 agree qualitatively with the measured acoustic dispersions. For instance, the crossing of the longitudinal acoustic mode with optic branches is predicted correctly. Quantitatively, the measured acoustic dispersions are considerably steeper than calculated. The sound velocities, determined from the initial slopes, are summarised in Table 1; they lie 20–60% above the calculated values. In all lattice directions, longitudinal sound modes are almost twice as fast as transverse modes. The single-crystal data may also be compared with sound velocities in the glass. Results from Brillouin scattering are summarised in Table 2. Taking the simple arithmetic average over the three crystal axes, the mean longitudinal and transverse sound velocities $`v_{\mathrm{L},\mathrm{T}}`$ exceed those of the glass by about 20 % and 27 %, respectively. ## 3 Glass and Polycrystal: Density of Vibrational States ### 3.1 Density of States and Neutron Scattering In the absence of crystalline order, it is no longer possible to measure selected phonon modes with well-defined polarisation and propagation vector.
For polycrystalline or amorphous samples, the distribution of vibrational modes can be conveyed only in the form of a spectral density of states (DOS). In principle, the DOS can be measured in absolute units by incoherent neutron scattering. However, as soon as one goes beyond the simplest textbook example of a harmonic, monoatomic, polycrystalline solid with a simple (ideally cubic) lattice, several difficulties arise and the very concept of a DOS becomes problematic. In a molecular solid, different atoms $`j`$ participate in given vibrational modes $`r`$ with different amplitudes $`𝐞_j^r`$. Therefore, atoms in non-equivalent positions have different vibrational densities of states $`g_j(\nu )`$. In neutron scattering, these $`g_j(\nu )`$ are weighted with the scattering cross sections $`\sigma _j`$. In the case of protonated OTP, we see almost only incoherent scattering from hydrogen. Worse, in the scattering law $`S(q,\nu )`$ the $`g_j(\nu )`$ of non-equivalent hydrogen atoms are weighted with a prefactor $`|𝐞_j^r|^2`$ and a Debye-Waller factor that also depends on $`j`$. Only for low-frequency, long-wavelength vibrations do the molecules (or some structural subunits) move as rigid bodies; then the $`g_j(\nu )=g(\nu )`$ become the same for all $`j`$, and the $`𝐞_j^r`$ become independent of $`r`$ WuKB93 ; CaPe75a . In this limit, the determination of $`g(\nu )`$ from a neutron scattering law remains meaningful and feasible. ### 3.2 Experiments and Data Reduction Vibrational spectra from glassy OTP between 160 and 245 K have been analysed previously WuKB93 . For the present comparison, we measured incoherent scattering from the glass at 100 K, and from the polycrystalline powder at 100, 200, and 300 K, on the time-of-flight spectrometer IN6 at the ILL with an incident wavelength of 5.1 Å. If not stated otherwise, we refer in the following to the 100 K data. In these experiments, protonated OTP was used.
OTP of $`>`$ 99 % purity was bought from Aldrich and further purified by repeated vacuum distillation. The raw data were converted into $`S(2\theta ,\nu )`$ and the container scattering was subtracted. Without interpolating to constant wavenumbers $`Q`$, we calculated the DOS directly from $`S(2\theta ,\nu )`$, preferentially using data from large scattering angles $`2\theta `$. Multi-phonon contributions were calculated by repeatedly convoluting $`g(\nu )`$ with itself and subtracted from $`S(2\theta ,\nu )`$ in an iterative procedure, as described in detail in WuKB93 . The so-obtained DOS shows a pronounced gap above $`\nu _\mathrm{g}\approx 5`$ THz. Comparing our results with model calculations Bus82 , we assign all vibrations below $`\nu _\mathrm{g}`$ to the 16 degrees of freedom needed to describe the crystal structure. For 16 low-lying modes in a molecule with 32 atoms, we expect an integrated DOS $$\int _0^{\nu _\mathrm{g}}d\nu \,g(\nu )=\frac{16}{32}=0.5.$$ (1) This condition is used to readjust the absolute scale of $`g(\nu )`$. In the Appendix, we argue that the difference between the measured and the re-normalised $`g(\nu )`$ is mainly an effect of multiple scattering. ### 3.3 Density of States in Orthoterphenyl Fig. 2 shows the DOS of glassy and crystalline OTP at 100 K. Rather broad distributions are found for both phases. In the glass a first shoulder around 1.5 THz is followed by a second at 3.5 THz, in accordance with results from Raman studies CrBA94 ; KiVP99 . As expected, the crystal DOS is more structured, in particular in the low-energy region. Distinct peaks at 0.6, 0.8, 1.1 and 1.5 THz become apparent. They are due to strong contributions from zone-boundary modes, as can be seen from Fig. 1. Compared to the glass, significant density is missing in the low-energy region and in the range from 1.5 to 3 THz, and reappears at higher frequencies around 3.5 THz. In order to show the low-energy modes on an enhanced scale, we plot in Fig.
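The renormalisation implied by Eq. (1) is a simple rescaling: integrate the measured $`g(\nu )`$ up to the gap and divide by the expected mode fraction. A schematic numpy version (the toy DOS shape and the exact gap value are illustrative placeholders for the measured data):

```python
import numpy as np

def trapezoid(y, x):
    """Plain trapezoidal rule (avoids version-dependent numpy names)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def renormalize_dos(nu, g, nu_gap=5.0, mode_fraction=16 / 32):
    """Rescale g(nu) so that its integral over 0 < nu < nu_gap equals
    the expected fraction of low-lying modes, Eq. (1)."""
    mask = nu < nu_gap
    return g * (mode_fraction / trapezoid(g[mask], nu[mask]))

nu = np.linspace(0.0, 8.0, 801)                      # frequency grid in THz
g = renormalize_dos(nu, nu**2 * np.exp(-nu / 1.2))   # toy "measured" DOS
```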
3 $`g(\nu )/\nu ^2`$, which in the one-phonon approximation is proportional to $`S(Q,\nu )`$ itself. In this representation, the excess of the glass over the crystal becomes evident. A well-defined peak appears around 0.35 THz, which is downshifted with respect to the first peak of the crystal at 0.6 THz and superposed on a long tail which is similar for glass and crystal. Note that the maximum of the boson peak at 0.35 THz is located below the lowest acoustic zone-boundary phonons in the crystal. The DOS of the polycrystalline sample is in accord with the measured dispersion of the single crystal: a small shoulder at 0.4 THz can be attributed to the transverse acoustic zone-boundary phonons in the $`[100]`$ direction, and the main peak at 0.6 THz corresponds to the transverse acoustic zone-boundary phonons in the other two lattice directions. The peak at 0.8 THz reflects the longitudinal acoustic zone-boundary phonon in the $`[001]`$ direction and the transverse optic phonon in the $`[010]`$ direction. We performed additional coherent scattering experiments on a deuterated sample, which show that the frequency of the boson peak maximum has no dispersion for wave numbers in the range 0.8 to 2 Å<sup>-1</sup>. Its intensity is modulated in phase with the static structure factor, in accordance with observations in other glasses MeWP96a ; BuWR96b . ### 3.4 Mean Square Displacement Another dynamic observable which can be obtained from neutron scattering is the atomic mean square displacement $`r^2(T)`$. Roughly speaking, $`r^2(T)^{3/2}`$ measures the volume to which an atom remains confined in the limit $`t\to \mathrm{\infty }`$.
For a large class of model situations (harmonic solid, Markovian diffusion, …) it can be obtained directly from the Gaussian $`Q`$ dependence of the elastic scattering intensity $$S(Q,\nu =0)=\mathrm{exp}(-Q^2r(T)^2).$$ (2) For a harmonic solid, $$r(T)^2=\frac{\hbar ^2}{6Mk_\mathrm{B}T}\int _0^{\mathrm{\infty }}d\nu \,\frac{g(\nu )}{\beta }\mathrm{coth}(\frac{\beta }{2})$$ (3) with $`\beta =h\nu /k_\mathrm{B}T`$ crosses over from zero-point oscillations $`r^2(0)`$ to a linear regime $`r^2(T)\propto T`$ x43 . In any real experiment, which integrates over the elastic line with a resolution $`\mathrm{\Delta }\nu `$, one actually measures atomic displacements within finite times $`t_\mathrm{\Delta }\approx 2\pi /\mathrm{\Delta }\nu `$. Fig. 4 shows mean square displacements of OTP, determined according to (2) from elastic back-scattering (fixed-window scans on IN13, with $`t_\mathrm{\Delta }\approx 100`$–$`200`$ psec) and from Fourier-deconvoluted time-of-flight spectra (taking the plateau $`S(Q,t_\mathrm{\Delta })`$ with $`t_\mathrm{\Delta }\approx 5`$–$`10`$ psec from IN6 data that were Fourier transformed and divided by the Fourier transform of the measured resolution function). For the glassy sample, a direct comparison can be made and shows good agreement between IN6 and IN13. In the polycrystalline sample, the displacement is smaller than in the glass at all temperatures. The lines in Fig. 4 are calculated through (3) from the DOS at 100 K. For low temperatures, the $`r^2(T)`$ are in full accord with the values determined through (2). This comparison can be seen as a cross-check between the analyses of elastic and inelastic neutron scattering data. Equation (3) gives not only the absolute value of $`r^2(T)`$, but also enables us to read off which modes contribute most to the atomic displacement. To this end, we restrict the integration (3) to modes with $`0<\nu <\nu ^{}`$. The inset in Fig. 4 shows the relative value $`r^2(\nu ^{};T)^{1/2}/r^2(\mathrm{\infty };T)^{1/2}`$ for $`T=100`$ K as a function of $`\nu ^{}`$.
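Equation (3) is straightforward to evaluate for any tabulated $`g(\nu )`$. The sketch below uses a Debye DOS and placeholder parameters (the cutoff frequency and atomic mass are invented for illustration, not OTP values) and shows the crossover to the linear high-temperature regime:

```python
import numpy as np

H = 6.62607015e-34       # Planck constant (J s)
HBAR = H / (2 * np.pi)
KB = 1.380649e-23        # Boltzmann constant (J/K)

def msd(T, nu, g, M):
    """Harmonic mean square displacement, Eq. (3):
    r^2(T) = hbar^2/(6 M kB T) * int dnu g(nu)/beta * coth(beta/2),
    with beta = h nu / (kB T); nu in Hz, M in kg."""
    beta = H * nu / (KB * T)
    integrand = g / beta / np.tanh(beta / 2)     # coth = 1/tanh
    integral = float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(nu)))
    return HBAR**2 / (6 * M * KB * T) * integral

NU_D = 1.0e12                                    # toy Debye frequency (Hz)
nu = np.linspace(NU_D * 1e-6, NU_D, 20001)       # avoid the 0/0 at nu = 0
g = 9 * nu**2 / NU_D**3                          # Debye DOS, 3 modes per atom
M = 3.0e-26                                      # placeholder atomic mass (kg)
```

Well above the Debye temperature the integral scales as $`T^2`$, so $`r^2(T)`$ grows linearly in $`T`$, matching the classical regime described in the text.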
Modes below 0.6 THz in the crystal, or 0.4 THz in the glass, contribute about 55% to the total displacement; 90% is reached only at about 2 THz. This means that the modes which are responsible for the mean square displacement and the Debye-Waller factor are not rigid-body motions alone, but contain a significant contribution from intramolecular degrees of freedom. ### 3.5 Debye Limit and Sound Velocity Assuming Debye behaviour at very low frequencies, $`g(\nu )=9\nu ^2/\nu _D^3`$, we can compare the neutron DOS with the Debye frequency calculated from the experimental density and sound velocities: $$(2\pi \nu _\mathrm{D})^{-3}=\frac{V}{18\pi ^2N}\left(\frac{1}{v_\mathrm{L}^3}+\frac{2}{v_\mathrm{T}^3}\right)$$ (4) where $`N`$ is the number of molecules in the volume $`V`$. For the sound velocities of the crystal, we take the average $`\langle v^{-3}\rangle ^{-1/3}`$ over the three lattice directions of our triple-axis experiment at 200 K. For the glass, we take literature data HiWa81 from Brillouin scattering at 220 K. The so-obtained Debye levels $`9/\nu _D^3`$ are indicated by arrows in Fig. 3. In both cases, but in particular for the glass, the neutron DOS extrapolates to a far higher Debye level than expected from the sound velocities. Part of the large discrepancy may be due to inaccurate estimates of the sound velocities (see tables; for instance, the velocities from Brillouin scattering are based on a temperature extrapolation of the refraction index) or to the combination of the broad resolution tail of IN6 with a boson peak that is located at exceptionally low frequency in OTP. For the most part, however, we must conclude that our neutron scattering experiment on the glass simply does not reach the Debye regime. In many other glass-forming systems, similar discrepancies between DOS and sound velocities are established as well BuPK88 ; GiRB93 ; WiBD98 , although in some substances a better accord is found ZoAC95 ; WuPC95 .
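Equation (4) is the standard two-branch Debye construction. A small helper (units and the sample numbers are illustrative) with a consistency check against the familiar single-velocity form $`\omega _D^3=6\pi ^2nv^3`$:

```python
import numpy as np

def debye_frequency(n, v_l, v_t):
    """Debye frequency nu_D from number density n = N/V (m^-3) and
    longitudinal/transverse sound velocities (m/s), Eq. (4):
    (2 pi nu_D)^-3 = (1 / 18 pi^2 n) * (1/v_L^3 + 2/v_T^3)."""
    omega_cubed = 18 * np.pi**2 * n / (1 / v_l**3 + 2 / v_t**3)
    return omega_cubed ** (1 / 3) / (2 * np.pi)
```

Because the transverse branch enters twice and with the inverse cube of its (smaller) velocity, the slow transverse modes dominate $`\nu _\mathrm{D}`$, which is why the Debye level is so sensitive to the transverse sound velocity.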
In any case, our data leave open the possibility that the low-frequency spectra of the glass and the crystal contain non-harmonic, relaxational contributions, as found recently by light scattering MoFM99 ; MoCL99 , or additional glassy excitations. ### 3.6 Heat Capacity For a harmonic solid, the heat capacity is given by the integral $$c_p(T)\simeq c_V(T)=N_{\mathrm{at}}R\int _0^{\mathrm{\infty }}d\nu \,g(\nu )\frac{(\beta /2)^2}{\mathrm{sinh}^2(\beta /2)}.$$ (5) With a Debye DOS, this yields the well-known $`c_p\propto T^3`$. Therefore, in Fig. 5 experimental data ChBe72 for the specific heat of glassy and polycrystalline OTP are plotted as $`c_p/T^3`$. In this representation, a boson peak at 0.35 THz is expected to lead to a maximum at about 4 K. The lines, which are calculated through (5) from the neutron DOS, agree with the measured data for both glass and crystal in absolute units and over a broad temperature range. Similar accord has been reported for a number of other systems CrBA94 ; BeCA96 ; TaRV98 ; WuPC95 ; BuPN86 . At higher temperatures, the heat capacities of crystalline and glassy OTP differ only slightly (e.g. at 200 K: $`c_p^{\mathrm{cryst}}=182.8`$ J mol<sup>-1</sup> K<sup>-1</sup> and $`c_p^{\mathrm{glass}}=186.1`$ J mol<sup>-1</sup> K<sup>-1</sup> ChBe72 ). Towards high frequencies, the DOS becomes less sensitive to the presence or absence of crystalline order (as suggested by the representation of Fig. 3 for $`\nu \gtrsim 1`$ THz), and the remaining differences (clearly visible in Fig. 2) are largely averaged out by the integral (5). ## 4 Thermal effects Figures 6–8 show the temperature evolution of vibrations in the single crystal, the polycrystal and the glass. In the single crystal, some phonons become softer on heating, others become stiffer, depending on their direction, as exemplified in Fig. 6. Broadening could not be observed because the linewidth always remained limited by the resolution of the spectrometer.
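Equation (5) can likewise be evaluated for any tabulated DOS. The sketch below (Debye DOS with an invented cutoff, normalised to 3 modes per atom) reproduces both the Dulong-Petit limit at high temperature and the $`T^3`$ law that motivates the $`c_p/T^3`$ plot of Fig. 5:

```python
import numpy as np

R = 8.314462618          # gas constant (J mol^-1 K^-1)
H = 6.62607015e-34       # Planck constant (J s)
KB = 1.380649e-23        # Boltzmann constant (J/K)

def heat_capacity(T, nu, g, n_at):
    """Harmonic heat capacity, Eq. (5):
    c = N_at * R * int dnu g(nu) (beta/2)^2 / sinh^2(beta/2)."""
    beta = H * nu / (KB * T)
    integrand = g * (beta / 2) ** 2 / np.sinh(beta / 2) ** 2
    return n_at * R * float(
        np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(nu)))

NU_D = 1.0e12                               # toy Debye frequency (Hz)
THETA = H * NU_D / KB                       # corresponding Debye temperature
nu = np.linspace(NU_D * 1e-6, NU_D, 40001)
g = 9 * nu**2 / NU_D**3                     # integrates to 3 modes per atom
```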
The temperature dependence of the different phonons is in accord with results from Raman scattering on a polycrystal CrBA94 , where substantial positive and negative frequency shifts and broadening were observed already above 70 K, indicating the presence of anharmonicities even at these low temperatures. The opposite trends in the temperature evolution of different modes ensure that the overall anharmonic effects are small, although individual modes are clearly anharmonic. Through exceptionally large negative Grüneisen parameters, the softening of certain phonons may be related to anisotropic thermal expansion. Negative expansion coefficients have indeed been found in crystalline OTP at much lower temperatures ($`T<30`$ K) RaVB95 . In the frequency distributions of the polycrystal, systematic temperature effects are detected as well (Fig. 7a). In Fig. 7b we find a slight increase in the Debye limit ($`\nu \to 0`$) of $`g(\nu )/\nu ^2`$, which may be explained by regular thermal expansion and softening — the kind of effects which can be accommodated in the harmonic theory of solids by admitting a temperature-dependent, “quasiharmonic” density of states $`g(\nu ;T)`$. In the glass the temperature effects are weak; only around 2 THz may a systematic change in $`g(\nu )`$ be recognised in Fig. 8a. In the low-frequency DOS, shown as $`g(\nu )/\nu ^2`$ in Fig. 8b, the temperature variation is stronger than in the polycrystalline counterpart. The increase starts at about 160 K, far below the glass transition. The same anharmonicity of low-lying modes is also responsible for the temperature dependence of the mean square displacement. Fig. 4 shows that $`r^2(T)`$ starts to increase faster than expected from (3) already at about 140 K. These anharmonic contributions to $`r^2(T)`$ amount to about 20% at 200 K for both phases, with the crystal value slightly smaller.
Note that around 140 K, deviations from the proportionality $`\mathrm{ln}S(Q,\nu =0)\propto T`$ are also observed in coherent elastic scans on the backscattering instrument IN16. The additional increase of $`r^2(T)`$ above about 240 K has been consistently interpreted as the onset of fast $`\beta `$ relaxation PeBF91 . At higher temperatures, in the presence of quasielastic scattering from relaxational modes, the multi-phonon cross-section becomes ill-defined, and the iterative determination of a DOS is no longer possible. With a temperature-dependent DOS, the heat capacity can be expressed as HuAl75 $$c_p(T)=N_{\mathrm{at}}R\int _0^{\mathrm{\infty }}d\nu \,g(\nu ;T)\frac{(\beta /2)^2}{\mathrm{sinh}^2(\beta /2)}\left[1-\left(\frac{\partial \mathrm{ln}\nu }{\partial \mathrm{ln}T}\right)_p\right]$$ (6) where the second term arises explicitly from the shift of phonon modes. The temperature derivative of the logarithmic moment $$\langle \mathrm{ln}\nu \rangle =\int _0^{\mathrm{\infty }}d\nu \,g(\nu )\mathrm{ln}\nu $$ (7) can then be taken as a direct measure of the degree of anharmonicity. For Fig. 9, the integral (7) has been evaluated with an upper integration limit $`\nu _\mathrm{g}=5`$ THz. From the plot of $`\langle \mathrm{ln}\nu \rangle `$ versus $`\mathrm{ln}T`$ we estimate slopes $`\partial \langle \mathrm{ln}\nu \rangle /\partial \mathrm{ln}T`$ of $`-0.019`$ for the polycrystal and $`-0.012`$ for the glass. The stronger anharmonicity of the crystal can be traced back to the softening of high-frequency modes. ## 5 Discussion Our incoherent scattering experiments reconfirm that it is possible to determine a meaningful DOS for a molecular system like OTP. Cross-checks versus $`r^2(T)`$ and $`c_p(T)`$ show excellent accord, provided the absolute scale of $`g(\nu )`$ is corrected for multiple-scattering effects. By coherent scattering on a single crystal, low-lying phonon branches could be resolved. The low-frequency peaks in the DOS of the polycrystal can be assigned to zone-boundary modes. Optic phonons and hybridisation are found at rather low frequencies.
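The anharmonicity measure of Eqs. (6) and (7) is easy to reproduce numerically: tabulate the logarithmic moment for $`g(\nu ;T)`$ at several temperatures and fit the slope against $`\mathrm{ln}T`$. A sketch with an artificial, uniformly softening DOS (the softening model and its rate are invented to exercise the formula, not fitted to OTP):

```python
import numpy as np

def trapz(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def log_moment(nu, g):
    """Logarithmic moment <ln nu> of a DOS, normalised internally (Eq. 7)."""
    return trapz(g * np.log(nu), nu) / trapz(g, nu)

def softened_dos(T, alpha=0.015, T0=100.0):
    """Toy quasiharmonic DOS: every mode frequency of a flat band
    [0.1, 5.0] THz is scaled by (T/T0)^-alpha (uniform softening)."""
    nu0 = np.linspace(0.1, 5.0, 500)
    return nu0 * (T / T0) ** (-alpha), np.ones_like(nu0)

temps = np.array([100.0, 150.0, 200.0, 300.0])
moments = np.array([log_moment(*softened_dos(T)) for T in temps])
slope = np.polyfit(np.log(temps), moments, 1)[0]   # = -alpha for this toy model
```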
The excess of $`g(\nu )`$ of the glass over the crystal is restricted to frequencies below about 0.5 THz, the region where the crystal possesses mainly acoustic modes. Towards higher frequencies, the DOS is less structured in the glass than in the polycrystal, but the overall spectral distribution is rather similar. The main result of this communication concerns the strong thermal effects, which were manifested earlier close to the glass transition. With increasing temperature, the glass shows less anharmonicity than the polycrystal. However, this may be partly due to a cancellation of opposite effects. An example is provided by different phonon branches of the single crystal, for which positive, negative, and nearly vanishing temperature coefficients are found. In both crystal and glass, the anharmonic contributions to the mean square displacement occur above $`150`$ K. However, the anomalous increase in $`r^2(T)`$ is much stronger in the glass, leading to the known glass-transition anomalies. Finally, concerning our previous analysis of quasielastic scattering in supercooled liquid OTP, we can state: hybridisation and coupling between inter- and intramolecular modes play an important role for frequencies higher than 0.6 THz. As a consequence, the quasielastic scattering, which is confined to below 0.25 THz, is clearly dominated by rigid-body motions, and the analysis in terms of relaxations in the framework of mode-coupling theory remains reasonable. ## Acknowledgements We thank A. Doerk (Institut für Physikalische Chemie, Mainz) for help in purifying several samples. Financial support by BMBF under project numbers 03fu4dor5 and 03pe4tum9 is gratefully acknowledged. ## Appendix The determination of a density of states in absolute units is complicated by the inevitable presence of multiple scattering. In the inelastic spectrum of a glass, most multiple-scattering intensity comes from elastic-inelastic scattering histories Sea75 .
This contribution is nearly isotropic, tends to smear out the characteristic $`Q`$ dependence of the phonon scattering, and dominates at small angles where the single-scattering signal is expected to vanish as $`Q^2`$. In our previous analysis of incoherent scattering from glassy OTP, we performed a Monte-Carlo simulation for an ideal harmonic system with given $`g(\nu )`$ WuKB93 . Here, we simply expand the scattering function $`S(Q,\nu )=A(\nu )+Q^2B(\nu )+\mathrm{\dots }`$, which enables us to estimate the multiple scattering $`A(\nu )`$ by $`Q^2\to 0`$ extrapolation from the $`S(2\theta (Q,\nu ),\nu )`$ spectra. Results are shown in Fig. 10a. For 200 K, good accord with the Monte-Carlo simulation is found, and measured spectra can be corrected by simply subtracting $`A(\nu )`$. The interplay between multi-phonon scattering and multiple scattering limits this procedure to frequencies $`\nu \lesssim 3`$ THz. In the low-frequency region, the subtraction of $`A(\nu )`$ leads to almost the same DOS as the ad hoc normalisation of Sect. 3.2. This confirms our assignment of the 16 modes below the gap, and it shows at the same time that multiple scattering is indeed the main obstacle to a quantitative determination of a generalised density of vibrational states. Figure captions: Figure 1: Phonon dispersion relations of crystalline orthoterphenyl as measured on the triple-axis spectrometer IN12 at 200 K. The wave vectors are given in reciprocal lattice units. Figure 2: Renormalised density of vibrational states of glassy ($`\mathrm{}`$) and crystalline OTP ($`\mathrm{}`$) at 100 K obtained from IN6. Figure 3: Density of vibrational states of glassy ($`\mathrm{}`$) and crystalline OTP ($`\mathrm{}`$) at 200 K. To emphasise the excess density of states of the glass over the crystal, $`g(\nu )/\nu ^2`$ is shown. The arrows mark the Debye limit $`\nu \to 0`$ calculated from sound velocities and densities (at 220 K for the glass and at 200 K for the crystal).
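The $`Q^2\to 0`$ extrapolation described here is just a per-frequency linear fit in $`Q^2`$. A compact numpy sketch (the synthetic $`S(Q,\nu )`$ is invented to demonstrate the recovery of $`A(\nu )`$):

```python
import numpy as np

def multiple_scattering_estimate(q, s_qnu):
    """Fit S(Q, nu) = A(nu) + Q^2 B(nu) at each frequency (one column
    of s_qnu per frequency) and return the Q^2 -> 0 intercepts A(nu)."""
    coeffs = np.polynomial.polynomial.polyfit(q**2, s_qnu, 1)
    return coeffs[0]          # row 0 holds the intercepts, one per column

q = np.linspace(0.4, 2.0, 12)                 # momentum transfers (arb. units)
a_true = np.array([0.8, 0.5, 0.2])            # "multiple scattering" level
b_true = np.array([1.5, 1.0, 0.6])            # single-scattering slope in Q^2
s_qnu = a_true + q[:, None] ** 2 * b_true     # noise-free synthetic spectra
```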
Figure 4: Mean square displacement $`r^2(T)`$ of glassy OTP from elastic back-scattering (IN13, $`\mathrm{}`$) and from time-of-flight spectra (IN6, plateau value of $`S(q,t)`$, $`\mathrm{}`$). For comparison, IN6 data of polycrystalline OTP are also shown ($`\mathrm{}`$). The lines are calculated from the densities of states at 100 K of the glass (dashed) and the polycrystal (solid line). The inset shows the relative value of $`d=\langle r^2\rangle ^{1/2}`$ at 100 K for the crystal and the glass when the integral (3) is restricted to modes with $`0<\nu <\nu ^{}`$. Figure 5: Experimental heat capacities $`c_p/T^3`$ of glassy and crystalline OTP ChBe72 , compared to $`c_p/T^3`$ calculated from the density of vibrational states $`g(\nu )`$ at 100 K. Figure 6: Examples of the temperature dependence of phonons along different directions. Hardening (left, note the negative energy scale), softening (middle), and nearly no temperature variation (right) are observed with increasing temperature. The intensities are Bose corrected. The remaining differences are due to the use of different set-ups (collimation). Figure 7: (a) Vibrational density of states of polycrystalline OTP for three different temperatures. Note the significant temperature dependence of the DOS at all frequencies. — (b) To emphasise the temperature evolution at small energies, the same data are plotted as $`g(\nu )/\nu ^2`$. The temperature dependence of the low-frequency modes becomes apparent. Figure 8: (a) Temperature dependence of the vibrational density of states of glassy OTP. — (b) The low-frequency region of (a) plotted as $`g(\nu )/\nu ^2`$. Above 160 K the density of low-frequency modes increases strongly. Figure 9: The logarithmic moment $`\langle \mathrm{ln}\nu \rangle `$ versus $`\mathrm{ln}T`$ of the density of states of glassy and crystalline OTP.
From the straight line, one obtains the slope $`d\langle \mathrm{ln}\nu \rangle /d\mathrm{ln}T`$, which is a measure of the anharmonicity.

Figure 10: (a) Comparison of the experimental data at two scattering angles with the multiple-scattering contribution obtained by a $`Q^2\rightarrow 0`$ extrapolation. The MS nearly completely dominates the scattering signal at small angles. — (b) Comparison of the density of states obtained from the MS-corrected spectra with the ad hoc normalized DOS.
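The $`Q^2\rightarrow 0`$ extrapolation used here to estimate the multiple-scattering contribution $`A(\nu )`$ amounts to a linear least-squares fit of $`S(Q,\nu )`$ against $`Q^2`$ at each frequency bin. The following is a minimal sketch with synthetic data; the function name and the test values are ours, not from the experiment.

```python
def extrapolate_ms(q_values, spectra):
    """Fit S(Q, nu) = A(nu) + Q^2 B(nu) at each frequency bin and return
    the Q^2 -> 0 intercepts A(nu) (the multiple-scattering estimate)
    together with the slopes B(nu).

    spectra is a list over Q of lists over nu bins."""
    x = [q * q for q in q_values]          # regress against Q^2
    n = len(x)
    xbar = sum(x) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    a_of_nu, b_of_nu = [], []
    for nu_bin in zip(*spectra):           # one tuple of S values per nu bin
        ybar = sum(nu_bin) / n
        b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, nu_bin)) / sxx
        a_of_nu.append(ybar - b * xbar)    # intercept = A(nu)
        b_of_nu.append(b)
    return a_of_nu, b_of_nu
```

In practice the intercepts would then be subtracted from the measured spectra, as described above for the 200 K data.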
# Equilibrium orbit analysis in a free-electron laser with a coaxial wiggler

## I. INTRODUCTION

Most free-electron lasers employ a wiggler with either a helically symmetric magnetic field generated by bifilar current windings or a linearly symmetric magnetic field generated by alternating stacks of permanent magnets. A uniform static guide magnetic field is also frequently employed. Single-particle orbits in these helical and planar fields combined with an axial guide field have been analyzed in detail and have played a role in the development of free-electron lasers. Harmonics of gyroresonance for off-axis electrons, caused by the radial variation of the magnetic field of a helical wiggler, were found by Chu and Lin. Recently the feasibility of using a coaxial wiggler in a free-electron laser has been investigated. Freund et al. studied the performance of a coaxial hybrid iron wiggler consisting of a central rod and a coaxial ring of alternating ferrite and dielectric spacers inserted in a uniform static axial magnetic field. McDermott et al. proposed the use of a wiggler consisting of a coaxial periodic permanent magnet and transmission line. Coaxial devices offer the possibility of generating higher power than conventional free-electron lasers, with a reduction in the beam energy required to generate radiation of a given wavelength. In the present paper, single-particle orbits in a coaxial wiggler are studied. The wiggler magnetic field is radially dependent, with the fundamental plus the third spatial harmonic component present together with a uniform static axial magnetic field. In Sec. II the scalar equations of motion are introduced and reduced to a form which is correct to first order in the wiggler field. In Sec. III solutions of the equations of motion are developed in a form suitable for computing the electron orbital velocity and trajectory in the radially dependent magnetic field of a coaxial wiggler.
The special case of a radially independent wiggler is also analyzed. In Sec. IV the results of numerical computations of the wiggler field components, velocity components, radial excursions, and the $`\mathrm{\Phi }`$ function for locating negative-mass regimes are presented and discussed. In Sec. V some conclusions are presented.

## II. EQUATIONS OF MOTION

Electron motions in a static magnetic field $`𝐁`$ may be determined by solution of the vector equation of motion $$\frac{d𝐯}{dt}=-\frac{e}{\gamma mc}𝐯\times 𝐁$$ (1) where $`𝐯`$, $`e`$, and $`m`$ are the velocity, charge, and (rest) mass, respectively, of the electron. The Lorentz factor $`\gamma `$ is a constant given by $$\gamma =(1-v^2/c^2)^{-1/2}$$ (2) where $`v=|𝐯|`$ is the constant electron speed. The total magnetic field inside a coaxial wiggler will be taken to be of the form $`𝐁=B_r\widehat{𝐫}+B_z\widehat{𝐳},`$ (3) $`B_r=B_wF_r(r,z),`$ (4) $`B_z=B_0+B_wF_z(r,z),`$ (5) where $`B_0`$ is a uniform static axial guide field, and $`F_r`$ and $`F_z`$ are known functions of the cylindrical coordinates $`r`$ and $`z`$. Equation (1) may be written in the scalar form $`{\displaystyle \frac{dv_r}{dt}}-{\displaystyle \frac{v_\theta ^2}{r}}=-v_\theta (\mathrm{\Omega }_0+\mathrm{\Omega }_wF_z),`$ (6) $`{\displaystyle \frac{dv_\theta }{dt}}+{\displaystyle \frac{v_\theta v_r}{r}}=v_r(\mathrm{\Omega }_0+\mathrm{\Omega }_wF_z)-v_z\mathrm{\Omega }_wF_r,`$ (7) $`{\displaystyle \frac{dv_z}{dt}}=v_\theta \mathrm{\Omega }_wF_r;`$ (8) $`\mathrm{\Omega }_0`$ and $`\mathrm{\Omega }_w`$ are relativistic cyclotron frequencies given by $`\mathrm{\Omega }_0={\displaystyle \frac{eB_0}{\gamma mc}},`$ (9) $`\mathrm{\Omega }_w={\displaystyle \frac{eB_w}{\gamma mc}}.`$ (10) Initial conditions will be chosen such that the transverse motion of the electron in the $`B_0`$ field vanishes in the limit as $`B_w`$ approaches zero.
Then, in order to develop a solution to first order in the wiggler field $`B_w`$, the scalar equations of motion will be approximated by $`{\displaystyle \frac{dv_r}{dt}}=-\mathrm{\Omega }_0v_\theta ,`$ (11) $`{\displaystyle \frac{dv_\theta }{dt}}=\mathrm{\Omega }_0v_r-v_{\parallel }\mathrm{\Omega }_wF_r,`$ (12) $`{\displaystyle \frac{dv_z}{dt}}=0,`$ (13) with the wiggler field approximated by the fundamental plus the third spatial harmonic component, $$F_r=F_{r1}sin(k_wz)+F_{r3}sin(3k_wz),$$ (14) where $`F_{rn}\equiv G_n^{-1}[S_nI_1(nk_wr)+T_nK_1(nk_wr)],`$ (15) $`G_n\equiv I_0(nk_wR_{out})K_0(nk_wR_{in})-I_0(nk_wR_{in})K_0(nk_wR_{out}),`$ (16) $`S_n\equiv {\displaystyle \frac{2}{n\pi }}sin({\displaystyle \frac{n\pi }{2}})[K_0(nk_wR_{in})+K_0(nk_wR_{out})],`$ (17) $`T_n\equiv {\displaystyle \frac{2}{n\pi }}sin({\displaystyle \frac{n\pi }{2}})[I_0(nk_wR_{in})+I_0(nk_wR_{out})],`$ (18) and $`n=1,3`$; $`R_{in}`$ and $`R_{out}`$ are the inner and outer radii of the coaxial waveguide, $`k_w=2\pi /\lambda _w`$ where $`\lambda _w`$ is the wiggler (spatial) period, and $`I_0`$, $`I_1`$, $`K_0`$, and $`K_1`$ are modified Bessel functions.

## III. ORBITAL ANALYSIS

### A. Radially dependent wiggler

The scalar equations of motion may be solved to determine the electron orbital velocity and trajectory in a coaxial wiggler. Equation (13) yields $$v_z=v_{\parallel }$$ (19) where the constant $`v_{\parallel }`$ is the root-mean-square axial velocity component. With the initial axial position taken to be $`z_0=0`$, $$z=v_{\parallel }t.$$ (20) Equations (11), (12), (14), and (20) may be combined to obtain $$\frac{d^2v_r}{dt^2}+\mathrm{\Omega }_0^2v_r=f(t)$$ (21) where $$f(t)=\mathrm{\Omega }_0\mathrm{\Omega }_wv_{\parallel }[F_{r1}sin(k_wv_{\parallel }t)+F_{r3}sin(3k_wv_{\parallel }t)].$$ (22) By the method of variation of parameters, a solution of Eq.
(21) may be obtained in the form $`v_r=[-v_{\theta 0}+\mathrm{\Omega }_0^{-1}{\displaystyle \int _0^t}f(\tau )cos(\mathrm{\Omega }_0\tau )𝑑\tau ]sin(\mathrm{\Omega }_0t)`$ (23) $`+[v_{r0}-\mathrm{\Omega }_0^{-1}{\displaystyle \int _0^t}f(\tau )sin(\mathrm{\Omega }_0\tau )𝑑\tau ]cos(\mathrm{\Omega }_0t)`$ (24) where $`v_{r0}`$ and $`v_{\theta 0}`$ are the initial radial and azimuthal velocity components. Then Eq. (11) yields $`v_\theta =[v_{\theta 0}-\mathrm{\Omega }_0^{-1}{\displaystyle \int _0^t}f(\tau )cos(\mathrm{\Omega }_0\tau )𝑑\tau ]cos(\mathrm{\Omega }_0t)`$ (25) $`+[v_{r0}-\mathrm{\Omega }_0^{-1}{\displaystyle \int _0^t}f(\tau )sin(\mathrm{\Omega }_0\tau )𝑑\tau ]sin(\mathrm{\Omega }_0t).`$ (26) The orbital velocity is given to first order in $`B_w`$ by Eqs. (23)-(26) and (19). The trajectory may then be computed using $`r=r_0+{\displaystyle \int _0^t}v_r(\tau )𝑑\tau ,`$ (27) $`\theta =\theta _0+{\displaystyle \int _0^t}v_\theta (\tau )𝑑\tau ,`$ (28) and Eq. (20).

### B. Radially uniform wiggler

By neglecting the radial variation of $`F_{r1}`$ and $`F_{r3}`$, a solution of Eq. (21) may be obtained in the form $$v_r=\alpha _1sin(k_wv_{\parallel }t)+\alpha _3sin(3k_wv_{\parallel }t),$$ (29) where $$\alpha _n=\frac{\mathrm{\Omega }_0\mathrm{\Omega }_wv_{\parallel }F_{rn}}{\mathrm{\Omega }_0^2-n^2k_w^2v_{\parallel }^2}(n=1,3).$$ (30) Equation (11) then yields $$v_\theta =-\mathrm{\Omega }_0^{-1}k_wv_{\parallel }\alpha _1cos(k_wv_{\parallel }t)-\mathrm{\Omega }_0^{-1}(3k_wv_{\parallel }\alpha _3)cos(3k_wv_{\parallel }t).$$ (31) The corresponding initial conditions are $$v_{r0}=0,$$ (32) $$v_{\theta 0}=-\mathrm{\Omega }_0^{-1}k_wv_{\parallel }\alpha _1-\mathrm{\Omega }_0^{-1}(3k_wv_{\parallel }\alpha _3).$$ (33) Root-mean-square values of the velocity components may be determined by use of Eqs. (29), (30), and (19). Replacing $`v^2`$ by its root-mean-square value in Eq.
(2) then yields $$\frac{v_{\parallel }^2}{c^2}[1+\frac{1}{2}(\frac{\alpha _1}{v_{\parallel }})^2+\frac{1}{2}\mathrm{\Omega }_0^{-2}k_w^2\alpha _1^2+\frac{1}{2}(\frac{\alpha _3}{v_{\parallel }})^2+\frac{9}{2}\mathrm{\Omega }_0^{-2}k_w^2\alpha _3^2]=1-\gamma ^{-2}.$$ (34) The derivative of $`v_{\parallel }`$ with respect to $`\gamma `$ may be obtained from Eq. (34) and, after some algebra, cast into the form $$\frac{dv_{\parallel }}{d\gamma }=\frac{c^2}{\gamma \gamma _{\parallel }^2v_{\parallel }}\mathrm{\Phi }$$ (35) where $`\gamma _{\parallel }=(1-v_{\parallel }^2/c^2)^{-1/2}`$ and $$\mathrm{\Phi }=1-\frac{\sum _{n=1,3}(\mathrm{\Omega }_0^2-n^2k_w^2v_{\parallel }^2)^{-3}\gamma _{\parallel }^2\mathrm{\Omega }_w^2F_{rn}^2\mathrm{\Omega }_0^2(\mathrm{\Omega }_0^2+3n^2k_w^2v_{\parallel }^2)}{2+\sum _{n=1,3}(\mathrm{\Omega }_0^2-n^2k_w^2v_{\parallel }^2)^{-3}\mathrm{\Omega }_w^2F_{rn}^2\mathrm{\Omega }_0^2(\mathrm{\Omega }_0^2+3n^2k_w^2v_{\parallel }^2)}.$$ (36) This equation may be used to establish the existence of a negative-mass regime.

## IV. NUMERICAL RESULTS

A numerical computation was conducted to investigate the properties of the equilibrium orbits of electrons inside a coaxial wiggler. The wiggler wavelength $`2\pi /k_w`$ and the lab-frame electron density $`n_0`$ were taken to be 3 cm and $`10^{12}`$ cm<sup>-3</sup>, respectively. The wiggler magnetic field $`B_w`$ was taken to be 3745 G, which corresponds to the relativistic wiggler frequency $`\mathrm{\Omega }_w/ck_w=0.442`$. The electron-beam energy $`(\gamma -1)m_0c^2`$ was taken to be $`700`$ keV, corresponding to a Lorentz factor $`\gamma =2.37`$. The axial magnetic field $`B_0`$ was varied from 0 to 25.3 kG, corresponding to a variation from 0 to 3 in the normalized relativistic cyclotron frequency $`\mathrm{\Omega }_0/ck_w`$ associated with $`B_0`$. The inner and outer radii of the coaxial wiggler were assumed to be $`R_{in}=1.5`$ cm and $`R_{out}=3`$ cm, respectively. Figure 1 shows the variation of the axial velocity of the quasi-steady-state orbits with the axial guide magnetic field for three classes of solutions.
Group I orbits, for which $`0<\mathrm{\Omega }_0<k_wv_{\parallel }`$; group II orbits, with $`k_wv_{\parallel }<\mathrm{\Omega }_0<3k_wv_{\parallel }`$; and group III orbits, with $`\mathrm{\Omega }_0>3k_wv_{\parallel }`$. The existence of group III orbits is due to the presence of the third spatial harmonic of the wiggler field, which also produces the second magnetoresonance at $`\mathrm{\Omega }_0\approx 3k_wv_{\parallel }`$. The narrow width of the second resonance at $`\mathrm{\Omega }_0/ck_w\approx 2.7`$ compared to the width of the first magnetoresonance at $`\mathrm{\Omega }_0\approx k_wv_{\parallel }`$ is illustrated in Fig. 1B. This is due to the relatively weak third harmonic compared to the fundamental component of the wiggler field. It should be noted that although the exact resonances $`\mathrm{\Omega }_0=k_wv_{\parallel }`$ and $`\mathrm{\Omega }_0=3k_wv_{\parallel }`$ occur at the origin, where $`v_{\parallel }/c=\mathrm{\Omega }_0/ck_w=0`$, the first "magnetoresonance" in the literature refers to the group II orbits with cyclotron frequencies around $`\mathrm{\Omega }_0/ck_w\approx 1`$ in Fig. 1. Similarly, we refer to the group III orbits with cyclotron frequency around $`\mathrm{\Omega }_0/ck_w\approx 2.7`$ as the second magnetoresonance. The rate of change of the electron axial velocity with electron energy is proportional to $`\mathrm{\Phi }`$ and is equal to unity in the absence of the wiggler field. Figure 2 illustrates the dependence of $`\mathrm{\Phi }`$ on the radial wiggler magnetic field and the axial guide magnetic field $`B_0`$. The curves corresponding to the group I and II orbits are almost unaffected by the third harmonic and are almost the same as in Ref. , where the third harmonic is neglected. A negative-mass regime (i.e., negative $`\mathrm{\Phi }`$, for which a decrease in the axial velocity results in an increase in the electron energy) is found for group III orbits, and it is stronger than that of the group II orbits. Equations (15)-(18) are used to calculate the radial components of the wiggler field $`F_{r1}`$ and $`F_{r3}`$.
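As an illustration of how Eqs. (15)-(18) can be evaluated, the sketch below implements the modified Bessel functions by direct quadrature of their integral representations (standard library only) and then forms the harmonic coefficients and $`F_{rn}`$. The sign conventions follow our reading of the equations above, and the appended cross-check of the quoted beam and wiggler parameters uses SI constants; all variable names are ours.

```python
import math

def _trap(f, a, b, n=3000):
    """Composite trapezoidal rule."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        s += f(a + i * h)
    return s * h

# Modified Bessel functions from their integral representations
# (adequate for the argument range 1 < x < 100 used here).
def bessel_i0(x):
    return _trap(lambda t: math.exp(x * math.cos(t)), 0.0, math.pi) / math.pi

def bessel_i1(x):
    return _trap(lambda t: math.exp(x * math.cos(t)) * math.cos(t),
                 0.0, math.pi) / math.pi

def _bessel_k(x, order):
    tmax = math.acosh(700.0 / x)   # integrand ~ exp(-700) beyond tmax
    return _trap(lambda t: math.exp(-x * math.cosh(t)) * math.cosh(t) ** order,
                 0.0, tmax)

def bessel_k0(x):
    return _bessel_k(x, 0)

def bessel_k1(x):
    return _bessel_k(x, 1)

def wiggler_coeffs(n, k_w, r_in, r_out):
    """S_n, T_n, G_n of Eqs. (16)-(18) for harmonic n."""
    pref = (2.0 / (n * math.pi)) * math.sin(n * math.pi / 2.0)
    s_n = pref * (bessel_k0(n * k_w * r_in) + bessel_k0(n * k_w * r_out))
    t_n = pref * (bessel_i0(n * k_w * r_in) + bessel_i0(n * k_w * r_out))
    g_n = (bessel_i0(n * k_w * r_out) * bessel_k0(n * k_w * r_in)
           - bessel_i0(n * k_w * r_in) * bessel_k0(n * k_w * r_out))
    return s_n, t_n, g_n

def f_rn(n, r, k_w, coeffs):
    """Radial harmonic amplitude F_rn of Eq. (15)."""
    s_n, t_n, g_n = coeffs
    return (s_n * bessel_i1(n * k_w * r) + t_n * bessel_k1(n * k_w * r)) / g_n

# Cross-check of the beam and wiggler parameters quoted in Sec. IV (SI units).
E_CH, M_E, C = 1.602176634e-19, 9.1093837015e-31, 2.99792458e8
GAMMA = 1.0 + 700.0 / 511.0                  # (gamma - 1) m0 c^2 = 700 keV
K_W = 2.0 * math.pi / 0.03                   # lambda_w = 3 cm, in 1/m
W_NORM = E_CH * 0.3745 / (GAMMA * M_E) / (C * K_W)   # Omega_w / (c k_w)
```

With $`k_w=2\pi /3`$ cm<sup>-1</sup>, $`R_{in}=1.5`$ cm, and $`R_{out}=3`$ cm, a scan of $`F_{r1}(r)`$ reproduces an interior minimum near $`r\approx 2.28`$ cm, with $`|F_{r3}|`$ much smaller than $`|F_{r1}|`$, consistent with the weak third harmonic; the parameter cross-check returns $`\gamma \approx 2.37`$ and $`\mathrm{\Omega }_w/ck_w\approx 0.442`$.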
For the axial component the following expressions are used: $$F_z=F_{z1}cos(k_wz)+F_{z3}cos(3k_wz),$$ (37) $$F_{zn}=G_n^{-1}[S_nI_0(nk_wr)-T_nK_0(nk_wr)].$$ (38) Figure 3 shows the variation of the amplitudes of the wiggler magnetic field (divided by $`B_w=3745`$ G) with radius, for the first and third spatial harmonics. For the first harmonic the radial component, $`F_{r1}`$, has a minimum at $`r\approx 2.28`$ cm, and the axial component, $`F_{z1}`$, changes sign around this point. McDermott et al. have demonstrated the stability of a thin annular electron beam when $`F_{r1}`$ is minimum at the beam radius. The radial and axial components of the third harmonic of the wiggler, $`F_{r3}`$ and $`F_{z3}`$, are also shown in Fig. 3. The magnitudes of $`F_{r3}`$ and $`F_{z3}`$ are both minimum at $`r\approx 2.28`$ cm, where $`F_{r1}`$ is minimum. This is actually an inflection point for $`F_{z3}`$. Variations of the radial components of the first and third spatial harmonics of the normalized wiggler magnetic field, $`F_{r1}`$ and $`F_{r3}`$, with the wiggler wave number $`k_w`$ are shown in Fig. 4. Figure 4 also shows the dimensionless transverse velocity coefficients $`\overline{\alpha }_1=\alpha _1/c`$ and $`\overline{\alpha }_3=\alpha _3/c`$ for the initial orbit radius $`r_0\approx 2.28`$ cm, where $`F_{r1}`$ is minimum. The cyclotron frequency $`\mathrm{\Omega }_0/ck_w\approx 2.7`$ is taken at the second magnetoresonance, and our choice of 3 cm for the wiggler wavelength corresponds to $`k_w\approx 2.1`$ cm<sup>-1</sup>. It can be observed that at this wave number, although the radial component of the wiggler field at the first harmonic, $`F_{r1}`$, is much larger than that of the third harmonic, $`F_{r3}`$, the transverse velocity coefficient of the third harmonic, $`\overline{\alpha }_3`$, is larger than $`\overline{\alpha }_1`$. This shows that the third harmonic may have considerable effects around the second magnetoresonance at $`\mathrm{\Omega }_0\approx 3k_wv_{\parallel }`$. Away from this resonance Eq.
(30) shows that $`\alpha _3`$ will be of the order of $`F_{r3}`$. In order to study the transverse motion of electrons in the radially dependent wiggler field, Eqs. (11), (12), (27), and (28) are solved numerically with the initial conditions chosen so that, in the limit of zero wiggler field, there is axial motion at constant velocity $`v_{\parallel }`$ but no Larmor motion. Figure 5 shows the variation of the radial and azimuthal components of electron velocity with $`z(=v_{\parallel }t)`$. The normalized cyclotron frequency $`\mathrm{\Omega }_0/ck_w`$ is chosen to be 0.5, 1.2, and 3 for group I, II, and III orbits, respectively, which are somewhat away from the magnetoresonances. Solid curves correspond to the initial orbit radius $`r_0=2.28`$ cm, which is at the point where $`F_{r1}`$ is minimum. Broken curves correspond to $`r_0=1.8`$ cm, which is away from the $`F_{r1}`$ minimum. It can be observed that the spatial periodicity of $`v_r`$ and $`v_\theta `$ for the first two groups is equal to one wiggler wavelength, which is the same as that of the first harmonic. Although group III orbits have a clear sinusoidal shape at $`r_0=2.28`$ cm (solid curves), slight deviations from the sinusoidal shape are obvious at $`r_0=1.8`$ cm (broken curves). This is because at $`r_0=2.28`$ cm, where $`F_{r1}`$ is minimum, $`F_{r3}`$ is very small. Therefore, away from the resonance, the third harmonic plays almost no role at $`r_0=2.28`$ cm. Moving away from the $`F_{r1}`$ minimum to $`r_0=1.8`$ cm, however, increases the magnitude of $`F_{r3}`$ slightly, making the effect of the third harmonic noticeable on group III orbits, which are away from the second magnetoresonance, in Fig. 5. Figure 6 shows the variations of $`v_r/c`$ and $`v_\theta /c`$ with $`z(=v_{\parallel }t)`$ for group III orbits when the cyclotron frequency is adjusted to the second magnetoresonance at $`\mathrm{\Omega }_0/ck_w\approx 2.7`$.
At $`r_0=2.28`$ cm the periodicity is approximately equal to $`\lambda _w/3`$, which is the same as that of the third harmonic, and shows the strong influence of the third harmonic on the transverse velocity components. Going away from the $`F_{r1}`$ minimum to $`r_0=1.8`$ cm makes the amplitudes of the oscillations of $`v_r`$ and $`v_\theta `$ larger. Broken curves correspond to the solutions of Eqs. (29) and (31), in which the radial variation of the wiggler field is neglected. These solutions do not differ appreciably from the r-dependent solutions for $`r_0=2.28`$ cm, because at the $`F_{r1}`$ minimum the radial excursions are small for group III orbits. Away from the $`F_{r1}`$ minimum, at $`r_0=1.8`$ cm, however, deviations are noticeable due to the larger radial excursions. Figure 7 shows $`v_r/c`$ versus $`z`$ for group II orbits for the r-dependent wiggler (broken curves) and the r-independent wiggler (solid curves). The initial orbit radius is taken at $`r_0=2.28`$ cm and the cyclotron frequency is chosen around the first magnetoresonance, at $`\mathrm{\Omega }_0/ck_w=0.9`$. The large radial excursions of electrons for group II orbits make the transverse velocity strongly affected by the radial dependence of the wiggler field. The amplitude is also modulated in space, with a wavelength of around $`16\lambda _w=48`$ cm. The radial excursion $`r`$ shown in Fig. 8 corresponds to cyclotron frequencies away from the magnetoresonances, at $`\mathrm{\Omega }_0/ck_w`$ equal to 0.5, 1.2, and 3 for the group I, II, and III orbits, respectively. Solid curves correspond to $`r_0=2.28`$ cm and broken curves correspond to $`r_0=1.8`$ cm. It can be noticed that when the electrons are injected into the wiggler at $`r_0=2.28`$ cm, where $`F_{r1}`$ is minimum, the electron orbits remain well away from the waveguide walls at $`R_{in}=1.5`$ cm and $`R_{out}=3`$ cm.
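Orbit integrations of the kind behind Figs. 5-8 can be sketched as follows. Holding the wiggler amplitudes constant lets the integrator be checked against the closed-form radially uniform solution of Sec. III B, Eqs. (29)-(31). The parameter values below (in units with $`c=k_w=1`$) are illustrative assumptions, not the paper's exact inputs, and the signs in the first-order equations of motion follow our reading of Eqs. (11)-(12).

```python
import math

# Illustrative dimensionless parameters (units with c = k_w = 1).
W0, WW = 0.5, 0.442          # Omega_0/ck_w, Omega_w/ck_w
VPAR = 0.8                   # v_par / c
FR1, FR3 = 0.27, -0.004      # constant (radially uniform) harmonic amplitudes

def alpha(n, frn):
    """Eq. (30) with k_w = c = 1."""
    return W0 * WW * VPAR * frn / (W0 ** 2 - n ** 2 * VPAR ** 2)

A1, A3 = alpha(1, FR1), alpha(3, FR3)

def deriv(t, vr, vth):
    """First-order equations of motion, Eqs. (11)-(12)."""
    fr = FR1 * math.sin(VPAR * t) + FR3 * math.sin(3.0 * VPAR * t)
    return -W0 * vth, W0 * vr - VPAR * WW * fr

def rk4(t_end, steps):
    """Integrate (v_r, v_theta) from the initial conditions (32)-(33)."""
    vr = 0.0
    vth = -(A1 * VPAR + 3.0 * A3 * VPAR) / W0
    h = t_end / steps
    t = 0.0
    for _ in range(steps):
        k1r, k1t = deriv(t, vr, vth)
        k2r, k2t = deriv(t + h / 2, vr + h / 2 * k1r, vth + h / 2 * k1t)
        k3r, k3t = deriv(t + h / 2, vr + h / 2 * k2r, vth + h / 2 * k2t)
        k4r, k4t = deriv(t + h, vr + h * k3r, vth + h * k3t)
        vr += h / 6 * (k1r + 2 * k2r + 2 * k3r + k4r)
        vth += h / 6 * (k1t + 2 * k2t + 2 * k3t + k4t)
        t += h
    return vr, vth

def exact(t):
    """Closed-form radially uniform solution, Eqs. (29) and (31)."""
    vr = A1 * math.sin(VPAR * t) + A3 * math.sin(3.0 * VPAR * t)
    vth = -(A1 * VPAR * math.cos(VPAR * t)
            + 3.0 * A3 * VPAR * math.cos(3.0 * VPAR * t)) / W0
    return vr, vth
```

For a radially dependent wiggler one would simply make `FR1`, `FR3` functions of the integrated radius, which is what produces the deviations between solid and broken curves discussed above.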
Figure 9 compares the radial excursions of group III orbits at the second magnetoresonance, at $`\mathrm{\Omega }_0/ck_w=2.7`$ (solid curve), with those slightly away from the resonance, at $`\mathrm{\Omega }_0/ck_w=3`$ (broken curve). The influence of the third harmonic can be clearly seen through the modulation of the third harmonic by the first harmonic when the cyclotron frequency is adjusted to the second magnetoresonance.

## V. CONCLUSIONS

The third spatial harmonic of the coaxial wiggler field gives rise to the group III orbits with $`\mathrm{\Omega }_0>3k_wv_{\parallel }`$. This relatively weak third harmonic makes the width of the second magnetoresonance narrow compared to the first magnetoresonance. A strong negative-mass regime is found for the group III orbits. By adjusting the cyclotron frequency to the second magnetoresonance, the wiggler-induced velocity of the group III orbits was found to increase considerably. When the electrons are injected into the wiggler where its magnetic field is minimum, the electron orbits remain well away from the waveguide boundaries. Harmonic gyroresonance of electrons in the combined helical wiggler and axial guide magnetic field was reported by Chu and Lin. In their analysis the relativistic single-particle equation of motion is used, with the axial velocity as well as the axial magnetic field of the wiggler averaged along the axial direction. By assuming near steady-state orbits for off-axis electrons, they found that the radial variation of the wiggler magnetic field produces a harmonic structure in the transverse force. This force, in turn, comprises oscillations at all harmonics of $`k_wz`$. It should be noted that there is no harmonic structure in the helical wiggler itself, and the higher velocity harmonics vanish for the exact steady-state orbits of on-axis electrons. Moreover, higher harmonics do not appear in the one-dimensional helical wiggler, where the radial variation is neglected.
In the present analysis of the coaxial wiggler, on the other hand, the equation of motion is written to first order in the wiggler amplitude. With this approximation the axial component of the wiggler field makes no contribution to the problem, leaving the axial velocity constant. The magnetic field of a coaxial wiggler is composed of a fundamental plus a large number of odd spatial harmonics, which appear directly in the magnetic force represented by $`f(t)`$ in Eq. (22). The third harmonic in $`f(t)`$ enters the transverse velocity components as part of the integrands in Eqs. (23)-(26), and this is also demonstrated numerically for the radially dependent coaxial wiggler; for the radially uniform wiggler the third harmonic is explicit in the solutions, Eqs. (29)-(31).
# Ballistic electron transport in stubbed quantum waveguides: experiment and theory

## I INTRODUCTION

Submicron-size T-shaped electron waveguides, defined electrostatically in a two-dimensional electron gas (2DEG) by Schottky gates, are very promising devices for potential applications in microelectronics, since their conductance $`G`$ is determined, in the ballistic regime, by quantum interference effects and can be changed by applying voltages to the gates. Such devices, commonly known as Electron Stub Tuners (ESTs), also open the way for studying resonant states of ballistic quantum dots in both the weakly coupled tunneling and the transmissive open regime. The size of an EST can be controlled by gate voltages, cf. Fig. 1. For a theoretical analysis, an EST can be considered as a rectangular quantum dot connected to 2DEG reservoirs through two oppositely placed Quantum Point Contacts (QPCs). When the electron phase-coherence length exceeds the dimensions of the EST, transport through the device is ballistic. A number of theoretical papers have been published on the ballistic transport characteristics of ESTs in the open regime of the QPCs. These works predict an oscillatory dependence of $`G`$ as a function of geometrical size parameters of the device or of the Fermi energy $`E_F`$. A minimum in $`G`$, or a reflection resonance, is said to occur due to resonant reflection of electron waves by quasibound states of the stub cavity (SC) formed by the gates, the quasibound state itself resulting from the quantization of electron momentum associated with the small device size. Experimentally, it is possible to probe these resonance states through measurements of $`G`$ at low temperatures; this has been reported only very recently. However, so far experimenters have failed to observe well-defined, regular oscillations in $`G`$ with minima corresponding to excitations of the quasibound states as predicted theoretically.
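The role of quasibound states can be illustrated with a much simpler toy model than the 2D stub: a one-dimensional symmetric double barrier, whose transmission, computed by the standard transfer-matrix (plane-wave matching) method, shows sharp resonances at the quasibound levels of the inter-barrier cavity. This is only a qualitative analogue; all parameters below are arbitrary choices in units $`\mathrm{}=m=1`$.

```python
import cmath

def transmission(energy, segments):
    """Transmission through a piecewise-constant potential.
    segments = [(width, height), ...] between two leads at V = 0;
    units with hbar = m = 1. Amplitudes are propagated from the
    transmitted (right) side back to the incident (left) lead."""
    k_lead = cmath.sqrt(2.0 * energy)
    a, b = 1.0 + 0j, 0.0 + 0j            # transmitted wave only on the right
    x = sum(w for w, _ in segments)      # position of the rightmost interface
    k_right = k_lead
    for width, height in reversed(segments):
        k_left = cmath.sqrt(2.0 * (energy - height))  # k inside this segment
        # continuity of psi and psi' at the interface position x
        ep = cmath.exp(1j * k_right * x)
        em = cmath.exp(-1j * k_right * x)
        psi = a * ep + b * em
        dpsi = 1j * k_right * (a * ep - b * em)
        a = 0.5 * (psi + dpsi / (1j * k_left)) * cmath.exp(-1j * k_left * x)
        b = 0.5 * (psi - dpsi / (1j * k_left)) * cmath.exp(1j * k_left * x)
        k_right = k_left
        x -= width
    # final match into the left lead at x = 0 (exp terms equal 1 there)
    psi = a + b
    dpsi = 1j * k_right * (a - b)
    a0 = 0.5 * (psi + dpsi / (1j * k_lead))   # incident amplitude
    return 1.0 / abs(a0) ** 2
```

Scanning the energy across such a structure (e.g. two barriers of height 1 and width 2 separated by a well of width 4) gives transmission minima and near-unity resonance peaks, the 1D counterpart of the conductance oscillations discussed in this paper.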
Most of the devices used in experiments so far were geometrically defined by only two gates, which do not allow adequate independent control of the width of the QPCs and the shape of the SC. This possibly explains the failure to observe experimentally a well-defined, regular pattern of minima in $`G`$. In this paper we present experimental and theoretical results for the four-gate EST. A preliminary account of some of them has appeared recently. We report the experimental observation of a clear and pronounced oscillatory dependence of the ballistic $`G`$ on the size of the SC as the latter is changed by voltage-biasing the gates. Such oscillations occur on the first conductance plateau of the QPCs. We also present theoretical results for $`G`$ obtained from a numerical solution of Schrödinger's equation for a two-dimensional (2D) hard-wall electron waveguide with a shape close to the one resulting from the biasing of the Schottky gates. Because of this choice, we believe our results are closer to reality than those reported in previous theoretical work based on a rectangular approximation for the SC shape. A rectangular SC shape is unrealistic since, though the lithographically defined device shape is rectangular, the shape of the SC changes as the gates are biased. Moreover, except for a few papers, the lengths of the QPCs have been considered infinite; this is a very rough approximation for real devices and we avoid it in our computations. Comparison of the experimentally observed features of the ballistic $`G`$ to those obtained numerically enables us to determine the physical origin of these features and helps us understand what the shape of the SC is and how it can be modified by applied gate voltages. The paper is organized as follows. In Sec. II we give a brief description of the device fabrication and measurement techniques and then present results of conductance measurements as a function of gate voltages.
Section III outlines the theoretical model and the calculations, and presents numerical results. Finally, an interpretation of the experimental results, based on the theoretical analysis of Sec. III, and conclusions follow in Sec. IV.

## II EXPERIMENTAL ASPECTS

### A Device fabrication and measurement techniques

The ESTs used in this study were fabricated from modulation-doped AlGaAs/GaAs heterostructure wafers grown by MBE and having a two-dimensional electron gas (2DEG) at a depth of 80 nm below the surface. The carrier concentration $`n_{2D}`$ of the 2DEG was $`2.4\times 10^{15}`$ m<sup>-2</sup>, with a mobility $`\mu `$ of 100 m<sup>2</sup>/V s at 4.2 K. These values of $`n_{2D}`$ and $`\mu `$ give a 2DEG Fermi energy $`E_F=`$ 8 meV. The ESTs were defined by four Schottky gates S1, T, S2, and B patterned by electron beam lithography on the surface of the wafer. Figure 1 gives a schematic drawing of an EST device, while Fig. 2 shows a scanning electron micrograph of a fabricated EST sample. Lighter areas are the Schottky metal gates on the wafer surface. The central part of the EST, where the SC is located, forms a lithographically rectangular planar quantum dot of length 0.55 and width 0.25 $`\mu `$m. The lithographic length of the QPCs is 0.1 $`\mu `$m. The samples were clamped to the mixing chamber of a dilution refrigerator. Considerable care was taken to ensure good thermal contact to the sample. The two-terminal conductance $`G`$ of the devices was measured at 90 mK as a function of a gate voltage, while the other gates were biased at fixed voltages. Standard low-bias, low-frequency, lock-in techniques were used to measure $`G`$, which was corrected for a low series resistance due to the 2DEG reservoirs. A source-drain rms excitation of 10 $`\mu `$V was typically used to drive a current along the length ($`x`$-direction) of the QPCs.
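As a rough consistency check on the material parameters quoted above, the standard 2DEG relations $`E_F=\pi \mathrm{}^2n_{2D}/m^{}`$ and $`\lambda _F=\sqrt{2\pi /n_{2D}}`$ can be evaluated; the GaAs effective mass $`m^{}=0.067m_e`$ is our assumption, since the paper does not state it explicitly.

```python
import math

HBAR = 1.054571817e-34      # J s
M_E = 9.1093837015e-31      # kg
E_CHARGE = 1.602176634e-19  # C

def fermi_energy_mev(n_2d, m_eff=0.067 * M_E):
    """E_F of a spin-degenerate, single-valley 2DEG, in meV."""
    return math.pi * HBAR ** 2 * n_2d / m_eff / E_CHARGE * 1e3

def fermi_wavelength_nm(n_2d):
    """lambda_F = 2*pi/k_F with k_F = sqrt(2*pi*n_2d), in nm."""
    return math.sqrt(2.0 * math.pi / n_2d) * 1e9

ef = fermi_energy_mev(2.4e15)     # roughly 8-9 meV
lf = fermi_wavelength_nm(2.4e15)  # roughly 51 nm
```

Both numbers are consistent with the $`E_F=`$ 8 meV quoted here and with the Fermi wavelength of roughly 50 nm invoked in Sec. III.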
Since the four gates are independent, it was possible to characterize the QPCs of the EST device independently by biasing the appropriate gates while grounding the rest of them. A high-quality, well-defined conductance quantization staircase was observed for both QPCs. The gates of the QPCs were negatively biased to ensure fundamental-mode transport through them. The conductance $`G`$ of the device was then measured as a function of the size of the SC by sweeping the voltage $`V_T`$ of the top gate, or $`V_B`$ of the bottom gate, or $`V_S`$ of the side gates, while the other gates were biased at fixed negative voltages. The sweeping gate voltage was changed until the device was completely pinched off, allowing measurements in both the single-mode open and the tunneling regime of electron transport. Measurements were made on a few EST devices. All of them gave nearly identical and reproducible results, differing only in the pinch-off voltages.

### B Experimental results

Figures 3 to 7 show the experimental results. Very well-defined oscillations in the ballistic conductance $`G`$ are observed as a function of a sweeping gate voltage, which changes the size of the SC, while the other gates are biased at fixed voltages. These oscillations exhibit several features which are found to be generic to the EST devices studied. All results shown correspond to transport in the fundamental mode through the QPCs until they are pinched off. Figure 3 shows the oscillations in $`G`$ observed when the bias voltage $`V_T`$ of the top gate is swept, while the other gates are kept at fixed voltages. The solid curve is obtained when the bottom and the two side gates are biased, respectively, at $`V_B=-765`$ mV, $`V_{S1}=-860`$ mV, and $`V_{S2}=-964`$ mV. As $`V_T`$ is made more negative the size of the SC shrinks. This also adds to the depletion due to the side gates and narrows the QPCs until they are pinched off, when the device conductance drops to zero.
The oscillations in $`G`$ are found to occur in two distinct regimes of the sweeping gate voltage, one for which $`G<e^2/h`$ and the other for which $`G>e^2/h`$. The one with $`G<e^2/h`$ is the tunneling regime, when $`E_F`$ is below the bottom of the lowest conduction subband of the QPCs, which then form energy barriers through which electrons can tunnel. The oscillations of $`G`$ in this regime are found to be periodic, quite sharp, and well resolved. The regime for which $`G>e^2/h`$ may be called the open regime. This happens when $`E_F`$ is above the bottom of the lowest subband of the QPCs, such that transport is in the fundamental mode. The $`G`$ oscillations in this regime are located on the first quantized conductance plateau of the QPCs. Though quite robust, clear, and nearly periodic, they are relatively broad and show a certain degree of overlap, with a resulting convolution effect. The peak values are less than the expected quantized value of $`2e^2/h`$ and, as will be seen later, result from backscattering at the QPC entrance and/or from boundary roughness at the QPC walls. The dotted $`G`$ curve of Fig. 3 is obtained for $`V_B=-780`$ mV and differs from the solid one in two important respects. First, the average conductance is substantially lower. Second, the oscillations in $`G`$ in the open regime become irregular due to the appearance of broad troughs or depressions in the conductance. The size of the central ballistic cavity of the device can also be altered by varying the bias voltages of the side gates while keeping those of the top and bottom gates at fixed values. Figure 4 shows the variation in $`G`$ as a function of the side-gate bias voltage $`V_S`$ ($`V_S=V_{S1}=V_{S2}+104`$ mV). The solid curve was generated with $`V_T=-1400`$ mV and $`V_B=-755`$ mV, while the dotted curve was obtained for $`V_T=-800`$ mV and $`V_B=-790`$ mV. Results obtained by sweeping $`V_B`$ are illustrated in Fig. 5.
For these measurements the side gates were biased as follows: $`V_{S1}=-860`$ and $`V_{S2}=-964`$ mV. The solid and dotted curves correspond, respectively, to $`V_T=-1085`$ and $`-635`$ mV. Comparing the results of Figs. 4 and 5 to those of Fig. 3, we notice that, except for the device pinch-off voltages and the oscillation periods, the features of the oscillations in $`G`$ are similar. The solid curves show the same characteristics, as do the dotted ones, though the features are different for the two sets. This is not surprising, since in all cases we are changing the size of the central cavity. An interesting question, however, is what causes the difference in the characteristic features of the $`G`$ oscillations observed on the first conductance plateau for the solid and the dotted curves. If we look more closely and compare the constant gate bias voltages used for generating the two sets of $`G`$ curves, an empirical consistency emerges. The voltages used for the dotted curves are such as to result in a SC which is long compared to that for the corresponding solid curves. As an example, the oscillatory $`G`$ of the dotted curve in Fig. 4 is obtained for $`V_T=-800`$ mV, while for the solid curve $`V_T`$ is equal to $`-1400`$ mV. A more negative top gate voltage certainly makes the SC shorter. These observations lead us to conclude that a lower average value of $`G`$ and broad depressions in it occur when the stub cavity is long. We call the oscillatory $`G`$ pattern with regular minima observed for the solid curves a "regular" pattern and that for the dotted curves an "irregular" pattern. In order to better understand the origin of the observed oscillations in $`G`$ and to distinguish between the peaks in the tunneling and the open regime, we have studied the dependence of the regular $`G`$ pattern on temperature, drain-source excitation voltage, and a magnetic field applied perpendicular to the plane of the 2DEG.
Figure 6 shows the temperature dependence of a regular $`G`$ pattern obtained by sweeping the top gate voltage $`V_T`$ with $`V_B=-765`$ mV, $`V_{S1}=-860`$ mV, and $`V_{S2}=-964`$ mV. As the temperature is increased, all peaks in both the tunneling and the open regime broaden, and eventually they are washed out. At the highest temperature of 2.5 K reached in the dilution refrigerator, the peaks in the open regime have practically disappeared, while those in the tunneling regime retain only a trace existence. At 4.2 K, the oscillatory $`G`$ pattern has disappeared entirely and is replaced by the conductance step and plateau. The influence of the source-drain voltage $`V_{ds}`$ on the regular $`G`$ pattern has also been studied and is shown in Fig. 7. Notice that the effect of increasing $`V_{ds}`$ is similar to that of temperature. The oscillations in $`G`$ are found to practically fade out and be replaced by the conductance step and plateau when $`V_{ds}`$ is increased to an rms value of 700 $`\mu `$V.

## III THEORETICAL TREATMENT

### A Cavity potential

A realistic theoretical description of ballistic electron transport through a cavity, such as the stub of an EST, must take into account the electrostatic potential inside the cavity, since it determines the actual shape of the conducting channel. Accordingly, we have calculated the electrostatic potential created in the plane of a 2DEG situated at a distance $`d=`$ 80 nm below the surface, at $`z=0`$, of a two-gate EST, defined by two surface Schottky gates with voltages $`V_T`$ and $`V_B`$, with $`S1`$, $`T`$, and $`S2`$ connected together, cf. Fig. 1. The distance between the gates at entrance and exit is $`2w=`$ 250 nm; the bottom gate $`G_B`$ is flat, while the top gate $`G_T`$ contains the stub-like opening of width $`2w`$ and of length $`L=`$ 300 nm.
This value of $`d`$ and the lithographic dimensions correspond to the experimental device described above; the present model of just two gates is somewhat idealized but necessary for simplifying the calculations. The potential $`\phi (x,y,z)`$ has been calculated from the Laplace equation in the semi-space $`z>0`$, with Dirichlet boundary conditions $`\phi (x,y,0)=V_i`$ on the $`i`$th gate region ($`i=T,B`$) and Neumann boundary conditions $`\partial \phi (x,y,z)/\partial z|_{z=0}=4\pi en_{2D}/ϵ`$ at the exposed surface region; $`n_{2D}`$ is the electron concentration in the 2D gas in the absence of depletion and $`ϵ`$ is the dielectric constant. The last boundary condition expresses the so-called ”frozen surface model”, in which the electric charge at the exposed surface is constant; this model appears appropriate at low temperatures and is often used in theoretical calculations. To make our model finite in the $`x`$ direction, we choose a length $`l=L`$ and use the boundary condition $`\partial \phi (x,y,z)/\partial x|_{x=\pm l}=0`$. We have also assumed that the concentration of the ionized donors in the doping region between the surface and the 2D gas plane is equal to the sum of $`n_{2D}`$ and the surface charge concentration and is not changed appreciably when the voltages are applied to the gates. In Fig. 8 we present the resulting contour plot of the potential $`\phi (x,y,d)`$; since $`\phi (-x,y,z)=\phi (x,y,z)`$, we show $`\phi `$ only for the right half of the stub. Although we do not take into account the free-electron charge in the plane $`z=d`$, in order to avoid heavily involved self-consistent calculations, we expect that the screening effect due to these electrons will not change the shape of the equipotential lines considerably but would cause at most a flattening of the bottom of the potential distribution. As a result, we expect the shape of the conducting channel to follow, more or less, the calculated equipotential lines. 
This enables us to draw the following important qualitative conclusions. i) The shape of the cavity inside the stub region does not follow that defined by the gate edges and is not rectangular, as has been assumed in previous pertinent theoretical works. ii) The width of the cavity is close to the lithographic width of the stub, and since the Fermi wavelength at $`E_F=8`$ meV is about 53 nm, considerably less than the lithographic stub width $`2w`$, the cavity accommodates not just one longitudinal mode, as has been frequently assumed, but several modes. iii) When the width of the narrowest part of the conducting channel, in our model at $`x=l`$, is small compared to the lithographic width $`2w`$ of the wire, the length of the cavity at $`x=0`$ is considerably smaller than the lithographic length $`L`$. iv) The length of the cavity is even smaller when the upper gate voltage $`V_T`$ is more negative ($`V_T<V_B`$), so that there is an overall shift of the conducting channel towards the bottom gate. In going beyond the two-gate model towards the four-gate device shown in Fig. 1, it is reasonable to expect that when the top gate voltage is more negative than that of the side gates, the height of the cavity decreases, while in the opposite case it should increase. In the following we use this qualitative information to appropriately model the shape of the conducting channel in the four-gate EST and calculate the electron transmission through the cavity. 

### B Model of the cavity and numerical method 

We model the conducting channel of the device in Fig. 
1 with a 2D waveguide having hard-wall boundaries described, in an obvious notation, by the functions $`y_{bot}(x)=-y_{wire}(x),y_{wire}(x)=W/[1+\mathrm{exp}((x+r)/\beta )]+W/[1+\mathrm{exp}(-(x-r)/\beta )],`$ (1) $`y_{top}(x)=y_{wire}(x)+a+y_{cav}(x),y_{cav}(x)=h\mathrm{exp}(-x^2/b^2).`$ (2) We describe the cavity with the Gaussian function $`y_{cav}(x)`$ since it gives us the most relevant elementary-function approximation of the equipotential lines shown in Fig. 8. The function $`y_{wire}(x)`$ describes the transition from the constricted region near $`x=0`$ to the 2D reservoirs at $`x=\pm \mathrm{\infty }`$. Here we set $`\beta =W/4`$ to model the square-angle opening of the conducting channel of the experimental device (Fig. 1). The parameter $`W`$, which describes the semiwidth of the channels away from the constriction and must be large enough, is chosen as $`W=w`$. For this value of $`W`$, the channels away from the constriction already accommodate about 10 transverse modes and can be effectively treated as 2D leads. The remaining parameter $`r`$ is chosen, by inspection, as $`r=w+l+3\beta `$, where $`l`$ is the lithographic length of the QPCs; this gives a more or less suitable correspondence between the outer parts of the conducting channel and the gate corners. The resulting shape of the conducting channel, together with the lithographic gate layout, is shown in Fig. 9. To determine the transmission coefficients of electron waves through the device, we solved numerically the 2D Schrödinger equation $$-\frac{\mathrm{\hbar }^2}{2m}\left(\frac{\partial ^2}{\partial x^2}+\frac{\partial ^2}{\partial y^2}\right)\mathrm{\Psi }(x,y)+[U(x,y)-\epsilon ]\mathrm{\Psi }(x,y)=0,$$ (3) using the following expansion for the wave function $$\mathrm{\Psi }(x,y)=\sum _n\psi _n(x)\chi _n(x,y),\chi _n(x,y)=\sqrt{\frac{2}{Y(x)}}\mathrm{sin}\left[\frac{\pi n}{Y(x)}(y-y_{bot}(x))\right],$$ (4) where $`Y(x)=y_{top}(x)-y_{bot}(x)`$ is the $`x`$-dependent channel width. 
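The channel boundaries of Eqs. (1)-(2) are simple enough to evaluate directly. The sketch below (illustrative; it assumes the sign convention in which $`y_{wire}`$ tends to $`W`$ in the leads and vanishes near $`x=0`$, with a decaying Gaussian cavity, and the parameter values are made-up numbers in nm) reproduces the limiting widths: about $`a+h`$ at the stub center and $`2W+a`$ in the leads:

```python
import numpy as np

def channel_boundaries(x, W, a, r, h, b):
    """Hard-wall boundaries of Eqs. (1)-(2), with beta = W/4.

    Assumes y_wire -> W in the leads (|x| -> infinity) and y_wire -> 0
    in the constriction near x = 0, so the channel width
    Y(x) = y_top - y_bot goes from about 2W + a in the leads
    to about a + h at the stub center."""
    beta = W / 4.0
    y_wire = (W / (1.0 + np.exp((x + r) / beta))
              + W / (1.0 + np.exp(-(x - r) / beta)))
    y_cav = h * np.exp(-x ** 2 / b ** 2)      # Gaussian stub cavity
    y_bot = -y_wire
    y_top = y_wire + a + y_cav
    return y_bot, y_top
```

With lead semiwidth $`W=w=125`$ nm the wide parts of the channel are roughly $`280`$ nm across, consistent with the statement that they hold about 10 transverse modes.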
The basis functions $`\chi _n(x,y)`$ already satisfy the boundary conditions for hard-wall confinement. Substituting Eq. (4) into Eq. (3) leads to the 1D matrix equation for $`\psi _n(x)`$ $$\left[\frac{d^2}{dx^2}-\left(\frac{\pi n}{Y(x)}\right)^2+k^2\right]\psi _n(x)+\sum _m\left[2B_{nm}(x)\frac{d}{dx}+C_{nm}(x)-K_{nm}(x)\right]\psi _m(x)=0;$$ (5) here $`k^2=2m\epsilon /\mathrm{\hbar }^2`$ and $`B_{nm}(x)={\displaystyle \int _{y_{bot}\left(x\right)}^{y_{top}\left(x\right)}}𝑑y\chi _n(x,y){\displaystyle \frac{\partial }{\partial x}}\chi _m(x,y),`$ (6) $`C_{nm}(x)={\displaystyle \int _{y_{bot}\left(x\right)}^{y_{top}\left(x\right)}}𝑑y\chi _n(x,y){\displaystyle \frac{\partial ^2}{\partial x^2}}\chi _m(x,y),`$ (7) $`K_{nm}(x)={\displaystyle \frac{2m}{\mathrm{\hbar }^2}}{\displaystyle \int _{y_{bot}\left(x\right)}^{y_{top}\left(x\right)}}𝑑y\chi _n(x,y)U(x,y)\chi _m(x,y).`$ (8) Since we assume $`U(x,y)=0`$ far away from the constriction, all parameters defined by Eqs. (6)-(8) depend on $`x`$ only in the constriction region. We choose $`x_{max}`$ and $`x_{min}`$ far enough from the constriction and discretize Eq. (5) on an $`(N+1)`$-point grid according to $`x=x_i=x_{min}+is`$, $`s=(x_{max}-x_{min})/N`$. The resulting finite-difference equation for $`\psi _m(x_i)`$ is solved subject to the boundary conditions $`A_{nm}(1)\psi _m(x_1)+A_{nm}(0)\psi _m(x_0)=A_n^\alpha `$ and $`A_{nm}(N-1)\psi _m(x_{N-1})+A_{nm}(N)\psi _m(x_N)=0`$, appropriate to a wave, in state $`\alpha `$, incident from the left side. Since $`\chi _n(x,y)`$ are the exact normalized eigenfunctions of the problem at $`x=\pm \mathrm{\infty }`$, the boundary matrices $`A_{nm}`$ are diagonal: $`A_{nm}(1)=A_{nm}(N-1)=\delta _{nm},A_{nm}(0)=A_{nm}(N)=-\delta _{nm}\mathrm{exp}(-ip_ns)`$, while $`A_n^\alpha =\delta _{n\alpha }[\mathrm{exp}(ip_ns)-\mathrm{exp}(-ip_ns)]`$. 
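A drastically simplified, single-mode version of this scheme fits in a few lines. The sketch below (an illustration, not the authors' code) keeps only the diagonal part of Eq. (5), dropping the inter-mode couplings $`B_{nm}`$, $`C_{nm}`$ and the potential term, and imposes transparent boundary conditions through the discrete lead momentum, so a uniform channel transmits perfectly:

```python
import numpy as np

def transmission_1mode(Y, k, s):
    """One-mode caricature of Eq. (5): solve
    psi'' + [k^2 - (pi/Y(x))^2] psi = 0 on a grid with spacing s,
    with a unit-amplitude wave incident from the left and transparent
    boundary conditions at both ends. Returns |T|^2 = |psi(x_N)|^2.
    Y is the array of channel widths on the grid (equal in both leads)."""
    q = k ** 2 - (np.pi / Y) ** 2
    n = len(Y) - 1
    # discrete lead momentum: 2 cos(p s) = 2 - s^2 q, so the boundary
    # conditions are exactly transparent for the discretized problem
    p = np.arccos(1.0 - s ** 2 * q[0] / 2.0 + 0j) / s
    A = np.zeros((n + 1, n + 1), complex)
    rhs = np.zeros(n + 1, complex)
    for i in range(1, n):                      # interior 3-point stencil
        A[i, i - 1] = A[i, i + 1] = 1.0
        A[i, i] = -2.0 + s ** 2 * q[i]
    A[0, 1], A[0, 0] = 1.0, -np.exp(-1j * p * s)     # incident + reflected
    rhs[0] = np.exp(1j * p * s) - np.exp(-1j * p * s)
    A[n, n - 1], A[n, n] = 1.0, -np.exp(-1j * p * s)  # purely outgoing
    psi = np.linalg.solve(A, rhs)
    return abs(psi[-1]) ** 2
```

A region where the width drops below $`\pi /k`$ makes the mode evanescent there, and the transmission falls well below unity, which is the single-mode analogue of pinching off the constriction.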
In these matrix expressions we introduced the longitudinal quantum number $$p_n=\sqrt{k^2-(\pi n/Y(\mathrm{\infty }))^2},$$ (9) which can be either real or imaginary ($`\mathrm{Im}p_n>0`$); in the latter case the waves are evanescent in the leads. Far away from the constriction $`p_n`$ is the longitudinal momentum in the leads. The ballistic conductance $`G`$ at zero temperature is given by the multichannel Landauer-Büttiker formula $$G=\frac{2e^2}{h}\sum _{\alpha \alpha ^{\prime }}|T_{\alpha \alpha ^{\prime }}|^2\frac{p_{\alpha ^{\prime }}}{p_\alpha }.$$ (10) The transmission amplitude $`T_{\alpha \alpha ^{\prime }}`$ in Eq. (10) is equal to $`\psi _{\alpha ^{\prime }}(x_N)`$ for the problem with the incident wave in state $`\alpha `$ and $`\epsilon =E_F`$. The sum runs over all propagating states (for which $`p_\alpha `$ are real). The generalization of Eq. (10) to finite temperatures is straightforward, see, e.g., Eq. (7) in Ref. 10. The actual number $`M`$ of transverse subbands involved in the numerical calculations is determined by the condition that further increase of $`M`$ does not produce any perceptible change of the calculated wave functions and transmission coefficients. For the calculations described below it is sufficient to take $`M`$ between 10 and 15; this results in a reasonably short calculation time. 

### C Numerical results 

In order to decrease the number of unknown parameters, we restricted ourselves to the flat-band approximation, $`U(x,y)=0`$, in all calculations. As a result, we have only the three geometrical parameters $`a`$, $`h`$ and $`b`$, which are assumed to be controlled by the gates. The first of them, $`a`$, characterizes the width of the constriction in its narrowest parts, although it is somewhat smaller than the actual constriction width, cf. Eqs. (1) and (2). 
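Equations (9) and (10) translate directly into code. The sketch below (illustrative only; it assumes hard-wall leads of width $`Y`$, so mode thresholds sit at $`\pi n/Y`$, and takes the minus sign inside the square root of Eq. (9) as written) computes the lead momenta and the Landauer sum; with a unit transmission matrix it returns the number of open lead modes, which for a 280 nm lead at $`E_F=8`$ meV is the roughly 10 transverse modes mentioned earlier:

```python
import numpy as np

def longitudinal_momenta(k, Y_lead, n_max):
    """p_n = sqrt(k^2 - (pi n / Y)^2), Eq. (9); imaginary for evanescent modes."""
    n = np.arange(1, n_max + 1)
    return np.sqrt((k ** 2 - (np.pi * n / Y_lead) ** 2).astype(complex))

def landauer_G(T, p):
    """Eq. (10): G = (2e^2/h) sum |T_aa'|^2 p_a'/p_a over propagating modes.

    Returns G in units of 2e^2/h; T[a', a] is the transmission amplitude
    from incident mode a to outgoing mode a'."""
    prop = np.where(p.imag == 0)[0]           # propagating (real-p) channels
    G = 0.0
    for a in prop:
        for ap in prop:
            G += abs(T[ap, a]) ** 2 * (p[ap].real / p[a].real)
    return G
```

For a perfectly adiabatic channel ($`T`$ the identity on the open modes) each open mode contributes one conductance quantum, giving the ideal staircase.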
Assuming that the depletion of the 2DEG by the gates follows a linear law, which is confirmed by our experimental studies of the conductance quantization in a single constriction, we can directly associate a change of the value of $`a`$ with a change of the gate voltages. As for $`h`$ and $`b`$, they describe mostly the shape of the cavity formed in the stub region and are related, respectively, to its height and width. It is convenient to measure all these parameters in units of the cut-off length $`a_0=\mathrm{\hbar }\pi /\sqrt{2mE_F}`$, the width at which a hard-wall quantum wire stops conducting; it is equal to 26.5 nm for $`E_F=`$8 meV, the value of $`E_F`$ used in all calculations, and the GaAs effective mass $`m=0.067m_e`$. Varying the bottom gate voltage $`V_B`$, in our model, means simply changing the parameter $`a`$ while keeping $`h`$ and $`b`$ constant. This describes the case when the lower (bottom) boundary of the conducting channel is shifted linearly by $`V_B`$, while the upper boundary remains insensitive to this voltage because of screening by the electron gas inside the channel. These assumptions are supported by previous calculations of the potential distribution in split-gate structures homogeneous along the $`x`$ axis. The calculated dependence of the conductance on $`a/a_0`$, in the range of the first and second plateau, for two values of $`h`$ and $`b=w`$, is shown in Fig. 10. The following qualitative features are evident: for small $`h`$ the transmission pattern shows narrow, almost equally spaced minima of resonance reflections of similar shape; we call this the ”regular” pattern. The number of the minima on the first plateau (5-7) is consistent with the experimentally observed number (Fig. 5, solid curve). 
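The numbers quoted here are easy to verify with a back-of-the-envelope check (using CODATA constants; the cut-off length is also half the Fermi wavelength quoted earlier):

```python
import numpy as np

hbar = 1.054571817e-34            # J s
m_e = 9.1093837015e-31            # kg
eV = 1.602176634e-19              # J
m = 0.067 * m_e                   # GaAs effective mass
E_F = 8e-3 * eV                   # E_F = 8 meV

a0 = hbar * np.pi / np.sqrt(2 * m * E_F)          # cut-off length
lam_F = 2 * np.pi * hbar / np.sqrt(2 * m * E_F)   # Fermi wavelength = 2 a0
print(a0 * 1e9, lam_F * 1e9)      # about 26.5 nm and 53 nm
```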
As $`h`$ increases ($`V_T`$ less negative), the oscillations become irregular and show broad troughs, on which the closely spaced resonances are superimposed, and the average conductance is considerably smaller than that of the shorter cavity. These results are consistent with the experimental observations (Fig. 5, dotted curve). Results for a narrower cavity show a similar but less regular pattern. The situation is more complex when the top gate voltage $`V_T`$ is varied. Making $`V_T`$ less negative obviously leads to an increase of $`h`$, and, as mentioned earlier, it also widens the constriction. To describe this situation, both $`a`$ and $`h`$ should be varied. Very likely the cavity width $`b`$ also changes in this situation, but we expect this change to be small and make the following assumptions in order to generate the numerical results: $`b`$ remains unchanged while $`h`$ varies linearly as $`h=h_0+va`$ when $`V_T`$ is changed. Since the distance of the top gate from the highest part of the cavity is smaller, but not much smaller, than its distance from the narrowest part of the constriction, we expect $`v`$ to be positive but not much larger than unity. The dependence of $`G`$ on $`a/a_0`$, for fixed $`b=w`$ and different $`h_0`$ and $`v`$, is shown in Fig. 11 for the region of the first plateau. The curves marked (1) and (2) are obtained, respectively, with $`h_0=1.2a_0`$, $`v=`$1 for curve (1), and $`h_0=0.6a_0`$, $`v=`$2 for curve (2). The parameters are chosen in such a way that at the beginning of the plateau ($`a/a_0\approx `$ 0.6) the cavity height is equal to $`1.8a_0`$ for both curves. Curve (1), with smaller $`v`$, shows an oscillatory $`G`$ with regular minima, similar to those of curve (2) in Fig. 10 for constant $`h`$. But there is also a difference: the average conductance shows a small depression near the end of the plateau. As $`v`$ increases (curve (2)), the depression becomes more pronounced and broad, spreading over the second half of the plateau. 
Similar results, but with a less regular oscillatory pattern, are obtained for cavities with larger $`h`$, e.g., with $`h\approx 2.6a_0`$ at the beginning of the plateau. Similar qualitative features have also been obtained for narrower cavities ($`b=0.75w`$). We do not discuss separately the case when the side-gate voltage $`V_S`$ is varied, with $`h`$ and $`a`$ changing simultaneously, since we expect the effect to be the same as when $`V_T`$ changes. The experimental results of Figs. 3 and 4 show no qualitative difference between the two cases and confirm this assessment. 

## IV DISCUSSION AND CONCLUSIONS 

Comparison of the experimental and theoretical results shows that the qualitative behavior of the ballistic conductance, as a function of the different gate voltages, is the same in either case. This indicates that our choice of parameters and the assumptions about their variation with gate voltages, supported in part by the solution of the electrostatic problem, fairly represent the experimental situation. Below we discuss this in more detail. When we make $`V_S`$ and $`V_T`$ more negative than $`V_B`$, the conducting channel is shifted towards the bottom gate and the cavity height $`h`$ decreases. In this situation we obtain rather regular oscillations of the conductance as a function of the bottom, top, or side gate voltages. The minima correspond to resonant reflections of the electron waves from quasibound states in the cavity. Variation of one of the gate voltages sweeps the levels of the quasibound states through the Fermi level $`E_F`$ and results in a resonance minimum each time $`E_F`$ coincides with one of them. The regularity in shape and spacing of these minima, for cavities with small heights, follows from the fact that these minima originate from the same set of quasibound states. For confirmation, we calculated again curves (1) in Figs. 
10 and 11 with just two transverse cavity modes in the expansion (4); in the region of the first plateau the first mode is transmitted through the constrictions, while the second one is quasibound in the cavity. Comparing these results with the curves (1) of Figs. 10 and 11, we find that all regular resonances occurring on the first plateau appear also in this simplified calculation and their shapes are very similar. We therefore conclude that the regular resonance reflection pattern results from quasibound states associated with the second transverse mode, each of them characterized by its own wavenumber expressing longitudinal quantization along the $`x`$ direction. Higher transverse quantization states bring additional resonant features and make the dependence of the conductance on the gate voltages less regular. The experimental results of Figs. 3-5 (solid curves) show well-defined, regularly spaced minima in $`G`$ and fully corroborate this analysis. Note that the number of the minima obtained from the theory is about the same as that observed experimentally; this means that the cavity width is really close to the lithographic width $`2w`$ of the stub, as we assumed in our model. The experimental dependences of the conductance on the top, side, and bottom gate voltages are similar and show almost the same number of oscillations. This is in agreement with our assumption that we are dealing with a wide cavity of small height. For the same $`h`$ the minima are narrower for wider cavities, since in such a case the upper boundary of the cavity is smoother and the electron motion through the cavity is more adiabatic. In terms of our model this means that the derivative $`dy_{cav}(x)/dx`$ is smaller. Experimentally, a broadening of the cavity without changing its height can be achieved by applying less negative voltages to the side gates. 
When we make $`V_S`$ and/or $`V_T`$ less negative compared to the values of the ”regular” case discussed above, the channel widens and the cavity height $`h`$ increases. In this ”irregular” case the situation is modified as follows: i) more quasibound states occur and influence the resonant reflection; ii) the coupling between transmitted and quasibound states increases due to the increase of the non-adiabaticity. Both these factors should lead to a decrease of the average conductance $`G`$. This decrease of $`G`$ is clearly seen in the theoretical results, cf. Figs. 10-11. Besides, the theory shows that the behavior of $`G`$ differs qualitatively between the gate voltages: the variation of $`V_B`$ produces broad minima and a depression of the average $`G`$ over the entire plateaus, while the variation of $`V_T`$ or $`V_S`$ gives narrower minima and the average $`G`$ has pronounced troughs near the ends of the plateaus. Our theoretical calculations show that the troughs are more pronounced when the height $`h`$ is larger and when it increases faster with the opening of the constriction. A simple explanation is as follows. Sweeping $`V_T`$ or $`V_S`$ towards less negative values along the conductance plateau substantially increases $`h`$ and leads to a departure from the short-cavity, ”regular” pattern to the ”irregular” one of a long cavity. Therefore, the averaged $`G`$ decreases at the end of the first plateau and then increases, when the second transverse mode is allowed in the constriction, reaching the next plateau. If the initial value of $`h`$ is already large, a smaller increase in $`h`$ suffices to move to the long-cavity regime; then the decrease of the average $`G`$ appears earlier and gives rise to a broader trough. The experimentally observed behavior of $`G`$, cf. dotted curves in Figs. 3-5, shows pronounced troughs and a decrease in its average value, in agreement with the above interpretation. 
We emphasize that the appearance of these troughs in $`G`$ is a very common and reproducible feature of the stubbed quantum devices studied in this work. Possible EST applications require a regular, periodic dependence of the conductance on the gate voltages. The initial idea (Ref. 1), followed in subsequent theoretical works, was to use a narrow cavity, containing one quantized state, or mode, in the ($`x`$) transport direction, and to control the conductance through it by changing the height of the cavity with a top-gate voltage. However, from our results it is clear that real EST devices do not satisfy these conditions because the electrostatic potential created by the gates is rounded near the corners, cf. Fig. 8. Moreover, for a real device, as this work shows, if the cavity is long enough, the dependence of the conductance on the gate voltages is rather irregular. We have shown that a proper choice of the gate voltages can produce a cavity of short height that is wide in the transport direction. The conductance through such a cavity shows a regular pattern of resonant reflection minima associated with the quasibound states in it that result from longitudinal quantization. 

###### Acknowledgements. 

One of us (P. D.) gratefully acknowledges the award of a Senior Research Associateship by the National Research Council, Washington, DC. The work of P. V. was supported by the Canadian NSERC Grant No. OGP0121756.
# Quantum lower bounds by quantum arguments 

## 1 Introduction 

In the query model, algorithms access the input only by querying input items, and the complexity of an algorithm is measured by the number of queries that it makes. Many quantum algorithms can be naturally expressed in this model. The most famous examples are Grover’s algorithm for searching an $`N`$-element list with $`O(\sqrt{N})`$ quantum queries and period-finding, which is the basis of Shor’s factoring algorithm. In the query setting, one can not only construct efficient quantum algorithms but also prove lower bounds on the number of queries that any quantum algorithm needs. For example, it can be shown that any algorithm solving the unordered search problem needs $`\mathrm{\Omega }(\sqrt{N})`$ queries. (This implies that Grover’s algorithm is optimal.) The lower bounds in the quantum query model provide insights into the limitations of quantum computing. For example, the unordered search problem provides an abstract model for NP-complete problems, and the $`\mathrm{\Omega }(\sqrt{N})`$ lower bound provided evidence of the difficulty of solving these problems on a quantum computer. For two related problems, inverting a permutation (often used to model a one-way permutation) and AND of ORs, only weaker lower bounds have been known. Both of these problems can be solved using Grover’s algorithm: with $`O(\sqrt{N})`$ queries for inverting a permutation and $`O(\sqrt{N}\mathrm{log}N)`$ queries for AND of ORs. However, the best lower bounds have been $`\mathrm{\Omega }(\sqrt[3]{N})`$ and $`\mathrm{\Omega }(\sqrt[4]{N})`$, respectively. We present a new method for proving lower bounds on quantum query algorithms and use it to prove $`\mathrm{\Omega }(\sqrt{N})`$ lower bounds for inverting a permutation and for AND of ORs. It also provides a unified proof for several other results that have previously been proven via a variety of different techniques. 
In contrast to earlier proofs that use a classical adversary argument (an adversary runs the algorithm with one input and, after that, changes the input slightly so that the correct answer changes but the algorithm does not notice the change), we use a quantum adversary. In other words, instead of running the algorithm with one input, we run it with a superposition of inputs. This gives stronger bounds and can also simplify the proofs. More formally, we consider a bipartite quantum system consisting of the algorithm and an oracle answering the algorithm’s queries. At the beginning, the algorithm part is in its starting state (normally $`|0\rangle `$), the oracle part is in a uniform superposition over some set of inputs, and the two parts are not entangled. In the query model, the algorithm can either perform a unitary transformation that does not depend on the input or a query transformation that accesses the input. The unitary transformations of the first type become unitary transformations over the algorithm part of the superposition. The queries become transformations entangling the algorithm part with the oracle part. If the algorithm works correctly, the algorithm part becomes entangled with the oracle part because the algorithm part must contain different answers for different inputs. We obtain lower bounds on quantum algorithms by bounding the number of query transformations needed to achieve such entanglement. Previously, the two main lower bound methods were the classical adversary (also called the ’hybrid argument’) and polynomials methods. The classical adversary/hybrid method starts with running the algorithm on one input. Then the input is modified so that the behavior of the algorithm does not change much but the correct answer does change. That implies that the problem cannot be solved with a small number of queries. 
The polynomials method uses the fact that any function computable with a small number of queries can be approximated by a polynomial of a small degree and then applies results about inapproximability by polynomials. Our “quantum adversary” method can be used to give more unified proofs for many (but not all) results that were previously shown using different variants of the hybrid and/or polynomials methods. There is also a new proof of the $`\mathrm{\Omega }(\sqrt{N})`$ lower bound on unordered search by Grover. This proof is based on considering the sum of distances between superpositions on different inputs. While the motivation for Grover’s proof (sum of distances) is fairly different from ours (quantum adversary), these two methods are, in fact, closely related. We discuss this relation in section 7. 

## 2 The model 

We consider computing a Boolean function $`f(x_1,\ldots ,x_N):\{0,1\}^N\to \{0,1\}`$ in the quantum query model. In this model, the input bits can be accessed by queries to an oracle $`X`$, and the complexity of $`f`$ is the number of queries needed to compute $`f`$. A quantum computation with $`T`$ queries is just a sequence of unitary transformations $$U_0OU_1O\cdots U_{T-1}OU_T.$$ $`U_j`$’s can be arbitrary unitary transformations that do not depend on the input bits $`x_1,\ldots ,x_N`$. $`O`$ are query (oracle) transformations. To define $`O`$, we represent basis states as $`|i,b,z\rangle `$ where $`i`$ consists of $`\mathrm{log}N`$ bits, $`b`$ is one bit and $`z`$ consists of all other bits. Then, $`O`$ maps $`|i,b,z\rangle `$ to $`|i,b\oplus x_i,z\rangle `$ (i.e., the first $`\mathrm{log}N`$ bits are interpreted as an index $`i`$ for an input bit $`x_i`$ and this input bit is XORed onto the next qubit). We use $`O_x`$ to denote the query transformation corresponding to an input $`x=(x_1,\ldots ,x_n)`$. Also, we can define $`O`$ so that it maps $`|i,b,z\rangle `$ to $`(-1)^{bx_i}|i,b,z\rangle `$ (i.e., instead of XORing $`x_i`$ onto an extra qubit, we change the phase depending on $`x_i`$). 
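Both query types are small explicit unitaries and can be written down directly. The toy sketch below (an illustration, not from the paper; the workspace bits $`z`$ are omitted) builds each oracle on the $`2N`$-dimensional $`|i,b\rangle `$ space and checks that conjugating the XOR oracle by a Hadamard on the $`b`$ qubit turns it into the phase oracle, which is one way to see the equivalence of the two definitions:

```python
import numpy as np

def xor_oracle(x):
    """First-type query: |i, b> -> |i, b XOR x_i> (workspace z omitted).

    Basis ordering: state |i, b> sits at index 2*i + b, so the result
    is a 2N x 2N permutation matrix."""
    N = len(x)
    O = np.zeros((2 * N, 2 * N))
    for i in range(N):
        for b in (0, 1):
            O[2 * i + (b ^ x[i]), 2 * i + b] = 1.0
    return O

def phase_oracle(x):
    """Second-type query: |i, b> -> (-1)^(b * x_i) |i, b> (diagonal unitary)."""
    diag = [(-1.0) ** (b * x[i]) for i in range(len(x)) for b in (0, 1)]
    return np.diag(diag)
```

On each 2-dimensional $`b`$ block the XOR oracle is either the identity ($`x_i=0`$) or the bit flip $`X`$ ($`x_i=1`$), and $`HXH=Z`$, which is exactly the corresponding phase-oracle block.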
It is well known that both definitions are equivalent up to a constant factor: one query of the $`1^{\mathrm{st}}`$ type can be simulated with one query of the $`2^{\mathrm{nd}}`$ type, and one query of the $`2^{\mathrm{nd}}`$ type can be simulated with 2 queries of the $`1^{\mathrm{st}}`$ type. For technical convenience, we use the $`2^{\mathrm{nd}}`$ definition in most of this paper. The computation starts with the state $`|0\rangle `$. Then, we apply $`U_0`$, $`O`$, $`\ldots `$, $`O`$, $`U_T`$ and measure the final state. The result of the computation is the rightmost bit of the state obtained by the measurement. The quantum computation computes $`f`$ with bounded error if, for every $`x=(x_1,\ldots ,x_N)`$, the probability that the rightmost bit of $`U_TO_xU_{T-1}\cdots O_xU_0|0\rangle `$ equals $`f(x_1,\ldots ,x_N)`$ is at least $`1-ϵ`$ for some fixed $`ϵ<1/2`$. This model can be easily extended to functions defined on a larger set (for example, $`\{1,\ldots ,N\}`$) or functions having more than 2 values. In the first case, we replace the one bit $`b`$ with several bits ($`\mathrm{log}N`$ bits in the case of $`\{1,\ldots ,N\}`$). In the second case, we measure several rightmost bits to obtain the answer. 

## 3 The main idea 

Let $`S`$ be a subset of the set of possible inputs $`\{0,1\}^N`$. We run the algorithm on a superposition of inputs in $`S`$. More formally, let $`\mathcal{H}_A`$ be the workspace of the algorithm. We consider a bipartite system $`\mathcal{H}=\mathcal{H}_A\otimes \mathcal{H}_I`$ where $`\mathcal{H}_I`$ is an “input subspace” spanned by basis vectors $`|x\rangle `$ corresponding to inputs $`x\in S`$. Let $`U_TOU_{T-1}\cdots U_0`$ be the sequence of unitary transformations on $`\mathcal{H}_A`$ performed by the algorithm $`A`$ (with $`U_0,\ldots ,U_T`$ being the transformations that do not depend on the input and $`O`$ being the query transformations). We transform it into a sequence of unitary transformations on $`\mathcal{H}`$. A unitary transformation $`U_i`$ on $`\mathcal{H}_A`$ corresponds to the transformation $`U_i^{\prime }=U_i\otimes I`$ on the whole $`\mathcal{H}`$. 
The query transformation $`O`$ corresponds to a transformation $`O^{\prime }`$ that is equal to $`O_x`$ on the subspace $`\mathcal{H}_A\otimes |x\rangle `$. We perform the sequence of transformations $`U_T^{\prime }O^{\prime }U_{T-1}^{\prime }\cdots U_0^{\prime }`$ on the starting state $$|\psi _{start}\rangle =|0\rangle \otimes \sum _{x\in S}\alpha _x|x\rangle .$$ Then, the final state is $$|\psi _{end}\rangle =\sum _{x\in S}\alpha _x|\psi _x\rangle \otimes |x\rangle $$ where $`|\psi _x\rangle `$ is the final state of $`A=U_TOU_{T-1}\cdots U_0`$ on the input $`x`$. This follows from the fact that the restrictions of $`U_T^{\prime },O^{\prime },U_{T-1}^{\prime },\ldots ,U_0^{\prime }`$ to $`\mathcal{H}_A\otimes |x\rangle `$ are $`U_T`$, $`O_x`$, $`U_{T-1}`$, $`\ldots `$, $`U_0`$ and these are exactly the transformations of the algorithm $`A`$ on the input $`x`$. In the starting state, the $`\mathcal{H}_A`$ and $`\mathcal{H}_I`$ parts of the superposition are unentangled. In the final state, however, they must be entangled (if the algorithm works correctly). To see that, consider a simple example where the algorithm has to recover the whole input. Let $`\alpha _x=1/\sqrt{m}`$ (where $`m=|S|`$) for all $`x\in S`$. In the exact model (the algorithm is not allowed to give the wrong answer even with a small probability), $`|\psi _x\rangle `$ must be $`|x\rangle |\phi _x\rangle `$ where $`|x\rangle `$ is the answer of the algorithm and $`|\phi _x\rangle `$ are the algorithm’s workbits. This means that the final state is $$\frac{1}{\sqrt{m}}\sum _{x\in S}|x\rangle |\phi _x\rangle \otimes |x\rangle ,$$ i.e., it is a fully entangled state. In the bounded error model (the algorithm can give a wrong answer with a probability at most $`ϵ`$), $`|\psi _x\rangle `$ must be $`(1-ϵ)|x\rangle |\phi _x\rangle +|\psi _x^{\prime }\rangle `$ and the final state must be $$\frac{1}{\sqrt{m}}\sum _{x\in S}((1-ϵ)|x\rangle |\phi _x\rangle +|\psi _x^{\prime }\rangle )\otimes |x\rangle $$ which is also a quite highly entangled state. In the general case (we have to compute some function $`f`$ instead of learning the whole input $`x`$), the parts of $`\mathcal{H}_I`$ corresponding to inputs with $`f(x)=z`$ must become entangled with the parts of $`\mathcal{H}_A`$ corresponding to the answer $`z`$. 
Thus, we can show a lower bound on quantum query algorithms by showing that, given an unentangled start state, we cannot achieve a highly entangled end state with less than a certain number of query transformations. Next, we describe more formally how we bound this entanglement. If we trace out $`\mathcal{H}_A`$ from the states $`|\psi _{start}\rangle `$ and $`|\psi _{end}\rangle `$, we obtain mixed states over $`\mathcal{H}_I`$. Let $`\rho _{start}`$ and $`\rho _{end}`$ be the density matrices describing these states. $`\rho _{start}`$ is an $`m\times m`$ matrix corresponding to the pure state $`\sum _{x\in S}\alpha _x|x\rangle `$. The entries of this matrix are $`(\rho _{start})_{xy}=\alpha _x^{\ast }\alpha _y`$. In particular, if the start state is $`\frac{1}{\sqrt{m}}\sum _{x\in S}|x\rangle `$, all entries of $`\rho _{start}`$ are $`1/m`$. For $`\rho _{end}`$ we have 

###### Lemma 1 

Let $`A`$ be an algorithm that computes $`f`$ with probability at least $`1-ϵ`$. Let $`x,y`$ be such that $`f(x)\ne f(y)`$. Then, $$|(\rho _{end})_{xy}|\le 2\sqrt{ϵ(1-ϵ)}|\alpha _x||\alpha _y|.$$ Proof: Let $`|\psi _x\rangle `$, $`|\psi _y\rangle `$ be the final superpositions of the algorithm $`A`$ on the inputs $`x,y`$. We take a basis for $`\mathcal{H}_A`$ consisting of the vectors of the form $`|z\rangle |v\rangle `$ where $`|z\rangle `$ is a basis for the answer part (the part which is measured at the end of the algorithm to obtain the answer) and $`|v\rangle `$ is a basis for the rest of $`\mathcal{H}_A`$ (workbits). We express $`|\psi _x\rangle `$ and $`|\psi _y\rangle `$ in this basis. Let $$|\psi _x\rangle =\sum _{z,v}a_{z,v}|z\rangle |v\rangle ,\text{ }|\psi _y\rangle =\sum _{z,v}b_{z,v}|z\rangle |v\rangle .$$ The final state of the algorithm is $`\sum _{x\in S}\alpha _x|\psi _x\rangle \otimes |x\rangle `$. By tracing out $`\mathcal{H}_A`$ in the $`|z\rangle |v\rangle `$ basis, we get $$(\rho _{end})_{xy}=\alpha _x^{\ast }\alpha _y\sum _{z,v}a_{z,v}^{\ast }b_{z,v}.$$ Define $`ϵ_1=\sum _{z,v:z\ne f(x)}|a_{z,v}|^2`$ and $`ϵ_2=\sum _{z,v:z=f(x)}|b_{z,v}|^2`$. Then, $`ϵ_1\le ϵ`$ and $`ϵ_2\le ϵ`$ (because these are the probabilities that the measurement at the end of the algorithm gives us a wrong answer: not $`f(x)`$ for the input $`x`$ and $`f(x)`$ for the input $`y`$). 
We have $$\left|\sum _{z,v}a_{z,v}^{\ast }b_{z,v}\right|\le \sum _{z,v}|a_{z,v}||b_{z,v}|=\sum _{z,v:z=f(x)}|a_{z,v}||b_{z,v}|+\sum _{z,v:z\ne f(x)}|a_{z,v}||b_{z,v}|$$ $$\le \sqrt{\sum _{z,v:z=f(x)}|a_{z,v}|^2}\sqrt{\sum _{z,v:z=f(x)}|b_{z,v}|^2}+\sqrt{\sum _{z,v:z\ne f(x)}|a_{z,v}|^2}\sqrt{\sum _{z,v:z\ne f(x)}|b_{z,v}|^2}=\sqrt{(1-ϵ_1)ϵ_2}+\sqrt{ϵ_1(1-ϵ_2)}.$$ This expression is maximized by $`ϵ_1=ϵ_2=ϵ`$, giving us $`2\sqrt{ϵ(1-ϵ)}`$. Therefore, $$|(\rho _{end})_{xy}|=|\alpha _x||\alpha _y|\left|\sum _{z,v}a_{z,v}^{\ast }b_{z,v}\right|\le 2\sqrt{ϵ(1-ϵ)}|\alpha _x||\alpha _y|.$$ $`\mathrm{\square }`$ In particular, if $`|\psi _{start}\rangle `$ is the uniform $`\frac{1}{\sqrt{m}}\sum _{x\in S}|x\rangle `$, we have $`|(\rho _{end})_{xy}|\le 2\sqrt{ϵ(1-ϵ)}/m`$. Note that, for any $`ϵ<1/2`$, $`2\sqrt{ϵ(1-ϵ)}<1`$. Thus, if the algorithm $`A`$ works correctly, the absolute value of every entry of $`\rho _{end}`$ that corresponds to inputs $`x,y`$ with $`f(x)\ne f(y)`$ must be smaller than the corresponding entry of $`\rho _{start}`$ by a constant fraction. To prove a lower bound on the number of queries, we bound the change in $`\rho _{xy}`$ caused by one query. Together with Lemma 1, this implies a lower bound on the number of queries. 

## 4 Lower bound on search 

Next, we apply this technique to several problems. We start with the simplest case: the lower bound on the unordered search problem (Theorem 1). Then, we show two general lower bound theorems (Theorems 2 and 6). Each of these theorems is a special case of the next one: Theorem 2 implies Theorem 1 and Theorem 6 implies Theorem 2. However, the more general theorems have more complicated proofs and it is easier to see the main idea in the simple case of unordered search. Therefore, we show this case first, before the general theorems 2 and 6. 

Problem: We are given $`x_1,\ldots ,x_N\in \{0,1\}`$ and we have to find $`i`$ such that $`x_i=1`$. 

###### Theorem 1 

Any quantum algorithm that finds $`i`$ with probability $`1-ϵ`$ uses $`\mathrm{\Omega }(\sqrt{N})`$ queries. 
Proof: Let $`S`$ be the set of inputs with one $`x_i`$ equal to 1 and the rest equal to 0. Then, $`|S|=N`$ and $`_I`$ is an $`N`$-dimensional space. To simplify the notation, we use $`|i\rangle `$ to denote the basis state of $`_I`$ corresponding to the input $`(x_1,\mathrm{},x_N)`$ with $`x_i=1`$. Let $`\rho _k`$ be the density matrix of $`_I`$ after $`k`$ queries. Note that $`\rho _0=\rho _{start}`$ and $`\rho _T=\rho _{end}`$. We consider the sum of the absolute values of all off-diagonal entries, $`S_k=\sum _{x,y:x\ne y}|(\rho _k)_{xy}|`$. We will show that 1. $`S_0=N-1`$, 2. $`S_T\le 2\sqrt{ϵ(1-ϵ)}(N-1)`$, 3. $`S_{k-1}-S_k\le 2\sqrt{N-1}`$ for all $`k\in \{1,\mathrm{},T\}`$. This implies that the number of queries $`T`$ is at least $`(1-2\sqrt{ϵ(1-ϵ)})\sqrt{N-1}/2`$. The first two properties follow straightforwardly from the results at the end of Section 3. The $`N\times N`$ matrices $`\rho _k`$ have $`N(N-1)`$ off-diagonal entries, and each of these entries is $`1/N`$ in $`\rho _{start}`$ and at most $`2\sqrt{ϵ(1-ϵ)}/N`$ in absolute value in $`\rho _{end}`$ (Lemma 1 together with the fact that each of these entries corresponds to two inputs with different answers). It remains to prove the third part. First, notice that $$S_{k-1}-S_k=\underset{x,y:x\ne y}{\sum }|(\rho _{k-1})_{xy}|-\underset{x,y:x\ne y}{\sum }|(\rho _k)_{xy}|\le \underset{x,y:x\ne y}{\sum }|(\rho _{k-1})_{xy}-(\rho _k)_{xy}|.$$ Therefore, it suffices to bound the sum of the $`|(\rho _{k-1})_{xy}-(\rho _k)_{xy}|`$. A query corresponds to representing the pure state before the query as $$|\psi _{k-1}\rangle =\underset{i,z}{\sum }\sqrt{p_{i,z}}|i,z\rangle |\psi _{i,z}\rangle ,$$ $$|\psi _{i,z}\rangle =\underset{j=1}{\overset{N}{\sum }}\alpha _{i,z,j}|j\rangle $$ and changing the phase on the $`|i\rangle `$ component of $`|\psi _{i,z}\rangle `$. If we consider just the $`_I`$ part, the density matrix $`\rho _{k-1}`$ before the query is $`\sum _{i,z}p_{i,z}|\psi _{i,z}\rangle \langle \psi _{i,z}|`$. 
The density matrix $`\rho _k`$ after the query is $`\sum _{i,z}p_{i,z}|\psi _{i,z}^{\prime }\rangle \langle \psi _{i,z}^{\prime }|`$ where $$|\psi _{i,z}^{\prime }\rangle =\underset{j\ne i}{\sum }\alpha _{i,z,j}|j\rangle -\alpha _{i,z,i}|i\rangle .$$ Consider $`\rho _{i,z}=|\psi _{i,z}\rangle \langle \psi _{i,z}|`$ and $`\rho _{i,z}^{\prime }=|\psi _{i,z}^{\prime }\rangle \langle \psi _{i,z}^{\prime }|`$. Then, $`\rho _{k-1}=\sum _{i,z}p_{i,z}\rho _{i,z}`$ and $`\rho _k=\sum _{i,z}p_{i,z}\rho _{i,z}^{\prime }`$. The only entries where $`\rho _{i,z}`$ and $`\rho _{i,z}^{\prime }`$ differ are the entries in the $`i^{\mathrm{th}}`$ column and the $`i^{\mathrm{th}}`$ row. These entries are $`\alpha _{i,z,j}^{*}\alpha _{i,z,i}`$ in $`\rho _{i,z}`$ and $`-\alpha _{i,z,j}^{*}\alpha _{i,z,i}`$ in $`\rho _{i,z}^{\prime }`$. The sum of the absolute values of the differences of all entries in the $`i^{\mathrm{th}}`$ row is $$\underset{j\ne i}{\sum }2|\alpha _{i,z,j}^{*}\alpha _{i,z,i}|=2|\alpha _{i,z,i}|\underset{j\ne i}{\sum }|\alpha _{i,z,j}|.$$ Similarly, the sum of the absolute values of the differences of all entries in the $`i^{\mathrm{th}}`$ column is $`2|\alpha _{i,z,i}|\sum _{j\ne i}|\alpha _{i,z,j}|`$ as well. So, the sum of the absolute values of all differences is at most $`4|\alpha _{i,z,i}|\sum _{j\ne i}|\alpha _{i,z,j}|`$. By the Cauchy–Schwarz inequality, $$\underset{j\ne i}{\sum }|\alpha _{i,z,j}|\le \sqrt{N-1}\sqrt{\underset{j\ne i}{\sum }|\alpha _{i,z,j}|^2}=\sqrt{N-1}\sqrt{1-|\alpha _{i,z,i}|^2}.$$ Therefore, $$\underset{j\ne i}{\sum }4|\alpha _{i,z,j}^{*}\alpha _{i,z,i}|\le 4\sqrt{N-1}|\alpha _{i,z,i}|\sqrt{1-|\alpha _{i,z,i}|^2}\le 2\sqrt{N-1}.$$ Define $`S_{i,z}=\sum _{x,y:x\ne y}|(\rho _{i,z})_{xy}-(\rho _{i,z}^{\prime })_{xy}|`$. Then, we have just shown $`S_{i,z}\le 2\sqrt{N-1}`$. This implies a bound on the sum $`\sum _{x,y:x\ne y}|(\rho _{k-1})_{xy}-(\rho _k)_{xy}|`$: $$\underset{x,y:x\ne y}{\sum }|(\rho _{k-1})_{xy}-(\rho _k)_{xy}|=\underset{x,y:x\ne y}{\sum }|\underset{i,z}{\sum }p_{i,z}(\rho _{i,z})_{xy}-\underset{i,z}{\sum }p_{i,z}(\rho _{i,z}^{\prime })_{xy}|$$ $$\le \underset{i,z}{\sum }p_{i,z}\underset{x,y:x\ne y}{\sum }|(\rho _{i,z})_{xy}-(\rho _{i,z}^{\prime })_{xy}|\le \underset{i,z}{\sum }p_{i,z}2\sqrt{N-1}=2\sqrt{N-1}.$$ This completes the proof. 
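The key step above — a single sign flip on the $`|i\rangle `$ component changes the off-diagonal mass of a pure-state density matrix by at most $`2\sqrt{N-1}`$ — can be checked numerically. The sketch below is ours, not the authors' code; it draws a random normalized amplitude vector, flips one sign, and sums the absolute entry-wise differences of the two density matrices:

```python
import numpy as np

rng = np.random.default_rng(0)

def query_change(N, i):
    # random normalized amplitude vector, playing the role of |psi_{i,z}>
    psi = rng.normal(size=N) + 1j * rng.normal(size=N)
    psi /= np.linalg.norm(psi)
    flipped = psi.copy()
    flipped[i] *= -1                      # the query flips the |i> component's sign
    rho = np.outer(psi.conj(), psi)       # rho_{xy} = alpha_x^* alpha_y
    rho2 = np.outer(flipped.conj(), flipped)
    d = np.abs(rho - rho2)
    np.fill_diagonal(d, 0.0)              # off-diagonal entries only
    return d.sum()                        # should never exceed 2*sqrt(N-1)
```

The bound holds for every normalized vector, so the assertion below is deterministic, not statistical.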
$`\mathrm{}`$ ## 5 The general lower bound ### 5.1 The result Next, we obtain a general lower bound theorem. ###### Theorem 2 Let $`f(x_1,\mathrm{},x_N)`$ be a function of $`N`$ $`\{0,1\}`$-valued variables and $`X,Y`$ be two sets of inputs such that $`f(x)\ne f(y)`$ if $`x\in X`$ and $`y\in Y`$. Let $`R\subseteq X\times Y`$ be such that 1. For every $`x\in X`$, there exist at least $`m`$ different $`y\in Y`$ such that $`(x,y)\in R`$. 2. For every $`y\in Y`$, there exist at least $`m^{\prime }`$ different $`x\in X`$ such that $`(x,y)\in R`$. 3. For every $`x\in X`$ and $`i\in \{1,\mathrm{},N\}`$, there are at most $`l`$ different $`y\in Y`$ such that $`(x,y)\in R`$ and $`x_i\ne y_i`$. 4. For every $`y\in Y`$ and $`i\in \{1,\mathrm{},N\}`$, there are at most $`l^{\prime }`$ different $`x\in X`$ such that $`(x,y)\in R`$ and $`x_i\ne y_i`$. Then, any quantum algorithm computing $`f`$ uses $`\mathrm{\Omega }(\sqrt{\frac{mm^{\prime }}{ll^{\prime }}})`$ queries. Proof: Consider the set of inputs $`S=X\cup Y`$ and the superposition $$\frac{1}{\sqrt{2|X|}}\underset{x\in X}{\sum }|x\rangle +\frac{1}{\sqrt{2|Y|}}\underset{y\in Y}{\sum }|y\rangle $$ over this set of inputs. Let $`S_k`$ be the sum of $`|(\rho _k)_{xy}|`$ over all $`x,y`$ such that $`(x,y)\in R`$. Then, the theorem follows from 1. $`S_0-S_T\ge (1-2\sqrt{ϵ(1-ϵ)})\sqrt{mm^{\prime }}`$ 2. $`S_{k-1}-S_k\le \sqrt{ll^{\prime }}`$ To show the first part, let $`(x,y)\in R`$. Then, $`(\rho _0)_{xy}=\frac{1}{\sqrt{|X||Y|}}`$ and $`|(\rho _T)_{xy}|\le \frac{2\sqrt{ϵ(1-ϵ)}}{\sqrt{|X||Y|}}`$ (Lemma 1). Therefore, $$|(\rho _0)_{xy}|-|(\rho _T)_{xy}|\ge \frac{1-2\sqrt{ϵ(1-ϵ)}}{\sqrt{|X||Y|}}.$$ The number of $`(x,y)\in R`$ is at least $`\mathrm{max}(|X|m,|Y|m^{\prime })`$ because for every $`x\in X`$, there are at least $`m`$ possible $`y\in Y`$ and, for every $`y\in Y`$, there are at least $`m^{\prime }`$ possible $`x\in X`$. We have $$\mathrm{max}(|X|m,|Y|m^{\prime })\ge \frac{|X|m+|Y|m^{\prime }}{2}\ge \sqrt{|X||Y|mm^{\prime }},$$ $$S_0-S_T\ge \sqrt{|X||Y|mm^{\prime }}\frac{1-2\sqrt{ϵ(1-ϵ)}}{\sqrt{|X||Y|}}=(1-2\sqrt{ϵ(1-ϵ)})\sqrt{mm^{\prime }}.$$ Next, we prove the second part. 
Similarly to the previous proof, we represent $$|\psi _{k-1}\rangle =\underset{i,z}{\sum }\sqrt{p_{i,z}}|i,z\rangle |\psi _{i,z}\rangle ,$$ $$|\psi _{i,z}\rangle =\underset{x\in S}{\sum }\alpha _{i,z,x}|x\rangle .$$ A query corresponds to changing the sign on all components with $`x_i=1`$. It transforms $`|\psi _{i,z}\rangle `$ to $$|\psi _{i,z}^{\prime }\rangle =\underset{x\in S:x_i=0}{\sum }\alpha _{i,z,x}|x\rangle -\underset{x\in S:x_i=1}{\sum }\alpha _{i,z,x}|x\rangle .$$ Let $`\rho _{i,z}=|\psi _{i,z}\rangle \langle \psi _{i,z}|`$ and $`\rho _{i,z}^{\prime }=|\psi _{i,z}^{\prime }\rangle \langle \psi _{i,z}^{\prime }|`$. We define $`S_{i,z}=\sum _{(x,y)\in R}|(\rho _{i,z})_{xy}-(\rho _{i,z}^{\prime })_{xy}|`$. If $`x_i=y_i`$, then $`(\rho _{i,z})_{xy}`$ and $`(\rho _{i,z}^{\prime })_{xy}`$ are the same. If one of $`x_i`$, $`y_i`$ is 0 and the other is 1, $`(\rho _{i,z})_{xy}=-(\rho _{i,z}^{\prime })_{xy}`$ and $`|(\rho _{i,z})_{xy}-(\rho _{i,z}^{\prime })_{xy}|=2|(\rho _{i,z})_{xy}|=2|\alpha _{i,z,x}||\alpha _{i,z,y}|`$. Therefore, $$S_{i,z}=\underset{(x,y)\in R:x_i\ne y_i}{\sum }2|\alpha _{i,z,x}||\alpha _{i,z,y}|\le \underset{(x,y)\in R:x_i\ne y_i}{\sum }\left(\sqrt{\frac{l^{\prime }}{l}}|\alpha _{i,z,x}|^2+\sqrt{\frac{l}{l^{\prime }}}|\alpha _{i,z,y}|^2\right)$$ $$\le \underset{x\in X}{\sum }l\sqrt{\frac{l^{\prime }}{l}}|\alpha _{i,z,x}|^2+\underset{y\in Y}{\sum }l^{\prime }\sqrt{\frac{l}{l^{\prime }}}|\alpha _{i,z,y}|^2=\sqrt{ll^{\prime }}\underset{x\in X\cup Y}{\sum }|\alpha _{i,z,x}|^2=\sqrt{ll^{\prime }}.$$ Similarly to the previous proof, this implies the same bound on $`S_{k-1}-S_k`$. $`\mathrm{}`$ ### 5.2 Relation to the block sensitivity bound Our Theorem 2 generalizes the block sensitivity bound of . Let $`f`$ be a Boolean function and $`x=(x_1,\mathrm{},x_n)`$ an input to $`f`$. For a set $`S\subseteq \{1,\mathrm{},n\}`$, $`x^{(S)}`$ denotes the input obtained from $`x`$ by flipping all variables $`x_i`$, $`i\in S`$. $`f`$ is sensitive to $`S`$ on input $`x`$ if $`f(x)\ne f(x^{(S)})`$. The block sensitivity of $`f`$ on input $`x`$ is the maximal number $`t`$ such that there exist $`t`$ pairwise disjoint sets $`S_1`$, $`S_2`$, $`\mathrm{}`$, $`S_t`$ such that, for all $`i\in \{1,\mathrm{},t\}`$, $`f`$ is sensitive to $`S_i`$ on $`x`$. We denote it by $`bs_x(f)`$. 
The block sensitivity of $`f`$, $`bs(f)`$, is just the maximum of $`bs_x(f)`$ over all inputs $`x`$. ###### Theorem 3 Let $`f`$ be any Boolean function. Then, any quantum query algorithm computing $`f`$ uses $`\mathrm{\Omega }(\sqrt{bs(f)})`$ queries. To see that this is a particular case of Theorem 2, let $`x`$ be the input on which $`f`$ achieves $`bs(f)`$ block sensitivity. Then, we can take $`X=\{x\}`$ and $`Y=\{x^{(S_1)},\mathrm{},x^{(S_{bs(f)})}\}`$. Let $`R=\{(x,x^{(S_1)}),(x,x^{(S_2)}),\mathrm{},(x,x^{(S_{bs(f)})})\}`$. Then, $`m=bs(f)`$ and $`m^{\prime }=1`$. Also, $`l=1`$ (because, by the definition of block sensitivity, the $`m`$ blocks of input variables have to be disjoint) and $`l^{\prime }=1`$. Therefore, we get $`\frac{mm^{\prime }}{ll^{\prime }}=bs(f)`$ and Theorem 2 gives the $`\mathrm{\Omega }(\sqrt{bs(f)})`$ lower bound for any Boolean function $`f`$. In the next subsection we show some problems for which our method gives a better bound than the block-sensitivity method. ### 5.3 Applications For a first application, consider the AND of ORs: $$f(x_1,\mathrm{},x_N)=(x_1\text{ OR }x_2\text{ OR }\mathrm{}\text{ OR }x_{\sqrt{N}})\text{ AND }(x_{\sqrt{N}+1}\text{ OR }\mathrm{}\text{ OR }x_{2\sqrt{N}})\text{ AND }\mathrm{}\text{ AND }(x_{N-\sqrt{N}+1}\text{ OR }\mathrm{}\text{ OR }x_N)$$ where $`x_1,\mathrm{},x_N\in \{0,1\}`$. $`f`$ can be computed with $`O(\sqrt{N}\mathrm{log}N)`$ queries by a two-level version of Grover’s algorithm (see ). However, a straightforward application of the lower bound methods of only gives an $`\mathrm{\Omega }(\sqrt[4]{N})`$ bound because the block sensitivity of $`f`$ is $`\mathrm{\Theta }(\sqrt{N})`$ and the lower bound on the number of queries given by the hybrid or polynomials methods is the square root of the block sensitivity. Our method gives ###### Theorem 4 Any quantum algorithm computing the AND of ORs uses $`\mathrm{\Omega }(\sqrt{N})`$ queries. Proof: For this problem, let $`X`$ be the set of all $`x=(x_1,\mathrm{},x_N)`$ such that, for every $`i\in \{1,\mathrm{},\sqrt{N}\}`$, there is exactly one $`j\in \{1,\mathrm{},\sqrt{N}\}`$ with $`x_{(i-1)\sqrt{N}+j}=1`$. 
$`Y`$ is the set of all $`y=(y_1,\mathrm{},y_N)`$ such that $`y_{(i-1)\sqrt{N}+1}=\mathrm{}=y_{i\sqrt{N}}=0`$ for some $`i`$ and, for every $`i^{\prime }\ne i`$, there is a unique $`j\in \{1,\mathrm{},\sqrt{N}\}`$ with $`y_{(i^{\prime }-1)\sqrt{N}+j}=1`$. $`R`$ consists of all pairs $`(x,y)`$, $`x\in X`$, $`y\in Y`$ such that there is exactly one $`i`$ with $`x_i\ne y_i`$. Then, $`m=m^{\prime }=\sqrt{N}`$ because, given $`x\in X`$, there are $`\sqrt{N}`$ 1s that can be replaced by 0 and replacing any one of them gives some $`y\in Y`$. Conversely, if $`y\in Y`$, there are $`\sqrt{N}`$ ways to add one more 1 so that we get $`x\in X`$. On the other hand, $`l=l^{\prime }=1`$ because, given $`x\in X`$ (or $`y\in Y`$) and $`i\in \{1,\mathrm{},N\}`$, there is only one input that differs from it only in the $`i^{\mathrm{th}}`$ position. Therefore, $`\sqrt{\frac{mm^{\prime }}{ll^{\prime }}}=\sqrt{N}`$ and the result follows from Theorem 2. $`\mathrm{}`$ Our theorem can also be used to give another proof of the following theorem of Nayak and Wu. ###### Theorem 5 Let $`f:\{0,1,\mathrm{},n-1\}\to \{0,1\}`$ be a Boolean function that is equal to 1 either at exactly $`n/2`$ points of the domain or at exactly $`(1+ϵ)n/2`$ points. Then, any quantum algorithm that determines whether the number of points where $`f(x)=1`$ is $`n/2`$ or $`(1+ϵ)n/2`$ uses $`\mathrm{\Omega }(\frac{1}{ϵ})`$ queries. This result implies lower bounds on the number of quantum queries needed to compute (or to approximate) the median of $`n`$ numbers. It was shown in using the polynomials method. No proof that uses adversary arguments similar to is known. With our “quantum adversary” method, Theorem 5 can be proven similarly to the other theorems in this paper. Proof: Let $`X`$ be the set of all $`f`$ that are 1 at exactly $`n/2`$ points, $`Y`$ be the set of all $`f`$ that are 1 at $`(1+ϵ)n/2`$ points and $`R`$ be the set of all $`(f,f^{\prime })`$ such that $`f\in X`$, $`f^{\prime }\in Y`$ and they differ in exactly $`ϵn/2`$ points. 
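For small $`n`$, the parameters of the relation just defined can be counted by brute force. The sketch below is ours (the function name is not from the paper); it views each input as the subset of the domain where $`f`$ is 1, takes $`n=8`$ and $`ϵ=1/2`$ (so the sets in $`Y`$ have $`k=ϵn/2=2`$ extra elements), and computes $`m`$, $`m^{\prime }`$, $`l`$, $`l^{\prime }`$ directly:

```python
from itertools import combinations

def adversary_params(n, k):
    """Brute-force m, m', l, l' for the counting problem; k = eps*n/2."""
    X = [frozenset(c) for c in combinations(range(n), n // 2)]
    Y = [frozenset(c) for c in combinations(range(n), n // 2 + k)]
    # (x, y) in R iff the inputs differ in exactly k points
    R = [(x, y) for x in X for y in Y if len(x ^ y) == k]
    m = min(sum(1 for (a, b) in R if a == x) for x in X)
    mp = min(sum(1 for (a, b) in R if b == y) for y in Y)
    l = max(sum(1 for (a, b) in R if a == x and i in (a ^ b))
            for x in X for i in range(n))
    lp = max(sum(1 for (a, b) in R if b == y and i in (a ^ b))
             for y in Y for i in range(n))
    return m, mp, l, lp
```

For $`n=8`$, $`ϵ=1/2`$ this returns $`m=6`$, $`m^{\prime }=15`$, $`l=3`$, $`l^{\prime }=5`$, so $`mm^{\prime }/(ll^{\prime })=6=(1+ϵ)/ϵ^2`$.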
Then, $`m=\left(\genfrac{}{}{0pt}{}{n/2}{ϵn/2}\right)`$ and $`m^{\prime }=\left(\genfrac{}{}{0pt}{}{(1+ϵ)n/2}{ϵn/2}\right)`$. On the other hand, $`l=\left(\genfrac{}{}{0pt}{}{n/2-1}{ϵn/2-1}\right)`$ and $`l^{\prime }=\left(\genfrac{}{}{0pt}{}{(1+ϵ)n/2-1}{ϵn/2-1}\right)`$. Therefore, $$\frac{mm^{\prime }}{ll^{\prime }}=\frac{\left(\genfrac{}{}{0pt}{}{n/2}{ϵn/2}\right)\left(\genfrac{}{}{0pt}{}{(1+ϵ)n/2}{ϵn/2}\right)}{\left(\genfrac{}{}{0pt}{}{n/2-1}{ϵn/2-1}\right)\left(\genfrac{}{}{0pt}{}{(1+ϵ)n/2-1}{ϵn/2-1}\right)}=\frac{\frac{n}{2}\cdot \frac{(1+ϵ)n}{2}}{\frac{ϵn}{2}\cdot \frac{ϵn}{2}}=\frac{1+ϵ}{ϵ^2}>\frac{1}{ϵ^2}.$$ By Theorem 2, the number of queries is $`\mathrm{\Omega }(\sqrt{\frac{mm^{\prime }}{ll^{\prime }}})=\mathrm{\Omega }(1/ϵ)`$. $`\mathrm{}`$ There are several other known lower bounds that also follow from Theorem 2. In particular, Theorem 2 implies the $`\mathrm{\Omega }(N)`$ lower bounds for MAJORITY and PARITY of . ## 6 Inverting a permutation ### 6.1 Extension of Theorem 2 For some lower bounds (like inverting a permutation), we need the following extension of Theorem 2. This is our most general result. ###### Theorem 6 Let $`f(x_1,\mathrm{},x_N)`$ be a function of $`N`$ variables with values from some finite set and $`X,Y`$ be two sets of inputs such that $`f(x)\ne f(y)`$ if $`x\in X`$ and $`y\in Y`$. Let $`R\subseteq X\times Y`$ be such that 1. For every $`x\in X`$, there exist at least $`m`$ different $`y\in Y`$ such that $`(x,y)\in R`$. 2. For every $`y\in Y`$, there exist at least $`m^{\prime }`$ different $`x\in X`$ such that $`(x,y)\in R`$. Let $`l_{x,i}`$ be the number of $`y\in Y`$ such that $`(x,y)\in R`$ and $`x_i\ne y_i`$ and $`l_{y,i}`$ be the number of $`x\in X`$ such that $`(x,y)\in R`$ and $`x_i\ne y_i`$. Let $`l_{max}`$ be the maximum of $`l_{x,i}l_{y,i}`$ over all $`(x,y)\in R`$ and $`i\in \{1,\mathrm{},N\}`$ such that $`x_i\ne y_i`$. Then, any quantum algorithm computing $`f`$ uses $`\mathrm{\Omega }(\sqrt{\frac{mm^{\prime }}{l_{max}}})`$ queries. The parameters $`l`$ and $`l^{\prime }`$ of Theorem 2 are just $`\mathrm{max}_{x\in X,i}l_{x,i}`$ and $`\mathrm{max}_{y\in Y,i}l_{y,i}`$. 
It is easy to see that $$\underset{(x,y)\in R,x_i\ne y_i}{\mathrm{max}}l_{x,i}l_{y,i}\le \underset{x,i}{\mathrm{max}}l_{x,i}\underset{y,i}{\mathrm{max}}l_{y,i}.$$ Therefore, the lower bound given by Theorem 6 is always greater than or equal to the lower bound of Theorem 2. However, Theorem 6 gives a better bound if, for every $`(x,y)\in R`$ and $`i`$, at least one of $`l_{x,i}`$ or $`l_{y,i}`$ is less than its maximal value (which happens for inverting a permutation). Also, Theorem 6 allows $`\{1,\mathrm{},N\}`$-valued variables instead of only the $`\{0,1\}`$-valued ones of Theorem 2. Proof: Similarly to the proof of Theorem 2, we consider the set of inputs $`S=X\cup Y`$ and the superposition $$\frac{1}{\sqrt{2|X|}}\underset{x\in X}{\sum }|x\rangle +\frac{1}{\sqrt{2|Y|}}\underset{y\in Y}{\sum }|y\rangle $$ over this set of inputs. Let $`S_k`$ be the sum of $`|(\rho _k)_{xy}|`$ over all $`x,y`$ such that $`(x,y)\in R`$. The theorem follows from 1. $`S_0-S_T\ge (1-2\sqrt{ϵ(1-ϵ)})\sqrt{mm^{\prime }}`$ 2. $`S_{k-1}-S_k\le \sqrt{l_{max}}`$ The first part is shown in the same way as in the proof of Theorem 2. For the second part, express the state before the $`k^{\mathrm{th}}`$ query as $$|\psi _{k-1}\rangle =\underset{i,a,z,x}{\sum }\alpha _{i,a,z,x}|i,a,z\rangle |x\rangle $$ where $`i`$ is the index of the input variable $`x_i`$ being queried, $`a`$ are $`\mathrm{log}N`$ bits for the answer, $`z`$ is the part of $`_A`$ that does not participate in the query (extra workbits) and $`x`$ is the $`_I`$ part of the superposition. A query changes this to $$|\psi _k\rangle =\underset{i,a,z,x}{\sum }\alpha _{i,a,z,x}|i,a\oplus x_i,z\rangle |x\rangle =\underset{i,a,z,x}{\sum }\alpha _{i,a\oplus x_i,z,x}|i,a,z\rangle |x\rangle .$$ Denote $$|\psi _{i,a,z}\rangle =\underset{x}{\sum }\alpha _{i,a,z,x}|x\rangle ,\text{ }|\psi _{i,a,z}^{\prime }\rangle =\underset{x}{\sum }\alpha _{i,a\oplus x_i,z,x}|x\rangle .$$ $`\rho _{k-1,i}=\sum _{a,z}|\psi _{i,a,z}\rangle \langle \psi _{i,a,z}|`$ and $`\rho _{k,i}=\sum _{a,z}|\psi _{i,a,z}^{\prime }\rangle \langle \psi _{i,a,z}^{\prime }|`$ are the parts of $`\rho _{k-1}`$ and $`\rho _k`$ corresponding to querying $`i`$. We have $`\rho _{k-1}=\sum _{i=1}^{N}\rho _{k-1,i}`$ and $`\rho _k=\sum _{i=1}^{N}\rho _{k,i}`$. 
Let $`S_{k,i}`$ be the sum of the absolute values of the differences $`|(\rho _{k-1,i})_{xy}-(\rho _{k,i})_{xy}|`$ over all $`(x,y)\in R`$. Then, for every $`x,y`$, $$|(\rho _{k-1})_{xy}|-|(\rho _k)_{xy}|\le |(\rho _{k-1})_{xy}-(\rho _k)_{xy}|=|\underset{i}{\sum }(\rho _{k-1,i})_{xy}-\underset{i}{\sum }(\rho _{k,i})_{xy}|\le \underset{i}{\sum }|(\rho _{k-1,i})_{xy}-(\rho _{k,i})_{xy}|.$$ Therefore (by summing over all such $`x`$ and $`y`$), $`S_{k-1}-S_k\le \sum _iS_{k,i}`$ and we can bound $`S_{k-1}-S_k`$ by bounding $`S_{k,i}`$. Let $`x`$, $`y`$ be two inputs such that $`x_i=y_i`$. Then, it is easy to see that $$(\rho _{k-1,i})_{xy}=\underset{a,z}{\sum }\alpha _{i,a,z,x}^{*}\alpha _{i,a,z,y}=\underset{a,z}{\sum }\alpha _{i,a\oplus x_i,z,x}^{*}\alpha _{i,a\oplus y_i,z,y}=(\rho _{k,i})_{xy}.$$ Therefore, the only non-zero entries in $`S_{k,i}`$ are the entries corresponding to $`(x,y)\in R`$ with $`x_i\ne y_i`$. The sum of their differences $`|(\rho _{k-1,i})_{xy}-(\rho _{k,i})_{xy}|`$ is at most the sum of the absolute values of such entries in $`\rho _{k-1,i}`$ plus the sum of the absolute values of them in $`\rho _{k,i}`$. We bound these two sums. First, any density matrix is positive semidefinite. This implies that $$|(\rho _{k-1,i})_{xy}|\le \frac{1}{2}\left(\sqrt{\frac{l_{y,i}}{l_{x,i}}}|(\rho _{k-1,i})_{xx}|+\sqrt{\frac{l_{x,i}}{l_{y,i}}}|(\rho _{k-1,i})_{yy}|\right)$$ for any $`x`$ and $`y`$. Therefore, $$\underset{(x,y)\in R:x_i\ne y_i}{\sum }|(\rho _{k-1,i})_{xy}|\le \frac{1}{2}\underset{(x,y)\in R:x_i\ne y_i}{\sum }\left(\sqrt{\frac{l_{y,i}}{l_{x,i}}}|(\rho _{k-1,i})_{xx}|+\sqrt{\frac{l_{x,i}}{l_{y,i}}}|(\rho _{k-1,i})_{yy}|\right)=\frac{1}{2}\underset{x\in X\cup Y}{\sum }l_{x,i}\sqrt{\frac{l_{y,i}}{l_{x,i}}}|(\rho _{k-1,i})_{xx}|$$ $$=\frac{1}{2}\underset{x\in X\cup Y}{\sum }\sqrt{l_{x,i}l_{y,i}}|(\rho _{k-1,i})_{xx}|\le \frac{1}{2}\sqrt{l_{max}}\underset{x\in X\cup Y}{\sum }|(\rho _{k-1,i})_{xx}|=\frac{\sqrt{l_{max}}}{2}Tr\rho _{k-1,i}.$$ The same argument shows that the corresponding sum is at most $`\frac{\sqrt{l_{max}}}{2}Tr\rho _{k,i}`$ for the matrix $`\rho _{k,i}`$. 
Therefore, $$S_{k-1}-S_k\le \underset{i}{\sum }S_{k,i}\le \underset{i}{\sum }\frac{\sqrt{l_{max}}}{2}(Tr\rho _{k-1,i}+Tr\rho _{k,i})=\frac{\sqrt{l_{max}}}{2}(Tr\rho _{k-1}+Tr\rho _k)=\sqrt{l_{max}}.$$ This completes the proof. $`\mathrm{}`$ ### 6.2 Application We use Theorem 6 to prove a lower bound for inverting a permutation. Problem: We are given $`x_1,\mathrm{},x_N\in \{1,\mathrm{},N\}`$ such that $`(x_1,\mathrm{},x_N)`$ is a permutation of $`\{1,\mathrm{},N\}`$. We have to find the $`i`$ such that $`x_i=1`$. This problem was used in to show $`NP^A\cap coNP^A\nsubseteq BQP^A`$ for an oracle $`A`$. It is easy to see that it can be solved by Grover’s algorithm (search for the $`i`$ with $`x_i=1`$). This takes $`O(\sqrt{N})`$ queries. However, the $`\mathrm{\Omega }(\sqrt{N})`$ lower bound proof for the search problem from does not work for this problem. A weaker $`\mathrm{\Omega }(\sqrt[3]{N})`$ bound was shown in with a more complicated proof. ###### Theorem 7 Any quantum query algorithm that inverts a permutation with probability $`1-ϵ`$ uses $`\mathrm{\Omega }(\sqrt{N})`$ queries. Proof: Let $`X`$ be the set of all permutations $`x`$ with $`x_i=1`$ for an even $`i`$ and $`Y`$ be the set of all permutations $`y`$ with $`y_i=1`$ for an odd $`i`$. $`(x,y)\in R`$ if $`x=(x_1,\mathrm{},x_N)`$, $`y=(y_1,\mathrm{},y_N)`$ and there are $`i,j`$, $`i\ne j`$ such that $`x_i=y_j=1`$, $`x_j=y_i`$ and all other elements of $`x`$ and $`y`$ are the same. For every $`x\in X`$, there are $`m=N/2`$ permutations $`y`$ with $`(x,y)\in R`$. Similarly, for every $`y\in Y`$, there are $`m^{\prime }=N/2`$ permutations $`x`$ such that $`(x,y)\in R`$. Finally, if we take a pair $`(x,y)\in R`$ and a location $`i`$ such that $`x_i\ne y_i`$, then one of $`x_i`$, $`y_i`$ is 1. We assume that $`x_i=1`$. (The other case is similar.) Then, there are $`N/2`$ permutations $`y^{\prime }`$ such that $`(x,y^{\prime })\in R`$ and $`x_i\ne y_i^{\prime }`$. However, the only $`x^{\prime }`$ such that $`(x^{\prime },y)\in R`$ and $`x_i^{\prime }\ne y_i`$ is $`x^{\prime }=x`$. 
(Any $`x^{\prime }`$ such that $`(x^{\prime },y)\in R`$ and $`x_i^{\prime }\ne y_i`$ must also have $`x_j^{\prime }\ne y_j`$ where $`j`$ is the position for which $`y_j=1`$, and $`x`$ is the only permutation that differs from $`y`$ only in these two places.) Therefore, $`l_{x,i}=N/2`$, $`l_{y,i}=1`$ and $`l_{max}=N/2`$. By Theorem 6, this implies that any quantum algorithm needs $`\mathrm{\Omega }(\sqrt{\frac{N^2}{N}})=\mathrm{\Omega }(\sqrt{N})`$ queries. $`\mathrm{}`$ ## 7 Relation to Grover’s proof Grover presents a proof of the $`\mathrm{\Omega }(\sqrt{N})`$ lower bound on the search problem based on considering the sum of distances $$\mathrm{\Delta }(t)=\underset{i\in \{1,\mathrm{},N\}}{\sum }\|\varphi _i^t-\varphi _0^t\|^2$$ where $`\varphi _i^t`$ is the state of the algorithm after $`t`$ queries on the input $`x_1=\mathrm{}=x_{i-1}=0`$, $`x_i=1`$, $`x_{i+1}=\mathrm{}=x_N=0`$ and $`\varphi _0^t`$ is the state of the algorithm after $`t`$ queries on the input $`x_1=\mathrm{}=x_N=0`$. Grover shows that, after $`t`$ queries, $`\mathrm{\Delta }(t)\le 4t^2`$. If the algorithm outputs the correct answer with probability 1, the final vectors $`\varphi _1^t`$, $`\mathrm{}`$, $`\varphi _N^t`$ have to be orthogonal, implying that $`\mathrm{\Delta }(t)\ge 2N-2\sqrt{N}`$ (cf. ). This implies that the number of queries must be $`\mathrm{\Omega }(\sqrt{N})`$. A similar idea (bounding a certain sum of distances) has also been used by Shi to prove lower bounds on the number of quantum queries in terms of average sensitivity. These “distance-based” ideas can be generalized to obtain another proof of our Theorems 2 and 6. Namely, for Theorem 2, one can take $$\mathrm{\Delta }(t)=\underset{(x,y)\in R}{\sum }\|\varphi _x^t-\varphi _y^t\|^2$$ where $`\varphi _x^t`$, $`\varphi _y^t`$ are the states of the algorithm after $`t`$ steps on the inputs $`x`$ and $`y`$. Then, $$\|\varphi _x^t-\varphi _y^t\|^2\ge 1-|\langle \varphi _x^t|\varphi _y^t\rangle |^2.$$ Let $`\rho _t`$ be the density matrix of $`_I`$ after $`t`$ steps. 
By writing out the expressions for $`\langle \varphi _x^t|\varphi _y^t\rangle `$ and $`(\rho _t)_{xy}`$, we can see that $$(\rho _t)_{xy}=\frac{1}{\sqrt{4|X||Y|}}\langle \varphi _x^t|\varphi _y^t\rangle .$$ This shows that the two quantities (the sum of entries in the density matrix and the sum of distances) are quite similar. Indeed, we can give proofs for Theorems 2 and 6 in terms of distances and their sums $`\mathrm{\Delta }(t)`$. (Namely, $`\mathrm{\Delta }(0)=0`$ before the first query, $`\mathrm{\Delta }(T)`$ should be large if the algorithm solves the problem with $`T`$ queries, and we can bound the difference $`\mathrm{\Delta }(t)-\mathrm{\Delta }(t-1)`$. This gives the same bounds as bounding the entries of density matrices.) Thus, Theorems 2 and 6 have two proofs that are quite similar algebraically but come from two completely different sources: running a quantum algorithm with a superposition of inputs (our “quantum adversary”) and looking at it from a geometric viewpoint (sum of distances). The “quantum adversary” approach may be more general because one could bound other quantities (besides the sum of entries in the density matrix) which have no simple geometric interpretation. ## 8 Conclusion and open problems We introduced a new method for proving lower bounds on quantum algorithms and used it to prove tight (up to a multiplicative or logarithmic factor) lower bounds on Grover’s search and 3 other related problems. Two of these bounds (Grover’s search and distinguishing between an input with $`1/2`$ of the values equal to 1 and one with $`1/2+ϵ`$ of the values equal to 1) were known before. For the two other problems (inverting a permutation and AND of ORs), only weaker bounds were known. One advantage of our method is that it allows us to prove all 4 bounds in a similar way. (Previous methods were quite different for different problems.) Some open problems: 1. Collision problem. We are given a function $`f:\{1,\mathrm{},n\}\to \{1,\mathrm{},n/2\}`$ and have to find $`i`$, $`j`$ such that $`f(i)=f(j)`$. 
Classically, this can be done by querying $`f(x)`$ for $`O(\sqrt{n})`$ random values of $`x`$ and it is easy to see that this is optimal. There is a quantum algorithm that solves this problem with $`O(\sqrt[3]{n})`$ queries. However, there is no quantum lower bound at all for this problem (except the trivial bound of $`\mathrm{\Omega }(1)`$). The collision problem is an abstraction for collision-resistant hash functions. If it can be solved with $`O(\mathrm{log}n)`$ queries, then no hash function is collision-resistant against quantum algorithms. The exact argument that we gave in this paper (with bounding a subset of the entries in the density matrix) does not carry over to the collision problem. However, it may be possible to use our idea of running the algorithm with a superposition of oracles together with some other way of measuring the entanglement between the algorithm and the oracle. 2. Simpler/better lower bound for binary search. It may be possible to simplify other lower bounds proven previously by different methods. In some cases, it is quite easy to reprove the result by our method (like Theorem 5) but there are two cases in which we could not do that. The first is the bound of on the number of queries needed to achieve a very small probability of error in the database search problem. The second is the lower bound on ordered search. It seems unlikely that our technique can be useful in the first case but there is a chance that some variant of our idea may work for ordered search (achieving both a simpler proof and a better constant under the big-$`\mathrm{\Omega }`$). 3. Communication complexity of disjointness. Quantum communication complexity is often related to query complexity. Can one use our method (either the “quantum adversary” or the distance-based formulation) to prove lower bounds on quantum communication complexity? A particularly interesting open problem in quantum communication complexity is set disjointness. 
The classical (both deterministic and probabilistic) communication complexity of set disjointness is $`\mathrm{\Omega }(n)`$. There is a quantum protocol (based on Grover’s search algorithm) that computes set disjointness with $`O(\sqrt{n}\mathrm{log}n)`$ communication but the best lower bound is only $`\mathrm{\Omega }(\mathrm{log}n)`$. Acknowledgements. I would like to thank Dorit Aharonov, Daniel Gottesman, Ashwin Nayak, Umesh Vazirani and Ronald de Wolf for useful comments.
# Magnetization Switching in Nanowires: Monte Carlo Study with Fast Fourier Transformation for Dipolar Fields ## I Introduction Small magnetic particles in the nanometer regime are interesting for fundamental research as well as for applications in magnetic devices. With decreasing size, thermal activation becomes more and more important for the stability of nanoparticles and is hence investigated experimentally as well as theoretically. Wernsdorfer et al. studied magnetization reversal in nanowires as well as in nanoparticles experimentally. They found that for very small particles the magnetic moments rotate coherently as in the Stoner-Wohlfarth model while for larger system sizes more complicated nucleation processes occur. Numerical studies of thermal activation in magnetic systems are based either on Langevin dynamics or on Monte Carlo methods. With the second method, mainly nucleation processes in Ising systems have been studied , but vector spin models have also been used to investigate the switching behavior in systems with continuous degrees of freedom . In these numerical studies the dipole-dipole interaction is often neglected or at least approximated due to its large computational effort. In this work, we will use Monte Carlo methods in order to investigate the thermally activated magnetization behavior of a system of magnetic moments including the dipole-dipole interaction. Since the calculation of this long-range interaction is extremely time consuming when performed straightforwardly, it is necessary to develop efficient numerical methods for this task. We will demonstrate that the implementation of fast Fourier transformation (FFT) methods — which are already established in the context of micromagnetic simulations based on the Landau-Lifshitz equation of motion — is possible also in a Monte Carlo algorithm where a single-spin-flip method is used. 
Our approach is applied to the important problem of thermally activated magnetization reversal in a model for nanowires. Depending on the system geometry, material parameters, and the magnetic field, different reversal mechanisms occur. Very small wires reverse by a coherent rotation mode while for sufficiently long wires the reversal is dominated by nucleation . With increasing system width an additional crossover sets in to a reversal by a curling-like mode. Our numerical results based on Monte Carlo simulations are supplemented by Langevin dynamics simulations for comparison. ## II Model for a nanowire We consider a classical Heisenberg Hamiltonian for localized spins on a lattice, $`\mathcal{H}`$ $`=`$ $`-J{\displaystyle \underset{\langle ij\rangle }{\sum }}𝐒_i𝐒_j-\mu _s𝐁{\displaystyle \underset{i}{\sum }}𝐒_i`$ (1) $``$ $``$ $`-w{\displaystyle \underset{i<j}{\sum }}{\displaystyle \frac{3(𝐒_i𝐞_{ij})(𝐞_{ij}𝐒_j)-𝐒_i𝐒_j}{r_{ij}^3}},`$ (2) where the $`𝐒_i=𝝁_i/\mu _s`$ are three-dimensional magnetic moments of unit length. The first sum represents the ferromagnetic exchange of the moments where $`J`$ is the coupling constant, the second sum is the coupling of the magnetic moments to an external magnetic field $`𝐁`$, and the last sum represents the dipolar interaction. $`w=\mu _0\mu _s^2/(4\pi a^3)`$ describes the strength of the dipole-dipole interaction with $`\mu _0=4\pi \times 10^{-7}\mathrm{V}s(\mathrm{A}m)^{-1}`$. We consider a cubic lattice with lattice constant $`a`$. The $`𝐞_{ij}`$ are unit vectors pointing from lattice site $`i`$ to $`j`$. We model a finite, cylindrical system with diameter $`D`$ and length $`L`$ (in numbers of lattice sites). All our simulations start with a configuration where all spins point up (in the $`z`$ direction). The external magnetic field is antiparallel to the magnetic moments, hence favoring a reversal process. As in earlier publications we measure the characteristic time of the reversal process, i. e. 
the time when the $`z`$ component of the magnetization of the system changes its sign, averaged over either 100 (Langevin dynamics) or 1000 (Monte Carlo) simulation runs. ## III Numerical Methods In the following we will mainly use a Monte Carlo method with a time step quantification as derived in and later further discussed in . The method is based on a single-spin-flip algorithm. The trial step of the algorithm is a random movement of the magnetic moment within a certain maximum angle. In order to achieve this efficiently we construct a random vector with constant probability distribution within a sphere of radius $`R`$ and add this vector to the initial moment. Subsequently, the resulting vector is normalized. The trial step width $`R`$ is connected to a time scale $`\mathrm{\Delta }t`$ according to the relation $$R^2=\frac{20k_\mathrm{B}T\alpha \gamma }{(1+\alpha ^2)\mu _s}\mathrm{\Delta }t.$$ Using this relation, one Monte Carlo step per spin (MCS) corresponds to the time interval $`\mathrm{\Delta }t`$ of the corresponding Landau-Lifshitz-Gilbert equation in the high damping limit . Throughout the paper we set $`\mathrm{\Delta }t=\frac{1}{40}\mu _s/\gamma `$ and adjust $`R`$, except for the simulations for Fig. 1 where we have to fix $`R`$ and calculate $`\mathrm{\Delta }t`$. In order to compute the dipole-dipole interaction efficiently we use FFT methods for the calculation of the long-range interactions . This method uses the discrete convolution theorem for the calculation of the dipolar field $$𝐇_i=\underset{j\ne i}{\sum }\frac{3𝐞_{ij}(𝐞_{ij}𝐒_j)-𝐒_j}{r_{ij}^3}.$$ This dipolar field can be rewritten in the form $$H_i^\eta =\underset{\theta ,j}{\sum }W_{ij}^{\eta \theta }S_j^\theta $$ where the Greek symbols $`\eta ,\theta `$ denote the Cartesian components $`x,y,z`$ and the Latin symbols $`i,j`$ denote the lattice sites. The $`(W^{\eta \theta })_{ij}`$ are interaction matrices which only depend on the lattice. 
Since the lattice is translationally invariant one can apply the discrete convolution theorem in order to calculate the Fourier transform of the dipolar field as $$H_k^\eta =\underset{\theta }{\sum }W_k^{\eta \theta }S_k^\theta .$$ Thus one first computes the interaction matrices $`(W^{\eta \theta })_{ij}`$ and their Fourier transforms $`(W^{\eta \theta })_k`$. This task has to be performed only once before the simulation starts since the interaction matrices depend only on the lattice structure and, hence, remain constant during the simulation. For each given spin configuration the dipolar field can then be calculated by first performing the Fourier transform of the $`𝐒_i`$, second calculating the product above following Eq. 6, and third transforming the fields $`𝐇_k`$ back into real space, resulting in the dipolar fields $`𝐇_i`$. Using FFT techniques the algorithm needs only of the order of $`N\mathrm{log}N`$ calculations instead of the $`N^2`$ calculations which one would need for the straightforward calculation of the double sum in Eq. 1. In order to apply the convolution theorem the system has to be periodic and the range of the interaction has to be identical to the system size. Since we are interested in finite systems with open boundary conditions we use the zero padding method . This method is based on a duplication of the system size where the original system is wrapped up with zero spins. The FFT method is well established as containing no further approximation in the context of micromagnetism . There is, however, an additional problem concerning the implementation of this method in a Monte Carlo algorithm since here the spins are not updated in parallel. In principle, in a Monte Carlo simulation the dipolar field has to be calculated after each single spin flip since the dipolar field at any site of the lattice depends on all other magnetic moments. Thus, if one magnetic moment moves, the value of the whole dipolar field changes. 
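The zero-padding FFT evaluation of the dipolar field can be sketched in one dimension. The code below is our illustration, not the authors' implementation; for a chain of moments along $`z`$ the interaction matrices are diagonal with $`W^{xx}=W^{yy}=-1/r^3`$ and $`W^{zz}=2/r^3`$, and the padded circular convolution reproduces the direct $`O(N^2)`$ double sum:

```python
import numpy as np

def dipolar_direct(S):
    # S: (N, 3) moments on a chain along z, so e_ij = +-z-hat in Eq. 4
    N = len(S)
    H = np.zeros_like(S)
    for i in range(N):
        for j in range(N):
            if i == j:
                continue
            r3 = abs(i - j) ** 3
            H[i, 0] += -S[j, 0] / r3      # W^xx = -1/r^3
            H[i, 1] += -S[j, 1] / r3      # W^yy = -1/r^3
            H[i, 2] += 2 * S[j, 2] / r3   # W^zz =  2/r^3
    return H

def dipolar_fft(S):
    # zero padding: double the system, wrap the kernel w(d) = 1/|d|^3, w(0) = 0
    N = len(S)
    M = 2 * N
    ker = np.zeros(M)
    for d in range(1, N):
        ker[d] = ker[M - d] = 1.0 / d ** 3
    Kf = np.fft.rfft(ker)                 # Fourier transform of W, computed once
    H = np.zeros_like(S)
    coef = (-1.0, -1.0, 2.0)
    for c in range(3):
        pad = np.zeros(M)
        pad[:N] = S[:, c]                 # original system wrapped with zero spins
        conv = np.fft.irfft(Kf * np.fft.rfft(pad), n=M)[:N]
        H[:, c] = coef[c] * conv
    return H
```

The FFT version costs $`O(N\mathrm{log}N)`$ per field evaluation and agrees with the direct sum to machine precision.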
Hence, in a usual Monte Carlo algorithm one would store the dipolar fields in an array which is updated after every accepted spin flip . Here, only the changes of the dipolar field due to the single spin update are calculated ($`N`$ operations per update), so that the computation time scales with $`N^2`$ per MCS. Nevertheless, in the following we will show that alternatively one can also recalculate the whole dipolar field at once after a certain number of MCS, taking advantage of the FFT method. This can be a good approximation as long as the changes of the system configuration during this update interval are small enough. In order to investigate the validity of this idea systematically, we consider thermally activated magnetization reversal in a spin chain, in other words, in a cylinder with diameter $`D=1`$. First, we calculate the “correct” value of the reduced characteristic time following from a Monte Carlo simulation where the dipolar field is calculated after each accepted spin flip. This value is shown as solid line in Fig. 1. The data points represent the dependence of the reduced characteristic time on the length of the update interval $`\mathrm{\Delta }t_u`$ after which the dipolar field is recalculated. We used two different trial step widths $`R`$ for comparison. As demonstrated, our method converges even for update intervals clearly longer than 1 MCS, depending on the trial step width $`R`$. The dependence on $`R`$ can be understood as follows. The vertical lines represent the minimal number of MCS which one spin needs to reverse. This can be estimated from the mean step width of a magnetic moment, assuming that each trial step is accepted, as in a strong external field. The mean step width within our Monte Carlo procedure is $`R/\sqrt{5}`$ and thus the minimal number of MCS for a spin reversal is $`\pi \sqrt{5}/R`$.
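The lazy-update scheme described above can be sketched as the following Metropolis loop, in which the full dipolar field is recomputed (e.g. via FFT) only every `n_update` MCS and treated as frozen in between. This is a schematic sketch with hypothetical callback arguments, not the authors' code:

```python
import numpy as np

def metropolis_lazy_dipoles(S, field_fn, local_energy, R, T, n_update, n_mcs, rng, kB=1.0):
    """Single-spin-flip Metropolis sweep over N spins per MCS.
    field_fn(S)                -> full dipolar field array (recomputed lazily)
    local_energy(S, i, s, H)   -> energy contribution of spin value s at site i,
                                  given the (frozen) dipolar field H."""
    N = len(S)
    H_dip = None
    for mcs in range(n_mcs):
        if mcs % n_update == 0:
            H_dip = field_fn(S)            # periodic full recomputation
        for _ in range(N):
            i = rng.integers(N)
            v = rng.uniform(-R, R, size=3) # trial vector from a sphere of radius R
            while np.dot(v, v) > R * R:
                v = rng.uniform(-R, R, size=3)
            s_new = S[i] + v
            s_new /= np.linalg.norm(s_new)
            dE = local_energy(S, i, s_new, H_dip) - local_energy(S, i, S[i], H_dip)
            if dE <= 0 or rng.random() < np.exp(-dE / (kB * T)):
                S[i] = s_new
    return S
```

As argued in the text, `n_update` must stay below the minimal number of MCS for a spin reversal, $`\pi \sqrt{5}/R`$, for the frozen-field approximation to hold.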
We conclude from these considerations that it is a good approximation to update the whole dipolar field after a certain interval of the Monte Carlo procedure which has to be smaller than the minimal number of MCS needed for a spin reversal. Note that it follows that this method should never work for an Ising system. Now we will turn to an investigation of the efficiency and capabilities of the method. A comparison of the computation time needed when the dipolar field is either calculated straightforwardly by updating the field after each accepted spin flip or with the FFT method after each MCS is shown in Fig. 2. Here, the computation time is the CPU time needed on an IBM RS6000/590 workstation for 100 MCS. Using the FFT method the computation time is proportional to $`L\mathrm{ln}L`$ while for the usual method the computation time scales as $`L^2`$. For the chain of length $`2^{18}=262144`$ the gain of efficiency of the FFT method is roughly a factor 5000 (note that the FFT algorithm is most efficient for system sizes which are products of small prime numbers), which is a rather impressive result. As a test of the FFT method as well as the time quantification of the Monte Carlo algorithm we also use a second numerical method for comparison, namely Langevin dynamics simulations. Here we solve numerically the Landau-Lifshitz-Gilbert equation of motion with Langevin dynamics using the Heun method . This equation has the form $$\frac{(1+\alpha ^2)\mu _s}{\gamma }\frac{\partial 𝐒_i}{\partial t}=-𝐒_i\times \left(𝐇_i(t)+\alpha \left(𝐒_i\times 𝐇_i(t)\right)\right)$$ (7) with the internal field $`𝐇_i(t)=𝜻_i(t)-\frac{\partial \mathcal{H}}{\partial 𝐒_i}`$, the gyromagnetic ratio $`\gamma =1.76\times 10^{11}(\mathrm{T}s)^{-1}`$ and the dimensionless damping constant $`\alpha `$.
The noise $`𝜻_i`$ represents thermal fluctuations, with $`\langle 𝜻_i(t)\rangle =0`$ and $`\langle \zeta _i^\eta (t)\zeta _j^\theta (t^{\prime })\rangle =2\delta _{ij}\delta _{\eta \theta }\delta (t-t^{\prime })\alpha k_\mathrm{B}T\mu _s/\gamma `$ where $`i,j`$ denote once again lattice sites and $`\eta ,\theta `$ Cartesian components. The implementation of the FFT method is straightforward for a Langevin dynamics simulation since here a parallel update of the spin configuration is appropriate. In order to compare our different methods, in Fig. 3 the $`\alpha `$ dependence of the characteristic time for the reversal of a spin chain is shown. Monte Carlo data are compared with those from Langevin dynamics simulations. Interestingly, for the whole range of $`\alpha `$ values the Monte Carlo and Langevin dynamics data coincide. This is in contrast to earlier tests where this agreement was achieved only in the high damping limit. Seemingly, there exist certain systems which show only a simple $`\alpha `$ dependence of the form $`\tau \propto (1+\alpha ^2)/\alpha `$. This $`\alpha `$ dependence usually describes the high damping limit which is reproduced by the time-quantified Monte Carlo algorithm. In our model this $`\alpha `$ dependence is obviously valid in the whole range of damping constants which we considered, leading to the remarkable agreement. In general, this is not necessarily the case since for other models different low and high damping limits appear (see e. g. ), depending on the system's properties. In the following we set $`\alpha =1`$. ## IV Nucleation and Curling As an application of the methods described in the previous section we start with a simple chain of classical magnetic moments of length $`L=128`$ as an extreme case of a cylindrical system with $`D=1`$. Thermally activated magnetization reversal in a spin chain was treated analytically by Braun who proposed a soliton-antisoliton nucleation process as reversal mechanism.
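One Heun integration step for Eq. 7 with the noise statistics above can be sketched as follows. The discrete-time noise variance $`2\alpha k_\mathrm{B}T\mu _s/(\gamma \mathrm{\Delta }t)`$ per component follows from the $`\delta `$-correlation; function names and the final renormalization are our choices, not the authors':

```python
import numpy as np

def llg_rhs(S, H, alpha, gamma, mu_s):
    # Eq. (7) solved for the time derivative:
    # dS/dt = -gamma/((1+alpha^2) mu_s) [ S x H + alpha S x (S x H) ]
    pre = -gamma / ((1.0 + alpha**2) * mu_s)
    SxH = np.cross(S, H)
    return pre * (SxH + alpha * np.cross(S, SxH))

def heun_step(S, H_det, dt, T, alpha, gamma, mu_s, rng, kB=1.0):
    # One realization of the thermal field per step, shared by predictor and corrector.
    sigma = np.sqrt(2.0 * alpha * kB * T * mu_s / (gamma * dt))
    H = H_det + sigma * rng.normal(size=S.shape)
    f0 = llg_rhs(S, H, alpha, gamma, mu_s)
    Sp = S + dt * f0                          # Euler predictor
    f1 = llg_rhs(Sp, H, alpha, gamma, mu_s)   # corrector slope, same noise
    Snew = S + 0.5 * dt * (f0 + f1)
    return Snew / np.linalg.norm(Snew, axis=-1, keepdims=True)
```

At $`T=0`$ the damping term relaxes a tilted spin toward the field direction while the renormalization keeps $`|𝐒_i|=1`$.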
For a finite system with open boundary conditions the nucleation will originate at the sample ends leading to an energy barrier $$\mathrm{\Delta }E=2\sqrt{2d}(\mathrm{tanh}r-hr)$$ (8) with $`h=\mu _sB_z/(2d)`$ and $`r=\mathrm{arcosh}\sqrt{1/h}`$. Here, the dipole-dipole interaction is approximated by a uniaxial anisotropy of the form $`-d\sum _i(S_i^z)^2`$ with $`w\approx d/\pi `$ . This estimate follows from a comparison of the stray field energy of an elongated ellipsoid with the energy of a chain with uniaxial anisotropy. In the occurrence of soliton-antisoliton nucleation was confirmed and Eq. 8 was verified, as well as the asymptotic form of the corresponding characteristic time $$\tau =\tau _0\mathrm{exp}\frac{\mathrm{\Delta }E}{k_\mathrm{B}T},$$ (9) where $`\tau _0`$ is a prefactor depending on the system parameters, the applied magnetic field, and the damping constant. This prefactor was also derived asymptotically for a periodic chain for the case of nucleation as well as for coherent rotation . The latter is energetically favorable for sufficiently small system sizes. However, for a finite chain the prefactor is still unknown. Fig. 4 shows the temperature dependence of the reduced characteristic time for our spin chain where the dipole-dipole interaction is not approximated by a uniaxial anisotropy as discussed before but taken into account using FFT methods. Data from Monte Carlo simulations are shown as well as from Langevin dynamics. Both methods yield identical results. The slope of the curve in the low temperature limit, which represents the energy barrier, is 12% lower than the predicted one. This deviation is probably due to the local approximation of the dipole-dipole interaction underlying Eq. 8. We will now turn to the question of whether such a nucleation mode will also be found in an extended system. Therefore we consider cylindrical systems of length 128 with varying diameter $`D`$. Fig. 5 shows snapshots of two simulated systems at the characteristic time.
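Eqs. 8 and 9 can be evaluated directly. In the weak-field limit $`h\to 0`$ the barrier approaches $`2\sqrt{2d}`$, and it vanishes at $`h=1`$ (a small Python sketch with $`k_\mathrm{B}=1`$; names are ours):

```python
import numpy as np

def braun_barrier(d, B_z, mu_s):
    # Eq. (8): Delta E = 2 sqrt(2 d) (tanh r - h r),
    # with h = mu_s B_z / (2 d) and r = arcosh(sqrt(1/h)).
    h = mu_s * B_z / (2.0 * d)
    r = np.arccosh(np.sqrt(1.0 / h))
    return 2.0 * np.sqrt(2.0 * d) * (np.tanh(r) - h * r)

def arrhenius_time(tau0, dE, T, kB=1.0):
    # Eq. (9): tau = tau0 exp(Delta E / (kB T))
    return tau0 * np.exp(dE / (kB * T))
```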
For $`D=8`$ (left side) two nuclei originate at the sample ends leading to two domain walls which pass through the system. This is comparable to the nucleation process in a spin chain since all spins in a plane of the cylinder are more or less parallel (also shown in the figure) and can be described as one effective magnetic moment, leading to the above mentioned model of Braun . The behavior changes for larger diameters, above the so-called exchange length $`l_{\mathrm{ex}}`$. For a reversal mode with inhomogeneous magnetization within the planes a domain wall has to be created. The energy cost due to the existence of a $`180^{\circ }`$ domain wall on the length scale $`l`$ in a spin chain is $$\mathrm{\Delta }E=J\sum _{i,j}(1-\mathrm{cos}(\theta _i-\theta _j))\approx \frac{J\pi ^2}{2l}$$ (10) under the assumption that the change of the angle $`\theta `$ between next nearest magnetic moments is constant (in a continuum description one can also show that this is the minimum energy configuration by solving the Euler-Lagrange equations). The dipolar field energy of a chain of length $`l`$ is at most $`3wl\zeta (3)`$ where we used Riemann’s Zeta function $$\zeta (3)=\sum _{i=1}^{\infty }\frac{1}{i^3}$$ (11) (see also for a corresponding calculation in two dimensions). A comparison of Eq. 10 and Eq. 11 yields the exchange length $$l_{\mathrm{ex}}=\pi \sqrt{\frac{J}{6w\zeta (3)}}.$$ (12) Usually the exchange length is calculated in the continuum limit where $`3\zeta (3)\approx 3.6`$ is replaced by $`\pi `$. However, we prefer the slightly deviating expression above since its derivation is closer to the spin model which we discuss here. The exchange length for our material parameters is $`l_{\mathrm{ex}}\approx 7`$, so that for systems with smaller diameter it cannot be energetically favorable to build up inhomogeneous magnetization distributions within the planes, while for wider systems another reversal process can occur, namely curling .
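Eqs. 11 and 12 in code. The ratio $`J/w=36`$ used in the usage check below is only an assumed value chosen to reproduce $`l_{\mathrm{ex}}\approx 7`$, since the material parameters are not restated here:

```python
import numpy as np

def zeta3(terms=100000):
    # Riemann zeta(3) by direct summation of Eq. (11); the tail beyond
    # N terms is of order 1/(2 N^2), i.e. ~5e-11 for the default.
    i = np.arange(1, terms + 1, dtype=float)
    return np.sum(1.0 / i**3)

def exchange_length(J, w):
    # Eq. (12): l_ex = pi sqrt(J / (6 w zeta(3)))
    return np.pi * np.sqrt(J / (6.0 * w * zeta3()))
```

The continuum-limit variant simply replaces $`3\zeta (3)\approx 3.6`$ by $`\pi `$ in the denominator.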
A snapshot during the reversal process in a wider particle ($`D=16`$) is also shown in Fig. 5. The middle part of the cylinder is in the curling mode, consisting of a magnetization vortex with a central axis which is still magnetized up. The reversal mode shown in this figure is still mixed with an additional nucleation process which started at the sample ends. Note that in the clipping plane on the right hand side one can find the exchange length as the typical length scale of the domain wall in the vortex. ## V Conclusions We considered thermal activation in classical spin systems using Monte Carlo methods with a time quantified algorithm as well as Langevin dynamics simulations. We combined both techniques with FFT methods for the calculation of the dipolar field. Whereas this method is established in the context of micromagnetic equations of motion, it is less obviously applicable to Monte Carlo methods with single spin flip dynamics. We show that it can be a good approximation for vector spins to update the dipolar fields during a Monte Carlo procedure only after certain intervals, the length of which depends on the trial step width of the algorithm. Since Monte Carlo methods need less computational effort than Langevin dynamics simulations, their combination with FFT methods strongly enhances the capabilities for the numerical investigation of thermal activation. As an application we consider a model for nanowires. We compare numerical results for the characteristic time of the magnetization reversal in a spin chain with the theoretical formula for the energy barrier of soliton-antisoliton nucleation. We find small deviations, probably due to the fact that Braun in his model approximated the dipole-dipole interaction by a uniaxial anisotropy . Varying the diameter in a cylindrical system we observe a crossover from nucleation to a curling-like mode. ## ACKNOWLEDGMENTS We thank H. B. Braun and K. D. Usadel for helpful discussions.
The work was supported by the Deutsche Forschungsgemeinschaft, and by the EU within the framework of the COST action P3 working group 4.
# The Elusive Active Nucleus of NGC 4945. Based on observations made with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by AURA, Inc., under NASA contract NAS 5–26555. Also based on observations collected at the European Southern Observatory, La Silla, Chile. ## 1 Introduction A key problem in studies of objects emitting most of their energy in the FIR/submm is to establish the relative importance of highly obscured Active Galactic Nuclei (AGN) and starburst activity. In particular, it is important to know if it is still possible to hide an AGN, contributing significantly to the bolometric emission, when optical to mid-IR spectroscopy and imaging reveal only a starburst component. Several pieces of evidence suggest that most cosmic AGN activity is obscured. Most, and possibly all, cores of large galaxies host a supermassive black hole ($`10^6`$–$`10^9\mathrm{M}_{\odot }`$; e.g. Richstone et al. richstone (1998)). To complete the formation process in a Hubble time, accretion must proceed at high rates, producing quasar luminosities ($`L\sim 10^{12}\mathrm{L}_{\odot }`$). However the observed black hole density is an order of magnitude greater than that expected from the observed quasar light, assuming an accretion efficiency of 10%, suggesting that most of the accretion history is obscured (e.g. Fabian & Iwasawa fabian99 (1999), and references therein). It is estimated either that 85% of all AGNs are obscured (type 2) or that 85% of the accretion history of an object is hidden from view. In addition, the hard X-ray background ($`>1\mathrm{keV}`$) requires a large population of obscured AGNs at higher redshifts ($`z\gtrsim 1`$) since the observed spectral energy distribution cannot be explained with the continua of Quasars, i.e. unobscured (type 1) AGNs (Comastri et al. comastri (1995), Gilli et al. gilli99 (1999)). Despite the above evidence, detections of obscured AGNs at cosmological distances are still sparse (e.g. Akiyama et al.
akiyama (1999)). Ultra Luminous Infrared Galaxies (ULIRGs; see Sanders & Mirabel sanders96 (1996) for a review) and the sources detected in recent far-infrared and submm surveys performed with ISO and SCUBA (e.g. Rowan-Robinson et al. rowanrob (1997), Blain et al. blain (1999) and references therein) are candidates to host the missing population of type 2 AGNs. However, mid-IR ISO spectroscopy has recently shown that ULIRGs are mostly powered by starbursts and that no trace of AGNs is found in the majority of cases (Genzel et al. genzel98 (1998); Lutz et al. lutz98 (1998)). Yet, the emission of a hidden AGN could be heavily absorbed even in the mid-IR. Indeed, the obscuration of the AGN could be related to the starburst phenomenon, as observed for Seyfert 2s (Maiolino et al. maiolino95 (1995)). Fabian et al. (fabian98 (1998)) proposed that the energy input from supernovae and stellar winds prevents interstellar clouds from collapsing into a thin disk, thus maintaining them in orbits that intercept the majority of the lines of sight from an active nucleus. In this paper, we investigate the existence of completely obscured AGNs and the Starburst-AGN connection through observations of NGC 4945, one of the closest galaxies where an AGN and a starburst coexist. NGC 4945 is an edge-on ($`i\approx 80^{\circ }`$), nearby ($`D=3.7\mathrm{Mpc}`$) SB spiral galaxy hosting a powerful nuclear starburst (Koornneef koorn (1993); Moorwood & Oliva 1994a ). It is a member of the Centaurus group and, like the more famous Centaurus A (NGC 5128), its optical image is marked by dust extinction in the nuclear regions. The only evidence for a hidden AGN comes from the hard X-rays where NGC 4945 is characterized by a Compton-thick spectrum (with an absorbing column density of $`N_\mathrm{H}=5\times 10^{24}\mathrm{cm}^{-2}`$, Iwasawa et al. iwasawa93 (1993)) and one of the brightest 100$`\mathrm{keV}`$ emissions among extragalactic sources (Done et al. done96 (1996)).
Recently, BeppoSAX clearly detected variability in the 13-200$`\mathrm{keV}`$ band (Guainazzi et al., guainazzi (2000)). Its total infrared luminosity derived from IRAS data is $`2.4\times 10^{10}\mathrm{L}_{\odot }`$ (Rice et al. rice88 (1988)), $`\sim 75\%`$ of which arises from a region of 12″$`\times `$9″ centered on the nucleus (Brock et al. brock88 (1988)). Although its star formation and supernova rates are moderate, $`0.4\mathrm{M}_{\odot }\mathrm{yr}^{-1}`$ and $`0.05\mathrm{yr}^{-1}`$ (Moorwood & Oliva 1994a ), the starburst activity is concentrated in the central $`\sim 100\mathrm{pc}`$ and has spectacular consequences on the circumnuclear region, which is characterized by a conical cavity evacuated by a supernova-driven wind (Moorwood et al. 1996a ). The radio emission is characterized by a compact non-thermal core with a luminosity of $`8\times 10^{38}\mathrm{erg}\mathrm{s}^{-1}`$ (Elmouttie et al. elmouttie (1997)). It is one of the first H<sub>2</sub>O and OH megamaser sources detected (dos Santos & Lepine dossantos (1979); Baan baan85 (1985)) and the H<sub>2</sub>O maser was mapped by Greenhill et al. (greenhill (1997)) who found the emission linearly distributed along the position angle of the galactic disk and with a velocity pattern suggesting the presence of a $`10^6\mathrm{M}_{\odot }`$ black hole. Mauersberger et al. (mauersberger (1996)) mapped the $`J=3-2`$ line of <sup>12</sup>CO which is mostly concentrated within the nuclear $`\sim 200\mathrm{pc}`$. We present new line and continuum images obtained with the Near Infrared Camera and Multi Object Spectrograph (NICMOS) on-board the Hubble Space Telescope (HST), aimed at detecting AGN activity in the near-infrared. These observations are complemented by recent ground based near- and mid-IR observations obtained at the European Southern Observatory. Section 2 describes the observations and data reduction techniques. Results are presented in Section 3 and discussed in Section 4. Finally, conclusions will be drawn in Sec.
5. Throughout the paper we assume a distance of 3.7$`\mathrm{Mpc}`$ (Mauersberger et al. mauersberger (1996)), whence 1″ corresponds to $`\sim 18\mathrm{pc}`$. ## 2 Observations and Data Reduction The nuclear region of NGC 4945 was observed on March 17<sup>th</sup> and 25<sup>th</sup>, 1998, with NICMOS Camera 2 (MacKenty et al. mackenty (1997)) using narrow and broad band filters for imaging in lines and continuum. HST observations are logged in Tab. 1. All observations were carried out with a MULTIACCUM sequence (MacKenty et al. mackenty (1997)) and the detector was read out non-destructively several times during each integration to remove cosmic ray hits and correct saturated pixels. For each filter we obtained several exposures with the object shifted by 1″ on the detector to remove bad pixels. The observations in the F222M and F237M filters were also repeated on a blank sky area several arcminutes away from the source to remove thermal background emission. For narrow band images, we obtained subsequent exposures in line and near continuum filters with the object at several positions on the detector. The data were re-calibrated using the pipeline software CALNICA v3.2 (Bushouse et al. bushouse (1997)). A small (few percent) drift in the NICMOS bias level caused an error in the flat-fielding procedure which resulted in spurious artifacts in the final images (the so-called “pedestal problem” – Skinner, Bergeron & Daou skinner (1998)). Given the strong signal from the galaxy, such artifacts are only visible in ratio or difference images. This effect was effectively removed using the pedestal estimation and quadrant equalization software developed by Roeland P. van der Marel which subtracts a constant bias level times the flat-field, minimizing the standard deviation in the images. For each filter, the corrected images were then aligned via cross-correlation and combined.
Flux calibration of the images was achieved by multiplying the count rates (adu$`\mathrm{s}`$<sup>-1</sup>) by the PHOTFLAM ($`\mathrm{erg}`$$`\mathrm{cm}`$<sup>-2</sup> adu<sup>-1</sup>) conversion factors (MacKenty et al. mackenty (1997)). The narrow band images obtained at wavelengths adjacent to the $`\mathrm{Pa}\alpha `$ and $`\mathrm{H}_2`$ lines were used for continuum subtraction. The procedure was verified by rescaling the continuum by up to $`\pm 10\%`$ before subtraction and establishing that this did not significantly affect the observed emission-line structure. WFPC2 observations in the F606W (R band) filter were retrieved from the Hubble Data Archive and re-calibrated with the standard pipeline software (Biretta et al. biretta (1996)). Ground-based observations were obtained at the European Southern Observatory at La Silla (Chile) in the continuum L (3.8$`\mu \mathrm{m}`$) and N (10$`\mu \mathrm{m}`$) bands, and in the $`[\mathrm{Fe}\text{ii}]`$ 1.64$`\mu \mathrm{m}`$ emission line, and are logged in Tab. 2. The L image was obtained with IRAC1 (Moorwood et al. 1994b ) at the ESO/MPI 2.2 m telescope on May 30, 1996 using an SBRC 58$`\times `$62 pixel InSb array with a pixel size of 0″.45. Double beam-switching was used, chopping the telescope secondary mirror every 0.24$`\mathrm{s}`$ and nodding the telescope every 24$`\mathrm{s}`$ to build up a total on-source integration time of 10 minutes in a seeing of 0″.9. The N-band image was obtained with TIMMI (Käufl et al. kaufl (1994)) at the ESO 3.6m telescope on May 27, 1996 using a 64$`\times `$64 Si:Ga array with 0″.46 pixels. Again using double beam switching, total on-source integration time was 40 minutes in 1″ seeing. The $`[\mathrm{Fe}\text{ii}]`$ 1.64$`\mu \mathrm{m}`$ image was taken with the IRAC2B camera (Moorwood et al.
moorwood92 (1992)) on the ESO/MPI 2.2m telescope on April 1, 1998, using a 256$`\times `$256 Rockwell NICMOS3 HgCdTe array with 0″.51 pixels. The $`[\mathrm{Fe}\text{ii}]`$ line was scanned with a $`\lambda /\mathrm{\Delta }\lambda =1500`$ Fabry-Perot etalon covering three independent wavelength settings on the line and two on the continuum on either side of the line, for a total integration time of 24 minutes on the line in 0″.9 seeing. Standard procedures were used for sky subtraction, flat fielding, interpolation of hot and cold pixels at fixed positions on the array, and recentering and averaging of the data. For the $`[\mathrm{Fe}\text{ii}]`$ data, the continuum was determined from the two off-line channels and subtracted from the on-line data. The integrated $`[\mathrm{Fe}\text{ii}]`$ line flux is in excellent agreement with the value determined by Moorwood and Oliva (1994a ). ## 3 Results ### 3.1 Morphology Panels a–d in Figure 1 are the continuum images in the NICMOS K, H, J and WFPC2 R filters<sup>1</sup><sup>1</sup>1Color images are also available at http://www.arcetri.astro.it/$`\sim `$marconi. The cross marks the position of the K band peak and the circle is the position of the H<sub>2</sub>O maser measured by Greenhill et al. (greenhill (1997)). The radius of the circle is the $`\pm 1`$″ r.m.s. uncertainty of the astrometry performed on the images, based on the Guide Star Catalogue (Voit et al. voit (1997)). The position of the K peak is offset by 0″.5 from the location of the H<sub>2</sub>O maser, hereafter identified with the location of the nucleus of the galaxy. Note that this offset is still within the absolute astrometric uncertainties of the GSC and the K peak could be coincident with the nucleus. The continuum images are also shown with a “true color” RGB representation in Fig. 1f (Red=F222M, Green=F110W, Blue=F606W).
A comparison of photometry between our data and earlier published results is not straightforward since the NICMOS filters are different from the ones commonly used. However, as shown in table 3, our measured fluxes in 6″ and 18″ circular apertures centered on the K band peak are within 15-30% of the ones by Moorwood & Glass (moorwood84 (1984)) measured in the same areas. Figure 1e is the H-K color map. H-K contours are also overlayed on the J image in Fig. 1c. At the location of the maser, South-East of the K band peak, emission from galactic stars is obscured by a dust lane oriented along the major axis of the galactic disk. The morphology of this dust lane resembles an edge-on disk with a 4″.5 radius ($`\sim 80\mathrm{pc}`$) and probably marks the region where the high density molecular material detected in CO is concentrated (e.g. Mauersberger et al. mauersberger (1996)). The average H-K color is $`\sim 1.7`$ with a peak value of 2.3. The dust lane has a sharp southern edge which is not very evident in the color map and can be explained by saturated absorption: the background H and K emission becomes completely undetectable and the color is dominated by foreground stars. Therefore, the sharp K edge is evidence for a region with such high extinction that it is not detected even in the near-IR. With this dust distribution, the observed morphology in the continuum images is the result of an extinction gradient in the direction perpendicular to the galactic disk. Patchy extinction is also present all over the field of view. At shorter wavelengths, the morphology is more irregular because dust extinction is more effective (the same effect seen so obviously in Centaurus A, cf. Schreier et al. schreier98 (1998), Marconi et al. marconi99 (2000)) and the conical cavity extensively mapped by Moorwood et al. (1996a ) becomes more prominent: there, the dust has been swept away by supernova-driven winds.
Indeed, in the R image, significant emission is detected only in the wind-blown cavity which presents a clear conical morphology with well defined edges and an apex lying $`\sim 3`$″ from the K peak. Due to the above mentioned reddening gradient, the apex of the cone gets closer to the nucleus with increasing wavelength (compare with the R band and Pa$`\alpha `$/H<sub>2</sub> images – see below). The continuum-subtracted Paschen $`\alpha `$ image (Fig. 2a) shows the presence of several strong emission line knots along the galactic plane, very likely resulting from a circumnuclear ring of star formation seen almost edge-on. “Knot B” of Moorwood et al. (1996a ) is clearly observed South East of the nucleus while “Knot C”, North-West of the nucleus, is barely detected. Both knots are also marked on the figure. Dust extinction strongly affects the $`\mathrm{Pa}\alpha `$ morphology, making it very difficult to trace the ring and locate its center; a likely consequence is the apparent misalignment between the galaxy nucleus and the ring center. The observed ring of star formation is similar to what has been found in other starburst galaxies (cf. Moorwood 1996b ). The starburst ring could result from two alternative scenarios: either the starburst originates at the nucleus, and then propagates outward forming a ring in the galactic disk; or the ring corresponds to the position of the inner Lindblad resonance where the gas density is naturally increased by flow from both sides (see the review in Moorwood 1996b ). Panel b shows the continuum-subtracted H<sub>2</sub> image which traces the edges of the wind-blown cavity. As expected, the morphology is completely different from that of $`\mathrm{Pa}\alpha `$ which traces mainly starburst activity. Note the strong H<sub>2</sub> emission close to the nucleus at the apex of the cavity, with an elongated, arc-like morphology.
The $`\mathrm{H}_2`$ flux in a 6″$`\times `$6″ aperture centered on the K band peak is 1.1$`\times 10^{-13}`$$`\mathrm{erg}`$$`\mathrm{cm}`$<sup>-2</sup>$`\mathrm{s}`$<sup>-1</sup> and corresponds to $`\sim 70\%`$ of the total integrated emission in the NICMOS field of view. This is in good agreement with the 1.29$`\pm 0.05`$ found by Koornneef & Israel (koorn96 (1996)) in an equally sized aperture and with the integrated 3.1$`\times 10^{-13}`$$`\mathrm{erg}`$$`\mathrm{cm}`$<sup>-2</sup>$`\mathrm{s}`$<sup>-1</sup> from the map by Moorwood & Oliva (1994a ). We remark that contamination of $`\mathrm{H}_2`$ emission by $`\mathrm{He}\text{i}`$$`\lambda `$2.112$`\mu \mathrm{m}`$ is unlikely since the line was detected neither by Koornneef (koorn (1993)) nor by Moorwood & Oliva (1994a ) and from their spectra we can set an upper limit of 5-10% to the $`\mathrm{He}\text{i}`$/$`\mathrm{H}_2`$ ratio. Panel c in Figure 2 shows that the equivalent width of $`\mathrm{Pa}\alpha `$ is up to 150–200Å in the star forming regions, but much lower in the wind-blown cavity. Since the near-IR continuum emission within the cone is not significantly higher than in the surrounding medium, the low equivalent width within the cone is due to weaker $`\mathrm{Pa}\alpha `$ emission, the likely consequence of low gas density. Panel d in Figure 2 is the Pa$`\alpha `$/H<sub>2</sub> ratio image which also traces the wind-blown cavity. Note that the cone traced by Pa$`\alpha `$ and H<sub>2</sub> is offset with respect to the light cone observed in R: this is a result of the reddening gradient in the direction perpendicular to the galactic plane. Fig. 2f is a true color RGB representation of line and continuum images (Red=F222M, Green=$`\mathrm{H}_2`$, Blue=$`\mathrm{Pa}\alpha `$). L and N band ground based images are shown in Fig. 3 a and b with contours overlayed on the NICMOS K band image.
No obvious point source is detected at the location of the nucleus and the extended emission is smooth and regular, elongated like the galactic disk. The $`[\mathrm{Fe}\text{ii}]`$ emission shown in contours in Fig. 2b deviates from the $`\mathrm{Pa}\alpha `$ image in a number of interesting ways. First, the northern edge of the cavity outlined most clearly in $`\mathrm{H}_2`$ emission is also detected, although more faintly, in $`[\mathrm{Fe}\text{ii}]`$, presumably excited by the shocks resulting from the superwind. Otherwise, the $`[\mathrm{Fe}\text{ii}]`$ emission displays two prominent peaks in the starburst region traced by $`\mathrm{Pa}\alpha `$, one peak close to the nucleus and one offset at a position angle of about 250 degrees (counterclockwise from North). In both of these regions the $`[\mathrm{Fe}\text{ii}]`$/$`\mathrm{Pa}\alpha `$ ratio is much higher than in the rest of the starburst region. The $`[\mathrm{Fe}\text{ii}]`$ emission likely originates in radiative supernova remnants (SNRs). In the dense nuclear region of NGC 4945 the radiative phase of the SNRs will be short, and hence the $`[\mathrm{Fe}\text{ii}]`$ emission will be much more strongly affected by the stochastic nature of supernova explosions in the starburst ring than $`\mathrm{Pa}\alpha `$. The regions of high $`[\mathrm{Fe}\text{ii}]`$/$`\mathrm{Pa}\alpha `$ ratios thus simply trace recent supernova activity. ### 3.2 Reddening A lower limit and a reasonable estimate of the reddening can be obtained from the H–K color image in the case of foreground screen extinction. In this case, the extinction is simply $$A_\mathrm{V}=\frac{E(H-K)}{c(H)-c(K)}$$ (1) where the color excess is given by the difference between the observed and intrinsic colour, $`E(H-K)=(H-K)-(H-K)_0`$, and the $`c`$ coefficients represent the wavelength dependence of the extinction law; $`A_\lambda =c(\lambda )A_\mathrm{V}`$.
We have assumed $`A_\lambda =A_{1\mu \mathrm{m}}(\lambda /1\mu \mathrm{m})^{-1.75}`$ ($`\lambda >1\mu \mathrm{m}`$) and $`A_\mathrm{V}=2.42A_{1\mu \mathrm{m}}`$. Spiral and elliptical galaxies have average intrinsic colours $`(H-K)_0\approx 0.22`$ with 0.1$`\mathrm{mag}`$ dispersion (Hunt et al. hunt (1997)) and the color correction due to the non-standard filters used by NICMOS is negligible – $`(H-K)_0\approx 0.26`$ instead of 0.22. In the region of the $`\mathrm{Pa}\alpha `$ ring, the average color $`H-K=1.1`$ yields $`A_\mathrm{V}\approx 11`$, in fair agreement with the estimate $`A_\mathrm{V}>13`$ from the Balmer decrement presented below. In knot B $`(H-K)=1.2`$ yields $`A_\mathrm{V}=12.5`$, while in knot C $`(H-K)=0.64`$ yields $`A_\mathrm{V}=5.2`$. A different reddening estimate can be derived from the analysis of Hydrogen line ratios. We can estimate the reddening to “Knot B” and “Knot C” by using the images and spectra published by Moorwood et al. (1996a ). The inferred reddening (assuming an intrinsic ratio Pa$`\alpha `$/H$`\alpha `$=0.18, and $`A(\mathrm{H}\alpha )=0.81A_\mathrm{V}`$, $`A(\mathrm{Pa}\alpha )=0.137A_\mathrm{V}`$) is $`A_\mathrm{V}`$=3.2 for Knot B and $`A_\mathrm{V}`$=3.8 for Knot C. We can also estimate a lower limit to the reddening on the $`\mathrm{Pa}\alpha `$ ring. Considering a region 11″$`\times `$5″ aligned along the galactic plane, including all the stronger $`\mathrm{Pa}\alpha `$ emission, we find $`\mathrm{Pa}\alpha /\mathrm{H}\alpha >500`$ which corresponds to $`A_\mathrm{V}>13`$ mag, a value in agreement with the estimate given by Moorwood & Oliva (moorwood88 (1988)), $`A_\mathrm{V}=14\pm 3`$, from the $`\mathrm{Br}\alpha `$/$`\mathrm{Br}\gamma `$ ratio in a 6″$`\times `$6″ aperture centered on the IR peak. We note that the first approach measures the mean extinction of the starlight, while the second one measures the extinction toward the HII regions.
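Eq. 1 together with the assumed power-law extinction curve can be turned into a small calculator. The effective filter wavelengths of 1.60 and 2.22 $`\mu \mathrm{m}`$ below are our assumption (roughly the NICMOS F160W and F222M bands, not stated in the text); with them the sketch roughly reproduces the quoted $`A_\mathrm{V}\approx 11`$ for $`H-K=1.1`$ and $`A_\mathrm{V}=5.2`$ for knot C:

```python
def extinction_coefficient(lam_um):
    # c(lambda) = A_lambda / A_V, from A_lambda = A_1um (lambda / 1um)^-1.75
    # and A_V = 2.42 A_1um  =>  c(lambda) = lambda^-1.75 / 2.42
    return lam_um**-1.75 / 2.42

def av_from_color(H_minus_K, intrinsic=0.22, lam_H=1.60, lam_K=2.22):
    # Eq. (1), foreground-screen case: A_V = E(H-K) / (c(H) - c(K)),
    # with E(H-K) = (H-K) - (H-K)_0. Filter wavelengths are assumed values.
    E = H_minus_K - intrinsic
    return E / (extinction_coefficient(lam_H) - extinction_coefficient(lam_K))
```

The same $`c`$ coefficients feed the CO-index correction of Eq. 2.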
Therefore, these $`A_\mathrm{V}`$ estimates indicate that in the case of Knot C the starlight and the emitting gas are located behind the same screen. Conversely, Knot B has a lower extinction and must therefore be located in front of the screen hiding the starlight. A likely interpretation is that Knot C is located within the galactic plane on the walls of the cavity farthest from us, whereas Knot B is located above the galactic plane, toward the observer. It appears that the hypothesis of screen extinction can provide reasonable results. Of course the true extinction, i.e. the optical depth at a given wavelength, is larger if dust is mixed with the emitting regions. However, it should be noted that the case in which dust is completely and uniformly mixed with the emitting regions does not apply here, because the observed color excesses are larger than the maximum value expected in that case ($`E(H-K)\approx 0.6`$). ### 3.3 CO Index A direct computation of the CO stellar index as $`W(CO)=m(CO)-m(K)`$, where $`m(CO)`$ and $`m(K)`$ are the magnitudes in the CO and K filters, is hindered by the high extinction gradients. Therefore we have corrected for the reddening using the prescription described above: $$W(CO)=m(CO)-m(K)+\frac{c(K)-c(CO)}{c(H)-c(K)}E(H-K)$$ (2) where, as above, the $`c`$ coefficients represent the wavelength dependence of the reddening law. The correction is $`0.145E(H-K)`$, which is important since the expected CO index is $`\approx 0.2`$. The “corrected” photometric CO index map is displayed in Fig. 2e. As a check, in the central $`4^{\prime \prime }\times 4^{\prime \prime }`$ we derive a photometric CO index of 0.18, which is in good agreement with the value 0.22 obtained from spectroscopic observations by Oliva et al. (oliva95 (1995)), when one takes into account the uncertainties of the reddening correction. In the central region there are three knots where the CO index reaches values of $`\approx 0.25`$, aligned along the galactic disk. 
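The coefficient 0.145 in the correction term of Eq. (2) follows from the same power-law extinction curve. A minimal check, assuming pivot wavelengths of 1.60, 2.22 and 2.37 μm for the H, K and CO filters (these wavelengths are assumptions, not stated in the text):

```python
# Reddening-correction coefficient for the photometric CO index (Eq. 2):
# (c(K) - c(CO)) / (c(H) - c(K)).  Filter pivot wavelengths are assumed.

def c(lam_um):
    return lam_um**-1.75 / 2.42   # c(lambda) = A_lambda / A_V

def co_correction_coeff(lam_h=1.60, lam_k=2.22, lam_co=2.37):
    return (c(lam_k) - c(lam_co)) / (c(lam_h) - c(lam_k))

print(round(co_correction_coeff(), 3))   # ~0.14, close to the quoted 0.145

def corrected_co_index(m_co, m_k, e_hk):
    """W(CO) = m(CO) - m(K) + coeff * E(H-K), i.e. Eq. (2)."""
    return m_co - m_k + co_correction_coeff() * e_hk
```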
However, we do not detect any clear indication of dilution by a spatially unresolved source, that would be expected in the case of emission by hot ($`\sim 1000\mathrm{K}`$) dust heated by the AGN. There are regions close to the location of the H<sub>2</sub>O maser where the CO index is as low as 0.08 but that value is still consistent with pure stellar emission or, more likely, with an imperfect reddening correction. ### 3.4 AGN activity The NICMOS observations presented in this paper were aimed at detecting near-IR traces of AGN activity in the central ($`R<10^{\prime \prime }`$) region of NGC 4945. Indeed, recent NICMOS studies exploiting the high spatial resolution of HST show that active galactic nuclei are usually characterized by prominent point sources in K, detected e.g. in the Seyfert 2 galaxies Circinus (Maiolino et al. 1999a ) and NGC 1068, and the radio galaxy Centaurus A (Schreier et al. schreier98 (1998)). NGC 4945 does not show any point-like emission at the position of the nucleus (identified by the $`\mathrm{H}_2\mathrm{O}`$ maser) and the upper limit to the nuclear emission is $`F_\lambda (\mathrm{F222M})<2\times 10^{-13}\mathrm{erg}\mathrm{cm}^{-2}\mathrm{s}^{-1}\mu \mathrm{m}^{-1}`$. We also do not detect any dilution of the CO absorption features by hot dust emission, as observed in many active galaxies (Oliva et al. 1999b ). From the analysis of the CO index image, non-stellar light contributes less than $`F_\lambda (\mathrm{F222M})<6\times 10^{-14}\mathrm{erg}\mathrm{cm}^{-2}\mathrm{s}^{-1}\mu \mathrm{m}^{-1}`$, thus providing a tighter upper limit than above. The lack of a point source in the ground based L and N observations also places upper limits on the mid-IR emission, though less tight due to the lower sensitivity and spatial resolution ($`F_\lambda (\mathrm{L})<1.2\times 10^{-12}\mathrm{erg}\mathrm{cm}^{-2}\mathrm{s}^{-1}\mu \mathrm{m}^{-1}`$ and $`F_\lambda (\mathrm{N})<6.0\times 10^{-13}\mathrm{erg}\mathrm{cm}^{-2}\mathrm{s}^{-1}\mu \mathrm{m}^{-1}`$). 
Finally, type 2 AGNs are usually characterized by ionization cones detected either in line images or in excitation maps, i.e. ratios between high and low excitation lines (usually $`[\mathrm{O}\text{iii}]`$ and $`\mathrm{H}\alpha `$) revealing higher excitation than the surrounding medium. In NGC 4945 the equivalent width of $`\mathrm{Pa}\alpha `$ and the $`\mathrm{Pa}\alpha `$/$`\mathrm{H}_2`$ ratio indeed show a cone morphology but the behavior is the opposite of what is expected, i.e. the excitation within the cone is lower than in the surroundings and the H<sub>2</sub>/Pa$`\alpha `$ ratio increases up to $`\sim 5`$ (see Fig. 2d). Two processes could be responsible for the enhanced $`\mathrm{H}_2`$ emission – either shocks caused by the interaction between the supernova-driven wind and the interstellar medium or exposure to a strong X-ray dominated photon flux emitted by the AGN. But in either case there is no indication of the strong UV flux which produces “standard” AGN ionization cones. We find, therefore, no evidence for the expected AGN markers in our NICMOS data. ## 4 Discussion Although no trace of its presence has been found in these data, the existence of an obscured AGN in the nucleus of NGC 4945 is unquestionably indicated by the X-rays (Iwasawa et al. iwasawa93 (1993), Done et al. done96 (1996)). Recent, high signal-to-noise observations by BeppoSAX (Guainazzi et al., guainazzi (2000)) have confirmed the previous indications of variability from Ginga observations (Iwasawa et al. iwasawa93 (1993)): in the 13-200 keV band, where the transmitted spectrum is observed, the light curve shows fluctuations with an extrapolated doubling/halving time scale of $`\tau \approx 3`$–$`5\times 10^4\mathrm{s}`$. These time scales and amplitudes essentially exclude any known process for producing the high energy X-rays other than accretion onto a supermassive black hole. 
Producing the 3$`\times 10^{42}`$$`\mathrm{erg}`$$`\mathrm{s}`$<sup>-1</sup> observed in the 2-10$`\mathrm{keV}`$ band by BeppoSAX would require about 10000 of the most luminous X-ray binaries observed in our Galaxy (e.g. Scorpius X-1), and only a few such objects are known. Alternatively, very hot plasma ($`kT`$ of a few $`\mathrm{keV}`$), produced by supernovae, has been observed in the 2-10$`\mathrm{keV}`$ spectrum of starburst galaxies, but at higher energies ($`>30\mathrm{keV}`$) the emission is essentially negligible (Cappi et al. cappi (1999); Persic et al. persic (1998)), whereas the emission of NGC4945 peaks between 30 and 100$`\mathrm{keV}`$. Also, given that the X-ray emission is observed through a gaseous absorbing column density of a few times $`10^{24}`$$`\mathrm{cm}`$<sup>-2</sup>, both the 10000 superluminous X-ray binaries and the very hot SN wind would have to be hidden by this huge gaseous column. It is very difficult to find a geometry for the gas distribution that could produce this effect. We therefore conclude that the presence of an AGN provides the only plausible origin of the hard X-ray emission. The above considerations, combined with the absence of any evidence for the presence of an AGN at other wavelengths, have important consequences irrespective of the relative, and unknown, contributions of the starburst and AGN to the total bolometric luminosity. This is illustrated below by considering the extreme possibilities that the luminosity is dominated either by the starburst or the AGN. ### 4.1 NGC 4945 as a starburst dominated object Most previous studies have concluded that the FIR emission in NGC 4945 can be attributed solely to starburst activity (e.g. Koornneef koorn (1993), Moorwood & Oliva 1994a ) without invoking the presence of an AGN. 
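The X-ray binary counting argument above is a one-line order-of-magnitude estimate; the fiducial luminosity adopted for a Sco X-1-like binary (~3×10<sup>38</sup> erg s<sup>-1</sup>) is an assumed value, not taken from the text.

```python
# How many Sco X-1-like binaries would be needed to supply the observed
# hard X-ray luminosity?  L_X_BINARY is an assumed fiducial value.

L_X_OBSERVED = 3e42      # erg/s, 2-10 keV (BeppoSAX)
L_X_BINARY = 3e38        # erg/s, luminous Galactic X-ray binary (assumed)

n_binaries = L_X_OBSERVED / L_X_BINARY
print(f"{n_binaries:.0f}")   # ~10000, as quoted in the text
```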
We note that, on average, active galaxies are characterized by $`L_{FIR}/L_{Br\gamma }`$ ratios much larger than starbursts, and this fact has sometimes been invoked to distinguish starbursts from AGNs (see the discussion in Genzel et al. genzel98 (1998)). In this regard, NGC 4945 has a starburst-like ratio: $`L_{\mathrm{FIR}}/L_{\mathrm{Br}\gamma }\approx 1.4\times 10^5`$ (from the observed $`\mathrm{Pa}\alpha `$ with $`A_\mathrm{V}`$=15 mag). This is similar to the value for the prototypical starburst galaxy M82 ($`L_{\mathrm{FIR}}/L_{\mathrm{Br}\gamma }\approx 3.4\times 10^5`$, Rieke et al. rieke80 (1980)), suggesting that the FIR emission of NGC 4945 may arise from the starburst. Genzel et al. (genzel98 (1998)) showed that, when considering the reddening correction derived from the mid-IR – usually much larger than from the optical and near-IR – the observed H line emission from the starburst translates into an ionizing luminosity comparable to the FIR luminosity. Indeed, if in NGC 4945 the bulk of H emission is hidden by just $`A_\mathrm{V}`$=45 mag, $`L_{\mathrm{FIR}}/L_{\mathrm{ion}}\approx 1`$ and the observed starburst activity is entirely responsible for the FIR. Although all the bolometric luminosity could be generated by a starburst, it is also possible to construct starburst models which are consistent with the observed near infrared properties but generate a much lower total luminosity. It is important to recall that $`L_{\mathrm{FIR}}/L_{\mathrm{Br}\gamma }`$ represents the ratio between star formation rates averaged over two different timescales, i.e. $`>10^8`$ yrs and $`<10^7`$ yrs, respectively. Therefore, this ratio strongly depends on the past star formation history. For example, objects which have not experienced star formation in the past $`10^7`$ yrs will emit little $`\mathrm{Br}\gamma `$, but significant FIR radiation. A more quantitative approach is presented in Fig. 4 where we compare the observed nuclear properties of NGC 4945 with synthesis models by Leitherer et al. 
(leitherer (1999)). We have considered two extreme cases of star formation history. The thick solid line in the figure represents an instantaneous burst with mass $`3.5\times 10^7\mathrm{M}_{}`$ whereas the thick dashed line is a continuous star formation rate of $`0.13\mathrm{M}_{}\mathrm{yr}^{-1}`$. In both cases a Salpeter initial mass function (i.e. $`M^{-2.35}`$), an upper mass cutoff of 100$`\mathrm{M}_{}`$ and abundances $`Z=\mathrm{Z}_{}`$ are chosen. Panel 1 shows the evolution of the ionizing photon rate ($`Q(\mathrm{H})`$) as a function of time after the beginning of the burst. The shaded region delimits the values compatible with the observations; $`Q(\mathrm{H})`$ is estimated from the total $`\mathrm{Pa}\alpha `$ flux in the NICMOS images ($`5.6\times 10^{-13}\mathrm{erg}\mathrm{s}^{-1}\mathrm{cm}^{-2}`$), dereddened with $`A_V=5`$ mag and $`A_V=20`$ mag and converted using the case B approximation for H recombination. Panel 2 gives the equivalent width of $`\mathrm{Br}\gamma `$ ($`W_\lambda (\mathrm{Br}\gamma )`$); the observed value outlined by the shaded area is a lower limit for the starburst models and was derived by rescaling the observed $`\mathrm{Pa}\alpha `$ flux and dividing by the flux observed in the same aperture with the F222M filter. Panel 3 is the evolution of the Supernova Rate (SNR). Estimates of the SNR from radio observations suggest values $`>0.3\mathrm{yr}^{-1}`$ (Koornneef koorn (1993)), $`0.2\mathrm{yr}^{-1}`$ (Forbes & Norris forbes (1998)), down to 0.05$`\mathrm{yr}`$<sup>-1</sup> (Moorwood & Oliva 1994a ). The shaded region covers the 0.01-0.4$`\mathrm{yr}`$<sup>-1</sup> range. Panel 4 is the mechanical luminosity produced by the supernovae. Finally, panels 5 and 6 give the K-band and bolometric luminosity, respectively. The allowed range for the K monochromatic luminosity is given by the total observed flux in a $`6^{\prime \prime }\times 6^{\prime \prime }`$ aperture centered on the K peak, where photospheric emission from supergiants is known to dominate (Oliva et al. 
1999b ). The upper and lower limits represent the values obtained after dereddening by $`A_V=5`$ mag and $`A_V=20`$ mag. The upper limit to the bolometric luminosity is the total NGC 4945 luminosity derived from IRAS observations (Rice et al. rice88 (1988)). In all cases the thin dotted line represents the time at which the properties of the instantaneous burst meet the observational constraints. The crossed squares represent the combination of the two models at $`t=10^{7.4}\mathrm{yr}`$. It is clear from the figure that an instantaneous burst at $`t\approx 10^{6.8}\mathrm{yr}`$ is capable of meeting all the observational constraints. It reproduces the correct supernova rate and K band luminosity and its bolometric luminosity dominates the total bolometric luminosity of the galaxy. Conversely, the continuous burst fails to reproduce the SNR and K luminosity. Considering these two models alone, it is tempting to infer that the starburst powers the bolometric emission of NGC4945. However, the instantaneous and continuous SFR are two extreme and simplistic cases. More realistically, the SF history is more complex, since bursts have a finite duration or are the combination of several different events. As an example we consider the case of two bursts of star formation taking place at the same time: one instantaneous and the other continuous. Both have the same characteristics as the bursts presented above. The properties of this double burst model at $`t=10^{7.4}\mathrm{yr}`$ are shown in the figure by the crossed squares. The choice of the time is arbitrary and any other value between $`10^{7.2}\mathrm{yr}`$ and $`10^{7.5}\mathrm{yr}`$ would do. Even in this case the starburst model meets all the observational constraints: $`Q(\mathrm{H})`$ is provided by the continuous burst while the SNR and K luminosity come from the instantaneous burst. 
The important difference with respect to the single instantaneous burst is that the bolometric luminosity of the burst is now $`\approx 20\%`$ of the total bolometric luminosity of the galaxy. The mechanical luminosity injected by the SN in the “instantaneous” burst (which dominates also in the double burst model) is $`10^{8.5}\mathrm{L}_{}`$ over $`10^{7.4}\mathrm{yr}`$. This results in a total injected energy of $`10^{57}\mathrm{erg}`$, which is more than enough to account for the observed superwind. Indeed Heckman et al. (heckman (1990)) estimate an energy content of the wind-blown cavity of $`1.5\times 10^{55}\mathrm{erg}`$ (after rescaling for the different adopted distance of NGC 4945). Both models agree with the constraint imposed by dynamical measurements that the central mass in stars must be less than 6.6$`\times 10^8`$$`\mathrm{M}_{}`$ (Koornneef koorn (1993), after rescaling for the different assumed distances of the galaxy): the continuous SFR would require 5$`\times 10^9`$$`\mathrm{yr}`$ to produce that mass of stars. In conclusion, two different star formation histories can reproduce the observed starburst properties but only in one case does the starburst dominate the bolometric luminosity of the galaxy. Therefore the available data do not constrain the bolometric luminosity of the starburst. As shown in the next section, the observed $`L_{\mathrm{FIR}}`$/$`L_\mathrm{X}`$ ratio of NGC 4945 is equal to that of a “normal” AGN in which $`L_{\mathrm{FIR}}`$ is reprocessed UV radiation. If $`L_{\mathrm{FIR}}`$ in NGC 4945 is actually dominated by the starburst, therefore, it is clear that the AGN must be strongly deficient in UV relative to X-rays. In this starburst-dominated scenario for NGC4945, with the black hole mass inferred from the $`\mathrm{H}_2\mathrm{O}`$ maser measurements (Greenhill et al. greenhill (1997)), the AGN is emitting at ≲10% of its Eddington luminosity. 
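Two of the energetic figures quoted in this discussion can be checked with a few lines of arithmetic: the total supernova-injected energy, and the Eddington ratio implied by the maser black-hole mass for the AGN-dominated case of Sect. 4.2. The physical constants (L<sub>☉</sub>, seconds per year, the Eddington coefficient 1.26×10<sup>38</sup> erg s<sup>-1</sup> per M<sub>☉</sub>) are standard values, not taken from the paper.

```python
# Order-of-magnitude energetics checks.  All constants are standard values.

L_SUN = 3.846e33            # erg/s
YR = 3.156e7                # s

# SN mechanical luminosity 10^8.5 L_sun sustained over 10^7.4 yr:
e_wind = 10**8.5 * L_SUN * 10**7.4 * YR
print(f"{e_wind:.1e} erg")  # ~1e57 erg, vs. 1.5e55 erg stored in the cavity

# Eddington ratio for M_BH = 1.4e6 M_sun and L_bol(AGN) ~ 1e44 erg/s
# (the AGN-dominated case of Sect. 4.2):
l_edd = 1.26e38 * 1.4e6     # erg/s
edd_ratio = 1e44 / l_edd
print(f"{edd_ratio:.2f}")   # ~0.5, i.e. "50% of Eddington"
```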
### 4.2 NGC 4945 as an AGN dominated object By fitting the simultaneous 0.1-200 keV spectrum from BeppoSAX, the absorption-corrected luminosity in the 2-10 keV band is $`L_\mathrm{X}(2\text{--}10\mathrm{keV})=3\times 10^{42}\mathrm{erg}\mathrm{s}^{-1}`$ (Guainazzi et al., guainazzi (2000)). If the AGN in NGC 4945 has an intrinsic spectral energy distribution similar to a quasar, then $`L_\mathrm{X}(2\text{--}10\mathrm{keV})/L_{\mathrm{bol}}\approx 0.03`$ (Elvis et al. elvis (1994)), therefore $`(L_{\mathrm{bol}})_{\mathrm{AGN}}\approx 10^{44}\mathrm{erg}\mathrm{s}^{-1}=2.6\times 10^{10}\mathrm{L}_{}`$, which is approximately the total far-IR luminosity of NGC 4945 measured by IRAS (Rice et al., rice88 (1988)). Thus, a “normal” AGN in NGC 4945 could in principle power the total bolometric luminosity. For this scenario, we compare NGC 4945 with a nearby obscured object, the Circinus galaxy, now considered an example of a “standard” Seyfert 2 galaxy (cf. Oliva et al. oliva94 (1994), Oliva et al. oliva98 (1998), Maiolino et al. 1998a , Matt et al. matt99 (1999), Storchi-Bergmann et al. storchi99 (1999), Curran et al. curran (1999)). In particular, Oliva, Marconi & Moorwood (1999a ) and, previously, Moorwood et al. (1996c ) showed that the total energy output from the AGN required to explain the observed emission line spectrum is comparable to the total FIR luminosity, concluding that any starburst contribution to the bolometric luminosity is small (≲10%). The choice of the Circinus galaxy is motivated by its distance (D=4$`\mathrm{Mpc}`$) and its FIR and hard X-ray luminosities, all similar to those of NGC 4945 ($`L_{\mathrm{FIR}}\approx 1.2\times 10^{10}\mathrm{L}_{}`$; Siebenmorgen et al., sieben (1997); $`L_\mathrm{X}(2\text{--}10\mathrm{keV})\approx 3.4\text{--}17\times 10^{41}\mathrm{erg}\mathrm{s}^{-1}`$; Matt et al., matt99 (1999)). Note that its $`L_\mathrm{X}(2\text{--}10\mathrm{keV})/L_{\mathrm{FIR}}`$ ratio ($`0.01\text{--}0.05`$) is consistent with the average value for quasars (Elvis et al., elvis (1994)). 
The overall spectral energy distributions of NGC 4945 and Circinus are compared in Fig. 5. The “stars” represent the IRAS photometric points (except for the longest-wavelength points, which are the 150$`\mu \mathrm{m}`$ measurements by Ghosh et al. ghosh92 (1992)). In NGC 4945 the points labeled “K”, “L” and “N” are the upper limits derived from our observations, while in Circinus they represent emission from the unresolved nuclear source corrected for stellar emission (Maiolino et al. 1998a ). The points labeled “100 keV” are from Done et al. done96 (1996) (NGC 4945) and Matt et al. matt99 (1999) (Circinus). The bars between 13.6 and 54.4 eV are at a level given by $`\nu L_\nu \approx Q(\mathrm{H})<h\nu >`$, where $`Q(\mathrm{H})`$ is the rate of H-ionizing photons and $`<h\nu >`$ is the mean photon energy of the ionizing spectrum. For NGC 4945, $`Q(\mathrm{H})`$ is derived from H recombination lines and thus represents the energy which is radiated by the young starburst; we assumed $`<h\nu >=16\mathrm{eV}`$. For Circinus, the point labeled “Starburst” is similarly derived from Br$`\gamma `$ emission associated with the starburst (Oliva et al. oliva94 (1994)) while that labeled “AGN” is from the estimate made by Oliva, Marconi & Moorwood (1999a ). In the lower panel, we represent the IR spectrum of Circinus by connecting the photometric points just described. We plot this same spectrum as a dotted line in the upper panel, rescaled to match the 100 keV points. NGC 4945 and Circinus have similar X/FIR ratios: $`\nu L_\nu (100keV)/L_{FIR}\approx 2\times 10^{-3}`$ for Circinus and $`\approx 3\times 10^{-3}`$ for NGC 4945. Note that, at each wavelength, both Circinus and NGC 4945 were observed with comparable resolution. If the AGN in NGC 4945 dominates the luminosity and its intrinsic spectrum is similar to that of Circinus, then the lack of AGN detections in the near-IR and mid-IR requires larger obscuration. 
In particular the non-detection of a K band point source or of dilution of the CO features can be used to estimate the extinction at 2.2$`\mu \mathrm{m}`$. For Circinus, from Maiolino et al. (1998a ; K band) and Matt et al. (matt99 (1999)), we can derive $`\nu L_\nu (K)/\nu L_\nu (100keV)\approx 0.5`$. If NGC 4945 has a similar near-IR over hard-X-rays ratio then, given $`\nu L_\nu (K)/\nu L_\nu (100keV)<8\times 10^{-4}`$ (K band upper limit from CO and X-ray data from Done et al. done96 (1996)), the extinction toward the nucleus is $`\mathrm{\Delta }\mathrm{A}_\mathrm{K}>7`$ mag (i.e. $`\mathrm{\Delta }A_\mathrm{V}>70`$ mag) larger than in the case of Circinus. Hot dust in NGC 4945 must be hidden by $`A_\mathrm{V}>135\mathrm{mag}`$, in agreement with the estimate by Moorwood & Glass (moorwood84 (1984)), $`A_\mathrm{V}>70`$, and, more recently, with an analysis of ISO CVF spectra implying $`A_\mathrm{V}\approx 100\mathrm{mag}`$ (Maiolino et al., 2000, in preparation). We note that the required extinction is not unexpected and is in agreement with the X-ray measurements. The column density measured in absorption in the X-rays is $`N_\mathrm{H}`$ of a few $`\times 10^{24}\mathrm{cm}^{-2}`$, therefore the expected $`A_V`$, assuming a galactic gas-to-dust ratio, is: $$A_V\approx 450\left(\frac{N_\mathrm{H}}{10^{24}\mathrm{cm}^{-2}}\right)$$ (3) The $`A_V`$ measured from optical/IR data is typically estimated to be smaller than that derived from X-rays ($`A_V(\mathrm{IR})\approx 0.1\text{--}0.5A_V(\mathrm{X})`$; Granato, Danese & Franceschini granato (1997)), therefore the X-ray absorbing column density is in excellent agreement with the required extinction. Very high extinctions, expected in the framework of the unified AGN model, are observed in many objects, as discussed and summarized, for instance, in Maiolino et al. 1998b and in Risaliti et al. risaliti (1999). The higher extinction can also qualitatively explain the redder colors of the NGC 4945 FIR spectrum. 
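The ΔA<sub>K</sub> > 7 mag limit and the column-density conversion of Eq. (3) follow from simple arithmetic. In the sketch below, the K-band pivot wavelength (2.22 μm) used to convert A<sub>K</sub> to A<sub>V</sub> is an assumed value, not quoted in the text.

```python
import math

# Extinction implied by the K-band deficit of NGC 4945 relative to Circinus,
# and the A_V expected from the X-ray absorbing column (Eq. 3).

ratio_circinus = 0.5       # nu L_nu(K) / nu L_nu(100 keV) in Circinus
ratio_ngc4945 = 8e-4       # upper limit for NGC 4945

delta_ak = 2.5 * math.log10(ratio_circinus / ratio_ngc4945)
print(f"{delta_ak:.1f}")   # ~7 mag, as quoted in the text

a_k_over_a_v = 2.22**-1.75 / 2.42   # c(K) from the lambda^-1.75 law (assumed pivot)
print(f"{delta_ak / a_k_over_a_v:.0f}")   # ~70 mag in A_V

def av_from_nh(nh_cm2):
    """Eq. (3): A_V expected for a Galactic gas-to-dust ratio."""
    return 450 * nh_cm2 / 1e24

print(av_from_nh(3e24))    # a few 1e24 cm^-2 gives A_V of order 1000 mag
```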
The solid line in the upper panel is the spectrum of Circinus after applying foreground extinction of $`A_\mathrm{V}=150\mathrm{mag}`$. We have applied the extinction law by Draine & Lee (draine (1984)) and the energy lost in the mid-IR has been reprocessed as 40$`\mathrm{K}`$ dust emission (i.e. black body emission at 40$`\mathrm{K}`$ corrected for a $`\lambda ^{-1.75}`$ emissivity). Though a careful treatment requires a full radiation transfer calculation, this simple plot demonstrates that (i) the redder color of NGC 4945 with respect to Circinus can be explained with extra absorption and (ii) that this is not energetically incompatible with the observed FIR luminosity, i.e. the absorbed mid-IR emission re-radiated in the FIR does not exceed the observed points. If the FIR emission is powered by the AGN, it is UV radiation re-processed by dust. However, if the AGN emits $`2\times 10^{10}\mathrm{L}_{}`$ in UV photons, high excitation gas emission lines should also be observed. The absence of high ionization lines like $`[\mathrm{O}\text{iii}]`$$`\lambda 5007,4959`$Å (Moorwood et al. 1996a ) or $`[\mathrm{Ne}\text{v}]`$$`\lambda 14.3\mu \mathrm{m}`$ (Genzel et al. genzel98 (1998)) and the low excitation observed in the wind-blown cone strongly argue that no ionizing UV photons (i.e. $`13.6\mathrm{eV}\le h\nu <500\mathrm{eV}`$) escape from the inner region. The low excitation H<sub>2</sub>/Pa$`\alpha `$ map, associated with the peak in H<sub>2</sub> emission close to the nucleus location, indicates that ALL ultraviolet photons must be absorbed within $`R<1.5^{\prime \prime }`$, i.e. $`R<30\mathrm{pc}`$, along ALL lines of sight. This is in contrast with the standard unified model of AGN, where ionizing radiation escapes along directions close to the torus axis. If the AGN is embedded in a thick dusty medium then two effects will contribute to its obscuration. 
First, dust will compete with the gas in absorbing UV photons, which will be directly converted into infrared radiation (e.g. Netzer & Laor netzer93 (1993), Oliva, Marconi & Moorwood 1999a ). Second, emission lines originating in this medium will be suppressed by dust absorption. To estimate the amount of required extinction, note that in Circinus $`[\mathrm{Ne}\text{v}]14.3\mu \mathrm{m}/[\mathrm{Ne}\text{ii}]12.8\mu \mathrm{m}=0.4`$ (extinction corrected) and in NGC 4945 $`[\mathrm{Ne}\text{v}]`$/$`[\mathrm{Ne}\text{ii}]`$$`\approx 0.008`$ (both ratios are from Genzel et al. genzel98 (1998)). If NGC4945 has the same intrinsic ratio as Circinus, then the observed $`[\mathrm{Ne}\text{v}]`$/$`[\mathrm{Ne}\text{ii}]`$ ratio requires $`A(14.3\mu \mathrm{m})>4.2`$ mag, corresponding to $`A_\mathrm{V}>110`$ mag, in agreement with the above estimates. We conclude that the AGN can power the FIR emission if it is properly obscured. Inferring the black hole mass from the $`\mathrm{H}_2\mathrm{O}`$ maser observations ($`1.4\times 10^6\mathrm{M}_{}`$, Greenhill et al. greenhill (1997)), we find in this scenario that the AGN is emitting at $`\approx 50\%`$ of its Eddington luminosity. ### 4.3 On the existence of completely hidden Active Galactic Nuclei As discussed above, if an AGN powers the FIR emission of NGC 4945, it must be hidden up to mid-IR wavelengths and does not fit in the standard unified model. The possible existence of such a class of Active Nuclei, detectable only at $`>10\mathrm{keV}`$, would have important consequences for the interpretation of IR luminous objects whose power source is still debated. Genzel et al. (genzel98 (1998)) and Lutz et al. (lutz98 (1998)) compared mid-IR spectra of Ultra Luminous IRAS galaxies (ULIRGs, see Sanders & Mirabel sanders96 (1996) for a review) with those of AGN and starburst templates. They concluded that the absence of high excitation lines (e.g. 
$`[\mathrm{Ne}\text{v}]`$) and the presence of PAH features undiluted by strong thermal continuum in ULIRG spectra strongly suggest that the starburst component is dominant. They also show that, after a proper extinction correction, the observed star formation activity can power the FIR emission. In their papers, NGC 4945 is classified as a starburst because of its mid-IR properties but, as shown in the previous section, NGC 4945 could also be powered by a highly obscured AGN, and the same scenario could in principle apply to all ULIRGs: their bolometric emission could be powered by an active nucleus completely obscured even at mid-IR wavelengths. The same argument could be used for the sources detected at submillimeter wavelengths by SCUBA, which can be considered as the high redshift counterparts of local ULIRGs. If they are powered by hidden active nuclei then their enormous FIR emission would not require star formation rates in excess of $`100\mathrm{M}_{}\mathrm{yr}^{-1}`$ (e.g. Hughes et al. hughes98 (1998)), and this would have important consequences for understanding the history of star formation in high redshift galaxies. In addition, it is well known that in order to explain the X-ray background a large fraction of obscured AGN is required. However, Gilli et al. (gilli99 (1999)) have shown that, in order to reconcile the observed X-ray background with hard X-ray counts, a rapidly evolving population of hard X-ray sources is required up to redshift $`\approx 1.5`$. No such population is known at the moment and the only classes of objects which are known to undergo such a rapid density evolution are local ULIRGs (Kim et al. kim98 (1998)) and, at higher redshift, the SCUBA sources (Smail et al. smail97 (1997)). SCUBA sources are therefore candidates to host a population of highly obscured AGNs. Almaini et al. 
(almaini99 (1999)) suggest that, if the SEDs of high redshift AGN are similar to those observed locally, one can explain 10–20% of the 850$`\mu \mathrm{m}`$ SCUBA sources at 1 mJy. This fraction could be significantly higher if a large population of AGN are Compton thick at X-ray wavelengths. Trentham, Blain & Goldader (trentham (1999)) show that if the SCUBA sources are completely powered by a dust enshrouded AGN then they may help in explaining the discrepancy between the local density of supermassive black holes and the high redshift AGN component (see also Fabian & Iwasawa fabian99 (1999)). Establishing the nature of SCUBA sources could be extremely difficult if the embedded AGNs are like NGC4945, i.e. completely obscured in all directions, because they would then not be identifiable with the standard optical/IR diagnostics. Incidentally, this fact could possibly account for the sparse detections of type 2 AGNs at high redshifts (Akiyama et al. akiyama (1999)). The best possibility for the detection of NGC4945-like AGNs is via their hard X-ray emission but, unfortunately, the sensitivity of existing X-ray surveys is still not high enough to detect high-$`z`$ AGNs, and the low spatial resolution makes identifications uncertain in the case of faint optical/near-IR counterparts. Moreover, hard X-rays alone are not enough to establish whether the AGN dominates the bolometric emission. ## 5 Conclusions Our new HST NICMOS observations of NGC 4945, complemented by new ground based near and mid-IR observations, have provided a detailed view of the morphology of the nuclear region. In $`\mathrm{Pa}\alpha `$, we detect a 100 pc-scale starburst ring while in $`\mathrm{H}_2`$ we trace the walls of a conical cavity blown by supernova driven winds. The continuum images are strongly affected by dust extinction but show that, even at HST resolution and sensitivity, the nucleus is completely obscured by a dust lane with an elongated, disk-like morphology. 
We detect neither a strong point source nor dilution of the CO stellar features, both expected signs of AGN activity. Whereas all the infrared properties of NGC 4945 are consistent with starburst activity, its strong and variable hard X-ray emission cannot be plausibly accounted for without the additional presence of an AGN. Although the starburst must contribute to the total bolometric luminosity, we have shown, using starburst models, that the actual amount depends on the star formation history. A major contribution from the AGN is thus not excluded. Irrespective of the assumption made, however, our most important conclusion is that the observed variable hard X-ray emission combined with the lack of evidence for reprocessed UV radiation in the infrared is incompatible with the “standard” AGN model. If the AGN dominates the bolometric luminosity, then its UV radiation must be totally obscured along all lines of sight. If the starburst dominates, then the AGN must be highly deficient in its UV relative to X-ray emission. The former case clearly raises the possibility that a larger fraction of ULIRGs than currently thought could actually be AGN rather than starburst powered. ###### Acknowledgements. A.M. and A.R. acknowledge the partial support of the Italian Space Agency (ASI) through grants ARS–98–116/22 and ARS–99–44 and of the Italian Ministry for University and Research (MURST) under grant Cofin98-02-32. E.J.S. acknowledges support from STScI GO grant O0113.GO-7865. We thank Roeland P. van der Marel for the use of the pedestal estimation and quadrant equalization software.
# Various regimes of flux motion in Bi2Sr2CaCu2O8+δ single crystals ## Abstract Four regimes of vortex motion were identified in the magnetoresistance of Bi<sub>2</sub>Sr<sub>2</sub>CaCu<sub>2</sub>O<sub>8+δ</sub> single crystals: (1) thermally activated flux flow (TAFF) in samples with surface defects caused by thermal annealing; (2) TAFF-like plastic motion of a highly entangled vortex liquid at low temperatures, with $`U_{pl}\propto (1-T/T_c)/H^{1/2}`$; (3) pure free flux flow above the region of (2) in clean and optimally doped samples; or, in its place, (4) a combination of (2) and (3). This analysis gives an overall picture of flux motion in Bi cuprates. The layered structure of high-$`T_c`$ cuprates causes intrinsic pinning even in the vortex liquid state. According to Vinokur et al., such pinning arises from the plastic creep due to flux entanglement. The highly viscous flux motion gives rise to an activation-type resistivity, $`\rho _{pl}=\rho _0\mathrm{exp}(-U_{pl}/T)`$, with $`U_{pl}\propto (1-T/T_c)/H^{1/2}`$. This pinned liquid model has been confirmed in recent experimental studies. Another interesting phenomenon is free flux flow, as described by the classical Bardeen-Stephen model, which occurs when the pinning barrier is suppressed. Plastic creep and free flux flow are generally difficult to verify in experiment, because they can easily be masked by defect pinning effects. Free flux flow (FFF) was only observed in clean YBCO samples, or under high driving forces by using current pulses. For Bi cuprates, intrinsic pinning is weaker due to its higher anisotropy, and is thus more difficult to investigate. In the present work, we report different regimes of flux motion under various pinning circumstances, by measuring the magnetoresistance of Bi<sub>2</sub>Sr<sub>2</sub>CaCu<sub>2</sub>O<sub>8</sub> single crystals. 1. 
Pinning by surface defects In annealed samples, thermally activated flux flow (TAFF) resistance (with $`T`$-independent barrier) was consistently observed down to the lowest $`T`$. Before annealing, on the other hand, a crossover behavior frequently occurred: below some crossover $`T_x`$, plastic creep was identified in agreement with the pinned liquid scenario; Above $`T_x`$ it crossovers to either pure free flux flow or a combination of FFF and TAFF. This clear difference between as-grown and annealed samples is connected to the effect of annealing. Scanning microscopy imaging revealed that the atomic-scale smooth surfaces of as-grown samples could be damaged by thermal annealing ($`450^{}`$C for 4h). Partial evaporation of material results a mesh of sub-$`\mu `$m-sized pits on the sample surfaces. Obviously, such defects can effectively cause pinning to give rise to the TAFF behavior over wide temperature ranges, even when the bulk pinning is absent. 2. Pinning in vortex liquid Figure 1 shows a typical set of resistance data for fields applied along the $`c`$-axis of an as-grown Bi-2212 sample ($`T_c76`$ K). The crossover in flux motion is evident. Similar phenomenon is also clear in other as-grown samples, one of which is presented in Fig. 2. We tried to fit the tail below the crossover temperature to the pinned-liquid model : $`\rho _{pl}=\rho _0\mathrm{exp}(U_{pl}/T)`$, with $`U_{pl}\varphi _0^2a(1T/T_c)/8\gamma \pi ^2\lambda _L(0)^2U_0(1T/T_c)`$, where $`a(\varphi _0/H)^{1/2}`$ is the inter-vortex spacing. The result is given by the solid lines in Fig. 1, which shows satisfactory agreement. The extracted field dependence of $`U_0`$ from Figs. 1 and 2 is plotted in the inset of Fig. 2. At high fields, $`U_0`$ also follows the $`1/\sqrt{H}`$ dependence as predicted (the straight line of -1/2 slope ). Similar behavior and crossover at low fields were reported for Bi-2212 and 2223 . 3. 
Free flux flow Above the crossover temperature, flux lines become disentangled and the pinning vanishes ($`U<k_BT`$). In this regime, free flux flow is thus expected if there exists no defect pinning. We tried to fit the resistance to the Bardeen-Stephen model: $`R_f=R_NH/H_{c2}`$, where $`R_N`$ is the normal-state extrapolated resistance and $`H_{c2}=\beta (T_cT)`$. One of the results is as shown in Fig. 2 (solid lines). The fitting is impressive over a very wide temperature range (51 to 78 K for $`H=5.5`$ T, $`T_c`$ 85 K). The value of $`\beta `$ is about 0.8 to 1.4 T/K, increasing with field. This result closely agrees with literature data (0.75 T/K) . 4. An intermediate situation involves combined contributions of both FFF and TAFF, as revealed by resistivity of YBCO . The total resistivity is expressed as : $`1/\rho =1/\rho _{fff}+1/\rho _{TAFF}`$. A preliminary fitting of our data to this scheme seems to explain the resistance above the crossover $`T`$ in Fig. 1. In summary, we emphasize the pinning effect of the surface defects caused by thermal annealing, which results in activation-type flux flow even when the bulk pinning is absent. In clean samples, vortex motion was analyzed in the schemes of pinned liquid (plastic flow) and free flux flow.
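As a quick illustration of how the regimes described above combine, the model resistances can be sketched numerically. All parameter values below (`R_N`, `rho0`, `U00`, `beta`, `Tc`) are illustrative placeholders, not fitted values from the figures:

```python
import numpy as np

def r_fff(T, H, R_N=1.0, Tc=85.0, beta=1.0):
    """Bardeen-Stephen free flux flow: R_f = R_N * H / H_c2,
    with H_c2 = beta * (Tc - T). Illustrative parameters."""
    return R_N * H / (beta * (Tc - T))

def r_plastic(T, H, rho0=1.0, U00=500.0, Tc=85.0):
    """Plastic-creep (pinned-liquid) resistance rho0 * exp(-U_pl / T),
    with the barrier U_pl = U00 * (1 - T/Tc) / sqrt(H)."""
    U = U00 * (1.0 - T / Tc) / np.sqrt(H)
    return rho0 * np.exp(-U / T)

def r_combined(T, H):
    """Regime (4): conductivities add, 1/R = 1/R_fff + 1/R_TAFF."""
    return 1.0 / (1.0 / r_fff(T, H) + 1.0 / r_plastic(T, H))
```

Because the conductivities add, the combined resistance always lies below both the FFF and the TAFF branches, and the activated branch freezes out exponentially as $`T`$ decreases, reproducing the crossover behavior described above.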
# Resolved Spectroscopy of the Narrow-Line Region in NGC 1068: Kinematics of the Ionized Gas

Based on observations with the NASA/ESA Hubble Space Telescope, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555.

## 1 Introduction

The narrow-line region (NLR) in Seyfert galaxies is characterized by emission lines with widths on the order of 500 km s<sup>-1</sup> (full-width at half-maximum, FWHM), which are attributed to motions of the ionized clouds of gas. Over the past couple of decades, ground-based studies attempted to determine the kinematics of the NLR, but a general consensus on the velocity flow pattern was not reached; cases were made for infall, rotation, parabolic orbits, outflow, etc. (e.g., Osterbrock and Mathews 1986, and references therein; DeRobertis & Shaw 1990; Veilleux 1991; Moore & Cohen 1994, 1996). Since the majority of the NLR flux comes from a region that subtends only a few arcseconds in most Seyferts (Schmitt & Kinney 1996), these studies had to rely on spatially-integrated line profiles. Unfortunately, similar profile shapes and asymmetries can be generated by a wide variety of kinematic models (Capriotti, Foltz, & Byard 1980, 1981), hence the difficulty in determining the velocity fields from these data. By contrast, ground-based studies of the extended narrow-line region (ENLR, at distances typically $`\gtrsim `$ 500 pc from the central source) can take advantage of spatially-resolved spectra; these studies find that the ionized gas in the ENLR is undergoing normal galactic rotation (Unger et al. 1987), with evidence for an additional component of outward radial motion in some cases (Whittle et al. 1988).
Despite the limited spatial resolution, recent ground-based studies suggest that the NLR of NGC 1068, the nearest bright Seyfert 2 galaxy, shows evidence for radial outflow (Cecil, Bland, & Tully 1990; Arribas, Mediavilla, and García-Lorenzo 1996), a suggestion first offered by Walker (1968). With the Hubble Space Telescope (HST) and the Space Telescope Imaging Spectrograph (STIS), we now have the ability to obtain spectra of the NLR at high spatial resolution. The importance of these observations is that we can probe the velocity field of the ionized gas close to the central continuum source, where the supermassive black hole presumably dominates the kinematics (through its gravitational influence and/or the radiation, winds, and jets emanating from its vicinity). In this letter, we use STIS long-slit spectra of the Seyfert 2 galaxy NGC 1068 to determine the kinematics of the ionized gas in its NLR. In previous papers, we used these spectra to study the extended continuum emission (Crenshaw & Kraemer 2000, hereafter Paper I) and the physical conditions in the ionized gas near the continuum “hot spot” (Kraemer & Crenshaw 2000, Paper II). We adopt a systemic redshift of cz $`=`$ 1148 km s<sup>-1</sup> from H I observations of NGC 1068 (Brinks et al. 1997) and a distance of 14.4 Mpc (Bland-Hawthorne 1997), so that 0$`^{\prime \prime }.`$1 corresponds to 7.2 pc.

## 2 Observations and Results

The observations and data reduction are described in detail in Paper I. Briefly, we obtained STIS low-dispersion spectra over the range 1150 – 10,270 Å at a spatial resolution of 0$`^{\prime \prime }.`$05 – 0$`^{\prime \prime }.`$1 and a spectral resolving power of $`\lambda `$/$`\mathrm{\Delta }\lambda `$ $`\approx `$ 1000 through a 0$`^{\prime \prime }.`$1 slit at a position angle of 202<sup>o</sup>. The slit position intercepts the brightest clouds in the inner NLR, as shown in Paper I.
Here we concentrate on the brightest emission line, \[O III\] $`\lambda `$5007, to trace the kinematics as far as possible away from the nucleus. Figure 1 presents an enlarged view of the STIS spectrum, in the region around the H$`\beta `$ and \[O III\] $`\lambda \lambda `$4959, 5007 lines, which show a considerable amount of spatial and velocity structure. The most striking aspect of these lines is that they split into two velocity components both above (SW of) and below (NE of) the spectrum of the continuum hot spot (the horizontal streak). The brightest emission-line clouds show a definite trend of increasing radial velocity with distance from the hot spot, out to an angular distance of about $`\pm `$1$`^{\prime \prime }.`$7. Most of the fainter clouds follow this trend, and show an overall decrease in radial velocity further out, until they approach the systemic velocity. There are a few exceptions to these trends, which we will discuss below. We determined radial velocities from the \[O III\] $`\lambda `$5007 emission at each pixel location along the slit (at a spacing of 0$`^{\prime \prime }.`$05). At each location, we fit the emission with a local continuum and a Gaussian for each clearly identifiable peak, resulting in 1 or 2 kinematic components. A number of the observed components, particularly near the hot spot, are asymmetric and/or very broad, and we suspect that these may split into multiple components at higher spectral resolution. In Figure 2, we plot the heliocentric radial velocities, widths (FWHM, corrected for the instrumental profile), and fluxes as a function of distance from the peak of the hot spot in our slit (which is 0$`^{\prime \prime }.`$14 north of the hot spot’s centroid in HST optical continuum images, see Paper I). Most of the radial velocities in Figure 2 follow well-defined curves; the local peaks in the radial velocity curves can be attributed in many cases to bulk motion of the emission-line knots seen in the bottom plot. 
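The per-pixel velocity measurement described above can be sketched in a few lines. The actual analysis fit a local continuum plus Gaussians; the sketch below uses a simpler flux-weighted centroid as a stand-in for a single-component fit, with an assumed rest wavelength for \[O III\] $`\lambda `$5007 and purely synthetic data:

```python
import numpy as np

C_KMS = 2.998e5     # speed of light in km/s
LAM0 = 5006.84      # assumed [O III] rest wavelength in Angstroms

def radial_velocity(lam, flux):
    """Estimate the radial velocity of a single emission component:
    subtract a local continuum (median of the spectrum edges), then
    convert the flux-weighted line centroid to a velocity in km/s."""
    cont = np.median(np.concatenate([flux[:10], flux[-10:]]))
    excess = np.clip(flux - cont, 0.0, None)
    center = np.sum(lam * excess) / np.sum(excess)
    return C_KMS * (center - LAM0) / LAM0

# synthetic single-component profile near the systemic velocity of 1148 km/s
lam = np.linspace(4990.0, 5060.0, 400)
true_center = LAM0 * (1.0 + 1148.0 / C_KMS)
flux = 1.0 + 5.0 * np.exp(-0.5 * ((lam - true_center) / 3.0) ** 2)
```

For clean, well-separated components a centroid and a Gaussian fit give essentially the same center; for the asymmetric or blended profiles near the hot spot, multi-component Gaussian fitting of the kind described in the text is required.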
The pattern of increasing velocity out to $`\sim `$1$`^{\prime \prime }.`$7 is evident, except for the blueshifted knots on the SW side, which reach a smaller maximum velocity closer in ($`\sim `$1<sup>′′</sup>). The decrease in radial velocity at greater distances is also clear, except that the redshifted knots in the SW abruptly terminate at about 2$`^{\prime \prime }.`$0. There are two knots of emission NE of the hot spot that do not conform to this pattern at all; one is highly blueshifted ($`-`$1400 km s<sup>-1</sup>) and the other is highly redshifted ($`+`$1000 km s<sup>-1</sup>) with respect to the systemic velocity. The middle plot in Figure 2 shows the large widths of the lines, which tend to decrease with distance, particularly in the SW.

## 3 A Kinematic Model: Biconical Outflow

HST images indicate a biconical geometry for the NLR in most Seyfert 2 galaxies (Schmitt & Kinney 1996), and for NGC 1068 in particular (Evans et al. 1991). The radial velocity curves in Figure 2 suggest a velocity field in which the emission-line knots accelerate out from the inner nucleus, reach a terminal velocity, and then decelerate. Thus, we favor a kinematic model of biconical outflow away from the nucleus. Similar amplitudes of the blueshifted and redshifted curves on the NE side indicate that the axis of the bicone is close to the plane of the sky, and the lack of low radial velocities where the curves peak (around $`\pm `$1$`^{\prime \prime }.`$7) suggests that the bicone is evacuated along its axis. We have generated a kinematic model of radial outflow in a bicone that is hollow along its axis. We constrain our model to be consistent with the observed morphology: Evans et al. (1991) find that the NLR in NGC 1068 can be described by a bicone with a projected half-opening angle of $`\sim `$35<sup>o</sup> and a position angle of the bicone axis on the sky of 15<sup>o</sup>.
For simplicity, we assume that the two cones have identical properties (geometry, velocity law, etc.), a filling factor of one within the defined geometry, and no absorption of \[O III\] photons (e.g., by dust). The parameters that are allowed to vary in our code are the extent of each cone along its axis (z<sub>max</sub>), its minimum and maximum half-opening angles ($`\theta `$<sub>inner</sub> and $`\theta `$<sub>outer</sub>), the inclination of its axis out of the plane of the sky (i<sub>axis</sub>), and the velocity law as a function of distance from the nucleus. For the latter, we will show that constant acceleration to a maximum velocity (v<sub>max</sub>) at a turnover radius (r<sub>t</sub>), followed by a constant deceleration to zero velocity at the greatest extent of the cone ($`=`$ z<sub>max</sub>/cos $`\theta `$<sub>outer</sub>), provide a reasonable match to the observations. Our code generates a two-dimensional velocity map and samples this map with a slit that matches the position, orientation, and width of our observational slit (Paper I); in this case, our slit is placed 0$`^{\prime \prime }.`$14 north of the hot spot centroid and rotated 7<sup>o</sup> with respect to the projected bicone axis. We then compare the simulated and observed radial velocity curves, and adjust the model parameters until a good match is obtained. Since the observed radial velocity curves have significant intrinsic scatter, we do not fine-tune the models (e.g., by choosing different turnover locations and maximum velocities for each side), but settle for an illustrative model that matches the overall trend. The parameters for our final model are given in Table 1. 
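The assumed velocity law (constant acceleration from the nucleus to v<sub>max</sub> at the turnover radius r<sub>t</sub>, then constant deceleration to zero at the edge of the cone) can be written down explicitly. The numbers below are illustrative placeholders, not the values of Table 1:

```python
import numpy as np

def outflow_speed(r, v_max=1300.0, r_t=140.0, r_max=400.0):
    """Outflow speed law of the kind assumed in the text: linear rise from
    zero to v_max at the turnover radius r_t, then linear decline to zero
    at r_max.  Units: km/s and pc; all values illustrative."""
    r = np.asarray(r, dtype=float)
    return np.where(r <= r_t,
                    v_max * r / r_t,
                    v_max * np.clip((r_max - r) / (r_max - r_t), 0.0, None))
```

In the full model, each point on the hollow cone wall carries this speed along the radial direction, and the observed radial velocity is the line-of-sight projection sampled through the synthetic slit, as described above.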
Figure 3 shows the envelope of radial velocities from the model, compared to the observed radial velocities; the width of the envelope is determined by the range in half-opening angle ($`\theta `$<sub>inner</sub> to $`\theta `$<sub>outer</sub>), and the relative amplitudes of the blueshifted and redshifted curves are determined by the inclination angle (i<sub>axis</sub>). In three quadrants of the plot in Figure 3, the overall trend of observed radial velocities is well matched by the model.<sup>1</sup> (<sup>1</sup>Note that we have assumed that the origin of the outflow is the continuum hot spot; if the origin is at the S1 radio component (Gallimore, Baum, & O’Dea 1996), which is 0$`^{\prime \prime }.`$17 S of the hot spot (Capetti, Macchetto, & Lattanzi 1997), we find that only minor adjustments are needed in the half-opening angles and turnover radius to match the observed trend.) For the blueshifted emission on the SW side, the radial velocities reach a smaller maximum closer in (at $`\sim `$1<sup>′′</sup>), and show a slight trend towards deceleration further out. As mentioned previously, there are two emission-line knots at very high velocities that do not fit this model at all. The presence of additional points outside of the envelope indicates that there may be a few emission-line knots close to the axis, or knots that undergo slightly different accelerations or decelerations. Thus, although this model is simplistic, we adopt it as a tool for interpreting the velocity field, and discuss ways in which the discrepancies can be accommodated.

## 4 Discussion

### 4.1 Comparison with Ground-based Observations

Cecil et al. (1990) provide the most comprehensive set of velocity maps for the NLR of NGC 1068, based on Fabry-Perot observations of the \[N II\] lines at $`\sim `$1<sup>′′</sup> spatial and 140 km s<sup>-1</sup> spectral resolutions; these authors conclude that their data can be explained by cylindrical or biconical outflow.
Given the limited spatial resolution, their observations are in agreement with ours and are consistent with our model. Their line profiles along the STIS slit position show a single broad component at the location of the hot spot and double-peaked profiles, separated by $`\sim `$1000 km s<sup>-1</sup>, at a distance of $`\sim `$2<sup>′′</sup> from the hot spot, in agreement with our observations (Figure 3). Outside of the STIS slit position, their profiles show the same double-peaked structure, with the largest velocity separation at $`\sim `$2<sup>′′</sup> on either side of the hot spot, which is consistent with our model prediction that the highest blueshifts and redshifts should occur at this distance. The higher spatial resolution of the STIS observations allows us to see the acceleration of clouds outward from the nucleus, followed by a clear deceleration of the clouds. We have not applied our model to ground-based observations at distances $`>`$ 6<sup>′′</sup>, where galactic rotation begins to play a much larger role in the kinematics (Baldwin, Whittle, & Wilson 1987).

### 4.2 Comparison with Other Models

Two other kinematic models of the NLR have been proposed on the basis of spectra at high spatial resolution, which were obtained with HST’s Faint Object Camera (FOC). Axon et al. (1998) suggest a model for NGC 1068 in which the gas expands outward from the radio jet (which is nearly coincident with the axis of the ionization cone). Winge et al. (1999) propose a rotating disk model for the NLR in the Seyfert 1 galaxy NGC 4151. To test the Axon et al. (1998) suggestion for NGC 1068, we generated a model with the same parameters as in Table 1, except that the velocity vectors are directed perpendicular to the radio axis. In this case, we find that the envelope of radial velocities is nearly the same as in Figure 3, due to the small inclination angle. However, we have two concerns about this model.
First, the observed radial velocities follow a well-organized flow pattern, and we can discern no correlation with the clumpy radio structure in the NLR (Gallimore et al. 1996). Second, this model cannot explain the kinematics of the NLR in the Seyfert 1 galaxy NGC 4151, where the gas is blueshifted in one cone and redshifted in the other (Winge et al. 1999; Nelson et al. 2000, and references therein), whereas motions perpendicular to the axis produce equal blueshifts and redshifts in each cone, regardless of the inclination of the axis. We note that our radial outflow model for NGC 1068 provides a slightly better fit than the Axon et al. model, because a small tilt of the axis (5<sup>o</sup>) can match the slightly different amplitudes of the observed blueshifted and redshifted curves. Furthermore, by tilting the cone axis towards the line of sight, this model can explain the observed radial velocities in NGC 4151 (see Crenshaw et al. 2000). Winge et al.’s (1999) rotating disk model for NGC 4151 is only used to match their low velocity component (within 300 km s<sup>-1</sup> of systemic). Even so, they require an extended (and otherwise undetected) distribution of matter within 0$`^{\prime \prime }.`$1 (64 pc) of the nucleus with a mass on the order of 10<sup>9</sup> M<sub>⊙</sub>, in addition to a central point-source mass of 10<sup>7</sup> M<sub>⊙</sub>. A rotation model can be ruled out for NGC 1068, because high redshifts and blueshifts are seen on either side of the nucleus. Other gravitational models can also be ruled out as the principal source of the velocities in NGC 1068, because the required mass is prohibitive. At the peaks of the velocity curves (ignoring the two knots with very high velocities), the projected distance from the nucleus is $`\sim `$120 pc, the projected velocity is $`\sim `$850 km s<sup>-1</sup>, and the required mass is $`\sim `$ 10<sup>10</sup> M<sub>⊙</sub> (a lower limit because of projection effects).
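The quoted mass follows from the virial-style estimate M ≈ v²r/G; a one-line check with the stated projected values:

```python
# Order-of-magnitude check of the enclosed mass needed to bind gas moving at
# v ~ 850 km/s at r ~ 120 pc, using M = v^2 r / G as in the text.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
PC = 3.086e16        # one parsec in meters
M_SUN = 1.989e30     # solar mass in kg

def keplerian_mass(v_kms, r_pc):
    """Enclosed mass (in solar masses) implied by speed v at radius r;
    a lower limit here because the observed values are projected."""
    v = v_kms * 1e3
    r = r_pc * PC
    return v**2 * r / G / M_SUN

mass = keplerian_mass(850.0, 120.0)   # ~2e10 solar masses
```

The result is about 2 x 10<sup>10</sup> M<sub>⊙</sub>, consistent with the order-of-magnitude figure quoted above.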
By comparison, the dynamical mass from stars within a radius of $`\sim `$1<sup>′′</sup> from the nucleus of NGC 1068 is only 6 x 10<sup>8</sup> M<sub>⊙</sub> (Thatte et al. 1997). Thus, radial outflow provides the simplest and best explanation of the observed velocities in these two Seyfert galaxies.

### 4.3 Implications of our Model

Our kinematic model assumes constant acceleration of clouds in the inner NLR ($`<`$ 140 pc), and constant deceleration further out; this assumption provides a reasonable match to the observations, although more complicated velocity laws as a function of distance are possible, given the intrinsic scatter in the observed points. Nevertheless, these results favor dynamical models that invoke radiation and/or wind pressure to drive clouds out from the nucleus. The deceleration of the clouds further out is not primarily due to gravity, for the reasons given above: an unreasonably high mass ($`\sim `$10<sup>10</sup> M<sub>⊙</sub>) would be required to slow the clouds down. The simplest explanation for the deceleration is that the clouds experience a drag force, presumably due to interaction with ambient material at $`\sim `$140 pc from the nucleus. A possible explanation for the blueshifted clouds on the SW side is that they run into ambient material that is closer to the nucleus ($`\sim `$80 pc). In this picture, the two high velocity clumps in Figure 3 represent clouds that have not experienced a drag force in the direction they are traveling, which suggests that there are holes in the surrounding medium. We note that gravity may eventually play a role in the kinematics. In our model, the axis of the outflow is inclined by $`\sim `$45<sup>o</sup> with respect to the galactic disk (inclination $`=`$ 40<sup>o</sup>, major axis at position angle 106<sup>o</sup>, see Bland-Hawthorne et al. 1997), and as the NLR clouds slow down, they may be pulled back to the disk, possibly joining the existing ENLR gas.
In conclusion, we find that a biconical outflow model, with evidence for acceleration close to the nucleus and deceleration further out, provides a reasonable explanation for the radial velocities in our long-slit spectrum of NGC 1068. STIS long-slit observations of NGC 1068 at higher spectral resolution and at different slit positions will be helpful in resolving velocity components and mapping out the two-dimensional velocity field. STIS observations of other Seyferts will help test the utility of this model. This work was supported by NASA Guaranteed Time Observer funding to the STIS Science Team under NASA grant NAG 5-4103.
February 2000

# Reply to A. Patrascioiu’s and E. Seiler’s comment on our paper “Percolation properties of the 2D Heisenberg model”

B. Allés<sup>a</sup>, J. J. Alonso<sup>b</sup>, C. Criado<sup>b</sup>, M. Pepe<sup>c</sup>

<sup>a</sup>Dipartimento di Fisica, Università di Milano-Bicocca and INFN Sezione di Milano, Milano, Italy
<sup>b</sup>Departamento de Física Aplicada I, Facultad de Ciencias, 29071 Málaga, Spain
<sup>c</sup>Institut für Theoretische Physik, ETH–Hönggerberg, CH–8093 Zürich, Switzerland

Most of the problems raised by the authors of the comment about Ref. are based on claims which have not been written in ; for instance, almost all of the introduction and point (1) in are based on such non-existent claims. In Ref. we instead avoid making claims not based on well-founded results. For instance, in the abstract we write “… This result indicates how the model can avoid a previously conjectured Kosterlitz–Thouless \[KT\] phase transition…” and in the conclusive part we notice that “Our results exclude this massless phase for $`T>0.5`$”. Therefore it seems to us that the opening sentence of the Comment, “In a recent letter Allés et al. claim to show that the two dimensional classical Heisenberg model does not have a massless phase.”, is strongly inadequate. As for the points that appear in the Comment:

* (1) The purpose of the paper is to fill a gap in the research on the critical properties of the Heisenberg model. This gap is the following: in Ref. a scenario was proposed in which the 2D Heisenberg model should undergo a KT phase transition at a finite temperature. This scenario is based mainly on three hypotheses, the third one (which states the non-percolation of the $`𝒮`$-type or equatorial clusters) being left in without a plausible justification. To back up that hypothesis a numerical test was cited in , but the details of the numerics (temperature, size of the lattice, etc.)
and several data concerning the percolation properties of the system were completely skipped. The only quoted result was (see the beginning of section 4 in ) “We also tested numerically for $`ϵ=1/3`$,… There is no indication of percolation…”. On the contrary, such interesting results about the critical properties should be put forward with a thorough description of the hypotheses involved. Moreover, one would like to understand how it was possible to use the small value of $`ϵ`$ mentioned in Ref. , because that value implies a really tiny temperature $`T`$ and consequently requires a huge lattice size. If “Everybody agrees that at $`\beta =2.0`$ the standard action model has a finite correlation length”, see , then everybody would also like to know the details of the numerics and of the computer used to simulate the model at such a small temperature.

* (2) There is a statement in which is repeated several times: all results are valid for any versor $`\vec{n}`$ of the internal symmetry space $`O(3)`$. In particular, a percolating equatorial cluster is found for every $`\vec{n}`$. Under these conditions, we do not see how the percolation of the equatorial cluster may lead to a breaking of the $`O(3)`$ symmetry. On the other hand, the fractal properties of a cluster are very sensitive to the choice of parameters. By varying $`ϵ`$ around the value $`ϵ=1`$ (for $`T=0.5`$), one can make the data for $`M_𝒮/L^2`$ in Table 1 of change rather dramatically. It is important (even in a high-temperature regime like $`T=0.5`$) to study this dependence. It is sensible to expect that the fractal properties of the cluster show up at the threshold of percolation. Again, in we do not claim that the cluster is a fractal, but just write “… \[the equatorial clusters\] present a high degree of roughness recalling a fractal structure”. To state any firmer claim, a deep analysis of the errors and better statistics in Table 1 should be performed.
All these problems are currently under investigation.

* (3) It is true that not all flimsy clusters can avoid a KT transition. However, this trivial truth proves nothing. Other kinds of lattices can hold versions of the $`XY`$ model with no transition (see for instance ). On the other hand, the statement “… there should be no doubt that on such a lattice \[square holes of side length $`L`$\] the $`O(2)`$ model has a KT phase transition for any finite $`L`$” is surprising. In Ref. it is shown that for any finite $`L`$ the KT transition is still present, but it approaches $`T=0`$ as $`L`$ becomes larger. The idea of a fractal as the limit of some kind of cluster should not be forgotten.

* (4) We agree with one of the sentences of this point: “It would be interesting to verify this \[the existence of a KT transition for $`XY`$ models on a fractal lattice\]”. Yet we do not see the relevance of such an obvious claim. We disagree, however, with the authors of when they say “our argument does not depend on the existence of such a transition on that particular percolating cluster”. Instead, after the conclusions of Ref. , we think that the non-rigorous proof proposed in for the case when the equatorial cluster does percolate depends heavily on whether or not such a transition is realized.
# Gravitational waves in preheating

## 1. Introduction

Gravitational waves in inflationary cosmology are produced by sub-Hubble scale vacuum fluctuations. They are stretched to super-Hubble scales by inflationary expansion, and then they generate anisotropies in the cosmic microwave background (CMB) (see, e.g., Grishchuk 1975, Mukhanov et al. 1992). Scales which re-enter before matter-radiation equality will rapidly redshift (and suffer small damping during recombination), having a negligible effect on CMB temperature anisotropies, while scales which re-enter later can affect the temperature anisotropies at large angles. Roughly speaking, we can approximate the impact of gravitational waves upon CMB temperature anisotropies by neglecting all local causal dynamics and treating the large-scale fluctuations as frozen beyond the Hubble scale until re-entry, where their amplitude is conserved and determined by the inflationary Hubble rate at the time of Hubble-crossing. This picture is confirmed by the evolution equation (Lifshitz 1946)

$$f_{ij}^{\prime \prime }+2\frac{a^{\prime }}{a}f_{ij}^{\prime }+k^2f_{ij}=0$$ (1)

for the Fourier modes of the transverse traceless tensor perturbation $`f_{ij}`$, defined (on a flat background) by

$$ds^2=a^2(\eta )\left[-d\eta ^2+\left(\delta _{ij}+f_{ij}\right)dx^idx^j\right].$$ (2)

Equation (1) shows that on very large scales, $`k/aH\ll 1`$ (where $`H=a^{\prime }/a^2`$),

$$f_{ij}\approx A_{ij}+B_{ij}\int \frac{d\eta }{a^2},$$ (3)

where $`A_{ij}^{\prime }=0=B_{ij}^{\prime }`$. If $`a^{\prime }>0`$ then the $`B`$-mode is decaying, and we have $`f_{ij}\approx `$ constant, with the constant determined by the Hubble rate at inflationary Hubble-crossing, $`H(\eta _c)=k/a(\eta _c)`$. This simple picture is modified by small corrections induced at preheating. At the end of slow-roll inflation, the inflaton oscillates and transfers its energy to fluctuations, initiating the reheating era, which ends when the created particles thermalize as a radiative plasma.
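The freezing of super-Hubble modes and the decay of sub-Hubble modes implied by Eqs. (1)–(3) can be checked with a few lines of numerics. The sketch below integrates one tensor Fourier mode in a radiation-dominated background, a(η) ∝ η (the stepper, integration range, and values of k are illustrative choices, not taken from the paper):

```python
import numpy as np

def evolve_mode(k, eta0=1.0, eta1=2.0, n=20000):
    """Integrate f'' + 2(a'/a) f' + k^2 f = 0 for one tensor Fourier mode
    in a radiation era with a proportional to eta (so a'/a = 1/eta),
    starting from the frozen initial condition f = 1, f' = 0.
    Simple semi-implicit Euler stepper; a sketch only."""
    eta = np.linspace(eta0, eta1, n)
    d = eta[1] - eta[0]
    f, fp = 1.0, 0.0
    for e in eta[:-1]:
        fpp = -2.0 / e * fp - k**2 * f
        fp += fpp * d       # update velocity first (keeps oscillations stable)
        f += fp * d
    return f
```

For k η ≪ 1 the mode stays at its initial value (the frozen A-mode), while a sub-Hubble mode oscillates with an amplitude decaying as 1/a, in line with the discussion of Eq. (3).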
Reheating is often initiated by a preheating era, marked by coherent inflaton oscillations which drive parametric resonant amplification of scalar fluctuations (see, e.g., Traschen and Brandenberger 1990, Kofman et al. 1994, 1997). One crucial point about preheating is that, since the inflaton is coherent on scales well beyond the Hubble horizon, it is in principle possible for super-Hubble fluctuations to be amplified without violating causality (Bassett et al. 1999a,b). (It is also possible for small-scale gravitational waves to be generated by gravitational bremsstrahlung via rescattering of scalar fluctuations during preheating (Khlebnikov and Tkachev 1997); this is a particular example of the generation of tensor perturbations by scalar perturbations at second order (Matarrese et al. 1998).) In the case of scalar fluctuations of the metric, this can in principle produce nonlinear amplification, depending on initial conditions and coupling strengths (Bassett et al. 1999a,b,c, 2000, Ivanov 2000, Jedamzik and Sigl 2000, Liddle et al. 2000). While tensor fluctuations will not be strongly amplified by scalar inflaton oscillations, these oscillations nevertheless could leave an imprint on the tensor spectrum on large scales. Such a possibility is usually ruled out on causality grounds, but no such causality constraint operates during coherent oscillations of the inflaton condensate. These small corrections will be carried into Eq. (1) via the scale factor, which inherits an oscillatory addition to its average value. However, the nature of the effect is more clearly brought out via an alternative description of gravitational waves, based on the idea that a full characterization requires the curvature tensor, not the metric (Pirani 1957).
Transverse traceless modes are given by the divergence-free electric and magnetic parts of the Weyl tensor, $`E_{\mu \nu }=C_{\mu \alpha \nu \beta }u^\alpha u^\beta `$ and $`H_{\mu \nu }={}^{*}C_{\mu \alpha \nu \beta }u^\alpha u^\beta `$, where $`u^\mu `$ is the background four-velocity (there are no velocity perturbations for tensor modes). This covariant description of gravitational waves was used by Hawking (1966), and is remarkably analogous to electromagnetic radiation theory (Dunsby et al. 1997, Maartens and Bassett 1998). In this paper, we use the Maxwell-Weyl approach to gravitational waves to investigate the effects of inflaton oscillations in some simple preheating models, generalizing previous work in Minkowski spacetime (Bassett 1997). In Sections 2 and 3, we give the basic equations and their qualitative properties. In Section 4 we present the numerical calculations, and Section 5 contains concluding remarks and discussion.

## 2. Background dynamical equations

The background inflaton is governed by the Klein-Gordon equation

$$\ddot{\phi }+3H\dot{\phi }+V_\phi =0,$$ (4)

where $`V_\phi =\partial V/\partial \phi `$. We will consider the simple chaotic inflation potentials $`V=\frac{1}{2}m^2\phi ^2`$ and $`V=\frac{1}{4}\lambda \phi ^4`$. Although the resonance in scalar fluctuations is dramatically increased when the inflaton is coupled to other fields, for example via the additional potential term $`\frac{1}{2}g^2\phi ^2\chi ^2`$, this does not seem to have an effect on tensor fluctuations (Tilley 2000). Thus we will confine ourselves to simplified single-field models of preheating. Preheating in more realistic models ends when backreaction effects of the fluctuations destroy the coherence of the inflaton oscillations. In our simplified models, backreaction effects are not incorporated, but we can use the results from detailed investigations to estimate the time that preheating lasts (see, e.g., Kofman et al. 1997).
The Hubble rate is determined by the Friedmann equation

$$H^2=\frac{1}{3}\kappa ^2\left[\frac{1}{2}\dot{\phi }^2+V(\phi )\right],$$ (5)

where $`\kappa ^2=8\pi /M_p^2`$. Equations (4) and (5) imply $`\dot{H}=-\frac{1}{2}\kappa ^2\dot{\phi }^2`$. The energy density and effective pressure of the inflaton are

$$\rho =\kappa ^2\left(\frac{1}{2}\dot{\phi }^2+V\right),p=\kappa ^2\left(\frac{1}{2}\dot{\phi }^2-V\right).$$ (6)

During slow-roll inflation, the coupled equations (4) and (5) have approximate analytic solutions for the simple potentials. When the value of $`\phi `$ drops low enough, slow-roll ends and the oscillatory regime begins. The approximate analytic forms are (see, e.g., Kaiser 1997, Kofman et al. 1997)

$$\phi \approx \phi _{in}\frac{\mathrm{sin}\tau }{\tau }\text{where}\tau =m(t-t_{in})\text{and}V=\frac{1}{2}m^2\phi ^2,$$ (7)

$$\phi \approx \frac{\phi _{in}}{a}\text{cn}(\tau ,\frac{1}{\sqrt{2}})\text{where}\tau =\sqrt{\lambda }\phi _{in}(\eta -\eta _{in})\text{and}V=\frac{1}{4}\lambda \phi ^4,$$ (8)

where $`a_{in}=1`$ and cn is a Jacobian elliptic function. The initial values of $`\phi _{in}`$ are $`0.3M_p`$ in the quadratic case, and $`0.6M_p`$ in the quartic case. Time-averaging over the oscillations shows that $`\overline{a}\propto t^{2/3}`$ for the quadratic potential, and $`\overline{a}\propto t^{1/2}`$ for the quartic potential. These are the asymptotic forms of the scale factor, but in practice backreaction effects due to couplings will end the preheating oscillations. If one uses the averaged forms $`\overline{a}`$ for $`a`$, i.e., if one ignores the oscillatory behaviour of the inflaton, then one regains the standard results for gravitational wave evolution in the dust and radiation eras. In particular, models with a smooth transition from inflation to radiation-domination, neglecting reheating dynamics, show that there is no super-Hubble amplification of gravitational waves (e.g.
Caldwell 1996, Tilley and Maartens 1998). Our numerical integrations show that this averaging loses interesting features in the gravitational waves (see Section 3). We do not use the approximate forms in Eqs. (7) and (8) for our numerical results—instead, we integrate the Friedmann and Klein-Gordon equations numerically. ## 3. Maxwell-Weyl gravitational wave equations Gravitational wave perturbations in the covariant approach are governed by the Maxwell-Weyl equations: $`\left(\text{div}E\right)_\mu =0=\left(\text{div}H\right)_\mu ,`$ $`\dot{E}_{\mu \nu }+3HE_{\mu \nu }-\text{curl}H_{\mu \nu }=-\frac{1}{2}(\rho +p)\sigma _{\mu \nu },`$ $`\dot{H}_{\mu \nu }+3HH_{\mu \nu }+\text{curl}E_{\mu \nu }=0,`$ where $`\sigma _{\mu \nu }`$ is the shear, a dot denotes $`u^\mu \nabla _\mu `$, and div and curl are the covariant spatial divergence and curl for tensors (Maartens and Bassett 1998). These equations hold for perfect fluids and minimally coupled scalar fields (in which case $`\rho +p=\kappa ^2\dot{\phi }^2`$). The shear is a tensor potential for the electric and magnetic Weyl tensors (Maartens and Bassett 1998): $`E_{\mu \nu }`$ $`=`$ $`-\dot{\sigma }_{\mu \nu }-2H\sigma _{\mu \nu },`$ (9) $`H_{\mu \nu }`$ $`=`$ $`\text{curl}\sigma _{\mu \nu },`$ (10) in close analogy with the Maxwell relations $`\stackrel{}{E}=-\dot{\stackrel{}{A}}`$ and $`\stackrel{}{H}=\text{curl}\stackrel{}{A}`$. Taking the curl and dot of the Maxwell-Weyl propagation equations, and using the identity for the tensor curl of the curl, we find wave equations for the three tensors (Dunsby et al. 1997, Maartens and Bassett 1998, Challinor 2000).
Using equations (4) and (5), these become $`\ddot{E}_{\mu \nu }+7H\dot{E}_{\mu \nu }+4\kappa ^2V(\phi )E_{\mu \nu }`$ $`=`$ $`\mathrm{\Delta }E_{\mu \nu }+\kappa ^2\dot{\phi }\left[5H\dot{\phi }+V^{\prime }(\phi )\right]\sigma _{\mu \nu },`$ (11) $`\ddot{H}_{\mu \nu }+7H\dot{H}_{\mu \nu }+4\kappa ^2V(\phi )H_{\mu \nu }`$ $`=`$ $`\mathrm{\Delta }H_{\mu \nu },`$ (12) where $`\mathrm{\Delta }`$ is the covariant spatial Laplacian. One can also derive a wave equation for the shear: $$\ddot{\sigma }_{\mu \nu }+5H\dot{\sigma }_{\mu \nu }+\kappa ^2\left[2V(\phi )-\frac{1}{2}\dot{\phi }^2\right]\sigma _{\mu \nu }=\mathrm{\Delta }\sigma _{\mu \nu }.$$ (13) We decompose the tensors into modes (Challinor 2000) $`E_{\mu \nu }`$ $`=`$ $`a^{-2}{\displaystyle \sum _kk^2\left[ℰ_kQ_{\mu \nu }^{(k)}+\stackrel{~}{ℰ}_k\stackrel{~}{Q}_{\mu \nu }^{(k)}\right]},`$ (14) $`H_{\mu \nu }`$ $`=`$ $`a^{-2}{\displaystyle \sum _kk^2\left[ℋ_kQ_{\mu \nu }^{(k)}+\stackrel{~}{ℋ}_k\stackrel{~}{Q}_{\mu \nu }^{(k)}\right]},`$ (15) $`\sigma _{\mu \nu }`$ $`=`$ $`a^{-1}{\displaystyle \sum _kk\left[𝒮_kQ_{\mu \nu }^{(k)}+\stackrel{~}{𝒮}_k\stackrel{~}{Q}_{\mu \nu }^{(k)}\right]},`$ (16) where $`\sum _k`$ denotes a symbolic sum over harmonic modes, and $`Q`$, $`\stackrel{~}{Q}`$ are tensor harmonics of electric and magnetic parity, which are time-independent, transverse traceless, and related by $`\text{curl}Q_{\mu \nu }^{(k)}={\displaystyle \frac{k}{a}}\stackrel{~}{Q}_{\mu \nu }^{(k)},`$ showing that the different polarization states are coupled. The mode functions $`ℰ_k`$ determine the tensor contribution to the CMB power spectrum.
The magnetic Weyl mode functions are algebraically related to the shear mode functions by $$ℋ_k=-\stackrel{~}{𝒮}_k.$$ (17) Specializing the results in Challinor (2000) to the scalar field case, we find the evolution equations $`\dot{ℰ}_k+3Hℰ_k-{\displaystyle \frac{a}{k}}\left({\displaystyle \frac{k^2}{a^2}}-\frac{1}{2}\kappa ^2\dot{\phi }^2\right)𝒮_k`$ $`=`$ $`0,`$ (18) $`\dot{𝒮}_k+H𝒮_k+{\displaystyle \frac{k}{a}}ℰ_k`$ $`=`$ $`0.`$ (19) For comparison, in the time-averaged approximation, the $`\dot{\phi }^2`$ term in the coefficient of $`𝒮_k`$ in Eq. (18) is replaced by $`\gamma \rho `$, where $`\rho =\rho _{\mathrm{in}}a^{-3\gamma }`$, with $`\gamma =1`$ for the averaged quadratic case (i.e. cold matter or ‘dust’), and $`\gamma =\frac{4}{3}`$ for the averaged quartic case (i.e. radiation). ## 4. Numerical results Equations (17)–(19) for the gravitational wave perturbations, and (4)–(5) for the background are integrated numerically, and the results are compared with those for the time-averaged approximation. We investigated the evolution of COBE modes, which left the Hubble radius at about 50 to 60 e-folds before the end of inflation, so that $`k/aH\sim 10^{-24}`$ at the start of preheating, $`t=t_{in}`$. We also integrated for a typical small scale mode, with $`k/aH\sim 10`$ at $`t=t_{in}`$. The initial values for $`\phi `$ were given in the previous section. We used the slow-roll relation to set the initial inflaton velocity, $`\dot{\phi }_{in}=-M_pV_\phi /\sqrt{24\pi V}`$. For the tensor modes, we used the initial value $`10^{-5}`$. The inflaton mass and self-coupling were chosen as $`m=10^{-6}M_p`$ and $`\lambda =10^{-12}`$. In order to take account of backreaction effects, which will end the coherent inflaton oscillations, we terminated the integrations after $`\tau =100`$. The results are shown in Figs. 1–8. ## 5.
Conclusions Apart from specific features peculiar to the potential, these figures reflect two key properties: (1) Super-Hubble gravitational waves are not significantly amplified by coherent inflaton oscillations during preheating; (2) gravitational waves on all scales carry some imprint of the coherent oscillatory dynamics of the inflaton during preheating. The latter point is clearly brought out by a comparison with the evolution in the absence of oscillations, i.e. for the time-averaged scale factor. In particular, one can see that the electric Weyl modes on super-Hubble scales, which determine the effect of gravitational waves on the CMB (Challinor 2000), inherit oscillations from the inflaton. In the early part of preheating, there is also some amplification on average of $`ℰ_k`$, so that if backreaction takes effect early, then this does produce a small amplification relative to the no-oscillation time-averaged model. However, one cannot expect these preheating imprints on $`ℰ_k`$ to lead to detectable differences in the CMB power spectrum. Firstly, the oscillations will effectively be averaged out, and secondly, any amplification is likely to be scale invariant for all measurable anisotropies, since the Hubble scale at preheating corresponds to about 1 metre today, so that all cosmologically significant scales at preheating effectively have $`k=0`$, and behave like the particular scale chosen in our numerical integrations (Bassett et al. 2000, Jedamzik and Sigl 2000). Gravitational waves on sub-Hubble scales oscillate even in the case of a time-averaged background, but Figs. 3, 4, 7 and 8 show that the frequency and amplitude of oscillation are significantly modulated by the inflaton oscillations. Thus preheating leaves an imprint on these scales. In principle, this could be detected, but in practice the signal will be far too weak, since there is no preheating amplification on these scales.
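The sub-Hubble behaviour described here — oscillation with no net amplification, modulated by the inflaton — can be illustrated by integrating the background together with the mode equations (17)–(19). The sketch below uses the quadratic potential in units m = M_p = 1; the signs of the mode equations are our reading of the (partly garbled) displayed formulas, and the initial mode amplitudes (10^-5 for both mode functions) and the step size are illustrative assumptions, not the paper's exact setup.

```python
import math

# Rough illustration: evolve the electric Weyl and shear mode functions
# E_k, S_k through the oscillatory phase, for a sub-Hubble mode with
# k/aH = 10 at the start of preheating (quadratic potential, m = M_p = 1).

KAPPA2 = 8.0 * math.pi

def rhs(y, k):
    phi, dphi, a, E, S = y
    H = math.sqrt(KAPPA2 / 3.0 * (0.5 * dphi**2 + 0.5 * phi**2))
    # our reading of Eqs. (18)-(19)
    dE = -3.0 * H * E + (a / k) * (k**2 / a**2 - 0.5 * KAPPA2 * dphi**2) * S
    dS = -H * S - (k / a) * E
    return (dphi, -3.0 * H * dphi - phi, a * H, dE, dS)

def rk4_step(y, k, dt):
    k1 = rhs(y, k)
    k2 = rhs(tuple(yi + 0.5*dt*ki for yi, ki in zip(y, k1)), k)
    k3 = rhs(tuple(yi + 0.5*dt*ki for yi, ki in zip(y, k2)), k)
    k4 = rhs(tuple(yi + dt*ki for yi, ki in zip(y, k3)), k)
    return tuple(yi + dt/6.0*(p + 2*q + 2*r + s)
                 for yi, p, q, r, s in zip(y, k1, k2, k3, k4))

phi0, dphi0 = 0.3, -1.0 / math.sqrt(12.0 * math.pi)
H0 = math.sqrt(KAPPA2 / 3.0 * (0.5 * dphi0**2 + 0.5 * phi0**2))
k = 10.0 * H0                       # k/aH = 10 at a = 1
y = (phi0, dphi0, 1.0, 1.0e-5, 1.0e-5)
dt, steps, flips = 0.005, 20000, 0  # integrate to tau = 100
for _ in range(steps):
    prev = y[3]
    y = rk4_step(y, k, dt)
    if prev * y[3] < 0.0:
        flips += 1                  # oscillations of the electric Weyl mode

print(flips, abs(y[3]))  # the mode oscillates and redshifts: no net growth
```

Consistent with point (1) above, the mode amplitude never exceeds its initial value in this sketch; the inflaton-induced modulation shows up in the instantaneous oscillation frequency.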
The absence of amplification is fundamentally due to the expansion of the universe, since there is strong amplification of gravitational waves during preheating on a Minkowski background (Bassett 1997, Tilley 2000). Acknowledgement: We thank Bruce Bassett for valuable discussions and comments.
no-problem/0002/astro-ph0002355.html
ar5iv
text
# Relativistic flows in blazars ## 1. Introduction We believe that the continuum radiation we see from blazars comes from the transformation of bulk kinetic energy, and possibly Poynting flux, into random energy of particles, which quickly produce beamed emission through the synchrotron and the inverse Compton process. This is analogous to what we believe is happening in gamma–ray bursts, although the bulk Lorentz factor of their flow is initially larger. Evidence for bulk motion in blazars, with Lorentz factors between 5 and 20, has accumulated over the years, especially through the monitoring of superluminally moving blobs on the VLBI scale (Vermeulen & Cohen 1994), and, more recently, through the detection of very large variable powers emitted above 100 MeV (see the third EGRET catalogue, Hartman et al., 1999), which require beaming for the source to be transparent to photon–photon absorption (e.g. Dondi & Ghisellini, 1995). The explanation of intraday variations of the radio flux, leading to brightness temperatures in excess of $`T_\mathrm{B}=10^{18}`$ K (much exceeding the Compton limit), is instead still controversial (Wagner & Witzel 1995). Interstellar scintillation is surely involved, but it can work only if the angular diameter of the variable sources is so small that it nevertheless leads to $`T_\mathrm{B}=10^{15}`$ K, which requires either a coherent process to be at work (e.g. Benford & Lesch 1998) or a Doppler factor of the order of a thousand. Another controversial issue is the matter content of jets. We still do not know if they are dominated by electron–positron pairs or by normal electron–proton plasma (see the reviews by Celotti, 1997, 1998).
Part of our ignorance comes from the difficulty of estimating intrinsic quantities, such as the magnetic field and the particle densities, from the observed flux, which is strongly modified by the effects of relativistic aberration, time contraction and blueshift, all dependent on the unknown plasma bulk velocity and viewing angle. Furthermore, it is now clear (especially thanks to multiwavelength campaigns) that the blazar phenomenon is complex. On the optimistic side, we have for the first time complete information on the blazar energy output, after the discovery of their $`\gamma `$–ray emission, and some hints on the acceleration process, through the behaviour of flux variability detected simultaneously in different bands (see the review by Ulrich, Maraschi & Urry 1996). Also, blazar research can now take advantage of the explosion of studies regarding gamma–ray bursts, which face the same problem of how to transform ordered into random energy to produce beamed radiation (for reviews: Piran 1999; Meszaros 1999). ## 2. Accretion = Rotation? Although the prediction that jets carry plasma in relativistic motion dates back to 1966 (Rees, 1966), and despite intense studies over the last 20 years (Begelman, Blandford & Rees, 1984), quantitative estimates of the amount of power transported in jets have been made only relatively recently, following new observational results. One important point is that the extended (or lobe) radio emission of radiogalaxies and quasars traces the energy content of the emitting region. Through minimum energy arguments and estimates of the lobe lifetime by spectral aging of the observed synchrotron emission and/or by dynamical arguments, Rawlings & Saunders (1991) found a nice correlation between the average power that must be supplied to the lobes and the power emitted by the narrow line region.
Although one always expects some correlation between powers (they both scale with the square of the luminosity distance), it is the ratio of the two quantities that is interesting, being of order 100. Since we also know that, on average, the total luminosity in narrow lines is of the order of one per cent of the ionizing luminosity, we have the remarkable indication that the power carried by the jet (supplying the extended regions of the radio–source) and the power produced by the accretion disk (illuminating the narrow line clouds) are of the same order. Celotti, Padovani and Ghisellini (1997) later confirmed this by calculating the kinetic power of the jet at the VLBI scale (see Celotti & Fabian 1993) and the broad line luminosity (assumed to reprocess $`10\%`$ of the ionizing luminosity). A possible explanation involves the magnetic field being responsible for both the extraction of spin energy of a rotating black hole and the extraction of gravitational energy of the accreting matter. Assume in fact that the main mechanism to power the jet is the Blandford–Znajek (1977) process: $$L_{\mathrm{jet}}\sim \left(\frac{a}{m}\right)^2U_\mathrm{B}(3R_\mathrm{s})^2c$$ (1) where $`(a/m)`$ is the specific black hole angular momentum ($`1`$ for a maximally rotating Kerr hole), $`U_\mathrm{B}`$ is the magnetic energy density and $`R_\mathrm{s}`$ is the Schwarzschild radius. Note that Eq. 1 has the form of a Poynting flux. Assume now that most of the luminosity of the accretion disk is produced at $`3R_\mathrm{s}`$. The corresponding radiation energy density is then $`U_\mathrm{r}=L_{\mathrm{disk}}/(36\pi R_\mathrm{s}^2c)`$, leading to $$L_{\mathrm{disk}}=U_\mathrm{r}(3R_\mathrm{s})^2c$$ (2) Therefore a magnetic field in equipartition with the radiation energy density of the disk would lead to $`L_{\mathrm{jet}}\sim L_{\mathrm{disk}}`$. ## 3.
Mass outflowing rate We can estimate the ratio of the outflowing (in the jet) to the inflowing mass rate, since $$L_{\mathrm{disk}}=\eta \dot{M}_{\mathrm{in}}c^2;L_{\mathrm{jet}}=\mathrm{\Gamma }\dot{M}_{\mathrm{out}}c^2;\dot{M}_{\mathrm{out}}=\frac{\eta }{\mathrm{\Gamma }}\frac{L_{\mathrm{jet}}}{L_{\mathrm{disk}}}\dot{M}_{\mathrm{in}}$$ (3) If jets carry as much energy as the accretion disk produces, we then obtain that the mass outflow rate is $`\sim 1\%`$ of the accreting mass rate (if $`\eta =10\%`$ and $`\mathrm{\Gamma }=10`$). ## 4. The blazar diversity BL Lac objects and Flat Spectrum Radio Quasars (FSRQ) are characterized by very rapid and large amplitude variability, power law spectra in restricted energy bands and strong $`\gamma `$–ray emission. These common properties justify their belonging to the same blazar class. However they differ in many other respects, such as the presence (in FSRQ) or absence (in BL Lacs) of broad emission lines, the radio to optical flux ratio, the relative importance of the $`\gamma `$–ray emission, the polarization degree, and the variability behavior. Within the BL Lac class, Giommi & Padovani (1994) have subdivided the objects according to where (i.e. at what frequency) the first broad (synchrotron) peak is located. Low energy peaked BL Lacs (LBL) show a peak in the IR–optical bands, while in High energy peaked BL Lacs (HBL) this is in the X–ray band (see, in this volume, the contributions of Costamante et al., Giommi et al., Pian et al., Tagliaferri et al., Tavecchio & Maraschi, Wolter et al.). As the emission of all blazars is beamed towards us, there must be a parent population of objects pointing in other directions. The parent populations of BL Lacs and FSRQs are believed to be FR I and more powerful FR II radio galaxies, respectively (see the review by Urry & Padovani 1995).
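Returning to the mass bookkeeping of Section 3: Eq. (3) amounts to one line of arithmetic, and a minimal check with the values quoted in the text (radiative efficiency η = 10%, Γ = 10, and jet power comparable to disk power) reproduces the ~1% figure.

```python
# Quick check of the bookkeeping in Eq. (3), with the values quoted in
# the text: eta = 10%, Gamma = 10, and L_jet comparable to L_disk.
eta, Gamma = 0.1, 10.0
L_jet_over_L_disk = 1.0          # Section 2: jet and disk powers are similar
mdot_ratio = (eta / Gamma) * L_jet_over_L_disk   # Mdot_out / Mdot_in
print(mdot_ratio)                # ~1% of the accreted mass flows out
```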
The absence of broad emission lines in BL Lacs is shared by FR I radio galaxies, whose nuclei are well visible in Hubble Space Telescope observations (Chiaberge, Capetti & Celotti 1999). This suggests that in FR I and BL Lac objects broad emission lines are intrinsically weaker than in more powerful objects. ## 5. The re–united blazars Fossati et al. (1998) found that the SED of all blazars is related to their observed luminosity. There is a rather well defined trend: low luminosity objects are HBL–like, and furthermore their high energy peak is in the GeV–TeV band. As the bolometric luminosity increases, both peaks shift to lower frequencies, and the high energy emission increasingly dominates the total output. <sup>1</sup><sup>1</sup>1A note of caution: the limited sensitivity of EGRET (onboard CGRO) and of ground based Cherenkov telescopes allows the detection only of sources which are in high states. Therefore the trend of more high energy dominated spectra as the total power increases strictly refers to high states. Ghisellini et al. (1998) fitted the SED of all blazars detected in the $`\gamma `$–ray band for which the distance and some spectral information on the high energy radiation were available. They found a correlation between the energy $`\gamma _{\mathrm{peak}}m_\mathrm{e}c^2`$ of the electrons emitting at the peaks of the spectrum and the amount of energy density $`U`$ (both in radiation and in magnetic field), as measured in the comoving frame: $`\gamma _{\mathrm{peak}}\propto U^{-0.6}`$. This indicates that, at $`\gamma _{\mathrm{peak}}`$, the radiative cooling rate $`\dot{\gamma }(\gamma _{\mathrm{peak}})\propto \gamma _{\mathrm{peak}}^2U\approx `$ const.
It also suggests that this may be due to a “universal” acceleration mechanism, which must be nearly independent of $`\gamma `$ and $`U`$: in less powerful sources with weak magnetic field and weak lines the radiative cooling is less severe and electrons can be accelerated up to very high energies, producing a SED typical of a HBL. The paucity of photons produced externally to the jet leaves synchrotron self–Compton as the only channel to produce high energy radiation. At the other extreme, in the most powerful sources with strong emission lines, electrons cannot be accelerated to high energies because of severe cooling. Their spectrum is therefore peaked in the far IR and in the MeV band. In these sources the inverse Compton scattering off externally produced photons is the dominant cooling mechanism, producing a dominant $`\gamma `$–ray luminosity. ### 5.1. Powers For the same sample of blazars fitted in Ghisellini et al. (1998) we can estimate the powers radiated and transported by jets in the form of cold protons, magnetic field and hot electrons and/or electron–positron pairs. Since the model allows one to determine the bulk Lorentz factor, the dimension of the emitting region, the value of the magnetic field and the particle density, we can then determine $$L_\mathrm{p}=\pi R^2\mathrm{\Gamma }^2\beta cn_\mathrm{p}^{\prime }m_\mathrm{p}c^2;L_\mathrm{e}=\pi R^2\mathrm{\Gamma }^2\beta cn_\mathrm{e}^{\prime }\gamma m_\mathrm{e}c^2;L_\mathrm{B}=\pi R^2\mathrm{\Gamma }^2\beta c\frac{B^2}{8\pi }$$ (4) where $`n_\mathrm{p}^{\prime }`$ and $`n_\mathrm{e}^{\prime }`$ are the comoving proton and lepton densities, respectively, $`R`$ is the cross section radius of the jet, and $`\gamma m_\mathrm{e}c^2`$ is the average lepton energy. These powers can be compared with the radiated one estimated in the same frame (in which the emitting blob is seen moving).
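To give a feel for the magnitudes involved, the three powers in Eq. (4) can be evaluated for a set of illustrative parameters. These numbers are our assumptions, not fitted values from the sample: R = 10^16 cm, Γ = 10, β ≈ 1, comoving densities n'_p = n'_e = 10 cm^-3, mean lepton Lorentz factor ⟨γ⟩ = 100, and a field B = 0.14 G chosen so that the magnetic and lepton energy densities are near equipartition.

```python
import math

# Order-of-magnitude evaluation of the jet powers in Eq. (4), with
# ILLUSTRATIVE parameters (assumptions, not values from the text).
# cgs units throughout.
R, Gamma, beta = 1.0e16, 10.0, 1.0   # cm, -, -
B, n_p, n_e, gam = 0.14, 10.0, 10.0, 100.0
c = 3.0e10                            # cm/s
m_p_c2 = 1.503e-3                     # erg (proton rest energy)
m_e_c2 = 8.187e-7                     # erg (electron rest energy)

common = math.pi * R**2 * Gamma**2 * beta * c
L_p = common * n_p * m_p_c2           # cold protons
L_e = common * n_e * gam * m_e_c2     # hot leptons
L_B = common * B**2 / (8.0 * math.pi) # Poynting flux

print(f"L_p ~ {L_p:.1e}  L_e ~ {L_e:.1e}  L_B ~ {L_B:.1e} erg/s")
# For these assumed numbers the proton channel dominates, while L_B is
# comparable to L_e, i.e. near equipartition by construction.
```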
The power radiated in the entire solid angle is thus $`L_\mathrm{r}=L_\mathrm{r}^{\prime }\mathrm{\Gamma }^2`$ (the same holds for the power $`L_{\mathrm{syn}}`$ emitted by the synchrotron process). All these quantities are plotted in Fig. 2 (Celotti & Ghisellini 2000, in prep.). In this figure hatched areas correspond to BL Lac objects. Several facts are to be noted: * If the jet is made of a pure electron–positron plasma, then the associated kinetic power is $`L_\mathrm{e}`$. However, we note that $`L_\mathrm{e}\lesssim L_\mathrm{r}`$, posing a serious energy budget problem. * If there is a proton for each electron, the bulk kinetic power $`L_\mathrm{p}\sim 10L_\mathrm{r}`$. This corresponds to an efficiency of $`10\%`$ in converting bulk into random energy. The remaining 90% is therefore available to power the radio lobes, as required. * The power in the Poynting flux, $`L_\mathrm{B}`$, is of the same order as $`L_\mathrm{e}`$, indicating that the magnetic field is close to equipartition with the electron energy density. This suggests that, on these scales, the magnetic field is not a prime energy carrier, but is a by–product of the process transforming bulk into random energy. ## 6. Internal shocks The central engine may well inject energy into the jet in a discontinuous way, with individual shells or blobs having different masses, bulk Lorentz factors and energies. If this occurs there will be collisions between shells, with a faster shell catching up with a slower one. This idea has become the leading model to explain the emission of gamma–ray bursts, but it was born in the AGN field, due to Rees (1978) (see also Sikora 1994). * Location — The $`\gamma `$–ray emission of blazars and its rapid variability imply that there must be a preferred location where dissipation of the bulk motion energy occurs.
If it were at the base of the jet, and hence close to the accretion disk, the produced $`\gamma `$–rays would be inevitably absorbed by photon–photon collisions, with associated copious pair production, reprocessing the original power from the $`\gamma `$–ray to the X–ray part of the spectrum (contrary to observations). If it were far away, in a large region of the jet, it becomes difficult to explain the observed fast variability, even accounting for the time–shortening due to the Doppler effect. The region where the radiation is produced is then most likely located at a few hundred Schwarzschild radii ($`\sim 10^{17}`$ cm) from the base of the jet, within the broad line region (see Ghisellini & Madau 1996 for more details). The extra seed photons provided by emission lines enhance the efficiency of the Compton process responsible for the $`\gamma `$–ray emission. This is indeed the typical distance at which two shells, initially separated by $`R_0\sim 10^{15}`$ cm (comparable to a few Schwarzschild radii) and moving with $`\mathrm{\Gamma }\sim 10`$ and $`\mathrm{\Gamma }\sim 20`$, would collide. * Variability timescales — In fact if the initial separation of the two shells is $`R_0`$ and if they have Lorentz factors $`\mathrm{\Gamma }_1`$, $`\mathrm{\Gamma }_2`$, they will collide at $$R=\frac{2\mathrm{\Gamma }_1^2}{1-(\mathrm{\Gamma }_1/\mathrm{\Gamma }_2)^2}R_0$$ (5) If the shell widths are of the same order of their initial separation the time needed to cross each other is of the order of $`R/c`$. The observer at a viewing angle $`\theta \sim 1/\mathrm{\Gamma }`$ will see this time Doppler contracted by the factor $`(1-\beta \mathrm{cos}\theta )\sim \mathrm{\Gamma }^{-2}`$. The typical variability timescale is therefore of the same order as the light crossing time of the initial shell separation, $`R_0/c`$. If the mechanism powering GRB and blazar emission is the same, we should expect a similar light curve from both systems, but with times appropriately scaled by the different $`R_0`$, i.e.
the different masses of the involved black holes. * Efficiencies — As most of the power transported by the jet must reach the radio lobes, only a small fraction can be radiatively dissipated. The efficiency $`\eta `$ of two blobs/shells for converting ordered into random energy depends on their masses $`m_1`$, $`m_2`$ and bulk Lorentz factors $`\mathrm{\Gamma }_1`$, $`\mathrm{\Gamma }_2`$, as $$\eta =1-\mathrm{\Gamma }_f\frac{m_1+m_2}{\mathrm{\Gamma }_1m_1+\mathrm{\Gamma }_2m_2}$$ (6) where $`\mathrm{\Gamma }_f=(1-\beta _f^2)^{-1/2}`$ is the bulk Lorentz factor after the interaction and is given by (see e.g. Lazzati, Ghisellini & Celotti 1999) $$\beta _f=\frac{\beta _1\mathrm{\Gamma }_1m_1+\beta _2\mathrm{\Gamma }_2m_2}{\mathrm{\Gamma }_1m_1+\mathrm{\Gamma }_2m_2}$$ (7) The above relations imply, for shells of equal masses and $`\mathrm{\Gamma }_2=2\mathrm{\Gamma }_1=20`$, $`\mathrm{\Gamma }_f=14.15`$ and $`\eta =5.7\%`$. Efficiencies $`\eta `$ around 5–10% are just what is needed for blazar jets. * Peak energies? — In the rest frame of the fast shell, the bulk kinetic energy of each proton of the slower shell is $`(\mathrm{\Gamma }^{\prime }-1)m_pc^2`$, where $`\mathrm{\Gamma }^{\prime }\sim 2`$. This is what can be transformed into random energy. Assume now that the electrons share this available energy (through an unspecified acceleration mechanism). In the comoving frame, the acceleration rate can be written as $`\dot{E}_{heat}\sim (\mathrm{\Gamma }^{\prime }-1)m_pc^2/t_{heat}^{\prime }`$. The typical heating timescale may correspond to the time needed for the two shells to cross, i.e. $`t_{heat}^{\prime }\sim \mathrm{\Delta }R^{\prime }/c\sim R/(c\mathrm{\Gamma })`$, where $`\mathrm{\Delta }R^{\prime }`$ is the shell width (measured in the same frame).
The heating and the radiative cooling rates will balance for some value of the random electron Lorentz factor $`\gamma _{peak}`$: $$\dot{E}_{heat}=\dot{E}_{cool}\Rightarrow \frac{\mathrm{\Gamma }m_pc^3}{R}=\frac{4}{3}\sigma _TcU\gamma _{peak}^2\Rightarrow \gamma _{peak}=\left(\frac{3\mathrm{\Gamma }m_pc^2}{4\sigma _TRU}\right)^{1/2}$$ (8) The agreement of the above simple relation with what can be derived from model fitting of the SED of blazars is surprisingly good (see Ghisellini 2000). * Radio flares — Collisions between shells may (and should) happen in a hierarchical way. As an illustrative example, assume that one pair of shells after the collision moves with a final Lorentz factor $`\mathrm{\Gamma }_1=14`$ (this number corresponds to $`\mathrm{\Gamma }=10`$ and 20 for the two shells before the interaction). The collision produces a flare –say– in the optical and $`\gamma `$–ray bands. After some observed time $`\mathrm{\Delta }t`$ two other shells collide and another flare is produced. Assume that the final Lorentz factor is now $`\mathrm{\Gamma }_2=17`$ (corresponding to an initial $`\mathrm{\Gamma }=10`$ and 30 before collision). Since the second pair is faster, it will catch up with the first one after a distance (from Eq. 5) $`R\sim 1200c\mathrm{\Delta }t`$. A time separation of $`\mathrm{\Delta }t\sim `$ one day between the two flares then corresponds to $`R\sim `$ 1 pc, i.e. the region of the radio emission of the core. Due again to Doppler contraction, this radio flare will be observed only a few days after the second optical flare. Since the ratio $`\mathrm{\Gamma }_2/\mathrm{\Gamma }_1`$ is small, the efficiency is also small (at least a factor of 10 smaller than for the first shocks). There is then the intriguing possibility of explaining the birth of radio blobs after intense activity (i.e. more than one flare) of the higher energy flux. Radio light–curves should have some memory of what has happened days–weeks earlier at higher frequencies. ## 7.
Conclusions Here I will dare to assemble different pieces of information gathered in recent years into a coherent, albeit still preliminary, picture. There is a link between the extraction of gravitational energy in an accretion disk and the formation and acceleration of jets, since both have the same power. Objects with low luminosity accretion disks also lack strong emission lines, suggesting that it is the paucity of ionizing photons, not of gas, that is the reason for the lack of strong lines in BL Lacs. Correspondingly, this implies that, if FR I are the parents of BL Lacs, they also have intrinsically weak line emission (i.e. no need for an obscuring torus). Despite the fact that the jet power in blazars spans at least four orders of magnitude, the average bulk Lorentz factor is almost the same, suggesting a link between the power and the mass outflowing rate: their ratio is constant. In the region where most of the radiation is produced, the jet is heavy, in the sense that protons carry most of the bulk kinetic energy. There the jet dissipates $`10\%`$ of its power and produces beamed radiation. The power dissipated at larger distances is much less, and therefore the jet can transport $`90\%`$ of its original power to the radio extended regions. One way to achieve this is through internal shocks, which can explain why the major dissipation occurs at a few hundred Schwarzschild radii, why the efficiency is of the order of 10%, and give clues on the observed variability timescales and even on why electrons are accelerated at a preferred energy. The spectral energy distribution of blazars depends on where shell–shell collisions take place, and on the amount of seed photons present there. Even in a single source it is possible that the separation of two consecutive shells is sometimes large, resulting in a collision occurring outside the broad line region.
In this case the corresponding spectrum should be produced by the synchrotron self–Compton process only, without the contribution of external photons: we then expect a simultaneous optical–$`\gamma `$–ray flare of roughly equal powers (but with the self–Compton flux varying quadratically, see Ghisellini & Maraschi 1996). This is what should always happen in lineless BL Lac objects. On the other hand, if the initial separation of the two shells is small (or the $`\mathrm{\Gamma }`$–factor of the slower one is small), the collision takes place close to the disk. X–rays produced by the disk would then absorb all the produced $`\gamma `$–rays and a pair cascade would develop, reprocessing the power originally in the $`\gamma `$–ray band mainly into the X–ray band. We should therefore see an X–ray flare without accompanying emission above $`\mathrm{\Gamma }m_\mathrm{e}c^2`$. Pairs of shells which have already collided can interact again between themselves, at distances appropriate for the radio emission. This offers the interesting possibility of explaining why the radio luminosity is related to the $`\gamma `$–ray one, and why radio flares are associated with flares at higher frequencies. Work is in progress in order to quantitatively test this idea against observations. ## ACKNOWLEDGEMENTS I thank Annalisa Celotti for very insightful discussions. ## REFERENCES Begelman M.C., Blandford R.D. & Rees M.J., 1984, Rev. Mod. Phys., 56, 255 Benford G. & Lesch H., 1998, MNRAS, 301, 414 Blandford R.D. & Znajek R.L., 1977, MNRAS, 176, 465 Celotti A., 1997, in Relativistic jets in AGNs, eds. M. Ostrowski, M. Sikora, G. Madejski & M. Begelman, p. 270 Celotti A., 1998, in Astrophysical jets: open problems, (Gordon & Breach Science publ.), eds. S. Massaglia & G. Bodo (Amsterdam), p. 79 Celotti A. & Fabian A.C., 1993, MNRAS, 264, 228 Celotti A., Padovani P. & Ghisellini G., 1997, MNRAS, 286, 415 Chiaberge M., Capetti A. & Celotti A., 1999, A&A, 349, 77 Dondi L.
& Ghisellini G., 1995, MNRAS, 273, 583 Fossati G., Celotti A., Comastri A., Maraschi L. & Ghisellini G., 1998, MNRAS, 299, 433 Ghisellini G. & Madau P., 1996, MNRAS, 280, 67 Ghisellini G. & Maraschi L., 1996, in Blazar Continuum Variability, ASP Conference Series, Vol. 110, eds. H.R. Miller & J.R. Webb, p. 436 Ghisellini G., Celotti A., Fossati G., Maraschi L. & Comastri A., 1998, MNRAS, 301, 451 Ghisellini G., 2000, in The ASCA Symposium, Tokyo, March 1999, in press Giommi P. & Padovani P., 1994, ApJ, 444, 567 Hartman R.C. et al., 1999, ApJS, 123, 79 Lazzati D., Ghisellini G. & Celotti A., 1999, MNRAS, 309, L13 Mészáros P., 1999, Nuclear Phys. B, in press (astro–ph/9904038) Piran T., 1999, Phys. Rep., in press (astro–ph/9810256) Rawlings S.G. & Saunders R.D.E., 1991, Nature, 349, 138 Rees M.J., 1966, Nature, 211, 468 Rees M.J., 1978, MNRAS, 184, P61 Sikora M., 1994, ApJS, 90, 923 Ulrich M.–H., Maraschi L. & Urry C.M., 1996, ARA&A, 35, 445 Urry M.C. & Padovani P., 1995, PASP, 107, 803 Vermeulen R.C. & Cohen M.H., 1994, ApJ, 430, 467 Wagner S.J. & Witzel A., 1995, ARA&A, 33, 163
no-problem/0002/quant-ph0002045.html
ar5iv
text
# Nonlocal properties of two-qubit gates and mixed states and optimization of quantum computations ## Introduction. Nonlocality is an important ingredient in quantum information processing, e.g. in quantum computation and quantum communication. Nonlocal correlations in a quantum system reflect entanglement between its parts. Genuine nonlocal properties should be described in a form invariant under local unitary operations. In this paper we discuss such locally invariant properties of (i) unitary transformations and (ii) mixed states of a two-qubit system. Two unitary transformations (logic gates), $`M`$ and $`L`$, are called locally equivalent if they differ only by local operations: $`L=U_1MU_2`$, where $`U_1,U_2\in SU(2)^2`$ are combinations of single-bit gates on two qubits. A property of a two-qubit operation can be considered nonlocal only if it has the same value for locally equivalent gates. We present a complete set of local invariants of a two-qubit gate: two gates are equivalent if and only if they have equal values of all these invariants. The set contains three real polynomials of the entries of the gate’s matrix. This set is minimal: the group $`SU(4)`$ of two-qubit gates is 15-dimensional and local operations eliminate $`2\mathrm{dim}[SU(2)^2]=12`$ degrees of freedom; hence any set should contain at least $`15-12=3`$ invariants. This result can be used to optimize quantum computations. Quantum algorithms are built out of elementary quantum logic gates. Any many-qubit quantum logic circuit can be constructed out of single-bit and two-bit operations. The ability to perform 1-bit and 2-bit operations is a requirement for any physical realization of quantum computers. Barenco et al. showed that the controlled-not (CNOT) gate together with single-bit gates is sufficient for quantum computations.
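A standard identity (well known, though not derived in this text) gives a concrete feel for composing two-bit gates: the SWAP gate, which interchanges the states of two qubits, is itself a composition of three CNOTs with alternating control, so a CNOT-based universal set can in particular generate SWAP. A quick matrix check in the computational basis:

```python
# Verify SWAP = CNOT12 * CNOT21 * CNOT12 (a standard circuit identity).
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Basis order |00>, |01>, |10>, |11>; in CNOT12 the first qubit is control.
CNOT12 = [[1,0,0,0],[0,1,0,0],[0,0,0,1],[0,0,1,0]]
CNOT21 = [[1,0,0,0],[0,0,0,1],[0,0,1,0],[0,1,0,0]]
SWAP   = [[1,0,0,0],[0,0,1,0],[0,1,0,0],[0,0,0,1]]

product = matmul(CNOT12, matmul(CNOT21, CNOT12))
print(product == SWAP)   # True
```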
Furthermore, it is easy to prove that any two-qubit gate $`M`$ forms a universal set with single-bit gates, if $`M`$ itself is not a combination of single-bit operations and the SWAP-gate, which interchanges the states of two qubits. The efficiency of such a universal set, that is the number of operations from the set needed to build a certain quantum algorithm, depends on $`M`$. For a particular realization of quantum computers, a certain two-qubit operation $`M`$ (or a set $`\mathrm{exp}[iHt]`$ of operations generated by a certain hamiltonian $`H`$) is usually considered elementary. It can be performed by a basic physical manipulation (switching of one parameter or application of a pulse). Then the question of optimization arises: what is the most economical way to perform a particular computation, i.e. what is the minimal number of elementary steps? In many situations two-qubit gates are more costly than single-bit gates (e.g., they can take a longer time, involve complicated manipulations or stronger additional decoherence), and then only the number of two-bit gates counts. The simplest and important version of this question is how a given two-bit gate $`L`$ can be performed using the minimal number of elementary two-bit gates $`M`$. In particular, when is it sufficient to employ $`M`$ only once? This is the case when the two gates are locally equivalent, and computation of invariants gives an effective tool to verify this. Moreover, if $`M`$ and $`L`$ are equivalent, a procedure presented below allows one to find efficiently single-bit gates $`U_1`$, $`U_2`$, which in combination with $`M`$ produce $`L`$. If one elementary two-bit step is not sufficient, one can ask how many are needed. Counting of dimensions suggests that two steps always suffice. However, this is not true for some gates $`M`$ (bad entanglers). A related problem is that of local invariants of quantum states. A mixed state is described by a density matrix $`\widehat{\rho }`$.
Two states are called locally equivalent if one can be transformed into the other by local operations: $`\widehat{\rho }\rightarrow U^{\dagger }\widehat{\rho }U`$, where $`U`$ is a local gate. Apparently, the coefficients of the characteristic polynomials of $`\widehat{\rho }`$ and of the reduced density matrices of the two qubits are locally invariant. The method developed in Refs. , in principle, allows one to compute all invariants. For a two-qubit system counting of dimensions shows that there should be 9 functionally independent invariants of density matrices with unit trace, but additional invariants may be needed to resolve a remaining finite number of states. A set of 20 invariants was presented in Ref. . However it was not clear if this set was complete, i.e. whether two states with the same invariants are always locally equivalent. Here we present a complete set of 18 polynomial invariants. We prove that two states are locally equivalent if and only if all 18 invariants have equal values in these states. Hence, any nonlocal characteristic of entanglement is a function of these invariants. We also show that no subset of this set is complete. To demonstrate applications of our results, we discuss in the last section which 2-bit operations can be constructed out of only one elementary 2-bit gate for Josephson charge qubits or for qubits based on spin degrees of freedom in quantum dots. ## Single-bit gates as orthogonal matrices. The following result is used below to classify two-qubit gates. Theorem 1. Single-qubit gates with unit determinant are represented by real orthogonal matrices in the Bell basis $`\frac{1}{\sqrt{2}}(|00\rangle +|11\rangle )`$, $`\frac{i}{\sqrt{2}}(|01\rangle +|10\rangle )`$, $`\frac{1}{\sqrt{2}}(|01\rangle -|10\rangle )`$, $`\frac{i}{\sqrt{2}}(|00\rangle -|11\rangle )`$.
The transformation of a matrix $`M`$ from the standard basis of states $`|00\rangle ,|01\rangle ,|10\rangle ,|11\rangle `$ into the Bell basis is described as $`M\rightarrow M_B=Q^{\dagger }MQ`$, where $$Q=\frac{1}{\sqrt{2}}\left(\begin{array}{cccc}1& 0& 0& i\\ 0& i& 1& 0\\ 0& i& -1& 0\\ 1& 0& 0& -i\end{array}\right).$$ Proof. We introduce a measure of entanglement in a pure 2-qubit state $`\psi _{\alpha \beta }`$, a quadratic form $`Ent\psi `$, which in the standard basis is defined as $`Ent\psi =det\widehat{\psi }=\psi _{00}\psi _{11}-\psi _{01}\psi _{10}`$. This quantity is locally invariant. Indeed, a single-bit gate $`W_1\otimes W_2`$ transforms $`\widehat{\psi }`$ into $`W_1\widehat{\psi }W_2^T`$, preserving the determinant. Under a unitary operation $`V`$ the matrix of this form transforms as $`\widehat{Ent}\rightarrow V^T\widehat{Ent}V`$. In the Bell basis it is proportional to the identity matrix. Since local gates preserve this form, they are given by orthogonal matrices in this basis. As unitary and orthogonal they are also real. Thus local operations form a subgroup of the group $`SO(4,𝐑)`$ of real orthogonal matrices. This subgroup is 6-dimensional, and hence coincides with the whole group. $`\mathrm{}`$ ## Classification of two-qubit gates. Our result for unitary gates is expressed in terms of $`M_B`$. Theorem 2. The complete set of local invariants of a two-qubit gate $`M`$, with $`detM=1`$, is given by the set of eigenvalues of the matrix $`M_B^TM_B`$. In other words, two 2-bit gates with unit determinants, $`M`$ and $`L`$, are equivalent up to single-bit operations iff the spectra of $`M_B^TM_B`$ and $`L_B^TL_B`$ coincide. Since $`m\equiv M_B^TM_B`$ is unitary, its eigenvalues have absolute value 1 and are bound by $`detm=1`$. For such matrices the spectrum is completely described by the complex number $`trm`$ and the real number $`tr^2m-trm^2`$. The matrix $`m`$ is unitary and symmetric. The following statement is used in the proof of Theorem 2: Lemma. Any unitary symmetric matrix $`m`$ has a real orthogonal eigenbasis. Proof.
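Theorem 1 is easy to check numerically. The following sketch (NumPy; the helper `random_su2` is my own illustrative construction, not taken from the text) conjugates a random $`SU(2)\otimes SU(2)`$ gate into the Bell basis and verifies that the result is a real orthogonal matrix:

```python
import numpy as np

# Basis-change matrix Q from the text: its columns are the Bell states.
Q = np.array([[1,  0,  0,  1j],
              [0, 1j,  1,  0],
              [0, 1j, -1,  0],
              [1,  0,  0, -1j]]) / np.sqrt(2)

def random_su2(rng):
    """A random single-qubit gate with unit determinant (illustrative)."""
    v = rng.normal(size=2) + 1j * rng.normal(size=2)
    a, b = v / np.linalg.norm(v)
    return np.array([[a, -np.conj(b)], [b, np.conj(a)]])

rng = np.random.default_rng(1)
U = np.kron(random_su2(rng), random_su2(rng))  # a local gate U1 (x) U2
U_B = Q.conj().T @ U @ Q                       # the same gate in the Bell basis

assert np.allclose(U_B.imag, 0, atol=1e-12)             # real ...
assert np.allclose(U_B @ U_B.T, np.eye(4), atol=1e-12)  # ... and orthogonal
```

The check passes for any seed, as the theorem requires.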
Any eigenbasis of $`m`$ can be converted into a real orthogonal one, as seen from the following observation: If $`𝐯`$ is an eigenvector of $`m`$ with eigenvalue $`\lambda `$ then $`𝐯^{*}`$ is also an eigenvector with the same eigenvalue (conjugation of $`m𝐯=\lambda 𝐯`$ gives $`m^{*}𝐯^{*}=\lambda ^{*}𝐯^{*}`$; since $`m^{*}=m^{-1}`$ and $`\lambda ^{*}=\lambda ^{-1}`$, this yields $`m𝐯^{*}=\lambda 𝐯^{*}`$). Hence, $`\mathrm{Re}𝐯`$ and $`\mathrm{Im}𝐯`$ are also eigenvectors. $`\mathrm{}`$ Proof of Theorem 2. In the Bell basis local operations transform a unitary gate $`M_B`$ into $`O_1M_BO_2`$, where $`O_1,O_2\in SO(4,𝐑)`$ are orthogonal matrices. Therefore $`m=M_B^TM_B`$ is transformed to $`O_2^TmO_2`$. Obviously, the spectrum of $`m`$ is invariant under this transformation. To prove completeness of the set of invariants, we notice that the lemma above implies that $`m`$ can be diagonalized by an orthogonal rotation $`O_M`$, i.e. $`m=O_M^Td_MO_M`$ where $`d_M`$ is a diagonal matrix. Suppose that another gate $`L`$ is given, and $`l\equiv L_B^TL_B`$ has the same spectrum as $`m`$. Then the entries of $`d_M`$ and $`d_L`$ are related by a permutation. Hence $`d_M=P^Td_LP`$, where $`P`$ is an orthogonal matrix which permutes the basis vectors. Using the relation of $`m`$ to $`d_M`$ and of $`l`$ to $`d_L`$, we conclude that $`l=O^TmO`$ where $`O\in SO(4,𝐑)`$. Single-bit operations $`O`$ and $`O^{\prime }\equiv L_BO^TM_B^{-1}`$ transform one gate into the other: $`L_B=O^{\prime }M_BO`$. The gate $`O^{\prime }`$ is a single-bit operation since it is real and orthogonal. Indeed, $`(O^{\prime })^TO^{\prime }=(M_B^{-1})^TOL_B^TL_BO^TM_B^{-1}=(M_B^{-1})^TOlO^TM_B^{-1}=(M_B^{-1})^TmM_B^{-1}=\widehat{1}`$. On the other hand, $`O^{\prime }`$ is unitary as a product of unitary matrices. This implies that it is real. $`\mathrm{}`$ So far we have discussed equivalence up to single-bit transformations with unit determinant. However, physically unitary gates are defined only up to an overall phase factor. The condition $`detM=1`$ fixes this phase factor but not completely: multiplication by $`\pm i`$ preserves the determinant.
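The invariants of Theorem 2 can be evaluated mechanically. A minimal sketch (NumPy; the unnormalized pair $`tr^2m/detM`$ and $`trm^2/detM`$ used here is one possible phase-insensitive convention, which together with $`detm`$ fixes the spectrum) confirms that a gate dressed by arbitrary single-bit gates keeps the same invariants, while CNOT and SWAP are not locally equivalent:

```python
import numpy as np

Q = np.array([[1, 0, 0, 1j], [0, 1j, 1, 0],
              [0, 1j, -1, 0], [1, 0, 0, -1j]]) / np.sqrt(2)

def invariants(M):
    """Phase-insensitive local invariants of a two-qubit gate:
    tr^2(m)/det(M) and tr(m^2)/det(M), with m = M_B^T M_B."""
    M_B = Q.conj().T @ M @ Q
    m = M_B.T @ M_B
    d = np.linalg.det(M)
    return np.array([np.trace(m) ** 2 / d, np.trace(m @ m) / d])

def random_su2(rng):
    v = rng.normal(size=2) + 1j * rng.normal(size=2)
    a, b = v / np.linalg.norm(v)
    return np.array([[a, -np.conj(b)], [b, np.conj(a)]])

CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)
SWAP = np.array([[1, 0, 0, 0], [0, 0, 1, 0],
                 [0, 1, 0, 0], [0, 0, 0, 1]], dtype=complex)

rng = np.random.default_rng(2)
# dress CNOT with local gates on both sides: locally equivalent by construction
L = (np.kron(random_su2(rng), random_su2(rng)) @ CNOT
     @ np.kron(random_su2(rng), random_su2(rng)))

assert np.allclose(invariants(L), invariants(CNOT))         # locally equivalent
assert not np.allclose(invariants(SWAP), invariants(CNOT))  # not equivalent
```

The second assertion is the numerical counterpart of the statement below that one application of a SWAP-generating interaction cannot yield CNOT.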
With this in mind, we describe a procedure to verify if two two-qubit unitary gates with arbitrary determinants are equivalent up to local transformations and an overall phase factor: We calculate $`m=M_B^TM_B`$ for each of them and compare the pairs $`[tr^2m/detM;trm^2/detM]`$. If they coincide, the gates are equivalent, and the proof of Theorem 2 allows one to express explicitly one gate via the other and single-bit gates, $`O`$ and $`O^{\prime }`$. ## Classification of two-qubit states. In this section we discuss equivalence and invariants of two-qubit states up to local operations. Let us express a 2-bit density matrix in terms of Pauli matrices acting on the first and the second qubit: $`\widehat{\rho }=\frac{1}{4}\widehat{1}+\frac{1}{2}𝐬\cdot \stackrel{}{\sigma }^1+\frac{1}{2}𝐩\cdot \stackrel{}{\sigma }^2+\beta _{ij}\sigma _i^1\sigma _j^2`$. If the qubits are considered as spin-1/2 particles, then $`𝐬`$ and $`𝐩`$ are their average spins in the state $`\widehat{\rho }`$, while $`\widehat{\beta }`$ is the spin-spin correlator: $`\beta _{ij}=\langle S_i^1S_j^2\rangle `$. Any single-bit operation is represented by two corresponding $`3\times 3`$ orthogonal real matrices of ‘spin rotations’, $`O,P\in SO(3,𝐑)`$. Such an operation, $`O\otimes P`$, transforms $`\widehat{\rho }`$ according to the rules: $`𝐬\rightarrow O𝐬`$, $`𝐩\rightarrow P𝐩`$, and $`\widehat{\beta }\rightarrow O\widehat{\beta }P^T`$. We find a set of invariants which completely characterize $`\widehat{\rho }`$ up to local gates. The density matrix is specified by 15 real parameters, while local gates form a 6-dimensional group. Thus we expect $`15-6=9`$ functionally independent invariants ($`I_1`$–$`I_9`$ in Table I). However, these invariants fix a state only up to a finite symmetry group, and additional invariants are needed . In Table I we present a set of 18 polynomial invariants and prove that the set is complete. Theorem 3. Two states are locally equivalent exactly when the invariants $`I_1`$–$`I_{18}`$ have equal values for these states.
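The decomposition of $`\widehat{\rho }`$ into $`(𝐬,𝐩,\widehat{\beta })`$ is straightforward to implement. The sketch below (NumPy; the test state is an arbitrary illustration, and the invariant combinations checked are simple analogues of entries of Table I, which is not reproduced here) extracts the components and verifies that a few obviously invariant quantities are unchanged by a random local gate:

```python
import numpy as np

pauli = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def decompose(rho):
    """Extract (s, p, beta) from rho = 1/4 + s.sigma1/2 + p.sigma2/2 + beta_ij sigma_i sigma_j."""
    s = np.array([np.trace(rho @ np.kron(sig, np.eye(2))).real / 2 for sig in pauli])
    p = np.array([np.trace(rho @ np.kron(np.eye(2), sig)).real / 2 for sig in pauli])
    beta = np.array([[np.trace(rho @ np.kron(si, sj)).real / 4
                      for sj in pauli] for si in pauli])
    return s, p, beta

def random_su2(rng):
    v = rng.normal(size=2) + 1j * rng.normal(size=2)
    a, b = v / np.linalg.norm(v)
    return np.array([[a, -np.conj(b)], [b, np.conj(a)]])

# An illustrative mixed state: a partly entangled pure state mixed with noise.
psi = np.array([0.8, 0.1, 0.1j, 0.58], dtype=complex)
psi /= np.linalg.norm(psi)
rho = 0.7 * np.outer(psi, psi.conj()) + 0.3 * np.eye(4) / 4

rng = np.random.default_rng(3)
U = np.kron(random_su2(rng), random_su2(rng))
s, p, beta = decompose(rho)
s2, p2, beta2 = decompose(U @ rho @ U.conj().T)

# det(beta), s^2, p^2 and s.beta.p are manifestly invariant under
# s -> Os, p -> Pp, beta -> O beta P^T with O, P in SO(3):
for before, after in [(np.linalg.det(beta), np.linalg.det(beta2)),
                      (s @ s, s2 @ s2), (p @ p, p2 @ p2),
                      (s @ beta @ p, s2 @ beta2 @ p2)]:
    assert np.isclose(before, after)
```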
(Only the signs of $`I_{10}`$, $`I_{11}`$, $`I_{15}`$–$`I_{18}`$ are needed.) None of the invariants can be removed from the set without affecting completeness, as demonstrated by examples in the table. The proof below gives an explicit procedure to find single-bit gates which transform one of two equivalent states into the other. Proof. It is clear that all $`I_i`$ in the table are invariant under independent orthogonal rotations $`O`$, $`P`$, i.e. under single-bit gates. To prove that they form a complete set, we show that for given values of the invariants one can, by local operations, transform any density matrix with these invariants to a specific form. In the course of the proof we fix more and more details of the density matrix by applying local gates (this preserves the invariants). The first step is to diagonalize the matrix $`\widehat{\beta }`$, which can be achieved by proper rotations $`O`$, $`P`$ (singular value decomposition). The invariants $`I_1`$–$`I_3`$ determine the diagonal entries of $`\widehat{\beta }`$ up to a simultaneous sign change of any two of them. Using single-bit operations $`R^i\otimes \widehat{1}`$ (where $`R^i`$ is the $`\pi `$-rotation about the axis $`i=1,2,3`$) we can fix these signs: all three eigenvalues, $`b_1,b_2,b_3`$, can be made nonnegative, if $`I_1=det\widehat{\beta }\ge 0`$, or negative, if $`det\widehat{\beta }<0`$. Further transformations, with $`O=P`$ representing permutations of basis vectors, place them in any needed order. From now on we consider only states with a fixed diagonal $`\widehat{\beta }`$. Hence, only such single-bit gates, $`O\otimes P`$, are allowed which preserve $`\widehat{\beta }`$. The group of such operations depends on $`b_1,b_2,b_3`$. We examine all possibilities below, showing that the invariants $`I_4`$–$`I_{18}`$ fix the state completely. (A) $`\widehat{\beta }`$ is nondegenerate: all $`b_i`$ are different. Then the only $`\widehat{\beta }`$-preserving local gates are $`r^i\equiv R^i\otimes R^i`$.
The invariants $`I_4`$–$`I_6`$ and $`I_7`$–$`I_9`$ set the absolute values of the six components $`s_i`$, $`p_i`$, but not their signs. The signs are bound by other invariants. In particular, $`I_{10}`$ and $`I_{11}`$ fix the values of $`s_1s_2s_3`$ and $`p_1p_2p_3`$. Furthermore, $`I_{12}`$–$`I_{14}`$ give three linear constraints on the three quantities $`s_ip_i`$. When $`\widehat{\beta }`$ is not degenerate, one can solve them for $`s_ip_i`$. Let us consider several cases: i) $`𝐬`$ has at least two nonzero components, say $`s_1,s_2`$. These two can be made positive by single-bit gates $`r^1`$, $`r^2`$. After that, the signs of $`p_{i=1,2}`$ are fixed by the values of $`s_ip_i`$, while $`I_{10}`$ fixes the sign of $`s_3`$. The sign of $`p_3`$ can be determined from $`s_3p_3`$, if $`s_3\ne 0`$; if $`s_3=0`$ then $`p_3`$ is fixed by $`I_{15}=p_3s_1s_2b_3(b_2^2-b_1^2)`$ or $`I_{17}=p_3s_1s_2b_1b_2(b_2^2-b_1^2)`$. If $`𝐩`$ has at least two nonzero components, a similar argument applies, with $`I_{11,16,18}`$ instead of $`I_{10,15,17}`$. If both $`𝐬`$ and $`𝐩`$ have at most one nonzero component, $`s_i`$ and $`p_j`$, then either ii) $`i=j`$, and the signs are specified by $`s_ip_i`$ up to $`\widehat{\beta }`$-preserving gates $`r^k`$; or iii) $`i\ne j`$, and one can use $`r^i,r^j`$ to make both components nonnegative. (B) $`\widehat{\beta }`$ has two equal nonzero eigenvalues: $`b_3\ne b_1=b_2\ne 0`$. We define the horizontal components $`𝐬_{\perp }=(s_1,s_2,0)`$, $`𝐩_{\perp }=(p_1,p_2,0)`$. Then $`\widehat{\beta }`$-preserving operations are generated by simultaneous, coinciding rotations of $`𝐬_{\perp }`$ and $`𝐩_{\perp }`$ \[i.e. $`𝐬_{\perp }\rightarrow O𝐬_{\perp }`$, $`𝐩_{\perp }\rightarrow O𝐩_{\perp }`$, where $`O\in SO_{1,2}(2)`$ is a 2D-rotation\], as well as $`r^i`$. The invariants $`I_4`$–$`I_9`$ fix $`𝐬_{\perp }^2`$, $`s_3^2`$, $`𝐩_{\perp }^2`$ and $`p_3^2`$. To specify $`𝐬`$ and $`𝐩`$ completely, the angle between $`𝐬_{\perp }`$ and $`𝐩_{\perp }`$, as well as the signs of $`s_3`$ and $`p_3`$, should be determined. These are bound by the remaining invariants. In particular, $`I_{12}`$–$`I_{14}`$ fix $`s_3p_3`$ and $`𝐬_{\perp }\cdot 𝐩_{\perp }`$.
The latter sets the angle between $`𝐬_{\perp }`$ and $`𝐩_{\perp }`$ up to a sign. There are two possibilities: i) $`s_3=p_3=0`$. The states with opposite angles are related by $`r^1`$ and hence equivalent. ii) $`s_3\ne 0`$ (the case $`p_3\ne 0`$ is analogous). Applying $`r^1`$, if needed, we can assume that $`s_3`$ is positive. Then, $`p_3`$ is specified by the value of $`s_3p_3`$. Apart from that, $`I_{15}`$ sets $`(𝐬_{\perp }\times 𝐩_{\perp })_3s_3`$, and hence the sign of the cross product $`𝐬_{\perp }\times 𝐩_{\perp }`$. This fixes the density matrix completely. (C) $`b_1=b_2=0`$, $`b_3\ne 0`$. In this case $`\widehat{\beta }`$-preserving operations are independent rotations of $`𝐬_{\perp }`$ and $`𝐩_{\perp }`$ \[which form $`SO_{1,2}(2)^2`$\] and $`r^i`$. The invariants $`I_4`$–$`I_9`$ fix $`𝐬_{\perp }^2`$, $`s_3^2`$, $`𝐩_{\perp }^2`$, and $`p_3^2`$, while $`I_{12}`$ (or $`I_{13}`$) sets $`s_3p_3`$. It is easy to see that they specify the state completely. (D) $`b_1=b_2=b_3\ne 0`$. All transformations $`O\otimes O`$, where $`O\in SO(3)`$, preserve $`\widehat{\beta }`$. The invariants $`I_4`$, $`I_7`$ and $`I_{12}`$ fix $`𝐬^2`$, $`𝐩^2`$ and $`𝐬\cdot 𝐩`$, and this information is sufficient to determine $`𝐬`$ and $`𝐩`$ up to a rotation. (E) $`\widehat{\beta }=0`$. In this case all local transformations preserve $`\widehat{\beta }`$; hence, $`𝐬`$ and $`𝐩`$ can rotate independently. The only nonzero invariants, $`I_4`$ and $`I_7`$, fix $`𝐬^2`$ and $`𝐩^2`$. $`\mathrm{}`$ ## Discussion. In this section we demonstrate applications of our results. We calculate the invariants for several two-qubit gates to find out which of them are locally equivalent. These gates include CNOT, SWAP and its square root, as well as several gates $`\mathrm{exp}(iHt)`$ generated by the hamiltonians $`H=\frac{1}{4}\stackrel{}{\sigma }^1\cdot \stackrel{}{\sigma }^2`$, $`\frac{1}{4}\stackrel{}{\sigma }_{\perp }^1\cdot \stackrel{}{\sigma }_{\perp }^2`$ \[here $`\stackrel{}{\sigma }_{\perp }=(\sigma _x;\sigma _y)`$\] and $`\frac{1}{4}\sigma _y^1\sigma _y^2`$ (cf. Refs. ) after evolution during a time $`t`$.
Analyzing the invariants we see that to achieve CNOT one needs to perform a two-qubit gate at least twice if the latter is triggered by the Heisenberg hamiltonian $`\stackrel{}{\sigma }\cdot \stackrel{}{\sigma }`$. At the same time SWAP and $`\sqrt{\mathrm{SWAP}}`$ can be performed with one elementary two-bit gate. The interaction $`\sigma _y\sigma _y`$ allows one to perform CNOT in one step, while $`\stackrel{}{\sigma }_{\perp }\stackrel{}{\sigma }_{\perp }`$ requires at least two steps for all three gates. In Josephson qubits elementary two-bit gates are generated by the interaction $`H=\frac{1}{2}E_\mathrm{J}(\widehat{\sigma }_x^1+\widehat{\sigma }_x^2)+(E_\mathrm{J}^2/E_L)\widehat{\sigma }_y^1\widehat{\sigma }_y^2`$. Investigation of the invariants shows that CNOT can be performed if $`E_\mathrm{J}`$ is tuned to $`\alpha E_L`$ for a finite time $`t=\alpha \pi (2n+1)/4E_L`$, where $`n`$ is an integer and $`\alpha `$ satisfies $`\alpha ^2\mathrm{cos}[\pi (n+\frac{1}{2})\sqrt{1+\alpha ^2}]=1`$. For the creation of entanglement between qubits a useful property of a two-qubit gate is its ability to produce a maximally entangled state (with $`|Ent\psi |=1/2`$) from an unentangled one. This property is locally invariant, and one can show that a gate $`M`$ is a perfect entangler exactly when the convex hull of the eigenvalues of the corresponding matrix $`m`$ contains zero. In terms of the invariants introduced in Table II, this condition reads: $`\mathrm{sin}^2\gamma \le 4|G_1|\le 1`$ and $`\mathrm{cos}\gamma (\mathrm{cos}\gamma -G_2)\ge 0`$, where $`G_1=|G_1|e^{i\gamma }`$. Among the gates in the table CNOT and $`\sqrt{\mathrm{SWAP}}`$ are perfect entanglers. The Heisenberg hamiltonian can produce only two perfect entanglers ($`\sqrt{\mathrm{SWAP}}`$ or its inverse), while $`\sigma _y\sigma _y`$ produces only CNOT. At the same time, the interaction $`\stackrel{}{\sigma }_{\perp }\stackrel{}{\sigma }_{\perp }`$ produces a set of perfect entanglers if the system evolves for a time $`t`$ with $`\mathrm{cos}t\le 0`$.
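The convex-hull criterion for perfect entanglers can be tested directly on the eigenvalues of $`m`$. A sketch (NumPy; I treat points on the boundary of the hull as "containing" zero, and use the fact that zero lies in the convex hull of points on the unit circle exactly when no angular gap between consecutive points exceeds $`\pi `$):

```python
import numpy as np

Q = np.array([[1, 0, 0, 1j], [0, 1j, 1, 0],
              [0, 1j, -1, 0], [1, 0, 0, -1j]]) / np.sqrt(2)

def is_perfect_entangler(M):
    """Does the convex hull of the eigenvalues of m = M_B^T M_B
    (with M rescaled to unit determinant) contain zero?"""
    M = M / np.linalg.det(M) ** 0.25          # fix the overall phase
    M_B = Q.conj().T @ M @ Q
    eigs = np.linalg.eigvals(M_B.T @ M_B)     # four points on the unit circle
    angles = np.sort(np.angle(eigs))
    gaps = np.diff(np.append(angles, angles[0] + 2 * np.pi))
    # 0 lies in the hull iff the points span no open half-circle
    return np.max(gaps) <= np.pi + 1e-9

CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)
SWAP = np.array([[1, 0, 0, 0], [0, 0, 1, 0],
                 [0, 1, 0, 0], [0, 0, 0, 1]], dtype=complex)

assert is_perfect_entangler(CNOT)        # as stated in the text
assert not is_perfect_entangler(SWAP)    # SWAP creates no entanglement
```

The residual phase ambiguity ($`\pm i`$) only rotates all four eigenvalues together, so the verdict is unaffected.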
To conclude, we have presented complete sets of local polynomial invariants of two-qubit gates (3 real invariants) and two-qubit mixed states (18 invariants) and demonstrated how these results can be used to optimize quantum logic circuits and to study entangling properties of unitary operations. I am grateful to G. Burkard, D.P. DiVincenzo, M. Grassl, G. Schön, and A. Shnirman for fruitful discussions.
# On the universality of the spectrum of cosmic rays accelerated at highly relativistic shocks ## 1 Introduction There is currently a growing interest in the acceleration of non–thermal particles at highly relativistic shocks. There are three classes of bona fide relativistic sources: beyond the well–established extra–Galactic (Blazars) and Galactic (superluminal) sources, both of which exhibit superluminal motions, it is now also well-established that Gamma Ray Bursts (GRBs) display highly relativistic expansions, with Lorentz factors well in excess of $`100`$. Other classes of relativistic sources may include Soft Gamma Ray Repeaters (SGRs), whose recurrent explosions are largely super–Eddington, and special SNe similar to SN 1998bw, which displayed marginally Newtonian expansion ($`6\times 10^4\,\mathrm{km}\,\mathrm{s}^{-1}`$) when optical emission lines became detectable, about a month after the explosion. With the discovery of GRBs’ afterglows, it has now become feasible to derive the energy spectral index $`k`$ of electrons accelerated at the forward shock, as a function of the varying (decreasing) shock Lorentz factor $`\gamma `$, provided simultaneous wide–band spectral coverage is available. With the launch of the USA/Italy/UK mission SWIFT, these data will become available for a statistically significant number of bursts, testing directly models for particle acceleration at relativistic shocks. Furthermore, since GRBs must also clearly accelerate protons, the same index $`k`$ may determine the spectrum of ultra high energy cosmic rays observed at Earth. However, until recently, both the lack of astrophysical motivation and the difficulty inherent in treating highly anisotropic distribution functions have stifled research on this topic.
Early work, both semi–analytic and outright numerical, has concentrated on barely relativistic flows with Lorentz factors of a few, well suited to Blazars and Galactic superluminals, but clearly insufficient for GRBs, the only exception being the numerical simulations of Bednarz and Ostrowski (1998). It is the purpose of this Letter to perform an analytic investigation of the large $`\gamma `$ limit, to establish which (if any) of the properties of the particles’ distribution function depend upon the physical conditions of the fluid. ## 2 The analysis I deal first with pure pitch angle scattering, and then (Subsection 2.3) I discuss oblique shocks. In the well–known equation for the particles’ distribution function, under the assumption of pure pitch angle scattering, $$\gamma (u+\mu )\frac{\partial f}{\partial z}=\frac{\partial }{\partial \mu }\left(D(\mu ,p)(1-\mu ^2)\frac{\partial f}{\partial \mu }\right),$$ (1) $`f`$ is computed in the shock frame, in which are also defined the distance from the shock $`z`$, the fluid speed in units of $`c`$, $`u`$, and the fluid Lorentz factor $`\gamma `$. Instead, the scattering coefficient $`D`$, the particle momentum $`p`$ and the particle pitch angle cosine, $`\mu `$, are all defined in the local fluid frame. I make no hypothesis whatsoever about $`D`$, except that it is positive definite and smooth. We place ourselves in the shock frame, and call $`z=0`$ the shock position; the upstream section is for $`z<0`$, so that the fluid speeds are both $`>0`$. The above equation admits of an integral: by integrating over $`\mu `$ and $`z`$ we see that $$\int _{-1}^1(u+\mu )f\,d\mu =\text{const.},$$ (2) independent of $`z`$. The required boundary condition for $`f`$, i.e., that $`f\rightarrow 0`$ as $`z\rightarrow -\mathrm{}`$, implies that the constant, upstream, is $`0`$. Downstream, Eq. 2 is also a constant, but, because of Taub’s jump conditions, it is not the same constant as upstream.
Since the required boundary condition for $`f`$ far downstream ($`f_{\mathrm{}}`$) is that it becomes isotropic, and $`f`$ is a relativistic invariant, we see that, far downstream, $`\int _{-1}^1\mu f_{\mathrm{}}\,d\mu =0`$. We thus have $$\int _{-1}^1(u+\mu )f\,d\mu =\{\begin{array}{cc}0\hfill & z<0\hfill \\ \int _{-1}^1uf_{\mathrm{}}\,d\mu >0\hfill & z>0\hfill \end{array}$$ (3) where the inequality in Eq. 3b (which will become necessary later on) derives from the obvious constraint $`f>0`$. ### 2.1 Upstream I begin the analysis by considering Eq. 3a in the limit of very large shock Lorentz factors, in which case, upstream, $`u\rightarrow 1`$. For $`u=1`$, this reduces to $$\int _{-1}^1(1+\mu )f\,d\mu =0.$$ (4) Since $`1+\mu >0`$ everywhere in the integration interval except of course at $`\mu =-1`$, and since $`f\ge 0`$, we see that, for $`u=1`$, we must have $`f\propto \delta (\mu +1)`$ where $`\delta (x)`$ is Dirac’s delta. Thus, in this limit, the angular dependence factors out. For reasons to be explained in the next Subsection, we shall also need $`f`$ for $`1-u\ll 1`$, but still $`\ne 0`$. To search for such a solution, we let ourselves be guided by the solution at $`u=1`$: thus, we let the angular dependence factor out, and use the Ansatz $`f=g(z)w((\mu +1)/h(u))`$. Here $`h(u)`$ is an as yet undetermined function of the pre–shock fluid speed such that $`h(u)\rightarrow 0`$ as $`u\rightarrow 1`$. In this way, as the speed grows, the angular dependence becomes more and more concentrated toward $`\mu =-1`$, as required by the previously found solution for $`u=1`$. Introducing our Ansatz into Eq. 1 I find $$\frac{\gamma }{g}\frac{dg}{dz}=\frac{1}{(1+\mu )w}\frac{d}{d\mu }\left(D(\mu ,p)(1-\mu ^2)\frac{dw}{d\mu }\right)=\frac{2D_1\lambda ^2}{h^2(u)},$$ (5) where I defined $`D_1\equiv D(\mu =-1,p)`$, and I factored the eigenvalue for future convenience.
Concentrating on the angular part, and defining $`(\mu +1)/h(u)\equiv y`$, and $`\dot{w}\equiv dw/dy`$, $`\ddot{w}\equiv d^2w/dy^2`$, I find $$\frac{D(\mu ,p)(1-\mu ^2)}{h^2(u)}\ddot{w}+\frac{\dot{w}}{h(u)}\frac{d}{d\mu }\left(D(\mu ,p)(1-\mu ^2)\right)-\frac{2\lambda ^2D_1(1+\mu )w}{h^2(u)}=0.$$ (6) We are interested in a solution of the above equation only in the limit $`h(u)\rightarrow 0`$, the only one in which our factored Ansatz is a good approximation to the true $`f`$. In this case, the term $`\dot{w}`$ is clearly subdominant, and can be neglected in a first approximation (this technique is called dominant balance, Bender and Orszag 1978). Furthermore, for $`h(u)\rightarrow 0`$, we expect $`w(\mu )`$ to be more and more concentrated around $`\mu =-1`$, so that in this range we can approximate the term $`D(\mu ,p)(1-\mu ^2)\simeq 2D_1(1+\mu )`$, and I obtain $`\ddot{w}\simeq \lambda ^2w`$ with obvious solution $`w\simeq w_0\mathrm{exp}(-\lambda (\mu +1)/h(u))`$. The factor $`\lambda /h(u)`$ can be determined by inserting this approximate expression for $`w`$ into Eq. 3a. A trivial computation yields $`\lambda /h(u)=1/(1-u)`$. Now, going back to the equation for $`g`$, the spatial part of $`f`$, we find, also using the above, $$\frac{1}{g}\frac{dg}{dz}\simeq \frac{2D_1}{\gamma (1-u)^2}\simeq 8\gamma ^3D_1$$ (7) from which, in the end, I find an approximate solution for the distribution function in the limit $`u\rightarrow 1`$: $$f\simeq A\mathrm{exp}(8\gamma ^3D_1z)\mathrm{exp}\left(-(\mu +1)/(1-u)\right).$$ (8) It is thus seen that the detailed shape of the pitch angle scattering function $`D(\mu ,p)`$ is irrelevant, and that what is left of it (its value $`D_1`$ at $`\mu =-1`$) only enters the spatial part of the distribution function $`f`$, not the angular one. ### 2.2 Downstream We make here the usual assumption, that the distribution function depends upon the particle momentum $`p`$ as $`f\propto p^{-s}`$ in either frame (but see the Discussion for further comments).
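The determination of $`\lambda /h(u)=1/(1-u)`$ can be checked numerically: with the angular part $`w\propto \mathrm{exp}(-(\mu +1)/(1-u))`$, the flux integral of Eq. 3a vanishes up to terms that are exponentially small as $`u\rightarrow 1`$. A sketch with SciPy (the value $`u=0.99`$ is an arbitrary illustration):

```python
import numpy as np
from scipy.integrate import quad

u = 0.99                     # upstream speed close to 1 (illustrative)
a = 1.0 / (1.0 - u)          # lambda/h(u) as derived in the text

# flux integral of Eq. 3a with the approximate angular part w(mu)
flux = quad(lambda mu: (u + mu) * np.exp(-a * (1 + mu)), -1, 1, limit=200)[0]
# a normalization scale: the same integrand with |u + mu|
norm = quad(lambda mu: abs(u + mu) * np.exp(-a * (1 + mu)), -1, 1, limit=200)[0]

assert abs(flux) / norm < 1e-4   # vanishes up to exponentially small terms
```

Any other choice of the decay constant leaves a flux of the same order as `norm`, so the cancellation singles out $`a=1/(1-u)`$.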
From the condition of continuity of the distribution function at the shock, denoting as $`p_a`$ and $`\mu _a`$ the particle’s momentum and cosine of the pitch angle in the downstream frame, we have $$\frac{1}{p_a^s}w_a(\mu _a)\propto \frac{1}{p^s}\mathrm{exp}\left(-(\mu +1)/(1-u)\right)$$ (9) where the irrelevant constant of proportionality does not depend on $`p,p_a,\mu ,\mu _a`$. Using the Lorentz transformations to relate $`p,p_a,\mu ,\mu _a`$ ($`\mu =(\mu _a-u_r)/(1-u_r\mu _a)`$, $`p=p_a\gamma _r(1-u_r\mu _a)`$, with $`u_r`$ and $`\gamma _r`$ the relative speed and corresponding Lorentz factor between the upstream and downstream fluids), I find $$w_a(\mu _a)=\frac{1}{(1-u_r\mu _a)^s}\mathrm{exp}\left(-\frac{(\mu _a+1)(1-u_r)}{(1-u)(1-u_r\mu _a)}\right).$$ (10) For $`u\rightarrow 1`$, it is easy to derive from Taub’s conditions (Landau and Lifshitz 1987) that $`u_r\rightarrow 1`$, and that $`(1-u_r)/(1-u)\simeq \gamma ^2/\gamma _r^2\rightarrow 2`$. This result does use a post–shock equation of state $`p=\rho /3`$, which is surely correct in the limit $`u\rightarrow 1`$. In the end, I obtain $$w_a(\mu _a)=\frac{1}{(1-\mu _a)^s}\mathrm{exp}\left(-2\frac{\mu _a+1}{1-\mu _a}\right).$$ (11) This equation shows why we needed to determine the pitch angle distribution, in the upstream frame, even for $`1-u\ne 0`$: in fact, even though the angular distribution in the upstream frame (Eq. 8) tends to a singularity, the downstream distribution does not (because the factor $`(1-u)/(1-u_r)`$ has a finite, non–zero limit), and the concrete form to which it tends depends upon the departures of the upstream distribution from a Dirac’s delta. From now on I will drop the subscript $`a`$ in $`\mu _a`$, since all quantities refer to downstream. In order to determine $`s`$, we now appeal to a necessary regularity condition which must be obeyed by the initial (i.e., for $`z=0`$) pitch angle distribution, Eq. 11. Looking at Eq. 1 specialized to the downstream case, where $`u=1/3`$ for very fast shocks, we see that this equation has a singularity at $`\mu =-1/3`$.
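The convergence of Eq. 10 to the limiting form of Eq. 11 is easy to verify numerically. In the sketch below (NumPy), $`u_r`$ is obtained by relativistic velocity subtraction with the downstream speed taken to be $`1/3`$ as in the text, and the index value $`s=4`$ is an arbitrary illustration, since the limit holds for any fixed $`s`$:

```python
import numpy as np

u_down = 1.0 / 3.0    # downstream speed for an ultrarelativistic shock
s = 4.0               # any fixed index; illustrative only

def w10(mu, u):
    """Eq. 10: downstream angular distribution at finite shock speed u."""
    u_r = (u - u_down) / (1.0 - u * u_down)   # relativistic velocity subtraction
    fac = (1.0 - u_r) / (1.0 - u)             # tends to 2 as u -> 1
    return (1.0 - u_r * mu) ** (-s) * np.exp(-(mu + 1.0) * fac / (1.0 - u_r * mu))

def w11(mu):
    """Eq. 11: the u -> 1 limit."""
    return (1.0 - mu) ** (-s) * np.exp(-2.0 * (mu + 1.0) / (1.0 - mu))

mu = np.linspace(-0.99, 0.9, 500)
ratio = w10(mu, u=1.0 - 1e-8) / w11(mu)
assert np.max(np.abs(ratio - 1.0)) < 1e-3    # Eq. 10 converges to Eq. 11
```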
Passing through this singularity will fix the index $`s`$. It is not convenient to use $`f`$ directly; rather, I use its Laplace transform $$\widehat{f}(r,\mu )\equiv \int _0^{+\mathrm{}}f(z,\mu )e^{-rz}\,dz.$$ (12) Taking Laplace transforms of both sides of Eq. 1 I obtain $$-\frac{\gamma (1/3+\mu )w_a(\mu )}{r}+\gamma (1/3+\mu )\widehat{f}=\frac{1}{r}\frac{\partial }{\partial \mu }\left(D(\mu ,p)(1-\mu ^2)\frac{\partial \widehat{f}}{\partial \mu }\right).$$ (13) I am interested in the limit $`r\rightarrow +\mathrm{}`$. In fact, here I can use two results. First, in this limit, it is well–known (Watson’s Lemma, Bender and Orszag 1978) that Eq. 12 reduces to $$\widehat{f}(r,\mu )\simeq \frac{f(z=0,\mu )}{r}=\frac{w_a(\mu )}{r}.$$ (14) Despite this wonderful result in all its generality, I will actually use it only in the neighborhood of $`\mu =-1/3`$; here, Eq. 13 takes on a simple form: defining $`t\equiv \mu +1/3`$, $$-\frac{bt}{r}+at\widehat{f}=\frac{1}{r}\frac{\partial ^2\widehat{f}}{\partial t^2}+\frac{c}{r}\frac{\partial \widehat{f}}{\partial t}$$ (15) where I defined $`b\equiv \gamma w_a(\mu )/[D(\mu ,p)(1-\mu ^2)]|_{\mu =-1/3}`$, $`a\equiv \gamma /[D(\mu ,p)(1-\mu ^2)]|_{\mu =-1/3}`$, and $`c\equiv [\partial /\partial \mu (D(\mu ,p)(1-\mu ^2))]/[D(\mu ,p)(1-\mu ^2)]|_{\mu =-1/3}`$. Now I make the Ansatz (to be checked a posteriori) that the term $`\partial \widehat{f}/\partial t`$ is negligible compared to the second order derivative in the limit $`r\rightarrow +\mathrm{}`$. I am interested only in the most significant term in $`r`$, since Eq. 14 was only obtained to this order. Then, Eq. 15 becomes $$at\widehat{f}=\frac{1}{r}\frac{\partial ^2\widehat{f}}{\partial t^2}.$$ (16) The above equation is the prototype of the one–turning point problem. Its solution, strictly in the neighborhood of the point $`t=\mu +1/3=0`$, is (Bender and Orszag 1978, Sect. 10.4, Eq. 10.4.13b): $$\widehat{f}(t)\simeq r^{1/12}C\,Ai(r^{1/3}at)$$ (17) where $`C`$ is an arbitrary constant, and $`Ai(x)`$ is one of Airy’s functions. From this it can easily be checked that our Ansatz was justified. Clearly, close to the point $`t=\mu +1/3=0`$, Eq. 14 and Eq. 17 must give the same results.
Thus I find that, close to $`\mu =-1/3`$, $$w_a(\mu )\propto Ai(r^{1/3}a(\mu +1/3))$$ (18) which solves our problem: from this in fact we see that, since $`d^2Ai(x)/dx^2=xAi(x)`$ by definition, and thus $`d^2Ai(x)/dx^2=0`$ in $`x=0`$, then we must have $$\frac{\partial ^2w_a}{\partial \mu ^2}|_{\mu =-1/3}=0.$$ (19) This is our sought-after extra condition for $`s`$; we have seen that it comes directly from demanding that the boundary condition of the problem, Eq. 11, manages to pass through the singular point of Eq. 1, which I showed to be a conventional one–turning point problem familiar from elementary quantum mechanics. By substituting into Eq. 11 I find $$\left(\frac{\partial ^2w_a}{\partial \mu ^2}\right)_{\mu =-1/3}=2^{-2(2+s)}3^{2+s}e^{-1}(s^2-5s+3)=0$$ (20) from which we obtain $`s=(5\pm \sqrt{13})/2`$. The solution with the minus sign is unacceptable: in fact, if we plug Eq. 11 into the conservation equation 2, we see (Fig. 1) that for $`s\le 3`$ the integral is $`\le 0`$. We remarked after Eq. 3b, however, that this integral had necessarily to be $`>0`$, so that we may conclude that $`3`$ is an absolute lower limit to $`s`$. Thus we discard the solution with the minus sign, and are left with the unique solution $$s=\frac{5+\sqrt{13}}{2}\simeq 4.30.$$ (21) ### 2.3 Oblique shocks Let us call $`\varphi `$ the angle that the magnetic field makes with the shock normal, in the upstream fluid. Then shocks can be classified as either subluminal or superluminal, depending upon whether, upstream, $`u/\mathrm{cos}\varphi <1`$ or $`u/\mathrm{cos}\varphi >1`$, respectively (de Hoffmann and Teller 1950). We are interested in the limit $`u\rightarrow 1`$, so that most shocks will be of the superluminal type. In this case, we could (but we won’t) move to a frame where the magnetic field is parallel to the shock surface, both upstream and downstream. However, downstream this extremely orderly field configuration appears more idealized than warranted by physical reality and observations.
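Returning to the index determination: the algebra leading from Eq. 19 to Eq. 21 can be verified symbolically. A sketch with SymPy (the $`9/16`$ prefactor in `expected` is my own bookkeeping of the constant factors and is equivalent to the prefactor quoted in Eq. 20):

```python
import sympy as sp

mu, s = sp.symbols('mu s', real=True)

# Eq. 11: the downstream pitch-angle distribution at the shock
w = (1 - mu) ** (-s) * sp.exp(-2 * (mu + 1) / (1 - mu))

# Regularity condition, Eq. 19: the second derivative vanishes at mu = -1/3
cond = sp.diff(w, mu, 2).subs(mu, sp.Rational(-1, 3))

# Eq. 20: the condition is proportional to s^2 - 5 s + 3
expected = sp.Rational(9, 16) * (s**2 - 5 * s + 3) * w.subs(mu, sp.Rational(-1, 3))
for val in (1.0, 2.5, 7.0):          # spot-check the identity numerically
    assert abs(float((cond - expected).subs(s, val).evalf())) < 1e-10

# The physically acceptable root, Eq. 21
s_star = (5 + sp.sqrt(13)) / 2
assert abs(float(cond.subs(s, s_star).evalf())) < 1e-10
assert abs(float(s_star.evalf()) - 4.3028) < 1e-3
```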
In fact, behind a relativistic shock, a large number of processes (compression, shearing, turbulent dynamo, Parker instability, two–stream instability) can generate magnetic fields; furthermore, there is no obvious reason why these fields should have large coherence lengths. In GRBs, a large number of observations of different afterglows supports this picture, the most detailed of all being those of GRB 970508, extending from a few hours to $`400\,d`$ after the burst (Waxman, Frail and Kulkarni 1998; Frail, Waxman and Kulkarni 2000, and references therein). Accurate and successful modeling fixes the post–shock ratio of magnetic to non–magnetic energy densities to $`ϵ_B\simeq 0.1`$. Notice that here the protons’ rest mass is not even the largest contribution to the non–magnetic energy density! Polarization measurements also support, albeit less cogently, the idea of a small coherence length: of the four bursts observed so far, only one has a detected polarization, at the $`1.7\%`$ level (GRB 990510, Covino et al., 1999). Thus the most plausible physical model downstream is that particles move in a locally generated turbulent, dynamically negligible magnetic field; if then we call $`l`$ the average post–shock field coherence length, and restrict our attention to particles with sufficiently large energies (i.e., with gyroradii $`r_g>l`$), we see there can be no reflection as particles approach the shock from downstream. It follows that we expect the situation downstream to be identical to that of pure pitch angle scattering. Upstream, the parallel magnetic field is also irrelevant. In fact, backward deflection of a particle occurs on a length–scale $`r_g`$, but backward diffusion of the particle by magnetic irregularities only requires a sideways deflection by an angle $`1/\gamma `$ ($`\gamma `$ being the shock Lorentz factor), for the shock to overrun the particle. This typically occurs on a length $`\eta r_g/\gamma ^2`$, with $`\eta `$ a few.
So, as $`\gamma \to \mathrm{\infty }`$, the length–scale for scattering upstream by the magnetic field increases, while that by magnetic irregularities decreases: the field is irrelevant. In the end, the same analysis as for pure pitch angle scattering applies, and the same index $`s`$ and pitch angle distributions at the shock follow. In the case of subluminal shocks, a similar comment applies. Downstream, we expect on a physical basis the same situation as for superluminal shocks. Upstream, Eq. 1 is replaced by (Kirk and Heavens 1989) $$\gamma \mathrm{cos}\varphi (u+\mu )\frac{\partial f}{\partial z}=\frac{\partial }{\partial \mu }\left(D(\mu ,p)(1-\mu ^2)\frac{\partial f}{\partial \mu }\right),$$ (22) to which the same analysis as in Subsection 2.1 can be applied. Thus we find the same $`s`$ and pitch angle distributions at the shock as above. As a corollary, it may be noticed that the above argument also implies that the results above are independent of the ratio $`\kappa _{\perp }/\kappa _{\parallel }`$ of the cross–field and parallel diffusion coefficients. ## 3 Discussion For ultrarelativistic particles the energy spectral index $`k`$ is related to $`s`$ by $$k=s-2=\frac{1+\sqrt{13}}{2}\simeq 2.30,$$ (23) which is our final result. Also, now that we know $`s`$, the final pitch angle distribution at the shock, but downstream, can be determined (Eq. 11), and is plotted in Fig. 2. None of these results depends upon the specific form of $`D(\mu ,p)`$, so that widely differing assumptions should yield precisely the same results. How does this compare with numerical work? The near–constancy of the index $`s`$ (or $`k`$) explains moderately well the results of previous authors: Kirk and Schneider (1987) find $`s=4.3`$ for their computation with the highest speed, which is however a modest Lorentz factor of $`\gamma =5`$, and a single functional dependence of $`D`$ on $`\mu `$. Heavens and Drury (1988) find again a result of $`s=4.2`$ for equally modest Lorentz factors, but for two different recipes for $`D`$. 
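As a quick numerical cross-check of Eqs. 20, 21 and 23 (mine, not part of the original text), the quadratic condition $`s^2-5s+3=0`$ can be solved directly; only the larger root survives the lower bound $`s>3`$ set by the conservation integral:

```python
import math

# Roots of s^2 - 5s + 3 = 0, the condition obtained from Eq. 20.
disc = math.sqrt(13.0)
s_plus = (5 + disc) / 2          # ~4.3028: the physical solution, Eq. 21
s_minus = (5 - disc) / 2         # ~0.6972: violates the lower limit s > 3, discarded

k = s_plus - 2                   # energy spectral index, Eq. 23

assert s_minus < 3 < s_plus
print(round(s_plus, 2), round(k, 2))   # -> 4.3 2.3
```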
Extensive numerical computations using a Monte Carlo technique (i.e., totally independent of the validity of Eq. 1) were performed by Bednarz and Ostrowski (1998), for a wide variety of assumptions about the scattering properties of the fluids. They remarked quite explicitly that the energy spectral index $`k`$ seemed to converge to a constant value, independent of shock Lorentz factor (provided $`\gamma \gg 1`$), magnetic field orientation angle $`\varphi `$, and diffusion coefficient ratio, $`\kappa _{\perp }/\kappa _{\parallel }`$. They found $`k\simeq 2.2`$. The present work confirms (for all untried forms for $`D(\mu ,p)`$) and extends their simulations (by yielding the exact value, and explicit forms for the particle angular distributions). The upstream angular distributions also agree well: Fig.3a of Bednarz and Ostrowski clearly shows that this is (for the highest displayed value of $`\gamma =27`$) a Dirac’s delta, in agreement with the large–$`\gamma `$ limit in Section 2.1. However, the downstream pitch angle distributions (in their Fig. 3b) agree well with mine, but not perfectly. Gallant, Achterberg and Kirk (1998) have claimed that there is a small error in Bednarz and Ostrowski’s distributions. As a matter of fact, my distribution (Fig. 2) agrees much better with Gallant et al.’s and Kirk and Schneider’s (1987) than Bednarz and Ostrowski’s, despite the very small shock Lorentz factors of these two papers ($`\gamma =2.3`$ and $`\gamma =5`$, respectively). Possibly, the small error in question may even explain the (small!) discrepancy between the two values of $`k`$. A limitation applies to the claim of universality of Eqs. 8, 11 and 23: I neglected any process altering the particles’ energy during the scattering. Clearly, the results of this paper only apply in the limit $`\delta p/p\ll 1`$, where $`\delta p`$ is the typical momentum transfer in each scattering event. In the large momentum limit considered here, it seems unlikely that this constraint may be violated. 
Lastly, a comment on the assumed dependence $`p^{-s}`$ of the distribution function upon particle momenta is in order. It can be seen from Eq. 1 that such a dependence is not required by this equation. To see this, let us make the usual assumption that $`D`$ is homogeneous of degree $`r`$ in $`p`$, i.e., $`D(\mu ,p)=q(\mu )p^r`$. Then by defining a new variable $`\widehat{z}\equiv z/p^r`$, we see that the form assumed by Eq. 1 after this change of variable is identical to the original one, except that now $`p`$ has altogether disappeared. At large $`z`$ (i.e., far downstream), $`f\to f_{\mathrm{\infty }}=`$ constant, and there is no $`p`$–dependence. This paradox is solved by noticing that the real problem to be solved involves both scattering (= Fermi acceleration) and injection. In this case, a typical injection momentum $`p_0`$ arises naturally, and the dimensional problem discussed above is naturally solved: we must have $`f=f(\mathrm{\dots },p/p_0,\mathrm{\dots })`$ where the dots indicate all other parameters. In the limit $`p_0\to 0`$, $`f`$ does not tend to a constant independent of $`p_0`$ as is always assumed, but tends instead to zero as $`f\propto (p_0/p)^s`$. Problems of this sort, though rare in astrophysics, are common in hydrodynamics, where they are called self–similar problems of the second kind (Zel’dovich 1956). They range from the deceptively simple laminar flow of an ideal fluid past an infinite wedge (Landau and Lifshitz 1987) to the illuminating case of the filtration in an elasto–plastic porous medium (Barenblatt 1996). It is remarkable that, in the problem at hand, no such complication is necessary to fix the all–important index $`s`$, yet the powerful methods of intermediate asymptotics (Barenblatt 1996) and the renormalization group (Goldenfeld 1992) can be brought to bear on the intermediate $`\gamma `$ cases, where no easy limiting solution can be found. 
In short, what I have done in this paper is to show that the spectrum of non–thermal particles accelerated at relativistic shocks is universal, in the sense that the energy spectral index $`k`$, and the angular distributions in both the upstream and downstream frames (Eqs. 8, 11, 23, and Fig. 2) do not depend upon the scattering function $`D(\mu ,p)`$, the shock Lorentz factor (provided of course $`\gamma \gg 1`$), the magnetic field geometry, or the ratio of cross–field to parallel diffusion coefficients. Thus we have the result that the cosmic rays’ spectra are independent of flow details in both the Newtonian (Bell 1978) and the relativistic regimes.
# Soft Phonon Anomalies in the Relaxor Ferroelectric Pb(Zn1/3Nb2/3)0.92Ti0.08O3 \[ ## Abstract Neutron inelastic scattering measurements of the polar TO phonon mode dispersion in the cubic relaxor Pb(Zn<sub>1/3</sub>Nb<sub>2/3</sub>)<sub>0.92</sub>Ti<sub>0.08</sub>O<sub>3</sub> at 500 K reveal anomalous behavior in which the optic branch appears to drop precipitously into the acoustic branch at a finite value of the momentum transfer $`q=0.2`$ Å<sup>-1</sup> measured from the zone center. We speculate this behavior is the result of nanometer-sized polar regions in the crystal. \] The discovery by Kuwata et al. in 1982 that it was possible to produce single crystals of the relaxor-ferroelectric material Pb(Zn<sub>1/3</sub>Nb<sub>2/3</sub>)<sub>1-x</sub>Ti<sub>x</sub>O<sub>3</sub> represented an important achievement in the field of ferroelectrics . Because the parent compounds Pb(Zn<sub>1/3</sub>Nb<sub>2/3</sub>)O<sub>3</sub> (PZN) and PbTiO<sub>3</sub> (PT) form a solid solution, it was possible to tune the stoichiometry of the material to lie near the morphotropic phase boundary (MPB) that separates the rhombohedral and tetragonal regions of the phase diagram . Such MPB compositions in Pb(Zr<sub>1-x</sub>Ti<sub>x</sub>)O<sub>3</sub> (PZT), the material of choice for the fabrication of high-performance electromechanical actuators, exhibit exceptional piezoelectric properties, and have generated much scientific study . However, in contrast to PZN-$`x`$PT, all attempts to date to grow large single crystals of PZT near the MPB have failed, and this has impeded progress in fully characterizing the PZT system. The dielectric and piezoelectric properties of single crystals of both PZN-$`x`$PT and PMN-$`x`$PT (M = Mg) have since been examined by Park et al. who measured the strain as a function of applied electric field . 
These materials were found to exhibit remarkably large piezoelectric coefficients $`d_{33}>2500`$ pC/N and strain levels $`S\sim 1.7`$% for rhombohedral crystals oriented along the pseudo-cubic direction. This level of strain represents an order of magnitude increase over that presently achievable by conventional piezoelectric and electrostrictive ceramics including PZT. That these ultrahigh strain levels can be achieved with nearly no dielectric loss ($`<1`$%) due to hysteresis suggests both PMN-$`x`$PT and PZN-$`x`$PT hold promise in establishing the next generation of solid state transducers . A very recent theoretical advance in our understanding of these materials occurred when it was shown using first principles calculations that the intrinsic piezoelectric coefficient $`e_{33}`$ of MPB PMN-40%PT was dramatically enhanced relative to that for PZT by a factor of 2.7 . Motivated by these experimental and theoretical results, we have studied the dynamics of the soft polar optic phonon mode in a high quality single crystal of PZN-8%PT, for which the measured value of $`d_{33}`$ is a maximum, using neutron inelastic scattering methods. In prototypical ferroelectric systems such as PbTiO<sub>3</sub> it is well known that the condensation or softening of a zone-center transverse optic (TO) phonon is responsible for the transformation from a cubic paraelectric phase to a tetragonal ferroelectric phase. This is readily seen in neutron inelastic scattering measurements made at several temperatures above the Curie temperature. In the top panel of Fig. 1 we show the dispersion of the lowest-energy TO branch in PbTiO<sub>3</sub> where at 20 K above $`T_c`$ the zone center ($`\zeta =0`$) energy has fallen to 3 meV . In relaxor compounds, however, there is a built-in disorder that produces a diffuse phase transition in which the dielectric permittivity $`ϵ`$ exhibits a broad maximum as a function of temperature at $`T_{max}`$. 
In the case of PMN and PZN, both of which have the simple $`ABO_3`$ perovskite structure, the disorder results from the $`B`$-site being occupied by ions of differing valence (either Mg<sup>2+</sup> or Zn<sup>2+</sup>, and Nb<sup>5+</sup>). This breaks the translational symmetry of the crystal. Despite years of intensive research, the physics of the observed diffuse phase transition is still not well understood . Moreover, it is interesting to note that no definitive evidence for a soft mode has been found in these systems. The bottom panel of Fig. 1 shows neutron scattering data taken by Naberezhnov et al. on PMN exactly analogous to that shown in the top panel for PbTiO<sub>3</sub>, except that the temperature is $`570`$ K higher than $`T_{max}`$. A seminal model for the disorder inherent to relaxors was first proposed by Burns and Dacol in 1983 . Using measurements of the optic index of refraction on both ceramic samples of (Pb<sub>1-3x/2</sub>La<sub>x</sub>)(Zr<sub>y</sub>Ti<sub>1-y</sub>)O<sub>3</sub> (PLZT) and single crystals of PMN and PZN , they demonstrated that a randomly-oriented local polarization $`P_d`$ develops at a well-defined temperature $`T_d`$, frequently referred to as the Burns temperature, several hundred degrees above the apparent transition temperature $`T_{max}`$. Subsequent studies have provided additional evidence of the existence of $`T_d`$ . The spatial extent of these locally polarized regions was conjectured to be on the order of several unit cells, and has given rise to the term “polar micro-regions,” or PMR . For PZN-8%PT, the formation of PMR occurs at $`T_d\sim 700`$ K, well above the cubic-to-tetragonal phase transition at $`T_c\sim 450`$ K. We find striking anomalies in the TO phonon branch (the same branch that goes soft at the zone center at $`T_c`$ in PbTiO<sub>3</sub>) that we speculate are directly caused by these PMR. Fig. 1. 
Top - Dispersion of the lowest energy TO mode and the TA mode in PbTiO<sub>3</sub>, measured just above $`T_c`$ (from Ref. ). Bottom - Dispersion curves of the equivalent modes in PMN measured far above $`T_g`$ (from Ref. ). All of the neutron scattering experiments were performed on the BT2 and BT9 triple-axis spectrometers located at the NIST Center for Neutron Research. The (002) reflection of highly-oriented pyrolytic graphite (HOPG) was used to monochromate and analyze the incident and scattered neutron beams. An HOPG transmission filter was used to eliminate higher-order neutron wavelengths. The majority of our data were taken holding the final neutron energy $`E_f`$ fixed at 14.7 meV ($`\lambda _f=2.36`$ Å) while varying the incident neutron energy $`E_i`$, and using horizontal beam collimations 60-40-S-40-40. The single crystal of PZN-8% PT used in this study weighs 2.8 grams and was grown using the high-temperature flux technique described elsewhere . The crystal was mounted onto an aluminum holder and oriented with either the cubic \[$`\overline{1}`$10\] or axis vertical. It was then placed inside a vacuum furnace capable of reaching temperatures up to 670 K. Fig. 2. Solid dots represent positions of peak scattered neutron intensity taken from constant-$`\stackrel{}{Q}`$ and constant-E scans at 500 K along both and symmetry directions. Vertical (horizontal) bars represent phonon FWHM linewidths in $`\hbar \omega `$ ($`q`$). Solid lines are guides to the eye indicating the TA and TO phonon dispersions. Two types of scans were used to collect data. Constant energy scans were performed by keeping the energy transfer $`\hbar \omega =\mathrm{\Delta }E=E_f-E_i`$ fixed while varying the momentum transfer $`\stackrel{}{Q}`$. Constant-$`\stackrel{}{Q}`$ scans were performed by holding the momentum transfer $`\stackrel{}{Q}=\stackrel{}{k_f}-\stackrel{}{k_i}`$ ($`k=2\pi /\lambda `$) fixed while varying the energy transfer $`\mathrm{\Delta }E`$. 
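The quoted fixed final energy and wavelength are mutually consistent, as can be checked with the standard non-relativistic neutron relation $`E=h^2/(2m_n\lambda ^2)`$, i.e. $`E`$ \[meV\] $`=81.81/\lambda ^2`$ with $`\lambda `$ in Å (textbook neutron-scattering kinematics, not stated in the text):

```python
import math

H2_OVER_2MN = 81.81   # h^2 / (2 m_n) in meV * Angstrom^2 (standard value)

def neutron_wavelength(E_meV):
    """Neutron wavelength in Angstrom for kinetic energy E in meV."""
    return math.sqrt(H2_OVER_2MN / E_meV)

lam = neutron_wavelength(14.7)   # the fixed final energy E_f used here
print(round(lam, 2))             # -> 2.36, matching the quoted lambda_f
```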
Using these scans, the dispersions of both the transverse acoustic (TA) and the lowest-energy transverse optic (TO) phonon modes were mapped out at a temperature of 500 K (still in the cubic phase, but well below the Burns temperature of $`\sim 700`$ K). In Fig. 2 we plot where the peak in the scattered neutron intensity occurs as a function of $`\hbar \omega `$ and $`\stackrel{}{q}`$, where $`\stackrel{}{q}=\stackrel{}{Q}-\stackrel{}{G}`$ is the momentum transfer measured relative to the $`\stackrel{}{G}=(2,2,0)`$ and $`(4,0,0)`$ Bragg reflections along the symmetry directions and , respectively. The horizontal scales of the left and right halves of the figure have been adjusted so that each corresponds to the same $`q`$ (Å<sup>-1</sup>) per unit length. The sizes of the vertical and horizontal bars represent the phonon FWHM (full width at half maximum) linewidths in $`\hbar \omega `$ (meV) and $`q`$ (Å<sup>-1</sup>), respectively, and were derived from Gaussian least-squares fits to the constant-$`\stackrel{}{Q}`$ and constant-$`E`$ scans. The lowest energy data points trace out the TA phonon branch along and . Solid lines have been drawn through these points as a guide to the eye, and are nearly identical to that shown for PMN in Fig. 1. By far the most striking feature in Fig. 2 is the unexpected collapse of the TO mode near the zone center where the polar optic branch appears to drop precipitously, like a waterfall, into the acoustic branch. This anomalous behavior, shown by the shaded regions in Fig. 2, stands in stark contrast to that of PMN at high temperature where the same phonon branch intercepts the $`\hbar \omega `$-axis at a finite energy (see bottom panel of Fig. 1). The strange drop in the TO phonon energy occurs for $`q\simeq 0.13`$ r.l.u. measured along , and for $`q\simeq 0.08`$ r.l.u. measured along (1 r.l.u. = $`2\pi /a`$ = 1.54 Å<sup>-1</sup>). It is quite intriguing to note that these $`q`$-values are both approximately equal to 0.2 Å<sup>-1</sup>. 
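These conversions are easy to reproduce from the stated relation 1 r.l.u. = $`2\pi /a`$ = 1.54 Å<sup>-1</sup> (a check of the quoted numbers, not new data):

```python
import math

RLU = 1.54                 # 1 r.l.u. = 2*pi/a in inverse Angstrom, as quoted
a = 2 * math.pi / RLU      # cubic lattice constant, ~4.08 Angstrom

q = 0.13 * RLU             # drop position in inverse Angstrom, ~0.20
length = 2 * math.pi / q   # real-space scale 2*pi/q, ~31 Angstrom
cells = length / a         # ~7.7 lattice constants, i.e. 7 to 8 unit cells

print(round(q, 2), round(length), round(cells, 1))   # -> 0.2 31 7.7
```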
To clarify the nature of this unusual observation, we show an extended constant-$`E`$ scan taken at $`\mathrm{\Delta }E=6`$ meV in Fig. 3 along with a constant-$`\stackrel{}{Q}`$ scan in the insert. Both scans were taken at the same temperature of 500 K, near the (2,2,0) Bragg peak, and along the direction. The small horizontal bar shown under the left peak of the constant-$`E`$ scan represents the instrumental FWHM $`q`$-resolution, and is clearly far smaller than the intrinsic peak linewidth. We see immediately that the constant-$`\stackrel{}{Q}`$ scan shows no evidence of any well-defined phonon peak, most likely because the phonons near the zone center are overdamped. However, the constant-$`E`$ scan indicates the presence of a ridge of scattering intensity at $`\zeta =q=0.13`$ r.l.u., or about 0.2 Å<sup>-1</sup>, that sits atop the scattering associated with the overdamped phonons. Thus the sharp drop in the TO branch that appears to take place in Fig. 2 does not correspond to a real dispersion curve as such. Rather, it simply indicates a region of ($`\hbar \omega ,q`$)-space in which the phonon scattering cross section is enhanced. The origin of this enhancement is unknown; however, we speculate that it is a direct result of the PMR described by Burns and Dacol . If the length scale associated with this enhancement is of order $`2\pi /q`$, this corresponds to $`\sim 31`$ Å, or about 7 to 8 unit cells, consistent with Burns and Dacol’s conjecture. Fig. 3. Single constant-E scan measured along at 6 meV at 500 K near the (2,2,0) Bragg peak. Solid line is a fit to a double Gaussian function of $`\zeta `$. The inset shows no peak in the scattered intensity measured along the energy axis. The arrow indicates the position of the constant-E scan. Limited data were also taken as a function of temperature to determine the effect on this anomalous ridge of scattering. In Fig. 
4 we show two constant-$`E`$ scans, both measured at an energy transfer $`\mathrm{\Delta }E=5`$ meV along the direction, with one taken at 450 K, and the other at 600 K. The solid and dashed lines are fits to simple Gaussian functions of $`q`$. As is clearly seen, the ridge of scattering shifts to smaller $`q`$, i.e. towards the zone center, with increasing temperature. These data strongly suggest a picture, shown schematically in the inset to Fig. 4, in which the ridge of scattering evolves into the expected classic TO phonon branch behavior at higher temperature. A single data point, obtained briefly at 670 K to avoid damaging the crystal, is plotted in the inset to Fig. 4, and tentatively corroborates this picture. We have discovered an anomalous enhancement of the polar TO phonon scattering cross section that occurs at a special value of $`q=0.2`$ Å<sup>-1</sup>, independent of whether we measure along the or direction. We believe this to be direct microscopic evidence of the PMR proposed by Burns and Dacol . The presence of such small polarized regions of the crystal above $`T_c`$ should effectively prevent the propagation of long-wavelength ($`q\to 0`$) soft mode phonons. A similar conclusion was reached by Tsurumi et al. based on dielectric measurements of PMN . The observation that the phonon scattering cross section is enhanced at 0.2 Å<sup>-1</sup> from the zone center gives a measure of the size of the PMR consistent with the estimates of Burns and Dacol. If true, then this unusual behavior should be observed in other related relaxor systems. Indeed, tentative evidence for this has already been observed at room temperature in neutron scattering measurements on PMN . This enhancement should also be reflected in x-ray diffuse scattering intensities, although it may be masked by the superposition of strong acoustic modes. Fig. 4. Two constant-E scans measured along at 5 meV at different temperatures. 
The peak shifts towards the zone center with increasing temperature. The inset suggests schematically how the TO branch dispersion recovers at higher temperatures. Our picture is not yet complete. Whereas Fig. 3 demonstrates that these anomalies appear as ridges on top of a broad overdamped cross section, the complete nature of this cross section can only be revealed by an extensive contour map of the Brillouin zone, for which we lack sufficient data. Another important aspect which requires further study is exactly how the “waterfall” evolves, at much higher temperatures, into the standard optic mode dispersion as shown in Fig. 1 for PMN. We have not yet carried out this experiment because of the concern of possible crystal deterioration at these high temperatures under vacuum . We intend to do so only after all other key experiments have been completed . Our current picture suggests that the TO phonon dispersion should change if one alters the state of the PMR. It is known that a macro ferroelectric phase can be created in these relaxor crystals by cooling the crystal in a field, or by application, at room temperature, of a sufficiently strong field. We are now planning neutron inelastic measurements on such a crystal, as well as on PZN. We thank S. Vakhrushev, S. Wakimoto, as well as D. E. Cox, L. E. Cross, R. Guo, B. Noheda, N. Takesue, and G. Yong for stimulating discussions. Financial support by the U. S. Dept. of Energy under contract No. DE-AC02-98CH10886, by ONR under project MURI (N00014-96-1-1173), and under resource for piezoelectric single crystals (N00014-98-1-0527) is acknowledged. We acknowledge the support of NIST, U. S. Dept. of Commerce, in providing the neutron facilities used in this work.
# Comment on “Triviality of the Ground State Structure in Ising Spin Glasses” In a recent, very interesting paper, Palassini and Young have shown that it is possible to get useful information about the nature of the low $`T`$ phase of $`3D`$ Ising spin glasses by studying the behavior of the ground state (GS) after changing the boundary conditions (BC) of an $`L^3`$ lattice system from periodic ($`P`$) to anti-periodic ($`AP`$). They analyze GS obtained with the same realization of the quenched disorder and different BC. Let $`P(M,L)`$ be the probability that the spins in an $`M^3`$ cube remain in the same configuration (apart from a full reversal, due to the global $`Z_2`$ symmetry at zero magnetic field) when we change BC from $`P`$ to $`AP`$ . The behavior of $`P(M,L)`$ when $`L`$ goes to infinity is very important from the theoretical point of view. In the droplet model (DM) $`P(M,L)\sim L^{-\lambda }`$, where $`\lambda =D-D_s`$, $`D`$ is the space dimension and $`D_s`$ is the fractal dimension of the interface, while in the usual form of the Replica Symmetry Breaking (RSB) approach $`P(M,L)\to A(M)`$, where $`A(M)`$ is a non zero function (i.e. the interface is space filling). In it is shown that the data for $`P(2,L)`$ can be well fitted by a power law with a non-zero $`\lambda `$ (as suggested by the DM), although they can also be fitted as $`a+bL^{-1}+cL^{-2}`$, with a non zero $`a`$. In this comment we point out that one can better discriminate between the DM and the RSB approach if one extends their analysis to additional quantities. To this end we have computed the GS in systems with side up to $`L=12`$ (with Gaussian disorder) and we have compared the GS obtained with $`P`$ and $`AP`$ BC. If in the large volume limit the interface is a homogeneous fractal that can be characterized by a single fractal dimension (i.e. 
if it does not have a multi-fractal behavior), and if the relation $`\lambda =D-D_s`$ is correct, the probability that the interface does not intersect a region $``$, whose size is proportional to the system size, goes to a limit which is a non-trivial function of the shape of the region $``$. If the interface is space filling, such a probability always goes to zero. This argument implies that under the previous assumptions in the DM (for large $`L`$) $`P(M,L)\simeq g(ML^{-1})`$. We plot in figure (1) our results for boxes of size $`M`$ $`=`$ $`2`$, $`3`$, $`4`$ versus $`ML^{-1}`$. The data are very far from collapsing on a single universal curve (they are consistent with a smooth behavior in $`L^{-1}`$, and are well fitted by a second order polynomial in $`L^{-1}`$). Stronger hints are obtained if we consider the probability $`P_L`$ of finding that a full $`yz`$ plane of $`L^2`$ spins does not hit the interface when we go from $`P`$ to $`AP`$ in the $`x`$ direction. This corresponds in the previous argument to considering a region $``$ of size $`L\times L\times 1`$. In figure (2) we plot $`P_L`$ versus $`L`$. $`P_L`$ can be roughly fitted as $`L^{-\gamma }`$, with a relatively large value of $`\gamma `$ (i.e. $`\gamma \simeq 1.5`$–$`2.0`$). In other words the probability that the interface hits $``$ goes to one (or to a value very close to one) when the volume goes to infinity. We have shown that, extending the innovative analysis of Palassini and Young to a larger set of observables, one finds serious problems with the usual DM interpretation: the most natural scenario is based on the fact that the interface is space filling, as predicted by the RSB approach. Other possibilities, like the presence of very strong corrections to the scaling, or that the relation $`\lambda =D-D_s`$ is not valid and/or the interface is multi-fractal, are less plausible. We are grateful to M. Palassini and P. 
Young for pointing out an error in the interpretation of the data in a first version of this comment and for a very useful correspondence. E. Marinari and G. Parisi, Università di Roma La Sapienza
# USE OF CRYSTALS FOR HIGH ENERGY PHOTON BEAM LINEAR POLARIZATION CONVERSION INTO CIRCULAR N.Z. Akopov<sup>a</sup>, A.B. Apyan<sup>b</sup>, S.M. Darbinyan<sup>c</sup> Yerevan Physics Institute, Yerevan, 375036, Armenia <sup>a</sup> E-mail:akopov@inix.yerphi.am <sup>b</sup> E-mail:aapyan@jerewan1.yerphi.am <sup>c</sup> E-mail:simon@inix.yerphi.am > The possibility of converting the linear polarization of a photon beam into circular polarization at photon energies of hundreds of GeV with the use of crystals is considered. The energy and orientation dependences of the refractive indexes are investigated for diamond, silicon and germanium crystal targets. To maximize the figure of merit, the corresponding optimal crystal orientation angles and thicknesses are found. The degree of circular polarization and the intensity of the photon beam are estimated, and the possibility of experimental realization is discussed. The method of converting linear polarization into circular polarization at high energies using single crystals (similar to a quarter-wave plate in optics) was proposed by N. Cabibbo in the sixties, but its experimental verification has not yet been carried out. In the last decade the interest in this problem has increased, and there have been proposals to verify it experimentally . This interest is connected with planned experiments with circularly polarized photon beams at energies of tens and hundreds of GeV for the investigation of fundamental problems of theory. The possibility of using circularly polarized photon beams in order to measure $`\mathrm{\Delta }G`$, the polarized gluon distribution in nucleons, has been considered; this quantity is necessary for understanding the spin crisis problem. The investigation of this problem is of great importance since the results of the EMC Collaboration experiment show that only $`30\%`$ of the nucleon spin is carried by quarks . 
The problem of the spin crisis can be investigated via the processes of production of jets and heavy quarks through photon-gluon fusion , production of high transverse momentum mesons , and production of $`J/\psi `$ and $`\rho ^0`$-mesons by circularly polarized photons. In particular, for processes of fusion of polarized photons with gluons of a polarized nucleon target, the asymmetry in the production rates of charm-anticharm quark pairs for opposite polarizations of the target allows one to estimate the gluon contribution to the proton spin. According to theoretical calculations and existing proposals a large asymmetry, of $`\sim 40\%`$, is anticipated. The estimated value of $`\mathrm{\Delta }G/G`$ obtained recently in the HERMES experiment (HERA, DESY) at the photon energy of 27.5 GeV is of order 0.41, which corresponds to an asymmetry of order 0.28 . Circularly polarized photon beams can be produced by longitudinally polarized electron beams; however, the energy of electrons at modern accelerators is not sufficiently high to realize the above mentioned experiments. Therefore it is necessary to continue the work on the production of circularly polarized photon beams at high energy proton accelerators (CERN, Fermilab). The first stage of the NA59 experiment, devoted to the production of a linearly polarized photon beam, was performed at CERN on the 180 GeV SPS electron beam, using a 1.5 cm silicon radiator. An average linear polarization of $`40\%`$ was obtained for the photons in the energy range of 90-140 GeV. The second stage of the experiment, on conversion of the photon beam linear polarization, is planned for this year. In this connection it is important to carry out calculations on this problem in different crystals, in order to choose the most suitable one for the polarization conversion experiment and to estimate the optimal thickness of the crystal plate. In the present work the problem of photon polarization conversion at energies of 100-300 GeV for the commonly used C, Si and Ge crystals is investigated. 
The energy and orientation dependences of the real parts of the refractive indexes are calculated, and the optimal thicknesses of the crystal plates, which provide photon beams with the maximal value of the figure of merit (FOM) in the sense of the degree of circular polarization and beam intensity, are estimated. The photon beam orientation with respect to the crystal axes is defined in the following way. Let the chosen three orthogonal axes of the cubic crystal be (1,2,3). The beam orientation is defined by the angle $`\theta `$ between the axis 3 and the direction of the incident photon beam $`\stackrel{}{n}`$ and by the angle $`\alpha `$ between the projection of $`\stackrel{}{n}`$ on the plane (1,2) and the axis 1. The photon beam polarization direction is defined with respect to the incidence plane containing the photon direction $`\stackrel{}{n}`$ and axis 3, and the indexes $``$ and $``$ correspond to the cases $`\phi _0`$ =0 and $`\phi _0=\pi /2`$ (Fig.1). The coordinate system (x, y, z) connected with the beam direction is chosen as shown in Fig.1. The projections $`(g_x,g_y,g_z)`$ of the reciprocal lattice vector $`\stackrel{}{g}=\stackrel{}{g}_1\stackrel{}{n}_1+\stackrel{}{g}_2\stackrel{}{n}_2+\stackrel{}{g}_3\stackrel{}{n}_3`$ are equal to: $`g_{}=g_z=\theta (G_1\mathrm{cos}\alpha +G_2\mathrm{sin}\alpha ),`$ $`g_x=G_1\mathrm{cos}\alpha +G_2\mathrm{sin}\alpha ,`$ (1) $`g_y=-G_1\mathrm{sin}\alpha +G_2\mathrm{cos}\alpha ,`$ where $`\stackrel{}{g_i}`$ are the vectors along the crystal basis axes $`(|\stackrel{}{g_i}|=2\pi /a)`$ and $`G_1,G_2,G_3`$ are the projections of $`\stackrel{}{g}`$ on the axes 1, 2, 3. When a high energy photon beam propagates through a medium, the photons are absorbed mainly due to the pair production mechanism on the medium atoms, and the photon beam attenuates with penetration depth. 
Hence the refractive index is a complex quantity, and its imaginary part is connected with the pair production cross section by the equation Im$`n=W/2\omega `$, where $`W=N\sigma `$ is the absorption coefficient, $`N`$ is the density of atoms and $`\sigma `$ is the pair production cross section. In crystals the cross section (and accordingly the refractive index) depends upon the photon beam polarization direction with respect to the crystal axes. The real parts of the refractive indexes are defined via the imaginary parts by dispersion relations . This interesting prediction of photon birefringence at high energies is not at all obvious, and its experimental verification is of great importance. Let us consider a linearly polarized photon beam incident upon the crystal as shown in Fig.1. The polarization of the photon beam is described by the Stokes parameters $`\xi _1=P_0\mathrm{sin}2\phi _0,\xi _3=P_0\mathrm{cos}2\phi _0`$, where $`P_0`$ and $`\phi _0`$ are the degree and direction of polarization. The beam intensity and Stokes parameters beyond the crystal plate of thickness $`l`$ are defined by the formulae : $`I(l)=I(0)[\mathrm{cosh}(al)+P_0\mathrm{cos}(2\phi _0)\mathrm{sinh}(al)]e^{-\overline{W}l},`$ $`\xi _1(l)={\displaystyle \frac{P_0\mathrm{sin}(2\phi _0)\mathrm{cos}(bl)}{\mathrm{cosh}(al)+P_0\mathrm{cos}(2\phi _0)\mathrm{sinh}(al)}},`$ $`\xi _2(l)={\displaystyle \frac{P_0\mathrm{sin}(2\phi _0)\mathrm{sin}(bl)}{\mathrm{cosh}(al)+P_0\mathrm{cos}(2\phi _0)\mathrm{sinh}(al)}},`$ (2) $`\xi _3(l)={\displaystyle \frac{P_0\mathrm{cos}(2\phi _0)\mathrm{cosh}(al)+\mathrm{sinh}(al)}{\mathrm{cosh}(al)+P_0\mathrm{cos}(2\phi _0)\mathrm{sinh}(al)}},`$ where $`\overline{W}=(W_{}+W_{})/2,a=(W_{}-W_{})/2,b=\omega Re(n_{}-n_{})`$. 
As follows from (2), $$\xi _1^2(l)+\xi _2^2(l)+\xi _3^2(l)=1+\frac{P_0^2-1}{[\mathrm{cosh}(al)+P_0\mathrm{cos}(2\phi _0)\mathrm{sinh}(al)]^2}$$ (3) and in the case of a completely linearly polarized incident photon beam the degree of polarization is conserved: $$\xi _1^2(l)+\xi _2^2(l)+\xi _3^2(l)=\xi _1^2(0)+\xi _3^2(0)$$ (4) and this condition can be used to determine the circular polarization after measuring the linear polarization of the surviving photon beam. Formulae (2) permit the calculation of the parameters of the surviving photon beam as functions of the orientation angle $`\theta `$ and the incident-beam polarization direction $`\phi _0`$. From the expressions for $`\xi _2(l)`$ and $`I(l)`$ it is clear that the crystal must be oriented at angles corresponding to the maximal differences of the real parts, in order to decrease the beam attenuation or the required thickness of the crystal plate. The results of the calculations for C, Si and Ge crystals at energies of 100, 200 and 300 GeV are shown in Figs. 2-4. The photon beam is oriented parallel to the plane $`(1\overline{1}0)`$ and makes an angle $`\theta `$ with the axis . The absorption coefficients and, correspondingly, the differences of the real parts are calculated with the coherent theory , whose validity is not in question since its results are in good agreement with experimental data at the angles considered . The criterion of crystal efficiency is the maximal value of the figure of merit FOM=$`\xi _2^2(l)I(l)`$ as a function of the orientation angle $`\theta `$ and the crystal thickness $`l`$. For the determination of $`\theta ^{opt}`$ and $`l^{opt}`$ the calculations are carried out at $`\phi _0=45^{\circ }`$. This choice of the incident-beam polarization direction is due to the fact that at the energies considered the condition $`al\ll 1`$ is fulfilled and $`\xi _2(l)`$ reaches its maximum value at that angle. 
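A toy scan over the plate thickness shows how $`l^{opt}`$ arises. In the limit $`al\ll 1`$ and at $`\phi _0=45^{\circ }`$, formulae (2) reduce to $`I=e^{-\overline{W}l}`$ and $`\xi _2=P_0\mathrm{sin}(bl)`$; the numbers below are illustrative placeholders, not results for real crystals.

```python
import numpy as np

# Illustrative placeholder parameters (1/cm), not actual C/Si/Ge values.
Wbar = 0.5            # mean absorption coefficient
b = 0.8               # omega * Re(n_par - n_perp)
P0 = 1.0              # fully linearly polarized incident beam

l = np.linspace(0.01, 20.0, 20000)     # plate thickness grid, cm
I = np.exp(-Wbar * l)                  # beam intensity in the a*l << 1 limit
xi2 = P0 * np.sin(b * l)               # circular polarization degree
fom = xi2**2 * I                       # figure of merit
l_opt = l[np.argmax(fom)]
```

Setting the derivative of $`\mathrm{sin}^2(bl)e^{-\overline{W}l}`$ to zero gives the condition $`\mathrm{tan}(bl^{opt})=2b/\overline{W}`$, which the grid search reproduces.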
The results given in Table 1 show that the optimal angles $`\theta ^{opt}`$ decrease with increasing photon energy, while remaining significantly larger than the characteristic angle of the quasiclassical theory at the energies considered . The value of the FOM is greatest for diamond, but a silicon crystal is more suitable for a conversion experiment, since silicon can be grown to the required thickness and, unlike germanium, does not need cooling. The authors would like to thank Prof. M.L. Ter-Mikaelian for useful discussions. This work was supported by the ISTC Grant A-099. Figure Captions Fig.1. Crystal orientation with respect to the polarized photon beam. Fig.2. Curves for diamond, silicon and cooled germanium crystals for a photon beam energy of 100 GeV. Solid curves for Si, dashed curves for Ge and dotted curves for C. a - the differences of the real parts of the refractive indexes as a function of $`\theta `$. b and c - the photon beam circular polarization degree and intensity, respectively, as functions of the crystal plate thickness. Fig.3. The same curves as in Fig. 2 for a photon beam energy of 200 GeV. Fig.4. The same curves as in Fig. 2 for a photon beam energy of 300 GeV.
# Underground Muon Physics with the MACRO experiment ## 1 INTRODUCTION Muons detected deep underground are a useful tool for several physics and astrophysics topics. These muons are the decay products of mesons originating in the very first hadronic interactions in the atmosphere or in secondary interactions during the shower development. Therefore, their study can provide much information about the primary cosmic ray (CR) composition and/or high-energy hadronic interactions. Muons arriving in the underground Gran Sasso Laboratory have crossed at least $`h=3100`$ $`hg/cm^2`$ of standard rock, corresponding to a muon energy cut $`E_\mu \gtrsim 1.3`$ TeV at the surface. This means that the primary CR energies range from a few TeV/nucleon up to the maximum energies, well beyond the “knee”. The MACRO experiment has collected a large amount of muon data in the last decade, at a rate of $`6.6\times 10^6`$ muon events per live year, about 6% of which are multiple muon events. Many underground observables have been studied. The multiplicity distribution, i.e. the rate of muon events as a function of their multiplicity, is a quantity strongly dependent on the primary composition model. A detailed analysis of the primary composition has been performed by the MACRO collaboration : one of the results is that the data prefer a composition model with an average mass slightly increasing with energy above the knee<sup>1</sup><sup>1</sup>1for a more detailed discussion of composition studies see the contribution of E. Scapparone in these proceedings. Nevertheless, in the context of composition studies the knowledge of the hadronic interaction model is crucial. The main contribution to the systematic uncertainties in these analyses is due to the interaction models adopted, and it is important to find new observables to test the reliability of the models implemented in the Monte Carlo simulations. 
Moreover, it is intrinsically important to test interaction models in kinematical regions not yet explored at accelerators or colliders. For instance, about 5% of MACRO muon data comes from $`pp`$ interactions with $`\sqrt{s}>`$ 2 TeV, while about 30% of the muons observed are the decay products of mesons produced with a (pseudo)rapidity $`\eta _{cm}>`$ 5. The situation is even more pronounced if we consider that part of the primary interactions in the atmosphere are nucleus-nucleus interactions, for which very few data at energies $`E_{lab}\gtrsim 150`$ A GeV are available. In the following, we will focus on the decoherence analysis and on the cluster analysis, two different tools able to extract information on the interaction model adopted in the simulation codes. ## 2 THE MACRO DETECTOR The MACRO detector is a large-area apparatus located in hall B of the Gran Sasso Laboratory at an average depth of 3800 hg/cm<sup>2</sup> of standard rock ($`E_\mu \gtrsim 1.3`$ TeV). It has a modular structure, organized in six almost identical “supermodules” covering a horizontal surface area of about 1000 m<sup>2</sup>. The apparatus is equipped with three different and independent sub-detectors: streamer tube chambers for particle tracking, scintillator counters for timing and energy-loss reconstruction, and nuclear track-etch detectors optimized for the search for magnetic monopoles. The wire view of the streamer tube system is complemented with a second view, disposed at $`26.5^{\circ }`$ with respect to the wire view, realized with aluminium pick-up strips. The spatial resolutions are $`\sigma _W`$=1.1 cm and $`\sigma _S`$=1.6 cm for the wire and strip views, respectively. This arrangement allows 3-D track reconstruction with an intrinsic pointing resolution of $`0.2^{\circ }`$. 
## 3 SIMULATION TOOLS The full simulation chain used to interpret MACRO data is composed of an event generator modelling high-energy hadronic interactions, included in a shower propagation code which follows particles above threshold down to a given atmospheric depth. A muon transport code propagates the muons through the mountain rock overburden above the apparatus, and a detector simulator produces an output in the same format as real data. $``$ Event generator Most of the analyses of MACRO data have been carried out using the original HEMAS interaction model , based on parametrizations of minimum-bias events collected at the $`Sp\overline{p}S`$ collider at CERN . According to these results, the charged multiplicity is sampled from a negative binomial distribution and the transverse momentum contains a power-law component. The model includes nuclear-target effects, and extrapolations to higher energies are performed in the context of log$`(s)`$ physics. The interaction model of Ref. (called “NIM85”) has been tested in a parametrized form with MACRO muon data . This model neglects some important experimental results: for instance, the charged multiplicity is sampled from a Poisson distribution and the transverse momentum distribution contains only a pure exponential functional form. Presently new interaction models are under study: DPMJET , QGSJET , SIBYLL and HDPM, the original interaction model of the shower simulation code CORSIKA . These are phenomenological models in which the interactions are treated at the parton level, with the exception of the HDPM generator, which is based on the DPM model but built from parametrized results. They share the reliance on Regge-Gribov theory for the modelling of the soft part of the interactions, where perturbative QCD cannot be applied. 
Nevertheless, the transverse component of the interactions is not constrained by the theory and is introduced “by hand”, according to experimental results such as the seagull effect or the Cronin effect in nuclear interactions. In this context, it is useful to compare the models with one another in order to estimate the systematic uncertainty associated with the unknown transverse structure of the interactions. $``$ Cascade code Two different shower propagation codes have been used in the analyses: HEMAS<sup>2</sup><sup>2</sup>2the name HEMAS refers both to the hadronic interaction model and to the cascade code and CORSIKA . HEMAS is conceived as a fast tool to compute the hadronic, electromagnetic and muon components of air showers. It has been extensively used in the MACRO collaboration and has received many improvements since its first release. It introduces some approximations in the shower development, so that the model can be used to follow only particles with a minimum energy in the atmosphere of $`E>500`$ GeV. The electromagnetic size is computed by means of a semi-analytical method. It can be used interfaced with the DPMJET and HEMAS interaction models. Recently, the CORSIKA Monte Carlo code , generally used for surface EAS arrays, has been interfaced with the muon transport code to propagate muons down to the Gran Sasso depth. This code has been used only in the analysis of high-multiplicity events. $``$ Muon transport code The muon propagation in the rock has been implemented using the PROPMU package , which represents an improvement with respect to the propagation model included in the original HEMAS version. It takes into account multiple Coulomb scattering as well as muon energy loss due to discrete processes, such as bremsstrahlung, pair production and photonuclear interactions. $``$ Detector simulator The response of the apparatus is simulated by means of a GEANT-based code, called GMACRO. 
It reproduces all the relevant physical processes in the detector and produces an output in the same format as real data, so that simulated events can be processed with the same offline chain as the real data. ## 4 ANALYSIS RESULTS ### 4.1 Decoherence Function The decoherence function, defined as the distribution of the muon pair separations in multiple muon events, is mainly connected with the muon lateral distribution with respect to the shower axis and is very sensitive to the transverse structure of the hadronic interaction models. The shape of the decoherence distribution is instead weakly dependent on the primary composition model. Therefore, the study of this function allows, to some extent, the two effects to be disentangled. MACRO has studied the decoherence function up to the maximum distance allowed by the apparatus (about 70 m). The unfolded decoherence function (the distribution corrected for detector effects) is shown in Fig.1, where it is compared with the predictions of the HEMAS Monte Carlo. A further check of the reliability of the simulation code has been performed in different rock/zenith windows, constraining some components of the shower development with respect to others . Again, the comparison shows good agreement between data and Monte Carlo. Finally, the study of the decoherence function at very small distances has shown that the QED process $`\mu ^\pm +N\to \mu ^\pm +N+\mu ^++\mu ^{}`$ must be taken into account in order to reproduce the experimental data . This is shown in Fig.2, where the low-distance region of the decoherence function is shown before and after the correction. The contribution of this process is negligible compared to the $`e^+e^{}`$ pair production process in the GeV range, but it becomes progressively more important in the TeV region . 
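A minimal sketch of how the decoherence observable is built from a single event (synthetic muon coordinates, not MACRO data):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
# Synthetic transverse positions (metres) of 12 muons in one bundle;
# a real analysis would use the tracked coordinates in the apparatus.
muons = rng.normal(scale=10.0, size=(12, 2))

# Decoherence variable: separation of every muon pair in the event.
# Histogramming these over many events gives the decoherence function.
separations = [float(np.hypot(*(a - b))) for a, b in combinations(muons, 2)]
```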
### 4.2 Cluster Analysis The search for substructures inside muon bundles can provide additional information with respect to traditional methods . In some events muon bundles appear to be split into “clusters”, and one may ask whether this feature is the result of simple statistical fluctuations in the muon lateral distribution or whether there is some dynamical correlation connected with the development of the shower in the atmosphere. From the experimental point of view, we select muon bundles with at least 8 muons underground ($`N_\mu \geq 8`$), corresponding to CR primary energies $`E_{primary}\gtrsim 1000`$ TeV. The search for muon clusters is performed by means of an iterative cluster-finding algorithm, which groups the muons depending on the choice of a free parameter called $`\chi _{cut}`$ (for the definition of this parameter see Ref. ). In Ref. it has been pointed out that this method is sensitive both to the primary composition model and to the hadronic interaction model. The comparison was made between two extreme composition models (the “heavy” and “light” composition models ) and between two very different hadronic interaction models (HEMAS and NIM85). Most of the effect can be explained as the result of fluctuations of the muon density inside the bundles. Considering that the density (namely the average number of muons per unit area) depends both on the primary mass number and on the modelling of the meson transverse momentum, we can expect the cluster effect to be sensitive to the composition model and to the hadronic interaction model at the same time. On the other hand, if we consider the interaction models quoted in the previous sections, the sensitivity of the cluster effect to the interaction model becomes weaker. 
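The grouping step can be illustrated with a simple single-linkage clustering of muon positions in the transverse plane. This is only a schematic stand-in: the actual MACRO algorithm and its $`\chi _{cut}`$ parameter are defined in the reference quoted above, and a plain distance cut is used here instead.

```python
import numpy as np

def count_clusters(points, d_cut):
    """Schematic single-linkage grouping: muons closer than d_cut end up
    in the same cluster. A stand-in for the chi_cut-based MACRO
    algorithm, not its actual definition."""
    labels = list(range(len(points)))
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if np.hypot(*(points[i] - points[j])) < d_cut:
                old, new = labels[j], labels[i]
                labels = [new if lab == old else lab for lab in labels]
    return len(set(labels))
```

Scanning the cut parameter (here a plain distance; in the real analysis, $`\chi _{cut}`$) yields cluster rates analogous to those reported by the analysis.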
This weaker sensitivity is shown in Fig.3, where the cluster rates are reported as a function of the parameter $`\chi _{cut}`$ for different interaction models and for a fixed primary composition model (the model derived from the fit of the MACRO multiplicity distribution ). In this case we are forced to enhance the sensitivity by applying selection criteria which correlate the underground substructures with the features of the hadronic interactions in the atmosphere. Apart from statistical fluctuations, a Monte Carlo study has revealed two other mechanisms responsible for the cluster effect: $``$ muons belonging to the same cluster have a larger probability of having a common parent meson in the steps of the shower tree generation; $``$ muons belonging to the same cluster are the decay products of mesons highly correlated in the phase space of the very first hadronic interactions in the atmosphere. We considered only events reconstructed as two-cluster events by the algorithm with a fixed $`\chi _{cut}`$, and we studied the kinematical variables of the parent mesons in the first interaction of the shower, namely the pseudorapidity $`\eta `$ and the azimuthal angle $`\varphi `$. The typical topology of these selected events is a muon-rich cluster close to the shower axis, generally the remnant of the shower development after many steps, and a far cluster with few muons directly generated in the very first hadronic interactions. Fig.4 shows the distributions of the relative distance in the $`\eta \varphi `$ space of pairs of muon parent mesons originating in the first interactions: the topological selection of muons belonging to the same or to different clusters translates into a strong selection of different phase-space regions of the hadronic interactions. A quantitative computation of the relative contributions of these effects is at present under study. ## 5 CONCLUSIONS The MACRO experiment has collected a large amount of muon data in the last decade. 
The analyses performed with these data allowed conclusions to be drawn on several topics in muon physics. The dimensions and granularity of the detector allowed the detection of multiplicities up to about 40, under more than 3000 hg/cm<sup>2</sup> of rock overburden. The study of the multiplicity distribution has been used to extract information on the primary cosmic ray composition. The modelling of the transverse component of the interaction models at TeV energies, connected with the lateral distribution of cosmic ray showers, is one of the main sources of uncertainty in the simulation codes. The analysis of the decoherence function has shown the reliability of the HEMAS Monte Carlo, used in most of the MACRO analyses. The study of second-order effects, like the search for jet substructures inside muon bundles, is a useful tool to add new information with respect to conventional analyses. The physical interpretation of the results has shown the dynamical origin of the effect, connected with the development of the shower in the atmosphere.
# Randomly dilute spin models: a six-loop field-theoretic study. ## I Introduction. The critical behavior of systems with quenched disorder is of considerable theoretical and experimental interest. A typical example is obtained by mixing an (anti)-ferromagnetic material with a non-magnetic one, obtaining the so-called dilute magnets. These materials are usually described in terms of a lattice short-range Hamiltonian of the form $$\mathcal{H}_x=-J\sum_{<ij>}\rho _i\rho _j\stackrel{}{s}_i\cdot \stackrel{}{s}_j,$$ (1) where $`\stackrel{}{s}_i`$ is an $`M`$-component spin and the sum extends over all nearest-neighbor sites. The quantities $`\rho _i`$ are uncorrelated random variables, which are equal to one with probability $`x`$ (the spin concentration) and zero with probability $`1-x`$ (the impurity concentration). The pure system corresponds to $`x=1`$. One considers quenched disorder, since the relaxation time associated with the diffusion of the impurities is much larger than all the other typical time scales, so that, for all practical purposes, one can consider the positions of the impurities fixed. For sufficiently low spin dilution $`1-x`$, i.e. as long as one is above the percolation threshold of the magnetic atoms, the system described by the Hamiltonian $`\mathcal{H}_x`$ undergoes a second-order phase transition at $`T_c(x)<T_c(x=1)`$ (see e.g. Ref. for a review). The relevant question in the study of this class of systems is the effect of the disorder on the critical behavior. The Harris criterion states that the addition of impurities to a system which undergoes a second-order phase transition does not change the critical behavior if the specific-heat critical exponent $`\alpha _{\mathrm{pure}}`$ of the pure system is negative. If $`\alpha _{\mathrm{pure}}`$ is positive, the transition is altered. Indeed, in disordered systems the exponent $`\nu `$ satisfies the inequality $`\nu \geq 2/d`$ — this fact has, however, been questioned in Refs. 
— and therefore, by hyperscaling, $`\alpha _{\mathrm{random}}`$ is negative. Thus, if $`\alpha _{\mathrm{pure}}`$ is positive, $`\alpha _{\mathrm{random}}`$ differs from $`\alpha _{\mathrm{pure}}`$, so that the pure system and the dilute one have different critical behaviors. In pure $`M`$-vector models with $`M>1`$, the specific-heat exponent $`\alpha _{\mathrm{pure}}`$ is negative; therefore, according to the Harris criterion, no change in the asymptotic critical behavior is expected in the presence of weak quenched disorder. This means that in these systems disorder leads only to irrelevant scaling corrections. Three-dimensional Ising systems are more interesting, since $`\alpha _{\mathrm{pure}}`$ is positive. In this case, the presence of quenched impurities leads to a new universality class. Theoretical investigations, using approaches based on the renormalization group , and numerical Monte Carlo simulations , support the existence of a new random Ising fixed point describing the critical behavior along the $`T_c(x)`$ line: the critical exponents are dilution-independent (for sufficiently low dilution) and different from those of the pure Ising model. Experiments confirm this picture. Crystalline mixtures of an Ising-like uniaxial antiferromagnet with short-range interactions (e.g. FeF<sub>2</sub>, MnF<sub>2</sub>) with a nonmagnetic material (e.g. ZnF<sub>2</sub> ) provide a typical realization of the random Ising model (RIM) (see e.g. Refs. ). Some experimental results are reported in Table I. This is not a complete list, but it gives an overview of the experimental state of the art. Other experimental results can be found in Refs. . The experimental estimates are definitely different from the values of the critical exponents for pure Ising systems, which are (see Ref. and references therein) $`\gamma =1.2371(4)`$, $`\nu =0.63002(23)`$, $`\alpha =0.1099(7)`$, and $`\beta =0.32648(18)`$. Moreover, they appear to be independent of the concentration. 
We mention that in the presence of an external magnetic field along the uniaxial direction, dilute Ising systems present a different critical behavior, equivalent to that of the random-field Ising model . This is also the object of intensive theoretical and experimental investigations (see e.g. Refs. ). Several experiments have also tested the effect of disorder on the $`\lambda `$-transition of <sup>4</sup>He, which belongs to the $`XY`$ universality class, corresponding to $`M=2`$ . They studied the critical behavior of <sup>4</sup>He completely filling the pores of porous gold or Vycor glass. The results indicate that the transition is in the same universality class as the $`\lambda `$-transition of the pure system, in agreement with the Harris criterion . The starting point of the field-theoretic approach to the study of ferromagnets in the presence of quenched disorder is the Ginzburg-Landau-Wilson Hamiltonian $$\mathcal{H}=\int d^dx\left\{\frac{1}{2}(\partial _\mu \varphi (x))^2+\frac{1}{2}r\varphi (x)^2+\frac{1}{2}\psi (x)\varphi (x)^2+\frac{1}{4!}g_0\left[\varphi (x)^2\right]^2\right\},$$ (2) where $`r\propto T-T_c`$, and $`\psi (x)`$ is a spatially uncorrelated random field with Gaussian distribution $$P(\psi )=\frac{1}{\sqrt{4\pi w}}\mathrm{exp}\left[\frac{\psi ^2}{4w}\right].$$ (3) We consider quenched disorder. Therefore, in order to obtain the free energy of the system, we must compute the partition function $`Z(\psi ,g_0)`$ for a given distribution $`\psi (x)`$, and then average the corresponding free energy over all distributions with probability $`P(\psi )`$. By using the standard replica trick, it is possible to replace the quenched average with an annealed one. First, the system is replaced by $`N`$ non-interacting copies with annealed disorder. 
Then, integrating over the disorder, one obtains the Hamiltonian $$\mathcal{H}_{MN}=\int d^dx\left\{\sum_{i,a}\frac{1}{2}\left[(\partial _\mu \varphi _{a,i})^2+r\varphi _{a,i}^2\right]+\sum_{ij,ab}\frac{1}{4!}\left(u_0+v_0\delta _{ij}\right)\varphi _{a,i}^2\varphi _{b,j}^2\right\}$$ (4) where $`a,b=1,\mathrm{}M`$ and $`i,j=1,\mathrm{}N`$. The original system, i.e. the dilute $`M`$-vector model, is recovered in the limit $`N\to 0`$. Note that the coupling $`u_0`$ is negative (being proportional to minus the variance of the quenched disorder), while the coupling $`v_0`$ is positive. In this formulation, the critical properties of the dilute $`M`$-vector model can be investigated by studying the renormalization-group flow of the Hamiltonian (4) in the limit $`N\to 0`$, i.e. of $`\mathcal{H}_{M0}`$. One can then apply conventional computational schemes, such as the $`ϵ`$-expansion, the fixed-dimension $`d=3`$ expansion, the scaling-field method, etc. In the renormalization-group approach, if the fixed point corresponding to the pure model is unstable and the renormalization-group flow moves towards a new random fixed point, then the random system has a different critical behavior. It is important to note that in the renormalization-group approach one assumes that the replica symmetry is not broken. In recent years, however, this picture has been questioned on the grounds that the renormalization-group approach does not take into account other local minimum configurations of the random Hamiltonian (2), which may cause the spontaneous breaking of the replica symmetry. In this paper we assume the validity of the standard renormalization-group approach, and simply consider the Hamiltonian (4) for $`N=0`$. For generic values of $`M`$ and $`N`$, the Hamiltonian $`\mathcal{H}_{MN}`$ describes $`N`$ coupled $`M`$-vector models and is usually called the $`MN`$ model . 
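The sign of $`u_0`$ can be made explicit by performing the Gaussian average over $`\psi `$: using $`e^{t\psi }_\psi =e^{wt^2}`$ at each point $`x`$ for the distribution (3), the $`\psi `$-dependent term of (2), summed over the replicas, averages to

```latex
\left[\exp\Big(-\frac{1}{2}\int d^dx\,\psi(x)\sum_{i}\varphi_i(x)^2\Big)\right]_\psi
=\exp\Big(\frac{w}{4}\int d^dx\,\Big(\sum_{i}\varphi_i(x)^2\Big)^2\Big),
```

so that the induced O($`MN`$)-symmetric quartic coupling is $`u_0=-6w<0`$, proportional to minus the variance $`2w`$ of the disorder, as stated above.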
$`\mathcal{H}_{MN}`$ has four fixed points: the trivial Gaussian one, the O($`M`$)-symmetric fixed point describing $`N`$ decoupled $`M`$-vector models, the O($`MN`$)-symmetric one, and the mixed fixed point. The Gaussian one is never stable. The stability of the other fixed points depends on the values of $`M`$ and $`N`$ (see e.g. Ref. for a discussion). The stability properties of the decoupled O($`M`$) fixed point can be inferred by observing that the crossover exponent associated with the O($`MN`$)-symmetric interaction (with coupling $`u_0`$) is related to the specific-heat critical exponent of the O($`M`$) fixed point . Indeed, at the O($`M`$)-symmetric fixed point one may interpret $`\mathcal{H}_{MN}`$ as the Hamiltonian of $`N`$ $`M`$-vector systems coupled by the O($`MN`$)-symmetric term. But this interaction is the sum of the products of the energy operators of the different $`M`$-vector models. Therefore, at the O($`M`$) fixed point, the crossover exponent $`\varphi `$ associated with the O($`MN`$)-symmetric quartic term should be given by the specific-heat critical exponent $`\alpha _M`$ of the $`M`$-vector model, independently of $`N`$. This argument implies that for $`M=1`$ (Ising-like systems) the pure Ising fixed point is unstable, since $`\varphi =\alpha _I>0`$, while for $`M>1`$ the O($`M`$) fixed point is stable, given that $`\alpha _M<0`$. This is a general result that should hold independently of $`N`$. For the quenched disordered systems described by the Hamiltonian $`\mathcal{H}_{M0}`$, the physically relevant region for the renormalization-group flow corresponds to negative values of the coupling $`u`$ . Therefore, for $`M>1`$ the renormalization-group flow is driven towards the pure O($`M`$) fixed point, and the quenched disorder yields corrections to scaling proportional to the spin dilution and to $`|t|^{\mathrm{\Delta }_r}`$ with $`\mathrm{\Delta }_r=-\alpha _M`$. 
Note that for the physically interesting two- and three-vector models the absolute value of $`\alpha _M`$ is very small: $`\alpha _2\simeq -0.013`$ (see e.g. the recent results of Refs. ) and $`\alpha _3\simeq -0.12`$ (see e.g. ). Thus disorder gives rise to very slowly decaying scaling corrections. For Ising-like systems, the pure Ising fixed point is instead unstable, and the flow for negative values of the quartic coupling $`u`$ leads to the stable mixed or random fixed point, which is located in the region of negative values of $`u`$. The above picture emerges clearly in the framework of the $`ϵ`$-expansion, although for Ising-like systems the RIM fixed point is of order $`\sqrt{ϵ}`$ rather than $`ϵ`$. The other fixed points of the Hamiltonian $`\mathcal{H}_{M0}`$ are located in the unphysical region $`u>0`$. Thus, they are not of interest for the critical behavior of randomly dilute spin models. For the sake of completeness, we mention that for $`M>1`$ the mixed fixed point is in the region of positive $`u`$ and is unstable . The last fixed point lies on the $`v=0`$ axis at positive $`u`$, is stable, and corresponds to the $`(MN)`$-vector theory for $`N\to 0`$. It is therefore in the same universality class as the self-avoiding walk model (SAW). Figure 1 sketches the flow diagram for Ising ($`M=1`$) and multicomponent ($`M>1`$) systems. The Hamiltonian $`\mathcal{H}_{MN}`$ has been the object of several field-theoretic studies, especially for $`M=1`$, the case that describes the RIM. Several computations have been done in the framework of the $`ϵ`$-expansion and of the fixed-dimension $`d=3`$ expansion . In these approaches, since field-theoretic perturbative expansions are asymptotic, the resummation of the series is essential to obtain accurate estimates of physical quantities. 
For pure systems described by the Ginzburg-Landau-Wilson Hamiltonian, one exploits the Borel summability of the fixed-dimension expansion (for which Borel summability is proved) and of the $`ϵ`$-expansion (for which Borel summability is conjectured), together with the knowledge of the large-order behavior of the series . Resummation procedures using these properties lead to accurate estimates (see e.g. Refs. ). Much less is known for the quenched disordered models described by $`\mathcal{H}_{M0}`$. Indeed, the analytic structure of the corresponding field theory is much more complicated. The zero-dimensional model has been investigated in Ref. . The authors analyze the large-order behavior of the double expansion of the free energy in the quartic couplings $`u`$ and $`v`$ and show that the expansion in powers of $`v`$, keeping the ratio $`\lambda =u/v`$ fixed, is not Borel summable. In Ref. , it is shown that the non-Borel summability is a consequence of the fact that, because of the quenched average, there are additional singularities corresponding to the zeroes of the partition function $`Z(\psi ,g_0)`$ obtained from the Hamiltonian (2). Recently the problem has been reconsidered in Ref. . In the same context of the zero-dimensional model, it has been shown that a more elaborate resummation gives the correct determination of the free energy from its perturbative expansion. The procedure is still based on a Borel summation, which is performed in two steps: first, one resums in the coupling $`v`$ each coefficient of the series in $`u`$; then, one resums the resulting series in the coupling $`u`$. There is no proof that this procedure works also in higher dimensions, since the method relies on the fact that the zeroes of the partition function stay away from real values of $`v`$. This is far from obvious in higher-dimensional systems. At present, the most precise field-theoretic results have been obtained using the fixed-dimension expansion in $`d=3`$. 
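The flavor of these resummations can be conveyed on a toy example. The sketch below applies a single-variable Padé-Borel resummation (conceptually similar to the Padé-Borel-Leroy analyses discussed in the text, but not the actual six-loop machinery) to Euler's divergent series $`\sum _k(-1)^kk!x^k`$, whose Borel sum is $`\int _0^{\mathrm{}}e^{-t}(1+xt)^{-1}dt`$:

```python
import math
import numpy as np

def pade(c, m, n):
    """[m/n] Pade approximant P/Q (coefficient arrays, Q(0)=1) built from
    the Taylor coefficients c[0..m+n]."""
    A = np.zeros((n, n))
    for i in range(n):
        for k in range(1, n + 1):
            if m + 1 + i - k >= 0:
                A[i, k - 1] = c[m + 1 + i - k]
    # least squares handles the (possibly degenerate) linear system
    bsol = np.linalg.lstsq(A, -np.asarray(c[m + 1:m + n + 1]), rcond=None)[0]
    b = np.concatenate([[1.0], bsol])
    a = np.array([sum(b[k] * c[j - k] for k in range(min(j, n) + 1))
                  for j in range(m + 1)])
    return a, b

def borel_pade_sum(c, x, m, n, nodes=60):
    """Resum sum_k c_k x^k: Pade-approximate the Borel transform
    B(t) = sum_k c_k t^k / k!, then transform back with the Laplace
    integral, evaluated by Gauss-Laguerre quadrature."""
    cb = [ck / math.factorial(k) for k, ck in enumerate(c)]
    a, b = pade(cb, m, n)
    t, w = np.polynomial.laguerre.laggauss(nodes)
    R = np.polyval(a[::-1], x * t) / np.polyval(b[::-1], x * t)
    return float(np.sum(w * R))

# Euler's series: c_k = (-1)^k k!, violently divergent for any x > 0,
# yet its Borel sum is recovered from ten coefficients.
coeffs = [(-1) ** k * math.factorial(k) for k in range(10)]
value = borel_pade_sum(coeffs, 0.2, 4, 5)
```

In the two-step procedure of the zero-dimensional study quoted above one would instead first resum in $`v`$ inside each coefficient of the $`u`$ series before a final Borel step of this kind.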
Several quantities have been computed: the critical exponents , the equation of state and the hyperuniversal ratio $`R_\xi ^+`$ . The most precise estimates of the critical exponents for the RIM have been obtained from the analysis of the five-loop fixed-dimension expansion, using Padé-Borel-Leroy approximants . In spite of the fact that the series considered are not Borel summable, the results for the critical exponents are stable: they do not depend on the order of the series or on the details of the analysis and, as we shall see, are in substantial agreement with our results obtained following the procedure proposed in Ref. . This fact may be explained by the observation of Ref. that the Borel resummation applied in the standard way (i.e. at fixed $`v/u`$) gives a reasonably accurate result for small disorder if one truncates the expansion at an appropriate point, i.e. for not too long series. The $`MN`$ model has also been extensively studied in the context of the $`ϵ`$-expansion . The critical exponents have been computed to three loops for generic values of $`M`$, $`N`$ and to five loops for $`M=1`$ . Several studies have also considered the equation of state and the two-point correlation function . In spite of these efforts, studies based on the $`ϵ`$-expansion have not been able to go beyond a qualitative description of the physics of three-dimensional randomly dilute spin models. The $`\sqrt{ϵ}`$-expansion turns out not to be effective for a quantitative study of the RIM (see e.g. the analysis of the five-loop series done in Ref. ). A strictly related scheme is the so-called minimal-subtraction renormalization scheme without $`ϵ`$-expansion . The three-loop and four-loop results are in reasonable agreement with the estimates obtained by other methods. At five loops, however, no random fixed point can be found using this method. This negative result has been interpreted as a consequence of the non-Borel summability of the perturbative expansion. 
In this case, the four-loop series could represent the “optimal” truncation. We also mention that the Hamiltonian (4) for $`M=1`$ and $`N\to 0`$ has been studied by the scaling-field method . The randomly dilute Ising model (1) has been investigated in many numerical simulations (see e.g. Refs. ). The first simulations apparently found critical exponents that depended on the spin concentration. It was later remarked that this could simply be a crossover effect: the simulations were not probing the critical region and were computing effective exponents strongly biased by the corrections to scaling. Recently, the critical exponents have been computed using finite-size scaling techniques. These studies found very strong corrections to scaling, decaying with a rather small exponent $`\omega =0.37(6)`$ — correspondingly $`\mathrm{\Delta }=\omega \nu =0.25(4)`$ — which is approximately a factor of two smaller than the corresponding pure-case exponent. By properly taking into account the confluent corrections, they were able to show that the critical exponents are universal with respect to variations of the spin concentration in a wide interval above the percolation point. Their final estimates are reported in Table II. In this paper we compute the renormalization-group functions of the generic $`MN`$ model to six loops in the framework of the fixed-dimension $`d=3`$ expansion. We extend the three-loop series of Ref. and the expansions for the cubic model ($`M=1`$) reported in Ref. (five loops) and Ref. (six loops). We will focus here on the case $`N=0`$, corresponding to disordered dilute systems. Higher values of $`N`$ are of interest for several types of magnetic and structural phase transitions and will be discussed in a separate paper. For $`M=1`$ and $`N\ge 2`$ the six-loop series have already been analyzed in Ref. , where we investigated the stability of the $`O(N)`$-symmetric point in the presence of cubic interactions. 
We should mention that two-loop and three-loop series for the $`MN`$ model in the fixed-dimension expansion for generic values of $`d`$ have been reported in Refs. . For $`M=1`$, $`N=0`$, we have performed several analyses of the perturbative series following the method proposed in Ref. . The analysis of the $`\beta `$-functions for the determination of the fixed point is particularly delicate, and we have not been able to obtain a very robust estimate of the position of the random fixed point. Nonetheless, we derive quite accurate estimates of the critical exponents: their expansions are well behaved and largely insensitive to the exact position of the fixed point. Our final estimates are reported in Table II, together with estimates obtained by other approaches. The errors we quote are quite conservative and reflect the variation of the estimates among the different analyses performed. The overall agreement is good: the perturbative method appears to have good predictive power, in spite of the complicated analytic structure of the Borel transform, which does not allow the direct application of the resummation methods used successfully in pure systems. For $`M\ge 2`$ and $`N=0`$ we have verified that no fixed point exists in the region $`u<0`$ and that the $`O(M)`$-symmetric fixed point is stable, confirming the general arguments given above. The paper is organized as follows. In Sec. II we derive the perturbative series for the renormalization-group functions at six loops and discuss the singularities of the Borel transform. The results of the analyses are presented in Sec. III and the final numerical values are reported in Table II. ## II The fixed-dimension perturbative expansion of the three-dimensional $`MN`$ model. ### A Renormalization of the theory. The fixed-dimension $`\varphi ^4`$ field-theoretic approach provides an accurate description of the critical properties of $`O(N)`$-symmetric models in the high-temperature phase (see e.g. Ref. ). 
The method can also be extended to two-parameter $`\varphi ^4`$ models, such as the $`MN`$ model. The idea is to perform an expansion in powers of appropriately defined zero-momentum quartic couplings. The theory is renormalized by introducing a set of zero-momentum conditions for the (one-particle irreducible) two-point and four-point correlation functions: $$\mathrm{\Gamma }_{ai,bj}^{(2)}(p)=\delta _{ai,bj}Z_\varphi ^{-1}\left[m^2+p^2+O(p^4)\right],$$ (5) where $`\delta _{ai,bj}\equiv \delta _{ab}\delta _{ij}`$, $$\mathrm{\Gamma }_{ai,bj,ck,dl}^{(4)}(0)=Z_\varphi ^{-2}m\left(uS_{ai,bj,ck,dl}+vC_{ai,bj,ck,dl}\right),$$ (6) and $`S_{ai,bj,ck,dl}`$ $`=`$ $`{\displaystyle \frac{1}{3}}\left(\delta _{ai,bj}\delta _{ck,dl}+\delta _{ai,ck}\delta _{bj,dl}+\delta _{ai,dl}\delta _{bj,ck}\right),`$ (7) $`C_{ai,bj,ck,dl}`$ $`=`$ $`\delta _{ij}\delta _{ik}\delta _{il}{\displaystyle \frac{1}{3}}\left(\delta _{ab}\delta _{cd}+\delta _{ac}\delta _{bd}+\delta _{ad}\delta _{bc}\right).`$ (8) Eqs. (5) and (6) relate the second-moment mass $`m`$ and the zero-momentum quartic couplings $`u`$ and $`v`$ to the corresponding Hamiltonian parameters $`r`$, $`u_0`$ and $`v_0`$: $$u_0=muZ_uZ_\varphi ^{-2},\qquad v_0=mvZ_vZ_\varphi ^{-2}.$$ (9) In addition we define the function $`Z_t`$ through the relation $$\mathrm{\Gamma }_{ai,bj}^{(1,2)}(0)=\delta _{ai,bj}Z_t^{-1},$$ (10) where $`\mathrm{\Gamma }^{(1,2)}`$ is the (one-particle irreducible) two-point function with an insertion of $`\frac{1}{2}\varphi ^2`$. From the perturbative expansion of the correlation functions $`\mathrm{\Gamma }^{(2)}`$, $`\mathrm{\Gamma }^{(4)}`$ and $`\mathrm{\Gamma }^{(1,2)}`$ and the above relations, one derives the functions $`Z_\varphi (u,v)`$, $`Z_u(u,v)`$, $`Z_v(u,v)`$, $`Z_t(u,v)`$ as a double expansion in $`u`$ and $`v`$. 
The fixed points of the theory are given by the common zeros of the $`\beta `$-functions $$\beta _u(u,v)=m\frac{\partial u}{\partial m}\Big|_{u_0,v_0},$$ (11) $$\beta _v(u,v)=m\frac{\partial v}{\partial m}\Big|_{u_0,v_0},$$ (12) calculated keeping $`u_0`$ and $`v_0`$ fixed. The stability properties of the fixed points are controlled by the matrix $$\mathrm{\Omega }=\left(\begin{array}{cc}\frac{\partial \beta _u(u,v)}{\partial u}& \frac{\partial \beta _u(u,v)}{\partial v}\\ \frac{\partial \beta _v(u,v)}{\partial u}& \frac{\partial \beta _v(u,v)}{\partial v}\end{array}\right),$$ (13) computed at the given fixed point: a fixed point is stable if both eigenvalues are positive. The eigenvalues $`\omega _i`$ are related to the leading scaling corrections, which vanish as $`\xi ^{-\omega _i}\sim |t|^{\mathrm{\Delta }_i}`$, where $`\mathrm{\Delta }_i=\nu \omega _i`$. One also introduces the functions $$\eta _\varphi (u,v)=\frac{\partial \mathrm{ln}Z_\varphi }{\partial \mathrm{ln}m}=\beta _u\frac{\partial \mathrm{ln}Z_\varphi }{\partial u}+\beta _v\frac{\partial \mathrm{ln}Z_\varphi }{\partial v},$$ (14) $$\eta _t(u,v)=\frac{\partial \mathrm{ln}Z_t}{\partial \mathrm{ln}m}=\beta _u\frac{\partial \mathrm{ln}Z_t}{\partial u}+\beta _v\frac{\partial \mathrm{ln}Z_t}{\partial v}.$$ (15) Finally, the critical exponents are obtained from $$\eta =\eta _\varphi (u^{*},v^{*}),$$ (16) $$\nu =\left[2-\eta _\varphi (u^{*},v^{*})+\eta _t(u^{*},v^{*})\right]^{-1},$$ (17) $$\gamma =\nu (2-\eta ).$$ (18) ### B The perturbative series to six loops. We have computed the perturbative expansion of the correlation functions (5), (6) and (10) to six loops. The diagrams contributing to the two-point and four-point functions to six-loop order are reported in Ref. : there are approximately one thousand of them, and it is therefore necessary to handle them with a symbolic manipulation program. For this purpose, we wrote a package in Mathematica . It generates the diagrams using the algorithm described in Ref. 
, and computes the symmetry and group factors of each of them. We did not calculate the integrals associated with each diagram, but used the numerical results compiled in Ref. . Summing all contributions we determined the renormalization constants and all renormalization-group functions. We report our results in terms of the rescaled couplings $$u\equiv \frac{16\pi }{3}R_{MN}\overline{u},\qquad v\equiv \frac{16\pi }{3}R_M\overline{v},$$ (19) where $`R_K=9/(8+K)`$, so that the $`\beta `$-functions associated with $`\overline{u}`$ and $`\overline{v}`$ have the form $`\beta _{\overline{u}}(\overline{u},0)=-\overline{u}+\overline{u}^2+O(\overline{u}^3)`$ and $`\beta _{\overline{v}}(0,\overline{v})=-\overline{v}+\overline{v}^2+O(\overline{v}^3)`$. The resulting series are $$\beta _{\overline{u}}=-\overline{u}+\overline{u}^2+\frac{2(2+M)}{8+M}\overline{u}\overline{v}-\frac{4(190+41MN)}{27(8+MN)^2}\overline{u}^3-\frac{400(2+M)}{27(8+MN)(8+M)}\overline{u}^2\overline{v}-\frac{92(2+M)}{27(8+M)^2}\overline{u}\overline{v}^2+\overline{u}\sum _{i+j\ge 3}b_{ij}^{(u)}\overline{u}^i\overline{v}^j,$$ (21) $$\beta _{\overline{v}}=-\overline{v}+\overline{v}^2+\frac{12}{8+MN}\overline{u}\overline{v}-\frac{4(190+41M)}{27(8+M)^2}\overline{v}^3-\frac{16(131+25M)}{27(8+MN)(8+M)}\overline{u}\overline{v}^2-\frac{4(370+23MN)}{27(8+MN)^2}\overline{u}^2\overline{v}+\overline{v}\sum _{i+j\ge 3}b_{ij}^{(v)}\overline{u}^i\overline{v}^j,$$ (23) $$\eta _\varphi =\frac{8(2+MN)}{27(8+MN)^2}\overline{u}^2+\frac{16(2+M)}{27(8+MN)(8+M)}\overline{u}\overline{v}+\frac{8(2+M)}{27(8+M)^2}\overline{v}^2+\sum _{i+j\ge 3}e_{ij}^{(\varphi )}\overline{u}^i\overline{v}^j,$$ (24) $$\eta _t=-\frac{2+MN}{8+MN}\overline{u}-\frac{2+M}{8+M}\overline{v}+\frac{2(2+MN)}{(8+MN)^2}\overline{u}^2+\frac{4(2+M)}{(8+MN)(8+M)}\overline{u}\overline{v}+\frac{2(2+M)}{(8+M)^2}\overline{v}^2+\sum _{i+j\ge 3}e_{ij}^{(t)}\overline{u}^i\overline{v}^j.$$ (26) For $`3\le i+j\le 6`$, the coefficients $`b_{ij}^{(u)}`$, $`b_{ij}^{(v)}`$, $`e_{ij}^{(\varphi )}`$ and $`e_{ij}^{(t)}`$ are reported in Tables III, IV, V, and VI, respectively. We have performed the following checks on our calculations: * $`\beta _{\overline{u}}(\overline{u},0)`$, $`\eta _\varphi (\overline{u},0)`$ and $`\eta _t(\overline{u},0)`$ reproduce the corresponding functions of the O($`MN`$)-symmetric model ; * $`\beta _{\overline{v}}(0,\overline{v})`$, $`\eta _\varphi (0,\overline{v})`$ and $`\eta _t(0,\overline{v})`$ reproduce the corresponding functions of the O($`M`$)-symmetric model ; * For $`M=1`$, the functions $`\beta _{\overline{u}}`$, $`\beta _{\overline{v}}`$, $`\eta _\varphi `$ and $`\eta _t`$ reproduce the corresponding functions of the $`N`$-component cubic model ; * The following relations hold for $`N=1`$: $$\beta _{\overline{u}}(u,x-u)+\beta _{\overline{v}}(u,x-u)=\beta _{\overline{v}}(0,x),$$ (27) $$\eta _\varphi (u,x-u)=\eta _\varphi (0,x),$$ (28) $$\eta _t(u,x-u)=\eta _t(0,x).$$ (29) ### C Borel summability and resummation of the series. Since field-theoretic perturbative expansions are asymptotic, the resummation of the series is essential to obtain accurate estimates of the physical quantities. In the case of the O($`N`$)-symmetric $`\varphi ^4`$ theory the expansion is performed in powers of the zero-momentum four-point coupling $`g`$. The large-order behavior of the series $`S(g)=\sum _ks_kg^k`$ of any quantity is related to the singularity $`g_b`$ of the Borel transform closest to the origin. Indeed, for large $`k`$, the coefficient $`s_k`$ behaves as $$s_k\sim k!\,(-a)^k\,k^b\left[1+O(k^{-1})\right]\qquad \mathrm{with}\ a=-1/g_b.$$ (30) The value of $`g_b`$ depends only on the Hamiltonian, while the exponent $`b`$ depends on which Green’s function is considered. 
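As a quick numerical illustration of the large-order law (30): the constant $`a`$, and hence the Borel singularity $`g_b=-1/a`$, can be read off from the coefficients themselves through the ratio $`s_{k+1}/s_k`$. The sketch below uses synthetic coefficients with invented parameters; it is not the actual series of the model, only a demonstration of the ratio method.

```python
import math

# Synthetic coefficients obeying the large-order law of Eq. (30),
# s_k ~ k! (-a)^k k^b, with invented parameters a and b (illustration only).
a, b = 0.147774, 2.0
s = [math.factorial(k) * (-a) ** k * k ** b for k in range(1, 60)]

def ratio_estimate(k):
    # -s_{k+1} / ((k+1) s_k) = a * ((k+1)/k)^b  ->  a  for large k
    return -s[k] / ((k + 1) * s[k - 1])       # s[j] holds s_{j+1}

# A single Richardson step removes the O(1/k) correction to the plain ratio.
k = 30
richardson = k * ratio_estimate(k) - (k - 1) * ratio_estimate(k - 1)
print(richardson)   # close to a = 0.147774...
```

The plain ratio at $`k=30`$ is still off by several percent, while the extrapolated value agrees with $`a`$ to a few parts in $`10^4`$, which is the point of such ratio analyses of large-order behavior.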
The value of $`g_b`$ can be obtained from a steepest-descent calculation in which the relevant saddle point is a finite-energy solution (instanton) of the classical field equations with negative coupling . Since the Borel transform is singular for $`g=g_b`$, its expansion in powers of $`g`$ converges only for $`|g|<|g_b|`$. An analytic extension can be obtained by a conformal mapping , such as $$y(g)=\frac{\sqrt{1-g/g_b}-1}{\sqrt{1-g/g_b}+1}.$$ (31) In this way the Borel transform becomes a series in powers of $`y(g)`$ that converges for all positive values of $`g`$, provided that all singularities of the Borel transform lie on the negative real axis . In this case one obtains a convergent sequence of approximations to the original quantity. For the O($`N`$)-symmetric theory, accurate estimates (see e.g. Ref. ) have been obtained by resumming the available series: the $`\beta `$-function is known up to six loops, while the functions $`\eta _\varphi `$ and $`\eta _t`$ are known to seven loops . The large-order behavior of the perturbative expansions in the $`MN`$ model can be studied by employing the same methods used in the standard $`\varphi ^4`$ theory . We may consider the series in $`\overline{u}`$ and $`\overline{v}`$ at fixed ratio $`z\equiv \overline{v}/\overline{u}`$. The large-order behavior of the resulting expansion in powers of $`\overline{u}`$ is determined by the singularity of the Borel transform closest to the origin, $`\overline{u}_b(z)`$, given by $$\frac{1}{\overline{u}_b(z)}=-a\left(R_{MN}+R_Mz\right)\qquad \mathrm{for}\ z>0\ \mathrm{and}\ z<-\frac{2N}{N+1}\frac{R_{MN}}{R_M},$$ (32) $$\frac{1}{\overline{u}_b(z)}=-a\left(R_{MN}+\frac{1}{N}R_Mz\right)\qquad \mathrm{for}\ -\frac{2N}{N+1}\frac{R_{MN}}{R_M}<z<0,$$ (33) where $$a=0.14777422\mathrm{\ldots },\qquad R_K=\frac{9}{8+K}.$$ (34) Using Eq. 
(33) and the conformal mapping (31), one can resum the perturbative series in $`\overline{u}`$ at fixed $`z`$. This method has been applied in Ref. to the analysis of the renormalization-group functions of the three-dimensional cubic model. The result (33) has been obtained for integer $`M,N\ge 1`$. For $`N=0`$, one may think that the correct behavior is obtained by analytic continuation of (33), i.e. $$\frac{1}{\overline{u}_b(z)}=-a\left(\frac{9}{8}+R_Mz\right),$$ (35) for all $`z`$. However, this is not correct. Indeed, as explicitly shown in Refs. in the context of the zero-dimensional random Ising model, there is an additional contribution to the large-order behavior of the series in $`u`$ at fixed $`\lambda \equiv u/v`$, which makes the series non-Borel summable, giving rise to singularities of the Borel transform on the positive real axis. These are due to the zeroes of the partition function at fixed disorder. We have no reason to believe that similar non-Borel summable contributions are absent in higher dimensions; it is likely that the same phenomenon occurs even in three dimensions. As a consequence, a summation procedure based on Eq. (35) and a conformal mapping of the type (31) would not lead to a sequence of approximations converging to the correct result . Fortunately, this is not the end of the story. As shown recently in Ref. , at least in zero dimensions one can still resum the perturbative series. Indeed, the zero-dimensional free energy can be obtained from its perturbative expansion if one applies a more elaborate procedure, which is still based on a Borel summation. Let us write the double expansion of the free energy $`f(u,v)`$ in powers of $`u,v`$ as $$f(u,v)=\sum _{n=0}^{\mathrm{\infty }}c_n(v)u^n,$$ (36) $$c_n(v)\equiv \sum _{k=0}^{\mathrm{\infty }}c_{nk}v^k.$$ (37) The main result of Ref. is that the expansions of the coefficients (37) and the resulting series at fixed $`v`$, Eq. 
(36), are Borel summable. Using this result, a resummation of the free energy is obtained in two steps. First, one resums the coefficients $`c_n(v)`$; then, using the computed coefficients, one resums the series in $`u`$. The resummation of Eq. (37) can be performed using the Padé-Borel-Leroy method, as suggested in Ref. . However, the conformal-mapping method can also be used, since the large-order behavior is known exactly. Indeed, $$c_n(v)\propto \frac{\partial ^nf(u,v)}{\partial u^n}\Big|_{u=0}.$$ (38) Thus, $`c_n(v)`$ can be related to zero-momentum correlation functions in the theory with $`u=0`$, which is the standard $`M`$-vector model. Therefore, one can use the well-known results for the large-order behavior of the perturbative series in the O($`M`$)-symmetric theory . ## III Analysis of the six-loop expansion for $`N=0`$ ### A The random Ising model As we said in the Introduction, the random Ising model corresponds to $`M=1`$ and $`N=0`$. There are two relevant fixed points, the Ising and the random point, see Fig. 1. In Ref. we already discussed the stability of the Ising point. We found that this fixed point is unstable, since the stability matrix has a negative eigenvalue $`\omega =-0.177(6)`$, in good agreement with the general argument predicting $`\omega =-\alpha _I/\nu _I=-0.1745(12)`$. We will now investigate the random fixed point, which is stable and therefore determines the critical behavior of the RIM. In order to study the critical properties of the random fixed point, we used several different resummation procedures, according to the discussion of the previous Section. Following Ref. , for each quantity we consider we must first resum the series in $`v`$, see Eq. (37). This may be done in two different ways: we can either use the Padé-Borel method, or perform a conformal mapping of the Borel-transformed series, using the known value of the singularity of the Borel transform. 
Explicitly, let us consider a $`p`$-loop series in $`u`$ and $`v`$ of the form $$\sum _{n=0}^{p}\sum _{k=0}^{p-n}c_{nk}u^nv^k.$$ (39) In the first method, for each $`0\le n\le p`$, we choose a real number $`b_n`$ and a positive integer $`r_n`$ such that $`r_n\le p-n`$; then, we consider $$R_1(c_n)(p;b_n,r_n;v)=\int _0^{\mathrm{\infty }}dt\,e^{-t}t^{b_n}\left[\sum _{i=0}^{p-n-r_n}B_i(tv)^i\right]\left[1+\sum _{i=1}^{r_n}C_i(tv)^i\right]^{-1}.$$ (40) The coefficients $`B_i`$ and $`C_i`$ are fixed so that $`R_1(c_n)(p;b_n,r_n;v)=\sum _{k=0}^{p-n}c_{nk}v^k+O(v^{p-n+1})`$. Here we are resumming the Borel transform of each coefficient of the series in $`u`$ by means of a Padé approximant $`[p-n-r_n/r_n]`$. Eq. (40) is well defined as long as the integrand is regular for all positive values of $`t`$. However, for some values of the parameters, the Padé approximant has poles on the positive real axis — we will call these cases defective — so that the integral does not exist. These values of $`b_n`$ and $`r_n`$ must of course be discarded. The second method uses the large-order behavior of the series and a conformal mapping . In this case, for each $`0\le n\le p`$, we choose two real numbers $`b_n`$ and $`\alpha _n`$ and consider $$R_2(c_n)(p;b_n,\alpha _n;v)=\sum _{k=0}^{p-n}B_k\int _0^{\mathrm{\infty }}dt\,e^{-t}t^{b_n}\frac{(y(vt))^k}{(1-y(vt))^{\alpha _n}},$$ (41) where $$y(t)=\frac{\sqrt{|\overline{g}_I|+t}-\sqrt{|\overline{g}_I|}}{\sqrt{|\overline{g}_I|+t}+\sqrt{|\overline{g}_I|}},$$ (42) and $`\overline{g}_I=-1/a`$ is the singularity of the Borel transform for the pure Ising model. The numerical value of $`a`$ is given in Eq. (34). 
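To make the structure of the inner resummation (40) concrete, here is a minimal sketch of a Padé-Borel step applied to the classic Euler toy series $`\sum _k(-1)^kk!\,v^k`$, whose Borel sum is $`\int _0^{\mathrm{\infty }}e^{-t}/(1+vt)\,dt`$. All choices below — the truncation order, the Borel-Leroy parameter $`b_n=0`$, the $`[n-1/1]`$ approximant, and the quadrature grid — are arbitrary illustrations, not the parameters used in the paper.

```python
import math

# Toy divergent series: c_k = (-1)^k k!, with Borel sum
# f(v) = integral_0^inf e^{-t}/(1 + v t) dt  (a Stieltjes function).
order = 8
c = [(-1) ** k * math.factorial(k) for k in range(order + 1)]

# Borel-Leroy transform with b = 0: divide each coefficient by k!.
borel = [ck / math.factorial(k) for k, ck in enumerate(c)]   # = (-1)^k

# [order-1/1] Pade approximant of the truncated Borel transform:
# the denominator 1 + C t is fixed by matching the highest retained order.
C = -borel[order] / borel[order - 1]
num = [borel[0]] + [borel[i] + C * borel[i - 1] for i in range(1, order)]

def pade(t):
    p = sum(coef * t ** i for i, coef in enumerate(num))
    return p / (1.0 + C * t)

def resummed(v, tmax=60.0, n=60000):
    # composite trapezoidal quadrature of  integral e^{-t} Pade(v t) dt;
    # the tail beyond tmax is exponentially negligible
    h = tmax / n
    total = 0.5 * (pade(0.0) + math.exp(-tmax) * pade(v * tmax))
    for i in range(1, n):
        t = i * h
        total += math.exp(-t) * pade(v * t)
    return total * h

v = 0.5
partial_sum = sum(ck * v ** k for ck, k in zip(c, range(order + 1)))
print(partial_sum)      # the truncated series has already blown up
print(resummed(v))      # close to the Borel sum 2 e^2 E_1(2) = 0.7226...
```

For this toy series the Padé approximant reproduces the Borel transform $`1/(1+t)`$ exactly, so the resummation converges while the direct partial sums diverge; in the text the same machinery is applied coefficient by coefficient to the series in $`v`$, where defective approximants must additionally be discarded.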
Using these two methods we obtain two different partial resummations of the original series (39): $$\sum _{n=0}^{p}R_1(c_n)(p;b_n,r_n;v)u^n,$$ (43) $$\sum _{n=0}^{p}R_2(c_n)(p;b_n,\alpha _n;v)u^n.$$ (44) Nothing is known about the asymptotic behavior of these series, and we will thus use the Padé-Borel method. Starting from Eq. (43) we therefore consider $$E_1(c)(q,p;b_u,r_u;\{b_n\},\{r_n\})\equiv \int _0^{\mathrm{\infty }}dt\,e^{-t}t^{b_u}\left[\sum _{i=0}^{q-r_u}B_i(v)(tu)^i\right]\left[1+\sum _{i=1}^{r_u}C_i(v)(tu)^i\right]^{-1}.$$ (45) The coefficients $`B_i(v)`$ and $`C_i(v)`$ are fixed so that $`E_1(c)(q,p;b_u,r_u;\{b_n\},\{r_n\})`$ coincides with the expansion (43) up to terms of order $`O(u^{q+1})`$. Note that we have introduced here three additional parameters: $`b_u`$, the power appearing in the Borel transform; $`r_u`$, which fixes the order of the Padé approximant; and $`q`$, which indicates the number of terms that are resummed and, in the following, will always satisfy $`q\le p-1`$. Analogously, starting from Eq. (44), we define $`E_2(c)(q,p;b_u,r_u;\{b_n\},\{\alpha _n\})`$. We will call the first method the “double Padé-Borel” method, while the second will be named the “conformal Padé-Borel” method. Let us now apply these methods to the computation of the fixed point $`(\overline{u}^{*},\overline{v}^{*})`$. In this case, we resum the $`\beta `$-functions $`\beta _u/u`$ and $`\beta _v/v`$ and then look for a common zero with $`u<0`$. We consider first the resummation $`E_1`$, Eq. (45). A detailed analysis shows that the coefficients $`R_1`$ can only be defined for $`r_n=1`$. We have also tried $`r_0=2`$ and $`r_1=2`$, but the resulting Padé approximants turned out to be defective. Therefore, we have fixed $`r_n=1`$ for all $`0\le n\le 5`$. We must also fix the parameters $`\{b_n\}`$. It is impossible to vary all of them independently, since there are too many combinations. 
For this reason, we have taken all $`b_n`$ equal, i.e. we have set $`b_n=b_v`$ for all $`n`$. Finally, we have only considered the case $`q=p-1`$. Therefore, the analysis is based on the approximants $$\widehat{E}_1(\cdot )(p;b_u,r_u;b_v)=E_1(\cdot )(p-1,p;b_u,r_u;\{b_n=b_v\},\{r_n=1\}).$$ (46) Estimates of the fixed point $`(\overline{u}^{*},\overline{v}^{*})`$ have been obtained by solving the equations $$\widehat{E}_1(\beta _{\overline{u}}/\overline{u})(p;b_u,r_u;b_v)=0,\qquad \widehat{E}_1(\beta _{\overline{v}}/\overline{v})(p;b_u,r_u;b_v)=0.$$ (47) We have used $`r_u=1,2`$, $`p=4,5,6`$ and we have varied $`b_u`$ and $`b_v`$ between 0 and 20. As usual in these procedures, we must determine the optimal values of the parameters $`b_u`$ and $`b_v`$. This is usually accomplished by looking for values of $`b_u`$ and $`b_v`$ such that the estimates are essentially independent of the order $`p`$ of the series. In the present case, we have not been able to find any such pair. Indeed, the five-loop results ($`p=5`$) are systematically higher than those obtained with $`p=4`$ and $`p=6`$. For instance, if we average all estimates with $`0\le b_u,b_v\le 5`$ we obtain $$\overline{u}^{*}=-0.66(1)\ (p=4),\quad -0.78(2)\ (p=5),\quad -0.63(3)\ (p=6);$$ (48) $$\overline{v}^{*}=2.235(3)\ (p=4),\quad 2.273(4)\ (p=5),\quad 2.250(23)\ (p=6).$$ (49) The uncertainties quoted here are the standard deviations of the estimates in the quoted interval and show that the dependence on $`b_u`$, $`b_v`$, and $`r_u`$ is very small compared to the variation of the results with $`p`$. Increasing $`b_u,b_v`$ does not help, since the five-loop result is largely insensitive to variations of the parameters, while for $`p=4`$ and $`p=6`$ $`|\overline{u}^{*}|`$ and $`\overline{v}^{*}`$ decrease with increasing $`b_u`$ and $`b_v`$. It is difficult to obtain a final estimate from these results. We quote $$\overline{u}^{*}=-0.68(10),\qquad \overline{v}^{*}=2.25(2),$$ (50) which includes all the estimates reported above. 
The instability with $`p`$ of the results reported above seems to indicate that some of the hypotheses underlying the choice of the parameters are probably incorrect. One may suspect that choosing all $`b_n`$ equal does not allow a correct resummation of the coefficients, and that $`\beta _{\overline{u}}`$ and $`\beta _{\overline{v}}`$ need different choices of the parameters. We have therefore tried a second strategy. First, for each $`\beta `$-function, we have carefully analyzed each coefficient of the series in $`u`$, trying to find an optimal value of the parameter $`b_n`$$`r_n`$ was fixed equal to 1 in all cases — by requiring the stability of the estimates of the coefficient with respect to a change of the order of the series. However, only for the first two coefficients were we able to identify a stable region, so that we could not apply this method. On the other hand, as we shall see, this method works very well for the resummations of the coefficients that use the conformal mapping. Let us now discuss the conformal Padé-Borel method. As before, we have tried two different strategies. In the first case we have set all $`b_n`$ equal to $`b_v`$ and all $`\alpha _n`$ equal to $`\alpha _v`$, we have used the same parameters for the two $`\beta `$-functions, and we have looked for optimal values of $`b_u`$, $`b_v`$ and $`\alpha _v`$, setting $`r_u=1,2`$. While before, for each $`p`$, the estimates were stable, in this case the fluctuations at fixed $`p`$ are very large, and no estimate can be obtained. We then applied the second strategy: we carefully analyzed each coefficient of the series in $`u`$, finding optimal values $`b_{n,\mathrm{opt}}`$ and $`\alpha _{n,\mathrm{opt}}`$ for each $`n`$ and each $`\beta `$-function. Of course, the required stability analysis can only be performed if the series is long enough, and thus we have always taken $`q\le 4`$. 
Therefore, we consider $$\widehat{E}_2(\cdot )(q,p;b_u,r_u;\delta _b,\delta _\alpha )=E_2(\cdot )(q,p;b_u,r_u;\{b_{n,\mathrm{opt}}+\delta _b\},\{\alpha _{n,\mathrm{opt}}+\delta _\alpha \}),$$ (51) where $`\delta _\alpha `$ and $`\delta _b`$ are ($`n`$-independent) numbers which allow us to vary $`b_n`$ and $`\alpha _n`$ around the optimal values. Estimates of the fixed point are obtained from $$\widehat{E}_2(\beta _{\overline{u}}/\overline{u})(q,p;b_u,r_{1,u};\delta _b,\delta _\alpha )=0,\qquad \widehat{E}_2(\beta _{\overline{v}}/\overline{v})(q,p;b_u,r_{2,u};\delta _b,\delta _\alpha )=0.$$ (52) The first problem to be addressed is the choice of the parameters $`r_{1,u}`$ and $`r_{2,u}`$. For $`\beta _{\overline{u}}/\overline{u}`$ we find that the Padé approximants are always defective for $`r_{1,u}=1,2`$; they are well behaved only for $`r_{1,u}=3`$ and $`q=4`$. Since the resummed series in $`u`$ has coefficients that are quite small, we also decided to use $`r_{1,u}=0`$, which corresponds to a direct summation of the series in $`u`$, without any Padé-Borel transformation. For $`\beta _{\overline{v}}/\overline{v}`$ we did not observe a regular pattern for the defective Padé approximants, and we have used $`r_{2,u}=1,2`$, discarding all defective cases. The results, for selected values of $`p`$, $`q`$, and $`r_{1,u}`$, are reported in Table VII. The quoted uncertainty is the standard deviation of the results when $`-3\le \delta _b\le 3`$ and $`-1\le \delta _\alpha \le 1`$. This choice is completely arbitrary, but in similar analyses of different models we found that varying $`\alpha `$ by $`\pm 1`$ and $`b`$ by $`\pm 3`$ provides a reasonable estimate of the error. Notice that we have not optimized $`b_u`$, but have averaged over all values between 0 and 20, since the dependence on this parameter is extremely small. 
The results are stable, giving the final estimate (average of the results with $`p=6`$, $`q=4`$) $$\overline{u}^{*}=-0.631(16),\qquad \overline{v}^{*}=2.195(20).$$ (53) The error bars have been chosen in such a way as to include all central values for $`p=5`$ and $`p=6`$. It should be noted that, even if our results are quite stable with respect to changes of the parameters, most of the approximants do not contribute since they are defective. For this reason, in the following we will always carefully check the dependence of the estimates on the value of the fixed point, considering also values of $`(\overline{u}^{*},\overline{v}^{*})`$ that are well outside the confidence intervals of Eq. (53). We can compare our results for the fixed point with previous determinations. Ref. reports $`(\overline{u}^{*},\overline{v}^{*})=(-0.667,2.244)`$, obtained from the Chisholm-Borel analysis of the four-loop series. The same expansion was also analyzed by Varnashev , obtaining $`(\overline{u}^{*},\overline{v}^{*})=(-0.582(85),2.230(83))`$ and $`(\overline{u}^{*},\overline{v}^{*})=(-0.625(60),2.187(56))`$ with different sets of Padé approximants. The $`ϵ`$-algorithm of Wynn with a Mittag-Leffler transform was used in Ref. , finding $`(\overline{u}^{*},\overline{v}^{*})=(-0.587,2.178)`$. From the analysis of the five-loop series, Pakhnin and Sokolov obtain $`(\overline{u}^{*},\overline{v}^{*})=(-0.711(12),2.261(18))`$. While the four-loop results are in good agreement with our estimates, the five-loop estimate differs significantly; this may indicate that the claim of Ref. that the error on their estimates is approximately 1–2% is rather optimistic. Note also that the five-loop result is quite different from the previous four-loop estimates. We have also tried to determine the eigenvalues of the stability matrix $`\mathrm{\Omega }`$, cf. Eq. (13), which controls the subleading corrections in the model. We used both a double Padé-Borel transformation and the conformal Padé-Borel method. 
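To illustrate the mechanics behind Eq. (13), and why a negative eigenvalue signals instability, one can work through the lowest-order (quadratic) truncation of the $`\beta `$-functions (21), (23) at $`M=1`$, $`N=0`$, where everything is elementary. This is only a sketch of the stability computation: at this crude order the Ising point of the truncated system sits at $`(\overline{u},\overline{v})=(0,1)`$ and its negative eigenvalue is $`-1/3`$, a mere caricature of the resummed six-loop value $`\omega =-0.177(6)`$ quoted earlier.

```python
# Quadratic truncation of the beta-functions for M=1, N=0:
# beta_u = -u + u^2 + (2/3) u v,   beta_v = -v + v^2 + (3/2) u v.
def beta_u(u, v):
    return -u + u * u + (2.0 / 3.0) * u * v

def beta_v(u, v):
    return -v + v * v + 1.5 * u * v

def omega_matrix(u, v, h=1e-6):
    # numerical Jacobian of (beta_u, beta_v) by central differences,
    # mimicking the matrix of Eq. (13)
    def d(f, du, dv):
        return (f(u + h * du, v + h * dv) - f(u - h * du, v - h * dv)) / (2 * h)
    return [[d(beta_u, 1, 0), d(beta_u, 0, 1)],
            [d(beta_v, 1, 0), d(beta_v, 0, 1)]]

def eigenvalues(m):
    # closed form for a 2x2 matrix, returned in increasing order
    tr = m[0][0] + m[1][1]
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    disc = (tr * tr / 4.0 - det) ** 0.5
    return (tr / 2.0 - disc, tr / 2.0 + disc)

# Ising fixed point of the truncated system: (u*, v*) = (0, 1).
w = eigenvalues(omega_matrix(0.0, 1.0))
print(w)   # ~(-1/3, 1): the negative eigenvalue signals instability
```

Central differences are exact here because the truncated $`\beta `$-functions are quadratic; in the actual analysis the eigenvalues are of course extracted from the resummed six-loop series, where, as discussed below, the procedure is far less stable.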
In the first case we obtain estimates that vary strongly with the order and, as happened for the position of the fixed point, it is impossible to obtain results that are insensitive to the order $`p`$. For $`r_u=1`$, discarding the cases in which the computed eigenvalues are complex, we obtain for the smallest eigenvalue $`\omega `$: $$\omega =0.16(2)\ (p=4),\quad 0.21(3)\ (p=5),\quad 0.16(3)\ (p=6).$$ (54) We have included in the error the dependence on the position of the fixed point. These estimates have been obtained setting $`r_n=1`$ and averaging over $`b_u`$ and $`b_v`$ varying between 0 and 10. We have not tried to optimize the choice of these parameters, since the estimates show only a small dependence on them. We have also considered $`r_u=2`$. In this case a large fraction of the approximants is defective (for $`p=4`$ they are all defective). We obtain $$\omega =0.29(4)\ (p=5),\quad 0.33(5)\ (p=6).$$ (55) The quite large discrepancy between the estimates (54) and (55) clearly indicates that the analysis is not very robust. A conservative final estimate is $$\omega =0.25(10),$$ (56) which includes the previous results. We have also tried the conformal Padé-Borel method, optimizing each coefficient separately . However, several problems appeared immediately. First, we could not perform a Padé-Borel resummation of the series in $`u`$ of the elements of the stability matrix: in all cases, some Padé approximant was defective. As in the determination of the fixed-point position, we tried to resum the series in $`u`$ without any additional transformation. For $`p=4`$ this gives reasonable results, and we can estimate $`\omega =0.29(9)`$. However, for $`p=5,6`$ all eigenvalues we find are complex, and as such must be discarded. The fact that the series appearing in the stability matrix always generate defective Padé approximants may indicate that the series in $`u`$ are not Borel summable. 
In this case, one expects that the estimates converge towards the correct value up to a certain number of loops; increasing the length of the series further worsens the final results. If the expansion is indeed not Borel summable, the previous results seem to indicate that, for the subleading exponent $`\omega `$, the best results are obtained at four loops. Let us now compute the critical exponents. As before, we tried several different methods. A first estimate was obtained using the double Padé-Borel method. Each exponent was computed from the approximants $`\widehat{E}_1(ϵ)(p;b_u,r_u;b_v)`$ defined in Eq. (46). For $`\gamma `$ and $`\nu `$, the series for $`1/\gamma `$ and $`1/\nu `$ are more stable, and thus the final estimates are obtained from their analysis. For $`\eta `$, if we write $`\eta =\sum _n\eta _n(v)u^n`$, then $`\eta _0(v)\propto v^2`$ and $`\eta _1(v)\propto v`$. In this case, for $`n=0`$ we resummed the series $`\eta _0(v)/v^2`$, while for $`n=1`$ we considered $`\eta _1(v)/v`$. The results we obtain are very stable, even if we do not optimize the parameters $`b_u`$ and $`b_v`$. Without choosing any particular value for them, but simply averaging over all values between 0 and 10, we obtain the results of Table VIII. Note that we have not quoted any estimate of $`\eta `$ for $`p=4`$: in all cases, some Padé approximant was defective. The quoted uncertainty, which expresses the variation of the estimates when $`b_u`$, $`b_v`$, and $`r_u`$ are changed, is very small, and clearly cannot be interpreted as a correct estimate of the error, since the variation with the order $`p`$ of the series is much larger. In Table VIII we also report the estimates of the exponents corresponding to several different values of $`(\overline{u}^{*},\overline{v}^{*})`$: beside the estimate (53), we consider the two values appearing in the first analysis of the fixed-point position, those with the largest and smallest value of $`\overline{u}^{*}`$, when $`b_u`$ and $`b_v`$ vary in $`[0,5]`$. 
The dependence on $`(\overline{u}^{},\overline{v}^{})`$ is quite small, of the same order as the dependence on the order $`p`$. As final estimate we quote the value obtained for $`p=6`$, using the estimate (53) for the fixed point. The error is estimated by the difference between the results with $`p=5`$ and $`p=6`$. Therefore we have $$\gamma =1.313(14),\nu =0.668(6),\eta =0.0327(19).$$ (57) Note that, within one error bar, all estimates of $`\gamma `$ and $`\nu `$ reported in Table VIII are compatible with the results given above. Instead, the estimates of $`\eta `$ show a stronger dependence on the critical point, and a priori, since we do not know how reliable the uncertainties reported in Eq. (53) are, it is possible that the correct estimate is outside the confidence interval reported above. We will now use the conformal Padé-Borel method. A first estimate is obtained considering approximants of the form $$\widehat{E}_3(ϵ)(q,p;b_u,r_u;b_v,\alpha _v)=\widehat{E}_2(ϵ)(q,p;b_u,r_u;\{b_n=b_v\},\{\alpha _n=\alpha _v\}),$$ (58) setting all $`b_n`$ equal to $`b_v`$ and all $`\alpha _n`$ equal to $`\alpha _v`$. The results show a tiny dependence on $`b_u`$, while no systematic difference is observed between the approximants with $`r_u=1`$ and $`r_u=2`$. Therefore, we averaged over all non-defective results with $`0\le b_u\le 10`$ and $`r_u=1,2`$. Then, we looked for optimal intervals $`[\alpha _{\mathrm{opt}}-\mathrm{\Delta }\alpha ,\alpha _{\mathrm{opt}}+\mathrm{\Delta }\alpha ]`$, $`[b_{\mathrm{opt}}-\mathrm{\Delta }b,b_{\mathrm{opt}}+\mathrm{\Delta }b]`$ for the parameters $`\alpha _v`$ and $`b_v`$. They were determined by minimizing the discrepancies among the estimates corresponding to $`(p,q)=(5,3)`$, $`(5,4)`$, $`(6,3)`$, and $`(6,4)`$. Using $`\mathrm{\Delta }\alpha =1`$ and $`\mathrm{\Delta }b=3`$ as we did before, we obtain $`b_{\mathrm{opt}}=5`$ and $`\alpha _{\mathrm{opt}}=0.5`$. The results corresponding to this choice of parameters are reported in Table IX. 
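The interval optimization described above, choosing $`(b_v,\alpha _v)`$ so that the estimates from $`(p,q)=(5,3)`$, $`(5,4)`$, $`(6,3)`$, $`(6,4)`$ agree best, amounts to minimizing a spread over a parameter grid. A schematic version follows; the mock estimator is hypothetical (its drift away from $`(b,\alpha )=(5,0.5)`$ is our invention, chosen only to mirror the quoted optimum), standing in for the real resummed series.

```python
import itertools

def optimal_parameters(estimate, b_grid, alpha_grid,
                       orders=((5, 3), (5, 4), (6, 3), (6, 4))):
    """Return the (b, alpha) pair minimizing the spread of the estimates
    across the orders (p, q), mimicking the optimization in the text."""
    def spread(b, a):
        vals = [estimate(p, q, b, a) for p, q in orders]
        return max(vals) - min(vals)
    return min(itertools.product(b_grid, alpha_grid),
               key=lambda ba: spread(*ba))

# hypothetical mock estimator: the order dependence vanishes at (b, alpha) = (5, 0.5)
def mock_estimate(p, q, b, alpha):
    return 1.33 + 0.001 * (p + q) * ((b - 5.0)**2 + (alpha - 0.5)**2)

b_grid = [0.5 * k for k in range(21)]              # 0, 0.5, ..., 10
alpha_grid = [-2.0 + 0.25 * k for k in range(17)]  # -2, -1.75, ..., 2
print(optimal_parameters(mock_estimate, b_grid, alpha_grid))  # (5.0, 0.5)
```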
As final estimate we quote the value obtained for $`p=6`$ and $`q=4`$, using the estimate (53) for the fixed point: $$\gamma =1.338(21),\nu =0.676(11),\eta =0.0279(5).$$ (59) For $`\gamma `$ and $`\nu `$ the estimates given above are compatible with all results of Table IX. In particular, they are correct even if the error in Eq. (53) is underestimated. They are also in good agreement with the estimates obtained with the double Padé-Borel transformation, cf. Eq. (57). On the other hand, it is not clear whether the error on $`\eta `$ is reliable. Indeed, comparison with Eq. (57) may indicate that the correct value of $`\eta `$ is larger than predicted by this analysis. As we did for the fixed point, we can also use the approximants $`\widehat{E}_2`$ defined in Eq. (51), optimizing separately each coefficient. The results are reported in Table X and correspond to $`0\le b_u\le 10`$, $`-3\le \delta _b\le 3`$, $`-1\le \delta _\alpha \le 1`$ and $`r_u=1,2`$. As can be seen from the very small “errors” on the results, the dependence on $`b_u`$ is tiny and we have not tried to optimize this parameter. The results are reasonably stable with respect to changes of $`p`$ and $`q`$ and also the dependence on the value of the fixed point is small. As final results from this analysis we quote the values obtained with $`p=6`$ and $`q=4`$: $$\gamma =1.321(8),\nu =0.681(1),\eta =0.0313(5).$$ (60) We can compare these results with the previous estimates (57) and (59). The agreement is reasonable, although the quoted error on $`\nu `$ and $`\eta `$ is probably underestimated. This is confirmed by the fact that the scaling relation $`\gamma =(2-\eta )\nu `$ is not satisfied within error bars: indeed, using the estimates of $`\nu `$ and $`\eta `$, we get $`\gamma =1.341(2)`$. We want now to obtain final estimates from the analyses given above. 
Since in the conformal Padé-Borel method we make use of some additional information, the position of the singularity of the Borel transform, we believe this analysis to be the most reliable one. As our final estimate we have therefore considered the average between (59) and (60), fixing the error in such a way as to include also the estimates (57). In this way we obtain $$\gamma =1.330(17),\nu =0.678(10),\eta =0.030(3).$$ (61) A check of these results is provided by the scaling relation $`\gamma =\nu (2-\eta )`$. Using the values of $`\nu `$ and $`\eta `$ we obtain $`\gamma =1.336(20)`$ in good agreement with the direct estimate. A second check of these results is given by the inequalities $`\nu \ge 2/3\simeq 0.66667`$ and $`\gamma +2\eta /3\ge 4/3\simeq 1.33333`$, which are clearly satisfied by our estimates, e.g. $`\gamma +2\eta /3=1.350(17)`$. Finally, we want to stress that our final estimates (61) are compatible with all results appearing in Tables VIII, IX, and X, even those computed for $`(\overline{u}^{},\overline{v}^{})`$ largely different from the estimate (53). Thus, we believe that our error estimates properly take into account the uncertainty on the position of the fixed point. Using the scaling relations $`\alpha =2-3\nu `$ and $`\beta =\frac{1}{2}\nu (1+\eta )`$ we have $$\alpha =-0.034(30),\beta =0.349(5).$$ (62) For comparison we have also performed the direct analysis of the perturbative series, resumming the expansions for fixed $`\overline{v}^{}/\overline{u}^{}`$. In zero dimensions, these series are not Borel summable, and this is expected to be true in any dimension. However, for the short series we are considering, we can still hope to obtain reasonable results. We have used the same procedures described in Ref. , performing a conformal transformation and using $`\overline{u}_b(z)`$ given in Eq. (35) as position of the singularity. 
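The consistency checks around Eqs. (61)-(62) are simple error propagation; the short script below (ours, assuming independent errors combined in quadrature) reproduces the quoted central values and uncertainties.

```python
import math

def combine(grads, errs):
    """Propagate independent errors in quadrature: grads = derivatives dF/dx_i."""
    return math.sqrt(sum((g * e)**2 for g, e in zip(grads, errs)))

nu, dnu = 0.678, 0.010
eta, deta = 0.030, 0.003
gamma, dgamma = 1.330, 0.017

# scaling relation gamma = nu*(2 - eta), check of Eq. (61)
g = nu * (2 - eta)
dg = combine([2 - eta, -nu], [dnu, deta])
print(f"gamma = {g:.3f}({round(1000*dg)})")  # 1.336(20)

# alpha = 2 - 3 nu and beta = nu*(1 + eta)/2, Eq. (62)
alpha = 2 - 3 * nu
dalpha = 3 * dnu
beta = 0.5 * nu * (1 + eta)
dbeta = combine([0.5 * (1 + eta), 0.5 * nu], [dnu, deta])
print(f"alpha = {alpha:.3f}({round(1000*dalpha)}), beta = {beta:.3f}({round(1000*dbeta)})")

# combination entering the rigorous inequality gamma + 2 eta/3 >= 4/3
lhs = gamma + 2 * eta / 3
dlhs = combine([1.0, 2 / 3], [dgamma, deta])
print(f"gamma + 2eta/3 = {lhs:.3f}({round(1000*dlhs)})")  # 1.350(17)
```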
We obtain $`\overline{u}^{}=-0.763(25),`$ $`\overline{v}^{}=2.306(43);`$ (63) $`\gamma =1.327(12),\nu =0.673(8),`$ $`\eta =0.029(3),\omega =0.34(11).`$ (64) The estimate of the fixed point is very different from that computed before. This may indicate that the non-Borel summability causes a large systematic error in this type of analysis. Probably the optimal truncation for the $`\beta `$-functions corresponds to shorter series. On the other hand, the critical exponents show a tiny dependence on the position of the fixed point. The estimates we obtain are in good agreement with our previous ones, indicating that the exponent series are much better behaved. Let us now compare our results with previous field-theoretic determinations, see Table XI. We observe a very good agreement with all the reported results. Note that our error bars on $`\gamma `$ and $`\nu `$ are larger than those previously quoted. We believe our uncertainties to be more realistic. Indeed, we have often found in this work that Padé-Borel estimates are insensitive to the parameters used in the analysis, in particular to the parameter $`b`$ characterizing the Borel-Leroy transform. Therefore, error estimates based on this criterion may underestimate the uncertainty of the results. The perturbative results reported in Table XI correspond to the massive scheme in fixed dimension $`d=3`$ and to the minimal subtraction scheme without $`ϵ`$-expansion. It should be noted that the latter scheme does not provide any estimate at five loops . Indeed, at this order the resummed $`\beta `$-functions do not have any zero in the region $`u<0`$. This is probably related to the fact that the series which is analyzed is not Borel summable. Therefore, perturbative expansions should have an optimal truncation beyond which the quality of the results worsens. For the $`\beta `$-functions in the minimal subtraction scheme, the optimal number of loops appears to be four. 
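The notion of an optimal truncation can be made concrete with a toy asymptotic series (our illustration, unrelated to the actual six-loop expansions): the partial sums of $`\sum _kk!(-u)^k`$ first approach the Borel sum and then deteriorate, with the smallest truncation error near order $`k\sim 1/u`$.

```python
u, exact = 0.1, 0.9156333   # Borel sum of sum_k k!(-u)^k at u = 0.1
s, term, errs = 0.0, 1.0, []
for k in range(21):
    s += term                  # partial sum through order k
    errs.append(abs(s - exact))
    term *= -(k + 1) * u       # next term: (k+1)! (-u)^(k+1)
best = min(range(len(errs)), key=errs.__getitem__)
print(f"smallest truncation error at order {best}")  # near 1/u = 10
```

Past that order the terms grow factorially and the partial sums move away from the resummed value, which is the behavior invoked above for the minimal-subtraction β-functions.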
Two other methods have been used to compute the critical exponents: the scaling-field method and the $`\sqrt{ϵ}`$-expansion . The former gives reasonable results, while the latter is unable to provide quantitative estimates of critical quantities, see e.g. Ref. . We can also compare our results with the recent Monte Carlo estimates of Ref. . The agreement is quite good for $`\gamma `$ and $`\nu `$, while our estimate of $`\eta `$ is slightly smaller, although still compatible within one error bar. This is not unexpected and appears as a general feature of the $`d=3`$ expansion: indeed, also for the pure model, the estimate of $`\eta `$ obtained in the fixed-dimension expansion is lower than the Monte Carlo and high-temperature results (see Ref. and references therein).

### B The random $`M`$-vector model for $`M\ge 2`$

In this Section we consider the random vector model for $`M\ge 2`$. First, we have studied the region $`u<0`$, looking for a possible fixed point. We have not found any stable solution, in agreement with the general arguments given in the introduction: the mixed point is indeed located in the region $`u>0`$, and it is therefore irrelevant for the critical behavior of the dilute model. What remains to be checked is the stability of the $`O(M)`$ fixed point. As we mentioned in the introduction, a general argument predicts that this fixed point is stable; the random (cubic) perturbation introduces only subleading corrections with exponent $`\omega =-\alpha _M/\nu _M`$. This exponent can be easily computed from $$\omega =\frac{\partial \beta _{\overline{u}}}{\partial \overline{u}}(0,\overline{v}^{}),$$ (65) which is $`N`$-independent as expected. The analysis is identical to that performed for the stability of the Ising point in Ref. . 
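The theoretical prediction for this exponent follows from hyperscaling alone: with $`\alpha =2-d\nu `$ one has $`-\alpha /\nu =d-2/\nu `$. A two-line evaluation is sketched below; the values of $`\nu `$ used are approximate literature numbers for the pure models, inserted by us for illustration and not taken from this paper.

```python
def omega_harris(nu, d=3):
    """Correction exponent at the pure O(M) fixed point:
    hyperscaling alpha = 2 - d*nu gives -alpha/nu = d - 2/nu."""
    return d - 2.0 / nu

# approximate pure-model correlation-length exponents (illustrative values)
for label, nu in [("XY (M=2)", 0.672), ("Heisenberg (M=3)", 0.711)]:
    print(f"{label}: -alpha/nu = {omega_harris(nu):.3f}")
```

A positive result signals that dilution is an irrelevant perturbation; note how close the XY value is to zero, which is why the $`M=2`$ case is delicate, as discussed below.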
We use a conformal transformation and the large-order behavior of the series; then we fix the optimal values $`b_{\mathrm{opt}}`$ and $`\alpha _{\mathrm{opt}}`$ of the parameters by requiring the estimates of $`\overline{v}^{}`$ and $`\omega `$ to be stable with respect to the order of the series used. The errors were obtained varying $`\alpha `$ and $`b`$ in the intervals $`[\alpha _{\mathrm{opt}}-2,\alpha _{\mathrm{opt}}+2]`$ and $`[b_{\mathrm{opt}}-3,b_{\mathrm{opt}}+3]`$. As in Ref. , the final result is reported with an uncertainty corresponding to two standard deviations. The final results for some values of $`M`$ are reported in Table XII, together with estimates of the theoretical prediction $`-\alpha _M/\nu _M`$. For $`M\ge 3`$ these results clearly indicate that the $`O(M)`$-symmetric point is stable. The results are somewhat lower than the theoretical prediction, especially if we consider the high-temperature estimates of the critical exponents of Ref. . This is not surprising: indeed the estimates of the subleading exponents $`\omega `$ show in many cases discrepancies with estimates obtained using other methods. This is probably connected to the non-analyticity of the $`\beta `$-function at the fixed point . A similar discrepancy, although still well within a combined error bar, is observed for $`M=2`$. In this case, we obtain $`\omega >0`$, indicating that the fixed point is stable. The error, however, does not allow us to exclude the opposite case.

###### Acknowledgements.
We thank Victor Martín-Mayor and Giorgio Parisi for useful discussions.
# ITFA–2000–04 Effect of CP-violation on the sphaleron rate

## Acknowledgements

I would like to thank Jan Smit and Chris van Weert for useful discussions.
# Absence of Asymptotic Freedom in Non-Abelian Models

## Abstract

The percolation properties of equatorial strips of the two dimensional $`O(3)`$ nonlinear $`\sigma `$ model are investigated numerically. Convincing evidence is found that a sufficiently narrow strip does not percolate at arbitrarily low temperatures. Rigorous arguments are used to show that this result implies both the presence of a massless phase at low temperature and lack of asymptotic freedom in the massive continuum limit. A heuristic estimate of the transition temperature is given which is consistent with the numerical data.

PACS: 64.60.Cn, 05.50.+q, 75.10.Hk

One of the crucial unanswered problems in particle and condensed matter physics concerns the phase diagram of the two dimensional ($`2D`$) nonlinear $`\sigma `$ models . It is widely believed that the nonabelian models with $`N\ge 3`$ are in a massive phase for any finite inverse temperature $`\beta `$ and that this is intimately related to their perturbative asymptotic freedom. Over the years we have brought forth many reasons why we think that these beliefs are unfounded , but the absence of a mathematical proof combined with ambiguous numerical results left the issues wide open. In the present letter we would like to provide convincing numerical evidence that in fact the $`2D`$ $`O(3)`$ model possesses a massless phase for sufficiently large $`\beta `$ and give a rigorous proof that this is incompatible with asymptotic freedom in the massive phase. We will also give a heuristic explanation of why and where the phase transition happens. The models we are considering consist of unit length spins $`s`$ taking values on the sphere $`S^{N-1}`$, placed at the sites of a $`2D`$ regular lattice. These spins interact ferromagnetically with their nearest neighbors. Let $`\langle ij\rangle `$ denote a pair of neighboring sites. 
We will consider two types of interactions between neighbouring spins:

* Standard action (s.a.): $`H_{ij}=-s(i)\cdot s(j)`$
* Constrained action (c.a.): $`H_{ij}=-s(i)\cdot s(j)`$ for $`s(i)\cdot s(j)\ge c`$ and $`H_{ij}=+\mathrm{\infty }`$ for $`s(i)\cdot s(j)<c`$ for some $`c\in [-1,1)`$.

Almost a decade ago we showed that one can rephrase the issue regarding the existence of a soft phase in these models as a percolation problem and in fact this is the reason we introduced the c.a. model (please note that the c.a. model has the same perturbative expansion as the s.a. model and possesses instantons). Namely let $`ϵ=\sqrt{2(1-c)}`$ and $`S_ϵ`$ the set of sites such that $`|s\cdot n|<ϵ/2`$ for some given unit vector $`n`$. Our rigorous result was that if on the triangular (T) lattice the set $`S_ϵ`$ does not contain a percolating cluster, then the $`O(N)`$ model must be massless at that $`c`$. For the abelian $`O(2)`$ model we could prove the absence of percolation of this equatorial set $`S_ϵ`$ for $`c`$ sufficiently large (modulo certain technical assumptions which were later eliminated by M. Aizenman ). For the nonabelian cases we could not give a rigorous proof. We did however present certain arguments explaining why the percolating scenario seemed unlikely. In this letter we will give convincing numerical evidence that a sufficiently narrow equatorial strip does not percolate for any $`c`$. We will also show that via a rigorous inequality derived by us in the past , the existence of a finite $`\beta _{crt}`$ in the s.a. model on the square (S) lattice is incompatible with the presence of asymptotic freedom in the massive continuum limit. For clarity we will review first a few crucial points regarding percolation in $`2D`$: a) Let $`A`$ be a subset of the lattice defined by the spins lying in some subset $`𝒜\subset S^{N-1}`$ and $`\stackrel{~}{A}`$ its complement. Then with probability 1 $`A`$ and $`\stackrel{~}{A}`$ do not percolate at the same time. 
(For this point it is crucial that the lattice is self-matching, hence our use of the T instead of the S lattice; on the latter an ordinarily connected cluster can be stopped by a cluster connected only $`*`$-wise, i.e. also diagonally.) This fact has been proven rigorously only for special cases like the $`+`$ and $`-`$ clusters of the Ising model, but is believed to hold generally. b) If neither $`A`$ nor its complement $`\stackrel{~}{A}`$ percolate, then the expected size of the cluster of $`A`$ attached to the origin, denoted by $`\langle A\rangle `$, diverges; the same holds for its complement $`\stackrel{~}{A}`$ (Russo’s lemma ). (If $`\stackrel{~}{A}`$ percolates, then $`\langle A\rangle `$ is finite.) Let then $`𝒫_ϵ`$ (the union of the polar caps) be the complement of $`𝒮_ϵ`$ (the equatorial strip of width $`ϵ`$). According to the discussion above, either $`S_ϵ`$ percolates, or $`P_ϵ`$ percolates, or neither $`S_ϵ`$ nor $`P_ϵ`$ percolates and then both have divergent mean size (we shall call this third possibility in short formation of rings). Consider a sequence of tori of increasing size $`L`$ and the mean cluster size of the set $`A`$ corresponding to a subset $`𝒜\subset S^{N-1}`$ of positive measure. If $`A`$ percolates $`\langle A\rangle =O(L^2)`$, if its complement percolates $`\langle A\rangle `$ will approach a finite nonzero value, and if $`A`$ forms rings $`\langle A\rangle =O(L^{2-\eta })`$ for some $`\eta >0`$. Therefore, if we define the ratio $$r=\langle P_ϵ\rangle /\langle S_ϵ\rangle ,$$ (1) for $`L\to \mathrm{\infty }`$ it should either go to 0, or to $`\mathrm{\infty }`$ or to some finite nonzero value depending on which one of the three possibilities is realized; the latter possibility assumes that $`\eta `$ is the same for $`P_ϵ`$ and $`S_ϵ`$, as indicated by our numerics and consistent with scaling. In Fig.1 we show the numerical value of the ratio $`r`$ as a function of $`c`$ for $`\beta =0`$ for four values of $`ϵ`$ for the c.a. model on a T lattice. 
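At $`\beta =0`$ the spins are independent, so membership in $`S_ϵ`$ is simply a Bernoulli event with probability $`ϵ/2`$, and the ratio $`r`$ can be estimated with a few lines of code. The sketch below is our own simplified illustration (the T lattice represented as a square lattice with one added diagonal bond, union-find clustering, a single sample), not the production cluster-algorithm code used for the figures:

```python
import random

def mean_cluster_size(occupied, L):
    """Mean size of the cluster containing a random occupied site on an
    L x L triangular lattice (square lattice + one diagonal), periodic b.c."""
    parent = list(range(L * L))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
    for i in range(L):
        for j in range(L):
            if not occupied[i * L + j]:
                continue
            for di, dj in ((1, 0), (0, 1), (1, 1)):  # right, down, diagonal
                ni, nj = (i + di) % L, (j + dj) % L
                if occupied[ni * L + nj]:
                    union(i * L + j, ni * L + nj)
    sizes = {}
    for x in range(L * L):
        if occupied[x]:
            root = find(x)
            sizes[root] = sizes.get(root, 0) + 1
    n = sum(sizes.values())
    return sum(s * s for s in sizes.values()) / n if n else 0.0

random.seed(1)
L, eps = 64, 0.75                # strip width below the c = -1 threshold eps = 1
strip = [random.random() < eps / 2 for _ in range(L * L)]  # beta = 0: independent sites
caps = [not s for s in strip]
r = mean_cluster_size(caps, L) / mean_cluster_size(strip, L)
print(f"r = <P>/<S> = {r:.2f}")
```

With these parameters the caps percolate while the strip does not, so $`r`$ grows roughly like $`L^2`$; repeating the measurement at several $`L`$ is what distinguishes the three regimes described in the text.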
The results were obtained from a Monte Carlo (MC) investigation using an $`O(3)`$ version of the Swendsen-Wang cluster algorithm and consist of a minimum of 20,000 lattice configurations used for taking measurements. For each value of $`ϵ`$ we studied $`L=160`$, 320 and 640 (for $`ϵ=.78`$ we also studied $`L=1280`$). Three distinct regimes are manifest for each of the four values of $`ϵ`$ investigated:

* For small $`c`$, $`r`$ is increasing with $`L`$, presumably diverging to $`\mathrm{\infty }`$ (region 1).
* For intermediate $`c`$, $`r`$ is decreasing with $`L`$, presumably converging to 0 (region 2).
* For $`c`$ sufficiently large depending upon $`ϵ`$, $`r`$ becomes independent of $`L`$, just as it does at the crossings from region 1 into 2.

Consequently for these values of $`ϵ`$ for small $`c`$ $`P_ϵ`$ percolates, for intermediate $`c`$ $`S_ϵ`$ percolates and for sufficiently large $`c`$ both $`P_ϵ`$ and $`S_ϵ`$ form rings. In Fig.2 we present the phase diagram of the c.a. model on the T lattice for $`\beta =0`$. The dashed line $`D`$ represents the minimal equatorial width above which $`S_ϵ`$ percolates. For $`c=-1`$ (no constraint) the model reduces to independent site percolation, for which the percolation threshold is known rigorously to be $`ϵ=1`$. The rest of the diagram represents qualitatively the results of our investigation of the ratio $`r`$, such as those shown in Fig.1. Two features of this diagram are worth emphasizing:

1. An equatorial strip of width less than approximately $`ϵ=.76`$ never percolates.
2. For approximately $`c>0.4`$ a new phase opens up in which both $`P_ϵ`$ and $`S_ϵ`$ form rings (the dotted line separates it from percolation of $`P_ϵ`$).

For clarity let us briefly review the argument indicating that this phase diagram is incompatible with the existence of a massive phase for arbitrarily large $`c`$. Choosing an arbitrary unit vector $`n`$, one introduces Ising variables $`\sigma =\pm 1`$. The s.a. 
Hamiltonian becomes: $$H_{ij}=-\sigma _i\sigma _j|s_{\parallel }(i)s_{\parallel }(j)|-s_{\perp }(i)\cdot s_{\perp }(j)$$ (2) where $`s_{\parallel }(i)=s(i)\cdot n`$ and $`s_{\perp }(i)=(s(i)\times n)\times n`$. One thus obtains an induced Ising model for which the Fortuin-Kasteleyn (FK) representation is applicable. In this representation the Ising system is mapped into a bond percolation problem: between any like neighboring Ising spins a bond is placed with probability $`p=1-\mathrm{exp}(-2\beta s_{\parallel }(i)s_{\parallel }(j))`$. For the c.a. model a bond is also placed if after flipping one of the two neighboring Ising spins the constraint $`s(i)\cdot s(j)\ge c`$ is violated. The FK representation relates the mean cluster size of the site clusters joined by occupied bonds (FK-clusters) to the Ising magnetic susceptibility. In a massive phase the latter must remain finite. Hence, if the FK-clusters have divergent mean size, the original $`O(3)`$ ferromagnet must be massless (the Ising variables $`\sigma `$ are local functions of the original spin variables $`s`$). Now notice that by construction for the c.a. model the FK-clusters with say $`\sigma =+1`$ must contain all sites with $`s(i)\cdot n>\sqrt{(1-c)/2}`$. Therefore the model must be massless if clusters obeying this condition have divergent mean size. But the polar set $`P_ϵ`$ consists of two disjoint components $`P_ϵ^+`$ (north) and $`P_ϵ^{-}`$ (south). For $`c>(1-ϵ^2)/2`$ there are no clusters containing both elements of $`P_ϵ^+`$ and $`P_ϵ^{-}`$. Hence if for such values of $`c`$ clusters of $`P_ϵ`$ form rings, so do clusters of $`P_ϵ^+`$ separately and hence the $`O(3)`$ model must be massless. The curve $`C`$ given by $`c=(1-ϵ^2)/2`$ is the solid line in Fig.2. The point $`T`$ at the intersection of the curves $`D`$ and $`C`$ gives an upper bound for $`c_{crt}`$, the value of $`c`$ above which the c.a. model must be massless. 
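For the standard action, the FK bond probability quoted above is a one-liner; the snippet below (ours, for illustration only, with arbitrary example values of the longitudinal components) makes the $`\beta `$ dependence explicit:

```python
import math

def fk_bond_prob(beta, s_par_i, s_par_j):
    """Bond probability between like Ising spins (so s_par_i * s_par_j > 0)."""
    return 1.0 - math.exp(-2.0 * beta * s_par_i * s_par_j)

# illustrative values: beta near the rough estimate beta_crt ~ 3.4 quoted later
print(round(fk_bond_prob(3.4, 0.8, 0.6), 3))  # 0.962
```

Bonds become nearly certain for large longitudinal components at large $`\beta `$, which is why the FK-clusters are tied to the polar sets in the argument above.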
To verify the phase diagram in Fig.2 we also measured (at $`\beta =0`$) the ratio of the mean cluster size of the set $`P_{ϵ^{}}^+`$ with $`ϵ^{}=.5`$ to that of the set $`S_ϵ`$ with $`ϵ=0.75`$, for which the equatorial strip never percolates (Fig.3) ($`ϵ`$ and $`ϵ^{}`$ are chosen such that the two sets have equal density). The data indicate that for some intermediate values of $`c`$ $`P_{ϵ^{}}^+`$ forms rings while $`S_ϵ`$ has finite mean size; this region terminates around $`c=0.4`$, where also $`S_ϵ`$ starts forming rings. The larger average cluster size of the polar cap compared to the strip of the same area is probably due to the fact that the strip has a larger boundary than the polar cap. This is in agreement with a general conjecture stated in , namely that for $`c`$ sufficiently large, if two sets have equal area but different perimeters, the one with the smaller perimeter will have larger average cluster size. For the case at hand, this is apparently true for all values of $`c`$. The general belief, which we criticized in Ref. , is that there is a fundamental difference between abelian and nonabelian models. To test this belief we studied the ratio $`r`$ for the c.a. $`O(2)`$ model on the T lattice. The phase diagram is shown in Fig.4. Since in the $`O(2)`$ model the set $`𝒫_ϵ`$ can also be regarded as a set $`𝒮_{\stackrel{~}{ϵ}}`$ where $`\stackrel{~}{ϵ}=\sqrt{4-ϵ^2}`$, certain features of that diagram follow from rigorous arguments. For instance it is clear that in the c.a. model there exist two curves $`C`$ and $`\stackrel{~}{C}`$ and in the region to their right the model must be massless . The precise location of the curves $`D`$ (or $`\stackrel{~}{D}`$) must be determined numerically, something which we did not do. We did verify though that the ring formation region begins around $`c=0.5`$. In our opinion the arguments and numerical evidence provided so far give strong indications that the c.a. models on the T lattice possess a massless phase. 
Universality would suggest that a similar situation must exist for the s.a. models on the T and S lattices. To test universality we measured on the S lattice the renormalized coupling both on thermodynamic lattices in the massive phase and in finite volume in the presumed critical regime (as in ). Our data for the c.a. model on the S lattice only determine an interval (about $`.5`$ to $`.7`$) in which the massless phase of the model sets in; we tried to see if we could get a similar $`L`$ dependence for the renormalized coupling in the s.a. model at a suitable $`\beta `$ as for $`c=.61`$ in the c.a. model at $`\beta =0`$. This seems to be indeed the case for $`\beta `$ roughly $`3.4`$. We went only up to $`L=640`$, hence this equivalence between $`c`$ and $`\beta `$ should be considered only as a rough approximation, but there seems to be no doubt that the two models have the same continuum limit. It is interesting to note that there is a heuristic explanation for both the existence of a massless phase in the s.a. $`O(3)`$ model and for the value of $`\beta _{\mathrm{𝑐𝑟𝑡}}`$. Indeed it is known rigorously that in $`2D`$ a continuous symmetry cannot be broken at any finite $`\beta `$. In a previous paper we showed that the dominant configurations at large $`\beta `$ are not instantons but superinstantons (s.i.). In principle both instantons and s.i. could enforce the $`O(3)`$ symmetry. In a box of diameter $`R`$ the former have a minimal energy $`E_{\mathrm{𝑖𝑛𝑠𝑡}}=4\pi `$ while the latter $`E_{s.i.}=\delta ^2\pi /\mathrm{ln}R`$, where $`\delta `$ is the angle by which the spin has rotated over the distance $`R`$. Now suppose that $`\beta _{\mathrm{𝑐𝑟𝑡}}`$ is sufficiently large for classical configurations to be dominant. Then let us choose $`\delta =2\pi `$ (restoration of symmetry) and ask how large must $`R`$ be so that the superinstanton configuration has the same energy as one instanton. One finds $`\mathrm{ln}R=\pi ^2`$. 
But in the Gaussian approximation $$\langle s(0)\cdot s(x)\rangle \simeq 1-\frac{1}{\beta \pi }\mathrm{ln}x$$ (3) Thus restoration of symmetry occurs for $`\mathrm{ln}x\simeq \pi \beta `$. This simpleminded argument suggests that for $`\beta \gtrsim \pi `$ instantons become less important than s.i.. Now in a gas of s.i. the image of any small patch of the sphere forms rings, hence the system is massless. While this is not a quantitative argument, we believe it captures qualitatively what happens: a transition from localized defects (instantons) to s.i.. Next let us discuss the connection between a finite $`\beta _{crt}`$ and the absence of asymptotic freedom. It follows from our earlier work concerning the conformal properties of the critical $`O(2)`$ model . We refer the reader for details to that paper and give only an outline of the argument. The s.a. lattice $`O(N)`$ model possesses a conserved isospin current. This current can be decomposed into a transverse and longitudinal part. Let $`F^T(p)`$ and $`F^L(p)`$ denote the thermodynamic values of the 2-point functions of the transverse and longitudinal parts at momentum $`p`$, respectively. Using reflection positivity and a Ward identity we proved that in the massive continuum limit the following inequalities must hold for any $`p\ne 0`$: $$0\le F^T(p)\le F^T(0)=F^L(0)\le F^L(p)=2\beta E/N$$ (4) Here $`E`$ is the expectation value of the energy $$E=\langle s(i)\cdot s(j)\rangle $$ at inverse temperature $`\beta `$. Since $`E\le 1`$ it follows that if $`\beta _{crt}<\mathrm{\infty }`$ $`F^T(0)-F^T(p)`$ cannot diverge for $`p\to \mathrm{\infty }`$ as required by perturbative asymptotic freedom. In fact, for $`\beta _{crt}=3.4`$ (which is a plausible guess) $`F^T(p)`$ must be less than 2.27, in violation of the form factor computation giving $`F^T(0)-F^T(\mathrm{\infty })>3.651`$ . Since the implications of our result, that for the c.a. model a sufficiently narrow equatorial strip never percolates, are so dramatic, the reader may wonder how credible the numerics are. 
The usual suspect, the random number generator, should not be important since precision is not the issue here. The only debatable point is whether our results represent the true thermodynamic behaviour for $`L\to \mathrm{\infty }`$ or are merely small volume artefacts. While we cannot rule out rigorously that scenario, certain features of the data make it highly implausible:

* Small volume effects should set in gradually, while the data in Fig.1 indicate a rather abrupt change from a region where $`r`$ is decreasing with $`L`$ to one where $`r`$ is essentially independent of $`L`$.
* For $`c\to -1`$ at fixed $`L`$, $`r`$ must approach the ‘geometric’ value $`r=2/ϵ-1`$. As can be seen, in all the cases studied, throughout the ‘ring’ region $`r`$ is clearly larger than this value, while it should go to 0 if $`S_ϵ`$ percolated.
* In Fig.3 there is no indication of the ratio going to 0 for increasing $`L`$. Moreover the dramatic change in slope around $`c=.4`$ indicates that the polar cap $`P_{ϵ^{}}^+`$ starts forming rings at a smaller value of $`c`$ than the equatorial strip $`S_ϵ`$.

Thus we doubt very much that the effects we are seeing represent small volume artefacts. Moreover, if $`s_z`$ remained massive at low temperature and in fact an arbitrarily narrow equatorial strip percolated, one would have to explain away our old paradox : if such a narrow strip percolated, an even larger strip would percolate and on it one would have an induced $`O(2)`$ model in its massless phase, in contradiction to the Mermin-Wagner theorem. Consequently we are confident that the phase diagram in Fig.2 represents the truth, that a soft phase exists both in the s.a. and the c.a. model and that the massive continuum limit of the $`O(3)`$ model is not asymptotically free. In a previous paper we have already shown that in nonabelian models even at short distances perturbation theory produces ambiguous answers. The present result sharpens that result by eliminating the possibility of asymptotic freedom. 
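As a final sanity check on the numbers used in the asymptotic-freedom argument above (a trivial computation, with $`\beta _{crt}=3.4`$ taken as the rough estimate quoted earlier and $`E\le 1`$):

```python
beta_crt, N = 3.4, 3
bound = 2.0 * beta_crt / N       # F^T(p) <= 2*beta*E/N with E <= 1
print(f"F^T(p) <= {bound:.2f}")  # 2.27, well below the form-factor value 3.651
```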
Acknowledgement: AP is grateful to the Humboldt Foundation for a Senior US Scientist Award and to the Werner-Heisenberg-Institut for its hospitality.
# Top Spin and Experimental Tests

*Contribution to the Thinkshop on Top-Quark Physics for the Tevatron Run II, Fermilab, October 16 - 18, 1999.*

## I Introduction

Evidence to date is circumstantial that the top events analyzed in Tevatron experiments are attributable to a spin-1/2 parent. The evidence comes primarily from consistency of the distribution in momentum of the decay products with the pattern expected for the weak decay $`t\to b+W`$, with $`W\to \ell +\nu `$ or $`W\to \mathrm{jets}`$, where the top $`t`$ is assumed to have spin-1/2. It is valuable to ask whether more definitive evidence for spin-1/2 might be obtained in future experiments at the Tevatron and LHC. We take one look at this question by studying the differential cross section $`d\sigma /dM_{t\overline{t}}`$ in the region near production threshold. Here $`M_{t\overline{t}}`$ is the invariant mass of the $`t\overline{t}`$ pair. We contrast the behavior of $`t\overline{t}`$ production with that expected for production of a pair of spin-0 objects. We are motivated by the fact that in electron-positron annihilation, $`e^++e^{-}\to q+\overline{q}`$, there is a dramatic difference in energy dependence of the cross section in the near-threshold region for quark spin assignments of 0 and 1/2. For definiteness, we compare top quark $`t`$ and top squark $`\stackrel{~}{t}`$ production since a consistent phenomenology exists for top squark pair production, obviating the need to invent a model of scalar quark production. Moreover, top squark decay may well mimic top quark decay. Indeed, if the chargino $`\stackrel{~}{\chi }`$ is lighter than the light top squark, as is true in many models of supersymmetry breaking, the dominant decay of the top squark is $`\stackrel{~}{t}\to b+\stackrel{~}{\chi }^+`$. If there are no sfermions lighter than the chargino, the chargino decays to a W and the lightest neutralino $`\stackrel{~}{\chi }^o`$. 
In another interesting possible decay mode, the chargino decays into a lepton and slepton, $`\stackrel{~}{\chi }^+\to \ell ^+\stackrel{~}{\nu }`$. The upshot is that decays of the top squark may be very similar to those of the top quark, but have larger values of missing energy and softer momenta of the visible decay products. A recent study for Run II of the Tevatron concluded that even with $`4\mathrm{fb}^{-1}`$ of data at the Tevatron, and including the LEP limits on chargino masses, these decay modes remain open (though constrained) for top squarks with mass close to the top quark mass .

## II Calculation and Results

At the energy of the Fermilab Tevatron, production of $`t\overline{t}`$ pairs and of $`\stackrel{~}{t}\overline{\stackrel{~}{t}}`$ pairs is dominated by $`q\overline{q}`$ annihilation, where the initial light quarks $`q`$ are constituents of the initial hadrons. The subprocess proceeds through a single intermediate gluon at leading order in QCD perturbation theory. The analogy to $`e\overline{e}\to q\overline{q}`$ through an intermediate virtual photon is evident. We choose to work at leading order in the $`t\overline{t}`$ and $`\stackrel{~}{t}\overline{\stackrel{~}{t}}`$ partonic cross sections. In Fig. 1(a) we display the partonic cross sections $`\widehat{\sigma }(\sqrt{\widehat{s}})`$ as functions of the partonic subenergy $`\sqrt{\widehat{s}}`$ for the $`q\overline{q}`$ channel, where $`q`$ represents a single flavor of massless quark. We use the nominal value $`m_t=`$ 175 GeV for the mass of the top quark. We select $`m_{\stackrel{~}{t}}=`$ 165 GeV for the mass of the top squark so that the maximum values of the partonic cross sections occur at about the same value of $`\sqrt{\widehat{s}}`$ in $`t\overline{t}`$ and $`\stackrel{~}{t}\overline{\stackrel{~}{t}}`$ production. 
Although the coupling strengths $`g`$, where $`\alpha _s=g^2/(4\pi )`$, are the same in the amplitudes for $`t\overline{t}`$ and $`\stackrel{~}{t}\overline{\stackrel{~}{t}}`$ production, the magnitude of the $`\stackrel{~}{t}\overline{\stackrel{~}{t}}`$ partonic cross section is a factor of $`0.015`$ smaller at the peak. The reduction comes in part from the final-state sum over spins and in part from the momentum dependence of the p-wave coupling for $`\stackrel{~}{t}\overline{\stackrel{~}{t}}`$ production. If we ignore relative normalization, the very different threshold energy dependences of the $`t\overline{t}`$ and $`\stackrel{~}{t}\overline{\stackrel{~}{t}}`$ cross sections in Fig. 1(a) suggest that spin discrimination might be possible if one could study the pair mass distribution, $`d\sigma /dM_{t\overline{t}}`$. However, in hadron reactions, one observes the cross section only after convolution with parton densities. In Fig. 1(b), we display the hadronic cross sections for $`p\overline{p}\to t\overline{t}X`$ and $`p\overline{p}\to \stackrel{~}{t}\stackrel{~}{\overline{t}}X`$ at proton-antiproton center-of-mass energy 2 TeV as a function of pair mass. We use the CTEQ5L parton densities with the factorization scale $`\mu `$ chosen equal to the top quark mass for $`t\overline{t}`$ production, and the top squark mass for $`\stackrel{~}{t}\overline{\stackrel{~}{t}}`$ production. We include the relatively small contributions from the glue-glue initial state. The parton luminosities fall steeply with subenergy, so the tails at high pair mass evident in Fig. 1(a) are cut off sharply in Fig. 1(b). Indeed, the convolution with parton densities sharpens the peak of the $`\stackrel{~}{t}\overline{\stackrel{~}{t}}`$ pair mass distribution significantly and makes it resemble a background that is similar to $`t\overline{t}`$ production. The top squark cross section is approximately 12% of the top cross section.
The smaller value is due in part to the fact that the $`p`$-wave top squark production reduces the partonic cross section at low $`M_{t\overline{t}}`$, where the parton luminosities are largest. At the energy of the CERN LHC, production of $`t\overline{t}`$ pairs and of $`\stackrel{~}{t}\overline{\stackrel{~}{t}}`$ pairs is dominated by the $`gg`$ subprocess, and the threshold behaviors in the two cases do not differ as much as they do for the $`q\overline{q}`$ incident channel. In Fig. 2(a), we show the partonic cross sections $`\widehat{\sigma }(\sqrt{\widehat{s}})`$ as functions of the partonic subenergy $`\sqrt{\widehat{s}}`$ for the $`gg`$ channel. In Fig. 2(b), we display the hadronic cross sections for $`pp\to t\overline{t}X`$ and $`pp\to \stackrel{~}{t}\stackrel{~}{\overline{t}}X`$ at proton-proton center-of-mass energy 14 TeV as a function of pair mass. We include the relatively small contributions from the $`q\overline{q}`$ initial state. After convolution with parton densities, the shape of the $`\stackrel{~}{t}\overline{\stackrel{~}{t}}`$ pair mass distribution is remarkably similar to that of the $`t\overline{t}`$ case. ## III Discussion Based on the shapes and normalization of the cross sections, it is difficult to exclude the possibility that some fraction (on the order of 10%) of top squarks with mass close to 165 GeV is present in the current $`t\overline{t}`$ sample. The invariant mass distribution of the produced objects, $`M_{t\overline{t}}`$, is quite different at the partonic level for the $`q\overline{q}`$ initial state (dominant at the Tevatron), but much less so for the $`gg`$ initial state (dominant at the LHC). However, after one folds with the parton distribution functions, the difference in the $`q\overline{q}`$ channel at the Tevatron is reduced to such an extent that the $`M_{t\overline{t}}`$ distribution is not an effective means to isolate top squarks from top quarks.
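The s-wave versus p-wave contrast described above can be made concrete with the leading-order threshold shape factors: via a single intermediate gluon, the $`q\overline{q}\to t\overline{t}`$ cross section rises as $`\beta (3-\beta ^2)/2`$ near threshold, while scalar pair production rises as $`\beta ^3`$, where $`\beta `$ is the velocity of each heavy particle in the pair rest frame. A minimal numerical sketch (shape factors only; couplings, color factors, and overall normalization are omitted, and both shapes are evaluated at a common mass purely for comparison):

```python
import math

def beta(sqrt_s_hat, m):
    """Velocity of each heavy particle in the pair rest frame."""
    return math.sqrt(max(0.0, 1.0 - 4.0 * m**2 / sqrt_s_hat**2))

def fermion_pair_shape(sqrt_s_hat, m):
    # q qbar -> Q Qbar via one gluon: s-wave, ~ beta*(3 - beta^2)/2
    b = beta(sqrt_s_hat, m)
    return b * (3.0 - b**2) / 2.0

def scalar_pair_shape(sqrt_s_hat, m):
    # q qbar -> squark pair: p-wave, ~ beta^3 suppression
    return beta(sqrt_s_hat, m)**3

m_t = 175.0  # GeV, as in the text
for s in (355.0, 370.0, 400.0, 500.0):
    print(s, fermion_pair_shape(s, m_t), scalar_pair_shape(s, m_t))
```

Just above threshold the scalar shape is strongly suppressed relative to the fermion shape, which is the partonic-level difference visible in Fig. 1(a) before convolution with the parton densities washes it out.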
Ironically, the good agreement of the absolute rate for $`t\overline{t}`$ production with theoretical expectations would seem to be the best evidence now for the spin-1/2 assignment in the current Tevatron sample. A promising technique to isolate a top squark with mass close to $`m_t`$ would be a detailed study of the momentum distribution of the top quark decay products (presumably in the top quark rest frame). One could look for evidence of a chargino resonance in the missing transverse energy and charged lepton momentum, or for unusual energy or angular distributions of the decay products owing to the different decay chains. One could also look for deviations from the expected correlation between angular distributions of decay products and the top spin. As a concrete example of an analysis of this type, in Fig. 3 we present the distribution in the invariant mass $`X`$ of the bottom quark and charged lepton, with $`X^2=(p_b+p_{\mathrm{}^+})^2`$, where the bottom quark and lepton are decay products of either a top quark with $`m_t=175`$ GeV or a top squark $`\stackrel{~}{t}\to \stackrel{~}{\chi }^+b\to W^+\stackrel{~}{\chi }^0b\to \mathrm{}^+\nu _{\mathrm{}}\stackrel{~}{\chi }^0b`$, with $`m_{\stackrel{~}{t}}=165`$ GeV, $`m_{\stackrel{~}{\chi }^+}=130`$ GeV, $`m_{\stackrel{~}{\chi }^0}=40`$ GeV, and $`m_b=5`$ GeV. The $`X`$ distribution is a measure of the degree of polarization of the $`W`$ boson in top quark decay, and the figure shows that the different dynamics responsible for top squark decay result in a very different distribution, peaked at much lower $`X`$. The areas under the curves are normalized to the inclusive $`t\overline{t}`$ and $`\stackrel{~}{t}\overline{\stackrel{~}{t}}`$ rates at the Tevatron and LHC, respectively. At the LHC there is relatively more top squark in the sample, and thus the difference is more prominent.
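The variable $`X`$ is straightforward to compute from reconstructed four-momenta, $`X=\sqrt{(p_b+p_{\mathrm{}^+})^2}`$. A small illustrative sketch (the momenta below are hypothetical and chosen only so the $`b`$ four-vector has $`m_b\approx 5`$ GeV; they are not taken from the analysis):

```python
import math

def invariant_mass(p1, p2):
    """Invariant mass sqrt((p1+p2)^2) of two four-vectors (E, px, py, pz), in GeV."""
    E = p1[0] + p2[0]
    px = p1[1] + p2[1]
    py = p1[2] + p2[2]
    pz = p1[3] + p2[3]
    return math.sqrt(max(0.0, E**2 - px**2 - py**2 - pz**2))

# Hypothetical momenta (GeV): a b quark with m_b ~ 5 GeV, a massless lepton.
p_b = (30.0, 0.0, 0.0, 29.58)
p_lep = (20.0, 0.0, 0.0, -20.0)
print(invariant_mass(p_b, p_lep))
```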
In the simple demonstration of Fig. 3, potentially important effects are ignored, such as cuts to extract the $`t\overline{t}`$ signal from its backgrounds, detector resolution and efficiency, and ambiguities in identifying the correct $`b`$ with the corresponding charged lepton from a single decay. Detailed simulations would be required to determine explicitly how effective this variable would be in extracting a top squark sample from top quark events. Nevertheless, such techniques, combined with the large $`t\overline{t}`$ samples at the Tevatron Run II and LHC, should prove fruitful in ruling out the possibility of a top squark with mass close to the top quark mass, or alternatively, in discovering a top squark hidden in the top sample. ## Acknowledgments Work in the High Energy Physics Division at Argonne National Laboratory is supported by the U.S. Department of Energy, Division of High Energy Physics, under Contract W-31-109-ENG-38.
# Systematic effects in the interpretations of Cluster X-ray Temperature functions ## 1. Introduction Clusters of galaxies are the largest known virialized gravitational systems in the Universe. According to our working hypothesis of structure formation, the cold dark matter (CDM) model, clusters form via gravitational collapse of high density peaks of primordial density fluctuations. The distribution and evolution of these peaks strongly depend on the CDM model parameters. We use the following parameterization for our CDM models: $`\mathrm{\Omega }_0`$, $`\mathrm{\Omega }_b`$, $`\mathrm{\Omega }_\mathrm{\Lambda }`$, $`h`$ (matter density, baryon density, and cosmological constant, all in units of the critical density at $`z`$ = 0; $`h`$ is the Hubble constant in units of $`100\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$). Semi-analytical methods, like the Press-Schechter approximation, use some version of a spherical collapse model to predict the distribution of collapsed objects and their masses as a function of redshift. Assuming a physical model for the clusters and their intracluster medium, one can translate cluster virial masses to X-ray luminosity, or to the temperature of the intracluster gas, which can be compared to observations. We use the temperature function of clusters in our study. The most popular method is to assume a power law approximation for the power spectra of CDM models, use the Press-Schechter mass function (PSMF) to predict the abundance, and then derive the virial temperature to get the intracluster gas temperature, or integrate over formation epochs to determine the mass-temperature conversion. This method has the power spectrum normalization and the power law exponent of the mass variance on cluster scales as free parameters; it determines the matter density and has no dependence on the Hubble constant.
We use COBE normalized spectra of CDM models to derive the mass variance and the Press-Schechter mass function to derive the distribution of cluster masses. This method returns the matter density as a function of the Hubble constant, thus one cannot derive either parameter separately. We use four different methods to obtain theoretical temperature functions. We integrate over formation epochs in methods A and B but not in methods C and D. We take into account the collapsed fraction of objects in methods A and C but not in B and D. Method A thus takes both effects into account and method D neither. For each method (A, B, C, and D) we use a grid of CDM model parameters, $`0.2<\mathrm{\Omega }_m<0.9`$ and $`0.2<h<0.9`$, and calculate the temperature functions assuming open and flat CDM cosmologies. We compare these theoretical temperature functions to data and determine the best fit models by minimizing the corresponding $`\chi ^2`$. ## 2. Outline of the methods The power law approximation for the mass variance assumes a power exponent $`\alpha `$, and derives the mass variance as $`\sigma (M)=\sigma _8(M/M_{15})^\alpha `$, where $`M_{15}=M/(10^{15}M_{\odot })`$, and $`M`$ is the mass we will identify with the virial mass in the PSMF. It is commonly assumed that the power spectrum can then be approximated by a power law with an exponent $`\alpha =(n_{PS}+3)/6`$, but that is true only if the power spectrum could be approximated by a single power law on all scales (note the integral in equation 1). That is not true for CDM models; therefore we quote the mass variance power exponents on cluster scales for power law approximations.
Instead of the power law approximation we use COBE normalized power spectra of CDM models (Hu and Sugiyama 1996) and obtain the mass variance from a numerical integral $$\sigma _{R(M)}^2=(1/2\pi ^2)\int P(k)W^2(kR)k^2dk,$$ (1) where $`P(k)`$, $`W(kR)`$, $`R(M)`$, and $`k`$ are the power spectrum, the filter function in Fourier space, the radius of filtering, and the co-moving wavenumber. We use the standard PSMF corrected for the collapsed fraction $`f_c`$, which gives the fraction of matter in collapsed objects: $`n(z,M)dM=(f_c/f_c^{PS})n_{PS}(z,M)dM`$, where $`f_c^{PS}`$ is the Press-Schechter collapsed fraction (Martel and Shapiro 1999). In order to obtain the temperature function, we assume that clusters form over a period of time and integrate over formation epochs: $`n_T(z,M)=\int _z^{\infty }F(M,z_f,z)\frac{dM}{dT}(T,z_f,z)dz_f`$, where $`F(M,z_f,z)`$ depends on $`\sigma (M)`$, $`\delta _c(z)`$, and $`n(z_f,M)`$ (Kitayama and Suto 1996). We used the spherical collapse model virial temperature (Eke et al. 1998): $$M(T)=M_{15}\left[\frac{\beta kT[keV]}{1.38keV(z+1)}\right]^{\frac{3}{2}}\left[\frac{\mathrm{\Omega }(z)}{\mathrm{\Omega }(0)\mathrm{\Delta }_c(z)}\right]^{\frac{1}{2}}.$$ (2) We use $`\beta =1`$ as suggested by numerical simulations (Eke et al. 1998). ## 3. Results and Conclusions In Figure 1a we show the resulting best fit temperature functions using the power law approximation of Donahue and Voit (1999) and Blanchard et al. (1999), as we reconstructed them following their methods. We used Horner et al. (1999)’s compilation of data taken from Edge et al. (1990), Henry and Arnaud (1991), and Markevitch (1998) (squares, diamonds, and triangles). The solid and dash-dotted lines represent the best fit flat model of Donahue and Voit, $`\mathrm{\Omega }_m=0.27`$, $`\sigma _8=0.73`$, $`\alpha =0.13`$ (they integrated over cluster formation epoch), and the same best fit model but not integrated over cluster formation epoch.
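The mass variance of equation (1) is a one-dimensional quadrature once $`P(k)`$ and the filter are specified. The sketch below evaluates it with the standard spherical top-hat window and a flat $`P(k)=\mathrm{const}`$ spectrum chosen purely for illustration (it is not the COBE-normalized CDM spectrum used in our fits); for that choice $`\sigma ^2R^3`$, which provides a simple sanity check:

```python
import math

def tophat_window(x):
    """Fourier transform of a spherical top-hat filter, W(kR)."""
    if x < 1e-6:
        return 1.0
    return 3.0 * (math.sin(x) - x * math.cos(x)) / x**3

def sigma2(R, P, kmin=1e-4, kmax=1e2, n=20000):
    """Equation (1): sigma^2(R) = (1/2 pi^2) * integral P(k) W^2(kR) k^2 dk,
    via log-spaced trapezoidal integration. P is any callable power spectrum."""
    lkmin, lkmax = math.log(kmin), math.log(kmax)
    total = 0.0
    prev_k = prev_f = None
    for i in range(n + 1):
        k = math.exp(lkmin + (lkmax - lkmin) * i / n)
        f = P(k) * tophat_window(k * R)**2 * k**2
        if prev_k is not None:
            total += 0.5 * (f + prev_f) * (k - prev_k)
        prev_k, prev_f = k, f
    return total / (2.0 * math.pi**2)

# Illustrative flat spectrum (NOT the COBE-normalized CDM spectrum of the paper):
P = lambda k: 1.0
print(sigma2(8.0, P))
```

Halving the filter radius should raise $`\sigma ^2`$ by roughly a factor of 8 for this test spectrum, which the quadrature reproduces to the sub-percent level set by the finite integration limits.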
Blanchard et al.’s best fit flat model, $`\mathrm{\Omega }_m=0.735`$, $`\sigma _c=0.623`$, $`\alpha =0.18`$, is represented by a long dashed line. The short dashed line represents Blanchard et al.’s best fit temperature function, but integrated over cluster formation epoch. Open model temperature functions (not shown) are very similar to those of the flat models. The difference between temperature functions from methods with and without integration over cluster formation epoch is significant even with existing data. The large difference between the density parameters derived by Donahue and Voit and by Blanchard et al. is due to the different normalizations of the $`M(T)`$ relation and not to integration over cluster formation, which can cause a change of less than 0.1 in the density parameter (cf. next paragraph). Figure 1b shows temperature functions of the best fit flat CDM model (best fit using method A) with $`\mathrm{\Omega }_m=0.39`$, $`\mathrm{\Lambda }=0.61`$, $`h=0.5`$, together with temperature functions for this best fit model with and without taking the collapsed fraction into account and/or integrating over cluster formation epochs (our methods B, C, and D). The data are the same as in Figure 1a. The solid, long dashed, short dashed, and dash-dotted lines represent temperature functions using our methods A, B, C, and D. Open models behave similarly, so we do not show them here. The smallest effect is the correction for the collapsed fraction of objects, which is about the size of the error bars of the data and thus an important effect (compare methods A and B). A larger difference is caused by integration over formation epoch (methods A and C). If one fits models without integrating and/or without taking the collapsed fraction into account (methods B, C, and D), one gets matter densities $`\mathrm{\Omega }_m=0.36`$, $`\mathrm{\Omega }_m=0.34`$, and $`\mathrm{\Omega }_m=0.31`$ (assuming $`h=0.5`$).
Not taking either effect into account causes a change of about 0.1 in the derived matter density (compare results using methods A and D). As we can see, the temperature function is very sensitive to CDM model parameters if one uses a full CDM treatment, and thus a precise determination of the matter density is possible if the Hubble constant is known. We should keep in mind, however, that the normalization of the $`M(T)`$ relation, for example, causes a much larger error in determining $`\mathrm{\Omega }_m`$. Since this method does not allow us to separate the density parameter from the Hubble constant, the result is a best fit function of the two. We can make use of the results from the power law approximation, which gives the best fit density parameter, and check whether the corresponding Hubble constant is reasonable using our methods. We find that our best fit CDM models yield the same matter density as the best fit models of Donahue and Voit (1999) and Blanchard et al. (1999) if we use $`h=0.55`$ and $`h=0.35`$, respectively. Thus Blanchard et al.’s method is only marginally compatible with ours. Figure 1b shows that systematic errors from the interpretation of the temperature function are larger than the error bars on even the existing data. We conclude that, if the Hubble constant is known, a comparison between the observationally derived cluster temperature function and those derived from a full CDM treatment may yield an accurate determination of the density parameter (with an error of less than 0.05); however, this can be done only if we find other ways to derive a correct theoretical model to interpret the data. ## ACKNOWLEDGEMENTS We thank N. Aghanim and R. Mushotzky for valuable discussions, and Don Horner for providing us his compilation of the temperature data. We thank N. Aghanim for her hospitality at the Institute of Astronomy of Orsay University. ## REFERENCES Blanchard, A., et al., 1999, astro-ph/9908037 Donahue, M., and Voit, G. M., 1999, preprint, astro-ph/9907333 Edge et al., 1990, MNRAS, 245, 559 Eke, V. R., Cole, S., Frenk, C. S., and Henry, J. P., 1998, MNRAS, 298, 1145 Horner, D., et al., 1999, in preparation Hu, W., and Sugiyama, N., 1996, ApJ, 471, 542 Markevitch, M., 1998, ApJ, 504, 27 Martel, H., and Shapiro, P. R., 1999, astro-ph/9903425 Henry, J. P., and Arnaud, K. A., 1991, ApJ, 372, 410 Voges et al., 1999, A&A, 349, 389
# Properties of Gamma-Ray Burst Time Profiles Using Pulse Decomposition Analysis ## 1 Introduction There has been considerable recent progress in the study of gamma-ray bursts. Much of this results from the detection of bursts by BeppoSAX with good locations that have allowed the detection of counterparts at other wavelengths. This has allowed measurements of redshifts that have firmly established that these bursts are at cosmological distances. However, only a few redshifts are known, so there is still much work to be done in determining the mechanisms that produce gamma-ray bursts. Investigation of time profiles and spectra can shed new light on this subject. The vast majority of gamma-ray bursts that have been observed have been observed *only* by BATSE. These data can be classified into three major types: burst locations with relatively large uncertainties, temporal characteristics, and spectral characteristics. Here, we shall examine temporal characteristics of bursts, along with some spectral characteristics. The temporal structure of gamma-ray bursts exhibits very diverse morphologies, from single simple spikes to extremely complex structures. So far, the only clear division of bursts based on temporal characteristics that has been found is the bimodal distribution of the $`T_{90}`$ and $`T_{50}`$ intervals, which are measures of burst durations (Kouveliotou et al., 1993; Meegan et al., 1996a). In order to characterize burst time profiles, it is useful to be able to describe them using a small number of parameters. Many burst time profiles appear to be composed of a series of discrete, often overlapping, pulses, often with a *fast rise, exponential decay* (FRED) shape (Norris et al., 1996). These pulses have durations ranging from a few milliseconds to several seconds. The different pulses might, for example, come from different spatial volumes in or near the burst source.
Therefore, it may be useful to decompose burst time profiles in terms of individual pulses, each of which rises from background to a maximum and then decays back to background levels. Here, we have analyzed gamma-ray burst time profiles by representing them in terms of a finite number of pulses, each of which is described by a small number of parameters. The BATSE data used for this purpose are described in Section 2. The basic characteristics of the time profiles based on the above model are described in Section 3 and some of the correlations between these characteristics are described in Section 4. (Further analysis of these and other correlations and their significance is discussed in an accompanying paper, Lee et al. (2000).) Finally, a brief discussion is presented in Section 5. ## 2 The BATSE Time-to-Spill Data The BATSE Time-to-Spill (TTS) burst data type records the times required to accumulate a fixed number of counts, usually 64, in each of four energy channels (Meegan, 1991). These time intervals give fixed multiples of the reciprocals of the average count rates during the spill intervals. There has been almost no analysis done using the TTS data because they are less convenient to use with standard algorithms than the BATSE Time-Tagged Event (TTE) data or the various forms of binned BATSE data. The TTS data use the limited memory on board the CGRO more efficiently than do the binned data types because at lower count rates, they store spills less frequently, with each spill having the same constant fractional statistical error. On the other hand, the binned data types always store binned counts at the same intervals, so that at low count rates the binned counts have a large fractional statistical error. The variable time resolution of the TTS data ranges from under 50 ms at low background rates to under 0.1 ms in the peaks of the brightest bursts.
In contrast, the finest time resolution available for binned data is 16 ms for the medium energy resolution (MER) data, and then only for the first 33 seconds after the burst trigger. The TTS data can store up to 16,384 spill events (over $`10^6`$ counts) for each energy channel, and this is almost always sufficient to record the complete time profiles of bright, long bursts. This is unlike the TTE data, which are limited to 32,768 counts in all four energy channels combined. For short bursts, the TTE data have finer time resolution than the TTS data, because it records the arrival times of individual counts with 2 $`\mu `$s resolution. Furthermore, the TTE data also contain data from before the burst trigger time. One reason why this is useful is that some of the shortest bursts are nearly over by the time burst trigger conditions have been met, so the TTS and MER data aren’t very useful for these bursts. Figure 1 shows a portion of the time profile of BATSE trigger number 1577 (GRB 4B 920502B) that contains a spike with duration shorter than 1 ms. The data with the finest time resolution, the time-tagged event (TTE) data, end long before the spike occurs, so the TTS data give the best representation of the spike. The binned data with the finest time resolution, the MER data with 16 ms bins, are unavailable for this burst, as are the PREB and DISCSC data with 64 ms bins. For a Poisson process, the individual event times in the TTE data and the binned counts in the various binned data types follow the familiar exponential and the Poisson distributions, respectively. The spill times recorded in the TTS data follow the *gamma distribution*, which is the distribution of times needed to accumulate a fixed number of independent (Poisson) events occurring at a given rate. 
The probability of observing a spill time $`t_s`$ is $$P(t_s)=\frac{t_s^{N-1}R^Ne^{-Rt_s}}{\mathrm{\Gamma }(N)},$$ (1) where $`N`$ is the number of events per spill and $`R`$ is the rate of individual events. This probability distribution is closely related to the Poisson distribution, which gives the number of events occurring within fixed time intervals for the same process of independent individual events, such as photon arrivals. ### 2.1 The Pulse Model and the Pulse Fitting Procedure We now describe the pulse model used to fit GRB time profiles, and the pulse-fitting procedure. The pulse model we use is the phenomenological pulse model of Norris et al. (1996). In this model, each pulse is described by five parameters with the functional form $$I(t)=A\mathrm{exp}\left(-\left|\frac{t-t_{\text{max}}}{\sigma _{r,d}}\right|^\nu \right),$$ (2) where $`t_{\text{max}}`$ is the time at which the pulse attains its maximum, $`\sigma _r`$ and $`\sigma _d`$ are the rise and decay times, respectively, $`A`$ is the pulse amplitude, and $`\nu `$ (the “peakedness”) gives the sharpness or smoothness of the pulse at its peak. Pulses can, and frequently do, overlap. Stern et al. (1997) have used the same functional form to fit averaged time profiles (ATPs) of entire bursts. We have developed an interactive pulse-fitting program that can automatically find initial background level and pulse parameters using a Haar wavelet denoised time profile (Donoho, 1992), and allows the user to add or delete pulses graphically. The program then finds the parameters of the pulses and of a background with a constant slope by using a maximum-likelihood fit for the gamma distribution (equation 1) that the TTS spill times follow (Lee et al., 1996, 1998; Lee, 2000).
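The gamma distribution of equation (1) is easy to check by direct simulation: a spill time is the sum of $`N`$ exponential inter-arrival times of a Poisson process of rate $`R`$, so the mean spill time is $`N/R`$ and the fractional statistical error per spill is the constant $`1/\sqrt{N}`$. A minimal sketch (a constant, hypothetical background rate only, with no pulses):

```python
import random

def spill_times(rate, n_per_spill=64, n_spills=5000, seed=1):
    """Simulate TTS spill times for a constant-rate Poisson process.

    Each spill time is a sum of n_per_spill exponential inter-arrival
    times, i.e. a single draw from the gamma distribution of equation (1)."""
    rng = random.Random(seed)
    return [sum(rng.expovariate(rate) for _ in range(n_per_spill))
            for _ in range(n_spills)]

# Hypothetical constant background of 2000 counts/s:
ts = spill_times(rate=2000.0)
mean = sum(ts) / len(ts)
print(mean)  # close to the gamma-distribution mean N/R = 64/2000 = 0.032 s
```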
The data that we use in this paper are the TTS data for all gamma-ray bursts in the BATSE 3B catalog (Meegan et al., 1996b) up to trigger number 2000, covering the period from 1991 April 21 through 1992 October 22, in all channels that are available and show time variation beyond the normal Poisson noise of the background. We fit each channel of each burst separately and obtained 574 fits for 211 bursts, with a total of 2465 pulses. In many cases, the data for a burst showed no activity in a particular energy channel, only the normal background counts, so there were no pulses to fit. This occurred most frequently in energy channel 4. In other cases, the data for a burst contained telemetry gaps or were completely missing in one or more channels, making it impossible to fit those channels. This procedure is likely to introduce selection biases, which can be quantified through simulation. To determine these biases, we simulated a set of bursts with varying numbers of pulses with distributions of pulse and background parameters based on the observed distributions in actual bursts. We generated independent counts according to the simulated time profiles to create simulated TTS data, which we subjected to the same pulse-fitting procedure used for the actual BATSE data. The detailed results of this simulation are discussed in the Appendix. We will contrast the results from the actual data with those from the simulations where necessary and relevant. ### 2.2 Count Rates and Time Resolution The time resolution of the TTS data can be determined from the fitted background rates and the amplitudes of the individual pulses (discussed later in Subsection 3.3), at both the background levels and at the peaks of the pulses. 
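The arithmetic behind the time-resolution comparisons that follow is simple: with 64 counts per spill, the mean spill duration at total count rate $`R`$ is $`64/R`$, so the TTS resolution is finer than the 64 ms DISCSC binning whenever the rate exceeds 1000 counts/s, and finer than the 16 ms MER binning above 4000 counts/s. As a trivial sketch:

```python
def tts_resolution(rate, n_per_spill=64):
    """Mean TTS spill duration in seconds at a given total count rate (counts/s)."""
    return n_per_spill / rate

# Break-even rates against the fixed binned data types:
print(tts_resolution(1000.0))  # 0.064 s, the DISCSC bin width
print(tts_resolution(4000.0))  # 0.016 s, the MER bin width
```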
Table 1, columns (a) show the percentage of bursts in our fitted sample where the time resolutions at background levels and at the peak of the highest amplitude pulse are finer than 64 ms and 16 ms, the time resolutions of the more commonly used DISCSC and MER data, respectively. The background rates are taken at the time of the burst trigger, and ignore the fitted constant slope of the background. The rates at the peaks of the highest amplitude pulses include the background rates at the peak times of the pulses calculated with the background slopes. However, these rates ignore overlapping pulses, so the actual time resolution will be finer since the actual count rates will be higher. Note that even at background levels, the TTS data always have finer time resolution than the DISCSC data, except in energy channel 4, where the DISCSC data have finer time resolution for 32% of the bursts in our sample. Table 1, columns (b) show the percentage of individual pulses where the TTS data have time resolution finer than 16 ms and 64 ms at the pulse peaks. Again, the count rates include the fitted background rates at the peak times of the pulses but ignore overlapping pulses. For all individual pulses, the TTS data have finer time resolution at their peaks than the DISCSC data. ## 3 General Characteristics of Pulses in Bursts In this section we describe characteristics of pulses in individual bursts and in the sample as a whole. ### 3.1 Numbers of Pulses The number of pulses in a fit ranges from 1 to 43, with a median of 2 pulses per fit in energy channels 1, 2, and 4, and a median of 3 pulses per fit in energy channel 3. (See Figure 2.) The numbers of pulses per fit follow the trend of pulse amplitudes, which we shall see tend to be highest in energy channel 3, followed in order by channels 2, 1, and 4, respectively.
This appears to occur because higher amplitude pulses are easier to identify above the background, and is consistent with the simulation results shown in the Appendix. Norris et al. (1996) have used the pulse model of equation 2 to fit the time profiles of 45 bright, long bursts. They analyzed the BATSE PREB and DISCSC data types, which contain four-channel discriminator data with 64 ms resolution beginning 2 seconds before the burst trigger. For their selected sample of bursts, they fitted an average of 10 pulses per burst, with no time profiles consisting of only a single pulse. This number is considerably higher than the mean number of pulses per fit for our sample of bursts, probably because their sample was selected for high peak flux and long duration, which makes it easier to resolve more pulses. ### 3.2 Matching Pulses Between Energy Channels To see how attributes of pulses within a burst vary with energy, it is necessary to match pulses in different energy channels. Although burst time profiles generally have similar features in different energy channels, this matching is not straightforward, since the number of pulses fitted to a burst time profile is very often different between energy channels. We have used a simple automatic algorithm for matching pulses between adjacent energy channels. This algorithm begins by taking all pulses from the channel with fewer pulses. It then takes the same number of pulses of highest amplitude from the other channel, and matches them in time order with the pulses from the channel with fewer pulses. For example, the time profiles of BATSE trigger number 1577 were fitted with nine pulses in energy channel 3, and only four pulses in channel 4. This algorithm simply matches all four pulses in channel 4 in time order with the four highest amplitude pulses in channel 3. 
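The matching algorithm just described is simple enough to sketch directly. In this illustration each pulse is reduced to a hypothetical (t_max, amplitude) pair; the real fits of course carry all five parameters of equation 2:

```python
def match_pulses(ch_a, ch_b):
    """Match pulses between two adjacent energy channels: keep all pulses from
    the channel with fewer pulses, take the same number of highest-amplitude
    pulses from the other channel, and pair the two sets in time order.
    Pulses are (t_max, amplitude) tuples."""
    few, many = (ch_a, ch_b) if len(ch_a) <= len(ch_b) else (ch_b, ch_a)
    # Keep the len(few) highest-amplitude pulses from the richer channel...
    keep = sorted(many, key=lambda p: p[1], reverse=True)[:len(few)]
    # ...then pair both sets in time order.
    few_sorted = sorted(few, key=lambda p: p[0])
    keep_sorted = sorted(keep, key=lambda p: p[0])
    return list(zip(few_sorted, keep_sorted))

# Made-up pulses echoing the example in the text: 4 in channel 3, 2 in channel 4.
ch3 = [(0.5, 900.0), (1.2, 4000.0), (2.0, 300.0), (3.1, 2500.0)]
ch4 = [(1.3, 1200.0), (3.0, 800.0)]
print(match_pulses(ch3, ch4))
```

Here the two channel-4 pulses are paired, in time order, with the two highest-amplitude channel-3 pulses, mirroring the nine-versus-four example given for trigger 1577.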
While this method will not always correctly match individual pulses between energy channels and will result in broad statistical distributions, it should still preserve central tendencies and yield useful statistical information. ### 3.3 Brightness Measures of Pulses: Amplitudes and Count Fluences The amplitude of a pulse, parameter $`A`$ in equation 2, is the maximum count rate within the pulse, and measures the observed intensity of the pulse, which depends on the absolute intensity of the pulse at the burst source and the distance to the burst source. The amplitudes of the fitted pulses ranged from 40 counts/second to over 500,000 counts/second. (See Table 2 and Figure 3.) Pulses tend to have the highest amplitudes in energy channel 3, followed in order by channels 2, 1, and 4, in agreement with Norris et al. (1996). The central 68% of the pulse amplitude distributions span a range of about one order of magnitude in each of the four energy channels, with a somewhat greater range in channel 3. We will see in the Appendix that the fitting procedure tends to miss pulses with low amplitudes, so that the distributions shown may be strongly affected by selection effects in the fitting procedure. The amplitude of the highest amplitude pulse in a burst is an approximation to the instantaneous peak flux above background of that burst in that energy channel. The peak flux is often used as an indicator of the distance to the burst source. Since pulses can overlap, the highest pulse amplitude can be less than the actual background-subtracted peak flux. The BATSE burst catalogs give background-subtracted peak fluxes for 64, 256, and 1024 ms time bins in units of photons/cm<sup>2</sup>/second, for which effects such as the energy acceptances of the detectors and the orientation of the spacecraft and hence the detectors relative to the source have been accounted for and removed. 
The BATSE burst catalog also lists raw peak count rates that are not background-subtracted or corrected for any of the effects described, averaged over 64, 256, and 1024 ms time bins in the second most brightly illuminated detector for each burst. These peak count rates are primarily useful for comparison with the BATSE event trigger criteria. In some bursts, the highest pulses are considerably narrower than the shortest time bins used to measure peak flux in the BATSE burst catalog. For these bursts, these peak fluxes will be lower than the true peak flux, and the fitted pulse amplitudes are likely to be a better measure of the true peak flux. The distributions of the highest pulse amplitudes are shown in Table 3 and Figure 4. Since BATSE selectively triggers on events with high peak flux, the distributions must be strongly affected by the trigger criteria. Figure 5 shows the number of pulses in each fit plotted against the amplitudes of all of the pulses comprising each fit. It shows that in fits with more pulses, the minimum pulse amplitude, which can be seen from the left boundary of the distribution, tends to be higher. This could result in part from intrinsic properties of the burst sources, but may also result at least in part from a selection effect: In a complex time profile with many overlapping pulses, low amplitude pulses, which have poor signal-to-noise ratios, will be more difficult to resolve, while in a less complex time profile, they will be easier to resolve. This hypothesis appears to be confirmed by the simulation results shown in the Appendix. Table 4, columns (a) give the Spearman rank-order correlation coefficients, commonly denoted as $`r_s`$, for the joint distribution of pulse amplitudes and numbers of pulses in the corresponding bursts shown in Figure 5, as well as the probability that a random data set of the same size with no correlation between the two variables would produce the observed value of $`r_s`$. 
It shows strong positive correlations between pulse amplitudes and the number of pulses in the fit for all energy channels. These correlations appear to be stronger than those arising in the fits to simulations shown in Table 16, columns (a).

The area under the light curve of a pulse gives the total number of counts contained in the pulse, which is its count fluence $`C`$. It is given in terms of the pulse parameters and the gamma function by

$$C=A\int_{-\infty}^{\infty}I(t)\,dt=A\frac{\sigma _r+\sigma _d}{\nu }\mathrm{\Gamma }\left(\frac{1}{\nu }\right).$$ (3)

The count fluence is a measure of the observed integrated luminosity of the pulse, which depends on the total number of photons emitted by the source within the pulse and the distance to the burst source. We will see in the Appendix that the fitting procedure tends to miss pulses with low count fluences.

Figure 6 shows the number of pulses in each fit versus the count fluences of the individual pulses. It shows that in bursts containing more pulses, the individual pulses tend to contain fewer counts. We shall see in the next section that pulses tend to be narrower in more complex bursts. This result for count fluences implies that the tendency for pulses to be narrower is stronger than the tendency for pulses to have higher amplitudes in more complex bursts. Table 4, columns (b) show that the corresponding negative correlations between pulse count fluences and numbers of pulses per fit are statistically significant in energy channels 1 and 2, but not in channels 3 and 4. The fits to simulations (Figure 17 and Table 16, columns (b)) do not show the same tendency, so this most likely is not caused by selection effects in the pulse-fitting procedure.

### 3.4 Pulse Widths and Time Delays

Timescales in gamma-ray bursts are likely to be characteristic of the physical processes that produce them.
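The closed form of equation (3) above can be checked numerically, assuming the pulse shape is the two-sided stretched exponential consistent with equations (3) and (4); the function names and parameter values below are illustrative:

```python
import math

def count_fluence(A, sigma_r, sigma_d, nu):
    """Closed-form pulse count fluence, equation (3)."""
    return A * (sigma_r + sigma_d) / nu * math.gamma(1.0 / nu)

def numeric_fluence(A, sigma_r, sigma_d, nu, dt=1e-3, span=60.0):
    """Direct numerical integral of the assumed two-sided
    stretched-exponential pulse light curve, for comparison."""
    total = 0.0
    t = -span
    while t < span:
        sigma = sigma_r if t <= 0.0 else sigma_d
        total += A * math.exp(-(abs(t) / sigma) ** nu) * dt
        t += dt
    return total

# For nu = 1 the gamma function is 1 and the fluence is A*(sigma_r+sigma_d):
print(count_fluence(100.0, 1.0, 2.0, 1.0))  # -> 300.0

# The closed form agrees with brute-force integration:
closed = count_fluence(500.0, 0.4, 1.1, 1.2)
print(abs(closed - numeric_fluence(500.0, 0.4, 1.1, 1.2)) / closed < 0.01)  # -> True
```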
However, since some, and possibly all, bursts are produced at cosmological distances, all observed timescales will be affected by cosmological time dilation, and won’t represent the physical timescales at the sources.

#### 3.4.1 Pulse Widths

The most obvious timescale that appears in the pulse decomposition of gamma-ray burst time profiles is the pulse width, or duration. We shall measure the duration, or width, of a pulse using its full width at half maximum (FWHM), which is given by

$$T_{\text{FWHM}}=(\sigma _r+\sigma _d)(\mathrm{ln}2)^{\frac{1}{\nu }}.$$ (4)

The distributions of the pulse widths, which are shown in Figure 7 and columns (a) of Table 5, peak near one second in all energy channels, with no sign of the bimodality seen in the total burst durations mentioned above. Pulses tend to be narrower (shorter) at higher energies.

The narrowing of pulses in higher energy channels can also be measured from the ratios of pulse widths of matched pulses in adjacent energy channels, as shown in Table 6. We can test the hypothesis that pulses tend to be narrower at higher energies by computing the probability that the observed numbers of pulse width ratios less than 1 would occur by chance if ratios less than 1 and greater than 1 were equally probable. This probability can be computed from the binomial distribution, and is shown in the last column of Table 6. The table shows less narrowing than a simple comparison of median pulse widths from Table 5 would suggest, though it also shows that the hypothesis that pulses *do not* become narrower at higher energies is strongly excluded between channels 1 and 2 and between channels 2 and 3. Qualitatively similar trends have been shown to be present in individual pulses (Norris et al., 1996) and in composite pulse shapes of many bursts (Link et al., 1993; Fenimore & Bloom, 1995). There are, however, some quantitative differences.
For example, we find less narrowing at higher energies; the pulse width ratios tend to be closer to 1 between energy channels 3 and 4 than for the lower energy channels (although the statistics are poorer, as for anything involving channel 4), which is the opposite of the tendency found by Norris et al. (1996).

We can use the Kolmogorov-Smirnov test to determine whether the distributions of pulse widths are the same between adjacent energy channels. These results are also shown in Table 6. This test shows significant differences between the distributions of pulse widths of matched pulses in adjacent energy channels. The fact that pulse widths decrease monotonically with energy, while the signal-to-noise ratios of the different energy channels increase in the order 3, 2, 1, 4, implies that the narrowing is caused by the burst production mechanism itself, rather than being an artifact of signal-to-noise.

#### 3.4.2 Pulse Widths and Numbers of Pulses

Figure 8 and Table 4, columns (c) show the relation between the number of pulses per burst and the widths of the pulses: pulses tend to be narrower in bursts with more pulses. This may be an intrinsic property of GRBs, or it may be a selection effect arising because narrower pulses overlap less with adjacent pulses and are therefore easier to resolve, so that more pulses tend to be identified in bursts with narrower pulses. It may also be a side effect of correlations of other burst and pulse characteristics with the number of pulses per burst and the pulse widths. Table 4 shows strong negative correlations between the numbers of pulses per fit and the pulse widths. The fits to simulations shown in Figure 19 and Table 16, columns (c) do not show the same tendency.
This suggests that the negative correlation between the number of pulses in each fit and the pulse widths seen in the fits to actual bursts does not result from selection effects in the pulse-fitting procedure, but is intrinsic to the burst production mechanism, or arises from other effects.

#### 3.4.3 Time Delays Between Energy Channels

Table 7, columns (a) show the differences, or time delays, between the peak times $`t_{\text{max}}`$ of all pulses matched between adjacent energy channels. There is a significant tendency for individual pulses to peak earlier at higher energies. This has been previously observed, and described as a hard-to-soft spectral evolution of the individual pulses (Norris et al., 1986, 1996). The time delays found here are greater than those found by Norris et al. (1996), who found an average pulse peak time delay between adjacent energy channels of $`20`$ ms. Comparing the peak times of the highest amplitude pulses in each fit between adjacent energy channels also shows a significant tendency for bursts to peak earlier at higher energies. (See Table 7, columns (b).) The time delays between energy channels observed here and elsewhere are likely to result from intrinsic properties of the burst sources.

### 3.5 Pulse Shapes: Asymmetries and the Peakedness $`\nu `$

Although the pulse model uses separate rise and decay times as its basic parameters, it is often more natural to consider the widths and asymmetries of pulses, which give information equivalent to the rise and decay times. The ratio of pulse rise time to decay time, $`\sigma _r/\sigma _d`$, is a convenient measure of the asymmetry of a pulse, and depends only on the pulse shape. The asymmetry ratios cover a very wide range of values, but there is a clear tendency for pulses to have slightly shorter rise times than decay times. (See Figure 9.) Table 8 shows that the hypothesis that pulses are symmetric is strongly excluded in energy channels 2 and 3.
The binomial probability isn’t computed for all pulses in all energy channels combined, because pulses cannot be considered to be independent between energy channels. We also see that the degree of asymmetry isn’t significantly different between the energy channels. Norris et al. (1996) found far greater asymmetry, with average values of $`\sigma _d/\sigma _r`$ (the inverse of the ratio used here) ranging from 2 to 3 for their selected sample of bursts, and with about 90% of pulses having shorter rise times than decay times.

The relation of the peakedness parameter $`\nu `$ to physical characteristics of gamma-ray burst sources is far less clear than for the other pulse attributes. Nevertheless, it does give information that can be used to compare the shapes of different pulses. The peakedness $`\nu `$ has a median value near 1.2 in all energy channels, so that pulses tend to have shapes between an exponential, for which $`\nu =1`$, and a Gaussian, for which $`\nu =2`$. (See Figure 10 and columns (b) of Table 5.) Stern et al. (1997) use the functional form of equation 2 to fit averaged time profiles of many bursts rather than individual constituent pulses, and find that $`\nu \approx 1/3`$ for the *averaged time profiles*.

## 4 Correlations Between Pulse Characteristics

Correlations between different characteristics of pulses, or the lack thereof, may reveal much about gamma-ray bursts that the distributions of the individual characteristics cannot. Some correlations may arise from intrinsic properties of the burst sources, while others may result from the differing distances to the sources. The first kind of correlation may be present among pulses of individual bursts or among the whole population of bursts, while the second kind will not be present among pulses of individual bursts.
In order to distinguish between these two kinds of effects, it is useful to examine correlations of pulse characteristics both between different bursts, and between pulses within individual bursts. It is simplest to find correlations between characteristics of all pulses, but such correlations would combine both kinds of effects, and the statistics would be weighted in favor of bursts containing more pulses. It is also possible to select a single pulse from each burst, and find correlations between the characteristics of these pulses from burst to burst in order to look for effects arising from the distances to burst sources. However, if the correlations are taken using the single highest amplitude or highest fluence pulse from each burst, then they could still be affected by correlations of pulse characteristics within individual bursts. For example, consider a situation where amplitudes and durations of pulses within individual bursts are correlated, and where pulse amplitudes and durations follow a common distribution for all bursts. In such a case, if we select the single highest amplitude pulse from each burst, we would find a spurious correlation between highest pulse amplitude and duration between different bursts. Correlation results which compare and contrast the cosmological and intrinsic effects will be discussed in greater detail in the accompanying paper Lee et al. (2000). Here, we describe our method and some other correlation results.

One way to find correlations of pulse characteristics within individual bursts is to calculate a correlation coefficient for each burst and examine the distribution of the degrees of correlation, for example to see if the correlation coefficients were positive for a large majority of bursts. The Spearman rank-order correlation coefficient is used for this purpose here.
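The Spearman coefficient can be computed without external libraries. The sketch below uses the midrank convention for ties; the function names are ours:

```python
import math

def ranks(xs):
    """Average (midrank) ranks of a sequence, handling ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = 0.5 * (i + j) + 1.0  # mean of ranks i+1 .. j+1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(xs, ys):
    """Spearman rank-order correlation coefficient r_s: the Pearson
    correlation of the two rank sequences."""
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / math.sqrt(vx * vy)

# A perfectly monotone relation gives r_s = 1:
print(spearman([1, 2, 3, 4], [10, 20, 30, 40]))  # -> 1.0
```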
When using the Spearman rank-order correlation coefficient, the coefficients for the individual bursts are often not statistically significant because the number of pulses in each burst is not large, even though the coefficients for the different bursts may be mostly positive or mostly negative. We can still test the hypothesis that there is no correlation: in the absence of any correlation, we would expect equal numbers of bursts with positive and negative correlations, so the probability that the observed numbers of bursts with positive and negative correlations could occur by chance is given by the binomial distribution. This is the method used here. Because it ignores the strengths of the individual correlations, it is more sensitive to a weak correlation that affects a large number of bursts than to a strong correlation that affects only a small number of bursts.

### 4.1 Spectral Characteristics

The data that we are using have only very limited spectral information: four energy channels. We can investigate spectral characteristics by using the *hardness ratios* of individual pulses. The hardness ratio of a pulse between two specified energy channels is the ratio of the fluxes or fluences of the pulse in the two energy channels. Although the actual numerical values of the hardness ratios depend on the somewhat arbitrary boundaries of the energy channels, the values can be compared between different pulses, and between different bursts. There have been several claims of a correlation between peak or average hardness ratios and durations among bursts, with shorter bursts being harder, and there has been some analysis of its cosmological significance. Here we investigate similar correlations for bursts, and for pulses in individual bursts.
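The binomial test described above can be sketched as follows; `sign_test_pvalue` is our name for this helper, which returns the two-sided probability of a split between positive and negative correlations at least as lopsided as the one observed, under the null hypothesis that both signs are equally likely:

```python
import math

def sign_test_pvalue(n_pos, n_neg):
    """Two-sided binomial (sign test) probability of observing a
    positive/negative split at least this lopsided if positive and
    negative correlations are equally likely."""
    n = n_pos + n_neg
    k = max(n_pos, n_neg)
    tail = sum(math.comb(n, i) for i in range(k, n + 1)) / 2.0 ** n
    return min(1.0, 2.0 * tail)

# Example: 20 bursts with positive correlations vs. 5 with negative
# ones is very unlikely under the no-correlation hypothesis:
print(round(sign_test_pvalue(20, 5), 4))  # -> 0.0041
```

As noted in the text, this test weighs only the signs of the per-burst coefficients, not their magnitudes.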
#### 4.1.1 Pulse Widths

Table 9, columns (a) show the correlations between the pulse amplitude hardness ratios and the pulse widths for the highest amplitude pulse in each burst. The pulse widths used are arithmetic means of the widths in the two adjacent energy channels that the hardness ratios are taken between; *e.g.*, hardness ratios between channels 2 and 3 are compared with pulse widths averaged over channels 2 and 3. In all pairs of adjacent energy channels, the highest amplitude pulse has a slight tendency to be narrower when the burst is harder, as measured using peak flux, but this does not appear to be statistically significant, except possibly between channels 3 and 4. This may be a signature of weak redshift effects, whereby the higher the redshift, the softer the spectrum and the longer the duration.

Table 10, columns (a) show the correlations between pulse amplitude hardness ratios and pulse widths within bursts. Positive and negative correlations are almost equally probable. We conclude, therefore, that there is no significant tendency for longer or shorter duration pulses to have harder or softer spectra, as measured using peak flux.

#### 4.1.2 Intervals Between Pulses

Table 9, columns (b) show the correlations between the pulse amplitude hardness ratios and the intervals between the two highest amplitude pulses in each fit. The time intervals used are also averaged over the two adjacent energy channels. In all pairs of adjacent energy channels, the two highest amplitude pulses have a slight tendency to be closer together when the burst is harder (as expected from cosmological redshift effects), as measured using peak flux, but this is not statistically significant, except possibly between channels 1 and 2.

#### 4.1.3 Pulse Amplitudes

Table 9, columns (c) show the correlations between the pulse amplitude hardness ratio and the pulse amplitudes for the highest amplitude pulse in each burst.
If the peak luminosity of the highest amplitude pulse is a standard candle or has a narrow distribution, the effects of cosmological redshift would introduce a correlation between hardness ratio and amplitude. In all pairs of adjacent energy channels, the highest amplitude pulse has a slight tendency to be stronger when the burst is harder, as measured using peak flux, but this is not statistically significant (except possibly between channels 2 and 3), indicating that the distribution of the above-mentioned luminosity is broad.

Table 10, columns (b) show the correlations between pulse amplitude hardness ratios and pulse amplitudes within bursts. The pulse amplitudes are summed over the two adjacent energy channels that the hardness ratios are taken between. There appears to be no statistically significant tendency for higher amplitude pulses to have harder or softer spectra, although slightly more bursts show a positive correlation (higher amplitude pulses are harder) than a negative correlation (higher amplitude pulses are softer). This points to a weak or negligible intrinsic correlation between these quantities.

#### 4.1.4 Count Fluence Hardness Ratios

In what follows, we carry out the same tests using the hardness ratio measured by count fluence instead of amplitude, for bursts and for pulses within bursts.

Table 11, columns (a) show the correlations between the total burst count fluence hardness ratios and the pulse widths for the highest amplitude pulses of the bursts. A positive correlation (harder bursts having shorter durations) would be expected if the pulse total energy had a narrow intrinsic distribution. There is no consistent or statistically significant tendency for the highest amplitude pulse in each burst to be wider or narrower when the burst is harder or softer, as measured using fluence.

Table 12, columns (a) show the correlations between pulse count fluence hardness ratios and pulse widths within bursts.
In channels 1 and 2, more bursts show negative correlations between the two quantities, *i.e.*, longer duration pulses tend to have softer spectra, as measured using count fluence; this effect, which may be statistically significant, indicates the presence of an intrinsic correlation. There are no statistically significant effects between channels 2 and 3 or between channels 3 and 4.

Table 11, columns (b) show the correlations between the total burst count fluence hardness ratios and the intervals between the two highest amplitude pulses in each fit. In all pairs of adjacent energy channels, the two highest amplitude pulses have a slight tendency to be closer together when the burst is harder (as expected from cosmological effects), but this is not statistically significant.

Table 11, columns (c) show the correlations between the total burst count fluence hardness ratios and the total burst count fluences in each fit. There is no consistent or statistically significant tendency for harder or softer bursts to contain fewer or more counts.

Table 12, columns (b) show the correlations between pulse count fluence hardness ratios and pulse count fluences within bursts. The pulse count fluences are summed over the two adjacent energy channels that the hardness ratios are taken between. In channels 1 and 2, more bursts show negative correlations, *i.e.*, higher fluence pulses tend to have softer spectra, but again, this intrinsic effect appears weak, and there is no significant effect in the other pairs of energy channels.

*In summary, there seems to be little intrinsic correlation between the spectra, as measured by hardness ratio, and other pulse characteristics between bursts and among pulses.
There may be weak (statistically not very significant) evidence for the trends expected from cosmological redshift effects.*

### 4.2 Time Evolution of Pulse Characteristics Within Bursts

One class of correlations between pulse characteristics within bursts is that between the pulse peak time and the other pulse characteristics. These indicate whether certain pulse characteristics tend to evolve in a particular way during the course of a burst. Again, we have used the method described in the previous section, calculating the Spearman rank-order correlation coefficients for the individual bursts, and testing the observed numbers of bursts with positive and negative correlations using the binomial distribution.

#### 4.2.1 Pulse Asymmetry Ratios

Table 13, columns (a) show the number and fraction of bursts (in each channel) with a negative correlation between the pulse asymmetry ratio $`\sigma _r/\sigma _d`$ and peak time, *i.e.*, in which the pulse asymmetry decreases with time. Fits for which the calculated Spearman rank-order correlation coefficient was 0, indicating no correlation, were counted as half decreasing and half increasing in order to calculate, using the binomial distribution, the probability of this occurring randomly if pulse asymmetry ratios within bursts are equally likely to increase as to decrease with time. The probability was not calculated for all energy channels combined, because fits to the same burst in different energy channels cannot be considered independent, so the binomial distribution cannot be used.

Pulse asymmetry ratios more often decrease than increase with time during bursts, except in energy channel 4, which has the fewest pulses. This effect appears to be statistically significant in channel 3, and possibly in channels 1 and 2. The fits to simulations (see Table 17) show no tendency for pulse asymmetries to increase or decrease within bursts.
This indicates that the observed tendency for pulse asymmetry ratios to decrease with time within actual bursts does not arise from selection effects in the pulse-fitting procedure, so any such tendency would be intrinsic to gamma-ray bursts.

#### 4.2.2 Pulse Rise and Decay Times and Pulse Widths

When we examine the evolution of the rise and decay times separately, instead of their ratios, and the evolution of the pulse widths, we find nearly equal and opposite trends of decreasing rise times $`\sigma _r`$ and increasing decay times $`\sigma _d`$ as the burst progresses. This gives rise to the evolution of the pulse asymmetry ratios described above, although the statistical significance of the evolution of the rise and decay times separately is weaker than for the asymmetry ratios. The decrease in rise times is possibly a slightly stronger effect than the increase in decay times. However, the combined effect of these two trends is that there appears to be no statistically significant evolution of pulse widths. (See Table 13, columns (b).) This agrees with the results of Ramirez-Ruiz & Fenimore (1999b, a), who found no evidence that pulse widths increase or decrease with time when fitting a power-law time dependence to a small sample of complex bursts selected from the bright, long bursts fitted by Norris et al. (1996).

#### 4.2.3 Spectra

It has been previously reported that bursts tend to show a hard-to-soft spectral evolution, which we can test by seeing how the hardness ratios of individual pulses vary with time. Table 14, columns (a) show that the pulse amplitude hardness ratios have a slight tendency to decrease with time during bursts for all three pairs of adjacent energy channels. However, with the available number of bursts composed of multiple pulses in adjacent energy channels, this tendency is statistically insignificant.
#### 4.2.4 Pulse Count Fluences

When we consider the time evolution of the pulse count fluence hardness ratios within bursts, we find no tendency for the hardness ratio of energy channel 2 to channel 1 to increase or decrease, a possibly significant tendency for the hardness ratio of energy channel 3 to 2 to decrease with time, and a statistically insignificant tendency for the hardness ratio of channel 4 to 3 to increase with time. (See Table 14, columns (b).)

#### 4.2.5 Other Pulse Characteristics

We have conducted similar tests for the other pulse characteristics, namely the pulse amplitude, the peakedness parameter $`\nu `$, and the pulse count fluence, and found that none shows any clearly statistically significant tendency to increase or decrease with time within bursts (Lee, 2000); *e.g.*, we find no tendency for later pulses within a burst to be stronger or weaker than earlier pulses.

*In summary, we find no significant correlations between the peak times of pulses in bursts and any other pulse characteristics except possibly the pulse asymmetry ratio, so that the pulses appear to result from random and independent emission episodes.*

## 5 Discussion

Decomposing burst time profiles into a superposition of discrete pulses gives a compact representation that appears to contain their important features, so this seems to be a useful approach for analyzing their characteristics. Our pulse decomposition analysis confirms a number of previously reported properties of gamma-ray burst time profiles using a larger sample of bursts, with generally finer time resolution, than in prior studies. These properties include the tendencies for the individual pulses comprising bursts to have shorter rise times than decay times; for the pulses to have shorter durations at higher energies; and for the pulses to peak earlier at higher energies, which is sometimes described as a hard-to-soft spectral evolution of individual pulses.
Pulse rise times tend to decrease during the course of a burst, while pulse decay times tend to increase. When examining pulse widths, or durations, these two effects nearly balance each other; the apparent tendency for pulse widths to decrease during the course of a burst appears to be statistically insignificant. The ratios of pulse rise times to decay times tend to decrease during the course of a burst. The evolution of the pulse asymmetry ratios does not arise from selection effects in the pulse-fitting procedure, so it is most likely intrinsic to the bursters.

No other pulse characteristics show any time evolution within bursts, although it is possible that there is non-monotonic evolution; for example, a pulse characteristic may tend to be greater at the beginning and end of a burst and smaller in the middle, and the tests used here wouldn’t be sensitive to this. In particular, it doesn’t appear that either pulse amplitudes or pulse count fluences have any tendency to increase or decrease during the course of a burst. Also, later pulses in a burst don’t tend to be spectrally harder or softer than earlier pulses, although there is spectral softening *within* most pulses. The spectra of pulses within a burst also don’t appear to be harder or softer for stronger or weaker pulses, or for longer or shorter duration pulses. One may therefore conclude that the pulses in a burst arise from random and independent emission episodes, such as those expected in the internal episodic shock model, rather than from external shock models, where the presence of distinguishable pulses must be attributed to inhomogeneities in the interaction of the blast wave shock with the clumpy interstellar medium.

When examining similar correlations between the attributes of some characteristic pulses from burst to burst, we find weak and tantalizing evidence which may be due to cosmological redshift effects. In the accompanying paper Lee et al.
(2000) we describe the correlation studies which can distinguish between trends due to cosmological redshifts and intrinsic trends.

We thank Jeffrey Scargle and Jay Norris for many useful discussions. This work was supported in part by Department of Energy contract DE–AC03–76SF00515.

## Appendix A: Testing for Selection Effects

There are a number of ways in which the pulse-fitting procedure may introduce selection effects into correlations between pulse characteristics. One is that the errors in the different fitted pulse parameters may be correlated. Another is that the pulse-fitting procedure may miss some pulses by not identifying them above the background noise. Still another is that overlapping pulses may be identified as a single broader pulse. In order to determine the importance of these selection effects, we have generated a sample of artificial burst time profiles using the pulse model with randomly generated but known pulse parameters, fitted the simulated bursts using the same procedure used for the actual burst data, and compared the simulated and fitted pulse characteristics (Lee, 2000).

### A.1 Numbers of Bursts and Pulses

A total of 286 bursts were generated, with only one energy channel for each burst. For many of these, the limit of $`2^{20}`$ counts was reached before the 240 second limit, which almost never occurred in the actual BATSE TTS data. These simulated bursts contained a total of 2671 pulses with peak times before the limits of $`2^{20}`$ counts and 240 seconds, while the fits to the simulated bursts contained a total of only 1029 pulses. Of these, 223 of the simulated bursts and 198 of the fits to the simulations contained more than one pulse. (See Figure 11 and Table 15.) Note that in the fits to the actual BATSE data, the largest number of fits containing more than one pulse was 116, for energy channel 3, so the simulated data set is larger.
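A minimal version of such a simulation can be sketched as follows, assuming the two-sided stretched-exponential pulse shape; the background level, bin size, and parameter values are illustrative choices, not those used for the actual simulations (Lee, 2000):

```python
import math
import random

def poisson(rng, mean):
    """Poisson deviate via Knuth's method, splitting large means into
    chunks so that exp(-mean) does not underflow."""
    total = 0
    while mean > 0.0:
        m = min(mean, 30.0)
        mean -= m
        L = math.exp(-m)
        p, k = 1.0, -1
        while p > L:
            p *= rng.random()
            k += 1
        total += k
    return total

def simulate_burst(pulses, background=1500.0, dt=0.064, t_end=240.0,
                   seed=1):
    """Generate an artificial burst time profile: a sum of model
    pulses (assumed two-sided stretched-exponential shape) over a
    constant background, with Poisson counting noise in each bin.
    Each pulse is a tuple (A, t_max, sigma_r, sigma_d, nu)."""
    rng = random.Random(seed)
    counts = []
    for b in range(round(t_end / dt)):
        t = (b + 0.5) * dt  # bin center
        rate = background
        for A, t_max, s_r, s_d, nu in pulses:
            sigma = s_r if t <= t_max else s_d
            rate += A * math.exp(-(abs(t - t_max) / sigma) ** nu)
        counts.append(poisson(rng, rate * dt))
    return counts

# One bright pulse on a constant background:
profile = simulate_burst([(5000.0, 20.0, 0.5, 1.5, 1.2)])
print(len(profile))  # -> 3750
```

A real test of the fitting procedure would then run the same pulse-identification code on `profile` and compare the recovered parameters with the generated ones.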
Figure 12 shows the number of pulses fitted versus the number of pulses originally generated for each simulated burst. The greatest differences between the fitted and the simulated numbers of pulses tend to occur in the most complex bursts. Figure 13 compares the numbers of pulses per fit between the simulations and the fits to simulations. Most (54%) of the fits to simulated bursts contain fewer pulses than the initial simulations, and for nearly all of the remaining simulated bursts, the numbers of pulses are the same in the initial simulations and in the fits to simulations. The fits to simulations have a mean of 15 fewer pulses than the initial simulations, and a median of 1 fewer pulse. The fits to simulations have a geometric mean of 0.63 times as many pulses as the initial simulations, and a mean of 0.80 times as many pulses.

### A.2 Brightness Measures of Simulated Pulses

Figure 14 shows the distribution of pulse amplitudes in the original simulations and in the fits to simulations. The fitting procedure has a strong tendency to miss low amplitude pulses. However, comparing this with Figure 3, we see that in the fits to actual BATSE bursts, the fitting procedure found pulses with considerably lower amplitudes than in the fits to simulated bursts.

Figure 15 shows the number of pulses in each burst plotted against the amplitudes of all of the pulses comprising each fit. In the simulations, there are no correlations between pulse amplitudes and the number of pulses in the time profile, because the pulse amplitudes were generated independently of the number of pulses in each burst. In the fits to the simulations, pulse amplitudes tend to be higher in bursts containing more pulses. This must result from the selection effect discussed in Section 3.3: it is easier to identify more pulses when they are stronger.
Table 16, columns (a) shows that even though there is no correlation between pulse amplitudes and the number of pulses in the initial simulated data, the fitting procedure introduces a strong positive correlation between these quantities; the tendency to miss low amplitude pulses is greater in more complex bursts.

Figure 16 shows the distribution of pulse count fluences in the original simulations and in the fits to simulations. The fitting procedure has a strong tendency to miss pulses with low count fluences, similar to what we have seen for low amplitude pulses. Figure 17 and Table 16, columns (b) compare the number of pulses in each time profile with the count fluences of the individual pulses. They show no tendency for pulses to contain fewer or more counts in bursts with more pulses, in either the initial simulations (by design) or the fits to simulations. Unlike for pulse amplitudes, the tendency to miss low count fluence pulses appears to be independent of burst complexity. This differs from the results seen in the fits to actual bursts, where bursts containing more pulses tended to have pulses with lower count fluences. (See Figure 6 and Table 4, columns (b).) This may explain why the $`2^{20}`$ count limit for the TTS data was frequently reached before the 240 second time limit in the simulated bursts, but rarely in the actual bursts: the total count fluence increases linearly with the number of pulses in the simulated bursts, but less rapidly in the actual BATSE bursts.

### A.3 Pulse Widths

Figure 18 shows the distribution of pulse widths in the original simulations and in the fits to simulations. The pulses in the fits to simulations tend to be slightly longer in duration than in the original simulations, but applying the Kolmogorov-Smirnov test to the two distributions shows that they are not significantly different; the probability that they are drawn from the same distribution is 0.39.
This agrees with what we have seen in Figures 14 and 16, that the selection effects of the fitting procedure for pulse amplitudes and for pulse count fluences are similar. Figure 19 and Table 16, columns (c) compare the number of pulses in each time profile with the widths of the individual pulses. They show no tendency for pulses to be wider or narrower in bursts with more pulses, in either the simulations or the fits to the simulations. ### A.4 Time Evolution of Pulse Characteristics Within Bursts In the fits to actual BATSE data, it was found that pulse asymmetry ratios tended to decrease over the course of a burst. (See Table 13.) Table 17 shows the correlations between pulse asymmetry ratio and peak times within bursts for the simulations and the fits to simulations. It shows no tendency for positive or negative correlations in either the simulations or the fits to simulations.
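The two-sample Kolmogorov–Smirnov comparison applied above to the width distributions reduces to the statistic sketched below (a pure-Python version of our own; converting the statistic into the quoted significance level via the asymptotic KS distribution is omitted):

```python
def ks_two_sample(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the largest absolute
    difference between the empirical CDFs of samples a and b."""
    def ecdf(xs, t):
        # Fraction of the sample with values <= t.
        return sum(1 for x in xs if x <= t) / len(xs)
    # The maximum ECDF difference is attained at a sample point.
    points = sorted(set(a) | set(b))
    return max(abs(ecdf(a, t) - ecdf(b, t)) for t in points)
```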
no-problem/0002/astro-ph0002117.html
ar5iv
text
# The Effelsberg Search for Pulsars in the Galactic Centre ## Acknowledgments. We thank the receiver group of the MPIfR for building the superb 6cm-receiver and the corresponding filterbanks. The digital-group, in particular Thomas Kugelmeier, made significant contributions in the development of the new backend POESY.
no-problem/0002/cond-mat0002032.html
ar5iv
text
# Phase ordering and roughening on growing films ## Abstract We study the interplay between surface roughening and phase separation during the growth of binary films. Already in 1+1 dimensions, we find a variety of different scaling behaviors, depending on how the two phenomena are coupled. In the most interesting case, related to the advection of a passive scalar in a velocity field, nontrivial scaling exponents are obtained in simulations. PACS numbers: 68.35.Rh, 05.70.Jk, 05.70.Ln, 64.60.Cn Thin solid films are grown for a variety of technological applications, using molecular beam epitaxy (MBE) or vapor deposition. In order to create materials with specific electronic, optical, or mechanical properties, often more than one type of particle is deposited. When the particle mobility in the bulk is small, surface configurations become frozen in the bulk, leading to anisotropic structures that reflect the growth history, and are different from bulk equilibrium phases. Characterizing structures generated during composite film growth is not only of technological importance, but represents also an interesting and challenging problem in statistical physics. In this paper, we examine the growth of binary films through vapor deposition, and study some of the rich phenomena resulting from the interplay of phase separation and surface roughening. Simple models for layer by layer growth assume either that the probability that an incoming atom sticks to a given surface site depends on the state of the neighboring sites in the layer below , or that the top layer is fully thermally equilibrated . Assuming that the bulk mobility is zero, once a site is occupied, its state does not change any more. If the growth rules are invariant under the exchange of the two particle types, the phase separation is in the universality class of an equilibrium Ising model. 
Correlations perpendicular to the growth direction are characterized by the critical exponent $`\nu `$ of the Ising model, and those parallel to the growth direction by the exponent $`\nu z_m`$, with $`z_m`$ being the dynamical critical exponent of the Ising model. However, the layer by layer growth mode underlying these simple models is unstable, and the growing surface becomes rough. In many cases the fluctuations in the height $`h(𝐱,t)`$, at position $`𝐱`$ and time $`t`$ are self-affine, with correlations $$\left\langle \left[h(𝐱,t)-h(𝐱^{\prime },t^{\prime })\right]^2\right\rangle \sim |𝐱-𝐱^{\prime }|^{2\chi }g\left(t/|𝐱-𝐱^{\prime }|^{z_h}\right),$$ (1) where $`\chi `$ is the roughness exponent, and $`z_h`$ is a dynamical scaling exponent. A computer model with local sticking probabilities that allows for a rough surface was introduced in . In 1+1 dimensions, the authors find phase separation into domains (with sizes consistent with the Ising model), and a very rough surface profile with sharp minima at the domain boundaries. We may ask the following questions: (1) Are the roughness exponents different at the phase transition point? (2) Are the critical exponents modified on a rough surface? We shall demonstrate that the coupling of roughening and phase separation leads to a rich phase diagram, and to nontrivial critical exponents already in 1+1 dimensions. To characterize phase separation, we introduce an order parameter $`m(𝐱,t)`$, which is the difference in the densities of the two particle types at the surface at position $`𝐱`$ and time $`t`$. The interplay between the fluctuations in $`m`$, and the height $`h`$ is captured phenomenologically by the coupled Langevin equations, $`\partial _th`$ $`=`$ $`\nu \nabla ^2h+{\displaystyle \frac{\lambda }{2}}(\nabla h)^2+{\displaystyle \frac{\alpha }{2}}m^2+\zeta _h,`$ (2) $`\partial _tm`$ $`=`$ $`K(\nabla ^2m+rm-um^3)+a\nabla h\cdot \nabla m+bm\nabla ^2h`$ (4) $`+{\displaystyle \frac{c}{2}}m(\nabla h)^2+\zeta _m.`$ Here, we have included the lowest order (potentially relevant) terms allowed by the symmetry $`m\to -m`$. 
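A crude way to explore coupled equations of this type directly is explicit Euler integration on a 1D ring. The sketch below is our own discretization with illustrative coefficient values; the paper's actual results come from a lattice growth model, not from integrating the Langevin equations:

```python
import math, random

def step(h, m, dt=0.05, nu=1.0, lam=1.0, alpha=0.5,
         K=1.0, r=0.1, u=1.0, a=0.5, b=0.5, c=0.5,
         noise=0.1, rng=random):
    """One explicit Euler step of KPZ-type height h coupled to a
    non-conserved order parameter m on a periodic 1D lattice (dx = 1).
    All coefficient values are placeholders, not fitted to anything."""
    L = len(h)
    hn, mn = h[:], m[:]
    for i in range(L):
        ip, im = (i + 1) % L, (i - 1) % L
        lap_h = h[ip] - 2 * h[i] + h[im]      # discrete Laplacian of h
        lap_m = m[ip] - 2 * m[i] + m[im]      # discrete Laplacian of m
        gh = (h[ip] - h[im]) / 2              # centered gradient of h
        gm = (m[ip] - m[im]) / 2              # centered gradient of m
        hn[i] = (h[i]
                 + dt * (nu * lap_h + 0.5 * lam * gh ** 2
                         + 0.5 * alpha * m[i] ** 2)
                 + math.sqrt(dt) * noise * rng.gauss(0, 1))
        mn[i] = (m[i]
                 + dt * (K * (lap_m + r * m[i] - u * m[i] ** 3)
                         + a * gh * gm + b * m[i] * lap_h
                         + 0.5 * c * m[i] * gh ** 2)
                 + math.sqrt(dt) * noise * rng.gauss(0, 1))
    return hn, mn
```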
Equation (2) is the Kardar-Parisi-Zhang (KPZ) equation for surface growth, plus a coupling to the order parameter. Equation (4) is the time dependent Landau–Ginzburg equation for a (non-conserved) Ising model, with three different couplings to the height fluctuations. The Gaussian, delta-correlated noise terms, $`\zeta _h`$ and $`\zeta _m`$, mimic the effects of faster degrees of freedom. A different set of equations was proposed by Léonard and Desai for phase separation during MBE. Their equations reflect the MBE conditions of random particle deposition (in contrast to sticking probabilities that depend on the local environment), and a conserved order parameter which evolves by surface diffusion. They do not include the KPZ nonlinearity. Computer simulations of corresponding 1+1 dimensional systems are presented in . Dimensional analysis indicates that the couplings appearing in Eqs. (2-4) are relevant, and may lead to new universality classes. We shall leave the renormalization group analysis of these equations to a more technical paper, and focus here instead on computer simulations in 1+1 dimensions. The quantities evaluated in the computer simulations are the height correlation function in Eq. (1), and the order parameter correlation functions perpendicular and parallel to the growth direction. Allowing for the possibility of different dynamic exponents, $`z_m`$ and $`z_h`$, for the order parameter and the height variables, we fit to the scaling forms $`G_m^{(x)}(x-x^{\prime })`$ $`\equiv `$ $`\langle m(x,t)m(x^{\prime },t)\rangle `$ (5) $`=`$ $`|x-x^{\prime }|^{\eta -1}g_m(|x-x^{\prime }|/\xi )`$ (6) $`G_m^{(t)}(t-t^{\prime })`$ $`\equiv `$ $`\langle m(x,t)m(x,t^{\prime })\rangle `$ (7) $`=`$ $`|t-t^{\prime }|^{(\eta -1)/z_m}g_m(|t-t^{\prime }|/\xi ^{z_m}).`$ (8) Our simulations were done using a “brick wall” restricted solid-on-solid model (see Fig. 1). Starting from a flat surface, particles are added such that no overhangs are formed, and with the center of each particle atop the edge of two particles in the layer below. 
We use two types of particles, $`A`$ and $`B`$ (black and grey in the figures). The probability for adding a particle to a given surface site, and the rule for choosing its color, depend on the local neighborhood. When $`A`$ particles are more likely to be added to $`A`$ dominated regions, and vice versa, the particles tend to phase separate and form domains. In this case, the order parameter correlation length $`\xi `$ is of the order of the average domain width. By changing the growth rules, it is possible to study cases in which some (or all) of the couplings $`a`$, $`b`$, $`c`$, and $`\alpha `$ vanish, and thus to gain a complete picture of the different ways in which the height and the order parameter influence each other. The decoupled case, $`\alpha =a=b=c=0`$, is implemented using the following updating rules: A surface site is chosen at random, and a particle is added if it does not generate overhangs. Its color is then chosen depending on the colors of its two neighbors in the layer below. If both neighbors have the same color, the newly added particle takes this color with probability $`1-p`$, and the other color with probability $`p`$ (where $`p`$ is much smaller than 1). If the two neighbors have different colors, the new particle takes either color with probability 1/2. Neighbors within the same layer are not considered. Since the probability of adding a particle to a given surface site does not depend on its color, the surface grows exactly as with only one particle type, and is characterized by the KPZ exponents $`\chi =1/2`$, and $`z_h=3/2`$. Similarly, the choice of particle color at a given site is not affected by the height profile. The height profile determines only the moment at which a site is added, since the no-overhang condition requires both neighbors in the previous layer to be occupied. 
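The decoupled color-update rule can be sketched layer by layer, ignoring the height profile (which, as argued above, decouples from the colors). The function names and the periodic-lattice convention below are ours:

```python
import random

def grow_layer(colors, p, rng):
    """Color update of the decoupled brick-wall model: the child at
    site i sits on parents i and i+1 (periodic boundary).  Equal
    parents pass their color on with probability 1 - p; unequal
    parents give either color with probability 1/2."""
    L = len(colors)
    new = []
    for i in range(L):
        a, b = colors[i], colors[(i + 1) % L]
        if a == b:
            new.append(a if rng.random() > p else 1 - a)
        else:
            new.append(rng.choice((0, 1)))
    return new

def domain_walls(colors):
    """Number of color changes around the ring (always even)."""
    L = len(colors)
    return sum(colors[i] != colors[(i + 1) % L] for i in range(L))
```

At $`p=0`$ no new wall pairs are created, so the wall count can only stay constant or drop when walls meet and annihilate, mirroring the Glauber dynamics discussed next.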
If we equate layer number with time, domain walls move to the right or left with probability 1/2 during one time unit, and a pair of new domain walls is created with probability $`p`$. This is identical to the Glauber model for a one-dimensional Ising chain with coupling $`J`$ and at temperature $`T`$, with $`p=\mathrm{exp}(-4J/kT)`$. The correlation length $`\xi `$ perpendicular to the growth direction is consequently $`\xi =\mathrm{exp}(2J/kT)=1/\sqrt{p}`$, and the correlation time is $`\tau =\mathrm{exp}(4J/kT)=1/p`$. The dynamical critical exponent for the order parameter is thus $`z_m=2`$. Note that the “time” used for the order parameter (namely layer number) is different from real time, which is for each particle the moment when it is added to the growing surface. However, this difference becomes negligible for sufficiently small $`p`$ since the thickness of the surface over the correlation length, $`\sqrt{\xi }`$, is much smaller than the characteristic time, $`\xi ^2`$, for order parameter fluctuations. Simulations indeed confirm that the order parameter and height evolve completely independently. A typical profile is shown in Fig. 2a; the corresponding scaling analysis conforms to expectations, and is not presented here. The situation $`\alpha >0`$ with $`a=b=c=0`$ can be implemented by updating sites on top of particles of different colors less often by a factor $`r<1`$ compared to sites above particles of the same color. As the order parameter is not affected by the height variable, its dynamics is still the same as that of an Ising model, with $`z_m=2`$. The height profile now has domain boundaries sitting preferentially at its local minima, with mounds forming over domains (see Fig. 2b). This leads to a surface roughness exponent of $`\chi =1`$ on length scales up to $`\xi `$, which is the case studied in. At these scales, changes in the height profile are slaved to domain wall motion, and the dynamic exponent is $`z_h=2`$. 
However, on length scales much larger than $`\xi `$, the KPZ exponents of $`\chi =1/2`$ and $`z_h=3/2`$ are regained. The crossover in the roughness can be described by a scaling form $$\langle [h(x,t)-h(x^{\prime },t)]^2\rangle =|x-x^{\prime }|^2g(|x-x^{\prime }|/\xi ),$$ with a constant $`g(y)`$ for $`y\ll 1`$, and $`g(y)\propto 1/y`$ for $`y\gg 1`$. To mimic the influence of surface roughness on the order parameter (nonzero $`a`$, $`b`$, or $`c`$ in Eqs.(4)), the color of a newly added particle is made dependent not only on those of its two neighbors in the layer below, but also on the colors of its two nearest neighbors on the same layer, if these sites are already occupied. With probability $`1-p`$, the newly added particle takes the color of the majority of its 2, 3, or 4 neighbors, and with probability $`p`$ it assumes the opposite color. If there is a tie, the color is chosen at random with equal probability. The height variable now affects the order parameter in two ways: (1) Domain walls are driven downhill. The reason is that the neighbor on the hillside of a site being updated is more likely to be occupied than the one on the valley side. The newly added particle is thus more likely to have the color on the hillside. (This corresponds to $`a>0`$ in Eq. (4).) (2) New domains are predominantly formed on hilltops. This is because domains on hilltops can expand more easily than those on slopes or in valleys, indicating $`b>0`$ in Eq. (4). Another consequence is that for the same $`p`$, the correlation length $`\xi `$ is much larger than in the decoupled case, as is apparent in Figs.2c,d. For the fully coupled case depicted in Fig.2c we find essentially the same scaling behavior as in Fig.2b, i.e. a height profile slaved to the Glauber dynamics of the domains. The most interesting case, shown in Fig.2d, is when the height profile is independent of the domains ($`\alpha =0`$), evolving with KPZ dynamics, while the order parameter is influenced by the roughness. 
The dynamic exponent $`z_m`$ for the order parameter was first obtained by collapsing the correlation functions using Eqs. (8), as shown in Fig.3. These curves imply that $`\eta =1`$, $`\xi \propto p^{-0.542}`$, and $`\tau \propto \xi ^{z_m}\propto 1/p`$, giving $`z_m\simeq 1/0.542\simeq 1.85`$. The same non-trivial value for $`z_m`$ is obtained by a completely independent measurement of the dynamics of domain coarsening following a quench from a “high temperature” ($`p`$ close to 0.5) to zero temperature ($`p`$=0). Fig. 4 shows the domain density as a function of time for a system of size $`L=16384`$. The resulting $`z_m\simeq 1.85`$, is in agreement with the value from the scaling collapse. The following simple argument fails to provide the exponent $`z_m\simeq 1.85`$. Consider a Langevin equation, $`\dot{x}=\eta (t)`$, for the position $`x`$ of a single domain wall at time $`t`$. Since the motion of the domain wall is strongly influenced by the height profile, the noise $`\eta (t)`$ must have long-range correlations $`\langle \eta (t)\eta (t^{\prime })\rangle =D|t-t^{\prime }|^{-\alpha },`$ reflecting the dynamics of the surface. This choice leads to $`z_m=2`$ for $`\alpha >1`$, and $`z_m=2/(2-\alpha )`$ for $`\alpha <1`$. For a colored noise dominated by the slope fluctuations, $`\alpha =2/3`$ and $`z_m=3/2`$, i.e. the height imposes its characteristic time scale on the order parameter. This would presumably be the case if the domain walls were uniformly distributed along the surface. However, due to their tendency to move downhill, they are preferentially found near valleys. A different scaling of the slope fluctuations in the valleys may be the reason for the nontrivial value of $`z_m`$. Indeed, for short times, before the domain walls have moved to their preferred positions, the exponent $`3/2`$ is seen. The dynamics of domain walls on a growing KPZ surface bears some resemblance to the advection of a passive scalar in a turbulent velocity field, which is characterized by nontrivial exponents and multiscaling . 
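The exponents quoted for the colored-noise argument ($`z_m=2`$ for $`\alpha >1`$, $`z_m=2/(2-\alpha )`$ for $`\alpha <1`$) follow from a short calculation; this is our reconstruction of the standard estimate:

```latex
\langle x^{2}(t)\rangle
  = \int_{0}^{t}\!\int_{0}^{t}\langle \eta(t_{1})\,\eta(t_{2})\rangle\, dt_{1}\,dt_{2}
  = D\int_{0}^{t}\!\int_{0}^{t}|t_{1}-t_{2}|^{-\alpha}\,dt_{1}\,dt_{2}
  \;\propto\; t^{\,2-\alpha}\qquad(\alpha<1),
```

so that $`x\sim t^{(2-\alpha )/2}`$, i.e. $`z_m=2/(2-\alpha )`$. For $`\alpha >1`$ the double integral is dominated by short time separations and grows only linearly in $`t`$, so ordinary diffusion, $`z_m=2`$, is recovered.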
If we neglect interactions between domain walls, and treat them as independent “dust particles” floating on the KPZ surface, the Langevin equation for the particle density $`\rho `$ is $$\partial _t\rho =K\nabla ^2\rho +a(\nabla h\cdot \nabla \rho +\rho \nabla ^2h)+\zeta _\rho .$$ (9) The second term describes the advection of particles along a velocity field $`\stackrel{\rightarrow }{v}=-\nabla h`$. Indeed this transformation maps the KPZ equation into the Burgers equation for a vorticity-free, compressible fluid flow . Equation (9) is a special case of Eq. (4) for $`m`$, with $`r=u=c=0`$, $`b=a`$, and with a conserved noise $`\zeta _\rho `$. (Together with Eq. (2) for the height profile, it is also a special case of the equations used to describe the dynamic relaxation of drifting polymers.) In the remainder, we give the results of computer simulations for this case. The rules for the motion of “dust particles” are identical to those for domain walls. However, each particle is treated as if the others were not present. This means in particular that there is no creation or annihilation of particles. Fig. 5 shows the mean square displacement of a single “dust particle” in a system of size $`L=4096`$. To obtain good statistics, we averaged over 512 independent and noninteracting particles, and used more than 40 runs. The best fit is obtained for $`z_\rho \simeq 1.74`$, distinct from the previous $`z_m\simeq 1.85`$, implying that the exponents depend on whether or not the domain walls (or “dust particles”) are conserved. In contrast to the advection of a passive scalar in a turbulent velocity field, we find no sign of multiscaling. Fig. 6 shows the positions of 1024 independent “dust particles” in a system of length $`L=512`$. While there is some correlation between minima of the surface profile and wall positions, there are also clusters of particles at higher elevations, indicating that particle diffusion is not sufficiently fast to fully adjust the density to the faster changing height profile. 
A fit of the density-density correlation function to $`\langle \rho (x)\rho (0)\rangle \propto 1/x^{2(1-\chi _\rho )}`$ gives an exponent $`\chi _\rho \simeq 0.85`$. In summary, the interplay between surface roughening and phase separation leads to a variety of novel critical scaling behaviors. At one extreme, the height profile adapts to the dynamics of critical domain ordering. At the other, the dynamics of domain wall motion is influenced by the roughness, exhibiting new and nontrivial scaling behaviors. This work was supported by EPSRC (grant No. GR/K79307, for BD), and the National Science Foundation (Grant No. DMR-98-05833, for MK).
no-problem/0002/astro-ph0002052.html
ar5iv
text
# Relativistic Effects in the Pulse Profile of the 2.5 msec X-Ray Pulsar SAX J1808.4-3658 ## 1 Introduction Strong X-ray pulsations with a 2.49 msec period were discovered from SAX J1808.4-3658 in an April 1998 observation with the Rossi X-ray Timing Explorer (Wijnands & van der Klis 1998). The pulsar is a member of an accreting binary system with an orbital period of 2.01 hour and a low-mass companion (Chakrabarty & Morgan 1998). Though similar to other low-mass X-ray binaries in its timing and spectral properties (e.g. Wijnands & van der Klis 1999; Heindl & Smith 1998), SAX J1808.4-3658 is unique for its X-ray pulsations. No other such binary has shown coherent pulsations in its persistent flux despite careful searches (Vaughan et al. 1994b, and references therein). As such, SAX J1808.4-3658 is the fastest rotating accreting neutron star. If the pulsations are due to modulated emission from one hot spot on the neutron star surface, the 2.49 msec period corresponds to an equatorial speed of approximately 0.1c. With these high speeds, SAX J1808.4-3658 offers an excellent system for studying relativistic effects. One such effect may be the observed lag of low-energy photons relative to high-energy photons in the pulse discovered by Cui et al. (1998). Cui et al. (1998) suggest that the lags are due to Comptonization in a relatively cool surrounding medium. Alternatively, the lags may be the result of a relativistic effect: the high-energy photons are preferentially emitted at earlier phases due to Doppler boosting along the line of sight. This possibility was suggested for the similar lags in the 549 Hz oscillations in an X-ray burst of Aql X-1 (Ford 1999), where a simple model showed that the delays roughly match those expected. In the following, we present new measurements of the pulsed emission from SAX J1808.4-3658. We show that the energy-dependent phase lags are equivalent to a hardening pulse profile. 
We model this behavior in terms of a hot spot on the neutron star, including relativistic effects. ## 2 Observations & Analysis We have used publicly available data from the proportional counter array (PCA) on board RXTE in an ‘event’ mode with high time resolution (122$`\mu `$sec) and high energy resolution (64 channels). The observations occurred from April 10 1998 to May 7 1998, when the source was in outburst. We generate folded lightcurves in each PCA channel. This is accomplished with the fasebin tool in FTOOLS version 4.2, which applies all known XTE clock corrections and corrects photon arrival times to the solar system barycenter using the JPL DE-200 ephemeris, yielding a timing accuracy of several $`\mu `$sec (much less than the phase binning used here). As a check, we have applied this method to Crab pulsar data and the results are identical to Pravdo et al. (1997). To produce pulse profiles in the neutron star rest frame, we use the SAX J1808.4-3658 orbital ephemeris found by Chakrabarty & Morgan (1998). An example folded lightcurve is shown in Figure 1 (top) for the observation of April 18 1998 14:05:40 to April 19 1998 00:51:44 UTC. To study the energy spectra at each phase bin, we take the rates at pulse minimum and subtract them from the rest of the data at other phases. This effectively accomplishes background subtraction and eliminates the unpulsed emission which we do not wish to consider. Note, however, that the pulsed emission may have some contribution even at the pulse minimum and this subtraction scheme represents only a best approximation to the true pulse emission. We generate detector response matrices appropriate to the observation date and data mode using pcarsp v2.38, and use XSPEC v.10.0 to fit model spectra. We fit the spectrum here with a simple powerlaw function. Though the function itself is not meant to be a physical description, the powerlaw index provides a good measure of the spectral hardness. 
Fits in each phase bin have reduced $`\chi ^2`$ of 0.7 to 1.6. Including an interstellar absorption does not substantially affect these results. The powerlaw index clearly increases through the pulse phase (Figure 1, bottom), i.e. the spectrum evolves from hard to soft. We also fit the profiles of the folded lightcurves in each channel using Fourier functions at the fundamental frequency and its harmonics. From these fits we determine the phase lag in each channel relative to the fits in some baseline channel range. Results for the 18 April 1998 observation are shown in Figure 2 as solid symbols. Note that negative numbers indicate that high-energy photons precede low-energy photons. We are also able to measure lags in the first harmonic, and find that they are opposite in sign to the fundamental, i.e. low-energy photons precede high-energy photons in the first harmonic. No lags are measurable in the other harmonics. Another way to measure energy-dependent phase lags is by Fourier cross-correlation analysis. This is the method used for SAX J1808.4-3658 by Cui et al. (1998) and for other timing signals as well, e.g. kilohertz QPOs (e.g. Kaaret et al. 1999). For a description of cross correlation analysis see e.g. Vaughan et al. (1994a). From the PCA event mode data we calculate Fourier spectra in various channel ranges with Nyquist frequency of 2048 Hz and resolution of 0.25 Hz. We then calculate cross spectra defined as $`C(j)=X_1^{*}(j)X_2(j)`$, where $`X`$ are the complex Fourier coefficients for two energy bands at the pulsar frequency $`\nu _j`$. The phase lag between the two energy bands is given by the argument of $`C`$. We measure all phase delays relative to the unbinned channels range 5 to 8, i.e. 1.83 to 3.27 keV for 5 detector units in PCA gain epoch 3 (April 15 1996 to March 22 1999). The results for the 18 April 1998 observation are shown in Figure 2 by the open symbols. The phase lags are consistent with those calculated from the lightcurve fitting. 
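The cross-spectrum phase measurement can be sketched in a few lines. Below is a toy check of our own with two sinusoids at a known relative phase; the function names, frequency bin, and sign convention are illustrative assumptions, not taken from the paper:

```python
import cmath, math

def fourier_coeff(x, j):
    """Discrete Fourier coefficient X(j) = sum_n x_n e^{-2 pi i j n / N}."""
    N = len(x)
    return sum(x[n] * cmath.exp(-2 * math.pi * 1j * j * n / N)
               for n in range(N))

def phase_lag(x1, x2, j):
    """Argument of the cross spectrum C(j) = X1*(j) X2(j): the phase
    of band 2 relative to band 1 at frequency bin j.  With this
    convention, a band-2 signal delayed in phase gives a negative lag."""
    c = fourier_coeff(x1, j).conjugate() * fourier_coeff(x2, j)
    return cmath.phase(c)
```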
From the cross-correlation spectra we are not able to measure lags in the much weaker harmonics. We have calculated phase lag spectra also for other RXTE observations during the April 1998 outburst. These spectra are similar to that in Figure 2. To quantify the trends, we compute an average phase delay, $`\varphi _{avg}`$, over all energies for each observation. We also fit a broken powerlaw function to each phase delay spectrum: $`\varphi \propto E^\alpha `$ below a break energy, $`E_b`$, and $`\varphi =\varphi _{max}`$ above this energy. Figure 3 shows the quantities $`\varphi _{avg}`$, $`E_b`$ and $`\varphi _{max}`$ versus the time of each observation. There is a clear connection between the results of the two analyses presented here. The phase resolved spectroscopy shows that the spectrum softens and correspondingly the peak of the pulse profile appears slightly earlier in phase for higher energies (Figure 1). The method of measuring phase delays shows the same behavior: higher-energy photons preferentially lead lower-energy photons in the fundamental and the magnitude of this phase delay increases with energy (Figure 2). In the following we discuss a model that can account for the phase delays/spectral softening measured here. ## 3 Model We calculate the expected luminosity as a function of phase in a manner similar to Pechenick et al. (1983); Strohmayer (1992) but including Doppler effects (Chen & Shaham 1989) and time of flight delays (Ftaclas et al. 1986). This treatment is based on a Schwarzschild metric, where the photon trajectories are completely determined by the compactness, $`R/M`$. The predicted luminosity as a function of phase from our code matches the results of Pechenick et al. (1983) and Chen & Shaham (1989) for the various choices of parameters. Braje et al. (2000) recently developed a model for pulse profiles using a slightly different approach. 
The parameters in the model are the speed at the equator of the neutron star, $`v`$, the mass of the neutron star, $`M`$, the compactness, $`R/M`$, the angular size of the cap, $`\alpha `$, and the viewing angles, $`\beta `$ (the angle between the rotation axis and the cap center) and $`\gamma `$ (rotation axis to line of sight). Another ingredient is the emission from the spot, which we take as isotropic and isothermal. The spectrum of energy emitted from the spot is another important input. Note that if the emitted spectrum is power law like, the observed spectrum will not evolve with phase, since Doppler transformation preserves the shape (see Chen & Shaham 1989). The intrinsic spectrum must therefore have some shape that transforms to match the observed hardening and phase lags; we use a blackbody spectrum with temperature $`kT_0`$. A fit from the model is shown in Figure 2. This fit uses the following model parameters: $`R/M=5`$, $`M=1.8\mathrm{M}_{\odot }`$, $`kT_0=0.6`$ keV, $`v=0.1`$, $`\beta =\gamma =10^\mathrm{o}`$, $`\alpha =10^\mathrm{o}`$. To derive count rates, we use table models in XSPEC and the appropriate response files as discussed above. The fit for this single set of parameters is good; we find $`\chi ^2=5.3`$ from the Fourier cross-correlation data or a reduced $`\chi ^2`$ of 0.8 for all the parameters fixed. A full exploration of the parameter space of the model is beyond the scope of this letter. We note the following trends, however. The magnitude of the phase lags depends sensitively on $`v`$ and $`kT_0`$. Larger delays result from higher speeds, because the pulses become increasingly asymmetric due to Doppler boosting. This asymmetry is also energy dependent, so there is a dependence on $`kT_0`$ as well (see Chen & Shaham 1989). The lags also depend on $`\beta `$ and $`\gamma `$, especially at higher energies where there is the turn-over noted above. The phase lags are less sensitive to $`R/M`$ and $`\alpha `$. 
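The way Doppler boosting pulls the hard-band peak to earlier phases can be illustrated with a flat-spacetime toy model, much simpler than the full Schwarzschild calculation described above (no light bending or gravitational redshift; the geometry, spectral form, and all numbers below are our own illustrative choices):

```python
import math

def pulse_peak_phase(energy_kev, beta=0.1, kT=0.6, nbins=4096):
    """Peak phase of a toy hot-spot pulse profile.

    Flat-spacetime sketch only: spot and observer in the equatorial
    plane, visibility weight w = max(cos psi, 0), Doppler factor
    delta = 1/(gamma (1 + beta sin psi)), and a Wien-like spectrum
    E^2 exp(-E/(delta kT)) boosted by delta^3."""
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    best_phase, best_flux = 0.0, -1.0
    for k in range(nbins):
        t = k / nbins                   # phase in [0, 1); spot faces us at 0.5
        psi = 2 * math.pi * (t - 0.5)
        w = max(math.cos(psi), 0.0)     # projected-area visibility
        if w == 0.0:
            continue
        # Spot approaches the observer (blueshift) for t < 0.5.
        delta = 1.0 / (gamma * (1.0 + beta * math.sin(psi)))
        flux = (w * delta ** 3 * energy_kev ** 2
                * math.exp(-energy_kev / (delta * kT)))
        if flux > best_flux:
            best_phase, best_flux = t, flux
    return best_phase
```

With these choices the 10 keV profile peaks measurably earlier in phase than the 2 keV profile, mirroring the sense of the observed lags; the magnitude depends strongly on the assumed speed and temperature, as the text notes for the full model.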
We have tested the assumption of isotropic emission and found that the phase lags depend only weakly on the beaming. ## 4 Discussion The pulsed emission from SAX J1808.4-3658 evolves through its phase from a relatively hard to a soft spectrum, as shown by our phase resolved spectroscopy (Figure 1). This evolution can also be thought of, and measured as, an energy-dependent phase lag in the fundamental (Figure 2), i.e. higher-energy photons emerging earlier in phase than lower-energy photons. We have applied a model to the data which consists of a hot spot on the rotating neutron star under a general relativistic treatment. The dominant effect accounting for energy-dependent delays is Doppler boosting, the larger boosting factors at earlier phases giving harder spectra. The model fits the data quite well. The model also provides a stable mechanism for generating the phase delays, which meshes with the fact that the characteristic delays remain stable in time to within 25% (Figure 3) even as there is a factor of two decrease in the X-ray flux, a possible tracer of accretion rate. As noted in Ford (1999), in addition to explaining the energy-dependent phase lags in SAX J1808.4-3658, this mechanism may also account for the lags in the burst oscillations of Aql X-1 (Ford 1999) and kilohertz QPOs (Vaughan et al. 1998; Kaaret et al. 1999). Phase resolved spectroscopy has not yet been possible in these signals. The model offers a new means of measuring the neutron star mass and radius, a notoriously difficult problem in accreting binary systems. The radius is directly related to $`v`$, the equatorial speed ($`v=\mathrm{\Omega }_{\mathrm{spin}}R`$). The fits also depend on the model parameters $`M`$ and $`R/M`$, though the lag spectra are less sensitive to these quantities. The model could be independently constrained by future optical spectroscopy of the companion which could provide information on $`M`$ and the viewing angles, $`\alpha `$ and $`\beta `$. 
We acknowledge stimulating discussions with the participants of the August 1999 Aspen workshop on Relativity where an early version of this work was presented. We thank Michiel van der Klis, Mariano Méndez and Luigi Stella for helpful discussions. This work was supported by NWO Spinoza grant 08-0 to E.P.J.van den Heuvel, by the Netherlands Organization for Scientific Research (NWO) under contract number 614-51-002, and by the Netherlands Researchschool for Astronomy (NOVA). This research has made use of data obtained through the High Energy Astrophysics Science Archive Research Center Online Service, provided by the NASA/Goddard Space Flight Center.
no-problem/0002/math0002004.html
ar5iv
text
# Introduction ## Introduction In 1765, Euler proved that several important centers of a triangle are collinear; the line containing these points is named after him. The incenter $`I`$ of a triangle, however, is generally not on this line. Less than twenty years ago, Andrew P. Guinand discovered that although $`I`$ is not necessarily on the Euler line, it is always fairly close to it. Guinand’s theorem states that for any non-equilateral triangle, the incenter lies inside, and the excenters lie outside, the circle whose diameter joins the centroid to the orthocenter; henceforth the orthocentroidal circle. Furthermore, Guinand constructed a family of locus curves for $`I`$ which cover the interior of this circle twice, showing that there are no other restrictions on the positions of the incenter with respect to the Euler line. Here we show that the Fermat point also lies inside the orthocentroidal circle; this suggests that the neighborhood of the Euler line may harbor more secrets than was previously known. We also construct a simpler family of curves for $`I`$, covering the interior once only except for the nine-point center $`N`$, which corresponds to the limiting case of an equilateral triangle. Triangle geometry is often believed to be exhausted, although both Davis and Oldknow have expressed the hope that the use of computers may revive it. New results do appear occasionally, such as Eppstein’s recent construction of two new triangle centers . This article establishes some relations among special points of the triangle, which were indeed found by using computer software. ## The locus of the incenter To any plane triangle $`ABC`$ there are associated several special points, called *centers*. 
A few of these, in the standard notation , are: the centroid $`G`$, where the medians intersect; the orthocenter $`H`$, where the altitudes meet; the centers of the inscribed and circumscribed circles, called the incenter $`I`$ and the circumcenter $`O`$; and the nine-point center $`N`$, half-way between $`O`$ and $`H`$. The radii of the circumcircle and the incircle are called $`R`$ and $`r`$, respectively. If equilateral triangles $`BPC`$, $`AQC`$ and $`ARB`$ are constructed externally on the sides of the triangle $`ABC`$, then the lines $`AP`$, $`BQ`$ and $`CR`$ are concurrent and meet at the Fermat point $`T`$. This point minimizes the distance $`TA+TB+TC`$ for triangles whose largest angle is at most $`120^{\circ }`$ . The points $`O`$, $`G`$, $`N`$, and $`H`$ lie (in that order) on a line called the Euler line, and $`OG:GN:NH=2:1:3`$. They are distinct unless $`ABC`$ is equilateral. The circle whose diameter is $`GH`$ is called the *orthocentroidal circle*. Guinand noticed that Euler’s relation $`OI^2=R(R-2r)`$ \[1, p. 85\] and Feuerbach’s theorem $`IN=\frac{1}{2}R-r`$ \[1, p. 105\] together imply that $$OI^2-4IN^2=R(R-2r)-(R-2r)^2=2r(R-2r)=\frac{2r}{R}OI^2>0.$$ Therefore, $`OI>2IN`$. The locus of points $`P`$ for which $`OP=2PN`$ is a circle of Apollonius; since $`OG=2GN`$ and $`OH=2HN`$, this is the orthocentroidal circle. The inequality $`OI>2IN`$ shows that $`I`$ lies in the interior of the circle . Guinand also showed that the angle cosines $`\mathrm{cos}A`$, $`\mathrm{cos}B`$, $`\mathrm{cos}C`$ of the triangle satisfy the following cubic equation: $$\rho ^4(1-2x)^3+8\rho ^2\sigma ^2x(3-2x)-16\sigma ^4x-4\sigma ^2\kappa ^2(1-x)=0,$$ (1) where $`OI=\rho `$, $`IN=\sigma `$ and $`OH=\kappa `$. We exploit this relationship below. The relation $`OI>2IN`$ can be observed on a computer with the software *The Geometer’s Sketchpad*$`^\text{®}`$, that allows tracking of relative positions of objects as *one* of them is moved around the screen. 
Let us fix the Euler line by using a Cartesian coordinate system with $`O`$ at the origin and $`H`$ at $`(3,0)`$. Consequently, $`G=(1,0)`$ and $`N=(1.5,0)`$. To construct a triangle with this Euler line, we first describe the circumcircle $`(O,R)`$, centered at $`O`$ with radius $`R>1`$ —in order that $`G`$ lie in the interior— and choose a point $`A`$ on this circle. If $`AA^{\prime }`$ is the median passing through $`A`$, we can determine $`A^{\prime }`$ from the relation $`AG:GA^{\prime }=2:1`$; then $`BC`$ is the chord of the circumcircle that is bisected perpendicularly by the ray $`OA^{\prime }`$. It is not always possible to construct $`ABC`$ given a fixed Euler line and a circumradius $`R`$. If $`1<R<3`$ then there is an arc on $`(O,R)`$ on which an arbitrary point $`A`$ yields an $`A^{\prime }`$ outside the circumcircle, which is absurd. If $`U`$ and $`V`$ are the intersections of $`(O,R)`$ with the orthocentroidal circle, and if $`UY`$ and $`VZ`$ are the chords of $`(O,R)`$ passing through $`G`$, then $`A`$ cannot lie on the arc $`ZY`$ of $`(O,R)`$, opposite the orthocentroidal circle (Figure 1). $`OYG`$ and $`NUG`$ are similar triangles. Indeed, $`YO=UO=2UN`$ and $`OG=2GN`$, so $`YG:GU=2:1`$; in the same way, $`ZG:GV=2:1`$. Hence, if $`A=Y`$ or $`A=Z`$, then $`A^{\prime }=U`$ or $`A^{\prime }=V`$ respectively, and $`ABC`$ degenerates into the chord $`UY`$ or $`VZ`$. If $`A`$ lies on the arc $`ZY`$, opposite the orthocentroidal circle, $`A^{\prime }`$ will be outside $`(O,R)`$. Another formula of Guinand,

$$OH^2=R^2(1-8\mathrm{cos}A\mathrm{cos}B\mathrm{cos}C),$$ (2)

shows that the forbidden arc appears if and only if $`\mathrm{cos}A\mathrm{cos}B\mathrm{cos}C<0`$, that is, if and only if $`1<R<3=OH`$. The triangle $`ABC`$ is obtuse-angled in this case. Once $`A`$ is chosen and the triangle constructed, we can find $`I`$ by drawing the angle bisectors of $`ABC`$ and marking their intersection.
*The Geometer’s Sketchpad* will do this automatically, and we can also ask it to draw the locus which $`I`$ traces as we move the point $`A`$ around the circumcircle. The idea of parameterizing the loci with $`R`$, instead of an angle of the triangle (as Guinand did) was inspired by the drawing tools available in *The Geometer’s Sketchpad*. This locus turns out to be a quartic curve, as follows.

Proposition 1. In the coordinate system described above, the incenter of $`ABC`$ is on the curve

$$(x^2+y^2)^2=R^2[(2x-3)^2+4y^2].$$ (3)

Proof. From equation (2) we see that, once we fix $`R`$, the product $`\mathrm{cos}A\mathrm{cos}B\mathrm{cos}C`$ is fixed. Using Viète’s formulas, this product is obtained from the constant term of Guinand’s equation (1):

$$\mathrm{cos}A\mathrm{cos}B\mathrm{cos}C=\frac{1}{8}\left(1-\frac{4\sigma ^2\kappa ^2}{\rho ^4}\right),$$

so that

$$\rho ^4(1-8\mathrm{cos}A\mathrm{cos}B\mathrm{cos}C)=4\sigma ^2OH^2.$$ (4)

Now, $`I`$ is a point of intersection of the two circles

$$\rho ^2=x^2+y^2\quad \text{and}\quad \sigma ^2=(x-\frac{3}{2})^2+y^2.$$

We get (3) by substituting these and (2) in (4), and dividing through by the common factor $`1-8\mathrm{cos}A\mathrm{cos}B\mathrm{cos}C`$, which is positive since $`OH\neq 0`$ (the equilateral case is excluded). Figure 2 is a *Mathematica* plot of the curves (3) with the orthocentroidal circle. Every point on a locus inside the orthocentroidal circle is the incenter of a triangle. When $`R>3`$, the locus is a lobe entirely inside the circle; if the point $`A`$ travels around the circumcircle once, then $`I`$ travels around the lobe three times, since $`A`$ will pass through the three vertices of each triangle with circumradius $`R`$. When $`1<R<3`$, the interior portion of (3) is shaped like a bell (Figure 2). Let $`A`$ travel along the allowable arc from $`Y`$ to $`Z`$, passing through $`V`$ and $`U`$; then $`I`$ travels along the bell from $`U`$ to $`V`$, back from $`V`$ to $`U`$, and then from $`U`$ to $`V`$ again.
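Proposition 1 is easy to test numerically: since equations (2) and (4) are preserved by similarities, we can take any scalene triangle, map it by the similarity sending $`O`$ to the origin and $`H`$ to $`(3,0)`$, and check that the image of the incenter satisfies the quartic (3). A Python sketch (the specific triangle is arbitrary, not from the article):

```python
import math

def circumcenter(A, B, C):
    (ax, ay), (bx, by), (cx, cy) = A, B, C
    d = 2 * (ax*(by-cy) + bx*(cy-ay) + cx*(ay-by))
    ux = ((ax*ax+ay*ay)*(by-cy) + (bx*bx+by*by)*(cy-ay) + (cx*cx+cy*cy)*(ay-by)) / d
    uy = ((ax*ax+ay*ay)*(cx-bx) + (bx*bx+by*by)*(ax-cx) + (cx*cx+cy*cy)*(bx-ax)) / d
    return complex(ux, uy)

A, B, C = complex(0, 3), complex(-1, 0), complex(2, 0)  # arbitrary scalene triangle
O = circumcenter((A.real, A.imag), (B.real, B.imag), (C.real, C.imag))
H = A + B + C - 2*O                                     # orthocenter: H = A+B+C-2O
R = abs(A - O)
a, b, c = abs(B - C), abs(C - A), abs(A - B)
I = (a*A + b*B + c*C) / (a + b + c)                     # incenter

# Similarity mapping O -> 0 and H -> 3, i.e. the frame of Proposition 1:
f = lambda z: 3 * (z - O) / (H - O)
Ip, Rp = f(I), 3 * R / abs(H - O)                       # image of I, rescaled circumradius
x, y = Ip.real, Ip.imag
lhs = (x*x + y*y)**2
rhs = Rp*Rp * ((2*x - 3)**2 + 4*y*y)
assert abs(lhs - rhs) < 1e-9 * rhs                      # I lies on the quartic (3)
```

The quartic is symmetric in $`y\to -y`$, so the orientation of the similarity does not matter.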
While $`A`$ moves from $`V`$ to $`U`$, the orientation of the triangle $`ABC`$ is reversed. When $`R=3`$, the locus closes at $`H`$ and one vertex $`B`$ or $`C`$ also coincides with $`H`$; the triangle is right-angled. If $`A`$ moves once around the circumcircle, starting and ending at $`H`$, $`I`$ travels twice around the lobe.

Proposition 2. The curves (3), for different values of $`R`$, do not cut each other inside the orthocentroidal circle, and they fill the interior except for the point $`N`$.

Proof. Let $`(a,b)`$ be inside the orthocentroidal circle, that is,

$$(a-2)^2+b^2<1.$$ (5)

If $`(a,b)`$ also lies on one of the curves (3), then

$$R=\sqrt{\frac{(a^2+b^2)^2}{(2a-3)^2+4b^2}}.$$

There is only one positive value of $`R`$ and thus *at most* one curve of the type (3) on which $`(a,b)`$ can lie. Now we show $`(a,b)`$ lies on *at least* one curve of type (3); to do that, we need to show that given (5), $`R>1`$. We need only prove

$$(a^2+b^2)^2>(2a-3)^2+4b^2.$$ (6)

Indeed, $`(2a-3)^2+4b^2=0`$ only if $`(a,b)=(\frac{3}{2},0)`$; this point is $`N`$. It cannot lie on a locus (3), in fact it corresponds to the limiting case of an equilateral triangle as $`R\to \mathrm{\infty }`$. The inequality (5) can be restated as

$$a^2+b^2<4a-3,$$ (7)

and (6) as

$$(a^2+b^2)^2-4a^2+3(4a-3)-4b^2>0.$$ (8)

From (7) it follows that $`(a^2+b^2)^2-4a^2+3(4a-3)-4b^2`$ $`>`$ $`(a^2+b^2)^2-4a^2+3(a^2+b^2)-4b^2`$ $`=`$ $`(a^2+b^2)(a^2+b^2-1).`$ But $`a^2+b^2>1`$ since $`(a,b)`$ is inside the orthocentroidal circle; therefore, (6) is true.

## The whereabouts of the Fermat point

The same set-up, a variable triangle with fixed Euler line and circumcircle, allows us to examine the loci of other triangle centers. Further experimentation with *The Geometer’s Sketchpad* suggests that the Fermat point $`T`$ also lies inside the orthocentroidal circle in all cases.

Theorem 1. The Fermat point of any non-equilateral triangle lies inside the orthocentroidal circle.

Proof.
Given a triangle $`ABC`$ with the largest angle at $`A`$, we can take a coordinate system with $`BC`$ as the $`x`$-axis and $`A`$ on the $`y`$-axis. Then $`A=(0,a)`$, $`B=(-b,0)`$, $`C=(c,0)`$ where $`a`$, $`b`$ and $`c`$ are all positive. Let $`BPC`$ and $`AQC`$ be the equilateral triangles constructed externally over $`BC`$ and $`AC`$ respectively (Figure 4). Then $`P=(\frac{1}{2}(c-b),-\frac{1}{2}\sqrt{3}(b+c))`$ and $`Q=(\frac{1}{2}(\sqrt{3}a+c),\frac{1}{2}(a+\sqrt{3}c))`$. The coordinates of $`T`$ can be found by writing down the equations for the lines $`AP`$ and $`BQ`$ and solving them simultaneously. After a little work, we get $`T=({\displaystyle \frac{u}{d}},{\displaystyle \frac{v}{d}})`$, where

$`u`$ $`=`$ $`(\sqrt{3}bc-\sqrt{3}a^2-ac-ab)(b-c),`$
$`v`$ $`=`$ $`(a^2+\sqrt{3}ab+\sqrt{3}ac+3bc)(b+c),`$
$`d`$ $`=`$ $`2\sqrt{3}(a^2+b^2+c^2)+6ac+6ab+2\sqrt{3}bc.`$ (9)

The perpendicular bisectors of $`BC`$ and $`AC`$ intersect at the circumcenter $`O=(\frac{1}{2}(c-b),(a^2-bc)/2a)`$. The nine-point center $`N`$ is the circumcenter of the triangle whose vertices are the midpoints of the sides of $`ABC`$; we can deduce that $`N=(\frac{1}{4}(c-b),(a^2+bc)/4a)`$.
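The closed form (9) can be cross-checked against a direct computation: intersect line $`AP`$ with line $`BQ`$ by Cramer’s rule and compare. A Python sketch (the numbers $`a=3`$, $`b=1`$, $`c=2`$ are an arbitrary test case, not from the article):

```python
import math

s3 = math.sqrt(3)
a, b, c = 3.0, 1.0, 2.0                  # A=(0,a), B=(-b,0), C=(c,0)
u = (s3*b*c - s3*a*a - a*c - a*b) * (b - c)
v = (a*a + s3*a*b + s3*a*c + 3*b*c) * (b + c)
d = 2*s3*(a*a + b*b + c*c) + 6*a*c + 6*a*b + 2*s3*b*c
T = (u/d, v/d)                           # Fermat point from the closed form (9)

# Independent check: intersect AP with BQ, where P and Q are the apexes of the
# external equilateral triangles on BC and AC.
A, B = (0.0, a), (-b, 0.0)
P = ((c - b)/2, -s3*(b + c)/2)
Q = ((s3*a + c)/2, (a + s3*c)/2)

def intersect(P1, P2, P3, P4):
    # Intersection of line P1P2 with line P3P4 (Cramer's rule).
    x1, y1 = P1; x2, y2 = P2; x3, y3 = P3; x4, y4 = P4
    den = (x1-x2)*(y3-y4) - (y1-y2)*(x3-x4)
    px = ((x1*y2-y1*x2)*(x3-x4) - (x1-x2)*(x3*y4-y3*x4)) / den
    py = ((x1*y2-y1*x2)*(y3-y4) - (y1-y2)*(x3*y4-y3*x4)) / den
    return (px, py)

T2 = intersect(A, P, B, Q)
assert abs(T[0] - T2[0]) < 1e-9 and abs(T[1] - T2[1]) < 1e-9   # T ≈ (0.1930, 0.8386)
```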
To show that $`T`$ lies inside the orthocentroidal circle, we must show that $`OT>2NT`$, or

$$OT^2>4NT^2.$$ (10)

In coordinates, this inequality takes the equivalent form:

$$\left[\frac{u}{d}-\left(\frac{c-b}{2}\right)\right]^2+\left[\frac{v}{d}-\left(\frac{a^2-bc}{2a}\right)\right]^2>4\left[\frac{u}{d}-\left(\frac{c-b}{4}\right)\right]^2+4\left[\frac{v}{d}-\left(\frac{a^2+bc}{4a}\right)\right]^2,$$

or, multiplying by $`(2ad)^2`$,

$$[2au-ad(c-b)]^2+[2av-d(a^2-bc)]^2>[4au-ad(c-b)]^2+[4av-d(a^2+bc)]^2.$$

After expanding and canceling terms, this simplifies to

$$4a^2du(c-b)+4adv[(2a^2+2bc)-(a^2-bc)]-4a^2d^2bc>12a^2u^2+12a^2v^2,$$

or better,

$$adu(c-b)+dv(a^2+3bc)-abcd^2-3au^2-3av^2>0.$$ (11)

One way to verify this inequality is to feed the equations (9) into *Mathematica*, which expands and factors the left hand side of (11) as

$$2(b+c)(\sqrt{3}a^2+\sqrt{3}b^2+\sqrt{3}c^2+\sqrt{3}bc+3ab+3ac)(a^4+a^2b^2-8a^2bc+a^2c^2+9b^2c^2).$$

The first three factors are positive. The fourth factor can be expressed as the sum of two squares,

$$(a^2-3bc)^2+a^2(b-c)^2,$$

and could be zero only if $`a^2=3bc`$ and $`b=c`$, so that $`a=\sqrt{3}b`$. This gives an equilateral triangle with side $`2b`$. Since the equilateral case is excluded, all the factors are positive, which shows that (11) is true, and therefore (10) holds. Varying the circumradius $`R`$ and the position of the vertex $`A`$ with *The Geometer’s Sketchpad* reveals a striking parallel between the behavior of the loci of $`T`$ and those of $`I`$. It appears that the loci of $`T`$ also foliate the orthocentroidal disc, never cutting each other, in a similar manner to the loci of $`I`$ (Figure 2). The locus of $`T`$ becomes a lobe when $`R=3`$, as is the case with the locus of $`I`$. Furthermore, the loci of $`T`$ close in on the center of the orthocentroidal circle as $`R\to \mathrm{\infty }`$, just as the loci of $`I`$ close in on $`N`$ (Figure 2).
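Theorem 1 can also be stress-tested by brute force: generate many random triangles in the coordinate frame of the proof ($`A=(0,a)`$, $`B=(-b,0)`$, $`C=(c,0)`$ with $`a,b,c>0`$), compute $`T`$, $`O`$ and $`N`$ from the formulas above, and check $`OT^2>4NT^2`$. A sketch:

```python
import math
import random

s3 = math.sqrt(3)
random.seed(1)
violations = 0
for _ in range(1000):
    # Random placement as in the proof; a, b, c > 0.
    a, b, c = (random.uniform(0.1, 10.0) for _ in range(3))
    u = (s3*b*c - s3*a*a - a*c - a*b) * (b - c)
    v = (a*a + s3*a*b + s3*a*c + 3*b*c) * (b + c)
    d = 2*s3*(a*a + b*b + c*c) + 6*a*c + 6*a*b + 2*s3*b*c
    Tx, Ty = u/d, v/d                          # Fermat point, eq. (9)
    Ox, Oy = (c - b)/2, (a*a - b*c)/(2*a)      # circumcenter
    Nx, Ny = (c - b)/4, (a*a + b*c)/(4*a)      # nine-point center
    OT2 = (Tx - Ox)**2 + (Ty - Oy)**2
    NT2 = (Tx - Nx)**2 + (Ty - Ny)**2
    if not OT2 > 4*NT2:                        # inequality (10)
        violations += 1
assert violations == 0
```

A random triple is almost surely non-equilateral, so the strict inequality should hold in every trial.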
It is difficult to prove these assertions with the same tools used to characterize the loci of $`I`$, because we lack an equation analogous to (1) involving $`T`$ instead of $`I`$. A quick calculation for non-equilateral isosceles triangles, however, shows that $`T`$ can be anywhere on the segment $`GH`$ except for its midpoint. This is consistent with the observation that the loci of $`T`$ close in on the center of the orthocentroidal circle. Consider a system of coordinates like those of Figure 4. Let $`b=c`$ so that $`ABC`$ is an isosceles triangle. In this case, by virtue of (9), $`T=(0,b/\sqrt{3})`$. For this choice of coordinates, $`G=(0,a/3)`$ and $`H=(0,b^2/a)`$. $`T`$ lies on the Euler line, which can be parametrized by $`(1-t)G+tH`$ for real $`t`$. This requires that

$$(1-t)\frac{a}{3}+t\frac{b^2}{a}=\frac{b}{\sqrt{3}},$$

for some real $`t`$. Solving for $`t`$, this becomes

$$t=\frac{a^2-\sqrt{3}ab}{a^2-3b^2}=\frac{a}{a+\sqrt{3}b},$$

unless $`a=\sqrt{3}b`$. This case is excluded since $`ABC`$ is not equilateral. Note that $`t\to \frac{1}{2}`$ as $`a\to \sqrt{3}b`$. Thus, $`t`$ takes real values between $`0`$ and $`1`$ except for $`\frac{1}{2}`$, so $`T`$ can be anywhere on the segment $`GH`$ except for its midpoint.

## Acknowledgments

I am grateful to Les Bryant and Mark Villarino for helpful discussions, to Joseph C. Várilly for advice and technical help, and to Julio González Cabillón for good suggestions on the use of computer software. I also thank the referees for helpful comments.
# Not enough stellar mass Machos in the Galactic halo

Based on observations made at the European Southern Observatory, La Silla, Chile.

## 1 Research context

The search for gravitational microlensing in our Galaxy has been going on for a decade, following the proposal to use this effect as a tool to probe the dark matter content of the Galactic halo (Paczyński (1986)). The first microlensing candidates were reported in 1993, towards the lmc (Aubourg et al. (1993); Alcock et al. 1993) and the Galactic Centre (Udalski et al. (1993)) by the eros, macho and ogle collaborations. Because they observed no microlensing candidate with a duration shorter than 10 days, the eros1 and macho groups were able to exclude the possibility that a substantial fraction of the Galactic dark matter resides in planet-sized objects (Aubourg et al. (1995); Alcock et al. (1996); Renault et al. 1997; Renault et al. (1998); Alcock et al. (1998)). However a few events were detected with longer timescales. From 6-8 candidate events towards the lmc, the macho group estimated an optical depth of order half that required to account for the dynamical mass of the standard spherical dark halo; the typical Einstein radius crossing time of the events, $`t_E`$, implied an average mass of about $`0.5\mathrm{M}_{\odot }`$ for the lenses (Alcock et al. 1997a). Based on two candidates, eros1 set an upper limit on the halo mass fraction in objects of similar masses (Ansari et al. 1996), that is below that required to explain the rotation curve of our Galaxy<sup>1</sup><sup>1</sup>1 Assuming the eros1 candidates are microlensing events, they would correspond to an optical depth six times lower than that expected from a halo fully comprised of machos.. The second phase of the eros programme was started in 1996, with a ten-fold increase in the number of monitored stars in the Magellanic Clouds.
The analysis of the first two years of data towards the Small Magellanic Cloud (smc) allowed the detection of one microlensing event (Palanque-Delabrouille et al. 1998; see also Alcock et al., 1997b). This single event, out of 5.3 million stars, allowed eros2 to further constrain the halo composition, excluding in particular that more than 50 % of the standard dark halo is made up of $`0.01`$–$`0.5\mathrm{M}_{\odot }`$ objects (Afonso et al. 1999). In contrast, an optical detection of a halo white dwarf population was reported (Ibata et al. 1999). In this letter, we describe the analysis of the two-year light curves from 17.5 million lmc stars. We then combine these results with our previous limits, and derive the strongest limit obtained thus far on the amount of stellar mass objects in the Galactic halo.

## 2 Experimental setup and LMC observations

The telescope, camera, telescope operation and data reduction are as described in Bauer et al. (1997) and Palanque-Delabrouille et al. (1998). Since August 1996, we have been monitoring 66 one square-degree fields in the lmc, simultaneously in two wide passbands. Of these, data prior to May 1998 from 25 square-degrees spread over 43 fields have been analysed. In this period, about 70-110 images of each field were taken, with exposure times ranging from 3 min in the lmc center to 12 min on the periphery; the average sampling is once every 6 days.

## 3 LMC data analysis

The analysis of the lmc data set was done using a program independent from that used in the smc study, with largely different selection criteria. The aim is to cross-validate both programs (as was already done in the analysis of eros1 Schmidt photographic plates, Ansari et al. (1996)) and avoid losing rare microlensing events. Preliminary results of the present analysis were reported in Lasserre (1999). We only give here a list of the various steps, as well as a short description of the principal new features; details will be provided in Lasserre et al. (2000).
We first select the 8% “most variable” light curves, a sample much larger than the number of detectable variable stars. Working from this “enriched” subset, we apply a first set of cuts to select, in each colour separately, the light curves that exhibit significant variations. We first identify the baseline flux in the light curve - basically the most probable flux. We then search for runs along the light curve, i.e. groups of consecutive measurements that are all on the same side of the baseline flux. We select light curves that either have an abnormally low number of runs over the whole light curve, or show one long run (at least 5 valid measurements) that is very unlikely to be a statistical fluctuation. We then ask for a minimum signal-to-noise ratio by requiring that the group of 5 most luminous consecutive measurements be significantly further from the baseline than the average spread of the measurements. We also check that the measurements inside the most significant run show a smooth time variation. The second set of cuts compares the measurements with the best fit point-lens point-source constant speed microlensing light curve (hereafter “simple microlensing”). They allow us to reject variable stars whose light curve differs too much from simple microlensing, and are sufficiently loose not to reject light curves affected by blending, parallax or the finite size of the source, and most cases of multiple lenses or sources. After this second set of cuts, stars selected in at least one passband represent about 0.01% of the initial sample; almost all of them are found in two thinly populated zones of the colour-magnitude diagram. The third set of cuts deals with this physical background. The first zone contains stars brighter and much redder than those of the red clump; variable stars in this zone are rejected if they vary by less than a factor two or have a very poor fit to simple microlensing. The second zone is the top of the main sequence. 
Here we find that selected stars, known as blue bumpers (Alcock et al. 1997a), display variations that are always smaller than 60% and lower in the visible passband than in the red one. These cannot correspond to simple microlensing, which is achromatic; they cannot correspond to microlensing plus blending with another unmagnified star either, as it would imply blending by even bluer stars, which is very unlikely. We thus reject all candidates from the second zone exhibiting these two features. The fourth set of cuts tests for compatibility between the light curves in both passbands. We retain candidates selected in only one passband if they have no conflicting data in the other passband. For candidates selected independently in the two passbands, we require that their largest variations coincide in time. The tuning of each cut and the calculation of the microlensing detection efficiency are done with simulated simple microlensing light curves, as described in Palanque-Delabrouille et al. (1998). For the efficiency calculation, microlensing parameters are drawn uniformly in the following intervals: time of maximum magnification $`t_0`$ within the observing period $`\pm 150`$ days, impact parameter normalised to the Einstein radius $`u_0\in [0,2]`$ and timescale $`t_E\in [5,300]`$ days. All cuts on the data were also applied to the simulated light curves. Only two candidates remain after all cuts. Their light curves are shown in Fig. 1; microlensing fit parameters are given in Table 1. Although the candidates pass all cuts, agreement with simple microlensing is not excellent. The efficiency of the analysis, normalised to events with an impact parameter $`u_0<1`$ and to an observing period $`T_{\mathrm{obs}}`$ of two years, is summarised in Table 2. The main source of systematic error is the uncertainty in the influence of blending. Blending lowers the observed magnifications and timescales.
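The "simple microlensing" model referred to throughout is the standard point-source point-lens light curve of Paczyński (1986): an achromatic magnification $`A(u)=(u^2+2)/(u\sqrt{u^2+4})`$ with $`u(t)=\sqrt{u_0^2+((t-t_0)/t_E)^2}`$. A minimal Python sketch (the parameter values are illustrative, not a fit to any EROS candidate):

```python
import math

def magnification(t, t0, u0, tE):
    """Point-source point-lens microlensing magnification (Paczynski 1986)."""
    u = math.sqrt(u0**2 + ((t - t0) / tE)**2)
    return (u*u + 2) / (u * math.sqrt(u*u + 4))

# Illustrative parameters: peak time, impact parameter, Einstein crossing time.
t0, u0, tE = 0.0, 0.5, 30.0
peak = magnification(t0, t0, u0, tE)              # maximum magnification, A(u0)
baseline = magnification(t0 + 10*tE, t0, u0, tE)  # far from the peak

assert peak > 3 / math.sqrt(5)     # u0 < 1 implies A_max > 3/sqrt(5) ≈ 1.34
assert abs(baseline - 1.0) < 1e-3  # flat, unmagnified baseline far from t0
```

Because this model is achromatic, the chromatic "blue bumper" variations described above cannot be microlensing, which is exactly why the third set of cuts rejects them.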
While this decreases the efficiency for a given star, the effective number of monitored stars is increased so that there is partial compensation. This effect was studied with synthetic images using measured magnitude distributions (Palanque-Delabrouille (1997)). Our final efficiency is within 10% of the naive efficiency. ## 4 EROS1 results revisited The two eros1 microlensing candidates have been monitored by eros2. The source star in event eros-lmc-2 had been found to be variable (Ansari et al. (1995)), but microlensing fits taking into account the observed periodicity gave a good description of the measurements. Its follow-up by eros2 revealed a new bump in March 1999, eight years after the first one<sup>2</sup><sup>2</sup>2We thank the macho group for communication about their data on this star.. This new variation, of about a factor two, was not well sampled but is significant. Therefore, eros-lmc-2 is no longer a candidate and we do not include it in the limit computation. ## 5 Limits on Galactic halo MACHOs eros has observed four microlensing candidates towards the Magellanic Clouds, one from eros1 and two from eros2 towards the lmc, and one towards the smc. As discussed in Palanque-Delabrouille et al. (1998), we consider that the long duration of the smc candidate together with the absence of any detectable parallax, in our data as well as in that of the macho group (Alcock et al. 1997b ), indicates that it is most likely due to a lens in the smc. For that reason, the limit derived below uses the three lmc candidates; for completeness, we also give the limit corresponding to all four candidates. The limits on the contribution of dark compact objects to the Galactic halo are obtained by comparing the number and durations of microlensing candidates with those expected from Galactic halo models. We use here the so-called “standard” halo model described in Palanque-Delabrouille et al. (1998) as model 1. 
The model predictions are computed for each eros data set in turn, taking into account the corresponding detection efficiencies (Ansari et al. (1996); Renault et al. 1998; Afonso et al. 1999; Table 2 above), and the four predictions are then summed. In this model, all dark objects have the same mass $`M`$; we have computed the model predictions for many trial masses $`M`$ in turn, in the range \[$`10^{-8}\mathrm{M}_{\odot }`$, $`10^2\mathrm{M}_{\odot }`$\]. The method used to compute the limit is as in Ansari et al. (1996). We consider two ranges of timescale $`t_E`$, smaller or larger than $`t_E^{\mathrm{lim}}=10`$ days. (All candidates have $`t_E>t_E^{\mathrm{lim}}`$.) We can then compute, for each mass $`M`$ and any halo fraction $`f`$, the combined Poisson probability for obtaining, in the four different eros data sets taken as a whole, zero candidate with $`t_E<t_E^{\mathrm{lim}}`$ and three or less (alt. four or less) with $`t_E>t_E^{\mathrm{lim}}`$. For any value of $`M`$, the limit $`f_{\mathrm{max}}`$ is the value of $`f`$ for which this probability is 5%. Whereas the actual limit depends somewhat on the precise choice of $`t_E^{\mathrm{lim}}`$, the difference ($`\sim 5\%`$) is noticeable only for masses around $`0.02\mathrm{M}_{\odot }`$. Furthermore, we consider 10 days to be a conservative choice. Figure 2 shows the 95% C.L. exclusion limit derived from this analysis on the halo mass fraction, $`f`$, for any given dark object mass, $`M`$. The solid line corresponds to the three lmc candidates; it is the main result of this letter. (The dashed line includes the smc candidate in addition.) This limit rules out a standard spherical halo model fully comprised of objects with any mass function inside the range $`[10^{-7},4]\mathrm{M}_{\odot }`$. In the region of stellar mass objects, where this result improves most on previous ones, the new lmc data contribute about 60% to our total sensitivity (the smc and eros1 lmc data contribute 15% and 25% respectively).
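The statistical procedure described above can be sketched in a few lines of Python. The expected full-halo event counts below (`mu_short_full`, `mu_long_full`) are placeholders, not the actual EROS efficiencies; the point is only the mechanics: expected counts scale linearly with the halo fraction $`f`$, and $`f_{\mathrm{max}}`$ is found where the combined Poisson probability drops to 5%.

```python
import math

def poisson_cdf(k, mu):
    """P(X <= k) for X ~ Poisson(mu)."""
    return sum(math.exp(-mu) * mu**i / math.factorial(i) for i in range(k + 1))

def prob_compatible(f, mu_short_full, mu_long_full, n_short=0, n_long=3):
    # Expected counts scale linearly with the halo fraction f.
    return (poisson_cdf(n_short, f * mu_short_full) *
            poisson_cdf(n_long,  f * mu_long_full))

def f_max(mu_short_full, mu_long_full, cl=0.05):
    # Bisection: the compatibility probability decreases monotonically with f.
    lo, hi = 0.0, 10.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if prob_compatible(mid, mu_short_full, mu_long_full) > cl:
            lo = mid
        else:
            hi = mid
    return lo

# Hypothetical full-halo expectations at some trial mass M (placeholders only):
fm = f_max(mu_short_full=1.0, mu_long_full=25.0)
```

Repeating this for each trial mass $`M`$, with the true efficiency-weighted expectations, traces out the exclusion curve of Figure 2.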
The total sensitivity, that is proportional to the sum of $`N_{\star }T_{\mathrm{obs}}ϵ(t_E)`$ over the four eros data sets, is 2.4 times larger than that of Alcock et al. (1997a). We observe that a large fraction of the domain previously allowed by Alcock et al. (1997a) is excluded by the limit in Fig. 2.

## 6 Discussion and conclusion

After eight years of monitoring the Magellanic Clouds, eros has a meager crop of three microlensing candidates towards the lmc and one towards the smc, whereas 27 events are expected for a spherical halo fully comprised of $`0.5\mathrm{M}_{\odot }`$ objects. These were obtained from four different data sets analysed by four independent, cross-validated programs. So, the small number of observed events is unlikely to be due to bad detection efficiencies. This allows us to put strong constraints on the fraction of the halo made of objects in the range \[$`10^{-7}\mathrm{M}_{\odot }`$, $`4\mathrm{M}_{\odot }`$\], excluding in particular at the 95 % C.L. that more than 40 % of the standard halo be made of objects with up to $`1\mathrm{M}_{\odot }`$. The preferred value quoted in Alcock et al. (1997a), $`f=0.5`$ and $`0.5\mathrm{M}_{\odot }`$, is incompatible with the limits in Fig. 2 at the 99.7% C.L. (but see the note added below). What are possible reasons for such a difference? Apart from a potential bias in the detection efficiencies, several differences should be kept in mind while comparing the two experiments. First, eros uses less crowded fields than macho with the result that blending is relatively unimportant for eros. Second, eros covers a larger solid angle (43 deg<sup>2</sup> in the lmc and 10 deg<sup>2</sup> in the smc) than macho, which monitors primarily the central 11 deg<sup>2</sup> of the lmc. The eros rate should thus be less contaminated by self-lensing that is more common in the central regions - the importance of self-lensing was first stressed by Wu (1994) and Sahu (1994). Third, the macho data have a more frequent time sampling.
Finally, while the eros limit uses both Clouds, the macho result is based only on the lmc. For halo lensing, the timescales towards the two Clouds should be nearly identical and the optical depths comparable. In this regard, we remark that the smc event is longer than all lmc candidates from macho and eros. Finally, given the scarcity of our candidates and the possibility that some observed microlenses actually lie in the Magellanic Clouds, eros is not willing to quote at present a non zero lower limit on the fraction of the Galactic halo comprised of dark compact objects with masses up to a few solar masses. Note added. While the writing of this letter was being finalised, the analysis of 5.7 yrs of lmc observations by the macho group was made public (Alcock et al. (2000)). The new favoured estimate of the halo mass fraction in the form of compact objects, $`f=0.20`$, is 2.5 times lower than that of Alcock et al. (1997a) and is compatible with the limit presented here. None of the conclusions in this article have to be reconsidered. A detailed comparison of our results with those of Alcock et al. (2000) will be available in our forthcoming publication (Lasserre et al. (2000)). ###### Acknowledgements. We are grateful to D. Lacroix and the staff at the Observatoire de Haute Provence and to A. Baranne for their help with the MARLY telescope. The support by the technical staff at ESO, La Silla, is essential to our project. We thank J.F. Lecointe for assistance with the online computing.
# On The Reddening in X-ray Absorbed Seyfert 1 Galaxies

## 1 Introduction

The presence of absorption edges of O VII and O VIII in the X-ray (Reynolds 1997; George et al. 1998a) indicates that there is a significant amount of intrinsic ionized material along our line-of-sight to the nucleus in a large fraction ($`\sim `$ 0.5) of Seyfert 1 galaxies. In addition to highly ionized gas (referred to as an X-ray or “warm” absorber), X-ray spectra often show evidence for a less-ionized absorber. This component has been modeled using neutral gas (cf. George et al. 1998b), and its relationship to the “warm” absorber is unclear. Interestingly, there are several instances in which this additional neutral column is too small by as much as an order of magnitude to explain the reddening of the continuum and emission lines, assuming typical Galactic dust/gas ratios (cf. Shull & van Steenburg 1985). This inconsistency was first noted in regard to the absence of high ionization emission lines in the IUE spectra of MCG -6-30-15 (Reynolds & Fabian 1995). The first quantitative comparison of the neutral columns inferred from the X-ray data to that derived from the reddening was for the QSO IRAS 13349+2438 by Brandt, Fabian, & Pounds (1996), who suggested that the dust exists within the highly ionized X-ray absorber (ergo, a dusty warm absorber). It has been suggested that dusty warm absorbers are present in several other Seyferts (NGC 3227: Komossa & Fink 1997a; NGC 3786: Komossa & Fink 1997b; IRAS 17020+4544: Leighly et al. 1997, Komossa & Bade 1998; MCG -6-30-15: Reynolds et al. 1997). Since it is unlikely that dust could form within the highly ionized gas responsible for the O VII and O VIII absorption, it has been suggested that the dust is evaporated off the putative molecular torus (at $`\sim `$ 1 pc) and, subsequently, swept up in an radially outflowing wind (cf. Reynolds 1997). In this paper, we present an alternative explanation.
It is possible that there is a component of dusty gas (which we will refer to as the “lukewarm” absorber), with an ionization state such that hydrogen is nearly completely ionized but the O VII and O VIII columns are negligible, which has a sufficient total column to account for the reddening. Such a possibility has been mentioned by Brandt et al. (1996), while Reynolds et al. (1997) have suggested that the dusty warm absorber in MCG -6-30-15 may have multiple zones. Here we suggest that the lukewarm absorber lies far into the narrow-line region (NLR). Such a component has been detected in the Seyfert galaxy NGC 4151 (Kraemer et al. 1999), and it lies at sufficient radial distance to cover much of the NLR. We will demonstrate that the combination of a dusty lukewarm absorber and a more highly ionized (O VII and O VIII) absorber is consistent with the observed X-ray data and with the reddening of the narrow emission lines in the Seyfert 1 galaxy NGC 3227.

## 2 Absorption and Reddening in NGC 3227

NGC 3227 ($`z`$ $`=`$ 0.003) is a well studied Sb galaxy with an active nucleus, usually classified as a Seyfert 1.5 (Osterbrock & Martel 1993). X-ray observations of NGC 3227 with ROSAT and ASCA reveal the presence of ionized gas along the line-of-sight to the nucleus (Ptak et al. 1994; Reynolds 1997; George et al. 1998b). Using the combined ASCA and ROSAT dataset obtained in 1993, George et al. characterized the absorber with an ionization parameter (number of photons with energies $`\geq `$ 1 Ryd per hydrogen atom at the ionized face of the cloud) U $`\sim `$ 2.4, and a column density N<sub>H</sub> $`\sim `$ 3 x 10<sup>21</sup> cm<sup>-2</sup>. The UV and optical emission lines and continuum in NGC 3227 are heavily reddened. Cohen (1983) measured a narrow H$`\alpha `$/H$`\beta `$ ratio $`\sim `$ 4.68, and derived a reddening of E<sub>B-V</sub> $`=`$ 0.51 $`\pm `$ 0.04, assuming the intrinsic decrement to be equal to the Case B value (Osterbrock 1989).
Cohen derived a somewhat larger reddening from the \[S II\] lines, E<sub>B-V</sub> $`=`$ 0.94 $`\pm `$ 0.23, which may be less reliable due to the weakness of \[S II\] $`\lambda `$4072 (see Wampler 1968). The ratio of broad H$`\alpha `$/H$`\beta `$ $`\sim `$ 5.1, indicating that the broad and narrow lines are similarly reddened. Winge et al. (1995) used the total (narrow $`+`$ broad) H$`\alpha `$/H$`\beta `$ ratio to derive a somewhat smaller reddening, E<sub>B-V</sub> $`\sim `$ 0.28. IUE spectra show the UV continuum of NGC 3227 is also heavily reddened (Komossa & Fink 1997a). Based on the Balmer lines, assuming Galactic dust properties and dust/gas ratio, the derived reddening requires a hydrogen (H I and H II combined) column density $`\sim `$ 2 x 10<sup>21</sup> cm<sup>-2</sup> (cf. Shull & Van Steenberg 1985), which is much greater than the estimated neutral column, but similar to that of the ionized gas detected in X-rays in 1993 ($`N_\mathrm{H}`$ $`\sim `$ 3 x 10<sup>21</sup> cm<sup>-2</sup>; George et al. 1998b). Several workers have suggested that NGC 3227 contains a screen of neutral material (in addition to that in the Galaxy) along the line-of-sight to the nucleus. Besides the ionized absorber, both Komossa & Fink (1997a) and George et al. (1998b) found a column density of $`\sim `$ 3 x 10<sup>20</sup> cm<sup>-2</sup> of neutral material, in addition to the Galactic column ($`\sim `$ 2.1 x 10<sup>20</sup> cm<sup>-2</sup>, cf. Murphy et al. 1996), is required to model the X-ray spectrum below $`\sim `$500 eV. A higher column density ($`\sim `$ 6 x 10<sup>20</sup> cm<sup>-2</sup>) has been suggested based on 21-cm VLA observations (Mundell et al. 1997). However the angular resolution of the VLA data is poor (12<sup>′′</sup>, or $`\sim `$ 850 pc) and Mundell et al. did not make a direct detection of H I absorption against the radio continuum source in the inner nucleus.
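The reddening-to-column conversion used here can be reproduced numerically. The sketch below derives E<sub>B-V</sub> from Cohen's narrow Balmer decrement and converts it to an equivalent hydrogen column; the extinction-curve values and the gas-to-dust ratio N<sub>H</sub>/E<sub>B-V</sub> = 5.8 x 10<sup>21</sup> cm<sup>-2</sup> mag<sup>-1</sup> are assumptions taken from commonly used Galactic prescriptions (approximate Cardelli et al. style coefficients and the Bohlin et al. 1978 ratio), not values quoted in this paper:

```python
import math

# Assumed Galactic extinction-curve values at Hbeta (4861 A) and Halpha (6563 A):
k_Hbeta, k_Halpha = 3.61, 2.53
case_B = 2.86                       # assumed intrinsic Case B Halpha/Hbeta decrement

def ebv_from_decrement(observed_ratio):
    """Color excess from the observed Balmer decrement."""
    return 2.5 / (k_Hbeta - k_Halpha) * math.log10(observed_ratio / case_B)

ebv = ebv_from_decrement(4.68)      # Cohen's narrow-line ratio
NH = 5.8e21 * ebv                   # assumed Galactic gas-to-dust ratio (cm^-2 mag^-1)
print(round(ebv, 2), f"{NH:.1e}")   # E(B-V) ≈ 0.50, N_H ≈ 2.9e21 cm^-2
```

The result is consistent with the E<sub>B-V</sub> = 0.51 and the few-times-10<sup>21</sup> cm<sup>-2</sup> equivalent column discussed above, an order of magnitude above the neutral column actually seen in the X-ray fits.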
In this paper we argue that based on the current data, there is no reason to include a significant column of completely neutral material. We show that a dusty lukewarm absorber lying outside the NLR is consistent with both the reddening of the optical continuum and narrow lines, and with the attenuation of the X-ray spectrum below $`\sim `$500 eV.

## 3 Modeling The Absorber

### 3.1 The Lukewarm Component

The photoionization code we use has been described in previous publications (cf. Kraemer et al. 1994). For the sake of simplicity, we assume that the lukewarm absorber can be represented as a single zone, described by one set of initial conditions (i.e., density, ionization parameter, elemental abundances, and dust fraction). The gas is ionized by the continuum radiation emitted by the central source in the active nucleus of NGC 3227. In order to fit the SED, we first determined the intrinsic luminosity at the Lyman limit. Since NGC 3227 is heavily reddened, we fit the value at the Lyman limit based on the optical continuum flux. From the average fluxes measured by Winge et al. (1995), after correcting for a reddening of E<sub>B-V</sub> $`=`$ 0.4 (the average of the reddening quoted by Cohen (1983) and Winge et al.), assuming the reddening curve of Savage & Mathis (1979), we find that the intrinsic flux at 5525 Å is F<sub>λ</sub> $`\sim `$ 2.5 x 10<sup>-14</sup> ergs s<sup>-1</sup> cm<sup>-2</sup> Å<sup>-1</sup>. Interestingly, the optical luminosities of NGC 3227 and NGC 4151 are roughly equal and, therefore, we have made the assumption that the two galaxies have similar optical-UV SEDs. Using the same ratio of optical to UV flux for NGC 3227 as in NGC 4151 (Nelson et al. 1999), we determine F<sub>ν</sub> at the Lyman limit to be $`\sim `$ 6.2 x 10<sup>-26</sup> ergs s<sup>-1</sup> cm<sup>-2</sup> Hz<sup>-1</sup>. The X-ray continuum from 2 – 10 keV can be fit with an index, $`\alpha `$ $`\sim `$ 0.6 (George et al.
1998b), from which we derive a flux at 2 keV of $``$ 2.20 x 10<sup>-29</sup> ergs s<sup>-1</sup> cm<sup>-2</sup> Hz<sup>-1</sup>. Since an extrapolation of the X-ray continuum underpredicts the flux at the Lyman limit by more than two orders of magnitude, the continuum must steepen below 2 keV. Hence, we have modeled the EUV to X-ray SED as a series of power-laws of the form F<sub>ν</sub> $``$ $`\nu ^{\alpha }`$, with $`\alpha =1`$ below 13.6 eV, $`\alpha =2`$ over the range 13.6 eV $`h\nu <`$ 500 eV, and $`\alpha =0.6`$ above 500 eV. Given this simple parameterization, the steepening of the continuum cannot occur at a much lower energy, as otherwise the EUV continuum would be too soft to produce the observed He II $`\lambda `$4686/H$`\beta `$ ratio ($``$ 0.23), specifically since the strong \[O I\] $`\lambda `$6300 line indicates that much of the NLR gas is optically thick (for the relative narrow emission-line strengths, see Cohen 1983). (It is possible to have a lower break energy and a sufficient number of He II ionizing photons if the EUV continuum has a significant “Big Blue Bump”, as suggested by Mathews & Ferland (1987). Although assuming such a continuum does not appreciably affect our predictions, a full exploration of parameter space is beyond the scope of this paper.) The luminosity in ionizing photons, from 13.6 – 10,000 eV, is $``$ 1.5 x 10<sup>53</sup> photons s<sup>-1</sup>. We have assumed roughly solar element abundances (cf. Grevesse & Anders 1989), which are, by number relative to H, as follows: He $`=`$ 0.1, C $`=`$ 3.4 x 10<sup>-4</sup>, N $`=`$ 1.2 x 10<sup>-4</sup>, O $`=`$ 6.8 x 10<sup>-4</sup>, Ne $`=`$ 1.1 x 10<sup>-4</sup>, Mg $`=`$ 3.3 x 10<sup>-5</sup>, Si $`=`$ 3.1 x 10<sup>-5</sup>, S $`=`$ 1.5 x 10<sup>-5</sup>, and Fe $`=`$ 4.0 x 10<sup>-5</sup>. We assume that both silicate and carbon dust grains are present in the gas, with a power-law distribution in sizes (see Mathis, Rumpl, & Nordsieck 1977). 
Thus, we have modified the abundances listed above by depletion of elements from gas-phase onto dust grains, as follows (cf. Snow & Witt 1996): C, 20%; O, 15%; Si, Mg and Fe, 50%. For our model, we require that 1) the absorber lies outside the majority of the NLR emission, and 2) the column of gas is fixed to obtain the observed reddening. Based on the WFPC2 narrow-band \[O III\] $`\lambda `$5007 imaging (Schmitt & Kinney 1996), we have placed the lukewarm absorber at least 100 pc from the central source, and truncated the model at a hydrogen column density N<sub>H</sub> $`=`$ 2 x 10<sup>21</sup> cm<sup>-2</sup>. We adjusted the ionization parameter such that the model produced a reasonable match to the absorption in the observed soft X-ray continuum (see below). ### 3.2 Comparison to the X-ray data To compare our model predictions to the X-ray data, we used the 1993 ASCA (0.6–10 keV) and ROSAT PSPC (0.1–2.5 keV) data described in George et al (1998b), excluding the flare (“t3” in Fig 4 of George et al). Following standard practice, the normalizations of each of the 4 ASCA instruments and of the ROSAT dataset were allowed to vary independently. The 5-7 keV band was also excluded from the analysis due to the presence of the strong, broad Fe emission line. We assumed the continuum described above, except that the spectral index above 500 eV was allowed to vary during the analysis. In addition to the lukewarm absorber (and Galactic absorption), any highly-ionized gas was modeled by a series of edges of fixed energy. This is somewhat problematic since neither the ASCA nor ROSAT instruments have sufficient spectral resolution and/or sensitivity to resolve all the possible edges. We have therefore tested for the edges most likely to be visible in highly ionized gas (e.g., O VII and O VIII). 
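A fitted edge translates to an ionic column through $`\tau =N\sigma `$ at the threshold energy. The sketch below uses assumed round values for the threshold cross-sections (roughly 2.4 x 10<sup>-19</sup> cm<sup>2</sup> for the O VII K-edge and 1.0 x 10<sup>-19</sup> cm<sup>2</sup> for O VIII); these numbers are our own illustrative inputs, not quantities quoted in the text:

```python
# Edge optical depth tau = N_ion * sigma at threshold. The cross-sections
# below are assumed round values for the O VII (~0.74 keV) and O VIII
# (~0.87 keV) K-edges, not quantities taken from the text.
SIGMA = {"O VII": 2.4e-19, "O VIII": 1.0e-19}  # cm^2

def edge_tau(ion, column):
    """Optical depth at the edge threshold for an ionic column in cm^-2."""
    return SIGMA[ion] * column

# An O VII column of ~8 x 10^17 cm^-2 gives tau ~ 0.19; a column a hundred
# times smaller would be effectively undetectable (tau < 0.01).
print(round(edge_tau("O VII", 7.8e17), 2))
```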
An acceptable fit to the data was obtained ($`\chi ^2=1195`$ for 1176 degrees of freedom; $`\chi _\nu ^2=1.02`$) with the following parameters for the lukewarm absorber: U $`=`$ 0.13, n<sub>H</sub> $`=`$ 20 cm<sup>-3</sup>, and the distance of the cloud from the ionizing source is $``$ 120 pc. The predicted electron temperature at the ionized face of this component is $``$ 18,000K and, therefore, it is thermally stable (cf. Krolik, McKee, & Tarter 1981). The best-fitting value for the spectral index above 500 eV was $`0.58\pm 0.03`$, consistent with our initial assumptions. We find evidence of absorption by several ions, with the following column densities: C V, 6.5 x 10<sup>17</sup> cm<sup>-2</sup>; O VI, 2.9 x 10<sup>17</sup> cm<sup>-2</sup>; O VII, 7.8 x 10<sup>17</sup> cm<sup>-2</sup>; O VIII, 1.0 x 10<sup>18</sup> cm<sup>-2</sup>; and Ne IX, 4.9 x 10<sup>17</sup> cm<sup>-2</sup>. The O VII and O VIII edges translate to an effective hydrogen column density of highly-ionized gas of $``$2 x 10<sup>21</sup> cm<sup>-2</sup>. The data/model ratios from this fit are shown in Fig. 1. The slight underprediction of the absorption below 300 eV in the ROSAT band is easily rectified by a small ($``$ 20%) increase in the column density of the lukewarm absorber. The ionic column densities for the lukewarm absorber are listed in Table 1 and, as expected, the column densities of O VII and O VIII are too small to make detectable contributions to the X-ray absorption edges ($`\tau _{\mathrm{OVII}}`$ $`<`$ 0.01, as opposed to $``$ 0.19 for the highly-ionized gas). Therefore, this component does not resemble the X-ray absorbers most frequently discussed to date (e.g., Reynolds 1997; George et al. 1998a). On the other hand, the model predicts substantial columns for H I, N V, Si IV, C IV, and Mg II, which would result in strong and, for the most part, saturated, UV resonance absorption lines. ASCA observed NGC 3227 again in 1995. 
George et al. (1998b) have shown that during this epoch the observed spectrum was significantly different, consistent with a thick, highly-ionized cloud moving into and attenuating $``$85% of the line-of-sight to the X-ray source. Since, under our hypothesis, the lukewarm absorber is located $``$100 pc from the nucleus, we do not expect it to have varied between these two epochs. We have therefore checked and found that the 1995 ASCA dataset is indeed consistent with the soft X-ray attenuation from our lukewarm absorber. (Although the lack of simultaneous ROSAT PSPC data during 1995 prevents a stringent test.) ## 4 Discussion We have shown that the X-ray spectrum of NGC 3227 is consistent with attenuation by the sum of a highly-ionized absorber and a lukewarm absorber. We suggest that these are physically different components of the circumnuclear material surrounding the nucleus of NGC 3227. The characteristics of the highly-ionized absorber are similar to those previously suggested for NGC 3227 (George et al. 1998b); these are in the range of, and generally similar to, those in other Seyfert 1s (Reynolds 1997; George et al. 1998a). Such absorbers have been observed to vary on timescales $``$3 yr (and much faster in some cases), and are probably due to gas well within the NLR. The main result of this paper is that the second component, our “lukewarm absorber”, has the appropriate physical conditions to simultaneously explain the absorption seen below 0.5 keV in the X-ray band (previously modeled as completely neutral gas) and the reddening seen in the optical/UV. Agreement with the soft X-ray data is the result of the lukewarm gas containing significant opacity due to He II. Agreement with the reddening of the narrow emission lines places the component outside the NLR. Although the lukewarm absorber has the appropriate physical conditions and radial distance to redden the NLR, it must also have a sufficiently high covering fraction to be detected. 
For example, Reynolds (1997) found that 4/20 of radio-quiet active galaxies show both intrinsic X-ray absorption and reddening. Thus, the global covering factor of the dusty ionized absorber must be about 20%, within the solid angle that we see these objects (cf. Antonucci 1993). Kraemer et al. (1999) have shown that the covering factor for optically thin gas in NGC 4151, similar to our lukewarm model, can be quite large ($``$ 30%). In addition, Crenshaw et al. (1999) find that $``$ 60% of Seyfert 1 galaxies have UV absorbers with a global covering factor $``$ 50%, and an ionization parameter similar to the lukewarm absorber, but with lower columns on average (cf. Crenshaw & Kraemer 1999). Therefore, it is entirely plausible that there would be optically thin NLR gas along our line-of-sight to the nucleus in a fraction of Seyfert 1s. The lukewarm model predicts a column of Mg II of 3.3 x 10<sup>14</sup> cm<sup>-2</sup>, which would produce strong Mg II $`\lambda `$2800 absorption. It is interesting that NGC 3227 is one of the few Seyfert 1s to show evidence of Mg II $`\lambda `$2800 in absorption (Ulrich 1988). While Kriss (1998) has shown that Mg II absorption can arise in clouds characterized by small column density and low ionization parameter (N<sub>H</sub> $``$ 10<sup>19.5</sup> cm<sup>-2</sup>, U $``$ 10<sup>-2.5</sup>), our results predict that it may also arise in a large column of highly ionized NLR gas, even if a substantial fraction of Mg is depleted onto dust grains. The lukewarm model predicts average grain temperatures of 30K – 60K, for grains with radii from 0.25 $`\mu `$m – 0.005 $`\mu `$m, respectively. The reradiated IR continuum, which is produced primarily by the silicate grains (cf. Mezger, Mathis, & Panagia 1982), peaks near 60 $`\mu `$m. Assuming a covering factor of unity, this component only accounts for $``$ 1% of the observed IR flux from NGC 3227 (which is $``$ 7.98 Jy at 60 $`\mu `$m; the IRAS Point Source Catalog). 
It is likely that most of the thermal IR emission in NGC 3227 arises in the dense (n<sub>H</sub> $``$ 10<sup>3</sup> cm<sup>-3</sup>) NLR gas in which the narrow emission lines are formed, as is the case for the Seyfert 2 galaxy, Mrk 3 (Kraemer & Harrington 1986). ## 5 Summary Using the combined 1993 ROSAT/ASCA dataset for NGC 3227, and photoionization model predictions, we have demonstrated that the observed reddening may occur in dusty, photoionized gas which is in a much lower ionization state than X-ray absorbers detected (by their O VII and O VIII edges) in Seyfert 1 galaxies (cf. Reynolds 1997; George et al. 1998a) and the dusty warm absorbers that have been proposed (cf. Komossa & Fink 1997a). This component (the lukewarm absorber) is $``$ 120 pc from the central ionizing source and its physical conditions are similar to those in optically thin gas present in the NLR of the Seyfert 1 galaxy NGC 4151. If this model is correct, we predict that strong UV resonance absorption lines with high column densities from the lukewarm absorber will be observed in NGC 3227. We have confirmed earlier results regarding the presence of an X-ray absorber within NGC 3227 with O VII and O VIII optical depths similar to those determined by Reynolds (1997). This component lies closer to the central source than the lukewarm absorber, but is essentially transparent to EUV and soft X-ray radiation and, hence, does not effectively screen the lukewarm gas. We find no requirement for neutral gas in addition to the Galactic column. These results illustrate that a moderately large ($``$ 10<sup>21</sup> cm<sup>-2</sup>) column of ionized gas can produce significant soft X-ray absorption if much of the helium is in the singly ionized state. Since clouds with large He II columns may be a common feature of the NLR of Seyfert galaxies, such a component should be included in modeling the X-ray absorption. 
If such a component is present along our line-of-sight in an active galaxy, it is likely that the intrinsic neutral column has been overestimated. S.B.K. and D.M.C. acknowledge support from NASA grant NAG5-4103. T.J.T. acknowledges support from UMBC and NASA/LTSA grant NAG5-7835.
# Gröbner Basis Procedures for Testing Petri Nets KEYWORDS: Petri net, Decidability, Reachability, Reversibility, Model Checking, Gröbner bases, Rewriting. AMS 1991 CLASSIFICATION: ## 1 Introduction Petri nets are a graphical and mathematical modelling tool applicable to many systems. They may be used for specifying information processing systems that are concurrent, asynchronous, distributed, parallel, non-deterministic, and/or stochastic. Graphically, Petri nets are useful for illustrating and describing systems, and tokens can simulate the dynamic and concurrent activities. Mathematically, it is possible to set up models such as state equations and algebraic equations which govern the behaviour of systems. Petri nets are understood by both practitioners and theoreticians and so provide a powerful communication link between them. For example, engineers can show mathematicians how to make practical and realistic models, and mathematicians may be able to produce theories to make the systems more methodical or efficient, which is in fact demonstrated by this collaborative paper. The area of computer algebra called Gröbner basis theory includes the rewriting theory widely used in computer science and provides methods for handling the rule systems defining various types of algebraic structure. It has been proved that it is not always possible to deduce all consequences of a system of rules – when it is possible, the levels of complexity involved quickly require the use of computers. In the commutative case, computational Gröbner basis methods have been successfully applied in theorem proving, robotics, integer programming, coding theory, signal processing, enzyme kinetics, experimental design, differential equations, and many other areas. All major computer algebra packages now include implementations of these procedures, and pocket calculator implementations will soon be available. A collection of recent papers on Gröbner basis research is . 
In this paper we show how Gröbner basis procedures can be applied to reversible Petri nets to solve the reachability problem. This provides a practical test which can be useful in the design and analysis of Petri nets. In particular the examples show a practical application of the Gröbner basis methods to Petri nets modelling navigation systems. Further details of these mechatronic navigation systems can be found in . Related algebraic research, and preliminaries to this paper may be found in . ## 2 Background to Gröbner Bases We give a brief summary of the main results in commutative Gröbner basis theory that will be used in this paper. For a fuller introduction to the subject see . Let $`X`$ be a set. Then the elements of $`X^\mathrm{\Delta }`$ are all power products of elements of $`X`$, including an identity $`1`$, with multiplication defined in the usual way. The commutativity condition is summarised by $`xy=yx`$ for all $`x,yX`$. Let $`K`$ be a field (the field of rational numbers, $``$ suffices for our work). Then $`K[X^\mathrm{\Delta }]`$ is the ring of commutative polynomials $$f=k_1m_1+\mathrm{}+k_tm_t$$ where $`k_1,\mathrm{},k_tK`$ and $`m_1,\mathrm{},m_tX^\mathrm{\Delta }`$ with the operations of polynomial addition and polynomial multiplication defined in the usual way. Consider a set of polynomials $`PK[X^\mathrm{\Delta }]`$. We say that two polynomials $`f`$ and $`g`$ of $`K[X^\mathrm{\Delta }]`$ are equivalent modulo $`P`$ and write $`f=_Pg`$ if their difference can be expressed in terms of $`P`$, i.e. $$fg=u_1p_1+\mathrm{}+u_np_n$$ for some $`p_1,\mathrm{},p_nP,u_1,\mathrm{},u_nK[X^\mathrm{\Delta }]`$. In 1965 Bruno Buchberger invented the concept of a Gröbner basis . Techniques of Gröbner basis theory enable us to decide whether or not $`f=_Pg`$ for given $`P`$, $`f`$, $`g`$ in $`K[X^\mathrm{\Delta }]`$ as above. Computation begins by specifying an ordering $`>`$ on the power products (this must be a well-ordering, compatible with multiplication). 
This enables us to define reduction modulo a set of polynomials $`P`$ – multiples of polynomials in $`P`$ are subtracted from a given polynomial $`f`$ in order to obtain successively smaller polynomials – the reduction is denoted $`_P`$. The reflexive, symmetric, transitive closure of $`_P`$ coincides with the congruence $`=_P`$. If $`P`$ is a Gröbner basis then $`_P`$ is confluent, meaning that there is a unique irreducible element in each congruence class, obtainable from any other element by repeated reduction modulo $`P`$. If $`P`$ is not a Gröbner basis then it is always possible to use Buchberger’s algorithm to obtain a set of polynomials $`Q`$ which is a Gröbner basis such that $`=_P`$ coincides with $`=_Q`$. Thus, given a set of polynomials $`PK[X^\mathrm{\Delta }]`$, the problem of deciding whether $`f`$ is equivalent to $`g`$ modulo $`P`$ for any $`f,g`$ in $`K[X^\mathrm{\Delta }]`$ can always be determined by calculating a Gröbner basis $`Q`$. The polynomials are equivalent if and only if their difference $`fg`$ reduces modulo $`Q`$ to zero. We will not explain these calculations in any greater detail, but refer the reader to texts on Gröbner bases, such as . In the commutative case it is always possible to determine a Gröbner basis, but computers are usually required for all but the most basic problems. In our examples we use $`\mathrm{𝖬𝖠𝖯𝖫𝖤}`$ and $`\mathrm{𝖦𝖠𝖯𝟥}`$, with some Gröbner basis procedures implemented by the second author . ## 3 Petri Nets A Petri net has two types of vertices: *places* (represented by circles) and *transitions* (represented by double lines). Edges exist only between places and transitions and are labelled with their weights. In modelling, places represent conditions and transitions represent events. A transition has input and output places, which represent preconditions and postconditions (respectively) of the event. A good introduction to the ideas of Petri nets is . 
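Before setting up the Petri net machinery, the equivalence test of Section 2 can be sketched in a general-purpose computer algebra system. The snippet below uses SymPy (our choice for illustration; the authors work in $`\mathrm{𝖬𝖠𝖯𝖫𝖤}`$ and $`\mathrm{𝖦𝖠𝖯𝟥}`$): two polynomials are equivalent modulo $`P`$ exactly when their difference lies in the ideal generated by $`P`$, which a Gröbner basis decides.

```python
from sympy import symbols, groebner

x, y, z = symbols('x y z')
P = [x*y - z]                            # the rule set P
G = groebner(P, x, y, z, order='grlex')  # Buchberger's algorithm

def equivalent(f, g):
    """f =_P g  iff  f - g lies in the ideal generated by P."""
    return G.contains(f - g)

# x*y**2 - y*z = y*(x*y - z), so these two are equivalent modulo P;
# x*y**2 and z**2 are not.
print(equivalent(x*y**2, y*z), equivalent(x*y**2, z**2))
```

In the Petri net applications below, `P` is the set of transition polynomials and `f`, `g` are the monomials of two markings.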
###### Definition 3.1 (Petri Net) A Petri net (without specific initial marking) is a quadruple $`\underset{¯}{N}=(X,T,,w)`$ where: $`X`$ is a finite set (of places), $`T`$ is a finite set (of transitions), $`(X\times T)(T\times X)`$ is a set of edges (flow relation) and $`w:`$ is a weight function. The state of a system is represented by the assignment of “tokens” to places in the net. ###### Definition 3.2 (Marking) A marking is a function $`M:X\{0\}`$. Dynamic behaviour is represented by changes in the state of the Petri net, which is formalised by the concept of firing. ###### Definition 3.3 (Firing Rule) 1. A transition $`t`$ is enabled if each input place $`x`$ of $`t`$ is marked with at least $`w(x,t)`$ tokens. 2. An enabled transition may or may not fire – depending on whether or not the relevant event occurs. 3. Firing of an enabled transition $`t`$ removes $`w(x,t)`$ tokens from each input place $`x`$ of $`t`$ and adds $`w(t,y)`$ tokens to each output place $`y`$ of $`t`$. Despite their apparent simplicity, Petri nets can be used to model complex situations – for some examples see . One of the main problems in Petri net theory is reachability – the problem corresponds to deciding which situations (modelled by the net) are possible, given some sequence of events. ###### Definition 3.4 (Reachability) A marking $`M_1`$ is said to be reachable from a marking $`M_2`$ in a net $`\underset{¯}{N}`$, if there is a sequence of firings that transforms $`M_2`$ to $`M_1`$. Often a Petri net comes with a specified initial marking $`M_0`$. The reachability problem for a Petri net $`\underset{¯}{N}`$ with initial marking $`M_0`$ is: Given a marking $`M`$ of $`\underset{¯}{N}`$, is $`M`$ reachable in $`\underset{¯}{N}`$? For the type of Petri nets defined so far, reachability is decidable in exponential time and space . Reversibility is a property of Petri nets corresponding to the potential for the device being modelled to be reset. 
For our applications it is essential that we can reset; therefore this property is vital. ###### Definition 3.5 (Reversibility) A Petri net $`\underset{¯}{N}`$ is called reversible if, whenever a marking $`M^{}`$ is reachable from a marking $`M`$ in $`\underset{¯}{N}`$, $`M`$ is reachable from $`M^{}`$. Different definitions of reversibility exist. The definition we use is chosen for engineering rather than mathematical reasons as in . The paper by Caprotti, Ferscha and Hong contains a result apparently similar to ours, but they use a different definition of reversibility, which is much more restrictive – perhaps this is appropriate for different applications. In order to apply Gröbner basis techniques we use monomials to represent the markings (there is a one-to-one correspondence between monomials and markings), and so associate a transition with the difference between two monomials (input and output). ###### Definition 3.6 (Polynomial Associated with a Marking) Let $`\underset{¯}{N}=(X,T,,w)`$ be a Petri net. To every marking $`M`$ we will associate a polynomial $$pol(M):=\underset{X}{}x^{M(x)},$$ that is the formal product of elements of $`X`$ raised to the power $`M(x)`$ (the number of tokens held at the place $`x`$). ###### Definition 3.7 (Polynomial Associated with a Transition) Each transition $`t`$ has an associated polynomial $$pol(t):=\underset{X}{}x^{w(x,t)}\underset{X}{}y^{w(t,y)},$$ that is the input required for the transition to be enabled minus the output resulting from a firing. We often write $`pol(t)=lr`$, to distinguish the two terms. To represent the dynamic structure we must consider how the transition polynomials are related to polynomials of markings which enable them and how firings of transitions affect the polynomials of the markings. Suppose a marking $`M_i`$ enables a transition $`t_i`$. 
By the definitions it is clear that this corresponds to $`pol(M_i)`$ being equal to $`u_il_i`$ where $`pol(t_i)=l_ir_i`$ and $`u_i`$ is a power product in $`X^\mathrm{\Delta }`$. It then follows that if $`t_i`$ fires, the resulting marking $`M_{i+1}`$ will have polynomial $`pol(M_{i+1})=pol(M_i)u_ipol(t_i)=u_ir_i`$. ###### Example 3.8 (Polynomials and the Firing Rule) *The diagrams above show three different states of a transition $`t_3`$ of a Petri net Example 3.13. The polynomial associated with the transition is $`pol(t_3)=x_3x_6x_4`$. The first marking $`M_1`$ does not enable $`t_3`$; this corresponds to the fact that $`pol(M_1)=(x_6)^2`$ is not a multiple of $`x_3x_6`$. The second marking $`M_2`$ does enable $`t_3`$, and $`pol(M_2)=x_3(x_6)^2`$. The marking resulting from the firing of $`t_3`$ after it has been enabled by $`M_2`$ is $`M_3`$. In terms of polynomials the firing is represented by $`pol(M_3)=pol(M_2)x_6pol(t_3)=x_4x_6`$. A firing sequence is denoted by $`M_0\stackrel{t_1}{}M_1\stackrel{t_2}{}\mathrm{}\stackrel{t_n}{}M_n`$ where the $`M_i`$ are markings and the $`t_i`$ are transitions (events) transforming $`M_{i1}`$ into $`M_i`$. In terms of polynomials the above firing sequence gives the information $`pol(M_n)=pol(M_0)u_1pol(t_1)u_2pol(t_2)\mathrm{}u_npol(t_n)`$ for some $`u_1,u_2,\mathrm{},u_nX^\mathrm{\Delta }`$.* ###### Theorem 3.9 (Reachability and Equivalence of Polynomials) Let $`\underset{¯}{N}`$ be a reversible Petri net with initial marking $`M_0`$. Define $`P:=\{pol(t):tT\}`$. Then a marking $`M`$ is reachable in $`\underset{¯}{N}`$ if and only if $`pol(M_0)=_Ppol(M)`$. Proof First suppose that $`M`$ is reachable. Then there is a firing sequence $`M_0\stackrel{t_1}{}M_1\stackrel{t_2}{}\mathrm{}\stackrel{t_{n1}}{}M_{n1}\stackrel{t_n}{}M`$. Therefore, as above, there exist $`u_1,\mathrm{},u_nX^\mathrm{\Delta }`$ such that $`pol(M_0)pol(M)=u_1pol(t_1)+\mathrm{}+u_npol(t_n)`$. Hence $`pol(M_0)=_Ppol(M)`$. 
For the converse, suppose $`pol(M_0)=_Ppol(M)`$. Then $$pol(M_0)=pol(M)\pm u_1pol(t_1)\pm \mathrm{}\pm u_mpol(t_m).$$ The proof is by induction on $`m`$. For the base step put $`m=0`$ then $`pol(M_0)=pol(M)`$. The correspondence between markings and their associated polynomials is one-to-one, so here $`M_0=M`$ and $`M`$ is clearly reachable. For the induction step we assume that a marking $`M^{}`$ is reachable from $`M_0`$ if $$pol(M_0)=pol(M^{})\pm u_1pol(t_1)\pm \mathrm{}\pm u_{m1}pol(t_{m1}).$$ for a fixed $`m`$. Now suppose $`M`$ is a marking such that $$pol(M_0)=pol(M)\pm u_1pol(t_1)\pm \mathrm{}\pm u_mpol(t_m).$$ Then for some $`i\{1,\mathrm{},m\}`$ either $`pol(M_0)=u_il_i`$ or $`pol(M_0)=u_ir_i`$ where $`pol(t_i)=l_ir_i`$. In the first case $`pol(M_0)=u_il_i`$. Observe that $`M_0`$ enables $`t_i`$ and define a marking $`M^{}`$ by $`M_0\stackrel{t_i}{}M^{}`$. Then $$pol(M^{})=pol(M)\pm u_1pol(t_1)\pm \mathrm{}\pm u_{i1}pol(t_{i1})\pm u_{i+1}pol(t_{i+1})\pm \mathrm{}\pm u_mpol(t_m)$$ so, by assumption, $`M`$ is reachable from $`M^{}`$ and so $`M`$ is reachable from $`M_0`$. In the second case $`pol(M_0)=u_ir_i`$. There is a marking $`M^{}`$ such that $`pol(M^{})=u_ir_i`$ and $$pol(M^{})=pol(M)\pm u_1pol(t_1)\pm \mathrm{}\pm u_{i1}pol(t_{i1})\pm u_{i+1}pol(t_{i+1})\pm \mathrm{}\pm u_mpol(t_m).$$ Now, $`M`$ is reachable from $`M^{}`$ by assumption and $`M_0`$ is reachable from $`M^{}`$ by a firing of $`t_i`$. By reversibility, therefore, $`M^{}`$ is reachable from $`M_0`$ and hence $`M`$ is reachable from $`M_0`$. $`\mathrm{}`$ ###### Corollary 3.10 (Gröbner Bases Determine Reachability) Reachability in a reversible Petri net can be determined using a Gröbner basis. Proof Let $`K`$ be a field. First observe that $`PK[X^\mathrm{\Delta }]`$. Let $`Q`$ be a Gröbner basis for $`P`$. Then $`pol(M)=_Ppol(M_0)`$ if and only if there exists $`pK[X^\mathrm{\Delta }]`$ such that $`pol(M)`$ and $`pol(M_0)`$ reduce to $`p`$ by $`_Q`$. 
$`\mathrm{}`$ ###### Remark 3.11 (Catalogue of Reachable Markings) *Recall that Gröbner basis techniques use an ordering on the power products. There is a one-to-one correspondence between power products and markings. We can begin to catalogue the markings in increasing order. Given a Gröbner basis for the polynomials of the transitions of a Petri net it can be determined whether each marking is reachable: if the power product reduces to the same irreducible power product as the initial marking then it is reachable. In this way the Gröbner basis can be used to build up a list of reachable markings.* ###### Remark 3.12 (Testing for Reversibility in Petri Net Design) *The reversibility of a Petri net can be interpreted as the ability to reset the application it models. Whilst the reachability of a place, given an initial marking, can be determined by standard means, reversibility cannot be established directly.* *Calculating a Gröbner basis for the Petri net makes the determination of reachable markings much more obvious, and unwanted markings can be immediately detected. There are two reasons why unwanted markings may occur. In the first case there is a basic error in the net which allows some firing sequence to reach a marking which should be avoided; the Gröbner basis is effective in showing up these markings. The second type of problem occurs when a marking supposed to be unreachable is found to be reachable, the implication here being that the net is not truly reversible. As reversibility is a desirable property, the net can then be modified and retested.* *In practical terms Gröbner bases have been shown by the authors to be useful in Petri net design – repeated testing by computing Gröbner bases shows up unintended effects or non-reversibility. 
Our examples are Petri nets designed by the first author to model software interfaces to hardware components of mobile robot navigation systems, and their development was helped in this way.* ###### Example 3.13 (Software Interface for Motors) *This Petri net represents the software interface between a user and the set of motors used to drive a mobile robot.* *Here, once the motors have been initialised, the user may input the required speed and direction for each motor. This information is then interpreted and written to the relevant port, if there is also a token available in the “ready” place (3), to enable the “interpret speed and direction” transition $`t_3`$.* *The places are labelled $`x_1,\mathrm{},x_{11}`$. There are eight transitions, and their polynomials are as follows:* | $`pol(t_1)=`$ | $`x_1x_2x_3`$ | $`pol(t_2)=`$ | $`x_2x_7`$ | $`pol(t_3)=`$ | $`x_3x_6x_4`$ | $`pol(t_4)=`$ | $`x_4x_5`$ | | --- | --- | --- | --- | --- | --- | --- | --- | | $`pol(t_5)=`$ | $`x_7x_6`$ | $`pol(t_6)=`$ | $`x_5x_3x_8`$ | $`pol(t_7)=`$ | $`x_3x_8x_1`$ | $`pol(t_8)=`$ | $`x_8x_7`$ | The Gröbner basis for this set of polynomials – with respect to a degree-lexicographic ordering – is $$\{x_4x_1,x_5x_1,x_6x_2,x_7x_2,x_8x_2,x_2x_3x_1\}.$$ The catalogue of markings reachable from an initial marking $`x_1`$ is quickly calculated to be: $$\{x_1,x_4,x_5,x_2x_3,x_3x_6,x_3x_7,x_3x_8\}.$$ This catalogue can be examined by the Petri net designer who interprets the different states. When unexpected states appear in the catalogue it indicates an error, which generally signifies that the net is not reversible. *For Petri nets such as this to execute efficiently, it is essential that the user can confirm both the reachability and the reversibility of the net. 
For instance, should the place “done” (5) prove to be unreachable from an initial marking where the place “start” (1) held a token, this would show that no data would be written to the port in transition “write to port” ($`t_4`$), thus making the motors uncontrollable. If the net here were non-reversible, it would indicate that the motors could not be disabled, which in this situation is undesirable. Once the Petri net has been tested for such bugs, the user need only concern themselves with the simple functions executed within individual transitions, greatly decreasing the likelihood of a serious, or perhaps dangerous, failure of the robot.* ## 4 Coloured Petri Nets A coloured Petri net circulates tokens of more than one type. The transitions in the net are affected differently by different combinations of colours of tokens. An example of this is where tokens represent data signals. Incomplete or corrupt signals should be dealt with differently from complete signals; these two types of data would be represented by different colours of tokens (“pass” and “fail” in Example 4.3). Recall that if $`C`$ is a set (of colours) then $`C^\mathrm{\Delta }`$ is the set of all power products of elements of $`C`$. Essentially an element of $`C^\mathrm{\Delta }`$ assigns a non-negative integer to each element of $`C`$. The definition of a coloured Petri net that we give uses this kind of notation, but is equivalent to that given by Murata in . One element $`m`$ of $`C^\mathrm{\Delta }`$ is said to be a multiple of another element $`l`$ if $`m=ul`$ for some $`uC^\mathrm{\Delta }`$. ###### Definition 4.1 (Coloured Petri Net) A coloured Petri net is a quintuple $`\underset{¯}{N}_C=(X,T,C,,w)`$, where $`X`$ is a set of places, $`T`$ is a set of transitions, $`C`$ is a set of colours, $`(X\times T)(T\times X)`$ is the flow relation and $`w:C^\mathrm{\Delta }`$. A marking in $`\underset{¯}{N}_C`$ is a function $`M:XC^\mathrm{\Delta }`$. The firing rule is as follows: 1. 
A transition $`t`$ is enabled if each input place $`x`$ of $`t`$ is marked with a multiple of $`w(x,t)`$. 2. An enabled transition may or may not fire. 3. A firing of an enabled transition $`t`$ deletes the power product $`w(x,t)`$ from the marking at each input place $`x`$, and appends the marking at each output place $`y`$ with the power product $`w(t,y)`$. A coloured Petri net can in fact be considered as a structurally folded version of an ordinary Petri net if the number of colours is finite. Each place $`x`$ is unfolded into a set of places, one for each colour of token which $`x`$ may hold, and each transition $`t`$ is unfolded into a number of transitions, one for each way that $`t`$ may fire. It is immediate that the techniques discussed in the previous section may be applied to coloured Petri nets. In fact we can pass directly from the coloured Petri net to commutative polynomials in $`K[(X\times C)^\mathrm{\Delta }]`$, where $`K`$ is a field. Elements of $`(X\times C)^\mathrm{\Delta }`$ are written $`(x_1,c_1)\mathrm{}(x_n,c_n)`$, where $`x_1,\mathrm{},x_nX`$, and $`c_1,\mathrm{},c_nC`$. We define $`(x_i,c_i)(x_j,c_j)=(x_i,c_ic_j)`$ when $`x_i=x_j`$. ###### Theorem 4.2 (Gröbner Bases for Coloured Petri Nets) Let $`\underset{¯}{N}_C`$ be a coloured Petri net. If $`M`$ is a marking in $`\underset{¯}{N}_C`$, then define the polynomial associated with the coloured marking to be $`pol(M):=_X(x,M(x))`$. Similarly if $`t`$ is a transition in $`\underset{¯}{N}_C`$, then define the polynomial associated with the coloured transition to be $`pol(t):=_X(x,w(x,t))_X(y,w(t,y))`$. From these definitions we observe that a transition $`t`$ in a coloured Petri net has an associated polynomial of the form $`pol(t)=lr`$ where $`l,r(X\times C)^\mathrm{\Delta }`$. The transition $`t`$ is enabled by a marking $`M`$ if $`pol(M)=ul`$, for some $`u(X\times C)^\mathrm{\Delta }`$. If $`t`$ fires then the new marking has associated polynomial $`ur`$. 
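The coloured firing rule can be sketched directly, with a marking held as a multiset of (place, colour) pairs. The transition used below is hypothetical, in the spirit of the “return data” step of Example 4.3; its places and colours are illustrative choices of ours, not part of the net defined there.

```python
from collections import Counter

def enabled(marking, inp):
    """A transition is enabled if the marking holds at least the
    required input power product w(x, t) at every input place."""
    return all(marking[k] >= n for k, n in inp.items())

def fire(marking, inp, out):
    """Delete w(x, t) from each input place and append w(t, y) at each
    output place; returns the new marking, or None if not enabled."""
    if not enabled(marking, inp):
        return None
    return marking - Counter(inp) + Counter(out)

# Hypothetical transition: consume a 'pass' token at place 2 together with
# the 'pass' token at place 18, return the place-18 token and emit a 'pass'
# token at place 3 -- the output colour follows the input colour.
inp = Counter({(2, 'pass'): 1, (18, 'pass'): 1})
out = Counter({(3, 'pass'): 1, (18, 'pass'): 1})

m = Counter({(2, 'pass'): 1, (18, 'pass'): 1, (18, 'fail'): 1})
print(fire(m, inp, out))  # the 'pass' token moves from place 2 to place 3
```

Because the input pattern names the colour it consumes, the output colour tracks the input colour, which is how the “pass”/“fail” routing of Example 4.3 below is arranged.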
It follows that if we define $`P:=\{pol(t):t\in T\}`$ then a marking $`M`$ is reachable if and only if $`pol(M)=_Ppol(M_0)`$. Therefore if $`Q`$ is a Gröbner basis for $`P`$ it is decidable whether or not $`M`$ is reachable in $`\underset{¯}{N}_C`$. The results (and proofs) are naturally very similar to the results for standard Petri nets. The value is in the application – where it is more efficient to work with coloured nets it is appropriate to associate polynomials to these models directly. ###### Example 4.3 (Software Interface for Compass) *The following Petri net shows the software interface to an external compass, where the compass provides data in the form of an ASCII string.* *The states here are numbered, but two types of token, “pass” ($`x`$) and “fail” ($`y`$), circulate in the net. This Petri net is initialised with a single “pass” ($`x`$) token at the “start” place ($`1`$) together with a “pass” ($`x`$) and a “fail” ($`y`$) token in each of the places “input” ($`18`$) and “continue” ($`19`$). The additional tokens at ($`18`$) and ($`19`$) provide the colouring essential for rigorous testing of this Petri net. For instance, when the “return data” $`t_3`$ or $`t_{20}`$ transition is fired, the colour of the token output to place “raw data ready” ($`3`$) depends solely on the colour of the token from place “input” ($`18`$). The transitions “read in” $`t_4`$ or $`t_{21}`$, “calculate checksum” $`t_5`$ or $`t_{22}`$ and “test” $`t_6`$ or $`t_{23}`$ will output a token matching the input token, having no effect on the colouring, but the transition “find bearing” $`t_7`$ will only be enabled by a “pass” token, which represents a received ASCII string with a correct checksum, as determined in the “test” $`t_6`$ or $`t_{23}`$ transition.
A “fail” token would instead enable the transition “data request” $`t_{16}`$, which will provide a value using dead reckoning in place of the corrupted data.* *Colouring of this net is helpful, as it ensures that only complete uncorrupt data is used. The Petri net of this example was constructed by repeated testing using Gröbner basis methods. We use $`x_i`$ to denote a “pass” token at place $`i`$, and $`y_i`$ to denote a “fail” token at place $`i`$. The initial marking is therefore associated with the monomial $`x_1x_{18}x_{19}y_{18}y_{19}`$. The set $`P`$ of polynomials associated with the transitions is as follows:* $`x_1-x_2x_4,x_5-x_{12},x_2x_{18}-x_3x_{18},y_2y_{18}-y_3y_{18},x_3x_{13}-x_6,y_3x_{13}-y_6,x_6-x_7,y_6-y_7,`$ $`x_7-x_8,y_7-y_8,x_8-x_{10},x_{12}-x_{13},x_{11}-x_2x_{14},x_{14}x_{19}-x_{15}x_{19},x_{14}y_{19}-y_{15}y_{19},y_{15}-x_{17},`$ $`x_3x_{17}-x_{16},y_3x_{17}-x_{16},x_2x_{17}-x_{16},x_{15}-x_{12},x_4-x_5,y_8-x_9,x_9-x_{11},x_{10}-x_{11},x_{16}-x_1.`$ *Using $`\mathrm{𝖬𝖠𝖯𝖫𝖤}`$, a Gröbner basis $`Q`$ for $`P`$ with respect to the order $`tdeg`$ has 47 rules.* *Given the initial marking $`x_1x_{18}x_{19}y_{18}y_{19}`$, there are 11 reachable markings having five tokens and 32 reachable markings having six tokens. Examining the catalogue of reachable states and relating them to the situations they represent will confirm that the net will behave as the user would expect.* ## 5 Further Considerations ### 5.1 Boundedness Another interesting property is boundedness – the maximum number of tokens that may exist at a particular place, or the maximum number of tokens that can exist in the entire net – given an initial marking. It is easy to see how the catalogue may be used to check either type of boundedness, but more interesting to observe that certain information may be derived directly from the ($`tdeg`$) Gröbner basis.
If the Gröbner basis contains only polynomials $`l-r`$ (assume $`l>r`$) such that $`l`$ and $`r`$ are power products of the same total degree then all reachable markings will have the same number of tokens. The smallest possible number of tokens is the degree of the reduced form of the polynomial associated with the initial marking. Regarding the polynomials $`l-r`$ as reduction rules $`l\to r`$ we can sometimes determine the largest possible number of tokens by examining the degree-reducing rules to find which multiples of the reductum can be reduced to the same form as the initial marking (it was possible to do this with the 47-rule Gröbner basis obtained for our last example). ### 5.2 Use and Efficiency Similarly to , we point out that although in general Gröbner basis computation can be lengthy, the polynomials arising from Petri nets are not usually complex, involving only two terms with unitary coefficients. There is, in any case, no problem with ordinary or coloured Petri nets, as commutative Gröbner bases can always be found using a computer algebra package (e.g. $`\mathrm{𝖬𝖠𝖯𝖫𝖤}`$). Although it is possible to make use of existing implementations of Buchberger’s Algorithm it would be practical to include the Gröbner basis procedures as part of the software in our mechatronic navigation systems. One aim of the research in is to provide an easier way of safely programming a mobile robot. By using a Petri net to model the navigation system the C code controlling the robot is split into small pieces, corresponding to the transitions in the net. A transition can be programmed in a few lines, and code for a selection of alternative transitions could be provided in advance. The structure of the net corresponds to the structure of the executable program, and thus by replacing individual transitions in the net the whole program for controlling the mobile robot can be rewritten and retested with minimum difficulty.
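Any modern computer algebra system can stand in for MAPLE here. A sketch of the reachability test with SymPy, on a small invented two-transition net (a token moving from place 1 to place 3); equality of normal forms modulo the Gröbner basis is the test described above:

```python
from sympy import symbols, groebner

x1, x2, x3 = symbols('x1 x2 x3')
P = [x1 - x2, x2 - x3]  # transitions t1: x1 -> x2, t2: x2 -> x3
G = groebner(P, x1, x2, x3, order='grevlex')

def normal_form(p):
    # remainder of p on division by the Groebner basis
    return G.reduce(p)[1]

M0, M = x1, x3  # initial marking and candidate marking
print(normal_form(M0) == normal_form(M))  # True: pol(M) =_P pol(M0)
```

The 25 two-term polynomials of Example 4.3 can be fed to the same routine unchanged, one symbol per (place, colour) pair.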
The Gröbner basis tests would form an important part of the software, particularly in terms of safety. One example to which this work could be applied is an autonomous excavator. By using the Petri net representation, modifications to the control of the excavator could be made in the field, without the requirement for on-site programming expertise. The Gröbner basis testing would provide a catalogue of reachable markings. If any undesirable (dangerous) states of the Petri net were shown to be reachable, this problem could be rectified by further alteration to the net until the model was shown to be satisfactory. ### 5.3 Streamed Petri Nets We are interested in Petri nets that can model systems involving streams of data. Places will hold ordered lists of coloured tokens rather than unordered sets of tokens. This introduces a degree of noncommutativity into the Petri net. The Gröbner basis situation is more interesting here than with the ordinary Petri nets. Undecidability of the word problem indicates the existence of streamed Petri nets for which it is not possible to determine whether or not a state is reachable. The streamed models we have worked with store the streams of data as stacks or allow random access to any substream of data within a given stream. The problem with this is that the type of streamed Petri net suitable for our more advanced models is one whose transitions read data streams from the left and build them up on the right. This is a net to which we cannot yet apply Gröbner basis theory, but hope to investigate in future work. ### 5.4 Enhanced Petri Nets Inhibitor arcs are the simplest extension to a basic Petri net. The inhibitor arc is represented by a line with a small circle at the end, equivalent to the $`\mathrm{𝙽𝙾𝚃}`$ in switching theory, and is used to prevent a transition from firing.
If a transition $`t`$ has an inhibitor arc from a place $`p`$ then $`t`$ is enabled only when there are tokens in all of its ordinary input places and no tokens in the place $`p`$. The inhibitor arcs provide an alternative method of forcing a decision between two enabled transitions. These decisions can also be made randomly, or with the use of colours, but in this specific case, the inhibitor arc can give one transition priority over the other by preventing the second transition from firing. This method of decision making could be useful in any system where one function should be given priority over another. For instance, if a Petri net driving a mobile robot detected an obstruction, it would be important that it should stop, or alter the speed of the motors, before attempting to read any sensors. It is interesting to consider how the Gröbner basis methods could be extended to cover variations of the Petri net theory, especially when the results of the extensions are motivated by the requirement for testing modifications to navigation systems. ### 5.5 Linked Petri Nets The motivation for our work has been the application to control systems of mobile robots, using the TRAMP philosophy (Toolkit for Rapid Autonomous Mobile Prototyping). It allows the analysis of different control components of a single mobile robot, and it would be desirable for the Petri nets to be logically linked to provide a unified model of the control of the device. The analysis of the nets by Gröbner bases should then be extended to provide an analysis of the model as a whole. The problem of the subdivision of a large net into suitable components (objects) and the extension of local analyses of such components to global checks on reachability, safety, etc., are examples of the well-known local-to-global problem.
# Constraints on Cluster Formation from Old Globular Cluster Systems ## 1. Introduction Until quite recently, it was commonly assumed that the old globular clusters in galaxy halos were the remnants of a unique sort of star formation that occurred only in a cosmological context. The discovery of young, massive, “super” star clusters in local galaxy mergers and starbursts has clearly done much to change this perception; but at least as important is the parallel recognition that star formation in the Milky Way itself proceeds—under much less extreme conditions—largely in a clustered mode. Observations of entire starbursts (Meurer et al. 1995) and individual Galactic molecular clouds (e.g., Lada 1992), as well as a more general comparison of the mass function of molecular cloud clumps and the stellar IMF (Patel & Pudritz 1994), all argue convincingly that (by mass) most new stars are born in groups rather than in isolation. The production of a true stellar cluster—one that remains bound even after dispersing the gas from which it formed—is undoubtedly a rare event, but it is an exceedingly regular one. Seen in this light, the globular cluster systems (GCSs) found in most galaxies can be used to good effect as probes not only of galaxy formation but also of an important element of the generic star-formation process at any epoch. This is arguably so even in cases where newly formed clusters may not be “massive” according to the criteria of this workshop (the main issue being simply the formation of a self-gravitating stellar system), and even though GCSs have been subjected to $`10^{10}`$ yr of dynamical evolution in the tidal fields of their parent galaxies (see O. 
Gerhard’s contribution to these proceedings, and note that theoretical calculations geared specifically to conditions both in the Milky Way \[Gnedin & Ostriker 1997\] and in the giant elliptical M87 \[Murali & Weinberg 1997\] suggest that GCS properties are most affected by evolution inside roughly a stellar effective radius in each case). ## 2. The Efficiency of Cluster Formation At some point during the collapse and fragmentation of a cluster-sized cloud of gas, the massive stars which it has formed will expel any remaining gas by the combined action of their stellar winds, photoionization, and supernova explosions. If the star formation efficiency of the cloud, $`\eta \equiv M_{\mathrm{stars}}/(M_{\mathrm{stars}}+M_{\mathrm{gas}})`$, is below a critical threshold just when the gas is lost, then the blow-out removes sufficient energy that the stellar group left behind is unbound and disperses into the field. The precise value of this threshold depends on details of the internal density and velocity structure of the initial gas cloud, and on the timescale over which the massive stars dispel the gas; but various estimates place it in the range $`\eta _{\mathrm{crit}}\approx 0.2`$–0.5 (e.g., Hills 1980; Verschueren 1990; Goodwin 1997, and these proceedings). There is no theory which can predict whether any given piece of gas can ultimately achieve $`\eta >\eta _{\mathrm{crit}}`$, but it is straightforward to evaluate empirically the frequency—or efficiency—with which this occurs. Traditionally, this has been discussed for GCSs in terms of specific frequency, defined by Harris & van den Bergh (1981) as the normalized ratio of a galaxy’s total GCS population to its $`V`$-band luminosity: $`S_N\equiv 𝒩_{\mathrm{tot}}\times 10^{0.4(M_V+15)}`$.
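As a quick numerical illustration of this definition, with rough, assumed Milky Way numbers (a total population of about 160 globulars and a galaxy absolute magnitude of about −20.9; both values are approximate and quoted only for illustration):

```python
# Specific frequency S_N = N_tot * 10**(0.4 * (M_V + 15))
N_tot = 160    # approximate Milky Way globular cluster population
M_V = -20.9    # approximate absolute V magnitude of the Galaxy
S_N = N_tot * 10 ** (0.4 * (M_V + 15))
print(f"S_N = {S_N:.2f}")  # ~ 0.7
```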
As is well known (see, e.g., Elmegreen 2000 for a recent review), there are substantial and systematic variations in this ratio from one galaxy to another: Global specific frequencies decrease with increasing galaxy luminosity for early-type dwarfs, then increase gradually with $`L_{V,\mathrm{gal}}`$ in normal giant ellipticals, and finally increase rapidly with galaxy luminosity among the central ellipticals (BCGs) in groups and clusters of galaxies. In addition, the more extended spatial distribution of GCSs relative to halo stars in some (but not all) bright ellipticals leads to local specific frequencies (ratios of GCS and field-star densities) that increase with radius inside the galaxies (see McLaughlin 1999). However, McLaughlin (1999) shows (following related work by Blakeslee et al. 1997 and Harris et al. 1998) that these trends in $`S_N`$ do not reflect any such behavior in the ability to form globulars in protogalaxies. To see this, it is best to work in terms of an efficiency per unit mass, $`ϵ_{\mathrm{cl}}\equiv M_{\mathrm{gcs}}^{\mathrm{init}}/M_{\mathrm{gas}}^{\mathrm{init}}`$, where $`M_{\mathrm{gas}}^{\mathrm{init}}`$ is the total gas supply that was available to form stars in a protogalaxy (whether in a monolithic collapse or a slower assembly of many distinct, subgalactic clumps is unimportant) and $`M_{\mathrm{gcs}}^{\mathrm{init}}`$ is the total mass of all globulars formed in that gas. As McLaughlin (1999) argues, the integrated mass of an entire GCS should not be much affected by dynamical evolution, and it is most appropriate to include any gas presently associated with galaxies, as well as their stellar masses, in estimating their initial gas contents. The observable ratio $`M_{\mathrm{gcs}}/(M_{\mathrm{gas}}+M_{\mathrm{stars}})`$ should therefore improve on $`S_N\propto M_{\mathrm{gcs}}/M_{\mathrm{stars}}`$ as an estimator of $`ϵ_{\mathrm{cl}}`$. Figure 1 shows the total GCS populations vs.
galaxy luminosity in 97 early-type galaxies and the metal-poor spheroid of the Milky Way and compares the expectations for a constant $`ϵ_{\mathrm{cl}}=0.26\%`$, given both the variation of stellar mass-to-light ratio with $`L_{V,\mathrm{gal}}`$ on the fundamental plane of ellipticals and the increase of $`M_{\mathrm{gas}}/M_{\mathrm{stars}}`$ with $`L_{V,\mathrm{gal}}`$ for regular gE’s and BCGs inferred from the correlation between their X-ray and optical luminosities (bold solid curve; see McLaughlin 1999), and after correcting (according to the model of Dekel & Silk 1986) for the gas mass lost in supernova-driven winds from early bursts of star formation in faint dwarfs ($`L_{V,\mathrm{gal}}\lesssim 2\times 10^9L_{\odot }`$; bold dashed line). All systematic variations in GCS specific frequencies reflect only different relations, in different magnitude ranges, between $`M_{\mathrm{gas}}^{\mathrm{init}}`$ and the present-day $`L_{V,\mathrm{gal}}`$. McLaughlin (1999) also shows that the ratio of local densities, $`\rho _{\mathrm{gcs}}/(\rho _{\mathrm{gas}}+\rho _{\mathrm{stars}})`$, is constant as a function of galactocentric position (beyond a stellar effective radius) in each of the large ellipticals M87, M49, and NGC 1399, and that this ratio is the same in all three systems: $`ϵ_{\mathrm{cl}}=0.0026\pm 0.0005`$. Moreover, it seems (although the data are much less clear in this case) that the same efficiency also applies to the ongoing formation of open clusters in the Galactic disk. It therefore appears that there is a universal efficiency for cluster formation, whose value should serve as a strong constraint on very general theories of star formation. (Note that one exception to the figure of 0.26% by mass may be the formation of massive clusters in mergers and starbursts, where it has been suggested that $`ϵ_{\mathrm{cl}}\sim 1`$–10% \[e.g., Meurer et al. 1995; Zepf et al. 1999\]. However, this conclusion is very uncertain and requires more careful investigation.)
While this result certainly has interesting implications for aspects of large-scale galaxy formation (McLaughlin 1999; Harris et al. 1998), the main point to be emphasized here is that the variations in early-type GCS specific frequencies are now understood to result from variations in the gas-to-star mass ratio in galaxies, rather than from any peculiarities in their GCS abundances per se (cf. the similar suggestion of Blakeslee et al. 1997). That is, the efficiency of unclustered star formation was not universal in protogalaxies: while globulars apparently always formed in just the numbers expected of them, the formation of a normal proportion of field stars was subsequently disabled in many cases. The clumps of gas which formed bound clusters therefore must have collapsed before those forming unbound groups and associations, i.e., they must have been denser than average. This and the insensitivity of $`ϵ_{\mathrm{cl}}`$ to local or global galaxy environment together suggest that quantitative theories of cluster formation should seek to identify a threshold in relative density, $`\delta \rho /\rho `$, that is always exceeded by $`\sim `$0.26% of the mass fluctuations in any large star-forming complex. ## 3. Globular Cluster Binding Energies Even as they clarify the probability that a $`\sim 10^5`$–$`10^6M_{\odot }`$ clump of gas was able to form stars with cumulative efficiency $`\eta `$ high enough to produce a bound globular cluster, the integrated GCS mass ratios in galaxies say nothing of how this was achieved in any individual case.
This more ambitious question is essentially one of energetics—When does the energy injected by the massive stars in an embedded young cluster overcome the binding energy of whatever gas remains, thus expelling it and terminating star formation?—and its answer requires both an understanding of local star formation laws ($`d\rho _{\mathrm{stars}}/dt`$ as a function of $`\rho _{\mathrm{gas}}`$) and a self-consistent treatment of feedback on small ($`\sim `$10–100 pc) scales. One way to begin addressing this complex problem empirically is to compare the energies of globular clusters with the initial energies of their gaseous progenitors. McLaughlin (2000) has calculated the $`V`$-band mass-to-light ratios of 39 regular (non–core-collapsed) Milky Way globulars, and finds that they are all consistent with a single $`\mathrm{\Upsilon }_{V,0}=(1.45\pm 0.10)M_{\odot }L_{\odot }^{-1}`$. Applying this to all other Galactic globulars, and adopting single-mass, isotropic King (1966) models for their internal structure, then allows binding energies $`E_b`$ to be estimated for a complete sample of 109 regular (and 30 post–core-collapse) objects. This exercise reveals a very tight correlation between $`E_b`$, total cluster luminosity $`L`$ (or mass $`M=\mathrm{\Upsilon }_{V,0}L`$), and Galactocentric position: $`E_b=7.2\times 10^{39}\mathrm{erg}(L/L_{\odot })^{2.05}(r_{\mathrm{gc}}/8\mathrm{kpc})^{-0.4}`$, with uncertainties of roughly $`\pm `$0.1 in each of the fitted exponents on $`L`$ and $`r_{\mathrm{gc}}`$ (cf. Saito 1979, who claimed $`E_b\propto M^{1.5}`$ on the basis of a much smaller dataset). These constraints on $`\mathrm{\Upsilon }_{V,0}`$ and $`E_b(L,r_{\mathrm{gc}})`$ are, in fact, two edge-on views of a fundamental plane in the (four-dimensional) parameter space of King models, to which real globulars are confined in the Milky Way (cf. Djorgovski 1995; Bellazzini 1998).
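As an order-of-magnitude check on the fitted correlation, consider an illustrative cluster of luminosity $`10^5L_{\odot }`$ placed at the 8 kpc normalization radius, where the radial factor is unity (the luminosity is an assumed example value, not one of the 109 sample objects):

```python
# E_b at the 8 kpc normalization radius, where the radial factor is 1
L = 1.0e5                  # illustrative cluster luminosity, solar units
E_b = 7.2e39 * L ** 2.05   # erg
print(f"E_b ~ {E_b:.1e} erg")  # ~ 1.3e+50 erg
```

This is only an order-of-magnitude evaluation of the fitted relation, not a model prediction.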
The full characteristics of this plane subsume all other observable correlations between any combination of other cluster parameters (see McLaughlin 2000), and they therefore provide a complete set of independent facts to be explained in any theory of globular cluster formation and evolution. In fact, the $`E_b`$–$`L`$ correlation is stronger among clusters at larger Galactocentric radii (where dynamical cluster evolution is weaker), suggesting that it was set largely by the cluster formation process. The same is true of a weaker correlation between cluster concentration and luminosity (see Vesperini 1997), which is related to the distribution of globulars on the fundamental plane. Any collection of critically stable, virialized gas spheres under a surface pressure $`P_s`$ has a common column density, $`\mathrm{\Sigma }\equiv M/(\pi R^2)\propto P_s^{0.5}`$, and thus $`E_b^{\mathrm{gas}}\sim GM^2/R\propto M^{1.5}P_s^{0.25}`$. Harris & Pudritz (1994) have developed a physical framework in which protoglobular clusters in the Milky Way were massive analogues of the dense clumps in disk molecular clouds today; in particular, their column densities were the same: $`\mathrm{\Sigma }\approx 10^3M_{\odot }`$ pc<sup>-2</sup> at $`r_{\mathrm{gc}}=8`$ kpc. In addition, it is natural to expect $`P_s\propto r_{\mathrm{gc}}^{-2}`$ for such protocluster clumps embedded in larger (but subgalactic) star-forming clouds that were themselves surrounded by a diffuse medium virialized in a “background” isothermal potential well (Harris & Pudritz 1994). Together, these basic hypotheses imply $`E_b^{\mathrm{gas}}=4.8\times 10^{42}\mathrm{erg}(M/M_{\odot })^{1.5}(r_{\mathrm{gc}}/8\mathrm{kpc})^{-0.5}`$. Note that the $`r_{\mathrm{gc}}`$ scaling is essentially that observed directly for Galactic globulars today, enabling a direct comparison of the (model) initial and final $`E_b(M,r_{\mathrm{gc}})`$ relations in Fig. 2. To explain the relative $`E_b(M)`$ normalizations in Fig.
2 requires quantitative modelling of the initial structure and feedback dynamics in the gaseous protoclusters. Meanwhile, the different slopes of the two relations are significant: The ratio of the initial energy of a gaseous clump to the final $`E_b`$ of a stellar cluster is a non-decreasing function of the cumulative star formation efficiency $`\eta `$; but this Figure shows that it is also an increasing function of cluster mass, and thus that $`\eta `$ was systematically higher in more massive protoclusters. The quantitative details of this dependence are also model-dependent (McLaughlin, in preparation), but the inference on the qualitative behavior of $`\eta `$ is robust and presents a new constraint for theories of cluster formation. Once the behavior of $`\eta `$ as a function of initial gas mass is understood, progress will have been made in explaining the universal $`ϵ_{\mathrm{cl}}`$ of §2, and there will be further implications for other global properties of GCSs—such as their mass functions, which, contrary to current modelling (McLaughlin & Pudritz 1996; Elmegreen & Efremov 1997), can no longer simply be assumed proportional to those of their gaseous protoclusters. ### Acknowledgments. This work was supported by NASA through grant number HF-1097.01-97A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA under contract NAS5-26555. ## References Bellazzini, M. 1998, New Astronomy, 3, 219 Blakeslee, J. P., Tonry, J. L., & Metzger, M. R. 1997, AJ, 114, 482 Dekel, A., & Silk, J. 1986, ApJ, 303, 39 Djorgovski, S. 1995, ApJ, 438, L29 Elmegreen, B. G. 2000, in Toward a New Millennium in Galaxy Morphology, ed. D. L. Block, I. Puerari, A. Stockton, and D. Ferreira (Dordrecht: Kluwer), in press (astro-ph/9911157) Elmegreen, B. G., & Efremov, Y. N. 1997, ApJ, 480, 235 Gnedin, O. Y., & Ostriker, J. P. 1997, ApJ, 474, 223 Goodwin, S. P. 1997, MNRAS, 284, 785 Harris, W. E.
1996, AJ, 112, 1487 Harris, W. E., & Pudritz, R. E. 1994, ApJ, 429, 177 Harris, W. E., & van den Bergh, S. 1981, AJ, 86, 1627 Harris, W. E., Harris, G. L. H., & McLaughlin, D. E. 1998, AJ, 115, 1801 Hills, J. G. 1980, ApJ, 225, 986 King, I. R. 1966, AJ, 71, 64 Lada, E. A. 1992, ApJ, 393, L25 McLaughlin, D. E. 1999, AJ, 117, 2398 McLaughlin, D. E. 2000, ApJ, in press McLaughlin, D. E., & Pudritz, R. E. 1996, ApJ, 457, 578 Meurer, G. R., Heckman, T. M., Leitherer, C., Kinney, A., Robert, C., & Garnett, D. R. 1995, AJ, 110, 2665 Murali, C., & Weinberg, M. D. 1997, MNRAS, 288, 767 Patel, K., & Pudritz, R. E. 1994, ApJ, 424, 688 Saito, M. 1979, PASJ, 31, 181 Verschueren, W. 1990, A&A, 234, 156 Vesperini, E. 1997, MNRAS, 287, 915 Zepf, S. E., Ashman, K. M., English, J., Freeman, K. C., & Sharples, R. M. 1999, AJ, 118, 752 ## Discussion G. Meurer: Concerning the two-orders-of-magnitude difference between $`ϵ_{\mathrm{cl}}`$ and the fraction of UV light in starbursts: One order of magnitude may be explainable by the gas content in starbursts. McLaughlin: That does seem plausible (e.g., Zepf et al. 1999), although it should of course be checked in detail in every individual case. But the gas mass in starbursts really does have to enter as much more than a factor-of-ten effect if there is no boost in the cluster formation efficiency in starbursts vs. old galaxy halos. A real question remains as to whether or not that is the case. G. Östlin: Since none of the fundamental properties of globular clusters depend on metallicity, including the core mass-to-light ratio which appears constant, I guess this requires them to have had a universal stellar IMF, independent of metallicity. McLaughlin: I think that’s exactly right.
# 𝑐-Axis tunneling in YBa2Cu3O7-δ/PrBa2Cu3O7-δ superlattices ## Abstract In this work we report $`c`$-axis conductance measurements done on a superlattice based on a stack of 2 layers YBa<sub>2</sub>Cu<sub>3</sub>O<sub>7-δ</sub> and 7 layers PrBa<sub>2</sub>Cu<sub>3</sub>O<sub>7-δ</sub> (2:7). We find that these quasi-2D structures show no clear superconducting coupling along the $`c`$-axis. Instead, we observe tunneling with a gap of $`\mathrm{\Delta }_c=5.0\pm 0.5`$ meV for the direction perpendicular to the superconducting planes. The conductance spectra show well-defined quasi-periodic structures which are attributed to the superlattice structure. From these data we deduce a low-temperature $`c`$-axis coherence length of $`\xi _c=0.24\pm 0.03`$ nm. preprint: Y123/Pr123-v1.4 As for classical superconductors, tunneling experiments are a direct way of testing the local superconducting density of states . In particular, the coexistence of $`s`$-wave and $`d`$-wave components of the order parameter was directly investigated from $`c`$-axis planar tunneling measurements done in high temperature superconductors . However, tunneling experiments are extremely sensitive to the quality of barriers and interfaces. This aspect explains the extreme care taken by the different groups in the fabrication of HTS tunnel junctions. One way of getting around this problem is to investigate $`c`$-axis tunneling in YBa<sub>2</sub>Cu<sub>3</sub>O<sub>7-δ</sub>/PrBa<sub>2</sub>Cu<sub>3</sub>O<sub>7-δ</sub> superlattices (Y123/Pr123). In those systems it is possible to modify the tunneling properties simply by varying the periodicities of the Y123 and Pr123 layers. Another advantage is that transmission electron microscope (TEM) studies show atomically flat Y123/Pr123 interfaces in superlattices . In this work we study the influence of the periodicity of an artificial superlattice on the local superconducting density of states of Y123.
This was only possible by a sustained effort divided into three main steps: the preparation of high-quality Y123/Pr123 superlattices, the patterning of suitable mesa structures and the measurement of the $`c`$-axis transport properties. Y123/Pr123 superlattices are deposited in a similar way to Y123 thin films. The only difference is that after sputtering 1 to 10 unit cells (u.c.) of Y123, the substrate is turned to a cathode containing the Pr123 target. This process is repeated until a total film thickness of 200 nm is reached. The switching between targets is made with a computer-controlled step motor. To provide low-ohmic contacts, the process ends with the in-situ deposition of a protective gold layer with thickness from 200 to 400 nm. In order to avoid the formation of pinholes and reduce the surface roughness, the sputtering process was done at a pressure of 3 mbar in a 1:2 mixture of O<sub>2</sub> and Ar. The crystallographic quality of the superlattices used in this work has been checked by x-ray diffraction. Our measurements showed up to third-order satellite peaks in ($`\theta `$–$`2\theta `$) scans. On the other hand, TEM studies show steps of only 1 u.c. per 100 nm. This value is much smaller than the 7-u.c.-thick Pr123 barrier. Given that a gold layer covered the superlattices, the surface morphology could not be checked directly on the same samples where the $`c`$-axis measurements were done. However, scanning tunneling microscopy done on similar superlattices reveals an average surface modulation of $`\pm `$7.2 nm (6 u.c.) over a length scale of 1.6 $`\mu `$m. This roughness is very small when compared to the total thickness of the mesas, which was about 120 nm. Since the measurements below were performed on 2:7 superlattices and the top layer was always Y123, only the two upper Y123 layers (of a total of 10) are probably affected by the protective gold layer. This is expected to have little influence on our experimental results.
Before making the mesa structures, a ground electrode with 1.2 $`\times `$ 10 mm<sup>2</sup> was wet-chemically etched. The mesas were later prepared by standard UV-photolithography and ion milling. During the etching process, the samples were cooled down to 77 K. Later the chip was coated with photoresist, and a window was opened on the top of each mesa by a photolithographic process. The preparation step ends with the deposition of a gold top electrode patterned by wet chemical etching. Because of its smoothness, high homogeneity and low defect density, the photoresist was directly used for insulating the top contact from the ground electrode. All measurements were done in a “three-point” geometry, where the typical contact resistance between the HTS material and gold is $`R_S\sim 3\times 10^{-5}\mathrm{\Omega }`$cm<sup>2</sup>. For the moment, we are limited to mesa structures down to 15$`\times `$15 $`\mu `$m<sup>2</sup>. In Fig. 1 we show a semi-logarithmic plot of the resistances of three mesas with $`30\times 30`$, $`40\times 40`$ and $`50\times 50\mu `$m<sup>2</sup> prepared on a Y123/Pr123 superlattice with 2 layers Y123 and 7 layers Pr123 (2:7). The transition observed at 65 K corresponds to the superconducting transition $`T_c`$ of the Y123 layers. The reduced $`T_c`$ is typical of the not fully developed order parameter in the 2-u.c.-thick Y123 layer . Above $`T_c`$, the larger mesas show a temperature dependence that is similar to the one measured in the (a,b)-plane. This demonstrates that we measured a non-negligible amount of the (a,b) component. However, below $`T_c`$ the superconducting Y123 layers define equipotential planes, and only the $`c`$-axis component of the resistance can be measured. This is confirmed by the vertical shift between the different plots shown in Fig. 1. The logarithmic y-axis shows that the different curves differ below $`T_c`$ only by a proportionality factor.
The resistances of the mesas at 20 K were $`R_c(30)=168`$ $`\mathrm{\Omega }`$, $`R_c(40)=68`$ $`\mathrm{\Omega }`$ and $`R_c(50)=63`$ $`\mathrm{\Omega }`$, which give ratios of $`R_c^{50}/R_c^{30}=0.38`$ (expected $`30^2/50^2=0.36`$) and $`R_c^{40}/R_c^{30}=0.40`$ ($`30^2/40^2=0.56`$). The discrepancy of 30% observed in $`R_c^{40}/R_c^{30}`$ can be attributed to a larger degree of damage introduced during the etching of the $`40\times 40`$ $`\mu `$m mesa. The contact resistance of the gold top electrode should scale as well with the area of the mesas. We measured simultaneously the $`U`$ vs. $`I`$ characteristics and the differential resistance in the $`30\times 30`$ $`\mu `$m<sup>2</sup> mesa. This was done by using a battery-operated current source that superposes a small AC signal, generated by the reference of a lock-in amplifier, on an arbitrary DC current. The DC and AC signals were measured with an HP34420A nano-voltmeter and a PAR 5210 lock-in amplifier, respectively. In Fig. 2 we show some of our results on a $`30\times 30\mu `$m<sup>2</sup> mesa done on a 2:7 superlattice for temperatures between 2.0 and 60 K. Several features can be identified in Fig. 2. The first is the parabolic background, which can be well described by the Simmons model, which predicts: $$\sigma _b(U)=\sigma _0\left(1+\zeta U^2\right)$$ (1) This model corresponds to a metal-insulator-metal junction with a rectangular barrier of width $`d`$ and height $`\mathrm{\Phi }`$. The constant factors are $`\sigma _0=(3/2)(e^2(2m\mathrm{\Phi })^{1/2}/h^2d)\mathrm{exp}(-Ad\mathrm{\Phi }^{1/2})`$ and $`\zeta =(Aed)^2/96\mathrm{\Phi }`$ where $`A=4\pi (2m)^{1/2}/h`$. Considering that we have 10 bi-layers connected in series, we deduced from Eq. 1 that each Pr123 barrier has below $`T=25`$ K an effective height $`\mathrm{\Phi }=370`$ meV and an effective width $`d=3.5`$ nm.
Although $`d`$ is much smaller than the 8.2 nm expected from the $`7`$ u.c., this result can be explained by purely geometrical arguments. If we take into account the imperfections in the superlattice (steps of 1 u.c. per interface) and the fact that superconductivity extends to the chains (0.5 u.c. per interface), we obtain an effective barrier thickness of $`4.8`$ nm (4 u.c.). This crude estimate is only $`37\%`$ larger than $`d`$. From our data we deduce that the values of $`d`$ and $`\mathrm{\Phi }`$ are practically temperature independent up to $`25`$ K. Above this temperature the conductivity follows the behavior characteristic of resonant tunneling via up to two localized states : $$\sigma _b(U)=g_0+\alpha U^{4/3}$$ (2) The two peaks observed at lower temperatures in $`\sigma (U)`$ should indeed correspond to a $`c`$-axis superconducting gap. The distance between the two peaks is $`U_{pp}=178`$ mV. This particular mesa was made with a height of about 120 nm, estimated from an etching rate calibrated by atomic force microscopy on different etched films. This gives a stack of $`n=8`$ to $`10`$ bi-layers, each presenting a $`c`$-axis superconducting gap of $`\mathrm{\Delta }_c=U_{pp}/4n=5.0\pm 0.5`$ meV. This value is in excellent agreement with the values of $`\mathrm{\Delta }_c`$ given in the literature, which scatter between 4 and 6 meV for planar junctions prepared with Pr123, CeO<sub>2</sub> and SrAlTaO<sub>6</sub> barriers . To understand this result we have to look at scanning tunneling spectroscopy (STS) data. Typical STS measurements done on both Y123 thin films and high-quality single crystals give gap structures at about 5 and 20 meV . These fine structures observed in tunneling spectra were explained successfully by Miller et al. by considering that Y123 is constituted by a stack of strong and weak superconducting layers which contribute with different weights to the tunneling spectra .
In particular the 5 meV gap has been attributed to the BaO and CuO layers situated between the CuO<sub>2</sub> blocks. Since these BaO and CuO layers are common to both Y123 and Pr123, we expect them to constitute the interfaces relevant to our tunneling process. Given that $`T_c`$ in our superlattices is only reduced by 20%, we do not expect the CuO<sub>2</sub> superconducting gap to be strongly suppressed. On the other hand, given that 4-6 meV gap structures are observed in measurements done with different barriers, we think that the gap $`\mathrm{\Delta }_c\simeq 5`$ meV is indeed an intrinsic property of Y123. The existence of this small $`c`$-axis gap is consistent with the strong thermal smearing of the gap feature at about 50 K, which corresponds to an energy of the same magnitude as $`\mathrm{\Delta }_c`$. For the moment it is not clear to us what influence the CuO chains have on the symmetry of the order parameter. However, since these chains form, together with the apical oxygen, CuO<sub>2</sub> cells oriented along the $`c`$-axis, it is likely that a $`c`$-axis gap with an $`s`$-wave-like symmetry would exist in Y123. To enhance the other features present in $`\sigma (U)`$, we plot in Fig. 3 $`\sigma /\sigma _b`$ for temperatures between 2.0 and 65 K. For clarity the data are vertically shifted. Below 25 K the Simmons model is used to calculate $`\sigma _b`$ (Eq. 1); above that temperature the resonant tunneling expression given by Eq. 2 is employed. We would like to emphasize that for all temperatures the low-bias features were included in the estimation of $`\sigma _b(U)`$. Below 30 K we observe a number of reproducible features superposed on the superconducting gap. With increasing temperature, the gap and these structures are smeared out by thermal fluctuations.
At about 30 K the only remaining signature of the gap is a soft voltage dependence of $`\sigma `$, and a zero-bias peak starts to develop, grows up to about 50 K and finally disappears near $`T_c`$. Although there is at present no clear explanation for the origin of this zero-bias peak, its disappearance close to $`T_c`$ shows that it is clearly related to the tunneling of superconducting pairs. As suggested by Abrikosov , this anomaly could be due to resonant tunneling through localized states in the barrier. The absence of a zero-bias peak at lower temperatures is consistent with this picture: the success in fitting the data with a Simmons model indicates that below 25 K the Pr123 layers behave predominantly like normal tunneling barriers having one or no resonant states. The arrows in Fig. 3 show sharper indentations in $`\sigma (U)`$ which can be followed up to 30 K. To investigate the additional features present in the low-temperature conductivity, we plot in Fig. 4 $`\sigma /\sigma _b`$ for $`U`$ between 0.1 and 0.5 V. The vertical lines correspond to the minima of the oscillations, $`U_m`$, which are particularly visible at lower temperatures. To find the periodicity of these oscillations, we plot in Fig. 5 $`U_m`$ vs. an integer index $`n`$. A clear zero crossing of the linear fit is obtained by assigning the index $`n=9`$ to the lowest extracted value of $`U_m`$. From the linear fit we deduce a period of $`(11.1\pm 0.5)`$ mV. Recalling that this value corresponds to 8-10 junctions connected in series, a single junction would show a periodicity of $`\delta U=(1.2\pm 0.1)`$ meV. The expected $`c`$-axis density of states of a superlattice in which the superconducting gap is a one-dimensional periodic step function was already calculated by van Gelder in 1969 . With the HTS materials, the increasing interest in superlattices inspired the work of Hahn , who extended the model to the three-dimensional case .
These models predict the opening of gaps which are particularly visible in the one-dimensional case. In the three-dimensional case they are smeared out, although still present. The main result from Ref. is that above the superconducting gap these additional gap structures should appear with a periodicity of: $$\frac{\xi _c}{s}=\frac{1}{\pi ^2}\frac{\delta U}{\mathrm{\Delta }_c}$$ (3) where $`s`$ is the periodicity of the superlattice, $`\xi _c`$ the $`c`$-axis coherence length, $`\mathrm{\Delta }_c`$ the $`c`$-axis gap and $`\delta U`$ the periodicity of the sub-gap structures. Taking $`s=10.5`$ nm and $`\mathrm{\Delta }_c=5.0\pm 0.5`$ meV, we deduce from Eq. 3 a $`c`$-axis coherence length of $`\xi _c=0.27`$ nm . This result is close to the value of $`\xi _c=0.16\pm 0.01`$ nm deduced from an analysis of fluctuation conductivity in Y123/Pr123 superlattices . If we assume for Y123 an anisotropy of $`\gamma \simeq 5`$, we obtain an in-plane coherence length $`\xi _{ab}=\gamma \xi _c\simeq 1.4`$ nm. This value is close to the generally quoted $`\xi _{ab}=1.5`$ nm for Y123 . The larger structures indicated by the arrows in Fig. 3 correspond to a periodicity of $`100\pm 1`$ mV. It is interesting to notice that this periodicity divided by the periodicity of $`U_m`$ gives a ratio of $`9.0\pm 0.4`$. This value corresponds to the ratio between $`s`$ and a Y123 unit cell! From Eq. 3 and Fig. 4 we deduce that $`\xi _c`$ is practically temperature independent below 20 K. We have shown in this paper that by constructing superlattices it is possible to generate sub-gap structures. These features can be used directly to determine, in an independent way, a $`c`$-axis coherence length $`\xi _c=0.24\pm 0.03`$ nm for a 2 u.c. thick Y123 layer. The agreement of these results with previous estimates of Y123 coherence lengths shows that $`\mathrm{\Delta }_c=5.0\pm 0.5`$ meV could indeed be a $`c`$-axis superconducting gap related to the CuO chains.
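Solving Eq. 3 for the coherence length is a one-line calculation; with the numbers quoted above:

```python
import math

s = 10.5          # nm, superlattice period (2 u.c. Y123 + 7 u.c. Pr123)
delta_U = 1.2     # meV, per-junction periodicity of the sub-gap structures
Delta_c = 5.0     # meV, c-axis superconducting gap

# Eq. (3): xi_c / s = (1 / pi^2) * (delta_U / Delta_c)
xi_c = s * delta_U / (math.pi ** 2 * Delta_c)
print(f"xi_c = {xi_c:.2f} nm")
```

With the central values this gives $`\xi _c\approx 0.26`$ nm; varying $`\delta U`$ and $`\mathrm{\Delta }_c`$ within their quoted error bars spans the 0.24-0.27 nm range cited in the text.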
From the observed sub-gap structures we conclude that $`\xi _c(T)`$ is practically temperature independent below 20 K. Due to thermal fluctuations, this analysis could not be extended to higher temperatures. The authors would like to thank K. Gray, J.F. Zasadzinski, R.A. Klemm, R. Schilling and J. Mannhart for valuable and stimulating discussions. This work was supported by the German BMBF through Contract 13N6916 and the European Union through the Training and Mobility of Researchers program (ERBFMBICT972217).
no-problem/0002/hep-th0002050.html
ar5iv
text
# A non-perturbative Lorentzian path integral for gravity AEI-2000-1 NBI-HE-2000-3 Feb 7, 2000 J. Ambjørn$`^a`$, J. Jurkiewicz$`^b`$ and R. Loll$`^c`$ <sup>a</sup> The Niels Bohr Institute, Blegdamsvej 17, DK-2100 Copenhagen Ø, Denmark email: ambjorn@nbi.dk <sup>b</sup> Institute of Physics, Jagellonian University, Reymonta 4, PL 30-059 Krakow, Poland email: jurkiewi@thrisc.if.uj.edu.pl <sup>c</sup> Albert-Einstein-Institut, Max-Planck-Institut für Gravitationsphysik, Am Mühlenberg 1, D-14476 Golm, Germany email: loll@aei-potsdam.mpg.de Abstract A well-defined regularized path integral for Lorentzian quantum gravity in three and four dimensions is constructed, given in terms of a sum over dynamically triangulated causal space-times. Each Lorentzian geometry and its associated action have a unique Wick rotation to the Euclidean sector. All space-time histories possess a distinguished notion of a discrete proper time. For finite lattice volume, the associated transfer matrix is self-adjoint and bounded. The reflection positivity of the model ensures the existence of a well-defined Hamiltonian. The degenerate geometric phases found previously in dynamically triangulated Euclidean gravity are not present. The phase structure of the new Lorentzian quantum gravity model can be readily investigated by both analytic and numerical methods. In spite of numerous efforts, we still have not found a consistent theory of quantum gravity which would enable us to describe local quantum phenomena in the presence of strong gravitational fields. Moreover, the entanglement of technical problems with more fundamental issues concerning the structure of such a theory often makes it difficult to pinpoint why any particular quantization program has not been successful.
There seems to be a broad consensus that a correct non-perturbative treatment should involve in an essential way the full “space of all metrics” (as opposed to linearized perturbations around flat space) and the diffeomorphism group, i.e. the invariance group of the classical gravitational action, $$S[g_{\mu \nu }]=\frac{k}{2}\int d^dx\sqrt{detg}(R-2\mathrm{\Lambda }),d=4,$$ (1) with $`k^{-1}=8\pi G_{\mathrm{Newton}}`$ and the cosmological constant $`\mathrm{\Lambda }`$. One approach that does not rely on the existence of supersymmetry tries to define the theory by means of a non-perturbative path integral. The aim is not to evaluate this in a stationary-phase approximation, but as a genuine “sum over all geometries”. Since even the pure gravity theory, in spite of its large invariance group, possesses local field degrees of freedom, such a sum must be regularized to make it meaningful. The two most popular approaches, quantum Regge calculus and dynamical triangulations , both employ simplicial discretizations of space-time, on which the behaviour of metric and matter fields is then studied. One drawback of these (mainly numerical) investigations is that so far they have been conducted only for Euclidean space-time metrics $`g_{\mu \nu }^{eucl}`$ instead of the physical, Lorentzian metrics. This is motivated by the analogy with non-perturbative Euclidean (lattice) field theories on a fixed, flat background, whose results can under suitable conditions be “Wick-rotated” to their Minkowskian counterparts. The amplitudes $`\mathrm{exp}iS`$ are substituted in the Euclidean path integral by the real weights $`\mathrm{exp}(-S^{eucl})`$ for each configuration. Real weight factors are mandatory for the applicability of standard Monte Carlo techniques and, more generally, for the convergence of the state sums. Unfortunately, it is not at all clear how to relate path integrals over Euclidean geometries to those over Lorentzian ones.
This has to do with the fact that in a generally covariant field theory like gravity the metric is a dynamical quantity, and not part of the fixed background structure. For general metric configurations, there is no preferred notion of “time”, and hence no obvious prescription of how to Wick-rotate. One might worry that with a discretization of space-time the diffeomorphism invariance of the continuum theory is irretrievably lost. However, the example of two-dimensional Euclidean gravity theories provides evidence that this is not necessarily so. There, one can choose a conformal gauge in the continuum formulation, and obtain an effective action by a Faddeev-Popov construction. Physical quantities computed in this approach coincide with those computed using an intermediate discretization in the form of so-called dynamical triangulations. In this latter approach one approximates the sum over all metrics modulo diffeomorphisms by a sum over all possible equilateral triangulations of a given topological manifold. Local geometric degrees of freedom are given by the geodesic lengths (all equal to some constant $`a`$) of the triangle edges and by deficit angles around vertices. Since different triangulations correspond to different geometries, the set-up has no residual gauge invariance, and it is straightforward to define (regularized versions of) diffeomorphism-invariant correlation functions. In the continuum limit, as the diffeomorphism-invariant cutoff $`a`$ is taken to zero, the sum over triangulations gives rise to an effective measure on the space of geometries (whose explicit functional form is not known). This makes the regularization by dynamical triangulations, which is also applicable in higher dimensions, an appealing method for investigating quantum gravity theories. It is a further advantage that the formalism is amenable to numerical simulations, which have been conducted extensively in dimensions 2, 3 and 4. 
Alas, in $`d=3,4`$, and for Euclidean signature, an interesting continuum limit has not been found. This seems to be related to the dominance of degenerate geometries. At small bare gravitational coupling $`G_{\mathrm{Newton}}`$, the dominant configurations are branched polymers (or “stacked spheres”) with Hausdorff dimension $`d_H=2`$, whereas at large $`G_{\mathrm{Newton}}`$ the geometries “condense” around one or several singular links or vertices with a very high coordination number, resulting in a very large $`d_H`$. These extreme geometric phases and the first-order phase transition separating them are qualitatively well described by mean field calculations . Another unsatisfactory aspect of the Euclidean model (as well as other discretized models of gravity) is our inability to rotate back to Lorentzian space-time. In order to tackle these problems, two of us have recently constructed a Lorentzian version of the dynamically triangulated gravitational path integral in two dimensions . The basic building blocks are triangles with one space-like and two time-like edges. The individual Lorentzian geometries are glued together from such triangles in a way that satisfies certain causality requirements. The model turns out to be exactly soluble and its associated continuum theory lies in a new universality class of 2d gravity models distinct from the usual Euclidean Liouville gravity. One central lesson from the two-dimensional example is that the causality conditions imposed on the Lorentzian model act as a “regulator” for the geometry. Most importantly, they suppress changes in the spatial topology, that is, branching of baby universes is not allowed. 
As a result, the effective quantum geometry in the Lorentzian case is smoother and in some senses better behaved: a) in spite of large fluctuations of the geometry, its Hausdorff dimension has the canonical value $`d_H=2`$, unlike in Euclidean gravity, which has a fractal dimension $`d_H=4`$; b) in spite of a strong interaction between matter and gravity when the system is coupled to Ising spins, the combined system remains consistent even beyond the $`c=1`$ barrier, unlike what happens in Euclidean gravity . Motivated by these results, we have constructed a discretized Lorentzian path integral for gravity in 3 and 4 space-time dimensions. Unlike in two dimensions, the action is no longer trivial, and the Wick rotation problem must be solved. Also the geometries themselves are more involved, and the geometry of the $`(d-1)`$-dimensional spatial slices is no longer described by just a single variable. We have succeeded in constructing a model with the following properties: * Lorentzian space-time geometries (histories) are obtained by causally gluing sets of Lorentzian building blocks, i.e. 
$`d`$-dimensional simplices with simple length assignments; * all histories have a preferred discrete notion of proper time $`t`$; $`t`$ counts the number of evolution steps of a transfer matrix between adjacent spatial slices, the latter given by $`(d-1)`$-dimensional triangulations of equilateral Euclidean simplices; * for a fixed space-time volume $`N_d`$ (= number of simplices of top-dimension), both the Euclidean and the Lorentzian discretized gravity actions are bounded from above and below; * the number of possible triangulations is exponentially bounded as a function of the space-time lattice volume; * each Lorentzian discrete geometry can be “Wick-rotated” to a Euclidean one, defined on the same (topological) triangulation; * at the level of the discretized action, the “Wick rotation” is achieved by an analytical continuation in the dimensionless ratio $`\alpha =-l_{\mathrm{time}}^2/l_{\mathrm{space}}^2`$ of the squared time- and space-like link length; for $`\alpha =1`$ one obtains the usual Euclidean action of dynamically triangulated gravity; * for finite lattice volume, the model is (site) reflection-positive, and the transfer matrix is symmetric and bounded, ensuring the existence of a well-defined Hamiltonian operator; * the extreme phases of degenerate geometries found in the Euclidean models cannot be realized in the Lorentzian case. For the sake of definiteness and simplicity, we will concentrate mostly on the three-dimensional case. The discussion carries over virtually unchanged to $`d=4`$, the details of which will be given elsewhere . (Obviously, the corresponding continuum theories will be very different, one describing a topological quantum field theory, and the other a field theory of interacting gravitons.) The classical continuum action is simply eq. (1), with $`d=3`$. 
Each discrete Lorentzian space-time will be given by a sequence of two-dimensional compact spatial slices of fixed topology, which for simplicity we take to be that of a two-sphere. Each slice carries an integer “time” label $`t`$, so that the space-time topology is $`I\times S^2`$, with $`I`$ some real interval. The metric data will be encoded by triangulating this underlying space by three-dimensional simplices with definite edge length assignments. There are two types of edges: “space-like” ones (of length squared $`l^2=a^2>0`$, with the lattice spacing $`a>0`$) which are entirely contained in a slice $`t=const.`$, and “time-like” ones (of length squared $`l^2=-\alpha a^2<0`$) which start at some slice $`t`$ and end at the next slice $`t+1`$. This implies that all lattice vertices are located at integer $`t`$. A metric space-time is built up by “filling in” for all times the three-dimensional sandwiches between $`t`$ and $`t+1`$. We only consider regular gluings which lead to simplicial manifolds. Our basic building blocks are given by three types of Lorentzian tetrahedra, * 3-to-1 tetrahedra (three vertices contained in slice $`t`$ and one vertex in slice $`t+1`$): they have three space- and three time-like edges; their number in the sandwich $`[t,t+1]`$ will be denoted by $`N_{31}(t)`$; * 1-to-3 tetrahedra: the same as above, but upside-down; the tip of the tetrahedron is at $`t`$ and its base lies in the slice $`t+1`$; notation $`N_{13}(t)`$; * 2-to-2 tetrahedra: one edge (and therefore two vertices) at each $`t`$ and $`t+1`$; they have two space- and four time-like edges; notation $`N_{22}(t)`$. Each of these triangulated space-times carries a discrete causal structure obtained by giving each time-like link an orientation in the positive $`t`$-direction. Then, two lattice vertices are causally related if there is a sequence of positively oriented links connecting the two. 
The discretized form of the Lorentzian action (1) is obtained from Regge’s prescription for simplicial manifolds , see for details. The action is written as a function of the deficit angles around the one-dimensional edges and of the three-dimensional volumes, which in turn can be expressed as functions of the squared edge lengths of the fundamental building blocks. The contribution to the action from a single sandwich $`[t,t+1]`$ is $$\mathrm{\Delta }S_\alpha (t)=4\pi ak\sqrt{\alpha }+(N_{31}(t)+N_{13}(t))(akK_1-a^3\lambda L_1)+N_{22}(t)(akK_2-a^3\lambda L_2),$$ (2) where $`K_1(\alpha )`$ $`=`$ $`\pi \sqrt{\alpha }-3\mathrm{arcsinh}{\displaystyle \frac{1}{\sqrt{3}\sqrt{4\alpha +1}}}-3\sqrt{\alpha }\mathrm{arccos}{\displaystyle \frac{2\alpha +1}{4\alpha +1}},`$ $`K_2(\alpha )`$ $`=`$ $`2\pi \sqrt{\alpha }+2\mathrm{arcsinh}{\displaystyle \frac{2\sqrt{2}\sqrt{2\alpha +1}}{4\alpha +1}}-4\sqrt{\alpha }\mathrm{arccos}{\displaystyle \frac{-1}{4\alpha +1}},`$ $`L_1(\alpha )`$ $`=`$ $`{\displaystyle \frac{\sqrt{3\alpha +1}}{12}},L_2(\alpha )={\displaystyle \frac{\sqrt{2\alpha +1}}{6\sqrt{2}}}.`$ (3) Note that the sandwich action (2) already contains appropriate boundary contributions, such that $`S`$ is additive under the gluing of contiguous slices. In relation (2), we have used the rescaled cosmological constant, $`\lambda =k\mathrm{\Lambda }`$. At each time $`t`$ the physical states $`|g\rangle `$ are parametrized by piece-wise linear geometries, given by unlabelled triangulations $`g`$ of $`S^2`$ in terms of equilateral Euclidean triangles. For a finite spatial volume $`N`$ (counting the triangles in a spatial slice), the number of states is exponentially bounded as a function of $`N`$ and the vectors $`|g\rangle `$, defined to be orthogonal, span a finite-dimensional Hilbert space $`\mathcal{H}_N`$. 
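One easy consistency check on the volume functions $`L_1`$ and $`L_2`$: continuing $`\alpha \to -\alpha `$ and setting $`\alpha =1`$ (the Euclidean point), both tetrahedron volumes should reduce in magnitude to the volume $`1/(6\sqrt{2})`$ of a regular tetrahedron with unit edge length, the remaining factor of $`i`$ being what converts the oscillating weight into a real Euclidean one. A sketch using complex arithmetic:

```python
import cmath

def L1(alpha):  # volume of the 3-to-1 / 1-to-3 tetrahedra, in units of a^3
    return cmath.sqrt(3 * alpha + 1) / 12

def L2(alpha):  # volume of the 2-to-2 tetrahedra, in units of a^3
    return cmath.sqrt(2 * alpha + 1) / (6 * cmath.sqrt(2))

v_equilateral = 1 / (6 * 2 ** 0.5)   # regular tetrahedron, edge length 1

# Continue alpha -> -alpha at alpha = 1: both volumes become purely
# imaginary with the equilateral magnitude.
print(abs(L1(-1)), abs(L2(-1)), v_equilateral)
```

All three printed values coincide (≈ 0.1179), so at the Euclidean point the two Lorentzian building blocks degenerate to the single equilateral tetrahedron of ordinary dynamical triangulations.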
The transfer matrix $`\widehat{T}_N`$ will act on the Hilbert space $$H^{(N)}:=\bigoplus _{i=N_{\mathrm{min}}}^{N}\mathcal{H}_i,$$ (4) and the states $`|g\rangle `$ will be normalized according to $$\langle g_1|g_2\rangle =\frac{1}{C_{g_1}}\delta _{g_1,g_2},\sum _gC_g|g\rangle \langle g|=\widehat{1}.$$ (5) The symmetry factor $`C_g`$ is the order of the automorphism group of the two-dimensional triangulation $`g`$, which for large triangulations is almost always equal to 1. With each step $`\mathrm{\Delta }t=1`$ we can now associate a transfer matrix $`\widehat{T}_N`$ describing the evolution of the system from $`t`$ to $`t+1`$, with matrix elements $$\langle g_2|\widehat{T}_N(\alpha )|g_1\rangle \equiv G_\alpha (g_1,g_2;1)=\sum _{\mathrm{sandwich}(g_1\to g_2)}\text{e}^{i\mathrm{\Delta }S_\alpha }.$$ (6) The sum is taken over all distinct interpolating three-dimensional triangulations of the “sandwich” with boundary geometries $`g_1`$ and $`g_2`$, each with a spatial volume $`N`$. The propagator $`G_N(g_1,g_2;t)`$ for arbitrary time intervals $`t`$ is obtained by iterating the transfer matrix $`t`$ times, $$G_N(g_1,g_2;t)=\langle g_2|\widehat{T}_N^t|g_1\rangle ,$$ (7) and satisfies the semigroup property $$G_N(g_1,g_2;t_1+t_2)=\sum _gC_gG_N(g_2,g;t_2)G_N(g,g_1;t_1).$$ (8) The infinite-volume limit is obtained by letting $`N\to \mathrm{\infty }`$ in eqs. (6)-(8). A brief remark is in order on our notion of “time”: the label $`t`$ is to be thought of as the discretized analogue of proper time, as experienced by an idealized collection of freely falling observers. We do not claim that this is a distinguished notion of time for pure quantum gravity, but it is a possible choice, in the present case suggested by our regularization. Note that in continuum formulations the proper time gauge is not usually considered, because it is a gauge choice that – considered for arbitrary geometries – goes bad in an arbitrarily short time. 
This problem does not occur in the discrete case: by construction we only sum over space-time geometries for which there is a globally well-defined (discrete) “proper time”. The action $`S`$ associated with an entire space-time $`S^1\times S^2`$ of length $`t`$ in time-direction is obtained by summing expression (2) over all $`t^{\prime }=1,2,\mathrm{\dots },t`$ and identifying the two boundaries. The result can be expressed as a function of three geometric “bulk” variables, for example, the total number of vertices $`N_0`$, the total number of tetrahedra $`N_3`$ and $`t`$, $`S_\alpha (N_0,N_3,t)`$ $`=`$ $`N_0\left(4ak(K_1-K_2)-4a^3\lambda (L_1-L_2)\right)+N_3\left(akK_2-a^3\lambda L_2\right)`$ (9) $`+t\left(4ak(\pi \sqrt{\alpha }-2(K_1-K_2))+8a^3\lambda (L_1-L_2)\right).`$ Because of the well-known inequality $`N_0\le (N_3+10)/3`$, valid for all closed three-dimensional simplicial manifolds, this implies the boundedness of the discretized Lorentzian action at fixed three-volume. This result is analogous to what happens in Euclidean dynamical triangulations. We write the partition function as $$Z_\alpha (k,\lambda ,t)=\sum _{T\in 𝒯_t(S^1\times S^2)}\text{e}^{iS_\alpha (N_0(T),N_3(T),t(T))},$$ (10) with $`𝒯_t(S^1\times S^2)`$ denoting the set of all Lorentzian triangulations of $`S^1\times S^2`$ of length $`t`$. It will turn out that a necessary condition for the existence of a meaningful continuum limit is the exponential boundedness of the number of possible triangulations as a function of the space-time volume $`N_3`$: only if the growth is at most exponential in $`N_3`$ can this divergence potentially be counterbalanced by a cosmological constant term exponentially damped in $`N_3`$. In our case, exponential boundedness follows trivially from the same property for Euclidean triangulations (where it has been proven rigorously for $`d=3,4`$ ), since the Lorentzian space-times form a subset of the former. 
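The bound $`N_0\le (N_3+10)/3`$ is saturated by the “stacked spheres” discussed further below: starting from the boundary of the 4-simplex and repeatedly applying the 1-to-4 subdivision move (one new vertex placed inside a tetrahedron) preserves the equality at every step. A quick sketch, which also checks the Euler relation $`N_1=N_0+N_3`$ for closed 3-manifolds and shows the link-to-tetrahedron ratio approaching 4/3:

```python
# Boundary of the 4-simplex: 5 vertices, 10 links, 5 tetrahedra.
N0, N1, N3 = 5, 10, 5

for _ in range(1000):                 # repeated 1 -> 4 subdivision moves
    N0 += 1                           # one new vertex inside a tetrahedron
    N1 += 4                           # joined to the four old vertices
    N3 += 3                           # one tetrahedron replaced by four

assert N0 == (N3 + 10) / 3            # stacked spheres saturate the bound
assert N1 == N0 + N3                  # Euler relation for closed 3-manifolds
print(N1 / N3)                        # approaches 4/3 for large volumes
```

This is exactly the branched-polymer limit that dominates the Euclidean state sum at large coupling, and which the Lorentzian kinematical bound of the later discussion excludes.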
Note that the convergence of the partition function implies the absence of divergent “conformal modes”. As it stands, the sum (10) over complex amplitudes has little chance of converging, due to the contributions of an infinite number of triangulations with arbitrarily large volume $`N_3`$. In order to make it well-defined, one must perform a Wick rotation, just as in ordinary quantum field theory. Thanks to the presence of a distinguished global time variable in our model, we can associate a unique Euclidean triangulated space-time with every Lorentzian history contributing in (10). It is obtained by taking the same topological triangulation and changing the squared lengths of all time-like edges from $`-\alpha a^2`$ (Lorentzian) to $`\alpha a^2`$ (Euclidean), leaving the space-like edges unchanged. We can then use Regge’s prescription for calculating the (real) Euclidean action $`S_\alpha ^\mathrm{E}(N_0,N_3,t)`$ associated with the resulting Euclidean metric space-time (where $`\alpha `$ is always taken to be positive). After some algebra one verifies that by a suitable analytic continuation in the complex $`\alpha `$-plane from positive to negative real $`\alpha `$, the Euclidean and Lorentzian actions are related by $$S_{-\alpha }(N_0,N_3,t)=iS_\alpha ^\mathrm{E}(N_0,N_3,t),$$ (11) for $`\alpha >\frac{1}{2}`$. This latter inequality has its origin in the triangle inequality for Euclidean triangles: $`S^\mathrm{E}`$ is real only for $`\alpha \ge \frac{1}{2}`$, and the limit $`\alpha =\frac{1}{2}`$ corresponds to the degenerate case of totally collapsed triangles. Moreover, $`\alpha =-1`$ is the only point on the real axis in which the coefficient of $`t`$ in the Lorentzian action (9) vanishes (this corresponds to $`\alpha =1`$ in eq. (11)). 
In this case one rederives the familiar expression employed in equilateral Euclidean dynamical triangulations, namely, $$\frac{1}{i}S_{-1}\equiv S_1^\mathrm{E}=-ak(2\pi N_1-6N_3\mathrm{arccos}\frac{1}{3})+a^3\lambda N_3\frac{1}{6\sqrt{2}}.$$ (12) Our strategy for evaluating the partition function is now clear: for any choice of $`\alpha >\frac{1}{2}`$, continue (9) to $`-\alpha `$, so that $$\sum _{T\in 𝒯_t(S^1\times S^2)}\text{e}^{iS_\alpha (N_0,N_3,t)}\stackrel{\alpha \to -\alpha }{\longrightarrow }\sum _{T\in 𝒯_t(S^1\times S^2)}\text{e}^{-S_\alpha ^\mathrm{E}(N_0,N_3,t)}.$$ (13) Because of the exponential boundedness, the Wick-rotated Euclidean state sum in (13) is now convergent for suitable choices of the bare couplings $`k`$ and $`\lambda `$. We can therefore proceed in two ways: either attempt to perform the sum analytically, by solving the combinatorics of possible causal gluings of the tetrahedral building blocks (as has been done in $`d=2`$ ), or use Monte-Carlo methods to simulate the system at finite volume. Once the continuum limit has been performed, we can rotate back to Lorentzian signature by an analytic continuation of the continuum proper time $`T`$ (which in the case of canonical scaling is of the form $`T=at`$; not to be confused with the triangulation $`T`$) to $`iT`$. If we are only interested in vacuum expectation values of time-independent observables and the properties of the Hamiltonian, we do not need to perform the Wick rotation explicitly, just as in usual Euclidean quantum field theory. Let us now establish some properties of the discrete real transfer matrix $`\widehat{T}\equiv \widehat{T}(\alpha =1)`$ of our model that are necessary for the existence of a well-defined Hamiltonian, defined as $`\widehat{h}:=-\frac{1}{2a}\mathrm{log}\widehat{T}^2`$. These will be useful in any proof of the existence of a self-adjoint continuum Hamiltonian $`\widehat{H}`$. It is difficult to imagine boundedness and positivity arising in the limit from regularized models that do not have these properties. 
We will show that $`\widehat{T}_N`$ is symmetric, bounded for finite spatial volume $`N`$, and that the two-step transfer matrix $`\widehat{T}^2`$ is positive. Symmetry is obvious by inspection. The sandwich action (2) is symmetric under the exchange of in- and out-states (corresponding to $`N_{31}\leftrightarrow N_{13}`$). So is (6), because the counting of interpolating states does not depend on which of the geometries $`g_1`$, $`g_2`$ defines the in-state, say. The boundedness of $`\widehat{T}_N`$ for finite spatial volume $`N`$ follows from the finite dimensionality of the Hilbert space $`H^{(N)}`$ it acts on and the fact that there is only a finite number of possibilities to interpolate between two given spatial triangulations $`g_1`$ and $`g_2`$ in one step. Positivity of the two-step transfer matrix, $`\widehat{T}_N^2\ge 0`$, follows from the reflection positivity of our model under reflection with respect to planes of constant integer-$`t`$ (for regular lattices, this property is also referred to as site-reflection positivity, cf. ). In order to be able to define a Hamiltonian as $$\widehat{h}_N:=-\frac{1}{2a}\mathrm{log}\widehat{T}_N^2,$$ (14) we must make sure that the square of the transfer matrix is strictly positive, $`\widehat{T}_N^2>0`$. We do not expect that $`\widehat{T}_N`$ has any zero-eigenvectors, because this would imply the existence of a “hidden” symmetry of the discretized theory. It may of course happen that there are “accidental” zero-eigenvectors for certain values of $`N`$. In this case, we will adopt the standard procedure of removing the subspace $`𝒩^{(N)}`$ spanned by these vectors from the Hilbert space, resulting in a physical Hilbert space given by the quotient $`H_{ph}^{(N)}=H^{(N)}/𝒩^{(N)}`$. It should be emphasized that although the summation in the path integral is performed in the “Euclidean sector” of the theory, our construction is not a priori related to any path integral for Euclidean gravity proper. 
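The structural statements above — symmetry, semigroup composition, and positivity of the two-step transfer matrix — are easy to illustrate with a toy finite-dimensional stand-in. In the sketch below a random symmetric matrix with positive entries plays the role of $`\widehat{T}_N`$ (all symmetry factors $`C_g`$ set to 1, so composition is plain matrix multiplication); the one-step kernel composes correctly, and the spectrum of $`\widehat{T}^2`$ is strictly positive, so a Hamiltonian proportional to $`-\mathrm{log}\widehat{T}^2`$ can be read off:

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 6                                  # toy number of boundary geometries

# A random symmetric matrix with positive entries stands in for T_N.
T = rng.uniform(0.1, 1.0, size=(dim, dim))
T = 0.5 * (T + T.T)                      # symmetrize, as required by Eq. (6)

# Semigroup property, Eq. (8) with all C_g = 1: G(t1 + t2) = G(t2) G(t1)
G = lambda t: np.linalg.matrix_power(T, t)
assert np.allclose(G(5), G(3) @ G(2))

# T^2 is positive semidefinite by construction; for an invertible T it is
# strictly positive, so h ~ -log(T^2) is well defined on its spectrum.
eigvals = np.linalg.eigvalsh(T @ T)
print("smallest eigenvalue of T^2:", eigvals.min())
assert eigvals.min() > 0
```

The toy matrix of course carries none of the gravitational content; it only makes explicit which linear-algebra facts the existence of $`\widehat{h}_N`$ rests on.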
The point, already made in the two-dimensional case , is that we sum only over a selected class of geometries, which are equipped with a causal structure. Such a restriction incorporates the Lorentzian nature of gravity and has no analogue in Euclidean gravity. We therefore expect our Lorentzian statistical mechanics model to have a totally different phase structure from that of Euclidean dynamical triangulations. This expectation is corroborated by an analysis of the “extreme phases” of Lorentzian quantum gravity, to determine which configurations dominate the path integral $$Z_\alpha ^\mathrm{E}(k,\lambda ,t)=\sum _{T\in 𝒯_t(S^1\times S^2)}\text{e}^{-S_\alpha ^\mathrm{E}},$$ (15) for either very small or very large inverse Newton’s constant $`k>0`$. In order to make a direct comparison with the Euclidean analysis , we set without loss of generality $`\alpha =1`$ in eq. (15) and rewrite the Euclidean action (12) as $$S_1^\mathrm{E}=k_3N_3-k_1N_1,$$ (16) with the couplings $$k_1=2\pi ak,k_3=6ak\mathrm{arccos}\frac{1}{3}+a^3\lambda \frac{1}{6\sqrt{2}}.$$ (17) In the thermodynamic limit $`N_3\to \mathrm{\infty }`$, and assuming a scaling behaviour such that $`t/N_3\to 0`$, one derives kinematical bounds on the ratio of links and tetrahedra, $`\xi :=N_1/N_3`$, namely, $$1\le \xi \le \frac{5}{4}.$$ (18) This is to be contrasted with the analogous result in the Euclidean case, where $`1\le \xi \le \frac{4}{3}`$. It implies that the branched-polymer (or “stacked-sphere”) configurations, which are precisely characterized by $`\xi =\frac{4}{3}`$, and which dominate the Euclidean state sum at large $`k_1`$, cannot be realized in the Lorentzian setting. The opposite extreme, at small $`k_1`$, is associated with the saturation of the inequality $$N_1\le \frac{1}{2}N_0(N_0-1),$$ (19) and in the Euclidean theory goes by the name of “crumpled phase”. At equality, every vertex is connected to every other vertex, corresponding to a manifold with a very large Hausdorff dimension. 
Again, it is impossible to come anywhere near this phase in the continuum limit of the Lorentzian model. Instead of (19), we have now separate relations for the numbers $`N_1^{(\mathrm{sl})}`$ and $`N_1^{(\mathrm{tl})}`$ of space- and time-like edges, $$N_1^{(\mathrm{sl})}=\underset{t}{\sum }(3N_0(t)-6)=3N_0-6t,\qquad N_1^{(\mathrm{tl})}\le \underset{t}{\sum }N_0(t)N_0(t+1).$$ (20) Assuming canonical scaling, the right-hand side of ineq. (19) behaves like (length)<sup>6</sup>, whereas the second relation in (20) scales only like (length)<sup>5</sup>. We conclude that the phase structure of Lorentzian gravity must be very different from that of the Euclidean theory. In particular, the extreme branched-polymer and crumpled configurations are not allowed in the Lorentzian theory. This is another example of causal structure acting as a “regulator” of geometry. It also raises the hope that the mechanism governing the phase transition will be different, and potentially lead to a non-trivial continuum theory, in three as well as in four dimensions. Acknowledgements. J.A. acknowledges the support of MaPhySto – Center for Mathematical Physics and Stochastics – financed by the National Danish Research Foundation. J.J. acknowledges partial support through KBN grants no. 2P03B 019 17 and 2P03B 998 14.
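As a toy numerical illustration of the transfer-matrix construction above (a sketch only: the matrix `T` below is an arbitrary symmetric stand-in, not the actual gravitational transfer matrix), one can check that the two-step matrix is automatically positive semidefinite and extract a Hamiltonian via the standard definition $`\widehat{h}_N=-\frac{1}{2a}\mathrm{log}\widehat{T}_N^2`$ on its strictly positive part, which is exactly why strict positivity is needed:

```python
import numpy as np

a = 1.0                                   # lattice spacing (arbitrary units)
rng = np.random.default_rng(0)

# Toy stand-in for T_N on a finite-dimensional H^(N): any real symmetric matrix.
M = rng.standard_normal((6, 6))
T = (M + M.T) / 2

evals, evecs = np.linalg.eigh(T @ T)      # two-step transfer matrix T^2
assert np.all(evals > -1e-12)             # T^2 >= 0 holds automatically

# Quotient out accidental (near-)zero modes, then define the Hamiltonian
# on the strictly positive part: h = -(1/2a) log T^2.
keep = evals > 1e-12
h = evecs[:, keep] @ np.diag(-np.log(evals[keep]) / (2 * a)) @ evecs[:, keep].T
assert np.allclose(h, h.T)                # the Hamiltonian is symmetric
```

Discarding the near-zero eigenvalues mirrors the quotient by the subspace of accidental zero-eigenvectors described above.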
no-problem/0002/astro-ph0002493.html
ar5iv
text
# The stellar content of brightest cluster galaxies ## 1 Introduction Elliptical or cD Brightest Cluster Galaxies (BCGs) have long been used as cosmological probes, in attempts to determine parameters such as H<sub>0</sub> and q<sub>0</sub> (Sandage 1972; Sandage & Hardy 1973; Hoessel, Gunn & Thuan 1980; Sandage & Tammann 1990). These studies make use of the ease of selection of such objects at cosmological distances, which is a result of their bright optical luminosities and privileged locations defined by other galaxies and by the centroids of cluster X-ray emission. They also appeal to another, more surprising property of BCGs: they appear to be excellent standard candles, with Sandage & Hardy finding a scatter of only 0.28 mag in visual absolute magnitude, after correction for cluster Bautz-Morgan and richness class. Lauer & Postman , in a study using more modern CCD techniques, found a scatter of 0.33 mag in R-band absolute magnitudes of BCGs. They were able to reduce this scatter to 0.25 mag by using the relation of absolute magnitude with the ‘Structure Parameter’ $`\alpha `$ of Hoessel , which is defined as $$\alpha =\frac{d\mathrm{log}L_m}{d\mathrm{log}r}|_{r_m}$$ where $`r_m`$ is a metric radius of 9.6 kpc and $`L_m`$ is the luminosity within that metric radius. Hoessel explains $`\alpha `$ as being a dimensionless parametrization of galaxy size, and empirically it is found to increase with BCG luminosity. Indeed he found a scatter in $`\alpha `$-corrected BCG absolute magnitudes of just 0.21 mag, even lower than that of Lauer & Postman . This uniformity in BCG luminosities remains something of a mystery. There is indeed good reason for expecting a wider disparity in BCG properties than in those of other galaxy types. Hoessel and Lauer showed that $`\sim `$30% of BCGs have multiple nuclei, a frequency which is greater than would be predicted by chance superpositions, and which indicates that galactic cannibalism is very common in BCGs at the present epoch. 
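The structure parameter $`\alpha `$ introduced above is simply the log-log slope of the growth curve evaluated at the metric radius; a minimal numerical sketch is given below (the power-law-plus-core profile, core radius and normalisation are invented for illustration, not taken from any galaxy in the paper):

```python
import numpy as np

def structure_alpha(logr, logL, logr_m):
    """Hoessel's alpha: d log L / d log r evaluated at the metric radius."""
    dlogL = np.gradient(logL, logr)        # numerical log-log slope
    return float(np.interp(logr_m, logr, dlogL))

# Hypothetical growth curve L(r) ~ r^0.6 softened by a 2 kpc core
# (purely illustrative; not the photometry of any observed BCG).
r = np.logspace(-0.5, 2, 200)              # radii in kpc
L = (r / (r + 2.0)) * r**0.6
a = structure_alpha(np.log10(r), np.log10(L), np.log10(9.6))
assert 0.0 < a < 1.0                       # ~0.77 for this toy profile
```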
The continued accretion of cluster members would seem to ensure that BCGs as a population should have diverse properties, and it would seem probable that this should be reflected both in their total luminosity and in their composition. One motivation for the present paper is to test whether BCGs are more homogeneous or more diverse than the general population of elliptical galaxies. In this paper, we look at the stellar populations of BCGs, by measuring the strength of the 2.293$`\mu `$m CO absorption feature for 21 BCGs, and comparing the distribution of values with similar measurements for a large sample of field, group and cluster ellipticals (Mobasher & James 1996; James & Mobasher 1999; Mobasher & James 2000). CO strength contains information on recency of star formation, since it is very strong in supergiants (present 10<sup>7</sup>–10<sup>8</sup> years after a burst of star formation), strong in the cool AGB stars which contribute significantly to the near-IR light after 10<sup>8</sup>–10<sup>9</sup> years (Renzini & Buzzoni 1986; Oliva et al. 1995), and somewhat weaker in older populations. It also displays some metallicity dependence, being weak in very low metallicity globular clusters . This dependence was quantified by Doyon, Joseph & Wright , and further studied in Mobasher & James . Such studies of BCGs are particularly significant considering the apparent large-scale velocity flow found by Lauer & Postman . Using a sample of 119 BCGs out to a redshift of 15000 kms<sup>-1</sup>, they found the restframe defined by the galaxies to differ from that of the Cosmic Microwave Background by almost 700 kms<sup>-1</sup>. This result has been interpreted as evidence for a cosmological streaming flow, but an alternative explanation would be that BCG properties vary systematically around the sky, for example due to stellar population changes from galaxy to galaxy. This provides a further motivation for the present study. 
The organisation of this paper is as follows. Section 2 describes the selection of target galaxies, the observations, and the data reduction. Section 3 contains the main results, including a comparison of CO absorption strengths of BCGs and other elliptical galaxies, and correlations of CO strengths with other galaxy parameters. Section 4 summarises the main conclusions. ## 2 Sample Selection, Observations and Data Reduction The galaxy sample was selected from the BCG list of Lauer & Postman . All have measured recession velocities less than 15,000 kms<sup>-1</sup>, with R band photometry presented in Lauer & Postman . The observations presented here were carried out using the United Kingdom Infrared Telescope (UKIRT) during the 4 nights of 21–24 February 1999. The instrument used was the long-slit near-IR spectrometer CGS4, with the 40 line mm<sup>-1</sup> grating and the long-focal-length (300 mm) camera. The 4-pixel-wide slit was chosen, corresponding to a projected width on the sky of 2.4 arcsec. Working in 1st order at a central wavelength of 2.2 $`\mu m`$, this gave coverage of the entire K window. The CO absorption feature, required for this study, extends from 2.293 $`\mu m`$ (rest frame) into the K-band atmospheric cut-off. The principal uncertainty in determining the absorption depth comes from estimating the level and slope of the continuum shortward of this absorption which requires wavelength coverage down to at least 2.2 $`\mu m`$ and preferably to shorter wavelengths. There are many regions of the continuum free from lines even at this relatively low resolution. The effective resolution, including the degradation caused by the wide slit, is about 230. For each observation, the galaxy was centred on the slit by maximising the IR signal, using an automatic peak-up facility. 
Total on-chip integration times of 12 minutes were used for the brightest and most centrally-concentrated ellipticals while an integration time of 24 minutes was more typically required. During this time, the galaxy was slid up and down the slit at one minute intervals by 22 arcsec, giving two offset spectra which were subtracted to remove most of the sky emission. Moreover, the array was moved by 1 pixel in the spectral direction between integrations to enable bad pixel replacement in the final spectra. Stars of spectral types A0–A6, suitable for monitoring telluric absorption, were observed in the same way before and after each galaxy, with airmasses matching those of the galaxy observations as closely as possible. Flat fields and argon arc spectra were taken using the CGS4 calibration lamps. A total of 21 brightest cluster galaxies was observed. The data reduction was performed using the FIGARO package in the STARLINK environment. The spectra were flatfielded and polynomials fitted to estimate and remove the sky background. These spectra were then shifted to the rest frame of the galaxy, using redshifts from Lauer & Postman . The atmospheric transmissions were corrected by dividing each spectrum with the spectrum of the star observed closely in time to the galaxy, and at a similar airmass. The resulting spectrum was converted into a normalised, rectified spectrum by fitting a power-law to featureless sections of the continuum and dividing the whole spectrum by this power-law, extrapolated over the full wavelength range. Two rectified spectra are shown in Fig. 1. The apparent emission features at 2.14 $`\mu `$m and 2.10 $`\mu `$m are artefacts caused by absorptions in the A stars used for atmospheric transmission correction, and appear in different positions because of the restframe corrections. To measure the depth of the CO absorption feature, the procedure outlined in James & Mobasher is used. 
The restframe, rectified spectra were rebinned to a common wavelength range and number of pixels, to avoid rounding errors in the effective wavelength range sampled by a given number of pixels. The CO strength for each spectrum was determined using the method of Puxley, Doyon & Ward . They advocate the use of an equivalent width, CO<sub>EW</sub>, which is determined within the CO absorption feature between rest-frame wavelengths of 2.293 $`\mu `$m and 2.320 $`\mu `$m. This wavelength range was found by Puxley et al. to give maximum sensitivity to stellar population variations, and can be used for galaxies with recession velocities of up to $`\sim `$18000 kms<sup>-1</sup> before the spectral region of interest shifts out of the usable K window. This is not the case for the CO index CO<sub>sp</sub> used by Doyon et al. , which extends over a restframe wavelength range of 2.320–2.400 $`\mu `$m and would have been affected by large and uncertain telluric absorption and emission for the highest redshift galaxies in the present sample. Thus we only present CO<sub>EW</sub> values in this paper. A further advantage of the CO<sub>EW</sub> definition of Puxley et al. is that it is almost completely unaffected by velocity dispersion effects, due to the wide range of wavelength over which the absorption is measured. Puxley et al. find the velocity dispersion corrections to be insignificant, which we confirmed by smoothing low-velocity-dispersion galaxy spectra to an effective velocity dispersion of 500 kms<sup>-1</sup>. The resulting change in CO<sub>EW</sub> was $`\sim `$0.25%, very much smaller than the random errors. The errors on the CO<sub>EW</sub> values include three components. The first was calculated from the standard deviation in the fitted continuum points, on the assumption that the noise level remains constant through the CO absorption, giving an error on both the continuum level and on the mean level in the CO absorption, which were added in quadrature. 
The second error component comes from the formal error provided by the continuum fitting procedure. This procedure could leave a residual tilt or curvature in the spectrum, and the formal error was used to quantify this contribution. The final component was an estimate of the error induced by redshift and wavelength calibration uncertainties. All three errors were of similar sizes, with only the first varying from spectrum to spectrum, as a result of signal-to-noise variations (see Fig. 1), and all three were added in quadrature to give the value quoted in Table 1. ## 3 Results The equivalent widths of CO absorption features for the sample of BCGs observed in this study are presented in Table 1. The data included in this table are Abell catalogue numbers (column 1), BCG names (column 2), CO<sub>EW</sub> values with 1–$`\sigma `$ errors (column 3), recession velocity in kms<sup>-1</sup> (column 4), absolute R-band magnitude corresponding to the metric luminosity $`L_m`$ (column 5), structure parameter ($`\alpha `$) (column 6) and the magnitude residual relative to the best-fit $`L_m`$$`\alpha `$ relation (column 7), (columns 4–7 are all taken from Lauer & Postman , who assumed a Hubble constant of 80 kms<sup>-1</sup>Mpc<sup>-1</sup>). Columns 8 and 9 contain velocity dispersions and Mg<sub>2</sub> metallicity indices, where available, from Faber et al. . We find the mean CO<sub>EW</sub> value for the 21 BCGs (3.35$`\pm `$0.03) to be effectively identical to that of the Coma cluster ellipticals (3.37$`\pm `$0.04) , and to that of the 31 ellipticals from a range of clusters discussed by James & Mobasher (3.29$`\pm `$0.06). The cluster and BCG distributions lie between the distributions of ‘isolated’ and ‘group’ field ellipticals discussed by James & Mobasher and shown in Fig. 2a. 
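The CO<sub>EW</sub> measurement described in the previous section amounts to integrating the depth of the rectified spectrum over the 2.293–2.320 $`\mu `$m band. A sketch on a synthetic spectrum follows; the standard rectified-spectrum equivalent-width integral is assumed here, and may differ in detail from the exact Puxley et al. prescription:

```python
import numpy as np

def co_ew(wavelength_um, rectified_flux, lo=2.293, hi=2.320):
    """Equivalent width (nm) of the CO band in a rectified restframe spectrum."""
    band = (wavelength_um >= lo) & (wavelength_um <= hi)
    dwl = wavelength_um[1] - wavelength_um[0]   # assumes a uniform grid
    # EW = integral of (1 - S) over the band; convert micron -> nm.
    return float(np.sum(1.0 - rectified_flux[band]) * dwl * 1000.0)

# Synthetic rectified spectrum: unit continuum with a 15%-deep CO trough.
wl = np.linspace(2.20, 2.32, 600)
flux = np.where(wl >= 2.293, 0.85, 1.0)
ew = co_ew(wl, flux)
assert 3.5 < ew < 4.5   # 0.15 depth over ~27 nm gives ~4 nm
```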
The major difference between the BCG CO<sub>EW</sub> values and those of other ellipticals is the remarkably small range in the former: the standard deviation for BCGs is 0.156, compared to 0.240 for Coma ellipticals, 0.337 for general cluster galaxies, and 0.422 for cluster plus field ellipticals. Indeed, the scatter in BCG CO absorption strengths is that predicted from the error estimates on the individual CO<sub>EW</sub> values, and so the intrinsic scatter may be much smaller still. Given the small number of BCG galaxies, a Kolmogorov-Smirnov test cannot distinguish between the distributions of BCG and cluster or Coma galaxies in Figs. 2b and 2c, but there is less than 10% chance that the BCGs are drawn from the same parent population as all the non-BCG ellipticals, and less than 1% chance that they are from the same population as field ellipticals (Fig. 2a). The BCGs are drawn from a much narrower region of the galaxy luminosity function than are the comparison samples in Fig. 2, which could affect the interpretation of this result. The 21 BCGs have a range in M<sub>R</sub> of -22.0 to -23.1, little more than a magnitude. R-band photometry is not available for all the comparison galaxies, but good estimates can be made from published optical and near-IR photometry, leading to an estimated range of M<sub>R</sub> of -19.4 to -22.8 for the Coma cluster ellipticals, and -19.8 to -22.5 for the field and cluster ellipticals discussed by James and Mobasher . We investigated whether the differences in CO<sub>EW</sub> scatter shown in Fig. 2 result from these differences in luminosity range by regressing CO<sub>EW</sub> on absolute magnitude, and studying the distributions of CO<sub>EW</sub> residuals about the best-fit lines. The distributions of these residuals are shown in Fig. 3. 
The dashed, diagonally shaded columns represent the residuals for the BCGs; the thick, dotted columns are those for the Coma cluster galaxies; and the solid lines represent the residuals for the field, group and cluster galaxies from James and Mobasher . The standard deviations of the CO<sub>EW</sub> residuals are 0.133 nm for the BCGs, 0.221 nm for the Coma ellipticals, and 0.419 nm for the cluster plus field ellipticals. This reinforces the conclusion from Fig. 2 that the BCGs have substantially more homogeneous CO strengths than the other elliptical galaxies studied, and this result does not appear to be a selection effect caused by the small luminosity range of the BCGs. This uniformity in CO<sub>EW</sub> values is the main result of this paper, and it is important to consider what it implies in terms of differences between BCGs and other ellipticals. Both high metallicity and recency of star formation are expected to increase CO<sub>EW</sub> values. The effect of metallicity on CO<sub>EW</sub> values can be estimated for the galaxies with measured Mg<sub>2</sub> indices, using the following method. From Fig. 37 of Worthey , a change in \[Fe/H\] from -0.25 to 0.00 changes the Mg<sub>2</sub> index from 0.216 to 0.258, and the change is approximately linear over the modelled range. Thus we infer a relation of the form $$\delta Mg_2=0.168\delta [Fe/H].$$ Doyon et al. find a relation between \[Fe/H\] and their CO index CO<sub>sp</sub>, $$\delta CO_{sp}=0.11\delta [Fe/H],$$ and from the definitions in Puxley et al. it is straightforward to convert from the index CO<sub>sp</sub> to CO<sub>EW</sub>. Then, the measured scatter in Mg<sub>2</sub> index of 0.029 for the BCGs should cause a scatter of 0.060 nm in CO<sub>EW</sub>, 38% of the observed scatter. 
For Coma ellipticals, the measured Mg<sub>2</sub> scatter is 0.024, equivalent to a scatter of 0.049 nm in CO<sub>EW</sub>, 20% of that observed, and for the field and cluster sample, the Mg<sub>2</sub> scatter is 0.030, and the predicted CO<sub>EW</sub> scatter 0.062 nm, 15% of that observed. Note also that the scatters in Mg<sub>2</sub> values are very similar in the three subsamples, whereas they have very different CO<sub>EW</sub> distributions. Thus, we conclude that metallicity differences have little effect on the measured CO<sub>EW</sub> values for the elliptical galaxies studied here, and propose that star formation history is the dominant factor causing the larger scatter for non-BCG ellipticals. If so, the differences in the distributions of CO<sub>EW</sub>, shown in Fig. 2 would be the result of wider variations in star formation history for general field and cluster ellipticals than for the BCGs. This indicates that BCGs formed their stars very early; if there has been more recent star formation in these galaxies then the rate of star formation as a function of epoch must have been very uniform from galaxy to galaxy. Given the narrow range in BCG CO<sub>EW</sub> values, it is unrealistic to expect very strong correlations with other BCG parameters. Nevertheless, Fig. 4 does show a good correlation with absolute R-band magnitude in a 10 kpc metric aperture, M<sub>R</sub>, with a correlation coefficient of 0.51 and a probability of 98.4% that this represents a true correlation (i.e. 1.6% probability that it could arise by chance). This is significant enough to be useful as a distance indicator: the scatter in M<sub>R</sub> for the 21 galaxies observed is 0.326 mag, which reduces to 0.280 mag when the M<sub>R</sub> values are corrected for the CO<sub>EW</sub> effect. 
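The regress-and-compare-residuals procedure used above can be sketched as follows; the data are randomly generated stand-ins (not the paper's measurements), chosen only to mimic a narrow-luminosity BCG-like set against a wider comparison sample:

```python
import numpy as np

def residual_scatter(abs_mag, co_ew_vals):
    """Std. dev. of CO_EW residuals about the best-fit line on absolute magnitude."""
    slope, intercept = np.polyfit(abs_mag, co_ew_vals, 1)
    return float(np.std(co_ew_vals - (slope * abs_mag + intercept)))

rng = np.random.default_rng(1)
# Stand-in samples with intrinsic scatters loosely inspired by the quoted values.
m_bcg = rng.uniform(-23.1, -22.0, 21)
co_bcg = 3.35 - 0.1 * (m_bcg + 22.5) + rng.normal(0.0, 0.13, 21)
m_all = rng.uniform(-22.8, -19.4, 60)
co_all = 3.35 - 0.1 * (m_all + 21.0) + rng.normal(0.0, 0.40, 60)

# The BCG-like residual scatter comes out much smaller, as in Fig. 3.
assert residual_scatter(m_bcg, co_bcg) < residual_scatter(m_all, co_all)
```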
The slope of the regression line of M<sub>R</sub> on CO<sub>EW</sub> is somewhat smaller than that for the trend in M<sub>K</sub> (total K-band absolute magnitude) vs CO<sub>EW</sub> for Coma cluster ellipticals , at -1.1$`\pm `$0.5 mag/nm cf. -1.6$`\pm `$0.7 mag/nm for the Coma galaxies. However, this difference is not statistically significant ($`\sim `$0.6$`\sigma `$). It is not possible to determine whether the absolute magnitude–CO<sub>EW</sub> relations are consistent for the various samples because of the lack of homogeneous photometry, and the consequent need for large and uncertain colour and aperture corrections. Similarly, there is a strong correlation between CO<sub>EW</sub> and residuals (dM<sub>α</sub>) about the relation of M<sub>R</sub> with structure parameter $`\alpha `$ (Fig. 5), in the sense that galaxies with high CO<sub>EW</sub> tend to be bright relative to the mean relation (correlation coefficient 0.60, significance 99.6%). The residuals (dM<sub>α</sub>) are reduced from 0.243 mag to 0.195 mag by correcting for the trend with CO<sub>EW</sub> shown in Fig. 5. There is no correlation between CO<sub>EW</sub> and the structure parameter $`\alpha `$ itself. Given the trends found in Figs. 4 & 5, it is instructive to explore whether these effects could cause the putative streaming flow signal detected by Lauer & Postman using the full sample of BCGs. However, we find no significant correlation between CO<sub>EW</sub> and direction on the sky (Fig. 6). This implies that the Lauer & Postman apparent detection of a bulk flow was not an artefact of differing stellar populations between sample galaxies, although we have of course only looked at a small fraction (18%) of their sample. The trends in Figs. 4 & 5 are both in the sense that brighter galaxies have higher indices: that in Fig. 4 could be a consequence of a metallicity–absolute magnitude relation, and there is indeed evidence of a weak correlation of CO<sub>EW</sub> with metallicity (Fig. 7).
The correlation coefficient here is 0.52, but the relation has only 80% significance due to only 8 BCGs having tabulated Mg<sub>2</sub> values. Fig. 7 also shows the trend in CO<sub>EW</sub> with metallicity for 31 Coma cluster galaxies . The BCGs clearly lie at higher mean metallicity than do the Coma cluster ellipticals (mean Mg<sub>2</sub> values 0.328 for the BCGs and 0.294 for the Coma ellipticals). Using the relation between Mg<sub>2</sub> and \[Fe/H\] from Worthey , and that between \[Fe/H\] and CO absorption strength found by Doyon et al. , this predicts a difference in mean CO<sub>EW</sub> of 0.05 nm, compared to the observed difference of 0.04$`\pm `$0.07 nm in the same sense. The small variation found here confirms the weakness of the dependence of CO<sub>EW</sub> on metallicity, as found by Doyon et al. , at least at the high metallicity values typical of centres of bright galaxies, and also confirms our earlier conclusion that star formation history is the dominant effect in determining CO<sub>EW</sub> strength. This is further confirmed by the wide spread in the CO<sub>EW</sub> values of isolated and group ellipticals , with no corresponding change in their metallicity. Finally, we find a weak correlation of CO<sub>EW</sub> with velocity dispersion for 14 galaxies with data in Table 1 (Fig. 8). This corresponds to a correlation coefficient of 0.294, and is significant at the 69% level. The slope and correlation coefficient are the same as was found for 31 Coma cluster ellipticals, also plotted in Fig. 8, but the mean correlation for the BCGs is again offset, to higher velocity dispersion at a given CO<sub>EW</sub>. The mean CO<sub>EW</sub> is almost identical for the two samples, whilst the BCGs have a much higher average velocity dispersion, and hence mass, as expected. 
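The scatter propagation and the slope comparison in this section are simple chained arithmetic and can be checked directly. Note that the CO<sub>sp</sub>-to-CO<sub>EW</sub> conversion factor is not quoted in the text, so the value used below is back-inferred (an assumption) to reproduce the three quoted scatters:

```python
import math

# Chain: Mg2 scatter -> [Fe/H] scatter -> CO_sp scatter -> CO_EW scatter (nm).
MG2_PER_FEH = 0.168       # from Worthey's models, as quoted in the text
COSP_PER_FEH = 0.11       # Doyon et al., as quoted in the text
COEW_NM_PER_COSP = 3.15   # NOT quoted; back-inferred assumption (see above)

def coew_scatter(mg2_scatter):
    return mg2_scatter / MG2_PER_FEH * COSP_PER_FEH * COEW_NM_PER_COSP

# BCGs, Coma ellipticals, field+cluster ellipticals: quoted predicted scatters.
for mg2, quoted in [(0.029, 0.060), (0.024, 0.049), (0.030, 0.062)]:
    assert abs(coew_scatter(mg2) - quoted) < 0.002

# The ~0.6 sigma slope comparison: errors combined in quadrature.
sigma = abs(-1.1 - (-1.6)) / math.hypot(0.5, 0.7)
assert 0.5 < sigma < 0.7
```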
## 4 Conclusions We find that BCGs are much more homogeneous in evolved red stellar content than ellipticals overall, and BCGs are somewhat more homogeneous than Coma cluster ellipticals. The measured scatter in the CO<sub>EW</sub> indices for BCGs is comparable to the measurement errors. We interpret this as implying a more uniform and probably earlier star formation history for BCGs than for normal ellipticals. Metallicity does not appear to be the controlling parameter of CO absorption strength for elliptical galaxies. Absolute magnitudes, and magnitude residuals relative to the structure parameter relation of Hoessel correlate well with CO absorption depth. This may imply the presence of an additional intermediate-age population, or a higher metallicity population, in the galaxies which are over-luminous relative to the mean relation. However, this effect is very small and such populations, if present, must be weaker than in most field and cluster ellipticals, given the high degree of homogeneity in BCG CO<sub>EW</sub> values. It may be possible to use these correlations to define higher precision distance indicators for BCGs, which could be used for galaxies with redshifts up to 18,000 kms<sup>-1</sup>, as is indicated by the reduced scatter in R-band absolute magnitude, and the reduced scatter about the structure parameter relation after correction for the correlation with CO<sub>EW</sub>, discussed in section 3. Recent ROSAT observations reveal that a significant number of the Lauer & Postman galaxies do not lie at the X-ray centroids of their clusters (Paul Lynam, private communication), and there may be better candidates for the dominant central cluster galaxy. Thus our conclusions may refer more generally to bright galaxies towards cluster centres than to individual BCGs inhabiting the very centre of the cluster potential. ## 5 Acknowledgements We acknowledge the anonymous referee for many useful suggestions. 
The United Kingdom Infrared Telescope is operated by the Joint Astronomy Centre on behalf of the UK Particle Physics and Astronomy Research Council.
no-problem/0002/math0002185.html
ar5iv
text
# Amenable actions and exactness for discrete groups ## Abstract. It is proved that a discrete group $`G`$ is exact if and only if its left translation action on the Stone-Čech compactification is amenable. Combining this with an unpublished result of Gromov, we have the existence of non-exact discrete groups. The author is supported by JSPS In \[KW\], Kirchberg and Wassermann discussed exactness for groups. A discrete group $`G`$ is said to be exact if its reduced group $`C^{*}`$-algebra $`C_\lambda ^{*}(G)`$ is exact. Throughout this paper, $`G`$ always means a discrete group and we identify $`G`$ with the corresponding convolution operators on $`\ell _2(G)`$. Amenability of a group action was discussed by Anantharaman-Delaroche and Renault in \[ADR\]. The left translation action of a group $`G`$ on its Stone-Čech compactification $`\beta G`$ was considered by Higson and Roe in \[HR\]. This action is amenable if and only if the uniform Roe algebra $$UC^{*}(G):=C^{*}(\ell _{\infty }(G),G)=\overline{\mathrm{span}}\{s\ell _{\infty }(G):s\in G\}\subset 𝔹(\ell _2(G))$$ is nuclear. Since a $`C^{*}`$-subalgebra of an exact $`C^{*}`$-algebra is exact, $`C_\lambda ^{*}(G)`$ is exact if $`UC^{*}(G)`$ is nuclear. In this article, we will prove the converse. A function $`u:G\times G\to \mathbb{C}`$ is called a positive definite kernel if the matrix $`[u(s_i,s_j)]\in 𝕄_n`$ is positive for any $`n\in \mathbb{N}`$ and $`s_1,\dots ,s_n\in G`$. If $`u`$ is a positive definite kernel on $`G\times G`$ such that $`u(s,s)\le 1`$ for all $`s\in G`$, then the Schur multiplier $`\theta _u`$ on $`𝔹(\ell _2(G))`$ defined by $$\theta _u(x)=[u(s,t)x_{s,t}]_{s,t\in G}$$ for $`x=[x_{s,t}]\in 𝔹(\ell _2(G))`$ is a completely positive contraction. (See Section 3.6 in \[Pa\].) ###### Lemma 1 (Section 5 in \[Pa\]). Let $`B`$ be a $`C^{*}`$-algebra and $`n\in \mathbb{N}`$. 
Then, the map $$\mathrm{CP}(B,𝕄_n)\ni \varphi \mapsto f_\varphi \in (𝕄_n(B))_+^{*}$$ defined by $$f_\varphi (X)=\underset{i,j}{\sum }\varphi (x_{ij})_{ij}$$ for $`X=[x_{ij}]\in 𝕄_n(B)`$ gives a bijective correspondence between the set of all completely positive maps $`\mathrm{CP}(B,𝕄_n)`$ from $`B`$ to $`𝕄_n`$ and the set of all positive linear functionals $`(𝕄_n(B))_+^{*}`$ on $`𝕄_n(B)`$. For vectors $`\xi `$ and $`\eta `$ in a Hilbert space $`ℋ`$, we define a linear functional $`\omega _{\xi ,\eta }`$ on $`𝔹(ℋ)`$ by $`\omega _{\xi ,\eta }(x)=(x\xi ,\eta )`$ for $`x\in 𝔹(ℋ)`$. For a subset $`ℋ_0`$ in $`ℋ`$, we denote by $`𝒱(ℋ_0)`$ the (possibly non-closed) linear span of $`\{\omega _{\xi ,\eta }:\xi ,\eta \in ℋ_0\}`$. The following is a variation of Kirchberg’s theorem. ###### Lemma 2. Let $`A`$ be a unital exact $`C^{*}`$-subalgebra of $`𝔹(ℋ)`$ and let $`ℋ_0`$ be a total subset in $`ℋ`$. Then, for any finite subset $`E\subset A`$ and $`\epsilon >0`$, we have $`\theta :A\to 𝔹(ℋ)`$ such that 1. $`\theta `$ is of finite rank and unital completely positive, 2. $`\|\theta (x)-x\|<\epsilon `$ for all $`x\in E`$, 3. there are $`f_k\in 𝒱(ℋ_0)`$ and operators $`y_k`$ in $`𝔹(ℋ)`$ such that $$\theta (x)=\underset{k=1}{\overset{d}{\sum }}f_k(x)y_k$$ for $`x\in A`$. ###### Proof. We may assume that $`1\in E`$. Since $`A`$ is exact, it follows from Kirchberg’s theorem \[K, W\] that the inclusion map of $`A`$ into $`𝔹(ℋ)`$ is nuclear. Thus, there are $`n\in \mathbb{N}`$ and unital completely positive maps $`\varphi :𝔹(ℋ)\to 𝕄_n`$ and $`\psi :𝕄_n\to 𝔹(ℋ)`$ such that $$\|\psi \varphi (x)-x\|<\epsilon /2\text{ for all }x\in E.$$ Let $`f_\varphi \in (𝕄_n(𝔹(ℋ)))^{*}=𝔹(ℋ^n)^{*}`$ be the corresponding linear functional defined as in Lemma 1. Since $`𝒱(ℋ_0^n)\cap 𝔹(ℋ^n)_+^{*}`$ is weak-$`*`$ dense in $`𝔹(ℋ^n)_+^{*}`$, we can approximate $`f_\varphi `$ by linear functionals in $`𝒱(ℋ_0^n)\cap 𝔹(ℋ^n)_+^{*}`$ in the weak-$`*`$ topology. It follows that we can approximate $`\varphi `$ in the point-norm topology by completely positive maps $`\varphi ^{\prime }`$ such that $`\varphi ^{\prime }(\cdot )_{ij}`$ is in $`𝒱(ℋ_0)`$. 
Thus, for arbitrary $`0<\delta <1`$, we may find such $`\varphi ^{\prime }`$ with $$\|\varphi ^{\prime }(x)-\varphi (x)\|<\delta \text{ for all }x\in E.$$ Let $`p=\varphi ^{\prime }(1)`$. Since $`1\in E`$ and $`0<\delta <1`$, $`p`$ is invertible. Thus, we can define a unital completely positive map $`\varphi ^{\prime \prime }:A\to 𝕄_n`$ by $$\varphi ^{\prime \prime }(x)=p^{-1/2}\varphi ^{\prime }(x)p^{-1/2}$$ for $`x\in A`$. Taking $`\delta >0`$ sufficiently small, we have $$\|\varphi ^{\prime \prime }(x)-\varphi (x)\|<\epsilon /2\text{ for all }x\in E.$$ Finally, put $`\theta =\psi \varphi ^{\prime \prime }`$ and we are done. ∎ The following was inspired by the work of Guentner and Kaminker \[GK\]. ###### Theorem 3. For a discrete group $`G`$, the following are equivalent. 1. The reduced group $`C^{*}`$-algebra $`C_\lambda ^{*}(G)`$ is exact. 2. For any finite subset $`E\subset G`$ and any $`\epsilon >0`$, there are a finite subset $`F\subset G`$ and a positive definite kernel $`u:G\times G\to \mathbb{C}`$ such that $$u(s,t)\ne 0\text{ only if }st^{-1}\in F$$ and that $$|1-u(s,t)|<\epsilon \text{ if }st^{-1}\in E.$$ 3. The uniform Roe algebra $`UC^{*}(G)`$ is nuclear. ###### Proof. (i)$`\Rightarrow `$(ii). We follow the proof of Theorem 3.1 in \[GK\]. We give ourselves a finite subset $`E\subset G`$ and $`\epsilon >0`$. By the assumption, there is $`\theta :C_\lambda ^{*}(G)\to 𝔹(\ell _2(G))`$ satisfying the conditions in Lemma 2 for $`E\subset C_\lambda ^{*}(G)`$, $`\epsilon `$ and $`ℋ_0=\{\delta _s:s\in G\}`$. If $$\theta =\underset{k=1}{\overset{d}{\sum }}\omega _{\delta _{p(k)},\delta _{q(k)}}y_k$$ for $`p(k),q(k)\in G`$, then we put $`F=\{q(k)p(k)^{-1}:k=1,\dots ,d\}`$. We define $`u:G\times G\to \mathbb{C}`$ by $$u(s,t)=(\delta _s,\theta (st^{-1})\delta _t)$$ for $`s,t\in G`$. It is not hard to check that $`u`$ has the desired properties. (ii)$`\Rightarrow `$(iii). Let $`\{E_i\}_{i\in I}`$ be an increasing net of finite subsets of $`G`$ containing the unit $`e`$. 
By the assumption, there are finite subsets $`F_i\subset G`$ and a net of positive definite kernels $`\{u_i\}_{i\in I}`$ such that $$u_i(s,t)\ne 0\text{ only if }st^{-1}\in F_i$$ and that $$|1-u_i(s,t)|<|E_i|^{-1}\text{ if }st^{-1}\in E_i.$$ We may assume that $`u_i(s,s)\le 1`$ for all $`s\in G`$. Let $`\theta _i:𝔹(\ell _2(G))\to 𝔹(\ell _2(G))`$ be the Schur multiplier associated with the positive definite kernel $`u_i`$. Then, $`\theta _i`$’s are completely positive contractions and it can be seen that $$\mathrm{ran}\theta _i\subset \mathrm{span}\{s\ell _{\infty }(G):s\in F_i\}\subset UC^{*}(G)$$ and that $$\underset{i\in I}{lim}\|\theta _i(x)-x\|=0\text{ for all }x\in UC^{*}(G).$$ Let $`\mathrm{\Phi }`$ be the restriction map from $`𝔹(\ell _2(G))`$ onto $`\ell _{\infty }(G)`$, i.e., $`\mathrm{\Phi }(x)(s)=(x\delta _s,\delta _s)`$ for $`x\in 𝔹(\ell _2(G))`$. For $`i\in I`$ and $`s\in F_i`$, we define a complete contraction $`\sigma _i^s:𝔹(\ell _2(G))\to \ell _{\infty }(G)`$ by $$\sigma _i^s(x)=\mathrm{\Phi }(s^{-1}\theta _i(x))$$ for $`x\in 𝔹(\ell _2(G))`$. Then, we have $`\theta _i(x)=\sum _{s\in F_i}s\sigma _i^s(x)`$ for $`i\in I`$ and $`x\in 𝔹(\ell _2(G))`$. To prove that $`UC^{*}(G)`$ is nuclear, we take a unital $`C^{*}`$-algebra $`B`$. It suffices to show that $`UC^{*}(G)\otimes _{\mathrm{min}}B=UC^{*}(G)\otimes _{\mathrm{max}}B`$. Let $$Q:UC^{*}(G)\otimes _{\mathrm{max}}B\to UC^{*}(G)\otimes _{\mathrm{min}}B$$ be the canonical quotient map. Since $`\ell _{\infty }(G)`$ is nuclear, we observe that $$\sigma _i^s\otimes \mathrm{id}_B:UC^{*}(G)\otimes _{\mathrm{min}}B\to \ell _{\infty }(G)\otimes _{\mathrm{min}}B\subset UC^{*}(G)\otimes _{\mathrm{max}}B$$ is a well-defined contraction. Since $`\theta _i`$’s are completely positive contractions, so are $$\theta _i\otimes \mathrm{id}_B:UC^{*}(G)\otimes _{\mathrm{max}}B\to UC^{*}(G)\otimes _{\mathrm{max}}B$$ (see Theorem 10.8 in \[Pa\]) and we have $$\underset{i\in I}{lim}\|\theta _i\otimes \mathrm{id}_B(z)-z\|=0\text{ for all }z\in UC^{*}(G)\otimes _{\mathrm{max}}B.$$ Combining this with the factorization $$\theta _i\otimes \mathrm{id}_B(z)=\underset{s\in F_i}{\sum }(s\otimes 1)(\sigma _i^s\otimes \mathrm{id}_B(Q(z))),$$ we see that $`\mathrm{ker}Q=\{0\}`$. (iii)$`\Rightarrow `$(i). 
This is obvious. ∎ ###### Remark 4. Recently, Gromov found examples of finitely presented groups which are not uniformly embeddable into Hilbert spaces \[G\]. As was suggested in \[GK\], these examples of Gromov are indeed not exact since the condition (ii) in Theorem 3 assures uniform embeddings. Our definition of exactness for discrete groups is different from the original one \[KW\], but this is justified by Theorem 5.2 in \[KW\]. Also, we can reprove this using Theorem 3. Indeed, for a closed 2-sided ideal $`I`$ in a $`C^{*}`$-algebra $`A`$, the corresponding sequence $$0\to UC^{*}(G;I)\to UC^{*}(G;A)\to UC^{*}(G;A/I)\to 0$$ is exact if the condition (ii) in Theorem 3 holds, where for a $`C^{*}`$-algebra $`B\subset 𝔹(ℋ)`$, we set $$UC^{*}(G;B)=C^{*}(\ell _{\infty }(G;B),1_ℋ\otimes \lambda (G))\subset 𝔹(ℋ\otimes \ell _2(G)).$$ Our definition of amenable action is different from Definition 2.2 in \[HR\] and there is a delicate problem when we are dealing with non-second-countable spaces, but this is justified when $`G`$ is countable. Indeed, let $`u`$ be a positive definite kernel as in the condition (ii) in Theorem 3. We may assume that $`u(s,s)=1`$ for all $`s\in G`$. Regarding $`u`$ as a positive element in $`UC^{*}(G)`$, we let $`\xi _s=u^{1/2}\delta _s\in \ell _2(G)`$. Then, we have $`(\xi _t,\xi _s)=u(s,t)`$ for $`s,t\in G`$. Now, define $`\mu :G\to \ell _1(G)`$ by $`\mu _s(t)=|\xi _s(t^{-1}s)|^2`$ for $`s,t\in G`$. It can be verified that $`\|\mu _s\|_1=\|\xi _s\|_2^2=1`$ and that $$\|s\mu _t-\mu _{st}\|_1=\||\xi _t|^2-|\xi _{st}|^2\|_1\le \||\xi _t|-|\xi _{st}|\|_2\||\xi _t|+|\xi _{st}|\|_2\le 2\sqrt{2\epsilon }$$ for all $`s\in E`$ and $`t\in G`$. We observe that the range of $`\mu `$ is relatively weakly compact in $`\ell _1(G)`$ since $`u^{1/2}`$ is in $`UC^{*}(G)`$. Thus, we can extend $`\mu `$ to the Stone-Čech compactification $`\beta G`$ by continuity. This completes the proof of our claim. 
From the theory of exact operator spaces developed by Pisier \[Pi\], it is enough to check the condition (ii) in Theorem 3 for finite subsets $`E`$ that contain the unit $`e`$ and are contained in a given set of generators of $`G`$. See \[Y, H, HR, GK\] for the connection to the Novikov conjecture.
# Coexistence of thermal noise and squeezing in the intensity fluctuations of small laser diodes ## I Introduction In the early days of laser physics, one of the most fundamental properties attributed to lasers was the reduction of photon number fluctuations below thermal levels at threshold. More recently, the possibility of reducing the intensity noise of semiconductor laser diodes even below the shot noise limit has once again drawn attention to the photon number statistics of light emitted by laser devices. However, in the case of semiconductor lasers, the microscopic laser dynamics depends sensitively on the charge carrier densities which provide the optical gain. Quantum theories of laser light often fail to include this dynamical degree of freedom. In particular, the theories which show the reduction of photon number fluctuations in the laser cavity below thermal noise usually rely on the adiabatic elimination of the carrier dynamics. In the following, it is shown that, without this elimination of the carrier relaxation dynamics, small semiconductor lasers may exhibit thermal noise even far above threshold. In particular, the photon number fluctuations inside the cavity may even be thermal at quantum efficiencies greater than 50%, allowing a suppression of the low frequency intensity fluctuations below the shot noise limit. This coexistence of thermal noise and squeezing can be derived theoretically from the same photon number rate equations that yield the thermal noise. It is therefore necessary to distinguish the photon number statistics at short times from the time integrated statistics of the low frequency noise component. Specifically, the low frequency noise is not associated with any well defined mode, but represents a collective property of many modes due to the limited temporal coherence of the emitted light. This implies that even an ideal single mode laser exhibits the anti-correlated intensity statistics observed in multi-mode lasers.
Even the squeezing observed in a perfect single mode laser can therefore include a large amount of thermal noise and does not usually correspond to the generation of a minimum uncertainty state. The paper is organized as follows: In section II, the rate equations are formulated. Within this framework we present our definition of the laser threshold. In the subsequent section III, the photon number fluctuations are determined, which allows the definition of a noise threshold. In section IV, the low frequency limit of intensity noise is derived and the squeezing threshold is defined. In section V, the results are summarized and the conditions for the coexistence of squeezing and thermal noise are obtained. In section VI, the nature of collective multi-mode squeezing is discussed and limitations of quantum optical applications are pointed out. ## II Rate equations for the energy flow in lasers A laser device converts the energy injected into the gain medium into laser light by means of light field amplification inside the laser cavity. Since the energy in the gain medium and the energy of the light field are both quantized, the energy flow cannot be entirely smooth and continuous. Instead, it is described by transition rates. Figure 1 illustrates the transition rates between the two energy reservoirs and the environment. Note that the rates given pertain to a single mode laser under the assumption of a linear carrier density dependence for gain and spontaneous emission. The laser device is thus characterized by the spontaneous emission factor $`\beta `$, the excitation lifetime $`\tau `$, the excitation number at transparency $`N_T`$ and the cavity loss rate $`\kappa `$.
The rate equations for the excitation number $`N`$ and the cavity photon number $`n`$ at an injection rate $`j`$ may be formulated according to figure 1 as $`{\displaystyle \frac{d}{dt}}N`$ $`=`$ $`j-{\displaystyle \frac{1}{\tau }}N-2{\displaystyle \frac{\beta }{\tau }}(N-N_T)n+q_N(t)`$ $`{\displaystyle \frac{d}{dt}}n`$ $`=`$ $`2\left({\displaystyle \frac{\beta }{\tau }}(N-N_T)-\kappa \right)n+{\displaystyle \frac{\beta }{\tau }}N+q_n(t),`$ (1) where $`q_N(t)`$ and $`q_n(t)`$ represent the shot noise terms corresponding to the respective transitions into and out of the gain medium excitation $`N`$ and the cavity photon number $`n`$, respectively. Note that equation (1) corresponds to the rate equation of Lax and Louisell. Although a precise quantum mechanical treatment would require the solution of a master equation for both the light field and the gain medium, it is possible to reduce this equation to the diagonal elements in photon and excitation number. The statistical equations then correspond to those of a classical particle flow, allowing a formulation of both rate equations and shot noise terms. The noise terms are zero on average. However, their fluctuations and correlations are given by the transition rates associated with the respective energy reservoir, $`\langle q_N(t)q_N(t+\mathrm{\Delta }t)\rangle `$ $`=`$ $`\left(\sigma j+{\displaystyle \frac{\beta }{\tau }}(n+1)N+2{\displaystyle \frac{\beta }{\tau }}N_Tn+{\displaystyle \frac{\beta }{\tau }}N\right)\delta (\mathrm{\Delta }t)`$ $`\langle q_n(t)q_n(t+\mathrm{\Delta }t)\rangle `$ $`=`$ $`\left(2\kappa n+2{\displaystyle \frac{\beta }{\tau }}N_Tn+{\displaystyle \frac{\beta }{\tau }}N\right)\delta (\mathrm{\Delta }t)`$ $`\langle q_N(t)q_n(t+\mathrm{\Delta }t)\rangle `$ $`=`$ $`-\left(2{\displaystyle \frac{\beta }{\tau }}N_Tn+{\displaystyle \frac{\beta }{\tau }}N\right)\delta (\mathrm{\Delta }t).`$ (2) Pump noise suppression is described by the pump noise factor $`\sigma `$.
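The coupled rate equations and their shot noise correlations can be integrated numerically. The sketch below uses a simple Euler–Maruyama scheme with Gaussian approximations to the shot noise; all parameter values are illustrative rather than taken from a specific device, and the cross-correlation between the two noise terms is neglected for brevity.

```python
import numpy as np

# Euler-Maruyama sketch of the photon/excitation rate equations. Parameter
# values are illustrative, and the q_N/q_n cross-correlation is omitted.
rng = np.random.default_rng(1)
beta, tau, N_T, kappa = 1e-2, 3e-9, 1e3, 1e12
sigma, j = 0.0, 4 * kappa / beta          # quiet pump, roughly at threshold
dt, N, n = 1e-15, N_T, 0.0

for _ in range(100_000):
    gain = (beta / tau) * (N - N_T)
    drift_N = j - N / tau - 2 * gain * n
    drift_n = 2 * (gain - kappa) * n + (beta / tau) * N
    # shot-noise variances taken from the correlation functions above
    var_N = sigma * j + (beta / tau) * (n + 1) * N \
        + 2 * (beta / tau) * N_T * n + (beta / tau) * N
    var_n = 2 * kappa * n + 2 * (beta / tau) * N_T * n + (beta / tau) * N
    N += drift_N * dt + np.sqrt(var_N * dt) * rng.normal()
    n = max(0.0, n + drift_n * dt + np.sqrt(var_n * dt) * rng.normal())

print(N > N_T, n >= 0.0)   # excitation builds up from transparency
```

The time step must resolve the fastest rate (here the cavity loss $`\kappa `$), which is why a femtosecond step is used in this sketch.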
For the purpose of describing the possibility of squeezing, only the ideal case of $`\sigma =0`$ will be considered. Moreover, it should be noted that the actual output intensity $`I(t)`$ of the laser device is a fluctuating quantity given by $$I(t)=2\kappa n+q_I(t),$$ (3) where the quantum noise statistics of $`q_I(t)`$ read $`\langle q_I(t)q_I(t+\mathrm{\Delta }t)\rangle `$ $`=`$ $`2\kappa n\delta (\mathrm{\Delta }t)`$ $`\langle q_n(t)q_I(t+\mathrm{\Delta }t)\rangle `$ $`=`$ $`-2\kappa n\delta (\mathrm{\Delta }t)`$ $`\langle q_N(t)q_I(t+\mathrm{\Delta }t)\rangle `$ $`=`$ $`0.`$ (4) Within the framework of our general laser model, equations (1) to (4) provide a complete description of the fluctuating laser intensity $`I(t)`$ for all time-scales and frequencies. The characteristics of a specific device are determined by only four device parameters, i.e. the spontaneous emission factor $`\beta `$, the excitation lifetime $`\tau `$, the excitation number at transparency $`N_T`$ and the cavity loss rate $`\kappa `$. In general, these parameters are given by the gain medium and the cavity design. For semiconductor lasers with an active volume of $`V`$, typical material properties are $`\beta V`$ $`\approx `$ $`10^{-14}\text{cm}^3`$ $`{\displaystyle \frac{N_T}{V}}`$ $`\approx `$ $`10^{18}\text{cm}^{-3}`$ $`\tau `$ $`\approx `$ $`3\times 10^{-9}\text{s}.`$ (5) The cavity loss rate $`\kappa `$ should be smaller than the maximal gain in order to achieve lasing. For the parameters above, this corresponds to the condition that $$\kappa <\frac{\beta N_T}{\tau }\approx 3.33\times 10^{12}\text{s}^{-1}.$$ (6) Usually, the quality of a laser cavity will not be much higher than needed, so the loss rate for semiconductor laser cavities is typically around $`10^{12}\text{s}^{-1}`$. Consequently, the main device parameter for semiconductor laser diodes is the size as given by $`V`$ or, alternatively, by the spontaneous emission factor $`\beta `$.
Since the size of typical laser diodes is in the micrometer range, spontaneous emission factors for standard diodes range from $`\beta =10^{-3}`$ for small vertical cavity surface emitting lasers to about $`\beta =10^{-6}`$ for edge emitters. Although larger devices are possible, it becomes increasingly difficult to stabilize a single mode. Consequently, a single mode theory will tend to underestimate the laser noise observed in large semiconductor laser diodes. The light-current characteristic of a laser device described by equation (1) may be obtained from the time averaged energy flow given by the stationary solution of the rate equations. The stationary carrier number average $`\overline{N}`$ and the stationary photon number average $`\overline{n}`$ read $`\overline{N}`$ $`=`$ $`{\displaystyle \frac{1+\frac{1}{2n_T}}{1+\frac{1}{2\overline{n}}}}N_T`$ $`\overline{n}`$ $`=`$ $`{\displaystyle \frac{j-j_{th}}{4\kappa }}-{\displaystyle \frac{1}{4}}+\sqrt{({\displaystyle \frac{j-j_{th}}{4\kappa }}+{\displaystyle \frac{1}{4}})^2+{\displaystyle \frac{j_{th}}{4\kappa }}},`$ (7) where the photon number at transparency $`n_T`$ and the threshold current $`j_{th}`$ are given by $`n_T`$ $`=`$ $`{\displaystyle \frac{\beta N_T}{2\kappa \tau }}`$ $`j_{th}`$ $`=`$ $`\underset{\overline{n}\to \infty }{\mathrm{lim}}(1-\beta ){\displaystyle \frac{\overline{N}}{\tau }}=2\kappa {\displaystyle \frac{1-\beta }{\beta }}\left(n_T+{\displaystyle \frac{1}{2}}\right).`$ (8) The threshold current is defined by extrapolating the asymptotic linear increase of $`\overline{n}(j)`$ far above threshold to the threshold region. Since the excitation number $`\overline{N}`$ is pinned for $`\overline{n}\to \infty `$, the extrapolated threshold current is equal to the constant loss rate far above threshold. In the following, the noise properties of the light field will be discussed with respect to the light field intensity given in terms of the average photon number $`\overline{n}`$.
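The stationary solution can be evaluated directly. The short sketch below checks that the closed-form $`\overline{n}(j)`$ of equation (7) reproduces the threshold photon number of equation (9) and the linear asymptote far above threshold; the device values are illustrative.

```python
import numpy as np

# Stationary photon number of the rate equations for an illustrative device.
beta, kappa, n_T = 1e-4, 1e12, 1.5
j_th = 2 * kappa * (1 - beta) / beta * (n_T + 0.5)   # threshold current, eq. (8)

def n_bar(j):
    x = (j - j_th) / (4 * kappa)
    return x - 0.25 + np.sqrt((x + 0.25) ** 2 + j_th / (4 * kappa))

# at j = j_th this reduces to the threshold photon number of eq. (9)
n_th = np.sqrt(1.0 / 16 + j_th / (4 * kappa)) - 0.25
print(np.isclose(n_bar(j_th), n_th))

# far above threshold the increase is linear, n_bar -> (j - j_th) / (2 kappa)
j = 10 * j_th
print(n_bar(j) / ((j - j_th) / (2 * kappa)))   # close to 1
```

The linear asymptote corresponds to energy conservation: far above threshold the output intensity $`2\kappa \overline{n}`$ approaches $`j-j_{th}`$.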
It is therefore useful to define the photon number at threshold by $$n_{th}=\overline{n}(j_{th})=\sqrt{\frac{1}{16}+\frac{j_{th}}{4\kappa }}-\frac{1}{4}.$$ (9) Note that the order of magnitude for both threshold current $`j_{th}`$ and threshold photon number $`n_{th}`$ is defined by the spontaneous emission factor $`\beta `$. For $`\beta \ll 1`$ and $`n_T=3/2`$, the threshold current $`j_{th}`$ given by equation (8) is equal to $`4\kappa /\beta `$ and the photon number at threshold is $`\beta ^{-1/2}`$. In terms of electrical currents, $`\kappa =10^{12}\text{s}^{-1}`$ corresponds to $`1.6\times 10^{-7}\text{A}`$. Therefore, the approximate electrical threshold current $`\text{I}_{th}`$ of a typical semiconductor laser diode is related to the spontaneous emission factor $`\beta `$ by $`2\beta \text{I}_{th}\approx 10^{-6}\text{A}`$. This formula allows a simple quantitative estimate of the spontaneous emission factor from the threshold current. For example, a threshold current of 5 mA indicates a spontaneous emission factor of $`\beta =10^{-4}`$. The noise properties discussed in the following can thus be related directly to the electrical threshold current observed in the light-current characteristics of laser diodes. ## III Photon number fluctuations The noise characteristics of laser light can be investigated by solving the linearized Langevin equations near the stationary solution. The linearization neglects the bilinear modification of the transition rate given by $`2\beta \tau ^{-1}\delta N\delta n`$. This modification may become relevant for large correlated fluctuations in both $`n`$ and $`N`$. However, even for large $`\delta n`$, the smallness of $`\delta N`$ and the lack of correlation between $`\delta n`$ and $`\delta N`$ usually justify the linear approximation.
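The estimate relating the electrical threshold current to the spontaneous emission factor is simple enough to spell out:

```python
# Spontaneous emission factor from the observed electrical threshold
# current, using 2 * beta * I_th ~ 1e-6 A as derived above.
def beta_from_threshold(i_th_amps):
    return 1e-6 / (2 * i_th_amps)

print(beta_from_threshold(5e-3))   # 5 mA threshold -> beta of about 1e-4
```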
The linearized dynamics of the fluctuations $`\delta N=N-\overline{N}`$ and $`\delta n=n-\overline{n}`$ read $`{\displaystyle \frac{d}{dt}}\delta N`$ $`=`$ $`-\mathrm{\Gamma }_N\delta N-r\omega _R\delta n+q_N(t)`$ $`{\displaystyle \frac{d}{dt}}\delta n`$ $`=`$ $`r^{-1}\omega _R\delta N-\gamma _n\delta n+q_n(t),`$ (10) where the relevant time-scales are given by the electronic relaxation rate $`\mathrm{\Gamma }_N`$, the optical relaxation rate $`\gamma _n`$, and the coupling frequency $`\omega _R`$. The hole-burning ratio $`r`$ scales the interaction between photon fluctuations and excitation fluctuations. The four parameters characterizing the fluctuation dynamics are functions of the device properties and the average photon number $`\overline{n}`$, which read $`\mathrm{\Gamma }_N`$ $`=`$ $`{\displaystyle \frac{1}{\tau }}(1+2\beta \overline{n})`$ $`\gamma _n`$ $`=`$ $`2\kappa {\displaystyle \frac{n_T+\frac{1}{2}}{\overline{n}+\frac{1}{2}}}`$ $`\omega _R`$ $`=`$ $`2\kappa \sqrt{\beta {\displaystyle \frac{\overline{n}-n_T}{\kappa \tau }}}`$ $`r`$ $`=`$ $`\sqrt{{\displaystyle \frac{\kappa \tau }{\beta }}{\displaystyle \frac{(\overline{n}-n_T)}{(\overline{n}+\frac{1}{2})^2}}}.`$ (11) The complete set of two time correlation functions describing the temporal fluctuations may now be derived analytically by obtaining the response function of the linear dynamics and applying it to the statistics of the noise input components $`q_N(t)`$ and $`q_n(t)`$. However, it is usually possible to identify the major dynamical processes observable in the fluctuation dynamics by concentrating only on the fastest time-scales.
In particular, three regimes may be distinguished:

* Relaxation oscillations, $`\omega _R\gg \mathrm{\Gamma }_N+\gamma _n`$: If the coupling frequency $`\omega _R`$ is much larger than the relaxation rates $`\mathrm{\Gamma }_N`$ and $`\gamma _n`$, the fluctuation dynamics is described by relaxation oscillations with a frequency of $`\omega _R`$ and a relaxation rate of $`(\mathrm{\Gamma }_N+\gamma _n)/2`$.
* Optical relaxation, $`\gamma _n\gg \mathrm{\Gamma }_N+\omega _R`$: If the optical relaxation rate $`\gamma _n`$ is much larger than the electronic relaxation rate $`\mathrm{\Gamma }_N`$ and the coupling frequency $`\omega _R`$, the excitation dynamics has no significant effect on the fluctuation dynamics of the light field. The fluctuation dynamics is then approximately described by thermal fluctuations with a coherence time of $`\gamma _n^{-1}`$.
* Adiabatic hole-burning, $`\mathrm{\Gamma }_N\gg \gamma _n+\omega _R`$: If the electronic relaxation rate $`\mathrm{\Gamma }_N`$ is much larger than the optical relaxation rate $`\gamma _n`$ and the coupling frequency $`\omega _R`$, the excitation number quickly relaxes to the stationary value defined by the much slower photon number fluctuations. This stationary value of the excitation number acts back on the photon number fluctuations through the coupling rate $`\omega _R`$, increasing the relaxation rate in the light field by $`\omega _R^2/\mathrm{\Gamma }_N`$. The fluctuation dynamics is then given by exponentially damped fluctuations which are thermal for $`\gamma _n\mathrm{\Gamma }_N>\omega _R^2`$ and become sub-thermal for $`\gamma _n\mathrm{\Gamma }_N<\omega _R^2`$. In large lasers, this solution is typically valid close to threshold.

Figure 2 shows the operating regimes corresponding to the three cases given above as a function of spontaneous emission factor $`\beta `$ and average photon number $`\overline{n}`$.
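The rates of equation (11) and the regime classification above can be combined into a small helper; the device parameters below are illustrative.

```python
import numpy as np

# Linearized-dynamics parameters of eq. (11) and the three-regime
# classification above. Device values are illustrative.
beta, kappa, tau, n_T = 1e-4, 1e12, 3e-9, 1.5

def rates(n_bar):
    gamma_N = (1 + 2 * beta * n_bar) / tau
    gamma_n = 2 * kappa * (n_T + 0.5) / (n_bar + 0.5)
    omega_R = 2 * kappa * np.sqrt(beta * (n_bar - n_T) / (kappa * tau))
    r = np.sqrt(kappa * tau / beta * (n_bar - n_T) / (n_bar + 0.5) ** 2)
    return gamma_N, gamma_n, omega_R, r

def regime(n_bar):
    gamma_N, gamma_n, omega_R, _ = rates(n_bar)
    if omega_R > gamma_N + gamma_n:
        return "relaxation oscillations"
    if gamma_n > gamma_N + omega_R:
        return "optical relaxation"
    return "adiabatic hole-burning"

print(regime(10.0))   # low photon number: fast cavity decay dominates
print(regime(1e6))    # far above threshold: relaxation oscillations
```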
Since quantum optics textbooks often characterize the light field not by two time correlations but by the stationary photon number distribution in the laser cavity, it is interesting to analyze the magnitude of the fluctuations given by the variance $`\langle \delta n^2\rangle `$. Using equations (2) and (10), an analytical expression can be derived for the photon number fluctuations. For $`\beta \ll 1`$ and $`\overline{n}\gg n_T`$, it reads $$\frac{\overline{\delta n^2}}{\overline{n}^2}\approx \left(1+\frac{2\beta \overline{n}^3(2\beta \overline{n}+1)}{(n_T+\frac{1}{2})(4\beta (\kappa \tau +1)\overline{n}^2+\overline{n}+2\kappa \tau (n_T+\frac{1}{2}))}\right)^{-1}.$$ (12) Figure 3 shows a contour plot of the fluctuations as a function of spontaneous emission factor $`\beta `$ and photon number $`\overline{n}`$ for $`3\kappa \tau =10^4`$ and $`n_T=3/2`$. It should be noted that the thermal noise region with $`\langle \delta n^2\rangle \approx \overline{n}^2`$ extends far beyond the laser threshold for $`\beta >10^{-6}`$. If the photon number noise threshold $`n_\delta `$ is defined as the point at which the photon number fluctuations drop to one half of the thermal noise level, this threshold is given by $$\frac{2\beta n_\delta ^3(2\beta n_\delta +1)}{(n_T+\frac{1}{2})(4\beta (\kappa \tau +1)n_\delta ^2+n_\delta +2\kappa \tau (n_T+\frac{1}{2}))}=1.$$ (13) This definition of the threshold may be approximated by distinguishing three types of laser devices, depending on the magnitude of the spontaneous emission factor $`\beta `$. The three laser types are

* Macroscopic lasers, defined by $`\beta ^{-1}>2(2\kappa \tau )^2(n_T+1/2)`$, with a noise threshold $`n_\delta =n_{th}`$ identical with the laser threshold given by equation (9).
* Mesoscopic lasers, defined by $`4\kappa \tau (n_T+1/2)<\beta ^{-1}<2(2\kappa \tau )^2(n_T+1/2)`$, with a noise threshold $`n_\delta =2\kappa \tau (n_T+1/2)>n_{th}`$ slightly above the laser threshold given by equation (9).
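The relative variance of equation (12) and the noise threshold condition of equation (13) can be evaluated numerically; a geometric bisection locates $`n_\delta `$, at which the variance has dropped to one half of the thermal level. The parameter values below are illustrative.

```python
import numpy as np

# Left-hand side of eq. (13); equals 1 at the noise threshold n_delta.
beta, kappa, tau, n_T = 1e-2, 1e12, 3e-9, 1.5
kt = kappa * tau

def noise_ratio(n):
    num = 2 * beta * n ** 3 * (2 * beta * n + 1)
    den = (n_T + 0.5) * (4 * beta * (kt + 1) * n ** 2 + n
                         + 2 * kt * (n_T + 0.5))
    return num / den

def rel_variance(n):          # relative photon number variance, eq. (12)
    return 1.0 / (1.0 + noise_ratio(n))

lo, hi = 1.0, 1e12            # the ratio grows monotonically with n
for _ in range(200):
    mid = np.sqrt(lo * hi)    # bisection in the logarithm of n
    lo, hi = (mid, hi) if noise_ratio(mid) < 1.0 else (lo, mid)
n_delta = lo

print(rel_variance(n_delta))  # one half of the thermal level, by definition
```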
* Microscopic lasers, defined by $`\beta ^{-1}<4\kappa \tau (n_T+1/2)`$, with a noise threshold $`n_\delta =(\kappa \tau /2)n_{th}`$, significantly higher than the laser threshold given by equation (9).

Mesoscopic and microscopic lasers therefore have thermal photon number statistics even above threshold. Recent experimental studies conducted independently on a solid state laser system seem to confirm this result. Note that almost all semiconductor laser diodes fall into these two categories, since the parameters given by equation (5) indicate that macroscopic semiconductor lasers must have a spontaneous emission factor $`\beta `$ smaller than $`10^{-8}`$, which corresponds to a threshold current of no less than 50 A. The borderline between mesoscopic and microscopic semiconductor laser devices is found at around $`\beta =10^{-4}`$ or 5 mA threshold current. Thus, most modern semiconductor laser devices should exhibit thermal photon number fluctuations even above the laser threshold. Microscopic lasers show thermal fluctuations even at quantum efficiencies greater than 50% ($`2\kappa \overline{n}=j_{th}`$). Since high quantum efficiency is the key to squeezing the laser output by suppressing the pump noise, such devices can produce a squeezed light output even in the presence of thermal photon number fluctuations in the laser cavity. ## IV Low frequency noise Naturally, it is not possible to measure the light field inside the cavity. Nevertheless, most quantum theories tend to concentrate on the state of the cavity field modes without any realistic assumptions on the emission process. It is one of the merits of the original work on squeezing the laser output that it points out the practical relevance of distinguishing between the light inside the cavity and the light emitted by the laser device.
On short time-scales, this difference may seem to be irrelevant, because the emission process only adds some partition noise for times shorter than $`\kappa ^{-1}`$ and otherwise produces the same type of statistics as the field inside the cavity. However, energy conservation introduces a constraint at longer time-scales. Because of this external constraint, the fluctuations of the photon current outside the cavity may actually be lower than the photon number fluctuations inside. If the excitation dynamics are eliminated, this constraint affects both the photon number fluctuations inside the cavity and the low frequency noise equally. If the excitations in the gain medium provide an additional energy reservoir, however, care must be taken to distinguish the long term effects of such energy storage on the low frequency noise from the short term effects of energy exchange between the gain medium and the light field on the photon number fluctuations. The field outside the cavity represents energy lost from the laser device, while a time average over the field inside the cavity does not have this meaning. In particular, a single photon might stay inside the cavity for a long time or for a short time - in the field outside, it will only appear once. In order to describe the low frequency part of intensity fluctuations, it is therefore necessary to discuss the output intensity $`I(t)`$ introduced in equation (3). The fluctuations around the average intensity $`\overline{I}=2\kappa \overline{n}`$ are given by $$\delta I(t)=I(t)-\overline{I}=2\kappa \delta n+q_I(t).$$ (14) The two time correlation function of the intensity fluctuation is then given by $$\langle \delta I(t)\delta I(t+\mathrm{\Delta }t)\rangle =4\kappa ^2\langle \delta n(t)\delta n(t+\mathrm{\Delta }t)\rangle +2\kappa \langle q_I(t)\delta n(t+\mathrm{\Delta }t)\rangle +\langle q_I(t)q_I(t+\mathrm{\Delta }t)\rangle .$$ (15) The last term represents the shot noise level $`L_{\text{SN}}`$ induced by quantum fluctuations at the cavity mirrors.
It is therefore convenient to use this term as a normalization term when performing the time average representing the limit of low frequencies, $$\frac{\overline{\delta I^2}(\omega \to 0)}{L_{\text{SN}}}=\frac{\int _0^{\infty }d\tau \left(4\kappa ^2\langle \delta n(t)\delta n(t+\tau )\rangle +2\kappa \langle q_I(t)\delta n(t+\tau )\rangle \right)}{\int _0^{\infty }d\tau \langle q_I(t)q_I(t+\tau )\rangle }+1.$$ (16) Using this normalization to the shot noise level, the low frequency limit of the intensity noise can be calculated using the linearized Langevin equations (10) with the noise terms given by equations (2) and (4). For $`\beta \ll 1`$ and $`\overline{n}\gg n_T`$, the result reads $$\frac{\overline{\delta I^2}(\omega \to 0)}{L_{SN}}\approx \frac{(n_T+\frac{1}{2})\left(4\beta \overline{n}^3+2\overline{n}^2+(n_T+\frac{1}{2})\right)}{\left(2\beta \overline{n}^2+(n_T+\frac{1}{2})\right)^2}+\sigma \frac{4\beta ^2\overline{n}^4+4\beta (n_T+\frac{1}{2})\overline{n}^3}{\left(2\beta \overline{n}^2+(n_T+\frac{1}{2})\right)^2}.$$ (17) Figure 4 illustrates the low frequency noise characteristics for various levels of pump noise suppression $`\sigma `$. Note that the low frequency noise below about two times threshold current ($`2\kappa \overline{n}=j_{th}`$) does not depend very much on pump noise suppression. Moreover, the peak value of low frequency noise always coincides with the laser threshold. Above two times threshold, however, the squeezing obtained for $`\overline{n}\to \infty `$ is given by $`\sigma `$. In the following, the maximal squeezing potential represented by the $`\sigma =0`$ result will be investigated. Since squeezing is defined by intensity noise below the shot noise limit, the squeezing threshold $`n_{sq}`$ for $`\sigma =0`$ can be defined as the point at which $`\overline{\delta I^2}(\omega \to 0)=L_{SN}`$.
Using $`\beta \ll 1`$ and $`n_T>1/2`$, this threshold is found to be given by $$n_{sq}=\frac{1}{2\beta }\left((n_T+\frac{1}{2})+\sqrt{(n_T+\frac{1}{2})^2+(n_T+\frac{1}{2})}\right)\approx \frac{n_T+\frac{1}{2}}{\beta }\approx \frac{j_{th}}{2\kappa }.$$ (18) The squeezing threshold is thus found at two times threshold current, corresponding to a quantum efficiency of 50%. ## V Coexistence of squeezing and thermal noise Above two times threshold ($`2\kappa \overline{n}=j_{th}`$), the low frequency noise component of a semiconductor laser device may be squeezed below the shot noise level by suppressing the noise in the injected current. All the same, microscopic devices with spontaneous emission factors $`\beta `$ above $`10^{-4}`$ and corresponding threshold currents below 5 mA still exhibit thermal photon number fluctuations on short time-scales. Therefore, an operating regime exists in which squeezing and thermal noise coexist in the same light field emission. This regime is defined as the region between the squeezing threshold given in equation (18) and the noise threshold given in equation (13). The laser threshold and the two fluctuation thresholds are shown in figure 5. At about $`\beta =10^{-3}`$, the two thresholds cross. Therefore, coexistence of thermal noise and squeezing can be observed in devices with $`\beta >10^{-3}`$. Figure 6 shows the photon number fluctuations and the low frequency noise for a mesoscopic ($`\beta =10^{-4}`$) and a microscopic ($`\beta =10^{-2}`$) laser device. In the microscopic device, coexistence of thermal noise and squeezing is clearly observable just above two times threshold ($`2\kappa \overline{n}=j_{th}`$). What is the relationship between the thermal photon number fluctuations inside the cavity observed on picosecond time-scales and the suppression of noise below the shot noise limit on time-scales longer than the relaxation times of the laser dynamics? An attempt to visualize this relationship is shown in figure 7.
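Equation (17) and the threshold of equation (18) can be checked against each other numerically. Since both expressions are approximations, the crossing of the shot noise level is only approximate; the parameter values below are illustrative.

```python
import numpy as np

# Low frequency noise of eq. (17), normalized to the shot noise level,
# for an illustrative device.
beta, n_T = 1e-2, 1.5
a = n_T + 0.5

def lf_noise(n, sigma=0.0):
    den = (2 * beta * n ** 2 + a) ** 2
    thermal = a * (4 * beta * n ** 3 + 2 * n ** 2 + a) / den
    pump = (4 * beta ** 2 * n ** 4 + 4 * beta * a * n ** 3) / den
    return thermal + sigma * pump

n_sq = (a + np.sqrt(a ** 2 + a)) / (2 * beta)   # squeezing threshold, eq. (18)

print(lf_noise(n_sq))                   # near the shot noise level of 1
print(lf_noise(100 * n_sq))             # quiet pump, far above: squeezed
print(lf_noise(100 * n_sq, sigma=1.0))  # Poissonian pump: back at shot noise
```

Far above threshold the quiet-pump result falls well below one, while with $`\sigma =1`$ the output returns to the shot noise level, as stated in the text.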
The short term fluctuations average out as the temporal average is taken. This effect is due to the anti-correlation of fluctuations at intermediate time differences. In the case of over-damped relaxation oscillations, the short term fluctuations are given by a fast optical relaxation $`\gamma _n`$ and a much slower relaxation of the carrier system $`\mathrm{\Gamma }_N`$. At a quantum efficiency of 50 % ($`2\kappa \overline{n}=j_{th}`$), the two time correlation of $`I(t)`$ is approximately given by $$\langle \delta I(t)\delta I(t+\mathrm{\Delta }t)\rangle \approx 2\kappa \overline{n}\delta (\mathrm{\Delta }t)+4\kappa ^2\overline{n}^2\mathrm{exp}\left(-\gamma _n\mathrm{\Delta }t\right)-4\kappa ^2\overline{n}^2\frac{\mathrm{\Gamma }_N}{\gamma _n}\mathrm{exp}\left(-\mathrm{\Gamma }_N\mathrm{\Delta }t\right).$$ (19) While the time integrated contribution of the short time bunching is exactly equal to the time integrated contribution of the long time anti-bunching, the thermal bunching contributions dominate near $`\mathrm{\Delta }t=0`$ by a ratio equal to the time-scale ratio $`\gamma _n/\mathrm{\Gamma }_N\gg 1`$. The total low frequency noise is then equal to the shot noise term only, because the thermal fluctuations are anti-correlated on a time-scale of $`1/\mathrm{\Gamma }_N\approx \tau `$ by the slow relaxation dynamics of the excitations in the gain medium. Figure 8 illustrates this transition from bunching to anti-bunching in the two time correlation of the laser output. Note that the optical coherence time is equal to or even smaller than $`1/\gamma _n`$. In particular, the line-width enhancement effects described by the $`\alpha `$ factor and the Peterman factor, respectively, are known to cause an additional reduction of the phase coherence not related to the photon number relaxation. Therefore, the first order coherence time will be much shorter than the time during which thermal bunching is observed in the photon number statistics.
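The exact cancellation of the time-integrated bunching and anti-bunching terms in equation (19) can be verified directly; the rates below are assumed values for an over-damped device.

```python
import numpy as np

# Integrates the two exponential terms of eq. (19) and checks that their
# time-integrated contributions cancel. Rates are illustrative, with
# gamma_n >> Gamma_N (over-damped relaxation oscillations).
kappa, n_bar = 1e12, 1e4
gamma_n, Gamma_N = 1e11, 3e8

t = np.linspace(0.0, 1e-6, 2_000_001)   # integrate far beyond 1 / Gamma_N
amp = 4 * kappa ** 2 * n_bar ** 2
bunch = amp * np.exp(-gamma_n * t)
antibunch = amp * (Gamma_N / gamma_n) * np.exp(-Gamma_N * t)

dt = t[1] - t[0]
def trapezoid(f):
    return (0.5 * (f[0] + f[-1]) + f[1:-1].sum()) * dt

print(trapezoid(bunch) / trapezoid(antibunch))  # close to 1: cancellation
print(bunch[0] / antibunch[0])                  # gamma_n / Gamma_N at dt = 0
```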
This lack of first order coherence suggests that the squeezing of low frequency intensity noise below the shot noise limit is only obtained by a summation of the light field intensities of many modes with independent phase fluctuations. This type of collective squeezing cannot be attributed to a single coherent light field mode. Instead, it should be considered a multi-mode property. ## VI Implications for quantum optics The light emitted by a laser propagates in the multi-mode continuum of the unconfined electromagnetic field. It is therefore difficult to describe it in terms of a discrete mode structure. As a result, photon number measurements cannot be assigned to well defined modes. The randomness of photon detection events is a consequence of this conceptual difficulty. Nevertheless, time integrated measurements of photon number can provide precise information about the number of photons within a given volume. It is tempting to associate this collective information directly with the single mode photon number. However, the lack of information available about the actual photon number distribution among the many modes within the observed volume should be considered as well. In particular, if all possible photon number distributions of $`N`$ photons among $`M`$ modes are considered to be equally likely, the situation corresponds to the micro-canonical ensemble of thermodynamics. The density matrix of any mode $`i`$ which is an arbitrary superposition of the $`M`$ modes then reads $$\widehat{\rho }=\frac{N!\,(M-1)}{(N+M-1)!}\underset{n=0}{\overset{N}{\sum }}\frac{(M+N-2-n)!}{(N-n)!}|n\rangle \langle n|\approx \frac{M}{N+M}\underset{n=0}{\overset{\infty }{\sum }}\left(1+\frac{M}{N}\right)^{-n}|n\rangle \langle n|,$$ (20) where the approximation is for large $`M`$ and $`N`$. The photon number distribution of every single mode corresponds to a thermal distribution at a temperature proportional to the inverse logarithm of $`1+M/N`$ even though the total photon number in the $`M`$ modes is given precisely by $`N`$.
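The micro-canonical single-mode distribution in equation (20) and its thermal approximation can be compared exactly for finite $`M`$ and $`N`$:

```python
from math import comb

# Exact single-mode photon statistics when N photons are distributed
# uniformly over M modes, versus the thermal form in eq. (20).
M, N = 50, 200

def p_exact(n):
    # ways to place the remaining N - n photons into the other M - 1 modes,
    # divided by the total number of distributions of N photons over M modes
    return comb(N - n + M - 2, M - 2) / comb(N + M - 1, M - 1)

p_ex = [p_exact(n) for n in range(N + 1)]
p_th = [(M / (N + M)) * (1 + M / N) ** (-n) for n in range(N + 1)]

print(sum(p_ex))                                    # normalized to 1
print(max(abs(a - b) for a, b in zip(p_ex, p_th)))  # small for large M, N
```

Even though the total photon number is fixed exactly at $`N`$, each individual mode is very nearly thermal.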
While the photon number in each mode fluctuates thermally, the fluctuations in the total photon number are suppressed by anti-correlations between the modes. Such anti-correlations have been described theoretically and observed experimentally between different longitudinal modes and between orthogonal polarizations in the squeezed light emission from semiconductor lasers. As the discussion in this paper indicates, they should be observable in the temporal mode structure as well. For the case of over-damped relaxation oscillations, equation (19) shows the anti-correlations between emission modes separated by a time roughly equal to the excitation lifetime $`1/\mathrm{\Gamma }_N`$. If the coherence time is assumed to be about $`1/\gamma _n`$, the intensity distribution given by equation (19) might be interpreted according to equation (20) with $`M=\gamma _n/\mathrm{\Gamma }_N`$ and $`N=2\kappa \overline{n}/\mathrm{\Gamma }_N`$. Thus the light field statistics given by equation (19) suggest that no single mode of the total light field emission is in a squeezed state. If the low frequency noise is squeezed below the shot noise limit, this effect can be explained as a purely collective property without implications for any particular mode. Specifically, the slight anti-correlation in photon number between different modes shown by the two time correlation function given in equation (19) above does not represent quantum mechanical entanglement, since there is virtually no phase correlation between the respective modes. A semi-classical interpretation of the intensity noise is therefore sufficient to explain the squeezing properties of the laser field.
The reason for this coexistence is that the squeezing arises from the long term anti-correlation of fluctuations induced by the slow loss rate of excitations from the gain medium, while the light field fluctuations within the optical coherence time are dominated by stimulated emission and photon bunching. Consequently, squeezing the low frequency intensity fluctuations of a semiconductor laser diode does not usually reduce the photon number fluctuations of any single mode. Instead, only the total photon number of a large number of modes is controlled. This type of collective squeezing should be distinguished from the single mode squeezing obtained e.g. by optical parametric amplification. While squeezing in semiconductor laser diodes represents a significant achievement in controlling the energy flow of light on the quantum level, it does not produce the entanglement properties which are typically observed in the single mode squeezing of optical parametric amplifiers. This limitation severely restricts the use of collectively squeezed light in quantum optical applications. ## Acknowledgements One of us (HFH) would like to acknowledge support from the Japanese Society for the Promotion of Science, JSPS.
# Phase properties of a new nonlinear coherent state ## 1 Introduction Coherent states are important in different branches of physics, particularly in quantum optics. Historically, the coherent state of the harmonic oscillator (which is the coherent state corresponding to the Heisenberg-Weyl algebra) was first constructed by Schrödinger. Subsequently, coherent states corresponding to various Lie algebras like SU(1,1), SU(2) etc. have also been constructed and have been shown to play important roles in the description of various quantum optical processes. Recently another type of coherent states, called the nonlinear coherent states or the f-coherent states, have been constructed. In contrast to the coherent states mentioned above these are coherent states corresponding to nonlinear algebras. However, nonlinear coherent states are not mere mathematical objects. It has been shown that nonlinear coherent states are useful in connection with the motion of a trapped ion. We note that, unlike for Lie algebras, the commutators of the generators of nonlinear algebras are nonlinear functions of the generators. As a consequence it is difficult to apply the BCH disentangling theorem to construct displacement operator coherent states corresponding to nonlinear algebras. To avoid this difficulty nonlinear coherent states were constructed as eigenstates of a generalised annihilation operator. However, nonlinear coherent states can still be constructed using a displacement operator, albeit a modified one, and it has been shown that such states exhibit nonclassical behaviour. In the present paper we shall study phase properties of such a nonlinear coherent state. In particular we shall obtain the Pegg-Barnett phase distribution and compare it with the phase distributions associated with the Q-function and the Wigner function. We shall also evaluate the number-phase uncertainty relation and examine number-phase squeezing of the nonlinear coherent state.
The organisation of the paper is as follows: in section 2 we derive phase distributions and the number-phase uncertainty relation for displacement-operator nonlinear coherent states; in section 3 we discuss numerical results obtained using the results of section 2; finally, section 4 is devoted to a conclusion. ## 2 New nonlinear coherent states and their phase distributions To begin with, we note that the generalised creation and annihilation operators associated with nonlinear coherent states are of the form $$A^{\dagger }=f(N)a^{\dagger },A=af(N),N=a^{\dagger }a$$ (1) where $`a^{\dagger }`$ and $`a`$ are standard harmonic oscillator creation and annihilation operators and $`f(x)`$ is a reasonably well-behaved real function, called the nonlinearity function. From the relations (1) it follows that $`A`$, $`A^{\dagger }`$ and $`N`$ satisfy the following closed algebraic relations: $$[A,A^{\dagger }]=f^2(N)(N+1)-f^2(N-1)N,[N,A]=-A,[N,A^{\dagger }]=A^{\dagger }$$ (2) Thus (2) represents a deformed Heisenberg algebra whose nature of deformation depends on the nonlinearity function $`f(n)`$. Clearly, for $`f(n)=1`$ we regain the Heisenberg algebra. 
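As a quick sanity check, the closed algebra (2) can be verified numerically on a truncated number basis. One caveat: (2) holds exactly for the operator ordering $`A=f(N)a`$, $`A^{\dagger }=a^{\dagger }f(N)`$; the reversed ordering gives the same algebra with $`f(n)`$ shifted to $`f(n+1)`$. The sketch below (plain NumPy, not from the paper) uses the former ordering and excludes the top level of the truncation, where $`a^{\dagger }`$ leaks out of the finite basis.

```python
import numpy as np

def deformed_commutator_check(f, dim=10):
    """Check [A, A†] = f²(N)(N+1) − f²(N−1)N on a truncated number basis,
    using the ordering A = f(N)a, A† = a†f(N)."""
    n = np.arange(dim, dtype=float)
    a = np.diag(np.sqrt(n[1:]), k=1)      # a |n> = sqrt(n) |n-1>
    fN = np.diag(f(n))
    A = fN @ a                            # A  = f(N) a
    Adag = a.T @ fN                       # A† = a† f(N)
    comm = A @ Adag - Adag @ A
    fm1 = f(np.maximum(n - 1.0, 0.0))     # f(N−1); the n = 0 term is killed by the factor N
    expected = np.diag(f(n) ** 2 * (n + 1) - fm1 ** 2 * n)
    d = dim - 1                           # drop the highest level (truncation artefact)
    return np.allclose(comm[:d, :d], expected[:d, :d])
```

For $`f(n)=1`$ the expected diagonal collapses to 1, i.e. the ordinary Heisenberg algebra.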
Nonlinear coherent states $`|\chi >`$ are then defined as right eigenstates of the generalised annihilation operator $`A`$ and in a number-state basis are given by : $$|\chi >=C\sum _{n=0}^{\mathrm{\infty }}\frac{d_n}{\sqrt{n!}}\chi ^n|n>$$ (3) where $`C`$ is a normalisation constant and the coefficients $`d_n`$ are given by $$d_0=1,d_n=[\mathrm{\Pi }_{i=1}^nf(i)]^{-1}$$ (4) We note that the canonical conjugates of the generalised annihilation and creation operators $`A`$ and $`A^{\dagger }`$ are given by $$B=a\frac{1}{f(N)},B^{\dagger }=\frac{1}{f(N)}a^{\dagger }$$ (5) Thus $`A`$ and $`B^{\dagger }`$ and their hermitian conjugates satisfy the algebras $$[A,B^{\dagger }]=1,[B,A^{\dagger }]=1$$ (6) Now following ref we consider the operators $`(\alpha =|\alpha |e^{i\varphi })`$ $$D_1(\alpha )=exp(\alpha B^{\dagger }-\alpha ^{*}A)$$ $$D(\alpha )=exp(\alpha A^{\dagger }-\alpha ^{*}B)$$ (7) and note that for two operators $`X`$ and $`Y`$ such that $`[X,Y]=1`$ the BCH disentangling theorem is of the form $$exp(\alpha X-\alpha ^{*}Y)=exp(\frac{\alpha \alpha ^{*}}{2})exp(\alpha X)exp(-\alpha ^{*}Y)$$ (8) Then the nonlinear coherent state corresponding to the first of the two algebras in (6) is defined as $`|\alpha >_1=D_1(\alpha )|0>`$. Now applying (8) we obtain $$|\alpha >_1=c_1\sum _{n=0}^{\mathrm{\infty }}\frac{d_n}{\sqrt{n!}}\alpha ^n|n>$$ (9) Comparing this with the nonlinear coherent state $`|\chi >`$ (see (3)) we find that both are exactly the same (provided, of course, we use the same nonlinearity function in both cases). The new nonlinear coherent state is then defined as $`|\alpha >=D(\alpha )|0>`$, i.e., it is the coherent state corresponding to the second algebra in (6). 
As before, using the relation (8) we obtain $$|\alpha >=c\sum _{n=0}^{\mathrm{\infty }}\frac{d_n^{-1}}{\sqrt{n!}}\alpha ^n|n>$$ (10) where $`c`$ is a normalisation constant which can be determined from the condition $`<\alpha |\alpha >=1`$ and is given by $$c^2=\left[\sum _{n=0}^{\mathrm{\infty }}\frac{d_n^{-2}}{n!}(\alpha ^{*}\alpha )^n\right]^{-1}$$ (11) We now consider the phase probability distributions for the new nonlinear coherent state (10). According to the Pegg-Barnett formalism a complete set of $`(s+1)`$ orthonormal phase states $`\theta _p`$ is defined by $$|\theta _p>=\frac{1}{\sqrt{(s+1)}}\sum _{n=0}^{s}exp(in\theta _p)|n>$$ (12) where $`|n>`$ are the number states which span the $`(s+1)`$ dimensional state space and $`\theta _p`$ are given by $$\theta _p=\theta _0+\frac{2\pi p}{s+1},p=0,1,2,\mathrm{\dots },s$$ (13) In (13) $`\theta _0`$ is arbitrary and indicates a particular basis in the phase space. The hermitian phase operator is then defined as $$\mathrm{\Phi }_\theta =\sum _{p=0}^{s}\theta _p|\theta _p><\theta _p|$$ (14) The expectation value of the phase operator with respect to the new nonlinear state $`|\alpha >`$ is given by $$<\alpha |\mathrm{\Phi }_\theta |\alpha >=\sum _{p=0}^{s}\theta _p|<\theta _p|\alpha >|^2$$ (15) where $`|<\theta _p|\alpha >|^2`$ is the probability of being in the state $`|\theta _p>`$. 
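The coefficients $`d_n`$ of Eq. (4) and the normalisation constant of Eq. (11) are straightforward to tabulate numerically. The sketch below (hypothetical helper names, assuming a generic nonlinearity function $`f`$) builds the number-basis amplitudes of the state (10); for $`f(n)=1`$ it reduces to the ordinary Glauber coherent state.

```python
import math

def d_coeffs(f, nmax):
    """d_0 = 1, d_n = [f(1)f(2)...f(n)]^{-1}  (Eq. (4))."""
    d = [1.0]
    for i in range(1, nmax + 1):
        d.append(d[-1] / f(i))
    return d

def new_ncs_amplitudes(f, alpha, nmax=60):
    """Number-basis amplitudes of |alpha> in Eq. (10): c d_n^{-1} alpha^n / sqrt(n!),
    with c fixed by the normalisation condition of Eq. (11)."""
    d = d_coeffs(f, nmax)
    raw = [alpha ** n / (d[n] * math.sqrt(math.factorial(n))) for n in range(nmax + 1)]
    c = sum(abs(x) ** 2 for x in raw) ** -0.5
    return [c * x for x in raw]
```

The truncation `nmax` must be large enough that the tail of the series is negligible for the chosen `alpha` and `f`.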
Then in the limit $`s\to \mathrm{\infty }`$ we get from (15) $$<\alpha |\mathrm{\Phi }_\theta |\alpha >=\int _{\theta _0}^{\theta _0+2\pi }\theta P(\theta )d\theta $$ (16) where the continuous probability distribution $`P(\theta )`$ is given by $$P(\theta )=\lim _{s\to \mathrm{\infty }}\frac{s+1}{2\pi }|<\theta _p|\alpha >|^2$$ (17) Now choosing $`\theta _0`$ as $$\theta _0=\varphi -\frac{\pi s}{s+1}$$ (18) and using (17) we obtain the Pegg-Barnett phase probability distribution for the new nonlinear coherent states (10): $$P_{PB}(\theta )=\frac{1}{2\pi }\left[1+2c^2\sum _{n>k}\frac{d_n^{-1}d_k^{-1}}{\sqrt{n!k!}}|\alpha |^{n+k}\mathrm{cos}[(n-k)\theta ]\right],-\pi \le \theta \le \pi $$ (19) With the phase probability distribution known, various quantum mechanical averages in the phase space can be obtained using this function. For example, the phase variance is given by $$<(\mathrm{\Delta }\mathrm{\Phi }_\theta )^2>=\int _{-\pi }^{\pi }\theta ^2P(\theta )d\theta =\frac{\pi ^2}{3}+4c^2\sum _{n>k}\frac{d_n^{-1}d_k^{-1}}{\sqrt{n!k!}}|\alpha |^{n+k}\frac{(-1)^{n-k}}{(n-k)^2}$$ (20) It may be noted that since $`N`$ and $`\mathrm{\Phi }_\theta `$ are canonically conjugate operators they obey the uncertainty relation $$<(\mathrm{\Delta }N)^2><(\mathrm{\Delta }\mathrm{\Phi }_\theta )^2>\ge \frac{1}{4}|<[N,\mathrm{\Phi }_\theta ]>|^2$$ (21) where $`<(\mathrm{\Delta }X)^2>=<X^2>-<X>^2`$ and the right hand side of (21) is given by $$[N,\mathrm{\Phi }_\theta ]=i[1-2\pi P(\theta _0)]$$ (22) Now to examine number-phase squeezing we introduce the following squeezing parameters: $$S_N=\frac{2<(\mathrm{\Delta }N)^2>}{|<[N,\mathrm{\Phi }_\theta ]>|}-1,S_\mathrm{\Phi }=\frac{2<(\mathrm{\Delta }\mathrm{\Phi }_\theta )^2>}{|<[N,\mathrm{\Phi }_\theta ]>|}-1$$ (23) If $`S_N<0`$ ($`S_\mathrm{\Phi }<0`$) then the nonlinear coherent state is number (phase) squeezed. We note that the phase quasiprobability distributions $`P_{Q,W}(\theta )`$ associated with the Husimi Q-function and the Wigner function can be obtained by integrating these functions over the radial variable $`|\beta |`$. 
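Numerically, the distribution (19) and the variance (20) are most easily obtained directly from the number-basis amplitudes, since $`P(\theta )`$ is just the squared modulus of the overlap with the phase states. A minimal sketch (assuming real amplitudes, i.e. $`\theta `$ measured relative to the coherent-state phase $`\varphi `$; the midpoint quadrature is an implementation choice, not from the paper):

```python
import math

def pb_distribution(amps, theta):
    """Pegg-Barnett phase distribution for real number-basis amplitudes (Eq. (19))."""
    z = sum(a * complex(math.cos(n * theta), math.sin(n * theta))
            for n, a in enumerate(amps))
    return abs(z) ** 2 / (2.0 * math.pi)

def phase_variance(amps, m=2000):
    """<(ΔΦ)²> = ∫_{-π}^{π} θ² P(θ) dθ by the midpoint rule (cf. Eq. (20));
    <Φ> = 0 by symmetry for real amplitudes."""
    h = 2.0 * math.pi / m
    return sum(((j + 0.5) * h - math.pi) ** 2
               * pb_distribution(amps, (j + 0.5) * h - math.pi)
               for j in range(m)) * h
```

For the vacuum, `amps = [1.0]`, the distribution is uniform and the variance equals $`\pi ^2/3`$, the first term of Eq. (20).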
The forms of these distributions are given by $$P_{Q,W}(\theta )=\frac{1}{2\pi }\left[1+2c^2\sum _{n>k}\frac{d_n^{-1}d_k^{-1}}{\sqrt{n!k!}}|\alpha |^{n+k}\mathrm{cos}[(n-k)\theta ]F(n,k)\right],-\pi \le \theta \le \pi $$ (24) where the coefficients $`F(n,k)`$ in the case of the Q-function are given by $$F(n,k)=\frac{\mathrm{\Gamma }(\frac{n+k}{2}+1)}{\sqrt{n!k!}}$$ (25) while in the case of the Wigner function they are given by $$F(n,k)=2^{(n-k)/2}\sqrt{\frac{k!}{n!}}\frac{\mathrm{\Gamma }(\frac{n}{2}+1)}{(\frac{k}{2})!},\mathrm{n}\mathrm{even}$$ $$=2^{(n-k)/2}\sqrt{\frac{k!}{n!}}\frac{\mathrm{\Gamma }(\frac{n+1}{2})}{(\frac{k-1}{2})!},\mathrm{n}\mathrm{odd}$$ (26) ## 3 Phase properties We shall now analyse various phase distributions for the nonlinear coherent state. However, before we do this it is necessary to specify a nonlinearity function $`f(n)`$. It is clear that for different choices of the nonlinearity function we shall get different nonlinear coherent states. In the present case we choose a nonlinearity function which has been used in the description of the motion of a trapped ion : $$f(n)=L_n^1(\eta ^2)[(n+1)L_n^0(\eta ^2)]^{-1}$$ (27) where $`\eta `$ is known as the Lamb-Dicke parameter and $`L_n^m(x)`$ are generalised Laguerre polynomials. We shall now evaluate the distribution functions (19) and (24) with the nonlinearity function given by (27). In figure 1 we plot the Pegg-Barnett phase distribution $`P_{PB}(\theta )`$ against $`\theta `$, keeping $`\alpha `$ fixed and using different values of $`\eta `$ for the three curves. From figure 1 we find that for lower values of $`\eta `$ the distribution is broad at the top. However, as $`\eta `$ increases, peaks begin to develop slowly and for a reasonably large value of $`\eta `$ there are two well developed peaks at $`\theta =\pm \pi /2`$. The appearance of the peaks is an indication of quantum interference. In figure 1a we plot the Pegg-Barnett phase distribution keeping $`\eta `$ fixed at $`0.8`$ and varying $`\alpha `$. 
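The nonlinearity function (27) can be evaluated without any special-function library, since the generalised Laguerre polynomials satisfy a standard three-term recurrence. A sketch (helper names are illustrative); note that $`L_n^1(0)=n+1`$ and $`L_n^0(0)=1`$, so $`f(n)\to 1`$ as $`\eta \to 0`$ and the ordinary coherent state is recovered in that limit.

```python
def genlaguerre(n, a, x):
    """Generalised Laguerre polynomial L_n^a(x) via the three-term recurrence
    (k+1) L_{k+1}^a = (2k+1+a-x) L_k^a - (k+a) L_{k-1}^a."""
    if n == 0:
        return 1.0
    lm, l = 1.0, 1.0 + a - x
    for k in range(1, n):
        lm, l = l, ((2 * k + 1 + a - x) * l - (k + a) * lm) / (k + 1)
    return l

def trapped_ion_f(n, eta):
    """Nonlinearity function of Eq. (27): f(n) = L_n^1(η²) / [(n+1) L_n^0(η²)]."""
    x = eta * eta
    return genlaguerre(n, 1.0, x) / ((n + 1) * genlaguerre(n, 0.0, x))
```

The recurrence is preferable to the explicit series at moderate $`n`$, since it avoids large alternating terms.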
From the figures it is seen that the qualitative features of the distribution remain essentially the same when one of the parameters is kept fixed while the other varies. However, it may be noted that as $`\alpha `$ increases the peak structure becomes more and more prominent. Interestingly, for $`\alpha =0.37`$ (the value where the phase variance is minimum, see below) the distribution shows practically no bifurcation. In figure 2 we plot the three distributions $`P_{PB}(\theta )`$ and $`P_{Q,W}(\theta )`$ for the same values of the parameters. From the figure we find that although the distributions are of the same form they are not quite the same. It is seen that $`P_{PB}`$ is roughly intermediate between $`P_Q`$ and $`P_W`$. Also, the Wigner distribution $`P_W`$ has the sharpest peaks while the Husimi Q distribution $`P_Q`$ is the broadest. However, in the present case $`P_W`$ does not assume any negative value. In figure 3 we have plotted the phase variance against $`\alpha `$ for $`\eta =0.8`$. From the figure we find that the phase variance decreases up to a certain value of $`\alpha `$ and then starts increasing again, assuming its minimum value at $`\alpha =0.37`$. Thus at and around $`\alpha =0.37`$ the best measurement of phase is possible. We note that these parameter values are not special; the phase variance shows the same pattern for other parameter values too. In figure 4 we plot the squeezing parameters $`S_N`$ and $`S_\mathrm{\Phi }`$. From the figure we find that $`S_N<0`$ for a considerable range of $`\alpha `$. This implies that the nonlinear coherent state exhibits squeezing in the $`N`$ component. However, $`S_\mathrm{\Phi }`$ is always positive, implying absence of squeezing in the $`\mathrm{\Phi }`$ component. It is interesting to compare the squeezing behaviour of the new coherent state (10) and the one given by (3). 
In figure 5 we plot the graphs of $`S_N`$ corresponding to these states. From the figure it is seen that while for low values of $`\alpha `$ the coherent state (3) is more squeezed than (10), for larger values, and over a larger range of $`\alpha `$, the coherent state (10) remains squeezed while (3) does not. Next we compare the phase squeezing of the nonlinear coherent states (3) and (10). From figure 6 we find that $`S_\mathrm{\Phi }<0`$ for (3) while $`S_\mathrm{\Phi }>0`$ for (10). Thus from (23) it follows that the nonlinear coherent state (3) exhibits phase squeezing while (10) does not. Finally, to examine the number-phase uncertainty relation (21), we consider the quantity $`F(\alpha )=\sqrt{<(\mathrm{\Delta }N)^2><(\mathrm{\Delta }\mathrm{\Phi }_\theta )^2>}-\frac{1}{2}|<[N,\mathrm{\Phi }_\theta ]>|`$. Thus $`F(\alpha )\ge 0`$, and $`F(\alpha )=0`$ would imply that the state is an intelligent state. On the other hand, a nonzero value of $`F(\alpha )`$ is a measure of how far the state is from being an intelligent state. In figure 7 we plot $`F(\alpha )`$ against $`\alpha `$ for $`\eta =0.8`$. From the figure it is seen that $`F(\alpha )`$ is nonzero and has an increasing trend. The maximum in figure 7 indicates how different the nonlinear coherent state (10) can be from an intelligent state. Thus we conclude that the nonlinear coherent state (10) is not an intelligent state. ## 4 Conclusion In this article we have considered a class of nonlinear coherent states constructed using an operator similar to the displacement operator. We have examined a number of their phase properties. In particular, we have computed various phase distributions and compared them. It has also been shown that the states (10) exhibit number squeezing.
no-problem/0002/cond-mat0002379.html
ar5iv
text
# Light Scattering from Nonequilibrium Concentration Fluctuations in a Polymer Solution ## I Introduction Density and concentration fluctuations in fluids and fluid mixtures can be investigated experimentally by light-scattering techniques. The nature of these fluctuations when the system is in equilibrium is a subject well understood. Here we shall consider fluctuations in nonequilibrium steady states (NESS), when an external and constant temperature gradient is applied, while the system remains in a hydrodynamically quiescent state. That is, we shall deal with fluctuations that are intrinsically present in thermal nonequilibrium states in the absence of any convective instabilities. Such fluctuations have received considerable attention during the past decade. It was originally believed that, because of the existence of local equilibrium in NESS, the time correlation function of the scattered-light intensity would be the same as in equilibrium, but in terms of spatially varying thermodynamic and transport properties corresponding to the local value of temperature. However, it has been demonstrated that qualitative differences do appear. The first complete expression for the spectrum of the nonequilibrium fluctuations in a one-component fluid subjected to a stationary gradient was obtained by Kirkpatrick et al. by using mode-coupling theory. They showed that the central Rayleigh line of the spectrum would be substantially modified as a result of the presence of a temperature gradient. Because of a coupling between the temperature fluctuations and the transverse-velocity fluctuations through the temperature gradient, spatially long-range nonequilibrium temperature and viscous fluctuations appear, modifying the Rayleigh spectrum of the scattered-light intensity. Their results were subsequently confirmed on the basis of fluctuating hydrodynamics. 
The effect is largest for the transverse-velocity fluctuations in the direction of the temperature gradient, which corresponds to the situation that the scattering wave vector, $`𝐤`$, is perpendicular to the temperature gradient $`\nabla T`$; this configuration will be assumed throughout the present paper. In that case, the strengths of the nonequilibrium temperature and viscous fluctuations are predicted to be proportional to $`(\nabla T)^2/k^4`$. The dependence on $`k^{-4}`$ implies that, in real space, the nonequilibrium correlation functions become long ranged. The spatially long-range nature of the correlation functions in NESS is nowadays understood as a general phenomenon arising from the violation of the principle of detailed balance. Experimentally, the long-range nature of the nonequilibrium fluctuations can be probed by light-scattering measurements at small wave numbers $`k`$, i.e. at small scattering angles $`\theta `$. Such experiments have been performed in one-component liquids and excellent agreement between theory and experiments has been obtained. In binary systems, the situation is a little more complicated. In liquid mixtures or in polymer solutions a temperature gradient will induce a concentration gradient through the Soret effect. This induced concentration gradient is parallel to the temperature gradient and has the same or opposite direction depending on the sign of the Soret coefficient, $`S_T`$. In this case, nonequilibrium fluctuations appear not only because of a coupling between the temperature fluctuations and the transverse-velocity fluctuations through the temperature gradient, but also because of a coupling between the concentration fluctuations and the transverse-velocity fluctuations through the induced concentration gradient. The nonequilibrium Rayleigh-scattering spectrum has been calculated for binary liquid mixtures, both on the basis of mode-coupling theory and on the basis of fluctuating hydrodynamics, with identical results. 
In liquid mixtures in thermal nonequilibrium states not only nonequilibrium temperature and nonequilibrium viscous fluctuations, but also nonequilibrium concentration fluctuations exist. The strengths of all three types of nonequilibrium fluctuations are again predicted to be proportional to $`(\nabla T)^2/k^4`$. The theory was extended by Segrè et al. to include the effects of gravity and by Vailati and Giglio to include time-dependent nonequilibrium states. The nature of the nonequilibrium fluctuations in the vicinity of a convective instability has also been investigated, in which case the nonequilibrium modes become propagative. The influence of boundary conditions, which may become important when $`𝐤`$ is parallel to $`\nabla T`$, was considered by Pagonabarraga et al.. Experiments have been performed to study nonequilibrium fluctuations in liquid mixtures. The three types of nonequilibrium fluctuations have been observed in liquid mixtures of toluene and n-hexane, and the strengths of all three types were indeed proportional to $`(\nabla T)^2/k^4`$, as expected theoretically. Initially, it seemed that the prefactors of the amplitudes of these nonequilibrium fluctuations were also in agreement with the theoretical predictions. However, a definitive assessment was hampered by a lack of reliable experimental information on the Soret coefficient. To our surprise, subsequent accurate measurements of the Soret coefficient of liquid mixtures of toluene and n-hexane, obtained both by Köhler and Müller and by Zhang et al., yielded values for the Soret coefficient that were about 25% lower than the values needed to explain the quantitative magnitude of the amplitudes of the observed nonequilibrium fluctuations. Subsequent measurements of the nonequilibrium concentration fluctuations in a mixture of aniline and cyclohexane, obtained by Vailati and Giglio, did not have sufficient accuracy to resolve this issue. 
As an alternative approach, we decided to investigate nonequilibrium concentration fluctuations in a polymer solution. In a polymer solution the same nonequilibrium enhancement effects are expected to exist, but the mass-diffusion coefficient, $`D`$, in this case is several orders of magnitude lower than in ordinary liquid mixtures. In addition, the Soret coefficient is two orders of magnitude larger than the Soret coefficient of ordinary liquid mixtures. These two facts, as discussed below, simplify the theory because, as also happens in an equilibrium polymer solution, the concentration fluctuations become dominant and they are readily observed by light scattering. Both the data acquisition and the data analysis become much easier in this case. Thus a polymer solution would seem to be an ideal system for further investigating nonequilibrium concentration fluctuations. For this purpose we have selected solutions of polystyrene in toluene, for which reliable information on the thermophysical properties is available. We have performed small-angle Rayleigh-scattering experiments in polystyrene-toluene solutions subjected to various externally applied temperature gradients. A summary of our results has been presented in a Physical Review Letter. In the present paper we provide a full account of the experiment and of the analysis of the experimental data. ## II Theory A theory specifically developed for the fluctuations in polymer solutions in thermal nonequilibrium states is not yet available in the literature. However, for polymer solutions in the dilute and semidilute solution regime, as long as entanglement effects can be neglected, it should be possible to use the same fluctuating-hydrodynamics equations as those for ordinary liquid mixtures. As mentioned in the introduction, the complete expression for the Rayleigh spectrum of a binary liquid in the presence of a stationary temperature gradient was evaluated by Law and Nieuwoudt. 
The dynamic structure factor contains three diffusive modes. The decay rate $`\lambda _\nu =\nu k^2`$ of one of these modes is determined by the kinematic viscosity $`\nu `$; this mode disappears in thermal equilibrium, i.e. when $`\nabla T\to 0`$. The other two modes are also present in the Rayleigh spectrum of a liquid mixture in equilibrium and have the decay rates $`\lambda _\pm `$ given by: $$\lambda _\pm =\frac{k^2}{2}(D_T+𝒟)\pm \frac{k^2}{2}\left[(D_T+𝒟)^2-4D_TD\right]^{1/2},$$ (1) where $`D_T`$ is the thermal diffusivity, $`D`$ the binary mass-diffusion coefficient, in the case of a polymer solution to be referred to as the collective diffusion coefficient, and where $$𝒟=D(1+ϵ)$$ (2) with $$ϵ=T\frac{[S_Tw(1-w)]^2}{c_{P,c}}\left(\frac{\partial \mu }{\partial w}\right)_{p,T}.$$ (3) In Eq. (3) $`w`$ is the concentration expressed as mass fraction of the polymer, $`c_{P,c}`$ the isobaric specific heat capacity of the solution at constant concentration, and $`\mu `$ is the difference between the chemical potentials per unit mass of the solvent and the solute. As usual, the Soret coefficient, $`S_T`$, specifies the ratio between temperature and concentration gradients and is defined through the steady-state phenomenological equation: $$\nabla w=-S_Tw(1-w)\nabla T.$$ (4) The two modes with decay rates $`\lambda _\pm `$ incorporate a coupling between temperature and concentration fluctuations. This coupling becomes important in compressible fluid mixtures near a critical locus. However, when $`ϵ\ll 1`$, the dynamic structure factor of a binary system in NESS consists of just three exponentials with decay rates given by: $$\lambda _\nu =\nu k^2,\lambda _+=D_Tk^2,\lambda _{-}=Dk^2.$$ (5) Physically, it means that the temperature gradient $`\nabla T`$ induces nonequilibrium viscous fluctuations appearing as a new term in the Rayleigh spectrum and it leads to nonequilibrium enhancements of the temperature and concentration fluctuations. The condition $`ϵ\ll 1`$ is always fulfilled in the low-concentration limit. 
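The reduction of the exact rates (1) to the simple modes (5) when $`ϵ\ll 1`$ and $`D\ll D_T`$ is easy to check numerically. In the sketch below the diffusivity values are order-of-magnitude assumptions for a dilute polystyrene-toluene solution, chosen for illustration only, not values quoted in this paper.

```python
def decay_rates(k, DT, D, eps):
    """Exact Rayleigh decay rates λ± of Eq. (1), with 𝒟 = D(1+ε) from Eq. (2)."""
    Dc = D * (1.0 + eps)
    s = k ** 2 / 2.0
    root = ((DT + Dc) ** 2 - 4.0 * DT * D) ** 0.5
    return s * (DT + Dc) + s * root, s * (DT + Dc) - s * root

# Assumed order-of-magnitude inputs (SI units: 1/m, m²/s):
k, DT, D, eps = 1.3e5, 9.0e-8, 4.0e-11, 0.010
lam_p, lam_m = decay_rates(k, DT, D, eps)
# For ε ≪ 1 and D ≪ D_T these approach the simple modes of Eq. (5):
# λ+ ≈ D_T k²  (temperature mode) and λ− ≈ D k²  (concentration mode).
```

Since $`\lambda _+\lambda _{-}=D_TDk^4`$ exactly, the relative error of the approximation $`\lambda _{-}\approx Dk^2`$ is of the same (small) order as that of $`\lambda _+\approx D_Tk^2`$.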
But for some liquid mixtures $`ϵ`$ is also small at all concentrations. For instance, in an equimolar mixture of toluene and n-hexane Segrè et al. found $`ϵ\approx 0.028`$. For the dilute polymer solutions considered in the present paper, $`ϵ\approx 0.010`$. Hence, the approximations implying the presence of three diffusive modes with the simple decay rates given by Eq. (5) are even more justified for a polymer solution than for the liquid mixtures studied in previous papers. Furthermore, in ordinary polymer solutions $`D/D_T\approx 5\times 10^{-4}`$; such a small value simplifies the analysis of the concentration fluctuations, because the decay in the time correlation function coming from the concentration mode (decay rate $`\lambda _{-}`$) will be well separated from the decays of the modes with decay rates $`\lambda _\nu `$ and $`\lambda _+`$. Actually, for the small angles employed in our experiments, the decay time of the $`\lambda _{-}`$ mode is typically 0.5-1.5 s, whereas the decay times of the other two modes are around $`10^{-4}`$–$`10^{-5}`$ s. In addition, for a dilute polymer solution, the Rayleigh factor ratio, which determines the ratio of the scattering intensities of the concentration fluctuations and the temperature fluctuations, is much larger than unity, so that the contributions of the temperature fluctuations are negligibly small in practice. Moreover, when the experimental correlograms are analyzed beginning at $`10^{-2}`$ s, the viscous and temperature fluctuations have already decayed. This kind of approximation is usually assumed in the theory of light scattering from polymer solutions in equilibrium states. In conclusion, for heterodyne light-scattering experiments in sufficiently dilute polymer solutions, the expression for the time-dependent correlation function of the scattered light becomes: $$C(k,t)=C_0[1+A_c(𝐤,\nabla T)]e^{-Dk^2t},$$ (6) where $`C_0`$ is the signal to local-oscillator background ratio representing the amplitude of the correlation function in thermal equilibrium. In Eq. 
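The quoted time-scale separation follows from $`\tau =1/(Dk^2)`$ with $`k=(4\pi n/\lambda )\mathrm{sin}(\theta /2)`$. The sketch below reproduces the orders of magnitude; the material parameters (refractive index and diffusivities) are assumed illustrative values, not ones quoted in this paper.

```python
import math

def decay_time(diffusivity, wavelength, n_medium, theta_deg):
    """τ = 1/(D k²) with k = (4π n/λ) sin(θ/2) for scattering angle θ (degrees)."""
    k = 4.0 * math.pi * n_medium / wavelength * math.sin(math.radians(theta_deg) / 2.0)
    return 1.0 / (diffusivity * k * k)

# Assumed order-of-magnitude inputs (not values quoted in the paper):
lam, n_tol = 632.8e-9, 1.49          # He-Ne wavelength (m), refractive index of toluene
D, DT = 4.0e-11, 9.0e-8              # collective and thermal diffusivities (m²/s)
tau_c = decay_time(D, lam, n_tol, 0.5)    # concentration mode: of order 1 s
tau_T = decay_time(DT, lam, n_tol, 0.5)   # temperature mode: orders of magnitude faster
```

Since $`\tau _c/\tau _T=D_T/D`$, the separation between the modes is governed entirely by the diffusivity ratio and is independent of the scattering angle.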
(6) the term $`A_c(𝐤,\nabla T)`$ represents the nonequilibrium enhancement of the concentration fluctuations. This term is anisotropic and depends on the scattering angle. When $`𝐤\perp \nabla T`$, $`A_c`$ reaches a maximum given by: $$A_c(k,\nabla T)=A_c^{*}(w)\frac{(\nabla T)^2}{k^4},$$ (7) where the strength of the enhancement, $`A_c^{*}(w)`$, is given by: $$A_c^{*}(w)=\frac{\left[w(1-w)\right]^2S_{T}^{2}}{\nu D}\left(\frac{\partial \mu }{\partial w}\right)_{p,T}\left[1+2\frac{D}{D_T}(1+\zeta )\right],$$ (8) where $`\zeta `$ is a dimensionless correction term related to the ratio of $`(\partial n/\partial T)_{P,w}`$ and $`(\partial n/\partial w)_{P,T}`$. As mentioned earlier, for the polymer solutions to be considered $`D/D_T`$ is of the order of $`5\times 10^{-4}`$, so that the correction term inside the square brackets can be neglected in practice. Hence, the expression for $`A_c^{*}(w)`$ reduces to: $$A_c^{*}(w)=\frac{\left[w(1-w)\right]^2S_{T}^{2}}{\nu D}\left(\frac{\partial \mu }{\partial w}\right)_{p,T}.$$ (9) The dependence of the nonequilibrium enhancement $`A_c`$ of the concentration fluctuations on $`k^{-4}`$ indicates that the correlations in real space are long ranged. Actually, a $`k^{-4}`$ dependence in Fourier space corresponds to a linear increase of the correlation function in real space . The rapid increase of the strength of the nonequilibrium fluctuations with decreasing values of the wavenumber will saturate for sufficiently small $`k`$ due to the presence of gravity. The gravity effect was predicted by Segrè et al. and has been confirmed by some beautiful experiments of Vailati and Giglio. Our set of working equations, Eqs. (6), (7) and (9), can also be deduced from the theory of Vailati and Giglio for nonequilibrium fluctuations in time-dependent diffusion processes. In that paper only concentration and velocity fluctuations are considered around a nonequilibrium time-dependent state. By applying an inverse Fourier transform to Eq. 
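Eqs. (7) and (9) translate directly into code. The sketch below (hypothetical helper names, placeholder inputs) exercises only the scaling behaviour: halving the wavenumber increases the enhancement sixteen-fold, while the gradient enters quadratically.

```python
def enhancement_strength(w, ST, nu, D, dmu_dw):
    """A*_c of Eq. (9): [w(1-w)]² S_T² / (ν D) · (∂μ/∂w)_{p,T}."""
    return (w * (1.0 - w)) ** 2 * ST ** 2 / (nu * D) * dmu_dw

def nonequilibrium_enhancement(Astar, grad_T, k):
    """A_c of Eq. (7) for k ⊥ ∇T: A_c = A*_c (∇T)² / k⁴."""
    return Astar * grad_T ** 2 / k ** 4
```

Note the units: with SI inputs, $`A_c`$ is dimensionless, as it must be in Eq. (6).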
(25) of Vailati and Giglio, we obtain for the autocorrelation function of the concentration fluctuations for $`𝐤\perp \nabla w`$: $$<\delta w\delta w^{*}>=S(k)\mathrm{exp}\left[-Dk^2t\left(1-\frac{R(k)}{R_c}\right)\right],$$ (10) where $`R(k)/R_c`$ is the Rayleigh-number ratio, which accounts for the effects of gravity and is defined in Eq. (22) of Vailati and Giglio. $`S(k)`$ is the static structure factor, defined in Eq. (26) of the same paper. It should be noticed that, for consistency, in our Eq. (10) we have used the symbol $`w`$ instead of the $`c`$ used by Vailati and Giglio. Both symbols have the same meaning, as it is stated after Eq. (2) of Vailati and Giglio that "$`c`$ is the weight fraction of the denser component". Thus Eq. (26) of Vailati and Giglio for the static structure factor $`S(k)`$ can be written as: $$S(k)=\frac{k_BT}{16\pi ^4\rho }\left(\frac{\partial w}{\partial \mu }\right)_{p,T}\frac{1}{1-R(k)/R_c}\left[1+\frac{(\nabla w)^2}{\nu Dk^4}\left(\frac{\partial \mu }{\partial w}\right)_{p,T}\right].$$ (11) The derivation of Vailati and Giglio was originally developed for time-dependent isothermal diffusion processes in the presence of gravity, where only concentration gradients are present. The validity of this theory for stationary nonequilibrium states is obvious, and we can consider $`\nabla w`$ to be constant, neglecting the weak dependence on space and time considered by Vailati and Giglio. Neglecting gravity effects is equivalent to assuming that the Rayleigh-number ratio is almost zero, $`R(k)/R_c\approx 0`$, as can easily be shown from Eq. (22) in the paper of Vailati and Giglio. Introducing Eq. (4) into Eq. (11) and neglecting $`R(k)/R_c`$, one readily verifies that Eqs. (10) and (11) are equivalent to Eqs. (7) and (9). We note that Eqs. (6), (7) and (9) can also be obtained as a limiting case of the equations derived by Schmitz for nonequilibrium concentration fluctuations in a colloidal suspension. 
Schmitz considered a colloidal suspension, in the presence of a constant gradient, $`\nabla \varphi `$, in the volume fraction, $`\varphi `$, of the colloidal particles, maintained against diffusion by continuous pumping of solvent between two semipermeable and parallel walls. The relevant expression is Eq. (7.7) in the article of Schmitz, which gives the dynamic structure factor, $`S(k,\omega )`$, of the nonequilibrium concentration fluctuations as a function of the wavenumber $`k`$ and the frequency $`\omega `$. The expression obtained by Schmitz includes non-local and memory effects due to the large particle sizes in comparison with the scattering wavelength. The non-local and memory effects cause the transport coefficients to depend on the frequency and the wave number. In our case these effects can be neglected, because the polymer molecules are much smaller than the colloidal particles considered by Schmitz and the condition $`kR_\mathrm{g}\ll 1`$, where $`R_\mathrm{g}`$ is the radius of gyration, is fulfilled. With this simplification the original expression obtained by Schmitz reduces to: $$S(k,\omega )=S(k)[1+A_c]\frac{2Dk^2}{\omega ^2+(Dk^2)^2},$$ (12) where $`A_c`$ is given by: $$A_c=\frac{w^2}{\nu D}\left(\frac{\partial \mu }{\partial w}\right)_{p,T}\left(\frac{\nabla \varphi }{\varphi }\right)^2\frac{1}{k^4}.$$ (13) In deriving Eq. (13) we have used the relationship between pumping rate and volume-fraction gradient, as given by Eqs. (7.6) and (7.11) in the paper of Schmitz. We have also converted the various concentration units used by Schmitz: $`n`$, which is the number of particles per unit volume ($`n=\rho w/m_\mathrm{p}`$), and $`c`$, which is the number of particles per unit mass ($`c=w/m_\mathrm{p}`$), $`\rho `$ being the density of the suspension and $`m_\mathrm{p}`$ the mass of one particle. 
Furthermore, we are using the difference between solvent and solute chemical potentials per unit mass, whereas in the paper of Schmitz $`\mu `$ is the difference in chemical potentials per particle. Although the theory of Schmitz for the nonequilibrium concentration fluctuations was derived for the case that the concentration gradient is produced by a solvent flow, the results will be applicable to NESS under a constant temperature gradient. Since the solvent flow does not appear explicitly in Eq. (13), we may also apply the equation to a system subjected to a stationary volume-fraction gradient caused by other driving forces, such as a temperature gradient. For our polymer solutions $`(\nabla \varphi /\varphi )^2\approx (\nabla w/w)^2`$, the difference being less than 0.5%. With this approximation and by using Eq. (4), it is straightforward to show the equivalence of the theory for nonequilibrium concentration fluctuations in a colloidal suspension and in a polymer solution, independent of whether the concentration gradient is established by solvent pumping or induced by a temperature gradient. ## III Experimental Method The polystyrene used to prepare the polymer solutions was purchased from the Tosoh Corporation (Japan); it has a mass-averaged molecular weight $`M_W=96,400`$ and a polydispersity of 1.01, as specified by the manufacturer. The solvent toluene was Fisher certified reagent purchased from Baker Chemical Co. with a stated purity of better than 99.8%. By using the correlation proposed by Noda et al., $`R_\mathrm{g}^2=1.38\times 10^{-2}M_W^{1.19}`$, the radius of gyration, $`R_\mathrm{g}`$, and the overlap concentration, $`w^{*}`$, for this polymer in toluene solutions may be estimated as $`R_\mathrm{g}\approx 11.0`$ nm and $`w^{*}\approx 3.1\%`$, respectively. Therefore, for the viscoelastic and entanglement effects to be negligible, the concentrations employed in this work should not be much higher than $`3.1\%`$. 
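The quoted estimates can be reproduced from the Noda correlation together with a coil-overlap argument for $`w^{*}`$. Two assumptions in the sketch below are ours, not the paper's: that the correlation gives $`R_\mathrm{g}`$ in Å, and that $`w^{*}`$ is defined by one chain per coil volume at the density of toluene.

```python
import math

def noda_rg_nm(Mw):
    """R_g from the Noda correlation R_g² = 1.38e-2 M_W^1.19
    (R_g assumed in Å), converted to nm."""
    return math.sqrt(1.38e-2 * Mw ** 1.19) / 10.0

def overlap_mass_fraction(Mw, rg_nm, rho_solution=0.867):
    """Rough overlap concentration: w* ≈ M_W / (N_A (4/3)π R_g³ ρ),
    with ρ in g/cm³ (coil-volume definition, an illustrative assumption)."""
    NA = 6.022e23
    vol_cm3 = (4.0 / 3.0) * math.pi * (rg_nm * 1e-7) ** 3
    return Mw / (NA * vol_cm3 * rho_solution)

rg = noda_rg_nm(96400)                      # close to the quoted 11.0 nm
wstar = overlap_mass_fraction(96400, rg)    # a few percent, near the quoted 3.1%
```

The overlap estimate depends on the prefactor convention chosen for the coil volume, so agreement at the ten-percent level is all that should be expected.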
Five polystyrene/toluene solutions below $`w^{*}`$ and one slightly above $`w^{*}`$ were prepared gravimetrically as described by Zhang et al.. To assure the homogeneity of the mixture, the solutions were agitated (with a magnetic stirrer or by shaking by hand) for at least one hour before further use. The light-scattering experiments were performed with an apparatus specifically designed for small-angle Rayleigh scattering in a horizontal fluid layer subjected to a vertical temperature gradient. A diagram of the optical cell is shown in Fig. 1. The following paragraphs describe the apparatus in detail, with references to the symbols in Fig. 1. The polymer solution in the actual light-scattering cell (E) is confined by a circular quartz ring (D) that is sandwiched between top and bottom copper plates (A and F, respectively). The cell is filled through two stainless steel capillary tubes (G) which had been soldered into holes in the top and bottom plates. The inner and the outer diameters of the quartz ring (D) are 1.52 cm and 2.05 cm, respectively. Two identical optical windows (B) are used for letting a laser beam pass through the liquid solution. Both windows are cylindrical and made from sapphire because of the relatively high thermal conductivity of this material. The windows are epoxied into the centers of the top and the bottom copper plates by a procedure similar to the one described by Law et al.. Once the optical windows are installed, the top and bottom copper plates are sealed against the quartz ring by indium O-rings. To avoid heat conduction from the upper to the lower plate other than through the liquid, the two copper plates are held together by teflon screws (not shown in the figure). The tension of these teflon screws was adjusted to set the flat surfaces of the optical windows parallel to within 10 seconds of arc. 
This was accomplished by passing a laser beam through the cell and monitoring the interference pattern of the beams reflected from the inner surfaces of the windows while the screws were fastened. Once the cell was mounted, the distance $`d`$ between the windows was accurately determined by measuring the angular variation of these interference fringes. The result was $`d=(0.118\pm 0.005)`$ cm. This value was confirmed by also measuring the separation with a cathetometer. Since the nonequilibrium enhancement of the fluctuations given by Eq. (7) depends on $`k^4`$, small scattering angles are required to obtain measurable values of $`A_c`$; in practice, scattering angles $`\theta `$ between 0.4° and 0.9° were used. A major difficulty with such small-angle experiments is the presence of strong static scattering from the optical surfaces. To reduce the background scattering, we needed very clean windows. The optical surfaces of the windows and the other optical components of the experimental arrangement were cleaned by applying first a Windex solution and then acetone with cotton swabs. Moreover, we employed thick cell windows to remove the air-window surfaces from the field of view of the detector. In addition, the outer surfaces of the windows and the other optical surfaces were broad-band (488-623 nm) anti-reflection coated (CVI Laser Corporation) to reduce back reflections and forward-scattered intensity. As a result of these efforts, we obtained signal to background ratios high enough to allow accurate measurements of the scattered-light intensities. To observe the intrinsic nonequilibrium fluctuations, it is essential to avoid any convection in the liquid layer. This is accomplished by heating the horizontal fluid layer from above. Furthermore, bending and defocusing of the light beam due to the refractive-index gradient induced by the temperature gradient could seriously limit the angular resolution.
This difficulty is avoided by employing a vertical incident light beam, parallel to the temperature gradient. To eliminate stray deflections caused by air currents near the cell, the whole assembly is covered by a plexiglass box with a small hole at the top to let the laser beam pass through. The temperature of the top plate was maintained by a computer-controlled resistive heater winding (C). The temperature of the bottom plate is controlled by circulating constant-temperature fluid from a Forma Model 2096 bath through channels in the base of the cell (I), which is in good thermal contact with the bottom plate. With these devices, the temperatures of both the hot and cold plates can be held constant to within $`\pm `$20 mK. These temperatures remain fixed over the collection time of any experimental run. The temperatures of both plates are monitored by thermistors inserted in holes (H) next to the sample windows and deep enough to be located very close to the liquid layer. Due to the relatively high thermal conductivity of the sapphire windows and the symmetry of the arrangement, the lateral temperature gradients are small. Numerical modeling of the thermal conduction process yields negligible differences between the measured temperature gradient and the actual temperature gradient between the windows. A coherent beam ($`\lambda =632.8`$ nm) from a 6 mW stabilized cw He-Ne laser is focused with a 20 cm focal-length lens onto the polymer solution in the scattering cell. The scattering angle, $`\theta `$, is selected with a small pinhole (500 $`\mu `$m) located 197 mm after the cell. The collecting pinhole is placed in a plane orthogonal to the transmitted beam. The distance between the point where the beam hits the collection plane and the pinhole was carefully measured with a vernier micrometer scale. In the original light-scattering experiments of Law et al.
in NESS of a one-component liquid, the scattering wave number, $`k`$, was determined from equilibrium light-scattering measurements and the known value of the thermal diffusivity, $`D_T`$. In the present experiments we have determined $`k`$ by directly measuring the scattering angle $`\theta `$. Since the intensities of the nonequilibrium fluctuations depend on $`k^4`$, the measurement of $`k`$ has to be done with considerable accuracy. By working out the ray-tracing problem, taking into account refraction at the two window surfaces, we can deduce the scattering angle, $`\theta `$, from the location of the pinhole relative to that of the transmitted beam. For this calculation the thickness of the bottom window, the distance from the window to the collecting plane, the refractive index of the window, and the refractive index, $`n`$, of the solution are needed. It was assumed that the scattering volume is in the middle of the cell; since the cell is very thin, the possible corrections to this assumption are negligible. The distances were measured with an accuracy of $`\pm 0.05`$ cm. The value 1.7660 of the refractive index of the window at the wavelength $`\lambda `$ of the incident light was obtained from the manufacturer. The refractive index, $`n`$, of the polystyrene solutions as a function of the polymer concentration at 25°C was measured with a thermostated Abbé refractometer as described by Zhang et al.. For the polystyrene solutions considered in the present paper the refractive index can be represented by: $$n(w)=1.49049+0.09220w+0.02556w^2.$$ (14) The scattering wave number $`k`$ is related to the scattering angle $`\theta `$ through the Bragg condition $`k=\frac{4\pi n}{\lambda }\mathrm{sin}(\theta /2)`$, where $`\lambda `$ is the wavelength of the incident light. Taking into account the experimental errors in the different quantities relevant to the calculation, we were able to determine $`k`$ with an accuracy of 1.5%.
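The Bragg condition together with Eq. (14) can be evaluated directly; the following sketch treats the quoted angles as in-solution scattering angles and ignores the refraction corrections discussed above (both are simplifying assumptions of ours), with $`w=2.5\%`$ chosen as an illustrative concentration:

```python
import math

# Scattering wave number from the Bragg condition k = (4*pi*n/lambda)*sin(theta/2),
# with the solution refractive index n(w) from Eq. (14).
def refractive_index(w):
    """Eq. (14): refractive index of polystyrene/toluene at 25 C; w is weight fraction."""
    return 1.49049 + 0.09220 * w + 0.02556 * w**2

def wave_number(theta_deg, w, lam_cm=632.8e-7):
    """k in cm^-1 for a scattering angle theta (degrees, assumed in-solution)."""
    n = refractive_index(w)
    theta = math.radians(theta_deg)
    return 4.0 * math.pi * n / lam_cm * math.sin(theta / 2.0)

for theta in (0.4, 0.9):
    print(f"theta = {theta} deg -> k = {wave_number(theta, 0.025):.0f} cm^-1")
```

Because of the $`k^4`$ dependence of the enhancement, the quoted 1.5% uncertainty in $`k`$ translates into roughly a 6% uncertainty in the measured strength, as noted later in the text.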
As discussed in Section II, we want to investigate the concentration fluctuations, which yield the dominant contribution to the Rayleigh spectrum. Since typical decay times for this mode at the small angles employed are always larger than 0.5 s, we are interested in the experimental correlograms for times starting around 10<sup>-2</sup> s. Since this value is well above the region where photomultiplier (PM) afterpulsing effects are important, unlike in previous work, cross-correlation was not necessary for our measurements. This simplifies the experimental arrangement. The light exiting through the pinhole is focused, through corresponding optics, onto the field-selecting pinhole in front of a single PM. The signal from the PM and the corresponding discriminator is analyzed with an ALV-5000 multiple-tau correlator. ## IV Experimental Procedure and Experimental Results Rayleigh-scattering measurements were obtained for six different solutions of polystyrene in toluene, with concentrations $`w`$ in weight fraction ranging from 0.50% to 4.00%. For each solution, the measurements were performed at three to five scattering angles, ranging from 0.4° to 0.9°, which correspond to scattering wave numbers $`k`$ ranging from 900 cm<sup>-1</sup> to 2000 cm<sup>-1</sup>. At these small angles the stray light from window-surface scattering is dominant, assuring that the measurements are in the heterodyne regime. Hence, the light scattered from the inner surfaces of the windows plays the role of a local oscillator and provides the background with which our signal, the light scattered from the polymer solution, is mixed. Before starting the experiments, the light-scattering cell was carefully cleaned by flushing the cell with pure toluene for at least one hour. Next, a gentle stream of nitrogen gas was continuously directed through the cell to dry the inner walls and to remove dust particles.
After this cleaning procedure the polymer solution was introduced into the cell through a 0.5 $`\mu `$m Millipore Millex HPLC teflon filter. We were careful to remove bubbles from the light-scattering cell while filling it with the polymer solution. This cleaning and filling procedure was repeated each time the concentration of the solution inside the cell was changed. It should be noted that a portion of the same polymer solution was introduced into an optical beam-bending cell for measuring the diffusion coefficient, $`D`$, and the Soret coefficient, $`S_T`$, of the solutions as described by Zhang et al.. These independent measurements of the diffusion and Soret coefficients will be used to compare the results of our light-scattering measurements with the theoretical predictions. In the case of polymer solutions it is especially important to have independent measurements of these quantities for the same polymer/solvent system because, as discussed below, there is a sizable dispersion in the literature data, mainly caused by the dependence of these coefficients on parameters that are difficult to control, such as the polydispersity of the sample. Once the cell was filled with a polystyrene solution of known polymer concentration, we started the experiments by setting the temperatures of the top and bottom plates of the cell at 25°C. A light-scattering angle was then selected with the collection pinhole, and the distance in the collection plane between the pinhole and the center of the transmitted forward-beam spot was measured accurately. As already mentioned, this procedure, with the corresponding calculations, yielded scattering wave vectors $`k`$ with an accuracy of 1.5%. Once a scattering angle was selected, the optics was arranged to focus the light exiting through the collecting pinhole into the field-stop pinhole at the PM.
When the temperature of the polymer solution had stabilized, we used the ALV-5000 to collect at least ten equilibrium light-scattering correlation functions, with the polymer solution in thermal equilibrium at 25°C. The photon count rate of these measurements ranged from 0.3 to 2.5 MHz, depending on the run. Each correlation lasted from 30 to 60 minutes, with signal to background ratios from $`1\times 10^{-4}`$ to $`1\times 10^{-3}`$. These small signal to background ratios confirm that our measurements are in the heterodyne regime. After having completed the measurements with the polymer solution in thermal equilibrium, we applied various values of the temperature gradient to the polymer solution by increasing and decreasing the temperatures of the top and bottom plates symmetrically, so that all experimental results correspond to the same average temperature of 25°C. The maximum temperature difference employed was 4.1°C, which corresponds to a maximum temperature gradient of 34.6 K cm<sup>-1</sup>. The variation in the thermophysical properties of the solutions is negligible in this small temperature interval, and the property values corresponding to 25°C have been used for the calculations. After changing the temperatures, we waited at least two hours to be sure that the concentration gradient was fully developed. Then, for each value of the temperature gradient, about six correlograms were taken. In Fig. 2, typical experimental light-scattering correlograms, obtained with the ALV-5000 correlator at $`k=1030`$ cm<sup>-1</sup>, are shown for the solution with polymer mass fraction $`w=2.50\%`$, as a function of the temperature gradient $`\nabla T`$. The figure shows that the amplitude of the correlation at short times is well sampled in all correlograms. A glance at Fig. 2 confirms that the amplitude of the correlation function increases with increasing values of the temperature gradient $`\nabla T`$.
The correlograms, in the range from 10<sup>-2</sup> s to 1 s, depending on the run, were fitted to a single exponential, in accordance with Eq. (6). In all cases, both in equilibrium and in nonequilibrium, very good fits were obtained. Actually, the intensity of the scattered light is observed over a range of wave numbers corresponding to the nonzero aperture of the collecting pinhole. This effect can be accounted for by representing the experimental correlation function in terms of a Gaussian convolution, as explained in previous publications. However, we found that the resulting corrections to the parameter values obtained from fits to a single exponential amounted to less than 1% for the present experiments and could be neglected. Hence, experimental values for the decay rate, $`Dk^2`$, and for the prefactor $`C_0[1+A_c(k,\nabla T)]`$ were obtained by directly fitting the time-correlation function data of the scattered light to a single exponential, as given by Eq. (6). The experimental values obtained for the decay rates, $`Dk^2`$, of the polymer solutions at various values of the scattering wave number $`k`$ and the temperature gradient $`\nabla T`$ are presented in Table I. Each value displayed in Table I is the average from several (at least five) experimental correlograms obtained under the same conditions. The prefactor of the exponential obtained from the fitting procedure was multiplied by the average count rate during the run to get the average intensity (in 10<sup>6</sup> counts/s) of each run. The dimensionless enhancement, $`A_c`$, of the concentration fluctuations at a given nonzero value of $`\nabla T`$ was obtained from the ratio of the nonequilibrium to the equilibrium intensities, measured at the same $`k`$, by means of Eq. (6). The experimental values obtained for $`A_c`$ are presented in Table II. Again, each value displayed in Table II is an average over several correlograms measured under the same conditions.
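A minimal sketch of such a single-exponential fit is given below on a synthetic heterodyne correlogram; the decay rate, prefactor, and noise level are illustrative numbers of ours, not the measured data, and the log-linear least-squares fit stands in for whatever fitting routine was actually used:

```python
import numpy as np

# Synthetic heterodyne correlogram C(t) = C0*(1 + A_c)*exp(-D*k^2*t),
# then a log-linear least-squares fit recovers the decay rate D*k^2
# and the prefactor C0*(1 + A_c).
rng = np.random.default_rng(0)
Dk2_true = 2.0            # decay rate in s^-1 (illustrative)
prefactor_true = 1.5e-4   # C0*(1 + A_c) (illustrative)
t = np.logspace(-2, 0, 100)                     # 10 ms to 1 s, as in the fits
C = prefactor_true * np.exp(-Dk2_true * t)
C *= 1.0 + 0.002 * rng.standard_normal(t.size)  # small multiplicative noise

slope, intercept = np.polyfit(t, np.log(C), 1)  # log C is linear in t
Dk2_fit = -slope
prefactor_fit = np.exp(intercept)
print(Dk2_fit, prefactor_fit)
```

Taking the ratio of the fitted nonequilibrium and equilibrium prefactors at the same $`k`$ then yields $`1+A_c`$, as described in the text.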
The uncertainties quoted in Tables I and II represent standard deviations of the values obtained from the sets of experimental correlograms. ## V Analysis of Experimental Results ### A Mass-diffusion coefficient $`D`$ Dividing the decay rates displayed in Table I by the square of the known wave numbers, a mass-diffusion coefficient, $`D`$, is obtained for each $`w`$, $`k`$ and $`\nabla T`$. As an example, we show in Fig. 3 the values of $`D`$ thus obtained for the polymer solution with $`w=0.50\%`$ from the light-scattering measurements at various values of the wave number $`k`$, as a function of the applied temperature gradient $`\nabla T`$. The horizontal line in Fig. 3 represents the weighted average value of $`D`$. It is readily seen that the experimental $`D`$ is independent of $`\nabla T`$, which implies that the applied temperature gradient does not affect the translational diffusion dynamics of the polymer molecules and that we indeed observed concentration fluctuations for all $`\nabla T`$. Figure 3 also demonstrates that $`D`$ is independent of the wave number $`k`$. Thus the decay rates in Table I are indeed proportional to $`k^2`$, which confirms the correctness of our measurements of $`k`$. For each concentration we determined a single value of $`D`$ as an average of the experimental data obtained at various $`k`$ and $`\nabla T`$. In the averaging process the individual accuracies were taken into account. The resulting values of $`D`$ for the different polymer solutions are presented in Table III and displayed in Fig. 4 as a function of the polymer concentration. For this purpose we prefer to use the concentration $`c`$ in g cm<sup>-3</sup>, because this unit is more widely employed in the literature for polymer solutions. To change the concentration units we used the relationship: $$\rho (w)=0.86178+0.1794w+0.0296w^2(\mathrm{g}\mathrm{cm}^{-3}),$$ (15) as reported by Scholte. The error bars in Fig.
4 have been calculated by adding 3% of the value of $`D`$ to the standard deviations given in Table III, so as to account for the uncertainty in the wave number $`k`$. In Fig. 4 we have also plotted the values obtained by Zhang et al. for the collective diffusion coefficient of the same polymer solutions with an optical beam-bending technique. This figure shows that the values obtained for the diffusion coefficient by the two methods agree within the experimental accuracy. The straight line displayed in Fig. 4 represents the linear relationship: $$D(c)=D_0(1+k_Dc),$$ (16) with the values $$D_0=(4.71\pm 0.08)\times 10^{-7}\mathrm{cm}^2\mathrm{s}^{-1},$$ (18) and $$k_D=(22\pm 2)\mathrm{cm}^3\mathrm{g}^{-1}.$$ (19) for the diffusion coefficient at infinite dilution, $`D_0`$, and the hydrodynamic interaction parameter, $`k_D`$, as determined by Zhang et al. for the polymer solution with the same molecular mass $`M_W=96,400`$. It may be observed that the $`D_0`$ and $`k_D`$ values proposed by Zhang et al. yield a satisfactory description of the dependence of our $`D`$ values on the concentration. An extensive survey of literature values for $`D_0`$ and $`k_D`$ of polystyrene in toluene solutions was performed, and ten different references were examined. Most authors have studied the molecular-weight dependence of these parameters and propose relationships which usually have the form of power laws. The results of our survey are presented in Table IV. In some cases where the scaling equations are not directly given by the authors, we have performed the corresponding fits to obtain the power-law dependence of $`D_0`$ and $`k_D`$ on $`M_W`$. All data in Table IV correspond to polystyrene in toluene at a temperature of 25°C. While $`k_D`$ should be nearly independent of temperature because toluene is a good solvent, $`D_0`$ depends on temperature through a Stokes-Einstein relation.
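The linear relationship (16), combined with the unit conversion $`c=\rho (w)w`$ via Eq. (15), can be sketched as follows; the three weight fractions are illustrative points within the measured range:

```python
# Collective diffusion coefficient D(c) = D0*(1 + k_D*c), Eq. (16),
# with the weight-fraction -> g/cm^3 conversion c = rho(w)*w via Eq. (15).
D0 = 4.71e-7   # cm^2/s, infinite-dilution value
k_D = 22.0     # cm^3/g, hydrodynamic interaction parameter

def density(w):
    """Eq. (15): solution density in g/cm^3; w is polymer weight fraction."""
    return 0.86178 + 0.1794 * w + 0.0296 * w**2

def diffusion(w):
    """D in cm^2/s from Eq. (16) at weight fraction w."""
    c = density(w) * w          # polymer mass per unit volume, g/cm^3
    return D0 * (1.0 + k_D * c)

for w in (0.005, 0.025, 0.040):
    print(f"w = {w:.3f} -> D = {diffusion(w):.2e} cm^2/s")
```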
Neglecting the temperature dependence of the hydrodynamic radius of polystyrene in toluene, we represent the dependence of $`D_0`$ on the temperature, $`T`$, by: $$D_0\propto \frac{T}{\eta _0(T)},$$ (20) where $`\eta _0(T)`$ is the viscosity of the pure solvent (toluene) as a function of temperature, which can be found in the literature. Equation (20) was used for making temperature corrections in cases where the values of $`D_0`$ reported in the literature had been measured at temperatures other than 25°C. Furthermore, we did not consider literature values measured at temperatures more than $`\pm `$10 K from 25°C. The sixth column of Table IV contains the values extrapolated to $`M_W=96,400`$ from the $`D_0`$–$`M_W`$ and $`k_D`$–$`M_W`$ relationships shown in the second column of the same table. From the information in Table IV we conclude that $`D_0=(4.91\pm 0.22)\times 10^{-7}`$ cm<sup>2</sup> s<sup>-1</sup> and $`k_D=21\pm 5`$ cm<sup>3</sup> g<sup>-1</sup>, to be compared with the values $`D_0=(4.71\pm 0.08)\times 10^{-7}`$ cm<sup>2</sup> s<sup>-1</sup> and $`k_D=22\pm 2`$ cm<sup>3</sup> g<sup>-1</sup> quoted in Eq. (16) and adopted by us. Note that the standard deviations of the extrapolated literature values for $`D_0`$ and $`k_D`$ are $`\pm 4.7\%`$ and $`\pm 22\%`$, respectively. The corresponding spread of the literature values of $`D`$ is indicated by the two dashed lines in Fig. 4. The upper line was calculated by taking for $`D_0`$ and $`k_D`$ the literature averages plus their standard deviations; the lower line was calculated by taking for $`D_0`$ and $`k_D`$ the literature averages minus their standard deviations. ### B Nonequilibrium enhancement $`A_c`$ of the concentration fluctuations Having confirmed the validity of our experimental results for the decay rates, we now consider the nonequilibrium enhancements, $`A_c(\nabla T,k)`$, of the concentration fluctuations reported in Table II.
As can be readily seen, the nonequilibrium enhancements show a dramatic increase with increasing temperature gradients. The experimental values obtained for the enhancement are plotted in Fig. 5 as a function of $`(\nabla T)^2/k^4`$ for the six polymer solutions investigated. The results of a least-squares fit of the experimental points to a straight line through the origin are also displayed. The information presented in Fig. 5 confirms that, in the range of scattering wave vectors investigated, the nonequilibrium enhancement of the concentration fluctuations is indeed proportional to $`(\nabla T)^2/k^4`$, in accordance with Eq. (7). The slopes of the lines in Fig. 5 yield experimental values for the strength of the enhancement, $`A_c^{*}(w)`$, listed in Table V. To compare the experimental results with the theoretical prediction, Eq. (9), we need several thermophysical properties of the solutions, namely: the concentration derivative of the difference in the chemical potentials per unit mass, $`(\partial \mu /\partial w)_{p,T}`$, the zero-shear viscosity, $`\eta `$, the mass-diffusion coefficient, $`D`$, and the Soret coefficient, $`S_T`$. a) The derivative of the difference in chemical potentials was calculated from its relationship with the osmotic pressure, $`\mathrm{\Pi }`$, which, for the small concentrations used, can be simplified to: $$\left(\frac{\partial \mu }{\partial w}\right)_{P,T}=\frac{1}{\rho w}\left(\frac{\partial \mathrm{\Pi }}{\partial w}\right)_{P,T}.$$ (21) Values for the concentration derivative of the osmotic pressure of the solutions were obtained from the extensive work of Noda et al.. Specifically, the universal function: $$\left(\frac{\partial \mathrm{\Pi }}{\partial c}\right)_{P,T}=\frac{RT}{M_W}\left[1+2\left(\frac{3\sqrt{\pi }}{4}\frac{c}{c^{*}}\right)+\frac{3}{4}\left(\frac{3\sqrt{\pi }}{4}\frac{c}{c^{*}}\right)^2\right]$$ (22) represents the behavior of this property from the very dilute to the concentrated regime.
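The universal function (22) is easy to evaluate; in the sketch below the choice of cgs units for the gas constant is our assumption, made so that the derivative comes out in erg/g:

```python
import math

# Concentration derivative of the osmotic pressure, Eq. (22).
# Unit assumption (ours): R in erg mol^-1 K^-1, so dPi/dc is in erg/g (cgs).
R = 8.314e7      # gas constant, erg mol^-1 K^-1
T = 298.15       # K (25 C)
M_W = 96400.0
c_star = 0.0272  # overlap concentration, g/cm^3

def dPi_dc(c):
    """Eq. (22): (dPi/dc)_{P,T} as a universal function of c/c*."""
    x = (3.0 * math.sqrt(math.pi) / 4.0) * c / c_star
    return (R * T / M_W) * (1.0 + 2.0 * x + 0.75 * x**2)

# At the overlap concentration the derivative is about 5x the ideal value:
print(dPi_dc(c_star) / dPi_dc(0.0))
```

At $`c=0`$ this reduces to the ideal-solution limit $`RT/M_W`$, and the growth with $`c/c^{*}`$ reflects the excluded-volume interactions captured by the Noda et al. correlation.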
The overlap concentration, $`c^{*}=0.0272`$ g cm<sup>-3</sup>, was calculated, as in Section III, from the correlations proposed by the same authors. b) To calculate the zero-shear viscosity, we employed the Huggins relationship: $$\eta =\eta _0(1+[\eta ]c+k_\mathrm{H}[\eta ]^2c^2),$$ (23) where $`\eta _0`$ is the viscosity of the pure solvent, for which the value 0.5527 mPa·s was taken, $`[\eta ]`$ is the intrinsic viscosity of polystyrene in toluene and $`k_\mathrm{H}`$ the Huggins coefficient for the same system. The intrinsic viscosity was obtained from the correlation: $`[\eta ]=9.06\times 10^{-3}M_W^{0.74}(\mathrm{cm}^3\mathrm{g}^{-1})`$. For the Huggins coefficient, the usual value in a good solvent, $`k_\mathrm{H}=0.35`$, was adopted; no molecular-weight dependence has been reported in the literature for $`k_\mathrm{H}`$. c) Values for the collective mass-diffusion coefficient, $`D`$, of our solutions were presented in the previous section, when analyzing the decay rates of the correlograms. They are displayed in Table III. For a continuous representation of $`D`$ as a function of concentration we use Eq. (16), with the parameters given by Eqs. (18) and (19). d) The Soret coefficient, $`S_T`$, was measured by Zhang et al. for the same solutions used in our light-scattering experiments. It is worth noting that the Soret coefficients measured by Zhang et al. agree, within experimental error, with other recent $`S_T`$ values for polystyrene in toluene reported in the literature. Since the strength of the enhancement depends on the square of the Soret coefficient, we need a good continuous representation of these data to make a theoretical prediction of $`A_c^{*}`$ as a function of the concentration. We assume that $`S_T`$ scales as the inverse of $`D`$, as rationalized by Brochard and de Gennes, and use a relationship proposed by Nystrom and Roots that was used successfully by Zhang et al. for the diffusion coefficient and the Soret coefficient.
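The Huggins relation (23) can be sketched as follows; the solvent viscosity 0.5527 mPa·s at 25°C is taken as read here, and the evaluation point $`c=0.0217`$ g cm<sup>-3</sup> (roughly the $`w=2.5\%`$ solution) is our illustrative choice:

```python
# Zero-shear viscosity from the Huggins relation, Eq. (23),
# with [eta] = 9.06e-3 * M_W^0.74 cm^3/g and k_H = 0.35.
eta0 = 0.5527e-3                  # Pa*s, pure toluene at 25 C (0.5527 mPa*s)
M_W = 96400.0
intrinsic = 9.06e-3 * M_W**0.74   # intrinsic viscosity [eta], cm^3/g
k_H = 0.35                        # Huggins coefficient, good solvent

def viscosity(c):
    """Solution zero-shear viscosity in Pa*s; c in g/cm^3."""
    return eta0 * (1.0 + intrinsic * c + k_H * (intrinsic * c)**2)

print(f"[eta] = {intrinsic:.1f} cm^3/g")
print(f"eta(c=0.0217 g/cm^3) = {viscosity(0.0217)*1e3:.2f} mPa*s")
```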
The equation for $`S_T`$ is $$S_T=S_{T0}\frac{(1+X_S)^A}{1+A_SX_S(1+X_S)^B},$$ (24) where $`A=\frac{(1-\nu )}{(3\nu -1)}`$ and $`B=\frac{(2-3\nu )}{(3\nu -1)}`$ are exponents evaluated with $`\nu =0.588`$, $`X_S=r_S(k_Sc)`$ is a scaling variable, and $`A_S=A+r_S^{-1}`$. The virial constants $`S_{T0}`$ and $`k_S`$ are defined by making a series expansion of Eq. (24) around $`c=0`$ to give $`S_T=S_{T0}(1-k_Sc)+\dots `$, where $`k_S`$ is the first virial coefficient of $`S_T`$. We used the values $`k_S=24`$ cm<sup>3</sup> g<sup>-1</sup> and $`S_{T0}=0.24`$ K<sup>-1</sup> found by Zhang et al. for this molecular weight and then performed a weighted least-squares fit to the concentration-dependent $`S_T`$ data of Zhang et al. to find a value of $`r_S`$. We find $`r_S=1.16\pm 0.07`$. Equation (24) gives an excellent representation of the experimental Soret-coefficient data, as shown in Fig. 6. In Fig. 7 we present a comparison between the experimental values for the nonequilibrium-enhancement strength, $`A_c^{*}`$, and the values calculated from Eq. (9) with the information for the various thermophysical properties as specified above. The error bars associated with the experimental data displayed in Fig. 7 have been calculated by adding 6% to the statistical errors quoted in Table V, so as to account for a 1.5% uncertainty in the values of the wave number $`k`$. The theoretical values have an estimated uncertainty of at least 5%. Taking into account these uncertainties, we conclude that the observed strength of the nonequilibrium concentration fluctuations is in agreement with the values predicted on the basis of fluctuating hydrodynamics in the concentration range investigated. ## VI Conclusions The existence of long-range concentration fluctuations in polymer solutions subjected to stationary temperature gradients has been verified experimentally.
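The correlation (24) and its dilute-limit expansion can be checked numerically with the quoted parameter values:

```python
# Soret-coefficient correlation, Eq. (24), with the parameters quoted in the
# text, and a check of the dilute-limit expansion S_T = S_T0*(1 - k_S*c) + ...
nu = 0.588                       # excluded-volume exponent
A = (1.0 - nu) / (3.0 * nu - 1.0)
B = (2.0 - 3.0 * nu) / (3.0 * nu - 1.0)
r_S = 1.16                       # fitted scaling parameter
A_S = A + 1.0 / r_S
S_T0 = 0.24                      # K^-1, infinite-dilution Soret coefficient
k_S = 24.0                       # cm^3/g, first virial coefficient of S_T

def soret(c):
    """Eq. (24): S_T in K^-1 at concentration c in g/cm^3."""
    X = r_S * k_S * c
    return S_T0 * (1.0 + X)**A / (1.0 + A_S * X * (1.0 + X)**B)

# At small c the correlation reduces to the first-order virial form:
c = 1e-5
print(soret(c), S_T0 * (1.0 - k_S * c))
```

The construction $`A_S=A+r_S^{-1}`$ is exactly what makes the linear term of the expansion come out as $`-S_{T0}k_Sc`$, independent of $`r_S`$.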
As in the case of liquid mixtures, the nonequilibrium enhancement of the concentration fluctuations has been found to be proportional to $`(\nabla T)^2/k^4`$. Unlike the case of liquid mixtures, good agreement between the experimental and theoretical values for the strength of the enhancement of the concentration fluctuations, $`A_c^{*}`$, has been found here. This indicates the validity of fluctuating hydrodynamics for describing nonequilibrium concentration fluctuations in dilute and semidilute polymer solutions. Further theoretical and experimental work will be needed to understand nonequilibrium concentration fluctuations in polymer solutions at higher concentrations. Our present results complement the considerable theoretical and experimental progress recently made in understanding the dynamics of concentration fluctuations of polymer solutions under shear flow. It has been demonstrated that concentration fluctuations in semidilute polymer solutions subjected to shear flow are also enhanced dramatically. This enhancement of the intensity of the concentration fluctuations in shear-induced NESS is similar to the enhancement reported here, where the NESS is achieved by the application of a stationary temperature gradient. On the other hand, the long-range nature of the concentration fluctuations in the presence of a concentration gradient has also recently been observed in a liquid mixture by shadowgraph techniques, yielding additional evidence of the interesting nature of this topic. ## Acknowledgements We are indebted to J.F. Douglas for valuable discussions and to S.C. Greer for helpful advice concerning the characterization of the polymer sample. J.V.S. acknowledges the hospitality of the Institute for Theoretical Physics of Utrecht University, where part of the manuscript was prepared. The research at the University of Maryland is supported by the U.S. National Science Foundation under Grant CHE-9805260. J.M.O.Z.
was funded by the Spanish Department of Education during his postdoctoral stay at Maryland, when part of the work was done.
# No Quasi-long-range Order in Strongly Disordered Vortex Glasses: a Rigorous Proof ## Abstract The paper contains a rigorous proof of the absence of quasi-long-range order in the random-field $`O(N)`$ model for strong disorder in a space of arbitrary dimensionality. This result implies that the quasi-long-range order inherent to the Bragg glass phase of the vortex system in disordered superconductors is absent when the disorder or the external magnetic field is strong. The nature of the vortex phases of disordered superconductors is a subject of active current investigation. A plausible picture includes three phases: vortex liquid (VL), vortex glass (VG) and Bragg glass (BG). In all those phases an Abrikosov lattice is absent, and only short-range order (SRO) is expected in VG and VL. A higher degree of ordering is predicted in BG. It is argued that in this state the vortex array is quasi-long-range ordered. Thus, Bragg peaks can be observed in BG as if the system had an Abrikosov lattice. In the other phases Bragg peaks are not found. The phase transitions from BG to VG and VL are presumably associated with topological defects. The above picture is supported by variational and renormalization-group calculations for the random-field XY model, which is the simplest model of the vortex array in disordered superconductors. Besides, this model is useful for our understanding of many other disordered systems. Hence, detailed knowledge of its properties is important. Unfortunately, the only rigorous result about the random-field XY model is the absence of long-range order (LRO). The present paper contains a new rigorous result: QLRO is absent when the disorder is sufficiently strong. It is interesting that our proof is quite simple, in contrast to the very nontrivial demonstration of the absence of LRO for arbitrarily weak disorder.
It is expected that the vortex system of a disordered superconductor has no ordering for strong disorder, due to the appearance of dislocations. The XY model is a convenient testing ground for understanding the role of the topological defects. Since the vortex glass phase of disordered superconductors corresponds to strong disorder, our rigorous result supports the conjecture that the vortex glass phase has no QLRO. The simplest model of the vortex array in disordered superconductors has the following Hamiltonian $$H=\int d^3r[K(\nabla u(𝐫))^2+h\mathrm{cos}(2\pi u(𝐫)/a-\theta (𝐫))],$$ (1) where $`u`$ is the vortex displacement, $`a`$ the constant of the Abrikosov lattice in the absence of disorder, and $`\theta `$ the random phase. The one-component displacement field $`u(𝐫)`$ describes anisotropic superconductors. The generalization to the isotropic case is straightforward. The ordering can be characterized in terms of the form factor $$G(r)=\langle \mathrm{cos}2\pi (u(\mathrm{𝟎})-u(𝐫))/a\rangle ,$$ (2) where the angular brackets denote the thermal and disorder average. This correlation function contains information about neutron scattering. LRO corresponds to a finite large-distance asymptotics $`G(r\to \mathrm{\infty })\to \mathrm{constant}`$, QLRO is described by the power law $`G(r)\propto r^{-\eta }`$, and SRO corresponds to the exponential decay of the correlation function $`G(r)`$ at large $`r`$. We demonstrate that when the random-field amplitude $`h`$ in Eq. (1) is large the system possesses only SRO. Since $`h`$ depends on the strength of the disorder in the sample and on the external magnetic field, SRO corresponds to the situation in which either the disorder or the magnetic field is strong. We consider not only the XY model (1) but also the more general random-field $`O(N)`$ model.
Its Hamiltonian has the following structure $$H=-J\underset{\langle ij\rangle }{\sum }𝐒_i𝐒_j-H\underset{i}{\sum }𝐒_i𝐧_i,$$ (3) where $`𝐒_i`$ are the $`p`$-component unit spin vectors on the simple cubic lattice in the $`D`$-dimensional space, $`𝐧_i`$ is the random unit vector describing the orientation of the random field at site $`i`$, and the angular brackets denote summation over nearest neighbors on the lattice. The XY Hamiltonian (1) corresponds to $`p=2`$. In this case the relation between (1) and (3) is given by the formulae $`S_x=\mathrm{cos}2\pi u/a,S_y=\mathrm{sin}2\pi u/a`$, where $`S_{x,y}`$ are the spin components. The idea of our proof is based on the fact that the orientation of any spin $`𝐒_i`$ depends mostly on the random fields at the nearest sites when the random-field amplitude $`H`$ is sufficiently strong. We shall see that knowledge of the random fields in the region of size $`Nb`$ with its center at site $`i`$, where $`b`$ is the lattice constant, allows us to determine the orientation of the spin $`𝐒_i`$ with accuracy $`\mathrm{exp}(-\mathrm{constant}\times N)`$. Thus, the orientations of any two distant spins depend on the realizations of the random field in two non-intersecting regions, up to exponentially small corrections. The values of the random field in these regions are uncorrelated. Hence, the correlations of the distant spins are exponentially small. Below we consider the case of zero temperature. Hence, the system is in the ground state. We assume that the amplitude of the random field $$H=2DJ(1+\sqrt{2}+\delta ),\delta >0.$$ (4) Let us estimate the angle between an arbitrary spin $`𝐒_i`$ and the local random field $`𝐡_i=H𝐧_i`$. The Weiss field $`𝐇_W=𝐡_i+𝐇_J`$ acting on the spin includes the random field $`𝐡_i`$ and the exchange contribution $`𝐇_J=J\sum _j𝐒_j`$, where $`𝐒_j`$ are the nearest neighbors of the spin $`𝐒_i`$. The latter satisfies $`H_J\le 2DJ`$, since the number of nearest neighbors is $`2D`$, where $`D`$ is the spatial dimension.
Hence, the minimal possible modulus of the Weiss field is $`(H-H_J)\ge (H-2DJ)`$. Any spin is oriented along the local Weiss field $`𝐇_W`$. Let us consider the triangle two sides of which are $`𝐡_i`$ and $`𝐇_J`$, and the third side is parallel to $`𝐒_i`$. The law of sines allows us to show that the maximal possible angle between $`𝐡_i`$ and $`𝐒_i`$ is $`\varphi _1=\mathrm{arcsin}(2DJ/H)`$. We shall now determine the orientation of the spin $`𝐒_i`$ iteratively. Let the zeroth approximation $`𝐒_i^0`$ be oriented along the random field $`𝐡_i`$. Let the first approximation $`𝐒_i^1`$ be oriented along the Weiss field $`𝐇_W^0`$, calculated with the zeroth approximation for the neighboring spins: $`𝐇_W^0=𝐡_i+J\sum _j𝐧_j`$, where $`\sum _j`$ denotes the summation over the nearest neighbors. The second approximation $`𝐒_i^2`$ is determined with the Weiss field in the first approximation, etc. Any approximation $`𝐒_i^k`$ depends on the random fields only at a finite set of the lattice sites. The distance between any such site and site $`i`$ is no more than $`kb`$, where $`b`$ is the lattice constant. In any approximation the Weiss field satisfies $`H_W^k\ge H-2DJ`$. Let $`𝐬_i^k`$ be the difference between the $`k`$th and $`(k-1)`$th approximations for the spin $`𝐒_i`$. Then $`|𝐇_W^k-𝐇_W^{(k-1)}|\le 2DJm^k`$, where $`m^k`$ denotes the maximal value of $`s_l^k`$. Hence, we find with the law of sines that the angle between $`𝐒_i^k`$ and $`𝐒_i^{(k-1)}`$ is less than $`\varphi _k=\mathrm{arcsin}(2DJm^k/(H-2DJ))`$. Since $`s_i^{(k+1)}\le 2\mathrm{sin}(\varphi _k/2)`$, one obtains the following estimate: $`m^{(k+1)}\le \sqrt{2\{1-\sqrt{1-[2DJm^k/(H-2DJ)]^2}\}}=\sqrt{2[2DJm^k/(H-2DJ)]^2/[1+\sqrt{1-[2DJm^k/(H-2DJ)]^2}]}`$ (5) $`\le 2\sqrt{2}DJm^k/(H-2DJ)=m^k/[1+\delta /\sqrt{2}]\le m^1/[1+\delta /\sqrt{2}]^k,`$ (6) where Eq. (4) is used. Now we are in a position to estimate the correlation function (2). 
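The contraction estimate (5)-(6) is elementary to verify numerically. The following sketch (with illustrative values D = 3, J = 1, delta = 1/2; any delta > 0 works) checks that one iteration step shrinks the maximal spin change by at least the claimed geometric factor 1/(1 + delta/sqrt(2)):

```python
import math

def step_bound(m, D=3, J=1.0, delta=0.5):
    """Right-hand side of Eq. (5): bound on m^(k+1) given m^k = m."""
    H = 2 * D * J * (1 + math.sqrt(2) + delta)   # strong-field assumption, Eq. (4)
    r = 2 * D * J * m / (H - 2 * D * J)          # the combination appearing in (5)
    return math.sqrt(2 * (1 - math.sqrt(1 - r * r)))

delta = 0.5
rate = 1 / (1 + delta / math.sqrt(2))            # geometric rate claimed in Eq. (6)
for i in range(1, 101):
    m = i / 100                                   # possible values of m^k
    assert step_bound(m, delta=delta) <= rate * m + 1e-12
```

The check confirms that the bound (6) indeed dominates the exact expression (5) over the whole relevant range of m, so the iterative corrections decay geometrically.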
In terms of the $`O(N)`$ model it is given by the expression $$G(r)=\left\langle 𝐒(\mathrm{𝟎})𝐒(𝐫)\right\rangle .$$ (7) Let $`N=[r/2b]-1`$, where the square brackets denote the integer part. We decompose the values of the spins in the following way: $`𝐒(𝐱)=𝐒^N(𝐱)+\sum _{k>N}𝐬^k(𝐱)`$. The $`N`$th approximations $`𝐒^N(𝐱)`$ and $`𝐒^N(\mathrm{𝟎})`$ depend on the orientations of the random fields in different regions which have no intersection. Hence, the correlation function $`\langle 𝐒^N(\mathrm{𝟎})𝐒^N(𝐫)\rangle `$ is the product of the averages of the two multipliers $`𝐒^N`$ and thus equals zero due to the isotropy of the distribution of the random field. All other contributions to Eq. (7) can be estimated with Eq. (5) and are exponentially small as functions of $`N`$. This proves that the correlation function $`G(r)`$ has an exponentially small asymptotics at large $`r`$. Thus, for strong disorder (4) both LRO and QLRO are absent. A challenging question concerns the presence of QLRO in weakly disordered random-field systems. Another interesting related problem is the question about QLRO in the random-anisotropy $`O(N)`$ model. Unfortunately, our approach cannot be directly generalized to this problem since the zeroth-order approximation $`𝐒_i^0`$ is not unique in the random-anisotropy model: any spin has two preferable orientations. In conclusion, we have proved that for strong disorder the random-field $`O(N)`$ model has no QLRO. The renormalization group calculations suggest that at $`N>2`$ QLRO is absent for arbitrarily weak disorder. On the other hand, in the random-field $`O(2)`$ model without dislocations the renormalization group predicts QLRO. Our result shows that in a system with topological defects QLRO is absent at least when the disorder is strong. This prediction is relevant for the vortex glass state of disordered superconductors.
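The mechanism behind this result can also be illustrated directly (this is an illustration, not part of the proof): carrying out the iterative Weiss-field construction numerically for the p = 2 case on a one-dimensional ring (D = 1), with assumed parameters satisfying condition (4), the spins lock to their local random fields and distant spins show no measurable correlation, consistent with SRO:

```python
import numpy as np

rng = np.random.default_rng(1)
R, L, J, delta = 2000, 32, 1.0, 1.0          # disorder realizations, ring length (D = 1)
H = 2 * J * (1 + np.sqrt(2) + delta)          # random-field amplitude obeying Eq. (4)

phi = rng.uniform(0.0, 2.0 * np.pi, (R, L))
n_field = np.stack([np.cos(phi), np.sin(phi)], axis=-1)   # random unit vectors n_i
S = n_field.copy()                            # zeroth approximation: spins along the field
for _ in range(60):                           # iterate the Weiss-field construction
    W = H * n_field + J * (np.roll(S, 1, axis=1) + np.roll(S, -1, axis=1))
    S = W / np.linalg.norm(W, axis=-1, keepdims=True)

pin = np.mean(np.sum(S * n_field, axis=-1))   # <S_i . n_i>: pinning to the local field
corr = np.mean(np.sum(S[:, 0] * S[:, 6], axis=-1))   # <S(0) . S(r)> at distance r = 6
assert pin > 0.9                              # spins follow the local random field ...
assert abs(corr) < 0.1                        # ... so distant spins are uncorrelated (SRO)
```

Sixty iterations are far more than enough here, since the corrections contract geometrically at the rate given by Eq. (6).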
no-problem/0002/cond-mat0002027.html
ar5iv
text
# Multi-scaling properties of truncated Lévy flights ## I Introduction Lévy flights are random processes based on Lévy stable distributions. They have been utilized successfully in modeling various kinds of complex spatio-temporal behavior in non-equilibrium dissipative systems such as fluid turbulence, anomalous diffusion, and financial markets. The Lévy stable distribution is self-similar under convolution. It has a long power-law tail that decays much more slowly than an exponential, which gives rise to infinite variance. However, in practical situations, it is usually truncated due to nonlinearity or finiteness of the system. In order to incorporate this fact, Mantegna and Stanley introduced the notion of the truncated Lévy flight. It is based on a truncated Lévy stable distribution with a sharp cut-off in its power-law tail. Therefore, the distribution is not self-similar when convoluted, and has finite variance. Thus, the truncated Lévy flight converges to a Gaussian process due to the central limit theorem, in contrast to the ordinary Lévy stable process. However, as they pointed out, its convergence to a Gaussian is very slow, and the process exhibits anomalous behavior over a wide range before the convergence. Koponen reproduced their result analytically using a different type of truncated Lévy distribution with a smoother cut-off. Dubrulle and Laval applied their idea to the velocity field of 2D fluid turbulence, and claimed that the truncation is essential for the multi-scaling property of the velocity field to appear. They showed that the broken self-similarity of the distribution makes a qualitative difference in the scaling property of the corresponding random process, i.e., the ordinary Lévy stable process exhibits mere single-scaling, while the truncated Lévy process exhibits multi-scaling, although their analysis was mostly based on numerical calculation and the obtained scaling exponents were rather inaccurate. 
The idea of truncated Lévy flights has also been applied to the analysis of financial data such as stock market prices and foreign exchange rates. Multi-scaling analyses of the financial data have also been attempted. In this paper, we treat the truncated Lévy flights analytically based on the smooth truncation introduced by Koponen, and clarify their multi-scaling properties. They exhibit the simplest form of multi-scaling, i.e., bi-fractality, due to the characteristic shape of the truncated Lévy distribution. Our results may have some relevance to the multi-scaling properties of the velocity field in 2D fluid turbulence and of the fluctuations of stock market prices. ## II Truncated Lévy distribution Let $`P(x)`$ be a probability distribution and $`e^{\psi (\zeta )}`$ its characteristic function, i.e., $$P(x)=\frac{1}{2\pi }\int _{-\mathrm{\infty }}^{\mathrm{\infty }}e^{\psi (\zeta )}e^{-i\zeta x}𝑑\zeta ,\qquad e^{\psi (\zeta )}=\int _{-\mathrm{\infty }}^{\mathrm{\infty }}P(x)e^{i\zeta x}𝑑x.$$ (1) As explained in Feller, (the argument of) the characteristic function for a Lévy stable distribution is given by that of a compound Poisson process: $$\psi (\zeta )=\int _{-\mathrm{\infty }}^{\mathrm{\infty }}\left(e^{i\zeta x}-1-i\zeta \tau (x)\right)f(x)𝑑x,$$ (2) where $`f(x)`$ is a probability distribution of increments and $`\tau (x)`$ is a certain centering function. $`f(x)`$ is assumed to be $$f_L(x)=\{\begin{array}{cc}Cq|x|^{-1-\alpha }\hfill & (x<0),\hfill \\ Cpx^{-1-\alpha }\hfill & (x>0),\hfill \end{array}$$ (3) where $`C>0`$ is a scale constant, $`0<\alpha <2`$, $`p\ge 0`$, $`q\ge 0`$, and $`p+q=1`$. The function $`\tau (x)`$ is chosen as $`0`$ for $`0<\alpha <1`$ and $`x`$ for $`1<\alpha <2`$. By integration, we obtain $$\psi _L(\zeta )=C\mathrm{\Gamma }(-\alpha )|\zeta |^\alpha \left(\mathrm{cos}\frac{\alpha \pi }{2}\pm i(q-p)\mathrm{sin}\frac{\alpha \pi }{2}\right)\qquad (\alpha \ne 1),$$ (4) where the upper sign applies when $`\zeta >0`$, and the lower for $`\zeta <0`$. 
This is a well-known form of the Lévy stable characteristic function, and we obtain a Lévy stable distribution $`P_L(x)`$ through an inverse Fourier transform (see Figs. 1 and 2). This characteristic function satisfies $`n\psi _L(\zeta )=\psi _L(n^{1/\alpha }\zeta )`$, which means that the corresponding probability distribution is stable under convolution, i.e., $$P_L^n(x)=n^{-1/\alpha }P_L^1(n^{-1/\alpha }x).$$ (5) Here $`P^n(x)`$ denotes the $`n`$-times convoluted distribution of $`P^1(x)\equiv P(x)`$, i.e., $`P^n(x)=(2\pi )^{-1}\int e^{n\psi (\zeta )}e^{-i\zeta x}𝑑\zeta `$. The Lévy stable distribution $`P_L(x)`$ is symmetric when $`q-p=0`$, and one-sided when $`q-p=\pm 1`$ and $`0<\alpha <1`$.<sup>*</sup><sup>*</sup>*The distribution is one-sided to the right ($`x>0`$) when $`q-p=-1`$ and to the left ($`x<0`$) when $`q-p=1`$. It has a power-law tail of the form $`|x|^{-1-\alpha }`$, and the absolute moment $`\langle |x|^q\rangle :=\int _{-\mathrm{\infty }}^{\mathrm{\infty }}|x|^qP_L(x)𝑑x`$ does not exist for $`q\ge \alpha `$. Now, let us truncate this Lévy stable distribution following Koponen. We introduce a cut-off parameter $`\lambda >0`$ and truncate the original $`f_L(x)`$ in Eq. (3) as $$f_{TL}(x)=\{\begin{array}{cc}Cq|x|^{-1-\alpha }e^{-\lambda |x|}\hfill & (x<0),\hfill \\ Cpx^{-1-\alpha }e^{-\lambda x}\hfill & (x>0).\hfill \end{array}$$ (6) For the case $`0<\alpha <1`$, $`\tau (x)`$ can be omitted, and we obtain by integration $$\psi _{TL}(\zeta ;\lambda )=C\mathrm{\Gamma }(-\alpha )\left\{q(\lambda +i\zeta )^\alpha +p(\lambda -i\zeta )^\alpha -\lambda ^\alpha \right\},$$ (7) or, by expanding the first two terms $$\psi _{TL}(\zeta ;\lambda )=C\mathrm{\Gamma }(-\alpha )\left\{\left(\lambda ^2+\zeta ^2\right)^{\alpha /2}\left(\mathrm{cos}\alpha \theta +i(q-p)\mathrm{sin}\alpha \theta \right)-\lambda ^\alpha \right\},$$ (8) where $`\theta =\mathrm{arctan}\left(\zeta /\lambda \right)`$. Apart from the scale constant, this is the characteristic function of the truncated Lévy distribution given by Koponen.Note the misprint of Eq. (3) in Ref. . 
It should read $`\mathrm{ln}\widehat{P}(k)=c\left\{c_0-(k^2+\nu ^2)^{\nu /2}/\mathrm{cos}(\pi \nu /2)\mathrm{\cdots }\right\}`$. For the case $`1<\alpha <2`$, we use a centering function $`\tau (x)=x`$ and obtain $$\psi _{TL}(\zeta ;\lambda )=C\mathrm{\Gamma }(-\alpha )\left\{q(\lambda +i\zeta )^\alpha +p(\lambda -i\zeta )^\alpha -\lambda ^\alpha -i\alpha \lambda ^{\alpha -1}(q-p)\zeta \right\}.$$ (9) In this case, an extra term appears in addition to Eq. (7), which induces a drift of the probability distribution when $`q-p\ne 0`$. It can easily be seen that in the limit $`\lambda \to +0`$, these characteristic functions go back to the Lévy stable characteristic function Eq. (4). We obtain a truncated Lévy distribution $`P_{TL}(x;\lambda )`$ through an inverse Fourier transform from Eq. (7) or Eq. (9). The parameter $`\lambda `$ modifies the behavior of $`\psi _{TL}(\zeta ;\lambda )`$ when $`\zeta `$ is comparable to $`\lambda `$ and removes the singularity of the characteristic function at the origin. It thus changes the behavior of $`P_{TL}(x;\lambda )`$ when $`x\gtrsim \lambda ^{-1}`$ and introduces an exponential cut-off to the power-law tail of $`P_{TL}(x;\lambda )`$. Therefore, all absolute moments $`\langle |x|^q\rangle `$ of $`P_{TL}(x;\lambda )`$ are finite, in contrast to the case of the Lévy stable distribution $`P_L(x)`$. The characteristic function $`\psi _{TL}(\zeta ;\lambda )`$ is infinitely divisible but no longer stable. The convolutions of its corresponding probability distribution cannot be collapsed by scaling and shifting the variable $`x`$. 
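Two basic properties of the exponent (7) are easy to confirm numerically: the limit of vanishing cut-off recovers the stable exponent (4), and $`n\psi _{TL}`$ collapses under a joint rescaling of $`\zeta `$ and $`\lambda `$, the relation exploited in the next paragraph. A sketch for the symmetric case p = q = 1/2 with an illustrative alpha = 0.6:

```python
import math

def psi_TL(zeta, lam, alpha=0.6, C=1.0, p=0.5, q=0.5):
    """Characteristic exponent (7) of the truncated Levy distribution, 0 < alpha < 1."""
    g = C * math.gamma(-alpha)
    return g * (q * (lam + 1j * zeta) ** alpha
                + p * (lam - 1j * zeta) ** alpha - lam ** alpha)

alpha = 0.6
# (a) lambda -> +0 recovers the symmetric stable exponent (4) at zeta = 1
psi_L = math.gamma(-alpha) * math.cos(alpha * math.pi / 2)
assert abs(psi_TL(1.0, 1e-8) - psi_L) < 1e-3
# (b) n-fold convolutions collapse under joint rescaling of zeta and lambda
for n in (2, 10):
    for zeta in (0.3, 1.7):
        lhs = n * psi_TL(zeta, 0.05)
        rhs = psi_TL(n ** (1 / alpha) * zeta, n ** (1 / alpha) * 0.05)
        assert abs(lhs - rhs) < 1e-10
```

Property (b) holds exactly because every term of (7) is homogeneous of degree alpha in $`(\zeta ,\lambda )`$ jointly, even though it is not homogeneous in $`\zeta `$ alone.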
However, they can still be collapsed by scaling both $`x`$ and $`\lambda `$ appropriately; the characteristic function $`\psi _{TL}(\zeta ;\lambda )`$ satisfies $`n\psi _{TL}(\zeta ;\lambda )=\psi _{TL}(n^{1/\alpha }\zeta ;n^{1/\alpha }\lambda )`$, which means that the corresponding $`n`$-times convoluted probability distribution satisfies $$P_{TL}^n(x;\lambda )=n^{-1/\alpha }P_{TL}^1(n^{-1/\alpha }x;n^{1/\alpha }\lambda ).$$ (10) We utilize this fact for calculating the scaling exponents of the truncated Lévy flights.In the context of random processes we discuss below, this operation can be considered as a renormalization-group transformation. The limiting Lévy stable process corresponds to a fixed point, and the exponent $`1/\alpha `$ can be regarded as a sort of critical exponent. In Figs. 1 and 2, truncated Lévy distributions $`P_{TL}(x;\lambda )`$ and their convolutions are displayed for symmetric ($`q-p=0`$) and one-sided ($`q-p=-1`$) cases in comparison with the corresponding ordinary Lévy stable distributions $`P_L(x)`$ on a log-log scale. As expected, each truncated Lévy distribution has a cut-off at $`x\approx \lambda ^{-1}=10^3`$ after a power-law decay. The cut-off position gradually approaches the origin with the convolution, and the self-similarity of the convoluted distribution is broken in the tail part. We can extend the power-law decaying part arbitrarily by making $`\lambda `$ smaller. Hereafter, we assume the cut-off parameter $`\lambda `$ to be very small, i.e., the cut-off is far away from the origin, since we are interested in the transient anomalous behavior of the corresponding random process before it converges to a Gaussian due to the central limit theorem. ## III Truncated Lévy flights The truncated Lévy flight is a temporally discrete stochastic process characterized by the truncated Lévy distribution $`P_{TL}(x;\lambda )`$. At each time step $`i`$, a particle jumps a random distance $`x(i)`$ chosen independently from $`P_{TL}(x;\lambda )`$. 
The position $`y(i)`$ of the particle started from the origin is given by $`y(i)=\sum _{j=1}^ix(j)`$. Here we consider two representative cases of truncated Lévy flights, i.e., the symmetric case ($`q-p=0`$, $`0<\alpha <2`$) and the one-sided case ($`q-p=-1`$, $`0<\alpha <1`$). Figure 3 shows typical realizations of the jump $`x(i)`$ and the position of the particle $`y(i)`$ for the symmetric case. The time sequence of the jump $`x(i)`$ is intermittent; $`x(i)`$ mostly takes small values but sometimes takes very large values. Correspondingly, the movement of the particle $`y(i)`$ is also intermittent. This intermittency gives rise to the anomalous scaling property of the trace of $`y(i)`$ in which we are interested. Figure 4 displays typical realizations of $`x(i)`$ and $`y(i)`$ for the one-sided case. Since the probability distribution $`P_{TL}(x;\lambda )`$ vanishes for $`x<0`$, each jump takes a positive value and the position of the particle increases monotonically. As in the symmetric case, their time sequences are intermittent. ## IV Multi-scaling properties In order to characterize the intermittent time sequences shown in Figs. 3 and 4, we introduce multi-scaling analysis. It has been employed successfully in characterizing velocity and energy dissipation fields of fluid turbulence and rough interfaces of surface growth phenomena. Multi-scaling analysis concerns a partition function of the measure defined suitably for the field under consideration. Here we focus on the apparent similarity of the intermittent time sequences of truncated Lévy flights to those of fluid turbulence, i.e., the similarity of $`y(i)`$ in Fig. 3 to the velocity field, and that of $`x(i)`$ in Fig. 4 to the energy dissipation field. Thus, we apply the “multi-affine” analysis to the trace of $`y(i)`$ for the symmetric sequence. The measure $`h(n)`$ is defined as the absolute height difference of $`y(i)`$ between two points separated by a distance $`n`$, i.e., $`h(n)=|y(i+n)-y(i)|`$. 
The distribution of $`h(n)`$ does not depend on $`i`$, since the increment of this process is statistically stationary. The partition function is then defined as $`Z_h(n;q)=\langle h(n)^q\rangle =\langle |\sum _{j=i+1}^{i+n}x(j)|^q\rangle `$, where $`\langle \mathrm{\cdots }\rangle `$ denotes a statistical average. This function is called the “structure function” in the context of fluid turbulence. On the other hand, for the one-sided case, we focus on the trace of $`x(i)`$ and apply the “multi-fractal” analysis. The measure $`m(n)`$ is defined as the area below the trace of $`x(i)`$ between two points separated by a distance $`n`$, i.e., $`m(n)=\sum _{j=i+1}^{i+n}x(j)`$, and the partition function is defined as $`Z_m(n;q)=\langle m(n)^q\rangle =\langle (\sum _{j=i+1}^{i+n}x(j))^q\rangle `$. These partition functions are expected to scale with $`n`$ as $`Z_h(n;q)\sim n^{\zeta (q)}`$ and $`Z_m(n;q)\sim n^{\tau (q)}`$ for small $`n`$. Further, if these scaling exponents $`\zeta (q)`$ and $`\tau (q)`$ exhibit nonlinear dependence on $`q`$, the corresponding measures $`h(n)`$ and $`m(n)`$ are called multi-affine and multi-fractal, respectively. <sup>§</sup><sup>§</sup>§The multi-fractal partition function $`Z_m(n;q)`$ defined here is different from the traditional one that is defined as $`N(n)^{-1}\langle m(n)^q\rangle `$, where $`N(n)`$ is the number of boxes of size $`n`$ that are needed to cover the whole sequence. This makes a difference of $`1`$ to the scaling exponent $`\tau (q)`$ for the one-dimensional case considered here. In Fig. 5, we display the partition functions $`Z_m(n;q)`$ for several values of $`q`$ for the one-sided case on a log-log scale. As can be seen from the figure, each partition function exhibits power-law dependence on $`n`$ for small $`n`$. We obtain a similar figure for the partition functions $`Z_h(n;q)`$ for the symmetric case. The corresponding scaling exponents $`\zeta (q)`$ and $`\tau (q)`$ are shown in Fig. 6. Each curve exhibits strong nonlinear dependence on $`q`$; it is linear for small $`q`$ and constant for large $`q`$. 
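This bi-fractal behavior can also be estimated directly by Monte Carlo. The sketch below is an illustration (not the procedure used for Figs. 5 and 6): it takes the one-sided case with the assumed special value alpha = 1/2, for which the one-sided stable variable has the closed form $`1/Z^2`$ with $`Z`$ a standard normal, and for which the truncation by $`\lambda `$ amounts to an exact exponential tilting $`e^{-\lambda x}`$, so jumps can be drawn by rejection:

```python
import numpy as np

rng = np.random.default_rng(0)

def one_sided_tlf_jumps(size, lam):
    """Jumps of a one-sided truncated Levy flight, special case alpha = 1/2:
    1/Z^2 (Z standard normal) is one-sided stable, and rejection with
    probability exp(-lam*x) implements the exponential tilting exactly."""
    out, n = [], 0
    while n < size:
        with np.errstate(divide="ignore"):
            x = 1.0 / rng.standard_normal(size) ** 2
        x = x[rng.random(size) < np.exp(-lam * x)]
        out.append(x)
        n += x.size
    return np.concatenate(out)[:size]

lam = 1e-4                                      # cut-off far from the origin
jumps = one_sided_tlf_jumps(4_000_000, lam).reshape(-1, 16)
csum = np.cumsum(jumps, axis=1)                 # m(n) for n = 1, ..., 16 in each walk

def slope(q):
    """Log-log slope of Z_m(n;q) = <m(n)^q> between n = 1 and n = 16."""
    return np.log(np.mean(csum[:, 15] ** q) / np.mean(csum[:, 0] ** q)) / np.log(16.0)

alpha = 0.5
assert abs(slope(0.1) - 0.1 / alpha) < 0.1      # low moments: slope close to q/alpha
assert 0.9 < slope(2.0) < 1.25                  # high moments: close to 1 (finite-lam corrections)
```

The two assertions reproduce the linear and the saturated branches of the exponent curve, with the deviations near the transition point expected at finite $`\lambda `$.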
Thus, the sequences generated by truncated Lévy flights clearly possess multi-scaling properties. ## V Derivation of the scaling exponents Now let us derive the scaling exponents $`\zeta (q)`$ and $`\tau (q)`$ from the characteristic function Eq. (7). It is clear from the definition of the partition functions that they can be calculated once we know the probability distribution of the sum of random variables $`z(n):=\sum _{j=1}^nx(j)`$. Since the jumps are independent of each other, the probability distribution of $`z(n)`$ is given by an $`n`$-times convolution of the truncated Lévy distribution $`P_{TL}(x;\lambda )`$, i.e., by $`P_{TL}^n(x;\lambda )`$. As we explained previously, $`P_{TL}^n(x;\lambda )`$ can easily be obtained from the original $`P_{TL}(x;\lambda )`$ by scaling $`x`$ and $`\lambda `$. Making use of this fact, the $`q`$-th absolute moment $`\langle z(n)^q\rangle `$ of $`z(n)`$ can be calculated as $`\langle z(n)^q\rangle _\lambda `$ $`=`$ $`A{\displaystyle \int _0^{\mathrm{\infty }}}z^qP_{TL}^n(z;\lambda )𝑑z=A{\displaystyle \int _0^{\mathrm{\infty }}}z^qn^{-1/\alpha }P_{TL}(n^{-1/\alpha }z;n^{1/\alpha }\lambda )𝑑z`$ (11) $`=`$ $`n^{q/\alpha }A{\displaystyle \int _0^{\mathrm{\infty }}}z^qP_{TL}(z;n^{1/\alpha }\lambda )𝑑z=n^{q/\alpha }\langle z(1)^q\rangle _{n^{1/\alpha }\lambda },`$ (13) where the constant $`A`$ is $`2`$ for the symmetric case and $`1`$ for the one-sided case. Here we explicitly indicated the parameter $`\lambda `$ of the distribution $`P_{TL}(x;\lambda )`$ as the subscript of the average. Note that if $`\langle z(1)^q\rangle _\lambda `$ does not depend on $`\lambda `$, the scaling exponent is simply given by the linear function $`q/\alpha `$, and the process does not exhibit multi-scaling. This is the case for the ordinary Lévy stable distribution. Thus, all we need to calculate is the moment $`\langle z(1)^q\rangle _\lambda `$. However, of course, an analytical expression for $`P_{TL}(x;\lambda )`$ is not attainable except in a few specific cases. 
Here we adopt an approximation which utilizes the facts that the truncated Lévy distribution differs from the ordinary Lévy stable distribution only in the tail part, and that it has a power-law decaying part with an exponent $`-1-\alpha `$ in the middle (see Figs. 1 and 2). Therefore, it is expected that a moment whose degree $`q`$ is lower than $`\alpha `$ is almost the same as that obtained from the ordinary Lévy stable distribution, and that a moment for $`q>\alpha `$ is mostly determined by the asymptotic tail of the truncated Lévy distribution. Note that the moment for $`q>\alpha `$ does not exist for the ordinary Lévy stable distribution. (i) Lower moments ($`0<q<\alpha `$). We can approximate $`P_{TL}(x;\lambda )`$ by $`P_L(x)`$ in this case, since they are different only in their tail parts, which do not contribute significantly to the moments lower than $`\alpha `$. Therefore, $`\langle z(1)^q\rangle _\lambda `$ approximately does not depend on $`\lambda `$, and we obtain $`\langle z(n)^q\rangle \simeq \mathrm{const}.\times n^{q/\alpha }`$. Thus, the scaling exponents $`\zeta (q)`$ and $`\tau (q)`$ are given by $`q/\alpha `$ for $`0<q<\alpha `$. The broken self-similarity of the distribution is not important in this regime. (ii) Higher moments ($`q>\alpha `$). Since the tail of the distribution mainly contributes to the higher moments, we can calculate $`\langle z(1)^q\rangle _\lambda `$ approximately if we know the asymptotic form of the distribution. For this purpose, we expand the characteristic function as in the case of the series expansion of the ordinary Lévy stable distribution. 
By expanding the integrand, the truncated Lévy distribution can be expressed as $`P_{TL}(x;\lambda )`$ $`=`$ $`e^{-C\mathrm{\Gamma }(-\alpha )\lambda ^\alpha }{\displaystyle \frac{1}{2\pi }}{\displaystyle \int _{-\mathrm{\infty }}^{\mathrm{\infty }}}\mathrm{exp}\left[C\mathrm{\Gamma }(-\alpha )\left\{q(\lambda +i\zeta )^\alpha +p(\lambda -i\zeta )^\alpha \right\}\right]e^{-i\zeta x}𝑑\zeta `$ (14) $`=`$ $`\mathrm{Re}{\displaystyle \frac{i}{\pi x}}e^{-C\mathrm{\Gamma }(-\alpha )\lambda ^\alpha }{\displaystyle \underset{k=0}{\overset{\mathrm{\infty }}{\sum }}}{\displaystyle \frac{1}{k!}}\left\{C\mathrm{\Gamma }(-\alpha )\lambda ^\alpha \right\}^k{\displaystyle \int _0^{\mathrm{\infty }}}\left[q\left(1+{\displaystyle \frac{z}{\lambda x}}\right)^\alpha +p\left(1-{\displaystyle \frac{z}{\lambda x}}\right)^\alpha \right]^ke^{-z}𝑑z`$ (15) for the case $`0<\alpha <1`$. At the lowest order ($`k=1`$), we recover the original distribution of the increments: $$P_{TL}(x;\lambda )\simeq e^{-C\mathrm{\Gamma }(-\alpha )\lambda ^\alpha }Cpx^{-1-\alpha }e^{-\lambda x}\qquad (x\gg 1).$$ (16) With this approximation, the moment $`\langle z(1)^q\rangle _\lambda `$ is calculated as $$\langle z(1)^q\rangle _\lambda \simeq e^{-C\mathrm{\Gamma }(-\alpha )\lambda ^\alpha }Cp\int _0^{\mathrm{\infty }}z^{q-1-\alpha }e^{-\lambda z}𝑑z=e^{-C\mathrm{\Gamma }(-\alpha )\lambda ^\alpha }Cp\mathrm{\Gamma }(q-\alpha )\lambda ^{\alpha -q},$$ (17) and we obtain $$\langle z(n)^q\rangle _\lambda =n^{q/\alpha }\langle z(1)^q\rangle _{n^{1/\alpha }\lambda }\propto n^{q/\alpha }\left(n^{1/\alpha }\lambda \right)^{\alpha -q}\propto n^1.$$ (18) Thus, the scaling exponents $`\zeta (q)`$ and $`\tau (q)`$ are given by $`1`$ for $`q>\alpha `$. In summary, we approximately derived the following expression for the multi-scaling exponents $`\zeta (q)`$ and $`\tau (q)`$: $$\zeta (q),\tau (q)=\{\begin{array}{cc}q/\alpha & (0<q<\alpha ),\\ & \\ 1& (q>\alpha ).\end{array}$$ (19) Note that the above approximation becomes more and more accurate as we extend the power-law decaying part by decreasing $`\lambda `$, and this result is exact in the asymptotic limit. In Fig. 
6, these theoretical curves are compared with the experimental results. Except for small deviations near the transition points, the theoretical results reproduce the experimental results well. Although our estimation here is done for the case $`0<\alpha <1`$, the theoretical result Eq. (19) seems to be applicable also for $`1<\alpha <2`$. This implies that Eq. (15) still has its meaning as an asymptotic expansion for $`1<\alpha <2`$. This is proved for the case of the ordinary Lévy stable distribution. This type of simple multi-scaling is sometimes called “bi-fractality” and is known, for example, in the randomly forced Burgers’ equation. ## VI Discussion We analyzed the multi-scaling properties of the truncated Lévy flights based on the smooth truncation introduced by Koponen. As Dubrulle and Laval claimed, truncation is essential for the multi-scaling properties to appear. We clarified this fact and derived the functional form of the scaling exponents for both symmetric and one-sided cases. As we mentioned previously, the cut-off parameter $`\lambda `$ may represent the finiteness of the system under consideration. Then it would be natural to assume $`\lambda `$ to be a decreasing function of the system size $`L`$, for example $`\lambda =L^{-1}`$, and we can think of the system-size dependence of the truncated Lévy flight. Of course, distribution functions $`P_{TL}`$ for different system sizes cannot be collapsed simply by rescaling $`x`$, i.e., finite-size scaling does not hold. We may also take a different viewpoint, where $`\lambda `$ is another tunable parameter independent of the system size $`L`$. Then Eq. (10) may be viewed as a finite-size scaling relation between systems of size $`L/n`$ and $`L`$, similar to that in statistical mechanics. However, approximate finite-size scaling relations still hold separately, if we divide $`P_{TL}`$ into two parts at the power-law decaying part, i.e., into the self-similar part and the tail part. 
As we explained, the self-similar part is insensitive to $`\lambda =L^{-1}`$, and $`P_{TL}`$ for different system sizes collapse onto a single curve in this region without rescaling. On the other hand, the tail part is asymptotically given by $`P_{TL}(x;L^{-1})\sim x^{-1-\alpha }e^{-x/L}`$, which can be rescaled as $`L^{1+\alpha }P_{TL}(xL;L^{-1})`$ to give a universal curve. Thus, we have two different approximate finite-size scaling relations in different regions of $`x`$, separated by the power-law decaying part. Of course, this is closely related to the bi-fractal behavior of the scaling exponents. Similar asymptotic finite-size scaling is also reported in the sandpile models of self-organized criticality. Recently, Chechkin and Gonchar discussed the finite-sample-number effect on the scaling properties of a stable Lévy process. (They treated only the symmetric case using a different argument from ours, which was more qualitative and therefore more general in a sense.) They claimed that, due to the finite-sample-number effect, “spurious” multi-affinity is observed in numerical simulations, and derived the “spurious” multi-scaling exponent. Interestingly, or in some sense obviously, their “spurious” multi-scaling exponent is the same as our multi-scaling exponent, since the truncation of the power law by $`\lambda `$ can also be interpreted as mimicking the finite-sample-number effect of experiments. Our calculation presented in this paper is similar to our previous work on the multi-scaling properties of the amplitude and difference fields of anomalous spatio-temporal chaos found in systems of non-locally coupled oscillators. The distribution treated there was not of the truncated Lévy type but had a form like $`(1+x^2)^{-\alpha /2}e^{-\lambda |x|}`$. Since this form is easily generated by a simple multiplicative stochastic process, some attempts have been made to model economic activity using this type of distribution. 
The bi-fractal behavior of the scaling exponent is the simplest case of multi-scaling, while experimentally observed scaling exponents usually exhibit more complex behavior. Actually, it has long been discussed in the context of fluid turbulence what the shape of the distribution should be in order to reproduce the experimentally observed scaling exponent. To reproduce the behavior of the scaling exponent more realistically in the framework of the truncated Lévy flights discussed here, the introduction of correlations between the random variables will be necessary. Studies in this direction are left for future work. ###### Acknowledgements. The author gratefully acknowledges Professor Michio Yamada and the University of Tokyo for warm hospitality. He also thanks Dr. A. Lemaître, Dr. H. Chaté, and the anonymous referees for useful comments. This work is supported by the JSPS Research Fellowships for Young Scientists.
no-problem/0002/math0002103.html
ar5iv
text
# Asymptotic density and the asymptotics of partition functions 1991 Mathematics Subject Classification. Primary 11P72; Secondary 11P81, 11B82, 11B05. Key words and phrases. Partition functions, asymptotics of partitions, inverse theorems for partitions, additive number theory, asymptotic density. ## 1 The growth of $`p_A(n)`$ Let $`A`$ be a nonempty set of positive integers. The counting function $`A(x)`$ of the set $`A`$ counts the number of positive elements of $`A`$ that do not exceed $`x`$. Then $`0\le A(x)\le x`$, and so $`0\le A(x)/x\le 1`$ for all $`x`$. The lower asymptotic density of $`A`$ is $$d_L(A)=\underset{x\to \mathrm{\infty }}{lim\; inf}\frac{A(x)}{x}.$$ The upper asymptotic density of $`A`$ is $$d_U(A)=\underset{x\to \mathrm{\infty }}{lim\; sup}\frac{A(x)}{x}.$$ We have $`0\le d_L(A)\le d_U(A)\le 1`$ for every set $`A`$. If $`d_L(A)=d_U(A)`$, then the limit $$d(A)=\underset{x\to \mathrm{\infty }}{lim}\frac{A(x)}{x}$$ exists, and is called the asymptotic density of the set $`A`$. A partition of $`n`$ with parts in $`A`$ is a representation of $`n`$ as a sum of not necessarily distinct elements of $`A`$, where the number of summands is unrestricted. The summands are called the parts of the partition. The partition function $`p_A(n)`$ counts the number of partitions of $`n`$ into parts belonging to the set $`A`$. Two partitions that differ only in the order of their parts are counted as the same partition. We define $`p_A(0)=1`$ and $`p_A(n)=0`$ for $`n\le -1`$. The partition function for the set of all positive integers is denoted $`p(n)`$. Clearly, $`0\le p_A(n)\le p(n)`$ for every integer $`n`$ and every set $`A`$. A classical result of Hardy and Ramanujan and Uspensky states that $$\mathrm{log}p(n)\sim c_0\sqrt{n},$$ where $$c_0=\pi \sqrt{\frac{2}{3}}=2\sqrt{\frac{\pi ^2}{6}}.$$ Erdős has given an elementary proof of this result. Let $`\mathrm{gcd}(A)`$ denote the greatest common divisor of the elements of $`A`$. If $`d=\mathrm{gcd}(A)>1`$, consider the set $`A^{}=\{a/d:a\in A\}`$. 
Then $`A^{}`$ is a nonempty set of positive integers such that $`\mathrm{gcd}(A^{})=1`$, and $$p_A(n)=\{\begin{array}{cc}0\hfill & \text{if }n\not\equiv 0\ (\mathrm{mod}\ d),\hfill \\ p_A^{}\left(n/d\right)\hfill & \text{if }n\equiv 0\ (\mathrm{mod}\ d).\hfill \end{array}$$ Thus, it suffices to consider only partition functions for sets $`A`$ such that $`\mathrm{gcd}(A)=1`$. In this paper we investigate the relationship between the upper and lower asymptotic densities of a set $`A`$ and the asymptotic behavior of $`\mathrm{log}p_A(n)`$. In particular, we give a complete and elementary proof of the theorem that, for $`\alpha >0`$, the set $`A`$ has density $`\alpha `$ if and only if $`\mathrm{log}p_A(n)\sim c_0\sqrt{\alpha n}`$. This result was stated, with a sketch of a proof, in a beautiful paper of Erdős . Many other results about the asymptotics of partition functions can be found in Andrews \[1, Chapter 6\] and Odlyzko . ## 2 Some lemmas about partition functions ###### Lemma 1 Let $`A`$ be a set of positive integers. If $`p_A(n_0)\ge 1`$, then $`p_A(n+n_0)\ge p_A(n)`$ for every nonnegative integer $`n`$. Proof. The inequality is true for $`n=0`$, since $`p_A(n_0)\ge 1=p_A(0)`$. We fix one partition $`n_0=a_1^{}+\mathrm{\cdots }+a_r^{}`$. Let $`n\ge 1`$. To every partition $$n=a_1+\mathrm{\cdots }+a_k$$ we associate the partition $$n+n_0=a_1+\mathrm{\cdots }+a_k+a_1^{}+\mathrm{\cdots }+a_r^{}.$$ This is a one–to–one map from partitions of $`n`$ to partitions of $`n+n_0`$, and so $`p_A(n)\le p_A(n+n_0)`$. ###### Lemma 2 Let $`A`$ be a nonempty set of positive integers, and let $`a_1\in A`$. For every number $`x\ge a_1`$ there exists an integer $`u`$ such that $$x-a_1<u\le x$$ and $$\mathrm{max}\{p_A(n):0\le n\le x\}=p_A(u).$$ Proof. If $`a_1\in A`$, then $`p_A(a_1)\ge 1`$. By Lemma 1, $$p_A(n)\le p_A(n+a_1)$$ for every nonnegative integer $`n`$. Therefore, the partition function $`p_A(n)`$ is increasing in every congruence class modulo $`a_1`$. 
If $`0\le r\le a_1-1`$, then $$\mathrm{max}\{p_A(n):0\le n\le x,n\equiv r\ (\mathrm{mod}\ a_1)\}=p_A(u_r)$$ for some integer $`u_r\in (x-a_1,x].`$ It follows that $$\mathrm{max}\{p_A(n):0\le n\le x\}=p_A(u),$$ where $$u=\mathrm{max}\{u_0,u_1,\mathrm{\dots },u_{a_1-1}\}\in (x-a_1,x].$$ This completes the proof. ###### Lemma 3 Let $`A`$ be a nonempty finite set of relatively prime positive integers, and let $`k`$ be the cardinality of $`A`$. Let $`p_A(n)`$ denote the number of partitions of $`n`$ into parts belonging to $`A`$. Then $$p_A(n)=\left(\frac{1}{\prod _{a\in A}a}\right)\frac{n^{k-1}}{(k-1)!}+O\left(n^{k-2}\right).$$ Proof. This is an old result. The usual proof (Netto , Pólya–Szegö \[9, Problem 27\]) is based on the partial fraction decomposition of the generating function for $`p_A(n)`$. There is also an arithmetic proof due to Nathanson . ###### Lemma 4 Let $`n_0`$ be a positive integer, and let $`A`$ be the set of all integers greater than or equal to $`n_0`$. Then $`p_A(n)`$ is increasing for all positive integers $`n`$, and strictly increasing for $`n\ge 3n_0+2`$. Proof. If $`1\le n<n_0`$, then $`p_A(n)=0`$. We say that a partition $`a_1+a_2+\mathrm{\cdots }+a_r`$ has a unique largest part if $`a_1>a_2\ge \mathrm{\cdots }\ge a_r`$. Let $`n\ge n_0`$. Then $`p_A(n)\ge 1`$ since $`n\in A`$. To every partition $`\pi `$ of $`n`$ we associate a partition of $`n+1`$ by adding 1 to the largest part of $`\pi `$. This is a one–to–one map from the set of all partitions of $`n`$ to the set of partitions of $`n+1`$ with a unique largest part, and so $`p_A(n)\le p_A(n+1)`$ for $`n\ge 1`$. Let $`n\ge 3n_0+2`$. If $`n-n_0`$ is even, then $`a=(n-n_0)/2\ge n_0+1`$, and $`n=2a+n_0`$. If $`n-n_0`$ is odd, then $`a=(n-n_0-1)/2\ge n_0+1`$, and $`n=2a+(n_0+1)`$. In both cases, $`a\in A`$. Therefore, if $`n\ge 3n_0+2`$, then there exists a partition of $`n`$ with parts in $`A`$ and with no unique largest part, and so $`p_A(n)<p_A(n+1)`$. This completes the proof. A set of positive integers is cofinite if it contains all but finitely many positive integers. 
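These statements are easy to probe computationally. The sketch below (the helper is an assumed standard dynamic-programming routine, not from the paper) checks the gcd reduction from Section 1, the leading term of Lemma 3 for A = {1, 2, 3}, and the Hardy–Ramanujan growth rate quoted above:

```python
from math import pi, sqrt, log

def partition_counts(n, parts):
    """p_A(m) for 0 <= m <= n via the standard coin-counting recurrence."""
    p = [0] * (n + 1)
    p[0] = 1
    for a in parts:
        for m in range(a, n + 1):
            p[m] += p[m - a]
    return p

# sanity check: p(5) = 7 partitions of 5
assert partition_counts(5, range(1, 6))[5] == 7

# gcd reduction: A = {4, 6} has d = 2 and A' = {2, 3}
pA, pAp = partition_counts(60, [4, 6]), partition_counts(30, [2, 3])
assert all(pA[m] == (pAp[m // 2] if m % 2 == 0 else 0) for m in range(61))

# Lemma 3 for A = {1, 2, 3}: p_A(n) = n^2 / (1*2*3 * 2!) + O(n) = n^2/12 + O(n)
n = 2000
assert abs(partition_counts(n, [1, 2, 3])[n] / (n * n / 12) - 1) < 0.01

# Hardy-Ramanujan: log p(n) / (c_0 sqrt(n)) approaches 1, slowly and from below
c0 = pi * sqrt(2 / 3)
ratio = log(partition_counts(n, range(1, n + 1))[n]) / (c0 * sqrt(n))
assert 0.85 < ratio < 1.0
```

The last check makes visible how slow the convergence in the Hardy–Ramanujan asymptotic is: the lower-order factor $`1/(4n\sqrt{3})`$ still depresses the ratio noticeably at n = 2000.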
###### Lemma 5 Let $`A`$ be a cofinite set of positive integers. Then $$\mathrm{log}p_A(n)c_0\sqrt{n}.$$ Proof. Since $`A`$ is cofinite, we can choose an integer $`n_0>1`$ such that $`A`$ contains the set $`A^{}=\{nn_0\}`$. Then $$p_A^{}(n)p_A(n)p(n).$$ Since $`\mathrm{log}p(n)c_0\sqrt{n}`$, it suffices to prove that $`\mathrm{log}p_A^{}(n)c_0\sqrt{n}`$. Let $`F=\{n:1nn_01\}`$. Applying Lemma 3 with $`k=n_01`$, we obtain a constant $`c1`$ such that $`p_F(n)cn^{n_02}`$ for all positive integers $`n`$. Each part of a partition of $`n`$ must belong either to $`A^{}`$ or to $`F`$, and so every partition of $`n`$ is uniquely of the form $`n=n^{}+(nn^{})`$, where $`n^{}`$ is a sum of elements of $`A^{}`$ and $`nn^{}`$ is a sum of elements of $`F`$. By Lemma 4, the partition function $`p_A^{}(n)`$ is increasing. Let $`nn_0`$. Then $`p_A^{}(n)1`$ and $`p(n)`$ $`=`$ $`{\displaystyle \underset{n^{}=0}{\overset{n}{}}}p_A^{}(n^{})p_F(nn^{})`$ $``$ $`cn^{n_02}{\displaystyle \underset{n^{}=0}{\overset{n}{}}}p_A^{}(n^{})`$ $``$ $`2cn^{n_01}p_A^{}(n).`$ Taking logarithms of both sides, we obtain $`\mathrm{log}p(n)`$ $``$ $`\mathrm{log}2c+(n_01)\mathrm{log}n+\mathrm{log}p_A^{}(n)`$ $``$ $`\mathrm{log}2c+(n_01)\mathrm{log}n+\mathrm{log}p(n)`$ and so $`{\displaystyle \frac{\mathrm{log}p(n)}{c_0\sqrt{n}}}`$ $``$ $`{\displaystyle \frac{\mathrm{log}2c+(n_01)\mathrm{log}n}{c_0\sqrt{n}}}+{\displaystyle \frac{\mathrm{log}p_A^{}(n)}{c_0\sqrt{n}}}`$ $``$ $`{\displaystyle \frac{\mathrm{log}2c+(n_01)\mathrm{log}n}{c_0\sqrt{n}}}+{\displaystyle \frac{\mathrm{log}p(n)}{c_0\sqrt{n}}}.`$ Taking the limit as $`n`$ goes to infinity, we have $`\mathrm{log}p_A^{}(n)c_0\sqrt{n}`$. This completes the proof. ## 3 Abelian and tauberian theorems In this section we derive two results in analysis that will be used in the proof of Theorem 7. To every sequence $`B=\{b_n\}_{n=0}^{\mathrm{}}`$ of real numbers we can associate the power series $`f(x)=_{n=0}^{\mathrm{}}b_nx^n`$. 
We shall assume that the power series converges for $`|x|<1`$. We think of the function $`f(x)`$ as a kind of average over the sequence $`B`$. Roughly speaking, an abelian theorem asserts that if the sequence $`B`$ has some property, then the function $`f(x)`$ has some related property. Conversely, a tauberian theorem asserts that if the function $`f(x)`$ has some property, then the sequence $`B`$ has a related property. The following theorem is abelian. ###### Theorem 1 Let $`B=\{b_n\}_{n=0}^{\mathrm{}}`$ be a sequence of nonnegative real numbers such that the power series $`f(x)=_{n=0}^{\mathrm{}}b_nx^n`$ converges for $`|x|<1`$. If $$\underset{n\mathrm{}}{lim\; inf}\frac{\mathrm{log}b_n}{\sqrt{n}}2\sqrt{\alpha },$$ (1) then $$\underset{x1^{}}{lim\; inf}(1x)\mathrm{log}f(x)\alpha .$$ (2) If $$\underset{n\mathrm{}}{lim\; sup}\frac{\mathrm{log}b_n}{\sqrt{n}}2\sqrt{\beta },$$ (3) then $$\underset{x1^{}}{lim\; sup}(1x)\mathrm{log}f(x)\beta .$$ (4) In particular, if $`\alpha >0`$ and $$\mathrm{log}b_n2\sqrt{\alpha n},$$ (5) then $$\mathrm{log}f(x)\frac{\alpha }{1x}.$$ (6) Proof. Let $`0<\epsilon <1.`$ Inequality (1) implies that there exists a positive integer $`N_0=N_0(\epsilon )`$ such that $$b_n>e^{2(1\epsilon )\sqrt{\alpha n}}\text{for all }nN_0.$$ For $`0<x<1`$, we let $`x=e^t`$, where $`t=t(x)=\mathrm{log}x>0`$, and $`t`$ decreases to 0 as $`x`$ increases to 1. If $`nN_0`$, then $$b_nx^n>e^{2(1\epsilon )\sqrt{\alpha n}}e^{tn}=e^{2(1\epsilon )\sqrt{\alpha n}tn}.$$ Completing the square in the exponent, we obtain $$2(1\epsilon )\sqrt{\alpha n}tn=\frac{(1\epsilon )^2\alpha }{t}t\left(\sqrt{n}\frac{(1\epsilon )\sqrt{\alpha }}{t}\right)^2,$$ and so $$b_nx^n>e^{\frac{(1\epsilon )^2\alpha }{t}}e^{t\left(\sqrt{n}\frac{(1\epsilon )\sqrt{\alpha }}{t}\right)^2}.$$ Choose $`t_0>0`$ such that $$\frac{(1\epsilon )^2\alpha }{t_0^2}>N_0+1,$$ and let $`x_0=e^{t_0}<1`$. If $`x_0<x<1`$ and $`x=e^t`$, then $`0<t<t_0`$. 
Let $$n_x=\left[\frac{(1\epsilon )^2\alpha }{t^2}\right].$$ Then $$N_0<\frac{(1\epsilon )^2\alpha }{t^2}1<n_x\frac{(1\epsilon )^2\alpha }{t^2}$$ and $$\frac{(1\epsilon )\sqrt{\alpha }}{t}1<\sqrt{\frac{(1\epsilon )^2\alpha }{t^2}1}<\sqrt{n_x}\frac{(1\epsilon )\sqrt{\alpha }}{t}.$$ It follows that $$\left(\sqrt{n_x}\frac{(1\epsilon )\sqrt{\alpha }}{t}\right)^2<1,$$ and so $$b_{n_x}x^{n_x}>e^{\frac{(1\epsilon )^2\alpha }{t}}e^t=e^{\frac{(1\epsilon )^2\alpha }{t}t}.$$ Since $`b_nx^n0`$ for all $`n0`$, we have $$f(x)=\underset{n=0}{\overset{\mathrm{}}{}}b_nx^nb_{n_x}x^{n_x}>e^{\frac{(1\epsilon )^2\alpha }{t}t}.$$ Therefore, $$\mathrm{log}f(x)>\frac{(1\epsilon )^2\alpha }{t}t$$ and $$t\mathrm{log}f(x)>(1\epsilon )^2\alpha t^2.$$ Since $$t=\mathrm{log}x1x\text{as }x1^{}\text{,}$$ it follows that $`\underset{x1^{}}{lim\; inf}(1x)\mathrm{log}f(x)`$ $`=`$ $`\underset{x1^{}}{lim\; inf}t\mathrm{log}f(x)`$ $``$ $`\underset{t0^+}{lim\; inf}\left((1\epsilon )^2\alpha t^2\right)`$ $`=`$ $`(1\epsilon )^2\alpha .`$ This inequality is true for every $`\epsilon >0,`$ and so $$\underset{x1^{}}{lim\; inf}(1x)\mathrm{log}f(x)\alpha .$$ This proves (2). If (3) holds, then there exists a positive integer $`N_0=N_0(\epsilon )`$ such that $$b_n<e^{2(1+\epsilon )\sqrt{\beta n}}\text{for all }nN_0$$ Let $`x=e^t`$. 
Then $`f(x)`$ $`=`$ $`{\displaystyle \underset{n=0}{\overset{\mathrm{}}{}}}b_nx^n`$ $`<`$ $`{\displaystyle \underset{n=0}{\overset{N_01}{}}}b_nx^n+e^{\frac{(1+\epsilon )^2\beta }{t}}{\displaystyle \underset{n=N_0}{\overset{\mathrm{}}{}}}e^{t\left(\sqrt{n}\frac{(1+\epsilon )\sqrt{\beta }}{t}\right)^2}`$ $`=`$ $`c_1(\epsilon )+e^{\frac{(1+\epsilon )^2\beta }{t}}{\displaystyle \underset{n=N_0}{\overset{\mathrm{}}{}}}e^{t\left(\sqrt{n}\frac{(1+\epsilon )\sqrt{\beta }}{t}\right)^2},`$ where $$0\underset{n=0}{\overset{N_01}{}}b_nx^n\underset{n=0}{\overset{N_01}{}}b_n=c_1(\epsilon ).$$ If $$n>\left[\frac{16\beta }{t^2}\right]=N_1(t)=N_1,$$ then $$\sqrt{n}>\frac{4\sqrt{\beta }}{t}>\frac{2(1+\epsilon )\sqrt{\beta }}{t}$$ and $$\sqrt{n}\frac{(1+\epsilon )\sqrt{\beta }}{t}>\frac{\sqrt{n}}{2}.$$ It follows that $$e^{t\left(\sqrt{n}\frac{(1+\epsilon )\sqrt{\beta }}{t}\right)^2}<e^{t\left(\frac{\sqrt{n}}{2}\right)^2}=e^{\frac{tn}{4}},$$ and so $`{\displaystyle \underset{n=N_1+1}{\overset{\mathrm{}}{}}}e^{t\left(\sqrt{n}\frac{(1+\epsilon )\sqrt{\beta }}{t}\right)^2}`$ $`<`$ $`{\displaystyle \underset{n=N_1+1}{\overset{\mathrm{}}{}}}e^{\frac{tn}{4}}`$ $`=`$ $`{\displaystyle \frac{e^{t(N_1+1)/4}}{1e^{t/4}}}`$ $`<`$ $`{\displaystyle \frac{8e^{4\beta /t}}{t}},`$ since $`1t/4<e^{t/4}<1t/8`$ for $`0<t<1`$. 
Moreover, $$\underset{n=N_0}{\overset{N_1}{}}e^{t\left(\sqrt{n}\frac{(1+\epsilon )\sqrt{\beta }}{t}\right)^2}<N_1\frac{16\beta }{t^2}.$$ Consequently, $`f(x)`$ $``$ $`c_1(\epsilon )+e^{\frac{(1+\epsilon )^2\beta }{t}}\left({\displaystyle \frac{16\sqrt{\beta }}{t^2}}+{\displaystyle \frac{8e^{4\beta /t}}{t}}\right)`$ $``$ $`{\displaystyle \frac{c_2(\epsilon )e^{\frac{(1+\epsilon )^2\beta }{t}}}{t^2}}.`$ Therefore, $$\mathrm{log}f(x)\frac{(1+\epsilon )^2\beta }{t}+\mathrm{log}\frac{c_2(\epsilon )}{t^2},$$ and so $$t\mathrm{log}f(x)(1+\epsilon )^2\beta +t\mathrm{log}\frac{c_2(\epsilon )}{t^2}.$$ Then $$\underset{x1^{}}{lim\; sup}(1x)\mathrm{log}f(x)=\underset{t0^+}{lim\; sup}t\mathrm{log}f(x)(1+\epsilon )^2\beta .$$ This inequality is true for every $`\epsilon >0`$, and so $$\underset{x1^{}}{lim\; sup}(1x)\mathrm{log}f(x)\beta .$$ This proves (4). If (5) holds, that is, if $$\underset{n\mathrm{}}{lim}\frac{\mathrm{log}b_n}{2\sqrt{n}}=\sqrt{\alpha }>0,$$ then (1) and (3) hold with $`\alpha =\beta `$. These inequalities imply (2) and (4), and so $$\underset{x1^{}}{lim}(1x)\mathrm{log}f(x)=\alpha ,$$ or, equivalently, $$\mathrm{log}f(x)\frac{\alpha }{1x}.$$ This completes the proof. The statement that (5) implies (6) appears in Erdős . The following tauberian theorem generalizes a well–known result of Hardy and Littlewood . ###### Theorem 2 Let $`B=\{b_n\}_{n=0}^{\mathrm{}}`$ be a sequence of nonnegative real numbers such that the power series $$f(x)=\underset{n=0}{\overset{\mathrm{}}{}}b_nx^n$$ converges for $`|x|<1`$. Let $$S_B(n)=\underset{k=0}{\overset{n}{}}b_k.$$ Let $`c>0.`$ If $$\underset{x1^{}}{lim\; sup}(1x)f(x)c,$$ (7) then $$\underset{n\mathrm{}}{lim\; sup}\frac{S_B(n)}{n}c.$$ (8) If $$\underset{x1^{}}{lim\; inf}(1x)f(x)c,$$ (9) then $$\underset{n\mathrm{}}{lim\; inf}\frac{S_B(n)}{n}c.$$ (10) In particular, if $$f(x)\frac{c}{1x}\text{as }x1^{}\text{,}$$ (11) then $$S_B(n)cn.$$ (12) Proof. The Hardy–Littlewood theorem states that (11) implies (12). 
The proofs that (7) implies (8) and that (9) implies (10) require only a simple modification of Karamata’s method, as presented in Titchmarsh \[10, Chapter 7\]. ## 4 Direct and inverse theorems for $`p_A(n)`$ A direct theorem uses information about the sequence $`A`$ to deduce properties of the partition function $`p_A(n)`$. An inverse theorem uses information about the partition function $`p_A(n)`$ to deduce properties of the sequence $`A`$. We begin with a direct theorem. ###### Theorem 3 Let $`A`$ be an infinite set of positive integers with $`\mathrm{gcd}(A)=1`$. If $`d_L(A)\alpha `$, then $$\underset{n\mathrm{}}{lim\; inf}\frac{\mathrm{log}p_A(n)}{c_0\sqrt{n}}\sqrt{\alpha }.$$ If $`d_U(A)\beta `$, then $$\underset{n\mathrm{}}{lim\; sup}\frac{\mathrm{log}p_A(n)}{c_0\sqrt{n}}\sqrt{\beta }.$$ Proof. Let $`A=\{a_k\}_{k=1}^{\mathrm{}}`$, where $`a_1<a_2<\mathrm{}`$. Since $`\mathrm{gcd}(A)=1`$, there is an integer $`\mathrm{}_0`$ such that $`\mathrm{gcd}\{a_k:1k\mathrm{}_01\}=1`$. Let $`\epsilon >0`$. If $`d_U(A)\beta `$, there exists an integer $`k_0=k_0(\epsilon )\mathrm{}_0`$ such that, for all $`kk_0`$, $$\frac{k}{a_k}=\frac{A(a_k)}{a_k}<\beta +\epsilon ,$$ and so $$k<(\beta +\epsilon )a_k.$$ Let $`A^{}=\{a_kA:kk_0\}`$ and $`F=AA^{}=\{a_kA:1kk_01\}`$. Let $`n`$ and $`n^{}`$ be positive integers, $`n^{}n`$, and let $$n^{}=a_{k_1}+a_{k_2}+\mathrm{}+a_{k_r}$$ be a partition of $`n^{}`$ with parts in $`A^{}`$. Then $`k_ik_0`$ for all $`i=1,\mathrm{},r`$. 
To this partition of $`n^{}`$ we associate the partition $$m=k_1+k_2+\mathrm{}+k_r.$$ Since $`k_i<(\beta +\epsilon )a_{k_i}`$ for $`i=1,\mathrm{},r`$, we have $`m`$ $`<`$ $`(\beta +\epsilon )a_{k_1}+(\beta +\epsilon )a_{k_2}+\mathrm{}+(\beta +\epsilon )a_{k_r}`$ $`=`$ $`(\beta +\epsilon )n^{}`$ $``$ $`(\beta +\epsilon )n.`$ This is a one–to–one mapping from partitions of $`n^{}`$ with parts in $`A^{}`$ to partitions of integers less than $`(\beta +\epsilon )n`$, and so $`p_A^{}(n^{})`$ $``$ $`{\displaystyle \underset{m<(\beta +\epsilon )n}{}}p(m)`$ $``$ $`(\beta +\epsilon )n\mathrm{max}\{p(m):m<(\beta +\epsilon )n\}`$ $``$ $`(\beta +\epsilon )np([(\beta +\epsilon )n])`$ $`<`$ $`2np([(\beta +\epsilon )n]),`$ since the unrestricted partition function $`p(n)`$ is strictly increasing. We have $`A=A^{}F`$, where $`A^{}F=\mathrm{}`$. The set $`F`$ is a nonempty finite set of integers of cardinality $`k_01`$, and $`\mathrm{gcd}(F)=1`$ since $`k_0\mathrm{}_0`$. By Lemma 3, there exists a constant $`c`$ such that $$p_F(n)cn^{k_02}$$ for every positive integer $`n`$. Every partition of $`n`$ with parts in $`A`$ can be decomposed uniquely into a partition of $`n^{}`$ with parts in $`A^{}`$ and a partition of $`nn^{}`$ with parts in $`F`$, for some nonnegative integer $`n^{}n`$.
Then $`p_A(n)`$ $`=`$ $`{\displaystyle \underset{n^{}=0}{\overset{n}{}}}p_A^{}(n^{})p_F(nn^{})`$ $``$ $`cn^{k_02}{\displaystyle \underset{n^{}=0}{\overset{n}{}}}p_A^{}(n^{})`$ $``$ $`cn^{k_02}(n+1)\mathrm{max}\{p_A^{}(n^{}):n^{}=0,1,\mathrm{},n\}`$ $``$ $`2cn^{k_01}\mathrm{max}\{p_A^{}(n^{}):n^{}=0,1,\mathrm{},n\}`$ $`<`$ $`2cn^{k_01}2np([(\beta +\epsilon )n])`$ $`=`$ $`4cn^{k_0}p([(\beta +\epsilon )n]).`$ Since $`\mathrm{log}p(n)c_0\sqrt{n}`$, it follows that for every $`\epsilon >0`$ there exists an integer $`n_0(\epsilon )`$ such that $$\mathrm{log}p(n)<(1+\epsilon )c_0\sqrt{n}$$ for $`nn_0(\epsilon ).`$ Therefore, $`\mathrm{log}p_A(n)`$ $``$ $`\mathrm{log}4c+k_0\mathrm{log}n+\mathrm{log}p([(\beta +\epsilon )n])`$ $`<`$ $`\mathrm{log}4c+k_0\mathrm{log}n+(1+\epsilon )c_0\sqrt{(\beta +\epsilon )n}`$ for $`n(n_0(\epsilon )+1)/(\beta +\epsilon ).`$ Dividing by $`c_0\sqrt{n}`$, we obtain $$\frac{\mathrm{log}p_A(n)}{c_0\sqrt{n}}\frac{\mathrm{log}4c+k_0\mathrm{log}n}{c_0\sqrt{n}}+(1+\epsilon )\sqrt{\beta +\epsilon },$$ and so $$\underset{n\mathrm{}}{lim\; sup}\frac{\mathrm{log}p_A(n)}{c_0\sqrt{n}}(1+\epsilon )\sqrt{\beta +\epsilon }.$$ Since this inequality is true for all $`\epsilon >0`$, we obtain $$\underset{n\mathrm{}}{lim\; sup}\frac{\mathrm{log}p_A(n)}{c_0\sqrt{n}}\sqrt{\beta }.$$ Next we prove that if $`d_L(A)\alpha `$, then $$\underset{n\mathrm{}}{lim\; inf}\frac{\mathrm{log}p_A(n)}{c_0\sqrt{n}}\sqrt{\alpha }.$$ This inequality is trivial if $`\alpha =0`$, since $`\mathrm{log}p_A(n)/c_0\sqrt{n}0`$ for all sufficiently large $`n`$. Let $`\alpha >0`$ and $$0<\epsilon <\alpha .$$ There exists an integer $`k_0=k_0(\epsilon )`$ such that, for all $`kk_0`$, $$\frac{k}{a_k}=\frac{A(a_k)}{a_k}>\alpha \epsilon ,$$ and so $$a_k<\frac{k}{\alpha \epsilon }.$$ Since $`\mathrm{gcd}(A)=1`$, every sufficiently large integer can be written as a sum of elements of $`A`$, and so there exists an integer $`N_0`$ such that $`p_A(n)1`$ for all $`nN_0`$. 
Let $`p^{}(n)`$ denote the number of partitions of $`n`$ into parts $`kk_0`$. To every partition $$n=k_1+\mathrm{}+k_r\text{with }k_1\mathrm{}k_rk_0\text{,}$$ we associate the partition $$m=a_{k_1}+\mathrm{}+a_{k_r}.$$ Then $$m<\frac{k_1}{\alpha \epsilon }+\mathrm{}+\frac{k_r}{\alpha \epsilon }=\frac{n}{\alpha \epsilon }.$$ This is a one–to–one mapping from partitions of $`n`$ with parts greater than or equal to $`k_0`$ to partitions of integers $`m`$ less than $`n/(\alpha \epsilon )`$, and so $`p^{}(n)`$ $``$ $`{\displaystyle \underset{m<\frac{n}{\alpha \epsilon }}{}}p_A(m)`$ $``$ $`{\displaystyle \frac{n}{\alpha \epsilon }}\mathrm{max}\left\{p_A(m):m<{\displaystyle \frac{n}{\alpha \epsilon }}\right\}`$ $`<`$ $`{\displaystyle \frac{n}{\alpha \epsilon }}p_A(u_n),`$ where, by Lemma 2 (since $`a_1A`$), the integer $`u_n`$ belongs to the bounded interval $$\frac{n}{\alpha \epsilon }a_1<u_n\frac{n}{\alpha \epsilon }.$$ The sequence $`\{u_n\}_{n=1}^{\mathrm{}}`$ is not necessarily increasing, but $$\underset{n\mathrm{}}{lim}u_n=\mathrm{}.$$ Let $`d`$ be the unique positive integer such that $$0<(\alpha \epsilon )a_1d<(\alpha \epsilon )a_1+1.$$ For every $`i,j1`$, $`u_{(i+j)d}u_{id}`$ $`>`$ $`\left({\displaystyle \frac{(i+j)d}{\alpha \epsilon }}a_1\right){\displaystyle \frac{id}{\alpha \epsilon }}`$ $`=`$ $`{\displaystyle \frac{jd}{\alpha \epsilon }}a_1`$ $``$ $`(j1)a_1.`$ It follows that $`u_{(i+1)d}>u_{id}`$, and so the sequence $`\{u_{id}\}_{i=1}^{\mathrm{}}`$ is strictly increasing. Similarly, $`u_{(i+j)d}u_{id}`$ $`<`$ $`{\displaystyle \frac{(i+j)d}{\alpha \epsilon }}\left({\displaystyle \frac{id}{\alpha \epsilon }}a_1\right)`$ $`=`$ $`{\displaystyle \frac{jd}{\alpha \epsilon }}+a_1`$ $`<`$ $`(j+1)a_1+{\displaystyle \frac{j}{\alpha \epsilon }}.`$ Let $`j_0`$ be the unique integer such that $$\frac{N_0}{a_1}+1j_0<\frac{N_0}{a_1}+2.$$ Then $$u_{id}u_{(ij_0)d}>(j_01)a_1N_0$$ for all $`ij_0`$.
For every integer $`nj_0d`$ there exists a unique integer $`\mathrm{}j_0`$ such that $$u_\mathrm{}dn<u_{(\mathrm{}+1)d}.$$ Then $$nu_{(\mathrm{}j_0)d}<u_{(\mathrm{}+1)d}u_{(\mathrm{}j_0)d}<(j_0+2)d+\frac{j_0+1}{\alpha \epsilon }$$ and $$nu_{(\mathrm{}j_0)d}u_\mathrm{}du_{(\mathrm{}j_0)d}>N_0.$$ Since $$p_A(nu_{(\mathrm{}j_0)d})1,$$ Lemma 1 implies that $$p_A(n)p_A(u_{(\mathrm{}j_0)d})>\left(\frac{\alpha \epsilon }{(\mathrm{}j_0)d}\right)p^{}((\mathrm{}j_0)d).$$ Since $$n<u_{(\mathrm{}+1)d}\frac{(\mathrm{}+1)d}{\alpha \epsilon },$$ it follows that $$(\mathrm{}j_0)d>(\alpha \epsilon )n(j_0+1)d.$$ Since $`p^{}(n)`$ is the partition function of a cofinite subset of the positive integers, Lemma 5 implies that for $`n`$ sufficiently large, $`\mathrm{log}p_A(n)`$ $`>`$ $`\mathrm{log}p^{}((\mathrm{}j_0)d)+\mathrm{log}(\alpha \epsilon )\mathrm{log}(\mathrm{}j_0)d`$ $`>`$ $`(1\epsilon )c_0\sqrt{(\mathrm{}j_0)d}+\mathrm{log}(\alpha \epsilon )\mathrm{log}(\mathrm{}j_0)d`$ $`>`$ $`(1\epsilon )c_0\sqrt{(\alpha \epsilon )n(j_0+1)d}+\mathrm{log}(\alpha \epsilon )\mathrm{log}(\mathrm{}j_0)d.`$ Dividing by $`c_0\sqrt{n}`$, we obtain $$\underset{n\mathrm{}}{lim\; inf}\frac{\mathrm{log}p_A(n)}{c_0\sqrt{n}}(1\epsilon )\sqrt{\alpha \epsilon }.$$ This inequality holds for $`0<\epsilon <\alpha `$, and so $$\underset{n\mathrm{}}{lim\; inf}\frac{\mathrm{log}p_A(n)}{c_0\sqrt{n}}\sqrt{\alpha }.$$ This completes the proof. ###### Theorem 4 Let $`A`$ be a set of positive integers with $`\mathrm{gcd}(A)=1`$. If $`d(A)=\alpha >0`$, then $`\mathrm{log}p_A(n)c_0\sqrt{\alpha n}`$. Proof. This follows from Theorem 3 with $`\alpha =\beta `$. ###### Theorem 5 Let $`a_1,\mathrm{},a_{\mathrm{}},m`$ be integers such that $$1a_1<\mathrm{}<a_{\mathrm{}}m$$ and $$(a_1,\mathrm{},a_{\mathrm{}},m)=1.$$ Let $`A`$ be the set of all positive integers $`a`$ such that $`aa_i(modm)`$ for some $`i=1,\mathrm{},\mathrm{}`$. Then $$\mathrm{log}p_A(n)c_0\sqrt{\frac{\mathrm{}n}{m}}.$$ Proof. 
The set $`A`$ satisfies $`\mathrm{gcd}(A)=1`$ and $`d(A)=\mathrm{}/m`$, and so the result follows from Theorem 4 with $`\alpha =\mathrm{}/m`$. Using Erdős’s elementary method, Nathanson has also given a direct proof of Theorem 5. ###### Theorem 6 Let $`A`$ be a set of positive integers with $`\mathrm{gcd}(A)=1`$. If $`d(A)=0`$, then $`\mathrm{log}p_A(n)=o(\sqrt{n}).`$ Proof. If $`A`$ is infinite, this follows from Theorem 3 with $`\beta =0`$. If $`A`$ is finite, this follows from Lemma 3. The next result is an inverse theorem; it shows how the growth of the partition function $`p_A(n)`$ determines the asymptotic density of the sequence $`A`$. ###### Theorem 7 Let $`A`$ be an infinite set of positive integers with $`\mathrm{gcd}(A)=1`$. If $`\alpha >0`$ and $$\mathrm{log}p_A(n)c_0\sqrt{\alpha n}=2\sqrt{\frac{\pi ^2\alpha n}{6}},$$ (13) then $`A`$ has asymptotic density $`\alpha `$. Proof. The generating function $$f(x)=\underset{n=0}{\overset{\mathrm{}}{}}p_A(n)x^n=\underset{aA}{}(1x^a)^1$$ converges for $`|x|<1`$, and $`\mathrm{log}f(x)`$ $`=`$ $`{\displaystyle \underset{aA}{}}\mathrm{log}(1x^a)`$ $`=`$ $`{\displaystyle \underset{aA}{}}{\displaystyle \underset{k=1}{\overset{\mathrm{}}{}}}{\displaystyle \frac{x^{ak}}{k}}`$ $`=`$ $`{\displaystyle \underset{\mathrm{}=1}{\overset{\mathrm{}}{}}}b_{\mathrm{}}x^{\mathrm{}},`$ where $$b_{\mathrm{}}=\underset{\genfrac{}{}{0pt}{}{aA}{\mathrm{}=ak}}{}\frac{1}{k}0.$$ Let $$S_B(x)=\underset{\mathrm{}x}{}b_{\mathrm{}}.$$ Then $`S_B(x)0`$ for all $`x`$, and $`S_B(x)=0`$ if $`x<1`$. 
We have $`S_B(n)`$ $`=`$ $`{\displaystyle \underset{\mathrm{}=1}{\overset{n}{}}}b_{\mathrm{}}={\displaystyle \underset{\mathrm{}=1}{\overset{n}{}}}{\displaystyle \underset{\genfrac{}{}{0pt}{}{aA}{\mathrm{}=ak}}{}}{\displaystyle \frac{1}{k}}`$ $`=`$ $`{\displaystyle \underset{k=1}{\overset{n}{}}}{\displaystyle \frac{1}{k}}{\displaystyle \underset{\genfrac{}{}{0pt}{}{aA}{an/k}}{}}1={\displaystyle \underset{k=1}{\overset{n}{}}}{\displaystyle \frac{1}{k}}A\left({\displaystyle \frac{n}{k}}\right).`$ By Möbius inversion, we have $$A(n)=\underset{k=1}{\overset{n}{}}\frac{\mu (k)}{k}S_B\left(\frac{n}{k}\right).$$ By Theorem 1, the asymptotic formula (13) implies that $$(1x)\mathrm{log}f(x)\frac{\pi ^2\alpha }{6}\text{ as }x1^{}\text{.}$$ Theorem 2 implies that $$S_B(n)\frac{\pi ^2\alpha n}{6}.$$ We define the function $`r(x)`$ by $$\frac{S_B(x)}{x}=\frac{\pi ^2\alpha }{6}+r(x).$$ Then $`r(x)=o(1)`$. For every $`\epsilon >0`$ there exists an integer $`n_0=n_0(\epsilon )>e^2`$ such that $$|r(x)|<\epsilon $$ for all $`xn_0.`$ If $`k>n/n_0`$, then $`n/k<n_0`$ and $`0S_B(n/k)S_B(n_0)`$. Therefore, $`A(n)`$ $`=`$ $`{\displaystyle \underset{k=1}{\overset{n}{}}}{\displaystyle \frac{\mu (k)}{k}}S_B\left({\displaystyle \frac{n}{k}}\right)`$ $`=`$ $`{\displaystyle \underset{1kn/n_0}{}}{\displaystyle \frac{\mu (k)}{k}}S_B\left({\displaystyle \frac{n}{k}}\right)+{\displaystyle \underset{n/n_0<kn}{}}{\displaystyle \frac{\mu (k)}{k}}S_B\left({\displaystyle \frac{n}{k}}\right)`$ $`=`$ $`{\displaystyle \frac{\pi ^2\alpha n}{6}}{\displaystyle \underset{1kn/n_0}{}}{\displaystyle \frac{\mu (k)}{k^2}}+n{\displaystyle \underset{1kn/n_0}{}}{\displaystyle \frac{\mu (k)}{k^2}}r\left({\displaystyle \frac{n}{k}}\right)`$ $`+{\displaystyle \underset{n/n_0<kn}{}}{\displaystyle \frac{\mu (k)}{k}}S_B\left({\displaystyle \frac{n}{k}}\right).`$ We evaluate these three terms separately.
Since $$\underset{1kn/n_0}{}\frac{\mu (k)}{k^2}=\frac{6}{\pi ^2}\underset{k>n/n_0}{}\frac{\mu (k)}{k^2}=\frac{6}{\pi ^2}+O\left(\frac{n_0}{n}\right),$$ it follows that $$\frac{\pi ^2\alpha n}{6}\underset{1kn/n_0}{}\frac{\mu (k)}{k^2}=\alpha n+O\left(1\right).$$ The second term satisfies $$\left|n\underset{1kn/n_0}{}\frac{\mu (k)}{k^2}r\left(\frac{n}{k}\right)\right|\epsilon n\underset{1kn/n_0}{}\frac{1}{k^2}=O(\epsilon n).$$ The last term is bounded independent of $`n`$, since $$\left|\underset{n/n_0<kn}{}\frac{\mu (k)}{k}S_B\left(\frac{n}{k}\right)\right|S_B(n_0)\underset{n/n_0<kn}{}\frac{1}{k}2S_B(n_0)\mathrm{log}n_0=O(1).$$ Therefore, $$A(n)=\alpha n+O(\epsilon n)+O(1),$$ and so $`d(A)=\alpha `$. This completes the proof. ###### Theorem 8 Let $`A`$ be a set of positive integers with $`\mathrm{gcd}(A)=1`$, and let $`\alpha >0`$. Then $`d(A)=\alpha `$ if and only if $$\mathrm{log}p_A(n)c_0\sqrt{\alpha n}.$$ Proof. This follows immediately from Theorem 3 and Theorem 7. Remark. Let $`A`$ be an infinite set of positive integers with $`\mathrm{gcd}(A)=1`$. Let $`\alpha `$ and $`\beta `$ be nonnegative real numbers such that $$\underset{n\mathrm{}}{lim\; inf}\frac{\mathrm{log}p_A(n)}{c_0\sqrt{n}}\sqrt{\alpha }$$ and $$\underset{n\mathrm{}}{lim\; sup}\frac{\mathrm{log}p_A(n)}{c_0\sqrt{n}}\sqrt{\beta }.$$ Does it follow that $`d_L(A)\alpha `$ and $`d_U(A)\beta `$? This would imply that $`d_L(A)=\alpha `$ if and only if $`lim\; inf_n\mathrm{}\mathrm{log}p_A(n)/c_0\sqrt{n}=\sqrt{\alpha }`$, and $`d_U(A)=\beta `$ if and only if $`lim\; sup_n\mathrm{}\mathrm{log}p_A(n)/c_0\sqrt{n}=\sqrt{\beta }`$.
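The counting identities used in the proof of Theorem 7 are easy to test numerically. The following sketch (our own illustration, not from the paper) takes the odd numbers as an example set with $`d(A)=1/2`$, computes $`S_B(x)`$ directly from $`S_B(x)=_{kx}A(x/k)/k`$, checks that $`S_B(n)/n`$ approaches $`\pi ^2\alpha /6`$ with $`\alpha =1/2`$, and recovers $`A(n)`$ by the Möbius inversion step.

```python
from math import pi

def mobius(k):
    # naive Möbius function via trial division
    result, d = 1, 2
    while d * d <= k:
        if k % d == 0:
            k //= d
            if k % d == 0:
                return 0          # k has a square factor
            result = -result
        d += 1
    return -result if k > 1 else result

A = [a for a in range(1, 500) if a % 2 == 1]   # odd numbers: gcd(A) = 1, d(A) = 1/2

def A_count(x):
    # A(x) = number of elements of A not exceeding x
    return sum(1 for a in A if a <= x)

def S_B(x):
    # S_B(x) = sum_{k <= x} A(x/k) / k, as in the proof of Theorem 7
    return sum(A_count(x / k) / k for k in range(1, int(x) + 1))

# Theorem 2 predicts S_B(n) ~ (pi^2 alpha / 6) n, i.e. pi^2 n / 12 here
assert abs(S_B(400) / 400 - pi ** 2 / 12) < 0.05

# Möbius inversion recovers the counting function A(n) exactly
n = 100
recovered = sum(mobius(k) / k * S_B(n / k) for k in range(1, n + 1))
assert abs(recovered - A_count(n)) < 1e-6     # A(100) = 50
```

The inversion holds exactly at every finite $`n`$, while the density statement only emerges asymptotically, which mirrors the structure of the proof above.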
# Step wise destruction of the pair correlations in micro clusters by a magnetic field ## Abstract The response of $`nm`$-size spherical superconducting clusters to a magnetic field is studied for the canonical ensemble of electrons in a single degenerate shell. For temperatures close to zero, the discreteness of the electronic states causes a step like destruction of the pair correlations with increasing field strength, which shows up as peaks in the susceptibility and heat capacity. At higher temperatures the transition becomes smoothed out and extends to field strengths where the pair correlations are destroyed at zero temperature. The electron pair correlations in small systems where the single-particle spectrum is discrete and the mean level spacing is comparable with the pairing gap have recently been studied by means of electron transport through $`nm`$-scale Al clusters . The pair correlations were found to sustain an external magnetic field of several Tesla, in contrast to much weaker critical field of $`H_c=99`$ Gauss of bulk Al. A step wise destruction of the pair correlations was suggested . It is caused by the subsequent excitation of quasi particle levels, which gain energy due to the interaction of their spin with the external field. This mechanism is very different from the transition to the normal state caused by a magnetic field applied to a macroscopic superconductor. Hence it is expected that physical quantities, as the susceptibility $`\chi `$ and the specific heat capacity $`C`$, which indicate the transition, behave very differently in the micro cluster. The present letter addresses this question. For the micro clusters the energy to remove an electron is much larger than the temperature $`T`$. The fixed number of the electrons on the cluster was demonstrated by the tunneling experiments . Hence, one must study the transition from the paired to the unpaired state in the frame of the canonical ensemble. 
The small number of particles taking part in superconductivity causes considerable fluctuations of the order parameter, which modify the transition . Consequences of particle number conservation for the pair correlations in micro clusters have also been discussed recently in , where a more complete list of references to earlier work can be found. In order to elucidate the qualitative features we consider the highly idealized model of pair correlations between electrons in a degenerate level, which permits calculation of the canonical partition function. A perfect Al-sphere of radius $`R=(15)nm`$ confines $`N210^3310^5`$ free electrons. Its electron levels have good angular momentum $`l`$. Taking the electron spin into account, each of these levels has a degeneracy of $`2M`$, where $`M=2l+1`$. For a spherical oscillator potential, the average angular momentum at the Fermi surface is $`\overline{l}1.4N^{1/3}`$, where $`N`$ is the number of free electrons. The distance between these levels, $`\mathrm{\Delta }e10`$ $`meV`$, is much larger than the BCS gap parameter $`\mathrm{\Delta }`$, which is less than 1$`meV`$. Therefore, it is sufficient to consider the pair correlations within the last incompletely filled level. This single-shell model also applies to a hemisphere, because its spectrum consists of the spherical levels with odd $`l`$, and to clusters with a superconducting layer covering an insulating sphere (cf. ) or hemisphere. The single-shell model Hamiltonian $`H=H_{pair}\omega (L_z+2S_z),`$ (1) $`H_{pair}=GA^+A,A^+={\displaystyle \underset{k>0}{}}a_k^+a_{\overline{k}}^+,`$ (2) consists of the pairing interaction $`H_{pair}`$, which acts between the electrons in the last shell with the effective strength $`G`$, and the interaction with the magnetic field. We introduced the Larmor frequency $`\mathrm{}\omega =\mu _BB`$, the Bohr magneton $`\mu _B`$, the $`z`$-components of the total orbital angular momentum and spin $`L_z`$ and $`S_z`$.
The label $`k=\{\lambda ,\sigma \}`$ denotes the $`z`$-projections of orbital momentum and spin of the electrons, respectively, and $`A^+`$ creates an electron pair on states $`(k,\overline{k})`$, related by the time reversal. The magnetic susceptibility and heat capacity of the electrons are $$\chi =\frac{\mu _B^2}{\mathrm{}^2V}\frac{^2F(T,\omega )}{\omega ^2},C=\frac{^2F(T,\omega )}{T^2},$$ (3) The free energy $`F`$ derived from the Hamiltonian (1) gives only the paramagnetic part $`\chi _P`$ of the susceptibility, because we left out the term quadratic in $`B`$. For the fields we are interested in (magnetic length is small as compared to the cluster size), the latter can be treated in first order perturbation theory, generating the diamagnetic part of the susceptibility $$\chi _D=\frac{m\mu _B^2<x^2+y^2>}{\mathrm{}^2V}410^6N^{2/3},$$ (4) where $`m`$ is the electron mass and $`V`$ the volume of the cluster. It is nearly temperature and field independent . The numerical estimate for Al assumes constant electron density. For $`nm`$ scale clusters $`\chi _D10^3`$. It is much smaller than $`\chi _D1`$ for macroscopic superconductors, which show the Meissner effect. Since the magnetic field penetrates the cluster, it can sustain a very high field of $`BTesla`$. On the other hand, $`\chi _D`$ is three orders of magnitude larger than the Landau diamagnetic susceptibility observed in normal bulk metals. The exact solutions to the pairing problem of particles in a degenerate shell were found in nuclear physics in terms of representations of the group $`SU_2`$. The eigenvalues $`E_\nu `$ of $`H_{pair}`$ are $$E_\nu =\frac{G}{4}(N_{sh}\nu )(2M+2N_{sh}\nu ),$$ (5) where $`N_{sh}`$ is the number of particles in the shell. The seniority, which is the number of unpaired particles, is constrained by $`0\nu N_{sh}`$ and $`\nu M`$.
The degenerate states $`\{\nu ,i\}`$ of given seniority $`\nu `$ differ by their magnetic moments $`\mu _Bm_{\nu ,i}`$, where $`i=\{L\mathrm{\Lambda }S\mathrm{\Sigma }\}`$ takes all values of the total orbital $`(L)`$ and total spin $`(S)`$ momenta and their total $`z`$-projections $`(\mathrm{\Lambda },\mathrm{\Sigma })`$ that are compatible with the Pauli principle for $`\nu `$ electrons. In presence of a magnetic field the states have the energy $$U_{\nu ,i}(\omega )=E_\nu \omega m_{\nu ,i},m_{\nu ,i}=(\mathrm{\Lambda }+2\mathrm{\Sigma })_{\nu ,i},$$ (6) and the canonical partition function becomes $`Z={\displaystyle \underset{\nu ,i}{}}\mathrm{exp}(\beta U_{\nu ,i})`$ (7) $`={\displaystyle \underset{\nu }{}}\mathrm{exp}(\beta E_\nu )[\mathrm{\Phi }_\nu \mathrm{\Phi }_{\nu 2}(1\delta _{\nu .0})],`$ (8) $`\beta =1/T,\mathrm{\Phi }_\nu ={\displaystyle \underset{i}{}}\mathrm{exp}(\beta \omega m_{\nu ,i}).`$ (9) To evaluate the sums we take into account symmetry of the wave functions of the $`\nu `$ unpaired electrons and reduce the sums (9) to products of sums over orbital projections of completely antisymmetric states (one column Young diagram, cf. ) with $`\nu /2+\mathrm{\Sigma }`$ and $`\nu /2\mathrm{\Sigma }`$ electrons. $`\mathrm{\Phi }_\nu ={\displaystyle \underset{\mathrm{\Sigma }=\mathrm{\Sigma }_{min}}{\overset{\nu /2}{}}}2(1+\delta _{\mathrm{\Sigma }.0})^1\stackrel{~}{\mathrm{\Phi }}_{\nu /2+\mathrm{\Sigma }}\stackrel{~}{\mathrm{\Phi }}_{\nu /2\sigma }\mathrm{cosh}(2\beta \omega \mathrm{\Sigma }),`$ (10) $`\mathrm{\Sigma }_{min}=[1()^\nu ]/4`$ (11) $`\stackrel{~}{\mathrm{\Phi }}_k=\delta _{k.0}+(1\delta _{k.0}){\displaystyle \underset{\mu =1}{\overset{k}{}}}{\displaystyle \frac{\mathrm{sinh}\beta \omega \frac{2l+2\mu }{2}}{\mathrm{sinh}\beta \omega \frac{\mu }{2}}}.`$ (12) The derivation of (7 \- 12) will be published separately . 
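The product formula (12) can be checked against a brute-force sum over antisymmetric configurations, since $`\stackrel{~}{\mathrm{\Phi }}_k`$ is the generating sum of $`\mathrm{exp}(\beta \omega \mathrm{\Lambda })`$ over all choices of $`k`$ distinct orbital projections in $`\{l,\mathrm{},l\}`$. The sketch below is our own numerical check (with $`x`$ standing for $`\beta \omega `$ and an arbitrary small shell $`l=3`$), not a computation from the paper.

```python
from itertools import combinations
from math import sinh, exp

def phi_tilde(k, l, x):
    # Eq. (12): product form of the antisymmetric k-electron sum
    if k == 0:
        return 1.0
    prod = 1.0
    for mu in range(1, k + 1):
        prod *= sinh(x * (2 * l + 2 - mu) / 2) / sinh(x * mu / 2)
    return prod

def phi_brute(k, l, x):
    # direct enumeration over k distinct projections lambda in {-l, ..., l};
    # each subset contributes exp(x * Lambda) with Lambda the sum of projections
    return sum(exp(x * sum(c)) for c in combinations(range(-l, l + 1), k))

l, x = 3, 0.3
for k in range(0, 2 * l + 2):
    assert abs(phi_tilde(k, l, x) - phi_brute(k, l, x)) < 1e-8 * phi_brute(k, l, x)
```

In the limit $`\beta \omega 0`$ the product reduces to the binomial coefficient counting the antisymmetric $`k`$-electron configurations in $`2l+1`$ orbital substates.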
The pair correlation energy is $`E_c(T,\omega )={\displaystyle \frac{1}{Z}}{\displaystyle \underset{\nu }{}}E_\nu exp(\beta E_\nu )[\mathrm{\Phi }_\nu \mathrm{\Phi }_{\nu 2}(1\delta _{\nu .0})]`$ (13) $`\mathrm{\Delta }_c^2(T,\omega )/G.`$ (14) Here we have defined the parameter $`\mathrm{\Delta }_c`$, which measures the amount of pair correlations. Applying the mean field approximation and the grand canonical ensemble to our model, the thus introduced $`\mathrm{\Delta }_c`$ becomes the familiar BCS gap parameter $`\mathrm{\Delta }`$. Accordingly we also refer to $`\mathrm{\Delta }_c`$ as the ”canonical gap”. However, $`\mathrm{\Delta }_c`$ must be clearly distinguished from $`\mathrm{\Delta }`$ because it incorporates the correlations caused by the fluctuations of the order parameter $`\mathrm{\Delta }`$. For the case of a half filled shell and even $`N_{sh}`$, the BCS gap is $`\mathrm{\Delta }(0)\mathrm{\Delta }(T=0,\omega =0)=GM/2`$. Ref. found $`\mathrm{\Delta }(0)=0.30.4meV`$ for Al-clusters with $`R=510nm`$, which sets the energy scale. Let us first consider the destruction of the pair correlations at $`T=0`$. The lowest state of each seniority multiplet has the maximal magnetic moment $$m_\nu =\frac{\mu _B}{4}\{\nu (2M\nu )\frac{1}{2}[1()^\nu ]+4(1\delta _{\nu .0})\}.$$ (15) According to (5), (6) and (15) the state of lowest energy changes from $`\nu `$ to $`\nu +2`$ at $`\omega _{\nu +2}={\displaystyle \frac{2\mathrm{\Delta }(0)}{M}}\left[\delta _{\nu .0}+{\displaystyle \frac{M\nu }{M\nu 1}}(1\delta _{\nu .0})\right].`$ (16) At each such step $`m_\nu `$ increases according to (15). The pair correlations are reduced because two electron states are blocked. At the last step leading to the maximum seniority $`\nu _{max}`$ all electron states are blocked. Hence the field $`B_c`$ corresponding to $`\omega _c=\omega _{\nu _{max}}`$ can be regarded as the critical one, which destroys the pairing completely at $`T=0`$. 
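The step sequence implied by Eqs. (5), (15) and (16) can be reproduced in a few lines. The sketch below is our own illustration, not the authors' numerics: it adopts the sign convention that the pairing energy (5) is negative, sets $`G=1`$, and chooses an arbitrary even filling $`N_{sh}=10`$ in an $`M=11`$ shell. It locates the $`T=0`$ ground-state seniority as a function of $`\omega `$ and checks that it jumps by 2 exactly at the frequencies (16).

```python
def E(nu, N_sh, M, G=1.0):
    # seniority-nu pairing energy, Eq. (5) (negative for paired states)
    return -G / 4.0 * (N_sh - nu) * (2 * M + 2 - N_sh - nu)

def m_max(nu, M):
    # maximal magnetic moment of the seniority-nu multiplet, Eq. (15), units of mu_B
    if nu == 0:
        return 0.0
    return (nu * (2 * M - nu) - 0.5 * (1 - (-1) ** nu) + 4) / 4.0

M, N_sh, G = 11, 10, 1.0
Delta0 = G * M / 2.0                        # Delta(0) = G M / 2
seniorities = range(0, N_sh + 1, 2)         # even system: nu = 0, 2, ..., N_sh

def ground_nu(omega):
    # seniority of the lowest state U = E_nu - omega * m_nu at field omega
    return min(seniorities, key=lambda nu: E(nu, N_sh, M, G) - omega * m_max(nu, M))

def omega_cross(nu):
    # Eq. (16): frequency where the ground state changes from nu to nu + 2
    if nu == 0:
        return 2 * Delta0 / M
    return (2 * Delta0 / M) * (M - nu) / (M - nu - 1)

for nu in range(0, N_sh - 1, 2):
    w = omega_cross(nu)
    assert ground_nu(w - 1e-6) == nu and ground_nu(w + 1e-6) == nu + 2
```

Because the crossing frequencies (16) increase with $`\nu `$, the ground state passes through every even seniority in turn, which is the step wise destruction of pairing described above.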
For a half filled shell $`\omega _c=3\mathrm{\Delta }(0)/M`$ for even electron number and $`4\mathrm{\Delta }(0)/M`$ for odd ($`\nu _{max}=M1`$ and $`M`$, respectively). Fig. 1 illustrates the step wise destruction of pairing by blocking for the half filled shell $`M=11`$ ($`l=5`$). This mechanism was discussed in , where the crossing of states with different seniority could be observed. It is a well established effect in nuclear physics, where the states of maximum angular momentum are observed as ”High-K isomers” . Fig. 1 also shows results for the mean field (BCS) approximation (cf. ) to the single shell model. The pair correlations are more rapidly destroyed. The quantum fluctuations of the order parameter stabilize the pairing. We introduce $`T_c`$ as the temperature at which the mean field pair gap $`\mathrm{\Delta }(T_c,\omega =0)`$ takes the value zero when the magnetic field is absent. For the half filled shell $`T_c=\mathrm{\Delta }(0)/2`$. Fig. 1 shows the mean field gap $`\mathrm{\Delta }(T,\omega )`$. It behaves as expected from macroscopic superconductors: the frequency where $`\mathrm{\Delta }=0`$ shifts towards smaller values with increasing $`T`$. Fig. 2 shows that the temperature where $`\mathrm{\Delta }=0`$ shifts from $`T_c`$ to lower values for $`\omega >0`$. However, fig. 1 also demonstrates that the canonical gap $`\mathrm{\Delta }_c`$ behaves differently. For $`T=0.8T_c`$ there is a region above $`\omega _c`$ where there are still pair correlations. For $`T=2T_c`$ this region extends to $`2\omega _c`$. The pair correlations fall off very gradually with $`\omega `$. Fig. 2 shows how these ”temperature induced” pair correlations manifest themselves with increasing $`T`$. For $`\omega =0`$ there is a pronounced drop of $`\mathrm{\Delta }_c`$ around $`T_c`$, which signals the breakdown of the static pair field. Above this temperature there is a long tail of dynamic pairing.
For $`\omega \ge \omega _c`$ the dynamic pair correlations only build up with increasing $`T`$. The temperature induced pairing can be understood in the following way: at $`T=0`$, all electrons are unpaired when the state of maximum seniority becomes the ground state for $`\omega >\omega _c`$. At $`T>0`$ excited states with lower seniority enter the canonical ensemble, reintroducing the pair correlations. Here we have adopted the terminology of nuclear physics, calling "static" the mean field (BCS) part of the pair correlations and "dynamic" the quantal and statistical fluctuations of the mean field (or, equivalently, of the order parameter). The "pair vibrations", which are oscillations of the pair field around zero , are well established in nuclei. Fluctuation induced superconductivity was discussed before . The fluctuations play a particularly important role in systems whose size is smaller than the coherence length. Fig. 3 shows a very small cluster ($`M=7`$) with very pronounced steps. Already at $`T=0.1T_c`$ the steps are noticeably washed out. In the single-shell model the step length is $`\omega _\nu -\omega _{\nu -2}\mathrm{\Delta }(0)/M`$. Accordingly, no individual steps are recognizable at $`T=0.1T_c`$ for the large cluster ($`M=23`$) shown. Yet there is some irregularity around $`\omega =0.7\omega _c`$, which is a residue of the discreteness of the electronic states. It is thermally averaged out for $`T=0.2T_c`$. Hence the step-wise change of $`\mathrm{\Delta }_c`$ is observable only for $`M<50`$, i.e. $`N<2\times 10^4`$, and $`T<0.1T_c`$. The discreteness of the electronic levels has dramatic consequences for the susceptibility at low temperatures. As shown in the upper panel of fig. 4, $`\chi _P`$ has pronounced peaks at the frequencies where the states with higher seniority and magnetic moment take over. The paramagnetic contribution is very sensitive to the temperature and to the fluctuations of the order parameter.
Using the BCS mean field approximation we find much narrower peaks, which are one to two orders of magnitude higher. For the larger cluster in the lower panel the individual steps are no longer resolved, resulting in a peak of $`\chi _P`$ near $`\omega =0.7\omega _c`$. Since for the considered temperatures it is unlikely to excite states with finite magnetic moment, $`\chi _P`$ is small at low $`\omega `$ . It grows with $`\omega `$ because these states come down in energy. It falls off at large $`\omega `$ when approaching the maximum magnetic moment of the electrons in the shell. The curve for $`T=0.1T_c`$ still shows a double peak structure, which is a residue of the discreteness of the electron levels. The heat capacity is displayed in fig. 5. It has a double peak structure for the $`M=7`$ cluster at $`T=0.02T_c`$. In this case $`C`$ is very small because the spacing between the states of different $`\nu `$ is much larger than $`T`$. Near a crossing the spacing becomes small and $`C`$ goes up. The dip appears because at the crossing frequency the two states are degenerate. Then they do not contribute to $`C`$, because their relative probability does not depend on $`T`$. For $`T=0.1T_c`$ the probability to excite states with different seniority has increased, and $`C`$ takes substantial values between the crossings. The dips due to the degeneracy at the crossings remain. For the $`M=23`$ cluster $`C`$ shows only two wiggles, which are the residue of the discreteness of the electronic levels. The deviation of real clusters from sphericity will attenuate the orbital part of $`\chi _P`$ and round off the steps of $`\mathrm{\Delta }_c`$ already at $`T=0`$. The back-bending phenomenon observed in deformed rotating nuclei is an example. How strongly the orbital angular momentum is suppressed needs to be addressed by a more sophisticated model than the present one.
In any case, there will be steps caused by the reorientation of the electron spin if the spin-orbit coupling is small, as in Al . Most of the findings of the present paper are expected to hold qualitatively for these spin flips. In summary, at temperatures $`T<0.1T_c`$ an increasing external magnetic field causes the magnetic moment of small spherical superconducting clusters ($`R<5`$ nm) to grow in a step-like manner. Each step reduces the pair correlations, until they are destroyed. The steps manifest themselves as peaks in the magnetic susceptibility and the heat capacity. The steps are washed out at $`T>0.2T_c`$. For $`T\mathrm{\sim }T_c`$, reduced but substantial pair correlations persist to a higher field strength than for $`T=0`$. This phenomenon of temperature-induced pairing in a strong magnetic field is only found for the canonical ensemble. Supported by the grant INTAS-93-151-EXT.
# Dependence of Spiral Galaxy Distribution on Viewing Angle in RC3 This work is supported by both the National Natural Science Foundation of China under Grant 19603003, and the K. C. Wong Education Foundation, Hong Kong. ## Abstract The normalized inclination distributions are presented for the spiral galaxies in RC3. The results show that, except for the bin of $`81^{\circ}`$-$`90^{\circ}`$, in which the apparent minor isophotal diameters used to obtain the inclinations are affected by the central bulges, the distributions for Sa, Sab, Scd and Sd are consistent with a Monte-Carlo simulation of random inclinations within 3-$`\sigma `$, and those for Sb and Sbc nearly so, whereas the distribution for Sc is different. One reason for the difference between the real distribution and the Monte-Carlo simulation for Sc may be that some highly inclined spirals, whose arms are inherently loosely wound on the galactic plane and should be classified as Sc, have been incorrectly assigned to earlier types, because the tightness of the spiral arms, one of the criteria of the Hubble classification in RC3, appears different on the tangent plane of the celestial sphere than on the galactic plane. Our result also implies that there might exist biases in the luminosity functions of individual Hubble types if spiral galaxies are classified only visually.
<sup>1</sup> Shanghai Astronomical Observatory, Chinese Academy of Sciences, Shanghai 200030 <sup>2</sup> National Astronomical Observatories, Chinese Academy of Sciences, Beijing 100012 <sup>3</sup> Chinese Academy of Sciences-Peking University Joint Beijing Astrophysical Center, Beijing 100087 <sup>4</sup> Joint Lab of Optical Astronomy, Chinese Academy of Sciences, Shanghai 200030 <sup>5</sup> Young Astronomical Center, Shanghai Astronomical Observatory, Chinese Academy of Sciences, Beijing 100871 E-mail: (majun, gxsong, cgshu)@center.shao.ac.cn PACS: 98.52.Nr, 98.62.Ve A spiral galaxy inherently consists of a halo, a bulge and a thin disk with spiral structure, which emerges from the central region or from the end of a bright bar. Optical images of spirals, which are projections onto the tangent plane of the celestial sphere, are dominated by the light coming from stars, as modified by the extinction and reddening of dust. If galaxies with the same morphologies are oriented at different angles of inclination (i.e., the angle between the galactic plane and the tangent plane of the celestial sphere), their images are different. This means that the inclination of a spiral galaxy affects its image and, especially, the extent to which the arms appear unwound. When a spiral galaxy has a large inclination, its arms can appear tightly coiled on the image even if they are loosely wound on the galactic plane. Hubble<sup>1,2</sup> introduced an early scheme to classify galaxies. Its concepts are still in use; it consists of a sequence starting from elliptical, through lenticular, to spiral galaxies. This scheme has been extended by some astronomers<sup>3-14</sup> over the years, who try to employ multiple classification criteria. Galaxies originally classified Sc on the Hubble system cover a large interval along the sequence, ranging from regular, well-developed arms in early Sc to nearly chaotic structures in very late Sc.
de Vaucouleurs<sup>4,5</sup> introduced Scd, Sd, Sdm, Sm and Im after Sc and SBc. In his classification, the Sc class has well-developed arms; from Scd to Im the arms become more and more chaotic, and the tightness of the spiral arms is no longer considered. Galaxy morphological classification is still mainly done visually by dedicated astronomers on images, based on Hubble's original scheme and its modifications. It is possible for individual observers to give slightly different weights to the various criteria, although the criteria themselves are generally accepted. Lahav et al.<sup>15</sup> and Naim et al.<sup>16</sup> investigated the consistency of visual morphological classifications of galaxies from six independent observers, and found that individual observers agree with one another with combined rms dispersions of between 1.3 and 2.3 type units, typically about 1.8 units of the revised Hubble numerical type, although there are cases where the range of types assigned was more than 4 units in width. In the present letter, we investigate how the distribution of spiral galaxies depends on viewing angle. The sample adopted in this letter is from the Third Reference Catalogue of Bright Galaxies by de Vaucouleurs et al.<sup>17</sup> (hereafter RC3); the Hubble types considered run from Sa to Sd. RC3 is complete for galaxies having apparent diameters at the $`D_{25}`$ isophotal level larger than 1 arcminute, total B-band apparent magnitudes $`B_T`$ brighter than about 15.5, and redshifts not in excess of 15000 kms<sup>-1</sup>. In order to make the analyzed sample complete in absolute magnitude, we select from RC3 the galaxies that are larger than 1 arcminute at the $`D_{25}`$ isophotal level, brighter than -20.1 in absolute B-band magnitude, and limited within a redshift of 10000 kms<sup>-1</sup>. It can be confirmed that the sample selected within these ranges is complete in both luminosity and space.
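The sample cuts just described can be sketched as a simple filter; the function and argument names below are illustrative assumptions, not actual RC3 field names, and $`H_0=75`$ kms<sup>-1</sup>Mpc<sup>-1</sup> as adopted in the text.

```python
# Hypothetical sketch of the sample cuts described in the text. The function
# and argument names are illustrative, NOT actual RC3 field names.
import math

H0 = 75.0  # km/s/Mpc, as adopted in the text

def absolute_B(BT, cz_kms):
    """Absolute B magnitude from apparent B_T and recession velocity cz."""
    d_pc = (cz_kms / H0) * 1e6          # distance in parsec (pure Hubble flow)
    return BT - 5.0 * math.log10(d_pc / 10.0)

def selected(D25_arcmin, BT_mag, cz_kms):
    """Completeness cuts: D25 > 1 arcmin, M_B < -20.1, cz < 10000 km/s."""
    return (D25_arcmin > 1.0
            and cz_kms < 10000.0
            and absolute_B(BT_mag, cz_kms) < -20.1)

# e.g. a galaxy with B_T = 13.0 at cz = 7500 km/s (100 Mpc) has M_B = -22.0
```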
The adopted values of $`d_{25}/D_{25}`$ range from 0.2 to 1.0 (including 0.2 and 1.0) for calculating the inclinations ($`D_{25}`$ and $`d_{25}`$ are the apparent major and minor isophotal diameters measured at, or reduced to, the surface brightness level $`\mu _B=25.0`$ B magnitudes per square arcsecond in B-band.). Our statistical sample contains 2519 spiral galaxies. A Hubble constant of 75 kms<sup>-1</sup>Mpc<sup>-1</sup> is assumed throughout. The inclination of a spiral galaxy is an important parameter, but it is difficult to determine. If we assume that the thickness of the spiral disk is negligible compared with its extension, then, when a spiral galaxy is moderately inclined to the plane of the sky and the thickness of the nucleus can be omitted, the inclination may be obtained from $$\gamma =\mathrm{arccos}(\frac{d}{D}),$$ (1) where $`D`$ and $`d`$ are the apparent major and minor isophotal diameters, respectively. When a spiral galaxy is seen nearly edge-on, however, the nuclear part can no longer be treated as having negligible thickness. Then Eq. (1) cannot be used to calculate the inclination, because the apparent minor isophotal diameter consists of two parts, one contributed by the disk and another by the bulge, which makes the derived inclination smaller than the true one. Considering that the disk is not infinitely thin, Tully<sup>18</sup> corrected Eq. (1) to $$\gamma =\mathrm{arccos}\sqrt{[(\frac{d}{D})^2-0.2^2]/(1-0.2^2)}+3^{\circ}.$$ (2) The constant of $`3^{\circ}`$ is added in accordance with an empirical recipe. But when $`d/D=0.2`$, Eq. (2) gives a wrong inclination, namely $`\gamma =93^{\circ}`$. The formula adopted for obtaining the inclination of a spiral galaxy in the present letter is therefore $$\gamma =\mathrm{arccos}\sqrt{[(\frac{d_{25}}{D_{25}})^2-0.2^2]/(1-0.2^2)}$$ (3) in order to avoid inclinations larger than $`90^{\circ}`$.
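Eq. (3) is straightforward to implement; the sketch below assumes, as in the text, an intrinsic axis ratio of 0.2 for an edge-on disk. It returns $`0^{\circ}`$ for a face-on galaxy ($`d_{25}/D_{25}=1`$) and $`90^{\circ}`$ at $`d_{25}/D_{25}=0.2`$, avoiding the $`93^{\circ}`$ artifact of Eq. (2).

```python
# Inclination from Eq. (3); q0 = 0.2 is the assumed intrinsic (edge-on)
# axis ratio, as in the text.
import math

def inclination_deg(d25, D25, q0=0.2):
    r = d25 / D25
    return math.degrees(math.acos(math.sqrt((r**2 - q0**2) / (1.0 - q0**2))))

print(inclination_deg(1.0, 1.0))  # 0.0  (face-on)
print(inclination_deg(0.2, 1.0))  # 90.0 (edge-on; Eq. (2) would give 93)
```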
Figure 1 plots the normalized inclination distribution for all spiral galaxies in our sample with a bin size of $`10^{\circ}`$. Open circles present the individual values. For comparison we generate a Monte-Carlo sample of random inclinations, presented by filled circles, with solid and dotted error bars denoting the 1-$`\sigma `$ and 3-$`\sigma `$ levels, respectively. From this figure we can see that, except for the bin of $`81^{\circ}`$-$`90^{\circ}`$, the distribution is consistent with the Monte-Carlo random distribution within 3-$`\sigma `$. This means that, in RC3, the effects of central bulges have not been eliminated from the apparent minor isophotal diameters of some edge-on spiral galaxies. Figs. 2-8 plot the normalized inclination distributions for Sa-Sd galaxies with a bin size of $`10^{\circ}`$, respectively. Open circles again denote the values in each bin. In Figs. 2-8, Monte-Carlo samples of random inclinations are also produced, and the error bars have the same meanings as in Fig. 1. They show that, except for the bin of $`81^{\circ}`$-$`90^{\circ}`$, the distributions for Sa, Sab, Scd and Sd are consistent with the Monte-Carlo simulation of random inclinations within 3-$`\sigma `$, and those for Sb and Sbc nearly so. The distribution of Scd agrees with the Monte-Carlo simulation within 1-$`\sigma `$ very well. But the distribution of Sc is quite different from the Monte-Carlo simulation. Based on the analysis above, we find that in RC3, except for Scd and Sd, whose central bulges can be neglected, the effects of central bulges may not have been eliminated for some inclined spiral galaxies when the apparent minor isophotal diameters were obtained. At the same time, the effects of central bulges alone cannot account for the difference between the real distribution and the Monte-Carlo simulation for Sc; other factors are needed.
A candidate may be that some highly inclined spirals, whose arms are inherently loosely wound on the galactic plane and should be classified as Sc, have been incorrectly assigned to earlier types, because the tightness of the spiral arms, one of the criteria of the Hubble classification in RC3, appears different on the tangent plane of the celestial sphere than on the galactic plane. Our result also implies that there might exist biases in the luminosity functions of individual Hubble types if spiral galaxies are classified only visually. Fig. 1. Normalized inclination distribution for the whole sample of spirals with a bin size of $`10^{\circ}`$. The open and filled circles are the individual values and a Monte-Carlo simulation of random inclinations, respectively. The solid and dotted lines denote the 1-$`\sigma `$ and 3-$`\sigma `$ levels, respectively. Fig. 2. Same as Fig. 1, but for Sa galaxies. Fig. 3. Same as Fig. 1, but for Sab galaxies. Fig. 4. Same as Fig. 1, but for Sb galaxies. Fig. 5. Same as Fig. 1, but for Sbc galaxies. Fig. 6. Same as Fig. 1, but for Sc galaxies. Fig. 7. Same as Fig. 1, but for Scd galaxies. Fig. 8. Same as Fig. 1, but for Sd galaxies.
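The Monte-Carlo comparison samples of random inclinations used in Figs. 1-8 can be sketched as follows; for randomly oriented thin disks the inclination is distributed as $`P(\gamma )\mathrm{sin}\gamma `$, i.e. $`\mathrm{cos}\gamma `$ is uniform. The bin size of $`10^{\circ}`$ follows the text; the sample size is an arbitrary choice.

```python
# Monte-Carlo comparison sample: random spatial orientation means cos(gamma)
# is uniform on [0, 1), so P(gamma) is proportional to sin(gamma).
import math, random

def random_inclination_fractions(n, bins=9, seed=0):
    rng = random.Random(seed)
    counts = [0] * bins
    for _ in range(n):
        gamma = math.degrees(math.acos(rng.random()))  # inclination in degrees
        counts[min(int(gamma // 10), bins - 1)] += 1
    return [c / n for c in counts]

fracs = random_inclination_fractions(100_000)
# Expected fraction in the bin [a, a+10): cos(a) - cos(a+10). The last
# (80-90 degree) bin should hold cos(80 deg) ~ 17.4% of a random sample,
# the first bin only 1 - cos(10 deg) ~ 1.5%.
```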
# NYU-TH/00/02/02 Metastable Gravitons and Infinite Volume Extra Dimensions G. Dvali, G. Gabadadze and M. Porrati Department of Physics, New York University, New York, NY 10003 Abstract We address the issue of whether extra dimensions could have an infinite volume and yet reproduce the effects of observable four-dimensional gravity on a brane. There is no normalizable zero-mode graviton in this case; nevertheless, the correct Newton's law can be obtained by exchanging bulk gravitons. This can be interpreted as the exchange of a single metastable 4D graviton. Such theories have remarkable phenomenological signatures, since the evolution of the Universe becomes higher-dimensional at very large scales. Furthermore, bulk supersymmetry might be preserved in the infinite volume limit while being completely broken on the brane. This gives rise to the possibility of controlling the value of the bulk cosmological constant. Unfortunately, these theories have difficulties in reproducing certain predictions of Einstein's theory related to relativistic sources. This is due to the van Dam-Veltman-Zakharov discontinuity in the propagator of a massive graviton. This suggests that all theories in which the contributions to effective 4D gravity come predominantly from bulk graviton exchange should encounter serious phenomenological difficulties. If Standard Model particles are localized on a brane, the size of the extra dimensions can be as large as a millimeter without conflicting with any experimental observations . This is also true for warped spaces in which the extra dimensions are non-compact but have a finite volume (for earlier works on warped compactifications see ). In the framework of Ref. the volume $`V\mathrm{\sim }L^N`$ of the extra $`N`$ space dimensions sets the normalization of the four-dimensional graviton zero-mode.
Therefore, the relation between the observable and the fundamental Planck scales ($`M_P`$ and $`M_{Pf}`$ respectively) reads as follows: $`M_P^2=M_{Pf}^{2+N}V.`$ (1) A similar relation holds for the Randall-Sundrum (RS) scenario , where the role of $`L^2`$ is played by the curvature of $`\mathrm{AdS}_5`$. In this case the extra dimension is not compact; nevertheless its length (or volume) is finite and is determined by the bulk cosmological constant, $`L\mathrm{\sim }1/\sqrt{|\mathrm{\Lambda }|}`$. In the scenario of Ref. gravity becomes higher-dimensional at distances $`r<<L`$, with the corresponding change in Newton's law $$1/r\to 1/r^{1+N}.$$ (2) The same holds true for RS-type scenarios with non-compact extra dimensions . The purpose of the present letter is to study whether the volume of the extra space can be truly infinite while the four-dimensional Planck mass is still finite. In this case the relation (1) should somehow be evaded. Since there are no normalizable zero modes in such cases, the effects of 4D gravity must be reproduced by exchanging the continuum of bulk modes. An example of this type was recently proposed in Ref. . The physical reason why such an exchange can indeed mimic the $`1/r`$ Newtonian law can be understood as follows. The four-dimensional graviton, although not an eigenstate of the linearized theory, can still exist as a metastable resonance with a finite lifetime $`\tau _g`$. In such a case one might hope that the exchange of this graviton approximates Newton's law at distances shorter than the scale set by the graviton lifetime, but changes the laws of gravity at larger scales. The question is whether the four-dimensional effective theory obtained in this way is phenomenologically viable. In the present paper we will argue that, despite the correct Newtonian limit, the infinite volume scenario has problems in reproducing other predictions of Einstein's theory.
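As a rough numerical aside on Eq. (1) (the TeV value of $`M_{Pf}`$ below is an assumption for illustration, not taken from the text): inverting the relation with $`V=L^N`$ and $`N=2`$ for a fundamental scale near a TeV lands at the millimeter size quoted above.

```python
# Illustration of Eq. (1): with V = L^N, the size of the extra dimensions is
# L = (M_P^2 / M_Pf^(2+N))^(1/N) in natural units. M_Pf ~ 1 TeV is an
# assumed value for illustration only.
M_P = 1.22e19        # four-dimensional Planck mass, GeV
hbar_c = 1.973e-16   # GeV * m, to convert GeV^-1 to meters

def extra_dim_size_m(M_Pf_GeV, N):
    L_inv_GeV = (M_P**2 / M_Pf_GeV ** (2 + N)) ** (1.0 / N)  # L in GeV^-1
    return L_inv_GeV * hbar_c

print(extra_dim_size_m(1e3, 2))  # ~ 2.4e-3 m: the millimeter scale above
```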
This problem is shared by any model in which the dominant contribution to 4D gravity comes from the exchange of continuum states. The reason behind such a discrepancy is the van Dam-Veltman-Zakharov discontinuity in the propagator of a massive spin-2 field in the massless limit . Very briefly, the physical effects of the additional polarizations of a massive graviton survive in the massless limit and dramatically change the predictions of the theory . As a result, any theory which relies on the exchange of massive (no matter how light) gravitons will give rise to predictions which differ from those of General Relativity. We shall consider five-dimensional Einstein gravity coupled to an arbitrary energy-momentum tensor $`T_{AB}`$ which is independent of the four space-time coordinates $`x_\mu `$. We will assume that $`T_{AB}`$ results from a certain combination of branes, a bulk cosmological constant $`\mathrm{\Lambda }`$ and classical field configurations which preserve four-dimensional Poincaré invariance. Einstein's equations $`G_{AB}=T_{AB}+g_{AB}\mathrm{\Lambda }`$ give rise to a metric of the following form $$ds^2=A(z)\left(ds_4^2-dz^2\right),$$ (3) where $`z`$ is the extra coordinate, and we assume that the four-dimensional metric $`ds_4^2`$ is flat. The volume of the extra space in this construction is determined by the integral $$\int _{-\mathrm{\infty }}^{+\mathrm{\infty }}A^{5/2}(z)dz.$$ (4) For instance, in the RS framework $`A(z)=(1+H|z|)^{-2}`$, where $`H`$ is proportional to the square root of the cosmological constant, $`H\mathrm{\sim }\sqrt{|\mathrm{\Lambda }|}`$. This warp factor gives rise to a finite expression in (4). An infinite volume theory is obtained if, for instance, $`A(z)`$ tends to a nonzero constant as $`z`$ goes to $`\pm \mathrm{\infty }`$. If the volume is finite, there is a normalizable zero-mode graviton in the spectrum of fluctuations about this background .
Indeed, let us parametrize the four-dimensional graviton fluctuations as follows: $$ds^2=A(z)\left[(\eta _{\mu \nu }+h_{\mu \nu }(x,z))dx^\mu dx^\nu -dz^2\right].$$ (5) The corresponding linearized Schrödinger equation for the excitation $`h_{\mu \nu }(x,z)=A^{-3/4}(z)\mathrm{\Psi }(z)h_{\mu \nu }^{(0)}(x_\mu )`$ takes the form: $`\left(-\partial _z^2+{\displaystyle \frac{3}{4}}\left\{{\displaystyle \frac{A^{\prime \prime }}{A}}-{\displaystyle \frac{A^{\prime 2}}{4A^2}}\right\}\right)\mathrm{\Psi }(z)=m^2\mathrm{\Psi }(z).`$ (6) Here $`\eta ^{\mu \nu }\partial _\mu \partial _\nu h^{(0)}=m^2h^{(0)}`$ and primes denote differentiation with respect to $`z`$. This equation has a zero-mode solution for a generic form of the warp factor $`A(z)`$: $$\mathrm{\Psi }_{\mathrm{zm}}(z)=A^{3/4}(z).$$ (7) This implies that the $`z`$-dependent part of the fluctuation $`h(x,z)`$ is just a constant, i.e., $`h(x,z)=\mathrm{const}.\mathrm{exp}(ipx)`$. If the integral in (4) diverges, the volume is infinite and the zero-mode is not normalizable. Thus, the spectrum in this case consists of continuum states only. One should notice, however, that even in this case the correct Newtonian limit may be recovered in some approximation by exchanging the continuum of non-localized states! The physical reason for such a behavior is that the 4D localized graviton, although not an eigenstate, can still exist as a metastable resonance with an exponentially long lifetime $`\tau _g`$. Therefore, the exchange of this resonance can give rise to the correct Newtonian potential at intermediate distances. Note that this is just a reformulation of the fact that the tower of bulk states conspires in such a way that the Newtonian potential is obtained as an approximation. To make these statements more precise, let us turn to the propagator of a massive metastable graviton.
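As an aside (not from the paper), the zero-mode property (7) can be verified numerically for an arbitrary smooth warp factor; the sketch below uses a hypothetical $`A(z)=1/\mathrm{cosh}(z)`$ and finite differences, and checks that $`\mathrm{\Psi }=A^{3/4}`$ makes the left-hand side of Eq. (6) vanish for $`m=0`$.

```python
# Numerical check that Psi = A^(3/4) is a zero mode of Eq. (6) for a generic
# smooth warp factor. A(z) = 1/cosh(z) is an arbitrary test choice.
import math

A = lambda z: 1.0 / math.cosh(z)
h = 1e-3  # finite-difference step

def d1(f, z):  # central first derivative
    return (f(z + h) - f(z - h)) / (2 * h)

def d2(f, z):  # central second derivative
    return (f(z + h) - 2 * f(z) + f(z - h)) / h**2

def residual(z):
    """-Psi'' + (3/4)[A''/A - A'^2/(4 A^2)] Psi, which should vanish (m = 0)."""
    Psi = lambda x: A(x) ** 0.75
    V = 0.75 * (d2(A, z) / A(z) - d1(A, z) ** 2 / (4 * A(z) ** 2))
    return -d2(Psi, z) + V * Psi(z)

print(max(abs(residual(k / 10)) for k in range(-20, 21)))  # tiny: FD error only
```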
The propagator is given as follows: $$G_4^{\mu \nu \alpha \beta }(x)=\int \frac{d^4p}{(2\pi )^4}\left(\frac{1}{2}(g^{\mu \alpha }g^{\nu \beta }+g^{\mu \beta }g^{\nu \alpha })-\frac{1}{3}g^{\mu \nu }g^{\alpha \beta }+𝒪(p)\right)D(p^2,m_0^2,\mathrm{\Gamma })e^{ipx},$$ (8) where $`D(p^2,m_0^2,\mathrm{\Gamma })`$ stands for the scalar part of a massive resonance propagator, $`D(p^2,m_0^2,\mathrm{\Gamma })=(p^2-m_0^2+im_0\mathrm{\Gamma })^{-1}`$. $`\mathrm{\Gamma }`$ denotes the width of the resonance. The momentum dependent part of the tensor structure gives zero contribution when the propagator is convoluted with a conserved energy-momentum tensor; thus, this part will be omitted. In order to make contact with the continuum modes, let us use the following spectral representation for $`D`$: $`D(p^2,m_0^2,\mathrm{\Gamma })={\displaystyle \frac{1}{2\pi }}{\displaystyle \int \frac{\rho (s)}{s-p^2+iϵ}ds},`$ (9) where $`s`$ denotes the Mandelstam variable and $`\rho (s)`$ is a spectral density. If the assumption of resonance dominance is made, then $`\rho (s)`$ is approximated by a sharply peaked function around the resonance mass, $`s=m_0^2`$. In what follows we assume that the resonance lifetime is very long and neglect the effects of a nonzero resonance width (these modify the laws of gravity at very large distances only). Exchanging such a particle between two static sources one obtains the potential $$V(r)\mathrm{\sim }\int \frac{e^{-\sqrt{s}r}}{r}\rho (s)ds.$$ (10) This expression reproduces the standard $`1/r`$ interaction at distances $`r<<(m_0)^{-1}`$ in the single, narrow-resonance approximation, $`\rho (s)\mathrm{\sim }\delta (s-m_0^2)`$. On the other hand, we can expand the spectral density $`\rho (s)`$ in the complete set of bulk modes $$\rho (s)=\int _0^{\mathrm{\infty }}|\psi _m(0)|^2\delta (s-m^2)dm.$$ (11) Here, $`\psi _m(0)`$ denote the wave functions of the bulk modes at the point $`z=0`$.
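The narrow-resonance limit of Eq. (10) can be checked numerically: with a sharply peaked (Lorentzian) spectral density around $`s=m_0^2`$, the potential is Yukawa-like, $`e^{-m_0r}/r`$, and hence indistinguishable from $`1/r`$ at $`r<<1/m_0`$. The width and integration grid below are arbitrary illustrative choices.

```python
# Narrow-resonance check of Eq. (10): a Lorentzian spectral density peaked
# at s = m0^2 yields V(r) ~ exp(-m0 r)/r. Plain midpoint integration,
# standard library only.
import math

def V(r, m0=1.0, width=1e-3, n=20000):
    s_lo, s_hi = m0**2 - 50 * width, m0**2 + 50 * width
    ds = (s_hi - s_lo) / n
    total = norm = 0.0
    for i in range(n):
        s = s_lo + (i + 0.5) * ds
        rho = width / ((s - m0**2) ** 2 + width**2)   # unnormalized Lorentzian
        total += math.exp(-math.sqrt(s) * r) / r * rho * ds
        norm += rho * ds
    return total / norm  # normalized so that rho integrates to 1

# r << 1/m0: V(0.01) is within about one percent of 1/r = 100;
# r >> 1/m0: V(5) is exponentially suppressed relative to 1/r = 0.2.
```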
Using the expansion (11) for the spectral density one finds the potential $$V(r)\mathrm{\sim }\int _0^{\mathrm{\infty }}\frac{e^{-mr}}{r}|\psi _m(0)|^2dm.$$ (12) This is nothing but the potential mediated by the continuum of bulk modes. Thus, the effect of the metastable graviton, when it exists, can be read off the expression which includes all the bulk modes. These two descriptions are complementary to each other. In the case when the resonance exists, the continuum modes can conspire in such a way that (12) yields the $`1/r`$ law in a certain approximation. The inverse statement is also likely to be true. In the appendix we will show explicitly the presence of a resonance state in a model with an infinite volume extra dimension and a $`1/r`$ potential produced by the bulk modes . As we mentioned above, such a model gives Newtonian gravity only at intermediate distances. At large distances, the five-dimensional laws of gravity should be restored (due to the metastable nature of the resonance). This phenomenon could have dramatic cosmological and astrophysical consequences. Indeed, at large cosmic scales the time-dependence of the scale factor $`R(t)`$ in the Friedmann-Robertson-Walker metric would change dramatically due to the change in the laws of gravity<sup>1</sup><sup>1</sup>1A different possibility to modify long distance gravity by means of an additional massive graviton was proposed earlier in Ref. . In view of the phenomenological problems discussed below, this graviton should be very weakly coupled.. Another interesting comment concerns bulk supersymmetry. Since the volume of the extra dimension is infinite, it might be possible to realize the following scenario. The bulk is exactly supersymmetric and SUSY is completely broken on the brane (this could be a non-BPS brane which is stable for topological reasons ). The transmission of SUSY breaking from the brane worldvolume to the bulk is suppressed by the volume of the bulk and thus vanishes.
In such a case one could imagine a setup where the bulk cosmological constant is zero due to the bulk SUSY<sup>2</sup><sup>2</sup>2Note that local SUSY does not necessarily imply vanishing of the vacuum energy. However, this can be accomplished by imposing additional global symmetries on the model.. Having discussed these attractive features of theories with truly infinite extra dimensions, we now turn to some phenomenological difficulties of these models. In fact, we will argue below that these theories cannot reproduce other predictions of Einstein's general relativity. The reason is that all the spin-2 modes that dominantly contribute to four-dimensional gravity in this case are massive. It has been known for a long time that the propagator of massive spin-2 states has no continuous massless limit. As a result the effects of the massless spin-2 graviton differ from those of the massive one, no matter how small the mass is. Let us show how this affects the phenomenology of infinite volume theories. The four-dimensional gravity on a brane is reproduced by an exchange of the continuum of bulk gravitons. At tree level this gives $$G_5\int _0^{\mathrm{\infty }}dm\int d^4xd^4x^{}T_{\mu \nu }(x)G_m^{\mu \nu \alpha \beta }(x-x^{})T_{\alpha \beta }^{}(x^{}),$$ (13) where $`T_{\mu \nu }(x)`$ and $`T_{\mu \nu }^{}(x^{})`$ are the energy-momentum tensors of the two gravitating sources. For $`m\ne 0`$ the graviton propagator is given by $$G_m^{\mu \nu \alpha \beta }(x-x^{})=\int \frac{d^4p}{(2\pi )^4}\frac{\frac{1}{2}(g^{\mu \alpha }g^{\nu \beta }+g^{\mu \beta }g^{\nu \alpha })-\frac{1}{3}g^{\mu \nu }g^{\alpha \beta }+𝒪(p)}{p^2-m^2-iϵ}e^{ip(x-x^{})},$$ (14) whereas for $`m=0`$ we have $$G_0^{\mu \nu \alpha \beta }(x-x^{})=\int \frac{d^4p}{(2\pi )^4}\frac{\frac{1}{2}(g^{\mu \alpha }g^{\nu \beta }+g^{\mu \beta }g^{\nu \alpha })-\frac{1}{2}g^{\mu \nu }g^{\alpha \beta }+𝒪(p)}{p^2-iϵ}e^{ip(x-x^{})}.$$ (15) As we see, the tensor structures in the two cases are different.
In the massless limit, the propagator exhibits the celebrated van Dam-Veltman-Zakharov discontinuity. This is due to the difference in the number of degrees of freedom of massive and massless spin-2 fields. In our case this difference is very transparent: KK gravitons at each mass level "eat up" three extra degrees of freedom of the $`g_{5\mu }`$ and $`g_{55}`$ components of the higher dimensional metric ("graviphotons" and "graviscalars", respectively). Since we consider a model in which there is no normalizable zero mode, the whole answer is given by the bulk continuum. Let us show that the 4D gravity which is obtained in this way cannot reproduce the observable effects of General Relativity. Consider first the Newtonian limit. In this case, we take two static point-like sources $$T_{\mu \nu }(x)=m_1\delta _{\mu 0}\delta _{\nu 0}\delta (\stackrel{}{x}),T_{\mu \nu }^{}(x^{})=m_2\delta _{\mu 0}\delta _{\nu 0}\delta (\stackrel{}{x^{}}-\stackrel{}{r}).$$ (16) For this setup the bulk graviton exchange gives $$\frac{2}{3}m_1m_2G_5\int dm\frac{e^{-mr}}{r}|\psi _m(0)|^2.$$ (17) Since the leading behavior of the integral for the particular case at hand is $`1/r`$, $$\int dm\frac{e^{-mr}}{r}|\psi _m(0)|^2\mathrm{\sim }\frac{a}{r}+\mathrm{\dots },$$ (18) the correct Newtonian limit may be reproduced ($`a`$ is some normalization constant). On the other hand, since the exchange of one normalizable massless graviton would give $$G_N\frac{1}{2}\frac{m_1m_2}{r},$$ (19) we have to set $$aG_5=\frac{3G_N}{4}.$$ (20) This identification provides the correct Newtonian potential for static sources. So far so good. Unfortunately, a problem arises when one tries to account for moving sources. To see this, let us take one of the sources to be a moving point-like particle of mass $`m_2`$ and proper time $`\tau `$.
The energy-momentum tensor of this particle is written as: $$T_{\mu \nu }^{}(x^{})=m_2\int d\tau \dot{x_\mu }\dot{x_\nu }\delta (x^{}-x(\tau )).$$ (21) The bulk graviton exchange then gives $$G_5m_1m_2\int d\tau (\dot{x_0}\dot{x_0}-\frac{1}{3}\dot{x_\mu }\dot{x^\mu })\int dm\frac{e^{-mr(\tau )}}{r(\tau )}|\psi _m(0)|^2,$$ (22) where $`r=|\stackrel{}{x}(\tau )|`$. With the identification (20), to leading order this yields $$\frac{3}{4}G_Nm_1m_2\int d\tau (\dot{x_0}\dot{x_0}-\frac{1}{3}\dot{x_\mu }\dot{x^\mu })\frac{1}{r(\tau )}.$$ (23) On the other hand, the exchange of a normalizable graviton zero-mode produces $$G_Nm_1m_2\int d\tau (\dot{x_0}\dot{x_0}-\frac{1}{2}\dot{x_\mu }\dot{x^\mu })\frac{1}{r(\tau )}.$$ (24) This shows the discrepancy between the predictions of the two theories. In particular, the same procedure applied to the bending of light by the Sun gives a discrepancy by a factor of $`3/4`$. Indeed, for the bending of light in the gravitational field of the Sun, the tree-level bulk graviton exchange gives: $`G_5M_{\mathrm{Sun}}T_{00}(k,q,ϵ_\mu ,ϵ_\nu ^{}){\displaystyle \int dm\frac{\delta (k_0-q_0)}{(k-q)^2-m^2-i\epsilon }|\psi _m(0)|^2}\mathrm{\sim }{\displaystyle \frac{3}{4}}{\displaystyle \frac{G_NM_{\mathrm{Sun}}T_{00}\delta (k_0-q_0)}{|\stackrel{}{k}-\stackrel{}{q}|^2}}+\mathrm{\dots },`$ (25) where $`T_{00}(k,q,ϵ_\mu ,ϵ_\nu ^{})`$ is the component of the energy-momentum tensor of photons in momentum representation, and $`k(ϵ_\mu )`$ and $`q(ϵ_\nu ^{})`$ are the momenta (polarizations) of the initial and final photons. This is just $`3/4`$ of the result of the 4D massless graviton exchange. Summarizing, we have shown that in theories with truly infinite extra dimensions the correct four-dimensional Newtonian gravity can be obtained at intermediate distances due to a metastable resonance graviton. This description is complementary to the exact summation of continuum modes.
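The $`2/3`$, $`1/2`$ and $`3/4`$ factors above follow from the tensor structures in Eqs. (14)-(15) alone, and can be checked with a few lines of arithmetic; the sources below are schematic (the off-diagonal $`T_{03}`$ components of a null source are omitted, since they drop out of the contraction with a static source).

```python
# Contract the propagator numerators of Eqs. (14)-(15) with two sources.
# eta = diag(1,-1,-1,-1); diagonal source tensors suffice here, since the
# off-diagonal T_03 of a null source does not contract with a static one.
eta = [1, -1, -1, -1]

def amplitude(T1, T2, trace_coeff):
    """T1.T2 - trace_coeff * tr(T1) tr(T2), with indices raised by eta."""
    full = sum(T1[m] * T2[m] * eta[m] * eta[m] for m in range(4))
    tr1 = sum(eta[m] * T1[m] for m in range(4))
    tr2 = sum(eta[m] * T2[m] for m in range(4))
    return full - trace_coeff * tr1 * tr2

static = [1, 0, 0, 0]   # unit-mass static source (diagonal entries of T)
photon = [1, 0, 0, 1]   # traceless null source moving along z, unit energy

print(amplitude(static, static, 1/3), amplitude(static, static, 1/2))  # 2/3, 1/2
print(amplitude(photon, static, 1/3), amplitude(photon, static, 1/2))  # 1, 1
# Matching Newton's constant via Eq. (20) then rescales the massive-exchange
# light bending by (1/2)/(2/3) = 3/4 relative to the massless case.
```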
Due to the finite lifetime of the resonance, the laws of gravity are modified at large distances. This would give rise to interesting cosmological consequences. Moreover, these models could allow one to preserve bulk supersymmetry while it is completely broken on the brane. Unfortunately, these models encounter a number of phenomenological difficulties. The effects of the additional polarization degrees of freedom of massive gravitons survive even in the massless limit and lead to substantial discrepancies with the predictions of General Relativity. It might be possible to cure these discrepancies by introducing new, very unconventional interactions. The addition of dilaton-type scalars coupled to $`T_\mu ^\mu `$ seems to make things worse. Acknowledgments We wish to thank Ian Kogan and Valery Rubakov for comments. The work of G.D. is supported in part by a David and Lucile Packard Foundation Fellowship for Science and Engineering and by an Alfred P. Sloan Research Fellowship. That of G.G. is supported by NSF grant PHY-94-23002. M.P. is supported in part by NSF grant PHY-9722083. Note added After this paper was prepared for submission the work appeared. The authors of this work have also realized that metastable gravitons can be responsible for the $`1/r`$ law in theories with infinite extra dimensions. However, the generic phenomenological difficulties of this class of theories, which are a crucial part of our work, have not been addressed in . After this work appeared on the net, we were informed by V. Rubakov that in the revised version of Ref. the role of a metastable graviton was also elucidated and its decay width was calculated. Appendix Below we show the presence of a resonance in a system with an infinite extra dimension. The particular example which we consider is the one studied in Ref. .
The five-dimensional interval defining the background and four-dimensional graviton fluctuations is set as follows (here we choose to use a non-conformally flat metric in order to be consistent with the conventions of ): $`ds^2=\left[A(y)\eta _{\mu \nu }+h_{\mu \nu }(x,y)\right]dx^\mu dx^\nu -dy^2.`$ (26) There is a 3-brane with positive tension $`T`$ which is located at $`y=0`$. In addition, there are two 3-branes with equal negative tensions $`-T/2`$ located at a distance $`y_c`$ to the left and right of the positive-tension brane. For $`|y|<y_c`$ the space is $`\mathrm{AdS}_5`$ and the warp-factor is normalized as $`A(y)=\mathrm{exp}(-2Hy)`$. Furthermore, for $`|y|>y_c`$, the space becomes Minkowskian and the corresponding warp-factor is a constant, $`c^2\mathrm{exp}(-2Hy_c)`$. Thus, at large distances, i.e. $`|y|>>y_c`$, the system reduces to a single tensionless brane embedded in five-dimensional Minkowski space-time. For simplicity of presentation, in what follows we will deal with the positive semi-axis only, i.e., $`y≥0`$ (the negative part of the whole $`y`$ axis is restored by reflection symmetry). Choosing the traceless covariant gauge for graviton fluctuations ($`h_\mu ^\mu =0,∂^\mu h_{\mu \nu }=0`$) the Einstein equations take the form: $`\mathrm{\Psi }^{\prime \prime }-4H^2\mathrm{\Psi }+m^2e^{2Hy}\mathrm{\Psi }=0,0<y<y_c,`$ $`\mathrm{\Psi }^{\prime \prime }+{\displaystyle \frac{m^2}{c^2}}\mathrm{\Psi }=0,y>y_c,`$ (27) where we have introduced the $`y`$ dependent part of the fluctuations as follows: $`h(x,y)=\mathrm{\Psi }(y)\mathrm{exp}(ipx)`$. Furthermore, the mass-shell condition for graviton fluctuations is defined as $`p^2=m^2`$. The equations presented above should be accompanied by the Israel matching conditions at the points where the branes are located. 
For the particular case at hand these conditions take the form $`\mathrm{\Psi }^{}+2H\mathrm{\Psi }=0,y=0;`$ $`\mathrm{\Psi }^{}|_{\mathrm{jump}}=2H\mathrm{\Psi },y=y_c.`$ (28) The solutions to equations (27) are combinations of Bessel functions for $`0<y<y_c`$, and exponentials for $`y>y_c`$: $`\mathrm{\Psi }_m(y)=A_mJ_2\left({\displaystyle \frac{m}{H}}e^{Hy}\right)+B_mN_2\left({\displaystyle \frac{m}{H}}e^{Hy}\right),0<y<y_c,`$ (29) $`\mathrm{\Psi }_m(y)=C_m\mathrm{exp}\left(i{\displaystyle \frac{m}{c}}(y-y_c)\right)+D_m\mathrm{exp}\left(-i{\displaystyle \frac{m}{c}}(y-y_c)\right),y>y_c.`$ (30) The constant coefficients $`A_m,B_m,C_m`$ and $`D_m`$ are determined by using the matching conditions (28) (along with the normalization equation). The presence of a resonance state requires that the coefficient of the incoming wave in the solution ($`D_m`$ in this case) vanishes at a point in the complex $`m`$ plane. This determines a resonance. Calculating $`D_m`$ and putting $`D_m=0`$ one finds: $`K_1(\rho )I_2\left(\rho e^{Hy_c}\right)+I_1(\rho )K_2\left(\rho e^{Hy_c}\right)=I_1(\rho )K_1\left(\rho e^{Hy_c}\right)-K_1(\rho )I_1\left(\rho e^{Hy_c}\right),`$ (31) where we have introduced a new variable $`\rho \equiv im/H`$. This relation can be solved for small values of $`\rho `$. The result is $`\rho \simeq 2\mathrm{exp}\left(-3Hy_c\right)`$. Therefore, the resonance width behaves as $`\mathrm{\Gamma }\sim H\mathrm{exp}\left(-3Hy_c\right).`$ (32) In the limit $`y_c\rightarrow \mathrm{}`$, the resonance width goes to zero and one recovers a zero-mode graviton localized on a positive tension brane .
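The statement that a metastable graviton reproduces the Newtonian $`1/r`$ law at intermediate distances, with deviations at distances of order $`1/\mathrm{\Gamma }`$, can be illustrated numerically. The sketch below is ours, not part of the original analysis: it models the resonance by a normalized Lorentzian spectral weight of width $`\gamma `$ multiplying the Yukawa kernel, as in the continuum superposition of Eq. (22).

```python
import numpy as np
from scipy.integrate import quad

def potential(r, gamma):
    """V(r) = \int_0^inf dm rho(m) e^{-m r}/r with the normalized
    Lorentzian spectral weight rho(m) = (2/pi) gamma/(m^2 + gamma^2),
    a toy model for a metastable graviton of width ~ gamma."""
    integrand = lambda m: (2.0 / np.pi) * gamma / (m**2 + gamma**2) * np.exp(-m * r)
    # finite upper bound: the integrand is negligible far beyond both scales
    upper = 1000.0 * gamma + 200.0 / r
    val, _ = quad(integrand, 0.0, upper)
    return val / r

gamma = 1e-3  # resonance width in arbitrary units
for r in [1.0, 10.0, 1e4]:
    # ratio to the Newtonian 1/r: close to 1 for r << 1/gamma,
    # suppressed for r >> 1/gamma where the 1/r law is modified
    print(r, potential(r, gamma) * r)
```

For $`r\gamma \ll 1`$ the ratio stays near unity, while for $`r\gamma \gg 1`$ the potential crosses over to a faster fall-off, in line with the modification of gravity at large distances discussed above.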
# Incorporation of Density Matrix Wavefunctions in Monte Carlo Simulations: Application to the Frustrated Heisenberg Model ## I Introduction The Density Matrix Renormalization Group (DMRG) technique has proven to be a very efficient method to determine the groundstate properties of low dimensional systems . For a quantum chain it produces extremely accurate values for the energy and the correlation functions. In two dimensional systems the calculational effort increases rapidly with the size of the system. The most favorable geometry is that of a long narrow strip. In practice the width of the strip is limited to around 8 to 10 lattice sites. Greens Function Monte Carlo (GFMC) is not directly limited by the size of the system but by the efficiency of the importance sampling. When the system has a minus sign problem the statistics is ruined in the long run and accurate estimates are impossible. Many proposals have been made to alleviate or avoid the minus sign problem, with varying success, but all of them introduce uncontrollable errors in the sampling. In the DMRG calculation of the wavefunction the minus sign problem is not manifestly present. In all proposed cures of the minus sign problem the errors decrease when the guiding wavefunction approaches the groundstate. The idea of this paper is that DMRG wavefunctions are much better, also for larger systems, than the educated guesses which usually feature as guiding wavefunctions. Moreover DMRG is a general technique to construct a wavefunction without knowing too much about the nature of the groundstate, with the possibility of systematically increasing the accuracy. Thus DMRG wavefunctions would do very well if they could be used as guiding functions in the importance sampling of the GFMC. A complicating factor prevents a straightforward implementation of this idea: interesting systems are so large that it is impossible to use a wavefunction via a look–up table. 
The value of the wavefunction in a configuration has to be calculated by an in–line algorithm. This has limited the guiding wavefunctions to simple expressions which are fast to evaluate. Consequently such guiding wavefunctions are not an accurate representation of the true groundstate wavefunction, in particular if the physics of the groundstate is not well understood. In this paper we describe a method to read out the DMRG wavefunction in an efficient way by using a special representation of it. A second problem is that a good guiding wavefunction alleviates the minus sign problem, but cannot remove it as long as it is not exact. We resolve this dilemma by applying the method of Stochastic Reconfiguration which has recently been proposed by Sorella . The viability of our method is tested on the frustrated Heisenberg model. The behavior of the Heisenberg antiferromagnet has been intriguing for a long time and still is at the center of research. The groundstate of the antiferromagnetic 1-dimensional chain with nearest neighbor coupling is exactly known. In higher dimensions only approximate theories or simulation results are available. The source of the complexity of the groundstate is the large quantum fluctuations which counteract the tendency toward classical ordering. The unfrustrated 2–dimensional Heisenberg antiferromagnet orders in a Néel state and by numerical methods the properties of this state can be analyzed accurately . The situation is worse when the interactions are competing, as in a 2-dimensional square lattice with antiferromagnetic nearest neighbor $`J_1`$ and next nearest neighbor $`J_2`$ couplings. This spin system with continuous symmetry can order in 2 dimensions at zero temperature, but it is clear that the magnetic order is frustrated by the opposing tendencies of the two types of interaction. The ratio $`J_2/J_1`$ is a convenient parameter for the frustration. 
For small values the system orders antiferromagnetically in a Néel type arrangement, which accommodates the nearest neighbor interaction. For large ratios a magnetic order in alternating columns of aligned spins (columnar phase) will prevail; in this regime the roles of the two couplings are reversed: the nearest neighbor interaction frustrates the order imposed by the next nearest neighbor interaction. In between, for ratios of the order of 0.5, the frustration is maximal and it is not clear which sort of groundstate results. This problem has been attacked by various methods but not yet by DMRG and only very recently by GFMC . This paper addresses the issue by studying the spin correlations. A simple road to the answer is not possible since the behavior of the system with frustration presents some fundamental problems. The most severe obstacle is that frustration implies a sign problem which prevents the straightforward use of the GFMC simulation technique. Moreover the frustration substantially complicates the structure of the groundstate wavefunction. Generally frustration encourages the formation of local structures such as dimers and plaquettes which are at odds, but not incompatible, with long range magnetic order. These correlation patterns are the most interesting part of the intermediate phase and the main goal of this investigation. Many attempts have been made to clarify the situation. Often simple approximations such as mean–field or spin–wave theory give useful information about the qualitative behavior of the phase diagram. A fairly sophisticated mean–field theory using the Schwinger boson representation does not give an intermediate phase . Given the complexity of the phase diagram and the subtlety of the effects it is not clear whether such approximate methods can in this case give a reliable clue to the qualitative behavior of the system. Exact calculations have been performed on small systems up to size $`6\times 6`$ by Schulz et al. . 
Although this information is very accurate and unbiased with respect to possible phases, the extrapolation to larger systems is a long way off, the more so in view of indications that the anticipated finite size behavior sets in only for larger systems. Another drawback of these small systems is that the groundstate is assumed to have the full symmetry of the lattice. Therefore the symmetry breaking, associated with the formation of dimers, ladders or plaquettes, which is typical for the intermediate state, cannot be observed directly. More convincing are the systematic series expansions as reported recently by Kotov et al. , and by Singh et al. , which bear on an infinite system. They start from independent dimers (plaquettes) and study the series expansion in the coupling between the dimers (plaquettes). By the choice of the state around which the perturbation expansion is made, the type of spatial symmetry breaking is fixed. These studies favor in the intermediate regime the dimer state over the plaquette state. Their dimer state has dimers organized in ladders in which the chains and the rungs have nearly equal strength. So the system breaks the translational invariance only in one direction. The energy differences are however small and the series is finite, so further investigation is useful. Our simulations yield correlations in good agreement with theirs, but do not confirm the picture of translationally invariant ladders. Instead we find an additional weaker symmetry breaking along the ladders, such that we come closer to the plaquette picture. Very recently Capriotti and Sorella have carried out a GFMC simulation for $`J_2=0.5J_1`$ and have studied the susceptibilities for the orientational and translational symmetry breaking. They conclude that the groundstate is a plaquette state with full symmetry between the horizontal and vertical directions. From the purely theoretical side the problem has been discussed by Sachdev and Read on the basis of a large spin expansion. 
From their analysis a scenario emerges in which the Néel phase disappears upon increasing frustration in a continuous way. Then a gapped, spatially inhomogeneous phase with dimerlike correlations appears. For even higher frustration ratios a first order transition takes place to the columnar phase. Although this scenario is qualitative, without precise location of the phase transition points, it definitively excludes dimer formation in the magnetically ordered Néel and columnar phases. It is remarkable that two quite different order parameters (the magnetic order and the dimer order) disappear simultaneously and continuously on opposite sides of the phase transition. In this scenario this is taken as an indication of some kind of duality between the two phases. Given all these predictions it is of the utmost interest to further study the nature of the intermediate state. Due to the smallness of the differences in energy between the various possibilities, the energy will not be the ideal test for the phase diagram. Therefore we have decided to focus directly on the spin correlations as a function of the ratio $`J_2/J_1`$. In this paper we first investigate the 2–dimensional frustrated Heisenberg model by constructing the DMRG wavefunction of the groundstate for long strips up to a width of 8 sites. The calculated groundstate energy and spin stiffness confirm the overall picture described above, but the results are not accurate enough to allow for a conclusive extrapolation to larger systems. Then we study an open $`10\times 10`$ lattice by means of the GFMC technique using DMRG wavefunctions as guiding wavefunctions for the importance sampling. The GFMC simulations are supplemented with Stochastic Reconfiguration as proposed by Sorella as an extension of the Fixed Node technique . This method avoids the minus sign problem by regularly replacing the walkers by a new set of positive sign with the same statistical properties. 
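To fix the idea of replacing a weighted walker population by an equivalent unweighted one, a minimal sketch of plain (positive-weight) reconfiguration is given below. The names are ours and the sketch deliberately omits the treatment of signs and correction factors that Sorella's Stochastic Reconfiguration adds for frustrated models.

```python
import numpy as np

rng = np.random.default_rng(0)

def reconfigure(configs, weights):
    """Replace a population of weighted walkers by an equal-weight
    population with, on average, the same distribution: walkers are
    drawn with probability proportional to their weight, and every
    new walker carries the average weight of the old population."""
    weights = np.asarray(weights, dtype=float)
    M = len(configs)
    probs = weights / weights.sum()
    idx = rng.choice(M, size=M, p=probs)
    return [configs[i] for i in idx], np.full(M, weights.mean())

# toy check: the weighted average of an observable is preserved
# statistically over many reconfigurations
configs = list(range(8))
weights = rng.random(8)
obs = lambda c: c * c
before = sum(w * obs(c) for c, w in zip(configs, weights)) / weights.sum()
samples = []
for _ in range(2000):
    cs, ws = reconfigure(configs, weights)
    samples.append(sum(w * obs(c) for c, w in zip(cs, ws)) / ws.sum())
after = np.mean(samples)
```

A single reconfiguration introduces statistical noise, but no bias in the first moment, which is the property exploited when the walkers are periodically replaced during the simulation.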
The first observation is that GFMC improves the energy of the DMRG in a substantial and systematic way, as can be tested in the unfrustrated model where sufficient information is available from different sources. Secondly the spin correlations become more accurate and less dependent on the technique used for constructing the DMRG wavefunction. The DMRG technique is focused on the energy of the system and less on the correlations. The GFMC probes mostly the local correlations of the system as all the moves are small and correspond to local changes of the configurations. With these spin correlations we investigate the phase diagram for various values of the frustration ratio $`J_2/J_1`$. The paper begins with the definition of the model to avoid ambiguities. Then a short description of our implementation of the DMRG method is given. We go into more detail about how the constructed wavefunctions can be used as guiding wavefunctions in the GFMC simulation. This is a delicate problem since the full construction of a DMRG wavefunction takes several hours on a workstation. Therefore we separate off the construction of the wavefunction and cast it in a form where the values for successive configurations can be obtained from each other by matrix operations on a vector. So the time needed to compute the wavefunction in a configuration scales with the square of the number of states included in the DMRG wavefunction. But even then the actual evaluation of the wavefunction in a given configuration is so time consuming that utmost efficiency must be reached in obtaining the wavefunction for successive configurations. The remaining sections are used to outline the GFMC and the Stochastic Reconfiguration and to discuss the results. We concentrate on the correlation functions since we see them as most significant for the structure of the phases. 
We give first a global evaluation of the correlation function patterns for a wide set of frustration ratios and then focus on a number of points to see the dependence on the guiding wavefunction and to deduce the trends. The paper closes with a discussion and a comparison with other results in the literature. ## II The Hamiltonian The hamiltonian of the system describes spins on a square lattice: $$ℋ=J_1\underset{(i,j)}{}𝐒_i𝐒_j+J_2\underset{[i,j]}{}𝐒_i𝐒_j.$$ (1) The $`𝐒_i`$ are spin $`\frac{1}{2}`$ operators and the sum is over pairs of nearest neighbors $`(i,j)`$ and over pairs of next nearest neighbors $`[i,j]`$ on the square lattice. Both coupling constants $`J_1`$ and $`J_2`$ are taken to be positive. $`J_1`$ tries to align the nearest neighbor spins in an antiferromagnetic way and $`J_2`$ tries to do the same with the next nearest neighbors. So the spin system is frustrated, implying an intrinsic minus sign in the simulations that cannot be gauged away by a rotation of the spin operators. In order to prepare for the representation of the hamiltonian we express the spin components in spin raising and lowering operators $$𝐒_i𝐒_j=\frac{1}{2}(S_i^+S_j^-+S_i^-S_j^+)+S_i^zS_j^z.$$ (2) We will use the $`z`$ component representation of the spins and a complete state of the spins will be represented as $$|R=|s_1,s_2,\mathrm{},s_N,$$ (3) where the $`s_j`$ are eigenvalues of the $`S_j^z`$ operator. The diagonal matrix elements of the hamiltonian are in the representation (3) given by $$R|ℋ|R=J_1\underset{(i,j)}{}s_is_j+J_2\underset{[i,j]}{}s_is_j.$$ (4) The off-diagonal elements are between two nearby configurations $`R^{}`$ and $`R`$. $`R^{}`$ is the same as $`R`$ except at a pair of nearest neighbor sites $`(i,j)`$ or next nearest neighbor sites $`[i,j]`$, for which the spins $`s_i`$ and $`s_j`$ are opposite. In $`R^{}`$ the pair is turned over by the hamiltonian. 
Then $$R^{}|ℋ|R=\frac{1}{2}J_1\mathrm{or}R^{}|ℋ|R=\frac{1}{2}J_2,$$ (5) depending on whether a nearest or a next nearest neighbor pair is flipped. ## III The DMRG Procedure The DMRG procedure approximates the groundstate wavefunction by searching through various representations in bases of a given dimension $`m`$ . Here we take the standard method (with two connecting sites) for granted and make the preparations for the extraction of the wavefunction. The system is mapped onto a 1-dimensional chain (see Fig. 2) and separated into two parts: a left and a right hand part. They are connected by one site. Each part is represented in a basis of at most $`m`$ states. With a representation of all the operators in the hamiltonian in these bases one can find the groundstate of the system. We thus have several representations of the groundstate, depending on the way in which the system is divided up into subsystems. The point is to see how these representations are connected and how they can possibly be improved. We take a representation for the right hand parts and improve those on the left. So we assume that for a given division we have the groundstate of the whole system and we want to enlarge the left hand side at the expense of the right hand side. The first step is to include the connecting site in the left hand part. This enlarges the basis for the left hand side from $`m`$ to $`2m`$ states and a selection has to be made of $`m`$ basis states. This is done with the help of the density matrix for the left hand side as induced by the wavefunction for the whole system. For later use we write out the basic equations for the density matrix in the configuration representation. Let, at a certain stage in the computation, $`|\mathrm{\Phi }`$ be the approximation to the groundstate. The configurations of the right hand part and the left hand part are denoted by $`R_r`$ and $`R_l`$. 
Then the density matrix for the left hand part reads $$R_l|\rho |R_l^{}=\underset{R_r}{}R_l,R_r|\mathrm{\Phi }\mathrm{\Phi }|R_l^{},R_r.$$ (6) In practice we do not solve the eigenvalue problem of the density matrix in the configuration representation, but in a projection on a smaller basis. White has shown that the best way to represent the state $`|\mathrm{\Phi }`$ is to select the $`m`$ eigenstates $`|\alpha `$ with the largest eigenvalues $$\underset{R_l^{}}{}R_l|\rho |R_l^{}R_l^{}|\alpha =\lambda _\alpha R_l|\alpha .$$ (7) The next step is to break up the right hand part into a connecting site and a remainder. With the basis for this remainder and the newly acquired basis for the left hand part we can again compute the groundstate of the whole system, as indicated in the lower part of the figure. Now we are in the same position as when we started, with the difference that the connecting site has moved one position to the right. Thus we may repeat the cycle till the right hand part is so small that it can be represented exactly by $`m`$ states or fewer. Then we have constructed for the left hand part a new set of bases, all containing $`m`$ states, for system parts of variable length. Next we reverse the roles of left and right and move back in order to improve the bases for the right hand parts with the just constructed bases for the left hand part. The process may be iterated till it converges towards a steady state. The great virtue of the method is that it is variational. In each step the energy is lowered till it saturates. In 1-dimensional systems the method has proven to be very accurate . So one wonders what the main trouble is in higher dimensions. In Fig. 3 we have drawn 2 possible ways to map the system onto a 1-dimensional chain. One sees that if we again divide the chain into a left hand part, a right hand part and a connecting site, quite a few sites of the left hand part are nearest or next nearest neighbors of sites of the right hand part. 
So the coupling between the two parts of the chain is not only through the connecting site but also through sites which are relatively far away from each other along the 1-dimensional path. The operators for the spins on these sites are not as well represented as those of the connecting site, which is fully represented by the two possible spin states. Yet the correlations between these interacting sites count as much for the energy of the system as those interacting with the connecting site. One may say that the further apart two interacting sites are in the 1-dimensional chain, the more poorly their influence is accounted for. This consideration explains in part why open systems can be calculated more accurately than closed systems, even in 1 dimension. It is an open question which map of the 2-dimensional lattice onto a 1-dimensional chain gives the best representation of the groundstate of the system. Also other divisions of the system than those suggested by a map onto a 1–dimensional chain are possible and we have been experimenting with arrangements which better reflect the 2–dimensional character of the lattice. They are promising but the software for these is not as sophisticated as the one developed by White for the 1-dimensional chain. We have therefore restricted our calculations to the two paths shown here. The second choice, the “meandering” path, was motivated by the fact that it has the most strongly correlated sites closest together in the chain, and this choice was indeed justified by a lower energy for a given dimension $`m`$ of the representation than for the “straight” path. The DMRG calculations as well as the corresponding GFMC simulations are carried out for both paths. The meandering path is to be preferred over the straight path as the DMRG wavefunctions generally give a better energy value and the simulations suffer less from fluctuations. 
Nevertheless we have also investigated the straight path, since the path chosen leaves its imprint on the resulting correlation pattern and the paths break the symmetries in different ways. Both paths have an orientational preference. In open systems the translational symmetry is broken anyway, but the meandering path has in addition a staggering in the horizontal direction. This, together with the horizontal nearest neighbor sites appearing in the meandering path, gives a preference for horizontal dimerlike correlations in this path. On the other hand the straight path prefers the dimers in the vertical direction. Comparing the results of the two choices allows us to draw further conclusions on the nature of the intermediate state. ## IV Extracting configurations from the DMRG wavefunction It is clear that the wavefunction which results from a DMRG procedure is quite involved and it is not simple to extract its value for a given configuration. We assume now that the DMRG wavefunction has been obtained by some procedure and we give below an algorithm to obtain its value efficiently for an arbitrary configuration (see also for an alternative description). The first step is the construction of a set of representations for the wavefunction in terms of two parts (without a connecting site in between). Let the left hand part contain $`l`$ sites and the other part $`N-l`$ sites. We denote the $`m`$ basis states of the left hand part by the index $`\alpha `$ and those of the right hand part by $`\overline{\alpha }`$. 
The eigenstates of the two parts are closely linked and related as follows $$\{\begin{array}{ccc}\hfill R_l|\alpha & =& \frac{1}{\sqrt{\lambda _\alpha }}\underset{R_r}{}\mathrm{\Phi }|R_l,R_rR_r|\overline{\alpha },\hfill \\ \hfill R_r|\overline{\alpha }& =& \frac{1}{\sqrt{\lambda _\alpha }}\underset{R_l}{}\alpha |R_lR_l,R_r|\mathrm{\Phi }.\hfill \end{array}$$ (8) It means that for every eigenvalue $`\lambda _\alpha `$ there is an eigenstate $`\alpha `$ for the left hand part and an $`\overline{\alpha }`$ for the right hand density matrix. The proof of (8) follows from insertion in the density matrix eigenvalue equation (7). The second step is a relation for the groundstate wavefunction in terms of these eigenfunctions. Generally we have $$R_l,R_r|\mathrm{\Phi }=\underset{\alpha ,\overline{\beta }}{}R_l|\alpha R_r|\overline{\beta }\alpha \overline{\beta }|\mathrm{\Phi },$$ (9) while due to (8) we find $$\begin{array}{ccc}\hfill \alpha \overline{\beta }|\mathrm{\Phi }& =& \underset{R_l,R_r}{}\alpha |R_l\overline{\beta }|R_rR_l,R_r|\mathrm{\Phi }\hfill \\ & =& \sqrt{\lambda _\alpha }\underset{R_r}{}\overline{\beta }|R_rR_r|\overline{\alpha }=\delta _{\alpha ,\beta }\sqrt{\lambda _\alpha }.\hfill \end{array}$$ (10) Thus we can represent the groundstate as $$R_l,R_r|\mathrm{\Phi }=\underset{\alpha }{}\sqrt{\lambda _\alpha ^l}R_l|\alpha _lR_r|\overline{\alpha }_{N-l}.$$ (11) For this part of the problem we have to compute and store the set of $`m`$ eigenvalues $`\lambda _\alpha ^l`$ for each division $`l`$. We point out again that on the left hand side we have the wavefunction and on the right hand side representations for a given division $`l`$, which all lead to the same wavefunction. The last step is to see the connection between these representations. As intermediary we consider a representation of the wavefunction with one site $`s_l`$ separating the spins $`s_1\mathrm{}s_{l-1}`$ on the left hand side from $`s_{l+1}\mathrm{}s_N`$ on the right hand side. 
Using the same basis as in (11) we have $$\begin{array}{c}s_1\mathrm{}s_{l-1},s_l,s_{l+1}\mathrm{}s_N|\mathrm{\Phi }=\hfill \\ _{\alpha ,\alpha ^{}}s_1\mathrm{}s_{l-1}|\alpha \varphi _{\alpha ,\alpha ^{}}^l(s_l)s_{l+1}\mathrm{}s_N|\overline{\alpha ^{}}.\hfill \end{array}$$ (12) We compare this representation in two ways with (11). First we combine the middle site with the left hand part. This leads to $`m`$ states which can be expressed as linear combinations of the states of the enlarged segment $$\underset{\alpha }{}s_1\mathrm{}s_{l-1}|\alpha \varphi _{\alpha ,\alpha ^{}}^l(s_l)=\underset{\alpha ^{\prime \prime }}{}s_1\mathrm{}s_l|\alpha ^{\prime \prime }T_{\alpha ^{\prime \prime },\alpha ^{}}^l.$$ (13) In fact this relation is the very essence of the DMRG procedure. The wavefunction in the larger space is projected on the eigenstates of the density matrix of that space. Since the process of zipping back and forth has converged, there is indeed a fixed relation (13). However when we insert (13) into (12) and compare it with (11) we conclude that the matrix $`T`$ must be diagonal $$T_{\alpha ^{\prime \prime },\alpha ^{}}^l=\delta _{\alpha ^{\prime \prime },\alpha ^{}}\sqrt{\lambda _\alpha ^{}^l}.$$ (14) This leads to the recursion relation $$s_1\mathrm{}s_l|\alpha ^{}=\underset{\alpha }{}s_1\mathrm{}s_{l-1}|\alpha A_{\alpha ,\alpha ^{}}^l(s_l)$$ (15) with $$A_{\alpha ,\alpha ^{}}^l(s_l)=\varphi _{\alpha ,\alpha ^{}}^l(s_l)/\sqrt{\lambda _\alpha ^{}^l}.$$ (16) The second combination concerns the contraction of the middle site with the right hand part. This leads to the recursion relation $$s_l\mathrm{}s_N|\overline{\alpha }=\underset{\alpha ^{}}{}B_{\alpha ,\alpha ^{}}^{l-1}(s_l)s_{l+1}\mathrm{}s_N|\overline{\alpha }^{}$$ (17) with $$B_{\alpha ,\alpha ^{}}^{l-1}(s)=\varphi _{\alpha ,\alpha ^{}}^l(s)/\sqrt{\lambda _\alpha ^{l-1}}.$$ (18) The $`A`$ and $`B`$ matrices are the essential ingredients of the calculation of the wavefunction. 
From (18) and (16) it follows that they are related as $$B_{\alpha ,\alpha ^{}}^{l-1}(s)=\sqrt{\lambda _\alpha ^{}^l/\lambda _\alpha ^{l-1}}A_{\alpha ,\alpha ^{}}^l(s).$$ (19) By the recursion relations the basis states are expressed as products of $`m\times m`$ matrices. The determination of the matrices $`A`$ (or $`B`$) is part of the determination of the DMRG wavefunction, which is indeed lengthy but fortunately not part of the simulation. The matrices can be stored and contain the information to calculate the wavefunction for any configuration. The value of the wavefunction is then obtained as the product of matrices acting on a vector. Thus the calculational effort scales with $`m^2`$. Using relation (19) one reconfirms by direct calculation that the wavefunction is indeed independent of the division $`l`$. When the simulation is in the configuration $`R`$, all the $`R_l|\alpha _l`$ and the $`R_r|\overline{\alpha }_{N-l}`$ are calculated and stored, with the purpose of calculating the wavefunction more efficiently for the configurations $`R^{}`$ which are connected to $`R`$ by the hamiltonian and which are the candidates for a move. The structure of these nearby states is $`R^{}=s_1\mathrm{}s_{l_2}\mathrm{}s_{l_1}\mathrm{}s_N`$ (with $`l_2>l_1`$). So for $`R^{}`$ the representation $$R^{}|\mathrm{\Phi }=\underset{\alpha }{}\sqrt{\lambda _\alpha ^{l_2}}s_1\mathrm{}s_{l_2}\mathrm{}s_{l_1}|\alpha s_{l_2+1}\mathrm{}s_N|\overline{\alpha }$$ (20) holds. Now we see the advantage of having the wavefunction stored for all the divisions. The second factor in (20) is already tabulated; the first factor involves a number of matrix multiplications equal to the distance in the chain between the two spins $`l_1`$ and $`l_2`$ till one reaches a tabulated function. One can use the tables for a certain number of moves but after a while it starts to pay off to make a fresh list. 
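The evaluation just described can be summarized in code. The sketch below is ours and uses random matrices in place of the actual DMRG output; it only illustrates that an amplitude built from the recursion (15) is a product of $`m\times m`$ matrices acting on a vector, with cost per configuration scaling as $`m^2`$, and that cached partial products make nearby configurations cheap.

```python
import numpy as np

rng = np.random.default_rng(1)
N, m = 10, 4  # chain length and number of kept DMRG states

# stand-in for the DMRG output: per site l and spin s a matrix A^l(s);
# the first and last tensors are boundary row/column vectors
A = [{s: rng.normal(size=(1 if l == 0 else m, 1 if l == N - 1 else m)) / np.sqrt(m)
      for s in (0, 1)} for l in range(N)]

def amplitude(spins):
    """<s_1 ... s_N | Phi> as a left-to-right product of matrices
    acting on a vector (cf. the recursion for the basis states)."""
    v = A[0][spins[0]]              # shape (1, m)
    for l in range(1, N):
        v = v @ A[l][spins[l]]
    return float(v[0, 0])

# caching the partial left products lets a configuration differing
# only at later sites be re-evaluated without redoing the whole chain
spins = rng.integers(0, 2, size=N)
left = [A[0][spins[0]]]
for l in range(1, N):
    left.append(left[-1] @ A[l][spins[l]])
assert np.isclose(left[-1][0, 0], amplitude(spins))
```

Flipping a spin at site $`k`$ then only requires restarting the product from the cached prefix at $`k-1`$, which is the role played by the tabulated functions above.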
## V Green Function Monte Carlo simulations The GFMC technique employs the operator $$𝒢=1-ϵℋ$$ (21) and uses the fact that the groundstate $`|\mathrm{\Psi }_0`$ results from $$|\mathrm{\Psi }_0∝𝒢^n|\mathrm{\Phi },ϵ≪1,nϵ≫1,$$ (22) where in principle $`|\mathrm{\Phi }`$ may be any function which is non-orthogonal to the groundstate. In view of the possible symmetry breaking, the overlap is a point of serious concern to which we come back in the discussion. In practice we will use the best $`|\mathrm{\Phi }`$ that we can conveniently construct by the DMRG procedure described above. The closer $`|\mathrm{\Phi }`$ is to the groundstate, the smaller the number of factors $`n`$ in the product needs to be in order to find the groundstate. Evaluating (22) in the spin representation gives for the projection on the trial wavefunction the following long product $$\mathrm{\Phi }|\mathrm{\Psi }_0∝\underset{𝐑}{}\mathrm{\Phi }|R_M\left[\underset{i=1}{\overset{M}{}}R_i|𝒢|R_{i-1}\right]R_0|\mathrm{\Phi }.$$ (23) Here the sum is over paths $`𝐑=(R_M,\mathrm{},R_1,R_0)`$ which will be generated by a Markov process. The Markov process involves a transition probability $`T(R_i←R_{i-1})`$ and the averaging process uses a weight $`m(R)`$. It is natural to connect the transition probabilities to the matrix elements of the Greens Function $`𝒢`$. But here the sign problem comes into the game: the transition probabilities have to be positive (and normalized). 
So we put the transition rate proportional to the absolute value of the matrix element of the Greens Function $$T(R←R^{})=\frac{|R|𝒢|R^{}|}{_{R^{\prime \prime }}|R^{\prime \prime }|𝒢|R^{}|}.$$ (24) This implies that we have to use a sign function $`s(R,R^{})`$ $$s(R,R^{})=\frac{R|𝒢|R^{}}{|R|𝒢|R^{}|}$$ (25) and a weight factor $$m(R)=\underset{R^{}}{}|R^{}|𝒢|R|.$$ (26) All these factors together form the matrix element of the Greens Function $$R|𝒢|R^{}=T(R←R^{})s(R,R^{})m(R^{}).$$ (27) If the matrix elements of the Greens Function were all positive, or could be made positive by a suitable transformation, we would not have to introduce the sign function. We leave its consequences to the next section. By the representation (27) we can write the contribution of a path as a product of transition probabilities, signs and local weights. The transition probabilities control the growth of the Markov chain. The signs and weights constitute the weight of a path $$M(𝐑)=m_f(R_M)\left[\underset{i=1}{\overset{M}{}}s(R_i,R_{i-1})m(R_{i-1})\right]m_i(R_0).$$ (28) The initial and final weights have to be chosen such that the weight of the paths corresponds to the expansion (23). For the inner product $`\mathrm{\Phi }|\mathrm{\Psi }_0`$ we get $$m_i(R)=R|\mathrm{\Phi },m_f(R)=\mathrm{\Phi }|R.$$ (29) With this final weight we have projected the groundstate on the trial wavefunction. This allows us to calculate the so–called mixed averages. For that purpose we define the local estimator of $`𝒪`$ $$O(R)=\frac{\mathrm{\Phi }|𝒪|R}{\mathrm{\Phi }|R},$$ (30) which yields the mixed average $$𝒪_m≡\frac{\mathrm{\Phi }|𝒪|\mathrm{\Psi }_0}{\mathrm{\Phi }|\mathrm{\Psi }_0}=\frac{_𝐑O(R_M)M(𝐑)}{_𝐑M(𝐑)}.$$ (31) For operators not commuting with the hamiltonian the mixed average is only an approximation to the groundstate average. Later on we will improve on it. In this raw form the GFMC would hardly work because all paths are generated with equal weight. 
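The projection (21)–(22) underlying this sampling can be checked directly on a system small enough for exact diagonalization. The sketch below is ours: it uses a 6-site open $`J_1`$–$`J_2`$ chain (not one of the lattices studied in this paper) and confirms that repeated application of $`1-ϵℋ`$ to a random state converges to the groundstate for small $`ϵ`$.

```python
import numpy as np

# spin-1/2 operators
sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2
I2 = np.eye(2)

def op(single, site, N):
    """Embed a single-site operator at `site` in an N-site product space."""
    out = np.array([[1.0 + 0j]])
    for l in range(N):
        out = np.kron(out, single if l == site else I2)
    return out

def heisenberg_J1J2(N, J1, J2):
    """H = J1 sum over nearest + J2 sum over next nearest neighbor
    S_i.S_j pairs on an open 1-dimensional chain (cf. eqs. (1)-(2))."""
    H = np.zeros((2**N, 2**N), dtype=complex)
    for d, J in ((1, J1), (2, J2)):
        for i in range(N - d):
            for s in (sx, sy, sz):
                H += J * op(s, i, N) @ op(s, i + d, N)
    return H

N, J1, J2 = 6, 1.0, 0.5
H = heisenberg_J1J2(N, J1, J2)
E_exact = np.linalg.eigvalsh(H)[0]

# power iteration with G = 1 - eps*H projects onto the groundstate
eps = 0.05
v = np.random.default_rng(2).normal(size=2**N) + 0j
for _ in range(4000):
    v = v - eps * (H @ v)
    v /= np.linalg.norm(v)
E_proj = np.real(v.conj() @ H @ v)
```

In the simulation the matrix $`𝒢^n`$ is of course never formed explicitly; the product over path segments in (23) plays the role of the repeated application above.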
One can do better by importance sampling, in which one transforms the problem to a Green's function with matrix elements $$R|\stackrel{~}{𝒢}|R^{}=\frac{\mathrm{\Phi }|RR|𝒢|R^{}}{\mathrm{\Phi }|R^{}}.$$ (32) Generally this can be seen as a similarity transformation on the operators, and from now on operators with a tilde are everywhere related to their counterparts without a tilde as in (32). It gives only a minor change in the formulation. The transition rates are based on the matrix elements of $`\stackrel{~}{𝒢}`$ and so are the signs and weights. Thus we have a set of definitions like (24)–(26) with everywhere a tilde on top. It also leads to a change of the initial and final weights $$\stackrel{~}{m}_i(R)=|\mathrm{\Phi }|R|^2,\stackrel{~}{m}_f(R)=1.$$ (33) By choosing these weights the formula (31) for the average still applies, with a weight $`\stackrel{~}{M}(𝐑)`$ made up as in (28) from the weights and signs with a tilde. Using the tilde operators the local estimator (30) reads $$O(R)=\underset{R^{}}{\sum }R^{}|\stackrel{~}{𝒪}|R.$$ (34) We will speak about the various paths in terms of independent walkers that sample these paths. As some walkers become more important than others in the process, it is wise to improve the variance by branching, which we will discuss later together with the sign problem. Before we embark on the discussion of the sign problem we want to summarize a number of aspects of the GFMC simulation relevant to our work. * The steps in the Markov process are small: only moves induced by a single term of the hamiltonian feature in a transition to a new state. This makes subsequent states quite correlated, so many steps have to be performed before a statistically independent configuration is reached; on average a number of the order of the number of sites. * In every configuration the wavefunction for a number of neighboring states (the ones reachable by the Green's function) has to be evaluated.
This is a time-consuming operation and it makes the simulation quite slow, because out of the possibilities (of order $`N`$) only one is chosen and all the information gathered on the others is virtually useless. * The necessity to choose a small $`ϵ`$ in the Green's function seems a further slowdown of the method, but it can be avoided by the technique of continuous time steps developed by Ceperley and Trivedi . In this method the possibility of staying in the same configuration (the diagonal element of the Green's function) is eliminated and replaced by a waiting time before a move to another state is made. (For further details in relation to the present paper we refer to .) * The average (31) can be improved by replacing it by $$𝒪_{im}=2\frac{\mathrm{\Phi }|𝒪|\mathrm{\Psi }_0}{\mathrm{\Phi }|\mathrm{\Psi }_0}-\frac{\mathrm{\Phi }|𝒪|\mathrm{\Phi }}{\mathrm{\Phi }|\mathrm{\Phi }}$$ (35) of which the error with respect to the true average is of second order in the deviation of $`|\mathrm{\Phi }`$ from $`|\mathrm{\Psi }_0`$. For conserved operators, such as the energy, this correction is not needed since the mixed average already gives the correct value. ## VI The sign problem and its remedies In the hamiltonian (1) the $`z`$ component of the spin operator keeps the spin configuration invariant, whereas the $`x`$ and $`y`$ components change the configuration. The typical change is that a pair of nearest or next nearest neighbors is spin reversed. Inspecting the Green's function, one sees that all changes to another configuration involve a minus sign! Thus the Green's function is as far as possible from the ideal of positive matrix elements. The diagonal terms are positive, but those are always positive for sufficiently small $`ϵ`$. Importance sampling can remove minus signs in the transition rates when the ratio of the guiding wavefunction values also involves a minus sign. For $`J_2≠0`$ no guiding wavefunction can remove the minus sign problem completely. In Fig.
3 we show a loop of two nearest neighbor spin flips followed by a flip of a next nearest neighbor pair, such that the starting configuration is restored. The product of the ratios of the guiding wavefunction drops out in this loop, but the product of the three matrix elements has a minus sign. So at least one of the transitions must involve a minus sign. For unfrustrated systems these loops do not exist and one can remove the minus sign by a transformation of the spin operators $$S_i^x→-S_i^x,S_i^y→-S_i^y,S_i^z→S_i^z$$ (36) which leaves the commutation relations invariant. Applying this transformation to every other spin (the white fields of a checkerboard), all flips involving a pair of nearest neighbors then give a positive matrix element for the Green's function. So when $`J_2=0`$ the apparent sign problem is transformed away. For sufficiently small $`J_2`$, Marshall has shown that the wavefunction of the system has only positive components (after the “Marshall” sign flip (36)). So the minus sign problem is not due to the wavefunction but to the frustration. (For the Hubbard model it is the guiding wavefunction which must have minus signs due to the Pauli principle, while the bare transition probabilities can be taken positive.) Due to the minus sign the weight of a long path picks up an arbitrary sign. Generally the weights also grow along a path. Thus if various paths are traced out by a number of independent walkers, the average over the paths or the walkers becomes a sum of large terms of both signs; differently phrased: the average becomes small with respect to the variance and the signal gets lost in the noise. Ceperley and Alder constructed a method, Fixed Node Monte Carlo (FNMC), which avoids the minus sign problem at the expense of introducing an approximation. Their method is designed for continuum systems and for handling fermion wavefunctions.
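The loop argument can be made concrete with a little sign bookkeeping. The sketch below (our own illustration, not code from the paper) tracks the sign of an off-diagonal Green's function element under the Marshall transformation (36), counting one factor $`-1`$ per rotated end of the flipped bond:

```python
# Off-diagonal (pair-flip) Green's function elements of the antiferromagnet
# start out negative; the Marshall rotation (36), applied to sublattice 'A',
# contributes one factor -1 per rotated end of the flipped bond.
def offdiag_sign(sub_i, sub_j):
    rotated_ends = (sub_i == 'A') + (sub_j == 'A')
    return -1 * (-1) ** rotated_ends

# Nearest-neighbor bonds connect A to B: one rotated end, the sign is cured.
assert offdiag_sign('A', 'B') == 1
# Next-nearest-neighbor (J2) bonds connect like sublattices: sign stays negative.
assert offdiag_sign('A', 'A') == -1 and offdiag_sign('B', 'B') == -1
# The loop of Fig. 3 (two NN flips plus one NNN flip) keeps a net minus sign,
# which no assignment of sublattice rotations can remove.
assert offdiag_sign('A', 'B') * offdiag_sign('A', 'B') * offdiag_sign('A', 'A') == -1
```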
They argued that the part of configuration space in which the wavefunction has a given sign, say positive, is sufficient for exploring the properties of the groundstate, since the other half of the configuration space contains identical information. Thus they designed a method in which the walkers remain in one domain of a given sign, essentially by forbidding them to cross the nodes of the wavefunction. The approximation is that one has to take the nodal structure of the guiding wavefunction for granted and cannot improve on it, at least not without sophistication (nodal release). The method is variational in the sense that errors in the nodal structure always raise the groundstate energy. It seems trivial to carry this idea over to the lattice, but it is not. The reason is that in continuum systems one can make smaller steps when a walker approaches a node without introducing errors. In a lattice system the configuration space is discrete, so the location of the node is not strictly defined. The important point is that, loosely speaking, the nodes lie between configurations, and one cannot make smaller moves than displacing a particle over a lattice distance or flipping a pair of spins. Van Bemmel et al. adapted the FNMC concept to lattice systems, preserving its variational character. This extension to the lattice suffers from the same shortcoming as the method of Ceperley and Alder: the “nodal” structure of the guiding wavefunction is given and cannot be improved by the Monte Carlo process. Recently Sorella proposed a modification which overcomes this drawback. It is based on two ingredients.
Sorella noticed that the following effective hamiltonian also yields an upper bound to the energy: $$\begin{array}{ccc}\hfill R|\stackrel{~}{ℋ}_{\mathrm{eff}}|R^{}& =& \{\begin{array}{ccc}\hfill R|\stackrel{~}{ℋ}|R^{}& \mathrm{if}& R|\stackrel{~}{ℋ}|R^{}<0\hfill \\ \hfill -\gamma R|\stackrel{~}{ℋ}|R^{}& \mathrm{if}& R|\stackrel{~}{ℋ}|R^{}>0\phantom{\rule{0.5em}{0ex}}(\gamma ≥0)\hfill \end{array}\hfill \\ \hfill R|\stackrel{~}{ℋ}_{\mathrm{eff}}|R& =& R|\stackrel{~}{ℋ}|R+(1+\gamma )V_{\mathrm{sf}}(R)\hfill \end{array}$$ (37) Here the “sign flip” potential is the same as that of ten Haaf et al. and given by $$V_{\mathrm{sf}}(R)=\underset{R_{\mathrm{na}}^{}}{\sum }R_{\mathrm{na}}^{}|\stackrel{~}{ℋ}|R$$ (38) where the subscript “na” (not-allowed) on $`R^{}`$ restricts the summation to the moves for which the matrix element of the hamiltonian is positive, cf. (37). If the guiding wavefunction were to coincide with the true wavefunction, the simulation of the effective hamiltonian, which is sign-free by construction, would yield exact averages. So one may expect that good guiding wavefunctions lead to good upper bounds for the energy. This upper bound increases with $`\gamma `$, indicating that $`\gamma =0`$ seems the best choice, which is the effective hamiltonian of ten Haaf et al. That hamiltonian, however, is a truncated version of the true hamiltonian in which all the dangerous moves are eliminated. The sign flip potential must correct for this truncation by suppressing the probability that the walker stays in a configuration with a large potential. The second ingredient uses the fact that the hamiltonian (37) explores a larger phase space and therefore contains more information than the truncated one. Parallel to the simulation of the effective hamiltonian one can calculate the weights for the true hamiltonian. As we saw in the summary of the GFMC method, transition rates forcefully made positive still yield the correct weights when supplemented with sign functions.
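As a concrete sketch of (37)–(38) (the matrix values are toys of our own choosing, not model matrix elements), the effective hamiltonian can be built from any symmetric matrix by damping the sign-violating elements with $`-\gamma `$ and adding the sign-flip potential to the diagonal:

```python
import numpy as np

# Build the effective hamiltonian (37): positive off-diagonal elements (the
# "not-allowed" moves) are replaced by -gamma times their value, and the
# diagonal acquires (1 + gamma) * V_sf(R), with V_sf from (38).
def effective_hamiltonian(H, gamma):
    n = H.shape[0]
    Heff = H.astype(float).copy()
    off = ~np.eye(n, dtype=bool)
    bad = off & (H > 0)                       # sign-violating moves
    Heff[bad] = -gamma * H[bad]
    V_sf = np.where(bad, H, 0.0).sum(axis=0)  # sign-flip potential (38)
    Heff[np.diag_indices(n)] += (1.0 + gamma) * V_sf
    return Heff

H = np.array([[ 0.0, -1.0,  0.5],
              [-1.0,  0.2, -0.3],
              [ 0.5, -0.3, -0.1]])            # illustrative symmetric matrix
Heff = effective_hamiltonian(H, gamma=0.5)
# gamma = 0 reproduces the truncated fixed-node hamiltonian of ten Haaf et al.:
H0 = effective_hamiltonian(H, gamma=0.0)
assert np.all(H0[~np.eye(3, dtype=bool)] <= 0)
# The groundstate energy of the effective hamiltonian is an upper bound:
assert np.linalg.eigvalsh(Heff).min() >= np.linalg.eigvalsh(H).min() - 1e-10
```

The upper-bound property follows because, for a symmetric matrix, the difference between the effective and the true hamiltonian is positive semidefinite.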
For the true weights of the transition probabilities as given in (37), the “sign function” must be chosen as $$s(R,R^{})=\{\begin{array}{ccc}1\hfill & \mathrm{if}& R|\stackrel{~}{ℋ}|R^{}<0\hfill \\ -1/\gamma \hfill & \mathrm{if}& R|\stackrel{~}{ℋ}|R^{}>0\hfill \\ \frac{1-ϵR|\stackrel{~}{ℋ}|R}{1-ϵR|\stackrel{~}{ℋ}_{\mathrm{eff}}|R}\hfill & \mathrm{if}& R=R^{}\hfill \end{array}$$ (39) With these “signs” in the weights a proper average can be calculated, but these averages suffer from the sign problem, the more so the smaller $`\gamma `$ is, as one sees from (39). So some intermediate value of $`\gamma `$ has to be chosen. Fortunately the results do not depend too sensitively on $`\gamma `$; the value $`\gamma =0.5`$ is a good compromise and has been taken in our simulations. In any simulation some walkers obtain a large weight and others a small one. To lower the variance, branching is regularly applied, which means a multiplication of the heavily weighted walkers and a removal of those with small weight. It is not difficult to do this in an unbiased way. Sorella proposed to use the branching much more effectively in conjunction with the signs defined in (39). The average sign is an indicator of the usefulness of the set of walkers. Start with a set of walkers with positive sign. When the average sign becomes unacceptably low, the process is stopped and a reconfiguration takes place. The walkers are replaced by another set with positive weights only, such that a number of measurable quantities gives the same average. The more observables are included, the more faithful the replacement. The construction of the equivalent set requires the solution of a set of linear equations. With the new set of walkers one continues the simulation on the basis of the effective hamiltonian and one keeps track of the true weights with signs. The reconfiguration on the basis of some observables gives at the same time a measurement of these observables.
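Sorella's full reconfiguration solves a linear system so that a chosen set of observables keeps the same averages. The fragment below only illustrates the simpler standard ingredient mentioned above, unbiased branching, which replicates each walker int(w + u) times so that the expected number of copies equals its weight (a minimal sketch of our own, not the paper's implementation):

```python
import random

# Unbiased branching: a walker with weight w is replicated int(w + u) times,
# u uniform in [0, 1), so the expected number of copies equals w while all
# surviving walkers carry unit weight.
def branch(walkers, weights, rng=random.random):
    new = []
    for x, w in zip(walkers, weights):
        new.extend([x] * int(w + rng()))
    return new

random.seed(7)
walkers, weights = ['a', 'b', 'c'], [2.5, 0.2, 1.0]
trials = 20000
counts = {'a': 0, 'b': 0, 'c': 0}
for _ in range(trials):
    for x in branch(walkers, weights):
        counts[x] += 1
assert abs(counts['a'] / trials - 2.5) < 0.1   # 'a' duplicated ~2.5x on average
assert abs(counts['b'] / trials - 0.2) < 0.1   # 'b' is mostly removed
assert counts['c'] == trials                   # unit weight: exactly one copy
```

Branching keeps the expected total weight fixed, which is what makes the procedure unbiased.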
Thus measurement and reconfiguration go together. As the number of observables that can be included is limited, some biases are necessarily introduced. Sorella showed that the error in the energy of the guiding wavefunction is easily reduced by a factor of 10, whereas reduction by the FNMC of ten Haaf et al. gives only a factor of 2. ## VII Results for the DMRG In this section we give a brief summary of the results of a pure DMRG calculation. Extensive details can be found in . The systems are strips of widths up to $`W=8`$ and of various lengths $`L`$. They are periodic in the small direction and open in the long direction. The periodicity enables us to study the spin stiffness. We have chosen open boundaries in the long direction to avoid the errors in the DMRG wavefunction due to periodic boundaries. Since we have good control of the scaling behavior in $`L`$ we extrapolate to $`L→\mathrm{\infty }`$. In the small direction we are restricted to $`W=2,4,6`$ and 8, as odd values are not compatible with the antiferromagnetic character of the system. For wider systems the number of states which has to be taken into account exceeds the capabilities of the present workstations. Our criterion is that the value of the energy no longer drifts appreciably upon the inclusion of more states. This does not mean that the wavefunction is virtually exact, since the energy is a rather insensitive probe of the wavefunction. For instance, correlation functions still improve upon the inclusion of more states. In Fig. 5 we present the energy as a function of the ratio $`J_2/J_1`$, for strip widths 4, 6 and 8, together with the best extrapolation to infinite-width systems. The figure strongly suggests that the infinite system undergoes a first order phase transition around a value of 0.6. This can be attributed to the transition to a columnar order (lines of opposite magnetisation).
It is impossible to deduce more information from such an energy curve, as other phase transitions are likely to be continuous, with small differences in energy between the phases. The spin stiffness can be calculated with the DMRG wavefunction for systems which are periodic in at least one direction . The result of the computation is plotted in Fig. 5. One observes a substantial decrease of $`\rho _s`$ in the frustrated region, indicating the appearance of a magnetically disordered phase. In contrast to the energy, the data do not allow a meaningful extrapolation to large widths. The lack of clear finite size scaling behavior in the regime of small values of $`W`$ prevents us from drawing firm conclusions on the disappearance of the stiffness in the middle regime. For the correlation functions following from the DMRG wavefunction we refer to . ## VIII Results for GFMC with SR We now come to the crux of this study: the simulations of the system with GFMC, using the DMRG wavefunctions to guide the importance sampling. All the simulations have been carried out for a $`10\times 10`$ lattice with open boundaries. As a standard we use 6000 walkers and run the simulations for about $`10^4`$ measurements. These measuring points are not fully independent and the variance is determined by chopping up the simulations into 50–100 groups, often carried out in parallel on different computers. We first give an overall assessment of the correlation function pattern and then analyze some values of the ratio $`J_2/J_1`$. In the first series we have used the guiding wavefunction based on the meandering path of Fig. 3(b), because it gives a better energy than the straight option (a). The number of basis states is $`m=75`$, which is small enough to carry out the simulations with reasonable speed and large enough that trends begin to manifest themselves. Measurements of a number of correlation functions are made in conjunction with Stochastic Reconfiguration as described in Section VI.
The details of these calculations are given in Table I. Note that the DMRG guiding wavefunction gives a better energy for the meandering path than for the straight path for values of $`J_2/J_1`$ up to 0.6. From 0.7 on, this difference is virtually absent. This undoubtedly has to do with the change to the columnar state, which can equally well be realized by both paths. The value of $`ϵ`$ has been chosen as a compromise: independent measurements require a large $`ϵ`$, but the minus sign problem requires applying Stochastic Reconfiguration often, i.e. a small $`ϵ`$. One sees that in the heavily frustrated region $`ϵ`$ must be taken small. In fact the more detailed calculations for $`J_2=0.3J_1`$ and $`J_2=0.5J_1`$ were carried out with $`ϵ=0.01`$. In Figs. 6 and 7 we have plotted a sequence of visualizations of the correlations. From top to bottom (zig–zag) they give the correlations for the successive values of $`J_2/J_1`$. In order to highlight the differences, a distinction is made between correlations which are above average (solid lines) and below average (broken lines). All nearest neighbor spin correlations shown are negative. In all the pictures one sees the influence of the boundaries on the spin correlations. Only 1/4 of the lattice has been pictured; the other segments follow by symmetry. The upper right corner, which corresponds to the center of the lattice, is the most significant for the behavior of the bulk. The overall trend is that spatial variations in the correlation functions occur on a scale growing with $`J_2/J_1`$. On the side of low $`J_2/J_1`$ (Néel phase) one sees dimer patterns in the horizontal direction; they turn over to vertical dimers (around $`J_2=0.7J_1`$) and rapidly disappear in the columnar phase. This again supports the conclusion that the columnar phase is separated from the intermediate state by a first order phase transition.
Open boundary conditions have the disadvantage of boundary effects, which make it more difficult to distinguish between spontaneous and induced breaking of the translational symmetry. On the other hand, with open boundaries, dimers, plaquettes or any other interruption of the translational symmetry have a natural reference frame. The correlations are not only influenced by the boundaries of the system; the guiding DMRG wavefunction also leaves its imprint on the results. This is mainly due to the fact that we have only mixed estimators for the correlation functions, which show a mix of the guiding wavefunction and the true wavefunction. The improved estimator, used in these pictures, corrects for this effect to linear order in the deviation. (Forward walking would allow a pure estimate of the correlations, but requires many more calculations.) The ladder-like structure in the DMRG path is reflected in a ladder-like pattern in the correlations, as an inspection of the correlations in the DMRG wavefunctions (not shown here) reveals. But ladders are clearly also present in the GFMC results shown in the pictures. In order to eliminate the influence of the guiding wavefunction we scrutinize some values of $`J_2/J_1`$ in more detail, by inspecting how the results depend on the size of the basis in the DMRG wavefunction and on the choice of the DMRG path. Since we are mostly interested in the behavior of the infinite lattice, we discuss mainly the behavior of the correlations in and around the central plaquette. So we study a sequence of DMRG wavefunctions for $`m=32`$, 75, 100, 128 and 150(200) and carry out for each of them extensive GFMC simulations. First we look at the case $`J_2=0`$, which is easy because we know that it must be Néel ordered; it therefore serves as a check on the calculations. Then we take $`J_2=0.3J_1`$, which is the most difficult case since it is likely to be close to a phase transition.
Finally we inspect $`J_2=0.5J_1`$, where we are fairly sure that some dimerlike phase is realized. ### A $`J_2=0`$ For the unfrustrated Heisenberg model we have several checkpoints for our calculations. We can find the groundstate energy to a high degree of accuracy and we are sure that the Néel phase is homogeneous, i.e. that the correlations show no spatial variation other than that of the antiferromagnet. We have two ways of estimating the energy of a $`10\times 10`$ lattice. The first method is based on finite size interpolation. From DMRG calculations we have an exact value for a $`4\times 4`$ lattice, an accurate value for the $`6\times 6`$ lattice and a good value for the $`8\times 8`$ lattice. There is also the very accurate calculation of Sandvik for an infinitely large lattice, yielding the value $`e_0=-0.669437(5)`$. The leading finite size correction goes as $`1/L`$. Including also a $`1/L^2`$ term we have estimated the value for a $`10\times 10`$ lattice as $`-0.629(1)`$ and incorporated this value in Table II(a). We stress that this is an interpolation for which the value of Sandvik is the most important input. The second method is less well founded and uses the experience that DMRG energy estimates can be improved considerably by extrapolating to zero truncation error. When plotted as a function of this truncation error the energy is often remarkably linear. In Table II(b) we give, for a series of bases $`m=32,75,100,128`$ and 150, the values of the truncation error and the corresponding DMRG energy per site, together with the extrapolation on the basis of linear behavior. (The value for $`m=150`$ is not in line with the others. This can be explained by the fact that the construction of this DMRG wavefunction was slightly different from that of the others, in which the basis was built up gradually.) Note that the two estimates are compatible. In Table II(b) we have also listed the values of the GFMC simulations for the corresponding values of $`m`$.
They agree quite well with these estimates, in particular with the one based on finite size scaling. We point out that one would have to go very far in the number of states in the DMRG calculation to obtain an accuracy that is easily obtained with GFMC. Thus the combination of GFMC and DMRG really does better than the individual components. One might wonder why there is still a drift to lower energy values in the GFMC simulations (which is also present in the tables to come). The reason is that the DMRG wavefunction is strictly zero outside a certain domain of configurations, because the truncation of the basis also involves the elimination of certain combinations of conserved quantities of the constituent parts. The domain of the wavefunction grows with the size of the basis. Turning now to the correlations, it seems that they are homogeneous in the center of the lattice for $`J_2=0`$. However, a closer inspection reveals small differences. In Table III we list the asymmetries in the horizontal and vertical directions of the spin correlations in and around the central plaquette as a function of the number of states. If we number the spins on the lattice as $`𝐒_{n,m}`$ with $`1≤n,m≤10`$, the central plaquette has the coordinates (5,5), (5,6), (6,5) and (6,6). We then define the asymmetry parameters $`\mathrm{\Delta }_x`$ and $`\mathrm{\Delta }_y`$ as $$\{\begin{array}{c}\mathrm{\Delta }_x=\frac{1}{4}⟨𝐒_{4,5}\cdot 𝐒_{5,5}+𝐒_{4,6}\cdot 𝐒_{5,6}+𝐒_{6,5}\cdot 𝐒_{7,5}+𝐒_{6,6}\cdot 𝐒_{7,6}⟩-\frac{1}{2}⟨𝐒_{5,5}\cdot 𝐒_{6,5}+𝐒_{5,6}\cdot 𝐒_{6,6}⟩\hfill \\ \mathrm{\Delta }_y=\frac{1}{4}⟨𝐒_{5,4}\cdot 𝐒_{5,5}+𝐒_{6,4}\cdot 𝐒_{6,5}+𝐒_{5,6}\cdot 𝐒_{5,7}+𝐒_{6,6}\cdot 𝐒_{6,7}⟩-\frac{1}{2}⟨𝐒_{5,5}\cdot 𝐒_{5,6}+𝐒_{6,5}\cdot 𝐒_{6,6}⟩\hfill \end{array}$$ (40) So $`\mathrm{\Delta }_x`$ is the average value of the correlations on the 4 horizontal bonds which are connected to the central plaquette minus the average of the values on the 2 horizontal bonds in the plaquette. Similarly $`\mathrm{\Delta }_y`$ corresponds to the vertical direction.
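Given any routine c(i, j, k, l) returning the correlation $`𝐒_{i,j}\cdot 𝐒_{k,l}`$, the definitions (40) translate directly into code (a sketch; the accessor and the test patterns below are our own illustrations, not data from the tables):

```python
# Asymmetry parameters (40) around the central plaquette of the 10x10 lattice;
# c(i, j, k, l) is assumed to return the correlation <S_{i,j} . S_{k,l}>.
def delta_x(c):
    outer = c(4, 5, 5, 5) + c(4, 6, 5, 6) + c(6, 5, 7, 5) + c(6, 6, 7, 6)
    inner = c(5, 5, 6, 5) + c(5, 6, 6, 6)
    return outer / 4.0 - inner / 2.0

def delta_y(c):
    outer = c(5, 4, 5, 5) + c(6, 4, 6, 5) + c(5, 6, 5, 7) + c(6, 6, 6, 7)
    inner = c(5, 5, 5, 6) + c(6, 5, 6, 6)
    return outer / 4.0 - inner / 2.0

# A homogeneous pattern gives no asymmetry in either direction.
homogeneous = lambda i, j, k, l: -0.33
assert abs(delta_x(homogeneous)) < 1e-12 and abs(delta_y(homogeneous)) < 1e-12

# A columnar horizontal-dimer pattern (strong bonds starting on odd columns,
# illustrative values -0.42 / -0.15) shows up in delta_x only.
dimer = lambda i, j, k, l: -0.42 if (j == l and min(i, k) % 2 == 1) else -0.15
assert abs(delta_x(dimer) - 0.27) < 1e-9
assert abs(delta_y(dimer)) < 1e-12
```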
The values for the asymmetry in the vertical direction in Table III are so small that they have no significance. Note that the anticipated decrease in $`\mathrm{\Delta }_x`$ is slow in DMRG and therefore also slow in the mixed estimator of the GFMC. The improved estimator (35), however, is truly an improvement! So one sees that all the observed small deviations from the homogeneous state will disappear with the increase of the number of states in the basis of the DMRG wavefunction. (In general the accuracy of the correlations is determined by that of the GFMC simulations. We get as variance a number of the order 0.01, implying twice that value for the improved estimator.) The vanishing of $`\mathrm{\Delta }_x`$ and $`\mathrm{\Delta }_y`$ also proves that finite size effects are small in the center of the $`10\times 10`$ lattice. From these data we may conclude that the GFMC can make up for the errors in the DMRG wavefunction for a relatively low number of basis states. We have not carried out a similar series for the straight path, since this will certainly show no dimers, as will become clear from the following cases. ### B $`J_2=0.3J_1`$ This case is the most difficult to analyze, since it is expected to be close to a continuous phase transition from the Néel state to a dimerlike state. As is known, the DMRG structure of the wavefunction is not very adequate for coping with the long-range spin correlations typical of a critical point. In Table IV we have presented the same data as in Table III, but now for $`J_2=0.3J_1`$. There is no pattern in the energy as a function of the truncation error $`\delta `$. The decrease of the energy as a function of the size of the basis $`m`$ in the DMRG wavefunctions is not saturated. The GFMC simulations lead to a notably lower energy and hardly show a leveling off as a function of the basis of the guiding wavefunction.
All these points are indicators that the DMRG wavefunction is rather far from convergence and that more accurate data would require a much larger basis. As far as the staggering in the correlations is concerned, the values for $`\mathrm{\Delta }_x`$ are significant, also because the simulation results generally increase the values. Those for $`\mathrm{\Delta }_y`$ are not small enough to be considered as noise. Given the fact that most authors locate the phase transition at higher values, $`J_2≈0.4J_1`$, we would expect both $`\mathrm{\Delta }`$’s to vanish. So either the dimerlike state is realized for values as low as $`J_2=0.3J_1`$, or dimer formation already starts in the Néel state. To get more insight into the nature of the groundstate we have also carried out the same set of simulations on the straight path (a) in Fig. 3. This guiding wavefunction shows virtually no formation of dimers in any direction, as can be observed from Table V. In spite of the fact that the trends indicated in the table have not come to convergence, one may draw a few conclusions from the comparison of the two sets of simulations. The overall impression is that the meandering guiding wavefunction represents a groundstate of a different symmetry as compared to the straight path guiding wavefunction. The meandering wavefunction prefers dimers in the horizontal direction and the straight wavefunction leads to some dimerization in the vertical direction. The difference also shows up in the energy; it is not only large on the DMRG level but also persists at the GFMC level. We see similar trends in the next case. ### C $`J_2=0.5J_1`$ By any estimate this value of the next nearest neighbor coupling leads to a dimerlike state, if it exists at all. No accurate data are available on the energy of the $`10\times 10`$ system to compare to our results. In Table VI we list the data for a set of DMRG wavefunctions with bases $`m=32`$, 75, 100, 128, 150 and 200.
The DMRG values of the energy (with the exception of the value for $`m=32`$) can be extrapolated to zero truncation error with the limiting value $`E_0=-48.4(1)`$, which corresponds very well with the level of the GFMC values for larger sizes of the basis. This indicates again that the GFMC simulations can make up for the shortcomings of the DMRG wavefunction. One would indeed have to enlarge the basis to $`m`$ of the order of 1000 in order to achieve the value of the energy of the simulations which use DMRG guiding wavefunctions with a basis of the order of 100. The staggering in the correlations, expressed by the quantities $`\mathrm{\Delta }_x`$ for the horizontal direction and $`\mathrm{\Delta }_y`$ for the vertical direction, has values that are significant. If one looks at the contributions of the DMRG wavefunction and the GFMC simulation separately, one observes that the overall values agree quite well, with the tendency that the GFMC simulation lowers the staggering in the horizontal direction and slightly increases it in the vertical direction. So we may conclude that in the groundstate of the $`J_2=0.5J_1`$ system the spin correlations are indeed not translation invariant but show a staggering. However, these results neither confirm the picture that the dimer state is the lowest (as suggested by Kotov et al. ) nor that the plaquette state is the groundstate (as concluded by Capriotti and Sorella ). We comment on these discrepancies further in the discussion. Again it is worthwhile to compare these results with a simulation on the basis of the straight path (a) in Fig. 3. Here it is manifest that the straight path prefers to have the dimers in the vertical direction. Again the impression is that the straight path leads to a different symmetry as compared to the meandering path.
It is not only the different preference for the main direction of the dimers; the secondary dimerization in the perpendicular direction, notably present in the meandering case, is also absent in the straight case. The fairly large difference in energy on the DMRG level becomes quite small on the GFMC level. ## IX Discussion We have presented a method to employ the DMRG wavefunctions as guiding wavefunctions for a GFMC simulation of the groundstate. Generally the combination is much better than the two individual methods: the GFMC simulations considerably improve the DMRG wavefunction. In the intermediate regime the properties of the GFMC simulations depend on the guiding wavefunction, as the results for two different DMRG guiding wavefunctions show. The method has been used to observe spin correlations in the frustrated Heisenberg model on a square lattice. In this discussion we focus on the intermediate region, where the model is most frustrated and which is the “pièce de résistance” of the present research. We see patterns of strongly correlated nearest neighbor spins, to be called dimers. To indicate what we mean by strong and weak we give the values in and around the central square of the $`10\times 10`$ lattice, for the case $`J_2=0.5J_1`$. In Fig. 7(c) we have given the values of the central square extrapolated to an infinite lattice. The values are based on the improved estimator and it is interesting to see the trends. The horizontal strong correlation of $`-0.42`$ is the result of the DMRG value $`-0.44`$ and the GFMC value $`-0.43`$, while the weak bond $`-0.15`$ is the result of the DMRG value $`-0.09`$ and the GFMC value $`-0.12`$. Thus the GFMC weakens the order parameter $`D_x`$ associated with the staggering. For the vertical direction there is hardly a change from DMRG to GFMC; one has to go to the next decimal to see the difference.
The strong bond equals $`-0.368`$, coming from the DMRG value $`-0.375`$ and the GFMC value $`-0.371`$, while the improved weak bond of $`-0.271`$ is the resulting value of $`-0.275`$ for DMRG and $`-0.273`$ for GFMC. Before we comment on this result we discuss the influence of the choice of the guiding wavefunction. We note that for both points $`J_2=0.3J_1`$ and $`J_2=0.5J_1`$ the two choices for the DMRG wavefunction give different results. First of all, the main staggering is in the horizontal direction for the meandering path (b) of Fig. 3, while the straight path (a) of Fig. 3 prefers the dimers in the vertical direction. There is not much difference in the values of the strong and weak correlations. Secondly, the straight path shows no appreciable staggering in the other direction, so one may wonder whether the observed effect for the meandering path is real. In our opinion this difference has to do with the effect that the DMRG wavefunction “locks in” on a certain symmetry. The straight path yields a groundstate which is truly dimerlike in the sense that it is translationally invariant in the direction perpendicular to the dimers. The meandering path locks in on a different groundstate, which holds the middle between a dimerlike and a plaquettelike state. The GFMC simulations cannot overcome this difference in symmetry, likely because the two lowest states with different symmetry are virtually orthogonal. On the DMRG level there is a large difference in energy between the two states, strongly favoring the meandering path; on the GFMC level this difference has become very small. With this observation in mind we compare our result with other findings. The results of the series expansions are shown in Fig. 7(a). There the correlations organize themselves in spin ladders. The correlations on the rungs of the ladder are $`-0.45\pm 0.5`$, which compares well with our strongest horizontal correlation, and this holds also for the weak horizontal correlation ($`-0.12`$ vs $`-0.15`$).
The most noticeable difference is the value of our weak correlation in the vertical direction (–0.27 vs –0.36), while the strong correlation (–0.37 vs –0.36) agrees. There is no real conflict between our result and theirs, since the symmetry they find is fixed by the state around which the series expansion is made. So our claim is only that our state with different symmetry is the lower one. In fact, in the paper of Singh et al. , it is noted that the susceptibility to a staggering operator in the perpendicular direction (our $`\mathrm{\Delta }_y`$) becomes very large in the dimer state for $`J_2=0.5J_1`$, which we take as an indication of the nearby lower state. The analytical calculations in and , however, do not support the existence of the state we find. Neither do we find support for the plaquette state found in , which we have sketched in Fig. 7(b). The evidence of that investigation is based on the boundedness of the susceptibility for the operator which breaks the orientational symmetry and the divergence of the susceptibility for the order parameter breaking translational invariance (corresponding to $`\mathrm{\Delta }_x`$). They have not separately investigated the values of $`\mathrm{\Delta }_x`$ and $`\mathrm{\Delta }_y`$, since their groundstate has the symmetry of the lattice and one would automatically find the same answer. They conclude that in the absence of an orientational order parameter and in the presence of the translational order parameter the state must be plaquettelike. We believe that their result is influenced by the guiding wavefunction, for which the one-step Lanczos approximation is taken. This wavefunction certainly has the symmetry of the square, and again GFMC cannot find a groundstate with a different symmetry. Finally we comment on the fact that we find the dimerization already for values as low as $`J_2=0.3J_1`$, at least for the meandering path.
As we have mentioned earlier, the results as a function of the number of states have not converged sufficiently to allow a firm conclusion, the more so since there is a large difference between DMRG and GFMC. Still it could be an indication that the phase transition from the Néel state to the dimer state takes place at lower values than the estimated $`J_2=0.38J_1`$ . Thus many questions remain, among them how the order parameters behave as a function of the frustration ratio in the intermediate region. We feel that the combination of DMRG and GFMC is a good tool to investigate these issues, since they demonstrate ad oculos the correlations in the intermediate state.

Acknowledgement

The authors are indebted to Steve White for making his software available. One of us (M. S. L. duC. de J.) gratefully acknowledges the hospitality of Steve for a three-month stay at Irvine, where the basis of this work was laid. The authors have also benefitted from illuminating discussions with Subir Sachdev and Jan Zaanen. The authors want to acknowledge the efficient help of Michael Patra with the simulations on the cluster of PC’s of the Instituut-Lorentz.
# Self-Duality in Superconductor-Insulator Quantum Phase Transitions

## ACKNOWLEDGMENTS

I’m grateful to M. Krusius for the kind hospitality at the Low Temperature Laboratory in Helsinki. I wish to acknowledge useful conversations with J. Hove, N. Kopnin, M. Krusius, K. Nguyen, M. Paalanen, A. Sudbø, and especially G. Volovik. This work was funded in part by the EU-sponsored programme Transfer and Mobility of Researchers under contract No. ERBFMGECT980122.
# Memory Effects in Granular Material

## Abstract

We present a combined experimental and theoretical study of memory effects in vibration-induced compaction of granular materials. In particular, the response of the system to an abrupt change in shaking intensity is measured. At short times after the perturbation a granular analog of aging in glasses is observed. Using a simple two-state model, we are able to explain this short-time response. We also discuss the possibility for the system to obey an approximate pseudo-fluctuation-dissipation theorem relationship and relate our work to earlier experimental and theoretical studies of the problem. PACS numbers: 45.70.-n, 61.43.Fs, 81.05.Rm

Granular materials comprise an important class of complex systems whose simple fundamental mechanics gives rise to rich macroscopic phenomenology . Recent experiments on granular compaction suggest that these materials are an ideal system for studying jamming, a phenomenon lying outside the domain of conventional statistical physics, yet highly reminiscent of glassiness. These studies showed that a loose packing of glass beads subjected to vertical “tapping” slowly compacts, asymptoting to a higher steady-state packing fraction. This “equilibrium” packing fraction is somewhat lower than the random close packing limit, $`\rho _{\mathrm{rcp}}\approx 0.64`$, and is a decreasing function of the vibration intensity, typically parameterized by $`\mathrm{\Gamma }`$, the peak applied acceleration normalized by gravity, $`\mathrm{g}`$. The relaxation dynamics are extremely slow, taking many thousands of taps for the packing fraction, $`\rho `$, to approach its steady-state value. During this evolution, $`\rho `$ increases logarithmically with the number of taps, $`t`$, which is typical for self-inhibiting processes . The average time scale $`\tau `$ of the relaxation decreases with $`\mathrm{\Gamma }`$, and in this sense the shaking intensity plays, at least qualitatively, the role of temperature.
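The slow logarithmic relaxation is commonly parameterized in the compaction literature by an inverse-logarithmic fit of the form $`\rho (t)=\rho _f\mathrm{\Delta }\rho /\left(1+B\mathrm{ln}(1+t/\tau )\right)`$; the sketch below uses purely illustrative parameter values, not numbers fitted to any data set:

```python
import math

def rho(t, rho_f=0.636, d_rho=0.04, B=0.8, tau=10.0):
    """Inverse-logarithmic compaction law: the packing fraction creeps up
    roughly logarithmically in the tap number t and saturates at rho_f.
    All parameter values here are illustrative placeholders."""
    return rho_f - d_rho / (1.0 + B * math.log(1.0 + t / tau))

# Even thousands of taps only bring rho slowly toward its steady state:
densities = [rho(t) for t in (0, 10, 100, 1000, 10000)]
```

Note how the increments per decade of taps stay roughly constant, the signature of a self-inhibiting, logarithmically slow process.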
For small $`\mathrm{\Gamma }`$, the relaxation rate becomes so slow that the system cannot reach the steady-state density within the experimental time scale. It was also found that compaction can be maximized through an annealing procedure. This process involves a slow “cooling” of the system starting from a high shaking intensity $`\mathrm{\Gamma }`$. These slow relaxation and annealing properties of this system are reminiscent of conventional glasses. Another qualitative similarity to glasses is observable in the density fluctuation spectrum of the granular system near equilibrium. The spectrum was found to be strongly non-Lorentzian , revealing the existence of multiple time scales in the system. The shortest and the longest relaxation time scales differ by as much as three orders of magnitude, and the behavior of the spectrum for intermediate frequencies is highly non-trivial; in certain regimes it can be fitted with a power law. These previous experimental observations are suggestive of glassy behavior, and this connection has been explored in recent models of compaction using ideas from magnetic systems . However, a more direct test of the glassy nature of granular compaction comes from measurements of the response of the system to sudden perturbations in the effective temperature, given by $`\mathrm{\Gamma }`$. This idea originates from classical experiments for the study of aging in glasses and has recently been explored using computer simulations . In this Letter we present direct experimental observations of memory effects in a vibrated granular system, obtained by measuring the short-time response to an instantaneous change in tapping acceleration $`\mathrm{\Gamma }`$, and propose a simple theoretical framework. We used the experimental set-up described in refs.
: 1 mm–diameter glass beads were vertically shaken in a tall, evacuated, 19 mm-diameter glass tube, and the packing density of the beads was measured using capacitors mounted at four heights along the column. The simplest form of this experiment consists of a single instantaneous change of vibration intensity from $`\mathrm{\Gamma }_1`$ to $`\mathrm{\Gamma }_2`$ after $`t_0`$ taps. For $`\mathrm{\Gamma }_2<\mathrm{\Gamma }_1`$ (Fig. 1a) we found that on short time scales the compaction rate increases. This is in sharp contrast to what one might expect from the long-time behavior found in previous experiments, where the relaxation is slower for smaller vibration accelerations. For $`\mathrm{\Gamma }_2>\mathrm{\Gamma }_1`$ (Fig. 1b) we found that the system dilates immediately following $`t_0`$. These results, too, are opposite to the long-time behavior seen in previous experiments, where the compaction rate increased: not only does the compaction rate decrease, it becomes negative (i.e. the system dilates). Note that after several taps the “anomalous” dilation ceases and there is a crossover to the “normal” behavior, with the relaxation rate becoming the same as in constant–$`\mathrm{\Gamma }`$ mode. Thus, most of the shaking history is forgotten after a short time. These data constitute a short-term memory effect: the future evolution of $`\rho `$ after time $`t_0`$ depends not only on $`\rho (t_0)`$, but also on information about the previous tapping history, contained in other “hidden” variables. In order to demonstrate this in a more explicit manner, we modified the above experiment. In this second set of three experiments the system was driven to the same density $`\rho _0`$ with three different accelerations $`\mathrm{\Gamma }_0`$, $`\mathrm{\Gamma }_1`$, and $`\mathrm{\Gamma }_2`$. After $`\rho _0`$ was achieved at time $`t_0`$, the system was tapped with the same intensity $`\mathrm{\Gamma }_0`$ for all three experiments.
As seen in Figure 2, the evolution for $`t>t_0`$ strongly depends on the pre-history. The need for extra state variables in the problem is consistent with the strongly non-Lorentzian behavior of the fluctuation spectrum observed in earlier experiments. Indeed, if the evolution could be prescribed by a single master equation for the local density, it would result in a single-relaxation-time exponential decay of the density fluctuations near equilibrium. Instead, a wide distribution of characteristic times is suggested by the spectrum. To give a theoretical interpretation of the above results, we view the problem as an evolution in the space of discrete “microscopic” states corresponding to different realizations of the packing topology (i.e. of the contact network). For each tap there is a possibility for a transition from one microscopic state to another. Since the dynamics is dissipative and the system is under external gravity, a transition to a denser configuration is typically more probable than the reverse one. We now assume that the short-term dynamics of the system are dominated by a number of local flip-flop modes with relatively high transition rates in both directions. This model replaces the complicated configuration space with a set of independent two-state systems, each of which is characterized by two transition rates, $`\kappa _{eg}>\kappa _{ge}`$. $`\kappa _{eg}/\kappa _{ge}`$ gives the ratio of the equilibrium probabilities of populating each state: “ground” and “excited”. As we have argued, the higher-probability ground state is typically the one with higher density, i.e. the volume change $`v`$ between the ground and the excited states is normally positive (see Fig. 3 for a schematic description of the model). Our two-state approximation is close in spirit to the recent Grinev–Edwards and de Gennes models . We now introduce the concept of a base-line density, $`\rho _b`$, which corresponds to all the elementary modes being in their ground states.
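A toy iteration of the per-tap master equation for a single flip-flop mode confirms that the stationary occupations reproduce the stated rate ratio; the rates below are arbitrary per-tap transition probabilities, not measured values:

```python
def excited_population(k_eg, k_ge, taps=2000):
    """Iterate the per-tap master equation for the excited-state
    probability p: each tap, ground -> excited occurs with probability
    k_ge and excited -> ground with probability k_eg."""
    p = 0.0
    for _ in range(taps):
        p += k_ge * (1.0 - p) - k_eg * p
    return p

k_eg, k_ge = 0.3, 0.1       # ground state favored, since k_eg > k_ge
p = excited_population(k_eg, k_ge)
# Stationary occupations satisfy P_ground / P_excited = k_eg / k_ge = 3.
```

The fixed point is $`p=\kappa _{ge}/(\kappa _{eg}+\kappa _{ge})`$, so the excited fraction grows with any increase of the excitation rate, the role played by $`\mathrm{\Gamma }`$ in the model.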
Obviously, the experimentally observed density differs from $`\rho _b`$ due to a non-zero fraction of excited states: $$\rho =\rho _b(t)\left(1-\frac{1}{V}\underset{n}{\sum }v^{(n)}\left(1+\frac{\kappa _{ge}^{(n)}}{\kappa _{eg}^{(n)}}\right)^{-1}\right).$$ (1) The summation here is performed over all the dominant two-state modes, $`V`$ is the total volume and $`v^{(n)}`$ is the volume difference between the excited and the ground $`n`$-th state. Assuming that the vibration intensity $`\mathrm{\Gamma }`$ is a qualitative analog of temperature, we expect the population of the excited states, $`P(\mathrm{\Gamma })=(1+\kappa _{ge}^{(n)}/\kappa _{eg}^{(n)})^{-1}`$, to grow with $`\mathrm{\Gamma }`$, starting from zero at $`\mathrm{\Gamma }=0`$. Hence, for a given $`\rho _b`$, the total density $`\rho `$ will be lower at higher acceleration. This explains the observed effect of an abrupt change of $`\mathrm{\Gamma }`$. After a switch from $`\mathrm{\Gamma }_1`$ to $`\mathrm{\Gamma }_2`$ at time $`t_0=0`$, the flip-flop mode contribution to the total density relaxes to its new equilibrium value in the following way: $$\mathrm{\Delta }_{\mathrm{\Gamma }_1\mathrm{\Gamma }_2}(t)=\rho _b\int vF_{\mathrm{\Gamma }_1,\mathrm{\Gamma }_2}(v,\kappa )\left(1-e^{-\kappa t}\right)dv\,d\kappa $$ (2) Here $`\kappa `$ is the relaxation rate of an individual mode, and the distribution function $`F_{\mathrm{\Gamma }_1,\mathrm{\Gamma }_2}(v,\kappa )`$ is introduced as follows: $`F_{\mathrm{\Gamma }_1,\mathrm{\Gamma }_2}(v,\kappa )\equiv {\displaystyle \frac{1}{V}}{\displaystyle \underset{n}{\sum }}\left(P^{(n)}(\mathrm{\Gamma }_2)-P^{(n)}(\mathrm{\Gamma }_1)\right)\delta (v-v^{(n)})`$ (3) $`\times \delta \left(\kappa -\kappa _{ge}^{(n)}(\mathrm{\Gamma }_2)-\kappa _{eg}^{(n)}(\mathrm{\Gamma }_2)\right).`$ (4) Since the observed density changes in the compaction experiment are normally less than $`1\%`$ of the total density, one can estimate the typical separation between neighboring flip-flop systems as 5 particle sizes, which
yields a good justification for our no-coupling approximation. According to Eq. (2), if $`F_{\mathrm{\Gamma }_1,\mathrm{\Gamma }_2}`$ does not vanish in the limit $`\kappa \to 0`$, the late stage of the relaxation of $`\mathrm{\Delta }_{\mathrm{\Gamma }_1\mathrm{\Gamma }_2}(t)`$ is given by the power law: $`\delta _{\mathrm{\Gamma }_1,\mathrm{\Gamma }_2}(t)\equiv \mathrm{\Delta }_{\mathrm{\Gamma }_1\mathrm{\Gamma }_2}(t)-\mathrm{\Delta }_{\mathrm{\Gamma }_1\mathrm{\Gamma }_2}(\mathrm{\infty })=`$ (5) $`-\rho _b{\displaystyle \int vF_{\mathrm{\Gamma }_1,\mathrm{\Gamma }_2}(v,\kappa )e^{-\kappa t}dv\,d\kappa }\propto {\displaystyle \frac{1}{t}}.`$ (6) Note that $`\rho _b`$ is also time dependent: although this cannot be described within our two-state approximation, the collection of elementary modes slowly evolves. Thus, one can observe two different processes: on short time scales, a fast relaxation due to the flip-flop modes is dominant, while at long times the dynamics are determined by the logarithmically slow evolution of the baseline density $`\rho _b(t)`$. The crossover between the two regimes is particularly obvious in Fig. 1b, where it results in a non-monotonic evolution. For experiments performed at sufficiently late stages of the density relaxation, the dynamics of the baseline density can be neglected compared to the contribution of the flip-flop modes (note that what we call a late-stage relaxation in fact corresponds to mesoscopic time scales, which are always shorter than the relaxation time for $`\rho _b`$). It has to be emphasized that the described experiments provide us with a tool for studying the response of the system which is not limited to the near-equilibrium regime. One can use our simple model to predict the response of the system to a more complicated pattern of changes of $`\mathrm{\Gamma }`$. First, we reach, using annealing dynamics, a “quasi-steady” state at amplitude $`\mathrm{\Gamma }_0`$, so that $`\rho _b`$ can be considered constant later on.
Let us switch the shaking acceleration from $`\mathrm{\Gamma }_0`$ to $`\mathrm{\Gamma }_1`$ for a finite number of taps $`\delta t`$, and then switch it back to $`\mathrm{\Gamma }_0`$. During the intermediate $`\mathrm{\Gamma }_1`$–stage, the system does not have enough time to relax completely to its new equilibrium. In our two-state model, the modes whose relaxation rate (at $`\mathrm{\Gamma }_1`$) is below $`\delta t^{-1}`$ remain unrelaxed. Assuming that the slow modes at $`\mathrm{\Gamma }_1`$ are mostly the same as at $`\mathrm{\Gamma }_0`$, we can calculate the backward density relaxation similarly to Eq. (5), with $`F(v,\kappa )`$ effectively depleted below a minimal rate, $`\kappa _0`$. This cut-off frequency, $`\kappa _0`$, is expected to decrease monotonically with increasing perturbation duration $`\delta t`$. The resulting density relaxation after returning to $`\mathrm{\Gamma }_0`$ is given by: $$\delta _{\mathrm{\Gamma }_1,\mathrm{\Gamma }_0}(t)\propto \frac{e^{-\kappa _0t}}{t}.$$ (7) We tested these predictions by performing this three-stage experiment, varying the duration, $`\delta t`$, of the perturbation ($`\mathrm{\Gamma }_1`$) stage (Fig. 4). As predicted, the time needed to recover the steady-state density increases with the number of taps $`\delta t`$ spent in the “hot” regime $`\mathrm{\Gamma }_1>\mathrm{\Gamma }_0`$. In the coordinates chosen, the relaxation curves should follow the $`\delta t=\mathrm{\infty }`$ dynamics until saturation at the cut-off time, $`\kappa _0^{-1}(\delta t)`$. We approximate the distribution function $`F`$ by a constant above this low-frequency cut-off $`\kappa _0(\delta t)`$, up to a high-frequency cut-off, $`\kappa _h\sim 1\,\mathrm{tap}^{-1}`$. This eliminates the unphysical low-$`t`$ divergence in Eq. (7). Figure 4 shows fits of the data to Eq. (2), where $`\kappa _0(\delta t)`$ is determined from the fit.
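The shapes in Eqs. (5) and (7) follow from integrating $`e^{\kappa t}`$ over a flat rate distribution between the cut-offs: the integral equals $`(e^{\kappa _0t}e^{\kappa _ht})/t`$, a pure $`1/t`$ law for $`\kappa _0\to 0`$ and exponentially suppressed beyond $`t\kappa _0^{-1}`$ otherwise. A numerical sketch with illustrative rate values:

```python
import math

def relaxation(t, k0, kh):
    """Closed form of the integral of exp(-kappa*t) over a flat rate
    distribution on [k0, kh]; this is the shape behind Eqs. (5) and (7).
    Cut-off values here are illustrative."""
    if t == 0.0:
        return kh - k0
    return (math.exp(-k0 * t) - math.exp(-kh * t)) / t

kh = 0.4                                    # high-frequency cut-off
no_cutoff = relaxation(200.0, 0.0, kh)      # ~ 1/t at late times
with_cutoff = relaxation(200.0, 0.05, kh)   # exponentially suppressed
```

The comparison shows why the recovery curves for finite $`\delta t`$ peel off the $`\delta t=\mathrm{\infty }`$ power law once $`t`$ exceeds the cut-off time $`\kappa _0^{-1}`$.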
The best fit is achieved at $`\kappa _h=0.4`$, and variation of this parameter would result in a simple rescaling of the time axis. Figure 4 demonstrates good agreement between model and experiment, with some systematic error at the earliest relaxation stage (an expected result of our oversimplified description of the short-time dynamics). For the late-stage relaxation, we conclude that (i) within our experimental precision, the $`\delta t=\mathrm{\infty }`$ relaxation is consistent with the predicted $`1/t`$ law; (ii) finite–$`\delta t`$ relaxation curves can be parameterized by a low-frequency cut-off, $`\kappa _0`$; and (iii) $`\kappa _0`$ is a decreasing function of the waiting time $`\delta t`$, shown in the inset of Figure 4. We now relate our picture to previous experimental and theoretical results. As discussed earlier, the wide range of relaxation times reveals itself both in our response measurements and in the fluctuation spectra of the density. It is tempting to relate these two kinds of data through an analog of the Fluctuation-Dissipation Theorem (FDT). Of course, there is no fundamental reason for the FDT to be applicable to the granular system . Even though the above two-state model could be mapped onto a thermal system (in which the FDT is expected to work), the thermodynamic variable conjugate to density in the context of such a mapping has no clear physical meaning. Nevertheless, below we outline the pseudo-FDT relationship expected for the granular system under a rather natural approximation. Namely, we neglect the correlation between the volume change $`v`$ and the lifetime $`\kappa ^{-1}`$ of an individual mode, i.e. assume $`F_{0,\mathrm{\Gamma }}(v,\kappa )=f(\kappa )g(v)`$.
Then the density autocorrelation function can be written as follows: $`\langle \delta \rho (0)\delta \rho (t)\rangle _\mathrm{\Gamma }={\displaystyle \frac{\rho ^2}{2V}}{\displaystyle \int \left(\langle v^2\rangle -\langle v\rangle ^2\right)e^{-\kappa t}f(\kappa )d\kappa }=`$ (8) $`-\rho {\displaystyle \frac{\langle v^2\rangle -\langle v\rangle ^2}{2\langle v\rangle V}}\delta _{0,\mathrm{\Gamma }}(t).`$ (9) Thus, the density correlator is simply proportional to the response function corresponding to the switch between a very low acceleration (at which virtually all the modes are in their ground states) and the given one, $`\mathrm{\Gamma }`$. An experimental check of this relationship requires further high-precision studies of both the relaxation dynamics and the fluctuation spectrum. Our model also gives a simple interpretation of the decreasing dependence of the steady-state density on $`\mathrm{\Gamma }`$: it can be attributed to the growth of the population of the excited states, $`P(\mathrm{\Gamma })`$. Indeed, the corresponding correction to the total density is about $`1\%`$, i.e. of the same order as the variation of the equilibrium packing fraction with $`\mathrm{\Gamma }`$ . The slow dynamics associated with the evolution of the base-line density can also be addressed within our approach. To do so we need to account for the coupling of individual modes. Namely, it is a reasonable assumption that relaxation of one mode to its ground state may frustrate such a transition for some of its neighbors (e.g. in 3D the most compact local cluster can be created only at the expense of less dense neighboring regions). Thus, we arrive at an effective anti-ferromagnetic (AF) coupling (of infinite strength) between the two-state modes. This extension of our model makes it remarkably similar to the so-called reversible Parking Lot Model (PLM) , which has been successful in describing many aspects of granular compaction experiments .
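Returning to the pseudo-FDT of Eqs. (8) and (9): under the factorization $`F_{0,\mathrm{\Gamma }}(v,\kappa )=f(\kappa )g(v)`$ the correlator and the response share the same time dependence $`\int e^{\kappa t}f(\kappa )d\kappa `$, so their ratio is independent of $`t`$. A numerical check with an arbitrary discrete $`f(\kappa )`$ and arbitrary $`t`$-independent prefactors:

```python
import math

# Arbitrary discrete rate distribution f(kappa): (rate, weight) pairs.
modes = [(0.05, 0.2), (0.2, 0.5), (1.0, 0.3)]

def kernel(t):
    """Common time dependence sum_kappa f(kappa)*exp(-kappa*t) shared by
    the density correlator and the response in the factorized model."""
    return sum(w * math.exp(-k * t) for k, w in modes)

times = (1.0, 5.0, 20.0)
correlator = [2.7 * kernel(t) for t in times]  # prefactor arbitrary here
response = [0.9 * kernel(t) for t in times]    # prefactor arbitrary here
ratios = [c / r for c, r in zip(correlator, response)]
```

The constancy of the ratio is exactly the pseudo-FDT statement; any $`v`$–$`\kappa `$ correlation would make the two time dependences differ.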
Recent simulations based on the “tetris model” for compaction also find slow glassy responses to changes in $`\mathrm{\Gamma }`$, but do not capture the short-term memory effect described here . In the PLM, D-dimensional space (the parking lot) gets packed with finite-size objects (cars), which may arrive and depart with fixed rates and which are not allowed to overlap. Now the transcendental relationship between the PLM and the granular compaction experiment is easier to explain: both the PLM and our coupled flip-flop model belong to the same generic class of AF-type systems (in the case of the PLM, a local two-state mode is represented by a particle whose center of mass may or may not be placed at point x; the mode coupling is due to the hard-core interactions). The PLM is known to capture the slow dynamics of granular compaction and some features of its fluctuation spectrum . In fact, we performed numerical simulations of the PLM that display the memory effects discussed in this work. In conclusion, we used a sequence of abrupt switches of the shaking intensity $`\mathrm{\Gamma }`$ to study the response of a vibrated granular system. This technique can be used in the vicinity of the steady-state density, as well as far from equilibrium. The major result is the direct demonstration of a memory effect in the system: the evolution is not predetermined by the local density alone, and its description requires the introduction of additional “hidden” variables. Our phenomenological model for this behavior is built on minimal assumptions about the dynamics of the system and produces results which are generic and are expected to be valid for a wide class of more realistic microscopic models. Acknowledgments: We would like to thank Sue Coppersmith, Leo Kadanoff, Sid Nagel, and Tom Witten for insightful discussions and Fernando Villarruel for help with the experiments. This work was supported by the NSF under Award CTS-9710991 and by the MRSEC Program of the NSF under Award DMR-9808595.
# The contribution of Σ^∗→Λπ to measured Λ polarization

TAUP-2621-2000, WIS2000/3/Feb.-DPP

(Supported in part by grants from the US-Israel Bi-National Science Foundation and from the Israeli Science Foundation.)

## Abstract

Calculations of the polarization of $`\mathrm{\Lambda }`$ and $`\overline{\mathrm{\Lambda }}`$ particles after fragmentation of a polarized quark produced in processes like $`Z`$-decay and deep inelastic polarized lepton scattering must include $`\mathrm{\Lambda }`$ and $`\overline{\mathrm{\Lambda }}`$ produced as decay products of $`\mathrm{\Sigma }^0`$ and $`\mathrm{\Sigma }^*`$ as well as those produced directly. These decay contributions are significant and cannot feasibly be included in theoretical calculations based on QCD without additional input from other experimental data. Furthermore these contributions depend on the spin structure of the $`\mathrm{\Sigma }^0`$ or $`\mathrm{\Sigma }^*`$ and are not directly related to the structure function of the $`\mathrm{\Lambda }`$.

The interpretation of studies of the nucleon spin structure functions is that quarks in the nucleon carry only $`\sim `$30% of the nucleon spin and that the strange (and non-strange) sea is polarized opposite to the polarization of the valence quarks. An attempt to shed light on this very problematic conclusion was made through measurement of the polarization of $`\mathrm{\Lambda }`$ and $`\overline{\mathrm{\Lambda }}`$ produced near the Z pole in $`e^+e^-`$ collisions and in polarized lepton Deep Inelastic Scattering on unpolarized targets . Several theoretical works have been published on this subject in which predictions and calculations relevant to the interpretation of the experimental results were presented . A difficulty present in most experiments is that they cannot distinguish between $`\mathrm{\Lambda }`$ and $`\overline{\mathrm{\Lambda }}`$ produced directly or as decay products.
The main contribution from decays is $`\mathrm{\Sigma }^*\to \mathrm{\Lambda }\pi `$. The purpose of the present note is to emphasize the importance of taking into account the contribution from this process. To illustrate this point, if 100% polarized strange quarks hadronize directly to a $`\mathrm{\Lambda }`$, the $`\mathrm{\Lambda }`$ polarization will also be 100% if the spin structure of the $`\mathrm{\Lambda }`$ is as expected in the naïve quark model. On the other hand, if they hadronize to a $`\mathrm{\Sigma }^*`$, the $`\mathrm{\Lambda }`$ particles resulting from its decay will be only 55% polarized . If the spin structure of the $`\mathrm{\Lambda }`$ is as derived from SU(3) symmetry using the measured spin structure of the proton, 100% polarized strange quarks will result in 73% polarized $`\mathrm{\Lambda }`$ if produced directly, compared with the same 55% if coming from $`\mathrm{\Sigma }^*`$ decay. The polarization of the $`\mathrm{\Lambda }`$ particles observed in any experiment can be written $$P(\mathrm{\Lambda })=\frac{N_{nd}P_{nd}(\mathrm{\Lambda })+N_{\mathrm{\Sigma }^*}BR(\mathrm{\Sigma }^*\to \mathrm{\Lambda }\pi )P_{\mathrm{\Sigma }^*}(\mathrm{\Lambda })+N_{\mathrm{\Sigma }^o}P_{\mathrm{\Sigma }^o}(\mathrm{\Lambda })}{N_\mathrm{\Lambda }}$$ (1) where $`N_\mathrm{\Lambda }`$, $`N_{\mathrm{\Sigma }^*}`$ and $`N_{\mathrm{\Sigma }^o}`$ denote respectively the numbers of $`\mathrm{\Lambda }`$’s, $`\mathrm{\Sigma }^*`$’s and $`\mathrm{\Sigma }^o`$’s produced in the experiment, $`BR(\mathrm{\Sigma }^*\to \mathrm{\Lambda }\pi )`$ denotes the branching ratio for the $`\mathrm{\Sigma }^*\to \mathrm{\Lambda }\pi `$ decay, $`P_{\mathrm{\Sigma }^*}(\mathrm{\Lambda })`$ and $`P_{\mathrm{\Sigma }^o}(\mathrm{\Lambda })`$ denote respectively the polarizations of the $`\mathrm{\Lambda }`$’s produced via the $`\mathrm{\Sigma }^*\to \mathrm{\Lambda }\pi `$ decay and the $`\mathrm{\Sigma }^o\to \mathrm{\Lambda }\gamma `$ decay, and $`N_{nd}`$ and $`P_{nd}(\mathrm{\Lambda })`$ denote the
number and polarization of $`\mathrm{\Lambda }`$’s produced via all other ways, i.e. which do not go via the $`\mathrm{\Sigma }^*`$ or $`\mathrm{\Sigma }^o`$: $$N_{nd}=N_\mathrm{\Lambda }-N_{\mathrm{\Sigma }^*}BR(\mathrm{\Sigma }^*\to \mathrm{\Lambda }\pi )-N_{\mathrm{\Sigma }^o}$$ (2) The individual terms in the numerator of eq.(1) are all distinct and measurable. Any calculation of the polarization of the final observed $`\mathrm{\Lambda }`$ must consider all these contributions if they are not separated experimentally. One might argue that the $`\mathrm{\Sigma }^*\to \mathrm{\Lambda }\pi `$ decay is a strong interaction described in terms of quarks and gluons in QCD and should be included in the inclusive polarized fragmentation function. Clearly the $`\mathrm{\Sigma }^*`$ intermediate state must already be included in any fragmentation function which takes into account $`all`$ strong interactions in the description of a process in which a struck quark turns into a $`\mathrm{\Lambda }`$ plus anything else. This point of view is implied in the treatments which attempt to use the $`\mathrm{\Lambda }`$ polarization data to extract fragmentation functions. But the expression for the $`\mathrm{\Lambda }`$ polarization, eq.(1), is rigorous. Thus a theoretical formulation which gives a prediction for this polarization must also include a prediction for the precise values of all parameters appearing in eq.(1). We immediately find a crucial weak point in all attempts to obtain a theoretical estimate for the value of the $`\mathrm{\Lambda }`$ polarization. Any theoretical attempt to obtain the value of the branching ratio $`BR(\mathrm{\Sigma }^*\to \mathrm{\Lambda }\pi )`$ must take into account fine threshold effects like the small SU(3) breaking produced by the $`\mathrm{\Lambda }`$–$`\mathrm{\Sigma }`$ mass difference, which vanishes in the SU(3) symmetry limit.
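Eqs. (1) and (2) amount to a yield-weighted average over production channels; a minimal numerical sketch follows. The polarizations 0.73 and 0.55 echo the numbers quoted above, while the channel yields, the branching ratio and the $`\mathrm{\Sigma }^o`$-channel polarization used below are illustrative placeholders, not measured inputs:

```python
def lambda_polarization(n_nd, p_nd, n_star, br, p_star, n_sigma0, p_sigma0):
    """Eq. (1): observed Lambda polarization as a weighted average of the
    direct, Sigma* -> Lambda pi and Sigma0 -> Lambda gamma channels;
    the total Lambda yield follows Eq. (2) rearranged."""
    n_lambda = n_nd + n_star * br + n_sigma0
    return (n_nd * p_nd + n_star * br * p_star + n_sigma0 * p_sigma0) / n_lambda

# Illustrative inputs: statistical 1:6:1 yields, direct polarization 0.73,
# Sigma*-decay polarization 0.55, placeholder Sigma0-channel value 0.2.
p_obs = lambda_polarization(n_nd=1.0, p_nd=0.73,
                            n_star=6.0, br=0.87, p_star=0.55,
                            n_sigma0=1.0, p_sigma0=0.2)
```

Even with a fully polarized direct channel, the weighting by the decay channels pulls the observed polarization substantially below the direct value, which is the central point of the note.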
Note that in the SU(3) symmetry limit the predicted ratio of the branching ratios of the two $`\mathrm{\Sigma }^*`$ decay modes is in strong disagreement with experiment: $$\left(\frac{BR(\mathrm{\Sigma }^*\to \mathrm{\Lambda }\pi )}{BR(\mathrm{\Sigma }^*\to \mathrm{\Sigma }\pi )}\right)_{theo}=1/2\qquad \left(\frac{BR(\mathrm{\Sigma }^*\to \mathrm{\Lambda }\pi )}{BR(\mathrm{\Sigma }^*\to \mathrm{\Sigma }\pi )}\right)_{exp}=7.3\pm 1.2$$ (3) The disagreement is more than an order of magnitude. There is also the problem of obtaining the values of $`N_{nd}`$, $`N_{\mathrm{\Sigma }^*}`$ and $`N_{\mathrm{\Sigma }^o}`$. If one assumes a purely statistical model in which all states of two nonstrange quarks and one strange quark are equally probable, the ratio is $$N_{nd}:N_{\mathrm{\Sigma }^*}:N_{\mathrm{\Sigma }^o}=1:6:1$$ (4) where the factor 6 arises from the $`(2J+1)`$ spin factor and the three charge states of the $`\mathrm{\Sigma }^*`$, which all decay into $`\mathrm{\Lambda }\pi `$. The experimental values are very different. There is also the problem that all three charge states are equally produced by the fragmentation of a struck $`s`$ quark, while a struck $`u`$ or $`d`$ quark can only produce the two charge states containing the struck quark. All these factors complicate any attempt at this stage to predict the values of $`N_{nd}`$, $`N_{\mathrm{\Sigma }^*}`$ and $`N_{\mathrm{\Sigma }^o}`$ from any purely theoretical model without external experimental input. Thus predictions for the polarization of the $`\mathrm{\Lambda }`$’s observed in any experiment must include as input the known experimental branching ratio $`BR(\mathrm{\Sigma }^*\to \mathrm{\Lambda }\pi )`$ as well as the values of $`N_{nd}`$, $`N_{\mathrm{\Sigma }^*}`$ and $`N_{\mathrm{\Sigma }^o}`$ obtained from other experiments or from Monte Carlo programs which rely on a number of free parameters adjusted to fit vast quantities of data. Note that the $`\mathrm{\Sigma }^o`$ decays electromagnetically.
Its decay is never included in any strong-interaction fragmentation function, and the $`\mathrm{\Lambda }`$’s produced via its production and decay must be considered separately in all fragmentation models. We now note that the polarization of $`\mathrm{\Lambda }`$’s produced from the decay of a $`\mathrm{\Sigma }^*`$ or $`\mathrm{\Sigma }^o`$ is proportional to the polarization of the decaying $`\mathrm{\Sigma }^*`$ or $`\mathrm{\Sigma }^o`$, with coefficients depending only on angular momentum Clebsch-Gordan coefficients and completely independent of the spin structure of the $`\mathrm{\Lambda }`$: $$P_{\mathrm{\Sigma }^*}(\mathrm{\Lambda })=P_{\mathrm{\Sigma }^*}C(\mathrm{\Sigma }^*);\qquad P_{\mathrm{\Sigma }^o}(\mathrm{\Lambda })=P_{\mathrm{\Sigma }^o}C(\mathrm{\Sigma }^o);\qquad P_{\mathrm{\Sigma }^*\mathrm{\Sigma }^o}(\mathrm{\Lambda })=P_{\mathrm{\Sigma }^*}C(\mathrm{\Sigma }^*\mathrm{\Sigma })$$ (5) where $`P_{\mathrm{\Sigma }^*}`$ and $`P_{\mathrm{\Sigma }^o}`$ denote the polarizations respectively of the $`\mathrm{\Sigma }^*`$ and $`\mathrm{\Sigma }^o`$ before their decays, and $`C(\mathrm{\Sigma }^*)`$, $`C(\mathrm{\Sigma }^o)`$ and $`C(\mathrm{\Sigma }^*\mathrm{\Sigma })`$ denote the model-independent functions of Clebsch-Gordan coefficients describing the ratio of the polarization of the final $`\mathrm{\Lambda }`$ to the polarization of the decaying baryon. The explicit values of these functions are given in ref. , where it is shown that the polarization of the final $`\mathrm{\Lambda }`$ in all models for the baryons and the dynamics of the decay process is given by the polarization of the strange quark in the simple constituent quark model for the decaying baryon. We immediately note that only the polarization of the directly-produced $`\mathrm{\Lambda }`$ can depend upon the spin-flavor structure of the $`\mathrm{\Lambda }`$.
The other terms in eq.(1) depend upon the spin-flavor structure of the $`\mathrm{\Sigma }^{*}`$ and the $`\mathrm{\Sigma }^o`$, but are independent of the spin-flavor structure of the $`\mathrm{\Lambda }`$. The expression eq.(1) for the polarization of all the $`\mathrm{\Lambda }`$’s produced in a given experiment is easily generalized to obtain the polarization of $`\mathrm{\Lambda }`$’s restricted to a given domain of various kinematic variables. It is necessary to note that the momenta of $`\mathrm{\Lambda }`$’s produced from the decay of a $`\mathrm{\Sigma }^{*}`$ or $`\mathrm{\Sigma }^o`$ differ from those of the parent baryon. Thus to obtain the polarization of $`\mathrm{\Lambda }`$’s produced in a given kinematic range one must integrate the expressions for $`N_{\mathrm{\Sigma }^{*}}`$ and $`N_{\mathrm{\Sigma }^o}`$ over momenta with the appropriate weighting factors and angular distributions needed to produce the $`\mathrm{\Lambda }`$’s in the correct kinematic range. All this cannot be done reliably in present theoretical calculations and has to be taken from Monte Carlo simulations that are tuned and tested and thus reproduce well many related experimental observables. The polarization of $`\mathrm{\Lambda }`$ and $`\overline{\mathrm{\Lambda }}`$ produced at the Z-pole was calculated using two ingredients: the polarization of the $`s`$ and $`\overline{s}`$ quarks produced at the pole, and their hadronization into $`\mathrm{\Lambda }`$ and $`\overline{\mathrm{\Lambda }}`$ . The first part was derived from the weak interactions and can be predicted directly. The hadronization of the $`s,\overline{s}`$ directly to $`\mathrm{\Lambda }`$ and $`\overline{\mathrm{\Lambda }}`$ or via decay products was calculated using Monte Carlo simulations. The authors found that about 20% of the $`\mathrm{\Lambda }`$ polarization is contributed by $`\mathrm{\Lambda }`$ particles originating from $`\mathrm{\Sigma }^{*}`$ decay. 
These calculations used the naïve quark model wave functions for the baryons, and they show that inclusion of this contribution was essential for obtaining agreement with the data. In any attempt to go beyond the naïve quark model and include the information obtained from DIS on the spin-flavor structure of the proton, it is clearly necessary to have a model not only for the spin-flavor structure of the $`\mathrm{\Lambda }`$, but also for that of the $`\mathrm{\Sigma }^{*}`$ and the $`\mathrm{\Sigma }^o`$. In studies of $`\mathrm{\Lambda }`$ polarization in deep inelastic scattering of polarized muons , the $`x_F`$ dependence of this contribution is studied (see fig. 1). It is found that up to $`x_F\simeq 0.5`$ the contribution from $`\mathrm{\Sigma }^{*}`$ decay to the $`\mathrm{\Lambda }`$ polarization is dominant. Only for larger $`x_F`$ values does the polarization from directly produced $`\mathrm{\Lambda }`$’s become dominant. It should be noted that the $`\mathrm{\Lambda }`$ yield drops fast at large $`x_F`$, and consequently most of the data are taken in the region where the $`\mathrm{\Lambda }`$ polarization is dominated by $`\mathrm{\Sigma }^{*}`$ decay. The contribution from this process must therefore be considered very carefully. This contribution was addressed only in some of the theoretical calculations of this process .
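The counting behind the factor 6 in the statistical ratio above is elementary arithmetic and can be spelled out explicitly. A minimal sketch (in Python, purely illustrative; the function name is ours):

```python
def spin_multiplicity(J):
    # number of spin states, (2J + 1)
    return int(2 * J + 1)

# Sigma* (J = 3/2) relative to a spin-1/2 baryon: factor 2 from spin weights
spin_factor = spin_multiplicity(1.5) // spin_multiplicity(0.5)   # 4 // 2 = 2
charge_states = 3        # Sigma*+, Sigma*0, Sigma*-, all decaying to Lambda pi
weight = spin_factor * charge_states                             # = 6

# purely statistical prediction N_nd : N_Sigma* : N_Sigma0 = 1 : 6 : 1
statistical_ratio = (1, weight, 1)
```

As the text stresses, this purely statistical counting disagrees strongly with the measured relative yields, so the values must in practice be taken from experiment or tuned Monte Carlo programs.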
# 1/𝑓^𝛼 fluctuations in a ricepile model ## Abstract The temporal fluctuation of the average slope of a ricepile model is investigated. It is found that the power spectrum $`S(f)`$ scales as $`1/f^\alpha `$ with $`\alpha \simeq 1.3`$ when grains of rice are added only to one end of the pile. If grains are randomly added to the pile, the power spectrum exhibits $`1/f^2`$ behavior. The profile fluctuations of the pile under different driving mechanisms are also discussed. PACS numbers: 05.40.-a, 05.65.+b, 45.70.-n, 05.70.Ln The term flicker noise refers to the phenomenon that a signal $`s(t)`$ fluctuates with a power spectrum $`S(f)\propto 1/f^\alpha `$ at low frequency. Since the exponent $`\alpha `$ is often close to $`1`$, flicker noise is also called $`1/f`$ noise. The power spectrum of a signal is defined as the Fourier transform of its auto-correlation function $`C(t_0,t)=\langle s(t_0)s(t)\rangle `$, where $`\langle \cdots \rangle `$ denotes the ensemble average. Usually the signal $`s(t)`$ under study is stationary and its auto-correlation function depends only on $`tt_0`$. The auto-correlation function can alternatively be calculated as $$C(\tau )=\int _{-\infty }^{\infty }s(t)s(t+\tau )dt.$$ (1) If the signal is real-valued, its power spectrum is just $$S(f)=\left|\widehat{s}(f)\right|^2$$ (2) according to the Wiener-Khinchin theorem. Here $`\widehat{s}(f)`$ is the Fourier transform of the signal $$\widehat{s}(f)=\int _{-\infty }^{\infty }s(t)e^{i2\pi ft}dt.$$ (3) In nature and in the laboratory, many physical systems show flicker noise. For example, flicker noise appears in a variety of systems ranging from the light of quasars to water flows in rivers , music and speech , and electrical measurements . Despite its ubiquity, a universal explanation of flicker noise is still lacking. Bak, Tang, and Wiesenfeld have proposed that self-organized criticality (SOC) may be the mechanism underlying flicker noise. They also demonstrated their idea of SOC with a cellular automaton model, the BTW sandpile model. 
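For a discrete, finite signal the integrals above become circular sums and the transform an FFT, and the equivalence of the two routes to $`S(f)`$ — squared modulus of the transform versus transform of the auto-correlation — can be checked directly. A small numerical sketch (Python/NumPy; the Gaussian test signal is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(0)
s = rng.normal(size=256)          # arbitrary real-valued test signal
N = len(s)

# circular autocorrelation C[tau] = sum_t s[t] * s[(t+tau) mod N], cf. Eq. (1)
C = np.array([np.dot(s, np.roll(s, -tau)) for tau in range(N)])

S_periodogram = np.abs(np.fft.fft(s))**2    # |s_hat(f)|^2, cf. Eqs. (2)-(3)
S_from_autocorr = np.fft.fft(C).real        # Fourier transform of C

# Wiener-Khinchin: the two spectra agree to machine precision
assert np.allclose(S_periodogram, S_from_autocorr)
```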
However, the model they proposed does not exhibit flicker noise in its 1D and 2D versions . In particular, the 1D BTW sandpile model does not even exhibit SOC behavior. Experiments on piles of rice grains were done to investigate whether real granular systems display SOC behavior. It was found that ricepiles with elongated rice grains exhibit SOC behavior. The key ingredient here is the heterogeneity of the system. Because elongated rice grains can be packed in different ways , the ricepile has many different metastable states. A 1D model of the ricepile was proposed by the same group of authors; the avalanche distribution and transit-time distributions were discussed . The model was later investigated by several other authors including us . Attention was paid to the avalanche dynamics and transit-time statistics of the model. In this paper, we are concerned with the temporal fluctuation of the average slope of the system. We want to investigate whether it shows flicker noise or flicker-like behavior. The ricepile model is defined as follows: Consider a one-dimensional array of sites $`1`$, $`2`$, …, $`L`$. Each site contains an integer number $`h_i`$ of rice grains. Here $`h_i`$ is called the local height of the ricepile at site $`i`$. The system is initialized by setting $`h_i=0`$ for all sites. This means the ricepile is to be built up from scratch. The system is driven by dropping rice grains onto the pile. If one grain of rice is dropped at site $`i`$, then the height of column $`i`$ will increase by $`1`$, $`h_i\to h_i+1`$. With the dropping of rice grains, a ricepile is built up. The local slope of the pile is defined as $`z_i=h_i-h_{i+1}`$. Whenever the local slope $`z_i`$ exceeds a certain threshold $`z_i^c`$ (specified in the following), site $`i`$ will topple and one grain of rice will be transferred to its neighbor site on the right. That is, $`h_i\to h_i-1`$ and $`h_{i+1}\to h_{i+1}+1`$. 
In this model, rice grains are allowed to leave the pile only from the right end, while the left end of the pile is closed. The boundary conditions are thus $`h_0=h_1`$ and $`h_{L+1}=0`$. The ricepile is said to be stable if no local slope exceeds its threshold value, that is, $`z_i\le z_i^c`$ for all $`i`$. Rice grains are dropped onto the pile only when the pile is stable. In an unstable state, all sites $`i`$ with $`z_i>z_i^c`$ topple in parallel. The toppling of one or more sites is called an avalanche event, and the size of an avalanche is defined as the number of topplings involved in the event. The values of the $`z_i^c`$’s are essential to the definition of the model. As in Ref. , every time site $`i`$ topples, the threshold slope $`z_i^c`$ takes a new value randomly chosen from $`1`$ to $`r`$ with equal probability. Here $`r`$ is an integer no less than $`1`$. The parameter $`r`$ reflects the level of medium disorder: the larger the $`r`$, the higher the level of medium disorder of the system. If $`r=1`$ the model becomes the $`1`$D BTW sandpile model. When $`r=2`$, the model reduces to the model studied in reference . In ref. , we have investigated the effect of disorder on the universality of the avalanche size distribution and transit time distribution. In the present work, we performed extensive numerical simulations of the system evolution. Let us first study the case where rice grains are added only to the left end of the pile, i.e., only to the site $`i=1`$, which is in accordance with the experimental setup . We shall refer to this driving mechanism as fixed driving. When the stationary state is reached, we record the average slope $`z(t)=h_1(t)/L`$ of the pile after every avalanche. Here time $`t`$ is measured in the number of grains added to the system. We thus obtain a time series $`z(t)`$. Typical results for the fluctuation of the slope $`z(t)`$ can be seen in Fig.1.a. 
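The rules above translate into a few lines of code. The sketch below (Python; system size, grain count and other values are arbitrary choices) implements fixed driving, parallel topplings with thresholds redrawn uniformly from $`1`$ to $`r`$, and records the average slope $`z(t)=h_1(t)/L`$ after each added grain:

```python
import random

def ricepile(L=32, r=2, grains=2000, seed=0):
    rng = random.Random(seed)
    h = [0] * (L + 1)          # columns 0..L-1; h[L] = 0 is the open right edge
    zc = [rng.randint(1, r) for _ in range(L)]   # random critical slopes
    z_t = []                   # time series of the average slope z(t) = h_1(t)/L
    for _ in range(grains):
        h[0] += 1              # fixed driving: grain dropped on the left column
        unstable = [i for i in range(L) if h[i] - h[i + 1] > zc[i]]
        while unstable:        # relax by parallel topplings
            for i in unstable:
                h[i] -= 1
                h[i + 1] += 1
                zc[i] = rng.randint(1, r)   # redraw threshold after toppling
            h[L] = 0           # grains reaching column L leave the pile
            unstable = [i for i in range(L) if h[i] - h[i + 1] > zc[i]]
        z_t.append(h[0] / L)
    return h, z_t
```

Random driving is obtained by replacing the `h[0] += 1` line with an increment at a randomly chosen site; the recorded series `z_t` is then the input for the spectral analysis.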
We calculate the power spectrum of this time series according to Eq.(2). We find that for not too small systems the slope fluctuation displays $`1/f`$-like behavior. In fact, we obtain a power spectrum $`S(f)`$ scaling as $`1/f^\alpha `$ at low frequency, with the exponent $`\alpha =1.3\pm 0.1`$. We note the following points: (a) the temporal behavior of the ricepile model is dramatically different from that of the 1D BTW sandpile model. The 1D BTW model has a stable state , which, once reached, cannot be altered by subsequent droppings of sand grains. Thus the 1D BTW model does not display SOC behavior. The slope of the 1D sandpile model assumes a constant value, and the power spectrum is thus of the form $`S(f)\propto \delta (f)`$. In other words, there is only a dc component in the power spectrum of the 1D BTW model. (b) With the introduction of the varying thresholds $`z_i^c`$ into the ricepile model, the behavior of the system becomes much richer. The avalanche distribution follows a power law , which is an important signature of SOC. Besides that, the temporal fluctuation of $`z(t)`$ has a power spectrum of flicker type at low frequency. We notice that the power spectrum is flattened at low frequency for small systems. This is a size effect. It is believed that flicker-noise fluctuation is closely related to long-range spatial correlations in the system. For a small system, the spatial correlation that can be built up is limited by the system size; thus the long-range temporal correlation required by flicker noise is truncated at low frequency. Our numerical results verify the above discussion. In Fig. 2, we show the power spectrum of $`z(t)`$ for different system sizes. It is clear that when the system size increases the $`1/f^\alpha `$ behavior extends to lower and lower frequency. We also investigated the effect of the level of disorder on the power spectrum. This was done by simulating the system with different values of $`r`$. In Fig. 
3 we show the power spectra for the system with different $`r`$. It can be seen from this figure that the power-law part (straight in the log-log plot) extends to lower frequency for higher values of $`r`$. The effect can be understood as follows. When the level of disorder is increased, the amplitude of the slope fluctuation also increases. A larger-amplitude fluctuation of $`z(t)`$ requires more grains to be added to the system. Hence the period of this fluctuation increases, which gives rise to the increase of low-frequency components in the power spectrum. We have also collected statistics on the deviation of the ricepile slope from its average value in the stationary state. We found that the greater the $`r`$, the greater the deviation. Note that the value of $`r>1`$ does not affect the exponent $`\alpha `$, just as it does not alter the universal avalanche exponent $`\tau `$ of the avalanche distribution . It is also interesting to study the effect of the driving mechanism on the temporal behavior of the system. Now, instead of dropping rice grains at the left end of the pile, we drop rice grains at randomly chosen sites of the system. This way of dropping rice grains represents a different driving mechanism, which we shall refer to as random driving. Numerical simulations with this random driving were made and the time series $`z(t)`$ was recorded. For this way of driving, typical results for the fluctuation of $`z(t)`$ can be seen in Fig. 1.b. It appears that $`z(t)`$ now fluctuates more regularly than it does for fixed driving. We found that under random driving the temporal fluctuation has a trivial $`1/f^2`$ behavior, as in the case of various previously studied sandpiles . In Fig. 3, we compare the power spectra of the two cases with different driving mechanisms. Two groups of curves are shown in this figure, each with a different exponent $`\alpha `$. For the fixed driving we have $`\alpha \simeq 1.25`$, while for the random driving we have $`\alpha \simeq 2.0`$. 
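The exponents quoted here can be extracted by a least-squares fit of $`\mathrm{log}S(f)`$ versus $`\mathrm{log}f`$ over the low-frequency part of an averaged periodogram. As a consistency check of that procedure (and of the random-walk picture for random driving), a plain random walk should yield $`\alpha `$ close to $`2`$. A Python/NumPy sketch; the fitting window `n // 8` and the ensemble size are arbitrary choices:

```python
import numpy as np

def fit_alpha(signals, frac=8):
    """Estimate alpha from S(f) ~ 1/f^alpha: average the periodogram over
    realizations, then fit log S versus log f at low frequencies."""
    n = signals.shape[1]
    S = np.mean(np.abs(np.fft.rfft(signals, axis=1))**2, axis=0)
    k = np.arange(1, n // frac)           # low-frequency window, skip f = 0
    slope, _ = np.polyfit(np.log(k), np.log(S[k]), 1)
    return -slope

rng = np.random.default_rng(1)
walks = np.cumsum(rng.normal(size=(20, 4096)), axis=1)   # 20 random walks
alpha = fit_alpha(walks)     # expected to come out close to 2
```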
The distinction between the two behaviors is quite clear. This shows that the driving mechanism, as an integral part of the model, plays an important role in the temporal behavior of the system. Recent work on the continuous version of the BTW sandpile model, and on the quasi-1D BTW model , also showed that certain driving mechanisms are necessary conditions for these models to display $`1/f`$ fluctuations of the total amount of sand (or say, energy). Here we present a heuristic discussion of why the exponent $`\alpha `$ is smaller for fixed driving than for random driving. For fixed driving, the height $`h_1`$ at site $`i=1`$ changes with almost every dropping, and so does the average slope $`z(t)`$. Thus the high-frequency component of the $`z(t)`$ fluctuation has a heavier weight in its power spectrum, and this makes the exponent $`\alpha `$ smaller. For random driving, the rice grains are dropped onto the pile at random sites, so the spatial correlations previously built up can easily be destroyed, making $`z(t)`$ behave more or less as a random walk. Hence the exponent $`\alpha `$ for this case should be very close to $`2`$. For random driving, each site has the same probability of receiving a grain in every drop. For fixed driving, however, only the left end site receives grains, so there is in some sense a breaking of symmetry, which leads to different scalings of the slope fluctuations. To see more of the effect of the driving mechanism, it is helpful to investigate the profile fluctuations of the pile under different drivings. Because avalanches change the surface of the ricepile, the pile fluctuates around an average profile, and the size of the fluctuations characterizes the active zone of the ricepile surface. Let the standard deviation of the height at site $`i`$ be $`\sigma _h(i,L)=\sqrt{\langle h_i^2\rangle -\langle h_i\rangle ^2}`$. Here $`\langle \cdots \rangle `$ represents an average over time. 
Then we calculate the profile width of the ricepile, $`w=\frac{1}{L}\sum _i\sigma _h(i,L)`$, which is a function of the system size. In Fig. 4, we show the profile width of the ricepile for different $`r`$ and different driving mechanisms. It can be seen that $`w`$ scales with $`L`$ as $`w\propto L^\chi `$, with $`\chi =0.25\pm 0.01`$ for fixed driving and $`\chi =0.09\pm 0.01`$ for random driving. For a given driving mechanism, $`w`$ increases with increasing $`r`$. As we stated in Ref. , the parameter $`r`$ reflects the level of medium disorder in the ricepile. Although the level of disorder does not change the scaling exponent $`\chi `$, it does affect the amplitude of the fluctuations: for greater $`r`$, the profile width is larger. It can be seen in the figure that the data points for $`r=4`$ lie above those for $`r=2`$, for a given driving mechanism. A recent experiment showed that piles of polished rice grains have a smaller profile width than unpolished ones, which have a higher level of medium disorder. In conclusion, we have studied the temporal fluctuation of the slope of the ricepile model in its critical stationary state. The power spectrum of this fluctuation is closely related to the driving mechanism. When the rice grains are dropped at the left end of the pile, the slope fluctuates with a flicker-type power spectrum, with the exponent $`\alpha =1.3\pm 0.1`$. When the driving mechanism is changed to random driving, the model displays $`1/f^2`$ behavior. A greater system size and a higher level of disorder extend the frequency range over which the power spectrum has the form $`1/f^\alpha `$. The author thanks Prof. Vespignani for helpful discussions. This work was supported by the National Natural Science Foundation of China under Grant No. 19705002, and the Research Fund for the Doctoral Program of Higher Education (RFDP).
# Boundary-induced phase transitions in traffic flow ## Abstract Boundary-induced phase transitions are one of the surprising phenomena appearing in nonequilibrium systems. These transitions have been found in driven systems, especially the asymmetric simple exclusion process. However, so far no direct observation of this phenomenon in real systems exists. Here we present evidence for the appearance of such a nonequilibrium phase transition in traffic flow occurring on highways in the vicinity of on- and off-ramps. Measurements on a German motorway close to Cologne show a first-order nonequilibrium phase transition between a free-flow phase and a congested phase. It is induced by the interplay of density waves (caused by an on-ramp) and a shock wave moving on the motorway. The full phase diagram, including the effect of off-ramps, is explored using computer simulations and suggests means to optimize the capacity of a traffic network. One-dimensional physical systems with short-ranged interactions in thermal equilibrium do not exhibit phase transitions. This is no longer true if the action of external forces sets up a steady mass transport and drives the system out of equilibrium. Then boundary conditions, usually insignificant for an equilibrium system, can induce nonequilibrium phase transitions. Moreover, such phase transitions may occur in a rather wide class of driven complex systems, including biological and sociological mechanisms involving many interacting agents. In spite of the importance of this phenomenon and a number of theoretical studies , it has never been observed directly. So far only indirect experimental evidence for a boundary-induced phase transition exists in older studies of the kinetics of biopolymerization on nucleic acid templates . In the present work we report the first direct observation of a boundary-induced phase transition in traffic flow. 
Analysis of traffic data sets taken on a motorway near Cologne exhibits transitions from free-flow traffic to congested traffic, caused by a boundary effect, viz. the presence of an on-ramp. These transitions are characterized by a discontinuous reduction of the average speed . Vehicular traffic on a motorway is controlled by a mixture of bulk and boundary effects caused by on- and off-ramps, a varying number of lanes, and speed limits, leaving aside temporary effects of changing weather conditions and other non-permanent factors. The fundamental characteristic of the bulk motion is the stationary flow-density diagram, i.e. the fundamental diagram, which incorporates the collective effects of individual drivers’ behavior, such as trying to maintain an optimally high speed while observing safety precautions. The qualitative shape of the flow-density diagram $`j(\rho )`$ is largely independent of the precise details of the road and hence amenable to numerical analysis using either stochastic lattice gas models or partial differential equations . A by now well-established lattice gas model for traffic flow, the cellular automaton model by Nagel and Schreckenberg , reproduces empirical traffic data rather well. Fig. 1 shows simulation data for the fundamental diagram obtained from the Nagel-Schreckenberg (NaSch) model. This has to be compared to measurements of the flow $`j`$ taken with the help of detectors on the motorway A1 near Cologne, which show a maximum of about 2000 vehicles/hour at a density of about $`\rho ^{*}=20`$ vehicles per lane and km . At densities below $`\rho ^{*}`$ one observes free flow, while for larger densities one observes congested traffic. 
In addition to the density dependence of the flow, two important characteristics are derived directly from the fundamental diagram: the shock velocity of a ‘domain wall’ between two stationary regions of densities $`\rho ^{-}`$, $`\rho ^+`$ $$v_{shock}=\frac{j(\rho ^+)-j(\rho ^{-})}{\rho ^+-\rho ^{-}},$$ (1) obtained from mass conservation, and the collective velocity $$v_c=\frac{\partial j(\rho )}{\partial \rho }$$ (2) which is the velocity of the center of mass of a local perturbation in a homogeneous, stationary background of density $`\rho `$. Both velocities are readily observed in real traffic. The collective velocity $`v_c`$ describes the upstream movement of a local, compact jam. In the density range 25–130 cars/km, $`v_c`$ ranges from approximately $`-10`$ km/hour to $`-20`$ km/hour (Fig. 1), which has to be compared with the empirically observed value $`v\approx -15`$ km/hour . The shock velocity is the velocity of the upstream front of an extended, stable traffic jam. The formation of a stable shock is usually a boundary-driven process, caused by a ‘bottleneck’ on the road. Bottlenecks on a highway arise from a reduction in the number of lanes and from on-ramps where additional cars enter the road . The experimental data considered here (see Fig. 2 for the relevant part of the highway) show boundary effects caused by the presence of an on-ramp. Far upstream from the on-ramp, free flow of vehicles with density $`\rho ^{-}`$ and flow $`j^{-}\equiv j(\rho ^{-})`$ is maintained. Just before the on-ramp the vehicle density is $`\rho ^+`$ with corresponding flow $`j^+\equiv j(\rho ^+)`$. Note that no experimental data are available for $`\rho ^{-}`$, $`j^{-}`$ and $`\rho ^+`$, $`j^+`$, nor for the activity of the ramp. The only data come from a detector located upstream from the on-ramp which measures a traffic density $`\widehat{\rho }`$ and the corresponding flow $`\widehat{ȷ}`$. Next the effects of the on-ramp are considered. Cars entering the motorway cause the mainstream of vehicles to slow down locally. 
Therefore, the vehicle density just before the on-ramp increases to $`\rho ^+>\rho ^{-}`$. Then a shock, formed at the on-ramp, will propagate with mean velocity $`v_{shock}`$ (see (1)). Depending on the sign of $`v_{shock}`$, two scenarios are possible: 1) $`v_{shock}>0`$ (i.e. $`j^+>j^{-}`$): In this case the shock propagates (on average) downstream towards the on-ramp. Only through fluctuations is a brief upstream motion possible. Therefore the detector will measure a traffic density $`\widehat{\rho }=\rho ^{-}`$ and flow $`\widehat{ȷ}=j^{-}`$. 2) $`v_{shock}<0`$ (i.e. $`j^+<j^{-}`$): The shock wave starts propagating with the mean velocity $`v_{shock}`$ upstream, thus expanding the congested traffic region with density $`\rho ^+`$. The detector will now measure $`\widehat{\rho }=\rho ^+`$ and flow $`\widehat{ȷ}=j^+`$. Let us now discuss the transition between these two scenarios. Suppose one starts with a situation where $`j^+>j^{-}`$ is realized. If now the far-upstream density $`\rho ^{-}`$ increases, it will reach a critical value $`\rho _{crit}<\rho ^{*}`$ above which $`j^{-}>j^+`$, i.e., the free flow upstream $`j^{-}`$ prevails over the flow $`j^+`$ which the ‘bottleneck’, i.e. the on-ramp, is able to support. At this point the shock velocity $`v_{shock}`$ changes sign (see (1)) and the shock starts traveling upstream. As a result, the stationary bulk density $`\widehat{\rho }`$ measured by the detector upstream from the on-ramp will change discontinuously from the critical value $`\rho _{crit}`$ to $`\rho ^+`$. This marks a nonequilibrium phase transition of first order with respect to the order parameter $`\widehat{\rho }`$. The discontinuous change of $`\widehat{\rho }`$ also leads to an abrupt reduction of the local velocity. Notice that the flow $`\widehat{ȷ}=j^+`$ through the on-ramp (then also measured by the detector) will stay independent of the free flow $`j^{-}`$ upstream from the congested region as long as the condition $`j^{-}>j^+`$ holds. 
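The sign criterion above is easy to play with numerically, given any concave flow-density relation. The sketch below (Python) uses a simple parabolic fundamental diagram purely for illustration — the free-flow speed `v_f` and jam density `rho_max` are invented numbers, not values fitted to the A1 data:

```python
def j(rho, v_f=120.0, rho_max=160.0):
    """Illustrative parabolic (Greenshields-type) fundamental diagram."""
    return rho * v_f * (1.0 - rho / rho_max)

def v_shock(rho_minus, rho_plus):
    # Eq. (1): mean velocity of the shock between densities rho- and rho+
    return (j(rho_plus) - j(rho_minus)) / (rho_plus - rho_minus)

def v_collective(rho, h=1e-6):
    # Eq. (2): v_c = dj/drho, velocity of a small local perturbation
    return (j(rho + h) - j(rho - h)) / (2.0 * h)

# scenario 1: light upstream traffic, j+ > j-  ->  v_shock > 0, free flow survives
# scenario 2: heavy upstream traffic, j+ < j-  ->  v_shock < 0, congestion spreads
v1 = v_shock(20.0, 100.0)    # positive
v2 = v_shock(60.0, 140.0)    # negative
```

For this toy diagram the maximal flow sits at `rho_max / 2`, where `v_collective` changes sign from positive to negative.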
Empirically this phenomenon can be seen in the traffic data taken from measurements at the detector D1 on the motorway A1 close to Cologne . Fig. 3 shows a typical time series of the one-minute velocity averages. One can clearly see the sharp drop of the velocity at about 8 a.m. The measurements of the flow versus local density, i.e. the fundamental diagram (Fig. 4), also support our interpretation. Two branches can be distinguished. The increasing part corresponds to an almost linear rise of the flow with density in the free-flow regime . In accordance with our considerations, this part of the flow diagram is not affected by the presence of the on-ramp at all, and one measures $`\widehat{ȷ}=j^{-}`$, which is the actual upstream flow. The second branch consists of measurements taken during congested traffic hours, the transition period being omitted for better statistics. The transition from free flow to congested traffic is characterized by a discontinuous reduction of the local velocity. However, as predicted above, the flow does not change significantly in the congested regime. In contrast, large density fluctuations can be observed in local measurements. Therefore in this regime the density does not take the constant value $`\rho ^+`$ suggested by the argument given above, but varies from 20 veh/km/lane to 80 veh/km/lane (see Fig. 4). One should stress here that congested traffic data are usually not easy to interpret, because the traffic conditions (the mean inflow and outflow of cars at the on- and off-ramps, and hence the bulk mean flow) change in time. According to our arguments, in the congested regime the detector measures $`j^+`$, determined solely by the on-ramp activity. Therefore, $`j^+(t)=j(\rho ^+(t))<j^{-}(t)`$ must be satisfied. During times of very dense traffic one always expects cars ready to enter the motorway at the on-ramp, thus guaranteeing a sufficient and approximately constant on-ramp activity. 
The measured flow is constant over long periods of time, which is in agreement with the notion that the transition is due to a stable traffic jam. Spontaneously emerging and decaying jams would lead to the observation of a non-constant flow. The use of our approach is not limited to a qualitative explanation of the traffic data. Beyond that, it can also be used to calculate the phase diagrams of systems with open boundary conditions for a large class of traffic models. We modeled a section of a road with an on-ramp on the left and an off-ramp (or on-ramp) on the right using the NaSch cellular automaton . We modify the basic model by using open boundary conditions with injection of cars at the left boundary (corresponding to in-flow into the road segment) and removal of cars at the right boundary (corresponding to outflow). It can therefore also be regarded as a generalization of the asymmetric simple exclusion process to particles with higher velocity. During the simulations, local measurements of the velocity were performed analogous to the experimental setup. For comparison, the results of the computer simulations have been included in Fig. 3. Note that even the quantitative agreement with the empirical data is very good. This has been achieved by using a finer discretization of the model, i.e. the length of a cell is taken as $`l=2.5`$ m. The results were obtained for $`L=960`$, $`p=0.25`$ and $`v_{max}=13`$. We kept the input probability $`\alpha =0.65`$ constant. The free-flow part is then obtained using $`\beta =1.0`$ and the congested part using $`\beta =0.55`$. The transition was observed ten minutes after we reduced the output probability. The “detector” was located at the link from site $`480`$ to $`481`$. Fig. 5 shows the full phase diagram of the NaSch model with open boundary conditions. 
It describes the stationary bulk density $`\widehat{\rho }`$ as a function of the far-upstream in-flow boundary density $`\rho ^{-}`$ and the effective right boundary density $`\rho ^+`$. The case of an on-ramp (or shrinking road width, etc.) at the right boundary corresponds to the situation discussed above. Here, the density is increased locally to $`\rho ^+>\rho ^{-}`$. In agreement with the empirical observation, we find a line of first-order transitions from a free-flow (FF) phase with bulk density $`\widehat{\rho }=\rho ^{-}`$ to a congested (CT) phase with $`\widehat{\rho }=\rho ^+`$. On this line $`v_{shock}`$ changes sign. The case of an off-ramp (or expansion of road space, etc.) leads to a local decrease $`\rho ^+<\rho ^{-}`$ of the density. Here the collective velocity $`v_c`$ (2) plays a prominent role. As long as $`v_c`$ is positive (i.e. in the free-flow regime $`\rho ^{-}<\rho ^{*}`$, see Fig. 1), perturbations caused by a small increase of the upstream boundary density $`\rho ^{-}`$ gradually spread into the bulk, rendering $`\widehat{\rho }=\rho ^{-}`$ (FF regime). At $`\rho ^{-}=\rho ^{*}`$, $`v_c`$ changes sign and an overfeeding effect occurs: a perturbation from the upstream boundary does not spread into the bulk, and therefore a further increase of the upstream boundary density does not increase the bulk density. The system enters the maximal flow (MF) phase with constant bulk density $`\widehat{\rho }=\rho ^{*}`$ and flow $`j(\rho ^{*})=j_{max}`$. The transition to the MF phase is of second order, because $`\widehat{\rho }`$ changes continuously across the phase transition point. The existence of a maximal flow phase has not been emphasized in the context of traffic flow until now. At the same time, it is the most desirable phase, carrying the maximal possible throughput of vehicles $`j_{max}`$. For practical purposes our observations may be used directly in order to operate a highway in the optimal regime. E.g. 
the flow near a lane reduction could be increased significantly if the traffic state at the entry allowed the maximal possible flow of the bottleneck to be attained. This could be achieved by controlling the density far upstream, e.g. by temporarily closing an on-ramp, such that the cars still enter the bottleneck at high velocity. We stress that the stationary phase diagram Fig. 5 is generic in the sense that it is determined solely by the macroscopic flow-density relation. The number of lanes of the road, the distribution of individual optimal velocities, speed limits, and other details enter only insofar as they determine the exact values characterizing the flow-density relation for that particular road. We also note that throughout the paper we assumed the external conditions to vary slowly, so that the system has enough time to readjust to its new stationary state. Experimenting with different cellular traffic models on a real time scale shows that the typical time to reach a stationary state in a road segment of about 1.2 km is of the order of 3-5 min, which is reasonably small. Close to phase transition lines, however, where the shock velocity vanishes, this time diverges and intrinsically non-stationary dynamic phenomena take the lead. In conclusion, we have shown that traffic data collected on German motorways provide evidence for a boundary-induced nonequilibrium phase transition of first order from the free-flowing to the congested phase. The features of this phenomenon are readily understood in terms of the flow-density diagram. The dynamical mechanism leading to this transition is an interplay of shocks and local fluctuations caused by an on-ramp. Full investigation of a cellular automaton model for traffic flow reproduces this phase transition, but also exhibits a richer phase diagram with an interesting maximal flow phase. 
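Such an open-boundary investigation can be reproduced with a compact single-lane implementation. The sketch below (Python) uses one common variant of the boundary coupling — a standing car is injected onto an empty first cell with probability $`\alpha `$, and the exit is open with probability $`\beta `$ in each time step; other variants exist, and all parameter values here are merely illustrative rather than the ones quoted above:

```python
import random

def nasch_open(L=200, vmax=5, p=0.25, alpha=0.65, beta=0.55,
               steps=1000, seed=0):
    rng = random.Random(seed)
    cars = []                  # list of [position, velocity], sorted by position
    for _ in range(steps):
        # left boundary: inject a standing car onto cell 0 if it is empty
        if rng.random() < alpha and (not cars or cars[0][0] > 0):
            cars.insert(0, [0, 0])
        # right boundary: exit open with probability beta; otherwise cell L-1
        # is the last reachable cell (virtual blockage just behind it)
        horizon = L + vmax + 1 if rng.random() < beta else L
        for k in range(len(cars)):
            x, v = cars[k]
            ahead = cars[k + 1][0] if k + 1 < len(cars) else horizon
            v = min(v + 1, vmax)            # 1. acceleration
            v = min(v, ahead - x - 1)       # 2. braking (keep gap >= 0)
            if v > 0 and rng.random() < p:  # 3. random deceleration
                v -= 1
            cars[k][1] = v
        for c in cars:                      # 4. parallel movement
            c[0] += c[1]
        while cars and cars[-1][0] >= L:    # cars beyond the last cell leave
            cars.pop()
    return cars
```

A “detector” can be mimicked by counting cars in a fixed window of cells each time step and forming one-minute-style averages of density and speed, in the spirit of Fig. 3.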
These results are not only important from the point of view of nonequilibrium physics, but also suggest new mechanisms of traffic control. Acknowledgments: We thank Lutz Neubert for useful discussions and help in producing Figs. 3 and 4. L. S. acknowledges support from the Deutsche Forschungsgemeinschaft under Grant No. SA864/1-1.
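The steady-state selection described above can be summarized by the extremal-current principle: the bulk density minimizes the flow on the interval between the boundary densities when the right boundary raises the density, and maximizes it when the right boundary lowers it. A minimal sketch, using the hypothetical concave flow-density relation j(ρ)=ρ(1−ρ) with ρ*=0.5 as a stand-in for an empirical fundamental diagram (the function names are ours, not from the paper):

```python
def flow(rho):
    """Hypothetical concave flow-density relation with rho* = 0.5;
    a stand-in for an empirically measured fundamental diagram."""
    return rho * (1.0 - rho)

def bulk_density(rho_minus, rho_plus, j=flow, n=10001):
    """Stationary bulk density from the extremal-current principle:
    minimize j over [rho-, rho+] when the density is raised at the
    right boundary (on-ramp), maximize it when it is lowered (off-ramp)."""
    lo, hi = sorted((rho_minus, rho_plus))
    grid = [lo + (hi - lo) * k / (n - 1) for k in range(n)]
    if rho_minus < rho_plus:
        return min(grid, key=j)   # FF or CT, whichever carries less flow
    return max(grid, key=j)       # FF or the maximal flow (MF) phase
```

With this j(ρ), `bulk_density(0.2, 0.7)` stays free flowing at 0.2, `bulk_density(0.2, 0.9)` congests to 0.9 since j(0.9)&lt;j(0.2), and `bulk_density(0.7, 0.2)` enters the MF phase at ρ*=0.5.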
# Synchronization of Chaotic Systems by Common Random Forcing ## I Introduction The issue of whether chaotic systems can be synchronized by common random noise sources has attracted much attention recently mar94 ; mal96 ; san97 ; min98 ; per99 ; ali97 . It has been reported that for some chaotic maps, the introduction of the same (additive) noise in independent copies of the same map could lead (for large enough noise intensity) to a collapse onto the same trajectory, independently of the initial condition assigned to each of the copies mar94 . This synchronization of chaotic systems by the addition of random terms is a remarkable and counterintuitive effect of noise. Nowadays, some contradictory results exist concerning the existence of this phenomenon of noise–induced synchronization. It is the purpose of this paper to give explicit examples which show that one can indeed obtain such a synchronization. Moreover, the examples open the possibility of obtaining such a synchronization in electronic circuits, hence suggesting that noise-induced synchronization of chaotic circuits can indeed be used for encryption purposes. Although the effect of noise on chaotic systems was already considered at the beginning of the 80’s mt83 , to our knowledge the first attempt to synchronize two chaotic systems by using the same noise signal was made by Maritan and Banavar mar94 . These authors analysed the logistic map in the presence of noise: $$x_{n+1}=4x_n(1-x_n)+\xi _n$$ (1) where $`\xi _n`$ is the noise term, considered to be uniformly distributed in a symmetric interval $`[-W,+W]`$. They showed that, if $`W`$ was large enough (i.e. for a large noise intensity), two different trajectories which started with different initial conditions but used otherwise the same sequence of random numbers would eventually collapse onto the same trajectory. 
This result was heavily criticised by Pikovsky pik94 , who argued that two systems can synchronize only if the largest Lyapunov exponent is negative. He then showed that the largest Lyapunov exponent of the logistic map in the presence of noise is always positive, and concluded that the synchronization is, in fact, a numerical effect of the lack of precision of the calculation. Furthermore, Malescio mal96 pointed out that the noise used to simulate Eq.(1) in mar94 was not really symmetric. This is because the requirement $`x_n\in (0,1)`$ for all $`n`$ actually leads to discarding those values of the random number $`\xi _n`$ which do not fulfill this condition. The average value of the random numbers which have been accepted is different from zero, hence producing an effectively biased noise, i.e. one which does not have zero mean. The introduction of a non-zero mean noise means that we are essentially altering the properties of the deterministic map. Noise-induced synchronization has since been studied for other chaotic systems such as the Lorenz model mar94 ; mal96 and the Chua circuit san97 ; per99 . Synchronization of trajectories starting with different initial conditions but using otherwise the same sequence of random numbers was observed in the numerical integration of a Lorenz system in the presence of a noise distributed uniformly in the interval $`[0,W_L]`$, i.e. again a noise which does not have a mean of zero mar94 . Other detailed studies mal96 also conclude that it is not possible to synchronize trajectories in a Lorenz system by adding an unbiased noise. Similarly, the studies of the Chua circuit always conclude that a biased noise is needed for synchronization per99 . Therefore a widespread belief exists that it is not possible to synchronize two chaotic systems by injecting the same noisy signal into both of them. 
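Malescio's observation about conditioned noise is easy to reproduce numerically. The sketch below (the function name, step counts, and parameter values are illustrative, not taken from the papers) iterates the noisy logistic map and redraws $`\xi _n`$ whenever the constraint $`x_{n+1}\in (0,1)`$ would be violated; a nonzero redraw count shows that the accepted noise is no longer drawn from the nominal symmetric distribution, so its effective mean need not vanish:

```python
import random

def biased_logistic_run(W=0.2, steps=20000, seed=1):
    """Iterate x' = 4x(1-x) + xi with xi ~ U[-W, W], redrawing xi
    whenever x' would leave (0,1).  Returns the mean of the accepted
    noise values and the number of redraws; any redraw means the
    effective noise distribution is conditioned, hence biased."""
    rng = random.Random(seed)
    x, total, redraws = 0.3, 0.0, 0
    for _ in range(steps):
        while True:
            xi = rng.uniform(-W, W)
            x_next = 4.0 * x * (1.0 - x) + xi
            if 0.0 < x_next < 1.0:
                break               # accept this noise value
            redraws += 1            # reject: the constraint filters xi
        total += xi
        x = x_next
    return total / steps, redraws
```

Whenever the orbit sits near the top of the parabola (map value close to 1), positive noise values are rejected, and near the edges negative ones are; the accepted sequence is therefore state-dependent rather than i.i.d. symmetric.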
However, in this paper we give numerical evidence that it is possible to synchronize two chaotic systems by the addition of a common noise which is Gaussian distributed and unbiased. We analyse specifically a 1-d map and the Lorenz system, both in the chaotic region. The necessary criterion introduced in ref. pik94 is fully confirmed, and some heuristic arguments are given about the general validity of our results. Finally, we conclude with some open questions. ## II Results The first example is that of the map: $$x_{n+1}=f(x_n)+ϵ\xi _n$$ (2) where $`\xi _n`$ is a set of uncorrelated Gaussian variables of zero mean and variance 1. We use explicitly $$f(x)=\mathrm{exp}\left[-\left(\frac{x-0.5}{\omega }\right)^2\right]$$ (3) We plot in Fig.(1a) the bifurcation diagram of this map. In the noiseless case, we can see the typical windows in which the system behaves chaotically. The associated Lyapunov exponent, $`\lambda `$, is positive in these regions. For instance, for $`\omega =0.3`$ (the case we will be considering throughout the paper) it is $`\lambda \simeq 0.53`$. In Fig.(1b) we observe that the Lyapunov exponent becomes negative for most values of $`\omega `$ for a large enough noise level $`ϵ`$. Again for $`\omega =0.3`$, and now for $`ϵ=0.2`$, it is $`\lambda =-0.17`$. For the noiseless case, it is $`\lambda >0`$ and trajectories starting with different initial conditions, obviously, remain different for all the iteration steps, see Fig.(2a). However, when moderate levels of noise ($`ϵ\gtrsim 0.1`$) are used, $`\lambda `$ becomes negative and trajectories starting with different initial conditions, but using the same sequence of random numbers, synchronize perfectly, see Fig.(2b). According to pik94 , convergence of trajectories to the same one, or loss of memory of the initial condition, can be stated as negativity of the Lyapunov exponent. 
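This behavior is straightforward to check numerically. The sketch below (the initial conditions and variable names are our choices) iterates two copies of the map of Eqs. (2)–(3) with ω=0.3, feeding both the same sequence of zero-mean Gaussian numbers; with ε=0.2 the two trajectories collapse onto each other, while for ε=0 they remain distinct:

```python
import math
import random

def gauss_map(x, w=0.3):
    """Single-humped map f(x) = exp(-((x-0.5)/w)^2) of Eq. (3)."""
    return math.exp(-((x - 0.5) / w) ** 2)

def iterate_pair(eps, steps=5000, seed=7):
    """Drive two copies of Eq. (2) with the SAME Gaussian noise
    sequence and return their final distance.  Initial conditions
    0.1 and 0.6 are deliberately not mirror images about 0.5, since
    the map's symmetry would otherwise identify them trivially."""
    rng = random.Random(seed)
    x1, x2 = 0.1, 0.6
    for _ in range(steps):
        xi = rng.gauss(0.0, 1.0)     # common unbiased noise term
        x1 = gauss_map(x1) + eps * xi
        x2 = gauss_map(x2) + eps * xi
    return abs(x1 - x2)
```

Once the two states agree to machine precision they remain identical forever, since the map and the noise sequence are the same for both copies.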
The Lyapunov exponent of a map $`x_{n+1}=F(x_n)`$ is defined as $$\lambda =\underset{N\to \infty }{lim}\frac{1}{N}\underset{i=1}{\overset{N}{}}\mathrm{ln}|F^{\prime }(x_i)|$$ (4) It is the average of (the logarithm of the absolute value of) the successive slopes $`F^{\prime }`$ found by the trajectory. Slopes in $`[-1,1]`$ contribute to $`\lambda `$ with negative values, indicating trajectory convergence. Slopes of larger magnitude contribute with positive values, indicating trajectory divergence. Since the deterministic and noisy maps satisfy $`F^{\prime }=f^{\prime }`$, one is tempted to conclude that the Lyapunov exponent is not modified by the presence of noise. However, there is a noise dependence through the trajectory values $`x_i`$, $`i=1,\dots ,N`$. In the absence of noise, $`\lambda `$ is positive, indicating trajectory separation. When synchronization is observed, the Lyapunov exponent becomes negative, as required by the argument in pik94 . By using the definition of the invariant measure on the attractor, or stationary probability distribution $`P_{st}(x)`$, the Lyapunov exponent can also be calculated as $$\lambda =\langle \mathrm{ln}|F^{\prime }(x)|\rangle =\langle \mathrm{ln}|f^{\prime }(x)|\rangle =\int P_{st}(x)\mathrm{ln}|f^{\prime }(x)|dx$$ (5) Here we see clearly the two contributions to the Lyapunov exponent: although the derivative $`f^{\prime }(x)`$ does not change when including noise in the trajectory, the stationary probability does change (see Fig.3), thus producing the observed change in the Lyapunov exponent. Synchronization, then, can be a general feature in maps which have a large region in which the derivative $`|f^{\prime }(x)|`$ is smaller than one. Noise will then allow the trajectory to explore that region and yield, on average, a negative Lyapunov exponent. 
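Equation (4) can be evaluated directly along a noisy trajectory. For the map of Eq. (3) the slope is f′(x) = −2((x−0.5)/ω²) f(x); the sketch below (the step count and the guard against log 0 at x=0.5 are our choices) reproduces the sign change of λ with noise:

```python
import math
import random

def lyapunov(eps, w=0.3, steps=200000, seed=3):
    """Estimate Eq. (4) along one trajectory of the noisy map (2)-(3).
    The derivative f'(x) itself is unchanged by the noise; only the
    visited points x_i, and hence the average, depend on eps."""
    rng = random.Random(seed)
    x, acc = 0.3, 0.0
    for _ in range(steps):
        fx = math.exp(-((x - 0.5) / w) ** 2)
        deriv = fx * (-2.0 * (x - 0.5) / w ** 2)
        acc += math.log(abs(deriv) + 1e-300)   # guard: f'(0.5) = 0
        x = fx + eps * rng.gauss(0.0, 1.0)
    return acc / steps
```

For ω=0.3 this gives a positive estimate in the deterministic case and a negative one for ε=0.2, in line with the values λ≈0.53 and λ≈−0.17 quoted above.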
The second system we have studied is the well-known Lorenz model with random terms added lorenz ; mar94 : $`\dot{x}`$ $`=`$ $`p(y-x)`$ $`\dot{y}`$ $`=`$ $`-xz+rx-y+ϵ\xi `$ (6) $`\dot{z}`$ $`=`$ $`xy-bz`$ $`\xi `$ is now white noise: a Gaussian random process of mean zero, delta correlated, $`\langle \xi (t)\xi (t^{\prime })\rangle =\delta (t-t^{\prime })`$. We have used $`p=10`$, $`b=8/3`$ and $`r=28`$, which in the deterministic case $`ϵ=0`$ are known to lead to chaotic behavior (the largest Lyapunov exponent is $`\lambda \simeq 0.9>0`$). We have integrated numerically the above equations using the Euler method with a time step $`\mathrm{\Delta }t=0.001`$. For the deterministic case, trajectories starting with different initial conditions are completely uncorrelated, see Fig. (4a). This is also the situation for small values of $`ϵ`$. However, when using a noise intensity $`ϵ=40`$, the noise is strong enough to induce synchronization of the trajectories. Again, the presence of the noise terms makes the largest Lyapunov exponent become negative (for $`ϵ=40`$ it is $`\lambda \simeq -0.2`$). As in the example of the map, after some transient time, two different evolutions which started from completely different initial conditions synchronize towards the same values of the three variables (see Fig. (4b) for the $`z`$ coordinate). One could argue that the intensity of the noise is very large. However, the basic structure of the “butterfly” Lorenz attractor remains unchanged, as shown in Fig. (5). In conclusion, we have shown that it is possible for noise to synchronize trajectories of a system which, deterministically, is chaotic. The novelty of our results is that the noise used in the two examples, a 1-d map and the Lorenz system, is unbiased, i.e. it always has zero mean. There still remain many open questions in this field. 
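The Lorenz experiment can be sketched in a few lines. The scheme below uses the Euler method with Δt=0.001 quoted in the text, with the white-noise term entering the ẏ equation as an Euler–Maruyama increment ε√Δt ξ; the initial conditions and integration time are our choices:

```python
import math
import random

def lorenz_pair(eps, T=300.0, dt=0.001, seed=11):
    """Euler-Maruyama integration of Eqs. (6) for two different initial
    conditions driven by the SAME white-noise realization; returns the
    final distance between the two state vectors."""
    p, b, r = 10.0, 8.0 / 3.0, 28.0
    rng = random.Random(seed)
    s1 = [1.0, 1.0, 1.0]            # arbitrary, well-separated
    s2 = [-5.0, 5.0, 20.0]          # initial conditions
    sq = math.sqrt(dt)
    for _ in range(int(T / dt)):
        xi = rng.gauss(0.0, 1.0)    # common noise increment
        for s in (s1, s2):
            x, y, z = s
            s[0] = x + p * (y - x) * dt
            s[1] = y + (-x * z + r * x - y) * dt + eps * sq * xi
            s[2] = z + (x * y - b * z) * dt
    return math.dist(s1, s2)
```

With ε=40 the two copies collapse onto the same trajectory after a transient, while the deterministic copies remain separated on the attractor.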
They involve the development of a general theory, probably based on the invariant measure, that could give us a criterion to determine the range of parameters (including noise levels) for which the Lyapunov exponent becomes negative, thus allowing synchronization. Another important question concerns the structural stability of this phenomenon. Any practical realization of this system cannot produce two identical samples. If one wants to use stochastic synchronization of electronic emitters and receivers (as a means of encryption), one should be able to determine what discrepancy between circuits is allowed before the loss of synchronization becomes unacceptable. Acknowledgements We acknowledge financial support from DGESIC (Spain), project numbers PB94-1167 and PB97-0141-C02-01. ## Figure captions FIGURE 1: (a) Bifurcation diagram of the map given by Eqs.(2) and (3) in the absence of noise terms. (b) Lyapunov exponent for the noiseless map ($`ϵ=0`$, continuous line) and the map with noise intensities $`ϵ=0.1`$ (dotted line) and $`ϵ=0.2`$ (dot-dashed line). FIGURE 2: Plot of two realizations $`x^{(1)}`$, $`x^{(2)}`$ of the map given by Eqs. (2) and (3). Each realization consists of 10,000 points which have been obtained by iteration of the map, starting in each case from a different initial condition (100,000 initial iterations have been discarded and are not shown). In figure (a) there is no noise, $`ϵ=0`$, and the trajectories are independent of each other. In figure (b) we have used a noise level $`ϵ=0.2`$, producing a perfect synchronization (after discarding some initial iterations). FIGURE 3: Plot of the stationary distribution for the map given by Eqs.(2) and (3) in (a) the deterministic case $`ϵ=0`$, and (b) the case with noise along the trajectory, $`ϵ=0.2`$. FIGURE 4: Same as figure (2) for the $`z`$ variable of the Lorenz system, Eqs.(6), in (a) the deterministic case $`ϵ=0`$ and (b) $`ϵ=40`$. Notice the perfect synchronization in case (b). 
FIGURE 5: “Butterfly” attractor of the Lorenz system in the cases (a) of no noise, $`ϵ=0`$, and (b) $`ϵ=40`$.
# QSO Absorption Line Constraints on Intragroup High–Velocity Clouds ## 1. Introduction The origin(s) of high–velocity clouds (HVCs), gaseous material that departs from the Galactic rotation law by more than 100 km s<sup>-1</sup>, is a topic under debate. Undoubtedly, some HVCs arise from tidal streams (e.g. the Magellanic Stream), and from fountain processes local to the Galaxy (Wakker & van Woerden (1997)). Recently, however, the hypothesis that most HVCs are distributed ubiquitously throughout the Local Group and are relics of group formation has returned to favor (Blitz et al. (1999); Braun & Burton (1999)). In the intragroup HVC hypothesis (1) the cloud kinematics follow the Local Group standard of rest (LGSR), not the Galactic standard of rest (GSR), with the exception of some HVCs related to tidal stripping or Galactic fountains (Blitz et al. (1999); Braun & Burton (1999)); (2) the cloud Galactocentric distances are typically 1 Mpc; (3) the extended HVC cloud complexes are presently accreting onto the Milky Way; (4) the clouds are local analogs of the Lyman limit absorbers observed in quasar spectra; (5) the clouds have masses of $`10^7`$ M and greater; and (6) the metallicities are lower than expected if the material originated from blowout or fountains from the Milky Way (Wakker et al. (1999); Bowen & Blades (1993)). Blitz et al. (1999; hereafter BSTHB) suggest that there are $`300`$ clouds above the $`21`$–cm $`N(\text{H}\text{i})`$ detection threshold of $`2\times 10^{18}`$ cm<sup>-2</sup>. These clouds have radii $`15`$ kpc and are ubiquitous throughout the group. Braun and Burton (1999, hereafter BB) cataloged $`65`$ Local Group CHVCs, which represent a homogeneous subset of the HVC population discussed by BSTHB. 
High resolution $`21`$–cm observations (Braun & Burton (2000)) show that the CHVCs have compact, $`N(\text{H}\text{i})>10^{19}`$ cm<sup>-2</sup>, cold cores with FWHM of a few km s<sup>-1</sup>, surrounded by extended “halos” with FWHM $`\sim 25`$ km s<sup>-1</sup>. The typical radius is $`5`$–$`8`$ kpc at the estimated distance of $`700`$ kpc. The BB sample is homogeneous but is not complete; they estimate that there could be as many as $`200`$ Local Group CHVCs. Recently, Zwaan and Briggs (2000) reported evidence in contradiction of the intragroup hypothesis. In a blind Hi $`21`$–cm survey of extragalactic groups, sensitive to $`N(\text{H}\text{i})\sim 10^{18}`$ cm<sup>-2</sup> (capable of detecting $`10^7`$ M<sub>⊙</sub> Hi clouds), they failed to locate any extragalactic counterparts of the Local Group HVCs. This is in remarkable contrast to the numbers predicted. If intragroup HVCs exist around all galaxies or galaxy groups, and the Hi mass function is the same in extragalactic groups as measured locally, then Zwaan & Briggs should have detected $`70`$ in groups and $`250`$ around galaxies ($`10`$ and $`40`$ for the CHVCs, respectively). Thus, the Zwaan and Briggs result is in conflict with the intragroup HVC hypothesis. Since the hypothesized intragroup clouds are remnants of galaxy formation and are shown to be stable against destruction mechanisms (BSTHB), they are predicted to form at very high redshifts and to be ubiquitous in galaxy groups to the present epoch. In this Letter, we argue that the version of the intragroup HVC hypothesis presented by BSTHB is also in conflict with the observed redshift number density of moderate redshift $`(z\sim 0.5)`$ Mgii and Lyman limit (LLS) quasar absorption line systems. We also find that the properties of the extragalactic analogs of the BB CHVCs are severely constrained. 
In general, the redshift number density of a non–evolving population of objects, to be interpreted as the number per unit redshift, is written $$\frac{dN}{dz}=C_f\frac{n\sigma c}{H_0}\left(1+z\right)\left(1+2q_0z\right)^{-1/2},$$ (1) where $`n`$ is the number density of absorbing structures, $`\sigma `$ is the cross–sectional area presented by each structure, and $`C_f`$ is its covering factor for detectable absorption. Throughout, we use $`H_0=100`$ km s<sup>-1</sup> Mpc<sup>-1</sup> and $`q_0=0.5`$, which gives $`dN/dz\propto (1+z)^{1/2}`$. ## 2. Mg II Systems The statistics of Mgii absorbers are well–established at $`0.3\lesssim z\lesssim 2.2`$. For rest–frame equivalent widths of $`W(\text{Mg}\text{ii})>0.3`$ Å (“strong” Mgii absorption) Steidel & Sargent (1992) found $`dN/dz=0.8\pm 0.2`$ for $`z\sim 0.5`$, with a redshift dependence consistent with no-evolution expectations. Normal, bright ($`L\gtrsim 0.1L^{\ast }`$) galaxies are almost always found within $`40`$ kpc of strong Mgii absorbers (Bergeron & Boissé (1991); Bergeron et al. (1992); Le Brun et al. (1993); Steidel, Dickinson, & Persson (1994); Steidel (1995); Steidel et al. (1997)). From the Steidel, Dickinson, and Persson survey, all but $`3`$ of $`58`$ strong MgII absorbers, detected toward $`51`$ quasars, have identified galaxies with a coincident redshift within that impact parameter (sky-projected separation from the quasar line of sight) (see Charlton & Churchill (1996)). Also, it is rare to observe a galaxy with an impact parameter less than $`40h^{-1}`$ kpc that does not give rise to Mgii absorption with $`W(\text{Mg}\text{ii})>0.3`$ Å (Steidel (1995)). In $`25`$ “control fields” of quasars, without observed strong Mgii absorption in their spectra, only two galaxies had impact parameters less than $`40h^{-1}`$ kpc (see also Charlton & Churchill (1996)). 
As such, the regions within $`40h^{-1}`$ kpc of typical galaxies account for the vast majority of Mgii absorbers above this equivalent width threshold; there is nearly a “one–to–one” correspondence. If we accept these results, it would imply that there is little room for a contribution to $`dN/dz`$ from a population of intragroup clouds which have impact parameters much greater than $`40h^{-1}`$ kpc. However, the predicted cross section for Mgii absorption from the extragalactic intragroup clouds analogous to HVCs would be substantial. We quantify the overprediction of the redshift path density by computing the ratio of $`dN/dz`$ of the intragroup clouds to that of Mgii absorbing galaxies, $$\frac{(dN/dz)_{cl}}{(dN/dz)_{gal}}=F\left(\frac{f_{cl}}{f_{gal}}\right)\left(\frac{N_{cl}}{N_{gal}}\right)\left(\frac{R_{cl}}{R_{gal}}\right)^2,$$ (2) where $`F`$ is the fraction of Mgii absorbing galaxies that reside in groups having intragroup HVC–like clouds, $`f_{cl}`$ is the fraction of the area of the clouds and $`f_{gal}`$ is the fraction of the area of the galaxies that would produce $`W(\text{Mg}\text{ii})>0.3`$ Å along the line of sight, and $`N_{cl}`$ and $`N_{gal}`$ are the number of clouds and galaxies per group, respectively. The cross section of the group times the intragroup cloud covering factor, $`C_f\pi R_{gr}^2`$, is equal to $`N_{cl}\pi R_{cl}^2`$. The total predicted $`dN/dz`$ for Mgii absorbers with $`W(\text{Mg}\text{ii})>0.3`$ Å is then, $$\left(\frac{dN}{dz}\right)_{tot}=\left(\frac{dN}{dz}\right)_{gal}\left[1+\frac{(dN/dz)_{cl}}{(dN/dz)_{gal}}\right].$$ (3) If virtually all Mgii absorbers are accounted for by galaxies, it is required that $`(dN/dz)_{tot}\approx (dN/dz)_{gal}`$; the left hand side of Equation 2 must be very close to zero. 
In the BSTHB version of the intragroup HVC model, the “best” expected values are $`N_{cl}=300`$ and $`R_{cl}=15`$ kpc (BSTHB; Blitz 2000, private communication); if we take $`R_{gal}=40`$ kpc and $`f_{gal}=1`$ (Steidel (1995)), and assuming $`N_{gal}=4`$, we find that the covering factor for Mgii absorption from extragalactic analogs to the Local Group HVCs would exceed that from galaxies by a factor of $`\sim 10`$ for $`F=1`$ and $`f_{cl}=1`$, giving $`(dN/dz)_{tot}\simeq 9`$. More recently, Blitz and Robinshaw (2000) have suggested that sizes may be smaller ($`R_{cl}=8`$ kpc) when beam–smearing is considered. Considering this as an indication of uncertainties in the BSTHB parameters, and considering $`2<N_{gal}<6`$ for the typical number of group galaxies, we find, for $`F=1`$ and $`f_{cl}=1`$, a range $`2<(dN/dz)_{cl}/(dN/dz)_{gal}<21`$. This corresponds to $`2.6<(dN/dz)_{tot}<17.6`$. It is unlikely that $`F`$ is significantly less than unity; the majority of galaxies reside in groups like the Local Group that would have HVC analogs. In order that $`(dN/dz)_{tot}\approx (dN/dz)_{gal}`$, $`f_{cl}\lesssim 0.2`$ is required. It is not clear what fraction $`f_{cl}`$ of HVCs with $`N(\text{H}\text{i})`$ above the $`10^{18}`$ cm<sup>-2</sup> detection threshold will give rise to $`W(\text{Mg}\text{ii})\gtrsim 0.3`$ Å because the equivalent width is sensitive to the metallicity and internal velocity dispersion of the clouds. Based upon Cloudy (Ferland (1996)) photoionization equilibrium models, a cloud with $`N(\text{H}\text{i})\sim 10^{18}`$ cm<sup>-2</sup>, subject to the ionizing metagalactic background (Haardt & Madau (1996)), would give rise to Mgii absorption with $`N(\text{Mg}\text{ii})\sim 10^{14}N_{18}(Z/Z_{\odot })`$ cm<sup>-2</sup>, where $`N_{18}`$ is the Hi column density in units of $`10^{18}`$ cm<sup>-2</sup> and $`Z/Z_{\odot }`$ is the metallicity in solar units. 
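The overprediction quoted above follows from simple arithmetic on Equation (2). This sketch just encodes that arithmetic, with the default parameter values adopted in the text:

```python
def dndz_ratio(N_cl, R_cl, N_gal=4, R_gal=40.0, F=1.0, f_cl=1.0, f_gal=1.0):
    """Eq. (2): ratio of the cloud to galaxy contributions to dN/dz.
    Radii in kpc; defaults follow the values adopted in the text."""
    return F * (f_cl / f_gal) * (N_cl / N_gal) * (R_cl / R_gal) ** 2
```

The "best" BSTHB values, `dndz_ratio(300, 15.0)`, give a ratio of about 10.5, and the quoted 2–21 range follows from varying R_cl between 8 and 15 kpc and N_gal between 2 and 6; via Equation (3) with (dN/dz)_gal = 0.8 the best case yields (dN/dz)_tot ≈ 9.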
For optically thick clouds, those with $`N(\text{H}\text{i})\gtrsim 10^{17.5}`$ cm<sup>-2</sup>, this result is insensitive to the assumed ionization parameter (the ratio of the number density of hydrogen ionizing photons to the number density of electrons, $`n_\gamma /n_e`$) over the range $`10^{-4.5}`$ to $`10^{-1.5}`$. BSTHB expect HVCs to have $`Z/Z_{\odot }\sim 0.1`$. For $`N_{18}=2`$ and $`Z/Z_{\odot }=0.1`$, clouds with internal velocity dispersions of $`\sigma _{cl}\simeq 20`$ km s<sup>-1</sup> (Doppler $`b\simeq 28`$ km s<sup>-1</sup>) give rise to $`W(\text{Mg}\text{ii})\simeq 0.5`$ Å. For $`\sigma _{cl}=10`$ km s<sup>-1</sup> ($`b=14`$ km s<sup>-1</sup>), $`W(\text{Mg}\text{ii})=0.3`$ Å. The CHVC “halos” typically have FWHM of $`29`$–$`34`$ km s<sup>-1</sup>, which corresponds to $`\sigma _{cl}\simeq 12`$–$`14`$ km s<sup>-1</sup> (Braun & Burton (1999)). Thus it appears that most lines of sight through the BSTHB extragalactic analogs will produce strong Mgii absorption. Certainly $`f_{cl}>0.2`$, so there is a serious discrepancy between the predicted $`(dN/dz)_{tot}`$ and the observed value. However, if the intragroup clouds have lower metallicities, this would result in smaller $`W(\text{Mg}\text{ii})`$. Unfortunately, there has only been one metallicity estimate published for an HVC, which may or may not be related to the Galaxy. Braun and Burton (2000) estimate that CHVC 125+41-207, with $`W(\text{Mg}\text{ii})=0.15`$ Å, has a metallicity of $`0.04<Z/Z_{\odot }<0.07`$; however, this is quite uncertain because of the effects of beam smearing on measuring the $`N(\text{H}\text{i})`$ value. Because of the uncertainties, we simply state that a population of low metallicity clouds could reduce the discrepancy between the predicted redshift density for intragroup clouds, $`(dN/dz)_{cl}`$, and the observed value of $`(dN/dz)_{tot}`$. However, the expected number of smaller $`W(\text{Mg}\text{ii})`$ systems arising from intragroup clouds would then be increased. 
The observed Mgii equivalent width distribution rises rapidly below $`0.3`$ Å (“weak” Mgii absorbers), such that $`dN/dz=2.2\pm 0.3`$ for $`W(\text{Mg}\text{ii})>0.02`$ Å at $`z=0.5`$ (Churchill et al. (1999)). To this equivalent width limit, Mgii absorption could be observed from intragroup HVCs with $`N_{18}=2`$ and metallicities as low as $`Z/Z_{\odot }=0.0025`$ \[for $`N(\text{Mg}\text{ii})=10^{11.7}`$ cm<sup>-2</sup>, $`W(\text{Mg}\text{ii})`$ is independent of $`\sigma _{cl}`$\]. However, almost all ($`9`$ out of a sample of $`10`$) Mgii absorbers with $`W(\text{Mg}\text{ii})<0.3`$ Å do not have associated Lyman limit breaks (Churchill et al. 2000a ); that is, their $`N(\text{H}\text{i})`$ is more than a decade below the sensitivity of $`21`$–cm surveys. Thus, based upon available data, roughly $`90`$% of the “weak” Mgii absorbers do not have the properties of HVCs, and therefore are not analogous to the clouds invoked for the intragroup HVC scenario. If $`10`$% of the weak Mgii absorbers are analogs to the intragroup HVCs, they would contribute an additional $`0.20`$ to $`(dN/dz)_{cl}`$. Since the BB CHVC extragalactic analogs have smaller cross sections, we should separately consider whether they would produce a discrepancy with the observed Mgii absorption statistics. BB observed $`N_{cl}=65`$ and inferred a typical $`R_{cl}=5`$–$`8`$ kpc for the CHVCs; however, a complete sample might have $`N_{cl}=200`$. Assuming $`N_{gal}=2`$–$`6`$, $`R_{gal}=40`$ kpc, $`f_{gal}=1`$, and $`F=1`$, for the BB subsample of the HVC population, we obtain $`0.17f_{cl}<(dN/dz)_{cl}/(dN/dz)_{gal}<4.0f_{cl}`$. The cores of the CHVCs have $`N(\text{H}\text{i})>10^{19}`$ cm<sup>-2</sup> and they occupy only about $`15`$% of the detected extent. For $`Z/Z_{\odot }>0.01`$ and $`\sigma _{cl}=10`$ km s<sup>-1</sup>, these cores can produce $`W(\text{Mg}\text{ii})\gtrsim 0.3`$ Å over their full area. It follows that $`f_{cl}=0.15`$, which yields $`0.025<(dN/dz)_{cl}/(dN/dz)_{gal}<0.6`$. 
Depending on the specific parameters, there may or may not be a conflict with the observed $`(dN/dz)_{tot}`$ for strong Mgii absorption. The “halos” of the CHVCs have $`N(\text{H}\text{i})>10^{18}`$ cm<sup>-2</sup> and, as discussed above, would produce weak Mgii absorption for $`Z/Z_{\odot }>0.005`$ over most of the cloud area. This implies a contribution to $`(dN/dz)`$ from BB CHVC analogs in the range $`0.14<(dN/dz)_{cl}<3.2`$. If the number were at the high end of this range, the cross section would be comparable to the observed $`(dN/dz)`$ for weak Mgii absorbers at $`z=0.5`$. However, as noted above when considering the BSTHB scenario, there is a serious discrepancy. Only $`10`$% of the weak Mgii absorbers show a Lyman limit break, so extragalactic analogs of the BB CHVCs can only be a fraction of the weak Mgii population. Regions of CHVCs at larger radii, with $`N(\text{H}\text{i})`$ below the threshold of present $`21`$–cm observations, are constrained to have $`Z/Z_{\odot }\lesssim 0.01`$ in order that they do not produce a much larger population of weak Mgii absorbers with Lyman limit breaks than is observed. ## 3. Lyman Limit Systems The redshift number density of LLS also places strong constraints on intragroup environments that give rise to Lyman breaks in quasar spectra. This argument is not sensitive to the assumed cloud velocity dispersion and/or metallicity. Statistically, $`dN/dz`$ for Mgii systems is consistent (1 $`\sigma `$) with $`dN/dz`$ for LLS. At $`z\sim 0.5`$, LLS have $`dN/dz=0.5\pm 0.3`$ (Stengler–Larrea et al. (1995)) and Mgii systems have $`dN/dz=0.8\pm 0.2`$ (Steidel & Sargent (1992)). Churchill et al. (2000a) found a Lyman limit break \[i.e. $`N(\text{H}\text{i})\gtrsim 10^{16.8}`$ cm<sup>-2</sup>\] for each system in a sample of ten having $`W(\text{Mg}\text{ii})>0.3`$ Å. LLS and Mgii absorbers have roughly the same redshift number density, and therefore Mgii–LLS absorption must almost always arise within $`40`$ kpc of galaxies (Steidel (1993)). 
As such, there is little latitude for a substantial contribution to $`dN/dz`$ from intragroup Lyman limit clouds. Using Equation 1, we could estimate this contribution by considering the volume density of galaxy groups and the cross section for HVC Lyman limit absorption in each. However, the volume density of groups is not well measured, particularly out to $`z=0.5`$. Instead, we make a restrictive argument based upon a comparison between the cross sections for Lyman limit absorption of $`L^{\ast }`$ galaxies and of HVCs in a typical group (similar to the discussion of Mgii absorbers in § 2). Again, we simply compare the values of $`C_f`$ for the different populations of objects in a typical group. The covering factor for HVCs within the group is $$C_f=N_{cl}\frac{\left(R_{cl}\right)^2}{\left(R_{gr}\right)^2}.$$ (4) The best estimate for the BSTHB version of the intragroup HVC model, with $`N_{cl}=300`$, $`R_{cl}=15`$ kpc, and a group radius $`R_{gr}=1.5`$ Mpc, gives $`C_f=0.03`$ for $`N(\text{H}\text{i})\geq 2\times 10^{18}`$ cm<sup>-2</sup>. If instead we use the BB number of observed CHVCs, $`N_{cl}=65`$, and $`R_{cl}=5`$–$`8`$ kpc, we obtain a much smaller number, $`0.0007<C_f<0.0018`$. However, if the BB sample is corrected for incompleteness such that $`N_{cl}=200`$, these numbers increase so that $`0.002<C_f<0.006`$. In comparison, a typical group with $`4`$ $`L^{\ast }`$ galaxies, each with a Lyman limit absorption cross section of radius $`\approx 40`$ kpc, would have $`C_f=0.002`$. If they existed with the properties discussed, the extragalactic analogs to the BSTHB HVCs would dominate the contribution of $`L^{\ast }`$ galaxies to the $`dN/dz`$ of LLS by at least a factor of $`15`$, and this is only considering HVC regions with $`N(\text{H}\text{i})>2\times 10^{18}`$ cm<sup>-2</sup> that are detected in the $`21`$–cm surveys. Any extensions in area below this threshold value \[down to $`N(\text{H}\text{i})\sim 5\times 10^{16}`$ cm<sup>-2</sup>\] would worsen the discrepancy. 
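The covering-factor comparison in Equation (4) is again simple arithmetic; the sketch below (with the quoted group radius R_gr = 1.5 Mpc) recovers the cloud numbers used in the text. Note that the four-galaxy term evaluates to ≈0.003, slightly above the rounded C_f = 0.002 quoted in the text:

```python
def covering_factor(N_cl, R_cl_kpc, R_gr_kpc=1500.0):
    """Eq. (4): fraction of the group cross-section covered by
    N_cl clouds of radius R_cl_kpc inside a group of radius R_gr_kpc."""
    return N_cl * (R_cl_kpc / R_gr_kpc) ** 2
```

For example, `covering_factor(300, 15.0)` gives the BSTHB value 0.03, while the BB sample spans `covering_factor(65, 5.0)` ≈ 0.0007 up to `covering_factor(200, 8.0)` ≈ 0.006.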
As such, the BSTHB hypothesis is definitively ruled out. For regions of BB CHVCs with $`N(\text{H}\text{i})>2\times 10^{18}`$ cm<sup>-2</sup>, the covering factor ranges from $`C_f=0.0007`$ to $`C_f=0.006`$, depending on the assumed sizes and corrections for incompleteness. This ranges from $`35`$–$`300`$% of the cross section for the $`L^{\ast }`$ galaxies. The total observed $`dN/dz`$ for Lyman limit absorption (down to $`\mathrm{log}N(\text{H}\text{i})=17`$ cm<sup>-2</sup>) is only $`\sim 0.5`$; even a $`35`$% contribution to the Lyman limit cross section from HVCs that are separate from galaxies creates a discrepancy. This would imply that the result that most lines of sight within $`40`$ kpc of a typical $`L^{\ast }`$ galaxy produce Lyman limit absorption is incorrect. It would further imply that there is a substantial population of strong Mgii absorbers without Lyman limit breaks (to account for $`dN/dz=0.8`$ for strong Mgii absorption) or of strong Mgii absorbers not associated with galaxies. Both types of objects are rarely observed (Churchill et al. 2000a ; Bergeron & Boissé (1991); Bergeron et al. (1992); Le Brun et al. (1993); Steidel, Dickinson, & Persson (1994); Steidel (1995); Steidel et al. (1997)). Furthermore, $`C_f=0.002`$ for the BB CHVCs only takes into account the fraction of the BB CHVC areas with $`N(\text{H}\text{i})>2\times 10^{18}`$ cm<sup>-2</sup>. Therefore, the extended “halos” around the CHVCs are also constrained not to contribute substantial cross section for Lyman limit absorption along extragalactic lines of sight. ## 4. Summary We have made straightforward estimates of the predicted redshift number density at $`z\sim 0.5`$ of Mgii and LLS absorption from hypothetical extragalactic analogs to intragroup HVCs, as expected by extrapolating from the BSTHB and BB Local Group samples. We find that it is difficult to reconcile the intragroup hypothesis for HVCs with the observed $`dN/dz`$ of Mgii and LLS systems. 
The discrepancy between the $`dN/dz`$ of Mgii–LLS absorbers and the observed covering factor of “intragroup” HVCs could be reduced if the HVCs have a clumpy structure. Such structure would result in Mgii–LLS absorption observable only in some fraction, $`f_{los}`$, of the lines of sight through the cloud. Effectively, this reduces the covering factor for Lyman limit absorption, or the value of $`f_{cl}`$ in Equation (2) for Mgii absorbers. Considering beam smearing in $`21`$–cm surveys, substructures would be detected above a $`N(\text{H}\text{i})>2\times 10^{18}`$ cm<sup>-2</sup> $`21`$–cm detection threshold if their column densities were $`N(\text{H}\text{i})_{sub}>2\times 10^{18}/f_{los}`$ cm<sup>-2</sup>. The predicted $`dN/dz`$ for HVC–like clouds could be reduced by a factor of ten if $`f_{los}\sim 0.1`$, giving $`N(\text{H}\text{i})_{sub}>2\times 10^{19}`$ cm<sup>-2</sup>. All the gas outside these higher Hi column density substructures would need to be below the Lyman limit, or the arguments in § 3 would hold. It is difficult to reconcile such a density distribution with the high resolution observations of BB CHVCs, which show diffuse halos around the core concentrations (Braun & Burton (2000)), but these ideas merit further consideration. ### 4.1. The BSTHB Scenario We conclude that the predicted $`dN/dz`$ from the hypothetical population of intragroup HVCs along extragalactic sight lines to quasars from the BSTHB scenario would exceed: 1) the $`dN/dz`$ of Mgii absorbers with $`W(\text{Mg}\text{ii})\geq 0.3`$ Å. This class of absorber is already known to arise within $`40h^{-1}`$ kpc of normal, bright galaxies (Bergeron & Boissé (1991); Bergeron et al. (1992); Le Brun et al. (1993); Steidel, Dickinson, & Persson (1994); Steidel (1995); Steidel et al. (1997)). 2) the $`dN/dz`$ of “weak” Mgii absorbers with $`0.02<W(\text{Mg}\text{ii})<0.3`$ Å absorption. In principle, weak Mgii absorption could arise from low metallicity, $`0.005\lesssim Z/Z_{\odot }<0.1`$, intragroup HVCs. 
However, the majority of observed weak systems are already known to be higher metallicity, $`Z/Z_{\odot }\gtrsim 0.1`$, sub–Lyman limit systems (Churchill et al. 1999; Rigby et al. 2000). 3) the $`dN/dz`$ of Lyman limit systems. These would be produced by all extragalactic BSTHB HVC analogs regardless of metallicity. However, most Lyman limit systems are seen to arise within $`40h^{-1}`$ kpc of luminous galaxies (Steidel 1993; Churchill et al. 2000b). These points do not preclude a population of infalling intragroup clouds which do not present a significant cross section for $`21`$–cm absorption, as predicted by CDM models (Klypin et al. 1999; Moore et al. 1999). In fact, such intragroup objects could be related to sub–Lyman limit weak Mgii absorbers (Rigby et al. 2000). ### 4.2. The BB Scenario The properties of the BB CHVC population are also significantly constrained by Mgii and Lyman limit absorber statistics: 1) They could produce $`W(\text{Mg}\text{ii})\ge 0.3`$ Å absorption in excess of what is observed if a large incompleteness correction is applied (i.e., so that $`N_{cl}=200`$), or if relatively large sizes ($`R_{cl}\sim 8`$ kpc) are assumed. 2) They would be expected to contribute to the $`dN/dz`$ of weak \[$`W(\text{Mg}\text{ii})>0.02`$ Å\] Mgii absorption. However, based upon observations (Churchill et al. 2000a), only $`\sim 10`$% of the population of weak Mgii absorbers have Lyman–limit breaks. Therefore, only a small fraction of weak Mgii absorption could arise in extragalactic BB CHVC analogs. 3) The $`dN/dz`$ for Lyman limit absorption from the hypothesized BB CHVC population could be a significant fraction of, or comparable to, that expected from the local environments of $`L^*`$ galaxies (within $`40`$ kpc); the observed value is already consistent with that produced by the galaxies. 4) The CHVCs are observed to have a cool core with $`N(\text{H}\text{i})>10^{19}`$ cm<sup>-2</sup>, surrounded by a halo which typically extends to $`R_{cl}\sim 5`$ kpc. 
It is natural to expect that the Hi extends out to larger radii at smaller $`N(\text{H}\text{i})`$ and should produce a Lyman limit break out to the radius at which $`N(\text{H}\text{i})<10^{16.8}`$ cm<sup>-2</sup>. Although there is expected to be a sharp edge to the Hi disk at $`N(\text{H}\text{i})\sim 10^{17.5}`$ or $`10^{18}`$ cm<sup>-2</sup> (Maloney 1993; Corbelli & Salpeter 1994; Dove & Shull 1994), physically we would expect that this edge would level off at $`10^{17.5}`$ cm<sup>-2</sup>, such that a significant cross section would be presented at $`10^{16.8}<N(\text{H}\text{i})<10^{17.5}`$ cm<sup>-2</sup>. Another possibility is that there is a sharp cutoff of the structure at $`N(\text{H}\text{i})\sim 2\times 10^{18}`$ cm<sup>-2</sup>, but this is contrived. ## 5. Conclusion We are forced to the conclusion that there can only be a limited number of extragalactic infalling group HVC analogs at $`z\sim 0.5`$. Future data could force a reevaluation of the relationships between galaxies, Lyman limit systems, and Mgii absorbers, but it seems unlikely that the more serious inconsistencies we have identified could be reconciled in this way. A clumpy distribution of Hi could be constructed that would reduce the discrepancy, but would require very diffuse material (below the Lyman limit) around dense cores. Evolution in the population of HVCs is another possibility. If the extragalactic background radiation declined from $`z=0.5`$ to the present, the clouds would have been more ionized in the past, and therefore would have had smaller cross sections at a given $`N(\text{H}\text{i})`$. However, this does not explain why Zwaan and Briggs (2000) do not see the $`z=0`$ extragalactic analogs to the HVCs or CHVCs. 
Our results are entirely consistent with theirs, and the implications are the same: the discrepancies between the Local Group HVC population and the statistics of Mgii and Lyman limit absorbers can only be reconciled if most of the extragalactic HVC analogs are within $`100`$–$`200`$ kpc of galaxies, and not spread at large throughout the groups. We thank L. Blitz, J. Bregman, J. Mulchaey, B. Savage, K. Sembach, T. Tripp, and especially B. Wakker and Buell Jannuzi, and our referees for stimulating discussions and comments. Support for this work was provided by NSF grant AST–9617185 (J. R. R. was supported by an REU supplement) and by NASA grant NAG 5–6399.
# From waves to avalanches: two different mechanisms of sandpile dynamics ## Abstract Time series resulting from wave decomposition show the existence of different correlation patterns for avalanche dynamics. For the $`d=2`$ Bak-Tang-Wiesenfeld model, long range correlations determine a modification of the wave size distribution under coarse graining in time, and multifractal scaling for avalanches. In the Manna model, the distribution of avalanches coincides with that of waves, which are uncorrelated and obey finite size scaling, a result expected also for the $`d=3`$ Bak et al. model. PACS numbers: 05.65.+b, 05.40.-a, 45.70.Ht, 64.60.Ak Full information on a self–organized critical (SOC) process is contained in the time series, if the time step is the most microscopic conceivable. The self-similarity of such a process, due to its intermittent, avalanche character, should be revealed by the scaling of the time autocorrelation or its power spectrum. In spite of this, time series analyses have seldom been performed on sandpiles or similar systems, and mostly concerned the response to finite, random external disturbances, i.e. the problem of $`1/f`$ noise. Most efforts in the characterization of SOC scaling concentrated on probability distribution functions (PDF’s) of global properties of the avalanche events, which occupy large intervals of the microscopic evolution time. The numerical analysis of such PDF’s is often difficult, and universality issues cannot be easily solved. The situation is even more problematic if, like for the two-dimensional (2D) Bak-Tang-Wiesenfeld sandpile (BTW), the usually assumed finite size scaling (FSS) form proves inadequate for the PDF’s, and needs to be replaced by a multifractal one. These findings raise the additional issue of why the 2D BTW displays such multifractal scaling, while apparently very similar sandpiles, like the Manna model (M), do not. 
In recent theoretical approaches to the BTW and other Abelian sandpiles, a major role has been played by the waves of toppling into which avalanches can be decomposed. For the BTW, the PDF’s of waves, as sampled from a large collection of successive avalanches, obey FSS with exactly known exponents. By analyzing the succession of waves as a stationary time series, one could hope to determine the statistical properties of avalanches. However, so far, no precise information on the correlations of such series was obtained. In this Letter we generalize the wave description to the M in 2D. The study of the respective wave time series reveals that BTW and M are prototypes of two very different scenarios for avalanche dynamics. In the M case, successive wave sizes are totally uncorrelated. As a consequence, avalanche and wave PDF’s have identical scaling properties, consistent with FSS. For the BTW, to the contrary, wave sizes show long range correlations and persistency in time. PDF’s of “block variables” given by sums of $`n`$ successive wave sizes show that these correlations are responsible for the fact that avalanche scaling differs from the wave one and has multifractal features. For the 3D BTW, on the other hand, our results suggest validity of a scenario identical to the M one, and lead us to conjecture exact avalanche exponents coinciding with those of the wave PDF. The 2D BTW is defined on a square $`L\times L`$ lattice. An integer $`z_i`$, the number of grains, is assigned to site $`i`$. Starting from a stable configuration ($`z_i\le z_c=4`$, $`\forall i`$), a grain is added at a randomly chosen site; after each addition, all sites exceeding the stability threshold, $`z_k>z_c`$, undergo toppling, distributing one grain to each one of the nearest neighbors. The topplings, which dissipate grains when occurring at the edges, continue until all sites are stable, and a new grain is added. The $`s`$ topplings between two consecutive additions form an avalanche. 
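The toppling rules just described are simple enough to sketch directly. The following minimal 2D BTW implementation (our illustration, not the authors' code) drives a small open-boundary lattice toward a stationary state and records avalanche sizes:

```python
import numpy as np

def add_and_relax(z, i, j, zc=4):
    """Drop one grain at (i, j) and topple until all sites are stable
    (z <= zc); returns the avalanche size s (number of topplings).
    Boundaries are open, so grains leaving the edge are dissipated."""
    L = z.shape[0]
    z[i, j] += 1
    s = 0
    stack = [(i, j)]
    while stack:
        x, y = stack.pop()
        if z[x, y] <= zc:                 # may have been pushed more than once
            continue
        z[x, y] -= 4                      # one grain to each nearest neighbor
        s += 1
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < L and 0 <= ny < L:
                z[nx, ny] += 1
                if z[nx, ny] > zc:
                    stack.append((nx, ny))
        if z[x, y] > zc:                  # still unstable if z exceeded zc + 4
            stack.append((x, y))
    return s

rng = np.random.default_rng(0)
L = 16
z = rng.integers(0, 4, size=(L, L))       # a stable starting configuration
sizes = [add_and_relax(z, rng.integers(L), rng.integers(L)) for _ in range(5000)]
print("mean avalanche size:", np.mean(sizes))
```

After an initial transient, sampling the avalanche sizes from such a run gives the kind of broad distribution analyzed below.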
After many additions, the system organizes into a stationary critical state. Manna studied a two–state version, M, of the sandpile. The sites can be either empty or occupied; grains are added randomly, and when one of them drops onto an occupied site, a “hard core” repulsion pushes two particles out to randomly chosen nearest neighbors. Compared to the BTW, in which toppling is deterministic, this model has an extra stochastic ingredient in the microscopic evolution, and $`z_c=1`$. One can define a special ordering in the topplings of the BTW by introducing the wave decomposition of avalanches. After the site of addition, $`O`$, topples, any other unstable site is allowed to topple except possibly $`O`$. This is the first wave of toppling. If $`O`$ is still unstable, it is allowed to topple once again, leading to the second wave. This continues until $`O`$ becomes stable. Thus, an avalanche is broken into a sequence of waves. During a wave, sites topple only once, and for an $`m`$-wave avalanche $`s=\sum _{k=1}^{m}s_k`$, where $`s_k`$ are the topplings of the $`k`$-th wave. Following the definitions for the BTW, we implement here a wave decomposition of avalanches also for the M. Unlike for the BTW, sites involved in a wave may topple more than once. Furthermore, the toppling order chosen implies now a peculiar sequence of stable configurations visited upon addition of grains, but the realization probabilities of possible configurations are independent of this order. The BTW wave size PDF has FSS form $`P_w(s)\propto s^{-\tau _w}p_w(s/L^{D_w})`$, with $`\tau _w=1`$, $`D_w=2`$ in 2D, and $`p_w`$ a suitable scaling function. For the M waves we also found a PDF obeying FSS, with $`\tau _w=1.31\pm 0.02`$ and $`D_w=2.75\pm 0.01`$. These exponents are remarkably consistent with those estimated for avalanches. Such a coincidence certainly does not apply to the BTW. 
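A wave decomposition along these lines can be sketched as follows. This is an illustrative BTW implementation, not the authors' code; the uniform all-4 starting configuration is chosen only so that the run is reproducible.

```python
import numpy as np

def wave_decomposition(z, i, j, zc=4):
    """Relax the avalanche triggered by adding one grain at O = (i, j),
    decomposed into waves: O topples exactly once per wave, while every
    other unstable site is relaxed freely.  Returns the list of wave
    sizes s_k; the avalanche size is their sum."""
    L = z.shape[0]

    def topple(x, y, stack):
        z[x, y] -= 4
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < L and 0 <= ny < L:      # edge topplings dissipate grains
                z[nx, ny] += 1
                if z[nx, ny] > zc and (nx, ny) != (i, j):
                    stack.append((nx, ny))

    z[i, j] += 1
    wave_sizes = []
    while z[i, j] > zc:
        stack = []
        topple(i, j, stack)      # O topples once, opening the next wave
        s = 1
        while stack:             # in the BTW, each other site topples at most
            x, y = stack.pop()   # once per wave
            if z[x, y] > zc:
                topple(x, y, stack)
                s += 1
        wave_sizes.append(s)
    return wave_sizes

z = np.full((11, 11), 4)         # marginally stable initial configuration
ws = wave_decomposition(z, 5, 5)
print("number of waves:", len(ws), " avalanche size s =", sum(ws))
```

The sequence of $`s_k`$ produced this way is exactly the kind of wave time series whose correlations are analyzed next.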
Attempts to derive exact BTW avalanche exponents within FSS were based on the observation that often waves within an avalanche show a final, long contraction phase, and on a scaling assumption for the corresponding $`s_k\to s_{k+1}`$. A more adequate ansatz for the conditional PDF of $`s_{k+1}`$ given $`s_k`$, and Markovianity assumptions, did not help in better characterizing avalanche scaling along these lines. In fact the 2D BTW obeys a multifractal form of scaling, which is not captured by such simplified schemes. The wave time series $`\{s_k\}`$ provide coarse grained dynamical descriptions. In the $`L\to \infty `$ limit, these descriptions are infinitely rescaled with respect to those at microscopic time scale, but still infinitely finer than the mere records of successive avalanche sizes. This intermediate time scale proves essential in order to understand the dynamics inside avalanches, whose size sequence we found to be uncorrelated in the sense discussed below for waves. The microscopic scale gave no significant results in comparing the two models, since, at that level, similar strong correlations exist in both of them, due to the parallel updating algorithm. We determined for BTW and M the autocorrelation $$C(t,L)=\frac{\langle s_{k+t}s_k\rangle _L-\mu ^2}{\langle s_k^2\rangle _L-\mu ^2},$$ (1) with $`t=1,2,\ldots `$, and $`\mu =\langle s_k\rangle _L`$, the time averages being taken over up to $`10^7`$ waves for $`L=128,256,512,1024`$ and $`2048`$. A first, striking result is that, as soon as $`t>0`$ ($`C(0,L)=1`$ by normalization), waves are uncorrelated in the M. Indeed, as $`L`$ grows, $`C`$ manifestly approaches $`0`$ as soon as $`t>0`$ (Fig. 1). To the contrary, $`C`$ is long range for the BTW, because it approaches $`0`$ only for $`t`$ exceeding the maximum number of waves in an avalanche (Fig. 1), which we found to scale as $`L`$, for $`L\to \infty `$. 
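Eq. (1) is straightforward to evaluate on any stationary series. A minimal sketch (ours, not the authors' analysis code) is tested here on a synthetic AR(1) series with known exponential correlations, as a stand-in for the wave series, which we do not reproduce:

```python
import numpy as np

def autocorr(s, tmax):
    """Normalized autocorrelation of Eq. (1):
    C(t) = (<s_{k+t} s_k> - mu^2) / (<s_k^2> - mu^2)."""
    s = np.asarray(s, dtype=float)
    mu, var = s.mean(), s.var()
    return np.array([((s[: len(s) - t] * s[t:]).mean() - mu**2) / var
                     if t > 0 else 1.0
                     for t in range(tmax)])

# Synthetic stand-in: an AR(1) process, for which C(t) = a^t exactly.
rng = np.random.default_rng(2)
a = 0.8
x = np.empty(100_000)
x[0] = 0.0
for k in range(1, len(x)):
    x[k] = a * x[k - 1] + rng.standard_normal()
C = autocorr(x, 5)
print(C)   # approximately [1, 0.8, 0.64, 0.512, 0.41]
```

For the sandpile data one would feed the concatenated wave-size series $`s_k`$ into `autocorr` in exactly the same way.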
For the BTW we further tested the FSS form $$C(t,L)=t^{-\tau _c}g(t/L^{D_c}),\text{ }(t,L1),$$ (2) on the basis of the $`L\to \infty `$ scaling of the moments $$\langle t^q\rangle _L=\sum _tC(t,L)t^q\propto L^{\sigma _c(q)}.$$ (3) FSS would imply the piecewise linear form $$\sigma _c(q)=\{\begin{array}{cc}D_c(q-\tau _c+1)\hfill & \text{if }q\ge \tau _c-1,\hfill \\ 0\hfill & \text{if }q<\tau _c-1.\hfill \end{array}$$ (4) Fig. 2 shows the extrapolated $`\sigma _c(q)`$, which has an approximately linear part for $`1\le q\le 4`$, consistent with $`D_c=1.02\pm 0.05`$ and $`\tau _c=0.40\pm 0.05`$. The curvature for $`q<1`$ is due to the fact that, for finite $`L`$, a logarithm ($`\tau _c=0`$) cannot be easily distinguished from a power law with $`\tau _c\ne 0`$. The inset of Fig. 2 shows this logarithmic growth of $`\langle t^q\rangle _L`$. We conclude that, for the BTW, a simple power law tail $`C(t,\infty )\propto t^{-\tau _c}`$, is a first, rough, approximation. The increment $`y(t)=\sum _{k=1}^{t}s_k`$ is comparable to the trail of a fractional Brownian motion with Hurst exponent $`H=(2-\tau _c)/2`$, such that $`H=0.80\pm 0.03`$ for the BTW. $`H`$ can be measured directly from the fluctuation $$F(t,L)=\left[\langle \mathrm{\Delta }y(t)^2\rangle _L-\langle \mathrm{\Delta }y(t)\rangle _{L}^{2}\right]^{1/2},$$ (5) with $`\mathrm{\Delta }y(t)=y(k+t)-y(k)`$, which should scale as $`F(t,\infty )\propto t^H`$. Fig. 3 reports tests of this scaling for $`L=2048`$. For the BTW, $`H\simeq 0.85`$ at low $`t`$, in agreement with $`\tau _c=0.40\pm 0.05`$. A crossover to $`H=1/2`$ is observed for large $`t`$. The crossover time of course increases with $`L`$. $`H=1/2`$ corresponds to a process with $`C`$ exponentially decaying with $`t`$, or with $`C(t,\infty )=\delta _{t,0}`$, as is the case for the M. Thus, the crossover is due to the fact that, beyond the maximal time duration of avalanches, waves are uncorrelated. $`H\ne 1/2`$ implies long range correlations, as we find for the BTW. 
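The fluctuation of Eq. (5) and the effective Hurst exponent can be estimated as in the sketch below (ours, not the authors' code). It uses synthetic uncorrelated increments, for which $`H=1/2`$ is expected; a persistent series like the BTW waves would give a steeper rise at small $`t`$.

```python
import numpy as np

def fluctuation(s, ts):
    """F(t) of Eq. (5) for the profile y(t) = sum_{k<=t} s_k:
    F(t) = sqrt(<Dy(t)^2> - <Dy(t)>^2), with Dy(t) = y(k+t) - y(k)."""
    y = np.cumsum(np.asarray(s, dtype=float))
    F = []
    for t in ts:
        dy = y[t:] - y[:-t]       # all increments over a lag t
        F.append(dy.std())
    return np.array(F)

rng = np.random.default_rng(3)
s = rng.exponential(size=200_000)            # uncorrelated "wave sizes"
ts = np.array([1, 4, 16, 64, 256])
F = fluctuation(s, ts)
H = np.polyfit(np.log(ts), np.log(F), 1)[0]  # effective Hurst exponent
print(f"H = {H:.2f}")                        # close to 0.5 for white noise
```

Fitting the same log-log slope over a window of small $`t`$ on the BTW wave series is what yields the quoted $`H\simeq 0.85`$ before the crossover.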
Furthermore, $`H>1/2`$ corresponds to persistency: an increasing or decreasing trend of $`y`$ in the past mostly implies a similar tendency in the future. This accounts for the observed expansion and contraction phases in avalanche growth. The above features of $`C`$ should be responsible for the peculiar long range on–site correlations of the noise expected when mapping the BTW onto a discrete interface growth equation. The BTW has been shown to display a non-constant gap in the high $`q`$ moments of some avalanche PDF’s. Thus, it should not come as a surprise if, unlike assumed in Eq. (4), also $`C`$ would manifest similar multifractal properties. Indeed, a more accurate analysis shows that, for high $`q`$, the gap $`d\sigma _c(q)/dq`$ grows slowly with $`q`$, beyond the above estimate of $`D_c`$. This multifractal character can be embodied in the more general scaling ansatz $$C(t=L^\alpha ,L)=L^{f_c(\alpha )-\alpha },\text{ }(L1),$$ (6) with a nonlinear singularity spectrum $`f_c>-\infty `$ in an $`\alpha `$-interval covering the whole range of possible gaps and linked to $`\sigma _c`$ by Legendre transform. $`C`$ would satisfy the FSS ansatz (2) only if $`f_c`$ were a linear function, i.e. $`f_c(\alpha )=-(\tau _c-1)\alpha `$ if $`\alpha \in [0,D_c]`$, $`f_c(\alpha )=-\infty `$ otherwise. The FSS picture given above is in fact only an approximation. This explains also the slight curvature of the $`F`$ plot for low $`t`$, which makes the direct measurement of $`H`$ ambiguous (Fig. 3). Rather than attempting a more precise determination of $`\sigma _c`$ and $`f_c`$, below we clarify the difference between BTW and M, and the origin of multiscaling in the former, in the light of probabilistic concepts. Waves have a relatively simple behavior. So, in a renormalization group (RG) spirit, we can coarse grain the time, by looking at the PDF, $`P^{(n)}(s,L)`$, of the sum of the sizes of $`n`$ consecutive waves, regardless of the avalanche they belong to. 
Since avalanches are constituted by an infinite number of waves for $`L\to \infty `$, by sending also $`n\to \infty `$, we expect $`P^{(n)}`$ to approach the PDF of avalanche sizes as a RG fixed point. This approach can be monitored on the basis of the effective moment scaling exponents defined by $$\langle s^q\rangle _{n,L}=\int ds\text{ }s^qP^{(n)}(s,L)\propto L^{\sigma _{s,n}(q)}.$$ (7) For the M, $`\sigma _{s,n}(q)`$ does not depend on $`n`$, and, within our accuracy, is equal from the start to $`\sigma _s(q)`$, such that $`\int ds\text{ }s^qP(s,L)\propto L^{\sigma _s(q)}`$, with $`P(s,L)`$ representing the avalanche size PDF. To the contrary, in the BTW, as $`n`$ increases, $`\sigma _{s,n}(q)`$ varies and moves gradually towards the appropriate $`\sigma _s(q)`$ (Fig. 4). The result for the M can be explained on the basis of the fact that PDF’s of independent variables, satisfying a scaling ansatz of the type (6), have a spectrum which does not change under convolution. For example, if we put $`P_w(s=L^\alpha ,L)\propto L^{f_w(\alpha )-\alpha }`$, where $`f_w`$ is the spectrum of the wave size PDF, one can verify that $$P^{(2)}(L^\alpha ,L)=\int d(L^\beta )P_w(L^\alpha -L^\beta ,L)P_w(L^\beta ,L)\propto L^{f_w(\alpha )-\alpha }.$$ (9) Thus, also the spectrum associated with $`P^{(n)}(s,L)`$, which is the convolution of $`n`$ $`P_w`$’s, does not depend on $`n`$. This implies the $`n`$–independence of $`\sigma _{s,n}(q)`$, which is determined once $`f_w`$ is given. Now, let $`P(s,m,L)`$ be the probability of having an avalanche with $`s`$ topplings and $`m`$ waves: due to the uncorrelated character of different wave sizes, for the M $`P(s,m,L)`$ is also the convolution of $`m`$ $`P_w`$’s, i.e. $`P(s,m,L)=P^{(m)}(s,L)`$. Therefore, if $`P(m,L)`$ is the PDF of the total number, $`m`$, of waves in an avalanche, one has $$P(s,L)=\sum _mP^{(m)}(s,L)P(m,L),$$ (10) such that $$L^{\sigma _s(q)}\propto \sum _mL^{\sigma _{s,m}(q)}P(m,L),$$ (11) which, together with the above results, imply $`\sigma _{s,n}(q)=\sigma _s(q)`$. 
The coarse graining of waves in the M does not modify the block PDF spectrum, which is from the start at its fixed point, representing also the scaling of avalanches. In the BTW case, the nontrivial composition of correlated waves is responsible for the change with $`n`$ of the effective singularity spectrum of $`P^{(n)}`$ (Fig. 4). For $`n=1`$, the simple linear FSS form $`f_w(\alpha )=-(\tau _w-1)\alpha `$ ($`\alpha \in [0,D_w]`$) applies, while for $`n\to \infty `$ one should recover the nonlinear form needed to describe the complex scaling of $`P(s,L)`$. In practice, sampling limitations prevent us from reaching very high $`n`$. However, even if convergence is relatively slow, the tendency of $`\sigma _{s,n}(q)`$ to move towards $`\sigma _s(q)`$ is very manifest. We could clearly detect the increase with $`n`$ of the gap $`d\sigma _{s,n}(q)/dq`$, at fixed high $`q`$: for example, we determined $`d\sigma _{s,n}(4)/dq\simeq 2.00,2.20,2.25,2.38`$, for $`n=1,8,12,24`$, respectively. Furthermore, while the asymptotic gap for $`P_w(s,L)`$ gets readily to the maximum value, i.e. $`d\sigma _{s,1}(q)/dq=2`$ for $`q\ge 1`$, as soon as $`n>1`$ a constant gap could not be detected for $`\sigma _{s,n}(q)`$; this confirms the progressive appearance of effective multifractal scaling for the block variable. In summary, a comparative analysis of wave time series shows that the different forms of universal scaling in the BTW and M are determined by distinct dynamical correlation patterns. For the M case, the wave level of description is in fact coarse grained enough to fully account, without further modifications, also for the avalanche level. Waves are uncorrelated and their PDF, satisfying FSS, coincides with the avalanche PDF, as far as exponents are concerned. To the contrary, in the BTW, under coarse graining, long time correlations substantially modify the scaling properties of waves, determining also multiscaling features. 
We regard these as prototype mechanisms to be generally expected in sandpile and similar SOC models. The 2D BTW behavior is probably less generic than the M one. For example, in the 3D BTW, the fact that avalanches are constituted by an average number of waves which remains finite for $`L\to \infty `$, strongly suggests an M-type mechanism and FSS for the avalanche PDF. Indeed, even if subsequent waves are not strictly uncorrelated, as they are in the 2D Manna model, $`C`$ should decay extremely fast, with an $`L`$-independent time cutoff. Thus, coarse graining in time cannot substantially modify the PDF of block variables, with respect to $`P_w`$. We expect the exactly known wave exponents to apply also to the avalanche size PDF. Indeed, numerically determined avalanche exponents for 3D BTW turn out to be strikingly close to the wave values. We thank C. Tebaldi and C. Vanderzande for useful discussions. Partial support from the European Network Contract No. ERBFMRXCT980183 is acknowledged.
# Packing of Compressible Granular Materials ## Abstract 3D Computer simulations and experiments are employed to study random packings of compressible spherical grains under external confining stress. Of particular interest is the rigid ball limit, which we describe as a continuous transition in which the applied stress vanishes as $`(\varphi -\varphi _c)^\beta `$, where $`\varphi `$ is the (solid phase) volume density. This transition coincides with the onset of shear rigidity. The value of $`\varphi _c`$ depends, for example, on whether the grains interact via only normal forces (giving rise to random close packings) or by a combination of normal and friction generated transverse forces (producing random loose packings). In both cases, near the transition, the system’s response is controlled by localized force chains. As the stress increases, we characterize the system’s evolution in terms of (1) the participation number, (2) the average force distribution, and (3) visualization techniques. PACS: 81.06.Rm Dense packings of spherical particles are an important starting point for the study of physical systems as diverse as simple liquids, metallic glasses, colloidal suspensions, biological systems, and granular matter. In the case of liquids and glasses, finite temperature molecular dynamics (MD) studies of hard sphere models have been particularly important. Here one finds a first order liquid-solid phase transition as the solid phase volume fraction, $`\varphi `$, increases. Above the freezing point, a metastable disordered state can persist until $`\varphi \to \varphi _{\text{RCP}}`$, where $`\varphi _{\text{RCP}}`$ is the density of random close packing (RCP)— the densest possible random packing of hard spheres. This Letter is concerned with the non-linear elastic properties of granular packings. Unlike glasses and amorphous solids, this is a zero temperature system in which the interparticle forces are both non-linear and path (i.e., history) dependent. 
\[Because these forces are purely repulsive, mechanical stability is achieved only by imposing external stress.\] The structure of a packing depends in detail on the forces acting between the grains during rearrangement; indeed, different rearrangement protocols can lead to either RCP or random loose packed (RLP) systems. In the conventional continuum approach to this problem, the granular material is treated as an elasto-plastic medium. However, this approach has been challenged by recent authors who argue that granular packings represent a new kind of fragile matter and that more exotic methods, e.g., the fixed principal axis ansatz, are required to describe their internal stress distributions. These new continuum methods are complemented by microscopic studies based on either contact dynamics simulations of rigid spheres or statistical models, such as the q-model, which makes no attempt to take account of the character of the inter-grain forces. In our view, a proper description of the stress state in granular systems must take account of the fact that the individual grains are deformable. We report here on a 3D study of deformable spheres interacting via Hertz-Mindlin contact forces. Our simulations cover four decades in the applied pressure and allow us to understand the regimes in which the different theoretical approaches described above are valid. Since the grains in our simulations are deformable, the volume fraction can be increased above the hard sphere limit and we are able to study the approach to the RCP and RLP states from this realistic perspective. Within this framework, the rigid grain limit is described as a continuous phase transition where the order parameter is the applied stress, $`\sigma `$, which vanishes continuously as $`(\varphi -\varphi _c)^\beta `$. Here $`\varphi _c`$ is the critical volume density, and $`\beta `$ is the corresponding critical exponent. 
We emphasize that the fragile state corresponding to rigid grains is reached by looking at the limit $`\varphi \to \varphi _c^+`$ from above. Of particular importance is the fact that $`\varphi _c`$ depends on the type of interaction between the grains. If the grains interact via normal forces only, they slide and rotate freely, mimicking the rearrangements of grains during shaking in experiments. We then obtain the RCP value $`\varphi _c=0.634(4)`$ ($`\simeq \varphi _{\text{RCP}}`$). By contrast, if the grains interact by combined normal and friction generated transverse forces, we get RLP at the critical point with $`\varphi _c=0.6284(2)<\varphi _{\text{RCP}}`$. The power-law exponents characterizing the approach to $`\varphi _c`$ are not universal and depend on the strength of friction generated shear forces. Our results indicate that the transitions at both RCP and RLP are driven by localized force chains. Near the critical density there is a percolative fragile structure which we characterize by the participation number (which measures localization of force chains), the probability distribution of forces, and also by visualization techniques. A subset of our results is experimentally verified using carbon paper measurements to study force distributions in the granular assembly. We also consider in some detail the relationship between our work and recent experiments in 2D Couette geometries. Numerical Simulations: To better understand the behavior of real granular materials, we perform granular dynamics simulations of unconsolidated packings of deformable spherical glass beads using the Discrete Element Method developed by Cundall and Strack. Unlike previous work on rigid grains, we work with a system of deformable elastic grains interacting via normal and tangential Hertz-Mindlin forces plus viscous dissipative forces. The grains have shear modulus 29 GPa, Poisson’s ratio 0.2 and radius 0.1 mm. 
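For orientation, the normal part of such a contact interaction can be sketched with the standard Hertz law for two identical elastic spheres, using the quoted material parameters. The exact prefactors and the tangential (Mindlin) part used in the simulations are not specified in the text, so this is only an illustrative sketch:

```python
import math

# Material parameters quoted in the text
G, nu, R = 29e9, 0.2, 1e-4        # shear modulus [Pa], Poisson ratio, radius [m]

def hertz_normal_force(delta):
    """Standard Hertz normal repulsion F = (4/3) E* sqrt(R_eff) delta^{3/2}
    for two identical spheres with overlap delta (a sketch, not the
    simulation's exact force law)."""
    e_star = G / (1.0 - nu)       # effective modulus for two identical spheres
    r_eff = R / 2.0               # effective radius R1*R2/(R1+R2)
    return (4.0 / 3.0) * e_star * math.sqrt(r_eff) * delta**1.5

for delta in (1e-8, 1e-7, 1e-6):  # overlaps well below R
    print(f"delta = {delta:.0e} m  ->  F = {hertz_normal_force(delta):.3e} N")
```

The $`\delta ^{3/2}`$ nonlinearity (a factor $`10^{3/2}\simeq 31.6`$ in force per decade of overlap) is what makes the packing's stiffness vanish continuously as the confining stress is released.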
Our simulations employ periodic boundary conditions and begin with a gas of 10000 non-overlapping spheres located at random positions in a cube 4 mm on a side. Generating a mechanically stable packing is not a trivial task. At the outset, a series of strain-controlled isotropic compressions and expansions are applied until a volume fraction slightly below the critical density is reached. At this point the system is at zero pressure and zero coordination number. We then compress along the $`z`$ direction, until the system equilibrates at a desired vertical stress $`\sigma `$ and a non-zero average coordination number $`Z`$. Figure 1a shows the behavior of the stress as a function of the volume fraction. We find that the pressure vanishes at a critical $`\varphi _c=0.6284(2)`$. Although we cannot rule out a discontinuity in the pressure at $`\varphi _c`$— as we could expect for a system of hard spheres— our results indicate that the transition is continuous and the behavior of the pressure can be fitted to a power law form $$\sigma \propto (\varphi -\varphi _c)^\beta ,$$ (1) where $`\beta =1.6(2)`$. Our 3D results contrast with recent experiments of slowly sheared grains in 2D Couette geometries, where a faster than exponential approach to $`\varphi _c`$ was found, while they agree qualitatively with a similar continuous transition found in compressed emulsions and foams. Figure 1b shows the behavior of the mean coordination number, $`Z`$, as a function of $`\varphi `$. We find $$Z-Z_c\propto (\varphi -\varphi _c)^\theta ,$$ (2) where $`Z_c=4`$ is a minimal coordination number, and $`\theta =0.29(5)`$ is a critical exponent. At criticality the system is very loose and fragile with a very low coordination number. The value of $`Z_c`$ can be understood in terms of constraint arguments: in the rigid ball limit, for a disordered system with both normal and transverse forces, we find $`Z_c=D+1=4`$. 
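An exponent such as $`\beta `$ in Eq. (1) is typically extracted by a log-log fit of $`\sigma `$ against $`(\varphi -\varphi _c)`$. The sketch below does this on synthetic data generated with the quoted values; in a real analysis $`\varphi _c`$ itself must be fitted or scanned, and the prefactor here is an arbitrary assumption.

```python
import numpy as np

# Synthetic stress-density data generated with the quoted beta = 1.6 and
# phi_c = 0.6284 (prefactor arbitrary); a log-log fit recovers the exponent.
phi_c, beta = 0.6284, 1.6
phi = phi_c + np.logspace(-4, -1.5, 12)        # volume fractions above critical
sigma = 3.0e3 * (phi - phi_c)**beta            # synthetic stress values

slope, intercept = np.polyfit(np.log(phi - phi_c), np.log(sigma), 1)
print(f"fitted beta = {slope:.2f}")            # recovers 1.60
```

The same fit applied to Eq. (2), with $`Z-Z_c`$ on the left-hand side, yields $`\theta `$.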
As we compress the system, more contacts are created, providing more constraints so that the forces become overdetermined. We notice that $`\varphi _c`$ obtained for this system is considerably lower than the best estimated value at RCP, $`\varphi _{\text{RCP}}=0.6366(4)`$, obtained by Finney using ball bearings. This latter value is obtained by carefully vibrating the system and letting the grains settle into the most compact packing. Numerically, this is achieved by allowing the grains to reach the state of mechanical equilibrium interacting only via normal forces. By removing the transverse forces, grains can slide freely and find more compact packings than with transverse forces. Numerically we confirm this by equilibrating the system at zero transverse force. The critical packing fraction found in this way is $`\varphi _c=0.634(4)`$ ($`\varphi _{\text{RCP}}`$ within error bars). The stress behaves as in Eq. (1) but with a different exponent $`\beta =2.0(2)`$ (Fig. 1a). At the critical volume fraction the average coordination number is now $`Z_c=6`$ \[and $`\theta =0.94(5)`$, Fig. 1b\], which again can be understood using constraint arguments which would give a minimal coordination number equal to $`2D`$ for frictionless rigid balls. We conclude that the value $`\varphi _c\simeq 0.6288<\varphi _{\text{RCP}}`$ found with transverse forces corresponds to the RLP limit, experimentally achieved by pouring balls in a container but without allowing for further rearrangements. Experimentally, stable loose packings with $`\varphi `$ as low as 0.60 have been found. In our simulations, $`\varphi _c`$ lower than $`0.6288`$ can be obtained by increasing the strength of the tangential forces. This is in agreement with the experiments of Scott and Kilgour, who found that the maximum packing density of spheres decreases with the surface roughness (friction) of the balls. 
While previous studies characterized RCP’s and RLP’s by using radial distribution functions and Voronoi constructions, we take a different approach which allows us to compare our results directly with recent work on force transmissions in granular matter. Previous studies of granular media indicate that, for forces greater than the average value, the distribution of inter-grain contact forces is exponential. In addition, photo-elastic visualization experiments and simulations show that contact forces are strongly localized along “force chains” which carry most of the applied stress. The existence of force chains and exponential force distributions are thought to be intimately related. Here we analyze this scenario in the entire range of pressures: from the $`\varphi _c`$ limit and up. Figure 2a shows the force distribution obtained in the simulations with frictional balls. At low stress, the distribution is exponential in agreement with previous experiments and models. When the system is compressed further, we find a gradual transition to a Gaussian force distribution. We observe a similar transition in our simulations involving frictionless grains under isotropic compression. This suggests that our results are generic, and do not depend, qualitatively, on the preparation history or on the existence of friction generated transverse forces between the grains. Physically, we find that the transition from Gaussian to exponential force distribution is driven by the localization of force chains as the applied stress is decreased. In granular materials, with particles of similar size, localization is induced by the disorder of the packing arrangement. To quantify the degree of localization, we consider the participation number $`\mathrm{\Pi }`$: $$\mathrm{\Pi }\equiv \left(M\sum _{i=1}^{M}q_i^2\right)^{-1}.$$ (3) Here $`M`$ is the number of contacts between the spheres, $`Z=2M/N`$ is the average coordination number, and $`N`$ is the number of spheres. 
$`q_i\equiv f_i/\sum _{j=1}^{M}f_j`$, where $`f_i`$ is the magnitude of the total force at every contact. From the definition (3), $`\mathrm{\Pi }=1`$ indicates a limiting state with a spatially homogeneous force distribution ($`q_i=1/M`$ for all $`i`$). On the other hand, in the limit of complete localization, $`\mathrm{\Pi }=1/M\to 0`$ as $`M\to \infty `$. Figure 2c shows our results for $`\mathrm{\Pi }`$ versus $`\sigma `$. Clearly, the system is more localized at low stress than at high stress. Initially, the growth of $`\mathrm{\Pi }`$ is logarithmic, indicating a smooth delocalization transition. This behavior is seen up to $`\sigma `$ 2.1 MPa, after which the participation number saturates at a higher value: $$\begin{array}{cccc}\hfill \mathrm{\Pi }(\sigma )& \propto & \mathrm{log}(\sigma )\hfill & [\sigma <\text{2.1 MPa}]\hfill \\ \hfill \mathrm{\Pi }(\sigma )& \approx & 0.62\hfill & [\sigma >\text{2.1 MPa}]\hfill \end{array}$$ (4) This behavior suggests that, near the critical density, the forces are localized in force chains sparsely distributed in space. As the applied stress is increased, the force chains become denser, and are thus distributed more homogeneously. How might we expect the participation number to depend upon other system parameters when the forces are transmitted principally by force chains? In an idealized situation, the system has $`N_{FC}`$ force chains, each of which has $`N_z`$ spheres. Each sphere in a force chain has two major load-bearing contacts, whose loads must be approximately equal. In the lateral directions, roughly four weak contacts are required for stability. These contacts carry a fraction $`\alpha <1`$ of the major vertical load. All other contacts have $`f_i\approx 0`$. Under these assumptions, $$\mathrm{\Pi }=\frac{2}{Z}\frac{(1+2\alpha )^2}{(1+2\alpha ^2)}\frac{N_{FC}N_z}{N}\le \frac{2}{Z}\frac{(1+2\alpha )^2}{(1+2\alpha ^2)}.$$ (5) The last inequality becomes an equality iff all the balls are in force chains. 
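The participation number of Eq. (3) is straightforward to evaluate from a list of contact forces. The following sketch (ours, not taken from the paper) checks the two limiting cases quoted above, and evaluates the idealized force-chain estimate of Eq. (5) for the values $`\alpha =2/5`$ and $`Z=8`$ used later in the text:

```python
def participation(forces):
    """Participation number, Eq. (3): Pi = 1 / (M * sum_i q_i^2),
    with q_i = f_i / sum_j f_j."""
    m = len(forces)
    total = sum(forces)
    q2 = sum((f / total) ** 2 for f in forces)
    return 1.0 / (m * q2)

# Homogeneous limit: all contacts carry the same force -> Pi = 1.
print(participation([1.0] * 1000))          # 1.0

# Complete localization: a single contact carries everything -> Pi = 1/M.
print(participation([1.0] + [0.0] * 999))   # 0.001

# Idealized force-chain packing: per chain sphere, one vertical contact
# with load f and two lateral contacts with alpha*f, embedded among Z/2
# contacts per sphere (the remaining ones carry ~0 force).
alpha, Z = 0.4, 8
chain_sphere = [1.0, alpha, alpha] + [0.0] * (Z // 2 - 3)
print(round(participation(chain_sphere * 2000), 3))  # ~0.614, i.e. the ~0.62 of Eq. (4)
```

The third value coincides with the right-hand side of Eq. (5), $`(2/Z)(1+2\alpha )^2/(1+2\alpha ^2)`$, since here every sphere belongs to a chain.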
From our simulations at large pressure $`\alpha \approx 2/5`$, so that at $`Z\approx 8`$, $`\mathrm{\Pi }\approx 0.62`$, which implies that the system has been completely homogenized. Although Eq. (5) is oversimplified, we believe that the change in slope in Fig. 2c is emblematic of the complete disappearance of well-separated chains. The localization transition can be understood by studying the behavior of the forces during the loading of the sample. Clearly, visualizing forces in 3D systems is a complicated task. In order to exhibit the rigid structure of the system we visually examine all the forces larger than the average force; these carry most of the stress of the system. The forces smaller than the average are thought to act as an interstitial subset of the system, providing stability against the buckling of force chains . We look for force chains by starting from a sphere at the top of the system, and following the path of maximum contact force at every grain. We look only for the paths which percolate, i.e., stress paths spanning the sample from top to bottom. In Fig. 3 we show the evolution of the force chains thus obtained for two extreme cases of confining stress. We clearly see localization at low confining stress: the force-bearing network is concentrated in a few percolating chains. At this point the grains are weakly deformed but still well connected. We expect a broad force distribution, as found in this and previous studies. As we compress further, new contacts are created and the density of force chains increases. This in turn gives rise to a more homogeneous spatial distribution of forces, which is consistent with the crossover to a narrow Gaussian distribution. Experiments: Some of the predictions of our numerical study can be tested using standard carbon-paper experiments , which have been used successfully in the past to study force fluctuations in granular packings. 
A Plexiglas cylinder, 5 cm in diameter and of varying height (from 3 cm to 5 cm), is filled with spherical glass beads of diameter $`0.8\pm 0.05`$ mm. At the bottom of the container we place a carbon paper with white glossy paper underneath. We close the system with two pistons and allow the top piston to slide freely in the vertical direction, while the bottom piston is held fixed to the cylinder. The system is compressed in the vertical direction with an Inktron<sup>TM</sup> press and the beads at the bottom of the cylinder leave marks on the glossy paper. We digitize this pattern and calculate the “darkness” of every mark on the paper. To calibrate the relationship between the marks and the force, a known weight is placed on top of a single bead; for the forces of interest in this study (i.e., from $`0.05`$ N to 6 N), there is a roughly linear relation between the darkness of the dot and the force on the bead. We perform the experiment for different external forces, ranging from 2000 N to 9000 N, and different cylinder heights. The corresponding vertical stress, $`\sigma `$, at the bottom of the cylinder ranges between 100 kPa and 2.3 MPa (as measured from the darkness of the dots). The results of four different measurements are shown in Fig. 2b. For $`\sigma `$ smaller than $`750`$ kPa, the distribution of forces, $`f`$, at the bottom piston decays exponentially: $$P(f)=\langle f\rangle ^{-1}\mathrm{exp}[-f/\langle f\rangle ],[\text{ }\sigma <\text{ 750 kPa}],$$ (6) where $`\langle f\rangle `$ is the average force. When the stress is increased above 750 kPa there is a gradual crossover to a Gaussian force distribution, as we find in the simulations. For example, at 2.3 MPa we have $$P(f)\propto \mathrm{exp}\left[-k^2\left(f-f_o\right)^2\right],[\sigma =\text{ 2.3 MPa}],$$ (7) where $`kf_o\gg 1`$, and therefore $`f\approx f_o`$. Similar results have been found in 2D geometries . 
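The exponential form of Eq. (6) has two simple consequences that are easy to check on synthetic data: the single parameter is the mean force $`\langle f\rangle `$, and the tail obeys $`P(f>c)=\mathrm{exp}(-c/\langle f\rangle )`$. The sketch below (our illustration, not the authors' analysis code; `mean_f` is an arbitrary value in arbitrary units) samples forces from Eq. (6) and verifies both:

```python
import math
import random

random.seed(1)
mean_f = 2.0                      # average force <f> (arbitrary units)
forces = [random.expovariate(1.0 / mean_f) for _ in range(200_000)]

# The sample mean recovers <f>, the only parameter in Eq. (6).
f_avg = sum(forces) / len(forces)
print(round(f_avg, 2))            # close to 2.0

# Eq. (6) implies P(f > c) = exp(-c/<f>): check the cumulative tail.
c = 3.0
tail = sum(1 for f in forces if f > c) / len(forces)
print(round(tail, 3), round(math.exp(-c / mean_f), 3))  # both close to 0.223
```

The same tail test applied to a Gaussian sample would fail badly, which is one way the crossover of Eq. (7) shows up in histogrammed data.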
Discussion: In summary, using both numerical simulations and experiments, we have studied unconsolidated compressible granular media in a range of pressures spanning almost four decades. In the limit of weak compression, the stress vanishes continuously as $`(\varphi -\varphi _c)^\beta `$, where $`\varphi _c`$ corresponds to RLP or RCP depending on whether transverse forces between the grains are present or absent, respectively. At criticality, the coordination number approaches a minimal value $`Z_c`$ (=4 for frictional and 6 for frictionless grains), also as a power law. Our result $`Z_c=6`$ agrees with the experimental analysis of Bernal packings for close contacts between spheres fixed by means of wax , and with our own analysis of the Finney packings using the actual sphere-center coordinates of 8000 steel balls. However, no similar experimental study exists for RLP that could confirm $`Z_c=4`$. A critical slowing down (the time to equilibrate the system increases near $`\varphi _c`$) and the emergence of shear rigidity (to be discussed elsewhere) are also found at criticality. The distribution of forces is found to decay exponentially. The system is dominated by a fragile network of relatively few force chains which span the system. When the stress is increased away from $`\varphi _c`$, to the point that the number of contacts has significantly increased from its initial value $`Z_c`$, we find that: (1) the distribution of forces crosses over to a Gaussian; (2) the participation number increases, and then abruptly saturates; and (3) the density of force chains increases to the point where it no longer makes sense to describe the system in those terms. Our simulations indicate that the crossover is associated with a loss of localization and the ensuing homogenization of the force-bearing stress paths.
# Semiclassical description of multiphoton processes ## I Introduction In the last two decades multiphoton processes have been studied intensively, experimentally as well as theoretically. The inherent time-dependent nature of an atomic or molecular excitation process induced by a short laser pulse renders a theoretical description problematic in two respects. Firstly, a full quantum calculation in three dimensions requires a large computational effort. For this reason, quantum calculations have been restricted to one active electron in most cases . Secondly, an intuitive understanding for an explicitly time dependent process seems to be notoriously difficult, exemplified by pertinent discussions about stabilization in intense laser fields . Many studies have been carried out to gain an intuitive understanding of the two most prominent strong field phenomena, namely High Harmonic Generation (HHG) and Above Threshold Ionization (ATI). In the well established early analytical formulation by Keldysh, Faisal and Reiss the atomic potential is treated as a perturbation for the motion of the electron in a strong laser field . This picture is still used in more recent models, where the classical dynamics of the electron in the laser field is explicitly considered, e.g. in Corkum’s rescattering model which can explain the cutoff observed in HHG for linearly polarized laser light in one spatial dimension . The corresponding Hamiltonian reads $$H=H_0+E_0f(t)x\mathrm{sin}(\omega _0t+\delta ),$$ (1) where $`H_0=\frac{1}{2}p^2+V(x)`$ is the atomic Hamiltonian, $`f(t)`$ is the time-profile of the laser pulse with maximum amplitude $`E_0`$, and $`\omega _0`$ is the laser frequency. The interaction of the electron with the atom is specified by the potential $`V`$. Lewenstein et al. 
extended Corkum’s rescattering idea to a quasiclassical model which contains, on the one hand, a single (relevant) bound state not influenced by the laser field and, on the other hand, electrons which only feel the laser field . This simple model explains the features of HHG qualitatively well. The same is also true for an alternative model, where the electron is bound by a zero-range potential . However, the basic question whether and to what extent these multiphoton processes can be understood semiclassically, i.e., by interference of classical trajectories alone, remains unanswered. It is astonishing that no direct semiclassical investigation of the Hamiltonian Eq. (1) has been performed, while a number of classical as well as quantum calculations for Eq. (1) have been published. However, only recently a semiclassical propagation method has been formulated which can be implemented with reasonable numerical effort. This is very important for the seemingly simple Hamiltonian Eq. (1), whose classical dynamics is mixed and in some phase-space regions highly chaotic, which requires efficient computation to achieve convergence. Equipped with these semiclassical tools we have studied multiphoton phenomena semiclassically in the framework of Eq. (1). In comparison to the exact quantum solution we will work out those features of the intense-field dynamics that can be understood in terms of interference of classical trajectories. The plan of the paper is as follows. In section II we provide the tools for the calculation of a semiclassical, time-dependent wavefunction. In section III we discuss above-threshold ionization (ATI) and work out the classical quantities which semiclassically structure the relevant observables. In section IV we use this knowledge for the description of high-harmonic generation (HHG). Section V concludes the paper with a comparison of HHG and ATI from a semiclassical perspective and a short summary. 
## II Calculation of the semiclassical wave function A (multi-dimensional) wave function $`\mathrm{\Psi }_\beta (𝐱,t)`$ can be expressed as $$\mathrm{\Psi }(𝐱,t)=\int 𝑑𝐱^{}K(𝐱,𝐱^{},t)\mathrm{\Psi }(𝐱^{}).$$ (2) Here, $`\mathrm{\Psi }(𝐱^{})`$ is the initial wave function at $`t=0`$ and $`K(𝐱,𝐱^{},t)`$ denotes the propagator. We will not use the well-known semiclassical Van Vleck-Gutzwiller (VVG) propagator, which is inconvenient for several reasons. Firstly, one has to deal with caustics, i.e. singularities of the propagator, and secondly, it is originally formulated as a boundary value problem. Much better suited for numerical applications (and no worse for analytical considerations) is the so-called Herman-Kluk (HK) propagator, which is a uniformized propagator in initial value representation , formulated in phase space, $`K^{HK}(𝐱,𝐱^{},t)`$ $`=\frac{1}{(2\pi \hbar )^n}\int 𝑑𝐩\,𝑑𝐪\,C_{\mathrm{𝐪𝐩}}(t)e^{iS_{\mathrm{𝐪𝐩}}(t)/\hbar }`$ (3) $`\times \,g_\gamma (𝐱;𝐪(t),𝐩(t))\,g_\gamma ^{}(𝐱^{};𝐪,𝐩)`$ (4) with $$g_\gamma (𝐱;𝐪,𝐩)=\left(\frac{\gamma }{\pi }\right)^{n/4}\mathrm{exp}\left(-\frac{\gamma }{2}\left(𝐱-𝐪\right)^2+\frac{i}{\hbar }𝐩\left(𝐱-𝐪\right)\right)$$ (5) and $$C_{\mathrm{𝐪𝐩}}(t)=\left|\frac{1}{2}\left(𝐐_𝐪+𝐏_𝐩-i\hbar \gamma 𝐐_𝐩-\frac{1}{i\hbar \gamma }𝐏_𝐪\right)\right|^{\frac{1}{2}}.$$ (6) Each phase space point ($`𝐪,𝐩`$) in the integrand of Eq. (3) is the starting point of a classical trajectory with action $`S_{\mathrm{𝐪𝐩}}(t)`$. The terms $`𝐗_𝐲`$ in the weight factor $`C_{\mathrm{𝐪𝐩}}(t)`$ are the four elements of the monodromy matrix, $`𝐗_𝐲=\partial 𝐱_t/\partial 𝐲`$. The square root in Eq. (6) has to be calculated in such a manner that $`C_{\mathrm{𝐪𝐩}}(t)`$ is a continuous function of $`t`$. The integrand in Eq. (3) is – depending on the system – highly oscillatory. Although we restrict ourselves to one spatial dimension (see Eq. (1)), the number of trajectories necessary for numerical convergence can reach $`10^7`$. 
We note in passing that an integration by stationary phase approximation over momentum and coordinate variables reduces the HK propagator to the VVG propagator . In all calculations presented here we have used a Gaussian wave packet as initial wave function, $$\mathrm{\Psi }_\beta (x^{})=\left(\frac{\beta }{\pi }\right)^{1/4}\mathrm{exp}\left(-\frac{\beta }{2}\left(x^{}-q_\beta \right)^2\right).$$ (7) With this choice, the overlap $$f_{\gamma \beta }(q,p)\equiv \int g_\gamma ^{}(x^{};q,p)\mathrm{\Psi }_\beta (x^{})𝑑x^{}$$ (8) can be calculated analytically and Eq. (2) reads together with Eq. (3) $`\mathrm{\Psi }_\beta ^{HK}(x,t)=`$ $`\left(\frac{4\gamma \beta }{\alpha ^2}\right)^{\frac{1}{4}}\frac{1}{2\pi \hbar }\int 𝑑p\,𝑑q\,e^{iS_{qp}(t)/\hbar }`$ (9) $`\times \,C_{qp}(t)\,g_\gamma (x;q(t),p(t))\,f_{\gamma \beta }(q,p)`$ (10) with $`\alpha =\gamma +\beta `$. For all results presented here we have taken $`\gamma =\beta `$. For comparison with our semiclassical calculations we determined the quantum mechanical wave function using standard fast Fourier transform split-operator methods . ## III Above Threshold Ionization We start from Eq. (1) with $`\delta =0`$ and use a rectangular pulse shape $`f(t)`$ which lasts for $`4.25`$ optical cycles. This setting is very similar to the one used in . The energy spectrum of the electrons can be expressed by the Fourier transform of the autocorrelation function after the pulse, i.e. for times $`t>t_f`$, $$\sigma (\omega )=\mathrm{Re}\int _{t_f}^{\infty }e^{i\omega t}\langle \mathrm{\Psi }(t)|\mathrm{\Psi }_f\rangle 𝑑t,$$ (11) where $`\mathrm{\Psi }_f=\mathrm{\Psi }(t_f)`$ is the wave function after the pulse and correspondingly $$|\mathrm{\Psi }(t)\rangle =e^{-iH_0(t-t_f)/\hbar }|\mathrm{\Psi }_f\rangle $$ (12) is calculated by propagating $`\mathrm{\Psi }_f`$ for some time with the atomic Hamiltonian $`H_0`$ only, after the laser has been switched off. 
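For a free particle ($`V=0`$) the HK propagator is known to be exact, and all ingredients of Eqs. (3)–(10) are analytic: $`q_t=q+pt`$, $`p_t=p`$, $`S=p^2t/2`$, and the monodromy elements are $`𝐐_𝐪=𝐏_𝐩=1`$, $`𝐐_𝐩=t`$, $`𝐏_𝐪=0`$. The following sketch (our consistency check, not code from the paper; units $`\hbar =m=1`$, $`\gamma =\beta =1`$) evaluates the phase-space integral of Eq. (9)–(10) on a grid and compares it with the analytically spreading Gaussian:

```python
import cmath
import math

gamma = beta = 1.0      # HK width parameter = initial wavepacket width
q_beta = 0.0            # wavepacket initially at rest at the origin
t = 1.0                 # propagation time

def g(x, q, p):
    """Coherent state g_gamma(x; q, p), Eq. (5), in 1D with hbar = 1."""
    return (gamma / math.pi) ** 0.25 * cmath.exp(
        -0.5 * gamma * (x - q) ** 2 + 1j * p * (x - q))

def f_overlap(q, p):
    """Analytic overlap f_{gamma beta}(q, p), Eq. (8), for gamma = beta."""
    return cmath.exp(-gamma * (q - q_beta) ** 2 / 4 - p ** 2 / (4 * gamma)
                     + 0.5j * p * (q - q_beta))

# HK prefactor, Eq. (6): |(Q_q + P_p - i*gamma*Q_p - P_q/(i*gamma))/2|^(1/2)
C = cmath.sqrt(0.5 * (2.0 - 1j * gamma * t))

def psi_hk(x, h=0.05, L=7.0):
    """Brute-force (q, p) quadrature of Eqs. (9)-(10)."""
    n = int(2 * L / h) + 1
    total = 0j
    for i in range(n):
        q = -L + i * h
        for j in range(n):
            p = -L + j * h
            total += (cmath.exp(0.5j * p * p * t) * C
                      * g(x, q + p * t, p) * f_overlap(q, p))
    return total * h * h / (2 * math.pi)

def psi_exact(x):
    """Analytic free-particle evolution of the Gaussian of Eq. (7)."""
    return ((beta / math.pi) ** 0.25 / cmath.sqrt(1 + 1j * beta * t)
            * cmath.exp(-beta * (x - q_beta) ** 2 / (2 * (1 + 1j * beta * t))))

errs = []
for x in (0.0, 1.0, 2.0):
    err = abs(psi_hk(x) - psi_exact(x))
    errs.append(err)
    print(x, err)        # small deviations, limited only by the quadrature
```

For an anharmonic potential such as Eq. (13) the trajectories, action and monodromy matrix must of course be integrated numerically along with each trajectory, which is where the $`10^7`$-trajectory effort quoted above comes from.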
### A Quantum mechanical and semiclassical spectra for ATI We will present results for two types of potentials to elucidate the dependence of the semiclassical approximation on the form of the potential. #### 1 Softcore potential First we apply the widely used softcore potential $$V(x)=-\frac{1}{\sqrt{x^2+a}}$$ (13) with $`a=1`$ and with an ionization potential $`I_p=0.670`$ a.u.. We have checked that the correlation function differs little if calculated with the exact ground state or with the ground state wave function approximated by the Gaussian of Eq. (7) where $`\beta =0.431`$ a.u. and $`q_\beta =0`$. However, the semiclassical calculation is considerably simplified with a Gaussian as initial state, as can be seen from Eqs. (7-10). Therefore we use this initial state and obtain the propagated semiclassical wavefunction in the closed form Eq. (10). In Fig. 1 the quantum and semiclassical results at a frequency $`\omega _0=0.148`$ a.u. and a field strength $`E_0=0.15`$ a.u. are compared. The Keldysh parameter has the value $`1.14`$. The quantum mechanical calculation (dotted line) shows a typical ATI spectrum. Intensity maxima with a separation in energy of $`\hbar \omega _0`$ are clearly visible. The first maximum has the highest intensity while the second maximum is suppressed. The semiclassical result (solid line) is ambiguous: On the one hand there are clear ATI maxima with a separation of $`\hbar \omega _0`$. All peaks but the first one have roughly the correct magnitude. Again the second maximum is missing. On the other hand we see a constant shift (about $`0.02`$ a.u.) of the spectrum towards higher energies. Therefore, a quantitative semiclassical description is impossible, at least with the present parameters and the softcore potential. Next, we will clarify whether the shift in the spectrum is an inherent problem of a semiclassical ATI calculation or whether it can be attributed to properties of the softcore potential. 
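The quoted Keldysh parameter can be reproduced from the standard definition $`\gamma _K=\omega _0\sqrt{2I_p}/E_0`$ (our assumption; the paper does not spell the formula out), together with the ponderomotive potential $`U_p=E_0^2/4\omega _0^2`$ used later in the text. A minimal sketch in atomic units:

```python
import math

def keldysh(omega0, ip, e0):
    """Standard Keldysh parameter gamma_K = omega0 * sqrt(2*Ip) / E0 (a.u.)."""
    return omega0 * math.sqrt(2 * ip) / e0

def ponderomotive(omega0, e0):
    """Ponderomotive potential U_p = E0^2 / (4 * omega0^2) (a.u.)."""
    return e0 ** 2 / (4 * omega0 ** 2)

# Softcore-potential laser parameters quoted in the text:
print(round(keldysh(0.148, 0.670, 0.15), 2))    # 1.14, as quoted
print(round(ponderomotive(0.148, 0.15), 3))     # 0.257
```

A value of order one places these parameters between the tunnelling ($`\gamma _K\ll 1`$) and multiphoton ($`\gamma _K\gg 1`$) regimes, as stated in the next subsection.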
#### 2 Gaussian potential To this end we take a potential which has been used to model the “single bound state” situation mentioned in the introduction . It is of Gaussian form $$V(x)=-V_0\mathrm{exp}\left(-\sigma x^2\right).$$ (14) With our choice of parameters $`V_0=0.6`$ a.u. and $`\sigma =0.025`$ a.u., the potential contains six bound states and can be approximated, at least in its lower energy part, by a harmonic potential, for which semiclassical calculations are exact. Hence, the semiclassical ATI spectrum with this potential should be more accurate if the discrepancies in Fig. 1 are due to the potential and not due to the laser interaction. The ground state wave function itself is again well approximated by the Gaussian Eq. (7) with $`\beta =0.154`$ a.u. and $`q_\beta =0`$. The laser has a frequency $`\omega _0=0.09`$ a.u., a field strength $`E_0=0.049`$ a.u., and a pulse duration of $`4.25`$ cycles. The Keldysh parameter has the value $`1.87`$. We obtain a quantum mechanical ATI spectrum (dotted line in Fig. 2) with six distinct maxima. The semiclassical spectrum (solid line) is not shifted; the location of the maxima agrees with quantum mechanics. Hence, one can conclude that the softcore potential is responsible for the shift. The height of the third maximum is clearly underestimated and the details of the spectrum are exaggerated by the semiclassical calculation. Apart from these deviations the agreement is good enough to use this type of calculation as a basis for a semiclassical understanding of ATI. ### B Semiclassical interpretation of the ATI spectrum #### 1 Classification and coherence of trajectories With the chosen parameters most of the trajectories ionize during the pulse ($`92\%`$). We consider a trajectory as ionized if the energy of the atom $$\epsilon (t)=p(t)^2/2+V(q(t))$$ (15) becomes positive at some time $`t_n`$ and remains positive, i.e. $`\epsilon (t)>0`$ for $`t>t_n`$. 
Typically, the trajectories ionize around an extremum of the laser field. Tunnelling cannot be very important, otherwise the agreement between quantum mechanics and semiclassics would be much worse. The Keldysh parameter of $`1.87`$ suggests that we are in between the tunnelling and the multiphoton regime. Interestingly, the semiclassical description is successful although we are well below the energies of the classically allowed over-the-barrier regime. An obvious criterion for the classification of the trajectories is the time interval of the laser cycle into which their individual ionization time $`t_n`$ falls, see Fig. 3. Typically, ionization of a trajectory happens around $`t_n=(2n-1)T/4`$, when the force induced by the laser reaches a maximum. Hence, the ionized trajectories can be attached to time intervals $`I_n=[(n-1)T/2,nT/2]`$. In Fig. 3 we have plotted four trajectories from the intervals $`I_1`$ to $`I_4`$ which end up with an energy $`E=0.36`$ a.u.. After ionization each trajectory shows a quiver motion around a mean momentum $`p_f`$ . One can distinguish two groups of intervals, namely those with trajectories ionized with positive momentum $`p_f`$ (the intervals $`I_{2k-1}`$) and those with trajectories with negative $`p_f`$ (the intervals $`I_{2k}`$). These two groups contribute separately and incoherently to the energy spectrum, as one might expect since the electrons are easily distinguishable. One can see this directly from the definition Eq. (11) of the electron energy spectrum. 
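The ionization criterion of Eq. (15) is easy to apply to a classical ensemble. The sketch below (ours; the sampling of initial conditions from the Gaussian position and momentum densities of the initial state Eq. (7) is our simplification) integrates Hamilton's equations for the Gaussian potential of Eq. (14) over the 4.25-cycle pulse and counts trajectories with $`\epsilon >0`$ at the end. The paper reports that about 92% of the trajectories ionize; a crude sampling like this one should at least show that a substantial fraction does.

```python
import math
import random

V0, sig = 0.6, 0.025          # Gaussian potential, Eq. (14) (a.u.)
E0, w0 = 0.049, 0.09          # laser parameters (a.u.)
T = 2 * math.pi / w0
beta = 0.154                  # width of the initial Gaussian, Eq. (7)

def force(x, t):
    # -dV/dx - E0*sin(w0*t): binding force plus laser force (delta = 0)
    return -2 * V0 * sig * x * math.exp(-sig * x * x) - E0 * math.sin(w0 * t)

def energy(x, p):
    return 0.5 * p * p - V0 * math.exp(-sig * x * x)   # Eq. (15)

def propagate(x, p, t_end, dt=0.05):
    t = 0.0
    while t < t_end:                       # velocity-Verlet integration
        f1 = force(x, t)
        x += p * dt + 0.5 * f1 * dt * dt
        p += 0.5 * (f1 + force(x, t + dt)) * dt
        t += dt
    return x, p

random.seed(0)
n, ionized = 200, 0
for _ in range(n):
    # widths of |psi|^2: sigma_q = 1/sqrt(2*beta), sigma_p = sqrt(beta/2)
    x0 = random.gauss(0.0, 1.0 / math.sqrt(2 * beta))
    p0 = random.gauss(0.0, math.sqrt(beta / 2))
    x, p = propagate(x0, p0, 4.25 * T)
    if energy(x, p) > 0:
        ionized += 1
frac = ionized / n
print(frac)    # a substantial fraction ionizes (paper: ~92%)
```

Recording the time at which $`\epsilon (t)`$ first turns permanently positive for each ionized trajectory would reproduce the classification into the half-cycle intervals $`I_n`$ discussed above.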
For relatively high energies $`\hbar \omega `$ the (short-range) potential may be neglected in the Hamiltonian $`H_0`$ and we get $`\sigma (\omega )`$ $`=\mathrm{Re}\int _{t_f}^{\infty }e^{i\omega t}\langle \mathrm{\Psi }_f|e^{-iH_0(t-t_f)/\hbar }|\mathrm{\Psi }_f\rangle 𝑑t`$ (16) $`\approx \mathrm{Re}\int _0^{\infty }e^{i\omega t}\langle \mathrm{\Psi }_f|e^{-ip^2t/2\hbar }|\mathrm{\Psi }_f\rangle 𝑑t`$ (17) $`=\int _{-\infty }^{\infty }\delta \left(\omega -p^2/2\hbar \right)\left|\mathrm{\Psi }_f(p)\right|^2𝑑p`$ (18) $`=\left(\left|\mathrm{\Psi }_f(-\sqrt{2\hbar \omega })\right|^2+\left|\mathrm{\Psi }_f(\sqrt{2\hbar \omega })\right|^2\right)(\hbar /2\omega )^{1/2}`$ (19) $`\equiv \sigma _{-}(\omega )+\sigma _+(\omega ).`$ (20) Hence, to this approximation, the ATI spectrum is indeed given by the incoherent sum of two terms belonging to different signs of the momenta of electrons ionized in different time intervals, as described above. Figure 4(a) shows that Eq. (20) is a good approximation. Only for small $`\omega `$, where the kinetic energy is comparable with the (neglected) potential energy, do the spectra not agree. Quantum mechanically, all contributions from trajectories which lead to the same momentum $`p_f`$ of the electron are indistinguishable and must be summed coherently. To double-check that the interference from different intervals $`I_n`$ is responsible for the ATI peaks, we can artificially create a spectrum by an incoherent superposition $`\stackrel{~}{\sigma }_+=\sigma _2+\sigma _4+\sigma _6+\sigma _8`$ of contributions from trajectories ionized in the intervals $`I_{2j}`$. This artificially incoherent sum (Fig. 4(b)) resembles neither $`\sigma _+(\omega )`$ nor any kind of ATI spectrum. 
#### 2 Classical signature of bound and continuum motion in the laser field The great advantage of an ab initio semiclassical description lies in the possibility to make dynamical behavior transparent in terms of classical trajectories, particularly in the case of explicitly time dependent problems, where our intuition is not as well trained as in the case of conservative Hamiltonian systems. The classical quantities enter semiclassically mostly through the phase factor $$\mathrm{exp}\left(i[S_{qp}(t)-p(t)q(t)]/\hbar \right)\equiv \mathrm{exp}[i\mathrm{\Phi }/\hbar ]$$ (21) which each trajectory contributes to the wave function Eq. (10). Although the prefactor $`C_{qp}(t)`$ in Eq. (10) may be complex itself, the major contribution to the phase comes from the effective action $`\mathrm{\Phi }`$ in the exponent of Eq. (21). Figure 5 shows the energy $`\epsilon `$ of the atom and the accumulated phase $`\mathrm{\Phi }`$. One can recognize a clear distinction between a quasi-free oscillation in the laser field after the ionization and the quasi-bound motion in the potential. The latter is characterized by an almost constant averaged bound energy $`\epsilon (t)`$ (Fig. 5(a)) of the individual trajectory, giving rise to an averaged linear increase of the phase (Fig. 5(b)). After ionization the phase decreases linearly, with an oscillatory modulation superimposed by the laser field. The almost linear increase of $`\mathrm{\Phi }`$ without strong modulation by the laser field during the bound motion of the electron is remarkable, particularly in view of the laser induced modulations of the bound energy seen in Fig. 5(a). The averaged slope of the phase (positive for bound motion, negative for continuum motion) corresponds via $`d\mathrm{\Phi }/dt=-E`$ to an averaged energy. 
The behavior can be understood by a closer inspection of the action $`\mathrm{\Phi }(t)`$ $`\equiv `$ $`S_{qp}(t)-p(t)q(t)`$ (22) $`=`$ $`\int _0^t(2T-H-\dot{p}(\tau )q(\tau )-\dot{q}(\tau )p(\tau ))𝑑\tau -qp.`$ (23) Here, $`T=p^2(t)/2`$ refers to the kinetic energy and $`H`$ to the entire Hamiltonian of Eq. (1), the dot indicates a derivative with respect to time, and $`q\equiv q(t=0)`$. With the help of Hamilton’s equations and a little algebra $`\mathrm{\Phi }`$ from Eq. (22) can be simplified to $$\mathrm{\Phi }(t)=\int _0^t\left(-\epsilon (\tau )+q(\tau )\frac{dV}{dq}\right)𝑑\tau $$ (24) where $`\epsilon `$ is the atomic energy Eq. (15). With Eq. (24) we can quantitatively explain the slope of $`\mathrm{\Phi }`$ in Fig. 5(b). For the low energies considered the potential Eq. (14) can be approximated harmonically, $$V(q)\approx -V_0+V_0\sigma q^2.$$ (25) Averaging $`\mathrm{\Phi }`$ over some time then yields $`\mathrm{\Phi }(t)\approx V_0t`$, for any bound energy of a classical trajectory, since for an oscillator the averaged kinetic and potential energies are equal. Indeed, the numerical value for the positive slope in Fig. 5(b) is $`0.6`$ a.u., in agreement with the value for $`V_0`$. For the ionized part of the trajectories we may assume that the potential vanishes. The corresponding solution for the electron momentum $`p(t)`$ follows directly from Hamilton’s equation $`\dot{p}=-E_0\mathrm{sin}\omega _0t`$, $$p(t)=\frac{E_0}{\omega _0}\mathrm{cos}(\omega _0t)+p,$$ (26) where $`p`$ is the mean momentum. Without the potential the phase from Eq. (24) reduces to $`\mathrm{\Phi }(t)=-\int p^2(\tau )/2\,𝑑\tau `$ and we obtain with Eq. (26) $`\mathrm{\Phi }_c(t)`$ (27) $`=-\frac{U_p}{2\omega _0}\mathrm{sin}(2\omega _0t)-\frac{E_0p}{\omega _0^2}\mathrm{sin}\omega _0t-(U_p+p^2/2)t`$ (28) with the ponderomotive potential $`U_p=E_0^2/4\omega _0^2`$. We note in passing that Eq. (28) is identical to the time dependent phase in the Volkov state (see the appendix). 
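Equation (28) can be checked directly against a numerical quadrature of $`\mathrm{\Phi }_c=-\int p^2(\tau )/2\,𝑑\tau `$ with $`p(t)`$ from Eq. (26). The sketch below is our own check; the parameter values (field, frequency, drift momentum `pbar`) are merely illustrative:

```python
import math

E0, w0, pbar = 0.1, 0.0378, 0.3      # illustrative values (a.u.)
Up = E0 ** 2 / (4 * w0 ** 2)         # ponderomotive potential

def p_of_t(t):
    return E0 / w0 * math.cos(w0 * t) + pbar          # Eq. (26)

def phi_c_numeric(t, n=200_000):
    # midpoint rule for  Phi_c(t) = -int_0^t p(tau)^2 / 2 dtau
    dt = t / n
    return -sum(0.5 * p_of_t((i + 0.5) * dt) ** 2 for i in range(n)) * dt

def phi_c_analytic(t):                                # Eq. (28)
    return (-Up / (2 * w0) * math.sin(2 * w0 * t)
            - E0 * pbar / w0 ** 2 * math.sin(w0 * t)
            - (Up + pbar ** 2 / 2) * t)

t = 500.0
num, ana = phi_c_numeric(t), phi_c_analytic(t)
print(num, ana)    # the two values agree to quadrature accuracy
```

The secular term $`-(U_p+p^2/2)t`$ dominates the oscillatory pieces, which is exactly the negative averaged slope visible in Fig. 5(b) after ionization.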
#### 3 Semiclassical model for ATI The clear distinction between classical bound and continuum motion in the laser field, as demonstrated by Fig. 5 and illuminated in the last section, allows one to derive easily the peak positions of the ATI spectrum. Moreover, this distinction also supports the so-called strong field approximation (e.g. ) where the electron dynamics in the laser field is modelled by one bound state and the continuum. While this is postulated in  as an approximation and justified a posteriori by the results, the corresponding approximation is suggested in the present context of a semiclassical analysis by the full classical dynamics, i.e., the behavior of the trajectories, as shown in Fig. 5. There, we have seen that each classical bound motion leads to the characteristic linear increase of the phase. If the entire phase space corresponding to the initial (ground state) wave function is probed with many trajectories of different energy, the dominant contribution will appear at the bound state energy, which implies $$\mathrm{\Phi }_b(t)\approx I_pt,$$ (29) where $`I_p`$ is the ionization potential. The time for which a trajectory does not fall into one of the two classes, bound or continuum, is very short (Fig. 5). Hence, we can approximately compose the true phase as $`\mathrm{\Phi }=\mathrm{\Phi }_b+\mathrm{\Phi }_c`$. However, we don’t know for an electron with mean momentum $`p`$ when it was ionized. 
Hence, we have to sum over all trajectories with different ionization times $`\tau `$ but equal final momentum $`p=p_f`$, which leads to the propagated wavefunction $`\mathrm{\Psi }_f(t,p)`$ $`\propto \int _{t_0}^t𝑑\tau \,\mathrm{exp}[i/\hbar \,(\mathrm{\Phi }_b(\tau )+\mathrm{\Phi }_c(t)-\mathrm{\Phi }_c(\tau ))]`$ (30) $`\propto `$ $`\sum _{n,m}J_n\left(\frac{E_0p}{\hbar \omega _0^2}\right)J_m\left(\frac{U_p}{2\hbar \omega _0}\right)\int _{t_0}^t𝑑\tau \,e^{i\tau \mathrm{\Delta }_{mn}/\hbar },`$ (31) where the phase $`\mathrm{\Delta }_{mn}`$ is given by $$\mathrm{\Delta }_{mn}=I_p+U_p+p^2/2-(n+2m)\hbar \omega _0.$$ (32) From Eq. (32) and Eq. (30) it follows that ATI peaks appear at integer multiples $`n\hbar \omega _0`$ of the laser frequency, when $$\frac{p^2}{2}=n\hbar \omega _0-I_p-U_p.$$ (33) One can also see from Eq. (30) that the ATI maxima become sharper with each optical cycle that supplies ionizing trajectories. Of course, this effect is weakened by the spreading of the wavepacket hidden in the prefactor of each trajectory contribution (see Eq. (10)), which is not considered here. Trajectories that are ionized during different laser cycles accumulate a specific mean phase difference. The phase difference depends on the number $`k`$ of laser cycles passed between the two ionization processes: $$\mathrm{\Delta }\mathrm{\Phi }(p)=kT\left(I_p+\frac{p^2}{2}+U_p\right).$$ (34) The trajectories interfere constructively if $$\mathrm{\Delta }\mathrm{\Phi }(p)=2\pi l\hbar \mathrm{\hspace{0.33em}}\Leftrightarrow \mathrm{\hspace{0.33em}}\frac{1}{2}p^2=\frac{l}{k}\hbar \omega _0-I_p-U_p.$$ (35) If an energy spectrum is calculated exclusively with trajectories from two intervals separated by $`k`$ cycles, there should be additional maxima in the ATI spectrum with a distance $`\hbar \omega _0/k`$. 
As a test of this semiclassical interpretation of the ATI mechanism we have calculated three spectra with trajectories where the mean time delay between ionizing events is given by $`\mathrm{\Delta }t=T`$, $`\mathrm{\Delta }t=2T`$ and $`\mathrm{\Delta }t=3T`$. For the spectrum of Fig. 6 (a) we have used exclusively trajectories from the intervals $`I_2`$ and $`I_4`$ ($`\mathrm{\Delta }t=T`$). One can see broad maxima separated by $`\hbar \omega _0`$ in energy. Trajectories from the intervals $`I_2`$ and $`I_6`$ (see Fig. 6 (b)) form a spectrum where the maxima are separated by $`\hbar \omega _0/2`$ – as predicted for $`\mathrm{\Delta }t=2T`$. In analogy, the separation of the ATI maxima in a spectrum with trajectories from the intervals $`I_2`$ and $`I_8`$ is given by $`\hbar \omega _0/3`$ (Fig. 6 (c)). The interference of trajectories ionized in many subsequent cycles suppresses the non-integer maxima, according to Eq. (32). If the field strength is high enough the atom is completely ionized during the first cycle. The opportunity for interference is lost and we end up with an unstructured energy spectrum. In an extreme semiclassical approximation we would have evaluated the integral in Eq. (30) by stationary phase. The condition $$\frac{d}{d\tau }[\mathrm{\Phi }_b(\tau )-\mathrm{\Phi }_c(\tau )]=I_p+p^2(\tau )/2=0$$ (36) leads to complex ionization times $`t_n`$ whose real part is periodic and allows for two ionizing events per laser cycle, close to the extrema of the laser amplitude. The derivation is simple but technical, therefore we do not carry it out explicitly here. However, it explains the observation that ionization occurs close to the extrema of the laser field, and it also makes contact with the tunnelling process often referred to in the literature, since the complex time can be interpreted as tunnelling at a complex “transition” energy. 
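The $`\hbar \omega _0/k`$ spacing can be illustrated with a toy two-path interference calculation: two ionization events separated by $`k`$ laser cycles contribute amplitudes whose relative phase is $`\mathrm{\Delta }\mathrm{\Phi }(p)`$ of Eq. (34), so the spectrum is modulated by $`|1+e^{i\mathrm{\Delta }\mathrm{\Phi }/\hbar }|^2`$. The sketch below (ours; $`\hbar =1`$, and the value of $`I_p`$ is an assumed illustrative one, since the spacing of the maxima is independent of $`I_p`$ and $`U_p`$) locates the maxima numerically and checks their spacing:

```python
import math

w0, E0 = 0.09, 0.049                 # Gaussian-potential laser parameters
T = 2 * math.pi / w0
Up = E0 ** 2 / (4 * w0 ** 2)
Ip = 0.52                            # assumed ionization potential (illustrative)

def spectrum(E, k):
    """Two-path modulation |1 + exp(i*DeltaPhi)|^2, DeltaPhi from Eq. (34)."""
    dphi = k * T * (Ip + E + Up)
    return abs(1 + complex(math.cos(dphi), math.sin(dphi))) ** 2

def peak_spacing(k, emin=0.1, emax=0.6, de=1e-5):
    """Average distance in energy between local maxima of the modulation."""
    peaks = []
    prev, cur = spectrum(emin, k), spectrum(emin + de, k)
    e = emin + de
    while e < emax:
        nxt = spectrum(e + de, k)
        if cur >= prev and cur > nxt:
            peaks.append(e)
        prev, cur, e = cur, nxt, e + de
    return (peaks[-1] - peaks[0]) / (len(peaks) - 1)

spac = {k: peak_spacing(k) for k in (1, 2, 3)}
for k in (1, 2, 3):
    print(k, round(spac[k], 5), round(w0 / k, 5))   # spacings match w0/k
```

Summing over many equally spaced ionization bursts instead of just two would sharpen the integer maxima of Eq. (33) and suppress the fractional ones, as described in the text.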
Clearly, our semiclassical analysis as described here supports the picture which has been sketched in  to interpret a quantum calculation. The authors assume that wave packets are emitted every time the laser reaches an extremum. The interference of the different wave packets gives rise to the ATI peaks. In the following we will discuss the process of high harmonic generation (HHG), which is closely related to ATI. In fact, the separation into a bound and a continuum part of the electron description is constitutive for HHG as well; the prominent features, such as cutoff and peak locations, can be derived from the same phase properties, Eq. (30), as for ATI. However, there is a characteristic difference in how these phases enter. ## IV High Harmonic Generation First, we briefly recapitulate the findings of , where we have calculated the harmonic spectrum with the softcore potential Eq. (13). With our choice of $`a=2`$ the ionization potential is given by $`I_p=0.5`$ a.u.. The laser field has a strength $`E_0=0.1`$ a.u., a frequency $`\omega _0=0.0378`$ a.u. and a phase $`\delta =\pi /2`$. The initial wave packet with a width of $`\beta =0.05`$ a.u. is located at $`q_\beta =E_0/\omega _0^2=70`$ a.u.. Note that the cutoff energy $`E_C`$ in such a symmetric laser scattering experiment is given by $$E_C=I_p+2U_p.$$ (37) From the dipole acceleration (see Fig. 7) $$d(t)=\left\langle \mathrm{\Psi }(t)\left|\frac{dV(x)}{dx}\right|\mathrm{\Psi }(t)\right\rangle ,$$ (38) follows by Fourier transform $$\sigma (\omega )=\int d(t)\mathrm{exp}(i\omega t)𝑑t$$ (39) the harmonic power spectrum (see Fig. 8). Clearly, our semiclassical approach represents a good approximation. The dipole acceleration shows the characteristic feature that fast oscillations (which are responsible for the high harmonics in Fourier space) show up only after some time, here after $`t=T`$. This is the first time that trajectories are trapped. Trapping can only occur if (i) $`t_n=nT/2`$, (ii) the trajectories reach a turning point (i.e. 
$`p(t_n)=0`$), and (iii) if at this time the electron is close to the nucleus ($`q(t_n)\approx 0`$). The trapped trajectories constitute a partially bound state which can interfere with the main part of the wavepacket (trajectories) still bouncing back and forth across the nucleus, driven by the laser. The group of briefly bound (i.e. trapped or stranded) trajectories can be clearly identified, either by their small excursion in space (Fig. 9a) or by the positive slope of their action (Fig. 9b), as was the case for ATI (compare with Fig. 5). By artificially discarding those initial conditions in the semiclassical propagator which lead to trapped trajectories, one can convincingly demonstrate that the plateau in HHG is a simple interference effect . Here, we are interested firstly in linking ATI to HHG by using the same separation into bound and continuum parts of the dynamics already worked out for ATI. Secondly, we want to go one step further and construct a wavefunction based on this principle. Semiclassically, we have to look first at the phases of the observable. Therefore, we define a linear combination for the wavefunction from the respective phase factors for bound and continuum motion. Keeping only terms in the exponent, the harmonic spectrum Eq. (39) reads simply $`\sigma (\omega )\propto {\displaystyle \int dt\,\mathrm{exp}(i\omega t)}`$ (40) $`\left|\mathrm{exp}\left(i\mathrm{\Phi }_c(t)/\hbar \right)+c\,\mathrm{exp}\left(i\mathrm{\Phi }_b(t)/\hbar \right)\right|^2,`$ (41) where $`c\ne 0`$ is a (so far) arbitrary constant. In principle $`c=c(t)`$; however, its change in time is much slower than the optical oscillations of the phases $`\mathrm{\Phi }(t)`$, hence we may approximate $`c`$ by a constant. The bound and continuum phases, $`\mathrm{\Phi }_b`$ and $`\mathrm{\Phi }_c`$, are defined in Eq. (29) and Eq. (28), respectively. 
For $`\mathrm{\Phi }_c`$ we have $`p=0`$, since this is the dominant contribution from the center of the wavepacket which was initially at rest. The result is shown in Fig. 10. Indeed, the plateau with the harmonics is generated; however, the initial exponential decrease is missing since we have neglected all prefactors of the semiclassical wavefunction which describe the dispersion of the wavepacket. Consequently, one can evaluate Eq. (40) in stationary phase approximation. The integrand of Eq. (40) becomes stationary if $$\frac{d}{dt}\left[\hbar \omega t\pm \left(\mathrm{\Phi }_b(t)-\mathrm{\Phi }_c(t)\right)\right]=0,$$ (42) which happens at $$\hbar \omega =2U_p\mathrm{sin}^2(\omega _0t)+I_p.$$ (43) From Eq. (43) we conclude the cut-off law $$\hbar \omega _{\text{max}}=2U_p+I_p,$$ (44) as expected for laser-assisted electron-ion scattering . Using the same expansion into Bessel functions as in Eq. (30) we obtain for the spectrum Eq. (40): $`{\displaystyle \int dt\,\mathrm{exp}\left(\frac{i}{\hbar }\left[\left(\hbar \omega -U_p-I_p\right)t+\frac{U_p}{2\omega _0}\mathrm{sin}\left(2\omega _0t\right)\right]\right)}`$ (45) $`={\displaystyle \sum _{k=-\infty }^{\infty }}{\displaystyle \int dt\,e^{it\left(\hbar \omega -U_p-I_p+2k\hbar \omega _0\right)/\hbar }\,\text{J}_k\left(\frac{U_p}{2\hbar \omega _0}\right)}.`$ (46) Therefore, we see maxima in the harmonic spectrum for $$\hbar \omega _k=U_p+I_p-2k\hbar \omega _0.$$ (47) We can go one step further and construct a full time-dependent wavefunction from this semiclassical approximation, namely $$\mathrm{\Psi }(x,t)=\mathrm{\Psi }_\beta ^{sc}(x,t)+c\,\mathrm{\Psi }_0(x)\mathrm{exp}(itI_p/\hbar ).$$ (48) Here, $`\mathrm{\Psi }_0(x)\mathrm{exp}(iI_pt/\hbar )`$ is the time dependent ground state wave function (without the laser field) and $`\mathrm{\Psi }_\beta ^{sc}(x,t)`$ is a (semiclassical) wavepacket in the laser field but without potential. 
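The side-band structure of Eqs. (45)–(47) can be checked numerically. The sketch below assumes the phase written in Eq. (45) (with $`\hbar =1`$ and the parameters of Sec. IV) and verifies that the spectrum is large at the predicted lines $`\hbar \omega _k=U_p+I_p-2k\hbar \omega _0`$ and strongly suppressed halfway between two lines:

```python
import numpy as np

# Numerical sketch of the Bessel side-band structure, assuming the phase of
# Eq. (45): (omega - Up - Ip) t + (Up / 2 w0) sin(2 w0 t), with hbar = 1.
E0, w0, Ip = 0.1, 0.0378, 0.5
Up = E0**2 / (4 * w0**2)

def intensity(omega, n_half_periods=200, pts_per=2000):
    """|int_0^T exp(i phase(t)) dt|^2 over an integer number of sin(2 w0 t) periods."""
    T = n_half_periods * np.pi / w0
    t = np.linspace(0.0, T, n_half_periods * pts_per + 1)
    f = np.exp(1j * ((omega - Up - Ip) * t + (Up / (2 * w0)) * np.sin(2 * w0 * t)))
    # composite trapezoidal rule (written out to avoid version-dependent helpers)
    integral = (f.sum() - 0.5 * (f[0] + f[-1])) * (t[1] - t[0])
    return np.abs(integral) ** 2

on_peak = intensity(Up + Ip)              # k = 0 line of Eq. (47)
line_k1 = intensity(Up + Ip - 2 * w0)     # k = 1 line of Eq. (47)
off_peak = intensity(Up + Ip + w0)        # halfway between two lines
```

On a line, the integral grows linearly with the integration time (its weight is the corresponding Bessel function), while exactly between two lines it vanishes over each full period, which is the suppression expressed by Eq. (46).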
Calculating the dipole acceleration and the resulting harmonic spectrum with this wavefunction leads to a remarkably good approximation of the true quantum spectrum (compare Fig. 8 with Fig. 11). The dispersion of the wavepacket leads to the lower plateau compared to Fig. 10. ## V Conclusions ### A Semiclassical comparison between ATI and HHG Clearly, the main structure, such as the plateau and cutoff (HHG) and the occurrence of peaks and their separation in energy (ATI and HHG), is a property of the difference of the classical time-dependent actions $`\mathrm{\Phi }_b(t)-\mathrm{\Phi }_c(t)`$ alone. However, the HHG power spectrum Eq. (39) is an integral over the whole time for which the electron wavepacket is exposed to the laser field. In contrast, the ATI spectrum is obtained in the long-time limit $`t\to \infty `$ after the laser has been switched off. This difference may explain why the HHG results tend to be semiclassically better than the ATI results: any semiclassical approximation (which is not exact) becomes worse for large times. A second point refers to the fact that the characteristic phase difference $`\mathrm{\Phi }_b(t)-\mathrm{\Phi }_c(t)`$ appears already in the wavefunction Eq. (30) for ATI, while for HHG it occurs only in the expectation value Eq. (38). However, this difference is artificial, since the expectation value, or better its Fourier transform, the power spectrum, is not the observable of higher harmonic radiation. The correct expression is the dipole-dipole correlation function $`R`$, which can be approximated as $`R\propto |\sigma (\omega )|^2`$ under single atom conditions or in the case of an ensemble of independent atoms which radiate . Hence, in both cases, ATI and HHG, the peak structure appears already on the level of the quantum amplitude (or wavefunction) and is amplified in the true observable. ### B Summary We have given a time-dependent fully semiclassical description of multiphoton processes. 
The prominent ATI and HHG features emerge naturally from properties of the classical trajectories whose contributions to the semiclassical wavefunction interfere. Any effect of this semiclassical interference can be double-checked by disregarding the phases, which leads (with the same trajectories) to a classical observable. As we have seen, to a good approximation the classical action for an individual trajectory can be composed of one part $`\mathrm{\Phi }_b`$ for the time the electron is bound (disregarding the laser field) and of another part $`\mathrm{\Phi }_c`$ for the time the electron is in the continuum (disregarding the atomic potential). The relevant phase difference $`\mathrm{\Phi }_b-\mathrm{\Phi }_c`$ leads, in both cases, ATI and HHG, to the prominent harmonic structures in terms of the laser energy $`\hbar \omega _0`$. Finally, we have been able to construct a simple wavefunction for higher harmonics generated in laser assisted scattering. Its key element is an explicitly time-dependent wavepacket of the electron under the influence of the laser field. Starting from an initial Gaussian distribution localized in space, the wavepacket disperses in time, providing the correct decrease of the intensity of the lower harmonics and in turn the correct height of the plateau. Financial support from the DFG under the Gerhard Hess-Programm and the SFB 276 is gratefully acknowledged. ## Appendix We want to calculate the semiclassical wave function of a free particle in a laser field according to Eq. (10). 
A particle in a laser field $`V_L(x,t)=E_0x\mathrm{sin}(\omega t)`$ moves with $`p(t)=p+{\displaystyle \frac{E_0}{\omega }}\mathrm{cos}(\omega t)\equiv p+\stackrel{~}{p}(t)`$ (49) $`q(t)=q+pt+{\displaystyle \frac{E_0}{\omega ^2}}\mathrm{sin}(\omega t)\equiv q+pt+\stackrel{~}{q}(t).`$ (50) The weight factor $`C_{qp}(t)`$ is given by $$C_{qp}(t)=\left(1-\frac{i\hbar \gamma }{2}t\right)^{\frac{1}{2}}.$$ (51) For the phase factor $`S_{qp}(t)-p(t)q(t)`$ we get: $`S_{qp}(t)-p(t)q(t)=`$ $`-{\displaystyle \frac{U_p}{2\omega }}\mathrm{sin}(2\omega t)-U_pt`$ (52) $`-{\displaystyle \frac{p^2}{2}}t-\stackrel{~}{q}(t)p-qp.`$ (53) Evaluating Eq. (10) with the stationary phase approximation, which is exact for quadratic potentials, leads to the condition that $`f(q,p)=`$ $`{\displaystyle \frac{i}{\hbar }}\left(xp(t)-{\displaystyle \frac{p^2}{2}}t-\stackrel{~}{q}(t)p-{\displaystyle \frac{\gamma }{\alpha }}qp-{\displaystyle \frac{\beta }{\alpha }}q_\beta p\right)`$ (54) $`-{\displaystyle \frac{\gamma }{2}}\left(x-q(t)\right)^2-{\displaystyle \frac{\gamma \beta }{2\alpha }}\left(q-q_\beta \right)^2-{\displaystyle \frac{1}{2\hbar ^2\alpha }}p^2`$ (55) must have an extremum. 
With $`{\displaystyle \frac{\partial f}{\partial q}}=0=\gamma \left[x-q(t)\right]-{\displaystyle \frac{\gamma \beta }{\alpha }}(q-q_\beta )-{\displaystyle \frac{i}{\hbar }}{\displaystyle \frac{\gamma }{\alpha }}p`$ (56) $`{\displaystyle \frac{\partial f}{\partial p}}=0=\gamma \left[x-q(t)\right]t-{\displaystyle \frac{1}{\hbar ^2\alpha }}p`$ (57) $`+{\displaystyle \frac{i}{\hbar }}\left(x-pt-\stackrel{~}{q}(t)-{\displaystyle \frac{\gamma }{\alpha }}q-{\displaystyle \frac{\beta }{\alpha }}q_\beta \right)`$ (58) we find $`q_s={\displaystyle \frac{x-\stackrel{~}{q}(t)+i\hbar \beta tq_\beta }{1+i\hbar \beta t}}`$ (59) $`p_s={\displaystyle \frac{i\hbar \beta }{1+i\hbar \beta t}}\left(x-\stackrel{~}{q}(t)-q_\beta \right).`$ (60) After some algebra we arrive at the stationary exponent $`f(q_s,p_s)`$ $`={\displaystyle \frac{i}{\hbar }}x\stackrel{~}{p}(t)-{\displaystyle \frac{\beta }{2\left(1+i\hbar \beta t\right)}}\left(x-\stackrel{~}{q}(t)-q_\beta \right)^2`$ (61) $`={\displaystyle \frac{i}{\hbar }}x\stackrel{~}{p}(t)+{\displaystyle \frac{i}{\hbar }}{\displaystyle \frac{\hbar ^2\beta ^2t}{2\sigma (t)}}\left(x-\stackrel{~}{q}(t)-q_\beta \right)^2`$ (62) $`-{\displaystyle \frac{\beta }{2\sigma (t)}}\left(x-\stackrel{~}{q}(t)-q_\beta \right)^2,`$ (63) where $`\sigma (t)`$ is given by $$\sigma (t)=1+\beta ^2\hbar ^2t^2.$$ (64) The determinant of the second derivatives of $`f`$ still has to be calculated. 
With $`{\displaystyle \frac{\partial ^2f}{\partial q^2}}=-{\displaystyle \frac{\gamma ^2+2\gamma \beta }{\alpha }}`$, $`{\displaystyle \frac{\partial ^2f}{\partial p^2}}=-{\displaystyle \frac{i}{\hbar }}t-\gamma t^2-{\displaystyle \frac{1}{\hbar ^2\alpha }}`$ (65) $`{\displaystyle \frac{\partial ^2f}{\partial q\partial p}}=-{\displaystyle \frac{i}{\hbar }}{\displaystyle \frac{\gamma }{\alpha }}-\gamma t`$ (66) we get $$det\left(\begin{array}{cc}\frac{\partial ^2f}{\partial q^2}& \frac{\partial ^2f}{\partial q\partial p}\\ \frac{\partial ^2f}{\partial p\partial q}& \frac{\partial ^2f}{\partial p^2}\end{array}\right)=\frac{2\gamma }{\hbar ^2\alpha }\left(\left[1-i\gamma \hbar t/2\right]\left[1+i\beta \hbar t\right]\right).$$ (67) The factor $`\gamma `$ cancels, as it should, and we are left with $`\mathrm{\Psi }_\beta ^{sc}(x,t)=`$ $`\left({\displaystyle \frac{\beta }{\pi }}\right)^{1/4}\sqrt{{\displaystyle \frac{1}{1+i\hbar \beta t}}}`$ (68) $`\times \mathrm{exp}\left({\displaystyle \frac{i}{\hbar }}\left[\stackrel{~}{p}(t)x-{\displaystyle \frac{U_p}{2\omega }}\mathrm{sin}(2\omega t)-U_pt\right]\right)`$ (69) $`\times \mathrm{exp}\left({\displaystyle \frac{i}{\hbar }}{\displaystyle \frac{\hbar ^2\beta ^2}{2\sigma (t)}}\left(x-\stackrel{~}{q}(t)-q_\beta \right)^2t\right)`$ (70) $`\times \mathrm{exp}\left(-{\displaystyle \frac{\beta }{2\sigma (t)}}\left(x-\stackrel{~}{q}(t)-q_\beta \right)^2\right).`$ (71) This semiclassical time dependent wavepacket is quantum mechanically exact and corresponds to a superposition of Volkov solutions according to a Gaussian distribution at time $`t=0`$ . The fact that the semiclassical wavefunction is exact is a direct consequence of the Ehrenfest theorem, which implies that interactions $`V\propto x^n`$, $`n=0,1,2`$, have quantum mechanically exact semiclassical solutions.
# Stochastic models of exotic transport ## 1 Prerequisites A simple Brownian motion, in its canonical (model) manifestations, does not seem to hide any surprises. Our main goal in the present paper is to reconsider those ingredients of the standard formalism which, when relaxed or slightly modified, lead to conceptually new and possibly exotic (in the sense of being non-typical) features. One possible "defect" of the standard theory is that the Brownian motion is incapable of originating spatiotemporal patterns (structures) and rather washes them out, even if initially in existence. In this particular context, "active" Brownian particles were introduced, -. Most generally, they are supposed to remain in a feedback relationship with the environment (heat reservoir, thermostat, random medium, deterministic driving system or whatever else) through which they propagate. If regarded as a thermal bath, the environment is basically out of equilibrium. Obviously, a concrete meaning of being active relies on a specific choice of the model for the thermostat (deterministic or random, linear or nonlinear etc.) and the detailed particle-medium coupling mechanism. Another "defective" feature of the standard theory is rooted in its possible molecular (chaos) foundations, where a major simplification is needed while passing from the Boltzmann collision theory to the novel kinetic framework set by the Kramers equation, cf. . In the process, microscopic (molecular) energy-momentum conservation laws completely evaporate from the formalism. Then, there is no obvious way to reconcile random exchanges of energy and momentum between the Brownian particle and the thermostat with the manifest isothermality requirement. The issue of heat production/removal and the resulting thermal inhomogeneities is normally ignored (cf. however - for a discussion of various thermostat models, and the notion of thermostatting in non-equilibrium dynamics). 
That was the starting point of our discussion of the origins of isothermal flows and the ultimate usage of the third Newton law in the mean to justify the concept of the Brownian motion with a recoil (via a feedback relationship with the bath), . In conformity with the metaphor, : "everything depends on everything else", we relax the habitual sharp distinction between an entirely passive particle (nonetheless performing a continual erratic motion) and its exclusively active (perpetually at rest and in equilibrium) driving agency - the thermostat. In the present paper we take the following point of view on the major signatures of particle activity while moving at random due to external (environmental) reasons: any action upon the particle exerted by the environment induces (on suitable space and time scales) a compensating reaction in the carrier random medium. In the loose terms of Refs. - we attribute to any (even single) Brownian particle an ability to generate perturbations of the medium (named a selfconsistent field in Ref. and perturbations of noise in Ref. ) which in turn influence its further movement. That is the non-linear feedback relationship mentioned before: while inducing a random dynamics of a particle (action), the medium suffers alterations (reaction), which modify the next stages of the particle motion (feedback). Such features are obviously alien to the standard formalism. There is no experimentally reliable way to watch Brownian particles individually on time scales below, say, $`\frac{1}{100}`$ s, nor to get any insight into the associated fine-detailed particle-bath interaction phenomena. The only realistic procedure to quantify observable features of the Brownian motion is to exploit its familiar Janus face (cf. ). Namely, the same mathematical formalism essentially applies to a single particle and to a statistical ensemble of identical non-interacting Brownian particles. 
Then a hydrodynamical analogy can be exploited on both levels of description, , once we invoke a standard kinetic reasoning (used in passing from the Boltzmann equation to gas or liquid dynamics equations). Effectively, we need to evaluate (conditional) averages over a statistical ensemble of particles, which (in the loose terminology that follows Ref. , see also ) constitutes one component (a noninteracting Brownian "gas") of a coupled two-component continuous system. The other component is the thermostat. What the ultimate kinetic features of the Brownian motion (diffusion process) are relies critically on the thermostat model, i.e. the specific mechanisms of the energy/heat exchange due to the coupling between the thermostat and the Brownian "gas". This particular issue we shall investigate in some detail below, by resorting to somewhat nontypical (exotic in the Brownian motion context) methods. ## 2 Local conservation laws for the Brownian motion and Smoluchowski-type diffusion It is useful to exploit a standard phase-space argument that is valid, under isothermal conditions, for a Markovian diffusion process taking place in (or relative to) a driving flow $`\vec{w}(\vec{x},t)`$ with as yet unspecified dynamics and no concrete physical origin. (In particular, such a flow can receive an interpretation of a selfconsistent field generated in the environment by Brownian particles themselves, cf. Ref. .) We account for an explicit external force (here, acceleration $`\vec{K}=\vec{F}/m`$) exerted upon the diffusing particles, while not directly affecting the driving flow itself. 
Then, infinitesimal increments of the phase-space random variables read: $$d\vec{X}(t)=\vec{V}(t)dt$$ (1) $$d\vec{V}(t)=\beta [\vec{w}(\vec{x},t)-\vec{V}(t)]dt+\vec{K}(\vec{x})dt+\beta \sqrt{2D}d\vec{W}(t).$$ (2) Following the leading idea of the Smoluchowski approximation, we assume that $`\beta `$ is large, and consider the process on time scales significantly exceeding $`\beta ^{-1}`$. Then, an appropriate choice of the velocity field $`\vec{w}(\vec{x},t)`$ may in principle guarantee the convergence of the spatial part $`\vec{X}(t)`$ of the process to the Itô diffusion process with infinitesimal increments: $$d\vec{X}(t)=\vec{b}(\vec{x},t)dt+\sqrt{2D}d\vec{W}(t).$$ (3) In this case, the forward drift of the process reads $`\vec{b}(\vec{x},t)=\vec{w}(\vec{x},t)+\frac{1}{\beta }\vec{K}(\vec{x})`$. Notice that the $`\beta ^{-1}\vec{K}`$ contribution can be safely ignored if we are interested in the dominant driving motion. Throughout the paper we are interested in Markovian diffusion processes, which propagate respectively the phase-space or configuration-space probability densities (weak solutions of stochastic differential equations are thus involved). In the configuration-space variant, we deal with a Markovian stochastic process whose probability density $`\rho (\vec{x},t)`$ evolves according to the standard Fokker-Planck equation $$\partial _t\rho =D\mathrm{\Delta }\rho -\vec{\nabla }(\vec{b}\rho )$$ (4) and the forward drift is not necessarily of the form dictated by Eqs. (1) and (2). We admit here more general forward drift functions, cf. Ref. , which do not allow for a simple additive decomposition and are thus capable of accounting for nonlinearities in the particle-bath coupling. 
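A minimal numerical illustration of the Smoluchowski limit of Eqs. (1)–(2): with $`\vec{w}=0`$ and $`\vec{K}=0`$, an Euler–Maruyama integration (illustrative parameters, not taken from the text) reproduces the large-friction spatial variance $`2Dt-3D/\beta `$ of the phase-space theory.

```python
import numpy as np

# Euler-Maruyama sketch of Eqs. (1)-(2) with w = 0 and K = 0.
# Parameters below are illustrative; only beta*dt << 1 matters.
rng = np.random.default_rng(0)
beta, D = 50.0, 0.5
dt, n_steps, n_traj = 1e-3, 2000, 20000     # total time T = 2.0

X = np.zeros(n_traj)
V = np.zeros(n_traj)
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_traj)
    X += V * dt                              # Eq. (1)
    V += -beta * V * dt + beta * np.sqrt(2 * D) * dW   # Eq. (2)

T_tot = n_steps * dt
var_exact = 2 * D * T_tot - 3 * D / beta    # large-time phase-space result
var_sim = X.var()
```

The residual $`-3D/\beta `$ term is a small friction-scale correction; on time scales $`t\gg \beta ^{-1}`$ the motion is indistinguishable from the diffusion process of Eq. (3).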
One can easily transform the Fokker-Planck equation into the familiar form of the continuity equation (the hydrodynamic mass conservation law) $`\partial _t\rho =-\vec{\nabla }(\vec{v}\rho )`$ by defining $`\vec{v}=\vec{b}-D\frac{\vec{\nabla }\rho }{\rho }`$. The current velocity $`\vec{v}`$ obeys a local (momentum per unit of mass) conservation law which directly originates from the rules of the Itô calculus for Markovian diffusion processes, and from the first moment equation in the diffusion approximation of the Kramers theory, : $$\partial _t\vec{v}+(\vec{v}\vec{\nabla })\vec{v}=\vec{\nabla }(\mathrm{\Omega }-Q).$$ (5) While looking similar to the standard Euler equation appropriate for the lowest order hydrodynamical description of gases and liquids, this equation conveys an entirely different physical message. First of all, for the class of forward drifts that are gradient fields, the most general admissible form of the auxiliary potential $`\mathrm{\Omega }(\vec{x},t)`$ reads: $$\mathrm{\Omega }(\vec{x},t)=2D\left[\partial _t\varphi +\frac{1}{2}\left(\frac{\vec{b}^2}{2D}+\vec{\nabla }\vec{b}\right)\right].$$ (6) Here $`\vec{b}(\vec{x},t)=2D\vec{\nabla }\varphi (\vec{x},t)`$. In reverse, by choosing a bounded-from-below continuous function to represent conservative force fields (after taking the gradient), i.e. an otherwise arbitrary $`\mathrm{\Omega }`$, we can always disentangle the above (Riccati-type) identity with respect to the drift field. Moreover, instead of the standard pressure term (consider a state equation $`P\sim \rho ^\alpha ,\alpha >0`$), there appears a contribution from the more complicated (derivative-dependent!) $`\rho `$-dependent potential $`Q(\vec{x},t)`$. 
It is given in terms of the so-called osmotic velocity field $`\vec{u}(\vec{x},t)=D\vec{\nabla }\mathrm{ln}\rho (\vec{x},t)`$: $$Q(\vec{x},t)=\frac{1}{2}\vec{u}^2+D\vec{\nabla }\vec{u}.$$ (7) An equivalent form of the enthalpy-related potential $`Q`$ is $`Q=2D^2\frac{\mathrm{\Delta }\rho ^{1/2}}{\rho ^{1/2}}`$. A general expression for the local diffusion current is $`\vec{j}=\rho \vec{v}=\rho (\vec{b}-D\frac{\vec{\nabla }\rho }{\rho })`$. This local flow may in principle be experimentally observed for a cloud of suspended particles in a liquid. The current $`\vec{j}`$ is nonzero in non-equilibrium situations, and a non-negligible matter transport occurs as a consequence of the Brownian motion, on the ensemble average. We thus cannot avoid local heating/cooling phenomena that push the environment out of equilibrium. That leads to obvious temperature inhomogeneities, which are normally disregarded, cf. Ref. . If the forward drift is interpreted as a gradient of a suitable function ($`\vec{b}=2D\vec{\nabla }\varphi =\vec{\nabla }\mathrm{\Phi }`$) and we take $`\mathrm{\Phi }(\vec{x},0)`$ as the initial data for the $`t\ge 0`$ evolution, then we have: $$\mathrm{\Omega }=\partial _t\mathrm{\Phi }+\frac{1}{2}|\vec{\nabla }\mathrm{\Phi }|^2+D\mathrm{\Delta }\mathrm{\Phi }.$$ (8) If we decide that the above Hamilton-Jacobi-type equation is to be solved with respect to the field $`\mathrm{\Phi }(\vec{x},t)`$, its solution (and the general solvability issue) relies on the choice of a bounded-from-below, continuous function $`\mathrm{\Omega }(\vec{x},t)`$, without any a priori knowledge of forward drifts. Viewed that way, Eq. (7) sets limitations on the admissible forms of the space-time dependence of any conceivable self-consistent field to be generated in the bath by Brownian particles (cf. Ref. ). 
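The equivalence of the two quoted forms of $`Q`$ can be confirmed symbolically for a generic positive one-dimensional density $`\rho (x)`$; a short SymPy sketch:

```python
import sympy as sp

# Check (one dimension, generic positive rho(x)) that
# Q = u^2/2 + D u' with u = D (ln rho)' equals 2 D^2 (sqrt rho)'' / sqrt(rho).
x = sp.symbols('x', real=True)
D = sp.symbols('D', positive=True)
rho = sp.Function('rho', positive=True)(x)

u = D * sp.diff(sp.log(rho), x)                       # osmotic velocity
Q_from_u = u**2 / 2 + D * sp.diff(u, x)               # Eq. (7)
Q_from_sqrt = 2 * D**2 * sp.diff(sp.sqrt(rho), x, 2) / sp.sqrt(rho)

difference = sp.simplify(Q_from_u - Q_from_sqrt)
```

Both forms reduce to $`D^2\rho ^{\prime \prime }/\rho -D^2\rho ^{\prime \,2}/2\rho ^2`$, so the difference vanishes identically.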
There is no freedom at all in postulating various partial differential equations, if their solutions are to be interpreted as forward drifts of Markovian diffusion processes, . It is also interesting to observe that the gradient-field ansatz for the diffusion current velocity, $`\vec{v}=\vec{\nabla }S`$, allows one to transform the momentum conservation law of a Markovian diffusion process to the universal Hamilton-Jacobi form: $$\mathrm{\Omega }=\partial _tS+\frac{1}{2}|\vec{\nabla }S|^2+Q$$ (9) where $`Q(\vec{x},t)`$ was defined before as the "pressure/enthalpy"-type function. (That form looks deceivingly similar to the standard hydrodynamic conservation law, valid for liquids and gases at thermal equilibrium, where instead of $`Q`$ an enthalpy function normally appears.) By performing the gradient operation we recover the previous hydrodynamical form of the law. In the above, the contribution due to $`Q`$ is a direct consequence of the initial probability measure choice for the diffusion process, while $`\mathrm{\Omega }`$ alone accounts for an appropriate forward drift of the process, playing at the same time the role of the volume force potential ($`\vec{\nabla }Q=\frac{\vec{\nabla }P}{\rho }`$ contributes to energy-momentum transfer effects through the boundaries of any volume). ## 3 Moment equations for the free Brownian motion: Getting out of conventions The derivation of a hierarchy of local conservation laws (moment equations) for the Kramers equation can be patterned after the standard procedure for the Boltzmann equation, . Those laws do not form a closed system, and additional specifications (like the familiar thermodynamic equation of state) are needed to that end. In the case of the isothermal Brownian motion, when considered in the large friction regime (e.g. the Smoluchowski diffusion approximation), it suffices to supplement the Fokker-Planck equation by only one more conservation law to arrive at a closed system. 
To give a deeper insight into what really happens on the way from the phase-space theory of the Brownian motion to its approximate configuration-space (Smoluchowski) version, let us consider the familiar Ornstein-Uhlenbeck process (in velocity/momentum) in its extended phase-space form. For clarity of discussion, we discuss random dynamics for one degree of freedom only. In the absence of external forces, the kinetic (Kramers-Fokker-Planck) equation reads: $$\partial _tW+u\partial _xW=\beta \partial _u(Wu)+q\mathrm{\Delta }_uW$$ (10) where $`q=D\beta ^2`$. Here $`\beta `$ is the friction coefficient, $`D`$ will be identified later with the spatial diffusion constant, and provisionally we set $`D=k_BT/m\beta `$ in conformity with the Einstein fluctuation-dissipation identity. The joint probability distribution (in fact, density) $`W(x,u,t)`$ for a freely moving Brownian particle which at $`t=0`$ initiates its motion at $`x_0`$ (below we set $`x_0=0`$) with an arbitrary initial velocity $`u_0`$ can be given in the form of the maximally symmetric displacement probability law: $$W(x,u,t)=W(R,S)=[4\pi ^2(FG-H^2)]^{-1/2}\mathrm{exp}\left\{-\frac{GR^2-2HRS+FS^2}{2(FG-H^2)}\right\}$$ (11) where $`R=x-u_0(1-e^{-\beta t})\beta ^{-1}`$, $`S=u-u_0e^{-\beta t}`$, while $`F=\frac{D}{\beta }(2\beta t-3+4e^{-\beta t}-e^{-2\beta t})`$, $`G=D\beta (1-e^{-2\beta t})`$ and $`H=D(1-e^{-\beta t})^2`$. The marginal probability densities, in the Smoluchowski regime (we take for granted that time scales $`\beta ^{-1}`$ and space scales $`(D\beta ^{-1})^{1/2}`$ are irrelevant), take the familiar forms of the Maxwell-Boltzmann distribution $`w(u,t)=(\frac{m}{2\pi k_BT})^{1/2}\mathrm{exp}(-\frac{mu^2}{2k_BT})`$ and of the diffusion kernel $`w(x,t)=(4\pi Dt)^{-1/2}\mathrm{exp}(-\frac{x^2}{4Dt})`$, respectively. 
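The variance function $`F(t)`$ quoted above interpolates between a short-time regime and the diffusive one: symbolically (a one-dimensional sketch), $`F\approx \frac{2}{3}D\beta ^2t^3`$ for $`t\ll \beta ^{-1}`$, while $`F-(2Dt-3D/\beta )\to 0`$ for $`t\gg \beta ^{-1}`$.

```python
import sympy as sp

# Short- and long-time behaviour of the positional variance F(t) of Eq. (11).
t, D, beta = sp.symbols('t D beta', positive=True)
F = (D / beta) * (2*beta*t - 3 + 4*sp.exp(-beta*t) - sp.exp(-2*beta*t))

short = sp.series(F, t, 0, 4).removeO()      # leading behaviour near t = 0
```

The constant, linear and quadratic terms of the expansion cancel exactly, leaving the $`t^3`$ growth; at long times only the exponentially small terms distinguish $`F`$ from $`2Dt-3D/\beta `$.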
A direct evaluation of the first and second local moments of the phase-space probability density: $$<u>=\int du\,uW(x,u,t)=w(R)[(H/F)R+u_0e^{-\beta t}]$$ (12) $$<u^2>=\int du\,u^2W(x,u,t)=\left(\frac{FG-H^2}{F}+\frac{H^2}{F^2}R^2\right)(2\pi F)^{-1/2}\mathrm{exp}\left(-\frac{R^2}{2F}\right)$$ (13) after passing to the diffusion (Smoluchowski) regime, , allows us to recover the local (configuration space conditioned) moment $`<u>_x=\frac{1}{w}<u>`$ which reads $$<u>_x=\frac{x}{2t}=-D\frac{\partial _xw(x,t)}{w(x,t)}$$ (14) while for the second local moment $`<u^2>_x=\frac{1}{w}<u^2>`$ we arrive at $$<u^2>_x=(D\beta -D/2t)+<u>_x^2.$$ (15) By inspection one verifies that the transport (Kramers) equation for $`W(x,u,t)`$ implies the local conservation laws: $$\partial _tw+\partial _x(<u>_xw)=0$$ (16) and $$\partial _t(<u>_xw)+\partial _x(<u^2>_xw)=-\beta <u>_xw.$$ (17) By introducing (we strictly follow the moment equations strategy of the traditional kinetic theory of gases and liquids) the notion of the pressure function $`P_{kin}`$ (we choose another notation to make a difference with the previous notion $`P`$ of the pressure function, cf. $`\vec{\nabla }Q=\frac{\vec{\nabla }P}{\rho }`$): $$P_{kin}(x,t)=(<u^2>_x-<u>_x^2)w(x,t)$$ (18) we can analyze the local momentum conservation law $$(\partial _t+<u>_x\partial _x)<u>_x=-\beta <u>_x-\frac{\partial _xP_{kin}}{w}.$$ (19) In the Smoluchowski regime the friction term is cancelled away by a counterterm coming from $`\frac{1}{w}\partial _xP_{kin}`$, so that $$(\partial _t+<u>_x\partial _x)<u>_x=\frac{D}{2t}\frac{\partial _xw}{w}=-\frac{\partial _xP}{w}=-\partial _xQ$$ (20) where $`P=D^2w\mathrm{\Delta }\mathrm{ln}w`$. For comparison with the notations (local conservation laws) of the previous section one needs to replace $`w(x,t)`$ by $`\rho (x,t)`$ and $`<u>_x`$ by $`v(x,t)`$ in all formulas that pertain to the Smoluchowski regime. Further exploiting the kinetic lore, we can say a few words about the temperature of the Brownian particles as opposed to the (possibly equilibrium) temperature of the thermal bath. 
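Both local conservation laws can be verified directly for the explicit Smoluchowski-regime data, with $`w(x,t)`$ the heat kernel and $`<u>_x=x/2t`$; a symbolic check:

```python
import sympy as sp

# Direct check of Eqs. (16) and (20) for the explicit fields
# w(x,t) = (4 pi D t)^(-1/2) exp(-x^2/4Dt) and <u>_x = x/(2t).
x = sp.symbols('x', real=True)
t, D = sp.symbols('t D', positive=True)

w = sp.exp(-x**2 / (4*D*t)) / sp.sqrt(4*sp.pi*D*t)
v = x / (2*t)                                          # <u>_x

continuity = sp.simplify(sp.diff(w, t) + sp.diff(v * w, x))              # Eq. (16)

Q = 2 * D**2 * sp.diff(sp.sqrt(w), x, 2) / sp.sqrt(w)                    # Eq. (7) form
momentum = sp.simplify(sp.diff(v, t) + v * sp.diff(v, x) + sp.diff(Q, x))  # Eq. (20)
```

Here $`Q=x^2/8t^2-D/2t`$, so $`-\partial _xQ=-x/4t^2`$ indeed equals the convective derivative of $`x/2t`$.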
Namely, in view of (we stay in the Smoluchowski regime) $`P_{kin}\simeq (D\beta -\frac{D}{2t})w`$, where $`D=\frac{k_BT}{m\beta }`$, we can formally set: $$\mathrm{\Theta }=m\frac{P_{kin}}{w}\simeq \left(k_BT-\frac{mD}{2t}\right)<k_BT.$$ (21) That quantifies the degree of thermal agitation (temperature) of the Brownian particles to be less than the thermostat temperature. Heat is continually pumped from the thermostat to the Brownian "gas", until asymptotically both temperatures equalize. This may be called a "thermalization" of the Brownian particles. In the process of thermalization the Brownian "gas" temperature grows monotonically until the mean kinetic energy of the particles and that of the mean flows asymptotically approach the familiar kinetic relationship: $$\frac{m}{2}\int w(<u^2>_x-<u>_x^2)dx=\frac{1}{2}k_BT.$$ (22) In view of this medium $`\leftrightarrow `$ particles heat transfer issue, one must be really careful while associating habitual thermal equilibrium conditions with essentially non-equilibrium phenomena, cf. Ref. for a more extended discussion. ## 4 Hydrodynamical reasoning: In search for exotic Once the local conservation laws have been introduced, it seems instructive to comment on the essentially hydrodynamical features (compressible fluid/gas case) of the problem. Specifically, the "pressure" term $`Q`$ is here quite annoying from the traditional kinetic theory perspective. That is quite apart from the fact that our local conservation laws have the conspicuous Euler form appropriate for the standard hydrodynamics of gases and liquids. Let us stress that in the case of normal liquids the pressure is exerted upon any control volume (droplet) by the surrounding fluid. We may interpret that as a compression of the droplet. In the case of the Brownian motion, we deal with a definite decompression: particles are driven away from areas of higher concentration (probability of occurrence). Hence, typically, the Brownian "pressure" is exerted by the droplet upon its surrounding. 
Following the hydrodynamic tradition, let us analyze that "pressure" issue in more detail. We consider a reference volume (control interval, finite droplet) $`[\alpha ,\beta ]`$ in $`R^1`$ (or $`\mathrm{\Lambda }\subset R^1`$) which at time $`t\in [0,T]`$ comprises a certain fraction of particles (Brownian "fluid" constituents). The time rate at which particles are lost or gained by the volume $`[\alpha ,\beta ]`$ at time $`t`$ is equal to the flow through its boundaries, i.e. $$\partial _t\int _\alpha ^\beta \rho (x,t)dx=-\left[\rho (\beta ,t)v(\beta ,t)-\rho (\alpha ,t)v(\alpha ,t)\right]$$ (23) which is a consequence of the continuity equation. To analyze the momentum balance, let us allow for an infinitesimal deformation of the boundaries of $`[\alpha ,\beta ]`$ such that the mass (particle) loss or gain is entirely compensated: $$[\alpha ,\beta ]\to [\alpha +v(\alpha ,t)\mathrm{\Delta }t,\beta +v(\beta ,t)\mathrm{\Delta }t].$$ (24) Effectively, we then pass to the locally co-moving frame. That implies $$\underset{\mathrm{\Delta }t\to 0}{lim}\frac{1}{\mathrm{\Delta }t}\left[\int _{\alpha +v_\alpha \mathrm{\Delta }t}^{\beta +v_\beta \mathrm{\Delta }t}\rho (x,t+\mathrm{\Delta }t)dx-\int _\alpha ^\beta \rho (x,t)dx\right]=$$ (25) $$=\underset{\mathrm{\Delta }t\to 0}{lim}\frac{1}{\mathrm{\Delta }t}\left[\int _{\alpha +v_\alpha \mathrm{\Delta }t}^\alpha \rho (x,t)dx+\mathrm{\Delta }t\int _\alpha ^\beta (\partial _t\rho )dx+\int _\beta ^{\beta +v_\beta \mathrm{\Delta }t}\rho (x,t)dx\right]=0.$$ Let us investigate what happens to the local matter flows $`(\rho v)(x,t)`$ if we proceed in the same way (only the leading terms are retained): $$\int _{\alpha +v_\alpha \mathrm{\Delta }t}^{\beta +v_\beta \mathrm{\Delta }t}(\rho v)(x,t+\mathrm{\Delta }t)dx-\int _\alpha ^\beta (\rho v)(x,t)dx\simeq $$ (26) $$\simeq -(\rho v^2)(\alpha ,t)\mathrm{\Delta }t+(\rho v^2)(\beta ,t)\mathrm{\Delta }t+\mathrm{\Delta }t\int _\alpha ^\beta [\partial _t(\rho v)]dx.$$ In view of the local conservation laws we have $`\partial _t(\rho v)=-\nabla (\rho v^2)+\rho \nabla (\mathrm{\Omega }-Q)`$ and the rate of change of momentum associated with the control volume $`[\alpha ,\beta ]`$ is (here, per unit of mass) $$\underset{\mathrm{\Delta }t\to 0}{lim}\frac{1}{\mathrm{\Delta }t}\left[\int _{\alpha +v_\alpha \mathrm{\Delta }t}^{\beta +v_\beta \mathrm{\Delta }t}(\rho v)(x,t+\mathrm{\Delta }t)dx-\int _\alpha ^\beta (\rho v)(x,t)dx\right]=\int _\alpha ^\beta \rho \nabla (\mathrm{\Omega }-Q)dx.$$ (27) However, $`\nabla Q=\frac{\nabla P}{\rho }`$ and $`P=D^2\rho \mathrm{\Delta }\mathrm{ln}\rho `$. Therefore: $$\int _\alpha ^\beta \rho \nabla (\mathrm{\Omega }-Q)dx=\int _\alpha ^\beta \rho \nabla \mathrm{\Omega }dx-\int _\alpha ^\beta \nabla Pdx=$$ (28) $$=E[\nabla \mathrm{\Omega }]_\alpha ^\beta +P(\alpha ,t)-P(\beta ,t).$$ Clearly, $`\mathrm{\Omega }`$ refers to the Euler-type volume force, while $`Q`$ (or, more correctly, $`P`$) refers to the "pressure" effects entirely due to the particle transfer rate through the boundaries of the considered volume. The missing ingredient of our discussion is the time development of the kinetic energy of the matter flow, $`\frac{1}{2}(\rho v^2)`$ (per unit of mass), transported through the chosen volume. Let us therefore evaluate: $$\int _{\alpha +v_\alpha \mathrm{\Delta }t}^{\beta +v_\beta \mathrm{\Delta }t}\frac{1}{2}(\rho v^2)(x,t+\mathrm{\Delta }t)dx-\int _\alpha ^\beta \frac{1}{2}(\rho v^2)(x,t)dx\simeq $$ (29) $$\simeq -\frac{1}{2}(\rho v^3)(\alpha ,t)\mathrm{\Delta }t+\frac{1}{2}(\rho v^3)(\beta ,t)\mathrm{\Delta }t+\mathrm{\Delta }t\int _\alpha ^\beta \left[\partial _t\frac{1}{2}(\rho v^2)\right]dx.$$ We have $`\partial _t(\frac{1}{2}\rho v^2)=-\frac{1}{2}v^2\nabla (\rho v)-\rho v\nabla (Q-\mathrm{\Omega }+\frac{1}{2}v^2)=-\nabla (\frac{1}{2}\rho v^3)-\rho v\nabla (Q-\mathrm{\Omega })`$. Consequently, the time rate of the kinetic energy (of the flow) loss/gain by the volume reads: $$\underset{\mathrm{\Delta }t\to 0}{lim}\frac{1}{\mathrm{\Delta }t}\left[\int _{\alpha +v_\alpha \mathrm{\Delta }t}^{\beta +v_\beta \mathrm{\Delta }t}\frac{1}{2}(\rho v^2)(x,t+\mathrm{\Delta }t)dx-\int _\alpha ^\beta \frac{1}{2}(\rho v^2)(x,t)dx\right]=\int _\alpha ^\beta (\rho v)\nabla (\mathrm{\Omega }-Q)dx.$$ (30) In the integrand one immediately recognizes an expression (up to the mass parameter) for the power released/absorbed by the control volume at time $`t`$. 
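The relation $`\nabla Q=\frac{\nabla P}{\rho }`$ invoked in the derivation above can be confirmed symbolically for a generic one-dimensional density (a sketch, with $`P=D^2\rho \mathrm{\Delta }\mathrm{ln}\rho `$ as in Sec. 3):

```python
import sympy as sp

# Symbolic check of rho * Q' = P' (one dimension, generic positive rho(x)),
# with Q = 2 D^2 (sqrt rho)''/sqrt(rho) and P = D^2 rho (ln rho)''.
x = sp.symbols('x', real=True)
D = sp.symbols('D', positive=True)
rho = sp.Function('rho', positive=True)(x)

Q = 2 * D**2 * sp.diff(sp.sqrt(rho), x, 2) / sp.sqrt(rho)
P = D**2 * rho * sp.diff(sp.log(rho), x, 2)

residual = sp.simplify(rho * sp.diff(Q, x) - sp.diff(P, x))
```

Both sides reduce to $`D^2(\rho ^{\prime \prime \prime }-2\rho ^{\prime }\rho ^{\prime \prime }/\rho +\rho ^{\prime \,3}/\rho ^2)`$, so the residual vanishes identically.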
In particular, by taking advantage of the identity $`\nabla Q=\frac{\nabla P}{\rho }`$ we can rewrite the pure ”pressure” contribution as follows: $`-\int _\alpha ^\beta \rho v\nabla Qdx=-\int _\alpha ^\beta v\nabla Pdx`$, which clearly is a direct analog of the standard mechanical expression for the power release ($`\frac{dE}{dt}=Fv`$). Surely, one cannot interpret the above local averages as an outcome of an innocent operation of the random medium upon particles. There is a non-negligible transfer of energy and momentum to be accounted for in association with the Brownian motion, and a number of problems (when can we disregard temperature gradients?) pertaining to local heating and cooling phenomena suffered by the environment (in terms of local averages) should be consistently resolved. ## 5 Implementing the feedback In contrast to molecular chaos derivations based on the Boltzmann equation, in the case of the Brownian motion we have no access to the microscopic fine details of particle propagation. Instead, we need to postulate a mathematically reliable form of the dynamics, even if we end up with obvious artifacts of the formalism (like e.g. non-differentiable paths, or their infinite spatial variation on arbitrarily short time intervals, in the case of the Wiener process or, even worse, white-noise input). Our major hypothesis about an exotic particle-bath coupling pertains to local averages and is motivated by a fairly intuitive picture of the behaviour of suspended particles in thermally inhomogeneous media. Namely, it is well known that hot areas support a much lower density of suspended particles (they can even be free of any dust admixtures) than the lower temperature areas. Dynamically we can interpret this phenomenon as a repulsion of suspended particles by warm areas and an attraction by the cool ones, cf. Ref. .
In the course of our discussion we have spent quite a while demonstrating that a non-trivial energy/heat exchange is completely ignored in the standard approach to the Brownian motion (even if, under suitable circumstances, one has good reasons to do that). Effectively, the model dramatically violates basic conservation laws of physics. That derives from the assumption that the thermostat is perpetually in the state of rest (in the mean), in thermal equilibrium and thus free of any thermal currents. The force/acceleration term (we turn back to the three-dimensional notation) $`\stackrel{}{\nabla }(\mathrm{\Omega }-Q)`$ appears in all formulas that refer to energy/momentum flows supported by local averages of the Brownian motion. Their sole reason is the action of the random medium upon particles, which causes an expansion of the Brownian ”swarm” out of areas of higher concentration. That, however, needs a local cooling of the medium (Brownian particles are being ”thermalized” by increasing their mobility, i.e. temperature). On sufficiently low time scales that should amount to an instantaneous reaction of the medium in terms of thermally induced currents: Brownian particles are driven back (attraction!) to the cooler areas, i.e. float down the temperature gradients as long as they are non-vanishing. We shall bypass the thermal inhomogeneity issue by resorting to the explicit energy-momentum balance information available through local conservation laws for the Brownian motion. If we regard the term $`\stackrel{}{\nabla }(\mathrm{\Omega }-Q)`$ as a quantification of the sole medium action upon particles, then the most likely quantification of the medium reaction should be exactly the opposite, i.e. $`\stackrel{}{\nabla }(Q-\mathrm{\Omega })`$.
In other words, the medium ”effort” to release momentum and energy from the Brownian ”droplet” (control volume), at a time rate determined by the functional form of $`\stackrel{}{\nabla }(\mathrm{\Omega }-Q)(x,t)`$, induces a compensating (in view of the heat deficit) energy and momentum delivery to that volume, needed to remove the thermal inhomogeneity of the thermostat. As a consequence, Brownian particles propagate through a medium which is no longer in the state of rest and develops intrinsic (mean, on the ensemble average) flows. A mathematical encoding of this Brownian recoil principle, or third Newton law in the mean hypothesis, is rather well established. The momentum conservation law for the process with a recoil (the reaction term replaces the decompressive action term) will read: $$\partial _t\stackrel{}{v}+(\stackrel{}{v}\cdot \stackrel{}{\nabla })\stackrel{}{v}=\stackrel{}{\nabla }(Q-\mathrm{\Omega })$$ (31) implying that $$\partial _tS+\frac{1}{2}|\stackrel{}{\nabla }S|^2-Q=-\mathrm{\Omega }$$ (32) stands for the corresponding Hamilton-Jacobi equation, cf. , instead of the ”normal” one. A suitable adjustment (re-setting) of the initial data is here necessary, cf. . In the coarse-grained picture of motion we shall deal with a sequence of repeatable feedback scenarios realized on the Smoluchowski process time scale: the Brownian ”swarm” expansion build-up is accompanied by the parallel counter-flow build-up, which in turn modifies the subsequent stage of the Brownian ”swarm” migration (being interpreted to modify the forward drift of the process) and the corresponding counter-flow, built up anew. Perhaps surprisingly, we are still dealing with Markovian diffusion-type processes.
The link is particularly obvious if we observe that the new Hamilton-Jacobi equation can be formally rewritten in the previous form by introducing: $$\mathrm{\Omega }_r=\partial _tS+\frac{1}{2}|\stackrel{}{\nabla }S|^2+Q$$ (33) where $`\mathrm{\Omega }_r=2Q-\mathrm{\Omega }`$ and $`\mathrm{\Omega }`$ represents the previously defined potential function characterizing any Smoluchowski (or more general) diffusion process. It is, however, $`\mathrm{\Omega }_r`$ which would determine the forward drifts of the Markovian diffusion process with a recoil. Those must come out from the Cameron-Martin-Girsanov identity $`\mathrm{\Omega }_r=2Q-\mathrm{\Omega }=2D[\partial _t\varphi +\frac{1}{2}(\frac{\stackrel{}{b}^2}{2D}+\stackrel{}{\nabla }\cdot \stackrel{}{b})]`$. After complementing the Hamilton-Jacobi-type equation by the continuity equation, we again end up with a closed system of conservation laws. The system is badly nonlinear and coupled, but its linearisation can be immediately given in terms of an adjoint pair of Schrödinger equations with a potential $`\mathrm{\Omega }`$ (the imaginary unit $`i`$ on the left-hand side below is not an error!), . Indeed, $$i\partial _t\psi =-D\mathrm{\Delta }\psi +\frac{\mathrm{\Omega }}{2D}\psi $$ (34) with a solution represented in the polar form $`\psi =\rho ^{1/2}exp(iS)`$, and its complex adjoint, does the job. The choice of $`\psi (\stackrel{}{x},0)`$ sets here a solvable Cauchy problem. Notice that, in view of the Schrödinger-type linearization, for time-independent $`\mathrm{\Omega }`$ (conservative forces) the total energy $`\int _{R^3}(\frac{v^2}{2}-Q+\mathrm{\Omega })\rho d^3x`$ of the system is a conserved finite quantity. We thus reside within the framework of so-called finite energy diffusion processes, whose mathematical features have received some attention in the literature. In particular, it is known that in the absence of volume forces a superdiffusion appears, while harmonic volume forces allow for non-dispersive diffusion-type processes.
# The plasma-insulator transition of spin-polarized Hydrogen ## Abstract A mixed classical-quantum density functional theory is used to calculate pair correlations and the free energy of a spin-polarized Hydrogen plasma. A transition to an atomic insulator phase is estimated to occur around $`r_s=2.5`$ at $`T=10^4K`$, and a pressure $`P\simeq 0.5`$ Mbar. Spin polarization is imposed to prevent the formation of $`H_2`$ molecules. Département de Physique des Matériaux (UMR 5586 du CNRS), Université Claude Bernard-Lyon1, 69622 Villeurbanne Cedex, France Department of Chemistry, University of Cambridge Lensfield Road, Cambridge CB2 1EW, UK PACS numbers: 31.15Ew, 61.20.-p, 64.70.-p Although Hydrogen is generally considered to be the simplest of elements, its expected metallization under pressure has proved a rather elusive transition. It is now accepted that the behaviour of solid and fluid molecular Hydrogen ($`H_2`$) may be very different at high pressures. Despite considerable experimental efforts with static, room temperature compression of solid $`H_2`$ in diamond anvils beyond pressures of 2 Mbar, there is still no compelling evidence for a metallic state . The situation is somewhat more favourable for fluid $`H_2`$, since shock compression to 1.4 Mbar, and a temperature of about 3000 K, led to measurements of metallic resistivities . However, theoretical interpretation is hampered by the absence of a clear-cut scenario; in particular it is not clear whether molecular dissociation precedes ionization or vice versa . The presence of several species, $`H_2`$, $`H_2^+`$, H, $`H^+`$ and electrons at a “plasma phase transition” complicates a theoretical analysis considerably; to gain a clearer picture of pressure-induced ionization, it may be instructive to consider a model system which would not involve molecular dissociation. The model system considered in this letter is spin-polarized Hydrogen.
If all electron spins are assumed to be polarized by a strong external magnetic field $`𝐁`$, only triplet pair states $`{}^{3}\mathrm{\Sigma }`$ can be formed, preventing the binding into $`H_2`$ molecules. The low pressure phase will be made up of H atoms, and the only possible scenario upon compression will be the ionization of atoms to form an electron-proton plasma, which is expected to be crystalline at low temperature and fluid at higher temperatures. A rough estimate of the magnetic field needed to spin-polarize the electrons is obtained by equating the magnetic coupling energy $`\mu _BB`$ (where $`\mu _B`$ is the magnetic moment of an electron) to the difference between the triplet and singlet H-H potential energy functions, calculated at the equilibrium distance of the $`H_2`$ molecule ; this leads to $`B\simeq 10^5`$ Tesla. This value exceeds the highest magnetic fields achievable in a laboratory by 3 orders of magnitude, but is well within the range of astrophysical situations. The present calculation neglects possible orbital effects due to a strong applied magnetic field; such effects are expected to be small and their inclusion would lead to a much more involved and less transparent calculation. We prefer to think of our model system as a plasma which has been prepared in a spin-polarized state, and is assumed to remain such even when B is switched off. The subsequent calculation will be restricted to fluid Hydrogen. The thermodynamic properties of the low pressure atomic $`H`$ phase may be easily calculated from the known triplet pair potential by standard methods of the theory of classical fluids. We calculated the atom-atom pair distribution function $`g(r)`$ from the HNC integral equation , and deduced from it the equation of state via the virial and compressibility routes.
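The HNC route just mentioned can be sketched in a few lines. The code below is a generic Picard iteration for the one-component Ornstein-Zernike equation with the HNC closure; a Lennard-Jones potential stands in for the actual triplet H-H potential (an assumption made purely for illustration), and the grid, density and mixing parameters are arbitrary. As a sanity check, at very low density the HNC $`g(r)`$ must approach the Boltzmann factor $`\mathrm{exp}(-\beta u)`$.

```python
import numpy as np

# Minimal HNC + Ornstein-Zernike solver for a one-component fluid.
# A Lennard-Jones potential (reduced units) stands in for the triplet
# H-H potential used in the paper; all parameters are illustrative.
N, dr = 512, 0.02
r = dr * np.arange(1, N + 1)
dk = np.pi / ((N + 1) * dr)
k = dk * np.arange(1, N + 1)
S = np.sin(np.outer(r, k))            # sine-transform kernel (DST-I)

def ft(f):    # 3D radial Fourier transform
    return 4.0 * np.pi * dr * ((r * f) @ S) / k

def ift(fk):  # its inverse on the conjugate grid
    return dk * ((k * fk) @ S.T) / (2.0 * np.pi ** 2 * r)

def hnc_g(beta_u, rho, mix=0.5, tol=1e-10, itmax=2000):
    """Picard iteration on gamma = h - c for the OZ + HNC system."""
    c = np.exp(-beta_u) - 1.0          # Mayer function as starting guess
    gamma = np.zeros(N)
    for _ in range(itmax):
        ck = ft(c)
        gamma_new = ift(rho * ck ** 2 / (1.0 - rho * ck))  # OZ relation
        if np.max(np.abs(gamma_new - gamma)) < tol:
            gamma = gamma_new
            break
        gamma = mix * gamma_new + (1.0 - mix) * gamma
        c = np.exp(-beta_u + gamma) - 1.0 - gamma          # HNC closure
    return np.exp(-beta_u + gamma)

beta_u = 4.0 * ((1.0 / r) ** 12 - (1.0 / r) ** 6)   # beta*u with eps = kT
g = hnc_g(beta_u, rho=0.002)
```

The same loop, with the appropriate triplet potential and state point, yields the atom-atom $`g(r)`$ from which virial and compressibility equations of state follow.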
The resulting excess free energies per atom are plotted in Fig.4 as a function of the usual density parameter $`r_s=a/a_0`$, along the isotherm $`T=10^4K`$; here $`a_0`$ is the Bohr radius, and $`a=[3/(4\pi n)]^{1/3}`$, where $`n`$ is the number of $`H`$ atoms per unit volume. There is a thermodynamic inconsistency, typical of HNC theory, but the small difference between the “virial” and “compressibility” free energies will have no influence on our conclusions. To allow for a meaningful comparison with the free energy calculated for the high pressure plasma phase, the free energies shown in Fig.4 contain an electron binding energy contribution of $`-0.5`$ a.u. . It is implicitly assumed that this binding energy, valid for isolated atoms (i.e. in the limit $`r_s\rightarrow \mathrm{\infty }`$), does not change upon compression up to $`r_s=2.5`$ due to overlap and distortion of the individual electron $`1s`$ orbitals. A statistical description of the high pressure phase is more challenging. The key parameter characterizing the spin-polarized electron component is its Fermi energy $`ϵ_F=2.923/r_s^2`$ a.u.; the corresponding Fermi temperature is $`T_F\simeq 9.2\times 10^5/r_s^2`$ K. Along the isotherm $`T=10^4K`$ considered in the present calculations, the electrons may be considered to be completely degenerate (i.e. in their ground state) up to $`r_s\simeq 3`$. The degeneracy temperature of the protons is $`2000`$ times smaller, so that for $`T=10^4K`$, the latter may be considered as being essentially classical, down to $`r_s\simeq 0.5`$. The proton component is characterized by the Coulomb coupling constant $`\mathrm{\Gamma }=e^2/(ak_BT)=31.56/r_s`$ along the above isotherm, showing that classical Coulomb correlations are expected to be strong over the density range $`1\le r_s\le 3`$ considered in this paper. Note that while $`\mathrm{\Gamma }`$ decreases as $`r_s`$ increases, the corresponding electron Coulomb coupling constant $`\gamma =e^2/(aϵ_F)=0.342r_s`$ increases.
In the ultra-high density regime $`r_s\ll 1`$, the electron kinetic energy dominates, and the proton and electron components decouple in first approximation (“two-fluid” model); the weak proton-electron coupling may be treated by linear response theory , suitably adapted to the spin-polarized case. Within linear response, the free energy per atom (ion-electron pair) splits into three terms: the ground-state energy of the uniform, spin-polarized electron gas (“jellium”), $`ϵ_e`$, the free energy of protons in a uniform neutralizing background (the so-called “one-component plasma” or OCP), $`f_{OCP}`$, and the first order correction due to linear screening of the Coulomb interactions by the electron gas, $`\mathrm{\Delta }f`$: $$f=\frac{F(\mathrm{\Gamma },r_s)}{N}=ϵ_e+f_{OCP}+\mathrm{\Delta }f$$ (1) where $`ϵ_e(r_s)`$ is taken to be the sum of kinetic ($`1.754/r_s^2`$), exchange ($`-0.5772/r_s`$) and correlation contributions; $`f_{OCP}`$ is given by an accurate fit to Monte Carlo simulations of the OCP ; $`\mathrm{\Delta }f`$ follows from first order thermodynamic perturbation theory : $$\mathrm{\Delta }f=\frac{1}{2(2\pi )^3}\int S_{OCP}(k)\widehat{w}(k)𝑑𝐤$$ (2) where $`S_{OCP}(k)`$ is the static structure factor of the OCP (which plays the role of reference system). According to linear response theory, $`\widehat{w}(k)`$ is the difference between screened and bare ion-ion pair potentials: $$\widehat{w}(k)=\frac{4\pi e^2}{k^2}\left[\frac{1}{ϵ(k)}-1\right]$$ (3) where $`ϵ(k)`$ is the dielectric function of the electron gas, which we calculated within the RPA from the Lindhard susceptibility of a gas of spin-polarized, non-interacting electrons, supplemented by a local exchange and correlation correction . All necessary ingredients for the calculation of $`\mathrm{\Delta }f`$ may be found in , and the resulting free energy curve is shown in Fig.4.
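The content of Eq. (3) can be made concrete with a simpler model dielectric. The sketch below substitutes the Thomas-Fermi form $`ϵ(k)=1+q^2/k^2`$ for the exchange/correlation-corrected RPA function used in the text (an assumption for illustration only), and verifies numerically that the linearly screened Coulomb potential transforms back, in real space, to the familiar Yukawa form.

```python
import numpy as np

# Real-space check of linear screening, Eq. (3), with a Thomas-Fermi
# dielectric eps(k) = 1 + q**2/k**2 standing in for the RPA form of the
# text.  The screened potential vhat(k) = 4*pi/(k**2 + q**2) (units with
# e = 1) must invert to the Yukawa potential exp(-q*r)/r.
q = 1.5                       # illustrative screening wavevector
dk = 0.01
k = dk * np.arange(1, 200_000)

def v_screened(r):
    # radial inverse Fourier transform:
    # v(r) = (1/(2*pi**2*r)) * Int_0^inf k*sin(k*r)*vhat(k) dk
    vhat = 4.0 * np.pi / (k * k + q * q)
    return np.sum(k * np.sin(k * r) * vhat) * dk / (2.0 * np.pi ** 2 * r)

for r in (0.5, 1.0, 2.0):
    print(r, v_screened(r), np.exp(-q * r) / r)
```

With the full wavevector-dependent $`ϵ(k)`$, the same quadrature gives the screened ion-ion potential entering the perturbation integral of Eq. (2).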
Although linear response cannot, a priori, be expected to be quantitatively accurate for $`r_s>1`$, it provides a rough estimate of the plasma to atomic phase transition, from the intersection of the free energy curves, which is seen to occur at $`r_s\simeq 1.9`$. The corresponding transition pressure would be $`\simeq 2.3`$ Mbar. However, as $`r_s`$ increases, the ion-electron coupling becomes stronger, and the non-linear response of the electron component to the “external” potential field provided by the protons is expected to lower the free energy of the plasma phase. To explore the non-linear regime we have adapted the HNC-DFT formulation of our earlier work on (unpolarized) metallic H to the spin-polarized case. Within this formulation, proton-proton and proton-electron correlations are treated at the HNC level, which is expected to be a good approximation for the long-range Coulomb interactions, while the energy of the inhomogeneous electron gas follows from the density functional ($`E=Nϵ_e`$): $$E\left[\rho (𝐫)\right]=E_K\left[\rho (𝐫)\right]+E_H\left[\rho (𝐫)\right]+E_X\left[\rho (𝐫)\right]+E_C\left[\rho (𝐫)\right]$$ (4) where $`\rho (𝐫)`$ denotes the local electron density, and $`E_K`$, $`E_H`$, $`E_X`$ and $`E_C`$ are the kinetic, Hartree, exchange and correlation contributions. For $`E_K`$ we adopted the Thomas-Fermi approximation, corrected by a square gradient term: $$E_K\left[\rho (𝐫)\right]=C_K\int [\rho (𝐫)]^{5/3}𝑑𝐫+\frac{\lambda }{8}\int \frac{|\nabla \rho (𝐫)|^2}{\rho (𝐫)}𝑑𝐫$$ (5) where $`C_K=3(6\pi ^2)^{2/3}/10`$ a.u., while the choice of $`1/9<\lambda <1`$ will be specified below. The mean field Hartree term is of the usual form: $$E_H\left[\rho (𝐫)\right]=\frac{1}{2}\int 𝑑𝐫\int 𝑑𝐫^{}\frac{\mathrm{\Delta }\rho (𝐫)\mathrm{\Delta }\rho (𝐫^{})}{|𝐫-𝐫^{}|}$$ (6) where $`\mathrm{\Delta }\rho (𝐫)=\rho (𝐫)-n`$, while: $$E_X[\rho (𝐫)]=-C_X\int [\rho (𝐫)]^{4/3}𝑑𝐫$$ (7) with $`C_X=3(6/\pi )^{1/3}/4`$ a.u. . The correlation contribution $`E_C[\rho (𝐫)]`$ (within the LDA) can be found in .
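The square-gradient correction in Eq. (5) has a well-known exact limit that makes a useful numerical check: for any one-orbital density, such as the hydrogen 1s density $`\rho (r)=\mathrm{exp}(-2r)/\pi `$, the gradient term with $`\lambda =1`$ (the von Weizsäcker functional) equals the exact kinetic energy, 0.5 a.u. The sketch below evaluates it by radial quadrature; grid parameters are illustrative.

```python
import numpy as np

# The (lambda/8) * |grad rho|**2 / rho term of Eq. (5), evaluated for the
# hydrogen 1s density rho(r) = exp(-2r)/pi in atomic units.  For this
# one-orbital density the lambda = 1 (von Weizsaecker) value equals the
# exact kinetic energy, 0.5 a.u.
r = np.linspace(1e-6, 20.0, 400_001)
rho = np.exp(-2.0 * r) / np.pi
drho = -2.0 * rho                 # analytic radial derivative of rho
lam = 1.0
integrand = (lam / 8.0) * (drho ** 2 / rho) * 4.0 * np.pi * r ** 2
t_w = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r))
print(t_w)   # ~0.5 a.u.
```

The same quadrature with $`\lambda <1`$ simply rescales the result, which is why $`\lambda `$ can be tuned, as described below, to match the linear response prediction at high density.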
This functional yields an explicit form for the electron-electron direct correlation function $`c_{22}(r)`$ (henceforth the indices 1 and 2 will refer to protons and electrons respectively) . The remaining direct and total correlation functions $`c_{11}(r)`$, $`c_{12}(r)`$, $`h_{11}(r)`$ and $`h_{12}(r)`$ are calculated by a numerical resolution of the HNC closure equations and the quantum version of the Ornstein-Zernike (OZ) relations , which form a closed set of coupled non-linear integral equations for the four functions. Solutions were obtained by a standard iterative procedure along the isotherm $`T=10^4K`$ and for density parameters in the range $`0.5\le r_s\le 2.5`$, corresponding to more than one-hundred-fold compression of the lowest density state ($`r_s=2.5`$), which would correspond to $`0.17g/cm^3`$. The temperature is roughly equal to that expected inside Saturn, and comparable to temperatures reached in shock compression experiments on the NOVA laser facility at Livermore . The iterative solutions were first obtained at the highest densities ($`r_s=0.5`$ and $`1`$), where linear response theory provides reasonably accurate initial input. The prefactor $`\lambda `$ in the square gradient correction to the electron kinetic energy functional (5) was adjusted to provide the best match between the HNC-DFT result for the local radial density of electrons around a proton, $`r^2g_{12}(r)=r^2[1+h_{12}(r)]`$, and its linear response prediction, at the highest densities ($`r_s=0.5`$ and $`1`$), where linear response should be most accurate; $`g_{12}(r)`$ turns out to be rather sensitive to $`\lambda `$, and the best agreement is achieved for $`\lambda =0.18`$, which is close to the value $`1/5`$ frequently advocated in electronic structure calculations for atoms . Results for the local radial density $`r^2g_{12}(r)`$ are shown in Fig.1 for several values of $`r_s`$.
As expected, electrons pile up increasingly at small $`r`$ as $`r_s`$ increases, and a shoulder is seen to develop around $`r/a\simeq 0.4`$. For comparison, the linear response prediction is shown at $`r_s=1`$, while at the lowest density ($`r_s=2.5`$), an estimate of $`g_{12}(r)`$ in the atomic phase is obtained by adding to the electron density in a $`H`$ atom (namely $`r^2\rho (r)=r^2\mathrm{exp}(-2r)/\pi `$ a.u.) the convolution of the latter with the atom-atom pair distribution function $`g(r)`$. HNC-DFT results for the proton-proton pair distribution function are shown in Fig.2 for three densities. As expected, proton-proton correlations are seen at first to weaken as the density decreases, due to enhanced electron screening. However, at the lowest density ($`r_s=2.5`$), weakly damped oscillations build up at long distances, which may be indicative of an incipient instability of the proton-electron plasma. The atom-atom $`g(r)`$ at the same density agrees reasonably well with $`g_{11}(r)`$ up to the first peak, but it does not exhibit the long-range correlations of the latter. The proton-proton structure factors $`S_{11}(k)`$ are plotted in Fig.3 for several values of $`r_s`$. A considerable qualitative change is again seen to occur at the lowest density ($`r_s=2.5`$), where the main peak is shifted to larger $`k`$, while a significant peak builds up at $`k=0`$. Such enhanced “small angle scattering” is reminiscent of the behaviour observed in simple fluids near a spinodal (subcritical) instability. In fact we were unable to obtain convergence of the HNC-DFT integral equations for $`r_s>2.5`$, which hints at an instability of the electron-proton plasma at lower densities. This strongly suggests a transition to the insulating atomic phase, but the simple density functional used in this work cannot properly describe the recombination of protons and electrons into bound (atomic) states .
In order to confirm this scenario, the free energy of the plasma phase should be compared to that of the atomic phase. This is easily achieved within the high density linear response regime, as shown earlier. However, the calculation of the free energy in the non-linear regime appropriate for lower densities ($`r_s>1`$) is less straightforward . In fact the present HNC-DFT formulation provides only one direct link with thermodynamics, namely via the compressibility relation : $$\underset{k\rightarrow 0}{lim}S_{11}(k)=\underset{k\rightarrow 0}{lim}S_{12}(k)=nk_BT\chi _T$$ (8) where $`\chi _T`$ denotes the isothermal compressibility of the plasma. From the calculated values of $`\chi _T`$, the free energy of the plasma follows by thermodynamic integration, starting from a reference state (e.g. $`r_s=1`$) for which the linear response estimate is expected to be accurate. The resulting “compressibility” free energy curve is plotted in Fig.4. Somewhat unexpectedly, it lies above the linear response prediction. An alternative route to the free energy is via the virial relation for the pressure; only an approximate virial expression is known within the present HNC-DFT formulation , and the resulting “virial” free energy curve is also shown in Fig.4. It falls well below the “compressibility” free energy curve, thus illustrating the well known thermodynamic inconsistency of the HNC closure for Coulombic fluids . Any reasonable extrapolation of the two free energy curves would miss the low density limit $`-0.5`$ a.u. by as much as twenty per cent. We suggest instead an estimate of the free energy of the plasma phase obtained by taking the average of the “compressibility” and “virial” values, despite the lack of a fundamental a priori justification for doing this. A short extrapolation of the resulting curve is likely to intersect or smoothly join on to the free energy of the atomic phase just beyond $`r_s=2.5`$.
An intersection would correspond to a first-order phase transition, reminiscent of the “plasma phase transition” of Saumon and Chabrier . However, due to the uncertainty in the thermodynamics of the plasma phase, a continuous transition cannot be ruled out. The transition pressure $`P`$ would be of the order of $`0.5`$ Mbar, well below the current experimental and theoretical estimates for the transition of fluid molecular Hydrogen to a conducting state . In summary, the structure and thermodynamic results derived from an HNC-DFT theory of the spin-polarized proton-electron plasma strongly suggest that this plasma will recombine into an insulating atomic phase at $`r_s\simeq 2.5`$, for a temperature $`T=10^4K`$. We are presently exploring the behaviour of the system at lower temperatures.
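The thermodynamic integration behind the “compressibility” route of Eq. (8) can be sketched generically: $`S(k\rightarrow 0)`$ gives $`(\partial P/\partial n)_T=k_BT/S(0)`$, a first density integration gives the pressure, and a second integration of $`P/n^2`$ gives the free energy per particle relative to the reference state. The code below (with illustrative inputs, not the paper's data) checks the machinery on the ideal-gas case $`S(0)=1`$, where the result must be $`k_BT\mathrm{ln}(n/n_0)`$.

```python
import numpy as np

# Thermodynamic integration from the k->0 structure factor (Eq. 8):
#   n*kT*chi_T = S(0)   =>   (dP/dn)_T = kT / S(0)
#   (df/dn)_T = P / n**2   (f = Helmholtz free energy per particle)
# Consistency check: S(0) = 1 (ideal gas) must give f - f0 = kT*ln(n/n0).
kT = 1.0
n = np.linspace(0.1, 1.0, 90_001)
S0 = np.ones_like(n)                 # ideal-gas input; replace with data
dn = n[1] - n[0]

dPdn = kT / S0
# cumulative trapezoid for P(n), with P(n0) = n0*kT at the reference point
P = n[0] * kT + np.concatenate(
    ([0.0], np.cumsum(0.5 * (dPdn[1:] + dPdn[:-1]) * dn)))
dfdn = P / n ** 2
f = np.concatenate(([0.0], np.cumsum(0.5 * (dfdn[1:] + dfdn[:-1]) * dn)))
print(f[-1], kT * np.log(n[-1] / n[0]))   # both ~ ln(10)
```

With the computed $`\chi _T(r_s)`$ values in place of the ideal-gas input, and the linear response free energy fixing the constant at the $`r_s=1`$ reference state, this reproduces the “compressibility” curve of Fig.4.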
# Vibrational properties of the one-component 𝜎 phase ## I Introduction The local atomic order in disordered condensed materials is well defined and governs many physical properties . Quite often, for a disordered material, it is possible to find a corresponding crystal with similar local and even intermediate-range order, which gives rise to similarities in many structural and dynamical features of these two solids. Such a crystal can be regarded as a reference crystalline structure (crystalline counterpart) for the corresponding disordered substance. In some cases, the reference structure can be uniquely defined. The simplest examples are toy structural models with force-constant and/or mass disorder. In these toy models, the atoms occupy their equilibrium positions at the sites of a crystalline lattice (e.g. simple cubic), which can be considered to be a reference one (see e.g. ). Another related example is a binary substitutional alloy, the reference system for which is a periodic point lattice with one of the two atomic species placed at the lattice sites . The disorder in such models does not influence the equilibrium positions of the atoms arranged in an ideal crystalline lattice. This makes possible the use of approximate analytical approaches (e.g. the coherent potential approximation ) to treat the vibrational properties of the models, provided that the vibrational properties of the counterpart crystal are known. In amorphous solids, or glasses, the atoms do not occupy the sites of a crystalline lattice, which results in positional disorder. For these materials, a choice of a reference structure becomes problematic. Good counterparts can usually be found among the crystalline polymorphs having the same (or similar) chemical composition as the corresponding glass. For example, $`\alpha `$-cristobalite appears to be a good crystalline counterpart for vitreous silica .
The main purpose of this paper is to investigate numerically the vibrational properties of a one-component $`\sigma `$-phase crystal which is conjectured to be a good crystalline counterpart for a one-component glass with icosahedral local order (IC glass) . The motivation for this choice of a crystalline counterpart of the IC glass is the following. The computational model of the IC glass is based on a simple empirical pair interatomic potential resembling the effective interionic potentials conjectured for simple metallic glass-forming alloys . The use of the same potential allows us to construct models of bcc and $`\sigma `$-phase crystals that are stable for a wide range of thermodynamical parameters . Of these two crystalline structures, the $`\sigma `$ phase is expected to be a good reference structure for the IC glass for the following reasons: The supercooled IC liquid (where the interactions between atoms are described by the same potential ) undergoes a transition either to the IC glass or to a dodecagonal quasicrystal, depending on the quench rate . This quasicrystal has local structural properties similar to those of the IC glass . However, the absence of global periodic order in the quasicrystalline phase makes the analysis of its vibrational properties a task of comparable complexity to that for the glass itself. The $`\sigma `$ phase is one of the closest low-order crystalline approximants for this dodecagonal quasicrystal , which means that these two (crystalline and quasicrystalline) structures are built up from the same structural units. This implies that the IC glass and the $`\sigma `$ phase, being both tetrahedrally close-packed structures , are nearly isomorphous in terms of local order. Knowledge of the vibrational properties of the $`\sigma `$-phase crystal allows for a direct comparison with those of the IC glass.
The apparent similarity in the vibrational densities of states of these two structures gives stronger support for the choice of this crystalline counterpart for the IC glass. The $`\sigma `$-phase structure used in our computations has been obtained by means of molecular dynamics simulation with the use of an interatomic pair potential . The vibrational properties have been investigated both by a normal-mode analysis and by computing the spectra of appropriate time-correlation functions. The paper is arranged as follows: The $`\sigma `$-phase structure is described in Sec. II. The model and technical details of the calculations are presented in Sec. III. In Sec. IV we present the results of the simulations. Some concluding remarks are contained in Sec. V. ## II The $`\sigma `$ phase In this Section, we review the known structural and dynamical properties of the $`\sigma `$ phase. ### A Structure The $`\sigma `$ phase belongs to an important class of tetrahedrally close-packed crystallographic structures , viz. the Frank-Kasper phases . The first coordination shells of the constituent atoms in these structures form triangulated (Frank-Kasper) polyhedra composed entirely of slightly distorted tetrahedra. The four possible coordination numbers ($`Z`$) in these structures are $`Z=12,14,15`$ and $`16`$. The least distorted tetrahedra are found in icosahedra ($`Z12`$ polyhedra). Structures of small clusters of atoms interacting via pairwise central potentials favor icosahedral order as having the lowest energy. The prototype $`\sigma `$ phase structures are $`\beta `$-U and Cr<sub>48</sub>Fe<sub>52</sub> . There are $`30`$ atoms per tetragonal unit cell ($`tP30`$) with $`c/a\simeq 0.52`$, where $`c`$ and $`a`$ are the dimensions of the cell (lattice parameters). The space group of this phase is $`P4_2/mnm`$. There are $`10`$ atoms with coordination number $`12`$ ($`Z12`$, i.e. icosahedra), $`16`$ $`Z14`$ atoms and $`4`$ $`Z15`$ atoms.
The $`-72^{\circ }`$ disclination lines form a network (a major skeleton, in the parlance of Frank and Kasper ), where rows of $`Z14`$ atoms parallel to the tetragonal $`c`$-axis thread planar networks of $`Z14`$ and $`Z15`$ atoms. A projection of the $`\sigma `$-phase structure down the $`c`$-axis is shown in Fig. 1. The Frank-Kasper phases share their significant geometrical property of icosahedral local order with simple metallic glasses . Some liquid alloys which form Frank-Kasper phases have a tendency to freeze into metastable amorphous structures (metallic glasses) when quenched sufficiently rapidly . It is now generally well accepted that, at least in the case of metallic alloys of simple constitution, glass formation is caused by the incompatibility of local icosahedral coordination with the translational symmetry in Euclidean space (geometrical frustration) . There exist statistical mechanical arguments in favor of this scenario of glass formation, based on a Landau free-energy analysis. The average coordination number, $`\overline{Z}`$, in the Frank-Kasper phases ($`13.333\le \overline{Z}\le 13.5`$) is very close to that of a sphere-packed “ideal glass” model ($`\overline{Z}_{\mathrm{ideal}}=13.4`$). In a sense, this “ideal glass” could be regarded as a Frank-Kasper phase with an infinitely large unit cell . Thus the class of Frank-Kasper phases is a natural choice for reference crystalline structures for metallic glasses of simple constitution. From a structural point of view, the $`\sigma `$ phase can also be regarded as a crystalline low-order approximant for dodecagonal quasicrystals . Such quasicrystals , morphologically close to Frank-Kasper phases, represent an alternative class of noncrystallographic structures which combine icosahedral local order with non-translational long-range order manifested by infinitely sharp diffraction peaks.
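The site statistics quoted above (10 $`Z12`$, 16 $`Z14`$ and 4 $`Z15`$ atoms per 30-atom cell) fix the average coordination number of the $`\sigma `$ phase, and a two-line computation confirms that it falls inside the quoted Frank-Kasper window:

```python
# Average coordination number of the sigma phase from its site statistics:
# 10 atoms with Z = 12, 16 with Z = 14, and 4 with Z = 15 per 30-atom cell.
sites = {12: 10, 14: 16, 15: 4}
n_atoms = sum(sites.values())
z_bar = sum(z * count for z, count in sites.items()) / n_atoms
print(n_atoms, round(z_bar, 3))   # 30 atoms, z_bar = 13.467
assert 13.333 <= z_bar <= 13.5    # the Frank-Kasper window quoted above
```

The value 13.467 is indeed very close to the "ideal glass" estimate of 13.4 mentioned in the text.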
### B Dynamics Similarities in the local structure of metallic glasses and Frank-Kasper phases are reflected in the dynamical properties of these materials. The available data about the vibrational dynamics of Frank-Kasper phases is limited to some of the Laves phases, a subclass of the Frank-Kasper phases . For instance, the similarity between the phonon-dispersion relations of the Mg<sub>70</sub>Zn<sub>30</sub> glass and those of the Laves phase MgZn<sub>2</sub> was emphasized by Hafner . Another interesting aspect of the dynamics of Frank-Kasper phases is related to the appearance of soft vibrational modes in these materials. Such a soft low-frequency optic mode at the $`\mathrm{\Gamma }`$-point (the origin of the reciprocal lattice) has been found numerically in the same Laves phase MgZn<sub>2</sub> . The frequency of this mode decreases with increasing pressure (accompanied by volume compression) and eventually becomes negative, indicating a structural phase transition . The authors of Ref. applied a group-theoretical analysis and demonstrated that the polarization vector of the soft optic mode in MgZn<sub>2</sub> is determined by the structure symmetry and is independent of interatomic interactions. This suggests that the soft-mode character of some vibrations is a generic property of Frank-Kasper phases. However, no soft-mode behavior was observed in the isostructural CaMg<sub>2</sub> where the mass ratio of the constituent elements is reversed with respect to MgZn<sub>2</sub>. Hafner suggested that the soft modes in MgZn<sub>2</sub> should be attributed to the relatively large mass of the Zn atoms. This is an example of the chemical composition of the materials introducing considerable difficulties into the analysis of the interplay of the structure and the dynamics. A numerical simulation of a one-component Frank-Kasper phase allows us to eliminate this uncertainty. 
We investigate the behavior of the lowest-frequency optic modes in the one-component $`\sigma `$ phase with variable pressure in Sec. IV B 3. ## III Methods of computation We have constructed a thermodynamically stable structural model of the one-component $`\sigma `$ phase by means of classical molecular dynamics simulation. In this case, success strongly depends on the choice of the interatomic potential for the model. For example, the Lennard-Jones potential, widely used in creating simple models of liquids, glasses, and crystals, is not suitable for this purpose, because the $`\sigma `$ phase is not stable for this potential in the range of thermodynamical parameters investigated below (the stable phase is the fcc lattice). Instead, we use a pair interatomic potential suggested in Ref. and show that it is possible to construct a one-component $`\sigma `$ phase which is stable for a wide range of thermodynamical parameters. ### A Model As a mathematical model for the study of atomic dynamics in the crystalline $`\sigma `$-phase structure, we consider a classical system of $`N`$ identical particles interacting via a spherically symmetric pair potential. The pair potential used in this study is designed to favor icosahedral local order. The main repulsive part of this potential is identical to that of the Lennard-Jones potential $`u_{\mathrm{LJ}}(r)=4ϵ[(\sigma /r)^{12}-(\sigma /r)^6]`$; therefore all the quantities in these simulations are expressed in reduced Lennard-Jones units, i.e. with $`\sigma `$, $`ϵ`$, and $`\tau _0=(m\sigma ^2/ϵ)^{1/2}`$ chosen as the length, energy, and time units. To convert the reduced units to physical units one can refer to argon ($`m`$ = 39.948 a.m.u.) by choosing the Lennard-Jones parameters $`\sigma =3.4`$ Å and $`ϵ/k_B=120`$ K. In this case, our frequency unit $`\nu _0=\tau _0^{-1}`$ corresponds to 0.4648 THz. The analytical expression defining the potential is given in Ref. . 
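As a quick consistency check of the unit conversion quoted above, the sketch below recomputes the frequency unit from the argon parameters ($`m`$ = 39.948 a.m.u., $`\sigma =3.4`$ Å, $`ϵ/k_B=120`$ K). It uses only standard physical constants and the numbers already given in the text.

```python
import math

# Physical constants (SI)
K_B = 1.380649e-23       # Boltzmann constant, J/K
AMU = 1.66053906660e-27  # atomic mass unit, kg

# Lennard-Jones parameters for argon, as quoted in the text
sigma = 3.4e-10          # length unit sigma, m
epsilon = 120.0 * K_B    # energy unit epsilon, J
m = 39.948 * AMU         # particle mass, kg

# Reduced time unit tau_0 = (m sigma^2 / epsilon)^(1/2)
tau_0 = math.sqrt(m * sigma**2 / epsilon)

# Frequency unit nu_0 = 1/tau_0, expressed in THz
nu_0_THz = 1.0 / tau_0 / 1e12
print(f"tau_0 = {tau_0:.4e} s, nu_0 = {nu_0_THz:.4f} THz")  # nu_0 ≈ 0.4648 THz
```

Running it reproduces the quoted frequency unit of 0.4648 THz.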
This potential resembles those for simple glass-forming metallic alloys, with only the first of the Friedel oscillations retained (see Fig. 2). We use conventional molecular dynamics simulations in which the Newtonian equations of motion are solved using a finite-difference algorithm with a time step equal to 0.01, while the particles are enclosed in a simulation box of volume $`V`$ with periodic boundary conditions. In this case, the total energy $`E`$ is a constant of motion, and time averages obtained in the course of the simulations approximate the ensemble averages in the microcanonical (constant-$`NVE`$) statistical ensemble. The wavevectors compatible with the periodic boundary conditions, $$𝐐=n_x𝐐_{x,0}+n_y𝐐_{y,0}+n_z𝐐_{z,0}$$ (1) with $`n_x`$, $`n_y`$ and $`n_z`$ integers, are multiples of the three fundamental wavevectors $`𝐐_{x,0}=\frac{2\pi }{L_x}(1,0,0)`$, $`𝐐_{y,0}=\frac{2\pi }{L_y}(0,1,0)`$ and $`𝐐_{z,0}=\frac{2\pi }{L_z}(0,0,1)`$, where $`L_x`$, $`L_y`$, and $`L_z`$ are the dimensions of the (tetragonal) simulation box. In order to have a sufficient number of allowed wavevectors within the Brillouin zone, the sample dimensions must be sufficiently large. The time-correlation functions resulting from the molecular dynamics simulation reported below were obtained for a system of 20580 particles ($`7\times 7\times 14`$ unit cells with $`30`$ atoms per unit cell). This relatively large system size also gives sufficient statistical accuracy. Where necessary, we performed simulations in other ensembles by modifying the equations of motion. The arrangement of atoms in a unit cell of the model $`\sigma `$-phase structure used in our computer simulations is shown in Fig. 1(b). The optimal (with respect to an energy minimization) $`c/a`$ ratio was taken to be equal to 0.5273. 
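The grid of allowed wavevectors, Eq. (1), is straightforward to enumerate. The sketch below is an illustration, not the authors' code; the box dimensions are reconstructed from the quoted optimal density ($`\rho =0.8771`$, 30 atoms per cell) and axial ratio ($`c/a=0.5273`$) for the $`7\times 7\times 14`$ sample.

```python
import itertools
import math

import numpy as np

# Unit-cell dimensions reconstructed from the quoted density and axial ratio
rho = 0.8771            # number density (reduced units)
c_over_a = 0.5273
a = (30.0 / (rho * c_over_a)) ** (1.0 / 3.0)   # 30 atoms per unit cell
c = c_over_a * a

# Simulation box: 7 x 7 x 14 unit cells (20580 atoms)
Lx, Ly, Lz = 7 * a, 7 * a, 14 * c

# Fundamental wavevectors Q_{x,0}, Q_{y,0}, Q_{z,0} of Eq. (1)
Q0 = 2.0 * math.pi * np.array([1.0 / Lx, 1.0 / Ly, 1.0 / Lz])

def allowed_wavevectors(n_max):
    """All Q = (n_x, n_y, n_z) * Q0 with |n_i| <= n_max, excluding Q = 0."""
    for n in itertools.product(range(-n_max, n_max + 1), repeat=3):
        if n != (0, 0, 0):
            yield np.array(n) * Q0

qs = list(allowed_wavevectors(1))
print(len(qs))  # 26 wavevectors in the first shell of integer triples
```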
To determine the optimal model structure, it was sufficient to perform molecular dynamics simulations with the number of particles in the system equal to $`N=1620`$ ($`3\times 3\times 6`$ unit cells). Details of the preparation of this atomic configuration are given in Sec. IV A. ### B Time-correlation functions A straightforward method to analyze the vibrational dynamics in a molecular dynamics model is to imitate inelastic neutron scattering experiments by calculating the dynamical structure factor $`S(𝐐,\omega )`$, proportional to the neutron scattering cross-section, which is the spectrum of the density autocorrelation function: $$F(𝐐,t)=<\rho (𝐐,t)\rho (-𝐐,0)>$$ (2) where $$\rho (𝐐,t)=\sum _{k=1}^{N}\mathrm{exp}(i𝐐\cdot 𝐫_k(t))$$ (3) is the Fourier transform of the local particle density, $`N`$ is the number of particles in the system, $`𝐫_k(t)`$ is the position vector of particle $`k`$, and the wavevector $`𝐐`$ takes on the values according to Eq. (1). A longitudinal phonon is associated with a maximum in $`S(𝐐,\omega )`$ at a fixed $`𝐐`$. In order to get information about the transverse modes from $`S(𝐐,\omega )`$, one has to select wavevectors outside the first Brillouin zone. More conveniently, the vibrational modes can be studied using the current autocorrelation function: $$C_𝐞(𝐐,t)=\frac{Q^2}{N}<j_𝐞(𝐐,t)j_𝐞(-𝐐,0)>$$ (4) where $$j_𝐞(𝐐,t)=\sum _{k=1}^{N}(𝐞\cdot 𝐯_k(t))\mathrm{exp}[i𝐐\cdot 𝐫_k(t)]$$ (5) is the Fourier transform of the local current, $`𝐯_k(t)`$ is the velocity of particle $`k`$, and $`𝐞`$ is the unit polarization vector. Note that for the longitudinal polarization, $`𝐞\parallel 𝐐`$, Eq. (4) can be obtained from (2) by double differentiation with respect to time. For the transverse-current correlation functions, the polarization vector must be chosen consistently with the lattice symmetry. 
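A minimal numerical sketch of Eqs. (2)–(3) (an illustration, not the original code): the correlation is evaluated as $`\rho (𝐐,t)\rho ^{}(𝐐,0)`$ averaged over time origins, and the synthetic "trajectory" is a static configuration, for which $`F(𝐐,t)`$ must be time independent.

```python
import numpy as np

def rho_Q(positions, Q):
    """Fourier component of the particle density, Eq. (3).
    positions: (n_frames, N, 3) trajectory; Q: (3,) wavevector."""
    return np.exp(1j * positions @ Q).sum(axis=-1)

def density_autocorrelation(positions, Q, n_lags):
    """F(Q,t) = <rho(Q,t0+t) rho*(Q,t0)> / N, averaged over time origins t0."""
    r = rho_Q(positions, Q)
    N = positions.shape[1]
    n_frames = len(r)
    F = np.empty(n_lags, dtype=complex)
    for lag in range(n_lags):
        F[lag] = np.mean(r[lag:] * np.conj(r[:n_frames - lag])) / N
    return F

# Synthetic example: a static configuration gives a time-independent F(Q,t)
rng = np.random.default_rng(0)
pos0 = rng.uniform(0.0, 10.0, size=(1, 64, 3))
traj = np.repeat(pos0, 50, axis=0)            # 50 identical frames
Q = 2.0 * np.pi / 10.0 * np.array([1.0, 0.0, 0.0])
F = density_autocorrelation(traj, Q, 10)
print(np.allclose(F, F[0]))  # True: no dynamics, so no decay
```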
At a temperature $`T`$, the vibrational density of states $`g(\omega )`$ can be calculated as the Fourier transform of the normalized velocity autocorrelation function: $$Z(t)=\frac{1}{3NT}\sum _{k=1}^{N}<𝐯_k(t)\cdot 𝐯_k(0)>$$ (6) We computed the time-correlation functions using the overlapped data collection technique. The number of overlapped measurements used for statistical averaging was about 10000. The time origins of the measurements were separated by 0.2 r.u. (20 time steps). In order to reduce the finite-time truncation effects in the spectra of the time-correlation functions, we used a Gaussian window function with a half-width equal to 3 r.u. ### C Normal-mode analysis To calculate the dispersion relations in the harmonic approximation, we used the standard method based on diagonalization of the Fourier-transformed dynamical matrix. From the known dispersion relations, $`\omega _s(𝐐)`$, $`s=1,2,\dots ,90`$, the vibrational density of states can be computed by integration over the first Brillouin zone according to $$g(\omega )=\frac{v}{r(2\pi )^3}\sum _{s=1}^{r}\int _{\mathrm{BZ}}\delta (\omega -\omega _s(𝐐))𝑑𝐐$$ (7) where the sum is over all $`r`$ dispersion branches and $`v`$ stands for the volume of the unit cell. In the computations, a Gaussian function with a small but finite width is substituted for the $`\delta `$-function. The half-width of the Gaussian function in our computations was equal to about 0.05. Formally, the wavevector $`𝐐`$ in Eq. (7) is a continuous variable, but in the simulations the integral was estimated by a sum over a uniform rectangular grid of $`100\times 100\times 100`$ points in the first Brillouin zone. ## IV Results ### A Optimization of the structure in the $`\sigma `$ phase In this subsection, we describe the method of construction of the thermodynamically stable model of the $`\sigma `$ phase and analyse the range of its stability. 
There are several ways to obtain numerical values of the atomic coordinates in the $`\sigma `$ phase. One way is to use those available for Cr<sub>48</sub>Fe<sub>52</sub>. Alternatively, a unit cell of the $`\sigma `$-phase structure can be constructed either by manipulating the kagomé tiling according to the algorithm given by Frank and Kasper in Ref. or by stacking the square and triangular basic elements of the dodecagonal quasicrystal model into the $`3^2,4,3,4`$ square-triangle net (see Fig. 1(a)). The arrangements of the atoms resulting from these constructions do not correspond exactly, although the difference is rather small – the root-mean-square distance between the corresponding atoms in different configurations is of the order of a few percent of the root-mean-square distance between different atoms in the same configuration. In either case, the resulting structure is an approximate one in the sense that the atomic positions do not correspond to a minimal potential-energy configuration for a given interaction potential. To obtain the true structure corresponding to the potential, the approximate configuration must be relaxed by a molecular dynamics program. Moreover, the $`c/a`$ ratio is slightly different for different natural $`\sigma `$-phase crystals, which means that this ratio is not uniquely defined. The atomic configuration of the $`\sigma `$ phase used in this study was prepared as follows. A sample of $`N=1620`$ particles ($`3\times 3\times 6`$ unit cells) was constructed by filling a tetragonal box of appropriate dimensions with $`\sigma `$-phase unit cells. We used the unit-cell atomic configuration suggested by Gähler with $`c/a=0.5176`$. The number density, $`\rho =N/V`$, $`V=L_xL_yL_z`$, of this atomic configuration was $`\rho =1.0048`$. 
This configuration was then used to provide the initial atomic coordinates for a variable-shape $`NST`$ (constant number of atoms, pressure tensor, and temperature) molecular dynamics run, performed at $`S=0`$ and $`T=0`$. This procedure is equivalent to a potential-energy minimization by the steepest-descent method under the condition of independent pressure balance in each spatial dimension. The variable-shape $`NST`$ run resulted in an ideal crystalline structure for which the fractional coordinates of the atoms in all unit cells were identical within the precision of the calculations. The structure of the $`\sigma `$ phase thus obtained is characterized by the minimum potential energy per atom, $`U_{\mathrm{min}}=-2.5899`$, with respect to variations of the thermodynamical parameters. In order to check this, we have performed similar $`NST`$ runs at different pressures and indeed found that the energy is minimal at zero pressure (see Fig. 3a,b). The density for the optimal structure has been found to be $`\rho =0.8771`$ and the ratio of the lattice parameters is $`c/a=0.5273`$, close to that of $`\beta `$-U ($`c/a=0.5257`$, at $`T=720^{\circ }`$C). The potential-energy minimum for the bcc structure at the same density with the same potential was $`U_{\mathrm{min}}=-2.6148`$. At zero pressure, the density of the bcc structure is $`\rho =0.8604`$ and the minimum potential energy per atom is $`U_{\mathrm{min}}=-2.6357`$, i.e. in both cases lower than for the $`\sigma `$ phase. It was shown earlier that the potential energy per atom for the $`\sigma `$ phase becomes lower than that for the bcc structure at the same density as the temperature increases. This is consistent with the fact that natural crystalline $`\sigma `$ phases are stable only at high temperatures. They undergo a solid-solid phase transition to a simpler crystalline phase as the temperature decreases. 
In our simulations, however, the system was stable in the range of temperatures $`T\lesssim 0.9`$ for as long as $`t_{\mathrm{run}}=5000`$. We have also investigated the thermodynamical stability of the $`\sigma `$ phase under variable pressure. We have found that the $`\sigma `$ phase is stable for pressures in the range $`-5\lesssim P\lesssim 12`$. At high pressures, $`P\gtrsim 12`$, a structural transformation occurs, resulting in the fcc structure. The phase diagram of the IC (icosahedral) potential is not known at present. We can expect that at densities greater than the triple-point density for the Lennard-Jones system, $`\rho \approx 0.85`$, the solid-fluid coexistence curve for the IC system is close to that for the LJ system. We can only estimate that the melting temperature at $`\rho =0.8771`$ is in the range $`0.8\lesssim T\lesssim 0.9`$ from the fact that the $`\sigma `$-phase crystalline structure is stable at $`T=0.8`$ on the time scale of our computations. No diffusion was observed at temperatures up to $`T=0.8`$. At $`T=0.9`$ the system stayed in a metastable superheated state for about $`t_{\mathrm{run}}=5000`$, after which it melted. We used the density and the $`c/a`$ ratio obtained from the $`NST`$ run to perform $`NVE`$ (constant number of atoms, volume, and total energy) molecular dynamics runs starting from the three configurations mentioned above and scaling the velocities to zero at each time step, which is also equivalent to a potential-energy minimization by the steepest-descent method. The same was done for one instantaneous configuration corresponding to the temperature $`T=0.8`$. The configurations resulting from this procedure were identical, which is an indication that there is a well-defined potential-energy minimum corresponding to a unique crystallographic arrangement of atoms within the $`\sigma `$-phase unit cell. This structure, scaled so that $`a=c=1`$, is shown in Fig. 1(b). 
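The velocity-quench procedure used above (zeroing the velocities at every step) amounts to a steepest-descent move of length $`(F/m)\mathrm{\Delta }t^2/2`$ per particle along the force. A minimal illustration on a Lennard-Jones dimer (the IC potential itself is specified only in the reference, so the familiar $`u_{\mathrm{LJ}}`$ stands in for it here):

```python
def lj_force(r, eps=1.0, sigma=1.0):
    """Force on the relative coordinate, F = -dU/dr, for u_LJ(r)."""
    sr6 = (sigma / r) ** 6
    return 24.0 * eps * (2.0 * sr6 * sr6 - sr6) / r

# Velocity quench: each step both particles move by (F/m) dt^2 / 2 and the
# velocities are reset to zero, i.e. steepest descent with step dt^2 / 2m.
dt, m = 0.01, 1.0
r = 1.5                      # initial separation (reduced units)
for _ in range(20000):
    F = lj_force(r)
    r += F / m * dt**2       # the separation changes by 2 * (F/m) * dt^2 / 2

print(r)  # converges toward the minimum of u_LJ at 2^(1/6) ≈ 1.1225
```

The same idea, applied to all 1620 coordinates with the IC potential, yields the relaxed unit-cell configuration described in the text.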
The atomic layers with $`z`$ close to 0.25 and 0.75, which are not closely packed, show a small but significant puckering – an effect present in the $`\beta `$-U and Cr<sub>48</sub>Fe<sub>52</sub> structures. ### B Vibrational dynamics Above, we have discussed the similarities in the local structure of the $`\sigma `$ phase and the IC glass. These similarities are expected to cause the vibrational spectra in these two materials also to be similar. In order to check this assumption, in this subsection we investigate the vibrational properties of the $`\sigma `$ phase and compare the vibrational spectra of this crystal and the IC glass. If the vibrational spectra are similar, the $`\sigma `$ phase can be considered to be a good crystalline reference structure for the IC glass. One consequence of these similarities is that we can use the data on the vibrational properties of the $`\sigma `$-phase crystal to explain the nature of the vibrational excitations in the corresponding amorphous structure. #### 1 Phonon dispersion in the $`\sigma `$ phase The $`\sigma `$ phase has $`30`$ atoms per unit cell, which results in $`3`$ acoustic and $`87`$ optic branches, as shown in Fig. 4. The vibrational density of states (VDOS) obtained by integration over the first Brillouin zone (see Eq. (7)) is shown on the right-hand side of Fig. 4. The linear dispersion of the acoustic branches in the low-frequency range ($`\omega \lesssim 4`$) results in the Debye law for the VDOS, $`g(\omega )=3\omega ^2/\omega _\mathrm{D}^3`$, with the Debye frequency equal to $`\omega _\mathrm{D}\approx 23.89`$. The Debye frequency has been estimated from a fit of the initial part of the VDOS by a parabolic function. The optic branches are densely distributed above the acoustic part. There are no large gaps in the spectrum, which is a consequence of tight binding and the mutual penetration of the basic structural units (Frank-Kasper polyhedra) in the $`\sigma `$ phase. 
In other words, there are no isolated structural units, like molecules in molecular crystals and crystalline fullerenes, or tetrahedra in silica, the vibrations of which form separate optic bands. At some of the zone boundaries (e.g. the X point; see Fig. 4) the dispersion curves do not show zero derivatives. This is because the space group $`P4_2/mnm`$ of the $`\sigma `$ phase is nonsymmorphic, i.e. it contains nonpoint symmetry elements involving fractional translations. #### 2 Comparison with the IC glass An informative characteristic of the vibrational dynamics in the IC glass which can be compared with the $`\sigma `$ phase is its VDOS (see Fig. 5), which can easily be obtained from the velocity autocorrelation function, Eq. (6). The VDOS for the bcc lattice is also presented for comparison in the figure (the dashed line). We can clearly see that the frequency range of the whole spectrum is the same for the $`\sigma `$ phase and the IC glass but differs for the bcc lattice. The shape of the IC-glass spectrum mainly reproduces the basic features of the $`\sigma `$-phase spectrum and can be viewed as a superposition of disorder-broadened crystalline peaks. This is a consequence of the presence of a large number of optic modes in the vibrational spectrum of the $`\sigma `$-phase structure, located in the same frequency region as the whole spectrum of the IC glass. Therefore, the similarities in the VDOS of the $`\sigma `$ phase and the IC glass strongly support the choice of the $`\sigma `$ phase as a crystalline counterpart. #### 3 Soft modes in the $`\sigma `$ phase As was mentioned in Sec. II B, an interesting feature of atomic dynamics in the Frank-Kasper phases is related to the appearance of low-frequency soft modes. We have investigated whether a soft mode appears in our model of the $`\sigma `$-phase structure. For this purpose, we followed the evolution of the vibrational spectrum with variable pressure (see Fig. 
6) and, indeed, found that one of the lowest-frequency optic modes (doubly degenerate) at the $`\mathrm{\Gamma }`$-point shows soft-mode behavior. The frequency of this mode decreases both with decreasing and with increasing pressure (see Fig. 7). The decrease of the mode frequency at negative pressures is not surprising and reflects the softening of the whole vibrational spectrum (see Fig. 6a). However, with increasing pressure, the whole spectrum is shifted to higher frequencies (see Fig. 6c), while the frequency of the soft mode (a small peak around $`\omega =3.5`$ in Fig. 6c) moves in the opposite direction, approaching zero and thus indicating a structural instability (a structural phase transition to the fcc lattice) at a critical pressure $`P_\mathrm{c}\approx 12.5`$ (see Fig. 7). Around this value of the pressure the structure of the $`\sigma `$ phase becomes extremely unstable, and an investigation of the details of the atomic motion requires a thorough analysis. We hope to address this point in another study. #### 4 Anharmonicity in the $`\sigma `$ phase One of the interesting questions concerning the vibrational dynamics of the $`\sigma `$ phase is related to the range of applicability of the harmonic approximation for the lattice vibrations. We are able to answer this question by investigating the vibrational spectrum using the velocity autocorrelation function with increasing temperature and comparing it with the results of the normal-mode analysis (harmonic approximation). To assess the degree of temperature-induced anharmonicity, we computed the dispersion relations for the symmetry direction $`[001]`$ ($`𝐐\parallel 𝐜`$, $`\mathrm{\Gamma }`$Z in Fig. 4) at different temperatures by using both these techniques (molecular dynamics and normal-mode analysis) and compared the results. These are shown in Fig. 8 for several low- and high-frequency dispersion branches for two temperatures, $`T=0.01`$ and $`T=0.8`$. 
At intermediate frequencies, the density of dispersion branches is so high that a comparison between the results of the two methods of calculation of the dispersion relations is hardly possible, mainly because of the finite width of the respective peaks in the spectra of the current autocorrelation functions (see Sec. III B). Because the $`\sigma `$-phase space group $`P4_2/mnm`$ is nonsymmorphic, the phonon-dispersion relations, derived from the peak positions in $`C_l(𝐐,\omega )`$ and $`C_t(𝐐,\omega )`$, appear in the extended zone scheme. The optic modes cannot be measured in the vicinity of the origin of the first Brillouin zone, $`Q=0`$ ($`Q=|𝐐|`$), because this long-wavelength limit corresponds to motion of the system as a whole, which is forbidden by the periodic boundary conditions. Information about these modes is available at the boundaries ($`Q=2\pi /c,6\pi /c`$) and at the origin ($`Q=4\pi /c`$) of the second extended Brillouin zone. The molecular dynamics results for the dispersion relations at $`T=0.01`$ are taken from the second extended Brillouin zone. To make the comparison with the results obtained in the harmonic approximation possible, the data in the region $`Q\in [5\pi /c,6\pi /c]`$ were folded with respect to $`Q=5\pi /c`$ into the region $`Q\in [4\pi /c,5\pi /c]`$, which corresponds to half of an irreducible zone. From these results we can see that the harmonic approximation works quite well at the low temperature of $`T=0.01`$. At $`T=0.8`$, only the acoustic branches could be resolved without ambiguity. Therefore, for this temperature, we used the data available from the first Brillouin zone. One important signature of temperature-induced anharmonicity is a softening of the acoustic modes, i.e. a lowering of the acoustic branches with respect to those calculated in the harmonic approximation, which occurs as the temperature increases. 
This effect can be clearly seen for $`T=0.8`$ in Fig. 8. In accordance with this observation, the vibrational density of states for this temperature, shown in Fig. 9, exhibits excess states with respect to the harmonic approximation. A deviation from the harmonic approximation in $`g(\omega )`$ at low frequencies (see the inset in Fig. 9) starts to be noticeable at a temperature of about $`T=0.2`$. Therefore, we can conclude that the lattice dynamics in the $`\sigma `$ phase is harmonic in a wide range of temperatures, $`T\lesssim 0.2`$. Finally, we would like to note the similarity between the high-temperature VDOS for the $`\sigma `$ phase and the low-temperature VDOS for the IC glass (see Fig. 10). Since $`g(\omega )`$ for the glass is only slightly temperature dependent, we show it only for $`T=0.01`$. The fact that the densities of states for the glass and the high-temperature $`\sigma `$ phase in Fig. 10 are remarkably similar clearly indicates that the effect of the thermally-induced dynamical disorder in the crystalline structure on the vibrational spectrum is similar to that of the configurational disorder characteristic of the amorphous structure. ## V Conclusions In this paper we have studied the structural and vibrational properties of a $`\sigma `$-phase crystal. First, we have shown that it is possible to construct a structural model of a one-component $`\sigma `$ phase by means of molecular dynamics simulations using an appropriate pair potential. This $`\sigma `$-phase structure is stable in a wide range of thermodynamical parameters. Our model of the $`\sigma `$ phase contains only one atomic component. This is important in understanding the role of topological icosahedral order alone in the structural and dynamical properties, and it avoids the effects arising from the presence of different atomic species. Second, we have investigated the atomic vibrational dynamics of the $`\sigma `$ phase. 
In particular, we have found the range of applicability of the harmonic approximation in a description of atomic dynamics. We have also demonstrated the existence of soft modes in the $`\sigma `$ phase which leads to a structural phase transformation with increasing pressure. Third, we have demonstrated that the $`\sigma `$ phase is a good crystalline counterpart of the IC glass. This has been done on the basis of a comparative analysis of the vibrational dynamics (vibrational density of states). We think that the results on the vibrational properties of the $`\sigma `$ phase discussed above can be used in an analysis of the peculiar vibrational properties of the IC glass (e.g. the Boson peak ). We also believe that the computational data of the vibrational properties of the $`\sigma `$ phase could be of value for metallurgy where this phase has received much detailed attention, chiefly because of the detrimental effect which the formation of this phase has on mechanical properties of certain steels . ## Acknowledgements S.I.S. and M.D. thank Trinity College for hospitality. We are grateful to H.R. Schober for bringing to our attention Ref. .
# Dynamics of Atom-Mediated Photon-Photon Scattering II: Experiment ## I Introduction As described in Part I, atom-mediated photon-photon scattering is the microscopic process underlying the optical Kerr nonlinearity in atomic media. The Kerr nonlinearity produces such effects as self-phase modulation, self-focusing, self-defocusing, and four-wave mixing. In an atomic medium, resonant nonlinearities can give rise to very large nonlinear optical effects, suggesting the possibility of nonlinear optical interactions with only a few photons. Unfortunately, the same resonances which could facilitate such experiments make them difficult to analyze. In Part I we showed theoretically that the photon-photon interaction is not intrinsically lossy, and can be fast on the time scale of atomic relaxation. Here we describe an experiment to directly measure the duration of the photon-photon interaction in a transparent medium. In the scattering experiment, two off-resonance laser beams collide in a rubidium vapor cell and scattering products are detected at right angles. The process of phase-matched resonance fluorescence in this geometry has been described as spontaneous four-wave mixing, a description which applies to our off-resonant excitation as well. This geometry has been of interest in quantum optics for generating phase-conjugate reflection. Elegant experiments with a barium atomic beam showed antibunching in multi-atom resonance fluorescence, but a separation of timescales was not possible since the detuning, linewidth and Doppler width were all of comparable magnitude. ## II Setup A free-running 30 mW diode laser at 780 nm was temperature stabilized and actively locked to the point of minimum fluorescence between the hyperfine-split resonances of the D2 line of rubidium. Saturation spectroscopy features could be observed using this laser, indicating a linewidth $`\delta \nu <200`$ MHz. 
This linewidth is small compared with the 1.3 GHz detuning from the nearest absorption line. Direct observation of the laser output with a fast photodiode (3 dB rolloff at 9 GHz) showed no significant modulation in the frequency band $`100`$ MHz – $`2`$ GHz. The laser beam was shaped by passage through a single-mode polarization-maintaining fiber, collimated and passed through a scattering cell to a retro-reflection mirror. The beam within the cell was linearly polarized in the vertical direction. The beam waist (at the retro-reflection mirror) was $`0.026`$ cm $`\times 0.023`$ cm (intensity FWHM, vertical $`\times `$ horizontal). The center of the cell was 1.9 cm from the retro-reflection mirror, thus within a Rayleigh range of the waist. With optimal alignment, the laser could deliver 1.95 mW to the cell, giving a maximal Rabi frequency of $`\mathrm{\Omega }_{\mathrm{Rabi}}\approx 2\times 10^9`$ s<sup>-1</sup>, significantly less than the minimal detuning of $`\delta =2\pi \times 1.3`$ GHz $`\approx 8\times 10^9`$ s<sup>-1</sup>. For this reason, we have neglected saturation of the transitions in the analysis. The retro-reflected beam returned through the fiber and was picked off by a beamsplitter. The single-mode fiber acted as a near-ideal spatial filter, and the power returned through the fiber provides a quantitative measure of the mode fidelity on passing through the rubidium cell. With optimal alignment it was possible to achieve a mode fidelity (described below) of 36%. The cell, an evacuated cuvette filled with natural-abundance rubidium vapor, was maintained at a temperature of 330 K to produce a density of about 1.6 $`\times 10^{10}`$ cm<sup>-3</sup>. Irises near the cell limited the field of view of the detectors. Stray light reaching the detectors was negligible, as were the detectors’ dark count rates of $`<100`$ cps. 
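As a numerical aside (using only the figures quoted above), the standard two-level estimate of the excited-state population, $`(\mathrm{\Omega }/2\delta )^2`$, confirms that saturation is indeed negligible at this detuning:

```python
import math

Omega = 2e9                      # peak Rabi frequency, s^-1 (from the text)
delta = 2 * math.pi * 1.3e9      # minimum angular detuning, s^-1

# Far-detuned two-level estimate of the excited-state population
excited_fraction = (Omega / (2 * delta)) ** 2
print(delta, excited_fraction)   # delta ≈ 8.2e9 s^-1, fraction ≈ 0.015
```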
With the aid of an auxiliary laser beam, two single-photon counting modules (SPCMs) were positioned to detect photons leaving the detection region in opposite directions. In particular, photons scattered at right angles to the incident beams and in the direction perpendicular to the drive-beam polarization were observed. Each detector had a 500 $`\mu `$m diameter active area and a quantum efficiency of about 70%. The detectors were at a distance of 70 cm from the center of the cell. The effective position of one detector could be scanned in two dimensions by displacing the alignment mirrors with inchworm motors. A time-to-amplitude converter and multichannel analyzer were used to record the time-delay spectrum. The system time response was measured using sub-picosecond pulses at 850 nm as an impulse source. The response was well described by a Lorentzian of width 810 ps (FWHM). Optimal alignment of the laser beam to the input fiber coupler could not always be maintained against thermal drifts in the laboratory. This affected the power of the drive beams in the cell but not their alignment or beam shape. These were preserved by the mode filtering of the fiber. Since the shape of the correlation function depends on beam shape and laser tuning but not on beam power, this reduction in drive power reduced the data rate but did not introduce errors into the correlation signal. ### A Experimental Results The time-delay spectrum of a data run of 45 hours is shown in Fig. 2. The detectors were placed to collect back-to-back scattering products to maximize the photon-photon scattering signal. A Gaussian function $`P(t_A-t_B)`$ fitted to the data has a contrast $`[P(0)-P(\mathrm{\infty })]/P(\mathrm{\infty })`$ of $`0.046\pm 0.008`$, a FWHM of $`1.3\pm 0.3`$ ns, and a center of $`0.07\pm 0.11`$ ns. This center position is consistent with zero, as one would expect from the symmetry of the scattering process. For comparison, a reference spectrum is shown. 
This was taken under the same conditions but with one detector intentionally misaligned by much more than the angular width of the scattering signal. The angular dependence of the scattering signal was investigated by acquiring time-delay spectra as a function of detector position. To avoid drifts over the week-long acquisition, the detector was scanned in a raster pattern, remaining on each point for 300 s before shifting to the next. Points were spaced at 1 mm intervals. Total live acquisition time was 9 hours per point. The aggregate time spectrum from each location was fitted to a Gaussian function with fixed width and center determined from the data of Fig. 2. The position-dependent contrast C(x,y) is shown in Fig. 3. A negative value for the contrast means that the best fit had a coincidence dip rather than a coincidence peak at zero time. These negative values are not statistically significant. Fitted to a Gaussian function, C(x,y) has a peak of 0.044 $`\pm `$ 0.010 and angular widths (FWHM) of $`1.1\pm 0.7`$ mrad and $`3.7\pm 0.4`$ mrad in the horizontal and vertical directions, respectively. These angular widths are consistent with the expected coherence of the scattering products. Seen from the detector positions, the excitation beam is narrow in the vertical direction, with a Gaussian shape of beam waist $`w_y=0.009`$ cm, but is limited in the horizontal direction only by the apertures, of size $`\mathrm{\Delta }z=0.08`$ cm. Thus we expect angular widths of $`0.9\mathrm{mrad}`$ and $`3.25\mathrm{mrad}`$, where the first describes diffraction from a hard aperture and the second diffraction of a Gaussian. ## III Comparison to theory The correlation signal predicted by the theory of Part I is shown in Fig. 4. The ideal contrast is 1.53 and the FWHM is 870 ps. The shape of the time correlations is altered by experimental limitations. First, beam distortion in passing through the cell windows reduces the photon-photon scattering signal. 
Second, finite detector response time and finite detector size act to disperse the signal. None of these effects alters the incoherent scattering background. Beam distortion is quantified by the fidelity factor introduced in Part I $$F\equiv 4\frac{\left|\int d^3xG(𝐱)H(𝐱)\right|^2}{\left[\int d^3x\left(|G(𝐱)|^2+|H(𝐱)|^2\right)\right]^2}$$ (1) The greatest contrast occurs when $`H`$ is the phase-conjugate, or time-reverse, of $`G`$, i.e., when $`H(𝐱)=G^{\ast }(𝐱)`$. In this situation $`F=1`$. Under the approximation that the field envelopes obey the paraxial wave equations $`{\displaystyle \frac{d}{dz}}G`$ $`=`$ $`{\displaystyle \frac{i}{2k}}\nabla _{\perp }^2G`$ (2) $`{\displaystyle \frac{d}{dz}}H`$ $`=`$ $`-{\displaystyle \frac{i}{2k}}\nabla _{\perp }^2H,`$ (3) Green’s theorem can be used to show that the volume integral is proportional to the mode-overlap integral $$\int d^3xG(𝐱)H(𝐱)=\mathrm{\Delta }z\int 𝑑x𝑑yG(𝐱)H(𝐱),$$ (4) where the last integration is taken at any fixed $`z`$ and $`\mathrm{\Delta }z`$ is the length of the interaction region. Similarly, the beam powers are invariant under propagation, and the mode fidelity can be expressed entirely in terms of surface integrals as $`F`$ $`=`$ $`4\left|{\displaystyle \int 𝑑x𝑑yG(𝐱)H(𝐱)}\right|^2`$ (6) $`\times \left[{\displaystyle \int 𝑑x𝑑y\left(|G(𝐱)|^2+|H(𝐱)|^2\right)}\right]^{-2}.`$ The overlap of $`G`$ and $`H`$ also determines the efficiency of coupling back into the fiber. This allows us to determine $`F`$. In terms of $`P_{\mathrm{in}}`$, the power leaving the output fiber coupler, and $`P_{\mathrm{ret}}`$, the power returned through the fiber after being retro-reflected, this is $$F=\frac{4}{(1+T^2)^2}\frac{P_{\mathrm{ret}}}{\eta TP_{\mathrm{in}}}.$$ (7) where $`\eta =0.883`$ is the intensity transmission coefficient of the fiber and coupling lenses and $`T=0.92`$ is the transmission coefficient for a single pass through a cell window. We find $`F=0.36\pm 0.03`$. 
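Eq. (7) is easy to wrap in a helper function. The power ratio used below is hypothetical, chosen merely to reproduce the quoted $`F=0.36`$, since the measured powers themselves are not listed in the text:

```python
def mode_fidelity(P_ret, P_in, eta=0.883, T=0.92):
    """Mode fidelity F from returned and input powers, Eq. (7).
    eta: fiber + coupling-lens intensity transmission;
    T: single-pass transmission of one cell window."""
    return 4.0 / (1.0 + T**2) ** 2 * P_ret / (eta * T * P_in)

# Hypothetical power ratio chosen to reproduce the quoted F = 0.36
print(round(mode_fidelity(P_ret=0.249, P_in=1.0), 2))  # 0.36
```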
The mode fidelity acts twice to reduce contrast, once as the drive beams enter the cell, and again on the photons leaving the cell. This beam distortion has no effect on the incoherent scattering background, thus the visibility is reduced by $`F^2`$. The finite time response of the detector system acts to disperse the coincidence signal over a larger time window. This reduces the maximum contrast by a factor of 0.27 and increases the temporal width to 1.62 ns. Similarly, the finite detector area reduces the maximum contrast by a factor of 0.81 and spreads the angular correlations by a small amount. The resulting coincidence signal is shown in Fig. 6. Fitted to a Gaussian, the final signal contrast is 0.042 $`\pm 0.007`$, where the uncertainty reflects the uncertainty in $`F`$. This is consistent with the observed contrast of 0.044 $`\pm `$ 0.010. ## IV Conclusion We have measured the temporal and angular correlations in photon-photon scattering mediated by atomic rubidium vapor. We found good agreement between experiment and the perturbative theory presented in Part I. The observed temporal correlations are of the order of one nanosecond, much faster than the system can relax by radiative processes. This is consistent with the prediction that the duration of the photon-photon interaction is determined by the inhomogeneous broadening of the vapor.
# EXACT SOLUTIONS OF THE DIRAC EQUATION FOR MODIFIED COULOMBIC POTENTIALS ANTONIO SOARES DE CASTRO<sup>a</sup> AND JERROLD FRANKLIN<sup>b</sup> <sup>a</sup>DFQ, UNESP, C.P. 205, 12500-000 Guaratinguetá SP, Brasil <sup>b</sup>Department of Physics, Temple University, Philadelphia, PA 19122-6082 Received February 2000 > Exact solutions are found to the Dirac equation for a combination of Lorentz scalar and vector Coulombic potentials with additional non-Coulombic parts. An appropriate linear combination of Lorentz scalar and vector non-Coulombic potentials, with the scalar part dominating, can be chosen to give exact analytic Dirac wave functions. The method works for the ground state or for the lowest orbital state with $`l=j-\frac{1}{2}`$, for any $`j`$. In a previous letter, simple exact solutions were found for the Dirac equation for the combination of a Lorentz vector Coulomb potential with a linear confining potential that was a particular combination of Lorentz scalar and vector parts. In this letter, we extend the method of Ref. to the more general case of an arbitrary combination of Lorentz scalar and vector Coulombic potentials with a particular combination of Lorentz scalar and vector non-Coulombic potentials, $`S(r)`$ and $`V(r)`$. The non-Coulombic potentials can have arbitrary radial dependence, but must be related by $$V(r)=-\frac{E}{m}S(r),$$ (1) where $`E`$ is the bound state energy for a particle of mass $`m`$. This is the same relation between the vector and scalar non-Coulombic potentials as was required in Ref. The Dirac equation we solve is $$\left[𝜶\cdot 𝐩+\beta m-\frac{(\lambda +\beta \eta )}{r}+V(r)+\beta S(r)\right]\psi =E\psi ,$$ (2) where $`𝜶`$ and $`\beta `$ are the usual Dirac matrices. The four component wave function $`\psi `$ can be written in terms of two component spinors $`u`$ and $`v`$ as $$\psi =N\left(\begin{array}{c}u\\ v\end{array}\right),$$ (3) with $`N`$ an appropriate normalization constant. 
The two component spinors $`u`$ and $`v`$ satisfy the equations $`(\sigma \cdot 𝐩)v`$ $`=`$ $`\left[E-m+{\displaystyle \frac{(\lambda +\eta )}{r}}-V(r)-S(r)\right]u`$ (4) $`(\sigma \cdot 𝐩)u`$ $`=`$ $`\left[E+m+{\displaystyle \frac{(\lambda -\eta )}{r}}-V(r)+S(r)\right]v.`$ (5) The key step in generating relatively simple exact solutions of the Dirac equation is to choose a particularly simple form for the function $`v(r)`$ $$v(r)=i\gamma (\sigma \cdot \widehat{𝐫})u(r),$$ (6) where $`\gamma `$ is a constant factor, to be determined by the solution to the Dirac equation. This is the form of $`v(r)`$ that was found in Ref. for a Coulomb plus linear confining potential. This ansatz for $`v(r)`$ has also been used as the basis for generating approximate saddle point solutions for the Dirac and Breit equations. For now, we limit our discussion to spherically symmetric states $`u(r)`$ that depend only on the radial coordinate. We will discuss orbitally excited states later in this paper. Using the form of $`v(r)`$ given by Eq. (6), equations (4) and (5) reduce to two first order ordinary differential equations for $`u(r)`$ $`{\displaystyle \frac{du}{dr}}`$ $`=`$ $`{\displaystyle \frac{1}{\gamma }}\left[E-m-V(r)-S(r)+{\displaystyle \frac{(\lambda +\eta -2\gamma )}{r}}\right]u(r)`$ (7) $`{\displaystyle \frac{du}{dr}}`$ $`=`$ $`-\gamma \left[E+m-V(r)+S(r)+{\displaystyle \frac{(\lambda -\eta )}{r}}\right]u(r).`$ (8) Equations (7) and (8) are two independent equations for the same quantity, so that each term in one equation can be equated with the corresponding term in the other equation having the same radial dependence. This leads to $$\gamma ^2=\frac{m-E}{m+E}=\frac{S(r)+V(r)}{S(r)-V(r)}=\frac{\lambda +\eta -2\gamma }{\eta -\lambda }.$$ (9) The relations in Eq. (9) can be rearranged, after some algebra, to give $$\gamma =\frac{\lambda +\eta }{1+b},$$ (10) with $$b=\pm \sqrt{1-\lambda ^2+\eta ^2}.$$ (11) The constant b can have either sign. 
Although $`b`$ must be positive in the pure Coulombic case, we will see that a negative $`b`$ is possible if the Lorentz scalar potential $`S(r)`$ is more singular at the origin than $`1/r`$. The bound state energy can be written as $$E=m\left(\frac{1-\gamma ^2}{1+\gamma ^2}\right)=m\left(\frac{b\lambda -\eta }{\lambda -b\eta }\right),$$ (12) or as $$E=-m\frac{V(r)}{S(r)}.$$ (13) The wave function $`u(r)`$ can be found by solving differential equation (8) to give $$u(r)=r^{b-1}\mathrm{exp}\left[-a\left(r+\frac{1}{m}\int S(r)𝑑r\right)\right],$$ (14) where the constant a is given by $$a=\gamma (m+E)=\pm \sqrt{m^2-E^2}.$$ (15) The constants $`\gamma `$ and $`a`$ could be negative if $`S(r)`$ approaches a large enough negative constant or diverges negatively as $`r`$ becomes infinite. The integral in equation (14) can diverge at the origin or as $`r\to \mathrm{\infty }`$, or for any finite $`r`$, as long as the quantity in square brackets in Eq. (14) remains negative. The constraint equation (13) shows that in order for this class of exact solutions to apply, the Lorentz vector and scalar non-Coulombic potentials must have the same radial dependence. As long as this constraint is satisfied, the results in Eqs. (10)-(15) represent a complete exact solution for the wave function and bound state energy of the Dirac Hamiltonian given in equation (2). We note from Eq. (12) that the energy seems to depend only on the Coulombic coupling constants $`\lambda `$ and $`\eta `$, and does not seem to depend on the non-Coulombic potentials. However, we could alternatively say from Eq. (13) that the energy depends only on the ratio of the non-Coulombic potentials $`V(r)`$ and $`S(r)`$, and does not seem to depend on the Coulombic coupling constants. Then, Eq. (12) could be considered a constraint equation on the Coulombic coupling constants. 
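The relations in Eqs. (9)–(12) can be checked numerically (a quick sketch; the sample couplings λ = 0.5, η = 0.2 are arbitrary, and we set m = 1):

```python
import math

def dirac_coulomb(lam, eta, m=1.0):
    """Ground-state parameters for mixed vector (lam) / scalar (eta)
    Coulomb couplings, following Eqs. (10)-(12) with the positive root."""
    b = math.sqrt(1.0 - lam**2 + eta**2)          # Eq. (11)
    gamma = (lam + eta) / (1.0 + b)               # Eq. (10)
    E = m * (1.0 - gamma**2) / (1.0 + gamma**2)   # Eq. (12), first form
    return b, gamma, E

b, g, E = dirac_coulomb(0.5, 0.2)
# Both forms of Eq. (12) agree:
assert abs(E - (b * 0.5 - 0.2) / (0.5 - b * 0.2)) < 1e-12
# Eq. (9): gamma^2 = (m-E)/(m+E) = (lam+eta-2*gamma)/(eta-lam):
assert abs(g**2 - (1.0 - E) / (1.0 + E)) < 1e-12
assert abs(g**2 - (0.5 + 0.2 - 2.0 * g) / (0.2 - 0.5)) < 1e-12
# Pure vector limit (eta = 0) recovers the familiar E = m*sqrt(1-lam^2):
_, _, E0 = dirac_coulomb(0.5, 0.0)
assert abs(E0 - math.sqrt(1.0 - 0.25)) < 1e-12
```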
The actual situation is that the energy does depend on all the potentials but, because of the severe constraints imposed by equations (12) and (13) taken together, the energy can be written in terms of one set of potentials or the other. Although the possibility of this class of exact solutions is limited by the constraints on the potentials, this still permits a wide range of non-Coulombic potentials. We now consider conditions imposed on the potentials and the wave function parameters by the physical requirements that the potentials be real and the wave function normalizable. We see from Eq. (9) that $`\gamma `$ must be real, and then from Eq. (10) that $`b`$ must be real. This requires the Coulombic potentials to satisfy the condition $$1-\lambda ^2+\eta ^2\ge 0.$$ (16) The reality of $`\gamma `$ restricts possible bound state energies to the range $$-m<E<m.$$ (17) Note that negative energies can occur, but $`E+m`$ cannot be negative. Also, $`E`$ cannot equal $`\pm m`$, because this would lead to a non-normalizable wave function. This condition on $`E`$, along with Eq. (13), means that $`V(r)`$ must always be less in magnitude than $`S(r)`$. We discuss the remaining conditions on the parameters in terms of three sub-classes of solution: 1. The “normal” class of solutions has $`b`$, $`\gamma `$, and $`a`$ all positive. In this case, we see from Eq. (10) that the Coulombic potentials must satisfy the further condition $$\lambda +\eta >0.$$ (18) 2. Sub-case 2 has $`b`$ negative, with $`\gamma `$ and $`a`$ still positive. The constant $`b`$ can be negative if the product $`aS(r)`$ is positively divergent at the origin faster than $`1/r`$. Then each of Eqs. (10)-(15) holds just as for positive $`b`$, and the wave function is still normalizable. Sub-case 1 with positive $`b`$ transforms smoothly into the pure Coulombic solution as the non-Coulombic potential tends to zero everywhere. But this is not true for sub-case 2 with $`b`$ negative. 
This sub-case requires the non-Coulombic potential to be dominant at the origin, and so has no corresponding pure Coulombic limit. 3. Sub-case 3 has a negative $`\gamma `$ and a negative $`a`$, while $`b`$ can have either sign, as discussed in sub-cases 1 and 2 above. A negative $`a`$ is possible if the non-Coulombic potential diverges or approaches a constant as $`r\to \mathrm{\infty }`$, so that the integral in Eq. (14) diverges faster than $`r`$ at large $`r`$. Since $`a`$ is negative, the potential $`S(r)`$ must be negative at large $`r`$. Then all of Eqs. (10)-(15) hold as for positive $`a`$, and the wave function is still normalizable. This case is highly unusual, because it allows the possibility of a potential that is negative everywhere and diverges negatively at both the origin and infinite $`r`$. We know of no other example in quantum mechanics where a potential that diverges negatively at infinity can lead to a normalizable ground state. The reason this is possible here can be seen from Eqs. (7) and (8). There it is seen that $`S(r)`$ enters the differential equations for $`u(r)`$ only in the combinations $`\gamma S(r)`$ or $`S(r)/\gamma `$. Since these effective potentials are positive, the resulting wave function is normalizable. As with sub-case 2, the case with $`\gamma `$ and $`a`$ negative cannot approach a pure Coulombic case because the non-Coulombic potential must be dominant at large r. We now look at some special cases. If the non-Coulombic potentials are absent, then the solutions are for a general linear combination of Lorentz vector and Lorentz scalar Coulombic potentials. If either constant, $`\lambda `$ or $`\eta `$, is zero, we recover the usual solutions of the Dirac equation for a pure scalar or vector Coulombic potential. The Coulombic potentials cannot both be absent (while keeping a non-Coulombic part) because then $`\gamma `$ would be zero and b one, leading to a constant, non-normalizable wave function. 
For a power law non-Coulombic potential of the form $$S(r)=-\mu (s+1)r^s,\qquad s\ne -1,$$ (19) the wave function will be given by $$u(r)=r^{b-1}\mathrm{exp}\left[-a\left(r-\frac{\mu }{m}r^{s+1}\right)\right].$$ (20) All three sub-classes are possible for this wave function, depending on the ranges of the parameters $`b`$, $`a`$, $`\mu `$, and $`s`$. The method does not work for radially excited states, because the simple ansatz of Eq. (6) for $`v(r)`$ does not lead to consistent equations for $`du/dr`$ in that case. But, the method does work for the lowest orbitally excited state for which $`l=j-\frac{1}{2}`$. This can be seen by writing the Dirac wave function in terms of radial functions $`f(r)`$ and $`g(r)`$ $$\psi =N\left(\begin{array}{c}f(r)𝒴_{j-\frac{1}{2}}^j\\ ig(r)𝒴_{j+\frac{1}{2}}^j\end{array}\right),$$ (21) where $`𝒴_l^j`$ is a two component angular spinor function corresponding to total angular momentum $`j`$ and orbital momentum $`l`$. The radial functions satisfy the equations $`\left({\displaystyle \frac{d}{dr}}+{\displaystyle \frac{1-\kappa }{r}}\right)f`$ $`=`$ $`-\left[E+m+{\displaystyle \frac{(\lambda -\eta )}{r}}+S(r)-V(r)\right]g`$ (22) $`\left({\displaystyle \frac{d}{dr}}+{\displaystyle \frac{1+\kappa }{r}}\right)g`$ $`=`$ $`\left[E-m+{\displaystyle \frac{(\lambda +\eta )}{r}}-S(r)-V(r)\right]f,`$ (23) where $`\kappa =j+\frac{1}{2}`$ is the principal quantum number of the state. 
For the radial function $`g(r)`$, we make the ansatz $$g(r)=\gamma f(r).$$ (24) Then, equations (22) and (23) reduce to two first order differential equations for $`f(r)`$ $`{\displaystyle \frac{df}{dr}}`$ $`=`$ $`-\gamma \left[E+m-V(r)+S(r)+{\displaystyle \frac{(\lambda -\eta )}{r}}-{\displaystyle \frac{(\kappa -1)}{\gamma r}}\right]f(r)`$ (25) $`{\displaystyle \frac{df}{dr}}`$ $`=`$ $`{\displaystyle \frac{1}{\gamma }}\left[E-m-V(r)-S(r)+{\displaystyle \frac{(\lambda +\eta )}{r}}-{\displaystyle \frac{\gamma (\kappa +1)}{r}}\right]f(r).`$ (26) Equating the corresponding terms having the same radial dependence in these two equations results in $$\gamma ^2=\frac{m-E}{m+E}=\frac{S(r)+V(r)}{S(r)-V(r)}=\frac{\lambda +\eta -2\gamma \kappa }{\eta -\lambda }.$$ (27) Equation (27) is the same as Eq. (9) with the replacements $$\lambda \to \lambda /\kappa ,\qquad \eta \to \eta /\kappa .$$ (28) Thus, equations (10)-(13), and (15) hold for the orbitally excited case, with the replacements $`\lambda \to \lambda /\kappa `$ and $`\eta \to \eta /\kappa `$. The radial wave function $`f(r)`$ is given by $$f(r)=r^{\kappa b-1}\mathrm{exp}\left[-a\left(r+\frac{1}{m}\int S(r)𝑑r\right)\right].$$ (29) Note that the constant $`b`$ used in this paper differs from the $`b`$ defined by Eq. (22) of Ref. 1. The $`b`$ in this paper is the $`b`$ in Ref. 1 divided by $`\kappa `$. In summary, we have presented a class of potentials for which the Dirac equation has relatively simple exact analytical solutions for the ground state of each angular momentum state with $`l=j-\frac{1}{2}`$.
# An extended multi-zone model for the MCG-6-30-15 warm absorber ## 1 INTRODUCTION X-ray absorption by partially ionized, optically thin material along the line of sight to the central engine, the so-called warm absorber, is one of the prominent features in the X-ray spectrum of many AGN . The presence of such gas was first postulated in order to explain the unusual form of the X-ray spectrum of QSO MR 2251-178 . A direct confirmation of the existence of circumnuclear ionized matter came from ASCA, which for the first time was able to resolve the OVII and OVIII absorption edges (at rest energies of 0.74 and 0.87 keV , respectively) in the X-ray spectra of the Seyfert 1 galaxy MCG$``$6-30-15 . Systematic studies of warm absorbers in Seyfert 1 galaxies with ASCA have shown their ubiquity, being detected in half of the sources . Warm absorbers were usually described by single-zone, photoionization equilibrium models. Under this assumption, simple variability patterns were expected: increasing ionization of the matter when the source brightens. This was not found by Otani et al. during their long-look ASCA observation of the nearby (z=0.008) Seyfert 1 galaxy MCG$``$6-30-15. The source varied with large amplitude on short time scales ($`10^4`$ s or so). The depth of the OVII edge, on the contrary, stayed almost constant, while that of the OVIII edge was anticorrelated with the flux. To explain the behaviour of the OVII and OVIII edges, the authors adopted a multizone model in which the OVII and OVIII edges originate from spatially distinct regions. OVII ions in the region responsible for the OVII edge, the outer absorber, have a long recombination timescale (i.e. weeks or more), whereas the OVIII edge arises from more highly ionized material, the inner absorber, in which most oxygen is fully stripped. The recombination timescale for the OIX ions in the inner absorber is of the order of $`10^4`$ s or less. 
A decrease in the primary ionizing flux is then accompanied by the recombination of OIX to OVIII, giving the observed variation in the OVIII edge depth. Orr et al. raised the possibility of a more complex warm absorber. During their MCG$``$6-30-15 BeppoSAX observation the depth of the OVII edge was marginally consistent with being constant, whereas the optical depth for OVIII, $`\tau (OVIII)`$, exhibited significant variability. The authors claimed that its large value during epoch 1<sup>1</sup><sup>1</sup>1The epochs in the BeppoSAX observation are numbered chronologically (i.e. number 1 corresponds to the first epoch of the observation, etc.). ($`1.7\pm 0.5`$, $`1\sigma `$ uncertainty) was inconsistent with the values at all other epochs. They also pointed out that the value of $`\tau (OVIII)`$ during epoch 5 (a low luminosity state) did not show the expected anticorrelation with the ionizing flux . In both Otani et al. and Orr et al., $`\tau (OVIII)`$ was plotted versus count rate. In order to compare the ASCA and BeppoSAX observations, these two observational results for $`\tau (OVIII)`$ have been plotted versus luminosity<sup>2</sup><sup>2</sup>2The conversion from count rate to luminosity has been obtained using $`H_0=50`$ $`\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$, $`q_0=0`$ and PIMMS (http://heasarc.gsfc.nasa.gov/Tools/w3pimms.html). in the 0.1-10 keV band in figure 1 (instead of versus count rate). The conversion from count rate to luminosity is different for each instrument and therefore the comparison cannot be made using count rate, but luminosity. The only discrepancy between the two sets of data is the result for epoch 1 (labelled 1 in figure 1)<sup>3</sup><sup>3</sup>3This epoch was at the very beginning of the BeppoSAX observation. In a private communication, A. 
Orr notes that this high value for the optical depth for OVIII could not be due to instrumental effects, since no similar behaviour, either at the beginning or during the observation, has been observed in any other BeppoSAX target yet. Note also that there is no disagreement at all with any other point (Orr et al. mentioned epoch 5 as problematic, but in this plot it appears consistent with the rest of the Otani et al. data). In this paper we present a simple photoionization model that accounts for the experimental results of both ASCA and BeppoSAX observations. Section 2 describes the code used to model the inner warm absorber. The application to the MCG$``$6-30-15 warm absorber is presented in Section 3. The extension of the multi-zone model for the warm absorber (i.e. the third component) is addressed in Section 4. Finally our conclusions are discussed in Section 5. ## 2 TIME DEPENDENT PHOTOIONIZATION CODE In order to reproduce both ASCA and BeppoSAX observations, the state of the inner absorber has been modelled using a time dependent photoionization code for oxygen . This code treats the material in the inner absorber as containing only ions of oxygen in an otherwise completely ionized hydrogen plasma. A given oxygen ion is assumed to be in one of the 9 states corresponding to its ionization level. Different excitation levels of a given ionization state are not treated. The total abundance of oxygen is fixed at $`7.41\times 10^{-4}`$ relative to hydrogen. Both ions and electrons are assumed to be in local thermal equilibrium (LTE) with a common temperature T and the plasma is assumed to be strictly optically-thin at all frequencies (i.e. radiative transfer processes are not considered). A point source of ionizing radiation is placed at a distance $`R`$ from this material with a frequency dependent luminosity $`L_\nu `$ (total luminosity $`L`$). 
In the calculations presented here, the ionizing continuum is taken to be a power-law with a photon index $`\mathrm{\Gamma }=2`$ extending from $`\nu _{min}`$ to $`\nu _{max}`$: i.e. we take $$L_\nu =\frac{L}{\mathrm{\Lambda }\nu }$$ (1) where $`\mathrm{\Lambda }=\mathrm{ln}(\nu _{max}/\nu _{min})`$. The upper and lower cutoff frequencies are chosen such that $`h\nu _{max}=100`$ keV , and $`\mathrm{\Lambda }=10`$, corresponding to $`h\nu _{min}\sim 4.5`$ eV . The physical processes included in the code are: photoionization, Auger ionization, collisional ionization, radiative recombination, dielectronic recombination, bremsstrahlung, Compton scattering and collisionally-excited O$`\lambda `$1035 resonance line cooling. Given these processes, the ionization and thermal structure of the plasma is evolved from an initial state assumed to have a temperature of $`T=10^5`$ K , and equal ionization fractions in each state. The ionization structure of the oxygen is governed by 9 equations of the form: $$\frac{dn_i}{dt}=\sum _{prod.}\delta _{prod.}-\sum _{dest.}\delta _{dest.}$$ (2) where $`n_i`$ is the number density of state-i, $`\delta `$ stands for the ionization/recombination rate per unit volume, the first summation on the right hand side (RHS) is over all ionization and recombination mechanisms that produce state-i and the second summation is over all such mechanisms that destroy state-i. The thermal evolution of the plasma is determined by the local heating/cooling rate and the macroscopic constraint. The net heating rate per unit volume is given by: $$\frac{dQ}{dt}=\sum _{heat}\mathrm{\Delta }_{heat}-\sum _{cool}\mathrm{\Delta }_{cool}$$ (3) where $`\mathrm{\Delta }`$ stands for the heating/cooling rate per unit volume, the first summation on the RHS is over all heating processes and the second term is over all cooling processes. 
For a constant density plasma, we have $`du=dQ`$ where $`u`$ is the internal energy per unit volume and is given by: $$u=\frac{3}{2}n_ek_BT.$$ (4) Thus, the rate of change of temperature is related to the heating/cooling rate via: $$\frac{dT}{dt}=\frac{2}{3n_ek_B}\frac{dQ}{dt}.$$ (5) The state of the plasma is evolved using the ionization equations represented by (2) and the energy equation (3). A simple step-by-step integration of these differential equations is performed. The time step for the integration process is allowed to dynamically change and is set to be $`0.1\times t_{sh}`$ where $`t_{sh}`$ is the shortest relevant timescale<sup>4</sup><sup>4</sup>4At each step the ionization, collisional and recombination timescales of each oxygen ionization state are calculated. The shortest of these timescales is taken to be $`t_{sh}`$.. In the case of a constant ionizing luminosity, it is found that the system always evolves to an equilibrium state, as expected. The model has been compared against the photoionization code cloudy (version 9004, ) in the case of a plasma in photoionization equilibrium. Figures 2 and 3 report this comparison. There is a qualitative agreement between the model (solid line) and cloudy (dashed line) in figure 2 for the case of a pure hydrogen/oxygen plasma. The main goal of this paper is to reproduce the time variability of the oxygen edges, and this figure shows how our model mirrors quite well the behavior of this element. 
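The step-by-step integration scheme described above can be illustrated with a toy two-state version (our sketch, not the actual code used here): a single ion stage with a constant ionization rate and recombination rate, stepped with $`\mathrm{\Delta }t=0.1\times t_{sh}`$.

```python
# Toy single-stage analogue of the scheme: ionized fraction f_ion evolves
# under a photoionization rate G (s^-1) and a recombination rate rec (s^-1).
# The rate values are illustrative, not taken from the paper.
def evolve(f_ion=0.5, G=1e-3, rec=1e-4, t_end=2e4):
    t = 0.0
    while t < t_end:
        t_sh = 1.0 / max(G, rec)       # shortest relevant timescale
        dt = 0.1 * t_sh                # dynamic step, 0.1 * t_sh as in the text
        dfdt = G * (1.0 - f_ion) - rec * f_ion  # production minus destruction
        f_ion += dfdt * dt
        t += dt
    return f_ion

# With a constant "luminosity" the system relaxes to the equilibrium
# fraction G/(G+rec), as stated in the text:
f = evolve()
print(f)  # ~0.909
```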
cloudy includes many more physical processes and realistic elemental abundances and, as seen in figure 3, the agreement is not so good when more elements are considered<sup>5</sup><sup>5</sup>5Realistic elemental abundances can be important both for the ionization structure and for the heating/cooling rate of the plasma, and since features of the stability curve result from peaks in the heating and cooling functions (which are in turn associated with particular ionic species), the comparison presented in figure 2 is expected to present a much better agreement. The consideration of realistic abundances is beyond the scope of this paper and is currently being studied.. We have also investigated the reaction of a constant density plasma to an inverted top hat function for the light-curve (i.e. initially $`T=10^5`$ K , equal ionization fractions in each state, the luminosity is $`L=3\times 10^{43}`$ $`\mathrm{erg}\mathrm{s}^{-1}`$ and held constant. The system is allowed to achieve an equilibrium at an ionization parameter $`\xi =\frac{L}{nR^2}=75`$ $`\mathrm{erg}\mathrm{cm}\mathrm{s}^{-1}`$, where $`n`$ is the density of the warm plasma and $`R`$ is the distance of the slab of the plasma from the ionizing source of radiation with isotropic ionizing luminosity $`L`$. The luminosity is then halved for a short period, $`\mathrm{\Delta }t`$, and then brought back to its initial value). During these changes, the state of the plasma is recorded as a function of time. This choice of $`\xi `$ places the inner warm absorber in a region where $`f_{O8}\propto L^{-1}`$ (i.e. in this regime $`f_{O8}`$ is expected to be inversely proportional to $`\xi `$ or equivalently to $`L`$). The investigation has been carried out for different values of $`n`$ and $`R`$ keeping $`n>2\times 10^7`$ $`\mathrm{cm}^{-3}`$ and $`R<1.4\times 10^{17}`$ cm (i.e. the constraints given by Otani et al. for the inner warm absorber). Fig. 4 shows an example of the results obtained. 
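As a quick sanity check, the quoted bounds on $`n`$ and $`R`$ are consistent with the chosen equilibrium ionization parameter (a sketch):

```python
def xi(L, n, R):
    """Ionization parameter xi = L / (n R^2), in erg cm s^-1."""
    return L / (n * R**2)

# Evaluating at the boundary values of the inner-absorber constraints
# (n = 2e7 cm^-3, R = 1.4e17 cm) and L = 3e43 erg s^-1 gives a value
# close to the xi = 75 erg cm s^-1 used for the equilibrium state:
x = xi(3e43, 2e7, 1.4e17)
print(x)  # ~76.5
```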
As expected, $`f_{O8}`$ increases when $`L`$ is halved, showing a more noticeable increment when $`L`$ is halved over a longer period. ## 3 APPLICATION TO THE MCG$``$6-30-15 WARM ABSORBER To reproduce the observational data using our model, it is required to obtain the value of the optical depth for OVIII once the ionization fraction $`f_{O8}`$ has been calculated by the code. Considering the cross section for OVIII, $`\sigma _8`$, as constant along the line-of-sight and using the fact that the material contains only ions of oxygen in an otherwise completely ionized plasma, the optical depth for OVIII, $`\tau (OVIII)`$, can be written as: $$\tau (OVIII)=\frac{\sigma _8f_{O8}\mathrm{\Delta }Rn_e}{\frac{1}{Abun}+\sum _{i=1}^9f_{Oi}\times (i-1)}$$ (6) where $`\sigma _8=10^{-19}\mathrm{cm}^2`$ , $`\mathrm{\Delta }R`$ is the line-of-sight distance through the ionized plasma, $`n_e`$ is the electron density, $`Abun`$ is the oxygen abundance relative to hydrogen and $`f_{Oi}`$ is the ionization fraction for the i-th ionization species of oxygen. The light-curve used to reproduce the ASCA observation is given by Otani et al. . For the BeppoSAX observation, the light-curve has been defined as a step function with a constant value for the luminosity $`L`$ over the different time periods given by Orr et al. . The constant value for $`L`$ over each period is chosen to be that given in Orr et al. (1997, Fig. 3) plus a period of $`2\times 10^4`$ s previous to epoch 1 with $`L=1.8\times 10^{43}`$ $`\mathrm{erg}\mathrm{s}^{-1}`$ (a standard value for the luminosity during the observation). Different parameters for the warm absorber have been investigated using the constraints given by Otani et al. . With the only exception of point 1 (see figure 1), a general good agreement for all other experimental points is found. Following a suggestion by Orr et al. 
, we have modified the light-curve for the BeppoSAX observation including a previous epoch to it for which the luminosity is much lower (the range of values used is $`10^{40}-10^{43}`$ $`\mathrm{erg}\mathrm{s}^{-1}`$)<sup>6</sup><sup>6</sup>6An example of a Seyfert 1 galaxy exhibiting an unusual low flux state (a decrease of more than one order of magnitude in luminosity) is presented in Uttley et al. .. The reason for using this very low value for the luminosity is that in a regime for which $`f_{O8}\propto L^{-1}`$, the highest values of $`f_{O8}`$ are expected for low values of $`L`$. However, the suggested explanation does not reproduce the high value of $`\tau (OVIII)`$ for epoch 1 in the BeppoSAX observation. Even for those values of $`\xi `$ that give a maximum for $`f_{O8}`$ (i.e. $`\xi \sim 50`$ $`\mathrm{erg}\mathrm{cm}\mathrm{s}^{-1}`$), the ionization fraction for O(VIII) is still too low to account for the high $`\tau (OVIII)`$ value at epoch 1. ## 4 A THIRD WARM ABSORBER COMPONENT The model we propose to explain both ASCA and BeppoSAX observations incorporates a new zone for the warm absorber. Let warm absorber 1 $`\equiv `$ WA1 be the inner warm absorber, whose parameters are: $`R<1.4\times 10^{17}`$ cm , $`n>2\times 10^7`$ $`\mathrm{cm}^{-3}`$ and $`\mathrm{\Delta }R\sim 10^{14}`$ cm . The outer warm absorber will be warm absorber 2 $`\equiv `$ WA2 with parameters $`R>3\times 10^{18}`$ cm , $`n<2\times 10^5`$ $`\mathrm{cm}^{-3}`$ and $`\mathrm{\Delta }R\sim 10^{14}`$ cm . In our model a warm absorber 3 $`\equiv `$ WA3 is situated between WA1 and WA2. The WA3 radius and density will have values between those of WA1 and WA2. Therefore, while WA1 and WA2 respond on timescales of hours and weeks respectively, WA3 is expected to respond to variations in the ionizing flux on timescales of days. WA3 would have an ionization parameter $`\xi `$ of the order of 500 $`\mathrm{erg}\mathrm{cm}\mathrm{s}^{-1}`$ for $`L=3\times 10^{43}`$ $`\mathrm{erg}\mathrm{s}^{-1}`$. 
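Since $`\xi `$ scales linearly with $`L`$, the two WA3 regimes follow directly; the $`n`$ and $`R`$ values below are assumed mid-range numbers chosen for illustration only, not parameters fitted in this paper:

```python
def xi(L, n, R):
    """Ionization parameter xi = L / (n R^2), in erg cm s^-1."""
    return L / (n * R**2)

# Illustrative (assumed) WA3 parameters between the WA1 and WA2 zones:
n_wa3, R_wa3 = 1e6, 2.45e17  # cm^-3, cm

hi = xi(3e43, n_wa3, R_wa3)    # ordinary flux: oxygen nearly fully stripped
lo = xi(0.4e43, n_wa3, R_wa3)  # low state preceding epoch 1
print(hi, lo)  # ~500 and ~67 erg cm s^-1
```

The same drop in $`L`$ therefore moves WA3 from $`\xi \sim 500`$ down to a few tens of erg cm s$`{}^{-1}`$, near the regime where $`f_{O8}`$ peaks.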
Hence the ionization fraction for this value of $`L`$ is too low to be detected. Only when the luminosity of the source is sufficiently low (i.e. $`\xi \sim 50`$ $`\mathrm{erg}\mathrm{cm}\mathrm{s}^{-1}`$), WA3 reveals its presence by contributing to the total optical depth for OVIII. The medium between WA1 and WA2 would be constituted by a continuum of warm absorbers: clouds with different densities situated at different radii. Only some of them happen to be at the right radius and to have the right densities to absorb efficiently at the OVIII edge energy (i.e. most of them are undetectable). In Baldwin et al. a model for the BLR is presented, in which individual BLR clouds can be thought of as machines for reprocessing radiation. As long as there are enough clouds at the correct radius and with the correct gas density to efficiently form a given line, the line will be formed with a relative strength which turns out to be very similar to the one observed. Similarly in our model, only WA1 and WA2 are detectable for ordinary values of the ionizing flux. Only in the case of a state of low luminosity will WA3 be unmasked. Other zones, as yet unseen, may be present. Assuming then the presence of WA3 and also an epoch of low luminosity previous to the BeppoSAX observation, the expected WA3 behaviour would be: i) when $`L\sim 0.4\times 10^{43}`$ $`\mathrm{erg}\mathrm{s}^{-1}`$ (previous to epoch 1), the ionization parameter $`\xi \sim 50`$ $`\mathrm{erg}\mathrm{cm}\mathrm{s}^{-1}`$, giving a high value for $`f_{O8}`$. This period lasts for approximately $`10^5`$ s , so the plasma has time to recombine. ii) when $`L\sim (1-5)\times 10^{43}`$ $`\mathrm{erg}\mathrm{s}^{-1}`$ (i.e. during the observation), then $`\xi \sim 150-750`$ $`\mathrm{erg}\mathrm{cm}\mathrm{s}^{-1}`$. For these high values of $`\xi `$, oxygen is practically fully stripped and therefore there is a very small contribution to the optical depth for OVIII. 
The range of parameters investigated for WA3 is $`R=(2-8)\times 10^{17}`$ cm, $`n=5\times 10^5-10^7`$ $`\mathrm{cm}^{-3}`$ and $`\mathrm{\Delta }R`$ in the interval that gives a column density for WA3 approximately equal to $`3\times 10^{22}`$ $`\mathrm{cm}^{-2}`$. After taking into account the soft X-ray absorption due to WA1, the response of WA3 has been calculated and an example of the results obtained is presented in figures 5 and 6, where the general good agreement is also extended to point 1. WA3 has also been modelled using cloudy and we found a drop in the transmitted portion of the incident continuum at $`\sim 7.8`$ keV , of approximately $`1\%`$ (i.e. undetectable for current instruments). The coronal lines strength has also been checked using cloudy<sup>7</sup><sup>7</sup>7See Sect. 3.3 for a discussion of the state of the coronal lines atomic data used in cloudy.. The ratios of the modelled to observed fluxes for WA3 are all below $`0.1\times f_c`$, where $`f_c`$ is the covering fraction<sup>8</sup><sup>8</sup>8Porquet et al. give some restrictions on the density of the WA in order to avoid producing coronal line equivalent widths larger than observed. Although WA3 presents no problems at all for any $`f_c`$, WA2 does, unless a low value of $`f_c`$ is considered. This possibility is currently being investigated.. Finally, the contribution to $`\tau `$(OVII) from each component in our model has been calculated and, as expected, we found that WA2 is mainly responsible for the OVII edge (its contribution accounts for 96% of the total value of $`\tau `$(OVII)). ## 5 DISCUSSION Warm absorbers have been the subject of extensive studies during the last decade. Such regions are not spatially resolved, and all the available information about their geometry is obtained from analysis of the variability of the oxygen edges. 
The explanation we offer for the time variability of the MCG$`-`$6-30-15 warm absorber during both the ASCA and BeppoSAX observations does not invoke complex processes, but a very simple photoionization model together with the presence of a multi-zone warm absorber. This would be constituted by a continuum of clouds at different radii and with different densities, such that only some of them contribute to the total optical depth for OVIII, depending on the value of the luminosity. As a final remark, note that in our model WA3 is much more volume filling than WA1 ($`\mathrm{\Delta }R/R\simeq 0.1`$ for WA3 while $`\mathrm{\Delta }R/R\simeq 10^{-3}`$ for WA1). This suggests that WA3 could be part of the inter-cloud medium of WA1. ## 6 ACKNOWLEDGMENTS This work has been supported by PPARC and Trinity College (R.M.) and by the Royal Society (A.C.F.). C.S.R. acknowledges support from Hubble Fellowship grant HF-01113.01-98A. This grant was awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA under contract NAS 5-26555. C.S.R. also acknowledges support from NASA under LTSA grant NAG5-6337.
no-problem/0002/cond-mat0002047.html
ar5iv
text
# Phase Transition in a Traffic Model with Passing ## I Introduction Traffic flows on single-lane roads with no passing exhibit clustering, since queues of fast cars accumulate behind slow cars. These clusters form and grow even when the car density is small. The initial analysis of cluster formation was carried out in the early days of traffic theory, and this subject has continued to grow ever since. If passing is introduced, the clusters may stop growing after reaching a certain size. Indeed, previous work indicated that after a transient regime a steady state is reached. The models of Refs. assume that any car in a cluster can pass the leading car and that the passing rate is independent of the location of the car within the cluster. This is certainly an oversimplification of everyday traffic scenarios. The complementary case, when only the next-to-leading cars can pass, is also an idealization, yet it is closer to reality. Below we show that the latter model is also richer phenomenologically, as it undergoes a dynamical phase transition. We first comment on possible theoretical approaches. A mean-field theory is the primary candidate, and we believe that it may be very good, perhaps even exact, since clustering and passing mix positions and velocities of the cars. The Boltzmann equation approach is an appropriate mean-field scheme, and in our earlier work we indeed used it. However, the present model, where only the next-to-leading car is allowed to pass, is significantly more difficult than the model where passing was possible for all cars. Indeed, it appears impossible even to write down closed Boltzmann equations for distribution functions like $`P(v,t)`$ and $`P_m(v,t)`$, the density of all clusters moving with velocity $`v`$, and the density of clusters of $`m`$ cars, respectively. Therefore our theoretical analysis is performed in the framework of the Maxwell approach. 
This scheme simplifies “collision” terms by replacing the actual collision rates, which are proportional to the velocity difference of the collision partners, by constants. Despite this essentially uncontrolled approximation, the Maxwell approximation is very popular in kinetic theory and it has already been used in traffic modeling. The important feature of our model is quenched disorder, which manifests itself in the random assignment of intrinsic velocities. Road conditions (construction zones, turns, hills, etc.) present another source of quenched randomness in real driving situations, which is ignored in our model. Quenched disorder significantly affects the characteristics of many-particle systems, especially in low spatial dimensions. This general conclusion applies to the present one-dimensional traffic model, as we shall show below. ## II Maxwell approximation We now formally define the model. Free cars move with quenched intrinsic velocities randomly assigned from some distribution $`P_0(v)`$. When a car or a cluster encounters a slower one, it assumes its velocity and a larger cluster is formed. In every cluster, the next-to-leading car is allowed to pass and resume driving with its intrinsic velocity. The rate of passing is assumed to be constant. Thus clusters move and aggregate deterministically, while passing is stochastic. The system is initialized by randomly placing single cars and assigning them uncorrelated intrinsic velocities. 
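The model just defined is straightforward to simulate directly. The sketch below (plain Python, not taken from the original work; all parameter values are illustrative) represents each cluster as a position plus an ordered queue of intrinsic velocities, merges a cluster into the one ahead when it would overtake it during a time step, and lets the next-to-leading car escape at rate $`\gamma `$. A car that passes while being slower than its old leader is simply recaptured as a new leader, which is one possible reading of the passing rule.

```python
import random
from collections import deque

random.seed(0)

N, L = 200, 200.0              # cars and ring length (car density 1)
gamma, dt, T = 2.0, 0.02, 100.0

# A cluster is [position, queue of intrinsic velocities]; the leading
# car is queue[0] and sets the cluster speed.
positions = sorted(random.uniform(0, L) for _ in range(N))
clusters = [[x, deque([random.expovariate(1.0)])] for x in positions]

t = 0.0
while t < T:
    clusters.sort(key=lambda c: c[0])
    # Merge any cluster that would overtake the one ahead in this step.
    i = 0
    while len(clusters) > 1 and i < len(clusters):
        rear = clusters[i]
        front = clusters[(i + 1) % len(clusters)]
        gap = (front[0] - rear[0]) % L
        if (rear[1][0] - front[1][0]) * dt >= gap:
            front[1].extend(rear[1])   # rear cars queue behind the front cluster
            clusters.pop(i)
        else:
            i += 1
    # Passing: the next-to-leading car escapes at rate gamma and
    # restarts just ahead of its cluster with its intrinsic velocity.
    escaped = []
    for c in clusters:
        if len(c[1]) > 1 and random.random() < gamma * dt:
            v = c[1][1]
            del c[1][1]
            escaped.append([(c[0] + 1e-6) % L, deque([v])])
    clusters.extend(escaped)
    # Advance every cluster at its leader's velocity.
    for c in clusters:
        c[0] = (c[0] + c[1][0] * dt) % L
    t += dt

c_density = len(clusters) / L   # cluster density after relaxation
print(round(c_density, 2))
```

Such a direct simulation conserves the number of cars exactly and can be used to measure the cluster density and size distribution discussed below.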
Within the Maxwell approach, the joint size-velocity distribution function (the density of clusters of size $`m`$ moving with velocity $`v`$) $`P_m(v,t)`$ obeys $`{\displaystyle \frac{\partial P_m(v,t)}{\partial t}}`$ $`=`$ $`\gamma (1-\delta _{m,1})[P_{m+1}(v,t)-P_m(v,t)]`$ (1) $`+`$ $`\gamma \delta _{m,1}[N(v,t)+P_2(v,t)]-c(t)P_m(v,t)`$ (2) $`+`$ $`{\displaystyle \int _v^{\infty }}dv^{}{\displaystyle \underset{i+j=m}{\sum }}P_i(v^{},t)P_j(v,t).`$ (3) Here $`\gamma `$ is the passing rate, so terms proportional to $`\gamma `$ account for escape, while the rest describes clustering. The escape terms are the same within the Boltzmann and Maxwell approaches, and they are actually exact. The collision terms are mean-field by nature, and they are different in the Boltzmann and Maxwell approaches. For instance, in the Boltzmann case, the integral term must involve the factor $`v^{}-v`$. Eqs. (1) also contain $`c(t)`$, the total cluster density $`c(t)={\displaystyle \underset{j\ge 1}{\sum }}{\displaystyle \int _0^{\infty }}dvP_j(v,t),`$ (4) and $`N(v,t)`$, the density of clusters in which the next-to-leading car has intrinsic velocity $`v`$. This $`N(v,t)`$ is the main source of difficulty, since it cannot be expressed through $`P_j(v,t)`$. One might try to close Eqs. (1) by introducing $`F_k(v,v^{},t)`$, the density of clusters moving with the velocity $`v^{}`$ whose $`k^{\mathrm{th}}`$ car has intrinsic velocity $`v`$. Clearly, $`N(v,t)=\int _0^vdv^{}F_2(v,v^{},t)`$, and it appears that the equations for $`F_k(v,v^{},t)`$ are closed. A more careful look, however, reveals that the governing equation for $`F_2(v,v^{},t)`$ includes three-velocity correlators. Thus, at first sight, the Boltzmann and Maxwell approaches appear to be equally incapable of providing closed equations for the joint size-velocity distribution function. Still, the Maxwell framework has the advantage that it does provide a closed description on the level of the cluster size distribution. Indeed, integrating Eqs. 
(1) over velocity and defining $`P_m(t)\equiv \int _0^{\infty }dvP_m(v,t)`$, we find that the cluster size distribution $`P_m(t)`$ obeys $`{\displaystyle \frac{dP_m}{dt}}=\gamma [P_{m+1}-P_m]-cP_m+{\displaystyle \frac{1}{2}}{\displaystyle \underset{i+j=m}{\sum }}P_iP_j`$ (5) for $`m\ge 2`$, and $`{\displaystyle \frac{dP_1}{dt}}=\gamma [P_2-P_1+c]-cP_1.`$ (6) Besides this formal derivation of Eqs. (5)–(6) by direct integration of Eqs. (1), it is possible to obtain these equations by enumerating all possible ways in which clusters evolve. For instance, consider Eq. (6). Collisions reduce the density of single cars, and the collision rate is clearly equal to $`c(t)`$, as it is velocity-independent in the framework of the Maxwell approach. The escape term in Eq. (6) is understood by observing that the rate of return of single cars into the system is equal to $`\gamma \left[2P_2+{\displaystyle \underset{j\ge 3}{\sum }}P_j\right]=\gamma \left[P_2-P_1+c\right].`$ Here $`P_2(t)`$ is singled out since passing transforms a two-car cluster into two single cars, while an escape from larger clusters produces only one freely moving car. Eqs. (5)–(6) are closed. Mathematically similar equations were investigated previously in the context of the aggregation-fragmentation model. Therefore, we merely present the essential steps of the analysis. Restricting ourselves to the steady state and introducing the notation $`P_m=\gamma F_m`$, $`c_{\infty }=\gamma F`$, we recast Eqs. (5)–(6) into $$FF_m=F_{m+1}-F_m+\delta _{m,1}F+\frac{1}{2}\underset{i+j=m}{\sum }F_iF_j.$$ (7) These equations should be solved together with the constraints $`\sum _{m\ge 1}P_m=c_{\infty }`$ and $`\sum _{m\ge 1}mP_m=1`$, i.e., $$\underset{m\ge 1}{\sum }F_m=F,\qquad \underset{m\ge 1}{\sum }mF_m=\gamma ^{-1}.$$ (8) Note that the sum $`\sum _{m\ge 1}mP_m(t)`$ is obviously constant due to car conservation. The constant is equal to the initial concentration $`c_0`$, as cars were initially unclustered. Here and below we always choose $`c_0=1`$. As in Ref. 
, we introduce the generating function $$\mathcal{F}(z)=\underset{m\ge 1}{\sum }(z^m-1)F_m.$$ (9) This generating function obeys $$\frac{1}{2}\mathcal{F}^2+\frac{1-z}{z}\mathcal{F}+\frac{(1-z)^2}{z}F=0,$$ (10) with the solution $$\mathcal{F}(z)=\frac{z-1}{z}\left\{1-\sqrt{1-2zF}\right\}.$$ (11) The steady state solution (11) exists only when the generating function is real for all $`0\le z\le 1`$. Hence, we require that $`2F\le 1`$. Assuming that this condition is satisfied, we expand the generating function in powers of $`z`$ to obtain the steady state concentrations: $$F_m=\frac{(2F)^m}{2\sqrt{\pi }}\left\{\frac{\mathrm{\Gamma }\left(m-\frac{1}{2}\right)}{\mathrm{\Gamma }(m+1)}-2F\frac{\mathrm{\Gamma }\left(m+\frac{1}{2}\right)}{\mathrm{\Gamma }(m+2)}\right\}.$$ (12) This solution is still incomplete, as we have not yet determined $`F`$. To find $`F`$ we use the sum rules (8). The first sum rule is manifestly obeyed, while the second sum rule yields $`\sum mF_m=d\mathcal{F}/dz|_{z=1}=1-\sqrt{1-2F}=\gamma ^{-1}`$. Thus, $`F=\frac{2\gamma -1}{2\gamma ^2}`$, which translates into $`c_{\infty }=1-1/2\gamma `$. The steady state solution (12) exists for sufficiently high passing rates, $`\gamma \ge \gamma _c=1`$. For $`\gamma >1`$ and large $`m`$, the steady state size distribution simplifies to $$P_m\simeq Cm^{-3/2}\left[1-\left(1-\gamma ^{-1}\right)^2\right]^m,$$ (13) with $`C=(4\pi )^{-1/2}\gamma ^{-1}(\gamma -1)^2`$. Apart from a power-law prefactor, the size distribution exhibits an exponential decay, $`P_m\sim e^{-m/m^{}}`$, in the large size limit. The characteristic size diverges, $`m^{}\simeq (\gamma -1)^{-2}`$, as the passing rate approaches the critical value $`\gamma _c=1`$. In the critical case, the size distribution has a power-law form $$F_m=\frac{3}{4\sqrt{\pi }}\frac{\mathrm{\Gamma }\left(m-\frac{1}{2}\right)}{\mathrm{\Gamma }(m+2)}\simeq \frac{3}{4\sqrt{\pi }}m^{-5/2}.$$ (14) Let the passing rate now drop below the critical value ($`\gamma <\gamma _c`$). Since $`F`$ cannot grow beyond $`F_c=1/2`$, it stays constant. Therefore, $`F_m`$ is given by the same Eq. 
(14) as in the critical case, and the cluster size distribution reads $`P_m=\gamma F_m`$. This implies $`c_{\infty }=\gamma /2`$, i.e., the first sum rule $`\sum P_m=c_{\infty }`$ is valid. The second sum rule is formally violated: $`\sum mP_m=\gamma <1`$, i.e. the cluster size distribution (14) accounts for only a fraction $`\gamma `$ of all the cars present in the system. The only possible explanation is the formation of an infinite cluster that contains all the excess cars. The second sum rule then shows that a fraction $`1-\gamma `$ of all the cars in the system are in this infinite cluster. Thus, within the framework of the Maxwell approximation, our traffic model displays a phase transition which separates the disordered and jammed phases. The steady state cluster concentration depends differently on the passing rate in these two phases: $$c_{\infty }=\{\begin{array}{cc}1-1/2\gamma ,\hfill & \gamma >1\text{;}\hfill \\ \gamma /2,\hfill & \gamma <1\text{.}\hfill \end{array}$$ (15) In the disordered phase, the size distribution decays exponentially in the large size limit. In the jammed phase, $`P_m`$ has a power law tail and in addition there is an infinite cluster which contains the following fraction of cars: $$I=\{\begin{array}{cc}0,\hfill & \gamma >1\text{;}\hfill \\ 1-\gamma ,\hfill & \gamma <1\text{.}\hfill \end{array}$$ (16) This phase transition is similar to phase transitions in driven diffusive systems without passing and to phase transitions in aggregation-fragmentation models. Also, the mechanism of the formation of the infinite cluster has a strong formal analogy to Bose-Einstein condensation. Turning back to the joint size-velocity distribution (1), we note that the lack of an exact expression for $`N(v)`$ in terms of $`P_m(v)`$ does not mean the lack of a mean-field relation between these quantities. 
Indeed, the density $`N(v)`$ of clusters in which the next-to-leading car has intrinsic velocity $`v`$ can be written as $$N(v)=\int _0^vdv^{}\underset{j\ge 2}{\sum }P_j(v^{})\frac{C(v)}{\int _{v^{}}^{\infty }dv^{\prime \prime }C(v^{\prime \prime })}.$$ (17) Here $`\sum _{j\ge 2}P_j(v^{})`$ is the density of “true” clusters (i.e., freely moving cars are excluded) moving with velocity $`v^{}`$. Then, $`C(v)=P_0(v)-P(v)`$ is the density of cars with intrinsic velocity $`v`$ which are currently slowed down, i.e., they are neither single cars nor cluster leaders. Assuming that the velocities of cars inside clusters are perfectly mixed, $`C(v)/\int _{v^{}}^{\infty }dv^{\prime \prime }C(v^{\prime \prime })`$ gives the probability density that the next-to-leading car in a true $`v^{}`$-cluster has the velocity $`v`$. The product form of Eq. (17) reveals its mean-field nature, which is consistent with the spirit of our theoretical approach. One can verify that Eq. (17) agrees with the sum rule $`\int dvN(v)=\sum _{j\ge 2}P_j`$, thus providing a useful check of self-consistency. Although Eqs. (1) with $`N(v)`$ given by (17) seem very complex even in the steady-state regime, several conclusions can be derived without obtaining their complete solution. We first simplify Eqs. (1) by introducing the auxiliary functions $$Q_m(v)=\int _v^{\infty }dv^{}P_m(v^{}).$$ (18) By inserting $`P_m=-\frac{dQ_m}{dv}`$ into Eqs. (1), integrating the resulting equations over $`v`$, and using the boundary conditions $`Q_m(v=\infty )=0`$, we find $`\gamma \left[Q_{m+1}(v)-Q_m(v)\right]`$ $`-`$ $`cQ_m(v)+{\displaystyle \frac{1}{2}}{\displaystyle \underset{i+j=m}{\sum }}Q_i(v)Q_j(v)`$ (19) $`=`$ $`-\delta _{m1}\gamma q(v),`$ (20) with $`q(v)=Q_1(v)+{\displaystyle \int _v^{\infty }}dv^{}N(v^{}).`$ (21) Eqs. (19) are almost identical to Eqs. (5)–(6); the velocity enters just as a parameter. 
Consequently, we anticipate qualitatively similar results, $`Q_m(v)\sim m^{-3/2}e^{-m/m^{}}`$, and $$P_m(v)\sim m^{-1/2}e^{-m/m^{}},$$ (22) with the characteristic size $`m^{}(v,\gamma )`$ dependent on both the velocity and the passing rate. Our more rigorous generating function analysis, performed along the lines described above, confirms the asymptotic form (22). ## III Simulations Let us now examine which of the conclusions obtained within the Maxwell approach remain relevant for the original model. We first re-derive the condition for the phase transition in the complete velocity-dependent form. Let us consider a system of reference with the origin moving with the slowest car. We assume that the system is sufficiently large for the slowest car to have negligible velocity. We compare the total flux of cars clustering behind this slowest car, $`\sum mP_mv_m`$, to the rate of escape, $`\gamma `$. Here $`v_m`$ is the average velocity of a cluster of size $`m`$. When the rate of escape becomes less than the rate of accumulation of the cars, the cluster behind the slowest car (the analog of the “infinite cluster” for finite systems) grows to remove the excess cars from the system. Hence, the phase transition point $`\stackrel{~}{\gamma }_c`$ is defined by $$\underset{m\ge 1}{\sum }mP_mv_m=\stackrel{~}{\gamma }_c.$$ (23) For the Maxwell model, where $`v_m=1`$ for all $`m`$, Eq. (23) reduces to $`\sum mP_m=\gamma _c=1`$, as obtained above. Since large clusters usually form behind slow cars, $`v_m`$ is a decreasing function of the cluster size $`m`$. In particular, $`v_m`$ is always smaller than the average car velocity, implying $`\stackrel{~}{\gamma }_c<1`$. For a rough estimate of $`v_m`$, consider a cluster of $`m`$ cars and assume that the intrinsic velocities of the cars in the cluster are independent. 
The leading car has the minimal velocity, so the size-velocity distribution reads $$P_m(v)\simeq mP_0(v)\left[\int _v^{\infty }dv^{}P_0(v^{})\right]^{m-1}P_m.$$ (24) For concreteness, let us consider intrinsic velocity distributions which behave algebraically near the lower cutoff, $`P_0(v)\sim v^\mu `$ as $`v\to 0`$. Then for large clusters we get $$P_m(v)\sim P_m\mathrm{exp}\left(-mv^{\mu +1}\right).$$ (25) This implies that the average cluster velocity $`v_m`$ scales with $`m`$ according to $`v_m\sim m^{-1/(\mu +1)}`$, and hence $`\stackrel{~}{\gamma }_c\sim \sum m^{\mu /(\mu +1)}P_m`$. We conclude that the phase transition does exist in the original model, although its location is shifted towards a lower passing rate compared to the Maxwell model prediction. This shift is especially significant for small $`\mu `$ ($`\mu >-1`$ from the normalization requirement). To check the relevance of other predictions of the Maxwell approach, we performed molecular dynamics simulations. We place $`N=20000`$ single cars onto a ring of length $`L=N`$, so that the average car density is equal to one. Initial positions and velocities of cars were assigned randomly. We considered linear $`P_0(v)=\frac{8}{9}v`$ ($`0<v<3/2`$), exponential $`P_0(v)=e^{-v}`$, and $`P_0(v)=(2\pi v)^{-1/2}e^{-v/2}`$ velocity distributions, which correspond to $`\mu =1,0,-1/2`$ for the small-$`v`$ asymptotics. All three distributions have an average velocity equal to one. In Fig. 1, we plot $`\mathrm{ln}[m^{3/2}P_m]`$ vs. $`m`$ for the above three velocity distributions. We take $`\gamma =1`$, which, as we concluded before, lies above the phase transition point $`\stackrel{~}{\gamma }_c`$. We expect the system to be in the disordered phase with $`P_m`$ given by Eq. (13). 
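Before turning to the comparison, the Maxwell-model benchmark itself can be verified numerically: for any $`\gamma >1`$ the steady-state solution (12) should satisfy both sum rules (8). A short sketch in plain Python (not part of the original simulations; the log-Gamma form is used only to avoid overflow at large $`m`$):

```python
from math import exp, lgamma, log, pi, sqrt

def F_m(m, F):
    """Reduced steady-state cluster density of Eq. (12), valid for 2F <= 1."""
    pref = exp(m * log(2.0 * F)) / (2.0 * sqrt(pi))
    a = exp(lgamma(m - 0.5) - lgamma(m + 1.0))
    b = exp(lgamma(m + 0.5) - lgamma(m + 2.0))
    return pref * (a - 2.0 * F * b)

g = 2.0                               # passing rate, above gamma_c = 1
F = (2.0 * g - 1.0) / (2.0 * g * g)   # fixed by the second sum rule

s0 = sum(F_m(m, F) for m in range(1, 500))      # should equal F
s1 = sum(m * F_m(m, F) for m in range(1, 500))  # should equal 1/gamma
print(round(s0, 8), F)       # -> 0.375
print(round(s1, 8), 1 / g)   # -> 0.5
```

Both truncated sums converge geometrically (the terms decay as $`(2F)^m`$), so the sum rules are reproduced to high accuracy.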
For the exponential and $`P_0(v)=(2\pi v)^{-1/2}e^{-v/2}`$ intrinsic velocity distributions, there is good agreement with the prediction of the Maxwell model (13); for the linear velocity distribution, there are some deviations for small $`m`$, but for large $`m`$ the agreement is satisfactory. The slopes of the plots decrease with $`\mu `$. Taking into account that at the point of the phase transition the slope equals zero, this qualitatively confirms that $`\stackrel{~}{\gamma }_c`$ gets smaller when $`\mu `$ decreases. Plots of $`P_m`$ vs. $`m`$ for the intrinsic velocity distributions $`P_0(v)=e^{-v}`$ and $`P_0(v)=(2\pi v)^{-1/2}e^{-v/2}`$, with passing rate $`\gamma =0.005`$ well below the phase transition point, are shown in Fig. 2a and Fig. 2b, respectively. The cluster size distribution clearly consists of two regions: an almost power-law tail for smaller $`m`$ and several separate peaks for larger $`m`$. These peaks correspond to the fluctuating size of the infinite cluster, while the power-law tail describes the regular part of $`P_m`$. The apparent exponent $`\tau `$ of the power-law region $`P_m\sim m^{-\tau }`$ varies slightly for different passing rates and $`P_0(v)`$, though it remains confined between 3/2 and 2. It is definitely different from the value 5/2 predicted by the Maxwell model (14). The measured values of $`\tau `$ would make the total number of cars in the system divergent, $`\sum mP_m\to \infty `$, so the power-law region ends with an exponential cutoff at large $`m`$. We now comment on the relationship of our model to earlier work. On the mean-field level, our model is similar to the models of Refs.. On the level of the process, our model is reminiscent of an asymmetric conserved-mass aggregation model (ASCMAM) where clusters undergo asymmetric diffusion, aggregation upon contact, and chipping (single-particle dissociation). Of course, our model is a continuum model while the ASCMAM is a lattice model. 
A more substantial difference between the two models lies in the nature of the randomness – in our model intrinsic velocities are quenched random variables, while in the ASCMAM the dynamics is the only source of randomness. Nevertheless, the phenomenology of the two models appears to be quite similar. In particular, the ASCMAM undergoes a phase transition, and in the jammed phase the cluster size distribution exhibits a power law decay with an exponent close to 2. We should stress that in the jammed phase we have not reached a scale-free critical state, which must have an exponent $`\tau >2`$. Perhaps quenched randomness does not allow the system to organize itself into a truly normalizable critical state. Another possible explanation relies on the large fluctuations typical of disordered systems, i.e., our system was not large enough to ensure self-averaging. ## IV Conclusion In this paper, we have investigated a model of traffic that involves clustering and passing of the next-to-leading car. Despite the fact that it is one of the simplest (if not the simplest) possible continuum models of one-lane traffic with passing, the model has a rich kinetic behavior. Depending on the passing rate $`\gamma `$, the system organizes itself either into a disordered phase, where the density of large clusters is exponentially suppressed, or into a jammed phase, where the cluster size distribution becomes independent of $`\gamma `$ and an infinite cluster is formed. Within the framework of the Maxwell approach, which plays the role of the mean-field theory in the present context, we have shown that the model admits an analytical solution. We have argued that the Maxwell approach correctly predicts the existence of the phase transition and adequately describes the properties of the disordered phase which arises when the passing rate is high. 
For the jammed phase, the Maxwell approach correctly predicts that the system stores the excess cars in the infinite cluster and organizes itself into some kind of critical state. However, the Maxwell approach cannot quantitatively describe other properties of the jammed phase. It would be interesting to design a more accurate theoretical approach which would allow one to probe the characteristics of the low passing rate regime analytically. Some properties of the jammed state appear similar to those of the jammed state of the lattice model of Ref. , which includes asymmetric lattice diffusion, aggregation, and fragmentation. It would be interesting to gain a deeper understanding of the relationship between these models, and of whether quenched disorder is the main source of the difference. We are grateful to E. Ben-Naim and S. Redner for discussions, and to NSF and ARO for support of this work.
no-problem/0002/astro-ph0002352.html
ar5iv
text
# Self-regulated hydrodynamical process in halo stars: a possible explanation of the lithium plateau ## References

Bonifacio, P., & Molaro, P. 1997, MNRAS, 285, 847
Maeder, A., & Zahn, J.-P. 1998, A&A, 334, 1000
Mestel, L. 1953, MNRAS, 113, 716
Molaro, P. 1999, preprint
Richard, O. 1999, PhD Thesis, University of Toulouse
Spite, M., & Spite, F. 1982, A&A, 115, 357
Spite, F., Francois, P., Nissen, P.E., & Spite, M. 1996, ApJ, 408, 262
Vauclair, S. 1999, A&A, 351, 973
Vauclair, S. 2000, this conference
Von Zeipel, H. 1924, MNRAS, 84, 665
Zahn, J.-P. 1992, A&A, 265, 115
no-problem/0002/hep-ph0002234.html
ar5iv
text
# An updated analysis of 𝜀'/𝜀 in the standard model with hadronic matrix elements from the chiral quark model ## I Introduction The violation of $`CP`$ symmetry in the kaon system (for two recent textbooks on the subject see ref. ) is parameterized in terms of the ratios $$\eta _{00}\equiv \frac{\langle \pi ^0\pi ^0|\mathcal{L}_W|K_L\rangle }{\langle \pi ^0\pi ^0|\mathcal{L}_W|K_S\rangle }\quad \text{and}\quad \eta _{+-}\equiv \frac{\langle \pi ^+\pi ^{-}|\mathcal{L}_W|K_L\rangle }{\langle \pi ^+\pi ^{-}|\mathcal{L}_W|K_S\rangle }.$$ (1) Eqs. (1) can be written as $`\eta _{00}`$ $`=`$ $`\epsilon -{\displaystyle \frac{2\epsilon ^{}}{1-\omega \sqrt{2}}}\simeq \epsilon -2\epsilon ^{},`$ (2) $`\eta _{+-}`$ $`=`$ $`\epsilon +{\displaystyle \frac{\epsilon ^{}}{1+\omega /\sqrt{2}}}\simeq \epsilon +\epsilon ^{},`$ (3) where $`\omega =A_2/A_0`$ is the ratio between the isospin $`I=2`$ and 0 components of the $`K\to \pi \pi `$ amplitudes, the anomalous smallness of which is known as the $`\mathrm{\Delta }I=1/2`$ selection rule . The complex parameters $`\epsilon `$ and $`\epsilon ^{}`$ are introduced to quantify, respectively, indirect (via $`K_L`$-$`K_S`$ mixing) and direct (in the $`K_L`$ and $`K_S`$ decays) $`CP`$ violation. They are measurable quantities, and $`\epsilon `$ has been known to be non-vanishing since 1964 . The $`\mathrm{\Delta }S=1`$ effective lagrangian $`\mathcal{L}_W`$ is given by $$\mathcal{L}_W=\underset{i}{\sum }C_i(\mu )Q_i(\mu ),$$ (4) where $$C_i(\mu )=\frac{G_F}{\sqrt{2}}V_{ud}V_{us}^{*}\left[z_i(\mu )+\tau y_i(\mu )\right].$$ (5) In (5), $`G_F`$ is the Fermi coupling, the functions $`z_i(\mu )`$ and $`y_i(\mu )`$ are the Wilson coefficients and $`V_{ij}`$ the Cabibbo-Kobayashi-Maskawa (CKM) matrix elements; $`\tau =-V_{td}V_{ts}^{*}/V_{ud}V_{us}^{*}`$. According to the standard parameterization of the CKM matrix, in order to determine $`\epsilon ^{}/\epsilon `$, we only need to consider the $`y_i(\mu )`$ components, which control the $`CP`$-violating part of the lagrangian. 
The coefficients $`y_i(\mu )`$ and $`z_i(\mu )`$ contain all the dependence on short-distance physics; they depend on the $`t,W,b,c`$ masses, the intrinsic QCD scale $`\mathrm{\Lambda }_{\mathrm{QCD}}`$, the $`\gamma _5`$-scheme used in the regularization and the renormalization scale $`\mu `$. The $`Q_i`$ in eq. (4) are the effective four-quark operators obtained in the standard model by integrating out the vector bosons and the heavy quarks $`t,b`$ and $`c`$. A convenient and by now standard basis includes the following ten operators: $$\begin{array}{ccc}\hfill Q_1& =& \left(\overline{s}_\alpha u_\beta \right)_{\mathrm{V}-\mathrm{A}}\left(\overline{u}_\beta d_\alpha \right)_{\mathrm{V}-\mathrm{A}},\hfill \\ \hfill Q_2& =& \left(\overline{s}u\right)_{\mathrm{V}-\mathrm{A}}\left(\overline{u}d\right)_{\mathrm{V}-\mathrm{A}},\hfill \\ \hfill Q_{3,5}& =& \left(\overline{s}d\right)_{\mathrm{V}-\mathrm{A}}\underset{q}{\sum }\left(\overline{q}q\right)_{\mathrm{V}\mp \mathrm{A}},\hfill \\ \hfill Q_{4,6}& =& \left(\overline{s}_\alpha d_\beta \right)_{\mathrm{V}-\mathrm{A}}\underset{q}{\sum }(\overline{q}_\beta q_\alpha )_{\mathrm{V}\mp \mathrm{A}},\hfill \\ \hfill Q_{7,9}& =& \frac{3}{2}\left(\overline{s}d\right)_{\mathrm{V}-\mathrm{A}}\underset{q}{\sum }\widehat{e}_q\left(\overline{q}q\right)_{\mathrm{V}\pm \mathrm{A}},\hfill \\ \hfill Q_{8,10}& =& \frac{3}{2}\left(\overline{s}_\alpha d_\beta \right)_{\mathrm{V}-\mathrm{A}}\underset{q}{\sum }\widehat{e}_q(\overline{q}_\beta q_\alpha )_{\mathrm{V}\pm \mathrm{A}},\hfill \end{array}$$ (6) where $`\alpha `$, $`\beta `$ denote color indices ($`\alpha ,\beta =1,\mathrm{},N_c`$) and $`\widehat{e}_q`$ are the quark charges ($`\widehat{e}_u=2/3`$, $`\widehat{e}_d=\widehat{e}_s=-1/3`$). Color indices for the color singlet operators are omitted. The labels $`(V\pm A)`$ refer to the Dirac structure $`\gamma _\mu (1\pm \gamma _5)`$. The various operators originate from different diagrams of the fundamental theory. At the tree level, we only have the current-current operator $`Q_2`$, induced by $`W`$-exchange. 
Switching on QCD, the gluonic correction to tree-level $`W`$-exchange induces $`Q_1`$. Furthermore, QCD induces the gluon penguin operators $`Q_{3\text{–}6}`$. The penguin diagrams induce different operators because of the splitting of the quark color (vector) current into a right- and a left-handed part and the presence of color octet and singlet currents. Electroweak penguin diagrams, where the exchanged gluon is replaced by a photon or a $`Z`$-boson, and box-like diagrams induce $`Q_{7,9}`$ and also a part of $`Q_3`$. The operators $`Q_{8,10}`$ are induced by the QCD renormalization of the electroweak penguin operators $`Q_{7,9}`$. Even though the operators in eq. (6) are not all independent, this basis is of particular interest for any numerical analysis because it has been extensively used for the calculation of the Wilson coefficients to next-to-leading order (NLO) in $`\alpha _s`$ and $`\alpha _e`$ , in different renormalization schemes. In the standard model, $`\epsilon ^{}`$ can in principle be different from zero because the $`3\times 3`$ CKM matrix $`V_{ij}`$, which appears in the weak charged currents of the quark mass eigenstates, is in general complex. On the other hand, in other models, like the superweak theory , the only source of $`CP`$ violation resides in the $`K^0`$-$`\overline{K}^0`$ mixing, and $`\epsilon ^{}`$ vanishes. It is therefore of great importance for the discussion of the theoretical implications within the standard model and beyond to establish the experimental evidence for, and the precise value of, $`\epsilon ^{}`$. The ratio $`\epsilon ^{}/\epsilon `$ (for a review see, e.g., ) is computed as $$\text{Re}\epsilon ^{}/\epsilon =e^{i\varphi }\frac{G_F\omega }{2\left|ϵ\right|\text{Re}A_0}\text{Im}\lambda _t\left[\mathrm{\Pi }_0-\frac{1}{\omega }\mathrm{\Pi }_2\right],$$ (7) where the CKM combination $`\lambda _t=V_{td}V_{ts}^{*}`$ and, referring to the $`\mathrm{\Delta }S=1`$ quark lagrangian of eq. 
(4), $`\mathrm{\Pi }_0`$ $`=`$ $`{\displaystyle \frac{1}{\mathrm{cos}\delta _0}}{\displaystyle \underset{i}{\sum }}y_i\text{Re}\langle Q_i\rangle _0(1-\mathrm{\Omega }_{\eta +\eta ^{}}),`$ (8) $`\mathrm{\Pi }_2`$ $`=`$ $`{\displaystyle \frac{1}{\mathrm{cos}\delta _2}}{\displaystyle \underset{i}{\sum }}y_i\text{Re}\langle Q_i\rangle _2,`$ (9) and $`\langle Q_i\rangle _I=\langle 2\pi ,I|Q_i|K\rangle `$. We take the phase $`\varphi =\pi /2+\delta _0-\delta _2-\theta _ϵ=(0\pm 4)^{\circ }`$ as vanishing , and we assume everywhere that $`CPT`$ is conserved. Therefore, $`\text{Re}\epsilon ^{}/\epsilon =\epsilon ^{}/\epsilon `$. Notice the explicit presence of the final-state-interaction (FSI) phases $`\delta _I`$ in eqs. (8) and (9). Their presence is a consequence of writing the absolute values of the amplitudes in terms of their dispersive parts. Theoretically, given that in eq. (5) $`|\tau |\ll 1`$, we obtain $$\mathrm{tan}\delta _I\simeq \frac{\underset{i}{\sum }z_i\text{Im}\langle Q_i\rangle _I}{\underset{i}{\sum }z_i\text{Re}\langle Q_i\rangle _I}.$$ (10) Finally, $`\mathrm{\Omega }_{\eta +\eta ^{}}`$ is the isospin breaking (for $`m_u\ne m_d`$) contribution of the mixing of $`\pi `$ with $`\eta `$ and $`\eta ^{}`$. ### A Preliminary remarks The experiments of the early 1990’s could not establish the existence of direct $`CP`$ violation because they agreed only marginally (one of them being consistent with zero ) and did not have the required accuracy. During 1999, as we shall briefly recall in the next section, the preliminary analyses of the new runs of experiments have settled on a range of values consistent with the previous NA31 result, conclusively excluding a vanishing $`\epsilon ^{}`$. On the other hand, in order to assess the value precisely we must wait for the completion of the data analysis, which will improve the present accuracy by a factor of 2–3. On the theoretical side, progress has been slow as well, because of the intrinsic difficulty of a computation that spans energy scales as different as the pion and the top quark masses. 
Nevertheless, the estimates available before 1999 pointed to a non-vanishing and positive value, with one of them being in the ballpark of the present experimental result. We revise our 1997 estimate of $`\epsilon ^{}/\epsilon `$ by updating the values of the short-distance input parameters, among which the improved determination of the relevant CKM entries, and by including the Gaussian sampling of the experimental input data. We also update the values and ranges of the “long-distance” model parameters (quark and gluon condensates, and constituent quark mass), by including a larger theoretical “systematic” error ($`\pm 30\%`$) in the fit of the $`CP`$-conserving $`K\to \pi \pi `$ amplitudes in order to better account for the error related to the truncation of the chiral expansion. For the sake of comparison with other approaches, we give our results in terms of the so-called $`B_i`$-parameters, in two $`\gamma _5`$ regularization schemes: ’t Hooft-Veltman (HV) and Naive Dimensional Regularization (NDR). The combined effect of the new ranges of the input parameters and of $`\text{Im}\lambda _t`$ makes the central value of $`\epsilon ^{}/\epsilon `$ slightly higher than before, while the statistical analysis of the input parameters reduces the final uncertainty. For a more conservative assessment of the error, we give the full range of uncertainty obtained by a flat scan of the allowed ranges of the input parameters. The result is numerically stable as we vary the renormalization scale and scheme. We conclude by briefly reviewing other estimates of $`\epsilon ^{}`$ published in the last year. ## II The experimental status Experimentally the ratio $`\epsilon ^{}/\epsilon `$ is extracted, by collecting $`K_L`$ and $`K_S`$ decays into pairs of $`\pi ^0`$ and $`\pi ^\pm `$, from the relation $$\left|\frac{\eta _{+-}}{\eta _{00}}\right|^2\simeq 1+6\text{Re}\frac{\epsilon ^{}}{\epsilon },$$ (11) and the determination of the ratios $`\eta _{+-}`$ and $`\eta _{00}`$ given in (1). 
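The linearized relation (11) follows directly from the approximate forms (2)–(3) and is easy to verify numerically. The sketch below (plain Python, not from the original analysis) uses illustrative magnitudes for $`\epsilon `$ and $`\epsilon ^{}/\epsilon `$ and takes $`\epsilon ^{}`$ in phase with $`\epsilon `$, so that $`\epsilon ^{}/\epsilon `$ is real:

```python
import cmath

eps = 2.28e-3 * cmath.exp(1j * cmath.pi * 43.5 / 180)  # illustrative |eps| and phase
r = 1.9e-3                 # illustrative Re eps'/eps
eps_prime = r * eps        # eps' taken in phase with eps

eta_pm = eps + eps_prime       # Eq. (3), to leading order in omega
eta_00 = eps - 2 * eps_prime   # Eq. (2)

lhs = abs(eta_pm / eta_00) ** 2
rhs = 1 + 6 * r
print(lhs, rhs)   # agree up to O((eps'/eps)^2) ~ 1e-5
```

The two sides differ only at second order in $`\epsilon ^{}/\epsilon `$, which is far below the experimental sensitivity.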
The announcement last year of the preliminary result from the KTeV collaboration (FNAL) $$\text{Re}\epsilon ^{}/\epsilon =(2.8\pm 0.41)\times 10^{-3},$$ (12) based on data collected in 1996-97, and the present result from the NA48 collaboration (CERN), $$\text{Re}\epsilon ^{}/\epsilon =(1.4\pm 0.43)\times 10^{-3},$$ (13) based on data collected in 1997 and 1998 , settle the long-standing issue of the presence of direct $`CP`$ violation in kaon decays. However, a clear-cut determination of the actual value of $`\epsilon ^{}`$ at the precision of a few parts in $`10^4`$ must wait for further statistics and scrutiny of the experimental systematics. By computing the average among the two 1992 experiments (NA31 and E731 ) and the KTeV and NA48 data we obtain $$\text{Re}\epsilon ^{}/\epsilon =(1.9\pm 0.46)\times 10^{-3},$$ (14) where the error has been inflated according to the Particle Data Group procedure ($`\sigma \to \sigma \times \sqrt{\chi ^2/3}`$), to be used when averaging over experimental data—in our case four sets—with substantially different central values. The value in eq. (14) can be considered the current experimental result. Such a result will be tested within the next year by the full data analysis from KTeV and NA48 and (hopefully) the first data from KLOE at DA$`\mathrm{\Phi }`$NE (Frascati); at that time, the experimental uncertainty will be reduced to a few parts in $`10^4`$. The most important outcome of the 1999 results is that direct $`CP`$ violation has been unambiguously observed and that the superweak scenario, in which $`\epsilon ^{}`$ = 0, can be excluded to a high degree of confidence (more than 4 $`\sigma `$’s). ## III Hadronic matrix elements In the present analysis, we use hadronic matrix elements for all the relevant operators $`Q_1`$–$`Q_{10}`$ and a parameter $`\widehat{B}_K`$ computed in the $`\chi `$QM at $`O(p^4)`$ in the chiral expansion . 
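Before turning to the hadronic matrix elements, note that the average in eq. (14) can be reproduced directly from the four measurements entering it (a Python sketch; the NA31 and E731 central values, $`(2.30\pm 0.65)\times 10^{-3}`$ and $`(0.74\pm 0.59)\times 10^{-3}`$, are the published 1992–93 results, quoted here since the text does not repeat them):

```python
import math

# Inverse-variance weighted average of Re(eps'/eps), in units of 1e-3,
# with the PDG scale factor sigma -> sigma*sqrt(chi2/(N-1)) applied
# because the four data sets have substantially different central values.
data = [
    (2.30, 0.65),  # NA31
    (0.74, 0.59),  # E731
    (2.80, 0.41),  # KTeV (1999, preliminary)
    (1.40, 0.43),  # NA48 (1999)
]
weights = [1.0 / s**2 for _, s in data]
mean = sum(w * x for (x, _), w in zip(data, weights)) / sum(weights)
sigma = 1.0 / math.sqrt(sum(weights))
chi2 = sum(w * (x - mean) ** 2 for (x, _), w in zip(data, weights))
scale = math.sqrt(chi2 / (len(data) - 1))  # PDG scale factor sqrt(chi2/3)
sigma_infl = sigma * scale
print(f"{mean:.2f} +/- {sigma_infl:.2f}")  # ~ 1.92 +/- 0.46, i.e. eq. (14)
```

The scale factor comes out $`\sqrt{\chi ^2/3}\simeq 1.9`$ here, which is what inflates the naive error of about 0.25 to the quoted 0.46.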
This approach has three model-dependent parameters which are fixed by means of a fit of the $`\mathrm{\Delta }I=1/2`$ rule. Let us here review briefly the model and how the matrix elements are computed. The $`\chi `$QM is an effective quark model of QCD which can be derived in the framework of the extended Nambu-Jona-Lasinio model of chiral symmetry breaking (for a review, see, e.g., ). In the $`\chi `$QM an effective interaction between the $`u,d,s`$ quarks and the meson octet is introduced via the term $$\mathcal{L}_{\chi \text{QM}}=-M\left(\overline{q}_R\mathrm{\Sigma }q_L+\overline{q}_L\mathrm{\Sigma }^{\dagger }q_R\right),$$ (15) which is added to an effective low-energy QCD lagrangian whose dynamical degrees of freedom are the $`u,d,s`$ quarks propagating in a soft gluon background. The matrix $`\mathrm{\Sigma }`$ in (15) is the same as that used in chiral perturbation theory and it contains the pseudo-scalar meson multiplet. The quantity $`M`$ is interpreted as the constituent quark mass in mesons (current quark masses are also included in the effective lagrangian). In the factorization approximation, the matrix elements of the four-quark operators are written in terms of better-known quantities like quark currents and densities. Such building-block matrix elements, like the current matrix elements $`\langle 0|\overline{s}\gamma ^\mu \left(1-\gamma _5\right)u|K^+(k)\rangle `$ and $`\langle \pi ^+(p_+)|\overline{s}\gamma ^\mu \left(1-\gamma _5\right)d|K^+(k)\rangle `$ and the matrix elements of densities, $`\langle 0|\overline{s}\gamma _5u|K^+(k)\rangle `$, $`\langle \pi ^+(p_+)|\overline{s}d|K^+(k)\rangle `$, are evaluated up to $`O(p^4)`$ within the model. The model dependence in the color singlet current and density matrix elements appears (via the $`M`$ parameter) beyond the leading order in the momentum expansion, while the leading contributions agree with the well known expressions in terms of the meson decay constants and masses. 
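A standard example of such a leading-order relation is the PCAC/GMOR expression for the quark condensate, $`f_\pi ^2m_\pi ^2=-(m_u+m_d)\langle \overline{q}q\rangle `$ (a Python sketch; the current-quark mass $`m_u+m_d\simeq 12`$ MeV at $`\mu \simeq 1`$ GeV is an assumed, illustrative input, with the $`f_\pi \simeq 92`$ MeV normalization):

```python
# GMOR/PCAC relation: f_pi^2 m_pi^2 = -(m_u + m_d) <qbar q>.
# All quantities in MeV; the inputs below are illustrative.
f_pi = 92.4           # pion decay constant (f_pi ~ 92 MeV convention)
m_pi = 138.0          # isospin-averaged pion mass
m_u_plus_m_d = 12.0   # assumed current-quark masses at mu ~ 1 GeV
qbar_q = -(f_pi**2 * m_pi**2) / m_u_plus_m_d  # <qbar q> in MeV^3
cond_scale = (-qbar_q) ** (1.0 / 3.0)         # ~ 238 MeV
print(cond_scale)
```

The resulting $`\langle \overline{q}q\rangle \simeq -(240\text{MeV})^3`$ sits comfortably in the range later obtained from the $`\mathrm{\Delta }I=1/2`$ fit.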
Non-factorizable contributions due to soft gluonic corrections are included by using Fierz transformations and by calculating building-block matrix elements involving the color matrix $`T^a`$: $`\langle 0|\overline{s}\gamma ^\mu T^a\left(1-\gamma _5\right)u|K^+(k)\rangle ,\langle \pi ^+(p_+)|\overline{s}\gamma ^\mu T^a\left(1-\gamma _5\right)d|K^+(k)\rangle .`$ (16) Such matrix elements are non-zero for emission of gluons. In contrast to the color singlet matrix elements above, they are model dependent starting with the leading order. Taking products of two such matrix elements and using the relation $$\langle g_s^2G_{\mu \nu }^aG_{\alpha \beta }^a\rangle =\frac{\pi ^2}{3}\langle \frac{\alpha _s}{\pi }GG\rangle \left(\delta _{\mu \alpha }\delta _{\nu \beta }-\delta _{\mu \beta }\delta _{\nu \alpha }\right)$$ (17) makes it possible to express gluonic corrections in terms of the gluonic vacuum condensate . While the factorizable corrections are re-absorbed in the renormalization of the chiral couplings, non-factorizable contributions affect explicitly the form of the matrix elements. The model thus parameterizes all amplitudes in terms of the quantities $`M`$, $`\langle \overline{q}q\rangle `$ , and $`\langle \alpha _sGG/\pi \rangle `$ . Higher order gluon condensates are omitted. The hadronic matrix elements of the operatorial basis in eq. (6) for $`K\to \pi \pi `$ decays have been calculated up to $`O(p^4)`$ inclusive of chiral loops . The leading order (LO) ($`O(p^0,p^2)`$) matrix elements $`\langle Q_i\rangle _I^{LO}`$ and the NLO ($`O(p^2,p^4)`$) corrections $`\langle Q_i\rangle _I^{NLO}`$ for final state isospin projections $`I=0,2`$ are obtained by properly combining factorizable and non-factorizable contributions and expanding the result at the given order. The total hadronic matrix elements up to $`O(p^4)`$ can then be written as: $$\langle Q_i(\mu )\rangle _I=Z_\pi \sqrt{Z_K}\left[\langle Q_i\rangle _I^{LO}+\langle Q_i\rangle _I^{NLO}\right](\mu )+a_i^I(\mu ),$$ (18) where $`Q_i`$ are the operators in eq. (6), and $`a_i^I(\mu )`$ are the contributions from chiral loops (which include the wave-function renormalization). 
The scale dependence of the $`\langle Q_i\rangle _I^{LO,NLO}`$ comes from the perturbative running of the quark condensate and masses, the latter appearing explicitly in the NLO corrections. The quantities $`a_i^I(\mu )`$ represent the scale dependent meson-loop corrections which depend on the chiral quark model via the tree level chiral coefficients. They have been included to $`O(p^4)`$ in ref. by applying the $`\overline{MS}`$ scheme in dimensional regularization, as for the $`\chi `$QM calculation of the tree-level chiral coefficients. The wave-function renormalizations $`Z_K`$ and $`Z_\pi `$ arise in the $`\chi `$QM from direct calculation of the $`KK`$ and $`\pi \pi `$ propagators. The hadronic matrix elements are matched—by taking $`\mu _{SD}=\mu _{LD}`$—with the NLO Wilson coefficients at the scale $`\mathrm{\Lambda }_\chi \simeq 0.8`$ GeV ($`m_\rho `$), as the best compromise between the range of validity of chiral perturbation theory and that of the expansion in the strong coupling. The scale dependence of the amplitudes is gauged by varying $`\mu `$ between 0.8 and 1 GeV. In this range the scale dependence of $`\epsilon ^{}/\epsilon `$ always remains below 10%, thus giving a stable prediction. ### A The fit of the $`\mathrm{\Delta }I=1/2`$ rule In order to assign the values of the model-dependent parameters $`M`$, $`\langle \overline{q}q\rangle `$ and $`\langle \alpha _sGG/\pi \rangle `$ , we consider the $`CP`$-conserving amplitudes in the $`\mathrm{\Delta }I=1/2`$ selection rule of $`K\to \pi \pi `$ decays. 
In practice, we compute the amplitudes $`A_0`$ $`=`$ $`{\displaystyle \frac{G_F}{\sqrt{2}}}V_{ud}V_{us}^{*}{\displaystyle \frac{1}{\mathrm{cos}\delta _0}}{\displaystyle \sum _i}z_i(\mu )\text{Re}\langle Q_i(\mu )\rangle _0,`$ (19) $`A_2`$ $`=`$ $`{\displaystyle \frac{G_F}{\sqrt{2}}}V_{ud}V_{us}^{*}{\displaystyle \frac{1}{\mathrm{cos}\delta _2}}{\displaystyle \sum _i}z_i(\mu )\text{Re}\langle Q_i(\mu )\rangle _2+\omega A_0\mathrm{\Omega }_{\eta +\eta ^{}},`$ (20) within the $`\chi `$QM approach and vary the parameters in order to reproduce their experimental values $$A_0(K\to \pi \pi )=3.3\times 10^{-7}\text{GeV}\text{and}A_2(K\to \pi \pi )=1.5\times 10^{-8}\text{GeV}.$$ (21) This procedure combines a model for low-energy QCD—which allows us to compute all hadronic matrix elements in terms of a few basic parameters—with the phenomenological determination of such parameters. In this way, some shortcomings of such a naive model (in particular, the matching between long- and short-distance components) are absorbed in the phenomenological fit. As a check, we eventually verify the stability of the computed observables against renormalization scale and scheme dependence. The fit of the $`CP`$-conserving amplitudes involves the determination of the FSI phases. The absorptive components of the hadronic matrix elements appear when chiral loops are included. In our approach the direct determination of the rescattering phases gives at $`O(p^4)`$ $`\delta _0\simeq 20^0`$ and $`\delta _2\simeq -12^0`$. Although these results show features which are in qualitative agreement with the phases extracted from pion-nucleon scattering , $$\delta _0=34.2^0\pm 2.2^0,\delta _2=-6.9^0\pm 0.2^0,$$ (22) the deviation from the experimental data is sizeable, especially in the $`I=0`$ component. On the other hand, at $`O(p^4)`$ the absorptive parts of the amplitudes are determined only at $`O(p^2)`$ and disagreement with the measured phases should be expected. As a matter of fact, the authors of ref. 
find that at $`O(p^6)`$ the absorptive parts of the hadronic matrix elements are substantially modified to give values of the rescattering phases quite close to those in eq. (22). At the same time the $`O(p^6)`$ corrections to the dispersive part of the hadronic matrix elements are very small. This result corroborates our ansatz of trusting the dispersive parts of the $`O(p^4)`$ matrix elements while inputting the experimental values of the rescattering phases in all parts of our analysis, which amounts to taking $`\mathrm{cos}\delta _0\simeq 0.8`$ and $`\mathrm{cos}\delta _2\simeq 1`$. Hadronic matrix elements in the $`\chi `$QM depend on the $`\gamma _5`$-scheme utilized . Their dependence partially cancels that of the short-distance NLO Wilson coefficients. Because this compensation is only numerical, and not analytical, we take it as part of our phenomenological approach. A formal $`\gamma _5`$-scheme matching can only come from a model more complete than the $`\chi `$QM. Nevertheless, the result, as shown in Fig. 2 below, is rather convincing. By taking $$\mathrm{\Lambda }_{\mathrm{QCD}}^{(4)}=340\pm 40\text{MeV}$$ (23) and fitting at the scale $`\mu =0.8`$ GeV the amplitudes in eqs. (19)–(20) to their experimental values, allowing for a $`\pm `$ 30% systematic uncertainty, we find (see Fig. 1) $$M=195_{-15}^{+25}\text{MeV},\langle \alpha _sGG/\pi \rangle =\left(330\pm 5\text{MeV}\right)^4,\langle \overline{q}q\rangle =-\left(235\pm 25\text{MeV}\right)^3,$$ (24) in the HV-scheme, and $$M=195_{-10}^{+15}\text{MeV},\langle \alpha _sGG/\pi \rangle =\left(333_{-6}^{+7}\text{MeV}\right)^4,\langle \overline{q}q\rangle =-\left(245\pm 15\text{MeV}\right)^3,$$ (25) in the NDR-scheme. As shown by the light (NDR) and dark (HV) curves in Fig. 2, the $`\gamma _5`$-scheme dependence is controlled by the value of $`M`$, the range of which is fixed thereby. The $`\gamma _5`$ scheme dependence of both amplitudes is minimized for $`M\simeq 190`$–$`200`$ MeV. The good $`\gamma _5`$-scheme stability is also shown by $`\epsilon ^{}/\epsilon `$ and $`\widehat{B}_K`$. 
For this reason, in our previous papers we quoted only the HV results for these observables. The fit of the amplitudes $`A_0`$ and $`A_2`$ is obtained for values of the quark and gluon condensates which are in agreement with those found in other approaches, i.e. QCD sum rules and lattice, although it is fair to say that the relation between the gluon condensate of QCD sum rules and lattice and that of the $`\chi `$QM is far from obvious. The value of the constituent quark mass $`M`$ is in good agreement with that found by fitting radiative kaon decays . In Fig. 3 we present the anatomy of the relevant operator contributions to the $`CP`$ conserving amplitudes. It is worth noticing that, because of the NLO enhancement of the $`I=0`$ matrix elements (mainly due to the chiral loops), the gluon penguin contribution to $`A_0`$ amounts to about 20% of the amplitude. Turning now to the $`\mathrm{\Delta }S=2`$ lagrangian, $$\mathcal{L}_{\mathrm{\Delta }S=2}=C_{2S}(\mu )Q_{S2}(\mu ),$$ (26) where $$C_{2S}(\mu )=\frac{G_F^2m_W^2}{4\pi ^2}\left[\lambda _c^2\eta _1S(x_c)+\lambda _t^2\eta _2S(x_t)+2\lambda _c\lambda _t\eta _3S(x_c,x_t)\right]b(\mu )$$ (27) with $`\lambda _j=V_{jd}V_{js}^{*}`$, $`x_i=m_i^2/m_W^2`$. We denote by $`Q_{S2}`$ the $`\mathrm{\Delta }S=2`$ local four-quark operator $$Q_{S2}=(\overline{s}_L\gamma ^\mu d_L)(\overline{s}_L\gamma _\mu d_L),$$ (28) which is the only $`\mathrm{\Delta }S=2`$ local operator of dimension six in the standard model. The integration of the electroweak loops leads to the Inami-Lim functions $`S(x)`$ and $`S(x_c,x_t)`$, whose exact expressions can be found in the reference quoted; they depend on the masses of the charm and top quarks and describe the $`\mathrm{\Delta }S=2`$ transition amplitude in the absence of strong interactions. The short-distance QCD corrections are encoded in the coefficients $`\eta _1`$, $`\eta _2`$ and $`\eta _3`$ with a common scale- and renormalization-scheme-dependent factor $`b(\mu )`$ factorized out. 
They are functions of the heavy quark masses and of the scale parameter $`\mathrm{\Lambda }_{\mathrm{QCD}}`$. These QCD corrections are available at the NLO in the strong and electromagnetic couplings. The scale-dependent factor of the short-distance corrections is given by $$b(\mu )=\left[\alpha _s\left(\mu \right)\right]^{-2/9}\left(1+J_3\frac{\alpha _s\left(\mu \right)}{4\pi }\right),$$ (29) where $`J_3`$ depends on the $`\gamma _5`$-scheme used in the regularization. The NDR and HV schemes yield, respectively: $$J_3^{\mathrm{NDR}}=\frac{307}{162}\text{and}J_3^{\mathrm{HV}}=\frac{91}{162}.$$ (30) On the long-distance side, the hadronic $`B_K`$ parameter is introduced by writing the $`\mathrm{\Delta }S=2`$ matrix element as $$\langle \overline{K^0}|Q_{S2}(\mu )|K^0\rangle =\frac{4}{3}f_K^2m_K^2B_K(\mu ).$$ (31) The scale- and renormalization-scheme-independent parameter $`\widehat{B}_K`$ is then defined by means of eqs. (29)–(31) as $$\widehat{B}_K=b(\mu )B_K(\mu ).$$ (32) By using the input values found by fitting the $`\mathrm{\Delta }I=1/2`$ rule we obtain in both $`\gamma _5`$-schemes $$\widehat{B}_K=1.1\pm 0.2.$$ (33) The result (33) includes chiral corrections up to $`O(p^4)`$ and it agrees with what we found in . In the chiral limit one derives a simple expression for $`\widehat{B}_K`$ (eq. (6.3) in ref. ), which depends crucially on the value of the gluon condensate. In this limit, for central values of the parameters, we obtain $`\widehat{B}_K=0.44`$ $`(0.46)`$ in the HV (NDR) scheme. A recent calculation of $`\widehat{B}_K`$, based on QCD sum rules, finds in the chiral limit and at the NLO in the $`1/N`$ expansion $`\widehat{B}_K=0.41\pm 0.09`$ . Many calculations of $`B_K`$ have been performed on the lattice (for a recent review see ). According to ref. , the analysis of ref. 
presents the most extensive study of systematic errors and gives the (quenched) value $`\widehat{B}_K=0.86\pm 0.06\pm 0.14`$, which should be taken as the present reference value, while awaiting further progress in including dynamical quarks on the lattice. Notice that no estimate of $`\epsilon ^{}/\epsilon `$ can be considered complete unless it also gives a value for $`\widehat{B}_K`$. The case of the $`\chi `$QM, for instance, is telling insofar as the enhancement of $`B_6`$ is partially compensated for by a large $`\widehat{B}_K`$ (and accordingly a smaller $`\text{Im}\lambda _t`$). ### B The factors $`B_i`$ The factors $`B_i`$, defined as $$B_i\equiv \langle Q_i\rangle ^{\mathrm{model}}/\langle Q_i\rangle ^{\mathrm{VSA}},$$ (34) have become a standard way of displaying the values of the hadronic matrix elements in order to compare them among various approaches. However they must be used with care because of their dependence on the renormalization scheme and scale, as well as on the choice of the VSA parameters. They are given in the $`\chi `$QM in table I in the HV and NDR schemes, at $`\mu =0.8`$ GeV, for the central value of $`\mathrm{\Lambda }_{\mathrm{QCD}}^{(4)}`$. The dependence on $`\mathrm{\Lambda }_{\mathrm{QCD}}`$ enters indirectly via the fit of the $`\mathrm{\Delta }I=1/2`$ selection rule and the determination of the parameters of the model. The uncertainty in the matrix elements of the penguin operators $`Q_5`$–$`Q_8`$ arises from the variation of $`\langle \overline{q}q\rangle `$. This affects mostly the $`B_{5,6}`$ parameters because of the leading linear dependence on $`\langle \overline{q}q\rangle `$ of the $`Q_{5,6}`$ matrix elements in the $`\chi `$QM, contrasted to the quadratic dependence of the corresponding VSA matrix elements. Accordingly, $`B_{5,6}`$ scale as $`\langle \overline{q}q\rangle ^{-1}`$, or via PCAC as $`m_q`$, and therefore are sensitive to the value chosen for these parameters. 
For this reason, we have reported the corresponding values of $`B_{5,6}`$ when the quark condensate in the VSA is fixed to its PCAC value. It should however be stressed that such a dependence is not physical and is introduced by the arbitrary normalization on the VSA result. The estimate of $`\epsilon ^{}`$ is therefore almost independent of $`m_q`$, which only enters the NLO corrections and the determination of $`\widehat{B}_K`$. The enhancement of the $`Q_{5,6}`$ matrix elements with respect to the VSA values (the conventional normalization of the VSA matrix elements corresponds to taking $`\langle \overline{q}q\rangle (0.8\mathrm{GeV})\simeq -(220\mathrm{MeV})^3`$ ) is mainly due to the NLO chiral loop contributions. Such an enhancement, due to final state interactions, has been found in $`1/N_c`$ analyses beyond LO , as well as in recent dispersive studies . A large-$`N_c`$ approach, based on QCD sum rules, which reproduces the electroweak $`\pi ^+`$-$`\pi ^0`$ mass difference and the leptonic $`\pi (\eta )`$ rare decays, disagrees in the determination of $`\langle Q_7\rangle _{0,2}`$ at $`\mu =0.8`$ GeV, due to the sharp scale dependence found for these matrix elements. Since the operator $`Q_7`$ gives a negligible contribution to $`\epsilon ^{}/\epsilon `$, we should wait for a calculation of other matrix elements within the same framework in order to assess the extent and the impact of the disagreement with the $`\chi `$QM results. Among the non-factorizable corrections, the gluon condensate contributions are most important for the $`CP`$-conserving $`I=2`$ amplitudes (and account for the values and uncertainties of $`B_{1,2}^{(2)}`$) but are otherwise inessential in the determination of $`\epsilon ^{}`$, for which FSI are the most relevant corrections to the LO $`1/N`$ result. ## IV Bounds on $`\text{Im}\lambda _t`$ The updated measurement of the CKM ratio $`|V_{ub}/V_{cb}|`$ implies a change in the determination of the Wolfenstein parameter $`\eta `$ that enters $`\text{Im}\lambda _t`$. 
This is of particular relevance because it affects proportionally the value of $`\epsilon ^{}/\epsilon `$. The allowed values for $`\text{Im}\lambda _t\simeq \eta |V_{us}||V_{cb}|^2`$ are found by imposing the experimental constraints on $`\epsilon `$, $`|V_{ub}/V_{cb}|`$, $`\mathrm{\Delta }m_d`$ and $`\mathrm{\Delta }m_s`$, which give rise to the following equations: $$\eta \left(1-\frac{\lambda ^2}{2}\right)\left\{\left[1-\rho \left(1-\frac{\lambda ^2}{2}\right)\right]|V_{cb}|^2\eta _2S(x_t)+\eta _3S(x_c,x_t)-\eta _1S(x_c)\right\}\frac{|V_{cb}|^2}{\lambda ^8}\widehat{B}_K=\frac{|\epsilon |}{C\lambda ^{10}},$$ (35) where $$C=\frac{G_F^2f_K^2m_K^2m_W^2}{3\sqrt{2}\pi ^2\mathrm{\Delta }M_{LS}}\text{and}x_i=m_i^2/m_W^2,$$ (36) and $`\eta ^2+\rho ^2`$ $`=`$ $`{\displaystyle \frac{1}{\lambda ^2}}{\displaystyle \frac{|V_{ub}|^2}{|V_{cb}|^2}},`$ (37) $`\eta ^2\left(1-{\displaystyle \frac{\lambda ^2}{2}}\right)^2+\left[1-\rho \left(1-{\displaystyle \frac{\lambda ^2}{2}}\right)\right]^2`$ $`=`$ $`{\displaystyle \frac{1}{\lambda ^2}}{\displaystyle \frac{|V_{td}|^2}{|V_{cb}|^2}},`$ (38) where $`|V_{td}|`$ is found by means of $$\mathrm{\Delta }m_d=\frac{G_F}{24\sqrt{2}\pi ^2}|V_{td}|^2|V_{tb}|^2m_{B_d}f_{B_d}^2B_{B_d}\eta _Bx_tF(x_t)$$ (39) and $$\frac{\mathrm{\Delta }m_s}{\mathrm{\Delta }m_d}=\frac{m_{B_s}f_{B_s}^2B_{B_s}|V_{ts}|^2}{m_{B_d}f_{B_d}^2B_{B_d}|V_{td}|^2}.$$ (40) The function $`F(x)`$ in (39) can be found in . By using the method of Parodi, Roudeau and Stocchi , who have run their program starting from the inputs listed in table II together with $$\eta _1=1.44\pm 0.18,\eta _2=0.52,\eta _3=0.45\pm 0.01,$$ (41) which are the values obtained for our inputs, it is found that $$\text{Im}\lambda _t=(1.14\pm 0.11)\times 10^{-4},$$ (42) where the error is determined by the Gaussian distribution in Fig. 4. Notice that the value thus found is roughly 10% smaller than those found in other estimates, for which $`\widehat{B}_K`$ is smaller. 
The effect of this updated fit is a substantial reduction in the range of $`\text{Im}\lambda _t`$ with respect to what we used in our 1997 estimate : all values smaller than $`1.0\times 10^{-4}`$ are now excluded (whereas before, values as small as $`0.6\times 10^{-4}`$ were included). ## V Estimating $`\epsilon ^{}/\epsilon `$ The value of $`\epsilon ^{}`$ computed by taking all input parameters at their central values (Table II) is shown in Fig. 5. The figure shows the contribution to $`\epsilon ^{}`$ of the various operators in two $`\gamma _5`$ renormalization schemes at $`\mu =0.8`$ GeV and $`1.0`$ GeV. The advantage of such a histogram is that, contrary to the $`B_i`$, the size of the individual contributions does not depend on some conventional normalization. As the histogram makes clear, the two dominant contributions come from the gluon and electroweak penguins, $`Q_6`$ and $`Q_8`$. However, the gluon penguin dominates and there is very little cancellation with the electroweak penguin operator. The dominance of the $`I=0`$ components in the $`\chi `$QM originates from the $`O(p^4)`$ chiral corrections, the detailed size of which is determined by the fit of the $`\mathrm{\Delta }I=1/2`$ rule. It is a nice feature of the approach that the renormalization scheme stability imposed on the $`CP`$ conserving amplitudes is numerically preserved in $`\epsilon ^{}/\epsilon `$. The comparison of the two figures also shows the remarkable renormalization scale stability of the central value once the perturbative running of the quark masses and the quark condensate is taken into account. In what follows, the model-dependent parameters $`M`$, $`\langle \alpha _sGG/\pi \rangle `$ and $`\langle \overline{q}q\rangle `$ are uniformly varied in their given ranges (flat scanning), while the others are sampled according to their normal distributions (see Table II for the ranges used). Values of $`\epsilon ^{}/\epsilon `$ found in the HV and NDR schemes are included with equal weight. 
For a given set, a distribution is obtained by collecting the values of $`\epsilon ^{}/\epsilon `$ in bins of a given range. This is shown in Fig. 6 for a particular choice of bins. The final distribution is partially skewed, with more values closer to the lower end but a longer tail toward larger values. However, because the skewness of the distribution is less than one, the mean and the standard deviation are a good estimate of the central value and the dispersion of values around it. This statistical analysis yields $$\epsilon ^{}/\epsilon =(2.2\pm 0.8)\times 10^{-3}.$$ (43) A more conservative estimate of the uncertainties is obtained via the flat scanning of the input parameters, which gives $$0.9\times 10^{-3}<\text{Re}\epsilon ^{}/\epsilon <4.8\times 10^{-3}.$$ (44) In both estimates a theoretical systematic error of $`\pm 30\%`$ is included in the fit of the $`CP`$ conserving amplitudes $`A_0`$ and $`A_2`$. The stability of the numerical outcomes is only marginally affected by shifts in the value of $`\mathrm{\Omega }_{\eta +\eta ^{}}`$ due to NLO chiral corrections and by additional isospin breaking effects . Any effective variation of $`\mathrm{\Omega }_{\eta +\eta ^{}}`$ is anti-correlated to the value of $`\langle \alpha _sGG/\pi \rangle `$ obtained in the fit of $`A_2`$. We have verified that this affects the calculation of $`\widehat{B}_K`$ and the consequent determination of $`\text{Im}\lambda _t`$ in such a way as to compensate numerically in $`\epsilon ^{}/\epsilon `$ the change of $`\mathrm{\Omega }_{\eta +\eta ^{}}`$. While waiting for a confident assessment of the NLO isospin violating effects in the $`K\to \pi \pi `$ amplitudes, we have used for $`\mathrm{\Omega }_{\eta +\eta ^{}}`$ the ‘LO’ value quoted in Table II. The weak dependence on some poorly known parameters is a welcome outcome of the correlation among hadronic matrix elements enforced in our semi-phenomenological approach by the fit of the $`\mathrm{\Delta }I=1/2`$ rule. 
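The two error estimates above differ only in how the inputs are sampled: Gaussian for the experimental inputs, flat for the model parameters. A toy Monte Carlo makes the distinction concrete (a Python sketch; the "observable" below is a purely illustrative stand-in with no connection to the actual $`\chi `$QM computation):

```python
import random
import statistics

random.seed(1)

# Toy stand-in for eps'/eps: a Gaussian "experimental" factor times a
# hadronic factor scanned flatly in its allowed range (both illustrative).
def toy_eps(gauss_factor, hadronic):
    return gauss_factor * hadronic

N = 100_000
samples = []
for _ in range(N):
    g = random.gauss(1.0, 0.10)          # Gaussian-sampled input
    h = random.uniform(1.1e-3, 3.3e-3)   # flatly scanned model combination
    samples.append(toy_eps(g, h))

mean = statistics.fmean(samples)
std = statistics.pstdev(samples)
print(f"Gaussian sampling: {mean:.2e} +/- {std:.2e}")

# "Conservative" flat scan: take the extreme corners of the flat range,
# with the Gaussian input pinned at +/- 1 sigma for illustration.
lo = toy_eps(0.9, 1.1e-3)
hi = toy_eps(1.1, 3.3e-3)
print(f"flat scan range: [{lo:.2e}, {hi:.2e}]")
```

Gaussian sampling yields a mean and standard deviation in the spirit of eq. (43), while scanning the corners of the flat ranges gives a wider, more conservative interval in the spirit of eq. (44).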
We have changed the central value of $`m_c`$ from 1.4 GeV, the value we used in , to 1.3 GeV in order to make our estimate more homogeneous with others. This change affects the determination of the ranges of the model parameters, mainly increasing $`\langle \overline{q}q\rangle `$, via the fit of the $`CP`$ conserving amplitudes. The central value of $`\epsilon ^{}/\epsilon `$ turns out to be affected by less than 10%. While the $`\chi `$QM approach to the hadronic matrix elements relevant in the computation of $`\epsilon ^{}/\epsilon `$ has many advantages over other techniques, and has proved its value in predicting what was then found in the experiments, it has a severe shortcoming insofar as the matching scale has to be kept low, around 1 GeV, and therefore the Wilson coefficients have to be run down to scales that are at the limit of applicability of the renormalization-group equations. Moreover, the matching itself suffers from ambiguities that have not been completely resolved. For these reasons we have insisted all along that the approach is semi-phenomenological and that the above shortcomings are to be absorbed in the values of the input parameters on which the fit of the $`CP`$ conserving amplitudes is based. ## VI Other estimates Figure 7 summarizes the present status of theory versus experiment. In addition to our improved calculation (and an independent estimate similarly based on the $`\chi `$QM), we have reported five estimates of $`\epsilon ^{}/\epsilon `$ published in the last year. Trieste’s, München’s and Roma’s ranges are updates of their respective older estimates, while the other estimates are altogether new. The estimates reported come from the following approaches: * München’s : In the München approach (phenomenological $`1/N`$) some of the matrix elements are obtained by fitting the $`\mathrm{\Delta }I=1/2`$ rule at $`\mu =m_c=1.3`$ GeV. 
The relevant gluon and electroweak penguin matrix elements $`\langle Q_6\rangle `$ and $`\langle Q_8\rangle _2`$ remain undetermined and are taken around their leading $`1/N`$ values (which implies a scheme dependent result). In Fig. 7 the HV (left) and NDR (right) results are shown. The darker range represents the result of the Gaussian treatment of the input parameters, compared to the flat scanning (complete range). * Roma’s : Lattice cannot provide us at present with reliable calculations of the $`I=0`$ penguin operators relevant to $`\epsilon ^{}/\epsilon `$, as well as of the $`I=0`$ components of the hadronic matrix elements of the current-current operators (penguin contractions), which are relevant to the $`\mathrm{\Delta }I=1/2`$ rule. This is due to large renormalization uncertainties, partly related to the breaking of chiral symmetry on the lattice. In this respect, very promising is the domain-wall fermion approach, which allows us to decouple the chiral symmetry from the continuum limit. On the other hand, present lattice calculations compute $`K\to \pi `$ matrix elements and use lowest order chiral perturbation theory to estimate the $`K\to \pi \pi `$ amplitude, which introduces additional (and potentially large) uncertainties. In the recent Roma re-evaluation of $`\epsilon ^{}/\epsilon `$, $`B_6`$ is taken at the VSA result, varied by a 100% error. The estimate quotes the values obtained in two $`\gamma _5`$ schemes (HV and NDR). The dark (light) ranges correspond to Gaussian (flat) scanning of the input parameters. * Dortmund’s : In recent years the Dortmund group has revived and improved the approach of Bardeen, Buras and Gerard based on the $`1/N`$ expansion. Chiral loops are regularized via a cutoff and the amplitudes are arranged in a $`p^{2n}/N`$ expansion. Particular attention has been given to the matching procedure between the scale dependence of the chiral loops and that arising from the short-distance analysis. The renormalization scheme dependence remains and it is included in the final uncertainty. 
The $`\mathrm{\Delta }I=1/2`$ rule is reproduced, but the presence of the quadratic cutoff induces a matching scale instability (which is very large for $`B_K`$). The NLO corrections to $`Q_6`$ induce a substantial enhancement of the matrix element (right range in Fig. 7) compared to the leading order result (left). The dark range is drawn for central values of $`m_s`$, $`\mathrm{\Omega }_{\eta +\eta ^{}}`$, $`\text{Im}\lambda _t`$ and $`\mathrm{\Lambda }_{QCD}`$. * Dubna’s : The hadronic matrix elements are computed in the ENJL framework including chiral loops up to $`O(p^6)`$ and the effects of scalar, vector and axial-vector resonances. ($`B_K`$, and therefore $`\text{Im}\lambda _t`$, is taken from .) Chiral loops are regularized via the heat-kernel method, which leaves unsolved the problem of the renormalization scheme dependence. A phenomenological fit of the $`\mathrm{\Delta }I=1/2`$ rule implies deviations of up to a factor of two on the calculated $`\langle Q_6\rangle `$. The reduced (dark) range in Fig. 7 corresponds to taking the central values of the NLO chiral couplings and varying the short-distance parameters. * Taipei’s : Generalized factorization represents an attempt to parametrize the hadronic matrix elements in the framework of factorization without a priori assumptions . Phenomenological parameters are introduced to account for non-factorizable effects. Experimental data are used in order to extract as much information as possible on the non-factorizable parameters. This approach has been applied to the $`K\to \pi \pi `$ amplitudes in ref. . The effective Wilson coefficients, which include the perturbative QCD running of the quark operators, are matched to the factorized matrix elements at the scale $`\mu _F`$, which is arbitrarily chosen in the perturbative regime. A residual scale dependence remains in the penguin matrix elements via the quark mass. 
The analysis shows that, in order to reproduce the $`\mathrm{\Delta }I=1/2`$ rule and $`\epsilon ^{}/\epsilon `$, sizable non-factorizable contributions are required both in the current-current and in the penguin matrix elements. However, some assumptions on the phenomenological parameters and ad hoc subtractions of scheme-dependent terms in the Wilson coefficients make the numerical results questionable. In addition, the quoted error does not include any short-distance uncertainty. * Trieste’s: The dark (light) ranges correspond to Gaussian (flat) scanning of the input parameters. The bar on the left corresponds to the present estimate. That on the right is a new estimate , similarly based on the $`\chi `$QM hadronic matrix elements, in which however $`\epsilon ^{}/\epsilon `$ is estimated in a novel way, by including the explicit computation of $`\epsilon `$ in the ratio as opposed to the usual procedure of taking its value from experiment. This approach has the advantage of being independent of the determination of the CKM parameter $`\text{Im}\lambda _t`$ and of showing more directly the dependence on the long-distance parameter $`\widehat{B}_K`$ as determined within the model. The difference (around 10%) between the two Trieste estimates corresponds effectively to a larger value of $`\text{Im}\lambda _t`$, as determined from $`\epsilon `$ only, with respect to eq. (42). * Valencia’s : The standard model estimate given by Pallante and Pich is obtained by applying FSI correction factors, derived from a dispersive analysis à la Omnès-Muskhelishvili, to the leading (factorized) $`1/N`$ amplitudes. The detailed numerical outcome has been questioned on the basis of ambiguities related to the choice of the subtraction point at which the factorized amplitude is taken . Large corrections may also be induced by unknown local terms which are unaccounted for by the dispersive resummation of the leading chiral logs. Nevertheless, the analysis of ref. 
confirms the crucial role of higher order chiral corrections for $`\epsilon ^{}/\epsilon `$, even though FSI effects alone leave the problem of reproducing the $`\mathrm{\Delta }I=1/2`$ selection rule open. * Lund’s : The $`\mathrm{\Delta }I=1/2`$ rule and $`B_K`$ have been studied in the NJL framework and $`1/N`$ expansion by Bijnens and Prades showing an impressive scale stability when including vector and axial-vector resonances. A recent calculation of $`\epsilon ^{}/\epsilon `$ at the NLO in $`1/N`$ has been performed in ref. . The calculation is done in the chiral limit and it is eventually corrected by estimating the largest $`SU(3)`$ breaking effects. Particular attention is devoted to the matching between long- and short-distance components by use of the $`X`$-boson method . The couplings of the $`X`$-bosons are computed within the ENJL model which improves the high-energy behavior. The $`\mathrm{\Delta }I=1/2`$ rule is reproduced and the computed amplitudes show a satisfactory renormalization scale and scheme stability. A sizeable enhancement of the $`Q_6`$ matrix element is found which brings the central value of $`\epsilon ^{}/\epsilon `$ to the level of $`3\times 10^{-3}`$. Cut-off based approaches should also pay attention to higher-dimension operators which become relevant for matching scales below 2 GeV . The calculations based on dimensional regularization may avoid the problem if phenomenological input is used in order to encode in the hadronic matrix elements the physics at all scales. Other attempts to reproduce the measured $`\epsilon ^{}/\epsilon `$ using the linear $`\sigma `$-model, which include the effect of a scalar resonance with $`m_\sigma \simeq 900`$ MeV, obtain the needed enhancement of $`Q_6`$ . However, the $`CP`$ conserving $`I=0`$ amplitude falls short of the experimental value by a factor of two. 
With a lighter scalar, $`m_\sigma \simeq 600`$ MeV, the $`CP`$ conserving $`I=0`$ amplitude is reproduced, but $`\epsilon ^{}/\epsilon `$ turns out to be more than one order of magnitude larger than currently measured . ## VII Conclusions The present analysis updates our 1997 estimate $`\text{Re}\epsilon ^{}/\epsilon =1.7_{-1.0}^{+1.4}\times 10^{-3}`$ , which already pointed out the potential relevance of non-factorizable contributions and the importance of addressing both $`CP`$ conserving and violating data for a reliable estimate of $`\epsilon ^{}/\epsilon `$. The increase in the central value is due to the update on the experimental inputs (mainly $`V_{ub}/V_{cb}`$). The uncertainty is reduced when using the Gaussian sampling, as opposed to the flat scan used in ref. . On the other hand the error obtained by flat scanning is larger due to the larger systematic uncertainty ($`\pm 30\%`$) used in the fit of the $`CP`$ conserving amplitudes. Among the corrections to the leading $`1/N`$ (factorized) result, FSI play a crucial role in the determination of the gluon penguin matrix elements. Recent dispersive analyses of $`K\to \pi \pi `$ amplitudes show how a (partial) resummation of FSI increases substantially the size of the $`I=0`$ amplitudes, while slightly affecting the $`I=2`$ components . On the other hand, the precise size of the effect depends on boundary conditions of the factorized amplitudes which are not unambiguously known . Finally, it is worth stressing that FSI by themselves do not account for the magnitude of the $`CP`$ conserving decay amplitudes. In our approach a combination of non-factorizable soft-gluon effects (at $`O(p^2)`$) and FSI (at the NLO) makes it possible to reproduce the $`\mathrm{\Delta }I=1/2`$ selection rule. In turn, requiring the fit of the $`CP`$ conserving $`K\to \pi \pi `$ decays allows for the determination of the “non-perturbative” parameters of the $`\chi `$QM, which eventually leads to the detailed prediction of $`\epsilon ^{}/\epsilon `$. 
Confidence in our final estimate of $`\epsilon ^{}/\epsilon `$ is based on the coherent picture of kaon physics which arises from the phenomenological determination of the model parameters and the self-contained calculation of all $`\mathrm{\Delta }S=1`$ and 2 hadronic matrix elements. ###### Acknowledgements. We thank F. Parodi and A. Stocchi for their help in the determination of $`\text{Im}\lambda _t`$ and for Fig. 4.
# Sum Rule Approach to Collective Oscillations of Boson-Fermion Mixed Condensate of Alkali Atoms ## Abstract The behavior of collective oscillations of a trapped boson-fermion mixed condensate is studied in the sum rule approach. A mixing angle of bosonic and fermionic multipole operators is introduced so that the mixing characters of the low-lying collective modes are studied as functions of the boson-fermion interaction strength. For an attractive boson-fermion interaction, the low-lying monopole mode becomes a coherent oscillation of bosons and fermions and shows a rapid decrease in the excitation energy towards the instability point of the ground state. In contrast, the low-lying quadrupole mode keeps a bosonic character over a wide range of the interaction strengths. For the dipole mode the boson-fermion in-phase oscillation remains an eigenmode under the external oscillator potential. For weakly repulsive values of the boson-fermion interaction strength we found that the average energy of the out-of-phase dipole mode stays lower than that of the in-phase oscillation. The physical origin of the behavior of the multipole modes against the boson-fermion interaction strength is discussed in some detail. Collective oscillation is one of the most prominent phenomena common to a variety of many-body systems. The realization of the Bose-Einstein condensates (BEC) for trapped alkali atoms offers a possibility to study such phenomena of quantum systems under ideal conditions. Up to now the experimental as well as theoretical studies of collective motions of BEC have been intensively performed. Quite recently the degenerate Fermi gas of trapped <sup>40</sup>K atoms has been realized, which motivates the study of collective motion also in Fermi gases. These developments further encourage the study of possible boson-fermion mixed condensates of trapped atoms. 
Now the static properties, stability conditions, and some dynamical properties of trapped boson-fermion condensates have been investigated. In the present paper, we study the behavior of collective oscillations of a boson-fermion mixed condensate at $`T=0`$ for both repulsive and attractive boson-fermion interactions. We adopt the sum rule approach that has proved to be successful in the studies of collective excitations of Bose condensates. For the mixed condensate, we introduce a mixing angle of the boson/fermion excitation operators so as to allow the in- and out-of-phase oscillations of the Bose and Fermi condensates. In the sum rule approach we first calculate the energy weighted moments $`m_p=\sum _j(E_j-E_0)^p|\langle j|F|0\rangle |^2`$ of the relevant multipole operator $`F`$, where $`|j\rangle `$’s represent the complete set of eigenstates of the Hamiltonian with energies $`E_j`$, and $`|0\rangle `$ denotes the ground state. The excitation energy is expressed as $`\hbar \omega =(m_3/m_1)^{1/2}`$ which provides a useful expression for the average energy of the collective oscillation. The moments are calculated from formulae $`m_1=\frac{1}{2}\langle 0|[F^{\dagger },[H,F]]|0\rangle `$ and $`m_3=\frac{1}{2}\langle 0|[[F^{\dagger },H],[H,[H,F]]]|0\rangle `$. We consider three types of multipole operators which are defined by $$F_\alpha ^\pm \equiv \underset{i=1}{\overset{N_b}{\sum }}f_\alpha (\stackrel{}{r}_{bi})\pm \underset{i=1}{\overset{N_f}{\sum }}f_\alpha (\stackrel{}{r}_{fi}),(\alpha =M,D,Q)$$ (1) where the functions $`f_\alpha `$ are defined by $`f_M(\stackrel{}{r})=r^2`$ for monopole, $`f_D(\stackrel{}{r})=z`$ for dipole, and $`f_Q(\stackrel{}{r})=3z^2-r^2`$ for quadrupole excitations. The indices $`b,f`$ denote boson/fermion, $`N_b,N_f`$ the numbers of Bose/Fermi particles, and $`\pm `$ correspond to the in-phase and out-of-phase oscillation of the two types of particles. 
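As a quick sanity check of these moment formulae (not part of the original paper), the relation $`\hbar \omega =(m_3/m_1)^{1/2}`$ can be verified for the simplest solvable case: a single particle in a one-dimensional harmonic trap with the dipole-like operator $`F=x`$, where the exact frequency $`\omega _0`$ must be recovered. The sketch below evaluates $`m_1`$ and $`m_3`$ directly from their spectral definitions in the oscillator eigenbasis, using illustrative units $`\hbar =m=\omega _0=1`$.

```python
import numpy as np

# Toy check of the sum-rule estimate hbar*omega = sqrt(m3/m1) for a single
# particle in a 1D harmonic trap with excitation operator F = x.  In units
# hbar = m = omega0 = 1 the exact dipole frequency is omega = 1, and the
# matrix elements <n+1|x|n> = sqrt((n+1)/2) are known analytically.
nmax = 60
E = np.arange(nmax) + 0.5                    # oscillator energies E_n = n + 1/2
x = np.zeros((nmax, nmax))
for n in range(nmax - 1):
    x[n + 1, n] = x[n, n + 1] = np.sqrt((n + 1) / 2.0)

dE = E - E[0]                                # excitation energies E_j - E_0
s = np.abs(x[:, 0]) ** 2                     # strengths |<j|F|0>|^2
m1 = np.sum(dE * s)                          # energy-weighted moment m_1
m3 = np.sum(dE**3 * s)                       # cubic moment m_3
omega = np.sqrt(m3 / m1)
print(m1, m3, omega)                         # m1 = m3 = 1/2, omega = omega0 = 1
```

Since the full dipole strength sits in a single state here, the sum-rule estimate is exact, which is the situation described in the text below where the moment ratio coincides with the true excitation energy.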
We actually take a linear combination of the form $$F_\alpha (𝐫;\theta )=F_\alpha ^+\mathrm{cos}\theta +F_\alpha ^{-}\mathrm{sin}\theta (-\frac{\pi }{2}<\theta \le \frac{\pi }{2})$$ (2) parametrized by the mixing angle $`\theta `$. We study the value of $`\theta `$ that minimizes the calculated frequency $`\omega `$ for each $`\alpha `$. We consider the polarized boson-fermion mixed condensate in a spherically symmetric harmonic oscillator potential. The system is described by the Hamiltonian $$H=\underset{i=1}{\overset{N_b}{\sum }}\left\{\frac{\stackrel{}{p}_{bi}^2}{2m}+\frac{1}{2}m\omega _0^2\stackrel{}{r}_{bi}^2+\frac{1}{2}g\underset{j=1}{\overset{N_b}{\sum }}\delta (\stackrel{}{r}_{bi}-\stackrel{}{r}_{bj})\right\}+\underset{i=1}{\overset{N_f}{\sum }}\left\{\frac{\stackrel{}{p}_{fi}^2}{2m}+\frac{1}{2}m\omega _0^2\stackrel{}{r}_{fi}^2\right\}+h\underset{i=1}{\overset{N_b}{\sum }}\underset{j=1}{\overset{N_f}{\sum }}\delta (\stackrel{}{r}_{bi}-\stackrel{}{r}_{fj})$$ (3) where we assume the same oscillator frequencies and masses for bosons and fermions for simplicity. The coupling constants $`g,h`$ are the boson-boson/boson-fermion interaction strengths represented by the $`s`$-wave scattering lengths $`a_{bb}`$ and $`a_{bf}`$ as $`g=4\pi \hbar ^2a_{bb}/m`$ and $`h=4\pi \hbar ^2a_{bf}/m`$. The fermion-fermion interaction has been neglected as the polarized system is considered. 
Following the standard calculation procedure the excitation frequencies are obtained as: (i) Monopole $$\frac{\omega _M(\theta )}{\omega _0}=\sqrt{2}\sqrt{1+\frac{E_{\mathrm{kin}}^++\frac{9}{4}E_{bb}+\frac{9}{4}E_{bf}+2(E_{\mathrm{kin}}^{-}+\frac{9}{4}E_{bb}+\frac{3}{4}\mathrm{\Delta }^{\prime })\mathrm{cos}\theta \mathrm{sin}\theta -\mathrm{\Delta }\mathrm{sin}^2\theta }{E_{\mathrm{ho}}^++2E_{\mathrm{ho}}^{-}\mathrm{cos}\theta \mathrm{sin}\theta }}$$ (4) (ii) Quadrupole $$\frac{\omega _Q(\theta )}{\omega _0}=\sqrt{2}\sqrt{1+\frac{E_{\mathrm{kin}}^++2E_{\mathrm{kin}}^{-}\mathrm{cos}\theta \mathrm{sin}\theta -\frac{2}{5}\mathrm{\Delta }\mathrm{sin}^2\theta }{E_{\mathrm{ho}}^++2E_{\mathrm{ho}}^{-}\mathrm{cos}\theta \mathrm{sin}\theta }}$$ (5) (iii) Dipole $$\frac{\omega _D(\theta )}{\omega _0}=\sqrt{1-\frac{4}{3\hbar \omega _0}\frac{\mathrm{\Omega }\mathrm{sin}^2\theta }{N^++2N^{-}\mathrm{cos}\theta \mathrm{sin}\theta }}$$ (6) Here we defined $`E_{\mathrm{kin}}^\pm \equiv E_{\mathrm{kin}}^b\pm E_{\mathrm{kin}}^f,E_{\mathrm{ho}}^\pm \equiv E_{\mathrm{ho}}^b\pm E_{\mathrm{ho}}^f`$ and $`N^\pm \equiv N_b\pm N_f`$, where $`E_{\mathrm{kin}}^{\{b,f\}}`$ and $`E_{\mathrm{ho}}^{\{b,f\}}`$ are respectively the expectation values of the kinetic and oscillator potential energies for boson/fermion in the ground state. Boson-boson and boson-fermion interaction energies have been denoted by $`E_{bb}`$ and $`E_{bf}`$. The quantities $`\mathrm{\Delta },\mathrm{\Delta }^{\prime }`$ and $`\mathrm{\Omega }`$ are given in terms of the boson/fermion densities $`n_b(r),n_f(r)`$ in the ground state by $$\mathrm{\Delta }\equiv h\int d^3r\,r^2\frac{dn_f(r)}{dr}\frac{dn_b(r)}{dr},\mathrm{\Delta }^{\prime }\equiv h\int d^3r\,r\left[n_f(r)\frac{dn_b(r)}{dr}-\frac{dn_f(r)}{dr}n_b(r)\right],\mathrm{\Omega }\equiv h\xi ^2\int d^3r\frac{dn_f(r)}{dr}\frac{dn_b(r)}{dr},$$ (7) where $`\xi =\sqrt{\hbar /m\omega _0}`$. 
One may use the stationary condition of the ground state, $$2E_{\mathrm{kin}}^+-2E_{\mathrm{ho}}^++3E_{bb}+3E_{bf}=0,2E_{\mathrm{kin}}^{-}-2E_{\mathrm{ho}}^{-}+3E_{bb}+\mathrm{\Delta }^{\prime }=0$$ (8) in order to eliminate in eq.(4) the dependences on $`E_{bb},E_{bf}`$ and $`\mathrm{\Delta }^{\prime }`$. The monopole frequency is then rewritten as $$\frac{\omega _M(\theta )}{\omega _0}=\sqrt{5-\frac{E_{\mathrm{kin}}^++2E_{\mathrm{kin}}^{-}\mathrm{cos}\theta \mathrm{sin}\theta +2\mathrm{\Delta }\mathrm{sin}^2\theta }{E_{\mathrm{ho}}^++2E_{\mathrm{ho}}^{-}\mathrm{cos}\theta \mathrm{sin}\theta }}.$$ (9) We have checked that the Thomas-Fermi calculation of the ground state adopted below gives rise to a negligible difference whether one evaluates expression (9) or expression (4). We calculate the ground state energies and densities of the boson-fermion mixed system in the Thomas-Fermi approximation, which is valid for $`gN_b\gg 1`$ and $`N_f\gg 1`$ except around the surface region. We take the harmonic oscillator length $`\xi `$ and energy $`\hbar \omega _0`$ as units, and define scaled dimensionless variables: the radial distance $`x=r/\xi `$, boson/fermion densities $`\rho _{b,f}(x)=n_{b,f}(r)\xi ^3/N_{b,f}`$, and chemical potentials $`\stackrel{~}{\mu }_{b,f}=2\mu _{b,f}/\hbar \omega _0`$. We solve the coupled Thomas-Fermi equations, $$\stackrel{~}{g}N_b\rho _b(x)+x^2+\stackrel{~}{h}N_f\rho _f(x)=\stackrel{~}{\mu _b},[6\pi ^2N_f\rho _f(x)]^{2/3}+x^2+\stackrel{~}{h}N_b\rho _b(x)=\stackrel{~}{\mu _f},$$ (10) where $`\stackrel{~}{g}=2g/\hbar \omega _0\xi ^3`$ and $`\stackrel{~}{h}=2h/\hbar \omega _0\xi ^3`$. One of the most promising candidates for the realization of the mixed condensate is the potassium isotope system. Precise values of the scattering lengths are not well known at present and different values have been reported . 
We take for the boson-boson interaction the parameters of the <sup>41</sup>K-<sup>41</sup>K system in and a trapping frequency of $`450`$ Hz which gives $`\stackrel{~}{g}=0.2`$. For the boson-fermion interaction we take several values in the range $`h/g=\stackrel{~}{h}/\stackrel{~}{g}=-2.37`$ to $`3.2`$. It should be noted that the interaction strength can be controlled using Feshbach resonances. We have performed a numerical calculation for $`N_b=N_f=10^6`$. In the ground state the fermions have a much broader distribution than bosons because of the Pauli principle. Fermions are further squeezed out of the center for a repulsive boson-fermion interaction ($`h>0`$). They will eventually form a shell-like distribution around the surface of bosons for $`h/g\gtrsim 1`$ and will be completely pushed away from the center ($`n_f(0)=0`$) at around $`h/g\simeq 3`$. For an attractive boson-fermion interaction, on the other hand, the central densities of the bosons and fermions increase together. The system becomes unstable against collapse at around $`h/g=-2.37`$ due to the strong attractive boson-fermion interaction. Figure 1 shows the kinetic energy, the oscillator potential energy, and the interaction energy contributions to the ground state energy against the parameter $`h/g`$. The figure shows also the quantities $`\mathrm{\Delta }`$ and $`\mathrm{\Omega }`$ which represent the contributions of the boson-fermion interaction to the multipole frequencies (4)-(6) and (9). One may notice that the fermionic kinetic- and potential-energy contributions are a few times larger than the bosonic contribution in the present system. It is noted that $`\mathrm{\Delta }`$ takes large negative values in both the large negative and large positive regions of $`h/g`$. In the former region the Bose and Fermi density distributions become coherent due to the attractive interaction, and the radial integral in eq.(7) takes a large positive value. 
In the opposite case ($`h/g\gtrsim 1`$), the same integral changes sign because the fermions are pushed away from the center by the repulsive boson-fermion interaction, thus giving a large negative contribution in the bosonic surface region. In the region $`0<h/g<1`$, on the other hand, the integral is slightly positive and $`\mathrm{\Delta }`$ takes a small positive value. The quantity $`\mathrm{\Omega }`$ follows the same trend as $`\mathrm{\Delta }`$, but the absolute value is much smaller than $`\mathrm{\Delta }`$, as the most important contribution to the integral comes from the surface region where $`r\gg \xi `$. Once the ground state parameters are obtained the frequencies (4)-(6) are minimized against $`\theta `$. Usually, the sum rule approach predicts a strength-weighted average energy of eigenstates for a given multipolarity. The calculated energy coincides with the true excitation energy if the relevant strength is concentrated in a single state. By adopting the minimization procedure we simultaneously determine the character of the low-lying collective mode and the corresponding average energy. The character of the mode is given by the value of $`\theta `$, for instance, $`\theta \simeq \pi /4`$ for the bosonic and $`\theta \simeq -\pi /4`$ for the fermionic modes, $`\theta \simeq 0`$ for the in-phase oscillation and $`\theta \simeq \pi /2`$ for the out-of-phase oscillation of the two types of particles. As there are two kinds of particles involved in eq.(2), one would expect an emergence of two types of collective oscillations for each multipole. Another collective mode would have a character orthogonal to the low-lying mode as far as the phase relation of the two operators in (2) is concerned. In the present approach we calculate the frequency of the latter mode, the high-lying one, from eqs.(4)-(6), by using the operator $`F_\alpha ^+\mathrm{sin}\theta _L-F_\alpha ^{-}\mathrm{cos}\theta _L`$ for each $`\alpha `$, where the mixing angle $`\theta _L`$ is the one determined for the low-lying mode. 
Figure 2 shows frequencies of the lower (solid lines) and the higher (dashed lines) modes for (a) monopole, (b) quadrupole and (c) dipole cases as functions of $`h/g`$. The corresponding mixing angles $`\theta `$ determined by the minimization procedure for the lower mode are plotted in Fig.3 for each multipolarity as a function of $`h/g`$. Below we discuss the behavior of the frequencies $`\omega _\alpha `$ by defining three regions of $`h/g`$: (I) $`h/g<0`$, (II) $`0<h/g\lesssim 1`$, (III) $`1\lesssim h/g`$. a)monopole: For a non-interacting boson-fermion system the low-lying monopole mode is the fermionic oscillation with frequency $`\omega _M^L=2\omega _0`$, while the higher mode is the bosonic one with $`\omega _M^H=\sqrt{5}\omega _0`$ in the Thomas-Fermi approximation. Around $`h\simeq 0`$ one may obtain $`\omega _M^L\simeq 2\omega _0\left(1+A\stackrel{~}{h}N^{\frac{1}{6}}\right)`$, $`\omega _M^H\simeq \sqrt{5}\omega _0\left(1-A^{\prime }\stackrel{~}{h}\stackrel{~}{g}^{\frac{1}{5}}N^{\frac{3}{10}}\right)`$, where $`A=36^{\frac{1}{6}}/4\sqrt{2}\pi ^2`$ and $`A^{\prime }=7\sqrt{3}/20\pi ^2(8\pi /15)^{\frac{1}{5}}`$. The mixing angle for the lower mode is given as $`\theta _M=-\pi /4-\delta _M`$ with $`\delta _M=(56^{\frac{1}{6}}/\sqrt{2}\pi ^2)\stackrel{~}{h}N^{\frac{1}{6}}`$. This behavior is seen in region (II) where the boson-fermion interaction is weakly repulsive and the lower monopole mode is of a fermionic character, see Fig.3 (solid line). Bosonic oscillation in this region is more rigid than the fermionic one because of the repulsive interaction among bosons. In region (I), the situation is quite different: The low-lying mode becomes a coherent boson-fermion oscillation as represented by the large negative value of $`\mathrm{\Delta }`$, and the excitation energy shows a sharp decrease towards the instability point $`h/g\simeq -2.37`$ of the ground state, although $`\omega _M`$ does not become exactly zero within our approximation. 
In this region the attractive boson-fermion interaction is much more effective in the excited state than in the ground state and cancels out the increase in the kinetic energy. In region (III), too, we find that the low-lying mode becomes an in-phase oscillation. Here, the boson and the fermion densities in the ground state are somewhat separated, and the in-phase motion which keeps this separation is energetically more favorable than the out-of-phase motion as seen in the value of $`\mathrm{\Delta }`$. b)quadrupole: For the quadrupole excitation (Fig.2(b)), the lower (higher) energy mode is almost the pure bosonic (fermionic) oscillation over the broad range of the $`h/g`$ values studied, Fig.2 (dashed line). To the first order in $`\stackrel{~}{h}`$ the frequencies of the lower and the higher quadrupole modes are given by $`\omega _Q^L\simeq \sqrt{2}\omega _0\left(1-B\stackrel{~}{h}N^{\frac{1}{6}}\right)`$, $`\omega _Q^H\simeq 2\omega _0(1-B^{\prime }\stackrel{~}{h}\stackrel{~}{g}^{\frac{2}{5}}N^{\frac{7}{30}})`$, where $`B=6^{\frac{1}{6}}/4\sqrt{2}\pi ^2`$ and $`B^{\prime }=(15/8\pi )^{\frac{2}{5}}/(146^{\frac{1}{6}}\sqrt{2}\pi ^2)`$. The corresponding mixing angle for the lower mode is given by $`\theta _Q=\pi /4+\delta _Q`$ with $`\delta _Q=2^3B^{\prime }\stackrel{~}{h}\stackrel{~}{g}^{\frac{2}{5}}N^{\frac{7}{30}}`$. For the quadrupole mode similar mechanisms as for the monopole mode are at work concerning the dependence on $`|h/g|`$. The role of the boson-fermion interaction is however much reduced compared with the monopole case as seen by the factor 2/5 of eq.(5), which reflects that the quadrupole oscillation has five different components. Thus the quadrupole mode obtains an in-phase character only at large values of $`|h/g|`$. In the other region of $`|h/g|`$ the low-lying mode becomes a simple bosonic oscillation. 
This is because the fermionic mode costs a larger kinetic energy and favors $`\theta =\pi /4`$ as seen in the term $`(E_{\mathrm{kin}}^b-E_{\mathrm{kin}}^f)\mathrm{sin}\theta \mathrm{cos}\theta `$ in eq.(5). c)dipole: General arguments show that for a harmonic oscillator external potential a uniform shift of the ground state density generates an eigenstate of the system, corresponding to the boson-fermion in-phase dipole oscillation with frequency $`\omega _0`$. This is evident in Fig.2c) and also in eq.(6) at $`\theta =0`$. In the regions (I) and (III) the out-of-phase oscillation is unfavorable due to the same reason as for the monopole mode: It loses the energy gained in the ground state boson-fermion configuration. For a weakly repulsive $`h`$ an interesting possibility arises: In the region (II) the out-of-phase mode of the boson-fermion oscillation lies lower than the in-phase mode. Let us first note that at $`h=0`$ the out-of-phase oscillation frequency becomes degenerate with the in-phase one because the bosonic and the fermionic dipole modes are independent. One may note that at $`h/g\simeq 1`$ the degeneracy occurs again. Here the potential term for the fermion becomes almost linear in the fermion density itself (see eq.(10)). This suggests that the fermion density is determined almost entirely by the chemical potentials and becomes nearly constant as long as the boson density is finite. A uniform dipole shift of fermions thus produces almost no effect on bosons, and results in the degeneracy of the frequency. Between $`h/g=0`$ and 1, the boson-fermion repulsion is weaker for the out-of-phase oscillation than for the in-phase one (and hence the ground state) as reflected in the sign of $`\mathrm{\Omega }`$. In the present paper, we studied collective oscillations of trapped boson-fermion mixed condensates using the sum rule approach. 
We introduced a mixing angle of bosonic and fermionic multipole operators so as to study whether the in- or out-of-phase motion of those particles is favored as a function of the boson-fermion interaction strength. For the monopole and quadrupole cases, the coupling of the Bose- and Fermi-type oscillations is not large for moderate values of the coupling strength $`h`$. At large values of $`|h/g|`$, the low-lying modes become an in-phase oscillation of bosons and fermions. This is especially so for the monopole oscillation with an attractive boson-fermion interaction: The excitation energy of this mode almost vanishes at the instability point of the ground state. In the case of the dipole mode, in contrast, the in-phase oscillation remains an exact eigenmode with a fixed energy for harmonic oscillator potentials, while the average energy of the out-of-phase oscillation is strongly dependent on the boson-fermion interaction. We found that for weakly repulsive values of the interaction the out-of-phase motion stays lower than the in-phase oscillation. In this paper we also calculated the frequencies of the high-lying modes for each multipole, by adopting the operators orthogonal to the low-lying modes. These modes, too, are collective in character and, in the present framework, their average frequencies showed rather strong dependences on the boson-fermion coupling strength. Deeper insight into the collective modes studied in this paper will require a detailed investigation of the solutions of, e.g., the self-consistent RPA type equations that allow an arbitrary radial dependence of the excitation operators. Studies in this direction are now in progress.
# Cyclotron lines in X-ray pulsars as a probe of relativistic plasmas in superstrong magnetic fields ## Introduction Accreting magnetized neutron stars are an ideal cosmic laboratory for high energy relativistic physics. Cyclotron resonance features are the signature of the presence of a superstrong magnetic field, following the first discovery in Her X-1 (Trümper et al.). These features are due to the discrete Landau energy levels for motion of free electrons perpendicular to the field in the presence of a locally uniform superstrong magnetic field. A slight deviation from a pure harmonic relationship in the spacing of the different levels is expected due to relativistic effects ($`\frac{\omega _n}{m_e}=((1+2n\frac{B}{B_{\mathrm{crit}}}\mathrm{sin}^2\theta )^{\frac{1}{2}}-1)/\mathrm{sin}^2\theta `$, e.g. Araya and Harding). Therefore the detection of these features in the emitted X–ray spectra is in principle a direct measure of the field intensity. As the number of sensitive measurements in the hard X–ray interval (above $`\sim `$10 keV) is continuously growing, a sample is available to search for possible correlations between the observed parameters. A detailed modeling is difficult and a parametrized shape of the continuum is still not available from theoretical models, but substantial advances in our understanding of the radiation transport in strongly magnetized atmospheres were made in the last decade (e.g. Alexander et al., Alexander and Mészáros, Araya and Harding, Isenberg et al., Nelson et al.). Some of these new results focused on the properties of the cyclotron resonance features observed in the spectra of accreting X–ray pulsars. In this report we discuss the current status of the measurements of cyclotron lines, with emphasis on the possible correlations between observable parameters. 
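The quoted expression for the Landau levels makes the anharmonicity easy to quantify. The short sketch below (an illustrative field strength and viewing angle, not values from any of the sources discussed) evaluates the level energies and shows that the ratios $`\omega _n/\omega _1`$ fall slightly below the harmonic values $`n`$.

```python
import numpy as np

# Relativistic Landau-level energies from the expression quoted above,
#   omega_n/m_e = (sqrt(1 + 2 n (B/B_crit) sin^2(theta)) - 1) / sin^2(theta),
# illustrating the deviation from a purely harmonic spacing.  The values of
# B/B_crit and theta below are illustrative choices.
b = 0.05                       # B / B_crit
theta = np.pi / 2              # electron motion perpendicular to the field
s2 = np.sin(theta) ** 2
n = np.arange(1, 5)
w = (np.sqrt(1.0 + 2.0 * n * b * s2) - 1.0) / s2   # omega_n in units of m_e
ratios = w / w[0]              # slightly below the harmonic values 2, 3, 4
print(ratios)
```

For small $`B/B_{\mathrm{crit}}`$ the spacing is nearly harmonic, and the compression of the higher levels grows with the field, which is why the anharmonicity itself carries information about the field intensity.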
## The data The BeppoSAX satellite has observed all the bright persistent and three bright transient (recurrent) accreting X–ray pulsars. Apart from the case of X Persei (Di Salvo et al.), a source with a luminosity substantially lower than the other sources in the sample, the spectra observed by BeppoSAX can be empirically described using the classical power–law–plus–cutoff spectral function by White et al. The sensitive broad band BeppoSAX observations also allowed the detailed characterization of low energy components below a few keV (like in Her X–1, Dal Fiume et al., and in 4U1626–67, Orlandini et al.) and the detection of absorption features in the hard X–ray range of the spectra, interpreted as cyclotron resonance features. A summary of the properties of the broad band spectra and of the cyclotron lines as measured with BeppoSAX is reported in Dal Fiume et al. From these measurements we obtained evidence of a correlation between the centroid energy of the feature and its width. This correlation is presented and discussed elsewhere (Dal Fiume et al.). ### Transparency in the line A straightforward parameter to be obtained from observations is the transparency in the line, defined as the ratio between the transmitted observed flux and the integrated flux from the continuum without the absorption feature. This ratio likely depends on the harmonic number of the feature we are observing (e.g. Wang et al.) and on the physical parameters of the specific accretion column. From an observational point of view, this ratio is strongly affected by the modelization of the “continuum” shape, that is by the spectral shape used to describe the differential broad band photon number spectrum. 
In Figure 1 we report the observed transparencies obtained by dividing the observed by the expected flux, both integrated in a $`\pm 2\sigma `$ interval around the line centroid (here $`\sigma `$ is the Gaussian width of the measured cyclotron feature). To further emphasize the uncertainty in this estimate, we added a 10% error to the data. The purely statistical uncertainties are substantially smaller. This measured transparency is related to a simple physical parameter, the opacity to photons with energy approximately equal to the cyclotron resonance energy. However the emerging integral flux and the shape of the line itself are non-trivially related to the radiation transport in this energy interval, a rather difficult problem to be solved. From the phenomenological point of view, one can observe that the measured transparencies cluster around 0.5–0.6, with the notable exception of Cen X–3. ### Magnetic field intensity and spectral hardness The influence of the magnetic field intensity on the broad band spectral shape is debated. Early attempts to estimate a possible dependence of electron temperature, and therefore of broad band spectral shape, on the magnetic field intensity were made by Harding et al. Actually they conclude that “the equilibrium atmospheres have temperatures and optical depths that are very sensitive to the strength of the surface magnetic field”. If this is the case and if the broad band spectral hardness is related, as one could naively assume, to the temperature of the atmosphere, some correlation between this hardness and the cyclotron line energy should appear in the data. This seems to be the case, as shown in Figure 2. Here we report the ratio between photon fluxes in two “hard” bands (the flux in 20–100 keV divided by the flux in 7–15 keV) versus the cyclotron line centroid. The ratio between the two fluxes is affected by the choice of the continuum, as in Figure 1. 
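The transparency estimate described above can be illustrated with a toy model: a continuum spectrum multiplied by a Gaussian absorption line, with both fluxes integrated over the same $`\pm 2\sigma `$ band. The continuum slope and the line parameters below are placeholders, not fits to any of the sources discussed.

```python
import numpy as np

# Sketch of the transparency estimate: observed flux divided by continuum
# flux, both integrated over +/- 2 sigma around the line centroid.  All
# spectral parameters here are illustrative placeholders.
E = np.linspace(1.0, 100.0, 20000)             # energy grid (keV)
gamma, Ec, sigma, depth = 1.0, 35.0, 6.0, 0.6  # assumed continuum/line values
cont = E ** (-gamma) * np.exp(-E / 15.0)       # power law with exp. cutoff
line = 1.0 - depth * np.exp(-((E - Ec) ** 2) / (2 * sigma**2))
obs = cont * line                              # "observed" absorbed spectrum

band = (E > Ec - 2 * sigma) & (E < Ec + 2 * sigma)
transparency = obs[band].sum() / cont[band].sum()  # Riemann sums, fine grid
print(transparency)                            # between 1 - depth and 1
```

Because the Gaussian does not fully absorb the band edges, the integrated transparency always exceeds $`1-`$depth, which is one reason the measured values depend sensitively on the assumed continuum shape.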
We therefore also in this case added 10% error bars that indicate this uncertainty. The statistical errors are substantially smaller. The number of sources in this plot is still very limited and therefore one cannot exclude that this apparent correlation is merely due to the limited size of the sample. Nevertheless the apparent correlation is in the right direction, i.e. harder spectra are observed for higher field intensities. We parenthetically add that no cyclotron resonance feature was observed in the pulse–phase averaged spectra of the two hardest sources of this class observed with BeppoSAX (GX1+4, Israel et al., and GS1843+00, Piraino et al.). If this correlation proves to be correct, this may suggest that cyclotron resonance features in these two sources should be searched for at the upper limit of the BeppoSAX energy band or beyond. ### Conclusions In conclusion, even if no complete parametric theoretical approach to model the observed spectra of accreting X–ray pulsars is yet available, some quantitative measures of parameters of hot plasmas in superstrong magnetic fields are possible. Modeling the transparency in the cyclotron resonance feature is a complex problem. Further information will be extracted from maps of this transparency as a function of pulse phase. The correlation between spectral hardness and field intensity is in agreement with theoretical models. This correlation, if confirmed, can be used as a rough estimate of the magnetic field intensity from the measured spectral hardness. ###### Acknowledgements. This research is supported by Agenzia Spaziale Italiana (ASI) and Consiglio Nazionale delle Ricerche (CNR) of Italy. BeppoSAX is a joint program of ASI and of the Netherlands Agency for Aerospace Programs (NIVR).
# Circulating electrons, superconductivity, and the Darwin-Breit interaction ## I Introduction Arguments and results will be presented that hopefully convince the open-minded reader that superconductivity is caused by the Darwin-Breit (magnetic) interaction between semiclassical electrons. The starting point is a careful study of the model problem of electrons on a circle. This simple model is chosen since it allows accurate treatment of the notoriously difficult problem of relativistic and magnetic effects in many-electron systems. Since classical ideas are closer to our intuition, the classical picture is taken as far as possible before quantum mechanics is reluctantly adopted. The semiclassical point of view is an extremely powerful one Brack and Bhaduri (1997); Gutzwiller (1990) and the reader will find further examples of this below. Relativistic quantities, to a first approximation, have a magnitude $`(v/c)^2`$ times those of non-relativistic quantities. While this is always small in everyday life, in the atomic world this parameter is $`10^{-4}`$, which is fairly small, but rarely negligible. A striking example of this is the energy gap in superconductors, which typically is of order $`10^{-4}`$ of the Fermi energy. Any study of this phenomenon that does not take relativistic effects into account must consequently remain inconclusive. The Darwin-Breit interaction Darwin (1920); Breit (1929, 1932) is the first order relativistic correction, $$V_1=-\underset{i<j}{\overset{N}{\sum }}\frac{e^2}{c^2}\frac{𝒗_i\cdot 𝒗_j+(𝒗_i\cdot 𝒆_{ij})(𝒗_j\cdot 𝒆_{ij})}{2r_{ij}},$$ (1) to the Coulomb potential. Sucher (1998) in a recent review (What is the force between two electrons?) gives a thorough discussion of its origin in QED. 
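The size of the Darwin-Breit term relative to the Coulomb interaction is easy to estimate numerically. The sketch below (an illustrative two-electron configuration with parallel velocities of typical atomic magnitude, $`v/c`$ of about $`1/137`$; units with $`e^2=r_{ij}=1`$ are assumed) evaluates the ratio of the magnitudes and recovers the $`(v/c)^2`$ scaling quoted above.

```python
import numpy as np

# Order-of-magnitude comparison of the Darwin-Breit term (1) with the Coulomb
# energy for two electrons.  Units with e^2 = r12 = 1 and velocities in units
# of c; the parallel geometry and speed v/c = alpha are illustrative choices.
alpha = 1.0 / 137.036                          # typical atomic v/c
v1 = np.array([alpha, 0.0, 0.0])               # parallel velocities
v2 = np.array([alpha, 0.0, 0.0])
e12 = np.array([0.0, 0.0, 1.0])                # unit separation vector

V_coul = 1.0                                   # e^2 / r12 in these units
darwin = (np.dot(v1, v2) + np.dot(v1, e12) * np.dot(v2, e12)) / 2.0
ratio = abs(darwin) / V_coul
print(ratio)                                   # ~ alpha^2 / 2, a few times 1e-5
```

The result is a few times $`10^{-5}`$, consistent with the $`10^{-4}`$ figure quoted in the text, and for parallel velocities the sign of $`V_1`$ is negative, i.e. the magnetic term is attractive.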
While well known as an important perturbation in accurate atomic calculations Strange (1998); Cook (1988), it has until recently (Essén Essén (1995, 1996, 1997, 1999)) usually been taken for granted, without proof or justification, that it is negligible in larger systems. Welker Welker (1938) suggested in 1939 that magnetic attraction of parallel currents might cause superconductivity, but after that the idea seems to have been forgotten. Other types of magnetic interaction have been suggested though Mathur et al. (1998). Some efforts to include the Darwin-Breit interaction in density functional approaches to solids are reviewed in Strange Strange (1998). Capelle and Gross Capelle and Gross (1995) have also made efforts towards a relativistic theory of superconductivity. In section II we introduce the analytical mechanics of particles on a circle and apply it to mesoscopic rings. This serves to introduce the mathematical model and also throws some light on the theory behind the persistent currents found in such rings. We later find that, though these rings are not superconducting, electron pairing might be relevant to understanding their physics. Section III closes in upon the main subject of superconductivity. The London moment formula connecting the angular velocity of a superconducting body and the magnetic field it produces is introduced and motivated. The formula, together with classical electromagnetism, can be used to calculate the number of superconducting electrons present. This number is found to be determined entirely by fundamental constants and the size of the body. Finally, in section IV the importance of the Darwin-Breit interaction is investigated. We show how it can lead to electron pairing and calculate the relevant temperatures at which these pairs form. We also investigate when the interaction might become dominating and find that exactly the combination of number, size, and fundamental constants that followed from the London moment is the condition for this.
When the condition is fulfilled the particles no longer move individually, or in pairs, but collectively. The behavior of this condition as a function of spatial dimension is investigated. Interestingly, it is found that the one-dimensionality of the ring enhances pair-formation but suppresses collective behavior (superconductivity). After that the conclusions are summarized. ## II Rings, persistent currents, and flux periodicity In solid state physics cold mesoscopic metal rings have attracted a lot of attention, in particular since theoretical predictions Büttiker et al. (1983); Bloch (1970) that an external magnetic flux through the ring causes a persistent current round it have been experimentally verified Webb et al. (1985); Levy et al. (1990); Chandrasekhar et al. (1991). The agreement between theory and experiment is, however, still far from perfect Johnson and Kirczenow (1998), for reviews see Imry and Peshkin (1997); Imry (1997). One normally assumes that it is correct to treat the conduction electrons semiclassically; one speaks about ballistic electrons Brack and Bhaduri (1997); Imry (1997), and we will do so here. Superconductivity is not treated in this section, but we assume that the rings are perfect conductors (have zero resistance). ### II.1 Charged particles on a circle We now set up the model problem of charged particles constrained to move on a circle. Assuming that the circle has radius $`R`$, positions and velocities are given by $$𝒓_i(\phi _i)=R𝒆_\rho (\phi _i),\quad \text{and}\quad 𝒗_i(\phi _i,\dot{\phi }_i)=R\dot{\phi }_i𝒆_\phi (\phi _i),$$ (2) where $`𝒆_\rho (\phi )=\mathrm{cos}\phi 𝒆_x+\mathrm{sin}\phi 𝒆_y`$ and $`\dot{𝒆}_\rho =\dot{\phi }𝒆_\phi `$, as usual.
We take the zeroth order Lagrangian to be $$L_0=T_0-V_0=\frac{1}{2}\sum _{i=1}^{N}m_iR^2\dot{\phi }_i^2-V_0(\phi _1,\mathrm{},\phi _N).$$ (3) Since we will have metallic conduction electrons in mind the potential $`V_0`$ does not necessarily represent the Coulomb interactions, but rather interactions with the lattice plus, possibly, Debye screened two particle interactions. The generalized (angular) momenta are $`J_i=\partial L_0/\partial \dot{\phi }_i=m_iR^2\dot{\phi }_i`$ so the Hamiltonian is $$H_0=\sum _{i=1}^{N}\frac{J_i^2}{2m_iR^2}+V_0.$$ (4) If there is a magnetic flux $`\mathrm{\Phi }=\int 𝑩\cdot d𝒔=\oint 𝑨\cdot d𝒓=2\pi RA_\phi `$ through the ring the Hamiltonian changes to $$H_0=\sum _{i=1}^{N}\frac{1}{2m_i}\left(\frac{J_i}{R}-\frac{e_i}{c}A_\phi \right)^2+V_0=\sum _{i=1}^{N}\frac{1}{2m_iR^2}\left(J_i-\frac{e_i}{2\pi c}\mathrm{\Phi }\right)^2+V_0,$$ (5) since $`A_\phi =\mathrm{\Phi }/(2\pi R)`$. We find the equations of motion $`\dot{J}_i=-{\displaystyle \frac{\partial H_0}{\partial \phi _i}}=-{\displaystyle \frac{\partial V_0}{\partial \phi _i}},`$ (6) $`\dot{\phi }_i={\displaystyle \frac{\partial H_0}{\partial J_i}}={\displaystyle \frac{J_i}{m_iR^2}}-{\displaystyle \frac{e_i\mathrm{\Phi }}{2\pi cm_iR^2}}.`$ (7) The current round the ring is by definition $$I=\sum _{i=1}^{N}e_i\frac{\dot{\phi }_i}{2\pi }=\frac{1}{2\pi }\sum _{i=1}^{N}\left(\frac{e_iJ_i}{m_iR^2}-\frac{e_i^2\mathrm{\Phi }}{2\pi cm_iR^2}\right)\equiv I_0+I_\mathrm{\Phi }.$$ (8) One notes that the relation $$I=-c\frac{\partial H_0}{\partial \mathrm{\Phi }}$$ (9) holds. For non-interacting particles on the ring we have $$V_0=\sum _{i=1}^{N}U_0(\phi _i).$$ (10) Then $`H_0=\sum _iH_i(J_i,\phi _i)`$ where $`H_i`$ are constants of the motion, $`H_i=E_i`$, whether there is a flux or not. There are then the adiabatic invariants Landau and Lifshitz (1976) $$I_{\phi _i}\equiv \frac{1}{2\pi }\oint J_i(\phi _i;E_i,\mathrm{\Phi })\,d\phi _i=\overline{J_i},$$ (11) the averages, $`\overline{J_i}`$, of the $`J_i`$ round the ring.
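As a small numerical illustration (with arbitrary, non-physical parameter values; this check is not part of the original derivation), the relation $`I=-c\partial H_0/\partial \mathrm{\Phi }`$ of equation (9) can be verified for a single particle by finite differences:

```python
import math

# Single particle on a ring threaded by a flux Phi; m, R, c, e are
# illustrative values in Gaussian-style units, not physical constants.
m, R, c, e = 1.0, 2.0, 10.0, -1.0

def H0(J, Phi):
    # One-particle Hamiltonian, Eq. (5)
    return (J - e * Phi / (2 * math.pi * c)) ** 2 / (2 * m * R ** 2)

def I(J, Phi):
    # Current e*phidot/(2*pi), with phidot from Eq. (7)
    return e * (J - e * Phi / (2 * math.pi * c)) / (2 * math.pi * m * R ** 2)

J, Phi, h = 0.7, 3.0, 1e-6
lhs = I(J, Phi)
rhs = -c * (H0(J, Phi + h) - H0(J, Phi - h)) / (2 * h)  # -c dH0/dPhi, Eq. (9)
```

Since $`H_0`$ is quadratic in the flux, the central difference reproduces the derivative to machine precision.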
If the flux is turned on slowly they will retain their zero flux values. The zero flux average current $$\overline{I_0}=\frac{1}{2\pi R^2}\sum _{i=1}^{N}\frac{e_i\overline{J_i}}{m_i}$$ (12) is thus also an adiabatic invariant, and remains constant. This means that slowly turning on a flux $`\mathrm{\Phi }`$ through the ring results in the extra diamagnetic circulating current $$I_\mathrm{\Phi }=-\frac{\mathrm{\Phi }}{4\pi ^2R^2}\sum _{i=1}^{N}\frac{e_i^2}{m_ic}$$ (13) independently of any pre-existing current. Below we will see that the above result can also be obtained using Larmor’s theorem and thus, in fact, is independent of electron interactions, provided other conditions are fulfilled. ### II.2 Two types of current We find that there are two different types of current possible in these rings. The ‘ballistic’ current $`I_0`$, which should be, at most Geller (1996), order of magnitude a few $`ev_\text{F}/(2\pi R)`$, where $`v_\text{F}`$ is the Fermi velocity, and the Larmor current $`I_\mathrm{\Phi }`$ induced by the flux. Assuming that only electrons contribute, (13) becomes $$I_\mathrm{\Phi }=-\frac{\mathrm{\Phi }}{4\pi ^2R^2}N\frac{e^2}{mc}.$$ (14) Putting $$\mathrm{\Phi }=n_\varphi \frac{hc}{|e|}\equiv n_\varphi \mathrm{\Phi }_0,$$ (15) where $`n_\varphi `$ is dimensionless and $`\mathrm{\Phi }_0=hc/|e|`$ is the flux quantum, we get the expression $`I_\mathrm{\Phi }\pi R^2=-Nn_\varphi \mu _\mathrm{B}`$. Here $`\mu _\mathrm{B}=|e|\hbar /(2m)`$ is the Bohr magneton. Gaussian units are used in most formulas; to get equation (14) in SI-units we simply delete $`c`$. If the flux is $`\mathrm{\Phi }=B\pi R^2`$ we can then rewrite it in the form $$I_\mathrm{\Phi }=-NB\times 2.242\,\mathrm{nA}/\mathrm{T}.$$ (16) To get a number out of this formula we must estimate the number $`N`$ of semiclassical electrons and know the magnetic field in teslas.
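The numerical coefficient in equation (16) follows from the SI form of (14), where the Larmor current per electron and per tesla is $`e^2/(4\pi m)`$; a quick check with CODATA values:

```python
import math

# Magnitude of the Larmor current per electron and per tesla, Eq. (16):
# |I_Phi|/(N B) = e^2/(4 pi m_e) in SI units.
e = 1.602176634e-19      # elementary charge, C (CODATA)
m_e = 9.1093837015e-31   # electron mass, kg (CODATA)
coeff_nA_per_T = e ** 2 / (4 * math.pi * m_e) * 1e9   # in nA/T
```

The result is 2.242 nA/T per electron, as quoted in the text.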
The speed corresponding to the Larmor current is, in atomic units, $`v_\mathrm{\Phi }=n_\varphi /R\ll v_\text{F}=1.92/r_s`$, where $`r_s`$ is the radius parameter. On the other hand all semiclassical electrons contribute to $`I_\mathrm{\Phi }`$, whereas the number contributing to $`I_0`$ necessarily is small. Levy et al. Levy et al. (1990) found an average current of $`I_{\mathrm{av}}=3\times 10^{-3}ev_\text{F}/\ell =0.36`$ nA in their Cu-rings, of circumference $`\ell =2.2\mu `$m. If this is interpreted as a Larmor-current we can calculate $`N`$. At the magnetic field $`B_0=1.3\times 10^{-2}`$ T corresponding to the flux quantum $`\mathrm{\Phi }_0`$ this gives the reasonable result $`N\sim 100`$ for the number of semiclassical electrons in the system. Chandrasekhar et al. Chandrasekhar et al. (1991), on the other hand, found currents $`I=`$(0.3 – 2.0) $`ev_\text{F}/(2\pi R)`$ in a single gold ring. These can thus only be interpreted as due to ballistic currents. They might be due to electron pairs, which may form even in the normal state, as we will see below. ### II.3 Larmor’s theorem Consider a system of particles, all with the same charge to mass ratio $`e/m`$. Assume that they move in a common external potential, $`U_e(\rho ,z)`$, that is axially symmetric, i.e. independent of $`\phi `$, under the influence of arbitrary interparticle interactions. Now place this system in a weak magnetic field, $`B_z`$, along the $`z`$-axis. One can then apply Larmor’s theorem Strange (1998); Essén (1989) to show that the response of the system to this field is a rotation with angular velocity $$\mathrm{\Omega }_z=-\frac{e}{2mc}B_z$$ (17) given by the Larmor frequency. This means that there will be a circulating Larmor current $$I_L=Ne\frac{\mathrm{\Omega }_z}{2\pi }=-\frac{B_z}{4\pi }N\frac{e^2}{mc}$$ (18) where $`Ne`$ is the total amount of charge on the particles ($`N`$ is not necessarily the number of particles). If we insert $`B_z=\mathrm{\Phi }/(\pi R^2)`$ we recover essentially equation (14).
This is why we called $`I_\mathrm{\Phi }`$ the Larmor current. Note that we derived (14) under the assumption of arbitrary charge to mass ratios $`e_i/m_i`$ but no interparticle interaction. Here we need identical charge to mass ratios $`e/m`$ and an axially symmetric external field but can have arbitrary interactions between the particles. The general results (14) and (16) for semiclassical electrons (or electron pairs or groups) in cold metal rings thus seem fairly reliable. It is noteworthy that the result of equation (13) is not necessarily due to any magnetic field affecting the particles. The flux $`\mathrm{\Phi }`$ could very well go through a smaller surface completely inside the ring material. This means that the current in (13) is a classical Aharonov-Bohm effect Aharonov and Bohm (1959). That is, an effect due to the vector potential at zero magnetic field. By contrast the Larmor result (18) is derived assuming that the magnetic field penetrates the ring. ### II.4 Quantizing the electron on the circle and flux periodicity The above results are purely classical. When we quantize them we will find that physical properties must be periodic in (half?) the flux quantum, as will now be shown. Our previous classical results for currents must be thought of as averages over these quantum periods (beats). Flux quantization was originally suggested by London London (1961), for a thorough discussion see Thouless Thouless (1998). The classical Hamiltonian of an electron moving freely on a circle of radius $`R`$ threaded by a flux $`\mathrm{\Phi }`$ is, according to equation (5), $$H=\frac{1}{2mR^2}\left(J+\frac{|e|}{2\pi c}\mathrm{\Phi }\right)^2.$$ (19) We quantize this by letting $`J\rightarrow \widehat{J}=-\mathrm{i}\hbar \partial /\partial \phi `$ and thus get the Schrödinger equation $$\frac{\hbar ^2}{2mR^2}\left(-\mathrm{i}\frac{\partial }{\partial \phi }+n_\varphi \right)^2\psi (\phi )=E\psi (\phi ),$$ (20) where we have used equation (15).
Putting $$\psi (\phi )=\mathrm{exp}(-\mathrm{i}n_\varphi \phi )\psi ^{}(\phi )$$ (21) we get $$-\frac{\hbar ^2}{2mR^2}\frac{\partial ^2}{\partial \phi ^2}\psi ^{}=E\psi ^{}$$ (22) for the gauge transformed wave function. It is now frequently argued Byers and Yang (1961); Bloch (1970) that the wave function must be single valued and that therefore $$\psi (\phi +2\pi )=\psi (\phi ).$$ (23) Via (21) this leads to the physical condition $$\psi ^{}(\phi +2\pi )=\mathrm{exp}(\mathrm{i}n_\varphi 2\pi )\psi ^{}(\phi )$$ (24) on the solutions of (22), where the flux has been transformed away. This boundary condition is unchanged if $`n_\varphi `$ changes by unity. This implies that physical quantities must be periodic in the flux with period $`\mathrm{\Phi }_0`$. The above argument is not necessarily reliable, however. The correct wave function for an electron is a spinor (in the non-relativistic case a two component spinor). A spinor is well known to change sign when rotated by $`2\pi `$. The question is then: will the spinor rotate as the electron travels round the circle? A free electron is known to have conserved helicity, the projection of the spin on the momentum. As the ring radius is large compared to atomic dimensions the electron momentum turns slowly and it seems reasonable that the helicity will remain conserved (as an adiabatic invariant). This, of course, means that the spinor must rotate with the momentum. The conclusion of all this is that the correct condition on the spinor wave function, for a single electron, should be $$\psi (\phi +4\pi )=\psi (\phi ),$$ (25) and thus that $$\psi ^{}(\phi +4\pi )=\mathrm{exp}(\mathrm{i}n_\varphi 4\pi )\psi ^{}(\phi ).$$ (26) This condition is unchanged whenever $`n_\varphi `$ changes by one half. I.e. physical quantities must be periodic in the flux with period $`\mathrm{\Phi }_0/2`$. Note that the same result is obtained if $`|e|`$ in equation (19) is changed to $`2|e|`$.
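The two periodicities can be made concrete with a minimal numerical sketch: the spectrum of equation (20) is $`E_n=(n+n_\varphi )^2`$ in units of $`\hbar ^2/(2mR^2)`$, with $`n`$ integer, and the ground state energy as a function of the reduced flux $`n_\varphi `$ directly exhibits the period:

```python
# Dimensionless ground-state energy for the spectrum E_n = (n + n_phi)^2
# following Eq. (20); nmax truncates the (infinite) set of integers n.
def ground_energy(n_phi, nmax=50):
    return min((n + n_phi) ** 2 for n in range(-nmax, nmax + 1))

# Single-valued boundary condition, Eq. (24): period Phi_0 (n_phi -> n_phi + 1)
period_1 = abs(ground_energy(0.3) - ground_energy(1.3))
# Spinor condition, Eq. (26), equivalent to |e| -> 2|e|: period Phi_0/2
period_half = abs(ground_energy(2 * 0.3) - ground_energy(2 * (0.3 + 0.5)))
```

Both differences vanish, i.e. the spectrum repeats with period $`\mathrm{\Phi }_0`$ in the single-valued case and with period $`\mathrm{\Phi }_0/2`$ in the spinor case.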
The $`n_\varphi `$ in (20) changes to $`2n_\varphi `$ and equation (24) becomes identical to (26). In conclusion the observation of the $`\mathrm{\Phi }_0/2`$ periodicity does not necessarily imply electron pairs. It might be due to single electrons going round the ring with conserved helicity. Both the $`\mathrm{\Phi }_0`$ and the $`\mathrm{\Phi }_0/2`$ periodicities have been experimentally observed Deaver and Fairbank (1961); Doll and Näbauer (1961); Gough et al. (1987); Webb et al. (1985); Levy et al. (1990); Chandrasekhar et al. (1991). ## III Rotating superconductors and the number of superconducting electrons There is another surprising result concerning circulating electrons that is easily explained by Larmor’s theorem (17). London London (1961) showed (see also Brady (1982); Cabrera and Peskin (1989); Liu (1998)), using his phenomenological theory of superconductivity, that a superconducting sphere that rotates with angular velocity $`𝛀`$ will have an induced magnetic field (Gaussian units) $$𝑩=\frac{2mc}{|e|}𝛀$$ (27) in its interior. Here $`m`$ and $`e`$ are the mass and charge of the electron. This prediction has been experimentally verified with considerable accuracy and is equally true for high temperature and heavy fermion superconductors Tate et al. (1989); Sanzari et al. (1996). With minor modifications it is also valid for other axially symmetric shapes of the body, for example cylinders or rings. ### III.1 Understanding the London moment The London field, or ‘moment’, (27) can be thought of as follows. Assume that the superconducting body can be viewed as a system of interacting particles with the electronic charge to mass ratio confined by an axially symmetric external potential. When the body rotates we can transform the equations of motion to a co-rotating system, in which it is at rest, but in this system the particles will be affected by a Coriolis force $`-2m𝛀\times 𝒗`$.
Larmor’s theorem teaches us that such a Coriolis force is equivalent to an external magnetic field. Magnetic fields are, however, not allowed inside superconductors according to the Meissner effect. To get rid of the Coriolis forces the rotation induces surface supercurrents that produce a suitable compensating magnetic field $`𝑩`$. The Lorentz force of this field is $`-(|e|/c)𝒗\times 𝑩`$. Provided the relation between $`𝑩`$ and $`𝛀`$ is given by (27) the two forces cancel. The equations of motion in the rotating system are then the same, in the interior, as if the system did not rotate. The disturbance from the rotation on the dynamics is minimized. The above explanation may sound compelling, but the most direct way of understanding formula (27) is, in fact, much simpler. The superconducting electrons, which are always found just inside the surface London (1961), are not dragged by the positive ion lattice so when it starts to rotate the superconducting electrons ignore this and remain in whatever motion they prefer. This, however, means that there will be an uncompensated motion of positive charge density on the surface of the body. This surface charge density, $`\sigma `$, will, of course, be the same as the density of superconducting electrons, but of opposite sign, and will produce the magnetic field. Using this we can calculate the number, $`N`$, of superconducting electrons. ### III.2 The number of superconducting electrons It is well known that a rotating uniform surface charge density will produce a uniform interior magnetic field in a sphere. If this rotating surface charge density is $`\sigma `$, then the total charge $`Q`$ is given by $$Q=N|e|=4\pi R^2\sigma ,$$ (28) and the resulting magnetic field in the interior is $$𝑩=\frac{2}{3}\frac{Q}{cR}𝛀=\frac{8\pi }{3}\frac{\sigma R}{c}𝛀,$$ (29) where $`R`$ is the radius of the sphere (relevant formulas for the calculation can be found in Essén Essén (1989)).
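Equating the interior field of equation (29), with $`Q=N|e|`$, to the London field of equation (27) fixes the number $`N`$ of superconducting electrons as $`N=3Rmc^2/e^2=3R/r_\mathrm{e}`$. Numerically, for an illustrative sphere radius of 1 cm (a value chosen here for the example, not taken from the text):

```python
# N = 3 R m c^2 / e^2 = 3 R / r_e from combining Eqs. (27) and (29).
r_e = 2.8179403262e-15    # classical electron radius e^2/(m c^2), m (CODATA)
R = 0.01                  # sphere radius, m (illustrative value)
N = 3 * R / r_e           # about 1.06e13 electrons
```

The number is macroscopically large yet a tiny fraction of all conduction electrons in such a sphere, and it depends only on the size and on fundamental constants.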
Putting $`Q=N|e|`$ and comparing this equation with (27) one finds that the number $`N`$ must be given by $`N=3Rmc^2/e^2=3R/r_\mathrm{e}`$. We thus find that the relationship $$\frac{Nr_\mathrm{e}}{R}=3,$$ (30) where $`r_\mathrm{e}`$ is the classical electron radius, and $`N`$ the number of electrons contributing to the supercurrent, characterizes the superconductivity on a sphere of radius $`R`$. The corresponding calculation for a cylinder, long enough for edge effects to be negligible, is elementary and gives a similar value for $`Nr_\mathrm{e}/\ell `$, where $`\ell `$ is the length of the cylinder. We will return to the crucial significance of the dimensionless combination $`Nr_\mathrm{e}/R`$ below. It is noteworthy that the number $`N`$ depends only on the geometry (size) and fundamental constants ($`r_\mathrm{e}`$). How can this be if superconductivity is caused by some effective interaction with the lattice? ## IV Pairing and collective effects due to the Darwin-Breit interaction We now continue the study of the semiclassical (ballistic) electrons in the ring using the model of charged particles constrained to move on a circle. Now we further assume that the electrons are free particles to zeroth order and investigate how this is affected by the first order Darwin-Breit term. The relativistic mass-velocity correction is probably not of much interest here.
### IV.1 The Darwin-Breit term on the ring For the positions and velocities of equation (2) the Darwin-Breit term (1) becomes $$V_1=-\frac{e^2}{Rc^2}\sum _{i<j}^{N}R^2\dot{\phi }_i\dot{\phi }_j\frac{1}{4}\frac{1+3\mathrm{cos}(\phi _i-\phi _j)}{\sqrt{2[1-\mathrm{cos}(\phi _i-\phi _j)]}}\equiv -\frac{e^2R}{c^2}\sum _{i<j}^{N}\dot{\phi }_i\dot{\phi }_jV_\phi (\phi _i-\phi _j),$$ (31) and the first order Lagrangian $`L=T_0-V_1`$, with $`T_0`$ given in equation (3), is $$L=\frac{1}{2}mR^2\sum _{i=1}^{N}\dot{\phi }_i^2+\frac{e^2R}{c^2}\sum _{i<j}\dot{\phi }_i\dot{\phi }_jV_\phi (\phi _i-\phi _j).$$ (32) The nature of the function $`V_\phi `$ is indicated in equation (42) below. If we introduce (note that the electron has charge $`e=-|e|`$) $$A_i=\frac{e}{c}\sum _{j(\ne i)}^{N}\dot{\phi }_jV_\phi (\phi _i-\phi _j)$$ (33) we can write this $$L=\sum _{i=1}^{N}\left(\frac{1}{2}mR^2\dot{\phi }_i^2+\frac{e}{2c}R\dot{\phi }_iA_i\right).$$ (34) It is easy to show that for real electrons distributed round a (one-dimensional) ring of real atoms the Darwin-Breit term will always be a small perturbation Geller (1996). Individual terms in the interaction may still be large if some interparticle distance is very small. This would correspond to pair formation and is treated in the next subsection. In the real world of two and three dimensions the Darwin-Breit term as a whole can become large. This means that individual particle motion is no longer a good first approximation. This is shown in the following subsection. ### IV.2 The one-dimensional hydrogen atom The Darwin-Breit term represents an interaction which is attractive for parallel currents. For small relative velocities of the electrons it seems possible that it could lead to bound states (for the relative motion of the particles). Let us investigate this.
Most conduction electrons in the metal ring will be inside the (one-dimensional) Fermi surface and they will occur in pairs of opposite momentum with no net current. Assume that only two electrons have unpaired momenta and move in the same direction around the ring approximately with the Fermi velocity. The Lagrangian of these two is then $$L=\frac{mR^2}{2}(\dot{\phi }_1^2+\dot{\phi }_2^2)+\frac{e^2R}{c^2}\dot{\phi }_1\dot{\phi }_2V_\phi (\phi _1-\phi _2).$$ (35) We now make the coordinate transformation $$\phi _\text{C}=\frac{1}{2}(\phi _1+\phi _2),\quad \phi =\phi _1-\phi _2$$ (36) to center of mass angle $`\phi _\text{C}`$ and relative angle $`\phi `$. The inverse transformation is $$\phi _1=\phi _\text{C}+\frac{1}{2}\phi ,\quad \phi _2=\phi _\text{C}-\frac{1}{2}\phi ,$$ (37) and the Lagrangian becomes $$L=\frac{mR^2}{2}\left(2\dot{\phi }_\text{C}^2+\frac{1}{2}\dot{\phi }^2\right)+\frac{e^2R}{c^2}\left(\dot{\phi }_\text{C}^2-\frac{1}{4}\dot{\phi }^2\right)V_\phi (\phi ).$$ (38) We define $`J_\text{C}\equiv \partial L/\partial \dot{\phi }_\text{C}`$ and $`J\equiv \partial L/\partial \dot{\phi }`$ and get the (exact) Hamiltonian $$H=J_\text{C}\dot{\phi }_\text{C}+J\dot{\phi }-L=\frac{1}{4}\frac{J_\text{C}^2}{mR^2\left(1+\frac{e^2V_\phi (\phi )}{mc^2R}\right)}+\frac{J^2}{mR^2\left(1-\frac{e^2V_\phi (\phi )}{mc^2R}\right)}.$$ (39) Clearly $`\dot{J}_\text{C}=-\partial H/\partial \phi _\text{C}=0`$ so the center of mass (angular) momentum $`J_\text{C}`$ is conserved. We put $$|J_\text{C}|\equiv 2J_\text{F}=\text{const.}$$ (40) and expand to first order in the parameter $`\frac{e^2/R}{mc^2}=r_\mathrm{e}/R`$. Throwing away a constant we end up with the following Hamiltonian for the relative motion of the electrons $$H=\frac{J^2}{mR^2}-\frac{J_\text{F}^2-J^2}{mR^2}\frac{e^2}{mc^2}\frac{V_\phi (\phi )}{R}.$$ (41) Consistency with our original assumptions requires that $`J^2\ll J_\text{F}^2`$ and thus we neglect the $`J^2`$ in the second term.
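The exact Hamiltonian of equation (39) can be checked against a direct Legendre transform of the Lagrangian (38) at a single phase-space point; all numerical values below ($`m`$, $`R`$, $`c`$, $`e^2`$ and the value of $`V_\phi `$ at the sample angle) are illustrative only:

```python
# Check of Eq. (39) against H = Jc*qc_dot + J*q_dot - L built from Eq. (38).
m, R, c, e2, Vp = 1.0, 1.0, 3.0, 0.05, 0.7   # e2 stands for e^2; Vp = V_phi

qc_dot, q_dot = 0.4, 0.9                      # phi_C-dot and phi-dot
L = (m * R**2 / 2) * (2 * qc_dot**2 + q_dot**2 / 2) \
    + (e2 * R / c**2) * (qc_dot**2 - q_dot**2 / 4) * Vp
Jc = (2 * m * R**2 + 2 * e2 * R * Vp / c**2) * qc_dot   # Jc = dL/d(qc_dot)
J = (m * R**2 / 2 - e2 * R * Vp / (2 * c**2)) * q_dot   # J  = dL/d(q_dot)

H_direct = Jc * qc_dot + J * q_dot - L
eps = e2 * Vp / (m * c**2 * R)                # expansion parameter of the text
H_formula = Jc**2 / (4 * m * R**2 * (1 + eps)) + J**2 / (m * R**2 * (1 - eps))
```

The two expressions agree to machine precision, confirming that (39) is exact, not merely a first-order result.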
Series expansion of $`V_\phi `$ gives $$V_\phi (\phi )=\frac{1}{4}\frac{1+3\mathrm{cos}\phi }{\sqrt{2(1-\mathrm{cos}\phi )}}=\frac{1}{|\phi |}-\frac{1}{3}|\phi |+\frac{97}{5760}|\phi |^3+\mathrm{},$$ (42) for the angular potential energy, so near $`\phi =0`$ this is essentially a (one-dimensional) Coulomb potential. We keep the first term and introduce $$p\equiv J/R,\quad \mu \equiv m/2,\quad r\equiv R\phi ,\quad Z_\text{F}\equiv \frac{J_\text{F}^2/(mR^2)}{mc^2}=\frac{\mathcal{E}_\text{F}}{mc^2},$$ (43) where $`\mathcal{E}_\text{F}`$ is the Fermi energy. The Hamiltonian for the relative motion then becomes the well known Hamiltonian, $$H=\frac{p^2}{2\mu }-\frac{Z_\text{F}e^2}{|r|},$$ (44) for a (one dimensional) one electron atom with reduced mass $`\mu `$ and nuclear charge $`Z_\text{F}`$. The analysis above for two electrons on a circle can be done in an almost identical way in three dimensions Essén (1995, 1996, 1999) and shows that the Breit interaction can bind two electrons in their relative motion while their center of mass moves through the metal at the Fermi speed. The ground state energy in that case corresponds to a temperature of $`0.1`$ mK. In the present one-dimensional case all parameters are the same except the dimensionality of the space. The one-dimensional hydrogen atom is treated in the literature Haines and Roberts (1969); Rau (1985) and the ground state energy is known to go logarithmically to minus infinity when the dimension approaches one. To get a finite result we must therefore take account of the thickness, $`a`$, of our ring and change the potential to $$V_1(r)=-\frac{Z_\text{F}e^2}{|r|+a}.$$ (45) In the three dimensional case the Bohr radius of the Hamiltonian (44) is $`a_m=2/Z_\text{F}\approx 1.52\times 10^4r_s^2a_0`$, where $`a_0`$ is the ordinary Bohr radius and $`r_s`$ the radius parameter. The three-dimensional ground state energy is, in atomic units, $`E_{3d}=-1/a_m^2=-Z_\text{F}^2/4`$.
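The series expansion in equation (42) is easy to verify numerically: the difference between the closed form of $`V_\phi `$ and the two leading terms should behave as $`(97/5760)|\phi |^3`$ for small angles.

```python
import math

# Closed form of the angular potential, Eq. (42)
def V_phi(phi):
    return 0.25 * (1 + 3 * math.cos(phi)) / math.sqrt(2 * (1 - math.cos(phi)))

# Ratio of the remainder to |phi|^3 should approach 97/5760 ~ 0.0168
coeffs = []
for phi in (0.05, 0.1, 0.2):
    series = 1 / abs(phi) - abs(phi) / 3     # two leading terms
    coeffs.append(abs(V_phi(phi) - series) / abs(phi) ** 3)
max_coeff = max(coeffs)
```

The extracted coefficient is close to $`97/5760\approx 0.0168`$ for all three sample angles.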
The corresponding result for the one-dimensional potential (45) is Haines and Roberts (1969); Rau (1985) $$E_{1d}=-\frac{1}{a_m^2}[2\mathrm{ln}(a_m/a)]^2.$$ (46) The condition for this is that $`a\ll a_m`$. For the gold ring of Chandrasekhar et al. Chandrasekhar et al. (1991) with $`a\approx 80`$ nm one finds that $`a_m/a\approx 10^2`$ using standard values for the Fermi energy of Au. One gets similar values for the Cu rings of Levy et al. Levy et al. (1990). The $`1d`$-condition is thus clearly satisfied in both experiments. We thus get that the ground state energy of the Darwin-Breit bound electron pairs corresponds to a temperature of roughly 1 – 2 mK. This is a bit below the temperatures (7 mK) at which the persistent current gold ring experiments in Chandrasekhar et al. (1991) were performed, but the order of magnitude agreement is noteworthy. In the $`10^7`$ Cu-rings experiment of Levy et al. Levy et al. (1990) the temperature range 7 – 400 mK was used. Physicists working on the theory of these phenomena can certainly not ignore the Darwin-Breit interaction and the possibility of pairing. ### IV.3 When does the Darwin-Breit term become large? In the previous subsection we saw that the Darwin-Breit interaction, though small, can have important qualitative effects and lead to pairing of electrons. This effect is enhanced by one-dimensionality because of the logarithmic divergence of the $`1/r`$-interaction in one dimension. Let us now investigate the possibility of collective effects due to this term. We return to the Lagrangian (32) and try to get the Hamiltonian without approximation. The generalized momentum is $$J_i\equiv \frac{\partial L}{\partial \dot{\phi }_i}=mR^2\dot{\phi }_i+\frac{e^2R}{c^2}\sum _{j(\ne i)}^{N}\dot{\phi }_jV_\phi (\phi _i-\phi _j).$$ (47) In order to get an exact Hamiltonian we must solve for the $`\dot{\phi }_i`$ in terms of the $`J_i`$.
If we introduce the abbreviation $`V_{ij}\equiv V_\phi (\phi _i-\phi _j)`$ we can write the $`N`$ equations (47) $$J_i=mR^2\left(\dot{\phi }_i+\frac{r_\mathrm{e}}{R}\sum _{j(\ne i)}^{N}V_{ij}\dot{\phi }_j\right),\quad i=1,\mathrm{},N,$$ (48) ($`r_\mathrm{e}`$ = classical electron radius). As long as the sum here is negligible we have $`J_i\approx mR^2\dot{\phi }_i`$ and easily find an approximate Hamiltonian. For few particles, small $`N`$, the sum will, in practice, never exceed the small number $`Nr_\mathrm{e}/R`$ by much, since in quantum mechanics the uncertainty principle prevents the $`V_{ij}`$ from becoming too large. If, however, $`N`$ is very large, the sum can still be small if the velocities $`\dot{\phi }_j`$ have random signs. We see that the condition for breakdown of the approximation $`J_i\approx mR^2\dot{\phi }_i`$, and thus for important collective effects of the Darwin-Breit term, is that $`Nr_\mathrm{e}/R`$ no longer is small. A three dimensional estimate in Essén (1997) shows that, in fact, magnetic energy is minimized when $$\frac{Nr_\mathrm{e}}{R}\sim 1$$ (49) where $`N`$ is the number of correlated velocities. If we put $`\epsilon _e\equiv r_\mathrm{e}/R`$ we can write equation (48) in the matrix form $$\left(\begin{array}{c}J_1\\ J_2\\ \vdots \\ J_N\end{array}\right)=mR^2\left(\begin{array}{cccc}1& \epsilon _eV_{12}& \cdots & \epsilon _eV_{1N}\\ \epsilon _eV_{21}& 1& \cdots & \epsilon _eV_{2N}\\ \vdots & \vdots & \ddots & \vdots \\ \epsilon _eV_{N1}& \epsilon _eV_{N2}& \cdots & 1\end{array}\right)\left(\begin{array}{c}\dot{\phi }_1\\ \dot{\phi }_2\\ \vdots \\ \dot{\phi }_N\end{array}\right).$$ (50) This shows that collective Darwin-Breit behavior is due to “off-diagonal long range order”, a concept invented by C. N. Yang Yang (1962). Here the concept reappears in a classical context and arises in the Legendre transformation from the Lagrangian, with a Darwin-Breit interaction, to the Hamiltonian.
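A crude special case of equation (50) illustrates the breakdown of individual motion: take $`V_{ij}=1`$ for all $`i\ne j`$ (fully correlated velocities) and equal momenta $`J_i=J`$, for which the system can be inverted in closed form. This choice of $`V_{ij}`$ is a simplification for illustration, not the actual angular potential.

```python
# Eq. (50) with V_ij = 1 and J_i = J: each row reads
# J = m R^2 [ phidot + eps_e (N-1) phidot ], so the uniform solution is
# phidot = J / (m R^2 (1 + (N-1) eps_e)).  Units with m = R = J = 1.
def phidot_uniform(N, eps_e):
    return 1.0 / (1.0 + (N - 1) * eps_e)

def row_residual(N, eps_e):
    # verify one row of the matrix equation for the uniform solution
    pd = phidot_uniform(N, eps_e)
    return abs(pd + eps_e * (N - 1) * pd - 1.0)

N = 1000
near_free = phidot_uniform(N, 1e-6)      # N*eps_e << 1: J ~ m R^2 phidot holds
collective = phidot_uniform(N, 1.0 / N)  # N*eps_e ~ 1: O(1) collective shift
```

For $`N\epsilon _e\ll 1`$ the free-particle relation survives, while for $`N\epsilon _e\sim 1`$ the velocities are renormalized by a factor of order one, i.e. the particles no longer move individually.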
In a real one-dimensional ring of atoms with electrons this cannot happen, as will be shown below. The algebra, however, is, barring notational and other irrelevant details, the same in two and three dimensions Essén (1996, 1997). We have already seen, in equation (30), that this parameter, $`Nr_\mathrm{e}/R`$, can be of order unity in three dimensions when the system is superconducting. Everything thus falls nicely into place. The Darwin-Breit term can lead to pairing of electrons at sufficiently low temperatures. Provided one has long range correlation of velocities it can also lead to a large collective effect, which, in fact, seems to be superconductivity. The condition (49) will imply different physics for different spatial dimension $`d`$. The number $`N`$ of ballistic, or semiclassical, or superconducting, or velocity-momentum correlated, electrons will be limited by the fact that at most one will be contributed per atom, usually much less. Assume, for definiteness, the maximum number. For a sample of spatial dimension $`d`$ and side length $`R`$ this gives, very roughly, $$N_{\mathrm{max}}(d)=R^d/a_0^d,$$ (51) where $`a_0`$ is the Bohr-radius. If we put this in equation (49) we get $`\frac{R^dr_\mathrm{e}}{a_0^dR}\sim 1`$ which implies that $$R^{d-1}\sim a_0^d/r_\mathrm{e}.$$ (52) This gives the following (minimum) sizes $`R`$ of superconducting structures in spatial dimension $`d`$ $$d\rightarrow 1^{+}\Rightarrow R\rightarrow \mathrm{\infty },$$ (53) $$d=2\Rightarrow R\approx a_0^2/r_\mathrm{e}\approx 19000\,a_0\approx 1\,\mu \mathrm{m},$$ (54) $$d=3\Rightarrow R\approx a_0\sqrt{a_0/r_\mathrm{e}}\approx 140\,a_0\approx 10\,\mathrm{nm}.$$ (55) As stated above, we see that $`d=1`$ does not permit long range correlation. We saw that this does not mean that electron pairs do not form. It only means that no long range collective phenomenon (phase transition?) will be possible. Two dimensions differ from three in that structures (samples) must be at least two orders of magnitude larger in (linear) size.
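The numbers in equations (54) and (55) follow directly from the values of the Bohr radius and the classical electron radius:

```python
import math

# Minimum superconducting sizes from Eq. (52), R^(d-1) ~ a0^d / r_e.
a0 = 5.29177e-11          # Bohr radius, m
r_e = 2.8179403262e-15    # classical electron radius, m
R2 = a0 ** 2 / r_e                # d = 2: about 1.9e4 a0 ~ 1 micron
R3 = a0 * math.sqrt(a0 / r_e)     # d = 3: about 137 a0 ~ 7 nm
```

The $`d=3`$ value comes out near 7 nm, consistent with the order-of-magnitude estimate of 10 nm quoted above.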
## V Conclusions The experienced theoretical physicist should, just by looking at formula (1), see that there is thermodynamic trouble ahead, since the interaction is long range ($`1/r`$) and there is no natural screening mechanism similar to the one that limits the range of the Coulomb interaction. This trouble is here identified with superconductivity. The main new point here, compared to the previous investigations by the author, is the discovery that the parameter $`Nr_\mathrm{e}/R`$, of equation (49), which has appeared again and again in my study of the Darwin Hamiltonian (the exact Hamiltonian corresponding to the Lagrangian with the Darwin-Breit term), also miraculously appears in an estimate of the number of superconducting electrons, equation (30). This gives a direct connection to the heart of superconductivity that was missing before. The painful but inescapable conclusion is that the Darwin-Breit interaction is the interaction between electrons that causes superconductivity.
# Nuclear multifragmentation, percolation and the Fisher Droplet Model: common features of reducibility and thermal scaling ## Abstract It is shown that the Fisher Droplet Model (FDM), percolation and nuclear multifragmentation share the common features of reducibility (stochasticity in multiplicity distributions) and thermal scaling (one-fragment production probabilities are Boltzmann factors). Barriers obtained, for cluster production on percolation lattices, from the Boltzmann factors show a power-law dependence on cluster size with an exponent of $`0.42\pm 0.02`$. The EOS Au multifragmentation data yield barriers with a power-law exponent of $`0.68\pm 0.03`$. Values of the surface energy coefficient of a low density nuclear system are also extracted. Since the earliest observations of nuclear multifragmentation (the break-up of excited nuclei), the Fisher Droplet Model (FDM) and percolation models have been employed in attempts to understand this phenomenon. The FDM enjoyed early success in predicting power-law distributions in fragment masses at the critical point in a liquid-vapor diagram. Percolation models also predicted a power-law distribution in fragment sizes near the critical point. Both models still enjoy great popularity and have been employed in the analysis of Au multifragmentation data obtained by the EOS Collaboration. Other analyses of multifragmentation data have shown two empirical properties of the fragment multiplicities which have been named reducibility and thermal scaling. Reducibility refers to the observation that for each energy bin, $`E`$, the fragment multiplicities, $`N`$, are distributed according to a binomial or Poissonian law.
As such, their multiplicity distributions, $`P_N`$, can be reduced to a one-fragment production probability $`p`$, according to the binomial or Poissonian law: $`P_N^M`$ $`=`$ $`{\displaystyle \frac{M!}{N!(M-N)!}}p^N(1-p)^{M-N};`$ (1) $`P_N`$ $`=`$ $`e^{-\langle N\rangle }{\displaystyle \frac{1}{N!}}\langle N\rangle ^N,`$ (2) where $`M`$ is the total number of trials in the binomial distribution and $`\langle N\rangle `$ is the mean multiplicity. The experimental observation that $`P_N`$ could be constructed in terms of $`p`$ was considered evidence for stochastic fragment production, i.e. fragments are produced independently of each other. Experimental fragment multiplicity distributions were observed to change from binomial to Poissonian under a redefinition of fragment from $`3\le Z\le 20`$ to individual charges, $`Z`$. Thermal scaling refers to the feature that $`p`$ behaves with temperature $`T`$ as a Boltzmann factor: $`p\propto \mathrm{exp}(-B/T)`$. Thus a plot of $`\mathrm{ln}p`$ vs. $`1/T`$, an Arrhenius plot, should be linear if $`p`$ is a Boltzmann factor. The slope $`B`$ is the one-fragment production barrier. Analyses of multifragmentation distributions along these lines have demonstrated the presence of these features and have led to the extraction of barriers. Controversy has surrounded this type of analysis regarding both the physical existence of these features and their significance, mostly within the framework of dynamical vs. statistical origins of multifragmentation. In this work several important points will be made: The FDM inherently contains reducibility and thermal scaling. Since percolation reduces to the FDM, it exhibits reducibility and thermal scaling. Thus percolation provides a simple mathematical model that fully manifests these two features. Arrhenius plots for percolation can be used to extract barriers. The barriers have a power-law dependence on cluster size. Analysis of the EOS Au multifragmentation data verifies reducibility and thermal scaling. The extracted barriers also obey a power-law dependence on fragment mass. 
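The reduction in Eqs. (1) and (2) is easy to check numerically: for many trials $`M`$ and a small one-fragment probability $`p`$, the binomial distribution approaches the Poissonian with mean $`\langle N\rangle =Mp`$. A short sketch (the values of $`M`$ and $`p`$ are arbitrary illustrative choices, not values from the data):

```python
import math

def binomial_pmf(N, M, p):
    # Eq. (1): P_N^M = M!/(N!(M-N)!) * p^N * (1-p)^(M-N)
    return math.comb(M, N) * p ** N * (1 - p) ** (M - N)

def poisson_pmf(N, mean):
    # Eq. (2): P_N = exp(-<N>) * <N>^N / N!
    return math.exp(-mean) * mean ** N / math.factorial(N)

M, p = 200, 0.02                # many trials, small one-fragment probability
mean = M * p                    # <N> = 4
binom = [binomial_pmf(N, M, p) for N in range(15)]
poiss = [poisson_pmf(N, mean) for N in range(15)]
max_diff = max(abs(b - q) for b, q in zip(binom, poiss))
```

The maximum pointwise difference between the two laws is already below one percent for these parameters, which is why a redefinition of "fragment" that lowers `p` pushes the measured distributions from binomial toward Poissonian.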
The FDM and its forerunners are based on the equilibrium description of physical clusters or droplets. The mean number of droplets of size $`A`$ was written as: $$N_A\propto \mathrm{exp}\left[\frac{A\mathrm{\Delta }\mu }{T}\right],$$ (3) where $`\mathrm{\Delta }\mu =\mu -\mu _l`$ and $`\mu `$ and $`\mu _l`$ are the actual and liquid chemical potentials respectively. For $`\mu <\mu _l`$ (gas), $`N_A`$ falls to zero with increasing $`A`$. For $`\mu >\mu _l`$ (liquid), $`N_A`$ increases with $`A`$. To better describe the distribution for intermediate values of $`A`$, Eq. (3) was modified to include the surface of the droplets: $$N_A\propto \mathrm{exp}\left[\frac{A\mathrm{\Delta }\mu }{T}-\frac{c(T)A^{2/3}}{T}\right],$$ (4) where $`c(T)`$ is the surface free-energy density. For $`\mu <\mu _l`$, $`N_A`$ falls to zero with increasing $`A`$. For $`\mu >\mu _l`$, the terms in the exponential compete, leading to an early decrease in $`N_A`$ with $`A`$, followed by an increase. To account for the properties near criticality, Fisher introduced an explicit expression for the surface free energy and a topological factor resulting in an expression for the normalized droplet distribution: $$n_A=\frac{N_A}{A_0}=q_0A^{-\tau }\mathrm{exp}\left[\frac{A\mathrm{\Delta }\mu }{T}-\frac{c_0ϵA^\sigma }{T}\right],$$ (5) where: $`A_0`$ is the size of the system; $`q_0`$ is a normalization constant depending only on the value of $`\tau `$; $`\tau `$, the topological critical exponent, depends on the dimensionality of the system with origins that lie in considerations of a three dimensional random walk of a surface closing on itself, for three dimensions $`2\le \tau \le 3`$; $`c_0ϵA^\sigma `$ is the surface free energy of a droplet of size $`A`$; $`c_0`$ is the surface energy coefficient; $`\sigma `$ is the critical exponent related to the ratio of the dimensionality of the surface to that of the volume; and $`ϵ=(T_c-T)/T_c`$ is the control parameter, a measure of the distance from the critical point, $`T_c`$. 
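Equation (5) can be evaluated directly to see the two regimes: a pure power law at the critical point ($`ϵ=0`$) and surface-dominated suppression away from it. A minimal sketch; the parameter values are illustrative assumptions (roughly 3-D percolation-like), not fitted values:

```python
import math

# illustrative (assumed) parameters
tau, sigma, c0, Tc, q0 = 2.18, 0.45, 2.34, 1.0, 1.0

def n_A(A, T, dmu=0.0):
    """Eq. (5): n_A = q0 * A^(-tau) * exp[A*dmu/T - c0*eps*A^sigma / T]."""
    eps = (Tc - T) / Tc
    return q0 * A ** (-tau) * math.exp(A * dmu / T - c0 * eps * A ** sigma / T)

sizes = (2, 8, 32)
at_critical = [n_A(A, T=Tc) for A in sizes]   # eps = 0: pure power law A^(-tau)
below = [n_A(A, T=0.5) for A in sizes]        # eps > 0: surface term suppresses large A
```

At `T = Tc` the surface factor drops out and the distribution is exactly `A**(-tau)`; below `Tc` the ratio `below[i]/at_critical[i]` falls off as `exp(-c0*eps*A**sigma/T)`, i.e. large droplets are exponentially suppressed, matching the gas-side behavior described in the text.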
From this outline it is apparent that the FDM exhibits the features of reducibility and thermal scaling. The distribution in droplet size is Poissonian by construction: in the FDM each component of droplet size $`A`$ is an ideal gas without the canonical constraint of overall constituent number conservation. The resulting grand canonical distribution is Poissonian. Thus, $`\sigma _A^2=\langle N_A\rangle `$, i.e. Poissonian reducibility. Thermal scaling is obvious in the FDM when Eq. (5) is written as follows: $$\mathrm{ln}n_A=\mathrm{ln}q_0-\tau \mathrm{ln}A+\frac{A\mathrm{\Delta }\mu }{T}+\frac{c_0A^\sigma }{T_c}-\frac{c_0A^\sigma }{T}.$$ (6) It is clear that linearity with $`1/T`$ (thermal scaling in an Arrhenius plot) extends to and beyond the critical point, and the slope of the Arrhenius plot gives the $`T=0`$ surface energy coefficient of the droplet. Percolation models are characterized by a constant energy per bond. The bond-breaking probability, $`p_{break}`$, is amenable to a straightforward statistical mechanics treatment. Such a treatment reveals that in the limit of $`T\to \mathrm{\infty }`$, $`p_{break}\to 1/2`$, indicating that the range of $`p_{break}`$ covered by Eq. (5) is half the usual range discussed in percolation theory, $`0\le p_{break}\le 1`$. Such a literal thermodynamical treatment therefore excludes the critical point, $`p_c`$, of many types of percolation systems from thermodynamic consideration. However, percolation phenomena, with a geometrical phase transition, share with thermal critical phenomena the important features of scaling, universality and renormalization group as well as other deep connections. For example, the scaling behavior observed in percolation clusters can be described by the FDM when $`p_{break}`$ replaces $`T`$ in Eq. (5) and the control parameter becomes $`ϵ=(p_c-p_{break})/p_c`$. 
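Equation (6) makes two testable statements: $`\mathrm{ln}n_A`$ is linear in $`1/T`$ with slope $`-c_0A^\sigma `$, and the barriers extracted from those slopes follow a power law in $`A`$. Both steps of the Arrhenius analysis can be sketched numerically; the parameter values below are illustrative assumptions, not fitted results:

```python
import numpy as np

# assumed, roughly 3-D percolation-like parameters (illustrative)
tau, sigma, c0, Tc = 2.18, 0.45, 2.34, 1.0

invT = np.linspace(1.0, 4.0, 25)          # Arrhenius abscissa, 1/T
sizes = np.array([2, 5, 10, 20, 40])

barriers = []
for A in sizes:
    # Eq. (6) with dmu = 0: ln n_A = -tau*ln A + c0*A^sigma/Tc - (c0*A^sigma)*(1/T)
    ln_nA = -tau * np.log(A) + c0 * A ** sigma / Tc - c0 * A ** sigma * invT
    slope, _ = np.polyfit(invT, ln_nA, 1)  # linear Arrhenius fit
    barriers.append(-slope)                # barrier B = c0 * A^sigma

# the barriers follow a power law in A; a log-log fit recovers sigma and c0
sigma_fit, ln_c0 = np.polyfit(np.log(sizes), np.log(barriers), 1)
c0_fit = np.exp(ln_c0)
```

With noiseless input the fits recover the input exponent and surface coefficient exactly; with real cluster yields the same two-step procedure gives the power-law exponents quoted later in the text.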
To demonstrate the scaling of percolation clusters a plot is made of the scaled cluster distribution, $`n_A^{scaled}=n_A/(q_0A^{-\tau })`$, as a function of the scaled control parameter, $`ϵ^{scaled}=A^\sigma ϵ/p_{break}`$. See the left panel of Fig. 1. Data over a wide range in $`A`$ and $`ϵ`$ are seen to collapse. Cluster distributions used in this analysis were generated on a simple cubic lattice of side six. Fitting $`n_A^{scaled}`$ as a function of $`ϵ^{scaled}`$ for $`ϵ\ge 0`$ and leaving $`c_0`$ and $`\mathrm{exp}\left[A\mathrm{\Delta }\mu /p_{break}\right]`$ as free parameters in Eq. (5) gives $`c_0=2.34\pm 0.03`$ and $`\mathrm{exp}\left[A\mathrm{\Delta }\mu /p_{break}\right]=0.95\pm 0.01`$, i.e. the bulk factor is unity. At the critical point, $`ϵ=0`$, the collapsed distribution takes the value of one indicating that the cluster distribution follows a power law. Away from the critical point, the cluster distribution predominantly follows the surface term in Eq. (5). The FDM does not describe the behavior of clusters for $`ϵ<0`$. Other forms for the FDM’s surface factor have been suggested to describe cluster behavior on both sides of the critical point. Also shown in Fig. 1 is a plot of the EOS Au multifragmentation data. Here the substitution of $`\sqrt{e^{}}=\sqrt{E^{}/A_0}`$ for $`T`$ has been made resulting in a control parameter of $`ϵ=(\sqrt{e_c^{}}-\sqrt{e^{}})/\sqrt{e_c^{}}`$, which for a degenerate Fermi gas reduces to $`(T_c-T)/T_c`$. The excitation energy normalized to the mass of the fragmenting remnant, $`e^{}`$ in MeV/nucleon, excludes collective effects. The location of the critical point, $`e_c^{}`$, and values of the critical exponents, $`\sigma `$ and $`\tau `$, were determined previously. Fitting $`n_A^{scaled}`$ as a function of $`ϵ^{scaled}`$ for $`ϵ\ge 0`$ and leaving $`c_0`$ and $`\mathrm{exp}\left[A\mathrm{\Delta }\mu /\sqrt{e^{}}\right]`$ as free parameters in Eq. 
(5) gives $`c_0=6.4\pm 0.6`$ MeV (via $`E^{}=aT^2`$ with $`a=A_0/13`$) and $`\mathrm{exp}\left[A\mathrm{\Delta }\mu /\sqrt{e^{}}\right]=0.8\pm 0.1`$, i.e. the bulk term is consistent with $`\mathrm{\Delta }\mu \approx 0`$. The surface energy coefficient $`c_0`$ is of a somewhat different nature than the semiempirical mass formula parameter ($`a_s\approx 17`$ MeV for $`T=0`$, $`\rho =\rho _0`$) or estimates for low density nuclear systems ($`a_s\approx 6`$ MeV for $`T\approx 3`$ MeV, $`\rho \approx \rho _0/3`$). The coefficient $`c_0`$ is temperature independent; the temperature dependence is given as $`c_0ϵ`$. Since the FDM has been shown to contain the features of reducibility and thermal scaling and since the scaling inherent in the FDM describes percolation, it should be possible to observe reducibility and thermal scaling in percolation cluster distributions. To address the question of reducibility in percolation, cluster multiplicity distributions for bins in $`p_{break}`$ are considered. The ratio of the variance to the mean, $`\sigma _A^2/\langle N_A\rangle `$, of the multiplicity distribution for each cluster of size $`A`$ is an indicator of the nature of the distribution. In Fig. 2, such a ratio is shown as a function of $`p_{break}`$. The observed ratio is near one (Poissonian limit) over the range of $`p_{break}`$. Within experimental errors, similar behavior is observed for the Au multifragmentation data. Examples of multiplicity distributions with Poissonian curves calculated from percolation $`N_A`$ are shown in the left panel of Fig. 3. Poissonian distributions reproduce percolation cluster distributions over two or three orders of magnitude for all $`A`$ values; Poissonian reducibility is present in percolation. Fig. 3 also shows the multiplicity distributions for Au multifragmentation compared to the calculated Poissonian curves. The agreement between the measured and computed distributions confirms the presence of reducibility in the Au multifragmentation data. 
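The variance-to-mean test for Poissonian reducibility can be reproduced with a small toy simulation: bond percolation on a cubic lattice of side six (as in the text), realized many times, tracking the multiplicity of clusters of a fixed size $`A`$. This is an independent illustrative sketch (not the authors' code); the value of $`p_{break}`$ and the chosen cluster size are arbitrary:

```python
import random
from collections import Counter

L = 6                                      # simple cubic lattice of side six

def site(x, y, z):
    return (x * L + y) * L + z

def find(parent, a):
    # union-find root lookup with path halving
    while parent[a] != a:
        parent[a] = parent[parent[a]]
        a = parent[a]
    return a

def cluster_multiplicities(p_break, rng):
    """One realization: each bond is broken with probability p_break;
    returns a Counter mapping cluster size A -> multiplicity N_A."""
    parent = list(range(L ** 3))
    for x in range(L):
        for y in range(L):
            for z in range(L):
                a = site(x, y, z)
                for dx, dy, dz in ((1, 0, 0), (0, 1, 0), (0, 0, 1)):
                    if x + dx < L and y + dy < L and z + dz < L:
                        if rng.random() >= p_break:        # bond survives
                            ra = find(parent, a)
                            rb = find(parent, site(x + dx, y + dy, z + dz))
                            if ra != rb:
                                parent[ra] = rb
    sizes = Counter(find(parent, s) for s in range(L ** 3))
    return Counter(sizes.values())

rng = random.Random(12345)
A = 3                                      # track clusters of size 3
counts = [cluster_multiplicities(0.75, rng)[A] for _ in range(1000)]
mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / len(counts)
ratio = var / mean                         # near 1 if Poissonian
```

Over many realizations the variance-to-mean ratio of the size-`A` multiplicity comes out close to one, the Poissonian limit shown in Fig. 2.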
To verify thermal scaling in percolation, the average yield of clusters of size $`A`$ and its dependence on $`p_{break}`$ are considered. The presence of thermal scaling should manifest itself through a Boltzmann factor. Again, the substitution of $`p_{break}`$ for $`T`$ is made in accordance with standard percolation theory and $`\mathrm{ln}n_A`$ is plotted as a function of $`1/p_{break}`$ (Arrhenius plot). See Fig. 4. In most cases the Arrhenius plots for individual clusters of size $`A`$ are linear over two orders of magnitude. Thus, thermal scaling is verified for percolation. The observations that reducibility and thermal scaling are already present in such a simple model suggest that they are deeply rooted fundamental features of multifragmentation processes rather than being epiphenomena of complex systems. The Boltzmann factor indicates that the slope of an Arrhenius plot represents the barrier $`B`$ associated with the production of a cluster. For an interpretation of $`B`$ in percolation, the Boltzmann factor is equated with Eq. (5) yielding a power law relating $`B`$ to the size of a cluster: $`B=c_0A^\sigma `$ when $`\mathrm{\Delta }\mu \approx 0`$. Fitting the extracted barriers $`B`$ (slopes in Fig. 4) as a function of $`A`$ gives an exponent equal to $`0.42\pm 0.02`$ in agreement with the accepted value of $`\sigma =0.45`$ for $`3`$D percolation. See Fig. 5. The constant of proportionality of the power law gives another measure of the surface energy coefficient $`c_0=2.42\pm 0.03`$. When the Arrhenius analysis is performed on the Au multifragmentation data, the results are qualitatively similar, but quantitatively distinct. Arrhenius fits of $`\mathrm{ln}n_A`$, now plotted against $`1/\sqrt{e^{}}`$, are linear over an order of magnitude or more. See Fig. 4. The barriers extracted here can be converted into units of MeV via the energy-temperature relation for a degenerate Fermi gas. Following the same analysis as for percolation, a fit was made of $`B`$ vs. 
$`A`$ (see Fig. 5) which yielded $`\sigma =0.68\pm 0.03`$, in agreement with previously determined EOS Au multifragmentation values, and $`c_0=6.8\pm 0.5`$ MeV. In summary, the above effort illustrates: * the presence of reducibility and thermal scaling in the FDM, percolation and the EOS Au multifragmentation data (the latter two shown empirically); * the relationship between the FDM and percolation via a scaling analysis that also yields an estimate of the surface energy coefficient; * that the barriers obtained from percolation follow a power-law dependence on cluster size with an exponent that agrees with the accepted $`3`$D percolation value and gives another (consistent) estimate of the surface energy coefficient; * the collapse of the EOS Au fragment distributions in accordance with the FDM yielding an estimate of the surface energy coefficient of a low density nuclear system; * that the barriers obtained from EOS Au multifragmentation data follow a power-law dependence on fragment mass with an exponent near the expected value ($`2/3`$) and close to the 3D Ising universality class value ($`0.64`$) and give another (consistent) estimate of the surface energy coefficient. This work was supported in part by the U.S. Department of Energy and by the National Science Foundation.
no-problem/0002/astro-ph0002324.html
# High Resolution Spectroscopy of the X-ray Photoionized Wind in Cygnus X-3 with the Chandra High Energy Transmission Grating Spectrometer ## 1 Introduction In a previous paper (Liedahl & Paerels 1996, ’LP96’) we presented an interpretation of the discrete spectrum of Cyg X-3 as observed with the Solid State Imaging Spectrometers on ASCA (cf. Kitamoto et al. 1994; Kawashima & Kitamoto 1996). We found clear spectroscopic evidence that the discrete emission is excited by recombination in a tenuous X-ray photoionized medium, presumably the stellar wind from the Wolf-Rayet companion star (van Kerkwijk et al. 1992). Specifically, the ASCA spectrum revealed a narrow radiative recombination continuum (RRC) from H-like S, unblended with any other transitions. On closer inspection, RRC features due to H-like Mg and Si were also found to be present in the data, although severely blended with emission lines. These narrow continua are an unambiguous indicator of excitation by recombination in X-ray photoionized gas, and their relative narrowness is a direct consequence of the fact that a highly ionized photoionized plasma is generally much cooler than a collisionally ionized plasma of comparable mean ionization (LP96, Liedahl 1999 and references therein). With the high spectral resolution of the Chandra High Energy Transmission Grating Spectrometer, we now have the capability to fully resolve the discrete spectrum. Apart from offering a unique way to determine the structure of the wind of a massive star, study of the spectrum may yield other significant benefits. Cyg X-3 shows a bright, purely photoionization driven spectrum, and, as such, may provide a template for the study of the spectra of more complex accretion-driven sources, such as AGN. The analysis will also allow us to verify explicitly the predictions for the structure of X-ray photoionized nebulae derived from widely applied X-ray photoionization codes. 
## 2 Data Reduction A description of the High Energy Transmission Grating Spectrometer (HETGS) may be found in Markert et al. (1994). Cyg X-3 was observed on October 20, 1999, for a total of 14.6 ksec exposure time, starting at 01:11:38 UT. The observation covered approximate binary phases $`0.31`$ to $`+0.53`$, which means that about half of the exposure in our observation occurs in the broad minimum in the lightcurve at orbital phase zero. Aspect-corrected data from the standard CXC pipeline (processing date October 30, 1999) was post-processed using dedicated procedures written at Columbia. We used (ASCA-)grade 0,2,3,4 events, a spatial filter 30 ACIS pixels wide was applied to both the High Energy Grating (HEG) and Medium Energy Grating (MEG) spectra, and the resulting events were plotted in a dispersion–CCD pulse height diagram, in which the spectral orders are neatly separated. A second filter was applied in this dispersion–pulse height diagram. The filter consisted of a narrow mask centered on each of the spectral orders separately. The mask size and shape were optimized interactively. The residual background in the extracted spectra is of order 0.5 counts/spectral bin of 0.005 Å or less. The current state of the calibration does not provide us with the effective area associated with our joint spatial/pulse height filters to better than 25% accuracy, hence we have chosen not to flux-calibrate the spectrum at this time. An additional correction to the flux in the chosen aperture due to the (energy dependent) scattering of photons by interstellar dust has not yet been determined either. In the resulting order-separated count spectra, we located the zero order and we determined its centroid position to find the zero of the wavelength scale. We then converted pixel number to wavelength based on the geometry of the HETGS. 
In this procedure, we used ACIS/S chip positions that were determined after launch from an analysis of the dispersion angles in the HETGS spectrum of Capella (Huenemoerder et al. 2000). This preliminary wavelength scale appears to be accurate to approximately 2 mÅ. The spectral resolution was determined from a study of narrow, unblended emission lines in the spectrum of Capella. It is approximately constant across the entire HETGS band, and amounts to approximately 0.012 Å (0.023 Å) FWHM for the HEG (MEG) (Dewey 2000). The resolution in the Cyg X-3 spectrum can be checked self-consistently by analyzing the width of the zero order image. Unfortunately, the zero order image is affected by pileup. However, enough events arrive during the 41 ms CCD frame transfer, forming a streak in the image, that we can construct an unbiased 1D zero-order distribution from them. The width of this distribution is consistent with the widths of narrow lines in the spectrum of Capella, which indicates that the resolution in the Cyg X-3 spectrum is not affected by systematic effects (e.g., incorrect aspect solution, defocusing). ## 3 X-ray Photoionization in Cyg X-3 Figure 1 shows the HEG and MEG first order spectra; the higher order spectra are unfortunately very weak, and we will not discuss them here. We show the spectra as a function of wavelength, because this is the most natural unit for a diffractive spectrometer: the instruments have approximately constant wavelength resolution. The spectra have been smoothed with a 3-pixel boxcar average to bring out coherent features. We have indicated the positions of expected strong H– and He-like discrete features. A cursory examination of the spectrum strikingly confirms the photoionization-driven origin of the discrete emission. We detect the spectra of the H-like species of all abundant elements from Mg through Fe. In Si and S, we detect well-resolved narrow radiative recombination continua. 
This is illustrated in Figure 2, which shows the 3.0–7.0 Å band on an enlarged scale. The Si XIV and S XVI continua are readily apparent. The width of these features is a direct measure of the electron temperature in the recombining plasma, and a simple eyeball fit to the shapes indicates $`kT_e\approx 50`$ eV, which is roughly in agreement with the result of model calculations for optically thin X-ray photoionized nebulae (Kallman & McCray 1982). A more detailed, fully quantitative analysis of the spectrum will be required to see whether we can also detect the expected temperature gradient in the source (more highly ionized zones are also expected to be hotter). In the Si XIV and S XVI spectra we estimate the ratio of the total photon flux in the RRC to that in Ly$`\alpha `$ to be about 0.8 and 0.7, respectively; here, we assume $`kT_e=`$ 50 eV, and we have made an approximate correction for the differences in effective area at the various features. These measured ratios are in reasonable agreement with the expected ratio of $`0.73(kT_e/20\mathrm{eV})^{+0.17}`$ (LP96), which indicates that the H-like spectra are consistent with pure recombination in optically thin gas. The positions of the lowest members of the Fe XXVI Balmer series are indicated in Figure 1 (the fine structure splitting of these transitions is appreciable in H-like Fe, as is evident from the plot). The relative brightness of the Balmer spectrum is yet another indication of recombination excitation. There is evidence for line emission at the position of H$`\beta `$, and possibly at H$`\gamma `$ and H$`\delta `$; the spectrum is unfortunately too heavily absorbed to permit a detection of H$`\alpha `$ ($`\lambda \lambda 9.52,9.74`$ Å). 
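The connection between RRC width and electron temperature can be illustrated with an idealized recombination-continuum shape, a sharp edge with an exponential tail $`\mathrm{exp}[-(E-E_{edge})/kT_e]`$: the mean photon energy above the edge then equals $`kT_e`$. A minimal sketch; the edge energy is approximate and purely illustrative:

```python
import numpy as np

kT_true = 0.050                 # keV (50 eV), the eyeball estimate in the text
E_edge = 2.666                  # keV, approximate Si XIV edge energy (illustrative)

E = np.linspace(E_edge, E_edge + 0.5, 2001)
rrc = np.exp(-(E - E_edge) / kT_true)      # idealized narrow RRC shape

# for an exponential tail, the flux-weighted mean energy above the edge is kT_e
kT_est = np.sum((E - E_edge) * rrc) / np.sum(rrc)
```

This is why a "narrow" RRC directly signals a cool, recombination-dominated plasma: the feature's energy width is of order `kT_e`, tens of eV here, rather than the keV-scale widths of a hot collisional plasma.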
Unfortunately, the long-wavelength member of the H$`\beta `$ ’doublet’ ($`\lambda 7.17`$ Å) almost precisely coincides with the expected position of Al XIII Ly$`\alpha `$, which precludes a simple and neat direct detection of Al (the first detection of an odd-$`Z`$ triple-$`\alpha `$ element in non-solar X-ray astronomy). Any limit on the Al/Si abundance ratio thus becomes dependent on an understanding of the intensity of the Fe XXVI spectrum. As for the He-like species, we detect the $`n=2\to 1`$ complexes, consisting of the forbidden (’$`f`$’), intercombination (’$`i`$’), and resonance (’$`r`$’) transitions, in Si XIII, S XV, Ar XVII, Ca XIX, and Fe XXV (as well as the corresponding RRC in Si, S, and possibly Ar). The line complexes appear resolved into blended resonance plus intercombination lines, and the forbidden line (see Figures 1 and 2), up to Ar XVII. In an optically thin, low density, purely photoionization-driven plasma, one expects the intensity ratio $`f/(r+i)\approx 1`$ for the mid-$`Z`$ elements, very different from the pattern in the more familiar collisional equilibrium case, where the resonance transition is relatively much brighter (e.g., Gabriel & Jordan 1969; Pradhan 1982; Liedahl 1999). We use the ratio $`f/(r+i)`$ rather than the conventional $`G\equiv (i+f)/r`$ and $`R\equiv f/i`$, because the intercombination and resonance lines are unfortunately blended by significant Doppler broadening in the source (see Section 4). Theoretically, in a photoionized plasma $`f/(r+i)`$ is approximately equal to 1.3, 1.0, 0.83, for Si XIII, S XV, and Ar XVII, respectively, and depends only weakly on electron temperature (LP96, Liedahl 2000). The measured ratios $`f/(r+i)`$, derived by fitting three Gaussians with common wavelength offset and broadening at the expected positions of $`f,i`$, and $`r`$, are approximately 1.1, 0.8, and 1.1 with the HEG, for Si, S, and Ar, respectively; the corresponding ratios for the MEG are 1.3, 1.0, and 0.8. 
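A three-Gaussian fit with common offset and broadening becomes a *linear* least-squares problem once the centers and width are held fixed, since the model is then linear in the three amplitudes. A toy version of this idea (the wavelengths, width, and fluxes below are hypothetical, chosen only to be near the S XV triplet; this is not the fitting code used for the data):

```python
import numpy as np

# hypothetical rest wavelengths (Angstrom) near the S XV r, i, f lines
centers = np.array([5.039, 5.065, 5.102])
fwhm = 0.025                                  # assumed common broadening
sig = fwhm / 2.3548                           # Gaussian sigma from FWHM
lam = np.linspace(4.95, 5.20, 500)

def gauss(lam, c):
    return np.exp(-0.5 * ((lam - c) / sig) ** 2)

# synthetic blended spectrum with known line fluxes (r, i, f)
true_amp = np.array([1.0, 0.4, 1.3])
spec = sum(a * gauss(lam, c) for a, c in zip(true_amp, centers))

# design matrix: one fixed-shape Gaussian per line; solve for amplitudes
G = np.column_stack([gauss(lam, c) for c in centers])
amp, *_ = np.linalg.lstsq(G, spec, rcond=None)
r, i, f = amp
ratio = f / (r + i)
```

Even though `r` and `i` are heavily blended at this separation, the linear solve recovers the individual amplitudes, which is what makes the `f/(r+i)` diagnostic usable despite the Doppler broadening.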
Since most of the lines contain at least 100 photons, the statistical error on the ratios is generally less than 15%. These measurements include a model for the Si XIII RRC in the S XV triplet (assuming $`kT_e=50`$ eV), and Mg XII Ly$`\gamma `$ emission in the Si XIII triplet. The He-like line ratios are probably affected by systematic features in the efficiency of the spectrometer. The S XV triplet is superimposed on the Si XIII RRC, the Si XIII triplet straddles the Si K edge in the CCD efficiency, and the Ar XVII triplet straddles the Au M<sub>IV</sub> and Ir M<sub>I</sub> edges. Corrections for these effects will have to be carefully evaluated. Nevertheless, the raw ratios $`f/(r+i)`$ for the Si and Ar triplets are already of the right magnitude for pure recombination. Our provisional conclusion is that the He-like spectra are, very roughly, consistent with pure recombination in optically thin gas. Just as in a collisional plasma, the relative strengths of the forbidden and intercombination lines are sensitive to density (Liedahl 1999; Porquet & Dubau 2000), due to collisional transfer between the upper levels of $`f`$ and $`i`$ at high density. As mentioned above, there are some systematic uncertainties in the measured line ratios, and we defer a discussion of possible constraints on the density in the wind to a future paper. The detection of fluorescent Fe emission is a surprise, because virtually no fluorescence was seen at the time of the ASCA observation (Kitamoto et al. 1994). The apparent centroid wavelength of the fluorescent line is $`1.939`$ Å (photon energy 6394 eV), with a formal error of less than $`10^{-3}`$ Å (3 eV). The width of the line is $`0.022`$ Å FWHM, with a formal uncertainty of less than 5%. This is wider than would be expected from the velocity broadening to be discussed in the next section, and may be an indication that a range of ionization stages contributes to the fluorescent emission. 
If we assume the same velocity broadening for the Fe K$`\alpha `$ feature as for the high-ionization lines (which may not necessarily be correct if the low– and high–ionization lines originate in different parts of the stellar wind), we find that Fe K$`\alpha `$ has an intrinsic width (expressed as the FWHM of a Gaussian distribution) of 0.018 Å (corresponding to $`\mathrm{\Delta }E\approx 60`$ eV). The fine structure split between K$`\alpha _1`$ and K$`\alpha _2`$ contributes slightly to this width ($`\mathrm{\Delta }\lambda \approx 0.004`$ Å), but the measured width covers the full range of K$`\alpha `$ wavelengths for charge states between fully neutral and Ne-like (Decaux et al. 1995). ## 4 Bulk Velocity Fields We find that all emission features are significantly broadened and redshifted. The lines and radiative recombination continua are resolved by both the HEG and the MEG. The line widths for H-like Mg, Si, S, Ar, Ca, and Fe Ly$`\alpha `$ were measured by fitting a simple Gaussian profile. Other than the negligibly small fine structure split ($`\mathrm{\Delta }\lambda \approx 0.005`$ Å), these lines are clean and unblended. The resulting widths do not seem to exhibit a strong dependence on phase. Assuming that the spectrometer profile is well represented by a Gaussian of width 0.012 Å (0.023 Å) FWHM for the HEG (MEG), we find that the broadening of the lines is roughly consistent with a Gaussian velocity distribution, of width $`\mathrm{\Delta }v\approx 1500`$ km s<sup>-1</sup> FWHM. The scatter is too large to permit a meaningful test for any dependence of the velocity broadening on ionization parameter. Note that no such broadening was seen in the spectrum of Capella. We also measured the radial velocities for the Ly$`\alpha `$ lines, assuming the dispersion relation obtained from an analysis of the spectrum of Capella. Wavelengths were calculated from the level energies given by Johnson & Soff (1985); these should be accurate to a few parts in $`10^6`$. 
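Since the convolution of two Gaussians is a Gaussian, subtracting the instrumental profile from a measured line width is a subtraction in quadrature, after which the intrinsic wavelength width converts to a velocity FWHM via $`\mathrm{\Delta }v=c\mathrm{\Delta }\lambda /\lambda `$. A sketch of this bookkeeping; the example line and its measured width are illustrative, not values from the paper:

```python
import math

c_kms = 299792.458
fwhm_inst = 0.012              # HEG instrumental FWHM in Angstrom (from the text)

def velocity_fwhm(fwhm_obs, lam0):
    """Subtract the instrumental profile in quadrature (both assumed
    Gaussian) and convert the intrinsic width to a velocity FWHM."""
    intrinsic = math.sqrt(fwhm_obs ** 2 - fwhm_inst ** 2)
    return c_kms * intrinsic / lam0

# e.g. a line near Si XIV Ly-alpha (6.18 A) measured at 0.034 A FWHM (illustrative)
dv = velocity_fwhm(0.034, 6.18)
```

With these illustrative numbers the inferred velocity FWHM comes out near the ~1500 km s<sup>-1</sup> scale quoted for the Cyg X-3 lines; a line exactly at the instrumental width would map to zero intrinsic broadening.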
There is a clear systematic redshift to all the emission lines and RRCs, in both the positive and negative spectral orders and in both grating spectra. This is shown in Figure 3, where we have segregated dim and bright state data, but have averaged positive and negative spectral orders, and HEG and MEG spectral data. Also shown are the best fitting uniform velocity offsets. These fits were forced to yield zero wavelength shift at zero wavelength. The average redshift for the dim state is $`\approx 800`$ km s<sup>-1</sup>, and for the bright state is $`\approx 750`$ km s<sup>-1</sup>. We thus find a net redshift much smaller than the observed velocity spread, and essentially no dependence of the centroid velocity on the binary phase. We should point out that our preliminary analysis, based on fitting simple Gaussians, is admittedly crude, and may have biased the true nature of the velocity field somewhat. We also note, with caution, that Doppler shifts due to a single, uniform velocity do not appear to be a very good description of the data: the longest wavelength lines appear to be offset at a significantly larger than average radial velocity. A detailed analysis, taking into account the actual lineshape, will be required to confirm or refute the possibility that these offsets represent the expected systematic correlation of average wind velocity and ionization parameter. ## 5 Discussion The HETGS spectrum of Cyg X-3 has revealed a rich discrete spectrum, the properties of which are consistent with pure recombination excitation in cool, optically thin, low density X-ray photoionized gas in equilibrium. We fully resolve the narrow RRCs for the first time, and estimate an average electron temperature in the photoionized region of $`kT_e\approx 50`$ eV, consistent with global photoionization calculations. 
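A uniform-velocity fit forced through zero shift at zero wavelength is the one-parameter model $`\mathrm{\Delta }\lambda =(v/c)\lambda `$, whose weighted least-squares solution is closed-form. A sketch with synthetic shifts (the line list and errors are illustrative, not the measured values):

```python
import numpy as np

c_kms = 299792.458

def fit_velocity(lam, dlam, err):
    """Weighted least squares for a single Doppler velocity, with the fit
    forced through zero shift at zero wavelength: dlam = (v/c) * lam."""
    w = 1.0 / err ** 2
    return c_kms * np.sum(w * lam * dlam) / np.sum(w * lam ** 2)

# synthetic shifts for a uniform 800 km/s redshift (illustrative wavelengths, A)
lam = np.array([1.78, 3.02, 4.73, 6.18, 8.42])
dlam = (800.0 / c_kms) * lam
v = fit_velocity(lam, dlam, err=np.full(lam.size, 0.002))
```

Because the model has no intercept, a single line at anomalous velocity pulls the fit in proportion to its weight and wavelength, which is why the long-wavelength outliers noted in the text matter for the uniform-velocity interpretation.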
We detect a net redshift in the emission lines of $`v\approx 750`$–$`800`$ km s<sup>-1</sup>, essentially independent of binary phase, and a distribution in velocity with a FWHM of $`\approx 1500`$ km s<sup>-1</sup>. If the wind were photoionized throughout, we would expect to see roughly equal amounts of blue– and redshifted material, so evidently we are viewing an ionized region that is not symmetric with respect to the source of the wind, as expected if only the part of the wind in the vicinity of the X-ray continuum source is ionized. However, in the simplest wind models, one would then expect to see a strong dependence of the centroid velocity on binary phase, alternating between red– and blueshifts, and this is decidedly not the case in our data. The implications of this finding for the flow pattern and distribution of material in the wind will be explored in a future paper. Finally, the Fe K$`\alpha `$ fluorescent feature, which probes a more neutral phase of the wind, has never been seen before in Cyg X-3. Unfortunately, the exact range of ionization can not be separated uniquely from systematic Doppler shifts through a measurement of the wavelengths of the K$`\alpha `$ spectra, because the feature, while clearly broadened, is not separated into its component ionization stages. Still, the width of the feature (the net effect of the velocity field and the existence of a range of charge states) and its intensity will impose strong constraints on the global properties of the wind. Acknowledgements. We wish to express our gratitude to Dan Dewey and Marten van Kerkwijk, for discussions and a careful reading of the manuscript, and to the referee, Randall Smith, for a thorough review. JC acknowledges support from NASA under a GRSP fellowship. MS’s contribution was supported by NASA under Long Term Space Astrophysics grant no. NAG 5-3541. FP was supported under NASA Contract no. NAS 5-31429. DL acknowledges support from NASA under Long Term Space Astrophysics Grant no. S-92654-F. 
Work at LLNL was performed under the auspices of the US Department of Energy, Contract no. W-7405-Eng-48. Figure Captions: Fig.1—The 1–10 Å spectrum of Cyg X-3 as observed with the HEG (upper panel), and the MEG (lower panel), binned in 0.005 Å bins. The positive and negative first orders have been added, and the spectra have been smoothed with a 3 pixel boxcar filter. Labels indicate the positions of various discrete spectral features. ’He$`\alpha `$’ is the inelegant label for the resonance, intercombination, and forbidden lines in the He-like ions, plotted at the average wavelength for the complex. High-ionization features of interest that were not detected have been labeled in brackets. Horizontal bars indicate the nominal positions of the gaps between the ACIS chips; the dithering of the spacecraft will broaden the gaps and soften their edges. Fig.2—The 3.0–7.0 Å region of the spectrum enlarged; we show the raw count rates, binned by two 0.005 Å bins. The most important transitions have been labeled; dashed lines mark the expected positions of Si and S recombination edges. These markers have been redshifted by 800 km s<sup>-1</sup>. The horizontal bar near 4.5 Å in the HEG spectrum marks the nominal position of the gap between chips S2 and S3 in ACIS. The solid line in the MEG spectrum is a crude empirical fit to the continuum, with Si XIII, Si XIV, and S XVI narrow radiative recombination continua added. The electron temperature was set to 50 eV, and the continua were convolved with a 1500 km s<sup>-1</sup> FWHM velocity field, to match the broadening observed in the emission lines. Fig.3–Measured wavelength shift for selected Ly$`\alpha `$ features. Filled symbols refer to the ’dim’ state data, open symbols to the ’bright’ state data. The velocities as measured with the HEG and the MEG have been averaged; velocities in positive and negative spectral orders were averaged. Error bars indicate the size of the rms variation between these various measurements. 
In cases where only one or two velocities were measurable due to low signal-to-noise, we instead indicate the estimated statistical error on these measurements. The solid lines are the weighted least squares Doppler velocities for both the dim and the bright states.
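The ‘weighted least squares Doppler velocities’ quoted in the caption reduce, for a constant-velocity model, to an inverse-variance weighted mean. A minimal sketch (the velocities and errors below are hypothetical placeholders, not the measured values):

```python
import numpy as np

def weighted_mean_velocity(v, sigma):
    """Inverse-variance weighted mean, i.e. the weighted least-squares
    fit of a constant to independent velocity measurements."""
    v = np.asarray(v, dtype=float)
    w = 1.0 / np.asarray(sigma, dtype=float) ** 2
    vbar = np.sum(w * v) / np.sum(w)
    err = 1.0 / np.sqrt(np.sum(w))   # formal 1-sigma error on the mean
    return vbar, err

# Hypothetical per-line velocities (km/s) with rms errors
vbar, err = weighted_mean_velocity([750.0, 820.0, 780.0], [60.0, 90.0, 70.0])
```

The formal error on the mean is smaller than any single measurement error; the rms scatter between measurements, as quoted in the caption, can be larger when the individual lines disagree.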
# Detection of X-ray pulsations from the Be/X-ray transient A 0535+26 during a disc loss phase of the primary ## 1 Introduction Be/X-ray binaries are X-ray sources composed of a Be star and a neutron star. Most of these systems are transient X-ray pulsars displaying strong outbursts in which their X-ray luminosity increases by a factor $`\sim 100`$ (see Negueruela 1998). In addition, those systems in which the neutron star does not rotate fast enough for centrifugal inhibition of accretion to be effective (see Stella et al. 1986) display persistent X-ray luminosity at a level $`\sim 10^{35}`$ erg s<sup>-1</sup>. The high-energy radiation is believed to arise due to accretion of material associated with the Be star by the compact object. It has long been known that accretion from the fast polar wind that is detected in the UV resonance lines of the Be primaries cannot provide the observed luminosities, even for detections at the weakest level (see Waters et al. 1988 and references therein; see also the calculations made for X Persei by Telting et al. 1998). Therefore it is believed that the material accreted comes from the dense equatorial disc that surrounds the Be star. Waters et al. (1989) modelled the radial outflow as a relatively slow ($`\sim 100`$ km s<sup>-1</sup>) dense wind. However, most modern models for Be stars consider much slower outflows, due to strong evidence for rotationally dominated motion (quasi-Keplerian discs). This is due not only to the line shapes (see Hanuschik et al. 1996), which set an upper limit on the bulk motion at $`v\lesssim 3\mathrm{km}\mathrm{s}^{-1}`$ (Hanuschik 2000), but also to the success of the Global One-Armed Oscillation model (which can only work in quasi-Keplerian discs) at explaining V/R variability in Be stars (Hummel & Hanuschik 1997; Okazaki 1997a,b). 
The viscous decretion disc model (Okazaki 1997b; Porter 1999; Negueruela & Okazaki 2000) considers material in quasi-Keplerian orbit with an initially very subsonic outflow velocity that is gradually accelerated by gas pressure and becomes supersonic at distances $`\sim 100R_{\ast }`$, i.e., much further than the orbits of neutron stars in Be/X-ray transients. The transient A 0535+26 is one of the best studied Be/X-ray binaries (Clark et al. 1998 and references therein). It contains a slowly rotating ($`P_\mathrm{s}=103\mathrm{s}`$) neutron star in a relatively wide ($`P_{\mathrm{orb}}=110.3\mathrm{d}`$) and eccentric ($`e=0.47`$) orbit around the B0IIIe star V725 Tau (see Finger et al. 1996; Steele et al. 1998). After its last giant outburst in February 1994 (Clark et al. 1998; Negueruela et al. 1998), the strength of the emission lines in the optical spectrum of V725 Tau has declined steadily. The last normal (periodic) outburst took place in September 1994 and the source has since not been detected by the BATSE experiment on board CGRO. ## 2 Observations ### 2.1 Optical spectroscopy V725 Tau, the optical counterpart to A 0535+26, was observed on November 7th 1998, using the 4.2-m William Herschel Telescope, located at the Observatorio del Roque de los Muchachos, La Palma, Spain. The telescope was equipped with the Utrecht Echelle Spectrograph using the 31.6 lines/mm echelle centred at H$`\alpha `$ and the SITe1 CCD camera. This configuration gives a resolution $`R\sim 40000`$ over the range 4600 – 10200 Å. The data have been reduced using the Starlink packages ccdpack (Draper 1998), echomop (Mills et al. 1997) and dipso (Howarth et al. 1997). A detailed analysis of the whole spectrum is left for a forthcoming paper. In Figure 1, we show the shape of H$`\alpha `$, H$`\beta `$ and He i $`\lambda `$6678Å. When in emission, these three lines sample most of the radial extent of the circumstellar envelope. However, it is apparent that the lines seen in Fig. 
1 correspond to photospheric absorption from the underlying star. The emission contribution from circumstellar material, if any, is certainly very small. The asymmetry in the shape of H$`\alpha `$ and He i $`\lambda `$6678Å suggests that some fast-moving material is present close to the stellar surface (see Hanuschik et al. 1993; Rivinius et al. 1998 for the discussion of low-level activity in disc-less Be stars), but the circumstellar disc is basically absent. H$`\beta `$, which, when in emission, is typically produced at distances of a few $`R_{\ast }`$, looks completely photospheric. In Be stars H$`\alpha `$ probes a region extending to $`\sim 10R_{\ast }`$, as measured from line-peak separation (Hummel & Vrancken 1995) and direct imaging (Quirrenbach et al. 1997). Again, circumstellar material seems to be almost absent from this region. There is weak emission in-filling at the line centre – transient emission components have been seen in this star during the disc-less state (Haigh et al. 2000), a behaviour typical of disc-less Be stars (Rivinius et al. 1998 and references therein). ### 2.2 X-ray observations Observations of the source were taken using the Proportional Counter Array (PCA) on board RossiXTE on 1998 August 21 and 1998 November 12 for a total on-source time of 4170 s and 2250 s, respectively. In both observations there is an excess of $`\sim 4`$ counts/s/(5 PCU) in the 2.5 – 15 keV range of the Standard2 data compared to the faint source background model. Fits to power-law models with interstellar absorption result in flux estimates of $`6\times 10^{-12}`$ and $`9\times 10^{-12}\mathrm{erg}\mathrm{cm}^{-2}\mathrm{s}^{-1}`$ (2 – 10 keV) for the two observations respectively. These can only be considered as upper limits on the flux, due to the uncertain contribution of diffuse Galactic Disc emission to the count rates (Valinia & Marshall 1998). 
#### 2.2.1 Timing analysis The issue of whether the source of high energy radiation was active or not during the low activity optical phase can be solved by searching for the previously reported X-ray pulsations at the $`\sim `$103.5-s spin period (e.g., Finger et al. 1996). In order to improve the signal-to-noise, we accumulated events from the top anode layer of the detectors. We also used the latest version of the faint background model. For the power spectral analysis we selected a stretch of continuous data and divided it into intervals of 309 s. A power spectrum was obtained for each interval and the results averaged together. Given that the pulse frequency ($`\nu \approx 0.0097`$ Hz) lies in a region dominated by red noise, we have to correct for such noise if the statistical significance of the pulsations is to be established. First, we fitted the Poisson level by restricting ourselves to the frequency range 0.2 – 0.4 Hz, which is far from the region where the red noise component may contribute appreciably. The strongest peak in the power spectrum corresponds to $`103.5\mathrm{s}`$. We also searched for periodicities in the light curves by folding the data over a period range and determining the $`\chi ^2`$ of the folded light-curve (epoch-folding technique). In this case we used 20 phase bins (19 degrees of freedom) and a range of 100 periods, around the expected period. This method has the advantage that the result is not affected by the presence of gaps in the data, hence a longer baseline can be considered than with Fourier analysis. Times in the background subtracted light-curve were converted into times at the solar-system barycentre. The result for the 1998 November observation is shown in Fig. 2. We found that the peak at $`103.5\mathrm{s}`$ is significant at $`>5\sigma `$, confirming that the source was active during the observations. It is worth mentioning that the detection levels shown in Fig. 
2 were obtained without a priori knowledge of the frequency of pulsations. In other words, we searched for pulsations in the frequency/period range shown in the figure. If we take into account the fact that we are interested in the pulse period at $`103.5\mathrm{s}`$, the peak becomes still more significant. The analysis of the 1998 August observation provides a much less significant detection. A peak at the expected frequency ($`\nu \approx 0.0097`$ Hz) is seen in the power spectrum. However, epoch-folding analysis gave a significance of $`3\sigma `$ only when we considered one single period, that is, the number of trials is one. A search for pulsations in a period range did not yield any maximum above the 3-$`\sigma `$ detection level although a peak at the 103-s period is present (see Fig. 3). #### 2.2.2 Pulse shape The pulse shape (see Figure 4) is nearly sinusoidal, as expected from the absence of second or higher harmonics in the power density spectra. The amplitude of the modulation is $`\sim 2`$ count s<sup>-1</sup> in the 3 – 20 keV energy range, which implies a pulse fraction of $`\sim 53`$%. Given the unknown contribution from Galactic Disc diffuse emission, this represents only a lower limit to the pulsed fraction in the signal from the source. We have divided the November 1998 observation into two sections, corresponding to the peak of the pulse (phase bins 0.6 to 1.0) and the interpulse minimum (phase bins 0.1 to 0.5). An absorbed power-law fit in the energy range 2.7 – 10 keV to the two spectra gave $`\mathrm{\Gamma }=2.9\pm 0.4`$, $`N_\mathrm{H}=9\pm 4`$, $`\chi _\mathrm{r}^2=0.8`$ (18 dof) for pulse maximum and $`\mathrm{\Gamma }=3.3\pm 0.5`$, $`N_\mathrm{H}=10\pm 5`$, $`\chi _\mathrm{r}^2=0.9`$ (18 dof) for pulse minimum. The two values are consistent with each other within the error margins. The lack of spectral changes with phase requires any significant component of the detected flux due to diffuse emission to have a spectrum similar to that of the pulsar. 
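The epoch-folding search described in Sect. 2.2.1 can be sketched as follows. This is illustrative only: a synthetic light curve with a 103.5-s sinusoidal pulse stands in for the barycentred PCA data, and the $`\chi ^2`$ statistic compares the binned pulse profile against the weighted mean rate.

```python
import numpy as np

def epoch_fold_chi2(times, rates, errors, period, n_bins=20):
    """Chi-square of the folded light curve against a constant (mean) model."""
    phases = (times / period) % 1.0
    bins = np.floor(phases * n_bins).astype(int)
    mean_rate = np.average(rates, weights=1.0 / errors**2)
    chi2 = 0.0
    for b in range(n_bins):
        sel = bins == b
        if not sel.any():
            continue
        r = np.average(rates[sel], weights=1.0 / errors[sel]**2)
        e = 1.0 / np.sqrt(np.sum(1.0 / errors[sel]**2))
        chi2 += (r - mean_rate) ** 2 / e**2
    return chi2

# Synthetic example: a 103.5-s sinusoidal pulsation on top of Gaussian noise
rng = np.random.default_rng(0)
t = np.arange(0.0, 2250.0, 1.0)                 # 1-s bins, one short pointing
rates = 4.0 + 1.0 * np.sin(2 * np.pi * t / 103.5) + rng.normal(0.0, 0.5, t.size)
errors = np.full(t.size, 0.5)

trial_periods = np.linspace(95.0, 112.0, 100)   # a range of 100 trial periods
chi2 = [epoch_fold_chi2(t, rates, errors, p) for p in trial_periods]
best = trial_periods[int(np.argmax(chi2))]
```

Because the statistic is computed from folded bins, gaps in the time series simply leave some phase bins less populated rather than invalidating the search, which is why a longer baseline can be used than with Fourier analysis.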
#### 2.2.3 Spectral fit Formally, the X-ray spectra are equally well represented by absorbed power-law, blackbody and bremsstrahlung models. Table 1 shows the spectral fit results. All these models gave fits of comparable quality, which means that we are unable to distinguish meaningfully between the different spectral models of Table 1, even though the blackbody fit is unlikely to have any physical meaning, because of the very small emitting area and the fact that it does not require any absorption (introducing $`N_\mathrm{H}`$ does not improve the fit) – see Rutledge et al. (1999) for a discussion of the physical inadequacy of this model for neutron stars. The value of the hydrogen column density ($`N_\mathrm{H}`$), which is consistent for the power-law and bremsstrahlung fits, is too high to be purely interstellar. The interstellar reddening to the source must be smaller than the measured $`E(B-V)\approx 0.7`$ (Steele et al. 1998), which is the sum of interstellar and circumstellar contributions, the latter from the disc surrounding the Be star. According to the relation by Bohlin et al. (1978), $`E(B-V)=0.7`$ implies $`N_\mathrm{H}=4.1\times 10^{21}\mathrm{cm}^{-2}`$ and therefore there must be a substantial contribution of local material to the absorption. From the spectral fits, we estimate the 3 – 20 keV X-ray luminosity to be $`3.5\times 10^{33}\mathrm{erg}\mathrm{s}^{-1}`$ and $`4.5\times 10^{33}\mathrm{erg}\mathrm{s}^{-1}`$ for the August 1998 and November 1998 observations respectively, assuming a distance of 2 kpc (Steele et al. 1998). Although the values of the spectral parameters are consistent with each other within the error margins, they all show the same trend, namely, a harder spectral state during the 1998 August observations (lower photon index and higher blackbody and bremsstrahlung temperatures). 
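The conversion from fitted flux to luminosity is the usual isotropic relation $`L=4\pi d^2F`$ at the adopted distance. A sketch of the arithmetic, using the 2 – 10 keV flux estimates from the PCA fits in Sect. 2.2:

```python
import math

KPC_CM = 3.086e21            # 1 kpc in cm
D = 2.0 * KPC_CM             # adopted distance to A 0535+26 (Steele et al. 1998)

def luminosity(flux):
    """Isotropic luminosity L = 4 pi d^2 F; flux in erg cm^-2 s^-1."""
    return 4.0 * math.pi * D**2 * flux

l_aug = luminosity(6e-12)    # ~2.9e33 erg/s (August 1998, 2-10 keV)
l_nov = luminosity(9e-12)    # ~4.3e33 erg/s (November 1998, 2-10 keV)
```

These values bracket the same few-times-$`10^{33}`$ erg s<sup>-1</sup> level as the 3 – 20 keV luminosities quoted from the spectral fits, and, like the fluxes themselves, are upper limits given the uncertain diffuse-emission contribution.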
## 3 Results Our November 1998 X-ray observations represent a clear detection of A 0535+26 at a time when the optical counterpart showed no evidence for the presence of circumstellar material. Moreover, Haigh et al. (2000) present spectroscopy showing that the disc was already absent as early as late August 1998, when our first observation was taken. The observed luminosities in the 2 – 10 keV range ($`2\times 10^{33}\mathrm{erg}\mathrm{s}^{-1}\le L_\mathrm{x}\le 4.5\times 10^{33}\mathrm{erg}\mathrm{s}^{-1}`$) are definitely smaller than the quiescence luminosity observed on other occasions when the equatorial disc surrounding the Be star was present. For example, Motch et al. (1991) observed the source on several occasions at a level $`L_\mathrm{x}\approx 1.5\times 10^{35}\mathrm{erg}\mathrm{s}^{-1}`$ in the 1 – 20 keV range (correcting their value to the adopted distance of 2 kpc) using EXOSAT. The EXOSAT observation, as well as several other quiescence detections of Be/X-ray binaries with $`P_\mathrm{s}\sim 100\mathrm{s}`$, have always been interpreted in terms of accretion on to the surface of the neutron star from the equatorial outflow from the Be star. However, on this occasion we have observed the source when the disc of the Be star had been absent for several months and at an X-ray luminosity two orders of magnitude lower. Therefore we cannot assume a priori that the emission mechanism in operation is the same. In order to explain the two-orders-of-magnitude difference in luminosity, we can invoke accretion from a far less dense outflow or assume that some other emission mechanism is at work. We first consider the possibility that the observed luminosity is due to accretion on to the surface of the neutron star. 
Assuming an efficiency $`\eta =1`$ in the conversion of gravitational energy into X-ray luminosity, $`L_\mathrm{x}=4\times 10^{33}\mathrm{erg}\mathrm{s}^{-1}`$ translates into an accretion rate $`\dot{M}=2.1\times 10^{13}\mathrm{g}\mathrm{s}^{-1}=3.3\times 10^{-13}M_{\odot }\mathrm{yr}^{-1}`$. Following Stella et al. (1986), we define the corotation radius as that at which the pulsar corotation velocity equals the Keplerian velocity. The corotation radius is given by $$r_\mathrm{c}=\left(\frac{GM_\mathrm{x}P_\mathrm{s}^2}{4\pi ^2}\right)^{\frac{1}{3}}$$ (1) where $`G`$ is the gravitational constant, $`M_\mathrm{x}`$ is the mass of the neutron star (assumed to be $`1.44M_{\odot }`$) and $`P_\mathrm{s}`$ is the spin period. For A 0535+26, $`r_\mathrm{c}=3.7\times 10^9\mathrm{cm}`$. The magnetospheric radius at which the magnetic field begins to dominate the dynamics of the inflow depends on the accretion rate and can be expressed as $$r_\mathrm{m}=K\left(GM_\mathrm{x}\right)^{-1/7}\mu ^{4/7}\dot{M}^{-2/7}$$ (2) where $`\mu `$ is the neutron star magnetic moment and $`\dot{M}`$ is the accretion rate. $`K`$ is a constant that in the case of A 0535+26 has been determined to be $`K\approx 1.0`$ when an accretion disc is present (Finger et al. 1996) and from theoretical calculations is expected to have a similar value for wind accretion. Following Finger et al. (1996), we will assume a magnetic dipolar field of $`9.5\times 10^{12}\mathrm{G}`$, resulting in $`\mu =4.75\times 10^{30}\mathrm{G}\mathrm{cm}^3`$. For the accretion rate $`\dot{M}=2.1\times 10^{13}\mathrm{g}\mathrm{s}^{-1}`$ derived above, the magnetospheric radius would be $`r_\mathrm{m}=9.3\times 10^9\mathrm{cm}`$. Therefore $`r_\mathrm{m}>r_\mathrm{c}`$ and the neutron star must be in the centrifugal inhibition regime. In order to estimate the solidity of this result, we point out that if the 110 keV cyclotron line detected in the spectrum of A 0535+26 (Kendziorra et al. 1994; Grove et al. 
1995) is the second harmonic instead of the first, as has been suggested, the magnetic field (and magnetic moment) would be smaller by a factor of 2. However, this would require a higher value for $`K`$ in order to fit the observations of a QPO in this system (Finger et al. 1996), leaving the value of $`r_\mathrm{m}`$ unaffected. An efficiency in the conversion of gravitational energy into radiation as low as $`\eta =0.5`$ would translate into a reduction in $`r_\mathrm{m}`$ by only a factor $`\sim 0.8`$. Therefore we conclude that the neutron star is certain to be in the inhibited regime. According to Corbet (1996), when the source is in the inhibition regime, a luminosity comparable to that observed could be produced by release of gravitational energy at the magnetospheric radius (accretion onto the magnetosphere, for short). However, even assuming that the magnetosphere is at the corotation radius and an efficiency $`\eta =1`$ (i.e., best case), in order to produce $`L_\mathrm{x}=4\times 10^{33}\mathrm{erg}\mathrm{s}^{-1}`$, the accretion rate needed is $`\dot{M}_\mathrm{m}=8.2\times 10^{16}\mathrm{g}\mathrm{s}^{-1}`$. With such an accretion rate, the magnetosphere would be driven well within the corotation radius, and produce a luminosity of $`L_\mathrm{x}\approx 1.5\times 10^{37}\mathrm{erg}\mathrm{s}^{-1}`$ by accretion on to the surface of the neutron star. Therefore we conclude that the observed luminosity is not due to accretion on to the magnetosphere. We are thus left with the following possibilities for the origin of the X-ray emission: * Accretion on to the neutron star through some sort of leakage through the magnetospheric barrier. This could take two forms: either accretion directly from the Be star outflow, only through a fairly limited region near the spin axis, or accretion mediated by an accretion torus, supported by the centrifugal barrier, with a small amount of material managing to penetrate the magnetosphere. * Thermal emission from the heated core of the neutron star. 
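The two radii that drive this argument can be checked numerically. A sketch in cgs units: note that the magnetospheric radius scales as $`\mu ^{4/7}\dot{M}^{-2/7}`$, shrinking as the accretion rate grows, which is what reproduces the quoted values of $`r_\mathrm{c}`$ and $`r_\mathrm{m}`$.

```python
import numpy as np

G_GRAV = 6.674e-8            # gravitational constant (cm^3 g^-1 s^-2)
M_X = 1.44 * 1.989e33        # neutron star mass (g)
P_S = 103.0                  # spin period (s)
MU = 4.75e30                 # magnetic moment (G cm^3)
K = 1.0                      # Finger et al. (1996) disc-accretion value

# Corotation radius, Eq. (1)
r_c = (G_GRAV * M_X * P_S**2 / (4.0 * np.pi**2)) ** (1.0 / 3.0)

# Magnetospheric (Alfven-type) radius; shrinks with increasing accretion rate
def r_m(mdot):
    return K * (G_GRAV * M_X) ** (-1.0 / 7.0) * MU ** (4.0 / 7.0) * mdot ** (-2.0 / 7.0)

r_mag = r_m(2.1e13)          # at the quiescent accretion rate (g/s)
```

With these inputs $`r_\mathrm{c}\approx 3.7\times 10^9`$ cm and $`r_\mathrm{m}\approx 9.3\times 10^9`$ cm, so $`r_\mathrm{m}>r_\mathrm{c}`$ and the pulsar sits in the centrifugal inhibition regime, as stated.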
Brown et al. (1998) and Rutledge et al. (1999) have studied thermal emission from X-ray transients in quiescence. They predict a thermal luminosity in quiescence $$L_\mathrm{x}=6\times 10^{32}\mathrm{erg}\mathrm{s}^{-1}\times \frac{\dot{M}}{10^{-11}M_{\odot }\mathrm{yr}^{-1}}$$ (3) where $`\dot{M}`$ here represents the long-term average mass accretion rate. From the number of observed Type II outbursts in A 0535+26, we assume one giant outburst every 5 – 10 years, which translates into a long-term average of $`\dot{M}=4`$ – $`8\times 10^{-11}M_{\odot }\mathrm{yr}^{-1}`$. This would imply quiescence thermal emission in the range $`L_\mathrm{x}=2`$ – $`5\times 10^{33}\mathrm{erg}\mathrm{s}^{-1}`$, which is consistent with our observations. ## 4 Discussion It is very difficult to estimate the fraction of the signal that actually comes from the source, though the pulsed component is evidently a lower limit to it. The diffuse emission from the Galactic disc is not well determined at high Galactic longitudes (for A 0535+26, $`l=181.5\mathrm{°}`$), but if the assumption by Valinia & Marshall (1998) that its latitude distribution should be similar to that in the Galactic Ridge holds, then it should not be very strong at the position of A 0535+26 ($`b=-2.6\mathrm{°}`$). In any case, the total (source + diffuse) flux detected is lower than the average diffuse emission from the Galactic Ridge, which is $`2.4\times 10^{-11}\mathrm{erg}\mathrm{cm}^{-2}\mathrm{s}^{-1}`$ in the 2 – 10 keV band (Valinia & Marshall 1998). Given that the fitted spectra are much softer than the model fits to Galactic Ridge diffuse emission by Valinia & Marshall (1998), which have photon indices $`\mathrm{\Gamma }\approx 1.8`$, and the similarity between the pulse-peak and pulse-minimum spectra, it seems likely that most of the detected signal actually comes from the source. The high value of $`N_\mathrm{H}`$ obtained in all the non-thermal fits argues for the presence of large amounts of material in the vicinity of the neutron star. 
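Equation (3) is a one-line scaling; a sketch evaluating it for the long-term average rates implied by one giant outburst every 5 – 10 yr:

```python
def thermal_luminosity(mdot_avg):
    """Quiescent thermal luminosity scaling of Eq. (3); mdot_avg in Msun/yr."""
    return 6e32 * mdot_avg / 1e-11   # erg/s

# One giant outburst every 5 - 10 yr -> <Mdot> of a few times 1e-11 Msun/yr
l_lo = thermal_luminosity(4e-11)
l_hi = thermal_luminosity(8e-11)
```

The result, a few times $`10^{33}`$ erg s<sup>-1</sup>, matches the observed quiescent luminosities without any tuning, which is why the thermal interpretation is viable here.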
This could be caused by the pile-up of incoming material outside the magnetosphere. The observed spectrum is much softer than the spectra of Be/X-ray binaries at low luminosity during quiescence states (Motch et al. 1991 quote a photon index of 1.7 for their observations of A 0535+26) and could favour the thermal emission interpretation. Brown et al. (1998) proposed that thermal reactions deep within the crust of an accreting neutron star would maintain the neutron star core at temperatures of $`\sim 10^8\mathrm{K}`$, similar to that of a young radio pulsar. During the quiescent state of transient accretors, the conduction of thermal energy from the core should result in a detectable thermal spectrum from the neutron star atmosphere. In a high magnetic field pulsar, this thermal emission should be pulsed due to both the anisotropic surface temperature distribution caused by the dependence of the thermal conductivity on the magnetic field (Shibanov & Yakovlev 1996), and the anisotropic radiation pattern from the neutron star atmosphere resulting from the magnetic field (Zavlin et al. 1995). Our blackbody fit to the A 0535+26 spectrum resulted in an emission radius much smaller than the neutron star radius. Rutledge et al. (1999) show that fits to hydrogen atmosphere spectral models result in larger emission areas and lower effective temperatures than blackbody fits. The spectra may therefore be consistent with thermal emission from the pulsar. However, if this interpretation is correct, most of the emitted luminosity should be below the energy band that we observe. In that case, the bolometric luminosity may well exceed that predicted from our estimates of the long-term average accretion rates. Our detection of A 0535+26 can also be used to set limits on the outflow from the Be companion. The condition for centrifugal inhibition is $`r_\mathrm{m}\ge r_\mathrm{c}`$. 
Therefore the minimum accretion rate at which there is no inhibition corresponds to that at which $`r_\mathrm{m}=r_\mathrm{c}`$. Using the values above, we obtain an accretion rate onto the magnetosphere $`\dot{M}_\mathrm{m}=5.3\times 10^{14}\mathrm{g}\mathrm{s}^{-1}=8\times 10^{-12}M_{\odot }\mathrm{yr}^{-1}`$. If the observed luminosity is due to accretion on to the surface of the neutron star, the rate of mass flow into the vicinity of the neutron star is then constrained to be in the range $`3\times 10^{-13}M_{\odot }\mathrm{yr}^{-1}\le \dot{M}\le 8\times 10^{-12}M_{\odot }\mathrm{yr}^{-1}`$ (of course, if it is due to thermal emission, only the upper limit holds). The lower limit represents a small fraction of the mass lost from the Be star (which should be $`\sim 10^{-11}M_{\odot }\mathrm{yr}^{-1}`$), but the upper limit is close to the mass-loss values derived in the decretion disc model (Okazaki 1997b; Porter 1999). This value is also close to the accretion rate needed to sustain the quiescence luminosity observed by Motch et al. (1991), which is $`\dot{M}\approx 10^{-11}M_{\odot }\mathrm{yr}^{-1}`$. Such an accretion rate would represent a substantial fraction of the stellar mass-loss, though still only a small fraction of the disc mass (estimated to be $`10^{-9}`$ – $`10^{-10}M_{\odot }`$). As has been calculated above, most of the long-term accretion rate is due to the giant outbursts, where a substantial fraction of the Be disc mass must be accreted. From all the above, it is clear that the amount of material reaching the vicinity of the neutron star during the disc-less phase of the companion is smaller than during previous quiescence states. This cannot be due to an orbital effect because Motch et al. 
(1991) observed the source at different orbital phases and also because our two observations took place close to periastron (orbital phases $`\varphi \approx 0`$ for the August observation and $`\varphi \approx 0.8`$ for the November observation, according to the ephemeris of Finger et al. 1996). Existing evidence seems to argue against the existence of a persistent accretion disc surrounding the neutron star in A 0535+26. The statistical analysis of Clark et al. (1999) showed that there is no significant contribution from an accretion disc to the optical/infrared luminosity of A 0535+26. This does not rule out the presence of an accretion disc (which could, for example, be too small to radiate a significant flux in comparison to the Be circumstellar disc). It is believed that A 0535+26 does indeed form an accretion disc around the neutron star during Type II outbursts, since very fast spin-up and quasi-periodic oscillations have been observed (Finger et al. 1996). However, the lack of spin-up during Type I outbursts led Motch et al. (1991) to conclude that no persistent accretion disc was present. In contrast, the Be/X-ray binary 2S 1845$`-`$024 shows large spin-up during every Type I outburst (Finger et al. 1999). If no accretion disc is present, the reduced amount of material reaching the neutron star must be directly due to a change in the parameters of the outflow from the Be star. Within the framework of modern models for Be star discs, considering very subsonic outflow velocities, such a change can only be due to a lower outflow density. Unfortunately, since the details of the magnetic inhibition process are poorly understood, we can only constrain the mass flow rate reaching the neutron star to be below that corresponding to the transition at which inhibition occurs, which is very close to the rate deduced from previous quiescence observations in which the source was not in the inhibition regime. 
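The transition rate quoted above follows from setting $`r_\mathrm{m}=r_\mathrm{c}`$ and inverting the $`\dot{M}^{-2/7}`$ dependence of the magnetospheric radius. A numerical sketch (cgs; neutron-star mass, magnetic moment and corotation radius as quoted in the text):

```python
G_GRAV = 6.674e-8               # cm^3 g^-1 s^-2
M_X = 1.44 * 1.989e33           # neutron star mass (g)
MU = 4.75e30                    # magnetic moment (G cm^3)
K = 1.0
R_C = 3.7e9                     # corotation radius (cm)

# Invert r_m = K (G M_x)^(-1/7) mu^(4/7) Mdot^(-2/7) at r_m = r_c:
# Mdot_crit = (K (G M_x)^(-1/7) mu^(4/7) / r_c)^(7/2)
mdot_crit = (K * (G_GRAV * M_X) ** (-1.0 / 7.0) * MU ** (4.0 / 7.0) / R_C) ** 3.5
mdot_crit_msun_yr = mdot_crit * 3.156e7 / 1.989e33   # g/s -> Msun/yr
```

This reproduces the quoted transition rate of $`5.3\times 10^{14}`$ g s<sup>-1</sup>, i.e. $`8\times 10^{-12}M_{\odot }`$ yr<sup>-1</sup>, above which centrifugal inhibition would switch off.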
## 5 Conclusions A 0535+26 was active at a time when the optical counterpart V725 Tau showed no evidence for a circumstellar disc. The luminosity was two orders of magnitude lower than in previous quiescence detections and accretion was centrifugally inhibited. Given that the observed luminosity cannot be due to accretion onto the magnetosphere, we are observing either some material leaking through the magnetosphere or thermal emission from the heated core of the neutron star. In any case, this detection represents a state of an accreting X-ray pulsar that had not been observed before. Further observations of Be/X-ray binaries in a similar state (when their companions have lost their discs and very little material can reach the vicinity of the neutron star) are needed. Observations with Chandra or XMM, which combine much higher sensitivities with more adequate energy ranges, could determine whether the observed spectrum is compatible with thermal emission models. ## Acknowledgements We thank Jean Swank and the RossiXTE team for granting a Target of Opportunity observation and carrying it out on very short notice. Simon Clark is thanked for his help in preparing the proposal. The WHT is operated on the island of La Palma by the Royal Greenwich Observatory in the Spanish Observatorio del Roque de Los Muchachos of the Instituto de Astrofísica de Canarias. The observations were taken as part of the ING service observing programme. This research has made use of data obtained through the High Energy Astrophysics Science Archive Research Center Online Service, provided by the NASA/Goddard Space Flight Center. IN is supported by an ESA external fellowship. P. Reig acknowledges partial support via the European Union Training and Mobility of Researchers Network Grant ERBFMRX/CT98/0195.
# Multifractal Properties of the Random Resistor Network ## Abstract We study the multifractal spectrum of the current in the two-dimensional random resistor network at the percolation threshold. We consider two ways of applying the voltage difference: (i) two parallel bars, and (ii) two points. Our numerical results suggest that in the infinite system limit, the probability distribution behaves for small $`i`$ as $`P(i)\sim 1/i`$ where $`i`$ is the current. As a consequence, the moments of $`i`$ of order $`q\le q_c=0`$ do not exist and all currents below the most probable one have the fractal dimension of the backbone. The backbone can thus be described in terms of only (i) blobs of fractal dimension $`d_B`$ and (ii) high-current-carrying bonds of fractal dimension going from $`1/\nu `$ to $`d_B`$. PACS numbers: 64.60.Ak, 05.45.Df The transport properties of the percolating cluster have been the subject of numerous studies. A particularly interesting system is the random resistor network (RRN), where the bonds have a random conductance. The random resistor network serves as a paradigm for many transport properties in heterogeneous systems as well as being a simplified model for fracture. The first studies of the RRN were devoted to effective properties of the network (conductivity, permittivity, etc.), but for many practical applications—such as fracture and dielectric breakdown—the central quantity is the probability distribution $`P(i)`$ of currents $`i`$. For instance, in the random fuse network, it is the maximum current corresponding to the hottest or “red” bonds which will determine the macroscopic failure of the system. The probability distribution $`P(i)`$ has many interesting features, one of which is multifractality: in order to describe $`P(i)`$, an infinite set of exponents is needed. 
This idea of multifractality was initially proposed to treat turbulence and later applied successfully in many different fields, ranging from model systems such as DLA to physiological data such as heartbeat. It was first believed that the low current part of $`P(i)`$ and of the multifractal spectrum follows a log-normal law, as is the case on hierarchical lattices. It is now clear that, for small currents, the current probability distribution follows a power law $`P(i)\sim i^{b-1}`$ where $`b\ge 0`$. For large currents, there is a weak dependence on the system size $`L`$. This is in contrast with small currents, which are governed by very long paths, and therefore depend more strongly on $`L`$. It was suggested that the exponent $`b`$ of the low-current part has a $`1/\mathrm{log}L`$ dependence, where $`L`$ is the system size. The asymptotic value $`b_{\infty }`$ of the exponent $`b`$ is of crucial importance. If $`b_{\infty }`$ is finite and positive, then a low current evolves on a subset with a fractal dimension depending on its value. On the other hand, if $`b_{\infty }`$ is zero, then the low current part of the multifractal spectrum is flat and the entire backbone is contributing to low currents. It is thus important to understand if the apparent subset structure with different fractal dimensions is a finite-size effect. This problem was addressed by Batrouni et al., who conjectured a zero asymptotic slope, and by Aharony et al., who proposed a finite asymptotic value. The maximum value of $`L`$ in the literature is $`128`$, so numerical estimates could not lead to a definite conclusion. In this Letter, we present evidence that the asymptotic slope is zero. We first recall the basis of multifractality applied to the percolating two-dimensional resistor network of linear size $`L`$. Let $`n(i,L)`$ be the number of bonds carrying current $`i`$. 
By the steepest descent method, the main contribution to $`n(i,L)`$ for large $`L`$ is given by $$n(i,L)\sim L^{f(\alpha ,L)}$$ (1) where $`\alpha \equiv \mathrm{log}i/\mathrm{log}L`$. The multifractal spectrum $`f(\alpha ,L)\equiv \mathrm{log}n/\mathrm{log}L`$ can thus be interpreted as the fractal dimension of the subset of bonds carrying the current $`i`$. The $`q`$-th moment of the current is defined as $`M_q\equiv \langle \sum i^q\rangle `$, where the sum is over all bonds carrying a non-zero current and $`\langle \cdots \rangle `$ denotes an average over different disorder configurations. These moments exist for $`q>q_c`$, and it can be easily shown that the “threshold” is $`q_c=b`$. The asymptotic slope thus gives the asymptotic value of the threshold $`q_c`$. For the fixed current ensemble, one observes that $`M_q\sim L^{\tau _q}`$ for large $`L`$ and $`q>q_c`$, where $`\tau _q`$ is a universal exponent. In particular, $`\tau _0=d_B`$, $`\tau _2=t/\nu `$, and $`\tau _{\infty }=1/\nu `$ where $`d_B`$ is the fractal dimension of the backbone, $`t`$ the conductivity exponent, and $`\nu `$ is the correlation length exponent. If the behavior is monofractal, then $`\tau _q`$ is a linear function of $`q`$, while in the multifractal case, the exponents are not described by a simple linear function of $`q`$. In the $`L\to \infty `$ limit, knowing $`f(\alpha )`$ is equivalent to knowing the infinite set of exponents $`\tau _q`$, as $`f(\alpha )`$ is the Legendre transform of $`\tau _q`$. The low current part of $`f(\alpha ,L)`$ was found numerically to be a power law of slope $`b=b(L)`$, where $$b(L)=b_{\infty }+\frac{A}{\mathrm{log}L}+\epsilon (L)$$ (2) and $`\epsilon (L)`$ is a correction decreasing faster than $`1/\mathrm{log}L`$ when $`L`$ is increasing. This equation shows a strong finite-size effect since $`\mathrm{log}L`$ grows very slowly, and two possibilities for $`b_{\infty }`$ were proposed, $`b_{\infty }=0`$ or $`b_{\infty }=1/4`$. We consider the two-dimensional random resistor network at criticality, i.e. 
the fraction of conducting bonds $`p`$ is equal to its critical value $`p=p_c=1/2`$. We first apply a voltage difference between two parallel bars. We compute $`f(\alpha ,L)`$, for a fixed voltage difference, for $`L=50,\mathrm{},1000`$, and average over $`10^4`$ configurations for each $`L`$. We show our results in Fig. 1a. The slope is clearly decreasing with $`L`$, confirming the strong finite-size effects already observed. Next, we consider a second type of configuration, which we call the “two injection points” case, in contrast with the usual “parallel bars” case. We impose a voltage difference between two points $`P`$ and $`Q`$ separated by a distance $`r`$, and we look for the backbone connecting these two points. This situation was studied in, but here we keep only the backbones of size $`L`$. In this way, we have large backbones connecting the two points $`P`$ and $`Q`$, and for $`r\ll L`$ we expect to have a large number of small currents on bonds belonging to long loops. The multifractal spectrum is then defined in the same way as for the parallel bars and we calculate for different values of $`L`$ the slope of its small-current part. The multifractal spectrum in this case is shown in Fig. 1b. We observe that there is a large amount of small currents, and that the asymptotic limit is reached faster in the two injection points case. We expect that the low current distribution will be asymptotically the same as in the parallel bar case, so the consistency between the two configurations will support our results. However, for large currents there are some distinct differences in the multifractal spectrum. Fig. 2a shows the slope $`b`$ versus $`1/\mathrm{log}L`$ according to Eq. (2) for both multifractal spectra. The extrapolation to $`L=\mathrm{}`$ is consistent with $`b_{\mathrm{}}=0`$ in both cases. This result is consistent with the behavior of the successive intercepts (Fig. 2b). 
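The computation behind the parallel-bars spectrum can be sketched in a few lines: dilute the bonds of a square lattice at $`p=1/2`$, solve Kirchhoff's equations for the node potentials, and read off the bond currents whose histogram gives $`n(i,L)`$. The sketch below (Python with numpy; all function and variable names are ours, not the authors' code) implements the parallel-bars geometry on a small grid:

```python
import numpy as np

def bond_currents(L, p=0.5, V=1.0, seed=None):
    """Currents in a random resistor network: each nearest-neighbour bond
    of an L x L site grid conducts (unit conductance) with probability p.
    A voltage difference V is imposed between the left (y = 0) and right
    (y = L-1) columns -- the 'parallel bars' geometry of the text."""
    rng = np.random.default_rng(seed)
    n = L * L
    idx = lambda x, y: x * L + y
    G = np.zeros((n, n))                      # weighted graph Laplacian
    bonds = []
    for x in range(L):
        for y in range(L):
            for dx, dy in ((1, 0), (0, 1)):   # down and right neighbours
                if x + dx < L and y + dy < L and rng.random() < p:
                    i, j = idx(x, y), idx(x + dx, y + dy)
                    bonds.append((i, j))
                    G[i, i] += 1.0
                    G[j, j] += 1.0
                    G[i, j] -= 1.0
                    G[j, i] -= 1.0
    v = np.zeros(n)
    left = [idx(x, 0) for x in range(L)]
    right = [idx(x, L - 1) for x in range(L)]
    v[left] = V                               # fixed boundary potentials
    fixed = left + right
    free = [k for k in range(n) if k not in fixed]
    # Kirchhoff: G_ff v_f = -G_fc v_c; lstsq copes with dangling clusters
    rhs = -G[np.ix_(free, fixed)] @ v[fixed]
    v[free] = np.linalg.lstsq(G[np.ix_(free, free)], rhs, rcond=None)[0]
    return np.array([abs(v[i] - v[j]) for i, j in bonds])
```

Averaging the histogram of $`\mathrm{log}`$ currents over many disorder configurations, and over a range of $`L`$, then yields $`f(\alpha ,L)`$; the production runs of the text reach $`L=1000`$ and $`10^4`$ configurations.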
Another functional form of $`b`$ versus $`L`$ could lead to another value of $`b_{\mathrm{}}`$. If we replace the abscissa of Fig. 2a by $`1/(\mathrm{log}L)^\kappa `$, then we find that the extrapolated value for $`b_{\mathrm{}}`$ depends on $`\kappa `$, ranging from $`b_{\mathrm{}}\approx 0.10`$ for $`\kappa =2`$ to $`b_{\mathrm{}}<0`$ (which is impossible) for $`\kappa =0.5`$. It is numerically difficult to distinguish between a $`1/\mathrm{log}L`$ and a $`1/(\mathrm{log}L)^2`$ behavior, but the $`1/\mathrm{log}L`$ form is the most commonly used. In Fig. 2a, we observe higher order corrections to the behavior $`b(L)=b_{\mathrm{}}+A/\mathrm{log}L`$. A better fit can be obtained by adding to the linear form a small quadratic term $`B/(\mathrm{log}L)^2`$ (and possibly even cubic and quartic terms). We find that we cannot do a quadratic fit over the whole range of $`1/\mathrm{log}L`$, and indeed this leads to two non-physical results: (a) For both geometries, the fits have negative slopes at $`1/\mathrm{log}L=0`$, which is not physical since the larger the system size, the larger the number of small currents, so the behavior of $`b`$ should be monotonically decreasing with $`L`$. (b) A second defect of these quadratic fits is that the obtained values for the intercepts are different for the two geometries, which is impossible. The important assumption here is the behavior of the leading term. There is no proof that the leading term of the expansion is $`1/\mathrm{log}L`$ rather than $`(1/\mathrm{log}L)^\kappa `$ with $`\kappa \ne 1`$. However, the assumption that the leading term of the expansion is $`1/\mathrm{log}L`$ with $`b_{\mathrm{}}=0`$ is consistent with our numerical data, and shows that the correction $`\epsilon (L)`$ decays faster than an inverse power of $`\mathrm{log}L`$ (see Fig. 2c). 
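The extrapolation just described amounts to a linear least-squares fit of $`b`$ against $`1/\mathrm{log}L`$, i.e. Eq. (2) with the correction $`\epsilon (L)`$ neglected. A minimal sketch (the helper name and the sample values below are illustrative, not the paper's data):

```python
import numpy as np

def extrapolate_b(Ls, bs):
    """Least-squares fit of b(L) = b_inf + A / log(L); returns (b_inf, A).
    This is Eq. (2) of the text with epsilon(L) dropped."""
    x = 1.0 / np.log(np.asarray(Ls, dtype=float))
    A, b_inf = np.polyfit(x, np.asarray(bs, dtype=float), 1)
    return float(b_inf), float(A)

# synthetic check: data generated with b_inf = 0 is recovered exactly
Ls = [50, 100, 200, 500, 1000]
bs = [2.0 / np.log(L) for L in Ls]
b_inf, A = extrapolate_b(Ls, bs)
```

Replacing the abscissa by $`1/(\mathrm{log}L)^\kappa `$ is a one-line change in `x`, which makes it easy to reproduce the $`\kappa `$-dependence of the extrapolated intercept discussed above.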
Finally, we note that the sequence of maximum values of $`f(\alpha ,L)`$ for the two injection points case plausibly extrapolates in the variable $`1/\mathrm{log}L`$ as $`L\mathrm{}`$ to a value of $`d_B`$ close to the known value $`1.64`$ (Fig. 3). Thus our results suggest the intriguing possibility that for $`L\mathrm{}`$, the small current part of $`f(\alpha ,L)`$ is a horizontal line at the value $`d_B`$, implying that in an infinite system the fractal dimension of the subset contributing to small current is $`d_B`$, independently of the value of $`\alpha `$. In this sense, the small current probability distribution is apparently not multifractal. The “perfectly balanced” bonds which carry zero current have a fractal dimension equal to $`d_B`$. Since these bonds contribute to $`f(\alpha ,L)`$ for $`\alpha \mathrm{}`$, the fact that their fractal dimension is $`d_B`$ supports our hypothesis that $`b_{\mathrm{}}=0`$. A related conclusion is that $`q_c=0`$, or the negative moments of the current do not exist in the infinite-size limit. In particular, it shows that the first-passage time for a tracer particle travelling in a flow field in a porous medium modelled by a percolation cluster diverges in an infinite system. Moreover, the result $`b_{\mathrm{}}=0`$ is supported by the following argument. If $`b_{\mathrm{}}`$ were not zero, then the number of bonds carrying a small current $`i`$ would be $`n(i0,L=\mathrm{})i^{b_{\mathrm{}}}`$. This behavior would indicate that the number of bonds carrying a small current $`i`$ approaches zero when $`i0`$, which seems unlikely, since on an infinite backbone, the number of loops is very large, and $`n(i0,L=\mathrm{})`$ should be nonzero. Hence $`b_{\mathrm{}}=0`$. This argument is consistent with the fact that the total number of bonds carrying a nonzero current, $`\int _0n(i,L)d(\mathrm{log}i)`$, should diverge as $`L\mathrm{}`$. 
For large values of the current, the multifractal features do not change as $`L`$ increases, suggesting that in the infinite-size limit, there are essentially two different types of subsets. The first comprises the blobs of fractal dimension $`d_B`$, and the second set comprises links carrying larger values of the current (red bonds), of fractal dimension ranging from $`d_{\text{red}}=1/\nu `$ to $`d_B`$. We thank L.A.N. Amaral for valuable help, and J.S. Andrade, A. Chessa, A. Coniglio, N.V. Dokholyan, P. Gopikrishnan, P.R. King, G. Paul, A. Scala, and F.W. Starr for useful discussions, two anonymous referees for helpful suggestions, and DGA and BP Amoco for financial support.
# Scaling in a cellular automaton model of earthquake faults ## Abstract We present theoretical arguments and simulation data indicating that the scaling of earthquake events in models of faults with long-range stress transfer is composed of at least three distinct regions. These regions correspond to three classes of earthquakes with different underlying physical mechanisms. In addition to the events that exhibit scaling, there are larger “breakout” events that are not on the scaling plot. We discuss the interpretation of these events as fluctuations in the vicinity of a spinodal critical point. Earthquake faults and fault systems are known to exhibit scaling where the number $`N_M`$ of earthquakes with seismic moment $`M`$ scales as $`N_M1/M^B`$ with $`B`$ between 1.5 and 2.0. The observed scaling is over several decades, but for the larger events there is an indication that scaling does not apply, a fact often attributed to poor statistics. However, because models also produce this deviation from scaling, even when there are many large events, the origin of this deviation lies elsewhere. Other questions of interest include: What is the physical mechanism that produces the scaling? Do all the events on the scaling plots have the same physical origin? We report the results of our theoretical and numerical investigations of a cellular automaton (CA) model of an earthquake fault indicating that the scaling region is dominated by a spinodal-like (pseudospinodal) singularity that determines the distribution of events. The scaling can be decomposed into three distinct regions driven by different physical mechanisms. In addition to the scaling region, we find that the largest “earthquakes” are not on the scaling plot and have yet another physical origin. 
The system of interest is a CA version of the slider block model and consists of a discrete two-dimensional ($`d=2`$) array of blocks connected by linear springs with a spring constant (stress Green’s function) $`T(r_{ij})`$ and to a loader plate by linear springs with constant $`K_L`$; $`r_{ij}`$ is the distance between blocks. Each block $`i`$ initially receives a random position $`U_i`$ from a uniform distribution, and the loader plate contribution to the stress is set to 0. The stress $`\sigma _i`$ on each block is given by $`\sigma _i(t)=\sum _jT(r_{ij})[U_j(t)-U_i(t)]+K_L[V\sum _n\mathrm{\Theta }(t-n)-U_i(t)]`$ and compared to a threshold value $`\sigma _i^F`$. If $`\sigma _i<\sigma _i^F`$, the block is not moved. If $`\sigma _i\ge \sigma _i^F`$, the block slips (fails) and is moved according to $`U_i(t+1)=U_i(t)+[\sigma _i(t)-\sigma _i^R(t)]/K,`$ where $`K=K_L+K_C`$, and $`K_C=\sum _{j\ne i}T(r_{ij})`$. The residual stress, $`\sigma _i^R(t)=\sigma ^R+a(\eta _i(t)-0.5)`$, specifies the stress on a block immediately after failure. The random noise $`\eta _i`$ is taken from a uniform distribution between $`0`$ and $`1`$, $`a`$ sets the noise amplitude, and $`\sigma ^R`$ is the average residual stress. After all the blocks have been tested and moved, the stress on each block is measured again and the process is repeated. We choose $`T_{ij}=K_C/q`$ for all $`j`$ inside a square interaction range with area $`(2R+1)^2`$ centered on site $`i`$, where $`q=(2R+1)^2-1`$ is the number of neighbors; $`T_{ij}=0`$ for all the sites outside the interaction range. After block $`i`$ slips, $`K_C/K`$ of the local stress drop, $`\sigma _i-\sigma _i^R`$, is distributed equally to its neighbors, and $`K_L/K`$ is dissipated. When no block has a stress greater than $`\sigma _i^F`$, the earthquake ceases and the seismic moment released during the event is $`M=\sum _i\mathrm{\Delta }U_i`$, where $`\mathrm{\Delta }U_i`$ is the slip of block $`i`$ during the earthquake. 
The loader plate is then moved a distance $`V\mathrm{\Delta }T`$, the stresses are updated, and we search for the unstable blocks that will initiate the next event. The quantity $`\mathrm{\Delta }T,`$ which we set equal to 1, sets the “tectonic” time scale. In the limit $`V=0`$ the stress is globally incremented to bring the “weakest” block to failure and there is a single initiator per plate update. Because the $`T_{ij}`$ appropriate for earthquake faults is long-range, we will consider $`R>>1`$. In our simulations $`R=30`$, $`\sigma _i^F=\sigma ^F=1`$ is a spatial constant, $`K_L=1`$, $`K_C=100`$, $`V=0`$, and the distribution of residual stresses is defined by $`\sigma ^R=0.25`$ and $`a=0.5`$. In Fig. 1 we plot the log (base 10) of the probability $`n(s)`$ of events of size $`s`$ (number of failing blocks) versus $`\mathrm{log}(s)`$ generated by the model. For the chosen parameters there are no multiple failures of the same block during an earthquake and $`M\propto s`$. For the total of $`18\times 10^6`$ events, there is still a significant spread of the data in the large events region. The origin of this spread is not poor statistics. We now review the theoretical arguments that describe the scaling of events. In the limit that $`R`$ diverges such that $`\int r^2T(r)d\stackrel{}{r}\mathrm{}`$ but $`\int T(r)d\stackrel{}{r}`$ is finite, we have derived a Langevin equation for the stress. This derivation and numerical simulations confirm that the CA model is described by equilibrium statistical mechanics in the limit of $`R\mathrm{}`$ and that this description is a very good approximation for systems with long, but not infinite, $`R`$. Because this Langevin equation is a general description of systems with a simple scalar order parameter, the scaling of the fluctuations in the vicinity of spinodals of mean-field Ising models (and simple fluids) and the present CA model is the same. 
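A toy implementation of one stress-propagation event under the rules quoted above might look as follows (Python with periodic boundaries; this is our illustrative reconstruction of the update rule, not the authors' code, and all names are ours):

```python
import numpy as np

def run_event(sigma, R, K_L, K_C, sigma_res=0.25, a=0.5, sigma_F=1.0, seed=None):
    """Propagate one 'earthquake' on a periodic 2D stress array `sigma`
    (modified in place) and return the event size s (number of failures).
    A failing block drops to a random residual stress; K_C/K of the drop
    is shared equally among the q neighbours in range R, K_L/K is lost."""
    rng = np.random.default_rng(seed)
    K = K_L + K_C
    q = (2 * R + 1) ** 2 - 1          # number of neighbours in range R
    n = sigma.shape[0]
    size = 0
    while True:
        unstable = np.argwhere(sigma >= sigma_F)
        if len(unstable) == 0:
            return size               # no block above threshold: event ends
        for i, j in unstable:
            res = sigma_res + a * (rng.random() - 0.5)   # residual stress
            drop = sigma[i, j] - res
            sigma[i, j] = res
            size += 1
            share = (K_C / K) * drop / q   # the K_L/K remainder dissipates
            for di in range(-R, R + 1):
                for dj in range(-R, R + 1):
                    if di or dj:
                        sigma[(i + di) % n, (j + dj) % n] += share
```

Repeatedly bringing the weakest block to threshold (the $`V=0`$ plate update) and calling `run_event` produces the event-size statistics whose histogram corresponds to Fig. 1.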
Our main assumption is that the structure and dynamics of earthquake events are identical to the structure and dynamics of fluctuations near critical points and spinodals. In particular, scaling is determined by the presence of a spinodal singularity. For Ising systems, by mapping the thermal critical point onto a properly chosen percolation model, the properties of the fluctuations at the thermal critical point can be obtained from the properties of the clusters at the percolation threshold: percolation clusters are the physical realization of the fluctuations. At the critical point the clusters associated with the divergent connectedness length are the fluctuations associated with the divergent susceptibility in the thermal model. We can use a similar mapping to generate a percolation model for the spinodal. Therefore, we can describe the scaling of events in the CA model in the language of cluster scaling for Ising models. We first discuss how the cluster structure relates to thermal critical phenomena in non-mean-field systems ($`R=1`$) where hyperscaling is valid. In this case, the mean number of clusters in a region of volume $`\xi ^d`$, where $`\xi `$ is the correlation length, is one. In such systems the critical phenomena fluctuation in this volume is isomorphic to the cluster. This picture is altered in mean-field systems. For mean-field Ising systems ($`R\mathrm{}`$) there is a line of spinodal critical points in addition to the usual critical point. These mean-field thermal singularities can also be mapped onto percolation transitions, but the relation between percolation clusters and critical fluctuations is qualitatively different. The mean number of clusters in a volume $`\xi ^d`$ is $`N_c=R^dϵ^{2-d/2}`$ near the critical point and $`N_s=R^d\mathrm{\Delta }h^{3/2-d/4}`$ near the spinodal. 
Here $`ϵ=(T-T_c)/T_c,`$ where $`T_c`$ is the critical temperature, and $`\mathrm{\Delta }h=h-h_s,`$ where $`h_s`$ is the value of the magnetic field at the spinodal for a fixed temperature $`T<T_c.`$ The factor $`R^d`$ appears because all lengths are in units of the interaction range. The Ginzburg criterion for mean-field critical points is $`ϵ^{-\gamma }/(R^dϵ^{2\beta -d\nu })<<1`$. That is, the system is well approximated by mean-field theory if the fluctuations are small compared to the order parameter. Using the mean-field exponents $`\gamma =1`$, $`\beta =1/2`$ and $`\nu =1/2`$, the Ginzburg criterion is equivalent to $`N_c>>1`$. We will refer to systems with $`N_c>>1`$ but finite as near-mean-field. A similar argument is used near the spinodal to show that $`N_s>>1`$ for near-mean-field. Because $`N_c>>1`$, the meaning of order parameter scaling is changed. For systems with hyperscaling, the density of the single cluster with diameter $`\xi `$ scales as $`ϵ^\beta `$, as does the order parameter. In mean-field and near-mean-field systems, $`ϵ^\beta `$ cannot be the density of a single cluster, because that would lead to a magnetization per spin greater than one. Instead $`ϵ^\beta `$ is the density of all the spins in all the clusters in a volume $`\xi ^d`$. Because all of the clusters are identical, the density of each of these clusters is $`\rho _c^{\mathrm{fc}}ϵ^{1/2}/(R^dϵ^{2-d/2})`$ at mean-field critical points and $`\rho _s^{\mathrm{fc}}\mathrm{\Delta }h^{1/2}/(R^d\mathrm{\Delta }h^{3/2-d/4})`$ at spinodals. These densities are good approximations in near-mean-field systems. We will refer to these clusters as fundamental clusters. These clusters are not the critical phenomena fluctuations, but are related to them. Spinodals mark the boundary between the metastable and unstable states. In near-mean-field systems the spinodal is not a sharp singularity but becomes a smeared-out region associated with singularities in complex temperature and magnetic field space. 
As the spinodal is approached so is the limit of metastability. Hence, we would expect that nucleation events, which form another class of clusters, also play a role in the CA model. From the Langevin approach we find that the nucleation clusters are local regions of growth of the stable high stress phase in the metastable low stress phase. An earthquake represents the stress release due to the decay of the high stress phase into the metastable low stress phase. Because the nucleation phenomena of interest occur near the spinodal, the classical picture is not valid. Instead, a calculation of the nucleation rate must include the effect of the spinodal which involves a vanishing of the surface tension. With these considerations the nucleation rate, which is proportional to the number $`n`$ of clusters per unit volume, is given by $$n\frac{\mathrm{\Delta }h^{1/2}\mathrm{exp}(-AR^d\mathrm{\Delta }h^{3/2-d/4})}{R^d\xi ^d},$$ (1) where $`A`$ is a constant independent of $`R`$ and $`\mathrm{\Delta }h`$. The nucleation rate in Eq. (1) contains an exponential term whose argument is the nucleation barrier. The static prefactor, which is independent of the dynamics of the model, is $`1/\xi ^d`$, where $`\xi =R\mathrm{\Delta }h^{-1/4}`$ is the correlation length near the spinodal. The $`\mathrm{\Delta }h^{1/2}`$ term is the kinetic prefactor and is dynamics specific. For the CA model the distance from the spinodal is measured by the amount of stress dissipated, i.e., $`\mathrm{\Delta }hK_L/K`$. Finally, the extra factor of $`R^d`$ in the denominator reflects the fact that the theory employs a coarse grained time scale, but our simulations use a time scale based on plate updates. Because the coarse graining time is proportional to the coarse graining volume $`R^d`$, this extra factor is included in the nucleation rate. Our assumption is that the CA model behaves like an Ising model near the spinodal for mean-field and near-mean-field systems. 
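The sharpness of the crossover implied by the exponential barrier in Eq. (1) is easy to see numerically. The sketch below evaluates the rate, with $`\xi =R\mathrm{\Delta }h^{-1/4}`$ and an arbitrary illustrative choice $`A=1`$ (the constant is not specified in the text):

```python
import math

def nucleation_rate(dh, R, d=2, A=1.0):
    """Nucleation rate of Eq. (1), up to an overall constant:
    prefactor dh^{1/2} / (R^d xi^d) with xi = R * dh^{-1/4},
    times exp(-A R^d dh^{3/2 - d/4}).  A = 1 is an assumption."""
    xi = R * dh ** (-0.25)
    barrier = A * (R ** d) * dh ** (1.5 - d / 4.0)
    return (dh ** 0.5) * math.exp(-barrier) / (R ** d * xi ** d)
```

For the simulation parameters ($`R=30`$, $`d=2`$) the barrier grows linearly in $`\mathrm{\Delta }h`$ with a large coefficient $`R^d`$, so the rate changes by orders of magnitude over a small interval of $`\mathrm{\Delta }h`$ -- the numerical counterpart of the statement below that essentially all nucleation events occur at a fixed barrier value.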
In this limit, fundamental clusters and nucleation events, which involve coalescence of fundamental clusters, are the only clusters. Because it is the decay of the high stress clusters that constitutes the “earthquakes” in this model, cluster scaling and earthquake scaling are the same. To understand how this point of view provides answers to the questions posed in the introduction we discuss the fundamental clusters. In mean-field each block fails at the failure threshold and fails only once during an earthquake. This behavior is an excellent approximation in near-mean-field. The amount $`\mathrm{\Delta }\sigma `$ of stress transmitted to a site during the failure of a cluster is proportional to the number of cluster sites in the interaction volume $`R^d`$, which is $`\rho _s^{\mathrm{fc}}R^d`$ because we are near the spinodal, times the fraction of stress transmitted from a failed block, which is proportional to $`R^{-d}`$. Hence, $`\mathrm{\Delta }\sigma \rho _s^{\mathrm{fc}}`$ during the failure of a fundamental cluster, and the mean size of the fundamental cluster is $`s=\rho _s^{\mathrm{fc}}\xi ^d=\mathrm{\Delta }h^{-1}`$. The number of fundamental clusters per unit volume is $`n_{\mathrm{fc}}=R^d\mathrm{\Delta }h^{3/2-d/4}/\xi ^d=\mathrm{\Delta }h^{3/2}`$. Therefore, the density of fundamental clusters with $`s`$ blocks scales as $`n_{\mathrm{fc}}1/s^{3/2}`$. To identify the fundamental clusters we examine the stresses on the blocks that make up a cluster of failed sites and determine the minimum stress, $`\sigma _{\mathrm{min}}`$, of the failing sites prior to failure, but after the update of the loader plate that triggers the event. We record only those events for which $`\sigma _{\mathrm{min}}`$ is within the window $`[\sigma ^F-\mathrm{\Delta }\sigma ,\sigma ^F]`$. In Fig. 2 we plot the log of the number of fundamental clusters versus log $`s`$. For the chosen parameters, $`\mathrm{\Delta }h=0.01`$ so that $`\mathrm{\Delta }\sigma \approx 0.01`$. 
The slope of 1.53 is consistent with the theoretical prediction. The mean size $`s=\mathrm{\Delta }h^{-1}=100`$ is also consistent with our data. Note the lack of data spread. In Figs. 1 and 2 the fundamental clusters make up only the small $`s`$ end of the scaling plot, but the fundamental clusters comprise $`17\times 10^6`$ out of $`18\times 10^6`$ events. Hence, a simulation of earthquake faults will require huge numbers of events to probe the statistics of the interesting and important large event region. We now consider the nucleation events and their clusters. The size of the nucleation events depends on several factors that determine precisely when a given high stress nucleation event will stop growing. We will concentrate on two regimes. The first is events near the top of the saddle point hill associated with the barrier between the stable and metastable state. The reason we neglect clusters on all scales between the fundamental clusters and the saddle point clusters is that Ising model studies have found no clusters in this intermediate region. Another aspect of nucleation events of this kind in Ising systems is that one must get very close to the spinodal to observe critical slowing down because the saddle point hill appears to be high but not very flat until the system is very close to the spinodal. The absence of critical slowing down near the saddle point also has been seen in the CA model. Hence, there is a class of nucleation events that do not quite reach the top of the saddle point hill. As a result, random fluctuations lead to the decay of these clusters back to the metastable phase. We call these clusters failing nucleation events. The probability of these events is characterized by a saddle point calculation without the kinetic prefactor. From these considerations and Eq. (1), the mean number of failing nucleation events per unit volume is $`n_{\mathrm{fn}}\mathrm{exp}(-AR^d\mathrm{\Delta }h^{3/2-d/4})/\xi ^d`$, the same as Eq. 
(1) without the kinetic prefactor. To obtain predictions for the scaling regime two results are needed. The first is that $`s_{\mathrm{fn}}=\mathrm{\Delta }h^{1/2}\xi ^d`$, where $`\mathrm{\Delta }h^{1/2}`$ is the density of the nucleating cluster. Second, because the exponential is a rapidly increasing function of its argument, as $`\mathrm{\Delta }h`$ decreases, the probability of a cluster increases from almost zero to a relatively large number over a very short interval of $`\mathrm{\Delta }h`$. The value of $`\mathrm{\Delta }h`$ where this crossover occurs is the limit of metastability. For this reason essentially all of the nucleation events take place at a fixed value of $`AR^d\mathrm{\Delta }h^{3/2-d/4}=C`$. As for the fundamental clusters, the stress transfer to a site in a nucleation event is equal to the density of the event. For our parameters the density is $`\mathrm{\Delta }h^{1/2}=0.1`$. Hence we identify these nucleating clusters by selecting only those events whose $`\sigma _{\mathrm{min}}`$ falls within the window $`[0.90,0.99]`$. The size of the event is $`s_{\mathrm{fn}}=\mathrm{\Delta }h^{1/2}\xi ^d\approx 1000`$ and the number of these events is $`n_{\mathrm{fn}}e^{-C}/\xi ^d`$. Using the relation between $`R`$ and $`\mathrm{\Delta }h`$ implied by a fixed value of $`C`$, we find that $`n_{\mathrm{fn}}`$ scales as $`1/s_{\mathrm{fn}}^{3/2}`$. In Fig. 2 we show a log-log plot of $`n_{\mathrm{fn}}`$ versus $`s_{\mathrm{fn}}`$. The slope and mean size are consistent with our predictions. Finally, we consider a second class of nucleation events. These are the events that have made it to the top and over the saddle point hill and have become arrested during their growth phase, i.e., after growing to some size the high stress nucleation region decays back to the low stress metastable state. 
Because the clusters have made it to the top of the saddle point hill, this decay of the high stress phase is no longer induced by random fluctuations: it appears in the Langevin approach due to a decreasing loader plate velocity on the coarse grained scale which pulls the system away from the spinodal. We will call these clusters arrested nucleation events. Because these clusters experience critical slowing down, their number per unit volume is given by Eq. (1). A key feature in the growth of nucleation events near the spinodal is that their initial growth is a filling in, and hence these clusters are compact, that is $`s_{\mathrm{an}}\xi ^d=R^d\mathrm{\Delta }h^{-d/4}`$. The density is of order unity so that we will identify these events with those clusters whose minimum stress of the failing blocks obeys the condition $`\sigma _{\mathrm{min}}<0.90`$. Using the same arguments as for the failing nucleation events, we find that their mean size is about $`10^4`$ and the slope of the scaling plot is predicted to be 2. The data presented in Fig. 2 is consistent with these predictions. In summary, these theoretical considerations and numerical results strongly suggest several important points. (1) Earthquake fault models are statistically dominated by small, and in the case of real earthquakes, uninteresting events. (2) The large and small events have different physical mechanisms. (3) The scaling regime is composed of events with two different power law distributions, which accounts for the data spread at the large events end of the scaling plot in Fig. 1. (4) Note that there is still a spread in the data at the large events end of Fig. 2 and that these events do not scale with a slope of 2. Numerical investigation indicates that these are “breakout” events that are generated by the spatial coalescence of arrested nucleation events and are beyond the assumptions of our present theoretical treatment. 
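The stress windows quoted in the text suggest a simple classifier for sorting measured events into the three scaling classes by their minimum pre-failure stress. The boundaries below are the specific values used in the text for $`\mathrm{\Delta }h=0.01`$ and $`\sigma ^F=1`$, not universal constants:

```python
def classify_event(sigma_min, dh=0.01, sigma_F=1.0):
    """Classify an event by the minimum pre-failure stress of its
    failing blocks, using the windows quoted in the text for the
    simulation parameters (Delta h = 0.01)."""
    if sigma_min >= sigma_F - dh:        # window [0.99, 1.00]
        return "fundamental"
    if sigma_min >= 0.90:                # window [0.90, 0.99)
        return "failing nucleation"
    return "arrested nucleation"         # sigma_min < 0.90
```

Applied to a catalog of simulated events, such a classifier separates the three power-law families whose superposition produces the spread at the large-event end of the scaling plot.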
That is, as the arrested high stress cluster decays, it releases stress into the surrounding system. If, due to past history, the stress field is unstable, this stress release can lead to runaway failure. This type of event was considered in Ref. and is a fourth mechanism that must be considered in the generation of earthquakes. In contrast, the nucleation events are generated by the coalescence of overlapping fundamental clusters occupying the same region of volume $`\xi ^d`$. We would like to acknowledge useful conversations with F. J. Alexander and H. Gould. The work of M. A. and W. K. was supported by DOE DE-FG02-95ER14498, and that of J. B. R. and J. S. S. M. by DOE DE-FG03-95ER14499. This work, LA-UR-00-0740, was also partially supported by the Department of Energy under contract W-7405.
# Phase Transitions in Confined Antiferromagnets ## ACKNOWLEDGMENTS This work was sponsored by Consejo Nacional de Ciencia y Tecnología (CONACyT), Mexico, through grant G-25851-E. AD-O gratefully acknowledges the financial support from CONACyT through the Post Doctoral Fellowships Program.
# Yang-Mills Instantons in the Gravitational Instanton Backgrounds ## Abstract The simplest and the most straightforward new algorithm for generating solutions to (anti) self-dual Yang-Mills (YM) equation in the typical gravitational instanton backgrounds is proposed. When applied to the Taub-NUT and the Eguchi-Hanson metrics, the two best-known gravitational instantons, the solutions turn out to be the rather exotic type of instanton configurations carrying finite YM action but generally fractional topological charge values. preprint: July, 2000 Well below the Planck scale, the strength of gravity is negligibly small relative to those of particle physics interactions described by non-abelian gauge theories. Nevertheless, as far as the topological aspect is concerned, gravity may have marked effects even at the level of elementary particle physics. Namely, the non-trivial topology of the gravitational field may play a role crucial enough to dictate the topological properties of, say, SU(2) Yang-Mills (YM) gauge field as has been pointed out long ago . Being an issue of great physical interest and importance, quite a few serious studies along this line have appeared in the literature but they were restricted to background gravitational fields with a high degree of isometry such as the Euclideanized Schwarzschild geometry or the Euclidean de Sitter space . Even the works involving more general background spacetimes including gravitational instantons (GI) were mainly confined to the case of asymptotically-locally-Euclidean (ALE) spaces, which is one particular such GI, and employed rather indirect and mathematically-oriented solution-generating methods such as the ADHM construction . Here in this work we would like to propose a “simply physical” and hence perhaps the most direct algorithm for generating the YM instanton solutions in all species of known GI. 
And the essence of this method lies in writing the (anti) self-dual YM equation by employing a truly relevant ansatz for the YM gauge connection and then directly solving it. To demonstrate how simple in method and powerful in applicability it is, we then apply this algorithm to the case of the Taub-NUT and the Eguchi-Hanson metrics, the two best-known GI. In particular, the actual YM instanton solution in the background of the Taub-NUT metric (which is asymptotically-locally-flat (ALF) rather than ALE) is constructed for the first time in this work although its existence has been anticipated long ago in . Interestingly, the solutions to the (anti) self-dual YM equation turn out to be a rather exotic type of instanton configuration which is everywhere non-singular, having finite YM action, but shares some features with meron solutions such as their typical structure and the generally fractional topological charge values carried by them. Namely, the YM instanton solutions that we shall discuss in the background of GI in this work exhibit characteristics which are a mixture of those of the typical instanton and the typical meron. This seems remarkable since it is well-known that in flat spacetime, a meron does not solve the first-order (anti) self-dual equation although it does solve the second-order YM field equation, and it is singular at its center and has divergent action. In the loose sense, GI may be defined as positive-definite metrics $`g_{\mu \nu }`$ on a complete and non-singular manifold satisfying the Euclidean Einstein equations and hence constituting the stationary points of the gravity action in the Euclidean path integral for quantum gravity. 
But in the stricter sense, they are the metric solutions to the Euclidean Einstein equations having (anti) self-dual Riemann tensor $`\stackrel{~}{R}_{abcd}={\displaystyle \frac{1}{2}}ϵ_{ab}^{ef}R_{efcd}=\pm R_{abcd}`$ (1) (say, with indices written in a non-coordinate orthonormal basis) and include only two families of solutions in a rigorous sense: the Taub-NUT metric and the Eguchi-Hanson instanton . In the loose sense, however, there are several solutions to Euclidean Einstein equations that can fall into the category of GI. Thus we begin with the action governing our system, i.e., the Einstein-Yang-Mills (EYM) theory given by $`I_{EYM}={\displaystyle \int _M}d^4x\sqrt{g}\left[-{\displaystyle \frac{1}{16\pi }}R+{\displaystyle \frac{1}{4g_c^2}}F_{\mu \nu }^aF^{a\mu \nu }\right]-{\displaystyle \int _{M}}d^3x\sqrt{h}{\displaystyle \frac{1}{8\pi }}K`$ (2) where $`F_{\mu \nu }^a`$ is the field strength of the YM gauge field $`A_\mu ^a`$ with $`a=1,2,3`$ being the SU(2) group index and $`g_c`$ being the gauge coupling constant. The Gibbons-Hawking term on the boundary $`M`$ of the manifold $`M`$ is also added and $`h`$ is the metric induced on $`M`$ and $`K`$ is the trace of the second fundamental form on $`M`$. 
Then by extremizing this action with respect to the metric $`g_{\mu \nu }`$ and the YM gauge field $`A_\mu ^a`$, one gets the following classical field equations respectively $`R_{\mu \nu }-{\displaystyle \frac{1}{2}}g_{\mu \nu }R=8\pi T_{\mu \nu },`$ (3) $`T_{\mu \nu }={\displaystyle \frac{1}{g_c^2}}\left[F_{\mu \alpha }^aF_\nu ^{a\alpha }-{\displaystyle \frac{1}{4}}g_{\mu \nu }(F_{\alpha \beta }^aF^{a\alpha \beta })\right],`$ (4) $`D_\mu \left[\sqrt{g}F^{a\mu \nu }\right]=0,D_\mu \left[\sqrt{g}\stackrel{~}{F}^{a\mu \nu }\right]=0`$ (5) where we added the Bianchi identity in the last line and $`F_{\mu \nu }^a=\partial _\mu A_\nu ^a-\partial _\nu A_\mu ^a+ϵ^{abc}A_\mu ^bA_\nu ^c`$, $`D_\mu ^{ac}=\partial _\mu \delta ^{ac}+ϵ^{abc}A_\mu ^b`$ and $`A_\mu =A_\mu ^a(iT^a)`$, $`F_{\mu \nu }=F_{\mu \nu }^a(iT^a)`$ with $`T^a=\tau ^a/2`$ ($`a=1,2,3`$) being the SU(2) generators and finally $`\stackrel{~}{F}_{\mu \nu }=\frac{1}{2}ϵ_{\mu \nu }^{\alpha \beta }F_{\alpha \beta }`$ is the (Hodge) dual of the field strength tensor. We now seek solutions ($`g_{\mu \nu }`$, $`A_\mu ^a`$) of the coupled EYM equations given above in Euclidean signature obeying the (anti) self-dual equation in the YM sector $`F^{\mu \nu }=g^{\mu \lambda }g^{\nu \sigma }F_{\lambda \sigma }=\pm {\displaystyle \frac{1}{2}}ϵ_c^{\mu \nu \alpha \beta }F_{\alpha \beta }`$ (6) where $`ϵ_c^{\mu \nu \alpha \beta }=ϵ^{\mu \nu \alpha \beta }/\sqrt{g}`$ is the curved spacetime version of the totally antisymmetric tensor. As was noted in , in Euclidean signature, the YM energy-momentum tensor vanishes identically for YM fields satisfying this (anti) self-duality condition. This point is of central importance and can be illustrated briefly as follows. Under the Hodge dual transformation, $`F_{\mu \nu }^a\stackrel{~}{F}_{\mu \nu }^a`$, the YM energy-momentum tensor $`T_{\mu \nu }`$ given in eq.(3) above is invariant normally in Lorentzian signature. In Euclidean signature, however, its sign flips, i.e., $`\stackrel{~}{T}_{\mu \nu }=-T_{\mu \nu }`$. 
As a result, for YM fields satisfying the (anti) self-dual equation in Euclidean signature, such as the instanton solution $`F_{\mu \nu }^a=\pm \stackrel{~}{F}_{\mu \nu }^a`$, it follows that $`T_{\mu \nu }=\stackrel{~}{T}_{\mu \nu }=-T_{\mu \nu }`$, namely the YM energy-momentum tensor vanishes identically, $`T_{\mu \nu }=0`$. This indicates that the YM field does not disturb the geometry, while the geometry still has effects on the YM field. Consequently the geometry, which is left intact by the YM field, effectively serves as a “background” spacetime which can be chosen somewhat at will (as long as it satisfies the vacuum Einstein equation $`R_{\mu \nu }=0`$), and here in this work we take it to be the gravitational instanton. Loosely speaking, all the typical GI, including the Taub-NUT metric and the Eguchi-Hanson solution, possess the same topology $`R\times S^3`$ and similar metric structures. Of course, in a stricter sense their exact topologies can be distinguished, say, by different Euler numbers and Hirzebruch signatures. 
Particularly, in terms of the concise basis 1-forms, the metrics of these GI can be written as $`ds^2`$ $`=`$ $`c_r^2dr^2+c_1^2\left(\sigma _1^2+\sigma _2^2\right)+c_3^2\sigma _3^2`$ (7) $`=`$ $`c_r^2dr^2+{\displaystyle \sum _{a=1}^3}c_a^2\left(\sigma ^a\right)^2=e^A\otimes e^A`$ (8) where $`c_r=c_r(r)`$, $`c_a=c_a(r)`$, $`c_1=c_2\ne c_3`$ and the orthonormal basis 1-form $`e^A`$ is given by $`e^A=\left\{e^0=c_rdr,e^a=c_a\sigma ^a\right\}`$ (9) and $`\left\{\sigma ^a\right\}`$ ($`a=1,2,3`$) are the left-invariant 1-forms satisfying the SU(2) Maurer-Cartan structure equation $`d\sigma ^a=-{\displaystyle \frac{1}{2}}ϵ^{abc}\sigma ^b\wedge \sigma ^c.`$ (10) They form a basis on the $`S^3`$ section of the geometry and hence can be represented in terms of the three Euler angles $`0\le \theta \le \pi `$, $`0\le \varphi \le 2\pi `$, and $`0\le \psi \le 4\pi `$ parametrizing $`S^3`$ as $`\sigma ^1`$ $`=`$ $`\mathrm{sin}\psi d\theta +\mathrm{cos}\psi \mathrm{sin}\theta d\varphi ,`$ (11) $`\sigma ^2`$ $`=`$ $`\mathrm{cos}\psi d\theta -\mathrm{sin}\psi \mathrm{sin}\theta d\varphi ,`$ (12) $`\sigma ^3`$ $`=`$ $`d\psi -\mathrm{cos}\theta d\varphi .`$ (13) Now, in order to construct exact YM instanton solutions in the background of these GI, we choose the relevant ansatz for the YM gauge potential and the SU(2) gauge fixing. In doing so, our general guideline is that the YM gauge field ansatz should be endowed with the symmetry inherited from that of the background geometry, the GI. Thus we first ask what kind of isometry these GI possess. As noted above, typical GI, including the Taub-NUT and the Eguchi-Hanson metrics, possess the topology of $`R\times S^3`$. The geometrical structure of the $`S^3`$ section, however, is not that of a perfectly “round” $`S^3`$ but rather that of a “squashed” $`S^3`$. 
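The Maurer-Cartan structure equation can be verified by brute-force exterior calculus. The sketch below (ours) uses one common sign convention, $`\sigma ^1=\mathrm{sin}\psi d\theta +\mathrm{cos}\psi \mathrm{sin}\theta d\varphi `$, $`\sigma ^2=\mathrm{cos}\psi d\theta -\mathrm{sin}\psi \mathrm{sin}\theta d\varphi `$, $`\sigma ^3=d\psi -\mathrm{cos}\theta d\varphi `$ with $`d\sigma ^a=-\frac{1}{2}ϵ^{abc}\sigma ^b\wedge \sigma ^c`$; other, equally consistent sign choices appear in the literature.

```python
import sympy as sp
from itertools import product

th, ph, ps = sp.symbols('theta phi psi')
coords = (th, ph, ps)

# components of sigma^a in the coordinate basis (d theta, d phi, d psi)
sigma = [
    [sp.sin(ps),  sp.cos(ps)*sp.sin(th), 0],   # sigma^1
    [sp.cos(ps), -sp.sin(ps)*sp.sin(th), 0],   # sigma^2
    [0,          -sp.cos(th),            1],   # sigma^3
]

def d(w):        # exterior derivative of a 1-form: (dw)_{ij} = di w_j - dj w_i
    return sp.Matrix(3, 3, lambda i, j: sp.diff(w[j], coords[i]) - sp.diff(w[i], coords[j]))

def wedge(u, v):  # (u ^ v)_{ij} = u_i v_j - u_j v_i
    return sp.Matrix(3, 3, lambda i, j: u[i]*v[j] - u[j]*v[i])

for a in range(3):
    rhs = sum((sp.Rational(-1, 2)*sp.LeviCivita(a, b, c)*wedge(sigma[b], sigma[c])
               for b, c in product(range(3), repeat=2)), sp.zeros(3, 3))
    assert sp.simplify(d(sigma[a]) - rhs) == sp.zeros(3, 3)
print("d sigma^a = -(1/2) eps^{abc} sigma^b ^ sigma^c verified")
```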
In order to get a closer picture of this squashed $`S^3`$, we notice that the $`r=`$constant slices of these GI can be viewed as U(1) fibre bundles over $`S^2\simeq CP^1`$ with the line element $`d\mathrm{\Omega }_3^2=c_1^2\left(\sigma _1^2+\sigma _2^2\right)+c_3^2\sigma _3^2=c_1^2d\mathrm{\Omega }_2^2+c_3^2\left(d\psi +B\right)^2`$ (14) where $`d\mathrm{\Omega }_2^2=(d\theta ^2+\mathrm{sin}^2\theta d\varphi ^2)`$ is the metric on the unit $`S^2`$, the base manifold, whose volume form $`\mathrm{\Omega }_2`$ is given by $`\mathrm{\Omega }_2=dB`$ with $`B=-\mathrm{cos}\theta d\varphi `$, and $`\psi `$ then is the coordinate on the U(1)$`\simeq S^1`$ fibre manifold. Now the fact that $`c_1=c_2\ne c_3`$ indicates that the geometry of this fibre bundle manifold is not that of a round $`S^3`$ but that of a squashed $`S^3`$, with the squashing factor given by $`(c_3/c_1)`$; further, it is squashed along the U(1) fibre direction. This failure of the geometry to be exactly a round $`S^3`$ keeps us from writing down the associated ansatz for the YM gauge potential right away. Apparently, if the geometry were that of a round $`S^3`$, one would write down the YM gauge field ansatz as $`A^a=f(r)\sigma ^a`$, with $`\{\sigma ^a\}`$ being the left-invariant 1-forms introduced earlier. The rationale for this choice can be stated briefly as follows. First, since the $`r=`$constant sections of the background space have the geometry of a round $`S^3`$ and hence possess the SO(4) isometry, one would look for an SO(4)-invariant YM gauge connection ansatz as well. Next, noticing that both the $`r=`$constant sections of the frame manifold and the SU(2) YM group manifold possess the geometry of a round $`S^3`$, one may naturally choose the left-invariant 1-forms $`\{\sigma ^a\}`$ as the “common” basis for both manifolds. Thus this YM gauge connection ansatz, $`A^a=f(r)\sigma ^a`$, can be thought of as a hedgehog-type ansatz in which the group-frame index mixing is realized in a simple manner. 
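The distinction can be made quantitative: the squashing factor $`(c_3/c_1)`$ tends to zero for the Taub-NUT metric (the $`S^1`$ fibre approaches the fixed radius $`m`$ while the base $`S^2`$ keeps growing, the ALF behavior) and to one for the Eguchi-Hanson metric (asymptotically round, the ALE behavior). A small check (ours), using the metric functions quoted later in the text:

```python
import sympy as sp

r, m, a = sp.symbols('r m a', positive=True)

# Taub-NUT: c1 = sqrt(r^2 - m^2)/2, c3 = m sqrt((r-m)/(r+m))
sq_TN = m*sp.sqrt((r - m)/(r + m))/(sp.sqrt(r**2 - m**2)/2)
# Eguchi-Hanson: c1 = r/2, c3 = (r/2) sqrt(1 - (a/r)^4)
sq_EH = sp.sqrt(1 - (a/r)**4)

assert sp.simplify(sq_TN**2 - (2*m/(r + m))**2) == 0        # squashing factor = 2m/(r+m)
assert sp.limit(sq_TN, r, sp.oo) == 0                       # ALF: fibre/base ratio -> 0
assert sp.limit(m*sp.sqrt((r - m)/(r + m)), r, sp.oo) == m  # S^1 fibre radius -> m
assert sp.limit(sq_EH, r, sp.oo) == 1                       # ALE: asymptotically round
print("TN squashes completely at infinity (ALF); EH becomes round (ALE)")
```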
Then, coming back to our present interest, namely the GI given in eq.(5): in the $`r=`$constant sections, the SO(4) isometry is partially broken down to SO(3) by the squashedness along the U(1) fibre direction, to a degree set by the squashing factor $`(c_3/c_1)`$. Our task is thus clearer: it is to encode into the YM gauge connection ansatz this particular type of SO(4)-isometry breaking coming from the squashed $`S^3`$. Interestingly, a clue to this puzzle can be drawn from the work of Eguchi and Hanson, in which they constructed an abelian instanton solution in the Euclidean Taub-NUT metric (namely, an abelian gauge field with (anti) self-dual field strength with respect to this metric). To get right to the point, the working ansatz they employed for the abelian gauge field to yield (anti) self-dual field strength is to align the abelian gauge connection 1-form along the squashed direction, i.e., along the U(1) fibre direction, $`A=g(r)\sigma ^3`$. This choice looks quite natural indeed. After all, realizing that embedding a gauge field in a geometry with a high degree of isometry is itself an isometry (more precisely, isotropy) breaking action, it would be natural to put it along the direction in which part of the isometry is already broken. Finally, therefore, putting these two pieces of observation carefully together, we are in a position to suggest the relevant ansatz for the YM gauge connection 1-form in these GI, and it is $`A^a=f(r)\sigma ^a+g(r)\delta ^{a3}\sigma ^3`$ (15) which needs no further explanatory comments, except that in this choice of the ansatz it is implicitly understood that the gauge fixing $`A_r=0`$ is taken. From this point on, the construction of the YM instanton solutions by solving the (anti)self-dual equation given in eq.(4) is straightforward. 
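Before specializing to particular backgrounds, one can verify by brute-force computation in coordinates that this ansatz, together with the Maurer-Cartan relations, produces a field strength whose coefficients involve only the combinations $`f^{\prime }`$, $`f[(f-1)+g]`$, $`f^{\prime }+g^{\prime }`$ and $`f(f-1)-g`$. The sketch below (ours) uses $`F_{\mu \nu }^a=\partial _\mu A_\nu ^a-\partial _\nu A_\mu ^a+ϵ^{abc}A_\mu ^bA_\nu ^c`$ and one common sign convention for the $`\sigma ^a`$; the coefficient structure, though not individual signs, is convention independent.

```python
import sympy as sp
from itertools import product

r, th, ph, ps = sp.symbols('r theta phi psi')
coords = (r, th, ph, ps)
f = sp.Function('f')(r)
g = sp.Function('g')(r)

# 1-form components in the coordinate basis (dr, d theta, d phi, d psi)
s1 = [0, sp.sin(ps),  sp.cos(ps)*sp.sin(th), 0]
s2 = [0, sp.cos(ps), -sp.sin(ps)*sp.sin(th), 0]
s3 = [0, 0,          -sp.cos(th),            1]
sig = [s1, s2, s3]
drf = [1, 0, 0, 0]

# A^a = f sigma^a + g delta^{a3} sigma^3 (gauge A_r = 0 built in)
A = [[f*sig[a][mu] + (g*sig[2][mu] if a == 2 else 0) for mu in range(4)]
     for a in range(3)]

def wedge(u, v):
    return sp.Matrix(4, 4, lambda i, j: u[i]*v[j] - u[j]*v[i])

F = [sp.Matrix(4, 4, lambda i, j:
     sp.diff(A[a][j], coords[i]) - sp.diff(A[a][i], coords[j])
     + sum(sp.LeviCivita(a, b, c)*A[b][i]*A[c][j]
           for b, c in product(range(3), repeat=2)))
     for a in range(3)]

fp, gp = sp.diff(f, r), sp.diff(g, r)
coef = f*((f - 1) + g)
expect = [fp*wedge(drf, s1) + coef*wedge(s2, s3),
          fp*wedge(drf, s2) + coef*wedge(s3, s1),
          (fp + gp)*wedge(drf, s3) + (f*(f - 1) - g)*wedge(s1, s2)]
for a in range(3):
    assert sp.simplify(F[a] - expect[a]) == sp.zeros(4, 4)
print("F^1 = f' dr^s1 + f[(f-1)+g] s2^s3 (and cyclic) confirmed")
```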
To sketch the computational algorithm briefly: first we obtain the YM field strength 2-form (in the orthonormal basis) via exterior calculus (since the YM gauge connection ansatz is given in left-invariant 1-forms) as $`F^a=(F^1,F^2,F^3)`$ where $`F^1`$ $`=`$ $`{\displaystyle \frac{f^{\prime }}{c_rc_1}}(e^0\wedge e^1)+{\displaystyle \frac{f[(f-1)+g]}{c_2c_3}}(e^2\wedge e^3),`$ (16) $`F^2`$ $`=`$ $`{\displaystyle \frac{f^{\prime }}{c_rc_2}}(e^0\wedge e^2)+{\displaystyle \frac{f[(f-1)+g]}{c_3c_1}}(e^3\wedge e^1),`$ (17) $`F^3`$ $`=`$ $`{\displaystyle \frac{(f^{\prime }+g^{\prime })}{c_rc_3}}(e^0\wedge e^3)+{\displaystyle \frac{[f(f-1)-g]}{c_1c_2}}(e^1\wedge e^2)`$ (18) from which we can read off the (anti) self-dual equation to be $`\pm {\displaystyle \frac{f^{\prime }}{c_rc_1}}={\displaystyle \frac{f[(f-1)+g]}{c_2c_3}},\pm {\displaystyle \frac{(f^{\prime }+g^{\prime })}{c_rc_3}}={\displaystyle \frac{[f(f-1)-g]}{c_1c_2}}`$ (19) where “$`+`$” stands for the self-dual and “$`-`$” for the anti-self-dual equation, and we have only a set of two equations since $`c_1=c_2`$. The specifics of different GI are characterized by particular choices of the orthonormal basis $`e^A=\{e^0=c_rdr,e^a=c_a\sigma ^a\}`$. Thus next, for each GI (i.e., for each choice of $`e^A`$), we solve the (anti)self-dual equation in (12) for the ansatz functions $`f(r)`$ and $`g(r)`$, and finally from these the YM instanton solutions in eq.(10) and their (anti) self-dual field strength in eq.(11) can be obtained. We now present the solutions obtained by applying the algorithm presented here to the two best-known GI, the Taub-NUT and the Eguchi-Hanson metrics. (I) YM instanton in the Taub-NUT (TN) metric background The TN GI solution written in the metric form given in eq.(5) amounts to $`c_r={\displaystyle \frac{1}{2}}\left[{\displaystyle \frac{r+m}{r-m}}\right]^{1/2},c_1=c_2={\displaystyle \frac{1}{2}}\left[r^2-m^2\right]^{1/2},c_3=m\left[{\displaystyle \frac{r-m}{r+m}}\right]^{1/2}`$ (20) and it is a solution to the Euclidean vacuum Einstein equation $`R_{\mu \nu }=0`$ for $`r\ge m`$ with self-dual Riemann tensor. 
The apparent singularity at $`r=m`$ can be removed by a coordinate redefinition and is a ‘nut’ (in the terminology of Gibbons and Hawking) at which the isometry generated by the Killing vector $`(\partial /\partial \psi )`$ has a zero-dimensional fixed point set. Moreover, this TN instanton is an asymptotically locally flat (ALF) metric. It turns out that only the anti-self-dual equation $`F^a=-\stackrel{~}{F}^a`$ admits a non-trivial solution, and it is $`A^a=(A^1,A^2,A^3)`$ where $`A^1=\pm 2{\displaystyle \frac{(r-m)^{1/2}}{(r+m)^{3/2}}}e^1,A^2=\pm 2{\displaystyle \frac{(r-m)^{1/2}}{(r+m)^{3/2}}}e^2,A^3={\displaystyle \frac{(r+3m)}{m}}{\displaystyle \frac{(r-m)^{1/2}}{(r+m)^{3/2}}}e^3`$ (21) and $`F^a=(F^1,F^2,F^3)`$ where $`F^1`$ $`=`$ $`\pm {\displaystyle \frac{8m}{(r+m)^3}}\left(e^0\wedge e^1-e^2\wedge e^3\right),F^2=\pm {\displaystyle \frac{8m}{(r+m)^3}}\left(e^0\wedge e^2-e^3\wedge e^1\right),`$ (22) $`F^3`$ $`=`$ $`{\displaystyle \frac{16m}{(r+m)^3}}\left(e^0\wedge e^3-e^1\wedge e^2\right).`$ (23) It is interesting to note that this YM field strength and the curvature 2-forms of the background TN GI are proportional, $`|F^a|=2|R_a^0|`$, except for opposite self-duality, i.e., $`R_1^0=R_3^2`$ $`=`$ $`{\displaystyle \frac{4m}{(r+m)^3}}\left(e^0\wedge e^1+e^2\wedge e^3\right),R_2^0=R_1^3={\displaystyle \frac{4m}{(r+m)^3}}\left(e^0\wedge e^2+e^3\wedge e^1\right),`$ (24) $`R_3^0=R_2^1`$ $`=`$ $`-{\displaystyle \frac{8m}{(r+m)^3}}\left(e^0\wedge e^3+e^1\wedge e^2\right).`$ (25) (II) YM instanton in the Eguchi-Hanson (EH) metric background The EH GI solution amounts to $`c_r=\left[1-\left({\displaystyle \frac{a}{r}}\right)^4\right]^{-1/2},c_1=c_2={\displaystyle \frac{1}{2}}r,c_3={\displaystyle \frac{1}{2}}r\left[1-\left({\displaystyle \frac{a}{r}}\right)^4\right]^{1/2}`$ (26) and again it is a solution to the Euclidean vacuum Einstein equation $`R_{\mu \nu }=0`$ for $`r\ge a`$ with self-dual Riemann tensor. 
$`r=a`$ is just a coordinate singularity that can be removed by a coordinate redefinition, provided that $`\psi `$ is now identified with period $`2\pi `$ rather than $`4\pi `$, and it is a ‘bolt’ (in the terminology of Gibbons and Hawking) where the action of the Killing field $`(\partial /\partial \psi )`$ has a two-dimensional fixed point set. Besides, this EH instanton is an asymptotically locally Euclidean (ALE) metric. This time, only the self-dual equation $`F^a=+\stackrel{~}{F}^a`$ admits a non-trivial solution, and it is $`A^a=(A^1,A^2,A^3)`$ where $`A^1=\pm {\displaystyle \frac{2}{r}}\left[1-\left({\displaystyle \frac{a}{r}}\right)^4\right]^{1/2}e^1,A^2=\pm {\displaystyle \frac{2}{r}}\left[1-\left({\displaystyle \frac{a}{r}}\right)^4\right]^{1/2}e^2,A^3={\displaystyle \frac{2}{r}}{\displaystyle \frac{\left[1+\left(\frac{a}{r}\right)^4\right]}{\left[1-\left(\frac{a}{r}\right)^4\right]^{1/2}}}e^3`$ (27) and $`F^a=(F^1,F^2,F^3)`$ where $`F^1`$ $`=`$ $`\pm {\displaystyle \frac{4}{r^2}}\left({\displaystyle \frac{a}{r}}\right)^4\left(e^0\wedge e^1+e^2\wedge e^3\right),F^2=\pm {\displaystyle \frac{4}{r^2}}\left({\displaystyle \frac{a}{r}}\right)^4\left(e^0\wedge e^2+e^3\wedge e^1\right),`$ (28) $`F^3`$ $`=`$ $`-{\displaystyle \frac{8}{r^2}}\left({\displaystyle \frac{a}{r}}\right)^4\left(e^0\wedge e^3+e^1\wedge e^2\right).`$ (29) Again it is interesting to realize that this YM field strength and the curvature 2-forms of the background EH GI are proportional, $`|F^a|=2|R_a^0|`$, i.e., $`R_1^0=R_3^2`$ $`=`$ $`{\displaystyle \frac{2}{r^2}}\left({\displaystyle \frac{a}{r}}\right)^4\left(e^0\wedge e^1+e^2\wedge e^3\right),R_2^0=R_1^3={\displaystyle \frac{2}{r^2}}\left({\displaystyle \frac{a}{r}}\right)^4\left(e^0\wedge e^2+e^3\wedge e^1\right),`$ (30) $`R_3^0=R_2^1`$ $`=`$ $`-{\displaystyle \frac{4}{r^2}}\left({\displaystyle \frac{a}{r}}\right)^4\left(e^0\wedge e^3+e^1\wedge e^2\right).`$ (31) It is also interesting to note that this YM instanton solution, particularly in the EH background (which is ALE), obtained by directly solving the self-dual equation, can also be “constructed” by 
simply identifying $`A^a=\pm 2\omega _a^0`$ (where $`\omega _a^0=(ϵ_{abc}/2)\omega ^{bc}`$ are the spin connections of the EH metric) and hence $`F^a=\pm 2R_a^0`$, as was noticed earlier in the string theory context with a different motivation. This construction of the solution via a simple identification of the gauge connection with the spin connection, however, works only in ALE backgrounds such as the EH metric and generally fails, as is manifest in the previous TN background case (which is ALF, not ALE), in which $`A^a\ne \pm 2\omega _a^0`$ but still $`F^a=\pm 2R_a^0`$. Thus the method presented here, namely first writing down and then directly solving the (anti) self-dual equation (employing the relevant ansatz for the YM gauge connection given in eq.(10)), appears to be an algorithm of general applicability, generating the solutions in all species of GI in a secure and straightforward manner. Indeed, a detailed and comprehensive coverage of the YM instanton solutions in all the other GI based on the algorithm presented in this work will be reported elsewhere; it will show how simple, albeit powerful, this method really is. In this regard, the method proposed here for generating YM instanton solutions of the (anti) self-dual equation in all known GI backgrounds can be contrasted with earlier works in the literature discussing the construction of YM instantons, mainly in the background of ALE GI, via indirect methods such as that of ADHM. Having constructed explicit YM instanton solutions in the TN and EH GI, we now turn to the physical interpretation of the structure of these SU(2) YM instantons supported by the two typical GI. Recall that the relevant ansatz for the YM gauge connection is of the form $`A^a=f(r)\sigma ^a`$ in background geometries such as the de Sitter GI with topology $`R\times (\mathrm{round})S^3`$, and of the form $`A^a=f(r)\sigma ^a+g(r)\delta ^{a3}\sigma ^3`$ in the less symmetric GI backgrounds with topology $`R\times (\mathrm{squashed})S^3`$. 
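The statements above are easy to check symbolically. The sketch below (ours) substitutes the ansatz functions read off from the quoted solutions, $`f=(r-m)/(r+m)`$, $`f+g=(r+3m)(r-m)/(r+m)^2`$ for TN and $`f=[1-(a/r)^4]^{1/2}`$, $`f+g=1+(a/r)^4`$ for EH, into the (anti) self-dual equations, and then tests the identification $`A^a=2\omega ^{0a}`$ using the standard diagonal Bianchi IX formula $`\omega ^{0a}=(c_a^{\prime }/c_rc_a)e^a`$ (an assumption of this sketch, not spelled out in the text):

```python
import sympy as sp

r, m, a = sp.symbols('r m a', positive=True)

def residuals(cr, c1, c3, f, g, sign):
    # sign = +1: self-dual branch; sign = -1: anti-self-dual branch (c2 = c1)
    e1 = sign*sp.diff(f, r)/(cr*c1) - f*((f - 1) + g)/(c1*c3)
    e2 = sign*sp.diff(f + g, r)/(cr*c3) - (f*(f - 1) - g)/(c1*c1)
    return e1, e2

# --- Taub-NUT (anti-self-dual) ---
cr_TN = sp.sqrt((r + m)/(r - m))/2
c1_TN = sp.sqrt(r**2 - m**2)/2
c3_TN = m*sp.sqrt((r - m)/(r + m))
f_TN = (r - m)/(r + m)
g_TN = (r + 3*m)*(r - m)/(r + m)**2 - f_TN
e1, e2 = residuals(cr_TN, c1_TN, c3_TN, f_TN, g_TN, -1)
for rv in (1.5, 2.0, 7.3):   # numeric spot checks (needs r > m)
    assert abs(float(e1.subs({m: 1, r: rv}))) < 1e-12
    assert abs(float(e2.subs({m: 1, r: rv}))) < 1e-12

# --- Eguchi-Hanson (self-dual) ---
x = (a/r)**4
cr_EH, c1_EH, c3_EH = 1/sp.sqrt(1 - x), r/2, (r/2)*sp.sqrt(1 - x)
f_EH = sp.sqrt(1 - x)
g_EH = (1 + x) - f_EH
e1, e2 = residuals(cr_EH, c1_EH, c3_EH, f_EH, g_EH, +1)
assert sp.simplify(e1) == 0 and sp.simplify(e2) == 0

# --- A^a = 2 omega^{0a}: holds for EH but fails for TN ---
conn = lambda cr, ca: sp.diff(ca, r)/(cr*ca)
assert sp.simplify(f_EH/c1_EH - 2*conn(cr_EH, c1_EH)) == 0          # A^1 = 2 w^{01}
assert sp.simplify((f_EH + g_EH)/c3_EH - 2*conn(cr_EH, c3_EH)) == 0  # A^3 = 2 w^{03}
gap = (f_TN/c1_TN - 2*conn(cr_TN, c1_TN)).subs({m: 1, r: 3.0})
assert abs(float(gap)) > 1e-6
print("TN/EH solutions verified; A = 2*omega only in the EH (ALE) case")
```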
Thus, in order to get some insight into the physical meaning of the structure of these YM connection ansatz forms, we now try to re-express the left-invariant 1-forms $`\{\sigma ^a\}`$ forming a basis on $`S^3`$ in terms of the more familiar Cartesian coordinate basis. Utilizing the coordinate transformation from polar $`(r,\theta ,\varphi ,\psi )`$ to Cartesian $`(t,x,y,z)`$ coordinates (note here that $`t`$ is not the usual “time” but just another spacelike coordinate) given by $`x+iy=r\mathrm{cos}{\displaystyle \frac{\theta }{2}}e^{\frac{i}{2}(\psi +\varphi )},z+it=r\mathrm{sin}{\displaystyle \frac{\theta }{2}}e^{\frac{i}{2}(\psi -\varphi )},`$ (32) where $`x^2+y^2+z^2+t^2=r^2`$, and further introducing the so-called ’t Hooft tensor defined by $`\eta ^{a\mu \nu }=-\eta ^{a\nu \mu }=(ϵ^{0a\mu \nu }+ϵ^{abc}ϵ^{bc\mu \nu }/2)`$, the left-invariant 1-forms can be cast into the more concise form $`\sigma ^a=2\eta _{\mu \nu }^a(x^\nu /r^2)dx^\mu `$. Therefore, the YM instanton solution, in the Cartesian coordinate basis, can be written as $`A^a=A_\mu ^adx^\mu =2\left[f(r)+g(r)\delta ^{a3}\right]\eta _{\mu \nu }^a{\displaystyle \frac{x^\nu }{r^2}}dx^\mu `$ (33) in the background of the TN and EH GI with topology $`R\times (\mathrm{squashed})S^3`$. Now, to appreciate the meaning of this structure, we go back to the flat space situation. As is well known, in flat space the standard BPST SU(2) YM instanton solution takes the form $`A_\mu ^a=2\eta _{\mu \nu }^a[x^\nu /(r^2+\lambda ^2)]`$, with $`\lambda `$ being the size of the instanton. Note, however, that separately from this BPST instanton solution, there is another non-trivial solution of the YM field equation, of the form $`A_\mu ^a=\eta _{\mu \nu }^a(x^\nu /r^2)`$, found long ago by De Alfaro, Fubini, and Furlan. This second solution is called a “meron”, as it carries half a unit of topological charge, and it is known to play a certain role in quark confinement. 
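The quoted definition of the ’t Hooft tensor can likewise be checked by direct enumeration. The sketch below (ours) takes the index $`0`$ to play the role of $`t`$ and $`ϵ^{0123}=+1`$ (our convention, which the text does not fix explicitly), and verifies that $`\eta ^{a\mu \nu }`$ is antisymmetric and self-dual in $`(\mu ,\nu )`$:

```python
import sympy as sp
from itertools import product

def eps4(*idx):            # 4d Levi-Civita, eps^{0123} = +1
    return int(sp.LeviCivita(*idx))

eta = [[[0]*4 for _ in range(4)] for _ in range(3)]
for a, mu, nu in product(range(3), range(4), range(4)):
    first = eps4(0, a + 1, mu, nu)            # the eps^{0 a mu nu} piece
    second = sp.Rational(1, 2)*sum(           # the (1/2) eps^{abc} eps^{bc mu nu} piece
        int(sp.LeviCivita(a, b, c))*eps4(b + 1, c + 1, mu, nu)
        for b, c in product(range(3), repeat=2))
    eta[a][mu][nu] = first + second

for a, mu, nu in product(range(3), range(4), range(4)):
    assert eta[a][mu][nu] == -eta[a][nu][mu]      # antisymmetric in (mu, nu)
    dual = sp.Rational(1, 2)*sum(
        eps4(mu, nu, al, be)*eta[a][al][be]
        for al, be in product(range(4), repeat=2))
    assert dual == eta[a][mu][nu]                 # self-dual in (mu, nu)
print("'t Hooft tensor: antisymmetric and self-dual, as required")
```

In components this reproduces the familiar values $`\eta ^{abc}=ϵ^{abc}`$ and $`\eta ^{a0b}=\delta ^{ab}`$.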
The meron, however, exhibits a singularity at its center $`r=0`$, hence has a diverging action, and falls off like $`1/r`$ as $`r\rightarrow \mathrm{\infty }`$. Thus we are led to the conclusion that the YM instanton solutions in typical GI backgrounds possess the structure of a (curved space version of the) meron at large $`r`$. As is well known, in flat spacetime the meron does not solve the 1st order (anti) self-dual equation, although it does solve the second order YM field equation. In this sense the result seems remarkable, since it implies that in the GI backgrounds the (anti) self-dual YM equation admits solutions which exhibit the configuration of the meron solution at large $`r`$, in contrast to the flat spacetime case. We can only conjecture that, in passing from the flat ($`R^4`$) to the GI ($`R\times S^3`$) geometry, the closure of the topology of part of the manifold appears to turn the structure of the instanton solution from that of the standard BPST type into that of the meron. Next, we look into the behavior of these solutions in the TN and EH GI backgrounds as $`r\rightarrow 0`$. For the TN and EH instantons, the ranges of the radial coordinate are $`m\le r<\mathrm{\infty }`$ and $`a\le r<\mathrm{\infty }`$, respectively. Since the point $`r=0`$ is absent in these manifolds, the solutions in these GI are everywhere regular. Finally, we close with perhaps the most interesting comments, on the estimate of the instanton contribution to the intervacua tunnelling amplitude. It has been pointed out in the literature that both in the background of the Euclidean Schwarzschild geometry and in the Euclidean de Sitter space, the (anti) instanton solutions have the Pontryagin index $`\nu [A]=\pm 1`$ and hence give a contribution to the (saddle point approximation to the) intervacua tunnelling amplitude of $`\mathrm{exp}[-8\pi ^2/g_c^2]`$, which, interestingly, is the same as for their flat space counterparts, even though these curved space YM instanton solutions do not correspond to gauge transformations of any flat space instanton solution. 
This unexpected and hence rather curious property, however, turns out not to persist for the YM instantons in GI backgrounds such as the TN and EH metrics. In order to see this, consider the curved space version of the Pontryagin index, or second Chern class, having the interpretation of the instanton number $`\nu [A]`$, given by $`\nu [A]=Ch_2(F)=-{\displaystyle \frac{1}{8\pi ^2}}{\displaystyle \int _{M^4}}tr(F\wedge F)={\displaystyle \int _{R\times S^3}}d^4x\sqrt{g}\left[{\displaystyle \frac{1}{32\pi ^2}}F_{\mu \nu }^a\stackrel{~}{F}^{a\mu \nu }\right]`$ (34) and the saddle point approximation to the intervacua tunnelling amplitude $`\mathrm{\Gamma }_{GI}\sim \mathrm{exp}[-I_{GI}(instanton)]`$ (35) where the subscript “GI” denotes corresponding quantities in the GI backgrounds and $`I_{GI}(instanton)`$ represents the Euclidean YM action evaluated at the YM instanton solution, i.e., $`I_{GI}(instanton)={\displaystyle \int _{R\times S^3}}d^4x\sqrt{g}\left[{\displaystyle \frac{1}{4g_c^2}}F_{\mu \nu }^aF^{a\mu \nu }\right]=\left({\displaystyle \frac{8\pi ^2}{g_c^2}}\right)|\nu [A]|`$ (36) where we used the (anti)self-duality relation $`F^a=\pm \stackrel{~}{F}^a`$. The straightforward calculation then yields $`\nu [A]=-1`$, $`I_{GI}(instanton)=8\pi ^2/g_c^2`$ and $`\mathrm{\Gamma }_{GI}\sim \mathrm{exp}(-8\pi ^2/g_c^2)`$ for the instanton solution in the TN metric, and $`\nu [A]=3/2`$, $`I_{GI}(instanton)=12\pi ^2/g_c^2`$ and $`\mathrm{\Gamma }_{GI}\sim \mathrm{exp}(-12\pi ^2/g_c^2)`$ for the instanton solution in the EH metric background. Here, however, the solution in the EH metric background carries a half-integer Pontryagin index simply because the boundary of the EH space is $`S^3/Z_2`$. Therefore one needs to be cautious in drawing the conclusion that the fractional topological charges carried by the solutions in GI backgrounds could be further supporting evidence for the meron interpretation of the solutions. 
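The quoted values of $`\nu [A]`$ can be reproduced by carrying out the instanton-number integral above explicitly. In an orthonormal frame a 2-form $`F^a=\alpha _ae^0\wedge e^a+\beta _a`$ (dual 2-plane) contributes $`4\alpha _a\beta _a`$ to $`F_{\mu \nu }^a\stackrel{~}{F}^{a\mu \nu }`$, the radial measure is $`c_rc_1c_2c_3`$, and the angular volumes are $`16\pi ^2`$ for TN ($`\psi `$ period $`4\pi `$) and $`8\pi ^2`$ for EH ($`\psi `$ period $`2\pi `$, boundary $`S^3/Z_2`$). The sketch below (ours) evaluates both integrals with $`m=a=1`$:

```python
import sympy as sp

r = sp.symbols('r', positive=True)
pi = sp.pi

def nu(coeffs, angular_vol, measure, lower):
    # F^a = alpha_a e^0^e^a + beta_a (dual 2-plane)  =>  density = sum_a 4 alpha_a beta_a
    dens = sum(4*al*be for al, be in coeffs)
    return angular_vol*sp.integrate(measure*dens, (r, lower, sp.oo))/(32*pi**2)

# Taub-NUT (m = 1): lam = 8/(r+1)^3, anti-self-dual; c_r c_1 c_2 c_3 = (r^2 - 1)/8
lam = 8/(r + 1)**3
nu_TN = nu([(lam, -lam), (lam, -lam), (2*lam, -2*lam)], 16*pi**2, (r**2 - 1)/8, 1)

# Eguchi-Hanson (a = 1): mu_ = 4/r^6, self-dual; c_r c_1 c_2 c_3 = r^3/8
mu_ = 4/r**6
nu_EH = nu([(mu_, mu_), (mu_, mu_), (-2*mu_, -2*mu_)], 8*pi**2, r**3/8, 1)

print(nu_TN, nu_EH)  # -1 3/2
```

The anti-self-dual TN solution indeed yields $`\nu =-1`$ and the self-dual EH solution $`\nu =3/2`$, as stated in the text.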
To summarize, in the present work we have constructed the solutions to the (anti)self-dual YM equation in the typical gravitational instanton geometries and analyzed their physical nature. As demonstrated, the solutions turn out to take the structure of merons at large $`r`$ and generally carry fractional topological charges. Nevertheless, it seems more appropriate to conclude that the solutions should still be identified with (curved space versions of) instantons, as they are solutions to the 1st order (anti) self-dual equation and are everywhere regular with finite YM action. These curious mixed characteristics of the solutions to the (anti) self-dual YM equation in GI backgrounds, however, appear to invite us to take them more seriously and to explore further the potentially interesting physics associated with them. This work was supported in part by the BK21 project in the physics department at Hanyang Univ. and by grant No. 1999-2-112-003-5 from the interdisciplinary research program of the KOSEF.

References
A. A. Belavin, A. M. Polyakov, A. S. Schwarz, and Yu. S. Tyupkin, Phys. Lett. B59, 85 (1975); G. ’t Hooft, Phys. Rev. Lett. 37, 8 (1976).
J. M. Charap and M. J. Duff, Phys. Lett. B69, 445 (1977); ibid. B71, 219 (1977).
H. Kim and S. K. Kim, Nuovo Cim. B114, 207 (1999) and references therein.
T. Eguchi, P. B. Gilkey, and A. J. Hanson, Phys. Rep. 66, 213 (1980); G. W. Gibbons and C. N. Pope, Commun. Math. Phys. 66, 267 (1979); G. W. Gibbons and S. W. Hawking, ibid. 66, 291 (1979).
A. Taub, Ann. Math. 53, 472 (1951); E. Newman, L. Tamburino, and T. Unti, J. Math. Phys. 4, 915 (1963); S. W. Hawking, Phys. Lett. A60, 81 (1977).
T. Eguchi and A. J. Hanson, Phys. Lett. B74, 249 (1978).
T. Eguchi and A. J. Hanson, Ann. Phys. 120, 82 (1979).
V. De Alfaro, S. Fubini, and G. Furlan, Phys. Lett. B65, 163 (1976).
C. G. Callan, R. Dashen, and D. J. Gross, Phys. Rev. D17, 2717 (1978).
M. Bianchi, F. Fucito, G. C. Rossi, and M. Martellini, Nucl. Phys. B440, 129 (1995).
M. Atiyah, V. Drinfeld, N. Hitchin, and Y. Manin, Phys. Lett. A65, 185 (1978); P. B. Kronheimer and H. Nakajima, Math. Ann. 288, 263 (1990).
H. Boutaleb-Joutei, A. Chakrabarti, and A. Comtet, Phys. Rev. D20, 1844 (1979); ibid. D20, 1898 (1979); ibid. D21, 979 (1980); ibid. D21, 2280 (1980); A. Chakrabarti, Fortschr. Phys. 35, 1 (1987); M. Bianchi, F. Fucito, G. C. Rossi, and M. Martellini, Phys. Lett. B359, 49 (1995).
We thank the anonymous referee for pointing this out to us.
no-problem/0002/hep-ph0002280.html
ar5iv
text
# Effects of flavor conserving CP violating phases in SUSY models ## I Introduction The minimal supersymmetric standard model (MSSM) has many CP violating (CPV) phases beyond the KM phase in the standard model (SM). These SUSY CPV phases, depending on their flavor structures, are strongly constrained by $`ϵ_K`$ or the electron/neutron electric dipole moment (EDM), and have been considered very small ($`\delta \lesssim 10^{-2}`$ for $`M_{\mathrm{SUSY}}\sim O(100)`$ GeV). Another way to solve these problems is to consider effective SUSY models, where decouplings of the 1st/2nd generation sfermions are invoked to evade the EDM constraints and also the SUSY FCNC/CP problems. In such cases, these new SUSY phases may affect $`B`$ and $`K`$ physics. One strong motivation for new CP violating phases beyond the KM phase is related to the baryon number asymmetry of the universe. Electroweak baryogenesis is possible in a certain region of the MSSM parameter space, especially for a light stop ($`120\mathrm{GeV}\lesssim m_{\stackrel{~}{t}_1}\lesssim 175`$ GeV) with CP violating phases in the $`\mu `$ and $`A_t`$ parameters. This light stop and the new CP violating phases in the $`\mu `$ and $`A_t`$ parameters can affect $`B`$ and $`K`$ physics, although these phases are flavor conserving. In this talk, we report on three of our recent works related to this subject. The topics covered here are the following: the effects of $`\varphi _\mu `$ and $`\varphi _{A_t}`$ on $`B`$ physics in the MMSSM, and fully supersymmetric CP violation in the kaon system. ## II Effects of $`\mu `$ and $`A_t`$ phases on $`B`$ physics in the more minimal supersymmetric standard model (MMSSM) In the MMSSM we consider in this section, only the third family squarks and charginos can be light enough to affect $`B\rightarrow X_s\gamma `$ and $`B^0-\overline{B^0}`$ mixing. 
We also ignore possible flavor changing squark mass matrix elements that could generate gluino-mediated flavor changing neutral current (FCNC) processes, discussions of which can be found in the literature, for example. Ignoring such contributions, the only source of FCNC in our model may be attributed to the CKM matrix, whereas there are new CPV phases coming from the phases of the $`\mu `$ and $`A_t`$ parameters in the flavor preserving sector, in addition to the KM phase $`\delta _{KM}`$ in the flavor changing sector. Even if the 1st/2nd generation squarks are very heavy and degenerate, there is another important EDM constraint, considered by Chang, Keung and Pilaftsis (CKP), for large $`\mathrm{tan}\beta `$. This constraint comes from two loop diagrams involving stop/sbottom loops and is independent of the masses of the 1st/2nd generation squarks. Therefore, this CKP EDM constraint cannot be simply evaded by making the 1st/2nd generation squarks very heavy, and it turns out to put a strong constraint on the possible new phase shift in $`B^0-\overline{B^0}`$ mixing. We scanned over a broad parameter space and imposed the various experimental constraints, including $`BR(B\rightarrow X_s\gamma )`$. It has to be emphasized that this parameter space is larger than that in the constrained MSSM (CMSSM), where the universality of the soft terms at the GUT scale is assumed. The $`B^0-\overline{B^0}`$ mixing is generated by the box diagrams with $`u_iW^\pm (H^\pm )`$ and $`\stackrel{~}{u}_i\chi ^\pm `$ running around the loops, in addition to the SM contribution. The gluino and neutralino contributions are negligible in our model. The chargino exchange contribution to $`B^0-\overline{B^0}`$ mixing is generically complex relative to the SM contribution, and this effect can in fact be significant for large $`\mathrm{tan}\beta (\simeq 1/\mathrm{cos}\beta )`$, since the chargino contribution is proportional to $`(m_b/M_W\mathrm{cos}\beta )^2`$. 
However, the CKP EDM constraint is very restrictive in the large $`\mathrm{tan}\beta `$ case. The result is that the CKP EDM constraint on $`2\theta _d`$ is in fact very important for large $`\mathrm{tan}\beta `$, and we have $`|2\theta _d|\lesssim 1^{\circ }`$. This observation is important for CKM phenomenology, since time-dependent CP asymmetries in neutral $`B`$ decays into $`J/\psi K_S,\pi \pi `$, etc. would still directly measure the three angles of the unitarity triangle even if $`\varphi _{A_t}`$ and $`\varphi _\mu `$ are nonzero. We also find that the dilepton asymmetry (proportional to $`\mathrm{Re}(ϵ_B)`$) is very small, as in the SM, but $`\mathrm{\Delta }m_B`$ can be enhanced by as much as $`60\%`$. See Ref. for more details. The radiative decay of $`B`$ mesons, $`B\rightarrow X_s\gamma `$, is described by the effective Hamiltonian including the (chromo)magnetic dipole operators. Interference between $`b\rightarrow s\gamma `$ and $`b\rightarrow sg`$ (where the strong phase is generated by the charm loop via the $`b\rightarrow c\overline{c}s`$ vertex) can induce direct CP violation in $`B\rightarrow X_s\gamma `$. The SM predicts a very small asymmetry, smaller than $`0.5\%`$, so a larger asymmetry would be a clean signal of a new source of CP violating phases. In our model, we find that $`A_{\mathrm{CP}}^{b\rightarrow s\gamma }`$ can be as large as $`\pm 16\%`$ if the chargino is light enough, even when we impose the EDM constraints. So this mode may be one of the good places for probing new CPV phases. Let us next consider $`R_{ll}`$, the ratio of the branching ratio for $`B\rightarrow X_sl^+l^{-}`$ in our model to that in the SM. In the presence of the new phases $`\varphi _\mu `$ and $`\varphi _{A_t}`$, $`R_{\mu \mu }`$ can be as large as 1.85, and the deviation from the SM prediction can be large if $`\mathrm{tan}\beta >8`$. As noticed in Ref. , the correlation between $`\mathrm{Br}(B\rightarrow X_s\gamma )`$ and $`R_{ll}`$ is distinctly different from that in the minimal supergravity case. 
## III Fully SUSY CP violation in the kaon system In the MSSM with many new CPV phases, there is an intriguing possibility that the observed CP violation in $`K_L\rightarrow \pi \pi `$ is fully supersymmetric, due to the complex parameters $`\mu `$ and $`A_t`$ in the soft SUSY breaking terms which also break CP softly, or due to CP violating $`\stackrel{~}{g}q_i\stackrel{~}{q}_j`$ couplings. Our study of the first possibility in the MMSSM indicates that the supersymmetric $`ϵ_K`$ (namely, for $`\delta _{KM}=0`$) is less than $`2\times 10^{-5}`$, which is too small compared to the observed value: $`|ϵ_K|=(2.280\pm 0.019)\times 10^{-3}`$. (See also Ref. .) Although one cannot generate enough CP violation in the kaon system through the flavor preserving $`\mu `$ and $`A_t`$ phases in the MSSM, it is possible if one considers flavor changing SUSY CPV phases. In the mass insertion approximation (MIA), the folklore was that if one saturates $`ϵ_K`$ with $`(\delta _{12}^d)_{LL}`$, the corresponding $`ϵ^{\prime }/ϵ_K`$ is far less than the observed value. On the other hand, if one saturates $`ϵ^{\prime }/ϵ_K`$ with $`(\delta _{12}^d)_{LR}`$, the resulting $`ϵ_K`$ is again too small compared to the data. Therefore one would need two independent parameters, $`|(\delta _{12}^d)_{LL}|\sim O(10^{-3})`$ and $`|(\delta _{12}^d)_{LR}|\sim O(10^{-5})`$, each of which has an $`O(1)`$ phase. Recently, Masiero and Murayama argued that such a large value of $`(\delta _{12}^d)_{LR}`$ is not implausible in the general MSSM, e.g., if the fundamental theory is a string theory. In their model, the large $`(\delta _{12}^d)_{LR}`$ is intimately related to a large $`(\delta _{11}^d)_{LR}`$, so that their prediction for the neutron EDM is very close to the current experimental limit. In recent work, we pointed out that it is in fact possible to generate both $`ϵ_K`$ and $`\mathrm{Re}(ϵ^{\prime }/ϵ_K)`$ with a single complex number $`(\delta _{12}^d)_{LL}\sim O(10^{-2}-10^{-3})`$ with an $`O(1)`$ phase, if one goes beyond the single mass insertion approximation. 
Namely, $`ϵ_K`$ is generated by $`(\delta _{12}^d)_{LL}`$, whereas $`ϵ^{\prime }/ϵ_K`$ is generated by a flavor preserving $`\stackrel{~}{s}_R-\stackrel{~}{s}_L`$ transition followed by a flavor changing $`\stackrel{~}{s}_L-\stackrel{~}{d}_L`$ transition. The former is proportional to $`m_s(A_s-\mu \mathrm{tan}\beta )/\stackrel{~}{m}^2`$, where $`\stackrel{~}{m}`$ is the common squark mass in the MIA. This induced $`LR`$ mixing is present generically in any SUSY model if $`|\mu \mathrm{tan}\beta |\sim 10-20`$ TeV. The only relevant question would be how one can have an $`O(1)`$ phase in $`(\delta _{12}^d)_{LL}`$. For example, the gluino mass can have a CPV phase $`\varphi _3`$ which is flavor preserving. After we redefine the gluino field so that the gluino mass parameter becomes real, the phase $`\varphi _3`$ will be transferred to the $`\stackrel{~}{g}q_i\stackrel{~}{q}_j`$ vertex, thereby generating CP violation in both flavor preserving and flavor changing gluino mediated strong interactions. If the KM phase were not zero in this scenario, we could not use the constraints coming from $`ϵ_K`$ or $`\mathrm{\Delta }M_B`$, since new physics would contribute to both the $`\mathrm{\Delta }S=2`$ and $`\mathrm{\Delta }B=2`$ amplitudes. In particular, even the third or fourth quadrant in the $`(\rho ,\eta )`$ plane should be possible, in principle. More detailed discussions of these points will be presented elsewhere. Finally, let us note that the recent observation of CP asymmetry in $`B^0\rightarrow J/\psi K_S`$ depends on different CP violating parameters $`(\delta _{i3}^d)_{AB}`$, where $`i=1`$ or $`2`$ and $`A,B=L`$ or $`R`$, and is independent of the kaon sector considered here in the mass insertion approximation. ###### Acknowledgements. I am grateful to S. Baek, J.-H. Jang, Y.G. Kim, J.S. Lee and J.H. Park for enjoyable collaborations on the works presented in this talk, and also to W.S. Hou and H.Y. Cheng for their nice organization of this conference. 
This work is supported in part by grant No. 1999-2-111-002-5 from the interdisciplinary research program of the KOSEF and BK21 project of Ministry of Education.
# Acoustics of early universe. II. Lifshitz vs. gauge-invariant theories ## 1 Introduction The density perturbations affect the microwave background temperature. The theory of gravitational instability describes how these inhomogeneities propagate throughout the radiation era, and foresees the temperature image they “paint” on the last scattering surface. Classical perturbation theory, formulated half a century ago by Lifshitz and Khalatnikov, has nowadays been replaced by more appropriate gauge-invariant descriptions . These formalisms introduce some new measures of inhomogeneity. They do not appeal to the metric tensor, so they easily avoid spurious perturbations arising from an inappropriate choice of the equal time hypersurfaces. They guarantee that the space structures they describe are real physical objects. On the other hand, the interpretation of the microwave background temperature fluctuations is based on the Sachs-Wolfe effect, where the metric corrections play a key role . Therefore, data obtained from COBE are mostly referred to the classical concepts of Lifshitz and Khalatnikov, and only in a minor part to gauge-invariant measures, which are more precise but difficult to observe . Both theories in their original formulations differ essentially. Lifshitz theory provides a two-parameter family of growing solutions for the density contrast ( formula (115.19)), while all the gauge-invariant approaches predict in concert only a single growing density mode. Thus the interpretation of the microwave temperature map as the initial data for cosmic structure formation is fairly ambiguous. In this paper we attempt to reconcile both types of theories. We appeal to simple and classical methods of order reduction of differential equations . By use of these techniques we remove the pure-gauge perturbations from Lifshitz theory in the radiation dominated universe. 
In consequence we reduce the Lifshitz system to a second order differential equation, exactly the same as obtained earlier on the grounds of gauge-invariant formalisms. Applying well-known solutions, we express corrections to the metric tensor, the density contrast and the peculiar velocity in exact form. We show that in the early universe, scalar perturbations of any length-scale form acoustic waves propagating with the velocity $`1/\sqrt{3}`$. ## 2 Order reduction Relativistic perturbations of a Friedmann universe, described in synchronous coordinates, form a system of two second order differential equations with variable coefficients. In contrast, the similar Newtonian problem is expressed by only one second-order equation . Obviously, the two additional degrees of freedom appearing in the relativistic case must correspond to pure coordinate transformations (gauge freedom) , and should be removed from the theory. Removing pure gauge modes we reduce the Lifshitz equations with pressure $`p=\rho /3`$ to a Bessel equation. The procedure is as follows: 1) we raise the order of the equations to fourth, in order to separate the $`\mu _n(\eta )`$ and $`\lambda _n(\eta )`$ coefficients, and then 2) we reduce the order of each of the separated equations back by eliminating gauge degrees of freedom. The resulting equations have exact solutions in the form of Hankel functions $`H_{3/2}`$ and their integrals. 
In the synchronous system of reference, the metric corrections $`h_{\mu \nu }`$ ($`\mu ,\nu =1,2,3`$) to the homogeneous and isotropic, spatially flat universe fulfill the partial differential equations ($`8\pi G=c=1`$) $`h_\alpha ^{\beta ^{\prime \prime }}+2{\displaystyle \frac{a^{}}{a}}h_\alpha ^\beta ^{}+(h_{\alpha :\gamma }^{\gamma :\beta }+h_{\gamma :\alpha }^{\beta :\gamma }h_{:\alpha }^{:\beta }h_{\alpha :\gamma }^{\beta :\gamma })=0,`$ (1) $`2\left[1+3{\displaystyle \frac{dp}{dϵ}}\right]^1\left(h^{\prime \prime }+{\displaystyle \frac{a^{}}{a}}\left[2+3{\displaystyle \frac{dp}{dϵ}}\right]h^{}\right)+(h_{\gamma :\delta }^{\delta :\gamma }h_{:\gamma }^{:\gamma })=0.`$ (2) These equations are usually solved by means of the Fourier transform $$h_{\mu \nu }=𝒜(𝐧)\left[\lambda _n(\eta )\left(\frac{\delta _{\mu \nu }}{3}\frac{n_\mu n_\nu }{n^2}\right)+\frac{1}{3}\mu _n(\eta )\delta _{\mu \nu }\right]e^{i𝐧𝐱}d^3𝐧+c.c.$$ (3) The Fourier transform (3) is defined for absolute integrable functions (the case of least interest for cosmology), for nonintegrable functions in the framework of distribution theory, or can be understood as a stochastic integral if the initial conditions are given at random . When the barotropic fluid ($`p/\rho =\delta p/\delta \rho =w=\text{const}`$) is the matter content of the universe, the functions $`\lambda _n(\eta )`$ and $`\mu _n(\eta )`$ obey ordinary, second order equations $`n^2w(\lambda _n(\eta )+\mu _n(\eta ))+2{\displaystyle \frac{a^{}(\eta )}{a(\eta )}}\lambda _n^{}(\eta )+\lambda _n^{\prime \prime }(\eta )=0,`$ (4) $`n^2w(1+3w)(\lambda _n(\eta )+\mu _n(\eta ))+(2+3w){\displaystyle \frac{a^{}(\eta )}{a(\eta )}}\mu _n^{}(\eta )+\mu _n^{\prime \prime }(\eta )=0,`$ (5) where prime denotes differentiation with respect to the conformal time $`\eta `$ and $`a`$ is the scale factor for the background metric tensor. 
In order to separate the variable $`\lambda _n(\eta )`$, we differentiate (4) twice and eliminate terms containing $`\mu _n(\eta )`$ or its derivatives by help of eq. (5). We obtain the fourth order differential equation $`\left(n^2w{\displaystyle \frac{a^{}(\eta )}{a(\eta )}}6w\left({\displaystyle \frac{a^{}(\eta )}{a(\eta )}}\right)^3+2(1+3w){\displaystyle \frac{a^{}(\eta )}{a(\eta )}}{\displaystyle \frac{a^{\prime \prime }(\eta )}{a(\eta )}}+2{\displaystyle \frac{a^{(3)}(\eta )}{a(\eta )}}\right)\lambda _n^{}(\eta )`$ $`+\left(n^2w+6w\left({\displaystyle \frac{a^{}(\eta )}{a(\eta )}}\right)^2+4{\displaystyle \frac{a^{\prime \prime }(\eta )}{a(\eta )}}\right)\lambda _n^{\prime \prime }(\eta )+(4+3w){\displaystyle \frac{a^{}(\eta )}{a(\eta )}}\lambda _n^{(3)}(\eta )+\lambda _n^{(4)}(\eta )=0.`$ (6) In the same way one can treat (5) to find the equation for $`\mu _n(\eta )`$ $`\left(n^2w{\displaystyle \frac{a^{}(\eta )}{a(\eta )}}(2+3w){\displaystyle \frac{a^{}(\eta )}{a(\eta )}}{\displaystyle \frac{a^{\prime \prime }(\eta )}{a(\eta )}}+(2+3w){\displaystyle \frac{a^{(3)}(\eta )}{a(\eta )}}\right)\mu _n(\eta )`$ $`+\left(n^2w+2(2+3w){\displaystyle \frac{a^{\prime \prime }(\eta )}{a(\eta )}}\right)\mu _n^{\prime \prime }(\eta )+(4+3w){\displaystyle \frac{a^{}(\eta )}{a(\eta )}}\mu _n^{(3)}(\eta )+\mu _n^{(4)}(\eta )=0.`$ (7) In the following part of this paper we restrict ourselves to a universe filled with relativistic particles, where both $`w=p_0/\rho _0=\frac{1}{3}`$ and $`=\rho _0a^4`$ are constants of motion, and the scale factor $`a`$ is a linear function of the conformal time $`a(\eta )=\sqrt{/3}\eta `$. 
In the flat universe the expansion rate $`\theta (\eta )=3a^{}(\eta )/a(\eta )^2`$ and the energy density $`\rho _0(\eta )`$ relate to each other by $`\rho _0(\eta )=\theta (\eta )^2/3`$, so the equations for $`\lambda _n(\eta )`$ and $`\mu _n(\eta )`$ take a fairly legible form, both prior to $`{\displaystyle \frac{1}{3}}n^2(\lambda _n(\eta )+\mu _n(\eta ))+{\displaystyle \frac{2}{\eta }}\lambda _n^{}(\eta )+\lambda _n^{\prime \prime }(\eta )=0,`$ (8) $`{\displaystyle \frac{2}{3}}n^2(\lambda _n(\eta )+\mu _n(\eta ))+{\displaystyle \frac{3}{\eta }}\mu _n^{}(\eta )+\mu _n^{\prime \prime }(\eta )=0,`$ (9) and after separation $`\left({\displaystyle \frac{n^2}{3\eta }}-{\displaystyle \frac{2}{\eta ^3}}\right)\lambda _n^{}(\eta )+\left({\displaystyle \frac{n^2}{3}}+{\displaystyle \frac{2}{\eta ^2}}\right)\lambda _n^{\prime \prime }(\eta )+{\displaystyle \frac{5}{\eta }}\lambda _n^{(3)}(\eta )+\lambda _n^{(4)}(\eta )=0,`$ (10) $`{\displaystyle \frac{n^2}{3\eta }}\mu _n^{}(\eta )+{\displaystyle \frac{n^2}{3}}\mu _n^{\prime \prime }(\eta )+{\displaystyle \frac{5}{\eta }}\mu _n^{(3)}(\eta )+\mu _n^{(4)}(\eta )=0.`$ (11) We start with equation (10). The two well-known gauge solutions are (up to multiplicative constants) $`f_1(\eta )`$ $`=`$ $`1,`$ (12) $`f_2(\eta )`$ $`=`$ $`\sqrt{/3}{\displaystyle \frac{1}{a(\eta )}𝑑\eta }=\mathrm{log}(\eta ).`$ (13) We expect to obtain solutions for (10) in the form: $`\lambda _n(\eta )=f_1(\eta )\left({\displaystyle A(\eta )𝑑\eta }\right),`$ (14) $`A(\eta )={\displaystyle \frac{d}{d\eta }}\left({\displaystyle \frac{f_2(\eta )}{f_1(\eta )}}\right)\left({\displaystyle \frac{B(\eta )}{\eta }𝑑\eta }\right).`$ (15) where $`A(\eta )`$ and $`B(\eta )`$ are some auxiliary functions. Inserting (14–15) into (10) we obtain the Bessel equation in its canonical form $$\left(\frac{n^2}{3}-\frac{2}{\eta ^2}\right)B(\eta )+B^{\prime \prime }(\eta )=0.$$ (16) Equation (16) is already free of gauge modes, as one can see from simple heuristic considerations. 
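That $`f_1=1`$ and $`f_2=\mathrm{log}(\eta )`$ of (12)–(13) indeed solve the fourth-order equation (10) can be confirmed symbolically. A short SymPy sketch, writing the first bracket of (10) with its relative minus sign, as required for the Bessel reduction:

```python
import sympy as sp

eta, n = sp.symbols('eta n', positive=True)

def eq10_lhs(lam):
    # left-hand side of the separated fourth-order equation (10):
    # (n^2/(3 eta) - 2/eta^3) lam' + (n^2/3 + 2/eta^2) lam''
    #   + (5/eta) lam''' + lam''''
    return ((n**2/(3*eta) - 2/eta**3) * sp.diff(lam, eta)
            + (n**2/3 + 2/eta**2) * sp.diff(lam, eta, 2)
            + (5/eta) * sp.diff(lam, eta, 3)
            + sp.diff(lam, eta, 4))

# the two pure-gauge solutions (12)-(13): f1 = 1 and f2 = log(eta)
for gauge_mode in (sp.Integer(1), sp.log(eta)):
    assert sp.simplify(eq10_lhs(gauge_mode)) == 0
```

For $`f_2=\mathrm{log}(\eta )`$ the $`n^2`$ pieces of the first two brackets cancel against each other and the remaining powers of $`1/\eta ^4`$ sum to zero, so both gauge modes satisfy (10) for every wave number.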
Let us assume that there exists a third linearly independent solution of equation (4), which corresponds to a pure coordinate transformation. Then, the linear space of gauge modes would be 3-dimensional, leaving only a single degree of freedom for the real, physical perturbations. Such a theory has no proper Newtonian limit. Equation (16) is identical to the Sakai equation ( formula 5.1), the equation for density perturbations in orthogonal gauge ( formula (4.9), formulae (16–17)), the equation for gauge invariant density gradients ( formula (38)) or Laplacians ( formulae (8–9), formula (22)) after transforming these equations to their canonical form (see ). It is interesting to note that equation (16) is also identical to the propagation equation for gravitational waves (except that gravitational waves move with the speed of light). This means that the solutions to equation (16) represent waves travelling with the phase velocity $`1/\sqrt{3}`$ (we show this explicitly in the next section). This picture is also consistent<sup>1</sup><sup>1</sup>1The procedure we present here may also be treated as a method to reconstruct metric corrections and hydrodynamic quantities in their explicit form, out of the Field and Shepley variables. with the phonon approach , as the transformation $`\varphi (\eta )=B(\eta )/\eta +B^{}(\eta )`$ to the Field-Shepley variable reduces (16) to the harmonic oscillator $`\varphi ^{\prime \prime }(\eta )+\frac{n^2}{3}\varphi (\eta )=0`$ . ## 3 Solutions The general solution for (16) is a combination of<sup>2</sup><sup>2</sup>2For similar solutions in the gravitational waves theory see . $$B(\eta )=e^{-i\omega \eta }\left(1+\frac{1}{i\omega \eta }\right)$$ (17) and its complex conjugate, with the frequency $`\omega =\frac{n}{\sqrt{3}}`$. These solutions are proportional to Hankel functions $`H_{3/2}`$, but are more frequently presented as a combination of Bessel and Neumann functions $`b=a_1J+a_2N`$ . 
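Both statements, that the travelling-wave branch of (17) with time factor $`e^{-i\omega \eta }`$ (the one appearing in the explicit solutions below) solves the gauge-free equation (16), and that the Field-Shepley variable obeys a harmonic oscillator, can be verified symbolically:

```python
import sympy as sp

eta, w = sp.symbols('eta omega', positive=True)

# travelling-wave branch of (17), with the time factor exp(-i*omega*eta)
# appearing in the explicit travelling-wave solutions
B = sp.exp(-sp.I*w*eta) * (1 + 1/(sp.I*w*eta))

# B solves the gauge-free (Bessel) equation (16):
# B'' + (omega^2 - 2/eta^2) B = 0
assert sp.simplify(sp.diff(B, eta, 2) + (w**2 - 2/eta**2)*B) == 0

# the Field-Shepley variable phi = B/eta + B' obeys phi'' + omega^2 phi = 0
phi = B/eta + sp.diff(B, eta)
assert sp.simplify(sp.diff(phi, eta, 2) + w**2*phi) == 0
```

The second assertion holds for any solution of (16), not just this branch, since it follows from the equation itself rather than from the particular solution chosen.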
Performing integrations (1415) we determine the solution for $`\lambda _n(\eta )`$ and find the correction $`\mu _n(\eta )`$ by solving equation (4) algebraically $`\lambda (\omega \eta )`$ $`=`$ $`{\displaystyle \frac{e^{i\omega \eta }}{i\omega \eta }}\text{Ei}(i\omega \eta ),`$ (18) $`\mu (\omega \eta )`$ $`=`$ $`\left(1+{\displaystyle \frac{1}{i\omega \eta }}\right){\displaystyle \frac{e^{i\omega \eta }}{i\omega \eta }}+\text{Ei}(i\omega \eta ).`$ (19) Obviously, equation (11) is automatically fulfilled. As a result we obtain the metric corrections $`h_{\mu \nu }`$ expanded into planar waves with the frequency constant in conformal time $`\eta `$ and with varying amplitude $`h_{\mu \nu }=`$ $``$ $`{\displaystyle 𝒜(𝐧)\left(\frac{\delta _{\mu \nu }}{3}\frac{n_\mu n_\nu }{n^2}\right)\left(\frac{e^{i(𝐧𝐱\omega \eta )}}{i\omega \eta }+e^{i𝐧𝐱}\text{Ei}(i\omega \eta )\right)d^3𝐧}`$ (20) $`+`$ $`{\displaystyle 𝒜(𝐧)\frac{\delta _{\mu \nu }}{3}\left(\left(1+\frac{1}{i\omega \eta }\right)\frac{e^{i(𝐧𝐱\omega \eta )}}{i\omega \eta }+e^{i𝐧𝐱}\text{Ei}(i\omega \eta )\right)d^3𝐧}+c.c.`$ The density perturbation and peculiar velocity can be inferred from formulae (8.2-8.3) of and expressed as $`{\displaystyle \frac{\delta \rho }{\rho }}`$ $`=`$ $`{\displaystyle 𝒜(𝐧)u_\rho (𝐧𝐱,\omega \eta )d^3𝐧}+c.c.`$ (21) $`\delta v`$ $`=`$ $`{\displaystyle 𝒜(𝐧)u_v(𝐧𝐱,\omega \eta )d^3𝐧}+c.c.`$ (22) where the Fourier modes form travelling waves $`u_\rho (𝐧𝐱,\omega \eta )`$ $`=`$ $`{\displaystyle \frac{2}{3}}\left(1+{\displaystyle \frac{1}{i\omega \eta }}+{\displaystyle \frac{i\omega \eta }{2}}\right){\displaystyle \frac{e^{i(𝐧𝐱\omega \eta )}}{i\omega \eta }},`$ (23) $`u_v(𝐧𝐱,\omega \eta )`$ $`=`$ $`{\displaystyle \frac{1}{2\sqrt{3}}}\left(1+{\displaystyle \frac{i\omega \eta }{2}}\right){\displaystyle \frac{e^{i(𝐧𝐱\omega \eta )}}{i\omega \eta }}.`$ (24) A generic scalar perturbation in the early universe is a superposition of acoustic waves. 
Its amplitude decreases to reach a constant and positive value at late times. This decrease is substantial in the low frequency (early times) limit $`\omega \eta \ll 1`$. Solutions are formally divergent at $`\eta =0`$; nevertheless, evaluating the cosmic structure backward in time beyond its stochastic initiation $`\eta _i`$ has no well-defined physical sense. The only perturbations which are regular at $`\eta =0`$ and growing near the initial singularity consist of standing waves $`u(𝐧𝐱,\omega \eta )+u(𝐧𝐱,-\omega \eta )`$ (compare or similar effect in the gravitational waves theory ). They form a one-parameter family in the 2-parameter space of all solutions, so they are non-generic. This property has been confirmed by use of other techniques in the gauge-invariant theories . In the stochastic approach nongeneric solutions are of marginal interest since they contribute with zero probability measure. ## 4 Summary and conclusions It is a matter of dispute whether cosmic structure was created solely by gravitational forces or initiated by other, non-gravitational phenomena manifesting themselves as stochastic processes in some early epochs. For the first hypothesis regular and growing solutions are indispensable, while in the second one the generic perturbations play a key role. In a radiation dominated universe these properties exclude each other. Lifshitz theory and the gauge-invariant theories differ less than usually expected. Both types of theories, when properly written, lead to the same perturbation equation of wave-equation form. Generic scalar perturbations are superpositions of acoustic waves. Solutions depend on the product $`n\eta `$ (equivalently on $`\omega \eta `$). Everything which concerns early epochs refers also to long waves, and vice versa<sup>3</sup><sup>3</sup>3This is a peculiar property of the spatially flat radiation-filled universe.. The perturbation scale does not divide solutions into different classes. 
Perturbations propagate with the same speed $`1/\sqrt{3}`$, which does not depend on the wave vector. This confirms the wave nature of scalar perturbations in the radiation dominated universe (an important property already pointed out by Lukash , but hardly discussed elsewhere) and compels one to use the complete metric corrections (20) in the Sachs-Wolfe procedure (not only the non-generic growing solutions) at the end of the radiation era. The reduction technique we apply in this paper can be used for other equations of state. For $`p/\rho =\text{const}\ne 1/3`$ solutions can be expressed in terms of hypergeometric functions. In other cases solutions may not reduce to any known elementary or special functions, although the reduced equation (16) can always be found. ## Acknowledgements We would like to thank Marek Demiański and Grażyna Siemieniec-Oziȩbło for helpful discussions. This work was partially supported by the State Committee for Scientific Research, project No 2 P03D 02210. ## Appendix: Lifshitz “synchronous” gauge The original Lifshitz approach provides solutions which are different from (18–19), and also inconsistent with the gauge-invariant theories. To explain these differences in detail, we appeal to the complete solution to (10-11) containing both physical and spurious inhomogeneities. All the gauge freedom within the synchronous system is limited to the choice of the integral constants in (15). Actually each of these “constants” can be defined as an arbitrary function of the wave number $`n`$ (equivalently $`\omega `$). 
We write them explicitly as $`𝒜(𝐧)`$ and $`𝒢(𝐧)`$ satisfying $`\lambda (\omega \eta )=f_1(\eta )\left(𝒜(𝐧){\displaystyle A(\eta )𝑑\eta }𝒢(𝐧)\mathrm{log}(i\omega )\right)`$ (25) $`A(\eta )={\displaystyle \frac{d}{d\eta }}\left({\displaystyle \frac{f_2(\eta )}{f_1(\eta )}}\right)\left({\displaystyle \frac{B(\eta )}{\eta }𝑑\eta }+{\displaystyle \frac{𝒢(𝐧)}{𝒜(𝐧)}}\right).`$ (26) so they are equal to the Fourier coefficients in the integral $`h_{\mu \nu }`$ $`=`$ $`{\displaystyle 𝒜(𝐧)\left(\frac{n_\mu n_\nu }{n^2}\frac{\delta _{\mu \nu }}{3}\right)\left(\frac{e^{i(𝐧𝐱\omega \eta )}}{i\omega \eta }+e^{i𝐧𝐱}\text{Ei}(i\omega \eta )\right)d^3𝐧}`$ (27) $`+`$ $`{\displaystyle 𝒜(𝐧)\frac{\delta _{\mu \nu }}{3}\left[\left(1+\frac{1}{i\omega \eta }\right)\frac{e^{i(𝐧𝐱\omega \eta )}}{i\omega \eta }+e^{i𝐧𝐱}\text{Ei}(i\omega \eta )\right]d^3𝐧}`$ $`+`$ $`{\displaystyle 𝒢(𝐧)\left[\left(\frac{n_\mu n_\nu }{n^2}\frac{\delta _{\mu \nu }}{3}\right)\mathrm{log}(i\omega \eta )+\frac{\delta _{\mu \nu }}{3}\left(\mathrm{log}(i\omega \eta )\frac{1}{\omega ^2\eta ^2}\right)\right]e^{i𝐧𝐱}d^3𝐧}+c.c.`$ Each coefficient $`𝒜(𝐧)`$, $`𝒢(𝐧)`$, can be defined independently. The gauge freedom is carried by $`𝒢(𝐧)`$, which follows directly from (13). Knowing the gauge-invariant methods, one can also check a posteriori that $`𝒜(𝐧)`$ affects the gauge-invariant inhomogeneity measures, while $`𝒢(𝐧)`$ does not. 
Now, the density contrast and the peculiar velocity, as inferred from formulae (8.2-8.3) of $`{\displaystyle \frac{\delta \rho }{\rho }}`$ $`=`$ $`{\displaystyle \left[𝒜(𝐧)u_\rho (𝐧𝐱,\omega \eta )+𝒢(𝐧)\stackrel{~}{u}_\rho (𝐧𝐱,\omega \eta )\right]d^3𝐧}+c.c.`$ (28) $`\delta v`$ $`=`$ $`{\displaystyle \left[𝒜(𝐧)u_v(𝐧𝐱,\omega \eta )+𝒢(𝐧)\stackrel{~}{u}_v(𝐧𝐱,\omega \eta )\right]d^3𝐧}+c.c.`$ (29) consists of the physical modes $`u_\rho `$, $`u_v`$ already found in (23-24) and the pure gauge modes equal to $`\stackrel{~}{u}_\rho (𝐧𝐱,\omega \eta )`$ $`=`$ $`{\displaystyle \frac{2}{3}}{\displaystyle \frac{1}{\eta ^2\omega ^2}}e^{i𝐧𝐱}`$ (30) $`\stackrel{~}{u}_v(𝐧𝐱,\omega \eta )`$ $`=`$ $`{\displaystyle \frac{i}{2\sqrt{3}}}{\displaystyle \frac{1}{\omega \eta }}e^{i𝐧𝐱}`$ (31) We expand integrals (28) and (29) in the early times limit (with the accuracy to $`\eta ^2`$), to obtain $`{\displaystyle \frac{\delta \rho }{\rho }}`$ $`=`$ $`{\displaystyle \left[\frac{2}{3}\frac{𝒜(𝐧)+𝒢(𝐧)}{\omega ^2\eta ^2}+\left(\frac{1}{9}i\omega \eta +\frac{1}{12}\omega ^2\eta ^2\right)𝒜(𝐧)\right]e^{i𝐧𝐱}d^3𝐧}+c.c.`$ (32) $`\delta v`$ $`=`$ $`{\displaystyle \frac{1}{2\sqrt{3}}}{\displaystyle \left[\frac{𝒜(𝐧)+𝒢(𝐧)}{i\omega \eta }+\frac{1}{2}𝒜(𝐧)\left(1+\frac{\omega ^2\eta ^2}{6}\right)\right]e^{i𝐧𝐱}d^3𝐧}+c.c.`$ (33) Both physical and gauge perturbations manifest identical singular behaviour at $`\eta =0`$. Therefore, one cannot distinguish between them solely on the grounds of their asymptotic forms. On the other hand, one is able to regularize perturbations by the gauge choice $`𝒢(𝐧)=𝒜(𝐧)`$. Then, the equal time hypersurfaces follow the hypersurfaces of equal density at early epochs. 
This gauge<sup>4</sup><sup>4</sup>4commonly known as the synchronous gauge was actually employed by Lifshitz and Khalatnikov , where divergent terms $`1/\omega ^2\eta ^2`$ are cancelled by the exactly opposite pure-gauge corrections<sup>5</sup><sup>5</sup>5This does not refer to the metric correction where the $`\frac{1}{\eta }`$-divergence is still present.. In consequence, perturbations described there form a mixture of both the physical and the gauge modes. In the Lifshitz gauge, the mode amplitude $`[u_\rho (𝐧𝐱,\omega \eta )\overline{u_\rho (𝐧𝐱,\omega \eta )}]^{1/2}`$ grows with time; therefore, the two independent solutions for the density contrast increase. The same concerns the peculiar velocity. In the low $`\omega \eta `$ limit the density contrast and peculiar velocity form two-parameter linear spaces of growing solutions. As a consequence, a generic inhomogeneity increases, which is in conflict with the gauge-invariant theories .
# Evidence for a New State of Matter: An Assessment of the Results from the CERN Lead Beam Programme January 31, 2000 Ulrich Heinz and Maurice Jacob, Theoretical Physics Division, CERN, CH-1211 Geneva 23, Switzerland The year 1994 marked the beginning of the CERN lead beam programme. A beam of 33 TeV (or 160 GeV per nucleon) lead ions from the SPS now extends the CERN relativistic heavy ion programme, started in the mid eighties, to the heaviest naturally occurring nuclei. A run with a lead beam of 40 GeV per nucleon in the fall of 1999 complemented the programme towards lower energies. Seven large experiments participate in the lead beam programme, measuring many different aspects of lead-lead and lead-gold collision events: NA44, NA45/CERES, NA49, NA50, NA52/NEWMASS, WA97/NA57, and WA98. Some of these experiments use multipurpose detectors to measure simultaneously and correlate several of the more abundant observables. Others are dedicated experiments to detect rare signatures with high statistics. This coordinated effort using several complementary experiments has proven very successful. The present document summarizes the most important results from this programme at the dawn of the RHIC era: soon the relativistic heavy ion collider at BNL will allow the study of gold-gold collisions at 10 times higher collision energies. Physicists have long thought that a new state of matter could be reached if the short range repulsive forces between nucleons could be overcome and if squeezed nucleons would merge into one another. Present theoretical ideas provide a more precise picture for this new state of matter: it should be a quark-gluon plasma (QGP), in which quarks and gluons, the fundamental constituents of matter, are no longer confined within the dimensions of the nucleon, but free to move around over a volume in which a high enough temperature and/or density prevails. 
This plasma also exhibits the so-called “chiral symmetry” which in normal nuclear matter is spontaneously broken, resulting in effective quark masses which are much larger than the actual masses. For the transition temperature to this new state, lattice QCD calculations give values between 140 and 180 MeV, corresponding to an energy density in the neighborhood of 1 GeV/fm<sup>3</sup>, or seven times that of nuclear matter. Temperatures and energy densities above these values existed in the early universe during the first few microseconds after the Big Bang. It has been expected that in high energy collisions between heavy nuclei sufficiently high energy densities could be reached such that this new state of matter would be formed. Quarks and gluons would then freely roam within the volume of the fireball created by the collision. The individual quark and gluon energies would be typical of a system at very high temperature (above 200 MeV) even if the system should not have enough time to fully thermalize. Positive identification of the quark-gluon plasma state in relativistic heavy ion collisions is, however, extremely difficult. If created, the QGP state would have only a very transient existence. Due to color confinement, a well-known property of strong interactions at low energies, single quarks and gluons cannot escape from the collision – they must always combine to color-neutral hadrons before being able to travel to the detector. This process is called “hadronization”. Thus, regardless of whether or not QGP is formed in the initial stage, the collision fireball later turns into a system of hadrons. In a head-on lead-lead collision at the SPS about 2500 particles are created (NA49) of which more than 99.9% are hadrons. Evidence for or against formation of an initial state of deconfined quarks and gluons at the SPS thus must be extracted from a careful and quantitative analysis of the observed final state. 
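The quoted correspondence between a transition temperature near 170 MeV and an energy density in the neighborhood of 1 GeV/fm<sup>3</sup> can be reproduced with the ideal-gas (Stefan–Boltzmann) estimate for a quark-gluon gas, ε = g π²/30 T⁴. The degeneracy g = 37 (16 gluon plus 21 quark degrees of freedom for two flavors) is a standard textbook assumption, not a number taken from this document:

```python
import math

hbar_c = 0.1973   # GeV*fm, conversion constant between natural and lab units
T = 0.170         # GeV; mid-range of the quoted lattice transition temperatures
g = 37.0          # effective degrees of freedom: 16 (gluons) + 21 (2 quark flavors)

# Stefan-Boltzmann energy density of an ideal quark-gluon gas, in GeV/fm^3
epsilon = g * math.pi**2 / 30 * T**4 / hbar_c**3
print(round(epsilon, 2))   # ~1.3 GeV/fm^3, in the stated neighborhood of 1 GeV/fm^3
```

The estimate is deliberately crude (a free gas, no interaction corrections), but it shows why a 30–40 MeV shift in the transition temperature moves the critical energy density appreciably, since ε scales as T⁴.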
A common assessment of the collected data leads us to conclude that we now have compelling evidence that a new state of matter has indeed been created, at energy densities which had never been reached over appreciable volumes in laboratory experiments before and which exceed by more than a factor 20 that of normal nuclear matter. The new state of matter found in heavy ion collisions at the SPS features many of the characteristics of the theoretically predicted quark-gluon plasma. The evidence for this new state of matter is based on a multitude of different observations. Many hadronic observables show a strong nonlinear dependence on the number of nucleons which participate in the collision. Models based on hadronic interaction mechanisms have consistently failed to simultaneously explain the wealth of accumulated data. On the other hand, the data exhibit many of the predicted signatures for a quark-gluon plasma. Even if a full characterization of the initial collision stage is presently not yet possible, the data provide strong evidence that it consists of deconfined quarks and gluons. We emphasize that the evidence collected so far is “indirect” since it stems from the measurement of particles which have undergone significant reinteractions between the early collision stages and their final observation. Still, they retain enough memory of the initial quark-gluon state to provide evidence for its formation, like the grin of the Cheshire Cat in Alice in Wonderland which remains even after the cat has disappeared. It is expected that the present “proof by circumstantial evidence” for the existence of a quark-gluon plasma in high energy heavy ion collisions will be further substantiated by more direct measurements (e.g. electromagnetic signals which are emitted directly from the quarks in the QGP) which will become possible at the much higher collision energies and fireball temperatures provided by RHIC at Brookhaven and later the LHC at CERN. 
In the following the most important experimental findings and their interpretation are described in more detail: Hadrons are strongly interacting particles. In nuclear collisions, after being first created, they undergo many secondary interactions before escaping from the collision “fireball”. When they are finally set free, the fireball volume has expanded by about a factor 30–50; this information can be extracted from two-particle correlations between identical hadrons by a method called “Bose-Einstein interferometry” (NA44, NA49, WA98). At this point, the relative abundances and momentum distributions of the hadrons still contain important memories of the dense early collision stage which can be extracted by a comprehensive analysis of the hadronic final state. More than 20 different hadron species, including a few small anti-nuclei (anti-deuteron, anti-helium), have been measured by the seven experiments (NA44, NA45, NA49, NA50, NA52, WA97, WA98). A combined analysis of their momentum distributions and two-particle correlations shows that, at the point where they stop interacting and “freeze out”, the fireball is in a state of tremendous explosion, with expansion velocities exceeding half the speed of light, and very close to local thermal equilibrium at a temperature of about 100-120 MeV. This characteristic feature gave rise to the name “Little Bang”. The observed explosion calls for strong pressure in the earlier collision stages. Recently measured anisotropies in the angular distribution of the momenta perpendicular to the beam direction (NA49, NA45, WA98) indicate that the pressure was built up quickly, pointing to intense rescattering in the early collision stages. An earlier glimpse of the expanding system is provided by a measurement of correlated electron-positron pairs, also called dileptons (NA45). 
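The fireball-size information from the Bose-Einstein interferometry mentioned above comes from fitting the two-particle correlation function; for a Gaussian source it takes the form C₂(q) = 1 + λ exp(−q²R²), which can be inverted for the radius R. The numbers below are invented purely for illustration, not experimental values:

```python
import math

hbar_c = 0.1973   # GeV*fm

def hbt_radius(q, c2, lam=1.0):
    """Invert the Gaussian parametrization C2(q) = 1 + lam*exp(-(q R)^2)
    for the source radius R in fm, with the relative momentum q in GeV."""
    return hbar_c * math.sqrt(-math.log((c2 - 1.0) / lam)) / q

# hypothetical data point: correlation strength C2 = 1.3 at q = 50 MeV
R = hbt_radius(q=0.05, c2=1.3)
print(round(R, 1))   # ~4.3 fm for these invented inputs
```

In practice the experiments fit the full measured correlation function (with separate radii for different directions) rather than inverting a single point, but the scaling is the same: the faster the correlation falls off in q, the larger the source.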
These data show that in sulphur-gold and lead-gold collisions the expected peak from the rho ($`\rho `$) vector meson (a particle which can decay into dileptons even before freeze-out) is completely smeared out. Simultaneously, NA45 finds in lead-gold collisions an excess of dileptons in the mass region between 250 and 700 MeV, by about a factor 3 above expectations from hadron decays scaled from proton-nucleon to lead-gold collisions. Theory explains this by a broadening of the $`\rho `$’s spectral function, resulting from scattering among pions and nucleons in a very dense hadronic fireball, just below the critical energy density for quark-gluon plasma formation. The $`\rho `$ meson mixes with its partner under chiral symmetry transformations, signalling the onset of chiral symmetry restoration as matter becomes denser and denser. The theoretical analysis of the measured hadron abundances (NA44, NA45, NA49, NA50, NA52, WA97, WA98) shows that they reflect a state of “chemical equilibrium” at a temperature of about 170 MeV. This points to an even earlier stage of the collision. In fact, such temperatures (corresponding to an energy density of about 1 GeV/fm<sup>3</sup>) are the highest ones allowed before, according to lattice QCD, hadrons dissolve into quarks and gluons. The observations are explained by assuming that at this temperature the hadrons were formed by a statistical hadronization process from a pre-existing quark-gluon system. Theoretical studies showed that at CERN energies subsequent interactions among the hadrons, while causing pressure and driving the expansion and cooling of the fireball, are very ineffective in changing the abundance ratios. This is why, after accounting for the decay of unstable resonances, the finally measured hadron yields reflect rather accurately the conditions at the quark-hadron transition. 
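How abundance ratios pin down a chemical freeze-out temperature can be illustrated with a deliberately crude non-relativistic Boltzmann toy, n ∝ g (mT)^(3/2) e^(−m/T). This ignores chemical potentials, quantum statistics and resonance feed-down; the actual analyses use the full statistical hadronization model, so the numbers here are illustrative only:

```python
import math

def toy_yield(m, g, T):
    # non-relativistic Boltzmann density up to common factors:
    # g * (m*T)^(3/2) * exp(-m/T)
    return g * (m * T) ** 1.5 * math.exp(-m / T)

T = 0.170                      # GeV, assumed chemical freeze-out temperature
m_p, g_p = 0.938, 2            # proton mass (GeV) and spin degeneracy
m_Omega, g_Omega = 1.672, 4    # Omega hyperon mass (GeV) and spin degeneracy

ratio = toy_yield(m_Omega, g_Omega, T) / toy_yield(m_p, g_p, T)
print(round(ratio, 3))   # ~0.06: the heavier Omega is strongly Boltzmann-suppressed
```

The steep exponential dependence on (m − m') / T is what makes ratios of heavy to light hadrons a sensitive thermometer: a modest change in T changes such a ratio by a large factor.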
A particularly striking aspect of this apparent “chemical equilibrium” at the quark-hadron transition temperature is the observed enhancement, relative to proton-induced collisions, of hadrons containing strange quarks. Globally, when normalized to the number of participating nucleons, this enhancement corresponds to a factor 2 (NA49), but hadrons containing more than one strange quark are enhanced much more strongly (WA97, NA49, NA50), up to a factor 15 for the Omega ($`\mathrm{\Omega }`$) hyperon and its antiparticle (WA97)! Lead-lead collisions are thus qualitatively different from a superposition of independent nucleon-nucleon collisions. That the relative enhancement is found to increase with the strange quark content of the produced hadrons contradicts predictions from hadronic rescattering models where secondary production of multi-strange (anti)baryons is hindered by high mass thresholds and low cross sections. Since the hadron abundances appear to be frozen in at the point of hadron formation, this enhancement signals a new and faster strangeness-producing process before or during hadronization, involving intense rescattering among quarks and gluons. This effect was predicted about 20 years ago as a quark-gluon plasma signature, resulting from a combination of large gluon densities and a small strange quark mass in this color deconfined, chirally symmetric state. Experimentally it is found not only in lead-lead collisions, but even in central sulphur-nucleus collisions, with target nuclei ranging from sulphur to lead (NA35, WA85, WA94). This is consistent with estimates of initial energy densities above the critical value of 1 GeV/fm<sup>3</sup> even in those collisions. Evidence for the formation of a transient quark-gluon phase without color confinement is further provided by the observed suppression of the charmonium states $`J/\psi ,\chi _c,`$ and $`\psi ^{}`$ (NA50). 
These particles contain charmed quarks and antiquarks ($`c`$ and $`\overline{c}`$) which are so heavy that they can only be produced at the very beginning when the constituents of the colliding nuclei still have their full energy. As one varies the size of the colliding nuclei and the centrality of the collision one finds, after subtracting the expected absorption effects from final state interactions between the $`c\overline{c}`$ pair and the nucleons of the interpenetrating nuclei, a succession of suppression patterns: The most weakly bound state, $`\psi ^{}`$, is suppressed already in sulphur-uranium collisions (NA38), the intermediate $`\chi _c`$ seems to disappear quite suddenly in semicentral lead-lead collisions, and in the most central lead-lead collisions an additional reduction of the $`J/\psi `$ yield indicates that now also the strongly bound $`J/\psi `$ ground state itself is significantly suppressed (NA50). The observation of $`\chi _c`$ suppression is indirect, via its 30-40% contribution to the measured $`J/\psi `$ yield which is expected from scaling proton-proton measurements. Charmonium suppression was predicted 15 years ago as a consequence of color screening in a quark-gluon plasma which should keep the charmed quark-antiquark pairs from binding to each other. According to this prediction, suppressing the $`J/\psi `$ requires temperatures which are about 30% above the color deconfinement temperature, or energy densities of about 3 GeV/fm<sup>3</sup>. This agrees with estimates of the initial energy densities reached in central lead-lead collisions, based on calorimetry or on a back-extrapolation from the freeze-out stage to the time before expansion started. Attempts have been made to reproduce the data by assuming that the charmonia are destroyed solely by final state interactions with surrounding hadrons; none of these attempts can account for the shape of the centrality dependence of the observed suppression. 
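Calorimetric estimates of the initial energy density are typically made with the Bjorken formula ε ≈ (dE_T/dy)/(τ₀ π R²). The inputs below (a formation time τ₀ = 1 fm/c and a transverse energy dE_T/dy ≈ 400 GeV for a central lead-lead collision at the SPS) are typical assumed values for such an estimate, not numbers quoted in this document:

```python
import math

A = 208                      # mass number of lead
R = 1.2 * A ** (1.0 / 3.0)   # nuclear radius in fm (standard parametrization)
tau0 = 1.0                   # assumed formation time in fm/c
dEt_dy = 400.0               # assumed transverse energy per unit rapidity, GeV

# Bjorken estimate of the initial energy density, in GeV/fm^3
epsilon = dEt_dy / (tau0 * math.pi * R**2)
print(round(epsilon, 1))   # ~2.5 GeV/fm^3, the few-GeV/fm^3 range discussed here
```

The main uncertainty is the assumed formation time: since ε scales as 1/τ₀, a shorter τ₀ would push the estimate well above the quoted ~3 GeV/fm³ threshold for charmonium suppression.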
On the other hand, the interpretation of this pattern in terms of color screening by deconfined quarks and gluons leads to the prediction of a similar suppression pattern at RHIC in much smaller nuclei; this prediction will soon be tested. In spite of its many facets the resulting picture is simple: the two colliding nuclei deposit energy into the reaction zone which materializes in the form of quarks and gluons which strongly interact with each other. This early, very dense state (energy density about 3–4 GeV/fm<sup>3</sup>, mean particle momenta corresponding to $`T\approx `$ 240 MeV) suppresses the formation of charmonia, enhances strangeness and begins to drive the expansion of the fireball. Subsequently, the “plasma” cools down and becomes more dilute. At an energy density of 1 GeV/fm<sup>3</sup> ($`T\approx `$ 170 MeV) the quarks and gluons hadronize and the final hadron abundances are fixed. At an energy density of order 50 MeV/fm<sup>3</sup> ($`T`$ = 100–120 MeV) the hadrons stop interacting, and the fireball freezes out. At this point it is expanding at more than half the speed of light. This does not happen only in a few “special” collision events, but essentially in every lead-lead collision: characteristic observables, like the average transverse momentum of produced particles or the kaon/pion ratio, show only the statistically expected fluctuations in a thermalized ensemble, around average values which are the same in all collisions (NA49). Since the kaon/pion ratio is essentially fixed at the point of hadronization, this indicates the absence of long-range correlations like those expected in a fully-developed thermodynamic phase transition. A better theoretical understanding of the phase-transition dynamics might emerge from these observations. The short-range character suggests similarities with the transition found in high-$`T_c`$ superconductivity. 
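The energy densities quoted above are of the order of the standard Bjorken estimate $`ϵ=(dE_T/dy)/(\tau _0\pi R^2)`$. The sketch below uses assumed, illustrative inputs (a transverse-energy rapidity density of roughly 400 GeV for central lead-lead collisions and a formation time of 1 fm/c), not numbers taken from this text:

```python
import math

def bjorken_energy_density(det_dy_gev, tau0_fm, mass_number):
    """Bjorken estimate eps = (dE_T/dy) / (tau0 * pi * R^2) in GeV/fm^3."""
    radius_fm = 1.2 * mass_number ** (1.0 / 3.0)   # nuclear radius estimate
    overlap_area_fm2 = math.pi * radius_fm ** 2    # transverse area, head-on
    return det_dy_gev / (tau0_fm * overlap_area_fm2)

# assumed inputs: dE_T/dy ~ 400 GeV, tau0 = 1 fm/c, lead (A = 208)
eps = bjorken_energy_density(400.0, 1.0, 208)
print(f"initial energy density ~ {eps:.1f} GeV/fm^3")  # ~2.5 with these inputs
```

With these inputs the estimate lands in the same few-GeV/fm<sup>3</sup> ballpark as the values quoted above, well above the critical 1 GeV/fm<sup>3</sup>.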
“Direct” observation of the quark-gluon plasma may be possible via electromagnetic radiation emitted by the quarks during the hot initial stage. Searches for this radiation were performed at the SPS (WA98, NA45, NA50) but are difficult due to high backgrounds from other sources. For sulphur-gold collisions WA80 and NA45 established that not more than 5% of the observed photons are emitted directly. For lead-lead collisions WA98 have reported indications for a significant direct photon contribution. Preliminary data from NA45 are consistent with this finding, but so far not statistically significant. NA50 has seen an excess of about a factor 2 in the dimuon spectrum in the mass region between the $`\varphi `$ and $`J/\psi `$ vector mesons. The predicted electromagnetic radiation rates at the above mentioned temperatures are marginal for detection. While under these conditions it is a great experimental achievement to have obtained positive evidence for a signal, its connection with the predicted “thermal plasma radiation” is not yet firmly established. This is expected to change at the higher collision energies provided by RHIC and LHC. The much higher initial temperatures (up to nearly 1000 MeV have been predicted for lead-lead collisions at the LHC) and longer plasma lifetimes should facilitate the direct observation of the plasma radiation and lead to the production of additional heavy charm quarks by gluon-gluon scattering in the QGP phase. The much higher initial energy densities which can be reached at RHIC and LHC give us more time until the quarks and gluons rehadronize, thus allowing for a quantitative characterization of the quark-gluon plasma and detailed studies of its early thermalization processes and dynamical evolution. Finally, the higher collision energies allow for the production of jets with large transverse momenta, whose leading quarks can be used as “hard penetrating probes” within the quark-gluon plasma. 
At RHIC a set of four large detectors, with complementary goals and capabilities, ensures that all experimental aspects of ultrarelativistic heavy ion collisions are optimally covered. The ability of the collider to simultaneously accelerate and collide nuclei of different sizes and energies promises a complete understanding of systematic trends as one proceeds from proton-proton via proton-nucleus to gold-gold collisions. As in solid state physics, where the knowledge of the basic interaction Lagrangian (QED) does not permit one to reliably predict many bulk properties and where the detailed understanding of the latter is usually driven by experiment, we expect that such a systematic experimental study of strongly interacting matter will eventually lead to a quantitative understanding of “bulk QCD”. We are looking forward to these far-reaching opportunities provided by RHIC and LHC.

Key references to the experimental data:

NA44 Collaboration:
H. Beker et al., “$`M_T`$-dependence of boson interferometry in heavy ion collisions at the CERN SPS”, Physical Review Letters 74 (1995) 3340-3343
I.G. Bearden et al., “Collective expansion in high-energy heavy ion collisions”, Physical Review Letters 78 (1997) 2080-2083
I.G. Bearden et al., “Strange meson enhancement in Pb-Pb collisions”, Physics Letters B 471 (1999) 6-12

NA45/CERES Collaboration:
G. Agakichiev et al., “Low-mass $`e^+e^{-}`$ pair production in 158 A GeV Pb-Au collisions at the CERN SPS, its dependence on multiplicity and transverse momentum”, Physics Letters B 422 (1998) 405-412
B. Lenkeit et al., “New results on low-mass lepton pair production in Pb-Au collisions at 158 GeV/$`c`$ per nucleon”, Nuclear Physics A 654 (1999) 627c-630c
B. Lenkeit et al., “Recent results from Pb-Au collisions at 158 GeV/$`c`$ per nucleon obtained with the CERES spectrometer”, Nuclear Physics A 661 (1999) 23c-32c

NA49 Collaboration:
T. Alber et al., “Transverse energy production in <sup>208</sup>Pb+Pb collisions at 158 GeV per nucleon”, Physical Review Letters 75 (1995) 3814-3817
H. Appelshäuser et al., “Hadronic expansion dynamics in central Pb+Pb collisions at 158 GeV per nucleon”, European Physical Journal C 2 (1998) 661-670
F. Sikler et al., “Hadron production in nuclear collisions from the NA49 experiment at 158 GeV/$`cA`$”, Nuclear Physics A 661 (1999) 45c-54c

NA50 Collaboration:
M.C. Abreu et al., “Anomalous $`J/\psi `$ suppression in Pb-Pb interactions at 158 GeV/$`c`$ per nucleon”, Physics Letters B 410 (1997) 337-343
M.C. Abreu et al., “Observation of a threshold effect in the anomalous $`J/\psi `$ suppression”, Physics Letters B 450 (1999) 456-466
M.C. Abreu et al., “Evidence for deconfinement of quarks and gluons from the $`J/\psi `$ suppression pattern measured in Pb-Pb collisions at the CERN-SPS”, CERN-EP-2000-013, submitted to Physics Letters B

NA52/NEWMASS Collaboration:
R. Klingenberg et al., “Strangelet search and antinuclei production studies in Pb+Pb collisions”, Nuclear Physics A 610 (1996) 306c-316c
G. Ambrosini et al., “Baryon and antibaryon production in Pb-Pb collisions at 158 $`A`$ GeV/$`c`$”, Physics Letters B 417 (1998) 202-210
G. Ambrosini et al., “Impact parameter dependence of $`K^\pm ,p,\overline{p},d`$ and $`\overline{d}`$ production in fixed target Pb + Pb collisions at 158 GeV per nucleon”, New Journal of Physics 1 (1999) 22.1-22.23

WA97/NA57 Collaborations:
E. Andersen et al., “Strangeness enhancement at mid-rapidity in Pb-Pb collisions at 158 A GeV/c”, Physics Letters B 449 (1999) 401-406
F. Antinori et al., “Production of strange and multistrange hadrons in nucleus-nucleus collisions at the SPS”, Nuclear Physics A 661 (1999) 130c-139c
F. Antinori et al., “Transverse mass spectra of strange and multistrange particles in Pb-Pb collisions at 158 $`A`$ GeV/$`c`$”, CERN-EP-2000-001, submitted to European Physical Journal C

WA98 Collaboration:
R. Albrecht et al., “Limits on the production of direct photons in 200 $`A`$ GeV <sup>32</sup>S+Au collisions”, Physical Review Letters 76 (1996) 3506-3509
M.M. Aggarwal et al., “Centrality dependence of neutral pion production in 158 $`A`$ GeV <sup>208</sup>Pb+<sup>208</sup>Pb collisions”, Physical Review Letters 81 (1998) 4087-4091; 84 (2000) 578-579(E)
M.M. Aggarwal et al., “Freeze-out parameters in central 158 $`A`$ GeV <sup>208</sup>Pb+<sup>208</sup>Pb collisions”, Physical Review Letters 83 (1999) 926-930
# Violation of pseudospin symmetry in nucleon-nucleus scattering: exact relations ## I Introduction The concept of pseudospin was originally introduced to explain the quasidegeneracy of spherical shell orbitals with nonrelativistic quantum numbers ($`n_r,\ell ,j=\ell +\frac{1}{2}`$) and ($`n_r-1,\ell +2,j=\ell +\frac{3}{2}`$), where $`n_r`$, $`\ell `$ and $`j`$ are the single-nucleon radial, orbital angular momentum and total angular momentum quantum numbers. This symmetry holds approximately also for deformed nuclei and even for the case of triaxiality. Only recently, Ginocchio and coworkers pointed out that the origin of the approximate pseudospin symmetry is an invariance of the Dirac Hamiltonian with $`V_V=-V_S`$ under specific $`SU(2)`$ transformations. Here, $`V_S`$ and $`V_V`$ are the scalar and vector potentials, respectively. In the non-relativistic limit this leads to a Hamiltonian which conserves pseudospin, $$\stackrel{~}{𝐬}=2\frac{𝐬\cdot 𝐩}{p^2}𝐩-𝐬,$$ (1) where $`𝐬`$ is the spin and $`𝐩`$ is the momentum operator of the nucleon. For realistic mean fields there must be at least a weak pseudospin-symmetry violation because otherwise no bound state exists. Already in 1988 Bowlin et al. investigated whether the symmetry associated with $`V_V=-V_S`$ manifests itself also in proton-nucleus scattering. They evaluated the analyzing power $`P(\theta )`$ and the spin rotation function $`Q(\theta )`$ under the assumption $`V_V=-V_S`$, where $`\theta `$ is the scattering angle. Since the experimental data deviate significantly from their prediction they concluded that the symmetry is destroyed for low-energy proton scattering; only at high energies some remnants might survive. However, recently, Ginocchio revisited this question within the scattering formalism in terms of pseudospin. 
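Equation (1) defines pseudospin as a unitary rotation of ordinary spin, so it obeys the same $`SU(2)`$ algebra. This can be checked numerically for a fixed momentum direction; the direction below is an arbitrary illustrative choice, and the snippet uses NumPy purely as a convenience:

```python
import numpy as np

# Pauli matrices; ordinary spin is s_i = sigma_i / 2
sigma = [np.array([[0, 1], [1, 0]], complex),
         np.array([[0, -1j], [1j, 0]], complex),
         np.array([[1, 0], [0, -1]], complex)]
s = [m / 2 for m in sigma]

p_hat = np.array([0.6, 0.0, 0.8])        # arbitrary unit momentum direction
s_dot_p = sum(si * pi for si, pi in zip(s, p_hat))

# Eq. (1) acting on a momentum eigenstate: s~_i = 2 (s.p^) p^_i - s_i
s_tilde = [2 * s_dot_p * p_hat[i] - s[i] for i in range(3)]

# pseudospin obeys the same SU(2) algebra: [s~_x, s~_y] = i s~_z
commutator = s_tilde[0] @ s_tilde[1] - s_tilde[1] @ s_tilde[0]
assert np.allclose(commutator, 1j * s_tilde[2])
# and carries the same Casimir: s~ . s~ = 3/4
assert np.allclose(sum(m @ m for m in s_tilde), 0.75 * np.identity(2))
```

This works because $`\stackrel{~}{𝐬}`$ is the conjugation of $`𝐬`$ by the unitary (and Hermitian) operator $`𝝈\cdot \widehat{𝐩}`$, which preserves both the commutation relations and the Casimir operator.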
Based on a first-order approximation he extracted from experimental proton-<sup>208</sup>Pb scattering data at $`E_L=800`$ MeV the pseudospin-symmetry breaking part of the scattering amplitude. In contrast to the earlier analysis, he obtained at all angles a relatively small pseudospin dependent part of the scattering amplitude, which confirms the relevance of pseudospin symmetry also for proton-nucleus scattering, at least at medium energies. In the present work we reexamine the question of pseudospin symmetry in nucleon-nucleus scattering and derive an exact relation for the nucleon-nucleus scattering amplitude in terms of scattering observables. The exact relationship for the pseudospin-symmetry violating part of the scattering amplitude differs in an essential way from the first-order expression used by Ginocchio. Using the same proton-<sup>208</sup>Pb scattering data at $`800`$ MeV, the exact relationship leads at all measured angles to a significantly increased violation of the pseudospin symmetry as compared to the first-order analysis. Nevertheless, the size of the violation remains within the limits which allow one to consider pseudospin symmetry as a relevant symmetry in nucleon-nucleus scattering. In section II we discuss briefly the basic relations between the scattering observables and the scattering amplitude of nucleon-nucleus scattering within the standard formalism. Introducing the proper transformation to a pseudospin representation, we present in section III exact relations for the pseudospin-symmetry violating part of the scattering amplitude. A first application of the exact relations is given in section IV, where we consider the example of proton-<sup>208</sup>Pb scattering data at $`E_L=800`$ MeV. Concluding remarks are given in section V. 
## II Spin dependent scattering amplitude The scattering amplitude, $`f`$, for the elastic scattering of a nucleon with momentum $`k`$ on a spin zero target is given by $$f=A(k,\theta )+B(k,\theta )𝝈\cdot \widehat{𝐧}.$$ (2) Here $`𝝈`$ is the vector formed by the Pauli matrices, $`\widehat{𝐧}`$ is the unit vector perpendicular to the scattering plane and $`\theta `$ is the scattering angle. The complex-valued functions $`A(k,\theta )`$ and $`B(k,\theta )`$ are the spin-independent and the spin-dependent parts of the scattering amplitude. Neither is fully accessible to measurement. The observables in nucleon-nucleus scattering are related to intensities and can be described in terms of the amplitudes $`A(k,\theta )`$ and $`B(k,\theta )`$. In the case of the scattering of a spin-$`\frac{1}{2}`$ particle by a spinless target the observables are the differential cross section $$\frac{d\sigma }{d\mathrm{\Omega }}(k,\theta )=|A(k,\theta )|^2+|B(k,\theta )|^2,$$ (3) the polarization $$P(k,\theta )=\frac{B(k,\theta )A^{*}(k,\theta )+B^{*}(k,\theta )A(k,\theta )}{|A(k,\theta )|^2+|B(k,\theta )|^2},$$ (4) and the spin rotation function $$Q(k,\theta )=\text{i}\frac{B(k,\theta )A^{*}(k,\theta )-B^{*}(k,\theta )A(k,\theta )}{|A(k,\theta )|^2+|B(k,\theta )|^2}.$$ (5) From Eqs. (4) and (5) follows $`P^2+Q^2\le 1`$. The extraction of the full scattering amplitude (moduli and phases of $`A(k,\theta )`$ and $`B(k,\theta )`$) from measurements is a very challenging question in quantum mechanics and intimately related to the longstanding phase problem in structure physics. Here, only a partial solution of this problem is required because our interest is limited to the ratio $`|R_s(k,\theta )|=|B(k,\theta )|/|A(k,\theta )|`$, which is a measure of the strength of the spin-dependent interaction. Combining Eqs. 
(4) and (5) provides a phase relation between the amplitudes, $$\sqrt{\frac{P-\text{i}Q}{P+\text{i}Q}}=\mathrm{exp}\text{i}(\varphi _B-\varphi _A),$$ (6) where $`\varphi _A`$ and $`\varphi _B`$ are the phases of the amplitudes $`A`$ and $`B`$, respectively. Using this result either in Eq. (4) or (5) leads to a quadratic equation for the ratio of the moduli, $$1-\frac{2}{\sqrt{P^2+Q^2}}|R_s|+|R_s|^2=0,$$ (7) which has two solutions. Using the condition that $`P=Q=0`$ implies $`|B|=0`$ one can immediately select the physical solution ($`|A|\ne 0`$), $$|R_s|=\frac{|B|}{|A|}=\frac{1-\sqrt{1-P^2-Q^2}}{\sqrt{P^2+Q^2}}=\frac{\sqrt{P^2+Q^2}}{1+\sqrt{1-P^2-Q^2}}.$$ (8) This result together with Eq. (6) yields also the ratio of the amplitudes, $$R_s=\frac{B}{A}=\frac{P-\text{i}Q}{1+\sqrt{1-P^2-Q^2}}.$$ (9) These relations are exact at all scattering angles and energies. For comparison with the recent work of Ginocchio we evaluate $`|R_s|^2`$ from Eq. (9), $$|R_s|^2=C_s\left[\left(\frac{P}{2}\right)^2+\left(\frac{Q}{2}\right)^2\right]$$ (10) with $$C_s=\frac{4}{2+2\sqrt{1-P^2-Q^2}-P^2-Q^2}\ge 1.$$ (11) The second factor (square brackets) on the right-hand side of Eq. (10) is the first-order expression used in . It is obvious from the correction factor $`C_s`$ that the first-order approximation systematically underestimates the quantity $`|R_s|^2`$. Eqs. (6) and (9) represent important pieces for the proper determination of the scattering amplitude from nucleon-nucleus scattering observables. In addition one can derive from Eq. (3) with the use of Eq. (9) the relationship, $$|A|^2=\frac{d\sigma }{d\mathrm{\Omega }}\left(1-\frac{P^2+Q^2}{2+2\sqrt{1-P^2-Q^2}}\right).$$ (12) Combining Eqs. (6), (9) and (12) one can determine the full scattering amplitude up to an overall phase. Thus we have reduced the determination of the scattering amplitude in a system with spin $`\frac{1}{2}`$ to a problem similar to that with spin zero. 
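Relations (4)-(11) can be checked numerically for arbitrary test amplitudes; the values of $`A`$ and $`B`$ below are made-up numbers with $`|A|>|B|`$, which selects the physical branch of Eq. (8). This is a consistency sketch, not part of the derivation:

```python
import math

# arbitrary test amplitudes with |A| > |B| (physical branch of Eq. (8))
A, B = 1.3 - 0.7j, 0.4 + 0.55j
N = abs(A) ** 2 + abs(B) ** 2
P = (B * A.conjugate() + B.conjugate() * A).real / N            # Eq. (4)
Q = (1j * (B * A.conjugate() - B.conjugate() * A)).real / N     # Eq. (5)

root = math.sqrt(1.0 - P ** 2 - Q ** 2)
R_s = (P - 1j * Q) / (1.0 + root)                                # Eq. (9)
assert abs(R_s - B / A) < 1e-12        # Eq. (9) recovers B/A exactly

# Eq. (10)/(11): exact |R_s|^2 versus the first-order estimate
C_s = 4.0 / (2.0 + 2.0 * root - P ** 2 - Q ** 2)                 # Eq. (11)
first_order = (P / 2.0) ** 2 + (Q / 2.0) ** 2
assert abs(abs(R_s) ** 2 - C_s * first_order) < 1e-12
assert C_s >= 1.0
```

The last assertion reflects $`C_s\ge 1`$: the first-order expression can only underestimate $`|R_s|^2`$.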
## III Violation of pseudospin symmetry The closed-form expression (9) for the spin-dependent term indicates the possibility of deriving an exact expression for the violation of the pseudospin symmetry. For this purpose the concept of pseudospin must be introduced in the formalism of nucleon-nucleus scattering. Specifically, the partial-wave expansion of the scattering amplitude must be performed in terms of the pseudo-orbital angular momentum, $`\stackrel{~}{\ell }`$ $`=`$ $`\ell +1,\text{ for }j=\ell +1/2,`$ (13) $`\stackrel{~}{\ell }`$ $`=`$ $`\ell -1,\text{ for }j=\ell -1/2.`$ (14) Accordingly, the partial-wave S-matrix elements $`S_{\ell ,j}`$ must be defined for pseudo-orbital angular momentum ($`\stackrel{~}{S}_{\ell ,j}`$) as $$\stackrel{~}{S}_{\stackrel{~}{\ell },j=\stackrel{~}{\ell }-1/2}=S_{\stackrel{~}{\ell }-1,j=\stackrel{~}{\ell }-1/2},\stackrel{~}{S}_{\stackrel{~}{\ell },j=\stackrel{~}{\ell }+1/2}=S_{\stackrel{~}{\ell }+1,j=\stackrel{~}{\ell }+1/2}.$$ (15) As shown by Ginocchio the scattering amplitudes $`\stackrel{~}{A}`$ and $`\stackrel{~}{B}`$ are related to $`A`$ and $`B`$ by a unitary transformation $$\left(\begin{array}{c}\stackrel{~}{A}\\ \stackrel{~}{B}\end{array}\right)=\left(\begin{array}{cc}\mathrm{cos}(\theta )& \text{i}\mathrm{sin}(\theta )\\ \text{i}\mathrm{sin}(\theta )& \mathrm{cos}(\theta )\end{array}\right)\left(\begin{array}{c}A\\ B\end{array}\right).$$ (16) Using the transformed amplitudes the scattering observables are then given by $$\frac{d\sigma }{d\mathrm{\Omega }}=|\stackrel{~}{A}|^2+|\stackrel{~}{B}|^2,$$ (17) $$P=\frac{\stackrel{~}{B}\stackrel{~}{A}^{*}+\stackrel{~}{B}^{*}\stackrel{~}{A}}{|\stackrel{~}{A}|^2+|\stackrel{~}{B}|^2},$$ (18) $$Q=\frac{\mathrm{sin}(2\theta )\left[|\stackrel{~}{A}|^2-|\stackrel{~}{B}|^2\right]+\text{i}\mathrm{cos}(2\theta )\left[\stackrel{~}{B}\stackrel{~}{A}^{*}-\stackrel{~}{B}^{*}\stackrel{~}{A}\right]}{|\stackrel{~}{A}|^2+|\stackrel{~}{B}|^2}.$$ (19) In the limit of pseudospin symmetry the amplitude $`\stackrel{~}{B}`$ vanishes and consequently $`P=0`$ and $`Q=\mathrm{sin}(2\theta )`$. Therefore the ratio $`R_{ps}=\stackrel{~}{B}/\stackrel{~}{A}`$ is a measure of the violation of pseudospin symmetry in nucleon-nucleus scattering. With the transformation (16) it is straightforward to express $`R_{ps}`$ in terms of $`R_s`$, $$R_{ps}=\frac{\stackrel{~}{B}}{\stackrel{~}{A}}=\frac{\text{i}\mathrm{tan}(\theta )+R_s}{1+\text{i}\mathrm{tan}(\theta )R_s}$$ (20) and $$|R_{ps}|^2=\frac{\mathrm{tan}^2(\theta )-Q\mathrm{tan}(\theta )+|R_s|^2(1-Q\mathrm{tan}(\theta ))}{1+Q\mathrm{tan}(\theta )+|R_s|^2(\mathrm{tan}^2(\theta )+Q\mathrm{tan}(\theta ))}.$$ (21) Substitution of Eq. (9) in Eq. (20) yields an exact relation for the violation of pseudospin symmetry in terms of scattering observables, $$R_{ps}=\frac{P+\text{i}\left[\left(1+\sqrt{1-P^2-Q^2}\right)\mathrm{tan}(\theta )-Q\right]}{\left[\left(1+\sqrt{1-P^2-Q^2}\right)+Q\mathrm{tan}(\theta )\right]+\text{i}P\mathrm{tan}(\theta )}.$$ (22) For comparison with the recent work of Ginocchio we evaluate $`|R_{ps}|^2`$ from Eq. (19). By straightforward algebraic manipulations one obtains $$|R_{ps}|^2=C_{ps}\left[\left(\frac{P}{2}\right)^2+\left(\frac{Q-\mathrm{sin}(2\theta )}{2\mathrm{cos}(2\theta )}\right)^2\right]$$ (23) with $$C_{ps}=\frac{\left(1+|R_{ps}|^2\right)^2}{1+|R_{ps}|^2\mathrm{tan}^2(2\theta )+\text{i}\left(R_{ps}^{*}-R_{ps}\right)\mathrm{tan}(2\theta )}.$$ (24) The factor in square brackets on the right-hand side of Eq. (23) corresponds to the first-order expression used in while $`C_{ps}`$ is a correction factor. Eqs. 
(23) and (24) are not best suited for the evaluation of $`|R_{ps}|^2`$; however, they demonstrate clearly that the actual value of the pseudospin symmetry breaking amplitude may deviate significantly from the first-order approximation. ## IV Example of proton-nucleus scattering The relations derived in sections II and III can be directly used for an analysis of proton-nucleus scattering data, where accurate measurements of the analyzing power $`P(\theta )`$ and the spin rotation function $`Q(\theta )`$ are available. Specifically, we consider analyzing power and spin rotation function measurements for proton-<sup>208</sup>Pb scattering at $`E_L=800`$ MeV and evaluate the ratios $`|R_s|^2`$ and $`|R_{ps}|^2`$. The violation of pseudospin symmetry can be read off from Fig. 1, where the ratio $`|R_{ps}|^2`$ is displayed as a function of the scattering angle. In Fig. 1 the ratios are shown only at those scattering angles where measured values of the spin rotation function are available. The corresponding values of the analyzing power have been obtained by linear interpolation of the more complete data set. The shown uncertainties result from a linear error propagation of the given experimental uncertainties in $`P(\theta )`$ and $`Q(\theta )`$. For comparison, the first-order results are also displayed. It is obvious from Fig. 1 that the first-order approximation underestimates the pseudospin symmetry breaking part of the scattering amplitude at all scattering angles, particularly at the highest available ones. Furthermore it is interesting to note that the uncertainties obtained with the use of the exact relations are consistently larger than those obtained in the first-order approximation. In Fig. 2 the corresponding results are shown for the ratio $`|R_s|^2`$ characterizing the spin dependent part of the scattering amplitude. As already seen from the exact formulation (10), the first-order approximation systematically underestimates the spin dependent part. 
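The exact relation (22) is straightforward to verify numerically: transform made-up test amplitudes with the unitary matrix of Eq. (16), compute $`P`$ and $`Q`$ from Eqs. (4) and (5), and compare with $`\stackrel{~}{B}/\stackrel{~}{A}`$. The sketch below (arbitrary angle and amplitudes, assumed for illustration only) also checks the pseudospin-symmetric limit $`P=0`$, $`Q=\mathrm{sin}(2\theta )`$:

```python
import math

theta = 0.3                              # arbitrary scattering angle (rad)
c, s, t = math.cos(theta), math.sin(theta), math.tan(theta)

A, B = 1.1 + 0.2j, 0.3 - 0.25j           # arbitrary amplitudes, |A| > |B|
A_t = c * A + 1j * s * B                 # Eq. (16): pseudospin amplitudes
B_t = 1j * s * A + c * B

N = abs(A) ** 2 + abs(B) ** 2
P = (B * A.conjugate() + B.conjugate() * A).real / N
Q = (1j * (B * A.conjugate() - B.conjugate() * A)).real / N

root = math.sqrt(1.0 - P ** 2 - Q ** 2)
R_ps = (P + 1j * ((1.0 + root) * t - Q)) / ((1.0 + root + Q * t) + 1j * P * t)  # Eq. (22)
assert abs(R_ps - B_t / A_t) < 1e-12     # Eq. (22) reproduces B~/A~

# pseudospin-symmetric limit: B = -i*tan(theta)*A makes B~ vanish,
# and then P = 0 and Q = sin(2*theta), as stated below Eq. (19)
B0 = -1j * t * A
N0 = abs(A) ** 2 + abs(B0) ** 2
P0 = (B0 * A.conjugate() + B0.conjugate() * A).real / N0
Q0 = (1j * (B0 * A.conjugate() - B0.conjugate() * A)).real / N0
assert abs(1j * s * A + c * B0) < 1e-12
assert abs(P0) < 1e-12 and abs(Q0 - math.sin(2.0 * theta)) < 1e-12
```

As in Figs. 1 and 2, the exact ratios always exceed their first-order estimates; the size of that difference is quantified in the text below.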
The difference depends only on the absolute size of $`P^2+Q^2`$ and amounts to a factor $`2`$ at $`\theta =15`$ degrees. The application of Eqs. (6), (9) and (12) to scattering data is straightforward and yields $`|A(k,\theta )|`$, $`|B(k,\theta )|`$ and $`\varphi _B(k,\theta )-\varphi _A(k,\theta )`$. Up to a common phase, therefore, the scattering amplitude can be extracted from the scattering observables. ## V Conclusions We have derived closed-form expressions for the relationship between scattering amplitude and observables in nucleon-nucleus scattering. The scattering amplitudes can be determined, up to a common phase factor, from measured differential cross section, analyzing power and spin rotation data. Hence the problem of extracting the scattering amplitudes from nucleon-nucleus scattering data becomes mathematically similar to that for spinless particles. Extending these relations we derived a closed-form expression for the ratio of the pseudospin dependent and independent amplitudes. Thus we obtain an exact measure of the violation of pseudospin symmetry. Comparison with the first-order expression exhibits significant deviations. Using the experimental proton-<sup>208</sup>Pb data at $`E_L=800`$ MeV, we find an increased violation of pseudospin symmetry in the whole available angular range as compared to the first-order results. The increment is up to $`40`$%, so that $`|R_{ps}|^2`$ reaches values of about $`0.15`$ at $`\theta =15`$ degrees. Nevertheless it remains within the limits which allow one to consider the pseudospin symmetry to be a relevant concept in nucleon-nucleus scattering at intermediate energies. The ratio of pseudospin to spin dependence is actually smaller than estimated in first order, e.g. at $`\theta =15`$ degrees $`|R_{ps}|^2/|R_s|^2\approx 1/2.4`$ instead of approximately $`1/2`$. 
This difference is a direct consequence of the remarkable spin-dependence of the scattering amplitude, which limits the reliability of the estimation of $`|R_s|^2`$ within perturbation theory. At lower energies an increased violation of pseudospin symmetry may occur. Whether this is true has to be checked by studying the energy dependence of $`|R_{ps}|^2`$ for several nuclei. ###### Acknowledgements. The authors thank Prof. Dr. G.W. Hoffmann and Prof. Dr. L. Ray for making available the experimental data in tabular form and Prof. Dr. R. Lipperheide for fruitful discussions and a careful reading of the manuscript. ## Figure Captions Figure 1 The angular dependence of the ratio $`|R_{ps}|^2`$ evaluated from proton-<sup>208</sup>Pb scattering data at $`E_L=800`$ MeV. The solid error bars are obtained from the exact formulation (21), the dashed error bars are the results of the first-order approximation. The lines connecting the data points are drawn to guide the eye. Figure 2 The angular dependence of the ratio $`|R_s|^2`$ evaluated from proton-<sup>208</sup>Pb scattering data at $`E_L=800`$ MeV. The solid error bars are obtained from the exact formulation (10), the dashed error bars are the results of the first-order approximation. The lines connecting the data points are drawn to guide the eye.
# Negative Pressure of Anisotropic Compressible Hall States: Implication to Metrology ## Abstract Electric resistances, pressure, and compressibility of anisotropic compressible states at higher Landau levels are analyzed. The Hall conductance varies continuously with filling factor and the longitudinal resistances have a huge anisotropy. These values agree with the recent experimental observations of anisotropic compressible states at the half-filled higher Landau levels. The compressibility and pressure become negative. These results imply the formation of strips of the compressible gas, which results in an extraordinary stability of the integer quantum Hall effect, that is, the Hall resistance is quantized exactly even when the longitudinal resistance does not vanish. Recently highly correlated anisotropic states have been observed around half-filled higher Landau levels of high mobility GaAs/AlGaAs hetero-structures. The longitudinal resistance along one direction tends to vanish at low temperature but that in the other direction has a large value of order k$`\mathrm{\Omega }`$. The Hall resistance is approximately proportional to the filling factor. Since one longitudinal resistance is finite, the state is compressible. It has been noted also that the current-induced breakdown and collapse of the quantum Hall effect (QHE) at $`\nu =4`$ occur through several steps, which implies that the compressible gas in the quantum Hall system has unusual properties. Anisotropic stripe states were predicted to be favored in higher Landau levels. Using the von Neumann lattice basis, the present authors found an anisotropic mean field state in the lowest Landau level to have a negative pressure and negative compressibility. Their energies at higher Landau levels were calculated recently by one of the present authors and others in the Hartree-Fock approximation. 
They have lower energies than symmetric states, but their physical properties have not been studied well, so it is not clear whether these states agree with the states found in experiments. In the present work, we study the physical properties of anisotropic mean field states around $`\nu =n+1/2`$ and $`\nu =n`$, where $`n`$ is an integer. We point out that this mean field state at $`\nu =n+1/2`$ has the properties described above and can explain the experimental observations of anisotropic states. These states have a negative pressure, negative compressibility, and a periodic density modulation in one direction. We show that, as a consequence of the negative pressure, a compressible gas strip is formed in the bulk around $`\nu =n`$ and a current flows in the strip by a new tunneling mechanism, activation from undercurrent. The current is induced in the isolated strip with a temperature dependent magnitude of activation type, and it causes a small longitudinal resistance in the system of quantized Hall resistance. This solves a longstanding puzzle of the integer QHE, namely that the Hall resistance is quantized exactly even if the system has a small finite longitudinal resistance. Collapse phenomena are shown to be understandable as well. The von Neumann lattice basis is one of the bases for degenerate Landau levels of the two-dimensional continuum space in which discrete coherent states of guiding center variables $`(X,Y)`$ are used, and it is quite useful in studying the QHE because translational invariance in two dimensions is preserved. Spatial properties of extended states and interaction effects were studied in systematic ways. We are able to express exact relations such as current conservation, equal time commutation relations, and the Ward-Takahashi identity in a manner equivalent to that of local field theory, and to connect the Hall conductance with a momentum space topological invariant. 
Exact quantization of the Hall conductance in the quantum Hall regime (QHR) in systems with disorders, interactions, and finite injected current has been proved in this basis. We use this formalism in the present paper as well. Electrons in the Landau levels are expressed with the creation and annihilation operators $`a_l^{\dagger }(𝐩)`$ and $`a_l(𝐩)`$ having Landau level index, $`l=0,1,2,\mathrm{\dots }`$, and momentum, $`𝐩`$. The momentum conjugate to the von Neumann lattice coordinates is defined in the magnetic Brillouin zone (MBZ), $`|p_i|\le \pi /a,a=\sqrt{2\pi \hbar /eB}.`$ The many body Hamiltonian $`H`$ is written in the momentum representation as $`H=H_0+H_1`$, where $`H_0`$ $`=`$ $`{\displaystyle \sum _{l=0}^{\mathrm{\infty }}}{\displaystyle \int _{MBZ}}{\displaystyle \frac{d𝐩}{(2\pi /a)^2}}E_la_l^{\dagger }(𝐩)a_l(𝐩),`$ (1) $`H_1`$ $`=`$ $`{\displaystyle \int _{𝐤\ne 0}}d𝐤\rho (𝐤){\displaystyle \frac{V(𝐤)}{2}}\rho (-𝐤).`$ (2) Here $`E_l`$ is the Landau level energy $`(\hbar eB/m)(l+1/2)`$ and $`V(𝐤)=2\pi q^2/k`$ for the Coulomb interaction, and charge neutrality is assumed. In Eq. (1), $`H_0`$ is diagonal but the charge density $`\rho (𝐤)`$ is non-diagonal with respect to $`l`$. We call this basis the energy basis. A different basis, called the current basis, in which the charge density becomes diagonal will be used later in computing current correlation functions and electric resistances. It is worthwhile to clarify the peculiar symmetry of the system described by Eq. (1). The Hamiltonian is invariant under translation in momentum space, $`𝐩\to 𝐩+𝐊`$, where $`𝐊`$ is a constant vector, which is called the $`K`$-symmetry. This symmetry emerges because the kinetic energy is quenched due to the magnetic field. A state which has a momentum dependent single particle energy violates the $`K`$-symmetry. In the present paper, we study a mean field solution which violates $`K_y`$-symmetry but preserves $`K_x`$-symmetry. The one-particle energy has $`p_y`$ dependence in this state. 
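For orientation, the lattice constant $`a=\sqrt{2\pi \hbar /eB}`$ is $`\sqrt{2\pi }`$ times the magnetic length, so each von Neumann lattice cell carries exactly one flux quantum. A quick numerical sketch (the field $`B=10`$ T is an assumed, typical quantum Hall scale, not a value taken from this paper):

```python
import math

HBAR = 1.054571817e-34      # reduced Planck constant, J s
E_CHARGE = 1.602176634e-19  # elementary charge, C

def lattice_constant(b_tesla):
    """Von Neumann lattice constant a = sqrt(2*pi*hbar/(e*B)), in metres."""
    return math.sqrt(2.0 * math.pi * HBAR / (E_CHARGE * b_tesla))

def magnetic_length(b_tesla):
    return math.sqrt(HBAR / (E_CHARGE * b_tesla))

a = lattice_constant(10.0)
# a is sqrt(2*pi) magnetic lengths: one flux quantum h/e per unit cell
assert abs(a - math.sqrt(2.0 * math.pi) * magnetic_length(10.0)) < 1e-20
print(f"a = {a * 1e9:.2f} nm at B = 10 T")   # about 20 nm
```

The few-tens-of-nanometres scale is what makes the lattice a natural basis for the density modulations discussed below.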
The compressible gas state is characterized by the following form of expectation values in the coordinate space, $$U^{(l)}(𝐗-𝐗^{\prime })\delta _{ll^{\prime }}=\langle a_{l^{\prime }}^{\dagger }(𝐗^{\prime })a_l(𝐗)\rangle ,$$ (3) where the expectation values are calculated self-consistently in the mean field approximation using $`H_1`$ and the mean field $`U^{(l)}`$. In Figs. 1 and 2, the energy per particle, pressure, and compressibility are presented with respect to the filling factor $`\nu =n+\nu ^{\prime }`$. As seen in these figures, they become negative. The density is uniform in the y-direction but is periodic in the x-direction. The present anisotropic state could be identified with the stripe structure discussed in Refs. and . We have checked that the bubble states discussed in Ref. also have negative pressure and compressibility. These properties may be common to the compressible states of the quantum Hall system. The compressible states thus obtained have a negative pressure and are different from an ordinary gas. Naively one would expect such gas states to be unstable. However, thanks to the background charge of the dopants, a stable state with a negative pressure can exist. Since the pressure is negative, the charge carriers compress themselves while the dopants do not move. Then charge neutrality is partly broken and the compressible gas states are stabilized by the Coulomb energy. The total energy becomes minimal for a suitable shape which depends on the density of the compressible gas. The bulk compressible gas states are realized around $`\nu =n+1/2`$ (called region I), where the Coulomb energy is dominant over the negative pressure and a narrow depletion region is formed at the boundary. Its width is determined by the balance between the pressure and the Coulomb force. The low density compressible gas states are realized around $`\nu =n`$ (called region II). In this case the pressure effect is enhanced relative to the Coulomb energy and a strip of compressible gas states is formed as shown in Fig. 3 (a). 
A real system has disorder, by which most electronic states are localized. Let us classify three different regions depending on the ratio of the localization length $`\xi `$ at the Fermi energy to the width $`L_p`$ of the potential probe area and the width $`L_h`$ of the Hall probe area. We assume $`L_p<L_h`$. In the region II-(i), $`\xi <L_p`$ is satisfied and localized states fill the whole system. In the region II-(ii), $`L_p<\xi <L_h`$ is satisfied and the Hall probe area is filled with localized states but the potential probe area is partly filled with a compressible gas strip. Finally in the region II-(iii), $`L_h<\xi `$ is satisfied and the whole area is partly filled with compressible gas states. In each case, if the localization length is longer than the width of the system, these localized states are regarded as extended states, which behave like compressible gas states with a negative pressure. In the regions II-(ii) and II-(iii), the strip contributes to the electric conductance if current flows through the strip. However, the strip is not connected with the source-drain area. How does the current flow through the strip? This problem has not been studied before. In these regions, extended states below the Fermi energy carry a non-dissipative current, which we call the undercurrent. See Fig. 3 (b). We show later that the undercurrent actually induces the dragged current. First we calculate the electric conductance of the bulk compressible states in the region I. It is convenient to use the current basis for computing current correlation functions. Field operators $`a_l`$ and the propagator $`S_{ll^{}}`$ are transformed from the energy basis to the current basis as, $`\stackrel{~}{a}_l(𝐩)`$ $`=`$ $`{\displaystyle \sum _{l^{}}}U_{ll^{}}(𝐩)a_l^{}(𝐩),`$ (4) $`\stackrel{~}{S}_{ll^{}}(p)`$ $`=`$ $`{\displaystyle \sum _{l_1l_2}}U_{ll_1}(𝐩)S_{l_1l_2}(p)U_{l_2l^{}}^{}(𝐩),`$ (5) where $`U(𝐩)=e^{ip_x\xi }e^{ip_y\eta }`$ and $`(\xi ,\eta )`$ are relative coordinates defined by $`(x-X,y-Y)`$. 
In the current basis, the equal time commutation relation between the charge density and the field operators is given by, $$[\rho (𝐤),\stackrel{~}{a}_l(𝐩)]=\stackrel{~}{a}_l(𝐩)\delta ^{(2)}(𝐩-𝐤).$$ (6) Hence the vertex part is given by a derivative of the inverse propagator, $`\stackrel{~}{\mathrm{\Gamma }}_\mu (p,p)=\partial _\mu \stackrel{~}{S}^{-1}(p)`$, known as the Ward-Takahashi identity. The Hall conductance is the slope of the current-current correlation function at the origin and is given by the topologically invariant expression of the propagator in the current basis as $$\sigma _{xy}=\frac{e^2}{h}\frac{1}{24\pi ^2}\int \mathrm{tr}(\stackrel{~}{S}(p)d\stackrel{~}{S}^{-1}(p))^3.$$ (7) This shows that $`\sigma _{xy}`$ is quantized exactly in the QHR, where the Fermi energy is located in the localized state region. Now the Fermi energy is in the compressible state band region and $`\sigma _{xy}`$ is not quantized. For the anisotropic states, the inverse propagator is given by $`S^{-1}(p)_{ll^{}}=\{p_0-(E_l+ϵ_l(p_y))\}\delta _{ll^{}}`$ where $`ϵ_l(p_y)`$ is the one-particle energy. $`S(p)`$ has no topological singularity and its winding number vanishes. Hence the topological property of the propagator in the current basis, $`\stackrel{~}{S}(p)`$, is determined solely by the unitary operator $`U(𝐩)`$, and the Hall conductance is written as, $`\sigma _{xy}`$ $`=`$ $`{\displaystyle \frac{e^2}{h}}{\displaystyle \frac{1}{4\pi ^2}}{\displaystyle \int 𝑑pϵ^{ij}\mathrm{tr}\left[S(p)U^{}(𝐩)\partial _iU(𝐩)U^{}(𝐩)\partial _jU(𝐩)\right]}`$ (8) $`=`$ $`{\displaystyle \frac{e^2}{h}}(n+\nu ^{}).`$ (9) To obtain the final result in the above equation, we assumed that the Landau levels are filled completely up to the $`n`$ th level and the $`(n+1)`$ th level is filled partially with filling factor $`\nu ^{}`$. The Hall conductance is proportional to the total filling factor. The longitudinal conductance in the x-direction, $`\sigma _{xx}`$, vanishes since there is no empty state in this direction. 
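For orientation (an illustrative numerical aside), each conductance quantum $`e^2/h`$ corresponds to the von Klitzing resistance; the short helper below (a hypothetical name of our own) converts a filling factor, integer as in the QHR or non-integer as in Eq. (9), into a Hall resistance:

```python
# Hall conductance sigma_xy = (e^2/h) * nu and the corresponding
# Hall resistance R_xy = h/(e^2 * nu). SI (CODATA) values.
e = 1.602176634e-19      # elementary charge, C
h = 6.62607015e-34       # Planck constant, J s

def hall_resistance(nu):
    """Hall resistance in ohms for filling factor nu > 0 (illustrative helper)."""
    return h / (e**2 * nu)

R_1 = hall_resistance(1)      # von Klitzing constant, ~25812.8 ohm
R_2 = hall_resistance(2)      # exactly half of R_1
R_frac = hall_resistance(2.3) # non-integer filling: unquantized value
```

The point of Eqs. (7)-(9) is precisely that in the QHR the measured $`R_{xy}`$ locks onto these quantized values, while in the compressible state it does not.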
If momentum is added in the x-direction, a particle must be lifted to a higher Landau level. This requires a finite energy, and hence $`\sigma _{xx}`$ vanishes. The longitudinal conductance in the y-direction, $`\sigma _{yy}`$, does not vanish. The one-particle energy depends only on $`p_y`$, and the system can be regarded as one dimensional. The one-dimensional conductance is given by the Büttiker-Landauer formula . We have thus, $$\sigma _{yy}=e^2/h,\sigma _{xx}=0.$$ (10) The Hall conductance, Eq. (9), and the longitudinal conductances, Eq. (10), agree with the experimental observations of anisotropic states around $`\nu =n+1/2(n\ge 1)`$. Next we study the low density region, region II. In the first region, II-(i), the whole area is filled with localized states, hence from the formula Eq. (7), the Hall conductance is quantized exactly. The longitudinal resistances vanish. We have $$\sigma _{xy}=(e^2/h)n,\sigma _{xx}=\sigma _{yy}=0.$$ (11) This corresponds to the standard QHR. In the regions II-(ii) and II-(iii), a compressible strip bridges one edge to the other edge. Tunneling combined with an interaction causes the dragged current in the strip. The conductance due to the tunneling mechanism can be calculated by the current-current correlation function shown in Fig. 4. The two-loop diagram is the lowest order contribution. The dragged current flows in the compressible strip at the potential probe area in the region II-(ii), and at the Hall probe area in the region II-(iii). In the region II-(ii), the Hall probe area is filled with localized states only. Hence the Hall conductance is quantized exactly. The potential probe area has a finite longitudinal resistance due to the electric current in the strip. The electric current which flows in the strip heats the strip area to a finite temperature. 
We have thus the exactly quantized Hall resistance and a small longitudinal resistance in this region as $$R_{xy}^{-1}=(e^2/h)n,R_{xx}^{-1}=(e^2/h)\epsilon .$$ (12) The small parameter $`\epsilon `$ is proportional to the activation form $`\mathrm{exp}[-\beta (\mathrm{\Delta }+mv^2/2)]`$, where $`\mathrm{\Delta }`$ is the energy gap between the Fermi energy and the lower Landau level, $`\beta `$ is the inverse temperature at the strip area, and $`v`$ is the average velocity of the undercurrent states. The additional term $`mv^2/2`$ added to $`\mathrm{\Delta }`$ comes from the Galilean boost. This region has not been taken into account in the metrology of the QHE. In the region II-(iii), the whole area is filled with compressible states. Hence from Eq. (9), the Hall conductance takes an unquantized value and the longitudinal resistance becomes finite. In this case we have $$R_{xy}^{-1}=(e^2/h)(n+\epsilon ^{}),R_{xx}^{-1}=(e^2/h)\epsilon .$$ (13) That is, the QHE collapses. $`\epsilon ^{}`$ has the same temperature dependence as $`\epsilon `$. The localization length and the mobility edge depend on the injected current in a real system. In a small current system, the localization lengths in the mobility gap are small and the system corresponds to the QHR (region II-(i)). In an intermediate current system, they become larger and the strip is formed in the potential probe area first (region II-(ii)) and in the Hall probe area second (region II-(iii)). The QHE collapses in this region. In a larger current system, the localization lengths become even larger, and the whole system is filled with extended states. The QHE breaks down in this region. This is consistent with Kawaji et al.’s recent experiments and proposal. In summary, we have shown that the anisotropic mean field states have an unquantized Hall conductance, a huge anisotropic longitudinal resistance, negative pressure, and negative compressibility. These electric properties are consistent with the recent experiments of the anisotropic compressible Hall state and of collapse phenomena. 
The negative pressure of these states does not lead to an instability but instead to the formation of a narrow strip of compressible gas states if the density is low, and to the formation of the bulk compressible gas with a depletion region if the density is around half-filling. Consequently, in a system of low density compressible gas states, the Hall resistance keeps its exactly quantized value even though the longitudinal resistance is finite. This longstanding puzzle is thus resolved by the unusual property of the compressible Hall gas, namely its negative pressure, which therefore plays an important role in the metrology of the QHE. The authors would like to thank Y. Hosotani and T. Ochiai for useful discussions. One of the present authors (K. I.) also thanks S. Kawaji and B. I. Shklovskii for fruitful discussions. This work was partially supported by the special Grant-in-Aid for Promotion of Education and Science in Hokkaido University provided by the Ministry of Education, Science, Sports, and Culture, the Grant-in-Aid for Scientific Research on Priority area (Physics of CP violation) (Grant No. 12014201), and the Grant-in-aid for International Science Research (Joint Research 10044043) from the Ministry of Education, Science, Sports and Culture, Japan.
## 1 Introduction The main difficulty facing almost any electroweak baryogenesis scenario is the requirement for a strong electroweak phase transition. This is required in order to prevent wash out of the produced baryons by the subsequent sphaleron transitions, and it is expressed in the (in)famous sphaleron bound. The bound states roughly that the jump in the order parameter (the Higgs expectation value) must be greater than the temperature of the phase transition . This requirement has already killed the Minimal Standard Model (MSM) as a candidate for electroweak baryogenesis. Consequently baryogenesis efforts have been redirected toward extensions of the Standard Model which provide a sufficiently strong first order phase transition, the most popular one being the Minimal Supersymmetric Standard Model. Such efforts are very natural, and yet they do not take account of a very basic fact: we do not have any direct observational constraints on the Universe before the nucleosynthesis epoch. In this letter, following Ref. , we explore this simple fact and discuss alternative cosmologies that predate the nucleosynthesis epoch. Let us first, however, overview some of the basic facts of electroweak scale baryogenesis. At the electroweak transition, in the presence of a chemical potential for baryon number that is biased by some CP-violating source, one quite generically gets for the baryon-to-entropy ratio $$\frac{n_B}{s}\sim \frac{\delta _{\mathrm{CP}}}{g_{}}\left(\frac{H}{T}\right)_{\mathrm{freeze}},$$ (1) where $`\delta _{\mathrm{CP}}`$ is the effective CP-violating parameter, and $`g_{}`$ is the number of relativistic degrees of freedom, which in the MSM reads $`g_{}\simeq 106.75`$. The expansion rate $`H_{\mathrm{freeze}}`$ is defined at the temperature $`T_{\mathrm{freeze}}`$ at which the sphaleron processes go out of equilibrium (freeze out). 
For our purposes they can be identified with their values at the electroweak scale, since $`T_{\mathrm{freeze}}\simeq T_{\mathrm{ew}}`$ and $`H_{\mathrm{freeze}}\simeq H_{\mathrm{ew}}`$. A derivation of equation (1) in the two Higgs doublet model can be found in the first reference of , Eq. (44). In this case the effective CP-violation parameter $`\delta _{\mathrm{CP}}\sim (Td\chi /dT)_{\mathrm{freeze}}`$, where $`d\chi /dt`$ is the chemical potential for the baryon number in the two Higgs doublet model. We have shown that quite naturally $`(Td\chi /dT)_{\mathrm{freeze}}`$ may be of order unity, and with some amount of tuning it can be as large as $`10^2`$. Recall that in the standard radiation dominated cosmology the Hubble parameter at the electroweak scale equals $$\frac{H_{\mathrm{ew}}}{T}\simeq 1.4\times 10^{-16}\left(\frac{g_{}}{107}\right)^{\frac{1}{2}}\left(\frac{T}{100\mathrm{G}\mathrm{e}\mathrm{V}}\right).$$ (2) This then implies that the observed amount of matter in the Universe, as given by the nucleosynthesis constraint $$\left(\frac{n_B}{s}\right)_{\mathrm{obs}}\simeq (4\text{--}10)\times 10^{-11}$$ (3) is inconsistent with Eqs. (1) and (2). Because of this simple result, attempts to produce baryons at the electroweak transition, when the main source for departure from equilibrium is expansion driven, have faded. ## 2 The model .. We will now show how this inconsistency can be avoided by considering the following simple model. Let us assume that the post-inflationary Universe is dominated by a scalar field $`\varphi `$. (This field may or may not be the inflaton.) We assume that $`\varphi `$ decays into particles, which then instantly thermalize. This situation is realized for the following hierarchy of couplings $$\mathrm{\Gamma }_\varphi \ll H\ll \mathrm{\Gamma }_{\mathrm{th}},$$ (4) where $`\mathrm{\Gamma }_\varphi `$ is the decay rate of $`\varphi `$, and $`\mathrm{\Gamma }_{\mathrm{th}}`$ the thermalization rate of the decaying products. 
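Before proceeding, the inconsistency between Eqs. (1)-(3) can be made explicit with a few lines of arithmetic. The following check (an illustrative aside, using only the numbers quoted above and a generously large $`\delta _{\mathrm{CP}}=1`$) reproduces Eq. (2) and the resulting shortfall:

```python
import math

M_P = 2.4e18        # reduced Planck mass, GeV
g_star = 106.75     # relativistic degrees of freedom in the MSM
T_ew = 100.0        # electroweak temperature, GeV

# Radiation-dominated expansion: H^2 = rho/(3 M_P^2), rho = (pi^2/30) g_* T^4
rho = (math.pi**2 / 30.0) * g_star * T_ew**4
H_ew = math.sqrt(rho / (3.0 * M_P**2))
H_over_T = H_ew / T_ew            # ~1.4e-16, as in Eq. (2)

# Eq. (1) with delta_CP = 1 (already a tuned, optimistic value)
nB_over_s = (1.0 / g_star) * H_over_T   # ~1.3e-18

# Falls short of the observed 4-10 x 10^-11 by roughly seven orders of magnitude
shortfall = 4e-11 / nB_over_s
```

Even with an order-one CP-violating parameter, the standard expansion rate misses the observed asymmetry by a factor of order $`10^7`$, which is the quantitative content of the statement that such attempts "have faded".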
We shall assume that $`\varphi `$ obeys the following equation of state $$p_\varphi =w_\varphi \rho _\varphi ,0\le w_\varphi \le 1.$$ (5) With this, the relevant equations of motion are simply $`{\displaystyle \frac{d\rho _\varphi }{dt}}+nH\rho _\varphi +\mathrm{\Gamma }_\varphi (\rho _\varphi -\rho _\varphi ^{\mathrm{eq}})`$ $`=`$ $`0,n=3(w_\varphi +1)`$ (6) $`{\displaystyle \frac{d\rho _r}{dt}}+4H\rho _r-\mathrm{\Gamma }_\varphi (\rho _\varphi -\rho _\varphi ^{\mathrm{eq}})`$ $`=`$ $`0,`$ (7) where $`H`$ is the Hubble parameter given by $$H^2=\frac{\rho _\varphi +\rho _r}{3M_P^2},$$ (8) $`M_P=(8\pi G)^{-1/2}\simeq 2.4\times 10^{18}`$GeV is the reduced Planck mass, and $`\rho _\varphi `$ and $`\rho _r`$ are the energy densities of the scalar field and radiation fluid, respectively. In writing Eqs. (6) and (7) we used $`\rho _\varphi -\rho _\varphi ^{\mathrm{eq}}=-(\rho _r-\rho _r^{\mathrm{eq}})`$, and $`\mathrm{\Gamma }_\varphi =\mathrm{\Gamma }_{\varphi \to \mathrm{rad}}+\mathrm{\Gamma }_{\mathrm{rad}\to \varphi }`$ is the sum of the scalar field decay rate and the (inverse) re-population rate. Provided the hierarchy (4) holds, this simple model of inflaton decay is a good description not only when perturbative decays dominate (old reheating theory), but also when the field $`\varphi `$ decays non-perturbatively via parametric resonance (modern reheating theory). This is so because the exponential enhancement in the decay rate characterizing parametric resonance is not operative when the thermalization rate of the decaying products is very large. The equilibrium densities are related as $`\rho _{\mathrm{tot}}\equiv \rho _\varphi +\rho _r=\rho _\varphi ^{\mathrm{eq}}+\rho _r^{\mathrm{eq}}`$, and $`\rho _\varphi ^{\mathrm{eq}}/g_\varphi =\rho _r^{\mathrm{eq}}/g_{}`$, where $`g_\varphi `$ is the number of degrees of freedom in $`\varphi `$. In order to keep Eq. 
(5) more general, we have left the “damping” coefficient $`n=3(w_\varphi +1)`$ unspecified, so that for example $`n=3`$ $`(n=4)`$ corresponds to a massive (massless) field, and $`n=6`$ corresponds to a scalar field dominated by its kinetic energy (kination). As discussed in Ref. , a simple realization of kination is the following exponential potential $$V(\varphi )=V_0e^{-\lambda \varphi /M_P},V_0=\frac{2}{\lambda ^2}\left(\frac{6}{\lambda ^2}-1\right)M_P^4.$$ (9) For $`\lambda ^2<6`$ there is an attractor solution of the form $$\varphi (t)=\frac{2M_P}{\lambda }\mathrm{ln}M_Pt,a\propto t^{2/\lambda ^2},$$ (10) so that in the limit when $`\lambda \to \sqrt{6}`$ one obtains kination with $$\rho _\varphi =\rho _0\left(\frac{a_0}{a}\right)^6,a\propto t^{1/3}.$$ (11) This case corresponds to $`n=6`$ ($`w_\varphi =1`$) in Eq. (6), a behaviour identical to that of a completely flat potential. In the opposite limit, when $`\lambda ^2>6`$, one gets inflation. ## 3 .. its solution .. Equations (6) and (7) can be easily solved in the limit when only a small fraction of $`\varphi `$ has decayed into radiation, $`\rho _\varphi \gg \rho _r,\rho _\varphi ^{\mathrm{eq}}`$. The result is (cf. Ref. 
and ) $`\rho _\varphi `$ $`=`$ $`\rho _\varphi ^{\mathrm{eq}}+\rho _0\left({\displaystyle \frac{a_0}{a}}\right)^ne^{-\mathrm{\Gamma }_\varphi t}`$ (12) $`\rho _r`$ $`=`$ $`\mathrm{\Gamma }_\varphi \rho _0\left({\displaystyle \frac{a_0}{a}}\right)^4{\displaystyle \int _{t_0}^t}𝑑t\left({\displaystyle \frac{a_0}{a}}\right)^{n-4}e^{-\mathrm{\Gamma }_\varphi t}.`$ (13) Since we are primarily interested in integrating this equation during the preheating epoch when $`\rho _\varphi \gg \rho _r`$, $`\mathrm{\Gamma }_\varphi t\ll 1`$, and $`a\propto t^{2/n}`$, we obtain $`\rho _r`$ $`=`$ $`{\displaystyle \frac{2}{8-n}}{\displaystyle \frac{\mathrm{\Gamma }_\varphi }{H_0}}\rho _0\left({\displaystyle \frac{t_0}{t}}-\left({\displaystyle \frac{t_0}{t}}\right)^{\frac{8}{n}}\right)\left(1+o(\mathrm{\Gamma }_\varphi t)\right)`$ (14) $`=`$ $`{\displaystyle \frac{\pi ^2}{30}}g_{}T^4,`$ where $`t_0=2/nH_0`$, $`H_0^2=\rho _0/3M_P^2`$. Note that $`\rho _r`$ (and $`T`$) grows rapidly from zero, reaching quickly (at $`t_{\mathrm{max}}\simeq t_0(8/n)^{n/(8-n)}`$) a maximum value $`\rho _{r\mathrm{max}}\simeq \rho _0(\mathrm{\Gamma }_\varphi /4H_0)(n/8)^{n/(8-n)}`$, after which $`\rho _r\propto t^{-1}`$ ($`T\propto t^{-1/4}\propto a^{-n/8}`$). This dependence continues until the reheating temperature $`T_{\mathrm{reh}}`$ is reached, when $`\varphi `$ and radiation begin to equilibrate: $`\rho _\varphi /g_\varphi \sim \rho _r/g_{}`$. Quite generically this occurs when $`\mathrm{\Gamma }_\varphi t\sim 1`$. At that moment the solutions (12) – (14) break down, and the Universe enters the radiation era. ## 4 .. and the consequences ### 4.1 On the expansion rate From Eq. (14) we infer the following expression for the expansion rate $`H\simeq {\displaystyle \frac{2}{nt}}={\displaystyle \frac{8-n}{6}}{\displaystyle \frac{\rho _r}{\mathrm{\Gamma }_\varphi M_P^2}}\propto T^4,(T_{\mathrm{max}}>T>T_{\mathrm{reh}}).`$ (15) This dependence of the expansion rate on the temperature is generic in that it does not depend on $`w_\varphi =(n-3)/3`$ in the equation of state for $`\varphi `$ (5). 
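The preheating solution (14) can be cross-checked by integrating Eqs. (6) and (7) directly. The toy sketch below (an illustrative aside, in units $`M_P=H_0=1`$, with the equilibrium terms dropped since $`\rho _\varphi ^{\mathrm{eq}}`$ is negligible in this regime) confirms the position and height of the maximum of $`\rho _r`$ for kination, $`n=6`$:

```python
import math

# Toy integration of Eqs. (6)-(7) during phi domination (equilibrium
# terms dropped), in units with M_P = 1 and H_0 = 1.
n = 6                      # kination
Gamma = 1e-3               # slow decay, Gamma << H_0
t0 = 2.0 / n               # since H = 2/(n t) during phi domination
rho_phi, rho_r = 3.0, 0.0  # rho_0 = 3 H_0^2 M_P^2, no radiation yet

def derivs(t, rp, rr):
    H = math.sqrt((rp + rr) / 3.0)            # Eq. (8)
    return (-n * H * rp - Gamma * rp,          # Eq. (6)
            -4.0 * H * rr + Gamma * rp)        # Eq. (7)

t, dt = t0, 1e-4
t_max_num, rr_peak = t0, 0.0
while t < 30 * t0:
    # classical 4th-order Runge-Kutta step
    k1 = derivs(t, rho_phi, rho_r)
    k2 = derivs(t + dt/2, rho_phi + dt/2*k1[0], rho_r + dt/2*k1[1])
    k3 = derivs(t + dt/2, rho_phi + dt/2*k2[0], rho_r + dt/2*k2[1])
    k4 = derivs(t + dt, rho_phi + dt*k3[0], rho_r + dt*k3[1])
    rho_phi += dt/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
    rho_r   += dt/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    t += dt
    if rho_r > rr_peak:
        rr_peak, t_max_num = rho_r, t

# Analytic predictions quoted after Eq. (14)
t_max_an = t0 * (8.0/n)**(n/(8.0-n))                  # ~2.37 t0
rr_max_an = 3.0 * (Gamma/4.0) * (n/8.0)**(n/(8.0-n))  # rho_0 (Gamma/4H_0)(n/8)^{n/(8-n)}
```

The numerical peak agrees with the analytic $`t_{\mathrm{max}}`$ and $`\rho _{r\mathrm{max}}`$ to better than a percent, as it should, since here $`\mathrm{\Gamma }_\varphi t\ll 1`$ and $`\rho _r\ll \rho _\varphi `$ throughout.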
This type of behaviour is precisely the desired one, since it may result in a large expansion rate at the electroweak scale. Since at the nucleosynthesis epoch the Universe is constrained to be radiation dominated, we must have $`T_{\mathrm{reh}}>T_{\mathrm{ns}}\simeq 2`$MeV. This means that, in comparison to the standard Hubble parameter at the electroweak scale (2), the Hubble parameter in Eq. (15) is enhanced as $`{\displaystyle \frac{H}{T_{\mathrm{ew}}}}\simeq {\displaystyle \frac{H_{\mathrm{ew}}}{T_{\mathrm{ew}}}}\left({\displaystyle \frac{T_{\mathrm{ew}}}{T_{\mathrm{reh}}}}\right)^2,`$ (16) so that, when the reheat temperature is tuned to be $`T_{\mathrm{reh}}\sim T_{\mathrm{ns}}`$, the expansion rate at the electroweak scale may be as much as $`(T_{\mathrm{ew}}/T_{\mathrm{ns}})^2\sim 10^{10}`$ times larger than the standard one (2). Note that with this much enhancement in the expansion rate the computed baryon number production at the electroweak scale (1) and the observed value (3) are consistent. In order to check the consistency of our model, we need to make sure that the sphaleron rate in the symmetric phase (above the electroweak scale) is still in equilibrium with the enhanced expansion rate (16). The maximum attainable rate at the electroweak scale is about $`H_{\mathrm{ew}\mathrm{max}}/T\sim 10^{-6}`$. On the other hand, the rate equation for the baryon number density $`n_B`$ is given by $$\frac{dn_B}{dt}+3Hn_B+\frac{13N_F}{4}\frac{\overline{\mathrm{\Gamma }}_s}{T^3}n_B=0$$ (17) where $`N_F=3`$ is the number of fermion families. In the symmetric phase the sphaleron rate reads $`\overline{\mathrm{\Gamma }}_\mathrm{s}\simeq (25\pm 2)\alpha _w^5T^4`$ , and $`\alpha _w=g^2/4\pi \simeq 1/29`$ is the strength of the weak coupling at the electroweak scale. This then implies that baryons are destroyed at the rate $`\mathrm{\Gamma }_{\mathrm{sph}}\sim 10^{-5}T>H_{\mathrm{ew}\mathrm{max}}\sim 10^{-6}T`$, which was an underlying assumption made in estimating the baryon-to-entropy ratio (1). 
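The equilibrium condition just invoked is simple arithmetic; the following lines (an illustrative aside, with the sphaleron prefactor taken from the quoted rate and $`T_{\mathrm{reh}}`$ pushed down to $`T_{\mathrm{ns}}`$) confirm it:

```python
alpha_w = 1.0 / 29.0           # weak coupling at the electroweak scale
N_F = 3                        # fermion families

# Baryon washout rate per unit T from Eq. (17) with Gamma_s_bar ~ 25 alpha_w^5 T^4:
Gamma_sph_over_T = (13 * N_F / 4.0) * 25 * alpha_w**5   # ~1.2e-5

# Maximum enhanced expansion rate from Eq. (16): (H/T)_std * (T_ew/T_reh)^2
H_over_T_std = 1.4e-16
T_ew, T_reh = 100.0, 2e-3      # GeV; T_reh tuned down to T_ns ~ 2 MeV
H_over_T_max = H_over_T_std * (T_ew / T_reh)**2   # a few x 10^-7, of order 10^-6

sphalerons_in_equilibrium = Gamma_sph_over_T > H_over_T_max
```

Even at the maximal enhancement the washout rate exceeds the expansion rate by roughly an order of magnitude, so the sphalerons are indeed still in equilibrium in the symmetric phase.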
In this case the sphalerons are out of equilibrium above about $`T\sim 10^3`$GeV, while other species in the plasma typically fall out of equilibrium when $`T>10^4\text{--}10^5`$GeV. An exception is the right handed electron, whose equilibration rate is of the order $`\mathrm{\Gamma }_{e_R}\sim 10^{-12}T`$, and which hence may be out of equilibrium at the electroweak scale. This may have interesting consequences worth further investigation. ### 4.2 On the baryon number dilution We have so far shown that one can produce enough baryons at the electroweak scale, but we have not taken account of the dilution of the baryon-to-entropy ratio caused by the entropy released in the decay of $`\varphi `$. This can be estimated as follows. After the sphalerons freeze out, the number of baryons per comoving volume, $`a^3n_B`$, remains constant. On the other hand the entropy per comoving volume scales as $`S_{\mathrm{com}}\equiv a^3s\propto a^3T^3\propto T^{-3(8-n)/n}`$, where we made use of $`a\propto t^{2/n}\propto T^{-8/n}`$, so that, when the entropy dilution is included, we get the following estimate for the baryon-to-entropy ratio that survives $`{\displaystyle \frac{n_B}{s}}`$ $`\sim `$ $`\left({\displaystyle \frac{n_B}{s}}\right)_{\mathrm{produced}}\left({\displaystyle \frac{T_{\mathrm{reh}}}{T_{\mathrm{ew}}}}\right)^{\frac{3(8-n)}{n}}`$ (18) $`\sim `$ $`{\displaystyle \frac{\delta _{\mathrm{CP}}}{g_{}}}{\displaystyle \frac{H_{\mathrm{ew}}}{T_{\mathrm{ew}}}}\left({\displaystyle \frac{T_{\mathrm{ew}}}{T_{\mathrm{reh}}}}\right)^{\frac{5n-24}{n}}.`$ Note that for $`n>24/5`$ ($`w_\varphi >3/5`$) one obtains a net enhancement in the baryon production in comparison to the estimate (1). In particular, when $`n=6`$ (kination) the enhancement is $`T_{\mathrm{ew}}/T_{\mathrm{reh}}`$, which can be as large as $`10^5`$, so that, in order to get a baryon production consistent with the observation (3), it is required that the effective CP-violating parameter $`\delta _{\mathrm{CP}}`$ be at least of the order $`10^2`$. 
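The exponent in the second line of Eq. (18) follows from combining the raw enhancement of power 2 in Eq. (16) with the entropy-dilution power $`3(8-n)/n`$; a short exact-arithmetic check (an aside, with a helper name of our own):

```python
from fractions import Fraction

def net_exponent(n):
    """Net power of (T_ew/T_reh) in Eq. (18): raw enhancement 2 from
    Eq. (16) minus the entropy-dilution exponent 3(8-n)/n."""
    return 2 - Fraction(3 * (8 - n), n)

# Agrees with (5n-24)/n for any damping coefficient n
checks = all(net_exponent(n) == Fraction(5*n - 24, n) for n in (3, 4, 5, 6, 7))

exp_kination = net_exponent(6)   # = 1: net enhancement by T_ew/T_reh
exp_massive = net_exponent(3)    # = -3: net suppression
# Enhancement requires net_exponent > 0, i.e. n > 24/5, or w_phi = n/3 - 1 > 3/5
```

The sign flip between a massive inflaton ($`n=3`$) and kination ($`n=6`$) is the whole reason kination is singled out below.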
This is possible to achieve with a certain amount of tuning, as explained using the example of the two Higgs doublet model in Ref. . The main difference is that in Ref. we assumed an unconventional reheating, as first discussed by Spokoiny in Ref. , while in this letter we have assumed a more standard type of reheating where the inflaton is weakly coupled to matter, so that it decays very slowly, either perturbatively or nonperturbatively via narrow parametric resonance. To sum up, we have found that baryon production is quite generically enhanced in models in which inflation is followed by a kinetic mode domination, irrespective of whether the Universe reheats by the inflaton decay or by a nonstandard Spokoiny reheating mechanism, provided reheating ends around (but not later than) the nucleosynthesis epoch. The question is of course whether our cosmological model can be made consistent with all observational constraints. A first guess would be yes, simply because we have explicitly constructed such a model in Ref. with one fewer free parameter ($`\mathrm{\Gamma }_\varphi `$). Let us nevertheless show that this is indeed the case. First we have $`\rho _{\mathrm{reh}}\sim (\mathrm{\Gamma }_\varphi M_P)^2\sim T_{\mathrm{reh}}^4>T_{\mathrm{ns}}^4\sim (\mathrm{MeV})^4`$, from which we conclude that the decay rate is tiny $$\mathrm{\Gamma }_\varphi \sim \frac{T_{\mathrm{reh}}^2}{M_P}>\frac{T_{\mathrm{ns}}^2}{M_P}\sim 10^{-12}\mathrm{eV}.$$ (19) Further, from $$\frac{H_0}{\mathrm{\Gamma }_\varphi }\sim \left(\frac{T_{\mathrm{max}}}{T_{\mathrm{reh}}}\right)^4\text{ }\stackrel{>}{}\text{ }\left(\frac{T_{\mathrm{ew}}}{T_{\mathrm{ns}}}\right)^4\sim 10^{20},$$ (20) we obtain the following constraint on the expansion rate of the Universe at the end of inflation $$H_0\sim \frac{T_{\mathrm{max}}^4}{M_PT_{\mathrm{reh}}^2}>\frac{T_{\mathrm{ew}}^4}{M_PT_{\mathrm{ns}}^2}\sim 10^{-5}\mathrm{GeV}.$$ (21) These two constraints are very mild in regard to constraining inflationary models. 
It is now easy to check that, even for the minimum decay rate $`\mathrm{\Gamma }_\varphi \sim 10^{-12}`$eV consistent with Eq. (19), the Universe reheats by the inflaton decay, and not by the Spokoiny mechanism. Indeed from $$T_{\mathrm{max}}^4\sim H_0\mathrm{\Gamma }_\varphi M_P^2\gg T_{\mathrm{Hawking}}^4\sim \left(\frac{H_0}{2\pi }\right)^4$$ (22) we infer $$\mathrm{\Gamma }_\varphi \gg \frac{H_0^3}{M_P^2}>\mathrm{\hspace{0.17em}10}^{-43}\mathrm{eV},$$ (23) so that the Spokoiny reheating mechanism becomes operational only for the minuscule decay rates for which this bound is violated. One would think that it is necessary to impose the COBE constraints on the amplitude of cosmological perturbations and their spectral index as well. This is however not the case, since they actually constrain inflationary potentials, and hence should be imposed once a particular realization of inflation is given. Since the considerations in this letter are to a large extent generic, we need not consider the COBE constraints here. Finally we point out that, for most inflationary models considered in the literature , the COBE constraints are in concordance with Eqs. (19) and (21). ### 4.3 On the sphaleron bound We now briefly discuss the relevance of our alternative cosmologies for baryogenesis at a first order electroweak phase transition. In order to avoid the sphaleron erasure the following bound has to be satisfied $$\frac{v(T_{\mathrm{ew}})}{T_{\mathrm{ew}}}>-b\mathrm{ln}\frac{H}{T_{\mathrm{ew}}},$$ (24) where $`v`$ denotes the jump in the order parameter (the Higgs expectation value) at the phase transition, and $`b`$ is a weak function of $`H`$ (or equivalently $`v`$) and the Higgs mass. For a rough estimate we may set it to a constant ($`b\simeq 0.03`$), such that for the standard expansion rate (2) one obtains $`v\text{ }\stackrel{>}{}\text{ }1.1T_{\mathrm{ew}}`$. 
With the expansion rate (15), the bound gets modified as $$\frac{v(T_{\mathrm{ew}})}{T_{\mathrm{ew}}}>-b\mathrm{ln}\frac{H_{\mathrm{ew}}}{T_{\mathrm{ew}}}-2b\mathrm{ln}\frac{T_{\mathrm{ew}}}{T_{\mathrm{reh}}}.$$ (25) Now when $`T_{\mathrm{ew}}/T_{\mathrm{reh}}\sim 10^5`$, for which in the symmetric phase $`\mathrm{\Gamma }_{\mathrm{sph}}\sim 10H_{\mathrm{max}}`$, the sphaleron bound becomes largely relaxed, yielding $`v\text{ }\stackrel{>}{}\text{ }0.4T_{\mathrm{ew}}`$. Following Refs. we have performed a more careful computation and obtained an almost identical estimate. The reader should be alert that these estimates assume the validity of the perturbative expression for the sphaleron rate in a region where it is barely trustable ($`\alpha _3\simeq 0.15`$ in figure 3 of Ref. ). To get a more precise estimate of the sphaleron bound in the weak transition regime would require a numerical treatment. To conclude, we have shown that, when $`T_{\mathrm{reh}}\text{ }\stackrel{>}{}\text{ }T_{\mathrm{ns}}`$, the sphalerons freeze out at the transition even when the transition is quite weak ($`v\sim 0.5T`$). (They remain frozen after the transition simply because the sphaleron rate drops faster than the expansion rate (15).) This of course implies that, in cosmologies with a kinetic scalar field mode domination, in many cases the sphaleron bound does not affect baryon production at a first order electroweak transition. Needless to say, this opens a new window for baryogenesis scenarios operative in models that result in a weakly first order electroweak phase transition. We must not however forget the entropy release from the decay of $`\varphi `$, which we consider next. Now following the reasoning at the beginning of section 4.2, we infer that the baryon-to-entropy ratio that survives today can be written as (cf. Eq. 
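The two numbers quoted for the bound, $`v\text{ }\stackrel{>}{}\text{ }1.1T_{\mathrm{ew}}`$ and $`v\text{ }\stackrel{>}{}\text{ }0.4T_{\mathrm{ew}}`$, follow directly from $`b\simeq 0.03`$ and Eq. (2); as an illustrative aside:

```python
import math

b = 0.03                      # rough constant value of the prefactor b
H_over_T_std = 1.4e-16        # standard expansion rate at T_ew, Eq. (2)

# Standard sphaleron bound: v/T > b ln(T_ew/H), i.e. -b ln(H/T_ew)
v_over_T_std = -b * math.log(H_over_T_std)        # ~1.1

# Relaxed bound for T_ew/T_reh ~ 10^5: subtract 2b ln(T_ew/T_reh)
ratio = 1e5
v_over_T_relaxed = v_over_T_std - 2 * b * math.log(ratio)   # ~0.4
```

Each factor of 10 in $`T_{\mathrm{ew}}/T_{\mathrm{reh}}`$ shaves $`2b\mathrm{ln}10\simeq 0.14`$ off the bound, which is why tuning $`T_{\mathrm{reh}}`$ toward $`T_{\mathrm{ns}}`$ relaxes it so dramatically.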
(18)) $$\left(\frac{n_B}{s}\right)_{\mathrm{today}}\sim \left(\frac{n_B}{s}\right)_{\mathrm{created}}\left(\frac{T_{\mathrm{ew}}}{T_{\mathrm{reh}}}\right)^{-\frac{3(8-n)}{n}}.$$ (26) Note that the dilution of the produced baryon-to-entropy ratio is large in models with conventional reheating ($`n=3,4`$), while it is relatively small in the case of kination ($`n=6`$). More explicitly, for a massive field $`\varphi `$ ($`n=3`$) the entropy dilution factor is $`(T_{\mathrm{ew}}/T_{\mathrm{reh}})^5`$, which for $`T_{\mathrm{ew}}/T_{\mathrm{reh}}\sim 10^5`$ can be as large as $`10^{25}`$. This case has been discussed in a recent preprint . For a massless field $`\varphi `$ ($`n=4`$) the situation is a little better: the dilution factor is $`(T_{\mathrm{ew}}/T_{\mathrm{reh}})^3<10^{15}`$, which may still be very large. For kination however ($`n=6`$), the dilution factor reads $`T_{\mathrm{ew}}/T_{\mathrm{reh}}<10^5`$, so that the required baryon-to-entropy ratio produced at the electroweak scale is about $`(3\text{--}10)\times 10^{-6}(T_{\mathrm{ns}}/T_{\mathrm{reh}})`$, which may be produced by many electroweak scale baryogenesis mechanisms. ## 5 Conclusions In this letter we have considered alternative cosmologies in which, after an inflationary epoch, the Universe enters a scalar field $`\varphi `$ domination epoch with an atypical equation of state, $`p_\varphi =w_\varphi \rho _\varphi `$, $`0\le w_\varphi \le 1`$. (This field may or may not be the inflaton.) We have then assumed that $`\varphi `$ decays before the nucleosynthesis epoch, so that the standard nucleosynthesis is unaffected. When the decay rate $`\mathrm{\Gamma }_\varphi `$ is chosen such that $`\varphi `$ decays in between the electroweak and nucleosynthesis scales, beneficial consequences for baryogenesis ensue, analogous to the ones discussed in Ref. . In this letter we have examined baryogenesis at the electroweak scale when the transition proceeds (a) without a phase transition, or (b) via a first order phase transition. 
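The dilution factors quoted for $`n=3,4,6`$ can be tabulated in one helper (an illustrative aside; the function name is ours):

```python
def dilution_factor(n, T_ratio=1e5):
    """Entropy dilution factor (T_ew/T_reh)^{3(8-n)/n} accumulated
    between sphaleron freeze-out and the end of reheating."""
    return T_ratio ** (3.0 * (8 - n) / n)

d_massive = dilution_factor(3)    # (10^5)^5 = 10^25 for a massive field
d_massless = dilution_factor(4)   # (10^5)^3 = 10^15 for a massless field
d_kination = dilution_factor(6)   # (10^5)^1 = 10^5 for kination
```

Kination minimizes the exponent $`3(8-n)/n`$ among the allowed equations of state ($`n\le 6`$), which is why it is the favourable case for electroweak baryogenesis.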
We have then shown that the most beneficial results in regard to electroweak baryogenesis are attained when the Universe is dominated by the kinetic mode of the scalar field (kination), for which $`p_\varphi =\rho _\varphi `$. In Case (a) we have found that the resulting enhancement in the baryon-to-entropy ratio is at most $`T_{\mathrm{ew}}/T_{\mathrm{reh}}<10^5`$ (cf. Eq. (18)). With some tuning in the CP-violating sector, this may then lead to a baryon number production consistent with the observed value (3). In Case (b) we have shown that quite generically (independently of the equation of state for $`\varphi `$) the sphaleron bound can be relaxed as indicated in Eq. (25). The price to pay is a suppression of the original baryon-to-entropy ratio produced at the electroweak transition, due to the subsequent entropy release. The model leading to a minimum entropy release is again kination ($`p_\varphi =\rho _\varphi `$), in which case the produced baryon-to-entropy ratio is diluted by $`T_{\mathrm{ew}}/T_{\mathrm{reh}}<10^5`$. With this amount of dilution many conventional baryogenesis mechanisms at a first order electroweak phase transition remain viable. ## Acknowledgements I would like to thank Massimo Giovannini and Misha Shaposhnikov for useful discussions.
# Adiabatic CDM Models and the Competition ## 1 Introduction The aim of this paper is to demonstrate the success of inflation-inspired models of structure formation. The CMB data are pointing us towards models in which the mean spatial curvature is zero, and in which the “initial” perturbations were adiabatic and nearly scale-invariant. These three properties are all predictions of the simplest models of inflation. We will discuss them below and how they influence the properties of the CMB. Since we are never able to prove a model to be true, only that it is more probable than other models, much of the demonstration of the success of inflation-inspired models is a discussion of what goes wrong with other ones. We begin with a very quick review of the basics and then move on to a brief description of current data. The subsequent discussion of adiabatic models explains what adiabatic, flat and nearly scale-invariant mean and how these properties influence the CMB power spectrum. With this discussion complete, we are then ready to see how isocurvature and defect models differ, and that they do so in ways that conflict with the data. Finally, the strong constraint on the peak location given all the data prompts a discussion about what we can learn from the peak location besides the geometry. ## 2 Preliminaries At sufficiently early times, a thermal distribution of photons kept all the atoms in the Universe ionized. Because of the strength of the Thomson cross section and the large number density of electrons, the photons were tightly coupled to the electrons (and through them to the nuclei), and therefore these components could be treated as a single fluid called the photon-baryon fluid. As the photon temperature cooled (due to the expansion of the Universe) below one Rydberg (actually well below one Rydberg due to the enormous photon-to-baryon ratio), the electrons combined with the nuclei, thereby decoupling the photons from the baryons. 
The Universe became transparent to the photons that are now the CMB. Thus when we look at the CMB, we are seeing the Universe as it was at the time of decoupling—also referred to as “last-scattering”. The temperature of the CMB is the same in all directions, to 1 part in 100,000. The most interesting statistical property of these tiny fluctuations is the angular power spectrum, $`C_l`$, which tells us how much fluctuation power there is at different angular scales, or multipole moments $`l`$ (where $`l\simeq \pi /\theta `$). Because the departures from isotropy are so small, linear perturbation theory is an excellent approximation and the angular power spectrum can be calculated for a given model with very high precision. Thus the CMB offers a very clean probe of cosmology—one where the basic physics is much better understood than is the case for galaxies or even clusters of galaxies. Throughout, $`\mathrm{\Omega }_i`$ is the mean density of component $`i`$, $`\overline{\rho }_i`$, in units of the critical density which divides negatively and positively curved models. Note that $`\mathrm{\Omega }\equiv \sum _i\mathrm{\Omega }_i=1`$ corresponds to the case of zero mean spatial curvature. ## 3 The Data The last year of the 1990’s was a very exciting one for those interested in measurements of the angular power spectrum. New results came from MSAM, PythonV, CAT, MAT, IAC, Viper, and BOOM/NA, all of which have bearing on the properties of the peak. These data make a convincing case that we have indeed observed a peak—which not only rises towards $`l=200`$ (as we have known for several years) but also falls dramatically towards $`l=400`$. Figure 1 shows the results from 1999 plus, in background shading, a fit of the power in 14 bands of $`l`$ to all the data. Many of the bands are at low enough $`l`$ that they cannot be discerned on a linear x-axis plot. 
The $`\mathrm{\Omega }=1`$ model in the figure has a mean density of non-relativistic matter, $`\mathrm{\Omega }_m=0.31`$, a cosmological constant density of $`\mathrm{\Omega }_\mathrm{\Lambda }=0.69`$, a baryon density of $`\mathrm{\Omega }_b=0.019h^{-2}`$, a Hubble constant of $`H_0=100h\mathrm{km}/\mathrm{sec}/\mathrm{Mpc}`$ with $`h=0.65`$, an optical depth to reionization of $`\tau =0.17`$ and a power spectrum power-law index of $`n=1.12`$, where $`n=1`$ is scale invariant. Knox and Page have recently characterized the peak with fits of phenomenological models to the data. They find the peak to be localized by TOCO and BOOM/NA at $`175<l_{\mathrm{peak}}<243`$ and $`151<l_{\mathrm{peak}}<259`$ respectively (both ranges 95% confidence). This location is also indicated by combining the PythonV and Viper data, as can be seen in Fig. 1, and a significant bound can also be derived by combining all other data, prior to these four data sets. In sum, a peak near $`l\simeq 200`$ is robustly detected. Combining all the data, we can also constrain its full-width at half-maximum to be between 180 and 250 at 95% confidence. ## 4 Physical Models In this section we describe three classes of physical models (adiabatic CDM, topological defects and isocurvature dark matter) and their predictions for the angular power spectrum. ### 4.1 Adiabatic CDM The simplest models of inflation lead to a post-reheat Universe with critical density (to exponential precision) and adiabatic, nearly scale-invariant fluctuations. Although inflation does not require cold dark matter, the prediction of critical density, combined with upper limits on the mean baryonic density, pushes one in that direction. Within the last few years, the observations have developed to strongly prefer the gap between $`\mathrm{\Omega }_b`$ $`(b\equiv \mathrm{baryons})`$ and unity to be filled with not just cold dark matter, but a sizeable helping of dark energy too, e.g., . Let’s examine more precisely each of these three predictions. 
#### 4.1.1 Adiabatic Adiabatic (or, equivalently, isentropic) means that there are no spatial fluctuations in the total entropy per particle of each type. That is, $`\delta (s/n_i)=0`$ for all species $`i`$. From this, we see that $`\delta n_i/n_i=\delta s/s`$ and therefore all species have the same fractional fluctuation in their number densities. For example, where there are more dark matter particles there are more photons, etc. A general perturbation is a linear combination of isocurvature and adiabatic modes. Isocurvature perturbations are arranged so that $`\delta \rho =\sum _i\delta \rho _i=0`$. The evolution of a single Fourier mode is initially a competition between the pressure of the baryon-photon fluid trying to decrease density contrasts, and gravity trying to enhance them. For adiabatic modes the gravitational term is initially dominant, increasing the amplitude of the mode, until the restorative force of the pressure gradients pushes it back. Despite the initial growth in the density contrast, the potential decays. This is because the photon pressure prevents the growth from happening quickly enough to counteract the effects of the expansion. It is this decay of the potential that leads to excitation of a cosine mode (after the initial transient) for the acoustic oscillation. For isocurvature modes, the potential is initially zero, until there is sufficient time for pressure gradients to evolve into density gradients. The initially growing potential excites a sine mode . All adiabatic modes of a given wavenumber, $`k=\sqrt{k_x^2+k_y^2+k_z^2}`$, although they have different spatial phases, have the same temporal phase, because they all start off with the same relationship between the dark matter and photons. In other words, although spatially incoherent, they are temporally coherent. This coherence is essential to the familiar Doppler peak structure of the CMB power spectrum . 
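The cosine-versus-sine distinction can be illustrated with a toy undamped oscillator for a single Fourier mode of the photon-baryon fluid. This is only a sketch (the function names are ours, and gravitational driving, baryon loading and Silk damping are all ignored), but it captures the temporal-phase statement:

```python
import math

def acoustic_mode(delta0, v0, omega, t):
    """Toy undamped oscillator delta'' + omega^2 delta = 0 for one Fourier mode,
    with omega = k * c_s; driving and damping are ignored in this sketch."""
    return delta0 * math.cos(omega * t) + (v0 / omega) * math.sin(omega * t)

def first_zero(f, t0=1e-9, t1=10.0, n=400000):
    """First sign change of f on (t0, t1), located by a uniform scan."""
    prev = f(t0)
    for i in range(1, n + 1):
        t = t0 + (t1 - t0) * i / n
        cur = f(t)
        if prev * cur <= 0:
            return t
        prev = cur
    return None

omega = 2.0
# "Adiabatic-like" start: initial density contrast, zero velocity -> pure cosine.
adiabatic = lambda t: acoustic_mode(1.0, 0.0, omega, t)
# "Isocurvature-like" start: zero initial density contrast -> pure sine.
isocurv = lambda t: acoustic_mode(0.0, 1.0, omega, t)

z_ad = first_zero(adiabatic)   # cosine: first zero at omega*t = pi/2
z_iso = first_zero(isocurv)    # sine:   first zero at omega*t = pi
quarter_period = (math.pi / 2) / omega
# The two families stay a quarter period (90 degrees) apart at all times.
```

The quarter-period offset between the two families is the 90-degree phase difference that later moves the isocurvature acoustic peaks relative to the adiabatic ones.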
Figure 2 illustrates the point by showing the spatial dependence of three different modes with varying wave numbers at five different stages of their evolution. #### 4.1.2 Flat Inflation generically produces flat Universes—ones where the mean spatial curvature is exponentially close to zero (e.g., $`e^{-100}`$ is a particularly large residual curvature). The CMB is sensitive to curvature because the translation of a linear distance on the last-scattering surface to an angular extent on the sky depends on it. One can see this by noting that in a negatively curved space the surface area of a sphere of radius $`r`$ is larger than $`4\pi r^2`$, and therefore objects of fixed size at a fixed coordinate distance appear smaller than they would in the case of zero curvature (the larger-than-$`4\pi r^2`$ sphere must be squeezed into a local $`4\pi `$ steradians). This geometrical effect shifts the CMB power spectrum peak locations by a factor of $`\mathrm{\Omega }^{-1/2}`$. Other parameters also affect the peak locations by altering the coordinate distance to the last-scattering surface and the size of features there; these are subdominant effects which will be discussed later. #### 4.1.3 Nearly Scale Invariant The power spectrum of fluctuations produced by the simplest models of inflation is well-described by a power law, $`P(k)\propto k^n`$, with $`n`$ near unity. The case $`n=1`$ is called scale-invariant because the dimensionless quantity $`k^3P(k)`$ is the same for all modes when the comparison is done at “horizon crossing” (when the mode wavelength becomes smaller than the Hubble radius). ### 4.2 Topological Defects The usual scenario for topological defects is that a phase transition in an initially homogeneous Universe gives rise to a scalar field with a spatially-varying stress-energy tensor. In most such scenarios, the scalar field configuration evolves into a network of regions which are topologically incapable of relaxing to the true ground state. 
Causality implies that these models have isocurvature initial conditions. In defect models, the temporal coherence is lost due to continual sourcing of new perturbations by the non-linearly evolving scalar field. This generically leads to one very broad peak with a maximum near $`l=400`$–$`500`$. Thus the drop in power from $`l=200`$ to $`l=400`$ is a very challenging feature of the data. One can get lower power at $`l=400`$ than at $`l=200`$ with modifications to the ionization history, but even for these models the drop probably is not fast enough. See A. Albrecht’s contribution to these proceedings. ### 4.3 Isocurvature Dark Matter Models Note that isocurvature is more general than adiabatic. Given numerous components, there are a number of different ways of maintaining the isocurvature condition, $`\sum _i\delta \rho _i=0`$. In what follows we will assume that the isocurvature condition is maintained by the dark matter compensating everything else. Isocurvature models have at least two strikes against them. First, scale-invariant models produce far too much fluctuation power on large angular scales when normalized to galaxy fluctuations at smaller scales, as has been known for over a decade . One might hope to save isocurvature models by tilting them far from scale invariant, but this fix cannot simultaneously get galaxy scales, COBE-scales and Doppler peak scales right. The second strike has to do with the location of the acoustic peaks. As mentioned above, the isocurvature oscillations are 90 degrees out of phase with the adiabatic ones. The peaks get shifted to higher $`l`$ ($`l=350`$ to $`400`$ for the first peak) . Geometrical effects could shift it back, to make it agree with the data, but this would require $`\mathrm{\Omega }>2`$, which is inconsistent with a number of observations. There are scenarios with initially isocurvature conditions that can produce CMB power spectra that look much like those in the adiabatic case. 
This can be done by adding to the adiabatic fluctuations another component which maintains the isocurvature condition, and then by giving this extra component a non-trivial stress history . These alternatives will be interesting to pursue further if improvements to the data cause troubles for the currently successful adiabatic models. Even these alternatives are flat models that require some mechanism, such as inflation, for creating the super-horizon correlations in their initial conditions. Turok has shown that even super-horizon correlations are not a necessary condition for CMB power spectra that mimic those of inflation. Although no specific model is constructed, this work demonstrates that causality alone does not preclude one from getting inflation-like power spectra without inflation. For a discussion of the physical plausibility of models that could do this, see . ## 5 Peak Location and $`\mathrm{\Omega }_\mathrm{\Lambda }`$ If we assume flatness, adiabaticity and near scale-invariance, we can then determine $`\mathrm{\Omega }_\mathrm{\Lambda }`$ from the location of the first peak . With these assumptions, the peak position just depends on the coordinate distance to the last-scattering surface divided by the sound horizon at last-scattering. How this ratio depends on $`\mathrm{\Omega }_\mathrm{\Lambda }`$ depends on what else we hold fixed. If we have high-precision CMB data over several peaks then $`w_b\equiv \mathrm{\Omega }_bh^2`$ and $`w_c\equiv \mathrm{\Omega }_ch^2`$ would be good things to keep fixed, since those are what affect the acoustic peak morphology. However, without such high precision data, $`w_b`$ and $`H_0`$ are good things to fix because we know these fairly well from other measurements. With $`w_b`$ and $`H_0`$ fixed, increasing $`\mathrm{\Omega }_\mathrm{\Lambda }`$ increases the sound horizon (because $`w_c`$ must decrease) but increases the coordinate distance to the last-scattering surface by more, and the peak moves out to higher $`l`$. 
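The geometric argument can be made quantitative with a rough numerical sketch of the acoustic scale $`l_A=\pi D/r_s`$ for flat models at fixed $`h`$ and $`w_b`$. This is our own illustrative code, not the paper's calculation: it assumes instantaneous recombination at $`z_{}=1100`$, a standard radiation density, and a simple baryon-loading formula, so the numbers are only indicative:

```python
import math

C_KM_S = 299792.458  # speed of light in km/s

def hubble(z, h, omega_m, omega_lam, omega_r):
    """H(z) in km/s/Mpc for a flat model (omega_lam = 1 - omega_m - omega_r)."""
    return 100.0 * h * math.sqrt(omega_m * (1 + z) ** 3
                                 + omega_r * (1 + z) ** 4 + omega_lam)

def acoustic_scale(omega_lam, h=0.65, w_b=0.019, z_star=1100.0):
    """Rough acoustic scale l_A = pi * D / r_s at fixed h and w_b = Omega_b h^2.
    D: comoving distance to z_star; r_s: comoving sound horizon at z_star."""
    omega_r = 4.15e-5 / h ** 2          # photons + massless neutrinos (assumption)
    omega_m = 1.0 - omega_lam - omega_r
    # comoving distance to last scattering (midpoint rule in z)
    n = 20000
    dz = z_star / n
    D = sum(C_KM_S / hubble((i + 0.5) * dz, h, omega_m, omega_lam, omega_r)
            for i in range(n)) * dz
    # sound horizon: integrate c_s / H from z_star out to very early times (log grid)
    m = 20000
    ln0, ln1 = math.log(z_star), math.log(1e8)
    dln = (ln1 - ln0) / m
    r_s = 0.0
    for i in range(m):
        z = math.exp(ln0 + (i + 0.5) * dln)
        R = 31500.0 * w_b / (1.0 + z)   # baryon loading 3 rho_b / (4 rho_gamma)
        c_s = C_KM_S / math.sqrt(3.0 * (1.0 + R))
        r_s += c_s / hubble(z, h, omega_m, omega_lam, omega_r) * z * dln
    return math.pi * D / r_s
```

Evaluating `acoustic_scale` at $`\mathrm{\Omega }_\mathrm{\Lambda }=0`$ and $`0.69`$ with everything else fixed shows the acoustic scale moving to higher $`l`$ as $`\mathrm{\Omega }_\mathrm{\Lambda }`$ grows, in line with the argument above (the first peak sits somewhat below $`l_A`$).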
With $`w_b`$ and $`w_c`$ fixed, the sound horizon stays the same but $`H_0`$ increases and the coordinate distance to the last-scattering surface drops: the peak shifts to lower $`l`$. If we take all the data, the peak location is $`l_{\mathrm{peak}}=229\pm 9`$. Thus, if we assume $`h=0.65`$, this is evidence for a positive cosmological constant. It is weak evidence because inclusion of possible systematic errors would probably widen the peak bound significantly. However, for a sample-variance dominated measurement of the first peak (specifically, $`C_l`$ from $`l=100`$ to $`l=300`$) derived from observations of 1000 square degrees of sky (comparable to the Antarctic Boomerang coverage) one can determine the peak to be at, e.g., $`l_c=220\pm 5`$. One can see from Fig. 3 that we may soon have a strong determination of non-zero $`\mathrm{\Omega }_\mathrm{\Lambda }`$ based solely on Hubble constant and CMB measurements. Note that this determination will not suffer from calibration uncertainty. ## 6 Conclusion The peak has been observed by two different instruments, and can be inferred from an independent compilation of other data sets. The properties of this peak are consistent with those of the first peak in the inflation-inspired adiabatic CDM models, and inconsistent with competing models, with the possible exception of the more complicated isocurvature models mentioned above. It is perhaps instructive that where the confrontation between theory and observation can be done with a minimum of theoretical uncertainty, the adiabatic CDM models have been highly successful. ## Acknowledgments I thank A. Albrecht, L. Page and J. Ruhl for useful conversations and D. Eisenstein for comments on the manuscript. I used CMBFAST and am supported by the DoE, NASA grant NAG5-7986, and NSF grant OPP-8920223. ## References
no-problem/0002/hep-th0002054.html
ar5iv
text
# 1 Introduction ## 1 Introduction The study of domain walls in gauged supergravity theories has attracted much attention over the past year. (For a review on an earlier work, see .) In particular, since these solutions are asymptotic to anti-de Sitter (AdS) space-times, they are of importance in the study of the renormalisation group flows of strongly coupled gauge theories in the context of the AdS/CFT correspondence. Typically these configurations are singular in the interior, and asymptotically approach the boundary of the AdS space-times. On the other hand static AdS domain walls (in $`D=5`$), which are asymptotic to AdS Cauchy horizons on either side of a non-singular wall, localise gravity on the wall (in $`D=4`$), and thus have important phenomenological implications. It is therefore of great importance to find embeddings of gravity-trapping domain walls within a fundamental theory such as a compactified string or M-theory, a proposal initiated by H. Verlinde . Of particular interest are field theoretic embeddings of such configurations in supergravity theories that arise as effective theories of string and M-theory compactifications. For example, scalar fields in the (abelian) vector supermultiplets of $`N=2`$ gauged supergravity theories provide natural candidates for realising AdS domain walls. However, as it turns out these walls are in general singular in the interior, and on either side of the wall they are asymptotic to the boundary of AdS, as found for the one-scalar case in (and proven in for the multi-scalar case). The result is, essentially, a consequence of the fact that the gauged supergravity potential for scalars in vector supermultiplets has at most one non-singular supersymmetric extremum per non-singular domain, and this extremum is a maximum. (For a special choice of parameters for the gauged supergravity scalar potential, one side of the domain wall may asymptotically approach a non-AdS (“dilatonic”) space-time .) 
Recently, we investigated domain-wall space-times supported by scalar fields belonging to massive supermultiplets. In particular, the breathing mode $`\varphi `$, which parameterises the volume of the sphere, and which is a singlet under the isometry group of the sphere, provides a consistent truncation to a single massive scalar mode. Its potential has a supersymmetric AdS minimum (and not a maximum as in the case of massless vector supermultiplets) at $`\varphi =0`$. This allows for hybrid domain-wall solutions where on one side of the wall in the transverse direction $`\rho <0`$ (in a co-moving coordinate), the space-time is asymptotic (as $`\rho \to -\infty `$) to the AdS horizon. The other side of the wall (the $`\rho >0`$ region) allows for two possibilities: * Branch I is a singular domain-wall solution with $`\rho \to 0^+`$ corresponding to a naked singularity at which $`\varphi \to +\infty `$ (zero volume of the sphere). * Branch II is a non-singular domain wall, with $`\rho >0`$ corresponding to the dilatonic wall domain, and with $`\rho \to \infty `$ reaching the sphere decompactification ($`\varphi \to -\infty `$). The higher-dimensional interpretation of this solution is a spherically compactified $`p`$-brane (in the $`D=5`$ case, a D3-brane of Type IIB string theory compactified on the five-sphere $`S^5`$) in the domain that extends from the horizon to the asymptotically flat space-time. Unfortunately, this type of domain wall cannot trap gravity . The purpose of this paper is to investigate the singular domain-wall solutions of Branch I. Owing to the naked singularity, such a solution is clearly undesirable from the classical point of view. However, quantum mechanics may be more tolerant of singularities, and we shall investigate the fluctuation spectrum in this singular background. Interestingly, the attractive gravitational potential is insufficiently singular at the naked singularity, thus rendering the spectrum bounded from below. 
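The claim that an inverse-square attraction can be “insufficiently singular” to destabilise the spectrum is the standard quantum-mechanical statement that $`-\partial _z^2+c/z^2`$ is bounded below precisely for $`c\ge -\frac{1}{4}`$. For orientation, this follows from Hardy's inequality (a textbook estimate, reproduced here rather than taken from the paper):

```latex
\int_0^\infty |\psi'(z)|^2\,dz \;\ge\; \frac{1}{4}\int_0^\infty \frac{|\psi(z)|^2}{z^2}\,dz
\qquad (\psi(0)=0),
\quad\Longrightarrow\quad
E[\psi]=\int_0^\infty\Big(|\psi'|^2+\frac{c\,|\psi|^2}{z^2}\Big)\,dz
\;\ge\;\Big(c+\frac{1}{4}\Big)\int_0^\infty\frac{|\psi|^2}{z^2}\,dz\;\ge\;0
\quad\text{for}\quad c\ge -\frac{1}{4}.
```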
In particular, the boundary condition on wave functions at the singularity allows for a continuous spectrum with positive energy, and the massless bound state is formally excluded. We also point out that a regularisation of the naked singularity, for example by modifying the metric at distances of the order of the inverse string scale, renders the gravitational potential finite and allows for precisely one (massless) bound-state of the lower-dimensional graviton. These issues are addressed in Section 2. In Section 3 we contrast the spectrum in the background of such a singular breathing-mode domain wall with those in domain-wall backgrounds supported by massless scalars of sphere reductions. These latter domain walls, which have been previously studied in the literature, also involve examples of singular solutions in the interior. However, on the AdS side they are asymptotic to the AdS boundary. In spite of the naked singularity, the fluctuation spectrum is well behaved. The spectrum of the fluctuating modes sheds light on the spectrum of strongly-coupled gauge theories via the AdS/CFT correspondence. We also elucidate the nature of the higher-dimensional embedding of this configuration in Section 4; it describes a $`p`$-brane configuration in the domain “inside the horizon.” We show that the singularity structure of the massive-scalar domain-wall solution is exactly the same as that of the massless-scalar domain wall associated with distributed $`p`$-branes with negative-tension ingredients. This suggests that the inside of the horizon of non-dilatonic $`p`$-branes such as D3-branes or M-branes should be included in the discussion, and that the resolution of these singularities may be analogous to the one proposed in for the singularity associated with negative tension. 
## 2 Singular domain wall of the breathing mode The scalar arising as the breathing mode in a Kaluza-Klein sphere reduction is massive, and in general yields a potential that allows a supersymmetric minimum with negative cosmological constant. The effective Lagrangian for the breathing-mode scalar and gravity is of the form: $$\mathcal{L}_D=eR-\frac{1}{2}e(\partial \varphi )^2-eV,$$ (1) where the potential is given by $$V=\frac{1}{2}g^2\left(\frac{1}{a_1^2}e^{a_1\varphi }-\frac{1}{a_1a_2}e^{a_2\varphi }\right).$$ (2) The (positive) constants $`a_1`$ and $`a_2`$ are given by $$a_1^2=\frac{4}{k}+\frac{2(D-1)}{D-2},\qquad a_1a_2=\frac{2(D-1)}{D-2},$$ (3) where $`k`$ is a certain positive integer. For $`D=4`$, 7 and 5, this integer takes the value $`k=1`$. These cases correspond to the $`S^7`$ and $`S^4`$ reductions of $`D=11`$ supergravity, and the $`S^5`$ reduction of type IIB supergravity, respectively. For $`D=3`$ the integer $`k`$ can be equal to 1, 2 or 3. The case $`k=1`$ has a four-dimensional origin as an $`S^1`$ Scherk-Schwarz reduction of the Freedman-Schwarz model. The cases with $`k=2`$ and $`k=3`$ correspond to $`S^3`$ and $`S^2`$ reductions of six-dimensional and five-dimensional supergravity theories. The potential has one isolated AdS minimum $`\mathrm{\Lambda }\equiv V_{\mathrm{min}}`$ at $`\varphi =0`$, tends to zero for $`\varphi \to -\infty `$ (sphere decompactification) and tends to infinity for $`\varphi \to +\infty `$ (zero volume of the sphere). The domain-wall solutions were obtained in . In , the analytic solution in terms of a co-moving coordinate frame (simply related to the conformally flat metric) was derived, and the nature of the solutions was analysed in detail. They occur in two distinct branches, associated with two disconnected space-time regions. Here, we shall just describe the (asymptotic) behaviour of the solutions. (The analytical details can be found in .) 
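The structure of the potential (2)-(3) can be checked numerically: it is stationary at $`\varphi =0`$, negative there (AdS), and convex, with the stated asymptotics on either side. A minimal sketch in our own notation, taking the $`D=5`$, $`k=1`$ case with the arbitrary choice $`g=1`$:

```python
import math

def coefficients(D, k):
    """a1, a2 from a1^2 = 4/k + 2(D-1)/(D-2) and a1*a2 = 2(D-1)/(D-2)."""
    prod = 2.0 * (D - 1) / (D - 2)
    a1 = math.sqrt(4.0 / k + prod)
    return a1, prod / a1

def potential(phi, g, a1, a2):
    """V = (g^2/2) * (e^{a1 phi}/a1^2 - e^{a2 phi}/(a1 a2))."""
    return 0.5 * g ** 2 * (math.exp(a1 * phi) / a1 ** 2
                           - math.exp(a2 * phi) / (a1 * a2))

# D = 5, k = 1: the S^5 reduction of type IIB supergravity.
a1, a2 = coefficients(5, 1)
g, eps = 1.0, 1e-4
V0 = potential(0.0, g, a1, a2)
dV = (potential(eps, g, a1, a2) - potential(-eps, g, a1, a2)) / (2 * eps)
d2V = (potential(eps, g, a1, a2) - 2 * V0 + potential(-eps, g, a1, a2)) / eps ** 2
# Stationary at phi = 0 (dV ~ 0), V0 < 0 (AdS), d2V > 0 (a true minimum);
# V -> 0 as phi -> -infinity, and V -> +infinity as phi -> +infinity.
```

The same check goes through for the other $`(D,k)`$ pairs, since $`V^{\prime }(0)=\frac{1}{2}g^2(1/a_1-1/a_1)=0`$ identically.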
Using the co-moving coordinate $`\rho `$, the metric is of the form $$ds^2=e^{2A}dx^\mu dx_\mu +d\rho ^2.$$ (4) In the first branch, the coordinate $`\rho `$ runs from $`-\infty `$ to 0, with $`e^{2A}\sim e^{2c\rho }`$ for $`\rho \to -\infty `$, and $`e^{2A}\sim |\rho |^\gamma ,\gamma =\frac{2a_2}{(D-1)a_1},`$ for $`\rho \to 0^{-}`$. (5) In the second branch, the coordinate $`\rho `$ runs from $`-\infty `$ to $`+\infty `$, with $`e^{2A}\sim e^{2c\rho }`$ for $`\rho \to -\infty `$, and $`e^{2A}\sim \rho ^\gamma ,\gamma =\frac{2a_1}{(D-1)a_2},`$ for $`\rho \to +\infty `$. (6) Thus we see that in both solutions, as $`\rho \to -\infty `$, the metric becomes AdS, with the constant $`c`$ given by $`\mathrm{\Lambda }=-(D-1)(D-2)c^2`$. (Note that the Ricci tensor approaches $`R_{\mu \nu }=-(D-1)c^2g_{\mu \nu }`$.) Clearly, the asymptotically AdS region $`\rho \to -\infty `$ corresponds to the AdS Cauchy horizon. In the second branch, the solution is free of singularities, whilst in the first branch there is a naked singularity at $`\rho =0`$. We shall concentrate on this first branch, and study the singular domain wall. Close to the singularity point at $`\rho =0`$, the values of the $`\gamma `$ coefficient appearing in (5) for the various cases are summarised in Table 1. | | $`D=3`$ | $`D=4`$ | $`D=5`$ | $`D=7`$ | | --- | --- | --- | --- | --- | | $`k=1`$ | $`\frac{1}{2}`$ | $`\frac{2}{7}`$ | $`\frac{1}{5}`$ | $`\frac{1}{8}`$ | | $`k=2`$ | $`\frac{2}{3}`$ | | | | | $`k=3`$ | $`\frac{3}{4}`$ | | | | Table 1: The values of the coefficient $`\gamma `$ in (5) for the singular domain-wall solutions with the massive breathing mode, in various dimensions $`D`$. ### 2.1 Spectrum of fluctuating modes It is of interest to examine the quantum fluctuations around the backgrounds of the Branch-I solutions. In particular, the spectrum may suffer from pathologies due to the singular nature of the metric. 
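Before turning to the fluctuation equation, the entries of Table 1 above can be reproduced exactly with rational arithmetic; the same data fix the coefficient of the attractive $`1/z^2`$ tail of the potential near the singularity, which one can check always stays above $`-\frac{1}{4}`$. The closed forms below are our reading of the solution (with $`\gamma =2a_2/((D-1)a_1)`$, equivalently $`4/((D-2)a_1^2)`$), chosen because they reproduce the table:

```python
from fractions import Fraction

def gamma_branch1(D, k):
    """Branch-I exponent gamma = 2 a2 / ((D-1) a1), using
    a1^2 = 4/k + 2(D-1)/(D-2) and a1*a2 = 2(D-1)/(D-2).
    Since a2/a1 = (a1 a2)/a1^2 is rational, gamma is exactly computable."""
    prod = Fraction(2 * (D - 1), D - 2)        # a1 * a2
    a1_sq = Fraction(4, k) + prod              # a1^2
    return Fraction(2, D - 1) * prod / a1_sq   # 2/(D-1) * (a2/a1)

def schrodinger_coefficient(D, k):
    """Coefficient of 1/z^2 near the singularity in the conformal frame:
    with e^{2A} ~ |z|^t, t = 2*gamma/(2-gamma), the Schrodinger potential
    (D-2)/2 A'' + (D-2)^2/4 (A')^2 gives beta*(beta-1), beta = (D-2)*t/4."""
    gam = gamma_branch1(D, k)
    t = 2 * gam / (2 - gam)
    beta = Fraction(D - 2) * t / 4
    return beta * (beta - 1)

TABLE1 = {(3, 1): Fraction(1, 2), (4, 1): Fraction(2, 7), (5, 1): Fraction(1, 5),
          (7, 1): Fraction(1, 8), (3, 2): Fraction(2, 3), (3, 3): Fraction(3, 4)}
```

For every entry the $`1/z^2`$ coefficient works out to $`-\frac{1}{4}+\frac{1}{(k+2)^2}`$, independently of $`D`$.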
The fluctuations of the $`D`$-dimensional graviton (in an appropriate gauge) are described by a minimally-coupled scalar field in this gravitational background . The spectrum of these fluctuating modes in turn elucidates the nature of the modes in $`(D-1)`$ dimensions, and in particular the possibility of trapping a $`(D-1)`$-dimensional massless graviton at such a domain wall. (For a detailed derivation of the equation for the fluctuating modes in arbitrary dimensions, see ). The minimally-coupled scalar field $`\mathrm{\Phi }`$ obeys the wave equation $$\partial _\mu (\sqrt{-g}g^{\mu \nu }\partial _\nu \mathrm{\Phi })=0.$$ (7) We make the Ansatz $`\mathrm{\Phi }=e^{ipx}\chi (z)`$, where $`m^2=-p^2`$ determines the mass of the fluctuating mode. It is helpful to cast the wave equation into the Schrödinger form, which can be done by first writing the metric in a manifestly conformally-flat frame, as $$ds^2=e^{2A(z)}(dx^\mu dx_\mu +dz^2),$$ (8) by means of an appropriate coordinate transformation. For the Branch-I solutions that we are interested in, the coordinate $`z`$ runs from $`-\infty `$ to 0, and $`A(z)`$ has the following asymptotic behaviour: $`e^{2A}\sim \frac{1}{c^2z^2}`$ for $`z\to -\infty `$, and $`e^{2A}\sim |z|^{\stackrel{~}{\gamma }},\stackrel{~}{\gamma }=\frac{2\gamma }{2-\gamma },`$ for $`z\to 0^{-}`$. (9) Making the field redefinition $`\chi =e^{-(D-2)A/2}\psi `$, the wave equation assumes the form $$(-\partial _z^2+V)\psi =m^2\psi ,$$ (10) with the Schrödinger potential given by $$V=\frac{D-2}{2}A^{\prime \prime }+\frac{(D-2)^2}{4}(A^{\prime })^2.$$ (11) The asymptotic behaviour of the potential is given by $`V\to \frac{D(D-2)}{4z^2}`$ for $`z\to -\infty `$, and $`V\to \frac{c}{z^2}`$ for $`z\to 0^{-}`$. (12) Thus we see that the potential near the singularity approaches a negative infinity, and there the value of the negative constant $`c`$ is important. 
It is given by $$c=-\frac{1}{4}+\frac{1}{(k+2)^2}>-\frac{1}{4},$$ (13) which is independent of the dimension $`D`$. The full form of the Schrödinger potential $`V`$ is sketched in Figure 1. Since we have $`c>-\frac{1}{4}`$, the spectrum is bounded from below. Namely, the boundary condition at $`\rho =0^{-}`$ is $`\mathrm{\Phi }=0`$, thus disallowing solutions with $`\mathrm{\Phi }\ne 0`$ at $`\rho =0^{-}`$. As a consequence, the spectrum is continuous, with only positive energies occurring. Furthermore, the $`m^2=0`$ state has to be excluded, since it corresponds to the solution $`\mathrm{\Phi }=`$ constant, which does not vanish at $`\rho =0`$. The “bump” resulting from the AdS space-time causes the wave functions of the continuous spectrum to be suppressed in the interior region of the potential (close to the singularity). Thus although the background has a naked singularity, the spectrum for minimally-coupled scalar fields is well behaved. It does not, however, seem to be able to trap the massless $`(D-1)`$-dimensional graviton. The Schrödinger potential for the branch-II solution is quite different. It runs smoothly from $`z=-\infty `$ to $`z=+\infty `$, vanishing at both ends, with a single maximum at a certain finite value of $`z`$. ### 2.2 Regularising the metric near the naked singularity It is on one hand encouraging that in spite of the naked singularity the spectrum is well behaved, but on the other hand it is disappointing that the massless mode is not bound (and is eliminated by the boundary condition at the singularity). However, this might just be an artefact of working only at the level of the effective supergravity theory. Thus it might be that string-induced corrections could “regulate” the metric near the naked singularity. We shall show in the next section that the singularity structures of these massive-scalar domain-wall solutions are identical to those of the massless-scalar domain walls. 
The latter can be viewed in the higher dimension as continuous distributions of D3-branes or M-branes that include some negative-tension contributions. A resolution of singularities associated with negative-tension states was proposed in . It is not inconceivable that a similar resolution can be applied to our cases, since the singularity structure is the same. As a consequence, the negative infinity of the Schrödinger potential could be cut off at distances $`z\sim M_{\mathrm{string}}^{-1}`$ and thus in fact allow a zero-mass bound state after all. In the absence of a direct way of learning about these effects from string theory, here we present a model where we modify (i.e. regulate) the metric near the singularity in terms of a plausible, albeit somewhat ad-hoc, correction of order $`M_{\mathrm{string}}^{-1}`$. This does at least provide an indication of the sort of modifications that can be expected once stringy corrections are taken into account. Accordingly, near the singularity as $`z\to 0^{-}`$, and beyond $`z>0`$, we shall modify the metric $`A(z)`$ so that it takes the form $`e^{2A}=(|z|+M^{-1})^{\stackrel{~}{\gamma }},`$ for $`z\in (-M^{-1},0^{-}),`$ (14) $`e^{2A}=e^{-M^{\prime }z}M^{-\stackrel{~}{\gamma }},`$ for $`z>0,`$ (15) where $`M^{\prime }`$ and $`M`$ are of the order of $`M_{\mathrm{string}}`$. The positive coefficients $`\stackrel{~}{\gamma }`$ for the various examples are given by (9). Clearly, taking $`M,M^{\prime }\to \infty `$ corresponds to the classical solution of the previous subsection. (The metric is continuous at $`z=0`$, but its higher derivatives are not.) Calculating the Schrödinger potential, we find that it now takes the following modified form: $`V=\frac{c}{(|z|+M^{-1})^2},`$ for $`z\in (-M^{-1},0^{-}),`$ (16) $`V=\frac{1}{8}D(D-2)M^2,`$ for $`z>0.`$ (17) Thus the negative infinity has been cut off with a step function, at distances $`|z|\sim M^{-1}`$. (See the sketch of the modified potential in Figure 2). 
After the regularisation of the metric, the potential has been rendered regular everywhere, and the Schrödinger equation can be written as a supersymmetric quantum mechanical system, which allows precisely one massless state (see for example ): $`\psi =e^{(D-2)A(z)/2}`$. From the asymptotic behavior of the metric one sees that in the “regulated” domain $`z\ge 0`$ the wave function falls off exponentially fast, with a decay length of order $`M^{-1}`$. On the other hand the fast fall off in the AdS regime $`z\to -\infty `$ renders the zero-mode normalisable, and thus we have a massless bound state trapped on the wall. ## 3 Singular domain walls with “massless” scalars Recently a large class of AdS domain-wall solutions, associated with diagonal symmetric potentials of lower-dimensional gauged supergravities, was obtained . These solutions generally interpolate between a boundary AdS and a naked singularity, and thus it is of interest to revisit the properties of the fluctuation spectra here, and to contrast them with those of the domain walls supported by the massive breathing mode. The metrics for the solutions supported by the massless scalars are given by : $`ds_D^2`$ $`=`$ $`(gr)^{\frac{4}{D-3}}\Big(\prod _iH_i\Big)^{\frac{1}{2}-\frac{2}{N}}dx^\mu dx_\mu +\Big(\prod _iH_i\Big)^{-\frac{2}{N}}\frac{dr^2}{g^2r^2},`$ $`H_i`$ $`=`$ $`1+\frac{\ell _i^2}{r^2},N=\frac{4(D-2)}{D-3},`$ (18) where $`D=4,5`$ and 7, and for these dimensions we have $`N=8`$, 6 and 5 respectively. The coordinate $`r`$ runs from zero to infinity. At $`r=\infty `$ we have $`H_i=1`$, and the metric describes AdS space-time in horospherical coordinates. Note that the AdS space-time is a maximum of the scalar potential. As $`r`$ approaches zero, the metric behaviour depends on the number of non-vanishing constants $`\ell _i`$. 
If $`n<N`$ of them are non-vanishing, the metric approaches $`ds_D^2`$ $`\sim `$ $`\rho ^\gamma dx^\mu dx_\mu +d\rho ^2,`$ $`\gamma `$ $`=`$ $`\frac{2(N-n)}{n(D-3)},\rho \propto r^{\frac{2n}{N}}.`$ (19) The values of the constant $`\gamma `$ for the various AdS domain walls are summarised in Table 2. | | $`D=4`$ | $`D=5`$ | $`D=7`$ | | --- | --- | --- | --- | | $`n=1`$ | 14 | 5 | 2 | | $`n=2`$ | 6 | 2 | $`\frac{3}{4}`$ | | $`n=3`$ | $`\frac{10}{3}`$ | 1 | $`\frac{1}{3}`$ | | $`n=4`$ | 2 | $`\frac{1}{2}`$ | $`\frac{1}{8}`$ | | $`n=5`$ | $`\frac{6}{5}`$ | $`\frac{1}{5}`$ | | | $`n=6`$ | $`\frac{2}{3}`$ | | | | $`n=7`$ | $`\frac{2}{7}`$ | | | Table 2: The values of $`\gamma `$ coefficients for the various domain walls supported by scalars of massless supermultiplets. For all the cases, the metric has a power-law curvature singularity ($`R\sim 1/\rho ^2`$) at $`\rho =0`$. For $`\gamma \ge 2`$, the singularity is marginal, in the sense that $`\rho =0`$ is also an horizon, whilst for $`\gamma <2`$ the singularity at $`\rho =0`$ is naked.<sup>1</sup> There is an event horizon at $`\rho =0`$ if geodesics originating at some finite and non-zero value of $`\rho `$ take an infinite coordinate time to reach $`\rho =0`$. Thus there is an horizon when $`\gamma \ge 2`$. These solutions, when oxidised back to $`D=11`$ or $`D=10`$, become ellipsoidal distributions of M-branes or D3-branes. One can in general argue that the naked singularities of these lower-dimensional solutions are therefore artefacts of the dimensional reduction. However, in the case of the solutions with $`n=N-1`$, the distributions involve negative-tension distributions of the M-branes or D3-branes, which clearly also have naked singularities in the higher dimension. 
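Table 2 and the marginal-versus-naked criterion can be checked mechanically; a small rational-arithmetic sketch of $`\gamma =2(N-n)/(n(D-3))`$ with $`N=4(D-2)/(D-3)`$ (our own helper names):

```python
from fractions import Fraction

def gamma_massless(D, n):
    """gamma = 2(N - n) / (n (D - 3)) with N = 4(D - 2)/(D - 3);
    N is an integer (8, 6, 5) for D = 4, 5, 7 respectively."""
    N = Fraction(4 * (D - 2), D - 3)
    return 2 * (N - n) / (n * (D - 3))

def singularity_type(D, n):
    """gamma >= 2: marginal (rho = 0 is also an horizon); gamma < 2: naked."""
    return "marginal" if gamma_massless(D, n) >= 2 else "naked"
```

For instance, $`D=5`$ gives $`\gamma =5,2,1,\frac{1}{2},\frac{1}{5}`$ for $`n=1,\mathrm{},5`$, with only $`n=1,2`$ marginally clothed.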
It is interesting to note that for the cases associated with negative-tension distributions, the values of $`\gamma `$ for $`D=4`$, 5 and 7 are precisely the same as the ones for the massive-scalar breathing mode potentials given in Table 1. It is therefore worthwhile to investigate whether quantum fluctuations around these backgrounds suffer from pathologies associated with the naked singularities. The minimally-coupled scalar $`\mathrm{\Phi }`$ in these asymptotically AdS geometries is of special interest (in the dual gauge theory it corresponds to the operator that couples to the kinetic energy of the gauge field strength $`F^2`$). The spectrum for the wave equations of these backgrounds was studied in detail in various publications, and no pathologies in the spectrum were encountered in any of these cases . It is straightforward to show that the wave equation near the singularity $`\rho =0`$ is then of the form $`\left(-\partial _z^2+{\displaystyle \frac{C_n}{(z-z_{*})^2}}\right)\psi =m^2\psi ,`$ $`C_n=-\frac{1}{4}+{\displaystyle \frac{(N-n-2)^2}{(N-n-4)^2}}.`$ (20) We see that the coefficients always satisfy the bound $`C_n\ge -\frac{1}{4}`$, which is essential in order for the energies of all the states to be bounded below. It is of interest to investigate the relation between the structure of the spectrum and the nature of the curvature singularity of the background. It has been shown that the spectrum is discrete in the cases $`N-n=1`$, 2 or 3. For all of these values, the background suffers from a naked singularity. In fact, we can show that the spectrum is not only discrete, but also positive definite. This is because the coordinate $`r`$ runs from $`r=0`$ to $`r=\mathrm{\infty }`$, and hence the wave function $`\mathrm{\Phi }`$ has to satisfy the boundary conditions $`\mathrm{\Phi }(0)=0`$ and $`\mathrm{\Phi }(\mathrm{\infty })`$ finite. It is straightforward to see that solutions with non-positive $`m^2`$ do not satisfy this condition. In particular, the massless solution, i.e. 
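The coefficient in (20) can be cross-checked against the near-singularity metric $`\rho ^\gamma dx^\mu dx_\mu +d\rho ^2`$ of (19). Passing to the conformal coordinate $`z`$ and assuming the standard fluctuation potential $`V=W^{\mathrm{\prime }2}+W^{\mathrm{\prime }\mathrm{\prime }}`$ with $`W=(D-2)A/2`$ (the same supersymmetric form quoted in section 2 — an assumption on our part), one finds $`C=-\frac{1}{4}+\frac{1}{4}\left((D-2)\frac{\gamma }{2-\gamma }-1\right)^2`$. The sketch below verifies that this agrees with the closed form of (20) for every entry of Table 2 with $`N-n\ne 4`$ (the marginal case $`\gamma =2`$):

```python
from fractions import Fraction

def C_schrodinger(D, n):
    # Near rho = 0 the metric is rho^gamma dx^2 + d rho^2 (eq. 19).  In the
    # conformal coordinate z this yields V = C/(z - z_*)^2 with
    #   C = -1/4 + ((D-2) b - 1)^2 / 4,   b = gamma/(2 - gamma),
    # assuming the fluctuation potential V = W'^2 + W'' with W = (D-2)A/2.
    N = Fraction(4 * (D - 2), D - 3)
    g = 2 * (N - n) / (n * (D - 3))
    b = g / (2 - g)
    return Fraction(-1, 4) + ((D - 2) * b - 1) ** 2 / 4

def C_paper(D, n):
    # the closed form quoted in eq. (20)
    N = Fraction(4 * (D - 2), D - 3)
    return Fraction(-1, 4) + (N - n - 2) ** 2 / (N - n - 4) ** 2

for D in (4, 5, 7):
    N = 4 * (D - 2) // (D - 3)
    for n in range(1, N):
        if N - n != 4:           # gamma = 2 is the marginal case
            assert C_schrodinger(D, n) == C_paper(D, n)
            assert C_schrodinger(D, n) >= Fraction(-1, 4)
print("all table entries agree")
```

In particular the bound $`C_n\ge -\frac{1}{4}`$ is automatic, since $`C_n+\frac{1}{4}`$ is a perfect square.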
$`\mathrm{\Phi }=1`$, is excluded by the boundary conditions, since it does not vanish at $`r=0`$. Indeed, the case of $`D=5`$ and $`n=4`$ can be solved exactly, and the spectrum can be seen to comprise only positive values of $`m^2`$. For $`N-n\ge 4`$, the singularity is marginally clothed by an event horizon, and the spectrum is continuous (with a mass gap when $`N-n=4`$). As we discussed earlier, the coordinate $`\rho `$ terminates at $`\rho =0`$ in all the cases, owing to the singularity, and it follows that the zero-mass solution has to be excluded from the spectrum since $`\mathrm{\Phi }(0)`$ must vanish. The absence of the $`m^2=0`$ states is consistent from the AdS/CFT viewpoint, since one does not expect gravity to be localised on the boundary of the AdS. Even in the case of a metric that is regulated at $`\rho =0`$, the zero-mass mode is excluded; this mode is of the form $`\psi =e^{(D-2)A/2}`$ (as obtained from the supersymmetric quantum mechanical analysis). However, this mode violates the boundary condition that $`\psi =0`$ at the boundary of AdS. ## 4 Higher dimensional interpretation The higher-dimensional interpretations of the domain-wall solutions supported by the massive breathing modes were given in . They correspond to isotropic non-dilatonic branes in the higher dimensions, e.g., M-branes, D3-branes and self-dual strings, etc. It is straightforward to see that the Branch-II solution corresponds to the region of the $`p`$-brane that interpolates between Minkowski space-time and the AdS throat. The Branch-I solution, on the other hand, corresponds to the region between the singularity (at zero volume of the sphere) and the horizon, as we shall demonstrate below. It was shown in that the D3-brane and M5-brane admit maximal analytic extensions that do not have any singularities. 
It is of interest therefore to examine the oxidation of the massive-scalar domain wall to ten or eleven dimensions in more detail, which we do by retracing the steps of Kaluza-Klein reduction on the sphere. After doing this, the metric becomes $`ds_{\widehat{D}}^2`$ $`=`$ $`(\epsilon H)^{2/(\widehat{D}-d-1)}dx^\mu dx_\mu +H^{-2}d\rho ^2+\rho ^2d\mathrm{\Omega }_d^2,`$ $`H`$ $`=`$ $`1-{\displaystyle \frac{Q}{\rho ^{d-1}}},`$ (21) in terms of an appropriately-defined Schwarzschild-type coordinate $`\rho `$, where the constant $`\epsilon `$ is $`-1`$ for the Branch-I solution, and $`+1`$ for the Branch-II solution. ($`\widehat{D}`$ is the oxidation endpoint dimension.) The Branch-I solution therefore provides a novel extension of the $`p`$-brane solution into the interior of the horizon. As was discussed in , when $`\epsilon =+1`$ the exterior region of a metric such as (21) (i.e. the region $`\rho >Q^{1/(d-1)}`$ outside the horizon) can be smoothly extrapolated through the horizon at $`\rho =Q^{1/(d-1)}`$ and out into another asymptotic region, thereby avoiding the curvature singularity at $`\rho =0`$ altogether, provided that $`\widehat{D}-d-1`$ is even. This can be seen by defining a new radial coordinate $`w`$, $$w=H^{\frac{1}{\widehat{D}-d-1}},$$ (22) in terms of which the metric (21) becomes $$ds_{\widehat{D}}^2=w^2dx^\mu dx_\mu +\kappa ^2\left(1-w^{\widehat{D}-d-1}\right)^{-\frac{2d}{d-1}}\frac{dw^2}{w^2}+Q^{\frac{2}{d-1}}\left(1-w^{\widehat{D}-d-1}\right)^{-\frac{2}{d-1}}d\mathrm{\Omega }_d^2,$$ (23) where $`\kappa ^2=(\widehat{D}-d-1)^2Q^{\frac{2}{d-1}}/(d-1)^2`$. Since $`\rho `$ is an analytic function of $`w`$ on the horizon at $`w=0`$, one can analytically extend the metric to negative values of $`w`$, and so it is regular on the horizon. Furthermore, when $`\widehat{D}-d-1`$ is even, the metric is invariant under $`w\to -w`$, and so the extension to negative $`w`$ is isometric to the original region with positive $`w`$ . 
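The equivalence of (21) and (23) under the substitution (22) can be spot-checked numerically. The sketch below does this for the D3-brane values $`\widehat{D}=10`$, $`d=5`$ (so $`\widehat{D}-d-1=4`$), comparing the angular and radial metric components computed in the two coordinate systems:

```python
# spot-check (21) vs (23): D3-brane values hatD = 10, d = 5, so k = hatD-d-1 = 4
Q, d, k = 1.0, 5, 4
kappa2 = k ** 2 * Q ** (2.0 / (d - 1)) / (d - 1) ** 2

def components(rho, h=1e-6):
    H = 1.0 - Q / rho ** (d - 1)
    w = H ** (1.0 / k)
    # angular: rho^2  vs  Q^{2/(d-1)} (1 - w^k)^{-2/(d-1)}
    ang = Q ** (2.0 / (d - 1)) * (1.0 - w ** k) ** (-2.0 / (d - 1))
    # radial: H^{-2} drho^2  vs  kappa^2 (1 - w^k)^{-2d/(d-1)} dw^2 / w^2
    dwdrho = ((1.0 - Q / (rho + h) ** (d - 1)) ** (1.0 / k) - w) / h
    rad = kappa2 * (1.0 - w ** k) ** (-2.0 * d / (d - 1)) * dwdrho ** 2 / w ** 2
    return abs(ang - rho ** 2), abs(rad - H ** (-2.0))

for rho in (1.3, 1.7, 2.5, 4.0):
    a_err, r_err = components(rho)
    assert a_err < 1e-9 and r_err < 1e-3
print("metric components of (21) and (23) agree")
```

The small residual in the radial component is entirely finite-difference error in $`dw/d\rho `$; the identification is exact.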
Thus for the $`D=7`$ Branch-II domain wall oxidised on $`S^4`$ to the M5-brane in $`\widehat{D}=11`$ (for which we have $`\widehat{D}-d-1=11-4-1=6`$), and the $`D=5`$ Branch-II domain wall oxidised on $`S^5`$ to the D3-brane in $`\widehat{D}=10`$ (for which $`\widehat{D}-d-1=10-5-1=4`$), the metrics are completely non-singular. On the other hand the $`D=4`$ Branch-II domain wall oxidises on $`S^7`$ to give an M2-brane in $`\widehat{D}=11`$, for which the singularity cannot be evaded by the $`w\to -w`$ reflection, since $`\widehat{D}-d-1=11-7-1=3`$. In all cases the Branch-I solutions oxidise to the interior regions of the M-branes or D3-brane, and so the singularities remain in the higher dimension. The fact that the Branch-I solutions map into the interior regions of the higher-dimensional $`p`$-branes demonstrates that these interior regions do have a rôle to play, even in those cases where they are excised in the maximal analytic extension. From the lower-dimensional point of view, these naked singularities are no worse than the ones occurring in the domain-wall solutions associated with the Coulomb branches of the corresponding superconformal field theories on the AdS boundaries, as observed in section 3. The cases $`\gamma =\frac{2}{7}`$, $`\frac{1}{5}`$ and $`\frac{1}{8}`$ can occur both from the massless-scalar potential and from the massive-scalar potential. For each dimension, these massless-scalar and massive-scalar solutions have in common that their higher-dimensional origins both involve naked singularities. In the massless-scalar case, the singularity is due to the negative tension of the distributed branes; in the massive-scalar case, the singularity is the one that is inside the horizon of the non-dilatonic $`p`$-brane. However in spite of the singularities, the minimally-coupled scalar field spectra are all well-behaved. 
This leads to an interesting question as to whether the interior region should be included in the discussion, even in the cases such as the M5-brane and D3-brane where it is normally excluded by making the maximal analytic extension. A resolution of the singularity arising from a negative tension was proposed in . A similar resolution may be applicable for the massive-scalar domain-wall solution too, since the singularity structure is the same. ## Acknowledgements We should like to thank Klaus Behrndt, Michael Cohen, Gary Gibbons, Lisa Randall and Finn Larsen for useful conversations. M.C. would like to thank Caltech Theory Group for hospitality where part of this work was done.
# A Search for Lightly Ionizing Particles with the MACRO Detector ## I Introduction Ever since Robert Millikan’s historic experiment determined that the charge on matter comes in discrete units , experimenters have spent much time and effort first determining the precise value of that charge, and later trying to observe instances in nature where anything other than an integer multiple version of that charge exists. The first hint that such objects might be present in nature came from the results obtained from the deep inelastic scattering experiments at SLAC during the late 1960s . These experiments first demonstrated that nucleons do in fact have sub-structure. By exploring the structure functions in these scattering experiments, it was discovered that protons and neutrons were constructed of smaller point-like partons, and that there were three charge-bearing partons in each of the proton and the neutron . This observed parton structure fit well into the quark model previously proposed by Gell-Mann and Zweig . Although in this model the quarks which make up the baryons and mesons have fractional charge, they are always combined in a way that results in an integrally charged baryon or meson. Despite decades of searching no one has yet observed a quark free of its ever-present neighbors. Likewise, searches for electrons or other leptonic-type particles with fractional charge have been in vain. These include larger and more sophisticated versions of Millikan’s oil drop experiment, searches in bulk matter, experiments at accelerators, and searches in the cosmic radiation . A clear observation of fractional charge would be extremely important since, depending on the type of particle seen, it might mean that confinement breaks down under some circumstances or that entirely new classes of particles exist. In Grand Unified Theories it is relatively easy to accommodate fractional charge in color singlets by extending the unification group from SU(5) to a larger group. 
For example, an extension to SU(7) allows for charges of $`\frac{1}{3}`$; another, which allows $`\frac{1}{3}`$ e charge leptons, has gauge group SU(5) $`\times `$ SU(5)’ . Other Grand Unified Theory groups have been considered which allow for fractional charge, including SU(8) , SO(14) , and SO(18) . Further, some theories of spontaneously broken QCD have also predicted free quarks , although these quarks would probably be contained in super-heavy quark-nucleus complexes with large non-integral charge. This paper presents the results of a search for penetrating, weakly interacting particles with fractional charge in the cosmic radiation with the MACRO detector. A more detailed description of this analysis can be found in . Since a particle of charge Q has a rate of energy loss by atomic excitation and ionization proportional to $`Q^2`$, particles of a given velocity with fractional charge deposit less energy in a detector than particles with unit charge. So, for example, a particle traveling at relativistic speed with charge of $`\frac{1}{3}`$ e will have an energy deposition only $`\frac{1}{9}`$ that of the muon. For this reason we call such particles lightly ionizing particles (LIPs). A quark of the standard model also interacts via the strong force and would not be able to penetrate large amounts of material; thus this search is only sensitive to penetrating lightly ionizing particles. ## II Experimental Setup The MACRO detector is a large ($`\sim `$ 10000 $`m^2`$ sr) underground scintillator and streamer tube detector and has been described in detail elsewhere . Due to MACRO’s large size, fine granularity, high efficiency scintillator, and high resolution tracking system, it is uniquely suited to look for LIPs. In order to take advantage of this situation a special LIP trigger system has been built. 
Using the lowest level energy-based scintillator trigger available in MACRO, it allows a search for particles which interact electro-magnetically but deposit much smaller amounts of energy in the scintillator counters than minimum ionizing muons. The inputs are the individual counter low energy triggers produced in the PHRASE (one of the gravitational collapse triggers), which have a trigger threshold of about 1.2 MeV. Since a typical muon energy loss is about 40 MeV, this trigger threshold allows a search for particles losing less than 1/25 of this. Streamer tubes are more efficient at triggering on LIPs than the scintillator system. The key to the good sensitivity of the streamer tubes, even to extremely small amounts of ionization, is that even a single ion-electron pair produces a full streamer with reasonable probability. The measured single ion-pair efficiency for the MACRO tubes, gas mixture, and operating voltage is over 30%, which is consistent with earlier work . Since selected tracks are required to cross at least 10 streamer tube planes and a LIP trigger only requires that any 6 of them fire, the streamer tube triggering probability is over 99% for the range of charges considered in this search. The LIP trigger uses field programmable gate array circuits to form coincidences between counters in the three horizontal planes of MACRO scintillator. The resulting accidental coincidence rate of approximately 10 Hz would overload the data acquisition and storage system and so it is reduced by requiring a coincidence with at least 6 streamer tube planes in the bottom part of the detector. Since a well-reconstructed streamer tube track is required in the final off-line analysis, the streamer tube trigger requirement does not reduce the efficiency of the search, although it reduces accidental coincidences to an acceptable level. 
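The forgiving nature of the 6-of-10-planes streamer-tube requirement can be illustrated with a simple binomial estimate. The per-plane efficiencies below are hypothetical stand-ins (the text quotes only the single ion-pair efficiency, not the per-plane value for a LIP), so this is an illustration of the logic rather than a reproduction of the quoted 99% figure:

```python
from math import comb

def p_at_least(k, n, p):
    # probability that at least k of n independent planes register a streamer
    return sum(comb(n, j) * p ** j * (1 - p) ** (n - j) for j in range(k, n + 1))

# hypothetical per-plane efficiencies for a lightly ionizing track
for p in (0.80, 0.90, 0.95):
    print(p, round(p_at_least(6, 10, p), 4))
```

Even a modest per-plane efficiency leaves the any-6-of-10 coincidence close to fully efficient, which is why the streamer-tube requirement does not limit the search.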
The LIP trigger stops the 200 MHz waveform digitizer (WFD) system and causes the data acquisition system to read out the waveforms of all the counters involved in the trigger. The use of this trigger allows a physics search for LIPs which is unique in many ways. Some of the main features which distinguish it are as follows: 1. Sensitivity down to $`\frac{1}{5}`$ equivalent fractional charge. Previous experiments have only checked for particles with charge $`>\frac{1}{3}`$ . 2. Good acceptance from $`\beta `$ = 0.25 to 1.0. Particles which have a velocity lower than 0.25c are not guaranteed to pass through the detector quickly enough to ensure that the LIP trigger will detect a coincidence in the faces of the scintillator system. The lowest flux limits for LIPs now come from the very large water Cherenkov detector in Japan (Kamiokande). However, because of the nature of the Cherenkov process, water detectors are only sensitive to particles with $`\beta >0.8`$. 3. Size of detector. The MACRO detector presents $`\sim `$ 800 $`m^2`$ of fiducial area to downward-going particles. The Cherenkov search at Kamiokande presents a nominal detection area of 130 $`m^2`$. The best results from scintillator-based experiments come from even smaller detectors. The search by Kawagoe et al. relied on a detector of only 6.25 $`m^2`$. 4. The possibility of searching within large multiple muon bundles for fractional charge. Because of the size and granularity of the MACRO experiment, it is possible to isolate tracks located in muon bundles containing on the order of 20 muons, and to check their energy deposition to see whether they are consistent with LIPs. For both smaller experiments and non-granulated experiments (such as single large volume water experiments like Kamiokande), multiple muon events are rejected from the data sample. 
If fractionally charged particles were being produced in high energy collisions in the upper atmosphere, previous experiments may have missed the signature due to the particles being buried in the high-multiplicity shower. 5. Use of high resolution waveform digitizers for energy and timing reconstructions. At a trigger threshold of $`\sim `$ 1.2 MeV each scintillator counter fires at approximately 2 kHz. The traditional ADC/TDC system is susceptible to errors associated with false starts at this rate (see for example ). A small pulse triggering the ADC/TDC system just prior to a large pulse can result in partial integration of the large pulse, producing a fake low ionization event. 6. Use of a high precision limited streamer tube tracking system. Previous underground experiments did not have independent tracking systems. Since muons that clip the corners of scintillating volumes can be an important source of background, the use of a tracking system is essential for the performance of a low background search. In addition, without a tracking system it is hard to recognize the cases where the actual tracks pass between volumes and accompanying soft gamma rays enter into the scintillating volumes. This can be a source of background . The use of a tracking system is also one of the reasons that MACRO can look for fractional charge in high multiplicity muon bundles. ## III Data Analysis The data for this search comes from two periods. The first ran from July 24th to October 12th of 1995, and the second from December 17th 1995 to November 16th 1996. These were both periods of uninterrupted waveform and LIP operation with the entire MACRO detector. The live-time varied for sub-sections of the detector and the longest live-time was 250 days. ### A Low Energy Reconstruction Triggering at very low thresholds is challenging. While previous searches have restricted themselves to $`\frac{1}{3}e`$, this search reaches $`\frac{1}{5}e`$. 
For particles with average path lengths through MACRO scintillator counters the energy deposited is about $$40\text{MeV}\times \left(\frac{1}{5}\right)^2\approx 1.6\text{MeV}.$$ (1) Therefore, in order to be able to reconstruct LIPs which pass through MACRO, it is necessary to reconstruct energies between 1.5 and 40 MeV. The triggering threshold of the LIP trigger was measured by using muons which passed through small amounts of scintillator in the MACRO detector, and thus deposited small amounts of energy. The measured triggering efficiency is shown in figure 1; it is 100% above $`\sim `$ 2 MeV, 50% above 1.2 MeV. Each scintillator counter used in the analysis was calibrated using naturally occurring low-energy $`\gamma `$-rays. The most important of these $`\gamma `$-rays for the calibration is the 2.6 MeV line from the radioactive decay-chain: $`{}_{81}{}^{208}\text{Tl}`$ (2) $`\to `$ $`{}_{82}{}^{208}\text{Pb}^{*}`$ $`+\beta ^{-}+\overline{\nu _e}`$ (3) $`\to `$ $`{}_{82}{}^{208}\text{Pb}+\gamma (2.6\text{MeV}).`$ (4) After every event which causes a readout of the WFDs, one millisecond worth of WFD data is collected for every counter involved in the event. For fast particles such as muons only the first few microseconds of the WFD data is relevant. The rest of the data is recorded in order to search for slowly moving particles such as magnetic monopoles. The one millisecond of data contains small pulses caused by naturally occurring radioactivity. By looking at these radioactivity pulses we can reconstruct the low energy spectrum. Figure 2 shows this spectrum for one of the MACRO scintillator counters. The solid line is a fit to a falling radioactivity spectrum plus two gaussians, one associated with the 2.6 MeV $`{}_{81}{}^{208}\text{Tl}`$ line, and the other, with the 1.4 MeV $`{}_{19}{}^{40}\text{K}`$ line. 
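The quadratic charge scaling in (1) fixes the signal window for each candidate charge. A small sketch tabulating the expected deposition for a typical muon path, using the 40 MeV muon figure from the text:

```python
from fractions import Fraction

E_MUON = 40.0   # MeV deposited by a typical muon (figure quoted in the text)

def lip_deposit(q):
    # ionization energy loss scales as Q^2, cf. eq. (1)
    return E_MUON * float(q) ** 2

for q in (Fraction(2, 3), Fraction(1, 2), Fraction(1, 3), Fraction(1, 5)):
    print(q, round(lip_deposit(q), 2), "MeV")
```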
A full GEANT Monte Carlo was performed to determine where the absolute energies of the lines in this spectrum should be, and the information from the fit is used to make a calibration constant to convert between the observed PMT signal measured in the waveforms and the deposited energy. Since one to five MeV is the important signal region for the LIP search, reconstructing the low energy spectrum in this region is proof that we can also reconstruct LIPs in this region. For this reason, we require a counter to have a good calibration in order to use it for the LIP analysis. In practice, aside from a very few non-functional scintillator counters, this means that only the counters placed in the three horizontal planes were used; the counters in the vertical planes were not. ### B Time Reconstruction It is important to determine an event’s longitudinal position in a counter from its WFD data. Calibration events as described in section III A have no associated streamer tube track, and so this is the only source of the information necessary to correct for the light attenuation of the scintillator. For particles passing through the detector, we require consistency between the longitudinal position of the event independently determined by the streamer tubes and the PMT signals. This reduces the background due to accidental coincidences in which a small radioactivity pulse somewhere in the counter is followed by a muon passing through a crack in the detector. The width of the position resolution determines how tightly this cut can be made. The longitudinal position of an event in a counter can be calculated using the WFD information with the expression: $$pos=\frac{\mathrm{\Delta }t\times v}{2},$$ (5) where $`\mathrm{\Delta }t`$ is the difference in time between the pulses on the two sides of the counter (as measured by the waveforms), and $`v`$ is the effective speed of light in the counter. 
Figure 3 shows the difference between the positions of muons passing through a scintillator counter calculated by the streamer tube tracking system and those calculated by the WFD system, for all of the scintillator counters used in the analysis. These timing results were obtained by first performing a software simulation of a constant fraction trigger to obtain an initial estimate of the longitudinal position. This circuit triggers at the point on the leading edge of a pulse which is a fixed fraction of the maximum height of the pulse. In order to estimate at what time the pulse crosses the fixed fraction of the maximum peak voltage (20% was used for this analysis), a simple linear fit was used between the two samples closest to the point of crossing. A neural network was then used to further refine the estimate of the longitudinal position. The neural network was trained with a sample of events using the position obtained from the streamer tubes. We chose to use a neural network since we did not find an alternative which provided the same or better precision and was less computationally intensive. A more detailed description of the network used can be found in . ## IV Search Results After calibration, the data set was examined for LIPs in both single and multiple track events. In order to be considered in the analysis, an event had to satisfy three requirements: the LIP trigger must have fired; at least one track must have been reconstructed in the streamer tube system; and finally, at least one of the reconstructed tracks must have passed through counters in the top, center, and bottom of the detector. There were approximately 1.3 million events which satisfied these requirements. The data set was broken into two exclusive pieces, a single track and a multiple track set, with approximately 90% of the events being in the single track sample. Each of the selected events was then examined to determine its rate of energy loss in the scintillator. 
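The software constant-fraction step described above — find where the leading edge crosses 20% of the pulse maximum, with a linear fit between the two nearest samples — can be sketched as follows. The waveform is synthetic; the 5 ns sampling step simply reflects the 200 MHz WFD:

```python
def cf_time(samples, dt, fraction=0.20):
    """Time at which the leading edge crosses `fraction` of the pulse maximum,
    using a linear fit between the two samples closest to the crossing."""
    thr = fraction * max(samples)
    for i in range(1, len(samples)):
        if samples[i - 1] < thr <= samples[i]:
            frac_bin = (thr - samples[i - 1]) / (samples[i] - samples[i - 1])
            return (i - 1 + frac_bin) * dt
    return None

# synthetic pulse sampled every 5 ns (200 MHz WFD): linear rise, then decay
pulse = [0.0, 0.0, 2.0, 4.0, 6.0, 8.0, 10.0, 7.0, 4.0, 2.0, 1.0]
print(cf_time(pulse, dt=5.0), "ns")
```

Applying the same estimator to both PMT ends of a counter gives the $`\mathrm{\Delta }t`$ that enters (5).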
For each of the counters that a selected particle passed through, the reconstructed energy was scaled to a common path length of 19 cm, the distance a vertical muon passing through a scintillator counter traveled. To reduce the chance that anomalies would affect the result, the maximum energy in any of the counters was used as a measure of the particle’s energy loss. Figure 4 is a histogram of this distribution for all of the tracks (in both the single and multiple track sample) that passed the selection criteria. The trigger becomes more than 60% efficient at about 1.2 MeV and quickly rises to 100% efficiency. Then, at about 20 MeV, the efficiency of this search quickly drops to zero because a cut must be made to reject muons. Before any cuts, there are events in the region where LIPs would be expected to appear ($`<`$ 20 MeV). These result from two classes of reconstruction errors. First, there are cases where tracks passed close to the edge of a scintillator counter or very close to a phototube and the energy was incorrectly reconstructed. We therefore also exclude tracks which at their center in the scintillator volume are located in the final 10 cm of a scintillator counter. By requiring that all tanks hit by the track satisfy this fiducial requirement, the number of events in the single track sample is reduced by $`\sim `$ 4%. Secondly, there are events in which the position reconstructed by timing in the scintillator counter is inconsistent with that obtained by the streamer tube tracking system, possibly due to random noise in the streamer tubes confusing the tracking algorithm. We require that the position of particle passage as reconstructed in the streamer tubes agrees with the position as reconstructed by the neural network timing procedure to within $`\pm `$ 45 cm, which is about 3 $`\sigma `$ for energy depositions smaller than 5 MeV. This cut removes 1.8% of the data. 
Figure 5 is the distribution of the maximum counter energy on a track for all of the single muon tracks considered in the analysis after the fiducial and position agreement cut. The expected signal region for LIPs is below 20 MeV. Figure 6 is the same distribution for the multiple track sample. There are four events in the multiple track sample with maximum deposited energies between 20 and 23 MeV. The minimum entry in the distribution for the single track sample is 23 MeV. These four events were examined by hand. All four were reconstructed as double muons by the tracking algorithm. In three cases, the tracking algorithm failed and assigned a track where one really did not exist. This nonexistent track intersected counters that were actually hit, but the calculated path lengths with the fake track were incorrect. The fourth event had a maximum energy loss of 23 MeV. This event shows no anomalies and is consistent with the lowest energy seen in the single track sample. ## V Conclusions In the approximately one year of running that this search covers, no candidates for LIPs were observed. This search was sensitive to particles with charges greater than $`\frac{1}{5}`$$`e`$ and $`\beta `$ between approximately 0.25 and 1.0. Unlike previous experiments, this search attempted to find LIPs in both single track events and buried among the tracks of multiple muon showers. For the single track sample, the assumption of an isotropic flux yields a 90% C.L. upper flux limit of $`\mathrm{\Phi }\le 9.2\times 10^{-15}cm^{-2}sec^{-1}sr^{-1}`$. Once again, it should be emphasized that the energy loss considered for particles in this search is due solely to atomic excitation and ionization. If LIPs are present in the cosmic rays and they interact strongly as well as electro-magnetically they will not be able to travel through enough rock to reach the MACRO detector before they interact strongly. 
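The conversion of a null result into a 90% C.L. flux limit follows the standard Poisson counting argument: for zero observed events the 90% upper limit is 2.3 expected events, and $`\mathrm{\Phi }<2.3/(A\mathrm{\Omega }T\epsilon )`$. In the sketch below the acceptance, live-time and efficiency are illustrative placeholders, not the values actually used in this analysis:

```python
import math

def poisson_upper_limit(n_obs=0, cl=0.90, step=1e-4):
    # smallest mean mu with P(N <= n_obs | mu) <= 1 - cl
    mu = 0.0
    while sum(math.exp(-mu) * mu ** j / math.factorial(j)
              for j in range(n_obs + 1)) > 1.0 - cl:
        mu += step
    return mu

N90 = poisson_upper_limit(0)          # for zero events: -ln(0.10), about 2.3

# illustrative placeholders (NOT the acceptance actually used in the analysis):
acceptance = 1.0e7                    # cm^2 sr
livetime = 250 * 86400.0              # s
efficiency = 1.0

print(N90, N90 / (acceptance * livetime * efficiency), "cm^-2 s^-1 sr^-1")
```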
Only if strongly interacting LIPs were produced in the rock very near the detector would this search be sensitive to them. The two best experiments to compare this result with are the LSD experiment and the Kamiokande experiment . While LSD had the best scintillator-based limit in the world prior to this experiment, Kamiokande has the lowest limit. Both of these experiments only claim sensitivity to $`\frac{1}{3}`$ $`e`$ and $`\frac{2}{3}e`$ charged particles. Table I summarizes the limits of this search in comparison to other searches. For the entries marked “Not Quoted”, the experiments do not report a limit for that charge although they should have been sensitive to that energy deposition. At least in the case of LSD there were two candidates in the $`\frac{1}{2}e`$ region which were ignored because the experiment was not considering $`\frac{1}{2}e`$ charged particles. In the Kamiokande experiment only $`\frac{1}{3}e`$ and $`\frac{2}{3}e`$ were searched for. Unlike the other two searches, this search is sensitive to a continuous range of charges from $`\frac{1}{5}e`$ to close to the charge of an electron. This limit is shown in figure 7. This search had no candidates and required hand scanning of only 3 in 1.2 million events. In order to compare flux limits for LIPs from different experiments one must keep several factors in mind. First of all, this is a limit on the flux of local LIPs at the site of the detector. Different mechanisms for LIP production result in different properties for their flux. One possibility is that the LIP particles are produced very close to the detector by some unknown neutral particle or mechanism. In this case, one could indeed expect a location independent, isotropic flux. However, for the more general case of LIP production far away from the detector, one expects different fluxes of LIP particles in different underground locations. 
At each detector site there will be a unique and non-trivial angular distribution, because of different rock thickness above the detectors. This will be true if the LIP particles are produced near the detector in high energy muon showers, in cosmic ray showers in the atmosphere, or if they are impinging on the Earth from outer space. Note that only particles above some minimum energy can reach an underground detector from the atmosphere, because of the ionization loss in the Earth. For the case of MACRO, which has a minimum depth of 3300 meters of water equivalent, the initial energy of a $`\frac{1}{5}`$$`e`$ charged particle before it enters the earth must be $`\gtrsim 20`$ GeV. In comparison, the Kamiokande experiment has an overburden of 2700 meters of water equivalent, and the LSD experiment is covered with 5000 meters of water equivalent so the energy thresholds should be correspondingly lower and higher respectively. In a general discussion such as this one we can only make some qualitative remarks. If the LIP particles are produced in the atmosphere they should not arrive from directions below the horizon. A $`\frac{1}{5}`$$`e`$ charged particle would travel 25 times as far as a muon by virtue of its reduced energy loss, but that distance is still very small compared to the diameter of the earth. To compare the results of the different experiments one should therefore, in principle, consider a particular physical model of production of the particles, a detailed description of the material above the detectors, and the detector acceptances (including their angular dependences). 
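The $`\gtrsim 20`$ GeV threshold quoted above for a $`\frac{1}{5}`$$`e`$ particle at MACRO's minimum depth can be recovered from a back-of-the-envelope scaling. In the sketch below the 2 MeV cm$`{}^{2}`$/g minimum-ionizing loss and the neglect of radiative losses are our assumptions:

```python
DEDX_MIP = 2.0       # MeV cm^2/g: assumed minimum-ionizing loss in rock
DEPTH_MWE = 3300.0   # metres of water equivalent (MACRO minimum depth)

def threshold_energy(q):
    # 1 mwe = 100 g/cm^2 of column density; ionization loss scales as Q^2
    grammage = DEPTH_MWE * 100.0                   # g/cm^2
    return q ** 2 * DEDX_MIP * grammage / 1000.0   # GeV

print(threshold_energy(0.2), "GeV needed by a Q = e/5 particle")
```

The estimate lands in the 20-30 GeV range, consistent with the quoted figure and with the 25-fold range enhancement relative to a muon.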
National Science Foundation for their generous support of the MACRO experiment. We thank INFN, ICTP (Trieste) and NATO for providing fellowships and grants (FAI) for non-Italian citizens.
# From Dixmier algebras to Star Products ## 1. Introduction This note is a companion piece to \[B1\], and a bridge between the approaches to quantization of nilpotent orbit covers in \[B1\] and its sequel \[B2\]. See the introduction to \[B1\] for background, references and motivations. The purpose of this note is to set out some basic relations between the representation theoretic notion of Dixmier algebras and the Poisson geometric notion of star products. ## 2. Perfect Dixmier algebras Let $`𝒪`$ be a complex nilpotent orbit in some complex semisimple Lie algebra $`𝔤`$. We assume $`𝒪`$ spans $`𝔤`$. Let $`G^{sc}`$ be a simply-connected complex Lie group with Lie algebra $`𝔤`$. Let $`\kappa :\stackrel{~}{𝒪}\to 𝒪`$ be a covering where $`\stackrel{~}{𝒪}`$ is connected. Then the following geometric structure lifts from $`𝒪`$ to $`\stackrel{~}{𝒪}`$: the adjoint action of $`G^{sc}`$, the KKS symplectic form, the Hamiltonian functions $`\varphi ^x`$, $`x\in 𝔤`$, defined by $`\varphi ^x(y)=(x,y)_𝔤`$, and the square of the Euler dilation action of $`ℂ^{\times }`$. In fact, $`G`$ acts on $`\stackrel{~}{𝒪}`$ where $`G`$ is the quotient of $`G^{sc}`$ by the (finite central) subgroup which fixes $`\stackrel{~}{𝒪}`$. Consider the algebra $$ℛ=R(\stackrel{~}{𝒪})$$ (2.1) of regular functions on $`\stackrel{~}{𝒪}`$. Then the Euler grading $`ℛ=\oplus _{j\in \frac{1}{2}ℕ}ℛ^j`$ makes $`ℛ`$ into a graded Poisson algebra in the sense of \[B1, Definition 2.2.1\] with $`\varphi ^x\in ℛ^1`$, for $`x\in 𝔤`$, and $`\{\varphi ^x,\varphi ^y\}=\varphi ^{[x,y]}`$. Also $`ℛ`$ is a superalgebra with even and odd parts $`ℛ^{even}=\oplus _{j\in ℕ}ℛ^j`$ and $`ℛ^{odd}=\oplus _{j\in ℕ+\frac{1}{2}}ℛ^j`$. Let $`\alpha `$ be the algebra automorphism of $`ℛ`$ such that $`\alpha (f)=i^{2j}f`$ if $`f\in ℛ^j`$. Notice $`\alpha ^4=1`$. Suppose $`m`$ covers $`e\in 𝒪`$. Then $`\stackrel{~}{𝒪}`$ is a Galois cover of $`𝒪`$ if and only if $`G^m`$ is normal in $`G^e`$. In this event, $`𝒮=G^e/G^m`$ is the Galois group , $`𝒮`$ acts on $`\stackrel{~}{𝒪}`$ by symplectic automorphisms, and our grading of $`ℛ`$ is $`𝒮`$-invariant. The universal cover of $`𝒪`$ is always Galois. 
Now we strengthen the usual definition of Dixmier algebra (Vogan, McGovern) by adding three new axioms in (V), (VI) and (VII). We allowed redundancies in our axioms in order to make them as explicit as possible. ###### Definition 2.1. Assume the cover $``$ of $`𝒪`$ is Galois with Galois group $`𝒮`$. A *perfect Dixmier algebra* for $``$ is a noncommutative algebra $`𝒟`$ together with the following data: (I) An increasing algebra filtration $`𝒟=\cup _{j\in \frac{1}{2}ℤ}𝒟_j`$ such that $`[𝒟_j,𝒟_k]\subseteq 𝒟_{j+k-1}`$ for all $`j,k\in \frac{1}{2}ℤ`$. (II) A representation of $`𝒮`$ on $`𝒟`$ by filtered algebra automorphisms. (III) A Lie algebra embedding $`\psi :𝔤\to 𝒟_1^𝒮`$ such that the representation of $`𝔤`$ on $`𝒟`$ given by the derivations $`a\mapsto \psi ^xa-a\psi ^x`$ is locally finite and exponentiates to a representation of $`G`$ on $`𝒟`$ by algebra automorphisms. (IV) An $`𝒮`$-equivariant graded Poisson algebra isomorphism $`\gamma :gr𝒟\to `$ such that $`\gamma (𝐩_1(\psi ^x))=\varphi ^x`$ where $`𝐩_1:𝒟_1\to gr_1𝒟`$ is the natural projection. (V) An $`𝒮`$-invariant filtered algebra anti-automorphism $`\beta `$ of $`𝒟`$ such that (a) $`\beta (\psi ^x)=-\psi ^x`$, (b) $`\beta `$ induces $`\gamma ^{-1}\alpha \gamma `$ on $`gr𝒟`$ and (c) $`\beta ^4=1`$. We impose two further axioms. To state these, we notice two useful consequences of (I)-(V). First there is a unique $`G`$-linear map $`T:𝒟\to ℂ`$ such that $`T(1)=1`$. Second, $`𝒟`$ has become a superalgebra with $`(G\times 𝒮)`$-invariant filtered algebra $`ℤ_2`$-grading $`𝒟=𝒟^{even}\oplus 𝒟^{odd}`$ where the summands are the $`\pm 1`$-eigenspaces of $`\beta ^2`$. An element $`a\in 𝒟`$ is *superhomogeneous* if $`a\in 𝒟^{even}`$, in which case $`|a|=0`$, or $`a\in 𝒟^{odd}`$, in which case $`|a|=1`$. Now we require that (VI) $`T`$ is a supertrace. (VII) The $`G`$-invariant supersymmetric bilinear pairing $`𝒫(a,b)=T(ab)`$ is non-degenerate on $`𝒟_j`$ for each $`j\in \frac{1}{2}ℤ`$. 
In (VI), the statement that $`T`$ is a supertrace means that if $`a`$ and $`b`$ are superhomogeneous then $$T(ab)=(-1)^{|a||b|}T(ba)$$ (2.2) when $`a`$ and $`b`$ have the same parity, while $`T(ab)=0`$ when $`a`$ and $`b`$ have different parity. Axioms (IV) and (V) guarantee that $`T`$ vanishes on $`𝒟^{odd}`$ and so axiom (VI) amounts to (2.2). To make sense of (VII), we notice if $`L\subseteq 𝒟`$ is any $`\beta ^2`$-stable subspace then we have a notion of the $`𝒫`$-orthogonal subspace $`L^{\perp }`$ (since the right and left orthogonal subspaces coincide). We then say $`𝒫`$ is *non-degenerate* on $`L`$ if $`L^{\perp }\cap L=0`$. The pairing $`𝒫`$ is *supersymmetric* in the sense that $`𝒟^{even}`$ and $`𝒟^{odd}`$ are $`𝒫`$-orthogonal, $`𝒫`$ is symmetric on $`𝒟^{even}`$ and $`𝒫`$ is anti-symmetric on $`𝒟^{odd}`$. Notice that (VII) is much stronger than saying $`𝒫`$ is non-degenerate on $`𝒟`$. We often speak of $`𝒟`$ as the perfect Dixmier algebra, with the additional data being understood. See \[B1, §7-8\] for examples. Here are some consequences of the axioms. First $``$ is $`ℤ`$-graded if and only if $`𝒟=𝒟^{even}`$. Indeed, $``$ is $`ℤ`$-graded $`\iff `$ $`\alpha ^2=1`$ $`\iff `$ $`\beta ^2=1`$, where the last equivalence follows by axiom (V)(b). An instance where $``$ is $`ℤ`$-graded is when $`=𝒪`$. Second, $`𝒟^𝒮`$ is a perfect Dixmier algebra for $`𝒪`$. This follows as $`^𝒮=R(𝒪)`$ and all the Dixmier algebra data for $``$ is $`𝒮`$-equivariant. Third, axiom (IV) provides a “symbol calculus” for $`𝒟`$ with values in $``$. If $`a\in 𝒟_j`$ then the *$`\gamma `$-symbol of a of order $`j`$* is the image of $`a`$ under the map $$𝒟_j\to 𝒟_j/𝒟_{j-\frac{1}{2}}\cong ^j$$ Fourth, let us extend $`\psi :𝔤\to 𝒟_1^𝒮`$ to an algebra homomorphism $$\psi :𝒰(𝔤)\to 𝒟^𝒮$$ (2.3) and let $`J`$ be the kernel of (2.3). Then $`J\cap 𝒵(𝔤)`$ is a maximal ideal of $`𝒵(𝔤)`$ where $`𝒵(𝔤)`$ is the center of $`𝒰(𝔤)`$. This follows since by axioms (III) and (IV) the vector spaces $`𝔇`$, $`gr𝔇`$ and $`R(\stackrel{~}{𝒪})`$ are all $`𝔤`$-isomorphic and so in particular $`𝔇^G=ℂ`$. 
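The sign rule (2.2) can be sanity-checked in the simplest graded setting — the standard supertrace on block matrices, with block-diagonal matrices even and block-off-diagonal matrices odd. This is an illustration of our own, independent of the orbit-cover setting; the names and matrices below are arbitrary choices:

```python
# Toy check of the supertrace identity (2.2) on 4x4 matrices with 2x2 blocks:
# even = [[A,0],[0,D]], odd = [[0,B],[C,0]], supertrace = tr(A) - tr(D).

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def supertrace(X):
    # str X = tr(top-left block) - tr(bottom-right block)
    return X[0][0] + X[1][1] - X[2][2] - X[3][3]

def even(A, D):   # block-diagonal [[A, 0], [0, D]]
    return [A[0] + [0, 0], A[1] + [0, 0], [0, 0] + D[0], [0, 0] + D[1]]

def odd(B, C):    # block-off-diagonal [[0, B], [C, 0]]
    return [[0, 0] + B[0], [0, 0] + B[1], C[0] + [0, 0], C[1] + [0, 0]]

x = odd([[1, 2], [3, 4]], [[5, 6], [7, 8]])
y = odd([[1, 0], [2, 1]], [[0, 3], [1, 1]])
e1 = even([[1, 2], [0, 1]], [[3, 0], [1, 2]])
e2 = even([[0, 1], [1, 0]], [[2, 1], [0, 1]])

# odd-odd: the supertrace pairing is anti-symmetric, T(ab) = -T(ba)
assert supertrace(matmul(x, y)) == -supertrace(matmul(y, x)) != 0
# even-even: symmetric, T(ab) = T(ba)
assert supertrace(matmul(e1, e2)) == supertrace(matmul(e2, e1))
# mixed parity: T(ab) = 0, matching the vanishing of T on the odd part
assert supertrace(matmul(x, e1)) == supertrace(matmul(e1, x)) == 0
```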
Now (V)(a) says the following square is commutative: $$\begin{array}{ccc}𝒰(𝔤)& \stackrel{\psi }{-⟶}& 𝒟\\ \tau \downarrow & & \downarrow \beta \\ 𝒰(𝔤)& \stackrel{\psi }{-⟶}& 𝒟\end{array}$$ (2.4) where $`\tau `$ is the principal anti-automorphism of $`𝒰(𝔤)`$. Consequently $`J`$ is a $`\tau `$-stable $`2`$-sided ideal in $`𝒰(𝔤)`$. Furthermore $`J`$ is a completely prime primitive ideal in $`𝒰(𝔤)`$. ($`𝒰(𝔤)/J`$ is a subalgebra of $`𝒟`$ and so has no zero-divisors. This means $`J`$ is completely prime. But also $`J\neq 𝒰(𝔤)`$ and $`J`$ contains a maximal ideal of the center of $`𝒰(𝔤)`$ and so, by a result of Dixmier, $`J`$ is primitive.) ## 3. Simplicity of $`𝒟`$ ###### Lemma 3.1. Suppose axioms (I)-(VI) are satisfied. Let $`𝒞`$ be some $`\beta ^2`$-stable subalgebra of $`𝒟`$. Then the following two conditions are equivalent: * $`𝒞`$ is a simple ring * $`𝒫`$ is non-degenerate on $`𝒞`$ ###### Proof. (i)$`\Rightarrow `$(ii): $`𝒞`$ is simple implies $`𝒞^{\perp }\cap 𝒞=0`$ since $`𝒞^{\perp }\cap 𝒞`$ is a $`2`$-sided ideal in $`𝒞`$ which does not contain $`1`$. (ii)$`\Rightarrow `$(i): Let $`ℐ`$ be a non-zero two-sided ideal in $`𝒞`$. Pick $`a\in ℐ`$ with $`a\neq 0`$. Then (ii) implies there exists $`b\in 𝒞`$ such that $`T(ab)=1`$. It follows that $`ab=c+1`$ where $`c`$ lies in $`KerT\cap 𝒞`$. So $`ℐ`$ contains $`c+1`$. But then $`ℐ`$ contains the $`G`$-subrepresentation generated by $`c+1`$. Since $`KerT`$ contains no non-zero $`G`$-invariants it follows (by complete reducibility of $`𝒞`$ as a $`G`$-representation) that $`ℐ`$ contains both $`c`$ and $`1`$. Thus $`𝒞`$ is simple. ∎ ###### Proposition 3.2. Suppose we are in the situation of Definition 2.1 and axioms (I)-(VI) are satisfied. Then $`𝒟`$ is a simple ring if and only if $`𝒟^𝒮`$ is a simple ring. ###### Proof. Suppose $`𝒟`$ is simple. Then $`𝒫`$ is non-degenerate on $`𝒟`$ (by Lemma 3.1) and hence (since $`𝒫`$ is $`𝒮`$-invariant) $`𝒫`$ is non-degenerate on $`𝒟^𝒮`$. Then (by Lemma 3.1 again) $`𝒟^𝒮`$ is simple. Conversely, assume $`𝒟^𝒮`$ is simple. Let $`ℐ`$ be a non-zero $`2`$-sided ideal in $`𝒟`$. 
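The dichotomy in Lemma 3.1 can be seen in miniature on small matrix algebras — an illustration of our own, with $`T`$ playing the role of a normalized trace. The trace pairing is non-degenerate on the simple algebra $`M_2`$, and degenerate on the non-simple upper-triangular algebra, whose strictly upper-triangular part is a two-sided ideal pairing to zero with everything:

```python
# Our own toy model of Lemma 3.1: P(a,b) = T(ab) with T = tr/2 (so T(1) = 1).
# The Gram matrix of P has full rank on M_2 (simple) but not on the upper
# triangular algebra (not simple: E(0,1) spans a nilpotent 2-sided ideal).

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def T(X):                      # normalized trace
    return (X[0][0] + X[1][1]) / 2.0

def gram(basis):
    return [[T(matmul(a, b)) for b in basis] for a in basis]

def rank(M):
    M = [row[:] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if abs(M[i][c]) > 1e-12), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r:
                f = M[i][c] / M[r][c]
                M[i] = [u - f * v for u, v in zip(M[i], M[r])]
        r += 1
    return r

E = lambda i, j: [[1.0 if (a, b) == (i, j) else 0.0 for b in range(2)]
                  for a in range(2)]

full = [E(0, 0), E(0, 1), E(1, 0), E(1, 1)]   # M_2: simple
tri = [E(0, 0), E(0, 1), E(1, 1)]             # upper triangular: not simple

assert rank(gram(full)) == 4    # non-degenerate pairing
assert rank(gram(tri)) == 2     # degenerate: E(0,1) is P-orthogonal to all
```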
To show $`𝒟`$ is simple, it suffices to show that $`ℐ^𝒮=ℐ\cap 𝒟^𝒮`$ is non-zero. Let $`a\in ℐ`$ be non-zero. Consider $`b=(n!)^{-1}\sum _{\sigma \in S_n}(s_{\sigma _1}a)\cdots (s_{\sigma _n}a)`$ where $`s_1,\dots ,s_n`$ is some listing of the elements of the Galois group $`𝒮`$. Clearly $`b\in ℐ^𝒮`$. But also we can see using axiom (IV) that $`b\neq 0`$. Indeed, let $`j`$ be the filtration order of $`a`$ in $`𝒟`$ and let $`\varphi \in ^j`$ be the $`\gamma `$-symbol of order $`j`$ of $`a`$. Then $`b`$ lies in $`𝒟_{jn}`$ and the $`\gamma `$-symbol of order $`jn`$ of $`b`$ is $`(s_{\sigma _1}\varphi )\cdots (s_{\sigma _n}\varphi )`$. This product is non-zero and so $`b`$ must be non-zero. ∎ ###### Corollary 3.3. If $`𝒟`$ is a perfect Dixmier algebra then $`𝒟`$ and $`𝒟^𝒮`$ are both simple rings. ###### Corollary 3.4. If $`𝒟`$ is a perfect Dixmier algebra and (2.3) is surjective, then the kernel $`J`$ is a maximal ideal in $`𝒰(𝔤)`$. ###### Remark 3.5. Ideally the axioms for a perfect Dixmier algebra should automatically imply that the image of (2.3) is simple, i.e., that $`J`$ is maximal. Our axioms (I)-(VII) may not do this, but in any event our enriched axiom set (see Remark 4.4) accomplishes this. ## 4. The noncommutative $`\star `$ product By axiom (VII) in Definition 2.1, we have a unique $`𝒫`$-orthogonal decomposition $$𝒟=\underset{j\in \frac{1}{2}ℤ}{\oplus }𝒟^j$$ (4.1) such that $`𝒟_k=\oplus _{j=0}^k𝒟^j`$. Each space $`𝒟^j`$ is $`(G\times 𝒮)`$-stable. It is easy to see that $$𝒫(a,b)=𝒫(\beta (b),\beta (a))$$ for all $`a,b\in 𝒟`$. Consequently $`𝒟^j`$ is $`\beta `$-stable and using axiom (V)(b) we find that $`\beta `$ acts on $`𝒟^j`$ by multiplication by $`i^{2j}`$. Then $`𝒟^{even}=\oplus _{j\in ℤ}𝒟^j`$, $`𝒟^{odd}=\oplus _{j\in ℤ+\frac{1}{2}}𝒟^j`$ and $`T`$ is the orthogonal projection of $`𝒟`$ onto $`𝒟^0=ℂ`$. Clearly there is a unique linear map $$𝐪:\to 𝒟$$ (4.2) such that $`𝐪`$ lifts $`\gamma ^{-1}:\to gr𝒟`$ and $`𝐪(^j)=𝒟^j`$ for all $`j\in \frac{1}{2}ℤ`$. 
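The averaging step in the proof of Proposition 3.2 can be made concrete in a toy case (our own choices throughout: the algebra is $`2\times 2`$ matrices, the group is of order two acting by conjugation, and the element $`a`$ is arbitrary). The symmetrized product is group-invariant and non-zero even though $`a`$ itself is not invariant:

```python
# Our own instance of b = (n!)^{-1} sum_sigma (s_{sigma_1} a)...(s_{sigma_n} a)
# for the order-2 group S = {identity, conjugation by g = diag(1,-1)}.

from itertools import permutations
from math import factorial

g = [[1, 0], [0, -1]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def act(s, a):    # s = 0: identity, s = 1: conjugation by g (g is its own inverse)
    return a if s == 0 else matmul(matmul(g, a), g)

a = [[1, 1], [0, 0]]   # not invariant: conjugation flips the off-diagonal entry
group = [0, 1]
n = len(group)

b = [[0.0, 0.0], [0.0, 0.0]]
for sigma in permutations(range(n)):
    prod = [[1, 0], [0, 1]]
    for i in sigma:
        prod = matmul(prod, act(group[i], a))
    for r in range(2):
        for c in range(2):
            b[r][c] += prod[r][c] / factorial(n)

assert act(1, a) != a    # a is not invariant under the group
assert act(1, b) == b    # ... but the average b is
assert b == [[1.0, 0.0], [0.0, 0.0]]   # and b is non-zero
```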
Then $`𝐪`$ is a $`(G\times 𝒮)`$-linear vector space isomorphism, and we have $`\psi ^x=𝐪(\varphi ^x)`$ and $`\beta =𝐪\alpha 𝐪^{-1}`$. Now we define, for all $`\varphi ,\psi \in `$, $$\varphi \star \psi =𝐪^{-1}((𝐪\varphi )(𝐪\psi ))$$ (4.3) ###### Proposition 4.1. Assume we have a perfect Dixmier algebra $`𝒟`$ for $``$ with $`𝒟^j`$ and $`𝐪`$ defined as above. Then $`\star `$ is a $`(G\times 𝒮)`$-invariant associative product on $``$ and so $`(,+,\star )`$ is an associative noncommutative algebra. Then $$^j\star ^k\subseteq ^{j+k}\oplus ^{j+k-1}\oplus \cdots \oplus ^{|j-k|}$$ (4.4) where $`j,k\in \frac{1}{2}ℤ`$. Suppose $`\varphi \in ^j`$ and $`\psi \in ^k`$ so that $`\varphi \star \psi =\sum _pC_p(\varphi ,\psi )`$ where $`C_p(\varphi ,\psi )`$ lies in $`^{j+k-p}`$. Then $`\varphi \star \psi `$ $`\equiv `$ $`\varphi \psi +{\displaystyle \frac{1}{2}}\{\varphi ,\psi \}mod^{j+k-2}`$ (4.5) $`C_p(\varphi ,\psi )`$ $`=`$ $`(-1)^pC_p(\psi ,\varphi )`$ (4.6) ###### Proof. The second sentence is clear. Now (4.4) is equivalent to $$𝒟^j𝒟^k\subseteq 𝒟^{j+k}\oplus 𝒟^{j+k-1}\oplus \cdots \oplus 𝒟^{|j-k|}$$ (4.7) We have $`𝒟^j𝒟^k\subseteq \oplus _i𝒟^{j+k-i}`$ since $`𝒟`$ is a superalgebra. Because of axiom (VII), showing (4.7) reduces to showing that $`𝒟^j𝒟^k`$ is orthogonal to $`𝒟^s`$ if $`s<|j-k|`$. So suppose $`𝒟^j𝒟^k`$ is not orthogonal to $`𝒟^s`$. Then there exist $`a\in 𝒟^j`$, $`b\in 𝒟^k`$ and $`c\in 𝒟^s`$ such that $`T(abc)=1`$. Then $`bc`$ has a component in $`𝒟^j`$ and so $`k+s\ge j`$. But also $`T(bca)=\pm 1`$ (since $`T`$ is a supertrace) and so similarly $`s+j\ge k`$. Hence $`s\ge |j-k|`$. Now for $`\varphi \in ^j`$ and $`\psi \in ^k`$ we can write $$\varphi \star \psi =\underset{p=0}{\overset{2\mathrm{min}(j,k)}{\sum }}C_p(\varphi ,\psi )$$ (4.8) where $`C_p(\varphi ,\psi )\in ^{j+k-p}`$. Now axiom (IV) implies that $`C_0(\varphi ,\psi )=\varphi \psi `$ and $`C_1(\varphi ,\psi )-C_1(\psi ,\varphi )=\{\varphi ,\psi \}`$. But also axiom (V) implies that the map $`\alpha :\to `$ (which is an algebra automorphism with respect to the ordinary product) is an algebra anti-automorphism with respect to $`\star `$. 
Thus $$\alpha (\varphi \star \psi )=(\alpha \psi )\star (\alpha \varphi )=i^{2j+2k}\psi \star \varphi $$ Then $`i^{-2p}C_p(\varphi ,\psi )=C_p(\psi ,\varphi )`$. This proves (4.6). Then in particular $`C_1(\varphi ,\psi )=-C_1(\psi ,\varphi )`$. So $`C_1(\varphi ,\psi )=\frac{1}{2}\{\varphi ,\psi \}`$ and we get (4.5). ∎ We can think of $`𝐪:\to 𝒟`$ as a “quantization map” as in \[B1, §8.1-8.2\]. In particular we have ###### Corollary 4.2. If $`\varphi \in ^1`$, for instance if $`\varphi =\varphi ^x`$ where $`x\in 𝔤`$, then for all $`\psi \in `$ we have $`\{\varphi ,\psi \}=\varphi \star \psi -\psi \star \varphi `$. ###### Proof. Identical to the proof of \[B1, Corollary 8.2.3\]. ∎ Proposition 4.1 implies in particular that $``$, equipped with its $`\star `$ product, *becomes the perfect Dixmier algebra*! The data on $`𝒟`$ required by the axioms corresponds under $`𝐪`$ to data that exist from the beginning on $``$: (I) the grading on $``$ gives rise to the filtration by subspaces $`^j`$, (II) the $`\psi ^x`$ correspond to the momentum functions $`\varphi ^x`$ of the Hamiltonian $`𝔤`$-symmetry, (III) the Galois group $`𝒮`$ already acts on $``$, (IV) $`\gamma `$ corresponds to the identity map, (V) $`\beta `$ corresponds to $`\alpha `$. Notice that the even and odd parts of $`𝒟`$ correspond to the even and odd parts of $``$. Also $`T`$ corresponds to the projection $$𝒯:\to ^0=ℂ$$ (4.9) defined by the Euler grading. So $`𝒯(\varphi )`$ is just the *constant term* of $`\varphi `$. The nondegenerate bilinear pairing $`𝒬`$ on $``$ corresponding to $`𝒫`$ is given by $`𝒬(\varphi ,\psi )=𝒯(\varphi \star \psi )`$. This pairing is *graded* supersymmetric in the sense that $`𝒬`$ pairs $`^j`$ with $`^k`$ trivially if $`j\neq k`$. In this approach, the “new” axioms (VI) and (VII) have played a crucial role. Also we have gained a lot more structure on the Dixmier algebra. In particular the $`\star `$ product “breaks off” after degree $`|j-k|`$ in (4.4); this is a Clebsch-Gordan type phenomenon. 
Thus the problem of finding a Dixmier algebra for $``$ can be reformulated as the problem of constructing a suitable product $`\star `$ on $``$. To formalize this we make ###### Definition 4.3. Assume $``$ is a Galois cover of $`𝒪`$ with Galois group $`𝒮`$. A *perfect Dixmier product* on $``$ is a $`(G\times 𝒮)`$-invariant associative noncommutative product $`\star `$ satisfying (4.4), (4.5) and (4.6) such that the bilinear pairing $`𝒬(\varphi ,\psi )=𝒯(\varphi \star \psi )`$ on $``$ is graded supersymmetric and non-degenerate. A perfect Dixmier product makes $``$ into a filtered superalgebra which then, together with the Hamiltonian functions $`\varphi ^x`$, $`x\in 𝔤`$, and $`\alpha `$, is a perfect Dixmier algebra for $``$ in the sense of Definition 2.1. Conversely, we have shown that a perfect Dixmier algebra for $``$ yields a perfect Dixmier product on $``$. ###### Remark 4.4. We can enrich our axiom set to get the notions for $``$ of a *positive Dixmier algebra* and a *positive Dixmier product*. The extra axioms require lifting the complex conjugation automorphism $`\sigma `$ of $`𝒪`$ (induced by the Cartan involution of $`𝔤`$) to an antiholomorphic automorphism $`\stackrel{~}{\sigma }`$ of $``$ (of order $`2`$ or $`4`$) such that (i) $`\stackrel{~}{\sigma }`$ induces a $`ℂ`$-antilinear $`\star `$-algebra automorphism of $``$ and (ii) the Hermitian pairing $`(\varphi |\psi )=𝒯(\varphi \star \psi ^{\stackrel{~}{\sigma }})`$ is positive-definite. ($`𝒯`$ being a supertrace is equivalent to this pairing being Hermitian.) We develop this in \[B2\] in the context of star products. ## 5. Graded star products Let $`𝒜=\oplus _{j\in \frac{1}{2}ℤ}𝒜^j`$ be a graded Poisson algebra as in \[B1, Definition 2.2.1\]. Assume $`𝒜^0=ℂ`$. 
Then a *graded star product* (with parity) on $`𝒜`$ is a product $`\ast `$ on $`𝒜[t]`$ which makes $`𝒜[t]`$ into an associative algebra over $`ℂ[t]`$ such that, for $`\varphi ,\psi \in 𝒜`$, the series $`\varphi \ast \psi =\sum _{p=0}^{\infty }C_p(\varphi ,\psi )t^p`$ satisfies: (i) $`C_0(\varphi ,\psi )=\varphi \psi `$ (ii) $`C_1(\varphi ,\psi )=\frac{1}{2}\{\varphi ,\psi \}`$ (iii) $`C_p(\varphi ,\psi )=(-1)^pC_p(\psi ,\varphi )`$ (iv) $`C_p(\varphi ,\psi )\in 𝒜^{j+k-p}`$ when $`\varphi \in 𝒜^j`$ and $`\psi \in 𝒜^k`$. We do *not* require that $`\ast `$ is bidifferential, i.e. that the operators $`C_p(,)`$ are bidifferential. A graded star product $`\ast `$ on $`𝒜`$ specializes at $`t=1`$ to give a noncommutative product $`\star `$ on $`𝒜`$. Clearly $`\ast `$ uniquely determines $`\star `$. Let $`T:𝒜\to 𝒜^0=ℂ`$ be the projection defined by the grading. Then we have a bilinear pairing $`𝒬:𝒜\times 𝒜\to ℂ`$ defined by $$𝒬(\varphi ,\psi )=T(\varphi \star \psi )$$ (5.1) Notice that $`𝒬(\varphi ,\psi )=C_{2k}(\varphi ,\psi )`$ if $`\varphi ,\psi \in 𝒜^k`$ and so the parity axiom (iii) implies that $`𝒬`$ is symmetric if $`k\in ℤ`$ or anti-symmetric if $`k\in ℤ+\frac{1}{2}`$. ###### Definition 5.1. We say $`\ast `$ is *orthogonally graded* if $`𝒜^j`$ and $`𝒜^k`$ are $`𝒬`$-orthogonal when $`j\neq k`$. We say $`\ast `$ is *perfectly graded* if also $`𝒬`$ is non-degenerate on $`𝒜^j`$ for each $`j`$. Comparing with the proof of Proposition 4.1, we find ###### Lemma 5.2. If $`\ast `$ is orthogonally graded then $`T`$ is a supertrace on $`𝒜`$ with respect to $`\star `$. If $`\ast `$ is perfectly graded then $$𝒜^j\ast 𝒜^k\subseteq 𝒜^{j+k}\oplus t𝒜^{j+k-1}\oplus \cdots \oplus t^{2\mathrm{min}(j,k)}𝒜^{|j-k|}$$ (5.2) Suppose we have Hamiltonian symmetry $`\varphi :𝔤\to 𝒜^1`$, $`x\mapsto \varphi ^x`$, as in \[B1, Definition 3.1.1\]. Put $`[\varphi ,\psi ]_{\ast }=\varphi \ast \psi -\psi \ast \varphi `$. We say $`\ast `$ is *$`𝔤`$-covariant* if $`[\varphi ^x,\varphi ^y]_{\ast }=t\varphi ^{[x,y]}`$ for all $`x,y\in 𝔤`$. 
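Axioms (i)-(iv) are modeled by the flat Moyal product on polynomials in $`(x,p)`$, graded by half the total degree — a standard example which we include only as an illustration; it is not one of the orbit-cover star products of this note, and the encoding below is our own:

```python
# Check of the graded-star-product axioms on the Moyal product, with A^j the
# polynomials of total degree 2j.  Polynomials are dicts {(i, j): coeff}
# encoding sums of c * x^i p^j; the Poisson bracket is {f,g} = fx gp - fp gx.

from fractions import Fraction
from math import comb, factorial

def dx(f):
    return {(i - 1, j): c * i for (i, j), c in f.items() if i > 0}

def dp(f):
    return {(i, j - 1): c * j for (i, j), c in f.items() if j > 0}

def mul(f, g):
    out = {}
    for (a, b), c in f.items():
        for (d, e), k in g.items():
            out[(a + d, b + e)] = out.get((a + d, b + e), 0) + c * k
    return {k: v for k, v in out.items() if v}

def iterate(d, f, n):
    for _ in range(n):
        f = d(f)
    return f

def C(n, f, g):
    # n-th Moyal cochain: (1/n!) (1/2)^n sum_k (-1)^k C(n,k)
    #                     (dx^{n-k} dp^k f) (dx^k dp^{n-k} g)
    out = {}
    for k in range(n + 1):
        term = mul(iterate(dx, iterate(dp, f, k), n - k),
                   iterate(dp, iterate(dx, g, k), n - k))
        for key, v in term.items():
            out[key] = out.get(key, 0) + (-1) ** k * comb(n, k) * v
    s = Fraction(1, factorial(n) * 2 ** n)
    return {k: s * v for k, v in out.items() if v}

f, g = {(2, 0): 1}, {(0, 2): 1}   # f = x^2, g = p^2, both of grade j = 1

assert C(0, f, g) == {(2, 2): 1}                 # (i)   C_0 = fg
assert C(1, f, g) == {(1, 1): 2}                 # (ii)  C_1 = {f,g}/2 = 2xp
assert C(1, g, f) == {(1, 1): -2}                # (iii) antisymmetric, p = 1
assert C(2, f, g) == C(2, g, f) == {(0, 0): Fraction(1, 2)}   # (iii), p = 2
# (iv): C_p lands in grade j + k - p, i.e. total degree 4 - 2p here
assert all(i + j == 4 - 2 * p for p in (0, 1, 2) for (i, j) in C(p, f, g))
```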
We say $`\ast `$ is *exactly $`𝔤`$-invariant* (or *strongly $`𝔤`$-invariant*) if we have the stronger property: $$[\varphi ^x,\psi ]_{\ast }=t\{\varphi ^x,\psi \}$$ (5.3) for all $`x\in 𝔤`$ and $`\psi \in 𝒜`$. Exact $`𝔤`$-invariance implies ordinary $`G`$-invariance, i.e., $`(g\psi _1)\ast (g\psi _2)=g(\psi _1\ast \psi _2)`$ where $`G`$ acts on $`𝒜[t]`$ by $`g(\psi t^p)=(g\psi )t^p`$. ###### Lemma 5.3. If $`\ast `$ is an orthogonally graded star product on $`𝒜`$ then $`\ast `$ is exactly $`𝒜^1`$-invariant. ###### Proof. If $`\varphi \in 𝒜^1`$ and $`\psi \in 𝒜`$ then $`\varphi \ast \psi =\varphi \psi +\frac{1}{2}\{\varphi ,\psi \}t+t^2C_2(\varphi ,\psi )`$. Then $`[\varphi ^x,\psi ]_{\ast }=t\{\varphi ^x,\psi \}`$ because of the parity axiom (iii). ∎ ###### Lemma 5.4. Suppose $`\star `$ is a perfect Dixmier product on $``$. Then $`\star `$ is the specialization at $`t=1`$ of a unique graded star product $`\ast `$ on $``$. Moreover $`\ast `$ is perfectly graded and $`𝒮`$-invariant. Conversely, suppose $`\ast `$ is a perfectly graded, $`𝒮`$-invariant star product on $``$. Then the specialization at $`t=1`$ of $`\ast `$ is a perfect Dixmier product on $``$. ###### Proof. Given $`\star `$, we define a product $`\ast `$ on $`[t]`$ as follows: $`\ast `$ is $`ℂ[t]`$-bilinear and if $`\varphi \in ^j`$ and $`\psi \in ^k`$ with $`\varphi \star \psi =\sum _{i=|j-k|}^{j+k}\pi _i`$ where $`\pi _i\in ^i`$ then $`\varphi \ast \psi =\sum _{i=|j-k|}^{j+k}\pi _it^{j+k-i}`$. This is the only possible way to extend $`\star `$ to a graded star product. The properties of $`\star `$ imply that $`\ast `$ is in fact a perfectly graded, $`𝒮`$-invariant star product. The converse is clear. ∎ ### 5.1. The operators $`𝚲^𝒙`$ Here is an important consequence of perfectness. ###### Proposition 5.5. Suppose we have a perfect Dixmier product $`\star `$ on $``$. For $`x\in 𝔤`$ and any $`\psi \in `$ we have $$\varphi ^x\star \psi =\varphi ^x\psi +\frac{1}{2}\{\varphi ^x,\psi \}+\mathrm{\Lambda }^x(\psi )$$ (5.4) where $`\mathrm{\Lambda }^x:\to `$ are linear operators. These satisfy * $`\mathrm{\Lambda }^x`$ is the $`𝒬`$-adjoint of ordinary multiplication by $`\varphi ^x`$. 
* If $`x\neq 0`$ and $`j`$ is positive, then $`\mathrm{\Lambda }^x`$ is non-zero somewhere on $`^j`$. * $`\mathrm{\Lambda }^x`$ is graded of degree $`-1`$, i.e., $`\mathrm{\Lambda }^x(^j)\subseteq ^{j-1}`$. * The operators $`\mathrm{\Lambda }^x`$ commute, i.e., $`[\mathrm{\Lambda }^x,\mathrm{\Lambda }^y]=0`$ for all $`x,y\in 𝔤`$. * The operators $`\mathrm{\Lambda }^x`$ transform in the adjoint representation of $`𝔤`$, i.e., $`[\mathrm{\Phi }^x,\mathrm{\Lambda }^y]=\mathrm{\Lambda }^{[x,y]}`$ where $`\mathrm{\Phi }^x=\{\varphi ^x,\cdot \}`$. * The operators $`\mathrm{\Lambda }^x`$ commute with the $`𝒮`$-action on $``$. ###### Proof. Same as the proof of \[B1, Corollary 8.4.1\]. ∎ If we identify $``$ with $`𝒟`$ via $`𝐪`$, then we get the representation $$\mathrm{\Pi }:𝔤\oplus 𝔤\to End_𝒮,\mathrm{\Pi }^{(x,y)}(\psi )=\varphi ^x\star \psi -\psi \star \varphi ^y$$ (5.5) Then Proposition 5.5 gives ###### Corollary 5.6. For $`x\in 𝔤`$ we have $`\mathrm{\Pi }^{(x,x)}=\eta ^x`$ and $`\mathrm{\Pi }^{(x,-x)}=2\varphi ^x+2\mathrm{\Lambda }^x`$.
no-problem/0002/hep-th0002076.html
ar5iv
text
# Black Diamonds at Brane Junctions ## Abstract: We discuss the properties of black holes in brane-world scenarios where our universe is viewed as a four-dimensional sub-manifold of some higher-dimensional spacetime. We consider in detail such a model where four-dimensional spacetime lies at the junction of several domain walls in a higher dimensional anti-de Sitter spacetime. In this model there may be any number $`p`$ of infinitely large extra dimensions transverse to the brane-world. We present an exact solution describing a black $`p`$-brane which will induce on the brane-world the Schwarzschild solution. This exact solution is unstable to the Gregory-Laflamme instability, whereby long-wavelength perturbations cause the extended horizon to fragment. We therefore argue that at late times a non-rotating uncharged black hole in the brane-world is described by a deformed event horizon in $`p+4`$ dimensions which will induce, to good approximation, the Schwarzschild solution in the four-dimensional brane world. When $`p=2`$, this deformed horizon resembles a black diamond and more generally for $`p>2`$, a polyhedron. preprint: hep-th/0002076 Motivated by phenomenological considerations, there has recently been an enormous amount of interest in the possibility that there may exist extra dimensions of space which are quite large. In this framework the universe would be a brane embedded in some higher dimensional spacetime. In particular, this is the basic assumption underlying the models of Randall and Sundrum . Their second model involves a thin “distributional” static flat domain wall, or three-brane, separating two regions of five-dimensional anti-de-Sitter (AdS) spacetime. They solve for the linearized graviton perturbations and find a square integrable bound state representing a gravitational wave confined to the domain wall. 
They also find the linearized bulk or “Kaluza-Klein” graviton modes, and argue that they decouple from the brane and make negligible contributions to the gravitational force between two sources in the brane, so that this force is due primarily to the bound state. In this way they recover an inverse square law attraction rather than the inverse cube law one might naïvely have anticipated in five dimensions. Thus, their work demonstrates that it is possible to localize gravity on a three-brane when there is just one infinitely large extra dimension of space, so that the three-brane is a domain wall. There are many papers which serve as background for this work; for a comprehensive list of references see . The localization of gravity on a domain wall is rather striking and raises various questions. For example, is it possible to localize gravity on a brane or brane intersection when there is more than one large extra dimension of space? In fact, Arkani-Hamed et al. have shown in a simple generalization of the Randall-Sundrum scenario that this is indeed possible (see also ). Their model may be outlined as follows: since it is possible to localize gravity on a domain wall in AdS, consider $`p`$ different domain walls (each with world-volume space dimension $`d-2`$) in a $`d=(p+4)`$-dimensional background spacetime. These branes can intersect at a four-dimensional junction. If the bulk spacetime between the branes consists of $`2^p`$ patches of $`(p+4)`$-dimensional AdS, then it turns out that on the four-dimensional intersection there is a normalizable graviton mode and so four-dimensional gravity is localized on the brane junction. This intersecting brane scenario is not the only possible way to localize gravity in higher dimensions. For example, four-dimensional gravity can also be localized on a three-brane in higher dimensions although in this case the relevant geometry is not AdS. 
If matter trapped on a brane undergoes gravitational collapse then a black hole will form. Such a black hole will have a horizon that extends into the dimensions transverse to the brane: it will be a genuinely $`d`$-dimensional object. Within the context of any brane-world scenario, we need to make sure that the metric on the brane-world, which is induced by the higher-dimensional metric describing the gravitational collapse, is simply the Schwarzschild solution, up to some corrections that are negligibly small so as to be consistent with current observations. In this way we shall recover the usual astrophysical properties of black holes and stars (e.g. perihelion precession, light bending, etc.). The study of the problem of gravitational collapse in the second Randall-Sundrum model was initiated in a recent paper (see also ). There, the authors proposed that what would appear to be a four-dimensional black hole from the point of view of an observer in the brane-world is really a five-dimensional “black cigar” which extends into the extra fifth dimension (or more accurately a “black cigar butt”, or equivalently a “pancake”, because the object only extends a small proper distance in the transverse space compared with its extent on the brane). If this cigar were to extend all the way down to the AdS horizon, then we would recover the metric for a black string in AdS. However, such a black string is unstable near the AdS horizon. This instability, known as the “Gregory-Laflamme” instability, basically means that the string will want to fragment in the region near the AdS horizon. However, the solution is stable far from the AdS horizon near the domain wall. Thus, these authors concluded that the late time solution describes an object which looks like the black string far from the AdS horizon (so the metric on the domain wall is approximately Schwarzschild) but has a horizon that closes off before reaching the AdS horizon forming the “tip” of the black cigar. 
In the analogous situation in one dimension lower (where the domain wall localizes three-dimensional gravity in a larger four-dimensional $`AdS_4`$ background) the exact metric describing the situation is known since it is an example of the C-metric in $`AdS_4`$. Unfortunately, the generalization of this metric to $`AdS_{p+4}`$ is not known at present and so we have to proceed in a more intuitive fashion to arrive at a consistent picture. In this paper, we extend the analysis of gravitational collapse on the brane-world to general situations. In particular, for the purposes of illustration we will consider in detail the model where the brane-world universe is actually a brane-junction: a region where multiple domain walls intersect. However, our formalism can easily be used to describe smoothings of the original Randall-Sundrum scenario, where the three-brane is smeared in the extra dimension and also other higher-dimensional situations; for instance a three-brane embedded in $`d>5`$ dimensions . As we proceed we shall work as far as possible in a general framework with a $`d`$-dimensional background which has a four-dimensional Poincaré symmetry (the restriction to four dimensions is unnecessary, but is the case most relevant for phenomenology): $$ds^2\equiv g_{\mu \nu }(z)dx^\mu dx^\nu =e^{A(z)}\eta _{ab}dx^adx^b+g_{ij}(z)dz^idz^j.$$ (1) Here, $`x^\mu =(x^a,z^i)`$, where $`x^a`$, for $`a=0,\dots ,3`$, are the usual coordinates of four-dimensional Minkowski space and $`z^i=x^{i+3}`$, for $`i=1,\dots ,p`$, are the coordinates on the $`p=(d-4)`$-dimensional transverse space.<sup>2</sup><sup>2</sup>2In our conventions the metric $`g_{\mu \nu }`$ has signature $`(-,+,+,+,\dots )`$. The metric (1) has the form of a “warped product” which under certain conditions is responsible for the localization of gravity. 
In the general case, the localization of gravity in four-dimensions depends on the normalizability, or otherwise, of a certain fluctuating mode in the transverse space representing the four-dimensional graviton. The explicit requirement for normalizability just depends upon the metric and is $$\int d^pzg^{00}(z)\sqrt{g(z)}<\infty ,$$ (2) where $`g(z)=|\mathrm{det}g_{\mu \nu }(z)|`$. In , it was further shown that under very general conditions the normalizability of the four-dimensional mode also implies the decoupling of the associated Kaluza-Klein modes. In particular, for purposes of illustration, we will be interested in a simple example of the general situation described by (1) corresponding to the intersecting brane scenario of . In this example, we glue $`2^p`$ patches of $`AdS_{p+4}`$ together along $`p`$ surfaces which play the rôle of $`p`$ domain walls. Once we have performed this cutting and pasting, the metric of the multi-dimensional patched AdS spacetime which describes the domain wall junction can be written : $$ds^2=\frac{1}{\left(1+k\sum _{i=1}^p|z^i|/\sqrt{p}\right)^2}\left(\eta _{ab}dx^adx^b+dz^idz^i\right),$$ (3) where we have included the factor of $`\sqrt{p}`$ for convenience so that $`k^{-1}`$ is the conventional length scale of the AdS bulk. This metric represents $`p=d-4`$ intersecting $`(d-1)`$-branes, located at $`z^i=0`$, for each $`i`$, which mutually intersect in a four-dimensional junction located at $`z^i=0`$. We want to know how to describe the endpoint of gravitational collapse in our four-dimensional world. Following the work of , a natural guess is simply to replace the (3+1) Minkowski metric appearing in (1) with the (3+1) Schwarzschild metric. Indeed, shortly, we shall prove that the following metric is still a solution of the bulk Einstein equations with the requisite source term: $$ds^2=e^{A(z)}\left(-U(r)dt^2+U(r)^{-1}dr^2+r^2(d\theta ^2+\mathrm{sin}^2\theta d\varphi ^2)\right)+g_{ij}(z)dz^idz^j.$$ (4) where $`U(r)=1-2G_4M_4/r`$. 
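The convergence of the normalizability integral (2) for the metric (3) can be checked directly. For $`p=1`$ the integrand (up to the sign of $`g^{00}`$) is $`|g^{00}|\sqrt{g}=(1+k|z|)^{-3}`$, whose exact integral over the line is $`1/k`$. The following numerical sketch is our own illustration, with an arbitrary choice of $`k`$:

```python
# Our own numerical check that (2) converges for the patched-AdS metric (3):
# for p = 1 the conformal factor is W = (1 + k|z|)^{-1}, and |g^{00}| sqrt(g)
# = W^{-2} * W^{p+4} = W^{p+2} = (1 + k|z|)^{-3}, with exact integral 1/k.

k = 2.0

def integrand(z):
    return (1.0 + k * abs(z)) ** -3

# trapezoid rule on [-L, L]; the two tails beyond L contribute 1/(2k(1+kL)^2)
L, N = 200.0, 200_000
h = 2.0 * L / N
total = 0.5 * (integrand(-L) + integrand(L))
total += sum(integrand(-L + i * h) for i in range(1, N))
total *= h

assert abs(total - 1.0 / k) < 1e-3   # matches the closed form 1/k
```

The same power counting shows convergence for every $`p`$: along radial directions the measure grows like $`r^{p-1}`$ while the integrand falls like $`r^{-(p+2)}`$.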
Clearly, this metric describes a black hole horizon which is extended in $`p`$ extra dimensions. In other words, this is the metric of a “black $`p`$-brane” in the higher-dimensional spacetime. Naïvely, it follows that if we wish to describe a black hole on the four-dimensional brane-world all we have to do is replace the Minkowski metric $`\eta _{ab}`$ in (1) with the four-dimensional Schwarzschild metric. We now show that we can generalize the metric (1) to $$ds^2\equiv g_{\mu \nu }dx^\mu dx^\nu =e^{A(z)}\stackrel{~}{g}_{ab}(x)dx^adx^b+g_{ij}(z)dz^idz^j,$$ (5) where $`\stackrel{~}{g}_{ab}(x)`$ is any four-dimensional Ricci flat metric, and still satisfy Einstein’s equations. First of all, let us compute the change in the Einstein tensor when we replace the four-dimensional Minkowski metric by the Ricci-flat metric: $`\eta _{ab}\to \stackrel{~}{g}_{ab}`$. It is convenient to use the general form of the Einstein tensor for metrics of the form $`g_{\mu \nu }=e^A\stackrel{~}{g}_{\mu \nu }`$ (see for example ): $$G_{\mu \nu }=\stackrel{~}{G}_{\mu \nu }+\frac{d-2}{2}\left[\frac{1}{2}\stackrel{~}{\nabla }_\mu A\stackrel{~}{\nabla }_\nu A+\stackrel{~}{\nabla }_\mu \stackrel{~}{\nabla }_\nu A-\stackrel{~}{g}_{\mu \nu }\left(\stackrel{~}{\nabla }_\rho \stackrel{~}{\nabla }^\rho A-\frac{d-3}{4}\stackrel{~}{\nabla }_\rho A\stackrel{~}{\nabla }^\rho A\right)\right].$$ (6) The new metric $$d\stackrel{~}{s}^2\equiv \stackrel{~}{g}_{\mu \nu }dx^\mu dx^\nu =\stackrel{~}{g}_{ab}(x)dx^adx^b+e^{-A(z)}g_{ij}(z)dz^idz^j.$$ (7) now represents a genuine (unwarped) product of a four-dimensional space with metric $`\stackrel{~}{g}_{ab}(x)`$ and $`p`$-dimensional space with metric $`\stackrel{~}{g}_{ij}(z)=e^{-A(z)}g_{ij}(z)`$. Using (6), it is easy to see that the only components of the Einstein tensor which change when we replace $`\eta _{ab}\to \stackrel{~}{g}_{ab}`$ are $`G_{ab}`$. 
Using the fact that $`\stackrel{~}{g}_{ab}`$ is Ricci flat, so $`\stackrel{~}{G}_{ab}(\stackrel{~}{g}_{ab})=0`$, the change is easily computed:<sup>3</sup><sup>3</sup>3Notice that since $`G_{ab}^{(0)}\propto \eta _{ab}`$ and $`T_{ab}^{(0)}\propto \eta _{ab}`$ the following expressions in (8) and (9) are symmetric in $`a`$ and $`b`$. $$\mathrm{\Delta }G_{ab}=G_a^{(0)c}\mathrm{\Delta }\stackrel{~}{g}_{cb},$$ (8) where $`\mathrm{\Delta }\stackrel{~}{g}_{ab}=\stackrel{~}{g}_{ab}-\eta _{ab}`$ and $`G_{\mu \nu }^{(0)}`$ is the original Einstein tensor for the metric (1). The remaining part of the proof requires us to show that the stress-tensor changes in a similar fashion: i.e. the only components of $`T_{\mu \nu }`$ (where we include any cosmological constant term in $`T_{\mu \nu }`$) that change are $$\mathrm{\Delta }T_{ab}=T_a^{(0)c}\mathrm{\Delta }\stackrel{~}{g}_{cb}.$$ (9) The proof of (9) requires a little work and depends upon what kind of sources are being considered. The first situation which is relevant to the original Randall-Sundrum as well as the intersecting brane scenarios, is when the sources are static external sources. In these cases it is a simple matter to show that the stress-tensor, including the cosmological constant term, is linear in components of the background metric: in these cases (9) follows immediately. The second case that is simple to consider is when the branes are produced by a scalar field with the Lagrangian $$ℒ=\sqrt{g}\left[-\frac{1}{2}\partial _\mu \varphi \partial ^\mu \varphi -𝒱(\varphi )\right].$$ (10) In this case, following the discussion in it is also straightforward to show that the stress-tensor is linear in the four-dimensional components of the metric. 
The point is that $`\varphi =\varphi (z)`$ only and so the only components of the stress-tensor that are altered by $`\eta _{ab}\to \stackrel{~}{g}_{ab}(x)`$ are $`T_{ab}`$ and these are linear in $`\stackrel{~}{g}_{ab}(x)`$: $$T_{ab}=e^{A(z)}\stackrel{~}{g}_{ab}(x)\left[-\frac{1}{2}\partial _i\varphi (z)\partial ^i\varphi (z)-𝒱(\varphi )\right].$$ (11) Furthermore, the background scalar field satisfies an equation-of-motion that does not depend on the components of the metric $`\stackrel{~}{g}_{ab}`$ and so the field itself is not changed by $`\eta _{ab}\to \stackrel{~}{g}_{ab}`$. Hence, in this case also (9) is recovered and furthermore the scalar field background remains unchanged. Finally, we could consider the case when the brane is produced by more complicated tensor fields; for instance this is the situation in string theory. After some analysis one finds a similar picture in this case as for the scalar field; the point is that $`T_{\mu \nu }`$ is linear in components of the metric with coefficients that are in general functions of $`det\stackrel{~}{g}_{ab}(x)`$. When Minkowski space is replaced by the Schwarzschild solution, $`det\stackrel{~}{g}_{ab}(x)`$ is unchanged and so the change in the stress tensor is linear as in (9). Furthermore, the equations-of-motion of the tensor field are also not modified and so the tensor field background remains unchanged . Hence we have established our goal in a very general setting; namely, we may replace the background (1) by (4) and still solve Einstein’s equations with the same background fields. We now wish to examine the causal structure of the black $`p`$-brane metric (4). Our aim is to show that in the background (4), there are generically singularities in the transverse space, in addition to the usual singularity of the black hole itself. 
In order to show this we have to show that (i) these singularities can be reached by a freely falling observer in finite proper time and (ii) that the tidal forces experienced by such an observer will diverge; that is, for each causal geodesic we should work out the frame components of the Weyl tensor which are calculated relative to a frame which is parallelly propagated along the geodesic. Our philosophy is that once we have demonstrated the existence of these singularities then this indicates that the spacetime (4) is pathological and that some other metric should describe the Schwarzschild solution on the brane. We will then proceed to intuit the properties of this solution. Before we calculate the geodesics in detail, however, we can glean some information from the curvature invariants of the spacetime. Indeed, it is easy to show that the square of the Riemann tensor includes the term $$R_{\mu \nu \rho \sigma }R^{\mu \nu \rho \sigma }=R_{abcd}^{(4)}R^{(4)abcd}e^{2A}+\cdots =\frac{48M^2}{r^6}e^{2A}+\cdots ,$$ (12) where the ellipsis stands for terms with fewer powers of $`e^{A(z)}`$. In a generic scenario, we expect that the warp-factor $`e^{A(z)}\rightarrow \infty `$ in some regions in the transverse space. For instance, in the intersecting-brane case, just as in the original Randall-Sundrum picture, this is indeed the case: from (3) the warp factor $`e^{A(z)}`$ goes to infinity at the “horizon” of AdS.<sup>4</sup><sup>4</sup>4The horizon of the patched AdS space is at $`\sum _{i=1}^p|z^i|=\infty `$. Hence the curvature invariant (12) will generically diverge as we approach the horizon of AdS and also as we approach the singular core of the black $`p`$-brane ($`r=0`$). Thus, we suspect that inertial observers will see infinite tidal forces as they approach these regions. In order to confirm this suspicion, we turn to an analysis of the space of causal geodesics. To begin, let us consider geodesics in the general background (5).
At this stage we do not have to specify a form for the four-dimensional metric $`\stackrel{~}{g}_{ab}`$. The geodesic equation is $$\ddot{x}^\mu +\mathrm{\Gamma }_{\nu \rho }^\mu \dot{x}^\nu \dot{x}^\rho =0,$$ (13) where the derivatives are with respect to the affine parameter $`\tau `$ (taken to be the proper time for time-like geodesics). These equations imply $$\frac{d}{d\tau }\left(g_{\mu \nu }\dot{x}^\mu \dot{x}^\nu \right)=0,$$ (14) or on integrating $$g_{\mu \nu }\dot{x}^\mu \dot{x}^\nu \equiv e^{-A}\stackrel{~}{g}_{ab}\dot{x}^a\dot{x}^b+g_{ij}\dot{z}^i\dot{z}^j=-\sigma ,$$ (15) where we can take $`\sigma =1`$ for a time-like geodesic, and $`\sigma =0`$ for a null geodesic. Using the form (5) for the metric we find that (13) for $`\mu =i`$ yields $$\ddot{z}^i+\mathrm{\Gamma }_{jk}^i\dot{z}^j\dot{z}^k-\frac{1}{2}g^{il}(\partial _le^{-A})\stackrel{~}{g}_{ab}\dot{x}^a\dot{x}^b=0,$$ (16) which can be simplified by using (15) to an equation involving only the $`\{z^i\}`$ coordinates: $$\ddot{z}^i+\mathrm{\Gamma }_{jk}^i\dot{z}^j\dot{z}^k-\frac{1}{2}g^{il}\partial _lA\left(\sigma +g_{jk}\dot{z}^j\dot{z}^k\right)=0.$$ (17) We now contract this equation with $`g_{il}\dot{z}^l`$ to arrive at $$\frac{d}{d\tau }\left[e^{-A}\left(\sigma +g_{jk}\dot{z}^j\dot{z}^k\right)\right]=0,$$ (18) and so $$\sigma +g_{jk}\dot{z}^j\dot{z}^k=\xi ^2e^A,$$ (19) where $`\xi `$ is a constant of integration. Now we can return to (15) and use (19) to eliminate the $`z^i`$-derivatives to arrive at $$\stackrel{~}{g}_{ab}\dot{x}^a\dot{x}^b=-\xi ^2e^{2A}.$$ (20) This equation has a remarkably simple interpretation. Firstly, let us define a new affine parameter $`\nu `$ via $$\frac{d\nu }{d\tau }=e^A.$$ (21) Then (20) is simply $$\stackrel{~}{g}_{ab}\frac{dx^a}{d\nu }\frac{dx^b}{d\nu }=-\xi ^2,$$ (22) i.e. the equation for a four-dimensional time-like geodesic in the metric $`\stackrel{~}{g}_{ab}`$ with an affine parameter related to the higher-dimensional one by the warp factor (21).
It should not be surprising that a null geodesic in $`(p+4)`$-dimensions is equivalent to a time-like geodesic in four-dimensions: the non-trivial motion in the transverse dimensions gives rise to a mass in four dimensions. What is gratifying is the simple relation between the four- and higher-dimensional affine parameters (21). For the intersecting brane scenario, we can be more specific about the geodesics. In a given patch define $`z=\sum _{i=1}^p|z^i|/\sqrt{p}`$. The metric (3) in this patch is then $$ds^2=\frac{1}{(1+kz)^2}\left(g_{ab}dx^adx^b+dz^2+dz_{\perp }^idz_{\perp }^i\right),$$ (23) where $`z_{\perp }^i`$ represent the $`p-1`$ remaining transverse coordinates. It is straightforward to find the geodesics in this case. The null geodesics are straight lines: $$z=c/(k\tau )-1/k,z_{\perp }^i=c_{\perp }^i/(k\tau )+a_{\perp }^i,$$ (24) for constants $`c`$, $`c_{\perp }^i`$ and $`a_{\perp }^i`$. The time-like geodesics are, correspondingly, curved: $$z=c/\mathrm{sin}(k\tau )-1/k,z_{\perp }^i=c_{\perp }^i/\mathrm{tan}(k\tau )+a_{\perp }^i.$$ (25) The affine parameter has been defined in both cases so that the geodesics approach the horizon $`z=\infty `$ as $`\tau \rightarrow 0^{-}`$. Notice that the geodesics reach the horizon of the AdS space, $`z=\infty `$, after a finite lapse of affine parameter. The four-dimensional motion of the geodesics is now determined by (20) with $$\xi ^2=\frac{c^2+c_{\perp }^ic_{\perp }^i}{k^2c^4}.$$ (26) Returning to the more general case, we now take the four-dimensional metric $`\stackrel{~}{g}_{ab}`$ to be the Schwarzschild metric of a black hole. In that case we can specify the behaviour of the geodesics in four dimensions. Firstly, the metric (4) has two Killing vectors: $`k=\partial /\partial t`$ and $`m=\partial /\partial \varphi `$ which give rise to the conserved quantities $`E=-k\cdot u`$ and $`L=m\cdot u`$, where $`u^\mu =\dot{x}^\mu `$ is the velocity along a geodesic.
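As a quick numerical consistency check, the conserved combination of (19) can be evaluated along the closed-form time-like geodesic (25) with the transverse coordinates held constant, for which $`\xi ^2=1/k^2c^2`$ from (26) and the warp factor in this patch is $`(1+kz)^2`$. The sketch below uses illustrative parameter values (they are not from the paper) and verifies the identity in floating point:

```python
import math

# Illustrative values; c < 0 so that z -> +infinity as tau -> 0^-.
k, c = 0.7, -1.3
xi2 = 1.0 / (k * c)**2        # xi^2 from (26) with the transverse constants zero

def check(tau):
    z    = c / math.sin(k * tau) - 1.0 / k              # time-like geodesic (25)
    zdot = -c * k * math.cos(k * tau) / math.sin(k * tau)**2
    warp = (1.0 + k * z)**2                             # e^{A(z)} in this patch
    lhs  = 1.0 + zdot**2 / warp                         # sigma + g_zz zdot^2, sigma = 1
    rhs  = xi2 * warp                                   # xi^2 e^A, the right side of (19)
    return lhs, rhs

for tau in (-1.2, -0.6, -0.1, -0.01):
    lhs, rhs = check(tau)
    assert abs(lhs - rhs) < 1e-9 * rhs
```

Both sides reduce to $`1/\mathrm{sin}^2(k\tau )`$, so the agreement is exact up to rounding.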
Rearranging gives $$\dot{t}=\frac{Ee^A}{U},\dot{\varphi }=\frac{Le^A}{r^2\mathrm{sin}^2\theta }.$$ (27) Since there are two conserved quantities, and we may consider motion in the equatorial plane $`\theta =\pi /2`$, the effective equation for radial motion may be deduced from (22): $$\dot{r}^2+e^{2A}\left[\left(\xi ^2+\frac{L^2}{r^2}\right)U(r)-E^2\right]=0.$$ (28) By introducing the new affine parameter $`\nu `$ in (21) and rescaling $`\stackrel{~}{r}=r/\xi `$, $`\stackrel{~}{E}=E/\xi `$, $`\stackrel{~}{L}=L/\xi ^2`$ and $`\stackrel{~}{M}=M/\xi `$, this can be written as $$\left(\frac{d\stackrel{~}{r}}{d\nu }\right)^2+\left(1+\frac{\stackrel{~}{L}^2}{\stackrel{~}{r}^2}\right)\left(1-\frac{2\stackrel{~}{M}}{\stackrel{~}{r}}\right)=\stackrel{~}{E}^2,$$ (29) which is precisely the radial geodesic equation for a four-dimensional Schwarzschild black hole of mass $`\stackrel{~}{M}`$. Note that $`\nu `$ is the proper time along this four-dimensional geodesic. From here, the analysis is very similar to that performed in . To wit, there are two distinct classes of time-like geodesics which experience infinite affine parameter $`\nu `$: those which are bound states (ones that orbit in a restricted finite range of $`\stackrel{~}{r}`$), and those which are not (ones which make it to $`\stackrel{~}{r}=\infty `$). For the orbits which escape to $`\stackrel{~}{r}=\infty `$ the late time behaviour is $$\stackrel{~}{r}\simeq \nu \sqrt{\stackrel{~}{E}^2-1}$$ (30) and consequently we recover the integral $$r\simeq \sqrt{E^2-\xi ^2}\int ^\tau e^A𝑑\tau .$$ (31) In the intersecting-brane scenario, consider a time-like geodesic (25) with $`z_{\perp }^i=`$ constant and so $`z=c/\mathrm{sin}(k\tau )-1/k`$. In this case $`\nu =-(1/\xi ^2k)\mathrm{cot}(k\tau )`$. So along such a geodesic, which is a bound state in four-dimensions (so that $`r`$ remains finite), it is easy to see that the curvature invariant (12) diverges at the horizon of AdS. So for such geodesics there is a genuine singularity there.
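The passage from the radial equation (28) to its rescaled Schwarzschild form (29) is a pure algebraic identity under $`\stackrel{~}{r}=r/\xi `$, $`\stackrel{~}{E}=E/\xi `$, $`\stackrel{~}{L}=L/\xi ^2`$, $`\stackrel{~}{M}=M/\xi `$ and $`d\nu /d\tau =e^A`$, with $`U(r)=1-2M/r`$. A short numerical sketch (arbitrary illustrative values, standard sign conventions assumed) makes this concrete:

```python
import math, random

random.seed(1)
for _ in range(100):
    # illustrative values; they need not describe an allowed orbit
    xi = random.uniform(0.5, 2.0)
    L  = random.uniform(0.0, 3.0)
    M  = random.uniform(0.1, 1.0)
    E  = random.uniform(0.5, 3.0)
    r  = random.uniform(3.0, 20.0)
    A  = random.uniform(-1.0, 1.0)

    U = 1.0 - 2.0 * M / r
    # (dr/dtau)^2 read off from the radial equation (28)
    rdot2 = math.exp(2 * A) * (E**2 - (xi**2 + L**2 / r**2) * U)

    # tilde variables of the rescaled equation (29)
    rt, Et, Lt, Mt = r / xi, E / xi, L / xi**2, M / xi
    drdnu2 = Et**2 - (1.0 + Lt**2 / rt**2) * (1.0 - 2.0 * Mt / rt)

    # dr/dtau = xi e^{A} dr~/dnu, so the two must agree after rescaling
    assert abs(rdot2 - xi**2 * math.exp(2 * A) * drdnu2) < 1e-9 * max(1.0, abs(rdot2))
```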
However, for the second type of geodesics which escape to $`r=\infty `$ we have from (31) $`r\propto \mathrm{cot}(k\tau )`$ and so along these geodesics the curvature invariant (12) remains finite. In order to establish the existence of a singularity at the horizon along these geodesics, we should examine the frame components of the Riemann tensor in an orthonormal frame which has been parallelly propagated along the geodesic. These frame components will measure the tidal forces experienced by the free-falling observer who moves along the geodesic. We may calculate that the tangent vector to such a non-bound state geodesic (for $`L=0`$) is given as<sup>5</sup><sup>5</sup>5Where we are using the $`(t,r,\theta ,\varphi ,z^i)`$ ordering for the components. $$u^\mu =(e^AE/U(r),e^A\sqrt{E^2-\xi ^2U(r)},0,0,\dot{z}^i).$$ (32) A parallelly propagated unit normal to the geodesic is likewise given as $$n^\mu =(e^{A/2}\sqrt{E^2-\xi ^2U(r)}/(\xi U(r)),e^{A/2}E/\xi ,0,0,0).$$ (33) Using just these two orthonormal vectors we can see a potential divergent tidal force. Indeed, one of the frame components is calculated to be $$R_{(u)(n)(u)(n)}=R_{\mu \nu \rho \sigma }u^\mu n^\nu u^\rho n^\sigma =\frac{2M\xi ^2}{r^3}e^{2A}+\cdots ,$$ (34) where the ellipsis represents less singular terms. In the intersecting-brane scenario this behaves as $`1/(\mathrm{sin}^4(k\tau )\mathrm{cot}^3(k\tau ))`$ which does diverge at the AdS horizon. It follows that the black $`p`$-brane is singular all the way along the AdS horizon. It is well known that black strings and $`p`$-branes in asymptotically flat space are unstable to long-wavelength perturbations—the “Gregory-Laflamme instability”. A black hole horizon is entropically preferred to a sufficiently large “patch” of $`p`$-brane horizon. Thus, a black $`p`$-brane horizon will generically want to fragment and form an array of black holes. The argument is worth recalling.
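The contrast between the two diagnostics can be made concrete: along such an escaping geodesic $`r`$ grows like $`\mathrm{cot}(k\tau )`$ while $`e^{2A}`$ grows like $`1/\mathrm{sin}^4(k\tau )`$, so the scalar-invariant combination $`e^{2A}/r^6`$ of (12) tends to zero but the frame-component combination $`e^{2A}/r^3`$ of (34) blows up as $`\tau \rightarrow 0^{-}`$. A sketch with all constants and overall positive factors set to one (illustrative only):

```python
import math

k = 1.0  # illustrative AdS curvature scale; prefactors dropped

def diagnostics(tau):
    e2A = 1.0 / math.sin(k * tau)**4              # growth of e^{2A} along the geodesic
    r   = -math.cos(k * tau) / math.sin(k * tau)  # r grows like -cot(k tau) for tau -> 0^-
    return e2A / r**6, e2A / r**3                 # invariant (12) piece vs tidal (34) piece

taus = [-0.4, -0.2, -0.1, -0.05, -0.01]
scal  = [diagnostics(t)[0] for t in taus]
tidal = [diagnostics(t)[1] for t in taus]
assert all(b < a for a, b in zip(scal, scal[1:]))    # scalar invariant piece -> 0
assert all(b > a for a, b in zip(tidal, tidal[1:]))  # tidal piece diverges
```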
The relevant situation to consider in the present context is a four-dimensional Schwarzschild black hole embedded in flat $`(p+4)`$-dimensional spacetime; i.e. a $`p`$-brane in $`p+4`$ dimensions. Let $`R_4`$ be the radius of the horizon of the black $`p`$-brane which is related to the associated four-dimensional Schwarzschild mass of the solution by $`R_4=2G_4M_4`$. Hence the entropy for a portion of such an object with “area” $`L^p`$, in the $`p`$-dimensional transverse space, is $`\sim L^pR_4^2`$. This object has an effective $`(p+4)`$-dimensional mass of $`M_{\ast }=L^pR_4/G_{\ast }`$, where $`G_{\ast }`$ is the $`(p+4)`$-dimensional Newton constant. Let us compare this to a $`(p+4)`$-dimensional black hole carrying the same mass. Such an object would have a horizon radius $`R_{\ast }=2(G_{\ast }M_{\ast })^{1/(p+1)}`$ and hence an entropy $`R_{\ast }^{p+2}=(G_{\ast }M_{\ast })^{(p+2)/(p+1)}=(L^pR_4)^{(p+2)/(p+1)}`$. So when $`(L^pR_4)^{(p+2)/(p+1)}\gtrsim L^pR_4^2`$, i.e. $`L\gtrsim R_4`$, we expect that the black $`p`$-brane becomes unstable with respect to the hyperspherical black hole. Another way to say this is that there will be a destabilizing mode of wavelength $`\sim L`$, i.e. with a wavelength $`\lambda \gtrsim R_4`$. One might suspect that a similar instability occurs for black $`p`$-branes in spacetimes that are asymptotically AdS. In the authors argued that a black string (or 1-brane) in AdS will generically have to pinch off down near the AdS horizon. This is because at large $`z`$ the string is so “skinny” that it does not see the curvature scale of the ambient AdS space, and so the argument of Gregory and Laflamme goes through as it would for a string in flat space. Generalizing to the intersecting-brane scenario, the $`p`$-brane is very thin at large $`z`$ because the proper radius of its horizon gets warped: $$R_4(z)=e^{-A(z)/2}R_4=\frac{R_4}{1+kz}.$$ (35) Hence using the logic of the Gregory-Laflamme instability, at a given $`z`$ the $`p`$-brane is unstable to a mode of wavelength $`\lambda (z)\gtrsim R_4(z)`$.
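The entropy comparison above is easy to tabulate. Dropping all order-one factors and setting the higher-dimensional Newton constant to one, a patch of brane horizon loses to a hyperspherical black hole of the same mass exactly when the patch size exceeds the four-dimensional horizon radius; a minimal sketch:

```python
# Compare a patch of p-brane horizon with a (p+4)-dimensional black hole of the
# same mass (higher-dimensional Newton constant set to 1; O(1) factors dropped).
def brane_entropy(L, R4, p):
    return L**p * R4**2

def hole_entropy(L, R4, p):
    mass = L**p * R4                       # effective mass of the horizon patch
    return mass**((p + 2.0) / (p + 1.0))   # entropy of the competing black hole

for p in (1, 2, 3):
    R4 = 1.0
    assert hole_entropy(10.0, R4, p) > brane_entropy(10.0, R4, p)  # L >> R4: fragments
    assert hole_entropy(0.1, R4, p) < brane_entropy(0.1, R4, p)    # L << R4: stable
```

The crossover sits at L = R4, where the two entropies coincide, matching the scaling argument in the text.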
The important point is that AdS acts like a box of size $`k^{-1}`$ and so can only allow unstable perturbations of wavelength $`\lesssim k^{-1}`$. Hence, there exists a critical value for the warp-factor when the wavelength of the destabilizing mode can just fit inside the AdS box: $$e^{-A(z_{\mathrm{crit}})/2}R_4\simeq k^{-1},$$ (36) or in this case $$z_{\mathrm{crit}}\simeq R_4-k^{-1}.$$ (37) This corresponds to a proper distance $$r_{\mathrm{crit}}\simeq k^{-1}\mathrm{log}(kR_4)\simeq k^{-1}\mathrm{log}(2kG_4M_4),$$ (38) from the junction. Thus, at sufficiently large $`z`$ any large perturbation will fit inside the AdS “box”, and so an instability will occur near the AdS horizon. On the other hand, when $`z`$ is small enough the potential instability occurs at wavelengths much larger than $`k^{-1}`$ and so the instability will not occur in this region. Just as for the black string in AdS, a black $`p`$-brane in AdS is unstable near the AdS horizon but stable far from it. After the black $`p`$-brane fragments, a stable portion of horizon will remain “tethered” to the boundary of AdS. Of course, if we are in the intersecting brane scenario then the boundary of AdS has been cut away and so this stable remnant of horizon will envelop the brane junction. The detached pieces of horizon will presumably fall into the bulk of AdS.<sup>6</sup><sup>6</sup>6Since we do not actually know the explicit form of a metric which can describe the dynamics of such a situation, we have to use our intuition here. The stable remnant of $`p`$-brane horizon acts as if it has a tension, balancing the force pulling it towards the center of AdS. This remnant portion of horizon, far from being a spherically symmetric black hole, will be a highly deformed black object in $`(p+4)`$ dimensions. It is amusing to think about the gross properties of this object.
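Taking the patch warp factor $`R_4(z)=R_4/(1+kz)`$ of (35) at face value, the critical height and the corresponding proper distance from the junction follow by elementary algebra; a sketch with illustrative numbers chosen so that $`kR_4`$ is large:

```python
import math

k, R4 = 2.0, 500.0   # illustrative values with k * R4 >> 1

# Critical height: R4 / (1 + k z) ~ 1/k  gives  z_crit = R4 - 1/k
z_crit = R4 - 1.0 / k
assert abs(R4 / (1.0 + k * z_crit) - 1.0 / k) < 1e-12

# Proper distance from the junction: integral of dz/(1 + k z) up to z_crit
r_crit = math.log(1.0 + k * z_crit) / k
assert abs(r_crit - math.log(k * R4) / k) < 1e-12   # the log estimate of (38)
```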
In , the authors argued that after the black string fragments, the stable object left behind would resemble a “black cigar” (or more realistically a “pancake” because $`R_4\gg k^{-1}`$, implying that the object only extends a small proper distance in the transverse space compared with the brane). When $`p=2`$, we see that the black membrane will tend to fragment at some surface where $`z=(|z^1|+|z^2|)/\sqrt{2}=`$ constant given by (37). In other words, the black 2-brane will tend to fragment along a diamond shaped surface. After fragmentation, the force balance between the horizon tension and the AdS potential, together with the symmetry of the problem, will preserve this basic shape. In other words, a black hole at the junction of two domain walls in $`AdS_6`$, far from being spherically symmetric, will be deformed into the shape of a “black diamond” as illustrated in Figure 1. In general, for an arbitrary number $`p`$ of domain walls, the horizon will be deformed towards the shape of a polyhedron with $`2^p`$ sides. Each side will be a portion of horizon corresponding to a given “patch” of $`AdS_{p+4}`$ used to construct the domain wall junction. In this paper we have studied aspects of gravitational collapse in certain brane-world scenarios, where gravity is localized on some four-dimensional submanifold of a higher dimensional space. If this scenario is to be phenomenologically viable, then a brane-world observer should see that the endpoint of the gravitational collapse of uncharged non-rotating matter trapped on the brane is, at least to a good approximation, the Schwarzschild solution. In other words, there should exist a metric in the higher-dimensional bulk spacetime which induces an approximation to the Schwarzschild solution on the brane up to corrections that are small for $`r\gg k^{-1}`$. We have shown that when one intersects the four-dimensional world with a black $`p`$-brane, the induced metric at the junction is exactly the Schwarzschild solution.
However, in a concrete example corresponding to the intersecting branes, we have analyzed the causal structure of this solution, and found that the AdS horizon is singular. In fact, we found that the horizon region is a “pp-curvature” singularity, which simply means that parallelly propagated frame components of the curvature tensor diverge as the region is approached along causal geodesics, whereas scalar curvature invariants do not necessarily diverge. This singularity will be visible from the brane-world, and one might regard this as a pathology of the model.<sup>7</sup><sup>7</sup>7However, as in , we would argue that anything emerging from the singularity at the AdS horizon will be heavily red-shifted by the time it reaches the brane, and therefore it will likely be heavily suppressed. At any rate, we have argued that the black $`p`$-brane solution is unstable, and that the brane horizon will want to fragment near the AdS horizon. Presumably, the portions of the brane horizon which break away from the brane-world might fall into the bulk of AdS and form a bulk black hole. We have suggested that at late times this system settles down to a deformed horizon which intersects the brane-world in such a way that the metric induced at the domain wall junction will still be approximately the Schwarzschild solution. While we do not know the exact metric describing this configuration, we conjecture that this metric exists and that it is the unique stable vacuum solution that describes a non-rotating uncharged black hole on the domain wall junction. Our analysis can easily be extended to other cases. For instance, we may consider smoothed versions of the Randall-Sundrum scenario where the domain wall is created by some matter field. We have shown that in this case also the brane can be intersected by a black string without changing the matter field background.
In this case, far from the brane the geometry approaches that of $`AdS_5`$ and so the analysis of the singularities at the AdS horizon that we have presented is equally valid in this case. We can also easily apply our analysis to higher-dimensional cases that do not correspond to intersecting branes; for instance three-branes embedded in dimension $`d>5`$. Finally, it would clearly be desirable to find the exact form of the metric describing the end-point of gravitational collapse on the brane junction and hence determine the corrections to the Schwarzschild metric. ###### Acknowledgments. We thank Sean Carroll for a stimulating conversation. A.C. is supported in part by funds provided by the U.S. Department of Energy under cooperative research agreement DE-FC02-94ER40818. A.C. thanks the T-8 group at Los Alamos for hospitality while this project was initiated. C.C. is an Oppenheimer Fellow at the Los Alamos National Laboratory. C.C., J.E. and T.J.H. are supported by the US Department of Energy under contract W-7405-ENG-36.
no-problem/0002/gr-qc0002061.html
# Metric Unification of Gravitation and Electromagnetism Solves the Cosmological Constant Problem ## Abstract We first review the cosmological constant problem, and then mention a conjecture of Feynman according to which the general relativistic theory of gravity should be reformulated in such a way that energy does not couple to gravity. We point out that our recent unification of gravitation and electromagnetism through a symmetric metric tensor has the property that free electromagnetic energy and the vacuum energy do not contribute explicitly to the curvature of spacetime just like the free gravitational energy. Therefore in this formulation of general relativity, the vacuum energy density has its very large value today as in the early universe, while the cosmological constant does not exist at all. The mysterious cosmological constant<sup>1</sup><sup>1</sup>1See references for a nontechnical and for technical reviews. has been with us for more than eighty years ever since Einstein introduced it in 1917 to obtain a static universe by modifying his field equations to $$R_{\mu \nu }-\frac{1}{2}Rg_{\mu \nu }+\lambda _bg_{\mu \nu }=\frac{8\pi G}{c^4}T_{\mu \nu },$$ (1) where $`\lambda _b`$ is the (bare) cosmological constant, $`R_{\mu \nu }`$ is the Ricci tensor, $`R=R_\mu ^\mu `$ is the curvature scalar, $`G`$ is Newton’s gravitational constant and $`c`$ is the speed of light. Here $`T_{\mu \nu }`$ is the energy-momentum tensor which represents the energy content of space to which all forms of energy such as matter energy, radiation energy, electromagnetic energy, thermal energy, etc. contribute. Gravitational energy, however, is not included in $`T_{\mu \nu }`$ because it is already included (implicitly) on the left of eq.(1).
For a homogeneous and isotropic universe $`T_{\mu \nu }`$ necessarily takes the form of the energy-momentum tensor of a perfect fluid $$T_{\mu \nu }=\left(\rho +\frac{p}{c^2}\right)U_\mu U_\nu +\frac{p}{c^2}g_{\mu \nu },$$ (2) where $`\rho `$, $`p`$, and $`U_\mu `$ are respectively the energy density, pressure and four-velocity of the fluid. When the field equations (1) are applied to the Robertson-Walker metrics $$ds^2=-c^2dt^2+a(t)^2\left[\frac{dr^2}{1-kr^2}+r^2(d\theta ^2+sin^2\theta d\varphi ^2)\right]$$ (3) the Friedmann equation ensues: $$\left(\frac{\dot{a}}{a}\right)^2=\frac{8\pi G}{c^4}\rho +\frac{\lambda _bc^2}{3}-\frac{kc^2}{a^2},$$ (4) where $`a(t)`$ is the scale factor of the universe and $`k=-1,0,+1`$ for a universe that is respectively spatially open, flat, and closed. Be it defined as ‘empty space’ or ‘the state of lowest energy’ of a theory of particles, the vacuum has a lot of energy associated with it. The energy of the vacuum per unit volume, the vacuum energy density, has numerous sources. Virtual fluctuations of each quantum field corresponding to a particle and the potential energy of each field contribute to it. Stipulating that the vacuum be Lorentz invariant entails that its energy-momentum tensor be of the form $$T_{\mu \nu }^{vac}=-\rho _{vac}g_{\mu \nu },$$ (5) because $`g_{\mu \nu }`$ is the only $`4\times 4`$ tensor that is invariant under Lorentz transformations. Comparing eq.(5) with eq.(2) reveals that the vacuum has the equation of state $`p_{vac}/c^2=-\rho _{vac}`$. According to our current understanding, the value of the density of energy that resides in the vacuum has no relevance in nongravitational physics at both the classical and quantum levels.
However, being a form of energy density, $`\rho _{vac}`$ takes its place in the field equations (1), thus modifying $`T_{\mu \nu }`$ to $$T_{\mu \nu }=T_{\mu \nu }^M+T_{\mu \nu }^{vac},$$ (6) where now $`T_{\mu \nu }^M`$ is the total energy-momentum tensor of the space other than that of the vacuum. With $`T_{\mu \nu }^{vac}`$ included, the Friedmann equation (4) changes to $$\left(\frac{\dot{a}}{a}\right)^2=\frac{8\pi G}{c^4}\rho +\frac{8\pi G}{c^4}\rho _{vac}+\frac{\lambda _bc^2}{3}-\frac{kc^2}{a^2},$$ (7) from which the ‘effective cosmological constant’ is defined to be $`\lambda ^{eff}`$ $`=`$ $`\lambda _b+\lambda _{vac}`$ (8) $`=`$ $`\lambda _b+{\displaystyle \frac{8\pi G}{c^4}}\rho _{vac}`$ $`=`$ $`{\displaystyle \frac{8\pi G}{c^4}}\left({\displaystyle \frac{c^4\lambda _b}{8\pi G}}+\rho _{vac}\right)`$ $`=`$ $`{\displaystyle \frac{8\pi G}{c^4}}\rho _{vac}^{eff}.`$ This means that even if the bare cosmological constant $`\lambda _b`$ is zero, the effective cosmological constant is not. Anything that contributes to the energy density of the vacuum is tantamount to a cosmological constant. The present value of $`\rho _{vac}^{eff}`$ can be estimated from astronomical observations. The Hubble constant at the present epoch is $`H_0=(\dot{a}/a)_0=50-80kms^{-1}Mpc^{-1}`$. The critical energy density of the universe, $`\rho _c`$, in the absence of $`\rho _{vac}^{eff}`$ is defined by $$\rho _c=\frac{3H_0^2c^2}{8\pi G}$$ (9) and has the value $`4.7\times 10^{-27}kgm^{-3}c^2=2\times 10^{-47}GeV^4`$ for $`H_0=50kms^{-1}Mpc^{-1}`$. In terms of the present values of the density parameters $`\mathrm{\Omega }_0=\rho _0/\rho _c`$ and $`\mathrm{\Omega }_{vac}^{eff}=\rho _{vac}^{eff}/\rho _c`$ eq.(7) can be cast into $$1=\left(\mathrm{\Omega }_0+\mathrm{\Omega }_{vac}^{eff}-\frac{kc^2}{a_0^2H_0^2}\right).$$ (10) Since no effects of the spatial curvature are seen, the curvature term in eq.(10) can be safely neglected.
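The numbers quoted here are easy to reproduce. The sketch below evaluates eq.(9) for $`H_0=50`$ km s⁻¹ Mpc⁻¹ and converts the resulting energy density to GeV⁴ (constants rounded to four digits; only the magnitudes matter):

```python
import math

G   = 6.674e-11        # m^3 kg^-1 s^-2
c   = 2.998e8          # m / s
Mpc = 3.086e22         # m
H0  = 50e3 / Mpc       # 50 km s^-1 Mpc^-1, in s^-1

rho_c_mass = 3.0 * H0**2 / (8.0 * math.pi * G)     # critical mass density, kg m^-3

# convert the energy density rho_c c^2 to GeV^4 (hbar = c = 1 units):
GeV   = 1.602e-10      # J
hbarc = 1.973e-16      # GeV * m, so 1 GeV^4 = GeV / hbarc^3 in J m^-3
rho_c_gev4 = rho_c_mass * c**2 / (GeV / hbarc**3)

print(rho_c_mass)   # ~ 4.7e-27 kg m^-3
print(rho_c_gev4)   # ~ 2e-47 GeV^4
```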
Observations give us that $`\mathrm{\Omega }_0=0.1-0.4`$, with dark matter included, from which $`\mathrm{\Omega }_{vac}^{eff}\simeq 0.6-0.9`$ follows from eq.(10). Hence, we conclude that today $`\rho _{vac}^{eff}<\rho _c\simeq 10^{-47}GeV^4`$ $`\lambda ^{eff}<10^{-52}m^{-2}.`$ (11) As is well known, this is in great disagreement with the values predicted by gauge field theories, of which the best example and the experimentally well established one is the electroweak theory. In this theory, as a result of spontaneous symmetry breaking with a Higgs doublet $`\varphi `$, the minimum of the potential of $`\varphi `$ contributes to the vacuum energy density and hence to the cosmological constant by $`\rho _{vac}`$ $`=`$ $`V_{min}=-{\displaystyle \frac{1}{8}}M_H^2v^2\simeq -2\times 10^8GeV^4`$ $`\lambda _{vac}`$ $`=`$ $`{\displaystyle \frac{8\pi G}{c^4}}\rho _{vac}\simeq -10^3m^{-2},`$ (12) where it is assumed that the potential vanishes at $`\varphi =0`$, and a Higgs particle of mass $`M_H\simeq 150GeV`$ together with $`v=246GeV`$ have been used. These are larger in magnitude than the present bounds on $`\rho _{vac}^{eff}`$ and $`\lambda ^{eff}`$ by a factor of $`10^{55}`$. The two terms in $`\rho _{vac}^{eff}=c^4\lambda _b/8\pi G+\rho _{vac}`$ must cancel to at least 55 decimal places so as to reduce $`\rho _{vac}^{eff}`$ and hence $`\lambda ^{eff}`$ to their small values today. So, in the context of Einstein’s general relativity theory, the cosmological constant problem is to understand through what natural mechanism the vacuum energy density $`\rho _{vac}`$ got reduced to its small value today. The question is not why it was always small. Many different approaches to the problem, none being entirely satisfactory, have been tried. The idea which triggered the solution that we shall present here belongs to Feynman.
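The electroweak estimate in eq.(12) can likewise be reproduced. The sketch below evaluates $`M_H^2v^2/8`$ in GeV⁴, converts it to SI, and forms $`(8\pi G/c^4)\rho _{vac}`$; constants are rounded, and only the magnitudes (not the sign, which is dropped here) are meaningful:

```python
import math

MH, v = 150.0, 246.0                  # GeV, the values used in the text
rho_vac = MH**2 * v**2 / 8.0          # |V_min| in GeV^4
print(rho_vac)                        # ~ 1.7e8 GeV^4, i.e. of order 2e8

# convert to SI and form |lambda_vac| = (8 pi G / c^4) rho_vac
GeV, hbarc = 1.602e-10, 1.973e-16     # J, and GeV * m
rho_si = rho_vac * GeV / hbarc**3     # J m^-3
G, c = 6.674e-11, 2.998e8
lam_vac = 8.0 * math.pi * G / c**4 * rho_si
print(lam_vac)                        # ~ 1e3 m^-2
print(lam_vac / 1e-52)                # ~ 1e55 times the observational bound
```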
In an interview on Superstrings, while talking about gravity he said: ‘In the quantum field theories, there is an energy associated with what we call the vacuum in which everything has settled down to the lowest energy; that energy is not zero-according to the theory. Now gravity is supposed to interact with every form of energy and should interact then with this vacuum energy. And therefore, so to speak, a vacuum would have a weight-an equivalent mass energy-and would produce a gravitational field. Well, it doesn’t! The gravitational field produced by the energy in the electromagnetic field in a vacuum-where there’s no light, just quiet, nothing-should be enormous, so enormous, it would be obvious. The fact is, it’s zero! Or so small that it’s completely in disagreement with what we’d expect from the field theory. This problem is sometimes called the cosmological constant problem. It suggests that we’re missing something in our formulation of the theory of gravity. It’s even possible that the cause of the trouble-the infinities-arises from the gravity interacting with its own energy in a vacuum. And we started off wrong because we already know there’s something wrong with the idea that gravity should interact with the energy of a vacuum. So I think the first thing we should understand is how to formulate gravity so that it doesn’t interact with the energy in a vacuum. Or maybe we need to formulate the field theories so there isn’t any energy in a vacuum in the first place.’ The purpose of this letter is to point out that (i) such a formulation of gravity, in which the free electromagnetic and vacuum energy do not disturb the emptiness of space (just like the free gravitational energy, their effects are already implicitly included on the left side of the field equations), has recently been formulated by us, and (ii) there does not exist a cosmological constant problem in this formulation.
This new formulation, however, is not only a formulation of gravity but also a unified description of gravity and electromagnetism through a symmetric metric tensor. The impossibility of describing the motion of charged particles with different charge-to-mass ratios in an electromagnetic field by a single geometry leads us to consider classes of geometries corresponding to different charge-to-mass ratios. This way the electromagnetic force on a given charged test particle can be geometrized. Consider an object with a distribution of charged matter with total mass $`M_o`$ and charge $`Q_o`$. Let there be another distribution of matter $`M`$ and charge $`Q`$ external to the object. Let also a test particle of mass $`m`$ and charge $`q`$ be moving in the charge distribution external to the object. The modified field equations are $$R_{\mu \nu }-\frac{1}{2}Rg_{\mu \nu }=\frac{8\pi G}{c^4}T_{\mu \nu }^M+\frac{k_e}{c^4}\frac{q}{m}T_{\mu \nu }^{CC},$$ (13) as opposed to the Einstein field equations $$R_{\mu \nu }-\frac{1}{2}g_{\mu \nu }R=\frac{8\pi G}{c^4}\left[T_{\mu \nu }^M+T_{\mu \nu }^{EM}(Q)+T_{\mu \nu }^{EM}(Q_o)+T_{\mu \nu }^{vac}\right]$$ (14) where $`k_e`$ is the (Coulomb) electric constant, $`T_{\mu \nu }^M`$ is the matter energy-momentum tensor of the distribution outside the object. $`T_{\mu \nu }^{EM}(Q_o)`$ and $`T_{\mu \nu }^{EM}(Q)`$ are the energy-momentum tensors due to the electromagnetic fields of the object and the charge distribution outside it, respectively.
$`T_{\mu \nu }^{CC}`$ is the so-called charged-current tensor, given by $$T_{\mu \nu }^{CC}=\frac{1}{3}v_\alpha 𝒥^\alpha \left(\frac{1}{c^2}U_\mu U_\nu +g_{\mu \nu }\right),$$ (15) where $`v^\alpha =(\gamma _vc,\gamma _v\vec{v})`$ is the four-velocity of the test particle, $`𝒥^\alpha =(c\rho _Q,\vec{J}+\vec{J}_D)`$ with $`\rho _Q`$, $`\vec{J}`$, and $`\vec{J}_D`$ being respectively the charge density, current density, and the displacement current density of the charge distribution outside the object, $`U^\mu =(\gamma _uc,\gamma _u\vec{u})`$ is the four-velocity of the charge distribution, and $`\gamma _{v(u)}=(1-v^2(u^2)/c^2)^{-1/2}`$. $`T_{\mu \nu }^{CC}`$ is not an energy-momentum tensor. The right-hand side of eq.(13) does not contain the energy-momentum tensor of the electromagnetic field due to the charge distribution of the object. For example, the field equations describing a spherical distribution of mass $`M`$ and charge $`Q`$ located at $`r=0`$ are $$R_{\mu \nu }=0$$ (16) and have the solution $`ds^2=-\left(1-2{\displaystyle \frac{GM}{c^2r}}+2{\displaystyle \frac{q}{m}}{\displaystyle \frac{k_eQ}{c^2r}}\right)c^2dt^2+\left(1-2{\displaystyle \frac{GM}{c^2r}}+2{\displaystyle \frac{q}{m}}{\displaystyle \frac{k_eQ}{c^2r}}\right)^{-1}dr^2`$ $`+r^2d\theta ^2+r^2sin^2\theta d\varphi ^2.`$ (17) Whereas in Einstein’s general relativity, one has $$R_{\mu \nu }=\frac{8\pi G}{c^4}T_{\mu \nu }^{EM}$$ (18) instead of eq.(16), where $`T_{\mu \nu }^{EM}`$ is the energy-momentum tensor of the electromagnetic field of the charged sphere. Einstein’s general relativity is a theory of gravitation. Our general relativity is a theory of gravitation and electromagnetism.
As is well known, the solution of eq.(18) is the Reissner-Nordström solution $`ds^2=-\left(1-2{\displaystyle \frac{GM}{c^2r}}+{\displaystyle \frac{Gk_eQ^2}{c^4r^2}}\right)c^2dt^2+\left(1-2{\displaystyle \frac{GM}{c^2r}}+{\displaystyle \frac{Gk_eQ^2}{c^4r^2}}\right)^{-1}dr^2+`$ $`r^2d\theta ^2+r^2sin^2\theta d\varphi ^2,`$ (19) To sum up, this new formulation of general relativity describes gravitation and electromagnetism as an effect of the curvature of spacetime produced by matter energy and electric charge. We have suggested in ref. a very simple experiment, the deflection of electrons by a spherical charge distribution, to distinguish between the two theories. Having outlined the salient features of our formulation, we can now present our solution to the cosmological constant problem: The vacuum energy today is as large as it can be. It has the same very large energy density today as it had in the early universe. The contribution of the vacuum to the cosmological constant is simply zero. This is because no form of energy-momentum tensor except for that of the matter is allowed on the right side of the modified field equations (13). The effect on the curvature of spacetime of any form of free energy like gravitational, electromagnetic, and vacuum is already included (implicitly) on the left of eq.(13). In Einstein’s general relativity the contribution of $`\rho _{vac}`$ to $`\lambda ^{eff}`$ is $`\lambda _{vac}=(8\pi G/c^4)\rho _{vac}`$, whereas in our formulation $`\lambda _{vac}=0\times \rho _{vac}=0`$. As for the bare cosmological constant $`\lambda _b`$ in eq.(1), one may mathematically include it in our eq.(13). But it cannot have the physical meaning of some sort of bare contribution to the effective vacuum energy density. If, somehow, there is an unknown bare contribution $`\rho _b`$ to the (effective) vacuum energy density, then the corresponding bare cosmological constant is $`\lambda _b=0\times \rho _b=0`$ in our formulation.
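To see why electron deflection is a discriminating experiment, one can compare the sizes of the correction terms in the $`dt^2`$ coefficients of (17) and (19). The sphere parameters below are purely illustrative (they are not taken from the paper); what matters is that the linear-in-$`Q`$ term of (17) is enhanced by the enormous charge-to-mass ratio of the electron, while the Reissner-Nordström $`Q^2`$ term is gravitationally suppressed:

```python
# Corrections to the dt^2 coefficient for an electron near a charged sphere (SI units).
G, c, ke = 6.674e-11, 2.998e8, 8.988e9
q_over_m = 1.759e11            # electron charge-to-mass ratio, C/kg
M, Q, r = 1.0, 1e-6, 1.0       # illustrative: 1 kg sphere, 1 microcoulomb, at 1 m

grav    = 2 * G * M / (c**2 * r)                # Schwarzschild term, common to both
ours    = 2 * q_over_m * ke * Q / (c**2 * r)    # linear-in-Q term of (17)
rn_term = G * ke * Q**2 / (c**4 * r**2)         # Reissner-Nordstrom term of (19)

print(grav)     # ~ 1.5e-27
print(ours)     # ~ 3.5e-2: dominates by many orders of magnitude for an electron
print(rn_term)  # ~ 7e-47: entirely negligible
```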
In conclusion, if a metric unification of gravitation and electromagnetism is realized in nature, then gravitation is no different from other interactions so far as the effects of the vacuum energy are concerned. The vacuum energy density and the cosmological constant are not related; the former has a very large value while the latter does not exist at all. The importance of promptly performing the proposed experiment on the deflection of electrons by a positive spherical charge distribution cannot be overemphasized.
# Gravitational waves from cosmological compact binaries ## 1 Introduction Binaries with two compact stars are the most promising sources for gravitational radiation. The final phase of spiral in may be detected with ground-based (LIGO, VIRGO, GEO and TAMA) and space-borne laser interferometers (LISA). This has motivated researchers to model gravitational waveforms and to develop population synthesis codes to estimate the properties and formation rates of possible sources of gravitational wave radiation. Since there is not yet a single prescription for calculating the gravitational emission from a compact binary system, it is customary to divide the gravitational waveforms into two main pieces: the inspiral waveform, emitted before tidal distortions become noticeable, and the coalescence waveform, emitted at higher frequencies during the epoch of distortion, tidal disruption and/or merger (Cutler et al. 1993). As the binary, driven by gravitational radiation reaction, spirals in, the frequency of the emitted wave increases until the last 3 orbital cycles prior to complete merger. With post-Newtonian expansions of the equations of motion for two point masses, the waveforms can be computed fairly accurately in the relatively long phase of spiral in (see, for a recent review, Rasio & Shapiro 2000 and references therein). Conversely, the gravitational waveform from the coalescence can only be obtained from extensive numerical calculations with a fully general relativistic treatment. Such calculations are now well underway (Brady, Creighton & Thorne 1998; Damour, Iyer & Sathyaprakash 1998; Rasio & Shapiro 1999). In this paper, we consider the low frequency signal from the early phase of the spiral in, which is of interest for space-borne antennas, such as LISA. For this purpose, we use the leading order expression for the single source emission spectrum, obtained using the quadrupole approximation. 
Our analysis includes various populations of compact binary systems: black hole-black hole (bh, bh), neutron star-neutron star (ns, ns), white dwarf-white dwarf (wd, wd) as well as mixed systems such as (ns, wd), (bh, wd) and (bh, ns). For some of these sources \[(ns, ns), (wd, wd) and (ns, wd)\], statistical information on the event rate can be inferred from electromagnetic observations. In particular, there are several observational estimates of the (ns, ns) merger rate obtained from statistics of the known population of binary radio pulsars (Narayan, Piran & Shemi 1991; Phinney 1991). A rather large number of close white dwarf binaries have recently been found (see Maxted & Marsh 1999 and Moran 1999). However, it is customary to constrain the (wd, wd) merger rate from the observed SNIa rate (see Postnov & Prokhorov 1998). Also the population of binaries where a radio pulsar is accompanied by a massive unseen white dwarf may be considerably higher than hitherto expected (Portegies Zwart & Yungelson 1999). Since most stars are members of binaries and the formation rate of potential sources of gravitational waves may be abundant in the Galaxy, the gravitational-wave signal emitted by such binaries might produce a stochastic background. This possibility has been explored by various authors, starting from the earliest work of Mironovskij (1965) and Rosi & Zimmermann (1976) until the more recent investigations of Hils, Bender & Webbink (1990), Lipunov et al. (1995), Bender & Hils (1997), Giazotto, Bonazzola & Gourgoulhon (1997), Giampieri (1997), Postnov & Prokhorov (1998), and Nelemans, Portegies Zwart & Verbunt (1999). This background, which acts like a noise component for the interferometric detectors, has always been viewed as a potential obstacle for the detection of gravitational wave backgrounds coming from the early Universe. 
In this paper we extend the investigation of compact binary systems to extragalactic distances, accounting for the binaries which have been formed since the onset of galaxy formation in the Universe. Following Ferrari, Matarrese & Schneider (1999a, 1999b: hereafter referred to as FMSI and FMSII, respectively), we modulate the binary formation rate in the Universe with the cosmic star formation history derived from observations of field galaxies out to redshift $`z\simeq 5`$ (see e.g. Madau, Pozzetti & Dickinson 1998b; Steidel et al. 1999). The magnitude and frequency distribution of the integrated gravitational signal produced by the cosmological population of compact binaries are calculated from the distribution of binary parameters (masses and types of both stars, orbital separations and eccentricities). These orbital parameters characterize the gravitational-wave signal which we observe on Earth. Detailed information on the properties of the binary population may be obtained through population synthesis. We use the binary population synthesis code SeBa to simulate the characteristics of the binary population in the Galaxy (Portegies Zwart & Verbunt 1996; Portegies Zwart & Yungelson 1998). The characteristics of the extragalactic population are derived from extrapolating these results to the local Universe. The paper is organized as follows: in Section 2 we describe the population synthesis calculations. Section 3 deals with the energy spectrum of a single source followed, in Section 4, by the derivation of the extragalactic backgrounds for the different binary populations. In Sections 3 and 4 we also give details on the adopted astrophysical and cosmological properties of the systems. In Section 5, we compute the spectral strain amplitude produced by each cosmological population and investigate its detectability with LISA. Finally, in Section 6 we summarize our main results and compare them with other previously estimated astrophysical background signals. 
## 2 Population synthesis model In order to characterize the main properties of different compact binary systems, we use the binary population synthesis program SeBa of Portegies Zwart & Yungelson (1998), where details of the code can be found. Here, we simply recall the main assumptions of their adopted model B, which satisfactorily reproduces the properties of observed high-mass binary pulsars (with neutron star companions). The following initial conditions were assumed: the mass of the primary star $`m_1`$ is determined using the mass function described by Scalo (1986) between 0.1 and 100 $`M_{}`$. For a given $`m_1`$, the mass of the secondary star $`m_2`$ is randomly selected from a uniform distribution between a minimum of 0.1 $`M_{}`$ and the mass of the primary star. The semi-major axis distribution is taken flat in $`\mathrm{log}a`$ (Kraicheva et al. 1978), ranging from Roche-lobe contact up to $`10^6`$ $`R_{}`$ (Abt & Levy 1978; Duquennoy & Mayor 1991). The initial eccentricity distribution is independent of the other orbital parameters, and is $`\mathrm{\Xi }(e)=2e`$. Neutron stars receive a velocity kick upon birth. Following Hartman et al. (1997), model B assumes the distribution of isotropic kick velocities (Paczyński 1990), $$P(u)du=\frac{4}{\pi }\frac{du}{(1+u^2)^2},$$ (1) with $`u=v/\sigma `$ and $`\sigma =600\text{km}\text{s}^{-1}`$. The birthrate of the various compact binaries is normalized to the Type II+Ib/c supernova rate (see Portegies Zwart & Verbunt 1996). The supernova rate of 0.01 per year was assumed to be constant over the lifetime of the galactic disc ($`10`$ Gyr). When computing the birth and merger-rates we account for the time-delay between the formation of the progenitor system and that of the corresponding degenerate binary, $`\tau _s`$. Its value is set by the time it takes for the less massive of the two companion stars to evolve off the main sequence. 
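The kick-velocity distribution of eq. (1) is easy to sample directly. A minimal sketch (our own helper, not part of the SeBa code), using the substitution u = tan(theta), under which P(u)du becomes (2/pi)(1 + cos 2theta)dtheta on [0, pi/2), a convenient form for rejection sampling:

```python
# Sketch: drawing natal kicks from the Paczynski (1990) distribution of
# eq. (1), P(u) du = (4/pi) du / (1 + u^2)^2, u = v/sigma, sigma = 600 km/s.
# Illustrative helper, not the SeBa implementation.
import math
import random

SIGMA_KMS = 600.0  # velocity scale assumed in model B

def sample_kick(rng=random):
    """Draw one kick speed in km/s by rejection sampling.

    With u = tan(theta) the density becomes (2/pi)(1 + cos 2theta) on
    [0, pi/2); we propose theta uniformly and accept with probability
    (1 + cos 2theta)/2, which is bounded by 1.
    """
    while True:
        theta = rng.random() * math.pi / 2.0
        if rng.random() < 0.5 * (1.0 + math.cos(2.0 * theta)):
            return SIGMA_KMS * math.tan(theta)
```

Since the mean of P(u) is 2/pi, the average kick implied by eq. (1) is about 2 sigma/pi, i.e. roughly 380 km/s for sigma = 600 km/s.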
For (bh, bh), (ns, ns) and (bh, ns) systems $`\tau _s\stackrel{<}{}50`$ Myr and it is negligible compared to the assumed lifetime of the galactic disc. Conversely, (wd, wd), (ns, wd) and (bh, wd) binaries follow a slower evolutionary clock and $`\tau _s`$ can be considerably larger. The cumulative probability distribution, $`P(<\tau _s)`$, predicted by the population synthesis code is shown in Fig. 1. For these systems $`\tau _s`$ can be large, although all systems are predicted to have $`\tau _s\stackrel{<}{}10`$ Gyr. After the degenerate binary has formed, its further evolution is determined by the time it takes to radiate away its orbital energy in gravitational waves. The time between the formation of the degenerate system and its final coalescence, $`\tau _m`$, depends on the orbital configuration and on the mass of the two companion stars. The predicted cumulative probability distribution is shown in Fig. 2 for the (wd, wd), (ns, ns) and (ns, wd) samples. We see from the figure that there is a significant fraction of systems which do not merge in 10 Gyr. For (bh, bh) binaries and mixed systems with one black hole companion the population synthesis code predicts very long merger times. In particular, all (bh, bh) systems appear to have $`\tau _m`$ greater than 15 Gyr. The reason for these large merger times is that binaries with a black hole companion are characterized by very large initial orbital separations (see e.g. Fig. 3). In fact, bh progenitors are very massive stars and have a very strong stellar wind. For this reason they do not easily reach Roche-lobe over-flow and rarely experience a phase of mass transfer, which is required to reduce the orbital separation of the stars. Unfortunately, the evolution (especially the amount of mass loss in the stellar winds) of high mass stars is rather uncertain (Langer et al. 1994). The result that we obtain at least indicates that it will be very rare to observe any of these bh mergers. 
Recently, however, Portegies Zwart & McMillan (1999) identified a new channel for producing black hole binaries which are eligible for merger on a relatively short time scale. In Table 1, we summarize the results for all binary types that we have investigated. ## 3 Inspiral energy spectrum of single sources Assuming that the orbit of the binary system has already been circularized by gravitational radiation reaction, the inspiral spectrum $`dE/d\nu `$ emitted by a single source can be obtained using the quadrupole approximation (Misner, Thorne & Wheeler 1995). The resulting expression, in geometrical units (G=c=1), can be written as, $$\frac{dE}{d\nu }=\frac{\pi }{3}\frac{\mathcal{M}^{5/3}}{(\pi \nu )^{1/3}}$$ (2) where $`\mathcal{M}=\mu ^{3/5}M^{2/5}`$ is the so-called chirp mass, $`M=m_1+m_2`$ stands for the total mass and $`\mu =m_1m_2/M`$ is the reduced mass. The frequency $`\nu `$ at which gravitational waves are emitted is twice the orbital frequency and depends on the time left to the merger event through, $$(\pi \nu )^{-8/3}=(\pi \nu _{max})^{-8/3}+\frac{256}{5}\mathcal{M}^{5/3}(t_c-t)$$ (3) where $`t_c`$ is the time of the final coalescence and we terminate the inspiral spectrum at a frequency $`\nu _{max}`$. When Post-Newtonian expansion terms are included, it is customary to consider the inspiral spectrum as a good approximation all the way up to $`\nu _{LSCO}`$, i.e. the frequency of the quadrupole waves emitted at the last stable circular orbit (LSCO) (see e.g. Flanagan & Hughes 1998). However, for the purposes of our study, we neglect post-Newtonian terms and we set the value of $`\nu _{max}`$ to correspond to the quadrupole frequency emitted when the orbital separation is roughly 3 times the separation at the LSCO. The value of the orbital separation at the LSCO depends on the mass ratio of the two stellar components and varies between $`5M`$ and $`6M`$. 
The lower limit is obtained in the test particle approximation ($`m_1\ll m_2`$), whereas the upper value corresponds to the equal-mass case ($`m_1\simeq m_2`$) (see Kidder, Will & Wiseman 1993). If we consider the equal-mass limit, which is more conservative for constraining the maximum frequency, $`\nu _{LSCO}\simeq 0.022/M`$ and $`\nu _{max}\simeq 0.19\nu _{LSCO}`$. For (wd, wd) binaries and binaries with one wd companion, the maximum frequency, i.e. the minimum distance between the two stellar components, is set in order to cut off the Roche-lobe contact stage. In fact, the mass transfer from one component to its companion transforms the original detached binary into a semi-detached binary. This process can be accompanied by the loss of angular momentum with mass loss from the system and the above description cannot be applied. Thus, for close white dwarf binaries we require that the minimum orbital separation is given by $`r_{wd}(m_1)+r_{wd}(m_2)`$ where $$r_{wd}(m)=0.012R_{}\sqrt{\left(\frac{m}{1.44M_{}}\right)^{-2/3}-\left(\frac{m}{1.44M_{}}\right)^{2/3}}$$ (4) is the approximate formula for the radius of a white dwarf from Nauenberg (1972) (see also Portegies Zwart & Verbunt 1996). Consider now sources at cosmological distances. The locally measured average energy flux emitted by a source at redshift $`z`$ is, $$f(\nu )=\int \frac{d\mathrm{\Omega }}{4\pi }\frac{dE}{d\mathrm{\Sigma }d\nu }(\nu )=\frac{(1+z)^2}{4\pi d_L(z)^2}\frac{c^3}{G}\frac{dE_e}{d\nu _e}[\nu (1+z)]$$ (5) where $`d_L(z)`$ is the luminosity distance to the source, $`\nu =\nu _e(1+z)^{-1}`$ is the observed frequency and the factor $`c^3/G`$ is needed in order to change from geometrical to physical units. 
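The single-source quantities introduced above can be sketched numerically. The following helpers (our own, in SI units rather than the geometrical units of the text) implement the chirp mass, the quadrupole spectrum of eq. (2), the Nauenberg radius of eq. (4), and the resulting cut-off frequency for a (wd, wd) pair:

```python
# Sketch of the single-source quantities of Section 3, in SI units.
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
MSUN = 1.989e30      # solar mass, kg
RSUN = 6.957e8       # solar radius, m

def chirp_mass(m1, m2):
    """Chirp mass mu^{3/5} M^{2/5} (kg) for component masses in kg."""
    M = m1 + m2
    return (m1 * m2 / M) ** 0.6 * M ** 0.4

def dE_dnu(nu, mchirp):
    """Eq. (2) restored to physical units:
    dE/dnu = (pi/3) G^{2/3} Mc^{5/3} (pi nu)^{-1/3}, in J/Hz."""
    return (math.pi / 3.0) * G ** (2.0 / 3.0) * mchirp ** (5.0 / 3.0) \
        / (math.pi * nu) ** (1.0 / 3.0)

def r_wd(m):
    """Nauenberg (1972) white-dwarf radius, eq. (4); mass in kg, radius in m."""
    x = (m / (1.44 * MSUN)) ** (2.0 / 3.0)
    return 0.012 * RSUN * math.sqrt(1.0 / x - x)

def nu_max_wd(m1, m2):
    """Cut-off frequency for a (wd, wd) pair: the quadrupole wave (twice the
    orbital frequency) emitted when the separation is r_wd(m1) + r_wd(m2)."""
    a_min = r_wd(m1) + r_wd(m2)
    return math.sqrt(G * (m1 + m2) / a_min ** 3) / math.pi
```

For two 0.6 solar-mass white dwarfs, for instance, these formulas place the cut-off at a few times 10^-2 Hz, consistent with the (wd, wd) secondary component discussed in Section 4.3.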
Thus, the emission spectrum can be written as, $$\frac{dE_e}{d\nu _e}[\nu (1+z)]=\frac{\pi }{3}\frac{\mathcal{M}^{5/3}}{[\pi \nu (1+z)]^{1/3}}$$ (6) where $`\nu `$ is the observed frequency emitted by a system at time $`t(z)`$, $$(\pi \nu )^{-8/3}=(\pi \nu _{max})^{-8/3}(1+z)^{8/3}+\frac{256}{5}\mathcal{M}^{5/3}\left[t(z_f)+\tau _m-t(z)\right](1+z)^{8/3}$$ (7) and we have written the time of the final coalescence $`t_c=t(z_c)`$ in terms of the time of formation $`t(z_f)`$ and of the merger-time $`\tau _m=t(z_c)-t(z_f)`$. ## 4 Extragalactic backgrounds from different binary populations Our main purpose is to estimate the stochastic background signal generated by different populations of compact binary systems at extragalactic distances. These gravitational sources have been forming since the onset of galaxy formation in the Universe and for each binary type $`X`$ \[(ns, ns); (wd, wd); (bh, bh); (ns, wd); (bh, wd); (bh, ns)\] we should think of a large ensemble of unresolved and uncorrelated elements, each characterized by its masses $`m_1`$ and $`m_2`$ (or $`M`$ and $`\mu `$), by its redshift and by its time-delays $`\tau _s`$ and $`\tau _m`$ \[see eqns (6) and (7)\]. Thus, in order to consider all contributions from different elements of the ensemble $`X`$, we must integrate the single source emission spectrum over the distribution functions for the masses $`M`$ and $`\mu `$, for the time-delays and for the redshifts. The distribution functions for $`\tau _s`$ and $`\tau _m`$ depend on the binary type $`X`$ and have been derived from the population synthesis code discussed in the previous section. The distribution function for $`\mathcal{M}`$ can be similarly estimated. However, $`\tau _s`$, $`\tau _m`$ and $`\mathcal{M}`$ are not independent random variables. In particular, $`\tau _m`$ and $`\mathcal{M}`$ are correlated because $`\mathcal{M}`$ defines the rate of orbital decay, once the degenerate system has formed. 
Thus, for each binary population $`X`$, we consider the joint probability distribution for $`\tau _s`$, $`\tau _m`$ and $`\mathcal{M}`$, $$p_X(\tau _s,\tau _m,\mathcal{M})d\tau _md\tau _sd\mathcal{M}.$$ (8) Conversely, the redshift distribution function, i.e. the evolution of the formation rate for each binary type $`X`$, can be deduced from the observed cosmic star formation history out to $`z\simeq 5`$. In the following subsections, we illustrate the procedure we have followed to derive the birth and merger-rates for all binary populations and to compute the spectra of the corresponding stochastic gravitational backgrounds. ### 4.1 Cosmic star formation rate In the last few years, the extraordinary advances attained in observational cosmology have led to the possibility of identifying actively star forming galaxies at increasing cosmological look-back times (see, for a thorough review, Ellis 1997). Using the rest-frame UV-optical luminosity as an indicator of the star formation activity and integrating over the overall galaxy population, the data obtained with the Hubble Space Telescope (HST; Madau et al. 1996, Connolly et al. 1997, Madau et al. 1998a), Keck and other large telescopes (Steidel et al. 1996, 1999), together with the completion of several large redshift surveys (Lilly et al. 1996, Gallego et al. 1995, Treyer et al. 1998), have made it possible, for the first time, to derive coherent models for the star formation rate evolution throughout the Universe. A collection of some data obtained at different redshifts is shown in the left panel of Figure 4 for a flat cosmological background model with $`\mathrm{\Omega }_\mathrm{\Lambda }=0`$, $`h=0.5`$ and a Scalo (1986) IMF with masses in the range $`0.1`$–$`100M_{}`$. Although the strong luminosity evolution observed between redshift 0 and 1-2 is believed to be quite firmly established, the behaviour of the star formation rate at high redshift is still relatively uncertain. 
In particular, the decline of the star formation rate density implied by the $`<z>=4`$ point of the Hubble Deep Field (HDF, see Fig. 4) is now contradicted by the star formation rate density derived from a new ground-based sample of Lyman break galaxies with $`z=4.13`$ (Steidel et al. 1999) which, instead, seems to indicate that the star formation rate density remains substantially constant at $`z>1`$–2. It has been suggested that this discrepancy might be caused by problems of sample variance in the HDF point at $`<z>=4`$ (Steidel et al. 1999). Because dust extinction can lead to an underestimate of the real UV-optical emission and, ultimately, of the real star formation activity, the data shown in the left panel of Fig. 4 need to be corrected upwards according to specific models for the evolution of dust opacity with redshift. In the right panel of Fig. 4, the data have been dust-corrected according to the factors obtained by Calzetti & Heckman (1999) and by Pei, Fall & Hauser (1999). Using different approaches, these authors have recently investigated the cosmic histories of stars, gas, heavy elements and dust in galaxies using as inputs the available data from quasar absorption-line surveys, optical and UV imaging of field galaxies, redshift surveys and the COBE DIRBE and FIRAS measurements of the cosmic IR background radiation. The solutions they obtain appear to reproduce remarkably well a variety of observations that were not used as inputs, among which the SFR at various redshifts from H$`\alpha `$, mid-IR and submm observations and the mean abundance of heavy elements at various epochs from surveys of damped Lyman-$`\alpha `$ systems. As we can see from the right panel of Fig. 4, spectroscopic and photometric surveys in different wavebands point to a consistent picture of the low-to-intermediate redshift evolution: the SFR density rises rapidly as we go from the local value to a redshift between $`1`$–2 and remains roughly flat between redshifts $`2`$–3. 
At higher redshifts, two different evolutionary tracks seem to be consistent with the data: the SFR density might remain substantially constant at $`z\stackrel{>}{}2`$ (Calzetti & Heckman 1999) or it might decrease again out to a redshift of $`4`$ (Pei, Fall & Hauser 1999). Hereafter, we always indicate the former model as the ’monolithic scenario’ and the latter as the ’hierarchical scenario’, although this choice is only meant to be illustrative. In fact, preliminary considerations have pointed out that a constant SFR activity at high redshifts might not be unexpected in hierarchical structure formation models (Steidel et al. 1999). Thus, we have updated the star formation rate model that we considered in previous analyses (FMSI, FMSII), even though the gravitational wave backgrounds receive a larger contribution from low-to-intermediate redshift sources than from distant ones. In addition, if a larger dust correction factor should be applied at intermediate redshifts, this would result in a similar amplification of the gravitational background spectra. ### 4.2 Birth and merger rate evolution Following the method we have previously proposed (FMSI, FMSII), for each binary type $`X`$ the birth and merger-rate evolution could be computed from the observed star formation rate evolution. However, this procedure proves to be unsatisfactory because it fails to provide a fully consistent normalization. Its main weakness is that, even if we assume 100% binarity, i.e. that all stars are in binary systems, the star formation histories that we have described above are not corrected for the presence of secondary stars. For the mass distributions that we have considered, secondary stars are expected to give a significant contribution to the observed UV luminosity as they account for $`1/3`$ of the fraction of mass in stars more massive than $`8M_{}`$. 
In order to circumvent the necessity of extrapolating the UV luminosity indication of massive star formation to the full range of stellar masses predicted by the model, we could directly normalize to the rate of core-collapse supernovae. This is consistent with the adopted normalization for galactic rates. The core-collapse supernova rate can be directly derived from the observed UV luminosity at each redshift, as stars which dominate the UV emission from a galaxy are the same stars which, at the end of the nuclear burning, explode as Type II+Ib/c SNae. Moreover, the supernova rate is observed independently of the SFR. Therefore it can be used as an alternative normalization. The rates of core-collapse supernovae predicted by the models shown in Fig. 4 are shown in Fig. 5 assuming a flat cosmological background model with zero cosmological constant and $`h=0.5`$. In the same figure, we have plotted the available observations for the core-collapse supernova frequency in the local Universe (Cappellaro et al. 1997, Tamman et al. 1994, Evans et al. 1989, see also Madau et al. 1998b). The binary birthrate per entry per year and comoving volume $`\dot{\eta }(z)`$ can be related to the core-collapse supernova rate $`\dot{n}_{[SNaeII+Ib/c]}(z)`$ shown in Fig. 5 in the following way, $$\dot{\eta }(z)=\frac{\dot{n}_{[SNaeII+Ib/c]}(z)}{N_{[SNaeII+Ib/c]}}$$ (9) where $`N_{[SNaeII+Ib/c]}`$ is the total number of core-collapse supernovae that we find in the simulation. In order to estimate, from $`\dot{\eta }(z)`$, the birth and merger-rate evolution of a degenerate binary population $`X`$, we need to multiply eq. (9) by the number of type $`X`$ systems in the simulated samples, $`N_X`$, and we also need to properly account for both $`\tau _s`$ and $`\tau _m`$. We shall assume that the redshift at the onset of galaxy formation in the Universe is $`z_F=5`$ and that a zero-age main sequence binary forms at a redshift $`z_s`$. 
After a time interval $`\tau _s`$, the system has evolved into a degenerate binary. Then, the redshift of formation of the degenerate binary system, $`z_f`$, is defined as $`t(z_f)=t(z_s)+\tau _s`$. The system then evolves according to gravitational radiation reaction until, after a time interval $`\tau _m`$, it finally merges. Thus, the redshift at which coalescence occurs, $`z_c`$, is given by $`t(z_c)=t(z_f)+\tau _m`$. This simple picture implies that the number of $`X`$ systems formed per unit time and comoving volume at redshift $`z_f`$ is $$\dot{n}_X(z_f)=\int d\tau _m\int d\mathcal{M}\int _0^{t(z_f)-t(z_F)}d\tau _s\,f_X\frac{\dot{n}_{[SNaeII+Ib/c]}(z_s)}{(1+z_s)}p_X(\tau _s,\tau _m,\mathcal{M})$$ where $`f_X=N_X/N_{[SNaeII+Ib/c]}`$ and $`z_s`$ is defined by $`t(z_s)=t(z_f)-\tau _s`$. If we write, $$p_X(\tau _s,\tau _m,\mathcal{M})=\frac{1}{N_X}\underset{i}{\overset{N_X}{\sum }}\delta (\tau _s-\tau _{s,i})\delta (\tau _m-\tau _{m,i})\delta (\mathcal{M}-\mathcal{M}_i)$$ (11) where $`\tau _{s,i}`$, $`\tau _{m,i}`$ and $`\mathcal{M}_i`$ indicate the time delays and the chirp mass for the $`i^{th}`$ element of the ensemble $`X`$, the birthrate reads, $$\dot{n}_X(z_f)=\frac{1}{N_{[SNaeII+Ib/c]}}\underset{i}{\overset{N_X}{\sum }}\frac{\dot{n}_{[SNaeII+Ib/c]}(z_s)}{(1+z_s)}\mathrm{\Theta }[t(z_f)-t(z_F)-\tau _{s,i}]$$ where $`\mathrm{\Theta }(x)`$ is the step-function. Similarly, the number of $`X`$ systems per unit time and comoving volume which merge at redshift $`z_c`$ is, $$\dot{n}_X^{mrg}(z_c)=\int _0^{t(z_c)-t(z_F)}d\tau _m\int _0^{t(z_c)-\tau _m-t(z_F)}d\tau _s\,f_X\frac{\dot{n}_{[SNaeII+Ib/c]}(z_s)}{(1+z_s)}\int d\mathcal{M}\,p_X(\tau _s,\tau _m,\mathcal{M})$$ (13) where $`z_s`$ is defined by $`t(z_s)=t(z_c)-\tau _m-\tau _s`$. If we apply eq. (11), we can write the merger-rate in a form similar to eq. (4.2), i.e. 
$$\dot{n}_X^{mrg}(z_c)=\frac{1}{N_{[SNaeII+Ib/c]}}\underset{i}{\overset{N_X}{\sum }}\frac{\dot{n}_{[SNaeII+Ib/c]}(z_s)}{(1+z_s)}\mathrm{\Theta }[t(z_c)-t(z_F)-\tau _{s,i}-\tau _{m,i}].$$ Using this procedure, we compute the birth and merger-rates for all the synthetic binary populations. The results are presented in Figs. 6, 7 and 8. Due to their relatively small $`\tau _s`$ compared to the cosmic time, the birthrates of (bh, bh), (ns, ns) and (bh, ns) systems closely trace the UV-luminosity evolution, although with different amplitudes. Our simulation suggests that (bh, bh) systems are more numerous than (ns, ns) or (bh, ns) (see Fig. 6). Conversely, Fig. 7 shows that the birthrates of (wd, wd), (ns, wd) and (bh, wd) systems no longer trace the original UV-luminosity evolution, as a consequence of their large $`\tau _s`$. The larger the characteristic time-delay $`\tau _s`$, the more the maximum is shifted towards lower redshifts, because the intense star formation activity observed at $`z\stackrel{>}{}2`$, especially for monolithic scenarios, boosts the formation of degenerate systems at $`z\stackrel{<}{}2`$. For hierarchical scenarios, if the redshift at which significant star formation begins to occur is $`z_F\simeq 5`$, the birthrate of degenerate systems at redshifts $`\stackrel{>}{}4`$ is almost negligible. Finally, in Fig. 8, we have shown the predicted merger-rate for (wd, wd), (ns, ns) and (ns, wd) systems. In this case, the distortion of the original UV-luminosity evolution is even more apparent, particularly for monolithic scenarios. The redshift at which the maximum merger-rate occurs, as well as the high redshift tail, reflects the different $`\tau _m`$ distributions of these populations. 
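The step-function sums for the birth- and merger-rates in Section 4.2 translate directly into code. A minimal sketch, assuming an Einstein-de Sitter universe with h = 0.5 (so that t(z) = t0 (1+z)^{-3/2}); the delay lists and the supernova-rate function below are placeholders standing in for the actual SeBa sample and rate model:

```python
# Sketch of the Section 4.2 birth- and merger-rate sums over a synthetic
# sample, for an Einstein-de Sitter cosmology with h = 0.5.
import math

H0 = 0.5 * 3.241e-18          # 50 km/s/Mpc in s^-1
T0 = 2.0 / (3.0 * H0)         # age of an Einstein-de Sitter universe, s
GYR = 3.156e16                # seconds per Gyr

def t_of_z(z):
    """Cosmic time at redshift z."""
    return T0 * (1.0 + z) ** -1.5

def z_of_t(t):
    """Inverse of t_of_z."""
    return (T0 / t) ** (2.0 / 3.0) - 1.0

def birthrate(z_f, tau_s, sn_rate, n_sn, z_F=5.0):
    """Birthrate of degenerate X systems at z_f: sum over the sample of
    sn_rate(z_s)/(1+z_s), keeping only entries whose progenitor forms
    after the onset of star formation (the Theta function)."""
    total = 0.0
    for ts in tau_s:                      # tau_s,i for each sample entry, s
        t_s = t_of_z(z_f) - ts            # progenitor formation time
        if t_s < t_of_z(z_F):             # Theta[t(z_f) - t(z_F) - tau_s,i]
            continue
        z_s = z_of_t(t_s)
        total += sn_rate(z_s) / (1.0 + z_s)
    return total / n_sn

def merger_rate(z_c, tau_s, tau_m, sn_rate, n_sn, z_F=5.0):
    """Merger-rate at z_c; tau_m pairs with tau_s element-wise."""
    total = 0.0
    for ts, tm in zip(tau_s, tau_m):
        t_s = t_of_z(z_c) - tm - ts
        if t_s < t_of_z(z_F):             # Theta[t(z_c)-t(z_F)-tau_s,i-tau_m,i]
            continue
        z_s = z_of_t(t_s)
        total += sn_rate(z_s) / (1.0 + z_s)
    return total / n_sn
```

With zero delays and a constant supernova rate, `birthrate` reduces to (N_X/N_SN) times the rate over (1+z), as it should; with delays longer than the available cosmic time, the Theta function removes the entry entirely.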
We have not shown the merger-rates for (bh, bh), (bh, wd) and (bh, ns) binaries because, as we have discussed in the previous section, these systems are predicted to have merger-rates consistent with zero throughout the history of the Universe, as a consequence of their very large initial orbital separations. ### 4.3 Stochastic backgrounds Having characterized each ensemble $`X`$ by the distribution of chirp mass and time delays, $`p_X(\tau _s,\tau _m,\mathcal{M})`$, and by the birthrate density evolution per entry $`\dot{\eta }(z)`$, we can sum up the gravitational signals coming from all the elements of the ensemble. The spectrum of the resulting stochastic background, for a binary type $`X`$ and at a given observation frequency $`\nu `$, is given by the following expression, $$\frac{dE}{d\mathrm{\Sigma }dtd\nu }[\nu ]=\int _0^{z_F}dz_f\int _0^{t(z_f)-t(z_F)}d\tau _s\frac{N_X\dot{\eta }(z_s)}{(1+z_s)}\int _0^{\mathrm{}}d\mathcal{M}\int d\tau _m\,p_X(\tau _s,\tau _m,\mathcal{M})\frac{dV}{dz_e^{}}f[\nu ,z_e^{}]$$ where $`z_F`$ is the redshift of the onset of star formation in the Universe, $`z_f`$ is the redshift of formation of the degenerate binary systems, $`z_s`$ is the redshift of formation of the corresponding progenitor system defined by $`t(z_s)=t(z_f)-\tau _s`$, $`f[\nu ,z_e^{}]`$ is given by eq. (5) and $`z_e^{}`$ is the redshift of emission that an element of the ensemble must have in order to contribute to the energy density at the observation frequency $`\nu `$. It follows from eq. (7) that, for a given observation frequency $`\nu `$, $`z_e^{}`$ is a function of $`z_f`$, $`\tau _m`$, $`\mathcal{M}`$ and $`\nu _{max}`$. In principle, an inspiraling compact binary system emits a continuous signal from its formation to its final coalescence, thus $`z_c\le z_e\le z_f`$. However, in eq. 
(4.3) we do not restrict ourselves to systems which reach their final coalescence at $`z_c\ge 0`$, as we are interested in any source between $`z=0`$ and $`z=z_F`$ emitting gravitational waves during its early inspiral phase. Therefore, the signals which contribute to the local energy density at observation frequency $`\nu `$ might be emitted anywhere in the range $`\text{sup}[0,z_c]\le z_e^{}\le z_f`$, provided that, $$(\pi \nu )^{-8/3}=(\pi \nu _{max})^{-8/3}(1+z_e^{})^{8/3}+\frac{256}{5}\mathcal{M}^{5/3}\left[t(z_f)+\tau _m-t(z_e^{})\right](1+z_e^{})^{8/3}.$$ (16) Substituting eq. (11) in eq. (4.3), we can write the background energy density generated by a population $`X`$ in the form, $$\frac{dE}{d\mathrm{\Sigma }dtd\nu }[\nu ]=\int _0^{z_F}dz_f\underset{i}{\overset{N_X}{\sum }}\frac{\dot{\eta }(z_s)}{(1+z_s)}\mathrm{\Theta }[t(z_f)-t(z_F)-\tau _{s,i}]\frac{dV}{dz_e^{}}f[\nu ,z_e^{}]$$ where $`z_e^{}`$ satisfies eq. (16). The predicted spectral energy densities for the populations of degenerate binary types that we have considered are plotted in Fig. 9. For each binary type, we show the results obtained assuming both monolithic and hierarchical scenarios for the evolution of the underlying galaxy population. The spectral energy densities are characterized by the presence of a sharp maximum which, depending on the binary population, has an amplitude spanning about two orders of magnitude, in the frequency range $`[10^{-5},10^{-4}]`$ Hz. In the following, we refer to this part of the signal as the ’primary’ component. At higher frequencies, a ’secondary’ component appears for all but (bh, bh) systems. The frequency which marks the transition between primary and secondary components, as well as their relative amplitudes, depends sensitively on the population. The reason why (bh, bh) systems do not show a secondary component is that this is entirely contributed by sources which merge before $`z=0`$. 
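For a given observed frequency, the emission redshift required by eq. (16) can be found by bisection, since the observed frequency decreases monotonically with the emission redshift (both the residual time to merger and the redshift factor grow with it). A sketch, again for an Einstein-de Sitter universe with h = 0.5; the helper names and test parameters are ours, and we assume a system that is still inspiraling today:

```python
# Sketch: solving eq. (16) for the emission redshift z_e' by bisection.
import math

G = 6.674e-11; C = 2.998e8; MSUN = 1.989e30
H0 = 0.5 * 3.241e-18              # 50 km/s/Mpc in s^-1
T0 = 2.0 / (3.0 * H0)             # Einstein-de Sitter age of the Universe, s

def t_of_z(z):
    return T0 * (1.0 + z) ** -1.5

def nu_obs(z_e, z_f, tau_m, mc_kg, nu_max):
    """Observed frequency of the wave emitted at z_e (eq. 16); assumes
    t(z_f) + tau_m > t(z_e), i.e. the system has not yet merged."""
    mc = G * mc_kg / C ** 3       # chirp mass in geometrical units (seconds)
    rhs = ((math.pi * nu_max) ** (-8.0 / 3.0) * (1.0 + z_e) ** (8.0 / 3.0)
           + (256.0 / 5.0) * mc ** (5.0 / 3.0)
           * (t_of_z(z_f) + tau_m - t_of_z(z_e)) * (1.0 + z_e) ** (8.0 / 3.0))
    return rhs ** (-3.0 / 8.0) / math.pi

def z_emit(nu, z_f, tau_m, mc_kg, nu_max, iters=100):
    """Bisect for z_e' in [0, z_f]; nu_obs is monotonically decreasing."""
    if not (nu_obs(z_f, z_f, tau_m, mc_kg, nu_max) <= nu
            <= nu_obs(0.0, z_f, tau_m, mc_kg, nu_max)):
        return None               # nu outside the band emitted by this system
    lo, hi = 0.0, z_f
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if nu_obs(mid, z_f, tau_m, mc_kg, nu_max) > nu:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For a (wd, wd) system with merger-time longer than a Hubble time, this inversion returns the low observed frequencies that build up the primary component discussed above.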
Conversely, the low-frequency part of the spectrum is dominated by systems with merger-times larger than a Hubble time. These sources are observed at very low frequencies because the value of the minimum frequency (which is emitted at formation, $`z_f`$) is set by the amplitude of the merger-time \[see eq. (16)\]. The larger the merger-time, the smaller the minimum frequency at which the inspiral waves are emitted. Moreover, eq. (6) shows that the flux emitted by each source decreases with frequency. This explains the larger amplitude of primary components with respect to secondary ones. For systems with merger-times larger than a Hubble time, the largest frequency is emitted at $`z=0`$ by binaries which form at $`z_f\simeq z_F`$. No contribution from such objects can be observed above this critical frequency and the primary component falls rapidly to zero. The amplitude of secondary components reflects the number of systems with moderate merger-times. The maximum frequency which might be observed is emitted by systems which are very close to their coalescence at $`z=0`$. Since $`\nu _{max}`$ is larger for (ns, ns) than for (wd, wd), the secondary component produced by double neutron stars extends up to $`10^2`$ Hz. It is interesting to note that monolithic scenarios predict a maximum amplitude which is about 20-25% larger than in the hierarchical case. This difference is much larger than what has been previously obtained for other extragalactic backgrounds (see e.g. FMSI), indicating that the energy density produced by extragalactic compact binaries is substantially contributed by sources which form at redshifts $`\stackrel{>}{}1`$–2. It is quite difficult to unveil the origin of this effect because of the large number of parameters which determine the appearance of the final energy density. 
However, a plausible explanation might be that, depending on its specific time-delays $`\tau _s`$ and $`\tau _m`$, each system emits the signal at redshifts which can be substantially smaller than the formation redshift of the corresponding progenitor system. Thus, although the background signal is mostly emitted at low-to-intermediate redshifts, the sources which produce these signals might have been formed at higher redshifts and reflect the state of the Universe at earlier times, when the differences between hierarchical and monolithic scenarios are more significant. Comparing the different panels of Fig. 9, we conclude that the background produced by (bh, bh) binaries has the largest amplitude but it is concentrated at frequencies below $`2\times 10^{-5}`$ Hz. At higher frequencies, which are more interesting from the point of view of detectability, the dominant contribution comes from (wd, wd) systems. This is consistent with what has already been found for the galactic populations (Hils, Bender & Webbink 1990). From the background spectrum it is possible to compute the closure density $`\mathrm{\Omega }_{gw}h^2`$ and the spectral strain amplitude of the signal $`S_h`$, $`S_h(\nu )`$ $`=`$ $`{\displaystyle \frac{2G}{\pi c^3}}{\displaystyle \frac{1}{\nu ^2}}{\displaystyle \frac{dE}{dSdtd\nu }}(\nu ),`$ (18) $`\mathrm{\Omega }_{gw}(\nu )`$ $`=`$ $`{\displaystyle \frac{\nu }{c^3\rho _{cr}}}{\displaystyle \frac{dE}{dtdSd\nu }}(\nu ).`$ (19) The results are shown in Figs. 10 and 11 for all binary types within monolithic and hierarchical scenarios. The strain amplitude of the backgrounds has a maximum between $`10^{-18}\text{Hz}^{-1/2}`$ and $`5\times 10^{-17}\text{Hz}^{-1/2}`$ at frequencies in the interval $`[5\times 10^{-6}\text{–}5\times 10^{-5}]`$ Hz. The function $`S_h`$ is more sensitive to the low frequency part of the energy density. Therefore, its shape reflects mainly the primary components of the corresponding energy density.
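Given a tabulated spectral flux $`dE/(dS\,dt\,d\nu )`$, eqs. (18)-(19) are pointwise conversions. A minimal sketch; the Hubble rate used for $`\rho _{cr}`$ is a fiducial value ($`h\simeq 0.7`$), and any flux fed in is a placeholder, not one of the paper's results:

```python
import math

G = 6.674e-11            # gravitational constant [m^3 kg^-1 s^-2]
c = 2.998e8              # speed of light [m/s]
H0 = 2.27e-18            # fiducial Hubble rate [1/s] (h ~ 0.7), assumed
rho_cr = 3.0 * H0**2 / (8.0 * math.pi * G)   # critical density [kg/m^3]

def strain_density(nu, flux):
    """S_h(nu) [1/Hz] from eq. (18); flux = dE/(dS dt dnu) in SI units."""
    return 2.0 * G / (math.pi * c**3) * flux / nu**2

def closure_density(nu, flux):
    """Dimensionless Omega_gw(nu) from eq. (19)."""
    return nu * flux / (c**3 * rho_cr)
```

Note the scalings these encode: at fixed flux, $`S_h`$ falls as $`\nu ^{-2}`$ while $`\mathrm{\Omega }_{gw}`$ grows linearly with $`\nu `$, which is why $`S_h`$ emphasizes the primary components and $`\mathrm{\Omega }_{gw}`$ the secondary ones.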
In all but the (bh, bh) population, the presence of a tail at frequencies above the maximum is evident; this tail is the secondary component of the energy density. In the next section we compare this part of the background signal with the LISA sensitivity to assess the possibility of a detection. Still, it is clear that the prominent part of the background signals produced by extragalactic populations of degenerate binaries could be observed with a detector sensitive to lower frequencies than LISA. Conversely, $`\mathrm{\Omega }_{gw}h^2`$ is mostly dominated by secondary components. We can compare the predictions for (bh, bh), (wd, wd) and (ns, ns) systems. Contrary to what has been found for the spectral energy density or for the strain amplitude of the signal, the largest $`\mathrm{\Omega }_{gw}h^2`$ is produced by (ns, ns), as a consequence of the high amplitude of the secondary component. In particular, no significant contribution from the primary component appears. For (wd, wd), instead, the contribution of the primary component is relevant, although its amplitude is roughly half that of the secondary component. Finally, for (bh, bh) no secondary component is produced and thus the amplitude of the closure density is very low and confined to very low frequencies. Mixed binary types have different properties, depending on the relative importance of the above effects. For instance, (bh, wd) systems produce a secondary component, but its amplitude is so small as to be comparable with that of the primary. We stress that the value of $`\nu _{max}`$ is quite uncertain, as it defines the boundary between the early inspiral phase and the highly non-linear merger. Clearly, the closer we get to this boundary, the less accurate the Newtonian description of the orbit becomes, as post-Newtonian terms start to be relevant. Therefore, we believe that the most reliable part of the binary background signal is the low frequency part, i.e.
the part which mostly contributes to the strain amplitude $`S_h`$. ## 5 Confusion noise level and detectability by LISA To have some confidence in the detection of a stochastic gravitational background with LISA it is necessary to have a sufficiently large SNR. The standard choice made by the LISA collaboration is $`\text{SNR}=5`$ which, in turn, yields a minimum detectable amplitude of a stochastic signal of (see Bender 1998 and references therein), $$h^2\mathrm{\Omega }_{gw}[\nu =1\text{mHz}]\simeq 10^{-12}.$$ (20) This value already accounts for the angle between the arms ($`60^{\circ }`$) and the effect of LISA's motion. It shows the remarkable sensitivity that would be reached in the search for stochastic signals at low frequencies. Table 2 shows that the backgrounds generated by (wd, wd) and (ns, ns) extragalactic binary populations exceed this minimum value, and LISA might be able to detect these signals. We plot in Fig. 12 the predicted sensitivity of LISA to a stochastic background after 1 year of observation (Bender 1998). The vertical axis shows $`h_{\text{rms}}`$, defined as, $$h_{\text{rms}}=[2\nu S_n(\nu )]^{1/2}\left(\frac{\mathrm{\Delta }\nu }{\nu }\right)^{1/2}$$ (21) where $`S_n(\nu )`$ is the predicted spectral noise density and the factor $`(\mathrm{\Delta }\nu /\nu )^{1/2}`$ is introduced to account for the frequency resolution $`\mathrm{\Delta }\nu =1/T`$ attained after a total observation time $`T`$. On the same plot we show the equivalent $`h_{\text{rms}}`$ levels predicted for the different extragalactic binary populations and for the galactic population of close white dwarf binaries considered by Bender & Hils (1997). We see that the extragalactic backgrounds might be observable at frequencies between $`1`$ and $`10`$ mHz. These background signals represent additional noise components to the LISA sensitivity curve when searching for signals from individual sources.
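Eq. (21) transcribes directly to code; note that with $`\mathrm{\Delta }\nu =1/T`$ the two square roots combine to $`h_{\text{rms}}=(2S_n/T)^{1/2}`$, so longer integration lowers the per-bin noise floor. The noise value below is a placeholder, not LISA's actual noise curve:

```python
def h_rms(nu, S_n, T_obs):
    """rms amplitude per frequency-resolution bin, eq. (21).
    nu [Hz], S_n [1/Hz], T_obs [s]; the bin width is Delta_nu = 1/T_obs."""
    delta_nu = 1.0 / T_obs
    return (2.0 * nu * S_n) ** 0.5 * (delta_nu / nu) ** 0.5

YEAR = 3.156e7   # seconds in one year

# Placeholder spectral noise density at 1 mHz (illustrative only)
print(f"h_rms ~ {h_rms(1e-3, 1e-40, YEAR):.2e}")
```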
In particular, backgrounds from unresolved astrophysical sources represent a confusion-limited noise. In fact, unless the signal emitted by an individual source has a much higher amplitude, the background signal prevents the individual source from being resolved. Clearly, the magnitude of this effect depends on the frequency resolution of the instrument, i.e. on the observation time. The $`h_{\text{rms}}`$ noise levels produced by extragalactic compact binaries shown in Fig. 12 have been computed assuming $`T=1`$ yr. For the same total observation time we show, in Fig. 13, the number of extragalactic (wd, wd) and (ns, ns) systems observed in each frequency resolution bin. At frequencies where these backgrounds might be relevant (between 1 and 10 mHz), the number of sources per bin is $`\gg 1`$, representing a relevant confusion-limited noise component. The critical frequency above which the number of sources per bin is lower than 1 occurs at $`\simeq 0.1`$ Hz for (wd, wd) and outside the LISA sensitivity window for (ns, ns). However, at these frequencies the dominant noise component is the instrumental noise. ## 6 Conclusions In this paper we have obtained estimates for the stochastic background of gravitational waves emitted by cosmological populations of compact binary systems during their early-inspiral phase. Since we have restricted our investigation to frequencies well below the frequency emitted when each system approaches its last stable circular orbit, we have characterized the single source emission using the quadrupole approximation. Our main motivation was to develop a simple method to estimate the gravitational signal produced by populations of binary systems at extragalactic distances. This method relies on three main pieces of information: 1. the theoretical description of gravitational waveforms, to characterize the single-source contribution to the overall background; 2.
the predictions of binary population synthesis codes, to characterize the distribution of astrophysical parameters (masses of the stellar components, orbital parameters, merger times etc.) among each ensemble of binary systems; 3. a model for the evolution of the cosmic star formation history, derived from a collection of observations out to $`z\simeq 5`$, to infer the evolution of the birth and merger rates for each binary population throughout the Universe. As we have considered only the early-inspiral phase of the binary evolution, our predictions for the resulting gravitational signals are restricted to the low frequency band $`10^{-5}`$–$`1`$ Hz. The stochastic background signals produced by (wd, wd) and (ns, ns) might be observable with LISA and add as confusion-limited noise components to the LISA instrumental noise and to the signal produced by binaries within our own Galaxy. The extragalactic contributions are dominant at frequencies in the range $`1`$–$`10`$ mHz and limit the performance expected for LISA in the same range, where the previously estimated sensitivity curve attained its minimum. We plan to extend this preliminary study further and to consider more realistic waveforms, so as to enter a frequency region interesting for ground-based experiments. Finally, in Fig. 14 we show the spectral densities of the extragalactic backgrounds that have been investigated so far. The high frequency band appears to be dominated by the stochastic signal from a population of rapidly rotating neutron stars via the r-mode instability (see FMSII). For comparison, we have shown the overall signal emitted during the core-collapse of massive stars to black holes (see FMSI). In this case, the amplitude and frequency range depend sensitively on the fraction of the progenitor star which participates in the collapse. The signal indicated with BH corresponds to the conservative assumption that the core mass is $`10\%`$ of the progenitor's (see FMSI).
Recent numerical simulations of core-collapse supernova explosions (Fryer 1999) appear to indicate that for progenitor masses $`>40M_{\odot }`$ no supernova explosion occurs and the star directly collapses to form a black hole. The final mass of this core depends strongly on the relevance of mass loss caused by stellar winds (Fryer & Kalogera 2000). If massive black holes are formed, the resulting background would have a larger amplitude and the relevant signal would be shifted towards lower frequencies, more interesting for ground-based interferometers (Schneider, Ferrari & Matarrese 1999). In the low frequency band, we have plotted only the backgrounds produced by (bh, bh), (ns, ns) and (wd, wd) binaries because their signals largely overwhelm those from the other degenerate binary types. We find that both in the low and in the high frequency band, extragalactic populations generate a signal which is comparable to and, in some cases, larger than the backgrounds produced by populations of sources within our Galaxy (Giazotto, Bonazzola & Gourgoulhon 1997; Giampieri 1997; Postnov 1997; Hils, Bender & Webbink 1990; Bender & Hils 1997; Postnov & Prokhorov 1998; Nelemans, Portegies Zwart & Verbunt 1999). It is important to stress that even if future investigations reveal that the amplitude of galactic backgrounds is higher than presently conceived, their signal could still be discriminated from that generated by sources at extragalactic distances. In fact, the signal produced within the Galaxy shows a characteristic amplitude modulation when the antenna changes its orientation with respect to the fixed stars (Giazotto, Bonazzola & Gourgoulhon 1997; Giampieri 1997). The same conclusions can be drawn when the extragalactic backgrounds are compared to the stochastic relic gravitational signals predicted by some classical early Universe scenarios. The relic gravitational backgrounds suffer from the many uncertainties which characterize our present knowledge of the early Universe.
According to the presently conceived typical spectra, we find that their detectability might be severely limited by the amplitude of the more recent astrophysical backgrounds, especially in the high frequency band. ## Acknowledgments We acknowledge Bruce Allen, Pia Astone, Andrea Ferrara, Sergio Frasca, Piero Madau and Lucia Pozzetti for useful conversations and fruitful insights into various aspects of this work. SPZ thanks Gijs Nelemans and Lev Yungelson for discussions and code development. This work was supported by NASA through Hubble Fellowship grant HF-01112.01-98A awarded (to SPZ) by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA under contract NAS 5-26555. Part of the calculations were performed on the Origin2000 SGI supercomputer at Boston University. SPZ is grateful to the University of Amsterdam (under Spinoza grant 0-08 to Edward P.J. van den Heuvel) for its hospitality.
# Research and Education in Basic Space Science: The Approach Pursued in the UN/ESA Workshops

HAMID M.K. Al-NAIMIY<sup>a</sup> (Jordan), CYNTHIA P. CELEBRE<sup>b</sup> (Philippines), KHALIL CHAMCHAM<sup>c</sup> (Morocco), H.S. PADMASIRI DE ALWIS<sup>d</sup> (Sri Lanka), MARIA C. PINEDA DE CARIAS<sup>e</sup> (Honduras), HANS J. HAUBOLD<sup>f</sup> (United Nations), ALEXIS E. TROCHE BOGGINO<sup>g</sup> (Paraguay)

ABSTRACT. Since 1990, the United Nations, in cooperation with the European Space Agency, has been holding an annual workshop on basic space science for the benefit of the worldwide development of astronomy. These workshops have been held in countries of Asia and the Pacific (India, Sri Lanka), Latin America and the Caribbean (Costa Rica, Colombia, Honduras), Africa (Nigeria), Western Asia (Egypt, Jordan), and Europe (Germany, France). In addition to the scientific benefits of the workshops and the strengthening of international cooperation, the workshops have led to the establishment of astronomical telescope facilities in Colombia, Egypt, Honduras, Jordan, Morocco, Paraguay, Peru, the Philippines, Sri Lanka, and Uruguay. The annual UN/ESA Workshops continue to pursue an agenda to network these astronomical telescope facilities through similar research and education programmes. Teaching material and hands-on astrophysics material have been developed for the operation of such astronomical telescope facilities in a university environment.

1. Introduction

Research and education in astronomy and astrophysics are an international enterprise, and the astronomical community has long shown leadership in creating international collaborations and cooperation: because (i) astronomy has deep roots in virtually every human culture, (ii) it helps to understand humanity’s place in the vast scale of the universe, and (iii) it teaches humanity about its origins and evolution.
Humanity’s activity in the quest for the exploration of the universe is reflected in the history of scientific institutions, enterprises, and sensibilities. The institutions that sustain science; the moral, religious, cultural, and philosophical sensibilities of scientists themselves; and the goals of the scientific enterprise in different regions on Earth are the subject of intense study (Pyenson and Sheets-Pyenson 1999). The Bahcall report for the last decade of the 20th century (Bahcall, 1991) was prepared primarily for the North American astronomical community; however, it may have gone unnoticed that this report also had an impact on a broader international scale, as the report can be used, to some extent, as a guide to introduce basic space science, including astronomy and astrophysics, in nations where this field of science is still in its infancy. Attention is drawn to the world-wide-web site at http://www.seas.columbia.edu/$``$ah297/un-esa/ where it is publicized how developing nations are making efforts to introduce basic space science into research and education curricula at the university level. This initiative was born in 1990 as a collaborative effort of developing nations, the United Nations (UN), the European Space Agency (ESA), and the Government of Japan, and covers the last decade of the 20th century. Through annual workshops and subsequent follow-up projects, particularly the establishment of astronomical telescope facilities, this initiative is gradually yielding results in the regions of Asia and the Pacific, Latin America and the Caribbean, Africa, and Western Asia.

2. Workshops on Basic Space Science

In 1959, the United Nations recognized a new potential for international cooperation and formed a permanent body by establishing the Committee on the Peaceful Uses of Outer Space (COPUOS).
In 1970, COPUOS formalized the UN Programme on Space Applications to strengthen cooperation in space science and technology between non-industrialized and industrialized nations. In 1991 the UN, in close cooperation with developing nations, ESA, and the Government of Japan, started under the auspices of COPUOS a series of annual Workshops on Basic Space Science, which were hosted by UN member States in the five economic regions defined by the UN: India, Costa Rica, Colombia, Nigeria, Egypt, Sri Lanka, Germany, Honduras, Jordan, and France (Haubold 1998). Over the past ten years, the workshops established a close interaction between scientists from developing and industrialized nations to discuss research findings at the current front lines of basic space science. The workshops also initiated a direct interaction between scientists from developing nations. In-depth discussions in working groups were fostered to allow the identification of the needs - especially common needs, which could be addressed on a larger scale - to enhance the participation of the developing nations in basic space science and to identify the best ways and means by which each nation could accelerate its participation in a meaningful endeavor. The eighth in the series of UN/ESA Workshops on Basic Space Science, which, among other topics, addressed the feasibility of establishing a World Space Observatory (WSO), was held in Jordan in 1999. The ninth workshop will be held in France in 2000, and preparations are ongoing for the tenth workshop, to be held in Mauritius in 2001. The UN, ESA, Japan, and international organizations will continue to provide assistance for the establishment and operation of astronomical facilities in Colombia, Egypt, Honduras, Jordan, Morocco, Paraguay, Peru, the Philippines, Sri Lanka, and Uruguay. 3.
Astronomical Telescope Facilities

A number of Governments (among them Honduras 1997 and Jordan 1999), in cooperation with international partners, have acquired and established astronomical telescope facilities in their countries (Meade 16" Schmidt-Cassegrain models). In conjunction with the workshops, to support research and education in astronomy, the Government of Japan has donated high-grade equipment to a number of developing nations (among them Sri Lanka 1995, Paraguay 1998, the Philippines 2001) within the ODA scheme of the Government of Japan (Kitamura 1999). We refer here to 45cm high-grade astronomical telescopes furnished with a photoelectric photometer, computer equipment, and a spectrograph (or CCD). After the installation of the telescope facility by the host country and Japan, in order to operate such high-grade telescopes, young observatory staff members from Sri Lanka and Paraguay have been invited by the Bisei Astronomical Observatory for education and training, sponsored by the Japan International Cooperation Agency \[JICA\] (Kitamura 1999, Kogure 1999). The research and education programmes at the newly established telescope facilities will focus on time-varying phenomena of celestial objects. A 45cm-class reflecting telescope with a photoelectric photometer attached is able to detect celestial objects up to the 12th magnitude, and with a CCD attached up to the 15th magnitude. Such results have been demonstrated for the light variation of the eclipsing close binary star V505 Sgr, the X-ray binary Cyg X-1, the eclipsing part of the long-period binary e Aur, the asteroid No. 45 Eugenia, and the eclipsing variable RT CMa (Kitamura 1999). In forthcoming workshops, common observational programmes for variable stars at all the telescope facilities are envisaged. 4.
Observing with the Telescopes: Research

In the course of preparing the establishment of the above astronomical telescope facilities, the workshops made intense efforts to identify available material to be used in research and education utilizing such facilities. It was discovered that variable star observing by photoelectric or CCD photometry can be a prelude to even more advanced astronomical activity. Variable stars are those whose brightness, colour, or some other property varies with time. If measured sufficiently carefully, almost every star turns out to be variable. The variation may be due to geometry, such as the eclipse of one star by a companion star, or the rotation of a spotted star, or it may be due to physical processes such as pulsation, eruption, or explosion. Variable stars provide astronomers with essential information about the internal structure and evolution of the stars. The preeminent institution in this specific field of astronomy is the American Association of Variable Star Observers (AAVSO). The AAVSO co-ordinates variable star observations made by amateur and professional astronomers, compiles, processes, and publishes them, and in turn makes them available to researchers and educators. The AAVSO receives over 350,000 measurements a year from more than 550 observers world-wide. The measurements are entered into the AAVSO electronic database, which contains close to 10 million measurements of several thousand stars.
To facilitate the operation of variable star observing programmes and to prepare a common ground for such programmes, the AAVSO developed a rather unique package titled “Hands-On Astrophysics” which includes 45 star charts, 31 35mm slides of five constellations, 14 prints of the Cygnus star field at seven different times, 600,000 measurements of several dozen stars, user-friendly computer programmes to analyze them and to enter new observations into the database, an instructional video in three segments, and a very comprehensive manual for teachers and students (http://www.aavso.org/). Assuming that the telescope is properly operational, variable stars can be observed, and measurements can be analyzed and sent electronically to the AAVSO. The flexibility of the “Hands-On Astrophysics” material allows an immediate link to the teaching of astronomy or astrophysics at the university level by using the astronomy, mathematics, and computer elements of this package. It can be used as a basis for involving both the professor and the student in doing real science with real observational data. After a careful exploration of “Hands-On Astrophysics”, and thanks to the generous cooperation of the AAVSO, it was adopted by the above astronomical telescope facilities for their observing programmes (Mattei and Percy 1999, Percy 1991). The results of this effort will be reviewed at forthcoming workshops on basic space science. 5.
What concerns the spirit of the workshops on basic space science, they have been organized and hosted by Governments and scientific communities which agreed beforehand on the need to introduce or further develop basic space science at the university level and to establish adequate facilities for pursuing such a field of science in practical terms, i.e., to operate an astronomical facility for the benefit of the university or research establishment (and to prospectively make the results from the facility available for public educational efforts). Additional to the hosting of the workshops, the Governments agreed to operate such a telescope facility in a sustained manner with the call on the international community for support and cooperation in devising respective research and educational programmes. Gradually, this policy is being implemented for those telescope facilities established through the workshops in cooperation with the UN, ESA, Japan, and other national and international organizations. Organizers of the workshops have acknowledged in the past the desire of the local scientific communities to use educational material adopted and available at the local level (prepared in the local language). However, the workshops have also recommended to explore the possibility to develop educational material (additional to the above mentioned “Hands-On Astrophysics” package) which might be used by as many as possible university staff in different nations while preserving the specific cultural environment in which astronomy is being taught and the telescope is being used. A first promising step in this direction was made with the project “Astrophysics for University Physics Courses” (Wentzel 1999b). This project has been highlighted at the IAU/COSPAR/UN Special Workshop on Education in Astronomy and Basic Space Science, held during the UNISPACE III Conference at the United Nations Office Vienna in 1999 (Isobe 1999). 
Additionally, a number of textbooks and CD-ROMs have been reviewed over the years which, in the view of astronomers from developing nations, are particularly useful in the research and teaching process (for example, just to name three of them: Bennett et al. 1999, for teaching purposes; Lang 1999, a reference work in the research process; Hamilton 1996, a CD-ROM for astronomy in the classroom). This issue could be further discussed in the Newsletters published specifically for the benefit of Africa (Querci and Martinez 1999), Asia and the Pacific (Isobe 1999), and Latin America and the Caribbean (Eenens and Corral 1999).

6. What Next?

To take exact account of the obstacles encountered and the progress made, as observed in the ten-year-long approach pursued in the UN/ESA Workshops as described above, the Ninth UN/ESA Workshop on Basic Space Science: Satellites and Telescopic Networks - Tools for Global Participation in the Studies of the Universe, hosted by the Centre National d’Etudes Spatiales (CNES) at the Observatoire Midi-Pyrénées (Université Paul Sabatier), on behalf of the Government of France, 27-30 June 2000, Toulouse, France, will address the benefits of basic space science to society, particularly developing nations, and the experience with, results from, and need for networks of astronomical telescopes in terms of common research and education programmes. The astronomical telescopes in question are particularly those mentioned above. During this workshop, additional working group sessions will be held to review the topics in sections 1 to 5 above and to chart the course for the future. The workshop in France will also address in detail the feasibility of establishing a World Space Observatory (WSO), discussed since the workshop in Sri Lanka in 1995, and the participation of developing nations in such an effort, both from the point of view of research and of education (Wamsteker and Gonzales Riestra 1997, United Nations GA Document A/AC.105/723).
Acknowledgments

The authors are grateful to the following colleagues for sharing the pleasure of organizing the UN/ESA Workshops on Basic Space Science and their follow-up projects as described in the above article: J. Andersen (IAU), J. Bennett (USA), S.C. Chakravarty (India), W. Fernandez (Costa Rica), S. Isobe (Japan), M. Kitamura (Japan), T. Kogure (Japan), K.R. Lang (USA), P. Martinez (South Africa), J. Mattei (AAVSO), P.N. Okeke (Nigeria), L.I. Onuora (UK), L. Pyenson (USA), F.-R. Querci (France), S. Rughooputh (Mauritius), R. Schwartz (Germany), M.A. Shaltout (Egypt), S. Torres (Colombia), W. Wamsteker (ESA), and D.G. Wentzel (USA).

References

a) Astronomical Observatory, Higher Institute of Astronomy and Space Sciences, Al al-Bayt University, P.O. Box 130302, Al Mafraq, Jordan, alnaimiy@yahoo.com b) Astronomical Observatory, Philippine Atmospheric, Geophysical & Astronomical Services Administration, 1424 Asia Trust Bank Building, Quezon Avenue, Quezon City, Philippines, cynthia<sub>-</sub>celebre@hotmail.com c) Faculty of Science Ain Chock, University Hassan II, B.P. 5366 Maarif, Casablanca 05, Morocco, chamcham@star.cpes.susx.ac.uk d) Astronomical Observatory, Arthur C. Clarke Institut for Modern Technologies, Katubedda, Moratuwa, Sri Lanka, asela@slt.lk e) Observatorio Astronomico, Universidad Nacional Autonoma de Honduras, Apartado Postal 4432, Tegucigalpa M.D.C., Honduras, mcarias@hondutel.hn f) Office for Outer Space Affairs, United Nations, Vienna International Centre, P.O. Box 500, A-1400 Vienna, Austria, haubold@kph.tuwien.ac.at g) Observatorio Astronomico, Facultad Politecnica, Universidad Nacional de Asuncion, Ciudad Universitaria, San Lorenzo, Paraguay, atroche@pol.com.py Bahcall, J.N., The Decade of Discovery in Astronomy and Astrophysics, National Academy Press, Washington D.C., 1991.
Bennett, J., Donahue, M., Schneider, N., and Voit, M., The Cosmic Perspective, Addison Wesley Longman Inc., Menlo Park, California, 1999; a www site, offering a wealth of additional material for professors and students, specifically developed for teaching astronomy with this book and upgraded on a regular basis is also available: http://www.astrospot.com/. Eenens, Ph. and Corral, L., Astronomia Latino Americana (ALA), electronic Bulletin, http://www.astro.ugto.mx/$``$ala/. Hamilton, C.J., Views of the Solar System CD-ROM, National Science Teachers Association, Arlington, 1996. Haubold, H.J., “UN/ESA Workshops on Basic Space Science: an initiative in the world-wide development of astronomy”, Journal of Astronomical History and Heritage 1(2):105-121, 1998; an updated version of this paper is available at http://xxx.lanl.gov/abs/physics/9910042. Isobe, S., Teaching of Astronomy in Asian-Pacific Region, Bulletin No. 15, 1999, published since 1991, isobesz@cc.nao.ac.jp. Kitamura, M., “Provision of astronomical instruments to developing countries by Japanese ODA with emphasis on research observations by donated 45cm reflectors in Asia”, in Conference on Space Sciences and Technology Applications for National Development: Proceedings, held at Colombo, Sri Lanka, 21-22 January 1999, Ministry of Science and Technology of Sri Lanka, pp. 147-152. Kogure, T., “Stellar activity and needs for multi-site observations”, in Conference on Space Sciences and Technology Applications for National Development: Proceedings, held at Colombo, Sri Lanka, 21-22 January 1999, Ministry of Science and Technology of Sri Lanka, pp. 124-131. Lang, K.R., Astrophysical Formulae, Volume I: Radiation, Gas Processes and High Energy Astrophysics, Volume II: Space, Time, Matter and Cosmology, Springer-Verlag, Berlin 1999. Mattei, J. and Percy, J. R. (Eds.), Hands-On Astrophysics, American Association of Variable Star Observers, Cambridge, MA 02138, 1998; http://www.aavso.org/. Percy, J.R. 
(Ed.), Astronomy Education: Current Developments, Future Cooperation: Proceedings of an ASP Symposium, Astronomical Society of the Pacific Conference Series Vol. 89, 1991. Pyenson, L. and Sheets-Pyenson, S., Servants of Nature: A History of Scientific Institutions, Enterprises, and Sensibilities, W.W. Norton & Company, New York, 1999. Querci, F.-R. and P. Martinez (Eds.), African Skies/Cieux Africains, Newsletter, four issues published since 1997, http://www.saao.ac.za/$``$wgssa/. United Nations Report on the UN/ESA Workshop on Basic Space Science, hosted by Al al-Bayt University, Mafraq, Jordan, on behalf of the Government of Jordan, A/AC.105/723, 18 May 1999. Wamsteker, W. and Gonzales Riestra, R. (Eds.), Ultraviolet Astrophysics Beyond the IUE Final Archive: Proceedings of the Conference, held at Sevilla, Spain, 11-14 November 1997, European Space Agency SP-413, pp. 849-855. Wentzel, D.G., “National strategies for science development”, Teaching of Astronomy in Asian-Pacific Region, Bulletin No. 15, 1999a, pp. 4-10. Wentzel, D.G., Astrofisica para Cursos Universitarios de Fisica, La Paz, Bolivia, 1999b, English language version available at http://www.seas.columbia.edu/$``$ah297/un-esa/astrophysics.
no-problem/0002/cond-mat0002080.html
ar5iv
text
# Double-dot charge transport in Si single electron/hole transistors \[ ## Abstract We studied transport through ultra-small Si quantum dot transistors fabricated from silicon-on-insulator wafers. At high temperatures, 4 K $`<T<`$ 100 K, the devices show single-electron or single-hole transport through the lithographically defined dot. At $`T<4`$ K, current through the devices is characterized by multidot transport. From the analysis of the transport in samples with double-dot characteristics, we conclude that extra dots are formed inside the thermally grown gate oxide which surrounds the lithographically defined dot. \] Recent advances in the miniaturization of Si metal-oxide-semiconductor field-effect transistors (MOSFETs) have brought to light several issues related to electrical transport in Si nanostructures. At low temperatures and low source-drain bias, Si nanostructures do not follow regular MOSFET transconductance characteristics but show rather complex behavior, suggesting transport through multiply-connected dots. Even in devices with no intentionally defined dots (such as Si quantum wires or point contacts) Coulomb blockade oscillations were reported. In the case of quantum wires, the formation of tunneling barriers is usually attributed to fluctuations of the thickness of the wire or of the gate oxide. However, the formation of a dot in point contact samples is not quite consistent with such an explanation. Recently, in an elegant experiment with both $`n^+`$ and $`p^+`$ source/drain connected to the same Si point contact, Ishikuro and Hiramoto have shown that the confining potential in unintentionally created dots is similar for both holes and electrons. However, there is no clear picture of where and how these dots are formed. In this work we analyze the low temperature transport through ultra-small lithographically defined Si quantum dots. 
While at high temperatures, 4 K $`<T<`$ 100 K, we observe single-electron tunneling through the lithographically defined dot, at $`T<4`$ K transport is found to be typical for a multi-dot system. We restrict ourselves to the analysis of samples with double-dot transport characteristics. From the data we extract electrostatic characteristics of both the lithographically defined and the extra dots. Remarkably, transport in some samples cannot be described by tunneling through two dots connected in sequence but rather reflects tunneling through dots connected in parallel to both source and drain. Taking into account the geometry of the samples, we conclude that the extra dots should be formed within the gate oxide. Transport in p- and n-type samples is similar, suggesting that the origin of the confining potential for electrons and holes in these extra dots is the same. The samples are metal-oxide-semiconductor field-effect transistors (MOSFETs) fabricated from a silicon-on-insulator (SOI) wafer. The top silicon layer is patterned by electron-beam lithography to form a small dot connected to wide source and drain regions; see the schematic in Fig. 1a. Next, the buried oxide is etched beneath the dot, transforming it into a free-standing bridge. Subsequently, 40 or 50 nm of oxide is thermally grown, which further reduces the size of the dot. A poly-silicon gate is deposited over the bridge with the dot as well as over the adjacent regions of the source and drain. It is important to note that in this type of device the gate not only controls the potential of the dot but also changes the dot-source and dot-drain barriers. Finally, the uncovered regions of the source and drain are n–type or p–type doped. More details on sample preparation can be found in Ref. . In total, about 30 hole and electron samples have been studied. Here we present data from two samples with hole (H5A) and electron (E5-7) field-induced channels. An SEM investigation of test samples, Fig. 
1b, reveals that the lithographically defined dot in the Si bridge is 10-40 nm in diameter and the distance between narrow regions of the bridge is $`\sim`$70 nm. Taking into account the oxide thickness we estimate the gate capacitance to be 0.8–1.5 aF. In most of our samples (with both n– and p–channel) we see clear Coulomb blockade oscillations with a period $`\mathrm{\Delta }V_{g1}=100-160`$ mV up to $`\sim`$100 K. Typical charge addition spectra are plotted in Figs. 2 and 3 for samples H5A and E5-7. In H5A the spectrum is almost periodic as a function of the gate voltage $`V_g`$ at $`T>4`$ K with the period $`\mathrm{\Delta }V_{g1}\approx 130`$ mV. Assuming that each peak corresponds to an addition of one hole into the dot we calculate the gate capacitance $`C_{g1}=e/\mathrm{\Delta }V_{g1}=1.2`$ aF, which is within the error bars for the capacitance estimated from the sample geometry. The lineshape of an individual peak can be described by $`G\propto \mathrm{cosh}^{-2}[(V_g-V_g^i)/2.5\alpha k_BT]`$, where $`V_g^i`$ is the peak position and the coefficient $`\alpha =C_{total}/eC_g`$ relates the change in $`V_g`$ to the shift of the energy levels in the dot relative to the Fermi energy in the contacts. This expression is valid if both the coupling to the leads $`\mathrm{\Gamma }`$ and the single-particle level spacing $`\mathrm{\Delta }E`$ are small: $`\mathrm{\Gamma },\mathrm{\Delta }E\ll k_BT\ll e^2/C_{total}`$. We fit the data for H5A with $`\sum_i\mathrm{cosh}^{-2}[(V_g-V_g^i)/w]`$ in the range -3.0 V $`<V_g<-2.2`$ V and the extracted $`w`$ is plotted in the inset in Fig. 2. From the linear fit $`w=11.3+2.2T`$ \[mV\] we find the coefficient $`\alpha =10`$ \[mV/meV\], thus the Coulomb energy is $`13`$ meV and the total capacitance $`C_{total}=12.3`$ aF. The main contribution to $`C_{total}`$ comes from dot-to-lead capacitances (an estimated self-capacitance is a few aF). The extrapolated value of $`w`$ at zero temperature provides an estimate for the level broadening $`\mathrm{\Gamma }\approx 1`$ meV. 
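The chain from the measured quantities to the dot parameters quoted above can be reproduced in a few lines. This is a sketch using only numbers stated in the text (CB period 130 mV, peak-width fit $`w=11.3+2.2T`$ mV, thermal broadening $`w=2.5\alpha k_BT`$):

```python
# Cross-check of the H5A single-dot parameters from the measured
# Coulomb-blockade period and the temperature dependence of the peak width.
e = 1.602e-19          # elementary charge [C]
k_B = 8.617e-5         # Boltzmann constant [eV/K]

dVg1 = 130e-3                          # CB period [V]
Cg1_aF = e / dVg1 * 1e18               # gate capacitance [aF]

slope_mV_per_K = 2.2                   # from the linear fit of w(T)
alpha = slope_mV_per_K / (2.5 * k_B * 1e3)   # lever arm [mV/meV]

Ec_meV = dVg1 * 1e3 / alpha            # charging energy, e*dVg1/alpha [meV]
Ctot_aF = e / (Ec_meV * 1e-3) * 1e18   # total capacitance from E_c = e^2/C

print(f"Cg1 = {Cg1_aF:.1f} aF, alpha = {alpha:.1f} mV/meV, "
      f"E_c = {Ec_meV:.1f} meV, C_total = {Ctot_aF:.1f} aF")
```

This reproduces $`C_{g1}\approx 1.2`$ aF, $`\alpha \approx 10`$ mV/meV, a Coulomb energy of about 13 meV, and $`C_{total}`$ of 12–13 aF, consistent with the values quoted in the text.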
At $`T<4`$ K oscillations with another period, much smaller than $`\mathrm{\Delta }V_{g1}`$, appear as a function of $`V_g`$. The small period is in the range $`\mathrm{\Delta }V_{g2}=8-25`$ mV in different devices ($`\mathrm{\Delta }V_{g2}=11.8`$ mV for the sample in Fig. 2). This small period is due to single-hole tunneling through a second dot and the corresponding gate capacitance $`C_{g2}=e/\mathrm{\Delta }V_{g2}=6-20`$ aF. However, there is no intentionally defined second dot in our devices. Below we first analyze the experimental results and then discuss where the second dot can be formed. At low temperatures and small gate voltages (close to the turn-on of the device at high temperatures) current is either totally suppressed, as in E5-7 at $`V_g<3.5`$ V, Fig. 3a, or there are sharp peaks with no apparent periodicity, as in H5A at $`V_g>-2.3`$ V, Fig. 2. Both suppression of the current and “stochastic Coulomb blockade” are typical signatures of tunneling through two sequentially connected dots. The non-zero conductance can be restored either by raising the temperature (Fig. 2) or by increasing the source-drain bias $`V_b`$ (Fig. 3a). In both cases, $`G`$ is modulated with $`\mathrm{\Delta }V_{g1}`$ and $`\mathrm{\Delta }V_{g2}`$, consistent with sequential tunneling. We conclude that in this regime the two dots are connected in series L-D<sub>1</sub>-D<sub>2</sub>-R (see schematic in Fig. 1c). At larger gate voltages ($`V_g>6`$ V for E5-7 and $`V_g<-2.3`$ V for H5A) current is not suppressed even at the lowest temperatures. However, the $`G`$ pattern is different in the H5A and E5-7 samples. In H5A, the oscillations with $`\mathrm{\Delta }V_{g2}`$ have approximately the same amplitude (except for the sharp peaks which are separated by approximately $`\mathrm{\Delta }V_{g1}`$), while in E5-7 the amplitude of the fast oscillations is modulated by $`\mathrm{\Delta }V_{g1}`$. 
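The stated range of second-dot gate capacitances follows directly from $`C_{g2}=e/\mathrm{\Delta }V_{g2}`$; a quick numerical check over the quoted periods:

```python
# Second-dot gate capacitance from the fast CB period, C_g2 = e / dVg2.
# The 8-25 mV period range maps onto a 6-20 aF capacitance range.
e = 1.602e-19   # elementary charge [C]
for dVg2_mV in (8.0, 11.8, 25.0):
    Cg2_aF = e / (dVg2_mV * 1e-3) * 1e18
    print(f"dVg2 = {dVg2_mV:5.1f} mV  ->  Cg2 = {Cg2_aF:.1f} aF")
```

For the sample of Fig. 2 ($`\mathrm{\Delta }V_{g2}=11.8`$ mV) this gives $`C_{g2}\approx 13.6`$ aF, an order of magnitude larger than $`C_{g1}`$.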
Also, the dependence of the amplitude of the fast modulations on the average conductance $`<G>`$ is different: in H5A the amplitude is almost $`<G>`$-independent, while in E5-7 it is larger for larger $`<G>`$. Non-vanishing periodic conductance at low temperatures requires that the transport is governed by the Coulomb blockade through only one dot D<sub>2</sub>. That can be achieved either if both barriers between the contacts and D<sub>2</sub> become transparent enough to allow substantial tunneling or if the strong coupling between the main dot D<sub>1</sub> and one of the leads results in a non-vanishing density of states in the dot at $`T=0`$. If we neglect coupling between the dots, in the former case the total conductance is approximately the sum of two conductances, $`G_{parallel}\approx G_1+G_2`$, where $`G_1`$ is the conductance through the main dot L-D<sub>1</sub>-R and $`G_2`$ is the conductance through the second dot L-D<sub>2</sub>-R. This case is modeled in Fig. 2c using experimentally determined parameters of sample H5A. From the analysis of high-temperature transport we found that the zero-temperature broadening of D<sub>1</sub> peaks $`\alpha \mathrm{\Gamma }\approx 10`$ mV $`\sim \mathrm{\Delta }V_{g2}\ll \mathrm{\Delta }V_{g1}=130`$ mV and that $`G`$ should be exponentially suppressed between D<sub>1</sub> peaks at $`T=0.3`$ K if the dots are connected in series L-D<sub>1</sub>-D<sub>2</sub>-R, Fig. 2b. The best description of the low temperature transport at -3.0 V $`<V_g<-2.3`$ V in H5A is achieved if we assume that there are two conducting paths in parallel: through the extra dot L-D<sub>2</sub>-R and through both dots together L-D<sub>1</sub>-D<sub>2</sub>-R, Fig. 2d. In the latter case, the dots are connected in series L-D<sub>1</sub>-D<sub>2</sub>-R. At high $`V_g`$ the barrier between L and D<sub>1</sub> is reduced, giving rise to a large level broadening $`\mathrm{\Gamma }`$. 
The total conductance is $`G_{series}\approx G_{BW}G_2/(G_{BW}+G_2)`$, where $`G_2`$ is the Coulomb blockade conductance through D<sub>2</sub> alone, $`G_{BW}=\frac{2e^2}{h}\mathrm{\Gamma }^2/(\mathrm{\Gamma }^2+\delta E^2)`$ is the Breit-Wigner conductance through D<sub>1</sub>, and $`\delta E=(V_g-V_g^i)/\alpha `$. In this case $`G_{series}`$ follows $`G_{BW}`$ and is modulated by $`G_2`$. Moreover, if we assume that the amplitude of $`G_2`$ is not a strong function of $`V_g`$, the amplitude of the $`G_{series}`$ modulation will be a function of $`G_{BW}`$; namely, the larger $`G_{BW}`$, the larger the amplitude of the modulation of the total conductance. This model of two dots in series with one being strongly coupled to the leads is in qualitative agreement with the data from sample E5-7. Non-equilibrium transport through E5-7 is shown in Fig. 4 with a single $`G`$ vs. $`V_b`$ trace at a fixed $`V_g`$ shown at the top of the figure. White diamond-shaped Coulomb blockade regions are clearly seen on the gray-scale plot. Peaks in $`G`$ at positive bias are due to asymmetry in the tunneling barriers: at negative biases tunneling to the dot is slower than tunneling off the dot and only one extra electron occupies the dot at any given time, thus only one peak, corresponding to the onset of the current, is observed (we have not seen any features due to size quantization, which is not surprising if we take into account the large number of electrons in this dot). At positive biases current is limited by the time the electron spends in the dot before it tunnels out. In this regime an extra step in the I-V characteristic (and a corresponding peak in its derivative $`G`$) is observed every time one more electron can tunnel into the dot. These peaks, marked with arrows, are separated by the charging energy $`U_c=e\mathrm{\Delta }V_b=8`$ meV. Electrostatic parameters of the D<sub>2</sub> dot can be readily extracted from Fig. 4. 
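The qualitative difference between the parallel and series regimes can be illustrated with a toy calculation. Only the functional forms $`G_{parallel}\approx G_1+G_2`$ and $`G_{series}\approx G_{BW}G_2/(G_{BW}+G_2)`$ are taken from the text; all numbers below (window, $`\mathrm{\Gamma }`$, the cosine stand-in for the CB modulation) are illustrative, not fitted values:

```python
import numpy as np

# Breit-Wigner resonance through D1 (lever arm alpha = 10 mV/meV) combined
# with a crude cosine stand-in for the CB conductance G2 through D2.
Vg = np.linspace(-0.1, 0.1, 2001)          # gate voltage [V], resonance at 0
Gamma = 1.0                                 # level broadening [meV]
deltaE = Vg * 1e3 / 10.0                    # detuning [meV]

G_BW = Gamma**2 / (Gamma**2 + deltaE**2)    # Breit-Wigner, units of 2e^2/h
G2 = 0.05 * (1.0 + np.cos(2 * np.pi * Vg / 11.8e-3))  # fast CB modulation

G_series = G_BW * G2 / (G_BW + G2)          # L-D1-D2-R in series (E5-7-like)
G_parallel = G_BW + G2                      # independent channels (H5A-like)
```

In the series trace the amplitude of the fast oscillations grows with $`G_{BW}`$ (it is largest on the D<sub>1</sub> resonance), while in the parallel trace it stays constant — exactly the distinction drawn between E5-7 and H5A.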
The source, drain and gate capacitances are 8.5, 2.7 and 6.4 aF and the corresponding charging energy is $`9`$ meV. A charging energy of $`11`$ meV is obtained by analyzing the Fermi-Dirac broadening of the conductance peaks as a function of temperature and the period of the oscillations. The fact that it requires the application of $`V_b=10`$ mV to lift the Coulomb blockade means that in the Coulomb blockade regime all the bias is applied across the second dot, consistent with large conductance through D<sub>1</sub>. Where does the second dot reside? One possibility is that the silicon bridge, containing the lithographically defined dot, breaks up at low temperatures as a result of depletion due to variations of the bridge thickness and fluctuations in the thickness of the gate oxide, or due to the field induced by ionized impurities. However, in this case $`C_{g2}`$ should be less than $`C_{g1}`$. In fact, if we assume that the thickness of the thermally grown oxide is uniform, the gate capacitance of the largest possible dot in the channel cannot be larger than 1.5 aF. Also, if at low temperatures the main dot split into two or more dots, we should see a change in the period of the large oscillations, inconsistent with our observations. Another possibility is that the dot is formed in the contact region adjacent to the bridge. Given that the oxide thickness is 40 nm, the second dot diameter should be $`\sim 100`$ nm. We measured two devices which have 30 nm wide and 500 nm long channels, fabricated using the same technique as the dot devices. Both samples show regular MOSFET characteristics down to 50 mK. Thus, it is unlikely that a dot is formed in the wide contact regions of the device. Even if such a dot was formed occasionally in some device by, for example, randomly distributed impurities, it is unlikely that dots of approximately the same size would be formed in all samples. 
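As a consistency check, the charging energy of D<sub>2</sub> follows from the diamond-extracted capacitances via $`U_c=e^2/(C_s+C_d+C_g)`$:

```python
# Charging energy of D2 from the capacitances extracted from the Coulomb
# diamonds in Fig. 4.
e = 1.602e-19                                # elementary charge [C]
C_s, C_d, C_g = 8.5e-18, 2.7e-18, 6.4e-18    # source, drain, gate [F]
U_c_meV = e / (C_s + C_d + C_g) * 1e3        # e^2/C expressed in meV
print(f"U_c = {U_c_meV:.1f} meV")            # ~9 meV
```

The result, about 9 meV, is bracketed by the two independent estimates quoted in the text: 8 meV from the bias-peak spacing and 11 meV from the thermal broadening of the peaks.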
Another argument against such a scenario is that if the second dot is formed inside one of the contact regions, it cannot be coupled to the other contact to provide a parallel conduction channel, as in sample H5A. Thus, the second dot should reside within the gate oxide, which surrounds the lithographically defined dot. Some traps can create a confining potential in both the conduction and valence bands; for example, the P<sub>b</sub> center has energy levels at $`E_c-0.3`$ eV and $`E_v+0.3`$ eV. Several samples show a hysteresis during large gate voltage scans accompanied by sudden switching. This behavior can be attributed to the charging-discharging of traps in the oxide. If such a trap happens to be within tunneling distance from both the lithographically defined dot and a contact, or the trap is extended from one contact to the other, it may appear as a second dot in the conductance. To summarize our results, we performed an extensive study of a large number of Si quantum dots. We found that all devices show multi-dot transport characteristics at low temperatures. From the data analysis we arrived at the conclusion that at least the double-dot behavior is caused not by the depletion of the silicon channel but by additional transport through traps within the oxide. We acknowledge support from ARO, ONR and DARPA.
no-problem/0002/hep-ph0002124.html
ar5iv
text
# Neutrino decay and long base-line oscillation experiments ## I Introduction The solar neutrino problem and the atmospheric anomaly belong to the category of long base-line phenomena using natural neutrino sources. Along with the short base-line oscillation experiments, the long base-line projects with man-made neutrino sources are very promising. Among them, the recent Chooz project using reactor neutrinos was completed with a negative result . Most others are planned for installation at high energy accelerators. The K2K project has recently taken its first run and has shown a preliminary first cc-event . MINOS at Fermilab-Soudan and ICARUS at CERN-Gran Sasso are in a preparatory stage. The traditional interpretation of the origin of the neutrino anomalies is the oscillation mechanism. Another approach is the possibility of decay of conventional massive neutrinos, in which some mixed components may decay into a singlet majoron and another neutrino, as summarised in Ref. . This decay theory is restricted by very low upper limits on neutrino masses. As an alternative, we suggested in Ref. a three-body decay mechanism to explain the existing oscillation hints, which considers neutrinos as time-like leptons and where the heavier neutrinos may decay according to the dual principle in the Super-luminous Lorentz Transformation (SLT). ## II Tachyon and neutrino decay The idea is based on the symmetry between space-like and time-like bradyons, following E. Recami et al. , according to which we may suggest that in the Super-luminous Lorentz transformation the space and the time dimensions should replace each other. As a result, tachyons may travel in a three-dimensional time while moving in a unique space direction. A very severe problem is that no tachyon was ever observed in an experiment. 
On the other hand, neutrinos are an exception with the following appearances of tachyon properties: \- There is a very high symmetry between neutrinos and space-like leptons which is enhanced in the electro-weak unification; \- Each neutrino has its unique space direction, left or right, described by a definite helicity; \- Neutrinos never stop in a space position, just as a space-like particle can never stop in a moment of time-evolution. A big challenge is the fact that all neutrinos seem to have very small mass, which disturbs the lepton-neutrino symmetry. To solve this problem we assumed in Ref. that neutrinos are realistic tachyons; however, due to the weak interaction, their transcendent masses $`m`$, being complex, have to be strongly suppressed. We suggested that the real part of the mass is roughly equal to $`\mathrm{\Gamma }/2`$, where $`\mathrm{\Gamma }=2\rho ^2m_0=192^{-1}\pi ^3G_F^2m_0^5`$ is the decay width of the unstable lepton and $`m_0`$ is its rest mass. The imaginary part of the tachyon mass is suppressed by the factor $`\rho `$, i.e. to the first order by the Fermi constant $`G_F`$. This means that the imaginary part may be measured in parity non-conserving experiments by interference of the weak interaction and the electro-magnetic or nuclear force. Generally, we may vary the rest mass $`m_0`$ as a free parameter to fit the experiments. In this work, however, we prefer to test the first order approximation by making the extreme assumption of time-space symmetry, i.e. that the absolute value of the transcendent mass of a neutrino is equal to the rest mass of its bradyon partner. For unification of the formalism we extend a similar formula for $`\mathrm{\Gamma }`$ of unstable leptons to the electron, which does not harm the conclusion that electron neutrinos as time-like electrons should be stable. As a result, we obtain a formalism almost without free parameters in our calculation. 
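The quoted width formula can be checked numerically against the known muon lifetime. The sketch below applies $`\mathrm{\Gamma }=G_F^2m_0^5/(192\pi ^3)`$ to the muon (constants are standard values; the small residual difference from the measured $`\tau _\mu =2.197\times 10^{-6}`$ s comes from electron-mass and radiative corrections not included in the leading-order formula):

```python
import math

G_F = 1.1664e-5       # Fermi constant [GeV^-2]
hbar = 6.582e-25      # reduced Planck constant [GeV s]
m0 = 0.10566          # muon rest mass [GeV]

Gamma = G_F**2 * m0**5 / (192 * math.pi**3)   # decay width [GeV]
tau = hbar / Gamma                             # lifetime [s]
print(f"Gamma = {Gamma:.3e} GeV, tau = {tau:.3e} s")
```

This gives $`\mathrm{\Gamma }\approx 3.0\times 10^{-19}`$ GeV and $`\tau \approx 2.19\times 10^{-6}`$ s, confirming the normalisation used in the decay-width formula above.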
In Table I we show the observable transcendent masses of space-like neutrino-tachyons or time-like leptons: In principle, the real part of the neutrino masses may cause oscillation effects; however, they are too small to be seen in current experiments. The negligible transcendent masses lead to the experimental fact that neutrinos are always identified as luxons, i.e. particles moving with the speed of light. Now in accordance with the dual principle in SLT, we may consider the muon neutrino as a time-like muon, which is able to decay similarly to the decay of a conventional space-like muon. The decay scheme is: $$\nu _\mu \rightarrow \nu _e+\mu +e;$$ (1) At variance with the conventional muon decay, due to energy conservation, the process (1) sets in only above a threshold $`E_\nu =106.1`$ MeV, similarly to pair production by a gamma ray. It means that muon neutrinos do not decay at rest (DAR) and we are able to observe only the neutrino decay in flight (DIF). In general, we cannot guarantee the conservation of lepton charge in the decay (1) as neutrinos and leptons may be Majorana ones. In the three-body decay of a muon neutrino, we may suggest that the muon as well as the positron leave each other from a very short distance ($`10^{-16}`$ cm), where the weak interaction dominates the Coulomb attractive force, and that they may exist in electric charge (or lepton charge) mixing states as Majorana particles: $`|\varphi _\mu ^0(\text{Left})\rangle =\mathrm{cos}\theta _\mu |\mu ^+\rangle +\mathrm{sin}\theta _\mu |\mu ^{-}\rangle `$ $`|\varphi _e^0(\text{Right})\rangle =\mathrm{sin}\theta _e|e^+\rangle +\mathrm{cos}\theta _e|e^{-}\rangle `$ Here we consider only the separate mixing of lepton (muon or electron) charges, but not flavour mixing as in oscillation theories. In the maximum mixing ($`\theta _{\mu ,e}=\pi /4`$) Majorana leptons are fermions with a defined helicity but electrically neutral; they may then travel a long way without significant electro-magnetic interaction. 
At variance with neutrinos, Majorana leptons are space-like particles with a rest mass suggested to be almost equal to that of the corresponding Dirac leptons. They may regenerate the Dirac component after a definite period because of the lepton (or baryon) asymmetry in our space-like frame. They may also be depolarised while traversing a massive medium. During the process of regeneration or depolarisation the lepton charge changes, separately, for the Majorana electron and muon. However, we suggest that the total lepton charge $`L=L_\mu +L_e`$ is conserved, which keeps the total electric charge unchanged. At the final moment when the particles are completely depolarised we may get the normal Dirac components as: $`|\varphi _\mu \rangle ={\displaystyle \frac{1}{\sqrt{2}}}(|\varphi _\mu (\text{Left})\rangle +|\varphi _\mu (\text{Right})\rangle )=|\mu ^{-}\rangle ;`$ $`|\varphi _e\rangle ={\displaystyle \frac{1}{\sqrt{2}}}(|\varphi _e(\text{Left})\rangle -|\varphi _e(\text{Right})\rangle )=|e^+\rangle .`$ When the depolarisation time is less than the muon lifetime we may observe not only a positron but also a muon as neutrino decay products. However, if the regeneration period or depolarisation lasts longer than the muon lifetime, we may observe only a positron and the products of the Dirac muon decay after regeneration or depolarisation. Another possibility is that there is neither regeneration nor depolarisation because of the conservation of lepton charges, and the Majorana leptons remain almost sterile, as dark matter, and we can never see them. Along with the three-body decay there is a possibility of two-body decay as an alternative mechanism. One can consider a well-coupled singlet quasi-muonium consisting of a muon-electron pair under attractive Coulomb interaction at a short distance ($`10^{-16}`$ cm), which plays a role similar to that of the majoron in the two-body decay described in Ref. ; however, it does not need flavour mixing. 
The quasi-muonium, being electrically neutral with a rest mass less than (or close to) the total rest mass of the constituent particles, has to decay into a muon-electron pair after some period, say, equal to the muon lifetime. The existence of such a quasi-muonium, however, is unlikely, as it would have been observed in different experiments at accelerators or in atmospheric cosmic rays. Obviously, the two- and three-body decay mechanisms induce different energy spectra of decay products, providing a means to identify the actual decay mode. In the next section we show that the three-body decay in the first order approximation (without any free parameter) may give a satisfactory interpretation of all short base-line oscillation experiments at nuclear reactors as well as at accelerators. As a next step we predict some consequences of this decay mode for long base-line oscillation experiments. ## III Interpretation of short base-line oscillation experiments We summarise here our results in Ref. as a demonstration of the suggested mechanism to interpret short base-line oscillation experiments carried out at accelerators (including LSND, KARMEN and NOMAD), as well as at nuclear reactors (including Bugey and the first short base-line Chooz). The positive effects of LSND have been interpreted as oscillations of the muon anti-neutrino produced in muon decays at rest (DAR), and oscillations of the muon neutrino produced in muon decays in flight (DIF). In both cases the muons are decay products of the pions produced at the target A6. Instead of this oscillation interpretation, in our Ref. the LSND effects were explained by the decay in flight of muon neutrinos mainly from the “minor” targets A1 and A2 but not by the decay at rest of those from the major target A6. The last one (A6) contributed only about 20% of the total effects of decay in flight. In Fig. 
1 are shown our calculated spectra of cc-positrons and cc-electrons produced in interactions of electron anti-neutrinos and neutrinos, as products of muon neutrino decay, with hydrogen and carbon nuclei in the fiducial volume. The calculated spectra are integrated over the range from $`36`$ to $`60`$ MeV for cc-positrons and from $`60`$ to $`200`$ MeV for cc-electrons, which give $`9.9`$ positrons and $`37.5`$ electrons. Comparing with the LSND data, which give $`22.0\pm 8.5\pm 8.0`$ cc-positrons and $`18.1\pm 6.6\pm 4.0`$ cc-electrons, our results are thus in acceptable agreement with them. In KARMEN, a sister project of LSND, only decays at rest (DAR) of pions and muons were taken into account; therefore all muon neutrinos had an energy below the threshold ($`106.1`$ MeV) and could not decay. We have calculated the neutrino decay at high energy in the NOMAD experiment following the same procedure as for LSND. The fiducial volume containing about $`1.5\times 10^{30}`$ nucleons and the initial muon neutrino spectrum were taken from Ref. . The $`\nu `$N cross-section is proportional to energy as $`0.78\times 10^{-38}`$ $`E_\nu `$ (GeV) cm<sup>2</sup>. In Fig. 2 we show the total spectrum of cc-electrons, where the dash-line is a Monte-Carlo simulation without oscillation; the calculated contribution of the neutrino decay (box-line) consists only of $`10.0`$ events in the range of $`1-20`$ GeV, namely $`5`$ times less than expected from oscillation (star-line). We conclude that our decay calculation is in good agreement with the data of NOMAD. However, the poor statistics and low energy resolution in the lowest range of $`1-20`$ GeV do not allow one to separate the effect of neutrino decay in flight (DIF) from the background. Concerning electron (anti-) neutrinos, they are suggested to be stable as time-like electrons (positrons), and certainly cannot decay. 
As a result, oscillation experiments at nuclear reactors producing only electron anti-neutrinos could not see any effect. The recent reactor long base-line project at Chooz is no exception . For the solar anomaly concerning electron neutrinos, we have proposed in Ref. a qualitative interpretation by a total depolarisation of the pseudo-spin of time-like electrons passing through a thickness of dark plasma matter. It could give in the first order approximation a roughly 50% deficit of the expected solar neutrino flux, which is in agreement with the averaged experimental data $`R=0.46\pm 0.06`$. ## IV Estimation of muon neutrino decay at accelerator long base-line projects In the present section we deal with the three-body decay of muon neutrinos at long base-line experiments, compared to oscillation predictions. For this purpose we consider the projects: i/ K2K at KEK-Super-K, currently running ; ii/ MINOS at Fermilab-Soudan, already approved and in preparation , and iii/ ICARUS at CERN-Gran Sasso, waiting for approval . The decay in flight of muon neutrinos is calculated using the same parameters as for muons (decay width $`\mathrm{\Gamma }`$ or life-time $`\tau _\mu `$, rest mass $`m_0`$) and the equation for the decay probability reads: $$P_d=1-\mathrm{exp}\{-m_0(GeV)/(\tau _\mu c)L(km)/E_\nu (GeV)\};$$ (2) The two-neutrino oscillation probability is calculated using the well-known formula: $$P_{os}=\mathrm{sin}^22\theta \mathrm{sin}^2\{1.27\mathrm{\Delta }m^2(eV^2)L(km)/E_\nu (GeV)\};$$ (3) Here we use the most favourable parameters of LSND for the ($`\nu _\mu \rightarrow \nu _e`$) oscillation ($`sin^22\theta `$, $`\mathrm{\Delta }m^2`$) $`=`$ ($`6\times 10^{-3}`$, $`19eV^2`$) and also those of the Kamiokande atmospheric anomaly for the ($`\nu _\mu \rightarrow \nu _\tau `$) one: ($`sin^22\theta `$, $`\mathrm{\Delta }m^2`$) $`=`$ ($`0.95`$, $`5.9\times 10^{-3}eV^2`$). 
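Both formulas are straightforward to evaluate. The sketch below implements them; note the minus sign in the exponent of Eq. (2), required so that $`P_d\rightarrow 1`$ for large $`L/E_\nu `$, and the muon constants $`m_0=0.1057`$ GeV, $`c\tau _\mu =0.659`$ km are standard values:

```python
import math

M0 = 0.1057        # muon rest mass [GeV]
C_TAU = 0.659      # c * tau_mu [km]

def P_decay(L_km, E_GeV):
    """Decay-in-flight probability of a muon neutrino, Eq. (2)."""
    return 1.0 - math.exp(-M0 * L_km / (C_TAU * E_GeV))

def P_osc(L_km, E_GeV, sin2_2theta, dm2_eV2):
    """Two-flavour oscillation probability, Eq. (3)."""
    return sin2_2theta * math.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

# Consistency check with Section V: at L/E = 1.4 km/GeV the decay
# probability should be about 20%.
print(f"P_d at L/E = 1.4: {P_decay(1.4, 1.0):.3f}")   # ~0.20
```

The same function also reproduces the qualitative statement below that at far detectors ($`L/E_\nu `$ of order $`10^2`$ km/GeV) essentially all muon neutrinos decay before arrival.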
Table II reviews the estimation of the decay and oscillation probabilities of muon neutrinos for the long base-line experiments at a mean energy averaged over the whole spectrum of each neutrino source. In this table we give not only the long base-line distances but also the base-lines of the companion short-distance detectors. Included are: i/ the KEK front detector; ii/ COSMOS at Fermilab, and iii/ JURA at CERN. We see from Table II that the LSND parameters produce very small effects everywhere, while the Kamiokande parameters give significant oscillation in the long base-line detectors, but negligible effects in the companion short-distance detectors. The neutrino decay predicts effects so large at far distances that all long base-line detectors should hardly observe muon-like events. At Super-K and at MINOS almost all muon neutrinos have to decay before reaching the detectors. Only about 2% of the muon neutrinos survive at ICARUS. In Fig. 3 we illustrate the different behaviours of the decay (Fig. 3a) and oscillation probabilities as functions of muon neutrino energy in the K2K long base-line project. For the oscillation we have again taken parameters from both experiments, LSND (Fig. 3b) and Kamiokande (Fig. 3c). In the following, we take into account only the Kamiokande parameters for the oscillation. Instead, we may see muon-like events at the short-distance detectors, where the amount of electron neutrinos from neutrino decay may be significant; particularly at JURA, this proportion amounts to 9% of the total neutrino beam. At variance with oscillations, electron neutrinos from neutrino decays have on average only one third of the initial muon neutrino energy, as shown by the calculated electron neutrino spectrum (box-dash line) in Fig. 4 for K2K. For comparison, a spectrum (diamond-dot line) of tauon (or electron) neutrinos from the $`\nu _\mu \rightarrow \nu _\tau (\nu _e)`$ oscillation is shown, which repeats the shape of the initial muon neutrino spectrum (histogram). 
In Fig. 5 we show the corresponding calculated spectra of electron-like events from the neutrino decays (histogram) and from the oscillations (box-dash line). In the calculation we use the $`\nu _e`$N cross-section as in for K2K, and a simplified cross-section at higher energy for MINOS as $`C\times 10^{-38}`$ $`E_\nu `$ (GeV) cm<sup>2</sup>, where $`C=0.78`$ for electron-like events and $`C=0.62`$ for muon-like ones. We see that the spectra of electron-like events are very different from each other for the neutrino decay and for the oscillation. In particular, the decay spectrum is significantly softened, in contrast to the oscillation spectrum. In Table III we summarise the total rates of cc-events integrated over the calculated spectra of cc-events at K2K and MINOS. For $`10^{20}`$ protons on target (p.o.t.) at K2K, if the muon neutrino decay works, only $`187`$ cc-events will be collected, and if the particle identification (PID) works well, only electron-like events, but no muon-like events, will be seen. Here we cannot exclude the possibility that the electron neutrinos are Majorana. In this case only the Dirac component may be detected, as in the solar neutrino flux, and, as a result, the total amount of electron-like events may be decreased by up to 50% (for the maximum mixing ). For the oscillation mechanism there are different possibilities: i/ when the ($`\nu _\mu \rightarrow \nu _e`$) version works, we may have more electron-like events $`(236)`$ from the oscillation and an amount of muon-like ones $`(107)`$ from the survivors of the initial muon neutrino spectrum; ii/ when the ($`\nu _\mu \rightarrow \nu _\tau `$) version works, we may see the same amount of muon-like events from the surviving muon neutrinos; however, the rate of cc-events from the tauon neutrinos decreases significantly, due to the high threshold of the $`\nu _\tau `$N reaction and the low ratios of tauon decay into muon or electron, which are 17.9% and 17.4%, respectively . 
The first run of K2K at Super-K has seen the first cc-event. We have to wait for the next run in year 2000, for sufficient statistics, before drawing any definite conclusion. According to Table III, MINOS will give more statistics to identify the origin of the oscillation hints. ## V Conclusion In the Super-K long base-line detector we expect to distinguish muon-like events from electron-like ones as a criterion to test the three-body decay of muon neutrinos. The shape of the spectra and the quantity of cc-events are also sensitive to the origin of the oscillation hints. If the decay mechanism works, we suggest that the next long base-line experiments make intensive use of shorter base-line detectors. The optimal condition for observation is to have a significant amount of neutrino decays, in order to see electron-like events, without degrading the intense flux so much that the proportion of muon-like events is lost; a decay rate larger than, say, 20% is desirable. This leads to a ratio $`L(km)/E_\nu (GeV)=1.4`$ and base-lines $`L`$(km) equal to 2.1, 15.3 and 41.8 km, respectively, for the KEK front detector, COSMOS and JURA. The experiments with modified shorter base-line detectors might collect significant statistics in a short period. For example, a 0.5 kton fiducial volume of the KEK front detector at a distance of 2.1 km may see $`10^4`$ muon-like and about (4 - 8)$`\times 10^3`$ electron-like events for $`10^{20}`$ p.o.t. ## Acknowledgement This work was funded by the Basic Research Program of the Ministry of Science, Technology and Environment of Vietnam. We are grateful to Prof. P. Darriulat (CERN) for very useful discussions. We thank Dr. Y. Yano (RIKEN), Prof. A. Masaike (Kyoto) and Dr. Y. Suzuki (Super-K) for supporting the first author to attend the present workshop.
# A proof of the weak (1,1) inequality for singular integrals with non doubling measures based on a Calderón-Zygmund decomposition ## 1. Introduction Let $`\mu `$ be a positive Radon measure on $`^d`$ satisfying the growth condition (1.1) $$\mu (B(x,r))C_0r^n\text{for all }x^d,r>0,$$ where $`n`$ is some fixed number with $`0<nd`$. We do not assume that $`\mu `$ is doubling \[$`\mu `$ is said to be doubling if there exists some constant $`C`$ such that $`\mu (B(x,2r))C\mu (B(x,r))`$ for all $`x\mathrm{supp}(\mu )`$, $`r>0`$\]. Let us remark that the doubling condition on the underlying measure $`\mu `$ on $`^d`$ is an essential assumption in most results of classical Calderón-Zygmund theory. However, recently it has been shown that a large part of the classical theory remains valid if the doubling assumption on $`\mu `$ is replaced by the size condition (1.1) (see for example the references cited at the end of the paper). In this note we will prove that Calderón-Zygmund operators (CZO’s) which are bounded in $`L^2(\mu )`$ are also of weak type $`(1,1)`$, as in the usual doubling situation. This result has already been proved in \[To1\] in the particular case of the Cauchy integral operator, and by Nazarov, Treil and Volberg \[NTV2\] in the general case. The proof that we present here is different from the one of \[NTV2\] (and also from the one of \[To1\], of course) and is closer in spirit to the classical proof of the corresponding result for doubling measures. The basic tool for the proof is a decomposition of Calderón-Zygmund type for functions in $`L^1(\mu )`$ obtained in \[To4\]. Our purpose in writing this paper is not only to obtain another proof, in the non doubling situation, of the basic result that CZO’s bounded in $`L^2(\mu )`$ are of weak type $`(1,1)`$, but to show that the Calderón-Zygmund decomposition of \[To4\] is a good substitute for its classical doubling version. Let us introduce some notation and definitions. 
A kernel $`k(,)`$ from $`L_{loc}^1((^d\times ^d)\{(x,y):x=y\})`$ is called a Calderón-Zygmund kernel if 1. $`|k(x,y)|{\displaystyle \frac{C}{|xy|^n}},`$ 2. there exists $`0<\delta 1`$ such that $$|k(x,y)k(x^{},y)|+|k(y,x)k(y,x^{})|C\frac{|xx^{}|^\delta }{|xy|^{n+\delta }}$$ if $`|xx^{}||xy|/2`$. Throughout the paper we will assume that $`\mu `$ is a Radon measure on $`^d`$ satisfying (1.1). The CZO associated to the kernel $`k(,)`$ and the measure $`\mu `$ is defined (at least, formally) as $$Tf(x)=k(x,y)f(y)𝑑\mu (y).$$ The above integral may not be convergent for many functions $`f`$ because $`k(x,y)`$ may have a singularity for $`x=y`$. For this reason, one introduces the truncated operators $`T_\epsilon `$, $`\epsilon >0`$: $$T_\epsilon f(x)=_{|xy|>\epsilon }k(x,y)f(y)𝑑\mu (y),$$ and then one says that $`T`$ is bounded in $`L^p(\mu )`$ if the operators $`T_\epsilon `$ are bounded in $`L^p(\mu )`$ uniformly on $`\epsilon >0`$. It is said that $`T`$ is bounded from $`L^1(\mu )`$ into $`L^{1,\mathrm{}}(\mu )`$ (or of weak type $`(1,1)`$) if $$\mu \{x:|T_\epsilon f(x)|>\lambda \}C\frac{f_{L^1(\mu )}}{\lambda }$$ for all $`fL^1(\mu )`$, uniformly on $`\epsilon >0`$. Also, $`T`$ is bounded from $`M()`$ (the space of complex Radon measures) into $`L^{1,\mathrm{}}(\mu )`$ if $$\mu \{x:|T_\epsilon \nu (x)|>\lambda \}C\frac{\nu }{\lambda }$$ for all $`\nu M()`$, uniformly on $`\epsilon >0`$. In the last inequality, $`T_\epsilon \nu (x)`$ stands for $`_{|xy|>\epsilon }k(x,y)𝑑\nu (y)`$ and $`\nu =|\nu |(^d)`$ is the total variation of $`\nu `$. The result that we will prove in this note is the following. ###### Theorem 1.1. Let $`\mu `$ be a Radon measure on $`^d`$ satisfying the growth condition (1.1). If $`T`$ is a Calderón-Zygmund operator which is bounded in $`L^2(\mu )`$, then it is also bounded from $`M()`$ into $`L^{1,\mathrm{}}(\mu )`$. In particular, it is of weak type $`(1,1)`$. ## 2. The proof First we will introduce some additional notation and terminology. 
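The truncated operators $`T_\epsilon `$ defined above are easy to experiment with numerically. The following sketch, which is purely illustrative and not part of the proof, evaluates $`T_\epsilon f`$ for a finite atomic measure on the line with the Cauchy-type kernel k(x, y) = 1/(x - y) (a Calderón-Zygmund kernel with n = 1); the truncation simply removes the diagonal singularity.

```python
def truncated_T(kernel, xs, mu, f, eps):
    # T_eps f(x_i) = sum over |x_i - x_j| > eps of k(x_i, x_j) f(x_j) mu_j,
    # for the atomic measure sum_j mu_j * delta_{x_j}.
    out = []
    for x in xs:
        s = 0.0
        for y, w, fy in zip(xs, mu, f):
            if abs(x - y) > eps:
                s += kernel(x, y) * fy * w
        out.append(s)
    return out

cauchy = lambda x, y: 1.0 / (x - y)  # Cauchy-type kernel, n = 1

# Two unit point masses at 0 and 1: the truncated integral is antisymmetric.
vals = truncated_T(cauchy, [0.0, 1.0], [1.0, 1.0], [1.0, 1.0], 0.5)
```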
As usual, the letter $`C`$ will denote a constant which may change its value from one occurrence to another. Constants with subscripts, such as $`C_0`$, do not change in different occurrences. By a cube $`QR^d`$ we mean a closed cube with sides parallel to the axes. We denote its side length by $`\mathrm{}(Q)`$ and its center by $`x_Q`$. Given $`\alpha >1`$ and $`\beta >\alpha ^n`$, we say that $`Q`$ is $`(\alpha ,\beta )`$-doubling if $`\mu (\alpha Q)\beta \mu (Q)`$, where $`\alpha Q`$ is the cube concentric with $`Q`$ with side length $`\alpha \mathrm{}(Q)`$. For definiteness, if $`\alpha `$ and $`\beta `$ are not specified, by a doubling cube we mean a $`(2,2^{d+1})`$-doubling cube. Before proving Theorem 1.1 we state some remarks about the existence of doubling cubes. ###### Remark 2.1. Because $`\mu `$ satisfies the growth condition (1.1), there are a lot of “big” doubling cubes. To be precise, given any point $`x\mathrm{supp}(\mu )`$ and $`c>0`$, there exists some $`(\alpha ,\beta )`$-doubling cube $`Q`$ centered at $`x`$ with $`l(Q)c`$. This follows easily from (1.1) and the fact that $`\beta >\alpha ^n`$. Indeed, if there are no doubling cubes centered at $`x`$ with $`l(Q)c`$, then $`\mu (\alpha ^nQ)>\beta ^n\mu (Q)`$ for each $`n`$, and letting $`n\mathrm{}`$ one sees that (1.1) cannot hold. ###### Remark 2.2. There are a lot of “small” doubling cubes too: if $`\beta >\alpha ^d`$, then for $`\mu `$-a.e. $`x^d`$ there exists a sequence of $`(\alpha ,\beta )`$-doubling cubes $`\{Q_k\}_k`$ centered at $`x`$ with $`\mathrm{}(Q_k)0`$ as $`k\mathrm{}`$. This is a property that any Radon measure on $`^d`$ satisfies (the growth condition (1.1) is not necessary in this argument). The proof is an easy exercise in geometric measure theory that is left for the reader. 
Observe that, by the Lebesgue differentiation theorem, for $`\mu `$-almost all $`x^d`$ one can find a sequence of $`(2,2^{d+1})`$-doubling cubes $`\{Q_k\}_k`$ centered at $`x`$ with $`\mathrm{}(Q_k)0`$ such that $$\underset{k\mathrm{}}{lim}\frac{1}{\mu (Q_k)}_{Q_k}f𝑑\mu =f(x).$$ As a consequence, for any fixed $`\lambda >0`$, for $`\mu `$-almost all $`x^d`$ such that $`|f(x)|>\lambda `$, there exists a sequence of cubes $`\{Q_k\}_k`$ centered at $`x`$ with $`\mathrm{}(Q_k)0`$ such that $$\underset{k\mathrm{}}{lim\; sup}\frac{1}{\mu (2Q_k)}_{Q_k}|f|𝑑\mu >\frac{\lambda }{2^{d+1}}.$$ In the following lemma we will prove an easy but essential estimate which will be used below. This result has already appeared in previous works (\[DM\], \[NTV2\]) and it plays a basic role in \[To2\] and \[To4\] too. ###### Lemma 2.3. If $`QR`$ are concentric cubes such that there are no $`(\alpha ,\beta )`$-doubling cubes (with $`\beta >\alpha ^n`$) of the form $`\alpha ^kQ`$, $`k0`$, with $`Q\alpha ^kQR`$, then, $$_{RQ}\frac{1}{|xx_Q|^n}𝑑\mu (x)C_1,$$ where $`C_1`$ depends only on $`\alpha ,\beta `$, $`n`$, $`d`$ and $`C_0`$. ###### Proof. Let $`N`$ be the least integer such that $`R\alpha ^NQ`$. For $`0kN`$ we have $`\mu (\alpha ^kQ)\mu (\alpha ^NQ)/\beta ^{Nk}`$. 
Then, $`{\displaystyle _{RQ}}{\displaystyle \frac{1}{|xx_Q|^n}}𝑑\mu (x)`$ $``$ $`{\displaystyle \underset{k=1}{\overset{N}{}}}{\displaystyle _{\alpha ^kQ\alpha ^{k1}Q}}{\displaystyle \frac{1}{|xx_Q|^n}}𝑑\mu (x)`$ $``$ $`C{\displaystyle \underset{k=1}{\overset{N}{}}}{\displaystyle \frac{\mu (\alpha ^kQ)}{\mathrm{}(\alpha ^kQ)^n}}`$ $``$ $`C{\displaystyle \underset{k=1}{\overset{N}{}}}{\displaystyle \frac{\beta ^{kN}\mu (\alpha ^NQ)}{\alpha ^{(kN)n}\mathrm{}(\alpha ^NQ)^n}}`$ $``$ $`C{\displaystyle \frac{\mu (\alpha ^NQ)}{\mathrm{}(\alpha ^NQ)^n}}{\displaystyle \underset{j=0}{\overset{\mathrm{}}{}}}\left({\displaystyle \frac{\alpha ^n}{\beta }}\right)^jC.`$ The Calderón-Zygmund decomposition mentioned above has been obtained in Lemma 7.3 of \[To4\] and in that paper it has been used to show that if a linear operator is bounded from a suitable space of type $`H^1`$ into $`L^1(\mu )`$ and from $`L^{\mathrm{}}(\mu )`$ into a space of type $`BMO`$, then it is bounded in $`L^p(\mu )`$. We will use a slight variant of this decomposition to prove Theorem 1.1. Let us state the result that we need in detail. ###### Lemma 2.4 (Calderón-Zygmund decomposition). Assume that $`\mu `$ satisfies (1.1). For any $`fL^1(\mu )`$ and any $`\lambda >0`$ (with $`\lambda >2^{d+1}f_{L^1(\mu )}/\mu `$ if $`\mu <\mathrm{}`$) we have: * There exists a finite family of almost disjoint cubes $`\{Q_i\}_i`$ such that (2.1) $$\frac{1}{\mu (2Q_i)}_{Q_i}|f|𝑑\mu >\frac{\lambda }{2^{d+1}},$$ (2.2) $$\frac{1}{\mu (2\eta Q_i)}_{\eta Q_i}|f|𝑑\mu \frac{\lambda }{2^{d+1}}\text{for }\eta >2\text{,}$$ (2.3) $$|f|\lambda \text{a.e. (}\mu \text{) on }^d_iQ_i.$$ * For each $`i`$, let $`R_i`$ be a $`(6,6^{n+1})`$-doubling cube concentric with $`Q_i`$, with $`l(R_i)>4l(Q_i)`$ and denote $`w_i=\frac{\chi _{Q_i}}{_k\chi _{Q_k}}`$. 
Then, there exists a family of functions $`\phi _i`$ with $`\mathrm{supp}(\phi _i)R_i`$ and with constant sign satisfying (2.4) $$\phi _i𝑑\mu =_{Q_i}fw_i𝑑\mu ,$$ (2.5) $$\underset{i}{}|\phi _i|B\lambda $$ (where $`B`$ is some constant), and (2.6) $$\phi _i_{L^{\mathrm{}}(\mu )}\mu (R_i)C_{Q_i}|f|𝑑\mu .$$ Let us remark that other related decompositions with non doubling measures have been obtained in \[NTV2\] and \[MMNO\]. However, these results are not suitable for our purposes. Although the proof of the lemma can be found in \[To4\], for the reader’s convenience we have included it in the last section of the present paper. ###### Proof of Theorem 1.1. We will show that $`T`$ is of weak type $`(1,1)`$. By similar arguments, one gets that $`T`$ is bounded from $`M()`$ into $`L^{1,\mathrm{}}(\mu )`$. In this case, one has to use a version of the Calderón-Zygmund decomposition in the lemma above suitable for complex measures. For simplicity we assume $`\mu =\mathrm{}`$. Let $`fL^1(\mu )`$ and $`\lambda >0`$. Let $`\{Q_i\}_i`$ be the almost disjoint family of cubes of Lemma 2.4. Let $`R_i`$ be the smallest $`(6,6^{n+1})`$-doubling cube of the form $`6^kQ_i`$, $`k1`$. Then we can write $`f=g+b`$, with $$g=f\chi _{^d_iQ_i}+\underset{i}{}\phi _i$$ and $$b=\underset{i}{}b_i:=\underset{i}{}\left(w_if\phi _i\right),$$ where the functions $`\phi _i`$ satisfy (2.4), (2.5), (2.6) and $`w_i=\frac{\chi _{Q_i}}{_k\chi _{Q_k}}`$. By (2.1) we have $$\mu \left(\underset{i}{}2Q_i\right)\frac{C}{\lambda }\underset{i}{}_{Q_i}|f|𝑑\mu \frac{C}{\lambda }|f|𝑑\mu .$$ So we have to show that (2.7) $$\mu \{x^d\underset{i}{}2Q_i:|T_\epsilon f(x)|>\lambda \}\frac{C}{\lambda }|f|𝑑\mu .$$ Since $`b_i𝑑\mu =0`$, $`\mathrm{supp}(b_i)R_i`$ and $`b_i_{L^1(\mu )}C_{Q_i}|f|𝑑\mu `$, using some standard estimates we get $$_{^d2R_i}|T_\epsilon b_i|𝑑\mu C|b_i|𝑑\mu C_{Q_i}|f|𝑑\mu .$$ Let us see that (2.8) $$_{2R_i2Q_i}|T_\epsilon b_i|𝑑\mu C_{Q_i}|f|𝑑\mu $$ too. 
On the one hand, by (2.6) and using the $`L^2(\mu )`$ boundedness of $`T`$ and that $`R_i`$ is $`(6,6^{n+1})`$-doubling we get $`{\displaystyle _{2R_i}}|T_\epsilon \phi _i|𝑑\mu `$ $``$ $`\left({\displaystyle _{2R_i}}|T_\epsilon \phi _i|^2𝑑\mu \right)^{1/2}\mu (2R_i)^{1/2}`$ $``$ $`C\left({\displaystyle |\phi _i|^2𝑑\mu }\right)^{1/2}\mu (R_i)^{1/2}`$ $``$ $`C{\displaystyle _{Q_i}}|f|𝑑\mu .`$ On the other hand, since $`\mathrm{supp}(w_if)Q_i`$, if $`x2R_i2Q_i`$, then $`|T_\epsilon f(x)|C_{Q_i}|f|𝑑\mu /|xx_{Q_i}|^n`$, and so $$_{2R_i2Q_i}|T_\epsilon (w_if)|𝑑\mu C_{2R_i2Q_i}\frac{1}{|xx_{Q_i}|^n}𝑑\mu (x)\times _{Q_i}|f|𝑑\mu ,$$ By Lemma 2.3, the first integral on the right hand side is bounded by some constant independent of $`Q_i`$ and $`R_i`$, since there are no $`(6,6^{n+1})`$-doubling cubes of the form $`6^kQ_i`$ between $`6Q_i`$ and $`R_i`$. Therefore, (2.8) holds. Then we have $`{\displaystyle _{^d_k2Q_k}}|T_\epsilon b|𝑑\mu `$ $``$ $`{\displaystyle \underset{i}{}}{\displaystyle _{^d_k2Q_k}}|T_\epsilon b_i|𝑑\mu `$ $``$ $`C{\displaystyle \underset{i}{}}{\displaystyle _{Q_i}}|f|𝑑\mu C{\displaystyle |f|𝑑\mu }.`$ Therefore, (2.9) $$\mu \{x^d\underset{i}{}2Q_i:|T_\epsilon b(x)|>\lambda \}\frac{C}{\lambda }|f|𝑑\mu .$$ The corresponding integral for the function $`g`$ is easier to estimate. Taking into account that $`|g|C\lambda `$, we get (2.10) $$\mu \{x^d\underset{i}{}2Q_i:|T_\epsilon g(x)|>\lambda \}\frac{C}{\lambda ^2}|g|^2𝑑\mu \frac{C}{\lambda }|g|𝑑\mu .$$ Also, we have $`{\displaystyle |g|𝑑\mu }`$ $``$ $`{\displaystyle _{^d_iQ_i}}|f|𝑑\mu +{\displaystyle \underset{i}{}}{\displaystyle |\phi _i|𝑑\mu }`$ $``$ $`{\displaystyle |f|𝑑\mu }+{\displaystyle \underset{i}{}}{\displaystyle _{Q_i}}|f|𝑑\mu C{\displaystyle |f|𝑑\mu }.`$ Now, by (2.9) and (2.10) we get (2.7). ∎ ## 3. 
Proof of Lemma 2.4 (a) Taking into account Remark 2.2, for $`\mu `$-almost all $`x^d`$ such that $`|f(x)|>\lambda `$, there exists some cube $`Q_x`$ satisfying (3.1) $$\frac{1}{\mu (2Q_x)}_{Q_x}|f|𝑑\mu >\frac{\lambda }{2^{d+1}}$$ and such that if $`Q_x^{}`$ is centered at $`x`$ with $`l(Q_x^{})>2l(Q_x)`$, then $$\frac{1}{\mu (2Q_x^{})}_{Q_x^{}}|f|𝑑\mu \frac{\lambda }{2^{d+1}}.$$ Now we can apply Besicovich’s covering theorem (see Remark 3.1 below) to get an almost disjoint subfamily of cubes $`\{Q_i\}_i\{Q_x\}_x`$ satisfying (2.1), (2.2) and (2.3). (b) Assume first that the family of cubes $`\{Q_i\}_i`$ is finite. Then we may suppose that this family of cubes is ordered in such a way that the sizes of the cubes $`R_i`$ are non decreasing (i.e. $`l(R_{i+1})l(R_i)`$). The functions $`\phi _i`$ that we will construct will be of the form $`\phi _i=\alpha _i\chi _{A_i}`$, with $`\alpha _i`$ and $`A_iR_i`$. We set $`A_1=R_1`$ and $`\phi _1=\alpha _1\chi _{R_1},`$ where the constant $`\alpha _1`$ is chosen so that $`_{Q_1}fw_1𝑑\mu =\phi _1𝑑\mu `$. Suppose that $`\phi _1,\mathrm{},\phi _{k1}`$ have been constructed, satisfy (2.4) and $$\underset{i=1}{\overset{k1}{}}|\phi _i|B\lambda ,$$ where $`B`$ is some constant which will be fixed below. Let $`R_{s_1},\mathrm{},R_{s_m}`$ be the subfamily of $`R_1,\mathrm{},R_{k1}`$ such that $`R_{s_j}R_k\mathrm{}`$. As $`l(R_{s_j})l(R_k)`$ (because of the non decreasing sizes of $`R_i`$), we have $`R_{s_j}3R_k`$. 
Taking into account that for $`i=1,\mathrm{},k1`$ $$|\phi _i|𝑑\mu _{Q_i}|f|𝑑\mu $$ by (2.4), and using that $`R_k`$ is $`(6,6^{n+1})`$-doubling and (2.2), we get $`{\displaystyle \underset{j}{}}{\displaystyle |\phi _{s_j}|𝑑\mu }`$ $``$ $`{\displaystyle \underset{j}{}}{\displaystyle _{Q_{s_j}}}|f|𝑑\mu `$ $``$ $`C{\displaystyle _{3R_k}}|f|𝑑\mu C\lambda \mu (6R_k)C_2\lambda \mu (R_k).`$ Therefore, $$\mu \left\{_j|\phi _{s_j}|>2C_2\lambda \right\}\frac{\mu (R_k)}{2}.$$ So we set $$A_k=R_k\left\{_j|\phi _{s_j}|2C_2\lambda \right\},$$ and then $`\mu (A_k)\mu (R_k)/2.`$ The constant $`\alpha _k`$ is chosen so that for $`\phi _k=\alpha _k\chi _{A_k}`$ we have $`\phi _k𝑑\mu =_{Q_k}fw_k𝑑\mu `$. Then we obtain $$|\alpha _k|\frac{1}{\mu (A_k)}_{Q_k}|f|𝑑\mu \frac{2}{\mu (R_k)}_{\frac{1}{2}R_k}|f|𝑑\mu C_3\lambda $$ (this calculation also applies to $`k=1`$). Thus, $$|\phi _k|+\underset{j}{}|\phi _{s_j}|(2C_2+C_3)\lambda .$$ If we choose $`B=2C_2+C_3`$, (2.5) follows. Now it is easy to check that (2.6) also holds. Indeed we have $$\phi _i_{L^{\mathrm{}}(\mu )}\mu (R_i)C|\alpha _i|\mu (A_i)=C\left|_{Q_i}fw_i𝑑\mu \right|C_{Q_i}|f|𝑑\mu .$$ Suppose now that the collection of cubes $`\{Q_i\}_i`$ is not finite. For each fixed $`N`$ we consider the family of cubes $`\{Q_i\}_{1iN}`$. Then, as above, we construct functions $`\phi _1^N,\mathrm{},\phi _N^N`$ with $`\mathrm{supp}(\phi _i^N)R_i`$ satisfying $$\phi _i^N𝑑\mu =_{Q_i}fw_i𝑑\mu ,$$ $$\underset{i=1}{\overset{N}{}}|\phi _i^N|B\lambda $$ and $$\phi _i^N_{L^{\mathrm{}}(\mu )}\mu (R_i)C_{Q_i}|f|𝑑\mu .$$ Notice that the sign of $`\phi _i^N`$ equals the sign of $`fw_i𝑑\mu `$ and so it does not depend on $`N`$. Then there is a subsequence $`\{\phi _1^k\}_{kI_1}`$ which is convergent in the weak $``$ topology of $`L^{\mathrm{}}(\mu )`$ to some function $`\phi _1L^{\mathrm{}}(\mu )`$. 
Now we can consider a subsequence $`\{\phi _2^k\}_{kI_2}`$ with $`I_2I_1`$ which is also convergent in the weak $``$ topology of $`L^{\mathrm{}}(\mu )`$ to some function $`\phi _2L^{\mathrm{}}(\mu )`$. In general, for each $`j`$ we consider a subsequence $`\{\phi _j^k\}_{kI_j}`$ with $`I_jI_{j1}`$ that converges in the weak $``$ topology of $`L^{\mathrm{}}(\mu )`$ to some function $`\phi _jL^{\mathrm{}}(\mu )`$. It is easily checked that the functions $`\phi _j`$ satisfy the required properties. ∎ ###### Remark 3.1. Recall that Besicovich’s covering theorem asserts that if $`\mathrm{\Omega }^d`$ is a bounded set and for each $`x\mathrm{\Omega }`$ there is a cube $`Q_x`$ centered at $`x`$, then there exists a family of cubes $`\{Q_{x_i}\}_i`$ with finite overlap covering $`\mathrm{\Omega }`$. In (a) of the preceding proof we have applied Besicovich’s covering theorem to $`\mathrm{\Omega }=\{x:|f(x)|>\lambda \}`$. However, this set may be unbounded, and the boundedness property is a necessary assumption in Besicovich’s theorem (example: take $`\mathrm{\Omega }=[0,+\mathrm{})`$ and consider $`Q_x=[0,2x]`$ for all $`x\mathrm{\Omega }`$). We can solve this problem using different arguments. One possibility is to consider for each $`r>0`$ the set $`\mathrm{\Omega }_r=\{x:|x|r,|f(x)|>\lambda \}`$ and to apply Besicovich’s covering theorem to $`\mathrm{\Omega }_r`$. With the same arguments as above, we can decompose $`f=g+b`$, with $`|g|\lambda `$ only on $`\mathrm{\Omega }_r`$ and $`b`$ as above. Then the proof of Theorem 1.1 can be modified to show that for any fixed constants $`\lambda ,R>0`$ one has $$\mu \{xB(0,R):|T_\epsilon f(x)|>\lambda \}C\frac{f_{L^1(\mu )}}{\lambda }.$$ However, we prefer the following solution. We are interested in showing that the Calderón-Zygmund decomposition of Lemma 2.4 works also without assuming $`\mathrm{\Omega }=\{x:|f(x)|>\lambda \}`$ bounded. Let us sketch the argument. 
Consider a cube $`Q_0`$ centered at $`0`$ big enough so that $$2^{d+1}f_{L^1(\mu )}/\mu (Q_0)<\lambda .$$ So for any cube $`Q`$ containing $`Q_0`$ we will have (3.2) $$2^{d+1}f_{L^1(\mu )}/\mu (Q)<\lambda .$$ For $`m0`$ we set $`Q_m:=\left(\frac{5}{4}\right)^mQ_0`$. For each $`m`$ we can apply Besicovich’s covering theorem to the annulus $`Q_mQ_{m1}`$ (we take $`Q_1:=\mathrm{}`$), with cubes $`Q_x`$ centered at $`x\mathrm{supp}(\mu )(Q_mQ_{m1})`$ as in (a) of the proof above, satisfying (3.1). In this argument we have to be careful with the overlapping among the cubes belonging to coverings of different annuli. Indeed, there exist some fixed constants $`N`$ and $`N^{}`$ such that if $`mN^{}`$, for $`x\mathrm{supp}(\mu )(Q_mQ_{m1})`$ we have (3.3) $$Q_xQ_{m+N}Q_{mN}.$$ Otherwise, it is easily seen that $`\mathrm{}(Q_x)>\frac{3}{4}\mathrm{}(Q_m)`$, choosing $`N`$ big enough. It follows that $`Q_02Q_x`$ since $`\mathrm{}(Q_0)\mathrm{}(Q_m)`$ for $`N^{}`$ big enough too. This cannot happen because then $`2Q_x`$ satisfies (3.2), which contradicts (3.1). Because of (3.3), the covering made up of cubes belonging to the Besicovich coverings of different annuli $`Q_mQ_{m1}`$, $`m0`$, will have finite overlap. Notice that in this argument it is essential that in (3.1) we divide not by $`\mu (Q_x)`$, but by $`\mu (2Q_x)`$.
# Partitions with parts in a finite set Supported in part by grants from the PSC–CUNY Research Award Program and the NSA Mathematical Sciences Program. ## Abstract Let $`A`$ be a nonempty finite set of relatively prime positive integers, and let $`p_A(n)`$ denote the number of partitions of $`n`$ with parts in $`A`$. An elementary arithmetic argument is used to prove the asymptotic formula $$p_A(n)=\left(\frac{1}{_{aA}a}\right)\frac{n^{k1}}{(k1)!}+O\left(n^{k2}\right).$$ Let $`A`$ be a nonempty set of positive integers. A partition of a positive integer $`n`$ with parts in $`A`$ is a representation of $`n`$ as a sum of not necessarily distinct elements of $`A`$. Two partitions are considered the same if they differ only in the order of their summands. The partition function of the set $`A`$, denoted $`p_A(n)`$, counts the number of partitions of $`n`$ with parts in $`A`$. If $`A`$ is a finite set of positive integers with no common factor greater than 1, then every sufficiently large integer can be written as a sum of elements of $`A`$ (see Nathanson and Han, Kirfel, and Nathanson ), and so $`p_A(n)1`$ for all $`nn_0.`$ In the special case that $`A`$ is the set of the first $`k`$ integers, it is known that $$p_A(n)\frac{n^{k1}}{k!(k1)!}.$$ Erdős and Lehner proved that this asymptotic formula holds uniformly for $`k=o(n^{1/3})`$. If $`A`$ is an arbitrary finite set of relatively prime positive integers, then $$p_A(n)\left(\frac{1}{_{aA}a}\right)\frac{n^{k1}}{(k1)!}.$$ (1) The usual proof of this result (Netto , Pólya–Szegö \[5, Problem 27\]) is based on the partial fraction decomposition of the generating function for $`p_A(n)`$. The purpose of this note is to give a simple, purely arithmetic proof of (1). 
We define $`p_A(0)=1.`$ ###### Theorem 1 Let $`A=\{a_1,\mathrm{},a_k\}`$ be a set of $`k`$ relatively prime positive integers, that is, $$\mathrm{gcd}(A)=(a_1,\mathrm{},a_k)=1.$$ Let $`p_A(n)`$ denote the number of partitions of $`n`$ into parts belonging to $`A`$. Then $$p_A(n)=\left(\frac{1}{_{aA}a}\right)\frac{n^{k1}}{(k1)!}+O\left(n^{k2}\right).$$ Proof. Let $`k=|A|.`$ The proof is by induction on $`k`$. If $`k=1`$, then $`A=\{1\}`$ and $$p_A(n)=1,$$ since every positive integer has a unique partition into a sum of 1’s. Let $`k2,`$ and assume that the Theorem holds for $`k1`$. Let $$d=(a_1,\mathrm{},a_{k1}).$$ Then $$(d,a_k)=1.$$ For $`i=1,\mathrm{},k1`$, we set $$a_i^{}=\frac{a_i}{d}.$$ Then $$A^{}=\{a_1^{},\mathrm{},a_{k1}^{}\}$$ is a set of $`k1`$ relatively prime positive integers, that is, $$\mathrm{gcd}(A^{})=1.$$ Since the induction assumption holds for $`A^{}`$, we have $$p_A^{}(n)=\left(\frac{1}{_{i=1}^{k1}a_i^{}}\right)\frac{n^{k2}}{(k2)!}+O\left(n^{k3}\right)$$ for all nonnegative integers $`n.`$ Let $`n(d1)a_k`$. Since $`(d,a_k)=1`$, there exists a unique integer $`u`$ such that $`0ud1`$ and $$nua_k(modd).$$ Then $$m=\frac{nua_k}{d}$$ is a nonnegative integer, and $$m=O(n).$$ If $`v`$ is any nonnegative integer such that $$nva_k(modd),$$ then $`va_kua_k(modd)`$ and so $`vu(modd)`$, that is, $`v=u+\mathrm{}d`$ for some nonnegative integer $`\mathrm{}.`$ If $$nva_k=n(u+\mathrm{}d)a_k0,$$ then $$0\mathrm{}\left[\frac{n}{da_k}\frac{u}{d}\right]=\left[\frac{m}{a_k}\right]=r.$$ We note that $$r=O(n).$$ Let $`\pi `$ be a partition of $`n`$ into parts belonging to $`A.`$ If $`\pi `$ contains exactly $`v`$ parts equal to $`a_k`$, then $`nva_k0`$ and $`nva_k0(modd)`$, since $`nva_k`$ is a sum of elements in $`\{a_1,\mathrm{},a_{k1}\}`$, and each of the elements in this set is divisible by $`d`$. 
Therefore, $`v=u+\mathrm{}d`$, where $`0\mathrm{}r.`$ Consequently, we can divide the partitions of $`n`$ with parts in $`A`$ into $`r+1`$ classes, where, for each $`\mathrm{}=0,1,\mathrm{},r,`$ a partition belongs to class $`\mathrm{}`$ if it contain exactly $`u+\mathrm{}d`$ parts equal to $`a_k`$. The number of partitions of $`n`$ with exactly $`u+\mathrm{}d`$ parts equal to $`a_k`$ is exactly the number of partitions of $`n(u+\mathrm{}d)a_k`$ into parts belonging to the set $`\{a_1,\mathrm{},a_{k1}\}`$, or, equivalently, the number of partitions of $$\frac{n(u+\mathrm{}d)a_k}{d}$$ into parts belonging to $`A^{}`$, which is exactly $$p_A^{}\left(\frac{n(u+\mathrm{}d)a_k}{d}\right)=p_A^{}\left(m\mathrm{}a_k\right).$$ Therefore, $`p_A(n)`$ $`=`$ $`{\displaystyle \underset{\mathrm{}=0}{\overset{r}{}}}p_A^{}(m\mathrm{}a_k)`$ $`=`$ $`\left({\displaystyle \frac{1}{_{i=1}^{k1}a_i^{}}}\right){\displaystyle \underset{\mathrm{}=0}{\overset{r}{}}}\left({\displaystyle \frac{(m\mathrm{}a_k)^{k2}}{(k2)!}}+O(m^{k3})\right)`$ $`=`$ $`\left({\displaystyle \frac{d^{k1}}{_{i=1}^{k1}a_i}}\right){\displaystyle \underset{\mathrm{}=0}{\overset{r}{}}}{\displaystyle \frac{(m\mathrm{}a_k)^{k2}}{(k2)!}}+O(n^{k2}).`$ To evaluate the inner sum, we note that $$\underset{\mathrm{}=0}{\overset{r}{}}\mathrm{}^j=\frac{r^{j+1}}{(j+1)}+O(r^j)$$ and $$\underset{j=0}{\overset{k2}{}}(1)^j\left(\genfrac{}{}{0pt}{}{k1}{j+1}\right)=\underset{j=1}{\overset{k1}{}}(1)^j\left(\genfrac{}{}{0pt}{}{k1}{j}\right)=1.$$ Then $`{\displaystyle \underset{\mathrm{}=0}{\overset{r}{}}}{\displaystyle \frac{(m\mathrm{}a_k)^{k2}}{(k2)!}}`$ $`=`$ $`{\displaystyle \frac{1}{(k2)!}}{\displaystyle \underset{\mathrm{}=0}{\overset{r}{}}}{\displaystyle \underset{j=0}{\overset{k2}{}}}\left({\displaystyle \genfrac{}{}{0pt}{}{k2}{j}}\right)m^{k2j}(\mathrm{}a_k)^j`$ $`=`$ $`{\displaystyle \frac{1}{(k2)!}}{\displaystyle \underset{j=0}{\overset{k2}{}}}\left({\displaystyle \genfrac{}{}{0pt}{}{k2}{j}}\right)m^{k2j}(a_k)^j{\displaystyle 
\underset{\mathrm{}=0}{\overset{r}{}}}\mathrm{}^j`$ $`=`$ $`{\displaystyle \frac{1}{(k2)!}}{\displaystyle \underset{j=0}{\overset{k2}{}}}\left({\displaystyle \genfrac{}{}{0pt}{}{k2}{j}}\right)m^{k2j}(a_k)^j\left({\displaystyle \frac{r^{j+1}}{(j+1)}}+O(r^j)\right)`$ $`=`$ $`{\displaystyle \frac{1}{(k2)!}}{\displaystyle \underset{j=0}{\overset{k2}{}}}\left({\displaystyle \genfrac{}{}{0pt}{}{k2}{j}}\right)m^{k2j}(a_k)^j\left({\displaystyle \frac{m^{j+1}}{a_k^{j+1}(j+1)}}+O(m^j)\right)`$ $`=`$ $`{\displaystyle \frac{m^{k1}}{a_k}}{\displaystyle \underset{j=0}{\overset{k2}{}}}\left({\displaystyle \genfrac{}{}{0pt}{}{k2}{j}}\right){\displaystyle \frac{(1)^j}{(k2)!(j+1)}}+O(m^{k2})`$ $`=`$ $`{\displaystyle \frac{m^{k1}}{a_k}}{\displaystyle \underset{j=0}{\overset{k2}{}}}{\displaystyle \frac{(1)^j}{(k2j)!j!(j+1)}}+O(m^{k2})`$ $`=`$ $`{\displaystyle \frac{m^{k1}}{a_k}}{\displaystyle \underset{j=0}{\overset{k2}{}}}{\displaystyle \frac{(1)^j}{(k1(j+1))!(j+1)!}}+O(m^{k2})`$ $`=`$ $`{\displaystyle \frac{m^{k1}}{a_k(k1)!}}{\displaystyle \underset{j=0}{\overset{k2}{}}}(1)^j\left({\displaystyle \genfrac{}{}{0pt}{}{k1}{j+1}}\right)+O(m^{k2})`$ $`=`$ $`{\displaystyle \frac{m^{k1}}{a_k(k1)!}}+O(m^{k2}).`$ Therefore, $`p_A(n)`$ $`=`$ $`\left({\displaystyle \frac{d^{k1}}{_{i=1}^{k1}a_i}}\right){\displaystyle \underset{\mathrm{}=0}{\overset{r}{}}}{\displaystyle \frac{(m\mathrm{}a_k)^{k2}}{(k2)!}}+O(n^{k2})`$ $`=`$ $`\left({\displaystyle \frac{d^{k1}}{_{i=1}^{k1}a_i}}\right)\left({\displaystyle \frac{m^{k1}}{a_k(k1)!}}+O(n^{k2})\right)+O(n^{k2})`$ $`=`$ $`\left({\displaystyle \frac{d^{k1}}{_{i=1}^{k1}a_i}}\right)\left({\displaystyle \frac{1}{a_k(k1)!}}\right)\left({\displaystyle \frac{n}{d}}{\displaystyle \frac{ua_k}{d}}\right)^{k1}+O(n^{k2})`$ $`=`$ $`\left({\displaystyle \frac{d^{k1}}{_{i=1}^{k1}a_i}}\right)\left({\displaystyle \frac{1}{a_k(k1)!}}\right)\left({\displaystyle \frac{n}{d}}\right)^{k1}+O(n^{k2})`$ $`=`$ $`\left({\displaystyle \frac{1}{_{i=1}^ka_i}}\right){\displaystyle 
\frac{n^{k1}}{(k1)!}}+O(n^{k2}).`$ This completes the proof.
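The theorem is easy to check numerically. The following sketch (ours, not part of the paper) counts $`p_A(n)`$ by dynamic programming and compares it with the main term $`n^{k1}/((k1)!_{aA}a)`$; for A = {1, 2, 3} the ratio approaches 1 as n grows.

```python
from math import factorial, prod

def p_A(n, A):
    # Number of partitions of n with parts in A. Processing one part
    # size at a time makes the order of summands irrelevant.
    dp = [1] + [0] * n
    for a in A:
        for m in range(a, n + 1):
            dp[m] += dp[m - a]
    return dp[n]

def main_term(n, A):
    # Leading term of the theorem: n^(k-1) / ((k-1)! * prod of A).
    k = len(A)
    return n ** (k - 1) / (factorial(k - 1) * prod(A))

A = (1, 2, 3)
# p_A(5) = 5: 1+1+1+1+1, 2+1+1+1, 2+2+1, 3+1+1, 3+2
ratio = p_A(10**4, A) / main_term(10**4, A)  # close to 1
```

The convention $`p_A(0)=1`$ of the paper is reproduced automatically by the base case `dp[0] = 1`.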
# Untitled Document FERMION GENERATIONS FROM THE HIGGS SECTOR Vladimir Visnjic Department of Physics, Temple University, Philadelphia, PA 19122 and Institut za Fiziku, Beograd, Yugoslavia E-mail: visnjic@astro.temple.edu Abstract The generation structure in the quark and lepton spectrum is explained as originating from the excitation spectrum $`S_n`$ of SU(2)<sub>W</sub> doublet scalar fields, whose ground state $`S_1`$ is the Standard Model Higgs field. There is only one basic family of SU(2)<sub>W</sub> doublet left-handed fermions, $`\nu _L,e_L,u_L,d_L`$, whose bound states with $`S_n`$ manifest themselves as the generations of left-handed quarks and leptons. Likewise, there is only one basic family of the right-handed fermions, $`\nu _R,e_R,u_R,d_R`$, which combine with the gauge invariant scalar fields $`G_n`$ to produce the right-handed quarks and leptons of the second and higher generations. There are only four Yukawa coupling constants, $`G_\nu ,G_e,G_u`$, and $`G_d`$ and all quark and lepton masses are proportional to them. Suppression of flavor changing neutral currents (GIM mechanism) is automatic. $`\nu _\mu `$ and $`\nu _\tau `$ are expected to be massive. I present a theory of the origin of fermion generations in which there is only one fundamental quark/lepton family, while the second and higher ones are a consequence of an excitation spectrum in the scalar sector. The basic family of chiral fermions consists of $$\mathrm{}_L=\left(\begin{array}{c}\nu _L\\ e_L\end{array}\right),q_L=\left(\begin{array}{c}u_L\\ d_L\end{array}\right),\nu _R,e_R,u_R,d_R$$ with the usual $`\mathrm{SU}(2)_W\mathrm{U}(1)_Y`$ transformation properties. 
In addition to the fermions, there is a composite bosonic field $`S`$ whose lowest energy states are the SU(2)<sub>W</sub> doublet scalars $$S_1=\left(\begin{array}{c}S_1^+\\ S_1^0\end{array}\right),S_2=\left(\begin{array}{c}S_2^+\\ S_2^0\end{array}\right),S_3=\left(\begin{array}{c}S_3^+\\ S_3^0\end{array}\right)\mathrm{}$$ We shall assume that there are at least three scalar states below the lowest $`J=1`$ state. The ground state $`S_1`$ is the familiar Higgs field, which develops a vacuum expectation value. $`S_2,S_3,`$ etc. are its radial excitations labeled by the “generation number.” Orthogonality of the states $`S_1,S_2,S_3\mathrm{}`$ implies that for $`n2`$, $`S_n`$ has zero vacuum expectation value and zero Yukawa couplings. It also requires that the effective quartic couplings of these states respect the generation number, at least at energies much lower than their mass scale. Under this assumption the effective potential can only be a function of $`S_i^{}S_i`$ and $`(S_i^{}S_j)(S_j^{}S_i)`$ and the low energy effective theory possesses a global SU(2) symmetry, the isospin. Note that this scheme has nothing to do with Technicolor, which is a QCD-like theory invented to solve the fine-tuning problem and which does not address the fermion generation problem. The $`S`$-field envisaged here explains the existence of fermion generations as its radial excitation states. For this purpose it is not necessary to assume QCD-like structure for the $`S`$-field – or even that it is based on a new interaction at all. Gauge invariant scalars The gauge invariant scalar fields are obtained in the $`S_i^{}S_j`$ (neutral) and $`\overline{S}_i^{}S_j`$ (charged) channels, where $`\overline{S}=i\tau _2S^{}`$. The fields involving the ground state $`S_1`$ are of particular importance, since $`S_1`$ is the only Higgs field in the theory (i.e. 
the field which has negative mass squared in the Lagrangian) and plays a special role in producing the $`W`$’s, the $`Z`$, and the left-handed fermions of the first generation. The only gauge invariant scalar involving only $`S_1`$ is $`h^0=\sqrt{2}(S_1^0-<S_1^0>)`$, the physical Higgs field. For every $`n\ge 2`$ we have a positive scalar $`G_n^+=\overline{S}_1^{}S_n`$ and a neutral scalar $`G_n^0=S_1^{}S_n`$, both of which are labeled by $`n`$ and thus come in generations. As we shall see, the $`G_n`$ fields give their generation number to the right-handed fermions and thus are responsible for the generation structure in the right-handed sector. ## The quarks and leptons The left-handed fermions of the $`n`$-th generation, $`\nu _{nL},e_{nL},u_{nL},d_{nL},`$ are composed of the fundamental left-handed fermions $`\mathrm{}_L`$ and $`q_L`$ and the $`n`$-th generation $`S`$ field, $`S_n`$: $$\begin{array}{cc}\hfill \nu _{nL}& =\overline{S}_n^{}\mathrm{}_L\hfill \\ \hfill e_{nL}& =S_n^{}\mathrm{}_L\hfill \\ \hfill u_{nL}& =\overline{S}_n^{}q_L\hfill \\ \hfill d_{nL}& =S_n^{}q_L.\hfill \end{array}$$ $`(1)`$ The right-handed fermions of the $`n`$-th generation are composed of the fundamental right-handed fermions $`\nu _R,e_R,u_R,d_R,`$ and the $`G_n`$ fields from which they get their generation label: $$\begin{array}{cc}\hfill \nu _{nR}& =G_n^+e_R+G_n^0\nu _R\hfill \\ \hfill e_{nR}& =G_n^0e_R+G_n^{-}\nu _R\hfill \\ \hfill u_{nR}& =G_n^+d_R+G_n^0u_R\hfill \\ \hfill d_{nR}& =G_n^0d_R+G_n^{-}u_R.\hfill \end{array}$$ $`(2)`$ The left- and the right-handed fermions of the first generation meet at the usual Yukawa vertices, Fig. 1(a), while those of the second and higher generations meet at the vertices shown in Fig. 1(b). The four Yukawa coupling constants, $`G_\nu ,G_e,G_u`$, and $`G_d`$, are the only chiral symmetry breaking parameters in the theory and thus all fermion masses are proportional to them. 
In the limit in which these coupling constants go to zero all fermions are massless, irrespective of how massive their scalar constituents $`S_i`$ may be. Note, however, that the quark (lepton) masses of the second and higher generations get contributions from both $`G_u`$ and $`G_d`$ ($`G_\nu `$ and $`G_e`$), $$\begin{array}{cc}\hfill m_c& =Am_u+Bm_d\hfill \\ \hfill m_s& =Cm_u+Dm_d\hfill \end{array}$$ and analogously in the lepton sector. In particular, this implies that the muon and tau neutrinos are massive even if $`G_\nu =0`$, due to the contributions from the electron to those two neutrinos. ## The W and Z couplings and the mixing angles In the Standard Model the $`W`$ and $`Z`$ bosons couple to $`\overline{S}_1^{}(A_\mu -\frac{i}{g}\partial _\mu )S_1`$ and $`S_1^{}(A_\mu -\frac{i}{g}\partial _\mu )S_1`$, where $`A_\mu `$ denotes the SU(2)<sub>W</sub> gauge field. Here I postulate similar couplings of the $`W`$ and $`Z`$ bosons to the excited states, i.e. to $`\overline{S}_i^{}(A_\mu -\frac{i}{g}\partial _\mu )S_j`$ and $`S_i^{}(A_\mu -\frac{i}{g}\partial _\mu )S_j`$. This makes it possible to couple the $`W`$’s and the $`Z`$ to the quarks and leptons. However, the quark and lepton mass eigenstates are the bound states, Eq. (1). The $`W`$ coupling to the mass eigenstates involves the mixing matrix elements $`V_{ij}`$, equal to the overlap integral of the quark wave functions, $$V_{ij}=<u_{iL}|d_{jL}>=<\overline{S}_i^{}q_L|S_j^{}q_L>.$$ As shown in Ref. 1, orthogonality and completeness of the quark wave functions ensure the unitarity of the mixing matrix $`V`$ and thus the absence of flavor changing neutral currents (GIM mechanism). Alternatively, we may directly show that the off-diagonal couplings of the $`Z`$ involving $`S`$ fields of different generations are zero because of orthogonality of the $`S_n`$ states. Higher order corrections, of course, do not respect the orthogonality and introduce flavor changing neutral currents. 
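The unitarity argument above can be checked numerically. The following sketch is mine, not from the paper: it models the overlap matrix $`V_{ij}`$ of two sets of mass eigenstates expanded in a common orthonormal basis of $`S_n`$ states, and verifies that such an overlap matrix is automatically unitary. The helper name `random_unitary` and all numerical values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unitary(n, rng):
    # The Q factor of a QR decomposition of a random complex matrix is unitary;
    # it stands in for the expansion coefficients of bound-state wave functions.
    z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    q, _ = np.linalg.qr(z)
    return q

n = 3                         # three generations, for illustration
U = random_unitary(n, rng)    # columns: u_i expansion in a common orthonormal basis
D = random_unitary(n, rng)    # columns: d_j expansion in the same basis
V = U.conj().T @ D            # V_ij = <u_i | d_j>

# Deviation of V V^dagger from the identity: zero up to round-off,
# which is the GIM-type suppression of flavor changing neutral currents.
unitarity_error = float(np.max(np.abs(V @ V.conj().T - np.eye(n))))
```

The point of the sketch is that unitarity of $`V`$ needs no tuning: it follows from orthonormality and completeness of the underlying states alone.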
## Conclusions The proposed theory of quark and lepton generations predicts that, except for the first one, each quark and lepton generation is accompanied by a pair of scalar fields, $`G_n^+`$ and $`G_n^0`$. Thus both the fermions and the scalars come in generations which originate in the spectrum of the $`S`$-field. Ultimately the model should predict the number of quark/lepton generations and their masses. Even in the absence of these predictions, we may notice that the fermion generations predicted here differ in two important aspects from higher fermion generations which would obey the rules of the present Standard Model. First, although they may be heavy, they do not involve large Yukawa couplings and thus do not lead to new strong interactions among quarks/leptons. Heavy fermions are obtained as bound states and not as a result of large Yukawa couplings. Second, even if the number of quark generations turns out to be large, the asymptotic freedom properties of QCD may not be affected, since there are only two basic quark flavors, $`u`$ and $`d`$, all other quark flavors being bound states of these two. ## References 1. V. Visnjic, Phys. Rev. D25 (1982) 248.
# Dynamical ordering in the c-axis in 3D driven vortex lattices ## 1 INTRODUCTION The prediction of a dynamical phase transition upon increasing drive, from a fluidlike plastic flow regime to a coherently moving solid in a moving vortex lattice, has motivated much recent theoretical, experimental, and simulation work. In a previous work we have studied the dynamical regimes in the velocity-force curve (voltage-current) in 2D thin films and found two dynamical phase transitions above the critical force. The first transition, from a plastic flow regime to a smectic flow regime, is characterized by the simultaneous occurrence of a peak in differential resistance, isotropic low frequency voltage noise and maximum transverse diffusion. The second transition, from a smectic flow regime to a frozen transverse solid, is a freezing transition in the transverse direction where transverse diffusion vanishes abruptly and the Hall noise drops many orders of magnitude. In other 2D simulations the peak in differential resistance was found to coincide with the onset of orientational order and a maximum number of defects. Experimentally, the position of this peak was taken by Hellerqvist et al. as an indication of a dynamical phase transition. In this paper we show that in driven 3D layered superconductors, the peak in differential resistance also coincides with the onset of correlation along the c-axis. ## 2 MODEL Simulations of vortices in 3D layered superconductors have been done previously using time–dependent Ginzburg–Landau–Lawrence–Doniach equations (two layers), the 3D XY model, and Langevin dynamics of interacting particles. Here we study the motion of pancake vortices in a layered superconductor with disorder, with an applied magnetic field in the c-direction and with an external homogeneous current in the layers (ab-planes). We use a model introduced by J.R. Clem for a layered superconductor with vortices in the limit of zero Josephson coupling between layers. 
We simulate a stack of equally spaced superconducting layers with interlayer periodicity $`s`$, each layer containing the same number of pancake vortices. The equation of motion for a pancake located at position $`𝐑_𝐢=(𝐫_𝐢,z_i)=(x_i,y_i,n_is)`$ (with the z-axis along the c-direction) is: $$\eta \frac{d𝐫_𝐢}{dt}=\underset{j\ne i}{}𝐅_𝐯(\rho _{ij},z_{ij})+\underset{p}{}𝐅_𝐩(\rho _{ip})+𝐅,$$ (1) where $`\rho _{ij}=|𝐫_i-𝐫_j|`$ and $`z_{ij}=|z_i-z_j|`$ are the in-plane and inter-plane distance between pancakes $`i,j`$, $`r_{ip}=|𝐫_i-𝐫_p|`$ is the in-plane distance between the vortex $`i`$ and a pinning site at $`𝐑_𝐩=(𝐫_p,z_i)`$ (pancakes interact only with pinning centers within the same layer), $`\eta `$ is the Bardeen-Stephen friction, and $`𝐅=\frac{\mathrm{\Phi }_0}{c}𝐉\times 𝐳`$ is the driving force due to an applied in-plane current density $`𝐉`$. We consider a random uniform distribution of attractive pinning centers in each layer with $`𝐅_𝐩=-A_pe^{-(r/r_p)^2}𝐫/r_p^2`$, where $`r_p`$ is the pinning range. The magnetic interaction between pancakes $`𝐅_v(\rho ,z)=F_\rho (\rho ,z)\widehat{r}`$ is given by: $$F_\rho (\rho ,0)=(\varphi _0^2/4\pi ^2\mathrm{\Lambda }\rho )[1-(\lambda _{}/\mathrm{\Lambda })(1-e^{-\rho /\lambda _{}})]$$ (2) $$F_\rho (\rho ,z)=-(\varphi _0^2\lambda _{}/4\pi ^2\mathrm{\Lambda }^2\rho )[e^{-z/\lambda _{}}-e^{-R/\lambda _{}}].$$ (3) Here, $`R=\sqrt{z^2+\rho ^2}`$, $`\lambda _{}`$ is the penetration length parallel to the layers, and $`\mathrm{\Lambda }=2\lambda _{}^2/s`$ is the 2D thin-film screening length. An analogous model to Eqs. (2-3) has been used previously. We normalize length scales by $`\lambda _{}`$, energy scales by $`A_v=\varphi _0^2/4\pi ^2\mathrm{\Lambda }`$, and time by $`\tau =\eta \lambda _{}^2/A_v`$. We consider $`N_v`$ pancake vortices and $`N_p`$ pinning centers per layer in $`N_l`$ rectangular layers of size $`L_x\times L_y`$, and the normalized vortex density is $`n_v=N_v\lambda _{}^2/L_xL_y=B\lambda _{}^2/\mathrm{\Phi }_0`$. 
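As an illustrative sketch (not the authors' code), the overdamped dynamics of Eq. (1) can be integrated with a simple Euler scheme. Only the Gaussian pinning force and a uniform drive are kept here; the magnetic pancake-pancake interaction of Eqs. (2-3) and the periodic boundary conditions are omitted, and all parameter values are assumptions, not those of the paper.

```python
import numpy as np

# Normalized, illustrative parameters: friction, pinning strength/range, time step.
eta, A_p, r_p, dt = 1.0, 0.2, 0.2, 0.01

def pinning_force(r, pins):
    """Sum of attractive Gaussian pinning forces F_p = -A_p exp(-(d/r_p)^2) d / r_p^2."""
    f = np.zeros(2)
    for p in pins:
        d = r - p
        f += -A_p * np.exp(-np.dot(d, d) / r_p**2) * d / r_p**2
    return f

def euler_step(r, pins, F):
    """One Euler step of eta dr/dt = F_p + F (vortex-vortex term dropped)."""
    return r + (dt / eta) * (pinning_force(r, pins) + F)

# A single pancake released near a pin, with zero drive, relaxes onto the pin.
pins = [np.array([0.0, 0.0])]
r = np.array([0.05, 0.0])
for _ in range(2000):
    r = euler_step(r, pins, np.array([0.0, 0.0]))
pinned_distance = float(np.hypot(*r))
```

With a drive $`F`$ large compared to the maximum pinning force, the same update simply advects the pancake, which is the depinned regime discussed below.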
Moving pancake vortices induce a total electric field $`𝐄=\frac{B}{c}𝐯\times 𝐳`$, with $`𝐯=\frac{1}{N_vN_l}\sum _i𝐯_i`$. We study the dynamical regimes in the velocity-force curve at $`T=0`$, solving Eq. (1) for increasing values of $`𝐅=F𝐲`$. We consider a constant vortex density $`n_v=0.1`$ in $`N_l=5`$ layers with $`L_x/L_y=\sqrt{3}/2`$, $`s=0.01`$, and $`N_v=36`$ pancake vortices per layer. We take a pinning range of $`r_p=0.2`$, a pinning strength of $`A_p/A_v=0.2`$, with a density of pinning centers $`n_p=0.65`$ on each layer. We use periodic boundary conditions in all directions, and the periodic long-range in-plane interaction is handled using an exact and fast-converging sum. The equations are integrated with a time step of $`\mathrm{\Delta }t=0.01\tau `$ and averages are evaluated in $`16384`$ integration steps after $`2000`$ iterations for equilibration (when the total energy reaches a stationary value). Each simulation is started at $`F=0`$ with an ordered triangular vortex lattice (perfectly correlated in the c-direction), and the force is slowly increased in steps of $`\mathrm{\Delta }F=0.1`$ up to values as high as $`F=8`$. ## 3 RESULTS We start by looking at the vortex trajectories in the steady state phases. In Figure 1(a-b) we show a top view snapshot of the instantaneous pancake configuration for two typical values of $`F`$. In Figure 2(a-b) we show the vortex trajectories $`\{𝐑_i(t)\}`$ for the same two typical values of $`F`$ by plotting all the positions of the pancakes in all the layers for all the time iteration steps. In Fig. 3(a) we plot the average vortex velocity $`V=\langle V_y(t)\rangle =\frac{1}{N_v}\sum _i\frac{dy_i}{dt}`$, in the direction of the force as a function of $`F`$ and its corresponding derivative $`dV/dF`$. We also study the pair distribution function: $$g(\rho ,n)=\frac{1}{N_pN_l}\underset{i<j}{}\delta (\rho -\rho _{ij})\delta (ns-z_{ij}).$$ (4) In Fig. 
3(b) we plot a correlation parameter along the c-axis, defined as: $$C_z^n=\underset{\rho \to 0}{lim}g(\rho ,n),$$ (5) as a function of $`F`$ for $`n=1,2`$. Below a critical force, $`F_c\approx 0.4`$, all the pancakes are pinned and there is no motion. At the characteristic force, $`F_p\approx 0.8`$, we observe a peak in the differential resistance. At $`F_c`$ pancake vortices start to move in a few channels, as was also seen in 2D vortex simulations. A typical situation is shown in Fig. 2(a). In this plastic flow regime we observe that the motion is completely uncorrelated along the c-direction, with $`C_z^n\approx 0`$ for $`F_c<F<F_p`$ as shown in Fig. 3(b). In Fig. 1(a) and 2(a) we see that this situation corresponds to a disordered configuration of pancakes and to an uncorrelated structure of plastic channels along the c-axis. At $`F_p`$ there is an onset of correlation along the c-axis and pancake vortices start to align, forming well-defined stacks or vortex lines. This onset of c-axis correlation corresponds to the transition from plastic flow to a moving smectic phase (the translational as well as temporal order will be discussed in detail elsewhere). For $`F>F_p`$ we observe that the structure of smectic channels is very correlated in the c-direction. ## 4 DISCUSSION In a system with $`N_l=5`$ layers we have found that there is a clear onset of c-axis correlation with increasing driving force, where the pancake vortices start to align, forming well-defined stacks moving in smectic flow channels. Below this transition, in the plastic flow regime, these stacks of pancakes are unstable. Also, an enhancement of c-axis correlations with increasing drive has been observed in a bilayered system. We have further found that the in-plane properties correspond well with those obtained in 2D thin-film simulations. 
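The correlation parameter of Eqs. (4)-(5) can be estimated directly from pancake coordinates. The following sketch is my construction, not the paper's analysis code: it replaces the $`\rho \to 0`$ limit by a small ad hoc in-plane cutoff and simply counts pancakes that have a partner almost directly above them, $`n`$ layers away, contrasting a perfectly c-axis-correlated configuration with uncorrelated random layers.

```python
import numpy as np

def c_axis_correlation(positions, n, rho_cut=0.05):
    """positions: array (N_layers, N_v, 2) of in-plane pancake coordinates.
    Returns the fraction of pancakes with a partner within rho_cut, n layers up.
    The cutoff and normalization are illustrative assumptions."""
    n_layers, n_v, _ = positions.shape
    aligned = 0
    for l in range(n_layers - n):
        d = positions[l][:, None, :] - positions[l + n][None, :, :]
        rho = np.linalg.norm(d, axis=-1)          # all in-plane pair distances
        aligned += np.sum(rho.min(axis=1) < rho_cut)
    return aligned / ((n_layers - n) * n_v)

rng = np.random.default_rng(2)
base = rng.uniform(0, 6, size=(36, 2))
ordered = np.repeat(base[None, :, :], 5, axis=0)   # identical layers: perfect stacks
random_cfg = rng.uniform(0, 6, size=(5, 36, 2))    # uncorrelated layers

c_ordered = c_axis_correlation(ordered, 1)   # close to 1 for aligned stacks
c_random = c_axis_correlation(random_cfg, 1) # close to 0 for plastic-like disorder
```

In this toy form, the jump of the estimator from near 0 to near 1 mimics the onset of c-axis correlation at $`F_p`$ described above.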
A better understanding of the effects of c-axis correlations on pancake motion in each layer can be obtained by studying translational and temporal order in larger systems, through the analysis of the structure factor, voltage noise, and in-plane as well as inter-plane velocity-velocity correlation functions. In conclusion, we have analyzed the vortex correlation along the field direction (c-axis) in the velocity-force characteristics at $`T=0`$ and found that above the critical current there is an onset of c-axis correlation in the transition from a plastic flow regime to a smectic flow regime. This transition coincides with the peak in the differential resistance. Experimentally, this effect could be studied with measurements of c-axis resistivity as a function of an applied current parallel to the layers. We acknowledge discussions with L.N. Bulaevskii, P.S. Cornaglia, F. de la Cruz, Y. Fasano, M. Menghini and C.J. Olson. This work has been supported by a grant from ANPCYT (Argentina), Proy. 03-00000-01034. D.D. and A.B.K. acknowledge support from Fundación Antorchas (Proy. A-13532/1-96), Conicet, CNEA and FOMEC (Argentina). This work was also supported by the Director, Office of Advanced Scientific Computing Research, Division of Mathematical, Information, and Computational Sciences of the U.S. Department of Energy under contract number DE-AC03-76SF00098.
# IISc-CTS-2/00 quant-ph/0002037 Quantum Algorithms and the Genetic Code ## I Genetic Information I am going to talk about processes that form the basis of life and evolution. The hypothesis that living organisms have adapted to their environment, and have exploited the available material resources and the physical laws governing them to the best of their capability, is the legacy of Charles Darwin—survival of the fittest. This is an optimisation problem, but it is not easy to quantify it in mathematical terms. Often we can explain various observed features of living organisms. The explanation becomes more and more believable, as more and more of its ingredients are verified experimentally. Yet even when definite predictions exist, an explanation is an explanation and not a proof; there is no way we can ask evolution to repeat itself and observe it like many common scientific experiments. With this attitude, let us look at life. Living organisms try to perpetuate themselves. The disturbances from the environment, and the damage they cause, make it impossible for a particular structure to survive forever. So the perpetuation is carried out through the process of replication. One generation of organisms produces the next generation, which is essentially a copy of itself. The self-similarity is maintained by the hereditary information—the genetic code—that is passed on from one generation to the next. The long chains of DNA molecules residing in the nuclei of the cells form the repository of the genetic information. These DNA molecules control life in two ways: (1) their own highly faithful replication, which passes on the information to the next generation (each life begins as a single cell, and each cell in a complex living organism contains identical DNA molecules), and (2) the synthesis of proteins which govern all the processes of a living organism (haemoglobin, insulin, immunoglobulin etc. are well-known examples of proteins). 
Computation is nothing but processing of information. So we can study what DNA does from the view-point of computer science. In the process of designing and building modern computers, we have learnt the importance of various software and hardware features. Let us look at some of them. The first is the process of digitisation. Instead of handling a single variable covering a large range, it is easier to handle several variables each spanning a smaller range. Any desired accuracy can be maintained by putting together as many as necessary of the smaller range variables, while the instruction set required to manipulate each variable is substantially simplified. This simplification means that only a limited number of processes have to be physically implemented, leading to high speed computation. Discretisation also makes it possible to correct small errors arising from local fluctuations. There are disadvantages of digitisation in terms of increase in the depth of calculation and power consumption, but the advantages are so great that digital computers have pushed analogue computers into obscurity. Even before the discovery of DNA, Erwin Schrödinger had emphasised the fact that an aperiodic chain of building blocks carries information, just like our systems of writing numbers and sentences. The structure of DNA and protein reveals that life has indeed taken the route of digitising its information. DNA and RNA chains use an alphabet of 4 nucleotide bases, while proteins use an alphabet of 20 amino acids. The second is the packing of the information. When there are repetitive structures or correlations amongst different sections of a message, that reduces its capacity to convey new information—part of the variables are wasted in repeating what is already conveyed. Claude Shannon showed that the information content of a fixed length message is maximised when all the correlations are eliminated and each of the variables is made as random as possible. 
Our languages are not that efficient; we can immediately notice that consonants and vowels roughly alternate in their structure. When we easily compress our text files on a computer, we remove such correlations without losing information. Detailed analyses of DNA sequences have found little correlation amongst the letters of its alphabet, and we have to marvel at the fact that life has achieved the close to maximum entropy structure of coding its information. The third is the selection of the letters of the alphabet. This clearly depends on the task to be accomplished and the choices available as symbols. A practical criterion for fast error-free information processing is that various symbols should be easily distinguishable from each-other. We use the decimal system of numbers because we, at least in India, learnt to count with our fingers. There is no way to prove this, but it is a better explanation than anything else. We can offer a better justification for why the computers we have designed use the binary system of numbers. 2 is the smallest base available for a number system, and that leads to maximal simplification of the elementary instruction set. (The difference is obvious when we compare the mathematical tables we learnt in primary schools to the corresponding operations in binary arithmetic.) 2 is also the maximum number of items that can be distinguished with a single yes/no question. (Detecting on/off in an electrical circuit is much easier than identifying more values of voltages and currents.) Thus it is worth investigating what life optimised in selecting the letters of its alphabet. The computational task involved in DNA replication is ASSEMBLY. The desired components already exist (they are floating around in a random ensemble); they are just picked up one by one and arranged in the required order. To pick up the desired component, one must be able to identify it uniquely. 
This is a variant of the unsorted database search problem, unsorted because prior to their selection the components are not arranged in any particular order. (It is important to note that this replication is not a COPY process. COPY means writing a specific symbol in a blank location, and unlike ASSEMBLY, it is forbidden by the linearity of quantum mechanics.) The optimisation criterion for this task is now clear—one must distinguish the maximum number of items with a minimum number of identifying questions. I have already pointed out that in a classical search, a single yes/no question can distinguish two items. The interesting point is that a quantum search can do better. ## II Unsorted Database Search Let the database contain $`N`$ distinct objects arranged in a random order. A certain object has to be located in the database by asking a set of questions. Each query is a yes/no question based on a property of the desired object (e.g. is this the object that I want or not?). In the search process, the same query is repeated using different input states until the desired object is found. Let $`Q`$ be the number of queries required to locate the desired object in the database. Using classical probability analysis, it can be easily seen that (a) $`\langle Q\rangle =N`$ when all objects are available with equal probability for each query (i.e. each query has a success probability of $`1/N`$), and (b) $`\langle Q\rangle =(N+1)/2`$ when the objects which have been rejected earlier in the search process are not picked up again for a query. Here the angular brackets represent the average expectation values. Option (b) is available only when the system possesses memory to recognise what has already been tried before. In the random cellular environment, the rejected object is thrown back into the database, and only option (a) is available to a classical ASSEMBLY operation. Lov Grover discovered a quantum database search algorithm that locates the desired object using fewer queries. 
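The two classical averages quoted above can be checked with a small Monte-Carlo sketch (my illustration, using only the standard library): sampling with replacement gives a mean of about $`N`$ queries, while a memory-equipped search without replacement gives about $`(N+1)/2`$.

```python
import random

def queries_with_replacement(N, rng):
    # Memoryless search: each query succeeds with probability 1/N.
    q = 0
    while True:
        q += 1
        if rng.randrange(N) == 0:
            return q

def queries_without_replacement(N, rng):
    # Search with memory: every object is tried at most once, in random order.
    order = list(range(N))
    rng.shuffle(order)
    return order.index(0) + 1

rng = random.Random(1)
N, trials = 20, 20000
mean_with = sum(queries_with_replacement(N, rng) for _ in range(trials)) / trials
mean_without = sum(queries_without_replacement(N, rng) for _ in range(trials)) / trials
```

For `N = 20` the two sample means come out near 20 and 10.5, matching options (a) and (b) in the text.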
Quantum algorithms work with amplitudes, which evolve in time by unitary transformations. At any stage, the observation probability of a state is the absolute value square of the corresponding amplitude. The quantum database is represented as an $`N`$-dimensional Hilbert space, with the $`N`$ distinct objects as its orthonormal basis vectors. The quantum query can be applied not only to the basis vectors, but also to all their possible superpositions (i.e. to any state in the Hilbert space). Let $`|b`$ be the desired state and $`|s`$ be the symmetric superposition state. $$|b=(0\mathrm{}010\mathrm{}0)^T,|s=(1/\sqrt{N})(1\mathrm{}1)^T.$$ (1) Let $`U_b=1-2|bb|`$ and $`U_s=1-2|ss|`$ be the reflection operators corresponding to these states. The operator $`U_b`$ distinguishes between the desired state and the rest. It flips the sign of the amplitude in the desired state, and is the query or the quantum oracle. The operator $`U_s`$ treats all objects on an equal footing. It implements the reflection about the average operation. Grover’s algorithm starts with the input state $`|s`$, and at each step applies the combination $`U_sU_b`$ to it. Each step just rotates the state vector by a fixed angle (determined by $`|b|s|=1/\sqrt{N}`$) in the plane formed by $`|b`$ and $`|s`$. $`Q`$ applications of $`U_sU_b`$ rotate the state vector all the way to $`|b`$, at which stage the desired state is located and the algorithm is terminated. $$(U_sU_b)^Q|s=|b.$$ (2) This relation is readily solved, since the state vector rotates at a constant rate, giving $$(2Q+1)\mathrm{sin}^{-1}(1/\sqrt{N})=\pi /2.$$ (3) Over the last few years, this algorithm has been studied in detail. I just summarise some of the important features: * For a given $`N`$, the solution for $`Q`$ satisfying Eq.(3) may not be an integer. This means that the algorithm will have to stop without the final state being exactly $`|b`$ on the r.h.s. of Eq.(2). 
There will remain a small admixture of other states in the output, implying an error in the search process. The size of this admixture is determined by how close one can get to $`\pi /2`$ on the r.h.s. of Eq.(3). Apart from this, the algorithm is fully deterministic. * The algorithm is known to be optimal, going from $`|s`$ to $`|b`$ along a geodesic. No other algorithm, classical or quantum, can locate the desired object in an unsorted database with fewer queries. * The iterative steps of the algorithm can be viewed as the discretised evolution of the state vector in the Hilbert space, governed by a Hamiltonian containing two terms, $`|bb|`$ and $`|ss|`$. The former represents a potential energy attracting the state towards $`|b`$, while the latter represents a kinetic energy diffusing the state throughout the Hilbert space. The alternation between $`U_b`$ and $`U_s`$ in the discretised steps is reminiscent of Trotter’s formula used in construction of the transfer matrix from a discretised Feynman’s path integral. * Asymptotically, $`Q=\pi \sqrt{N}/4`$. The best that the classical algorithms can do is to random walk through all the possibilities, and that produces $`Q=O(N)`$ as mentioned above. With the use of superposition of all possibilities at the start, the quantum algorithm performs a directed walk to the final result and achieves the square-root speed-up. * The result in Eq.(3) depends only on $`|b|s|`$; the phases of various components of $`|s`$ can be arbitrary, i.e. they can have the symmetry of bosons, fermions or even anyons. To come back to the genetic code, let us look at two of the solutions of Eq.(3) for small $`Q`$. The only exact integral solution is $`Q=1`$, $`N=4`$. Base-pairing during DNA replication can be looked upon as a yes/no query (either the pairing takes place through molecular bond formation or it does not), and its task is to distinguish between 4 possibilities. The other interesting solution is $`Q=3`$, $`N=20.2`$. 
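These two solutions can be verified directly with a small numerical sketch (mine, not from the paper) of the rotation in Eqs. (1)-(3): for $`N=4`$ a single application of $`U_sU_b`$ carries $`|s`$ exactly onto $`|b`$, and inverting Eq. (3) for $`Q=3`$ gives $`N=1/\mathrm{sin}^2(\pi /14)\approx 20.2`$.

```python
import numpy as np

def grover_success_probability(N, Q):
    """Probability of observing the desired state after Q steps of U_s U_b on |s>."""
    b = np.zeros(N); b[0] = 1.0                 # desired state |b>
    s = np.full(N, 1.0 / np.sqrt(N))            # symmetric superposition |s>
    U_b = np.eye(N) - 2.0 * np.outer(b, b)      # oracle: flips the sign of |b>
    U_s = np.eye(N) - 2.0 * np.outer(s, s)      # reflection about the average
    state = s
    for _ in range(Q):
        state = U_s @ U_b @ state
    return float(state[0] ** 2)

p_41 = grover_success_probability(4, 1)      # exactly 1 for the N=4, Q=1 solution
N_for_Q3 = 1.0 / np.sin(np.pi / 14) ** 2     # Eq. (3) inverted for Q=3: about 20.2
```

The overall sign picked up by the state (a reflection, not a rotation matrix detail) does not affect the observation probability, which is all the assembly process would register.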
The well-known triplet code of DNA has 3 consecutive nucleotide bases carrying 21 signals, 20 for the amino acids plus a STOP. 3 base-pairings between t-RNA and m-RNA transfer this code to the amino acid chain. These solutions are highly provocative. This is the first time they have come out of an algorithm that performs the actual task accomplished by DNA. It is fascinating that they are the optimal solutions. Indeed, it is imperative to investigate whether DNA has the quantum hardware necessary to implement the quantum search algorithm. ## III Molecular Biology and the Structure of DNA Over the last fifty years, molecular biologists have learnt a lot about the structure and function of DNA by careful experiments (they have also been rewarded with many Nobel prizes). Let us quickly go through some of the facts they have unravelled. * DNA has the structure of a double helix. It can be schematically represented as a ladder, as in Fig.1. The sides of the ladder have a periodic structure with alternating sugar and phosphate groups. The nucleotide base pairs form the rungs of the ladder, and the genetic information is encoded in the order of these base pairs. * DNA contains 4 nucleotide bases—A, T, C, G—which are closely related to each other in chemical structure. The bases are always paired as A-T and C-G along the rungs of the ladder by Hydrogen bonds. This base-pairing makes the two DNA strands complementary in character. * The sugar and phosphate groups along the side of the ladder are held together by covalent bonds. Their bonding is completely insensitive to the bases attached to them, and takes place in the presence of DNA polymerase enzymes. * During replication, the helicase enzyme separates the two strands of DNA by breaking the Hydrogen bonds, much like opening a zipper. The unpaired bases along each of the separated strands find their partners from the surrounding molecules, producing two copies of the original DNA. 
* The sides of the ladder have asymmetric ends. The replication process is directed, always proceeding from the $`5^{}`$ end to the $`3^{}`$ end (these numbers label the position of the carbon atoms in the sugar rings) of the strand being constructed. The DNA polymerase enzyme slides along the intact strand, adding one base at a time to the growing strand. During this process, base-pairing and sugar-phosphate bonding alternate. * RNA molecules carry the nucleotide bases—A, U, C, G—with U very similar in chemical structure to T. A-U pairing is as strong as A-T pairing. Messenger RNA (m-RNA) has a single strand structure. In the first step of protein synthesis, the RNA polymerase enzyme separates the paired DNA strands and constructs an m-RNA strand on the DNA template by base-pairing (the process is the same as in DNA replication). The m-RNA strand grows as the RNA polymerase enzyme slides along the DNA from the promoter to the terminator base sequence. Finally the RNA polymerase enzyme detaches itself from the DNA, the fully constructed m-RNA strand floats away to the ribosomes in the cytoplasm of the cell, and the separated DNA strands pair up again. * Transfer RNA (t-RNA) molecules have 3 RNA bases at one end and an amino acid at the other, with a many-to-one mapping between the two. Inside cellular structures called ribosomes, 3 t-RNA bases line up against the matching bases of m-RNA, aligning the amino acids at the other end. The aligned amino acids then split off from the t-RNA molecules and bind themselves into a chain. The process again proceeds monotonically from the $`5^{}`$ end to the $`3^{}`$ end of the m-RNA. After the amino acids split off, the remnant t-RNA molecules are recycled. This completes the transfer of the genetic code from DNA to proteins. * Enzymes play a crucial role in many of the above steps. 
In addition to facilitating various processes by their catalytic action, they store energy needed for various processes, ensure that DNA keeps out U and RNA keeps out T, and perform error correction by their $`3^{}5^{}`$ exonuclease action (i.e. reversing the assembly process to remove a mismatched base pair). * Hereditary DNA is accurately assembled, with an error rate of $`10^{-7}`$ per base pair, after the proof-reading exonuclease action. Proteins are assembled less accurately, with an error rate of $`10^{-4}`$ per amino acid. Let us also recollect some useful facts about various chemical bonds. * Ionic bonds are strong, can form at any angle, and can be explained in terms of electrostatic forces. Ions often separate in solutions. * Covalent bonds are strong, form at specific angles, and can be explained in terms of Coulomb forces between electrons and nuclei and the exclusion principle. * Van der Waals bonds are weak, not very angle dependent, and explainable in terms of interactions between virtual electric dipoles. They play an important role in transitions between solid, liquid and gas phases, as well as in folding and linking of polymers. * Hydrogen bonds are weak, highly angle dependent, and explainable in terms of a proton ($`H^+`$) tunneling between two attractive energy minima. The situation is a genuine illustration of a particle in a double well potential, e.g. $`OH\mathrm{}:NO^{}:\mathrm{}HN^+`$. High sensitivity of the tunneling amplitude to the shape of the energy barrier makes Hydrogen bonds extremely sensitive to the distances and angles involved. They are the most quantum of all bonds; water is a well-known example. * Delocalisation of electrons and protons over distances of the order of a few angstroms greatly helps in molecular bond formation. It is important to note that these distances are much bigger than the Compton wavelengths of the particles, yet delocalisation is common and maintains quantum coherence. 
In case of electrons, the phenomena are called resonance and hybridisation, e.g. the benzene ring. In case of protons, the different configurations are called tautomers, e.g. amino $`\rightleftharpoons `$ imino and keto $`\rightleftharpoons `$ enol fluctuations of the nucleotide bases. With all this information, the quantum search algorithmic requirements from the DNA structure are clear. It is convenient to take the distinct nucleotide bases as the quantum basis states in the Hilbert space. Then (1) The quantum query transformation $`U_b`$ must be found in the base-pairing with Hydrogen bonds. (2) The symmetric transformation $`U_s`$ must be found in the base-independent processes occurring along the sides of the ladder. (3) An environment with good quantum coherence must exist. Thermal noise is inevitable at $`T\approx 300^{}K`$ inside the cells, so the transformations must be stable against such fluctuations. Figuratively, the best that can be achieved is $$\mathrm{Actual}\mathrm{evolution}=\genfrac{}{}{0pt}{}{lim}{\mathrm{decoherence}\to 0}[\mathrm{Quantum}\mathrm{evolution}].$$ (4) Thus we need quantum features that smoothly cross over to the classical regime, i.e. features that are reasonably stable against small decoherent fluctuations. Examples are: (a) geometric and topological phases, and (b) projection/measurement operators. ## IV Base-pairing as the Quantum Oracle During DNA replication, the intact strand of DNA acts as a template on which the growing strand is assembled. At each step, the base on the intact strand decides which one of the four possible bases can pair with it. This is exactly the yes/no query (also called the oracle) used in the database search algorithm. To connect this oracle to the quantum transformation $`U_b`$, we have to look at how molecular bond formations transform a quantum state. The generic quantum evolution operator is $`\mathrm{exp}(-iHt)`$, with $`H`$ being the total Hamiltonian of the system. 
Global conservation of energy means that the overall phase $`\mathrm{exp}(-iEt)`$ will completely factor out of the evolution and will not affect the final probabilities. What matters is only the relative phase between pairing and non-pairing bases. During the pairing process, the bases come together in an initial scattering state, discover that there is a lower energy binding state available, and decay to that state releasing the extra energy as a quantum. The interaction Hamiltonian for the bond formation process can be represented as $$H_{int}\propto (a^{\dagger }b+b^{\dagger }a),$$ (5) where $`a,a^{\dagger }`$ are the transition operators between the excited and ground states of the reactants, and $`b,b^{\dagger }`$ are the transition operators between zero and one quantum states of energy released. Both the terms in Eq.(5) are necessary for the Hamiltonian to be Hermitian. The phase change $`\phi `$ during the bond formation satisfies $$\mathrm{exp}(-iH_{int}t_b)|e\rangle |0\rangle =\phi |g\rangle |1\rangle .$$ (6) With only two states involved, Eq.(6) is easily solved by diagonalising $`H_{int}`$. In the two dimensional space, let $$H_{int}=\mathrm{\Delta }E_H\left(\begin{array}{cc}0& 1\\ 1& 0\end{array}\right),\mathrm{eigenvectors}:\frac{1}{\sqrt{2}}\left(\begin{array}{c}1\\ \pm 1\end{array}\right),$$ (7) and eigenvalues $`\pm \mathrm{\Delta }E_H`$. Eq.(6) then reduces to $$\mathrm{exp}(-i\mathrm{\Delta }E_Ht_b)=\phi =-\mathrm{exp}(i\mathrm{\Delta }E_Ht_b),$$ (8) with the solution $`\phi =\sqrt{-1}`$. This is the geometric phase well-known in quantum optics. A complete Rabi cycle in a two-level system gives a phase change of $`-1`$, and the transition process corresponds to half the cycle. The phase $`\phi `$ does not depend on specific values of $`\mathrm{\Delta }E_H`$ or $`t_b`$, but only on the fact that the transition takes place. Quantum mechanics does not specify how $`\phi `$ will be divided between the bound state and the released energy quantum. In quantum optics, this break-up is determined by the phase of the laser. 
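The half-Rabi-cycle solution of Eqs.(6)-(8) can be checked directly; below is a minimal numerical sketch, in units with ℏ = 1 and an arbitrary value for ΔE_H (the phase depends on neither):

```python
import numpy as np

# Two-level interaction Hamiltonian H_int = dE * sigma_x, as in Eq. (7).
dE = 1.0                      # stand-in value for Delta E_H (phase is independent of it)
sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]])
H = dE * sigma_x

# Evolve the initial state |e>|0> = (1, 0)^T for half a Rabi cycle, t_b = pi/(2 dE).
t_b = np.pi / (2.0 * dE)
eigvals, eigvecs = np.linalg.eigh(H)
U = eigvecs @ np.diag(np.exp(-1j * eigvals * t_b)) @ eigvecs.conj().T

final = U @ np.array([1.0, 0.0])
# All amplitude has moved to |g>|1>, with geometric phase phi = -i,
# a square root of -1, matching phi = sqrt(-1) in the text.
print(abs(final[0]), final[1])
```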
Here I assume that the process of decoherence is such that the energy quantum does not carry away any phase information. At this stage, we discover a pleasant surprise that the base-pairing takes place not with a single Hydrogen bond but with multiple Hydrogen bonds (two for A-T and A-U, and three for C-G), as shown in Fig.2. Multiple Hydrogen bonds are necessary for the mechanical stability of the helix. But they are also of different length, making it likely that they form asynchronously. Assuming a two-step deexcitation process during base-pairing, the geometric phase change becomes $`\phi ^2=-1`$, just what is needed to implement $`U_b`$. The energy of a single Hydrogen bond is $`\mathrm{\Delta }E_H\approx 7kT`$, giving $`\mathrm{exp}(-\mathrm{\Delta }E_H/kT)\approx 10^{-3}`$. This roughly explains the observed error rate in DNA replication. The tunneling amplitude for bond formation is related to both $`\mathrm{\Delta }E_H`$ and $`t_b`$, which determines the time scale of the base-pairing, $$\mathrm{\Delta }E_Ht_b\approx \hbar \Rightarrow t_b\approx 4\times 10^{-15}\mathrm{sec}.$$ (9) ## V A Quantum Search Scenario The next step is to look for the transformation $`U_s`$ in the processes occurring along the sides of the DNA-ladder. During these processes, quantum evolution produces various phases as molecular bonds get formed and broken. But these bonds treat all the nucleotide bases in the same manner, so the phases completely factor out and have no effect on the final probabilities. Thus I leave the phases out. Suppose that $`|s\rangle `$ is the equilibrium state of the physical system, favoured by the processes that occur along the sides of the DNA-ladder. This means that any other initial state will gradually relax towards $`|s\rangle `$, with the damping provided by the environment. Let $`t_r`$ be the time scale for this relaxation process. Now $`|s\rangle `$ is a superposition state of nucleotide bases, and it can be created only if the cellular environment provides transition matrix elements between its various components. 
(In free space, transition matrix elements between nucleotide bases of different chemical composition vanish.) The magnitude of these transition matrix elements decides how quickly $`|s\rangle `$ cycles through its various components. Let $`t_{osc}`$ be the time scale for these oscillations. Now let us look at the DNA replication process, when the above defined time scales satisfy the hierarchy $$t_b<<t_{osc}<<t_r.$$ (10) 1. In the initial stage, the nucleotide bases floating around randomly come into contact with the growing DNA strand, and relax to the state $`|s\rangle =(1/\sqrt{N})\sum _i|i\rangle `$. 2. When the nucleotide base finds its proper orientation, Hydrogen bond formation suddenly takes place, changing the state to $`U_b|s\rangle `$. This state is entangled between the nucleotide bases and the energy quanta, $`U_b|s\rangle =(1/\sqrt{N})[\sum _{i\ne b}|i\rangle |0\rangle -|b\rangle |2\rangle ]`$. 3. After this sudden change, the relaxation process again tries to bring the system back to the state $`|s\rangle `$. With the time scales obeying Eq.(10), this relaxation occurs as damped oscillations, much like what happens when one gives a sudden jerk to a damped pendulum. 4. The opposite end of the damped oscillation is $`(2|s\rangle \langle s|-1)U_b|s\rangle =U_sU_b|s\rangle `$. When the system evolves to this opposite end, it discovers that it is no longer entangled between the nucleotide bases and the energy quanta, $`U_sU_b|s\rangle |0\rangle =|b\rangle |2\rangle `$ for $`N=4`$. At this point, the energy quanta are free to wander off with minimal disturbance to the quantum coherence. The departure of the energy quanta confirms the base-pairing, providing a projective measurement of the system. (I take the measurement time scale to be much smaller than $`t_{osc}`$.) 5. The energy quanta that have wandered off are unlikely to return, making the process irreversible. The replication then continues to add the next base onto the growing strand. These steps are schematically shown in Fig.3. They provide a highly tuned yet robust algorithm. 
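The numbered steps above can be sketched for N = 4 with the standard search-algorithm reflections (the base-independent phases are factored out and the energy quanta are omitted for brevity; the index of the correct base is an arbitrary choice):

```python
import numpy as np

N = 4                                    # four nucleotide bases
b = 2                                    # index of the pairing base (arbitrary choice)

s = np.full(N, 1.0 / np.sqrt(N))         # uniform initial state |s>
U_b = np.eye(N); U_b[b, b] = -1.0        # oracle: flip the sign of the pairing base
U_s = 2.0 * np.outer(s, s) - np.eye(N)   # reflection about |s>

final = U_s @ (U_b @ s)
print(np.round(final, 6))                # all amplitude ends up on |b> for N = 4
```

A single oracle query followed by one reflection concentrates the full amplitude on the correct base, which is the special property of N = 4 that the text exploits.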
There are no fine-tuned parameters; the hierarchy of Eq.(10) has enough room to take care of substantial variation in individual time scales. Life couldn’t be simpler! To illustrate the possibility that processes with time scales obeying the hierarchy of Eq.(10) do physically occur, let us look at a quantum example. Consider the processes involved in the functioning of the $`NH_3`$ maser. $`NH_3`$ is a molecule with two equivalent configurations, corresponding to the Nitrogen atom being above or below the triangle of Hydrogen atoms. These two configurations can be distinguished by the direction of their electric dipole moments. Quantum tunneling of the Nitrogen atom through the triangle of Hydrogen atoms mixes these two configurations, and the ground state is the symmetric superposition of the two. In $`NH_3`$ gas, any initial state decays towards this equilibrium ground state (molecular collisions help in this relaxation). If an electric field is applied to the gas for a short duration, it favours one of the two configurations and kicks the molecules out of their equilibrium state. After the removal of the electric field, the kicked molecules oscillate from one configuration to the other, till the oscillations are damped out by the decay process. Shining the molecules with a radio-frequency pulse resonant with $`t_{osc}`$ removes the extra energy quickly by stimulated emission and produces a coherent maser. The relaxation time scale $`t_r`$ depends on temperature as well as on various molecular concentrations, and governs the overall replication rate. Under normal circumstances, DNA replication is observed to occur at the rate of 1000 base-pairings/sec, constraining $`t_r`$ to be smaller than $`O(10^{-3})`$ sec. Without any knowledge of the transition matrix elements, I do not have any estimate of $`t_{osc}`$. 
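The damped-pendulum picture invoked in steps 3 and 4 can be made quantitative with a toy underdamped oscillator, where γ ≪ ω plays the role of $`t_{osc}t_r`$ (all values below are hypothetical):

```python
import math

# Underdamped-oscillator analogue: omega ~ 1/t_osc, gamma ~ 1/t_r,
# with gamma << omega standing in for t_osc << t_r.
omega, gamma = 1.0, 0.01

def x(t):
    # displacement after a sudden unit "jerk": x(0) = 1, x'(0) = 0,
    # exact solution of x'' + 2*gamma*x' + omega^2 * x = 0
    w = math.sqrt(omega ** 2 - gamma ** 2)
    return math.exp(-gamma * t) * (math.cos(w * t) + (gamma / w) * math.sin(w * t))

half_period = math.pi / math.sqrt(omega ** 2 - gamma ** 2)
print(x(half_period))   # close to -1: the opposite extreme is almost fully reached
```

With the hierarchy satisfied, the kicked system arrives at the opposite extreme essentially undiminished, which is what allows the disentangling reflection to act before the relaxation washes the state out.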
Strictly speaking, I should not talk about quantum states in processes that involve damping; the proper mathematical formulation must be in the language of density matrices. But when the damping is small, as is the case here, it is easier to talk about states. The steps above can be easily transcribed in the language of density matrices, without changing their outcomes. Now we can proceed to the remaining pieces needed to complete the scenario: a mechanism that favours the state $`|s\rangle `$ as the equilibrium state, and an environment that permits an almost coherent quantum evolution. For that I have to appeal to the ingredients ignored so far—the enzymes. ## VI The Role of the Enzymes Enzymes play a very important role in many biochemical reactions, and some of the things they do were mentioned in section III. The rates of various biochemical reactions, when estimated with the standard thermodynamical analysis (probability distributions, diffusion processes, kinetic theory, etc.), fall far short of the observed rates, often by orders of magnitude. One is left with no choice but to admit that these reactions are catalysed. Enzymes are the objects which catalyse biochemical reactions. They are large complicated molecules, much larger than the reactants they help, made of several peptide chains. Their shapes play an important part in catalysis, and often they completely surround the reaction region. They do not bind to either the reactants or the products, just help them along the way. The standard explanation is that enzymes lower reaction barriers. Just how this lowering of reaction barriers occurs is not clearly understood, and is an active area of research (for example, enzymes can form weak bonds with the transition state or suck out solvent molecules from in between the reactants). Ultimately it must be explained in terms of some underlying physical laws. I put forward two specific hypotheses about what enzymes accomplish in the processing of genetic information. 
* Enzymes provide a shielded environment where quantum coherence of the reactants is maintained. This is a rather passive task, consistent with the properties of enzymes mentioned above, and it is also plausible. For instance, diamagnetic electrons do an extraordinarily good job of shielding the nuclear spins from the environment—the coherence time observed in NMR is $`O(10)`$ sec, much longer than the thermal environment relaxation time ($`\hbar /kT\approx O(10^{-14})`$ sec) and the molecular collision time ($`O(10^{-11})`$ sec), and still neighbouring nuclear spins couple through the electron cloud. A few orders of magnitude increase in coherence time is sufficient in many reactions for faster quantum algorithms to take over from their classical counterparts, and provide the catalytic speed-up. * Enzymes are able to create superposed states of chemically distinct molecules. This is an active task. Various nucleotide bases differ from each other only in terms of small chemical groups, containing less than 10 atoms, at their Hydrogen bonding end. To convert one base into another, enzymes have to be repositories of these chemical groups which differentiate between various nucleotide bases. Enzymes are known to do cut-and-paste jobs with such chemical groups (e.g. one of the simplest substitution processes is methylation, replacing $`H`$ by $`CH_3`$, which converts U to T). Given such transition matrix elements, quantum dynamics automatically produces a superposition state as the lowest energy equilibrium state. (Note that the cut-and-paste job in a classical environment would produce a mixture, but in a quantum environment it produces superposition.) It is mandatory that the enzymes do the cut-and-paste job only on the growing strand and not on the intact strand. Perhaps this is ensured by other molecular bonds. These hypotheses are powerful enough to explain many observed properties of enzymes from a new perspective. 
For example, * It is obvious why DNA replication always takes place in the presence of enzymes. If base-pairing were to occur by chance collisions, it would occur anywhere along the exposed unpaired strand. * Since enzymes control the transition matrix elements, they can keep U out of DNA and T out of RNA. * Quantum processes can tunnel through the reaction barriers, instead of climbing over them. * As long as quantum coherence is maintained, the replication process is reversible. This can easily explain the error-correcting exonuclease action of the polymerase enzymes. More importantly, these hypotheses are experimentally testable, provided one can observe the replication process at its intermediate stages. That is within the grasp of modern techniques such as X-ray diffraction analysis, electron microscopy, NMR spectroscopy, radioactive tagging of atoms in chemical reactions, and femtosecond photography. ## VII Summary and Future I have proposed a quantum algorithmic mechanism for DNA replication and protein synthesis. This genetic information processing takes place at the molecular level, where quantum physics is indeed the dominant dynamics (classical physics effects appear as decoherence and are subdominant). It is reasonable to expect that if there was something to be gained from quantum computation, life would have taken advantage of that at this physical scale . For DNA replication, the quantum search algorithm provides a factor of two speed-up over classical search, but it is still an advantage. Quantum algorithms can provide a bigger advantage for more complicated processes involving many steps. Implementation of quantum database search does not require a general purpose quantum computer. A system that can implement the quantum oracle and quantum state reflections is sufficient. 
I have described the physical analogues of these operations in genetic information processing, and it is not unusual for living organisms to find the correct ingredients without bothering about generalities. In fact various pieces of the scenario have fitted together so nicely (database search paradigm, optimal $`(Q,N)`$ values—$`(1,4)`$ and $`(3,20.2)`$, quantum transformations implementing $`U_b`$ and $`U_s`$), that with the courage of conviction, I have made bold hypotheses to fill the remaining gaps. The role that enzymes play, as described in section VI, is both plausible and experimentally verifiable. The other assumptions I have made (i.e. two-step cascade deexcitation in base-pairing, energy quanta not carrying away any phase information, energy quanta departing only when they cause minimal decoherence to the system) concern how decoherence modifies pure quantum evolution, and are also experimentally testable. Such experimental tests would decide the future of this proposal. It is clear that if experiments verify the quantum scenario for genetic information processing presented here, there will be a significant overhaul of conventional molecular biology. There is nothing inappropriate in that—the subject of molecular biology was born out of quantum physics, and quantum physics has not yet had its last word on it. I also want to acknowledge that molecular biology is a far more complicated subject than the simplest features I have explored in this work. If quantum physics provides better explanations for other more complex phenomena of life, it would be a wonderful development. ## Appendix: Some Questions and Answers I am grateful to the audience for their many comments and questions. I summarise below some of the important concepts they brought out. Q: Are there any examples of genetic codes which use other values of $`Q`$, say $`Q=2`$? A: I do not know of any such examples. 
Maybe some day we shall become clever enough to synthesise such instances in the laboratory. Q: Parallel processing can also speed up algorithms. Why is that not used in the case of the genetic code? A: In case of DNA replication, enzymes separate only a limited region of paired strands as they slide along replicating the information. Complete separation of the strands would break too many Hydrogen bonds and cost too much in energy. Thus the replication remains a local process. In case of protein synthesis, random pairing of t-RNA and m-RNA at several different locations would lead to a lot of mismatches and errors, since the triplet code needs precise starting and ending points. Parallel processing does take place though: several ribosomes work simultaneously on a single m-RNA strand, each one traversing the full length of m-RNA from one end to the other and constructing identical amino acid chains. Q: Three nucleotide bases can form 64 distinct codes. Why do you still say that 20 is the optimal number for the triplet code? A: When the DNA assembly takes place, with each base as a separate unit, the number of possibilities explored is indeed $`4^Q`$. But this is not what happens in the matching between t-RNA and m-RNA. The three bases come as a single group, without any possibility of their rearrangement. The whole group has to be accepted or rejected as a single entity. In such a situation, the number of objects that can be distinguished is smaller, as given by Eq.(3). Q: Do you have any understanding of the degeneracy of the triplet code? A: With only 21 signals embedded amongst 64 possibilities, the amino acid code is indeed degenerate. Moreover, all the amino acids are not present in living organisms with equal frequency, quite unlike the nucleotide bases. Ribosomes are also much bigger and more complicated molecules than the polymerase enzymes, and so capable of carrying out more complex tasks. 
All this makes protein synthesis a more difficult process to study than DNA replication. I have not analysed it in detail, and I am unable to provide any understanding of the degeneracy of the triplet code. Q: The energy quanta do not have to be released exactly at the opposite end of the oscillation. Oscillations spend more time near their extrema anyway, and so even random emission will give high probability for correct base-pairing. A: I agree. But a smaller success probability means that more trials will be necessary to achieve the correct base-pairing. That would diminish the advantage of the quantum algorithm. Q: Enzymes are also proteins which have to be synthesised by the DNA. How are they synthesised in the first place? A: Classical algorithms can do everything that quantum algorithms do, albeit slower. In the absence of enzymes, various steps will take place only by random chance, and so the start-up will be slow. But once processes get going, as in a living cell, enzymes will be manufactured along the way and there will be no turning back to slower algorithms. Q: If polymerase enzymes have to keep on supplying various chemical groups to nucleotide bases, they would run out of their stock at some stage. What happens then? A: In my proposal, the polymerase enzymes substitute one chemical group for another. With the DNA base sequence being random, the chemical groups are recycled with high probability. So with a reasonable initial stock an enzyme can perform its task for a long time. Of course if an enzyme runs out of its stock, it has no choice but to quit and replenish its stock. Q: How essential is the environment of a living cell in DNA replication? Are there any other molecules besides enzymes involved? A: DNA replication can take place without cellular environment. With proper enzymes in the solution, the polymerase chain reaction can rapidly multiply DNA. This is used in DNA fingerprinting from dead cells. 
Q: Many inorganic reactions can be speeded up by appropriate catalysts. Are they quantum reactions too? A: In my view, catalysts convert random walk processes into directed walks, providing a $`\sqrt{N}`$ speed-up in the number of steps. Quantum superposition is one way to achieve this, but it does not have to be the only way. There may be even classical mechanisms which can do the same job. Q: Living organisms are known to emit radiation, which is coherent and not thermal (black-body). Is there any connection between these biophotons and your proposal? A: This is the first time I have heard about biophotons. If they are related to some quantum processes going on inside living cells, that would be great. Q: Are there any applications of your proposal? A: Understanding the basic processes of life will always lead to new applications. For example, molecular biologists have been working on accelerating synthesis of desired proteins and inhibiting growth of cancer and harmful viruses. I am not inclined to speculate more on this issue right now.
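As a numerical footnote to the optimal $`(Q,N)`$ pairs discussed above, both values follow from the standard search-algorithm counting condition that $`Q`$ queries rotate the uniform state exactly onto the target, $`(2Q+1)\mathrm{arcsin}(1/\sqrt{N})=\pi /2`$; a sketch:

```python
import math

def n_of_q(Q):
    # Solve (2Q+1) * arcsin(1/sqrt(N)) = pi/2 for N,
    # the condition for Q search queries to succeed with certainty.
    return 1.0 / math.sin(math.pi / (2.0 * (2 * Q + 1))) ** 2

print(n_of_q(1))   # 4.0   (one query distinguishes 4 bases)
print(n_of_q(3))   # ~20.2 (three queries distinguish ~20 amino acids)
```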
# First results of the air shower experiment KASCADE ## 1 INTRODUCTION The air shower experiment KASCADE aims at the investigation of the knee region of the charged cosmic rays. It is built up as a multidetector setup for the simultaneous measurement of a large number of observables of the different particle components (electromagnetic, muonic and hadronic) of extended air showers (EAS). This enables a multivariate, multiparameter analysis of the registered EAS on an event-by-event basis, to account for the nonparametric, stochastic processes of the EAS development in the atmosphere. In parallel the KASCADE collaboration tries to improve the tools for the Monte Carlo simulations with the relevant physics. The code CORSIKA not only allows a detailed three-dimensional simulation of the shower development in all particle components (including neutrinos) down to the observation level, but also has several high-energy interaction models implemented. As the basic physics of these models cannot be tested at present-day accelerators in the relevant energy region and in the extreme forward direction, the test of these models emerged as one of the goals of the KASCADE experiment. The following overview is based on results presented at the 26<sup>th</sup> International Cosmic Ray Conference in Salt Lake City, Utah, 1999. ## 2 THE KASCADE EXPERIMENT The KASCADE array consists of 252 detector stations on a $`200\times 200`$ m<sup>2</sup> rectangular grid, containing unshielded liquid scintillation detectors ($`e/\gamma `$-detectors) and, below 10 cm of lead and 4 cm of steel, plastic scintillators as muon-detectors. The total sensitive areas are $`490`$ m<sup>2</sup> for the $`e/\gamma `$- and $`622`$ m<sup>2</sup> for the muon-detectors. In the center of the array a hadron calorimeter ($`16\times 20`$ m<sup>2</sup>) is built up, consisting of more than 40,000 liquid ionisation chambers in 8 layers, with a trigger layer of 456 scintillation detectors in between. 
Below the calorimeter a setup of position sensitive multiwire proportional chambers (MWPC) in two layers measures high-energy muons ($`E_\mu >2`$ GeV) of the EAS. For each single shower a large number of observables is reconstructed with small uncertainties. For example, the errors for the so-called shower sizes, i.e. the total number of electrons $`N_e`$ and the number of muons at core distances of 40–200 m, $`N_\mu ^{tr}`$, are smaller than 10$`\%`$. The resulting frequency spectra of the sizes (including the spectra of the hadron number and of the muon densities at different core distances) show kinks at the same integral flux. This is a strong hint of an astrophysical origin of the knee phenomenon based on purely experimental data, since equal flux intensity corresponds to equal primary energy. But for the reconstruction of the primary energy spectrum and the chemical composition, detailed Monte Carlo simulations are indispensable, due to the unknown initial parameters and the large intrinsic fluctuations of the stochastic shower development in the atmosphere. The use of a larger number of less correlated observables in a multivariate analysis, in parallel with independent tests of the simulation models, attempts to resolve this dilemma. ## 3 ANALYSES AND RESULTS In the air shower simulation program CORSIKA several high-energy interaction models are embedded, including VENUS, QGSJET and SIBYLL (for references see ). The models are based on the Gribov-Regge theory and QCD, in accordance with accelerator data. Extrapolations are necessary for the EAS physics in the knee region, due to the high interaction energies and the extreme forward direction. To compare KASCADE data with Monte Carlo expectations, a detector simulation with GEANT is performed for each CORSIKA-simulated shower. One test is the comparison of simulated integral muon trigger and hadron rates with the measurements. 
This test is sensitive to the energy spectrum of the hadrons produced in the forward direction at primary energies around 10 TeV, where the chemical composition is roughly known (Fig.1). For higher primary energies the hadronic part of the interaction models is tested by comparisons of different hadronic observables in ranges of shower sizes. In general it is seen that the high-energy interaction models predict a too large number of hadrons at sea level compared with the measurements. Nonparametric multivariate methods like “neural networks” or analyses based on “Bayesian decision rules” are applied to the KASCADE data for the estimation of the energy and mass of the cosmic rays on an event-by-event basis. The necessary “a priori” information, in the form of probability density distributions, is obtained from detailed Monte Carlo simulations with large statistics. For the energy reconstruction the shower sizes $`N_e`$ and $`N_\mu ^{tr}`$ are used as parameters in a neural network analysis (Fig.2). A parametric approach to the same data leads to compatible results (Fig.2): here a simultaneous fit to the $`N_e`$ and $`N_\mu ^{tr}`$ size spectra is performed. The kernel function of this fit contains the size-energy correlations for two primary masses (proton and iron) obtained by Monte Carlo simulations. An analysis of the size ratio $`\mathrm{lg}(N_\mu ^{tr})/\mathrm{lg}(N_e)`$, calculated for each single event, leads to results on the elemental composition in different energy ranges (Fig.3). The measured distribution of these ratios is assumed to be a superposition of simulated distributions for the different primary masses. The large iron sampling calorimeter of KASCADE allows the hadronic part of the EAS to be investigated in terms of the chemical composition. 
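The superposition ansatz for the size-ratio distribution described above can be illustrated with a toy two-component fit; the Gaussian templates, binning and iron fraction below are invented for illustration only (the real analysis uses CORSIKA-simulated distributions):

```python
import numpy as np

bins = np.linspace(0.6, 1.1, 26)         # hypothetical lg(N_mu^tr)/lg(N_e) binning
centres = 0.5 * (bins[:-1] + bins[1:])

def template(mean, sigma):
    # stand-in for a simulated size-ratio distribution of one primary species
    t = np.exp(-0.5 * ((centres - mean) / sigma) ** 2)
    return t / t.sum()

proton, iron = template(0.74, 0.05), template(0.84, 0.05)   # illustrative shapes only

true_iron_frac = 0.3                      # assumed composition of the toy "data"
data = 1.0e4 * ((1 - true_iron_frac) * proton + true_iron_frac * iron)

# least-squares superposition fit for the relative abundances
A = np.column_stack([proton, iron])
w, *_ = np.linalg.lstsq(A, data, rcond=None)
print(w / w.sum())                        # recovers roughly (0.7, 0.3)
```

In practice the fit uses simulated templates for several primary masses and propagates the statistical uncertainties of both data and templates; the toy above only shows the linear-superposition principle.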
For six different hadronic observables (derived from the spatial and energy distributions of the hadrons) the deviations of the mean values from the expectations for pure proton and iron primaries are calculated in certain energy ranges. Besides the use of global parameters like the shower sizes, sets of different parameters are used for the neural network and Bayesian decision analyses. Examples of such observables are the number of reconstructed hadrons in the calorimeter, their reconstructed energy sum, the number of muons in the shower center, or parameters obtained by a fractal analysis of the hit pattern of muons and secondaries at the MWPC. The latter are sensitive to the structure of the shower core, which is mass sensitive due to the different shower development of light and heavy primaries in the atmosphere. In Figure 3 results of a Bayesian analysis and of a separate neural network analysis using the fractal parameters are shown. The tendency of the results of each described method is consistent with a heavier primary mass above the knee region, but the absolute scale depends strongly on the particle component from which the observables are constructed; the conclusion is that the balance of energy and particle numbers between the muonic, electromagnetic and hadronic parts of the EAS differs between the simulations and the real shower development. ## 4 CONCLUSIONS First results of the KASCADE experiment can be summarized by the following statements: All secondary particle components of the showers display a kink in their size spectra. This strongly supports an astrophysical origin of the “knee”, rather than effects of the interaction of the primaries in the atmosphere. The knee is sharper for the light primary component than for the heavy one. This result follows from the measurement of an increasing average mass of the primary cosmic rays above the observed kink, together with the energy-dependent mass classification of single air showers. 
But none of the high-energy interaction models currently in vogue is able to fit the data for all observables consistently.