# Scaling tests of the improved Kogut-Susskind quark action
## Abstract
Improved lattice actions for Kogut-Susskind quarks have been shown to improve rotational symmetry and flavor symmetry. In this work we find improved scaling behavior of the $`\rho `$ and nucleon masses expressed in units of a length scale obtained from the static quark potential, and better behavior of the Dirac operator in instanton backgrounds.
Because of their large computational requirements, full QCD lattice simulations profit greatly from the use of improved actions, which allow better physics to be extracted from simulations at moderate lattice spacings. The Kogut-Susskind formulation of lattice fermions is attractive for full QCD simulations because the thinning of the degrees of freedom reduces the computational effort and, more importantly, because the residual unbroken chiral symmetry prevents additive renormalization of the quark mass, and hence eliminates problems with exceptional configurations.
The lattice artifacts in the Kogut-Susskind formulation are order $`a^2`$, unlike the Wilson formulation, which has an order $`a`$ artifact that can be cancelled by the “clover” improvement. A third nearest neighbor coupling introduced by Naik cancels order $`a^2`$ violations of rotational symmetry in the free quark propagator, and this has been shown to lead to an improvement in the rotational symmetry of meson propagators. Another lattice artifact is the breaking of flavor symmetry, signalled by the fact that only one of the pions produced from the four flavors of quarks has an exactly vanishing mass at zero quark mass. This flavor symmetry breaking can be understood as a scattering of a quark from one corner of the Brillouin zone to another through the exchange of a gluon with momentum near $`\pi /a`$. Roughly speaking, the cure for this problem consists of introducing a form factor for the quark-gluon interaction by smearing out, or “fattening”, the gauge connection in the quark action. In our previous works we have investigated the effects of different fat link actions on the flavor symmetry violation seen in the pion mass spectrum (see also Ref. ). A theoretically attractive action is the “Asqtad” action studied in Ref. , which includes the Naik correction to improve rotational symmetry, fattening of the nearest neighbor coupling to cancel couplings to gluons with any momentum component equal to $`\pi /a`$, and a term introduced by Lepage which corrects order $`a^2`$ errors at low momentum introduced by the form factor. This “Asqtad” action cancels all tree level order $`a^2`$ errors.
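To see at tree level how the Naik term works, expand the free quark dispersion relation. The standard one-link covariant difference gives $`\mathrm{sin}(ap)`$ in place of $`ap`$, an order $`a^2`$ error; the Naik combination of one- and three-link terms with the standard coefficients $`9/8`$ and $`-1/24`$ removes it, since the $`(ap)^3`$ terms cancel ($`\frac{9}{8}\cdot \frac{1}{6}=\frac{1}{24}\cdot \frac{27}{6}=\frac{3}{16}`$). This is a textbook check, not a result taken from this paper:
$$\frac{9}{8}\mathrm{sin}(ap)-\frac{1}{24}\mathrm{sin}(3ap)=ap-\frac{3}{40}(ap)^5+O\left((ap)^7\right),$$
so the leading discretization error in the free dispersion relation is pushed from $`O(a^2)`$ to $`O(a^4)`$.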
Our previous works tested improvements in physical quantities that were directly related to the improvements in the action. That is, the Naik term is introduced to improve rotational symmetry in the quark propagator and was seen to improve rotational symmetry in the meson propagators. The fat link term was designed to reduce flavor symmetry breaking and was seen to reduce the flavor symmetry breaking in the pion mass spectrum. It is perhaps less obvious that other quantities will be improved, and such tests are the subject of this note. We have calculated hadron masses and the static quark potential at different lattice spacings, and we can compare the scaling of these quantities with results obtained with other actions. We have also investigated the physics of instantons on the lattice by computing eigenvalues of the improved and conventional Kogut-Susskind Dirac operators on smooth instanton configurations and on semi-realistic “noisy” configurations containing an instanton.
We test scaling by computing hadron masses using a length scale determined from the static quark potential. In particular, we will use a variant of the Sommer parameter defined by $`r_1^2F(r_1)=1.00`$, where the commonly used $`r_0`$ is defined by $`r_0^2F(r_0)=1.65`$. (The advantage of $`r_1`$ is that it is determined at a shorter length scale, where the potential can be determined more accurately, and so lattice spacings in simulations with different parameters can be matched more accurately using $`r_1`$.) For the quenched potential, this is related to $`r_0`$ and $`\sigma `$ by $`r_1/r_0=0.725`$ and $`r_1\sqrt{\sigma }=0.85`$. For quenched simulations with the single plaquette gauge action, we take the lattice spacing from the interpolating formula of Guagnelli, Sommer and Wittig:
$$\mathrm{log}(a/r_1)=-1.3589-1.7139(\beta -6)+0.8155(\beta -6)^2-0.6667(\beta -6)^3,$$ (1)
while for quenched simulations with the Symanzik improved gauge action we fit a similar formula to the string tension results of Collins et al. and our own results at $`\beta =10/g^2=8.0`$
$$\mathrm{log}(a/r_1)=-0.970-0.840(\beta -8)+0.413(\beta -8)^2+0.304(\beta -8)^3,$$ (2)
with an error of around 1%. We use the Goldstone pion mass to adjust the quark mass in the various simulations, interpolating to the point where $`m_\pi r_1=0.778`$. (This somewhat arbitrary value corresponds to $`am_q=0.02`$ at $`10/g^2=8.0`$.)
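For orientation, a minimal sketch in Python of how these scale-setting relations can be used. The polynomial coefficients are those of Eqs. (1) and (2) above; the Cornell-potential parametrization of the force and the bisection example are illustrative assumptions, not part of the paper's analysis:

```python
import math

def log_a_over_r1_wilson(beta):
    """Eq. (1): one-plaquette gauge action (Guagnelli, Sommer & Wittig fit)."""
    x = beta - 6.0
    return -1.3589 - 1.7139*x + 0.8155*x**2 - 0.6667*x**3

def log_a_over_r1_symanzik(beta):
    """Eq. (2): fit for the Symanzik improved gauge action (~1% error)."""
    x = beta - 8.0
    return -0.970 - 0.840*x + 0.413*x**2 + 0.304*x**3

def r1_cornell(alpha, sigma):
    """r_1 from r_1^2 F(r_1) = 1.0 for an (assumed) Cornell potential
    V(r) = -alpha/r + sigma*r, i.e. F(r) = alpha/r^2 + sigma."""
    return math.sqrt((1.0 - alpha) / sigma)

# Match a one-plaquette coupling to the Symanzik-action run at beta = 8.0
# by equating a/r_1 (bisection; the Wilson fit is monotone in this range).
target = log_a_over_r1_symanzik(8.0)
lo, hi = 5.5, 6.5
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if log_a_over_r1_wilson(mid) > target:
        lo = mid          # lattice still too coarse: raise beta
    else:
        hi = mid
print(f"equivalent one-plaquette beta ~ {0.5*(lo+hi):.2f}")
```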
We calculated the quenched hadron spectrum using the improved quark action and Symanzik improved gauge action at $`10/g^2=7.4`$ and $`7.75`$ using $`16^3\times 32`$ lattices archived in our earlier work, and on $`20^3`$ lattices at $`10/g^2=8.0`$ generated in our current project, which have a lattice spacing of $`a\approx 0.14`$ fm. In Fig. 1 we plot the vector meson mass in units of $`r_1`$ versus the squared lattice spacing for several combinations of gauge and quark actions. The bold squares are from our improved quark action with the Symanzik improved gauge action. The octagons are from a simulation with the conventional staggered action and the simple one-plaquette gauge action. The diamonds are the conventional staggered quark action with a Symanzik improved gauge action, and the crosses are the Symanzik improved gauge action with a quark action containing only the Naik improvement. We also show some results for tadpole improved clover Wilson quark actions: the plusses are from the SCRI collaboration, using the Symanzik improved gauge action, and the fancy squares from the UKQCD collaboration, using the one-plaquette gauge action. Fig. 2 is a similar plot of the nucleon masses in units of $`r_1`$. In both of these plots the “Asqtad” action shows scaling behavior that is dramatically better than the other actions tested.
Instantons play an important role in the Euclidean description of QCD. They provide the solution to the $`U(1)`$ problem , explain chiral symmetry breaking , and at least at a qualitative level reproduce the low mass hadron correlators . Thus it is important that the lattice actions we use to simulate QCD approximate the continuum behavior at finite lattice spacing. Here we study the topological aspects of the improved Kogut-Susskind actions and compare with the standard formulations. The effect of topology on the spectrum of the Dirac matrix has also been studied in and in . In , the microscopic spectral density was computed and it was shown that Kogut-Susskind fermions are insensitive to topology at coarse lattice spacings ($`a\approx 0.5`$ fm). In ref. it was shown that at a lattice spacing of about 0.07 fm there is a clear separation between topological and non-topological modes.
We first studied the behavior of the low eigenvalues of the Dirac matrix in the background of an instanton by computing eigenvalues and eigenvectors of $`\overline{)}D_e^2`$ using the Ritz functional technique . $`\overline{)}D_e^2`$ is the squared Dirac operator restricted to even lattice sites. In an instanton background $`\overline{)}D_e^2`$ should have two chiral eigenvectors with small eigenvalues, each corresponding to two eigenvectors of $`\overline{)}D`$. We computed the eight lowest eigenvalues of $`\overline{)}D^2`$ on smooth instantons. The instantons are put on the lattice by discretizing the continuum formula for the instanton gauge field $`A_\mu `$, and the resulting gauge configuration was smoothed by twenty APE smearing sweeps. We calculated eigenvectors on lattices ranging in size from $`4^4`$ to $`16^4`$ containing an instanton with radius $`\rho `$ equal to one fourth the lattice size, $`\rho =L/4`$. Since there is no QCD dynamics to define a length scale on these smoothed lattices, changing $`\rho /a`$ can equally well be considered to be varying the size of the instanton at fixed lattice spacing or varying the lattice spacing for a fixed instanton size. We computed the eight lowest eigenvalues of $`\overline{)}D^2`$ for the standard KS action, the Naik action, the “Fat7” action (all couplings to gluons with momentum $`\pi /a`$ set to zero), the “Fat7” action with the Naik term, and finally the Asqtad action (with $`u_0=1`$, since tadpole improvement has no meaning on smooth backgrounds). As expected, we found two small eigenvalues, which approach zero as the lattice spacing goes to zero. These two eigenvalues were degenerate to the accuracy of the computation. In Fig. 3 we plot the small eigenvalue $`\lambda `$ of $`\overline{)}D`$ for all the actions tested as a function of $`\rho /a`$. For all the actions, the small eigenvalues go to zero exponentially as $`a/\rho `$ decreases. Neither the Naik term nor the fattening of the link affects the eigenvalue significantly. (This is not surprising — there is no point in smoothing out the link when you are already working on a smooth background.) However, the Asqtad action, which differs from the Fat7 action in that it contains the small momentum correction introduced by Lepage, has smaller eigenvalues than the other actions. We also measured the chiralities $`\chi =\psi ^{\dagger }\gamma _5\psi `$ of the eigenvectors associated with the small eigenvalues. Although $`\chi `$ differs significantly from $`1`$, the deviation is proportional to $`a^2/\rho ^2`$.
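The paper uses the Ritz functional technique; any sparse Hermitian eigensolver recovers the same low-lying spectrum. A minimal sketch of the workflow, with a random positive matrix standing in for the actual $`\overline{)}D_e^2`$ built from the fattened links:

```python
import numpy as np
from scipy.sparse import random as sprandom
from scipy.sparse.linalg import eigsh

# Stand-in for the even-site squared Dirac operator: in a real code this
# would be D_e^dagger D_e assembled from the (fattened, Naik-improved)
# gauge links; here it is just a sparse positive symmetric test matrix.
n = 2000
A = sprandom(n, n, density=1e-3, random_state=7)
H = A + A.T                     # symmetrize
H = H.dot(H.T)                  # positive semi-definite, like D_e^2

# The eight lowest eigenvalues; in an instanton background the would-be
# zero modes would sit at the bottom of this list.
vals = eigsh(H, k=8, which='SA', return_eigenvectors=False)
print(np.sort(vals))
```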
To see if these small eigenvalues persist when the QCD dynamics is turned on, we “heated” the $`\rho /a=2`$ instanton on the $`8^4`$ lattice. This was done with short quenched molecular dynamics trajectories, using the tadpole improved one loop Symanzik gauge action with $`\beta =8.0`$. Since we want to introduce short distance structure without disturbing the long distance topological structure of the initial lattice, we used short trajectories — ten molecular dynamics steps of size $`0.02`$. We ran for $`20`$ such trajectories, saving the lattice at the end of each trajectory. The resulting sequence of $`20`$ lattices was discarded if any of the lattices had topological charge different from one. In total we produced 31 different heating runs in which the instanton survived. At the end of the $`20`$th trajectory the average plaquette was $`1.92`$, similar to the average $`1.86`$ of thermalized $`\beta =8.0`$ quenched lattices. For comparison, we did 24 runs starting from smooth lattices containing no instantons. In Fig. 4 we plot the 3/4 power of the averaged product of the eight smallest eigenvalues of $`\overline{)}D_e^2`$ for the $`Q=1`$ heating runs, divided by the same quantity from the $`Q=0`$ runs. For all of the quark actions, $`u_0`$ was kept at one during these heating runs. This quantity, “$`\mathrm{Det}_8^{3/4}`$”, is an approximation to the factor by which three flavors of massless dynamical quarks would suppress such instanton configurations in a full QCD simulation. The abscissa in Fig. 4 is $`3-\mathrm{Tr}P_{\mu \nu }`$, where $`P_{\mu \nu }`$ is the average plaquette; this measures the amount of disorder induced by the heating. The vertical line marks the average plaquette of a thermalized lattice. As the amount of disorder increases, the size of the small eigenvalues in the $`Q=1`$ runs increases, but the eigenvalues of the fat link $`\overline{)}D`$ rise less than those of the Naik and standard KS actions. Roughly speaking, Fig. 4 shows that at $`a\approx 0.14`$ fm three dynamical quarks with fat link actions suppress instantons by more than a factor of two relative to the conventional action. We have also looked at the chirality of the would-be zero modes. Again, as the disorder increases the deviation from $`1`$ also increases, but the improved action holds up better. After 20 heating trajectories with the $`\rho =2a`$ instanton, the chirality for the fat actions is about $`0.4`$ while for the standard KS action (and the Naik) it is $`0.2`$. Finally, the degeneracy of the two would-be zero modes of $`\overline{)}D^2`$ is lifted as disorder is added. The splitting of these modes is related to the flavor symmetry breaking and, as expected from our previous studies , is about a factor of three smaller for the fat actions than for the standard Kogut-Susskind action.
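Forming the suppression factor from measured spectra is straightforward; a sketch with synthetic eigenvalue arrays (the quantity is the paper's, the numbers are made up):

```python
import numpy as np

def det8_34(eigs_q1, eigs_q0):
    """'Det_8^{3/4}': averaged product of the eight smallest eigenvalues of
    D_e^2 over the Q=1 heating runs, divided by the same average over the
    Q=0 runs, raised to the 3/4 power -- an estimate of how strongly three
    flavors of massless dynamical quarks suppress instanton configurations."""
    p1 = np.mean([np.prod(np.sort(e)[:8]) for e in eigs_q1])
    p0 = np.mean([np.prod(np.sort(e)[:8]) for e in eigs_q0])
    return (p1 / p0) ** 0.75

# toy spectra: 31 Q=1 runs and 24 Q=0 runs, eight modes each
rng = np.random.default_rng(0)
q1 = rng.uniform(0.01, 0.10, size=(31, 8))  # would-be zero modes keep these small
q0 = rng.uniform(0.02, 0.12, size=(24, 8))
print(det8_34(q1, q0))                      # < 1 means instantons are suppressed
```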
Lastly, we looked at the eight low eigenvalues of $`\overline{)}D_e^2`$ for the Asqtad and standard Kogut-Susskind actions on 424 thermalized $`8^4`$ quenched $`\beta =8.0`$ lattices, which have a lattice spacing of about 0.14 fm. At this volume we have $`1`$ to $`2`$ topological objects of average size $`\rho =2a`$ to $`3a`$ per lattice, based on the instanton distribution measurements in . Typically, when the topological charge is non-zero we find small eigenvalues with a high absolute value of the chirality ($`0.2`$ to $`0.3`$) for the Asqtad action. In a chirality versus eigenvalue scatter plot the topological eigenvalues clearly separate from the non-topological ones. Similar but weaker separation is also observed for the standard Kogut-Susskind action. We have also checked the validity of the index theorem. For Kogut-Susskind fermions, which represent $`4`$ flavors, it takes the form $`4Q=N_{-}-N_{+}`$, where $`N_{-}`$ is the number of left-handed modes, $`N_{+}`$ the number of right-handed modes, and $`Q`$ the topological charge. On the lattice this relation does not hold exactly for staggered fermions, but the quantities $`4Q`$ and $`N_{-}-N_{+}`$ are strongly correlated. For the Asqtad action this correlation is $`90\%`$, while for the standard Kogut-Susskind action it is $`83\%`$. In both cases the topological charge was measured after $`200`$ APE smearings.
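As a sketch, the index-theorem check amounts to a per-configuration correlation; the arrays below are placeholders, not the measured data:

```python
import numpy as np

# Placeholder per-configuration measurements: topological charge Q from the
# APE-smeared gauge fields, and counts of left/right-handed small modes.
Q       = np.array([1, 0, -1, 1, 0, 2, -1, 0])
N_minus = np.array([4, 1,  0, 4, 1, 8,  0, 0])
N_plus  = np.array([0, 1,  4, 1, 1, 0,  4, 0])

r = np.corrcoef(4 * Q, N_minus - N_plus)[0, 1]
print(f"correlation(4Q, N- - N+) = {r:.2f}")
```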
The code used for generating smooth instanton configurations and for measuring the topological charge was written by Anna Hasenfratz and Tamas Kovacs. Computations were done on the T3E and PC cluster at NERSC, on the T3E and SP2 at SDSC, on the NT cluster and Origin 2000 at NCSA, and on the Origin 2000 at BU. This work was supported by the U.S. Department of Energy under contracts DOE – DE-FG02-91ER-40628, DOE – DE-FG03-95ER-40894, DOE – DE-FG02-91ER-40661, DOE – DE-FG05-96ER-40979 and DOE – DE-FG03-95ER-40906 and National Science Foundation grants NSF – PHY99-70701 and NSF – PHY97–22022.
# The CIDA-QUEST Large Scale Variability Survey in the Orion OB Association: initial results
## 1. Introduction
Crucial aspects of theories of star formation can only be tested by studying the stellar populations both in and near molecular clouds. While the earliest stages of stellar evolution must be probed with infrared and radio techniques, many important questions can only be studied with optical surveys of older populations with ages $`\sim `$ 3-20 Myr.
In the past, studies of OB associations have been used to investigate sequential star formation and triggering on large scales (e.g., Blaauw 1991 and references therein). However, because OB stars are formed essentially on the main sequence (e.g., Palla & Stahler 1992) and evolve off it in $`\sim 10`$ Myr, they are not useful for inferring star-forming histories on timescales of 1-3 Myr. Moreover, it is not possible to study cluster structure and dispersal or disk evolution without studying low-mass stars. Studies of individual clusters in the optical and IR have been made (cf. Lada 1992), but these are biased toward the highest-density regions, and cannot address older and/or widely dispersed populations.
Recent technological advances have now made it possible to carry out large-scale studies, building on the availability of cameras with multiple CCDs on telescopes with wide fields of view.
Figure 1 shows the Orion A and B clouds and surroundings which we propose to survey. The Orion belt stars, $`\delta `$, $`\epsilon `$, and $`\zeta `$, are shown for reference. The prominent bright emission nebulae are the Orion Nebula (ONC), NGC 2023, and NGC 2024 clusters, marking the sites of very recent star formation. Also indicated are the OB associations in the region (Blaauw 1964): Ori 1b and Ori 1a. Ori 1d corresponds to the Trapezium/ONC region; Ori 1c is the region surrounding it. Photometric analyses of the O, B and A stars (Warren & Hesser 1977, 1978; Brown et al 1994, BGZ) indicate ages of $`<1`$ Myr (1d, see also Hillenbrand 1997), 3 Myr (1c), 7 Myr (1b), and 12 Myr (1a). The latest results from Hipparcos (de Zeeuw et al 1999) yield mean parallaxes corresponding to 330 pc (1a), 440 pc (1b), and 460 pc (1c), with an uncertainty of 30%.
## 2. The Photometric Variability Survey
The large scale, multiband (BVRIH$`\alpha `$), multi-epoch, deep photometric survey over $`120\mathrm{deg}^2`$ in Orion (Figure 1) is being carried out using the QuEST camera, an $`8k\times 8k`$ CCD mosaic detector installed on the 1 m (clear aperture) Schmidt telescope at Llano del Hato Observatory, in the Venezuelan Andes ($`8^{\circ}47^{\prime}`$ N, 3610 m elevation). (The QuEST collaboration, the Quasar Equatorial Survey Team, includes Yale University, Indiana University, Centro de Investigaciones de Astronomía, and Universidad de Los Andes, Venezuela; its members are C. Abad, B. Adams, P. Andrews, C. Bailyn, C. Baltay, A. Bongiovanni, C. Briceño, V. Bromm, G. Bruzual, P. Coppi, F. Della Prugna, N. Ellman, W. Emmet, I. Ferrín, F. Fuenmayor, M. Gebhard, R. Heinz, J. Hernández, D. Herrera, K. Honeycutt, G. Magris, J. Mateu, S. Muffson, J. Musser, O. Naranjo, H. Neal, G. Oemler, R. Pacheco, G. Paredes, M. Rangel, A. Rengstorf, L. Romero, P. Rosenzweig, Ge. Sánchez, Gu. Sánchez, C. Sabbey, B. Schaefer, H. Schenner, J. Shin, J. Sinnott, J. Snyder, S. Sofia, J. Stock, J. Suárez, D. Tellería, W. van Altena, B. Vicente, K. Vieira, and A. K. Vivas. The main goal of QuEST is to perform a large scale survey of quasars.) The 16 $`2048\times 2048`$ UV-enhanced, front illuminated, Loral CCD chips are set in a $`4\times 4`$ array (Figure 2, left), covering most of the focal plane of the Schmidt telescope, yielding a scale of 1.02” per pixel and a field of view of $`2.3^{\circ}\times 2.3^{\circ}`$. The camera is optimized for drift-scan observing in the declination range $`-6^{\circ}\le \delta \le +6^{\circ}`$: the telescope is fixed and the CCDs are read out E-W at the sidereal rate as stars drift across the device, crossing each of the four filters in succession. This procedure generates a continuous strip (or “scan”) of the sky, $`2.3^{\circ}`$ wide; conversely, one can survey the sky at a rate of $`34.5\mathrm{deg}^2/\mathrm{hr}/\mathrm{filter}`$ (the $`2.3^{\circ}`$ strip width times the sidereal drift rate of $`15^{\circ}/\mathrm{hr}`$), down to $`V_{lim}=19.7`$ ($`S/N=10`$).
### 2.1. Data reduction and analysis
QuEST has developed its own software, since the huge amount of data produced is very difficult to handle with packages such as IRAF. The whole process is completely automated, with minimal interaction from the user. The output catalogs contain, among other quantities, J2000.0 positions, instrumental and calibrated magnitudes in 4 bands, and their corresponding errors. Positions are good to $`\pm 0.2^{\prime \prime }`$ down to V $`\sim `$ 19, within a few square degrees.
We have developed tools for identifying variable stars using differential photometry. Using a $`\chi ^2`$ test and assuming a Gaussian distribution for the errors, we consider variable only those objects for which the probability that the observed distribution is a result of random errors is very small. In Figure 2, right, we show a sample result from the variability analysis in the $`2.3^{\circ}`$ wide strip indicated in Figure 1. The dispersion increases for fainter objects, so that most (non-variable) objects populate a curved region. We use the $`\chi ^2`$ test to identify potential variable stars at a 99.99% confidence level (crosses in Figure 2).
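A minimal sketch of such a test, assuming Gaussian errors as above; the function and the toy light curve are illustrative, not the QuEST pipeline:

```python
import numpy as np
from scipy.stats import chi2

def is_variable(mags, errs, conf=0.9999):
    """Flag a star as variable when the chi^2 of its light curve against a
    constant (weighted-mean) magnitude is inconsistent with pure Gaussian
    measurement errors at the given confidence level."""
    w = 1.0 / errs**2
    mean = np.sum(w * mags) / np.sum(w)           # weighted mean magnitude
    chisq = np.sum(((mags - mean) / errs)**2)
    p_const = chi2.sf(chisq, df=len(mags) - 1)    # chance probability
    return p_const < 1.0 - conf

# toy light curve: 16 epochs with 0.02 mag errors and one 0.3 mag brightening
mags = np.array([15.00]*12 + [14.70]*4)
errs = np.full(16, 0.02)
print(is_variable(mags, errs))                    # -> True
```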
## 3. Results
During Dec.98 - early Feb.99, we obtained 16 BVRI scans over the strip indicated in Figure 1. We calibrated a ‘master’ scan using Landolt (1992) standard fields and then normalized the photometry in all the other scans to this reference scan.
The value of variability for picking out pre-main sequence (PMS) candidates is shown in Figure 3. The upper left panel is a color-magnitude diagram with all the stars in a $`10\mathrm{deg}^2`$ area within Ori 1b (the ZAMS at 440 pc is shown for reference); the upper right panel shows the variables in the same field, picked out using our selection criteria. Populations above and below the ZAMS are clearly separated using our variability criteria. Indeed, the only 5 known TTS in this region (Herbig & Bell 1988) were all recovered as variables. We also compared our data with the Kiso H$`\alpha `$ survey (cf. Wiramihardja et al. 1993) and found that the vast majority of H$`\alpha `$ stars above the ZAMS are detected as variables ($`\sim 70`$%), but essentially none of the ones below are (they could be a mixture of field dMe stars and some false detections; cf. Briceño et al. 1999).
To emphasize the point even further, in the lower panels of Figure 3 we show color-magnitude diagrams for field stars ($`\alpha =4^\mathrm{h}`$–$`5^\mathrm{h}`$, $`\delta =-1^{\circ}`$), showing clearly that the tail of the distribution of background stars extends far above the main sequence, while the variables show essentially no population of PMS objects.
We have initiated followup spectroscopy of the brighter (V $`<16.5`$) variable PMS candidates (Fig.3b), using the FAST spectrograph (Fabricant et al. 1998) on the 1.5m telescope at SAO, with a spectral resolution of 6Å in the range 4000 - 7000Å. We confirm low mass PMS stars based on the presence of emission in H$`\alpha `$ and other lines, and of Li I$`\lambda 6707`$Å strongly in absorption, which is a reliable indicator of youth in stars of spectral type K4-K5 and later (cf. Briceño et al. 1997). Even at this low resolution, Li I can be seen in late type stars with high SNR spectra (Figure 4). In this way, we have obtained spectra for 157 candidates and confirmed 74 of them as new TTS. We are now placing these objects in the HR diagram to derive their masses and ages. This high ($`\sim 50`$%) efficiency is the result of the clean selection provided by the variability criterion.
The new TTS have spectral types K7 - M2, corresponding to masses of roughly $`0.5`$–$`0.3M_{\odot }`$ at ages of 1–3 Myr. Though preliminary, this list of new TTS already suggests that the fraction of CTTS in 1a is much lower than in 1b, which would be expected if 1a is indeed older than 1b. We are analyzing in detail the light curves of these new stars, and spectroscopy of further candidates is under way.
This is the first optical survey whose spatial extent approaches that of the studies of extended star-forming regions (near the galactic plane, not reached by SDSS) carried out by surveys like the RASS and 2MASS, while going much deeper than the RASS and providing simultaneous photometry over several optical bandpasses for many epochs, a unique variability database that one-time surveys like 2MASS cannot offer.
## References
Blaauw, A. 1964, ARAA, 2, 213
Blaauw, A. 1991, in The Physics of Star Formation and Early Stellar Evolution, eds. C. Lada and N.D. Kylafis, (Dordrecht: Kluwer), p. 125
Briceño, C., Hartmann, L., Stauffer, J., Gagne, M., Caillault, J.-P., & Stern, A. 1997,
Briceño, C., Calvet, N., Kenyon, S., & Hartmann, L. 1999, AJ, 118, 1354
Brown, A., de Geus, E.J., & de Zeeuw, P.T. 1994, A&A, 289, 101
de Zeeuw, P.T., Hoogerwerf, R., de Bruijne, J., Brown, A., & Blaauw, A. 1999, AJ, 117, 354
Fabricant, D., Cheimets, P., Caldwell, N. & Geary, J. 1998, PASP, 110, 79
Herbig, G.H., & Bell, K.R. 1988, Lick Obs. Bull. 1111
Hillenbrand, L. 1997, AJ, 113, 1733
Lada, E. 1992, ApJ, 393, 25
Maddalena, R., Morris, M., Moscowitz, J., & Thaddeus, P. 1987, ApJ, 303, 375
Palla, F., & Stahler, S.W. 1992, ApJ, 392, 667
Warren, W.H., & Hesser, J.E. 1977, ApJS, 34, 115
Warren, W.H., & Hesser, J.E. 1978, ApJS, 36, 497
Wiramihardja, S., Kogure, T., Yoshida, S., Ogura, K., & Nakano, M. 1993, PASJ, 45, 643
# Peculiarities of isotopic temperatures obtained from p + A collisions at 1 GeV
<sup>1</sup> St. Petersburg Nuclear Physics Institute, Russian Academy of Science, 188350 Gatchina, Russia
<sup>2</sup> Institut für Kern- und Hadronenphysik, FZR, 01314 Dresden, Germany
## Abstract
The nuclear temperatures obtained from inclusive measurements of double isotopic yield ratios of fragments produced in 1 GeV p+A collisions amount to $`\sim `$ 4 MeV, nearly independent of the target mass.
The pioneering studies in ref. poch hint at a transition between liquid and gaseous phases of nuclear matter. The nuclear temperature as the crucial observable was derived from the isotope thermometer based on double yield ratios albergo (see below). This thermometer is assumed to be sensitive to the local temperature at particle freeze-out serf . Meanwhile, critical behaviour has been established in various heavy-ion collisions involving medium and heavy mass nuclei. Whereas the underlying Statistical Multifragmentation Model bondorf was successful in this mass region, it remains open whether the statistical nature of fragmentation processes can be seen and classified in small many-body systems botv2 . Therefore, temperature measurements in light nuclei with tested isotope thermometers are highly desirable.
A search for reliable isotope thermometers in proton induced collisions p+Xe$`\rightarrow `$IMF($`3\le Z\le 14`$)+X at beam momenta from 80 to 250 GeV/c was recently performed in ref. tsang , where it was established that such thermometers show a characteristic behaviour that is independent of the reaction type. Encouraged by this finding, we have analysed the data available from inclusive measurements of 1 GeV proton interactions with various target nuclei. The data taken into consideration were obtained in several independent experimental projects performed at the external proton beam of the PNPI synchrocyclotron Gatchina.
(i) One experiment was devoted to Light Charged Particle (LCP) detection at backward angles MNA1 :
p(1GeV)+(<sup>6</sup>Li,<sup>7</sup>Li,Be,C,Al,<sup>58</sup>Ni,Ag,Pb)$`\rightarrow `$LCP(Z=1)+X
The basic tool was a lens spectrometer with a momentum resolution of $`\mathrm{\Delta }`$p/p$`\approx `$2.5% within the dynamical range from 0.25 to 0.75 GeV/c. This spectrometer was installed at $`\mathrm{\Theta }_{lab}`$=109<sup>o</sup> and 156<sup>o</sup> with respect to the beam axis. TOF measurements made it possible to separate the hydrogen isotopes obtained from proton collisions with targets ranging from <sup>6</sup>Li to lead. Differential cross sections were obtained from the kinetic energy spectra extrapolated by fits with a Maxwellian functional form. For the first time, we turn to the yields of hydrogen isotopes and employ the thermometer based on the double ratio (<sup>2</sup>H/<sup>3</sup>H)/(<sup>1</sup>H/<sup>2</sup>H).
Thus, it became possible to determine the temperature of <sup>6</sup>Li as the smallest probe. Hitherto, doubt has been cast on the usefulness of hydrogen isotope yields as a thermometer, since different nonequilibrium processes may contribute to these yields. Such contributions should be suppressed under our kinematical conditions, and we consider the hydrogen yields an adequate tool for temperature measurements.
(ii) The second data set to be analysed involves Intermediate Mass Fragments (IMF):
p(1GeV)+(Be,C,<sup>58</sup>Ni,Ag,Au,<sup>238</sup>U)$`\rightarrow `$IMF(Z$`\ge `$2)+X
As the incident energy was kept fixed at 1 GeV, we expect that target-spectator fragmentation contributes most of the observed fragments. In distinction from heavy-ion collisions, the influence of compression and collective motion on the fragment abundances and on the related temperatures shlomo is expected to be minimized.
IMF production was studied in p+Ag, Au and U collisions at $`\mathrm{\Theta }_{lab}`$=60<sup>o</sup> and 120<sup>o</sup> volnin1 with a setup consisting of the above mentioned magnetic lens spectrometer combined with a $`\mathrm{\Delta }`$E-E telescope. The energy resolution of the $`\mathrm{\Delta }`$E-detector ($`\sim `$ 50 keV) allowed isotopes of fragments from helium to boron to be separated. Absolute cross sections were obtained by integration of the inclusive energy spectra approximated by a moving source fit and angular integration using the expression d$`\sigma `$/d$`\mathrm{\Omega }`$=$`c_1+c_2cos\mathrm{\Theta }_{lab}`$. We included in this analysis additional differential cross sections at $`\mathrm{\Theta }_{lab}`$=60<sup>o</sup> of fragments produced in 1 GeV proton collisions with <sup>48</sup>Ti, <sup>58</sup>Ni, <sup>64</sup>Ni, <sup>112</sup>Sn and <sup>124</sup>Sn volnin2 .
(iii) Isotopically separated fragments from <sup>9</sup>Be and <sup>12</sup>C targets have been registered with an experimental setup consisting of two TOF-E spectrometers installed at $`\mathrm{\Theta }_{lab}`$=30<sup>o</sup> and 126<sup>o</sup> with respect to the beam axis land1 . The basic detectors in each arm were twin Bragg Ionization Chambers combined with Parallel Plate Avalanche Counters. This setup, described in general in ref. nim , allowed us to measure the low energy part of the kinetic energy distributions of the fragments below $`\sim `$ 30 MeV. This part of the fragment spectrum, well reproduced by a moving-source fit with one exponential slope, is expected to represent mainly the equilibrated component.
From the inclusive differential and absolute cross sections measured in the mentioned experiments we derived isotopic yield ratios.
The method of temperature evaluation from isotopic abundances, ref. albergo , relies on five assumptions summarized in ref. milazzo . The most important one is the selection of fragments emitted from a single, equilibrated source. These conditions are assumed to be fulfilled rather well at 1 GeV incident energy if the emission from the target spectator is considered. Experimentally, detection in the backward direction and (or) registration of fragments with low kinetic energies should satisfy these requirements. According to ref. albergo the temperature can be obtained from the relation
$$T_{app}=\frac{B}{\mathrm{ln}(aR)},$$ (1)
where the double ratio R = R<sub>1</sub>/R<sub>2</sub> is defined by the isotope yields Y,
$$R_1=\frac{Y(A_i,Z_i)}{Y(A_i+\mathrm{\Delta }A,Z_i+\mathrm{\Delta }Z)},\qquad R_2=\frac{Y(A_j,Z_j)}{Y(A_j+\mathrm{\Delta }A,Z_j+\mathrm{\Delta }Z)},$$
which is valid if the fragments with mass numbers A<sub>i</sub>, A<sub>j</sub> and nuclear charges Z<sub>i</sub>, Z<sub>j</sub> are produced in their ground states. Each combination of ($`R,a,B`$) in equation (1) defines a ”thermometer” which allows one to find the absolute or relative temperature related to the fragment formation. The numerator B in equation (1) is determined by the binding energies BE,
$$B=BE(A_i,Z_i)-BE(A_i+\mathrm{\Delta }A,Z_i+\mathrm{\Delta }Z)-BE(A_j,Z_j)+BE(A_j+\mathrm{\Delta }A,Z_j+\mathrm{\Delta }Z).$$
The magnitude $`a`$ includes the spin degeneracy factors and mass numbers of the considered isotopes. The intrinsic nuclear temperature is proportional to the temperature measured by means of relation (1) up to 5-7 MeV, as shown in ref. hxi . We selected pairs with the same $`\mathrm{\Delta }`$A = $`\mathrm{\Delta }`$Z=1, for which the influence of the chemical potentials cancels out. The choice of pairs with $`\mathrm{\Delta }`$A =1, $`\mathrm{\Delta }`$Z=0 was made to minimize the influence of Coulomb barriers on the yields.
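As a concrete illustration of relation (1), a short sketch; the constants quoted in the comments are those commonly used for the He-Li thermometer and are not taken from this paper:

```python
import math

def t_apparent(R1, R2, a, B):
    """Eq. (1): apparent temperature (MeV) from the double isotope yield
    ratio R = R1/R2; a carries the spin-degeneracy and mass factors,
    B is the binding-energy combination in MeV."""
    return B / math.log(a * R1 / R2)

# He-Li thermometer (6Li/7Li)/(3He/4He): commonly quoted constants are
# a ~ 2.18 and B ~ 13.32 MeV (from the ground-state binding energies).
R1 = 1.28   # Y(6Li)/Y(7Li), illustrative yields
R2 = 0.10   # Y(3He)/Y(4He)
print(t_apparent(R1, R2, a=2.18, B=13.32))   # -> ~4.0 MeV
```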
Relation (1) must be modified for ”sequential decays”, i.e. if particle decays from higher lying states of the same and other isotopes contribute to the yields. In the special case of thermometers selected by $`B\ge `$ 10 MeV an empirical correction for sequential decays was published in ref. tsang
$$\frac{1}{T_{app}}=\frac{1}{T_o}+\frac{\mathrm{ln}(\kappa )}{B}$$ (2)
where T<sub>o</sub> is the unknown intrinsic equilibrium temperature. The correction factor $`\kappa `$ is defined by $`R_{app}`$=$`\kappa R_o`$, where $`R_{app}`$ is the measured double isotope yield ratio and $`R_o`$ the corresponding one for isotopes produced at equilibrium. The sensitivity of the thermometers improves with increasing B, which reduces the relative errors. In the limit where B becomes comparable to or smaller than the intrinsic temperature, appreciable contributions from sequential decays may affect the yields milazzo .
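Applying the correction is a one-liner; the value of $`\kappa `$ below is purely illustrative (the paper quotes mean corrections of $`\sim `$5%, never exceeding 15%):

```python
import math

def t_intrinsic(T_app, kappa, B):
    """Eq. (2): remove the sequential-decay bias from the apparent
    temperature; kappa is defined by R_app = kappa * R_o."""
    return 1.0 / (1.0 / T_app - math.log(kappa) / B)

# e.g. a 20% distortion of the double ratio shifts a 4 MeV reading by ~6%
print(t_intrinsic(4.0, kappa=1.2, B=13.32))   # -> ~4.23 MeV
```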
In figs. 1–3 we present the dependence of T<sub>app</sub> on the target mass number A<sub>T</sub> as obtained with the individual thermometers. Here, neither the selection criterion B$`\ge `$ 10 MeV nor the correction for sequential decays was applied, in order to avoid detailed discussion of them at this stage.
Fig. 1 shows the results obtained with the LCP thermometer (<sup>2</sup>H/<sup>3</sup>H)/(<sup>1</sup>H/<sup>2</sup>H) for two angles. One can see that T<sub>app</sub> is nearly constant over the target mass region $`6\le A_T\le 208`$ for $`\mathrm{\Theta }`$=109<sup>o</sup> (top of fig. 1). Since this behaviour is also established by the IMF thermometers (see below), we cannot confirm the former doubts about the utility of the ratio $`Y(p)/Y(d)`$ (ref. albergo ). However, the temperatures which are derived from the hydrogen yields at $`\mathrm{\Theta }`$= 156<sup>o</sup> (lower part of fig. 1) show some increase toward smaller A<sub>T</sub>. We suspect that an admixture of the $`\mathrm{\Delta }`$ isobaric state becomes apparent in the differential cross sections used. Fits performed with a Maxwell-Boltzmann distribution including a Breit-Wigner contribution confirm this enhancement for the smallest A<sub>T</sub>, but the error bars become larger.
We plot the temperatures obtained from the He and IMF yields as a function of A<sub>T</sub> in fig. 2. Additional data given in fig. 3 confirm the finding observed in figs. 1 and 2. Whereas the IMF thermometers provide constant temperatures, some dependence on A<sub>T</sub> is observed if we make use of the ratio <sup>3</sup>He/<sup>4</sup>He.
A comparison of temperatures derived from double ratios with $`\mathrm{\Delta }`$A=1, $`\mathrm{\Delta }`$Z=0 with those of $`\mathrm{\Delta }`$A=1, $`\mathrm{\Delta }`$Z=1 shows that they are equal within the error bars, apart from some larger fluctuations. The observed agreement of the studied thermometers suggests that each one may be suitable for relative temperature measurements, without the hitherto introduced limitation B $`\ge `$ 10 MeV. For a given thermometer, the influence of sequential decays seems to be independent of the origin of the excited primordial fragments. This behaviour is rather surprising, since the target mass number (or the volume of the fragmenting nuclei, respectively) changes by a factor of $`\sim `$25. Under the same conditions, the single ratios of isotope yields show a pronounced dependence on N<sub>T</sub>/Z<sub>T</sub>, ref. rab .
Next we converted the above values T<sub>app</sub>, on the basis of relation (2), into intrinsic temperatures T<sub>o</sub>, as far as the correction factors for sequential decay $`\kappa `$ hxi were available. The mean correction amounts to $`\sim `$ 5% but does not exceed 15%. Alternative correction methods, e.g. bon2 , valid in a multifragmentation scenario cannot be applied, since contributions from this process are $`\sim `$5% at 1 GeV incident energy. The behaviour of the individual thermometers is demonstrated in the top panel of fig. 4. Although the drawn errors are overestimated (the errors drawn in figs. 1–4 were obtained by simulations in which the primary yields $`Y_i\pm \mathrm{\Delta }Y_i`$ were treated as Gaussian distributions with mean $`Y_i`$ and $`\sigma _i`$=$`\mathrm{\Delta }Y_i`$; such a procedure provides dependable but enlarged errors $`\mathrm{\Delta }T_o`$ because it does not take into account that systematic errors are reduced in the ratios $`Y_i/Y_{i+1}`$ if they are taken from the same experiment), the trend observed with one thermometer is repeated by each of the other ones. Such a property hints at a real physical effect. In the lower panel we present the averages of the above values in order to compare with other available data. The following features of fig. 4 are worth discussing:
(i) All temperatures which have been corrected for sequential decays by using equation (2) almost coincide at each target mass number A<sub>T</sub>. Only the thermometer which explores the double ratio (<sup>11</sup>B/<sup>12</sup>B)/(<sup>3</sup>He/<sup>4</sup>He) overestimates the temperature in the case of the target nuclei $`Au`$ and $`U`$.
(ii) The temperatures which have been derived from the differential cross sections at $`\mathrm{\Theta }_{lab}`$=60<sup>o</sup> are larger in comparison with those obtained from production cross sections. Enhanced temperatures in forward direction and strong variations in the thermometers with <sup>3</sup>He/<sup>4</sup>He ratios were also reported in ref. Vio98 .
(iii) Further, we observe pronounced structures in the temperatures. These irregularities are related to fragments from the target nuclei <sup>48</sup>Ti, <sup>58</sup>Ni, <sup>64</sup>Ni, <sup>112</sup>Sn and <sup>124</sup>Sn, which were studied to search for the influence of varying nucleon composition on the fragmentation process. The authors of this investigation volnin2 stressed that the yields of fragments (except <sup>4</sup>He) normalized to the geometrical cross section depend mainly on the $`N/Z`$ ratio. We presume from this observation that the fluctuations of the temperature usually attributed to sequential decays have a causal connection with the nucleon composition and with the nuclear structure rather than with the size of the fragmenting system. Such a hypothesis is suggested by comparing $`T_o`$ of the pairs <sup>58</sup>Ni–<sup>64</sup>Ni and <sup>112</sup>Sn–<sup>124</sup>Sn, respectively. It seems to be a property of a fragmentation scenario like the Fermi break-up, ref. botv1 .
(iv) The temperatures obtained from the ratios of cross sections show a smooth but weak dependence on the target mass A<sub>T</sub>. The most pronounced increase with decreasing A<sub>T</sub> was observed only for the thermometers which include the ratio <sup>3</sup>He/<sup>4</sup>He (see fig. 4). Above all, we attribute this slope to the variation of the inherent nucleon constitution of the target nuclei. Double ratios involving heavier isotopes provide temperatures which are nearly independent of the target mass (see figs. 2–3) within the error bars. Although hints at this behaviour were already found in refs. tsang , Vio98 , one may doubt the universal validity of temperature measurements by double isotope ratios. Therefore, we accomplished an independent test of this method using data from another physical process. For this purpose, yields of isotopes from hydrogen to boron registered in spontaneous and thermal-neutron induced ternary fission, vorob krog , were processed by the same procedure as applied to those from fragmentation at 1 GeV (a forthcoming paper is in progress). The thermometers used show a consistent behaviour, resulting in significantly lower temperatures of $`T_{app}\sim `$ 1 MeV. Remark that fission neutron spectra are well reproduced by a temperature of $`\sim `$0.7 MeV bowman .
Summarizing, we analyzed inclusive data obtained in 1 GeV proton interactions with various target nuclei, employing different isotope thermometers. We found that even uncorrected thermometers involving pairs with B $`<`$ 10 MeV provide ”stable” results which may be suitable for relative temperature measurements. The weak dependence of the temperatures on the target mass A<sub>T</sub> invites speculation about a universal thermodynamical behaviour, even though the dimensions of the nuclear systems change to a large extent.
Acknowledgments This work was supported by the German Ministry of Education and Research (BMBF) under contract RUS-622-96 and by the Russian Foundation for Fundamental Research Grant No. 95-02-03671.
# Evolution of Gas and Dust in Circumstellar Disks
## 1. Introduction
The chemical history of solar system materials took place within a wide diversity of physical and chemical environments, as gas and dust were transported from a molecular cloud core to a destination in planetary bodies and atmospheres. Until recently, reconstruction of the relevant initial conditions and evolutionary processes relied solely on vestigial evidence wrested from analyses of the chemical composition of matter in the present-day solar system. This method of investigation is limited, however, by difficulties inherent in bridging a temporal discontinuity of over 4 billion years. It is especially challenging to disentangle the signatures of antecedent conditions from those of more recent processes occurring within the solar system (e.g., Brown, this volume). Improvements in observational methods have now made it possible to leap across this gap in time and directly observe analogs of the milieu in which early solar system materials were forged. Circumstellar disks and envelopes around currently-forming young stars have become the subject of increasingly detailed investigations with high-resolution astronomical techniques operating across a broad range of wavelengths (see reviews by Beckwith & Sargent 1996 and Koerner 1997).
The presence of protostellar envelopes and accretion disks around young stars was disclosed as soon as appropriate long-wavelength detectors became available. Observations extending from near infrared to millimeter wavelengths revealed excess radiation with a range of properties that implied an evolutionary sequence of states for the circumstellar material (Myers & Benson 1983; Lada & Wilking 1984; Strom et al. 1989; Beckwith et al. 1990). Models of the associated “Spectral Energy Distributions” (SEDs) accounted for diverse spectral shapes by locating dust at various distances from the young star, partitioned between spherical, flared disk, or flat disk configurations (Adams, Lada, & Shu 1987; Kenyon & Hartmann 1987; cf. Beckwith & Sargent 1993 and Shu et al. 1993).
The first images to confirm the above interpretations also validated the idea that differing SED properties were due to the time dependence of circumstellar disk properties. Aperture synthesis imaging of CO(2$`\rightarrow `$1) emission from the very young ($`t\sim 10^5`$ yr) stellar object, HL Tauri, revealed circumstellar dust and gas in an elongated structure with a mass several times that of the minimum required to form a solar system like our own (Beckwith et al. 1986; Sargent & Beckwith 1987; 1991). Coronagraphic imaging of dust around the much older ($`t\sim 10^7`$–$`10^8`$ yr) main sequence star, $`\beta `$ Pictoris, revealed a far more tenuous dust disk (Smith & Terrile 1984). The small mass of material and its short lifetime against dispersal implied that any associated planet formation had largely taken place (cf. Backman & Paresce 1993). It now appears that these objects represent snapshots at times which bracket most of the early evolution of protoplanetary disk systems. Gaps in the sequence are rapidly being filled in with new high-quality images of a wide range of objects.
## 2. Imaging the Stages of Disk Evolution
### 2.1. The Embedded Protostar Stage
Chemical processing of protostellar and protoplanetary gas and dust begins even before it encounters a shock front at the disk-envelope interface. Radiation from the central protostar, enhanced density in the molecular cloud core, and interaction with far-reaching ionized jets and bipolar outflows all contribute to distort the original interstellar chemical signature (Bergin, Högerheijde, and Ohashi, this volume). These changes may vary systematically in a time-dependent way, making it possible to use chemical abundances as a chronometer which traces the evolution of the infalling envelope (cf. Langer et al. 2000).
As infalling matter impacts a circumstellar accretion disk, it is subject to shock heating and gas drag with an intensity that depends sensitively on the radial distance of the impact from the star and concomitant impact velocity (see review by Lunine 1997 and references therein). In particular, icy grains accreting at distances greater than 30 AU from a Sun-like star are likely to suffer little sublimation of volatiles, and gas molecules are unlikely to be dissociated. Consequently, disk material originally at this radius is unlikely to bear much of the imprint of its entry into the disk.
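To make the radius dependence concrete, a rough sketch of the standard ballistic-infall arithmetic (not a result from this review): the impact speed at the disk surface is bounded by the free-fall velocity $`v_{ff}=\sqrt{2GM/r}`$, which for a Sun-like star drops below $`\sim `$8 km/s beyond 30 AU:

```python
import math

G, M_sun, AU = 6.674e-11, 1.989e30, 1.496e11   # SI units

def v_freefall_kms(r_au, m_star=1.0):
    """Upper bound on the accretion-shock impact speed (km/s) for material
    falling from rest at infinity onto radius r_au (AU) around a star of
    m_star solar masses."""
    return math.sqrt(2.0 * G * m_star * M_sun / (r_au * AU)) / 1e3

for r in (1, 30, 100, 300):
    print(f"r = {r:3d} AU : v_ff ~ {v_freefall_kms(r):5.1f} km/s")
# -> ~42 km/s at 1 AU, but only ~8 km/s at 30 AU and ~4 km/s at 100 AU
```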
High-resolution images now reveal that much of the material incorporated into circumstellar disks does indeed originally arrive at radial distances much greater than 30 AU. Although much of the material in the outer regions of the flattened structure around HL Tauri is now known to be infalling (Hayashi et al. 1993), continuum images indicate the presence of a central disk, presumably centrifugally supported, with a radius of order 100 AU (Lay et al. 1994; Mundy et al. 1996). Flattened structures of similar size and kinematics have recently been imaged around a small sample of other embedded young stars, in both CO line emission (cf. Ohashi, this volume) and in scattered light (Padgett et al. 1999). One example, IRAS 04302, is displayed in Fig. 1 and appears as a 450-AU-radius circumstellar structure oriented edge-on with a highly flattened morphology (Padgett et al. 1999). Kinematic analysis of CO spectral line images indicates that rotational motions dominate the velocity field of this structure. These results strongly suggest that most of the gas in solar nebula analogs originally arrives at distances greater than 100 AU, where little processing takes place. The large sizes of more evolved, centrifugally supported disks bear this out.
### 2.2. The “T-Tauri” Phase
Young stars first become optically visible when their infall envelope has dispersed enough to become transparent. Surveys of unresolved infrared and millimeter-wave emission from these “T Tauri stars” (TTs) provided initial evidence that circumstellar disks are the dominant component in the dust configuration at this stage (Strom et al. 1989; Beckwith et al. 1990). A large fraction of TTs are detected with associated SEDs that can be attributed to dusty disks, similar to those expected for planetary systems in formation (cf. Beckwith & Sargent 1993). The overwhelming majority of these are associated with “classical” T Tauri stars (cTTs), for which diagnostics of protostellar accretion such as H$`\alpha `$ emission are still robust (cf. Calvet, Hartmann, & Strom 2000). Optically thick dust emission is not as readily detected from disks around T Tauri stars without strong evidence of protostellar accretion, the so-called “weakline T Tauri stars” (wTTs) (cf. Osterloh & Beckwith 1995).
Aperture synthesis imaging of the disk around GM Aurigae was the first to demonstrate that gas was rotationally supported throughout the radial extent of a circumstellar disk around a cTTs (Koerner et al. 1993). High-resolution coronagraphic imaging with HST confirms the picture from mm-wave interferometry and is displayed in Fig. 2 (Koerner et al. 1999). Light scattered from the concave surface of a flared disk is consistent with an orientation like that derived from aperture synthesis images of the molecular emission. The disk is several hundred AU in radius and has a mass several times that required to form our own solar system (cf. Dutrey et al. 1998).
Additional observations of CO emission from cTTs have demonstrated that the properties of the disk around GM Aur are not at all unusual. TTs which exhibit similar mm-wave continuum luminosity are typically surrounded by disks with radii greater than 100 AU (Koerner & Sargent 1995; Dutrey, this volume). The occurrence frequency for mm-wave detection at the associated luminosity level is about 10$`\%`$ for TTs, generally. These are the only objects which can be appropriately considered to be analogs of the early solar nebula, since disk masses derived for TTs with lower mm-wave continuum luminosity are well below the minimum required to produce a solar system like our own.
The timescale and associated mechanism by which disks eventually disperse are not conclusively determined. Photo-evaporation may provide a way to deplete disk gas from the outside in (see Johnstone, this volume), while accretion onto the star may remove material from the inner disk. Infrared surveys establish that inner-disk dust is depleted in TTs older than 3 $`\times 10^6`$ yrs (Skrutskie et al. 1990), but it is unclear whether the underlying cause is pre-planetary grain accumulation, protostellar accretion, or some other dispersal mechanism. Surveys at millimeter and sub-millimeter wavelengths fail to reveal a correlation between stellar age and associated disk mass (Beckwith et al. 1990; Beckwith & Sargent 1991), but such an effect may be masked by the failure of studies to discriminate between single and binary stars (Jensen et al. 1994; Osterloh & Beckwith 1995). Evidence for the persistence time of gas in disks is scarcer than for dust, even though 99% of the mass of protoplanetary disks is thought to consist of molecular gas. There is some suggestion that gaseous disks are dispersed on timescales of 10<sup>7</sup>–10<sup>8</sup> years (Skrutskie et al. 1991; Zuckerman, Forveille, & Kastner 1995), but the number of objects observed with sufficient sensitivity is still quite small. In any event, gaseous disks similar to those imaged around cTTs have not been imaged for any weakline T Tauri stars, except for a couple of borderline cases (Duvert et al. 1999; Qi, this volume).
### 2.3. Debris Disks
The presence of remnant circumstellar dust around “Vega-type” stars – A-type stars with infrared excess and ages between ten and a few hundred million yrs – may signal a more evolved stage of planet formation (cf. Backman & Paresce 1993; Lagrange, Backman, & Artymowicz 2000 and references therein). The infrared signature of optically thin dust is present in IRAS measurements, but the correlation with early spectral type may be due simply to the ability of hotter stars to heat circumstellar dust at several tens of AU to temperatures characteristic of infrared radiation. Images of these tenuous “debris disks” have been obtained in both scattered light and thermal infrared emission for $`\beta `$ Pictoris (Smith & Terrile 1984; Lagage & Pantin 1994; Heap et al. 1997), HR 4796A (Jayawardhana et al. 1998; Koerner et al. 1998; Schneider et al. 1999), HD 141569 (Augereau et al. 1999; Weinberger et al. 1999), and at sub-millimeter wavelengths for several nearby stars (Holland et al. 1998; Greaves et al. 1998). In many cases, the images confirm what was deduced from models of the spectral distribution of radiated energy, namely that the disks surround large inner holes with sizes like that of our solar system. This is readily apparent in thermal IR and HST images of HR 4796A shown in Figure 3, where the dust is confined largely to a circumstellar ring.
The presence of a large hole in the disk around HR 4796A was originally implied by the shape of its SED (Jura et al. 1995), a characteristic that applies to many other debris-disk examples as well. Modeling of thermal infrared imaging like that shown in Fig. 3 demonstrated unequivocally that the disk did not extend all the way to the star (Koerner et al. 1998). This result was dramatically confirmed in coronagraphic imaging with the Hubble Space Telescope (Schneider et al. 1999), also displayed in Fig. 3. New analysis of imaging at 24.5 $`\mu `$m reveals that most of the emission at the stellar position is well in excess of the photosphere. The color temperature of dust close to the star is $`\sim `$ 170 K, similar to that expected for an ice condensation front like the one which may have assisted in the formation of Jupiter. For HR 4796A, a star considerably warmer than the Sun, this temperature corresponds to a radial distance of about 10 AU (Wahhaj et al. 2000).
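A quick consistency check (not from the paper): blackbody grains in equilibrium with starlight reach $`T_{eq}\approx 278\mathrm{K}(L/L_{\odot })^{1/4}(r/\mathrm{AU})^{-1/2}`$, so with an assumed luminosity of $`\sim `$21 $`L_{\odot }`$ for HR 4796A, 170 K grains do indeed sit near 10 AU:

```python
def t_eq(r_au, L_lsun):
    """Equilibrium temperature (K) of a blackbody grain at r_au (AU)
    around a star of luminosity L_lsun (solar units)."""
    return 278.0 * L_lsun**0.25 / r_au**0.5

def r_for_temp(T_kelvin, L_lsun):
    """Radius (AU) at which a blackbody grain reaches T_kelvin."""
    return (278.0 / T_kelvin)**2 * L_lsun**0.5

L_hr4796a = 21.0                      # L_sun; assumed value, not from the paper
print(t_eq(10.0, L_hr4796a))          # -> ~188 K at 10 AU
print(r_for_temp(170.0, L_hr4796a))   # -> ~12 AU for 170 K grains
```

Real grains are not perfect blackbodies, so the agreement at the tens-of-percent level is all that should be expected.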
The presence of an inner hole in the disk around $`\beta `$ Pictoris is also implied by its SED (Backman et al. 1992), but is not quite as readily apparent as for HR 4796A. This is clear from the thermal infrared image in Fig. 4. The appearance of continuity is deceiving, however, since the emission intensity depends heavily on temperature and will be preferentially greater for the material close to the star. Modeling of such images indicates that an inner region of reduced density is indeed present (cf. Lagage & Pantin 1994; Pantin et al. 1997). Additional evidence for planetary bodies is apparent in the form of a warp sharply identified in HST images (Burrows et al. 1995; Heap et al. 1997). It appears in Fig. 4 as a variation in the angle of the long axis of intermediate contour levels. The warp may be the result of the dynamic influence of a planetary body with an orbit which is inclined with respect to the plane of the disk.
Holes and/or gaps are evident in a few other extant disk images, including those for $`\alpha `$ PsA, $`ϵ`$ Eri, and HD 141569 (Holland et al. 1998; Greaves et al. 1998; Augereau et al. 1999; Weinberger et al. 1999). These features strengthen the interpretation of debris disks as representing a late protoplanetary phase in which the system is largely devoid of molecular gas and contains fully formed planets and/or planetesimals which generate remnant debris via mutual collisions. The implied connection between disks and planets has become even more explicit with the ground-based coronagraphic detection of dust around a star for which a planet has actually been detected. Dust detected around 55 Cnc, a star with a radial-velocity signature of a planet (Marcy et al. 1999), lies well outside of the orbit of the detected body, but implies an orbital plane which confirms a planetary mass for the companion (Trilling & Brown 1998).
## 3. Discussion
The images reviewed above help establish and refine a picture of the evolution of protoplanetary disks that is inferred from long-wavelength spectral properties. It is now clear that the typical size of solar-system-analog disks is larger than the canonical solar system (R $`\sim `$ 50 AU) by factors of several. This suggests that a large fraction of the initial molecular reservoir was not heavily modified by an accretion shock. The timescales for survival of optically thick dust and molecular gas seem to be similar, of order a few times 10<sup>6</sup> yr. Disks of tenuous debris frequently survive for another 100 million years and show evidence of the dynamic influence of larger bodies.
Many questions remain unanswered in the above picture. It would be especially useful to obtain images of disks in transition between a viscously accreting stage and one in which gas and dust are largely dispersed. These would help refine estimates of the timescales involved and could enlighten our understanding of the dispersal mechanisms as well. In addition, it is not always clear whether differences between individual disks are the result of evolution or simply of different initial conditions. In order to sort this out, the statistical properties of circumstellar matter around a large unbiased sample of young stars must be obtained. High-resolution imaging of such a sample is currently infeasible, but broadband spectral characteristics will be accessible to infrared surveys taken with instruments such as SIRTF and SOFIA. These are designed to operate above the atmosphere with the sensitivity required for detection of waning disks in star-forming regions. Interpretations of such surveys will, of course, be helped by the “ground truth” afforded by available images, but they may not require detailed imaging of every source. It is expected that the broadband spectral properties of hundreds of young stars can be surveyed with currently planned instrumentation. This will allow us to begin to answer questions, not just about the evolution of solar system analogs, but about the place of our solar system within a diversity of possible circumstellar environments.
## References
Adams, F. C., Lada, C. J., & Shu, F. H. 1987, ApJ, 312, 788
Augereau, J.C., Lagrange, A.M., Mouillet, D., & Ménard, F., 1999, A&A, 350, L51
Backman, D.E., Witteborn, F.C., & Gillett, F.C. 1992, ApJ, 385, 670
Backman, D.E. & Paresce, F. 1993, in Protostars and Planets III, eds. E.H. Levy and J.I. Lunine, (Tucson: University of Arizona Press), 1253
Beckwith, S.V.W., & Sargent, A.I. 1996, Nature, 383, 139
Beckwith, S. V. W., & Sargent, A. I. 1993, in Protostars and Planets III, eds. E. H. Levy & J. I. Lunine (Tucson:University of Arizona Press), 521
Beckwith, S.V.W., Sargent, A. I., Scoville, N.Z., Masson, C.R., Zuckerman, B., & Phillips, T.G. 1986, ApJ, 309, 755
Beckwith, S.V.W., Sargent, A. I., Chini, R., & Güsten, R. 1990, AJ, 99, 924
Burrows, C.J., Krist, J.E., Stapelfeldt, K.R., & WFPC2 IDT 1995, BAAS, 187, 32.05
Calvet, N., Hartmann, L., & Strom, S.E. 2000, in Protostars and Planets IV, eds. V. Mannings, A.P. Boss, & S.S. Russell (Tucson: University of Arizona Press), in press
Dutrey, A., Guilloteau, S., Prato, L., Simon, M., Duvert, G., Schuster, K., & Ménard, F. 1998, A&A, 338, L63
Duvert, G., et al. 2000, A&A, in press
Greaves, J.S., Holland, W.S., Moriarty-Schieven, G., Jenness, T., Dent, W.R.F., Zuckerman, B., McCarthy, C., Webb, R.A., Butner, H.M., Gear, W.K., & Walker, H.J., 1998, ApJ, 506, L133
Hayashi, M., Ohashi, N., & Miyama, S.M. 1993, ApJ, 418, L71
Heap, S.R., Lindler, D.J., Woodgate, B., & STIS ID Team, 1997, BAAS, 191, 47.02
Holland, W.S., Greaves, J.S., Zuckerman, B., Webb, R.A., McCarthy, C., Coulson, I.M., Walther, D.M., Dent, W.R.F., Gear, W.K., & Robson, I., 1998, Nature, 392, 788
Jayawardhana, R., Fisher, S., Hartmann, L., Telesco, C., Pina, R., & Fazio, G., 1998, ApJ, 503, L79
Jensen, E.L.N., Mathieu, R.D., & Fuller, G.A. 1994, ApJ, 429, L29
Jura, M., Ghez, A.M., White, R.J., McCarthy, D.W., Smith, R.C., & Martin, P.G., 1995, ApJ, 445, 451
Kenyon, S.J., & Hartmann, L. 1987, ApJ, 323, 714
Koerner, D.W. 1997, in Planetary and Interstellar Processes Relevant to the Origins of Life, ed. D.C.B. Whittet, (Kluwer: Dordrecht) pp. 157
Koerner, D.W. & Sargent, A.I. 1995, AJ, 109, 2138
Koerner, D.W., Sargent, A.I., & Beckwith, S.V.W. 1993, Icarus, 106(1), 2
Koerner, D.W., Ressler, M.E., Werner, M.W., & Backman, D.E. 1998, ApJ, 503, L83
Koerner, D.W., Schneider, G., Smith, B. A., Becklin, E. E., Hines, D. C., Kirkpatrick, J. D., Lowrance, P. J., Meier, R., Rieke, M., Terrile, R. J., Thompson, R. I., 1999, BAAS, 193, 73.14
Lada, C.J., & Wilking, B.A. 1984, ApJ, 287, 610
Lagage, P.O., & Pantin, E., 1994, Nature, 369, 628
Lagrange, A-M., Backman, D.E., & Artymowicz, P., 2000, in Protostars and Planets IV, eds. V. Mannings, A.P. Boss, & S.S. Russell (Tucson: University of Arizona Press), in press
Langer, W.D., van Dishoeck, E.F., Bergin, E.A., Blake, G.A., Tielens, A.G.G.M., Velusamy, T., & Whittet, D.C.B., 2000, in Protostars and Planets IV, eds. V. Mannings, A.P. Boss, & S.S. Russell (Tucson: University of Arizona Press), in press
Lay, O.P., Carlstrom, J.C., Hills, R.E., & Phillips, T.G., 1994, ApJ, 434, L75
Lunine, J.I. 1997, in Planetary and Interstellar Processes Relevant to the Origins of Life, ed. D.C.B. Whittet, (Kluwer: Dordrecht) pp. 205
Lynden-Bell, D., & Wood, R. 1968, MNRAS, 138, 495
Mannings, V., Koerner, D.W., & Sargent, A.I. 1997, Nature, 388, 555
Marcy, G.W., Butler, R.P., Vogt, S.S., Fischer, D., & Liu, M.C. 1999, ApJ, 520, 239
Mundy, L.G., Looney, L. W., Erickson, W., Grossman, A., Welch, W.J., Forster, J. R., Wright, M.C.H., Plambeck, R.L., Lugten, J., & Thornton, D.D., 1996, ApJ, 464, L169
Myers, P.C., & Benson, P.J., ApJ, 248, 87
Osterloh, M., & Beckwith, S. 1995, ApJ, 439, 288
Padgett, D.L., Brandner, W., Stapelfeldt, K.R., Strom, S.E., Terebey, S., Koerner, D. 1999, AJ, 117, 1490
Pantin, E., Lagage, P.O., & Artymowicz, P., 1997, A&A, 327, 1123
Sargent, A.I., & Beckwith, S. 1987, ApJ, 323, 294
Sargent, A.I., & Beckwith, S. 1991, ApJ, 382, L31
Schneider, G., Smith, B.A., Becklin, E.E., Koerner, D.W., Meier, R., Hines, D.C., Lowrance, P.J., Terrile, R.J., Thompson, R.I., & Rieke, M. 1999, ApJ, 513, L127
Shu, F., Najita, J., Galli, D., and Ostriker, E. 1993, in Protostars & Planets III, ed. E. H. Levy & J. I. Lunine (Tucson: University of Arizona Press), 3
Skrutskie, M.F., Dutkevitch, D., Strom, S.E., Strom, K.M., & Shure, M.A. 1990, AJ, 99, 1187
Skrutskie, M.F., Snell, R., Dutkevitch, D., Strom, S.E., Schloerb, F.P., & Dickman, R.L. 1991, AJ, 102, 1749
Smith, B.A., & Terrile, R.J. 1984, Science, 226, 1421
Strom, K. M., Strom, S. E., Edwards, S., Cabrit, S., & Skrutskie, M. F. 1989, AJ, 97, 1451.
Trilling, D.E., & Brown, R.H. 1998, Nature, 395, 775
Wahhaj, Z., Koerner, D.W., Backman, D.E., Werner, M.W., Serabyn, E.,& Ressler, M.E. 2000, BAAS, 195, 25.04
Weinberger, A.J., Becklin, E.E., Schneider, G., Smith, B.A., Lowrance, P.J., Silverstone, M.D., Zuckerman, B., & Terrile, R.J., 1999, ApJ, 525, L53
Zuckerman, B., Forveille, T., & Kastner, J.H., 1995, Nature, 373, 494 |
no-problem/9912/chao-dyn9912018.html | ar5iv | text | # The Nosé–Hoover thermostated Lorentz gas
*Note: Most of the figures are in poor quality output. The originals are many MB large and can be obtained upon request.
## I introduction
In a system of particles under an external force a nonequilibrium steady state can be obtained by applying a thermostat . Deterministic and time reversible bulk thermostating is based on introducing a momentum dependent friction coefficient in the equations of motion. One type of this mechanism is the Nosé–Hoover thermostat . It creates a canonical ensemble in equilibrium and yields a stationary nonequilibrium distribution of velocities in nonequilibrium. Another version, the Gaussian isokinetic thermostat , leads to a microcanonical density for the velocity components in equilibrium and to a constant kinetic energy in nonequilibrium. Though the microscopic dynamics of these thermostated systems is time reversible the macroscopic dynamics is irreversible in nonequilibrium . This is related to a contraction onto a fractal attractor .
Characteristic features of thermostated many particle systems, like a nonequilibrium steady state and a fractal attractor, have been recovered for a specific one particle system, the Gaussian thermostated Lorentz gas. The periodic Lorentz gas consists of a particle that moves through a triangular lattice of hard disks and is elastically reflected at each collision with a disk. (A model almost identical to the driven periodic Lorentz gas, except for some geometric restrictions, is the Galton board, which was invented in 1873 to study probability distributions.) It serves as a standard model in the field of chaos and transport. In contrast to many particle systems, a one particle system reflects more strongly the properties of a thermostat. For the Gaussian thermostated Lorentz gas a complicated dependence of the attractor on the field strength results.
A second simple one particle system which has been much investigated concerning its chaotic properties is the Nosé–Hoover thermostated harmonic oscillator. In contrast to the Lorentz gas, the dynamics of this system is generically nonergodic . However, one can obtain an ergodic dynamics for this system by the additional control of the square of the kinetic energy .
Recently an alternative thermostating mechanism, called thermostating by deterministic scattering, has been introduced for the periodic Lorentz gas. (This mechanism has later been applied to a system of hard disks under a temperature gradient and shear.) This deterministic and time reversible mechanism is based on including an energy transfer between the moving particle and the disk scatterer at a collision, instead of using a momentum dependent friction coefficient. It leads to a canonical probability density for the particle in equilibrium, and in nonequilibrium it keeps the energy of the particle on average constant. In Refs. this model has been compared to the Gaussian thermostated Lorentz gas: in nonequilibrium one finds an attractor for this model which is similar to the fractal attractor of the Gaussian thermostated Lorentz gas, but in contrast to the Gaussian case the attractor is phase space filling even for high field strengths. For both models the conductivity is a nonlinear decreasing function with increasing field strength on a coarse scale.
Based on its construction the method of thermostating by deterministic scattering is in fact closer to the Nosé–Hoover thermostat. This motivates us to apply for the first time the Nosé–Hoover thermostat to the periodic Lorentz gas. In Section II we introduce the Nosé–Hoover thermostat and discuss some variations of it. In Section III we define the periodic Lorentz gas and the thermostats we will study. We investigate these models in equilibrium in Section IV and in nonequilibrium in Section V. Conclusions are drawn in Section VI.
## II The Nosé–Hoover thermostat and some variations
In the following sections we consider a one particle system in two dimensions with the position coordinates $`\stackrel{}{q}=(q_x,q_y)`$ and the momentum coordinates $`\stackrel{}{p}=(p_x,p_y)`$. The mass of the particle has been set equal to unity. The equations of motion for the Nosé–Hoover thermostat are then given by
$`\dot{\stackrel{}{q}}`$ $`=`$ $`\stackrel{}{p}`$ (1)
$`\dot{\stackrel{}{p}}`$ $`=`$ $`\stackrel{}{\epsilon }-\zeta \stackrel{}{p}`$ (2)
$`\dot{\zeta }`$ $`=`$ $`({\displaystyle \frac{p^2}{2T}}-1){\displaystyle \frac{1}{\tau ^2}}.`$ (3)
The thermostat variable $`\zeta `$ couples the particle dynamics to a reservoir. It controls the kinetic energy of the particle $`p^2/2`$ such that $`<p^2>=2T`$. This holds even in nonequilibrium as induced by an electric field $`\stackrel{}{\epsilon }`$. $`\tau `$ is the response time of the thermostat. Performing the limit $`\tau \to 0`$ in Eqs. (3) approximates the Gaussian thermostat with $`\zeta =(\stackrel{}{\epsilon }\stackrel{}{p})/p^2`$. In the limit $`\tau \to \mathrm{\infty }`$ the friction coefficient approaches a constant, $`\zeta =\zeta _c`$, and the equations of motion are not time reversible anymore. The dynamics of this dissipative limit has been investigated in .
A generalization of the Nosé–Hoover thermostat to control higher even moments of $`p`$ has been introduced by Hoover . The moments are fixed according to the moment relations of the Gaussian distribution. By such a more detailed control of the nonequilibrium steady state, statistical dynamical properties, like ergodicity, can be improved, as has been mentioned for the Nosé–Hoover thermostated harmonic oscillator in the introduction. In principle the method of the control of the even moments can straightforwardly be extended to the control of the odd moments. However, such a thermostat would involve an additional parameter to control the respective current of the subsystem. Thus, the corresponding reservoir would be more than a single thermal reservoir, which is physically not desirable, apart from the fact that the respective equations of motion would not be time reversible anymore.
We briefly note that there exist other formal generalizations, or modifications, of the Nosé–Hoover thermostat in the literature, which have been critically reviewed elsewhere.
## III Variations of the Nosé–Hoover thermostat for the periodic Lorentz gas
The basic thermostating method we investigate in this paper is the Nosé–Hoover thermostat, Eqs. (3). In the following we introduce three variations of it which are time reversible and result in a dissipative dynamics in nonequilibrium.
The first variation is the Nosé–Hoover thermostat with a field dependent coupling to the reservoir,
$`\dot{\stackrel{}{q}}`$ $`=`$ $`\stackrel{}{p}`$ (4)
$`\dot{p_x}`$ $`=`$ $`\epsilon _x-(1+\epsilon _x)\zeta p_x`$ (5)
$`\dot{p_y}`$ $`=`$ $`\epsilon _y-(1+\epsilon _y)\zeta p_y`$ (6)
$`\dot{\zeta }`$ $`=`$ $`({\displaystyle \frac{p^2}{2T}}-1){\displaystyle \frac{1}{\tau ^2}},`$ (7)
which is obtained by including the factors $`1+\epsilon _x`$, resp. $`1+\epsilon _y`$, in Eqs. (3). Alternatively, these equations can be written by defining two field dependent friction coefficients, $`\xi _x=(1+\epsilon _x)\zeta `$ and $`\xi _y=(1+\epsilon _y)\zeta `$, which are governed by $`\dot{\xi }_x=(p^2/2T-1)(1+\epsilon _x)/\tau ^2`$ and $`\dot{\xi }_y=(p^2/2T-1)(1+\epsilon _y)/\tau ^2`$, respectively. It then becomes clear that for each momentum component there exists a separate response time of the reservoir which reads $`\tau /\sqrt{1+\epsilon _x}`$, resp. $`\tau /\sqrt{1+\epsilon _y}`$. The basic advantage of this thermostat is that the response times are now adjusted to the corresponding component of the field strength such that with increasing field strength the response time decreases. The standard Nosé–Hoover thermostat Eqs. (3) is contained as a special case in equilibrium.
The second variation goes back to Hoover . It includes a control of $`<p^4>=8T^2`$,
$`\dot{\stackrel{}{q}}`$ $`=`$ $`\stackrel{}{p}`$ (8)
$`\dot{\stackrel{}{p}}`$ $`=`$ $`\stackrel{}{\epsilon }-\zeta _1\stackrel{}{p}-\zeta _2{\displaystyle \frac{p^2}{2T}}\stackrel{}{p}`$ (9)
$`\dot{\zeta _1}`$ $`=`$ $`({\displaystyle \frac{p^2}{2T}}-1){\displaystyle \frac{1}{\tau ^2}}`$ (10)
$`\dot{\zeta _2}`$ $`=`$ $`{\displaystyle \frac{p^2}{2T}}({\displaystyle \frac{p^2}{2T}}-2){\displaystyle \frac{1}{\tau ^2}}.`$ (11)
As mentioned in the previous sections this variation can improve statistical dynamical properties, like ergodicity.
The third variation controls $`p_x^2`$ and $`p_y^2`$ separately. This is performed by defining two independent reservoirs for the $`x`$– and $`y`$–direction,
$`\dot{\stackrel{}{q}}`$ $`=`$ $`\stackrel{}{p}`$ (12)
$`\dot{p_x}`$ $`=`$ $`\epsilon _x-\zeta _xp_x`$ (13)
$`\dot{p_y}`$ $`=`$ $`\epsilon _y-\zeta _yp_y`$ (14)
$`\dot{\zeta _x}`$ $`=`$ $`({\displaystyle \frac{p_x^2}{T}}-1){\displaystyle \frac{1}{\tau _x^2}}`$ (15)
$`\dot{\zeta _y}`$ $`=`$ $`({\displaystyle \frac{p_y^2}{T}}-1){\displaystyle \frac{1}{\tau _y^2}}.`$ (16)
This variation more deeply intervenes in the microscopic dynamics by forcing the single components $`p_x^2`$ and $`p_y^2`$ separately towards canonical distributions. However, in contrast to the previous variations a curiosity is hidden in it: At a collision the thermostat $`\zeta _x,\zeta _y`$ is uncorrelated to the thermostated variables $`p_x`$ and $`p_y`$, because $`p_x`$ and $`p_y`$ change at a collision whereas $`\zeta _x,\zeta _y`$ remain the same. Therefore the thermostat does not work efficiently. But we have not found any reflection of this curiosity in the macroscopic behavior.
We study the dynamics of these models in one Lorentz gas cell with periodic boundaries, see Fig. 1(a). As the radius of the disk we take $`r=1`$. For the spacing between two neighboring disks we choose $`w\approx 0.2361`$, corresponding to a density equal to $`4/5`$ of the maximum packing density of the scatterers. A collisionless free flight of the particle is avoided for this parameter. The relevant variables of the dynamical system are defined in Fig. 1(b): $`\beta `$ is the angular coordinate of the point at which the particle elastically collides with the disk, and $`\gamma `$ is the angle of incidence at this point.
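As a quick cross-check (our own arithmetic, not taken from the paper): for a triangular lattice of disks of radius $`r`$ with lattice constant $`a=2r+w`$, the scatterer density scales as $`1/a^2`$, with maximum packing at $`a=2r`$, so

$$\left(\frac{2r}{2r+w}\right)^{2}=\frac{4}{5}\quad \Rightarrow \quad w=r\left(\sqrt{5}-2\right)\approx 0.2361.$$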
The equations of motion are integrated with a fourth-order Runge–Kutta algorithm with a step size of $`dt=0.005`$ between two collisions. The collision time of the particle with the disk has been determined to a precision of $`10^{-7}`$. Unless stated otherwise the temperature is set to $`T=0.5`$.
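A minimal sketch of this scheme (our own illustrative code, not the authors'; for simplicity it places a single disk in a square periodic cell rather than the hexagonal Lorentz gas cell, and refines the collision time by bisection):

```python
import numpy as np

def rhs(s, eps, T, tau):
    """Nose-Hoover flow, Eqs. (1)-(3): qdot = p, pdot = eps - zeta*p,
    zetadot = (p^2/(2T) - 1)/tau^2, with the field eps along x."""
    qx, qy, px, py, zeta = s
    p2 = px * px + py * py
    return np.array([px, py, eps - zeta * px, -zeta * py,
                     (p2 / (2.0 * T) - 1.0) / tau ** 2])

def rk4(s, dt, *args):
    k1 = rhs(s, *args)
    k2 = rhs(s + 0.5 * dt * k1, *args)
    k3 = rhs(s + 0.5 * dt * k2, *args)
    k4 = rhs(s + dt * k3, *args)
    return s + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def run(nsteps=100_000, dt=0.005, eps=0.5, T=0.5, tau=1.0, r=1.0, L=1.118):
    # Toy geometry: one disk of radius r at the origin in a square periodic
    # cell of half-width L (an assumption for illustration only).
    s = np.array([0.0, 1.05, np.sqrt(2.0 * T), 0.0, 0.0])  # start just above the disk
    for _ in range(nsteps):
        new = rk4(s, dt, eps, T, tau)
        if new[0] ** 2 + new[1] ** 2 < r ** 2:      # stepped into the disk
            lo, hi = 0.0, dt                        # bisect the collision time
            while hi - lo > 1e-7:
                mid = 0.5 * (lo + hi)
                trial = rk4(s, mid, eps, T, tau)
                if trial[0] ** 2 + trial[1] ** 2 < r ** 2:
                    hi = mid
                else:
                    lo = mid
            new = rk4(s, lo, eps, T, tau)
            n = new[:2] / np.hypot(new[0], new[1])  # outward normal at contact
            new[2:4] -= 2.0 * np.dot(new[2:4], n) * n   # elastic reflection
            # (the remainder of the step is dropped -- acceptable for a sketch)
        new[:2] = (new[:2] + L) % (2.0 * L) - L     # periodic wrap of the cell
        s = new
    return s
```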
## IV Equilibrium
Inserting the initial condition $`(p_0,\zeta _0)=(\sqrt{2T},0)`$ in Eqs. (3) with $`\epsilon =0`$, the velocity of the particle becomes a constant, $`p=\sqrt{2T}`$; that is, the Nosé–Hoover thermostat does not act in the Lorentz gas and the dynamics is microcanonical. For other initial conditions one observes in computer simulations that $`p^2`$ and $`\zeta `$ oscillate periodically and that the dynamics of this one particle system is nonergodic.
A stability analysis confirms the numerical results: The Nosé–Hoover thermostated equations of motion Eqs. (3) can be reduced for $`\epsilon =0`$ to
$`\dot{p^2}`$ $`=`$ $`-2\zeta p^2`$ (17)
$`\dot{\zeta }`$ $`=`$ $`({\displaystyle \frac{p^2}{2T}}-1){\displaystyle \frac{1}{\tau ^2}}.`$ (18)
Eqs. (18) are also valid at the moment of a collision, because $`p^2`$ and $`\zeta `$ are not changed by a collision. The fixed point of Eqs. (18) is $`(p^2,\zeta )=(2T,0)`$ with the purely imaginary eigenvalues $`\lambda _{1/2}=\pm \sqrt{-4T/\tau ^2}`$, and is thus elliptic.
The additional control of $`<p^4>`$ destroys the microcanonical probability density, but it is not sufficient to obtain an “exact” dynamics. (The dynamics is “exact” if every initial density of nonzero measure converges to the same stationary density in the periodic Lorentz gas.) Different initial conditions still lead to a different shape of the probability density $`\varrho (p_x)`$, as is shown in Fig. 2(a).
In contrast the separate control of $`p_x^2`$ and $`p_y^2`$ leads to an exact dynamics in equilibrium corresponding to the canonical probability density $`\varrho (p_x)`$, as shown in Fig. 2(b).
## V Nonequilibrium
We now apply an external electric field $`\stackrel{}{\epsilon }`$ parallel to the $`x`$–axis. The Nosé–Hoover thermostat and the related models then lead to well defined nonequilibrium steady states with constant average energy of the particle.
### A Probability density $`\varrho (p_x)`$
The probability density $`\varrho (p_x)`$ for the Nosé–Hoover thermostat for $`\epsilon =0.5`$ is presented in Fig. 3(a). For $`\tau ^2=0.01`$ the density shows some remains of the deformed microcanonical density of the Gaussian thermostat, whereas for $`\tau ^2=1`$ and $`\tau ^2=1000`$ the density becomes similar to the density of thermostating by deterministic scattering, which is related to a canonical distribution .
Fig. 3(b) shows $`\varrho (p_x)`$ for the three variations of the Nosé–Hoover thermostat. We have chosen here $`T=0.60029`$ which corresponds to the temperature in the bulk for thermostating by deterministic scattering at a parametric temperature of $`T=0.5`$. The density of the Nosé–Hoover thermostat with field dependent coupling to the reservoir is very close to the density of thermostating by deterministic scattering. The density of the Nosé–Hoover thermostat with separate control of $`p_x^2`$ and $`p_y^2`$ looks like a superposition of the densities of the Nosé–Hoover thermostat for small and for large $`\tau `$.
In all models the mean value of $`\varrho (p_x)`$ is positive, indicating a current parallel to the field direction.
### B Conductivity
The conductivity $`\sigma =<p_x>/\epsilon `$ for the Nosé–Hoover thermostat is shown in Fig. 4. For $`\tau ^2=0.01`$ the curve is very similar to the conductivity of the Gaussian thermostated Lorentz gas . For $`\tau ^2=1000`$ the curve is more stretched along the $`\epsilon `$–axis and globally not decreasing anymore. In contrast, the conductivity as obtained from thermostating by deterministic scattering is a globally decreasing function. According to the Einstein relation, in the limit $`\epsilon \to 0`$ $`\sigma `$ should approach the equilibrium diffusion coefficient $`D`$ of the periodic Lorentz gas, which for $`w=0.2361`$ has the value $`D\approx 0.21`$ . This is hard to see for $`\tau ^2=1000`$, because for $`\epsilon \to 0`$ the probability density changes drastically from a smooth, canonical-like density to a non-smooth density. It is also difficult to see any linear response in computer simulations, as has already been discussed for the Gaussian thermostated Lorentz gas and for thermostating by deterministic scattering in Ref. .
### C Attractor
Fig. 5 shows the Poincaré section of $`(\beta ,\mathrm{sin}(\gamma ))`$ at the moment of the collision for the Nosé–Hoover thermostat, for the three variations and for thermostating by deterministic scattering. (Because the symbols for the angles vary in the literature we mention again that the angle $`\beta `$ gives the location of the collision relative to the field direction and $`\gamma `$ is the angle of incidence.) Again, we have chosen $`T=0.60029`$ to compare the results with thermostating by deterministic scattering. For all models the structure of the attractor is qualitatively the same as the structure of the fractal attractor obtained for the Gaussian thermostated Lorentz gas . However, the fine structure varies with the models and with the response time $`\tau `$. For the Nosé–Hoover thermostat with $`\tau ^2=0.01`$, see Fig 5(a), the structure is most pronounced, whereas for the control of $`p_x^2`$ and $`p_y^2`$ separately, see Fig. 5(e), the structure is least visible.
### D Bifurcation diagram
The angle $`\beta `$ at the moment of the collision is presented as a function of the field strength for the Nosé–Hoover thermostat in Fig. 6. For all three values of $`\tau ^2`$ the attractor is phase space filling for small field strengths $`\epsilon <1.3`$ and contracts onto a periodic orbit with increasing field strength. For $`\tau ^2=0.01`$ the scenario is similar to the one of the Gaussian thermostated Lorentz gas. (In Fig. 10 of Ref. the angle of flight after a collision is plotted, and the field is parallel to the negative x-axis.) For $`\tau ^2=1`$ the scenario loses its richness, but it gets a bit more complicated again for $`\tau ^2=1000`$. In contrast to the Nosé–Hoover thermostat, the attractor of thermostating by deterministic scattering remains phase space filling even for large $`\epsilon `$ .
The bifurcation diagram in the dissipative limit $`\tau \to \mathrm{\infty }`$ of Eqs. (3) with a constant friction coefficient $`\zeta _c`$ shows an inverse scenario to Fig. 6, as is presented in Fig. 7. For small $`\epsilon `$ the trajectory is a so–called creeping orbit , then it changes to a periodic orbit, and for large $`\epsilon `$ the attractor becomes phase space filling. By increasing $`\zeta _c`$ the strength of the dissipation increases, with the consequence that the onset of chaotic behavior starts at higher field strengths. With respect to the numerics we remark that, related to the different shapes of the attractors obtained by varying $`\zeta _c`$ in the dissipative limit, the duration of the transient behavior of the Nosé–Hoover thermostat grows drastically for $`\tau \to \mathrm{\infty }`$.
Figs. 8-10 show the bifurcation diagrams for the three variations of the Nosé–Hoover thermostat. In general one observes that even for high field strengths chaotic regions appear, in contrast to the Nosé–Hoover thermostat.
For the variation with the additional control of $`p^4`$ the attractor covers a bounded $`\beta `$–interval for $`\epsilon \gtrsim 3`$. For these field strengths the trajectory is a creeping orbit.
Fig. 10(b) depicts the attractor for the variation with separate control of $`p_x^2`$ and $`p_y^2`$ under different response times for the $`x`$– and $`y`$–direction. Since the field acts in the $`x`$–direction we have chosen a strong coupling, $`\tau _x^2=0.1`$, for the $`x`$–direction and a weak coupling, $`\tau _y^2=1000`$, for the $`y`$–direction. Up to a field strength $`\epsilon \approx 6.5`$ no periodic window has been found. The bifurcation diagram for these parameters most strongly deviates from the Nosé–Hoover and Gaussian thermostated Lorentz gas and is closest to the bifurcation diagram of thermostating by deterministic scattering. However, in contrast to thermostating by deterministic scattering the attractor is more concentrated around $`\beta \approx \pi `$.
We detected a numerical problem for the second and the third variation at several values of $`\tau `$. One observes that sporadically after large time intervals there appears a creeping orbit with a very low velocity of the particle, which is difficult to handle numerically. Whether this creeping orbit is the stationary state could probably be clarified by calculating Lyapunov exponents .
### E Thermodynamic entropy production and phase space volume contraction
A characteristic property of the Nosé–Hoover thermostat as well as of the Gaussian isokinetic thermostat is that the thermodynamic entropy production is equal to the phase space volume contraction rate . This equality can as well easily be verified for the variation with the additional control of $`p^4`$ and for the variation with separate control of $`p_x^2`$ and $`p_y^2`$.
On the other hand it does not hold for the Nosé–Hoover thermostat with field dependent coupling to the reservoir. From Eqs. (7) one gets for the phase space contraction rate of this model $`<\mathrm{div}\dot{\mathrm{\Gamma }}>=-(2+\epsilon )<\zeta >`$ where $`\dot{\mathrm{\Gamma }}=(\dot{\stackrel{}{q}},\dot{\stackrel{}{p}},\dot{\zeta })`$. The precise relation between thermodynamic entropy production $`\dot{S}_{TD}=\epsilon <p_x>/T`$ and $`<\mathrm{div}\dot{\mathrm{\Gamma }}>`$ for this variation is obtained by calculating the energy balance between subsystem and reservoir:
$$E=\frac{p^2}{2}+T\tau ^2\zeta ^2$$
(19)
is the total energy, whose change in a nonequilibrium steady state should on average be zero,
$$<\frac{dE}{dt}>=0.$$
(20)
Inserting Eqs. (7) with $`\epsilon _y=0`$ into Eq. (20) leads to
$$\frac{\epsilon _x<p_x>}{T}=\frac{\epsilon _x<p_x^2\zeta >}{T}+2<\zeta >.$$
(21)
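This balance can be checked symbolically; the following sketch (ours, using sympy, not part of the original work) reproduces the step from Eqs. (4)–(7) and (19) to Eq. (21):

```python
import sympy as sp

px, py, zeta, eps, T, tau = sp.symbols('p_x p_y zeta epsilon_x T tau', real=True)

# Equations of motion of the field-dependent variation, Eqs. (4)-(7), eps_y = 0:
pxdot = eps - (1 + eps) * zeta * px
pydot = -zeta * py
zetadot = ((px**2 + py**2) / (2 * T) - 1) / tau**2

# Total energy, Eq. (19):
E = (px**2 + py**2) / 2 + T * tau**2 * zeta**2
dEdt = sp.expand(px * pxdot + py * pydot + sp.diff(E, zeta) * zetadot)
print(dEdt)  # -> epsilon_x*p_x - epsilon_x*p_x**2*zeta - 2*T*zeta
# Setting the average of this to zero and dividing by T gives Eq. (21).
```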
Numerical simulations have shown that $`p_x^2`$ and $`\zeta `$ are not independent quantities in nonequilibrium. If $`p_x^2`$ and $`\zeta `$ were independent and equipartitioning were fulfilled, i.e., $`<p_x^2>=T`$, which is only the case in equilibrium, then Eq. (21) would lead to an identity between thermodynamic entropy production and phase space volume contraction. The results for $`<\mathrm{div}\dot{\mathrm{\Gamma }}>`$ and for $`\dot{S}_{TD}`$ as obtained from computer simulations for this system are presented in Table I at different $`\tau `$ and $`\epsilon `$. More details of the entropy production in this variation and in a Gaussian thermostat with the same property are discussed in Ref. .
## VI conclusions
We have investigated the Nosé–Hoover thermostat and three variations of it for the periodic Lorentz gas. All models are time reversible and lead to well defined nonequilibrium steady states with a constant average kinetic energy of the moving particle.
As a typical characteristic of deterministic and time reversible thermostating mechanisms it has been confirmed that in nonequilibrium all these systems contract onto attractors similar to the fractal attractor of the Gaussian thermostated Lorentz gas.
In equilibrium only the variation of the Nosé–Hoover thermostat with separate control of $`p_x^2`$ and $`p_y^2`$ leads to an exact dynamics with $`\varrho (p_x)`$ being canonical, just like the corresponding density of thermostating by deterministic scattering.
In nonequilibrium the attractor of the Nosé–Hoover thermostat contracts onto a periodic orbit for higher field strength, analogous to the Gaussian thermostated Lorentz gas. However, the detailed scenario depends on the value of the response time $`\tau `$. Concerning the probability density in equilibrium, the attractor, and the conductivity in nonequilibrium, the properties of the standard Nosé–Hoover thermostat in the periodic Lorentz gas are closer to the properties of the Gaussian thermostat than to the properties of thermostating by deterministic scattering, although the Nosé–Hoover thermostat and thermostating by deterministic scattering share the property of keeping the energy of the particle on average constant in nonequilibrium, and for both thermostats the probability densities for the velocity components are related to a canonical probability density.
Concerning the bifurcation diagrams we find that the dynamics of the three variations of the Nosé–Hoover thermostat are in general “more chaotic”. Even for higher field strengths there exist pronounced chaotic regions. The separate control of $`p_x^2`$ and $`p_y^2`$ leads to a phase space filling attractor up to high field strengths in nonequilibrium. The equilibrium properties and the bifurcation diagram in nonequilibrium of this model are qualitatively closest to the properties of thermostating by deterministic scattering in comparison to the other versions.
For the Nosé–Hoover thermostat with field dependent coupling to the reservoir the thermodynamic entropy production is generically not equal to the phase space volume contraction in contrast to the other models.
An important question would be to look for common properties of all deterministic and time reversible thermostats. So far, only the existence of a fractal attractor in nonequilibrium appears to be typical. A more detailed investigation of these thermostating mechanisms should in particular involve a quantitative comparison, as, e. g., by means of computing Lyapunov exponents. An answer to this question would be helpful for obtaining a general characterization of nonequilibrium steady states.
Acknowledgments: We dedicate this article to G. Nicolis, a champion of chaoticity, on occasion of his 60th birthday. K. R. and R. K. thank G. Nicolis for his continuous support and for the possibility to collaborate with him on problems of thermostating. This work has been started during the workshop “Nonequilibrium Statistical Mechanics” (Vienna, February 1999). The authors thank the organizers G. Gallavotti, H. Spohn and H. Posch for the invitation to this meeting. K. R. thanks the European Commission for a TMR grant under contract no. ERBFMBICT96-1193. R. K. acknowledges as well financial support from the European Commission. |
no-problem/9912/astro-ph9912355.html | ar5iv | text | # Unpulsed Optical Emission from the Crab Pulsar<sup>1</sup>
<sup>1</sup>Based on observations using the 6m telescope at the Special Astrophysical Observatory of the Russian Academy of Sciences in Nizhnij Arkhyz, Russia
## 1 Introduction
The Crab pulsar provides one of the best multiwavelength sources of magnetospheric emission from $`\gamma `$-rays to the radio regime and, as such, remains the gold standard for providing definitive empirical datasets with which to constrain existing theoretical models of such nonthermal emission. Throughout this entire frequency range, the pulsar’s light curve retains essentially the same morphology, being traditionally divided up into four distinct regions - the two peaks, the Bridge of emission between the peaks, and the ‘off’ region. This latter component was historically presumed to originate from the nebula, a reasonable assumption considering the intense beaming observed from this object.
Optically, the pulsar has been scrutinized ever since its initial discovery in the radio by Staelin & Reifenstein (1968). The pulsar is bright enough for effective single-pixel high speed photometry, and following its confirmation as an optical pulsar by Cocke et al. (1969), numerous such observations followed (e.g. Wampler et al. 1969, Kristian et al. 1970, Cocke and Ferguson 1974, Groth 1975a, Groth 1975b). These observations typically spanned the $`BVRI`$ wavebands at time resolutions of $`\sim `$ milliseconds, and as absolute reference timing was not possible, individual light curves per dataset were typically co-added in a least-squares fashion.
Despite the somewhat restricted data acquisition and analytical conditions associated with these observations, there was a consensus that the common arrival time of all these colored peaks was accurate to within 10$`\mu `$s; there were suggestions of morphological differences between the leading and trailing edges of various light curves, and the light curve was found to be strongly polarized as a function of rotational phase (Wampler et al. 1969).
Subsequent observations by Peterson et al. 1978 using a 2 dimensional (2-d) image photon counting camera in the $`UB`$ bands suggested that the supposed ‘off’ portion of the pulsar’s rotational cycle was in fact consistent with continuing emission from the pulsar, indicating the Crab was actually ‘on’ for the full rotational cycle. While these results were unprecedented at the time, deeper exposures combined with more rigorous image processing algorithms would have yielded more accurate estimates of the ‘off’ component’s flux and overall spectral form.
In Jones et al. 1981, Smith et al. 1988 and Smith et al. 1996, several dedicated phase-resolved $`V`$ & $`UV`$ polarimetric observations of the Crab pulsar using both ground based single-pixel photometers and the $`HST`$ High Speed Photometer yielded data indicating sharp swings in polarisation angle around both the peaks, in addition to some form of polarisation evolution in the Bridge region. Remarkably, the analysis also indicated a large polarisation component associated with the traditional ‘off’ phase of emission. The inference was that, combined with the earlier Peterson et al. results, the ‘off’ emission of the pulsar was consistent with some form of nonthermal, undoubtedly synchrotron related origin. However, the single-pixel based nature of these observations limited the possibility of accurately resolving the unpulsed component’s contribution in terms of polarisation, which may be expected to contain a substantial nebular component.
Ideally, one requires a high speed 2-d photometer in order to obtain acceptably significant signal to noise (S/N) datasets in several wavebands from which one might hope to photometrically isolate the various components of a phase-resolved light curve. With such systems, the effective photometer sky aperture can be reduced compared with conventional photometers, and effects such as telescope wobble can be entirely removed (Shearer et al. (1996)). Thus, photons chosen for analysis can be selected in software which can place an aperture matched (to maximise S/N) to the prevailing seeing and background conditions, and then isolate those barycentred photons within specifically chosen phase regions of the light curve. The TRIFFID high speed photometer, previously used in the detection of pulsations from both Geminga (Shearer et al. (1998)) and PSR B0656+14 (Shearer et al. (1997)) is ideal in this regard, as it makes use of a MAMA camera. In this communication we document the first attempts to photometrically isolate the Crab’s unpulsed ‘off’ component of emission in three color bands.
## 2 Observations and Analysis
Observations of the Crab pulsar were made over 5 nights between the 14th and 19th of January 1996, using the TRIFFID camera mounted at Prime Focus of the 6m telescope of the Special Astrophysical Observatory located in the Russian Caucasus. The primary targets of this observing run were the Geminga and PSR B0656+14 pulsars, thus the Crab observations were somewhat limited. Data was taken in the $`U`$, $`B`$ and $`V`$ Johnson bands. The plate scale was 0.22”/pixel at the $`MAMA`$ photocathode. For all nights observing the Crab, the atmospheric stability was good although there were several transits of high altitude cirrus. Table 1 shows a log of the observations. Each individual dataset was binned to form an integrated image, and from this, reference stars were chosen as guides for the image processing software. Flat fields were prepared using deep dome flats co-added with a number of sky flats, taken immediately after the observations. Image processing, incorporating a Wiener filter modified shift-and-add algorithm (Redfern et al. (1993)), followed, incorporating the derived flat field and correcting for telescope wobble and gear drift. This yielded full field images of the inner Crab nebula within which the pulsar and its stellar companions were registered.
For each dataset, all photons within a radius of 50 pixels of the Crab pulsar were extracted using the image processing software, and these time-stamps were then barycentred using the JPL DE200 ephemeris. A standard epoch folding algorithm was used to prepare light curves from the Jodrell Bank Crab ephemeris (Lyne & Pritchard, (1996)) by folding the barycentred time series. This yielded both light curves and a phase-resolved image within a specified phase range - in this initial case the full cycle. The number of phase bins used was 3000, yielding a bin resolution of $`\sim `$ 11 $`\mu `$s.
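A bare-bones sketch of such a folding step (our own illustrative code, not the TRIFFID pipeline; the ephemeris values below are rough placeholders):

```python
import numpy as np

def fold(times, t0, nu, nudot=0.0, nbins=3000):
    """Epoch folding: phase(t) = nu*(t - t0) + nudot*(t - t0)^2/2 (mod 1)."""
    dt = times - t0
    phase = (nu * dt + 0.5 * nudot * dt ** 2) % 1.0
    counts, edges = np.histogram(phase, bins=nbins, range=(0.0, 1.0))
    return counts, edges

# With the Crab's ~30 Hz rotation, 3000 bins correspond to ~11 microseconds.
rng = np.random.default_rng(0)
toy_times = np.sort(rng.uniform(0.0, 100.0, 50_000))        # stand-in photon list
lc, _ = fold(toy_times, t0=0.0, nu=29.9, nudot=-3.77e-10)   # illustrative ephemeris
```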
Following this, the s2 ($`V`$), s1 ($`U`$) & w4 ($`B`$) images were used respectively as ‘templates’ with which to re-orientate the other color datasets geometrically, so as to result in a set of identical integrated images for each dataset. Total summed errors in this shift-and-rotation technique were typically of order 0.1$`\%`$ in pixel units. For each colour band, each dataset was summed chronologically, yielding total $`U`$, $`B`$ and $`V`$ datasets.
Phase-resolved images based upon the approximate locations of the four principal morphological regions previously defined were obtained, as shown in Figure 1. In this figure, the phase regions defining the peaks and Bridge of emission match those defined previously by Eikenberry & Fazio 1997. It is clear that there is emission associated with the pulsar during what has been conventionally regarded as the ‘off’ phase - as had been originally indicated by Peterson et al. With these deep phase-resolved images, it is possible to apply the full arsenal of image processing techniques, and thus photometrically characterise the time-resolved nature of the pulsar’s emission, particularly for the ‘off’ region.
In order to do this, we must satisfactorily isolate the unpulsed component from the background-removed light curves, in such a way that we are satisfied that our denominated phase window samples only what is consistent with an unpulsed component. To isolate this ‘off’ region, standard image processing techniques were used to remove the Crab pulsar from each of the full cycle images in the $`UBV`$ bands. In effect, one fits an analytical point spread function (PSF) to the full cycle photometric image, and one then uses this PSF to firstly derive the flux associated with the full cycle image, and then to derive the fluxes associated with the other phase resolved images corresponding to the two peaks, the Bridge and the unpulsed component of emission. We now outline the approach in more detail.
For a given color band, the removal of the Crab pulsar from the full cycle image was performed via the daophot IRAF package, using the psf task to fit a PSF to the Crab pulsar stellar point source. This was then used as an input to the allstar task, which re-fits the PSF to the candidate stellar point source - in this first case, the full cycle Crab image - in order to accurately remove the candidate and, in so doing, determine both the flux and the error associated with this procedure. For the full cycle image, the removal was performed satisfactorily, as the deep exposures in $`UBV`$ provided good background statistics for the required fitting algorithms.
Using standard aperture photometry, the resulting Crab-removed image was then used to determine the total background flux within the fixed radius centred on the PSF derived centroid of the Crab point source. This net background flux was then used to correct the existing light curves. The procedure was repeated for each of the three color band datasets. In each case, the resulting light curve indicated evidence for residual emission during the presumed ‘off’ phase of emission, as can be seen in Figure 1.
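The background determination amounts to summing counts inside the fixed software aperture of the PSF-subtracted image; a minimal sketch (ours), assuming the image is a 2-d array of counts and the centroid comes from the PSF fit:

```python
import numpy as np

def aperture_counts(img, x0, y0, radius=50.0):
    """Total counts inside a circular software aperture centred on (x0, y0)."""
    y, x = np.indices(img.shape)
    mask = (x - x0) ** 2 + (y - y0) ** 2 <= radius ** 2
    return img[mask].sum(), mask.sum()   # (counts, number of pixels)
```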
It is clearly necessary to determine the duration of the true ‘off’ phase of emission, namely that consistent with emission from a constant source. Perhaps more critically, we want to ensure that this emission is not contaminated by the flux associated with the trailing edge of Peak 2 and the leading edge of Peak 1. In order to do this, we attempted to isolate that part of the corrected light curve within this ‘off’ phase region whose phase-averaged flux is, to first order, consistent with a constant source of emission. This was done by starting with the largest phase range in terms of bins defining the traditional ‘off’ region, computing the total flux within this range, and then determining the idealised average flux level per bin. The deviation of the observed flux levels per bin from this average over the defined phase region was examined using a $`\chi ^2`$ test. This process was repeated iteratively, by shrinking the test phase window (and hence the number of bins) and sweeping it through the initially denominated ‘off’ phase region. In this way, at the 95$`\%`$ level of confidence, the chosen bin range was (0.75 - 0.825) of phase, based on an analysis of the three color band light curves. Within this phase region we are satisfied that the observed flux is consistent with emission from a constant source, at this confidence level. We note that this bin range is marginally smaller than that defined by Percival et al. 1993, on analysis of High Speed Photometer data taken of the Crab pulsar from the Hubble Space Telescope, using a similar analytical technique.
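A sketch of this sliding-window constancy test (our own reconstruction of the procedure just described, assuming Poisson errors per bin):

```python
import numpy as np
from scipy.stats import chi2

def widest_constant_window(lc, lo=0.70, hi=0.90, conf=0.95):
    """Return the widest phase window inside [lo, hi) whose bin counts are
    consistent with a constant flux at the given confidence level."""
    n = len(lc)
    i0, i1 = int(lo * n), int(hi * n)
    for width in range(i1 - i0, 3, -1):          # try the widest windows first
        for start in range(i0, i1 - width + 1):
            w = lc[start:start + width].astype(float)
            stat = ((w - w.mean()) ** 2 / w.mean()).sum()   # Poisson chi^2
            if stat < chi2.ppf(conf, df=width - 1):
                return start / n, (start + width) / n
    return None
```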
With this ‘off’ region so defined, the corresponding 2-d images were acquired for the three color bands. Application of the IRAF allstar task using the empirically derived full cycle PSF for each of the $`UBV`$ images successfully removed the faint stellar point source visible in each, and from this the flux was estimated. In addition, a local PSF was constructed per phase-resolved image, and the fitting-and-extraction process was performed using both local and full-cycle determined PSFs. This was done for completeness, although the full cycle PSFs were found to be sufficient and more ideal, being based upon a higher S/N source and substantially diminished background noise (in comparison to the phase-resolved images). This would seem to indicate that sharp nebular features, which might be expected to ”contaminate” the off pulse PSF more than that of the on pulse PSF, do not contribute significantly to these results.
The original removal and estimation of the relative fluxes from the full cycle $`UBV`$ datasets yielded a set of reference count rates. All subsequent flux estimates for specific phase regions were then normalized to these reference count rates per color band. Limited prior observations of several Landolt reference stars in the PG0220 field (Landolt (1992)) provided calibration magnitudes which indicated integrated Crab fluxes in agreement with those expected. Using the $`UBVR`$ ground-based fluxes of Percival et al. 1993 as reference points, we thus renormalized our previously determined fractional fluxes. This reference data was based on ground based observations of the Crab pulsar made at the 2.1m telescope at McDonald Observatory in January 1992, and corrected for interstellar extinction using $`E(B-V)`$ = 0.51 $`\pm `$ 0.04 (Savage & Mathis 1978).
Table 2 details this phase averaged flux, and in Table 3 we show the derived fractional fluxes for the unpulsed component as determined by this analysis, in addition to the other light curve components. In Table 4 we have reproduced the estimated power-law parameter $`\alpha `$ determined via a weighted least-squares analysis of each individual spectral dataset. We have re-calculated $`\alpha `$ for both the full range & $`UBV`$ Percival et al. datasets to compare with the other power-law fits. We note that one would estimate a change in flux of $`\sim `$ 0.01 over the four years between the reference integrated flux and our observations, following the phenomenologically derived $`\dot{L_V}`$ $`\sim `$ 0.003 mag/yr (Pacini (1971)), empirically confirmed most recently by Nasuti et al. (1997), which is within the error bounds quoted.
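The index estimates in Table 4 amount to a weighted least-squares straight-line fit in log–log space; a minimal sketch (ours) of such a fit:

```python
import numpy as np

def spectral_index(nu, flux, err):
    """Weighted least-squares fit of F_nu proportional to nu^alpha."""
    x, y = np.log10(nu), np.log10(flux)
    w = (flux * np.log(10.0) / err) ** 2        # 1/sigma^2 in log space
    A = np.vstack([x, np.ones_like(x)]).T
    cov = np.linalg.inv(A.T @ (A * w[:, None]))
    alpha, const = cov @ A.T @ (w * y)
    return alpha, np.sqrt(cov[0, 0])            # index and its 1-sigma error
```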
## 3 Discussion
The question of the unpulsed component of emission for the Crab pulsar has always remained somewhat challenging, as one is confronted with temporal problems and the nebular contribution. With this 2-d $`MAMA`$ data, definitive flux estimates are attainable for the first time. In Peterson et al. 1978 (and elsewhere Miller & Wampler, 1969), the estimated total unpulsed emission is compared with the peak intensity - rather a relative area in terms of phase allocation at our level of temporal resolution - and also with the mean pulsed flux. Peterson et al. applied rather novel techniques in the image processing of their data, obtained via the use of a 6.2ms time resolved Image Photon Counting System camera. Using an iterative least-squares semi-empirical PSF, they determined residuals which, when smoothed, yielded a background image which was subtracted from the star field, and the same method estimated the star intensities. Peterson et al. did not present errors associated with their eventual tabulated results. We note the 6.2ms absolute timing resolution. This is some $`\sim `$ 20$`\%`$ of the light curve, so accurate phase resolution may not have guaranteed accurate photometry continuously in phase. Accurate timing, phase resolution and estimation of the total aperture background are all guaranteed at unprecedented resolution with our datasets. From the background corrected light curves, we can determine the incident flux within the designated ‘steady’ region of emission, and then compare it directly with both the total pulsar flux and the pulsed-only flux. These differences, presented in terms of magnitude change, are shown in Table 5.
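For reference, the tabulated magnitude changes follow from the usual definition

$$\mathrm{\Delta }m=-2.5\mathrm{log}_{10}\left(\frac{F_{\mathrm{unpulsed}}}{F_{\mathrm{ref}}}\right),$$

with $`F_{\mathrm{ref}}`$ the full cycle or pulsed-only flux as appropriate.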
The tabulated data suggests that the original estimates by Peterson et al. 1978 were optimistic by typically at least a magnitude, but this is understandable bearing in mind the rather difficult data and analysis they were working with. There is agreement to some extent with the trend - the early datasets suggested a greater ratio in the $`B`$ band in comparison to the $`U`$ band, yet no error estimates are included. No $`V`$ data was analysed at that time. We note that if one were to assume that the unpulsed emission was restricted to a specific phase region, and not assumed to exist for the entire rotation, then whilst the ratios would drop further, they would still imply a similar spectral form.
In Figure 2 we reproduce the full Percival et al. (1993) derived corrected flux distribution with the unpulsed flux estimates implied from the tabulated ratios. We have also included the derived flux fractions for peaks 1 and 2, and the Bridge of emission, which are considered elsewhere in some detail (Golden et al., 1999). It seems clear that one can represent the unpulsed emission spectrally in terms of a steeper power-law with $`\alpha `$ $`\sim `$ -0.60 $`\pm `$ 0.37, in contrast to the rather flat $`\alpha `$ $`\sim `$ 0.11 $`\pm `$ 0.08 associated with the full integrated emission.
## 4 Conclusion
The resolved unpulsed flux component, whether within its defined ‘off’ region or normalized to the pulsar’s full cycle, suggests a power-law form. There are two options - either the emission is real and of a nonthermal nature, or the emission is false, a consequence of some form of photocathode or other artifact intrinsic to the $`MAMA`$ photon counting detector. The latter would manifest itself as timing irregularities, which were not evident under analysis. Photon timeseries taken from the pulsar and other stars in the field were tested for deviations from a Poissonian distribution at varying timescales, and there was no evidence for such a deviation at the 99% confidence level, atmospheric variations notwithstanding. In this, we confirm the earlier work of Smith et al. (1978). Consequently we may conclude that the emission is from the pulsar.
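One way to implement such a test (a sketch of ours, not the original analysis code): for a steady Poisson source the inter-arrival times are exponentially distributed and binned counts have a Fano factor of unity, so

```python
import numpy as np
from scipy.stats import kstest

def poisson_tests(times, bin_width=0.01):
    """KS test on rescaled photon inter-arrival gaps, plus a variance/mean
    (Fano) check on binned counts at the chosen timescale (assumed inputs:
    a 1-d array of arrival times in seconds)."""
    t = np.sort(np.asarray(times, dtype=float))
    gaps = np.diff(t)
    u = gaps * len(gaps) / gaps.sum()            # rescale to unit mean rate
    ks = kstest(u, 'expon')                      # exponential inter-arrivals?
    counts, _ = np.histogram(t, bins=np.arange(t[0], t[-1], bin_width))
    fano = counts.var() / counts.mean()          # ~1 for Poisson at this timescale
    return ks, fano
```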
This unpulsed emission has been more commonly observed in the higher (X-ray & $`\gamma `$-ray) regimes, and scrutinised in some detail. In X-rays, the unpulsed component is difficult to discern amid the intense nebular emission. Becker & Aschenbach 1995 attempted to analyse $`ROSAT`$ HRI data, ostensibly to determine limits to the pulsar’s thermal emission during the unpulsed phase - they concluded with a realistic upper limit to $`T_{surface}`$ for the Crab’s temperature. This does seem to suggest that in X-rays, the hot Crab and the plerion would dominate the emission.
For detected unpulsed $`\gamma `$-ray emission, the existing models place the emission either just beyond the magnetosphere or far out in the plerion, namely the Outer-Gap model of Cheung $`\&`$ Cheng 1994, and the pulsar-wind model of De Jager $`\&`$ Harding 1992. Two principal predictive facts concern us regarding these two models; the first is that the Pulsar-Wind model implies an emission region large in extent, perhaps up to $`\sim `$ 20”, whereas the Outer-Gap model requires emission to occur in the immediate vicinity of the pulsar, thus having a resolution of order $`\sim `$ 1”. Secondly, the Outer-Gap model implicitly expects a correlation between the pulsed and unpulsed emission, whether it be temporal or spectral in nature (Cheung $`\&`$ Cheng 1994).
Based upon our resolved functional form for the unpulsed component we can reject the De Jager & Harding model, as the source is undeniably localised to the pulsar. The conclusion is that the emission is in some way magnetospherically related. We cannot accept the opposing Cheng & Cheung model for a number of reasons. The emission mechanism is based on the original outer-gap magnetospheric model Cheng, Ho & Ruderman, (1988), and in this case is a result of the cross-streaming of the two opposing outer gaps’ primary & secondary photon streams. Here the inner streaming IR-optical photons from the far-side gap collide with the primary $`\gamma `$-rays & e<sup>±</sup> pairs from the near-side gap at some distance ($`\sim `$ 3$`R_{LC}`$) from the magnetosphere. These interactions result in isotropically radiated high energy emission, predominantly from X-rays (MeV) to $`\gamma `$-rays (GeV – TeV). Cheung & Cheng 1994 point out that in the low (keV - 50 MeV) range, their model predicts flux levels much lower than those observed, and that other mechanisms not accounted for (they suggest synchrotron self-Compton mechanisms) must be present. It is clear that any IR-optical photons that are emitted will be predominantly pulsed in nature (as in the original Cheng, Ho & Ruderman, (1988) ansatz), as the process advocated would be expected to be preferentially luminous at the higher frequency ranges.
There have been other attempts to explain the observed steady emission (Peterson et al. 1978); they are that
* the unpulsed component is actually pulsed emission emitted from points spatially extended in the magnetosphere, and it is manifested to us following varying time-of-flights and relativistic effects,
* the pulses could actually possess trailing & leading edges that effectively result in fully pulsed emission,
* the unpulsed component is a result of the reprocessing or reflection of pulsed emission from material near the pulsar (such as a nebulosity, localised knot etc.).
The spatially extended hypothesis above is commensurate with the numerical model framework of Romani and Yadigaroglu 1995, which requires that emission occurs from such a similar topology, with similar arguments for the resulting formation of the light curve morphologies. However, this model was based on a number of questionable assumptions, as noted by the paper’s authors. More critically, Eikenberry & Fazio 1997 have unambiguously shown evidence for significant intra-color phase differences between the leading and trailing edges from $`\gamma `$-rays to the IR, consistent with a localised origin. These caveats have made such a theoretical basis difficult.
We have already noted the similar spectral forms of both the Bridge and unpulsed components of emission, as is evident from Table 4. It is our contention that the observed unpulsed component of emission has its source in a similar electron population/magnetic field/Lorentz factor environment to the Bridge component. The change in power-law exponents from the peaks to the Bridge/unpulsed component may be a result of either a change in the emitting e<sup>±</sup> energy distribution or modification (due to scattering or absorption processes) of the emitted photon flux. In either case this would be consistent with emission occurring from a region closer to the neutron star within the magnetosphere, particularly if we were to assume a common e<sup>±</sup> energy distribution originating above a polar cap, which would be expected to evolve in this way as the e<sup>±</sup> population streams radially along the open field lines. Ultimately, whether both Bridge & unpulsed emission are associated with the main peaks, or whether they are spatially & energetically separate, is at this stage unresolved; viewing geometry issues may be a major factor.
We recall from Smith et al. 1988 that the unpulsed component could be regarded as an extended source of emission, spread in longitude in proximity to the light cylinder. The observed flux would then be the result of emission from field lines at and beyond the limit of the polar cone regions, including both the leading and trailing edges of the cores. These field lines would be expected to be affected by aberration and tend towards a toroidal direction - Smith et al. 1988 note that the position angle of the unpulsed (and indeed Bridge) component is similar to that of the mean of the peaks, namely $`130^o`$. Thus these unusual polarisation effects noted by Smith et al. 1988 and others indicate similar behaviour for both Bridge & unpulsed regions, perhaps substantiating a belief that they are in some way phenomenologically linked. Another hypothesis is that the observed unpulsed component represents some fraction of the original synchrotron emitting photons scattered by the local $`e^\pm `$ particle density along various path lengths within the magnetosphere, resulting in an apparently isotropically radiated emission component. However, one might expect an essentially randomised polarisation nature to these scattered photons, which is not reflected in previous phase-resolved polarimetry.
Clearly, we require further unpulsed estimates in the $`UV`$ and $`RIR`$ wavebands in order to characterise the manner in which this emission component correlates with that of the dominant pulsed emission - most interestingly in the vicinity of $`\sim `$ $`10^{14}`$ Hz, where a rollover inconsistent with conventional synchrotron self-absorption is apparent. Such estimates would provide constraints on the existing power-law fits, consolidating our contention that the unpulsed component of emission is steeper than that for the integrated spectral index. Perhaps of even greater urgency would be the definitive acquisition of polarimetric photometry of the unpulsed component with the nebular content removed, so as to finally assess a possible link between it and the Bridge of emission. Such future work could provide yet more critical empirical constraints to the nascent field of numerical magnetospheric optical emission models.
Acknowledgements
The authors wish to thank R. Butler for assistance with the photometric analysis and M. Cullum for provision of the ESO MAMA detector. The support of Enterprise Ireland, the Irish Research and Development agency, is gratefully acknowledged. This work was supported by the Russian Foundation of Fundamental Research, the Russian Ministry of Science and Technical Politics, and the Science-Educational Centre ”Cosmion”, and under INTAS Grant No. 96-0542. |
no-problem/9912/hep-ph9912458.html | ar5iv | text | # Single Top Quark Production at the LHC: Understanding Spin
## Abstract
We show that the single top quarks produced in the $`Wg`$-fusion channel at a proton-proton collider at a center-of-mass energy $`\sqrt{s}=14\mathrm{TeV}`$ possess a high degree of polarization in terms of a spin basis which decomposes the top quark spin in its rest frame along the direction of the spectator jet. A second useful spin basis is the $`\eta `$-beamline basis, which decomposes the top quark spin along one of the two beam directions, depending on which hemisphere contains the spectator jet. We elucidate the interplay between the two- and three-body final states contributing to this production cross section in the context of determining the spin decomposition of the top quarks, and argue that the zero momentum frame helicity is undefined. We show that the usefulness of the spectator and $`\eta `$-beamline spin bases is not adversely affected by the cuts required to separate the $`Wg`$-fusion signal from the background.
preprint: Fermilab–Pub–99/361-T McGill/99–39 SLAC-PUB-8317 hep-ph/9912458
One of the many physics goals of the CERN Large Hadron Collider (LHC) program is a detailed study of the top quark. With a measured mass of 173.8 $`\pm `$ 5.2 GeV, the top quark is by far the heaviest known fermion, and the only known fermion with a mass at the electroweak symmetry-breaking scale. Thus, it is hoped that a detailed study of how the top quark couples to other particles will be of great utility in determining if the Standard Model mechanism for electroweak symmetry-breaking is the correct one, or if some type of new physics is responsible. Angular correlations among the decay products of polarized top quarks provide a useful handle on these couplings. One consequence of the large top quark mass is that the time scale for the top quark decay, set by its decay width $`\mathrm{\Gamma }_t`$, is much shorter than the typical time required for QCD interactions to randomize its spin: a top quark produced with spin up decays as a top quark with spin up. The Standard Model $`VA`$ coupling of the $`W`$ boson to the top quark leaves an imprint in the form of strong angular correlations among the decay products of the top quark.
The purpose of this letter is to demonstrate that single top quark production in the $`Wg`$ fusion channel at LHC energies provides a copious source of polarized top quarks. Although possessing a larger production cross section, top quark pairs at the LHC do not dominantly populate a single spin configuration in any basis, because the initial state is primarily $`gg`$. On the other hand, the $`Wg`$ fusion channel is the largest source of single top quarks at the LHC. At the most basic level, $`Wg`$ fusion is an electroweak process, with the produced top quarks coupled directly to a $`W`$ boson. Therefore, it is not surprising to learn that these top quarks are strongly polarized. However, as has been shown in studies for other colliders, the appropriate spin basis for the top quark is not the traditional helicity basis. The essential point is that unless the particle whose spin is being studied is produced in the ultrarelativistic regime, there is no reason to believe that the helicity basis will provide the simplest description of the physics involved. Top quarks produced in $`pp`$ collisions via $`Wg`$ fusion at a center-of-mass energy $`\sqrt{s}=14\mathrm{TeV}`$ typically possess a speed of only $`\beta \approx 0.6`$ in the zero momentum frame (ZMF). Furthermore, the helicity of a massive particle is frame-dependent: the direction of motion of the top quark changes as we boost from frame to frame. This is significant, since, as we shall show, it is not possible to unambiguously define the ZMF. Thus, although we can pin down the ZMF well enough to say that the typical speed of the top quarks is $`\beta \approx 0.6`$ in that frame, we cannot do so with the precision required to compute the top quark spin decomposition in the ZMF helicity basis. Instead, we are left with the options of measuring the top quark helicity in the laboratory frame (LAB helicity basis), or using some other basis. Fortunately, it is simple to construct a spin basis in which well over 90% of the top quarks are produced in one of the two possible spin states.
We begin by outlining the computation of the single top quark production cross section, which is shown schematically in Fig. 1. The calculation which we will use for our spin analysis may be described as “leading order plus resummed large logs.” For simplicity, we do not include the additional tree-level $`2\to 3`$ and one-loop $`2\to 2`$ diagrams which would be required for a full next-to-leading order computation. The neglected contributions turn out to be numerically small (about 2.5% of the total) at LHC energies.
Early calculations of the $`Wg`$ fusion process were based solely on the $`2\to 3`$ diagrams of Fig. 1. These diagrams are dominated by the configuration where the final state $`\overline{b}`$ quark is nearly collinear with the incoming gluon. In fact, they become singular as the mass of the $`b`$ quark is taken to zero. This mass singularity appears as the large logarithm $`\mathrm{ln}(m_t^2/m_b^2)`$ (more precisely, this logarithm reads $`\mathrm{ln}[(Q^2+m_t^2)/m_b^2]`$, where $`Q^2`$ is the virtuality of the $`W`$ boson). Furthermore, at each order in the strong coupling, there are logarithmically enhanced contributions, converting the perturbation expansion from a series in $`\alpha _s`$ to one in $`\alpha _s\mathrm{ln}(m_t^2/m_b^2)`$. To deal with this situation, a formalism which sums these collinear logarithms to all orders by introducing a $`b`$ quark parton distribution function has been developed and subsequently applied to $`Wg`$ fusion . The large logarithms which caused the original perturbation expansion to converge slowly are resummed to all orders and absorbed into the $`b`$ quark distribution, which turns out to be perturbatively calculable. Once the $`b`$ quark distribution has been introduced, we must reorder perturbation theory, and begin with the $`2\to 2`$ process shown in Fig. 1a. The $`2\to 3`$ process then becomes a correction to the $`2\to 2`$ contribution. However, because the logarithmically enhanced terms within the $`2\to 3`$ contribution have been summed into the $`b`$ quark distribution, there is overlap between the $`2\to 2`$ and $`2\to 3`$ processes: simply summing their contributions will result in overcounting. To account for this, we should subtract that portion of the $`2\to 3`$ diagram where the gluon splits into a (nearly) collinear $`b\overline{b}`$ pair. Schematically, we indicate this by the diagram in Fig. 1b. Equivalently, we should subtract the first term from the series of collinear logarithms which were summed to produce the $`b`$ quark distribution. This point of view is reflected by the prescription for computing Fig. 1b: we simply use the $`2\to 2`$ amplitude, but we replace the $`b`$ quark parton distribution function with the (lowest-order) probability for a gluon to split into a $`b\overline{b}`$ pair:
$$b_0(x,\mu ^2)=\frac{\alpha _s(\mu ^2)}{2\pi }\mathrm{ln}\left(\frac{\mu ^2}{m_b^2}\right)\int _x^1\frac{dz}{z}P_{qg}(z)g(\frac{x}{z},\mu ^2).$$
(1)
Eq. (1) contains the DGLAP splitting function
$$P_{qg}(z)=\frac{1}{2}[z^2+(1-z)^2].$$
(2)
The total single top quark production cross section then consists of the $`2\to 2`$ process minus the overlap plus the $`2\to 3`$ process. As is emphasized in Ref. , the division among the three kinds of contributions is arbitrary and depends upon our choice of the QCD factorization scale. Different choices in factorization scale correspond to a reshuffling of the contributions among the three terms.
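To make the overlap subtraction concrete, the sketch below evaluates $`b_0(x,\mu ^2)`$ of Eq. (1) by simple numerical quadrature. It is only a schematic illustration: the toy gluon density, the frozen value of $`\alpha _s`$, and the $`b`$ quark mass are assumptions made here for the example, not the CTEQ5HQ inputs used for the cross sections reported in this paper.

```python
import numpy as np

M_B = 4.75       # b-quark mass in GeV (assumed value for this sketch)
ALPHA_S = 0.118  # strong coupling, frozen here for simplicity

def P_qg(z):
    """DGLAP g -> q splitting function of Eq. (2)."""
    return 0.5 * (z**2 + (1.0 - z)**2)

def g_toy(x):
    """Toy gluon density g(x) ~ (1 - x)^5 / x, standing in for a real fit."""
    return 3.0 * (1.0 - x)**5 / x

def b0(x, mu2, n=2000):
    """Lowest-order effective b-quark distribution, Eq. (1)."""
    z = np.linspace(x, 1.0, n)
    f = P_qg(z) * g_toy(x / z) / z
    dz = z[1] - z[0]
    integral = dz * (f.sum() - 0.5 * (f[0] + f[-1]))  # trapezoid rule
    return ALPHA_S / (2.0 * np.pi) * np.log(mu2 / M_B**2) * integral

# Effective b density at x = 0.01 for a scale of order the top mass:
print(b0(0.01, 175.0**2))
```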
The production cross sections for single $`t`$ and $`\overline{t}`$ quarks will be unequal at the LHC. An initial state $`u`$, $`\overline{d}`$, $`\overline{s}`$, or $`c`$ quark is required for $`t`$ production, whereas $`\overline{t}`$ production requires an initial state $`\overline{u}`$, $`d`$, $`s`$ or $`\overline{c}`$ quark. Because the LHC is a $`pp`$ collider and the protons contain more valence $`u`$ quarks than $`d`$ quarks, we expect more $`t`$ quarks than $`\overline{t}`$ quarks. This expectation is met by the total cross sections we obtain: 159 pb for $`t`$ production and 96 pb for $`\overline{t}`$ production. (All of the cross sections reported in this paper were computed using the CTEQ5HQ parton distribution functions, two-loop running $`\alpha _s`$, and the factorization scales advocated in Ref. .) Table I summarizes the contributions from each flavor of light quark in the initial state. We see that for $`t`$ production, the initial state contains an up-type quark 80% of the time, while for $`\overline{t}`$ production, the initial state contains a down-type quark 69% of the time. In the following discussion, we will talk in terms of the dominant initial states, although when presenting the final spin breakdowns, all flavors will be included.
We are now ready to discuss the spin of the top quarks produced at LHC energies, beginning with the $`2\to 2`$ contributions. For a final state $`t`$, the dominant $`2\to 2`$ process is $`ub\to dt`$. In the ZMF of the initial state partons, the outgoing $`t`$ and $`d`$ quarks are back-to-back. Now the initial state contains a massless $`u`$ quark and an effectively massless $`b`$ quark. Since they couple to a $`W`$ boson, we know they have left-handed chirality. Since they are both ultrarelativistic fermions, this left-handed chirality translates into left-handed helicity. Thus, the initial spin projection is zero. The final state $`d`$ quark is also massless, and so its left-handed chirality also implies left-handed helicity. Conservation of angular momentum then leads to the $`t`$ quark having left-handed helicity in this frame. Since the $`t`$ quark is massive, boosting to another frame will, in general, introduce a right-handed helicity component. In particular, if we measure the helicity of the top quark in the laboratory frame instead of the ZMF, we find that it is left-handed only 66% of the time.
Turning to the $`2\to 3`$ process, we find that the addition of a third particle to the final state frees the top quark from its obligation to have left-handed helicity in the ZMF. In fact, we find that left-handed tops are produced only 82% of the time by this process. Again, this number changes if we boost out of the ZMF: the fraction of left-handed helicity tops is only 59% in the lab frame.
When we come to the overlap contribution, we discover that it is not possible to unambiguously define the ZMF. Should we define the ZMF in terms of the light quark and gluon, or in terms of the light quark and the $`b`$ quark which descended from the gluon via splitting? These two frames are different, and, as we have already argued, the helicity of the top quark is not invariant under the longitudinal boosts connecting these two frames.
Another way of illustrating the difficulty is to consider the question of experimentally reconstructing the ZMF. To determine the ZMF, we would have to account for all of the final state particles. With real detectors, this is clearly impossible, since particles with small transverse momenta or very large pseudorapidity tend to be missed. But the $`2\to 3`$ process frequently contains a low-$`p_T`$ $`\overline{b}`$ quark, which is likely to escape detection. Even with a perfect detector, it is still not possible to decide whether a given event came from the $`2\to 2`$ diagram of Fig. 1a or the $`2\to 3`$ diagram of Fig. 1c. The point is that a perfect detector would track the proton remnants as well as the actual scattering products. Since there is no intrinsic bottom in the proton, after a $`2\to 2`$ interaction there would be a $`\overline{b}`$ quark among the proton remnants hitting our “perfect” detector. As far as the detector is concerned, such a $`\overline{b}`$ quark would look identical to the $`\overline{b}`$ quark generated by the $`2\to 3`$ process. The best that could be done is to observe that a $`\overline{b}`$ quark associated with the proton remnant would tend to have a much smaller $`p_T`$ than one associated with the $`2\to 3`$ diagram. However, the kinematics of the two processes overlap, rendering the location of the dividing line arbitrary. This is precisely the physics of the statement made earlier that the division of contributions among the $`2\to 2`$, $`2\to 3`$, and overlap terms is arbitrary, and depends on the QCD factorization scale.
Rather than use the (undefined) ZMF helicity basis, we should decompose the top quark spin in a manner which does not depend on any particular frame. The spectator basis, introduced in Ref. , provides such a means. It is based upon the observation that when we decompose the top quark spin along the direction of the $`d`$-type quark, the spin down contribution is small. The $`2\to 2`$ process produces no spin down $`t`$ quarks in this basis, which is equivalent to measuring the helicity of the $`t`$ quark in the frame where the $`t`$ quark and $`d`$-type quark are back-to-back. The overlap contribution, being just the $`2\to 2`$ process computed with Eq. (1) in place of the $`b`$ quark distribution function, shares a common spin structure with the $`2\to 2`$ process in this basis. The amplitude for spin-down $`t`$ quark production via the $`2\to 3`$ diagrams is suppressed by its lack of a singularity when the $`b`$ quark mass is taken to zero. Since the $`d`$-type quark appears in the spectator jet 80% of the time, if we simply use the direction of the spectator jet as the top quark spin axis, we obtain a high degree of polarization: 92% of the top quarks associated with the $`2\to 3`$ process are produced with spin up in this basis. Combining the three contributions, we find that the overall fraction of spin up quarks in the spectator basis is 95%.
For $`\overline{t}`$ production the situation is a bit different. The $`d`$-type quark is in the final state only 31% of the time; in the remainder of the events, it is supplied by one of the beams. Hence, the spectator basis chooses the “wrong” direction for the spin axis the majority of the time! However, the spectator jet is simply the scattered light quark. In the $`Wg`$ fusion process, the momentum transfer via the $`t`$-channel $`W`$ boson deflects the incoming light quark just a little. Thus, the spectator jet momentum points in nearly the same direction as the original light quark momentum. This fact is reflected in the large (absolute) values of pseudorapidity at which the spectator jet usually emerges. Since the spectator jet and initial light quark possess nearly parallel momentum vectors, it does not degrade the degree of spin polarization very much to use the spectator jet direction even when the $`d`$-type quark was actually in the initial state. Overall, we find that 93% of the $`\overline{t}`$’s are produced with spin down in the spectator basis, which is only slightly worse than the situation for $`t`$’s.
Since the $`d`$-type quark really comes from one of the two beams the majority of the time, it is worthwhile to consider the beamline basis in addition to the spectator basis. From Ref. we recall that the beamline basis is defined by decomposing the $`\overline{t}`$ spin along the direction of one of the beams as seen in the $`\overline{t}`$ rest frame. Hence, there are two different beamline bases, since the two beams are not back-to-back in the $`\overline{t}`$ rest frame. We want to choose the beam which supplied the light quark. As we noted in the previous paragraph, the spectator jet typically points in the same direction as the beam which supplied the light quark. Therefore, we should choose to decompose the $`\overline{t}`$ spin along the beam which is most-nearly aligned with the spectator jet on an event-by-event basis. That is, we define the $`\eta `$-beamline basis as follows: if the pseudorapidity of the spectator jet is positive, choose the right-moving beam as the spin axis. If the pseudorapidity of the spectator jet is negative, choose the left-moving beam as the spin axis. In terms of the $`\eta `$-beamline basis we find that 90% of the $`\overline{t}`$’s have spin down. While this is somewhat worse than simply using the spectator basis, matters may be improved by using only those events where the spectator jet has a pseudorapidity which is larger in magnitude than some cut value $`\eta _{min}`$. This takes advantage of the fact that an initial state $`d`$ quark is a valence quark. Thus, on average, it carries a bigger longitudinal momentum fraction than a quark plucked from the sea. As a result, the spectator jet from such events tends to be produced at (slightly) larger pseudorapidity than events initiated by $`\overline{u}`$, $`s`$, or $`\overline{c}`$ quarks. So a minimum pseudorapidity requirement increases the chances that the chosen beam actually does contain the $`d`$-type quark. For example, choosing $`\eta _{min}=2.5`$ results in a spin decomposition very similar to that obtained in the spectator basis. Because such a cut on the spectator jet pseudorapidity is envisioned by the experiments in order to separate the signal from the background, there is no disadvantage to including a minimum $`|\eta |`$ requirement in our definition of the $`\eta `$-beamline basis.
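As a sketch of this event-by-event axis choice, the fragment below selects the beam direction from the sign of the spectator-jet pseudorapidity and applies the minimum-$`|\eta |`$ requirement. The event representation is a hypothetical minimal one invented for illustration, and the returned lab-frame vector must still be boosted to the top quark rest frame before the spin is decomposed along it.

```python
import numpy as np

def eta_beamline_axis(eta_spectator, eta_min=2.5):
    """Lab-frame unit vector defining the eta-beamline spin basis, or
    None if the spectator jet fails the minimum-|eta| requirement.

    Positive spectator-jet pseudorapidity selects the right-moving
    beam (+z); negative selects the left-moving beam (-z)."""
    if abs(eta_spectator) < eta_min:
        return None
    return np.array([0.0, 0.0, 1.0 if eta_spectator > 0 else -1.0])

print(eta_beamline_axis(3.1))  # spectator at eta = +3.1 -> [0. 0. 1.]
print(eta_beamline_axis(1.2))  # fails eta_min = 2.5 -> None
```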
For convenience, we have summarized our results for the spin decompositions of single $`t`$ and $`\overline{t}`$ production at the LHC in Tables II and III. The final column of both tables contains the spin asymmetry $`(N_{\uparrow }-N_{\downarrow })/(N_{\uparrow }+N_{\downarrow })`$. This is the quantity which appears in the differential distribution of the decay angle :
$$\frac{1}{\sigma _T}\frac{d\sigma }{d\mathrm{cos}\theta }=\frac{1}{2}\left[1+\frac{N_{\uparrow }-N_{\downarrow }}{N_{\uparrow }+N_{\downarrow }}\mathrm{cos}\theta \right].$$
(3)
In Eq. (3), $`\theta `$ is the angle between the charged lepton (from the decaying top quark) and the chosen spin axis, as measured in the top quark rest frame. (To describe the decay of a $`\overline{t}`$ quark, we should replace $`\mathrm{cos}\theta `$ by $`-\mathrm{cos}\theta `$ in Eq. (3).) Since the $`\overline{t}`$’s are primarily spin down in the bases we are considering, the $`\overline{t}`$ spin asymmetry will be negative. Thus, the $`t`$ and $`\overline{t}`$ samples may be combined without diluting the resulting angular correlations in the event that the sign of the charged lepton cannot be determined. Obviously, we want to make the spin asymmetry as large as possible in order to make this angular correlation easier to observe. From Tables II and III we see that the spectator basis produces correlations which are about a factor of 3 larger than in the LAB helicity basis. The improvement provided by the $`\eta `$-beamline basis is comparable.
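To illustrate how the asymmetry shows up in a data sample, the toy Monte Carlo below draws $`\mathrm{cos}\theta `$ from the distribution of Eq. (3) by rejection sampling. The input asymmetry of 0.9 is merely a round number of the size found in the spectator basis; since $`\mathrm{cos}\theta `$ averages to one third of the asymmetry for this distribution, three times the sample mean estimates it.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_costheta(asym, n):
    """Draw n values of cos(theta) from (1/2)(1 + asym*cos(theta))."""
    out = []
    while len(out) < n:
        c = rng.uniform(-1.0, 1.0, size=n)  # trial values of cos(theta)
        u = rng.uniform(0.0, 1.0, size=n)   # acceptance variates
        out.extend(c[u < (1.0 + asym * c) / (1.0 + asym)].tolist())
    return np.asarray(out[:n])

sample = sample_costheta(0.9, 200_000)
print(3.0 * sample.mean())  # estimator of the asymmetry, close to 0.9
```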
Figs. 2 and 3 present the $`p_T`$ distributions of the produced $`t`$ and $`\overline{t}`$ quarks. In addition to the total cross section, we have plotted the contributions from the dominant spin component in the LAB helicity, spectator, and $`\eta `$-beamline bases.
In general, the spin of the top quark depends upon the point in phase space at which it is produced. Therefore, it is important to make an assessment of the impact of the experimental cuts which are imposed to isolate the signal from the background. Although a full-scale detector simulation is beyond the scope of this letter, we have investigated the effect of the following theorists’ cuts:
$`\begin{array}{ccc}\text{missing energy:}\hfill & & \overline{)}p_T>15\mathrm{GeV},\hfill \\ \text{lepton:}\hfill & & p_T>15\mathrm{GeV},|\eta |<2.5\hfill \\ \text{spectator jet:}\hfill & & p_T>50\mathrm{GeV},2.5<|\eta |<5.0\hfill \\ \text{bottom jet:}\hfill & & p_T>50\mathrm{GeV},|\eta |<2.5\hfill \\ \text{isolation cut:}\hfill & & \sqrt{(\mathrm{\Delta }\eta )^2+(\mathrm{\Delta }\phi )^2}>0.4,\text{all pairs}\hfill \\ \text{third jet:}\hfill & & \text{none with}p_T>30\mathrm{GeV},|\eta |<2.5.\hfill \end{array}`$ (10)
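A minimal sketch of these acceptance requirements is given below. The event record is a hypothetical dictionary of $`(p_T,\eta ,\varphi )`$ triples invented purely for illustration; it does not correspond to the event format of any real analysis framework, and detector effects are ignored.

```python
import math

def passes_cuts(ev):
    """Apply the cuts listed above to a minimal event record: a dict
    with a scalar 'met' and (pt, eta, phi) triples for 'lepton',
    'spectator_jet', and 'b_jet', plus a list 'other_jets'."""
    if ev["met"] <= 15.0:                       # missing energy
        return False
    pt, eta, _ = ev["lepton"]
    if pt <= 15.0 or abs(eta) >= 2.5:           # lepton
        return False
    pt, eta, _ = ev["spectator_jet"]
    if pt <= 50.0 or not 2.5 < abs(eta) < 5.0:  # spectator jet
        return False
    pt, eta, _ = ev["b_jet"]
    if pt <= 50.0 or abs(eta) >= 2.5:           # bottom jet
        return False
    objs = [ev["lepton"], ev["spectator_jet"], ev["b_jet"]]
    for i in range(len(objs)):                  # isolation, all pairs
        for j in range(i + 1, len(objs)):
            deta = objs[i][1] - objs[j][1]
            dphi = abs(objs[i][2] - objs[j][2])
            dphi = min(dphi, 2.0 * math.pi - dphi)
            if math.hypot(deta, dphi) <= 0.4:
                return False
    for pt, eta, _ in ev.get("other_jets", []): # third-jet veto
        if pt > 30.0 and abs(eta) < 2.5:
            return False
    return True
```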
These cuts are similar to the ones used in the ATLAS design study . Because these cuts tend to bias towards events where the top quark has a large velocity in the ZMF (to the extent that the ZMF can be defined), we expect that the spin fractions will be higher in their presence. Indeed this is the case, as may be seen from Tables IV and V. Even in the presence of cuts, both the spectator and $`\eta `$-beamline bases outperform the LAB helicity basis by more than a factor of 2 with regard to the magnitude of the angular correlations present in the final state. For $`t`$ events, the spectator basis is slightly better than the $`\eta `$-beamline basis. For $`\overline{t}`$ events, the $`\eta `$-beamline basis is slightly better than the spectator basis. In both cases, however, the differences are so small that it would certainly be worthwhile to do the experimental analysis using both bases, especially since some of the systematics will differ in the two cases.
To summarize, we have seen that it is not possible to uniquely define the zero momentum frame of the initial state partons in the $`Wg`$-fusion process at the LHC. Consequently, when studying the spin of the produced top quarks, it does not make sense to use the ZMF helicity basis. Instead, we must use a spin basis whose definition does not depend on the existence of a well-defined ZMF. Simply using the LAB helicity basis results in a description of the top quarks where both spin components are comparable in size. However, there are spin bases where the top quarks are described primarily by just one of the two possible spin states. Two such bases are the spectator and $`\eta `$-beamline bases. In the spectator basis, we decompose the spin of the top quark in its rest frame along the direction of the spectator jet as seen in that frame. In the $`\eta `$-beamline basis we decompose the spin of the top quark in its rest frame along the direction of one of the proton beams as seen in that frame. The right-moving beam is chosen if the pseudorapidity of the spectator jet is positive, whereas the left-moving beam is chosen if the pseudorapidity of the spectator jet is negative. We find that in both of these bases the spin angular correlations are approximately a factor of 3 larger than in the LAB helicity basis. The utility of these two bases is not adversely affected by the imposition of the sorts of cuts required to extract a $`Wg`$ fusion signal from the background.
###### Acknowledgements.
We would like to thank Dugan O’Neil for prompting us to think carefully about the relationship between the $`2\to 2`$ and $`2\to 3`$ processes. We would also like to thank Scott Willenbrock and Zack Sullivan for their helpful discussions concerning their calculation of the next-to-leading order $`Wg`$-fusion cross section. SJP would like to thank the SLAC theory group for their support and hospitality during the initial stages of this work. The Fermi National Accelerator Laboratory is operated by Universities Research Association, Inc., under contract DE-AC02-76CHO3000 with the U.S.A. Department of Energy. High energy physics research at McGill University is supported in part by the Natural Sciences and Engineering Research Council of Canada and the Fonds pour la Formation de Chercheurs et l’Aide à la Recherche of Québec. SLAC is supported by the U.S.A. Department of Energy under contract DE-AC03-76SF00515.
no-problem/9912/chao-dyn9912014.html | ar5iv | text | # Bifurcation analysis of the plane sheet pinch
## I Introduction
Finite-resistivity plasma instabilities play an important role in the release of stored magnetic energy in many astrophysical objects. They also limit plasma stability in several fusion devices . The simplest configuration in which they appear is the plane sheet pinch. In a pinch, a conducting fluid is held together by the action of an electric current passing through it, with the pressure gradients balanced by the Lorentz force. Of special interest is the resistive tearing instability, which was studied by Furth et al. using a boundary-layer approach and afterwards numerically without making the boundary-layer approximation . All these studies refer to the infinite Hartmann number case because of their neglect of the kinematic viscosity. The Hartmann number $`Ha`$, which is the geometric mean of two Reynolds-like numbers, one being kinetic and the other magnetic, is the essential parameter that determines the global stability boundaries of the plane sheet pinch as well as those of its cylindrical counterpart . Thus kinematic viscosity has to be included.
A recent sheet pinch study has been done with spatially and temporally uniform kinematic viscosity and magnetic diffusivity, and with impenetrable stress-free boundaries. It is found that the quiescent ground state (in which the current density is uniform and the magnetic field profile across the sheet is linear) remains stable, no matter how strong the driving electric field. This study was extended to the case of magnetic diffusivity varying across the sheet, which results in the profiles of the equilibrium magnetic field deviating from linear behavior. In particular, the conductivity profile may be chosen such that the magnetic-field and/or the current profile have inflection points. A Squire theorem could be proven for this configuration, whose stability depends on the Hartmann number, the degree of current concentration about the midplane of the sheet, and on the magnetic shear (i.e., the asymmetry of the equilibrium magnetic field) .
A stability analysis can be considered as part of a bifurcation analysis, which will be provided for the cases of two as well as of three spatial dimensions in the present paper. In a bifurcation analysis one tries to determine the set of possible time-asymptotic states, the attractors, for given values of the system parameters. The bifurcations from a static sheet-pinch equilibrium have previously been studied for the case of two spatial dimensions . Grauer investigated the interaction of two different tearing modes in the two-dimensional slab geometry by reducing the dynamics at the bifurcation point to that on a four-dimensional center manifold. The new time-asymptotic states were found to be of the tearing-mode type, but e.g. also traveling waves were found.
We note here that even though with increasing Hartmann number the equilibrium first becomes unstable to two-dimensional perturbations (according to the Squire theorem), the new final states may be three-dimensional. This is one of the problems addressed in the present paper. The first question arising is which type of two-dimensional (2D) time-asymptotic state develops nonlinearly from the tearing mode when the whole problem is restricted to two spatial dimensions. The bifurcation studies predict steady states for the generic cases. Similarly, large scale perturbations of a sheared magnetic field equilibrium were found to result in a final tearing-mode type stationary state via multiple coalescence of the magnetic island structures . The next question is whether the 2D time-asymptotic states are stable with respect to three-dimensional perturbations. If not, how do the stability properties depend on parameters like the strength of a constant external magnetic field or the wavelength of perturbations in the invariant direction of the 2D state? Finally, what are the characteristic properties of the three-dimensional time-asymptotic states, once they appear? In the present paper, a comprehensive stability analysis of the two-dimensional time-asymptotic states that develop from the tearing mode is presented for the first time. For the case of a spatially uniform resistivity these problems were addressed in numerical studies of the magnetohydrodynamic (MHD) equations by Dahlburg et al. , who found two-dimensional quasi-equilibria of the tearing-mode type to be unstable to three-dimensional perturbations. Secondary three-dimensional instabilities were similarly observed for non-static primary states with, in addition to a sheared magnetic field, a pressure-driven jet-like flow , and their nonlinear development was proposed as a scenario for the transition to MHD turbulence.
The outline of the paper is as follows. In Sec. II the physical model is introduced. The MHD equations as well as boundary and initial conditions are discussed. In Sec. III we provide the results of the bifurcation analysis. First the 2D results are discussed. After that we investigate the linear stability of 2D time-asymptotic states with respect to 3D perturbations. For appropriate choices of the external parameters the 2D states prove to be 3D unstable. The 3D asymptotics is investigated by a full three-dimensional long-time simulation of the pinch dynamics. Finally we discuss our results and end with an outlook in Sec. IV.
## II Physical model
### A MHD equations
We use the nonrelativistic, incompressible MHD equations,
$$\rho \left(\frac{\partial 𝐯}{\partial t}+(𝐯\cdot \nabla )𝐯\right)=\rho \nu \nabla ^2𝐯-\nabla p+𝐉\times 𝐁,$$
(1)
$$\frac{\partial 𝐁}{\partial t}=-\nabla \times (\eta \mu _0𝐉-𝐯\times 𝐁),$$
(2)
$$\nabla \cdot 𝐯=0,\nabla \cdot 𝐁=0,$$
(3)
where $`𝐯`$ is the fluid velocity, $`𝐁`$ the magnetic induction, $`\mu _0`$ the magnetic permeability in a vacuum, $`𝐉=\nabla \times 𝐁/\mu _0`$ the electric current density, $`\rho `$ the mass density, $`p`$ the pressure, $`\nu `$ the kinematic viscosity, and $`\eta `$ the magnetic diffusivity. While $`\rho `$ and $`\nu `$ are assumed constant, $`\eta `$ varies spatially:
$$\eta (𝐱)=\eta _0\stackrel{~}{\eta }(𝐱),$$
(4)
where $`\eta _0`$ is a dimensional constant and $`\stackrel{~}{\eta }(𝐱)`$ a dimensionless function of position.
Let the pinch width (sheet thickness) $`L=L_1`$ and an as yet arbitrary field strength $`B_0`$ be used as the units of length and magnetic induction. Writing $`v_A=B_0/\sqrt{\mu _0\rho }`$ for the Alfvén velocity corresponding to $`B_0`$, we transform to dimensionless quantities. Specifically $`𝐱`$, $`𝐁`$, $`𝐯`$, $`t`$, $`p`$, $`𝐉`$, and $`𝐄`$ are normalized by $`L`$, $`B_0`$, $`v_A`$, $`\tau _A=L/v_A`$, $`\rho _0v_A^2`$, $`B_0/(\mu _0L)`$, and $`B_0v_A`$, respectively. The quantity $`𝐄`$ is the electric field. Equations (1) and (2) then become
$$\frac{\partial 𝐯}{\partial t}=-(𝐯\cdot \nabla )𝐯+M^{-1}\nabla ^2𝐯-\nabla p+𝐉\times 𝐁,$$
(5)
$$\frac{\partial 𝐁}{\partial t}=-\nabla \times (S^{-1}\stackrel{~}{\eta }𝐉-𝐯\times 𝐁),$$
(6)
where
$$M=\frac{v_AL}{\nu }\text{ and }S=\frac{v_AL}{\eta _0}$$
(7)
are Reynolds-like numbers based on the Alfvén velocity: $`S`$ is the Lundquist number and $`M`$ its viscous analogue. The geometric mean of the two Reynolds-like numbers gives the Hartmann number,
$$Ha=\sqrt{MS}.$$
(8)
Finally, the dimensionless Ohm’s law becomes
$$S^{-1}\stackrel{~}{\eta }𝐉=𝐄+𝐯\times 𝐁.$$
(9)
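For orientation, the snippet below merely evaluates the normalization introduced above: given dimensional parameters, it returns the Alfvén velocity and time together with the numbers $`M`$, $`S`$, and $`Ha`$ of Eqs. (7) and (8). The input values in the example are arbitrary illustrative numbers, not matched to any particular plasma.

```python
import math

MU0 = 4.0e-7 * math.pi  # vacuum magnetic permeability [H/m]

def dimensionless_numbers(B0, L, rho, nu, eta0):
    """Alfven velocity/time and M, S, Ha of Eqs. (7) and (8)."""
    v_A = B0 / math.sqrt(MU0 * rho)  # Alfven velocity
    tau_A = L / v_A                  # Alfven time, the time unit
    M = v_A * L / nu                 # viscous Reynolds-like number
    S = v_A * L / eta0               # Lundquist number
    return v_A, tau_A, M, S, math.sqrt(M * S)  # last entry is Ha

# Arbitrary illustrative inputs (SI units):
print(dimensionless_numbers(B0=0.1, L=0.05, rho=1.0e-7, nu=1.0, eta0=10.0))
```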
### B Boundary conditions and static equilibrium
We use Cartesian coordinates $`x_1`$, $`x_2`$, $`x_3`$ and consider our magnetofluid in the slab $`0<x_1<1`$. $`x_1`$ is referred to as the cross-sheet coordinate. In the $`x_2`$ and $`x_3`$ directions periodic boundary conditions with periods $`L_2`$ and $`L_3`$, respectively, are used.
The boundary planes are assumed to be impenetrable and stress-free, i.e.,
$$v_1=\frac{\partial v_2}{\partial x_1}=\frac{\partial v_3}{\partial x_1}=0\quad \text{at }x_1=0,1.$$
(10)
The system is driven by an electric field of strength $`E^{*}`$ in the $`x_3`$ direction, which can be prescribed only on the boundary. We further assume that there is no magnetic flux through the boundary,
$$B_1=0\quad \text{at }x_1=0,1.$$
(11)
Conditions (10) and (11) imply that the tangential components of $`𝐯\times 𝐁`$ on the boundary planes vanish, so that according to Eq. (9)
$$J_2=0,J_3=\frac{E^{*}S}{\stackrel{~}{\eta }_b}\quad \text{at }x_1=0,1,$$
(12)
where $`\stackrel{~}{\eta }_b`$ is the value of $`\stackrel{~}{\eta }`$ on the boundaries. The boundary conditions for the tangential components of $`𝐁`$ then become
$$\frac{\partial B_2}{\partial x_1}=\frac{E^{*}S}{\stackrel{~}{\eta }_b},\frac{\partial B_3}{\partial x_1}=0\quad \text{at }x_1=0,1.$$
(13)
A detailed discussion of these boundary conditions is found in Ref. .
Any stationary state with the fluid at rest has to satisfy the equations
$`-\nabla p+𝐉\times 𝐁`$ $`=`$ $`\mathrm{𝟎},`$ (14)
$`\nabla \times (\stackrel{~}{\eta }𝐉)`$ $`=`$ $`\mathrm{𝟎}.`$ (15)
Equations (14), (15) and the boundary conditions are satisfied by the Harris equilibrium
$`\stackrel{~}{\eta }`$ $`=`$ $`\mathrm{cosh}^2[(x_1-0.5)/a],`$ (16)
$`𝐉`$ $`=`$ $`𝐉^e=(0,0,{\displaystyle \frac{1}{a\mathrm{tanh}(1/2a)\mathrm{cosh}^2[(x_1-0.5)/a]}}),`$ (17)
$`𝐁`$ $`=`$ $`𝐁^e=(0,{\displaystyle \frac{\mathrm{tanh}[(x_1-0.5)/a]}{\mathrm{tanh}(1/2a)}}+\overline{B_2^e},\overline{B_3^e}),`$ (18)
$`p`$ $`=`$ $`p^e=-{\displaystyle \frac{(𝐁^e)^2}{2}},`$ (19)
where $`\overline{B_2^e}`$ and $`\overline{B_3^e}`$ are constants. The resistivity given by Eq. (16) decreases from the boundary towards the sheet center where it takes on a minimum value. This is in accordance with the expectation that the plasma is hotter within the current sheet combined with the decrease of the typical Spitzer resistivity with temperature, i.e., $`\stackrel{~}{\eta }\propto T^{-3/2}`$. Unlike other studies where the system is infinitely extended in the cross-sheet ($`x_1`$) direction, we do not use the current sheet half width $`a`$ as the unit of length. Instead, we normalize to the finite distance $`L=L_1`$ between the two boundary planes. The magnetic field unit, $`B_0`$, was chosen in such a way that, in the case of $`\overline{B_2^e}=0`$, $`|B_2^e|=1`$ on the boundary planes.
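As a quick numerical consistency check on Eqs. (16)-(19), the lines below evaluate the Harris profiles across the sheet and verify that $`\stackrel{~}{\eta }J_3^e`$ is uniform, as required by Eq. (15), and that $`J_3^e`$ equals the cross-sheet derivative of $`B_2^e`$. The grid resolution is an arbitrary choice for this sketch.

```python
import numpy as np

a = 0.1
x1 = np.linspace(0.0, 1.0, 401)
s = (x1 - 0.5) / a

eta_tilde = np.cosh(s)**2                            # Eq. (16)
J3e = 1.0 / (a * np.tanh(0.5 / a) * np.cosh(s)**2)   # Eq. (17)
B2e = np.tanh(s) / np.tanh(0.5 / a)                  # Eq. (18), zero mean field

# Stationarity requires eta_tilde * J3e to be uniform across the sheet:
print(np.ptp(eta_tilde * J3e))                       # ~0 up to rounding
# J3e should equal dB2e/dx1:
print(np.max(np.abs(np.gradient(B2e, x1) - J3e)))    # small discretization error
```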
We use the notations
$$𝐛=𝐁-𝐁^e,𝐣=𝐉-𝐉^e,$$
(20)
where $`𝐯`$ and $`𝐛`$ are our dynamical variables, for which the stress-free boundary conditions are now as follows:
$$v_1=\frac{\partial v_2}{\partial x_1}=\frac{\partial v_3}{\partial x_1}=b_1=\frac{\partial b_2}{\partial x_1}=\frac{\partial b_3}{\partial x_1}=0\quad \text{at }x_1=0,1.$$
(21)
We Fourier expand both vector fields into modes $`\mathrm{exp}\{i(k_2x_2+k_3x_3)\}`$ in the $`x_2`$ and $`x_3`$ directions. In the cross-sheet direction $`x_1`$ sine and cosine expansions are used in correspondence with the imposed stress-free boundary conditions (for more details see ). Dynamical integrations of the system are performed in Fourier space by means of a pseudo-spectral method with 2/3-rule dealiasing. The grid size for the 3D integrations was taken to be $`32\times 16\times 16`$, which was found to be sufficient for our low Hartmann number studies (see Table I). Time integration was performed using a Runge-Kutta scheme with a variable time step. Compared to similar calculations with a spatially uniform resistivity, the simulations were computationally very expensive since only very short time steps were possible. Additionally one has to keep in mind that our spectral resolution is restricted by the evaluation of the Jacobian necessary for the linear stability analysis of the time-asymptotic states. The resolution used requires the inversion of a $`2836\times 2836`$ matrix.
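The 2/3-rule mentioned above can be illustrated in one dimension: the upper third of the wave numbers is zeroed, the quadratic product is formed in physical space, and the result is truncated again, so that no aliased contributions survive. The following sketch is schematic only and is not the production code of the simulations.

```python
import numpy as np

def dealias_product(fhat, ghat):
    """Pointwise product of two periodic fields with 2/3-rule dealiasing;
    fhat and ghat are complex FFT coefficient arrays of equal length n."""
    n = fhat.size
    k = np.fft.fftfreq(n, d=1.0 / n)   # integer wave numbers
    mask = np.abs(k) < n / 3.0         # keep only the lowest 2/3
    f = np.fft.ifft(fhat * mask)
    g = np.fft.ifft(ghat * mask)
    return np.fft.fft(f * g) * mask    # re-truncate the product

# Two pure modes k = 3 and k = 5 on 32 points: their product lives
# at k = 2 and k = 8, and both survive the truncation.
n = 32
x = 2.0 * np.pi * np.arange(n) / n
ph = dealias_product(np.fft.fft(np.cos(3 * x)), np.fft.fft(np.cos(5 * x)))
print(np.nonzero(np.abs(ph) > 1.0e-8)[0])  # [2 8 24 30], i.e. k = +-2, +-8
```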
## III Results
### A 2D time asymptotics
We started with a determination of the stability boundary for the static sheet pinch equilibrium. The Squire theorem allowed a restriction to $`x_3`$ invariant perturbations (i.e., to perturbations with wave number $`k_3=0`$). Furthermore, due to the invariance of the equilibrium in the $`x_2`$ direction stability could be tested for each wave number $`k_2`$ (or the corresponding Fourier mode) separately. First, the system was assumed to be infinitely extended in the $`x_2`$ direction. In this case the wave number $`k_2`$ of a perturbation can adopt any real value. Figure 1 shows the stability boundary in the Hartmann number–wavelength plane for $`\overline{B_2^e}=0`$ and $`a=0.1`$ ($`\overline{B_2^e}`$ and $`a`$ are fixed to these values throughout the paper). Since the equilibrium profile $`B_2^e(x_1)`$ is symmetric ($`\overline{B_2^e}=0`$), the value of $`\overline{B_3^e}`$ has no influence on the stability (see Ref. ). The unstable region lies to the right of the boundary curve. For the spatial resolution used, instability sets in at $`Ha=Ha_c=64.57`$ and $`k_2=k_{2c}=2.67`$.
In calculations using the full nonlinear equations the aspect ratio $`L_2`$ (in the 3D case $`L_3`$ as well) has to be fixed to a finite value. We have used $`L_2=4`$ in all nonlinear calculations. There are some subtleties concerning the onset of instability and the application of the Squire theorem when $`L_2`$ is finite, due to the fact that only a discrete set of $`k_2`$ values is admitted. With $`L_2=4`$ instability sets in at $`Ha=[Ha_c]_{L_2=4}=66.20784`$ and $`k_2=[k_{2c}]_{L_2=4}=\pi `$, corresponding to a critical wavelength of $`2`$. Unstable 3D modes at Hartmann numbers close to $`[Ha_c]_{L_2=4}`$ are excluded if the aspect ratio $`L_3`$ is finite (for more details see Appendix). Figure 2 shows, for $`Ha=70`$, a comparison of the growth rate of the most unstable 2D mode, which has wavelength 2 in the $`x_2`$ direction, with the growth rates of the most unstable 3D mode with the same wavelength in the $`x_2`$ direction and different wavelengths in the $`x_3`$ direction.
When $`Ha`$ exceeds the critical value $`[Ha_c]_{L_2=4}`$ the tearing mode grows due to a bifurcation where a pair of identical real eigenvalues becomes positive. A superposition of the static equilibrium and the most unstable eigenvector was taken as the initial state to follow up the nonlinear development of the tearing mode in two spatial dimensions.
After several hundred Alfvén times $`\tau _A`$ convergence to a stationary state was observed. This state is of course linearly stable with respect to two-dimensional perturbations. It is also clear that the development towards the new time-asymptotic states is decelerated the closer the Hartmann number is taken to the critical value $`[Ha_c]_{L_2=4}`$. This was indicated first by the convergence of the maximum eigenvalue to zero. Namely, due to the marginal stability with respect to translations in the $`x_2`$ direction, one eigenvalue of the time-asymptotic state has to vanish. For this real eigenvalue $`\lambda _0`$ we had, for instance, $`\lambda _0\approx 5\times 10^{-4}`$ for $`t=700`$ and $`Ha=66.21`$, $`\lambda _0\approx 5\times 10^{-4}`$ for $`t=500`$ and $`Ha=66.3`$ (and $`\lambda _0\approx 2\times 10^{-4}`$ for $`t=800`$ and $`Ha=66.3`$), but already $`\lambda _0\approx 2\times 10^{-6}`$ for $`t=500`$ and $`Ha=67`$ (and $`\lambda _0\approx 10^{-8}`$ for $`t=800`$ and $`Ha=67`$). The time development of run 1 became extremely slow. This run close to $`[Ha_c]_{L_2=4}`$ was performed in order to make it as sure as possible that secondary bifurcations close to the primary bifurcation point were not overlooked. Since the time-asymptotic solutions for $`Ha=66.21`$, $`66.3`$, and $`67`$ are all of the same type, the solutions simulated for $`Ha=66.3`$ and $`Ha=67`$ are likely to belong to a branch originating in the primary bifurcation. In Fig. 3 the time developments of the specific kinetic energy $`E_{kin}=\frac{1}{2V}\int _V𝐯^2\text{d}V`$, the specific magnetic energy $`E_{mag}=\frac{1}{2V}\int _V𝐛^2\text{d}V`$, and their sum, the total energy $`E_{tot}`$, are plotted for runs 2 and 3 (run 3 only shown in the inset). Nearly perfect steady states are reached for both Hartmann numbers at later stages. For $`Ha=67`$ the total energy is practically constant in time at $`t=800`$. The amplitude of the eigenvectors is not determined by the stability analysis. Thus, for $`Ha=67`$, two energetically different initial conditions were considered and were found to relax toward the same asymptotic state, one from energetically below and the other from energetically above the asymptotic energy (in the first case, not shown in the figure, the energies increase as functions of time and then become almost constant).
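The energy diagnostics of Fig. 3 are plain volume averages; on a uniform grid they reduce to means over the grid points, as in the schematic fragment below. The field layout and the random test data are assumptions made for illustration only.

```python
import numpy as np

def specific_energies(v, b):
    """E_kin = (1/2V) integral of v^2 dV and E_mag = (1/2V) integral of
    b^2 dV on a uniform grid; v and b have shape (3, n1, n2, n3)."""
    e_kin = 0.5 * np.mean(np.sum(v**2, axis=0))
    e_mag = 0.5 * np.mean(np.sum(b**2, axis=0))
    return e_kin, e_mag, e_kin + e_mag

# Random test fields on a 32 x 16 x 16 grid, as used in the 3D runs:
rng = np.random.default_rng(0)
v = 1.0e-3 * rng.standard_normal((3, 32, 16, 16))
b = 1.0e-3 * rng.standard_normal((3, 32, 16, 16))
print(specific_energies(v, b))
```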
In Fig. 4 the new asymptotic state is shown for $`Ha=S=M=67`$. Field lines of $`𝐁`$, stream lines of $`𝐯`$, and contour lines of the current density component $`J_3`$ are drawn. One observes a magnetic island structure with a chain of $`X`$ and $`O`$ points, fluid motion in the form of convection-like cells or rolls, and a filamentation of the original current sheet. For $`J_3`$ only the innermost part of the sheet is shown to highlight the filamentation despite the dominant $`J_3^e`$. Two wavelengths in the $`x_2`$ direction are seen, corresponding to the fact that $`L_2=4`$ and the critical perturbation has wavelength 2.
### B 3D secondary instability of the 2D time-asymptotic states
Once the 2D time-asymptotic states close to the bifurcation point were calculated with sufficient accuracy, their linear stability with respect to 3D perturbations could be investigated. Though the stability boundary of the quiescent basic state is determined by the Hartmann number, the bifurcating states may depend on $`S`$ and $`M`$ separately. We have restricted ourselves, however, to cases with $`Ha=S=M`$. The two-dimensional states were extrapolated to three dimensions by continuing them constantly in the $`x_3`$ direction, and the stability analysis was performed for the resulting 3D systems. Since the equilibria were invariant in the $`x_3`$ direction, stability could be tested for each wave number $`k_3`$ separately. Taking into account just one wave number, $`k_3=2\pi /L_3`$, in the $`x_3`$ direction, the aspect ratio or pinch height $`L_3`$ was varied. We also added constant magnetic shear components $`\overline{B_3^e}`$ to the saturated 2D states. This could be done since such constant field components do not influence a 2D solution: The contribution of $`\overline{B_3^e}`$ to the Lorentz force $`𝐉\times 𝐁`$ in Eq. (1) vanishes since both $`𝐉`$ and the added magnetic field are in the $`x_3`$ direction, and its contribution to the term $`\nabla \times (𝐯\times 𝐁)`$ in Eq. (2) vanishes as a consequence of the incompressibility condition $`\nabla \cdot 𝐯=0`$ \[in the incompressible case one has $`\nabla \times (𝐯\times 𝐁)=(𝐁\cdot \nabla )𝐯-(𝐯\cdot \nabla )𝐁`$\].
The motivation for adding a $`\overline{B_3^e}`$ is that in many applications externally generated magnetic fields are present in addition to the self-consistently supported ones. In the solar atmosphere, for instance, current sheets may form when regions of obliquely directed magnetic field are brought together and will then in general have a sheetwise field component. In magnetic fusion devices like the tokamak, toroidal magnetic fields, which correspond to sheetwise fields in plane geometry, are externally applied to stabilize the confined plasma.
Results of the stability calculations for $`Ha=M=S=66.3`$ and $`Ha=M=S=67`$ are shown in Fig. 5 where the maximum real part of the eigenvalue spectrum is plotted against the varying parameters $`L_3`$ and $`\overline{B_3^e}`$. In the case of $`\overline{B_3^e}=0`$ the two-dimensional saturated states are always unstable, namely to three-dimensional disturbances with a sufficiently large wavelength in the $`x_3`$ direction (see upper panel in Fig. 5).
At the stability threshold, two identical real eigenvalues always become positive. The multiplicity two results from the symmetry of the system with respect to reflections in the planes $`x_3=const.`$, due to which the linearly independent modes with wave numbers $`+k_3`$ and $`-k_3`$, respectively, become simultaneously unstable (the periodic boundary conditions, which allow the decomposition into Fourier modes, are also needed here) . The secondary instability to three-dimensional perturbations is always suppressed by a sufficiently strong field $`\overline{B_3^e}`$ (see lower panel in Fig. 5). This is in accordance with the general expectation that a magnetic field impedes motions with gradients in the direction of the field due to the tension associated with the lines of force.
The closer to the critical value $`[Ha_c]_{L_2=4}`$ the Hartmann number is, the larger is the minimum wavelength of the unstable perturbations in the third dimension (see upper panel in Fig. 5). Now the Squire theorem does not exclude that 3D perturbations to the quiescent basic state are unstable immediately above the critical Hartmann number, provided their wavelengths $`2\pi /k_3`$ are sufficiently large (cf. Appendix). One might suspect, therefore, that the unstable 3D perturbations to the 2D time-asymptotic tearing-mode state are also unstable perturbations with respect to the basic state at the same Hartmann number. This is not the case, however: Consider, for example, the curve for $`Ha=66.3`$ in Fig. 5 (maximum growth rate over wavelength of the perturbation in the $`x_3`$ direction). For $`L_3=2\pi /k_3=7`$ one observes 3D instability of the 2D time-asymptotic state. Can the quiescent basic state for $`Ha=66.3`$ be unstable to a 3D perturbation with wavelength $`7`$ in the $`x_3`$ direction? The wave number $`k_2`$ of such a 3D perturbation can take on the values $`n\cdot 2\pi /4`$, $`n=1,2,3,\mathrm{\dots }`$ (since we have chosen the fixed aspect ratio $`L_2=4`$). Squire’s theorem connects the 3D perturbation to a 2D perturbation with wave number $`\stackrel{~}{k_2}=[k_2^2+(2\pi /7)^2]^{1/2}`$ which is simultaneously unstable or stable at the Hartmann number $`\stackrel{~}{Ha}=(k_2/\stackrel{~}{k_2})\cdot 66.3`$. With $`k_2=2\pi /4`$, i.e., with the smallest possible $`|k_2|`$, one finds $`\stackrel{~}{Ha}=57.6`$, which is below the critical value $`Ha_c=64.57`$ (see Fig. 1). That is to say, a 3D mode with $`k_2=2\pi /4`$ (and $`k_3=2\pi /7`$) cannot be an unstable perturbation to the quiescent basic state at $`Ha=66.3`$. For the next possible $`k_2`$ value, $`2\cdot 2\pi /4`$, one has $`\stackrel{~}{Ha}=63.7`$, still below the critical value $`Ha_c`$. For all higher $`k_2`$ values the wavelength $`2\pi /\stackrel{~}{k_2}`$ lies clearly below the unstable wavelength domain (see Fig. 1) for $`Ha_c\le Ha\le 66.3`$.
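The arithmetic of this argument is easily automated: the fragment below maps a 3D mode $`(k_2,k_3)`$ at Hartmann number $`Ha`$ to its associated 2D mode according to Squire's theorem and reproduces the two values quoted above.

```python
import math

def squire_map(k2, k3, Ha):
    """Wave number and Hartmann number of the associated 2D mode."""
    k2_tilde = math.hypot(k2, k3)
    return k2_tilde, (k2 / k2_tilde) * Ha

# Candidate 3D modes at Ha = 66.3 with wavelength 7 in the x3 direction:
for n in (1, 2):
    k2 = n * 2.0 * math.pi / 4.0
    k2_tilde, Ha_tilde = squire_map(k2, 2.0 * math.pi / 7.0, 66.3)
    print(n, round(Ha_tilde, 1))  # 57.6 and 63.7, both below Ha_c = 64.57
```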
Figures 6 and 7 show an unstable 3D eigenstate to the time-asymptotic 2D state with $`a=0.1`$ and $`L_2=4`$ at $`Ha=67`$ (which is shown in Fig. 4). The fields are shown in Fig. 6 in the $`x_2`$-$`x_3`$ plane to underline qualitatively new structures in the third dimension. As in Fig. 4, two wavelengths of the perturbation in $`x_2`$ are shown. The 2D equilibrium is mixed with the 3D perturbation in the ratio 50% equilibrium to 50% perturbation. In the perturbed state velocity and magnetic field have components in the $`x_3`$ direction and all structures, including the current filaments, are modulated in this direction.
Previous analyses of secondary instabilities of the sheet pinch , as well as analyses of similar instabilities in hydrodynamic shear flows , have indicated that these instabilities are ideal, i.e., their growth rates are independent of dissipation. It appears interesting, therefore, to compare the growth rates of the secondary instability with those of the primary one at the same Hartmann numbers. The growth rate of the most unstable 2D tearing mode (the one with wavelength 2) for $`Ha=66.3`$ is $`8.8\times 10^{-4}`$; the corresponding growth rate for $`Ha=67`$ is $`7.5\times 10^{-3}`$. A comparison with the upper panel in Fig. 5 shows that the secondary instability grows approximately five times as fast as the primary one. This agrees with results of Dahlburg et al. . However, since all our calculations were restricted to $`S`$ and $`M`$ values close to the primary bifurcation point, where the growth rates of all primary or secondary modes go through zero or are still negative, they do not yet allow a characterization of the secondary mode as ideal or non-ideal; saturation of the growth rate may occur for larger $`S`$ and $`M`$.
### C 3D time asymptotics
Finally, full three-dimensional simulations were performed to follow the unstable modes in their nonlinear evolution. The resistivity gradients again made the simulations computationally very expensive. The calculations were thus restricted to the case $`L_3=L_2=4`$, $`\overline{B_2^e}=\overline{B_3^e}=0`$, and $`Ha=67`$. The asymptotic 2D state was extrapolated constantly into the third dimension and was mixed with the most unstable 3D eigenstate to give the initial condition. The first phase of the full three-dimensional simulation was done with a lower spectral resolution, namely $`16^3`$, up to the time $`t_0\approx 2900`$. Fig. 8 shows the temporal behavior of the specific energies for the next 900 time units after this initial growth phase, calculated with the highest resolution given in Table I. The energies still oscillate slightly, but with a decreasing amplitude, indicating convergence to a three-dimensional steady state of the sheet pinch configuration.
Furthermore, the solution is characterized by a clear and apparently time-independent spatial structure. In Fig. 10 corresponding level surfaces of $`|𝐯|`$ are shown at $`t=550`$. We found the same shape of the level surface of $`|𝐯|`$ at $`t=900`$. Additionally, the same level surfaces are shown for the 2D time-asymptotic state with otherwise the same parameters in Fig. 9. The comparison indicates that there is some relation between the two solutions — the isosurfaces of $`|𝐯|`$ in the 3D case are obtained from those in the 2D case by a modulation in the $`x_3`$ direction. This suggests, but does not prove, that the unstable 3D perturbations to the time-asymptotic 2D state do not drive the system to a completely different solution existing somewhere in phase space, but that 2D and 3D solutions originate simultaneously in the primary bifurcation of the quiescent basic state.
## IV Summary and outlook
We have numerically studied the primary and secondary bifurcations of an electrically driven plane sheet pinch with stress-free boundaries. The profile of the electrical conductivity across the sheet was chosen such as to concentrate the electric current largely about the midplane of the sheet and thus to allow instability of the quiescent basic state at some critical Hartmann number. Our results can be summarized as follows.
(1) The most unstable perturbations to the basic state are two-dimensional tearing modes. If the whole problem is restricted to two spatial dimensions, also the bifurcating new time-asymptotic state is of the tearing-mode type, namely, a stationary solution characterized by a magnetic island structure with a chain of $`X`$ and $`O`$ points, fluid motion in the form of convection-like rolls, and a filamentation of the original current sheet. We have calculated this state with precision for the aspect ratio $`L_2=4`$ and Hartmann numbers close to the critical one. In contrast to the stability boundary the bifurcating new solutions may depend on the two Reynolds-like numbers of the problem separately. We have restricted ourselves to $`M=S=Ha`$.
(2) The bifurcating steady (time-asymptotic) two-dimensional state was tested for stability with respect to three-dimensional perturbations. It proved to be unstable to 3D perturbations with a sufficiently large wavelength in the third direction. At the stability threshold always two identical real eigenvalues become positive (i.e., there are two purely growing unstable eigenmodes). We also added constant external magnetic field components along the invariant direction to the 2D tearing-mode equilibrium. If these components are sufficiently strong, they suppress the secondary instability with respect to three-dimensional perturbations, which is in accordance with the general expectation that a magnetic field impedes motions with gradients in the direction of the field and has been noted before .
(3) Full three-dimensional simulations were performed to follow the unstable 3D modes in their nonlinear evolution. The solution seems to converge to a 3D steady state. Although velocity and magnetic field now have components in the invariant direction of the 2D state and all structures are modulated in this direction, there is still some resemblance to the 2D tearing mode state. This suggests, but does not prove, that the unstable 3D perturbations to the 2D state do not drive the system to a completely different region in phase space. 2D and 3D solutions might originate simultaneously in the primary bifurcation of the basic equilibrium.
Since our calculations were made very close to the primary bifurcation point, we suppose that the steady 2D tearing-mode state is unstable from the beginning and that there is a direct transition of the system from the quiescent basic state to three-dimensional attractors. This is true if the 2D state is not stabilized by an external magnetic field along its invariant direction. Furthermore, a sufficiently small $`L_3`$ ensures stability of the 2D state in a certain Hartmann number interval above the critical value (since the unstable 3D perturbations, whose wavelength must exceed some threshold value, are then not admitted). Finally, the aspect ratio $`L_2`$ and the magnetic Prandtl number $`Pr_m=\nu /\eta _0=S/M`$ might influence the bifurcation scenario, possibly in such a way that in some parameter ranges the 2D tearing-mode solution bifurcates stably from the basic state.
Secondary instabilities that succeed primary two-dimensional ones and that lead to three-dimensionality have been considered as an important step in the transition from laminar to turbulent states in linearly unstable nonconducting shear flows . For the case of linearly stable shear flows (e.g., the plane Couette flow), it was suggested that nonlinear stationary and linearly unstable three-dimensional states, which develop already below the onset threshold of turbulence , can form a chaotic repellor in phase space . Such a repellor can cause the transient turbulent states above the onset threshold. There is some analogy of the magnetohydrodynamic pinch to shear flows, and Dahlburg et al. have presented numerical evidence that the secondary instability of two-dimensional quasi-equilibria of the tearing-mode type can lead to turbulence in a plane sheet pinch. In their calculations the pinch was not driven by an external electric field (nor mechanically driven) and the electrical conductivity was assumed to be spatially uniform. In such a case the pinch always decays resistively, that is, velocity and magnetic field tend to zero as $`t\to \mathrm{\infty }`$. By our choice of the resistivity profile and the applied electric field we could calculate exact time-asymptotic, in particular steady states and could corroborate the result of Dahlburg et al. that saturated two-dimensional tearing-mode states are unstable to three-dimensional perturbations. We have not yet observed a transition to a turbulence-like state. Irregular behavior may be expected to arise through subsequent bifurcations when the Reynolds-like numbers are further raised.
## ACKNOWLEDGMENTS
J.S. wishes to acknowledge many fruitful discussions with Armin Schmiegel. We thank the referee for helpful comments.
## Instability of the quiescent basic state and Squire’s theorem in the case of a finite aspect ratio $`L_2`$
Squire’s theorem states that for increasing Hartmann number two-dimensional perturbations to the quiescent basic state become unstable first. Specifically: For each three-dimensional eigenmode with wave numbers $`k_2`$, $`k_3`$ and growth rate $`\lambda `$ at Hartmann number $`Ha`$, there exists a two-dimensional (i.e., $`x_3`$ invariant) eigenmode with wave number $`\stackrel{~}{k_2}=(k_2^2+k_3^2)^{1/2}`$ and growth rate $`\stackrel{~}{\lambda }=(\stackrel{~}{k_2}/k_2)\lambda `$ at Hartmann number $`\stackrel{~}{Ha}=(k_2/\stackrel{~}{k_2})Ha`$ .
### 1 Case $`L_2=\mathrm{\infty }`$
If the stability problem is considered on the infinite $`x_2`$-$`x_3`$ plane, i.e., with all wave numbers $`k_2`$ and $`k_3`$ allowed, for increasing $`Ha`$ one or several two-dimensional modes with a critical wave number $`k_{2c}`$ (and the corresponding critical wavelength $`L_{2c}=2\pi /k_{2c}`$) become unstable at a critical Hartmann number $`Ha_c`$ where all three-dimensional modes are still stable. Above $`Ha_c`$ the critical value $`k_{2c}`$ broadens to an unstable $`k_2`$ interval. The latter means, however, that three-dimensional modes could be unstable immediately above $`Ha_c`$. Namely, consider a 3D eigenmode with wave numbers $`k_2`$, $`k_3`$ and growth rate $`\lambda `$ at some Hartmann number $`Ha=Ha_c+ϵ`$, $`ϵ>0`$. If $`k_2`$ is chosen from the interior of the unstable $`k_2`$ interval at $`Ha`$, then $`\stackrel{~}{k_2}=(k_2^2+k_3^2)^{1/2}`$ lies within the unstable $`k_2`$ interval at the Hartmann number $`\stackrel{~}{Ha}=(k_2/\stackrel{~}{k_2})Ha`$, where $`Ha_c<\stackrel{~}{Ha}<Ha_c+ϵ`$, if only $`|k_3|`$ is chosen sufficiently small. This does not mean yet that the 2D mode to which the 3D mode is connected is unstable, since there are in general also stable 2D eigenmodes with the same wave number $`\stackrel{~}{k_2}`$. But if the associated 2D mode is unstable, i.e., if the real part of $`\stackrel{~}{\lambda }=(\stackrel{~}{k_2}/k_2)\lambda `$ is positive, this implies that also $`\mathrm{Re}(\lambda )>0`$. The possibility of unstable three-dimensional eigenmodes close to the critical Hartmann number is excluded by the Squire theorem, however, if $`L_3`$ is finite, i.e., if there is a positive lower bound (however small) to the modulus of the wave number $`k_3`$. In that case there is a finite Hartmann number interval above $`Ha_c`$ where all unstable eigensolutions are purely two-dimensional.
### 2 Case $`L_2`$ finite
Fixing $`L_2`$ to a finite value complicates the problem, since only a set of discrete values is admitted for $`k_2`$. If not just $`L_2=n\cdot 2\pi /k_{2c}`$, with $`n`$ denoting a positive integer number, that is, if $`k_{2c}`$ is not just an admissible $`k_2`$, instability to 2D modes will set in at some Hartmann number $`[Ha_c]_{L_2}`$ above $`Ha_c`$ and for a wave number $`[k_{2c}]_{L_2}`$ different from $`k_{2c}`$.
#### a Subcase $`L_2\le L_{2c}=2\pi /k_{2c}`$
In the case $`L_2\le L_{2c}`$, $`k_2\ge k_{2c}`$ holds for all admissible $`k_2`$, and consequently the smallest admissible $`k_2`$ becomes unstable first, i.e., $`[k_{2c}]_{L_2}=2\pi /L_2\ge k_{2c}`$. It is easily seen that, as in the case of $`L_2=\mathrm{\infty }`$, (i) directly at the onset of instability only 2D modes can be unstable (since modes with $`k_2=0`$ cannot be unstable and the Squire theorem thus would connect any unstable 3D mode to a 2D mode with wave number $`\stackrel{~}{k_2}>[k_{2c}]_{L_2}`$ outside the unstable $`k_2`$ interval at the Hartmann number $`[Ha_c]_{L_2}`$), (ii) immediately above $`[Ha_c]_{L_2}`$ also unstable 3D modes are possible (or at least not forbidden by Squire’s theorem), (iii) a finite aspect ratio $`L_3`$ (however large) ensures that in a finite Hartmann number interval close to the onset of instability only purely two-dimensional eigenmodes are unstable (see also Fig. 2 where the pinch is stable with respect to 3D modes for $`L_3\lesssim 6`$).
#### b Subcase $`L_2>L_{2c}=2\pi /k_{2c}`$
More involved is the situation for $`L_2>L_{2c}`$. Then it cannot be excluded generally that 3D modes become unstable first, and in principle each individual situation has to be tested separately. One can distinguish between the cases $`[k_{2c}]_{L_2}>k_{2c}`$ and $`[k_{2c}]_{L_2}<k_{2c}`$, of which the first one is simpler. In both cases special complications arise from the fact that 3D modes with wave numbers $`k_2`$ smaller than $`[k_{2c}]_{L_2}`$, that is to say, with $`k_2=n\cdot 2\pi /L_2<[k_{2c}]_{L_2}=n_0\cdot 2\pi /L_2`$ ($`n`$, $`n_0`$ denoting integer numbers), can come into play.
In the case of $`[k_{2c}]_{L_2}>k_{2c}`$ these 3D modes (with $`k_2`$ smaller than $`[k_{2c}]_{L_2}`$) are the only 3D modes that could become unstable at a Hartmann number less than $`[Ha_c]_{L_2}`$ (where the first 2D mode becomes unstable); if they remained stable, the situation is similar to subcase 2 a. The 3D modes with $`k_2<k_{2c}`$ must remain stable close to the onset of 2D instability, however, if $`[Ha_c]_{L_2}`$ does not exceed $`Ha_c`$ too much (and $`[k_{2c}]_{L_2}`$ does not differ too much from $`k_{2c}`$), such that (i) $`|k_3|`$ has to be larger than some positive threshold value in order that $`\stackrel{~}{k_2}=(k_2^2+k_3^2)^{1/2}`$ (with $`k_2=n2\pi /L_2`$, $`n<n_0`$) can come into the unstable $`k_2`$ interval close to the onset of instability (since there is a finite gap between the unstable $`k_2`$ interval and the largest admissible $`k_2`$ that is smaller than $`k_{2c}`$) and (ii) as a consequence of this $`\stackrel{~}{Ha}=(k_2/\stackrel{~}{k_2})Ha`$ must be smaller than $`Ha_c`$. If this is the case and, furthermore, $`[k_{2c}]_{L_2}>k_{2c}`$, the situation is the same as for $`L_2\le 2\pi /k_{2c}`$.
The numerical example of this paper belongs to the category just discussed: $`L_2`$ finite, $`L_2>2\pi /k_{2c}`$, $`[k_{2c}]_{L_2}>k_{2c}`$, and close to the onset of instability no unstable 3D modes with $`k_2<[k_{2c}]_{L_2}`$. We found $`k_{2c}=2.67`$, corresponding to a critical wavelength of $`L_{2c}=2.35`$, and $`Ha_c=64.57`$. The critical values for the fixed aspect ratio $`L_2=4`$ are $`[k_{2c}]_{L_2=4}=\pi `$, corresponding to a critical wavelength of $`[L_{2c}]_{L_2=4}=2`$, and $`[Ha_c]_{L_2=4}=66.20784`$ (see also Fig. 1). Loosely speaking, an unstable 3D mode now has to fit between $`Ha_c`$ and $`[Ha_c]_{L_2=4}`$ with its critical Hartmann number $`\stackrel{~}{Ha}`$. It can only have the wave number $`k_2=2\pi /4=\pi /2`$, since otherwise $`\stackrel{~}{k_2}=(k_2^2+k_3^2)^{1/2}>[k_{2c}]_{L_2}`$ (i.e., the associated 2D mode would be stable). This implies, in order to have instability
$$\stackrel{~}{k_2}=[(\pi /2)^2+k_3^2]^{1/2}>k_{2c}=2.67,$$
(22)
and consequently
$$\stackrel{~}{Ha}=\frac{\pi /2}{\stackrel{~}{k_2}}[Ha_c]_{L_2=4}<\frac{\pi /2}{k_{2c}}[Ha_c]_{L_2=4}\approx 38.9,$$
(23)
which lies below $`Ha_c`$. 3D modes with $`k_2=\pi /2`$ cannot be unstable even for Hartmann numbers significantly above $`[Ha_c]_{L_2=4}`$. 3D modes with $`k_2=[k_{2c}]_{L_2=4}`$, on the other hand, can be unstable immediately above $`[Ha_c]_{L_2=4}`$ and are stabilized by an upper bound to the aspect ratio $`L_3`$, as discussed in the preceding subsections of this Appendix. We found the condition $`L_3<1000`$ to be sufficient to stabilize all 3D modes at $`Ha=66.208`$ ($`>[Ha_c]_{L_2=4}=66.20784`$).
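As a quick numerical cross-check of Eqs. (22) and (23), the bound on $`\stackrel{~}{Ha}`$ follows directly from the critical values quoted above; a minimal sketch in plain Python, using only numbers given in the text:

```python
import math

k2c = 2.67              # critical wave number k_2c
Ha_c = 64.57            # critical Hartmann number for unrestricted k_2
Ha_c_L2 = 66.20784      # [Ha_c] at fixed aspect ratio L_2 = 4
k2 = math.pi / 2        # only admissible wave number below [k_2c]_{L_2=4} = pi

# Eq. (23): upper bound on the Hartmann number at which a 3D mode
# with k_2 = pi/2 could be connected to an unstable 2D mode
Ha_tilde_max = (k2 / k2c) * Ha_c_L2
print(Ha_tilde_max)          # ~38.95, i.e. the ~38.9 of Eq. (23)
print(Ha_tilde_max < Ha_c)   # True: safely below Ha_c
```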
# Diffractive production of high-$`p_t`$ photons at HERA
## 1 Introduction
One of the cleanest of all diffractive processes is that of diffractive photon production in the process $`\gamma p\to \gamma Y`$ where the photon carries a large $`p_t`$ and is well separated in rapidity from the hadronic system $`Y`$. The process can be measured at HERA. The largeness of the transferred momentum, $`-t\simeq p_t^2\gg \mathrm{\Lambda }_{\mathrm{QCD}}^2`$, ensures the applicability of perturbative QCD. Unlike diffractive meson production this process has the advantage that the hard subprocess is completely calculable in perturbation theory. The only non-perturbative component resides in the parton density functions of the proton that factorize in the usual manner.
Theoretical interest in this process dates back to early work in which calculations were performed in fixed order perturbation theory, to lowest order in $`\alpha _s`$. Recent work has extended this calculation to sum all leading logarithms in energy, first for real and subsequently also for virtual incoming photons. The cross-section for $`\gamma q\to \gamma q`$ can be written
$$\frac{d\sigma _{\gamma q}}{dp_t^2}\simeq \frac{1}{16\pi \widehat{s}^2}\left|A_{++}\right|^2$$
(1)
and we have ignored a small contribution that flips the helicity of the incoming photon. The photon-quark CM energy is given by $`\widehat{s}`$. To leading logarithmic accuracy
$$A_{++}=i\alpha \alpha _s^2\sum _qe_q^2\frac{\pi }{6}\frac{\widehat{s}}{p_t^2}\int _{-\infty }^{\infty }\frac{d\nu }{1+\nu ^2}\frac{\nu ^2}{(\nu ^2+1/4)^2}\frac{\mathrm{tanh}\pi \nu }{\pi \nu }F(\nu )\mathrm{e}^{z\chi (\nu )}$$
(2)
where
$$z\equiv \frac{3\alpha _s}{\pi }\mathrm{log}\frac{\widehat{s}}{p_t^2},$$
(3)
$`\chi (\nu )=2(\mathrm{\Psi }(1)-\mathrm{Re}\mathrm{\Psi }(1/2+i\nu ))`$ is the BFKL eigenfunction, $`F(\nu )=2(11+12\nu ^2)`$ for on-shell photons and there is a sum over the quark charges squared, $`e_q^2`$. The separation in rapidity between the struck parton and the final-state photon is $`\mathrm{\Delta }\eta \approx \mathrm{log}(\widehat{s}/p_t^2)`$.
The full photon-proton cross-section is obtained after multiplying by the parton density functions:
$$\frac{d\sigma }{dxdp_t^2}=\left[\frac{81}{16}g(x,\mu )+\mathrm{\Sigma }(x,\mu )\right]\frac{d\sigma _{\gamma q}}{dp_t^2}$$
(4)
and we take the factorization scale $`\mu =p_t`$.
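For illustration, the parton-level factor in square brackets is easy to evaluate with any modern PDF library; a sketch assuming the LHAPDF6 Python bindings (the set name `CT14lo` is an arbitrary stand-in; the text above used PDFLIB code 155, which is a different interface):

```python
import lhapdf  # assumes the LHAPDF6 python bindings are installed

pdf = lhapdf.mkPDF("CT14lo", 0)   # placeholder LO proton set

def parton_factor(x, mu):
    """[81/16 * g(x,mu) + Sigma(x,mu)] of Eq. (4); xfxQ returns x*f(x,Q)."""
    g = pdf.xfxQ(21, x, mu) / x                   # gluon, colour factor (9/4)^2
    sigma = sum(pdf.xfxQ(pid, x, mu) / x          # light (anti)quark singlet
                for pid in (1, 2, 3, -1, -2, -3))
    return 81.0 / 16.0 * g + sigma
```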
We have implemented this result in the HERWIG event generator in order to aid the experimental measurement of the process. We also note that having done this, it is a straightforward procedure to include the high-$`p_t`$ production of vector mesons in a similar manner and we intend to do this in the near future. In this paper, we compare the HERWIG generated data with theory and discuss the strategy for a future measurement.
In order to speed up the event generation procedure, we used the following approximate parameterization:
$`G(z)`$ $`\equiv `$ $`{\displaystyle \int _{-\infty }^{\infty }}{\displaystyle \frac{d\nu }{1+\nu ^2}}{\displaystyle \frac{\nu ^2}{(\nu ^2+1/4)^2}}{\displaystyle \frac{\mathrm{tanh}\pi \nu }{\pi \nu }}2(11+12\nu ^2)\mathrm{e}^{z\chi (\nu )}`$ (5)
$`\approx `$ $`{\displaystyle \frac{4.52}{(z+0.1)^{3/2}}}\mathrm{e}^{4z\mathrm{ln}2}\mathrm{\Theta }(z-1)+(23.7+35z^{2.3})\mathrm{\Theta }(1-z)`$
which is good to within a few percent over the $`z`$-range of interest.
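A sketch of how one might check this parameterization numerically, evaluating the $`\nu `$ integral with scipy and comparing to the fit (the digamma function $`\mathrm{\Psi }`$ enters through the BFKL eigenfunction $`\chi (\nu )`$; the code assumes the conventions of Eqs. (2) and (5)):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import digamma

def chi(nu):
    """BFKL eigenfunction: chi(nu) = 2*(Psi(1) - Re Psi(1/2 + i*nu))."""
    return 2.0 * (digamma(1.0) - digamma(0.5 + 1j * nu).real)

def G_exact(z):
    def integrand(nu):
        x = np.pi * nu
        t = np.tanh(x) / x if abs(x) > 1e-8 else 1.0  # tanh(pi nu)/(pi nu) -> 1
        return (1.0 / (1.0 + nu**2) * nu**2 / (nu**2 + 0.25)**2
                * t * 2.0 * (11.0 + 12.0 * nu**2) * np.exp(z * chi(nu)))
    val, _ = quad(integrand, 0.0, 60.0, limit=400)    # integrand is even in nu
    return 2.0 * val

def G_approx(z):
    """Eq. (5): BFKL-like asymptotic form above z = 1, polynomial below."""
    if z >= 1.0:
        return 4.52 / (z + 0.1)**1.5 * np.exp(4.0 * z * np.log(2.0))
    return 23.7 + 35.0 * z**2.3

for z in (0.5, 1.5, 2.5):
    print(z, G_exact(z), G_approx(z))
```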
## 2 Results
We show the $`p_t`$ spectrum of the scattered photon in Figure 2. This plot is computed at fixed $`W=200`$ GeV ($`W^2=\widehat{s}/x`$) and a fixed $`\alpha _s=0.2`$. The choice to fix $`\alpha _s`$ is supported by Tevatron and HERA data on gaps between jets and high $`p_t`$ diffractive vector meson production. The solid curve shows the theoretical prediction derived directly from (4); it is compared to the HERWIG generated data at the parton level (before parton showers) and at the hadron level. We used the proton parton density functions corresponding to code 155 in PDFLIB. The good agreement between theory and parton level is a check that the process is correctly implemented in HERWIG. The systematic shift arises because HERWIG ensures that energy and momentum are conserved and that final state hadrons are on shell.
In subsequent figures, we make the typical HERA cuts on the photon energy variable, $`0.25<y<0.75`$, and on the photon virtuality, $`Q^2<0.01`$ GeV<sup>2</sup>. Statistical errors are shown corresponding to 44 pb<sup>-1</sup> of ep data, typical of that already collected by each HERA experiment. We also make a cut $`y_{IP}<0.01`$ where
$$y_{IP}=\sum _i\frac{(E-p_z)_i}{2E_\gamma }\simeq \frac{p_t^2}{xW^2}\simeq \mathrm{e}^{-\mathrm{\Delta }\eta }$$
(6)
and the sum is over all final state particles excluding the electron and photon. Note that $`y_{IP}`$ can be measured accurately without needing to see the whole of system $`Y`$ since the material lost at low angles does not contribute much to the numerator. As the last approximate equality shows, this cut ensures that the rapidity gap between the outgoing struck parton and the outgoing photon is bigger than about 4.5 units (recall that a large rapidity gap is a signal of diffractive processes). We have also integrated over all photon $`p_t>2.5`$ GeV (and used $`p_{\mathrm{tmin}}=1`$ GeV in the event generation).
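In an analysis this is a direct sum over reconstructed final-state objects; a minimal sketch (four-vectors as `(E, px, py, pz)` tuples; which objects enter the sum is the analysis choice described above):

```python
def y_pomeron(particles, E_gamma):
    """Eq. (6): y_IP = sum_i (E - p_z)_i / (2 E_gamma), summed over all
    final-state particles except the scattered electron and the photon."""
    return sum(E - pz for (E, px, py, pz) in particles) / (2.0 * E_gamma)

# the cut y_IP < 0.01 then corresponds to a gap of ln(100) ~ 4.6 units
```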
The effect of varying the $`y_{IP}`$ cut on the size of the rapidity gap can be seen in Figure 2, where we plot the rapidity of the scattered photon and the edge of system $`Y`$. By not requiring a detailed measurement of system $`Y`$ it is possible to reach very high rapidity gaps. As the plot shows, gaps of 6 to 7 units in rapidity are not uncommon. Note that the limited $`y`$-range combined with the steeply falling $`p_t`$ spectrum constrain the rapidity of the photon to be around $`\eta \approx -2`$.
In Figure 3 we show the $`x_{IP}`$ distribution and in Figure 4 the $`p_t`$ distribution:
$$x_{IP}=\frac{(E+p_z)_\gamma }{2E_p}\simeq \frac{p_t^2}{W^2}.$$
(7)
This variable can be measured to high accuracy. The steep rise at small $`x_{IP}`$ is driven by the BFKL kernel $`\chi (\nu )`$ in (2). In particular, the dominant contribution comes from $`\nu \approx 0`$, and this leads to
$$\frac{d\sigma }{dx_{IP}}\sim \frac{W^2}{p_t^4}\mathrm{e}^{2z\chi (0)}\sim \frac{1}{W^2}\left(\frac{1}{x_{IP}}\right)^{2\omega _0+2}$$
(8)
where $`\omega _0=(3\alpha _s/\pi )4\mathrm{ln}2`$ in the LLA. It will be interesting to see to what degree the measured $`x_{IP}`$ distribution follows this power-like behaviour.
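For orientation, with the fixed $`\alpha _s=0.2`$ used above, the LLA power in Eq. (8) evaluates to a steep spectrum (a one-line numerical check):

```python
import math

alpha_s = 0.2
omega_0 = 3 * alpha_s / math.pi * 4 * math.log(2)  # z*chi(0) per unit log(s/pt^2)
print(round(omega_0, 2))          # 0.53
print(round(2 * omega_0 + 2, 2))  # 3.06: dsigma/dx_IP ~ x_IP**-3.06
```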
We thank Jon Butterworth and Norman Evanson for their help.
# Mass Measurement of the W-boson using the ALEPH Detector at LEP
## 1 INTRODUCTION
In this paper we discuss the measurement of the W mass from direct reconstruction of the invariant mass of the decay products in the channels WW → qq̄qq̄ (4q) and WW → ℓνqq̄. Preliminary results are presented for data collected in ALEPH during 1998 with an integrated luminosity of 174.2 pb<sup>-1</sup> at 188.63 GeV.
## 2 EVENT SELECTIONS
### 2.1 WW → qq̄qq̄ events
Events are preselected to remove radiative returns to the Z and clustered into four jets using the `DURHAM-PE` algorithm, defining $`y_{cut}>0.001`$. Events are vetoed if a charged track in a jet carries more than 90% of the jet energy or if there is more than 95% of electromagnetic energy in a $`1^{\circ }`$ cone around any particle.
A Neural Network with fourteen input variables (NN14) is used to perform the final selection. Training is performed with an independent sample of the standard Monte-Carlo (`KORALW` hadronised with `JETSET`) and comparable samples of qq̄$`(\gamma )`$ and ZZ events, both generated using `PYTHIA`, to simulate the background. Events with NN14 > 0.3 are used to extract the W mass.
### 2.2 WW → e$`\nu `$qq̄ and WW → $`\mu \nu `$qq̄ events
The total charged energy and multiplicity are used to preselect events with a further cut on the total longitudinal momentum and visible energy to remove radiative returns to the Z. The lepton candidates are chosen to be more energetic and isolated than the other charged tracks and are identified as an electron or muon in the detector. The energy of electron candidates is corrected for possible bremsstrahlung photons detected in the electromagnetic calorimeter.
The `DURHAM-PE` algorithm is used to force two jets from objects not used to reconstruct the lepton, defining $`y_{cut}>0.0003`$. A probability for an event to come from a signal process is determined from Monte Carlo reference samples using the lepton energy and isolation and the event total transverse momentum. Selected events are required to have a probability greater than 0.4.
### 2.3 WW → $`\tau \nu `$qq̄ events
A similar preselection to that used for electron and muon events is applied with additional constraints to remove events with i) energy around the beam line, ii) isolated, energetic photons and iii) those already selected as electron or muon candidate events.
A tau jet is constructed from one or three charged tracks and two other jets are forced using the `JADE` algorithm. The tau jet must be that jet most anti-parallel to the missing momentum vector and isolated from the other jets. As with electron and muon candidate events, a probability is calculated and the same cut applied.
## 3 EXTRACTION OF THE W-MASS
### 3.1 WW → qq̄qq̄ events
To improve the invariant mass resolution a four constraint kinematic fit is applied to the four jets to conserve energy (as provided by LEP) and momentum. Corrections are applied to the jets to take into account particle losses in the detector. The masses from the three combinations of di-jets formed from the fitted jets are rescaled according to $`m_{ij}^{resc}/m_{ij}=E_{beam}/(E_i+E_j)`$, where $`E_i`$ and $`E_j`$ are the measured jet energies.
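The rescaling itself is a one-liner; a toy sketch (the jet energies and di-jet mass below are purely illustrative numbers, not ALEPH data):

```python
E_beam = 188.63 / 2.0  # beam energy in GeV at the 1998 running point

def rescaled_mass(m_ij, E_i, E_j):
    """m_ij^resc = m_ij * E_beam / (E_i + E_j)."""
    return m_ij * E_beam / (E_i + E_j)

print(rescaled_mass(78.0, 45.0, 47.0))  # ~80.0 GeV/c^2
```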
A pairing algorithm is applied to the di-jet combinations to select that which most closely corresponds to a WW pair. The combination with the smallest mass difference between rescaled masses is chosen provided one mass lies in the window 60 to 86 GeV/$`c^2`$ and the other 74 to 86 GeV/$`c^2`$.
A binned Monte-Carlo reweighting procedure is employed to find the value of $`m_W`$ which best fits the mass distributions. Events from a `KORALW` sample with equivalent background are reweighted with a CC03 matrix element to provide a two-dimensional probability density function for the minimisation using a single parameter, $`m_W`$. Variable binning controlled by the density of Monte-Carlo is employed, optimised to produce a stable result. The W-width is allowed to vary with $`m_W`$ according to the Standard Model.
In order to check that the procedure does not introduce any biases, Monte Carlo samples are generated with $`m_W`$ in the range 79.35 to 81.35 GeV/$`c^2`$. Treating these samples as data, no significant offsets in the masses measured are found.
### 3.2 WW → ℓνqq̄ events
A two constraint fit is applied to each event by minimising a $`\chi ^2`$ constructed from the deviations of selected parameters of the jets and leptons from their true values and demanding that the hadron and lepton invariant masses are equal. The single fitted mass obtained for each event must lie in the window 74 to 94.5 GeV/$`c^2`$.
A reweighting procedure similar to that employed in the four quark channel is used to fit the mass distribution. Fixed binning is used for the electron and muon channels and variable binning is retained for the tau channel.
## 4 SYSTEMATIC UNCERTAINTIES
The systematic errors are summarised in Table 1. Uncertainties due to the detector are determined using data taken at the Z at intervals during the high energy running. Particles not seen by the detector cause discrepancies in the jet finding; the effect is estimated by matching jets built from Monte Carlo tracks before and after passing them through the detector simulation.
The error due to parton fragmentation effects is measured by determining the mass shift when `HERWIG` rather than `JETSET` is used for hadronisation. The effect of initial state radiation is estimated by comparing the use of first and second order matrix elements. The LEP energy error is that given by LEP.
Colour reconnection between parton pairs in the qq̄qq̄ channel is studied using variants on `JETSET` or `HERWIG`. The Bose-Einstein effect has been studied using Z-peak data; the systematic error is determined from the shift in the mass measured when the parameters obtained at the Z peak are applied to Monte Carlo events using a modified `JETSET`. This effect is applied between the decay products of the two Ws in the qq̄qq̄ channel.
## 5 MASS MEASUREMENT RESULTS
The mass distributions obtained are shown in figure 1. The preliminary results for 189 GeV are given in Table 2 for each channel where the BE-CR error is that for the Bose-Einstein and Colour reconnection systematics added in quadrature.
Results for data taken at 172 and 183 GeV have been published; combining these with the results above gives
$`m_W^{hadronic}`$ = 80.561 $`\pm `$ 0.095(stat.) $`\pm `$ 0.050(syst.) $`\pm `$ 0.056(BE-CR) GeV/$`c^2`$
$`m_W^{leptonic}`$ = 80.343 $`\pm `$ 0.089(stat.) $`\pm `$ 0.041(syst.) GeV/$`c^2`$.
The current weighted average of all ALEPH results is obtained by including measurements from W pair cross sections at 161 and 172 GeV, giving
$`m_W`$ = 80.411 $`\pm `$ 0.064(stat.) $`\pm `$ 0.037(syst.) $`\pm `$ 0.022(BE-CR) $`\pm `$ 0.018(LEP) GeV/$`c^2`$
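As an illustration of how such a combination behaves, a naive inverse-variance average of the hadronic and leptonic numbers above (errors added in quadrature per channel, correlations ignored; the real ALEPH combination also folds in the 161 and 172 GeV cross-section measurements and correlated systematics) lands close to the quoted value:

```python
import math

# 172-189 GeV combined values quoted above
m_had = 80.561
e_had = math.sqrt(0.095**2 + 0.050**2 + 0.056**2)
m_lep = 80.343
e_lep = math.sqrt(0.089**2 + 0.041**2)

w_had, w_lep = 1.0 / e_had**2, 1.0 / e_lep**2
m_comb = (w_had * m_had + w_lep * m_lep) / (w_had + w_lep)
e_comb = 1.0 / math.sqrt(w_had + w_lep)
print(f"m_W = {m_comb:.3f} +- {e_comb:.3f} GeV/c^2")  # ~80.43 +- 0.08
```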
# Blueshift Without Blueshift: Red Hole Gamma-Ray Burst Models Explain the Peak Energy Distribution
## GRB MODEL BUILDING CHALLENGES
Because gamma-ray bursts vary so rapidly, they must be compact. These compact gamma-ray bursts release enormous energy, and therefore they must form an intense fireball that is optically thick, pair-producing, and thermalized. But the spectrum is not thermal, and there is no sign of pair-production attenuation at the high end of the observed spectrum. This seeming self-contradiction (the opacity problem) can be solved by having the fireball power a relativistic shell or jet that collides with something (perhaps itself) to produce the observed gamma rays. This fireball-driven relativistic shock model is currently the leading candidate to explain GRBs. It solves the opacity problem. But like almost all other published models, it fails to explain the observed spectroscopy of GRBs, particularly the narrowness of the observed peak energy distribution. Furthermore, this model does not explain the high ratio of the energy of the GRB burst itself (caused by internal shocks) to the energy in the afterglow (caused by external shocks in the fireball/shock model). Nevertheless the predictions of this model for the afterglows themselves are consistent with current observations.
Finally, there is the problem of the overall energetics of the GRB. The two leading candidates to produce the initial fireball or fireballs – the so-called central engine – are merging neutron stars and core-collapse supernovae. Both these sources have over 10<sup>54</sup> ergs of total energy available. This is more than enough energy for even the most energetic GRB, but it is not at all clear how to prevent most of it from falling into the newly created black hole that forms in the standard general relativity versions of these models.
There seems to be an inherent conflict between solving the opacity problem and solving the peak energy distribution problem. The only successful technique available to solve the opacity problem is to invoke highly relativistic bulk motion. In the relativistic frame, the gamma rays are below pair-production threshold and so do not suffer pair-production attenuation. This definitively solves the opacity problem. But unless the Lorentz gamma factor of the bulk motion can be fine-tuned to a very narrow range for all GRBs, the resulting blueshift will not only relocate the peak of the photon energy distribution; it will also substantially widen it, inconsistent with the observed narrow E-peak distribution. Thus one needs to find a way to fine-tune the Lorentz gamma factor or find some other way around this conflict. In the fireball/shock model, the gamma factor depends sensitively on the baryon loading, and hence will vary widely. Furthermore, the internal shocks model is dependent on shocks with varying Lorentz gamma factors colliding with each other. So narrowly limiting the gamma factor is not a reasonable option for this model.
A generic solution to this problem is provided if the relativistic bulk motion results not from an initial explosion, but rather from the gravitational acceleration of matter falling into a deep potential well. An arbitrarily high Lorentz gamma factor can be attained, but the accompanying blueshift will be exactly cancelled when the matter and radiation are redshifted as they emerge from the potential well. (By that time, the matter and radiation will have separated, so the opacity problem has already been solved).
A black hole can provide the necessary deep potential well. Once matter or radiation is deep in the potential well of a black hole, however, it is almost impossible for it to escape. Therefore, we will consider an alternative gravitational collapse paradigm in which it is possible to escape from deep within the potential well of a gravitationally collapsed object.
## WHY CONSIDER RED-HOLE MODELS?
The problems with constructing a GRB model might be sufficient motivation to consider alternate theories of gravity. However, a stronger motivation comes from the theory of gravitation. Recent theoretical developments in string theory, quantum gravity and critical collapse strongly suggest the possibilities of both gravitational collapse without singularities (and without loss of information) and also gravitational collapse without event horizons. If these possibilities are correct, we are forced to consider the phenomenological consequences (such as different models for GRBs and core-collapse supernovae) of alternate paradigms for gravitational collapse in which black holes do not form.
## RED HOLES – A NEW PARADIGM
Many authors have considered the alternative in which a hard-core collapsed object, similar to a smaller, harder, denser neutron star, forms in place of a black hole. We here consider the alternative in which no such hard surface forms. Instead the spacetime stretching that forms a black hole in the standard model occurs, but it does not continue to the extent necessary to form an event horizon or a singularity. Instead, spacetime stretches enormously, but not infinitely, and forms a wide, deep potential well with a narrow throat. We call this a red hole.
This type of spacetime configuration was considered by Harrison, Thorne, Wakano and Wheeler (HTWW) in 1965, but only as a way station in the final collapse to a black hole (not yet then called by that name). In their version, part of the configuration is inside the event horizon, the collapse continues, and a singularity soon forms.
In the new alternate paradigm we call a red hole, no event horizon forms and no singularity forms. The gravitational collapse does not continue forever, but eventually stops. (Why? Perhaps due to quantum effects or string-theory dualities, but we cannot discuss this adequately here.) As the collapse proceeds, the collapsing matter becomes denser and denser until it reaches a critical point, after which the distortion of spacetime is so great that the density decreases. This happens because the spacetime is stretching outward faster than the collapsing material can fall inward. (This decreasing density effect was already noticed by HTWW in their analysis of gravitational collapse in the context of standard general relativity. In general relativity, this expansion of spacetime is mostly hidden behind the event horizon and does not prevent the formation of a singularity in a finite time. This is not the case in several observationally viable alternate theories of gravity.) This is why we are confident that the center of a red hole resembles a low-density vacuum more than it resembles a high-density neutron star. The decrease in density due to this enormous stretching may also be a factor in halting the gravitational collapse of the red hole before the stretching becomes infinite.
As a result, even though the stretching of spacetime is enormous, it never becomes fast enough to exceed the speed of light and cause an event horizon to form. And it stops before it reaches an infinite size or any other form of singularity. (Infinite density and infinite curvature also do not occur.) Nevertheless, it is very hard to escape from a red hole. First, there are trapped orbits inside the red hole for photons as well as massive particles, which allows permanent or nearly permanent trapping of mass and energy. Second, the Shapiro delay in crossing a red hole is very substantial (in some cases, enormous). Hence particles which are only crossing the red hole or passing through are in effect temporarily trapped.
In fact most of the matter falling into a red hole will be trapped. However, radiation, and highly relativistic matter that falls directly into the center of the red hole and does not rescatter while inside the red hole, can travel straight through and emerge on the other side. This possibility is essential for our proposed new GRB models.
## RED-HOLE BURST MODEL
Elsewhere, we have considered models based on relocating part or all of the standard fireball/shock model inside or near a red hole. Here, we want to consider an even more radical model. In this model, the central engine is the direct source of the gamma-ray burst. There is no intervening finely tuned jet of baryons. There is no sensitive dependence on the baryon loading factor, and no dependence on a later shock to retransform the energy into gamma rays. Instead the original pair-rich fireball (created by matter collapsing into a red hole) becomes rapidly thin as it falls into the interior of the red hole and expands. Because everything (photons, baryons, electrons and positrons) is falling into the red hole at almost the same highly relativistic speed, the photons are below pair-production threshold in the infalling frame. Therefore, the fireball is optically thin and the annihilation radiation escapes. The plasma is falling with highly relativistic Lorentz gamma factors up to 1000 or more. The pair-annihilation photons are emitted in opposing pairs. One is highly redshifted, while its twin is equally and oppositely highly blueshifted. The spectrum is highly broadened, but the central peak does not move significantly, since the net blueshift of the infalling electron-positron pair is balanced by the net (or average) redshift of the escaping photon pair. Thus this model can explain the narrow peak energy distribution with ease. The more critical question is whether the combined annihilation line and thermal spectrum of the pair-rich fireball can be stretched enough to create the Band spectrum, or whether more conventional reliance on synchrotron shock emission and/or inverse Compton scattering is necessary.
# Self-organization in BML Traffic Flow Model: Analytical Approaches
(This work was done in 1995, accepted for publication in 1996.)
## I Introduction
In recent years much attention has been paid to cellular automaton models for the investigation of complex systems. These models can be viewed as statistical models with dynamics. Some models of traffic flow are under intensive study. A two-dimensional model was introduced by Biham, Middleton and Levine (BML) . It is defined on a square lattice with periodic boundary condition. Each site contains either an arrow pointing upwards or to the right, or is empty. The dynamics is controlled by the traffic light, such that the right arrows move only on even time steps and the up arrows move on odd time steps. On even time steps, each right arrow moves one lattice constant to the right unless the site on its righthand side is occupied by an arrow, which is either up or right. If it is blocked, it does not move, even if during the same time step the blocking arrow moves out of that site. Similar rules apply to the up arrows, which move upwards. The velocity $`v`$ of a right (up) arrow is defined to be the number of moves it makes within a number of even (odd) time steps divided by this number of time steps. It has a maximal value $`v=1`$, indicating that the arrow is never stopped. The minimal value $`v=0`$ represents that the arrow is stopped during the entire time duration. The average velocity $`\overline{v}`$ for the system is obtained by averaging $`v`$ over all the arrows in the system. If it is further averaged over many asymptotic configurations, one obtains the ensemble average.
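The update rule just described is compact enough to state directly as code; a minimal sketch of one time step (our own encoding: 0 = hole, 1 = right arrow, 2 = up arrow; periodic boundaries are handled by `np.roll`):

```python
import numpy as np

def bml_step(grid, t):
    """One BML update: right arrows (1) move on even t, up arrows (2) on odd t.
    Synchronous rule: an arrow blocked at the start of the step stays put,
    even if the blocking arrow moves away during the same step."""
    kind, ax = (1, 1) if t % 2 == 0 else (2, 0)  # axis 1 = 'right', axis 0 = 'up'
    ahead = np.roll(grid, -1, axis=ax)           # state of the site each arrow faces
    movers = (grid == kind) & (ahead == 0)
    new = grid.copy()
    new[movers] = 0                              # vacate old sites...
    new[np.roll(movers, 1, axis=ax)] = kind      # ...and occupy the sites ahead
    return new
```

Iterating this map from a random initial grid and counting the fraction of arrows that move reproduces the average velocity $`\overline{v}`$ discussed below.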
The BML model is fully deterministic. It is called self-organized because whatever the initial condition is, one often (but not always) finds in the asymptotic configurations that all the arrows move freely in their turns, hence the velocity averaged over all the arrows is $`\overline{v}=1`$, or that they are all stopped, with $`\overline{v}=0`$. These two types of configurations are referred to as moving and jamming ones, respectively. In the language of dynamics, they correspond to the biggest basins of attraction. Which asymptotic configuration is finally reached depends on both the density of arrows and the initial condition. Is there any asymptotic configuration in which some arrows are moving while others are blocked? The answer is yes. Consider a column occupied by less than $`N/2`$ up arrows, where $`N`$ is the number of the lattice points on each column, while its two neighboring columns are both full of up arrows. Then asymptotically the arrows in this column are moving forever, independent of other arrows, which are all blocked. But such configurations are rare, that is, they occupy a very small volume in the phase space, compared with those of the moving or jamming configurations. This is indicated by the simulation result that there is a sharp moving-to-jamming transition with the increase of arrow density. The simulation result (see Fig. 3 of Ref. ) also tells us that the fraction of phase space volume occupied by the moving or jamming configurations increases with the size of the lattice.
Here we study the model analytically. We give exact results on the lower critical density, below which there are only moving configurations asymptotically, and an upper critical density, above which there are only jamming configurations asymptotically. Between these two critical densities, the asymptotic configuration can be moving or jamming, or even one with both moving and blocked arrows, depending on the initial configuration. As indicated in the simulation, there is another critical density above which the asymptotic configurations are typically (but not always) jamming. This is the sharp (but not absolutely stepwise) jamming transition discovered in the ensemble average velocity.
The content of this article is as follows. For convenience of discussions, we introduce some notations in Sec. II. In Secs. III and IV, we give some exact results; the upper and lower critical densities are determined. In Sec. V, by considering the typical pattern formation of the jamming cluster, we obtain, in a heuristic way, the critical density for the jamming transition. The dependence on the lattice size is determined, and the order parameter is identified. Sec. VI is a summary.
## II Notations
For convenience of discussions, we introduce some notations here. There are $`N\times N`$ lattice points, the density of up (right) arrows is $`p_{\uparrow }=n_{\uparrow }/N^2`$ ($`p_{\rightarrow }=n_{\rightarrow }/N^2`$), where $`n_{\uparrow }`$ ($`n_{\rightarrow }`$) is the number of up (right) arrows. The total density of arrows is $`p=p_{\uparrow }+p_{\rightarrow }`$. The number of empty lattice points is denoted as $`n_0`$. The empty sites can be regarded as occupied by holes. Each lattice point is given a coordinate $`(i,j)`$. $`i`$ and $`j`$ each runs from $`1`$ to $`N`$, hence the lower-left corner is $`(1,1)`$. The periodic boundary condition can be expressed as
$$(i+N,j)=(i,j+N)=(i+N,j+N)=(i,j).$$
(1)
Because of the periodic boundary condition, the lattice can be transformed in the way shown in Fig. 1, so that it can be viewed as a parallelogram, also with the periodic boundary condition. This parallelogram is made up of $`N`$ lines parallel to the left-falling diagonal of the original square; on each of these lines there are also $`N`$ lattice points. For convenience, we say that the lattice is composed of $`N`$ “left-falling diagonals” (LFD). For example, this transformation translates the line linking $`(1,i)`$ and $`(i,1)`$ $`N`$ units upwards to be connected with the line linking $`(i+1,N)`$ and $`(N,i+1)`$, composing a LFD. Since the arrows are right or up, this viewpoint is very useful in our discussions.
To avoid confusion, the word “state” is used for the lattice points, while “configuration” is for the whole system. The state of $`(i,j)`$ is denoted as $`|i,j`$. $`|i,j=\uparrow `$, $`\rightarrow `$ or $`0`$ if $`(i,j)`$ is occupied by an up arrow, a right arrow or a hole, respectively. $`|i,j`$ is, of course, dependent on time, so it can be written as $`|i,j(t)`$ if necessary. Obviously, in a moving configuration, $`|i,j(t)=|i+\delta ,j(t+\delta )`$ if $`|i,j(t)=\rightarrow `$, and $`|i,j(t)=|i,j+\delta (t+\delta )`$ if $`|i,j(t)=\uparrow `$.
## III Exact results on moving configuration
First we point out that not only a jamming configuration, but also a moving configuration is stationary, in the sense that all arrows of the same type move simultaneously and thus form a rigid body.
The exact results are stated in the form of theorems.
Theorem 1.—In a moving configuration, a LFD always consists of the same type of arrows, as well as holes.
Proof.—Suppose $`|i,j(t)=\uparrow `$, while $`|i-\delta ,j+\delta (t)=\rightarrow `$, where $`\delta `$ is a positive integer. If $`t`$ is an odd (even) time step, then after $`\delta `$ odd (even) time steps, the right (up) arrow is blocked by the up (right) arrow. This should not happen in a moving configuration. Because of periodic boundary condition, every lattice point on the same LFD as $`(i,j)`$ can be represented as $`(i-\delta ,j+\delta )`$ with $`\delta >0`$. Therefore there cannot be both up and right arrows on a same LFD. Q.E.D.
Theorem 2.—In a moving configuration where there are both right and up arrows, there is at least one empty LFD at any instant.
Proof.—Without lose of generality, consider an odd time step. For a LFD of up arrows, there cannot be any right arrow on its upside LFD, seen as follows. Suppose there are right arrows on this upside LFD. If a right arrow is just above an up arrow, the latter is blocked, which is not permitted in a moving configuration. If a right arrow is above a hole on the LFD composed of up arrows and holes, at the next time step, this right arrow will fill the hole (which has moved one step upwards), and join the up arrows (which have moved one step upwards) on a same LFD. Hence there appears a LFD where there are both up and right arrows. This is forbidden in a moving configuration, according to Theorem 1. If there are both up and right arrows on the lattice, because of periodic boundary condition, there must be right arrows “before” the up arrows, even though on the original square lattice they are “after” the up arrows. Therefore there is at least one empty LFD. On the other hand, at this odd time step, for a LFD of right arrows, it is not necessary for its righthand side (just the upside) LFD to be empty, since the right arrows do not move at this time step. In conclusion, the least number of empty LFD is only $`1`$. Q.E.D.
Theorem 3.—Consider $`N>2`$. There is an upper critical density, above which there is no moving configuration. The upper critical density is $`1/2`$ if $`N`$ is odd, and is $`1/2-1/2N`$ if $`N`$ is even.
Proof.—For $`N=2`$, there cannot be any moving configuration with the presence of both up and right arrows. Hence we only consider $`N>2`$. Without loss of generality, consider an odd time step. At this time step, there can be an arrow on the righthand side of a right arrow, but there cannot be any arrow on the upside of an up arrow. The most crowded configuration is the following: an empty LFD is on the upside of a block of LFD-s consisting of up arrows and holes, which is followed by a block of LFD-s consisting of right arrows and holes. Hence the number of up arrows $`n_{\uparrow }`$ and the holes in the up block, $`n_0^{(\uparrow )}`$, should satisfy $`n_{\uparrow }\le n_0^{(\uparrow )}`$ if there are an even number of LFD-s in this block, and $`n_{\uparrow }\le n_0^{(\uparrow )}+N`$ if there are an odd number of LFD-s in this block. Similarly, the number of right arrows $`n_{\rightarrow }`$ and the holes in the right block, $`n_0^{(\rightarrow )}`$, should satisfy $`n_{\rightarrow }\le n_0^{(\rightarrow )}`$ if there are an even number of LFD-s in this block, and $`n_{\rightarrow }\le n_0^{(\rightarrow )}+N`$ if there are an odd number of LFD-s in this block. In addition, the total number of the holes on the lattice should satisfy $`n_0\ge n_0^{(\uparrow )}+n_0^{(\rightarrow )}+N`$, since there is at least one empty LFD, according to Theorem 2. Therefore if $`N`$ is even, at the most crowded case, we have an odd number of non-empty LFD-s, hence we have either an odd number of up LFD-s and an even number of right LFD-s, or an even number of up LFD-s and an odd number of right LFD-s. In either case, we have $`n=n_{\uparrow }+n_{\rightarrow }\le n_0^{(\uparrow )}+n_0^{(\rightarrow )}+N\le n_0=N^2-n`$, hence $`p=n/N^2\le 1/2`$. If $`N`$ is odd, at the most crowded case, we have an even number of non-empty LFD-s, hence we have either an odd number of up LFD-s and an odd number of right LFD-s, or an even number of up LFD-s and an even number of right LFD-s. In the odd-odd case, we have $`n=n_{\uparrow }+n_{\rightarrow }\le n_0^{(\uparrow )}+n_0^{(\rightarrow )}+2N\le n_0+N=N^2+N-n`$, hence $`p=n/N^2\le 1/2+1/2N`$. In the even-even case, we have $`n=n_{\uparrow }+n_{\rightarrow }\le n_0^{(\uparrow )}+n_0^{(\rightarrow )}\le n_0-N=N^2-N-n`$, hence $`p=n/N^2\le 1/2-1/2N`$. Combining these two cases we have $`p\le 1/2-1/2N`$ if $`N`$ is even. Q.E.D.
## IV Exact results on jamming configuration
Because an up arrow can only be blocked by an arrow (which can be right or up) above it, while a right arrow can only be blocked by an arrow on its righthand side, these arrows form a directed path in a jamming configuration. All directed paths point upwards or to the right. Considering the periodic boundary condition, one may obtain the following theorems.
Theorem 4.—In a jamming configuration, starting from an arbitrary arrow, one can obtain a directed path which returns to either the starting arrow or another arrow on this path.
We call such a path a closed path. If it returns to the starting arrow, it is a circular path. Each closed path contains a circular path as a part.
Theorem 5.—In a jamming configuration, there must be at least one circular path.
Clearly this is a necessary condition for a configuration to be jamming.
Theorem 6.—The length of a circular path is $`N`$ if it is composed of only one type of arrows, and is $`2N`$ if it is composed of both types of arrows. Here the unit of the length is the lattice constant. For example, the length of an edge of the square is $`N-1`$.
Proof.—Obviously the circular path made up of one type of arrows is parallel to the edge of the square, hence its length is $`N`$. If the circular path is made up of both types of arrows, because it is directed, generally it appears as two parts on the square lattice. For example, one part is a directed path connecting $`(1,J)`$ and $`(I,N)`$, another part is a directed path connecting $`(I,1)`$ and $`(N,J)`$. Note that $`(N,J)`$ is a nearest neighbor of $`(1,J)=(N+1,J)`$, and $`(I,N)`$ is a nearest neighbor of $`(I,1)=(I,N+1)`$. Clearly the total length of such a circular path is $`2N`$. Q.E.D.
Theorem 7.—There is a lower critical density, below which there is no jamming configuration. The lower critical density is $`(1+p_s/p_l)/N`$, where $`p_s`$ and $`p_l`$ are respectively the smaller and larger one of $`p_{\uparrow }`$ and $`p_{\rightarrow }`$.
Proof.—Suppose there is a circular path made up of only the arrows with the larger density, and there are no other arrows of this type. The arrows with the smaller density are blocked by this circular path. Therefore $`N_l=N`$, $`N_s=(p_s/p_l)N`$. Thus $`(1+p_s/p_l)/N`$ is the smallest possible density for the jamming configuration if the circular path is made up of only one type of arrows. If all the up and right arrows take part in composing the circular path, the density is $`2/N`$. Since $`2/N\ge (1+p_s/p_l)/N`$, the theorem holds in general. Q.E.D.
## V Formation of a jamming configuration
The so-called jamming transition observed in the simulation refers to a sharp, but not stepwise, transition in the ensemble average velocity. There are both moving and jamming configurations between the upper and lower critical densities, depending on the initial condition. In Ref. , the critical density for the jamming transition was defined to be at the center of the transition region. Here we define the critical density of the jamming transition as the value of the density at which the ensemble average velocity starts to be nearly zero. We approximate the non-stepwise transition in the ensemble average by a typical process of formation of a jamming configuration. Consequently the critical density of the jamming transition is approximated as the density above which there forms a typical jamming cluster.
A jamming configuration is formed soon after the appearance of a circular path, which usually consists of both up and right arrows. When an arrow meets the circular path, it is blocked. The circular path blocks the right arrows on its lefthand side and up arrows on its downside. Blocked arrows block other arrows, and so on. Consequently, a global cluster with directed branching structure emerges. The skeleton of this cluster is a circular path.
Now note that in the final jamming cluster, there are some arrows which are the ends of the cluster. If the end-arrow is an up (right) one, its upside (righthand side) must be occupied, while there must be neither a right arrow on its lefthand side nor an up arrow on its downside. So the density of end-arrows is
$$\rho _e(p)=p^2(1-p_{\uparrow })(1-p_{\rightarrow })=p^2-p^3+p^2p_{\uparrow }p_{\rightarrow }\approx p^2.$$
(2)
Hence the number of ends is $`p^2N^2`$. Since the length of the circular path is $`N`$, it is very reasonable to assume that the number of the ends is a power of $`N`$. Consequently the critical density for the “jamming transition” is a power of $`N`$, that is,
$$p_c(N)=CN^{-\alpha },$$
(3)
where $`C`$ is a coefficient, while $`\alpha `$ is the exponent.
The simulation results can be used to test this heuristic argument, and determine $`\alpha `$ and $`C`$. With the approximate values of $`p_c(N)`$ for $`N=16,32,64,128,512`$ obtained from Fig. 3 in Ref. , we obtain a good fit to Eq. (3) with $`\alpha \approx 0.14`$ and $`C\approx 0.76`$, as shown in Fig. 2.
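A sketch of the corresponding power-law fit; the $`p_c`$ values below are hypothetical placeholders (the actual numbers were read off Fig. 3 of the reference and are not reproduced here); only the lattice sizes and the quoted results $`\alpha \approx 0.14`$, $`C\approx 0.76`$ come from the text:

```python
import numpy as np

N = np.array([16, 32, 64, 128, 512])
p_c = np.array([0.51, 0.46, 0.42, 0.38, 0.32])  # hypothetical placeholder values

# fit log p_c = log C - alpha * log N
slope, intercept = np.polyfit(np.log(N), np.log(p_c), 1)
alpha, C = -slope, np.exp(intercept)
print(alpha, C)  # the text reports alpha ~ 0.14 and C ~ 0.76
```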
Eq. (3) suggests that the jamming cluster at $`p_c`$ is a fractal with dimensionality $`2-\alpha =1.86`$, which is close to 91/48, the fractal dimension of the infinite cluster of two-dimensional percolation. This is understandable since the jamming cluster forms soon after the circular path forms, which is similar to the formation of an infinite cluster in percolation. The order parameter of percolation is the probability that an arbitrarily chosen occupied site or bond belongs to an infinite cluster. Likewise, we identify the order parameter of the jamming transition as the probability that an arbitrarily chosen arrow belongs to a closed path.
## VI Summary
We have studied the BML two-dimensional traffic flow model analytically. In particular we give exact results on the two most typical asymptotic configurations, the moving configuration and the jamming configuration. In a moving configuration, all arrows keep moving, while in a jamming configuration, all arrows are blocked. Theorems 1 and 2 give two basic properties of a moving configuration. Based on these, Theorem 3 provides the upper critical density, above which there is no moving configuration asymptotically. The upper critical density is $`1/2`$ if $`N`$ is odd, and is $`1/2-1/2N`$ if $`N`$ is even. Theorems 4, 5 and 6 give basic properties of a jamming configuration. The crucial thing is the formation of a so-called circular path which cuts the lattice into two parts. Theorem 7 then gives the lower critical density, below which there is no jamming configuration asymptotically. The so-called jamming transition observed in the ensemble average velocity happens in a narrow region between the upper and lower critical densities. We define the critical density for the jamming transition as that at which the ensemble average velocity begins to be close to zero. We investigate it approximately by considering the formation of a typical jamming cluster. The $`N`$-dependent critical density $`p_c(N)`$ is claimed to be $`CN^{-\alpha }`$, which fits the numerical data very well, with $`C\approx 0.76`$ and $`\alpha \approx 0.14`$.
The jamming transition turns out to be quite similar to percolation, and we identify the order parameter as the probability that an arbitrarily chosen arrow belongs to a closed path. Further investigations of possible criticality would be interesting.
# The topology of the Fermi surface of Bi<sub>2</sub>Sr<sub>2</sub>CaCu<sub>2</sub>O<sub>8-δ</sub> from angle resolved photoemission
## Abstract
We present a study of the topology of the normal state Fermi surface (FS) of the high T<sub>c</sub> superconductor Bi<sub>2</sub>Sr<sub>2</sub>CaCu<sub>2</sub>O<sub>8-δ</sub> (Bi2212) using angle-resolved photoemission. We present FS mapping experiments, recorded using unpolarised radiation with high (E,k) resolution, and an extremely dense sampling of k-space. In addition, synchrotron radiation-based ARPES has been used to prove the energy independence of the FS as seen by photoemission. We resolve the current controversy regarding the normal state FS in Bi2212. The true picture is simple, self-consistent and robust: the FS is hole-like, with the form of rounded tubes centred on the corners of the Brillouin zone. Two further types of features are also clearly observed: shadow FSs, and diffraction replicas of the main FS caused by passage of the photoelectrons through the modulated Bi-O planes.
The topology and character of the normal state Fermi surfaces of the high temperature superconductors have been the object of both intensive study and equally lively debate for almost a decade. Angle-resolved photoemission spectroscopy (ARPES) has played a defining role in this discussion. The pioneering work of Aebi et al. illustrated that angle-scanned photoemission using unpolarised radiation can deliver a direct, unbiased image of the complete FS of Bi2212, confirming the large FS centered at the corners of the Brillouin zone (BZ) predicted by band structure calculations. Furthermore, the use of the mapping method enabled the identification of weak additional features (”shadow Fermi surface”, SFS) which were attributed to the effects of short-range antiferromagnetic spin correlations. Subsequently, ARPES investigations clearly identified a further set of dispersive photoemission structures which are extrinsic and result from diffraction of the outgoing photoelectrons as they pass through the structurally modulated Bi-O layer, which forms the cleavage surface in these systems.
Recently, this whole picture has been called into question. ARPES data recorded using particular photon energies (32-33 eV) have been interpreted in terms of either: a FS with missing segments, an extra set of one-dimensional states, or an electron-like FS centred around the $`\mathrm{\Gamma }`$ point. A further study suggests that both electron- and hole-like FS pieces can be observed, depending on the photon energy used in the ARPES experiment. These points illustrate that the situation is far from clear, and that it is essential that an unambiguous framework is arrived at for the interpretation of the ARPES data.
In this contribution, we present ARPES investigations of Bi2212, with the aim of clearing up the controversy regarding the normal state FS topology. We present a combination of angle-scanned photoemission data using unpolarised radiation (giving FS maps) with synchrotron-based EDCs. In this way we can shed light on the crucial role played not only by the polarisation effects, but also by the photon energy in the photoemission data from Bi2212. Ultimately, we are able to suggest a resolution of the current controversy, which can be shown to be a result of the complex, photon energy and polarisation-dependent interplay of three different types of photoemission features around the $`\overline{M}`$ point: the main FS, diffraction replicas (DRs) and the SFS.
The angle-scanned ARPES experiments were performed using monochromated, unpolarised He I radiation from a high performance source (VUV5000, Gammadata-SCIENTA) coupled to a SCIENTA SES200 analyser enabling simultaneous analysis of both the E and k-distribution of the photoelectrons. The overall energy resolution was set to 30 meV and the angular resolution to $`\pm 0.38^{\circ }`$, which gives $`\mathrm{\Delta }`$k $`\approx `$ 0.028 Å<sup>-1</sup> (i.e. 2.4 $`\%`$ of $`\mathrm{\Gamma }`$X). These experiments were either carried out at room temperature or at 120 K. The synchrotron-based data were recorded at 100 K with $`\mathrm{\Delta }\mathrm{\Theta }=\pm 1^{\circ }`$ and $`\mathrm{\Delta }`$E = 70 meV using a commercial 65mm goniometer-mounted analyser with radiation from the U2-FSGM beamline at the BESSY I facility. High quality single crystals of Bi2212 were cleaved in-situ to give mirror-like surfaces.
In Fig. 1 we show a Fermi surface map of Bi2212 with a k-point density of some 1500 EDCs per BZ quadrant. The grey scale indicates the photoemission intensity within an energy window of 20 meV centred at the Fermi level, E<sub>F</sub>. We stress here that each of the pixels of Fig. 1 represents a ’real’ EDC: no interpolation, reflection or other mathematical manipulation of the data has been carried out.
The topology of the main FS of Bi2212 for photoelectron final state energies of the order of 17 eV is evident – Fig. 1 shows it to have the form of rounded barrels centred at the X,Y points of the BZ. We mention at this point that the simple, almost ’traditional’ topology of the FS clearly observed here does not depend on the method used to define k<sub>F</sub>. In addition, the point should not be overlooked that the use of unpolarised radiation in such a mapping experiment gives a relatively unbiased reproduction of the FS, without the catastrophic changes in intensity contrast which can result from polarisation-dependent matrix element effects. Also evident in Fig. 1 is the presence of the shadow FS, which is particularly clear in the lower right corner of the map (the dotted lines show the form of the SFS).
Nevertheless, the question remains as to the validity of this FS topology when ’seen’ using photoelectrons with higher final state energies. It could be argued that the final states (17-20 eV above E<sub>F</sub>) accessed with ’traditional’ photon energies are not sufficiently high to guarantee free electron-like final states. In particular, it is the exact situation around the $`\overline{M}`$ point which is central to the debate, as it is in this region of k-space where the ’closing’ of the main FS arcs to give a $`\mathrm{\Gamma }`$-centred (electron-like) FS has been proposed. On using photon energies between 32 and 33 eV (giving a final state energy of around 28 eV) a number of groups have concluded a radically different topology for the normal state FS of Bi2212. In order to address this question, we have measured EDCs of Bi2212 using synchrotron radiation of different energies. Part of these data are shown in Fig. 2, which displays two series of EDCs recorded in the normal state (T=100 K) along the $`\mathrm{\Gamma }`$$`\overline{M}`$Z line in k-space.
The data with h$`\nu `$=32 eV are very similar to those reported previously and are recorded in the same experimental geometry. It is evident that the spectral weight of the states related to the extended saddle point singularity is strongly reduced around the $`\overline{M}`$ point, in agreement with recent theoretical calculations. This reduction could indeed be seen as a sign of a FS crossing, followed by a reappearance of the band between $`\overline{M}`$ and Z. However, taking a photon energy of 50 eV, for which no-one would doubt the validity of a quasi-free electron-like final state, the picture is completely different. These data look like those recorded for h$`\nu `$=20-25 eV, clearly showing the approach of the band to E<sub>F</sub> but giving no evidence of a Fermi level crossing – i.e. the extended saddle point singularity scenario is confirmed.
The singular nature of the data for h$`\nu \approx `$ 32-33 eV, combined with the reappearance of the ’traditional’ FS picture for higher photon energies, is a strong indication that the former are the result of strong matrix element-derived suppression of the spectral weight of the main band and not a FS crossing.
This explanation is confirmed by the data shown in Fig. 3, in which we present a detailed momentum map of the normal state FS of Bi2212 recorded at 120 K in order to capture more sharply the details around the $`\overline{M}`$-point.
Fig. 3 clearly shows the richness of structure in the ARPES data around $`\overline{M}`$. This arises from the complex interplay between the main FS, DR and SFS features. As an example of this, one can see the overlap of the SFS and 1st order DR features at ca. 0.6($`\pi `$,$`\pi `$) as a bright spot on the map. We emphasize that only an analysis of uninterpolated data recorded with high (E,k)-resolution on an extremely fine k-mesh can enable the discrimination between the numerous features concentrated within this small region of the BZ. Furthermore, the DRs of the main FS also lead to a bundling of intensity along a ribbon centered on the (0,-$`\pi `$)-($`\pi `$,0) line, indicated by grey shading in Figs. 1 and 3. Indeed, our data offer a natural explanation for the observations made with h$`\nu `$=32-33 eV. The matrix-element mediated reduction of the saddle point intensity for these photon energies (Fig. 2) explains both the suppression of spectral weight directly along the (0,-$`\pi `$)-($`\pi `$,0) cut observed in those studies, and means that the edges of the ribbon will become relatively more intense. Thus, crossing of the two edges of the ribbon feature could be mistakenly interpreted as a ’main’ band crossing along the $`\mathrm{\Gamma }`$-$`\overline{M}`$ line.
In conclusion, we have shown that high (E,k) resolution, high k-density angle-scanned photoemission data-sets combining the advantages of both the mapping and EDC approaches give a self-consistent and robust picture of the nature and topology of the FS in Bi2212. From the data presented here and from a comparison of pristine and Pb-doped Bi2212 it is clear that three different features are present in the ARPES data of Bi2212. These are a hole-like main FS with the topology of a curved barrel centred around the X,Y points, extrinsic DRs of the main FS due to the Bi-O modulation which lead to high intensity ribbons centered on the (0,-$`\pi `$)-($`\pi `$,0) line, and a shadow FS. The recent controversy as regards the FS topology most likely resulted from the use of unfavorable experimental conditions.
We gratefully acknowledge fruitful discussions with H. Eschrig and D. van der Marel. Part of this work has been supported by the BMBF (05 SB8BDA 6), the DFG (Graduiertenkolleg ’Struktur- und Korrelationseffekte in Festkörpern’ der TU-Dresden) and the SMWK (4-7531.50-040-823-99/6).
## 1 Introduction
With the goal of understanding the complexity of QCD and the role of symmetry in dynamics, we studied a field theory called Valence QCD (VQCD) in which the Z graphs are forbidden so that the Fock space is limited to the valence quarks. We calculated nucleon form factors, matrix elements, and hadron masses both with this theory and with quenched QCD on a set of lattices with the same gauge background. Comparing the results of the lattice calculations in these two theories, we drew conclusions regarding the $`SU(6)`$ valence quark model and chiral symmetry. While recognizing the goal of VQCD, Nathan Isgur disagrees with some of the conclusions we have drawn.
The foremost objection raised in Ref. is to our suggestion that the major part of the hyperfine splittings in baryons is due to Goldstone boson exchange and not one-gluon-exchange (OGE) interactions. The logic of Isgur’s objection is that VQCD yields a spectroscopy vastly different from quenched QCD and therefore the structure of the hadrons (to which hyperfine splittings in a quark model are intimately tied) is also suspect, so no definite conclusions are possible. To put this into perspective it should be emphasized at the outset that spectroscopy is only one aspect of hadron physics examined in our work. We have studied the axial and scalar couplings of the nucleon in terms of $`F_A/D_A`$ and $`F_S/D_S`$, the neutron to proton magnetic moment ratio $`\mu _n/\mu _p`$, and various form factors. None of these results reveal any pathologies of hadron structure and turn out to be close to the $`SU(6)`$ relations, as expected. In fact this is what motivated the study of valence degrees of freedom via VQCD.
In Sec. 2 we address specific issues related to spectroscopy in VQCD. Isgur also presented more general arguments against the idea of boson exchange as a contributor to hyperfine effects. A cornerstone of his discussion is the unifying aspect of OGE in a quark model picture. We believe that it is also natural and economical to identify chiral symmetry as the common origin for much of the physics being discussed here. Therefore in Sec. 3 we take the opportunity to sketch out an effective theory that may serve as a framework to interpret the numerical results of VQCD.
## 2 Hadron Spectrum
### 2.1 Meson excitation — $`a_1`$–$`\rho `$ mass difference
Isgur argues that even with the ‘constituent quark’ mass shift incorporated into VQCD, which lifts the baryon masses by $`3m_{const}`$ and the mesons by $`2m_{const}`$, it does not restore the $`a_1`$–$`\rho `$ mass splitting. This is a good point. However, the author’s objection that the $`a_1`$ does not have an orbital excitation energy relative to the $`\rho `$ is based on the non-relativistic picture that the axial vector meson has a $`p`$-wave excitation as compared to the $`s`$-wave description of the vector meson. This is not necessarily true for the relativistic system of light quarks. For example, in a chirally-symmetric world, there are degenerate states due to parity doubling. The pion would be degenerate with the scalar, and the $`a_1`$ would be degenerate with the $`\rho `$. This is indeed expected at high temperature where the chiral symmetry breaking order parameter, $`\langle \overline{\mathrm{\Psi }}\mathrm{\Psi }\rangle `$, goes to zero.
For heavy quarks, we think VQCD should be able to describe the vector – axial-vector meson difference based on the non-relativistic picture. As seen from Figs. 25 and 28 in Ref. , from $`m_qa=0.25`$ on, the axial-vector meson starts to lie higher than the vector meson. In the charmonium region ($`\kappa =0.1191`$), we find the mass difference between them to be $`502\pm 80\mathrm{MeV}`$. Indeed, this is close to the experimental difference of $`413\mathrm{MeV}`$ between $`\chi _{c1}`$ and $`J/\mathrm{\Psi }`$.
In the light quark region the near degeneracy of $`a_1`$ and $`\rho `$ is interpreted as being due to the fact that the axial symmetry breaking scale, as measured by the condensates $`\langle \overline{u}u\rangle `$ and $`\langle \overline{v}v\rangle `$, is small in VQCD as compared to $`\langle \overline{\mathrm{\Psi }}\mathrm{\Psi }\rangle `$ in QCD . As a result, there are near parity doublers in the meson spectrum. Note that this is consistent with the observation that dynamical mass generation, another manifestation of spontaneously broken chiral symmetry, is also very small in VQCD.
In the chiral theory, Weinberg’s second sum rule gives the relation $`m_{a_1}=\sqrt{2}m_\rho `$ and the improved sum rule, taking into account the experimental $`a_1`$ and $`\rho `$ decay constants, gives $`m_{a_1}=1.77m_\rho `$ . This relation is based on chiral symmetry, current algebra, vector meson dominance, and the KSFR relation, all of which rest on the premise of spontaneous symmetry breaking (SSB). Otherwise, one would expect parity doubling for $`a_1`$ and $`\rho `$. Thus, to explain the spectrum, we argue that it is sufficient to implement spontaneously broken chiral symmetry, not necessarily the $`p`$-wave orbital excitation as in the non-relativistic theory. In other words, by restoring the spontaneously broken $`SU(3)_L\times SU(3)_R\times U_A(1)`$ symmetry to VQCD which has only $`U_q(6)\times U_{\overline{q}}(6)`$, it is possible to restore the physical mass difference between $`a_1`$ and $`\rho `$ to be consistent with Weinberg’s sum rule.
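As a rough numerical orientation (a sketch; the $`\rho `$ and $`a_1`$ masses used below are rounded experimental values, not results from this work), the two sum-rule estimates bracket the observed $`a_1`$ mass:

```python
# Sum-rule estimates of the a1 mass from the physical rho mass.
# Illustrative only; input masses are rounded experimental values.
import math

m_rho = 770.0                      # MeV, rho(770)
m_a1_exp = 1230.0                  # MeV, a1(1260) nominal mass

m_a1_w2 = math.sqrt(2.0) * m_rho   # Weinberg's second sum rule
m_a1_imp = 1.77 * m_rho            # improved sum rule quoted in the text

print(f"Weinberg second sum rule: m_a1 = {m_a1_w2:.0f} MeV")   # ~1089 MeV
print(f"Improved sum rule:        m_a1 = {m_a1_imp:.0f} MeV")  # ~1363 MeV
print(f"Experiment:               m_a1 ~ {m_a1_exp:.0f} MeV")
```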
### 2.2 Hyperfine splittings
As for hyperfine splittings, we have argued that one-gluon-exchange is not the major source, since OGE is still contained in VQCD. Being magnetic in origin, the color-spin interaction is related to the hopping of the quarks in the gauge background in the spatial direction . VQCD does not change this from QCD; the $`\vec{\sigma }\cdot \vec{B}`$ term is present in the Pauli spinor representation of the VQCD action. Thus, we are forced to draw the conclusion that the one-gluon-exchange type of color-spin interaction, i.e. $`\lambda _i^c\lambda _j^c\,\vec{\sigma }_i\cdot \vec{\sigma }_j`$, cannot be responsible for the major part of the hyperfine splittings between $`N`$ and $`\mathrm{\Delta }`$ and between $`\rho `$ and $`\pi `$. While we suggested that Goldstone boson exchange is consistent with the Z-graphs and may be responsible for the missing hyperfine interaction in the baryons (Fig. 1), it is correctly pointed out by Isgur that there is no such $`q\overline{q}`$ exchange between the quark and anti-quark in the meson.
One therefore has to consider the possibility that the hyperfine splitting mechanism in the light quark sector is different in mesons from that in the baryons. The numerical results of QCD and VQCD do not, by themselves, reveal the interaction mechanism. A mapping to some model is necessary to make an interpretation. We consider the $`SU(3)`$ Nambu–Jona-Lasinio (NJL) model as an example. Starting with a color current-current coupling
$$-\frac{9}{8}G(\overline{\psi }t^a\gamma _\mu \psi )^2,$$
(1)
it is convenient to consider the Fierz transform to include the exchange terms. The Lagrangian for the color-singlet $`q\overline{q}`$ meson then takes the following $`SU(3)_L\times SU(3)_R`$ symmetric form with dimension-6 operators for the interaction
$$\mathcal{L}_{NJL}=\overline{\psi }(i\partial \!\!\!/-m_0)\psi +G\underset{i}{\sum }[(\overline{\psi }\frac{\lambda _i}{2}\psi )^2+(\overline{\psi }\frac{\lambda _i}{2}i\gamma _5\psi )^2]-\frac{G}{2}\underset{i}{\sum }[(\overline{\psi }\frac{\lambda _i}{2}\gamma _\mu \psi )^2+(\overline{\psi }\frac{\lambda _i}{2}\gamma _\mu \gamma _5\psi )^2].$$
(2)
The scalar four-fermion interaction can generate a dynamical quark mass
$$m_d=-G\langle \overline{\psi }\psi \rangle $$
(3)
in the mean-field approximation. This is illustrated in Fig. 2. While all the meson masses are lifted up by the dynamical quark masses, the attractive pseudo-scalar four-fermion interaction brings the pion mass back to zero, making it a Goldstone boson. The repulsive vector and axial-vector four-fermion interactions make the $`\rho `$, at $`770\mathrm{MeV}`$, slightly higher than twice $`m_d=360\mathrm{MeV}`$. Similarly, the $`a_1`$ mass is calculated at $`m_{a_1}\approx 1.2\mathrm{GeV}`$, which is not far from Weinberg’s sum rule relation $`m_{a_1}=\sqrt{2}m_\rho `$ . We see that with one parameter, $`G`$, the meson masses can be reasonably described in the NJL model without the $`q\overline{q}`$ type of meson exchange as in Fig. 1. In addition, current algebra relations such as the Gell-Mann-Oakes-Renner relation
$$m_\pi ^2f_\pi ^2=-\frac{m_u^0+m_d^0}{2}\langle \overline{u}u+\overline{d}d\rangle ,$$
(4)
are satisfied. The crucial ingredient here is spontaneous chiral symmetry breaking, which is characterized by a non-vanishing $`f_\pi `$ and quark condensate $`\langle \overline{\mathrm{\Psi }}\mathrm{\Psi }\rangle `$, and the existence of Goldstone bosons.
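For orientation, Eq. (4) can be inverted to estimate the size of the condensate; the current-quark masses below are assumed, representative values and are not taken from the text:

```python
# Illustrative Gell-Mann--Oakes--Renner estimate of the light-quark condensate,
# <ubar u> ~ <dbar d> = -m_pi^2 f_pi^2 / (m_u^0 + m_d^0), following Eq. (4).
m_pi = 138.0           # MeV, isospin-averaged pion mass
f_pi = 92.4            # MeV, pion decay constant
m_u0, m_d0 = 4.0, 7.0  # MeV, assumed current-quark masses

cond = -(m_pi**2 * f_pi**2) / (m_u0 + m_d0)   # MeV^3, per light flavor
print(f"<qbar q> ~ -({(-cond)**(1.0/3.0):.0f} MeV)^3")   # ~ -(245 MeV)^3
```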
We should point out that although the color current–current coupling in Eq. (1) is reminiscent of the one-gluon-exchange interaction with the $`q^2`$ in the gluon propagator replaced by a cut-off $`\mathrm{\Lambda }^2`$ which reflects the short-range nature of the interaction, it is the covariant form for relativistic quarks, not the one-gluon-exchange potential of the non-relativistic reduction. It is the latter which has been considered as the standard form for hyperfine and fine splittings in the valence quark model.
As illustrated through the NJL model, it is possible to have different mechanisms for hyperfine splitting in the baryons and mesons. In the baryons, the hyperfine splitting can be largely due to the meson exchanges between the quarks in the $`t`$-channel (Fig. 1), whereas in the mesons it is the $`s`$-channel short-range four-fermion coupling (Fig. 3) that gives rise to the hyperfine splittings. Although they appear to be different mechanisms, both of them are based on spontaneously broken chiral symmetry.
The author displayed the spectrum ranging from heavy–heavy mesons ($`b\overline{b},c\overline{c}`$) to light–light mesons ($`s\overline{s}`$ and isovector light quarkonia) in Fig. 4 of his paper , which suggests a smooth trend as a function of the quark mass, and argues for a universal OGE hyperfine interaction with a strength proportional to $`1/m_Q^2`$. We have pointed out in our VQCD paper from the outset that we believe the heavy–heavy mesons are well described by a non-relativistic potential model including the OGE; this is supported by the lattice calculations . It is the validity of OGE in the light–light meson sector that we question. Neglected in Fig. 4 of Ref. are the $`1^{++}`$ and $`0^{++}`$ mesons. Had these been put in, one would have seen that $`a_0(1430)`$ lies higher than $`a_1(1260)`$ and $`a_2(1320)`$. This ordering between $`1^{++}`$ and $`0^{++}`$ mesons is reversed from that in the charmonium family, where $`\chi _{c1}(3510)`$ lies higher than $`\chi _{c0}(3415)`$. There is an indication from the lattice calculation that this cross-over occurs at about the strange mass region . As far as we know, this pattern of order reversal in the fine splitting as the quark mass becomes light cannot be accommodated in the OGE picture.
Also shown in Fig. 5(a) of Ref. are the hyperfine splittings of the ground state heavy-light mesons. We concur that the splittings of $`B^{*}(5325)`$–$`B(5279)`$ and $`D^{*}(2010)`$–$`D(1869)`$ are quite consistent with the matrix elements of the hyperfine interaction $`\vec{\sigma }_Q\cdot \vec{B}/2m_Q`$ and that they clearly demonstrate the $`1/m_Q`$ behavior of the heavy quark. We never questioned the relativistic corrections of the heavy quarks. It is with light quarks that we think OGE has problems. For example, consider the similar splittings for the heavy–light mesons with different light quarks. The mass difference between $`D^{*}(2010)`$ and $`D(1869)`$ is $`140.64\pm 0.10\mathrm{MeV}`$. This is practically the same as that between $`D_s^{*}(2110)`$ and $`D_s(1969)`$, which is $`143.9\pm 0.4\mathrm{MeV}`$. There is no indication of the $`1/m_q`$ dependence on the light quark mass as required by the OGE potential. Similarly, we find that $`m_{B^{*}}-m_B=45.78\pm 0.35\mathrm{MeV}`$ is identical to $`m_{B_s^{*}}-m_{B_s}=47.0\pm 2.6\mathrm{MeV}`$. Again, there is no $`1/m_q`$ dependence.
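The point can be made quantitative with the splittings just quoted; the constituent masses entering the OGE expectation are assumed, illustrative values:

```python
# Heavy-light hyperfine splittings quoted in the text, versus the naive OGE
# expectation that the splitting scales as 1/m_q in the light-quark mass.
splittings = {
    "D*-D":   140.64,   # MeV
    "Ds*-Ds": 143.9,    # MeV
    "B*-B":   45.78,    # MeV
    "Bs*-Bs": 47.0,     # MeV
}
for name, dm in splittings.items():
    print(f"{name:8s} {dm:7.2f} MeV")

m_ud, m_s = 330.0, 550.0   # MeV, assumed constituent quark masses
print(f"OGE (1/m_q) expectation: (Ds*-Ds)/(D*-D) ~ m_ud/m_s = {m_ud/m_s:.2f}")
print(f"Observed ratio:          {splittings['Ds*-Ds']/splittings['D*-D']:.2f}")
```

The observed ratio is consistent with unity rather than with the roughly 0.6 an OGE-type $`1/m_q`$ scaling would suggest.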
## 3 Effective Theory for Both Mesons and Baryons
Besides commenting on the spectroscopy specific to VQCD, Isgur also questioned the meson exchange picture on more general grounds. Since this issue has been raised, we take the opportunity to extend our discussion although it is outside the scope of VQCD.
Perhaps the most serious challenge to the meson exchange picture in the baryons is the possibility of meson exchanges between the quark and anti-quark in the iso-singlet meson. It is pointed out by Isgur that the annihilation diagram depicted in Fig. 6 in Ref. in terms of the quark lines is OZI suppressed in QCD. We should add that it is $`O(1/N_c^2)`$ suppressed as compared to one-pion-exchange between the quark pairs in the baryon (Fig. 1) in the large $`N_c`$ analysis. On the other hand, interpreting this as a Goldstone boson exchange between the quark and anti-quark in the iso-singlet mesons, such as a kaon exchange, leads to large $`\omega \varphi `$ mixing. How does one reconcile the apparent contradiction? The short answer is that there is no such process in the effective theory of mesons. It is inconsistent, within the renormalization group approach to effective theories, to consider this QCD annihilation process as a meson exchange between the quark and anti-quark in the meson. To see this, we shall use the NJL model as an illustration.
### 3.1 Bosonization
We shall follow the example given by U. Vogl and W. Weise for a simple $`U(1)_V\times U(1)_A`$ symmetric Lagrangian
$$\mathcal{L}=\overline{\psi }(i\partial \!\!\!/-m_0)\psi +G[(\overline{\psi }\psi )^2+(\overline{\psi }i\gamma _5\psi )^2].$$
(5)
To bosonize this theory, one needs to integrate out the fermions. One can follow the Hubbard-Stratonovich transformation by introducing Gaussian auxiliary boson fields $`\sigma `$ and $`\pi `$ with the Lagrangian $`-\mu ^2/2\,(\sigma ^2+\pi ^2)`$ and the partition function becomes
$$𝒵=𝒩\int 𝒟\sigma \,𝒟\pi \,𝒟\overline{\psi }\,𝒟\psi \,e^{i\int d^4x\left\{\overline{\psi }[i\partial \!\!\!/-m_0-\mu \sqrt{2G}(\sigma +i\gamma _5\pi )]\psi -\frac{\mu ^2}{2}(\sigma ^2+\pi ^2)\right\}},$$
(6)
after a linear shift of the fields $`\sigma `$ and $`\pi `$. Note here that $`\sigma `$ and $`\pi `$ are auxiliary fields with no kinetic terms.
At this stage, one can integrate out the fermion field with the quadratic action to obtain the fermion determinant. This gives an effective action with the $`\mathrm{tr}\mathrm{ln}M`$ Lagrangian, where $`M`$ is the inverse quark propagator between the square brackets in Eq. (6). Expanding the $`\mathrm{tr}\mathrm{ln}M`$ to second order in the derivative $`\partial _\mu `$ in the low-energy, long-wavelength approximation, the effective Lagrangian becomes
$$\mathcal{L}_{eff}(\sigma ,\pi )=\frac{1}{2}[(\partial _\mu \sigma )^2+(\partial _\mu \pi )^2]-\frac{1}{2}m_\pi ^2\pi ^2-\frac{1}{2}m_\sigma ^2\sigma ^2-\frac{2m^2}{f_\pi }\sigma (\sigma ^2+\pi ^2)-\frac{m^2}{2f_\pi ^2}(\sigma ^2+\pi ^2)^2,$$
(7)
where $`m=m_0+\mu \sqrt{2G}\langle \sigma \rangle =m_0-2G\langle \overline{\mathrm{\Psi }}\mathrm{\Psi }\rangle `$. Besides giving the $`\pi `$ and $`\sigma `$ masses as the physical mesons, it also gives the explicit meson-meson couplings.
Thus, to construct an effective theory below the meson confinement scale, which corresponds to the chiral symmetry breaking scale $`\mathrm{\Lambda }_\chi =4\pi f_\pi \approx 1\mathrm{GeV}`$ as we shall see later, one can take the following equivalent approaches: In the first one, one can introduce higher dimensional operators like $`(\overline{\psi }\psi )^2,(\overline{\psi }i\gamma _5\psi )^2,(\overline{\psi }\gamma _\mu \psi )^2,(\overline{\psi }\gamma _\mu \gamma _5\psi )^2`$ to the usual QCD Lagrangian and tune the couplings to match to QCD above $`\mathrm{\Lambda }_\chi `$. Many improved lattice actions are constructed this way, allowing numerical simulation at a lower lattice cut-off, i.e. a larger lattice spacing, and thereby saving computer time . In the second approach, one can introduce auxiliary fields $`\pi ,\sigma ,\rho ,a_1`$, etc. to replace the four-fermion operators with couplings to fermion bilinears and multi-auxiliary-field couplings as in Eq. (6). This form has been considered in lattice QCD simulations to control the singular nature of the massless Dirac operator. The third approach is to bosonize the theory by integrating out the fermion fields and performing a derivative expansion of the $`\mathrm{tr}\mathrm{ln}M`$ action from the fermion loop as in Eq. (7). An extensive and successful model of this kind has been developed where $`\rho `$ is predicted to be close to the experimental value and the $`a_1`$ mass is related to the $`\rho `$ via the modified Weinberg sum rule . VMD and the KSFR relation are satisfied. In addition, the pion form factor, $`\pi \pi `$ scattering, and a host of meson decays are all in good agreement with experiment.
We see that in none of the above three equivalent approaches is there a coupling between the quarks and the physical mesons. Thus, there is no one-pion exchange (OPE) between the quark and anti-quark in the meson. Since one is below the meson confinement scale $`\mathrm{\Lambda }_\chi `$, the meson fields are the relevant degrees of freedom. Once one integrates out the fermion fields in the meson in favor of the physical meson fields, it would be inconsistent to construct a meson model with couplings between quarks and physical mesons. Of course, this does not preclude short-range couplings between $`u\overline{u}`$, $`d\overline{d}`$ and $`s\overline{s}`$ in the $`s`$-channel to resolve the $`U_A(1)`$ anomaly and give the $`\eta ^{\prime }`$ a large mass via the contact term of the topological susceptibility .
Then how does one justify the $`\sigma `$–quark model that one proposes as an effective theory for the baryons? To do so, one has to make a distinction between the meson and the baryon.
### 3.2 Chiral effective theory for baryons
In view of the observation that meson form factors have a monopole form while baryon form factors have a dipole form, and that the $`\pi NN`$ form factor is much softer than the $`\rho \pi \pi `$ form factor, we suggest that the confinement scale of quarks in the baryon, $`l_B`$, is larger than $`l_M`$ – the confinement scale between the quark and anti-quark in the meson; that is,
$$l_B>l_M.$$
(8)
This is consistent with the large $`N_c`$ approach, where the mesons are treated as point-like fields and the baryons emerge as solitons with a size of order unity in $`N_c`$. Taking $`l_M`$ from the $`\rho \pi \pi `$ form factor gives $`l_M\approx 0.2\mathrm{fm}`$. This is very close to the chiral symmetry breaking scale set by $`\mathrm{\Lambda }_\chi =4\pi f_\pi `$. We consider them to be the same, i.e. below $`\mathrm{\Lambda }_\chi `$, operators of meson fields become relevant operators. As for the baryon confinement scale, we take it to be the size characterizing the meson-baryon-baryon form factors. Defining the meson-baryon-baryon form factors by taking out the respective meson poles in the nucleon pseudoscalar, vector, and axial form factors (see Fig. 17 in Ref. ), we obtain $`l_B\approx 0.6`$–$`0.7\mathrm{fm}`$. This satisfies the inequality in Eq. (8). Thus, in between these two scales $`l_M`$ and $`l_B`$, one could have coexistence of mesons and quarks in a baryon.
We give an outline to show how to construct a chiral effective theory for baryons. In the intermediate length scale between $`l_M`$ and $`l_B`$, one needs to separate the fermion field into a long-range one and a short-range one
$$\psi =\psi _L+\psi _S,$$
(9)
where $`\psi _L/\psi _S`$ represent the infrared/ultraviolet part of the quark field with momentum components below/above $`1/l_M`$ or $`\mathrm{\Lambda }_\chi `$. We add to the ordinary QCD Lagrangian irrelevant higher dimension operators with coupling between bilinear quark fields and auxiliary fields as given in Ref. . However, we interpret these quark fields as the short-range ones, i.e. $`\psi _S`$ and $`\overline{\psi }_S`$. Following the procedure in Ref. , one can integrate out the $`\psi _S`$ and $`\overline{\psi }_S`$ fields and perform the derivative expansion to bosonize the short-range part of the quark fields. This leads to the Lagrangian with the following generic form:
$$\mathcal{L}_{\chi QCD}=\mathcal{L}_{QCD^{\prime }}(\overline{\psi }_L,\psi _L,A_\mu ^L)+\mathcal{L}_M(\pi ,\sigma ,\rho ,a_1,G,\dots )+\mathcal{L}_{\sigma q}(\overline{\psi }_L,\psi _L,\pi ,\sigma ,\rho ,a_1,G,\dots ).$$
(10)
$`\mathcal{L}_{QCD^{\prime }}`$ includes the original form of QCD but in terms of the quark fields $`\overline{\psi }_L,\psi _L`$, and the long-range gauge field $`A_\mu ^L`$ with renormalized couplings; it also includes higher-order covariant derivatives . $`\mathcal{L}_M`$ is the meson effective Lagrangian, e.g. the one derived by Li , which should include the glueball field $`G`$. Finally, $`\mathcal{L}_{\sigma q}`$ gives the coupling between $`\overline{\psi }_L,\psi _L`$, and the mesons. As we see, at this intermediate scale the quarks, gluons, and mesons coexist and meson fields do couple to the quark fields, but it is $`\psi _L`$ that the mesons couple to, not $`\psi _S`$. Going further down, below the baryon confinement scale $`1/l_B`$, one can integrate out $`\overline{\psi }_L,\psi _L`$ and $`A_\mu ^L`$, resulting in an effective Lagrangian $`\mathcal{L}(\overline{\mathrm{\Psi }}_B,\mathrm{\Psi }_B,\pi ,\sigma ,\rho ,a_1,G,\dots )`$ in terms of the baryon and meson fields . This would correspond to an effective theory in chiral perturbation theory.
Fig. 4 is a schematic illustration of effective theories partitioned by the two scales of $`l_M`$ and $`l_B`$. We should point out that although we adopt two scales here, they are distinct from those of Manohar and Georgi . In the latter, the $`\sigma `$ – quark model does not make a distinction between the baryons and mesons. As such, there is an ambiguity of double counting of mesons and $`q\overline{q}`$ states. By making the quark-quark confinement length scale $`l_B`$ larger than the quark–anti-quark confinement length scale $`l_M`$, one does not have this ambiguity. The outline we give here is a systematic way of constructing the effective theory at appropriate scales following Wilson’s renormalization group approach .
We see from Fig. 5 that the $`\mathcal{L}_{\sigma q}`$ part of the effective chiral theory in Eq. (10) is capable of depicting meson dominance (Fig. 5(a)), the quark Z-graphs and the cloud degree of freedom via the meson exchange current (Fig. 5(b)), and the sea quarks in the disconnected insertion via the meson loop (Fig. 5(c)) in a baryon. These correspond to the dynamical quark degrees of freedom in QCD, as we alluded to in the study of baryon form factors in the path-integral formulation . On the other hand, when one considers the chiral perturbation theory at energies lower than $`1/l_B\approx 300\mathrm{MeV}`$, the dressing of baryons with meson clouds (Fig. 6) no longer distinguishes the cloud-quarks from the sea-quarks.
One important aspect of constructing effective theories based on the renormalization group is that chiral symmetry and other symmetries of the theory should be preserved as one changes the cut-off so as to ensure universality.
As we see from the above construction of effective chiral theories, there is no large OZI-violating meson exchange between the quark and anti-quark in an iso-singlet meson. The problem that Isgur perceives for the meson exchange in the iso-singlet meson is simply not there.
## 4 Conclusions
As stressed at the beginning, hadron spectroscopy is only one of the many facets of hadron physics. At low energies, there is ample evidence that chiral symmetry plays a crucial role, for example in $`\pi \pi `$ scattering, the Goldberger-Treiman relation, the Gell-Mann-Oakes-Renner relation, the Kroll-Ruderman relation, the KSFR relation, and the Weinberg sum rules.
As far as light hadrons are concerned, it is natural to expect chiral symmetry to play a role in spectroscopy also. For many years, various chiral models have been successful in describing the pattern of masses in the meson sector in addition to scattering and decays. Now it appears that the chiral quark picture can give a reasonable explanation of the baryon spectroscopy as well as structure.
Finally, we echo Isgur’s comment ‘while qQCD describes both the $`\rho \pi `$ and $`\mathrm{\Delta }N`$ splittings, they are both poorly described in vQCD. It would be natural and economical to identify a common origin for these problems.’ It is proposed that chiral symmetry is this common origin, although it may have different dynamical realizations in mesons and baryons. We suggest it is chiral symmetry that is the essential physics mutilated in VQCD and that this is manifested by the suppression of dynamical mass generation, approximate parity doublets, the incorrect $`U(6)`$ symmetry and the disappearance of hyperfine splittings. We expect that effective chiral theories or models that incorporate the spontaneously broken $`SU(3)_L\times SU(3)_R\times U_A(1)`$ symmetry will have the relevant dynamical degrees of freedom necessary to delineate the structure and spectroscopy of both mesons and baryons of light quarks at a scale below $`1\mathrm{GeV}`$.
## 5 Acknowledgment
We thank S. Brodsky and B. A. Li for illuminating discussions. We thank M. Peskin for pointing out Ref. to us. This work is partially supported by U.S. DOE grant No. DE-FG05-84ER40154 and NSF grant No. 9722073.
# On the inability of Comptonization to produce the broad X-ray iron lines observed in Seyfert nuclei
## 1. Introduction
The X-ray study of Seyfert nuclei and other types of active galactic nuclei (AGN) has been energized for the past few years by the observation of relativistically broad iron K$`\alpha `$ lines in their X-ray spectra (Tanaka et al. 1995; Nandra et al. 1997; Reynolds 1997). In particular, the Seyfert galaxy MCG$`-`$6-30-15 has become an important testing ground for models of broad iron line formation. A long observation of MCG$`-`$6-30-15 by the Advanced Satellite for Cosmology and Astrophysics (ASCA) revealed a high signal-to-noise broad iron line with a velocity width of $`10^5\mathrm{km}\mathrm{s}^{-1}`$ and a profile which is skewed to low energies (Tanaka et al. 1995). The excitement stirred by these studies is due to the widely held belief that the iron lines originate from the surface layers of an accretion disk which is in orbit about a supermassive black hole, and that the line width and profile provide a direct probe of the velocity field and strong gravitational field within a few Schwarzschild radii of the black hole. Models of line emission from the inner regions of a black hole accretion disk (e.g., Fabian et al. 1989; Laor 1991) fit the observed line profiles well.
The suggestion that we are observing the immediate environment of an accreting supermassive black hole is a bold one and certainly warrants a critical examination. In this spirit, Fabian et al. (1995; hereafter F95) examined a number of alternative hypotheses for the origin of these broad iron lines including models in which the line is produced in an outflow or jet, and models in which the line is intrinsically narrow (or even absent) and a complex underlying continuum mimics the broad line. Both of these classes of models were found to be unphysical or did not reproduce the observed spectrum.
Another alternative model, first proposed by Czerny, Zbyszewska & Raine (1991) but also considered by F95, is one in which the iron line is intrinsically narrow (i.e., emitted in slowly moving material which is very far from a compact object) and then broadened to the observed profile by Compton downscattering in matter that surrounds the source of line photons. F95 rejected this model on the basis that the Comptonizing cloud must have a radius of $`R<10^{14}\mathrm{cm}`$ in order to maintain the required high ionization state and that, with such a small radius, gravitational effects from a central $`10^7\mathrm{M}_{\odot }`$ black hole would be important anyway for determining the line profile. The principal aim of F95 was to demonstrate the need to include strong gravity in any model of the iron line, so they terminated their chain of reasoning at that point. The question remains, however, as to whether Compton downscattering has a significant effect on the line profile or whether we can interpret iron line observations in terms of naked accretion disk models.
Misra & Kembhavi (1998) and Misra & Sutaria (1999; hereafter MS99) have recently developed the Comptonization model further. In their current model, they suggest that a cloud with optical depth $`\tau =4`$ and temperature $`kT\lesssim 0.5\mathrm{keV}`$ surrounds the central engine. The upper limit to the temperature of the Compton cloud comes from the fact that the iron K$`\alpha `$ line photons need to be primarily downscattered, rather than upscattered, in order to reproduce the observed line profile. The central engine produces the continuum emission which keeps the cloud ionized, and a narrow iron line which is Compton broadened to the observed width. They show that the resultant line profiles can be brought into good agreement with the ASCA observations.
A direct prediction of the Comptonization model (F95, MS99) is that the multiple Compton scatterings should produce a break in the spectrum of the power-law continuum radiation at $`E_{\mathrm{br}}\approx m_\mathrm{e}c^2/\tau ^2`$ (i.e., $`30`$–$`40\mathrm{keV}`$). Recently, it has been reported that BeppoSAX (Guainazzi et al. 1999) observations constrain the location of the continuum break to be at energies greater than $`100\mathrm{keV}`$, thereby arguing against the Comptonization model (Misra 1999). However, a robust determination of the continuum break is not completely straightforward since it depends upon the parameters assumed for the Compton reflection component (e.g., see Lee et al. 1999). Thus, while the lack of a spectral break at 30–40 keV remains the most compelling argument against the Comptonization model, it is interesting to consider constraints on the Comptonization model that are independent of a continuum spectral break.
In this paper, we apply a number of observational constraints to the MS99 model. We focus on the case of the iron line in MCG$`-`$6-30-15, but also address the line in NGC 3516, the other high signal-to-noise case of a relativistically broad line. We show that the continuum source in MCG$`-`$6-30-15 required by the constrained model violates thermodynamic limits (i.e., the “black body” limit). We also show that only a very small region of parameter space is open to the Comptonization model in the case of NGC 3516. Hence, we conclude that the Compton downscattering model is not a viable model for the broad iron lines in one, and possibly both, of these sources.
## 2. Constraints from continuum variability in MCG$``$6-30-15
The iron line in MCG$`-`$6-30-15 has been observed to change flux and profile on timescales of $`10^4\mathrm{s}`$ (Iwasawa et al. 1996, 1999). This is the shortest timescale on which detailed line changes can currently be probed, and there may indeed be line variability on shorter timescales. MS99 note that such variability is consistent with the line originating from a Compton cloud of size $`R\sim 10^{14}\mathrm{cm}`$.
However, in the Comptonization model, the continuum photons also pass through the same Comptonizing medium as the iron line photons. Thus, continuum variability can be used to place much tighter constraints on the size of the cloud. Any variability of the central source would be smeared out as the photons random walk through the cloud on a timescale of
$$t_{\mathrm{MS}}\sim \frac{R\tau }{c}.$$
(1)
Appreciable continuum variability in MCG$`-`$6-30-15 is observed on timescales down to $`t_{\mathrm{obs}}\sim 100\mathrm{s}`$ (Reynolds et al. 1995; Yaqoob et al. 1997). Since we must have $`t_{\mathrm{obs}}\gtrsim t_{\mathrm{MS}}`$, an upper limit on the Compton cloud is $`R_{\mathrm{cloud}}=10^{12}\mathrm{cm}`$, two orders of magnitude less than the size assumed in MS99. Assuming a geometrically thick cloud and solar abundances, the density of the material is $`n_\mathrm{H}\approx 5\times 10^{12}\mathrm{cm}^{-3}`$.
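A minimal numerical check of this light-crossing argument (a sketch, assuming the MS99 Thomson depth $`\tau =4`$ and a geometrically thick cloud):

```python
# Continuum photons random-walk out of the cloud on t_MS ~ R*tau/c (Eq. 1),
# so observed variability on t_obs limits the cloud radius R.
c = 3.0e10           # cm/s
tau = 4.0            # Thomson depth of the Compton cloud (MS99)
sigma_T = 6.652e-25  # cm^2, Thomson cross section

t_obs = 100.0        # s, shortest observed continuum variability in MCG-6-30-15
R_max = c * t_obs / tau
n_e = tau / (sigma_T * R_max)   # electron density of a geometrically thick cloud

print(f"R_cloud < {R_max:.1e} cm")   # ~7.5e11 cm, i.e. ~1e12 cm
print(f"n_e ~ {n_e:.1e} cm^-3")      # ~6e12 cm^-3, cf. n_H ~ 5e12 cm^-3
```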
In assessing the robustness of this constraint, it should be noted that the iron line in MCG$`-`$6-30-15 is always observed to be broad (although the width of the line does indeed vary, e.g. Iwasawa et al. 1996), and the source is always observed to vary its flux with a temporal power spectrum that extends down to 100 s timescales (Lee et al. 1999b; Nowak & Chiang 1999; Reynolds 1999). Thus, it is difficult to support a model in which the Compton cloud is sometimes present (producing a broad line and a slowly varying continuum) and sometimes absent (producing a narrow line and a rapidly varying continuum).
## 3. The Compton temperature and the black-body limit
In the situation postulated by the MS99 model, the temperature of the Compton cloud will be locked to the Compton temperature of the (local) radiation field. We model the continuum spectrum of the central source as the superposition of a black body spectrum (which may represent thermal emission from an accretion disk) and a power-law spectrum with energy index $`\alpha =1`$ which extends up to hard X-ray energies (which may be identified as accretion disk photons that have been subjected to multiple Compton upscattering by an accretion disk corona).
The flux at the inner edge of the Compton cloud is then given by
$$F_\nu =\frac{h\nu ^3L_{\mathrm{bb}}}{2\sigma _{\mathrm{SB}}T^4c^2R_{\mathrm{in}}^2(\mathrm{exp}(h\nu /kT)-1)}+\frac{L_{\mathrm{pl}}f(\nu )}{4\pi R_{\mathrm{in}}^2\mathrm{\Xi }},$$
(2)
where $`R_{\mathrm{in}}`$ is the inner radius of the Compton cloud, $`L_{\mathrm{bb}}`$ is the luminosity of the black body component, $`\sigma _{\mathrm{SB}}=5.67\times 10^{-5}\mathrm{erg}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}\,\mathrm{K}^{-4}`$ the Stefan-Boltzmann constant, $`L_{\mathrm{pl}}`$ is the luminosity in the power-law component, $`f(\nu )=\nu ^{-1}`$ in the range $`\nu _{\mathrm{min}}<\nu <\nu _{\mathrm{max}}`$ (and zero elsewhere), and $`\mathrm{\Xi }`$ is given by
$$\mathrm{\Xi }=\mathrm{ln}\left(\frac{\nu _{\mathrm{max}}}{\nu _{\mathrm{min}}}\right).$$
(3)
Guided by the hard X-ray observations of MCG$`-`$6-30-15 (e.g., Lee et al. 1999), the parameters describing the power-law component are fixed to have the following values:
$`h\nu _{\mathrm{min}}`$ $`=`$ $`0.1\mathrm{keV},`$ (4)
$`h\nu _{\mathrm{max}}`$ $`=`$ $`50\mathrm{keV},`$ (5)
$`L_{\mathrm{pl}}`$ $`=`$ $`5\times 10^{43}\mathrm{erg}\,\mathrm{s}^{-1}`$ (6)
The resulting Compton temperature is given by
$$T_\mathrm{C}=\frac{1}{1+\mathcal{R}}\left(\mathcal{R}T+\frac{h(\nu _{\mathrm{max}}-\nu _{\mathrm{min}})}{4k\mathrm{\Xi }}\right),$$
(7)
where $`\mathcal{R}`$ is the ratio of the black body luminosity to the power-law luminosity:
$$\mathcal{R}=\frac{L_{\mathrm{bb}}}{L_{\mathrm{pl}}}.$$
(8)
The line corresponding to a Compton temperature of $`T_\mathrm{C}=0.5\mathrm{keV}`$ on the $`(\mathcal{R},T)`$-plane is shown on Fig. 1a, and the forbidden region of parameter space (giving $`T_\mathrm{C}>0.5\mathrm{keV}`$) is shaded with lines of positive gradient.
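As a numerical illustration of Eq. (7) with the power-law parameters of Eqs. (4)–(6) (a sketch; the black body temperatures in the loop are chosen for illustration):

```python
# Compton temperature of the composite spectrum, Eq. (7), and the
# black-body-to-power-law ratio R needed to keep the cloud below 0.5 keV.
import math

E_min, E_max = 0.1, 50.0                # keV, power-law band, Eqs. (4)-(5)
Xi = math.log(E_max / E_min)            # Eq. (3)
kT_pl = (E_max - E_min) / (4.0 * Xi)    # Compton temperature of the power law alone

def kT_C(R, kT_bb):
    """Eq. (7): R = L_bb/L_pl, kT_bb = black body temperature in keV."""
    return (R * kT_bb + kT_pl) / (1.0 + R)

print(f"Power law alone: kT_C = {kT_pl:.2f} keV")        # ~2.0 keV
for kT_bb in (0.0, 0.1, 0.3):                            # keV
    R_req = (kT_pl - 0.5) / (0.5 - kT_bb)                # ratio giving kT_C = 0.5 keV
    print(f"kT_bb = {kT_bb:.1f} keV -> need R > {R_req:.1f}")
```

Even for an arbitrarily cold black body, a ratio $`\mathcal{R}>3`$ is required, which will be relevant to the discussion in Section 4.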
For completeness, it should be noted that the above expression for the Compton temperature is only strictly valid due to the soft nature of our spectrum. The Compton temperature depends, of course, on the form of the radiation field inside the cloud. Ignoring downscattering, this field is greater than the external radiation field by a factor of $`\tau `$. For the high-energy radiation ($`h\nu \sim 50\mathrm{keV}`$), $`\tau `$ has an energy dependence due to Klein-Nishina corrections, thereby affecting the Compton temperature. The neglect of downscattering is also invalid at these energies. However, these corrections to the Compton temperature have a negligible effect in our case.
The ASCA observation shows no evidence for a soft excess component in MCG$`-`$6-30-15 across the entire well-calibrated spectral range of the solid-state imaging spectrometers (SIS; 0.6–10 keV). Thus, we impose the condition that the black-body flux at 0.6 keV is less than the power-law flux at the same energy:
$$\frac{h\nu ^3L_{\mathrm{bb}}}{2\sigma _{\mathrm{SB}}T^4c^2R_{\mathrm{in}}^2(\mathrm{exp}(h\nu /kT)-1)}<\frac{L_{\mathrm{pl}}f(\nu )}{4\pi R_{\mathrm{in}}^2\mathrm{\Xi }}$$
(9)
The region on the $`(\mathcal{R},T)`$-plane forbidden by this constraint is shaded with lines of negative gradient in Fig. 1a.
Finally, we make the observation that there is a fundamental limit to the black body luminosity which is imposed by thermodynamics:
$$L_{\mathrm{bb}}<4\pi R_{\mathrm{max}}^2\sigma _{\mathrm{SB}}T^4$$
(10)
where $`R_{\mathrm{max}}`$ is the maximum allowed size of the black body source. Since the continuum source is hypothesized to be interior to the Compton cloud, we must have $`R_{\mathrm{max}}\le R_{\mathrm{cloud}}`$. The region of the $`(\mathcal{R},T)`$-plane forbidden by this constraint is shown in solid-shade in Fig. 1a.
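Evaluating Eq. (10) at the variability-limited radius shows where this constraint bites; a sketch assuming $`R_{\mathrm{max}}=10^{12}\mathrm{cm}`$ from Section 2 (the limit is restrictive at low temperatures, precisely where the no-soft-excess condition is weakest):

```python
# Thermodynamic ("black body") limit, Eq. (10), at R_max = 1e12 cm.
import math

sigma_SB = 5.67e-5    # erg cm^-2 s^-1 K^-4
kB_keV = 8.617e-8     # keV per K
R_max = 1.0e12        # cm, from the variability argument of Section 2
L_pl = 5.0e43         # erg/s, Eq. (6)

for kT in (0.05, 0.1, 0.2):                 # keV
    T = kT / kB_keV                         # K
    L_bb_max = 4.0 * math.pi * R_max**2 * sigma_SB * T**4
    print(f"kT = {kT:.2f} keV: L_bb,max = {L_bb_max:.1e} erg/s, "
          f"max L_bb/L_pl = {L_bb_max / L_pl:.1f}")   # ~1.6, ~26, ~410
```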
We see that applying these three constraints eliminates all regions of the $`(\mathcal{R},T)`$-plane. One must conclude that the Compton cloud model discussed by Misra & Kembhavi (1998) and MS99 is not valid in the case of MCG$`-`$6-30-15.
NGC 3516 also displays a strong broad iron line that has been observed at high signal-to-noise with ASCA (Nandra et al. 1999). We have also examined constraints on the Comptonization model for this iron line. Continuum variability in this object is observed on timescales down to $`\sim 2000\mathrm{s}`$ (Edelson & Nandra 1998; K. Nandra, private communication), giving a maximum size of $`R_{\mathrm{cloud}}\approx 2\times 10^{13}\mathrm{cm}`$ for the Comptonizing cloud, rather larger than in the case of MCG$`-`$6-30-15. Also, BeppoSAX observations fail to see a soft excess in the X-ray spectrum all of the way down to $`0.2\mathrm{keV}`$ (Stirpe et al. 1998). Noting that $`L_{\mathrm{pl}}\approx 1\times 10^{44}\mathrm{erg}\,\mathrm{s}^{-1}`$, we produce the constraint diagram shown in Fig. 1b. It is seen that these constraints eliminate all but a very small region of parameter space. Thus, although the broad line in NGC 3516 could in principle be explained with the Comptonization model, the amount of fine tuning necessary to reproduce the line parameters makes the model improbable in this case.
## 4. Discussion
It should be stressed that we have used conservative parameters in our assessment of these observational constraints. In particular, we assume that the power-law component of the continuum emission possesses an energy index of $`\alpha =1`$ (corresponding to a photon index of $`\mathrm{\Gamma }=2`$) and a high energy cutoff of $`50\mathrm{keV}`$. In fact, the overall X-ray spectrum is harder than this (especially once the Compton reflection component is accounted for) and the high energy cutoff may well occur at rather higher energies. Either of these effects will raise the Compton temperature of the power-law component and require an even cooler black body component in order to cool the Compton cloud below the $`0.5\mathrm{keV}`$ limit. It should also be noted that we have ignored any infra-red emission from the continuum source. Due to the high densities of the matter in the Compton cloud, IR emissions redwards of $`10\mu \mathrm{m}`$ will be free-free absorbed and act to heat the cloud rather than Compton cool it. Again, the neglect of the IR emissions is a conservative assumption for our purposes.
There is another, independent, problem faced by the Compton cloud model: it is very difficult to maintain the required ionization state. F95 treated this problem by considering the required cloud size necessary to achieve some critical ionization parameter $`\xi _\mathrm{c}\equiv L_{\mathrm{ion}}/nR^2`$. According to F95, for the AGN spectrum of Mathews & Ferland (1987), $`\xi _\mathrm{c}=10^4\mathrm{erg}\,\mathrm{cm}\,\mathrm{s}^{-1}`$ can be considered the point at which a photoionized plasma becomes completely ionized. Using the observed luminosity of MCG$`-`$6-30-15, they deduced that the cloud must have a size $`R<10^{14}\mathrm{cm}`$ in order to achieve at least this critical ionization parameter. As we will now show, this is a very conservative argument and, in fact, ionization balance imposes much more severe limits on the cloud size.
While the formal ionization parameter may be very high, the very soft continuum spectrum postulated by MS99 may still have trouble fully ionizing the iron throughout the whole cloud. To see this, note that all continuum photons capable of ionizing hydrogen like iron (Fe xxvi) reside in the power law component of the continuum. The continuum source in MCG$``$6-30-15 emits Fe xxvi ionizing photons at a rate
$$N_{\mathrm{ion}}\approx \frac{L_{\mathrm{pl}}}{E_{\mathrm{ion}}\mathrm{\Xi }},$$
(11)
where $`E_{\mathrm{ion}}=9.3\mathrm{keV}`$ is the ionization potential of Fe xxvi. This evaluates to $`N_{\mathrm{ion}}\approx 3\times 10^{50}\mathrm{s}^{-1}`$. The radiative recombination rate of the postulated Compton cloud, on the other hand, is given by
$$N_{\mathrm{rec}}\approx \frac{4\pi }{3}R^3n^2A_{\mathrm{rad}}\left(\frac{T}{10^4\mathrm{K}}\right)^{-X_{\mathrm{rad}}}$$
(12)
where the coefficients $`A_{\mathrm{rad}}`$ and $`X_{\mathrm{rad}}`$ are given by Shull & van Steenberg (1982). For a temperature of $`kT=0.5\mathrm{keV}`$ and $`R=10^{12}\mathrm{cm}`$, this gives $`N_{\mathrm{rec}}=3\times 10^{50}\mathrm{s}^1`$. Thus, there are just enough ionizing photons present in the entire power law tail to ionize the hydrogen-like iron. If the temperature of the Compton cloud is below 0.5 keV, or the radius of the cloud is larger<sup>1</sup><sup>1</sup>1Note that the quantity $`nR`$ is proportional to the optical depth of the cloud and so is fixed by the width of the broad iron line., it will be impossible to photoionize the cloud. Very large iron edges would then be present in the observed X-ray spectrum, contrary to observations. Thus, ionization balance imposes a size limit of $`R10^{12}\mathrm{cm}`$, independently of continuum variability constraints.
Finally, we address whether there are reasonable modifications that can be made to the MS99 scenario that will avoid the constraints imposed in this paper. There are three such modifications that we should consider. Firstly, if the geometry is such that the X-ray continuum source is viewed directly (rather than through the Compton cloud), one might imagine that the size of the Compton cloud and the X-ray continuum variability would be decoupled, thereby relaxing the constraints discussed above. An example of such a geometry is if the Compton cloud forms a torus around the central X-ray source. In such a geometry, the X-ray continuum source illuminates and ionizes the observed face of the Compton cloud and powers iron line fluorescence from an optical depth of $`\tau \approx 4`$ into the cloud. However, in this case, one would expect ionized iron lines (from the ionized zones that overlay the near-neutral zones in the Compton cloud) rather than the observed cold iron lines. Also, the illuminated surface of the Compton cloud, which must be highly ionized so as not to be a strong narrow iron line emitter, would act as a Compton mirror and smear out the observed continuum variability, even though the continuum source is viewed directly. Of course, any such modification to the basic Comptonization model in which the Compton cloud is allowed to be bigger than $`R\approx 10^{12}\mathrm{cm}`$ must be subject to the ionization problem described above.
Secondly, a large region of parameter space would open up if the Compton cloud experienced a different soft continuum to that observed (i.e. if the soft excess can be ‘hidden’ from view). Noting that the black body component must scatter though the same parts of the Compton cloud that broadens the iron line (in order to Compton cool it), one concludes that the black body photons and broad iron lines photons will follow very similar paths through the system. Hence, it is impossible to hide the soft excess emission from view in a system in which we observe a Compton broadened iron line.
Thirdly, the black body limit can be bypassed if the soft continuum source is placed outside of the Compton cloud. While it is difficult to construct rigorous arguments against this case, we consider that placing a powerful ($`L_{\mathrm{bb}}/L_{\mathrm{pl}}>3`$) soft continuum source at large distances from the central hard X-ray continuum source is an ad-hoc solution.
## 5. Conclusions
In this work, we have constrained the Compton cloud model for the broad iron line in both MCG$`-`$6-30-15 and NGC 3516 by considering two observational constraints which are independent of the detection of a spectral break in the continuum spectrum: the continuum variability timescale and the absence of an observed soft excess. We have then demonstrated that the constrained model requires a continuum source which violates the black body limit. We also point out the difficulty of photoionizing the Compton cloud to the required levels. Thus, we rule out the Comptonization model for the broad iron line in MCG$`-`$6-30-15, and show that fine tuning is required in order for the model to explain the line in NGC 3516. We conclude that the combination of relativistic Doppler shifts and gravitational redshifts still provides the best explanation for the broad iron lines seen in AGN.
We are indebted to Jim Chiang, Andrew Fabian, Mike Nowak, and Firoza Sutaria for insightful discussions throughout the course of this work. We are also grateful to the anonymous referee for several useful suggestions. We thank the Aspen Center for Physics for their hospitality during the X-ray Astrophysics Workshop in August 1999, at which time this work was started. CSR appreciates support from Hubble Fellowship grant HF-01113.01-98A. This grant was awarded by the Space Telescope Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA under contract NAS 5-26555. We also appreciate support from NASA under LTSA grant NAG5-6337 and the RXTE guest observer grant NAG5-7339 as well as Deutsche Forschungsgemeinschaft grant Sta 173/22.
# Static Electric Dipole Polarizabilities of Na Clusters
## 1 Introduction
The measurement of the static electric polarizability of sodium clusters knight and its interpretation in terms of the jellium model ekardt was one of the triggers for the research activities that today form the field of modern metal cluster physics. The first theoretical studies were followed by several others with different methods and aims: density functional calculations using pseudopotentials moullet1 ; moullet2 or taking all electrons into account guan aimed at a quantitative description of the experimentally observed effects, semi-classical approaches brack89 focused on size-dependent trends, and the static electric polarizability served to test and compare theoretical concepts sicforpol ; guetxc ; chelikowsky . Recently, the field received new inspiration from a second experimental determination of the static polarizability of small, uncharged Na clusters rayane .
Whereas a qualitative understanding of the experiments can be obtained with relatively simple models, a quantitative theoretical determination of the polarizability requires knowledge of the ionic and electronic configurations of the clusters. Great effort has been devoted in the past to determine these moullet2 ; martinsalt ; koutecky ; roethlis . However, taking all ionic and electronic degrees of freedom into account in a three-dimensional calculation is a task of considerable complexity. Therefore, most of these studies were restricted to clusters with not more than nine atoms. To reduce the computational expense, approximations for including ionic effects were developed saps ; spiegelmann ; ppstoer ; manninenhueckel ; bmcaps ; newpp . A second problem, however, is the great number of close-lying isomers that are found in sodium clusters. This effect is especially pronounced when stabilization of an overall shape through electronic shell effects is weak, i.e. for the “soft” clusters that are found between filled shells. In the present work we present calculations for the static electric polarizability that include the ionic structure in a realistic way. We take into account a great number of isomers for clusters with up to 20 atoms, especially for the soft clusters that fill the second electronic shell. The theoretical concepts that we used in this study are introduced in section 2, where we also discuss the relevant cluster structures. In section 3 we present our results and compare with other calculations and experimental work. Our conclusions are summarized in section 4.
## 2 Theoretical concepts
The starting point for the theoretical determination of the polarizability of a cluster is the calculation of the ionic and electronic configuration of the ground-state and close lying isomers. In the present work, this was done in two steps. First, we calculated low-energy ionic geometries for a wide range of cluster sizes with an improved version newpp of the “Cylindrically Averaged Pseudopotential Scheme” (CAPS) bmcaps . In CAPS the ions are treated fully three-dimensionally, but the valence electrons are restricted to axial symmetry. The cluster ground state is found by simultaneously minimizing the energy functional with respect to the set of ionic positions (simulated annealing) and the valence-electron density. For the exchange and correlation energy we used the local-density approximation (LDA) functional of Perdew and Wang pw , and for the pseudopotential we employed the recently developed phenomenological smooth-core potential that reproduces low temperature bulk and atomic properties newpp . Detailed comparisons with ab initio calculations have shown newpp that CAPS predicts ionic geometries of sodium clusters rather accurately since truly triaxial deformations are rare. Furthermore, in a second step we performed fully three-dimensional (3D) Kohn-Sham (KS) calculations to check the ordering of isomers and to calculate polarizabilities without axial restriction on the electrons, and also included configurations from 3D geometry optimizations into our analysis as discussed below.
Fig. 1 schematically depicts the most important ionic geometries for neutral clusters with even electron numbers between 2 and 20. (We have calculated the polarizabilities also for many further and higher isomers which, however, are not shown in Fig. 1 for the sake of clarity. They were omitted from the discussion since they do not lead to qualitatively different results.) For the small clusters $`\mathrm{Na}_2`$, $`\mathrm{Na}_4`$ and $`\mathrm{Na}_6`$, many other theoretical predictions have been made moullet1 ; moullet2 ; rayane ; koutecky ; roethlis , and our geometries are in perfect agreement with them. In addition, due to the construction of the pseudopotential newpp , the bond lengths are close to the experimental ones, as e.g. seen in the dimer, where our calculated bond length is $`5.78a_0`$ and the experimental one expdimer is $`5.82a_0`$. For $`\mathrm{Na}_8`$ and $`\mathrm{Na}_{10}`$, our results are in agreement with 3D density functional calculations moullet1 ; moullet2 ; roethlis . For $`\mathrm{Na}_{12}`$, we do not know of any ab initio calculations. Therefore, besides two low-energy configurations from CAPS \[(a) and (b)\], we also included a locally re-optimized low-energy geometry from a 3D, Hückel model calculation spiegelmann in our analysis (c). Our 3D calculations confirm our CAPS results and find structures (a) and (b) quasi degenerate with a difference in total energy of 0.05 eV, whereas structure (c) is higher by 0.4 eV. Two of the three geometries considered for $`\mathrm{Na}_{14}`$ \[(b) and (c)\] were also found very similar in 3D Hückel model calculations spiegelmann ; manninenhueckel , and both CAPS and the 3D KS calculations find all of them very close in energy. For $`\mathrm{Na}_{16}`$, we find structure (a) as the CAPS ground state, and in our 3D calculations structure (b) is quasi degenerate with (a), whereas structures (c) and (d) are higher by 0.08 eV and 0.5 eV. Due to their very different overall shapes, these isomers span a range of what can be expected for the polarizability. For $`\mathrm{Na}_{18}`$ and $`\mathrm{Na}_{20}`$, our structures are again in close agreement with the 3D density functional calculation of roethlis , and all three structures are quasi degenerate.
The static electric polarizability was calculated in two different ways. The first is based on a collective description of electronic excitations. It uses the well known equality
$$\alpha =2m_{-1},$$
(1)
which relates the negative first moment
$$m_{-1}(𝐐)=\int _0^{\infty }E^{-1}S_𝐐(E)\,dE=\underset{\nu }{\sum }(\hbar \omega _\nu )^{-1}\left|\langle \nu |𝐐|0\rangle \right|^2$$
(2)
of the strength function
$$S_𝐐(E)=\underset{\nu }{\sum }\left|\langle \nu |𝐐|0\rangle \right|^2\delta (E_\nu -E_0-E),$$
(3)
to the static electric polarizability $`\alpha `$ in the direction specified by the external (dipole) excitation operator $`𝐐`$.
In the evaluation of the strength function, the excited states $`|\nu \rangle `$ are identified with collective excitations. A discussion of this approach can be found in lrpa . The collective calculations were carried out using the cylindrically averaged densities and the “clamped nuclei approximation” moullet2 , i.e. the ionic positions were taken to be the same with and without the dipole field. We have also checked this widely used approximation in the context of our studies and find it well justified, as discussed below.
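A minimal sketch of how Eqs. (1)–(3) are used in practice (the mode energies and strengths below are hypothetical placeholders, not values from our calculations):

```python
# Static polarizability from a discrete set of excitations via alpha = 2*m_{-1}.
def polarizability(excitations):
    """excitations: list of (hbar_omega, |<nu|Q|0>|^2) pairs, Eq. (2)."""
    return 2.0 * sum(strength / energy for energy, strength in excitations)

# hypothetical two-mode example (energies in eV, strengths in e^2 a_0^2):
modes = [(2.8, 5.0), (3.1, 1.5)]
print(f"alpha = {polarizability(modes):.2f} e^2 a_0^2 / eV")
```

Because each mode contributes with the inverse of its energy, low-lying excitations dominate the sum; this point will matter in the comparison of Section 3.1.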
The static polarizability can also be calculated directly from the derivative of the induced dipole moment $`𝝁`$ in the presence of an external electric dipole field $`𝐅`$ (“finite field method”):
$$\alpha _{ij}=\frac{\mu _j(+F_i)-\mu _j(-F_i)}{2F_i},\qquad i,j=x,y,z,$$
(4)
where
$$\mu _j(𝐅)=-e\int r_j\,n(𝐫,𝐅)\,d^3r+eZ\underset{𝐑}{\sum }R_j$$
(5)
for ions with valence $`Z`$. Here one has to make sure that the numerically applied finite dipole field $`𝐅`$ is small enough to be in the regime of linear response, but that it is on the other hand large enough to give a numerically stable signal. We have carefully checked this and found that the field strengths used, between $`0.00001e/a_0^2`$ and $`0.0005e/a_0^2`$, meet both requirements. Applied to the axial calculations, this approach allows one to obtain the polarizability in the $`z`$-direction. By employing this method with the 3D KS calculations we have checked the influences of the axial averaging and the collective model on the polarizability and found that the z-polarizabilities from the axial and the 3D finite-field calculations agree within 1% on the average for the low-energy isomers. This shows that the axial averaging is a good approximation for the clusters discussed here. The orientation of our coordinate system was chosen such that the z-axis is in that principal direction of the tensor of inertia in which it deviates most from its average value. The average static electric polarizability
$$\overline{\alpha }:=\frac{1}{3}\mathrm{tr}(\alpha )$$
(6)
of course is independent of the choice of coordinate system.
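A schematic of the finite-field procedure of Eqs. (4)–(6) (a sketch: the `dipole` callable stands in for a full self-consistent Kohn-Sham calculation, and the toy linear-response values are hypothetical):

```python
# Central-difference evaluation of the polarizability tensor, Eq. (4).
import numpy as np

def polarizability_tensor(dipole, F=1e-4):
    """dipole: callable mapping a field vector (e/a_0^2) to mu (e*a_0)."""
    alpha = np.zeros((3, 3))
    for i in range(3):
        dF = np.zeros(3)
        dF[i] = F
        alpha[i, :] = (dipole(dF) - dipole(-dF)) / (2.0 * F)   # Eq. (4)
    return alpha

# toy linear-response model standing in for the KS solver:
toy = lambda field: np.array([120.0, 120.0, 140.0]) * field
alpha = polarizability_tensor(toy)
print("mean polarizability:", np.trace(alpha) / 3.0, "a_0^3")  # Eq. (6)
```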
## 3 Results
### 3.1 Comparison of different theoretical results
Since all density functional calculations that we know of agree on the geometry of the smallest sodium clusters, these clusters can serve as test cases to compare different theoretical approaches. In Table 1 we have listed the averaged static dipole polarizability as obtained in different calculations, together with the value obtained in the recent experiment of Rayane et al. rayane . All calculations reproduce the experimental trend and give the correct overall magnitude. But also, all calculations underestimate the polarizability. The magnitude of this underestimation, however, varies considerably for the different approaches. Whereas our results are closest to the experiment and close to the theoretical ones of Ref. rayane , with the largest difference to the experiment being 8 % for $`\mathrm{Na}_8`$, a difference of 27 % is found for this cluster in the calculation based on the ab initio Bachelet, Hamann, Schlüter (BHS) pseudopotential moullet2 . A good part of this difference can be explained by comparing the bond lengths of the clusters. The BHS pseudopotential considerably underestimates the bond lengths moullet2 , leading to a higher electron density and a lower polarizability. Our empirical smooth-core pseudopotential, on the other hand, was constructed to reproduce the experimental low-temperature bulk bond length (together with the compressibility and the atomic $`3s`$-level) when used with the LDA, and correspondingly results in a higher polarizability, in better agreement with experiment. It is further interesting to note that also the polarizabilities calculated with the empirical Bardsley potential moullet1 , which was constructed to reproduce atomic energy levels, are noticeably higher than the BHS-based values. This shows that the cluster polarizability is also sensitive to atomic energy levels, and the fact that our values are closest to the experiment thus is a natural consequence of the combination of correct atomic energy levels and bond lengths.
From comparison with the calculations that went beyond the LDA guan ; rayane , it becomes clear, however, that the empirical pseudopotentials by construction “compensate” some of the errors that are a consequence of the use of the LDA. Therefore, it would be dangerous to argue that the inclusion of gradient corrections, which have been shown to increase the polarizability, could bring our calculated values into agreement with experiment: going beyond the LDA but keeping the empirical LDA pseudopotentials could lead to a double counting of effects. We therefore conclude that, on the one hand, a considerable part of the earlier observed differences between theoretical and experimental polarizabilities can be attributed to effects associated with errors in the bond lengths or atomic energy levels, but on the other hand, further effects must contribute to the underestimation with respect to experiment. We will come back to this second point below.
In Table 2 we have listed the polarizabilities of $`\mathrm{Na}_2`$ to $`\mathrm{Na}_{20}`$ for the geometries shown in Fig. 1. The left half gives the polarizability as computed from the 3D electron density with the finite-field method, and the right half lists the values obtained in the axially averaged collective approach. For the clusters up to $`\mathrm{Na}_8`$, the two methods agree well: the differences in the averaged polarizabilities are less than 1% for $`\mathrm{Na}_2`$ and $`\mathrm{Na}_8`$, and 3% for $`\mathrm{Na}_4`$ and $`\mathrm{Na}_6`$. This shows that the collective description is rather accurate, which is remarkable if one recalls that we are dealing with only very few electrons. Beyond $`\mathrm{Na}_8`$, the differences are 6% on average, which is still fair, but clearly larger. This looks counter-intuitive at first sight, because the collective description should become better for larger systems. However, for $`N>8`$ an increasing number of particle-hole states appears close to the Mie plasmon resonance lrpa , leading to increasing fragmentation of the collective strength. $`m_{-1}`$, and thus $`\alpha `$, is sensitive to energetically low-lying excitations, since their energies enter the denominator in Eq. 2, and this can lead to an underestimation of the polarizability.
Comparing the polarizabilities of clusters with the same number of electrons but different geometries shows the influence of the overall shape of the cluster. For $`\mathrm{Na}_{14}`$, e.g., isomers (a) and (c) have a valence electron density which is close to prolate, whereas that of (b) is more oblate. The averaged polarizability of the two prolate isomers is equal, although their ionic geometries differ. The oblate isomer, however, has a noticeably higher averaged polarizability. This is what one expects, because for oblate clusters there are two principal directions with a low and one with a high polarizability, whereas for prolate clusters the reverse is true. The fact that different ionic geometries can lead to very similar averaged polarizabilities is also seen for $`\mathrm{Na}_{12}`$. It thus becomes clear that, contrary to what was believed earlier moullet1 , one cannot necessarily distinguish details of the ionic configuration by comparing theoretical values to experimental data that measure the averaged polarizability.
### 3.2 Comparison with experiments
Fig. 2 shows $`\overline{\alpha }`$ for our ground-state structures as obtained in the axial collective approach and in the 3D finite-field calculations, in comparison to the two available sets of experimental data. The absolute values for the experiments were calculated from the measured relative values with an atomic polarizability of $`23.6\mathrm{\AA }^3`$ rayane . To guide the eye, the polarizabilities from each set of data are connected by lines. Both experiments and the theoretical data show that, overall, the polarizability increases with increasing cluster size. The polarizability from the axial collective model qualitatively shows the same behavior as the one from the 3D finite-field calculation.
Comparison of the 3D values with the experimental data shows that for the smallest clusters the theoretical and experimental values agree, as discussed before, and the values obtained in the two experiments are comparable up to $`\mathrm{Na}_{10}`$. Beyond $`\mathrm{Na}_{10}`$, the discrepancies between the two experiments become larger, and also the differences between theoretical and experimental polarizabilities increase. For $`\mathrm{Na}_{12}`$, $`\mathrm{Na}_{14}`$, $`\mathrm{Na}_{16}`$ and $`\mathrm{Na}_{18}`$ the experiment of Knight et al. gives lower values than the experiment of Rayane et al., and the calculated averaged polarizability is lower than both experiments for $`\mathrm{Na}_{12}`$, $`\mathrm{Na}_{14}`$, and $`\mathrm{Na}_{16}`$. For $`\mathrm{Na}_{18}`$ the finite-field value obtained for our ground-state structure matches the value measured by Knight et al., and for $`\mathrm{Na}_{20}`$ our ground-state polarizability is very close to the measurement of Rayane et al. In this discussion one must keep in mind, however, that the experimental uncertainty is about $`\pm 2\mathrm{\AA }^3`$ per atom rayane , i.e. the uncertainty in the absolute value increases with the cluster size, as indicated by the error bars in Fig. 2. Comparisons are made easier if the linear growth in $`\overline{\alpha }`$ is scaled away. Therefore, one should rather look at the normalized polarizability
$$\overline{\alpha }^{\mathrm{n}}:=\frac{\overline{\alpha }}{N\alpha _{\mathrm{atom}}},$$
(7)
which is shown in Fig. 3, because it allows trends and details to be identified more clearly.
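For reference, the normalization of Eq. (7) amounts to the following one-liner, with the atomic polarizability value used above:

```python
ALPHA_ATOM = 23.6  # static polarizability of the Na atom in Angstrom^3

def normalized_polarizability(alpha_bar, n_atoms):
    # Eq. (7): divides out the roughly linear growth of alpha with size
    return alpha_bar / (n_atoms * ALPHA_ATOM)
```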
From Fig. 3 it becomes clear that for $`\mathrm{Na}_2`$ to $`\mathrm{Na}_8`$ the trend seen in the two experiments is similar, with one exception: for $`\mathrm{Na}_6`$, the experiment of Rayane et al. finds a noticeably smaller value than the one by Knight et al. Comparison with our theoretical data shows that, although the values of the older experiment are closer to the theory in magnitude for $`\mathrm{Na}_2`$, $`\mathrm{Na}_4`$ and $`\mathrm{Na}_8`$, the trend in our data clearly corresponds to the one seen in the new experiment, since the two curves are parallel. Going from $`\mathrm{Na}_8`$ to $`\mathrm{Na}_{10}`$, both experiments show a steep rise in the polarizability. This rise, due to the shell closing at $`\mathrm{Na}_8`$, is also seen in the theoretical data, but it is less pronounced than in the experiments (as we will discuss below). For $`\mathrm{Na}_{12}`$, a higher $`\overline{\alpha }^{\mathrm{n}}`$ than for $`\mathrm{Na}_{10}`$ is found in the data of Rayane et al., whereas the reverse ordering is seen in the data of Knight et al. Again, our calculations support the finding of the new experiment, and all isomers lead to similar $`\overline{\alpha }^{\mathrm{n}}`$. For $`\mathrm{Na}_{14}`$, both experiments show a decrease. Our prolate ground state and isomer reproduce this trend. That it is the prolate structures that fit the experiment is consistent with the ab initio molecular dynamics calculations of Häkkinen et al. hakkinen . The next step, to $`\mathrm{Na}_{16}`$, again reveals a slight difference between the two experiments: both show an increase compared to $`\mathrm{Na}_{14}`$, but whereas the older experiment sees $`\overline{\alpha }^{\mathrm{n}}`$ smaller for $`\mathrm{Na}_{16}`$ than for $`\mathrm{Na}_{12}`$, the new experiment shows the opposite ordering. Once more, our ground-state structure leads to a polarizability that follows the trend of the new experiment. (The other isomers, however, lead to smaller polarizabilities, and an explanation for the difference between the experiments thus might be that different ensembles of isomers were populated due to slightly different experimental conditions.) Going to $`\mathrm{Na}_{18}`$ leads to a decrease in the polarizability in both experiments. Our calculation shows this decrease, which is a manifestation of the nearby shell closing. But whereas the old experiment actually sees the shell closing at $`\mathrm{Na}_{18}`$ and an increase in the polarizability for $`\mathrm{Na}_{20}`$, the new experiment and our data find an absolute minimum at $`\mathrm{Na}_{20}`$.
A comparison with the polarizability obtained in the spherical jellium model ekardt ; guetxc for $`\mathrm{𝐍𝐚}_\mathrm{𝟖}`$ and $`\mathrm{𝐍𝐚}_{\mathrm{𝟐𝟎}}`$, also indicated in Fig. 3, shows the improvement that is brought about by the inclusion of the ionic structure.
### 3.3 Discussion
As just discussed, our calculations reproduce the fine structure seen in the new experiment. However, there is no obvious explanation for why the results of the two experiments differ private . Also, there is a characteristic change in the magnitude of the difference between our theoretical results and the experimental values of Rayane et al.: whereas the calculated values for $`\mathrm{Na}_2`$ to $`\mathrm{Na}_8`$ and $`\mathrm{Na}_{20}`$ differ on average by only 5% from the new experiment, the open-shell clusters from $`\mathrm{Na}_{10}`$ to $`\mathrm{Na}_{18}`$ show an 18% difference for the ground state. The increase in polarizability when going from $`\mathrm{Na}_8`$ to $`\mathrm{Na}_{10}`$ is considerably underestimated, whereas the following steps in the normalized polarizability are reproduced nearly correctly, i.e. it looks as if the theoretical curve for $`\mathrm{Na}_{10}`$ to $`\mathrm{Na}_{18}`$ should be shifted upwards by a constant. A first suspicion might be that this “offset” could be due to the use of CAPS in the geometry optimization. But it should be noted that the step occurs at $`\mathrm{Na}_{10}`$, and that also $`\mathrm{Na}_{14}`$ and $`\mathrm{Na}_{18}`$ are off by the same amount. Since the low-energy structures of these clusters are well established, as discussed in Section 2, and since also the geometry for $`\mathrm{Na}_{12}`$ from the 3D calculation does not lead to qualitatively different results, we can conclude that the differences are not due to limits in the geometry optimization with CAPS. Also, the neglect of the relaxation of the nuclei in the presence of the electric dipole field has been investigated earlier moullet2 for the small clusters and was shown to be a good approximation. We have cross-checked this result for the test case $`\mathrm{Na}_{10}`$ and find corrections of less than 1%.
An obvious limitation of our approach is the neglect of the core polarization. However, the all-electron calculations of Guan et al. guan treat the core electrons explicitly and do not lead to better agreement with experiment, as discussed in Section 3.1. From this one can already conclude that core polarization cannot account for all of the observed differences. Its effect can be estimated from the polarizability of the sodium cation. Different measurements corepol find values between $`0.179\mathrm{\AA }^3`$ and $`0.41\mathrm{\AA }^3`$, leading to corrections of roughly 1-2% in $`\overline{\alpha }^{\mathrm{n}}`$. Since the core polarizability leads to a shift in $`\overline{\alpha }^{\mathrm{n}}`$ that is the same for all cluster sizes, it contributes to the difference that is also seen for the smallest clusters, but it cannot explain the jump in the difference seen at $`\mathrm{Na}_{10}`$.
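The size of this correction follows from a one-line estimate: adding one core contribution per atom shifts $`\overline{\alpha }`$ by $`N\alpha _{\mathrm{core}}`$, so the relative shift of the normalized polarizability of Eq. (7) is simply $`\alpha _{\mathrm{core}}/\alpha _{\mathrm{atom}}`$, independent of $`N`$:

```python
alpha_atom = 23.6  # Na atom polarizability in Angstrom^3
for alpha_core in (0.179, 0.41):  # measured Na+ values quoted above
    print(f"relative shift: {100 * alpha_core / alpha_atom:.1f} %")
# -> 0.8 % and 1.7 %, i.e. the "roughly 1-2 %" quoted in the text
```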
Another principal limitation of our approach is the use of the LDA. As discussed, e.g., in guan , the LDA can affect the polarizability in different and opposing ways. On the one hand it may lead to an overscreening and thus an underestimation of the polarizability, and early calculations within the spherical jellium model indeed reported that the static polarizability increased when one went beyond the LDA using self-interaction corrections sicforpol . On the other hand, self-interaction corrections can lead to more negative single-particle energies and thus to smaller polarizabilities guan , and Refs. guetxc and ullrich give examples where the overall effect of self-interaction corrections on the optical response is very small. One cannot directly carry the jellium results over to our ionic-structure calculations, because the sharp edge of the steep-wall jellium model can lead to qualitative differences. But in any case it is highly implausible that the LDA affects the clusters from $`\mathrm{Na}_{10}`$ to $`\mathrm{Na}_{18}`$ much more strongly than the other ones, and it should also be kept in mind that the worst indirect effects of the LDA, e.g. the underestimation of bond lengths, are compensated by using our phenomenological pseudopotential.
One might also ponder possible uncertainties in the experimental determination of the polarizabilities. A considerable underestimation could be explained if one assumes that, while passing through the deflecting field, the clusters are oriented such that one always measures the highest component of the polarizability. In that case, we would not have to compare the averaged value to the experiment, but the highest one. One could imagine that the cluster’s rotation is damped, since angular momentum conservation is broken by the external field and energy thus could be transferred from the rotation to internal degrees of freedom (vibrations). The time scale of this energy transfer is not known, but since the clusters spend about $`10^{-4}`$ s in the deflecting-field region, it seems unlikely that there should be no coupling over such a long time. One could further argue that for statistical reasons it is less likely that a larger cluster will lose its orientation again through random-like thermal motion of its constituent ions than a smaller cluster. However, the maximal energy difference between different orientations is very small; for $`\mathrm{Na}_{10}`$, e.g., it is $`0.3\mathrm{K}k_{\mathrm{B}}`$ for the typical field strength applied in the experiment knight . Thus, thermal fluctuations can be expected to wipe out any orientation.
From another point of view, however, the finite temperature explains a good part of the differences that our calculations (and other calculations for the small clusters) show in comparison to the experimental data. Whereas our calculations were done for $`T=0`$, the supersonic nozzle expansion used in the experiment produces clusters with an internal energy distribution corresponding to about 400-600 K hansen ; durgourd . An estimate based on the thermal expansion coefficient of bulk sodium leads to an increase in the bond lengths of about 3%, and a detailed finite-temperature CAPS calculation baerbel for $`\mathrm{Na}_{11}^+`$ at 400 K also shows a bond-length increase of 3%. This is only a lower limit, since in neutral clusters one can expect a larger expansion than in the bulk due to the large surface, and also a larger expansion than for charged clusters. Thus, to get a lower limit on what can be expected from thermal expansion, we have scaled the cluster coordinates by 3% and again calculated the polarizabilities, finding an increase of about 3% for the planar and 5% for the three-dimensional structures. This finding is consistent with the results of Guan et al. Together with the corrections that are to be expected from the core polarizability, this brings our results for the small and the closed-shell clusters into quantitative agreement with the experimental data.
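The expansion step itself is just a uniform rescaling of the ionic coordinates; a minimal sketch is given below, after which the polarizability is recomputed with the finite-field procedure sketched earlier:

```python
import numpy as np

def thermally_expanded(coords, scale=1.03):
    # coords: (n_atoms, 3) array of T=0 ionic positions; uniform scaling
    # stretches every bond length by 3%, the lower-limit expansion
    # estimated above for the 400-600 K beam conditions
    return np.asarray(coords, dtype=float) * scale
```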
## 4 Summary and Conclusion
We have presented calculations of the static electric dipole polarizability for sodium clusters with atom numbers between 2 and 20, covering several low-energy structures for each cluster size beyond $`\mathrm{Na}_{10}`$. By comparing our results to previous calculations for the smallest clusters, we have shown that a pseudopotential which correctly reproduces atomic and bulk properties also improves the static response considerably. We have shown that a collective model for the excited states of sodium clusters, whose validity for the dynamical response was established previously, also works reasonably well for the static response in realistic systems. Over the whole range of cluster sizes studied in the present work, we confirm the fine structure seen in a recent experiment. By comparing the calculated averaged polarizability of different isomers for the same cluster size to the measured polarizability, we showed that completely different ionic geometries can lead to very similar averaged polarizabilities. By considering higher isomers we furthermore took a first step towards taking into account the finite temperature present in the experiment. Our results show that for the open-shell clusters from $`\mathrm{Na}_{10}`$ to $`\mathrm{Na}_{18}`$, even higher-lying isomers do not close the remaining gap between theory and experiment. This shows that it is a worthwhile task for future studies to investigate the influence of finite temperatures on these “soft” clusters explicitly. For $`\mathrm{Na}_2`$ to $`\mathrm{Na}_8`$ and $`\mathrm{Na}_{20}`$, we showed that quantitative agreement is already obtained when the effects of thermal expansion and the core polarizability are taken into account.
One of us (S. Kümmel) thanks K. Hansen for several clarifying discussions concerning the experimental temperatures and time-scales, especially with respect to the “orientation question”, and the Deutsche Forschungsgemeinschaft for financial support.
# Rapidity Gaps from Colour String Topologies
Contribution to the DIS 99 workshop proceedings.
## Abstract
Diffractive deep inelastic scattering at HERA and diffractive $`W`$ and jet production at the Tevatron are well described by soft colour exchange models. Their essence is the variation of colour string-field topologies giving both gap and no-gap events, with a smooth transition and thereby a unified description of all final states.
The hard scale in diffractive hard scattering has provided the possibility to analyse rapidity gap events based on underlying parton processes calculable in perturbation theory. Although this has been quite successful, perturbative QCD (PQCD) cannot give the complete solution since the rapidity gap connects to the soft part of the event where non-perturbative effects on a long space-time scale are important.
In order to understand these non-perturbative effects and provide a unified description of all final states, we have developed models for the soft dynamics. These models are added to Monte Carlo generators (Lepto for $`ep`$ and Pythia for $`p\overline{p}`$), such that an experimental approach can be taken to classify events depending on the characteristics of the final state: e.g. gaps or no-gaps, leading protons or neutrons etc.
The basic assumption of the models is that variations in the topology of the confining colour force fields (strings) lead to different hadronic final states after hadronisation (Fig. 1). The PQCD interaction gives a set of partons with a specific colour order. However, this order may change due to soft, non-perturbative interactions.
In the soft colour interaction (SCI) model it is assumed that colour-anticolour, corresponding to non-perturbative gluons, can be exchanged between partons and remnants emerging from a hard scattering. This can be viewed as the partons interacting softly with the colour medium of the proton as they propagate through it, which should be a natural part of the process in which ‘bare’ perturbative partons are ‘dressed’ into non-perturbative ones and the confining colour flux tube between them is formed. The hard parton level interactions are given by standard perturbative matrix elements and parton showers, which are not altered by softer non-perturbative effects. The unknown probability to exchange a soft gluon between parton pairs is given by a phenomenological parameter $`R`$, which is the only free parameter of the model. With $`R=0.5`$ one obtains the correct rate of rapidity gap events observed at HERA and a quite decent description of the measured diffractive structure function (Fig. 2).
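To make the mechanism concrete, the following is a minimal sketch of the SCI step, not the actual Lepto/Pythia implementation; the `partons` list and its `colour` attribute are hypothetical stand-ins for the colour and anticolour indices carried by the perturbatively produced partons and remnants.

```python
import random

R = 0.5  # soft colour exchange probability, tuned to the HERA gap rate

def soft_colour_interactions(partons, R=R, rand=random.random):
    # each pair may exchange a soft (non-perturbative) gluon, i.e. swap
    # colour indices with fixed probability R; momenta are untouched, so
    # only the string topology between the partons changes
    for i in range(len(partons)):
        for j in range(i + 1, len(partons)):
            if rand() < R:
                partons[i].colour, partons[j].colour = (
                    partons[j].colour, partons[i].colour)
    return partons
```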
Leading neutrons are also obtained in agreement with experimental measurements . In the Regge approach pomeron exchange would be used for diffraction, pion exchange added to get leading neutrons and still other exchanges should be added for completeness. The SCI model provides a simpler description.
Applying the same SCI model to hard $`p\overline{p}`$ collisions one obtains production of $`W`$ and di-jets in association with rapidity gaps (Fig. 3). Keeping the $`R`$-value obtained from gaps at HERA, the observed rates of diffractive $`W`$ and diffractive di-jet production at the Tevatron are reproduced (Fig. 3). This is in contrast to the Pomeron model which, when tuned to HERA gap events, gives a factor $`6`$ too large rate at the Tevatron .
SCI not only leads to rapidity gaps, but also to other striking effects. It reproduces the observed rates of high-$`p_{\perp }`$ charmonium and bottomonium at the Tevatron, which are factors of 10 larger than predictions based on conventional PQCD. This is accomplished by the change of the colour charge of a $`Q\overline{Q}`$ pair (e.g. from a gluon) from octet to singlet. A quarkonium state can then be formed, using a simple model for the division of the cross-section below the threshold for open heavy flavour production onto the different quarkonium states.
An alternative to SCI is the newly developed generalised area law (GAL) model which, based on a generalisation of the area-law suppression $`e^{-bA}`$, with $`A`$ the area swept out by the string in energy-momentum space, gives modified colour string topologies through string reinteractions. The probability $`P=R_0[1-\mathrm{exp}(-b\mathrm{\Delta }A)]`$ for two string pieces to interact depends on the area difference $`\mathrm{\Delta }A`$ which is gained by the string rearrangement. This favours making ‘shorter’ strings, e.g. with gaps, whereas making ‘longer’, ‘zig-zag’ shaped strings is suppressed. The fixed probability $`R`$ in SCI is thus replaced by a dynamical one, where the parameter $`R_0=0.1`$ is chosen to reproduce the HERA gap event rate in a simultaneous fit to data from $`e^+e^{-}`$ annihilation at the $`Z^0`$-peak. The resulting diffractive structure function compares very well with HERA data (Fig. 2). The GAL model also improves the description of non-diffractive HERA data .
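Analogously, a sketch of the GAL string-reinteraction probability; the area-law parameter `b` is not quoted here and is left as an input, and rearrangements that would enlarge the string area (negative $`\mathrm{\Delta }A`$) are taken to be rejected, which is an interpretive assumption.

```python
import math

R0 = 0.1  # fitted to HERA gap events together with e+e- data at the Z0 peak

def gal_probability(delta_A, b, R0=R0):
    # P = R0 * [1 - exp(-b * dA)] for an area gain dA > 0: shortening the
    # strings (large dA, e.g. gap topologies) is favoured, lengthening
    # them into zig-zag shapes is suppressed
    if delta_A <= 0.0:
        return 0.0
    return R0 * (1.0 - math.exp(-b * delta_A))
```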
The GAL model can also be applied to $`p\overline{p}`$ to obtain diffractive $`W`$ and di-jet production through string rearrangements like in Fig. 3. The observed rates are reproduced quite well (Fig. 3). However, the treatment of the ‘underlying event’, which is a notorious problem in hadron-hadron scattering, introduces a larger uncertainty than for the SCI model .
The Tevatron data on gaps between two high-$`p_{\perp }`$ jets are harder to understand. SCI does give such events, but at a too low rate. The GAL model can give the observed rate, but again with an uncertainty due to the treatment of the underlying event. The measured colour-singlet fraction in D0 tends to increase with increasing jet separation, whereas CDF data show no significant such effect. However, the required gap size is fixed to $`-1<\eta <1`$. Our Monte Carlo study shows an increase with jet separation with this fixed gap size, but a decrease when the gap size follows the jet separation (Fig. 4). The proper diffractive signature should rather be no suppression with increasing gap size. Thus, the exact gap definition is very important for the interpretation, and this issue should therefore be examined further experimentally.
In conclusion, our models for non-perturbative QCD dynamics in terms of varying colour string topologies give a satisfactory explanation of several phenomena, both diffractive and non-diffractive, thus providing a unified description of many different hadronic final states. |
# Updating the census of star clusters in the Small Magellanic Cloud
## 1 Introduction
The Magellanic Clouds contain rich star cluster systems (Hodge, 1986, 1988). The distances of the Clouds and their rather high galactic latitudes make them ideal targets to probe total populations of extended objects.
It is becoming possible to map out the overall angular distributions of star clusters, associations and HII regions, see e.g. the revisions of previously cataloged and newly identified objects in the SMC and LMC (Bica & Schmitt, 1995; Bica et al., 1999). Such distributions help one better understand star formation mechanisms and the evolution of the Magellanic System. The homogeneous surveys above were carried out on ESO/SERC R and J Sky Survey Schmidt Plates.
CCD survey results are becoming available which provide deep images in particular areas. A sector of the LMC was studied by Zaritsky et al. (1997) using UBVI filters, and they identified previously known and new clusters. These objects were cross-identified in detail with the previous literature in Bica et al. (1999), and were included in that catalog. Recently, Pietrzyński et al. (1998) built a catalog of SMC clusters from the Optical Gravitational Lensing Experiment (OGLE) BVI database. The region covered by the OGLE Survey is $`\sim 2.4`$ square degrees in the central parts of the SMC. They reported 238 clusters and presented a cross-identification concluding that 72 clusters were newly cataloged.
In the present paper we perform a detailed cross-identification of the objects in the OGLE catalog with those in previous works, homogenizing classifications. We indicate intrinsically new objects, which in turn have implications for the star cluster census. According to Hodge (1986) the total cluster population in the SMC would be $`\sim 900`$ if it were surveyed entirely with 4m telescope deep B plates, as he did for a selection of fields. Considering incompleteness effects related to faint turnoffs, he estimated a grand total of $`\sim 2000`$. Since Hodge’s (1986) study new objects have been identified in the SMC (Bica & Schmitt, 1995; Pietrzyński et al., 1998), and it is important to update the census with a view to shedding light on the SMC history of star cluster formation and dissolution. In Section 2 we discuss the cross-identification procedures and present the results. In Section 3 we compare the angular distribution of the SMC\_OGLE objects with that of all extended objects in the SMC, and discuss the impact of the new SMC\_OGLE objects on the SMC census of extended objects, in particular star clusters. In Section 4 we give the concluding remarks.
## 2 Cross-identifications
Pietrzyński et al. (1998) presented the catalog of OGLE objects in the SMC, together with I-Band CCD images of each object. We used the information therein provided, particularly the coordinates and images for the present cross-identification with the objects in Bica & Schmitt (1995, hereafter BS95).
We overplotted the 238 SMC\_OGLE objects on J2000 maps containing the objects in BS95. Objects lying very close in position had their CCD images in Pietrzyński et al. (1998) compared to the corresponding fields in the ESO/SERC R and J Schmidt plates and Digitized Sky Survey (DSS) images, in order to check equivalences, which as a rule occurred. Isolated objects turned out to be new objects.
We measured diameters and position angles of the new SMC\_OGLE objects and classified them homogeneously with BS95 and Bica et al. (1999). The object types in the latter classifications are: C for star cluster, A for emissionless association, CA and AC for objects with intermediate properties, NA for HII regions and embedded associations, NC for HII regions and embedded star clusters or high surface brightness compact HII regions, N for supernova remnants, and AN and CN for associations and clusters, respectively, which show traces of emission.
The results for all SMC\_OGLE objects following the BS95 and Bica et al. (1999) catalog format are given in Table 1. By columns: (1) The Sky Survey field quadrant where the object is best seen. (2) Object cross-identification in the different catalogs. (3) and (4) Right ascension and declination for the epoch 2000, respectively. (5) Object type. (6) and (7) Major and minor diameters, respectively. (8) Position angle of the major axis (0=N, 90=E). (9) Remarks: ‘mP’, ‘mT’ indicate member of pair, triple etc., respectively; ‘&’ indicates additional designations to column 2.
We confirmed most of the cross-identifications by Pietrzyński et al. (1998) for the 166 objects therein indicated as having previous identification. However, of the 72 SMC\_OGLE objects reported as newly found by Pietrzyński et al. (1998), we concluded that 26 had previous identifications in the literature, while 46 are intrinsically new objects. These 46 objects are relatively isolated, and their typical appearance on the DSS images is a few enhanced pixels, basically unresolved, i.e. objects which indeed required deep CCD images to be recognized as clusters.
Two OGLE objects are duplicated: SMC\_OGLE236 = SMC\_OGLE144 and SMC\_OGLE175 = SMC\_OGLE19 (Table 1). Two single objects in the OGLE catalog, SMC\_OGLE26 and SMC\_OGLE33, were considered to be pairs in BS95 (the north and south components of NGC248 and H86-78). Adopting the latter separations, the total number of objects with SMC\_OGLE designation remains 238, as in Pietrzyński et al. (1998). In one case an SMC\_OGLE object (SMC\_OGLE31) lying close in position to a BS95 object (B36) turned out to be a different object. The coordinates of SMC\_OGLE166 in Pietrzyński et al. (1998) did not correspond to the cluster image; the correct coordinates were kindly provided by Dr. Pietrzyński.
Table 2 shows the distribution of SMC\_OGLE objects in different catalogs, considering the first chronological designation. Column 1 shows the acronym; column 2 the catalog reference; column 3 the counts made in Pietrzyński et al.’s (1998) Table 2; and column 4 the counts made in Table 1 of the present work. This comparison basically shows the distribution of the difference between the 72 objects initially reported as new and the 46 intrinsically new SMC\_OGLE objects. This difference arises mainly from faint clusters in the B (Brück, 1976), H86 (Hodge, 1986) and BS (BS95) catalogs.
Concerning the distributions in Table 2, note that the L61 (Lindsay, 1961) and MA (Meyssonier & Azzopardi, 1993) catalogs deal with stellar emission-line sources, but contain some extended objects which were included in BS95. These extended emission-line objects are HII regions and embedded star clusters. The L61 designations given to seven SMC\_OGLE objects in Table 2 of Pietrzyński et al. (1998) have SMC-N (Henize, 1956) as the first chronological identification in BS95. Note also that one SMC\_OGLE object has a counterpart in the MA catalog.
### 2.1 Updated Electronic Version of the BS95 Catalog
We provide in Table 3 the first 5 lines of an updated electronic version of the revised and extended catalog of star clusters, associations and emission nebulae in the SMC/Bridge (BS95). This incorporates the present results concerning the SMC\_OGLE objects (Table 1), in particular the 46 new entries. We note that the equatorial coordinates in the present version are J2000, while in BS95 they were B1950. As pointed out by BS95 an updated catalog condensing the literature results is very useful for future surveys.
Outside the OGLE survey area we include 3 objects which were previously in the list of excluded catalog entries of BS95 (their Table 3), because they were not clearly interpreted as clusters on the ESO/SERC Schmidt plates. These objects are: (i) B133 (Brück, 1976), which appears to be a physical system (Kontizas, 1980; Hodge, 1983); (ii) H86-95 and H86-96, indicated as a cluster pair by Hatzidimitriou & Bhatia (1990), which are probably clusters as seen in the second generation Digitized Sky Survey images.
The SMC/Bridge catalog now totals 1237 objects: 595 classified as clusters (C+CA+CN), 350 as associations (A+AC+AN) and 292 related to emission nebulae (N+NC+NA). Considering also the recent LMC catalog (Bica et al., 1999) the number of extended objects in the Magellanic System is now 7895.
## 3 Angular distribution and census
In the following discussions we adopt the updated BS95 catalog (Table 3), which incorporates that of the OGLE survey (Table 1). We consider as SMC objects those with right ascension less than $`2^h`$, thus excluding the Bridge region.
Figure 1 shows the angular distribution of all SMC\_OGLE objects compared to that of all extended objects in the updated BS95 catalog. The OGLE survey covers the central regions of the SMC, including the very dense bar region to the southwest and the dense extension to the northeast. Note that there are moderately dense fields around the OGLE survey area in which future CCD surveys will certainly detect new objects. The low-density outer zones and the SMC Wing region to the southeast are also important targets for future surveys.
Figure 2 superimposes the angular distribution of the 46 intrinsically new SMC\_OGLE objects to that of all (238) objects in the SMC\_OGLE catalog. The new objects are more frequent in the Bar region, but they are also numerous in the northeast extension.
In Table 4 we show distributions of object types in the SMC for different spatial extractions in the updated BS95 and OGLE catalogs. In column 2 (whole SMC) there are 1122 objects distributed among (clusters : associations : nebulae) as (584 : 252 : 286). The 46 new OGLE objects and the three catalog additions outside the OGLE region (Section 2.1) increased the number of SMC objects by $`\sim 5\%`$.
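These counts can be verified at a glance:

```python
clusters, associations, nebulae = 584, 252, 286
total = clusters + associations + nebulae  # 1122 objects in the whole SMC
new_entries = 46 + 3  # new OGLE objects plus the three additions of Sect. 2.1
print(total, 100 * new_entries / (total - new_entries))  # -> 1122, ~4.6 %
```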
We also show in Table 4 object type counts occurring in the OGLE survey area: in column 3 all objects from the updated BS95 catalog, in column 4 all objects in the revised OGLE catalog (Table 1), and finally in column 5 the intrinsically new OGLE objects. We compute 631 extended objects in the OGLE survey area as compared to 238 SMC\_OGLE objects. The large OB associations and nebular complexes explain this difference in part (mostly included in the A and NA types), but there occur many clusters (C type) in the OGLE survey area not included in the OGLE catalog. The classification C, CA to AC is one of decreasing density (Bica et al., 1999), and most of the new OGLE objects (column 5) were classified in the CA type. Finally we point out that two new OGLE objects are related to emission (NC and NA types), and $`\sim 10\%`$ of the objects in the OGLE catalog (column 4) are related to emission or have traces of emission (CN type). The emission is better seen on ESO/SERC R plates, owing to H$`\alpha `$, than in the I-band CCD images.
We show in column 6 of Table 4 the counts for all objects outside the OGLE survey area, and in column 7 a crude prediction of the objects that may be detected by similar CCD surveys. For each object type we computed the fraction of new OGLE objects (column 5) with respect to all objects of the same type in the area (column 3). The average age in the outer parts is expected to be larger than in the inner parts. This effect would increase the number of new CCD objects in the outer parts, since the turnoffs would be fainter, and also because crowding effects are less important. On the other hand, the latter point does not favor new detections, because many of these relatively old objects would have already been detected in previous photographic surveys. It is also worth noting that the older-age effect towards outer regions is not isotropic, since young clusters certainly occur to the southeast (Wing and Bridge regions), which favors more previous detections. Under the assumption that such effects tend to compensate, the number of new objects in the outer region would be $`\sim 42`$ (column 7).
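The column-7 prediction thus amounts to a simple per-type scaling; schematically (the function name is ours, and the per-type counts themselves are in Table 4, not reproduced here):

```python
def predicted_new(new_in_area, all_in_area, all_outside):
    # scale the fraction of intrinsically new objects found inside the
    # OGLE area (col. 5 / col. 3) to the counts outside it (col. 6)
    return round(new_in_area / all_in_area * all_outside)
```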
A cluster census depends on the definition of cluster itself. One can include embedded clusters in HII regions and extend the density range to loose systems, as done for the LMC (Bica et al., 1999). Considering the census of SMC star clusters by including a density range (C, CA and AC types) the actual number is 633 and including predictions 672. Considering also the clusters related to emission (NC and CN types) the actual cataloged number is 719 and including predictions 759. With respect to a cluster population of 719 the new objects in the OGLE catalog imply an increase of $`7\%`$.
Hodge (1986) carried out a similar analysis by comparing the number of clusters cataloged on deep 4m telescope B plates in selected areas to that of known clusters in SMC catalogs at that time. He predicted $`\sim 900`$ clusters if all the SMC were surveyed with similarly deep plates, and $`\sim 2000`$ clusters if small older clusters were detectable. The plate limits in Hodge (1986) are $`B=23`$ and $`B=22`$, respectively in the outer and core SMC regions, while those of the OGLE survey are slightly less deep, with $`B\sim 21.2`$, $`V\sim 21.5`$ and $`I\sim 21.0`$ (Udalski et al., 1998). The present number of cataloged clusters in Table 3 (719) still falls $`\sim 180`$ short of Hodge’s prediction for the detection level of the 4m plates (900), but it is considerably larger than that actually known at that time ($`\sim 600`$).
Hodge’s (1986) 4m telescope photographic survey is to date still the deepest one in selected areas of the SMC. Note that 53 faint cluster candidates (indicated by Hodge as probable or questionable clusters) remain in Table 3 of BS95, owing to limitations of the ESO/SERC Schmidt plates. This number excludes H86-95 and H86-96 (now in the present catalog in Table 3 - see Section 2.1) and the duplications H86-131=128, 137=133, 161=158 and 168=165 (now also in Table 3). Such faint clusters may turn out to be crucial for inferring the rate at which intermediate-age and old clusters formed in possible bursts and/or dissolved in the SMC and also the LMC (Geisler et al., 1997). The brightest star in these faint clusters is mostly in the range $`18\lesssim B\lesssim 22`$ (Hodge, 1986), which makes them candidate intermediate/old age clusters, since some bright red giant/clump stars may or may not be present in such underpopulated objects. The turnoff magnitude for old clusters in the SMC is $`B\sim 22`$-23 (Hodge, 1986). If these faint objects are assumed to be clusters, the updated sample would be 772 (719 + 53), still $`\sim 130`$ short of the 4m B plate survey predictions for the whole SMC with comparable plate limits.
We conclude that deep CCD surveys like OGLE will certainly reveal new clusters in the SMC intermediate and outer parts, and deeper surveys would be necessary (especially in the central regions) to attain $`\sim 900`$ objects. As an example of the detection of very faint clusters by means of deep localized images, see the recent discovery of two clusters in crowded LMC fields with HST (Santiago et al., 1998). Finally, deep field CCD surveys would be necessary to check the existence of small old (and intermediate age) clusters in order to attain a total population of $`\sim 2000`$. One possibility is that most or part of such older low-mass objects have dissolved.
## 4 Concluding remarks
The updating of the SMC extended object catalog by including the OGLE survey and some additional results was carried out. The 46 new OGLE objects increase the number of extended objects in the SMC by $`\sim 5\%`$ and that of star clusters themselves by $`\sim 7\%`$. If a similar CCD survey were carried out over the whole SMC area, a simple estimate suggests that $`\sim 40`$ additional objects could be detected. The present number of star clusters (considering also those related to emission and loose systems) is 719, still $`\sim 180`$ short of Hodge’s (1986) prediction for a global survey of the SMC attaining $`B=23`$ and $`B=22`$ in the outer parts and core, respectively. Deeper field CCD surveys would be necessary to check the existence of small old (and intermediate age) clusters in order to attain a grand total population of $`\sim 2000`$. However, it is possible that most or part of such older low-mass objects have dissolved.
We acknowledge the Brazilian institution CNPq for support. |
# Comment on “Inverse exciton series in the optical decay of an excitonic molecule”
## Abstract
Tokunaga et al. \[Phys. Rev. B 59, R7837 (1999)\] claim the first successful observation of the inverse exciton $`M`$ series in the emission spectrum of excitonic molecules. We assert that such a series was actually observed as early as 1989 in the $`\beta `$-$`ZnP_2`$ crystal. We show that the objections of Tokunaga et al. against the biexciton nature of the inverse exciton series in $`\beta ZnP_2`$ are ungrounded. In particular, their estimates for the ratio of the intensities of the $`M2`$ and $`M1`$ emission lines in this crystal give a value two orders of magnitude smaller, because they do not take into account the reabsorption of the $`M1`$ line photons. We report the observation of an additional inverse exciton $`M^{\prime }`$ series in the emission spectrum of excitonic molecules in $`\beta ZnP_2`$.
In a recent publication Tokunaga et al. have reported an observation of the inverse exciton $`M`$ series in the emission spectrum of excitonic molecules in a $`CuCl`$ crystal. They claim that their experiment is the first successful observation of such a series in semiconductors. The aim of our Comment is to show that this claim does not hold.
To our knowledge, the first observation of an emission $`M`$ series caused by two-electron radiative transitions in a biexciton was reported by some authors of the present Comment ten years ago. In Ref. such a series was observed in the $`\beta ZnP_2`$ crystal (fig. 1). In a two-electron transition one exciton of the molecule annihilates and the other stays in the ground state (line $`M1`$) or passes to excited (lines $`M2`$, $`M3`$, …) and ionized (line $`M\infty `$) states. Later, we described the inverse exciton series in the $`\beta ZnP_2`$ emission spectrum in Ref. . However, the authors of Ref. disregarded the above publications. Tokunaga et al. cited only the paper in which the theoretical analysis of two-electron and two-photon radiative transitions in an excitonic molecule was made. Moreover, they have objected to the biexciton nature of the inverse exciton series in $`\beta ZnP_2`$.
Their first argument is that the ratio of the intensities of the $`M2`$ and $`M1`$ lines of the series in $`\beta ZnP_2`$ is too large for the biexciton $`M`$ series: $`I_2/I_1\sim 10^{-1}`$. We would like to draw the attention of the authors of Ref. to the following. Indeed, the experimental value of $`I_2/I_1`$ is $`1.1\times 10^{-1}`$. Proceeding from the assumption of the biexciton nature of the inverse exciton series in $`\beta ZnP_2`$, we obtained theoretically in Ref. : $`I_2/I_1=1.1\times 10^{-2}`$. The numerical estimates of the authors of Ref. give $`10^{-3}`$ for $`I_2/I_1`$ in this material. However, as we have pointed out in Ref. , this disagreement between theory and experiment is readily understood. At high excitation of the sample by a powerful pulsed $`N_2`$ laser (excitation intensities $`\sim 10^6W/cm^2`$) the concentration of $`1S`$ excitons is high enough. In addition, in $`\beta ZnP_2`$ the lowest exciton state is the forbidden state of the orthoexciton, which could further raise the concentration of excitons in the $`1S`$ state. Transitions from the $`1S`$ state of the exciton into the molecule ground state cause reabsorption of $`M1`$ line photons ($`he_{1S}+\hbar \omega _{M1}\to h_2e_2`$) and a corresponding decrease of this line’s intensity. The concentration of excitons in the $`2S`$, $`3S`$, … states does not increase considerably because of their fast relaxation to the $`1S`$ state, and there is no saturation of the lines $`M2`$, $`M3`$ and $`M\infty `$. This argument is based on the results of an experimental study of the dependences of the intensities of the inverse series lines on the excitation intensity $`I_{exc}`$ (fig. 2). The second-power increase of the $`M1`$ line intensity with $`I_{exc}`$ slows down at high excitation levels, and this effect becomes stronger with increasing excitation intensity. Such a saturation effect is not observed for the other lines of the series. Let us extrapolate the dependence of the $`M1`$ line intensity on the excitation intensity to the values of $`I_{exc}`$ at which the experimental value of $`I_2/I_1`$ is $`1.1\times 10^{-1}`$ (fig. 2). In this way we obtain the value of $`I_2/I_1`$ in the absence of reabsorption of $`M1`$ line photons and of the corresponding saturation of this line. The extrapolation gives $`I_2/I_1=4.3\times 10^{-3}`$. This value agrees with the estimates of Ref. ($`10^{-3}`$), which do not consider the reabsorption effect. Some disagreement between the value obtained from the extrapolation and our estimates made in Ref. ($`1.1\times 10^{-2}`$) is possibly due to the rough biexciton wave function that we used in Ref. . For the other lines of the inverse series in $`\beta ZnP_2`$ good agreement between experimental and theoretical values of the intensity ratios takes place: $`(I_3/I_2)_{exp}=1.4\times 10^{-1}`$ and $`(I_3/I_2)_{th}=1.3\times 10^{-1}`$; $`(I_{\infty }/I_3)_{exp}=4.3\times 10^{-1}`$ and $`((I_4+I_5)/I_3)_{th}=3.9\times 10^{-1}`$, where $`I_{\infty }=\sum _{n=4}^{\infty }I_n`$ is the total intensity of the $`M4`$, $`M5`$, … lines, which merge into the total $`M\infty `$ line. In $`CuCl`$ the lowest exciton state is allowed. Biexcitons in Ref. were resonantly created by the two-photon absorption method, and therefore excitons would be created only in the optical decay of biexcitons. Due to these two facts, the concentration of excitons would be insufficient for reabsorption of $`M1`$ line photons, and this line would not be saturated, i.e. agreement between experiment and theory would take place for all lines of the inverse series in $`CuCl`$.
Such an agreement was reported in Ref. .
The $`M`$ series in $`\beta ZnP_2`$ is observed at considerably higher temperatures too (fig. 3). This fact confirms the high binding energy of the biexciton in $`\beta ZnP_2`$ and rejects any possible impurity interpretation. $`\beta ZnP_2`$ is also remarkable in the following respect. If the direction of the wavevector of the emitted photon $`\mathbf{k}`$ is parallel to the axis $`b`$ of the crystal ($`\mathbf{k}\parallel b`$) and the photon polarization is $`\mathbf{E}\parallel a`$, another inverse exciton $`M^{\prime }`$ series is observed in the emission spectrum of this crystal (fig. 4). This additional series is symmetric to the $`A`$ series of the free $`S`$ orthoexciton, which is observed in the absorption spectrum at $`\mathbf{E}\parallel a`$ and $`\mathbf{k}\parallel b`$. Thus, the $`M^{\prime }`$ series is due to radiative transitions from the excitonic molecule ground state to the $`S`$ states of the orthoexciton. The main $`M`$ series is symmetric to the $`C`$ series of the free $`S`$ paraexciton, which is observed in the absorption and emission spectra at $`\mathbf{E}\parallel c`$. Correspondingly, the $`M`$ series is due to radiative transitions from the biexciton ground state to the $`S`$ states of the paraexciton.
The authors of Ref. attribute the failure of previous attempts to observe the biexciton $`M`$ series to the fact that “the $`M_{n\geq 2}`$ lines are extremely weak in intensity, requiring for their observation a highly sensitive detection technique and a high-quality sample free from impurity emissions”. However, the extremely weak intensities of the $`M`$ series lines in $`CuCl`$ do not rule out that in other materials these intensities can be considerably higher. Indeed, in $`CuCl`$ the intensities of the $`M_{n\geq 2}`$ lines are extremely small, since in this material: $`I_2/I_1=1.4\times 10^{-4}`$; $`I_3/I_1=4.0\times 10^{-5}`$; $`I_4/I_1=1.5\times 10^{-5}`$. In $`\beta ZnP_2`$ we have values more than one order of magnitude higher: $`I_2/I_1=4.3\times 10^{-3}`$; $`I_3/I_1=6.0\times 10^{-4}`$; $`I_4/I_1=2.6\times 10^{-4}`$. This fact simplifies the observation of the $`M`$ series in $`\beta ZnP_2`$. Finally, the authors of Ref. asserted that our experimental data were not confirmed in a similar experiment by K. Kondo (K. Kondo, M.S. thesis, Okayama University, 1998). However, they themselves have answered this point, having written that high-quality samples are required for the observation of the biexciton $`M`$ series. Most likely, K. Kondo simply did not have samples of the required quality.
The authors of Ref. determined the components $`C_n`$ of the exciton excited states $`n=2,3`$, and $`4`$ in the biexciton wave function in $`CuCl`$ from the relative intensities of the $`M1`$, $`M2`$, $`M3`$, and $`M4`$ lines. This allowed them to reconstruct the internal molecule wave function. However, they did not mention that this idea had been proposed in Ref. . Only the reabsorption of $`M1`$ line photons and the corresponding saturation of this line prevented us from reconstructing the biexciton wave function in $`\beta ZnP_2`$. However, taking for $`I_2/I_1`$ the extrapolated value obtained above, we can estimate the components $`C_n`$ of the exciton excited states in the biexciton wave function in $`\beta ZnP_2`$ as follows: $`|C_2/C_1|^2=I_2/I_1=4.3\times 10^{-3}`$; $`|C_3/C_1|^2=I_3/I_1=6.0\times 10^{-4}`$; $`|C_4/C_1|^2=I_4/I_1=2.6\times 10^{-4}`$.
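As a small numerical aside, the corresponding amplitude ratios follow by taking square roots of the intensity ratios quoted above:

```python
import math

intensity_ratios = {2: 4.3e-3, 3: 6.0e-4, 4: 2.6e-4}  # I_n/I_1 in beta-ZnP2
for n, r in intensity_ratios.items():
    print(f"|C_{n}/C_1| = {math.sqrt(r):.3f}")
# -> about 0.066, 0.024 and 0.016
```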
In conclusion, we assert that the inverse exciton $`M`$ series in the emission spectrum of excitonic molecules was observed for the first time in the $`\beta ZnP_2`$ crystal in 1989. Besides this series, we have observed in this crystal an additional inverse exciton $`M^{\prime }`$ emission series, which is due to radiative transitions from the biexciton ground state to the $`S`$ states of the orthoexciton.
## 1 Introduction
Ideas on diffraction have been developed over a long time. Quite old (‘old-old’) is the Regge approach with a pomeron mediating elastic and diffractive interactions . Being from pre-QCD times, Regge phenomenology only considers soft interactions described in terms of hadrons. In a modern QCD-based language one would like to understand diffraction on the parton level. This was the starting point of the by now ‘old new’ idea that one should probe the structure of the pomeron through a hard scattering in diffractive events. By introducing a hard scale one should resolve partons in the pomeron and also make calculations possible through perturbative QCD (pQCD). This opened the new branch of diffractive hard scattering with models and the discovery by UA8 as discussed in section 3.
The models are based on a factorization between the new concepts of a ‘pomeron flux’ (in the proton) and a ‘pomeron structure function’ in terms of parton density functions. These ideas may be interpreted as the pomeron being analogous to a hadron (maybe a glueball?) as discussed in section 2.
The discovery of rapidity gap events in deep inelastic scattering (DIS) at HERA was a great surprise to most people, although it had been predicted as a natural consequence of the diffractive hard scattering idea . The pointlike probe in DIS makes it an ideal way to measure the parton structure of diffraction. This is discussed in section 4 together with diffractive production of jets and $`W`$’s at the Tevatron. Although pomeron-based models may work phenomenologically, there are conceptual and theoretical problems as discussed in section 5.
These problems are related to the general unsolved problem of non-perturbative QCD (non-pQCD). Diffraction is one important aspect of this; others are hadronization in high energy collisions and the confinement of quarks and gluons. In recent years there has been an increased interest in these problems, and efforts are being made based on new ideas and methods as discussed in section 6. The hard scale in diffractive hard scattering only solves part of the problem by making the upper part of the diagrams in Fig. 1 calculable in perturbation theory. However, the soft, lower part of the interaction occurs over a large space-time region, as illustrated in Fig. 1c, and must be treated with some novel non-pQCD methods.
One such ‘new-new’ idea is the soft colour interaction (SCI) model , which is an explicit attempt to describe non-pQCD interactions in a Monte Carlo event generation model. Although it is quite simple, it is able to describe data on different diffractive and non-diffractive interactions as discussed in section 6. However, a better theoretical basis for this kind of model is certainly needed. In addition, the rapidity gaps between high-$`p_{\perp }`$ jets observed at the Tevatron are still a challenge to understand (section 7). In conclusion (section 8), although substantial progress has been made recently, diffractive scattering is still a basically unsolved problem which provides challenges for the future.
## 2 Rapidity gaps and the pomeron concept
The dynamics of hadron-hadron interactions are largely not understood. Only the very small fraction of the cross section related to hard (large momentum transfer) interactions can be understood from first principles using perturbation theory, e.g. jet production in QCD or $`\gamma ^{*},W,Z`$ production in electroweak theory. The large cross section ($`\mathcal{O}(\mathrm{mb})`$) processes, on the other hand, are given by non-pQCD, for which a proper theory is lacking and only phenomenological models are available. These processes are classified in terms of their final states as illustrated in Fig. 2.
The distribution of final state hadrons is then usually expressed in terms of the rapidity variables
$$\text{rapidity}\quad y=\frac{1}{2}\mathrm{ln}\frac{E+p_z}{E-p_z}\simeq -\mathrm{ln}\mathrm{tan}\frac{\theta }{2}=\eta \quad \text{pseudorapidity}$$
(1)
where the approximation becomes exact for massless particles and the polar angle $`\theta `$ is with respect to the $`z`$-axis along the beam. In a totally inelastic interaction (Fig. 2e) the hadrons are distributed with a flat rapidity plateau. This corresponds to longitudinal phase space where the transverse momenta are limited to a few hundred MeV, but longitudinal momenta cover the available phase space. This is in accordance with hadronization models, e.g. the Lund string model , where longitudinal momenta are given by a scaling fragmentation function and transverse momenta are strongly suppressed above the scale of soft interactions. The probability to have events with a gap, i.e. a region without particles, due to statistical fluctuations in such a rapidity distribution decreases exponentially with the size of the gap.
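This exponential suppression is easy to quantify: for independent (Poisson-like) particle production on a plateau of density $`dn/dy`$, the chance of an empty interval $`\mathrm{\Delta }y`$ is $`\mathrm{exp}((dn/dy)\mathrm{\Delta }y)`$ with a minus sign in the exponent. The sketch below uses an illustrative plateau height, not a value taken from the text.

```python
import math

def gap_probability(gap_size, dn_dy=2.5):
    # P(no particle in an interval dy) = exp(-dn/dy * dy) for independent
    # emission; dn_dy = 2.5 is only an assumed, illustrative plateau height
    return math.exp(-dn_dy * gap_size)

print(gap_probability(3.0))  # ~5e-4: a 3-unit gap from fluctuations alone
```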
Experimentally one observes a much higher rate of gaps. Diffraction is nowadays often defined as events with large rapidity gaps which are not exponentially suppressed . This is, however, a wider definition than that previously often used in terms of a leading proton taking a large fraction (e.g. $`x_F\gtrsim 0.9`$) of the beam proton momentum, which enforces a rapidity gap simply by kinematical constraints. However, a gap can be anywhere in the event and therefore allows a forward system of higher mass than a single proton. The definition chosen reflects what the experiments actually observe. The leading protons go down the beam pipe and their detection requires tracking detectors in ‘Roman pots’ which are moved into the beam pipe to cover the very small angles caused by the scattering itself or by the bending out of the beam path by machine dipole magnets.
The simplest gap events occur in elastic and single diffractive scattering (Fig. 2a,b). Due to the scattered proton there is obviously an exchange of energy-momentum, but not of quantum numbers. In Regge phenomenology this is described as the exchange of an ‘object’ with vacuum quantum numbers called a pomeron (IP) after the Russian physicist Pomeranchuk. Regge theory is a description based on analyticity for scattering amplitudes in high energy interactions without large momentum transfers, but it is not a theory based on a fundamental Lagrangian like QCD.
The kinematics of single diffraction can be specified in terms of two variables, e.g. the momentum fraction $`x_p=p_f/p_i`$ of the final proton relative to the initial one and the momentum transfer $`t=(p_i-p_f)^2`$. The pomeron then takes the momentum fraction $`x_{IP}=1-x_p`$ and has a negative mass-squared $`m_{IP}^2=t<0`$, meaning that it is a virtual exchanged object. The other proton produces a hadronic system $`X`$ with $`M_X^2=x_{IP}s`$, i.e. the invariant mass-squared of the ‘pomeron-proton collision’. The cross section for single diffraction (SD) is experimentally found to be well described by
$$\frac{d\sigma _{SD}}{dtdx}\simeq \frac{1}{x_{IP}}\left\{a_1e^{b_1t}+a_2e^{b_2t}+\dots \right\}\simeq \frac{a}{M_X^2}\left|F(t)\right|^2$$
(2)
where the exponential damping in $`t`$ can be interpreted in terms of a proton form factor $`F(t)`$ giving the probability that the proton stays intact after the momentum ‘kick’ $`t`$. With $`x_{IP}<0.1`$ the maximum $`M_X`$ reachable at ISR, S$`p\overline{p}S`$, Tevatron and LHC are 20, 170, 570 GeV and 4.4 TeV, respectively. However, the rate of large $`M_X`$ events is suppressed due to the dominantly small pomeron momentum fraction. This is the reason why it took until 1985 to demonstrate that the rapidity distribution of hadrons in the $`X`$-system shows longitudinal phase space . Therefore, the pomeron-proton collision is similar to an ordinary hadron-proton interaction. This ruled out ‘fireball models’ giving a spherically symmetric final state having a Gaussian rapidity distribution . Thus, the hadronic final state provides information on the interaction dynamics producing it.
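The quoted maximal diffractive masses follow directly from $`M_X^2=x_{IP}s`$ with $`x_{IP}<0.1`$; the $`\sqrt{s}`$ values below are the usual collider energies and are an assumption here, not taken from the text.

```python
import math

for name, sqrt_s in [("ISR", 63.0), ("SppS", 546.0),
                     ("Tevatron", 1800.0), ("LHC", 14000.0)]:
    # M_X(max) = sqrt(0.1 * s) = sqrt(0.1) * sqrt(s)
    print(f"{name}: max M_X = {math.sqrt(0.1) * sqrt_s:.0f} GeV")
# -> about 20, 173, 569 and 4427 GeV, matching the quoted
#    20, 170, 570 GeV and 4.4 TeV
```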
The Regge formalism relates the differential cross sections for different processes. This is achieved through the factorization of the different vertices such that the same kind of vertex in different processes is given by the same expression. The exchange of quantum numbers other than those of the vacuum is described in terms of, e.g., meson exchanges. Since the exchanged object is not a real state, but virtual with a negative mass-squared, it is actually a representation of a whole set of states (e.g. mesons) with essentially the same quantum numbers. The spin versus the mass-squared of such a set gives a linear relation which can be extrapolated to $`m^2=t<0`$ and provides the trajectory $`\alpha (t)`$ for the exchange. This provides the essential energy dependence $`\sigma \sim s^{2\alpha (t)-2}`$ of the cross section. The pomeron trajectory $`\alpha _{IP}(t)=1+ϵ+\alpha ^{}t\approx 1.08+0.25t`$ has the largest value of all trajectories at $`t=0`$ (intercept) which leads to the dominant contribution to the hadron-hadron cross section. Contrary to the $`\pi `$ and $`\rho `$ trajectories, which have well known integer spin states at the pole positions $`t=m_{meson}^2`$, there are no real states on the pomeron trajectory. However, a recently found spin-2 glueball candidate with mass $`1926\pm 12`$ MeV fits well on the pomeron trajectory.
This would be in accord with the suggestion that the pomeron is some gluonic system which may be interpreted as a virtual glueball . In a modern QCD-based language it is natural to consider a pomeron-hadron analogy where the pomeron is a hadron-like object with a quark and gluon content. Pomeron-hadron interactions would then resemble hadron-hadron collisions and give final state hadrons in longitudinal phase space, just as observed. There was, however, another view in terms of a pomeron-photon analogy where the pomeron is considered to have an effective pointlike coupling to quarks. Single diffractive scattering would then be similar to deep inelastic scattering and the exchanged pomeron scatters a quark out of the proton, leading to a longitudinal phase space after hadronization. This fits well with the experimental evidence for pomeron single-quark interactions .
## 3 Idea and discovery of diffractive hard scattering
To explore the diffractive interaction further, we introduced in 1984 the new idea that one should use a hard scattering process to probe the pomeron interaction at the parton level. In retrospect this seems obvious and simple, but at that time it was quite radical and was criticised. The idea was launched before the observations of longitudinal event structure in diffraction, the glueball candidate on the pomeron trajectory and the pomeron single-quark interactions discussed above. Furthermore, diffraction was at that time a side issue in particle physics that was ignored by most people.
Based on the pomeron factorization hypothesis, the diffractive hard scattering process was considered in terms of an exchanged pomeron and a pomeron-particle interaction where a hard scattering process at the parton level may take place, as illustrated in Fig. 3. The diffractive hard scattering cross section can then be expressed as the product of the inclusive single diffractive cross section and the ratio of the pomeron-proton cross sections for producing jets and anything, i.e.
$$\frac{d\sigma _{jj}}{dtdM_X^2}=\frac{d\sigma _{SD}}{dtdM_X^2}\frac{\sigma (IPp\to jj)}{\sigma (IPp\to X)}$$
(3)
Here, $`d\sigma _{SD}`$ can be taken as the parametrization of data in eq. (2) and the total pomeron-proton cross section $`\sigma (IPp\to X)`$ can be extracted from data using the Regge formalism, resulting in a value of order 1 mb. Together these parts of eq. (3) can be seen as an expression for a pomeron flux $`f_{IP/p}(x_{IP},t)`$ in the beam proton. The cross section for pomeron-proton to jets, $`\sigma (IPp\to jj)`$, is assumed to be given by pQCD as
$$\sigma (IPp\to jj)=\int dx_1\,dx_2\,d\widehat{t}\,\sum _{ij}f_{i/IP}(x_1,Q^2)f_{j/p}(x_2,Q^2)\frac{d\widehat{\sigma }}{d\widehat{t}}$$
(4)
where a parton density function $`f_{i/IP}`$ for the pomeron is introduced in analogy with those for ordinary hadrons. The pomeron parton density functions were basically unknown, but assuming the pomeron to be gluon dominated it was reasonable to try $`xg(x)=ax(1-x)`$ or $`xg(x)=b(1-x)^5`$ for the cases of only two gluons or of many gluons similar to the proton. Similarly, if the pomeron were essentially a $`q\overline{q}`$ system one would guess $`xq(x)=cx(1-x)`$. The normalisation constants $`a,b,c`$ can be chosen to saturate the momentum sum rule $`\int _0^1dx\sum _ixf_{i/IP}(x)=1`$, which seems like a reasonable assumption to get started.
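For illustration, the normalisation constants follow from the sum rule by elementary integrals; a minimal symbolic check (treating the quark and antiquark densities of the $`q\overline{q}`$ case as two separate terms in the sum, which is our reading) is:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
a, b, c = sp.symbols('a b c', positive=True)

# Two-gluon-like pomeron:  x g(x) = a x (1 - x)
a_val = sp.solve(sp.integrate(a*x*(1 - x), (x, 0, 1)) - 1, a)[0]
# Proton-like gluonic pomeron:  x g(x) = b (1 - x)**5
b_val = sp.solve(sp.integrate(b*(1 - x)**5, (x, 0, 1)) - 1, b)[0]
# q-qbar pomeron:  x q(x) = x qbar(x) = c x (1 - x), two terms in the sum
c_val = sp.solve(2*sp.integrate(c*x*(1 - x), (x, 0, 1)) - 1, c)[0]

print(a_val, b_val, c_val)  # -> 6, 6, 3
```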
This formalism allows numerical estimates for diffractive hard scattering cross sections. Diffractive jet cross sections at the CERN S$`p\overline{p}`$S collider energy were found to be large enough to be observable. Furthermore, turning the formalism into a Monte Carlo (MC) program (precursor to Pompyt described below) to simulate complete events demonstrated a clearly observable event signature: a leading proton ($`x_F\gtrsim 0.9`$) separated by a large rapidity gap from a central hadronic system with high-$`p_{\perp }`$ jets.
Based on these predictions, the UA8 experiment was approved and constructed. It had Roman pots in the beam pipes to measure the momentum of leading (anti)protons and used the UA2 central detector to observe jets. The striking event signature was observed in 1987, signalling the discovery of the diffractive hard scattering phenomenon, which was investigated further with more data .
The observed jets showed the characteristic properties of QCD jets as quantified in the Monte Carlo, e.g. jet $`E_{\perp }`$ and angular distributions and energy profiles. The longitudinal momentum of the jets gives information on the momentum fraction ($`x_1`$ in Fig. 3b) of the parton in the pomeron; a change in the shape of the $`x_1`$-distribution shifts the parton-parton cms with respect to the $`X`$ cms and thereby the momentum distribution of the jets . Comparison of data and the Monte Carlo shows a clear preference for a hard parton distribution . Using a quark or gluon distribution $`xf(x)\sim x(1-x)`$ gives a reasonable description of the observed $`x_F`$-distribution of the jets, although giving too little in the tail at large $`x_F`$. This is more clearly seen if, instead of considering individual jets, one takes both jets in each event and plots the longitudinal momentum of this pair, Fig. 4. The excess at large $`x_F`$ can be described by having 30% of the pomeron structure function in terms of a super-hard component with partons taking the entire pomeron momentum, i.e. $`xf(x)\sim \delta (1-x)`$. The $`\delta `$-function can be seen as a representation of some more physical distribution which is very hard, e.g. $`xf(x)\sim 1/(1-x)`$.
With the UA8 data alone, one cannot distinguish between gluons and quarks in the pomeron. The UA1 experiment has given some evidence for diffractive bottom production . This may be interpreted with a gluon-dominated pomeron such that the $`gg\to b\overline{b}`$ subprocess can be at work, but no firm conclusion can be made given the normalization uncertainty in the model and the experimental errors .
UA8 have recently provided the absolute cross section for diffractive jet production . This shows that, although the Monte Carlo model reproduces the shapes of various distributions, it overestimates the absolute cross section; $`\sigma (data)/\sigma (model)=0.30\pm 0.10`$ or $`0.56\pm 0.19`$ for the model with the pomeron as a gluonic or a $`q\overline{q}`$ state, respectively. This has raised questions concerning the normalization of the pomeron flux and the pomeron structure function, as will be discussed in section 5.
In summary, diffractive hard scattering has been discovered by UA8 and the main features can be interpreted in terms of an exchanged pomeron with a parton structure.
## 4 Rapidity gap events at HERA and the Tevatron
The above model for diffractive hard scattering can be naturally extended to other kinds of particle collisions, $`a+p\to p+X`$, where $`a`$ can be not only any hadron but also a lepton or a photon. Based on the pomeron factorization hypothesis the cross section is $`d\sigma (a+p\to p+X)=f_{IP/p}(x_{IP},t)d\sigma (a+IP\to X)`$. The pomeron flux can be taken as a simple parametrization of data in terms of exponentials as above, or obtained from Regge phenomenology in the form
$$f_{IP/p}(x_{IP},t)=\frac{9\beta _0^2}{4\pi ^2}\left(\frac{1}{x_{IP}}\right)^{2\alpha _{IP}(t)-1}\left[F_1(t)\right]^2$$
(5)
with parameters obtained from data on hadronic inclusive diffractive scattering. Here, $`\beta _0^2=3.24GeV^{-2}`$ is the square of the mentioned effective pomeron-quark coupling and $`F_1(t)=\frac{4m_p^2-At}{(4m_p^2-t)(1-t/B)^2}`$ is a proton form factor with $`m_p`$ the proton mass and parameters $`A=2.8,B=0.7`$. The pomeron trajectory is $`\alpha _{IP}(t)\approx 1.08+0.25t`$.
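A minimal numerical sketch of this flux, restating the parameter values just quoted (with $`m_p=0.938`$ GeV as the only added input), could read:

```python
import math

M_P = 0.938  # GeV, proton mass

def F1(t, A=2.8, B=0.7):
    # Proton form factor of eq. (5); t in GeV^2 with t < 0.
    return (4*M_P**2 - A*t) / ((4*M_P**2 - t) * (1 - t/B)**2)

def pomeron_flux(x_ip, t, beta0_sq=3.24, alpha0=1.08, alpha_prime=0.25):
    # f_{IP/p}(x_IP, t) of eq. (5); note (1/x)**(2a-1) = x**(1-2a).
    alpha_t = alpha0 + alpha_prime * t
    return 9*beta0_sq/(4*math.pi**2) * x_ip**(1 - 2*alpha_t) * F1(t)**2

print(pomeron_flux(0.01, -0.1))
```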
For the hard scattering cross section $`d\sigma (a+IPX)`$ one should use the relevant convolution of parton densities and parton cross sections, e.g. eq. (4) for hadron-pomeron collisions. In order to simulate complete events this formalism has been included in the Monte Carlo program Pompyt based on the Lund Monte Carlo Pythia . In particular, there are options for different pomeron flux factors and parton densities. Moreover, Pompyt also contains pion exchange processes where a pion, with a flux factor and parton densities, replaces the pomeron as an example of other possible Regge exchanges.
### 4.1 Diffractive DIS at HERA
As suggested already in , one should probe the pomeron structure with deep inelastic scattering, e.g. at HERA. The advantage would be to have a clean process with a well understood point-like probe with high resolving power $`Q^2`$. The experimental signature should be clear: a quasi-elastically scattered proton (going down the beam pipe) well separated by a rapidity gap from the remaining hadronic system. The kinematics is then described by the diffractive variables $`x_{IP}`$ (or $`x_p=1-x_{IP}`$) and $`t`$, as above, and the standard DIS variables $`Q^2=-q^2=-(p_e-p_e^{\prime })^2`$ and Bjorken $`x=Q^2/(2P\cdot q)`$ (where $`P,p_e,p_e^{\prime },q`$ are the four-momenta of the initial proton, initial electron, scattered electron and exchanged photon, respectively).
The cross section for diffractive DIS can then be written
$$\frac{d\sigma (ep\to epX)}{dxdQ^2dx_{IP}dt}=\frac{4\pi \alpha ^2}{xQ^4}\left(1-y+\frac{y^2}{2}\right)F_2^D(x,Q^2;x_{IP},t)$$
(6)
where the normal proton structure function $`F_2`$ has been replaced by a corresponding diffractive one, $`F_2^D`$, with $`x_{IP}`$ and $`t`$ specifying the diffractive conditions. Only the dominating electromagnetic interaction is here considered and $`R=\sigma _L/\sigma _T`$ is neglected for simplicity. If pomeron factorization holds, then $`F_2^D`$ can be factorized into a pomeron flux and a pomeron structure function, i.e. $`F_2^D(x,Q^2;x_{IP},t)=f_{IP/p}(x_{IP},t)F_2^{IP}(z,Q^2)`$ where the pomeron structure function $`F_2^{IP}(z,Q^2)=\sum _fe_f^2\left(zq_f(z,Q^2)+z\overline{q}_f(z,Q^2)\right)`$ is given by the densities of (anti)quarks of flavour $`f`$ and with a fraction $`z=x/x_{IP}`$ of the pomeron momentum. Since the photon does not couple directly to gluons, they will only enter indirectly through $`g\to q\overline{q}`$ as described by QCD evolution or the photon-gluon fusion process.
Although diffractive DIS had been predicted in this way , it was a big surprise to many when it was observed first by ZEUS and then by H1 . Since leading proton detectors were not available at that time, it was the large rapidity gap that was the characteristic observable, i.e. no particle or energy depositions in the forward part of the detector as shown in Fig. 5a. Leading protons have later been clearly observed , but the efficiency is low so the dominant diffractive data samples are still defined in terms of rapidity gaps. A simple observable to characterize the effect is $`\eta _{max}`$ giving, in each event, the maximum pseudo-rapidity where an energy deposition is observed. Fig. 5b shows the distribution of this quantity.
Although the bulk of the data with $`\eta _{max}`$ in the forward region is well described by ordinary DIS Monte Carlo events, there is a large excess with a smaller $`\eta _{max}`$ corresponding to the central region or even in the electron hemisphere. This excess is well described by Pompyt as deep inelastic scattering on an exchanged pomeron with a hard quark density. The gap events have the same $`Q^2`$ dependence as normal DIS and are therefore not some higher twist correction. Their overall rate is about 10% of all events, so it is not a rare phenomenon.
In normal DIS, a quark is scattered from the proton leaving a colour charged remnant (a diquark in the simplest case). This gives rise to a colour field (e.g. a string) between the separated colour charges, such that the hadronization gives particles in the whole intermediate phase space region as illustrated in Fig. 6a. The gap events correspond to the scattering on a colour singlet object, Fig. 6b, which gives no colour field between the hard scattering system and the proton remnant system. Therefore, no hadrons are produced in the region between them, i.e. a rapidity gap appears. The size of the gap is basically a kinematic effect. The larger the fraction of the proton beam momentum that is carried by the forward going colour singlet proton remnant system, the smaller the fraction that remains for other particles, which therefore emerge at smaller rapidity. The forward going system $`Y`$ must have a small invariant mass in order to escape undetected in the beam pipe. It is mostly a proton with a large fraction of the beam momentum and only a very small angular deflection.
Since the $`Y`$ system is not observed the $`t`$ variable is not measured, but is usually negligibly small (c.f. the proton form factor above). However, with the invariant definitions
$`x_{IP}`$ $`=`$ $`{\displaystyle \frac{q\cdot (P-p_Y)}{q\cdot P}}={\displaystyle \frac{Q^2+M_X^2-t}{Q^2+W^2-m_p^2}}\approx {\displaystyle \frac{x(Q^2+M_X^2)}{Q^2}}`$ (7)
$`z=\beta `$ $`=`$ $`x/x_{IP}={\displaystyle \frac{-q^2}{2q\cdot (P-p_Y)}}={\displaystyle \frac{Q^2}{Q^2+M_X^2-t}}\approx {\displaystyle \frac{Q^2}{Q^2+M_X^2}}`$ (8)
$`x_{IP}`$ can be reconstructed from the DIS variables and the $`X`$-system. Likewise, $`z`$ (or $`\beta `$) can be measured and corresponds to Bjorken-$`x`$ for DIS on the pomeron and can therefore be interpreted as the momentum fraction of the parton in the pomeron.
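The approximate forms are trivial to evaluate; the kinematic values in the following sketch are hypothetical HERA-like numbers chosen only for illustration:

```python
def diffractive_kinematics(x, q2, m_x2):
    # Eqs. (7)-(8) with t and m_p**2 neglected.
    x_ip = x * (q2 + m_x2) / q2
    beta = q2 / (q2 + m_x2)
    return x_ip, beta

# e.g. x = 1e-4, Q^2 = 10 GeV^2, M_X = 10 GeV
x_ip, beta = diffractive_kinematics(1e-4, 10.0, 100.0)
print(x_ip, beta, x_ip * beta)  # beta = x/x_IP, so the product gives back x
```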
From the measured cross section of rapidity gap events, the diffractive structure function $`F_2^D`$ can be extracted based on eq. (6). Since $`t`$ is not measured it is effectively integrated out, giving the observable $`F_2^{D(3)}(x_{IP},\beta ,Q^2)`$. To a first approximation it was found that the $`x_{IP}`$ dependence factorises and is of the form $`1/x_{IP}^n`$ with $`n=1.19\pm 0.06\pm 0.07`$. This is in basic agreement with the expectation $`f_{IP/p}\propto 1/x_{IP}^{2\alpha _{IP}(t)-1}\approx 1/x_{IP}^{1.16+0.5t}`$ from the pomeron Regge trajectory above.
However, with the increased statistics and kinematic range available in the new data displayed in Fig. 7, deviations from such a universal factorisation are observed. The power of the $`x_{IP}`$-dependence is found to depend on $`\beta `$. One way to interpret this is to introduce a subleading reggeon (IR) exchange with expected trajectory $`\alpha _{IR}(t)\approx 0.55+0.9t`$ and quantum numbers of the $`\rho ,\omega ,a`$ or $`f`$ meson . Fits to the data (Fig. 7) show that although the pomeron still dominates, the meson exchange contribution is important at larger $`x_{IP}`$ and causes $`F_2^D`$ to decrease more slowly (or $`x_{IP}F_2^{D(3)}`$ to even increase). The fit gives the intercepts $`\alpha _{IR}(0)=0.50\pm 0.19`$, in agreement with the expectation, and $`\alpha _{IP}(0)=1.20\pm 0.04`$ which is, however, significantly larger than the 1.08 obtained from soft hadronic cross sections.
There is no evidence for a $`\beta `$ or $`Q^2`$ dependence in these intercepts and one can therefore integrate over $`x_{IP}`$ (using data and the fitted dependence), resulting in the measurement of $`F_2^D(\beta ,Q^2)`$ shown in Fig. 8. Following the above framework, this quantity can be interpreted as the structure function of the exchanged colour singlet object, which is mainly the pomeron. The fact that $`F_2^D`$ is essentially scale independent, i.e. almost constant with $`Q^2`$, shows that the scattering occurs on point charges. The small $`Q^2`$ dependence present is actually compatible with being logarithmic as in normal QCD evolution, although the rise with $`\mathrm{ln}Q^2`$ persists up to large values of $`\beta `$ in contrast to the proton structure function. There is only a weak dependence on $`\beta `$, such that the partons are quite hard, and there is no strong decrease at large momentum fraction of the kind that is characteristic for ordinary hadrons.
These features are in accordance with a substantial gluon component in the structure of the diffractive exchange, as confirmed by a quantitative QCD analysis . Standard next-to-leading order DGLAP evolution gives a good fit of $`F_2^D(\beta ,Q^2)`$ as demonstrated in Fig. 8. The fitted momentum distributions of quarks and gluons in the pomeron are shown in Fig. 9. Clearly, the gluon dominates and carries 80–90% of the pomeron momentum depending on $`Q^2`$. At low $`Q^2`$ the gluon distribution may even be peaked at large momentum fractions, c.f. the superhard component observed by UA8 , but when evolved to larger $`Q^2`$ it then becomes flatter in $`\beta `$.
The general conclusion from these HERA data is therefore that the concept of an exchanged pomeron with a parton density seems appropriate. Moreover, Monte Carlo models, like Pompyt and Rapgap (which is also based on the above pomeron formalism), can give a good description of the observed rapidity gap events.
### 4.2 Diffractive $`W`$ and jets at the Tevatron
Based on the Pompyt model, predictions were also made for diffractive $`W`$ and $`Z`$ production at the Tevatron $`p\overline{p}`$ collider, which provides sufficient energy in the pomeron-proton subsystem. With partons in the pomeron this occurs through the subprocesses $`q\overline{q}\to W`$ and $`gq\to qW`$ as illustrated in Fig. 10. The latter requires an extra QCD vertex $`g\to q\overline{q}`$ and is therefore suppressed by a factor $`\alpha _s`$. Thus, a gluon-dominated pomeron leads to a smaller diffractive $`W`$ cross section than a $`q\overline{q}`$-dominated pomeron. However, in both cases the cross sections were found to be large enough to be observable and the decay products of the $`W`$ ($`Z`$) often emerge in a central region covered by the detectors. Moreover, a measurement of these decay products, ideally muons from $`Z`$ decay, allows a reconstruction of the $`x`$-shape of the partons in the pomeron .
Diffractive $`W`$ production at the Tevatron was recently observed by CDF resulting in a diffractive to non-diffractive $`W`$ production ratio $`R_W=(1.15\pm 0.55)\%`$ . Since leading protons could not be detected, diffraction was defined in terms of a large forward rapidity gap, which in terms of a pomeron model corresponds to $`x_{IP}`$ dominantly in the range 0.01–0.05. The observed $`R_W`$ is much smaller than predicted with a $`q\overline{q}`$ dominated pomeron. Using Pompyt with the standard pomeron flux of eq. (5) and pomeron parton densities obtained from fits to the HERA diffractive DIS data results in $`R_W`$ of 5–6%, i.e. several standard deviations above the measured value!
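The size of this discrepancy can be quantified crudely by treating the quoted uncertainty as Gaussian (an oversimplification adopted only for illustration):

```python
# CDF measurement vs. Pompyt predictions for R_W, in percent.
r_meas, sigma = 1.15, 0.55
for r_pred in (5.0, 6.0):
    print(f"prediction {r_pred}% -> {(r_pred - r_meas)/sigma:.1f} sigma above data")
```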
Diffractive hard scattering has also been observed at the Tevatron in terms of rapidity gap events with two high-$`p_{\perp }`$ jets (dijets) as in UA8. The detailed definitions of gaps and jets differ somewhat between CDF and D0, but the results are similar. The ratio of diffractive to non-diffractive dijet events found at $`\sqrt{s}=1800GeV`$ by CDF is $`R_{jj}=(0.75\pm 0.05\pm 0.09)\%`$ and by D0 $`R_{jj}=(0.76\pm 0.04\pm 0.07)\%`$ . D0 has also obtained the ratio $`R_{jj}=(1.11\pm 0.11\pm 0.20)\%`$ at the lower cms energy $`\sqrt{s}=630GeV`$. These rates are significantly lower than those obtained with the standard pomeron model with parton densities that fit the diffractive HERA data.
The inability to describe the data on hard diffraction from both HERA and the Tevatron with the same pomeron model raises questions on the universality of the model, e.g. concerning the pomeron flux and structure function. This is examined in Fig. 11 in terms of the momentum sum of the partons and the amount of gluons needed to fit the data. The region acceptable to HERA data is compatible with a saturated momentum sum rule, but in disagreement with the internally consistent $`p\overline{p}`$ collider data.
CDF has also very recently observed events with a central dijet system and rapidity gaps on both sides. On one side a high-$`x_F`$ antiproton is actually detected. This can be interpreted as double pomeron exchange (c.f. Fig. 2d), one from each of the quasi-elastically scattered proton and antiproton, where the two pomerons interact to produce the jets. The diffractive hard scattering model then contains a convolution of two pomeron flux factors and two pomeron parton densities with a QCD parton level cross section. The observed ratio of two-gap jet events to single-gap jet events is found by CDF to be $`(0.26\pm 0.05\pm 0.05)\%`$ . An important observation is also that the $`E_{\perp }`$-spectrum of the jets in these two-gap events has the same shape as in single-gap and no-gap events. This hints at the same underlying hard scattering dynamics, which does not change with the soft processes that cause gaps or no-gaps. It is not yet clear whether this feature appears naturally in the pomeron model. However, the double pomeron exchange model, with pomeron flux and parton densities based on diffractive HERA data, seems to overestimate the rate of two-gap jet events .
## 5 Pomeron problems
The inability to describe both HERA and $`p\overline{p}`$ collider data on hard diffraction is a problem for the pomeron model. It shows that the ‘standard’ pomeron flux factor and pomeron parton densities cannot be used universally. A possible cure to this problem has been proposed in terms of a pomeron flux ‘renormalization’ . The flux in eq. (5) is found to give a much larger cross section for inclusive single diffraction than measured at $`p\overline{p}`$ colliders, although it works well for lower energy data. This is due to the increase of $`f_{IP}\sim 1/x_{IP}^{2\alpha _{IP}(t)-1}`$ as the minimum $`x_{IPmin}=M_{Xmin}^2/s`$ gets smaller with increasing energy $`\sqrt{s}`$. To prevent the integral of the pomeron flux from increasing without bound, it is proposed that it should saturate at unity, i.e. one renormalizes the pomeron flux by dividing by its integral whenever the integral is larger than unity. This prescription not only gives the correct inclusive single diffractive cross section at collider energies, but it also makes the HERA and Tevatron data on hard diffraction compatible with the pomeron hard scattering model. The model result for HERA is not affected, but at the higher energy of the Tevatron the pomeron flux is reduced such that the data are essentially reproduced. In another proposal based on an analysis of single diffraction cross sections, the pomeron flux is reduced at small $`x_{IP}`$ through an $`x_{IP}`$- and $`t`$-dependent damping factor. Neither of these two modified pomeron flux factors has a clear theoretical basis.
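A minimal sketch of the renormalization prescription (the integration limits below are our illustrative choices, not prescribed by the model) could look as follows:

```python
from scipy import integrate

def renormalize_flux(flux, x_ip_min, x_ip_max=0.1, t_min=-7.0):
    """Divide the flux by its integral whenever that integral exceeds unity.
    flux(x_ip, t) is any pomeron flux, e.g. eq. (5); limits are illustrative."""
    # dblquad integrates func(t, x_ip) with x_ip outer, t inner.
    norm, _ = integrate.dblquad(lambda t, x: flux(x, t),
                                x_ip_min, x_ip_max,
                                lambda x: t_min, lambda x: 0.0)
    scale = max(norm, 1.0)
    return lambda x_ip, t: flux(x_ip, t) / scale
```

Since the integrated flux grows as $`x_{IPmin}`$ decreases, the rescaling only becomes active at high energies, leaving the lower-energy phenomenology untouched.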
A difference between diffraction in $`ep`$ and $`p\overline{p}`$ is the possibility for coherent pomeron interactions in the latter . In the incoherent interaction only one parton from the pomeron participates and any others are spectators. However, in the pomeron-proton interaction with $`IP=gg`$ both gluons may take part in the hard interaction giving a coherent interaction. For example, in the $`IPp`$ hard scattering subprocess $`gg\to q\overline{q}`$, the second gluon from the pomeron may couple to the gluon from the proton. Such diagrams cancel when summing over all final states for the inclusive hard scattering cross section (the factorization theorem). For gap events, however, the sum is not over all final states and the cancellation fails, leading to factorization breaking and these coherent interactions where the whole pomeron momentum goes into the hard scattering system. With momentum fraction $`x`$ of the first gluon and $`1-x`$ of the second, a factor $`1/(1-x)`$ arises from the propagator of the second, soft gluon in the pomeron. This may motivate a super-hard component in the pomeron with effective structure function $`1/(1-x)\sim \delta (1-x)`$ as in the UA8 data discussed above. This coherent interaction cannot occur in the same way in DIS since the pomeron interacts with a particle without coloured constituents. This difference between $`ep`$ and $`p\overline{p}`$ means that there should be no complete universality of parton densities in the pomeron.
Although modified pomeron models may describe the rapidity gap events reasonably well, there is no satisfactory understanding of the pomeron and its interaction mechanisms. On the contrary, there are conceptual and theoretical problems with this framework. The pomeron is not a real state, but can only be a virtual exchanged spacelike object. The concept of a structure function is then not well defined and, in particular, it is unclear whether a momentum sum rule should apply. In fact, the factorisation into a pomeron flux and a pomeron structure function cannot be uniquely defined since only the product is an observable quantity .
It may be incorrect to consider the pomeron as being ‘emitted’ by the proton, having QCD evolution as a separate entity and being ‘decoupled’ from the proton during and after the hard scattering. Since the pomeron-proton interaction is soft, its time scale is long compared to the short space-time scale of the hard interaction. It may therefore be natural to expect soft interactions between the pomeron system and the proton both before and after the snapshot of the high-$`Q^2`$ probe (as illustrated in Fig. 1c). The pomeron can then not be considered as decoupled from the proton and, in particular, is not a separate part of the QCD evolution in the proton.
Large efforts have been made to understand the pomeron as a two-gluon system or a gluon ladder in pQCD. By going to the soft limit one may then hope to gain understanding of non-pQCD. Perhaps one could establish a connection between pQCD in the small-$`x`$ limit and Regge phenomenology. More explicitly, attempts have been made to connect the Regge pomeron with gluon ladders in pQCD. For example, the Regge triple pomeron diagram for single diffractive scattering has been connected with the gluon ladder fan diagram in pQCD to estimate the pomeron gluon density . The fan diagrams are described by the GLR equation which gives a novel QCD evolution with non-linear effects due to gluon recombination $`gg\to g`$. This reduces the gluon density at small-$`x`$ (screening); an effect that could be substantial in the pomeron .
Diffractive DIS has been considered in terms of models based on two-gluon exchange in pQCD, see e.g. . The basic idea is to take two gluons in a colour singlet state from the proton and couple them to the $`q\overline{q}`$ system from the virtual photon. With higher orders included the diagrams and calculations become quite involved. Nevertheless, these formalisms can be made to describe the main features of the diffractive DIS data. Although this illustrates the possibilities of the pQCD approach to the pomeron, one is still forced to include non-perturbative modelling to connect the two gluons in a soft vertex to the proton. Thus, even if one can gain understanding by working as far as possible in pQCD, one cannot escape the fundamental problem of understanding non-pQCD.
## 6 Non-perturbative QCD and soft colour interactions
The main problem in understanding diffractive interactions is related to our poor theoretical knowledge of non-pQCD. The Regge approach with a pomeron can apparently be made to work phenomenologically, but has problems as discussed above. Therefore, new models have recently been constructed without using the pomeron concept or Regge phenomenology. Instead, they are based on new ideas on soft colour interactions that give colour rearrangements which affect the hadronization and thereby the final state. These models were first developed for diffractive DIS, which is a simpler and cleaner process than diffraction in $`p\overline{p}`$ collisions.
One model to understand diffractive DIS at HERA exploits the dominance of the photon-gluon fusion process $`\gamma ^{*}g\to q\overline{q}`$ at small-$`x`$. The $`q\overline{q}`$ pair is produced in a colour octet state, but it is here assumed that soft interactions with the proton colour field randomize the colour. The $`q\overline{q}`$ pair would then be in an octet or singlet state with probability 8/9 and 1/9, respectively. When in a singlet state, the $`q\overline{q}`$ pair hadronizes independently of the proton remnant, which should result in a lack of particles in between. From the photon-gluon fusion matrix element one then obtains the diffractive structure function
$$F_2^D(x,Q^2,\xi )\approx \frac{1}{9}\frac{\alpha _s}{2\pi }\sum _qe_q^2g(\xi )\beta \{[\beta ^2+(1-\beta )^2]\mathrm{ln}\frac{Q^2}{m_g^2\beta ^2}-2+6\beta (1-\beta )\}$$
(9)
where 1/9 is the colour singlet probability. The next factor, including the density $`g(\xi )`$ of gluons with momentum fraction $`\xi `$, corresponds to a pomeron flux factor. The $`\beta `$-dependent factor corresponds to the pomeron structure function $`F_2^D(\beta ,Q^2)`$ above, with $`\beta =x/\xi `$ as usual. Thus, there is an effective factorisation which is similar to pomeron models. The gluon mass parameter $`m_g`$ regulates the divergence in the QCD matrix element and is chosen so as to saturate the DIS cross section at small-$`x`$ with the photon-gluon fusion process. The model reproduces the main features of the gap events, such as their overall rate and $`Q^2`$ dependence. However, it is simple and does not take into account higher order parton emissions and hadronization. Therefore, it cannot give as detailed predictions as the Monte Carlo models above.
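For orientation, eq. (9) is straightforward to evaluate numerically; the values of $`\alpha _s`$, $`m_g`$ and the flavour charge sum in this sketch are our own illustrative assumptions:

```python
import math

def F2D_model(beta, q2, xi, gluon, alpha_s=0.25, m_g=0.5, sum_eq2=6.0/9.0):
    # Eq. (9); sum_eq2 = sum of e_q**2 over u, d, s as an assumed choice.
    log = math.log(q2 / (m_g**2 * beta**2))
    shape = beta * ((beta**2 + (1 - beta)**2) * log - 2 + 6*beta*(1 - beta))
    return (1.0/9.0) * alpha_s/(2*math.pi) * sum_eq2 * gluon(xi) * shape

# e.g. with a proton-like gluon density x g(x) = 6 (1-x)**5
print(F2D_model(beta=0.4, q2=10.0, xi=0.005, gluon=lambda s: 6*(1 - s)**5/s))
```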
In the same general spirit another model was developed independently using a Monte Carlo event generator approach . The starting point is the normal DIS parton interactions, with pQCD corrections in terms of matrix elements and parton showers in the initial and final state. The basic new idea is that there may be additional soft colour interactions (SCI) between the partons at a scale below the cut-off $`Q_0^2`$ for the perturbative treatment. Obviously, interactions will not disappear below this cut-off; the question is rather how to describe them properly. The proposed SCI mechanism can be viewed as the perturbatively produced quarks and gluons interacting softly with the colour medium of the proton as they propagate through it. This should be a natural part of the process in which ‘bare’ perturbative partons are ‘dressed’ into non-perturbative ones and in which the confining colour flux tube between them is formed. These soft interactions cannot change the momenta of the partons significantly, but may change their colour and thereby affect the colour structure of the event. This corresponds to a modified topology of the string in the Lund model approach, as illustrated in Fig. 12, such that another final state will arise after hadronization.
Lacking a proper understanding of non-perturbative QCD processes, a simple model was constructed to describe and simulate soft colour interactions. The hard parton level interactions are treated in the normal way using the Lepto Monte Carlo based on the standard electroweak cross section together with pQCD matrix elements and parton showers. The perturbative parts of the model are kept unchanged, since these hard processes cannot be altered by softer non-pQCD ones. Thus, the set of partons, including the quarks in the proton remnant, are generated as in conventional DIS. The SCI model is added by giving each pair of these colour charged partons the possibility to make a soft interaction, changing only the colour and not the momentum. This may be viewed as soft non-perturbative gluon exchange. Being a non-perturbative process, the exchange probability cannot be calculated and is therefore described by a phenomenological parameter $`R`$. The number of soft exchanges will vary event-by-event and change the colour topology such that, in some cases, colour singlet subsystems arise separated in rapidity, as shown in Fig. 12b,c. Here, (b) can be seen as a switch of anticolour between the antiquark and the diquark and (c) as a switch of colour between the two quarks. Colour exchange between the perturbatively produced partons and the partons in the proton remnant (representing the colour field of the proton) is of particular importance for the gap formation.
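A deliberately schematic toy version of this colour-exchange step (the real model in Lepto tracks complete string topologies, which this sketch does not) can be written as:

```python
import random

def soft_colour_interactions(partons, R=0.3, rng=random.Random(1)):
    # Each pair of colour-charged partons may swap colour, never momentum,
    # with probability R; the colour labels here are purely schematic.
    for i in range(len(partons)):
        for j in range(i + 1, len(partons)):
            if rng.random() < R:
                partons[i]["colour"], partons[j]["colour"] = (
                    partons[j]["colour"], partons[i]["colour"])
    return partons

# struck quark plus proton-remnant diquark
event = [{"name": "q", "colour": 1}, {"name": "qq", "colour": -1}]
print(soft_colour_interactions(event))
```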
Both gap and no-gap events arise in this model. The rate and main properties of the gap events are qualitatively reproduced , e.g. the $`\eta _{max}`$ distribution in Fig. 5b and the diffractive structure function $`F_2^{D(3)}`$. The gap rate depends on the parameter $`R`$, but the dependence is not strong, giving a stable model with $`R\approx 0.2`$–0.5. This colour exchange probability is the only new parameter in the model. Other parameters belong to the conventional DIS model and have their usual values. The rate and size of gaps do, however, depend on the amount of parton emission. In particular, more initial state parton shower emissions will tend to populate the forward rapidity region and prevent gap formation .
The gap events show properties characteristic of diffraction as demonstrated in Fig. 13. The exponential $`t`$-dependence arises in the model from the gaussian intrinsic transverse momentum (Fermi motion) of the interacting parton which is balanced by the proton remnant system, i.e. $`\mathrm{exp}(-k_{\perp }^2/\sigma _i^2)`$ with $`\sigma _i\approx 0.4GeV`$ and $`t\approx -k_{\perp }^2`$. The forward system (Fig. 13b) is dominantly a single proton, as in diffractive scattering, but there is also a tail corresponding to proton dissociation. The longitudinal momentum spectrum of protons in Fig. 13c shows a clear peak at large fractional momentum $`x_L`$. Defining events having a leading proton with $`x_L>0.95`$ as ‘diffractive’, one observes in Fig. 13b,c that most of these events fulfill the gap requirement.
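The quoted width directly fixes the slope of the exponential $`t`$-distribution, since $`d\sigma /dt\propto \mathrm{exp}(-|t|/\sigma _i^2)`$ implies a slope $`b=1/\sigma _i^2`$:

```python
import math

sigma_i = 0.4                  # GeV, width of the intrinsic k_T Gaussian
b_slope = 1.0 / sigma_i**2     # dsigma/dt ~ exp(-b |t|)
print(f"b = {b_slope:.2f} GeV^-2, "
      f"suppression at |t| = 0.5 GeV^2: {math.exp(-b_slope*0.5):.3f}")
```

giving $`b\approx 6GeV^{-2}`$, of the order typical for soft diffractive processes.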
One may ask whether this kind of soft colour interaction model is essentially a model for the pomeron. This is not the case as long as no pomeron or Regge dynamics is introduced. The behaviour of the data on $`F_2^D(\beta ,Q^2)`$ in Fig. 8 is in the SCI model understood as normal pQCD evolution in the proton. The rise with $`lnQ^2`$ also at larger $`\beta `$ is simply the normal behaviour at the small momentum fraction $`x=\beta x_{IP}`$ of the parton in the proton. Here, $`x_{IP}`$ is only an extra variable related to the gap size or $`M_X`$ (eq. (7)) which does not require a pomeron interpretation. The flat $`\beta `$-dependence (Fig. 8b) of $`x_{IP}F_2^D=\frac{x}{\beta }F_2^D`$ is due to the factor $`x`$ compensating the well-known increase at small-$`x`$ of the proton structure function $`F_2`$.
This Monte Carlo model gives a general description of DIS, with and without gaps. In fact, it can give a fair account of such ‘orthogonal’ observables as rapidity gaps and the large forward $`E_{\perp }`$ flow . Diffractive events are in this model defined through the topology of the final state, in terms of rapidity gaps or leading protons just as in experiments. There is no particular theoretical mechanism or description in a separate model, like pomeron exchange, that defines what is labelled as diffraction. This provides a smooth transition between diffractive gap events and non-diffractive no-gap events . In addition, leading neutrons are also obtained in fair agreement with recent experimental measurements . In a conventional Regge-based approach, pomeron exchange would be used to get diffraction, pion exchange added to get leading neutrons and still other exchanges added to get a smooth transition to normal DIS. The SCI model indicates that a simpler theoretical description can be obtained.
The same SCI model can also be applied to $`p\overline{p}`$ collisions, by introducing it in the Pythia Monte Carlo . This leads to gap events in hard scattering interactions as illustrated for $`W`$ production in Fig. 14. It is amazing that the same SCI model, normalized to the diffractive HERA data, reproduces the above discussed rates of diffractive $`W`$’s and diffractive jet production observed at the Tevatron .
The soft colour interactions do not only lead to rapidity gaps, but also to other striking effects. They have been found to reproduce the observed rate of high-$`p_{\perp }`$ charmonium and bottomonium at the Tevatron, which are factors of 10 larger than predictions based on conventional pQCD. The SCI model included in Pythia accomplishes this through the standard pQCD parton level processes of heavy quark pair production. The most important contribution comes from a high-$`p_{\perp }`$ gluon which splits into a $`Q\overline{Q}`$ pair, e.g. the next-to-leading process $`gg\to gQ\overline{Q}`$, where the colour octet charge of the $`Q\overline{Q}`$ can be turned into a singlet through SCI. The $`Q\overline{Q}`$ pairs with mass below the threshold for open heavy flavour production are then mapped onto the various quarkonium states using spin statistics. The results are in good agreement with the data, both in terms of absolute normalization and the shapes. Also details like the rates of different quarkonium states and the fraction of $`J/\psi `$ produced directly or from decays are reproduced quite well.
This simple model for soft colour interactions is quite successful in describing a wide range of data, both for diffractive and non-diffractive events. Of course, it is only a very simple model and far from a theory, but it may lead to a proper description. A very recent step in this direction is the use of an area law for string dynamics .
The SCI model has similarities with other attempts to understand soft dynamics. Soft interactions of a colour charge moving through a colour medium have been considered and argued to give rise to large $`K`$-factors in Drell-Yan processes and synchrotron radiation of soft photons . A semi-classical approach to describe the interaction of a $`q\overline{q}`$ pair with a background colour field of a proton has been developed into a model for diffraction in DIS . The $`q\overline{q}`$, which is here a fluctuation of the exchanged virtual photon, can emerge in a colour singlet state after the interaction with the proton such that a rapidity gap can arise. This provides a very interesting theoretical framework giving results in basic agreement with data, although one cannot make as detailed comparisons as with a Monte Carlo model.
Other attempts to gain understanding through phenomenological models have also been made in the same general spirit as the SCI model. The colour evaporation model can reproduce rapidity gap data and charmonium production with fitted parameters to regulate the probability of forming colour singlet systems. Changes of colour string topologies have also been investigated in a different context, namely $`e^+e^-\to W^+W^-\to q_1\overline{q}_2q_3\overline{q}_4`$. This gives two strings that may interact and cause colour reconnections, resulting in a different string topology affecting the $`W`$ mass reconstruction and Bose-Einstein effects.
In conclusion, there has been an increased interest in recent years to explore non-pQCD through various theoretical attempts and phenomenological soft interaction models.
## 7 Rapidity gaps between jets
The diffractive events discussed so far always had a rapidity gap adjacent to a leading proton or small mass system. The momentum transfer between the initial proton and this very forward system is always very small (exponential $`t`$-distribution) as characteristic of soft processes. This applies whether the high-mass $`X`$-system contains hard scattering or not. In $`p\overline{p}`$ collisions at the Tevatron one has discovered a new kind of rapidity gap, namely where the gap is in the central region and between two jets with high $`p_{\perp }`$, i.e. ‘jet-gap-jet’ events.
In a sample of $`p\overline{p}`$ events at $`\sqrt{s}=1800GeV`$ having two jets with transverse energy $`E_{\perp }^{jet}>20GeV`$, pseudorapidity $`1.8<|\eta _{jet}|<3.5`$ and $`\eta _{jet1}\cdot \eta _{jet2}<0`$, CDF finds that a fraction $`R_{jgj}=(1.13\pm 0.12\pm 0.11)\%`$ has a rapidity gap within $`|\eta |<1`$ between the jets. At $`\sqrt{s}=630GeV`$ the CDF result is $`R_{jgj}=(2.7\pm 0.7\pm 0.6)\%`$ with $`E_{\perp }^{jet}>8GeV`$, which corresponds to approximately the same momentum fraction $`x`$ of the interacting partons at the two cms energies. D0 finds very similar results in terms of ‘colour singlet fractions’ $`f_s=(0.94\pm 0.04\pm 0.12)\%`$ for $`E_{\perp }^{jet}>30GeV`$ at $`\sqrt{s}=1800GeV`$ and $`f_s=(1.85\pm 0.09\pm 0.37)\%`$ for $`E_{\perp }^{jet}>12GeV`$ at $`\sqrt{s}=630GeV`$. Although the CDF and D0 event selections and analyses differ, the resulting relative rates of jet-gap-jet events are quite similar. They are definitely larger at the lower energy. In D0 the ratios tend to increase with increasing $`E_{\perp }^{jet}`$ and rapidity separation between the jets, but the CDF data shows no significant such effect.
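The statement about the momentum fractions follows from the rough estimate $`x\sim 2E_{\perp }/\sqrt{s}`$ for central dijets (a crude estimate we use only to make the point):

```python
# x ~ 2 E_T / sqrt(s) for central dijets (rough estimate), CDF thresholds
for sqrt_s, et_min in [(1800.0, 20.0), (630.0, 8.0)]:
    print(f"sqrt(s) = {sqrt_s:6.0f} GeV, E_T > {et_min:4.1f} GeV"
          f" -> x above ~{2*et_min/sqrt_s:.3f}")
```

which gives comparable thresholds of roughly 0.02 at both energies; the same holds for the D0 cuts.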
The jet-gap-jet events can be interpreted in terms of colour singlet exchange. However, the momentum transfer $`|t|\sim E_{\perp jet}^2>100GeV^2`$ is very large, in contrast to the small $`t`$ in ordinary diffraction. An interpretation in terms of the Regge pomeron is therefore not possible, but attempts have been made using pQCD models of two-gluon exchange. Such models seem at first to give energy and $`E_{\perp }^{jet}`$ dependences that are not consistent with the data, but recent developments indicate that this need not be the case . The salient features of the data can, on the other hand, be interpreted in terms of the colour evaporation model . A problem with both these approaches is, however, that they do not take proper account of higher order pQCD parton emissions, multiple parton-parton scattering and hadronization. These are well known problems for the understanding of the ‘underlying event’ in hadron-hadron collisions and must be investigated with detailed Monte Carlo models. For example, the perturbative radiation in a high-$`p_{\perp }`$ scattering must be included since it cannot be screened by soft interactions. The proposed models attempt to describe all these effects through a ‘gap survival probability’ . However, a real understanding of gaps between jets is still lacking.
## 8 Conclusions
Diffractive hard scattering has in recent years been established as a field of its own with many developments in both theory and experiment. Rapidity gap events have been observed with various hard scattering processes: high-$`p_{\perp }`$ jet and $`W`$ production, and deep inelastic scattering.
The model with a pomeron having a parton structure is quite successful in describing data, in particular for diffractive DIS at HERA where parton densities in the pomeron have been extracted. However, the pomeron model has some problems. The pomeron flux and/or the pomeron parton densities are not universal to all kinds of interactions, or they are more complicated with, e.g., a flux renormalization. Even if such modified pomeron models can be made to describe data both from $`ep`$ and $`p\overline{p}`$, there are conceptual problems with the pomeron. In particular, it is doubtful whether the pomeron can be viewed as a separate entity which is decoupled from the proton during the long space-time scale of the soft interaction.
The general problem is soft interactions in non-perturbative QCD. Perhaps Regge theory is the proper soft limit of QCD, but there may also exist more fruitful roads towards a theory for soft interactions. This has generated an increased interest in exploring new theoretical approaches and phenomenological models.
A new trend is to consider the interactions of partons with a colour background field. The hard pQCD processes should then be treated as usual, but soft interactions are added which change the colour topology resulting in a different final state after hadronization. In the Monte Carlo model for soft colour interactions this gives a unified description with a smooth transition between diffractive and non-diffractive events. The different event classes can then be defined as in experiments, e.g. in terms of rapidity gaps or leading protons. This model and others in a similar general spirit can describe the salient features of many different kinds of experimental data.
Nevertheless, there are many unsolved problems that remain challenging. In particular, the events with a rapidity gap between two high-$`p_{\perp }`$ jets are poorly understood. Progress in the field of diffractive hard scattering will contribute to the ultimate goal: to understand non-perturbative QCD.
Acknowledgments: I am grateful to Tom Ferbel and all the participants for a most enjoyable school.
Figure 1: (a) The conduction band profile of two semiconductor quantum dots with an intervening tunnel barrier. All subbands are aligned in energy to allow resonant tunneling of electrons between the two dots. The only difference in the material of the two dots is in their elastic constants. (b) The experimental set-up.
The Double Quantum Dot Feline Cousin of Schrödinger’s Cat: An Experimental Testbed for a Discourse on Quantum Measurement Dichotomies
S. Bandyopadhyay (corresponding author; e-mail: bandy@quantum1.unl.edu)
Department of Electrical Engineering, University of Nebraska, Lincoln, Nebraska 68588-0511, USA
## Abstract
Quantum measurement theory is a perplexing discipline fraught with paradoxes and dichotomies. Here we discuss a gedanken experiment that uses a popular testbed - namely, a coupled double quantum dot system - to revisit intriguing questions about the collapse of wavefunctions, irreversibility, objective reality and the actualization of a measurement outcome.
Quantum measurement theory is a subdiscipline replete with many subtleties of quantum mechanics. Its basic underpinning can be summarized by a fundamental yet profound question: when and how does a pure state, descriptive of a quantum system entangled with a measuring apparatus (also a quantum system), evolve into a mixed state that results in distinguishable outcomes of the measurement? Since in standard quantum mechanics no unitary time evolution can cause a pure state to evolve into a mixed state, there is essentially no cookbook “quantum recipe” to forge distinguishable outcomes in quantum measurement.
A number of formalisms that augment the standard mathematical framework of quantum mechanics provide a dynamical description of the measurement process in terms of an actual transition of a pure state into a mixed state. This has been termed “collapse of a wave function”. However, even if we accept the augmented mathematical framework, some mysteries still remain. How does the collapse occur? Is it a discrete event in time or is it a continuous process? Is the collapse observer-dependent (i.e. it happens only when an observer decides to look at the outcome of a quantum measurement) or does the outcome materialize at some time independent of the observer? In this short communication, we revisit these issues in the context of a popular quantum system that illustrates many of the subtleties in quantum measurement theory.
Consider a double quantum dot system coupled by a translucent tunnel barrier. The conduction band diagram is shown in Fig. 1(a). The two quantum dot materials are identical in all respects except in their elastic constants. That is, electrons cannot distinguish between them, but phonons can.
An electron is introduced into the ground state of the system and exists in a coherent superposition of two states $`|1>`$ and $`|2>`$
$$\psi =\frac{1}{\sqrt{2}}(|1>+|2>)$$
(1)
where $`|1>`$ is a semi-localized wave function in the left dot and $`|2>`$ is a semi-localized wave function in the right dot. A weakly coupled point detector in the vicinity of one of the dots can tell whether that dot is occupied by the electron or the other one is. This experimentally realizable system has been studied in the context of the quantum measurement problem by a number of authors recently.
We now summarize three different viewpoints regarding the quantum measurement problem. The orthodox viewpoint associated with the Copenhagen interpretation is epitomized by von Neumann: the wave function collapses when an observer chooses to look at the detector and gain knowledge about where the electron is . This is an observer-dependent reality and has been much discussed in the context of the Schrödinger cat paradox. A different viewpoint espoused by a number of researchers is predicated on objective reality. It can be briefly stated as follows: once a measurement outcome is actualized, it remains “out there” forever to be inspected by an observer at any subsequent time without changing the outcome. The outcome does not depend on when, or if at all, the observer inspects it, and does not change once actualized. Home and Chattopadhyay have suggested an experiment involving UV-exposed DNA molecules to empirically determine at what instant an outcome is actualized and the result recorded in a stable and discernible form for perpetuity. A third viewpoint claims that there may be no such precise instant. The pure state may gradually evolve towards a mixed state and concomitantly decoherence begins to set in, but the system may never quite completely decohere in a finite time (we define complete decoherence as the state in which the off-diagonal terms of the 2$`\times `$2 density matrix associated with Equation (1) vanish). The off-diagonal terms may decay with time owing to the interaction with the detector (and this may slow down the wiederkehr quantum oscillation between the states $`|1>`$ and $`|2>`$ - the so-called quantum Zeno effect) but the off-diagonal terms need not ever vanish completely. This has been termed a “continuous collapse”. Korotkov claims that continuous measurement need not cause any decoherence or collapse (i.e. the off-diagonal terms need not decay at all because of the interaction with the detector) if continuous knowledge of the measurement result at all stages of detection is used to faithfully reconstruct the pure state. These three viewpoints are quite disparate and cannot be reconciled easily.
We suggest a simple gedanken experiment to resolve some of these conflicting viewpoints. Consider the situation when we have two independent detectors capable of detecting which dot is occupied by the electron in Fig. 1. The detectors are independent in the sense that they are located vast distances apart and initially there is no coupling between them. One detector is the weakly coupled point detector (see Fig. 1b) in the vicinity of a dot capable of fairly non-invasive measurement which causes at most gradual collapse à la Gurvitz. The other detector is a phonon detector located far away. Suppose that when the electron is in the right dot it emits a zero energy acoustic phonon which has a finite wave vector and hence a finite momentum. It also has a finite group velocity. Such phonons do not typically exist in bulk materials, but exist in quantum confined structures like wires and dots. The emitted phonon has different wave vectors depending on whether the emission took place in the left dot or the right dot because elastic constants (and hence the phonon dispersion relations) in the two dots are different. When the phonon arrives at the detector, it is absorbed by an electron and by measuring the momentum imparted to the electron (or equivalently the associated current), one can tell whether the phonon came from the left dot or the right dot. Thus, monitoring the current in the phonon detector will constitute a “measurement”. Let us say that the phonon was emitted at time $`t`$ = 0 and it arrives at the phonon detector at time $`t`$ = $`t_1`$. The detector finds that the phonon came from the right dot. (It may bother the reader that Heisenberg’s Uncertainty Principle is being violated in this thought experiment: if the phonon has precisely zero energy, how can we say that it is emitted at exactly time $`t`$ = 0? The answer is that at time $`t`$ = 0, we are not measuring the energy. If we ever wanted to measure the phonon’s energy, we could take forever. If indeed Heisenberg’s Uncertainty Principle were relevant here, then all elastic collisions (e.g. electron-impurity collisions) would take forever. Yet we can calculate an effective scattering time for an electron-impurity collision from Fermi’s Golden Rule.)
If the viewpoint of objective reality is correct, then the actualization of the outcome took place at time $`t`$ = 0. Thereafter, the electron will be always found in the right dot. We can empirically pinpoint this instant at a later time $`t`$ $`>`$ 0 (actually at $`t`$ $`\ge `$ $`t_1`$) since we can determine $`t_1`$, the time of flight of the phonon between the dot and the phonon detector. We simply have to know the distance between the dot and the detector and the phonon group velocity to know $`t_1`$. Thus when the phonon detector registers the phonon, we will know that the actualization took place $`t_1`$ units of time prior to the registration event. Additionally, if we know the time $`t`$ = -$`t_2`$ when the electron was injected into the double dot system, then we can find out how long thereafter the actualization of the outcome took place (this time is simply $`t_2`$). This is similar to what Home and Chattopadhyay had proposed to achieve in their UV-exposed DNA system .
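The inference rests only on a time-of-flight estimate; both numbers in the following sketch (dot-detector separation and acoustic group velocity) are illustrative assumptions:

```python
# t1 = d / v_g: how long before phonon registration the outcome would
# have been actualized in the observer-independent picture.
d   = 1.0e-3   # m, assumed dot-to-phonon-detector separation
v_g = 5.0e3    # m/s, assumed acoustic phonon group velocity
t1  = d / v_g
print(f"t1 = {t1:.1e} s")  # -> 2.0e-07 s
```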
We now come to the central issue. Between the time $`t`$ = 0 and $`t`$ = $`t_1`$ (i.e. while the phonon is in flight), the observer (phonon detector) is still ignorant of the outcome, but the actualization of the measurement has supposedly already taken place. During this critical time period, the weakly coupled point detector tries to continuously determine which dot is occupied. If the observer-independent viewpoint is correct, then the electron will be always found in the right dot. But, if the observer-dependent viewpoint is correct , then the Schrödinger cat is in suspended animation between $`t`$ = 0 and $`t`$ = $`t_1`$ since the observer (phonon detector) has not registered any phonon yet. Consequently, the almost non-invasive point detector (which, acting alone, takes a very long time to destroy the superposition) should have a non-zero probability of finding the electron in the left dot. To ensure that these are the only two possible scenarios, we will allow the maximum latitude. For instance, we will assume: (i) the quantum oscillation period between the two dots (wiederkehr) is much smaller than the time of flight $`t_1`$ and the Zeno effect is negligible because of the weak coupling with the non-invasive point detector, (ii) the emission of zero energy phonon does not alter the electron’s energy and hence does not subsequently disallow resonant tunneling between the quantum dots, and (iii) the remote phonon detector is unaware of the set-up before time $`t`$ = $`t_1`$ and hence cannot influence events before time $`t`$ = $`t_1`$ (causality). Thus, if the point detector ever finds the electron in the left dot between $`t`$ = 0 and $`t`$ = $`t_1`$, the objective reality (observer-independent) viewpoint will be suspect. In this pathological example, the difference between the observer-dependent and observer-independent viewpoints can be simply stated thus: in the first viewpoint, the collapse took place at $`t`$ = $`t_1`$ and in the second viewpoint, it took place at $`t`$ = 0. As long as any non-invasive detector in the timeframe $`t`$ = 0 till $`t`$ = $`t_1`$ finds the electron in the left dot and the phonon detector at time $`t_1`$ finds the electron to have emitted the phonon in the right dot, we will know that the “collapse” did not take place at $`t`$ = 0 which would then contradict the observer independent viewpoint. We will then be forced to admit that perhaps collapse ultimately takes place in the sensory perception of the observer . This is currently a contentious topic.
An interesting question is whether the phonon emission is a collapse event. There is no energy dissipation involved in emitting a zero-energy phonon, but energy dissipation is not necessary for collapse since elastic interaction of an electron with a magnetic impurity that causes a change in the internal degree of freedom of the scatterer (say, a spin flip) constitutes effective collapse. “Creation” of a phonon is certainly changing its internal degrees of freedom in a major way and therefore should be viewed as a collapse event within the framework of standard models.
But what if the point detector finds the electron in the left dot after time $`t`$ = $`t_1`$, when the phonon detector has already determined that the electron collapsed in the right dot? This would make standard collapse models suspect, since we must then admit that the phonon emission did not cause a collapse. Complete collapse is an irreversible event (equivalent to saying that the Zeno time is infinite). However, the third viewpoint of Gurvitz guarantees that the electron will ultimately be delocalized (and hence found in the left dot with a non-zero probability) if we make a continuous measurement with the point detector. In contrast, if frequent repeated measurements are made, then the Zeno effect guarantees that the opposite will happen; the electron will become more localized in one dot as the frequency of observation is increased. Thus, there is an essential dichotomy when one considers the fact that a continuous measurement is really the ultimate limit of frequent repeated measurements and yet they make opposite predictions. It is not clear how this dichotomy will ultimately be resolved.
In this communication, we have proposed a gedanken experiment to resolve some of the dichotomies between the myriad viewpoints permeating quantum measurement theory. Experiments such as the one proposed here will soon be within the reach of modern technology. Hopefully, they will shed new light on this fascinating topic.
no-problem/9912/hep-th9912166.html | ar5iv | text | # Puncture of gravitating domain walls
## I Introduction
Hybrid topological defects can occur in a variety of physical scenarios where multiple phase transitions are allowed to occur simultaneously. Typically, one may envision lower dimensional defects ending on higher dimensional defects (so-called ‘Dirichlet’ topological defects ), or conversely higher dimensional defects may end on lower dimensional defects . In this paper, we are chiefly concerned with the latter situation, where domain walls may have string boundary components.
In the first papers to study walls ending on strings , the authors considered an explicit symmetry breaking pattern whereby the unifying gauge symmetry $`\text{Spin}(10)`$ is broken to $`SU(3)\times SU(2)\times U(1)`$ by way of the group $`SU(4)\times SU(2)\times SU(2)`$. The first symmetry breaking pattern gives rise to the existence of $`𝐙_2`$ strings, which may form boundaries for the domain walls that are produced by the second symmetry breaking.
In fact, this basic picture will hold in any scenario where you have a series of phase transitions of the form
$$𝒢\stackrel{X_1}{\longrightarrow }ℋ\stackrel{X_2}{\longrightarrow }I$$
(1)
where $`\pi _0(𝒢)=\pi _1(𝒢)=I`$ and $`\pi _0(ℋ)\ne I`$ (here $`I`$ denotes the identity group). Since the sequence (1) is exact it follows that
$`\pi _1(𝒢/ℋ)=\pi _0(ℋ)`$
That is to say, the first symmetry breaking gives rise to strings. The second phase transition then gives rise to domain walls. More explicitly, if we consider the behaviour of the field $`X_2`$ as we move around a string, we see that $`X_2`$ must have a discontinuity since the field is not invariant under the action of $`ℋ`$. In other words, we have to ‘add in’ domain walls to account for this discontinuity.
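To spell out this step (standard homotopy theory, which we add here for completeness), the breaking corresponds to a fibration $`ℋ\to 𝒢\to 𝒢/ℋ`$, whose long exact homotopy sequence contains the segment
$$\mathrm{}\to \pi _1(𝒢)\to \pi _1(𝒢/ℋ)\to \pi _0(ℋ)\to \pi _0(𝒢)\to \mathrm{}$$
Since $`\pi _1(𝒢)=\pi _0(𝒢)=I`$ at both ends, exactness forces the middle map to be an isomorphism, which is precisely the relation quoted above.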
It follows that domain walls in these multiple phase transition scenarios are not topologically stable defects. Instead, such walls are unstable to a quantum mechanical decay process, whereby a closed string loop boundary component appears on the worldvolume of the domain wall, and then expands. To calculate the probability, $`P`$, that this process occurs, one typically invokes the semiclassical approximation
$`P\simeq Ae^{-(S_E-S_B)}`$
where $`S_E`$ is the Euclidean action of the instanton of the post-tunneling configuration (a domain wall with a hole in it), $`S_B`$ is the Euclidean action of the background instanton (a domain wall without a hole in it), and $`A`$ is a prefactor which is calculated by considering fluctuations about the instanton. Typically, one ignores the prefactor and focuses on the action terms (since these terms lead to exponential suppression, they typically dominate).
The trajectory of a virtual string loop moving in Euclidean space is a two-sphere of radius $`R`$, where $`R`$ is the radius at which the string nucleates. The Euclidean section of a planar domain wall is (topologically) still just $`𝐑^3`$. Thus, the instanton for the final state is $`𝐑^3`$ with a ball of radius $`R`$ removed, and the instanton for the initial state is $`𝐑^3`$. It follows that the difference between the two actions must contain a boundary term proportional to the area of the two-sphere, and a bulk term proportional to the volume of the removed region; indeed, one obtains
$$S_E-S_B=4\pi R^2\mu -\frac{4\pi }{3}R^3\sigma $$
(2)
where $`\mu `$ is the string tension and $`\sigma `$ is the energy density of the domain wall. For typical symmetry breaking scales associated with strings and walls, one finds that $`S_E-S_B\gg 1`$, and so this decay process is usually suppressed .
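As a short worked step (our addition, though the result is standard), extremizing (2) with respect to $`R`$ gives the critical nucleation radius and the height of the action barrier:
$$\frac{d(S_E-S_B)}{dR}=8\pi R\mu -4\pi R^2\sigma =0\Rightarrow R_{*}=\frac{2\mu }{\sigma },\qquad (S_E-S_B)|_{R_{*}}=\frac{16\pi \mu ^3}{3\sigma ^2},$$
so the degree of suppression is controlled entirely by the ratio $`\mu ^3/\sigma ^2`$.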
In all of this analysis, we have not said anything about the gravitational fields of these domain walls. This is because we have been assuming that the walls are so ‘light’ that they have effectively decoupled from gravity. Of course, there is no reason why heavy domain walls should not suffer the same instabilities as light walls. We are therefore led to generalize the known work on light domain walls, to include the effects of gravity. However, the gravitational effects of heavy domain walls are highly non-trivial; indeed, a gravitating domain wall generically closes the universe! Of course, this overwhelming property of domain walls is just another reason why we should be interested in finding new decay modes to get rid of them. We now turn to a short discussion of the gravitational effects of domain walls, before outlining the new work on how they may decay by the formation of closed string loop boundary components.
Throughout this paper we use units in which $`\hbar =c=G=1`$.
## II VIS domain walls: A brief introduction
Solutions for the gravitational field of a domain wall were found by Vilenkin (for an open wall) and Ipser and Sikivie (for closed walls). The global structure of these Vilenkin-Ipser-Sikivie (or ‘VIS’) domain walls has been extensively discussed recently, so we will only present a brief sketch here.
To begin, we look for a solution of the Einstein equations where the source term is an energy momentum tensor describing a distributional source located at $`z=0`$:
$$T_{\mu \nu }=\sigma \delta (z)\mathrm{diag}(1,-1,-1,0)$$
(3)
It is impossible to find a static solution of the Einstein equations with this source term; indeed, the VIS solution is a time-dependent solution describing a uniformly accelerating domain wall. In order to understand the global causal structure of the VIS domain wall, it is most useful to use coordinates $`(t,x,y,z)`$ so that the metric takes the form
$$ds^2=e^{-2k|z|}\left(dt^2-dz^2\right)-e^{2k(t-|z|)}(dy^2+dx^2),$$
(4)
Here, $`k=2\pi \sigma `$, and the wall is located at $`z=0`$. The gravitational field of this solution has amusing properties. For example, if you take the Newtonian limit of the Einstein equations for (4) you obtain the equation
$`\nabla ^2\varphi =-2\pi \sigma `$, where $`\varphi `$ is the Newtonian gravitational potential and $`\sigma `$ is the energy density of the wall. From this equation it is clear that a wall with positive surface energy density will have a repulsive gravitational field, whereas a wall with negative energy density will have an attractive gravitational field. An even simpler way to see that the (positive $`\sigma `$) VIS wall is repulsive is to notice that the $`(t,z)`$ part of the metric is just the Rindler metric.
Further information is recovered by noticing that the $`z`$= constant hypersurfaces are all isometric to $`2+1`$ dimensional de Sitter space:
$$ds^2=dt^2-e^{2kt}(dy^2+dx^2).$$
(5)
Given that $`2+1`$ de Sitter has the topology $`\mathrm{S}^2\times 𝐑`$ it follows that the domain wall world sheet has this topology. In other words, at each instant of time the domain wall is topologically a two-dimensional sphere. Indeed, in the original Ipser-Sikivie paper a coordinate transformation was found which takes the $`(t,x,y,z)`$ coordinates to new coordinates $`(T,X,Y,Z)`$ such that in the new coordinates the metric becomes (on each side of the domain wall):
$$ds^2=dT^2-dX^2-dY^2-dZ^2.$$
(6)
Furthermore the domain wall, which in the old coordinates is a plane located at $`z=0`$, is in the new coordinates the hyperboloid
$$X^2+Y^2+Z^2=\frac{1}{k^2}+T^2.$$
(7)
Of course, the metric induced on a hyperboloid embedded in Minkowski spacetime is just the de Sitter metric, and so this is consistent with what we have already noted. This metric provides us with a useful way of constructing the maximal extension of the domain wall spacetime:
First, take two copies of Minkowski space, and in each copy consider the interior of the hyperboloid determined by equation (7); then match these solid hyperboloids to each other across their respective boundaries. There will be a ridge of curvature (much like the edge of a lens) along the matching surface, where the domain wall is located. Thus, an inertial observer on one side of the wall will see the domain wall as a sphere which accelerates towards the observer for $`T<0`$, stops at $`T=0`$ at a radius $`k^1`$, then accelerates away for $`T>0`$. We illustrate this construction below, where we include the acceleration horizons to emphasize the causal structure.
Now, the repulsive effect of this vacuum domain wall is very similar to the inflationary effect of a positive cosmological constant seen in de Sitter space. Indeed, we often find it is useful to think of a VIS spacetime as an inflating universe where all of the vacuum energy has been ‘concentrated’ on the sheet of the domain wall.
## III ‘Popping’ a VIS domain wall: Instanton and action
### A Euclidean section of the ingoing state
Before we construct instantons for popping domain walls, it is useful if we first recall a few basic facts about the Euclidean section and action of the ordinary VIS domain wall.
In the simplest scenarios, a VIS domain wall will form when there is a breaking of some discrete symmetry. Usually, one thinks of the symmetry breaking in terms of some Higgs field $`\mathrm{\Phi }`$. If $`ℳ_0`$ denotes the ‘vacuum manifold’ of $`\mathrm{\Phi }`$ (i.e., the submanifold of the Higgs field configuration space on which the Higgs acquires a vacuum expectation value, minimizing the potential energy $`V(\mathrm{\Phi })`$), then a necessary condition for a domain wall to exist is that $`\pi _0(ℳ_0)\ne 0`$. In other words, vacuum domain walls arise whenever the vacuum manifold is not connected. Given these assumptions, one usually writes the Lagrangian density for the matter field $`\mathrm{\Phi }`$ as
$$ℒ_m=\frac{1}{2}g^{\alpha \beta }\partial _\alpha \mathrm{\Phi }\partial _\beta \mathrm{\Phi }-V(\mathrm{\Phi }).$$
(8)
The exact form of $`V(\mathrm{\Phi })`$ is not important. All that we require in order for domain walls to be present is that $`V(\mathrm{\Phi })`$ has a discrete set of degenerate minima, where the potential vanishes. Given this matter content, the full (Lorentzian) Einstein-matter action then reads:
$$S=\int _Md^4x\sqrt{-g}\left[\frac{R}{16\pi }+ℒ_m\right]+\frac{1}{8\pi }\int _{\partial M}d^3x\sqrt{h}K.$$
(9)
Here, $`M`$ denotes the four-volume of the system, and $`\partial M`$ denotes the boundary of this region. One obtains the Euclidean action, $`I`$, for the Euclidean section of this configuration by analytically continuing the metric and fields and reversing the overall sign. The ‘simplified’ form of this Euclidean action in the thin wall limit has been derived in a number of recent papers, so we will not reproduce the full argument here. Basically, one first assumes that the cosmological constant vanishes ($`R=0`$) and then one uses the fact that the fields appearing in the matter field Lagrangian depend only on the coordinate ‘$`z`$’ normal to the wall, and one integrates out this $`z`$-dependence to obtain the expression
$$I=\sum _{i=1}^{n}\frac{\sigma _i}{2}\int _{D_i}d^3x\sqrt{h_i}.$$
(10)
Here, $`D_i`$ denotes the $`i`$-th domain wall, $`\sigma _i`$ is the energy density of the domain wall $`D_i`$, $`h_i`$ is the determinant of the three-dimensional metric $`h_{ab}^{(i)}`$ induced on the domain wall $`D_i`$ and $`n`$ is the total number of domain walls. It is not hard to prove that variation with respect to $`h_{ab}^{(i)}`$ on each domain wall will yield the Israel matching conditions.
Now, as we have seen the Lorentzian section of a single VIS domain wall is just two portions of flat Minkowski space glued together. It is therefore natural to propose that the Euclidean section is obtained by gluing two flat Euclidean four-balls together along a common $`S^3`$ boundary component. In this way one obtains the ‘lens instanton’, which describes (in the context of the no-boundary proposal) the creation of a single VIS domain wall from ‘nothing’. This lens instanton is the Euclidean section of the incoming state - the VIS domain wall before the decay process (puncture) has taken place. In order to calculate the rate for the process, we now need to construct the instanton for the perforated domain wall.
### B Euclidean section of the outgoing state
As we saw above, a VIS domain wall moving in imaginary time sweeps out an $`S^3`$ ‘ridge’ of curvature along the equator of the lens instanton. In a similar way, we expect a virtual loop of string moving in imaginary time to sweep out a two-sphere, $`S^2`$. For our purposes, we want this string loop to correspond to a puncture which appears on the Euclidean portion of the domain wall, expands to some maximum size, then collapses again. In other words, the Euclidean section of the punctured domain wall is obtained by taking the lens instanton, and ‘truncating’ each flat four-ball so that the ‘hole’ swept out by the string is made manifest. This is illustrated below:
More explicitly, if we write the metric on flat space in the usual form
$`ds^2=d𝒯^2+dX^2+dY^2+dZ^2`$
where $`𝒯=iT`$ denotes imaginary time, then the domain wall (on the lens instanton) is located at the boundary of each ball of radius $`R=1/k`$. In order to obtain the instanton for the punctured wall, we truncate the range of the variable $`Z`$:
$`Z\le C<R`$
where $`C`$ is some constant. In other words, we shave off a portion of the lens instanton so that the virtual string worldsheet is located at the surface $`Z=C`$.
### C Calculation of the action and amplitude
We are now in a position to calculate the action and decay rate for this process. Since there is no topology change in this situation, it is easy to construct an interpolating instanton by finding slices of the initial instanton which match smoothly to slices of the final instanton. We may therefore use the no-boundary ansatz without relying on more sophisticated arguments, such as the ‘patching proposal’ put forward in . As described in the introduction, we want to calculate the action difference, $`\mathrm{\Delta }S`$, between the Euclidean action of the final configuration ($`S_F`$) and the Euclidean action of the initial configuration ($`S_I`$).
Since the final instanton is obtained from the initial instanton simply by removing a ‘cap’ from the boundary of each four-ball before performing identifications, it follows that the only action difference can come from this contribution. As in the non-gravitating case described in the introduction, the removed region will contribute a bulk term (corresponding to the hole in the domain wall) and a surface term (the boundary of the hole).
The volume of the removed cap is calculated to be
$`2\pi ^2R^3\left(1-\frac{2}{\pi }\left(\frac{C}{2R}\sqrt{1-C^2/R^2}+\frac{\mathrm{cos}^{-1}(-C/R)}{2}\right)\right)`$
Likewise, the surface swept out by the string is just a two-sphere of radius $`(R^2-C^2)^{1/2}`$. Weighting the bulk term with domain wall energy density $`\sigma `$, and the surface term with string tension $`\mu `$ we therefore obtain
$$\mathrm{\Delta }S=S_F-S_I=4\pi (R^2-C^2)\mu -\pi ^2R^3\left(1-\frac{2}{\pi }\left(\frac{C}{2R}\sqrt{1-C^2/R^2}+\frac{\mathrm{cos}^{-1}(-C/R)}{2}\right)\right)\sigma $$
(11)
Ignoring the prefactor term, the decay probability is then given as
$$𝒫=e^{-\mathrm{\Delta }S}$$
(12)
(Actually, the prefactor term may contain interesting information in this situation, for the simple reason that when the domain wall ‘pops’, the overall symmetry of the spacetime is broken from spherical symmetry to cylindrical symmetry; presumably, fluctuations about the instanton would respect this symmetry breaking. We will have more to say about the evolution of quantum fields in a punctured domain wall spacetime later in this paper).
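To make (11) and (12) concrete, the following short script (our illustrative sketch; the tensions chosen are arbitrary) evaluates the action difference numerically and checks the cap-volume expression against its limiting cases, $`C\to R`$ (no hole) and $`C\to -R`$ (the entire wall removed):

```python
import numpy as np

def cap_volume(C, R):
    """3-volume cut from the S^3 wall worldvolume by the truncation
    Z <= C (the closed-form expression quoted in the text)."""
    x = C / R
    return 2 * np.pi**2 * R**3 * (
        1 - (2 / np.pi) * (0.5 * x * np.sqrt(1 - x**2)
                           + 0.5 * np.arccos(-x)))

def delta_S(C, R, mu, sigma):
    """Action difference of eq. (11): string boundary term minus
    half the removed wall volume, weighted by sigma."""
    return 4 * np.pi * (R**2 - C**2) * mu - 0.5 * cap_volume(C, R) * sigma

R = 1.0
assert np.isclose(cap_volume(R, R), 0.0)                   # C -> R: no hole
assert np.isclose(cap_volume(-R, R), 2 * np.pi**2 * R**3)  # C -> -R: whole S^3

mu, sigma = 0.05, 0.1  # illustrative tensions only
for C in (0.9, 0.5, 0.0):
    dS = delta_S(C, R, mu, sigma)
    print(f"C = {C:3.1f}: Delta S = {dS:7.4f}, P ~ {np.exp(-dS):.3e}")
```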
Now, just as for popping light domain walls , our use of the thin wall approximation to derive (11) is only justified if the scale of symmetry breaking for the strings is much larger than the scale of symmetry breaking for domain walls. It follows that, generically, we will have $`\mathrm{\Delta }S\gg 1`$, and hence $`𝒫\ll 1`$.
### D Decay of domain walls with multiple punctures
It is also possible for several holes to spontaneously form on the surface of a domain wall. In the simplest situation, all of the string loop boundary components will nucleate at the same initial radius ($`(R^2-C^2)^{1/2}`$), with the same string tension $`\mu `$, so that the total action will just be
$`\mathrm{\Delta }S_{TOT}=N\mathrm{\Delta }S`$
where $`\mathrm{\Delta }S`$ is given by (11), and $`N`$ is the total number of holes. Of course, one might also imagine more exotic scenarios where the holes nucleate at different initial radii $`R_i=(R^2-C_i^2)^{1/2}`$, and string tensions $`\mu _i`$, so that the total action for the decay would be given as
$`\mathrm{\Delta }S_{TOT}=\sum _{i=1}^{N}\mathrm{\Delta }S_i`$
where
$`\mathrm{\Delta }S_i=4\pi (R^2-C_i^2)\mu _i-\pi ^2R^3\left(1-\frac{2}{\pi }\left(\frac{C_i}{2R}\sqrt{1-C_i^2/R^2}+\frac{\mathrm{cos}^{-1}(-C_i/R)}{2}\right)\right)\sigma `$
In either situation, the creation of multiple holes is heavily suppressed relative to the nucleation of a single hole.
Of course, it is of some interest to know whether or not a domain wall can ever be completely annihilated by the processes which we are discussing here. In fact, it is not hard to prove that a domain wall will be completely destroyed by this decay process whenever at least four string loop boundary components are nucleated.
In order to understand this, recall that the worldvolume of the domain wall is isometric to $`2+1`$-dimensional de Sitter spacetime (embedded in $`3+1`$-dimensional Minkowski), and that the trajectory of a given string loop boundary component may be obtained by taking the intersection of the hyperboloid with a surface of constant $`Z`$ (relative to the coordinates (6) on Minkowski space), which is simply a copy of $`2+1`$ dimensional Minkowski spacetime. If we view all of this from ‘above’ the hyperboloid, then we see that the domain wall is a two-sphere which expands uniformly outwards, and that the string loop boundary components are intersections of this two-sphere with flat timelike hypersurfaces. If there are at least four (non-parallel) such hypersurfaces, then on each spacelike surface the hypersurfaces will bound a tetrahedron of fixed size. In three dimensions, a tetrahedron may enclose a two-sphere, and so initially at least the domain wall may still lie (partially) within the tetrahedron. However, if the sphere continues to expand it will always eventually envelop the tetrahedron, and hence the domain wall will have been annihilated (i.e., the string boundary components ‘collide’ precisely at the corners of the tetrahedron). If you only had three (or fewer) timelike hypersurfaces, you could never completely bound the two-sphere in this way. The intersection of the sphere with a given timelike hypersurface would continue unbounded in some direction (because you would never encounter another timelike hypersurface). Thus, at least a portion of the domain wall would survive eternally. In other words, you need to nucleate at least four puncture wounds in a domain wall in order to completely annihilate the domain wall.
## IV Lorentzian evolution of punctured domain walls
In the last section, we showed that a VIS domain wall may decay via the formation of closed string loop boundary components on the worldvolume of the wall. We found instantons, and calculated the corresponding actions and rates, for such processes. We now turn our attention to the Lorentzian, or ‘real time’, picture of this process.
First, recall the representation (Fig. 1) of the VIS domain wall as two solid hyperboloids in Minkowski space, identified along their respective boundaries. The constraint on the coordinate $`Z`$, $`Z\le C<R`$, which we imposed on the Euclidean section, extends to the Lorentzian section as well. Thus, in the Lorentzian coordinates $`(T,X,Y,Z)`$, the equation of motion for the loop of string may be written as
$$X^2+Y^2=(R^2-C^2)+T^2$$
(13)
Thus, the initial radius of the loop of string (at $`T=0`$) is seen to be $`\sqrt{R^2-C^2}>0`$, which is what we expect given that the Lorentzian section must match smoothly to the Euclidean section. At late times, the hole is expanding at the speed of light. Of course, the hole never completely devours the wall, for the simple reason that the spherical wall also expands exponentially. The global structure of this spacetime is illustrated below:
It is interesting to consider the gross properties of particles which are propagating in the background of a wall which spontaneously decays in this way. For example, suppose we consider some scalar field $`\psi `$ which couples to the Higgs field (of the domain wall) $`\varphi `$ through the interaction
$`ℒ_{int}=-\frac{\lambda }{2}\varphi ^2\psi ^2`$
(Here, we are thinking of $`\varphi `$ as a Higgs field with a standard $`\varphi ^4`$, ‘double-well’ potential). In it is shown that the reflection coefficient for scattering of $`\psi `$ particles off of a $`\varphi `$ domain wall is zero in the limit that the domain wall has zero thickness. In other words, it makes sense to think of the (infinitely thin) VIS domain walls which we have been considering in this paper as spherical, accelerating, ‘moving mirrors’. Put another way, each side of the domain wall is a spherical cavity, with Dirichlet boundary conditions for the fields at the boundary of the cavity.
In it is shown that a mirror moving with uniform acceleration $`a`$ will generate (relative to an inertial particle detector) a thermal bath of radiation at temperature
$`T={\displaystyle \frac{a}{2\pi k_b}}`$
where $`k_b`$ is Boltzmann’s constant. Of course, we could have predicted that the VIS spacetime would have an entropy and temperature of this form, simply because the repulsive energy of the wall generates a cosmological horizon which leads to loss of information. Inertial observers in a VIS spacetime can never recover information about what is happening on the other side of the domain wall, and they will represent this ignorance by tracing over states associated with the horizon.
If a domain wall decays by the formation of a closed string loop boundary component on the worldvolume of the wall, the isotropic thermal nature of the initial (VIS) spacetime should be lost. This is because the initial spherical symmetry will be broken to cylindrical symmetry when the hole forms. The hole is in some sense then a ‘window’ between the two spherical cavities, through which information can propagate. It would be interesting to have some explicit calculation for the scattering of $`\psi `$-particles in the background of a VIS domain wall with a single puncture.
As we have shown, if four (or more) punctures spontaneously nucleate on a wall, the string boundary components must eventually collide and annihilate the wall, so that all of the energy density initially stored in the wall is thermalized. The final ‘annihilation’ of a domain wall by this process is rather analogous to the endpoint of inflation, in the sense that the source driving the exponential expansion is suddenly ‘switched off’. However, a domain wall with several punctures will presumably generate a highly non-isotropic radiation background, in contrast with the inflationary scenario.
## V Conclusions: A potential brane-world instability?
We have shown how to describe the decay of vacuum domain walls by the formation of closed string boundary components, when the effects of gravity are included. We found new instantons, as well as the corresponding Lorentzian solutions, which describe the formation and evolution of multiple closed string loop boundary components on a given VIS domain wall background. It would be nice to have a complete picture of how quantum fields will evolve in the background of one of these punctured walls.
In general, domain walls arise as (D-2)-dimensional defects (or extended objects) in D-dimensional spacetimes. In fact, domain walls are a common feature in the menagerie of objects which appear in the low-energy limit of string theory, as has been discussed in detail in and .
Of course, there has recently been an enormous amount of interest in the possibility that the universe itself might be a domain wall moving in some five-dimensional bulk spacetime. In particular, Randall and Sundrum have recently put forward a model where the universe is a $`𝐙_2`$-symmetric positive tension domain wall bounding two bulk regions of anti-de Sitter (adS) spacetime. In their original scenario, the bulk cosmological constant is fine-tuned relative to the domain wall energy density so that the effective cosmological constant on the brane is precisely zero. However, during a period of inflation on the brane-world the effective cosmological constant would be positive, and the domain wall would simply be a four-dimensional de Sitter hyperboloid embedded in five-dimensional adS (for a recent interesting discussion of various semiclassical instabilities associated with these de Sitter brane-worlds see ). Thus, the causal structure of these de Sitter brane-worlds is identical to that of the VIS domain wall. (The causal structure of these walls, in the four-dimensional case, was first discussed in ; it was shown in that this causal structure is universal in any dimension.)
Now, one can certainly imagine that the domain wall of the Randall-Sundrum model arises because of a symmetry breaking pattern which allows for the universe itself to end on a two-brane (all that is required is an associated exact sequence of the form (1)). In such a scenario, the brane-world would be unstable to the formation of ‘holes’ - two-dimensional surfaces where the universe would end. These holes would accelerate out, devouring the universe and converting brane-world fields into bulk degrees of freedom. Since a fundamental polytope in four dimensions has five faces, it follows that you would have to nucleate at least five of these holes to completely annihilate a de Sitter universe in this fashion.
One could even go further and imagine scenarios where the brane-world is itself the boundary of a puncture wound in some four-brane (in this way you could have more than one large extra dimension).
In general, we would expect many of the (self-gravitating) p-branes of supergravity models to be unstable to the sort of decay processes which we have described in this paper. From this point of view, the process which we have studied is just another example of the sort of ‘brane damage’ which we expect to be a generic feature of the p-branes of M-theory.
Research on these and related questions is currently underway.
Acknowledgements
The authors thank Richard Battye, Robert Caldwell, Sean Carroll, Gary Gibbons and James Grant for useful conversations. A.C. was supported by a Drapers Research Fellowship at Pembroke College, Cambridge, and is currently (partially) supported at MIT by funds provided by the U.S. Department of Energy (D.O.E.) under cooperative research agreement DE-FC02-94ER40818. A.C. also thanks the organizers and participants of the ITP Program on Supersymmetric Gauge Dynamics and String Theory, where this work was completed, for stimulating discussions. At ITP A.C. was supported by PHY94-07194. |
no-problem/9912/cond-mat9912054.html | ar5iv | text | # 1 Average time required to approach an attractor as function of the control parameter 𝑎 for the coupling strength ϵ=0.3 in a system of size 𝑁=100. Averages over 100 different initial conditions are performed. Vertical bars indicate minimum and maximum values of the transient time.
Very long transients in globally coupled maps
Susanna C. Manrubia and Alexander S. Mikhailov
Fritz-Haber-Institut der Max-Planck-Gesellschaft, Faradayweg 4-6, 14195 Berlin, Germany
Very long transients are found in the partially ordered phase of type II of globally coupled logistic maps. The transients always lead the system in this phase to a state with a few synchronous clusters. This transient behaviour is not significantly influenced by the introduction of weak noise. However, such noise generally favors cluster partitions with more stable periodic dynamics.
PACS number(s): 05.45.-a, 05.45.Xt, 05.40.Ca
Globally coupled maps (GCM) formed by ensembles of logistic maps have been used as a paradigm of complex collective dynamic behaviour for a decade . Originally, GCM were introduced as a mean-field approach to coupled map lattices , but later the nontrivial dynamics and the rich collective phenomenology displayed by that system made it a subject worthy of study in itself. One of the main properties of globally coupled logistic maps is the presence of different phases characterized by turbulent (non-synchronized) behaviour, clustering, and global synchronization . The formation of a number of subgroups of synchronized elements out of a symmetrical ensemble has a high relevance for many applications, such as the organization of the immune or the neural system, ecological networks, cell differentiation, and structuring of social hierarchies. Therefore, the GCM phases in which the system displays clustering have been intensively studied .
Previous studies of the complex collective behaviour displayed by GCM are usually based on the classical classification of the phase space introduced by K. Kaneko . As we report here, the partially ordered phase of type II is equivalent to its neighboring ordered phase, the only difference being that very long transients precede the approach to the final attractor. This implies a revision of the phase space of GCM, and shows that there is a strong non-monotonic dependence of the transient length on the system parameters, which has to be taken into account in any numerical study.
The simplest globally coupled discrete-time system is given by
$$x_i(t+1)=(1-ϵ)f(x_i(t))+\frac{ϵ}{N}\sum _{j=1}^{N}f(x_j(t)).$$
(1)
where the individual element evolves according to the logistic map $`f(x)=1-ax^2`$, $`N`$ is the total number of maps and $`ϵ`$ specifies the coupling strength. In general, an attractor of this dynamical system is formed by a number $`𝒦`$ of synchronous clusters each containing $`N_k`$ elements, $`k=1,\mathrm{},𝒦`$, and can be characterized by means of the partition $`(𝒦;N_1\ge N_2\ge \mathrm{}\ge N_𝒦)`$. For convenience, this classification also includes one-element ”clusters” ($`N_k=1`$) that actually correspond to individual non-entrained elements. Thus, the partition ($`N;1,1,\mathrm{},1`$) corresponds to the asynchronous state of the entire ensemble, while the partition ($`1;N`$) represents its fully synchronous state, where all $`N`$ elements belong to a single cluster. In addition to these two states, the system would generally also have other partitions where a certain number of synchronous groups of elements with $`N>𝒦>1`$ are present. The choice of an attractor with a particular partition is determined by the initial conditions. The attractor corresponding to each initial condition is characterized by a certain number $`𝒦_m`$ of clusters after a transient has elapsed.
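For readers who wish to reproduce these dynamics, a minimal implementation of the map (1) might look as follows (our sketch; the parameter values are illustrative):

```python
import numpy as np

def gcm_step(x, a, eps):
    """One iteration of the globally coupled logistic maps, eq. (1)."""
    fx = 1.0 - a * x**2                # local map f(x) = 1 - a x^2
    return (1.0 - eps) * fx + eps * fx.mean()

rng = np.random.default_rng(0)
N, a, eps = 100, 1.6, 0.3              # system size and parameters
x = rng.uniform(-1.0, 1.0, N)          # one random initial condition
for t in range(10_000):
    x = gcm_step(x, a, eps)
```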
The phase space of the GCM (1) has been described by K. Kaneko using the average cluster number $`\overline{𝒦}=\sum _{m=1}^{M}𝒦_m/M`$, where the index $`m=1,\mathrm{},M`$ enumerates the set of employed initial conditions. Four different phases have been identified :
1. Coherent phase. The elements follow the same trajectory ($`x_i(t)=x_j(t)`$ $`\forall i,j`$ and $`\forall t`$), forming a single synchronous cluster ($`\overline{𝒦}=1`$).
2. Ordered phase. Almost all basin volume is occupied by a few-cluster attractor ($`\overline{𝒦}`$ is small and does not grow with $`N`$).
3. Partially ordered phase. Coexistence of many-cluster and few-cluster attractors ($`\overline{𝒦}`$ is large and grows with $`N`$).
4. Turbulent phase. No synchronization among the elements ($`\overline{𝒦}=N`$).
The coexistence of many-cluster and few-cluster attractors has been observed by K. Kaneko in two different parameter intervals. One of them separated the ordered and the turbulent phases. Here the system is in the partially ordered phase of type I, also called the intermittent phase. The other interval lies between the regions occupied by the ordered and coherent phases. In this interval the system is in the partially ordered phase of type II (called the ”glassy” phase in the initial publication ). The typical parameter intervals are $`1.56<a<1.80`$ for $`ϵ=0.3`$ (partially ordered phase of type II) , and $`1.58<a<1.69`$ for $`ϵ=0.1`$ (intermittent phase) .
To compute the asymptotic properties of a dynamical system, one has to ensure that the system has had enough time to approach its final state, i.e. that the dynamical attractor for the given initial conditions has been reached. Slow relaxation is indeed known for some dynamical systems (see, e.g., ). The properties of the transients of GCM have not yet been sufficiently investigated. The aim of the present Letter is to systematically study the transient behaviour of GCM, described by equation (1). Our principal result is that inside the whole parameter region corresponding to partially ordered (”glassy”) phases of type II, only few-cluster attractors are observed after very long transients. This result holds also when weak noise is added. It contradicts what has previously been reported by K. Kaneko .
We performed long runs of up to $`T=10^7`$ iterations and recorded the time at which the final minimum value of $`𝒦`$ was reached in the parameter region corresponding to the partially ordered phase of type II. To this end, the partition $`(𝒦;N_1\ge N_2\ge \mathrm{}\ge N_𝒦)`$ was determined every $`\mathrm{\Delta }t=2550`$ time steps. Double precision real numbers were used in these computations, ensuring an absolute precision of $`10^{-16}`$. Two elements were taken to belong to the same cluster only if they had exactly the same state within the double computer precision, i.e. if $`|x_i(t)-x_j(t)|<10^{-16}`$.
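The cluster-counting criterion just described takes only a few lines to implement; in the following sketch (ours) elements are chained into one cluster whenever adjacent sorted states differ by less than the threshold, which for machine-identical states is equivalent to the pairwise test:

```python
import numpy as np

def count_clusters(x, delta=1e-16):
    """Number of synchronous clusters: elements whose states agree
    to within delta are counted as one cluster."""
    xs = np.sort(x)
    return 1 + int(np.sum(np.diff(xs) >= delta))
```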
The computed average transient length for $`ϵ=0.3`$ as a function of $`a`$ is shown in Figure 1. We see that the transients may extend up to tens of thousands and even millions of time steps. They become especially long near $`a=1.6`$. Previous numerical studies were limited to much shorter evolution times (up to $`10^4`$ iterations) and therefore some of the behaviour observed in these studies essentially corresponded to transients. This becomes clear if we compare our Fig. 1 with Fig. 9 in Ref. : A strong increase in the mean number of clusters $`\overline{𝒦}`$ was reported exactly where the transient length greatly increases (exceeding $`3000`$ time steps). Our investigation reveals that, after long transients, only attractors with $`𝒦\le 2`$ are typically found for $`a<1.65`$, and only attractors with $`𝒦\le 6`$ are observed for $`1.65<a\le 2`$. Similar results are also obtained in our calculations for $`ϵ=0.25`$, $`0.35`$, and $`0.4`$.
The time dependence of the number of clusters $`𝒦(t)`$ at $`ϵ=0.3`$ for 30 different initial conditions is shown in Figure 2 for $`a=1.6`$ (main plot) and $`a=1.8`$ (inset). For $`a=1.6`$, the number of clusters is indeed large during the initial evolution, and comparable with the total size of the system ($`N=100`$). Later on, the number of clusters slowly decreases, and eventually only attractors with $`𝒦=2`$ (but different partitions $`N_1,N_2`$) are found at this value of the parameter $`a`$. The system evolution at $`a=1.8`$ is essentially similar, though it is characterized by a much faster convergence to the final states (note the difference by almost two orders of magnitude in the time scales in these two plots). Another difference is that for $`a=1.8`$ final states with various cluster numbers $`𝒦=2`$, $`3`$ and $`4`$ are observed. By averaging over a large number of initial conditions, the time dependence for the relaxation of the mean cluster number $`\overline{𝒦}(t)`$ to its asymptotic value $`𝒦_{\infty }`$ has been obtained. Figure 3 shows on a logarithmic scale the time dependence of the quantity $`\delta 𝒦=\overline{𝒦}(t)-𝒦_{\infty }`$ for $`ϵ=0.3`$ and $`a=1.6`$. We clearly see that the relaxation is exponential, $`\delta 𝒦\propto \mathrm{exp}\{-\beta t\}`$.
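The decay constant $`\beta `$ can then be read off from a straight-line fit to $`\mathrm{log}\delta 𝒦`$ versus $`t`$; schematically (our sketch, assuming $`\delta 𝒦(t)`$ has been measured as described above):

```python
import numpy as np

def fit_beta(t, delta_K):
    """Fit delta_K ~ exp(-beta t) by linear least squares on log(delta_K)."""
    slope, _intercept = np.polyfit(t, np.log(delta_K), 1)
    return -slope
```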
We have further analysed how the mean transient time $`\tau =\beta ^{-1}`$ depended on the system size $`N`$. The explored interval of system sizes was $`2^4\le N\le 2^{12}`$; we have used several values of $`a`$ and fixed the coupling strength at $`ϵ=0.3`$. We did not find any strong variation of $`\tau `$ with $`N`$, i.e. the order of magnitude of $`\tau `$ did not depend on the system size. The transient length depicted in Fig. 1 is characteristic for almost three decades of variation in the system size $`N`$.
The presence of very long transients indicates that the system may be sensitive to the application of noise. Indeed, for the so-called Milnor attractors even a tiny perturbation would suffice to destabilize the asymptotic state . The existence of Milnor attractors has been discussed both for the partially ordered phase of type II and for the intermittent phase . To analyze the effect of weak random perturbations, we have modified equation (1) by adding a noise term $`\eta r_i(t)`$. We have chosen a small noise intensity $`\eta =10^{-10}`$; independent random numbers $`r_i(t)\in (-1,1)`$ are drawn anew from a uniform distribution for each element and at each time step. Noise prevents the spurious synchronization of elements in the system: If the states of two maps $`i`$ and $`j`$ are equal (to within computer precision) at time $`t`$, they will follow identical trajectories for all $`t^{}>t`$ in a purely deterministic system. When noise is added, spurious attractors are not attained and only robust attractors should be detected.
In the presence of noise the states of elements in a cluster cannot be identical. To define a cluster, we have to choose a certain finite precision $`\delta `$ and say that elements $`i`$ and $`j`$ belong to the same cluster at time $`t`$ if $`|x_i(t)-x_j(t)|<\delta `$ (cf. the respective definition for the case of randomly coupled maps ). We have found that the application of weak noise does not qualitatively influence the above-described evolution. Figure 4 shows on a logarithmic scale the mean number $`\overline{𝒦}(t)`$ of clusters as a function of time when weak noise is present (all other parameters are the same as in Fig. 3). The system still evolves towards final distributions characterized by a few large clusters. Typically, a slower convergence to the limit value $`𝒦_{\infty }=2`$ was observed when the noise was acting. Only in a very narrow domain $`1.60\le a\le 1.62`$ did noise seem to prevent convergence to a few-cluster attractor. Note that this area coincides with the maximum transient length in the deterministic case, and is near the boundary where the single synchronous cluster becomes unstable (it is known that the destabilization of the coherent phase proceeds through a power-law divergence of the transient length ).
The dynamics corresponding to a particular cluster partition in our simulations was either periodic or (intrinsically) chaotic. To detect intrinsically chaotic dynamics, local Lyapunov exponents were examined. After a fixed transient of length $`T=10^7`$ we numerically calculated the local Lyapunov exponent
$$\lambda _m(ϵ,a)=\frac{1}{N\stackrel{~}{t}}\sum _{t=T}^{T+\stackrel{~}{t}}\sum _{j=1}^{N}\mathrm{log}|f^{\prime }(x_j(t))|$$
(2)
corresponding to the trajectories $`x_j(t)`$ of elements $`j`$ for the given initial condition $`m`$ and parameters $`ϵ`$ and $`a`$. The averaging time was always $`\stackrel{~}{t}=10^4`$. Positive exponents correspond to chaotic dynamics. The same procedure was used both in the presence and in absence of the noise.
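For the logistic map $`|f^{\prime }(x)|=|2ax|`$, so (2) reduces to an average of $`\mathrm{log}|2ax_j(t)|`$ along the trajectory; a sketch of the measurement (ours, reusing the gcm_step function defined in the earlier sketch):

```python
import numpy as np

def local_lyapunov(x, a, eps, t_avg=10_000):
    """Local Lyapunov exponent of eq. (2), averaged over t_avg steps."""
    total = 0.0
    for _ in range(t_avg):
        total += np.log(np.abs(2.0 * a * x)).sum()
        x = gcm_step(x, a, eps)  # evolution step from the earlier sketch
    return total / (x.size * t_avg)
```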
When noise is acting, it may, in principle, induce transitions from one cluster partition to another. Our numerical investigations show, however, that such transitions actually take place in the presence of very weak noise only if the dynamics corresponding to a particular cluster partition is intrinsically chaotic. This observation leads us to the conjecture that Milnor attractors in GCM are, perhaps, only generated by cluster partitions with intrinsically chaotic dynamics. Cluster partitions with periodic attractors are stable against a finite amount of perturbation, while the system leaves a partition with a chaotic attractor with certainty in a finite time when noise is present (see also ).
Interestingly, the addition of noise favors the attainment of periodic attractors. Figure 5 shows the fraction $`f`$ of initial conditions leading to a cluster partition with (intrinsically) chaotic dynamics, with and without noise, for the coupling intensity $`ϵ=0.3`$, as identified by means of (2). We see that this fraction is strongly reduced in the presence of noise around the parameter value $`a=1.61`$. To explain this, suppose that the system has approached a cluster partition with chaotic dynamics. Given that any such partition is destabilized even by weak noise, we expect that elements would spend only some time near this attractor, but then one of them would change its cluster affiliation and a new cluster partition would thus be produced. As long as this new partition is also chaotic, the system again easily escapes, and the same procedure repeats until a much more stable partition with periodic dynamics is found. This simple argument predicts that, under the action of noise, the system would wander between chaotic Milnor attractors until it eventually finds a robust periodic attractor. If this is indeed so, chaotic attractors would always represent mere transients for sufficiently weak noise. Nonetheless, to test such a hypothesis much longer runs are apparently needed.
Thus, our numerical analysis of globally coupled logistic maps has shown that the collective dynamics of this system in the partially ordered phase of type II is characterized by the presence of very long transients. The asymptotic states of the system in this parameter region are, however, the same as in the ordered phase and include only a small number of synchronous clusters. This conclusion holds even when a small noise, eliminating spurious attractors, is introduced. We have also performed the analysis of dynamical transients in the intermittent phase, i.e. at the interface between the ordered and the turbulent phases, and have found (to be separately published) that in this region the coexistence of few- and many-cluster attractors is indeed observed. These results are important for the general classification of dynamical behaviour in GCM.
The authors acknowledge the financial support from the Alexander-von-Humboldt Foundation (Germany). |
no-problem/9912/astro-ph9912437.html | ar5iv | text | # A thin H i circumnuclear disk in NGC 4261
## 1 Introduction
Perhaps the most striking feature of the FRI radio galaxy NGC 4261 (3C 270) is its approximately 240 pc radius circumnuclear dust disk revealed by HST observations (Jaffe et al. jaffe93 (1993)). Further optical observations of the kinematics of emission lines in the inner regions of this disk by Ferrarese et al. (ferrarese96 (1996)) have since provided evidence for a central black hole of mass $`4.9\times 10^8\mathrm{M}_{\odot }`$. In the radio band, H i absorption has been detected toward the core of NGC 4261 using the VLA (Jaffe & McNamara jaffe94 (1994)). It was argued by these authors that this absorption is due to atomic hydrogen in the inner part of the HST disk. Such a disk interpretation is consistent with high resolution VLBI or MERLIN H i absorption observations in a number of other AGN. For example, VLBA H i observations of another FRI galaxy, Hydra A, are consistent with a 20 pc flattened disk structure (Taylor taylor96 (1996)). In this Letter we report on VLBI observations of NGC 4261 performed using the high-sensitivity antennas of the European VLBI Network (EVN). The aims were to confirm that the H i is indeed associated with the HST dust disk and to better constrain the disk geometry and physical properties.
Detailed studies of the dynamics and chemistry of circumnuclear disks such as that found in NGC 4261 are important for several reasons. Such disks almost certainly provide the fuel which powers AGN, but the accretion process is poorly understood. In addition such flattened circumnuclear structures are required by orientation-based unified schemes. While the inner edges of these occulting structures must be on BLR scales (0.1 – 1 pc) their outer radii are poorly defined and may extend to hundreds of parsecs. Evidence for circumnuclear gas on a variety of scales in different physical states has been accumulating. Examples include HST imaging of parsec scale ionised gas in M87 (Ford et al. ford94 (1994)), 100 – 1000 pc scale molecular CO in Centaurus A (Rydbeck et al. rydbeck93 (1993)) and HCN in the Seyfert 2 NGC 1068 (Tacconi et al. tacconi94 (1994)). Amongst all the objects observed NGC 4261 is unique in showing optical dust, H i absorption and VLBI scale free-free absorption (Jones & Wehrle jones97 (1997)), allowing us to study the disk on a variety of scales.
The HST optical imaging of NGC 4261 provides strong constraints on the disk geometry at 100 pc scales. The ratio of the apparent major and minor axes (Ferrarese et al. ferrarese96 (1996), Jaffe et al. jaffe96 (1996)) implies, if the disk is circular, that its normal is inclined $`64^{\circ }`$ to the line of sight. Modeling of the dust obscuration shows that it is the East side of the disk which is closest to us. Such modeling also shows that the dust disk is thin, with a thickness $`<40`$ pc at its outer edge (Jaffe et al. jaffe96 (1996)). The normal to the disk is found in projection to be roughly oriented along the radio axis, making an angle of $`14^{\circ }`$ to the kiloparsec scale radio jets. At both 1.6 and 8.4 GHz VLBI observations show a two-sided jet in the same position angle as the kiloparsec jets (Jones & Wehrle jones97 (1997)). The Eastern jet, which is slightly weaker, is assumed to be the counterjet, given that the Eastern edge of the HST dust disk is tilted toward us and the radio jets are roughly aligned along the disk axis. Consistent with this orientation, Jones & Wehrle (jones97 (1997)) argue that a narrow gap in the 8.4 GHz radio emission toward the Eastern jet is from free-free absorption via occultation by an inner ionised accretion disk of radius 0.2 pc. In the remainder of the paper we discuss our H i VLBI observations and the additional constraints on disk geometry and physical properties they provide. Throughout this paper we assume a distance of $`30`$ Mpc (Nolthenius nolthenius93 (1993)), so 1 mas corresponds to 0.14 pc.
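(For orientation, and as our own arithmetic rather than a number taken from the papers cited: a circular disk whose normal makes an angle $`\theta `$ with the line of sight appears with axis ratio $`b/a=\mathrm{cos}\theta `$, so the quoted $`\theta =64^{\circ }`$ corresponds to an apparent axis ratio $`b/a=\mathrm{cos}64^{\circ }\approx 0.44`$.)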
## 2 Observations and Processing
Observations of NGC 4261 were made at 21 cm with the European VLBI Network (EVN) on February 22 1999 and lasted 10 hours. Participating antennas were Effelsberg, Jodrell Bank (Lovell telescope), Medicina, Noto, Onsala, Torun, and Westerbork (phased array). Unfortunately one of the three large collecting areas, the 100m at Effelsberg, was unable to observe due to heavy snowfall. The observing mode consisted of 4 frequency bands, all observed with dual circular polarisation, 4 MHz wide and 2-bit sampled. The second frequency band was centred on the H i line of NGC 4261 at a velocity of 2237 km/s (heliocentric, optical definition) corresponding to a frequency of 1410 MHz.
The data were processed on the EVN MkIV data processor at the Joint Institute for VLBI in Europe (JIVE) and constitute the first scientific experiment to be carried out with this new facility. The data were correlated between July 26 and August 6 1999 in two passes; the first resulting in 128 spectral channels on both polarisations of the line data, the second pass yielding sensitive continuum data, by processing all basebands with modest spectral resolution. For the spectral line dataset the resulting spectral resolution from uniform weighting was $`7.4`$ km/s. Data quality was good, except for some bad tape passes from Torun and a large fraction of the data for Medicina which was corrupted by interference. Amplitude calibration was carried out in the standard way using the $`T_{\mathrm{sys}}`$ values and gain curves from the stations. The continuum was imaged using standard self-calibration and CLEAN deconvolution methods. From variations in amplitude gain factor in the final step of amplitude self-calibration we estimate the uncertainties in the overall flux density scales of our images to be of order 15%.
## 3 Results
The continuum image obtained of NGC 4261 is shown in Fig. 1. The noise level in this image is 0.35 mJy/beam. We find that the Western jet is somewhat brighter and more extended than the Eastern one, consistent with the VLBA maps of Jones & Wehrle (jones97 (1997)). From these VLBA maps it is clear that the flat spectrum core lies close to the peak of Fig. 1. We were able to fit a three component Gaussian model to the continuum visibility data, consisting of one compact core component and two jet components 18 and 14 mas (2.5 and 2 pc) to the East and West respectively.
In order to detect the weak H i absorption the spectral line data was self-calibrated with the continuum image, and then the continuum was subtracted using the AIPS task UVLIN. The spectral absorption was unambiguously detected on the Jodrell Bank – Westerbork baseline (Fig. 2). Other baselines did not have sufficient sensitivity to give any detections. From the phase information (not shown) it is clear that the absorption is not centred on the reference position of the self-cal process; but is offset from the core. The sign of the phase on the Jodrell Bank – Westerbork baseline suggests that the absorption is preferentially on the Eastern (counterjet) side. Fig. 2 shows the absorbed flux density integrated at the supposed position of the main counterjet component located 18 mas to the East of the core (see below).
From the VLA spectrum presented in the Jaffe & McNamara (jaffe94 (1994)) paper, we estimate a total integrated absorbed flux density of $`604\pm 168`$ mJy km/s, compared to the corresponding number for our VLBI spectrum; $`409\pm 55`$ mJy km/s. The amount of VLBI scale absorption is therefore consistent with the VLA observations. Although we cannot exclude the possibility of additional H i absorbing gas on scales larger than sampled by the VLBI observations, we feel confident that we detect the bulk of the absorbing gas.
In making quantitative estimates of the opacity toward different source components we applied a model-fitting technique based on the three component model used to fit the continuum data. We first averaged the Jodrell – Westerbork spectral absorption data in frequency over the line width and then fitted the resulting phase and amplitude versus time using a three component Gaussian model based on the continuum model. Each Gaussian had the same fixed shape and position as that fitted to the continuum data, only the amplitude of each component was allowed to vary. The minimised $`\chi ^2`$ is achieved when most of the absorption is on the counterjet, a possible small absorption at the core and no absorption against the jet component. Fixing the jet absorption at zero we obtained the $`\chi ^2`$-landscape shown in Fig. 3 for different combinations of counterjet and core absorption. From this we estimate the absorbed counterjet and core flux densities averaged over the line to be $`1.5\pm 0.3`$ mJy and $`0.7\pm 0.4`$ mJy respectively.
Dividing the absorbed flux densities by the continuum flux densities of each component, we can estimate line-averaged opacities of $`0.11\pm 0.02`$ and $`0.01\pm 0.01`$ against the counterjet and core respectively. It therefore appears that virtually all of the absorbing gas is against the counterjet. Integrating the opacity over the line we estimate a total H i column density towards the counterjet of $`N_{\mathrm{HI}}=2.5\times 10^{19}T_{\mathrm{sp}}\mathrm{cm}^{-2}`$ and $`N_{\mathrm{HI}}<2.2\times 10^{18}T_{\mathrm{sp}}\mathrm{cm}^{-2}`$ towards the core.
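These columns follow from the standard 21 cm relation $`N_{\mathrm{HI}}=1.823\times 10^{18}T_{\mathrm{sp}}\int \tau dv`$ (cgs, with $`v`$ in km/s); as an illustration (our sketch), a flat opacity of 0.11 across roughly 125 km/s reproduces the counterjet value:

```python
import numpy as np

def hi_column(tau, dv_kms, T_spin=1.0):
    """N(HI) in cm^-2 from N = 1.823e18 * T_spin * sum(tau) * dv."""
    return 1.823e18 * T_spin * np.sum(tau) * dv_kms

tau = np.full(17, 0.11)     # 17 channels of 7.4 km/s, ~125 km/s in total
print(hi_column(tau, 7.4))  # ~2.5e19 (per unit spin temperature)
```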
## 4 Discussion
Having most of the H i column located in front of the counterjet at a projected distance of $`2.5`$ pc is consistent with a number of previous results on NGC 4261. First, at distances closer to the nucleus we do not expect H i absorption, since we know the circumnuclear material is mostly ionised. This is shown by the free-free absorption at a projected radius of 0.2 pc inferred by Jones & Wehrle (jones97 (1997)). Secondly, NGC 4261 harbours an X-ray source ($`10^{41}`$ erg/s in the 0.2-1.9 keV range, Worrall & Birkinshaw worrall94 (1994)), which puts a limit on the total column density along the line of sight to the nucleus. The model presented by Worrall & Birkinshaw (worrall94 (1994)) yields an upper limit on the total column density of $`4\times 10^{20}\mathrm{cm}^{-2}`$. Given that the X-rays preferentially originate in the nucleus, this fits in comfortably with the constraints from our VLBI H i absorption.
The model fitting from which we derive the optical depth does not allow the positions of the components to vary (see Sect. 3), nor is there any sensitivity for H i beyond the end of the continuum counter-jet. We are therefore forced to make a simplifying assumption, namely that 18 mas is the mean radius of the H i absorbing structure. This is supported by the fact that most of the VLA absorption is recovered by the VLBI observations (Sect. 3). Given the HST dust disk inclination, this implies a distance of 5.7 pc away from the nucleus. The FWHM of the line is comparable with other H i absorption observations of circumnuclear gas (e.g. in Cyg A, Conway & Blanco conway95 (1995)). Therefore, in the next step we assume that the atomic gas is part of such a circumnuclear rotating structure and not due to individual clouds randomly distributed in front of the continuum source. Such a model of the H i disk is supported by the nuclear parameters derived by Ferrarese et al. (ferrarese96 (1996)) from HST data on optical transitions. They found a central mass of $`4.9\times 10^8\mathrm{M}_{\odot }`$, which implies a rotational velocity of 610 km/s at the location of the H i. Under the standard assumption that the linewidth $`\mathrm{\Delta }V`$ provides an estimate of the isotropic turbulent velocity, we use the thin disk relation $`h\approx r(\mathrm{\Delta }V/V_{\mathrm{circ}})`$ to estimate the disk thickness $`h`$. We estimate the velocity dispersion $`\mathrm{\Delta }V`$ at radius $`r`$ to be $`130\mathrm{km}/\mathrm{s}`$, which gives $`h=1.3`$ pc. So the H i is likely to reside in a thin circumnuclear disk with an opening angle of $`13^{\circ }`$. The average density can, assuming a volume filling factor $`f`$ of unity, be estimated to be $`n_{\mathrm{HI}}=6\times 10^2\mathrm{cm}^{-3}`$ for a spin temperature of 100 K. It follows that a more clumpy distribution ($`f<1`$) will increase the estimated density ($`f^{-1/3}`$) and decrease the estimated mass ($`f^{2/3}`$). However, adopting $`f=1`$ for simplicity, a mass estimate of H i inside a homogeneous disk of radius 6 pc is $`M_{\mathrm{HI}}\approx 2\times 10^3\mathrm{M}_{\odot }`$. Such a mass would be enough to supply material to the source for $`<3\times 10^7`$ years (assuming a radiative efficiency $`\eta <10\%`$), given that the total luminosity of the radio source is $`3.6\times 10^{41}`$ erg/s (e.g. Ferrarese et al. ferrarese96 (1996)). Using the correlation between FRI source sizes and their age (Parma et al. parma99 (1999)), the size of NGC 4261 (Jaffe & McNamara jaffe94 (1994)) implies an age $`\sim 3\times 10^7`$ years. The same correlation shows other FRIs with ages $`>10^8`$ years. Hence, on this time-scale the H i mass we estimate is barely sufficient to fuel the source. It seems more plausible that there is a continuous flow of accreting material being transported from the 100 pc scale dust disk onto the central nucleus.
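A compact numerical version of these estimates (our sketch; constants in cgs, inputs as quoted in the text):

```python
import numpy as np

G, pc, M_sun, m_H = 6.674e-8, 3.086e18, 1.989e33, 1.673e-24  # cgs constants

M_bh = 4.9e8 * M_sun   # central mass (Ferrarese et al. 1996)
r    = 5.7 * pc        # deprojected radius of the absorbing HI
dV   = 130e5           # velocity dispersion, cm/s

V_circ = np.sqrt(G * M_bh / r)                         # ~610 km/s
h      = r * dV / V_circ                               # ~1.3 pc
n_HI   = 2.5e19 * 100.0 / h                            # ~6e2 cm^-3 (T_sp = 100 K)
M_HI   = np.pi * (6 * pc)**2 * h * n_HI * m_H / M_sun  # ~2e3 M_sun
print(V_circ / 1e5, h / pc, n_HI, M_HI)
```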
The circumnuclear torus- or disk-structures observed in H i are usually found on slightly larger scales (50–100 pc; e.g. Gallimore et al. 1999 and Conway 1999). Only in a few other cases is the H i found to lie on very small scales ($`<10`$ pc in NGC 4151; Mundell et al. 1996 and Gallimore et al. 1999), and it is not obvious that H i survives so close to the nucleus. For gas irradiated by X-rays an effective ionisation parameter $`\xi _{\mathrm{eff}}`$ can be defined, which governs the physical state of the gas (Maloney et al. 1996). For $`\xi _{\mathrm{eff}}<10^{-3}`$ the gas is likely to be molecular with gas temperatures close to or below 100 K, while higher values of $`\xi _{\mathrm{eff}}`$ correspond to a hotter atomic gas phase. Following Maloney et al. (1996), we use $`\xi _{\mathrm{eff}}=L_\mathrm{x}/(r^2nN_{22}^{0.9})`$, where $`L_\mathrm{x}`$ is the hard ($`>1`$ keV) X-ray luminosity, $`r`$ is the distance from the nucleus to the irradiated gas, $`n`$ is the gas density and $`N_{22}`$ is the column density in units of $`10^{22}`$ cm⁻². At the distance of 6 pc a gas density of $`6\times 10^2`$ cm⁻³ yields an (atomic) obscuring column density of $`N_{22}\sim 1.1`$. Using the hard X-ray luminosity of NGC 4261 ($`10^{41}`$ erg/s, Roberts et al. 1991) this results in $`\xi _{\mathrm{eff}}=0.5`$, implying a mainly atomic gas phase where the gas temperature is likely to exceed 1000 K (Maloney et al. 1996). As a consequence the spin temperature is probably larger than 100 K, and our estimates of the H i mass and density will only be lower limits.
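As a cross-check, the ionisation parameter can be evaluated in raw cgs units with the formula exactly as quoted (a sketch using the numbers from the text):

```python
pc = 3.086e18                # cm
L_x = 1e41                   # hard X-ray luminosity, erg/s (Roberts et al. 1991)
r = 6.0 * pc                 # distance of the irradiated gas from the nucleus
n = 6e2                      # gas density, cm^-3
N_22 = n * r / 1e22          # column density in units of 1e22 cm^-2

xi_eff = L_x / (r**2 * n * N_22**0.9)
print(N_22, xi_eff)          # ~1.1 and ~0.5
```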
We conclude that within the scope of this model, it is indeed possible to have an atomic structure on the scales sampled by our VLBI observations. The inner boundary of this region is naturally set by the location of the free-free absorption, which also must be geometrically thin in order to leave the core unattenuated. On the outside, the structure changes over into a dust disk which is visible to HST from its innermost pixel, at $`r\sim 6`$ pc, out to 240 pc. However, since one would think that the mm radiation originates from the flat spectrum core, it is difficult to reconcile the reported CO absorption (Jaffe & McNamara 1994) with a thin molecular disk. Apart from the unknown location of the CO gas, the evidence points to the FRI radio-source in NGC 4261 being powered by gas infall through a relatively thin disk with a clear gradient of excitation conditions.
###### Acknowledgements.
The scientific observations presented in this paper were made possible by the dedication and expertise of the teams involved in constructing the EVN MkIV data processor and implementing the EVN MkIV upgrade at the stations. We acknowledge especially the effort required by the correlator team to produce the data for this project at such an early stage.
# Theoretical Transmission Spectra During Extrasolar Giant Planet Transits
## 1 Introduction
The recent transit detection of HD 209458 b (Charbonneau et al. 1999a; Henry et al. 1999ab) concludes a period of eager anticipation since the first close-in extrasolar giant planet (CEGP), 51 Peg b, was discovered in 1995 (Mayor & Queloz 1995). Five close-in extrasolar giant planets with orbital radii $`\lesssim 0.05`$ AU are known, and there are an additional six closer than 0.15 AU (see e.g. Schneider 1999). The chance of a transit for a CEGP — assuming random alignment of the orbital inclination — is roughly 10%. Transits have been excluded for five of the 11 above planets (Henry et al. 2000, 1997; Baliunas et al. 1997; Henry, private communication.) The transit of HD 209458 b confirms that the CEGPs are gas giants, gives the planet radius, and fixes the orbital inclination, which removes the $`\mathrm{sin}i`$ ambiguity in mass and provides the average planet density.
While the transit gives important physical parameters for the planet, it cannot provide any information about the planet’s atmosphere. Nevertheless, the near edge-on orbital inclination means that HD 209458 b is promising for a number of different types of planet atmosphere investigations. Two of these involve optically reflected light and benefit from the nearly full phase of the planet: spectral separation of the combined star-planet light (Cameron et al. 1999; Charbonneau et al. 1999), and photometric observations of the phase curve from reflected planetary light (Seager, Whitney, & Sasselov 1999). (See Seager et al. 1999 for a full discussion.) A third method is the transmission spectra discussed here, which requires the near edge-on orientation.
The idea of spectral transmission observations is not new: they have been observed in binary stars (e.g. Eaton 1993); they have been extensively observed and analyzed for occultations of stars and the Sun by planets in our Solar System (e.g. Smith & Hunten 1990); they are one motivation for large transit surveys of stars with no known CEGPs (e.g. Vulcan Camera Project (PI W. Borucki), STARE (PI T. Brown), WASP (PI S. Howell)); they have been discussed briefly for extrasolar planets (e.g. Schneider 1994; Charbonneau et al. 1999a); and they have been discussed for extrasolar planet exospheres (Rauer et al. 2000). Here, for the first time to our knowledge, we quantify the method for CEGPs and provide estimates of specific spectral features in the combined star-planet light during a planet transit. Many more CEGPs will be detected in the near future, both by ongoing radial-velocity searches and by wide-field transit searches. The transit transmission lines will provide us with constraints on cloud-top location, and line-of-sight column density, temperature ($`T`$), and pressure ($`P`$).
## 2 Transmission Spectra
During an EGP planetary transit, the planet passes in front of the star and occults the stellar flux by an amount equal to the ratio of the planet-to-star areas. During the transit, some of the stellar flux will pass through the optically thin part of the planet atmosphere, the part of the atmosphere above the planet limb. In stellar occultations by planets in the Solar System, the limb of giant planets is usually defined at (1) the cloud tops, or (2) at the 1-bar level (Atreya 1986). Here we define the planet limb as the boundary (e.g. the cloud tops) above which the planet's atmosphere is transparent to the stellar continuum radiation. The cloud tops are taken to be 1 pressure scale height above the cloud base, which due to irradiation heating is expected to be well above the 1 bar level (Seager 1999). We call the entire atmosphere above the limb the "transparent atmosphere", although the transparent atmosphere is optically thick in some transitions. Below the limb the optically thick clouds prevent radiation from being transmitted through the atmosphere.
The ratio of the planet's projected transparent atmosphere area to star-minus-planet area is small, on the order of $`10^{-4}`$–$`10^{-3}`$, using $`R_{\ast }=1.3R_{\odot }`$, $`R_P=1.54R_J`$ (based on HD 209458 parameters from Mazeh et al. 2000), and using an estimate for the limb radial depth of 0.01 $`R_P`$ to 0.05 $`R_P`$. The planet's absorption features will be superimposed on the observable stellar flux, and will appear at the $`10^{-4}`$ level below the continuum flux. Very strong spectral features are needed for detection; essentially the absorption features must be optically thick. In contrast to reflected planetary light, the transit transmission flux is not diluted by the planet-star distance, and is not cloud albedo-dependent.
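This range follows from simple disk geometry. The sketch below is illustrative: the solar and Jovian radii are standard values, and the two limb depths are the assumptions stated above.

```python
import numpy as np

R_sun, R_jup = 6.96e10, 7.15e9     # cm
R_star = 1.3 * R_sun
R_p = 1.54 * R_jup

for depth in (0.01, 0.05):         # assumed limb radial depth in units of R_p
    d = depth * R_p
    annulus = np.pi * ((R_p + d)**2 - R_p**2)   # projected transparent atmosphere
    background = np.pi * (R_star**2 - R_p**2)   # star-minus-planet area
    print(depth, annulus / background)          # ~3e-4 and ~1.5e-3
```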
During Solar System planet occultations and binary star occultations successive measurements are made as the planet occults the star (i.e. during ingress or egress). The time-dependent change in spectral features provide column density and temperatures for different atmosphere heights. Because the CEGP is much smaller and fainter than the parent star this will not be possible in the near future. The best orbital phase for the transmission spectra observations of CEGPs is when the planet is fully projected on the visible hemisphere of the star so that the planet’s projected transparent atmosphere takes out the greatest area and limb darkening from the star is at its minimum.
Refraction through the CEGP's atmosphere has to be accounted for in defining the limb and in computing the total optical path of rays reaching us through it. During the solar transit of Venus in 1761 an obvious refraction effect at second contact convinced Lomonosov in St. Petersburg that Venus had an atmosphere (Cruikshank 1983). However, the cloud tops in our model of HD 209458b (see below) are high in its atmosphere, where the gas density is fairly low and refraction is small. In an isothermal atmosphere with gas scale height $`H`$, the angle of refraction for a ray passing at planetocentric distance $`r`$ is: $`\theta =\nu (r)[(2\pi R_P)/H]^{1/2}`$, where $`\nu (r)=n-1`$ is the atmospheric refractivity at $`r`$ (for an $`H_2+He`$ mixture at STP: $`1.2\times 10^{-4}`$), with $`n(\lambda ,r)`$ the index of refraction. Refraction introduces a lengthening of the pathways of the rays through a spherical stratified atmosphere: $`\mathrm{\Delta }s=0.5z\theta ^2`$, where simply $`z^2=(R_P+H)^2-R_P^2`$. Unlike stellar occultations in our Solar System, in CEGP transits the parent star is a very extended background source and subtends a significant solid angle at its orbit. This allows us to observe rays deflected at angles larger than the average isothermal $`\theta `$ through the densest optically thin layers of the atmosphere. However, the cloud tops in HD 209458b are still at $`P<10^5`$ dyn cm⁻², and $`\mathrm{\Delta }s/s<`$ 2%.
### 2.1 Close-in EGPs Transparent Atmosphere Model
We consider the only currently known EGP system with an observed transit, HD 209458. We use the stellar parameters $`R_{\ast }=1.3R_{\odot }`$, $`T_{\mathrm{eff}}=6000`$ K, log $`g`$ = 4.25, and \[Fe/H\] = 0.0, derived from evolutionary calculations and fits to spectroscopic data in Mazeh et al. (2000). We use the planetary parameters $`R_P=1.54R_J`$, $`M_P=0.69M_\mathrm{J}`$, log $`g`$ = 2.9, and $`i=85.2^{\circ }`$ derived from the transit observations and radial velocity observations, together with the stellar parameters (Mazeh et al. 2000). For a circular orbit we derive a semimajor axis $`a=0.0468`$ AU. We compute the incident flux of HD 209458 with the above parameters from the model grids of Kurucz (1992).
We compute the CEGP atmosphere (temperature-pressure (T-P) profile and emergent spectra) using our code described in Seager (1999) and Seager et al. (1999). This code is improved over the version described in Seager & Sasselov (1998) in two major ways. One is a Gibbs free energy minimization code to calculate solids and gases in chemical equilibrium; the second is condensate opacities for 3 solid species. So while in Seager & Sasselov (1998) we considered neither the depletion of TiO nor formation of MgSiO₃, in the new models we do. One of the largest uncertainties in the atmosphere models is the location of clouds, and the cloud particle type and size. In Seager et al. (1999) we find that the T-P profile and the emergent flux (reflected + thermal) depend entirely on the condensate assumptions.
Because we define the planet limb at the cloud tops, the cloud wavelength-dependent albedos are not directly important. Furthermore, even if the clouds were not optically thick, they would not superimpose any spectral features on the optical stellar flux since their extinction (absorption + scattering) is generally a smooth function of wavelength.
#### 2.1.1 Transparent Atmosphere Temperature-Pressure Profile
We use the radial T-P profile generated in the self-consistent irradiated atmosphere code, and construct a limb depth and line-of-sight T-P profile from geometrical considerations. The limb depth of the planet, as defined above, is approximately 0.01 $`R_P`$. Together with $`R_P=1.54R_J`$ and $`R_{\ast }=1.3R_{\odot }`$, this thickness gives a ratio of the projected transparent atmosphere to the stellar disk of $`3.6\times 10^{-4}`$. The line-of-sight T-P profile describes a plane-parallel column of gas through which the stellar flux passes. The gas is optically thin; flux at most wavelengths passes through relatively unattenuated. We compute the sum through several line-of-sight columns of gas, from the densest column tangential to the cloud top to the column that just skims the uppermost atmosphere. However, in an atmosphere with exponential density fall-off, almost all of the absorption occurs in the densest column.
To solve for the radial atmosphere structure we must make a specific assumption about cloud particle type and size. Here we consider 10 $`\mu `$m grains of MgSiO₃, Fe, and Al₂O₃. For this particular model ($`T_{\mathrm{eff}}=1350`$ K), the transparent atmosphere above the limb consists of gas at 850–1000 K with pressures of 20–600 dyne cm⁻². The main constituents of a gas at these temperatures and pressures are H₂, CO, H₂O, and He. Because of the irradiative heating, CO dominates over CH₄ in this CEGP model. Most gases are in molecular form with the exception of He and the alkali metals. The volatile elements (e.g. Mg, Ca, Ti) have condensed into grains and we assume they have settled to within 1 pressure scale height of the cloud base. Photochemistry is not included, but the UV radiation from the parent star could photoionize a small fraction of H₂, CO, etc. through the transparent atmosphere. This, however, would have little consequence on the results (see §2.2.3).
#### 2.1.2 Cloud Models
We caution that the transmission spectra presented here are estimates that depend on several assumptions. Once observations are successful, a more careful computation to accurately interpret the data is necessary. One point to make is that the location of the cloud base — and hence the cloud tops — depends on the irradiative heating of the planet atmosphere, which itself depends heavily on the absorptivity of the type and size of condensates present throughout the planet atmosphere. A model with absorptive grains causes the upper atmosphere temperature to be higher compared to a model with highly reflective grains, and the grain condensation boundary will be closer to the top of the planet atmosphere. In the model used here we consider 10 $`\mu `$m grains of MgSiO₃, Fe, and Al₂O₃. While the MgSiO₃ clouds are the highest in the atmosphere, all are important for computing the irradiated T-P profile. The choice of cloud particle type and size distribution is highly uncertain; this is the most complex free parameter of the models. In Seager et al. (1999) we discuss this in more detail.
Another major assumption used is that the radial T-P profiles are similar on all parts of the planet. This would be the case if strong winds redistribute the heat efficiently (e.g. as on Jupiter where there is no apparent difference from one side of the terminator to the other.) A different line-of-sight T-P profile does not change our predictions much; the flux-ratio is set by the planet limb, star area, and planet area, and the features are superimposed on that. Indeed it is very difficult to predict the exact line shape with so many unknowns about the planet atmosphere such as element abundances (from both planet metallicity and non-equilibrium chemistry) and cloud location due to heating.
### 2.2 Results
The CEGP and parent solar-type star have almost no spectral features in common; at effective temperatures ($`T_{\mathrm{eff}}`$s) of $`\sim `$1100 K to $`\sim `$1600 K, the CEGPs are almost 5 times cooler than the stars at $`T_{\mathrm{eff}}`$s of 5000–6000 K. Thus, the transmission spectra method is promising. In this subsection we discuss strong signals in the planetary atmosphere.
#### 2.2.1 Alkali Metals
The Na I and K I resonance doublets are predicted to be strong in CEGPs (Seager et al. 1999; Sudarsky et al. 1999), assuming the atmospheres are similar to brown dwarfs and cool L dwarfs which have similar $`T_{\mathrm{eff}}`$s. Alkali metals have clear signatures in cool L dwarfs and brown dwarfs where the metals such as Ti have condensed out of molecular form into solids, depleting the strong optical molecular absorbers. The K I doublet $`4^2S`$–$`4^2P`$ absorption line at 767.0 nm is extremely broad in methane dwarfs, such as Gliese 229B, which have $`T_{\mathrm{eff}}`$s a few hundred degrees lower than the CEGPs. The broad wings of the K I resonance doublet extend for several tens of nm and are responsible for the large continuum depression in the optical and flux slope redward to 1 $`\mu `$m (Tsuji et al. 1999; Burrows et al. 1999). The broad lines are caused by strong pressure broadening by H₂, and are in part so prominent because there are no other strong absorbers present at those wavelengths throughout the entire atmosphere. There is some question about the shape of the alkali metal lines in the CEGP atmospheres; they will be very broad if the clouds are low in the atmosphere and the large pressures deep in the atmosphere can contribute to strong line broadening. Sudarsky et al. predict this scenario, where K I and Na I absorb essentially all incoming optical radiation redward of 500 nm. Sudarsky et al. use ad hoc modified T-P profiles, to simulate heating, which have clouds at the 10 bar level. Seager et al. (1999) include heating from the parent star on the T-P profile, and find that the lines are much narrower, since clouds exist higher in the atmosphere at lower pressure. Observations should be able to distinguish between the two cases.
With the columns of gas defined above (§2.1.1) we compute the attenuation of the incoming intensity with the radiative transfer equation in the limit of no scattering, $`I(\nu ,z)=I_{\ast }(\nu ,z)\mathrm{exp}(-\kappa (\nu ,z))`$, where $`I_{\ast }(\nu ,z)`$ is the stellar intensity, $`\nu `$ is the frequency, and $`z`$ is the depth along the line-of-sight through the planet's transparent atmosphere. Here $`\kappa (\nu ,z)`$ is the extinction which includes absorption and scattering. Our code includes the dominant opacities expected for cool L dwarfs and brown dwarfs: H₂O, TiO, CH₄, H₂-H₂ and H₂-He collision induced opacities, Rayleigh scattering from H, He, H₂, and the alkali metal lines. The oscillator strengths and energy levels for the lower levels of the alkali metals (Na, K, Li, Rb, Cs) were taken from Radzig & Smirnov (1985) and the Kurucz atomic line list (Kurucz CD ROM 23), and we compute line broadening using a Voigt profile with H₂ and He broadening and Doppler broadening. We do not need to solve the full radiative transfer equation since we assume the effect of transmitted intensity through the planet's transparent atmosphere is negligible with regard to the radiative structure which is already accounted for in the irradiation model.
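To make the geometry of this attenuation sum concrete, here is a toy numerical version; the Gaussian line profile and every parameter value are arbitrary stand-ins, not the opacities of our model.

```python
import numpy as np

nu = np.linspace(-5, 5, 201)            # frequency offset in Doppler widths
H = 1.0                                 # scale height (arbitrary units)
heights = np.arange(0, 5 * H, 0.5 * H)  # tangent heights above the cloud tops
profile = np.exp(-nu**2)                # Doppler core (Voigt wings omitted)
tau0 = 50.0                             # line-centre optical depth, lowest chord

transmitted = []
for z in heights:
    tau = tau0 * np.exp(-z / H) * profile   # column falls off exponentially
    transmitted.append(np.exp(-tau))        # I/I_star = exp(-kappa)
transmitted = np.mean(transmitted, axis=0)  # crude average over the chords

print(transmitted[100], transmitted[0])     # opaque line core, clear continuum
```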
Figure 1 shows the flux from the star and the stellar flux that has passed through the planet’s transparent atmosphere. The curves are essentially the same in the UV through the optical, with the exception of the absorption lines, including Na I at 285.4 nm and 342.8 nm, the Na I resonance doublet at 589.4 nm and the K I resonance doublet at 767.0 nm. The He I triplet line is at 1083 nm. In the infrared, the stellar flux is absorbed by the water bands, as well as the 5$`\mu `$m and the 3.3$`\mu `$m methane band (not visible in the figures). No molecular features such as TiO appear in the spectra, since those molecules have been depleted into solids. Solar-type stars also have alkali metal lines but they are weak because most of the alkali metals are ionized. In addition, the stellar optical spectrum is crowded with other atomic absorption lines.
Figure 2 shows the normalized in-transit minus out-of-transit spectra, i.e. the transmission spectra as the percentage occulted area of the star at different wavelengths. In transmitted light the planet has a different effective size at different wavelengths. The zero point (at $`\sim `$1.47%) is set by the atmosphere depth at which the planet is optically thick at all wavelengths (the observed transit). The planet appears largest in the line cores, where photons passing very high in the planet atmosphere are still absorbed along the line of sight due to the strong absorption by the Na I and K I resonance lines. This figure differs from the model in Figure 1 in that we have considered the atmosphere out to the distance where the Na I resonance line becomes optically thin along the line of sight. In this case the limb depth is several percent of $`R_P`$ (where the limb coincides with the observed transit). Rayleigh scattering from H₂ is important below 200 nm; otherwise it is negligible. The Na I resonance doublet is broader than the K I resonance doublet because its abundance is an order of magnitude higher.
Figure 2 also shows the transmission spectra from an atmosphere with a cloud base much deeper than predicted by our model, at 0.2 bar instead of at 2.4$`\times 10^{-3}`$ bar. (We note that in our range of models for different condensate type and size the highest clouds have bases ranging from roughly 0.5 to $`10^{-3}`$ bar.) There will be two main consequences. First, the transparent atmosphere area will be larger, resulting in the planet's total line depths being larger with respect to the zero point. Second, the stellar flux will pass through higher densities, pressures, and temperatures. Rayleigh scattering is strong in this relatively high density transparent atmosphere. When higher pressures are sampled by the transmitting photons, the lines become more pressure broadened. This is seen in the Na I and K I resonance doublets. The higher densities will cause stronger alkali lines and additional absorption lines from other transitions that were too weak to appear in the low densities of our line-of-sight model, for example, non-resonance lines of Na I and K I. Observations of the alkali metal lines will be able to constrain cloud location. Exospheric escape or photoionization by stellar UV radiation could affect the line cores.
#### 2.2.2 Neutral Helium
We expect a strong absorption line at 1083 nm due to scattering of background stellar photosphere photons off the overpopulated He atoms excited to the triplet $`2^3S`$ metastable state. The mechanism works as follows. The stellar EUV radiation shortward of 50.4 nm will photoionize neutral He atoms and they will recombine at the local kinetic temperature, which may be as low as 800 K in the upper atmosphere of the planet. The He I recombination cascade is efficient for the singlet states, but stops at $`2^3S`$ for the triplet states which lack a fast radiative decay path to the ground state. Due to the low local kinetic temperature, collisional de-excitation is negligible. On the other hand, the number of 1083 nm continuum photons from the G0V star is very large; they scatter efficiently in the $`2^3S`$–$`2^3P`$ transition and produce the strong absorption feature in the transmission spectrum.
Versions of this mechanism are responsible for enhanced He I 1083 nm absorption in solar spectra (Zirin 1975; Andretta & Jones 1997), Algol binary systems (Zirin & Liggett 1982), etc. In the case of HD 209458, a sun-like star, we assume the EUV radiation to be that of the Sun (from Tobiska 1991). We use the T-P distribution of the transparent atmosphere generated in the Seager & Sasselov code and illuminate it with that diluted EUV field, simultaneously solving the NLTE transfer for a helium model atom with singlet and triplet states up to $`n=4`$. The details of this calculation are essentially the same as in Sasselov & Lester (1994). One has to realize that this calculation is more of an estimate than a fully consistent treatment of He I 1083 nm line formation in the unusual conditions of the EGP's atmosphere. However, the mechanism described above is very robust and, given no other competing target(s) for the EUV photons except H and H₂, the $`2^3S`$–$`2^3P`$ transition is optically thick at line center. In fact, He (and H) are prone to creating an extended exosphere around the planet; if so, the He I 1083.0 nm absorption may well be extremely strong (due to a much larger transparent atmosphere) and easy to observe. The broadening of the line ($`\sim `$0.3 nm) is not significant, but this needs further study in terms of He collisional broadening by molecular species like H₂.
In the spectrum of the inactive parent star, the He I 1083.0 nm triplet line is extremely weak, if it is present at all. This makes it a promising signature in the combined planet-star flux. Enhanced absorption of the $`3^3P`$–$`3^3D`$ transition of C is also seen in binary systems, but in the CEGP atmospheres the C is locked in CO or CH₄.
#### 2.2.3 UV and Infrared Wavelengths
Solar System outer planet occultation transmission lines have been very successfully observed in the UV, where molecules such as H₂, N₂, and O₂ have strong absorption signatures. In addition, in CEGPs the H resonance line transitions are strong (and may be enhanced by photodissociation of H₂) and alkali metal resonance line absorption appears in the near-UV (Figure 2). For the CEGPs orbiting sun-like stars, the UV flux is very low, and the lines — while useful diagnostics — will be difficult to detect.
The spectral transmission signatures in the infrared (e.g. H₂O and CH₄) shown in Figure 1 will be difficult to distinguish from the planet's own thermal emission, which is present at all phases. More importantly, redward of 2000 nm, the planet's thermal emission, which roughly follows a blackbody (dotted line in Figure 1), will be much stronger than the transmitted spectrum which follows the optical-peaking stellar spectrum. This is in contrast to the optical where the planet has no emission; during transit the optical transmission spectrum is the planet's only contribution to the total flux. Blueward of 2000 nm the transmission spectrum may be brighter than the planet's own emission, and for the $`2.4\times 10^{-3}`$ bar cloud base model, observations in this region may be very promising. (Note that for some models the planet's thermal emission can be much higher than a blackbody in that region (Seager 1999)). CH₄ is an important temperature diagnostic, because the temperature-pressure profiles of the CEGPs fall near the CO/CH₄ equilibrium curve (Seager 1999, Goukenleuque et al. 1999). The strength of the methane absorption could indicate the temperature in the planet's upper atmosphere layers, which in turn will distinguish between different irradiated models.
In principle the infrared brightness of the planet will increase slightly just after first contact as illumination passes through the top of the transparent atmosphere, but against the backdrop of the star this will not be detectable. CO lines (not included in this model) may also be good transmission signatures in the infrared.
#### 2.2.4 Comparison of spectra using different $`R_P`$ and $`R_{\ast }`$
We have also run calculations using $`R_P=1.27R_J`$ (and $`i=87^{\circ }`$) from Charbonneau et al. (1999a), who assumed $`R_{\ast }=1.1R_{\odot }`$. In this case the ratio of planet-to-star area is slightly lower ($`\sim `$5%) than with the correct values due to the higher inclination. In our self-consistent models, the effect of a smaller planet radius and smaller stellar radius decreases the transmitted-to-stellar flux ratio by an even smaller amount. The reason is that a smaller star and planet means the star-planet distance is larger, making the planet's atmosphere slightly cooler. The cooler atmosphere has the MgSiO₃ cloud-top lower in the atmosphere and hence has a larger transparent atmosphere area compared to the hotter model. For example, the equilibrium effective temperature is $`T_{\mathrm{eq}}=T_{\ast }(R_{\ast }/2D)^{1/2}(1-A)^{1/4}`$, where $`T_{\ast }`$ is the effective temperature of the star, and $`A`$ is the Bond albedo. Because $`T_{\mathrm{eq}}\propto R_{\ast }^{1/2}`$, decreasing $`R_{\ast }`$ from 1.3$`R_{\odot }`$ to 1.1$`R_{\odot }`$ will change the planet's $`T_{\mathrm{eq}}`$ by $`\sim `$100 K.
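The quoted shift is easy to verify; in the sketch below the Bond albedo is an assumed value, not a result of our models.

```python
import numpy as np

R_sun, AU = 6.96e10, 1.496e13   # cm
T_star = 6000.0                 # K
D = 0.0468 * AU                 # semimajor axis derived above
A = 0.4                         # assumed Bond albedo

def T_eq(R_star):
    return T_star * np.sqrt(R_star / (2 * D)) * (1 - A)**0.25

print(T_eq(1.3 * R_sun), T_eq(1.1 * R_sun))   # ~1340 K and ~1230 K
```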
There is also a small change due to density. At the same optical depth, the smaller radius planet with a higher surface gravity will have a higher density compared to the planet with a larger radius. Thus in comparison the absorption lines will be slightly stronger, for the same optical depth.
## 3 Other EGPs
As of this writing there are 29 known extrasolar planets around 27 stars. $`M\mathrm{sin}i`$ ranges from 0.42 to 11 $`M_J`$ and planet-star distances from 0.042 AU to 2.5 AU. In principle observations of the transmission features of any transiting planet can be attempted. Many of the stars are being monitored for transits, and transits around several of these stars have been excluded (Henry et al. 2000, 1997; Baliunas et al. 1997). Three others of these known planet systems have their inclinations limited by observation of Kuiper-belt-like disks (Trilling et al. 1999). Wide-field transit searches are of course limited to transiting systems and are mostly sensitive to systems with orbital distances below 0.2 AU.
CEGPs much more massive than HD 209458 b will have a much smaller transparent atmosphere area. The planet radius is a weak function of mass (Guillot et al. 1996) and more massive planets are expected to be more compact as the degenerate core increases at the expense of the gaseous atmosphere. For a more massive planet with all other parameters equal (atmosphere structure, radius, etc.), the scale height is smaller, and, defined by optical depth, the entire atmosphere including the transparent atmosphere is smaller. For example $`\tau `$ Boo b, which has $`M\mathrm{sin}i=3.87M_J`$ (Butler et al. 1997), is at least 5.6 times more massive than HD 209458 b. CEGPs with higher $`T_{\mathrm{eff}}`$s than HD 209458 b (e.g. $`\tau `$ Boo b) may have cloud bases closer to the top of the atmosphere, also resulting in a smaller transparent atmosphere area. The flux ratio of transmitted to total stellar flux is also sensitive to the stellar radius, and would change by a factor of three for solar-type parent stars; from evolutionary calculations (Ford, Rasio, & Sills 1999) the solar-type parent stars of known CEGPs with orbital distances below $`\sim `$0.2 AU range from $`0.93`$ to $`1.56R_{\odot }`$.
Wide-field transit searches will find short-period planets orbiting stars where radial velocity planet detections are not possible, for example around active cool stars and hot stars that have rotationally broadened atomic lines and activity. The parent stars of known EGPs range from F6IV ($`\tau `$ Boo) down to M4V (Gliese 876). Planet transmission spectra may be difficult to disentangle from parent M stars whose optical spectra are very crowded with molecular lines such as TiO and VO. Nevertheless, because of the larger planet-to-star area ratio, the flux ratio is enhanced by a factor of $`\sim `$10. Jupiter-sized planets orbiting hot stars such as a B or O star would have a flux ratio decreased compared to solar by a factor of 10-100, but the UV flux (observed from space) would provide many useful molecular absorption signatures such as H₂, N₂, and O₂.
## 4 Summary and Prospects
We have estimated the transmission spectra of a CEGP during an occultation of the parent star. We find very strong absorption signatures of Na I and K I, and a strong signature of the He I $`2^3S`$–$`2^3P`$ triplet line at 1083.0 nm. We find the number, strength, and depth of spectral features are sensitive to the cloud-top depth in the planet atmosphere.
Detecting spectral features will require high resolution, high signal-to-noise observations (e.g. with Keck HIRES). During the transit of HD 209458, the Doppler shift of the planet is strong enough so that it should be taken into account when analysing the spectra. Spectral separation techniques, designed to detect absorption signatures at the $`<10^{-4}`$ level (e.g. Cameron et al. 1999; Charbonneau et al. 1999b), may be necessary to recover the weak planet signal. Systematic red or blue shifting from winds blowing between the day and night side may have a detectable effect on the planet spectrum (D. Charbonneau and T. Brown, private communication). Measurements of metallic lines will also constrain (together with $`M_P`$, $`R_P`$, and $`\rho _P`$) the interior models and shed light on the formation scenario of CEGPs, e.g. core accretion vs. gravitational disk instability. The CEGP's extended exosphere may be easy to detect in the He 1083.0 nm transition.
If successful, observations of transmission spectra will be the first made of an extrasolar planet atmosphere, and will provide constraints on the upper atmosphere column density, temperature, and pressure. In addition the observations should easily constrain the cloud-top depth, which naturally defines the planet limb. This information will help distinguish between atmosphere models. Most importantly, detection of the alkali metal absorption lines will confirm the very basic postulate that the CEGPs have similar atmospheres to those of methane dwarfs and cool L dwarfs which have similar $`T_{\mathrm{eff}}`$s.
We are grateful to Dave Latham for providing the stellar and planet parameters for the HD 209458 system before publication. We thank Bob Noyes, Tim Brown and Mark Marley for reading the manuscript and for helpful comments and discussion. We also thank Dave Charbonneau and Avi Loeb for useful discussions. SS is supported by NSF grant PHY-9513835. DDS acknowledges support from the Alfred P. Sloan foundation. Note added in proof: Mazeh et al. (2000) gives $`R_P=1.4\pm 0.17R_J`$ and $`R_{\ast }=1.2\pm 0.1R_{\odot }`$, corrected from an earlier version quoted in this paper. We have not incorporated these values in this paper which is meant to be a conceptual description and estimate of CEGP transmission spectra rather than an exact prediction.
# Extraction of information about periodic orbits from scattering functions
## Abstract
As a contribution to the inverse scattering problem for classical chaotic systems, we show that we can select sequences of intervals of continuity, each of which yields information about the period, eigenvalue and symmetry of one unstable periodic orbit. PACS: 03.80.+r; 05.45.+b; 94.30.Hn
In the framework of the study of the inverse scattering problem for classical chaotic scattering in two dimensions, the attention has been focussed on the topology of the chaotic saddle and on the definition of an appropriate partition ; thermodynamic quantities were also discussed marginally . In none of these approaches were the specific properties of the unstable periodic orbits explicitly searched for. Yet knowledge of these goes a long way towards understanding the chaotic saddle. Usually the shortest orbits overshadow the chaotic set or, in other words, they form the skeleton of the globally unstable component of the invariant set. As far as semi-classical approximations are concerned, these orbits form the backbone of all considerations that lead to trace formulae . The most useful pieces of information are their periods and Lyapunov exponents. These can be used to determine the hierarchical order in the pattern of intervals of continuity (henceforth abbreviated to IOC) in the scattering functions, and at the same time it is this very pattern which allows us to learn something about the periodic orbits from scattering functions.
In this article we shall focus on the latter part, i.e. on obtaining the periods and the Lyapunov exponents of unstable periodic orbits from regular patterns of the IOC of scattering functions. The latter we define as a function of the points on a line in the space of initial conditions that gives some property of the scattering process, such as the time delay or the scattering angle (cf. fig. 1). After recalling some known results concerning the external periodic orbits we present two methods to achieve our goal for periodic orbits that are both fairly short and not too unstable. We illustrate these methods by scattering a charged particle off a magnetic dipole. The reader may want to refer to the figures of the example while reading the description of the general method.
The trajectories belonging to one IOC of a scattering function all leave the interaction region crossing the same external orbit i.e. leaving through the same saddle or “doorway”. When the initial condition approaches one of the boundary points of the IOC from the inside, the scattering trajectory executes an increasing number of revolutions along the saddle orbit before it finally leaves the interaction region. For each revolution we expect an oscillation of the scattering angle (cf fig. 2). The frequency of this oscillations increases with the diverging time delay. An asymptotic observer can determine a sequence of initial conditions from within one and the same IOC, which are separated by a full oscillation of the scattering angle. This sequence converges to the boundary of the IOC in such a way, that, in the limit, the distance from the boundary point decreases by a factor determined by the eigenvalue of the saddle orbit. In the same limit the time delay increases by the period of this orbit, when going from one member of this sequence to the next. From these two quantities the Lyapunov exponent can be calculated. If there are several external orbits we can investigate each of them separately, choosing an appropriate IOC, where the outgoing scattering trajectories cross exactly this orbit before finally leaving the interaction region. Information about any of the internal orbits cannot be obtained in this way.
Based on this result we propose two methods to obtain both periods and Lyapunov exponents for inner periodic orbits. The first will use the time delay function only whereas the second exploits in addition the other scattering functions. To achieve this we analyze how the inner orbits influence the hierarchical pattern of IOC.
The basic idea for our method is provided by the following property of chaotic scattering: To each periodic orbit $`\sigma `$, which can be reached from the outside, there exist infinitely many sequences of IOC (cf fig. 1) of the scattering functions. The common feature of these sequences is the following: As we move along the sequence, sooner or later one step implies precisely one revolution of the trajectory close to the periodic orbit $`\sigma `$. The central problem will be to identify a sequence which is clearly associated with one periodic orbit. If we knew such sequences in advance, then it would be easy to find the period and the Lyapunov exponent of the periodic orbit $`\sigma `$.
We label the IOC forming the selected sequence by $`\{I_\sigma ^{(k)}\}_{k=1,2,\dots }`$ where $`k`$ labels the order of the elements of the sequence, $`L(I_\sigma ^{(k)})`$ indicates the length of the IOC and $`t(I_\sigma ^{(k)})`$ a typical time delay for this IOC which we obtain by choosing a representative trajectory for each IOC. It is convenient to choose the representatives to have the absolute minimal time delay within an IOC.
At large time delays two consecutive representatives differ only in one additional revolution along the unstable periodic orbit. In order to extract periods and Lyapunov exponents of unstable periodic orbits from the time delay function, we need to know the representative time delays and lengths of as many IOC as possible. We determine the ratio $`\stackrel{~}{\lambda }_\sigma ^{(k)}=L(I_\sigma ^{(k)})/L(I_\sigma ^{(k+1)})`$ of the lengths of consecutive pairs of IOC and the difference of their representative time delays $`\stackrel{~}{T}_\sigma ^{(k)}=t(I_\sigma ^{(k+1)})t(I_\sigma ^{(k)})`$. We also define
$$\stackrel{~}{\mathrm{\Lambda }}_\sigma ^{(k)}=\frac{1}{\stackrel{~}{T}_\sigma ^{(k)}}\mathrm{log}(\stackrel{~}{\lambda }_\sigma ^{(k)}).$$
(1)
If the sequence was chosen appropriately, $`\stackrel{~}{T}_\sigma ^{(k)}`$ will converge to the period $`T_\sigma `$ of the orbit $`\sigma `$, $`\stackrel{~}{\lambda }_\sigma ^{(k)}`$ to its eigenvalue $`\lambda _\sigma `$ and $`\stackrel{~}{\mathrm{\Lambda }}_\sigma ^{(k)}`$ to its Lyapunov exponent $`\mathrm{\Lambda }_\sigma `$.
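In practice Eq. (1) amounts to a three-line computation once a sequence has been selected. The following sketch applies it to synthetic numbers (not to our scattering data) for a fictitious orbit with period $`T_\sigma =7`$ and eigenvalue $`\lambda _\sigma =12`$:

```python
import numpy as np

def orbit_estimates(lengths, delays):
    """Period, eigenvalue and Lyapunov estimates from one sequence of IOC."""
    lengths, delays = np.asarray(lengths), np.asarray(delays)
    T_k = np.diff(delays)                   # differences of representative delays
    lam_k = lengths[:-1] / lengths[1:]      # ratios of consecutive lengths
    return T_k, lam_k, np.log(lam_k) / T_k  # Eq. (1)

T_true, lam_true = 7.0, 12.0
k = np.arange(1, 8)
delays = 20.0 + T_true * k                      # delays grow by one period per step
lengths = 1e-3 * lam_true**(-k.astype(float))   # lengths shrink by the eigenvalue

T_k, lam_k, Lam_k = orbit_estimates(lengths, delays)
print(T_k[-1], lam_k[-1], Lam_k[-1])   # 7.0, 12.0, log(12)/7
```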
As mentioned above the choice of the sequence is critical and difficult. We therefore proceed to show how this choice can be partially avoided, simultaneously with the approximate evaluation of the quantities above. If we plot either $`\stackrel{~}{\lambda }_\sigma ^{(k)}`$ or $`\stackrel{~}{\mathrm{\Lambda }}_\sigma ^{(k)}`$ against $`\stackrel{~}{T}_\sigma ^{(k)}`$ we should find an accumulation point near the correct value of ($`\lambda _\sigma `$,$`T_\sigma `$) or ($`\mathrm{\Lambda }_\sigma `$,$`T_\sigma `$) respectively. Note that each of the infinitely many sequences associated with one periodic orbit will approach the same accumulation point. Furthermore, points ($`\stackrel{~}{\lambda }_\sigma ^{\prime }`$,$`\stackrel{~}{T}_\sigma ^{\prime }`$) or ($`\stackrel{~}{\mathrm{\Lambda }}_\sigma ^{\prime }`$,$`\stackrel{~}{T}_\sigma ^{\prime }`$) taken erroneously, because they belong to a sequence of a different periodic orbit, will have different accumulation points, except if both period and eigenvalue for the two orbits coincide. If the error is such that the pairs are obtained from IOC of sequences that belong to different periodic orbits no accumulation points are expected. It is thus tempting to perform this plot using points generated for all pairs of IOC available.
In principle we should then find accumulation points for many periodic orbits. This will not be implemented easily because if we have sufficient IOC to expect reasonable convergence along one sequence, the plot would have so many points that it would be very difficult to identify these accumulation points. Even worse, the discrete numerics may simulate structures that do not exist. In order to identify the accumulation points, only pairs of IOC that might belong to the same sequence have to be taken into account. Appropriate strategies to achieve this goal must be developed. An inspection of the hierarchical structure corresponding to the time delay function shows that successive IOC have to be connected by pieces of equal or higher hierarchical order. Therefore only pairs of IOC are tested with no intervening IOC possessing lower time delays. Further filtering may well be necessary.
* If we proceed using the time delay function as the only source of information, we can start to look for multiplets of IOC which apparently belong to one sequence. Each of its members should have a higher representative time delay than its predecessor without intervening IOC with lower time delays. If such a multiplet belongs to a single sequence its successive IOC should show roughly the same differences in time delay and ratios of successive lengths (cf fig. 3). By narrowing the limits for the allowed differences in time delay and ratios of the multiplets, improved convergence towards a periodic orbit can be promoted.
* If other scattering functions are also available, pairs of IOC that belong to a sequence can be identified by comparing their representative final conditions. Again only pairs of IOC without intervening IOC with lower representative time delays are taken into account. Since the final conditions of representative trajectories converge in the limit of high time delays to a point, two successive IOC can be tentatively identified by looking for those with close representative final conditions.
Both methods yield not only the basic periods but also, in a diminished extent, integer multiples thereof.
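A minimal implementation of the second strategy might look as follows; the tolerances correspond to the values used later for fig. 5, and the function is a sketch rather than the code actually used for the figures.

```python
import numpy as np

def candidate_pairs(t, alpha_out, b_out, d_alpha=0.1, d_b=0.01):
    """Pair IOC with nearly equal representative final conditions, keeping
    only pairs with no intervening IOC of lower representative time delay
    (the hierarchical filter described above)."""
    t = np.asarray(t)
    pairs = []
    for i in range(len(t)):
        for j in range(i + 1, len(t)):
            if t[j] <= t[i]:
                continue                    # delay must grow along a sequence
            if np.any(t[i + 1:j] < t[i]):   # intervening IOC with lower delay
                continue
            if (abs(alpha_out[i] - alpha_out[j]) < d_alpha
                    and abs(b_out[i] - b_out[j]) < d_b):
                pairs.append((i, j))
    return pairs
```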
To illustrate these methods we use the scattering of a charged point particle off a magnetic dipole. This model is also known as the Störmer problem . After eliminating the rotational degree of freedom and applying a suitable rescaling in cylindrical coordinates one is left with a Hamiltonian of the form:
$$H(\rho ,z)=\frac{1}{2}\left[p_\rho ^2+p_z^2+\left(\frac{1}{\rho }-\frac{\rho }{(\rho ^2+z^2)^{3/2}}\right)^2\right].$$
(2)
Here $`(p_\rho ,p_z)`$ are the momenta conjugate to $`(\rho ,z)`$. It is well known that the resulting phase flow represents a chaotic scattering system . An analytical proof for its non-integrability is given in . Every scattering trajectory is uniquely determined in the asymptotic region by the impact parameter $`b_{\mathrm{in}}`$ and the angle $`\alpha _{\mathrm{in}}`$ between its velocity vector and the negative $`\rho `$-axis. A detailed description of the chaotic dynamics of this system for the energy range $`[0.031,0.081]`$ can be found in . The two fundamental periodic orbits of the binary horseshoe will be called $`A`$ and $`C`$. The outer orbit $`C`$ is accessible from the outside for all energies whereas the inner one $`A`$ may be at the center of an elliptic island for some energies.
For illustration we will consider the hyperbolic case at an energy $`E=0.062`$ and show how to extract information on $`A`$ and other inner periodic orbits. Note that hyperbolicity is no prerequisite for the methods presented here. For the integration of the equations of motion a Bulirsch-Stoer algorithm has been used. The simulation takes place in a disk of radius $`1000`$ around the origin of the $`(\rho ,z)`$-plane.
A part of a typical sequence whose trajectories come close to $`A`$ is shown as IOC in a time delay function in fig. 1. In fig. 2 the final angles $`\alpha _{\mathrm{out}}`$ of the trajectories from the IOC $`\{I_A^{(k)}\}_{1\le k\le 5}`$ are plotted against their corresponding time delay values.
In this plot several essential features can be seen.
1. As the time delay increases the period of the curves converges towards the period of the outer periodic orbit $`C`$. This is a direct consequence of the additional revolutions near $`C`$ with initial conditions near the boundary of an IOC.
2. The differences of the representative (minimal) time delays $`t(I_A^{(k)})`$ of consecutive IOC converge towards half the period of the inner periodic orbit $`A`$.
3. The lower ends of the curves of every second IOC point in the same direction. This effect has its origin in the common reflection symmetry on the $`\rho `$-axis of $`A`$ and the Hamiltonian. If $`A`$ and the Hamiltonian had no common symmetry
$$t(I_A^{(k)})=\frac{1}{2}\left(t(I_A^{(k-1)})+t(I_A^{(k+1)})\right)$$
(3)
would not be fulfilled. In such a case there would be two sequences whose members were separated by the full period of the corresponding periodic orbit. This shows that by using only the time delay function as a source of information it might be unclear whether we measure the whole period of a periodic orbit without symmetries or the $`n`$-th part of the period of an orbit with a cyclic symmetry group of order $`n`$. Using additional information provided by other scattering functions such as $`\alpha _{\mathrm{out}}`$ and $`b_{\mathrm{out}}`$ can give helpful hints towards answering this question.
In order to resolve about $`608`$ IOC we scatter $`10^7`$ trajectories with $`\alpha _{\mathrm{in}}=0^{\circ }`$ and $`b_{\mathrm{in}}`$ evenly distributed on the interval \[1.3049,1.3065\]. If we apply the filter that respects the hierarchical structure and look for triples of IOC $`(I_\sigma ^{(k-1)},I_\sigma ^{(k)},I_\sigma ^{(k+1)})`$ with the property
$$\left|\stackrel{~}{T}_\sigma ^{(k-1)}-\stackrel{~}{T}_\sigma ^{(k)}\right|<0.02$$
$$\left|\stackrel{~}{\mathrm{\Lambda }}_\sigma ^{(k-1)}-\stackrel{~}{\mathrm{\Lambda }}_\sigma ^{(k)}\right|<0.02$$
(4)
we end up with $`64`$ such triples. Each triple of IOC yields two points $`(\stackrel{~}{\mathrm{\Lambda }}^{(k)},\stackrel{~}{T}^{(k)})`$ in fig. 3. Not all $`128`$ points are located in the area shown in this figure, and the chosen resolution of $`10^7`$ points was too poor for a meaningful prediction of orbits with symmetry-reduced periods longer than $`T=15`$. The more dots are clustered in a group, the likelier it is to find a periodic orbit with roughly these values for its symmetry-reduced period and Lyapunov exponent. In fig. 3, apart from the times for one revolution around a symmetry-reduced periodic orbit ($`A^{1/2}`$,$`C^{1/2}`$), integer multiples of these times are also recognizable ($`A`$,$`C`$). Among the periodic orbits of our system shown in fig. 4 only three orbits ($`A`$,$`C`$ and $`I`$) remain invariant under a reflection on the $`\rho `$-axis. The rest of the orbits either change their direction of rotation ($`B`$,$`D`$,$`F`$,$`G`$ and $`H`$) or are mapped onto their mirror image ($`E`$). Thus for orbits $`A`$,$`C`$ and $`I`$ the half periods show up besides the full ones in fig. 3.
Alternatively we may use pairs with similar final conditions, retaining the restriction derived from the hierarchical structure of the time delay function. We accept pairs of IOC whose representative trajectories differ in outgoing angle by less than $`0.1^{\circ }`$ and in the outgoing impact parameter by less than $`0.01`$ in our scale. We plot the $`115`$ points obtained in fig. 5, similar to fig. 3. Now half periods will not appear because we did not symmetrize the outgoing asymptotic parameter. Comparing the two figures we see that the second method allows us to identify more periods. While in fig. 3 accumulation points are not very obvious, in fig. 5 they appear rather clearly in a few instances. Where they appear, the Lyapunov exponent indeed seems to converge toward the exact value. Note the difference in scales for time delay and exponents. Since the Lyapunov exponent is an average quantity, the points belonging to one period can, once identified, be averaged. Table 1 shows that for the points in fig. 5 the average matches the exact value quite well.
Summarising we have presented a new approach to the inverse scattering problem for chaotic Hamiltonian systems. In distinction to earlier work we do not require the system to have an internal or external clock. Furthermore we are able to extract more detailed information about the properties of the most important periodic orbits. The possibilities inherent to this approach are much larger than what was presented in this letter. More extensive research on the filtering techniques must be done and the optimal approach may well depend on the problem. Also much larger numbers of intervals of continuity must be generated to be able to analyze the structure of the $`(\stackrel{~}{\mathrm{\Lambda }},\stackrel{~}{T})`$-plane. Finally the possibility to obtain quite easily the periods of the shortest periodic orbits may be used to provide an inner clock that always exists for the method proposed in .
The authors wish to thank L. Benet for useful discussions. This work was partially supported by the SNF, the DGAPA (UNAM) project IN-102597 and the CONACYT grant 25192-E. One of the authors (T.B.) wants to thank the CIC for their generous hospitality. |
## 1 How Cool is Cold Dark Matter?
The successful match of predictions for large scale structure and microwave anisotropy vindicates many assumptions of standard cosmology, in particular the hypothesis that the dark matter is composed of primordial particles which are cold and collisionless. At the same time, the CDM paradigm finds difficulty explaining the small-scale structure within galaxy haloes: CDM appears to predict excessive relic substructure in the form of many dwarf galaxies which are not seen and may disrupt disks, and also predicts a universal, monotonic increase of density towards the center of all halos which is not seen in close studies of dark-matter-dominated galaxies. The latter problem seems to arise as a generic result of low-entropy material sinking during halo formation, quite independently of initial conditions; the former effect seems to be a generic result of hierarchical clustering predicted by CDM power spectra, which produce fluctuations on small scales that collapse early and survive as substructure. Although these problems are still controversial from both a theoretical and observational point of view, it is not easy to dismiss these effects by various complicated baryonic devices.
It is possible that the problems with halo structure are giving specific quantitative clues about new properties of the dark matter particles. The existence of dwarf cores and smooth substructure are just what one would expect if the dark matter is not absolutely cold but has a small nonzero primordial velocity dispersion. Such a model produces two separate effects: a phase packing or Liouville limit which produces halo cores, and a filter in the primordial power spectrum which limits small-scale substructure. The estimated dispersions required for the two effects do not quite agree but are close enough to motivate a closer look. This discussion is meant to motivate more detailed comparison of models and data, aimed at using halo properties to test the hypothesis of primordial velocity dispersion and ultimately to measure particle properties.
The physics of the filtering by freely streaming particles closely parallels that of massive neutrinos, the standard form of “hot” dark matter. A thermal particle which is more weakly interacting and therefore separates out of equilibrium earlier than neutrinos, when there are more particle degrees of freedom, has a lower temperature than neutrinos and therefore both a lower density and lower velocity dispersion for a given mass. Such “thermal relics” constitute one class of warm dark matter candidate; there are other possibilities, including degenerate particles and products of decaying particles. In many respects their astrophysical effects are very similar to the thermal case since up to numerical factors the damping scale and phase packing limit are both fixed by the same quantity, the classical “phase density” of the particles. For particles which separate out when relativistic, the phase density depends only on the particle properties and not on any cosmological parameters. It is similar (though not identical) for bosons and fermions, and in thermal and degenerate limits.
The phase-packing constraint is also familiar from the context of massive neutrinos. Tremaine and Gunn showed that the phase density of dark matter in giant galaxies implies a large neutrino mass, and therefore too large a mean cosmic density. However, their argument can be turned around to explain the lack of a cusp at the center of dwarf halos. An upper limit to the central density of an isothermal sphere can be derived for a given phase density; a very rough comparison with dwarf dynamics suggests a limit corresponding to that of a thermal relic with a mass of about 200 eV. This is lighter (that is, a larger dispersion) than the $`\sim `$1 keV currently guessed at from the filtering effect, but not by so much that the idea should be abandoned; the simulations and comparison with data may yet reconcile the two effects and could reveal correlations of core radius and velocity dispersion predicted by the phase-packing hypothesis. Comparison of the two effects may also reveal new dark matter physics: for example, if the particles scatter off each other by self-interactions (which may even be negligible today), free streaming is suppressed and the filtering occurs on a somewhat smaller scale.
Warm dark matter has most often been invoked as a solution to fixing apparent (and no longer problematic) difficulties with predictions of the CDM power spectrum for matching galaxy clustering data. A filtered spectrum may however solve several other classic problems of CDM on smaller scales, in galaxy formation itself. Baryons tend to cool and collapse early into lumps smaller and denser than observed galaxies. Although this may be prevented by stellar-feedback effects, recent studies suggest that CDM has fundamental difficulties explaining the basic properties of galaxies such as the Tully-Fisher relation; dynamical loss of angular momentum results in halos which are too concentrated. Some of these problems may be solved in warm dark matter models which suppress the early collapse of subgalactic structures. Modeling disk formation includes baryonic evolution so requires understanding the ionization history of gas; an important constraint comes from the observed structures of the Lyman-$`\alpha `$ forest. Simulations suggest that the optimal filtering scale corresponds to a thermal particle mass of about 1 keV.
For these relatively massive thermal relics to have the right mean density today, the particle must have separated out at least as early as the QCD era, when the number of degrees of freedom was significantly larger than at classical weak decoupling. Its interactions with normal Standard Model particles must be “weaker than weak,” ruling out not only neutrinos but many other particle candidates. The leading CDM particle candidates, such as WIMPs and axions, form in standard scenarios with extremely high phase densities, although more elaborate mechanisms are possible to endow these particles with the required velocities. It is therefore of considerable interest from a particle physics point of view to find evidence for the existence of finite primordial phase density from galaxy halo dynamics. Neither the theory nor the observational side allows a definitive case to be made as yet, but the evidence is certainly suggestive. In principle, dynamics can provide detailed clues to the dark matter mass and interactions. Here I give a few examples of simple calculations which reveal the connections between the particle properties and the halo properties.
## 2 Phase Density of Relativistically-Decoupled Relics
Consider particles of mass $`m`$ originating in equilibrium and decoupling at a temperature $`T_D\gg m`$ or chemical potential $`\mu \gg m`$. The original distribution function is
$$f(\stackrel{}{p})=\left(e^{(E-\mu )/T_D}\pm 1\right)^{-1}\simeq \left(e^{(p-\mu )/T_D}\pm 1\right)^{-1}$$
(1)
with $`E^2=p^2+m^2`$ and $`\pm `$ applies to fermions and bosons respectively. The density and pressure of the particles are
$$n=\frac{g}{(2\pi )^3}\int f\,d^3p$$
(2)
$$P=\frac{g}{(2\pi )^3}\int \frac{p^2}{3E}f\,d^3p$$
(3)
where $`g`$ is the number of spin degrees of freedom. Unless stated otherwise, we adopt units with $`\hbar =c=1`$.
With adiabatic expansion this distribution is preserved, with the momenta of particles varying as $`p\propto R^{-1}`$, so the density and pressure can be calculated at any subsequent time. For thermal relics ($`\mu =0`$), we can derive the density and pressure in the limit when the particles have cooled to be nonrelativistic:
$$n=\frac{gT_0^3}{(2\pi )^3}\int \frac{d^3p}{e^p\pm 1}$$
(4)
$$P=\frac{gT_0^5}{3m(2\pi )^3}\int \frac{p^2\,d^3p}{e^p\pm 1}$$
(5)
where the pseudo-temperature $`T_0=T_D(R_D/R_0)`$ records the expansion of any fluid element.
It is useful to define a “phase density” $`Q\equiv \rho /\langle v^2\rangle ^{3/2}`$, proportional to the inverse specific entropy for nonrelativistic matter. This quantity is preserved under adiabatic expansion and, for nondissipative particles, cannot increase. Combining the above expressions for density and pressure and using $`\langle v^2\rangle =3P/nm`$, we find
$$Q_X=q_Xg_Xm_X^4.$$
(6)
The coefficient for the thermal case is
$$q_T=\frac{4\pi }{(2\pi )^3}\frac{\left[\int dp\,p^2/(e^p\pm 1)\right]^{5/2}}{\left[\int dp\,p^4/(e^p\pm 1)\right]^{3/2}}=0.0019625,$$
(7)
where the last equality holds for thermal fermions. An analogous calculation for the degenerate fermion case ($`T=0,\ \mu _D\gg m_X`$) yields the same expression for $`Q`$ but with a different coefficient,
$$q_d=\frac{4\pi }{(2\pi )^3}\frac{\left[\int _0^1p^2\,dp\right]^{5/2}}{\left[\int _0^1p^4\,dp\right]^{3/2}}=0.036335.$$
(8)
The phase density depends on the particle properties but not at all on the cosmology; the decoupling temperature, the current temperature and density do not matter. Up to numerical factors (which depend on thermal or degenerate, boson or fermion cases), the phase density for relativistically decoupled or degenerate matter is set just by the particle mass $`m_X`$.
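As a cross-check of the coefficients in Eqs. (7) and (8), the following short numerical sketch (ours, using scipy) evaluates the integrals directly; the top-hat momentum distribution up to the Fermi surface is the degenerate case.

```python
# Numerical cross-check of q_T and q_d in Eqs. (7)-(8); illustrative only.
import numpy as np
from scipy.integrate import quad

# thermal fermions: Fermi-Dirac moments
I2 = quad(lambda p: p**2 / (np.exp(p) + 1), 0, np.inf)[0]
I4 = quad(lambda p: p**4 / (np.exp(p) + 1), 0, np.inf)[0]
q_T = (4 * np.pi / (2 * np.pi)**3) * I2**2.5 / I4**1.5
print(q_T)   # ~0.0019625, matching Eq. (7)

# degenerate fermions: filled sphere in momentum space
J2 = quad(lambda p: p**2, 0, 1)[0]   # = 1/3
J4 = quad(lambda p: p**4, 0, 1)[0]   # = 1/5
q_d = (4 * np.pi / (2 * np.pi)**3) * J2**2.5 / J4**1.5
print(q_d)   # ~0.036335, matching Eq. (8)
```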
## 3 Phase Packing and the Core Radius of an Isothermal Halo
For a given velocity dispersion at any point in space, the primordial phase density of particles imposes an upper limit on their density $`\rho `$. Thus dark matter halos do not form the singular central cusps predicted by Cold Dark Matter but instead form cores with constant density at small radius. A lower limit to the size of the core can be estimated if we assume that the matter in the central parts of the halo lies close to the primordial adiabat defined by $`Q`$. This will be a good model for cores which form quietly without too much dynamical heating. This seems to be the case in typical CDM halos, as indicated by the cusp prediction; it could be that warm matter typically experiences more additional dynamical heating than cold matter, in which case the core could be larger.
The conventional definition of core size in an isothermal sphere is the “King radius”
$$r_0=\sqrt{9\sigma ^2/(4\pi G\rho _0)}$$
(9)
where $`\sigma `$ denotes the one-dimensional velocity dispersion and $`\rho _0`$ denotes the central density. Making the adiabatic assumption, $`\rho _0=Q(3\sigma ^2)^{3/2}`$, we find
$$r_0=\sqrt{9\sqrt{2}/(4\pi \,3^{3/2})}\,(QGv_{c\mathrm{\infty }})^{-1/2}=0.44\,(QGv_{c\mathrm{\infty }})^{-1/2}$$
(10)
(Note that aside from numerical factors this is the same as a degenerate dwarf star; the galaxy core is bigger than a Chandrasekhar dwarf of the same specific binding energy by a factor $`(m_{proton}/m_X)^2`$.) For comparison with observations we have expressed the core radius as a function of the asymptotic circular velocity $`v_{c\mathrm{\infty }}=\sqrt{2}\sigma `$ of the halo’s flat rotation curve.
For the thermal and degenerate phase densities derived above,
$$r_{0,thermal}=5.5\,\mathrm{kpc}\,(m_X/100\,\mathrm{eV})^{-2}(v_{c\mathrm{\infty }}/30\,\mathrm{km}\,\mathrm{s}^{-1})^{-1/2}$$
(11)
$$r_{0,degenerate}=1.3\,\mathrm{kpc}\,(m_X/100\,\mathrm{eV})^{-2}(v_{c\mathrm{\infty }}/30\,\mathrm{km}\,\mathrm{s}^{-1})^{-1/2},$$
(12)
where we have set $`g=2`$. The circular velocity in the central core displays the harmonic behavior $`v_c\propto r`$; it reaches half of its asymptotic value at a radius $`r_{1/2}\approx 0.4r_0`$.
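The unit conversions behind the coefficient in Eq. (11) are easy to get wrong, so here is a minimal sketch (ours, SI units, fiducial $`g=2`$ thermal fermion) recovering the 5.5 kpc scale from Eqs. (6) and (10).

```python
# Recover the 5.5 kpc coefficient of Eq. (11); illustrative sketch.
import numpy as np

hbar, c, G = 1.0546e-34, 2.9979e8, 6.674e-11   # SI constants
eV, kpc = 1.602e-19, 3.086e19                  # J, m

q_T, g = 0.0019625, 2
m_X = 100 * eV                                 # 100 eV particle mass, as energy
# Q = q g m^4 in hbar = c = 1 units; restoring the constants gives the
# phase density in kg m^-3 (m s^-1)^-3
Q = q_T * g * m_X**4 / (hbar**3 * c**8)

v_c = 30e3                                     # v_{c,inf} = 30 km/s
r0 = 0.44 / np.sqrt(Q * G * v_c)
print(r0 / kpc)                                # ~5.5, reproducing Eq. (11)
```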
The best venue for studying the effect is in the small, dark-matter-dominated disk galaxies where the cusp problem of CDM seems most clearly defined. Material on circular orbits gives a direct measure of the enclosed mass and therefore of the density profile. A rough guess from current observations of inner rotation curves of a few dwarf spiral galaxies suggests a halo core corresponding to a thermal particle mass of about 200 eV; for a larger mass, additional (nonprimordial) dynamical heating is required. This is also consistent with what is known about dark matter in dwarf elliptical galaxies from studies of stellar velocities.
The relationship of core radius with halo velocity dispersion is a simple prediction of the primordial phase density explanation of cores which will probably generalize in some form to a cosmic population of halos. In particular if phase packing is the explanation of dwarf galaxy cores, the dark matter cores of giant galaxies and galaxy clusters are predicted to be much smaller than for dwarfs, unobservably hidden in a central region dominated by baryons.
## 4 Filtering of Small-Scale Fluctuations
The transfer function of Warm Dark Matter is almost the same as Cold Dark Matter on large scales, but is filtered by free-streaming on small scales. The characteristic wavenumber for filtering at any time is given by $`k_X\equiv H/\langle v^2\rangle ^{1/2}`$, with a filter shape depending on the detailed form of the distribution function. In the current application, we are concerned with $`H`$ during the radiation-dominated era ($`z\gtrsim 10^4`$), so that $`H=(8\pi G\rho _{rel}/3)^{1/2}\propto (1+z)^2`$, where $`\rho _{rel}`$ includes all relativistic degrees of freedom. For constant $`Q`$, $`\langle v^2\rangle ^{1/2}=(\rho _X/Q)^{1/3}\propto (1+z)`$ as long as the $`X`$ particles are nonrelativistic. The maximum comoving filtering scale is thus approximately independent of redshift and is given simply by
$$k_X\approx H_0\mathrm{\Omega }_{rel}^{1/2}v_{X0}^{-1}$$
(13)
where $`\mathrm{\Omega }_{rel}=4.3\times 10^{-5}h^{-2}`$ and $`v_{X0}=(Q/\overline{\rho }_{X0})^{-1/3}`$ is the rms velocity of the particles at their present mean cosmic density $`\overline{\rho }_{X0}`$:
$$k_X\approx 0.65\,\mathrm{Mpc}^{-1}\,(v_{X0}/1\,\mathrm{km}\,\mathrm{s}^{-1})^{-1},$$
(14)
with no dependence on $`H_0`$. For the thermal and degenerate cases, in terms of particle mass we have
$$v_{X0,thermal}=0.93\,\mathrm{km}\,\mathrm{s}^{-1}\,h_{70}^{2/3}(m_X/100\,\mathrm{eV})^{-4/3}(\mathrm{\Omega }_X/0.3)^{1/3}g^{-1/3}$$
(15)
$$v_{X0,degenerate}=0.35\,\mathrm{km}\,\mathrm{s}^{-1}\,h_{70}^{2/3}(m_X/100\,\mathrm{eV})^{-4/3}(\mathrm{\Omega }_X/0.3)^{1/3}g^{-1/3}$$
(16)
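The numbers in Eqs. (14) and (15) follow directly from Eq. (13) and the phase density of Section 2; the sketch below (ours, with illustrative values $`h=0.7`$, $`\mathrm{\Omega }_X=0.3`$, $`g=2`$, $`m_X=100`$ eV) reproduces them.

```python
# Sketch of the velocity and filtering scales of Eqs. (13)-(15); SI units.
import numpy as np

hbar, c = 1.0546e-34, 2.9979e8
eV, Mpc = 1.602e-19, 3.086e22

h = 0.7
rho_X = 0.3 * 1.8788e-26 * h**2              # Omega_X * rho_crit, kg m^-3

q_T, g, m_X = 0.0019625, 2, 100 * eV
Q = q_T * g * m_X**4 / (hbar**3 * c**8)      # phase density, SI

v_X0 = (rho_X / Q)**(1 / 3)
print(v_X0 / 1e3)                            # ~0.93 km/s, as in Eq. (15)

H0 = 100 * h * 1e3 / Mpc                     # Hubble constant, s^-1
k_X = H0 * np.sqrt(4.3e-5 / h**2) / v_X0     # Eq. (13)
print(k_X * Mpc)                             # ~0.70 Mpc^-1 = 0.65/(v_X0 in km/s)
```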
In the case of free-streaming, relativistically-decoupled thermal particles, the transfer function has been computed precisely; the characteristic wavenumber where the square of the transfer function falls to half the CDM value is $`k_{1/2,stream}=k_X/5.5`$. The mass implied for this kind of candidate to preserve the success of CDM on galaxy scales and above is about 1 keV; if it is much smaller (in particular, as small as the 200 eV we require for phase packing alone to help the core problem), filtering occurs on too large a scale. (A filtering scale of roughly $`k\approx 3h_{70}\,\mathrm{Mpc}^{-1}`$ preserves the successes of CDM on large scales and helps to solve the CDM predictions of excess dwarf galaxies, excessive substructure in halos, insufficient angular momentum, and excessive baryon concentration.) This problem might be fixed in other models with a different relationship of $`k_{1/2}`$ and $`k_X`$. For example, if the particles are self-interacting the free streaming is suppressed and the relevant scale is the standard Jeans scale for acoustic oscillations, $`k_J=\sqrt{3}H/c_S=\sqrt{27/5}k_X`$, which corresponds to a significantly smaller length scale than $`k_{1/2,stream}`$ at a fixed phase density. Alternatively it is possible that warm models might be more effective at producing smooth cores than we have guessed from the minimal phase-packing constraint above; an evaluation of this possibility requires simulations which include not just a filtered spectrum but a reasonably complete sampling of a warm distribution function in the particle velocities.
## 5 Density of Thermal Relics
A simple candidate for warm dark matter is a standard thermal relic— a particle that decouples from the thermal background very early, while it is still relativistic. In this case the mean density of the particles can be estimated from the number of particle degrees of freedom at the epoch $`T_D`$ of decoupling, $`g_D`$:
$$\mathrm{\Omega }_X=7.83\,h^{-2}[g_{eff}/g_D](m_X/100\,\mathrm{eV})=0.24\,h_{70}^{-2}(m_X/100\,\mathrm{eV})(g_{eff}/1.5)(g_D/100)^{-1}$$
(17)
where $`g_{eff}`$ is the number of effective photon degrees of freedom of the particle ($`g_{eff}=1.5`$ for a two-component fermion). For standard neutrinos, which decouple at around 1 MeV, $`g_D=10.75`$.
An acceptable mass density for a warm relic with $`m_X\approx 200`$ eV clearly requires a much larger $`g_D`$ than the standard value for neutrino decoupling. Above about 200 MeV, the activation of the extra gluon and quark degrees of freedom (24 and 15.75 respectively, including $`uds`$ quarks) gives $`g_D\approx 50`$; activation of heavier modes of the Standard Model above $`\approx 200`$ GeV produces $`g_D\approx 100`$, which gives a better match for $`\mathrm{\Omega }_X\approx 0.5`$, as suggested by current evidence. Masses of the order of 1 keV can be accommodated by adding extra, supersymmetric degrees of freedom. Alternatively a degenerate particle can be introduced via mixing of a sterile neutrino, combined with a primordial chemical potential adjusted to give the right density. Either way, the particle must interact with Standard Model particles much more weakly than normal weak interactions, which decouple at $`\approx 1`$ MeV.
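The arithmetic behind this statement follows from Eq. (17); a trivial sketch (ours, with $`h_{70}=1`$, $`g_{eff}=1.5`$, $`m_X=200`$ eV):

```python
# Relic density of a 200 eV thermal particle for several g_D, per Eq. (17).
for g_D in (10.75, 50, 100):
    Omega_X = 0.24 * (200 / 100) / (g_D / 100)
    print(g_D, round(Omega_X, 2))
# g_D = 10.75 (neutrino-like decoupling) gives Omega_X ~ 4.5 (overclosure);
# g_D ~ 100 gives Omega_X ~ 0.5, as quoted above.
```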
Note that all warm dark matter particles have low densities compared with photons and other species at 1 MeV, so they do not strongly affect nucleosynthesis. However, their effect is not entirely negligible. They add the equivalent of $`(T_X/T_\nu )^3=10.75/g_D\approx `$ 0.1 to 0.2 of an effective extra neutrino species, which leads to a small increase in the predicted primordial helium abundance for a given $`\eta `$. Because the phase density fixes the mean density at which the particles become relativistic, this is a generic feature for any warm particle. This effect might become detectable with increasingly precise measurements of cosmic abundances.
###### Acknowledgements.
I am grateful for useful discussions of these issues with F. van den Bosch, J. Dalcanton, A. Dolgov, G. Fuller, B. Moore, J. Navarro, T. Quinn, J. Stadel, J. Wadsley, and S. White, and for the hospitality of the Isaac Newton Institute for Mathematical Sciences and the Ettore Majorana Centre for Scientific Culture. This work was supported at the University of Washington by NSF and NASA, and at the Max-Planck-Institute für Astrophysik by a Humboldt Research Award. |
no-problem/9912/hep-lat9912033.html | ar5iv | text | # Techniques for using the overlap-Dirac operator to calculate hadron spectroscopy (talk presented at Chiral ’99, Sept 13-18, 1999, Taipei, Taiwan)
## I INTRODUCTION
The results of lattice QCD calculations of weak matrix elements are a critical component of the experimental program in heavy flavour and kaon physics. The results from lattice gauge theory calculations would significantly improve if the masses of both the sea and valence quarks could be reduced. Unfortunately progress in doing this is very slow with simulations that use either the Wilson or clover fermion operators .
It seems plausible that the difficulty of simulating with light quark masses with the clover and Wilson fermion operators is due to explicit chiral symmetry breaking in the actions. If the fermion operators were invariant under chiral symmetry transformations, their eigenvalue spectrum would be constrained to a smaller region . The performance of the simulation algorithms degrades as the range of eigenvalues gets larger. Simulations that use the staggered fermion operator can reach much lower quark masses than simulations that use the Wilson or clover operators, because the staggered action has a residual of the continuum chiral symmetry. Neuberger has derived a fermion operator, called the overlap-Dirac operator, that has a lattice chiral symmetry .
Our goal is to simulate QCD with the overlap-Dirac operator in the mass region $`M_{PS}/M_V=0.3`$–$`0.5`$. This quark mass region is inaccessible to currently computationally feasible lattice QCD simulations with the clover or Wilson operators .
## II THE OVERLAP-DIRAC OPERATOR
The massive overlap-Dirac operator is
$$D^N=\frac{1}{2}\left(1+\mu +(1-\mu )\gamma _5\frac{H(m)}{\sqrt{H(m)^{\dagger }H(m)}}\right)$$
(1)
where $`H(m)`$ is the hermitian Wilson fermion operator with negative mass, defined by
$$H(m)=\gamma _5(D^W-m)$$
(2)
where $`D^W`$ is the standard Wilson fermion operator. The parameter $`\mu `$ is related to the physical quark mass and lies in the range $`0`$ to $`1`$. The $`m`$ parameter is a regulating mass, in the range between a critical value and 2.
## III NUMERICAL TECHNIQUES
Quark propagators are calculated using a sparse matrix inversion algorithm. The inner step of the inverter is the application of the fermion matrix to a vector. For computations that use the overlap-Dirac operator, the step function
$$\epsilon (H)\,\underline{b}=\frac{H}{\sqrt{H^{\dagger }H}}\,\underline{b}$$
(3)
must be computed using some sparse matrix algorithm. A number of algorithms that calculate quark propagators from the overlap-Dirac operator without using a nested algorithm have been proposed . We discuss one of them in section VI.
Practical calculations of the overlap operator are necessarily approximate. To judge the accuracy of our approximate calculation we used the Ginsparg-Wilson (GW) error:
$$\left\|\gamma _5D^N\underline{x}+D^N\gamma _5\underline{x}-2D^N\gamma _5D^N\underline{x}\right\|\,\frac{1}{\left\|\underline{x}\right\|}$$
(4)
which just checks that the matrix obeys the Ginsparg-Wilson relation . Other groups have found that the step function must be calculated very accurately, so we also use more sophisticated estimates of the numerical error (see Eq. 8).
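For a concrete illustration of Eqs. (1) and (4), the following toy sketch (ours, not the production code) builds the overlap operator exactly for a small random hermitian matrix standing in for $`H(m)`$, with a diagonal $`\pm 1`$ matrix standing in for $`\gamma _5`$; at $`\mu =0`$ the GW error vanishes to machine precision, so in an approximate calculation the residual measures the accuracy of the step function.

```python
# Toy check of the overlap operator and its Ginsparg-Wilson error (Eq. 4).
import numpy as np

rng = np.random.default_rng(0)
n = 64
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
H = (A + A.conj().T) / 2                      # stand-in for H(m)
g5 = np.diag([(-1.0)**i for i in range(n)])   # stand-in for gamma_5

w, V = np.linalg.eigh(H)
epsH = V @ np.diag(np.sign(w)) @ V.conj().T   # exact H / sqrt(H^dagger H)

DN = 0.5 * (np.eye(n) + g5 @ epsH)            # massless (mu = 0) overlap operator

x = rng.standard_normal(n)
gw = g5 @ DN @ x + DN @ g5 @ x - 2.0 * DN @ g5 @ DN @ x
print(np.linalg.norm(gw) / np.linalg.norm(x)) # ~1e-15: GW relation holds exactly
```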
Our numerical simulations were done using $`\beta =5.85`$ quenched gauge configurations, with a volume of $`8^3\times 32`$. This allows us to directly compare our results with the two other QCD spectroscopy calculations . The quark propagators were generated from point sources. For all the algorithms we investigated, we used $`m=1.5`$. Although we are currently investigating using the overlap-Dirac operator in the quenched theory, most of the algorithms can also be used in full QCD simulations .
## IV LANCZOS BASED METHOD
Borici has developed a method to calculate the action of the overlap-Dirac operator on a vector, using the Lanczos algorithm. In exact arithmetic, the Lanczos algorithm generates an orthonormal set of vectors that tridiagonalises the matrix.
$$HQ_n=Q_nT_n$$
(5)
where $`T_n`$ is a tridiagonal matrix. The columns of $`Q_n`$ contain the Lanczos vectors.
The “trick” used to evaluate the step function (Eq. 3) is to set the target vector $`\underline{b}`$ as the first vector in the Lanczos sequence. An arbitrary function $`f`$ of the matrix $`H`$ acting on a vector is constructed using
$`(f(H)b)_i`$ $`=`$ $`{\displaystyle \sum _j}(Q_nf(T_n)Q_n^{\dagger })_{ij}b_j`$ (6)
$`=`$ $`\parallel b\parallel (Q_nf(T_n))_{i\mathrm{\hspace{0.33em}1}}`$ (7)
where the orthogonality of the Lanczos vectors has been used. The $`f(T_n)`$ matrix is computed using standard dense linear algebra routines. For the step function the eigenvalues of $`T_n`$ are replaced by their moduli. Eq. 7 is linear in the Lanczos vectors and thus can be computed in two passes.
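The following dense-algebra sketch (our own illustration, not the large-volume code) makes the reconstruction of Eq. (7) concrete; it keeps all Lanczos vectors in memory and fully reorthogonalises, luxuries the two-pass version described above cannot afford, and the test matrix is a random hermitian matrix with a spectral gap at zero.

```python
# Illustrative dense Lanczos evaluation of eps(H) b, after Eqs. (5)-(7).
import numpy as np

def lanczos_sign(H, b, n_iter):
    n = len(b)
    Q = np.zeros((n, n_iter))
    alpha, beta = np.zeros(n_iter), np.zeros(n_iter - 1)
    Q[:, 0] = b / np.linalg.norm(b)          # seed the sequence with b
    for j in range(n_iter):
        w = H @ Q[:, j]
        alpha[j] = Q[:, j] @ w
        w = w - Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)  # full reorthogonalisation
        if j < n_iter - 1:
            beta[j] = np.linalg.norm(w)
            Q[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    theta, S = np.linalg.eigh(T)
    fT = S @ np.diag(np.sign(theta)) @ S.T   # apply the step function to T_n
    return np.linalg.norm(b) * (Q @ fT[:, 0])  # Eq. (7)

rng = np.random.default_rng(2)
m = 200
Vq, _ = np.linalg.qr(rng.standard_normal((m, m)))
ev = np.concatenate([rng.uniform(0.5, 2.0, m // 2),
                     rng.uniform(-2.0, -0.5, m - m // 2)])
H = Vq @ np.diag(ev) @ Vq.T                  # hermitian, gap around zero
b = rng.standard_normal(m)

exact = Vq @ (np.sign(ev) * (Vq.T @ b))      # eps(H) b from eigendecomposition
for n_iter in (20, 40, 60):
    err = np.linalg.norm(lanczos_sign(H, b, n_iter) - exact) / np.linalg.norm(exact)
    print(n_iter, err)                       # error falls rapidly with n_iter
```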
The major problem with the Lanczos procedure is the loss of the orthogonality of the sequence of vectors due to rounding errors. It is not clear how this lack of orthogonality affects the final results. Some theoretical analysis has been done on the use of the Lanczos algorithm to calculate functions of matrices . It is claimed that the lack of orthogonality is not important for some classes of functions.
On small lattices we checked that the eigenvalue spectrum of the overlap-Dirac operator moves closer to a circle , as the number of Lanczos steps increases. Even after 50 iterations of the Lanczos algorithm, there are still small deviations from the circle. For a hot gauge configuration with a volume of $`4^4`$, all the Lanczos vectors were stored and then were used to investigate the effect of the loss of orthogonality. The scalar product between two Lanczos vectors grows from $`10^{-7}`$ to $`10^{-3}`$ after about 130 iterations, indicating problems with orthogonality. The GW error was $`0.11`$ at 50 iterations and $`0.012`$ at 250 iterations. This is some evidence that Borici’s algorithm may still work even when the orthogonality of the Lanczos vectors is lost. It is much harder to look at the eigenvalue spectrum of the overlap-Dirac operator on an $`8^3\times 32`$ $`\beta =5.85`$ gauge configuration, so we computed the GW error instead. The GW error on a single gauge configuration was: $`2\times 10^{-3}`$ (100 iterations), $`1\times 10^{-3}`$ (200 iterations), $`1\times 10^{-4}`$ (300 iterations), and $`2\times 10^{-6}`$ (500 iterations).
Fig. IV is an effective mass plot of the pion for four different values of the quark mass $`\mu `$. The number of iterations in the Lanczos algorithm was kept constant at $`100`$. The effective mass plots for the pion in Fig. IV can be compared to the two other published spectroscopy calculations, which both use configurations with parameters $`8^3\times 16`$ and $`\beta =5.85`$. For a quark mass of $`\mu =`$ $`0.1`$ ($`0.05`$), Liu et al. obtain a pion mass in lattice units of $`0.63`$ ($`0.45`$). From the graph by Edwards et al. , the pion mass was $`0.87`$ ($`0.63`$) at $`\mu `$ = $`0.1`$ ($`0.05`$). The differences between the two groups are probably explained by them using different values of $`m`$ in their simulations, because the quark mass $`\mu `$ is renormalized by a multiplicative factor that depends on the domain mass . The effective mass plots in Fig. IV are consistent with the data of Edwards et al. , although smeared correlators should be used for a more detailed comparison. The quality of the $`\mu `$ = 0.03 effective mass plot of the pion is disappointing. The inversion of the overlap-Dirac operator that used 100 Lanczos iterations, at a mass $`\mu =0.1`$, required 150 iterations in the inverter and took 105 minutes on 32 nodes of our Cray T3E.
We checked the stability of the pion’s effective mass with the number of iterations used in the Lanczos procedure. The pion effective mass plot was stable for the quark masses $`\mu =`$ $`0.1`$ and $`0.03`$, as the number of Lanczos iterations was varied from 100 to 300.
Liu et al. used the Gell-Mann-Oakes-Renner (GOR) relation, derived for the overlap-Dirac operator , as a check on the accuracy of the computation of the step function.
$$\mu \sum _x\pi (x)\pi (0)=\frac{1}{V}\sum _x\overline{\psi }(x)\psi (x)$$
(8)
where $`\pi `$ is the pion interpolation field, and $`x`$ is summed over the space-time volume (V). The “external” quark propagators defined by
$$\widehat{D}(\mu )=\frac{1}{1-\mu }\left[D^{-1}(\mu )-1\right]$$
(9)
should be used in equation 8.
The data in Fig. IV show the GOR relation is satisfied up to 2% for the masses $`\mu =`$ $`0.1`$ and $`0.15`$, and 4% for the mass $`\mu `$ = $`0.05`$. Increasing the number of Lanczos iterations did not decrease the violation of the GOR relation. This may be due to the loss of orthogonality in the Lanczos vectors.
## V RATIONAL APPROXIMATION
The step function can be approximated by a rational approximation .
$$\epsilon (H)\approx H\left(c_0+\sum _{k=1}^N\frac{c_k}{H^2+d_k}\right)$$
(10)
The rational approximation is only an accurate approximation to the step function in a certain region. The eigenvalues of the matrix $`H`$ should lie in this region. The coefficients $`c_k`$ and $`d_k`$ can be obtained from the Remez algorithm . The number of iterations required in the inverter is controlled by the smallest $`d_k`$ coefficient, which acts like a mass. If $`d_k`$ is small, the number of iterations required is controlled by the condition number of $`H^2`$.
On one configuration we obtained GW errors of $`8\times 10^{-5}`$, $`1\times 10^{-5}`$, and $`2\times 10^{-6}`$, for the $`N=6`$, $`N=8`$, and $`N=10`$ optimal rational approximations . Unfortunately, the above results required up to 600 iterations for the smallest $`d_k`$, which was too large to use as the inner step of a quark propagator inverter. We have not yet implemented the technique of projecting out some of the low lying eigenmodes . This projection will reduce the condition number of the matrix, and hence the number of iterations required in the inner inversions.
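As an illustration of Eq. (10), the sketch below (ours) uses the partial-fraction coefficients of the Neuberger-Higham polar approximation, $`\epsilon _N(x)=\mathrm{tanh}(2N\,\mathrm{artanh}\,x)`$, i.e. $`c_0=0`$, $`c_k=1/(N\mathrm{cos}^2t_k)`$, $`d_k=\mathrm{tan}^2t_k`$ with $`t_k=(k-1/2)\pi /2N`$; the Remez coefficients used in our runs differ, so the numbers below are only indicative.

```python
# Rational approximation to eps(H) b in the form of Eq. (10), with the
# Neuberger-Higham polar coefficients; dense solves stand in for CG.
import numpy as np

def eps_rational(H, b, N):
    k = np.arange(1, N + 1)
    t = (k - 0.5) * np.pi / (2 * N)
    c, d = 1 / (N * np.cos(t)**2), np.tan(t)**2
    H2, I = H @ H, np.eye(len(b))
    y = sum(ck * np.linalg.solve(H2 + dk * I, b) for ck, dk in zip(c, d))
    return H @ y                 # eps(H) b ~ H * sum_k c_k (H^2 + d_k)^-1 b

rng = np.random.default_rng(3)
m = 100
Vq, _ = np.linalg.qr(rng.standard_normal((m, m)))
ev = np.concatenate([rng.uniform(0.3, 1.0, m // 2),
                     rng.uniform(-1.0, -0.3, m - m // 2)])
H = Vq @ np.diag(ev) @ Vq.T      # hermitian test matrix, gap at zero
b = rng.standard_normal(m)
exact = Vq @ (np.sign(ev) * (Vq.T @ b))

for N in (6, 8, 10, 16):
    err = np.linalg.norm(eps_rational(H, b, N) - exact)
    print(N, err)                # the error falls rapidly with the order N
```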
## VI Five dimensional representation of the overlap-Dirac operator
One undesirable feature of the algorithms just presented for inverting the overlap-Dirac operator is that at each iteration of the inverter, some sparse matrix techniques must be done to calculate the overlap-Dirac operator. In the language of Krylov spaces, the theory of which underlies the numerical calculations, two independent Krylov spaces are used in a nested inverter. If the overlap-Dirac operator could be calculated using one Krylov space, this may be more efficient. Neuberger has proposed one method to calculate the overlap-Dirac operator without the nested inversion . The first implementation of Neuberger’s ideas was discussed by Edwards at this meeting.
To explain the idea we will use a simplified rational approximation. The generalisation to higher order rational approximations is obvious.
$$\left(\frac{1}{2}(1+\mu )+\frac{1}{2}(1-\mu )\gamma _5c_0H\frac{(H^2+p_1)(H^2+p_2)}{(H^2+q_1)(H^2+q_2)}\right)\psi =b$$
(11)
The equation for $`\psi `$ can be solved using additional variables ($`\varphi _i`$). The additional equations generated by the new variables can be written in matrix form.
$$\left(\begin{array}{ccccc}1& 0& 0& 0& H_{p1}\\ -1& H_{q1}& 0& 0& 0\\ 0& H_{p2}& 1& 0& 0\\ 0& 0& -1& H_{q2}& 0\\ 0& 0& 0& \frac{1}{2}(1-\mu )\gamma _5c_0H& \frac{1}{2}(1+\mu )\end{array}\right)\left(\begin{array}{c}\varphi _1\\ \varphi _2\\ \varphi _3\\ \varphi _4\\ \psi \end{array}\right)=\left(\begin{array}{c}0\\ 0\\ 0\\ 0\\ b\end{array}\right)$$
(12)
where we have introduced the notation $`H_c=H^2+c`$.
The additional variables make the calculation of the overlap-Dirac operator look similar to the calculation of the domain wall operator , although with an accurate enough rational approximation this technique will calculate the overlap-Dirac operator exactly. The key issue is the condition number of the five dimensional matrix, because this controls the number of iterations required in the inverter. As the various rational approximations use small coefficients, these could have a large effect on the condition number.
To study the effect of the rational approximation on the condition number of the five dimensional matrix, we have started to study the problem in free field theory. The calculation of the eigenvalues of the matrix in Eq. 12 is simple in free field theory, because Fourier analysis can be used. The free field theory eigenvalues will not be too similar to those of the interacting theory (although projecting out the lowest topological eigenmodes will improve the agreement), but this is the only case where we have any hope of analytical insight into the condition number of the five dimensional matrix.
The five dimensional matrix was constructed in MATLAB, using the free hermitian Wilson operator . Table VI contains some results for an $`8^4`$ lattice, with a quark mass of $`\mu =0.1`$ and a domain mass of $`m=1.0`$. The $`N16`$ approximation is the $`16`$th order approximation to the step function introduced by Neuberger and Higham . The $`R6`$, $`R8`$, and $`R10`$ rows are for the Remez approximations to the step function introduced by Edwards et al. . The validity column is the maximum distance at which the rational approximation deviates by $`10^{-3}`$ from unity, divided by the minimum such distance. This is a measure of how good an approximation the rational function is. The order in Table VI has been normalised so that it is comparable to the length of the lattice in the fifth dimension for domain wall fermions (the true order is obtained by multiplying by 12).
Table VI shows that the condition number of the five dimensional matrix strongly depends on the type of rational approximation used to construct it. It is interesting to compare the $`N16`$ and $`R8`$ approximations that are almost equally good, but which have very different condition numbers. It would be instructive to compare the condition numbers in Table VI with the condition number of the domain wall fermion operator .
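A dense toy check (ours, with arbitrary shifts $`p_k`$, $`q_k`$ and a diagonal stand-in for $`\gamma _5`$, not tuned Remez coefficients) confirms that the block system of Eq. (12) reproduces the solution of Eq. (11), and lets one inspect the condition number directly.

```python
# Verify that the 5D block system (Eq. 12) matches the 4D operator (Eq. 11).
import numpy as np

rng = np.random.default_rng(1)
n = 40
A = rng.standard_normal((n, n))
H = (A + A.T) / 2
I, Z = np.eye(n), np.zeros((n, n))
g5 = np.diag([(-1.0)**i for i in range(n)])
mu, c0 = 0.1, 1.0
p1, p2, q1, q2 = 0.05, 0.8, 0.2, 2.5     # illustrative shifts

H2 = H @ H
Hp1, Hp2, Hq1, Hq2 = H2 + p1 * I, H2 + p2 * I, H2 + q1 * I, H2 + q2 * I

# the 4D operator of Eq. (11), formed directly
D4 = (0.5 * (1 + mu) * I
      + 0.5 * (1 - mu) * g5 @ (c0 * H) @ Hp1 @ Hp2 @ np.linalg.inv(Hq1 @ Hq2))
b = rng.standard_normal(n)
psi4 = np.linalg.solve(D4, b)

# the equivalent 5D block system of Eq. (12)
M = np.block([
    [ I, Z,   Z,  Z,   Hp1],
    [-I, Hq1, Z,  Z,   Z  ],
    [ Z, Hp2, I,  Z,   Z  ],
    [ Z, Z,  -I,  Hq2, Z  ],
    [ Z, Z,   Z,  0.5 * (1 - mu) * g5 @ (c0 * H), 0.5 * (1 + mu) * I],
])
rhs = np.concatenate([np.zeros(4 * n), b])
psi5 = np.linalg.solve(M, rhs)[4 * n:]

print(np.max(np.abs(psi5 - psi4)))   # agreement to rounding
print(np.linalg.cond(M))             # condition number depends on the shifts
```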
## VII CONCLUSIONS
The results for the masses of the light hadrons obtained by Liu et al. and Edwards et al. seem to show that QCD simulations can be run at much lighter quark masses than can be explored with the standard Wilson or clover operators. To work with quarks as light as those in the calculations of Edwards et al. and Liu et al., we need to project out the lowest eigenvalues from the $`H`$ matrix. We are working on an implementation of the eigenvalue projection technique.
What is not clear is how expensive the simulations with the overlap-Dirac operator will be on more realistic lattice volumes. The only way the Wilson or clover operators can be used to simulate QCD with lighter quarks is by the brute force approach of simulating closer to the continuum limit. This too will be very expensive. As the overlap-Dirac operator has a lattice chiral symmetry, it should be able to be used to explore the light quark mass region of QCD in an elegant way.
This work is supported by PPARC. The computations were carried out on the T3E at EPCC in Edinburgh. We thank K. Liu, T. Kennedy, R. Edwards, C. Michael, A. Irving and U. Heller for discussions. |
no-problem/9912/hep-th9912148.html | ar5iv | text | # Comment on “On spin-1 massive particles coupled to a Chern-Simons field”
## Abstract
In this comment we discuss some serious inconsistencies presented by Gomes, Malacarne and da Silva in their paper, Phys.Rev. D60 (1999) 125016 (hep-th/9908181).
In the paper of Ref., Gomes, Malacarne and da Silva draw some conclusions about the dynamics and interactions between charged vector bosons ($`\varphi _\mu `$) and the gauge field ($`A_\mu `$) in 3 dimensions. They also discuss the issue of 1-loop renormalizability. This comment is devoted to pointing out some serious inconsistencies in their misleading analysis; mainly, we criticize the way the authors heavily use the Ward identities to “ensure” 1-loop renormalizability for a model which is not even unitary at tree level, owing to the violation of the Froissart-Martin bound.
According to the results previously worked out in the papers of Refs., following a line initiated by , it is known that charged vector fields minimally (or non-minimally) coupled to a gauge field display severe problems concerning the quantum-mechanical consistency of the model. To be more specific: unitarity is jeopardized by complex massive vector fields, regardless of whether the mass is introduced via a Proca or a Chern-Simons term, as we shall clarify below.
The authors of Ref. claim that, even if a Proca term assigns mass to the charged vector field, the 1-loop renormalizability of the model is guaranteed by virtue of the identity of Eq.(16) in their paper. However, the use of such an identity in the calculation of 1-loop graphs such as the vacuum polarization and the 4-point function for the Chern-Simons field is not appropriate to reduce the superficial degree of divergence, for there is no reason to set the momenta associated to the matter-field lines in the 3-vertex equal to zero. Our remark is that there is no way to tame the ultraviolet divergences brought about by the Proca term. On the other hand, following the results of , the dynamical induction of a Proca term always takes place for topologically massive complex vector fields.
The criterium that is neglected in the analysis of the work of Ref. (the same criticism applies to the work by Bezerra de Mello and Mostepanenko ) is the lack of reference to the Froissart-Martin bound in 3 dimensions , which is of the type “$`s\mathrm{ln}s`$” for the total scattering squared amplitude in a Compton-like process. Though it is not very evident, the truly serious problem of the massive Proca complex vector field is that it leads to a clear violation of the Froissart-Martin unitarity bound in 3 dimensions, yielding an upper bound of the form “$`s^2`$”. Had we started with a topologically massive complex vector field, unitarity would apparently be respected through the Froissart-Martin bound , since an upper bound of the type “$`s^0`$” shows up for that case; nevertheless, a Proca term ($`\varphi _\mu ^{\ast }\varphi ^\mu `$) is always radiatively induced and a non-unitary bound “$`s^2`$” drops out .
We should also stress that the introduction of a gauge-invariant non-minimal magnetic coupling, which in the Proca case is non-renormalizable, does not restore the Froissart-Martin bound in that case of Maxwell-Chern-Simons-Proca model for the charged vector field, as it was attained in Ref..
To end our short comment, we conclude that, besides the lack of power-counting renormalizability, unitarity is the key ingredient to rule out the theory of massive charged vector fields coupled to a gauge field in 3 dimensions as a fundamental field theory, therefore, the results of Refs. turn those of Ref. useless. |
no-problem/9912/astro-ph9912446.html | ar5iv | text | # Soft Gamma-ray Repeaters in Clusters of Massive Stars
## Introduction
Neutron stars and stellar mass black holes are the last phase of the rapid evolution of the most massive stars, which are known to be formed in groups. In this context, it is expected that the most recently formed collapsed objects should be found near clusters of massive stars, still enshrouded in their placental clouds of gas and dust. This should be the case for SGRs, if they indeed are very young neutron stars kouveliotou . Among the four SGRs that have been identified with certainty, SGR 1806-20 and SGR 1900+14 are the two with the best localizations, with precisions of a few arcsec (Hurley et al., 1999 hurley99a ; hurley99b and references therein). Both are on the Galactic plane at distances of $`\approx `$ 14 kpc, beyond large columns of interstellar material; $`A_V\approx `$ 30 mag in front of SGR 1806-20 corbel and $`A_V\approx `$ 19 mag in front of SGR 1900+14 vrba96 . Because of these large optical obscurations along the lines of sight, and in the immediate environment of the sources, infrared observations are needed to understand their origin and nature.
## Infrared observations
Mid-infrared (5-18 $`\mu `$m) observations of the environment of SGR 1806-20 were carried out with the ISOCAM instrument aboard the Infrared Space Observatory (ISO) satellite fuchs . By chance, the ISO observations were made in two epochs, 11 days before, and 1-4 hours after a soft gamma-ray burst detected with the Interplanetary Network on 1997 April 14 hurley99a .
We also observed SGR 1806-20 in the J ($`1.25\pm 0.30\mu `$m), H ($`1.65\pm 0.30\mu `$m) and K ($`2.15\pm 0.32\mu `$m) bands on 1997 July 19, and SGR 1900+14 in the J ($`1.25\pm 0.30\mu `$m), H ($`1.65\pm 0.30\mu `$m) and Ks ($`2.162\pm 0.275\mu `$m) bands on 1999 July 25, at the European Southern Observatory (ESO), La Silla, Chile (observations collected under proposal numbers 59.D-0719 and 63.H-0511), using the IRAC2b camera on the ESO/MPI 2.2m telescope for SGR 1806-20, and the NTT/SOFI for SGR 1900+14. In the near infrared, SGR 1806-20 was monitored by us during the last four years, and SGR 1900+14 by Vrba et al. (2000) vrba00 .
## SGRs in clusters of massive stars
The results of the infrared observations of SGR 1806-20 and SGR 1900+14 are summarized in Figures 1 and 2 respectively. Figure 1 shows a cluster of massive stars deeply embedded in a dense cloud of molecular gas and dust. Using the ISO fluxes as a calorimeter, Fuchs et al. (1999) fuchs show that each of the four stars at the centre of the cluster could be equally, or even more, luminous than the LBV identified in the field by Kulkarni et al. (1995) kulkarni .
van Paradijs et al. (1996) vanP reported the possible association of SGRs with strong IRAS sources. The IRAS fluxes listed in Table 1 suggest that the infrared emission at longer wavelengths detected by IRAS arises in clouds of gas and dust that enshroud these two clusters of massive stars.
## CONCLUSIONS
1) SGR 1806-20 and SGR 1900+14 are associated with clusters of massive stars. From ISO observations we find evidence that the cluster associated with SGR 1806-20 is enshrouded in, and heats, a dust cloud that appears very bright at 12-18 $`\mu `$m. Although we did not make ISO observations of SGR 1900+14, the latter is, like SGR 1806-20, a strong IRAS source vanP , and very likely it is also enshrouded in a dust cloud.
2) These SGRs cannot be older than a few $`10^3`$ years. At the runaway speeds of neutron stars this is the time required to have moved away from the centers of their parental clusters of stars.
3) J, H and K band observations of the massive stars close to the SGR positions show no significant flux variations fuchs ; vrba00 . Therefore, these SGRs do not form bound binary systems with any of these massive stars.
4) There is strong excess emission at 12-18 $`\mu `$m associated with SGR 1806-20. However, there is no evidence of heating by the high energy SGR activity, although observations were made only 2 hours after a soft gamma-ray burst reported by Hurley et al. (1999) hurley99a .
## ACKNOWLEDGMENTS
The authors are grateful to F.J. Vrba for communicating his results on SGR 1900+14 prior to publication. |
no-problem/9912/astro-ph9912512.html | ar5iv | text | # INTERACTION OF SUPERNOVA BLAST WAVES WITH WIND-DRIVEN SHELLS: FORMATION OF ”JETS”, ”BULLETS”, ”EARS”, ETC
ABSTRACT. Most middle-aged supernova remnants (SNRs) have a distorted and complicated appearance which cannot be explained in the framework of the Sedov-Taylor model. We consider three typical examples of such SNRs (Vela SNR, MSH 15-52, G 309.2-00.6) and show that their structure could be explained as a result of interaction of a supernova (SN) blast wave with the ambient medium preprocessed by the action of the SN progenitor’s wind and ionizing emission.
Key words: ISM: bubbles; ISM: supernova remnants.
1. Introduction
Most middle-aged SNRs have a distorted and complicated appearance which cannot be explained in the framework of the standard Sedov-Taylor model. Three possibilities are usually considered to describe the general structure of such remnants:
– the SN blast wave interacts with the inhomogeneous (density stratified and/or clumpy) interstellar medium;
– the SN ejecta is anisotropic and/or clumpy;
– the stellar remnant (e.g. a pulsar) is a source of the relativistic wind and/or collimated outflows (jets) which power the central synchrotron nebula (plerion) and/or interact with the SNR’s shell.
For example, all above possibilities were considered to explain the structure of the Vela SNR. Namely, the general asymmetry of this remnant (the northeast half of the Vela SNR facing towards the Galactic plane has a nearly circular boundary, whereas the opposite half is very distorted) as well as its patchy appearance in soft X-rays were attributed to the expansion of the SN blast wave in the inhomogeneous (large-scale cloud + a multitude of cloudlets) interstellar medium (e.g. Kahn et al. 1985, Bocchino et al. 1997). One of the consequences of this suggestion is the proposal that the origin of optical filaments constituting the shell of the remnant is due to the slowing and cooling of parts of the SN blast wave propagating into dense clumps of matter (cloudlets). A number of radial structures (most prominent in soft X-rays) protruding far outside the main body of the remnant were interpreted as bow shocks produced by fragments of the exploded SN star (”bullets”) supersonically moving through the interstellar medium (Aschenbach et al. 1995). An elongated X-ray structure stretched from the Vela pulsar position to the center of the brightest radio component of the Vela SNR (known as Vela X) was interpreted as a one-sided jet emanating from the Vela pulsar and transferring the pulsar’s slow-down energy to the Vela X (e.g. Markwardt & Ögelman 1995). This interpretation supports the proposal of Weiler & Panagia (1980) that the Vela X is a plerion. A nebula of hard X-ray (2.5-10 keV) emission stretched nearly symmetrically for about $`1\mathrm{°}`$ on either side of the pulsar in the northeast-southwest direction was also interpreted as a plerion (Willmore et al. 1992).
The first and third possibilities were considered in connection with the SNR MSH 15-52 (G 320.4-01.2). The radio map of this remnant given by Caswell et al. (1981) shows the elongated shell consisting of two bright components stretched parallel to the Galactic plane and separated by a gap of weak emission. The brightest X-ray emission of this remnant comes from two components, one of which centres on the position of the pulsar PSR B1509-58 (located close to the geometrical center of MSH 15-52), while the second one coincides with the maximum of emission of the brightest (closer to the Galactic plane) radio component and with the bright optical nebula (known as RCW 89). It was suggested that the central X-ray component of MSH 15-52 is a plerion (e.g. Seward et al. 1984) and that the general structure of this remnant is affected by one (Tamura et al. 1996, Brazier & Becker 1997) or two (Manchester 1987, Gaensler et al. 1999) jets emanating from the pulsar.
The third example is the SNR G 309.2-00.6, which consists (at radio wavelengths) of a nearly circular shell and two ”ears” – arclike filamentary structures protruding from the shell in opposite directions (nearly parallel to the Galactic plane). It was suggested, by analogy with the well-known system SS433/W50, that the distorted appearance of G 309.2-00.6 is due to the interaction between a pair of jets produced by the central (invisible) stellar remnant and the originally spherical shell of the SNR (Gaensler et al. 1998). It was also suggested that one of the linear filaments in the northeast ”ear” represents one of the proposed jets.
The goal of this paper is to show that the structure of at least the three above-mentioned SNRs could be explained as a result of interaction of a SN blast wave with the ambient medium preprocessed by the action of the SN progenitor’s wind and ionizing emission.
2. Interaction of SN blast waves with wind-driven shells
It is known that the progenitors of most SN stars are massive ones (e.g. van den Bergh & Tammann 1991). Such stars are sources of intense stellar winds and ionizing emission which strongly modify the ambient interstellar medium. The ionizing radiation of the progenitor star creates an H II region, the inner, homogenized part of which gradually expands due to the continuous photoevaporation of density inhomogeneities in stellar environs (McKee et al. 1984). If the mechanical luminosity of the stellar wind $`L`$ is much smaller than some characteristic wind luminosity, $`L^{\ast }\approx 10^{34}(S_{46}^2/n)^{1/3}\mathrm{ergs}\,\mathrm{s}^{-1}`$, where $`S_{46}`$ is the stellar ionizing flux in units of $`10^{46}\mathrm{photons}\,\mathrm{s}^{-1}`$ and $`n`$ is the mean density the ambient medium would have if it were homogenized, the stellar wind flows through a homogeneous medium and creates a bubble of radius (e.g. Weaver et al. 1977) $`R(t)=11L_{34}^{1/5}n^{-1/5}t_6^{3/5}`$ pc, where $`L_{34}=L/(10^{34}\mathrm{ergs}\,\mathrm{s}^{-1}),t_6=t/(10^6\mathrm{years})`$. Initially the expanding bubble is surrounded by a thin, dense shell of swept-up interstellar gas, but eventually the gas pressure in the bubble becomes comparable to that of the ambient medium, and the bubble stalls, while the shell disappears. The radius of the stalled bubble is $`R_\mathrm{s}=5.5L_{34}^{1/2}n^{-1/2}`$ pc. Since the star continues to supply the energy in the bubble, the radius of the bubble continues to grow, $`\propto t^{1/3}`$, until the radiative losses in the bubble interior become comparable to $`L`$. Then the bubble recedes to some stable radius $`R_\mathrm{r}`$, at which radiative losses exactly balance $`L`$ (D’Ercole 1992): $`R_\mathrm{r}=2.2L_{34}^{6/13}n^{-7/13}`$ pc. Before a massive star explodes as a supernova it becomes for a relatively short time, $`t_{\mathrm{RSG}}\approx 10^6`$ years, a red supergiant (RSG). The ionized gas outside the bubble rapidly cools off because the central star cannot keep it hot. At the same time the rarefied interior of the bubble remains hot as the radiative losses there are negligible on time-scales of $`t_{\mathrm{RSG}}`$. As a result, the bubble supersonically reexpands in the external cold medium and creates a new dense shell (D’Ercole 1992; cf. Shull et al. 1985). Two main factors could significantly affect the structure of the shell. The first one is the regular interstellar magnetic field (generally it is parallel to the Galactic plane). This factor leads to the matter redistribution over the shell and to its concentration near the magnetic equator: the column density at the equator is increased about ten times (Ferrière et al. 1991). The second factor is the large-scale density gradient. It is known (Landecker et al. 1989, Gosachinskij & Morozova 1999) that molecular clouds tend to be stretched along the Galactic plane, therefore one might expect that due to the interaction with a nearby cloud one of two sides of the shell (not necessarily the nearest to the Galactic plane) could be more massive than the opposite one. These two factors naturally define two symmetry axes (parallel and perpendicular to the Galactic plane) of the future SNR.
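For orientation, a quick numerical sketch (ours) of the three bubble scales quoted above, evaluated for fiducial parameters:

```python
# Evaluate R(t), R_s and R_r for L_34 = 1, t_6 = 1 and two ambient densities.
for n in (0.1, 1.0):
    R  = 11.0 * n**-0.2              # expanding MS bubble radius R(t), pc
    Rs = 5.5 * n**-0.5               # stalled-bubble radius R_s, pc
    Rr = 2.2 * n**(-7.0 / 13.0)      # radiative stable radius R_r, pc
    print(n, round(R, 1), round(Rs, 1), round(Rr, 1))
# n = 1 cm^-3 gives R ~ 11 pc, R_s ~ 5.5 pc, R_r ~ 2.2 pc;
# a lower ambient density enlarges all three scales.
```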
During the RSG stage a massive star loses most of its mass (e.g. a $`20M_{\odot }`$ star loses about two thirds of its mass) in the form of a slow, dense wind. This material expands in the interior of the reexpanded main-sequence (MS) bubble and occupies a compact region surrounded by a dense shell. The size of this region is determined by the counter-pressure of the external hot gas and is equal to about a few parsecs (e.g. Chevalier & Emmering 1989, D’Ercole 1992). Most probably this region is far from spherical symmetry (it is believed that the wind of a RSG is concentrated close to the stellar equatorial plane).
After the SN has exploded, the blast wave interacts with the dense RSG wind. This interaction continues for a few hundred years and determines the appearance of young SNRs (e.g. Cas A, see Borkowski et al. 1996). Then the blast wave propagates through the low-density interior of the MS bubble until it catches up with the dense shell. During this period (lasting about one thousand years) the blast wave is unobservable. The subsequent evolution of the blast wave (i.e. the SNR) depends on the mass of the shell. If the mass of the shell is smaller than about 50 times the mass of the SN ejecta, the blast wave overruns the shell and continues to expand adiabatically as a Sedov-Taylor shock wave. For more massive ones, the blast wave merges with the shell, and the reaccelerated shell evolves into a momentum-conserving stage (e.g. Franco et al. 1991). The impact of the blast wave with the shell causes the Rayleigh-Taylor and other dynamical instabilities. The inhomogeneous mass distribution over the shell affects the development of instabilities and results in the asymmetry of the resulting SNR. The more massive half of a shell created in the density-stratified medium is less sensitive to the impact of the SN blast wave, while the opposite (less massive) one becomes strongly deformed and sometimes even disrupted. The effect of the regular magnetic field is twofold: first, it leads to the bilateral appearance of SNRs (cf. Ferrière et al. 1991, Gaensler 1998), second, it results in the elongated form of remnants (because of the reduced inertia of shells at the magnetic poles).
3. Three examples
Let us consider the SNRs mentioned in Sect. 1.
3.1. Vela SNR
We suggest that the Vela SNR is a result of a type II SN explosion in a cavity created by the wind of a 15-20 $`M_{\odot }`$ star and propose that the general structure of the remnant is determined by the interaction of the SN blast wave with the massive shell created around the reexpanded MS bubble (see Sect. 2; for details see Gvaramadze (1999a)). The impact of the blast wave with the shell causes the development of Rayleigh-Taylor deformations of the shell (”blisters”), which appear as arclike and looplike filaments when our line of sight is tangential to their surfaces. The optical emission is expected to come from the outer layers of the shell, where the transmitted SN blast wave slows to become radiative, while the soft X-ray emission represents the inner layers of the shell heated by the blast wave up to X-ray temperatures. The origin of some radial protrusions (labelled by Aschenbach et al. (1995) as ”bullets” A,B,C, and D/D’) could be connected with the shell deformations, while the ”bullets” E and F could be interpreted as outflows of a hot gas escaping through breaks in the SNR’s shell (Gvaramadze 1998a, Bock & Gvaramadze 1999). As to the X-ray ”jet” discovered by Markwardt & Ögelman (1995), an analysis of the radio, optical, and X-ray data suggested that it is a dense filament in the Vela SNR’s shell (projected by chance near the line of sight to the Vela pulsar), and that its origin is connected with the nonlinear interaction of the shell deformations (see Gvaramadze 1999a). The nature of the radio source Vela X is considered in the paper by Gvaramadze (1998b), where it is shown that the Vela X is also a part of the shell of the Vela SNR, and not a plerion. In conclusion it should be noted that the slow, dense RSG wind lost by the progenitor star and subsequently reheated and reaccelerated by the passage of the SN blast wave could be responsible for the origin of the hard X-ray nebula discovered by Willmore et al. (1992) (Willmore et al. mentioned that their data do not allow one to discern the thermal and nonthermal forms of the spectrum of this nebula).
3.2. SNR MSH 15-52
The SNR MSH 15-52, associated with the pulsar PSR B1509-58, is usually classified as a composite SNR. This is because it consists of an extended nonthermal radio shell (at the distance of $`\approx 5`$ kpc (e.g. Gaensler et al. 1999) the diameter of the remnant is $`\approx 40`$ pc) and a central elongated X-ray nebula ($`\approx 7`$ pc $`\times `$ $`12`$ pc) which is thought to be a synchrotron pulsar-powered nebula (a plerion). The spin-down age of the pulsar is $`\approx 1700`$ years (i.e. nearly the same as that of the Crab pulsar), while the size and general appearance of the MSH 15-52 suggest that this system should be much older (a few times $`10^4`$ years). To reconcile the ages of the pulsar and remnant, Seward et al. (1983) considered two possibilities: 1) MSH 15-52 is a young SNR, and 2) PSR B1509-58 is an old pulsar. The first one implies (in the framework of the Sedov-Taylor model) that the SN explosion was very energetic and occurred in a tenuous medium (see also Bhattacharya 1990). This point of view is generally accepted (e.g. Gaensler et al. 1999). The second possibility was reexamined by Blandford & Romani (1988), who suggested that the pulsar spin-down torque grew within the last $`\sim 10^3`$ years (due to the growth of the pulsar’s magnetic field) and therefore the true age of the pulsar could be as large as follows from the age estimates for the SNR. We propose an alternative explanation (Gvaramadze 1999b,c) and suggest that the high spin-down rate of the pulsar is inherent only for a relatively short period of the present spin history and that the enhanced braking torque is connected with the interaction of the pulsar’s magnetosphere with a dense clump of circumstellar matter (whose origin is connected with the late evolutionary stages of the progenitor star). This suggestion implies that the central X-ray nebula could be interpreted as dense material lost by the progenitor star during the RSG stage and reheated to high temperatures by the SN blast wave. The existence of a hot plasma (with a mass of about a few $`M_{\odot }`$) around the pulsar follows from the IR observations of the MSH 15-52 by Arendt (1991), who discovered an IR source near the position of the pulsar. We believe that the thermal emission of this plasma is contaminated by the hard nonthermal emission from a (much smaller) compact nebula powered by the pulsar (similar to the $`1^{\prime }`$ ($`4\times 10^{17}`$ cm) nebula discovered by Harnden et al. (1985; see also de Jager et al. 1996) around the Vela pulsar), and that this is the reason why the spectrum of the whole central nebula is usually described by a nonthermal model (e.g. Greiveldinger et al. 1995, Tamura et al. 1996).
The shell of MSH 15-52 resembles that of the Vela SNR (cf. Fig.8 of Gaensler 1998 and Fig.1 of Gvaramadze 1999a). In both remnants the halves facing towards the Galactic plane are brighter and more regular than the opposite ones. We suggest that MSH 15-52 is a result of interaction of the SN blast wave with the wind-driven shell created in the inhomogeneous interstellar medium: the northwest half of the shell interacts with a region of enhanced density (which results in the origin of bright radio, optical and X-ray emission), and therefore is less affected (distorted) by the impact of the SN blast wave than the southeast half. The bilateral and elongated appearance of the shell could be connected with the effect of the large-scale interstellar magnetic field (cf. Gaensler 1998, Gaensler et al. 1999).
3.3. SNR G 309.2-00.6
We suggest that the ”ears” of this SNR were blown up in the polar regions of the (former) wind-driven shell created in the interstellar medium with regular magnetic field (oriented nearly parallel to the Galactic plane). The origin of the ”jet” and other filamentary structures visible in the remnant (see Fig.2 of Gaensler et al. 1998) we connect with projection effects in the Rayleigh-Taylor unstable shell. We suggest also that the SN explosion site<sup>1</sup><sup>1</sup>1Note that it could be shifted from the geometrical centre of the SNR due to the proper motion of the SN progenitor star. should be marked by a hard X-ray nebula and predict that the angular size of the nebula (for the distance to the remnant of 5-14 kpc (Gaensler et al. 1998)) is about $`1.5^{^{}}2^{^{}}`$.
References
Arendt R.G.: 1991, AJ, 101, 2160.
Aschenbach B., Egger R., Trümper J.: 1995, Nat, 373, 587.
Bhattacharya D.: 1990, JA&A, 11, 125.
Blandford R.D., Romani R.W.: 1988, MNRAS, 234, 57p.
Bocchino F., Maggio A., Sciortino S.: 1997, ApJ, 481, 872.
Bock D.C.-J., Gvaramadze V.V.: 1999, in preparation.
Borkowski K.J., Szymkowiak A.E., Blondin J.M., Sarazin C.L.: 1996, ApJ, 466, 866.
Brazier K.T.S., Becker W.: 1997, MNRAS, 284, 335.
Caswell J.L., Milne D.K., Wellington K.J.: 1981, MNRAS, 195, 89.
Chevalier R.A., Emmering R.T.: 1989, ApJ, 342, L75.
D’Ercole A.: 1992, MNRAS, 255, 572.
de Jager O.C., Harding A.K., Strickman M.S.: 1996, ApJ, 460, 729.
Ferrière K.M., Mac Low M.-M., Zweibel E.G.: 1991, ApJ, 375, 239.
Franco J., Tenorio-Tagle G., Bodenheimer P., Różyczka M.: 1991, PASP, 103, 803.
Gaensler B.M.: 1998, ApJ, 493, 781.
Gaensler B.M., Green A.J., Manchester R.N.: 1998, MNRAS, 299, 812.
Gaensler B.M., Brazier K.T.S., Manchester R.N., Johnston S., Green A.J.: 1999, MNRAS, 305, 724.
Greiveldinger C., Caucino S., Massaglia S., Ögelman H., Trussoni E.: 1995, ApJ, 454, 855.
Gvaramadze V.V.: 1998a, in: The Local Bubble and Beyond, eds. D.Breitschwerdt, M.Freyberg, J.Trümper, Springer-Verlag, Heidelberg, p. 141.
Gvaramadze V.V.: 1998b, Astronomy Letters, 24, 178.
Gvaramadze V.V.: 1999a, A&A, 352, 712.
Gvaramadze V.V.: 1999b, in: Proceedings of the All-Russian Conference ”Astrophysics on the Boundary of Centuries” (17-22 May 1999, Pushchino, Russia), in press.
Gvaramadze V.V.: 1999c, submitted to A&A.
Gosachinskij I.V., Morozova V.V.: 1999, Astronomy Reports, in press.
Harnden F.R., Grant P.D., Seward F.D., Kahn S.M.: 1985, ApJ, 299, 828.
Kahn S.M., Gorenstein P., Harnden F.R., Seward F.D.: 1985, ApJ, 299, 821.
Landecker T.L., Pineault S., Routledge D., Vaneldik J.F.: 1989, MNRAS, 237, 277.
McKee C.F., Van Buren D., Lazareff R.: 1984, ApJ, 278, L115.
Manchester R.N.: 1987, A&A, 171, 205.
Markwardt C.B., Ögelman H.: 1995, Nat, 375, 40.
Seward F.D., Harnden Jr., F.R., Murdin P., Clark D.H.: 1983, ApJ, 267, 698.
Seward F.D., Harnden Jr., F.R., Szymkowiak A., Swank J.: 1984, ApJ, 281, 650.
Shull P., Dyson J.E., Kahn F.D., West K.A.: 1985, MNRAS, 212, 799.
Tamura K., Kawai N., Yoshida A., Brinkmann W.: 1996, PASP, 48, L33.
van den Bergh S., Tammann G.A.: 1991, ARA&A, 29, 363.
Weaver R., McCray R., Castor J., Shapiro P., Moore R.: 1977, ApJ, 218, 377.
Weiler K.W., Panagia N.: 1980, A&A, 90, 269.
Willmore A.P., Eyles C.J., Skinner G.K., Watt M.P.: 1992, MNRAS, 254, 139. |